Search Results

Search found 41582 results on 1664 pages for 'fault tolerance'.


  • Setting up a fileserver, some questions?

    - by Tanax
    Recently I've become very interested in setting up a fileserver, mostly for home usage, but also because I live in 2 places and need to be able to access my files from both homes. I have already done some research into this but I am unclear about some things. My requirements are the following:
    - Needs to work on both Mac and PC (only using Windows at the moment on the PC, but it would be good if it supports more OSes to make it future-proof in case I need Linux or something else)
    - Need to be able to set up a folder/drive/network space to act as a link to a certain folder on the fileserver
    - All files should only be stored on the fileserver, i.e. no "shared" folders like in Dropbox where files are stored on the client computer
    - Would prefer it if folders are password protected, or that I can somehow specify which users can access the fileserver's shares
    - The fileserver's OS will most likely have to be Windows due to other factors outside of being just a fileserver
    I've already kinda figured out that I will need to set up a VPN so that I can access my fileserver from outside the local network. I'm probably going to use OpenVPN. Question 1: How would I go about setting up a VPN server so that I can connect to my local network at the fileserver's location? I know that since I'm on a dynamic IP I will have to use some sort of dynamic DNS service - I've already checked into this and I'm fairly sure I know how to fix that. I also know that I will have to forward the port OpenVPN uses in my router. Question 2: How would I actually share the folders on the fileserver so that I can access them on my other computers? I've researched Samba, but I'm uncertain whether it needs to run on a Linux OS. I know that the clients connecting to it can be Windows, for example, but can the Samba "server" be run on Windows? Also, it appears that Samba shares a folder, meaning it works like Dropbox - I don't want that. So how would I share a folder in that case to make it work like I want it to? Sorry for the incredibly long question, I tried to structure it the best I could for easier reading. Thanks in advance!
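    For reference, a minimal OpenVPN server config for the kind of remote-access setup described above might look like the sketch below. All paths, the tunnel subnet and the pushed LAN route are illustrative, not taken from the question:

        # /etc/openvpn/server.conf - minimal routed-VPN sketch
        port 1194                               # forward this UDP port on the router
        proto udp
        dev tun
        ca ca.crt                               # certificates generated with easy-rsa
        cert server.crt
        key server.key
        dh dh2048.pem
        server 10.8.0.0 255.255.255.0           # VPN tunnel subnet (example)
        push "route 192.168.1.0 255.255.255.0"  # expose the fileserver's LAN (example)
        keepalive 10 120
        persist-key
        persist-tun

    Worth noting for Question 2: Windows serves SMB shares natively (File and Printer Sharing), so a separate Samba server is only needed if the fileserver runs Linux.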

    Read the article

  • Is it possible to run a VM inside a VM (e.g., KVM on VMware)?

    - by lorin
    I'd like to do some development on Eucalyptus, an open source project which provides an Amazon EC2 interface for launching virtual machine instances on a collection of privately managed nodes. I'd really like to be able to do some of the development on my desktop, rather than having to deploy Eucalyptus on our shared local cluster each time I make a change to the source code (especially since there is a group of us sharing that test cluster). Unfortunately, my desktop machine is a Mac, which won't run Eucalyptus natively. I do have VMware Fusion, and it would be really nice if I could do my Eucalyptus testing inside a VMware instance. The problem is that to test out Eucalyptus, it will have to launch (KVM or Xen) VM instances. I've got no idea if it's possible to actually launch a KVM or Xen instance inside a VMware instance.
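    As a quick sanity check, one can look for hardware virtualization support from inside the guest; KVM refuses to run in accelerated mode without it. A sketch using standard Linux commands (nothing Eucalyptus-specific):

        # Count vmx (Intel) / svm (AMD) CPU flags visible to the guest; 0 means no
        # hardware acceleration is exposed, so KVM would fall back to QEMU emulation.
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # Try loading the KVM module and check for the device node
        sudo modprobe kvm_intel   # or kvm_amd
        ls -l /dev/kvm

    Whether VMware Fusion exposes those flags to its guests depends on the version and its VT-x passthrough settings.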

    Read the article

  • My .tpl file won't update!

    - by Kyle Sevenoaks
    Hi, I am running the site at www.euroworker.no; it's a Linux server and the site has a backend editor. It's a Smarty/PHP site, and when I try to update a few of the .tpl files (two or three) they don't update. I have tried uploading them through FTP and that doesn't work either. I have no knowledge of how servers work or anything, please help? It runs on the LiveCart system. Thanks!
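    Background that may help here: Smarty compiles each .tpl into a PHP file in a compile directory and, depending on configuration, may not re-check the source template for changes. Purely as a guess at this setup, clearing that cache might look like the following (the path is hypothetical; LiveCart's actual compile directory may differ):

        # Remove compiled templates so Smarty regenerates them from the .tpl sources
        rm -rf /path/to/livecart/cache/templates_c/*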

    Read the article

  • Nexenta/OpenSolaris filer kernel panic/crash

    - by ewwhite
    I've an x4540 Sun storage server running NexentaStor Enterprise. It's serving NFS over 10GbE CX4 for several VMware vSphere hosts. There are 30 virtual machines running. For the past few weeks, I've had random crashes spaced 10-14 days apart. This system used to run OpenSolaris and was stable in that arrangement. The crashes trigger the automated system recovery feature on the hardware, forcing a hard system reset. Here's the output from the mdb debugger:

        panic[cpu5]/thread=ffffff003fefbc60: Deadlock: cycle in blocking chain
        ffffff003fefb570 genunix:turnstile_block+795 ()
        ffffff003fefb5d0 unix:mutex_vector_enter+261 ()
        ffffff003fefb630 zfs:dbuf_find+5d ()
        ffffff003fefb6c0 zfs:dbuf_hold_impl+59 ()
        ffffff003fefb700 zfs:dbuf_hold+2e ()
        ffffff003fefb780 zfs:dmu_buf_hold+8e ()
        ffffff003fefb820 zfs:zap_lockdir+6d ()
        ffffff003fefb8b0 zfs:zap_update+5b ()
        ffffff003fefb930 zfs:zap_increment+9b ()
        ffffff003fefb9b0 zfs:zap_increment_int+68 ()
        ffffff003fefba10 zfs:do_userquota_update+8a ()
        ffffff003fefba70 zfs:dmu_objset_do_userquota_updates+de ()
        ffffff003fefbaf0 zfs:dsl_pool_sync+112 ()
        ffffff003fefbba0 zfs:spa_sync+37b ()
        ffffff003fefbc40 zfs:txg_sync_thread+247 ()
        ffffff003fefbc50 unix:thread_start+8 ()

    Any ideas what this means?
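    For readers hitting something similar: the stack shows the ZFS transaction-group sync thread deadlocking while updating user-quota ZAP objects. If a crash dump was saved, it can be examined offline with mdb; a sketch (the dump numbering is illustrative):

        cd /var/crash/`hostname`
        mdb unix.0 vmcore.0        # open the saved panic dump; the rest are
        > ::status                 #   dcmds typed at the mdb prompt
        > ::stack                  # panicking thread's stack
        > ::threadlist -v          # look for the other thread in the deadlock cycle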

    Read the article

  • Error with Apache, Nagios and Snorby integration

    - by user1428366
    I'm trying to use Apache to serve two different websites (Nagios and Snorby). The problem is that when I try to see the "/snorby" website, Apache sends me the "It works" page. If I try to access "/nagios" it works perfectly. Snorby is running under Ruby (Passenger). These are the config files:

        <VirtualHost *:80>
            ScriptAlias /nagios/cgi-bin "/srv/nagios/sbin"
            <Directory "/srv/nagios/sbin">
                # SSLRequireSSL
                Options ExecCGI
                AllowOverride None
                Order allow,deny
                Allow from all
                # Order deny,allow
                # Deny from all
                # Allow from 127.0.0.1
                AuthName "Nagios Access"
                AuthType Basic
                AuthUserFile /srv/nagios/etc/htpasswd.users
                Require valid-user
            </Directory>
            Alias /nagios "/srv/nagios/share"
            <Directory "/srv/nagios/share">
                # SSLRequireSSL
                Options None
                AllowOverride None
                Order allow,deny
                Allow from all
                # Order deny,allow
                # Deny from all
                # Allow from 127.0.0.1
                AuthName "Nagios Access"
                AuthType Basic
                AuthUserFile /srv/nagios/etc/htpasswd.users
                Require valid-user
            </Directory>
        </VirtualHost>

    And the other one is this:

        <VirtualHost *:80>
            #Alias /snorby "/var/www/snorby-2.6.0/public"
            # !!! Be sure to point DocumentRoot to 'public'!
            DocumentRoot /var/www/snorby-2.6.0/public
            <Directory /var/www/snorby-2.6.0/public>
                # This relaxes Apache security settings.
                AllowOverride all
                # MultiViews must be turned off.
                Options -MultiViews
            </Directory>
        </VirtualHost>

    If I disable the Nagios webpage, the Snorby webpage works. I think the problem is Snorby, because when I try to access the IP address with the Nagios page disabled, the web application redirects me to http://myserverip/dashboard. Can anyone help me please? Thank you so much! Regards
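    One observation on the configs above: both declare a <VirtualHost *:80> with no ServerName, so Apache's name-based matching always selects whichever vhost loads first, and the Snorby one never gets picked. A common way out (the hostnames here are purely illustrative) is to give each vhost its own ServerName:

        <VirtualHost *:80>
            ServerName nagios.example.com
            # ... Nagios aliases and directories as above ...
        </VirtualHost>

        <VirtualHost *:80>
            ServerName snorby.example.com
            DocumentRoot /var/www/snorby-2.6.0/public
            # ... Passenger picks up the Rails app from here ...
        </VirtualHost>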

    Read the article

  • Can connect to Samba server but cannot access shares?

    - by jlego
    I have set up a stand-alone box running Fedora 16 to use as a file-sharing and web-development server. It needs to share files with a PC running Windows 7 and a Mac running OS X Snow Leopard. I've set up Samba using the Samba configuration GUI tool, added users to Fedora and connected them as Samba users (which are the same as the Windows and Mac usernames and passwords). The workgroup name is the same as the Windows workgroup. Authentication is set to User. I've allowed Samba and the Samba client through the firewall and set the ethernet to a trusted port in the firewall. Both the Windows and Mac machines can connect to the server and view the shares; however, when trying to access the shares, Windows throws error 0x80070035 "Windows cannot access \\SERVERNAME\ShareName." The Windows user is not prompted for a username or password when accessing the server (found under "Network Places"). This also happens when connecting with the IP rather than the server name. The Mac can also connect to the server and see the shares, but when choosing a share it gives the error "The original item for ShareName cannot be found." When connecting via IP, the Mac user is prompted for username and password, which when authenticated gives a list of shares; however, when choosing a share to connect to, the error is displayed and the user cannot access the share. Since both machines are acting similarly when trying to access the shares, I assume it is an issue with how Samba is configured. smb.conf:

        [global]
            workgroup = workgroup
            server string = Server
            log file = /var/log/samba/log.%m
            max log size = 50
            security = user
            load printers = yes
            cups options = raw
            printcap name = lpstat
            printing = cups

        [homes]
            comment = Home Directories
            browseable = no
            writable = yes

        [printers]
            comment = All Printers
            path = /var/spool/samba
            browseable = yes
            printable = yes

        [FileServ]
            comment = FileShare
            path = /media/FileServ
            read only = no
            browseable = yes
            valid users = user1, user2

        [webdev]
            comment = Web development
            path = /var/www/html/webdev
            read only = no
            browseable = yes
            valid users = user1

    How do I get Samba sharing working?
    UPDATE: Before this box I had another box with the same version of Fedora installed (16) and Samba working for these same computers. I started up the old machine and copied the smb.conf file from the old machine to the new one (editing the share definitions for the new shares, of course) and I still get the same errors on both client machines. The only difference in environment is the hardware and the router. On the old machine the router received a dynamic public IP and assigned dynamic private IPs to each device on the network, while the new machine is connected to a router that has a static public IP (still dynamic internal IPs though). Could either one of these be affecting Samba?
    UPDATE 2: As the directory I am trying to share is actually an entire internal disk, I have tried two things: 1) changing the owner of the mounted disk from root to my user (which is the same username as on the Windows machine); 2) making a share that only included one of the folders on the disk instead of the entire disk, with my user again as the owner. Both tests failed, giving me the same errors regarding the network address.
    UPDATE 3: Not sure exactly what I did, but now whenever I try to connect to the share on the Windows 7 client I am prompted for my username and password. When I enter the correct credentials I get an access denied message. However, I did notice that under the login box "domain: WINDOWS-PC-NAME" is listed. I believe this could very well be the problem. Any suggestions?
    UPDATE 4: So I've completely reinstalled Fedora and Samba now. I've created a share on the first hard drive (the one Fedora is installed on) and I can access that fine from Windows. However, when I try to share any data on the second disk, I am receiving the same error. This, I believe, is the problem. I think I need to change some things in fstab or fdisk or something.
    UPDATE 5: So in fstab I mapped the drive to automount in a folder, which works correctly. I also added the samba_share_t SELinux label to the mountpoint directory, which now allows me to access the shares on the Windows machine; however, I cannot see any of the files in the directory on the Windows machine. (They are there; I can see them in the Fedora file browser locally.)
    UPDATE 6: Figured it out. See answer below.
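    Since the resolution hinged on SELinux labeling, here is roughly what that fix looks like on Fedora (the mountpoint follows the [FileServ] share above; the persistent fcontext rule is the part a quick chcon test omits, and without recursive labeling the files inside stay invisible to smbd even when the share itself opens):

        # Quick test: relabel the mounted disk, recursively, so smbd may serve it
        sudo chcon -R -t samba_share_t /media/FileServ

        # Persistent version: record the rule, then apply it
        sudo semanage fcontext -a -t samba_share_t "/media/FileServ(/.*)?"
        sudo restorecon -R /media/FileServ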

    Read the article

  • Draytek Vigor 2820 static IPs

    - by dannymcc
    I have a Draytek Vigor 2820 router which is connected to our ADSL provider (British Telecom, BT). We currently have one static IP address which is accessible from anywhere outside of our network and points at a simple web server on port 80. We have just been given 5 more static IP addresses, which I would like to point at five servers that have static IPs. As an example:

        Current static IP - 80.123.123.123
        New static IPs    - 100.100.100.100-105
        Server IPs        - 192.168.1.129-133

    I have confused myself completely between NAT addresses, static routes and WAN IP aliases. If anyone can give me a clear idea of what I need to do, it would be greatly appreciated.

    Read the article

  • Postfix installation error on Ubuntu

    - by kgpdeveloper
    How do I fix this error on Ubuntu 10.04?

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        postfix is already the newest version.
        The following packages were automatically installed and are no longer required:
          libaprutil1-dbd-sqlite3 libcap2 apache2.2-bin libapr1 libaprutil1-ldap
          libaprutil1 php5-common
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
        1 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Setting up postfix (2.7.0-1) ...
        Postfix configuration was not changed. If you need to make changes, edit
        /etc/postfix/main.cf (and others) as needed. To view Postfix configuration
        values, see postconf(1).
        After modifying main.cf, be sure to run '/etc/init.d/postfix reload'.
        Running newaliases
        newaliases: warning: valid_hostname: numeric hostname: 202002
        newaliases: fatal: file /etc/postfix/main.cf: parameter myhostname: bad parameter value: 202002
        dpkg: error processing postfix (--configure):
          subprocess installed post-installation script returned error exit status 75
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
          postfix
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Even if I reboot, the same error shows up. Thanks for the help.
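    The log itself points at the cause: myhostname in /etc/postfix/main.cf is set to the purely numeric value 202002, which Postfix rejects. A plausible repair (the hostname is an example):

        # Give Postfix a syntactically valid hostname, then let dpkg finish configuring
        sudo postconf -e 'myhostname = mail.example.com'
        sudo dpkg --configure -a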

    Read the article

  • Running a small IPTV station

    - by nixterrimus
    I'm looking to run an IPTV station for my dorm. I know I can serve multicast, so that's not a problem. The station will serve out podcasts and other CC-licensed content. The target endpoint is XBMC, a media center. So far I know that I need to serve an RTP stream over UDP carrying MPEG-4 AVC main or high profile video with AAC (or AC3?) audio. I've had some luck using VLC with VLM to stream, but it seems limited. What are my other options? Everything has to run on Linux - hopefully open source. How can I use playlists and not live streams? What are my software options?
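    For the VLM route specifically, a playlist can be wired up as a looping broadcast element; a minimal vlm.conf sketch (the multicast address, port and file names are illustrative):

        # vlm.conf - loop a playlist out as an RTP/TS multicast stream
        new dormtv broadcast enabled loop
        setup dormtv input /media/podcasts/ep01.mp4
        setup dormtv input /media/podcasts/ep02.mp4
        setup dormtv output #rtp{mux=ts,dst=239.255.1.1,port=5004}
        control dormtv play

    Started with something along the lines of: vlc -I telnet --vlm-conf vlm.conf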

    Read the article

  • Performance issues concurrently running MySQL and MS SQL Server

    - by pacifika
    We're considering installing MySQL on the same database server that has been running MS SQL Server. From my research there are no technical issues with running both concurrently, but I am worried that performance will be affected. Is SQL Server, for example, set up by default to use all available memory? What should I look out for? Thanks
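    On the memory point: SQL Server's default "max server memory" is effectively unlimited, so its buffer pool will grow toward all available RAM; capping it is the usual first step when it must coexist with another database. A sketch via sqlcmd (the 4096 MB cap is an arbitrary example, sized to leave room for MySQL and the OS):

        sqlcmd -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        sqlcmd -Q "EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;"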

    Read the article

  • Filezilla/Puttygen doesn't recognize private key file

    - by devzoner
    I have generated a key for an Ubuntu virtual machine running on Azure Cloud Services (http://www.windowsazure.com/en-us/manage/linux/how-to-guides/ssh-into-linux/):

        openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myPrivateKey.key -out myCert.pem

    When loading the private key into FileZilla, it asks me to convert the format; however, converting the key fails. The same happens with puttygen from the Linux console, using this:

        puttygen myPrivateKey.key -o myKey.ppk

    In both cases I have the following error:

        puttygen: error loading `myPrivateKey.key': unrecognised key type

    By the way, this key doesn't have a passphrase. I found an old thread about it, but I'm using the 0.6.3 version, which is newer than what this thread recommends: http://fixunix.com/ssh/541874-puttygen-unable-import-openssh-key.html. I've managed to solve this issue by using another GUI client, Fugu for Mac, but one of my co-workers uses Windows and I still have to figure this out. Since FileZilla is the de facto FTP client, I thought it would be easier to solve it there. Thanks
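    One thing worth checking: with recent OpenSSL versions, that openssl req invocation emits the key in PKCS#8 PEM format ("-----BEGIN PRIVATE KEY-----"), which puttygen builds of that era don't recognize. Converting to the traditional RSA PEM form first sometimes gets puttygen to accept it; a sketch:

        # Rewrite the key in traditional "BEGIN RSA PRIVATE KEY" PEM format
        openssl rsa -in myPrivateKey.key -out myKey-rsa.pem

        # Then retry the PuTTY conversion
        puttygen myKey-rsa.pem -o myKey.ppk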

    Read the article

  • Django Dying on Shared Hosting Environment (Too Many MySQL Connections)

    - by Tom
    I've had a Django site up and running on HostGator (client requirement), following these instructions, for a few weeks now. I had seen two error emails about pages dying with (1040: Too many MySQL connections) but had never been able to recreate the problem. As of today, the site is completely unresponsive and all pages, even the static files, are dying with that error. Two questions:
    1. What can I do to fix this (other than caching more stuff)?
    2. Why would static files be dying like that? I can request them directly without a problem, so how are they getting run through Django? The shared hosting setup doesn't allow for a <Location> block, but there's a flag in the rewrite rule that says only requests for files that don't exist in the filesystem should be processed. All of my static files exist on the system, though they are symbolically linked files, if that matters.
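    For the first question, a reasonable starting point is comparing the server's connection cap with what's actually open; on shared MySQL hosts the cap (or the per-user max_user_connections) is often quite low. A diagnostic sketch:

        mysql -u youruser -p -e "SHOW VARIABLES LIKE 'max%connections'; SHOW STATUS LIKE 'Threads_connected'; SHOW FULL PROCESSLIST;"

    If many connections sit in the Sleep state, something (here, presumably the Django processes) is holding them open rather than closing them after each request.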

    Read the article

  • Bind9 as a caching resolver fails with mismatch ID on localhost but not external IP

    - by argibbs
    I'm running Ubuntu 12.04 LTS on a machine on my private network. I have bind9 installed (v9.8.1-P1) via aptitude, so it appears to have put all the bits in the right places and the service starts automatically. I plan on adding some zones later, but first I'm just trying to get it working as a caching resolver. I installed bind, configured it, and started using it. Initially I thought it was working ok, but then I found some sites weren't being resolved. I've pinned it down to being linked to the size of the result and bind failing over to TCP mode. So: I'm trying to find out why bind is failing when I query for domain info and the result is 512 bytes (causing a truncation and retry on TCP). Specifically it fails with ID mismatches if I point dig at localhost, but works when I query the machine's own IP (192.168.0.2). This appears to be backwards to the problem that most people have when using bind (fails on external IP, works on localhost).

    If I do dig @localhost google.com (which has a response of <512 bytes) then it works; I get no warnings, and plenty of output:

        $ dig @localhost google.com
        ; <<>> DiG 9.8.1-P1 <<>> @localhost google.com
        [snip lots of output]
        ;; Query time: 39 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Thu Oct 17 23:08:34 2013
        ;; MSG SIZE rcvd: 495

    If I do dig @localhost play.google.com (which has a larger response) then I get back something like:

        $ dig @localhost play.google.com
        ;; Truncated, retrying in TCP mode.
        ;; ERROR: ID mismatch: expected ID 3696, got 27130

    This seems to be standard, documented behaviour - when the UDP response is large (here 'large' == 512 bytes) it falls back to TCP. The ID mismatch is not expected though. If I do dig @192.168.0.2 play.google.com then I still get the warning about using TCP mode, but it otherwise works:

        $ dig @192.168.0.2 play.google.com
        ;; Truncated, retrying in TCP mode.
        ; <<>> DiG 9.8.1-P1 <<>> @192.168.0.2 play.google.com
        [snip most of the output]
        ;; Query time: 5 msec
        ;; SERVER: 192.168.0.2#53(192.168.0.2)
        ;; WHEN: Thu Oct 17 23:05:55 2013
        ;; MSG SIZE rcvd: 521

    At the moment I've not set up any zones in my local instance, so it's just acting as a caching resolver. My options config is pretty much unchanged from standard; I've got the following set:

        options {
            directory "/var/cache/bind";
            allow-query { 192.168/16; 127.0.0.1; };
            forwarders { 8.8.8.8; 8.8.4.4; };
            dnssec-validation auto;
            edns-udp-size 4096;
            allow-transfer { any; };
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    And my /etc/resolv.conf is just:

        nameserver 127.0.0.1
        search .local

    The problem definitely seems linked to the failover to TCP mode: if I do dig +bufsize=4096 @localhost play.google.com then it works; no warning about failover to TCP, no ID mismatch, and a standard-looking result. To be honest, if there was a way to force bind to use a much larger UDP buffer, that'd probably be good enough for me, but all I've been able to find mention of is max-udp-size 4096, and that doesn't change the behaviour in any way. I've also tried setting edns-udp-size 512 in case the problem is some weird EDNS issue with my router (which seems unlikely since the +bufsize=4096 flag works fine). I've also tried dig +trace @localhost play.google.com; this works. No truncation/TCP warning, and a full result. I've also tried changing the servers used in the forwarder (e.g. to OpenDNS), but that makes no difference.
    There's one last data point: if I repetitively do dig @localhost play.google.com I don't always get an ID mismatch, but sometimes a REFUSED error. I'm much more likely to get a REFUSED error if I dig the non-localhost IP (192.168.0.2) first:

        $ dig @localhost play.google.com
        ;; Truncated, retrying in TCP mode.
        ; <<>> DiG 9.8.1-P1 <<>> @localhost play.google.com
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 35104
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;play.google.com.               IN      A
        ;; Query time: 4 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Thu Oct 17 23:20:13 2013
        ;; MSG SIZE rcvd: 33

    Any insights or things to try would be much appreciated.
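    One way to narrow this down is to take dig's automatic retry out of the picture by testing the TCP path directly, and to check what actually owns port 53 on localhost; the ID mismatch/REFUSED pattern suggests the TCP connection is being answered by something other than the bind instance that took the UDP query. A diagnostic sketch:

        # Force the query over TCP from the start
        dig +tcp @localhost play.google.com
        dig +tcp @192.168.0.2 play.google.com

        # See which process(es) own UDP and TCP port 53 on each address
        sudo netstat -plnu | grep :53
        sudo netstat -plnt | grep :53

    If TCP 53 on 127.0.0.1 turns out to belong to a different process (the dnsmasq instance NetworkManager runs on Ubuntu 12.04 is a common stowaway), that would explain both symptoms.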

    Read the article

  • VMware Server end of life, where to go now?

    - by matnagel
    We have some virtual machines on VMware Server 2.x running on 64-bit hardware and are quite happy with it. As VMware Server will no longer be offered, we are thinking of migrating to ESXi, which seems to be free. We will have to install the specialized network cards, but that's a minor problem. But once left alone with a quietly discontinued product, there is some resistance to VMware. VirtualBox seems to work: http://blogs.oracle.com/virtualization/2010/06/migrating_from_vmware_to_virtu.html What other free (of licensing cost) options are there? We have Windows Server 2003 32-bit VMs and also Linux 32- and 64-bit VMs to migrate. So Xen, which does not run Microsoft OSes, does not seem to be an option.
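    On the VirtualBox path, the disk images usually carry over directly, since VirtualBox can either attach VMDK files as-is or convert them; a sketch of the conversion step, using the VBoxManage syntax of that era (the file names are illustrative):

        # Either attach the existing VMDK directly to a new VirtualBox VM, or
        # clone it into VirtualBox's native VDI format first:
        VBoxManage clonehd server2003.vmdk server2003.vdi --format VDI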

    Read the article

  • Assigning multiple IPv6 addresses on a Server

    - by andrewk
    Let me explain my intent. My host provides hundreds of IPv6 addresses free, but charges for an IPv4 address. I have several sites under one server and I was wondering if I can give each site/domain its own IPv6 address. Is that even possible? If so, how? I've read quite a bit about IPv6 but I do not understand it as clearly as I'd like. My main goal is for each domain/site to have its own unique IP, so someone can't do a reverse IP lookup and see what sites I have on that server. Thanks in advance for the patience.
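    Mechanically this is possible: extra IPv6 addresses can be added to one interface, and the web server then binds each site to its own address. A sketch using iproute2 plus Apache-style vhosts (addresses are from the 2001:db8:: documentation prefix; substitute those allocated by the host):

        # Add additional IPv6 addresses to the interface
        sudo ip -6 addr add 2001:db8:abcd::10/64 dev eth0
        sudo ip -6 addr add 2001:db8:abcd::11/64 dev eth0

        # Then bind one vhost per address, e.g. in Apache:
        #   <VirtualHost [2001:db8:abcd::10]:80>  ServerName site-one.example  ...
        #   <VirtualHost [2001:db8:abcd::11]:80>  ServerName site-two.example  ...

    Each domain's AAAA record would then point at its own address.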

    Read the article

  • Folder Redirection - Explorer requires manual refresh

    - by Pete
    Hello, I am having an issue where, when a user's My Documents folder is redirected to a DFS share, Windows Explorer requires a manual refresh after creating a new folder, file, etc. (Interestingly, not when making a new briefcase.) I have tried a number of MS knowledge base articles, a hotfix and a registry change, all with no success. (http://support.microsoft.com/?kbid=823291 ; http://support.microsoft.com/kb/873392) The problem only occurs when going through the My Documents icon. If I map a home drive for the user to the exact same location (i.e. H:\DFS\user\documents), open that drive and make new folders, then there is no problem. Mapping My Documents to H:\ also resolves the issue; however, as we need folder sync and people logging on off-site with cached profiles, this is not a workable solution (as H: will not map and there will be no access to their docs). Has anyone managed to figure out a fix for this? Thanks, Pete.

    Read the article

  • Authenticating Apache HTTPd against multiple LDAP servers with expired accounts

    - by Brian Bassett
    We're using mod_authnz_ldap and mod_authn_alias in Apache 2.2.9 (as shipped in Debian 5.0, 2.2.9-10+lenny7) to authenticate against multiple Active Directory domains for hosting a Subversion repository. Our current configuration is:

        # Turn up logging
        LogLevel debug

        # Define authentication providers
        <AuthnProviderAlias ldap alpha>
            AuthLDAPBindDN "CN=Subversion,OU=Service Accounts,O=Alpha"
            AuthLDAPBindPassword [[REDACTED]]
            AuthLDAPURL ldap://dc01.alpha:3268/?sAMAccountName?sub?
        </AuthnProviderAlias>

        <AuthnProviderAlias ldap beta>
            AuthLDAPBindDN "CN=LDAPAuth,OU=Service Accounts,O=Beta"
            AuthLDAPBindPassword [[REDACTED]]
            AuthLDAPURL ldap://ldap.beta:3268/?sAMAccountName?sub?
        </AuthnProviderAlias>

        # Subversion Repository
        <Location /svn>
            DAV svn
            SVNPath /opt/svn/repo
            AuthName "Subversion"
            AuthType Basic
            AuthBasicProvider alpha beta
            AuthzLDAPAuthoritative off
            AuthzSVNAccessFile /opt/svn/authz
            require valid-user
        </Location>

    We're encountering issues with users that have accounts in both Alpha and Beta, especially when their accounts in Alpha are expired (but still present; company policy is that the accounts live on for a minimum of 1 year). For example, when the user x (who has an expired account in Alpha and a valid account in Beta) authenticates, the Apache error log reports the following:

        [Tue May 11 13:42:07 2010] [debug] mod_authnz_ldap.c(377): [client 10.1.1.104] [14817] auth_ldap authenticate: using URL ldap://dc01.alpha:3268/?sAMAccountName?sub?
        [Tue May 11 13:42:08 2010] [warn] [client 10.1.1.104] [14817] auth_ldap authenticate: user x authentication failed; URI /svn/ [ldap_simple_bind_s() to check user credentials failed][Invalid credentials]
        [Tue May 11 13:42:08 2010] [error] [client 10.1.1.104] user x: authentication failure for "/svn/": Password Mismatch
        [Tue May 11 13:42:08 2010] [debug] mod_deflate.c(615): [client 10.1.1.104] Zlib: Compressed 527 to 359 : URL /svn/

    Attempting to authenticate as a non-existent user (nobodycool) results in the correct behavior of querying both LDAP servers:

        [Tue May 11 13:42:40 2010] [debug] mod_authnz_ldap.c(377): [client 10.1.1.104] [14815] auth_ldap authenticate: using URL ldap://dc01.alpha:3268/?sAMAccountName?sub?
        [Tue May 11 13:42:40 2010] [warn] [client 10.1.1.104] [14815] auth_ldap authenticate: user nobodycool authentication failed; URI /svn/ [User not found][No such object]
        [Tue May 11 13:42:40 2010] [debug] mod_authnz_ldap.c(377): [client 10.1.1.104] [14815] auth_ldap authenticate: using URL ldap://ldap.beta:3268/?sAMAccountName?sub?
        [Tue May 11 13:42:44 2010] [warn] [client 10.1.1.104] [14815] auth_ldap authenticate: user nobodycool authentication failed; URI /svn/ [User not found][No such object]
        [Tue May 11 13:42:44 2010] [error] [client 10.1.1.104] user nobodycool not found: /svn/
        [Tue May 11 13:42:44 2010] [debug] mod_deflate.c(615): [client 10.1.1.104] Zlib: Compressed 527 to 359 : URL /svn/

    How do I configure Apache to correctly query Beta if it encounters an expired account in Alpha?
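    A way to confirm what the logs suggest - that Alpha still finds the user, so the failed bind is treated as a wrong password and the provider chain stops instead of falling through to beta - is to reproduce the lookup with ldapsearch using the same bind account and filter (names taken from the config above; the attribute list is just for inspection):

        # Does Alpha still return the expired account for this filter?
        ldapsearch -x -H ldap://dc01.alpha:3268 \
            -D "CN=Subversion,OU=Service Accounts,O=Alpha" -W \
            -b "" "(sAMAccountName=x)" sAMAccountName accountExpires userAccountControl

    If Alpha returns an entry, mod_authnz_ldap reports "Password Mismatch" rather than "user not found", and only the latter lets it try the next provider.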

    Read the article

  • Cannot start listening on a certain TCP port, but there's nothing currently listening on it

    - by John Rasch
    I have a Windows Service that uses a WCF service host to listen for connections on TCP port 61000. When I try to start the service, I get the error:

        Service cannot be started. System.ServiceModel.AddressAlreadyInUseException: HTTP could not register URL http://+:61000/ because TCP port 61000 is being used by another application. ---> System.Net.HttpListenerException: The process cannot access the file because it is being used by another process
           at System.Net.HttpListener.AddAll()
           at System.Net.HttpListener.Start()
           at System.ServiceModel.Channels.SharedHttpTransportManager.OnOpen()
           --- End of inner exception stack trace ---
           at System.ServiceModel.Channels.SharedHttpTransportManager.OnOpen()
           at System.ServiceModel.Channels.TransportManager.Open(TransportChannelListener channelListener)
           at System.ServiceModel.Channels.TransportManagerContainer.Open(SelectTransportManagersCallback selectTransportManagerCallback)
           at System.ServiceModel.Channels.HttpChannelListener.OnOpen(TimeSpan timeout)
           at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
           at System.ServiceModel.Dispatcher.ChannelDispatcher.OnOpen(TimeSpan timeout)
           at...

    A quick netstat -a shows there is nothing listening on port 61000. I've also found several posts online that mention reserving namespaces using netsh, but the account that the service runs under has administrator privileges, so that shouldn't be necessary. Any other ideas as to why I'm getting this message? This service is running on 64-bit Windows Server 2008 R2 Standard.
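    Since the exception comes from HttpListener, the conflict is at the HTTP.SYS URL-reservation layer rather than at a raw socket, which is why netstat shows nothing. The existing reservations can be inspected, and one added for this URL, roughly like so (the account name is illustrative):

        netsh http show urlacl

        netsh http add urlacl url=http://+:61000/ user=DOMAIN\ServiceAccount

    A reservation held by another application that covers the same URL namespace (e.g. a wildcard on the same port) would produce exactly this error even with nothing actively listening.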

    Read the article

  • Cisco ASA 5505 site to site IPSEC VPN won't route from multiple LANs

    - by franklundy
    Hi, I've set up a standard site-to-site VPN between two ASA 5505s (using the wizard in ASDM) and have the VPN working fine for traffic between Site A and Site B on the directly connected LANs. But this VPN is actually to be used for data originating on LAN subnets that are one hop away from the directly connected LANs. So actually there is another router connected to each ASA (LAN side) that then routes to two completely different LAN ranges, where the clients and servers reside. At the moment, any traffic that gets to the ASA that has not originated from the directly connected LAN gets sent straight to the default gateway, and not through the VPN. I've tried adding the additional subnets to the "Protected Networks" on the VPN, but that has no effect. I have also tried adding a static route to each ASA trying to point the traffic to the other side, but again this hasn't worked. Here is the config for one of the sites. This works for traffic to/from the 192.168.144.x subnets perfectly. What I need is to be able to route traffic from 10.1.0.0/24 to 10.2.0.0/24, for example.

        ASA Version 8.0(3)
        !
        hostname Site1
        enable password ** encrypted
        names
        name 192.168.144.4 Site2
        !
        interface Vlan1
         nameif inside
         security-level 100
         ip address 192.168.144.2 255.255.255.252
        !
        interface Vlan2
         nameif outside
         security-level 0
         ip address 10.78.254.70 255.255.255.252 (this is a private WAN circuit)
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Ethernet0/2
        !
        interface Ethernet0/3
        !
        interface Ethernet0/4
        !
        interface Ethernet0/5
        !
        interface Ethernet0/6
        !
        interface Ethernet0/7
        !
        passwd ** encrypted
        ftp mode passive
        access-list inside_access_in extended permit ip any any
        access-list outside_access_in extended permit icmp any any echo-reply
        access-list outside_1_cryptomap extended permit ip 192.168.144.0 255.255.255.252 Site2 255.255.255.252
        access-list inside_nat0_outbound extended permit ip 192.168.144.0 255.255.255.252 Site2 255.255.255.252
        pager lines 24
        logging enable
        logging asdm informational
        mtu inside 1500
        mtu outside 1500
        icmp unreachable rate-limit 1 burst-size 1
        asdm image disk0:/asdm-603.bin
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        nat (inside) 0 access-list inside_nat0_outbound
        nat (inside) 1 0.0.0.0 0.0.0.0
        access-group inside_access_in in interface inside
        access-group outside_access_in in interface outside
        route outside 0.0.0.0 0.0.0.0 10.78.254.69 1
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout uauth 0:05:00 absolute
        dynamic-access-policy-record DfltAccessPolicy
        aaa authentication ssh console LOCAL
        http server enable
        http 0.0.0.0 0.0.0.0 outside
        http 192.168.1.0 255.255.255.0 inside
        no snmp-server location
        no snmp-server contact
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto map outside_map 1 match address outside_1_cryptomap
        crypto map outside_map 1 set pfs
        crypto map outside_map 1 set peer 10.78.254.66
        crypto map outside_map 1 set transform-set ESP-3DES-SHA
        crypto map outside_map interface outside
        crypto isakmp enable outside
        crypto isakmp policy 10
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        no crypto isakmp nat-traversal
        telnet timeout 5
        ssh 0.0.0.0 0.0.0.0 outside
        ssh timeout 5
        console timeout 0
        management-access inside
        threat-detection basic-threat
        threat-detection statistics port
        threat-detection statistics protocol
        threat-detection statistics access-list
        group-policy DfltGrpPolicy attributes
         vpn-idle-timeout none
        username enadmin password * encrypted privilege 15
        tunnel-group 10.78.254.66 type ipsec-l2l
        tunnel-group 10.78.254.66 ipsec-attributes
         pre-shared-key *
        !
        !
        prompt hostname context
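    For reference, the crypto ACL and the NAT-exemption ACL above only match the /30 interconnect networks, which fits the symptom: only the directly connected LANs ride the tunnel. Extending both ACLs (mirrored on the Site2 ASA), plus telling the ASA where the inner subnet lives, would look roughly like this with the example subnets from the question (192.168.144.1 is assumed to be the inside router's address on the /30):

        access-list outside_1_cryptomap extended permit ip 10.1.0.0 255.255.255.0 10.2.0.0 255.255.255.0
        access-list inside_nat0_outbound extended permit ip 10.1.0.0 255.255.255.0 10.2.0.0 255.255.255.0
        ! plus a route so the ASA knows 10.1.0.0/24 sits behind the inside router:
        route inside 10.1.0.0 255.255.255.0 192.168.144.1 1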

    Read the article

  • Fedora Tomcat log file path

    - by Kamil
    My log file is inside:

        kamil@localhost tomcat$ grep "logs/" ./*
        ./log4j.properties:log4j.appender.R.File=${catalina.home}/logs/tomcat.log

    My CATALINA_HOME is:

        kamil@localhost tomcat$ sudo grep "CATALINA" ./*
        ...
        ./tomcat.conf:CATALINA_HOME="/usr/share/tomcat"

    That suggests my log file is there, and there it is:

        kamil@localhost tomcat$ sudo ls /usr/share/tomcat/logs/ | grep .out
        catalina.out

    So why can't I start the server?

        kamil@localhost tomcat$ sudo tomcat start
        /usr/sbin/tomcat: line 30: /logs/catalina.out: No such file or directory
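    The error line is telling: the startup script expands ${CATALINA_HOME}/logs/catalina.out, and CATALINA_HOME is empty in that shell (hence the bare /logs/catalina.out), so tomcat.conf is evidently not being sourced when the script is invoked directly. A sketch of two ways around that (the conf path is the Fedora default, assumed here):

        # Supply the variable explicitly...
        sudo CATALINA_HOME=/usr/share/tomcat tomcat start

        # ...or go through the service wrapper, which normally sources tomcat.conf
        sudo service tomcat start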

    Read the article

  • How to add a Linux Partition on FreeBSD

    - by Ömer
    Today I installed FreeBSD 9.0 PPC on my Mac mini G4 with a 40GB HDD. During installation (using the FreeBSD utility 'gpart') I allocated a total of about 23GB for FreeBSD, leaving 17GB totally free (neither partitioned nor formatted) for a later Linux installation. Now, when I try to install Linux (Ubuntu 10.10 PPC) on the remaining 17GB, the Linux/Ubuntu installer (or Linux's Disk Utility, for that matter) presumably wants a Linux partition, and when I try to add a (Linux) partition on that area using the Linux Disk Utility it fails with this message:

        Error creating partition: helper exited with exit code 1: In part_add_partition: device_file=/dev/hda, start=23363101696, size=16644660224, type=
        Entering MS-DOS parser (offset=0, size=40007761920)
        No MSDOS_MAGIC found
        Exiting MS-DOS parser
        Entering Apple parser
        Mac MAGIC found, block_size=512
        map_count = 17
        Leaving Apple parser
        Apple partition table detected
        containing partition table scheme = 2
        got it
        Error: The partition's data region doesn't occupy the entire partition.
        ped_disk_new() failed

    Now, I'm trying to add a Linux partition on FreeBSD running on the hard disk. I use the seemingly most suitable tool for this job: gpart. Here is the 'gpart show ad0'. But it seems unable to add a Linux partition, because "man gpart" doesn't list "Linux partition" or anything like ext2 or ext3/ext4. The closest thing to a Linux partition in gpart is "mbr", but it doesn't work:

        # gpart add -t mbr ad0

    So, how do I properly add a Linux partition on FreeBSD? Thanks.
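    Purely as a pointer for experimenting: gpart does know a generic Linux data type on some partition schemes (it appears as linux-data in gpart(8) on later FreeBSD releases); whether the Apple Partition Map scheme on this G4 accepts it, or whether FreeBSD 9.0's gpart has it at all, is another question. A sketch of the attempt:

        # Inspect the current layout and free space
        gpart show ad0

        # Try adding a 16GB slot typed for Linux data (the type name may not be
        # available under the APM scheme or on FreeBSD 9.0)
        gpart add -t linux-data -s 16G ad0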

    Read the article

  • Dell R510 vs R710

    - by AX1
    Hello, the Dell R510 and R710 can both hold regular configurations (e.g. X5650, 24 GB RAM, etc.) and these usually come out to about the same price. Is there a particular reason why one would choose the R510 over the R710, or vice versa? There really appears to be a lack of differentiating factors. The only 'major' factor I found, which doesn't apply to me though, is that the R510 can hold up to 12 3.5in HDDs while the R710 (which is slightly more expensive) can only hold up to 6 3.5in HDDs. Maybe you guys have some input, having bought either of these machines (or both), and can shed some light on other differences and why someone should choose one over the other, as the pricing is pretty much the same with my configuration. Thanks!

    Read the article

  • HP StorageWorks 448 tape drive input/output error with Ubuntu

    - by Dan D
    I'm trying to set up a backup to tape of a machine using flexbackup. However, any attempt to write to the tape drive (via either flexbackup or just tar) results in "/dev/st0: Input/output error". The machine seems to recognise the drive (HP StorageWorks Ultrium 448) and that there's a tape in it, and "mt status" seems to work; "mt -f /dev/st0 rewind" and "erase" throw no errors:

        root@stor001:/# mt status
        SCSI 2 tape drive:
        File number=0, block number=0, partition=0.
        Tape block size 0 bytes. Density code 0x42 (LTO-2).
        Soft error count since last status=0
        General status bits on (41010000):
         BOT ONLINE IM_REP_EN

        root@stor001:/# cat /proc/scsi/scsi
        Attached devices:
        Host: scsi0 Channel: 00 Id: 00 Lun: 00
          Vendor: HL-DT-ST Model: DVDRAM GSA-4084N Rev: KS02
          Type:   CD-ROM                           ANSI SCSI revision: 05
        Host: scsi2 Channel: 00 Id: 03 Lun: 00
          Vendor: HP       Model: Ultrium 2-SCSI   Rev: S65D
          Type:   Sequential-Access                ANSI SCSI revision: 03

    "tell" does, however:

        root@stor001:/# mt -f /dev/st0 tell
        /dev/st0: Input/output error

    Based on a forum post I found, I tried:

        root@stor001:/# dd if=/dev/zero of=/dev/nst0 bs=1024 count=10
        10+0 records in
        10+0 records out
        10240 bytes (10 kB) copied, 5.0815 s, 2.0 kB/s

    which gave the person on the forum an error but seems to work for me. If anyone has any suggestions, I'm all ears...
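    One generic thing to try when tar and flexbackup fail but small dd writes succeed is block-size agreement between the drive and the software; a sketch using standard mt-st commands (the tar blocking factor is an example):

        # Put the drive in variable-block mode so tar's default blocking works
        mt -f /dev/nst0 setblk 0

        # Then test a write at an explicit block size (128 x 512 bytes = 64KB records)
        tar -b 128 -cf /dev/nst0 /etc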

    Read the article

  • Cisco "copy startup-config tftp" results in a 0 byte file on the server?

    - by Geoffrey
    I'm pulling my hair out trying to figure this out. My startup-config is good; I can view it with a show command. I'm trying to copy it to a TFTP server:

        asa5505# copy startup-config tftp
        Address or name of remote host []? ipaddress
        Destination filename [startup-config]? t
        !!
        %Error writing tftp://ipaddress/t (Timed out attempting to connect)

    On my TFTP server (SolarWinds), I get the following:

        binary, PUT. Started file name: C:\TFTP-Root\t
        binary, PUT. File Exists, C:\TFTP-Root\t
        binary, PUT. Deleting Existing File.
        binary, PUT. Interrupted by client, cause: The process cannot access the file 'C:\TFTP-Root\t' because it is being used by another process

    I've used tftpd32 with the same results. I've tried different servers, even one on the same network as the ASA... same results. It'll create a 0-byte file and never do the dump. What's going on? Everything is working normally except for this.
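    Since the PUT clearly reaches the server before stalling, one avenue is to rule out the interactive prompts and name resolution by giving the full URL in one shot (server IP and filename are illustrative):

        asa5505# copy startup-config tftp://192.168.1.50/asa5505-confg

    If that still stalls after the first block, something between the ASA and the server (protocol inspection, or a host firewall on the TFTP box) is likely eating the follow-up UDP packets on the ephemeral ports that TFTP negotiates after the initial request.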

    Read the article
