Search Results

Search found 21130 results on 846 pages for 'nfs client'.


  • Samba 3.5 Shadow Copy for Windows 7

    - by Prashanth Sundaram
    Over the past several days I have been trying to get shadow copy to work with Samba, but haven't been successful. Can someone check the config below and let me know if I am missing something? We are using an EqualLogic SAN and iSCSI LUNs to mount volumes. I can cleanly access Samba shares on Windows 7 clients, just not shadow copy. I have referred to the official how-to but couldn't get it to work. I see these messages in the logs. Any help is deeply appreciated.

        [2012/10/31 12:20:53.549863, 0] smbd/nttrans.c:2170(call_nt_transact_ioctl)
          FSCTL_GET_SHADOW_COPY_DATA: connectpath /fs/test-01, failed.
        [2012/10/31 12:21:13.887198, 0] modules/vfs_shadow_copy2.c:734(shadow_copy2_get_shadow_copy2_data)
          shadow:snapdir not found for /fs/test-01 in get_shadow_copy_data
        [2012/10/31 12:21:13.887265, 0] smbd/nttrans.c:2170(call_nt_transact_ioctl)
          FSCTL_GET_SHADOW_COPY_DATA: connectpath /fs/test-01, failed.

    Samba packages:

        samba-3.5.10-116.el6_2.x86_64
        samba-common-3.5.10-116.el6_2.x86_64
        samba-winbind-clients-3.5.10-116.el6_2.x86_64
        samba-client-3.5.10-116.el6_2.x86_64

    df -h (the first entry is the iSCSI LUN; the other two are snapshots):

        /dev/mapper/eql-0-fs-test01    5.0G 2.3G 2.5G 48% /fs/test-01
        /dev/mapper/eql-2-0+fs-test01  5.0G 2.3G 2.5G 48% /fs/test-01/@GMT-2012.10.26-17.32.42/fs/test-01   (SNAPSHOT-1)
        /dev/mapper/eql-d-0+fs-test01  5.0G 2.3G 2.5G 48% /fs/test-01/@GMT-2012.10.31-11.52.42/fs/test-01   (SNAPSHOT-2)

    /etc/samba/smb.conf:

        [global]
        workgroup = DOMAIN
        server string = Samba Server Version %v
        security = ads
        realm = DOMAIN.CORP
        encrypt passwords = yes
        guest account = nobody
        map to guest = bad uid
        log file = /var/log/samba/%m.log
        domain master = no
        local master = no
        preferred master = no
        os level = 0
        load printers = no
        show add printer wizard = no
        printable = no
        printcap name = /dev/null
        disable spoolss = yes
        follow symlinks = yes
        wide links = yes
        unix extensions = no

        [test]
        comment = Test Directories
        path = /fs/test-01
        vfs objects = shadow_copy2
        #shadow_copy2: sort = desc
        #shadow: localtime = yes
        #shadow: snapdir = /fs/test-01/test
        #shadow: basedir = /fs/test-01
        guest ok = yes
        writeable = yes
        map archive = no
        force create mode = 0660
        force directory mode = 2770
        inherit owner = yes
        inherit permissions = yes

    All feedback is welcome. Thanks!
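
    A possible starting point, given the "shadow:snapdir not found" log line: the shadow_copy2 options in the [test] share above are commented out, so the module falls back to defaults and never finds the snapshots. A minimal sketch (paths are assumptions based on the df output above, not a verified fix):

        [test]
        path = /fs/test-01
        vfs objects = shadow_copy2
        shadow:snapdir = /fs/test-01
        shadow:basedir = /fs/test-01
        shadow:sort = desc

    Note that shadow_copy2 then expects each snapshot's files directly under /fs/test-01/@GMT-*/ relative to the basedir, whereas the mounts above place them under an extra fs/test-01 suffix, so the snapshot mountpoints may need adjusting to match.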

  • Windows file locks allowing multiple users to write to open file over network

    - by JPbuntu
    I have 6 Windows computers (XP, Vista, 7) that need to access a Samba share (Ubuntu 12.04). I am trying to make it so only one client can open a file at a given time. I thought this was pretty standard behavior for file locks, but I can't get it to work. The way it is right now, a file can be open by two users, and changed and saved by either one of them; the last file saved overwrites whatever changes the other user made. At first I thought this was a Samba configuration problem, but I get this behavior even between two Windows machines. So far I have only tested two setups, and both behave the same way:

        Windows XP <-> Windows Vista
        Windows XP <-> Samba <-> Windows Vista

    When I tested the Samba configuration, I had set strict locking = yes and got errors logged like this:

        close_remove_share_mode: Could not get share mode lock for file _prod/part_number_list_COPY.xlsx

    Eventually all of the files are going to be moved onto the Samba share, so that is the configuration I am most concerned about fixing. Any ideas? Thanks in advance.

    EDIT: I tested an Excel file again, and it is now working properly in both cases mentioned above; I am also no longer getting the error above. I don't know what happened; perhaps a restart fixed it? (It also works with strict locking = no.) I still need to find a solution for the CAD/CAM files we use, though: the software is Vector, and it does not seem to use file locks. Is there any software I can use to manage these files so two people can't open/edit them at a time? Maybe a Windows application that forces file locks? Or a dirt-simple version control system? (The only ones I have seen are too complicated for our needs.)
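
    For the Samba side, the options below are the usual locking knobs (a sketch of smb.conf settings, not a guaranteed fix): strict locking makes Samba check byte-range locks on every access, and disabling oplocks stops clients from caching a file they believe they hold exclusively. None of this forces an exclusive open for applications (like the Vector software mentioned) that never request locks in the first place, so those files will still need an external check-in/check-out mechanism.

        [share]
        strict locking = yes
        oplocks = no
        level2 oplocks = no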

  • Limiting bandwidth on internal interface on Linux gateway

    - by Jack Scott
    I am responsible for a Linux-based (it runs Debian) branch office router that takes a single high-speed Internet connection (eth2) and turns it into about 20 internal networks, each with a separate subnet (192.168.1.0/24 to 192.168.20.0/24) and a separate VLAN (eth0.101 to eth0.120). I am trying to restrict bandwidth on one of the internal subnets that is consistently chewing up more bandwidth than it should. What is the best way to do this? My first try was wondershaper, which I heard about here on SuperUser. Unfortunately, it is useful for exactly the opposite of my situation: it is designed for the client side, not the Internet side. My second attempt used the script found at http://www.topwebhosts.org/tools/traffic-control.php, which I modified so the active part is:

        tc qdisc add dev eth0.113 root handle 13: htb default 100
        tc class add dev eth0.113 parent 13: classid 13:1 htb rate 3mbps
        tc class add dev eth0.113 parent 13: classid 13:2 htb rate 3mbps
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip dst 192.168.13.0/24 flowid 13:1
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip src 192.168.13.0/24 flowid 13:2

    What I want this to do is restrict the bandwidth on VLAN 113 (subnet 192.168.13.0/24) to 3 Mbit up and 3 Mbit down. Unfortunately, it seems to have no effect at all! I'm very inexperienced with the tc command, so any help getting this working would be appreciated.
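
    Three likely reasons the script above has no visible effect. First, in tc, "3mbps" means 3 megabytes per second; 3 megabits is written "3mbit". Second, "default 100" points at a class 13:100 that the script never creates. Third, a root qdisc on eth0.113 shapes only traffic leaving the router towards that VLAN (the clients' downloads); traffic from the VLAN leaves on eth2, so the upload limit has to be attached there. A hedged rework (untested; interface names taken from the question):

        # download limit: traffic leaving the router towards VLAN 113
        tc qdisc add dev eth0.113 root handle 13: htb
        tc class add dev eth0.113 parent 13: classid 13:1 htb rate 3mbit ceil 3mbit
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 \
            match ip dst 192.168.13.0/24 flowid 13:1

        # upload limit: traffic from VLAN 113, attached where it leaves (eth2)
        tc qdisc add dev eth2 root handle 20: htb
        tc class add dev eth2 parent 20: classid 20:1 htb rate 3mbit ceil 3mbit
        tc filter add dev eth2 protocol ip parent 20:0 prio 1 u32 \
            match ip src 192.168.13.0/24 flowid 20:1

    With no "default" class on either qdisc, unmatched traffic passes unshaped, which is the intent here: only the one subnet is limited.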

  • o2cb thinks ocfs2 cluster is still online, and refuses to shut down

    - by Kendall
    I have a handful of OpenSuSE 11.2 servers that use OCFS2 volumes. I've noticed that o2cb can't figure out when the OCFS2 cluster is actually mounted. For example, when I try to shut down o2cb after stopping OCFS2, o2cb refuses to shut down because it thinks OCFS2 is still up! After stopping OCFS2 I try to stop o2cb:

        hamguy:/dev/disk/by-label # /etc/init.d/o2cb stop
        Stopping O2CB cluster ocfs2: Failed
        Unable to stop cluster as heartbeat region still active

    So I check the status:

        hamguy:/dev/disk/by-label # /etc/init.d/o2cb status
        Driver for "configfs": Loaded
        Filesystem "configfs": Mounted
        Stack glue driver: Loaded
        Stack plugin "o2cb": Loaded
        Driver for "ocfs2_dlmfs": Loaded
        Filesystem "ocfs2_dlmfs": Mounted
        Checking O2CB cluster ocfs2: Online
          Heartbeat dead threshold = 31
          Network idle timeout: 30000
          Network keepalive delay: 2000
          Network reconnect delay: 2000
        Checking O2CB heartbeat: Active

    And double-check OCFS2:

        hamguy:/dev/disk/by-label # /etc/init.d/ocfs2 status
        Configured OCFS2 mountpoints: /u/conf /u/logs /u/backup /u/client /u/data /u/mdata

    OCFS2 is clearly down, while o2cb clearly thinks otherwise. The versions of OCFS2 and o2cb are:

        kendall@hamguy:~> rpm -qa | grep ocfs2
        ocfs2console-1.4.1-25.6.x86_64
        ocfs2-tools-o2cb-1.4.1-25.6.x86_64
        ocfs2-tools-1.4.1-25.6.x86_64

        kendall@hamguy:~> rpm -qa | grep o2cb
        ocfs2-tools-o2cb-1.4.1-25.6.x86_64

    What causes this, and is there a way around it? If I try to reboot the machine, it will just sit there forever until you physically power-cycle it, which is obviously a bit of a problem. Any insight is appreciated, thank you.

    Kendall
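
    If the volumes really are unmounted, the stale heartbeat region can usually be inspected and torn down with ocfs2_hb_ctl before stopping o2cb. A hedged sketch (the device path is a placeholder; only attempt this when the filesystem is not mounted anywhere on the node):

        # show the heartbeat reference count for the device backing a region
        ocfs2_hb_ctl -I -d /dev/mapper/some-ocfs2-volume

        # force heartbeat to stop on that region
        ocfs2_hb_ctl -K -d /dev/mapper/some-ocfs2-volume

        # then retry
        /etc/init.d/o2cb stop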

  • How to setup equivalent USVIDEO.ORG DNS-Proxy on Linux

    - by Gary
    I have a VPS in the USA running Ubuntu. I want to set up something similar to http://www.usvideo.org. Basically, USVIDEO is a DNS service that allows Canadians to access American content like Hulu, Netflix, NBC, etc. (content restricted by geographical IP). Here is how I think USVideo does it:

    1. Clients (PS3, Xbox, PC) specify the DNS server(s) listed on USVIDEO.org's website.
    2. If the DNS request is for a video/audio site such as Netflix or Pandora, the request is forwarded to a proxy; all other requests are forwarded to a regular DNS server.
    3. When a geo-fenced video/audio URL is requested, the DNS reply carries the address of the proxy server, which relays traffic to the destination domain via the U.S. gateway, so the access appears to come from a U.S. IP address.
    4. Once the request has passed the U.S. IP address check, the proxy steps out of the loop and lets the streaming site contact you directly to start the stream. This trick relies on the way streaming sites check the country of your IP address once up front but don't keep checking the destination while the video is streaming.

    What is elegant about this solution is that no VPN tunnel is required to bypass geographical IP checks. All that is required on the client side is to point DNS at the VPS. If a certain site is geographically locked, its traffic is sent through a proxy, and that's it. These sites can be specified in the DNS entries, or perhaps the proxy service can redirect the DNS request to its own proxy. I believe what I need to set this up is Squid, iptables, and DNS. What I need help with is how exactly to approach this. Would Squid be set up as a transparent proxy?
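
    One way to sketch the DNS half is with dnsmasq on the VPS rather than a full DNS server: answer only the geo-fenced domains with the VPS's own address and pass everything else to a normal resolver. The domain list and VPS_IP below are assumptions, and a proxy (Squid or a plain TCP relay) still has to listen on the VPS and forward that traffic onwards:

        # /etc/dnsmasq.conf (minimal sketch)
        server=8.8.8.8                 # upstream resolver for everything else
        address=/netflix.com/VPS_IP    # geo-fenced domains answered with the proxy's IP
        address=/hulu.com/VPS_IP

    Note that Squid would not be a transparent proxy in the usual interception sense here; it simply accepts the connections that the DNS answers steer to the VPS and relays them to the real destination.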

  • Plesk + Apache + PHP (FastCGI): Constant session permissions problems, conflicts between HTTP / HTTPS

    - by Hans Engel
    I've just moved a collection of sites over to a brand-new server running Apache 2.2.3, PHP 5.3, and Plesk 10.1.1. I am having problems with file permissions on PHP sessions, which are stored in /var/lib/php/session. I originally set the permissions for this folder like so:

        drwxrwx--- 2 apache psacln 8192 Mar 22 23:25 session

    This worked fine for HTTP sessions. Files were saved in that folder with these permissions:

        -rw------- 1 client1 psacln 0 Mar 22 23:24 sess_507...
        -rw------- 1 client2 psacln 0 Mar 22 23:25 sess_8o1...

    The problem, however, is that PHP scripts accessed via HTTPS do not seem to run as the same client1 or client2 user. I deleted the files in the session directory and accessed a login page via HTTPS to see how sessions were saved when initiated over this protocol:

        -rw------- 1 apache apache 0 Mar 22 23:25 sess_507...

    So, for whatever reason, sessions initiated by clients browsing over HTTPS were saved by apache:apache, while sessions from HTTP clients were saved as someclient:psacln. What I'd like to ask:

    1. How can I avoid this problem with session permissions? When a session is created via unencrypted HTTP and the client then visits an HTTPS portion of the site, permission errors are shown, since apache:apache tries to access the session file created by someclient:psacln. The converse is also true.
    2. Can I change the user that runs the Apache HTTPS server, via Plesk or the command line?
    3. If not, can I have PHP sessions saved with rw-rw---- permissions, and then add apache to the psacln group?

    Any other suggestions on how to fix this issue?
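
    Option 3 is workable without touching the Apache user. PHP's session.save_path accepts an optional depth and file-mode prefix ("N;MODE;/path"), so session files can be created group-writable, and a setgid bit on the save directory keeps everything in the psacln group. A sketch (user and group names taken from the question; verify against your PHP build before relying on it):

        # let the HTTPS-side apache user into the group used by the HTTP-side clients
        usermod -a -G psacln apache

        # group-writable session dir; setgid so new files inherit the psacln group
        chgrp psacln /var/lib/php/session
        chmod 2770 /var/lib/php/session

        # php.ini: create session files mode 660 instead of the default 600
        #   session.save_path = "0;660;/var/lib/php/session"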

  • Increasing MSSQL/Sage performance with SSD? (Dell PE T410)

    - by Anthony
    I have a client wanting better performance of their Sage (Accpac & CRM) server (v5.5, soon to be v7). It's running on 1 of 2 Hyper-V VMs (Svr2008) on a Dell PE T410 server with 24GB of RAM (1333MHz) & dual quad-core, and both VMs (only their C: drives) are on a single RAID5 array. All clients connect via 1Gb ethernet. The 2nd VM is SBS2008 with 9GB RAM (& all SBS dbs & company data are on a separate RAID5 array), & 3GB RAM for the Svr2008 hypervisor. I've given the Sage/MSSQL VM all the RAM I can (12GB) & SQL RAM caching (~8GB, never exceeds ~7.5GB, eg. entire db can now be cached in RAM) and that's helped significantly. Upgrading the Hypervisor to Svr2012 is an obvious step, but probably not a dramatic improvement? What about an SSD for this Sage/SQL VM (VM = 100GB, <10GB for the actual live DB) ? Can SSDs be put into the SAS hot-swap bays? Or will I have to use the mobo SATA(3Gbps?) ports, or PCI-E SSD card? Should SSDs be RAIDed for this situation? Or is SSD's higher reliability offsetting the need for RAID1/5/10? (I have nightly full disk backups) New territory for me, would appreciate some feedback. Thanks, Anthony.

  • Viewing a large-resolution VNC server through a small-resolution viewer in Ubuntu

    - by Madiyaan Damha
    I have two Ubuntu computers: one with a large screen resolution (1920x1600) running the default Ubuntu VNC server, and another with a resolution of about 1200x1024 that I use to VNC into the first, using the default Ubuntu VNC viewer. Everything works fine, except there are annoying scrollbars in the viewer because the server's desktop resolution is so much higher than the viewer's. Is there a way to:

    1. Scale the server's desktop down to the viewer's resolution? I know there will be a loss of image quality, but I am willing to try it out. This would be something like how Windows Media Player or VLC scales down its window (with some interpolation of pixels).
    2. Automatically shrink the server's resolution to the client's when I connect, and restore it when I disconnect? This seems like a less attractive solution.
    3. Any other solution that gurus out there use?

    I am sure someone has experienced this before (annoying scrollbars), so there must be a solution out there.
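
    For option 1, one hedged possibility is to serve the desktop with x11vnc instead of the default vino server, since x11vnc can scale its output before sending it (a sketch; the display number is an assumption):

        x11vnc -display :0 -scale 3/4 -forever

    For option 2, the server's resolution can be dropped for the session and restored afterwards with xrandr:

        xrandr -s 1280x1024    # shrink for the remote session
        xrandr -s 1920x1600    # restore the native mode when done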

  • Odd IIS FTP Failure

    - by Monkey Boson
    We're running a script on our production box that zips up our database and FTPs it to a backup box every night. Our production box runs Red Hat Enterprise 5; our backup box runs Windows XP Pro / IIS 5.1. Both machines are on the same VLAN (not sure if this is important). The backup file usually clocks in at around 3 GB. Every now and again (~5% of the time), the backup script fails. The shell script on the client side, which looks at return codes, never identifies any problem, since ftp always returns 0. On the server side, IIS writes out a log that looks like this:

        #Software: Microsoft Internet Information Services 5.1
        #Version: 1.0
        #Date: 2009-08-08 07:04:25
        #Fields: time c-ip cs-method cs-uri-stem sc-status sc-win32-status
        07:04:25 192.168.111.235 [15]USER backup 331 0
        07:04:25 192.168.111.235 [15]PASS - 230 0
        07:05:54 192.168.111.235 [15]created backup_20090808.zip 426 10035
        07:06:16 192.168.111.235 [15]QUIT - 426 0

    Now, I know that 426 means "Connection closed, transfer aborted", which is sort of a catch-all for "IIS was not happy". The real puzzler is the Win32 status code: 10035 (WSAEWOULDBLOCK, "Resource temporarily unavailable"). My understanding is that this code is normal when using non-blocking socket calls, which would almost certainly be used by any FTP server implementation. My first guess, that it might be a timeout issue, doesn't make sense, since we're only talking about a few minutes here and the timeout was left at the default 900 s. Does anybody have any ideas about what is causing this problem, and how it might be fixed? Thanks!

  • Hostname error on my Slicehost Ubuntu server

    - by allesklar
    Like many folks who upgraded to Rails 2.2, I got an exception raised when sending an email; this version of Rails and later requires TLS for sending emails. The message in the production log file says:

        hostname was not match with the server certificate

    I did a whole lot of research and work on this and did everything I could. I changed my slice's hostname to ohlalaweb.com. If I run the command 'hostname' at the command line, I get:

        ohlalaweb.com

    Postfix seems to work fine: I can send emails from the command line to my Gmail, Yahoo, and Google Apps accounts with no problems. Here is the result of cat /etc/postfix/main.cf:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific:  Specifying a file name will cause the first
        # line of that file to be used as the name.  The Debian default
        # is /etc/mailname.
        myorigin = /etc/mailname
        smmtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ohlalaweb.pem
        smtpd_tls_key_file=/etc/ssl/certs/ohlalaweb.pem
        smtpd_use_tls=yes
        # SA created next line to force postfix to use self create certificate
        smtpd_tls_auth_only=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = ohlalaweb.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mydestination = localhost.localdomain, localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all

    I have regenerated the SSL keys with the ohlalaweb.com hostname. Any ideas or suggestions?
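
    It may help to check which name the certificate actually presents, since the Ruby TLS check compares the certificate's CN against the hostname the Rails app connects with; connecting to "localhost" while the self-signed cert says "ohlalaweb.com" would fail in exactly this way. A quick check from any machine:

        openssl s_client -connect ohlalaweb.com:25 -starttls smtp </dev/null 2>/dev/null \
          | openssl x509 -noout -subject

    Whatever CN that prints is the exact name ActionMailer's smtp_settings[:address] has to use.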

  • System Idle Process network traffic?-Updated

    - by Moab
    I was using NetBalancer and noticed network traffic from an unidentified service. When I highlight it, go to the lower center pane, and click the parent process, it says the parent is the System Idle Process, and the upper pane shows both incoming and outgoing traffic. Does anyone know why the Windows System Idle Process is talking on the network? Windows 7 HP 64-bit.

    Edit: After blocking the traffic for that unidentified service, I checked my Event Viewer (Windows Logs > System) and found 3 new events that were never recorded before and that matched the time I blocked the traffic. So is this part of the Windows local DNS cache?

        Event ID 1014, DNS Client Events:
        Name resolution for the name dns.msftncsi.com timed out after none of the configured DNS servers responded.
        Name resolution for the name wpad.home timed out after none of the configured DNS servers responded.
        Name resolution for the name mscrl.microsoft.com timed out after none of the configured DNS servers responded.

    Then my web browser refused to work. I re-enabled the traffic and all returned to normal.
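
    Those three names explain the traffic: dns.msftncsi.com is Windows 7's Network Connectivity Status Indicator probe (which is likely why the browser and connectivity icon broke when it was blocked), wpad is automatic proxy discovery, and mscrl.microsoft.com hosts certificate revocation lists. The "System Idle Process" attribution is most likely just how the traffic monitor labels system-initiated traffic it cannot pin to a user process. If you want to silence the NCSI probe (at the cost of the tray icon no longer detecting connectivity), there is a registry switch; a sketch, from an elevated prompt:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\NlaSvc\Parameters\Internet" ^
            /v EnableActiveProbing /t REG_DWORD /d 0 /f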

  • Sendmail configs and logs look correct, but I get no mail

    - by Christian Dechery
    I used this tutorial to configure sendmail on Ubuntu. I followed every step, and when I test it, it seems to have worked, but I get no mail (not even in the spam folder). Below is the log for a test message:

        050 >>> MAIL From:<[email protected]> SIZE=345 AUTH=<>
        050 250 2.1.0 OK ek1sm23505399vdc.28 - gsmtp
        050 >>> RCPT To:<######@gmail.com>
        050 250 2.1.5 OK ek1sm23505399vdc.28 - gsmtp
        050 >>> DATA
        050 354 Go ahead ek1sm23505399vdc.28 - gsmtp
        050 >>> .
        050 250 2.0.0 OK 1401150762 ek1sm23505399vdc.28 - gsmtp
        050 <########@gmail.com>... Sent (OK 1401150762 ek1sm23505399vdc.28 - gsmtp)
        250 2.0.0 s4R0WdYN007263 Message accepted for delivery
        ######@gmail.com... Sent (s4R0WdYN007263 Message accepted for delivery)

    And this is my /var/log/mail.log:

        May 26 21:32:39 UX-BLUEROOM sendmail[7262]: s4R0Wdxq007262: from=christian, size=105, class=0, nrcpts=1, msgid=<[email protected]>, relay=christian@localhost
        May 26 21:32:40 UX-BLUEROOM sm-mta[7263]: s4R0WdYN007263: from=<[email protected]>, size=345, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1]
        May 26 21:32:41 UX-BLUEROOM sm-mta[7263]: STARTTLS=client, relay=gmail-smtp-msa.l.google.com., version=TLSv1/SSLv3, verify=FAIL, cipher=ECDHE-RSA-RC4-SHA, bits=128/128
        May 26 21:32:42 UX-BLUEROOM sm-mta[7263]: s4R0WdYN007263: to=<######@gmail.com>, ctladdr=<[email protected]> (1000/1000), delay=00:00:02, xdelay=00:00:02, mailer=relay, pri=30345, relay=gmail-smtp-msa.l.google.com. [173.194.75.109], dsn=2.0.0, stat=Sent (OK 1401150762 ek1sm23505399vdc.28 - gsmtp)
        May 26 21:32:42 UX-BLUEROOM sendmail[7262]: s4R0Wdxq007262: to=#####@gmail.com, ctladdr=christian (1000/1000), delay=00:00:03, xdelay=00:00:03, mailer=relay, pri=30105, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s4R0WdYN007263 Message accepted for delivery)

  • Problems with OpenVPN setup

    - by user70617
    Hi, I'm trying to set up a VPN server using OpenVPN, and I'm getting the following errors while trying to connect the client to the server:

        Sun Feb 13 14:54:16 2011 OpenVPN 2.1.4 i686-pc-linux-gnu [SSL] [LZO2] [EPOLL] built on Feb 5 2011
        Sun Feb 13 14:54:16 2011 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
        Sun Feb 13 14:54:16 2011 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
        Sun Feb 13 14:54:16 2011 ******* WARNING *******: all encryption and authentication features disabled -- all data will be tunnelled as cleartext
        Sun Feb 13 14:54:16 2011 RESOLVE: NOTE: localhost resolves to 2 addresses
        Sun Feb 13 14:54:16 2011 Note: Cannot ioctl TUNSETIFF tap0: Device or resource busy (errno=16)
        Sun Feb 13 14:54:16 2011 Note: Attempting fallback to kernel 2.2 TUN/TAP interface
        Sun Feb 13 14:54:16 2011 Cannot open TUN/TAP dev /dev/tap0: No such file or directory (errno=2)
        Sun Feb 13 14:54:16 2011 Exiting

    I have bridge-utils installed, and tap0 shows up in ifconfig. Can anybody give me a hand? Thanks in advance.
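
    The two notes in the log say it fairly plainly: something else already holds tap0 (the TUNSETIFF "busy" error), and the legacy /dev/tap0 fallback node does not exist. A hedged sketch of things to check (the bridge name br0 is an assumption):

        # see what currently owns tap0
        ip tuntap show    # or: brctl show

        # if tap0 is a stale leftover, remove and recreate it, then attach it to the bridge
        openvpn --rmtun --dev tap0
        openvpn --mktun --dev tap0
        ip link set tap0 up
        brctl addif br0 tap0

    Also note the cleartext warning in the log: encryption and authentication are disabled, which usually indicates a test config with no cipher/TLS settings; worth fixing before the server goes live.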

  • Port forwarding for samba

    - by EternallyGreen
    Alright, here's the setup: Internet - modem - WRT54G - hubs - WinXP workstations and a Linux SMB server. It's basically a home-style distributed internet connection setup, except it's at a school. What I want is remote, off-site SMB access. I figured I'd need to find out which ports need forwarding and then forward them to the server on the router. I'm told in another question on SF that multiple ports need forwarding, and that it gets somewhat complicated. What I need to know is which ports require forwarding for this, and what complications or vulnerabilities could arise from it. Any additional information you think I should have before doing this would be great.

    I'm told SMB doesn't support encryption, which is fine. Given that I set up authentication/access control, all this means is that once one of my users authenticates and starts downloading data, the unencrypted traffic could be intercepted and read by a man-in-the-middle, correct? Given that that's the only problem arising from the lack of encryption, it is of no concern to me. I suppose it could also mean a MITM injecting false data into the data stream, e.g. a user requests file A, and the MITM intercepts and replaces the contents of file A with false data. This isn't really an issue either, because my users would know that something was wrong, and it's not likely anyone would have an incentive to do this anyway.

    Another thing I've been informed of is Microsoft's poor implementation of SMB and its weak security track record. Does this apply if only the client end is MS? My server is Linux.
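
    For reference, these are the ports Windows file sharing uses; modern clients only strictly need 445:

        137/udp   NetBIOS name service
        138/udp   NetBIOS datagram service
        139/tcp   NetBIOS session service (SMB over NetBIOS)
        445/tcp   SMB over TCP

    Given the cleartext concern, a common alternative is to forward only 22/tcp to the server and tunnel SMB through SSH, so nothing unencrypted crosses the internet; exposing 445 directly also tends to attract constant scanning and brute-force attempts, regardless of the server OS.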

  • SMTP Unreachable from Specific Networks

    - by Jason George
    I host my business site through a VPS account. The instance runs Ubuntu, and I'm using Postfix + Dovecot as my mail server. For the most part, the mail server works fine. I have noticed, however, that I cannot send mail from specific local networks. I noticed this at a client's office several months ago: I could receive email, but any time I tried to send mail while connected to their network, the connection would time out. Since I could send my mail after leaving, I chalked it up to improper network configuration and didn't worry about it. Unfortunately, I've recently moved, switched service providers, and am forced to use the service provider's router due to the special setup they put in place to give me DSL in the sticks, well beyond the typical range for a DSL run. Now I'm unable to send email from home, which is a problem. I have tried sending email through my phone (using cellular service rather than my DSL) just to confirm the server is currently working. I'm not even sure where to start debugging. Any ideas on how I might track down the issue would be greatly appreciated.
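
    A pattern worth testing: many ISPs and office networks block outbound port 25 to anything but their own mail servers, which produces exactly this "receive works, send times out" behaviour. From an affected network, compare ports 25 and 587 (the hostname is a placeholder):

        nc -vz mail.example.com 25
        nc -vz mail.example.com 587

    If only 25 is blocked, enabling Postfix's submission service on 587 is the usual fix; the stanza below is the standard commented-out block in /etc/postfix/master.cf (adjust the options to your setup, then run postfix reload):

        submission inet n       -       -       -       -       smtpd
          -o smtpd_tls_security_level=encrypt
          -o smtpd_sasl_auth_enable=yes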

  • LAN->LAN IP translation (for TortoiseSVN + Artifacts + Buffalo router)

    - by Armchair Bronco
    Here's my scenario: I've got a VisualSVN server on my main dev box at home. I'm also using Visual Studio 2010, TortoiseSVN, the VisualSVN client (for source control), and Versioned 'Artifacts' (for bug tracking). (I had to modify the fake URLs below to use only one slash, because as a new user I can't post more than one real URL.)

    I've got my Buffalo AirStation WHR-HP-G300N router configured so my business partner can connect to the SVN server. I have port forwarding enabled for the internet-side IP address (like http:/99.888.77.66:443), which gets forwarded to an internal IP (like 192.168.11.6). This part is working great.

    The problem I'm having is with the integration piece between TortoiseSVN and my bug tracking system. I need to provide a bugtraq:url property, but I haven't been able to get relative paths to work, so I'm forced to use an absolute URL. On my end, I need to use the name of my server (for example, bugtraq:url = https:/my-server/svn/bla...), but this doesn't work for my partner; he needs to specify the IP address (for example, bugtraq:url = https:/999.888.77.66:443/svn/bla...). Is there a way to configure my router such that the IP address in this parameter gets re-routed/re-mapped to "https://my-server" if the request originates from the LAN itself? My router's software supports LAN-Internet and Internet-LAN rules, but I don't see LAN-LAN.
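
    Consumer routers rarely support that kind of hairpin/LAN-to-LAN rewriting, so a hedged workaround is to standardize the property on the hostname and fix name resolution on the remote side instead: keep bugtraq:url = https:/my-server/svn/... and have your partner map the name to your public address in his hosts file ("my-server" and the address below are the placeholders from the question):

        # C:\Windows\System32\drivers\etc\hosts on the partner's machine
        99.888.77.66    my-server

    On your LAN the name already resolves to the internal address, so the same property value then works from both ends.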

  • LAMP server VM issues

    - by nullArray
    After getting a recommendation to salvage a wiki by installing a LAMP server, I went on the prowl for a good virtualized one and chose the VMware Player version. Since my Windows box has Bonjour, I can, for example, go to http://lamp.local. and see the web client. The problem is, I can't SSH in to scp the files I need, can't mount a USB thumbdrive (usbfs is unsupported), and can't get Samba working. I can't even update the Ubuntu installation; it fails. I've tried bridged, NAT, and host-only networking settings in VMware Player. Bridged gives me an undefined IP, while the other two each get different IPs. All three settings let me access the web config, but none of them give me Samba access: Windows usually freezes, then reports that it cannot connect. I'd rather not wipe a box to do a dedicated install; is there a way I can get this VM working, or are there better LAMP VMs out there? This one came already working and set up with VMware Player, so I thought it would be perfect... Thanks,
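
    Since the VM's console works even when its network doesn't, a first sketch is to log in there and install the SSH server directly, then use NAT (which needs no cooperation from the surrounding LAN) and connect to the address the guest reports:

        sudo apt-get update
        sudo apt-get install openssh-server
        ifconfig    # note the guest's IP, then scp files from the host to it

    If apt-get itself fails under NAT, that points at DNS inside the guest rather than at the VM image, which is worth checking in /etc/resolv.conf before abandoning the appliance.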

  • Route forwarded traffic through eth0 but local traffic through tun0

    - by Ross Patterson
    I have an Ubuntu 12.04/Zentyal 2.3 server configured with the WAN NATed on eth0, local interfaces eth1 and wlan0 bridged as br1 (on which DHCP runs), and an OpenVPN connection on tun0. I only need the VPN for some things running on the gateway itself, and I need to make sure that everything running on the gateway goes through the VPN's tun0.

        root:~# route
        Kernel IP routing table
        Destination     Gateway     Genmask         Flags  Metric  Ref  Use  Iface
        default         gw...       0.0.0.0         UG     100     0    0    eth0
        link-local      *           255.255.0.0     U      1000    0    0    br1
        192.168.1.0     *           255.255.255.0   U      0       0    0    br1
        A.B.C.0         *           255.255.255.0   U      0       0    0    eth0

        root:~# ip route
        169.254.0.0/16 dev br1 scope link metric 1000
        192.168.1.0/24 dev br1 proto kernel scope link src 192.168.1.1
        A.B.C.0/24 dev eth0 proto kernel scope link src A.B.C.186

        root:~# ip route show table main
        169.254.0.0/16 dev br1 scope link metric 1000
        192.168.1.0/24 dev br1 proto kernel scope link src 192.168.1.1
        A.B.C.0/24 dev eth0 proto kernel scope link src A.B.C.D

        root:~# ip route show table default
        default via A.B.C.1 dev eth0

    How can I configure routing (or otherwise) such that all forwarded traffic for other hosts on the LAN goes through eth0, but all traffic from the gateway itself goes through the VPN on tun0? Also, since the OpenVPN client changes routing on startup/shutdown, how can I make sure that everything running on the gateway itself loses all network access if the VPN goes down, rather than ever going out eth0?
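
    A hedged sketch using policy routing: locally-generated packets can be matched with an "iif lo" rule and sent to a dedicated table whose default route is tun0, while forwarded LAN traffic never matches the rule and keeps using the main table (and thus eth0). A blackhole route with a worse metric makes the gateway fail closed if OpenVPN removes its route. The table number, VPN_SERVER_IP, and LAN addresses are assumptions based on the question:

        # table 100: default via the VPN, LAN still reachable, fail closed otherwise
        ip route add 192.168.1.0/24 dev br1 table 100
        ip route add VPN_SERVER_IP via A.B.C.1 dev eth0 table 100   # so the tunnel itself can connect
        ip route add default dev tun0 table 100
        ip route add blackhole default metric 1000 table 100

        # locally-originated traffic (matched by "iif lo") uses table 100
        ip rule add iif lo lookup 100

    When tun0 goes down, the kernel drops the tun0 default route automatically, the blackhole route takes over, and nothing from the gateway leaks out eth0.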

  • PPTP server connection closes - Too much data?

    - by Sebastian Hoitz
    I set up a PPTP server for my company. However, every time another computer is connected to this server (i.e. our backup server) and a lot of data gets transferred, the connection to that computer closes. In the syslog on the PPTP server I find these messages:

        Apr 22 12:44:34 komola-chase pptpd[2581]: CTRL: Reaping child PPP[2583]
        Apr 22 12:44:34 komola-chase pppd[2583]: MPPE disabled
        Apr 22 12:44:34 komola-chase pppd[2583]: Connection terminated.
        Apr 22 12:44:34 komola-chase pppd[2583]: Exit.
        Apr 22 12:44:34 komola-chase pptpd[2581]: CTRL: Client 192.168.0.11 control connection finished
        Apr 22 12:55:11 komola-chase pptpd[2674]: GRE: xmit failed from decaps_hdlc: No buffer space available
        Apr 22 12:55:11 komola-chase pptpd[2674]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Apr 22 12:55:11 komola-chase pppd[2675]: Modem hangup
        Apr 22 12:55:11 komola-chase pppd[2675]: Connect time 23.0 minutes.

    Hopefully you can help me figure out what is wrong. As far as I can tell, no compression is enabled on the PPTP server (the nobsdcomp option is set). Thank you!

  • Socket options for a tcp server with 3G clients & frequent disconnections

    - by Joel
    I have a TCP server, written in Java, sending and receiving many short messages, from 500 bytes to 100 KB long. It's a chess and chat server, to keep it simple, and it runs on Debian 6. Half of the clients connect from 3G networks, and half over standard DSL. A portion of the 3G clients lose their connection pretty often; the error I get on both the server and the client socket is "Connection reset". I have come across the socket options page in the Oracle documentation, and I am wondering what I could tune there to lower the number of disconnections from 3G clients. I don't care about ping times or transfer rate, just about the TCP disconnections. I am not skilled enough to understand the impact of each setting, but I sort of understood that the TCP window is important, although I don't know exactly how. So I'm asking if anyone here has an idea. Thanks if you can help.
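
    Since "Connection reset" on mobile links is often a carrier NAT silently dropping an idle mapping, the usual mitigation is keepalives aggressive enough to hold the mapping open and to detect dead peers quickly. On the Java side that means socket.setKeepAlive(true) (or, better, an application-level ping message); on the Debian server, the kernel's keepalive timers can be tightened, a sketch:

        # probe after 60s idle instead of the 2-hour default; drop after 5 failed probes
        sysctl -w net.ipv4.tcp_keepalive_time=60
        sysctl -w net.ipv4.tcp_keepalive_intvl=10
        sysctl -w net.ipv4.tcp_keepalive_probes=5

    These only apply to sockets that enable SO_KEEPALIVE. The TCP window settings from the Oracle page affect throughput rather than connection survival, so they are probably not the knob to turn here.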

  • Windows xp mapped drives disconnect from Windows 7 on Peer to Peer network

    - by Kathryn Codo
    I have a 5-user peer-to-peer network: 2 XP machines, 3 Windows 7 machines, and a Windows 7 machine acting as the file server. I have no issues with the Windows 7 machines staying connected to the "server" via their mapped drives. However, once a day the 2 XP machines cannot reach the Windows 7 "server" via the mapped drives, and I must restart the "server" in order to see it and access the files via the mapped drives or Explorer. I have tried the persistent:yes option with NET USE on the XP machines, and also set a static IP address on the "server". I turned the firewall off on the "server" with no difference, so I turned it back on. No interruption to the internet occurs when this happens; we just can't see the server. Again, the Windows 7 users are unaffected. I have gone into the advanced network settings on the "server" and added read/write permissions for Everyone as well as for the users of the XP machines. I have double-checked that we are all in the same WORKGROUP. I'm at a loss. It seems to happen around midday, and I've not been able to find any activity that happens at that time (like someone plugging in a thumbdrive). Any suggestions would be greatly appreciated, as my client is running old XP software he no longer has the disks for and needs for his engineering firm.
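
    One Windows-side setting worth ruling out: the Server service on the Windows 7 "server" disconnects idle sessions after a set time by default, and XP clients are sometimes worse at transparently reconnecting than Windows 7 clients. Disabling the idle disconnect is one command, run in an elevated prompt on the "server" (-1 means never disconnect):

        net config server /autodisconnect:-1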

  • OS X AFP shares and access

    - by gbrandt
    I am running 10.5.6 Client as a mini server and am having problems with AFP shares. All clients are OS X 10.5.7. I have created three users for 'File Sharing' only on the 'server', placed these users into specific groups, and created ACLs to give each group access to certain shares. Two of those users can read and write to any share; one user cannot write to the shares, with different results:

    - When copying a directory, only the directory is created; no files inside are copied, and the OS gives no errors.
    - When copying a single file, I get three dialogs: "You may need to enter the name and password for an administrator on this computer to change the item named 'xxxx'"; "The item 'xxxxx' contains one or more items you do not have permission to read. Do you want to copy the items you are allowed to read?"; and "The operation cannot be completed because you do not have sufficient privileges for some of the items." With the single file, a file gets created on the server but is empty.

    My ACL for the group this user belongs to is:

        0: group:projectmembers allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit
        1: group:informationtechnology inherited allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit
        2: group:executive inherited allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit
        3: group:everyone inherited deny list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit

    Users 1 and 2 belong to informationtechnology, executive, and projectmembers, and they can read and write freely on the share. User 3 belongs only to projectmembers and cannot. I have read that this is a UID issue; however, users 1 and 2 do not have matching UIDs across clients and server and they still work, so I don't think that's the case. Any ideas?
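
    Before digging further into the ACL itself, it may be worth confirming that the server actually considers user 3 a member of projectmembers, since an allow entry never matches if the group membership isn't resolved. A couple of diagnostic commands to run on the server (the user name is a placeholder):

        id user3
        dscl . -read /Groups/projectmembers GroupMembership

    If the membership is missing there, fixing the group record is the issue; if it is present, the next suspect is entry order and inheritance, since the non-inherited entry 0 was presumably added separately from the inherited ones.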

  • Synergy on macbook with osx mavericks wifi connection

    - by user332956
    I'm trying to set up Synergy with my MacBook Pro running OS X 10.9.3 as a client and my Windows 7 desktop as the server, and I'm having some pretty bad connection problems when I try to use the Mac. Every couple of seconds the mouse or the keyboard stops working entirely, then comes back. I ran some tests and found that the ping from my desktop to my Mac is very high every third ping or so (1000+ ms), and sometimes even times out; pinging my desktop from the Mac gives reasonably low times for every ping. I believe this is a power-saving feature of Mavericks, and I have found a way around it: continually pinging my router from the Mac, which keeps the wifi card from going to sleep. I'm using this right now to type this up over Synergy and have had zero issues. Has anyone else run into this and found a better solution? So far, I think my best bet would be to buy an ethernet adapter, but I'd rather not have yet another cable running across my desk.
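
    For what it's worth, the workaround can at least be made unattended: a background ping at one-second intervals keeps the wifi radio out of its power-saving state without leaving a terminal open (the router address below is an assumption):

        nohup ping -i 1 192.168.1.1 >/dev/null 2>&1 &

    Wrapping that command in a LaunchAgent would make it survive reboots; a wired ethernet adapter remains the clean fix.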

  • Change XRDP keyboard layout to en-gb Ubuntu 12.04

    - by Earl Sven
    Does anybody know how to change the keyboard layout to en-gb in an XRDP session on Ubuntu 12.04? I am using mstsc.exe to connect to an XRDP server hosting an XVNC session, but I cannot work out how to apply the UK keyboard layout. A bit of googling yielded these instructions, which let me change the keymap; however, using the keymap file I downloaded from the linked site, I lose the ability to use the arrow keys, Home/End, etc. Comparing that file with the standard one, there are substantially more differences than I would expect, considering the similarity between the layouts. I only have RDP access to the box, so I don't seem to be able to generate a new layout per the instructions above; maybe that's a local-console-only thing? I can't change the RDP client or the RDP server, as they are my only access to the system and I don't have local console access, but I do have root privileges on the OS. Any thoughts?

    Edit: I have found http:// xrdp.sourceforge.net/documents/keymap/newkeymap.html (apologies for not typing the link properly, but the antispam filter won't let me post more than 2 links), the documentation on the XRDP SourceForge page describing the keymap file format. It indicates the values in the keymap files are unicode, 0x64 etc., but the files already on my system seem to use a different format, 0:0 or 65307:27 etc. Does anybody know what the difference is?
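
    Generating the keymap should not need the physical console, only a running X display, and the XVNC session provides one. A hedged sketch using the xrdp-genkeymap tool from the linked instructions (0809 is the RDP layout code for en-gb):

        # inside the XRDP/XVNC session:
        setxkbmap gb
        sudo xrdp-genkeymap /etc/xrdp/km-0809.ini
        sudo service xrdp restart

    This also sidesteps the downloaded file that broke the arrow keys. As for the two formats: the 65307:27-style pairs appear to be decimal keysym:unicode values generated per key by xrdp-genkeymap, rather than the hex notation the documentation quotes, which is another reason regenerating beats hand-editing.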

  • AD server within another network - DNS issues

    - by Harry Muscle
    Here's a quick summary of the environment I support: we have a domain (domain A) with about 20 client computers. The domain server for domain A and all of its clients sit within the network infrastructure of a larger domain (domain B), and all the computers get their network settings via DHCP from domain B's servers. I have no control over, and am unable to make changes to, anything to do with domain B.

    The problem: currently, for my domain A clients to resolve the domain server and the shares on it, their DNS server IP address is set to domain A's domain server (via the default GPO). Unfortunately, when a laptop (Windows or Mac) gets taken home, it is still looking for the domain server as its DNS server and obviously can't resolve internet names correctly outside our environment. Ideally I need a solution where the machines use domain A's domain server as their DNS when inside the office, and use whatever DNS server DHCP gives them when they are outside the office. However, since I have no control over the office DHCP server, I'm not sure how this can be accomplished. Any help and advice anyone can offer is highly appreciated.

    Thanks, Harry

    P.S. The solution I'm trying to find needs to require no involvement from the user.
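
    Half of the problem can be solved on the server you do control: if domain A's DNS server forwards everything it is not authoritative for to domain B's resolvers, clients inside the office get both domain A resolution and normal internet resolution from the one configured server. Assuming the domain controller runs the Windows DNS role, a sketch (the forwarder IPs are placeholders for domain B's DHCP-supplied DNS servers):

        dnscmd /ResetForwarders 10.0.0.1 10.0.0.2

    That still leaves roaming laptops pointing at an unreachable DNS server when off-site; listing a public resolver as the secondary DNS in the GPO is a common workaround, but the fallback adds timeout delays off-site, so it is an imperfect fix rather than a clean one.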
