Search Results

Search found 21091 results on 844 pages for 'dsl client'.

  • How do I set up a Windows NFS share so that I can view its contents on Linux?

    - by hewhocutsdown
    My NFS server is a Windows XP SP3 box with Microsoft Windows Services for UNIX installed. I have a share configured under C:\NFS with the share name NFS and ANSI encoding. Anonymous access is enabled, with the anonymous UID/GID set to 0/0. Additionally, I've set ALL MACHINES to Read-Write and checked the checkbox to allow root access.

    My first NFS client is an Ubuntu 10.04 box with nfs-common installed. Running

        sudo mount -t nfs 1.1.1.1:/NFS /home/user/NFS

    succeeds, but when I attempt to view the folder (even as root), it tells me that I do not have the permissions necessary to view its contents.

    My second NFS client is an IBM iSeries box running OS/400 V5R3. I used the mount command below, which also mounts successfully:

        MOUNT TYPE(*NFS) MFS('1.1.1.1:/NFS') MNTOVRDIR('/PARENT/NFS')
          OPTIONS('rw,nosuid,retry=5,rsize=8096,wsize=8096,timeo=20,retrans=2,acregmin=30,acregmax=60,acdirmin=30,acdirmax=60,soft')
          CODEPAGE(*BINARY *ASCII)

    Attempting to WRKLNK '/PARENT/NFS' and use Option 5 to enter the directory yields a Not authorized to object error - even though I am a security officer with the *ALLOBJ special authority. My gut says that it's a problem with the Windows share, but I don't know what it could be. Do you have any suggestions?
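    When an SFU share mounts but denies reads even to root, the usual suspects are the NFS protocol version and the anonymous-UID mapping. A minimal diagnostic sketch from the Ubuntu side (server IP as above; the mount options are a suggestion, not a known fix):

        # Confirm the export and its access list are what the server thinks they are
        showmount -e 1.1.1.1

        # SFU only speaks NFSv2/v3; pin the version and skip locking
        sudo mount -t nfs -o nfsvers=3,tcp,nolock 1.1.1.1:/NFS /home/user/NFS

        # See what ownership/permissions the server actually presents
        ls -ln /home/user/NFS

    If ls -ln shows UIDs other than 0 with no read bits, the problem is on the Windows side (NTFS ACLs on C:\NFS rather than the share settings).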

  • Error setting up Data Protection Manager 2010 Agents / Network "Unauthenticated" in network settings

    - by Bowsa
    I'm not sure if the two are connected, but I suspect they are. Basically I'm trying to set up Data Protection Manager 2010 on a fresh install of Server 2008 R2 in an SBS 2003 domain. Everything went fine until I tried to install agents across the network. Upon clicking Add, I get the following error message:

        Unable to connect to the Active Directory Domain Services Database. Make sure that the DPM server is a member of a domain and that the controller is running. Also verify that there is network connectivity between the DPM server and the domain controller. ID: 7

    As usual (worryingly), the MSDN support for 2010 products is nearly non-existent; clicking the error ID simply gives a page-not-found error. So after 2 days of Googling and trying various fixes (DNS settings, adding permissions to AD objects, rejoining the domain and many more) I thought I'd ask here in the hope that someone out there has had this issue before. Any help greatly appreciated!

    Some further info: firewalls are disabled on the Server 2008, SBS, and client machines. Manually installing the agent and adding the client in also fails, as the DPM server tries to contact the DC first.

    Edit: I tried creating a new protection group instead, and it gives a different error upon adding the machines:

        Following machines are not found in AD: COMPUTERNAME.COMPANYNAME.LOCAL

    Is there a certain directory structure it follows in AD?
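    Error ID 7 is a generic "can't talk to AD", so it is worth pinning down whether the DPM box can locate a DC and resolve the domain's service records at all. A hedged first pass from a command prompt on the DPM server (substitute the real domain for COMPANYNAME.LOCAL):

        rem Can the server locate a domain controller?
        nltest /dsgetdc:COMPANYNAME.LOCAL

        rem Does DNS return the DC locator SRV records?
        nslookup -type=SRV _ldap._tcp.dc._msdcs.COMPANYNAME.LOCAL

    If the SRV lookup fails, the DPM server is pointed at a DNS server that doesn't host the AD zone (a common state after a fresh 2008 R2 install in an SBS domain), and fixing the NIC's DNS entry usually clears both the agent error and the "Unauthenticated" network status.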

  • Backup Exec 10 - Network connection to the remote agent has been lost

    - by jherlitz
    Okay, so I have 4 remote offices, all running off of a 3Mb Ethernet connection. Two sites are part of a WAN and 2 sites are using 3Mb connections over a site-to-site tunnel. I am using Backup Exec 2010, and I have the remote agent installed on all the remote servers. For the past few weeks, the jobs for the two sites running over the site-to-site tunnel have been failing with the following error message:

        The network connection to the Backup Exec Remote Agent has been lost. Check for network errors.

    We used to be on a DSL site-to-site tunnel; we then changed to the 3Mb Ethernet connection using a site-to-site tunnel. I need to find out whether it has been failing ever since we changed, or only recently. Backup Exec support is telling me it is a network issue, but my connection to the server is solid - we don't have any issues or outages. So I am baffled as to why this continues to fail, and why just those two sites. Any advice?
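    One cause worth ruling out when agent connections die only across a tunnel is a reduced path MTU: the tunnel's encapsulation overhead can silently drop the large packets a backup stream produces while pings and interactive traffic sail through. A hedged check from the media server (remote-server is a placeholder):

        rem 1472 = 1500-byte Ethernet MTU minus 28 bytes of IP/ICMP header
        ping -f -l 1472 remote-server

        rem If that reports fragmentation needed, step the size down until it passes
        ping -f -l 1400 remote-server

    If the largest unfragmented size is well under 1472, lowering the MTU on the tunnel endpoints (or clamping TCP MSS there) is the usual fix.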

  • Vagrant doesn't detect chef-solo unless re-installed

    - by nightowl
    I am using Vagrant to test my Chef recipes in Amazon AWS, and I am encountering an irritating issue: I initially assumed that Vagrant would install Chef itself (as it does when using VirtualBox as the provider) but it seems that this needs to be done using the cloud-init script. However, even after I successfully installed the chef gem via cloud-init I was still getting the following error:

        The chef binary (either chef-solo or chef-client) was not found

    A quick Google search for this error suggested three probable causes: Chef had failed to install; it had installed, but the directory was not in the $PATH environment variable; or it had installed and was in the $PATH, but with incorrect permissions. I logged in and double-checked: chef-solo and chef-client were installed, the PATH variable for the user, sudo and root all included /usr/local/bin, and permissions were all fine.

    I managed to solve this problem by uninstalling and reinstalling the gem using sudo gem install chef. I don't understand why this should resolve the issue, and it is a bit of a problem if I have to SSH into a test box and manually install the gem every time. Does anyone have any suggestions why this might be happening?
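    One plausible explanation (an assumption, not confirmed by the post): the provisioner runs Chef through sudo, and sudo's secure_path on Ubuntu replaces the caller's PATH, so a gem binary visible in an interactive shell can still be invisible to the provisioner until a root-owned install drops it somewhere secure_path covers. A quick way to test the theory on the box:

        # What the interactive shell sees
        echo $PATH
        which chef-solo

        # What the provisioner effectively sees
        sudo env | grep ^PATH=
        sudo which chef-solo

    If sudo which chef-solo comes back empty while the plain one succeeds, the cloud-init script just needs to install the gem as root (sudo gem install chef) or extend secure_path in /etc/sudoers.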

  • Traffic shaping on Linux with HTB: weird results

    - by DADGAD
    I'm trying to set up some simple bandwidth throttling on a Linux server, and I'm running into what seems to be very weird behaviour despite a seemingly trivial config. I want to shape traffic going to a specific client IP (10.41.240.240) to a hard maximum of 75kbit/s. Here's how I set up the shaping:

        # tc qdisc add dev eth1 root handle 1: htb default 1 r2q 1
        # tc class add dev eth1 parent 1: classid 1:1 htb rate 75Kbit
        # tc class add dev eth1 parent 1:1 classid 1:10 htb rate 75kbit
        # tc filter add dev eth1 parent 1:0 protocol ip prio 1 u32 match ip dst 10.41.240.240 flowid 1:10

    To test, I start a file download over HTTP from the said client machine and measure the resulting speed by looking at KB/s in Firefox. Now, the behaviour is rather puzzling: the download starts at about 10 Kbytes/s and proceeds to pick up speed until it stabilizes at about 75 Kbytes/s (kilobytes, not kilobits as configured!). Then, if I start several parallel downloads of that very same file, each download stabilizes at about 45 Kbytes/s; the combined speed of those downloads thus greatly exceeds the configured maximum. Here's what I get when probing tc for debug info:

        [root@kup-gw-02 /]# tc -s qdisc show dev eth1
        qdisc htb 1: r2q 1 default 1 direct_packets_stat 1
         Sent 17475717 bytes 1334 pkt (dropped 0, overlimits 2782 requeues 0)
         rate 0bit 0pps backlog 0b 12p requeues 0

        [root@kup-gw-02 /]# tc -s class show dev eth1
        class htb 1:1 root rate 75000bit ceil 75000bit burst 1608b cburst 1608b
         Sent 14369397 bytes 1124 pkt (dropped 0, overlimits 0 requeues 0)
         rate 577896bit 5pps backlog 0b 0p requeues 0
         lended: 1 borrowed: 0 giants: 1938
         tokens: -205561 ctokens: -205561

        class htb 1:10 parent 1:1 prio 0 rate 75000bit ceil 75000bit burst 1608b cburst 1608b
         Sent 14529077 bytes 1134 pkt (dropped 0, overlimits 0 requeues 0)
         rate 589888bit 5pps backlog 0b 11p requeues 0
         lended: 1123 borrowed: 0 giants: 1938
         tokens: -205561 ctokens: -205561

    What I can't for the life of me understand is this: how come I get "rate 589888bit 5pps" with a config of "rate 75000bit ceil 75000bit"? Why does the effective rate get so much higher than the configured rate? What am I doing wrong? Why is it behaving the way it is? Please help, I'm stumped. Thanks, guys.
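    A hedged reading of those stats: the giants: 1938 counters say the kernel is handing HTB packets far larger than the MTU, which happens when the NIC does TCP segmentation offload - HTB then accounts one "super-packet" as one MTU-sized packet and undershoots wildly. A sketch of the usual remedy (interface name as above; treat the numbers as a starting point):

        # Disable segmentation offload so HTB sees real wire-size packets
        ethtool -K eth1 tso off gso off gro off

        # Rebuild the tree with the default pointing at the leaf class
        tc qdisc del dev eth1 root
        tc qdisc add dev eth1 root handle 1: htb default 10
        tc class add dev eth1 parent 1: classid 1:1 htb rate 75kbit ceil 75kbit
        tc class add dev eth1 parent 1:1 classid 1:10 htb rate 75kbit ceil 75kbit
        tc filter add dev eth1 parent 1:0 protocol ip prio 1 u32 match ip dst 10.41.240.240 flowid 1:10

    The original default 1 also sends unclassified traffic to the inner class 1:1 rather than a leaf, which is worth fixing while you're there.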

  • RTL8192SU + RTL8191E Linux Issue Installing Driver

    - by s32ialx
    OK, I've read tons of forums about getting the onboard RTL8191E and the USB RTL8192SU working (the difference being that U = USB); both are 802.11n adapters. I have a Toshiba L500D-00T that came pre-installed with Windows Vista x64 Home Premium, and I've obtained the free Windows 7 x64 Home Premium upgrade. The onboard wifi card can't hold a stable connection for more than 20 minutes in Windows, but the USB one is amazing.

    The problem is that I've tried both Ubuntu and Mandriva without success. The onboard device is detected and actually shows up, but no wireless networks are found - it claims no SSIDs are broadcasting, which I know is not true, since I'm running a 2Wire Bell DSL modem with built-in wifi and a Linksys WRT54G with DD-WRT firmware, and both are broadcasting fine.

    Why don't I use the USB adapter? In Mandriva Linux Control Center 2010.0 it shows up under Other/Unknown as RTL8191S WLAN Adapter, and in the right pane this shows up:

        Identification
          Vendor: Realtek
          Description: RTL8191S WLAN Adapter
          Media class:
        Connection
          Bus: USB
          Bus PCI #: 1
          PCI device #: 5
          Vendor ID: 0x0bda
          Device ID: 0x8172
          Sub vendor ID: 0x0000
          Sub device ID: 0x0000
        Misc
          Module: rtl819xU

    In the Mandriva hardware device manager it shows up as unknown, but indicates that it's Realtek with an 8192 chipset - with no option to install a driver. When I do a make in a terminal, I get this error and have no clue what it means:

        [root@John-PC rtl8192se_linux_2.6.0010.1020.2009_64bit]# make
        make: *** /lib/modules/2.6.31.12-desktop-3mnb/build: No such file or directory.  Stop.
        make: *** [all] Error 2

    Any help appreciated. And just in case: I'm currently running Mandriva Spring 2010 Free.
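    That make error just means there is no kernel build tree for the running kernel (2.6.31.12-desktop-3mnb), so the driver source has nothing to compile against. A hedged fix on Mandriva (the package name is the usual one for the desktop kernel flavour; worth double-checking with urpmq before installing):

        # Install the headers/build tree matching the desktop kernel
        urpmi kernel-desktop-devel-latest

        # The build symlink should now resolve
        ls -l /lib/modules/$(uname -r)/build

        # Then retry the driver build from the Realtek source directory
        make

    The key point is that /lib/modules/$(uname -r)/build must exist before make can work; the rest follows whatever the Realtek tarball's README prescribes.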

  • Nginx + PHP-FPM Timeouts, almost zero load consumption?

    - by javipas
    I've got a server running on a Linode with Ubuntu 10.04 LTS, nginx 0.7.65, MySQL 5.1.41 and PHP 5.3.2 with PHP-FPM. There is a WordPress blog on it, recently updated to WordPress 3.2.1. I have made no changes to the server (except updating WordPress), and while it had been running fine, a couple of days ago I started having downtime. I tried to solve the problem, and checking the error_log I saw many timeouts and messages that seemed to be related to them. The server is currently logging this kind of error:

        2011/07/14 10:37:35 [warn] 2539#0: *104 an upstream response is buffered to a temporary file /var/lib/nginx/fastcgi/2/00/0000000002 while reading upstream, client: 217.12.16.51, server: www.mydomain.com, request: "GET /page/2/ HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000", host: "www.mydomain.com", referrer: "http://www.mydomain.com/"

        2011/07/14 10:40:24 [error] 2539#0: *231 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 46.24.245.181, server: www.mydomain.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.mydomain.com", referrer: "http://www.google.es/search?sourceid=chrome&ie=UTF-8&q=mydomain"

    I even saw a previous Server Fault discussion with a possible solution: edit /etc/php/etc/php-fpm.conf and set request_terminate_timeout=30s instead of ;request_terminate_timeout=0. The server worked for some hours, and then broke again. I edited the file again to leave it as it was and restarted php-fpm (service php-fpm restart), but no luck: the server worked for a few minutes and went back to the problem over and over. The strange thing is, although the services are running, htop shows there is no CPU load (see image) and I really don't know how to solve the problem. The config files are on pastebin: the php-fpm.conf file is here, the /etc/nginx/nginx.conf is here, and the /etc/nginx/sites-available/www.mydomain.com is here. Please help :(
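    Upstream timeouts with an idle CPU usually mean every PHP-FPM worker is stuck (often on a slow MySQL query or a remote HTTP call inside WordPress) rather than busy, so nginx waits on a pool that never answers. Two hedged places to look, with illustrative values rather than known-good ones:

        ; php-fpm.conf - cap runaway requests and log the slow ones
        request_terminate_timeout = 60s
        request_slowlog_timeout = 20s
        slowlog = /var/log/php-fpm-slow.log

        # nginx http block - give slow responses headroom, buffer in memory
        fastcgi_read_timeout 120s;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;

    The slowlog is the interesting part: it dumps a stack trace for any request past the threshold, which typically names the plugin or query that is wedging the workers.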

  • Networking two Virtual PCs with one VPC as DHCP server

    - by vivek
    My host OS is Windows XP Professional. The host has a real network connection via DSL, and I created a second network connection using the Microsoft Loopback Adapter, with Internet Connection Sharing enabled. The Loopback Adapter has an IP address of 192.168.0.1.

    I have one Virtual PC running Windows Server 2003. I have set up the network connection on this VPC to use the Microsoft Loopback Adapter, and set up the VPC as the domain controller, DNS server and DHCP server, with a static IP address of 192.168.0.2 (on the same subnet as the Loopback Adapter).

    I have a second Virtual PC, also running Windows Server 2003. The network connection on this VPC is set to "Local Only". I want this VPC to get its IP address from the first VPC, on which I set up the DHCP server.

    In short: the two VPCs should form a network with one VPC acting as domain controller, DNS server and DHCP server; the second VPC should get its IP address from the first and be a member of its domain. When I try to make the second VPC get its IP address from the first, I do not succeed. Can somebody post some suggestions on how to go about this?
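    One likely snag (an assumption from the description, not a verified diagnosis): in Virtual PC, a guest set to "Local Only" sits on an isolated guest-to-guest network, while the first VPC is bound to the Loopback Adapter - so the second guest's DHCP broadcasts may never reach the first. Pointing both guests at the same adapter and then testing is a quick way to check:

        rem On the DHCP server VPC: confirm the server is authorized in AD
        netsh dhcp show server

        rem On the second VPC, after switching it to the Loopback Adapter:
        ipconfig /release
        ipconfig /renew
        ipconfig /all

    An unauthorized DHCP server in an AD domain also silently declines to lease at all, which is why the netsh check is worth doing first.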

  • HDFS configuration

    - by Ananymous
    I am a newbie, trying to set up an HDFS system to serve my data at my lab (I don't plan to use MapReduce). So far I have read the cluster setup documentation, but I am still confused. Several questions:

    Do I need to have a secondary namenode? There are 2 files, masters and slaves. Do I really need these 2 files even though I just want HDFS? If I need them, what should go in there - I assume my namenode in masters and datanodes in slaves? Do I need slave nodes? What configuration files are needed for the namenode, secondary namenode, datanodes and clients (I assume core-site.xml is needed for all 4)?

    In addition, can someone suggest a good configuration model? Sample configurations for the namenode, secondary namenode, a datanode, and a client would be very helpful. I am getting confused because it seems most of the documentation assumes I want to use MapReduce, which isn't the case.
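    A minimal HDFS-only sketch for the classic Hadoop 0.20/1.x layout the question implies (hostnames and paths are placeholders; property names changed in later 2.x releases):

        <!-- core-site.xml: identical on namenode, datanodes and clients -->
        <configuration>
          <property>
            <name>fs.default.name</name>
            <value>hdfs://namenode-host:9000</value>
          </property>
        </configuration>

        <!-- hdfs-site.xml: on namenode and datanodes -->
        <configuration>
          <property>
            <name>dfs.name.dir</name>
            <value>/data/hdfs/name</value>
          </property>
          <property>
            <name>dfs.data.dir</name>
            <value>/data/hdfs/data</value>
          </property>
          <property>
            <name>dfs.replication</name>
            <value>3</value>
          </property>
        </configuration>

    With no MapReduce, mapred-site.xml and the jobtracker can be skipped entirely; masters and slaves are only convenience lists read by the start-dfs.sh helper script, and a secondary namenode is optional but prudent since it compacts the namenode's edit log.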

  • Sonicwall NSA 240, Configured for LAN and DMZ, X0 and X2 on same switch - ping issues

    - by Klaptrap
    Our SonicWALL vendor supplied and networked the NSA 240 when we required a DMZ in our infrastructure. This was configured and appeared correct, although VPN users periodically dropped DNS and Terminal Services. The vendor could not resolve this, and so the call was escalated to SonicWALL. The SonicWALL support engineer took a look and concluded that the X0 (LAN) and X2 (DMZ) interfaces were cabled to the same switch, and that this is the issue. What he observed is that a ping request from a connected VPN user to the LAN domain controller is forwarded (X0) from the VPN client IP to the DC IP, but the ping response from the DC IP to the VPN client IP arrives on X2. A copy of the log is detailed below:

        02/02/2011 10:47:49.272  X1*(hc)  X0   192.168.1.245  192.168.1.8    IP ICMP -- FORWARDED
        02/02/2011 10:47:49.272  --       X0*  192.168.1.245  192.168.1.8    IP ICMP -- FORWARDED
        02/02/2011 10:47:49.272  X2*(i)   --   192.168.1.8    192.168.1.245  IP ICMP -- Received

        X0 - LAN
        X1 - WAN
        X2 - DMZ

    The SonicWALL engineer concluded that we either need a separate switch for X2, or we use a VLAN switch for both. I am the company's software engineer and we have yet to hear back from the vendor, so I am lost at sea at the moment. Do we need to buy this additional equipment, or is there another configuration on the NSA 240 we can use?

  • Get Internal IP Address From DHCP Hostname

    - by ell
    I would like to get the internal IP address of one of the computers on my network. The reason is that I have a little home server box downstairs, but every time I want to SSH into it I have to open my router configuration, go to the DHCP client table and look up its IP address. I would like to be able to type ssh ell-server instead of ssh 192.168.1.105 or whatever it happens to be. My network configuration is like so:

    - A router downstairs that is connected to the Internet and runs a DHCP server.
    - My server (ell-server), a headless PC connected to the router via Ethernet cable, running Ubuntu 11.04 Server Edition.
    - My laptop upstairs (ell-laptop), running Ubuntu 11.10 Desktop Edition, connected wirelessly.
    - Other (irrelevant) computers - 2 x Windows XP, 1 x Xubuntu - all connected with cables. (It seemed to me the method of connection isn't useful information, but I put it in anyway - just in case. If I have missed any information please tell me.)

    Do I have to run a DNS server on one of my computers? If so, which one? And does that mean I will have to run a DDNS client on each computer? Thanks in advance, ell.
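    No DNS server is strictly needed for a network this small. Two lightweight sketches (the IP and hostname below are the examples from the question, not known values):

        # Option 1: reserve the server's address in the router's DHCP settings,
        # then pin the name on the laptop
        echo '192.168.1.105  ell-server' | sudo tee -a /etc/hosts

        # Option 2: multicast DNS - zero config, provided avahi-daemon is
        # installed on the server (it may not be, on Server Edition)
        ssh ell-server.local

    A third route, if the router's firmware supports it, is simply enabling its built-in local DNS so DHCP client names resolve automatically.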

  • Use Apache authentication + authorization to control access to Subversion subdirectories

    - by Stefan Lasiewski
    I have a single SVN parent directory at /var/svn/ with a few repository subdirectories. Staff must be able to access the top-level directory and all subdirectories within it, but I want to restrict access to subdirectories using alternate htpasswd files. This works for our staff:

        <Location />
          DAV svn
          SVNParentPath /var/svn

          AuthType Basic
          AuthBasicProvider ldap
          # mod_authnz_ldap
          AuthzLDAPAuthoritative off
          AuthLDAPURL "ldap.example.org:636/ou=people,ou=Unit,ou=Host,o=ldapsvc,dc=example,dc=org?uid?sub?(objectClass=PosixAccount)"
          AuthLDAPGroupAttribute memberUid
          AuthLDAPGroupAttributeIsDN off
          Require ldap-group cn=staff,ou=PosixGroup,ou=Unit,ou=Host,o=ldapsvc,dc=example,dc=org
        </Location>

    Now, I am trying to restrict access to a subdirectory with a separate htpasswd file, like this:

        <Location /customerA>
          DAV svn
          SVNParentPath /var/svn

          # mod_authn_file
          AuthType Basic
          AuthBasicProvider file
          AuthUserFile /usr/local/etc/apache22/htpasswd.customerA
          Require user customerA
        </Location>

    I can browse to this folder fine with Firefox and curl:

        curl https://svn.example.org/customerA/ --user customerA:password

    But I cannot check out this SVN repository:

        $ svn co https://svn.example.org/customerA/
        svn: Repository moved permanently to 'https://svn.example.org/customerA/'; please relocate

    And on the server logs, I get this strange error:

        # httpd-access.log
        192.168.19.13 - -         [03/May/2010:16:40:00 -0700] "OPTIONS /customerA HTTP/1.1" 401 401
        192.168.19.13 - customerA [03/May/2010:16:40:00 -0700] "OPTIONS /customerA HTTP/1.1" 301 244

        # httpd-error.log
        [Mon May 03 16:40:00 2010] [error] [client 192.168.19.13] Could not fetch resource information.  [301, #0]
        [Mon May 03 16:40:00 2010] [error] [client 192.168.19.13] Requests for a collection must have a trailing slash on the URI.  [301, #0]

    My question: can I restrict access to Subversion subdirectories using Apache access controls? DocumentRoot is commented out, so it's not clear that the FAQ at http://subversion.apache.org/faq.html#http-301-error applies.
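    Defining a second DAV svn <Location> inside the first one's URL space is what tends to produce these 301s; the usual pattern is a single <Location> with per-path rules handed to mod_authz_svn. A hedged sketch (the access-file path and group members are placeholders):

        <Location />
          DAV svn
          SVNParentPath /var/svn
          # ... LDAP authentication as above ...
          AuthzSVNAccessFile /usr/local/etc/apache22/svn-access
        </Location>

        # /usr/local/etc/apache22/svn-access
        [groups]
        staff = alice, bob

        [/]
        @staff = rw

        [customerA:/]
        customerA = rw
        @staff = rw

    The catch is that mod_authz_svn works from one user namespace, so the customerA account would need to exist in a provider Apache consults for that Location (e.g. stacking file and ldap in AuthBasicProvider) rather than in a per-directory htpasswd of its own.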

  • Need to have access to my office PC from my laptop hopping through two VPN servers

    - by Andriy Yurchuk
    Here's an illustration of what I have (http://clip2net.com/s/2fvar):

    - My office PC, with its IP of 123.45.e.f.
    - The office VPN, which I will connect to from my VPS to get to my office PC.
    - My own VPS, which I use as: (a) a client to connect to the office VPN (through vpnc, which creates a tun0 with the IP address 123.45.c.d); and (b) a VPN server my laptop can connect to (OpenVPN, tun1, 10.8.0.1).
    - My own laptop, which I will use as a VPN client to connect to the VPS OpenVPN server (this creates a tun0 with the IP address 10.8.0.2).

    What I need is to allow my laptop to reach at least my office PC, but preferably the whole 123.45.x.x subnet. Please advise on how best to configure OpenVPN, routing, iptables or whatever else is needed on my VPS so that my laptop can gain access to my office PC.

    P.S. The reason I'm hopping through my VPS is that, when connected to the office WiFi, I cannot access my office PC and I cannot connect to the office VPN (which is the other way to access my office PC). The only way I have to access my PC from the office WiFi is hopping through an outside network.
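    A minimal sketch of the VPS side, assuming vpnc's tun0 already carries routes into the office range (the interface names and the /16 mask are assumptions):

        # Forward between the two tunnels and hide laptop traffic behind the VPS's VPN IP
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o tun0 -j MASQUERADE

        # In the OpenVPN server config: hand the office route to the laptop
        push "route 123.45.0.0 255.255.0.0"

    The MASQUERADE is what spares you from adding a return route for 10.8.0.0/24 on the office side, which you almost certainly can't do.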

  • SSH multi-hop connections with netcat mode proxy

    - by aef
    Since OpenSSH 5.4 there is a new feature called netcat mode, which binds STDIN and STDOUT of the local SSH client to a TCP port accessible through the remote SSH server. This mode is enabled by simply calling

        ssh -W [HOST]:[PORT]

    Theoretically this should be ideal for use in the ProxyCommand setting in per-host SSH configurations, which was previously often used with the nc (netcat) command. ProxyCommand allows you to configure a machine as a proxy between your local machine and the target SSH server, for example if the target SSH server is hidden behind a firewall. The problem is that, instead of working, it throws a cryptic error message in my face:

        Bad packet length 1397966893.
        Disconnecting: Packet corrupt

    Here is an excerpt from my ~/.ssh/config:

        Host *
            Protocol 2
            ControlMaster auto
            ControlPath ~/.ssh/cm_socket/%r@%h:%p
            ControlPersist 4h

        Host proxy-host proxy-host.my-domain.tld
            HostName proxy-host.my-domain.tld
            ForwardAgent yes

        Host target-server target-server.my-domain.tld
            HostName target-server.my-domain.tld
            ProxyCommand ssh -W %h:%p proxy-host
            ForwardAgent yes

    As you can see, I'm using the ControlMaster feature so I don't have to open more than one SSH connection per host. The client machine I tested this with is Ubuntu 11.10 (x86_64); both proxy-host and target-server are Debian Wheezy Beta 3 (x86_64) machines. The error happens when I call ssh target-server. When I call it with the -v flag, here is what I get additionally:

        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /home/aef/.ssh/config
        debug1: Applying options for *
        debug1: Applying options for target-server.my-domain.tld
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: auto-mux: Trying existing master
        debug1: Control socket "/home/aef/.ssh/cm_socket/[email protected]:22" does not exist
        debug1: Executing proxy command: exec ssh -W target-server.my-domain.tld:22 proxy-host.my-domain.tld
        debug1: identity file /home/aef/.ssh/id_rsa type -1
        debug1: identity file /home/aef/.ssh/id_rsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_dsa type -1
        debug1: identity file /home/aef/.ssh/id_dsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa-cert type -1
        debug1: permanently_drop_suid: 1000
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.0p1 Debian-3
        debug1: match: OpenSSH_6.0p1 Debian-3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        Bad packet length 1397966893.
        Disconnecting: Packet corrupt
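    One frequently reported culprit for Bad packet length under ProxyCommand is the multiplexing setup: the inner ssh -W inherits the Host * ControlMaster/ControlPersist options, and the tunnelled byte stream can get mixed up with mux traffic. A hedged tweak that isolates the proxy hop from multiplexing:

        Host target-server target-server.my-domain.tld
            HostName target-server.my-domain.tld
            # -S none disables connection sharing for the inner hop only
            ProxyCommand ssh -S none -W %h:%p proxy-host
            ForwardAgent yes

    If the error persists even with -S none, the next thing to rule out is proxy-host printing anything to stdout at login (shell rc output pollutes the ProxyCommand stream and corrupts packets the same way).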

  • Can't connect to EC2 instance: Permission denied (publickey)

    - by Assad Ullah
    I got this when I tried to connect to my new instance (Ubuntu 12.01 EC2) with my newly generated key:

        sh-3.2# ssh ec2-user@**** -v ****.pem
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug1: Connecting to **** [****] port 22.
        debug1: Connection established.
        debug1: permanently_set_uid: 0/0
        debug1: identity file /var/root/.ssh/id_rsa type -1
        debug1: identity file /var/root/.ssh/id_rsa-cert type -1
        debug1: identity file /var/root/.ssh/id_dsa type -1
        debug1: identity file /var/root/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '****' is known and matches the RSA host key.
        debug1: Found key in /var/root/.ssh/known_hosts:4
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /var/root/.ssh/id_rsa
        debug1: Trying private key: /var/root/.ssh/id_dsa
        debug1: No more authentication methods to try.
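    Two things stand out in that transcript (reading the redacted command as written): the .pem file is passed as a trailing argument, so ssh never uses it - the debug lines confirm only the default id_rsa/id_dsa paths are tried - and Ubuntu AMIs expect the user ubuntu rather than ec2-user (the Amazon Linux convention). A corrected invocation, with placeholder host and key names:

        # ssh refuses keys with open permissions
        chmod 400 mykey.pem

        ssh -v -i mykey.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com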

  • PEAP validating a secondary domain suffix

    - by sam
    The title is probably a little bit confusing, so let me explain the situation. Our company wants to implement a corporate wireless LAN with PEAP authentication. Unfortunately, someone made a big mistake in our AD design 10 years ago: the domain name we are using, "company.ch", is not owned by the company but by someone else, so it is not possible to issue a public SSL certificate for the RADIUS server. Our AD is too big to rename.

    We have already thought about using our private PKI and rolling out the CA certificate via GPO, but that would only cover our corporately managed clients, not BYOD (smartphones, tablets, laptops...).

    Is there a way to add a secondary domain name like "company2.ch", issue a public certificate for it, join the RADIUS server to that secondary domain as well, and configure that secondary DNS suffix via DHCP for all the client pools? Or is there another way - for example, a new RADIUS server with its own domain company2.ch, connected to the company.ch domain with some kind of trust? Sorry, I'm not a client/server guy; hopefully you get my drift!

  • How do I secure SQL Server 2008 R2?

    - by Mark Tait
    I have both a dedicated server and a VPS (from Fasthosts); the web sites/applications I run on these access SQL Server instances stored on the same servers. Until now, I have logged on to SQL Server on both the dedicated and VPS servers from SQL Server Management Studio - until I noticed in my server application logs multiple attempts to log on to SQL Server using the 'sa' username with a failed password. So someone (or a bot) is trying hard - repeatedly every couple of hours, for approximately 20 attempts during each instance - to log on, so obviously I have to lock down remote access to SQL Server.

    What I have done is gone into Configuration Manager and, in SQL Server Network Configuration > Protocols for Sql2008 and also in SQL Native Client 10.0 Configuration > Client Protocols, disabled Named Pipes and TCP/IP (VIA is disabled by default). I have left Shared Memory enabled. I also disabled the SQL Server Browser in SQL Server Services. Now the only way I can manage the databases on these servers is by logging on to them via Remote Desktop.

    Can anyone confirm if this is the correct way of stopping anyone maliciously logging on to SQL Server? (I'm not a DBA or security expert - and there are hundreds of articles advising all different ways - but I was hoping for the experts here to confirm, or otherwise, whether what I've done is correct.) Thank you, Mark
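    Disabling the network protocols does cut off remote logins, but the brute-force traffic was aimed at the sa account itself, and that account is worth neutering regardless, in case a protocol is ever re-enabled. A small sketch, run locally over the still-enabled Shared Memory connection (the new login name is arbitrary):

        rem -S . = local server, -E = Windows authentication
        sqlcmd -S . -E -Q "ALTER LOGIN sa WITH NAME = [sa_renamed]; ALTER LOGIN [sa_renamed] DISABLE;"

    With sa renamed and disabled, Windows-authenticated admins keep working and the scripted attacks lose their one guaranteed-valid username.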

  • Rename Active Directory domain following Windows 2000 -> 2008 migration.

    - by ewwhite
    I'm working with a site that needs an internal DNS domain rename. It currently has a DNS name of domain.abc.com and NT name of ABC. I'm trying to get to a DNS name of abctrading.com and NT name of ABCTRADING. Split DNS would be used.

    The site originally ran from a single Windows 2000 domain controller hosting AD, file, print, DHCP and DNS services. There was no Exchange system in the environment. The 50 client PCs are all Windows XP, with a handful of users using roaming profiles. All users are in a single OU and there are no group policies/GPOs.

    I'm a Linux engineer, but I have been trying to guide another group of consultants to reach a more suitable setup. With their help, we were able to move the single Windows 2000 system to a set of Windows 2008 R2 servers separated into domain controller and file/print systems (virtualized). We are also trying to add an Exchange 2010 system to this mix. The Windows 2000 server was demoted and is no longer in the picture.

    This is the tricky part: the client wants the domain renamed, and the consultants aren't quite sure how to get through it without another 32-40 hours of testing/implementation. They say there's considerable risk in doing the rename without a completely isolated test environment. However, this rename has to be done before installing Exchange, so we're stuck at this point.

    I'd like to know what's involved in renaming the domain at this point. We're on Windows Server 2008 and the AD is healthy now. Coming from a Linux background, it seems as though there should be a reasonable path to this. Also, since the original domain appears to be a child/subdomain, would that be a problem here? I'd appreciate any guidance.
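    The supported path is Microsoft's rendom domain-rename tool, and doing it before Exchange arrives is the right order, since domain rename is explicitly unsupported once Exchange 2010 is installed. A compressed sketch of the documented flow (run from a member machine with the admin tools; lab-test first, exactly as the consultants advise):

        rendom /list     & rem dumps current forest naming into Domainlist.xml
        rem edit Domainlist.xml: domain.abc.com -> abctrading.com, ABC -> ABCTRADING
        rendom /upload
        rendom /prepare  & rem verifies every DC can comply
        rendom /execute  & rem DCs apply the rename and reboot
        gpfixup /olddns:domain.abc.com /newdns:abctrading.com
        rendom /clean

    Every domain-joined XP machine then needs two reboots to pick the rename up, and the roaming-profile paths referencing the old name need checking by hand.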

  • Internet Troubles - PPPoE vs PPPoA?

    - by AkkA
    I have been having some internet troubles at home (ADSL2+ connection in Australia). We get random drop-outs on the authentication connection: the modem keeps its sync with the DSL service, but we lose authentication and either have to restart the combined router/modem (a Belkin; not sure of the model number) or unplug the phone cable, wait about 30 seconds and plug it in again.

    I've called the ISP (Telstra) a few times, but they only offer limited support when we don't use their supported hardware. Apparently something had happened on their side; they checked the box again (at least it sounded that simple) and told me it would be fine. It wasn't. I've replaced all the filters around the house, but that didn't help either. We do live a little way from the exchange (we get a sync speed of about 3000/900), so I thought it could be due to line noise, but that hasn't helped.

    Telstra allow both PPPoE and PPPoA connections (which I'm configuring through my router; I don't have software on the PC side). I've been running PPPoA the whole time - would it make any difference changing it to PPPoE? If not, are there any other theories as to why we would be experiencing these drop-outs? It had been fine for at least 12 months, then this suddenly started about 2 months ago.

  • Cutting Ubuntu to the bone for Virtualbox VM

    - by user32853
    I've been looking around for a Linux variant which will install only the software I need, rather than everything Ubuntu (for example) puts in by default. This is to create a virtual machine in VirtualBox which has bash, Apache, Python, Perl, SQLite, OpenSSH and a few other programs, but nothing else. I'd prefer to go with Ubuntu if possible, but another modern distro would do as well (I like using apt-get and yum rather than downloading/compiling etc.). So far, I've tried:

    - SuseStudio.com, which is probably the best so far.
    - Pressing F4 to get the boot options on Ubuntu 9.10, but there is no minimal installation (I think there was once).
    - Arch Linux: slightly confusing install procedure, but I might go back and try again.
    - Gentoo: started well, but fairly soon the HD on the virtual machine went to 2GB, even before the installation had started in earnest (I'd partitioned the disks is all).

    I realise there are various "small" Linuxes around like Puppy, Feather, DSL, etc., but they seem to be aimed at desktop users or as a techie's toolkit, and I want a small-as-possible server distro which can be managed with tools like apt or yum or similar. TIA for any advice you can offer! -- Monty
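    Staying in apt territory, debootstrap builds exactly this kind of minimal Ubuntu: a bare rootfs to which you add only named packages. A sketch under the assumption of a 10.04 (lucid) target and a mounted virtual disk at /mnt/target:

        # Bare-bones rootfs (minbase skips even the standard extras)
        sudo debootstrap --variant=minbase lucid /mnt/target http://archive.ubuntu.com/ubuntu

        # Add just the requested stack, without recommended hangers-on
        sudo chroot /mnt/target apt-get install --no-install-recommends \
            apache2 python perl sqlite3 openssh-server

    You still need a kernel, bootloader and fstab in the chroot before the VM will boot, so this is the skeleton of the approach rather than a complete recipe; the Ubuntu Server CD's "minimal virtual machine" install mode (on the server image, not the desktop one) is the low-effort alternative.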

  • Setting Up My Home Network

    - by Skizz
    I currently have five PCs at home, three running WinXP and two running Ubuntu. They are set up like this:

        ISP ----- Modem ---- Switch ---- Ubuntu1 -- B&W Printer
                               |
                               |--WinXP1
                               |
                               |--WinXP2
                            Wireless
                               |--Colour Printer
                               |
                               |---------Ubuntu2
                               |---------WinXP3 (laptop)

    The Ubuntu1 machine is set up as a PDC using Samba and runs fetchmail, procmail and dovecot to get my e-mail and allow me to access it via IMAP, so I can read the e-mail on any PC. I'd like to set up the network like this:

        ISP ----- Modem ---- Ubuntu1 ---- Switch ------WinXP1
                               |            |
                               |            |--WinXP2
                          B&W Printer    Wireless
                                            |--Colour Printer
                                            |
                                            |---------Ubuntu2
                                            |---------WinXP3 (laptop)

    My questions are:

    1. How to configure Ubuntu1 to act as a firewall (a sketch follows below).
    2. How to configure Ubuntu1 to provide consistent user authentication across the network. At the moment Samba provides roaming profiles for the XP machines, but the Ubuntu2 machine has its own user list. I'd like a single authentication scheme for both XP and Linux machines, so that users added to the server list propagate to all PCs (i.e. new users can log on using any PC without modifying any of the client PCs).
    3. How to configure a Linux client (Ubuntu2 above) to access files on the server (Ubuntu1), some of which are in user-specific folders - effectively sharing /home/{user} per user (read and write access) and things like /home/media/photos with read access for everyone and limited write access.
    4. How to configure the XP machines (if it is different from the Samba method).
    5. How to set up e-mail filtering. I'd like a whitelist/blacklist system for incoming e-mail on some of the accounts (mainly my kids' accounts), with filtered e-mails put into quarantine until a sysadmin either adds the sender to a blacklist or a whitelist.

    OK, that's a lot of stuff. For now, I don't want config files*; rather, I want to know what services/applications to use and how they interact. For example, LDAP could be used for authentication, but what else would be useful to make the administration of the LDAP easier? Once I have a general idea of the overall configuration, I can ask other questions about the specifics. Skizz

    * I have looked around for information, but most answers are usually in the form of abstract config files and lists of packages to install.
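    For question 1, with Ubuntu1 owning both the modem-facing and switch-facing NICs, the core is just kernel forwarding plus iptables NAT. A minimal sketch, assuming eth0 faces the modem and eth1 faces the switch (interface names are assumptions):

        # Enable routing between the two interfaces
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # Hide the LAN behind the modem-facing address
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # LAN out: allow; Internet in: replies only
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -P FORWARD DROP

    For questions 2-4 the conventional pairing is OpenLDAP (or keeping Samba as the PDC, as now) for accounts, with the Linux boxes using libnss-ldap/pam_ldap, and NFS or Samba for the home shares; question 5 maps naturally onto the procmail layer already in place.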

  • iperf max udp multicast performance peaking at 10Mbit/s?

    - by Tom Frey
    I'm trying to test UDP multicast throughput via iperf, but it seems it's not sending more than 10Mbit/s from my dev machine:

        C:\> iperf -c 224.0.166.111 -u -T 1 -t 100 -i 1 -b 1000000000
        ------------------------------------------------------------
        Client connecting to 224.0.166.111, UDP port 5001
        Sending 1470 byte datagrams
        Setting multicast TTL to 1
        UDP buffer size: 8.00 KByte (default)
        ------------------------------------------------------------
        [156] local 192.168.1.99 port 49693 connected with 224.0.166.111 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [156]  0.0- 1.0 sec  1.22 MBytes  10.2 Mbits/sec
        [156]  1.0- 2.0 sec  1.14 MBytes  9.57 Mbits/sec
        [156]  2.0- 3.0 sec  1.14 MBytes  9.55 Mbits/sec
        [156]  3.0- 4.0 sec  1.14 MBytes  9.56 Mbits/sec
        [156]  4.0- 5.0 sec  1.14 MBytes  9.56 Mbits/sec
        [156]  5.0- 6.0 sec  1.15 MBytes  9.62 Mbits/sec
        [156]  6.0- 7.0 sec  1.14 MBytes  9.53 Mbits/sec

    When I run it on another server, I get ~80Mbit/s, which is quite a bit better but still nowhere near the 1Gbps limit that I should be getting:

        C:\> iperf -c 224.0.166.111 -u -T 1 -t 100 -i 1 -b 1000000000
        ------------------------------------------------------------
        Client connecting to 224.0.166.111, UDP port 5001
        Sending 1470 byte datagrams
        Setting multicast TTL to 1
        UDP buffer size: 8.00 KByte (default)
        ------------------------------------------------------------
        [180] local 10.0.101.102 port 51559 connected with 224.0.166.111 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [180]  0.0- 1.0 sec  8.60 MBytes  72.1 Mbits/sec
        [180]  1.0- 2.0 sec  8.73 MBytes  73.2 Mbits/sec
        [180]  2.0- 3.0 sec  8.76 MBytes  73.5 Mbits/sec
        [180]  3.0- 4.0 sec  9.58 MBytes  80.3 Mbits/sec
        [180]  4.0- 5.0 sec  9.95 MBytes  83.4 Mbits/sec
        [180]  5.0- 6.0 sec  10.5 MBytes  87.9 Mbits/sec
        [180]  6.0- 7.0 sec  10.9 MBytes  91.1 Mbits/sec
        [180]  7.0- 8.0 sec  11.2 MBytes  94.0 Mbits/sec

    Does anybody have any idea why this is not achieving anything close to the link limit (1Gbps)? Thanks, Tom
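    The "UDP buffer size: 8.00 KByte (default)" line in both runs is the detail worth attacking first: with an 8 KB socket buffer the sender stalls constantly, which caps throughput regardless of -b. A hedged variation to try (flag values illustrative):

        iperf -c 224.0.166.111 -u -T 1 -t 100 -i 1 -b 900M -w 512k -l 1470

    Here -w raises the socket buffer and -b takes a suffix instead of the raw digit string. If the rate still plateaus, the per-datagram overhead on the sending host (one syscall per 1470-byte packet) becomes the ceiling, and a CPU difference between the dev machine and the server would explain the 10 vs 80 Mbit/s gap.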

  • My home box as my own host?

    - by Majid
    Hi all, I have a 512 kb/s DSL service at home. I do not have a static IP, but I can get one if I pay some extra to my ISP. If I get the static IP, can I make my home box act as my internet host? What else do I need? Thanks.

    P.S. I know that, if it is possible at all, the site I make available this way might be slow; that is alright. My question is whether it is possible at all.

    Edit: I need this for very small traffic. I am a PHP developer, and for my projects I am often asked to provide a demo. I currently use a free hosting service for this purpose, but it is down most of the time and support is non-existent. So I thought I would set up my home computer as my test server. With this, please note that I will only occasionally have 'visitors', and that will be one or possibly two visitors at any time. These demos are to showcase functionality, so no big images are served and page views will normally generate under 100KB of traffic.
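    For this use case even the static IP is optional: a dynamic-DNS name plus a port-forward rule on the DSL router will serve a couple of demo visitors fine. A hedged ddclient configuration (provider, account and hostname are placeholders):

        # /etc/ddclient.conf
        protocol=dyndns2
        use=web, web=checkip.dyndns.org
        login=myuser
        password=mypass
        mydemo.dyndns.org

    Beyond that: forward port 80 on the router to the PC's LAN address, and check the ISP's terms - some DSL providers block inbound port 80, in which case serving on an alternate port (e.g. 8080) is the workaround.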

  • Timeout settings for Remote Desktop Sessions to lock

    - by atroon
    Our office uses a Windows 2003 server to provide access to an accounting application. Recently I was asked to increase the amount of time it takes for the session to lock itself and require the entry of the user's password to resume. That seems to be about ten minutes, at present. I am familiar with group policy and have tweaked those settings to scavenge sessions (and thereby licenses) from sessions that have been disconnected (by the user closing the mstsc.exe client or by a network issue). That's simple and straightforward. But I can't find anything in GP to allow a longer time period before the RDP client window goes black and then, when clicked upon, requires a username and password to resume the session. I must admit this would be nice personally as well, since most of my time is spent documenting the application and/or monitoring its database, so I usually have a window open to the terminal server along with the rest of the staff in the accounting center, but I interact with it very little. I usually enter my password 10-15 times per workday, but I'm pretty good at it by now. ;) So, can this timeout period be adjusted, or are we out of luck?

  • Updating Samba From RPMs

    - by KnickerKicker
    My Red Hat Enterprise Linux 4 box comes with Samba version 3.0.10, which does not have support for the "inherit owner" attribute that is essential in implementing a deny-delete, write-once-read-many share (for examples, search Google for a-shared-drop-box-using-samba). (BTW, if anybody knows an alternative way to do it without updating Samba, I'm all ears!)

    I am not all that comfortable building from source, and after hours of Googling (no, I do not have a Red Hat subscription, so I cannot just run the up2date command), I found a whole bunch of RPMs on http://ftp.sernet.de/pub/samba/tested/rhel/4/i386/ (Samba 3.2.15 for RHEL 4). Next, I tried updating them with the rpm -U --nodeps command, but I got file conflict errors. So I went ahead and overwrote everything (or so I thought) by using rpm's --force option. But no good has come of all that: /usr/sbin/smbd -V still returns the old version. As of now, rpm -qa | grep samba returns:

        samba3-client-3.2.15-40.el4
        samba-3.0.10-1.4E.2
        samba-client-3.0.10-1.4E.2
        system-config-samba-1.2.21-1
        samba3-3.2.15-40.el4
        samba-common-3.0.10-1.4E.2
        samba3-winbind-3.2.15-40.el4

    I cannot remove the older ones because:

        samba-common >= 3.0.8-0.pre1.3 is needed by (installed) gnome-vfs2-smb-2.8.2-8.2.x86_64
        libsmbclient.so.0()(64bit) is needed by (installed) kdebase-3.3.1-5.8.x86_64
        libsmbclient.so.0()(64bit) is needed by (installed) gnome-vfs2-smb-2.8.2-8.2.x86_64

    Now that's a whole bunch of dependencies that I dare not touch :) Any and all pointers are welcome at this stage. Thanks in advance!
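    A hedged untangling strategy rather than a known fix: first establish which package owns the smbd binary that is actually running, then remove only the old server and client packages while leaving samba-common (which is what satisfies the gnome-vfs2/kdebase dependencies) in place:

        # Who owns the smbd on PATH, and where does samba3 put its own?
        which smbd && rpm -qf $(which smbd)
        rpm -ql samba3 | grep bin/smbd

        # If the old daemon is shadowing the new one, remove just the old
        # server/client packages (samba-common stays for the library deps)
        rpm -e --nodeps samba-3.0.10-1.4E.2 samba-client-3.0.10-1.4E.2
        service smb restart && smbd -V

    Verify with rpm -ql before the rpm -e: when a path is co-owned by a samba3 package, RPM keeps the file on disk as the old package goes, which is the outcome wanted here.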
