Search Results

Search found 8543 results on 342 pages for 'documentation'.

  • Complete Active Directory redesign and GPO application

    - by Wolfgang Kuehne
    After much testing, hundreds of tries, and many hours invested, I decided to consult you experts here.

    Overview: I want to apply a GPO to our users which adds a specific site to the Trusted Sites in the Internet Explorer settings for all users. However, the more I try, the more confusing the results become: the GPO is applied either to one group of users or to another. Finally, I came to the conclusion that this weird behavior is caused by the poor organization of Users and Groups in Active Directory. As such, I want to attack the problem at the root: redesign the Active Directory Users and Groups.

    Scenario: There is one Domain Controller, and we use Terminal Services (so there is a Terminal Server as well). Users usually log on to the Terminal Server using Remote Desktop to perform their daily tasks. I would classify the users in the following way:

        IT: Admins, Software Development
        Business: Administration, Management

    The current structure of the Active Directory Users and Groups is a result of the previous IT management. The company used Small Business Server, which created multiple default user groups and containers. Unfortunately, the guys working before me left no documentation at all. Now, as I inherit this structure, I am in no man's land with no idea which direction to head first. As you can see, the Active Directory Users and Groups have become a bit confusing. There is no SBS anymore, but when migrating from SBS to the current Windows Server 2008 R2 environment, the guys before me simply copied the same structure.

    The real question: Where should I start cleaning, ensuring that I won't totally break the current infrastructure? What is a sensible organization for the scenario that I have explained above?

    Possibly useful info on the current structure:

        - Computers folder: contains the "Terminal Services Computers" user group
            Members: TerminalServer computer, located at Server -> Terminalserver OU
            Member of: NONE
        - Foreign Security Principals: EMPTY
        - Managed Service Accounts: EMPTY
        - Microsoft Exchange Security Groups: not sure if needed; our email is administered by an external service provider
        - Distribution Groups: not sure if needed
        - Security Groups: there are a couple of groups which are needed
        - SBS users: contains all the users
        - Terminalserver: contains only the TerminalServer machine
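
    A role-based OU layout is one common starting point for a cleanup like this. Below is a minimal sketch only; the OU and group names and the example.local domain are placeholders of mine, and it assumes the ActiveDirectory PowerShell module available on a 2008 R2 DC:

        # Sketch: create a role-based OU tree and a security group to
        # filter the Trusted Sites GPO on (all names are hypothetical).
        Import-Module ActiveDirectory

        New-ADOrganizationalUnit -Name "Company" -Path "DC=example,DC=local"
        "IT","Business" | ForEach-Object {
            New-ADOrganizationalUnit -Name $_ -Path "OU=Company,DC=example,DC=local"
        }
        New-ADGroup -Name "GPO-TrustedSites" -GroupScope Global -GroupCategory Security `
                    -Path "OU=Company,DC=example,DC=local"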

  • Vagrant - Failed login, ssh not set up

    - by motleydev
    This question is twofold, because somewhere in my attempts to solve the problem I created a new one.

    First: I was trying to vagrant up using a Vagrantfile based on the standard hashicorp/precise32 box. Everything worked up until

        default: SSH auth method: private key

    where it would eventually time out. Enabling the GUI in the Vagrantfile showed that the machine never actually logs in. I can use the standard user/pass and log in from that point, but the vagrant up process still remains at that prior status.

    Here's where my understanding might be a little dim. I've tried setting the auth method from the insecure_pass_key to my root ~/.ssh/id_rsa or whichever one I wanted to use. I'm not entirely sure where to put a copy of the public key or my authorized_keys file. I've got a .vagrant.d folder in my user dir (I'm on OS X) which seems to contain the box images. I've got a .vagrant folder in the directory with the Vagrantfile which seems to contain the specific machine I am building off of. I've tried poring over the docs and forums, but I seem to be missing a key concept here.

    And now: after a host of tips/tricks, such as rolling back my VirtualBox install and uninstalling/reinstalling Vagrant and VirtualBox several times, when I try to run vagrant ssh-config with a Vagrantfile based on the same hashicorp/precise32, it says the box is not enabled for SSH. Specifically, this error:

        The provider for this Vagrant-managed machine is reporting that it
        is not yet ready for SSH. Depending on your provider this can carry
        different meanings. Make sure your machine is created and running and
        try again. Additionally, check the output of `vagrant status` to verify
        that the machine is in the state that you expect. If you continue to
        get this error message, please view the documentation for the provider
        you're using.

    So now I am slightly up a creek. Any help would be appreciated, if only to clarify a concept.

    Some pertinent info:

        - I'm on OS X Mavericks.
        - Because I'm running a dual-HD system with system files on one HD and user files on another, my permissions are a little wonky and VBoxManage will only let me run commands via sudo - not sure if it's pertinent, but maybe.
        - I have no idea what I'm doing. That part is perhaps more important.
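
    For reference, the private key that vagrant up authenticates with can be pointed at explicitly from the Vagrantfile. Public boxes normally ship with Vagrant's insecure keypair (the private half lives at ~/.vagrant.d/insecure_private_key) already authorized inside the guest, so a custom key only works if its public half is appended to /home/vagrant/.ssh/authorized_keys in the guest. A sketch:

        # Vagrantfile sketch: override the private key used by "vagrant up"
        Vagrant.configure("2") do |config|
          config.vm.box = "hashicorp/precise32"
          # Only works if the matching public key is in the guest's
          # /home/vagrant/.ssh/authorized_keys
          config.ssh.private_key_path = "~/.ssh/id_rsa"
        end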

  • Using curl -s in *nix command line not working for some reason

    - by JM4
    I am trying to install Composer (though to be honest I really have no idea how it fully works, and documentation seems to be quite poor) on my MediaTemple DV machine, using their instructions (http://getcomposer.org/doc/00-intro.md). Trying to install globally using:

        $ curl -s https://getcomposer.org/installer | php

    My command line (again, using PuTTY and logged into my server as root) thinks for a second, then sets up for the next prompt. I run a simple ls -l to check for the file it should have downloaded, with no luck. Any idea what could be causing the issue? I have tested and do in fact have curl installed.

    UPDATE 1

    Based on the first answer, the verbose response is:

        $ curl -vs https://getcomposer.org/installer | php
        * About to connect() to getcomposer.org port 443
        *   Trying 37.59.4.156... connected
        * Connected to getcomposer.org (37.59.4.156) port 443
        * successfully set certificate verify locations:
        *   CAfile: /etc/pki/tls/certs/ca-bundle.crt  CApath: none
        * SSLv2, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Server key exchange (12):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using DHE-RSA-AES256-SHA
        * Server certificate:
        *   subject: /C=CH/CN=dl.packagist.org/[email protected]
        *   start date: 2012-07-07 23:25:35 GMT
        *   expire date: 2013-07-10 02:55:12 GMT
        * SSL: certificate subject name 'dl.packagist.org' does not match target host name 'getcomposer.org'
        * Closing connection #0
        * SSLv3, TLS alert, Client hello (1):
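
    Worth knowing: -s silences curl's error output, so a failed TLS handshake like the name mismatch above produces nothing at all for php to run. A couple of sketches for seeing the failure and (at your own risk) working around it:

        # -sS stays quiet on success but still prints errors
        curl -sS https://getcomposer.org/installer | php

        # -k/--insecure skips certificate verification entirely;
        # a workaround, not a fix, since it defeats the point of HTTPS
        curl -sk https://getcomposer.org/installer | php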

  • Showing Directory Root When Launching Rails App Using Apache2 and Passenger

    - by LightBe Corp
    I have done the following in an attempt to host a Rails 3.2.3 application using Apache 2.2.21 and Passenger 3.0.13:

        - Installed the Passenger gem
        - Ran rvmsudo passenger-install-apache2-module
        - Added the website info to /etc/apache2/extra/httpd-vhosts.conf
        - Added a line to /etc/hosts (not sure if this was needed; it is not mentioned in the Passenger documentation)
        - Uncommented the line in /etc/apache2/httpd.conf to Include /etc/apache2/extra/httpd-vhosts.conf
        - Restarted Apache

    When I try to pull up my website, the following displays:

        Index of /
        Name    Last modified    Size    Description
        Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8r DAV/2 PHP/5.3.10
        with Suhosin-Patch Phusion_Passenger/3.0.13 Server at lightbesandbox2.com Port 443

    Here is the /etc/hosts entry for the website:

        127.0.0.1 www.lightbesandbox2.com

    Here is my /etc/apache2/extra/httpd-vhosts.conf entry for the website:

        NameVirtualHost *:80
        <VirtualHost *:80>
           ServerName www.lightbesandbox2.com
           ServerAlias lightbesandbox2.com
           PassengerAppRoot /Users/server1/Sites/iktusnetlive_RoR/
           DocumentRoot /Users/server1/Sites/iktusnetlive_RoR/public
           <Directory /Users/server1/Sites/iktusnetlive_RoR/public>
              AllowOverride all
              Options -MultiViews
           </Directory>
        </VirtualHost>

    When I do rvmsudo passenger-status, I get the following output:

        ----------- General information -----------
        max      = 6
        count    = 1
        active   = 0
        inactive = 1
        Waiting on global queue: 0

        ----------- Application groups -----------
        /Users/server1/Sites/iktusnetlive_RoR/:
          App root: /Users/server1/Sites/iktusnetlive_RoR/
          * PID: 8140   Sessions: 0   Processed: 2   Uptime: 20m 51s

    None of my assets are in the public folder in my Rails app. I have written an application using the template presented in Michael Hartl's Ruby on Rails Tutorial. The home page is in /app/views/static_pages/home.html.erb. I decided to copy an index.html file into the public folder to see if it would display, and it displayed as I had hoped.

    Is there a way to get Passenger to find my assets without me having to rewrite my application? Any help would be appreciated.

    Update 6/23/2012 10:00 am CDT (GMT-6)

    I corrected the problems with my file and have successfully executed the rake assets:precompile command. I still get the index page as before. I have made no other changes. I did a passenger-status command and it is still loaded. Restarting Apache did nothing.
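
    When Apache answers with a bare "Index of /" listing, it usually means mod_passenger never claimed the request, so one thing worth re-checking is that the three module lines printed at the end of passenger-install-apache2-module actually made it into the Apache configuration. A sketch of what they look like (the exact paths vary with the Ruby and gem location, so these are placeholders, not values from this question):

        LoadModule passenger_module /path/to/gems/passenger-3.0.13/ext/apache2/mod_passenger.so
        PassengerRoot /path/to/gems/passenger-3.0.13
        PassengerRuby /path/to/wrappers/ruby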

  • HAProxy + Percona XtraDB Cluster

    - by rottmanj
    I am attempting to set up HAProxy in conjunction with Percona XtraDB Cluster on a series of 3 EC2 instances. I have found a few tutorials online dealing with this specific setup, but I am a bit stuck. Both the Percona servers and the HAProxy servers are running Ubuntu 12.04. The HAProxy version is 1.4.18. When I start HAProxy I get the following error:

        Server pxc-back/db01 is DOWN, reason: Socket error, check duration: 2ms.

    I am not really sure what the issue could be. I have verified the following:

        - EC2 security group ports are open
        - Pored over my config files looking for issues; I currently do not see any
        - Ensured that xinetd was installed
        - Ensured that I am using the correct IP address of the MySQL server

    Any help with this is greatly appreciated. Here are my current configs.

    Load balancer /etc/haproxy/haproxy.cfg:

        global
                log 127.0.0.1   local0
                log 127.0.0.1   local1 notice
                maxconn 4096
                user haproxy
                group haproxy
                debug
                #quiet
                daemon

        defaults
                log     global
                mode    http
                option  tcplog
                option  dontlognull
                retries 3
                option  redispatch
                maxconn 2000
                contimeout      5000
                clitimeout      50000
                srvtimeout      50000

        frontend pxc-front
                bind 0.0.0.0:3307
                mode tcp
                default_backend pxc-back

        frontend stats-front
                bind 0.0.0.0:22002
                mode http
                default_backend stats-back

        backend pxc-back
                mode tcp
                balance leastconn
                option httpchk
                server db01 10.86.154.105:3306 check port 9200 inter 12000 rise 3 fall 3

        backend stats-back
                mode http
                balance roundrobin
                stats uri /haproxy/stats

    MySQL server /etc/xinetd.d/mysqlchk:

        # default: on
        # description: mysqlchk
        service mysqlchk
        {
                # this is a config for xinetd, place it in /etc/xinetd.d/
                disable         = no
                flags           = REUSE
                socket_type     = stream
                port            = 9200
                wait            = no
                user            = nobody
                server          = /usr/bin/clustercheck
                log_on_failure  += USERID
                #only_from      = 0.0.0.0/0
                # recommended to put the IPs that need
                # to connect exclusively (security purposes)
                per_source      = UNLIMITED
        }

    MySQL server /etc/services - added the line:

        mysqlchk        9200/tcp        # mysqlchk

    MySQL server /usr/bin/clustercheck:

        #!/bin/bash
        #
        # Script to make a proxy (ie HAProxy) capable of monitoring
        # Percona XtraDB Cluster nodes properly
        #
        # Author: Olaf van Zandwijk <[email protected]>
        # Documentation and download: https://github.com/olafz/percona-clustercheck
        #
        # Based on the original script from Unai Rodriguez
        #
        MYSQL_USERNAME="testuser"
        MYSQL_PASSWORD=""
        ERR_FILE="/dev/null"
        AVAILABLE_WHEN_DONOR=0

        #
        # Perform the query to check the wsrep_local_state
        #
        WSREP_STATUS=`mysql --user=${MYSQL_USERNAME} --password=${MYSQL_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>${ERR_FILE} | awk '{if (NR!=1){print $2}}' 2>${ERR_FILE}`

        if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]]
        then
            # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200
            /bin/echo -en "HTTP/1.1 200 OK\r\n"
            /bin/echo -en "Content-Type: text/plain\r\n"
            /bin/echo -en "\r\n"
            /bin/echo -en "Percona XtraDB Cluster Node is synced.\r\n"
            /bin/echo -en "\r\n"
        else
            # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503
            /bin/echo -en "HTTP/1.1 503 Service Unavailable\r\n"
            /bin/echo -en "Content-Type: text/plain\r\n"
            /bin/echo -en "\r\n"
            /bin/echo -en "Percona XtraDB Cluster Node is not synced.\r\n"
            /bin/echo -en "\r\n"
        fi
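
    Since the health check is just an HTTP response on TCP 9200, it can be tested independently of HAProxy; a "Socket error" generally means nothing answered on that port at all. A quick diagnostic sketch:

        # From the HAProxy box: does anything answer on the check port?
        curl -v http://10.86.154.105:9200/

        # On the Percona node: is xinetd actually listening on 9200?
        sudo netstat -tlnp | grep 9200

        # And does the check script itself work locally?
        /usr/bin/clustercheck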

  • Why won't vyatta allow SMTP through my firewall?

    - by Solignis
    I am setting up a Vyatta router on VMware ESXi, but I seem to have hit a major snag: I could not get my firewall and NAT to work correctly. I am not sure what was wrong with NAT, but it "seems" to be working now. The firewall, however, is not allowing traffic from my WAN interface (eth0) to my LAN (eth1). I can confirm it's the firewall, because I disabled all firewall rules and everything worked with just NAT. If I put the firewalls (WAN and LAN) back in place, nothing can get through to port 25.

    I am not really sure what the issue could be. I am using pretty basic firewall rules; I wrote the rules while looking at the Vyatta docs, so unless there is something odd with the documentation they "should" be working.

    Here are my NAT rules so far:

        vyatta@gateway# show service nat
        rule 20 {
            description "Zimbra SNAT #1"
            outbound-interface eth0
            outside-address {
                address 74.XXX.XXX.XXX
            }
            source {
                address 10.0.0.17
            }
            type source
        }
        rule 21 {
            description "Zimbra SMTP #1"
            destination {
                address 74.XXX.XXX.XXX
                port 25
            }
            inbound-interface eth0
            inside-address {
                address 10.0.0.17
            }
            protocol tcp
            type destination
        }
        rule 100 {
            description "Default LAN -> WAN"
            outbound-interface eth0
            outside-address {
                address 74.XXX.XXX.XXX
            }
            source {
                address 10.0.0.0/24
            }
            type source
        }

    Then here are my firewall rules; this is where I believe the problem is:

        vyatta@gateway# show firewall
        all-ping enable
        broadcast-ping disable
        conntrack-expect-table-size 4096
        conntrack-hash-size 4096
        conntrack-table-size 32768
        conntrack-tcp-loose enable
        ipv6-receive-redirects disable
        ipv6-src-route disable
        ip-src-route disable
        log-martians enable
        name LAN_in {
            rule 100 {
                action accept
                description "Default LAN -> any"
                protocol all
                source {
                    address 10.0.0.0/24
                }
            }
        }
        name LAN_out {
        }
        name LOCAL {
            rule 100 {
                action accept
                state {
                    established enable
                }
            }
        }
        name WAN_in {
            rule 20 {
                action accept
                description "Allow SMTP connections to MX01"
                destination {
                    address 74.XXX.XXX.XXX
                    port 25
                }
                protocol tcp
            }
            rule 100 {
                action accept
                description "Allow established connections back through"
                state {
                    established enable
                }
            }
        }
        name WAN_out {
        }
        receive-redirects disable
        send-redirects enable
        source-validation disable
        syn-cookies enable

    SIDENOTE: To test for open ports I have been using this website, http://www.yougetsignal.com/tools/open-ports/. It showed port 25 as open without the firewall rules and closed with the firewall rules.

    UPDATE: Just to see if the firewall was working properly, I made a rule to block SSH from the WAN interface. When I checked for port 22 on my primary WAN address, it said it was still open, even though I outright blocked the port. Here is the rule I used:

        rule 21 {
            action reject
            destination {
                address 74.219.80.163
                port 22
            }
            protocol tcp
        }

    So now I am convinced either I am doing something wrong or the firewall is not working like it should.
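
    Two things worth checking in a Vyatta setup like this. First, a named rule set has no effect until it is bound to an interface and direction. Second, destination NAT is applied before the "in" rule set is evaluated, so a WAN_in accept rule may need to match the post-NAT (inside) address rather than the public one. Sketches of both, in configure mode:

        # Bind the rule sets (they are no-ops until this is done)
        set interfaces ethernet eth0 firewall in name WAN_in
        set interfaces ethernet eth0 firewall local name LOCAL
        set interfaces ethernet eth1 firewall in name LAN_in
        commit

        # If DNAT-before-firewall ordering applies on this release, match
        # the rewritten destination instead of the public address:
        set firewall name WAN_in rule 20 destination address 10.0.0.17
        commit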

  • getUserPrincipal() in JCIFS / Lan-Manager authentication level setting in Windows 2k8

    - by Chris
    I have to find out in which exact format JCIFS stores the user principal in the getUserPrincipal() property. Therefore I created a test environment like this:

        - Windows Server 2008 Domain Controller, domain named "MYDOMAIN"
        - Many test users in Active Directory
        - Tomcat application server with my web application (which simply reads the user principal and displays its values)

    The user should be logged in to the web application with SSO; therefore I need the format that JCIFS is using to store the user (for example user@MYDOMAIN or MYDOMAIN\user...). I tested the authentication with other SSO frameworks using the Kerberos method and it works as expected. I'm now trying to use SSO through the NtlmHttpFilter of JCIFS. When I try to log in, I get the following error:

        jcifs.smb.SmbException: The parameter is incorrect.
        jcifs.smb.SmbTransport.checkStatus(SmbTransport.java:541)
        jcifs.smb.SmbTransport.send(SmbTransport.java:641)
        jcifs.smb.SmbSession.sessionSetup(SmbSession.java:322)
        jcifs.smb.SmbSession.send(SmbSession.java:224)
        jcifs.smb.SmbTree.treeConnect(SmbTree.java:176)
        jcifs.smb.SmbSession.logon(SmbSession.java:153)
        jcifs.smb.SmbSession.logon(SmbSession.java:146)
        jcifs.http.NtlmHttpFilter.negotiate(NtlmHttpFilter.java:189)
        jcifs.http.NtlmHttpFilter.doFilter(NtlmHttpFilter.java:121)

    According to the documentation I'm using to configure this, this is a known issue with Group Policy. It states that I have to change the policy "Network security: LAN Manager authentication level" to respond to NTLMv1 requests. I have done this, but it's still not working. So what I also have to configure is the same policy on the client computer: I have to change the policy so that the client computer sends NTLMv1, but it is always sending NTLMv2 tokens. The problem now is that I'm somehow not able to change this setting (I already was able to before), because the dropdown box to choose the authentication method is greyed out.

    (Just to make this clear: this dialog is on the client side, in the local security policies.) As the screenshot showed, the chosen method is "Only send NTLMv2 responses", which is the wrong setting, and I'm pretty sure this is causing the error above.

    My question is now: why can't I change this setting? Why is it greyed out?
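
    For what it's worth, that dropdown maps to the LmCompatibilityLevel registry value, and it is typically greyed out when a domain GPO is enforcing the setting, in which case a local change gets reverted at the next policy refresh anyway. A sketch for inspecting and (temporarily) setting it directly:

        REM Show the effective level (0-5; levels 0-2 still send LM/NTLMv1)
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel

        REM Sketch: send LM & NTLM responses (level 1); an enforcing domain
        REM GPO will overwrite this on the next policy refresh
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f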

  • Authenticate to VM using vagrant up

    - by utrecht
    Authentication failure during vagrant up, while vagrant ssh and ssh vagrant@localhost -p2222 work.

    I would like to execute a shell script using Vagrant at boot, but Vagrant is unable to authenticate, even though the VM has been started by vagrant up:

        c:\temp\helloworld>vagrant up
        Bringing machine 'default' up with 'virtualbox' provider...
        ==> default: Importing base box 'helloworld'...
        ==> default: Matching MAC address for NAT networking...
        ==> default: Setting the name of the VM: helloworld_default_1398419922203_60603
        ==> default: Clearing any previously set network interfaces...
        ==> default: Preparing network interfaces based on configuration...
            default: Adapter 1: nat
        ==> default: Forwarding ports...
            default: 22 => 2222 (adapter 1)
        ==> default: Booting VM...
        ==> default: Waiting for machine to boot. This may take a few minutes...
            default: SSH address: 127.0.0.1:2222
            default: SSH username: vagrant
            default: SSH auth method: private key
            default: Error: Connection timeout. Retrying...
            default: Error: Authentication failure. Retrying...
            default: Error: Authentication failure. Retrying...
            default: Error: Authentication failure. Retrying...
            default: Error: Authentication failure. Retrying...
        ...

    After hitting CTRL+C it is possible to authenticate to the VM using vagrant ssh and ssh vagrant@localhost -p2222.

    Vagrantfile: I use the default Vagrantfile and have only changed the hostname:

        # -*- mode: ruby -*-
        # vi: set ft=ruby :

        # Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
        VAGRANTFILE_API_VERSION = "2"

        Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
          # All Vagrant configuration is done here. The most common configuration
          # options are documented and commented below. For a complete reference,
          # please see the online documentation at vagrantup.com.

          # Every Vagrant virtual environment requires a box to build off of.
          config.vm.box = "helloworld"
        ...

    Vagrant version:

        c:\temp\helloworld>vagrant --version
        Vagrant 1.5.1

    Question: how do I authenticate to the VM using vagrant up?
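
    This pattern (vagrant up fails auth, but vagrant ssh works afterwards) often comes down to which private key Vagrant tries during boot versus what the box actually authorizes. If the box was packaged with its own key rather than Vagrant's insecure keypair, vagrant up can be pointed at it; and once authentication works, the boot-time shell script belongs in a provisioner. A sketch (the key and script file names are placeholders):

        Vagrant.configure("2") do |config|
          config.vm.box = "helloworld"
          # Key the box was packaged with, if it is not the insecure default:
          # config.ssh.private_key_path = "path/to/box_private_key"
          # Run a script at first boot (re-run later with "vagrant provision"):
          config.vm.provision "shell", path: "bootstrap.sh"
        end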

  • HAProxy causing delay

    - by user1221444
    I am trying to configure HAProxy to do load balancing for a custom webserver I created. Right now I am noticing an increasing delay with HAProxy as the size of the response message increases. For example, I ran four different tests; here are the results:

        Response 15kb through HAProxy:
            Avg. response time: .34 secs
            Transaction rate: 763 trans/sec
            Throughput: 11.08 MB/sec

        Response 2kb through HAProxy:
            Avg. response time: .08 secs
            Transaction rate: 1171 trans/sec
            Throughput: 2.51 MB/sec

        Response 15kb directly to server:
            Avg. response time: .11 secs
            Transaction rate: 1046 trans/sec
            Throughput: 15.20 MB/sec

        Response 2kb directly to server:
            Avg. response time: .05 secs
            Transaction rate: 1158 trans/sec
            Throughput: 2.48 MB/sec

    All transactions are HTTP requests. As you can see, the difference in response times is much bigger when the response is large than when it is small. I understand there will be a slight delay when using HAProxy. Not sure if it matters, but the test itself was run using siege, and during the test there was only one server behind HAProxy (the same one used in the direct-to-server tests).

    Here is my haproxy.cfg file:

        global
                log 127.0.0.1   local0
                log 127.0.0.1   local1 notice
                maxconn 10000
                user haproxy
                group haproxy
                daemon
                #debug

        defaults
                log     global
                mode    http
                option  httplog
                option  dontlognull
                retries 3
                option  redispatch
                option  httpclose
                maxconn 10000
                contimeout      10000
                clitimeout      50000
                srvtimeout      50000
                balance roundrobin
                stats enable
                stats uri /stats

        listen lb1 10.1.10.26:80
                maxconn 10000
                server app1 10.1.10.200:8080 maxconn 5000

    I couldn't find much in terms of options in this file that would help my problem. I have heard suggestions that I may have to adjust a few of my sysctl settings. I could not find a lot of information on this, however; most documentation covers the sysctl settings for Linux 2.4 and 2.6, and I am running 3.2 (Ubuntu Server 12.04), which seems to auto-tune, so I have no clue what I should or shouldn't be changing. Most settings changes I tried had no effect or a negative effect on performance.

    Just a note: this is a very preliminary test, and my hope is that at deployment time my HAProxy will be able to balance 10k-20k requests/sec to many servers, so if anyone could provide information to help me reach that goal, it would be much appreciated. Thank you very much for any information you can provide, and if you need any more information from me, please let me know and I will get you anything I can.
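
    One configuration-level suspect here is "option httpclose", which makes HAProxy close the connection after every response, so larger transfers pay the setup/teardown cost more visibly. On HAProxy 1.4 the usual alternative (a sketch, not a guaranteed fix for these numbers) is to keep the client side alive and only close toward the server:

        defaults
                # replaces "option httpclose"
                option  http-server-close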

  • apcupsd on Linux does not report on APC BackUPS Pro 900

    - by lserni
    From what documentation I could find, the UPS should be (is!) supported by Linux and ought to work with apcupsd. I looked for specific problems, such as the infamous Microlink protocol, and found none. I have found feedback from a guy in the UK who reports using this very model on a not-too-different OS version (his OpenSuSE 12.1, mine 12.3 x86_64).

    The USB port is detected; lsusb reports:

        Bus 002 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply

    and lsusb -v -s002:003 confirms and expands:

        Bus 002 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               2.00
          bDeviceClass            0 (Defined at Interface level)
          bDeviceSubClass         0
          bDeviceProtocol         0
          bMaxPacketSize0        64
          idVendor           0x051d American Power Conversion
          idProduct          0x0002 Uninterruptible Power Supply
          bcdDevice            0.90
          iManufacturer           1 American Power Conversion
          iProduct                2 Back-UPS RS 900G FW:879.L4 .I USB FW:L4
          bNumConfigurations      1
          Configuration Descriptor: [...]
            Interface Descriptor: [...]
              bInterfaceClass         3 Human Interface Device
              bInterfaceSubClass      0 No Subclass
              bInterfaceProtocol      0 None
              iInterface              0
              HID Device Descriptor:
                bLength                 9
                bDescriptorType        33
                bcdHID               1.00
                bCountryCode           33 US
                bNumDescriptors         1
                bDescriptorType        34 Report
                wDescriptorLength    1134
                Report Descriptors:
                  ** UNAVAILABLE **
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x81  EP 1 IN
                bmAttributes            3
                  Transfer Type         Interrupt
                  Synch Type            None
                  Usage Type            Data
                wMaxPacketSize     0x0008  1x 8 bytes
                bInterval             100
        Device Status:     0x0000 (Bus Powered)

    The kernel recognizes this and duly sets up:

        crw------- 1 root root 180, 96 Nov  4 16:11 /dev/usb/hiddev0

    As far as I know, everything is as it should be. I have put the standard configuration in /etc/apcupsd/apcupsd.conf (which is Unix-terminated, ASCII-only, no BOM, just in case):

        UPSCABLE usb
        UPSTYPE usb
        DEVICE

    (I have also tried commenting out DEVICE; and setting a device of /dev/puppa results in an access attempt to /dev/puppa, not some /var/lib/dev/puppa or /dev/puppa\r\n.)

    Yet, what apcaccess tells me is:

        VERSION  : 3.14.10 (13 September 2011) suse
        CABLE    : USB Cable
        DRIVER   : USB UPS Driver
        UPSMODE  : Stand Alone
        STARTTIME: 2013-11-04 16:24:22 +0100
        MODEL    :
        STATUS   : NOBATT
        LINEV    : 000.0 Volts
        LOADPCT  : 0.0 Percent Load Capacity
        BCHARGE  : 000.0 Percent
        TIMELEFT : 0.0 Minutes
        MBATTCHG : 5 Percent
        MINTIMEL : 3 Minutes
        MAXTIME  : 0 Seconds
        SENSE    : Low
        LOTRANS  : 000.0 Volts
        HITRANS  : 000.0 Volts

    It doesn't recognize the model, and reports no battery (and no voltage). This confirms that it's not the Microlink problem, or it would report the battery status, if precious little else. If I disconnect the USB cable, I get an apcupsd message to the effect that communications have been lost, and I get the "communications restored" broadcast too if I reconnect the cable. So apcupsd is monitoring. Everything tells me that it should work -- only it doesn't. Does anyone spot what I'm missing?
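
    One way to take the daemon out of the equation is apctest, which ships with apcupsd and talks to the UPS directly over the same driver: if it can read battery data, the problem is in apcupsd.conf; if it can't, that points at the USB/HID layer. A sketch:

        # apcupsd must not be running while apctest owns the device
        systemctl stop apcupsd    # or: /etc/init.d/apcupsd stop
        apctest                   # interactive; choose the USB driver test options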

  • Powershell: Cannot connect via SSL

    - by JSWork
    Am following "Secrets to PowerShell Remoting" to set up an SSL account and seem to be missing a step. I ran:

        winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname="redacted";CertificateThumbprint="redacted"}

    and got:

        PS WSMan:\localhost> dir wsman:\localhost\listener\Listener_1184937132

           WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Listener\Listener_1184937132

        Name                    Value          Type
        ----                    -----          ----
        Address                 *              System.String
        Transport               HTTP           System.String
        Port                    5985           System.String
        Hostname                               System.String
        Enabled                 true           System.String
        URLPrefix               wsman          System.String
        CertificateThumbprint                  System.String
        ListeningOn_756355952   10.0.0.54      System.String
        ListeningOn_1201550598  127.0.0.1      System.String

        PS WSMan:\localhost> dir wsman:\localhost\listener\Listener_1187163138

           WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Listener\Listener_1187163138

        Name                    Value          Type
        ----                    -----          ----
        Address                 *              System.String
        Transport               HTTP           System.String
        Port                    80             System.String
        Hostname                               System.String
        Enabled                 true           System.String
        URLPrefix               wsman          System.String
        CertificateThumbprint                  System.String
        ListeningOn_756355952   10.0.0.54      System.String
        ListeningOn_1201550598  127.0.0.1      System.String

        PS WSMan:\localhost> dir wsman:\localhost\listener\Listener_220862350

           WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Listener\Listener_220862350

        Name                    Value          Type
        ----                    -----          ----
        Address                 *              System.String
        Transport               HTTPS          System.String
        Port                    5986           System.String
        Hostname                redacted       System.String
        Enabled                 true           System.String
        URLPrefix               wsman          System.String
        CertificateThumbprint   redacted       System.String
        ListeningOn_756355952   10.0.0.54      System.String
        ListeningOn_1201550598  127.0.0.1      System.String

    The trouble is, when I do this:

        PS C:\Users\redacted> enter-pssession -Computername redacted -Credential redacted\redacted -UseSSL

    I get this:

        Enter-PSSession : Connecting to remote server failed with the following error message: The client cannot
        connect to the destination specified in the request. Verify that the service on the destination is running
        and is accepting requests. Consult the logs and documentation for the WS-Management service running on the
        destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command
        on the destination to analyze and configure the WinRM service: "winrm quickconfig". For more information,
        see the about_Remote_Troubleshooting Help topic.
        At line:1 char:16
        + enter-pssession <<<<  -Computername redacted -Credential redacted\redacted -UseSSL
            + CategoryInfo          : InvalidArgument: (redacted:String) [Enter-PSSession], PSRemotingTransportException
            + FullyQualifiedErrorId : CreateRemoteRunspaceFailed

    This happens even when the firewall is off completely, and even when the machine tries to connect to itself locally. On top of that, despite the listeners being listed under wsman, when I run:

        PS WSMan:\localhost> Get-PSSessionConfiguration

    I get:

        Name                     PSVersion  StartupScript  Permission
        ----                     ---------  -------------  ----------
        Microsoft.PowerShell     2.0

    Any ideas what I'm missing/doing wrong?

    Edit: Windows 2003, PowerShell v2.0.
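
    A couple of checks that often narrow this down (a sketch; the hostnames are the redacted ones above):

        REM Is anything actually listening on the HTTPS port?
        netstat -ano | findstr 5986

        REM Enumerate listeners and the service config the WinRM way
        winrm enumerate winrm/config/listener
        winrm get winrm/config/service

        REM Also note: with -UseSSL the computer name must match the
        REM certificate's CN, i.e. the Hostname field of the HTTPS listener.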

  • unable to join domain using virtualbox

    - by FreshPrinceOfSO
    I'm in the process of setting up a VM environment for a MS certification exam (70-462). Following the training kit's instructions, I've set up a domain controller (DC) and two members (SQL-A, SQL-B) so far. I can't figure out why I can't join the domain.

        DC
        IPv4 Address . . . : 10.10.10.10(Preferred)
        Subnet Mask. . . . : 255.0.0.0
        DNS Servers. . . . : ::1
                             127.0.0.1

        SQL-A
        IPv4 Address . . . : 10.10.10.20(Preferred)
        Subnet Mask. . . . : 255.0.0.0
        DNS Servers. . . . : 10.10.10.10

        SQL-B
        IPv4 Address . . . : 10.10.10.30(Preferred)
        Subnet Mask. . . . : 255.0.0.0
        DNS Servers. . . . : 10.10.10.10

    I've read how to do networking between virtual machines in VirtualBox, and the documentation. After trying various network adapter configurations, I can't get them to communicate in order to have the two members join the domain. When I ping from .30 to .10, I get:

        ping 10.10.10.10

        Pinging 10.10.10.10 with 32 bytes of data:
        Reply from 10.10.10.20: Destination host unreachable.
        Reply from 10.10.10.20: Destination host unreachable.
        Reply from 10.10.10.20: Destination host unreachable.
        Reply from 10.10.10.20: Destination host unreachable.

    Trying to join the domain:

        netdom join SQL-A /domain:contso.com

        The specified domain either does not exist or could not be contacted.
        The command failed to complete successfully.

    Within VirtualBox, I've tried the following combinations for the network adapter:

        Attached to       - Promiscuous Mode
        -----------------------------------
        NAT
        Bridged Adapter   - Deny
        Bridged Adapter   - Allow VMs
        Bridged Adapter   - Allow All
        Internal Network  - Deny
        Internal Network  - Allow VMs
        Internal Network  - Allow All
        Host-only Adapter - Deny
        Host-only Adapter - Allow VMs
        Host-only Adapter - Allow All

    Edit: (the full ipconfig /all output for the DC and SQL-A was attached as screenshots)
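
    "Destination host unreachable" replies mean ARP for the target never resolves, i.e. there is no layer-2 path between the guests, which points at the VirtualBox wiring rather than Windows. A setup that reliably gives guest-to-guest connectivity is putting every VM's adapter on the same internal network name; a sketch with VBoxManage (the VM and network names are examples):

        VBoxManage modifyvm "DC"    --nic1 intnet --intnet1 "labnet"
        VBoxManage modifyvm "SQL-A" --nic1 intnet --intnet1 "labnet"
        VBoxManage modifyvm "SQL-B" --nic1 intnet --intnet1 "labnet"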

  • Why do my Application Compatibility Toolkit Data Collectors fail to write to my ACT Log Share?

    - by Jay Michaud
    I am trying to get the Microsoft Application Compatibility Toolkit 5.6 (version 5.6.7320.0) to work, but I cannot get the Data Collectors to write to the ACT Log Share. The configuration is as follows.

        Machine: ACT-Server
        Domain: mydomain.example.com
        OS: Windows 7 Enterprise 64-bit Edition
        Windows Firewall configuration: File and Printer Sharing (SMB-In) is
            enabled for Public, Domain, and Private networks
        ACT Log Share: ACT

    Share permissions*:

        Group/user names    Allow permissions
        -------------------------------------
        Everyone            Full Control
        Administrator       Full Control
        Domain Admins       Full Control
        Administrators      Full Control
        ANONYMOUS LOGON     Full Control

    Folder permissions*:

        Group/user name    Allow permissions        Apply to
        ------------------------------------------------------------------------
        ANONYMOUS LOGON    Read, write & execute    This folder, subfolders, and files
        Domain Admins      Full control             This folder, subfolders, and files
        Everyone           Read, write & execute    This folder, subfolders, and files
        Administrators     Full control             This folder, subfolders, and files
        CREATOR OWNER      Full control             Subfolders and files
        SYSTEM             Full control             This folder, subfolders, and files
        INTERACTIVE        Traverse folder / execute file, List folder / read data,
                           Read attributes, Read extended attributes, Create files /
                           write data, Create folders / append data, Write attributes,
                           Write extended attributes, Delete subfolders and files,
                           Delete, Read permissions
                                                    This folder, subfolders, and files
        SERVICE            (same as INTERACTIVE)
        BATCH              (same as INTERACTIVE)

    *I am fully aware that these permissions are excessive, but that is beside the point of this question.

    Some of the clients running the Data Collector are domain members, but some are not. I am working under the assumption that this is a Windows file sharing permission issue or a network access policy issue, but of course I could be wrong. It is my understanding that the Data Collector runs in the security context of the SYSTEM account, which for domain members appears on the network as MYDOMAIN\machineaccount. It is also my understanding, from reading numerous pieces of documentation, that setting the ANONYMOUS LOGON permissions as I have above should allow these computer accounts and non-domain-joined computers to access the share.

    To test connectivity, I set up the Windows XP Mode virtual machine (VM) on ACT-Server. In the VM, I opened a command prompt running as SYSTEM (using the old "at" command trick) and used it to run explorer.exe. In this Windows Explorer instance, I typed \\ACT-Server\ACT into the address bar, and then I was prompted for logon credentials. The goal, though, was not to be prompted. I also used the "net use /delete" command in the command prompt window to delete connections to the ACT-Server\IPC$ share each time my connection attempt failed. I have made sure that the appropriate exceptions are in place.

    Since ACT-Server is a domain member, the "Network access: Sharing and security model for local accounts" security policy is set to "Classic - local users authenticate as themselves". In spite of this, I still tried enabling the Guest account and adding permissions for it on the share, to no effect.

    What am I missing here? How do I allow anonymous logons to a shared folder as a step toward getting my ACT Data Collectors to deposit their data correctly? Am I even on the right track, or is the issue elsewhere?

  • Postfix flow/hook reference, or high-level overview?

    - by threecheeseopera
    The Postfix MTA consists of several components/services that work together to perform the different stages of delivery and receipt of mail; these include the smtpd daemon, the pickup and cleanup processes, the queue manager, the smtp service, pipe/spawn/virtual/rewrite ... and others (including the possibility of custom components). Postfix also provides several types of hooks that allow it to integrate with external software, such as policy servers, filters, bounce handlers, loggers, and authentication mechanisms; these hooks can be connected to different components/stages of the delivery process, and can communicate via (at least) IPC, network, database, several types of flat files, or a predefined protocol (e.g. milter). An old and very limited example of this is shown at this page.

    My question: Does anyone have access to a resource that describes these hooks, the components/delivery stages that each hook can interact with, and the supported communication methods? Or, more likely, documentation of the various Postfix components and the hooks/methods that they support?

    For example, given the requirement "if the recipient primary MX server matches 'shadysmtpd', check the recipient address against a list; if there is a match, terminate the SMTP connection without notice", my software would need to:

        1) integrate into the proper part of the SMTP process,
        2) use some method to perform the address check (TCP map server? regular expressions? mysql?), and
        3) implement the required action (connection termination).

    Additionally, there will probably be several methods to accomplish this, and another requirement would be to find the one that best fits (e.g. a network server might be faster than a flat-file lookup; or, if a large volume of mail might be affected by this check, it should be performed as early in the mail process as possible).

    Real-world example: the apolicy policy server (which performs checks on addresses according to user-defined rules) is designed as a standalone TCP server that hooks into Postfix inside the smtpd component via the directive 'check_policy_service inet:127.0.0.1:10001' in the 'smtpd_client_restrictions' configuration option. This means that, when Postfix first receives an item of mail to be delivered, it creates a TCP connection to the policy server address:port to determine whether the client is allowed to send mail from this server (in addition to whatever other restrictions / restriction lookup methods are defined in that option); the proper action is then taken based on the server's response.

    Notes:

        1) The Postfix architecture page describes some of this information in ASCII art; what I am hoping for is distilled, condensed reference material.
        2) Please correct me if I am wrong on any level; there is a mountain of material, and I am just one man ;)

    Thanks!
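
    For concreteness, the apolicy wiring described above looks like this in main.cf (standard check_policy_service syntax from the Postfix documentation; the address and port are the ones from the example):

        # /etc/postfix/main.cf sketch: consult a TCP policy server while
        # evaluating the connecting client, before any mail is accepted
        smtpd_client_restrictions =
            check_policy_service inet:127.0.0.1:10001,
            permit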

  • Automatic layout of manual network mapping

    - by Paul
    So I have a small business network mainly consisting of two routed layer-2 domains with a total of ca. 100 devices spread over ca. 2000 m² of production and office space. Typical problems to solve using the graph would be:

        - Over what (cable) path is a PC connected to the server?
        - Where to expect devices connected to a switch port?

    I want to generate a graph of the physical network topology:

        - Nodes are endpoint devices, switch ports, wall outlets, patch panel ports, etc.
        - Edges are cable connections.

    Ideally, edges (or segments) that pass through the same bundle could be grouped. Also, I would like to augment the graph data with automatically gathered data (monitoring state, MAC address, switch port <- MAC entries to build up parts of the map).

    At the moment I use graphviz for this inside a Confluence wiki, like this:

        layout = "neato"
        overlap = scale
        subgraph {
            rankdir = "TB"
            subgraph cluster_r1pf1 {
                r1pf1 [label="{ Rack 1 PF 1 | { <p1>P1 | <p2>P2 | <p3>P3} }", shape=record]
            }
            subgraph cluster_switch1 {
                switch1 [label="{ Rack 1 Switch 1 | { <p1> P1 | <p1> P1 | <p3> P3} }", shape=record]
            }
            r1pf1:p1 -> switch1:p1

    (obviously there are dozens of entries omitted here)

    Problem is: I have a hard time getting graphviz to generate a bearable layout. Edges overlap so badly that you can't read the diagram anymore.

    The question is: What other tools (be they interactive, like Visio or OmniGraffle, or I/O-oriented, like graphviz) exist that would allow easily versionable (as in: operates on a text file) documentation that is both machine- and human-readable and editable?

    Why not OmniGraffle or Visio? Well, we don't have Macs, and Visio is not available at the moment. To buy it I would need good arguments; automation would be one of them. But last time I looked, versioning Visio files or even thinking about automatic handling was a nightmare.

    Related:

        - Network Mapping Tools basically asks the same with a focus on generating the complete graph automatically (but without the need to document cabling connections)
        - Recommendations for automatic computer inventory brings up links to "all-in-one" solutions
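
    Staying with graphviz for a moment: a few attributes specifically target overlapping edges in neato layouts and are worth trying before switching tools (a sketch; the values need tuning per graph):

        graph [overlap=false, splines=true, sep="+15"]
        // with the dot engine instead of neato, splines=ortho gives
        // right-angled "cable run" style edges between record nodes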

  • IIS6 Virtual Directory 500 Error on Remote Share

    - by David Boike
    We have our servers at the server farm in a domain. Let's call it LIVE. Our developer computers live in a completely separate corporate domain, miles and miles away. Let's call it CORP. We have a large central storage unit (Unix) that houses images and other media needed by many webservers in the server farm. The IIS application pools run as (let's say) LIVE\MediaUser and use those credentials to connect to the central storage share as a virtual directory, retrieve the images, and serve them as if they were local on each server.

    The problem is in development, on my development machine. I log in as CORP\MyName. My IIS 6 application pool runs as Network Service. I can't run it as a user from the LIVE domain because my machine isn't (and cannot be) joined to that domain. I try to create a virtual directory, point it to the same network directory, click Connect As, uncheck the "Always use the authenticated user's credentials when validating access to the network directory" checkbox so that I can enter the login info, enter the credentials for LIVE\MediaUser, click OK, verify the password, etc. This doesn't work: I get "HTTP Error 500 - Internal server error" from IIS.

    The IIS log file reports sc-status = 500, sc-substatus = 16, and sc-win32-status = 1326. The documentation says the former means "UNC authorization credentials are incorrect" and the Win32 status means "Logon failure: unknown user name or bad password." This would be all well and good if it were anywhere close to accurate. I double- and triple-checked it and tried multiple known-good logins. The IIS manager allows me to view the file tree in its window; it's only the browser that kicks me out. I even tried going to the virtual directory's Directory Security tab, and under Authentication and Access Control, I tried using the same LIVE domain username for the anonymous access credential. No luck.

    I'm not trying to run any ASP, ASP.NET, or other dynamic anything out of the virtual directory. I just want IIS to be able to load static images, CSS, and JS files. If anyone has some bright ideas I would be most appreciative!
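
    One way to test the cross-domain credentials outside of IIS entirely is runas /netonly, which starts a local process that presents the LIVE credentials only on the network (a sketch; the share path is a placeholder):

        runas /netonly /user:LIVE\MediaUser cmd.exe
        REM then, inside the new shell:
        dir \\storage\media

    If that also fails with a logon error, the problem sits between the dev machine and the storage host (trust, name resolution, SMB settings) rather than in the virtual directory configuration.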

  • How Do I Restrict Repository Access via WebSVN?

    - by kaybenleroll
    I have multiple Subversion repositories which are served up through Apache 2.2 and WebDAV. They are all located in a central place, and I used this debian-administration.org article as the basis (I dropped the database authentication in favor of a simple htpasswd file, though). Since then, I have also started using WebSVN.

    My issue is that not all users on the system should be able to access the different repositories, and the default setup of WebSVN is to allow anyone who can authenticate. According to the WebSVN documentation, the best way around this is to use Subversion's path access system, so I looked to create this using the AuthzSVNAccessFile directive. When I do this, though, I keep getting "403 Forbidden" messages.

    My files look like the following. I have default policy settings in a file:

        <Location /svn/>
          DAV svn
          SVNParentPath /var/lib/svn/repository

          Order deny,allow
          Deny from all
        </Location>

    Each repository gets a policy file like below:

        <Location /svn/sysadmin/>
          Include /var/lib/svn/conf/default_auth.conf

          AuthName "Repository for sysadmin"
          require user joebloggs jimsmith mickmurphy
        </Location>

    The default_auth.conf file contains this:

        SVNParentPath /var/lib/svn/repository
        AuthType basic
        AuthUserFile /var/lib/svn/conf/.dav_svn.passwd
        AuthzSVNAccessFile /var/lib/svn/conf/svnaccess.conf

    I am not fully sure why I need the second SVNParentPath in default_auth.conf, but I just added it today as I was getting error messages as a result of adding the AuthzSVNAccessFile directive.

    With a totally permissive access file

        [/]
        joebloggs = rw

    the system worked fine (and was essentially unchanged), but as soon as I start trying to add any kind of restrictions, such as

        [sysadmin:/]
        joebloggs = rw

    instead, I get the 'Permission denied' errors again. The log file entries are:

        [Thu May 28 10:40:17 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET websvn:/
        [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET svn:/sysadmin

    What do I need to do to get this to work? Have I configured Apache wrong, or is my understanding of the svnaccess.conf file incorrect? If I am going about this the wrong way, I have no particular attachment to my overall approach, so feel free to offer alternatives as well.

    UPDATE (20090528-1600): I attempted to implement this answer, but I still cannot get it to work properly. I know most of the configuration is correct, as I have added

        [/]
        joebloggs = rw

    at the start and 'joebloggs' then has all the correct access. When I try to go repository-specific, though, doing something like

        [/]
        joebloggs = rw

        [sysadmin:/]
        mickmurphy = rw

    then I get a permission denied error for mickmurphy (joebloggs still works), with an error similar to what I already had previously:

        [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'mickmurphy' GET svn:/sysadmin

    Also, I forgot to explain previously that all my repositories are underneath /var/lib/svn/repository.

    UPDATE (20090529-1245): Still no luck getting this to work, but all the signs seem to be pointing to the issue being with path-access control in Subversion not working properly. My assumption is that I have not conf
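
    For reference, a sketch of an svnaccess.conf that grants per-repository access with groups. The section name left of the colon must match the repository directory name under SVNParentPath, and a per-repo section denies everything not explicitly granted; note the first log line above shows access being checked for 'websvn:/' as well, so the file may also need to cover whatever parent path WebSVN itself is configured with:

        [groups]
        sysadmins = joebloggs, jimsmith, mickmurphy

        # no blanket [/] grant: each repository lists its own users
        [sysadmin:/]
        @sysadmins = rw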

  • Hot-swap drive got new name, can I change it on-the-fly?

    - by T.J. Crowder
    One of the HDDs in my server's RAID config failed, so I took it out of the array and had the data center hot-swap it. They've done that, but now the new drive is /dev/sdc rather than /dev/sda.

    I suspect -- correct me if I'm wrong -- that if I reboot the server, it will be /dev/sda again, so I'm hesitant to add it back to the array as /dev/sdc because I don't want to lay a trap for myself to fall into on the next reboot. I'd just as soon not reboot the server if I don't need to (if I do need to, well, too bad for me).

    Is there a way I can change the device name from /dev/sdc to /dev/sda without rebooting? This is on Ubuntu 10.04 LTS.

    It's an md array ("Linux Software RAID"), where currently one of the devices (there are a couple of them) looks like this ("degraded" because I've removed the old /dev/sda from it):

        # mdadm --detail /dev/md0
        /dev/md0:
                Version : 00.90.03
          Creation Time : Sun Oct 11 21:07:54 2009
             Raid Level : raid1
             Array Size : 97536 (95.27 MiB 99.88 MB)
          Used Dev Size : 97536 (95.27 MiB 99.88 MB)
           Raid Devices : 2
          Total Devices : 1
        Preferred Minor : 0
            Persistence : Superblock is persistent

            Update Time : Thu Jun 30 09:31:16 2011
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0

                   UUID : 496be7a5:ab9177ed:7792c71e:7dc17aa4
                 Events : 0.112

            Number   Major   Minor   RaidDevice State
               0       8       17        0      active sync   /dev/sdb1
               1       0        0        1      removed

    Thanks,

    Update: Reading through the kernel md documentation, I suspect that if the name changes on reboot, it won't matter. (Good design, that.) Here's why:

        Boot time autodetection of RAID arrays
        --------------------------------------
        When md is compiled into the kernel (not as module), partitions of
        type 0xfd are scanned and automatically assembled into RAID arrays.
        This autodetection may be suppressed with the kernel parameter
        "raid=noautodetect". As of kernel 2.6.9, only drives with a type 0
        superblock can be autodetected and run at boot time.

        The kernel parameter "raid=partitionable" (or "raid=part") means
        that all auto-detected arrays are assembled as partitionable.

    I do have md compiled into the kernel, so I'm rebuilding the array now and will do the reboot to see what happens. Even if it works, the above doesn't answer the question I actually asked, so unless someone comes along and answers that question in the meantime (I'd be interested, even if it's not necessary for what I'm doing this very moment), I'll just delete the question to keep noise down.
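
    For completeness, re-adding the replacement disk under whatever name it currently has is just a partition copy plus an mdadm add; md identifies members by superblock UUID, not device name, so the sdc/sda naming doesn't matter to the array. A sketch (assuming MBR partition tables, as was typical on a 10.04 box):

        # copy the surviving disk's partition table to the new disk
        sfdisk -d /dev/sdb | sfdisk /dev/sdc

        # add the matching partition back into the mirror and watch the resync
        mdadm /dev/md0 --add /dev/sdc1
        watch cat /proc/mdstat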

  • Sharing folder in a Virtual Private Windows Server 2008 R2?

    - by Triztian
    See Edit 2.

    Hello all. It seems my involvement with computers has grown, and I've found myself needing to access a shared folder on a server. I've read some documentation and managed to set the folder up as a share. For this I created a local group and, for now, just one local user that has access to the share; the folder is in the public user folder and its permissions should be (and I believe they are) read/write. The problem is that I can't connect from a remote machine; I mean, I don't know how it should be accessed. The server has a public IP, and we also use it as the host for our website; I don't know if that affects anything. The folder will be the keeper of the QuickBooks company files and has the QuickBooks Database Server Manager installed.

    I've tried setting up a VPN connection to the server, but with no success. The server has a domain name, "http://www.example.com", that redirects to our website; I am unsure if it could be accessed that way. Also, the share has a location displayed when I right-click -> Properties.

    Here's what I've tried:

        - Setting up a VPN connection (Windows Vista and 7): got to the point where I was asked for credentials and entered the user I created (which is not an admin), but I got a "connection failed, error 800" message. I suppose this is because in the domain field I entered the server's workgroup.
        - Right-click -> add a network connection (Windows 7): went through the wizard until I reached the point of entering the location; tried many things - the name in the share's properties (\\SOMETHING\Share), the http://www.example.com address, the IP address.

    I'm quite unfamiliar with this, so I have my guesses:

        - Since the group and user are local, they do not have access to the folder.
        - The firewall on the server is blocking my connection.

    Anyway, any help and guidance is truly appreciated.

    EDIT 1: As @tony roth pointed out, it may be a security fail. I brought it up with management and they said that it is not an issue, so please bear with me.

    EDIT 2: I've found out that the real question could be streamlined to "Sharing a folder on a Virtual Private Server?", as that's what we have: a virtual private Windows Server 2008 R2. I would like to know how to make it show up like a normal folder on the client computer. Thanks again for all of your support.
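
    Once the share, user, and firewall line up, mapping it from a client is a single command (a sketch with placeholder names; note that raw SMB over the internet needs TCP 445 reachable, which many ISPs block, and exposing it publicly is widely discouraged - hence the usual advice to reach the share through a VPN instead):

        net use Q: \\www.example.com\QuickBooks /user:SERVERNAME\shareuser *
        REM the trailing * prompts for the password instead of embedding it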

  • What Are All the Variables Necessary to Create Blackbox Logs for Nginx?

    - by Alan Gutierrez
    There's an article out there, "Profiling LAMP Applications with Apache's Blackbox Logs", that describes how to create a log that records a lot of detailed information missing in the common and combined log formats. This information is supposed to help you resolve performance issues. As the author notes, "While the common log-file format (and the combined format) are great for hit tracking, they aren't suitable for getting hardcore performance data."

    The article describes a "blackbox" log format - like a blackbox flight recorder on an aircraft - that gathers information used to profile server performance which is missing from the hit-tracking log formats: keep-alive status, remote port, child processes, bytes sent, etc.

        LogFormat "%a/%S %X %t \"%r\" %s/%>s %{pid}P/%{tid}P %T/%D %I/%O/%B" blackbox

    I'm trying to recreate as much of the format as possible for Nginx, and would like help filling in the blanks. Here's what the Nginx blackbox format would look like; the unmapped Apache directives have question marks after their names:

        access_log blackbox '$remote_addr/$remote_port X? [$time_local] "$request"'
                            's?/$status $pid/0 T?/D? I?/O?/B?'

    Here's a table of the variables I've been able to map from the Nginx documentation:

        %a      = $remote_addr - The IP address of the remote client.
        %S      = $remote_port - The port of the remote client.
        %X      = ?            - Keep-alive status.
        %t      = $time_local  - The start time of the request.
        %r      = $request     - The first line of the request: method verb, path, and protocol.
        %s      = ?            - Status before any redirections.
        %>s     = $status      - Status after any redirections.
        %{pid}P = $pid         - The process id.
        %{tid}P = N/A          - The thread id, which is non-applicable to Nginx.
        %T      = ?            - The time in seconds to handle the request.
        %D      = ?            - The time in milliseconds to handle the request.
        %I      = ?            - The count of bytes received, including headers.
        %O      = ?            - The count of bytes sent, including headers.
        %B      = ?            - The count of bytes sent excluding headers, but with a 0 for none instead of '-'.

    Looking for help filling in the missing variables, or confirmation that the missing variables are, in fact, unavailable in Nginx.

  • How Can We Create Blackbox Logs for Nginx?

    - by Alan Gutierrez
    There's an article out there, "Profiling LAMP Applications with Apache's Blackbox Logs", that describes how to create a log that records a lot of detailed information missing in the common and combined log formats. This information is supposed to help you resolve performance issues. As the author notes, "While the common log-file format (and the combined format) are great for hit tracking, they aren't suitable for getting hardcore performance data."

    The article describes a "blackbox" log format - like a blackbox flight recorder on an aircraft - that gathers information used to profile server performance which is missing from the hit-tracking log formats: keep-alive status, remote port, child processes, bytes sent, etc.

        LogFormat "%a/%S %X %t \"%r\" %s/%>s %{pid}P/%{tid}P %T/%D %I/%O/%B" blackbox

    I'm trying to recreate as much of the format as possible for Nginx, and would like help filling in the blanks. Here's what the Nginx blackbox format would look like; the unmapped Apache directives have question marks after their names:

        access_log blackbox '$remote_addr/$remote_port X? [$time_local] "$request"'
                            's?/$status $pid/0 T?/D? I?/$bytes_sent/$body_bytes_sent'

    Here's a table of the variables I've been able to map from the Nginx documentation:

        %a      = $remote_addr     - The IP address of the remote client.
        %S      = $remote_port     - The port of the remote client.
        %X      = ?                - Keep-alive status.
        %t      = $time_local      - The start time of the request.
        %r      = $request         - The first line of the request: method verb, path, and protocol.
        %s      = ?                - Status before any redirections.
        %>s     = $status          - Status after any redirections.
        %{pid}P = $pid             - The process id.
        %{tid}P = N/A              - The thread id, which is non-applicable to Nginx.
        %T      = ?                - The time in seconds to handle the request.
        %D      = $request_time    - The time in milliseconds to handle the request.
        %I      = ?                - The count of bytes received, including headers.
        %O      = $bytes_sent      - The count of bytes sent, including headers.
        %B      = $body_bytes_sent - The count of bytes sent excluding headers, but with a 0 for none instead of '-'.

    Looking for help filling in the missing variables, or confirmation that the missing variables are, in fact, unavailable in Nginx.
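
    Pulling together the mappings that do exist in stock nginx, a workable approximation is sketched below. Caveats: $request_time is in seconds with millisecond resolution (not milliseconds like %D); $request_length (bytes received, including request line and headers) covers %I; $connection_requests can stand in for keep-alive status by counting requests on the connection; pre-redirect status (%s) has no nginx equivalent; and several of these variables only appeared in later nginx releases, so check the log module docs for the exact versions.

        log_format blackbox '$remote_addr/$remote_port $connection_requests '
                            '[$time_local] "$request" $status $pid '
                            '$request_time $request_length/$bytes_sent/$body_bytes_sent';
        access_log /var/log/nginx/blackbox.log blackbox;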

  • Switch flooding when bonding interfaces in Linux

    - by John Philips
        +--------+
        | Host A |
        +----+---+
             |
           eth0 (AA:AA:AA:AA:AA:AA)
             |
        +----+-----+
        | Switch 1 |  (layer2/3)
        +----+-----+
             |
        +----+-----+
        | Switch 2 |
        +----+-----+
             |
        +----+--------------------------------------------+
        |                    Switch 3                      |
        +----+-----------+------------+--------------+----+
             |           |            |              |
           eth0        eth1         eth4           eth5
             |           |            |              |
        +----+-----------+------------+--------------+----+
        |                     Host B                       |
        |  eth0 (B0:B0:B0:B0:B0:B0)  eth4 (B4:B4:B4:B4:B4:B4)  |
        |  eth1 (B1:B1:B1:B1:B1:B1)  eth5 (B5:B5:B5:B5:B5:B5)  |
        +--------------------------------------------------+

    Topology overview: Host A has a single NIC. Host B has four NICs which are bonded using the balance-alb mode. Both hosts run RHEL 6.0, and both are on the same IPv4 subnet.

    Traffic analysis: Host A is sending data to Host B using an SQL database application.

        - Traffic from Host A to Host B: the source int/MAC is eth0/AA:AA:AA:AA:AA:AA; the destination int/MAC is eth5/B5:B5:B5:B5:B5:B5.
        - Traffic from Host B to Host A: the source int/MAC is eth0/B0:B0:B0:B0:B0:B0; the destination int/MAC is eth0/AA:AA:AA:AA:AA:AA.

    Once the TCP connection has been established, Host B sends no further frames out eth5. The MAC address of eth5 expires from the bridge tables of both Switch 1 and Switch 2. Switch 1 continues to receive frames from Host A which are destined for B5:B5:B5:B5:B5:B5. Because Switch 1 and Switch 2 no longer have bridge table entries for B5:B5:B5:B5:B5:B5, they flood the frames out all ports on the same VLAN (except for the one each frame came in on, of course).

    Reproduce: If you ping Host B from a workstation connected to either Switch 1 or 2, B5:B5:B5:B5:B5:B5 re-enters the bridge tables and the flooding stops. After five minutes (the default bridge table timeout), flooding resumes.

    Question: It is clear that on Host B, frames arrive on eth5 and exit out eth0. This seems OK, as that's what the Linux bonding algorithm is designed to do - balance incoming and outgoing traffic. But since the switch stops receiving frames with the source MAC of eth5, eth5 gets timed out of the bridge table, resulting in flooding.

    Is this normal? Why aren't any more frames originating from eth5? Is it because there is simply no other traffic going on (the only connection is a single large data transfer from Host A)? I've researched this for a long time and haven't found an answer. Documentation states that no switch changes are necessary when using mode 6 of Linux interface bonding (balance-alb). Is this behavior occurring because Host B doesn't send any further packets out of eth5, whereas in normal circumstances it's expected that it would?

    One solution is to set up a cron job which pings Host B to keep the bridge table entries from timing out, but that seems like a dirty hack.
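
    If changing the switch side is an option, one way out of the silent-slave problem is LACP (mode 4, 802.3ad): the switch then owns the load distribution across the slaves and never ages out a link for being quiet. A sketch of the RHEL 6 bonding options (this requires a matching LAG/port-channel configured on Switch 3, which balance-alb deliberately avoids needing):

        # /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch)
        BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast"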

    Read the article

  • OpenVPN multiple servers on the same subnet, high availability

    - by andre
    Hey everyone. Let me start by saying that my Linux experience isn't super awesome, but I can usually find my way around things easily. Over at work we have an OpenVPN setup that's been due for some improvement for a while now.

    The main server (tap mode) runs in our office, behind a rather slow DSL connection. The main problem is that, since I'm usually out of the office, every time I want to access something on the virtual network I have to go through that server to get anywhere else. We have two servers up on 100 Mbit connections that we use for development and production purposes, about three more servers in the office (one of them behind a separate T1 line for VoIP), and about two dozen clients who use the network on a daily basis from various locations. We've had situations where network routing (outside of our control) would not allow people to reach our main OpenVPN server while the other locations were reachable. Also, any time someone outside the office wants to fetch something from any of the servers (say, a 500 MB code repository), a whopping 20 KB/s download speed is just unacceptable these days (did I mention slow DSL? OK). We had to implement traffic shaping on this server, since maxing out this connection was fairly trivial.

    I had the thought of running two (or more) OpenVPN servers in the network. These would have to share the same subnet, though, as our application relies on the virtual network's IP addresses for some of its core functionality. The clients would also preferably retain the same IP addresses, but that's not vital. For simplicity, let's call the current server office and the second server I'm setting up cloud. Call the server on the T1 phone. This proved to be rather complex, because as soon as I connect to cloud, I cannot see office. Any routes to a server that would go through office also do not work while I'm connected to cloud (no ping, nothing), and vice versa. There are no iptables rules blocking the traffic, either.

    Recently I came across this article on linuxjournal, but the solution they provide seems to cover only two servers and is somewhat outdated (I can't even find much documentation; their wiki is offline). They also state that adding more servers would be a complex task.

    Ideally, I would like to keep the existing server office running the virtual network and also run the OpenVPN daemon on the cloud and phone servers (100 Mbit and very reliable connections, respectively), so that we're on safe ground in case of a hardware failure, DSL failure, etc. So, in essence, I'm looking for a highly available OpenVPN solution (fix, patch, hack, tweak, whatever you want to call it) that will accept connections on multiple hosts (2 or more) whilst keeping the same IP subnet regardless of the server you connect to.

    Thanks for reading, and sorry for the long post. I hope it gets the point across :P
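
    One commonly described approach - a sketch only, assuming the tap interfaces are bridged onto the same subnet (10.8.0.0/24 here, purely for illustration) - is to run each server with server-bridge and a non-overlapping client address pool, while clients list every server with multiple remote lines for failover:

    # office server (sketch)
    dev tap0
    server-bridge 10.8.0.1 255.255.255.0 10.8.0.50 10.8.0.99

    # cloud server (same subnet, non-overlapping pool)
    dev tap0
    server-bridge 10.8.0.2 255.255.255.0 10.8.0.100 10.8.0.149

    # client config: try each server in order, falling back on failure
    client
    dev tap0
    remote office.example.com 1194
    remote cloud.example.com 1194

    This doesn't by itself let clients on different servers see each other - that still requires the tap segments to be bridged or routed together - but it covers the failover half of the problem.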

    Read the article

  • How to encrypt dual boot windows 7 and xp (bitlocker, truecrypt combo?) on sdd (recommended?)

    - by therobyouknow
    I would like to set up a dual-boot Windows 7 and Windows XP laptop/notebook computer where each operating system's partition is fully encrypted. I would like to do this on an SSD - a 128 GB Crucial M4.

    My research

    Dual boot of TrueCrypt-encrypted OSs on one drive (not possible in TrueCrypt 7.x at the time of writing): This cannot be done with a standard TrueCrypt setup - it will only support encrypting one of the operating systems. I have tried this and also read about it here on superuser.com. However, I did see a solution that uses grub4dos as the initial bootloader to chain to separate TrueCrypt-encrypted OSs, in my case Windows 7 and Windows XP: http://yyzyyz.blogspot.co.uk/2010/06/truecrypt-how-to-encrypt-multiple.html. I am not going to consider this solution, as it relies upon custom bootloader code provided by the author. I would prefer a solution that can be fully understood, so that I can be sure nothing undesirable is occurring (i.e. malware, or simply bugs in the code). I would like to believe such a solution doesn't carry those risks, but I can't be sure.

    BitLocker and TrueCrypt combination - possible solution?

    So I am now considering a combination of encryption programs: I now aim to encrypt Windows XP with TrueCrypt and Windows 7 with BitLocker. Assuming the TrueCrypt bootloader can boot into non-TrueCrypt OSs (e.g. by hitting Escape to go to another menu), this solution may be viable.

    SSDs and encryption (use the fastest possible spinning hard disk instead?)

    I read in various superuser.com posts and elsewhere that current SSDs are not suited to whole-drive encryption, for various reasons: the impact on the optimizations that give SSDs their advantage over spinning hard disks (data-compression algorithms, for example); wear on the SSD, shortening its life; and security issues whereby data is repeated, as indicated in some TrueCrypt documentation. So I am now considering not using an SSD. But aiming for the fastest drive possible, I am considering the Western Digital Scorpio Black 2.5" 7200 rpm hard disk, as it appears to be top-rated among spinning platter-based hard drives (I don't work for Western Digital).

    Summary

    To achieve a whole-drive-encrypted dual boot of Windows 7 and Windows XP with minimal performance impact, I intend to use a combination of TrueCrypt and BitLocker on a top-rated conventional spinning platter-based hard disk.

    Questions

    - Will this achieve whole-disk encryption of the dual-boot Windows XP and Windows 7? Or can you suggest a simpler solution, including one that requires only TrueCrypt (BitLocker not being available on XP), or another encryption tool, including paid-for ones?
    - Will it provide the highest performance?
    - Am I correct to avoid using an SSD with encryption for the reasons I discovered? Are the concerns about SSDs and encryption still very real (some of the articles I read go back to 2010)?

    Thanks for your input!
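
    If the combination route works out, the BitLocker half might look something like the following - a sketch only, where the drive letters are assumptions, and hardware without a TPM must first be allowed via group policy to use a USB startup key:

    :: Check current encryption status (built-in Windows 7 tool)
    manage-bde -status
    :: Enable BitLocker on the Windows 7 system partition, storing the
    :: startup key on a USB drive at E: (hypothetical drive letter)
    manage-bde -on C: -StartupKey E:\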

    Read the article

  • Looking for personal scheduling software / todo list with rather particular requirements

    - by Cthulhu
    I've been scouring the web for a couple of (my boss') hours, looking for a piece of software that can organize my tasks in two ways.

    First, I have a list of bullet points / todo items I can do at any given time. Think of stuff like solve issue X, ask X about Y, write documentation about Z, etcetera. Second, I have a number of running projects I'd like to organize better, as in scheduled for a certain part of a day of the week. Ideally (I think), my day would be organized as 50% spent on projects and 50% on the other small things.

    Now, I don't like most calendar applications (such as Outlook & friends); their UI is too 'official', and it's not really easy to move stuff around (in my experience). I don't like most todo lists either - too static. I like new, fast and hip software.

    I've looked at GTD versions of TiddlyWiki, and I like mGSD for one particular feature: you can make lists of tasks and basically give them one of three statuses - Now (nothing required, you can do it right away), Waiting (you need someone or something before you can work on this), or the most gratifying of all, Done. I like that feature because it's a simple todo list, but it indicates more accurately the things you can do right now and the things you depend on someone else for. Anyways, that's just a small aspect of that program - I can't find a particularly good use for most of the other things in there.

    If there's something like that (maybe something that works even snappier, with a cleaner UI), combined with an easy-to-use bit of scheduling software (optionally separated into two applications, but preferably not), I think I'd like that.

    (Besides something like that, I also use several instances of Trac to monitor tasks and bugs for the various clients and projects I have to serve, and TaskCoach to monitor the amount of time I spend on each task / each client. An easy, low-maintenance time-tracking tool would be neat too.)

    Of course, the software has to be free to use. I don't like shareware, trials, or otherwise limited software. I could develop my own too, but I'm lazy like that, and there are a dozen other projects I'd like to do in my free time (none of which I actually do).

    Edit: I like David Seah's Printable CEO stuff; if something like that (with some video-game / instant-achievement gratification) exists in software, it'd be awesome.

    Read the article

< Previous Page | 301 302 303 304 305 306 307 308 309 310 311 312  | Next Page >