Search Results

Search found 11319 results on 453 pages for 'conversation group'.

Page 100/453

  • Fortigate Remote VPN : no matching gateway for new request

    - by Kedare
    I am trying to configure a Fortigate 60C to act as an IPSec endpoint for remote VPN. I configured it like this: SCR-F0-FGT100C-1 # diagnose vpn ike config vd: root/0 name: SCR-REMOTEVPN serial: 7 version: 1 type: dynamic mode: aggressive dpd: enable retry-count 3 interval 5000ms auth: psk dhgrp: 2 xauth: server-auto xauth-group: VPN-group interface: wan1 distance: 1 priority: 0 phase2s: SCR-REMOTEVPN-PH2 proto 0 src 0.0.0.0/0.0.0.0:0 dst 0.0.0.0/0.0.0.0:0 dhgrp 5 replay keep-alive dhcp policies: none Here is the configuration: config vpn ipsec phase1-interface edit "SCR-REMOTEVPN" set type dynamic set interface "wan1" set dhgrp 2 set xauthtype auto set mode aggressive set proposal aes256-sha1 aes256-md5 set authusrgrp "VPN-group" set psksecret ENC xxx next config vpn ipsec phase2-interface edit "SCR-REMOTEVPN-PH2" set keepalive enable set phase1name "SCR-REMOTEVPN" set proposal aes256-sha1 aes256-md5 set dhcp-ipsec enable next end But when I try to connect from a remote device (I tested with an Android phone), the phone fails to connect and the FortiGate returns this error: 2012-07-20 13:08:51 log_id=0101037124 type=event subtype=ipsec pri=error vd="root" msg="IPsec phase 1 error" action="negotiate" rem_ip=xxx loc_ip=xxx rem_port=1049 loc_port=500 out_intf="wan1" cookies="xxx" user="N/A" group="N/A" xauth_user="N/A" xauth_group="N/A" vpn_tunnel="N/A" status=negotiate_error error_reason=no matching gateway for new request peer_notif=INITIAL-CONTACT I tried searching on the web, but I did not find anything relevant to this. Do you have any idea what the problem could be? I tried many combinations of settings on the FortiGate without success.

    Read the article

  • Error cloning gitosis-admin on new setup

    - by michaelmior
    I have the following in my gitosis.conf. (Created via gitsosis-init < id_rsa.pub with the key from my laptop) [gitosis] loglevel = DEBUG [group gitosis-admin] writable = gitosis-admin members = michael@laptop When I try git clone git@SERVER:gitsos-admin.git, I get the following errors: Initialized empty Git repository in /home/michael/gitsos-admin/.git/ DEBUG:gitosis.serve.main:Got command "git-upload-pack 'gitsos-admin.git'" DEBUG:gitosis.access.haveAccess:Access check for 'michael@laptop' as 'writable' on 'gitsos-admin.git'... DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'gitsos-admin.git', new value 'gitsos-admin' DEBUG:gitosis.group.getMembership:found 'michael@laptop' in 'gitosis-admin' DEBUG:gitosis.access.haveAccess:Access check for 'michael@laptop' as 'writeable' on 'gitsos-admin.git'... DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'gitsos-admin.git', new value 'gitsos-admin' DEBUG:gitosis.group.getMembership:found 'michael@laptop' in 'gitosis-admin' DEBUG:gitosis.access.haveAccess:Access check for 'michael@laptop' as 'readonly' on 'gitsos-admin.git'... DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'gitsos-admin.git', new value 'gitsos-admin' DEBUG:gitosis.group.getMembership:found 'michael@laptop' in 'gitosis-admin' ERROR:gitosis.serve.main:Repository read access denied fatal: The remote end hung up unexpectedly I know my key is being accepted because I have tried logging in via SSH and although a terminal won't be allocated, the authorization works.

    Read the article

  • CopSSH SFTP -- limit users access to their home directory only

    - by bradvido
    Let me preface this by saying I've read and followed these instructions at the FAQ many times: http://www.itefix.no/i2/node/37 It does not do what the title claims... It allows every user access to every other user's home directory, as well as access to all subfolders below the copssh installation path. I'm only using this for SFTP access and I need my users to be sandboxed into only their home directory. If you know a fool-proof way to lock users down so they can see only their home directory and its subfolders, stop reading now and reply with the solution. The details: Here is exactly what I tried as I followed the FAQ. My copSSH installation directory is: C:\Program Files\CopSSH net localgroup sftp_users /ADD **Create a user group to hold all my SFTP users cacls c:\ /c /e /t /d sftp_users **For that group, deny access at the top level and all levels below cacls "C:\Program Files\CopSSH" /c /e /t /r sftp_users **Allow my user group access to the copSSH installation directory and its subdirectories For each SFTP user, I create a new Windows user account, then I: net localgroup sftp_users sftp_user_1 /add **Add my user to the group I've created Open the activate user wizard for CopSSH, choosing the user, "/bin/sftponly" and Remove copssh home directory if it exists **Remains checked Create keys for public key authentication **Remains checked Create link to user's real home directory **Remains checked This works; however, every user has access to every other user's home directory as well as the CopSSH root directory.... So I tried denying access for all users to the user home directory: cacls "C:\Program Files\CopSSH\home" /c /e /t /d sftp_users **Deny access for users to the user home directory Then I tried adding permissions on a user-by-user basis for each user's home\username folder. However, these permissions were not allowed by Windows because the deny rule I created above at the home directory was being inherited and was overriding my allow rule. The next step for me would be to remove the deny rule at the home directory and for each user folder, add a deny rule for every user it doesn't belong to, and add an allow rule for the one user it does belong to. However, as my user list gets long, this will become very cumbersome. Thanks for the help!
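
    A possible alternative to the blanket deny, sketched here as an untested assumption (the user name and path are placeholders): break inheritance on each user's own home folder and grant only that user access, so no deny rule has to be inherited and overridden.

      REM Hypothetical sketch: run once per SFTP user; sftp_user_1 is a placeholder.
      REM Stop ACEs from the parent "home" folder flowing down, keeping copies of
      REM the current entries so SYSTEM/Administrators retain access.
      icacls "C:\Program Files\CopSSH\home\sftp_user_1" /inheritance:d

      REM Drop the group-wide entry on this folder, then grant only its owner.
      REM (Depending on the inherited entries, BUILTIN\Users may need removing too.)
      icacls "C:\Program Files\CopSSH\home\sftp_user_1" /remove sftp_users
      icacls "C:\Program Files\CopSSH\home\sftp_user_1" /grant sftp_user_1:(OI)(CI)M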

    Read the article

  • How to get remote firewall administration working with Windows Server Core 2008 R2?

    - by Daniel15
    I'm setting up a Windows Server Core 2008 R2 installation in a VMware virtual machine before setting it up on a live VPS. I've gotten remote administration via MMC working on my computer (a PC running Windows 7) for things like event logs, but I can't seem to get the firewall administration working. No matter what I do, I get the following error message: You do not have the correct permissions to open the Windows Firewall with Advanced Security console. You must be a member of the Administrators group or the Network Operators group to perform this task. For more information, contact your system administrator. Error code: 0x5. I've used cmdkey to add valid server credentials on my computer, and enabled remote management with the following commands: netsh advfirewall firewall set rule group="remote administration" new enable=yes netsh advfirewall firewall set rule group="windows firewall remote management" new enable=yes netsh advfirewall set currentprofile settings remotemanagement enable I am not running on a domain (just a workgroup); this is the only Windows Server 2008 computer I have. I've tried turning off the firewall completely, but remote administration is still failing. How do I debug this issue? Does anyone know how to fix it? I found a few forum topics about it (e.g. Remotely managing Windows Firewall on Server Core gives access denied (error 0x5) on Windows Server TechCenter) but they didn't help (I've already tried most of the fixes listed).

    Read the article

  • Giving Select Windows Domain Users Symbolic Link Privilege

    - by fp0n
    I would like to set up select users on our domain to have the ability to create symbolic links on local NTFS drives and network shares without needing to run as Administrator, as part of an application which will call the CreateSymbolicLink() API directly. The default configuration for our users is to be Administrator of their computer and I think I am fighting UAC to make the privileges work the way that I want because of that. I found this link on MSDN: http://social.msdn.microsoft.com/Forums/en-SG/windowssdk/thread/fa504848-a5ea-4e84-99b7-0eb4e469cbef which describes the interaction between the SeCreateSymbolicLinkPrivilege, UAC and a domain but really does not have a solution. Here are the four options I've come up with: 1) Create a new group, give the SeCreateSymbolicLinkPrivilege to the group and assign users to the group 2) Give each individual user (2 now, more later) the privilege 3) Give the privilege to the default User group which opens it up to all Users 4) Change config so Users are not Admins by default (probably would work but not likely) Based on my testing, only 3 works for me and that is the least desirable but I've only got a local server to test with, not a domain. I need to recommend to the admin how to set this up and also have something that we can easily explain to other users of our application that are on their own domain or not on a domain. The other option seems to be to create a Service that runs with a SYSTEM account that creates the links for the application but I'd rather not go that route. Thanks.

    Read the article

  • OpenVPN Bridge LAN-to-LAN Configuration?

    - by Shad Reese
    I'm trying to configure an OpenVPN bridge LAN-to-LAN setup. Currently, I have the OpenVPN bridge Server/Client setup up running. On the server-side my br-lan interface has tap0, eth0, and wlan0 in the bridge group. On the client-side the br-lan interface has eth0 and wlan0 in the bridge group, the client tap0 is outside of the br-lan group. Currently the two bridge groups are connected via the wlanO interfaces (server-side is the Access Point - AP and the client-side is the wireless client). My goal is to connect the two bridge groups with a wireless VPN pipe. My network configuration: Server: br-lan: 10.4.96.50 Client: br-lan: 10.4.96.75 tap0: 10.4.96.100 <---- issued by the VPN server. Unfortunately, I'm stuck with using a bridge instead of a routed OpenVPN setup. My question is how (if possible) do I add the client tap0 interface to the client bridge group, as to ensure all traffic between the server/client bridge groups is using the VPN pipe? SERVER CONFIG FILE. config openvpn sample_server # Set to 1 to enable this instance: option enable 1 option port 1194 option proto udp option dev tap0 option key /etc/easy-rsa/keys/server.key option dh /etc/easy-rsa/keys/dh1024.pem option ifconfig_pool_persist /tmp/ipp.txt option server_bridge "10.4.96.50 255.255.255.0 10.4.96.100 10.4.96.200" list push "redirect-gateway local def1" list push "dhcp-option DNS 10.4.96.14" option duplicate_cn 1 option comp_lzo 1 option max_clients 100 option log /tmp/openvpn.log option verb 3 CLIENT CONFIG FILE: config 'openvpn' 'sample_client' option 'enable' '1' option 'client' '1' option 'dev' 'tap' option 'proto' 'udp' list 'remote' '10.4.96.50 1194' option 'status' /tmp/openvpn-status.log option 'log' /tmp/openvpn.log option 'ca' '/etc/easy-rsa/keys/ca.crt' option 'cert' '/etc/easy-rsa/keys/client.crt' option 'key' '/etc/easy-rsa/keys/client.key' option 'comp_lzo' '1' option 'verb' '5' Thanks in advance,
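
    One way to pull tap0 into the client bridge, offered only as an untested sketch (the script path and the exact UCI option names are assumptions): have OpenVPN run an up script that attaches the tap device to br-lan once the tunnel comes up, e.g. by adding option up '/etc/openvpn/bridge-up.sh' and option script_security '2' to the client stanza and creating something like:

      #!/bin/sh
      # /etc/openvpn/bridge-up.sh -- hypothetical helper; $1 is the tap device
      # name that OpenVPN passes to its up script (tap0 in this setup).
      ifconfig "$1" 0.0.0.0 promisc up   # bring the tap up without an address
      brctl addif br-lan "$1"            # bridge it with eth0/wlan0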

    Read the article

  • need help with automating a CMD java tool which queries alexa AWS using batch

    - by Eli.C
    Hi everyone, I need to get all available info on 600 URLs from "Alexa Web Information Service", I downloaded the java tool and I'm able to run a single query each time with a single switch/Response Group. I would like to ask how to write a batch file that would automate the process? The java tool runs from the CMD with the following: C:\>java UrlInfo (key1) (key2) (URL) (Response Group) UrlInfo - constant key1 - constant key2 - constant URL - variable (I guess I need to use the "<" sign to read from a file) Response Group - variable (14 total, and I need to run each Response Group on each of the URLs once). The app returns data in clear text formatted as XML after each query; here is an example: C:\>java UrlInfo (key1) (key2) www.url.com Rank Response: <?xml version="1.0"?> <aws:UrlInfoResponse xmlns:aws="http://alexa.amazonaws.com/doc/2005-10-05/"> <aws:Response xmlns:aws="http://awis.amazonaws.com/doc/2005-07-11"> <aws:OperationRequest> <aws:RequestId>ec2b6-e8ae-b392</aws:RequestId> </aws:OperationRequest> <aws:UrlInfoResult> <aws:Alexa> <aws:TrafficData> <aws:DataUrl type="canonical">url.com/</aws:DataUrl> <aws:Rank>472906</aws:Rank> </aws:TrafficData> </aws:Alexa> </aws:UrlInfoResult> <aws:ResponseStatus xmlns:aws="http://alexa.amazonaws.com/doc/2005-10-05/"> <aws:StatusCode>Success</aws:StatusCode> </aws:ResponseStatus> </aws:Response> </aws:UrlInfoResponse> Any help would be really appreciated. Thanks and regards, Eli.C
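
    A sketch of the kind of batch file being asked for, untested and with made-up file names: it assumes a urls.txt with one URL per line and a groups.txt with one Response Group name per line, and loops over both (KEY1/KEY2 are placeholders).

      @echo off
      REM Hypothetical automation sketch: query every Response Group for every URL.
      set KEY1=your-access-key-id
      set KEY2=your-secret-key

      for /f "usebackq delims=" %%U in ("urls.txt") do (
          for /f "usebackq delims=" %%G in ("groups.txt") do (
              echo Querying %%U with Response Group %%G
              java UrlInfo %KEY1% %KEY2% %%U %%G >> results.xml
          )
      )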

    Read the article

  • "chown mysql:mysql /data/tmp" command

    - by Mellon
    I am on a Linux Ubuntu machine with MySQL installed. If there is a MySQL installation on an Ubuntu machine, I saw some people doing the following thing: sudo chown mysql:mysql /data/tmp I get confused. I know the meaning of the above command, which is to change the owner of /data/tmp to user 'mysql' and change the group of it to 'mysql' group. But (my questions): 1. Why would one run the above command? If I create a table in my_db database, by default, there will be .frm, .MYD, and .MYI files (data files) created automatically by MySQL under /var/lib/mysql/my_db/. So, does the above command change the default MySQL data directory to /data/tmp/ instead of /var/lib/mysql/my_db/? Basically, I would like to know the purpose and effect of the above command. (better with examples) 2. Where does the 'mysql' owner and group come from? Does the installation of MySQL on a Linux machine automatically create the 'mysql' user and group? Or do people need to manually create a mysql account on the Linux machine?
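
    To illustrate question 1 with a hypothetical example (the paths are illustrative, not a recommendation): the chown by itself does not move or reconfigure anything; it only matters once MySQL's configuration actually points at /data/tmp, for example via datadir or tmpdir in my.cnf, because mysqld runs as the mysql user and must be able to write to whatever directory it is given.

      # Hypothetical example of why the chown is needed when relocating data:
      sudo mkdir -p /data/tmp
      sudo chown mysql:mysql /data/tmp     # let the mysqld process write here

      # Only after my.cnf points at the new location does MySQL actually use it:
      #   [mysqld]
      #   datadir = /data/tmp
      sudo service mysql restart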

    Read the article

  • email output of powershell script

    - by Gordon Carlisle
    I found this wonderful script that outputs the status of the current DFS backlog to the powershell console. This works great, but I need the script to email me so I can schedule it to run nightly. I have tried using the Send-MailMessage command, but can't get it to work. Mainly because my powershell skills are very weak. I believe most of the issue revolve around the script using the Write-Host command. While the coloring is nice I would much rather have it email me the results. I also need the solution to be able to specify a mail server since the dfs servers don't have email capability. Any help or tips are welcome and appreciated. Here is the code. $RGroups = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query "SELECT * FROM DfsrReplicationGroupConfig" $ComputerName=$env:ComputerName $Succ=0 $Warn=0 $Err=0 foreach ($Group in $RGroups) { $RGFoldersWMIQ = "SELECT * FROM DfsrReplicatedFolderConfig WHERE ReplicationGroupGUID='" + $Group.ReplicationGroupGUID + "'" $RGFolders = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query $RGFoldersWMIQ $RGConnectionsWMIQ = "SELECT * FROM DfsrConnectionConfig WHERE ReplicationGroupGUID='"+ $Group.ReplicationGroupGUID + "'" $RGConnections = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query $RGConnectionsWMIQ foreach ($Connection in $RGConnections) { $ConnectionName = $Connection.PartnerName.Trim() if ($Connection.Enabled -eq $True) { if (((New-Object System.Net.NetworkInformation.ping).send("$ConnectionName")).Status -eq "Success") { foreach ($Folder in $RGFolders) { $RGName = $Group.ReplicationGroupName $RFName = $Folder.ReplicatedFolderName if ($Connection.Inbound -eq $True) { $SendingMember = $ConnectionName $ReceivingMember = $ComputerName $Direction="inbound" } else { $SendingMember = $ComputerName $ReceivingMember = $ConnectionName $Direction="outbound" } $BLCommand = "dfsrdiag Backlog /RGName:'" + $RGName + "' /RFName:'" + $RFName + "' /SendingMember:" + $SendingMember + " /ReceivingMember:" + $ReceivingMember $Backlog = Invoke-Expression -Command $BLCommand $BackLogFilecount = 0 foreach ($item in $Backlog) { if ($item -ilike "*Backlog File count*") { $BacklogFileCount = [int]$Item.Split(":")[1].Trim() } } if ($BacklogFileCount -eq 0) { $Color="white" $Succ=$Succ+1 } elseif ($BacklogFilecount -lt 10) { $Color="yellow" $Warn=$Warn+1 } else { $Color="red" $Err=$Err+1 } Write-Host "$BacklogFileCount files in backlog $SendingMember->$ReceivingMember for $RGName" -fore $Color } # Closing iterate through all folders } # Closing If replies to ping } # Closing If Connection enabled } # Closing iteration through all connections } # Closing iteration through all groups Write-Host "$Succ successful, $Warn warnings and $Err errors from $($Succ+$Warn+$Err) replications." Thanks, Gordon
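
    A minimal sketch of one way to mail the report instead of (or alongside) printing it, assuming nothing about the environment -- the addresses and SMTP server below are placeholders: collect each line into an array next to the existing Write-Host calls, then send the joined text with Send-MailMessage and an explicit -SmtpServer.

      # Hypothetical additions -- accumulate the report text, then mail it.
      $report = @()

      # Inside the folder loop, next to the existing Write-Host line:
      #   $report += "$BacklogFileCount files in backlog $SendingMember->$ReceivingMember for $RGName"

      # After the final summary line:
      $report += "$Succ successful, $Warn warnings and $Err errors from $($Succ+$Warn+$Err) replications."

      Send-MailMessage -From "[email protected]" -To "[email protected]" `
          -Subject "DFS backlog report from $ComputerName" `
          -Body ($report -join "`r`n") -SmtpServer "mail.example.com"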

    Read the article

  • RPM Spec How to specify in package so that previous RPM is removed

    - by user123819
    Question: What do I put in the foo.spec file so that the rpms will remove the previous rpm before installing? Description: I have created a spec file that creates rpms for a few packages that use the same source and provide the same service, each with a slightly different configuration. E.g. they each provide the same "capability". Here's an example of the essentials that my .spec file looks like: %define version 1234 %define name foo %define release 1 %define pkgname %{name}-%{version}-%{release} Name: %{name} Version: %{version} Release: %{release} Provides: %{name} %package one Summary: Summary for foo-one Group: %{group} Obsoletes: %{name} <= %{version} Provides: %{name} = %{version} %description one Blah blah blah %package two Summary: Summary for foo-two Group: %{group} Obsoletes: %{name} <= %{version} Provides: %{name} = %{version} %description two Blah blah blah # %prep, %install, %build and %clean are pretty simple # and omitted here for brevity's sake %files one %defattr(-,root,root,-) %{_prefix}/%{pkgname} %files two %defattr(-,root,root,-) %{_prefix}/%{pkgname} When I install the first one, it installs ok. I then remove the first one, and then install the second one; that works fine too. I then install the first one, followed immediately by installing the second one, and they both install, one over the other, but I was expecting that the first one would be removed before the second was installed. Example session: # rpmbuild foo and copy rpms to yum repo $ yum install foo-one ... $ yum list installed|grep foo foo-one.noarch 1234-1 @myrepo $ yum install foo-two ...[Should say that it is removing foo-one, but does not]... $ yum list installed|grep foo foo-one.noarch 1234-1 @myrepo foo-two.noarch 1234-1 @myrepo $ rpm -q --provides foo-one foo = 1234 foo-one = 1234-1 $ rpm -q --provides foo-two foo = 1234 foo-two = 1234-1 What do I put in the foo.spec file so that the rpms will remove the previous rpm before installing? Thank you, .dave.
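
    One pattern worth trying, shown only as an untested sketch: as far as I know, Obsoletes matches package names rather than Provides, and nothing here is actually named plain foo, so foo-one and foo-two never obsolete each other. Making each subpackage obsolete its sibling by name may give the replace-on-install behaviour:

      # Hypothetical sketch -- each subpackage explicitly obsoletes the other,
      # so installing one removes the other. Version macros follow the question.
      %package one
      Summary: Summary for foo-one
      Group: %{group}
      Obsoletes: %{name}-two <= %{version}-%{release}
      Provides: %{name} = %{version}

      %package two
      Summary: Summary for foo-two
      Group: %{group}
      Obsoletes: %{name}-one <= %{version}-%{release}
      Provides: %{name} = %{version}

    Mutual Obsoletes can make yum behave oddly on updates, so a mutual Conflicts (with manual removal of the old flavour) is the more conservative variant.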

    Read the article

  • Pure-FTPD accounts and permissions for websites

    - by EddyR
    I'm having trouble setting up the appropriate Pure-FTPD accounts and permissions - I have the following sites set up on my Debian server. /var/www/site1 /var/www/site2 /var/www/wordpress The permissions are 775 for folders and 664 for files. The owner is currently admin:ftpgroup. WordPress also requires special permissions for file uploads in /var/www/wordpress/wp-content/uploads. What I need is: a general admin group with access to /var/www a group for each site (site1, site2, wordpress) and a group or user, not www-data (?), with permissions to write files to the WordPress upload folder I ask because restrictions on Linux groups (you can't have groups in groups) make it a little bit confusing, and also because many of the tutorial sites have conflicting information: some recommend the use of www-data and some don't. Also, I'm not sure if I understand how Pure-FTP is supposed to work exactly. I create a Pure-FTPD account and assign it a directory (/var/www) and a system user (ftpuser) and group (ftpgroup): Can I assign more than 1 path? For example, if a user requires access to 2 sites. Is it better to assign ftpgroup to all ftp locations and let Pure-FTPD manage account access? Why would anyone have more than 1 ftpuser or ftpgroup? (Doesn't it mean users have access to everyone else's files if they could get there?) Sorry for so many questions at once. I've been reading lots of tutorials but I think they've ended up making me more confused!

    Read the article

  • How to list rpm packages/subpackages sorted by total size

    - by smci
    Looking for an easy way to postprocess rpm -q output so it reports the total size of all subpackages matching a regexp, e.g. see the aspell* example below. (Short of scripting it with Python/Perl/awk, which is the next step) (Motivation: I'm trying to remove a few GB of unnecessary packages from a CentOS install, so I'm trying to track down things that are a) large b) unnecessary and c) not dependencies of anything useful like gnome. Ultimately I want to pipe the output through sort -n to see what the space hogs are, before doing rpm -e) My reporting command looks like [1]: cat unwanted | xargs rpm -q --qf '%9.{size} %{name}\n' > unwanted.size and here's just one example where I'd like to see rpm's total for all aspell* subpackages: root# rpm -q --qf '%9.{size} %{name}\n' `rpm -qa | grep aspell` 1040974 aspell 16417158 aspell-es 4862676 aspell-sv 4334067 aspell-en 23329116 aspell-fr 13075210 aspell-de 39342410 aspell-it 8655094 aspell-ca 62267635 aspell-cs 16714477 aspell-da 17579484 aspell-el 10625591 aspell-no 60719347 aspell-pl 12907088 aspell-pt 8007946 aspell-nl 9425163 aspell-cy Three extra nice-to-have things: list the dependencies/depending packages of each group (so I can figure out the uninstall order) Also, if you could group them by package group, that would be totally neat. Human-readable size units like 'M'/'G' (like ls -h does). Can be done with regexp and rounding on the size field. Footnote: I'm surprised up2date and yum don't add this sort of intelligence. Ideally you would want to see a tree of group-package-subpackage, with rolled-up sizes. Footnote 2: I see yum erase aspell* does actually produce this summary - but not in a query command. [1] where unwanted.txt is a text file of unnecessary packages obtained by diffing the output of: yum list installed | sed -e 's/\..*//g' > installed.txt diff --suppress-common-lines centos4_minimal.txt installed.txt | grep '>' and centos4_minimal.txt came from the Google doc given by that helpful blogger.
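
    For the regexp-total part, a pipeline roughly like the following may already be enough (untested sketch; the awk grouping key is just the text before the first dash, which happens to work for aspell*):

      # Hypothetical sketch: total installed size per package "family".
      rpm -qa --qf '%{size} %{name}\n' | awk '{ split($2, p, "-"); t[p[1]] += $1 } END { for (f in t) printf "%12d %s\n", t[f], f }' | sort -n

      # Or just the total for one pattern, e.g. everything whose name starts with aspell:
      rpm -qa --qf '%{size} %{name}\n' | awk '$2 ~ /^aspell/ { s += $1 } END { print s }'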

    Read the article

  • Missing Home Folder XP Clients 2008R2 Domain

    - by minamhere
    We just completed a migration from Server 2003 to Server 2008R2. Everything seems to have gone well except that many of our desktops have stopped mapping the Home Folder as set in Active Directory. Other mappings that are defined on individual clients are mapping just fine, these mappings are all on the same file server as the failing Home Folders. Half of the users are on 1 file server and half are on another. Users from both servers are having this problem. I have enabled the Group Policy setting to "Wait for network before logging in". I enabled the policy to "Run Logon Scripts synchronously". There are no errors on the Domain Controller or either File Server. When I enabled Group Policy Preferences as an attempted workaround, I get this error: The user 'V:' preference item in the '<Policy Name>' Group Policy object did not apply because it failed with error code '0x800708ca This network connection does not exist.' This error was suppressed. This seems to indicate that the network connection is not ready by the time Group Policy is processed. But isn't this the point of the "Wait before logging in" and "Run Logon scripts synchronously" settings? Some other background facts: The new Server 2008R2 installation is a Virtual Machine. It is on a new Subnet in a different building from the old server. DNS and DHCP were also migrated from the old DC to this new DC. These Home Folders were all working properly before the migration. Are there new security restrictions/policies in Server 2008R2 that might be causing this? Is there a way to check whether I have an underlying network connectivity issue? Maybe moving the server to the new building is causing a delay/timeout? Any thoughts or ideas on what could be causing this or how I can resolve this? Thanks.

    Read the article

  • NTFS: Deny all permissions for all files, except where explicitly added

    - by Simon
    I'm running a sandboxed application as a local user. I now want to deny almost all file system permissions for this user to secure the system, except for a few working folders and some system DLLs (I'll call this set of files & directories X below). The sandbox user is not in any group. So it shouldn't have any permissions, right? Wrong, because all "Authenticated Users" are a member of the local "Users" group, and that group has access to almost everything. I thought about recursively adding deny ACL-entries to all files and directories and remove them manually from X. But this seems excessive. I also thought about removing "Authenticated Users" from the "Users" group. But I'm afraid of unintended side-effects. It's likely that other things rely on this. Is this correct? Are there better ways to do this? How would you limit the filesystem permissions of a (very) non-trustworthy account?

    Read the article

  • Mirror a Dropbox repository in Sharepoint and restrict access

    - by Dan Robson
    I'm looking for an elegant way to solve the following problem: My development team uses Dropbox for sharing documents amongst our immediate group. We'd like to put some of those documents into a SharePoint repository for the larger group to be able to access, as granting Dropbox access to the group at large is not ideal. However, we'd like to continue to be able to propagate changes to the SharePoint site simply by updating the files in Dropbox on our local client machines, and also vice versa - users granted access on SharePoint that update files in that workspace should be able to save their files and the changes should appear automatically on our client PC's. I've already done the organization of the folders so that in Dropbox, there exists a SharePoint folder that looks something like this: SharePoint ----Team --------Restricted Access Folders ----Organization --------Open Access Folders The Dropbox master account and the SharePoint master account are both set up on my file server. Unfortunately, Dropbox doesn't seem to allow syncing of folders anywhere above the \Dropbox\ part of the file system's hierarchy - or all I would have to do is find where the Sharepoint repository is maintained locally, and I'd be golden. So it seems I have to do some sort of 2-way synchronization between the Dropbox folder on the file server and the SharePoint folder on the file server. I messed around with Microsoft SyncToy, but it seems to be lacking in the area of real-time updating - and as much as I love rsync, I've had nothing but bad luck with it on Windows, and again, it has to be kicked off manually or through Task Scheduler - and I just have a feeling if I go down that route, it's only a matter of time before I get conflicts all over the place in either Dropbox, SharePoint, or both. I really want something that's going to watch both folders, and when one item changes, the other automatically updates in "real-time". It's quite possible I'm going down the entirely wrong route, which is why I'm asking the question. For simplicity's sake, I'll restate the goal: To be able to update Dropbox and have it viewable on the SharePoint site, or to update the SharePoint site and have it viewable in Dropbox. And since I'm a SharePoint noob, I'll also need help hiding the "Team" subfolder from everyone not in a specific group in AD.

    Read the article

  • Read access to Active Directory property (uSNCreated)

    - by Tom Ligda
    I have an issue with read access to the uSNCreated property when doing LDAP searches. If I do an LDAP search with a user that is a member of the Domain Admins group (UserA), I can see the uSNCreated property for every user. The problem is that if I do an LDAP search with a user (UserB) that is not a member of the Domain Admins group, I can see the uSNCreated property for some users (UserGroupA) and not for some users (UserGroupB). When I look at the users in UserGroupA and compare them to the users in UserGroupB, I see a crucial difference in the "Security" tab. The users in UserGroupA have the "Include inheritable permissions from this object's parent" unchecked. The users in UserGroupB have that option checked. I also noticed that the users in UserGroupA are users that were created earlier. The users in UserGroupB are users created recently. It's difficult to quantify, but I estimate the boundary in creation time between the users in UserGroupA and UserGroupB is about 6 months ago. What can cause the user creation to default to having that security property checked as opposed to unchecked? A while back (maybe around 6 months ago?) I changed the domain functional level from Windows Server 2003 to Windows Server 2008 R2. Would that have had this effect? (I can't exactly downgrade the domain functional level to test it out.) Is this security property actually the cause of the issue with read access to the uSNCreated property on LDAP searches? It seems correlated, but I'm not sure about causation. What I want in the end is for all authenticated users to have read access to the uSNCreated property for all users when doing an LDAP search. I would also be OK if I could grant read access for that property to an AD group. Then I can control access by adding members to the group.

    Read the article

  • Body of email breaks distribution list in exchange?

    - by widgisoft
    Hi, I have a very odd problem that I'm not sure is a programming issue or a server issue :-p. Basically I'm sending an email to an Exchange distribution list that includes a PHP stack trace; during certain faults the trace includes really high level information such as the machine's environment variables (during file reads, etc.). I went through a copy of the email line by line until the email sent and it appears the line: [SUDO_COMMAND] => /etc/init.d/httpd restart is the culprit. Adding a string replacement in before the email is sent allows a successful send. What I don't understand is WHY this stream of characters is causing the issue ONLY on the distribution email. If I send the email to myself as well, i.e. "[email protected]; [email protected]", then I get the email fine. Re-ordering the list doesn't make a difference; the group never gets the email. Because the individual gets the email and not the group, I'm assuming the fault is with Exchange and some rogue filtering - I've gone through it with the sysadmins and there's no filtering of any sort on that group... so maybe it's a bug? I can't find anyone else having recorded this specific fault so I figured I'd open it here. For now I'm just not using the distribution list but it'd be nice to eventually find the solution. Many thanks, Chris

    Read the article

  • Problem with network policy rule in Network Policy Server

    - by Robert Moir
    Trying to configure RADIUS for a college network, and have run into the following frustration: I can't set an "AND" condition for group membership of authenticated objects in the network policy rules, e.g. I'm trying to create a NPS rule that says, essentially "IF user is a member of [list of user groups] And is authenticating from a computer in [wireless computer group] then allow access. The screenshot above is the rule I am having trouble with. It does not work as written. The rule underneath it, which is identical in every aspect except the conditions rule, does work. I've tried changing the non-working rule to define each set of groups as "Windows group" rather than specifically as machine and user groups, with no change. With the "faulty" rule enabled and the working one disabled, any attempt to login with a valid account from a machine that is in the wireless computers group gives a 6273 audit event in the windows event log: Reason code 66 - "the user attempted to use an authentication method that is not enabled on the matching network policy". Disabling the "faulty" rule, enabling the other rule and logging in with the same account and computer works just fine.

    Read the article

  • Apache httpd LDAP integration

    - by David W.
    I am configuring a CollabNet Subversion integration. I have the following collabnet_subversion.conf file: <Location /svn> DAV svn SVNParentPath /mnt/svn/new_repos SVNListParentPath on AuthName "VegiBanc Source Repository" AuthType basic AuthzLDAPAuthoritative off AuthBasicProvider ldap AuthLDAPURL "ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=vegibanc,DC=vegibanc,DC=com" AuthLDAPBindPassword "swordfish" </Location> This works great. Any user in our Active Directory can access our Subversion repository. Now, I want to limit this to only people in the Active Directory group Development: <Location /svn> DAV svn SVNParentPath /mnt/svn/new_repos SVNListParentPath on AuthName "VegiBanc Source Repository" AuthType basic AuthzLDAPAuthoritative off AuthBasicProvider ldap AuthLDAPURL "ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=VegiBanc,DC=vegibanc,DC=com" AuthLDAPBindPassword "swordfish" Require ldap-group CN=Development OU=Security Groups OU=VegiBanc, dc=vegibanc, dc=com </Location> I added Require ldap-group, but now no one can log in. I have LogLevel set to debug, but all I get is this in my error_log (Single line broken up for easier reading): [Thu Oct 11 13:09:28 2012] [info] [client 10.55.9.45] [6752] vauth_ldap authenticate: user dweintraub authentication failed; URI /svn/ [ldap_search_ext_s() for user failed][Bad search filter] And, I get this in my access_log: 10.55.9.45 - - [11/Oct/2012:13:09:27 -0500] "GET /svn/ HTTP/1.1" 401 401 10.55.9.45 - dweintraub [11/Oct/2012:13:09:28 -0500] "GET /svn/ HTTP/1.1" 500 535 Yes, I am in that group. (Or at least, how can I confirm that, just to make sure that's not the issue? I have the Sysinternals Suite ADExplorer; it's where I'm getting all of my info.)
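
    One thing that may explain the "Bad search filter" error, offered as a guess rather than a confirmed fix: the group DN in the Require line is space-separated where the AuthLDAPBindDN above it is comma-separated. If the group really lives at that location, the corrected line would look like:

      # Hypothetical correction: comma-separate every RDN in the group DN.
      Require ldap-group CN=Development,OU=Security Groups,OU=VegiBanc,DC=vegibanc,DC=com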

    Read the article

  • Any e-mail client with additional grouping functionality on Mac OS X?

    - by harald
    Hello, I'm very unhappy with my mail experience. I'm receiving a lot of mails from various clients and projects and need a way to better organize them. I would really love to have additional grouping-functionality with the e-mail client. Currently I'm using Mac OS X's Mail.app, but I am not bound to this. So I am open to any Mail.app-plugin or independent mail application commercial or not for Mac OS X -- should support IMAP, though -- but I think this should not be a problem nowadays? With Mail.app i'm doing the following: group by thread sort by datetime descending What I would love to have is not only additional tagging-functionality for e-mails -- I know, that at least thunderbird and postbox support them. I would love to have some additional grouping functionality for these tags -- inside the main mail window. So maybe I can summarize the important points: "native" Mac OS X mail client (no web-mailer please) automatic-tagging functionality (eg.: auto-apply tags by some kind of filter) easy access to tagged mails Easy access to tagged mails: I would really love to have some additional grouping functionality in the mail folders. The mail application should put all tagged mails in a group -- the groups should be sorted by last received e-mail. Inside the group I would still like to have the possiblity to group by thread. or It would be ok to have a list of tags (topics) on the left pane of the mail client. For example postbox: There is the 'accounts-section', there is the 'folders-section' -- but why is there no 'topics-section'? Thanks very much,

    Read the article

  • Windows XP laptop doesn't appear in WSUS All computers list

    - by George
    I have this one laptop that doesn't appear in WSUS all computers list. We have about 23-25 PCs/laptops/servers in the network, all, but one are listed in WSUS. This is what I have done so far: 1) Changing Updates on local PC: Go to your Windows XP client and start a new Microsoft Management Console (MMC). At Start, Run, type MMC. Use Ctrl+M to add a new snap-in. Click Add, and then add the Group Policy Object Editor for the Local Computer. Click Close, and then click OK. Expand the Local Computer Policy. Under Computer Configuration, go to Administrative Templates, Windows Components, Windows Update. In the right-hand pane, double-click Specify intranet Microsoft update service location. Configure the settings to reflect my WSUS server. Click OK and then close the MMC without saving it. executed wuauclt.exe /detectnow 2) Edited registry key to be pushed to the PCs using GPO [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate] "WUServer"=http://wsusserver "TargetGroupEnabled"=dword:00000001 "TargetGroup"="WINXP" "WUStatusServer"=http://wsuswerver 3) executed wuauclt /resetauthorization /detectnow 4)Synchronised and refreshed the group I am running out of ideas here. The client is running Windows XP pro, WSUS version is 3.0 and is running on Windows 2008 R2 64-bit. Please, help! Thanks! EDIT 13.IX.2012 @ 15.40 I should have also mentioned that we do have a Windows Update GPO for workstations group and that laptop is a part of that group.

    Read the article

  • How to manage mounted partitions (fstab + mount points) from puppet

    - by Cristian Ciupitu
    I want to manage the mounted partitions from puppet, which includes both modifying /etc/fstab and creating the directories used as mount points. The mount resource type updates fstab just fine, but using file for creating the mount points is a bit tricky. For example, by default the owner of the directory is root and if the root (/) of the mounted partition has another owner, puppet will try to change it and I don't want this. I know that I can set the owner of that directory, but why should I care what's on the mounted partition? All I want to do is mount it. Is there a way to make puppet not care about the permissions of the directory used as the mount point? This is what I'm using right now: define extra_mount_point( $device, $location = "/mnt", $fstype = "xfs", $owner = "root", $group = "root", $mode = 0755, $seltype = "public_content_t", $options = "ro,relatime,nosuid,nodev,noexec", ) { file { "${location}/${name}": ensure => directory, owner => "${owner}", group => "${group}", mode => $mode, seltype => "${seltype}", } mount { "${location}/${name}": atboot => true, ensure => mounted, device => "${device}", fstype => "${fstype}", options => "${options}", dump => 0, pass => 2, require => File["${location}/${name}"], } } extra_mount_point { "sda3": device => "/dev/sda3", fstype => "xfs", owner => "ciupicri", group => "ciupicri", options => "relatime,nosuid,nodev,noexec", } In case it matters, I'm using puppet-0.25.4-1.fc13.noarch.rpm and puppet-server-0.25.4-1.fc13.noarch.rpm.
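
    A sketch of one workaround, assuming nothing else needs to manage the directory's metadata: puppet only manages the attributes a resource actually declares, so dropping owner/group/mode from the file resource should make it create the mount point when missing while leaving existing ownership alone. Roughly:

      # Hypothetical variant of the define: create the mount point but do not
      # manage its ownership or permissions.
      file { "${location}/${name}":
        ensure  => directory,
        seltype => "${seltype}",
      }

      mount { "${location}/${name}":
        ensure  => mounted,
        atboot  => true,
        device  => "${device}",
        fstype  => "${fstype}",
        options => "${options}",
        dump    => 0,
        pass    => 2,
        require => File["${location}/${name}"],
      }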

    Read the article

  • Some clients cannot connect to Server 2008 R2 VPN

    - by Robl
    Hi all, I have a Server 2008 R2 setup as a VPN server. We have created a Windows group to control access to the VPN called vpn-users. Clients are all Windows 7 Pro. This all seems to work fine except some users cannot connect to the VPN! For example, I try to log on to the VPN from a client and get an error saying the server refused the connection due to a policy in place, specifically the authentication type! Fine I think. So I drop that user into the vpn-users group created for this, try again, and hey presto, the user can now log on! Great. Now I try this with another user, but this time I get the same error even though I have dropped them into the vpn-users group! So does anyone have any idea why this works for some users and not for others? I have tried moving the user from certain OUs in AD to others, copying the account, taking the user out of the vpn-users group and then back in, but get the same error each time. Any thoughts anyone?

    Read the article
