Search Results

Search found 1694 results on 68 pages for 'rights'.


  • How can I manage AWS VPC ssh access accounts and keys across multiple instances?

    - by deitch
    I am setting up a standard AWS VPC structure: a public subnet, some private subnets, hosts on each, an ELB, etc. Operational network access will be via either an ssh bastion host or an OpenVPN instance. Once on the network (bastion or OpenVPN), admins use ssh to access the individual instances. From what I can tell, all of the docs seem to depend on a single user with sudo rights and a single public ssh key. But is that really best practice? Isn't it much better to have each user access each host under their own name? I could deploy accounts and ssh public keys to each server, but that rapidly gets unmanageable. How do people recommend managing user accounts? I've looked at:
    IAM: it doesn't look like IAM has a method for automatically distributing accounts and ssh keys to VPC instances.
    IAM via LDAP: IAM doesn't have an LDAP API.
    LDAP: set up my own LDAP servers (redundant, of course). A bit of a pain to manage, but still better than managing accounts on every host, especially as we grow.
    Shared ssh key: rely on the VPN/bastion to track user activities. I don't love it, but...
    What do people recommend? NOTE: I moved this over after accidentally posting it on Stack Overflow.
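
    A common middle ground between full LDAP and one shared key is pushing per-user public keys from a central list. A minimal sketch, assuming a hypothetical users/ directory of *.pub files and a hosts.txt inventory (both invented for illustration):

        #!/bin/sh
        # hypothetical key-push script: create each account and install its key on every host
        for host in $(cat hosts.txt); do
          for keyfile in users/*.pub; do
            user=$(basename "$keyfile" .pub)
            # account creation is idempotent; the key file is streamed over ssh stdin
            ssh "admin@$host" "sudo useradd -m $user 2>/dev/null; \
              sudo mkdir -p /home/$user/.ssh; \
              sudo tee /home/$user/.ssh/authorized_keys >/dev/null; \
              sudo chown -R $user: /home/$user/.ssh; \
              sudo chmod 700 /home/$user/.ssh" < "$keyfile"
          done
        done

    Run from a trusted admin box, this keeps the key list in one place (ideally the same VCS as the sites), though it does not solve revocation any better than LDAP would.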


  • Remote Desktop Network Level Authentication Not Supported

    - by Iszi
    I'm running Windows XP Professional SP3 x86, trying to connect to a system with Windows 7 Ultimate SP1 x64. Recently, I updated the Remote Desktop Connection software on the XP system in hopes of using Network Level Authentication (NLA) for my connections to the Windows 7 box. After the update, I connected to the Windows 7 box over RDP and enabled NLA, believing that the updated client should support it. After disconnecting and attempting to reconnect, I'm presented with the following error:

        The remote computer requires Network Level Authentication, which your computer does
        not support. For assistance, contact your system administrator or technical support.

    So, I checked the About page in Remote Desktop Connection to make sure the update had applied. This is what I see:

        Remote Desktop Connection
        Shell Version 6.1.7600
        Control Version 6.1.7600
        © 2007 Microsoft Corporation. All rights reserved.
        Network Level Authentication not supported.
        Remote Desktop Protocol 7.0 supported.

    I thought NLA was supposed to be a part of RDP 7.0 clients. Is there a component I'm missing somewhere?
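
    For reference, XP SP3 ships the CredSSP security provider that NLA depends on, but leaves it disabled; Microsoft's KB951608 describes enabling it through two registry edits and a reboot. A sketch of the procedure (back up the registry first):

        rem per KB951608, using regedit:
        rem 1. In HKLM\SYSTEM\CurrentControlSet\Control\Lsa, open the multi-string
        rem    value "Security Packages" and add a line reading: tspkg
        rem 2. In HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders, append
        rem    credssp.dll to the comma-separated "SecurityProviders" value
        rem 3. Reboot, then re-check the About box for "Network Level Authentication supported"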


  • Why can't I specify the executable that opens a file with a given extension on Windows?

    - by Glen S. Dalton
    I am on Windows Server 2003, but I guess it is the same on Windows XP. This is a Super User question, because it is definitely desktop, so do not move or close it. Question: I copied some portable applications (usually people create them for USB sticks) to locations like c:\bin\app1\app1.exe. app1.exe can open files of type *.ap1. When I right-click file.ap1 and choose "Open with ...", the "Open with" dialog appears. But it is not working how I expect in this situation. I can choose c:\bin\app1\app1.exe with the "Browse" button, but: app1.exe will not appear in the programs list where I just chose it, as I am used to after clicking OK in the browse dialog; and app1.exe will not open the file when I click OK in the "Open with" dialog, the application that was assigned until then will still open it. What could be the reason? Edit: Additional information: my account is a member of the Administrators group. I just changed the permissions of the folder c:\bin\app1\ and made sure that the group "Administrators" has all rights. I also propagated this manually to all subfolders and files.
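
    As a workaround while the dialog misbehaves, the association can be set directly from a command prompt with the built-in assoc and ftype tools; a sketch, where "App1File" is an arbitrary ProgID invented here:

        assoc .ap1=App1File
        ftype App1File="C:\bin\app1\app1.exe" "%1"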


  • SQL Server rolling forward lots of transactions, what should I look at?

    - by Anthony D
    I am running SQL Server Express on a Windows XP Embedded box. It runs for a day or two, doing some transactional processing for a POS-type system, with another system pulling data out to an OLAP DB for processing. After a while, I see in the event viewer the sequence SQL Server puts out when it restarts: copyright notice, command-line parameters, and so on. It seems to coincide with our OLAP process crashing. I then see that when it restarts our transaction DB, it does a recovery, rolling forward 10,000 or so transactions. Does this mean SQL has crashed? I don't really see much to indicate what happened. Update 1: I noticed I have my memory limit set to 1 MB per query and 2 TB for the server. These are the defaults. I only have one GB in the box. We have seen SQL crash a whole box by using all the system memory. In this case, though, the whole box is up when we get to it.
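
    For reference, the 2 TB figure is the default "max server memory" cap, and it can be lowered so Express cannot contend for all physical memory; a sketch using sqlcmd (instance name assumed to be the default Express one):

        sqlcmd -S .\SQLEXPRESS -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 512; RECONFIGURE;"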


  • Setting SVN permissions with Dav SVN Authz

    - by Ken
    There seems to be a path-inheritance issue which is boggling me over access restrictions. For instance, if I grant rw access to one group/user, and wish to restrict some /../../secret to none, it promptly spits in my face. Here is an example of what I'm trying to achieve in dav_svn.authz:

        [groups]
        grp_W = a, b, c, g
        grp_X = a, d, f, e
        grp_Y = a, e,

        [/]
        * =
        @grp_Y = rw

        [somerepo1:/projectPot]
        @grp_W = rw

        [somerepo2:/projectKettle]
        @grp_X = rw

    What is expected: grp_Y has rw access to all repositories, while grp_W and grp_X only have access to their respective repositories. What occurs: grp_Y has access to all repositories, while grp_W and grp_X have access to nothing. If I flip the access ordering, giving everyone access and restricting it in each repository, it promptly ignores the invalidation rule (stripping of rights) and gives everyone the access granted at the root level. Forgoing groups, it performs the same with user-specific provisions, even fully defined such as:

        [/]
        a = rw
        b =
        c =
        d =
        e =
        f =
        g = rw

        [somerepo1:/projectPot]
        a = rw
        b = rw
        c = rw
        d =
        e = rw
        f =
        g = rw

        [somerepo2:/projectKettle]
        a = rw
        b =
        c =
        d = rw
        e = rw
        f = rw
        g =

    which yields the exact same result. According to the documentation I'm following all protocols, so this is insane. Running on Apache2 with dav_svn.
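
    For comparison, a conventional layout drops the trailing comma after grp_Y (which some parsers choke on) and re-grants grp_Y inside each repository section, since the most specific matching path section replaces, rather than merges with, the [/] rules. A sketch of the intended policy in that form (repo names assumed to match the SVNParentPath directories):

        [groups]
        grp_W = a, b, c, g
        grp_X = a, d, f, e
        grp_Y = a, e

        [/]
        * =
        @grp_Y = rw

        [somerepo1:/projectPot]
        @grp_W = rw
        @grp_Y = rw

        [somerepo2:/projectKettle]
        @grp_X = rw
        @grp_Y = rw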


  • Passing PATH through sudo

    - by whitequark
    In short: how do I stop sudo from flushing PATH every time? I have some websites deployed on my server (Debian testing) written with Ruby on Rails. I use Mongrel+Nginx to host them, but there is one problem that comes up when I need to restart Mongrel (e.g. after making some changes). All sites are checked into VCS (git, but that is not important) and have owner and group set to my user, whereas Mongrel runs under the, huh, mongrel user, which is severely restricted in its rights. So Mongrel must be started under root (it can automatically change UID) or mongrel. To manage Mongrel I use the mongrel_cluster gem, because it allows starting or stopping any number of Mongrel servers with just one command. But it needs the directory /var/lib/gems/1.8/bin to be in PATH: starting it with an absolute path is not enough. Modifying PATH in root's .bashrc changed nothing; tweaking sudo's env_reset and keepenv didn't either. So the question: how do I add a directory to PATH, or keep the user's PATH, under sudo?
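
    For reference, on Debian the usual culprit is sudo's secure_path default, which replaces PATH outright regardless of env_reset tweaks; overriding it (or setting PATH per invocation) sidesteps the problem. A sketch, edited via visudo:

        # /etc/sudoers: force-append the gems bin dir to the PATH sudo imposes
        Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/gems/1.8/bin"

    or, per invocation:

        sudo env PATH="$PATH" mongrel_rails cluster::restart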


  • Windows 7 - Windows XP - sharing - why isn't it working?

    - by durumdara
    Hi! This seems to be a "hardware" and not a "software" / "programming" question, but I need to use this share in my programs, so it is close to programming. We had an XP-based wireless network. The server is XP Professional, the clients are XP Home (notebooks). This was working well with folder sharing (with user rights, not simple sharing). Then we replaced one of the notebooks with a Win7/x64 notebook. At first it could reach the server, and the other client too. Later I went to other sites and connected to other servers and networks. When I returned to this network, I found that I could no longer connect to this server. I see none of its resources, and when I try to double-click the computer I get a login window where I can type anything; I can never log in. The interesting part: the other XP Home machine can see the server and can log in as guest or as another user. The server can see the XP Home notebook. The Win7 machine can see the notebook's shared folders, and XP Home can see the Win7 shared folders. The server can see the Win7 folders, BUT the Win7 machine cannot see the server's folders. It cannot see the resources either. The Win7 machine is in "work networking group", and the group name is not MSHOME. I tried everything on the server: removing the MS client, restoring it with simple sharing, setting a guest password, etc., but I have lost the ability to access this server from Win7. Does anyone have any idea what I need to check or set to access these resources, so I can use them in my programs? Thanks for every info or link. dd
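
    One frequently cited cause of exactly this one-way symptom is Windows 7 defaulting to NTLMv2-only authentication, which older XP shares may reject; a sketch of relaxing it on the Windows 7 client (a registry backup is advised, and the value semantics are in Microsoft's LmCompatibilityLevel documentation):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f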


  • MSE updating fails, no warning or error message.

    - by WebDevHobo
    I'm running Windows 7 Ultimate, 32-bit. For the last couple of days, MSE has failed to update, remaining stuck at version 1.75.119. I presume that an error log or event log is created somewhere, but I don't know where to find it. It just says "connection failed". Tried it at home, at work and at friends' places, but it never works. Restarted the computer a lot of times now, checked for Microsoft Updates in general, but nothing shows up. EDIT: I've opened a bounty for this, because I really don't know what to do anymore. The oldest answer (the long post) here did not work. Besides this problem, I'm having trouble using MSI installers too. I've had to add the SYSTEM group to a lot of folders and give it full control, but shouldn't SYSTEM already be there? Also, I had to remove the "read-only" attribute from the ProgramData and Users folders, add the SYSTEM group there too and give it full control. Only then will the MSI install work, and even then it says I don't have the rights to create a shortcut on the desktop. I don't know what I need to modify, and where, for that. I'm saying this because I don't know how MSE updates, but if it uses MSI files to do that, it might explain things. The SYSTEM group remains added, but every time I take away the read-only attribute, click OK and check the settings again, read-only is still active... That's all I know. Screenshot (not reproduced here): all those updates were manual.
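
    On the MSI side, granting SYSTEM full control is normally scripted with icacls rather than clicked through by hand; a sketch (elevated prompt, path shown as an example only):

        icacls "C:\ProgramData" /grant "SYSTEM:(OI)(CI)F"

    Note also that the read-only box on folders re-checking itself is expected: Windows repurposes that attribute for folder customization, not write protection.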


  • How does the "Steam" platform work? Is it DRM? Can I trust "Steam"-powered software? [closed]

    - by Chris W. Rea
    So – I just bought the new game Supreme Commander 2. This question is not about the game, but about the online software installation platform that it seems to require. I haven't bought a game in a long time, and I'm puzzled: Apparently, SC2 is a "Steam"-powered game. When I went to install the game, it asked me to either create a new Steam account, or log in with an existing account. I clicked "Cancel" because I don't plan to play online and I don't want anything unnecessary installed on my computer, since I only plan to play single player! However, after clicking "Cancel", the installer asked for my confirmation that I indeed wanted to cancel installation of the game! I thought I was just canceling the "online" portions! So I really want to know: How do "Steam" powered games work? Is this essentially a form of DRM (Digital Rights Management)? Can I trust this software platform? Has anybody done any independent verification on how this platform works? (I'm very leery of any DRM after the Sony BMG CD copy protection scandal. Thank goodness for Mark Russinovich.) Does the "Steam" platform install anything particularly nasty or unwanted on my computer? High-rep users: Please vote to reopen this question. It is not about the game, but about the software update platform / updater / DRM. Imagine if the software in question were a productivity application. The issues remain the same.


  • Passive FTP Server Port Configuration Troubles Win2003

    - by Chris
    Win2003. Ports 20 & 21 are open. IIS 6 with Direct Metabase Edit enabled. Configured the FTP service passive range to 5500-5550, added 5500-5550 to the Windows firewall, ran iisreset and double-checked by restarting the FTP service. Nothing has changed: when I connect and enter passive mode, it still hangs whenever I try to LIST or transfer files. Active is just as useless.

        Microsoft Windows [Version 6.1.7600]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.

        C:\Users\user>ftp
        ftp> open x.x.x.x
        Connected to x.x.x.x.
        220-Microsoft FTP Service
        xxxxxxxxxxxxxxxxxx
        220 xxxxxxxxxxxxxxxxxx
        User (x.x.x.x:(none)): user
        331 Password required for user.
        Password:
        230-YOUR ACTIVITY IS BEING RECORDED TO THE FULLEST EXTENT
        230 User user logged in.
        ftp> QUOTE PASV
        227 Entering Passive Mode (82,19,25,134,21,124)
        ftp> ls
        200 PORT command successful.
        150 Opening ASCII mode data connection for file list.

    and it hangs. Now, I can see from Microsoft documentation that on newer Windows releases additional steps such as these are suggested, but they don't work on 2003:

        netsh advfirewall firewall add rule name="FTP Service" action=allow service=ftpsvc protocol=TCP dir=in
        netsh advfirewall set global StatefulFTP disable

    Is there anything I am missing, and what is this StatefulFTP malarkey at the end? EDIT: I can connect and transfer binary files using the WinSCP client, therefore the problem must be with my ftp commands, no? Can anyone see anything wrong with my Windows ftp client example? Why would it hang on ls? I tried QUOTE LIST as well, and that just hangs; and the Windows ftp client doesn't work in active mode either, it hangs if I try to go "binary" and then "put". This worked before I added 5500-5550 on the router. I have since added this range to the router, but no difference to the Windows ftp client.
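
    For the record, on IIS 6 / Server 2003 the passive range is normally set with the adsutil.vbs metabase script (the quoted netsh advfirewall commands are for 2008 and later), followed by an FTP service restart; a sketch:

        cscript %systemdrive%\inetpub\adminscripts\adsutil.vbs set /MSFTPSVC/PassivePortRange "5500-5550"
        net stop msftpsvc && net start msftpsvc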


  • Concerns about Apache per-Vhost logging setup

    - by etienne
    I'm both senior developer and sysadmin in my company, so I'm trying to deal with the needs of both activities. I've set up our Apache box, which deals with 30-50 domains at the moment (and hopefully will grow larger) and hosts both production and development sites, with this directory structure:

        domains/
        domains/domain.ext/                                  # FTPS chroot for user domain.ext
        domains/domain.ext/public                            # the DocumentRoot of http://domain.ext
        domains/domain.ext/logs
        domains/domain.ext/subdomains/sub.domain.ext
        domains/domain.ext/subdomains/sub.domain.ext/public  # DocumentRoot of http://sub.domain.ext

    Each domain.ext vhost runs with its dedicated user and group via mpm-itk, umask being 027, and the logs are stored via a piped sudo command, like this:

        ErrorLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_error.log"
        CustomLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_access.log" combined

    Now, I've read a lot about not letting the logs out of a very restricted directory, but the developers often need to take a quick look at a particular subdomain's error log, and I don't really want to give them admin rights to look into /var/log. Having the logs available in the FTP account is REALLY handy during development stages. Do you think this setup is viable and safe enough? To me it looks good, but I'm concerned about three security issues. Is the sudo pipe enough to deal with symlink exploits? Any catches I'm missing? Log DoS: the logs are on the same partition as all the domains. It has hundreds of gigs, but still, if someone gets disk-space DoS'd, everything will break. Any workaround? Will a short-timed logrotate suffice? File descriptor limits: AFAIK the default limit for Apache on Ubuntu Server is currently 8192, which should be plenty to handle 2 log files per subdomain. Is it? Am I missing something? I hope to read some thoughts on the matter!
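
    On the disk-space concern, a tight per-vhost logrotate policy is the usual answer; a sketch matching the layout above (the path prefix is invented, adjust to taste):

        # /etc/logrotate.d/vhosts -- hypothetical example
        /srv/domains/*/logs/*.log {
            daily
            rotate 14
            compress
            delaycompress
            missingok
            notifempty
            copytruncate    # the tee pipes hold the files open, so rotate in place
        }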


  • Configuration Help for Sendmail Required

    - by Vinayak Mahadevan
    Hi, I need some help with sendmail configuration. The basic problem is that I have some employees working from other places who need access to their mail. So what I have done right now is this: whatever mails are meant for them that are generated from within the company and collected by my internal mail server are bounced to an external mail server, from where the employees access them. This is done through an email id on a different domain. This was working fine until I restricted external mailing access for certain users using rulesets in sendmail.cf. Once I had put that in place, only people who had external mailing rights could send mails to people outside the office. What I would like to know is: is there any way I can expose sendmail on two different IPs and thereby configure everybody's email id to point to the same internal mail server using two different IPs, one IP when inside the company and one IP outside the company? Is it possible to have one static IP configured for both internal and external access, or is there any other way it can be done with sendmail? Can anybody help me? Sorry for the long post. Regards, Vinayak
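
    On the two-IP idea: sendmail can listen on several addresses by giving each its own DAEMON_OPTIONS line in sendmail.mc and rebuilding sendmail.cf; a sketch with placeholder addresses:

        dnl one listener per interface; the addresses below are placeholders
        DAEMON_OPTIONS(`Port=smtp, Addr=192.168.1.10, Name=MTA-int')dnl
        DAEMON_OPTIONS(`Port=smtp, Addr=203.0.113.10, Name=MTA-ext')dnl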


  • Why won't dhclient use the static IP I'm telling it to request?

    - by mike
    Here's my /etc/dhcp3/dhclient.conf:

        request subnet-mask, broadcast-address, time-offset, routers,
                domain-name, domain-name-servers, domain-search, host-name,
                netbios-name-servers, netbios-scope, interface-mtu;
        timeout 60;
        reject 192.168.1.27;

        alias {
          interface "eth0";
          fixed-address 192.168.1.222;
        }

        lease {
          interface "eth0";
          fixed-address 192.168.1.222;
          option subnet-mask 255.255.255.0;
          option broadcast-address 255.255.255.255;
          option routers 192.168.1.254;
          option domain-name-servers 192.168.1.254;
        }

    When I run "dhclient eth0", I get this:

        There is already a pid file /var/run/dhclient.pid with pid 6511
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/

        wmaster0: unknown hardware address type 801
        wmaster0: unknown hardware address type 801
        Listening on LPF/eth0/00:1c:25:97:82:20
        Sending on   LPF/eth0/00:1c:25:97:82:20
        Sending on   Socket/fallback
        DHCPREQUEST of 192.168.1.27 on eth0 to 255.255.255.255 port 67
        DHCPACK of 192.168.1.27 from 192.168.1.254
        bound to 192.168.1.27 -- renewal in 1468 seconds.

    I used strace to make sure that dhclient really is reading that conf file. Why isn't it paying attention to my "reject 192.168.1.27" and "fixed-address 192.168.1.222" lines?
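
    Worth noting: alias and lease blocks only add a second address or provide a fallback when no server answers; actually asking the server for a particular address is done with a send statement. A sketch for dhclient.conf (the server remains free to offer something else):

        send dhcp-requested-address 192.168.1.222;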


  • Regular Windows 7 BSOD with Shrew VPN client

    - by Junto
    The Shrew VPN client appears to be a good alternative to the Cisco VPN client on x64 Windows 7. However, since installing it I've seen fairly regular BSODs. Minidump attached:

        Microsoft (R) Windows Debugger Version 6.11.0001.404 AMD64
        Copyright (c) Microsoft Corporation. All rights reserved.

        Loading Dump File [D:\Temp\020810-23431-01.dmp]
        Mini Kernel Dump File: Only registers and stack trace are available
        Symbol search path is: SRV*d:\symbols*http://msdl.microsoft.com/download/symbols
        Executable search path is:
        Windows 7 Kernel Version 7600 MP (8 procs) Free x64
        Product: WinNt, suite: TerminalServer SingleUserTS
        Built by: 7600.16385.amd64fre.win7_rtm.090713-1255
        Machine Name:
        Kernel base = 0xfffff800`0285f000 PsLoadedModuleList = 0xfffff800`02a9ce50
        Debug session time: Mon Feb 8 18:08:12.887 2010 (GMT+1)
        System Uptime: 0 days 7:52:06.120
        Loading Kernel Symbols
        Loading User Symbols
        Loading unloaded module list
        *******************************************************************************
        *                             Bugcheck Analysis                               *
        *******************************************************************************
        Use !analyze -v to get detailed debugging information.
        BugCheck A, {0, 2, 0, fffff800028d50b6}
        Unable to load image \SystemRoot\system32\DRIVERS\vfilter.sys, Win32 error 0n2
        *** WARNING: Unable to verify timestamp for vfilter.sys
        *** ERROR: Module load completed but symbols could not be loaded for vfilter.sys
        Probably caused by : vfilter.sys ( vfilter+29a6 )
        Followup: MachineOwner

    The machine is a brand spanking new Dell Precision T5500. Super User appears to have several recommendations for the Shrew VPN client as an alternative to the Cisco VPN client on 64-bit machines, so I wondered if anyone here has seen this problem and possibly found a solution? I've decided to run the VPN client under Windows 7 compatibility mode for the moment (Vista SP2) with administrator privileges to see if it makes a difference. Oddly, the VPN doesn't necessarily need to be connected. I've noticed it when browsing using Google Chrome and Internet Explorer, usually when I open up a new tab. If it carries on, I think I'll be forced to shell out the 120 EUR for the NCP client instead.


  • Upstart: cannot run as root

    - by Ronni Egeriis
    I have made this Upstart script, which starts a Node.js service. But all of a sudden the service has stopped, and Upstart has failed to restart it. Now that I am trying to start it manually, it fails to recognize my service:

        start: Unknown job: queue

    The script is properly placed in /etc/init, and should have the correct rights:

        -rw-r--r-- 1 root root 200 Aug  7 13:30 queue.conf

    When I check the config file with init-checkconf, however, it says that it is not able to run as root:

        root@production1:~# init-checkconf /etc/init/queue.conf
        ERROR: cannot run as root

    What causes this error and how do I solve it? Debug info:

        Ubuntu 12.04.3 LTS
        root@production1:~# service --version
        service ver. 0.91-ubuntu1

    Edit: Here's queue.conf:

        description "Echo.it command queue"
        author "Ronni Egeriis Persson <[email protected]>"

        stop on shutdown
        respawn
        respawn 20 5

        exec sudo -u beanstalk /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1

    The command sudo -u beanstalk /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1 works fine when run manually.
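
    Note that the "cannot run as root" message comes from init-checkconf itself: it starts a session-level Upstart to exercise the job and refuses to do that as root by design, so it is run from an unprivileged account instead; a sketch ("someuser" is a placeholder):

        su someuser -c "init-checkconf /etc/init/queue.conf"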


  • Why is my global security group being filtered out of my logon token?

    - by Jay Michaud
    While investigating the effects of filtered tokens on my file permissions, I noticed that one of my global security groups is being filtered in addition to the regular system-defined filtered groups. My Active Directory environment is a single-domain forest on the Windows Server 2003 functional level. I'll call the domain "mydomain.example.com". I am logged onto a Windows Server 2008 Enterprise Edition machine (not a domain controller) as a member of the "MYDOMAIN\Domain Admins" group and the "MYDOMAIN\MySecurityGroup" global security group (among others). When I run "whoami /groups" from an elevated command prompt, I see the full list of groups to which my account belongs, as expected. When I run "whoami /groups" from a regular, non-elevated command prompt, I see the same list of groups, but the following groups are described as "Group used for deny only":

        1. BUILTIN\Administrators
        2. MYDOMAIN\Schema Admins
        3. MYDOMAIN\Offer Remote Assistance Helpers
        4. MYDOMAIN\MySecurityGroup

    Numbers 1 through 3 above are expected based on Microsoft documentation; number 4 is not. The "MYDOMAIN\MySecurityGroup" global security group is a group that I created. It contains three non-built-in global security groups, and these security groups contain only non-built-in user accounts. (That is, I created all of the accounts and groups that are members of the "MYDOMAIN\MySecurityGroup" global security group.) There are other, similar groups of which my account is a member that are not being filtered out of my logon token, and this group is not granted any specific user rights in the security settings of this computer or in Group Policy. What would cause this one group to be filtered out of my logon token?


  • How does the "Steam" platform work? Is it DRM? Can I trust "Steam"-powered games?

    - by Chris W. Rea
    So – I just bought the new game Supreme Commander 2. This question is not about the game, but about the online software installation platform that it seems to require. I haven't bought a game in a long time, and I'm puzzled: Apparently, SC2 is a "Steam"-powered game. When I went to install the game, it asked me to either create a new Steam account, or log in with an existing account. I clicked "Cancel" because I don't plan to play online and I don't want anything unnecessary installed on my computer, since I only plan to play single player! However, after clicking "Cancel", the installer asked for my confirmation that I indeed wanted to cancel installation of the game! I thought I was just canceling the "online" portions! So I really want to know: How do "Steam" powered games work? Is this essentially a form of DRM (Digital Rights Management)? Can I trust this software platform? Has anybody done any independent verification on how this platform works? (I'm very leery of any DRM after the Sony BMG CD copy protection scandal. Thank goodness for Mark Russinovich.) Does the "Steam" platform install anything particularly nasty or unwanted on my computer?


  • Keep Windows Installer from using largest drive for temporary files

    - by stefan.at.wpf
    By default, Windows Installer uses the largest drive for temporary storage, whether or not that's needed (i.e., even when there would be enough space on the system drive). Taken from http://msdn.microsoft.com/en-us/library/aa371372%28VS.85%29.aspx:

        During an administrative installation the installer sets ROOTDRIVE to the first connected
        network drive it finds that can be written to. If it is not an administrative installation,
        or if the installer can find no network drives, the installer sets ROOTDRIVE to the local
        drive that can be written to having the most free space.

    Now, my system drive is an SSD and my largest drive is a RAID that spins down when it's not used. Remember the SSD as system drive? Everything is silent now! Until I install something and Windows Installer wakes up my RAID again just to put a small .tmp file on it... How can I prevent Windows Installer from using the largest drive as temporary storage? Can I maybe set some access rights to disallow Windows Installer from writing to my RAID drive? Any other ideas? Thank you!
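
    The same page suggests the straightforward workaround: ROOTDRIVE is a public property, so it can be forced on the command line and the installer never goes hunting for the biggest drive; a sketch:

        msiexec /i package.msi ROOTDRIVE="C:\"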


  • Cannot open files in Visual Studio, but can in Delphi and Notepad

    - by Andrew J. Brehm
    About an hour ago Visual Studio 2008 decided that it cannot find files any more. This is on 64-bit Windows Vista. When I right-click on a text file (source code or otherwise) and select "Open with" and "Visual Studio 2008", I get the following error (example):

        Windows cannot find 'C:\Users\ajbrehm\Documents\Visual Studio 2008\Projects\Hello Prism\Hello Prism\Main.pas'.
        Make sure you typed the name correctly, and then try again.

    When I right-click the same file and select "Open with" and "Delphi 2010" or "Notepad" (the two other options available for text files on my system), the file opens correctly. Oddly enough, when the file is part of a Visual Studio project and I open the project itself with Visual Studio (this works), I can open the file from within Visual Studio. Any ideas what might be going on? This started about an hour after I made a complete backup of my Vista VM and after I installed IIS 7, SQL Express, and SourceGear Vault. The first files I noticed couldn't be opened in Visual Studio any more were Pascal source files in checked-out folders from Vault. Vault also seems to be unable to see one of the source files and claims it doesn't exist. I found out that Visual Studio cannot open ANY files any more when I tried to recreate the file Vault refused to see. Update: I just checked. Another user, "administrator", can still open text files with Visual Studio 2008. Both users have administrator rights. Update: I just restored the hours-old backup. Same problem. Apparently whatever triggered this happened before the install of IIS 7 and SQL Express. Never noticed it before.


  • Managing Linux Directory Permissions & SFTP

    - by Dizzle
    Good morning; I have a RHEL 5.7 web server configured to allow SSH/SFTP only by specific groups. I'd like for content managers to upload content to their respective directories and have that content inherit the user/group ownership of the directory regardless of upload method or application. For example: John is in group "web" for SSH/SFTP rights and "finance" for directory permissions, and uploads to directory "webstuff" via SFTP. Directory "webstuff" has permissions of "2760" (rwxrws---), and ownership of "apache:finance". If John uploads an update to an existing file in "webstuff", the ownership of the file stays at "apache:finance". If John uploads a new file to "webstuff", the ownership of the file is "john:finance". My desire is to have any file from John uploaded to "webstuff" to change to the directory's owner. I've tried with setuid and setgid both set, but the user-ownership didn't take. I've seen mentions on ServerFault of using ACL's, or a chrooted jail for SFTP but I have yet to configure and test them, and I don't know if they're a viable solution (they could be, I just don't know because I've never done either). Any thoughts and assistance would be greatly appreciated.
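
    Worth noting: Linux ignores the setuid bit on directories, which is why the user-ownership part never took; forcing the owner needs something out-of-band (an SFTP chroot or a periodic chown), but a default ACL at least pins group permissions on new files. A sketch:

        chmod 2770 /path/to/webstuff                    # setgid keeps group "finance" on new files
        setfacl -d -m g:finance:rwx /path/to/webstuff   # default ACL: new files arrive group-writable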


  • Why can't I open a file with double-click after I moved the application that opens it on Windows?

    - by Glen S. Dalton
    I am on Windows Server 2003, but I guess it is the same on Windows XP. I moved some portable applications (usually people create them for USB sticks) to locations like c:\bin\app1\app1.exe. The old location was c:\programs\app1\app1.exe. app1.exe can open files of type *.ap1. When I right-click file.ap1 and choose "Open with ...", the "Open with" dialog appears. But it is not working how I expect in this situation. I can choose c:\bin\app1\app1.exe with the "Browse" button, but: app1.exe will not appear in the programs list where I just chose it, as I am used to after clicking OK in the browse dialog; and app1.exe will not open the file when I click OK in the "Open with" dialog, the application that was assigned until then will still open it. What could be the reason? My account is a member of the Administrators group. I just changed the permissions of the folder c:\bin\app1\ and made sure that the group "Administrators" has all rights. I also propagated this manually to all subfolders and files. I also tried to move the application (with the whole folder) to "c:\program files\app1\app1.exe".


  • eAccelerator Issue - Cache Directory Empty.

    - by Tom
    Hi all, hoping someone can give me a hand with this. I've recently installed eAccelerator 0.9.6.1 on a CentOS LAMP server. Had it working fine, using /tmp/accelerator as the cache directory. php.ini setup:

        zend_extension="/usr/local/lib/php/extensions/no-debug-non-zts-20060613/eaccelerator.so"
        eaccelerator.shm_size="200"
        eaccelerator.cache_dir="/var/cache/eaccelerator"
        eaccelerator.enable="1"
        eaccelerator.optimizer="1"
        eaccelerator.check_mtime="1"
        eaccelerator.debug="0"
        eaccelerator.filter=""
        eaccelerator.shm_max="0"
        eaccelerator.shm_ttl="3600"
        eaccelerator.shm_prune_period="180"
        eaccelerator.shm_only="1"
        eaccelerator.compress="1"
        eaccelerator.compress_level="9"

    php -v output:

        PHP 5.2.12 (cli) (built: Feb 3 2010 00:34:28)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies
            with eAccelerator v0.9.6.1, Copyright (c) 2004-2010 eAccelerator, by eAccelerator
            with the ionCube PHP Loader v3.3.20, Copyright (c) 2002-2010, by ionCube Ltd.

    I had to remove the cache directory as I was testing something. I remade it, re-set permissions, and found that eAccelerator was no longer creating cache files within the folder. I thought it might be down to ownership rights on the folder, so I chown'd it apache.apache, but this made no difference. I recreated the directory in /var/cache instead, edited php.ini to point to the new cache dir location, chmod'd, chown'd etc., and still eAccelerator is not creating any of the cache files in the directory (it stays empty). Could someone suggest what I might be doing incorrectly here? I've read through numerous pages trying to troubleshoot the issue, to no avail. Any help appreciated.
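
    One line in the quoted php.ini stands out: per eAccelerator's documented settings, shm_only="1" caches compiled scripts in shared memory only and skips the disk entirely, which on its own would leave the cache directory empty. The disk-enabled variant, for comparison:

        eaccelerator.shm_only="0"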


  • NRPE: Unable to read output with check_connections plugin

    - by Wlodzimierz
    I'm using a plugin which gives me warnings or criticals based on established connections. If I run it on the local machine it gives:

        root@graber:/usr/lib/nagios/plugins# ./check_connections -w 1 -c 5 -C sshd
        CRITICAL Established connections: 6

    I know, I run it as root. But: rights on the file:

        root@graber:/usr/lib/nagios/plugins# ls -all check_connections
        -rwxr-xr-x 1 nagios nagios 5459 2012-07-06 10:19 check_connections

    /etc/sudoers:

        Defaults env_reset
        root ALL=(ALL:ALL) ALL
        %admin ALL=(ALL) ALL
        nagios ALL=(ALL) NOPASSWD: /usr/bin/lsof
        nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/

    /etc/nagios/nrpe.cfg:

        nrpe_user=nagios
        nrpe_group=nagios
        dont_blame_nrpe=1
        command_prefix=/usr/bin/sudo
        command[check_connections]=/usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd

    log from remote:

        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Handling the connection...
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Host address is in allowed_hosts
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Host is asking for command 'check_connections' to be run...
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Running command: /usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd
        2012-07-06T11:19:11+02:00 graber nrpe[26100]: Return Code: 2, Output: NRPE: Unable to read output

    Why is this happening? I'm out of ideas; I've searched Google for 2 days now :)
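
    A useful first step is reproducing NRPE's exact invocation as the nagios user on the remote host, which separates sudo failures from plugin failures (note that a distribution default of "Defaults requiretty" in sudoers will also break sudo for daemons); a sketch:

        # the -s overrides the nagios account's usual nologin shell
        su -s /bin/bash nagios -c "/usr/bin/sudo /usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd"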


  • PostgreSQL user authentication against PAM

    - by elmuerte
    I am trying to set up authentication via PAM for PostgreSQL 9.3. I already managed to get this working on an Ubuntu 12.04 server, but I am unable to get it working on a CentOS 6 install. The relevant pg_hba.conf line:

        host    all    all    0.0.0.0/0    pam    pamservice=postgresql93

    The pam.d/postgresql93 file is the default config shipped with the official PostgreSQL 9.3 package:

        #%PAM-1.0
        auth        include     password-auth
        account     include     password-auth

    When a user tries to authenticate, the following is reported in the secure log:

        hostname unix_chkpwd[31807]: check pass; user unknown
        hostname unix_chkpwd[31808]: check pass; user unknown
        hostname unix_chkpwd[31808]: password check failed for user (myuser)
        hostname postgres 10.1.0.1(61459) authentication: pam_unix(postgresql93:auth): authentication failure; logname= uid=26 euid=26 tty= ruser= rhost= user=myuser

    The relevant content of the password-auth config is:

        auth        required      pam_env.so
        auth        sufficient    pam_unix.so nullok try_first_pass
        auth        requisite     pam_succeed_if.so uid >= 500 quiet
        auth        required      pam_deny.so

        account     required      pam_unix.so
        account     sufficient    pam_localuser.so
        account     sufficient    pam_succeed_if.so uid < 500 quiet
        account     required      pam_permit.so

    The problem is with pam_unix.so. It is unable to validate the password, and unable to retrieve the user info (when I remove the auth entry for pam_unix.so). The CentOS 6 install is only 5 days old, so it does not have a lot of baggage. The unix_chkpwd binary is setuid and has execute rights for everybody, so it should be able to check the shadow file (which has no privileges at all?).
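
    For context, unix_chkpwd only validates the calling user's own password unless the caller itself can read /etc/shadow, and the postgres service account (uid 26 in the log) cannot; a commonly cited workaround is granting it read access, with the obvious security trade-off. A sketch:

        setfacl -m u:postgres:r /etc/shadow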


  • FastCGI Error when installing PHP on IIS7.5

    - by ytoledano
    I'm trying to install MediaWiki on a Win2008 R2 server, but can't manage to install PHP. Here's what I did:

        1. Grabbed a zip archive of PHP and unzipped it into C:\PHP.
        2. Created two subdirs: C:\PHP\sessiondata and C:\PHP\uploadtemp.
        3. Granted modify rights to the IUSR account for the subdirs.
        4. Copied php.ini-production as php.ini.
        5. Edited php.ini and made the following changes:
               fastcgi.impersonate = 1
               cgi.fix_pathinfo = 1
               cgi.force_redirect = 0
               open_basedir = "c:\inetpub\wwwroot;c:\PHP\uploadtemp;C:\PHP\sessiondata"
               extension = php_mysql.dll
               extension_dir = "./ext"
               upload_tmp_dir = C:\PHP\uploadtemp
               session.save_path = C:\php\sessiondata
        6. Installed the Web Server role with the CGI and HTTP Redirection options.
        7. In Handler Mappings, added a module mapping with these values: Path = *.php, Module = FastCgiModule, Executable = c:\php\php-cgi.exe, Name = PHP via FastCGI.
        8. Created a test page, phpinfo.php, in the wwwroot directory with the contents: <?php phpinfo(); ?>
        9. Browsed to http://localhost/phpinfo.php

    But then I get:

        HTTP Error 500.0 - Internal Server Error
        An unknown FastCGI error occurred

        Module: FastCgiModule
        Notification: ExecuteRequestHandler
        Handler: PHP via FastCGI
        Error Code: 0x800736b1
        Requested URL: http://localhost:80/phpinfo.php
        Physical Path: C:\inetpub\wwwroot\phpinfo.php
        Logon Method: Anonymous
        Logon User: Anonymous

    Does anyone know what I'm doing wrong here? Thanks.
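
    For reference, 0x800736b1 is ERROR_SXS_CANT_GEN_ACTCTX, the side-by-side configuration error; with PHP on IIS it usually means php-cgi.exe was built against a Visual C++ runtime that is not installed, so installing the matching VC++ redistributable (or switching to the non-thread-safe Windows build) is the usual first step. The FastCGI application itself can also be registered from the command line; a sketch:

        %windir%\system32\inetsrv\appcmd set config /section:system.webServer/fastCgi /+"[fullPath='C:\PHP\php-cgi.exe']"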

