Search Results

Search found 978 results on 40 pages for 'nobody'.

Page 27/40

  • Internal Code Signing: Key Distribution, or Certificate Server?

    - by Myrddin Emrys
    I should first note that we have nobody in IT with significant familiarity with self-signed certificates. We have a moderately sprawling network (one forest, many locations), and we are now rolling out internal code signing; until now users have run untrusted code, or we have even disabled(!) the warnings. Intranet applications, scripts, and sites will now be signed with self-signed certificates. I am aware of two obvious ways we can deploy this: distributing the keys directly via a group policy, or setting up a certificate server. Can someone explain the trade-offs between these two methods? How many certificates before the group policy method becomes unwieldy? Are they large enough that remote users will have issues? Does the group policy method distribute duplicates on every login? Is there a better method I am not aware of? I can find a lot of documentation on certificates and the various ways to create them, but I have not been able to find anything that summarizes the differences between the distribution methods and what criteria make one or the other superior.

    Read the article

  • Execute encrypted files but don't let anybody read them.

    - by Stebi
    I want to provide a virtual machine image with an installed web application. The user should be able to boot the VM (not log in, just boot) and a web server should start automatically. The point is that I want to hide the (Ruby) source code of the web application from everyone, as there is no obfuscator for Ruby. I thought I could use file system encryption to encrypt the directory with the source code (or even a whole partition). But the web server user must be able to read it automatically after booting. Nobody is allowed to log in as the web server user (or any other user), so no one else can read the contents. My questions are: Is this possible? Because I give away the whole VM, everybody could mount its virtual discs and read them (except the encrypted one). Is it then possible to find the key the web server user needs to decrypt the files and decrypt them manually? Or is it safe to give such a VM away? The problem is that everything needed to decrypt the files must be included somewhere in the VM, or else the web server cannot start automatically. Maybe I'm completely wrong and you have another tip for me for securing the source code.
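    For reference, a minimal sketch of how an encrypted source partition could be opened automatically at boot with a key file stored inside the VM (device, mapping name, and paths are hypothetical); it also illustrates the questioner's concern, since anyone holding the image also holds the key file:

      # Hypothetical boot-time unlock; /root/app.key ships inside the VM image.
      cryptsetup luksOpen /dev/vdb1 appsrc --key-file /root/app.key
      mount /dev/mapper/appsrc /srv/webapp
      # Anyone who can mount the VM's discs can read /root/app.key and repeat
      # these two commands, so the encryption does not protect the key material.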

    Read the article

  • Mail not piping in postfix

    - by user220912
    I have set up a Postfix server and wanted to test piping mail to my script, where I can pick it up and filter the mails. I wrote a test script for that which just logs some information to a txt file, but I don't see any changes when sending mail. My postconf -n output:

      alias_database = hash:/etc/aliases
      append_dot_mydomain = no
      command_directory = /usr/sbin
      config_directory = /etc/postfix
      daemon_directory = /usr/libexec/postfix
      data_directory = /var/lib/postfix
      debug_peer_level = 2
      debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
      html_directory = no
      inet_interfaces = all
      inet_protocols = all
      mail_owner = postfix
      mailbox_size_limit = 0
      mailq_path = /usr/bin/mailq.postfix
      manpage_directory = /usr/share/man
      mydestination = yantratech.co.in, localhost.localdomain, localhost
      myhostname = tcmailer8.in
      mynetworks = 103.8.128.62, 103.8.128.69/101, 168.100.189.0/28, 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      myorigin = $mydomain
      newaliases_path = /usr/bin/newaliases.postfix
      queue_directory = /var/spool/postfix
      readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
      recipient_delimiter = +
      relayhost =
      sample_directory = /usr/share/doc/postfix-2.6.6/samples
      sendmail_path = /usr/sbin/sendmail.postfix
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      smtpd_tls_cert_file = /etc/pki/tls/certs/tcmailer8.in.cert
      smtpd_tls_key_file = /etc/pki/tls/private/localhost.key
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtpd_use_tls = yes
      transport_maps = hash:/etc/postfix/transport
      virtual_alias_maps = hash:/etc/postfix/virtual
      virtual_gid_maps = static:5000
      virtual_mailbox_base = /home/vmail
      virtual_mailbox_domains = /etc/postfix/vhosts
      virtual_mailbox_maps = hash:/etc/postfix/vmaps
      virtual_minimum_uid = 1000
      virtual_uid_maps = static:5000

    Here's my transport map:

      [email protected] email_route

    My main.cf declaration:

      transport_maps = hash:/etc/postfix/transport

    My master.cf declaration:

      email_route unix - n n - - pipe
        flags=FR user=nobody argv=/etc/postfix/test.php -f $(sender) -- $(recipient)

    And my PHP script:

      #!/usr/bin/php
      <?php
      $fh = fopen('/etc/postfix/testmail.txt','a');
      fwrite($fh, "Hello it works\n");
      fclose($fh);
      ?>

    I am sending mails through telnet on localhost.
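    One quick check (an assumption, not a confirmed diagnosis): the pipe service runs the script as user nobody, so the script must be executable and nobody must be able to write the target file. A rough sketch of testing that outside Postfix:

      # Run the script exactly as the pipe service would (user=nobody).
      chmod 755 /etc/postfix/test.php
      sudo -u nobody /etc/postfix/test.php
      ls -l /etc/postfix/testmail.txt    # does the file appear / grow?
      # Watch the mail log while sending a test message through the transport.
      tail -f /var/log/maillog           # or /var/log/mail.log, depending on distribution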

    Read the article

  • Looking for some IIS redirect help/ideas

    - by CoreyT
    Right now we have a site with a LOT of static ASP pages, such as www.site.com/123.asp. This is due to how our current site's CMS builds its pages by default. I don't have an exact count, but we have roughly 6000 ASP files in the site right now. We are in the middle of a redesign and restructuring of the site, and are looking to migrate to SEO-friendly URLs. The problem we're having right now is what to do to redirect the old pages to the new friendly URLs. I know how to do redirects; that is not the issue here. The problems I am coming up with right now are listed below.
    1. Is there a limit to the number of redirects in IIS?
    2. Would having even a few thousand redirects affect IIS performance?
    3. My understanding is that we would not be passing along page rank to the new URLs, is that true? (Not a major question; I can ask on SEO forums if nobody here is sure.)
    4. Would using something like the IIS URL Rewrite 2 module for IIS 7 help us out? Or would I still need to define several thousand unique redirects in it?
    Our server right now is running Server 2003; however, in the redesign I would be open to migrating to Server 2008 R2 if there is a good case for it (i.e. the URL Rewrite module). Thanks for any guidance or help. I have been looking for a good way to do this for a while now and keep coming up with things that sound problematic and bad (such as having 6000 redirects).

    Read the article

  • Samba access works with IP address only

    - by Sebastian Rittau
    I added a Debian etch host (hostname: webserver, IP address: 192.168.101.2) running Samba to a Windows network with a Windows 2003 PDC (IP address 192.168.101.3). The Samba server exports a public guest share, called "Intranet". The server shows up fine in the network, but trying to click on it produces an error dialog stating I don't have the necessary permissions. The same happens when entering \\webserver manually, and using \\webserver\internet states that the path does not exist. Interestingly, accessing the share by IP address (\\192.168.101.2 or \\192.168.101.2\intranet) works fine. DNS is configured correctly, and "smbclient //webserver/intranet" on another Linux client works fine. One complicating issue is that the webserver is only a VMware virtual machine running on the PDC server. Here is our smb.conf:

      [global]
        workgroup = Foobar
        server string = Webserver
        wins support = yes            ; commenting out these
        wins server = 192.168.101.3   ; two lines has no effect
        dns proxy = no
        guest account = nobody
        [... snipped some unrelated bits, like logging ...]
        security = share
        [... snipped some password-related things ...]
        domain master = no

      [intranet]
        comment = Intranet
        path = /srv/webserver/contents
        browseable = yes
        guest ok = yes
        guest only = yes
        read only = yes
        create mask = 0775
        directory mask = 0775
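    Since access by IP works but access by name does not, a minimal sketch of checking how the name resolves for SMB, run from another Linux client (nothing here is a confirmed fix):

      # NetBIOS name lookup via the WINS server, then via broadcast
      nmblookup -U 192.168.101.3 -R webserver
      nmblookup webserver
      # Anonymous share listing by name and by address, to compare the two paths
      smbclient -L //webserver -N
      smbclient -L //192.168.101.2 -N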

    Read the article

  • How to route broadcast packets from machine with two network interfaces on same subnet

    - by Syam
    I run RHEL 5 and have two NICs on one machine connected to the same subnet:

      eth0 192.168.100.10
      eth1 192.168.100.11

    My application needs to receive and transmit UDP packets (both unicast & broadcast) via these interfaces. I've found the way to handle the ARP problem, and I've added routes to handle the routing problem:

      ip rule add from 192.168.100.10 lookup 10
      ip route add table 10 default src 192.168.100.10 dev eth0

    (and similarly, table 11 for eth1). The problem is that only unicast packets get routed properly. Broadcast packets always go out through eth0. I tried removing the rules for 192.168.100.0 & 192.168.100.255 from table 255 and adding them to my tables. But then I see ARP requests being sent out for packets to 192.168.100.255 (obviously, no nodes respond and nobody gets any data). Due to several techno-political issues, I'm stuck with this configuration and can't change subnets or try something different. I've tried SO_BINDTODEVICE and it works, but I'd prefer a solution that doesn't need my application to run as root. Is there a way to get this working? Any help is highly appreciated.
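    If SO_BINDTODEVICE is the only thing standing in the way, one possible workaround (an assumption about this setup, with a hypothetical binary path) is to grant just the needed capability instead of running the whole application as root:

      # SO_BINDTODEVICE traditionally requires CAP_NET_RAW; grant only that
      # capability to the (hypothetical) application binary.
      sudo setcap cap_net_raw+ep /usr/local/bin/myapp
      getcap /usr/local/bin/myapp    # verify: should list cap_net_raw+ep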

    Read the article

  • Openvpn - stuck on Connecting

    - by user224277
    I've got a problem with an OpenVPN server... every time I try to connect to the VPN, I get a window with a login and password box, so I type my login and password (the login is the Common Name (user1) and the password is the challenge password from the client certificate). Logs:

      Jun 7 17:03:05 test ovpn-openvpn[5618]: Authenticate/Decrypt packet error: packet HMAC authentication failed
      Jun 7 17:03:05 test ovpn-openvpn[5618]: TLS Error: incoming packet authentication failed from [AF_INET]80.**.**.***:54179

    Client.ovpn:

      client
      #dev tap
      dev tun
      #proto tcp
      proto udp
      remote [Server IP] 1194
      resolv-retry infinite
      nobind
      persist-key
      persist-tun
      ca ca.crt
      cert user1.crt
      key user1.key
      <tls-auth>
      -----BEGIN OpenVPN Static key V1-----
      d1e0...
      -----END OpenVPN Static key V1-----
      </tls-auth>
      ns-cert-type server
      cipher AES-256-CBC
      comp-lzo yes
      verb 0
      mute 20

    My openvpn.conf:

      port 1194
      #proto tcp
      proto udp
      #dev tap
      dev tun
      #dev-node MyTap
      ca /etc/openvpn/keys/ca.crt
      cert /etc/openvpn/keys/VPN.crt
      key /etc/openvpn/keys/VPN.key
      dh /etc/openvpn/keys/dh2048.pem
      server 10.8.0.0 255.255.255.0
      ifconfig-pool-persist ipp.txt
      #push "route 192.168.5.0 255.255.255.0"
      #push "route 192.168.10.0 255.255.255.0"
      keepalive 10 120
      tls-auth /etc/openvpn/keys/ta.key 0
      #cipher BF-CBC # Blowfish
      #cipher AES-128-CBC # AES
      #cipher DES-EDE3-CBC # Triple-DES
      comp-lzo
      #max-clients 100
      #user nobody
      #group nogroup
      persist-key
      persist-tun
      status openvpn-status.log
      #log openvpn.log
      #log-append openvpn.log
      verb 3

    sysctl:

      net.ipv4.ip_forward=1
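    A common cause of "packet HMAC authentication failed" when the server uses "tls-auth ta.key 0" is a client that supplies the inline <tls-auth> block without a direction. This is only an assumption about this particular setup, but the client-side change would be a one-liner:

      # With "tls-auth ta.key 0" on the server, an inline <tls-auth> block on the
      # client normally needs the complementary direction (assumption, not a
      # confirmed fix for this config).
      echo "key-direction 1" >> client.ovpn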

    Read the article

  • Inbox not updating in Exchange 2010, all users affected

    - by TuxMeister
    I'm battling against this darn issue this morning. We have the following setup:
    - Big Hyper-V machine hosting the servers as VMs
    - VM for CAS: WEB.XXX.local
    - VM for Mailbox: EXC.XXX.local
    - Servers are running Server 2008 R2 with Exchange 2010 SP1
    - Clients are all running Windows 7 Pro x64 with Outlook 2010 x64
    The problem we're having is that nobody is able to see any emails received today (16th of October), but they are able to send externally. When I reply back to an email received externally, I don't get an NDR, yet the user cannot see my email. This is what I found and tried thus far:
    - If we create a subfolder in Outlook 2010 and move any email from the inbox into that folder, the changes are immediately reflected in OWA
    - We've been sending test emails to other users internally and to external email addresses; the Sent Items folder contains all those tests, synced properly to OWA as well
    - Tried creating a new profile; new emails are still missing
    - Tried disabling Cached Exchange Mode, still no luck
    - Also disabled "Download shared folders", still no luck
    - Tried setting up a brand new Exchange mailbox and configured it on a VM that never had Outlook on it, still the same issue
    - Tried restarting Exchange services on both CAS and Mailbox servers, no luck
    - Tried rebooting both CAS and Mailbox servers, still no luck
    - Performed a Mailbox Discovery on my admin account; emails from today are being found in the Discovery results, so the stuff is there, just not updating in the user inboxes
    Any idea what this hellish thing can be? I've done everything I can think of and also everything I could find out there. Let me know if you need any more details, and thanks for reading this!

    Read the article

  • How do I force a restore over an existing database?

    - by Ian Boyd
    I have a database, and I want to force a restore over the top of it. I check the option "Overwrite the existing database (WITH REPLACE)", but, as expected, SSMS is unable to overwrite the existing database. Of course I don't want different filenames; I want to overwrite the existing database. How do I force a restore over an existing database? And for the Google search crawler:

      File '%s' is claimed by '%s'(4) and '%s"(3). The WITH MOVE clause can be used to relocate one or more files.
      RESTORE DATABASE is terminating abnormally. (Microsoft SQL Server, Error: 3176)

    Update: The script (before I deleted the database, because I needed to get it done) was:

      RESTORE DATABASE [HealthCareGovManager]
        FILE = N'HealthCareGovManager_Data',
        FILE = N'HealthCareGovManager_Archive',
        FILE = N'HealthCareGovManager_AuditLog'
      FROM DISK = N'D:\STAGING\HealthCareGovManager10232013.bak'
      WITH FILE = 1,
        MOVE N'HealthCareGovManager_Data' TO N'D:\CGI Data\HealthCareGovManager.MDF',
        MOVE N'HealthCareGovManager_Archive' TO N'D:\CGI Data\HealthCareGovManager.ndf',
        MOVE N'HealthCareGovManager_AuditLog' TO N'D:\CGI Data\HealthCareGovManager.ndf',
        MOVE N'HealthCareGovManager_Log' TO N'D:\CGI Data\HealthCareGovManager.LDF',
        NOUNLOAD, REPLACE, STATS = 10

    I used the UI to delete the existing database, so that I could use the UI to force an overwrite of the (non)existing database. Hopefully there can be an answer so that the next guy can have one. No, nobody was in the context of the database (the error message from other connections is quite different from this error, and I only got to see this error after I killed the other connections).

    Read the article

  • Subversion error: Repository moved permanently to please relocate

    - by Bart S.
    I've set up Subversion and Apache on my server. If I browse to it through my web browser it works fine (http://svn.host.com/reposname). However, if I do a checkout on my machine I get the following error:

      Command: Checkout from http://svn.host.com/reposname, revision HEAD, Fully recursive, Externals included
      Error: Repository moved permanently to 'http://svn.host.com/reposname/'; please relocate

    I checked Apache's error log, but it doesn't say anything. (It does now - see edit.) My repositories are stored under /var/www/svn/repos/ and my website is stored under /var/www/vhosts/x/... Here's the conf file for the subdomain:

      <Location />
        DAV svn
        SVNParentPath /var/www/svn/repos/
        AuthType Basic
        AuthName "Authorization Realm"
        AuthUserFile /var/www/svn/auth/svn.htpasswd
        Require valid-user
      </Location>

    Authentication works fine. Does anyone know what might be causing this?

    -- Edit: So I restarted Apache (again) and tried it again, and now it gives me an error message, but it doesn't really help. Anyone have an idea what it means?

      [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] Could not fetch resource information.  [403, #0]
      [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] (2)No such file or directory: The URI does not contain the name of a repository.  [403, #190001]

    -- Edit 2: If I do svn info it doesn't give anything useful:

      [root@eduro eduro.nl]# svn info http://svn.domain.com/repos/
      Username: username
      Password for 'username':
      svn: Repository moved permanently to 'http://svn.domain.com/repos/'; please relocate

    I also tried doing a local checkout (svn checkout file:///var/www/svn/repos/reposname) and that works fine (adding / committing also works fine). So it seems it has something to do with Apache. Some other information: I'm running CentOS 5.3, Plesk 9.3, Subversion version 1.6.9 (r901367).

    -- Edit 3: I tried moving the repositories, but it didn't make any difference. SELinux is disabled, so that isn't it either.

    -- Edit 4: Really? Nobody :(?

    Read the article

  • Windows 7 sharing folder from command line, selecting users and triggering the "Apply" of changes

    - by clintp
    I have a drive that doesn't get mounted until after I log in. (A Truecrypt thumbdrive device, and no, I'm not making it a "System Favorite" to get around this.) I'd like to construct a batch file to share it once I've gotten it mounted because the sharing info doesn't seem to stick through a reboot. From the GUI, I'd go into the folder Properties-Sharing. And then in Advanced Sharing I'd pick the name to share it as. And then under the "Share..." button I'd pick the users and the permissions I want to grant them. After "Apply" there's a pause -- I'm not sure what's happening here, but the dialog says "Sharing Items..." -- and then everything is okay. From the command line, I've done:

      net share MyFolder=F:\MyFolder
      cacls F:\MyFolder /G FirstUser:F
      cacls F:\MyFolder /G OtherUser:F

    And this almost works. I can see the share on the network then, but nobody has permissions to do anything. If I go into the GUI and change anything (and I can see my command-line changes in there already) and press "Apply" I get the "Sharing Items.... This may take a few minutes" dialog... and then Voila! It works. I get the "Your folder is shared" dialog with the command-line changes I made, along with the GUI change that I made to trigger the "Sharing Items..." dialog. Everything's peachy. Is a service being restarted? Which one? What's triggering the sharing to take effect? And -- more importantly -- how do I do it from the command line?

    Read the article

  • umask seems to vary by user

    - by paullb
    I've got a development Ubuntu system for which I have several users: myself (with full sudo) and about 5 other users. (I've set up the system so everything in this respect is still at its default settings.) I'm trying to set the system up so that multiple people can collaborate in a single directory by using grouping, and I want the default permissions to be 664. However, when some users edit files the permissions are 644. After a lot of investigating, most users have a umask (checked at the prompt) of 0002, and when they create files they are 664 (as expected), but there are 2 (myself and one other) who have a 0022 umask (so the files that come out are 644 and nobody else can write to them). I've looked everywhere but can't figure out why a couple of users wind up with a different umask (e.g. there is nothing in the .bash_profile or anything like that). Any ideas for the source of the discrepancy?

    /etc/bashrc:

      if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
        umask 002
      else
        umask 022
      fi

    /etc/profile:

      if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
        umask 002
      else
        umask 022
      fi

    EDIT: My (bad) ~/.bashrc:

      # .bashrc
      # Source global definitions
      if [ -f /etc/bashrc ]; then
        . /etc/bashrc
      fi
      # User specific aliases and functions
      export LANG=en_US.utf8

    Other user's (good) .bashrc:

      # .bashrc
      # Source global definitions
      if [ -f /etc/bashrc ]; then
        . /etc/bashrc
      fi
      # User specific aliases and functions
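    Both system files only set umask 002 when the UID is above 199 and the primary group name matches the user name (the User Private Groups convention). A minimal sketch of checking that condition for one of the affected accounts:

      # If these two names differ (or the UID is 199 or lower), the test in
      # /etc/profile and /etc/bashrc falls through to umask 022.
      id -un
      id -gn
      id -u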

    Read the article

  • Computer won't start up. Stuck on Lenovo splash screen. Help Diagnose

    - by Ace Legend
    I have a Lenovo 21" IdeaCentre (I'm not sure exactly which model). Honestly, the computer works off and on. I have had problems with it not being able to shut down, which I fixed. The fan seems to be constantly running, and there are a few other problems as well. Anyway, nobody was using it when all of a sudden it switched to a blue screen. I was in the kitchen, but when I got over to the computer I read the message. It said something about bad drivers, but that is all I saw, and then it restarted. However, when it got to the Lenovo splash screen, nothing happened. I waited there for over 10 minutes, but still nothing. I tried to turn off the computer, but the only way to do it was to pull out the power cable. I then removed all USB devices and tried again. Still nothing. It also won't respond to keyboard input when I try to use Enter to interrupt normal startup. My guess is some piece of hardware is damaged inside the machine. However, I have no idea what piece it is. Does anybody have any idea what could be wrong with it? Thanks.

    Read the article

  • Windows 8 Install Hanging at first white-font boot splash

    - by Omega
    I'm trying to install the Windows 8 preview on my Samsung Series 9 (2012, Ivy Bridge). I've done a bit of a custom scheme with this one: I'm using EFI/UEFI on this system; I've seen no indication that this system supports Secure Boot (yay!); my SSD is set up with GPT; Ubuntu is already installed and working great via UEFI; and I'm trying to boot the Windows 8 install from a USB stick via UEFI. I don't have access to a CD drive. The problem is that the boot seems to hang at the very first splash screen: white Windows logo, and the little beads don't show up. My USB stick has an activity light, and it does blink for the first few seconds, but then goes back to its "nobody is talking to me" idle pulse. What I know: UEFI booting is definitely working; Windows 8, for those few seconds, seems to have some kind of access to the USB drive; and my Series 9 is running the latest BIOS/firmware. Any idea what I might be able to do to get Windows 8 installed?

    Read the article

  • At what point does the performance gap between GPU & CPU become so great that the CPU is holding back a system?

    - by Matthew Galloway
    I know that generally speaking for gaming performance the GPU is the primary factor which holds back performance, with everything else such as RAM/motherboard/PSU/CPU being secondary in importance to the graphics card. But at some point the other components ARE going to be significant in holding back the whole system! For instance nobody would be silly enough to play modern games with 512MB RAM and the very latest graphics cards (such as an HD7970) as I bet the performance increase over such a system with only 512MB but a mid range card would be non-existent! Thus it would be a "waste" for such a person to buy any high end graphics card without resolving first the system's other problems. The same point applies to other components, such as if it only had a Pentium II a current high end graphics card would be wasted on it! So my core question is how do you determine at what point for your system is spending on extra GPU power be completely "wasted"? (also, a slightly more nuanced question is trying work out at what point might the extra graphics power not be "wasted" but would be "sub optimal" value for money, when the expenditure should then be split around graphics card and other components. As obviously a gamer shouldn't always just spend on upgrading the graphics card! But needs to balance it out)

    Read the article

  • How do you create an ssh key for the apache user on Redhat?

    - by Josh Smeaton
    As the question asks, how do I generate an ssh key for the user apache on Redhat? My use case is that we have a mercurial server running under the apache user. We also have several web servers clustered that we need to log on to manually and do pulls from. Ideally, what we'd like to do is have the mercurial server push all changes to all the webservers in the cluster. To do this, we want to use ssh, as setting up http mercurial servers on each of the web servers seems like too much work, and far too heavy. What I've tried to do is the following:

      > sudo mkdir /var/www/.ssh
      > sudo chown -R apache:nobody /var/www/.ssh
      > su - apache -c "ssh-keygen -t rsa"
      This account is currently not available.

    I found the above commands elsewhere, but I can only assume that Redhat has differences to whatever distro was used for the above. Is there a way I can generate an ssh-key for the apache user?
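    "This account is currently not available" normally means the apache user's login shell is /sbin/nologin, so "su -" refuses to start a shell. A sketch of two usual workarounds that don't require changing the shell permanently (the key path is an assumption based on the question):

      # Option 1: override the shell just for this one command
      sudo su -s /bin/bash apache -c "ssh-keygen -t rsa -f /var/www/.ssh/id_rsa"
      # Option 2: run the command directly as apache via sudo (no shell involved)
      sudo -u apache ssh-keygen -t rsa -f /var/www/.ssh/id_rsa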

    Read the article

  • Changing the modified date of a message in Exchange 2010

    - by jgoldschrafe
    My organization is in the middle of a process to move their Exchange 2010 messaging system from one archiving platform to another. As part of this process, we need to restore all archived messages back into users' email accounts, and then let the new system import them again. The problem is that when the messages are dumped back, the modified date on the message is set to the date it was restored, which trips up message archiving and basically means nobody will have anything archived for six months. So you don't have to ask: no, our archiving platform only uses the modified timestamp on the message and cannot be altered to temporarily use the sent or received timestamp instead to determine whether to archive it. We and others have asked for the feature, but it doesn't exist right now. What we're looking for is a method to go through the user's mailbox and alter the modified timestamp of each message (or preferably received more than X months ago) to the received date of the message. We also don't want to spend more on this tool per user than we're spending on the archiving solution in the first place. We've run across a few tools that are something ridiculous like $25 per user. I don't think we're even paying close to that for Exchange and the archiving solution put together. Whatever we settle on should function on a live mailbox with no downtime. Playing around with PST imports and hacky little things like that isn't going to work. We're fine with programming/scripting, if anyone knows the best way through PowerShell, COM automation or some other way to best handle this.

    Read the article

  • How to make Firefox file associations consistent with Ubuntu file associations?

    - by wbharding
    This seems to be a pretty commonly Googled question, but one for which there are no answers:

      http://www.linuxquestions.org/questions/linux-software-2/firefox-download-mime-types-378902
      http://www.birkit.com/content/kubuntu-linux/internet/firefox/fix-file-associations-in-firefox.html

    being two links amongst the many. The gist of what I want to accomplish is to have Firefox understand the file associations of what I download without me having to manually map all of them myself. GNOME knows the file extensions, so I would have expected that Firefox could just use the already-known file mappings there to open the right stuff (as I presume Chrome does). But it doesn't. At least not for me, using Firefox 4, and not by default. When I click on a downloaded file right now, Firefox always has to ask me what application should be used to open the file. A handful of Google results tell me that I can reassociate my file extensions by deleting ~/.mozilla/firefox/[profile name]/mimeTypes.rdf, but while deleting that file does in fact result in a new mimeTypes file being generated, the new mimeTypes is just as barren as the old one had been. Based on the number of unanswered questions on the Googlesphere, I know this is a very common problem for Ubuntu users, but it seems to be one for which nobody has chimed in with a good solution. Maybe Superuser can finally be the panacea for us all?
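    When Firefox is told to "Use system default", the handler comes from the desktop's shared MIME database rather than from mimeTypes.rdf, so it can be worth confirming what the desktop itself thinks the handler is. A small sketch, using PDF as the example type and a hypothetical downloaded file:

      # What type does the desktop assign to a downloaded file, and what opens it?
      xdg-mime query filetype ~/Downloads/example.pdf
      xdg-mime query default application/pdf
      # Set a handler for that type (evince.desktop is just an example)
      xdg-mime default evince.desktop application/pdf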

    Read the article

  • Thunderbird: export email account settings

    - by zpea
    I'd like to create a new profile for Thunderbird using the same mail accounts I already configured in my old profile. As it is quite a number of accounts, it would be great to have a way to export/import them instead of writing down the settings just to fill in again in the new profile. Using web search and search here, I mainly found the following suggestions, which do not match what I need:
    - Copy the whole profile: Not possible for me, as I don't want to copy other settings, the downloaded mail data, etc., and the old profile broke when running out of space in the home folder anyway.
    - Use mozBackup: There seem to be several programs by that name (forks?). In any case, it's Windows-only and hence no option (I am mostly on Linux and prefer platform-independent solutions anyway).
    - Use accountex: Seems to do what I want, but it is not compatible with the current Thunderbird version (it supports only up to version 3.1).
    - Posts with various tips from 4 years ago: Top results in the web search with the G. But they do not work in current versions of Thunderbird either.
    Did I overlook anything? After all, it doesn't sound like I was looking for something nobody ever looked for.
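    One rough, unsupported approach (an assumption; test on a copy of the profile first): account definitions live as plain user_pref lines in prefs.js, so they can be pulled out of the old profile with grep and reviewed before being pasted into the new profile's prefs.js while Thunderbird is closed. The profile path below is hypothetical:

      # Extract only the account/server/identity/SMTP settings from the old profile
      grep -E 'user_pref\("mail\.(account|server|identity|smtpserver|smtp)' \
          ~/.thunderbird/old.profile/prefs.js > account-settings.js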

    Read the article

  • Understanding Unix Permissions (w/ ACL)

    - by Dr. DOT
    I am trying to set permissions on my server properly. Currently I have a number of directories and files chmod'd to 0777 -- but I am not comfortable with it being this way. So, at the advice of a Server Fault specialist, I had my hosting provider install ACL support on my shared virtual server. When I FTP to the server as my FTP user account "abc", I can do everything I need to do (and rightfully so) because all my dirs and files are owned by "abc", the group is "abc", and the first octal digit is set to 7 (rwx). That much I get. But here's where it gets dark gray for me. PHP is set to the user "nobody", so when someone browses one of my web pages that either ends in .php or has some PHP embedded, I assume the last octal digit controls the access. Because all my dirs and files are owned by "abc" and assigned to group "abc", if the last digit were a 4 (r--) then the server would let the browser read the file. If it were a 6 (rw-) then the server would let the browser also write to the file or directory, correct? What if the web document does not end in .php or does not have any PHP embedded? What is the user then? How can I use ACLs to avoid setting the permission to 6 (rw-) or even 7 (rwx)? [Not sure what execute does or means.] I'm just looking for some sort of policy settings to best lock down my dirs and files while allowing my PHP scripts to do uploads and write to files (so my users don't call me to tell me "permission denied"). OK, thanks to anyone out there willing to lend me a hand. It is greatly appreciated.
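    If the goal is to stop relying on a world-writable 0777 and instead grant just the PHP user what it needs, a minimal setfacl sketch; the "nobody" user is taken from the question, while the paths and the choice of which directories need write access are assumptions to adjust:

      # Give the web user read/traverse everywhere under the docroot...
      setfacl -R    -m u:nobody:rX  /path/to/site
      setfacl -R -d -m u:nobody:rX  /path/to/site      # default ACL for new files
      # ...and write access only where uploads actually land (hypothetical dir)
      setfacl -R    -m u:nobody:rwX /path/to/site/uploads
      setfacl -R -d -m u:nobody:rwX /path/to/site/uploads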

    Read the article

  • Is there anything like Heroku for PHP and/or .NET?

    - by Wayne M
    In my area PHP is very widespread, so is .NET. Ruby not so much; most places have never heard of it. For some personal things I am "forced" to choose Rails because I want to take advantage of Heroku - the ability to deploy and scale on the cloud very easily is the main reason. Also, they offer a small FREE plan, with no ads or strings attached, that I can use for demo sites or, in this case, for my business' static page; as a totally bootstrapped startup I have maybe $50 or so in initial capital and cannot afford to pay monthly fees while I'm getting started. Are there any similar offerings for other languages? Specifically, I really like the small, 5MB site for free that Heroku offers - is there anything like that for PHP and/or .NET? I'm not even that concerned about the "cloud" part, but that would be a nice bonus. If there is, I might be able to kill two birds with one stone and pick up a useful skill as I'm doing my own thing instead of using something that nobody else knows or cares about. I should add I'm specifically interested in something that offers a free plan. As I said, Heroku has a 5mb plan that you can have as many as you want for free; I have yet to find anything similar for any other platform (most of the "free" sites require you to have ugly banners on your page, or don't allow you to use your own domain name), and to be honest I'm not too thrilled about using Ruby on Rails for everything simply to take advantage of this. I'm asking this here because I already asked it on StackOverflow and someone suggested it would be better suited here.

    Read the article

  • How do i Setup a Mac OS X Server - NameServer behind an Airport Extreme?

    - by basilmir
    I have a Mac mini server I want to set up to host a couple of things. My setup is as follows: the WAN connection (static IP and ISP nameservers) goes into the WAN port of the AirPort Extreme, and the Mac mini server is connected to one of the Ethernet ports. The Mac mini will host my domain, something.com. My settings so far:
    - The AirPort Extreme gets 96.x.x.x as the external static IP from the ISP, and 174.y.y.y as the nameserver.
    - The Mac mini server always gets a reserved DHCP IP from the AirPort Extreme: 10.0.1.3 is the server's IP, and 10.0.1.1 is its DNS (this IP is the AirPort Extreme itself).
    My DNS server has an A record pointing to ns.something.com and a PTR doing the reverse. I've already pointed ns.something.com at my 96.x.x.x address with my registrar. NOW: Nobody seems to be able to reach my ns.something.com to resolve any of my records. From any computer in my network I CAN see my nameserver and everything works. The outside, on the other hand, does not... it's as if the AirPort Extreme, which "holds" the exterior 96.x.x.x address, doesn't pass DNS along to my nameserver at 10.0.1.3. I have the server managing the AirPort. Isn't this supposed to work?
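    Whether the AirPort is actually handing port 53 traffic to the Mac mini can be checked from a host outside the network. A sketch (the 96.x.x.x placeholder is kept from the question; the underlying assumption is that UDP/TCP 53 has to be port-forwarded on the AirPort to 10.0.1.3 for this to answer):

      # From an external host: does the public address answer DNS at all?
      dig @96.x.x.x ns.something.com A +short
      # From inside the LAN: confirm the zone itself is served correctly
      dig @10.0.1.3 ns.something.com A +short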

    Read the article

  • RSS "Newspaper" / Google Reader replacement

    - by Sean D
    With the impending demise of Google Reader I've been looking at ways to replace it. I've decided that what might be cool is to get an email every morning, with all the updates from the last twenty-four hours, maybe in the style of a newspaper. That's not a very original idea, since sites like http://fivefilters.org/pdf-newspaper/ and http://feedjournal.com/ already do this, but they both have various drawbacks. In particular both require a single feed, will just take the last n items, and clicking around on their website. The Pro option for feedjournal seems almost like it would do the job, but the project seems to be dead, and there's no way to buy it. Before I hack together something crazy I'd like to know if there's a better solution to my problem. In short: I want to replace Google Reader with a daily pdf email, how should I do this? edit: I didn't award the bounty because nobody solved the problem (not that I'm assuming it has a solution). Answers like "well for the way I do things this wouldn't work" aren't actually helpful, even if they are well-meaning.

    Read the article

  • What are possible results/side effects if replication between DC's in a Windows domain is unable to occur?

    - by hydroparadise
    There's plenty of administration literature out there on how to properly manage Windows servers. But in dealing with real life, things don't always occur like you want them to. In Microsoft's Windows Server 2003 Administrator's Companion, out of 1400+ pages, there's only one page that I could find on setting up additional domain controllers. They make it sound seamless and don't reveal a whole lot about what happens if "peer" DCs are unable to replicate. Down to the specific issue at hand: we had a DC go down about a month ago due to a bad RAID controller. There was nothing critical that warranted immediate attention, so bringing it back up got put on the back burner. A month later, we got the DC back up and running and everything seemed OK. The next day, nobody is able to log on, complaining that the "user does not exist" or "unable to establish a trust relationship". Knowing that I had just put the downed DC back on the network, I immediately took it back off the network and had everybody restart their workstations. After that, Exchange was fine, shares became available, and everybody was able to log in. After doing some event log swimming, it would appear that everything started due to replication issues on the SYSVOL. I've read where you can force replication, but that would mean putting it back on the network. I am afraid to put the DC back on the network for fear that something else could go wrong. So, what other issues could one expect to run into where two DCs are unreplicated for over a month?

    Read the article
