Search Results

Search found 13586 results on 544 pages for 'trusted domain'.

Page 193/544 | < Previous Page | 189 190 191 192 193 194 195 196 197 198 199 200  | Next Page >

  • Set global handling for PHP scripts in NGINX + PHP-FPM

    - by Radio
    I have to define fastcgi_pass for every virtual host. How do I define it globally?

        server {
            listen 80;
            server_name www.domain.tld;
            location / {
                root /home/user/www.domain.tld;
                index index.html index.php;
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /home/user/domain.tld$fastcgi_script_name;
                include fastcgi_params;
            }
        }
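
    nginx has no truly global location blocks, so one hedged approach is to factor the PHP handler into a snippet file and include it from each server block. A minimal sketch (the snippet path is made up; it assumes each vhost sets root at the server level so $document_root resolves correctly):

        # /etc/nginx/php_fastcgi.conf -- hypothetical snippet, included by every vhost
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            # $document_root follows each vhost's own root directive,
            # so SCRIPT_FILENAME no longer needs per-host editing
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Then each server { ... } only needs: include /etc/nginx/php_fastcgi.conf;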

    Read the article

  • Setting permissions on user accounts

    - by Ron Porter
    We would like to lock down a couple of accounts to prevent even domain admins from resetting the password without already knowing the current password. From what I can see in the permission sets, this looks possible. Everything I've found on the subject recommends against altering default permissions, but doesn't go into detail about why. Given that domain admins normally retain the ability to reset passwords without knowing the current ones, is it reasonable to prevent password resets on the domain admin account and maybe a couple of others? If not, why not?
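
    For reference, a hedged sketch of how such a deny might be applied from the command line with dsacls (the account DN and group name here are hypothetical, and the exact quoting of the right name may need adjusting):

        dsacls "CN=Jane Admin,OU=Admins,DC=domain,DC=com" /D "DOMAIN\Domain Admins:CA;Reset Password"

    One caution: for accounts in protected groups, the AdminSDHolder/SDProp process periodically rewrites the ACL, so a custom deny on a domain admin account may not survive without also changing the AdminSDHolder template.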

    Read the article

  • Error from DNS validation service: RFC821 4.3

    - by ferdi
    So I've managed to set up bind9 and a mail server, but it seems there is something wrong. I don't quite understand this error:

        The configuration of your mail servers and your DNS are not ok! The report of the test is:
        mail.domain.com. - mail.domain.com - 208.xxx.xxx.xxx - lisa.domain.com
        Spam recognition software and RFC821 4.3 (also RFC2821 4.3.1) state that the hostname given
        in the SMTP greeting MUST have an A record pointing back to the same server.

    Can anyone explain this to me in a bit more detail, and maybe point me in a direction to solve this?
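
    In short: the name the MTA announces in its SMTP banner must resolve to the address of the machine itself. A hedged sketch of the check and of the record that satisfies it, reusing the names and the redacted address from the report:

        $ dig +short mail.domain.com A
        208.xxx.xxx.xxx

        ; zone-file form of the required record
        mail.domain.com.    IN    A    208.xxx.xxx.xxx

    If the banner currently greets as lisa.domain.com, either add an A record for that name or reconfigure the MTA to greet as mail.domain.com.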

    Read the article

  • How to create a stub DNS zone to emulate my customer's production environment?

    - by Albert Widjaja
    Hi All, is it possible to emulate my customer's production environment inside my AD domain by creating the same domain inside my primary DNS server? Can I create a mycustomer.com DNS zone (stub) just for the sake of listing a few database servers and application servers, while the other DNS records (e.g. MX, NS) refer to the REAL entries, so that mail flow from my Exchange Server to mycustomer.com is unaffected? Because if I just create A records in my current domain for some of the servers, the FQDN is not exactly what I want. Thanks.
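
    Worth noting: a stub zone holds only the NS, SOA and glue records copied from the target zone's own servers, so it can't serve a few local A records while deferring MX/NS to the real zone; that selective mix usually requires hosting a full local copy of the zone instead. For completeness, a sketch of a plain stub zone declaration in BIND's named.conf (the master address is a placeholder; Windows DNS has an equivalent stub-zone wizard):

        zone "mycustomer.com" {
            type stub;
            masters { 192.0.2.53; };    // hypothetical address of the customer's real NS
            file "stub.mycustomer.com";
        };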

    Read the article

  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much.

    The network is set up like this:

    a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location.
    b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX".
    c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted.
    d) Gigabit Ethernet everywhere. The organization only has one domain, and it did not change during the migration.
    e) Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles.

    To sum up: the servers in question handle user accounts and files, nothing else (e.g., no TS, no mail, no IIS, etc.)

    I have two major problems I'm hoping you can help me with:

    1) Even though all domain users have had their redirected folders moved to the new server, and logging in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (not laptops or anything either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can no longer access the share on OLDBOX to finally sync with it. How do I get this data out of those CSC folders, and onto NEWBOX?

    2) What's the best way to migrate roaming profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc.?

    Things I've thought about trying, for problem 1:

    a) Re-enable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share's data to the equivalent share on NEWBOX. Reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by re-enabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?

    b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynced data. Reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynced changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?

    c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX. Reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? (One hedged approach is sketched after this question.) As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs, after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed). MS KB articles and Googling around suggest there's a utility called CSCCMD that's meant for this exact scenario... but it looks like it's available for XP, not Vista, no? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?

    For problem 2:

    a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected folders repository, migrate existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself? In AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this.

    In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.
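
    On 'breaking into' the CSC folder: the Vista-era cache at C:\Windows\CSC is ACL'd so that effectively only the SYSTEM account can read it, which is why taking ownership as an admin fails. A hedged sketch using Sysinternals PsExec to get a SYSTEM shell (the workstation name and rescue share are hypothetical):

        REM from an admin machine; -s runs the remote command as LocalSystem
        psexec \\WKSTN01 -s cmd.exe

        REM inside that SYSTEM shell, copy the cache out for triage
        REM (SYSTEM reaches the network as the computer account; copy locally
        REM  first if the NEWBOX share refuses machine-account access)
        xcopy C:\Windows\CSC \\NEWBOX\csc-rescue\WKSTN01 /E /H /K /Y

    Vista's cache reportedly mirrors the UNC namespace under CSC\v2.0.6\namespace\<server>\<share>, so the copied tree should be browsable per server and share; treat it as raw material for inspection rather than a ready-to-restore copy.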

    Read the article

  • Using GitLab behind an Apache proxy, all URLs are wrong

    - by Hippyjim
    I've set up GitLab on Ubuntu 12.04 using the default package from https://about.gitlab.com/downloads/. As I had Apache installed already, I have to run nginx on localhost:8888. The problem is, all images (such as avatars) are now served from that URL, and all the checkout URLs GitLab gives are the same as well - instead of using my domain name. If I change /etc/gitlab/gitlab.rb to use that URL, then GitLab stops working and gives a 503. Any ideas how I can tell GitLab what URL to present to the world, even though it's really running on localhost?

    /etc/gitlab/gitlab.rb looks like:

        # Change the external_url to the address your users will type in their browser
        external_url 'http://my.local.domain'
        redis['port'] = 6379
        postgresql['port'] = 2345
        unicorn['port'] = 3456

    and /opt/gitlab/embedded/conf/nginx.conf looks like:

        server {
            listen localhost:8888;
            server_name my.local.domain;
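
    A hedged sketch of what might work with the omnibus package, assuming this version exposes its bundled nginx settings in gitlab.rb: keep external_url on the public name (so generated links come out right) and move only nginx onto 8888, instead of editing the generated nginx.conf by hand:

        external_url 'http://my.local.domain'
        nginx['listen_addresses'] = ['127.0.0.1']
        nginx['listen_port'] = 8888

    followed by sudo gitlab-ctl reconfigure. The Apache vhost would then proxy to 127.0.0.1:8888 while users only ever see http://my.local.domain.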

    Read the article

  • "You don't have permission to access / on this server" on CentOS 5.3

    - by zahid
    Hello, I am using CentOS 5.3 with the Kloxo panel. Everything was fine, but last night when I was updating my site, I started getting a Forbidden error when trying to access my site script on both domains:

        www.w3scan.net
        www.dl4fun.com

        Forbidden
        You don't have permission to access / on this server.

    Please help. I checked httpd and it seems to be ok. My httpd.conf:

        #<VirtualHost *:80>
        #    ServerName www.domain.tld
        #    ServerPath /domain
        #    DocumentRoot /home/user/domain
        #    DirectoryIndex index.html index.htm index.shtml default.cgi default.html default.htm
        #</VirtualHost>

    I uninstalled Apache and installed it again; now I instead get an "Index of /" listing. I modified Apache's welcome.conf to remove the Apache test page. Help!
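
    A hedged first-pass checklist for this class of error, assuming the docroots live under /home as in the commented vhost (paths are illustrative):

        # every directory from / down to the docroot needs traverse access for the Apache user
        ls -ld /home /home/user /home/user/domain
        chmod 711 /home/user
        chmod 755 /home/user/domain

        # an "Index of /" page means no index file matched DirectoryIndex
        ls /home/user/domain/index.* /home/user/domain/default.*

    If permissions and index files check out, the next suspects are Directory blocks with "Deny from all" in httpd.conf or in the panel's generated includes.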

    Read the article

  • DNS lookup working but nothing pinging

    - by blsub6
    I have three forward lookup zones in my DNS server. When I try pinging a server that's part of lookup zone A from a computer that's part of that same zone, I get a response. If I try pinging a server that's not in the same zone as the computer I'm pinging from, I get nothing. When I try an nslookup of the servers that are not part of the same lookup zone, the names resolve correctly. If I append the domain onto the name, I can ping it just fine. Is there a way I can fix this? Or do I need to keep appending the domain onto the name whenever the computer I'm trying to ping is not in the same domain as mine?
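
    One hedged explanation: ping depends on the client's DNS suffix search list to expand single-label names, and ping and nslookup apply suffixes differently, so appending the domain working is the telltale. A sketch of adding the other zones to a client's search list via the registry value Windows uses for this (zone names are placeholders; the "DNS Suffix Search List" GPO is the cleaner way to roll it out domain-wide):

        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters ^
            /v SearchList /t REG_SZ /d "zonea.local,zoneb.local,zonec.local" /f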

    Read the article

  • certificate error while subdomain forwarding

    - by rahulchandran
    I have a website, call it http://sub.example.com, hosted on, say, 72.xx.xx.x. There is a certificate for https://sub.example.com. Now I go into the DNS management tool at my hosting provider, and I set up the standard subdomain forwarding wherein https://sub.example.com forwards to 72.xx.xx.x. Now when I try to browse to https://sub.example.com, I get a certificate error saying the certificate is for the wrong website. I have also tried forwarding http://sub.example.com to 72.xx.xx.x, and tried it with domain masking in both cases. I am still getting the certificate error no matter what. An additional wrinkle: if someone types in https://sub.example.com, then the domain forwarding does not seem to work and IE just spins endlessly and finally fails. How can I domain-forward https://sub.example.com to 72.xx.xx.x?
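
    For context: "domain forwarding" is usually an HTTP redirect served from the provider's own servers, which present the provider's certificate (hence the mismatch) and often don't answer HTTPS at all (hence the endless spin). Pointing the name straight at the host sidesteps both problems. A sketch of the replacement record, reusing the redacted address from the question:

        sub.example.com.    IN    A    72.xx.xx.x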

    Read the article

  • SSL Certificate for local web server

    - by Firefly
    Is it at all possible to create a self-signed certificate for use on multiple machines on a local network which would stop the browser complaining that it is not a trusted site? We have a product which is basically a computer running lighttpd to serve a web interface for configuring the computer (sort of how a router has a web interface). There can also be many of these machines running on the same network with dynamic IPs. What I basically want to do is enable SSL for extra security, but I don't want people on the local network to be given a browser warning about the certificate not being trusted. Is this at all possible?
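
    The usual pattern is a small private CA: import the CA certificate into the browsers once, then sign each device's certificate with it. A hedged openssl sketch (file names and the 825-day lifetime are arbitrary choices):

        # one-time: private CA
        openssl genrsa -out ca.key 2048
        openssl req -x509 -new -key ca.key -days 3650 -out ca.crt

        # per device: key, CSR, CA-signed certificate
        openssl genrsa -out server.key 2048
        openssl req -new -key server.key -out server.csr
        openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
            -CAcreateserial -out server.crt -days 825

    The catch with dynamic IPs: browsers match the certificate against whatever is typed in the URL bar, so this only silences warnings if the devices are reached via stable hostnames that the certificates actually name.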

    Read the article

  • Tool/Program/Script/Formula for deciphering Active Directory Connection Strings for 3rd party user i

    - by I.T. Support
    We're using WSFTP, which has an Active Directory integration module. To populate the user accounts you need to provide a connection string akin to:

        OU=Users,DC=domain,DC=com
        CN=Domain Users,OU=Users,DC=domain,DC=com

    Questions:

    1. Is there a tool/program/script/formula that allows me to decipher how these strings might look based on what I can see in Active Directory Users & Computers?
    2. Is there a proper/accepted name for these types of connection strings? I don't even know what to Google to get more information about how to format one properly.
    3. How would I troubleshoot the connection string if I think it looks correctly formatted, but it isn't working?

    Thanks!
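
    These strings are LDAP distinguished names (DNs), which answers question 2 and gives the search term for everything else. For question 1, a hedged sketch using the dsquery tools that ship with Windows Server (sample names are hypothetical); both commands print full DNs:

        dsquery user -name "jsmith*"
        dsquery group -name "Domain Users"

    Newer ADUC versions can also show a DN directly: enable View > Advanced Features and read the distinguishedName value on the object's Attribute Editor tab.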

    Read the article

  • How to add additional disks to a Windows 2008 KVM based Guest?

    - by taazaa
    I have a Win 2008 KVM-based guest VM running on an Ubuntu 10 host. It is a raw image of 22G. I want to add a "data" drive which would show up as the "D:\" drive on the guest. I first created a raw image using:

        qemu-img create -f raw ~/vmdisk2.img 50G

    Then I tried attaching it using virsh attach-disk. When that did not work, I tried editing the XML file of the VM directly. Neither seemed to work. I would greatly appreciate any help on how to do this and what the best practice is. I want to keep the base image small, so that I can clone it (hopefully) and then attach the necessary storage based on the application at hand.

    Update: the XML of the VM before adding the second drive:

        <domain type='kvm'>
          <name>win08e-vm1</name>
          <uuid>183a4ba0-1c0b-0b04-ad01-aa7c3a4cb390</uuid>
          <memory>1048576</memory>
          <currentMemory>1048576</currentMemory>
          <vcpu>2</vcpu>
          <os>
            <type arch='x86_64' machine='pc-0.12'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <clock offset='localtime'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/var/lib/libvirt/images/win08e-vm1.img'/>
              <target dev='hda' bus='ide'/>
              <address type='drive' controller='0' bus='0' unit='0'/>
            </disk>
            <disk type='file' device='cdrom'>
              <driver name='qemu' type='raw'/>
              <source file='/home/taazaa/iso/Win08ER264.iso'/>
              <target dev='hdc' bus='ide'/>
              <readonly/>
              <address type='drive' controller='0' bus='1' unit='0'/>
            </disk>
            <controller type='ide' index='0'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
            </controller>
            <interface type='bridge'>
              <mac address='52:54:00:7f:a7:ae'/>
              <source bridge='br0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            </interface>
            <serial type='pty'>
              <target port='0'/>
            </serial>
            <console type='pty'>
              <target type='serial' port='0'/>
            </console>
            <input type='tablet' bus='usb'/>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
            <video>
              <model type='vga' vram='9216' heads='1'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
            </video>
            <memballoon model='virtio'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
            </memballoon>
          </devices>
        </domain>

    Thanks!
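
    For what it's worth, a hedged sketch of the attach-disk route, matching the names in the XML above (hdb lands the disk on the free slot of the existing IDE controller; the image path is an assumption):

        qemu-img create -f raw /var/lib/libvirt/images/win08e-vm1-data.img 50G
        virsh attach-disk win08e-vm1 /var/lib/libvirt/images/win08e-vm1-data.img hdb \
            --driver qemu --subdriver raw --persistent

    Two common gotchas: IDE disks generally can't be hot-plugged, so attach while the guest is shut down; and a fresh raw image appears in Windows as an uninitialized disk, so it must be brought online, initialized, and formatted in Disk Management before a D: drive shows up.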

    Read the article

  • Using AWS SES with Sendmail

    - by Abs
    I am trying to send mail via AWS SES using Sendmail. I have Sendmail version 8.14.4 installed and I followed the first section of this useful tutorial by Amazon. However, I get this:

        root@:/etc/mail# echo "Subject: test" | sendmail -v [email protected]
        [email protected]... Connecting to [127.0.0.1] via relay...
        [email protected]... Deferred: Connection timed out with [127.0.0.1]

    Can anyone help me get this working? The logs have the following:

        Dec 14 10:35:21 ip-10-xx-xx-181 sm-msp-queue[17910]: qBE8K1Lu016411: to=root, delay=00:21:24,
        xdelay=00:06:19, mailer=relay, pri=121806, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0,
        stat=Deferred: Connection timed out with [127.0.0.1]
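
    Reading the trace: sm-msp-queue is the submission half of sendmail handing mail to a local MTA on 127.0.0.1, and nothing is answering there, so the SES relay is never even consulted. A few hedged checks, assuming a stock split MTA/MSP sendmail layout:

        # is any MTA listening locally on port 25?
        netstat -lnt | grep ':25'
        # is the sendmail daemon itself running (not just the MSP queue runner)?
        ps ax | grep '[s]endmail'
        # if not, start it and the deferred queue should drain
        service sendmail start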

    Read the article

  • Running phpMyAdmin on Nginx, port 8080 passed to Varnish, not working well

    - by amrnt
    I installed Nginx, Varnish and PHP-FPM, then installed phpMyAdmin and made a virtual host for it:

        server {
            listen 8080;
            server_name phpmyadmin.Domain.com;
            access_log /var/log/phpmyadmin.access_log;
            error_log /var/log/phpmyadmin.error_log;
            location / {
                root /usr/share/phpmyadmin;
                index index.php;
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
                include /opt/nginx/conf/fastcgi_params;
            }
        }

    When I go to phpmyadmin.Domain.com it works as expected, but after submitting the username/password it redirects me to phpmyadmin.Domain.com:8080/index.php?... with a "page cannot be found" response as well. What could I do?
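
    One hedged explanation: nginx builds redirects using the port it accepted the request on, so behind Varnish the login redirect advertises :8080 even though clients can only reach port 80. A sketch of the candidate fix, untested against this exact stack:

        server {
            listen 8080;
            # don't advertise the backend port in Location: headers
            port_in_redirect off;
            ...
        }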

    Read the article

  • winbind not working

    - by Yon
    I'm trying to set up winbind with an Active Directory running on Win2003. This works:

        net rpc user -S SOMEDOMAIN -U Administrator
        Password:
        Administrator
        ASPNET
        Demo
        Guest
        IUSR_SERVER20
        IWAM_SERVER20
        krbtgt
        RemoteUser
        SUPPORT_388945a0

    This does not:

        wbinfo -u
        Error looking up domain users

    From the winbindd log:

        [2012/05/31 16:45:38, 1] nsswitch/winbindd_ads.c:ads_cached_connection(128)
          ads_connect for domain SOMEDOMAIN failed: Operations error
        [2012/05/31 16:46:38, 1] nsswitch/winbindd_util.c:trustdom_recv(230)
          Could not receive trustdoms

    ADS is not working with this domain. Why is winbind trying to use it instead of RPC? How can I force it to use only RPC and for all of this to work?
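
    A hedged guess at the cause: winbind chooses its backend from smb.conf, and "security = ads" makes it insist on ADS/Kerberos. A minimal sketch of an RPC-only configuration (workgroup name taken from the question):

        [global]
            workgroup = SOMEDOMAIN
            security = domain

    then restart winbindd and re-join with net rpc join -U Administrator. Whether downgrading to RPC is wise is a separate question; fixing the ads_connect "Operations error" (often DNS or Kerberos misconfiguration) may be the better path.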

    Read the article

  • How to set up WordPress high availability

    - by Ketam
    I have installed Galera Cluster on 3 cluster nodes + 1 management node. I wanted to make it like this:

        Server1: Home (www.domain.com)
        Server2: bbPress forum (the Forum tab forwards to forum.domain.com)
        Server3: BuddyPress activity (the Social tab forwards to social.domain.com)

    The purpose of doing this is to distribute the resources and load-balance them at the same time. However, I'm having difficulty setting up Apache load balancing / mod_proxy / clustering, or anything else suitable for a highly available WordPress. Any best suggestion/solution for (or how-to on) high-availability WordPress? And another question: I tried to copy the whole WordPress file tree to Server2, connecting to the local database (same data inside, since it is already in the Galera Cluster), but the page comes up blank. Any advice? OS: CentOS 6.2. Thanks in advance.
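
    On the blank page: WordPress keeps its canonical URL in the database, so the same database rendered from a second host often misbehaves unless the URL is pinned per server. A hedged wp-config.php sketch for the copy on Server2 (hostname taken from the question):

        define('WP_HOME',    'http://forum.domain.com');
        define('WP_SITEURL', 'http://forum.domain.com');
        // while chasing the blank page, surface the underlying PHP error:
        define('WP_DEBUG', true);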

    Read the article

  • AD Authentication fails in local machine but works from Production server

    - by jesu
    Hi, I am using AD authentication and facing two problems. Authentication works fine when I move the application to a production server, but FAILS on my LOCAL machine. Both the local machine and the server are in the same domain, and the same domain account is used for logging in. When the machine logs users in with a domain account, AD authentication from the application says that the account is not valid. Please suggest ways to find the problem and recover, if you can. Thanks in advance! Regards, jesu

    Read the article

  • Reverse proxy HTTP to Tomcat

    - by John Q
    I've configured an Apache server with SSL and a reverse proxy to a Tomcat:

        <VirtualHost domain.com:1443>
            [...]
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass / http://local.com:8080/
            ProxyPassReverse / http://local.com:8080
            SSLEngine on
            [...]
        </VirtualHost>

    Tomcat is listening on 8080. The issue is that the app on Tomcat redirects the request (HTTP 302 Moved Temporarily). For example, if I use the URL https://domain.com:1443/folder, the reverse proxy launches the request http://local.com:8080/folder; then the app redirects to "/subfolder", so the final request is http://domain.com:1443/folder/subfolder. The result is a 400 Bad Request error code, as the request is HTTP on my SSL port. Do you know how I can fix this issue? Thanks in advance.
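
    A commonly suggested remedy, sketched here without testing against this setup: tell Tomcat it lives behind an SSL proxy, so the redirects it generates carry the public scheme, host and port instead of its own. In server.xml, on the HTTP connector (these are the standard connector attributes):

        <Connector port="8080" protocol="HTTP/1.1"
                   proxyName="domain.com" proxyPort="1443"
                   scheme="https" secure="true" />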

    Read the article

  • Puppet exported resource naming

    - by Tim Brigham
    I am working on setting up a collection of Splunk nodes to be deployed by Puppet. One of the steps in this process is importing the trusts to allow these nodes to automatically find each other. I've looked over several options, and it appears that exported resources are the only ready way to make this work. The files I need to create live under /opt/splunk/etc/auth/distServerKeys/<nodename>/trusted.pem. The source for each of these files should be /opt/splunk/etc/auth/distServerKeys/trusted.pem, one per node. What syntax do I need to make this work? The samples I've looked at all appear to have the same source and destination file name.
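
    A hedged sketch of the usual pattern: exported resource titles must be unique across the whole site, so embed the exporting node's fqdn in the title, and ship the key material as content (for example via a custom fact), since a file source can't point at another node's local filesystem:

        # on each Splunk node: export this node's key under a per-node path
        @@file { "/opt/splunk/etc/auth/distServerKeys/${::fqdn}/trusted.pem":
          content => $splunk_trusted_pem,   # hypothetical fact carrying this node's trusted.pem
          tag     => 'splunk_trusted_key',
        }

        # on every Splunk node: collect everyone's exported keys
        File <<| tag == 'splunk_trusted_key' |>>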

    Read the article

  • Configuring Bind9 on Ubuntu

    - by Jerry
    I am trying to configure a name server on Ubuntu, just for learning. I have followed this tutorial. After configuring bind9 I restarted it, and it works well. I have no registered domain name and no public IP, so I used a random domain name (khalidiitdu.com) that is not registered. When I dig khalidiitdu.com, it shows status: NXDOMAIN. If I use the nslookup command, it shows ** server can't find khalidiitdu.com: NXDOMAIN. Now the question is: is a registered domain mandatory to configure bind9 within a LAN? If not, please suggest alternative ways. Thanks.
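
    Registration shouldn't matter inside a LAN; what matters is that the query actually reaches the bind9 box rather than an upstream resolver (which will rightly answer NXDOMAIN for an unregistered name). A quick hedged check, pointing the tools straight at the local server:

        dig @127.0.0.1 khalidiitdu.com
        nslookup khalidiitdu.com 127.0.0.1

    If these return the zone's records, the remaining step is to make clients use this server: list its address in /etc/resolv.conf, or hand it out via the LAN's DHCP options.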

    Read the article

  • Posting videos to a website... with a dynamically updating interface

    - by Johnny W
    I'm currently putting together a website for my Luddite friend. He needs a blog, photos and videos... all easily updateable by him. WordPress and Blogger both offer brilliant functionality to host your own blog on your own domain, but what about photos and videos? Do YouTube, Google Video, Vimeo, MSN Video or anyone else offer the ability to easily insert a video navigation interface onto your own site? (Something similar to YouTube's "Channel" pages, except on your own domain?) The same goes for photos. I'm pretty sure Flickr doesn't let you easily upload and manage photos for viewing on your own domain. Does anyone else? My friend doesn't want to be editing any HTML, and I don't want to be creating a complicated system to handle tons of different videos. There must be a user-friendly solution in this day and age? Thanks for any help!

    Read the article

  • Windows Filtering Platform not turning off until admin logon (Win2008 R2 SP1)

    - by rjt
    Just installed Windows Server 2008 R2 SP1 to see if it would fix this problem, but it didn't. Until an administrator logs onto the domain controller, there are many events saying that WFP blocked a connection from Server60 to Server60 or from Server60 to Server70. Both Server60 and Server70 are the domain controllers. Once the admin logs on, the WFP events stop. The firewall is off by default GPO. Yes, I know that WFP kicks in during the boot-up sequence until the firewall takes over - or, in my case, does not take over (since Vista) - but I clearly should not have to auto-logon to a domain controller and call auto-lock or something.

    Example event:

        Level   = Information
        Source  = Microsoft Windows Security Auditing
        EventID = 5152 "Filtering Platform Packet Drop"
        (and its evil twin, ID 5157 "Filtering Platform Connection")
        "The Windows Filtering Platform has blocked a connection."
        Direction        %%14593
        SourceAddress    192.168.10.60
        SourcePort       49677
        DestAddress      192.168.10.60
        DestPort         389
        Protocol         6
        FilterRTID       65667
        LayerName        %%14611
        LayerRTID        48
        RemoteUserID     S-1-0-0
        RemoteMachineID  S-1-0-0

    Tags: windows-server-2008-r2, WFP, BFE, WindowsFilteringPlatform, BaseFilteringEngine
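
    If the drops turn out to be audit noise rather than real blocking (the destination port 389 suggests LDAP to itself), the 5152/5157 events can be silenced per audit subcategory. A hedged sketch - this hides the symptom, it does not change any filtering:

        auditpol /set /subcategory:"Filtering Platform Packet Drop" /success:disable /failure:disable
        auditpol /set /subcategory:"Filtering Platform Connection" /success:disable /failure:disable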

    Read the article

  • hMailServer Email + MX Records Configuration

    - by asn187
    Trying to make DNS changes to enable email to be sent using hMailServer. My mail server is on a separate machine with a separate IP address. I have already added MyDomain.com and an email account. I have created an MX record with the mail server being mail.domain.com and a priority of 20.

    1) But the question is, how do I now link this MX record for the domain to my mail server / mail server IP address?
    2) What changes are needed in hMailServer to complete the process and be able to send emails for the domain?
    3) In Settings > SMTP > Delivery of e-mail, what should my configuration look like?
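
    On question 1: an MX record names a host, and a separate A record ties that host to the mail machine's address; the "link" is just that pair. A sketch in zone-file notation (203.0.113.10 is a placeholder for the mail server's real IP):

        domain.com.        IN  MX  20  mail.domain.com.
        mail.domain.com.   IN  A   203.0.113.10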

    Read the article

  • Plesk + high POP3/IMAP traffic: how to check details?

    - by Danzzz
    Please check this image: it's a screenshot from Plesk 10 of one domain's mail traffic. This domain has about 1 GB of POP3/IMAP (OUT) traffic each day. I know that this is not normal, because I know the owner and how he uses his mail: it's just a few mails each day. Is OUT, in this case, incoming mail to the domain owner? Or is it outgoing IMAP traffic? I have about 30 domains on this machine, but this one is the only one generating so much traffic. How can I check what's happening there? Can I check which email address generates most of this traffic? Can I try switching off IMAP for just this domain?
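
    A hedged place to start digging, assuming the stock Plesk log location: the mail log records each POP3/IMAP session with the login name, so per-mailbox totals can be pulled out of it:

        grep -i 'imap\|pop3' /usr/local/psa/var/log/maillog | less

    Courier-IMAP (Plesk's usual IMAP server) logs bytes transferred at each LOGOUT line, which should identify the heavy account.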

    Read the article

  • Bash: Quotes getting stripped when a command is passed as argument to a function

    - by Shoaibi
    I am trying to implement a dry-run mechanism for my script, and I'm facing the issue of quotes getting stripped off when a command is passed as an argument to a function, resulting in unexpected behavior.

        dry_run () {
            echo "$@"
            #printf '%q ' "$@"
            if [ "$DRY_RUN" ]; then
                return 0
            fi
            "$@"
        }

        email_admin() {
            echo " Emailing admin"
            dry_run su - $target_username -c "cd $GIT_WORK_TREE && git log -1 -p|mail -s '$mail_subject' $admin_email"
            echo " Emailed"
        }

    Output is:

        su - webuser1 -c cd /home/webuser1/public_html && git log -1 -p|mail -s 'Git deployment on webuser1' [email protected]

    Expected:

        su - webuser1 -c "cd /home/webuser1/public_html && git log -1 -p|mail -s 'Git deployment on webuser1' [email protected]"

    With printf enabled instead of echo:

        su - webuser1 -c cd\ /home/webuser1/public_html\ \&\&\ git\ log\ -1\ -p\|mail\ -s\ \'Git\ deployment\ on\ webuser1\'\ [email protected]

    Result:

        su: invalid option -- 1

    That shouldn't be the case if the quotes remained where they were inserted. I have also tried using eval, without much difference. If I remove the dry_run call in email_admin and run the script, it works great.
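
    For context, a sketch of why the echoed line looks unquoted: the shell removes the quotes while parsing email_admin, before dry_run ever sees the arguments - yet the whole -c payload still arrives as a single word. A throwaway function demonstrates this:

        show_args() {
            # print each argument on its own line, wrapped in angle brackets
            for a in "$@"; do
                printf '<%s>\n' "$a"
            done
        }

        show_args su - webuser1 -c "cd /tmp && ls -l"
        # <su>
        # <->
        # <webuser1>
        # <-c>
        # <cd /tmp && ls -l>

    So executing "$@" inside dry_run preserves the word boundaries even though no quotes are visible, and printf '%q ' "$@" merely re-adds escaping for display purposes; the escaped output is meant for copy-pasting back into a shell, not as proof of what su receives.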

    Read the article
