Search Results

Search found 43800 results on 1752 pages for 'drupal domain access'.

Page 201/1752 | < Previous Page | 197 198 199 200 201 202 203 204 205 206 207 208  | Next Page >

  • Allow access from outside network with dmz and iptables

    - by Ivan
    I'm having a problem with my home network. My setup is like this: on my router (Ubuntu desktop 11.04) I installed Squid as a transparent proxy. I would like to use DynDNS with my home network so I can reach my server from the internet, and I also installed a CCTV camera that I would like to be able to watch from outside. The problem is that I cannot access either of them from outside the network. I have already set the DMZ on my modem to my router's IP. My first guess is that it's because I'm using iptables to redirect the whole inside network through Squid, and I don't allow any traffic from outside into my inside network. Here is my iptables script:

        #!/bin/sh
        # squid server IP
        SQUID_SERVER="192.168.5.1"
        # Interface connected to Internet
        INTERNET="eth0"
        # Interface connected to LAN
        LAN_IN="eth1"
        # Squid port
        SQUID_PORT="3128"
        # Clean old firewall
        iptables -F
        iptables -X
        iptables -t nat -F
        iptables -t nat -X
        iptables -t mangle -F
        iptables -t mangle -X
        # Load IPTABLES modules for NAT and IP conntrack support
        modprobe ip_conntrack
        modprobe ip_conntrack_ftp
        # For win xp ftp client
        #modprobe ip_nat_ftp
        echo 1 > /proc/sys/net/ipv4/ip_forward
        # Setting default filter policy
        iptables -P INPUT DROP
        iptables -P OUTPUT ACCEPT
        # Unlimited access to loop back
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT
        # Allow UDP, DNS and Passive FTP
        iptables -A INPUT -i $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT
        # set this system as a router for Rest of LAN
        iptables --table nat --append POSTROUTING --out-interface $INTERNET -j MASQUERADE
        iptables --append FORWARD --in-interface $LAN_IN -j ACCEPT
        # unlimited access to LAN
        iptables -A INPUT -i $LAN_IN -j ACCEPT
        iptables -A OUTPUT -o $LAN_IN -j ACCEPT
        # DNAT port 80 requests coming from LAN systems to squid 3128 ($SQUID_PORT) aka transparent proxy
        iptables -t nat -A PREROUTING -i $LAN_IN -p tcp --dport 80 -j DNAT --to $SQUID_SERVER:$SQUID_PORT
        # if it is the same system
        iptables -t nat -A PREROUTING -i $INTERNET -p tcp --dport 80 -j REDIRECT --to-port $SQUID_PORT
        # DROP everything and Log it
        iptables -A INPUT -j LOG
        iptables -A INPUT -j DROP

    If you know what I missed, please advise me. Thanks for all your help, I really appreciate it.
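    The script above only accepts established traffic from $INTERNET, forwards nothing new from the internet towards the LAN, and ends by dropping everything, so inbound connections die at the router no matter what the modem's DMZ points at. Below is a minimal sketch of the kind of inbound rules that would expose the camera; the camera address 192.168.5.20 and port 8000 are placeholders for illustration, not values from the post.

        # forward an outside port to the camera and allow the forwarded traffic through
        iptables -t nat -A PREROUTING -i $INTERNET -p tcp --dport 8000 \
            -j DNAT --to-destination 192.168.5.20:8000
        iptables -A FORWARD -i $INTERNET -o $LAN_IN -p tcp -d 192.168.5.20 --dport 8000 \
            -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
        # let replies from the LAN back out
        iptables -A FORWARD -i $LAN_IN -o $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT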

    Read the article

  • Nginx multiple upstream servers on the same domain via different URLs

    - by Barry
    Hello. I am trying to route traffic to different upstream servers (they serve different applications; this is not for load balancing). The incoming traffic has the same domain name but different URLs. Here is an example of my configuration:

        http {
            upstream backend1 {
                server 127.0.0.1:8080 fail_timeout=0;
                server 127.0.0.1:8081 fail_timeout=0;
            }
            upstream backend2 {
                server 127.0.0.1:8090 fail_timeout=0;
                server 127.0.0.1:8091 fail_timeout=0;
            }
            server {
                listen 80;
                server_name my_server.com;
                root /home/my_server;
                location /serve_me {
                    fastcgi_pass backend1;
                    include fastcgi_params;
                }
                location / {
                    fastcgi_pass backend2;
                    include fastcgi_params;
                }
            }
        }

    It seems that whatever traffic comes in (including "my_server.com/serve_me") goes to backend2. How do I make requests that start with /serve_me go to backend1? Thanks, Barry.
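    As written, nginx's longest-prefix matching should already send /serve_me to backend1, so the behaviour suggests something else is interfering (a regex location in another include, a stale reload, or the application doing its own routing). A sketch of how one might pin the match down and see which upstream actually answers, using standard nginx directives and an assumed log path:

        location ^~ /serve_me {        # ^~ stops any regex location from overriding this prefix
            fastcgi_pass backend1;
            include fastcgi_params;
        }
        # in the http{} block: record the upstream that served each request
        log_format upstreamlog '$remote_addr -> $upstream_addr "$request"';
        access_log /var/log/nginx/upstream.log upstreamlog;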

    Read the article

  • postfix (for sending mail only) multiple domain setup

    - by seanl
    I have the following problem: I have a CentOS 5.4 VPS hosting a few nginx sites (some static, some CakePHP). I would like to be able to send email from each site's contact page through Postfix to my Google Apps hosted email (a different account for each site), so that Apps can then send an automatic reply to the person filling in the contact form. I have a bare-bones Postfix installation with the following added to the main.cf config file, following a guide:

        virtual_alias_domains = hash:/etc/postfix/virtual_alias_domains
        virtual_alias_maps = hash:/etc/postfix/virtual_alias_maps

    (both of these files have been converted into .db files using postmap). I have configured DNS correctly for each site and set up SPF records. (I'm aware reverse DNS will still reference my actual hostname, not the domain name, and may cause a spam issue, but one thing at a time.) I can telnet to localhost, HELO localhost, and send a command-line email from an address in virtual_alias_domains to an address in the virtual_alias_maps file; it seems to send without giving an error, but it is delivered to my local Linux account, not to the email address specified. My question is: am I approaching this the wrong way in terms of the virtual alias mapping, or is this even possible in the manner I'm trying? Any help is greatly appreciated, thanks. My postconf -n output looks like this:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        myhostname = myactual hostname
        mynetworks = 127.0.0.0/8
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550
        virtual_alias_domains = hash:/etc/postfix/virtual_alias_domains
        virtual_alias_maps = hash:/etc/postfix/virtual_alias_maps
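    A rough sketch of what the two map files are normally expected to contain for this kind of setup; every address below is a placeholder, not something taken from the post. Delivery into a local account usually means the recipient address matched a local or mydestination domain before the virtual alias lookup rewrote it, so it is worth rebuilding the maps and testing the lookup directly with postmap -q.

        # /etc/postfix/virtual_alias_domains  (domains Postfix should treat as virtual)
        site1.example      OK
        site2.example      OK

        # /etc/postfix/virtual_alias_maps     (full recipient address -> external mailbox)
        contact@site1.example    yourname@site1-apps.example
        contact@site2.example    yourname@site2-apps.example

        # rebuild and test the lookups, then reload
        postmap /etc/postfix/virtual_alias_domains /etc/postfix/virtual_alias_maps
        postmap -q contact@site1.example hash:/etc/postfix/virtual_alias_maps
        postfix reload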

    Read the article

  • Getting a "403 access denied" error instead of serving file (using django, gunicorn nginx)

    - by Finglish
    Getting a "403 access denied" error instead of serving file (using django, gunicorn nginx) I am attempting to use nginx to serve private files from django. For X-Access-Redirect settings I followed the following guide http://www.chicagodjango.com/blog/permission-based-file-serving/ Here is my site config file (/etc/nginx/site-available/sitename): server { listen 80; listen 443 default_server ssl; server_name localhost; client_max_body_size 50M; ssl_certificate /home/user/site.crt; ssl_certificate_key /home/user/site.key; access_log /home/user/nginx/access.log; error_log /home/user/nginx/error.log; location / { access_log /home/user/gunicorn/access.log; error_log /home/user/gunicorn/error.log; alias /path_to/app; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_pass http://127.0.0.1:8000; proxy_connect_timeout 100s; proxy_send_timeout 100s; proxy_read_timeout 100s; } location /protected/ { internal; alias /home/user/protected; } } I then tried using the following in my django view to test the download: response = HttpResponse() response['Content-Type'] = "application/zip" response['X-Accel-Redirect'] = '/protected/test.zip' return response but instead of the file download I get: 403 Forbidden nginx/1.1.19 Please note: I have removed all the personal data from the the config file, so if there are any obvious mistakes not related to my error that is probably why. My nginx error log gives me the following: 2012/09/18 13:44:36 [error] 23705#0: *44 directory index of "/home/user/protected/" is forbidden, client: 80.221.147.225, server: localhost, request: "GET /icbdazzled/tmpdir/ HTTP/1.1", host: "www.icb.fi"

    Read the article

  • Domain to apache, subdomain or subdirectory to tomcat

    - by hofmeister
    I set up Apache 2.2 and Tomcat 7 on a Windows server. Now I would like to use the domain for Apache and a subdomain or a subdirectory for the Tomcat webapps, but I don't know how to configure httpd.conf. At the moment it looks like this:

        <IfModule !mod_jk.c>
            LoadModule jk_module modules/mod_jk.so
        </IfModule>

        <IfModule mod_jk.c>
            JkWorkersFile conf/workers.jetty.properties
            JkLogFile logs/mod_jk.log
            JkLogLevel info
            JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
            JkOptions +ForwardKeySize +ForwardURICompat
        </IfModule>

        <VirtualHost servername:*>
            ServerName servername
            ServerAdmin [email protected]
            JkMount /* jetty
        </VirtualHost>

    My idea was to change the VirtualHost to sub.servername:*, but this doesn't work. How can I use a subdomain or a directory for the webapps? At the moment every request is directed to Tomcat. Tomcat runs on port 8081. Maybe I need to edit Tomcat's server.xml? It would be awesome if someone could help me. Greetz.
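    A minimal sketch of one way to split this, assuming name-based virtual hosting on port 80 and keeping the existing "jetty" worker; the hostnames and DocumentRoot are placeholders, and DNS for the subdomain has to exist before Apache can match it.

        NameVirtualHost *:80

        # main domain: plain Apache content, no JkMount, so nothing goes to Tomcat
        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot "C:/Apache2.2/htdocs"
        </VirtualHost>

        # subdomain: everything handed to Tomcat through mod_jk
        <VirtualHost *:80>
            ServerName apps.example.com
            JkMount /* jetty
        </VirtualHost>

    Mounting only a subdirectory instead would mean keeping a single vhost and using something like "JkMount /apps/* jetty" so the rest of the URL space stays with Apache.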

    Read the article

  • How do I increase the buffer size for domain sockets in OS X 10.6

    - by Chas. Owens
    In Linux I have no problem dumping tons of data into a domain socket, but the same code on OS X 10.6.2 blows up after about 65 records. The socket reader code looks like:

        #!/usr/bin/perl

        use strict;
        use warnings;
        use IO::Socket;

        unlink "foo";

        my $sock = IO::Socket::UNIX->new(
            Local   => 'foo',
            Type    => SOCK_DGRAM,
            Timeout => 600,
        ) or die "Could not create socket: $!\n";

        while (<$sock>) {
            chomp;
            print "[$_]\n";
        }

    And the client code looks like:

        #!/usr/bin/perl

        use strict;
        use warnings;
        use IO::Socket;

        my $sock = IO::Socket::UNIX->new(
            Peer    => 'foo',
            Type    => SOCK_DGRAM,
            Timeout => 600,
        ) or die "Could not create socket: $!\n";

        for my $i (1 .. 1_000_000) {
            print $sock "$i\n" or die $!;
        }

        close $sock;

    The error message I get is "No buffer space available at write.pl line 15." It seems fairly obvious that there is a difference in buffer size between Linux and OS X, but I don't know how to set it on OS X (or what the possible negative side effects might be).
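    Two knobs that may be worth trying, offered as assumptions to test rather than a known fix: raising the per-socket send buffer from the client, and raising the BSD-style sysctls that OS X uses for local datagram sockets (net.local.dgram.maxdgram and net.local.dgram.recvspace, adjustable with sysctl -w; the exact limits that apply on 10.6 should be verified).

        # per-socket, in the client script, after the socket is created
        use Socket qw(SOL_SOCKET SO_SNDBUF);
        setsockopt($sock, SOL_SOCKET, SO_SNDBUF, pack("l", 262144))
            or warn "setsockopt(SO_SNDBUF) failed: $!";

    Since the client dies rather than blocks, another option is simply to retry the write after a short sleep whenever $! is ENOBUFS.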

    Read the article

  • Typical outbound port list for guest access?

    - by Steve
    I manage a weekly rental house that includes wireless Internet access. I've allowed all outbound ports on my router, but my ISP has disabled my Internet access twice now because guests have downloaded (or served up) copyrighted content. So I'd like to institute some port filtering to discourage P2P sharing (see the disclaimer below), but I don't want to inconvenience the 99.9% of folks who keep things above-board.

    My question is: what outbound ports are typically open for rental/hotel wireless Internet access, or where can I find such a list? TCP 80, 443, 25 and 110 at a minimum. My own email service uses 995 and 465 for SSL, some guests may use IMAP, and I personally use SSH and FTP, so I'll open those too. Roughly, I figure I need to open access to the privileged ports and close 1024 and above. Is there a whitelist I should institute for commonly used high ports? And does it make sense to block UDP 1024 and above as well?

    Disclaimer: I realize anyone replying to this message could circumvent the port filtering and share content to their heart's content. I do not need comprehensive P2P blocking, which requires more than a port whitelist. Anyone staying at the house shoulders the responsibility for their Internet use, per the rental contract. Also, anyone savvy enough to circumvent the port filters would hopefully be savvy enough to use some sort of peer blocking, thereby preventing the ISP from taking down the service.
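    A sketch of what a whitelist-style guest policy often looks like on a Linux-based router, purely as an illustration; the interface name br-guest and the exact port list are assumptions to adapt, not a recommendation for any particular router firmware.

        # allow web, mail submission/retrieval, DNS, SSH and FTP out from the guest LAN
        iptables -A FORWARD -i br-guest -p tcp -m multiport \
            --dports 21,22,25,53,80,110,143,443,465,587,993,995 -j ACCEPT
        iptables -A FORWARD -i br-guest -p udp --dport 53 -j ACCEPT
        # drop everything else, which catches most P2P ports by default
        iptables -A FORWARD -i br-guest -j DROP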

    Read the article

  • Cannot access server shares over VPN

    - by DuncanDavies
    I've set up a single hosted server to use as a development environment for a web-based application. The web app is served up fine on port 80; however, I'm struggling to get my VPN to behave how I'd expect, so the developers don't have the access they require.

    The VPN connects fine and I can access the back-end database (SQL Server), which resides on the server, with the client tools from the laptops. However, they cannot access any shared folders. The server's local IP address is 10.x.x.x, and I've assigned a static IP address pool to RRAS (192.168.100.1 - 20). The clients pick up a valid IP address (i.e. 192.168.100.9) when they connect. There is no name resolution set up, no DNS or WINS. When connected via VPN the clients can ping the server (192.168.100.1) by IP address, but cannot map a drive to a shared folder (net use * \\192.168.100.1\xxxxx) - I get "System error 53 has occurred. The network path was not found." I don't understand why I can ping by the IP but not map by it.

    Some details:
      - Server OS is Windows 2008 (Datacenter)
      - VPN is SSTP using RRAS
      - Clients are all Windows 7
      - I've tried temporarily disabling the firewalls

    So, why can we not access the file system when everything else (ping, RDP, SQL Server client tools) works? Thanks for your help. Duncan
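    A few hedged checks rather than a confirmed diagnosis: error 53 is a path/SMB reachability failure rather than a credentials one, so it is worth confirming that TCP 445 actually reaches the server over the VPN address and that File and Printer Sharing is bound to the RRAS internal interface. From a connected client (SERVERNAME and username below are placeholders, and the telnet client has to be installed on Windows 7):

        ping 192.168.100.1
        telnet 192.168.100.1 445
        net use * \\192.168.100.1\xxxxx /user:SERVERNAME\username

    If the connection to 445 fails while ping works, the blocker is usually the server-side firewall profile applied to the RRAS/VPN interface rather than the share permissions.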

    Read the article

  • How to disable mod_security2 rule (false positive) for one domain on centos 5

    - by nicholas.alipaz
    Hi, I have mod_security enabled on a CentOS 5 server and one of the rules is keeping a user from posting some text on a form. The text is legitimate, but it contains the word 'create' and an HTML <table> tag later on, so it is causing a false positive. The error I am receiving is below:

        [Sun Apr 25 20:36:53 2010] [error] [client 76.171.171.xxx] ModSecurity: Access denied with code 500 (phase 2). Pattern match "((alter|create|drop)[[:space:]]+(column|database|procedure|table)|delete[[:space:]]+from|update.+set.+=)" at ARGS:body. [file "/usr/local/apache/conf/modsec2.user.conf"] [line "352"] [id "300015"] [rev "1"] [msg "Generic SQL injection protection"] [severity "CRITICAL"] [hostname "www.mysite.com"] [uri "/node/181/edit"] [unique_id "@TaVDEWnlusAABQv9@oAAAAD"]

    and here is /usr/local/apache/conf/modsec2.user.conf (line 352):

        #Generic SQL sigs
        SecRule ARGS "((alter|create|drop)[[:space:]]+(column|database|procedure|table)|delete[[:space:]]+from|update.+set.+=)" "id:1,rev:1,severity:2,msg:'Generic SQL injection protection'"

    The questions I have are: What should I do to "whitelist" the request or allow it past this rule? What file do I create, and where? How should I alter the rule? Can I set the exception to apply only to the one domain, since it is the only one having the issue on this dedicated server, or is there a better way to exclude table tags perhaps? Thanks guys.
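    One common pattern for this, sketched under the assumption of mod_security 2.x and the rule id 300015 shown in the log: scope the exception to the affected virtual host or path instead of editing the shared rule file.

        # inside the <VirtualHost> for www.mysite.com, or a per-vhost include
        <LocationMatch "^/node/[0-9]+/edit">
            SecRuleRemoveById 300015
        </LocationMatch>

    Later 2.x releases also offer SecRuleUpdateTargetById, which can exclude just the one argument (e.g. ARGS:body) from the rule, a narrower change than removing it outright.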

    Read the article

  • Need to have access to my office PC from my laptop hopping through two VPN servers

    - by Andriy Yurchuk
    Here's an illustration of what I have (http://clip2net.com/s/2fvar):

      1. My office PC, with its IP of 123.45.e.f.
      2. The office VPN, which I connect to from my VPS to get to my office PC.
      3. My own VPS, which I use as:
         - a client to connect to the office VPN (through vpnc, which creates tun0 with the IP address 123.45.c.d);
         - a VPN server my laptop can connect to (OpenVPN, tun1, 10.8.0.1).
      4. My own laptop, used as a VPN client to connect to the VPS OpenVPN server (this creates tun0 with the IP address 10.8.0.2).

    Now what I need is to allow my laptop to connect to at least my office PC, but preferably to the whole 123.45.x.x subnet. Please advise on how best to configure OpenVPN, routing, iptables or whatever else is needed on my VPS so that my laptop can reach my office PC.

    P.S. The reason I'm hopping through my VPS is that while connected to the office WiFi I cannot access my office PC and I cannot connect to the office VPN (which is the other way to access my office PC). The only way to reach my PC from the office WiFi is to hop through an outside network.
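    A minimal sketch of the usual recipe for this kind of hop, with the office subnet written as 123.45.0.0/16 purely as a placeholder for whatever range actually applies: push the office route to the laptop, enable forwarding on the VPS, and NAT the laptop's traffic out through the vpnc tunnel so the office side does not need a return route.

        # /etc/openvpn/server.conf on the VPS (assumed path)
        push "route 123.45.0.0 255.255.0.0"

        # shell on the VPS
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o tun0 -j MASQUERADE

    Here tun0 is the vpnc interface as described in the post; if the interface numbering differs on the VPS, the -o argument has to follow it.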

    Read the article

  • Windows Server - share files without access for administrator

    - by Pawel
    We have a Microsoft Windows Server 2008 R2 based server that is administered by our IT department. We would like to achieve two things simultaneously:

      1. A folder on the server, containing several thousand files (with new files added frequently), that is accessible to some Active Directory users (e.g. the board of directors) but is not accessible to IT department employees.
      2. IT department employees still retain the rights to administer the server, including installing new software and services.

    We have already checked some solutions:

      - Using NTFS access rights. Unfortunately IT (members of the "Administrators" group) can set themselves as new owners of the files and change the permissions so that they gain access to the files.
      - Enabling EFS. Unfortunately, even if you do not allow IT to access files, they can still disable EFS completely because they have administrative rights. Moreover, as far as I know you have to manually add permissions for all users but the owner for each new file - very inconvenient.
      - Creating a new role for the IT department that has all the privileges apart from taking ownership of files. Unfortunately, if you're not a member of the Administrators group, you cannot install new software, no matter what privileges you add to the role.
      - TrueCrypt - nice free encryption software, but with poor sharing capabilities. You can either mount an encrypted container on the server (and then IT has access to its contents) or mount it locally, but only one user can mount it for writing.
      - AxCrypt - free encryption software that enables file-by-file encryption on the server. There are some disadvantages though: you have to manually encrypt each new file added, the files have their extensions changed, and you can only set one password for all files (so all users have to know this one password).

    Any other ideas? Our budget is limited, so enterprise-class software from Symantec or PGP would probably not be an option.

    Read the article

  • I cannot access Windows Update at all

    - by Cardinal fang
    I have been unable to access the Windows Update site for a couple of weeks now. I just get a message saying "Internet Explorer cannot display the webpage" and telling me I have connection problems. The same thing happens with any other Microsoft site I try to access, and Automatic Updates also do not work. I can access every other website I've surfed to.

    I've tried Googling the problem, and based on what other sites have suggested I have cleared my cache and temp files. I've scanned my hard drive with my antivirus in case I have a virus (nada). I've tried turning off my firewall and anti-virus (I run ZoneAlarm). I've downloaded SpyBot and scanned my drive with that in case something was missed by ZoneAlarm (again nada). Based on suggestions from the smart cookies on the Bad Science forum, I've used nslookup to check my name resolution isn't wonky (I got all the info they said I should get). I've also tried navigating there directly using the IP address I was given (nope). I normally access the internet through a 3 mobile broadband connection, but have also tried connecting over a mate's Wi-Fi connection in case something on my mobile modem was interfering.

    I run Windows XP SP3 with Internet Explorer 7 and ZoneAlarm Internet Security Suite as my anti-virus/firewall. Any suggestions?
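    A few hedged things to check, typical for "only Microsoft sites fail" symptoms rather than a confirmed diagnosis: malware frequently blocks update servers through the hosts file or a broken Winsock catalog, and both survive antivirus removal.

        rem inspect the hosts file for entries pointing microsoft.com or windowsupdate.com anywhere odd
        notepad %SystemRoot%\system32\drivers\etc\hosts

        rem flush cached DNS and reset the Winsock catalog (XP SP2+), then reboot
        ipconfig /flushdns
        netsh winsock reset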

    Read the article

  • Apache2 Re-Routing from Domain Name to Internal IP Address

    - by Richard Grey
    The problem I am having is that when someone goes to my domain name, example.co.uk, Apache seems to be re-routing the request to the internal IP address of the server, i.e. 192.168.0.52. My Apache2 default sites-enabled file is as follows:

        ServerAdmin [email protected]
        ServerName trusteeguard.co.uk
        ServerAlias www.trusteeguard.co.uk
        DocumentRoot /var/www
        <Directory />
            Options FollowSymLinks
            AllowOverride All
        </Directory>
        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride All
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        ErrorLog /var/log/apache2/trusteeguard-error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog /var/log/apache2/trusteeguard-access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>

    This is an Ubuntu box, if that is any help ;)
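    A hedged observation rather than a confirmed fix: redirects that expose a LAN address usually come from Apache building an absolute URL from its own idea of the host (for example mod_dir adding a trailing slash) or from the NAT device in front of it, not from the vhost body above. Two things commonly checked:

        # make sure the public name is what Apache uses when it has to build a self-referential redirect
        ServerName trusteeguard.co.uk
        UseCanonicalName Off

    and, on the router, confirm that port 80 is actually forwarded to 192.168.0.52 rather than the browser being redirected to that address.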

    Read the article

  • New SBS 2011 installation (not migration) in an existing 2008 R2 domain

    - by Tong Wang
    My current network setup has two servers: a Windows 2008 R2 machine with TMG 2010 as the edge firewall (TMG server), and a second 2008 R2 machine with the DC, DNS and Hyper-V roles (DCDNS server).

    I was trying to install SBS 2011 as a child partition on DCDNS. First I installed SBS 2011 in English and completed the migration successfully. However, I later found that the display language in SBS 2011 cannot be changed once it's installed (and the clients require a different language), so I had to re-install SBS in a different language. It is during the re-installation that the problem came up: the migration can't be completed, with an error message stating "can't access the source server". I re-ran the migration preparation tool, but it didn't make any difference. I wonder if it's because the source server can only be "migrated" once.

    Since I only need to set up a handful of users and computers, I decided to do a new install of SBS and picked a different domain name. But I can't get the SBS server to connect to the LAN: it can't ping the other servers, and the other servers can't ping the SBS server. I've tried stopping the DC/DNS services on DCDNS and restarting SBS, with no difference. Does anyone have an idea how to fix this problem?

    Read the article

  • Apache Mod SVN Access Forbidden

    - by Cerin
    How do you resolve the error "svn: access to '/repos/!svn/vcc/default' forbidden"?

    I recently upgraded a Fedora 13 server to 16, and now I'm trying to debug an access error with a Subversion server running under Apache with mod_dav_svn. Running:

        svn ls http://myserver/repos/myproject/trunk

    lists the correct files. But when I go to commit, I get the error:

        svn: access to '/repos/!svn/vcc/default' forbidden

    My Apache virtual host for svn is:

        <VirtualHost *:80>
            ServerName svn.mydomain.com
            ServerAlias svn
            DocumentRoot "/var/www/html"
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory "/var/www/html">
                Options Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <Location /repos>
                Order allow,deny
                Allow from all

                DAV svn
                SVNPath /var/svn/repos
                SVNAutoversioning On

                # Authenticate with Kerberos
                AuthType Kerberos
                AuthName "Subversion Repository"
                KrbAuthRealms mydomain.com
                Krb5KeyTab /etc/httpd/conf/krb5.HTTP.keytab

                # Get people from LDAP
                AuthLDAPUrl ldap://ldap.mydomain.com/ou=people,dc=mydomain,dc=corp?uid

                # For any operations other than these, require an authenticated user.
                <LimitExcept GET PROPFIND OPTIONS REPORT>
                    Require valid-user
                </LimitExcept>
            </Location>
        </VirtualHost>

    What's causing this error?

    EDIT: In my /var/log/httpd/error_log I'm seeing a lot of these:

        [Fri Jun 22 13:22:51 2012] [error] [client 10.157.10.144] ModSecurity: Warning. Operator LT matched 20 at TX:inbound_anomaly_score. [file "/etc/httpd/modsecurity.d/base_rules/modsecurity_crs_60_correlation.conf"] [line "31"] [msg "Inbound Anomaly Score (Total Inbound Score: 15, SQLi=, XSS=): Method is not allowed by policy"] [hostname "svn.mydomain.com"] [uri "/repos/!svn/act/0510a2b7-9bbe-4f8c-b928-406f6ac38ff2"] [unique_id "T@Sp638DCAEBBCyGfioAAABK"]

    I'm not entirely sure how to read this, but I interpret "Method is not allowed by policy" as meaning that some Apache security module might be blocking access. How do I change this?
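    A sketch of the usual workaround when mod_security's core rule set blocks WebDAV/DeltaV verbs, offered as one option rather than the definitive fix: either switch the engine off for the repository location only, or extend the allowed-methods list in the CRS configuration.

        # narrowest hammer: disable mod_security just for the SVN location
        <Location /repos>
            SecRuleEngine Off
        </Location>

    The broader alternative is adding the SVN methods (PROPFIND, REPORT, MERGE, MKACTIVITY, CHECKOUT, MKCOL, COPY, MOVE, DELETE, LOCK, UNLOCK) to the tx.allowed_methods setting in the CRS setup file, which keeps the rest of the rules active for that vhost.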

    Read the article

  • Server 2008 R2 domain windows update strategy

    - by Joost Verdaasdonk
    Let me explain my question a bit. We are a small company that has just made the first move to a bigger network. For now the network consists of 5 Windows Server 2008 R2 machines (DC, SQL, web, etc.). Everything we need is in place, but we cannot yet afford to finish the network by implementing redundant systems (secondary DC, DNS, SQL cluster, etc.). For some people this is hard to understand, but this is the current situation (we are aware of it and will fix it when we can).

    Because we want to keep our systems secure and up to date, I've made sure that all systems are updated regularly. The problem, of course, is that the number of updates Microsoft rolls out that need a system reboot seems to be growing (maybe I'm wrong and it just feels like this). ;-) In our domain, servers depend on each other for services (like SQL, web, or whatever), so just rebooting a server at will is NOT a good idea! For now I update all of them without rebooting. Once all are up to date, I bring them down in the order in which they depend on each other, and then reboot them all in the inverse order. I understand, of course, that if I DID have redundancy, updating and rebooting would not be such a problem because a server's tasks could be taken over by another node, but that is something we can only add later.

    So my question is: given the situation above, can you suggest update strategies or general ideas that could help me do this process in a better and faster way? Thanks for your thoughts!

    Read the article

  • Linux kernel - can't access sda16 & sda17

    - by osgx
    I can't access the sda16, sda17 and higher partitions from my Linux system. The distribution is a rather old Debian with kernel 2.6.23. I know that such an old Linux kernel can't access more than 15 partitions on a single SATA disk. What kernel version should I use to be able to access sda16, sda17, etc.? I want to update only the kernel, not the whole Linux distribution.

    PS. A Windows NT kernel can access and format the 16th, 17th or higher partition, but my intention is to use sda16 and sda17 from Linux (I want a Linux kernel).

    PPS. dmesg:

        sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sda: sda1 sda2 sda3 sda4 < sda5 sda6 sda7 sda8 sda9 sda10 sda11 sda12 sda13 sda14 sda15 >
        sd 2:0:0:0: [sda] Attached SCSI disk
        sd 4:0:0:0: [sdb] xxx 512-byte hardware sectors
        ...

    So there is no mapping of sda16, sda17, ... onto sdb; sdb is the second physical hard drive.
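    A hedged pointer rather than a guaranteed answer: SCSI/SATA disks were historically limited to 15 partitions per disk (16 minor numbers including the whole-disk node), and that limit was lifted with the extended block-device numbering introduced around kernel 2.6.28, so any reasonably recent kernel should expose sda16 and up. On a candidate kernel the limit can be checked with something like the following (ext_range and partprobe availability depend on the kernel and parted/util-linux versions installed, so treat this as an assumption to verify):

        # how many partitions the running kernel will expose for this disk
        cat /sys/block/sda/ext_range
        # re-read the partition table and list what appeared
        partprobe /dev/sda && ls /dev/sda*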

    Read the article

  • Tracking a subdomain separately within the main domain account [closed]

    - by Vinay
    I have a website, for example xyz.com, and a subdomain, info.xyz.com. I created a profile for xyz.com and tracking works fine. I then added a new profile for info.xyz.com inside the xyz.com account, but the Analytics tracking for info.xyz.com shows traffic from both xyz.com and info.xyz.com. How do I change this so that only info.xyz.com traffic shows in the info.xyz.com profile? I used the following code.

    Analytics code for the xyz.com domain:

        <script type="text/javascript">
            var _gaq = _gaq || [];
            _gaq.push(['_setAccount', 'UA-xxxxxx-x']);
            _gaq.push(['_trackPageview']);

            (function() {
                var ga = document.createElement('script');
                ga.type = 'text/javascript';
                ga.async = true;
                ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
                var s = document.getElementsByTagName('script')[0];
                s.parentNode.insertBefore(ga, s);
            })();
        </script>

    Analytics code for info.xyz.com:

        <script type="text/javascript">
            var _gaq = _gaq || [];
            _gaq.push(['_setAccount', 'UA-xxxxxx-x']);
            _gaq.push(['_setDomainName', 'xyz.com']);
            _gaq.push(['_trackPageview']);

            (function() {
                var ga = document.createElement('script');
                ga.type = 'text/javascript';
                ga.async = true;
                ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
                var s = document.getElementsByTagName('script')[0];
                s.parentNode.insertBefore(ga, s);
            })();
        </script>
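    One approach worth noting, described here as the common pattern for classic ga.js rather than a verified fix for this account: because both sites report into the same property, the per-profile separation is normally done with a profile filter in the Analytics admin (an "Include only traffic to the hostname" filter set to info.xyz.com on the info profile) rather than with page code. If cross-subdomain sessions are also wanted, the usual companion step is setting the domain name consistently on both sites, sketched below with the placeholder property ID kept from the question:

        // assumed snippet for both xyz.com and info.xyz.com pages, alongside the
        // hostname filter configured on the info.xyz.com profile in the admin UI
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-xxxxxx-x']);
        _gaq.push(['_setDomainName', '.xyz.com']);
        _gaq.push(['_trackPageview']);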

    Read the article

  • "Access is Denied" when executing application from Command Prompt

    - by xpda
    Today when I tried to run an old DOS utility from the XP command prompt, I got the message "Access is Denied." I then found that most of the DOS utilities would not run, even though I have "Full Control" over them. They worked just fine a few weeks ago, and I have not made any OS changes other than Windows Updates.

    I then tried running edlin.exe and edit.com from the Windows\system32 folder. Same result: "Access is Denied." I tried running these applications from Windows Explorer and got the message "Windows cannot access the specified device, path, or file. You may not have the appropriate permissions to access the item."

    I am logged in as a member of Administrators and have full control over these files. I tried logging in as THE Administrator, with no change. I checked the security settings on the files, and I have full control over all of them. I have tried copying the files to different drives, booting in safe mode, and running without antivirus and firewall, all with no change. Does anybody know what could cause this?
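    A couple of hedged avenues to check, since the NTFS permissions look fine: on XP, 16-bit DOS programs run inside NTVDM, and either a Software Restriction Policy or a damaged/blocked ntvdm.exe produces exactly this blanket "Access is Denied" for otherwise-accessible executables.

        rem confirm the effective ACLs really allow execution (XP ships cacls, not icacls)
        cacls %SystemRoot%\system32\edlin.exe

        rem check protected system files, including ntvdm.exe, against the originals
        sfc /scannow

    Software Restriction Policies live under secpol.msc (Security Settings > Software Restriction Policies); a "Disallowed" default level or a hash/path rule added by security software would explain the sudden change.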

    Read the article

  • Allowing access to company files across the internet

    - by Renaud Bompuis
    The premise: I've been tasked with finding a solution to the following scenario:

      - our main file server is a Linux machine;
      - on the LAN, users simply access the files using SMB;
      - each user has an account on the file server and his/her own access rights;
      - user accounts are simple passwd/group security accounts, not NIS/LDAP.

    The problem: we want to give users (or at least some of them, say if they belong to a particular group) the ability to access the files from the Internet while travelling. Ideally I'd like a seamless solution; maybe something that lets the user access a mapped drive would be ideal. A web-oriented solution is also good, but it should present files in a way that is familiar to users, in an explorer-like fashion for instance. Security is a must, of course: users would be expected to log in, and the connection to the server should be encrypted. Anyone have pointers to neat solutions? Any experiences?

    Edit: the client machines are Windows only.

    Read the article

  • Postfix : outgoing mail in TLS for a specific domain

    - by vercetty92
    I am trying to configure Postfix to send mail with TLS (STARTTLS, in fact), but only to one specific destination. I tried "smtp_tls_policy_maps". This is the only line in my main.cf regarding TLS configuration, but it does not seem to work. Here is my main.cf file:

        queue_directory = /opt/csw/var/spool/postfix
        command_directory = /opt/csw/sbin
        daemon_directory = /opt/csw/libexec/postfix
        html_directory = /opt/csw/share/doc/postfix/html
        manpage_directory = /opt/csw/share/man
        sample_directory = /opt/csw/share/doc/postfix/samples
        readme_directory = /opt/csw/share/doc/postfix/README_FILES
        mail_spool_directory = /var/spool/mail
        sendmail_path = /opt/csw/sbin/sendmail
        newaliases_path = /opt/csw/bin/newaliases
        mailq_path = /opt/csw/bin/mailq
        mail_owner = postfix
        setgid_group = postdrop
        mydomain = ullink.net
        myorigin = $myhostname
        mydestination = $myhostname, localhost.$mydomain, localhost
        masquerade_domains = vercetty92.net
        alias_maps = dbm:/etc/opt/csw/postfix/aliases
        alias_database = dbm:/etc/opt/csw/postfix/aliases
        transport_maps = dbm:/etc/opt/csw/postfix/transport
        smtp_tls_policy_maps = dbm:/etc/opt/csw/postfix/tls_policy
        inet_interfaces = all
        unknown_local_recipient_reject_code = 550
        relayhost =
        smtpd_banner = $myhostname ESMTP $mail_name
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin xxgdb $daemon_directory/$process_name $process_id & sleep 5

    And here is my "tls_policy" file:

        gmail.com encrypt protocols=SSLv3:TLSv1 ciphers=high

    I also tried simply:

        gmail.com encrypt

    My wish is to use TLS only for the gmail.com domain. With this configuration I don't see any TLS line in the headers of the mail. But if I tell Postfix to use TLS when possible for all destinations with this line, it works:

        smtp_tls_security_level = may

    because then I can see this line in the headers of my mail:

        (version=TLSv1/SSLv3 cipher=OTHER);

    But I don't want to attempt TLS for the other domains - only for gmail. Am I missing something in my config? (I also tried with "hash:/etc/opt/csw/postfix/tls_policy", and it's the same.) Thanks a lot in advance.
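    A few hedged checks rather than a definitive answer: the policy table only takes effect when the map has been rebuilt in the same database type that main.cf references, the lookup key has to match the next-hop destination (here the recipient domain, since relayhost is empty), and the smtp client must have been built with TLS support at all.

        # rebuild the map after every edit, using the same type as in main.cf
        postmap dbm:/etc/opt/csw/postfix/tls_policy

        # verify the lookup the smtp client will make; this should print: encrypt ...
        postmap -q gmail.com dbm:/etc/opt/csw/postfix/tls_policy

        # confirm the running value and reload
        postconf smtp_tls_policy_maps
        postfix reload

        # check that this OpenCSW build links against OpenSSL at all
        ldd /opt/csw/libexec/postfix/smtp | grep -i ssl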

    Read the article

  • IIS7.5 - about app pool ID's and folder read/write access

    - by merk
    I did some searching and it looks like for each app pool there should be an account called IIS APPPOOL\AppPoolName; however, I can see no such account when I try to modify the permissions on a folder to give that app write access. The closest I have found is the IIS_IUSRS group. If I go into that group and look at the members, I see several IIS APPPOOL\PoolName members. But where are these members coming from? Why don't they show up under the users? And why can't I add a specific one to a folder? It doesn't make sense to me to add the IIS_IUSRS group to a folder, since that gives every site access to the folder.

    To be more specific, I'm setting up WordPress and it unfortunately wants write access to the root folder, so I want to restrict it as much as possible. I was trying to figure out how to set things up so that the WordPress root folder has write access only for the identity that the blog's app pool runs under. When I drill down into the IIS_IUSRS group, I do not see the app pool for the blog listed there. The settings for the blog's app pool are: No Managed Code, Classic, ApplicationPoolIdentity, and it's named 'blog'. So, any explanations regarding these users that are created for the app pools, and why the blog doesn't seem to belong to the IIS_IUSRS group? Thanks.
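    Worth noting as background (the commands below are only a sketch with an assumed path): ApplicationPoolIdentity accounts are "virtual accounts", so they never appear in the local user list, but they can still be granted NTFS permissions by name. In the permissions dialog, type IIS AppPool\blog with the location set to the local machine and press Check Names, or from an elevated prompt:

        rem grant the blog app pool modify rights on its own folder only
        icacls "C:\inetpub\wwwroot\blog" /grant "IIS AppPool\blog":(OI)(CI)M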

    Read the article

  • windows xp cannot access admin share

    - by barlop
    I have 3 systems - A, B and Compx - all on XP, but computers A and B have an issue with Compx. Compx has network shares I can access: I can do \\compx and see some of them. But I cannot access the admin share C$. \\compx\c$ gives a login prompt, and I can't get any user/password combination to work.

    I looked at the permissions but don't see an issue. Nevertheless, I will describe what I see. On the Security tab of C: I have Administrators, CREATOR OWNER, Everyone, bob, SYSTEM and Users (6 entries). "CREATOR OWNER" has nothing ticked and I can't seem to change that; if I tick the boxes so they all get ticked and click Apply, it spends about 2.5 minutes completing the operation and then they all untick again. That isn't the root of the problem though, since I get the same on the share I can access. Under Advanced I see those same 6 entries, all "Full Control", all applying to "This folder, subfolders and files", except CREATOR OWNER, which is "Subfolders and files only".

    The properties of the share I can access look the same, except that under Security > Advanced, double-clicking any of the entries shows all the boxes ticked but greyed out. That's not the problem though, since I can access that share. So, I don't know what the problem is.
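    A hedged suggestion rather than a confirmed diagnosis: on XP, the usual reason C$ rejects every credential while ordinary shares work is Simple File Sharing, which forces all network logons to Guest, making administrative shares unreachable. It is controlled by the "Use simple file sharing" checkbox (Folder Options > View) and reflected in the forceguest registry value:

        rem run on Compx: 0x0 = classic logons (admin shares usable), 0x1 = guest only
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v forceguest

    XP Home cannot switch to classic sharing at all, so if Compx runs XP Home the C$ share is effectively off-limits by design.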

    Read the article
