Search Results

Search found 13411 results on 537 pages for 'proxy servers'.


  • Windows 2008 Group Policy Setting? - Migration Headache

    - by DevNULL
    I have a small domain of users that I just migrated from a Linux domain running OpenLDAP. Our new servers run Windows 2008 Standard. I've installed Active Directory and everything is working perfectly... except that the initial user privileges are pretty restrictive and I need to loosen them up a bit. For example, once users log in to their workstations, they can create new files and folders but cannot modify existing files or start programs. I basically want to open it all up except for software installations. Can someone please help with this migration headache?
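
    For illustration, if the restriction turns out to be NTFS permissions rather than a policy setting, rights can be widened with icacls -- a minimal sketch, assuming a hypothetical D:\Shared tree and the default Domain Users group:

        rem grant Modify (but not Full Control), inherited by subfolders and files
        icacls "D:\Shared" /grant "Domain Users":(OI)(CI)M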


  • migrating storage to a different controller

    - by bellocarico
    Hello, I've just purchased a couple of Adaptec controllers (2405/5405) for my ESXi 4.0 U1 servers. Currently, ESXi and a couple of VMs are hosted on a single SATA boot disk connected to an onboard non-RAID NVIDIA controller. I know that it's possible to migrate from a single disk to RAID 1 with Adaptec, and I'm pleased with that, but I'm not sure whether ESXi already has the right drivers installed/loaded for this controller. Is there any way I can check this? Is ESXi clever enough to recognize the new hardware and load the right module? Thanks
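
    For what it's worth, this can be checked from the ESXi Tech Support Mode shell before swapping any hardware; a sketch (the Adaptec RAID driver name, usually aacraid, is an assumption to verify):

        # look for the Adaptec RAID driver among loaded kernel modules
        vmkload_mod -l | grep -i aac
        # list the storage adapters ESXi currently recognises, with their drivers
        esxcfg-scsidevs -a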


  • Upgrading a non-RAID server to RAID

    - by AZee
    I have just learned that our PDC has a single drive with 2 partitions. I also know that this drive has bad blocks, as recorded in the event log. What I would like to do is convert this to a RAID solution with a nice balance between economy and performance. I will admit that I have only configured servers with RAID from scratch, and have no experience upgrading an existing system to RAID. In fact, I'm not sure it is even possible. Since this is the PDC for 350+ workstations, downtime is important. I'd like to hear from other system administrators how they would tackle this and their recommendations for all devices. At this time it seems to me that I can either replace the existing drive and restore from backup, or install a controller and drives, configure the RAID, and basically start from scratch. Thank you for taking your time. ~AZee


  • Any good resources on setting up an ubuntu virtual machine for web development?

    - by Relequestual
    I'm currently on my placement year at uni, with 4 months left. Before working at my current place I had not used a *nix environment for web development; I used WAMP. Over the past year I have found some very interesting new tech that requires a bit more than my shared hosting even to play with (e.g. node.js, Rails 3). At work we use a virtual machine for development, but that has all been set up and configured to match the live servers, and is managed with a Puppet server. Are there any really good resources for setting up and configuring an Ubuntu VM as a web server? Work currently uses Ubuntu, so I would assume this is a good OS to use. I do of course know how to use Google, but the noise ratio is just too big, so I thought I'd ask here, as I know many of you will have a ton of bookmarks. Cheers.
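
    As a starting point, the skeleton of such a VM is only a few packages; a sketch with Ubuntu 10.04-era package names (double-check them against your release):

        sudo apt-get update
        # a basic web stack: Apache, PHP and MySQL
        sudo apt-get install apache2 php5 libapache2-mod-php5 mysql-server
        # compiler toolchain and git, for building things like node.js from source
        sudo apt-get install build-essential git-core curl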


  • How do I configure namecheap for "arbitrarily-nested" wildcard subdomains?

    - by rabidsnail
    I'm trying to set up something like nyud.net, where any arbitrary chain of subdomains resolves to the same CNAME record (which in my case points to an Amazon Elastic Load Balancer). For example, www.google.com.nyud.net:8080 points to one of their cache servers, which looks at the Host header and returns www.google.com. I'm using Namecheap as my DNS host. Adding a CNAME record for *.mydomain.com doesn't seem to do anything (nslookup gives NXDOMAIN for all subdomains). What do I have to do to set this up? Do I have to use something fancier than Namecheap (like Route 53)?
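
    In plain zone-file terms the goal is a single wildcard record; a BIND-style sketch (the ELB hostname is made up). A DNS wildcard matches any depth of labels, provided no more specific record exists:

        ; matches foo.mydomain.com and a.b.c.mydomain.com alike
        *.mydomain.com.   300   IN   CNAME   my-elb-123456.us-east-1.elb.amazonaws.com.
        ; quick check once it is live:  dig +short a.b.mydomain.com CNAME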


  • Best setup for a data server serving small (~40 KB) pictures

    - by Nicolas Manzini
    I'm designing the server structure for my application in case things go well. I have one DB server connected to multiple servers that process connections, all with lots of RAM and fast processors. (I'm still looking for a way to use multithreading, because right now it's plain Apache/PHP, so lots of RAM is needed.) Upon an answer from those servers, the client can then connect to another server to retrieve pictures, using the address it previously got from the DB. Is it a good idea to have one server with, let's say, nginx and an SSD disk sending all pictures to everybody? Or should I have multiple servers accessing a shared SSD drive, or multiple disks updating each other? Also, should I put a lot of RAM in the database server? Probably no picture will be much more popular than another.
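
    For reference, a single nginx instance serving small static images straight off an SSD is a common pattern; a minimal sketch (hostname and paths invented):

        server {
            listen 80;
            server_name img.example.com;
            root /mnt/ssd/pictures;

            location / {
                expires 30d;                              # let clients cache the images
                open_file_cache max=10000 inactive=5m;    # cache open file handles
            }
        }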


  • why do I get this mail server configuration error?

    - by Francesco
    "The configuration of your mail servers and your DNS is not OK! The report of the test is: mail.mydomain.com. -> mydomain.com -> 78.47.63.148 -> static.148.63.47.78.clients.your-server.de. Spam-recognition software and RFC 821 section 4.3 (also RFC 2821 section 4.3.1) state that the hostname given in the SMTP greeting MUST have an A record pointing back to the same server." I have an A record that points mail.mydomain.com to 78.47.63.148 (the IP address assigned to my VPS). All other records are fine, so what's wrong, and what record should I create to make it right? Thanks
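
    A quick way to see what the test sees (substituting the real domain): the forward and reverse lookups should agree with the SMTP greeting name. The PTR record for a VPS usually has to be set in the hosting provider's control panel, not in your own zone:

        # forward: the greeting hostname should resolve to the VPS address
        dig +short A mail.mydomain.com     # expect 78.47.63.148
        # reverse: the address should map back to the same hostname
        dig +short -x 78.47.63.148         # expect mail.mydomain.com.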


  • USB 3 adapter for a Dell 2850 with PCI (or PCI-X) slots

    - by Don Dickinson
    Does anyone know if there is a plain PCI (or PCI-X) USB 3 adapter? I understand that PCI bandwidth is less than USB 3's, but it still beats the heck out of USB 2. I have some older Dell 2850s that lack the PCIe slots most USB 3 adapters require, and I'd really like to get USB 3 into those servers. I searched the internet but didn't see any, and the local computer store said they only had PCIe adapters. TIA, Don


  • Exchange 2010 550 5.7.1 unable to relay

    - by isorfir
    I have a website application that needs to send email via our Exchange servers. It sends email internally fine, but when sending to an external address I get the "550 5.7.1 unable to relay" error. I followed this guide to create a connector to allow relaying. Unfortunately, all office email then tried to use that connector and was not routed correctly; it also appeared to open the server up for spammers to use. This is obviously unacceptable, so a secure method is needed.
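
    One commonly described approach for Exchange 2010 is a dedicated receive connector whose remote IP range is limited to the application server alone, so normal mail flow never matches it; a sketch from the Exchange Management Shell (the server name and IP are placeholders):

        New-ReceiveConnector -Name "App Relay" -Server EX01 -Usage Custom `
            -Bindings 0.0.0.0:25 -RemoteIPRanges 10.0.0.50 `
            -PermissionGroups AnonymousUsers
        # grant the anonymous relay right on this connector only
        Get-ReceiveConnector "EX01\App Relay" | Add-ADPermission `
            -User "NT AUTHORITY\ANONYMOUS LOGON" `
            -ExtendedRights "Ms-Exch-SMTP-Accept-Any-Recipient"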


  • How to accelerate and notice failure of potentially faulty disks

    - by rainier
    Hey, I got a bunch of 'used' servers whose disks should have been checked, but they have been shipped around the country in crates, which can't have helped. I just had one disk go bad (despite being mirrored; I'm currently trying to get more details). The server was fine for about a week before everything ground to a halt this afternoon. Is there any way to 'accelerate' the failure of faulty disks, with the goal of bringing a disk to failure before we launch production services? Would doing lots of I/O with dd or iozone be a good way to test these potentially faulty disks? Any other tests/tools that would help recognize failures before they happen?
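
    A typical burn-in combines SMART self-tests with a destructive surface scan; a sketch, assuming smartmontools is installed and /dev/sda is a disk you can afford to wipe:

        # SMART attributes, error log, and a long (full-surface) self-test
        smartctl -a /dev/sda
        smartctl -t long /dev/sda
        # write-and-verify pass over every block -- this ERASES the disk
        badblocks -wsv /dev/sda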


  • Apache mod_remoteip and access logs

    - by GioMac
    Since Apache 2.4 I've started using mod_remoteip instead of mod_extract_forwarded for rewriting the client address from the X-Forwarded-For header provided by frontend servers (Varnish, Squid, Apache, etc.). So far everything works fine with the modules: PHP, CGI, WSGI and so on all see the client address as they should. But I can't get the client address written to the access logs (%a, %h, %{c}a); I always get 127.0.0.1 (the local frontend, for example). How do I log the client's IP address when using mod_remoteip?
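
    For comparison, a minimal mod_remoteip setup looks roughly like this (the trusted proxy address is an example); once the header is trusted, %a should carry the client address while %{c}a keeps the connection peer's:

        LoadModule remoteip_module modules/mod_remoteip.so
        RemoteIPHeader X-Forwarded-For
        # frontend(s) whose X-Forwarded-For header we trust
        RemoteIPInternalProxy 127.0.0.1
        LogFormat "%a %l %u %t \"%r\" %>s %b" remoteip_combined
        CustomLog logs/access_log remoteip_combined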


  • Windows 2003 SP1 terminal server printers disappear after reboot - HP LaserJet 4240

    - by Alex
    We had a working PCL6 LaserJet 4240 driver but needed to downgrade to PCL5. The first attempt did not give a clean install; the second seemed to work (this is a 2003 Enterprise terminal server, SP2). We have over 40 working LaserJets (5, 4000, 4100, and 18 of the 4240). After the normal nightly reboot, the 18 4240 printers were 'gone'. We worked with Microsoft, who blamed bad HP driver issues -- weird, since the same drivers work on other terminal servers. We downloaded the latest version from the HP site and cannot get it to work: as soon as I install it and then do a 'net stop spooler' and 'net start spooler', the printer is 'gone'. The current workaround is to use the HP 4000 PCL5 driver for all of these 4240 printers.


  • Debian 6 or CentOS 6 - which one is easiest for latest versions of Ruby and Postgres?

    - by A4J
    I am getting a new server, as I've messed up my current box while trying to install Postgres 9 (on my CentOS 5.8 box). To cut a long story short, I removed Postgres, but yum decided to remove virtualmin-base as well, which broke my Virtualmin install (Postfix/Dovecot stopped working). Virtualmin advises a fresh install once virtualmin-base has been removed/reinstalled. So I'll probably make a decision based on this simple criterion: which distro of the two makes it easiest to install the latest versions of Ruby and Postgres? They are both equally respected as web servers, so I really don't mind either way -- I just want to use the one that will work best with the software I need.
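
    One low-effort way to compare is to ask each package manager what it would install; a sketch:

        # Debian: versions offered by the configured repositories
        apt-cache policy postgresql ruby
        # CentOS: likewise
        yum info postgresql-server ruby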


  • How can I measure TCP timeout limit on NAT firewall for setting keepalive interval?

    - by jmanning2k
    A new (NAT) firewall appliance was recently installed at $WORK. Since then, I'm getting many network timeouts and interruptions, especially for operations that require the server to think for a bit without sending a response (svn update, rsync, etc.). Inbound SSH sessions over VPN also time out frequently. That clearly suggests I need to adjust the TCP (and SSH) keepalive time on the servers in question in order to reduce these errors. But what is the appropriate value? Assuming I have machines on both sides of the firewall between which I can make a connection, is there a way to measure what the firewall's time limit on idle TCP connections might be? In theory, I would send a packet after gradually increasing idle intervals until the connection is lost. Any tools that might help (free or open source would be best, but I'm open to other suggestions)? The appliance is not under my control, so I can't just read off the value, though I am attempting to ask what it currently is and whether it can be increased.
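
    Lacking a dedicated tool, the measurement can be scripted; a rough bash sketch, assuming a far-side listener that echoes input back (e.g. ncat -lk 5000 -e /bin/cat; the hostname is made up). The script waits for the echoed reply because a first write can succeed from the local buffer even after the NAT has dropped the mapping:

        for t in 60 300 600 1200 2400; do
            exec 3<>/dev/tcp/far-host.example.com/5000 || { echo "connect failed"; break; }
            sleep "$t"                  # hold the connection idle
            echo "probe" >&3            # then try to use it
            if read -t 15 reply <&3; then
                echo "mapping survived ${t}s idle"
            else
                echo "no reply after ${t}s idle -- the limit is below ${t}s"
            fi
            exec 3>&-
        done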


  • How to set up VMs in KVM? Qcow2 or LVM, etc.

    - by JohnAdams
    Finally, after quite a bit of this-versus-that, I have chosen to virtualize a couple of my servers with KVM, and I did a test setup as well. But I have a few questions about setting up VMs in KVM and would appreciate pointers. What is the best storage to use: qcow2 or LVM? I like the fact that I can copy the VM file easily with qcow2, but what about LVM -- how do I take a backup or make a copy on a development server to play with? I know I can clone an LV, but how do I bring it to my development server? Also, how do I set up the guest partitioning? For example, when setting up Ubuntu inside Ubuntu, do I choose LVM for that VM or regular fdisk partitioning? Can I increase the partition size later, if I need a bigger disk?
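
    For a feel of the two backends, a sketch of the usual commands (names and paths invented). Both can be grown later, though the guest's partitions still need resizing from inside, which tends to be easier if the guest itself uses LVM:

        # qcow2: create a 20G image, grow it by 10G later
        qemu-img create -f qcow2 /var/lib/libvirt/images/web01.qcow2 20G
        qemu-img resize /var/lib/libvirt/images/web01.qcow2 +10G
        # LVM: the equivalent with a logical volume
        lvcreate -L 20G -n web01 vg0
        lvextend -L +10G /dev/vg0/web01
        # one way to move an LV to a development box: stream the raw device
        # (assumes an identically sized LV already exists on the target)
        dd if=/dev/vg0/web01 bs=4M | ssh dev-box 'dd of=/dev/vg0/web01 bs=4M'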


  • Are there any Powershell modules specifically for IIS6 administration?

    - by program247365
    I'm looking to manage/administer many IIS6 servers remotely via PowerShell (query sites, IIS settings, etc.). Is this possible? Is there a Microsoft-supported module out there, or do I have to use Get-WmiObject/WebAdministration? If so, could someone give me some quick instructions on doing some simple "get info" commands in PowerShell against a remote IIS6 machine? (It's frustrating that there is a nice Microsoft-supported module for IIS7: http://learn.iis.net/page.aspx/428/getting-started-with-the-iis-70-powershell-snap-in/ -- but not an easily found IIS6 one.)
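
    Absent a dedicated snap-in, the IIS6 WMI provider can be queried remotely; a sketch (the computer name is a placeholder):

        # list web sites and their bindings on a remote IIS6 box
        Get-WmiObject -Namespace "root\MicrosoftIISv2" -Class IIsWebServerSetting `
            -ComputerName "iis6-server01" |
            Select-Object Name, ServerComment, ServerBindings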


  • Dealing with SMTP invalid command attack

    - by mark
    One of our semi-busy mail servers (sendmail) has had a lot of inbound connections over the past few days from hosts that are issuing garbage commands. In the past two days:
    - incoming SMTP connections with invalid commands from 39,000 unique IPs
    - the IPs come from various ranges all over the world, not just a few networks that I could block
    - the mail server serves users throughout North America, so I can't just block connections from unknown IPs
    - sample bad commands: http://pastebin.com/4QUsaTXT
    I am not sure what someone is trying to accomplish with this attack, besides annoying me. Any ideas what this is about, or how to deal with it effectively?
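
    Whatever the motive, per-source connection rate-limiting blunts this kind of probing without blocking any network outright; an iptables sketch (the thresholds are arbitrary examples):

        # drop sources that opened more than 10 new SMTP connections in 60 seconds
        iptables -A INPUT -p tcp --dport 25 -m state --state NEW \
            -m recent --update --seconds 60 --hitcount 10 --name SMTP -j DROP
        # record each new SMTP connection per source address
        iptables -A INPUT -p tcp --dport 25 -m state --state NEW \
            -m recent --set --name SMTP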


  • Rails 2 and Nginx: https pages can't load CSS or JS (but will load graphics)

    - by Max Williams
    ADMISSION: I've posted this same question on Stack Overflow before realising it's probably better suited to Super User, but it kind of depends on the answer: if it turns out to be a problem in my nginx config, it's definitely Super User; if it turns out to be a problem in my Rails config (or code), then it's arguably Stack Overflow. I'm adding some https pages to my Rails site. In order to test it locally, I'm running my site under one mongrel_rails instance (on 3000) and nginx. I've managed to get my nginx config to the point where I can actually go to the https pages, and they load. Except the JavaScript and CSS files all fail to load: looking in the Network tab in Chrome web tools, I can see that the page is trying to load them via https URLs. For example, one of the non-working file URLs is https://cmw-local.co.uk/stylesheets/cmw-logged-out.css?1383759216. I have these set up (or at least think I do) in my nginx config to redirect to the http versions of the static files. This seems to be working for graphics, but not for CSS and JS files. If I click on that URL in the Network tab, it takes me to the address above, which redirects to the http version. So the redirect seems to be working in some sense, but not when the files are loaded by an https page. Like I say, I thought I had this covered by the second try_files directive in my config below, but maybe not. Can anyone see what I'm doing wrong? Thanks, Max. Here's my nginx config -- sorry it's a bit lengthy! I think the error is likely to be in the first (ssl) server block:

        server {
            listen 443 ssl;
            keepalive_timeout 70;

            ssl_certificate     /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.crt;
            ssl_certificate_key /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.key;
            ssl_session_cache   shared:SSL:10m;
            ssl_session_timeout 10m;
            ssl_protocols       SSLv3 TLSv1;
            ssl_ciphers         RC4:HIGH:!aNULL:!MD5;
            ssl_prefer_server_ciphers on;

            server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk;
            root /home/max/work/charanga/elearn_container/elearn;

            # ensure that we serve css, js, other statics when requested as SSL,
            # but if the files don't exist (i.e. any non /basket controller)
            # then redirect to the non-https version
            location / {
                try_files $uri @non-ssl-redirect;
            }

            # securely serve everything under /basket (/basket/checkout etc);
            # we need general too, because of the email/username checking
            location ~ ^/(basket|general|cmw/account/check_username_availability) {
                # make sure cached copies are revalidated once they're stale
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                # this serves Rails static files that exist without running
                # other rewrite tests
                try_files $uri @rails-ssl;
                expires 1h;
            }

            location @non-ssl-redirect {
                return 301 http://$host$request_uri;
            }

            location @rails-ssl {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_read_timeout 180;
                proxy_next_upstream off;
                proxy_pass http://127.0.0.1:3000;
                expires 0d;
            }
        }

        #upstream elrs {
        #    server 127.0.0.1:3000;
        #}

        server {
            listen 80;
            server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk;
            root /home/max/work/charanga/elearn_container/elearn;

            access_log /home/max/work/charanga/elearn_container/elearn/log/access.log;
            error_log  /home/max/work/charanga/elearn_container/elearn/log/error.log debug;

            client_max_body_size 50M;
            index index.html index.htm;

            # gzip html, css & javascript, but don't gzip javascript for pre-SP2 MSIE6
            # (i.e. those *without* SV1 in their user-agent string)
            gzip on;
            gzip_http_version 1.1;
            gzip_vary on;
            gzip_comp_level 6;
            gzip_proxied any;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; #text/html

            # make sure gzip does not lose large gzipped js or css files
            # see http://blog.leetsoft.com/2007/7/25/nginx-gzip-ssl
            gzip_buffers 16 8k;

            # Disable gzip for certain browsers.
            #gzip_disable "MSIE [1-6].(?!.*SV1)";
            gzip_disable "MSIE [1-6]";

            # blank gif like it's 1995
            location = /images/blank.gif { empty_gif; }

            # don't serve files beginning with dots
            location ~ /\. { access_log off; log_not_found off; deny all; }

            # we don't care if these are missing
            location = /robots.txt   { log_not_found off; }
            location = /favicon.ico  { log_not_found off; }
            location ~ affiliate.xml { log_not_found off; }
            location ~ copyright.xml { log_not_found off; }

            # convert urls with multiple slashes to a single /
            if ($request ~ /+ ) {
                rewrite ^(/)+(.*) /$2 break;
            }

            # X-Accel-Redirect: don't tie up mongrels with serving the lesson
            # zips or exes, let Nginx do it instead
            location /zips {
                internal;
                root /var/www/apps/e_learning_resource/shared/assets;
            }
            location /tmp { internal; root /; }
            location /mnt { root /; }

            # resource library thumbnails should be served as usual
            location ~ ^/resource_library/.*/*thumbnail.jpg$ {
                if (!-f $request_filename) {
                    rewrite ^(.*)$ /images/no-thumb.png break;
                }
                expires 1m;
            }

            # don't make Rails generate the dynamic routes to the dcr and swf,
            # we'll do it here
            location ~ "lesson viewer.dcr" {
                rewrite ^(.*)$ "/assets/players/lesson viewer.dcr" break;
            }

            # we need this rule so we don't serve the older lessonviewer when
            # the rule below is matched
            location = /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf {
                rewrite ^(.*)$ /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf break;
            }
            location ~ v6lessonViewer.swf {
                rewrite ^(.*)$ /assets/players/v6lessonViewer.swf break;
            }
            location ~ lessonViewer.swf {
                rewrite ^(.*)$ /assets/players/lessonViewer.swf break;
            }
            location ~ lgn111.dat { empty_gif; }

            # try to get autocomplete school names from memcache first, then
            # fall back to rails when we can't
            location /schools/autocomplete {
                set $memcached_key $uri?q=$arg_q;
                memcached_pass 127.0.0.1:11211;
                default_type text/html;
                error_page 404 =200 @rails;  # 404 not really! Hand off to rails
            }

            location / {
                # make sure cached copies are revalidated once they're stale
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                # this serves Rails static files that exist without running
                # other rewrite tests
                try_files $uri @rails;
                expires 1h;
            }

            location @rails {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_read_timeout 180;
                proxy_next_upstream off;
                proxy_pass http://127.0.0.1:3000;
                expires 0d;
            }
        }


  • How to whitelist a domain while blocking forgeries using that domain?

    - by QuantumMechanic
    How do you deal with the case of wanting to whitelist a domain, so that emails from it won't get eaten, without having emails forged to appear to be from that domain get bogusly whitelisted? whitelist_from_rcvd looks promising, but then you have to know at least the TLD of every host that could send you mail from that domain. Often RandomBigCompany.com will outsource email to one or more sending companies (like Constant Contact and the like) in addition to using servers that reverse-resolve to something in its own domain. But it looks like whitelist_from_rcvd can only map to one sending-server pattern, so that would be problematic. Is there a way to say something like "if email is from domain X, subtract N points from the spam score"? The idea would be that if the mail is legit, that -N will all but guarantee it isn't considered spam; but if it is spam, hopefully all the other failed tests will render it spam even with the -N included.
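
    Both halves are expressible in SpamAssassin's local.cf; a sketch (domain names are examples). whitelist_from_rcvd can be listed several times for the same sender, one line per sending infrastructure, and a scored header rule gives the "subtract N points" behaviour:

        # accept the domain only when relayed by known infrastructures
        whitelist_from_rcvd  *@randombigcompany.com  randombigcompany.com
        whitelist_from_rcvd  *@randombigcompany.com  constantcontact.com
        # or a partial-credit rule instead of an outright whitelist
        header  FROM_BIGCO  From =~ /\@randombigcompany\.com/i
        score   FROM_BIGCO  -3.0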


  • How long does it take for a server to get 'off greylisting'

    - by Michael
    Hi all, I asked a question regarding email delays a few months ago, and I think I found a workaround. I changed our email from "[email protected]" to "[email protected]", and it seems to work instantly again. After reading some articles, I believe this could be due to some form of greylisting, though some servers might call it something else: if a server like Yahoo or Gmail receives email from a server it is not used to receiving email from, a delay sometimes occurs. But with a well-known name such as Yahoo or Gmail, which requires users to sign up manually, this delay can be avoided. My question is this: does anyone know more about this issue? It would be nice to send email from our own site, instead of needing to use a whitelisted server. Thanks!


  • get a list of running EC2 instances programmatically

    - by user113981
    Hi, I have started with AWS and found out that we can get a list of running servers with the AWS PHP SDK. Is there any other way to get the list of all EC2 instances? After getting the list, I want to sync data from one main instance to all the other instances; something like a button click should be able to trigger the operation. Are rsync and incron the only options, or can it be done with the AWS PHP SDK as well? Please provide some tutorial links.
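
    Outside the PHP SDK, the same listing is available from the command-line EC2 API tools, and the sync itself is plain rsync; a sketch (the hostname is invented, credentials assumed configured):

        # list running instances
        ec2-describe-instances --filter "instance-state-name=running"
        # push the data directory from the main instance to one node
        rsync -az /var/www/data/ ubuntu@node1.example.com:/var/www/data/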


  • Recommendations for tuning hundreds of SQL databases

    - by wayne
    I'm running several SQL Servers, each hosting a few hundred multi-gig databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index/profile/tune this large number of databases? As there are at least 600 catalogs, I can't have someone manually profile and index each one as its usage patterns require. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine.
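
    Since both versions expose the missing-index DMVs, one automatable starting point is to rank the optimizer's own suggestions; a sketch (the weighting is a rough example, not a recommendation):

        SELECT TOP 25
            mid.statement                           AS [table],
            mid.equality_columns,
            mid.inequality_columns,
            mid.included_columns,
            migs.user_seeks * migs.avg_user_impact  AS estimated_benefit
        FROM sys.dm_db_missing_index_details mid
        JOIN sys.dm_db_missing_index_groups mig
            ON mig.index_handle = mid.index_handle
        JOIN sys.dm_db_missing_index_group_stats migs
            ON migs.group_handle = mig.index_group_handle
        ORDER BY estimated_benefit DESC;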


  • How SmartDNS Works

    - by Emad
    If you travel outside the US you'll notice that most streaming services (Netflix, Pandora, Hulu, etc.) are blocked, usually by the service providers themselves. To get around that, people use VPN services, which basically tunnel your traffic through a US server so your requests seem to originate in the US. These VPN services fix the blocking problem, but they make your connection slower than a normal, non-VPN one. Recently, however, I've come across something called SmartDNS, provided by overplay.net. You pay $5 a month and get access to their DNS servers; after switching to their DNS you get access to the blocked streaming sites, without slowing down your normal traffic like email and browsing. What I'd like to know is the technical details of how this SmartDNS works. I've done some quick research, but that didn't turn up anything of substance. Anybody out there know?


  • How can I remount an NFS volume on Red Hat Linux?

    - by user76177
    I changed the user ID of a user on an NFS client that mounts a volume from another server. My goal is to give the two users the same ID, so that both servers can read and write to the volume. I changed the ID successfully on the client system, but now when I look at the NFS mount from that system, it reports the files as being owned by the old ID. So it looks like I need to "refresh" that mount. I have found many instructions on how to remount, but each seems slightly different according to the type of system. Is there a simple command I can run to get the mounted volume to refresh, so that it picks up the new user settings?
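
    On Red Hat the straightforward form is an unmount/mount cycle; a sketch (the mount point is invented). Note that NFS transmits numeric IDs only, so files created under the old UID keep showing it until they are chowned on the server:

        # cleanly remount the NFS volume
        umount /mnt/shared && mount /mnt/shared
        # if umount says the volume is busy, see what is holding it open
        fuser -vm /mnt/shared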


  • What to have in sources.list on an Ubuntu LTS server (production)?

    - by nbr
    I have several Ubuntu 10.04 LTS servers in production, and I'm using apticron to check that my software is up to date, security-wise. However, by default Ubuntu has the lucid-updates repository enabled. This means lots of low-priority updates (such as this) that I don't need, and thus extra work for me. Is it okay to just remove the lucid-updates line(s) in sources.list? I still get security updates via lucid-security, right? So this is what my sources.list would look like:

        deb http://se.archive.ubuntu.com/ubuntu/ lucid main restricted
        deb http://se.archive.ubuntu.com/ubuntu/ lucid universe
        deb http://security.ubuntu.com/ubuntu lucid-security main restricted
        deb http://security.ubuntu.com/ubuntu lucid-security universe

