Search Results

Search found 13411 results on 537 pages for 'proxy servers'.

Page 60/537

  • Managing Multiple dedicated servers centrally using a Web GUI tool?

    - by Sampath
    Application architecture: I have a single Ruby on Rails code base running as multiple instances (i.e. each client gets an identical sub domain) across multiple dedicated servers, using Phusion Passenger + nginx. The sub domains are set up with the vhost option of the nginx Passenger module. For example:

      server 1 serves clients 1 - 100 with sub domains www.client1.product.com up to www.client100.product.com
      server 2 serves clients 101 - 200 with sub domains www.client101.product.com up to www.client200.product.com
      server 3 serves clients 201 - 300 with sub domains www.client201.product.com up to www.client300.product.com

    My question: I need to centrally manage all N dedicated servers, and I am looking for a Web GUI tool to handle tasks like:

      1) back up all MySQL databases automatically from all dedicated servers and send them to an FTP backup drive
      2) back up files and folders from all dedicated servers and send them to an FTP backup drive
      3) manage the firewall (CSF, http://configserver.com/cp/csf.html) centrally for all dedicated servers
      4) view server load and bandwidth used, in graphical form, for all N dedicated servers

    Note: I would prefer an open source solution.
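
    Requirement 1, for instance, can be approximated without any GUI at all; a minimal sketch, assuming SSH key access to each server and with the hostnames, MySQL credentials and FTP details below as placeholders to replace:

      #!/bin/bash
      # Dump every MySQL database on each server and push the archive to an FTP backup host.
      SERVERS="server1.product.com server2.product.com server3.product.com"
      FTP_URL="ftp://backup.example.com/mysql/"
      FTP_USER="backupuser:backuppass"
      STAMP=$(date +%Y%m%d)

      for host in $SERVERS; do
          # Dump all databases on the remote host and compress the stream locally.
          ssh "$host" "mysqldump --all-databases -u backup -pBACKUPPW" | gzip > "${host}-${STAMP}.sql.gz"
          # Upload the compressed dump to the FTP backup drive.
          curl -T "${host}-${STAMP}.sql.gz" --user "$FTP_USER" "$FTP_URL"
      done

    Run from cron, a script like this covers the scheduling side; the Web GUI tools being compared would mostly be automating the same idea.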

    Read the article

  • Getting Tor and Irssi working together

    - by Joey Bagodonuts
    Hi, I am trying to get Tor working with Irssi. The directions at the bottom of the Freenode install page say to run:

      ~/.irssi$ tor MapAddress 10.40.40.40 p4fsi4ockecnea7l.onion
      Feb 12 04:26:51.101 [notice] Tor v0.2.1.29 (r318f470bc5f2ad43). This is experimental software. Do not rely on it for strong anonymity. (Running on Linux x86_64)
      Feb 12 04:26:51.101 [warn] Command-line option 'p4fsi4ockecnea7l.onion' with no value. Failing.
      Feb 12 04:26:51.101 [err] Reading config failed--see warnings above.

    Or to add it to the torrc file and reload Irssi:

      ~/.irssi$ cat /etc/tor/torrc | grep 10.40.40
      mapaddress 10.40.40.40 p4fsi4ockecnea7l.onion

    This is a paste from within Irssi after running torify irssi:

      [04:33] Math::BigInt: couldn't load specified math lib(s), fallback to Math::BigInt::FastCalc at /usr/local/share/perl/5.10.1/Crypt/DH.pm line 6
      [04:33]
      [04:33] *** Irssi: Loaded script cap_sasl

    So I thought it was a CPAN module issue:

      cpan[1]> install Math::BigInt

    This was also done for FastCalc and retried with force install. What am I doing wrong? Thanks
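
    For reference, the error in the first paste is Tor treating the .onion address as a second, valueless option; a minimal sketch of the two usual fixes (quoting on the command line, or keeping the mapping in torrc and reloading Tor rather than Irssi) - the exact init script path is an assumption for Ubuntu-like systems:

      # Option 1: quote the pair so Tor sees MapAddress with a single value
      tor MapAddress "10.40.40.40 p4fsi4ockecnea7l.onion"

      # Option 2: keep the line in /etc/tor/torrc ...
      #   MapAddress 10.40.40.40 p4fsi4ockecnea7l.onion
      # ... then reload the Tor daemon (not Irssi) so it re-reads torrc
      sudo /etc/init.d/tor reload

      # With the mapping active, start the torified client as before
      torify irssi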

    Read the article

  • Install of mod_proxy to get ProxyPass to work

    - by Lance Roberts
    I've been trying to follow these instructions so that I could get the Citadel mail server to work alongside Apache, but I get an error when I try to restart Apache: Invalid command 'ProxyPass', ... The Apache docs say this directive comes from the mod_proxy module, but apt-get install mod_proxy gives E: Couldn't find package mod_proxy, and I was unable to find it in the big module list on the Apache site. What do I need to do to get ProxyPass working on Ubuntu 10.04 LTS?
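
    On Ubuntu, mod_proxy ships inside the apache2 package rather than as a separate package, so the usual fix is enabling the modules rather than installing anything new; a minimal sketch (the backend port for Citadel's WebCit is an assumption to check against your install):

      # Enable the proxy modules that provide ProxyPass / ProxyPassReverse
      sudo a2enmod proxy proxy_http
      sudo /etc/init.d/apache2 restart

      # Then a directive like this in the relevant vhost should be accepted:
      #   ProxyPass /webcit http://127.0.0.1:2000/
      #   ProxyPassReverse /webcit http://127.0.0.1:2000/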

    Read the article

  • Does apt-cacher Change Packages' `Access Time`?

    - by tAmir Naghizadeh
    I tried to remove long-unused packages from the apt-cacher archive using find:

      1. $ find /var/cache/apt-cacher -atime +5 -type f -name ".*deb*" | wc -l
         8471
      2. $ find /var/cache/apt-cacher -atime +9 -type f -name ".*deb*" | wc -l
         2269
      3. $ find /var/cache/apt-cacher -atime +10 -type f -name ".*deb*" | wc -l
         0

    Can I depend on the access time to judge apt-cacher archive usage? That is, does the access time change only when a package gets delivered to a user? (We have been running apt-cacher for more than 6 months.)
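
    If the access times do turn out to be reliable, the same find invocation can do the pruning directly; a sketch, where the 30-day threshold and the file pattern are assumptions to adjust:

      # Preview what would be removed
      find /var/cache/apt-cacher -atime +30 -type f -name "*.deb" -print

      # Then remove the long-unused cached packages
      find /var/cache/apt-cacher -atime +30 -type f -name "*.deb" -delete

    Note that a filesystem mounted with noatime never updates access times at all, which would make this approach unreliable, so the mount options are worth checking first.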

    Read the article

  • proxychained browser unable to open files

    - by Cocoro Cara
    Ubuntu 10.10, Midori 0.3.2 browser (same problem with Epiphany 2.30.2 and Chrome 11.0.686.1 dev; haven't tried with Firefox yet). Proxychains-3.1 is installed and working fine. Here is the deal: when NOT proxychained, Midori or Epiphany can download and open a file (e.g. PDFs from a Google search) in Evince without problem. But when proxychained, neither browser can open PDF files. The message is "file xxx downloaded". Then it tries to open the file, an indication appears in the status bar, the turning wheel appears, and then nothing. Evince doesn't open. What's going on here? In both cases (with and without proxychains) files are downloaded to the /tmp folder. They have the same file permissions and ownership. What's different when proxychained? Why can't the files be opened, and why are there no error messages or notifications? Why the silent failure? I don't want to use FF or Chrome. Chrome does not follow my GTK2 customizations and FF is just too resource heavy. Please help.

    Read the article

  • Can't run TOR from terminal

    - by Thi G.
    So... I can't run Tor from my terminal. I have tried many different things, but I couldn't make it work. Once, it wouldn't stop running when I wanted it to. On another attempt I also ended up failing because when it stopped running I couldn't connect to the internet anymore. I hope you can help me here, guys. To be more specific, what I mean by "can't run from terminal" is that I can't hide my IP if I'm installing a program from the terminal, for instance. Or if I'm running another program that makes a connection to the internet, my IP isn't being hidden. What I want is to make Tor work for all my programs, so my IP would be hidden on any connection to the internet.
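
    For context, Tor only anonymises programs that are explicitly pointed at its SOCKS port, so terminal programs normally have to be wrapped one by one; a minimal sketch using torify/torsocks (the URLs are just examples):

      # Start the Tor daemon in the background (the init script also stops it cleanly)
      sudo /etc/init.d/tor start

      # Wrap an individual command so its traffic goes through Tor's SOCKS proxy
      torsocks curl https://check.torproject.org/
      torify wget -qO- http://example.com/

    Making every program on the machine use Tor transparently is a different setup (iptables redirection to Tor's TransPort) and is considerably more involved than wrapping single commands.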

    Read the article

  • What You Said: How Do You Browse Securely Away From Home?

    - by Jason Fitzpatrick
    Responses to this week's Ask the Reader question show that just because you're away from home doesn't mean you have to give up the security and privacy that your home network provides. Earlier this week we asked you to share your away-from-home browsing security tips and tricks, and you obliged. JC offered one of the more entertaining tales of away-from-home browsing: Recently a bunch of us stayed at a high end resort down in Mexico. Internet was offered as a pay-per-device service at about $80/week/device. Considering we had about 12 wifi devices there among us (a few geeks), I decided to plan ahead. I set up a WRT54G as a WiFi client with a VPN back to my house and NAT, set up a second one as a basic wireless access point with a password, and plugged it into the first. Onsite we set up the devices and connected to the wireless with one paid account (tied to the MAC address). Everyone connected to the other device for wireless access and it was all tunnelled through my home network with encryption.

    Read the article

  • Cloud proxying service

    - by ChristopherJ
    I have an app that mashes up images from Bing image search; it's hosted on Heroku and written in Rails. The app is client-side JavaScript, so the mashup is done on an HTML5 canvas - this means, though, that if I fetch the images directly from the Bing server, the canvas gets tainted and I can't save it. As a quick workaround, I have set up a route on my Rails app that simply proxies the request to Bing and passes the result back through. Obviously this performs poorly and will eat up my dynos very quickly. Can anyone suggest a more suitable option? At the moment I'm thinking maybe Amazon EC2 with Apache mod_rewrite rules would perform better and be more cost effective. Is there a cloud service (or an app I could deploy to a cloud service) that would be more appropriate for proxying these requests, so that my JavaScript can fetch the images without tainting the canvas?

    Read the article

  • Problem with Json Date format when calling cross-domain proxy

    - by Christo Fur
    I am using a proxy service to allow my client-side JavaScript to talk to a service on another domain. The proxy is a simple ashx file which simply gets the request and forwards it on to the service on the other domain:

      using (var sr = new System.IO.StreamReader(context.Request.InputStream))
      {
          requestData = sr.ReadToEnd();
      }
      string data = HttpUtility.UrlDecode(requestData);
      using (var client = new WebClient())
      {
          client.BaseAddress = serviceUrl;
          client.Headers.Add("Content-Type", "application/json");
          response = client.UploadString(new Uri(webserviceUrl), data);
      }

    The client JavaScript calling this proxy looks like this:

      function TestMethod() {
          $.ajax({
              type: "POST",
              url: "/custommodules/configuratorproxyservice.ashx?m=TestMethod",
              contentType: "application/json; charset=utf-8",
              data: JSON.parse('{"testObj":{"Name":"jo","Ref":"jones","LastModified":"\/Date(-62135596800000+0000)\/"}}'),
              dataType: "json",
              success: AjaxSucceeded,
              error: AjaxFailed
          });

          function AjaxSucceeded(result) { alert(result); }
          function AjaxFailed(result) { alert(result.status + ' - ' + result.statusText); }
      }

    This works fine until I have to pass a date, at which point I get a Bad Request error when the proxy tries to call the service. I did have this working at one point but have now lost it. I have tried using JSON.parse on the object before sending, and JSON.stringify, but no joy. Anyone got any ideas what I am missing?
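
    One thing worth checking, offered as a sketch of a likely culprit rather than a confirmed fix: when $.ajax is handed an object (the result of JSON.parse), jQuery serialises it as form-encoded key/value pairs, so the body that reaches the ashx proxy is no longer JSON at all, and the \/Date(...)\/ value is the first thing that stops parsing on the far side. Passing the JSON as a plain string keeps the body intact (the doubled backslashes preserve the \/Date escaping on the wire):

      $.ajax({
          type: "POST",
          url: "/custommodules/configuratorproxyservice.ashx?m=TestMethod",
          contentType: "application/json; charset=utf-8",
          // send the raw JSON string; an object here would be form-encoded by jQuery
          data: '{"testObj":{"Name":"jo","Ref":"jones","LastModified":"\\/Date(-62135596800000+0000)\\/"}}',
          dataType: "json",
          success: AjaxSucceeded,
          error: AjaxFailed
      });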

    Read the article

  • BIND DNS Master with Zerigo Slaves - BIND won't update the slave servers

    - by Anthony
    I've tried to resolve this myself and have looked through Google and Stack but haven't found the answer I'm looking for. Currently, on a VPS server, I have BIND installed as the MASTER DNS server. I use Zerigo's DNS service as the SLAVE servers for public use; the master doesn't receive queries - its job is simply to create and modify DNS entries locally, which the slaves then serve. Here is an excerpt of the BIND log (I set it to INFO event logging):

      14-Apr-2012 23:00:00.234 general: info: received control channel command 'reload'
      14-Apr-2012 23:00:00.234 general: info: loading configuration from 'C:\DNS\BIND\etc\named.conf'
      14-Apr-2012 23:00:00.234 general: info: using default UDP/IPv4 port range: [1024, 65535]
      14-Apr-2012 23:00:00.234 general: info: using default UDP/IPv6 port range: [1024, 65535]
      14-Apr-2012 23:00:00.250 general: info: reloading configuration succeeded
      14-Apr-2012 23:00:00.250 general: info: reloading zones succeeded
      14-Apr-2012 23:16:22.750 xfer-out: info: client 174.36.24.251#47135: transfer of 'ajmakeup.com/IN': AXFR started
      14-Apr-2012 23:16:22.750 xfer-out: info: client 174.36.24.251#47135: transfer of 'ajmakeup.com/IN': AXFR ended
      14-Apr-2012 23:16:23.015 xfer-out: info: client 68.71.141.22#36212: transfer of 'ajmakeup.com/IN': AXFR started
      14-Apr-2012 23:16:23.031 xfer-out: info: client 68.71.141.22#36212: transfer of 'ajmakeup.com/IN': AXFR ended

    As you can see, there is no problem with Zerigo's DNS servers requesting new DNS data - when I force a reload, that is; I don't believe, given the way they are set up as slaves, that they poll for changes. The problem is the other way around: the MASTER is not updating the SLAVE servers when a reload is run (on the master); it is a batch job on a 15-minute timer. Below is my NAMED.CONF:

      key "rndc-key" {
          algorithm hmac-md5;
          secret "REMOVED FOR SECURITY";
      };

      acl "trusted" {
          174.36.24.251/32;
          68.71.141.22/32;
          localhost;
      };

      options {
          version "not currently available";
          directory "C:\DNS\BIND\etc";
          allow-query { trusted; };
      };

      controls {
          inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; };
      };

      logging {
          channel simple_log {
              file "C:\DNS\BIND\logging\bind.log" versions 3 size 5m;
              severity info;
              print-time yes;
              print-severity yes;
              print-category yes;
          };
          category default { simple_log; };
      };

      zone "ajmakeup.com" in {
          type master;
          file "c:\dns\BIND\zones\db.ajmakeup.com.txt";
          allow-transfer { 174.36.24.251; 68.71.141.22; };
          allow-update { none; };
      };

    Does my problem have something to do with 'allow-query' under options? You will notice that 'allow-transfer' is set explicitly on each DNS zone. In case you need it, here is my RNDC.CONF:

      key "rndc-key" {
          algorithm hmac-md5;
          secret "REMOVED FOR SECURITY";
      };

      options {
          default-key "rndc-key";
          default-server 127.0.0.1;
          default-port 953;
      };

      server localhost {
          key "rndc-key";
      };

    Note: I am using WebsitePanel as my hosting panel, which is why it creates the zone entries the way it does. I know I can change this behaviour, but I do not wish to do so, nor do I believe it is the root of the problem. Thanks for your help.
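
    Not an answer to the allow-query question, but for reference, the two knobs that usually govern whether a BIND master pushes changes to its slaves are NOTIFY and the zone serial; a hedged sketch of what the zone stanza could look like (the slave IPs are the ones already in the post, everything else is an assumption to verify):

      zone "ajmakeup.com" in {
          type master;
          file "c:\dns\BIND\zones\db.ajmakeup.com.txt";
          notify yes;                                    // send NOTIFY messages after a reload
          also-notify { 174.36.24.251; 68.71.141.22; };  // notify the slaves even if they are not in the NS records
          allow-transfer { 174.36.24.251; 68.71.141.22; };
          allow-update { none; };
      };

    NOTIFY only prompts the slaves to check the zone; they will still refuse to transfer unless the SOA serial has increased, so it is also worth confirming that WebsitePanel bumps the serial on every change.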

    Read the article

  • Why is -[UISwitch superlayer] being called?

    - by Jonathan Sterling
    I've got sort of a crazy thing going on where I have an NSProxy subclass standing in for a UISwitch. Messages sent to the proxy are forwarded to the switch. Please don't comment on whether or not this is a good design, because in context, it makes sense as an incredibly cool thing. The dealio is that when I try to add this object as an accessory view to a UITableViewCell, I get the following crash: Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[UISwitch superlayer]: unrecognized selector sent to instance 0x5889740' Yes, I could just set the proxy's target as the accessory view, but then I would have to keep track of the proxy so that I could release it at the right time. So, what I really want is to be able to have the proxy retained by the table cell, and released when it is removed from the view just like a normal accessory view. So, why is -[UISwitch superlayer] (a method which does not exist) being called, and how do I save the world?
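
    For reference only - this is the standard NSProxy forwarding pair such a stand-in usually implements, not a fix for the superlayer crash, and the target ivar name is an assumption:

      // Inside the NSProxy subclass, forwarding everything to the wrapped UISwitch.
      - (NSMethodSignature *)methodSignatureForSelector:(SEL)selector {
          return [_target methodSignatureForSelector:selector];
      }

      - (void)forwardInvocation:(NSInvocation *)invocation {
          [invocation invokeWithTarget:_target];
      }

    The crash itself shows UIKit sending a message (superlayer) that ends up at the UISwitch, which does not implement it - an inherent limit of standing a proxy in where a concrete UIView is expected.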

    Read the article

  • Huge performance difference between two web servers, odd behavior seen using process monitor

    - by Francis Gagnon
    We have two ColdFusion servers with a huge performance difference running the exact same code on the exact same input data. The code in question instantiates a large number of CFCs (ColdFusion Components, which are similar to objects in OOP languages). I compared the two servers by running Process Monitor and then calling the problematic code on both machines, and I learned two things. First, ColdFusion opens CFC files every time it instantiates an object. Both servers do this, so it cannot be the cause of the performance difference. Second, the fast server opens the CFC files directly, while the server with the performance problem seems to navigate its way through the path until it reaches the desired CFC file. It does this for every file, even ones it has previously loaded, and because the code instantiates so many CFCs it becomes very slow. See below the partial Procmon traces that show this behavior. It can take over 60 seconds for the slow server to do what the fast one does in 2 seconds. Can anyone tell me what causes this behavior? Is it a ColdFusion setting? Since ColdFusion runs on top of Java, is it a Java setting? Is it an OS option? The fast server is running Windows XP and I think the slow server is Windows Server 2003. Bonus question: ColdFusion doesn't seem to perform any READ FILE operations on any of the CFC or CFM files. How can this be?

    Sample of the fast server opening CFC files:

      11:25:14.5588975 jrun.exe QueryOpen C:\CF\wwwroot\APP\com\HtmlUtils.cfc
      11:25:14.5592758 jrun.exe CreateFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc
      11:25:14.5595024 jrun.exe QueryBasicInformationFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc
      11:25:14.5595940 jrun.exe CloseFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc
      11:25:14.5599628 jrun.exe CreateFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc
      11:25:14.5601600 jrun.exe QueryBasicInformationFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc
      11:25:14.5602463 jrun.exe CloseFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc

    Equivalent sample of the slow server opening CFC files:

      11:15:08.1249230 jrun.exe CreateFile D:\
      11:15:08.1250100 jrun.exe QueryDirectory D:\org
      11:15:08.1252852 jrun.exe CloseFile D:\
      11:15:08.1259670 jrun.exe CreateFile D:\org
      11:15:08.1260319 jrun.exe QueryDirectory D:\org\cli
      11:15:08.1260769 jrun.exe CloseFile D:\org
      11:15:08.1269451 jrun.exe CreateFile D:\org\cli
      11:15:08.1270613 jrun.exe QueryDirectory D:\org\cli\cpn
      11:15:08.1271140 jrun.exe CloseFile D:\org\cli
      11:15:08.1279312 jrun.exe CreateFile D:\org\cli\cpn
      11:15:08.1280086 jrun.exe QueryDirectory D:\org\cli\cpn\APP
      11:15:08.1280789 jrun.exe CloseFile D:\org\cli\cpn
      11:15:08.1291034 jrun.exe CreateFile D:\org\cli\cpn\APP
      11:15:08.1291709 jrun.exe QueryDirectory D:\org\cli\cpn\APP\com
      11:15:08.1292224 jrun.exe CloseFile D:\org\cli\cpn\APP
      11:15:08.1300568 jrun.exe CreateFile D:\org\cli\cpn\APP\com
      11:15:08.1301321 jrun.exe QueryDirectory D:\org\cli\cpn\APP\com\HtmlUtils.cfc
      11:15:08.1301843 jrun.exe CloseFile D:\org\cli\cpn\APP\com
      11:15:08.1312049 jrun.exe CreateFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc
      11:15:08.1314409 jrun.exe QueryBasicInformationFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc
      11:15:08.1314633 jrun.exe CloseFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc
      11:15:08.1315881 jrun.exe CreateFile D:\
      11:15:08.1316379 jrun.exe QueryDirectory D:\org
      11:15:08.1316926 jrun.exe CloseFile D:\
      11:15:08.1330951 jrun.exe CreateFile D:\org
      11:15:08.1338656 jrun.exe QueryDirectory D:\org\cli
      11:15:08.1339118 jrun.exe CloseFile D:\org
      11:15:08.1526468 jrun.exe CreateFile D:\org\cli
      11:15:08.1527295 jrun.exe QueryDirectory D:\org\cli\cpn
      11:15:08.1527989 jrun.exe CloseFile D:\org\cli
      11:15:08.1531977 jrun.exe CreateFile D:\org\cli\cpn
      11:15:08.1532589 jrun.exe QueryDirectory D:\org\cli\cpn\APP
      11:15:08.1533575 jrun.exe CloseFile D:\org\cli\cpn
      11:15:08.1538457 jrun.exe CreateFile D:\org\cli\cpn\APP
      11:15:08.1539083 jrun.exe QueryDirectory D:\org\cli\cpn\APP\com
      11:15:08.1539553 jrun.exe CloseFile D:\org\cli\cpn\APP
      11:15:08.1544126 jrun.exe CreateFile D:\org\cli\cpn\APP\com
      11:15:08.1544980 jrun.exe QueryDirectory D:\org\cli\cpn\APP\com\HtmlUtils.cfc
      11:15:08.1545482 jrun.exe CloseFile D:\org\cli\cpn\APP\com
      11:15:08.1551034 jrun.exe CreateFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc
      11:15:08.1552878 jrun.exe QueryBasicInformationFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc
      11:15:08.1553044 jrun.exe CloseFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc

    Thanks

    Read the article

  • DNS caching for sockets

    - by Poma
    I'm connecting to some websites through a SOCKS proxy server. In my case it would be very useful to implement a DNS cache, so the proxy doesn't need to resolve each website's IP address. So I performed the DNS lookup myself, but I don't know where to supply the resolved IP address. mySocket.Connect uses the proxy's IP address, so that isn't the right place. I tried to place it in the request line - GET http://11.22.33.44/index.html HTTP/1.1 - but this doesn't work (even in a browser), since the website is on virtual hosting. It seems that the Host header is the right place for the resolved IP address. Am I right? Will the proxy resolve the host name (since it's still there in the GET line) or not?

    Read the article

  • Trying to install mod_proxy in Apache-Httpd-2.2.15

    - by Dspace
    Hello, I have spent the afternoon trying to install the mod_proxy module into Apache. I have tried:

      ./configure --prefix=/opt/apache2 --enable-proxy --enable-proxy-http
      ./configure --prefix=/opt/apache2 --enable-module=proxy

    After the install finishes, navigating to /opt/apache2/modules shows only one file: httpd.exp. It seems that the module is not being installed. Any help is appreciated. Thanks.
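
    For what it's worth, a statically compiled module would not appear in the modules directory at all; a minimal sketch of a shared-module build to try against a fresh source tree (the =shared suffix is the key difference):

      ./configure --prefix=/opt/apache2 --enable-proxy=shared --enable-proxy-http=shared
      make
      make install

      # A successful shared build should leave these behind:
      ls /opt/apache2/modules/mod_proxy.so /opt/apache2/modules/mod_proxy_http.so

      # And httpd.conf then needs the matching LoadModule lines:
      #   LoadModule proxy_module modules/mod_proxy.so
      #   LoadModule proxy_http_module modules/mod_proxy_http.so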

    Read the article

  • Why "Content-Length: 0" in POST requests?

    - by stesch
    A customer sometimes sends POST requests with Content-Length: 0 when submitting a form (10 to over 40 fields). We tested it with different browsers and from different locations but couldn't reproduce the error. The customer is using Internet Explorer 7 and a proxy. We asked them to have their system administrator look into the problem from their side - running some tests without the proxy, and so on. In the meantime (half a year later and still no answer) I'm curious whether somebody else knows of similar problems with a Content-Length: 0 request, maybe from inside some Windows network with the kind of special proxy big companies use. Is there a known problem with Internet Explorer 7? With a proxy system? The Windows network itself? Google only showed something in the context of NTLM (and similar) authentication, but we aren't using that in the web application. Maybe it's in the way the proxy operates in the customer's network with Windows logins? (I'm no Windows expert, just guessing.) I have no further information about the infrastructure.

    Read the article

  • Antivirus Configuration for dedicated SQL and dedicated IIS Servers

    - by Wayne Arthurton
    Our corporate standard is McAfee Enterprise; unfortunately this is non-negotiable. On the two types of servers I'm responsible for, SQL & Web, we have noticed major performance issues with the corporate standard setup:

      Max scan time 45 sec
      One policy for all processes
      Scan ALL files on write, read and open for backup
      Heuristics: find unknown programs, trojans and macros; detect unwanted programs
      Exclude: EVT, LDF, LOG, MDF, VMD, windows file protection

    This of course still causes major slowdowns. IIS .NET recompiles are slow, especially with SharePoint; so are SQL backups and restores, SQL Analysis Services, Integration Services, and the temp data they produce. I have looked from time to time for best practices on setting up McAfee for SQL, SQL Analysis Services, SQL Integration Services, Visual Studio, SharePoint, and .NET web servers in general. How do people set up McAfee Enterprise on their corporate servers, keeping security intact but affecting performance as little as possible? Has anyone run across white papers on these setups? Obviously some of it is case by case, but there must be some best practices out there somewhere.

    Read the article

  • Issues resolving DNS entries for multi-homed servers

    - by I.T. Support
    This is difficult to explain, so bear with me. We have 2 domain controllers, each multi-homed to straddle 2 internal subnets (subnet A and subnet B) and provide DNS, DHCP, and LDAP authentication. Each domain controller has 2 DNS entries; both entries have identical host names but correspond to subnet A and subnet B respectively (example entries shown):

      dc1 host 192.168.8.1
      dc1 host 192.168.9.1
      dc2 host 192.168.8.2
      dc2 host 192.168.9.2

    We also have a 3rd subnet for our DMZ (subnet C), which neither domain controller has an IP address on; our firewall/routing tables provide access between subnet A and subnet C, but don't allow access to subnet B from subnet C. Here's my issue: how can I force or determine which DNS entry is used when a server on subnet C queries either domain controller by host name? Right now it seems to randomly pick one of the two entries, swap the name for that IP address, and that's that. The problem is that if it randomly selects the entry that corresponds to the 9.x subnet B (no access from subnet C), the server fails to resolve. If it picks the entry for the 8.x subnet A, it resolves (the firewall/routing tables allow communication between these 2 subnets).

    Here's what I'd like to know:

      1. What are the best practices (if any) for dealing with DNS resolution on subnets that the DNS servers don't have a presence on?
      2. Can I control something akin to a metric value to force an order of DNS resolution when there are multiple entries for the same host name that correspond to different IP subnets?
      3. Should I even have 2 DNS HOST entries for the same name?

    Here's what I'd like to avoid:

      - Making edits to the HOSTS files of servers on subnet C to force resolution of the hostname to the appropriate subnet
      - Adding NICs to the DCs to have them straddle the DMZ as well, thus obtaining a third DNS entry that corresponds to subnet C

    Again, my apologies if this was too verbose / unclear. Thanks!

    Read the article

  • Windows Server 2008 R2 DNS - One IP, multiple servers

    - by Blu Dragon
    I need opinions and examples on how best to accomplish the setup I am looking for. I have a public-facing AD domain server with one public IP address. I have set up an external zone for example.com and I successfully have my own name servers pointing to it at ns0.example.com and ns1.example.com. I also have an internal zone for my private network at home.example.com. I am behind a router with the domain server in the DMZ. I want dev.example.com to be accessible from the outside world over HTTPS and to point to internal IP address 192.168.1.78. Likewise, I want www.example.com to be accessible from the outside world and point to internal IP address 192.168.1.79. Both the dev and www servers are CentOS 5.6 VMs running inside Hyper-V on the domain server (a bad idea, I know, but I am limited on hardware at the moment). What is the best way to achieve this? From what I have read and researched on Google, I may need to set up a reverse proxy, but I am not sure how well that will work with SSL.
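
    One common shape for this, sketched here with Apache name-based virtual hosts purely as an illustration - any reverse proxy on the machine that owns the public IP would do, and the HTTPS side additionally needs the certificates terminated on that proxy (a wildcard or SAN certificate if both names must share the single IP):

      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName www.example.com
          ProxyPass        / http://192.168.1.79/
          ProxyPassReverse / http://192.168.1.79/
      </VirtualHost>

      <VirtualHost *:80>
          ServerName dev.example.com
          ProxyPass        / http://192.168.1.78/
          ProxyPassReverse / http://192.168.1.78/
      </VirtualHost>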

    Read the article

  • Authenticating Linked Servers - SQL Server 8 to SQL Server 10

    - by jp2code
    We have an old SQL Server 2000 database that has to be kept because it is needed on our manufacturing machines. It also maintains our employee records, since they are needed on these machines for employee logins. We also have a newer SQL Server 10 database (I think this is 2008, but I'm not sure) that we are using for newer development. I have recently learned (i.e. today) that I can link the two servers, which would allow me to access the employee tables from the newer server. Following the SF post "SQL Server to SQL Server Linked Server Setup", I tried adding the link. On our SQL Server 2000 machine, I got this error: [screenshot] Similarly, on our SQL Server 10 machine, I got this error: [screenshot] The messages, though worded differently, probably say the same thing: I need to authenticate, somehow. We have an Active Directory, but it is on yet another server. What, exactly, should be done here? Another post said to check the Security settings, but did not say what else to do. Both servers are set to SQL Server and Windows Authentication mode. Now what?
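
    As a hedged illustration of where the authentication piece usually lives (the server name, database, table and credentials below are placeholders, and mapping every local login to one remote SQL login is only one of the possible arrangements): once the linked server exists, each side needs a login mapping, which in T-SQL on the newer box could look like this:

      -- Create the linked server entry pointing at the SQL 2000 box
      EXEC sp_addlinkedserver
          @server = N'SQL2000BOX',
          @srvproduct = N'',
          @provider = N'SQLNCLI',
          @datasrc = N'sql2000box.yourdomain.local';

      -- Map local logins to a SQL login that exists on the remote server
      EXEC sp_addlinkedsrvlogin
          @rmtsrvname = N'SQL2000BOX',
          @useself = 'FALSE',
          @locallogin = NULL,          -- NULL = applies to all local logins
          @rmtuser = N'linkeduser',
          @rmtpassword = N'StrongPasswordHere';

      -- Quick test against a placeholder table
      SELECT TOP 1 * FROM SQL2000BOX.EmployeeDB.dbo.Employees;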

    Read the article

  • Optimum configuration of McAfee for Servers

    - by Wayne Arthurton
    Our corporate standard is McAfee Enterprise; unfortunately this is non-negotiable. On the two types of servers I'm responsible for, SQL & Web, we have noticed major performance issues with the corporate standard setup:

      Max scan time 45 sec
      One policy for all processes
      Scan ALL files on write, read and open for backup
      Heuristics: find unknown programs, trojans and macros; detect unwanted programs
      Exclude: EVT, LDF, LOG, MDF, VMD, windows file protection

    This of course still causes major slowdowns. IIS .NET recompiles are slow, especially with SharePoint; so are SQL backups and restores, SQL Analysis Services, Integration Services, and the temp data they produce. I have looked from time to time for best practices on setting up McAfee for SQL, SQL Analysis Services, SQL Integration Services, Visual Studio, SharePoint, and .NET web servers in general. How do people set up McAfee Enterprise on their corporate servers, keeping security intact but affecting performance as little as possible? Has anyone run across white papers on these setups? Obviously some of it is case by case, but there must be some best practices out there somewhere.

    Read the article

  • DNS zone file SPF configuration to support sending mail from multiple servers and gmail

    - by Tauren
    I want to configure SPF on a domain to allow mail to be sent from:

      - the x.com website server (x.com and www.x.com - both at the same IP)
      - its MX servers (smtp.x.com, mx.x.com, mail.x.com)
      - another server that isn't listed as an MX server (somehost.x.com)
      - via gmail, using an account that has authenticated use of [email protected]

    Will this zone file work? If not, what are the problems with it?

      $ttl 38400
      @   IN  SOA ns1.x.com. hostmaster.x.com. (
              201003092  ; serial
              8H         ; refresh
              15M        ; retry
              1W         ; expire
              1H )       ; minimum
      @   NS  ns1.x.com.
      @   NS  ns2.x.com.
      @   MX  10 mx.x.com.
      @   MX  20 smtp.x.com.
      @   MX  30 mailhost.x.com.
      ; SPF records
      @        IN TXT "v=spf1 a mx a:somehost.x.com include:_spf.google.com ~all"
      mx       IN TXT "v=spf1 a -all"
      smtp     IN TXT "v=spf1 a -all"
      mailhost IN TXT "v=spf1 a -all"

    Questions:

      1. Is _spf.google.com the right thing to include for gmail.com, or is it only for Google Hosted Apps? If it is only for Google Apps, what should I include to send from gmail.com?
      2. If mail shouldn't be sent from anywhere else, is it safe to use -all instead of ~all?
      3. Does it make sense to add specific SPF records for each of the mail servers?
      4. Any other problems with the zone file?

    I want to confirm these things before making changes to my zone file. The file has SPF configured basically the same way now, just without google.com and somehost, but I want to make sure I won't break things when I change it.

    Read the article

  • GMail and Yahoo Mail servers not accepting mail from my Slicehost slice

    - by Lakshmanan
    Hi, I have a Rails app in one of the slices at Slicehost. I've set up Postfix (sendmail) to send emails from my Rails app. All emails to our Google Apps domain (the company's Google-hosted paid email accounts) are getting delivered properly (but to the spam folder). But all emails to [email protected], [email protected], .. @hotmail.com are not getting delivered, and this is the relevant line from my /var/log/mail.log:

      Dec 21 17:33:56 staging postfix/smtp[32295]: 5EB4810545B: to=<[email protected]>, relay=j.mx.mail.yahoo.com[66.94.237.64]:25, delay=1.6, delays=0.02/0.01/1.5/0, dsn=4.0.0, status=deferred (host j.mx.mail.yahoo.com[66.94.237.64] refused to talk to me: 553 Mail from 173.203.201.186 not allowed - 5.7.1 [BL21] Connections not accepted from IP addresses on Spamhaus PBL; see http://postmaster.yahoo.com/errors/550-bl21.html [550])

    and this is what I got for Gmail:

      Dec 21 17:29:17 staging postfix/smtp[32216]: 0FA3310545B: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[74.125.65.27]:25, delay=0.59, delays=0.02/0.01/0.09/0.47, dsn=5.7.1, status=bounced (host gmail-smtp-in.l.google.com[74.125.65.27] said: 550-5.7.1 [173.203.201.186] The IP you're using to send mail is not authorized 550-5.7.1 to send email directly to our servers. Please use the SMTP relay at 550-5.7.1 your service provider instead. Learn more at 550 5.7.1 http://mail.google.com/support/bin/answer.py?answer=10336 v49si11176750yhc.16 (in reply to end of DATA command))

    Please help. I have very little knowledge about setting up DNS, servers and such.
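
    Both rejections are about the sending IP itself (a Spamhaus PBL listing, and "not authorized to send directly"), and the usual ways out are either fixing the IP's reputation (reverse DNS, PBL removal, SPF) or relaying outbound mail through an authenticated smarthost; a minimal sketch of the relay approach in Postfix's main.cf, with the relay hostname and credentials file as placeholders:

      # /etc/postfix/main.cf -- route all outbound mail via an authenticated relay
      relayhost = [smtp.yourprovider.example]:587
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sasl_security_options = noanonymous
      smtp_use_tls = yes

    The /etc/postfix/sasl_passwd file would hold a line like "[smtp.yourprovider.example]:587 user:password", followed by running postmap /etc/postfix/sasl_passwd and reloading Postfix.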

    Read the article

  • Load balancing a Windows File Share using HA-Proxy

    - by NathanE
    After pulling my hair out over DFS, I just had this weird and potentially dangerous idea come into my head whereby, just possibly, I might be able to use HAProxy to load balance a file share between servers. I've done some remedial packet traces and it does appear that TCP port 445 is the only thing involved in using Windows file sharing. I had always thought that UDP 139, 135 etc. were also involved, at least in establishing the connection - but apparently not! So I set up a basic test:

      listen SMBTest *:445
          mode tcp
          server Smb1 172.16.61.201:445
          server Smb2 172.16.61.202:445

    And you'll never guess what... it works??? (!) Now obviously there is the whole concern about synchronisation between the file servers (of course). That could easily be taken care of with a little bit of Robocopy scripting. And considering I only need an HA read-only file share, there wouldn't be any issues with regard to file locking etc. Can anyone tell me if what I'm playing with here is fire? I really didn't think it would work at all and now I'm a little shocked. What would be the downsides? Could this be relied upon for a production environment?
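
    For the synchronisation half, the "little bit of Robocopy script" could be as small as a scheduled one-liner; a sketch, where the share paths and log location are placeholders:

      robocopy \\Smb1\share \\Smb2\share /MIR /R:2 /W:5 /LOG+:C:\logs\share-sync.log

    /MIR mirrors deletions as well as changes, so it should only ever run in one direction, from whichever server is treated as the source of truth.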

    Read the article

  • Provisioning Dell servers using only Linux tools

    - by Krist van Besien
    We have taken delivery of a bunch of Dell PowerEdge rack servers. Unfortunately, when ordering, we neglected to ask for DHCP to be enabled in the built-in iDRAC controllers... so they are all stuck on the same IP address, which means I'll have to go to each of them individually and configure a new IP in the console. In the future I want to avoid that. Now Dell proposes to deliver the next batch with auto-discovery enabled. As I understand it, this means that when the machine wakes up for the first time the iDRAC will request a DHCP address. The DHCP server then supposedly also points it at a "provisioning" server, which provides it with a username and password and a configuration to be applied. This would allow us, for example, to configure things like RAID automatically. However, I can't seem to find a way to set up such a provisioning server that does not involve setting up a Windows machine. I want to use Linux tools exclusively. Is there a way to do this? I want to just rack servers, switch them on, and then do everything remotely - all of it using only Linux tools.
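
    For the batch that is already racked, one Linux-only workaround (sketched here with ipmitool, assuming the iDRACs answer IPMI and still use the usual factory credentials - both assumptions to verify) is to flip each controller over to DHCP from the installed host OS instead of from the console:

      # From the OS on each box, via the local IPMI interface:
      ipmitool lan set 1 ipsrc dhcp
      ipmitool lan print 1        # confirm the new address source

      # Or, if a controller can be isolated on the network first, remotely over IPMI:
      ipmitool -I lanplus -H 192.168.0.120 -U root -P calvin lan set 1 ipsrc dhcp

    Dell's racadm tool (also available for Linux) can do the same and more, but ipmitool has the advantage of already being in most distributions' repositories.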

    Read the article

  • There are currently no logon servers available

    - by Ian Robinson
    I am running a Windows 7 laptop that is joined to my company's domain. When I installed Windows 7, I created an account for myself, joined it to the domain, and it had been working quite well even though I'm physically remote most of the time and not actually on the network. However, today I created a new local user account (non-admin) for my little brother. While he was using it, he decided he wanted to install a program; because his account is not an admin, he was prompted to enter Administrator credentials to allow the program to make changes to his computer. I entered my credentials, and that is the first time I ran into the error message: There are currently no logon servers available to service the logon request. I tried logging off and logging back in, rebooting, etc., and no matter what, every time I try to authenticate with my "normal" domain account I get that message. I can no longer access my computer as an administrator, and I no longer know how to log in to my machine with any account aside from my little brother's non-admin one. I don't have any other local accounts created, and the default local admin account was never enabled. I'd appreciate any ideas on how I can recover access to my account. Let me know if I can provide any more information. FYI - this is a similar question, but I'm not sure any of the answers help in my case: http://serverfault.com/questions/71632/there-are-currently-no-logon-servers-available-to-service-the-logon-request

    Read the article
