Search Results

Search found 15651 results on 627 pages for 'setup'.

Page 160/627 | < Previous Page | 156 157 158 159 160 161 162 163 164 165 166 167  | Next Page >

  • Ruby Passenger + Nginx or lighttpd + FastCGI for shared hosting

    - by devnull
    I have set up a Passenger + Nginx configuration and I plan to offer free non-commercial hosting (or, in fact, on-the-fly deployment) for Rack-based frameworks (e.g. Camping, Sinatra). I am facing an "issue" with Passenger: for each application you need to configure nginx.conf (it would be the same with Apache, so it is not an Nginx issue) with:

        server {
            ...
            passenger_base_uri /app1;
            passenger_base_uri /app2;
            passenger_base_uri /app3;
        }

    Now this is not inherently bad, as in theory I could allow a user to run just one app on his webspace, but even in this case I need to create a new server entry in Nginx (e.g. user.domain.com). As this will mainly be used to deploy apps, the behavior I am looking for is the ability to auto-map several apps (e.g. app1, app2, app3, app4) under the same server (yourapp.com/app1, yourapp.com/app2) without having to update the Nginx or Apache file each time. This seems to be a limitation in Passenger. As such I am thinking about an alternative with lighttpd and FastCGI. Would that allow immediate deployment without touching the lighttpd config file, e.g. I create a new directory with app2 and it runs immediately? What is your experience of the performance difference between Passenger + Nginx vs. lighttpd + FastCGI? Thanks in advance.

    Scenario details:
    - On Nginx + Passenger, a user cannot add a new sub-folder and run another Sinatra/Camping app without declaring the path in nginx.conf and restarting the server.
    - Wished-for behavior with the new setup: a user can add a new folder with a new app and it runs on lighttpd + FastCGI without any extra configuration of the web server.

    Read the article

  • What do I need to do to set my computer as Default Gateway?

    - by Vaibhav
    We are trying to put together a box with dual LAN cards (let's say Outer and Inner), where the Inner LAN card is supposed to act as the default gateway on the network it is connected to. This box is running Ubuntu.

    The basic purpose of this box is to take messages generated on the inner network, do some work with them and forward them out of the Outer LAN card to a server. The inner network is completely isolated, with just a regular switch connecting the Inner LAN card to two other boxes. These other boxes either send out multicast messages (which the Inner LAN card is listening to), or send out unicast messages meant for the server, which is not on this inner network. So we need the Inner LAN card to act as the default gateway to which these unicast messages are sent; the code on the dual-LAN-card box can then intercept and forward them to the server.

    Questions:
    1. How do we set up the LAN card to be the default gateway (does it need some configuration on Ubuntu)?
    2. Once we have this set up, is it a simple matter of listening on the interface to intercept the incoming messages?

    Any help (pointers in the right direction) is appreciated. Thanks.
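    A minimal sketch of the gateway side on the Ubuntu box, assuming the outer NIC is eth0 and the inner NIC is eth1 with address 192.168.10.1 (interface names and addresses are placeholders, not taken from the question):

        # Enable IPv4 forwarding so the box routes between the two NICs
        sudo sysctl -w net.ipv4.ip_forward=1        # persist it in /etc/sysctl.conf

        # NAT traffic leaving via the outer NIC (only needed if the upstream server
        # should see the gateway's outer address as the source)
        sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # The two inner boxes then use the inner NIC as their default route:
        #   ip route add default via 192.168.10.1     (run on the inner boxes)

        # Quick check that the unicast and multicast traffic is arriving:
        sudo tcpdump -i eth1 -n

    Passively sniffing with tcpdump/libpcap answers question 2 for inspection only; if the messages need to be consumed, modified and re-sent, a small userspace proxy bound to eth1 (or an iptables NFQUEUE hook) is usually the cleaner interception point.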

    Read the article

  • IIS 7.5 - WFF and ARR farm management

    - by smackaysmith
    We have two test web farms (IIS 7.5). The Florida web farm has two ARR servers and two content servers. The ARR servers have WFF and NLB installed. The ARR setup uses a shared config located on a file share. The content servers do not have WFF installed. There is one web farm, and it's managed on an ARR server.

    The Illinois web farm also has two ARR servers and two content servers. The ARR servers have WFF and NLB installed, and they use a shared config located on a share. One of the content servers has WFF installed, which makes it the controller; it's also the primary content server.

    Apparently, Illinois isn't properly configured. From what we've pieced together from various IIS.net articles and this post (http://ruslany.net/2010/07/web-farm-framework-2-0-overview/), the controller should be one of the ARR servers (like our Florida setup). The thing is, Florida's controller doesn't have a Primary server, nor can you set one of the content servers as Primary. It doesn't have the management piece showing the Trace messages when you click the Servers node (from the IIS console, Server Farms/FLFarm/Servers; see http://ruslany.net/wp-content/uploads/2010/07/WebFarm8.png). That management piece does exist in the Illinois farm, but that's a bad configuration.

    What are we missing, given that our Florida configuration doesn't have the Primary and Secondary content servers or the management piece? I have looked for IIS role differences, but there are none.

    Read the article

  • ESX 4.0 space: DASD, NAS, or ?

    - by thormj
    I put together an ESX box for better management, but its performance is a WTF item; I'm a noob at dealing with ESX, so I'm looking for a laundry list of reading material to help me straighten this out so I can go back to .NET programming.

    Current storage system: We're running RAID5 + hot spare (8x 500 GB spindles) on a PERC6i in a Dell 2910. Due to ESX limitations, the PERC is showing the storage as 1x 2TB + 1x 800GB "partitions." I'm not sure of the setup's configuration (stride / stripe / ???) at all.

    Our applications: We have an SBS server as well as a minor (2x 50 GB, but growing at 10 GB/month) database server. The application that lives on the database VM is CPU- and I/O-intensive; it's a database churning exercise mixed in with a lot of computation on the data (fixing that performance is what I'm supposed to be working on).

    Performance issue: When I do a backup, restore, or worse (copy a backup from one VM to another to move it to the QA VM), the entire system slows to a crawl (even "unrelated" VMs). I originally thought a DASD situation would be quite good since you have PCI-X bandwidth, but the system-wide slowdown is killing productivity.

    Questions:
    - What should I do to make an intelligent decision about NAS vs RAID vs SAN vs DASD?
    - Are there sweet spots/ugly spots in the storage setup?
    - Can you use an SSD PCI-X card in ESX for the tempdb? Good/bad idea?
    - Is there any way to "share" some image in a copy-on-write fashion? Most of the "backup-copy-restore" is to "put a clean image on the dev boxes"; if I could have them "share" the master image, the "big copy" (2x 50 GB) would only need to be done once per week instead of once per dev per week. (Runtime performance isn't a concern with the dev boxes, but the backup/copy/restore kills production, SBS, and everything else on the box.)

    Read the article

  • Apache configuration with virtual hosts and SSL on a local network

    - by Petah
    I'm trying to set up my local Apache configuration like so:

    - http://localhost/ should serve ~/
    - http://development.somedomain.co.nz/ should serve ~/sites/development.somedomain.co.nz/
    - https://development.assldomain.co.nz/ should serve ~/sites/development.assldomain.co.nz/

    I only want to allow connections from our local network (192.168.1.* range) and myself (127.0.0.1). I have set up my hosts file with:

        127.0.0.1          localhost
        255.255.255.255    broadcasthost
        ::1                localhost
        fe80::1%lo0        localhost
        127.0.0.1          development.somedomain.co.nz
        127.0.0.1          development.assldomain.co.nz
        127.0.0.1          development.anunuseddomain.co.nz

    My Apache configuration looks like:

        Listen 80
        NameVirtualHost *:80

        <VirtualHost development.somedomain.co.nz:80>
            ServerName development.somedomain.co.nz
            DocumentRoot "~/sites/development.somedomain.co.nz"
            DirectoryIndex index.php
            <Directory ~/sites/development.somedomain.co.nz>
                Options Indexes FollowSymLinks ExecCGI Includes
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost localhost:80>
            DocumentRoot "~/"
            ServerName localhost
            <Directory "~/">
                Options Indexes FollowSymLinks ExecCGI Includes
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <IfModule mod_ssl.c>
            Listen *:443
            NameVirtualHost *:443
            AcceptMutex flock

            <VirtualHost development.assldomain.co.nz:443>
                ServerName development.assldomain.co.nz
                DocumentRoot "~/sites/development.assldomain.co.nz"
                DirectoryIndex index.php
                SSLEngine on
                SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
                SSLCertificateFile /Applications/XAMPP/etc/ssl.crt/server.crt
                SSLCertificateKeyFile /Applications/XAMPP/etc/ssl.key/server.key
                BrowserMatch ".*MSIE.*" \
                    nokeepalive ssl-unclean-shutdown \
                    downgrade-1.0 force-response-1.0
                <Directory ~/sites/development.assldomain.co.nz>
                    SSLRequireSSL
                    Options Indexes FollowSymLinks ExecCGI Includes
                    AllowOverride All
                    Order allow,deny
                    Allow from all
                </Directory>
            </VirtualHost>
        </IfModule>

    http://development.somedomain.co.nz/, http://localhost/ and https://development.assldomain.co.nz/ work fine. The problem is that when I request http://development.anunuseddomain.co.nz/ or http://development.assldomain.co.nz/, it responds with the same content as http://development.somedomain.co.nz/. I want it to deny all requests that do not match a virtual host ServerName, and all requests to an HTTPS host that are made over plain HTTP.

    PS: I'm running XAMPP on Mac OS X 10.5.8.
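    One way to get the "deny anything that does not match a ServerName" behaviour is a catch-all default virtual host, since with name-based hosting Apache falls back to the first VirtualHost defined for an address when the Host header matches none of them. A minimal sketch under that assumption (the config path is a placeholder; the file just has to be included before the real vhosts):

        # Hypothetical include file, loaded ahead of the real virtual hosts
        cat > /Applications/XAMPP/etc/extra/httpd-default-deny.conf <<'EOF'
        # First *:80 vhost = default match for unknown Host headers
        <VirtualHost *:80>
            ServerName catchall.invalid
            <Location />
                Order allow,deny
                Deny from all
            </Location>
        </VirtualHost>
        EOF

    An equivalent first vhost on *:443 covers unknown names on the SSL port; plain-HTTP requests for the SSL-only name (http://development.assldomain.co.nz/) also land in the port 80 catch-all, which is the denial being asked for.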

    Read the article

  • Asterisk terminating outbound call when picked up, sends 'BYE' message

    - by vo
    I'm running Asterisk 1.6.1.10 / FreePBX 2.5.2.2 and I've got an outbound trunk set up. Everything used to work fine until recently (perhaps due to the upgrade to FC12 or other things, I'm not sure). Anyway, the setup does not appear to have issues registering and setting up the call; RTP packets go both ways and you can hear the ringing from the other side. However, it appears that when the call is picked up, or thereabouts, the incoming RTP packets cease. Upon closer inspection with Wireshark, these particular packets seem to be the cause:

        trunk->asterisk  SIP/SD Status: 200 OK, with session description
        asterisk->trunk  SIP Request: ACK sip:<phone>@trunk:6889
        asterisk->trunk  SIP Request: BYE sip:<phone>@trunk:6889
        [..about a dozen RTP packets in/outbound..]
        trunk->asterisk  SIP Status: 200 OK, CSeq: 104 Bye
        [..outbound RTP continues, phone is silent..]

    Then the inbound RTP packets cease; however, the Asterisk logs don't show any activity at this point. The last entry reads 'SIP/ is answered SIP/'. Then when you hang up the extension, you get:

        asterisk->trunk  SIP Request: BYE sip:<phone>@trunk:6889
        trunk->asterisk  SIP Status: 481 Call Leg/Transaction does not exist

    My trunk peer settings in FreePBX are:

        username=<user>
        fromuser=<user>
        canreinvite=no
        type=friend
        secret=<pass>
        qualify=no        ; [qualify=yes produces 401/Forbidden messages]
        nat=yes
        insecure=very
        host=<sip trunk gateway>
        fromdomain=<sip trunk gateway>
        disallow=all
        context=from-pstn
        allow=ulaw
        dtmfmode=inband

    Under sip_general_custom.conf I have:

        stunaddr=stun.xten.com
        externrefresh=120
        localnet=192.168.1.1/255.255.255.0
        nat=yes

    What's causing Asterisk to prematurely end the call and still think the call is in progress? I have no idea where to look next.
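    A first diagnostic step (not a fix, just a way to see which side actually generates the early BYE and with what CSeq/reason) is to watch the dialogue from both Asterisk's point of view and on the wire; the interface name below is a placeholder:

        # Asterisk 1.6 CLI: show the raw SIP messages Asterisk sends and receives
        asterisk -rvvv
        #   sip set debug on          (type this at the CLI prompt)

        # Capture the same call for Wireshark, signalling ports only
        tcpdump -i eth0 -n -s 0 -w /tmp/sip-trace.pcap port 5060 or port 6889

    If Asterisk's own debug shows it emitting the BYE right after the 200 OK, the SDP/NAT handling (stunaddr, localnet, nat=yes) is the usual suspect; if the BYE only appears on the wire, something between the box and the trunk is injecting it.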

    Read the article

  • Lync / OCS... problems getting edge server working.

    - by TomTom
    New setup of Lync 2010 (i.e. OCS 2010). I have serious problems getting my edge system going. Internally things work fine; externally I am stuck. I have used the tester at https://www.testocsconnectivity.com/ and it also fails. NOTE: I use the domains xample.com / xample.local here just as examples.

    Here is the setup. I have two internal hosts (lync.xample.local, edge.xample.local). edge.xample.com is also correctly in DNS and points to the externally assigned IP address of edge.xample.local (external interface).

    Externally, I have the following DNS entries:

        edge.xample.com
        _sip._tcp              -> edge.xample.com 443
        _sipfederationtls._tcp -> edge.xample.com 5061
        _sipinternaltls._tcp   -> lync.xample.local 5061
        _sip._tls              -> edge.xample.com 443

    My problem is that the OCS connection test always ends up trying to contact lync.xample.local (i.e. the internal address) when connecting to [email protected]. The error is: "Attempting to Resolve the host name lync.xample.local in DNS." This shows me it clearly manages to connect to SOMETHING, but it either falls through to the _sipinternaltls._tcp entry, OR it gets that internal entry wrongly from the edge system. Am I missing some entries, or are some of them wrong?
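    A quick way to see which SRV record an external client actually resolves (and therefore which host it will chase) is to query the public DNS from outside; the names below follow the placeholder domain used in the question:

        # Records an external Lync/OCS client looks up for sign-in and federation
        dig +short SRV _sip._tls.xample.com
        dig +short SRV _sipfederationtls._tcp.xample.com
        dig +short SRV _sip._tcp.xample.com

        # This one is meant for internal clients only; if the public DNS answers it
        # (pointing at lync.xample.local), external tests will chase the internal host
        dig +short SRV _sipinternaltls._tcp.xample.com

    If the internal-only record is visible in external DNS, removing it from the public zone (and keeping it only in the internal zone) is the obvious first experiment.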

    Read the article

  • Map FTP folder to folder on different FTP server

    - by jolt
    In my team we work a lot with FTP. We upload and download files from several different servers daily. Currently every member of the team manages access credentials for each FTP server locally on their own machine.

    I am looking for a way to set up a central FTP server that we can connect to, and from there navigate to folders that each represent one of the other FTP servers we connect to daily. Something like this:

        In-house central FTP server:
        |- FolderA --> server A root folder
        |- FolderB --> server B root folder
        |- FolderC --> server C root folder

    A setup like this would mean that we can manage access credentials on the central FTP server, and team members would only need the credentials for the central FTP server; from there they could navigate to the other servers through these "virtual" folders.

    We could potentially develop our own custom FTP server that just forwards requests to the remote FTP servers, but I feel like something like this (or something similar) must already have been done. So I'm looking for pointers that could help us find software for Windows that could simplify our current setup. Thank you!

    Similar (unanswered) question here: FTP management server
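    Not the Windows software being asked for, but as a sketch of the "virtual folders" idea on a Linux box: each remote FTP account can be mounted with curlftpfs and re-exported by an ordinary FTP daemon, so only the central credentials ever leave the central server. Hostnames, paths and credentials below are placeholders:

        # Mount the remote servers under the tree the central FTP server exposes
        mkdir -p /srv/ftp/FolderA /srv/ftp/FolderB /srv/ftp/FolderC
        curlftpfs ftp://userA:passA@serverA.example.com /srv/ftp/FolderA -o allow_other
        curlftpfs ftp://userB:passB@serverB.example.com /srv/ftp/FolderB -o allow_other
        curlftpfs ftp://userC:passC@serverC.example.com /srv/ftp/FolderC -o allow_other

        # vsftpd/proftpd serving /srv/ftp then presents them as plain folders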

    Read the article

  • 550 Requested action not taken: mailbox unavailable

    - by Porch
    I set up a small box with Server 2003 64-bit to be used as a web server and email server for a small school. Real simple stuff for a few users: a simple website and a handful of emails. rDNS and SPF records are set up and pass every test I found, including the tests at dnsstuff.com.

    Email sending to almost every email address (Google, Hotmail, AOL, whatever) works. However, with one domain I get a bounce back with the error:

        550 Requested action not taken: mailbox unavailable

    It's another school running Exchange, judging from some packet sniffing with Wireshark. Every email address on this domain I have tried sending to gives this error. The address is valid, as I can send to it from my personal and Gmail accounts without a problem.

    Does anyone know of some anti-spam software that gives a 550 error like the above? What else could this be? Thanks for any suggestions.

    A packet capture of the two servers communicating looks like this:

        220 <server snip> Microsoft ESMTP MAIL Service, Version: 6.0.3790.3959 ready at Sat, 2 Oct 2010 12:48:17 -0700
        EHLO <email snip>
        250-<server snip> Hello [<ip snip>]
        250-TURN
        250-SIZE
        250-ETRN
        250-XXXXXXXXXX
        250-DSN
        250-ENHANCEDSTATUSCODES
        250-8bitmime
        250-BINARYMIME
        250-XXXXXXXX
        250-VRFY
        250-X-EXPS GSSAPI NTLM LOGIN
        250-X-EXPS=LOGIN
        250-AUTH GSSAPI NTLM LOGIN
        250-AUTH=LOGIN
        250-X-LINK2STATE
        250-XXXXXXX
        250 OK
        MAIL FROM: <email snip>
        250 2.1.0 <email snip>....Sender OK
        RCPT TO:<email snip>
        250 2.1.5 <email snip>
        DATA
        354 Start mail input; end with <CRLF>.<CRLF>
        <email body here>
        .
        550 Requested action not taken: mailbox unavailable
        QUIT
        221 Goodbye

    Read the article

  • procmail doesn't execute PHP script

    - by Phliplip
    I have set up a Kannel SMS gateway on my FreeBSD 7.2 box and the service works great. I'm now trying to set up an email2sms feature. For this I have created a system user called kannel, and all mails are forwarded to this user. In kannel's home dir I have the following files:

        -rw-r--r--  1 kannel  kannel    81B 17 jan 09:50 .procmailrc
        lrwxr-x---  1 root    kannel    58B 14 jan 13:24 email2sms.php @ -> some-what-some-where
        -rw-rw-rw-  1 root    kannel   5,8K 17 jan 09:52 log.email2sms
        -rw-------  1 kannel  kannel   1,3K 17 jan 09:50 procmail.log
        -rw-r-----  1 root    kannel   606B 14 jan 13:28 rawmail.txt

    The file email2sms.php is a symlink to a PHP script (a Zend Framework application) that takes the email from STDIN and uses Zend Framework to parse the mail into an object. It then makes an HTTP request to the SMS gateway. The PHP script works.

    Content of .procmailrc:

        LOGFILE=$HOME/procmail.log
        VERBOSE=yes

        :0
        | php email2sms.php >> log.email2sms

    From the last sent email I have this in procmail.log:

        procmail: [97744] Mon Jan 17 09:50:40 2011
        procmail: [97744] Mon Jan 17 09:50:40 2011
        procmail: Assigning "LASTFOLDER= php email2sms.php >> log.email2sms"
        procmail: Executing " php email2sms.php >> log.email2sms"
        procmail: Notified comsat: "kannel@:/home/user/kannel/ php email2sms.php >> log.email2sms"
        From [email protected]  Mon Jan 17 09:50:40 2011
         Subject: asdf as
          Folder:  php email2sms.php >> log.email2sms          2600

    But there is no new output in log.email2sms, and the script should output the subject of the email. If I sudo as the kannel user and pipe a file with a raw email to the script, it executes just fine:

        [root@webserver /home/user/kannel]# /home/user/kannel/ sudo -u kannel cat rawmail.txt | php email2sms.php >> log.email2sms

    And the command outputs to log.email2sms as desired. Any ideas, guys?
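    procmail runs recipes with a minimal environment, so a frequent cause of "works from my shell, silent from procmail" is that php (or something the Zend app needs) is not on procmail's PATH, or that stderr is being thrown away. A sketch of a more defensive .procmailrc, written from a root shell; the php location and home dir are assumptions (check with `which php`, typically /usr/local/bin/php on FreeBSD):

        # Rewrite ~kannel/.procmailrc with explicit SHELL/PATH and absolute paths,
        # and capture stderr so failures become visible
        cat > /home/user/kannel/.procmailrc <<'EOF'
        SHELL=/bin/sh
        PATH=/usr/local/bin:/usr/bin:/bin
        HOME=/home/user/kannel
        LOGFILE=$HOME/procmail.log
        VERBOSE=yes

        :0
        | /usr/local/bin/php $HOME/email2sms.php >> $HOME/log.email2sms 2>> $HOME/email2sms.err
        EOF

    If email2sms.err then fills up with include/require errors, the problem is the Zend app resolving relative paths, not procmail itself.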

    Read the article

  • Losing JSESSIONID when using ProxyHTMLURLMap

    - by Matthew Schmitt
    I've set up a reverse proxy between an Apache front end and multiple Tomcat back ends. The block of config below includes the ProxyHTMLURLMap parameter so that the HTML can be rewritten to remove the Tomcat context path.

    With this setup in place, after logging into my application an initial JSESSIONID is set properly, but when navigating to any other page this JSESSIONID is lost and another one is set by the application. I should mention that the initial login redirects to a URL that includes the current context path (i.e. https://app.domain.com/context/home), but when navigating to another page that context path is not present in the URL (i.e. https://app.domain.com/page2).

        <Proxy balancer://happcluster>
            BalancerMember ajp://happ01.h.s.com:8009 route=worker1 loadfactor=10 timeout=15 retry=5
            BalancerMember ajp://happ02.h.s.com:8009 route=worker2 loadfactor=10 timeout=15 retry=5
            BalancerMember ajp://happ03.h.s.com:8009 route=worker3 loadfactor=5 timeout=15 retry=5
            BalancerMember ajp://happ04.h.s.com:8009 route=worker4 loadfactor=5 timeout=15 retry=5
            BalancerMember ajp://happ05.h.s.com:8009 route=worker5 loadfactor=5 timeout=15 retry=5

            ProxySet lbmethod=bytraffic
            ProxySet stickysession=JSESSIONID
        </Proxy>

        ProxyPass /context balancer://happcluster/context
        ProxyPass / balancer://happcluster/context/

        <Location /context/>
            # Rewrite HTTP headers and HTML/CSS links for everything else
            ProxyPassReverse /
            ProxyPassReverseCookieDomain / app.domain.com
            ProxyPassReverseCookiePath / /context
            ProxyHTMLURLMap /context/ /

            # Be prepared to rewrite the HTML/CSS files as they come back
            # from Tomcat
            SetOutputFilter INFLATE;proxy-html;DEFLATE
        </Location>

    Has anyone ever run into a similar situation?
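    One thing worth checking in a setup like this (a sketch, not a confirmed diagnosis): ProxyPassReverseCookiePath takes the back-end path first and the public path second, so to turn Tomcat's "Set-Cookie: JSESSIONID=...; Path=/context" into a cookie the browser will also send for stripped URLs like /page2, the mapping arguably needs to run from /context to /, which is the opposite of the directive above. A way to test that, with the config file path as a placeholder:

        # Swap the cookie-path mapping: back-end path /context first, public path / second
        sed -i 's|ProxyPassReverseCookiePath / /context|ProxyPassReverseCookiePath /context /|' \
            /etc/httpd/conf.d/app-proxy.conf
        apachectl configtest && apachectl graceful

        # Then confirm which Path the session cookie actually carries after login
        curl -sk -D - -o /dev/null https://app.domain.com/context/home | grep -i set-cookie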

    Read the article

  • BackupPC - why does it use rsync --sender --server ... ?

    - by Jakobud
    I'm in the process of experimenting with BackupPC on a CentOS 5.5 server. I have everything pretty much set up with default values. I tried setting up a basic backup for a host's /www directory. The backup fails with the following errors:

        full backup started for directory /www
        Running: /usr/bin/ssh -q -x -l root target /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/
        Xfer PIDs are now 30395
        Read EOF: Connection reset by peer
        Tried again: got 0 bytes
        Done: 0 files, 0 bytes
        Got fatal error during xfer (Unable to read 4 bytes)
        Backup aborted (Unable to read 4 bytes)
        Not saving this as a partial backup since it has fewer files than the prior one (got 0 and 0 files versus 0)

    First of all, yes, I have my SSH keys set up so that I can ssh to the target server without a password. In the process of troubleshooting, I tried the above ssh command directly from the command line, and it hangs. Looking at the end of the SSH debug messages I get:

        debug1: Sending subsystem: /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/
        Request for subsystem '/usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/' failed on channel 0

    Next I started looking at the rsync flags. I did not recognize --server and --sender. Looking at the rsync man pages, sure enough, I don't see anything about --server or --sender in there. What are those there for?

    Looking at the BackupPC config I have this:

        RsyncClientPath = /usr/bin/rsync
        RsyncClientCmd  = $sshPath -q -x -l root $host $rsyncPath $argList+

    And for the arguments, I have the following listed:

        --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive

    Notice there is no --server, --sender or --ignore-times. Why are these things getting added in? Is this part of the problem?
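    For background, and easy to verify locally: --server and --sender are rsync's internal flags, the ones one rsync process passes when it launches its peer on the remote end, which is why the man page never documents them; BackupPC simply builds that remote command itself instead of letting a local rsync do it. You can watch a plain rsync generate the same form (host and paths below are placeholders):

        # -n = dry run; ssh -v makes the remote command visible in the debug output
        rsync -avn -e 'ssh -v' root@target:/www/ /tmp/www-check/ 2>&1 | grep -i 'sending command'

        # Typically prints something like:
        #   debug1: Sending command: rsync --server --sender -vnlogDtpre.iLsf . /www/

    Separately, ssh only logs "Sending subsystem" when it is invoked with -s; a plain remote command shows "Sending command" instead, so the hand-run test that failed with "Request for subsystem ... failed" was not quite the same invocation BackupPC makes.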

    Read the article

  • Squid SSL transparent proxy - SSL_connect:error in SSLv2/v3 read server hello A

    - by larryzhao
    I am trying to set up an SSL proxy for one of my internal servers to visit https://www.googleapis.com using Squid, so that the Rails application on that server reaches googleapis.com via the proxy. I am new to this, so my approach is to set up an SSL transparent proxy with Squid.

    I built Squid 3.3 on Ubuntu 12.04, generated an SSL key and certificate pair, and configured Squid like this, leaving almost all other configuration at the defaults:

        http_port 443 transparent cert=/home/larry/ssl/server.csr key=/home/larry/ssl/server.key

    The permissions of the dir that holds the key/cert are:

        drwxrwxr-x 2 proxy proxy 4096 Oct 17 15:45 ssl

    Back on my dev laptop, I put "<proxy-server-ip> www.googleapis.com" in my /etc/hosts to make the call go to my proxy server. But when I try it in my Rails application, I get:

        SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol

    And I also tried with openssl on the CLI:

        openssl s_client -state -nbio -connect www.googleapis.com:443 2>&1 | grep "^SSL"
        SSL_connect:before/connect initialization
        SSL_connect:SSLv2/v3 write client hello A
        SSL_connect:error in SSLv2/v3 read server hello A
        SSL_connect:error in SSLv2/v3 read server hello A

    Where did I go wrong?
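    For what it's worth, "http_port ... transparent" only speaks plain HTTP, which is consistent with the client's "unknown protocol" error when it attempts a TLS handshake against that port. Intercepting HTTPS in Squid 3.3 generally means an https_port with SSL-Bump and a CA certificate that the client machine is told to trust. A rough squid.conf sketch under those assumptions (paths and the helper location are placeholders):

        cat >> /etc/squid/squid.conf <<'EOF'
        https_port 3130 intercept ssl-bump cert=/home/larry/ssl/myCA.pem key=/home/larry/ssl/myCA.key generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
        ssl_bump server-first all
        sslcrtd_program /usr/lib/squid3/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB
        EOF

    That said, since the goal is only to let one internal Rails app reach googleapis.com, a plain forward proxy (Squid's default http_port 3128 with CONNECT allowed) plus an https_proxy setting in the app's environment is usually simpler, because then no certificate interception is needed at all.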

    Read the article

  • Open Source Chef Server can't upload cookbook

    - by veilig
    I just set up the open source Chef server on an Ubuntu 12.04 EC2 instance. I've set up my web UI and am able to get responses from my knife commands, e.g. knife node list, knife client list, knife user list, etc. I'm able to update roles, data bags, environments, etc., but I cannot upload any cookbooks. My workstation runs Mac OS X.

    I keep getting this output at the end of my "knife cookbook upload -VV curl" command. It doesn't matter which cookbook I upload, or whether I upload them all; I keep getting the same response:

        DEBUG: Chef::HTTP calling Chef::HTTP::ValidateContentLength#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::RemoteRequestID#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::Authenticator#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::Decompressor#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::CookieManager#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::JSONToModelOutput#handle_response
        /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http/json_output.rb:51:in `handle_response': undefined method `chomp' for nil:NilClass (NoMethodError)
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:229:in `block in apply_response_middleware'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `each'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `inject'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `apply_response_middleware'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:144:in `request'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:118:in `put'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/cookbook_uploader.rb:123:in `block in uploader_function_for'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `call'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `block (3 levels) in process'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `loop'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `block (2 levels) in process'
        INFO: HTTP Request Returned 204 No Content:
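    Not a confirmed diagnosis, but worth noting: the traceback shows a pre-release workstation client (chef-12.0.0.alpha.1) talking to an open source Chef server, and the nil that json_output.rb chokes on is the body of the "204 No Content" reply logged on the last line. Ruling out a client/server version mismatch is a cheap first experiment; the version number below is only an example of a stable release at the time:

        # What is knife actually running?
        knife --version
        gem list chef

        # Try a stable client release on the workstation instead of the 12.0 alpha
        gem uninstall chef -a
        gem install chef -v 11.16.4
        knife cookbook upload curl -VV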

    Read the article

  • Silent install FirePro v4900 Driver on Windows Embedded 7 Standard

    - by Birgit_B
    I'm trying to install the drivers for a FirePro V4900 on Windows Embedded 7 Standard 64-bit. I want the system to be as small as possible, so I would rather not install the whole Catalyst Control Center, only the necessary drivers. Because the installation should be accomplished completely unattended, the installation of the FirePro driver should also run without any user interaction.

    I see two possible solutions for the problem:

    1. Install only the drivers: Is it possible to install solely the necessary drivers? How would I achieve that? This would be the preferred solution because of the smaller footprint.
    2. Silently custom-install the provided "FirePro_8.911.3.3_VistaWin7_X32X64_135673.exe" (found at ATI FirePro™ Driver). Is there a way to do that?

    Thank you in advance for your support!

    Update: I managed to accomplish a silent installation. I extracted the contents of the above-mentioned installer file and ran $_OUTDIR\Bin64\Setup.exe -Install (there are some other parameters; just run Setup.exe /?). But I couldn't find a way to install just the drivers without the Catalyst Control Center, and it seems the Control Center has some unfulfilled dependencies and so it crashes.

    Read the article

  • Long access time for static web page on virtual machine

    - by Karol
    My setup:

    - Windows 7 on the workstation that I use at work (with domain) and home (no domain)
    - Virtual machine (VMware) running Arch Linux (I will call it just "Linux") with the network interface in bridged mode; Linux serves web pages with Nginx
    - IP address of the Linux machine is 192.168.0.16 and is added to C:\windows\system32\drivers\etc\hosts:

        192.168.0.16 bridged bri

    - IP address of the Windows workstation is added to /etc/hosts:

        192.168.0.10 workstation

    I can add more details to my setup description (I am not sure what is relevant).

    The question: Often (but not always) it takes a long time for a web browser (Firefox) to open a static web page served by Linux. I am sure it is not a performance issue. To be more specific: it takes about 20 seconds to resolve(?) the address http://bridged for the web browser. Additionally, I have just installed the Samba service and noticed a similar problem, so it is not specific to the browser and HTTP; initial access to Samba shares also takes a long time.

    Read the article

  • Why does sendmail resolve to the ISP domain?

    - by digital illusion
    I wish to set up a local mail server for debugging purposes using Fedora 15. I set up sendmail, but there is a problem. When I'm not connected to the internet, the local mail server delivers correctly (to localhost), and in /var/log/mail I see that it correctly delivered a mail to [email protected]:

        Jun 21 18:24:56 PowersourceII sendmail[6019]: p5LGOttt006019: [email protected], size=328, class=0, nrcpts=1, msgid=<[email protected]>, relay=adriano@localhost
        Jun 21 18:24:56 PowersourceII sendmail[6020]: p5LGOuSV006020: from=<[email protected]>, size=506, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=PowersourceII.localdomain [127.0.0.1]
        Jun 21 18:24:56 PowersourceII sendmail[6019]: p5LGOttt006019: [email protected], [email protected] (500/500), delay=00:00:01, xdelay=00:00:00, mailer=relay, pri=30328, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5LGOuSV006020 Message accepted for delivery)

    When I connect, NetworkManager fills in /etc/resolv.conf with:

        domain fastwebnet.it
        search fastwebnet.it localdomain
        nameserver 62.101.93.101
        nameserver 83.103.25.250

    Now sendmail no longer works and tries to send messages to my ISP's domain, as seen in the log:

        Jun 21 18:40:02 PowersourceII sendmail[6348]: p5LGe1LV006348: [email protected], [email protected] (500/500), delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=30327, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5LGe10n006352 Message accepted for delivery)
        Jun 21 18:40:02 PowersourceII sendmail[6354]: p5LGe10n006352: to=<[email protected]>, delay=00:00:01, xdelay=00:00:00, mailer=esmtp, pri=120651, relay=mx3.fastwebnet.it. [85.18.95.21], dsn=5.1.1, stat=User unknown

    As you can see, it tries to deliver a mail to [email protected], and fails. The setup works under other ISPs. How can I avoid the Fastweb ISP DNS relay? Thank you.
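    A quick way to see how sendmail parses and routes a given recipient, before and after NetworkManager rewrites resolv.conf, is its verify and address-test modes; the address below is a placeholder for the local user being tested:

        # Which mailer/host/user would sendmail pick for this recipient right now?
        sendmail -bv adriano@localhost.localdomain

        # Address-test mode: show each ruleset rewriting step for the same address
        sendmail -bt <<'EOF'
        3,0 adriano@localhost.localdomain
        EOF

    If the resolver's search domain is what turns the local name into a fastwebnet.it address, the usual knobs to look at are pinning the canonical names in /etc/hosts and listing every name the box should treat as local in /etc/mail/local-host-names.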

    Read the article

  • Starting AirPlay from the command line, to send output of the Mac OS X 'say' command to AirPlay

    - by Fabien
    OK, Sunday question :) Trying to make a little joke...

    1) If you open a terminal and type "say -a ?", Mac OS X will give you the list of devices it can send spoken words to. On mine, it says:

        39 AirPlay
        47 Built-in Output

    2) I have a Denon AirPlay-ready receiver in my living room and I'm trying to send spoken words to my wife downstairs. I can send music without any problem using iTunes, so from an infrastructure standpoint I'm all set.

    3) I want my computer to say (out of the blue) "Honey, why don't you bring me a cup of coffee". I can make it say that locally on my internal laptop speakers, but I can't seem to send it to device 39 successfully.

    I suspect that there are a few other things that need to be set up before it works, i.e. setting the AirPlay output to "Denon", maybe opening a channel and reserving it. I don't know. Has anyone played with this? Is there a way to set up AirPlay from the command line? That would be awesome :)
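    A minimal sketch of the command side, assuming the device list shown above (the id 39 comes from that list; ids can change between sessions, so re-check first):

        # List the audio devices say can target, by id and name
        say -a '?'

        # Speak through the AirPlay device, by id or by name
        say -a 39 "Honey, why don't you bring me a cup of coffee"
        say -a AirPlay "Honey, why don't you bring me a cup of coffee"

    If nothing comes out of the Denon, it may be necessary to first pick it as the AirPlay output (Option-click the volume menu bar icon, or via iTunes) so that an active AirPlay audio device exists for say to target.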

    Read the article

  • Ensure Macs get the correct machine name from DHCP?

    - by Greg Whitfield
    I have a problem in our network where our Macs occasionally get given the wrong machine name while, I guess, getting a new DHCP lease. The DHCP servers are Windows-based; the bulk of our network is Windows, but we have some Linux machines and an increasing number of Macs.

    Specifically, a Mac will occasionally take on the name of another machine on the network. For example, I have a new MacBook Pro. In the OS X setup it gets called "gomez", and it initially starts up on the network with that name without any problems. But after a few days, when the machine was restarted (it had several restarts in the meantime), it ended up being called "florrie", which is actually the name of another machine in another part of the network.

    All network operations work fine, and indeed you don't notice most of the time; it's only when you run apps like Perforce that require the hostname that you get problems.

    I'm sorry I don't have more info than that, but if I know what to look for I can dig out some more facts. Any hints on checking the network setup would be useful.
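    Not a fix for the DHCP/DNS side, but one way to stop a Mac from adopting whatever name the lease or reverse DNS hands it is to pin the names explicitly with scutil; "gomez" below is just the example name from the question:

        # Pin the three name settings so they survive DHCP renewals
        sudo scutil --set HostName gomez
        sudo scutil --set LocalHostName gomez
        sudo scutil --set ComputerName gomez

        # Check what the machine currently thinks it is called
        scutil --get HostName; hostname

    When HostName is set statically like this, the machine stops deriving its hostname from the DHCP/reverse-DNS answer, which is usually where the "florrie" name is coming from.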

    Read the article

  • Small Business HP Virtualisation and iSCSI SAN Options

    - by Robin Day
    We are a small business that hosts our core product on a number of HP servers. Our core production setup is:

    - 1x HP DL380, high-powered, for a SQL Server database
    - 1x HP DL360, mid-powered, for our core application server
    - 6x HP DL320, low-powered, for our front ends

    We run our training / testing / support systems on a similar setup; the servers are just older and less powerful. Unfortunately this is now causing us issues, as the system has grown beyond the capabilities of these older servers. Upgrading them would be expensive and we believe that virtualisation is probably the way to go for the future.

    Locally we run a number of test / dev environments on ESXi using direct storage on a couple of high-powered DL360s, and these perform fairly well. We're thinking that instead of replacing all of our test servers we can implement an iSCSI SAN and one or two high-powered hosts, and hopefully, when it comes time to replace our live servers as well, we can just expand the virtual environment to cope.

    So my question is: can anyone offer any advice on some suitable options? We have generally always been extremely happy with HP servers, all of our kit is currently HP, and therefore our preference would be to stick with HP; however, I'm always happy to hear about other options. I'm hoping that initially a budget of around 15-25k (GBP) would be suitable; this could potentially be increased if I had confidence that the system would pave the way for a cost-effective upgrade of our live systems in the future as well.

    I am new to SANs and my only real experience is playing with OpenFiler on some old desktops. I think iSCSI should be suitable, but I've not done any research into how SQL Server may perform. I've had a browse through HP's sites and see plenty of information about EVA, MSA, LeftHand, etc. However, from looking at all that, I don't see which options would be best and, more importantly, I don't know exactly what I would need to buy. Any help, links, or opinions would be much appreciated. Thanks.

    Read the article

  • How to have SSL on Amazon Elastic Load Balancer with a Gunicorn EC2 server?

    - by Riegie Godwin
    I'm a self-taught back-end engineer, so I'm learning all of this stuff as I go along. For the longest time I've been using basic authentication for my users. Many developers advise against this approach since each request contains the username and password in clear text; anyone with the right skills can sniff the connection between my iOS application and my Django/Gunicorn server and obtain the password. I wouldn't want to put my users' credentials at risk, so I would like to implement a more secure form of authentication. SSL seems to be the most viable option.

    My server doesn't serve any static content or anything crazy of that sort. All the server does is send and receive JSON responses to and from my iOS application. Here is my current topology:

        iOS application ------ Amazon Elastic Load Balancer ------ EC2 instances running HTTP Gunicorn

    Gunicorn runs on port 8000. I have a CNAME record from GoDaddy for the Amazon Elastic Load Balancer DNS name, so instead of using the long DNS name to make requests I just use server.example.com. To interact with my servers I send and receive requests to server.example.com:8000/. This setup works and has been solid.

    However, I need a more secure way. I would like to set up SSL between my iOS application and my Elastic Load Balancer. How can I go about doing this? Since I am only sending JSON responses to my application, do I really need to buy a certificate from a CA, or can I create my own? (Browsers will not be interacting with my servers; my servers are only designed to send JSON responses to my iOS application.)
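    A sketch of what terminating TLS at the load balancer could look like with the classic-ELB tooling, assuming the instances keep serving plain HTTP on port 8000; every name and ARN below is a placeholder. Because a native iOS app validates the server certificate just as a browser would, a certificate from a public CA for server.example.com is the safer choice over a self-signed one:

        # Upload the CA-issued certificate for server.example.com to IAM
        aws iam upload-server-certificate \
            --server-certificate-name server-example-com \
            --certificate-body file://server.crt \
            --private-key file://server.key \
            --certificate-chain file://chain.pem

        # Add an HTTPS listener to the existing classic ELB: TLS terminates at the
        # ELB, plain HTTP continues to Gunicorn on port 8000
        aws elb create-load-balancer-listeners \
            --load-balancer-name my-load-balancer \
            --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=8000,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/server-example-com"

    The app then talks to https://server.example.com/ and the basic-auth credentials travel inside the TLS tunnel; traffic from the ELB to the instances stays plain HTTP inside AWS unless back-end encryption is added separately.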

    Read the article

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example:

    1) The user's browser requests http://www.example.com/files/file1.zip
    2) The request goes to server A, based on the DNS A record for example.com.
    3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
    4) Server A forwards the request to server B.
    5) Server B returns file1.zip directly to the user without going through server A.

    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user, as that would violate the requirement of a single domain.

    From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing. It is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC address translation and then pass the request back onto the network, and for servers outside the network of server A tunneling will be required. For step 5, I simply need to configure server B as per the real servers in a load balancing setup. Namely, server B should have server A's IP address on the loopback interface and it should not answer any ARP requests for that IP address.

    My problem is how to actually achieve step 4? I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing / forwarding at the web server level, i.e. in PHP or C# / ASP.NET. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything. Thanks.
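    For step 5, the conventional direct-return (LVS-DR style) configuration on a real server such as server B is a loopback alias carrying the shared public IP plus ARP suppression, so B answers traffic addressed to that IP without advertising it on the LAN; the address and interface below are placeholders:

        # On server B: accept packets addressed to the shared (virtual) IP
        ip addr add 203.0.113.10/32 dev lo label lo:0

        # Do not answer ARP for that address on the real NIC
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        sysctl -w net.ipv4.conf.all.arp_announce=2
        sysctl -w net.ipv4.conf.eth0.arp_ignore=1
        sysctl -w net.ipv4.conf.eth0.arp_announce=2

    Step 4 is the part the ready-made layer-4 balancers handle: MAC rewriting only works when A and B share a layer-2 segment, which is why setups with real servers in other data centers normally fall back to IP-in-IP tunneling (LVS-TUN style) rather than plain DSR.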

    Read the article

  • APC Smart UPS network shutdown issue

    - by Rob Clarke
    Here is a bit about our setup:

    - We have 2x Smart-UPS RT 6000 XL units with network management cards
    - We are running PowerChute from a network server
    - PowerChute is connected to the management cards of both UPSs
    - UPSs are set to do a graceful shutdown via PowerChute when the battery duration is under 20 minutes
    - We also have a command file that runs with PowerChute
    - Although our setup is redundant, we do not have an equal load on each UPS due to APC switches for single-power devices

    The problem is that, because we do not have an equal load on each UPS, the batteries drain at different rates. This means the UPSs reach the specified low battery duration at completely different times. UPS 1 may have run down to 5 minutes and be in desperate need of initiating a PowerChute shutdown, while UPS 2 still has 25 minutes of runtime, so no shutdown is initiated. Consequently UPS 1 goes down, takes all the servers with it, and then shuts down UPS 2 as well!

    What we need to happen is one of two things:

    1. PowerChute initiates the shutdown as soon as either UPS reaches the 20-minute low battery duration setting, rather than waiting for both.
    2. The UPS with the heavier load expends its entire battery but does not shut down both UPSs, and lets the load be switched across to the UPS that still has runtime remaining. That way, when the remaining UPS reaches its low battery duration, it can proceed with the graceful shutdown via PowerChute.

    Hope that makes sense; any help is greatly appreciated!

    Read the article

  • Dynamic subdomain routing

    - by Nader
    Hi everyone, I asked this question over at Stack Overflow but got very few views: http://stackoverflow.com/questions/2284917/route-web-requests-to-different-servers-based-on-subdomain

    Perhaps it's more applicable to this crowd. Here it is again for convenience:

    I have a platform where a user can create a new website using a subdomain. There will be thousands of these, e.g. abc.mydomain.com, def.mydomain.com; hopefully, if we are successful, hundreds of thousands. I need to be able to route these domains to different IPs to point at a particular app server. I have this mapping in a database right now. What are the best practices and recommended technologies here? I see a couple of options:

    1. Have DNS set up with a wildcard CNAME entry so that all requests go to a single IP, where perhaps two machines using heartbeat (for failover) know how to look up the IP in the database and then do an HTTP redirect to the appropriate app server. This seems clunky and slow to me.
    2. Run my own DNS server that can be programmatically managed, such that when a new site is created a DNS entry is added. We also move sites around to different app servers, so I would need to be able to update DNS entries in close to real time.

    Thoughts anyone? Thanks.

    Update: Based on some suggestions below, it seems like reverse-proxy server(s) are the way to go. As I'll be rebalancing the domain-to-server mapping, these need to work instantly, and the TTL on a DNS solution could be a problem. Any recommendations on software to use, considering that this domain-to-IP data is stored in a DB and that I'll need this to be performant?

    Update 2: I've set up external wildcard DNS pointing at an HAProxy web server whose job it is to route requests to back-end servers. The mapping is stored in our internal PowerDNS server. The question now is how to get the HAProxy server (or another) to use the value of the internal DNS and not some config file or access list?
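    One concrete shape the "HAProxy driven by the database" idea can take, assuming a newer HAProxy than existed when this was asked (1.5 or later, which added map files and the runtime socket commands used below; all names and paths are placeholders):

        # haproxy.cfg fragment: choose the backend from a map keyed on the Host header
        cat >> /etc/haproxy/haproxy.cfg <<'EOF'
        frontend http-in
            bind *:80
            use_backend %[req.hdr(host),lower,map(/etc/haproxy/host2backend.map,bk_default)]
        EOF

        # Map file regenerated from the database: "<hostname> <backend>" per line
        # (the named backends, bk_app01 etc., must exist in haproxy.cfg)
        printf 'abc.mydomain.com bk_app01\ndef.mydomain.com bk_app02\n' > /etc/haproxy/host2backend.map

        # Move one site to another app server without reloading HAProxy
        echo "set map /etc/haproxy/host2backend.map def.mydomain.com bk_app03" | \
            socat stdio /var/run/haproxy.sock

    The runtime "set map" only changes the running process, so the map file still has to be rewritten as well for the change to survive a restart.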

    Read the article

  • What are the most important aspects to consider when choosing a SAN for a small office virtualization setup?

    - by Prof. Moriarty
    I am in the process of consolidating 6 physical servers running 6 different operating system flavors (don't ask) onto two identical physical servers (Dell PowerEdge 2900), using the free VMware ESXi 4.0 platform. We will install an iSCSI SAN over a 1GbE network and store all virtual machine images on the SAN. Each physical server will run 3 VMs, and in the case of a physical server failure we would manually switch over the other 3. These are all internal servers; while important, they can tolerate some amount of downtime (say < 1h) to keep the cost and complexity associated with HA down.

    I now need to choose the SAN to be used for the setup, on a low budget. We currently have about 2TB of data, but of course I want to be able to grow, do backups of VM snapshots on other drives and move them to a different location, etc. So what I would like to know is:

    - Which are the must-have features for this setup, without which using a SAN is not worth it?
    - We are mostly a Dell shop, so I have been looking at the EqualLogic PS4000E High Availability model. Any opinions, anecdotes, or bad experiences with this model? (This is one of the few models that could accommodate our existing disks from the physical servers.)
    - If you can recommend something that is not Dell but has better value, I would most definitely consider it. Caveats, things to look out for?

    Read the article

< Previous Page | 156 157 158 159 160 161 162 163 164 165 166 167  | Next Page >