Search Results

Search found 14745 results on 590 pages for 'setting'.

Page 21/590 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Setting up a wireless connection between 2 clients

    - by shadeMe
    I've got a D-Link router connected to my ADSL modem that forms the hub of my home network. I have two clients: a desktop with an onboard wireless adaptor and a laptop outfitted with the same. The former runs Windows 7 x64, the latter Windows XP x86. I'd like to set up a wireless connection between the two that would let me play games over it and, to a lesser extent, transfer and stream data. How would I go about doing it?

    Read the article

  • Setting up a purely Node.js http server on port 80

    - by Luke Burns
    I'm using a fresh install of CentOS 5.5. I have Node installed and working (I'm just using Node -- no Apache or nginx), but I cannot figure out how to make a simple server work on port 80. Node is running and is listening on port 80. I'm just using the demo app:

        var http = require('http');
        http.createServer(function (req, res) {
          res.writeHead(200, {'Content-Type': 'text/plain'});
          res.end('Hello World\n');
        }).listen(80, "x.x.x.x");
        console.log('Server listening to port 80.');

    When I visit my IP, it does not work. I obtained my IP address using ifconfig. I've tried different ports, so there must be something I am missing. What do I need to configure on my server to make this work? I would like to do this without installing Apache or nginx. Luke

    Edit -- OK, so I installed nginx and started it up to see whether or not the problem is related to Node, and I don't see its welcome page either. So it definitely has something to do with the server. Am I retrieving the IP address correctly by running ifconfig and then reading the inet addr under eth0?
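
    One frequent culprit on a stock CentOS 5.5 install is the default iptables firewall, which typically allows only SSH. A quick check and a hedged fix, assuming the stock iptables service is in use:

        # see whether the firewall is running and which ports it accepts
        service iptables status
        iptables -L -n

        # open TCP port 80 and persist the rule across reboots
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        service iptables save

    Also note that binding to the inet addr shown under eth0 only works as expected if that address is the public one; on a NATed host the public IP belongs to the router, and the app should listen on the eth0 address (or 0.0.0.0) while port forwarding handles the rest.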

    Read the article

  • On setting up Apache and IIS to share the same IP

    - by miCRoSCoPiCeaRthLinG
    Hello. There are two different web apps running on two physically different servers on our network, one on IIS and the other on Apache, both on port 80, since the two machines are reachable at different IPs on our internal network. Now I want to expose both of these services to the world. My idea is to somehow redirect each incoming connection to the appropriate server based on the subdomain requested. Example: xxx.domain.com maps to IIS (internal IP 1.2.3.4), yyy.domain.com maps to Apache (internal IP 5.6.7.8). To the world, both of these servers will share the same public IP. What kind of a configuration am I looking at, and how do I go about trapping the subdomain requests and redirecting them to the appropriate server? Thanks, m^e
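
    One common arrangement is to point the public IP at a single front end (for example the Apache box) and let it reverse-proxy by host name. A sketch, assuming mod_proxy and mod_proxy_http are enabled and using the hostnames and internal IPs from the question:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName xxx.domain.com
            # pass requests for this subdomain through to the IIS box
            ProxyPreserveHost On
            ProxyPass        / http://1.2.3.4/
            ProxyPassReverse / http://1.2.3.4/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName yyy.domain.com
            # serve (or proxy) the Apache-hosted app for this subdomain
            ProxyPreserveHost On
            ProxyPass        / http://5.6.7.8/
            ProxyPassReverse / http://5.6.7.8/
        </VirtualHost>

    Both subdomains then resolve in public DNS to the single public IP, and the router forwards port 80 to the proxy host.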

    Read the article

  • Setting up SSL for phpMyAdmin

    - by Ubuntu User
    I would like to run phpMyAdmin using my SSL certificate. I read that if I placed the following in /etc/phpmyadmin/config.inc.php it would force it to use SSL, and now it does: $cfg['ForceSSL'] = true; However, once I did this I get an error stating "cannot connect to server." A port scan shows my port 443 is closed, for one, but I connect via https:// to my secure web-based email admin panel, which tells me that may not be the issue. Second, I have an SSL certificate I purchased but I am not sure how to apply it: mydomain.com.crt is sitting on my desktop -- how should I be utilizing this? I remember creating a self-signed cert for my web email access. Do I have to do this for phpMyAdmin as well? At least that way, since I am the only one who will ever access the DB, it will never expire. Also, phpMyAdmin used to come up as http://mydomain/phpmyadmin/ and of course I am now trying to get to https://mydomain.com/phpmyadmin/; however, I do not have any pages on my website that require https:// currently. In the future I may add this, but for now I only want to access phpMyAdmin via SSL. I can use my own self-signed cert -- but if this causes problems with future ecommerce apps under mydomain.com I would rather use the SSL cert I already purchased. Thank you!
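
    The certificate is installed at the web-server level rather than in phpMyAdmin itself; ForceSSL only redirects to https://, so Apache (or whichever server fronts phpMyAdmin) has to be listening on 443 with the cert loaded, and the firewall has to allow 443. A minimal sketch, assuming a Debian/Ubuntu-style Apache layout and placeholder paths:

        # enable mod_ssl and the HTTPS port (Debian/Ubuntu style)
        a2enmod ssl

        <VirtualHost *:443>
            ServerName mydomain.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/mydomain.com.crt
            SSLCertificateKeyFile /etc/ssl/private/mydomain.com.key
            # the usual phpMyAdmin alias/include stays the same as on port 80
        </VirtualHost>

    The purchased .crt is only usable together with the private key generated alongside its CSR (plus any intermediate/chain file the CA supplied); the same certificate can later serve an ecommerce app under the same hostname without conflict.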

    Read the article

  • Setting Password for phpMyAdmin

    - by anitha
    I am using RHEL 5 and PHP 5 with MySQL 5. My server is already configured and running all applications smoothly. I am accessing MySQL as root and the password is 'anitha123', but when I access phpMyAdmin through the browser it does not ask for a password. Could somebody please tell me how I can set it up so that it prompts for a username and password? Since I am not familiar with PHP and MySQL, please explain how to do it in a simple way. Thanks and regards, anitha
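
    This behaviour usually means phpMyAdmin is using 'config' authentication with the credentials stored in its configuration file. A sketch of the change in config.inc.php, assuming a stock phpMyAdmin install:

        /* prompt for username and password instead of logging in automatically */
        $cfg['Servers'][$i]['auth_type'] = 'cookie';

        /* remove or comment out any hard-coded credentials, e.g.: */
        // $cfg['Servers'][$i]['user']     = 'root';
        // $cfg['Servers'][$i]['password'] = 'anitha123';

    After the change, phpMyAdmin shows a login form and accepts any valid MySQL account.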

    Read the article

  • Word doesn't want to open hyperlinks, and I can't find the policy setting

    - by michaelb958
    So I have a Surface RT (running Windows RT 8.1), and I have some Word documents with links in them. The thing is, they don't work. When I try to activate one, I get an error message about a restriction/policy in effect on the computer instead. It's kind of annoying. This is a personal device, so I am the organisation - and after much spelunking and web-searching, I still can't find the relevant policy, which means I can't change it. Is it talking about Group Policy or something else entirely? Is this a[nother] Windows RT limitation, or some obscure switch I haven't found yet, or...?

    Read the article

  • Setting Up Multiple Domains (plus wildcard subdomains) to Point to the Same Site/VirtualHost

    - by Derek Reynolds
    I have my primary domain with wildcard subdomains set up already: username.maindomain.com and maindomain.com. I want to provide my users with additional domains that they can select: additional1.com, additional2.com, additional3.com... These additional domains would also need to support wildcard subdomains (as the subdomains route to a username). Does anyone know how to properly configure this in the DNS and VirtualHost config? Currently I have the additional domains as A records pointing to the same IP as my main domain (with a wildcard subdomain A record for each as well). In my VirtualHost config I am placing the additional domain names in the ServerAlias directive. Let me know if any more detail is needed.
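
    The setup described should work; the key is that every name (bare and wildcard, for every domain) resolves to the same IP and is listed as a ServerAlias on the one VirtualHost. A sketch using the domains from the question (the DocumentRoot is a placeholder):

        <VirtualHost *:80>
            ServerName  maindomain.com
            ServerAlias *.maindomain.com
            ServerAlias additional1.com *.additional1.com
            ServerAlias additional2.com *.additional2.com
            ServerAlias additional3.com *.additional3.com
            DocumentRoot /var/www/mainsite
        </VirtualHost>

    On the DNS side each additional zone needs an A record for the bare domain plus a wildcard A record (*.additional1.com and so on) pointing at the same address, which matches what is already in place; the application can then route on the Host header to pick the username.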

    Read the article

  • setting apache to read from my user's directory.

    - by Adnan
    Hello, I have CentOS 5 running Apache 2.2.3. The default folder is /var/www/html, and whatever I put in it shows when I browse it from the web. Now I would like to create a folder www under my user bob and have all files loaded from that folder: /home/bob/www. When I change the document root in my httpd.conf I get a 403 error; I have even tried with virtual hosts, but the same error shows. Any ideas on what to do next?
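
    A 403 with a DocumentRoot under /home usually comes down to filesystem permissions (Apache needs execute rights on every directory in the path), a missing <Directory> grant, or SELinux on CentOS. A hedged checklist using the paths from the question:

        # let Apache traverse the home directory and read the docroot
        chmod o+x /home/bob
        chmod -R o+rX /home/bob/www

        # in httpd.conf, grant access to the new DocumentRoot (Apache 2.2 syntax)
        <Directory "/home/bob/www">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        # if SELinux is enforcing, label the tree as web content
        chcon -R -t httpd_sys_content_t /home/bob/www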

    Read the article

  • Setting lusca and dansguardian iptables on Ubuntu 12.04 to prevent loop

    - by Heri YT
    I have a server running Ubuntu 12.04 that acts as a Lusca caching proxy and a DansGuardian internet content filter, in the following arrangement: client browser - Lusca - DansGuardian - internet. All of this runs on a single machine. Here is the relevant part of my Lusca configuration:

        http_port 3128 transparent
        cache_peer 192.168.0.1 parent 8080 0 no-query no-digest no-netdb-exchange default

    and the DansGuardian settings, which are otherwise at their defaults:

        filterip="blank"
        filterport=8080
        proxyip=192.168.0.1
        proxyport=3128

    My questions: can this work at all with everything on a single machine? What causes the "WARNING: Forwarding loop detected for:" entries found in /var/log/lusca/cache.log, is it a problem if left alone, and how do I fix it? Thank you.
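
    A hedged reading of the loop: Lusca forwards every request to its cache_peer at 192.168.0.1:8080 (DansGuardian), and DansGuardian forwards to proxyip/proxyport 192.168.0.1:3128 (Lusca again), so each request chases its own tail, which is exactly what the forwarding-loop warning reports, and it will not resolve itself if ignored. One conventional single-box arrangement is client - DansGuardian - Lusca - internet, sketched below with loopback addresses (adjust to taste):

        # dansguardian.conf: clients connect here on 8080, filtered traffic goes to Lusca
        filterport = 8080
        proxyip = 127.0.0.1
        proxyport = 3128

        # lusca.conf: no cache_peer pointing back at DansGuardian; fetch directly
        http_port 127.0.0.1:3128
        # remove the "cache_peer 192.168.0.1 parent 8080 ..." line
        always_direct allow all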

    Read the article

  • Odd behavior of setting REMOTE_ADDR between Apache, Nginx, and AWS ELB

    - by Chris Drumgoole
    I have encountered a strange issue and am curious whether others have seen it as well, and whether there is anything that can be done about it. We have a setup with multiple AWS EC2 Linux machines sitting behind an ELB; the EC2 machines run Nginx. Let's refer to these as my production machines (because they are!). I also have a Rackspace cloud machine running Apache, completely separate; let's call this the test server. Now, there's an ISP here in Singapore that seems to be funneling traffic through a transparent proxy or something, and when you do an IP check the IP often changes. In fact, when I check on http://www.whatismyip.com the IP seems to be stable (doesn't change) across refreshes, but on http://www.whatismyipaddress.com the IP changes on every refresh (so my ISP is doing weird stuff). Back to my setup, I noticed a couple of things:

    1. Checking the REMOTE_ADDR variable from PHP when connecting to a single Nginx production machine (bypassing the load balancer), it is set to the stable IP (the one that does not change).
    2. Checking the REMOTE_ADDR variable from PHP when connecting to the test Apache server, it is set to the IP that changes on refreshes.
    3. Checking the headers when connecting to the Nginx production machines through the ELB, the ELB sets HTTP_X_FORWARDED_FOR to the stable IP.

    Has anyone experienced this odd behavior? Is there nothing that I can do? And which IP should I "trust" (the one Apache gives, or the one the ELB and Nginx give)? Thanks! Chris
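
    For what it's worth, behind an ELB the backend's REMOTE_ADDR is just the address of whatever connected to it (the ELB, or the last proxy hop), while the original client address is what the ELB records in X-Forwarded-For; that is generally the one to trust, with the caveat that upstream transparent proxies like the ISP's can still rewrite the source before it ever reaches AWS. Nginx can substitute the forwarded address into what PHP sees via the standard realip module; a sketch, assuming the module is compiled in (the CIDR below is a placeholder for the ELB/VPC range):

        # trust X-Forwarded-For only for connections arriving from the load balancer
        set_real_ip_from 10.0.0.0/8;
        real_ip_header   X-Forwarded-For;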

    Read the article

  • Setting up multiple wireless access points on same network

    - by SqlRyan
    I'd like to add wireless to my network, and I need multiple access points to cover the whole area. I'd like to set them up so that there's only one "wireless network" that the clients see, and they're switched as seamlessly as possible between access points as they wander around (if that's not possible, then at least so that they don't need to set up the security by hand on each one the first time, if possible). I've searched online, and there are quite a few sets of mixed instructions (same vs. different SSID, frequency, whether the security needs to match exactly, etc.). Can somebody who has experience doing this please let me know what they did? I imagine it's pretty simple, but there seems to be no clear-cut "yes, you can do this" online, even though I know you can. I have a mid-size LAN with about 20 workstations and two Domain Controllers on it. Also, I'll be doing this with consumer wireless components, if it makes a difference, not enterprise-level components (i.e. Linksys rather than Cisco).

    Read the article

  • Need help setting up mail DNS records

    - by Dave
    Hi, we are hosting our web site on HostMonster, but want our email to continue to be hosted at the old host. Our domain points to the HostMonster DNS servers, but I can't figure out the right configuration for the remote email servers. We have one MX entry:

        priority: 0    domain: ourdomain.com

    and then we have these DNS entries:

        name: mail.ourdomain.com    ttl: 14400    class: IN    type: A    record: old.host.ip.address
        name: mail1.ourdomain.com   ttl: 14400    class: IN    type: A    record: old.host.secondip.address

    Can someone tell me what I need to add or edit to get mail to route correctly to our old host? Thanks, - Dave
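
    If the MX destination really is ourdomain.com, mail will be delivered to whatever that name resolves to, which is now the HostMonster web server. For mail to keep flowing to the old host, the MX should point at a hostname that still resolves to the old server's IP; a sketch in zone-file form, keeping the existing A records exactly as they are (the old.host addresses stay as placeholders):

        ourdomain.com.        14400  IN  MX  10  mail.ourdomain.com.
        mail.ourdomain.com.   14400  IN  A       old.host.ip.address
        mail1.ourdomain.com.  14400  IN  A       old.host.secondip.address

    The priority value (10 here) only matters relative to other MX records; the important part is the destination hostname.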

    Read the article

  • setting visual bell to flash in iTerm

    - by blackwing
    Hi, I am using iTerm on OS X (Leopard) to SSH to a Linux machine. I run screen on the dev machine to save my work between sessions. I am not a big fan of the audio bell, and I don't like screen's default 'Wuff Wuff' bell (or any other little message shown at the bottom of the screen). What I would like instead is a flash (foreground and background colors swapped for a fraction of a second) as my visual bell. I used to use PuTTY, where it is as simple as ticking a checkbox, but I can't find such an option in iTerm. My question is: how can I set my visual bell to flash? The ideal answer would work with iTerm on the local computer, iTerm SSHed to a Linux server, and iTerm SSHed to a Linux server running screen.
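
    On the screen side the bell can at least be turned into a visual one in ~/.screenrc; whether an actual flash appears then depends on the terminal honouring the visual-bell escape. A sketch, assuming GNU screen defaults:

        # ask screen for a visual bell rather than an audible one
        vbell on
        # suppress the default "Wuff, Wuff!!" message
        vbell_msg ""

    In iTerm itself the flash is a per-profile terminal setting (later iTerm/iTerm2 releases expose it as a "visual bell"/"flash" checkbox); if the build in use has no such option, the screen-side change only silences the bell rather than flashing the window.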

    Read the article

  • Setting up a DNS name server for a mass virtual host with Bind9

    - by Dez
    I am trying to set up a chrooted DNS name server on a local LAN, so that everyone connected to the LAN can reach the mass virtual hosts defined for a development environment without having to edit their local /etc/hosts files by hand. The mass virtual host is named example.user.dev (VirtualDocumentRoot /home/user/example) and example.test (DocumentRoot /var/www/example). I set everything up and /var/log/syslog doesn't show any error, but when checking the DNS with host -v example.test the host is not found. Using the dig command I don't receive an answer either:

        dig -x example.test

        ; <<>> DiG 9.5.1-P3 <<>> -x imprimere
        ;; global options: printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 47844
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;imprimere.in-addr.arpa.    IN    PTR

        ;; AUTHORITY SECTION:
        in-addr.arpa.    600    IN    SOA    a.root-servers.net. dns-ops.arin.net. 2010042604 1800 900 691200 10800

        ;; Query time: 108 msec
        ;; SERVER: 80.58.0.33#53(80.58.0.33)
        ;; WHEN: Mon Apr 26 11:15:53 2010
        ;; MSG SIZE  rcvd: 107

    My configuration is the following:

    /etc/bind/named.conf.local

        zone "example.test" {
            type master;
            allow-query { any; };
            file "/etc/bind/zones/master_example.test";
            notify yes;
        };

        zone "1.168.192.in-addr.arpa" {
            type master;
            allow-query { any; };
            file "/etc/bind/zones/master_1.168.192.in-addr.arpa";
            notify yes;
        };

    /etc/bind/named.conf.options (note: we have a static IP address, so I forward queries to the DNS server at said IP address)

        options {
            directory "/var/cache/bind";
            forwarders { 80.34.100.160; };
            auth-nxdomain no;
            listen-on-v6 { any; };
        };

    /etc/bind/zones/master_example.test

        $ORIGIN example.test.
        $TTL 86400
        @    IN    SOA    example.test. root.example.test. (
                              201004227    ; serial
                              28800        ; refresh
                              14400        ; retry
                              3600000      ; expire
                              86400 )      ; min
        ; TXT    "example.test, DNS service"
        @    IN    NS    example.test.
        localhost        A        127.0.0.1
        example.test.    A        192.168.1.52
        example          A        192.168.1.52
        www              CNAME    example.test.

    /etc/hosts

        127.0.0.1       localhost example
        192.168.1.52    localhost example example.test

    /etc/resolv.conf (note: for BIND I just added the last 3 lines)

        nameserver 80.58.0.33
        nameserver 80.58.61.250
        nameserver 80.58.61.254
        search example.test
        search example
        nameserver 192.168.1.52
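
    A hedged observation about the resolv.conf above: the glibc resolver only uses the first three nameserver entries and only the last search line, so the local BIND at 192.168.1.52 (listed fourth) is probably never consulted and the external servers answer instead, which matches the arin.net SOA in the dig output. Also, dig -x performs a reverse (PTR) lookup and expects an IP address, so dig -x example.test will fail regardless of the zone. A sketch of a client resolv.conf that actually exercises the new server, assuming the forwarders in named.conf.options handle external lookups:

        nameserver 192.168.1.52
        search example.test

    and a forward query to test it: dig @192.168.1.52 example.test A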

    Read the article

  • Setting Up Apache as a Forward Proxy with Caching

    - by Karl
    I am trying to set up Apache as a forward proxy with caching, but it does not seem to be working correctly. Getting Apache working as a forward proxy was no problem, but no matter what I do it is not caching anything, to disk or memory. I already checked to make sure nothing in the mods-enabled directory conflicts with mod_cache (I ended up commenting it all out), and I also tried moving all of the caching-related directives into the configuration file for mod_cache. In addition I set up logging for caching requests, but nothing is being written to those logs. Below is my Apache config; any help would be greatly appreciated!!

        <VIRTUALHOST *:8080>
            ProxyRequests On
            ProxyVia On
            #ErrorLog "/var/log/apache2/proxy-error.log"
            #CustomLog "/var/log/apache2/proxy-access.log" common
            CustomLog "/var/log/apache2/cached-requests.log" common env=cache-hit
            CustomLog "/var/log/apache2/uncached-requests.log" common env=cache-miss
            CustomLog "/var/log/apache2/revalidated-requests.log" common env=cache-revalidate
            CustomLog "/var/log/apache2/invalidated-requests.log" common env=cache-invalidate
            LogFormat "%{cache-status}e ..."

            # This path must be the same as the one in /etc/default/apache2
            CacheRoot /var/cache/apache2/mod_disk_cache
            # This will also cache local documents. It usually makes more sense to
            # put this into the configuration for just one virtual host.
            CacheEnable disk /
            #CacheHeader on
            CacheDirLevels 3
            CacheDirLength 5

            ##<IfModule mod_mem_cache.c>
            #    CacheEnable mem /
            #    MCacheSize 4096
            #    MCacheMaxObjectCount 100
            #    MCacheMinObjectSize 1
            #    MCacheMaxObjectSize 2048
            #</IfModule>

            <Proxy *>
                Order deny,allow
                Deny from all
                Allow from x.x.x.x
                #IP above hidden for this post
                <filesMatch "\.(xml|txt|html|js|css)$">
                    ExpiresDefault A7200
                    Header append Cache-Control "proxy-revalidate"
                </filesMatch>
            </Proxy>
        </VIRTUALHOST>

    Thank you once again!
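
    Two hedged suggestions: mod_cache in 2.2 will not store a response that lacks an explicit validator or expiry unless told otherwise, and the cache-hit / cache-miss / cache-revalidate environment variables used in those CustomLog lines are only set by the 2.4 version of mod_cache (the MCache* directives suggest this is 2.2, so empty logs there prove little on their own). Directives worth trying while testing, all standard mod_cache / mod_disk_cache options:

        # relax the defaults so cacheable-but-unlabelled responses get stored
        CacheIgnoreNoLastMod On
        CacheIgnoreCacheControl On
        CacheIgnoreQueryString On
        # freshness to assume when the origin supplies no expiry information
        CacheDefaultExpire 3600
        CacheMaxExpire 86400

    Watching the cache directory itself (ls /var/cache/apache2/mod_disk_cache) after a few proxied requests is a more direct way to confirm whether anything is being written; the directory must also be writable by the Apache user.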

    Read the article

  • Setting Mercurial with Active Directory authentication and authorisation

    - by jbx
    I am evaluating the possibility of moving my organisation to Mercurial, but I am stumbling on two basic requirements which I can't find proper pointers for. How do I set up Mercurial's central repository to authenticate users against the central Active Directory and only allow them to push or pull if they have the right credentials? How do I set up a Mercurial project repository to only allow users belonging to a specific group to push or pull source code? We need this for per-project authorisation. On which HTTP servers (IIS or Apache etc.) are the above two requirements supported? Apologies if I am asking something obvious or if I am missing something fundamental about how authentication and authorisation work. Thanks.
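
    Both requirements are normally handled by the web server in front of hgweb rather than by Mercurial itself. On Apache this is mod_authnz_ldap talking to Active Directory over LDAP; a per-project sketch, assuming Apache 2.2 syntax with placeholder DNs, hostnames and password:

        <Location /hg/projectx>
            AuthType Basic
            AuthName "Mercurial - Project X"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://dc.example.local/DC=example,DC=local?sAMAccountName?sub?(objectClass=user)"
            AuthLDAPBindDN "CN=svc_hg,OU=Service Accounts,DC=example,DC=local"
            AuthLDAPBindPassword "placeholder-password"
            # only members of this AD group may push or pull this repository
            Require ldap-group CN=ProjectX-Devs,OU=Groups,DC=example,DC=local
        </Location>

    IIS can achieve the same effect with Windows (Integrated) authentication in front of the hgweb script, so both servers can satisfy the two requirements; finer-grained control (for example read for everyone, push only for a group) is typically layered on with a <Limit> block or Mercurial hooks.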

    Read the article

  • Setting up my own VPN or SSH server

    - by confusedWorker
    http://lifehacker.com/#!237227/geek-to-live--encrypt-your-web-browsing-session-with-an-ssh-socks-proxy
    http://ca.lifehacker.com/5763170/how-to-secure-and-encrypt-your-web-browsing-on-public-networks-with-hamachi-and-privoxy
    If I set up my own VPN or a similar server on my always-on computer at home, these articles say I could access Gmail from my work computer. My question is: will the IT guys at work be able to notice something strange is going on if I'm on Gchat at work through one of these things? (By IT guys I mean the two guys in charge of our network at work - it's a small company.)

    Read the article

  • Need help setting up OpenLDAP on OSX Mountain Lion

    - by rjcarr
    I'm trying to get OpenLDAP manually configured on OS X Mountain Lion. I'd prefer to do it manually instead of installing OS X Server, but if that's the only option (i.e., OpenLDAP on OS X isn't meant to be used without Server) then I'll just install it. I've seen guides that mostly just say to change the password in slapd.conf and then start the server, and it should work. However, whenever I try to do anything with the client it tells me: ldap_bind: Invalid credentials (49). I've tried encrypting the password as well as leaving it plain; it doesn't seem to matter. The version is 2.4.28, and I've read that as of 2.4 OpenLDAP uses slapd.d directories, but that doesn't seem to be the case on OS X. There was mention of an 'olcRootPW' I should use (instead of 'rootpw' in slapd.conf), but I only found that in a file named slapd.ldif. So ... I'm really confused. Has anyone got OpenLDAP working on OS X Mountain Lion without the server tools?
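
    For the classic slapd.conf route, the usual sequence is to hash the password with slappasswd, set it as rootpw, and then bind explicitly as the rootdn; error 49 just means the bind DN/password pair sent by the client does not match what slapd expects. A sketch with placeholder suffix and DN:

        # generate an {SSHA} hash to paste into slapd.conf
        slappasswd

        # slapd.conf (placeholders):
        #   suffix  "dc=example,dc=com"
        #   rootdn  "cn=admin,dc=example,dc=com"
        #   rootpw  {SSHA}paste-generated-hash-here

        # test with an explicit simple bind as the rootdn
        ldapsearch -x -H ldap://localhost -D "cn=admin,dc=example,dc=com" -W -b "dc=example,dc=com"

    The olcRootPW attribute only applies when the slapd.d (cn=config) layout is actually in use; with a plain slapd.conf, rootpw is the right directive.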

    Read the article

  • Setting up a dual boot by installing cloned partitions using Clonezilla

    - by Nimjox
    I'm trying to set up a dual-boot system with Windows 7 and Linux Mint. Here's the kicker: both are partition images I've saved using Clonezilla from different machines, and to make matters worse the Linux Mint one is formatted as LVM. I need both of these images specifically, as the Windows one is a corporate image that I must use and the other is a development image that took me a week to set up. I've gotten it almost all working, but my issue is that I can't get Clonezilla to restore Mint without messing up the Windows partition table, or vice versa. I can use the -k1 option, which doesn't copy the partition table, but then the cloned partition is unusable and I'm not sure how to fix the partition table. Here's what I'm doing:

    1. Use GParted to make partitions: sda1 40GB ntfs (Windows), sda2 extended 70GB, sda5 lvm2 pv 69.99GB (Linux), sda3 500MB (GRUB).
    2. Clonezilla the Windows image into the sda1 partition (keeping the partition table).
    3. Clonezilla the Linux image into the sda5 partition (not recreating the partition table).

    After all that I can boot into Windows using the default MBR. I can use a rescue/repair CD to reinstall GRUB, which will see Windows 7, but I can't get it to see the Linux OS. I'm thinking it's because of the sda5 partition, but I'm not sure. Any ideas on what I could do to get this working or where I might be going wrong? If there is any additional detail you need please let me know and I'll edit, as this is a lot.
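
    A sketch of the usual recovery steps from a live CD once both images are restored, assuming the layout above and that the Mint image contains a standard LVM volume group (the VG/LV names below are placeholders):

        # make the restored LVM volumes visible
        sudo vgscan
        sudo vgchange -ay

        # mount the Mint root LV and chroot into it
        sudo mount /dev/mapper/mint--vg-root /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt

        # reinstall GRUB to the MBR; os-prober should pick up Windows 7
        grub-install /dev/sda
        update-grub

    GRUB not "seeing" the Linux OS is consistent with the rescue CD not activating LVM first, since the root filesystem lives inside the sda5 physical volume rather than on sda5 directly.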

    Read the article

  • Setting a Static IP Running FreeBSD8 in VirtualBox hosted on Windows 7

    - by gvkv
    I'm using VirtualBox on Windows 7 (host) to run a FreeBSD (guest) based web server. I've assigned a static IP of 192.168.80.1 to the (virtualized) NIC, which runs in bridged mode. The problem is that when I ping an external server (such as google.com) I get a "No route to host" error:

        dimetro# ping google.com
        PING google.com (66.249.90.104): 56 data bytes
        ping: sendto: No route to host
        ...

    I can ping the BSD server from both another virtual machine and my host machine, and from the server I can ping everything on the local network. The router IP is 192.168.1.1/16. ADDENDUM: I have the following lines in /etc/rc.conf on the BSD VM to configure networking:

        defaultrouter="192.168.1.1"
        ifconfig_em0="inet 192.168.80.1 netmask 255.255.0.0"
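
    The two rc.conf lines look consistent (both addresses sit inside 192.168.0.0/16), so the first things to confirm are that the default route actually made it into the routing table and that the interface re-read the settings; a short checklist, assuming FreeBSD 8's stock rc scripts:

        # is there a "default" entry pointing at 192.168.1.1?
        netstat -rn

        # re-apply interface and routing settings from rc.conf
        /etc/rc.d/netif restart
        /etc/rc.d/routing restart

        # or add the default route by hand as a quick test
        route add default 192.168.1.1

    If the default route is present and 192.168.1.1 answers pings but external pings still fail, the next suspects are the router itself and the bridged-adapter selection in the VirtualBox network settings.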

    Read the article

  • Setting up a subdomain to respond on :443 with apache2

    - by compucuke
    I read through some guides on this and I believe it is possible to have apache respond to a subdomain through ssl. I have domain.com responding on 80 and I do not need domain.com responding on 443. Rather, the only use I have for ssl is for the subdomain sub.domain.com. So my site should be http://domain.com http://www.domain.com https://sub.domain.com https://www.sub.domain.com My CNAME records are as follows sub.domain.com xxx.xx.xx.xxx *.sub.domain.com xxx.xx.xx.xxx The A record exists but should not matter for the example. I set up a separate config file in sites-enabled for sub.domain.com NameVirtualHost xxx.xx.xx.xxx:443 <VirtualHost xxx.xx.xx.xxx:443> SSLEngine on SSLStrictSNIVHostCheck on SSLProtocol -ALL +SSLv3 +TLSv1 SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:-MEDIUM ServerAlias sub.domain.com DocumentRoot /usr/local/www/ssl/documents/ SSLCertificateFile /root/sub.domain.com.crt SSLCertificateKeyFile /root/sub.domain.com.key Alias /robots.txt /usr/local/www/ssl/documents/robots.txt Alias /favicon.ico /usr/local/www/ssl/documents/favicon.ico Alias /js/libs /usr/local/www/ssl/documents/js/libs Alias /media/ /usr/local/www/documents/media/ Alias /img/ /usr/local/www/ssl/documents/img/ Alias /css/ /usr/local/www/ssl/documents/css/ <Directory /usr/local/www/ssl/documents/> Order allow,deny Allow from all </Directory> WSGIDaemonProcess sub.domain.com processes=2 threads=7 display-name=%{GROUP} WSGIProcessGroup sub.domain.com WSGIScriptAlias / /usr/local/www/wsgi-scripts/script.wsgi <Directory /usr/local/www/wsgi-scripts> Order allow,deny Allow from all </Directory> </VirtualHost> Now, it is important to mention that https://domain.com responds with what I have running from script.wsgi above instead of on https://sub.domain.com. It does not respond to sub.domain.com. checking https://sub.domain.com causes a 105 error. This is a DNS error but I am convinced the DNS does not have a problem with the CNAME records, they just point to my IP. Am I doing something that Apache can not do?

    Read the article

  • Syntax Error when setting up Domain Alias

    - by Poundtrader
    I'm attempting to set up a domain alias via a Pre VirtualHost Include and I'm receiving the following error:

        Error: An error occurred while running:
        /usr/local/apache/bin/httpd -DSSL -t -f /usr/local/apache/conf/httpd.conf
        Exit signal was: 0
        Exit value was: 1
        Output was:
        ---
        Syntax error on line 15 of /usr/local/apache/conf/includes/pre_virtualhost_global.conf:
        CustomLog takes two or three arguments, a file name, a custom log format string or format name,
        and an optional "env=" clause (see docs)
        ---

    Basically I have two domains. The main domain has an OpenCart installation within a directory (/buy), and I'm attempting to use the multi-store function, which allows you to administer multiple stores on multiple domains via the one OpenCart dashboard. My issue is that I have the OpenCart installation within the /buy directory, so I have been given the following code, which should allow me to use this functionality across multiple cPanel accounts on the same VPS:

        <VirtualHost 87.117.239.29:80>
            ServerName newdomain.co.uk
            ServerAlias www.newdomain.co.uk
            Alias /buy /home/originaldomain/public_html/buy/
            DocumentRoot /home/newdom/public_html
            ServerAdmin [email protected]
            ## User newdom # Needed for Cpanel::ApacheConf
            <IfModule mod_suphp.c>
                suPHP_UserGroup newdom newdom
            </IfModule>
            <IfModule mod_ruid2.c>
                RUidGid newdom newdom
            </IfModule>
            CustomLog /usr/local/apache/domlogs/newdomain.co.uk-bytes_log “%{%s}t %I .\n%{%s}t %O .”
            CustomLog /usr/local/apache/domlogs/newdomain.co.uk combined
            ScriptAlias /cgi-bin/ /home/newdom/public_html/cgi-bin/
        </VirtualHost>

    Does anyone know how to get this code to work? Thanks,
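
    One hedged observation: the bytes-log CustomLog line is wrapped in curly (typographic) quotes rather than straight ASCII double quotes, so Apache does not see the format string as a single quoted argument and complains that CustomLog has too many arguments. Re-typing that line with plain quotes should satisfy the syntax check:

        CustomLog /usr/local/apache/domlogs/newdomain.co.uk-bytes_log "%{%s}t %I .\n%{%s}t %O ."

    (The %I and %O byte counters also require mod_logio to be loaded, which cPanel builds normally include.)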

    Read the article

  • Setting up CIFS ISO Repository for Xen

    - by user85610
    I recently started working with Xen, to try to make better use of an extra desktop box for development testing. I'd like to be able to do OS installs on it without having to burn discs, but I'm having trouble getting it to boot OS ISOs from a Windows share. My Windows box is running Windows 7 and is on a domain. I created a CIFS ISO SR in Xen, specifying the correct username and password to use. Xen is able to scan the share, I see the ISOs that are in the folder, and I can select them in the list in XenCenter. However, when I try to start the VM, I get "Error: Starting VM 'linxcentos' - INVALID_SOURCE - Unable to access a required file in the specified repository: file:///tmp/cdrom-repo-hIz-H7/isolinux/vmlinuz." I tried booting a different Linux ISO and got the same result. I know the ISOs are valid because I was able to install from them without issue when I tried VMware ESXi earlier. What am I missing here? It's Xen/XenCenter 6 and I'm trying to install the newest version of CentOS. I may end up burning a disc for now, but I'd like to get this to work, at least on the principle of not letting mysterious behaviors go unsolved...
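
    Not a fix, but a quick way to narrow it down is to mount the same share from the XenServer host console and confirm the ISO itself contains the file the error names; the share path, credentials and ISO filename below are placeholders:

        mkdir -p /mnt/isotest /mnt/iso
        # mount the Windows share on the host
        mount -t cifs //winbox/isos /mnt/isotest -o username=DOMAIN/someuser,password=secret
        # loop-mount the ISO and look for the file the error message points at
        mount -o loop /mnt/isotest/CentOS-x86_64-DVD.iso /mnt/iso
        ls -l /mnt/iso/isolinux/vmlinuz

    If the file is there when mounted by hand, the problem is more likely in how the SR was created or in the VM template being used than in the ISOs themselves.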

    Read the article
