Search Results

Search found 21853 results on 875 pages for 'point'.


  • Filtered Router Interface

    - by jviotti
    I'm having some problems with a Scientific-Atlanta DPR2320R2, specifically with the Wi-Fi. A few months ago I changed its username and password and now I can't remember them. I tried cracking it with Hydra, but that only made things worse: the web admin pages rendered partially and threw lots of errors. I then reset the router and found I was able to browse the web from an Ethernet-connected PC. Wi-Fi access is configured by registering each device's MAC address, and since the router was reset, the registered MAC addresses were lost. No device can connect to the Wi-Fi; in fact, the devices don't even see the network. I tried browsing to 192.168.0.1 to re-register the MACs, but I couldn't connect to the router's admin interface. I tried listing the hosts on the network:

      $ nmap -sP 192.168.0.0/24
      Starting Nmap 5.00 ( http://nmap.org ) at 2012-12-11 01:18 ART
      Host 192.168.0.1 is up (0.0018s latency).
      Host 192.168.0.11 is up (0.00025s latency).
      Nmap done: 256 IP addresses (2 hosts up) scanned in 59.62 seconds

    Then I checked that 192.168.0.1 was really up by pinging it; it responded to all my pings. I quick-scanned the access point:

      $ nmap 192.168.0.1
      Starting Nmap 5.00 ( http://nmap.org ) at 2012-12-11 01:08 ART
      Interesting ports on 192.168.0.1:
      Not shown: 999 closed ports
      PORT   STATE    SERVICE
      80/tcp filtered http
      Nmap done: 1 IP address (1 host up) scanned in 6.73 seconds

    Note the state of port 80: filtered. I'm pretty confused now. Any suggestion would be appreciated. Thanks in advance.

    Read the article

  • CD-ROM Can't Be Accessed After Installing VMware Tools on VMware Server 2.0.2

    - by Optimal Solutions
    Using VMware Server 2.0.2, I set up a new VM (Windows XP Pro), applied all of the updates, added Windows add-ons from the install CD, etc. I got it to a stable point, and up through that point I was able to access the CD-ROM drive (E: on my host). What I had never done before was install "VMware Tools", and since it claims to give better mouse and video support, I gave it a shot. What it does is place the install package in a virtual CD-ROM drive. I ran the install with no errors, and it asked for a reboot.

    I logged back in after the reboot and popped in the install CD for Microsoft Office 2003, and I received the message "Please Insert A Disc Into Drive D:". Drive D: would be the next logical drive after the C: drive where I chose to install the OS. The message box just sits there, and if I click "Cancel" to return to Windows Explorer, the status bar seems to blink every half second - as if it's polling for a CD-ROM drive or something. There are no bangs or exclamation marks in the Device Manager for any hardware.

    I had taken a snapshot prior to the VMware Tools install, and upon restoring it, the CD-ROM is back. I made copies of two other VMs, installed VMware Tools on them, and both experienced the same issue: Windows 2003 Server and Windows 7 (32-bit). Has anyone seen this issue and does anyone know of a fix? It would be nice to have the better graphics and better mouse control AND use my CD-ROM drive as well! Thank you.

    Read the article

  • How can I use two Internet connections in Ubuntu?

    - by Martin
    My goal is to be able to do something like this:

      curl google.com --interface ppp0
      curl google.com --interface p2p2

    ppp0 is a DSL connection, and p2p2 is a separate direct Internet connection. Currently I can only get one of these to work at a time. When I enable one, the other one stops working.

    /etc/network/interfaces:

      # The loopback network interface
      auto lo
      iface lo inet loopback

      # DSL
      auto p2p1
      iface p2p1 inet manual

      auto dsl-provider
      iface dsl-provider inet ppp
      pre-up /sbin/ifconfig p2p1 up # line maintained by pppoeconf
      provider dsl-provider

      # DIRECT
      auto p2p2
      iface p2p2 inet dhcp

    ifconfig:

      lo    Link encap:Local Loopback
            inet addr:127.0.0.1  Mask:255.0.0.0
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING  MTU:65536  Metric:1

      p2p1  Link encap:Ethernet
            inet6 addr: fe80::20a:ebff:fe21:99c6/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

      p2p2  Link encap:Ethernet
            inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
            inet6 addr: fe80::20a:ebff:fe17:1249/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

      ppp0  Link encap:Point-to-Point Protocol
            inet addr:53.193.231.167  P-t-P:53.193.224.1  Mask:255.255.255.255
            UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1492  Metric:1

    route -n:

      Kernel IP routing table
      Destination     Gateway         Genmask         Flags Metric Ref Use Iface
      0.0.0.0         0.0.0.0         0.0.0.0         U     0      0   0   ppp0
      10.0.10.0       0.0.0.0         255.255.255.0   U     0      0   0   eth2
      53.193.224.1    0.0.0.0         255.255.255.255 UH    0      0   0   ppp0
      192.168.1.0     0.0.0.0         255.255.255.0   U     0      0   0   p2p2

    By default, only ppp0 works. If I run "route add default gw 192.168.1.1 p2p2" then I can use p2p2 but ppp0 stops working. If I then run "route add default gw 53.193.224.1 ppp0" then I can use ppp0 again but p2p2 stops working. What can I do to be able to use both interfaces selectively?
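
    One possible approach (not taken from the question) is source-based policy routing with one routing table per uplink, so traffic bound to an interface leaves via the link that owns its source address. A minimal sketch, using the addresses and gateways shown above (the table names are arbitrary and must first be added to /etc/iproute2/rt_tables):

      # /etc/iproute2/rt_tables  (append these two lines)
      #   100 dsl
      #   200 direct

      # Routes and rules for the DSL link (ppp0, peer 53.193.224.1)
      ip route add default dev ppp0 table dsl
      ip rule add from 53.193.231.167 table dsl

      # Routes and rules for the direct link (p2p2, gateway 192.168.1.1)
      ip route add default via 192.168.1.1 dev p2p2 table direct
      ip rule add from 192.168.1.101 table direct

      # Keep one of the two as the ordinary system-wide default, e.g.:
      ip route replace default dev ppp0

      # Traffic bound to either interface now uses the matching uplink:
      curl --interface ppp0 google.com
      curl --interface p2p2 google.com

    Note that curl --interface binds to that interface's address, which is what the "from" rules match on; if the ppp0 address changes on reconnect, the rule has to be refreshed (for example from a script in /etc/ppp/ip-up.d/).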

    Read the article

  • Nice level not working on Linux

    - by xioxox
    I have some highly floating-point-intensive processes doing very little I/O. One is called "xspec", which calculates a numerical model and returns a floating point result back to a master process every second (via stdout). It is niced to level 19. I have another simple process, "cpufloattest", which just does numerical computations in a tight loop; it is not niced. I have a 4-core i7 system with hyperthreading disabled, and I have started 4 of each type of process. Why is the Linux scheduler (Linux 3.4.2) not properly limiting the CPU time taken up by the niced processes?

      Cpu(s): 56.2%us,  1.0%sy, 41.8%ni,  0.0%id,  0.0%wa,  0.9%hi,  0.1%si,  0.0%st
      Mem:  12297620k total, 12147472k used,   150148k free,   831564k buffers
      Swap:  2104508k total,    71172k used,  2033336k free,  4753956k cached

        PID USER  PR  NI  VIRT   RES   SHR  S %CPU %MEM    TIME+  COMMAND
      32399 jss   20   0  44728  32m   772  R 62.7  0.3   4:17.93 cpufloattest
      32400 jss   20   0  44728  32m   744  R 53.1  0.3   4:14.17 cpufloattest
      32402 jss   20   0  44728  32m   744  R 51.1  0.3   4:14.09 cpufloattest
      32398 jss   20   0  44728  32m   744  R 48.8  0.3   4:15.44 cpufloattest
       3989 jss   39  19  1725m  690m  7744 R 44.1  5.8  1459:59  xspec
       3981 jss   39  19  1725m  689m  7744 R 42.1  5.7  1459:34  xspec
       3985 jss   39  19  1725m  689m  7744 R 42.1  5.7  1460:51  xspec
       3993 jss   39  19  1725m  691m  7744 R 38.8  5.8  1458:24  xspec

    The scheduler does what I expect if I start 8 of the cpufloattest processes, with 4 of them niced (i.e. 4 get most of the CPU and 4 get very little).
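
    One thing worth checking (an assumption on my part, not something stated in the question): recent kernels enable scheduler autogrouping (CONFIG_SCHED_AUTOGROUP), under which nice values are mainly honoured between tasks in the same autogroup (roughly, the same session). Processes started from different terminals end up in different autogroups that share CPU as groups, so a nice 19 job in one group can still take a large share against un-niced jobs in another. A quick test:

      # 1 = autogrouping enabled
      cat /proc/sys/kernel/sched_autogroup_enabled

      # Temporarily disable it and see whether nice 19 is respected again
      sudo sysctl kernel.sched_autogroup_enabled=0

      # Alternatively, renice a niced job's whole autogroup
      # (writing a nice value to /proc/<pid>/autogroup affects its group):
      echo 19 | sudo tee /proc/3989/autogroup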

    Read the article

  • How to set up multiple DNS servers on an intranet

    - by Brent
    We have an Active Directory network with a mixture of Windows DNS servers and Linux BIND servers, and we want to use OpenDNS as our external DNS provider. I am wondering what the best way is to set up these servers (regarding forwarders, recursion, etc.). Active Directory is our main internal DNS for our domain and has 3 redundant servers; DHCP and all our servers use these as their DNS servers. Then we have a legacy AD server from an old network that is still authoritative for a bunch of domains. Finally, we have a couple of Linux BIND servers that are authoritative for a bunch of websites we host.

    Should our main AD servers point to our legacy AD server, which points to one of our BIND servers, which points to the other BIND server, which finally points out to OpenDNS? Or should our main AD servers point to all of these directly? Or is there a better option? What happens if a domain is listed in 2 places? Does DNS process the forwarders in order? What about root servers - if I want to use OpenDNS for "everything else", do I just list them as the last forwarders and delete the root servers from all my DNS servers? How does recursion work - in this scenario, should I be using recursion or not?
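
    One possible layout (a sketch, not the only valid design; zone names and internal addresses are placeholders): have every resolver forward directly to the server that is authoritative for a given internal zone, and forward everything else to OpenDNS, rather than chaining the servers one behind another. On a BIND box that might look like:

      options {
          recursion yes;
          // OpenDNS resolvers for "everything else"
          forwarders { 208.67.222.222; 208.67.220.220; };
      };

      // Internal AD zone - send queries straight to the AD DNS servers
      zone "corp.example.com" {
          type forward;
          forward only;
          forwarders { 10.0.0.10; 10.0.0.11; 10.0.0.12; };
      };

      // Zones still held by the legacy AD server
      zone "legacy.example.com" {
          type forward;
          forward only;
          forwarders { 10.0.0.20; };
      };

    The Windows-side equivalent would be conditional forwarders for the other servers' zones plus a global forwarder to OpenDNS; with forwarders in place you only need the root hints if you want the servers to resolve directly when the forwarders are unreachable.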

    Read the article

  • Adding Trac to the Apache2 configuration file

    - by Michael
    I currently have Apache2 running from a MythTV/MythWeb install. This made two config files available in sites-enabled. One of them ("default-mythbuntu") has the VirtualHost directive and looks like a normal site file (except for a change to the directory index). There is also a mythweb.conf file that only contains directives (no VirtualHost of its own) and sets various variables for MythWeb.

    I want to host a Trac site as well. According to http://trac.edgewall.org/wiki/TracOnUbuntu there are some settings I need for the Trac site. The directions there describe making a new VirtualHost, but I think I should use the current VirtualHost and just add the directives (I'll need to change the default location they point to, from the one on that page to just the Trac location). Where should I put these directives? Can I make a Trac.conf with just the settings for Trac and enable it, or do I need to put them in the default-mythbuntu file? I don't like the latter because it doesn't keep the Trac config separate. And how does Apache know that mythweb.conf (and the Trac.conf I want to make) belong to the VirtualHost defined in default-mythbuntu? It is the only VirtualHost defined on my system, if that matters.
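
    On the last question: directives placed at the server level (outside any VirtualHost), which is what mythweb.conf does, are inherited by every virtual host that doesn't override them - that is how Apache ties them to the VirtualHost in default-mythbuntu. A separate Trac.conf enabled the same way should therefore work; a minimal sketch assuming a mod_wsgi deployment (the paths and the /trac prefix are assumptions, adjust to your Trac environment):

      # /etc/apache2/conf.d/trac.conf  (or sites-available plus a2ensite)
      WSGIScriptAlias /trac /var/lib/trac/apache/trac.wsgi

      <Directory /var/lib/trac/apache>
          WSGIApplicationGroup %{GLOBAL}
          Order allow,deny
          Allow from all
      </Directory>

      <Location /trac/login>
          AuthType Basic
          AuthName "Trac"
          AuthUserFile /etc/apache2/trac.htpasswd
          Require valid-user
      </Location>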

    Read the article

  • Thunderbird alerts when expected email does not arrive

    - by user871199
    I am on Ubuntu 12.04, using Thunderbird as my email client; both are up to date. I have a bunch of nightly jobs that do their work and send a status mail. It gets tedious if you keep getting the same or similar mails every day, so I ended up writing mail filter rules that cause the emails to end up in their respective folders automatically. If things are going OK, I really don't need to read the emails. Failure emails are sent to a different alias - if the job runs at all. We recently discovered that one of the jobs had not run for a few days because someone accidentally disabled it.

    In order to avoid such problems in the future, I would like to set up Thunderbird so that if I don't get an email from a given address within a given duration, it alerts me. My dream solution would let me set up a frequency - some jobs run every 4 hours. Is this possible? Can I set up Thunderbird (preferred) or another email client to remind me when an expected email does not show up?

    Based on the comments and answer I received, here are the reasons why I would like to use Thunderbird:

    - We are already using Thunderbird.
    - It has calendar support via a plugin, so I suppose something is already watching the time to remind us about events; this may just be another type of event.
    - An additional job is one more failure point, and it may complicate life if it has to monitor multiple hosts.
    - Additional tools - same thing, one more failure point.
    - Thunderbird runs on all the platforms we are using (Windows and Ubuntu), so it is more or less a platform-independent solution.

    Read the article

  • Set up WLAN in 3-level house

    - by Balint Erdi
    I'm having a hard time setting up the network in our house. It has three levels (basement, ground floor, first level). The WLAN is set up by an ASUS RT-N12 router which provides perfect coverage for the ground floor and the basement. However, I set up my "home office" in the basement, where the signal barely arrived, so I purchased a TP-Link TL-WA901ND (300 Mbps) access point, which I set up in the other corner of the ground floor to extend the ASUS router's range. I used the AP's Repeater mode for that. The distance between my computer and the TP-Link AP is 6-7 meters, and there is a staircase going down from the ground floor to the basement, so there are no solid walls between the computer and the AP.

    This setup mostly works (I am writing this from the basement) but it is not reliable (the signal strength sometimes goes down to ~40% of the max), so I wonder if I am doing it correctly or if there is a better way. Screenshots of the router's and the AP's dashboard screens follow. Any comments on what I am doing wrong or hints for improvement are appreciated. Thank you.

    UPDATE: I tried one more thing: setting up the TP-Link AP in Access Point mode. That way, I can make it use a different SSID. I enabled WDS/Bridge so that it extends the range of the ASUS router (see screenshot). That does not work either: if I connect to the network set up by the TP-Link device (PELSTER-2), I can't reach the external network (the Internet). It seems the problem always comes back to this - the TP-Link does not have access to the external network, whatever its mode of operation.

    Read the article

  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Win 2003 R2 server, 32-bit. It has been provisioned to me with the Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750MB. It doesn't contain .tmp directories, so it is NOT a corrupt definitions server. It does contain directories named with a date pattern, YYYYMMDD.xxx. Some of these folders are 12 months old and I would like to recover the space.

    The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders and all will be OK with a service restart; however, there is a warning about having LiveUpdate Administrator installed. Firstly, I have no idea whether I have this installed - how do I check? - and secondly, can I just ditch these old files and restart?

    Regards, Gus Denton, Learning and Teaching, Uni of New South Wales, Sydney, Australia.

    For those trying to assist me: thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that, as I am on a provisioned VM from a central IT unit, I cannot run the Symantec commands (smc -stop) from the Run prompt with my admin creds. Basically I need to claw back some disk space on the C: drive, which is filling up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the LiveUpdate control panel and recovered 470MB. I suppose my last question for those more experienced than myself is: can I simply remove, say, the two oldest virus definition folders without completely foobaring Endpoint Protection and the server?

    Read the article

  • ProCurve 1800 switch issue

    - by user98651
    I recently deployed ProCurve 1800-24G switches in place of some older ProCurve 2424M switches in my network. However, I'm having a serious problem with the switch connected to the router. Every night, when our Windows 2008 R2 server (off site) runs a backup to an iSCSI target (on site), facilitated through a PPTP tunnel, the LAN loses connectivity with the router. To clarify, there is only one router, and it is connected to the switch affected by this problem. The only way to resolve the issue is to either reboot the router or pull the Ethernet cable that goes to the router and plug it back in. During the outage, clients cannot get DHCP leases, resolve DNS, ping, or do anything else with the router in this state.

    Neither the switch nor the router is configured extensively, and the issue only seems to have surfaced with the new switch in place. I have tried a number of things, including replacing cables, rebooting, and checking the switch configuration (it is literally as basic as you can get at this point - a flat LAN, no trunking). Interestingly, the router (accessed externally) shows no changes in configuration or status during this state, but similarly cannot ping or access other hosts on the network. The issue occurs at different stages of the backup (i.e. different amounts transferred). I've also dumped packets from the switch into Wireshark but cannot seem to find any anomaly yet (I'm looking at packets around the time the issue appeared and at the time when I reset the NIC).

    Any suggestions for what to look for? Ideas on what could be causing this? I'm seeing some transmit/receive errors on the NIC from both the router and switch side, but nothing serious compared to the total packet counts. I'm seriously doubting hardware at this point, as I have tried another switch, different cables, and a different NIC on the router.

    Read the article

  • Utility to store/cache all web pages and YouTube videos

    - by jonathanconway
    I found myself in the following situation. I'm travelling abroad with my laptop. I connect to a WiFi point, do a bit of browsing and play a YouTube video or two. Then I disconnect and hop on either a plane or a taxi. Now I want to go back to some of the webpages I was browsing before and continue reading them, or watch some more of that YouTube video. Unfortunately it seems like none of these resources are cached, or if they are, I have no idea how to access them.

    Here's what I'd like: a utility that starts when my computer boots and sits in the background, silently caching all the web pages that I view - and not only that, but also resources such as YouTube videos. Later, when I re-navigate to a site while disconnected, the browser automatically pulls the pages from my cache rather than giving me a 404 error. Or I can click an icon in the system tray and see a list of all the pages/videos in the cache and view any that I like.

    I'm sure Internet Explorer had a feature like this at some point, like "Offline Mode" or something, but these days it doesn't seem to work: even when I select that option I still can't view pages that I'm certain I downloaded before. So has the utility I'm talking about been developed yet?
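
    There may not be a single tool that does all of this transparently, but as a manual fallback (an assumption, not a specific product recommendation) individual pages and videos can be saved for offline use from the command line:

      # Snapshot a page plus the images/CSS/JS it needs, rewriting links for offline reading
      wget --mirror --convert-links --adjust-extension --page-requisites \
           --no-parent http://example.com/some/article

      # Save a YouTube video locally with youtube-dl (a separate tool)
      youtube-dl "http://www.youtube.com/watch?v=VIDEO_ID"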

    Read the article

  • How can I use a DNS server to create simple HA (high availability) for my website?

    - by marc22
    How can I use a DNS server to create simple HA (high availability) for a website? For example, if my web server at 192.168.0.120:80 is offline, traffic should go to 192.168.0.130:80. (For easier reading I'm using internal IPs; in reality these would be servers at different hosting companies.)

    You're right that "high availability" is the wrong word - of course I was thinking about failover. Using several IPs in the A records is good for simple load balancing, but not if I want to notify the user about the failure (for example, display a page saying "Oops, something is wrong with our server, we're working on it") instead of a "can't establish connection" error.

    I was thinking about setting up something like this: two DNS servers, one of them installed on the www server itself, both serving my domain with a low TTL, and two NS records - the first for the DNS server that sits with my Apache server, the second for the other DNS server. If a user tries to connect, he gets the IP of the www server from the first DNS server; if that DNS server is offline (then probably the www server is also down), the resolver tries the second NS record, which points to the other DNS server, and that one points to the "backup" page.

    That's what I would like to do. If you have another idea, please share. A reverse proxy is not an option, because the IP of the server can change, and I may use another country for the backup.
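
    As a sketch of that two-NS idea (names, addresses and SOA values are placeholders), the key ingredients are a very low TTL and a second nameserver whose copy of the zone points at the backup page:

      ; zone as served by ns1 (runs next to the main web server)
      $TTL 60
      @    IN  SOA  ns1.example.com. hostmaster.example.com. (
                    2012121101   ; serial
                    3600 600 604800 60 )
           IN  NS   ns1.example.com.
           IN  NS   ns2.example.com.
      www  IN  A    192.168.0.120   ; ns2's copy would instead say 192.168.0.130

    Be aware that resolvers do not try NS records in any guaranteed order and they cache answers, so at best this gives rough, slow failover; a health-checked DNS failover service or a floating IP is more predictable.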

    Read the article

  • How do I get a Wireless N PCI card to connect to a wireless G router?

    - by Andy
    I'm having some problems setting up a new wireless PCI card on a Windows XP SP3 PC. I know that the router is configured correctly: it is a Linksys WRT54GL using 802.11b/g, the security mode is WPA2 Personal with TKIP+AES encryption, and I am able to connect to it fine with my laptop (a first-gen MacBook with an 802.11b built-in card).

    The new PCI card is also Linksys, but it supports 802.11n. The card seems to be installed OK (Windows sees it fine and doesn't list any errors in Device Manager), but when it scans for available wireless networks it can't find my network (the router is set to broadcast the SSID). I tried to enter the network SSID manually, but that didn't help. I chose WPA2-PSK for network authentication; the only options for encryption are TKIP or AES, and I've tried both without success. I am sure that I typed in my wireless key correctly. At this point, I don't think the problem is with encryption but with something else.

    It almost seems like I need to switch the wireless card into g mode, but I haven't found a way to do that (if that is even possible or necessary - I thought n was fully backwards compatible with g). Also, the PC is in the same room as the router and my laptop, so I don't think it is an interference issue. Any ideas what I'm doing wrong? I'm running out of things to try at this point. :(

    Read the article

  • Running two different website domains on one IP address

    - by Akshar Prabhu Desai
    Here is my Apache configuration file. I have two domain names running on the same IP, but I want them to point to different webapps. In this configuration, both point to the one intended for e-yantra.org; if I copy-paste the akshar.co.in part before the e-yantra.org one, both start pointing to akshar.co.in. I have two A records in DNS (one per domain name) pointing to the same IP.

      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName www.e-yantra.org
          ServerAdmin [email protected]
          DocumentRoot /var/www
          <Directory />
              Options FollowSymLinks
              AllowOverride All
          </Directory>
          <Directory /var/www/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
          </Directory>
          <Directory /var/www/ci/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
          </Directory>
          <Directory /var/www/db2/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
          </Directory>
          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride None
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>
          ErrorLog /var/log/apache2/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog /var/log/apache2/access.log combined
          Alias /doc/ "/usr/share/doc/"
          <Directory "/usr/share/doc/">
              Options Indexes MultiViews FollowSymLinks
              AllowOverride None
              Order deny,allow
              Deny from all
              Allow from 127.0.0.0/255.0.0.0 ::1/128
          </Directory>
      </VirtualHost>

      <VirtualHost *:80>
          ServerName www.akshar.co.in
          ServerAdmin [email protected]
          DocumentRoot /var/akshar.co.in
          <Directory /var/akshar.co.in/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
          </Directory>
      </VirtualHost>
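
    One common cause (an assumption here - the question doesn't say which hostnames were tested): with name-based virtual hosts, any request whose Host header matches no ServerName or ServerAlias is served by the first VirtualHost for that address and port, so requests for the bare domains e-yantra.org and akshar.co.in (without www) would all land in whichever vhost is listed first. A sketch of the fix, plus a way to check how Apache has parsed the vhosts:

      <VirtualHost *:80>
          ServerName www.e-yantra.org
          ServerAlias e-yantra.org
          DocumentRoot /var/www
          # ... keep the existing Directory/logging directives here
      </VirtualHost>

      <VirtualHost *:80>
          ServerName www.akshar.co.in
          ServerAlias akshar.co.in
          DocumentRoot /var/akshar.co.in
      </VirtualHost>

      # Show which vhost is the default and which names map where:
      #   apache2ctl -S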

    Read the article

  • Is there a way to use something similar to a capture group for apache2 server name

    - by Zipper
    I have a server that sits behind an AWS load balancer. The LB can't do an automatic redirect from HTTP to HTTPS, and the LB is doing my SSL. So I need to set up Apache on my servers to redirect any request on port 80 to https://FOOBAR, where FOOBAR is the domain the request came in on. I haven't been able to find a way of doing that so far - I'm an Apache newb, though. What I'm trying to do is something similar to this (using regex as an example):

      <VirtualHost *:80>
          ServerName (.*)
          Redirect / https://\1
      </VirtualHost>

    If there's a better way to do this, please let me know.

    EDIT: Sorry, I should have explained why this is happening. I actually have a Tomcat server running my app on port 8080, and the LB points to that. From what I can tell so far, my requests come in on HTTP (which is expected), but when my app server sends redirects (for login purposes) it redirects to HTTP instead of HTTPS. I haven't had a chance to fully investigate this, but I wanted to work around it for now by pointing the LB at the Apache server and having any port 80 requests redirect to 443.

    EDIT 2: The other reason I'm interested in doing this is that, since the LB can't do the redirect, I need another redirect mechanism in place to tell the browser to go to https://FOOBAR.
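
    ServerName and Redirect can't capture the hostname like that, but mod_rewrite can read it back out of the request via %{HTTP_HOST}. A minimal sketch (assuming mod_rewrite is enabled, e.g. with a2enmod rewrite):

      <VirtualHost *:80>
          RewriteEngine On
          RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]
      </VirtualHost>

    This preserves whatever host and path the request arrived with, which also covers the EDIT 2 case of simply bouncing every port 80 request to its HTTPS equivalent.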

    Read the article

  • Backup hardware and strategy on distributed Windows Server 2008 network

    - by CesarGon
    This question is a follow-up to this. We have a Windows Server 2008 R2 domain over a network that spans two different buildings, linked by a 100-Mbps point-to-point line. Over 60 users work in the organisation. We are planning to use DFS folders and DFS replication for file serving across the organisation. The estimated data volume is over 2 TB and will grow at approximately 20% annually. The idea is to set up a DFS file server in each building and use DFS so that all the contents stay replicated over the 100-Mbps link.

    We are now considering backup hardware and strategies. We are Dell customers and, after browsing the online Dell catalogue, I can see a number of backup hardware options. My main doubts are the following:

    - Would you go for a tape library, disk backup, or are there other options worth considering?
    - Would you perform batch backups (i.e. nightly) or would you use continuous backup (i.e. while users are working)?
    - Would you use a dedicated backup server to which the tape library (or any other backup device) is attached, or is there another way of doing things?

    My experience with backup hardware and overall setup is limited, so I appreciate any good piece of advice that you may have. Thanks.

    Read the article

  • Creating basic, redundant gigE or IB storage network for Xen?

    - by StaringSkyward
    With only a modest budget, I want to move my 4 Xen servers over to network storage - either NFS or iSCSI, to be determined by how well it performs when we test it (we need good throughput and it must keep working through link and switch failure tests). We may add another couple of Xen servers at some point after this is done. I don't know much about the design and operation of storage networks, so I would really appreciate some hints from those with experience.

    The budget is around $3,800 excluding the storage appliance. I am currently thinking these are my options to remain on budget:

    1) Go for used InfiniBand hardware and aim for 10 Gb performance.
    2) Stick with gigabit Ethernet and buy some new switches (Cisco or ProCurve) to create a storage-only Ethernet LAN. Upgrade to 10 GbE later, but try to use hardware capable of it where possible to reduce upgrade costs.

    I have seen used, warrantied InfiniBand switches at reasonable prices (presumably because big companies are converging on 10-gigabit Ethernet?) and the promise of cheap 10 Gb is attractive. I know nothing about IB, so here come the questions:

    - Can I buy 2 x switches and have multiple HBAs in my Xen and storage nodes to get redundancy and increased performance without complexity or expensive management software costs? If so, can you point me to some examples?
    - Do NFS and iSCSI work just the same regardless?
    - Is IB a sensible choice, or could/should I use Ethernet or FC on the same budget? I'm keen not to get boxed into a corner for future upgrades, however.

    For the storage I am likely to build a storage server using NexentaStor, with the intention that I can later add more disks and SSDs, and add another server to provide a failover option at the storage level. An HP LeftHand starter SAN is also under consideration. Thanks in advance.

    Read the article

  • Apache2/mod_fcgid/PHP Process Limits Not Respected

    - by Daniel
    I've recently moved to Apache2 / mod_fcgid / PHP from nginx / php_fpm. This is the second server on which I've made this migration, but it's used much less frequently than the first, which is working like a charm. The problem is in the PHP processes that it's spawning. Looking at the mod_fcgid documentation, it appears that the default timeout for killing idle processes is 300 seconds; I've changed that to 20. At this point, I'd be fine if 300 worked - but it's not happening. The server has been running for nearly a day now, and server-status shows 12 active processes:

      Process name: php5
      Pid    Active  Idle   Accesses  State
      19243  84879   14420  11        Ready

      Process name: php5
      Pid    Active  Idle   Accesses  State
      20954  82143   149    22        Ready
      20947  82149   149    22        Ready
      20953  82143   149    13        Ready

      Process name: php5
      Pid    Active  Idle   Accesses  State
      20589  82765   23644  72        Ready

      Process name: php5
      Pid    Active  Idle   Accesses  State
      17663  86103   2034   117       Ready

      Process name: php5
      Pid    Active  Idle   Accesses  State
      19862  83961   1976   91        Ready

      Process name: php5
      Pid    Active  Idle   Accesses  State
      18495  85825   5164   18        Ready

      Process name: php5
      Pid    Active  Idle   Accesses  State
      25463  75109   23948  24        Ready

      Process name: php5
      Pid    Active  Idle   Accesses  State
      2466   60019   60016  2         Ready

      Process name: php5
      Pid    Active  Idle   Accesses  State
      20729  82541   12592  23        Ready

      Process name: php5
      Pid    Active  Idle   Accesses  State
      22135  80616   46361  6         Ready

    PHP applications are not being served at this point - Apache is returning a 503. However, it is still serving the server-status module, and mod_mono/Mono 2.10 applications are still being served. The problem is with PHP. My /etc/apache2/mods-available/fcgid.conf:

      <IfModule mod_fcgid.c>
        AddHandler fcgid-script .fcgi
        FcgidConnectTimeout 10
        FcgidMaxRequestsPerProcess 500
        FcgidIdleTimeout 20
        FcgidFixPathinfo 1
        FcgidMaxProcesses 10
      </IfModule>

    (heh - MaxProcesses isn't being respected either...) Of course, fcgid.conf is symlinked in mods-enabled.
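
    One frequent culprit with mod_fcgid and PHP (an assumption - the question doesn't show how php-cgi is launched) is PHP spawning its own children via PHP_FCGI_CHILDREN, which leaves mod_fcgid counting and idle-killing only the parent wrappers. The usual advice is to let mod_fcgid manage the pool itself and to keep PHP's request limit in step with FcgidMaxRequestsPerProcess; a sketch of such a wrapper (the wrapper path is made up):

      #!/bin/sh
      # /usr/local/bin/php-fcgid-wrapper
      # No PHP-managed children: each fcgid process is one php-cgi that
      # mod_fcgid can count, idle-kill and recycle on its own.
      PHP_FCGI_CHILDREN=0
      export PHP_FCGI_CHILDREN
      # Match FcgidMaxRequestsPerProcess 500 so PHP exits when mod_fcgid expects it to.
      PHP_FCGI_MAX_REQUESTS=500
      export PHP_FCGI_MAX_REQUESTS
      exec /usr/bin/php-cgi

    and in the vhost or fcgid configuration, point the handler at it with something like "FcgidWrapper /usr/local/bin/php-fcgid-wrapper .php".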

    Read the article

  • setting up subdomain wildcard: configured A record, VirtualHost... still does not work

    - by user80314
    Running Apache on CentOS, I'm trying to set up wildcard subdomains: basically I want *.mydomain.com to point to mydomain.com. With cPanel I added *.mydomain.com, and with WHM I made sure that the A record is pointing to the right IP. My A record:

      *  14400  IN  A  X.x.x.x

    My httpd.conf:

      ServerName _wildcard_.mydomain.com
      ServerAlias *.mydomain.com
      DocumentRoot /home/mydomain/public_html
      ServerAdmin [email protected]
      UseCanonicalName Off
      ## User userdomain # Needed for Cpanel::ApacheConf
      UserDir enabled userdomain
      <IfModule mod_suphp.c>
          suPHP_UserGroup userdomain userdomain
      </IfModule>
      <IfModule !mod_disable_suexec.c>
          <IfModule !mod_ruid2.c>
              SuexecUserGroup usergrdomain userdomain
          </IfModule>
      </IfModule>
      <IfModule mod_ruid2.c>
          RUidGid userdomain userdomain
      </IfModule>
      ScriptAlias /cgi-bin/ /home/mydomain/public_html/cgi-bin/
      # To customize this VirtualHost use an include file at the following location
      # Include "/usr/local/apache/conf/userdata/std/2/mydomain/wildcard_safe.mydomain.com/*.conf"

    I have the VirtualHost in httpd.conf set to point to the domain root. I restarted Apache, the server, and DNS - still nothing. I have spent hours researching this, followed instructions, and set everything correctly. What am I missing?
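
    A quick way to test each layer separately and see which one is dropping the wildcard (hostnames here are placeholders):

      # 1. Does the wildcard actually resolve, both at the authoritative server
      #    and through your normal resolver?
      dig +short anything.mydomain.com @ns1.mydomain.com
      dig +short anything.mydomain.com

      # 2. Which vhost does Apache pick for such a name?
      #    (cPanel's Apache normally lives under /usr/local/apache)
      /usr/local/apache/bin/apachectl -S
      curl -s -H "Host: anything.mydomain.com" http://X.x.x.x/ | head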

    Read the article

  • I accidentally hijacked my localhost

    - by Zach L
    Opening localhost in the browser points to a local webpage (examplePage) after I played with some config files a while back, and I can't figure out how to restore the default behavior.

    Background: I have XAMPP installed on my Windows 7 machine, and a webpage at c:/xampp/htdocs/examplePage. A couple of weeks ago, I was on a mission to get site-root-relative URLs (/resource) to work, so I played around with a bunch of apache/conf files, including httpd.conf and httpd-vhosts.conf, and was also messing with the Windows hosts file. I gave up at some point, didn't document exactly what I did, and have since probably forgotten some of it. Many of my changes stemmed from suggestions in this StackOverflow post.

    What I've tried:

    - I commented out my additions to the hosts file.
    - I turned off XAMPP (thus hopefully negating the effect of any Apache config file).
    - I reverted to my original DocumentRoot in httpd.conf anyway (xampp/htdocs).

    localhost still displays examplePage, even with XAMPP turned on (so my reverted DocumentRoot isn't taking effect). Does anyone know what I may have done and how I can fix it?

    Update: It's been resolved - thank you everyone so much. In Task Manager there were a couple of instances of httpd.exe (Apache HTTP Server). I ended these, then opened XAMPP and restarted Apache. All references to examplePage in my .conf files that I could find had been commented out or removed, so I imagine the old versions were still in effect for some reason, and manually ending the Apache processes fixed it. As a point of interest, it's still a mystery why those processes were running - I cannot reproduce the situation. I must've stumbled upon a XAMPP bug of some sort.

    Read the article

  • The bottlenecks of any computer, what to look for?

    - by WebDevHobo
    Whether it is a laptop or a desktop, any computer is made up of several pieces of hardware that communicate with each other, sending data back and forth to ensure that the user gets the desired results. I have seen some theoretical material on computers and hardware, but I wonder how it all comes together:

    - CPU
    - RAM
    - Graphics card
    - L1 cache
    - L2 cache
    - L3 cache
    - FSB
    - ... and all the other parts.

    Which is the biggest bottleneck? Why would a person not want or need a big value in one of those categories in certain situations?

    P.S.: when reading the specs of the i5 750 processor, I came across this description: "In place of the FSB, one or more high speed, point-to-point buses called Quick Path Interconnect (QPI) are used, formerly known as Common Serial Interconnect Bus or CSI. QPI features higher bandwidth than the traditional FSB and is better suited to system scaling." What is this, and how does it compare to the FSB?

    EDIT: I am not planning to buy a computer at all. The goal of this question is to understand the internal relationships between the various hardware pieces, their specific functions, and how they work together. For instance, I have heard that a somewhat higher-than-usual amount of L2/L3 cache can help speed up your computer - what is behind that claim? Also, I forgot to mention hard-disk RPM.

    Read the article

  • VMware and Windows Activation

    - by Peter M
    Yesterday I installed SlySoft's Virtual CloneDrive in order to mount an ISO for a software installation on my host system (XP Pro SP3). This morning I fired up VMware and made a linked clone of an existing XP VM in order to do some software testing. This is the sort of thing I do all the time, and the base XP VM that I clone was activated a long, long time ago. The surprise today was that the newly cloned VM was no longer activated, and XP cited major changes in hardware as the reason. I repeated the test with a full clone of the base system and got the same message. I then started up my base VM and it seemed to be activated, yet another VM (which I fully cloned from the base VM a long time ago) now started reporting that XP was not activated.

    At this point I guessed that Virtual CloneDrive might be the source of my hardware differences, so I uninstalled it and rebooted. After this I made a new linked clone and a full clone of the base VM, and XP did not complain about not being activated. So I seem to be back to where I was before installing Virtual CloneDrive, with the exception that that one particular XP VM continues to complain about activation (even though 4 or 5 other XP VMs are fat and happy).

    Now to my questions:

    - Why would adding Virtual CloneDrive to the host system affect XP activation in the VMs? From their point of view I would have thought that the environment had not changed, as I had not enabled any new hard drives in their systems. Or is adding a hard drive to the host system enough to upset XP activation?
    - Since this event, one of my fully cloned VMs is still reporting that XP is not activated even though I have removed Virtual CloneDrive. Is there any way to convince XP that it is on the same system as yesterday, or are my only options to reactivate or restore the VM from a previous backup?

    Read the article

  • Value of Itanium over x86_64 for Oracle Deployment

    - by Antitribu
    We are looking at a new environment to run our Oracle database, currently running on SUSE (potentially migrating to Red Hat). Our database is approximately 100 GB and performs adequately on our current x86_64 hardware with approximately 6 GB of RAM allocated to it. We are growing quickly, however, and will require more performance shortly. Given the cost of Oracle licenses, we would like to maximize the value from each license by choosing the most appropriate CPU to run the software on. The questions are:

    - Are there substantial benefits to looking at Itanium hardware, and are there any drawbacks?
    - Is there a point where Itanium starts to scale out better?
    - What are the long-term support options for Itanium?
    - Given the dominance of x86, would it be safer long term to stick with x86?
    - On average, what would be the performance benefit of implementing an Oracle database on Itanium over x86_64? Is this an issue at all, or will other factors (I/O, RAM) cap out first?

    If anyone can point me towards some solid documentation comparing the two platforms, with good case analysis of when to choose which, I'm more than happy to accept that as an answer.

    Read the article

  • Router recommendation to virtualize 800 IPs

    - by delerious010
    I've recently been looking at getting some new load balancers for our environment, as we are expecting to double our client base in the next 12 months. Currently we have 400 public IPs serving 800 clusters (2 clusters per IP, due to ports) on Coyote Point balancers, distributing connections to 3 web servers and serving about 6 GB outgoing and 2 GB incoming per day. If we double, this would be about 800 IPs, possibly 1600 clusters, and about 6 servers per cluster (for a total of 9600 so-called "real servers", to use Barracuda's lingo). Due to the number of clusters, most solutions I've looked at (Coyote, Barracuda, Loadbalancer.org) seem unsure whether they'll be able to handle our planned growth, mostly because of the health checks performed on the servers... which makes total sense when you think about it.

    So the fine folk at loadbalancer.org recommended that we may be better off offloading the 400-800 public IPs, which we require for SSL e-commerce solutions, onto a forward-facing router. From that point on, the router could do some mangling to route EXT_IP:443 to INT_IP:INT_PORT, which would then allow us to reduce the load balancer configuration to 1 or 2 clusters, thus resolving the health check problem.

    Does this idea make sense to y'all? Or would you have other recommendations to make? Secondly, what router would you recommend for such an undertaking? I'd be looking at something that has some form of failover mechanism built in.

    On a totally unrelated note, I've got to admit that I'm extremely pleased with the responses I got from loadbalancer.org. Their responses to my inquiries were surprisingly helpful (i.e. I didn't feel as if I was talking to a sales guy trying to push something). (No, I don't work for them, and sadly nor are they sending me free gear.)
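
    For what it's worth, the "mangling" described above is ordinary destination NAT; if the forward-facing box were a Linux router, a minimal sketch would look like this (addresses and ports are placeholders):

      # Send traffic for one public HTTPS address to an internal cluster member
      iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 443 \
               -j DNAT --to-destination 10.0.0.21:8443

      # The box must forward packets for this to work
      sysctl -w net.ipv4.ip_forward=1

    One PREROUTING rule (or a generated set of them) is needed per public IP/port pair; dedicated routers and firewalls expose the same idea as "virtual servers" or static NAT entries, usually with VRRP/CARP available to cover the failover requirement.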

    Read the article

  • Unix Permissions: Enable access to files no matter the user?

    - by TK Kocheran
    I've been using Linux for a long time and I'm still completely in the dark about how file permissions really work. With that in mind, does anyone have any books or thorough guides I could read to really understand things completely? I've done my fair share of sysadminning, so I know the easy stuff like making directories readable and writable, making files executable, and changing the owner of a file, but on sharing files across users, I'm lost.

    Here's my main problem. I have a number of machines across which I intend to synchronize my music library. I've been using Unison for a while now and it's a great choice, as I can easily run it over SSH on the local network I just set up. Win-win. Up until this point, I've been synchronizing computers using a 2 TB external hard drive (computer 1 unisons to the HD, computer 2 unisons to the HD, etc.). This is tedious at best, especially since I encrypted the drive, making it a huge hassle to hook it up to all of my machines and sync it. Anyway, the drive is running ext4 (in TrueCrypt), so it maintains all the Unix filesystem info like owners and groups.

    I just set up a new machine and Unison'd it to get the music onto it, and I realized that now all of my permissions are fubar. I had to run Unison as root, since that was the only way I could get the files to come off of the external drive. Apparently, since I'm using a different user name on this machine than my usual "rfkrocktk" used across all the other machines, this throws a huge wrench in the gears.

    Here's my use case. This laptop has two effective users, "leandra" and "rfkrocktk". I want to share music between these two users, so I symlinked /home/rfkrocktk/Music to point to /home/leandra/Music. How do I (a) allow both users to read/write/delete files in this folder, and (b) keep everything nicely in sync without messing up file ownership?
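
    A common way to handle the two-user case (a sketch, not taken from the question; the group name is made up) is a shared group plus setgid directories, so files created by either user stay accessible to both:

      # Create a shared group and put both users in it (re-login afterwards)
      sudo groupadd music
      sudo usermod -aG music leandra
      sudo usermod -aG music rfkrocktk

      # Give the library to that group, make it group-writable, and set the
      # setgid bit on directories so new files inherit the group
      sudo chgrp -R music /home/leandra/Music
      sudo chmod -R g+rwX /home/leandra/Music
      sudo find /home/leandra/Music -type d -exec chmod g+s {} +

    Both users may also want a umask of 002 so new files are group-writable by default; if finer control is needed, default ACLs (setfacl -d) are the next step up.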

    Read the article
