Search Results

  • RRAS NAT not working on a certain computer

    - by legenden
    This is driving me crazy. I have a virtualized W2K8 server running RRAS. Every other computer or server on the network can access the internet through the NAT except one. On that one server, it just won't work. I can ping the IP address of the NAT gateway just fine, and everything else (SMB, etc.) works. DNS, which is hosted by the same server, also works just fine. I have even reinstalled the OS on the problem server and it still doesn't work.

    Recap of the steps I tried:

    - There are 3 network cards in the server; I tried every one, and different switch ports. Not a hardware problem.
    - Reinstalled W2K8 R2 on the server with the problem; didn't help.
    - Tried the IP of the internet gateway directly - this did work (!). But I need NAT to work.
    - All firewalls are disabled.
    - Removed the computer from the domain, deleted the computer account in Active Directory Users and Computers, and added it back.
    - Disabled all other network adapters, set a static IP, and specified the gateway IP manually.

    When I tracert a public IP, the first hop (and every hop after it) comes up as:

        C:\>tracert www.google.com
        Tracing route to www.l.google.com [209.85.225.106] over a maximum of 30 hops:
          1  *  *  *  Request timed out.
          2  *  *  *  Request timed out.

    From a different computer, on which NAT works, the first hop comes up as:

        tracert www.google.com
        Tracing route to www.l.google.com [209.85.225.105] over a maximum of 30 hops:
          1  <1 ms  *  <1 ms  xxxx [10.5.1.1]

    This is the most bizarre problem I have ever come across, and I realize it's a long shot asking here given all the details, but I'm pulling my hair out. Maybe someone has an idea...
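
    Not a fix, but a hedged diagnostic sketch (the gateway address is taken from the working tracert above - substitute the real one, and the public IP is just an arbitrary example): confirming from the problem server that the gateway's MAC actually resolves, and tracing to a raw IP with DNS lookups disabled, separates ARP and name resolution from the NAT forwarding step that appears to fail.

        C:\>arp -a | find "10.5.1.1"
        C:\>tracert -d 209.85.225.106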

  • HP LaserJet 2550 has a carousel motor error

    - by Arlen Beiler
    I have a LaserJet 2550, and it has worked pretty well for a long time (except for some slowness a while back - spooling, I think), but just recently it suddenly quit working. We moved this summer but left it at our other place, and when my Dad went over there recently to print something out, it didn't work. When you turn it on, you hear the fan give a false start (basically a quick pulse), and the carousel goes through its usual routine. Then it starts up in earnest, like it's getting ready to print something. All of a sudden it just stops. Everything stops, and the three lower lights are steady. When I push the Go button, the Go light (bottom of the three) turns off, but the other two stay on. I looked it up on the HP website, and it says it is a carousel motor problem. I called HP, but they said it is out of warranty. I've opened the cover and held the switch with a screwdriver so I could watch it, and it goes through its routine as I described (it doesn't seem to make a difference whether the imaging drum is in or not); then, when it stops, the carousel kind of seems to jump back a little bit. I hope this all makes sense (I know you like details), and hopefully you also know how to fix it. Thanks.

  • Fedora 9 not recognizing hard drive

    - by Andrew Jones
    I am installing Fedora 9 on a PC (specifications at the bottom) and have had a lot of trouble getting it to recognize the hard drive. To get the Fedora installer to recognize it in the first place, I had to pass "ata_generic.all_generic_ide=1 pci=nomsi" to the kernel, after which it installed OK. However, when I boot the installed OS, I get a "could not find filesystem '/dev/root'" error. I tried passing the same arguments to the kernel at boot as I did when installing, but to no avail. I have tried using the default LVM layout and defining manual ones, but it made no difference. There is no option in the BIOS to enable AHCI or anything like that; in fact, the BIOS is very limited in most respects. I can get into the system by using the installation CD in rescue mode (with those extra kernel parameters), but I'm not sure what to do once in there...

    Unfortunately, using a more recent version of Fedora, or another Linux distribution altogether, isn't an option because of outside constraints - which is annoying, since I know for a fact Ubuntu works fine on this setup. I have not been using Linux that long, so treat me like an idiot - I am one. Any help would be greatly appreciated, thanks!

    System spec:

    - Intel Atom Z530 CPU @ 1.6 GHz
    - Intel US15W chipset
    - 1 GB DDR2
    - 160 GB SATA hard disk (Samsung HM16HI)
    - 1000 Mbit/s Ethernet port
    - Phoenix BIOS
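
    For reference, a minimal sketch of making those parameters permanent on the installed system. The kernel version, GRUB legacy paths, and VolGroup00 LVM names below are assumptions based on a stock Fedora 9 install - adjust them to what rescue mode actually shows (chroot /mnt/sysimage first):

        # /boot/grub/grub.conf - append the same options the installer needed
        kernel /vmlinuz-2.6.25-14.fc9.i686 ro root=/dev/VolGroup00/LogVol00 \
            ata_generic.all_generic_ide=1 pci=nomsi

        # rebuild the initrd so the generic ATA driver is present at boot,
        # which is the usual cause of "could not find filesystem '/dev/root'"
        mkinitrd -f --with=ata_generic \
            /boot/initrd-2.6.25-14.fc9.i686.img 2.6.25-14.fc9.i686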

  • Networking problems in VMWare with wireless bridge

    - by Robert Koritnik
    Barebone data:

    - Virtualization: VMware Workstation 6.5 (latest)
    - Host: Windows Server 2008 x64
    - Guest: Windows Server 2008 x86
    - Host network adapter: wireless
    - Guest network adapter 1: over Bridge VMNet (automatic)
    - Guest network adapter 2: over Host only VMNet

    Problem: When I surf the net within the VM, my internet connection just gets stalled (not dropped). It doesn't experience any timeout whatsoever; it just stops downloading/communicating. For instance: I start downloading a file with a browser (IE/FF/CR, doesn't matter) and I have to pause/restart the download when the speed drops to 0. I could wait indefinitely, but the connection won't pick up automatically. What did I miss in my network configuration?

    Update 1: I've tested this in various combinations. It works fine when the host is connected via Ethernet. But when it's connected via Wi-Fi, the connection on the guest behaves as described above. It connects fine. It gets a valid IP from DHCP... Everything is cool as long as you don't start doing some intensive network traffic (i.e. download a 2 MB file). In that case it starts downloading and stops after a while. The speed just drops to 0 B/s... Sometimes it picks back up, sometimes it doesn't. The connection still stays up and works. I can ping around with no problem.

  • Remote desktop solution where the desktop sharing party contacts the computer it wants to share with

    - by Kent
    I'm in a situation where I act as a sort of technical support for my family and less technically experienced friends. I'm looking for a remote desktop solution where it's possible to set up a "zero-install, double-click an icon" arrangement in which the client computer contacts me so that I may interact with their desktop. The last part is important, as the people in need of my help don't know how to configure their router or even the firewall software on their own computer. They are able to click an accept button when asked if a program should be allowed to make outgoing connections. They have many different kinds of routers, as well as software firewalls, and I'd rather not deal with the problem of how to connect to them on top of the actual problem they are having.

    It must be:

    - Free of charge for non-commercial use.
    - Possible to use in a mode where the computer wanting to share its desktop makes the connection to my computer. My computer has a DNS name we can use.
    - Compatible with both Windows XP and Windows 7.
    - Independent of a third-party server or infrastructure.

    Explanations of the above:

    - I don't want to spend money on it when I help them for free. If it's free as in freedom, all the better!
    - I guess this boils down to being callable like "showdesktopto.exe opscomputer.com", where opscomputer.com is my computer's DNS name. If that is possible, then I can create a shortcut they can use to connect to me when they need help. It's nice if it's possible to specify a password or key file which I can use to authenticate myself, but it's not required.
    - They use the OS their machine comes installed with. That means Windows XP or 7.
    - I want something which will work in the long run. Using a third-party service which might not be available when I need it disqualifies such solutions.

  • How can I restore my original IME Romaji input settings?

    - by JOhn K
    The answers in "Japanese IME on Windows: switch back to romaji input method (3)" did not help; the problem seems the same. On my Vista Home Premium PC, I had been using the Microsoft IME for English and Japanese input with romaji henkan for a long time. One day, all of a sudden, when I started up the PC, the Caps Lock indicator was on, so I pressed the Shift key and the Caps Lock indicator went off. (This I have to do every morning.)

    Now, when I want to type romaji input to change to Japanese, I switch from EN English (United States) to JP Japanese (Japan) and set the input to hiragana. This worked until that day. But now, when I set the input to romaji for hiragana as I used to and start typing, hiragana appear directly on the display, as if the keyboard layout were the Japanese 109-key JIS layout shown in the Wikipedia article on JIS keyboards, and I cannot enter hiragana the way I want (I can convert to kanji OK, etc., by hitting the space key). But that keyboard arrangement is one I never learned. The other thing I found is that when I hit the "`" key, it switches between hiragana and alphabet. When I look at the Control Panel settings, they are the same as I have always seen.

    Please suggest a solution to restore the original IME input mode I used to have. John K.

  • Is Windows Server 2008 R2's NAP solution for NAC (endpoint security) valuable enough to be worth the hassles?

    - by Warren P
    I'm learning about Windows Server 2008 R2's NAP features. I understand what network access control (NAC) is and what role NAP plays in that, but I would like to know what limitations and problems it has that people wish they had known before they rolled it out. Secondly, I'd like to know if anyone has had success rolling it out in a mid-size environment (a multi-city corporate network with around 15 servers and 200 desktops), with most (99%) clients being Windows XP SP3 and newer (Vista and Win7). Did it work with your anti-virus? (I'm guessing NAP works well with the big-name anti-virus products, but we're using Trend Micro.) Let's assume that the servers are all Windows Server 2008 R2. Our VPNs are Cisco gear and have their own NAC features.

    Has NAP actually benefited your organization, and was it wise to roll it out? Or is it yet another item in the long list of things that Windows Server 2008 R2 does but that you're probably not going to want to use even if you do move your servers up to it? In what particular ways might the built-in NAP solution be the best one, and in what particular ways might no solution at all (the status quo pre-NAP) or a third-party endpoint security or NAC solution be considered a better fit? I found an article in which a panel of security experts in 2007 said NAC is maybe "not worth it". Are things better now in 2010 with Windows Server 2008 R2?

  • SQL Server installation error 0x84B40000

    - by Kurtevich
    I have a problem installing SQL Server 2008 R2. A long time ago I had it installed and then uninstalled it. It was left in "Add/Remove Programs", but I didn't pay attention to that. I had 2005 installed, and now there is a need to install 2008, so I removed 2005 and started installing 2008 - but the installer said there was not enough space on C:. That's when I found out that "Add/Remove Programs" showed the old install occupying more than 4 gigabytes, even though I had uninstalled it. So I clicked "Remove"; it showed all those many screens and validations and reported that removal completed, but the size of the Program Files folder was still more than 4 GB. I removed from "Add/Remove Programs" everything that had "SQL Server" in its name, but the main "SQL Server 2008" item was still there, still 4 GB, and uninstalling it did nothing. Because the SQL Server installation did not show existing instances, and I didn't see any running services related to SQL Server (well, almost any - more details at the end), I thought that this folder contained just some leftover stuff and data, and I deleted it manually. Then I agreed to the removal of the item in "Add/Remove Programs", and everything looked clean.

    Now, every time I try to install SQL Server (even in the minimum configuration), I receive the following error:

        SQL Server Setup has encountered the following error:
        The specified credentials that were provided for the SQL Server service are not valid.
        To continue, provide a valid account and password for the SQL Server service.
        Error code 0x84B40000.

    What is this service mentioned here? This error looks like I'm trying to add features to an existing server and it can't log in. But the setup didn't ask me for any credentials, except one username that couldn't be changed. Here are the services shown that could be related, both disabled and pointing to non-existing executables:

    - SQL Active Directory Helper Service
    - SQL Full-text Filter Daemon Launcher (MSSQLSERVER)

    I understand that this must be because of my manual deletion, but is there a way to clean it up now?
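
    A hedged cleanup sketch for those two orphaned services. The short names below are the usual ones for those display names on a SQL Server 2008 default instance, but they are assumptions - verify with the query first, since deleting the wrong service is destructive:

        sc query state= all | findstr /i "SQL"
        sc delete "MSSQLFDLauncher"
        sc delete "MSSQLServerADHelper100"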

  • Scheduled task does not run unattended on Windows 2003 server on VMware, runs fine otherwise

    - by lnm
    A scheduled task does not run on a Windows 2003 server on VMware; the same setup runs fine on a standalone server. The test below illustrates the problem. We really need to run a more complex bat file, but this shows the issue.

    I have a bat file that copies a file from server A to server B. I use the full path name, no drive mapping. It runs fine on server B from a command prompt. I created a task that runs this bat file under a domain ID (with password) that is part of the administrators group on both servers. The task runs fine from the Scheduled Tasks screen, and as a scheduled task as long as somebody is logged into the server. If nobody is logged in, the task does not run. There is no error message in the Task Scheduler log - just an entry that the task started, but no entry for a finish or an error code.

    To add insult to injury, if the task copies a file in the opposite direction, from server B to server A, it runs fine as a scheduled unattended task. If I copy a file from server B to server B, the task also runs fine unattended. I recreated exactly the same setup on a standalone server: no issues at all. I checked the obvious things: the task has "Run only if logged on" unchecked, the domain ID has the "log on as a batch job" privilege and logon rights, and the Task Scheduler service runs as Local System with automatic start. Any suggestions?

  • Canonical Redirect on Dynamic Mass Virtual Hosts on Apache

    - by Josh
    I have a web app on Apache that allows users to point their domain at the server. Right now I'm using Apache's dynamic mass virtual hosts with an entry:

        VirtualDocumentRoot /www/hosts/%0/docs

    So www.companydomain.com points to /www/hosts/www.companydomain.com/docs. The problem is that when a user goes to companydomain.com, it points to /www/hosts/companydomain.com/docs. Is there an easy way to have Apache automatically check whether a directory exists for the virtual host and, if not, look for the host name with "www." in front of it? Other subdomains are fine (i.e. abc.domain.com should point to a different directory than def.domain.com); it's only the "www" case that is a mystery to me. I am using dynamic mass virtual hosts so the server does not have to restart after each registration for the application. A different approach is fine, as long as Apache isn't restarted each time.

    Worst-case scenario, if there were a way to redirect to a "default" location on the server when the directory is not found, I could always do a check via PHP or something, but I feel like that is a bit hacked together and there has to be a more efficient way. Thanks in advance!
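
    One possible direction, sketched with mod_rewrite in place of VirtualDocumentRoot (untested; it mirrors the mass-virtual-hosting pattern from the Apache rewrite docs and assumes the same /www/hosts layout): test whether the per-host docroot exists and fall back to the "www." variant when it doesn't. No restart is needed beyond enabling it once.

        RewriteEngine On
        # use the exact Host header when its docroot exists
        RewriteCond /www/hosts/%{HTTP_HOST}/docs -d
        RewriteRule ^/(.*)$ /www/hosts/%{HTTP_HOST}/docs/$1 [L]
        # otherwise retry with "www." prepended
        RewriteCond /www/hosts/www.%{HTTP_HOST}/docs -d
        RewriteRule ^/(.*)$ /www/hosts/www.%{HTTP_HOST}/docs/$1 [L]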

  • Migrating to CF9: trouble getting JRun working with SSL

    - by DaveBurns
    I have a client on MX7 who wants to migrate to CF9. I have a dev environment for them on my WinXP machine where I've configured MX7 to run with JRun's built-in web server. I've had that working for a long time with both regular and SSL connections. Yesterday I installed CF9 side by side with the existing MX7 install to start testing. The install was smooth: it detected MX7, adjusted CF9's port numbers to avoid conflicts, etc. Testing started well: MX7 still worked over both regular HTTP and SSL, and CF9 worked over regular HTTP. But I can't get CF9 to work with SSL.

    I installed a new certificate with keytool. Firefox (v3.6) complained about it being unsigned; I added it to the exception list, and now I get this:

        Secure Connection Failed
        An error occurred during a connection to localhost:9101.
        Peer reports it experienced an internal error.
        (Error code: ssl_error_internal_error_alert)

    I've been Googling that in all variations but can't find much help to get past this. I don't see any info in any log files either. FWIW, here's my SSL config from SERVER-INF/jrun.xml:

        <service class="jrun.servlet.http.SSLService" name="SSLService">
          <attribute name="enabled">true</attribute>
          <attribute name="interface">*</attribute>
          <attribute name="port">9101</attribute>
          <attribute name="keyStore">{jrun.rootdir}/lib/mykey</attribute>
          <attribute name="keyStorePassword">*deleted*</attribute>
          <attribute name="trustStore">{jrun.rootdir}/lib/trustStore</attribute>
          <attribute name="socketFactoryName">jrun.servlet.http.JRunSSLServerSocketFactory</attribute>
          <attribute name="deactivated">false</attribute>
          <attribute name="bindAddress">*</attribute>
          <attribute name="clientAuth">false</attribute>
        </service>

    Does anyone here know of any issues with setting up SSL and CF9? Has anyone had success with it? Dave
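
    One hedged guess worth checking: older keytool releases default to DSA key pairs, which some browsers abort on with exactly this kind of internal-error alert. Regenerating the keystore with an explicit RSA key costs little to try (the alias and keystore path below are assumptions taken from the config above - expand {jrun.rootdir} to the real directory):

        keytool -genkey -alias mykey -keyalg RSA -keysize 2048 \
            -keystore /path/to/jrun/lib/mykey -validity 365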

  • Problem configuring php-fpm with nginx

    - by Nisanio
    First of all: I'm not an expert in configuring things. This is very new to me, so my apologies in advance. At work we have a CentOS server, and the guy who worked here before me installed nginx. We need to make a PHP site, so obviously I need to set up PHP and make it work with nginx. To make a very long tale short: I had to replace the nginx binary with a new one (because the old one was compiled without FastCGI), and I had to recompile and install PHP (because the new version has FPM). Then I struggled with the config files, ending up with this in nginx.conf (not the whole file):

        user php;

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
            include fastcgi_params;
        }

    I also uncommented some parameters in php-fpm.conf (too much to detail here, but the important part is that the group and user are "php"). I never could start php-fpm with the instructions from the book:

        sudo /usr/sbin/php-fpm start

    But after looking around the net, I found this:

        sudo /usr/local/sbin/php-fpm --fpm-config=/usr/local/etc/php-fpm.conf

    This worked (I think), and I restarted nginx. But... nothing happens with PHP... My calls to PHP files (via Firefox) don't even appear in the log (/opt/nginx/logs/error.log). I'm really, really exhausted and lost... Could anyone help me, please... :( Thanks in advance.
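
    A minimal sketch of the usual first fix (the docroot path is an assumption - substitute the real one): the stock "/scripts" prefix in SCRIPT_FILENAME almost never matches an actual document root, so PHP-FPM is handed a path to a file that doesn't exist and nothing useful gets logged.

        location ~ \.php$ {
            root            /opt/nginx/html;   # assumed docroot - adjust
            fastcgi_pass    127.0.0.1:9000;
            fastcgi_index   index.php;
            # build the script path from the real document root
            fastcgi_param   SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include         fastcgi_params;
        }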

  • Officially announced RAM support doesn't apply to one of a pair of twin rigs with just one difference

    - by Deniz
    It will take a while to describe my situation, but here goes the story. In January 2009 we bought (the OEM parts for) two similar systems with just one difference: one had a Phenom X4 CPU and the other one (mine) a Phenom X3. At the beginning we had problems powering on both systems with all of their RAM slots populated, so we installed the systems with just 2 slots populated and tried to add the remaining sticks later. Both systems did succeed in supporting 3 sticks. We tried many different procedures to make the systems work with the fourth RAM slot populated: we waited for new BIOS updates and flashed the boards when they became available, we tried different RAM sticks with different frequencies, etc. One day, during one of our attempts to install the fourth stick, the X4 machine accepted it. The other one did not. The most mind-boggling thing is that after one of my trials, the X3 system stopped working even with the third slot populated. Our boards have AMD 770 chipsets, and we even tried replacing the board of the X3 machine with another 770-chipset board.

    Now my questions are:

    - Should we change the CPU?
    - What is causing the X3 system to not accept the fourth (or now the third) RAM stick? The manufacturers' sites claim that these boards accept 4 RAM sticks (but they only tested them with certain RAM brands and models).
    - What are the limitations for maximum RAM configurations on motherboards? Are there some "rules of thumb" beyond the frequency, voltage, and chip type considerations we already checked our parts against?

    Our boards are:

    - Gigabyte GA-MA770-DS3
    - Sapphire PC-AM2RX780 - PURE CrossFireX 770

  • What causes Windows Media Player on Windows 8 to not play the entire library?

    - by somequixotic
    Behavior 1:

    1. Verify that the WMP playlist is clear of all songs.
    2. Turn on the "Shuffle" and "Repeat" features.
    3. Double-click a music track in the Library.
    4. Click the "Next" button (double right angle brackets).

    A random track from the Library is chosen and played, and when observing the playlist (clicking the "Play" tab), the entire contents of the Library appear in the playlist.

    Behavior 2:

    1. Verify that the WMP playlist is clear of all songs.
    2. Turn on the "Shuffle" and "Repeat" features.
    3. Double-click a music track in the Library.
    4. Click the "Next" button (double right angle brackets).

    The button visually depresses like it has registered the click, but nothing happens. Absolutely nothing. Moreover, the "Previous" button is grayed out, and when observing the playlist, only the one song that was double-clicked appears in it.

    What causes Behavior 2? I cannot correlate any specific action I've taken with Behavior 2, and Behavior 1 has been the case as long as I can remember, all the way back to Windows XP. Even earlier in my usage of Windows 8, I recall Behavior 1 working correctly. But suddenly, inexplicably, without my changing any settings in WMP, Behavior 2 kicked in, and it persists after reboots.

    I've tried sfc /scannow in an administrator prompt; all system files are in order. I've downloaded all Windows Updates and driver updates. I've attempted to alter WMP options and playback settings to no avail. So... what is causing Behavior 2? Is this an intended, valid behavior, or is something malfunctioning? How would I know what that "something" is? How would I go about fixing it without just reinstalling Windows 8 fresh?

  • Wireless router blocking some sites while access over Ethernet works fine

    - by Micke
    I'm using Windows 7, and my router is a wireless Apple AirPort Express that is approximately two years old. Suddenly I can't access some sites (for example www.sthlm.friskissvettis.se or www.vegetarian-shoes.co.uk, some streamed TV shows on svtplay.se, and a number of other random sites) when connecting to the internet through the router. It worked well until recently, and I'm fairly sure this problem emerged when my ISP upgraded from 10/10 Mbit to 100/10 Mbit speed. Most other sites, like Facebook and Google, work fine. When connecting to the internet with a network cable instead, everything works and I can access these sites.

    The firmware is current, and I've tried resetting the router to factory defaults. I've tried different browsers, and I can't ping the "blocked" sites either. Tracert www.sthlm.friskissvettis.se starts with 10.0.0.1 and continues through a number of hops until it times out. The last working address before the timeout was sth-tcy-ipcore01-ge-0-2-0.neq.dgcsystems.net [83.241.252.13], if it matters. Tracert www.vegetarian-shoes.co.uk also eventually gives me a timeout. With the network cable plugged in, I still get a timeout on tracert www.sthlm.friskissvettis.se, even though I can access the site in Chrome. Weird. www.vegetarian-shoes.co.uk doesn't give me a tracert timeout when the cable is plugged in, and I can access the site as usual.

    I've tried changing the DNS servers to the OpenDNS ones instead, to no avail. I've tried pinging these two sites with a lower MTU packet size (with this method: http://www.richard-slater.co.uk/archives/2009/10/23/change-your-mtu-under-vista-or-windows-7/), but I still can't reach them... I don't know what to do anymore... any suggestions?
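
    For what it's worth, a hedged sketch of the path-MTU test on Windows: 1472 bytes of ICMP payload plus 28 bytes of headers equals the standard 1500-byte MTU, -f forbids fragmentation, and stepping -l down until the pings succeed reveals the largest packet the path actually carries.

        ping -f -l 1472 www.sthlm.friskissvettis.se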

  • Using nginx and Apache side by side for both static and dynamic files

    - by faridv
    Background: I've searched a lot and found these useful threads about using Apache or nginx for static or dynamic files. But they are old (mostly from 1 or 2 years ago), and I think both web servers, specifically nginx, have seen important changes in performance and usage since then. So taking another look at the issue can't be bad:

    - Nginx (for static files) and Apache (for dynamic content)?
    - nginx better than apache for dynamic content? [closed]
    - Apache or NGINX for PHP?
    - Nginx as reverse proxy to Apache with only dynamic content?

    My question: I have a PHP web application with lots of dynamic files and lots of static content (videos, images, etc.), and it has been running on a CentOS 6 server with Apache 2.2 for the past 2 months. In the past few days our visitor numbers have grown very fast, and if they keep increasing at the current rate, we will need to change many things (web server, application, etc.) to prevent failures. Because of the hardware limitations we are facing, I thought it best to start with the web server.

    - Should I start with something else? Should I try to increase the performance of my PHP application and forget about the web server for now (even if that will take a long time)?
    - Because of heavy usage of .htaccess files (for redirection, rewrites, etc.), I think it will be painful to migrate to nginx as the default web server, or even only for dynamic files. Does this mean I can't even use nginx as a reverse proxy?
    - I'm not sure that the latest stable versions of nginx and PHP-FPM would outperform my current Apache, and my limitations (too many things) won't let me simply give it a try. Which one does better currently? What would I lose by migrating to nginx?

    To make it short: what should I do?
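
    As a rough illustration of the reverse-proxy option above (ports and paths are assumptions): nginx answers on port 80 and serves the static content itself, while everything else - including all the .htaccess-dependent URLs - is passed to Apache unchanged on a local port, so no rewrite rules need migrating.

        server {
            listen 80;

            # static content served directly by nginx
            location ~* \.(?:jpg|jpeg|png|gif|css|js|mp4)$ {
                root    /var/www/app;    # assumed docroot
                expires 30d;
            }

            # everything else goes to Apache, .htaccess rules intact
            location / {
                proxy_pass         http://127.0.0.1:8080;
                proxy_set_header   Host $host;
                proxy_set_header   X-Real-IP $remote_addr;
            }
        }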

  • Why does adding a router hide all shared folders?

    - by user1285419
    I have several computers running WinXP in my office, all connected to the WAN provided by the building (wall socket, DHCP, mask 255.255.252.0). I set up a shared folder on my computer so all the other computers in the same group could access it. This configuration had been in use for a long time.

    Recently, I tried to set up a router. The WAN port of the router goes to the wall socket, and the NIC is connected to a LAN port on the router, with the router in DHCP mode (192.168.0.100/255.255.255.0 to 192.168.0.110/255.255.255.0). I turned off all the firewalls (the Windows one and the router's built-in one), and the NIC gets its IP via DHCP. If I run ipconfig /all, I see that the NIC was assigned 192.168.0.100. I can access the internet, email, whatever. However, the shared folder can no longer be accessed by the other computers in the group. I think it is an IP problem. What's really weird is that if I turn off the DHCP function in the router, ipconfig /all always gives 0.0.0.0/255.255.255.255 and I cannot access the internet. I have no idea what's going on. Does anyone know how to fix this and keep the shared folder working with the router in place? Thanks.

  • Change the default route without affecting existing TCP connections

    - by Patrick Horn
    Let's say I have two public network addresses on my server: one NAT through an ISP (192.168.99.0/24), and a VPN through a different ISP (192.168.1.0/24), already configured with a per-host route to the VPN server through my ISP. Here is my initial routing table; I am currently routing through my ISP on subnet 192.168.99.0/24:

        $ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask          Flags Metric Ref  Use Iface
        0.0.0.0         192.168.99.1    0.0.0.0          UG    0      0      0 eth1
        55.66.77.88     192.168.99.1    255.255.255.255  UGH   0      0      0 eth1
        192.168.99.0    0.0.0.0         255.255.255.0    U     0      0      0 eth1
        192.168.1.0     0.0.0.0         255.255.255.0    U     0      0      0 tap0

    Now, to switch new TCP connections to 192.168.1.0/24, I type the following:

        $ route add -net 0.0.0.0 gw 192.168.1.1 dev tap0

    When I do this, it causes some long-standing TCP connections to hang. Is there a way to safely change the default interface for new connections while allowing existing TCP connections to use the old route (i.e. do I need to enable some sort of stateful routing table)? I am okay with a solution that only works for established TCP connections, and I don't care how hacky it is - for example, a way to add temporary iptables rules for existing connections to force them over the old route. But there has to be some way to do this.

    EDIT: Just a note about a simple "route add -host ..." for existing connections: that would work if I were fine with leaving a subset of IPs on the old interface. However, in my application it doesn't solve the problem, because I want to allow new connections to use the new interface even if they have the same source IP. I'm now looking at using the "ip route" command to set source-based routing rules.
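
    A sketch of one stateful approach (the table number is arbitrary; addresses are taken from the question). Connections created from now on are marked, the mark is saved into the conntrack entry and restored on every later packet, and a policy rule routes marked traffic via tap0 - so established flows keep following the untouched main table:

        # packets of already-marked connections get their mark back
        iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
        # connections created from now on are marked...
        iptables -t mangle -A OUTPUT -m state --state NEW -j MARK --set-mark 1
        # ...and the mark is saved into conntrack
        iptables -t mangle -A OUTPUT -m state --state NEW -j CONNMARK --save-mark
        # marked traffic uses the new default route
        ip route add default via 192.168.1.1 dev tap0 table 100
        ip rule add fwmark 1 table 100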

  • Security in shared hosting vs VPS 'virtual appliances'

    - by Pedro Loureiro
    I have to change my hosting provider. Right now I have a shared hosting account, but I'm considering trying the LAMP stack appliance from turnkeylinux.org. I'm very comfortable using Linux; I've been using it for a long time, and I have no problem SSHing into remote machines to do whatever I have to do (coding, reading logs, moving files, deploying, etc.). The problem is that none of those tasks have involved securing the server or the firewall. My experience has been as a desktop user or as a developer deploying apps/files on remote servers. Leaving aside security in the application logic (read: any scripts, frameworks, or websites I might have created or installed), I'm worried about things like the base configuration of daemons, the firewall, open ports, executable scripts being readable from the outside, and whatnot.

    My question is: how does the (expected) out-of-the-box security of the LAMP stack from TurnKey compare with the (expected) security of a "regular" shared hosting provider? I was hoping to find some guides with a list of steps to protect my server, but the only documentation I found simply referred to Ubuntu's documentation.
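
    A minimal baseline firewall sketch of the kind either setup should end up with (assumes SSH on port 22 and a plain HTTP/HTTPS web stack - adjust ports to the actual services before applying, or a remote session can be locked out):

        iptables -P INPUT DROP
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT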

  • Using two ports on my ZyXel USG gateway to patch two devices together?

    - by Matthew Beza
    I don't know if this is possible, but it would save me many long drives! I have drawn my current setup with my epic MS Paint skills. I have a ZyXel USG300 gateway with a built-in 5-port switch; it supports bridging, tunnels, VLANs, etc. I have a Cisco WLC2112 plugged into port 6 (P6). The Cisco is set to 192.168.6.2, and P6 is set to 192.168.6.1. This works, but I need to incorporate a Nomadix AG3000 to handle guests (router (A) in the picture). So I need the Cisco WLC2112 to use the Nomadix as if it were plugged into the Nomadix's LAN port. Right now the Nomadix WAN port is plugged into P2 on the ZyXel, and the Nomadix LAN port is plugged into P3. Is it possible to set something up where P3 and P6 are somehow "patched" together, as if the Cisco were plugged directly into the Nomadix LAN port - basically making the ZyXel a fancy Cat5e coupler?

  • duplicity fail: not prompting for password: "Running 'sftp user@host' failed"

    - by Thr4wn
    I have two Linode VPS accounts, and I want to back up one onto the other (the reasons are mainly for fun and to practice server administration).

    The short version: duplicity isn't even asking for my password; it immediately says "Invalid SSH password", even though I can ssh into the other server. Why?

    The long version: when I run

        duplicity /home/me scp://[email protected]//root/backup

    I get:

        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #1)
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #2)
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #3)

    It says "Invalid SSH password" immediately, with no opportunity for me to actually type the password. When I type

        duplicity full -v9 --num-retries 4 /home/me scp://[email protected]//root/backup

    I get:

        Main action: full
        Running 'sftp [email protected]' (attempt #1)
        State = sftp, Before = 'Connecting to 97.107.129.67... [email protected]'s'
        State = sftp, Before = ''
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #1)

    I can ssh into [email protected] fine, and in fact had the IP in known_hosts before I tried any of this. Server 1 (from which I'm running the duplicity command) is Linode's default Ubuntu 8 setup with only a handful of programs installed via apt-get. Server 2 (represented by x.x.x.x) is literally only Linode's default Ubuntu 8 setup. I previously tried using SystemImager - would that have changed settings in a destructive way? (I have removed it and rebooted since then.) Isn't duplicity supposed to prompt for the password? Am I using it wrong? Are there common mistakes/dependencies I need to know about? Is there any way that x.x.x.x could be set up that would make this not work (I used Linode's default Ubuntu 8 setup and barely changed it)?
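
    In case it helps, a hedged sketch of the two common workarounds (duplicity historically reads the remote password from the FTP_PASSWORD environment variable regardless of backend name, and key-based auth sidesteps the prompt entirely; paths and addresses as in the question, key path assumed):

        # password via the environment instead of an interactive prompt
        FTP_PASSWORD='secret' duplicity /home/me scp://[email protected]//root/backup

        # or use an ssh key so no password is needed at all
        duplicity --ssh-options="-oIdentityFile=/root/.ssh/id_rsa" \
            /home/me scp://[email protected]//root/backup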

  • Data recovery on a corrupted 3TB disk

    - by Mark K Cowan
    Short version: I probably need software to run a deep-scan recovery (ideally on Linux) to find files on an NTFS filesystem. The file data is intact, but the references are no longer present - analogous to recovering data from a "quick-formatted" partition. Hopefully there is a smarter way available than a deep scan, one which would recover filenames and possibly paths.

    Long version: I have a 3TB disk containing a load of backups. Windows 7 SP1 refused to detect the disk when it was plugged in directly via SATA, so I put it on a USB/SATA adaptor, which seemed to work at first - though the adaptor probably does not support disks over 2.2TB. Windows first asked me if I wanted to 'format' the disk, then later showed me most of the contents, but some folders were inaccessible. I stupidly decided to run CHKDSK on my backup disk, which made the folders accessible but also left them empty. I then connected the disk via SATA to my main PC (Arch Linux) and tried:

    - testdisk
    - ntfsundelete
    - ntfsfix --no-action (to look for diagnostically relevant faults; the disk was "OK", though)

    all to no avail, as the file references in the tables had presumably been zeroed out by CHKDSK rather than going through a typical journaled deletion. If it is useful at all: the majority of the files I want to recover are JPEG, Photoshop PSD, and MPEG-3/MPEG-4/AVI/MKV files.

    If worst comes to worst, I'll just design my own sector scanner and use some simple heuristic-driven analysis to recover raw binary blocks of data from the disk that match the structures of the above file types. I am unfamiliar with the exact workings of NTFS, but I used to be proficient at recovering FAT32 systems with just a hex editor, so I can provide any useful diagnostic information if you let me know how to find it!

    My priorities, in ascending order of importance for choosing the accepted answer:

    1. Restores directory structure
    2. Recovers many filenames in addition to the file data
    3. Is free / very cheap
    4. Runs on Linux
    5. Recovers a majority of the file data

    The last point is the most important, but the more of the higher points you match, the more rep you'll probably get :)
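
    If it does come down to a deep scan, a sketch of the usual tool for it (photorec ships in the same package as testdisk; it recovers file contents by signature but not names, paths, or directory structure - i.e. it satisfies only the last priority; device name is a placeholder):

        photorec /log /d /mnt/recovery/ /dev/sdX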

  • Installation Error on Windows Vista: "Side-by-Side configuration is incorrect"

    - by Maxim Z.
    NOTE: This is not a dupe of this other question. That question refers to a similar problem with 2 programs, while I'm only having it with 1, so the solution there doesn't apply to my situation.

    My relative asked me to install H&R Block 2009 on his Windows Vista 32-bit computer. I ran the installation program, which succeeded, but when I try to open the application itself, it gives me the following error:

        The application failed to start because its side-by-side configuration is incorrect.
        Please see the application event log for more detail.

    Here are the steps I've taken so far to try to remedy this problem:

    1. In an elevated command prompt, run the command: sfc /scannow
    2. Uninstall H&R Block 2009
    3. Uninstall the Microsoft Visual C++ 2005 Redistributable
    4. Reinstall the Microsoft Visual C++ 2005 Redistributable, downloaded from the Microsoft website
    5. Reinstall H&R Block 2009

    This didn't fix it. I've searched for a long time and haven't found anything that works. The H&R Block site itself states that the way to fix this problem is to uninstall and reinstall H&R Block 2009. Has anyone run into this issue before? If so, how can I fix it? Thanks in advance.
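
    One more diagnostic that may pin down the exact assembly at fault (sxstrace is built into Vista; start the trace, reproduce the failed launch, stop the trace, then parse the log - the filenames below are arbitrary):

        sxstrace Trace -logfile:SxsTrace.etl
        sxstrace Parse -logfile:SxsTrace.etl -outfile:SxsTrace.txt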

  • DD-WRT router causing IP address conflicts across network

    - by r.tanner.f
    My DD-WRT router has lost its mind! I just set up two DD-WRT routers, one as a WAP (working fine) and one in Client Bridge (routed) mode (the problem). Not long after setup, I started seeing IP address conflicts on other machines. The event log always points the finger at the Client Bridge router's MAC address.

    Neighbour table overflow: the log on my router is flooded with "Neighbour table overflow" errors. These start a minute or two after boot. The network is rather large, with 200+ IP addresses in use in this subnet. The other router shows no such errors.

    Mass ARP requests from 1.1.1.1: I'm also seeing constant ARP requests (with the problem router's MAC address) from 1.1.1.1. It seems like the router is bugging everything on the network for its MAC address and then promptly forgetting it (or never receiving a response).

    Configuration:

    - Model: Buffalo N600
    - Firmware: DD-WRT v24SP2-MULTI (03/21/11)
    - Wireless Mode: Client Bridge (routed)

    I'm not sure which configuration details are relevant, and I'd rather not have the comments flooded, so just ping me in this chat if you want to know something. Why is my router stealing IP addresses, and how can I stop it?
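
    For the "Neighbour table overflow" flood specifically, a hedged sketch (values are guesses scaled to a /24 with 200+ hosts, and assume the DD-WRT build includes sysctl; they can go in the router's startup commands): the kernel's default ARP cache thresholds are small enough that a busy flat network exhausts them.

        sysctl -w net.ipv4.neigh.default.gc_thresh1=1024
        sysctl -w net.ipv4.neigh.default.gc_thresh2=2048
        sysctl -w net.ipv4.neigh.default.gc_thresh3=4096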

  • System Center Essentials server running out of disk space due to stored old updates

    - by Ricket
    We have a System Center Essentials (SCE) server to filter updates to our laptops. We've configured it to download each update so the laptops get it from this server; this of course reduces our internet bandwidth usage and the time it takes for employees to receive updates, which reduces the complaints we get about how long updates take. However, we currently have a total of 2,255 updates stored on the server. SCE gives this breakdown:

        Updates with installation errors: 29
        Updates needed by computers: 280
        Updates installed/up-to-date: 0
        Updates with no status: 1946

    Our little server has 68 GB of hard disk space, and the updates are currently taking 32 GB and counting. Some of the updates date back to 2003, but we can't figure out a way to delete them to free up space on the server. Right-clicking an update and clicking Uninstall threatens to remove the update from all computers, which is not what we want. Some of the updates even inform us upon viewing:

        This update has been replaced by a newer update. Before declining this update, it is
        recommended that you approve the new update first and verify that this update is no
        longer needed by any computers.

    How do you prevent your SCE server from filling its hard drive space? Is there a way to configure the server to keep only the updates that are still needed? Furthermore, why (in the above breakdown of updates) are there so many updates with "no status" and 0 updates that are "installed/up-to-date"?
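
    A hedged sketch of one cleanup route (SCE sits on WSUS 3.0, so the WSUS administration API is available from PowerShell; the connection call below assumes the default local instance - an SCE install may listen on a different port): declining superseded updates is what lets the underlying cleanup remove their files.

        # load the WSUS administration API and connect to the local server
        [reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration") | Out-Null
        $wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer()

        # decline everything that a newer update has replaced
        $wsus.GetUpdates() |
            Where-Object { $_.IsSuperseded -and -not $_.IsDeclined } |
            ForEach-Object { $_.Decline() }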
