Search Results

Search found 8849 results on 354 pages for 'cloud hosting'.


  • Dedicated virtual setup is slow with WordPress

    - by kovshenin
    Hey. I'm running a Fedora Linux server on the Amazon EC2 platform, and I'm pretty sure something is wrong with my configuration because everything seems very slow. SSH sometimes takes over 30 seconds to connect, and a WordPress-generated page can take anywhere from 5 to 20 seconds to load, which is pretty awkward. MySQL queries all execute in under a second, so I don't think the database is the bottleneck. I'm not really sure where the issue lies: a simple page written in PHP loads instantly, but a fresh WordPress installation starts lagging. The same site works perfectly on grid hosting at MediaTemple, for instance, so I'm pretty sure I've missed something. Could you please direct me to the right tools and articles to help me track this down? Thanks so much! Fedora Core 8, PHP 5.2.6, MySQL 5.0.45, OpenSSH 4.7p1, OpenSSL 0.9.8b. PHP is configured as an Apache 2.2.9 module, and all websites are based on virtual hosts. I have some ongoing PHP scripts running in the background from time to time via cron. Thanks.
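
    A first diagnostic pass along these lines might look like the shell sketch below; the slow SSH handshake in particular is often a reverse-DNS lookup, so the UseDNS check is a common thing to try first (paths are illustrative; treat these as things to verify, not a diagnosis):

      # Watch CPU, memory and I/O while a WordPress page loads
      vmstat 5
      top

      # Compare a plain PHP page against a WordPress page
      time curl -s -o /dev/null http://localhost/info.php
      time curl -s -o /dev/null http://localhost/wordpress/

      # Slow SSH connects are frequently reverse-DNS lookups;
      # check whether sshd is doing them ("UseDNS no" skips the lookup)
      grep -i usedns /etc/ssh/sshd_config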


  • Rename Active Directory domain following Windows 2000 -> 2008 migration.

    - by ewwhite
    I'm working with a site that needs an internal DNS domain rename. It currently has a DNS name of domain.abc.com and an NT name of ABC. I'm trying to get to a DNS name of abctrading.com and an NT name of ABCTRADING. Split DNS would be used. The site originally ran from a single Windows 2000 domain controller hosting AD, file, print, DHCP and DNS services. There was no Exchange system in the environment. The 50 client PCs are all Windows XP, with a handful of users using roaming profiles. All users are in a single OU and there are no group policies/GPOs. I'm a Linux engineer, but I have been trying to guide another group of consultants toward a more suitable setup. With their help, we were able to move the single Windows 2000 system to a set of Windows 2008 R2 servers, separated into domain controller and file/print systems (virtualized). We are also trying to add an Exchange 2010 system to this mix. The Windows 2000 server was demoted and is no longer in the picture. This is the tricky part, as the client wants the domain renamed and the consultants aren't quite sure how to get through it without another 32-40 hours of testing/implementation. They say that there's considerable risk in doing the rename without a completely isolated test environment. However, this rename has to be done before installing Exchange, so we're stuck at this point. I'd like to know what's involved in renaming the domain at this stage. We're on Windows Server 2008 and the AD is healthy now. Coming from a Linux background, it seems as though there should be a reasonable path to this. Also, since the original domain appears to be a child/subdomain, would that be a problem here? I'd appreciate any guidance.
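
    For reference, the Microsoft-supported rename path uses the rendom toolset; very roughly, and assuming the 2008 forest meets the documented rename prerequisites (verify against current Microsoft docs before running anything), the flow looks like this:

      rendom /list        :: dump the forest description to Domainlist.xml
      :: edit Domainlist.xml, replacing domain.abc.com with abctrading.com
      rendom /upload      :: push the rename instructions to the domain naming master
      rendom /prepare     :: verify every DC is ready
      rendom /execute     :: perform the rename (DCs reboot)
      gpfixup /olddns:domain.abc.com /newdns:abctrading.com /oldnb:ABC /newnb:ABCTRADING
      rendom /clean       :: remove rename state
      rendom /end         :: unfreeze the forest

    Note that domain members typically need two reboots after the rename, and the child/subdomain shape of the current name is exactly the kind of thing the Domainlist.xml edit has to describe correctly.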


  • Are there any tests I can run on a network to simulate 100 heavy network users?

    - by marc.gayle
    I will be hosting a Ruby on Rails workshop at a small hotel in the near future. While they have 'Wifi' everywhere on the property, and the property normally hosts 150 - 300 people, I am not 100% confident that they have ever hosted 150 tech people with heavy web-surfing habits and needs. Their tech department is also only 1 or 2 guys. Are there any automated tests I can download and run from my laptop, on the network, that would simulate 100 'heavy users' on the network at the same time? Their broadband pipe is a 15 Mbps cable connection. Would that suffice for the general surfing needs of 100 - 150 techies? I know all it takes is 1 or 2 BitTorrent users to kill the entire network, but assuming we can at the very least block those ports or encourage the attendees not to file-share on the network, would that speed suffice for general surfing? What are good resources online that would allow me to quickly get up to speed on the IT-related issues, so that I can ask their sysadmins the right questions? Edit: Note that I am fairly technical, so assume I can get up to speed quickly even with technical manuals, etc.
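
    As a rough starting point, HTTP load generators can approximate this from one or two laptops; a sketch using ab (Apache Bench) and iperf, both pointed at a host you control outside the hotel (hostname is a placeholder, and this simulates raw HTTP load rather than a realistic browsing mix):

      # 100 concurrent clients making 1000 requests total against a test page
      ab -n 1000 -c 100 http://your-test-server.example.com/

      # Raw throughput between your laptop and the outside world
      # (run "iperf -s" on the remote end first)
      iperf -c your-test-server.example.com -P 10 -t 60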


  • Network access lags for Win7 when server network utilization is high

    - by Jeff Miles
    We have a Dell PE2950 file server running Windows 2008, hosting a DFS namespace of ~1.2 TB. The server has two Broadcom 1 Gbps NICs teamed together. When there is high traffic going to the server across the network (greater than 200 Mbps), any Windows 7 client accessing a DFS share at the time experiences severe performance problems. For example:

    - Computer A has an AutoCAD drawing opened directly from the DFS share. Performance is normal, not causing any issues.
    - Computer B begins a file transfer, putting an 11 GB file onto a different DFS namespace on the same server.
    - Computer A immediately notices lag while using AutoCAD. The cursor momentarily freezes within AutoCAD every 10 seconds or so, and any browsing of the DFS share is extremely slow.
    - Computer B completes the file transfer, and performance returns to normal for Computer A.

    This only affects Windows 7 clients, across a variety of hardware (desktops and laptops). All of our Windows XP clients see no performance impact during the file transfer. Things I have tried, with no change:

    - Had Computer A work from an entirely different RAID array from the file-transfer destination
    - Updated NIC drivers on clients and server
    - Enabled TCP offload and receive-side scaling on the server NIC (previously disabled when the issue began)
    - Disabled antivirus during the file transfer

    I am currently having a user test applications other than AutoCAD while the file transfer occurs, and will update the question with that result. Does anyone have any recommendations for resolution or additional troubleshooting steps?
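
    One avenue worth noting: Windows 7 negotiates SMB2 and auto-tunes its TCP receive window where XP does not, which is one plausible reason only the Win7 clients suffer. The client-side knobs below are common experiments to toggle one at a time (a sketch of things to test, not a known fix; revert with =normal / =enabled afterward):

      netsh interface tcp set global autotuninglevel=disabled
      netsh interface tcp set global rss=disabled
      netsh interface tcp set global chimney=disabled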


  • Creating an ec2 image on amazon fails at mkfs.ext3

    - by Dave Orr
    I'm trying to create an image of my EC2 instance in Amazon's cloud. It's been a bit of an adventure so far. I did manage to install Amazon's ec2-api-tools, which was harder than it seemed like it should have been. Then I ran:

      ec2-bundle-vol -d /mnt -k pk-{key}.pem -c cert-{cert}.pem -u {uid} -s 1536

    Which returned:

      Copying / into the image file /mnt/image...
      Excluding:
        /sys/kernel/debug /sys/kernel/security /sys /proc /dev/pts /dev /dev
        /media /mnt /proc /sys
        /etc/udev/rules.d/70-persistent-net.rules
        /etc/udev/rules.d/z25_persistent-net.rules
        /mnt/image /mnt/img-mnt
      1+0 records in
      1+0 records out
      1048576 bytes (1.0 MB) copied, 0.00677357 s, 155 MB/s
      mkfs.ext3: option requires an argument -- 'L'
      Usage: mkfs.ext3 [-c|-l filename] [-b block-size] [-f fragment-size]
             [-i bytes-per-inode] [-I inode-size] [-J journal-options]
             [-G meta group size] [-N number-of-inodes]
             [-m reserved-blocks-percentage] [-o creator-os]
             [-g blocks-per-group] [-L volume-label]
             [-M last-mounted-directory] [-O feature[,...]] [-r fs-revision]
             [-E extended-option[,...]] [-T fs-type] [-U UUID] [-jnqvFKSV]
             device [blocks-count]
      ERROR: execution failed: "mkfs.ext3 -F /mnt/image -U 1c001580-9118-4a50-9a25-dcf02be6d25f -L "

    So mkfs.ext3 wants -L, which is a volume label. But ec2-bundle-vol doesn't seem to take a volume name as an argument, and the docs (http://docs.amazonwebservices.com/AmazonEC2/gsg/2006-06-26/creating-an-image.html) don't seem to think one should be needed. Certainly their sample command:

      # ec2-bundle-vol -d /mnt -k ~root/pk-HKZYKTAIG2ECMXYIBH3HXV4ZBZQ55CLO.pem -u 495219933132 -s 1536

    doesn't specify anything. So... any help? What am I missing?
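
    If the tooling keeps tripping on the missing label, one manual workaround is to build the labeled image file yourself and copy the filesystem into it; a rough sketch (paths and label are illustrative, and the exclude list is abbreviated; mirror what ec2-bundle-vol itself excludes):

      dd if=/dev/zero of=/mnt/image bs=1M count=1536
      mkfs.ext3 -F -L rootfs /mnt/image      # supply the label mkfs is asking for
      mkdir -p /mnt/img-mnt
      mount -o loop /mnt/image /mnt/img-mnt
      rsync -aX --exclude=/mnt --exclude=/proc --exclude=/sys / /mnt/img-mnt/
      umount /mnt/img-mnt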


  • My client's solution of a Windows SBS 2011 VM on an Ubuntu host and VirtualBox is pinning the host CPU

    - by Scott Stamp
    Here's my situation: I've got a client hosting two servers (one a VM), with the physical host providing VMware Zimbra and the guest running Windows Small Business Server 2011. Unfortunately, the person before me configured this setup as follows.

    Host:
    - Ubuntu Desktop Edition 10.04 (I know; again, not my choice), running VMware Zimbra
    - 8 GB of RAM
    - On-board RAID1 of two 320 GB Seagate Barracuda drives for the OS
    - Software RAID5 of four 500 GB WD Caviar Black drives on mdadm for bulk storage (sorry, I don't know the model #)
    - A relatively competent quad-core Intel Core i7 CPU from the Nehalem architecture (I'm not suspicious of this as the bottleneck)

    Guest:
    - Windows Small Business Server 2011
    - 4 GB of RAM
    - Host-equivalent CPU allocation
    - VDI file for the OS hosted on the on-board RAID; VDI file for storage hosted on the on-board RAID

    For some reason the VM locks up when sitting nearly idle, and the VirtualBox process reports values of 240%+ in top (how is that even possible?!). Anyone have any ideas or suggestions? I'm totally stumped on this one. Happy to provide whatever logs you'd like to take a look at. Ideally I'd drop VirtualBox and provision this with VMware Workstation, but the client has objected to the (very nominal) costs involved. If hardware needs to be purchased to help, it will be, but we're considering upgrades a last resort at this time. Thanks in advance! *fingers crossed*
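
    (As an aside, 240% in top just means the VirtualBox process is using about 2.4 cores' worth of CPU across the host's threads, which is possible for a multi-threaded process.) A few settings worth confirming via VBoxManage; "SBS2011" is a hypothetical VM name here:

      # List actual VM names, then inspect the relevant settings
      VBoxManage list vms
      VBoxManage showvminfo "SBS2011" | grep -iE 'cpus|vt-x|nested|ioapic'

      # Hardware virtualization or nested paging being off is a classic cause
      # of a CPU-pinning Windows guest
      VBoxManage modifyvm "SBS2011" --hwvirtex on --nestedpaging on --ioapic on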


  • Running KVM/XEN/Hyper-V VMs from a RAM disk, is this possible? Practical?

    - by Ausmith1
    Currently I'm using ESX (v3 and v4) to test a scripted OS (Windows 2003) and application install DVD. The DVD ISO (8 GB) is mounted on a 1 Gbps NFS datastore, and the VMDKs (20 GB) are on an SSD mounted via NFS over a 10 Gbps link. It still takes a lot longer than I'd like to run through a test iteration, and I'm wondering whether mounting the virtual disks and ISO on a RAM disk on the same server the hypervisor runs on would be worth my while. I can dedicate a server to this VM, and 32 GB of RAM in the system should be adequate to do the trick, I'd guess (1 GB for the hypervisor OS, 28 GB for the RAM disk and 2 GB for the VM is less than the 32 GB available to me). Since hosting a RAM disk within ESX does not seem possible, I'm open to trying KVM/Xen/Hyper-V; KVM would probably be my first choice of the three. Has anyone out there tried this? Bear in mind this is purely for a test run of the installer; the VM will be discarded as soon as the test is completed, so I'm not worried about losing data in the remote possibility of a power failure.
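
    On the KVM side this is straightforward to sketch with tmpfs; sizes follow the budget above, and paths/filenames are illustrative:

      # 28 GB RAM-backed scratch area
      mkdir -p /mnt/ramdisk
      mount -t tmpfs -o size=28g tmpfs /mnt/ramdisk

      # Copy the install ISO in and create the guest disk there
      cp /nfs/install.iso /mnt/ramdisk/
      qemu-img create -f qcow2 /mnt/ramdisk/win2003.qcow2 20G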


  • My Ruby on Rails application only works if the address contains the port

    - by True Soft
    I have a Ruby on Rails application that works fine on my notebook (http://localhost:3000/). I uploaded it to my hosting server, created an application with cPanel X (the URL is http://example.com:12007/), created a rewrite from http://example.com/ to http://example.com:12007/, and started it. If I browse to http://example.com:12007/ or http://www.example.com:12007/, all the pages work as expected. But if I browse to http://example.com/ or http://www.example.com/, the first page is displayed without any CSS or images (as if it couldn't find them). I can see all the text (even the text from my MySQL database), but with no formatting. And if I click on any link, I get an error page like this:

      Not Found
      The requested URL /some_controller was not found on this server.
      Additionally, a 404 Not Found error was encountered while trying to use an
      ErrorDocument to handle the request.

    What should I do to make my website work without writing the port in the address bar? The content of my /public_html/.htaccess file, which I guess was generated by cPanel's Rewrites feature, is:

      RewriteEngine on
      RewriteCond %{HTTP_HOST} ^example.com$ [OR]
      RewriteCond %{HTTP_HOST} ^www.example.com$
      RewriteRule ^/?$ "http\:\/\/127\.0\.0\.1\:12007%{REQUEST_URI}" [P,QSA,L]
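
    Note that the generated pattern ^/?$ only matches (and therefore only proxies) the site root, so /stylesheets, /images and every controller route never reach the Rails app. A hedged sketch of the usual fix, assuming mod_proxy is available on the host:

      RewriteEngine on
      RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$
      RewriteRule ^(.*)$ http://127.0.0.1:12007/$1 [P,QSA,L]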


  • Pair programming with tmux and Vagrant

    - by neezer
    Does anyone have a clear step-by-step guide for setting up a shared tmux session on a Vagrant VirtualBox VM that my coworkers (on our local office LAN) could SSH into? The articles I've found online only seem to cover setting this up from machine to machine (no VirtualBox setups), and I'm not very good at networking, so I haven't been able to extrapolate a solution. We're all running the latest Macs in our office, by the way. Here's one article I've found but haven't been able to get working with Vagrant: http://blog.voxdolo.me/remote-pairing-with-vim-and-tmux.html

    EDIT: To clarify, I don't really know how to set up Vagrant to allow SSH into it from a machine other than the one hosting the VM. The article above suggests adding the tunnels host on the physical machine running the VM (here-on referred to as the MBP), so I did that. Next is the ProxyCommand host declaration, which I have also assumed should live on the MBP. So next I tried SSHing into the MBP from a guest machine (another separate physical machine on my network), and that seems to work... but it only gets me into the MBP, not the Vagrant image running on the MBP. I normally log in to the Vagrant image on the MBP via vagrant ssh (per the docs), and I know how to forward ports from the Vagrant VM to the MBP, but it's unclear to me how to forward ports/SSH from the MBP to the Vagrant VM, which I assume I would need so that my guest machine could SSH, through the MBP, into my Vagrant image. That, in a nutshell, is what I'm trying to accomplish. I do my development work in Vagrant VMs, which keeps my MBP nice and clean of any dev-related cruft and keeps my dev environments totally isolated from one another, yet I would like to start pair programming with my coworkers via tmux, hence this question. I would like to accomplish all of this without setting up an additional user account on the MBP, or giving my coworkers access to my local user account on the MBP to get to my Vagrant VM, if that's at all possible.
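
    For the networking piece specifically, two hedged options in the Vagrantfile: forward the guest's SSH port to a port on the MBP that coworkers can reach, or bridge the VM onto the office LAN so it gets its own address (syntax follows Vagrant's documented networking options; older 1.0 releases used config.vm.forward_port instead):

      # Option 1: coworkers ssh to the MBP's LAN address on port 2222
      config.vm.network :forwarded_port, guest: 22, host: 2222

      # Option 2: bridge the VM onto the office LAN so it gets its own IP
      config.vm.network :public_network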


  • What is the ideal way to set up multiple FTP enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic as closely as I can a typical shared hosting setup. That is, I'd like my server to have multiple user FTP accounts, each of which links to a directory containing the webroot of a site, and I'd like Apache to be able to easily see and manipulate these files. I'll admit I'm not as familiar with Fedora as I'd like: I run Ubuntu on my home box, and SELinux is giving me some grief. My initial plan was to have each user FTP into their home directory and put the web directory there as well, but SELinux throws a hissy fit when Apache tries to access anything outside of its web directory, so that plan was a no-go. Would it be wise to continue down this route, and perhaps mount web directories in user home folders so that FTP could still be used to access them, even though Apache would see them in /var/www like it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this? I'm using vsftpd right now to host the web directories, which is why I like the home directory approach (it's simple and secure), but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access, to another post. Right now I'm interested in just getting FTP and Apache to play nice in a multi-user environment.) PS: For the record, an issue I ran into when doing all of this was that if Apache isn't running as the same user the FTP account saves as, there are permission errors when FTP creates files, requiring the remote user to chmod the files to fix it. A logical fix would be to run Apache in a special group, put all web users in this group, and have FTP default to giving this group read/write access to everything, like Apache would expect, but I never could figure out how to accomplish this. Bonus points and cake if you know a solution.
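
    The group approach in the PS is workable; a sketch of the usual pieces on Fedora, including the SELinux bits that cause the hissy fit (boolean and type names are the standard ones, but worth verifying on your release, and user/group names are illustrative):

      # Let Apache read per-user web dirs under /home
      setsebool -P httpd_enable_homedirs on
      chcon -R -t httpd_sys_content_t /home/someuser/public_html

      # Shared-group write access: FTP user and Apache in one group,
      # setgid bit on directories so new files inherit the group
      groupadd webusers
      usermod -aG webusers apache
      usermod -aG webusers someuser
      chgrp -R webusers /home/someuser/public_html
      chmod -R g+rwX /home/someuser/public_html
      find /home/someuser/public_html -type d -exec chmod g+s {} \;

      # In vsftpd.conf, local_umask=002 makes uploads arrive group-writable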


  • Insufficient channel capacity of 1GBit

    - by Roman S
    There is a caching server (Varnish): it receives data from Amazon S3 on request, saves it for some time and serves it to the client. We have run into the problem of insufficient channel capacity at 1 Gbit: peak load within a 4-hour window completely saturates the channel. Server performance is sufficient for now. Approximately 4.5 TB of data is transmitted per day, and more than 100 TB accumulates per month. The first thought that comes to mind is simply to add one more 1 Gbit port and sleep peacefully until 2 Gbit is not enough (which may happen quite quickly) or one server cannot handle the load, and then just add new caching servers. But then we need a load balancer that always sends requests for one and the same URL to one and the same server (to avoid multiple copies of the same cached objects). Here are the questions:

    - Does the balancer need bandwidth equal to the sum of the bandwidth of all the caching servers?
    - What shall we do when the balancer runs out of ports: add more balancers, or solve the problem with round-robin DNS?
    - What are the standard approaches to such problems?
    - Can anyone recommend hosting companies that can solve this problem? We are interested in the American and European markets.
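
    For the "same URL always to the same cache" requirement, URI-hash balancing is the standard trick; a minimal HAProxy sketch (HAProxy is one possible balancer here, and the backend addresses are placeholders):

      frontend http-in
          bind *:80
          default_backend varnish_pool

      backend varnish_pool
          balance uri          # hash the URI so a given URL maps to one cache
          hash-type consistent # minimize remapping when a cache is added
          server cache1 10.0.0.11:6081 check
          server cache2 10.0.0.12:6081 check

    Note that a proxying balancer like this sits in the data path, so it does need bandwidth on the order of the caches' combined output; DNS-based schemes avoid that at the cost of much coarser control over which URL lands where.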


  • Solaris 10 invalid ARP requests from 0.0.0.0? Link up/down every hour or 2

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 are telling me that my server is making a lot of invalid ARP requests. This is an example of a portion of what was sent to me from the logs (with MAC addresses and IP addresses changed):

      [mymacaddress]/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp]

    It's being logged every hour. I don't see anything in the ARP tables (arp -a) or routing tables (netstat -r), and I don't see anything relating to 0.0.0.0 when snooping the ARP requests. The only place I see any reference to 0.0.0.0 is in netstat -a for SCTP:

      SCTP: Local Address  Remote Address  Swind  Send-Q  Rwind   Recv-Q  StrsI/O  State
            0.0.0.0        0.0.0.0        0      0       102400  0       32/32    CLOSED

    But I'm not really sure what that means. It doesn't seem like I can disable SCTP. There are some tunable SCTP parameters, but that's not something I'm familiar with. Do I have to add changes to /etc/system? It looks like sctp_heartbeat_interval might be what I need to change. If it makes any difference, I have a few Solaris zones running on this server, each with its own IP address on a virtual interface (eth0:0, eth0:1, etc.). Does anyone have any idea what might be causing this and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there any way to at least block those requests using ipfilter or something else?

    Update: This was happening more frequently, but now it seems to happen roughly every hour or every two hours; it's not consistent. I tried setting the link speed and duplex to match the switch port, and that seemed to stop it for a few hours, but then it started again.
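
    Worth noting: an ARP request with a sender IP of 0.0.0.0 is the classic shape of a duplicate-address-detection probe, which may or may not be what the switch is complaining about, so catching one on the wire would help confirm it. A hedged capture sketch with Solaris snoop (the interface name is taken from the question; substitute your actual device):

      # Dump ARP traffic verbosely on the interface
      snoop -d eth0 -v arp

      # Or save to a file so the data center can correlate timestamps
      snoop -d eth0 -o /var/tmp/arp.cap arp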


  • Security implications of adding www-data to /etc/sudoers to run php-cgi as a different user

    - by BMiner
    What I really want to do is allow the www-data user to launch php-cgi as another user; I just want to make sure that I fully understand the security implications. The server should support a shared hosting environment where various (possibly untrusted) users have chroot'ed FTP access to the server to store their HTML and PHP files. Since PHP scripts can be malicious and read/write others' files, I'd like to ensure that each user's PHP scripts run with that user's own permissions (instead of running as www-data). Long story short, I have added the following line to my /etc/sudoers file, and I wanted to run it past the community as a sanity check:

      www-data ALL = (%www-data) NOPASSWD: /usr/bin/php-cgi

    This line should only allow www-data to run a command like the following (without a password prompt), where some_user is a user in the group www-data:

      sudo -u some_user /usr/bin/php-cgi

    What are the security implications of this? This would then allow me to modify my lighttpd configuration like the snippet below, spawning new FastCGI server instances for each user:

      fastcgi.server += ( ".php" =>
        ((
          "bin-path" => "sudo -u some_user /usr/bin/php-cgi",
          "socket" => "/tmp/php.socket",
          "max-procs" => 1,
          "bin-environment" => (
            "PHP_FCGI_CHILDREN" => "4",
            "PHP_FCGI_MAX_REQUESTS" => "10000"
          ),
          "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
          "broken-scriptfilename" => "enable"
        ))
      )
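
    One quick way to sanity-check exactly what such a rule grants is sudo's own listing mode:

      # Show what www-data may run, and as whom
      sudo -l -U www-data

    Also worth knowing: a sudoers command listed without arguments permits that command with any arguments, so as written www-data can pass php-cgi arbitrary flags; whether that matters for php-cgi specifically is worth thinking through.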


  • IIS SMTP server (Installed on local server) in parallel to Google Apps

    - by shaharru
    I am currently using the free version of Google Apps to host my email, and it works great for my official mail; my email on Google is [email protected]. In addition, I'm sending out high-volume mail (registrations, forgotten passwords, newsletters, etc.) from the website (www.mydomain.com) using the IIS SMTP service installed on my Windows machine. These emails are sent from [email protected]. My problem is that when I send email from the website using IIS SMTP to an address at my own domain ([email protected]), the message never arrives in Google Apps. (I only receive these emails if I install a POP service on the server with the [email protected] mailbox.) It seems that IIS SMTP is ignoring the domain's MX records and just delivering these emails to my local server. Here are my DNS records for mydomain.com:

      mydomain.com  A    82.80.200.20  3600s
      mydomain.com  TXT  v=spf1 ip4: 82.80.200.20 a mx ptr include:aspmx.googlemail.com ~all
      mydomain.com  MX   preference: 1   exchange: aspmx.l.google.com       3600s
      mydomain.com  MX   preference: 5   exchange: alt1.aspmx.l.google.com  3600s
      mydomain.com  MX   preference: 5   exchange: alt2.aspmx.l.google.com  3600s
      mydomain.com  MX   preference: 10  exchange: aspmx2.googlemail.com    3600s
      mydomain.com  MX   preference: 10  exchange: aspmx3.googlemail.com    3600s
      mydomain.com  MX   preference: 10  exchange: aspmx4.googlemail.com    3600s
      mydomain.com  MX   preference: 10  exchange: aspmx5.googlemail.com    3600s

    Please help! Thanks.
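
    The behavior described (local delivery instead of an MX lookup) is what typically happens when the domain is listed as a local or alias domain in the IIS SMTP service's Domains node, so checking that list, plus confirming the server actually resolves the MX records, is a reasonable first step; a hedged check from the server's command prompt:

      nslookup -type=MX mydomain.com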


  • Can I use a Drobo FS over the internet? -Or alternative?

    - by SeniorShizzle
    I have a lot of files: a HUGE Aperture library, lots of design work from Photoshop, and a rather large iTunes library too. I really want to get a Drobo FS, the networked one, to store all of my stuff on, so that I can get to it from my MacBook Air (which, with its minuscule 64 GB drive, obviously can't hold my Aperture library by itself) and my iMac, which is my main powerhouse. The dealbreaker for me is that I NEED to be able to access my Aperture library, and especially my iTunes library, from across the internet. I understand that it will probably be slow, but the alternative is hauling around a huge hard drive with me. So, is there any way I can share my Drobo across the internet, over a VPN or something? My other alternative is to upload everything to my web host, FatCow, which offers unlimited disk space (something I hope to make them regret), and access it using ExpanDrive. My only thought there is that with the Drobo, any work I do locally will be much snappier than if I have to work everything off the cloud. Any suggestions about alternatives would also be welcome.


  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs, and it's mostly working except for some weird redirects: if you have two sites where one is a substring of the other, you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching. With a config file like this:

      RewriteCond %{REQUEST_URI} ^/drupal
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

      RewriteCond %{REQUEST_URI} ^/drupaltest
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    /drupaltest will match the /drupal line, and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each rewrite condition, it will always match the correct site and the links will always be rewritten correctly, but that breaks down as soon as a user logs in, because the query string is appended to the URL and the bare base URL no longer matches. You can also fix the problem by ordering the sites in the config file so that the smallest substring always comes last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested, so that we could guarantee the order; the system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs should also fix the problem, but the users really want them, so I'd prefer to keep them if possible. I think we could also fix it with an .htaccess file in each site to handle the clean URL rewriting, but that also seems suboptimal, since it would generate a higher load on the server, and the server is intended to host the majority of the university's external-facing web content. Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
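
    One standard mod_rewrite idiom for exactly this substring problem is to anchor the prefix at a path boundary rather than at end-of-string, so /drupal matches /drupal and /drupal/anything but never /drupaltest; a sketch for one site's block:

      RewriteCond %{REQUEST_URI} ^/drupal(/|$)
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

    Since %{REQUEST_URI} in Apache never includes the query string, the logged-in case should not defeat this pattern the way the plain $ anchor did.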


  • What are the requirements for Windows Remote Assistance over Teredo?

    - by Jens
    I'm trying to get the Windows 7 (or Vista) Remote Assistance feature to work without using UPnP on the novice's computer. After enabling Teredo on the expert's computer (which is in a corporate network, and therefore has Teredo disabled by default), I tried to connect to the novice both via Easy Connect and via the invitation file, with no success. My troubleshooting so far includes the following:

    - A connection to the novice from my home PC was successful, hinting at a misconfiguration on the expert's side.
    - Both computers have a "qualified" connection to the Teredo server.
    - Both computers have a valid Teredo IP, access to the Global_ PNRP cloud, and can resolve names registered with PNRP on the other computer.
    - The expert can resolve the PNRP ID automatically generated with an Easy Connect help request.
    - Both computers can ping the other's PNRP name.
    - Both computers can ping the other's Teredo IP address using ping -6.

    Now I am a little stumped. I expected Remote Assistance to work at this point, since my corporate firewall does no Teredo filtering. What could cause RA not to work in this setting? Thanks in advance!
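
    For completeness, the Teredo and PNRP state checks referenced above can be reproduced from an elevated prompt with netsh; a quick sketch (the address placeholder is illustrative):

      netsh interface teredo show state
      netsh p2p pnrp cloud show list    :: PNRP cloud status
      ping -6 <novice-teredo-address>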


  • Add single sign-on into existing web app

    - by EvilDr
    Apologies if this isn't the best site; I've searched for an answer but can't find anything quite right. I don't actually know the correct terminology I should be using here, so any pointers will be appreciated. I have a web application that is accessed by many different users across different organisations. Access is provided by each user having a unique username/password, which is stored in SQL (the database fields are customerID, userID, username). Some organisations are now asking if we can change this to allow "Active Directory single sign-on" so that users don't need to remember yet another set of login details. From research I can see how this is achieved using OAuth and Google (etc.), but I know hardly anything about AD and can't find much information on this (again, I'm sure it helps when you know the terminology). Is this request even possible to achieve, given that most users will be from different (and unrelated) organisations? I saw on a Microsoft Build video not long ago that there is some kind of replication service for AD to allow cloud authentication. Is this what I should be aiming for?


  • IIS / Virtual Directory authentication.

    - by Chris L
    I have an IIS (v6)/Windows 2003/.NET 3.5 (app code, libraries, etc.) server hosting a website at www.mywebsite.com, mapped to E:\Inetpub\wwwroot\mywebsite. We also have a virtual directory (VirtDir) mapped to E:\Inetpub\wwwroot\mywebsite\files (although in theory this could be a different directory or a separate machine) where we store customers' files (a bunch of .pdf and .xls). Currently, to access a file you can enter a URL like www.mywebsite.com/VirtDir/Customer/myFile.pdf and get the file directly. The problem is that the user doesn't have to log into www.mywebsite.com first; we would prefer that they log in before being able to download files from the virtual directory. The www.mywebsite.com site and VirtDir are separate sites on the same farm, with both Allow Anonymous Access and Integrated Windows Authentication enabled. I'm more of a developer and less of a sysadmin, but hopefully I'm in the right spot; any help would be appreciated.
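
    If VirtDir's requests are routed through ASP.NET (which for static .pdf/.xls files on IIS 6 requires a wildcard application mapping to aspnet_isapi.dll; plain static serving bypasses ASP.NET entirely), one hedged sketch is a web.config in VirtDir that denies anonymous users:

      <configuration>
        <system.web>
          <!-- "?" means anonymous (unauthenticated) users -->
          <authorization>
            <deny users="?" />
          </authorization>
        </system.web>
      </configuration>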


  • Error during SSL installation cPanel/WHM

    - by baswoni
    I have a dedicated server, and I am using the install wizard in WHM to install an SSL certificate. I have the following items: the certificate, the RSA private key, and the CA certificate. I paste these three elements into the wizard along with the domain, IP address and username, but I get this error:

      SSL install aborted due to error: Unable to save certificate key. Certificate verification passed

    Have I missed a step? I gave it another go, making sure I was copying and pasting the information correctly, and I now get the following error instead:

      SSL install aborted due to error: Sorry, you must have a dedicated ip to use this feature for the user: username! If you are intending to install a shared certificate you must use the username "nobody" for security and bandwidth reporting reasons.

    I am getting this even though I am using a dedicated IP address. I should also add that this SSL certificate was previously installed in a shared hosting environment with my previous hosting provider. The account with them is still active, although the domain and its contents now reside on the dedicated server; could this cause problems?
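
    One common cause of "Unable to save certificate key" is a private key that doesn't actually match the certificate being pasted, which is quick to rule out with openssl (filenames are illustrative; the two MD5 hashes should be identical if the key matches the cert):

      openssl x509 -noout -modulus -in mydomain.crt | openssl md5
      openssl rsa  -noout -modulus -in mydomain.key | openssl md5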


  • Setting up MySQL Linux slave with a Windows master

    - by philwilks
    I'm running a MySQL 5.0 database server on Windows Server 2008. The total size of the database is about 1 GB. I make daily backups, but I'd like to step up to having a slave server for extra protection. My thinking was that I wouldn't need the expense of a Windows machine to do this; a Linux "cloud server" from Rackspace would do the job well at quite a low cost. However, I have little experience with Linux, so I have a few questions:

    - Does this sound like a good idea?
    - Is there anything wrong with linking Windows and Linux MySQL servers?
    - Does Linux have the equivalent of Remote Desktop Connection? If so, can I use it from a Windows machine?
    - Would a particular Linux distro be well suited to this task? Rackspace offers Arch Linux, CentOS, Debian, Fedora and Ubuntu. My immediate thinking is to go with Ubuntu, as I've heard it's more friendly for people coming from a Windows background.

    Any comments you have would be very appreciated! Phil
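
    For what it's worth, MySQL replication runs over TCP and is largely platform-agnostic, so a Windows master with a Linux slave is a supported shape. A minimal sketch of the moving parts, with hostnames and credentials as placeholders (the log file/position values come from SHOW MASTER STATUS):

      -- my.ini on the Windows master ([mysqld] section), then restart MySQL:
      --   server-id=1
      --   log-bin=mysql-bin
      -- my.cnf on the Linux slave:
      --   server-id=2

      -- On the master: create a replication account, note the log position
      GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';
      SHOW MASTER STATUS;

      -- On the slave: point at the master and start replicating
      CHANGE MASTER TO MASTER_HOST='windows-master.example.com',
          MASTER_USER='repl', MASTER_PASSWORD='secret',
          MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
      START SLAVE;
      SHOW SLAVE STATUS\G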


  • Malicious content on server - next steps advice [closed]

    - by Under435
    Possible Duplicate: My server's been hacked EMERGENCY

    I just got an e-mail from my hosting company saying they received a report of malicious content being hosted on my VPS. I was unaware of this and started looking into it. I discovered a file called /var/www/mysite.com/osc.htm, and soon after, some weird PHP files, wp-includes.php and ndlist.php, both recognized as the PHP/WebShell.A.1 virus. I removed all these files, but I'm unsure of what to do next. Can anyone help me analyze the output below of sudo netstat -A inet -p -e and advise on what's best to do next? Thanks very much in advance.

      Proto Recv-Q Send-Q Local Address           Foreign Address         State       User        Inode  PID/Program name
      tcp        0      0 localhost.localdo:mysql localhost.localdo:37495 TIME_WAIT   root        0      -
      tcp        0      1 mysite.com:50524        xnacreators.net:smtp    SYN_SENT    Debian-exim 69746  25848/exim4
      tcp        0      0 mysite.com:www          tha165.thehealtha:37065 TIME_WAIT   root        0      -
      tcp        0      0 localhost.localdo:37494 localhost.localdo:mysql TIME_WAIT   root        0      -
      udp        0      0 mysite.com:59447        merlin.ensma.fr:ntp     ESTABLISHED ntpd        3769   2522/ntpd
      udp        0      0 mysite.com:36432        beast.syus.org:ntp      ESTABLISHED ntpd        4357   2523/ntpd
      udp        0      0 mysite.com:48212        formularfetischiste:ntp ESTABLISHED ntpd        3768   2522/ntpd
      udp        0      0 mysite.com:46690        formularfetischiste:ntp ESTABLISHED ntpd        4354   2523/ntpd
      udp        0      0 mysite.com:35009        stratum-2-core-a.qu:ntp ESTABLISHED ntpd        4356   2523/ntpd
      udp        0      0 mysite.com:58702        stratum-2-core-a.qu:ntp ESTABLISHED ntpd        3770   2522/ntpd
      udp        0      0 mysite.com:49583        merlin.ensma.fr:ntp     ESTABLISHED ntpd        4355   2523/ntpd
      udp        0      0 mysite.com:56290        beast.syus.org:ntp      ESTABLISHED ntpd        3771   2522/ntpd
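
    Beyond the connection list (the outbound SYN_SENT to a remote SMTP server from exim4 is the kind of line worth correlating with the mail logs), a common next step is sweeping the webroot for recently changed or suspicious PHP files; a hedged sketch, noisy but a starting point:

      # PHP files changed in the last 14 days
      find /var/www -name '*.php' -mtime -14 -ls

      # Common webshell markers (expect false positives)
      grep -RIl --include='*.php' -e 'base64_decode' -e 'eval(' /var/www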


  • How to handle sh: fetch: command not found

    - by Tyler Johnson
    Okay, I'm a noobie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers and server commands. I've recently had a problem where all of the sites on our server go down at once, and I then have to reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing, as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful: they just keep telling me what the issue might be and saying that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons:

    - I have "strange logs" in my Apache webserver log: error: sh: fetch: command not found
    - My php.ini memory limit is 256M, which is very high; it should be 32M or 64M.
    - The server is reaching MaxClients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server still go down.)
    - I have some WordPress sites with plugins getting errors like:

      PHP Warning: pack(): Type H: illegal hex digit G in...
      PHP Fatal error: Cannot use object of type stdClass as array in...
      PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
      PHP Fatal error: Call to undefined function file_exists() in...
      PHP Parse error: syntax error, unexpected '<'

    I know that's a lot, but I really am at wits' end and have no idea what to do now. If anyone could give me some advice or point me in the right direction, I would greatly appreciate it! Thanks! Oh, and here are the specs for my server:

    - RAM: 2048 MB
    - CPU Shares: 40
    - Primary Disk: 50 GB
    - Data Transfer: 75 GB
    - Port Speed: 5 Mbps
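
    On the fetch error specifically: fetch is a BSD download utility that doesn't exist on typical Linux hosts, so something on the server (often an injected script, or code written for FreeBSD) is shelling out to it. A hedged way to look for the caller (crude pattern; expect false positives):

      # Which web files invoke "fetch"?
      grep -RIl --include='*.php' 'fetch ' /var/www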


  • What are problems and pitfalls with a public facing Active Directory

    - by Ralph Shillington
    The situation I'm faced with is this: we plan on using a number of server applications hosted on Amazon EC2 machines, mainly Microsoft Team Foundation Server. These services rely heavily on Active Directory. Since our servers are in the Amazon cloud, it should go without saying (but I will) that all our users are remote. It seems that we can't set up a VPN on our EC2 instance, so the users will have to join the domain directly over the internet; then they'll be able to authenticate, and once authenticated, use that token for accessing resources such as TFS. On the DC instance, I can shut down all ports except those needed for joining/authenticating to the domain. I can also filter the IPs on that machine to just the addresses where we expect our users to be (it's a small group). On the web-based application servers, I imagine all we need to open is port 80 (or 8080 in the case of TFS). One of the problems I'm faced with is what domain name to use for this Active Directory. Should I go with "ourDomainName.com" or "ourDomainName.local"? If I choose the latter, does that not mean that I'll have to get all our users to change their DNS settings to point at our server so it can resolve the domain name (I guess I could also distribute a hosts file)? Perhaps there is another alternative that I'm completely missing.
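
    For the "shut down all ports except those needed" part, a hedged reference sketch of what AD typically needs inbound on the DC, restricted to the team's source IPs (standard port assignments, but verify against Microsoft's documentation before locking the security group down):

      TCP/UDP 53    DNS
      TCP/UDP 88    Kerberos
      TCP     135   RPC endpoint mapper (plus the ephemeral RPC range)
      TCP/UDP 389   LDAP (TCP 636 for LDAPS)
      TCP     445   SMB
      TCP     3268  Global Catalog
      UDP     123   NTP (time sync, which Kerberos depends on)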


  • Port 22 is not responding

    - by Emanuele Feliziani
    I'm trying to make the jump from shared hosting to a VPS for better performance and greater flexibility, but I'm stuck on the fact that I can't access the machine via SSH. First of all, the machine is a CentOS 6.3 cPanel x64 with WHM 11.38.0. sshd is running (it appears in the list of running processes). Doing a port scan, I see that port 22 is not responding. Port 21 is, but I am not able to access the machine via FTP (I think it's a security measure, but I don't know where to disable/enable it). So I'm stuck in WHM with no way to access the machine's configuration, either via SSH or via FTP/SFTP. When trying to connect over SSH from Terminal, I only get this:

      ssh: connect to host xx.xx.xxx.xxx port 22: Operation timed out

    I also tried connecting with the hostname instead of the IP address, with the same result. There seems to be no firewall in WHM, and I have whitelisted my home IP address for SSH access, though there were no restrictions in the first place. I have been wandering through all the settings and options in WHM for several hours now, but can't seem to find anything. Does anybody have a clue as to where I should start investigating?

    Update: Thanks, everyone. It was in fact a matter of the firewall: there was a firewall not controlled by the WHM software. I managed to get into the console from the VPS control panel (a terrible, terrible Java app that barely took my keyboard input) and disabled the firewall altogether by running:

      service iptables stop

    After that I was able to access the console via SSH from the Terminal. Now I will have to set up the firewall again, because that command looks like it completely wiped my iptables rules. Can you recommend any newbie-friendly resource where I can learn how to go about this and what I should block? Or should I just go with something like this: http://configserver.com/cp/csf.html? Thanks again to everyone who helped me out.
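
    As a starting point before adopting something like CSF, a minimal CentOS 6 ruleset might look like this hedged sketch (run it from the console, not over SSH, in case of a lockout; port list is illustrative):

      iptables -F
      iptables -A INPUT -i lo -j ACCEPT
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A INPUT -p tcp --dport 22 -j ACCEPT   # SSH
      iptables -A INPUT -p tcp --dport 80 -j ACCEPT   # HTTP
      iptables -P INPUT DROP
      service iptables save   # persist to /etc/sysconfig/iptables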

