Search Results

Search found 6826 results on 274 pages for 'dedicated hosting'.

Page 184/274

  • How to run Node.js on a Linux platform

    - by rotem
    How can I run Node.js on a host running Linux? Running Node.js locally on Windows is simple: I download the package from nodejs.org/download/, run the Windows Installer (.msi), open a command prompt, type node file.js, and everything works fine. But on my Linux host I only have a control panel, with no option to run .exe or .msi files and no command-line window, so how can I run Node.js there? I called my hosting provider's support (bluehost.com) and they don't know. I've included my server and control panel details. Thanks for any help.
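    For reference, on a Linux host that does provide SSH access (a shared plan may not), starting the same script usually comes down to a couple of shell commands; this is only a sketch, and file.js is the script named in the question:

        # log in to the host, then start the script and keep it running after logout
        ssh username@yourhost.example.com
        nohup node file.js > node.log 2>&1 &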

    Read the article

  • How can I tell if my live web-server is overloaded?

    - by Nick G
    We have a live web server which doesn't seem to be performing all that well. It's a Dell PowerEdge machine, a few years old (dual core, 4GB), hosting about 20 low-traffic websites, and it doesn't seem to be as fast as it used to be. How can we determine the cause? If it were website traffic I would expect high CPU, but CPU usage is quite low and hovers around the 15-30% mark except for very brief periods. I'm wondering whether, rather than CPU performance, the problem is disk thrashing caused by the constant reads and writes of all the small web files and database queries. It has 4x 7200 RPM SATA drives in RAID 5. So is there a way to check whether it's disk thrashing?
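    One quick way to check for disk contention is to watch I/O wait and per-device utilisation while the sites are busy; a sketch, assuming the sysstat package is installed:

        # per-disk utilisation and average wait times, refreshed every 2 seconds
        iostat -x 2
        # a persistently high 'wa' (I/O wait) column points at the disks rather than the CPU
        vmstat 2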

    Read the article

  • Stop sending packets to private IPs

    - by SlasherZ
    My server got locked down by my hosting provider because it was sending packets to private IP addresses. My question is, what is the best solution to stop that? Here is the log I got from my hosting provider:

        [Mon Jun 2 00:04:36 2014] forward-to-private:IN=br0 OUT=br0 PHYSIN=vm-44487.0 PHYSOUT=eth0 MAC=78:fe:3d:47:3d:20:00:1c:14:01:4e:cd:08:00 SRC=78.46.198.21 DST=192.168.249.128 LEN=1454 TOS=0x00 PREC=0x00 TTL=64 ID=58859 DF PROTO=UDP SPT=41366 DPT=41234 LEN=1434
        [Mon Jun 2 00:17:15 2014] forward-to-private:IN=br0 OUT=br0 PHYSIN=vm-44487.0 PHYSOUT=eth0 MAC=78:fe:3d:47:3d:20:00:1c:14:01:4e:cd:08:00 SRC=78.46.198.21 DST=192.168.249.128 LEN=1456 TOS=0x00 PREC=0x00 TTL=64 ID=52234 DF PROTO=UDP SPT=55430 DPT=41234 LEN=1436
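    One common fix is to drop outbound traffic to the RFC 1918 ranges on the VM itself; a sketch with iptables, assuming none of the private ranges are used legitimately (add ACCEPT rules first for any that are):

        iptables -A OUTPUT -d 10.0.0.0/8 -j DROP
        iptables -A OUTPUT -d 172.16.0.0/12 -j DROP
        iptables -A OUTPUT -d 192.168.0.0/16 -j DROP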

    Read the article

  • Default DocumentRoot in Apache does not work

    - by James Wise
    I have Apache 2.2 and PHP 5.3.15 on a single server. I configured virtual hosting with a default vhost:

        0_default_.conf - goes to /var/www/default
        sub.domain.com.conf - goes to /var/www/sub.domain.com

    My question is, how can I set the default DocumentRoot to sub.domain.com permanently, so that all requests end up at sub.domain.com? I tried removing 0_default_.conf, but then viewing the page displays the PHP source code of sub.domain.com. Here are my configurations: http://pastebin.com/4e3awUJ4. I could create an index.php in /var/www/default that permanently redirects to the sub.domain.com site, but that isn't a viable solution for me: if I haven't pointed the IP address of sub.domain.com at the server, users couldn't view that subdomain. I would appreciate it if anyone could share their knowledge and wisdom. Thanks. JamesW
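    Since Apache treats the first vhost it loads as the catch-all for unmatched Host headers, one approach is to point that first vhost at the same DocumentRoot; a sketch for Apache 2.2, where the paths come from the question and the PHP handler must also be active in this vhost or the source may be served as plain text:

        # 0_default_.conf - loaded first, so it catches every unmatched request
        <VirtualHost *:80>
            ServerName default.invalid
            DocumentRoot /var/www/sub.domain.com
        </VirtualHost>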

    Read the article

  • Serve PHP page in web root but show contents in subdirectories

    - by David
    I have a web site on a shared hosting server. My directory layout looks like this:

        /home
          /user
            /public_html
              /pics
                /family

    There is an index.php file in public_html. I need help writing .htaccess rules that will:

    1. Serve the index.php file when www.domain.org is requested
    2. Force the user back to public_html when www.domain.org/pics is requested
    3. Allow the user to see the directory contents when www.domain.org/pics/family is requested

    I experimented with a lot of combinations of RewriteCond and RewriteRule, but I don't understand the documentation and examples well enough to know if what I want to do is even possible. The web server application is some version of Apache.
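    A hedged sketch of what this could look like, split across two .htaccess files (the directives are standard mod_rewrite/mod_autoindex, but whether the shared host allows Options and rewrite overrides is an assumption):

        # public_html/.htaccess
        DirectoryIndex index.php
        Options -Indexes
        RewriteEngine On
        # send /pics (but not /pics/family) back to the site root
        RewriteRule ^pics/?$ / [R=302,L]

        # public_html/pics/family/.htaccess
        Options +Indexes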

    Read the article

  • Recovering database files from a corrupted VHD

    - by Apocalypse9
    We have a SQL Server instance hosted on a virtual machine. Our hosting company updated/restarted the server, and for some reason the virtual machines became unbootable. We've spoken to Microsoft and used a few higher-level tools to attempt to recover the virtual machines, but were unsuccessful. When browsing the file system, the database folder doesn't even appear. I'm wondering if there are any lower-level tools that might be able to find and copy the database files. As far as I know the physical hard drive is OK, so I'm hoping there may be some way to recover the files themselves even if the rest of the virtual machine's file system is a loss. Obviously we're in a bit of a bind, and any help/suggestions are very much appreciated.
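    If you can copy the raw .vhd file off the host, the libguestfs tools can sometimes open an image whose guest no longer boots; a sketch on a Linux box, with the file name purely illustrative:

        # read-only mount of whatever filesystems are still recognisable inside the VHD
        guestmount -a sqlserver.vhd -i --ro /mnt/vhd
        ls /mnt/vhd
        # then look for the .mdf/.ldf files and copy them out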

    Read the article

  • WordPress issues with htaccess causing 500 server error

    - by Scott B
    I have a few customers of my custom WordPress theme who are reporting that their sites have gone down over the past few weeks due to a 500 internal server error. In each case, it appears that the .htaccess file has been to blame. In one case, the user's hosting company found a "_pvt/service.pwd" line in there that was apparently causing the problem. In another instance, the hosting company indicated that a cron job appeared to be causing the issue and sent the user the following as evidence:

        root@cherry [/home/login/public_html]# stat .htaccess
          File: `.htaccess.orig'
          Size: 587        Blocks: 8          IO Block: 4096   regular file
        Device: 811h/2065d Inode: 590021607   Links: 1
        Access: (0644/-rw-r--r--)  Uid: ( 2234/login)   Gid: ( 2231/login)
        Access: 2010-03-07 16:42:01.000000000 -0600
        Modify: 2010-03-26 09:15:15.000000000 -0500
        Change: 2010-03-26 09:45:05.000000000 -0500

    In yet another instance, the user reported this as the cause: "The permissions on my .index file somehow got changed to 777 instead of 644". I'm just seeking to help these users understand what's going on, the likely cause, and how to prevent it. I also want to eliminate my theme as a potential contributing factor. There are two areas I want to submit here to make sure they are not likely to cause such an issue: my permalink rewrite code, and my upgrade script (which sets 755 on the destination folder, my theme folder). Here's the permalink rewrite code:

        if (file_exists(ABSPATH.'/wp-admin/includes/taxonomy.php')) {
            require_once(ABSPATH.'/wp-admin/includes/taxonomy.php');
            if(get_option('permalink_structure') !== "/%postname%/" || get_option('mycustomtheme_permalinks') !== "/%postname%/") {
                $mycustomtheme_permalinks = get_option('mycustomtheme_permalinks');
                require_once(ABSPATH . '/wp-admin/includes/misc.php');
                require_once(ABSPATH . '/wp-admin/includes/file.php');
                global $wp_rewrite;
                $wp_rewrite->set_permalink_structure($mycustomtheme_permalinks);
                $wp_rewrite->flush_rules();
            }
            if(!get_cat_ID('topMenu')){wp_create_category('topMenu');}
            if(!get_cat_ID('hidden')){wp_create_category('hidden');}
            if(!get_cat_ID('noads')){wp_create_category('noads');}
        }
        if (!is_dir(ABSPATH.'wp-content/uploads')) {
            mkdir(ABSPATH.'wp-content/uploads');
        }

    And here are the relevant lines from my uploader script:

        // permission settings for newly created folders
        $chmod = 0755;

        // Ensures that the correct file was chosen
        $accepted_types = array('application/zip', 'application/x-zip-compressed', 'multipart/x-zip', 'application/s-compressed');
        foreach($accepted_types as $mime_type) {
            if($mime_type == $type) {
                $okay = true;
                break;
            }
        }

        // Safari and Chrome don't register zip mime types. Something better could be used here.
        $okay = strtolower($name[1]) == 'zip' ? true : false;
        if(!$okay) {
            die("This upgrader requires a zip file. Please make sure your file is a valid zip file with a .zip extension");
        }

        //mkdir($target);
        $saved_file_location = $target . $filename;
        if(move_uploaded_file($source, $saved_file_location)) {
            openZip($saved_file_location);
        } else {
            die("There was a problem. Sorry!");
        }
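    For comparison, the stock rewrite block that WordPress core itself writes to .htaccess for standard permalinks looks like the sketch below; lines beyond it (a service.pwd reference, for example) came from something else on the account rather than from the permalink flush:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress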

    Read the article

  • How to exclude IP from htaccess domain redirect

    - by ijujym
    I'm trying to write a custom redirect rule for some testing purposes on two domains with exactly the same site. The code I am using is:

        RewriteEngine on
        RewriteCond %{REMOTE_ADDR} !^1\.2\.3\.4$
        RewriteCond %{HTTP_HOST} ^.*site1.com [NC]
        RewriteRule ^(.*)$ http://www.site2.com/$1 [R=301,L]

    What I want is to redirect all requests for site1 to site2 except for requests from IP address 1.2.3.4, but currently requests from that IP are also being redirected to site2. Is there something I've missed in the settings? (Note: both domains are on the same shared hosting account.)
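    Two things worth checking, sketched below: a 301 is cached aggressively by browsers, so earlier redirects may keep firing for that IP even after the rule is fixed (test with R=302 first), and on shared hosting behind a proxy REMOTE_ADDR can hold the proxy's address, in which case the forwarded header carries the real client IP (the header name is an assumption about the host's setup):

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^1\.2\.3\.4$
        RewriteCond %{HTTP:X-Forwarded-For} !(^|,\s*)1\.2\.3\.4($|,)
        RewriteCond %{HTTP_HOST} site1\.com [NC]
        RewriteRule ^(.*)$ http://www.site2.com/$1 [R=302,L]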

    Read the article

  • Which libraries use the "We Know Where You Live" optimization for std::make_shared?

    - by KnowItAllWannabe
    Over two years ago, Stephan T. Lavavej described a space-saving optimization he implemented in Microsoft's implementation of std::make_shared, and I know from speaking with him that Microsoft has nothing against other library implementations adopting this optimization. If you know for sure whether other libraries (e.g., GNU C++, Clang, Intel C++, plus Boost for boost::make_shared) have adopted this optimization, please contribute an answer. I don't have ready access to that many make_shared implementations, nor am I wild about digging into the bowels of the ones I do have to see if they've implemented the WKWYL optimization, but I'm hoping that SO readers know the answers for some libraries off-hand. I know from looking at the code that as of Boost 1.52 the WKWYL optimization had not been implemented, but Boost is now up to version 1.55. Note that this optimization is different from std::make_shared's ability to avoid a dedicated heap allocation for the reference count used by std::shared_ptr. For a discussion of the difference between WKWYL and that optimization, consult this question.
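    Short of reading each library's headers, one rough probe is to total the bytes requested from the heap and compare make_shared against a separate new; the absolute numbers vary by implementation, but a noticeably smaller combined block hints at a leaner control block. This is only a sketch, not a definitive test of WKWYL specifically:

        #include <cstdio>
        #include <cstdlib>
        #include <memory>
        #include <new>

        static std::size_t bytes_requested = 0;

        void* operator new(std::size_t n) {
            bytes_requested += n;                 // tally every heap request
            if (void* p = std::malloc(n)) return p;
            throw std::bad_alloc();
        }
        void operator delete(void* p) noexcept { std::free(p); }

        struct Widget { char pad[64]; };

        int main() {
            bytes_requested = 0;
            std::shared_ptr<Widget> a(new Widget);   // object plus separate control block
            std::printf("shared_ptr(new Widget): %zu bytes\n", bytes_requested);

            bytes_requested = 0;
            auto b = std::make_shared<Widget>();     // single combined allocation
            std::printf("make_shared<Widget>():  %zu bytes\n", bytes_requested);
        }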

    Read the article

  • Linux: how to verify my network configuration before doing a restart

    - by wael34218
    I am trying to build a network bridge for my VMs on a server, so I added a new file and changed another in the /etc/sysconfig/network-scripts directory. Then I restarted networking with /etc/init.d/network restart. After that the server did not come back up, and I had to contact the hosting provider's support for help. I need a way to verify my new configuration before a network restart, to make sure the network will come back up, much like Apache's /etc/init.d/httpd configtest.
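    The network scripts have no real configtest, so a common safety net is to schedule a rollback before restarting and cancel it once you know you still have access; a sketch using at (the paths match the question, the delay is arbitrary):

        # back up the working configuration
        cp -a /etc/sysconfig/network-scripts /root/network-scripts.ok
        # schedule an automatic revert in 5 minutes
        echo 'cp -a /root/network-scripts.ok/. /etc/sysconfig/network-scripts/ && /etc/init.d/network restart' | at now + 5 minutes
        # apply the new config; if you get locked out, the revert brings it back
        /etc/init.d/network restart
        # still connected? cancel the pending revert
        atq
        atrm <job-number>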

    Read the article

  • GitHub Not Accessible from Google Chrome

    - by KPthunder
    Whenever I try to go to GitHub in Google Chrome 11 I get an error message (screenshot not shown), while GitHub works perfectly fine in Firefox 4. This has been going on for a few weeks on a fairly new install of Windows (I don't even remember whether I've successfully reached GitHub in Chrome on this installation in the past). I don't even use GitHub to host my own code, but this has proven annoying in that I can't even access other people's projects through Chrome! Does anybody know what is going on here? An interesting side note: the Sight extension for Chrome doesn't work either. It worked on my old installation of Windows but not on this one. Is my Chrome installation just screwy? I've tried disabling certain other extensions but nothing seems to change.

    Read the article

  • ProFTPD Virtual User Directory

    - by Nik
    Alright, I'm trying to replicate a web hosting company's basic setup here: authenticate virtual users via SQL and redirect/jail each of them into their own directory. I've accomplished most of the goals, with the exception of the redirect/jail. The directories are stored under /home/ftp, and that's what DefaultRoot is set to. I want each individual user to have, and be jailed into, their own directory, but setting homedir in SQL doesn't appear to have any effect: upon logging in over FTP, every user lands in the DefaultRoot with no directory jailing or redirect. How do I accomplish this last task?
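    A sketch of the ProFTPD/mod_sql pieces that usually make the per-user home directory stick; the table and column names are assumptions, and the key points are that the homedir column named in SQLUserInfo is what ProFTPD uses and that DefaultRoot ~ jails each session to it:

        DefaultRoot ~
        RequireValidShell off
        SQLAuthTypes Crypt
        SQLConnectInfo ftpdb@localhost dbuser dbpass
        # users table: username, password, uid, gid, home directory, shell
        SQLUserInfo ftpusers userid passwd uid gid homedir shell
        CreateHome on 755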

    Read the article

  • What exactly is an invalid HTTP_HOST header?

    - by rolling stone
    I've implemented Django's relatively new ALLOWED_HOSTS setting, which is meant to prevent attackers from submitting requests with a fake HTTP Host header. Since adding that setting, I now get anywhere from 20-100 emails a day notifying me of invalid HTTP_HOST headers. I've copied an example of a typical error message below. I'm hosting my site on EC2 and am relatively new to setting up and maintaining a server, so my question is: what exactly is happening here, and what is the best way to manage these invalid and, I assume, malicious requests?

        [Django] ERROR: Invalid HTTP_HOST header: 'www.launchastartup.com'. You may need to add u'www.launchastartup.com' to ALLOWED_HOSTS.
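    Keeping ALLOWED_HOSTS strict and simply silencing the notification emails is a common way to handle the noise; a sketch for settings.py, noting that the django.security.DisallowedHost logger exists in newer Django releases, so treat it as an assumption for older ones:

        ALLOWED_HOSTS = ['.example.com']  # only your real domains

        LOGGING = {
            'version': 1,
            'disable_existing_loggers': False,
            'handlers': {
                'null': {'class': 'logging.NullHandler'},
            },
            'loggers': {
                # drop the emails triggered by spoofed Host headers
                'django.security.DisallowedHost': {
                    'handlers': ['null'],
                    'propagate': False,
                },
            },
        }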

    Read the article

  • How can I let my users set php.ini settings for WordPress?

    - by jldugger
    I set up a WordPress server from a fairly standard Ubuntu 9.10 install for a class, and the students are constantly running into problems with the default php.ini settings. First the memory settings were too low, then the file upload limits were too small, and so on. More concerning was a WordPress-wide blank page that I suspect came from a process killed for RAM consumption, although turning on PHP errors in php.ini didn't reveal anything! I'm not familiar with shared hosting, but I get the feeling such places have a way to let users edit these settings without needing me to intervene and restart Apache.
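    On a stock mod_php setup the usual approach is to allow per-directory overrides so each site can raise its own limits from its .htaccess; a sketch, where the values are arbitrary and it assumes the vhost permits AllowOverride Options (or All):

        # in the site's .htaccess, when PHP runs as mod_php
        php_value memory_limit 128M
        php_value upload_max_filesize 20M
        php_value post_max_size 20M
        php_flag display_errors on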

    Read the article

  • Zeroing SSD drives

    - by jtnire
    We host VPSes for customers. Each customer VPS is given an LVM LV on a standard spinning hard disk. If a customer leaves, we zero out their LV to ensure that their data does not leak over to another customer. We are thinking of moving to SSDs for our hosting business. Given that SSDs use wear-levelling, does that make zeroing pointless? Does this make the SSD idea unfeasible, given that we can't allow customer data to leak over to another customer? Thanks
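    On SSDs the usual replacement for a full overwrite is to discard (TRIM) the LV's blocks, which marks them unused at the drive level and works with wear levelling rather than against it; a sketch, assuming a reasonably recent util-linux and a storage stack that passes discards through:

        # discard every block of the departing customer's volume
        blkdiscard /dev/vg0/customer_lv
        # or, without discard support, fall back to an explicit overwrite
        dd if=/dev/zero of=/dev/vg0/customer_lv bs=1M oflag=direct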

    Read the article

  • RemoteApps and Cached Credentials

    - by user66774
    I'm looking for guidance on an issue we're having. We are hosting an application over Terminal Services through RDWeb on Windows Server 2008. To give users the ability to change their password, we've exposed iisadmpwd. When users change their password, they are prompted to log into the broker server again, even if they log off of the RDWeb page and log back in. What we've found is that the credentials seem to be cached in memory after logging in: ending task on TSWBPRXY.EXE and WKSPRT.EXE, closing IE, logging back into the RDWeb page, and then launching the application lets the user into the application without additional credentials. I'm wondering whether there is a better way to let users change their password from a web interface while still allowing them to re-establish their connection from the RDWeb login page rather than through the RDP login prompt that comes up.

    Read the article

  • Wastage of resources in Virtualization

    - by Sabeen Malik
    I am not sure if this is the right place to ask this question, but I hope it is. When looking at VPSes earlier today, I was trying to understand how each container works in the background. Keeping in mind that the operating system itself consumes a fair share of a machine's resources, wouldn't having multiple operating systems on the same machine mean more wasted resources? For instance, suppose I was running CentOS on a dedicated box and it was running, say, 20 background OS-level processes. Then I install a virtualization platform and create 5 more CentOS virtual machines on the same system, each exactly the same as the host operating system. Doesn't this mean duplicating those 20 processes 6 times, so that internally the context switching is happening between 120 processes instead of 20?

    Read the article

  • Tell AppleScript to go to a specific window in Excel

    - by Nick
    I've got a script that pulls information from an Excel (Mac Excel 2004) spreadsheet and processes it through a local database. My problem (which is temporary, pending a dedicated scripting machine with Excel 2008) arises when I need to work on another spreadsheet in Excel: I want to ensure that the AppleScript continues reading data from the correct spreadsheet. Is there a way to refer to the specific Excel file in AppleScript, as opposed to just telling the application in general? Or possibly to reference the spreadsheet without having to have it open at all?
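    Excel's AppleScript dictionary does let you scope commands to a named workbook rather than whatever is frontmost; a sketch against the Excel 2004 object model, where the workbook and sheet names are illustrative and the workbook still has to be open somewhere:

        tell application "Microsoft Excel"
            -- address the data workbook explicitly, regardless of the frontmost window
            set theValue to value of range "A1" of worksheet "Sheet1" of workbook "Data.xls"
        end tell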

    Read the article

  • Will this SPF record restrict delivery of email for the original domain?

    - by user199421
    As part of the product we offer, we send emails on behalf of our clients. Because the emails don't come from an IP associated with the client, they are sometimes flagged as spam. We advised some of our clients to add an SPF record approving us to send email on their behalf, and we saw an immediate improvement in deliverability after the change. However, one of our clients was notified by his hosting provider that the SPF record we suggested would "slightly restrict" all emails that don't come from our servers (including our client's own servers). The record we use is this: v=spf1 a mx include:ourdomain.com ~all. So my question is whether the warning we received is correct, and if so, why, and what can be done about it (i.e. allow sending email both from the original domain and by us).

    Read the article

  • Hyper-V Server 2008 Configuration

    - by Eternal21
    I need to set up Lync on a Windows Server 2008 machine. The problem is that Lync cannot be installed on a domain controller, which means I need one Server 2008 instance acting as a domain controller and another Server 2008 instance running Lync. I figured the best way would be to host both on a single machine using virtual machines. I've installed Server 2008, but now my question is this: do I add two virtual machines (domain controller and Lync), or do I add only one virtual machine for Lync and let the 'parent' Server 2008 act as the domain controller?

    Read the article

  • Do you leave Windows Automatic Updates enabled on your production IIS server?

    - by Nobody
    If you were running a 24/7 website on Windows Server 2003 (IIS6), would you leave the Windows automatic update feature enabled or would you turn it off? When enabled, you always get the latest security patches and bug fixes automatically as soon as they're available, which is the most secure choice. However, the machine will sometimes be rebooted automatically to apply the updates, leading to a couple of minutes of downtime in the middle of the night, and I've seen rare occasions where the machine does not restart correctly, resulting in further downtime. If auto updates are off, when do you apply the patches? I guess you have to use a load balancer with multiple web servers and rotate them out of the production site, apply patches manually, and put them back in. This can be logistically inconvenient when the load balancer is managed by a hosting company. You will also have machines in production that don't always have the latest security patches, and you have to routinely spend time deciding which patches to apply and when.

    Read the article

  • Access server bound to localhost:5000 from different computer

    - by Jesse
    I am working on a web application using the Pylons framework. The web server is binding to localhost:5000, so I can access my application by going to localhost:5000 in my browser. I would like to be able to access the server from another computer on the same network. The computer hosting the server and application is running Mac OS X, and the computer from which I would like to access the application is running Windows 7 (I have Cygwin with SSH installed, as well as PuTTY). I could work around this by binding to the host name of the computer, but I would rather leave it running only on localhost. I was thinking I could do something with SSH tunnelling but have not had any luck so far. Any ideas?
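    A local SSH forward from the Windows machine does exactly this while the app stays bound to localhost; a sketch from the Cygwin side, with the user and host names illustrative (PuTTY's Connection > SSH > Tunnels page can set up the same forward):

        # forward local port 5000 on the Windows box to localhost:5000 on the Mac
        ssh -N -L 5000:localhost:5000 user@mac-hostname
        # then browse to http://localhost:5000 on the Windows machine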

    Read the article

  • Multiple Apps - One SSL

    - by Optix App Development
    I'm trying to configure a domain and SSL certificate to serve multiple Facebook apps over HTTPS. What I need advice on is routing the apps through the SSL domain without actually hosting them on that server; ideally they would remain hosted on each client's server. Any advice on how to do this? UPDATE: Following the advice from the replies, I have set up a domain that houses my Facebook apps under one SSL certificate. So far this is working well. Thanks guys. :)
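    One way to do the routing is a reverse proxy on the SSL host, so the certificate terminates there while each app is fetched from the client's own server; a hedged Apache sketch where the host names, paths and certificate files are illustrative and mod_proxy/mod_ssl must be enabled:

        <VirtualHost *:443>
            ServerName apps.example.com
            SSLEngine on
            SSLCertificateFile /etc/ssl/apps.example.com.crt
            SSLCertificateKeyFile /etc/ssl/apps.example.com.key

            SSLProxyEngine on
            # each path maps to an app hosted on a client's own server
            ProxyPass        /client1/ https://client1.example.net/fbapp/
            ProxyPassReverse /client1/ https://client1.example.net/fbapp/
        </VirtualHost>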

    Read the article

  • How to tell IIS7 to allow POST to a text file (to solve 405)?

    - by meticulous
    I want to allow HTTP POST to text files (*.txt); the text file is just an example of any static resource that is normally accessible by GET. The error is: "Server Error 405 - HTTP verb used to access this page is not allowed. The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access." How can I accomplish this? Background: I'm using apps.facebook.com to reach my hosted Facebook app, and Facebook now sends an HTTP POST through to the iframe hosting my app. This Facebook behaviour has been around for a while, but it's now being enforced, which in turn forces me to make static content available to the POST verb.

    Read the article

  • How to point subdomain to a nameserver?

    - by vonconrad
    I've got an old crusty WHM/cPanel server which I'm trying to get rid of; I've got a new setup on shared hosting which is much cheaper in the long run. The problem is that there are a bunch of websites on the server whose domains I don't have access to. They currently point to name servers on my domain (ns.mydomain.com), but the new provider has its own name servers (ns.provider.com) which I have to use instead. My initial idea was to set up a CNAME to point my name server at my provider's, ns.mydomain.com CNAME ns.provider.com, but I read in this question that this would be a bad idea. The accepted answer suggests using an A record instead, and I want to make sure I understand how that would work. Assuming ns.provider.com has an IP address of 123.123.123.123, is it just a matter of adding ns.mydomain.com A 123.123.123.123? Is there any way the provider could block those requests, given that the name server domain technically doesn't belong to them?
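    In zone-file terms the accepted-answer approach is just an ordinary A record in the mydomain.com zone; a sketch, where the IP is the placeholder from the question and would need updating whenever the provider renumbers its name servers:

        ; mydomain.com zone
        ns.mydomain.com.    IN  A   123.123.123.123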

    Read the article
