Search Results

Search found 8849 results on 354 pages for 'cloud hosting'.


  • Amazon AWS EC2 + Puppet, get Puppet to know AWS instance tags

    - by Piotr Jasiulewicz
    I am having a problem with my AWS deployment; I'm fairly new to both AWS and Puppet. My question: can you distinguish Puppet nodes by AWS machine tags or by CNAME domains? A little background on the plan:
      - multiple clusters of machines: one PHP cluster, one legacy PHP cluster, one Java cluster, one Perl cluster
      - configuration controlled with Puppet - still pretty new to it, but as a developer I like being able to version control server configuration
      - autoscaling enabled on those clusters - the main benefit of the cloud, and what makes its rather high cost for any reasonable performance worth it (those Amazon machines are slower than my phone...)
      - deployment controlled by Capistrano, which makes things a lot easier
    In AWS you get those super nasty public/private machine DNS names, and there is no way to identify machines by them. To ease the problem, AWS seems to want you to tag everything - so I did. I found a script that creates a CNAME record for each machine from its "ShortName" tag via the Route53 API. Every machine now has a ShortName tag that becomes its CNAME; unfortunately, Puppet still resolves the private DNS name. I'd like to be able to write node 'perl-cluster' {} in Puppet - does anyone have a clue how to achieve this? Thanks
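
    A common way to get at instance tags from inside Puppet is a custom fact rather than the node name. The following is only a sketch of that idea - it assumes the AWS CLI is installed, the instance is allowed to call ec2:DescribeTags, and the role::* class names are made up for illustration:

      #!/bin/bash
      # /etc/facter/facts.d/shortname.sh - external fact exposing the EC2 "ShortName" tag
      IID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
      AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
      REGION=${AZ%?}   # region = availability zone minus the trailing letter
      TAG=$(aws ec2 describe-tags --region "$REGION" \
            --filters "Name=resource-id,Values=$IID" "Name=key,Values=ShortName" \
            --query 'Tags[0].Value' --output text)
      echo "shortname=$TAG"

      # site.pp - match on the fact instead of the unreadable EC2 hostname
      node default {
        case $::shortname {
          /^perl-/: { include role::perl_cluster }
          /^php-/:  { include role::php_cluster }
          default:  { include role::base }
        }
      }

    With something like this in place the node name no longer matters; autoscaled instances pick up their role from the tag they are launched with.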

    Read the article

  • Solaris 10 invalid ARP requests from 0.0.0.0?

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 are telling me that my server is making a lot of invalid ARP requests. This is an example of a portion of what was sent to me from the logs (with MAC addresses and IP addresses changed):
      xxxx:xxxx:xxxx/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp])
    I don't see anything in the ARP tables (arp -a) or routing tables (netstat -r), and I don't see anything relating to 0.0.0.0 when snooping the ARP requests. The only place I see any reference to 0.0.0.0 is in the SCTP section of netstat -a:
      SCTP:
      Local Address    Remote Address    Swind  Send-Q  Rwind   Recv-Q  StrsI/O  State
      ---------------  ----------------  -----  ------  ------  ------  -------  -------
      0.0.0.0          0.0.0.0           0      0       102400  0       32/32    CLOSED
    But I'm not really sure what that means, and it doesn't seem like I can disable SCTP. Does anyone have any idea what might be causing this and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there any way to at least block those requests using ipfilter or something else?

    Read the article

  • Port forwarding + shared connection with Ubuntu

    - by Joey Adams
    Because my wireless router's ethernet ports are defective, I set up a shared wireless connection from my laptop (which has wifi) to my eMac (which does not) via a crossover ethernet cable. The laptop is behind a router as 192.168.1.131, and the eMac is behind the laptop as 10.42.43.1. The laptop is running Ubuntu 9.10 (Karmic). I achieved the shared connection through NetworkManager Applet: I right-clicked on the network icon at the top right, went to Edit Connections, selected the wired connection named "Auto eth0", clicked "Edit...", went to the "IPv4 Settings" tab, and selected the method "Shared to other computers". The eMac can now access the Internet. Now I want to enable port forwarding. There's a game I want to play that needs port 6112 forwarded (both TCP and UDP) in order to host games. I set up the router to forward that port to 192.168.1.131 (the laptop), but port forwarding still isn't available on the eMac. I suppose I need to pretend my laptop is a router and configure port forwarding on it, indicating that incoming connections to the laptop (192.168.1.131) should be forwarded to the eMac on the shared connection (10.42.43.1). Thus, packets coming into the router on port 6112 would be redirected to the laptop (by the router), then to the eMac (by the laptop). My question is, how would I do that on Ubuntu (in light of NetworkManager's presence)? Also, if I can't get this to work, does anyone mind hosting a comp stomp? :D
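
    For reference, the usual way to make the laptop act as a router here is a pair of iptables NAT rules on it - a minimal sketch only, using the addresses and port from the question (the persistence of these rules across NetworkManager reconnects is an assumption):

      # let the laptop forward packets between the wifi and the crossover cable
      sudo sysctl -w net.ipv4.ip_forward=1

      # send anything arriving on 6112 (TCP and UDP) on to the eMac
      sudo iptables -t nat -A PREROUTING -p tcp --dport 6112 -j DNAT --to-destination 10.42.43.1
      sudo iptables -t nat -A PREROUTING -p udp --dport 6112 -j DNAT --to-destination 10.42.43.1
      sudo iptables -A FORWARD -p tcp -d 10.42.43.1 --dport 6112 -j ACCEPT
      sudo iptables -A FORWARD -p udp -d 10.42.43.1 --dport 6112 -j ACCEPT

    NetworkManager's "Shared to other computers" mode already sets up masquerading, so these rules only add the inbound redirect; they may need re-applying if the connection is brought down and up again.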

    Read the article

  • Has anyone used the sharedband connection bonding product?

    - by John Rennie
    See http://www.sharedband.com/ for details on the product. Obviously Sharedband aren't too keen on giving away their technical secrets, but I would guess that it bonds the connections at the IP layer, i.e. their routers send the IP packets to the SharedBand routers over all available lines, and the SharedBand routers handle all the virtual circuitry and provide the NATing to whatever IP address(es) they've assigned you. It looks like a clever idea, and a good way to provide some resilience over ADSL links. You can even use ADSL links from different ISPs and SharedBand will still bond them for you. But I find myself wondering how well it really works, and whether it's worth it. The Draytek routers can already load balance (though not bond) up to four ADSL lines, so the SharedBand product really only offers an advantage if you're hosting servers, i.e. you can have one IP address that accepts incoming connections through all your (working) ADSL lines. But should you really try to host servers on ADSL lines, especially since ADSL upload performance isn't stellar? Wouldn't it be better to use a hosted server, or maybe pay up for a leased line with an SLA? So I'm asking if anyone is using SharedBand, and if so what do you think of it? JR

    Read the article

  • Short POST data in HTTP

    - by Matt
    We're hosting a customer's Debian Linux web server. It's running a PHP-based web application. The server sits behind our firewall with its own virtual interface, and port 80 is forwarded internally to a machine in the DMZ. The issue we're having is that when data is posted to the server it seems to be cut short for some users. It's reproducible for some users on the same box, but when the same user sends the same data from another PC on the same LAN, it works. I'm told the data gets cut to around 1140 bytes. Any idea why this might be happening? The customer is blaming our firewall, but then surely we'd have issues with other services. I suspect it's a problem with the website itself. Suggestions on how to isolate the problem would be of help. Our firewall is Astaro. EDIT: The customer has temporarily set the Ethernet frame size on the server to 500 bytes, which made it work for now! I know some of the customers are using an internet provider that runs PPPoE.
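
    For what it's worth, a frame-size workaround like that usually points at an MTU/path-MTU-discovery problem, which PPPoE customers are prone to. One common mitigation (shown here as a generic Linux rule, purely as a sketch - Astaro exposes MSS/MTU settings through its own interface rather than raw iptables) is to clamp the TCP MSS on the gateway:

      # clamp the advertised TCP MSS to the discovered path MTU on forwarded connections
      iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

    If clamping on the firewall (or lowering the MTU on the web server's interface) makes the problem disappear for the affected users, that would confirm the PPPoE/MTU theory rather than a fault in the site itself.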

    Read the article

  • Cygwin, ssh, and git on Windows Server 2008

    - by Paul
    Hi everyone. I'm trying to setup a git repository on an existing Windows 2008 (R2) server. I have successfully installed Cygwin & added git and ssh to the packages, and everything works perfectly (thanks to Mark for his article on it). I can ssh to localhost on the server, and I can do git operations locally on the server. When I try to do either from the client, however, I get the "port 22, Bad file number" error. Detailed SSH output is limited to this:
      OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
      debug1: Connecting to {myserver} [{myserver}] port 22.
      debug1: connect to address {myserver} port 22: Attempt to connect timed out without establishing a connection
      ssh: connect to host {myserver} port 22: Bad file number
    Google tells me that this means I'm being blocked, usually, by a firewall. So, double-checked the firewall settings on the server, rule is there allowing port 22 traffic. I even tried turning off the firewall briefly, no change in behavior. I can ssh just fine from that client to other servers. The hosting company swears that there's no other firewalls blocking that server on port 22 (or any other port, they claim, but I find that hard to believe). I have another trouble ticket into them, just in case the first support person was full of it, but meanwhile I wanted to see if anyone could think of anything else it can be. Thanks, Paul
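
    A few checks that help separate "sshd isn't reachable" from "sshd isn't listening where you think" - standard Windows tools, with {myserver} left as a placeholder:

      REM on the server: is sshd listening on 0.0.0.0:22 or only on 127.0.0.1?
      netstat -an | findstr :22

      REM is the firewall rule actually enabled for the active profile?
      netsh advfirewall firewall show rule name=all | findstr /i "ssh 22"

      REM from the client: test the raw TCP connection without ssh in the way
      telnet {myserver} 22

    If the telnet connection also times out while netstat shows sshd bound to 0.0.0.0:22, the block is somewhere upstream of the box, which would support pressing the hosting company further.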

    Read the article

  • How do I fix the issue causing "incomplete startup packet" log messages when trying to implement replication in PostgreSQL?

    - by colour me brad
    I've got two cloud servers running Ubuntu 13.04 and PostgreSQL 9.2. I've primarily used this blog post to aid me in setting things up. However, to do the initial database dump to the slave I'm using the pg_start_backup/pg_stop_backup strategy used in this other blog post. I've read through the docs and postgres wikis as well. I ran into several problems I was able to solve, but I can't get past this wretched "the database is starting up" failure. I'm not sure if seeing "cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory" after "consistent recovery state reached" is normal or the first sign of a problem. The searching I've done on "the database is starting up" and "incomplete startup packet" tells me that something is sending empty TCP packets to the slave. The only thing that even knows about the slave is the master, so I'm not sure why it's sending empty packets... Has anyone worked with this and have an idea what might be going wrong? The postgres log on the slave looks like so:
      2013-08-26 13:01:38 CDT LOG: entering standby mode
      2013-08-26 13:01:38 CDT LOG: restored log file "000000010000000000000039" from archive
      2013-08-26 13:01:38 CDT LOG: incomplete startup packet
      2013-08-26 13:01:39 CDT LOG: redo starts at 0/39000020
      2013-08-26 13:01:39 CDT LOG: consistent recovery state reached at 0/390000E0
      cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory
      2013-08-26 13:01:39 CDT LOG: streaming replication successfully connected to primary
      2013-08-26 13:01:39 CDT FATAL: the database system is starting up
      2013-08-26 13:01:39 CDT FATAL: the database system is starting up
      2013-08-26 13:01:40 CDT FATAL: the database system is starting up
      2013-08-26 13:01:40 CDT FATAL: the database system is starting up
      2013-08-26 13:01:41 CDT FATAL: the database system is starting up
      2013-08-26 13:01:42 CDT FATAL: the database system is starting up
      2013-08-26 13:01:42 CDT FATAL: the database system is starting up
      2013-08-26 13:01:43 CDT FATAL: the database system is starting up
      2013-08-26 13:01:43 CDT FATAL: the database system is starting up
      2013-08-26 13:01:44 CDT FATAL: the database system is starting up
      2013-08-26 13:01:44 CDT FATAL: the database system is starting up
      2013-08-26 13:01:44 CDT LOG: incomplete startup packet
      2013-08-26 13:03:27 CDT FATAL: the database system is starting up
      2013-08-26 13:03:27 CDT FATAL: the database system is starting up
      2013-08-26 13:03:30 CDT FATAL: the database system is starting up
      2013-08-26 13:03:30 CDT FATAL: the database system is starting up
    thanks! brad
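
    For context, "FATAL: the database system is starting up" is what a 9.2 standby answers clients with while it is recovering and hot_standby is off, and "incomplete startup packet" lines are typically just empty TCP probes (monitoring, load balancer health checks) rather than anything replication-specific. A minimal sketch of the slave-side settings involved - host, password and archive path are placeholders, not taken from the question:

      # postgresql.conf on the standby
      hot_standby = on

      # recovery.conf on the standby
      standby_mode = 'on'
      primary_conninfo = 'host=MASTER_IP port=5432 user=replication password=SECRET'
      restore_command = 'cp /var/lib/postgresql/9.2/archive/%f %p'

    The "cp: cannot stat ... 00000001000000000000003A" message is usually harmless: restore_command fails on the segment that has not been archived yet, and the standby then falls back to streaming from the primary, as the next log line shows.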

    Read the article

  • How do you add a domain name to a VPS?

    - by jasonaburton
    Hi all, I have a VPS with allgamer.net (I use it to play Minecraft). I also have a domain name with networksolutions.com. What I want to do is attach that domain name to the VPS, so I can run a wiki on my domain name. If this is possible I can avoid buying another hosting plan just for the wiki. How do I go about doing this? I have very little knowledge of server administration, so any advice you guys have is greatly appreciated! I'm pretty sure I have to change the DNS for my domain name to the DNS for my VPS, but on allgamer.net's interface there is no discernible place to find out what I need to change it to. Is there a way to find out the DNS via SSH on my VPS? Also, when I first got my VPS with allgamer.net I filled out a form for it with all my information, and they also wanted a domain name along with it. I gave them the domain name I currently own, but for some reason it's like it's not connected to the VPS: if I go to mydomain.com there's nothing, and if I use mydomain.com for my Minecraft server, that doesn't work either. It's as if it's serving no purpose by being "attached" to my VPS. Any insights into this? Thanks for any help you guys can give me.
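
    Conceptually this is just pointing an A record at the VPS's public address; a sketch of the two halves (the record layout is illustrative, and the registrar's DNS manager may label the fields differently):

      # from an SSH session on the VPS, find its public address
      ip addr show        # or, via an external service: curl ifconfig.me

      # then, in the domain's DNS manager at Network Solutions:
      #   Host    Type    Value
      #   @       A       <VPS public IP>
      #   www     A       <VPS public IP>

    Whether you also need to change nameservers depends on who should host the zone: if you keep DNS at Network Solutions, A records like the above are enough; nameserver changes only matter if allgamer.net is supposed to serve the zone itself.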

    Read the article

  • Blocking requests from specific IPs using IIS Rewrite module

    - by Thomas Levesque
    I'm trying to block a range of IPs that is sending tons of spam to my blog. I can't use the solution described here because it's shared hosting and I can't change anything in the server configuration. I only have access to a few options in Remote IIS. I see that the URL Rewrite module has an option to block requests, so I tried to use it. My rule is as follows in web.config:
      <rule name="BlockSpam" enabled="true" stopProcessing="true">
        <match url=".*" />
        <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
          <add input="{REMOTE_ADDR}" pattern="10\.0\.146\.23[0-9]" ignoreCase="false" />
        </conditions>
        <action type="CustomResponse" statusCode="403" />
      </rule>
    Unfortunately, if I put it at the end of the rewrite rules, it doesn't seem to block anything... and if I put it at the start of the list, it blocks everything! It looks like the condition isn't taken into account. In the UI, the stopProcessing option is not visible and is true by default. Changing it to false in web.config doesn't seem to have any effect. I'm not sure what to do now... any ideas?

    Read the article

  • Chroot jail of Nginx and php

    - by sqren
    I'm hosting multiple websites on one VPS and want to chroot each website, e.g. /chroot/website1 and /chroot/website2. I'm using makejail, a high-level tool, to create the jails and copy in the libraries and dependencies. Easy peasy. Each website will need nginx, PHP and MySQL. For PHP I'm using php5-fpm, which actually supports chroot via configuration, although I'm not using that feature yet (maybe I should?). My question is which of the following approaches is better:
      1) Every website gets its own separate instance of nginx, PHP and MySQL. The downside is that each web server + PHP has to listen on a different port, and I also need a "master" nginx web server in front of them, reverse proxying to the chrooted servers behind it. Probably the most secure, but also the most involved.
      2) I don't make any chroot jails manually. I set up one nginx web server that proxies PHP requests to php-fpm on different ports, with multiple php-fpm configurations, each with its own chrooted folder (see the pool sketch below). This is quite manageable - however only PHP is chrooted, not the web server itself. Is this secure enough? Also, when I tried this option out, it seems I will need to use TCP instead of sockets for connecting to MySQL.
      3) You tell me ;)
    I'm quite new to chroot jailing, so please correct me if I'm wrong in my assumptions. I've read all the tutorials I could find, but good chroot guides are scarce. Any help or input is much appreciated!
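
    For what it's worth, option 2 maps naturally onto one php-fpm pool per site. A minimal sketch - socket path, user names and docroot layout are assumptions, not taken from the question:

      ; /etc/php5/fpm/pool.d/website1.conf
      [website1]
      user = website1
      group = website1
      listen = /var/run/php5-fpm-website1.sock
      chroot = /chroot/website1
      chdir = /

      # nginx server block for website1 - note SCRIPT_FILENAME is relative to the chroot
      location ~ \.php$ {
          fastcgi_pass unix:/var/run/php5-fpm-website1.sock;
          fastcgi_param SCRIPT_FILENAME /public$fastcgi_script_name;
          include fastcgi_params;
      }

    The TCP-instead-of-socket observation for MySQL fits this model: the MySQL socket lives outside the jail, so chrooted PHP either needs the socket bind-mounted into the chroot or a 127.0.0.1 TCP connection.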

    Read the article

  • Plesk wildcard subdomain not working

    - by avdgaag
    I'm trying to set up a wildcard subdomain on my VPS. Ultimately I want to end up with this:
      main site: my.domain.tld
      subdomain: sub1.my.domain.tld - should end up serving my.domain.tld/sub1
    I am using Plesk 8.6. I have created a DNS A record pointing at my VPS's IP, then restarted the DNS server and waited up to 24 hours. But trying to ping sub1.my.domain.tld results in an unknown host error. I know there's more involved - configuring Apache, etc. - but so far I cannot even get the subdomain resolving at all, let alone serving the right content. I have also tried a CNAME record, to no effect, and I have tried creating a regular subdomain with a fixed name, which also does not work. Pre-configured subdomains DO work, like ftp.my.domain.tld or mail.my.domain.tld. I am clearly missing something here, but my hosting provider charges a small fortune for any support request not involving hardware physically burning down, so I'm hesitant to ask them. Any ideas?
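
    For reference, a wildcard needs its own record in the zone; in plain zone-file terms it looks roughly like this (the address is a placeholder, and where Plesk puts it depends on how it splits my.domain.tld from domain.tld):

      ; wildcard record covering sub1, sub2, ... under my.domain.tld
      *.my.domain.tld.    IN  A   203.0.113.10

      ; ask the VPS's own nameserver directly, bypassing caches:
      ; dig sub1.my.domain.tld @ns.my.domain.tld +short

    If the direct dig answers correctly but a normal lookup still fails after the TTL has passed, the zone being served publicly is probably not the one Plesk edited (e.g. the registrar's nameservers are still authoritative).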

    Read the article

  • ProFTPD / PAM issues with new centos/virtualmin install

    - by iamthewit
    I just installed CentOS 5.4 on a Rackspace cloud server and installed Virtualmin, which all seemed to go fine. The only problem I have is that I cannot access the virtual servers' directories via FTP. I get the following from FileZilla:
      Status: Connecting to 1.1.1.1:21...
      Status: Connection established, waiting for welcome message...
      Response: 220 FTP Server ready.
      Command: USER username
      Response: 331 Password required for username.
      Command: PASS ***************
      Response: 230 User username logged in.
      Status: Connected
      Status: Retrieving directory listing...
      Command: PWD
      Response: 257 "/" is current directory.
      Command: TYPE I
      Response: 200 Type set to I
      Command: PASV
      Response: 227 Entering Passive Mode (1,1,1,1,216,214)
      Command: LIST
      Error: Connection timed out
      Error: Failed to retrieve directory listing
    and I get this from my /var/secure/log file:
      Sep 22 19:40:42 stickeeserver proftpd: pam_unix(proftpd:session): session opened for user username by (uid=0)
      Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - USER nastypasty: Login successful.
      Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - Preparing to chroot to directory '/home/username'
      Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - mod_delay/0.5: delaying for 728 usecs
      Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - error setting IPV6_V6ONLY: Protocol not available
    Any help would be greatly appreciated. I'm not totally new to Linux but it's not my strongest subject. I do like to know exactly why problems occur, though, and exactly how to fix them, so the more detail the better! Cheers
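
    The listing dying right after PASV is the classic signature of the passive-mode data ports being blocked, not an authentication problem. A sketch of the usual workaround - the port range is just an example, and the iptables line assumes the host firewall is plain iptables on the same box:

      # /etc/proftpd.conf (or the include file Virtualmin manages)
      PassivePorts 49152 50000
      # if the server sits behind NAT, also advertise the public address:
      # MasqueradeAddress 1.1.1.1

      # open the same range on the host firewall
      iptables -A INPUT -p tcp --dport 49152:50000 -j ACCEPT

    Alternatively, forcing the FileZilla client into active mode is a quick way to confirm the diagnosis before changing the server.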

    Read the article

  • Active Directory Domain Services - Network Name Cannot be Found

    - by Arief
    I have a really weird problem that I can't explain. I am trying to redirect all users' home folders to a new server. I have copied all the files, including their permissions, to the new server. All I need to do is update each user profile's home folder path by changing the server name. However, when I enter the new server name I get an error saying the network name cannot be found. The server acting as the AD domain controller can resolve the new server's name with both ping and nslookup; the only thing I don't understand is why the MMC cannot resolve it. I tried the IP address instead and still get the same error message. Thank you so much for your help. UPDATE: I know what the problem seems to be, but I don't know how to fix it. The new server that will host all the home folders is sitting in the cloud on a different IP range from the Domain Controller. The Domain Controller sits locally in the office on 10.0.0.0/24 addresses, while the new server in the data centre is on 172.10.10.10/24 addresses. Static routes have been set up on both ends, and DNS as well. I believe this is the issue. Does anyone know how to overcome this situation? Thank you.

    Read the article

  • How to configure keepalived on Amazon EC2?

    - by oeegee
    I read an article, "Keepalived over GRE tunnel for failover on VPS environment" (http://blog.killtheradio.net/how-tos/keepalived-haproxy-and-failover-on-the-cloud-or-any-vps-without-multicast/), but I don't know how to configure it, or what to call this architecture. I only know how to set up a Master/Backup configuration in keepalived; what I really want to understand is how keepalived works with the unicast patch module (ELB is expensive). I want to design something like this:

      XMPP Server (EC2)
            |
      -------------------------------------------------
      keepalived Master (EC2) - keepalived Backup (EC2)
      HAProxy #1                HAProxy #2
      -------------------------------------------------
            |
      Cassandra#1  Cassandra#2  Cassandra#3  Cassandra#4

    This was my first design. [Flow] ELB -- XMPP Server -- ELB -- Cassandra:

      ELB
       |
      XMPP#1  XMPP#2  XMPP#3  XMPP#4
       |
      ELB
       |
      Cassandra#1  Cassandra#2  Cassandra#3  Cassandra#4

    Then I changed the first design. [Flow] ELB -- XMPP Server -- HAProxy Master (Cassandra farm) -- Cassandra:

      ELB
       |
      XMPP#1  XMPP#2  XMPP#3  XMPP#4
       |
      -------------------------------------------------
      keepalived Master (EC2) - keepalived Backup (EC2)
      HAProxy#1                 HAProxy#2
      -------------------------------------------------
       |
      Cassandra#1  Cassandra#2  Cassandra#3  Cassandra#4

    This is the second design. [Flow] ELB -- HAProxy (XMPP farm) -- XMPP Server -- HAProxy (Cassandra farm) -- Cassandra. Is it OK?

      ELB
       |
      HAProxy#1  HAProxy#2  HAProxy#3  HAProxy#4
      XMPP#1     XMPP#2     XMPP#3     XMPP#4
       |
      Cassandra#1  Cassandra#2  Cassandra#3  Cassandra#4

    Thanks!
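
    As a rough illustration of the unicast piece: with the unicast patch (later merged into mainline keepalived) each node is told its peer explicitly instead of relying on multicast, which EC2 does not provide. A minimal sketch of the master's keepalived.conf - all addresses are placeholders, and the backup node mirrors it with state BACKUP and a lower priority:

      vrrp_instance haproxy_vip {
          state MASTER
          interface eth0
          virtual_router_id 51
          priority 150
          unicast_src_ip 10.0.1.10        # this HAProxy node
          unicast_peer {
              10.0.1.11                   # the backup HAProxy node
          }
          authentication {
              auth_type PASS
              auth_pass secret
          }
          virtual_ipaddress {
              10.0.1.100                  # address the Cassandra clients connect to
          }
      }

    On EC2 the VRRP address alone is not enough: a notify script normally has to reassign the Elastic IP or secondary private IP to the new master through the EC2 API when a failover happens.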

    Read the article

  • Hyper-V Guests Dying

    - by Jon Rauschenberger
    I just hit my THIRD instance of a Hyper-V guest machine dying with the exact same behavior. In all three instances we are hosting WS2008 guests on a WS2008 host. After a config change, we reboot the guest and the guest OS comes up, but in a very crippled state. Specifically, we are able to log into the guest, but we can't launch any apps and the guest never becomes active on the network. I opened a support ticket with MS the second time this happened and they focused in on the DCOM subsystem not coming up; the best explanation they could provide was that permissions on key system files got corrupted. I eventually gave up on the ticket after close to 10 hours on the phone trying different things that were going nowhere. What really concerns me is that we have now seen the exact same thing happen to a guest hosted on a completely different host machine. There is zero hardware overlap between the two. Has anyone seen this before?? It's really odd behavior, but it also seems like there's a pattern here that's concerning me. Thanks, Jon

    Read the article

  • Amazon AWS EC2 instance, Elastic IP, domain name from an external domain seller, and Google Apps for Email

    - by Sid
    We are hosting our site on an EC2 instance. Our Elastic IP is w.x.y.z and the public DNS is ec2-w-x-y-z.compute-1.amazonaws.com. We've bought the domain name domainname.com from a lesser-known domain registrar and added an A record pointing domainname.com to w.x.y.z. Will this work, or do we also need a CNAME record pointing to the same place? We want to use Google Apps for email, so we adjusted the TXT/MX records according to the Google Apps instructions to be able to send/receive email using @domainname.com addresses. Have we got that right? More importantly, we've seen email sent from the server (our users can send email from their on-site accounts) going to spam because the reverse DNS points to ec2-w-x-y-z.compute-1.amazonaws.com rather than domainname.com. How can we fix this? We came across SPF records - do they provide a complete solution? We aren't sure how to use them. Can you help please? Thank you, Sid
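
    On the SPF point: it is just a TXT record listing which hosts may send mail for the domain. With Google Apps plus the EC2 box itself it would look roughly like this (w.x.y.z standing in for the Elastic IP, as in the question):

      domainname.com.   IN  TXT   "v=spf1 include:_spf.google.com ip4:w.x.y.z ~all"

    SPF helps receivers trust the envelope sender, but it does not change reverse DNS; for that, Amazon provides a request form to lift sending limits and set a PTR record on an Elastic IP, which is usually the other half of keeping EC2-originated mail out of spam folders.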

    Read the article

  • If spaces in filenames are possible, why do some of us still avoid using them?

    - by Chris W. Rea
    Somebody I know expressed irritation today regarding those of us who tend not to use spaces in our filenames, e.g. NamingThingsLikeThis.txt - despite most modern operating systems supporting spaces in filenames. Non-technical people must look at filenames created by geeks and wonder where we learned English. So, what are the reasons that spaces in filenames are avoided or discouraged? The most obvious reason I could think of, and why I typically avoid it, is the extra quotes required on the command line when dealing with such files. Are there any other significant reasons, other than the practice being a vestigial preference? UPDATE: Thanks for all your answers! I'm surprised how popular this was. So, here's a summary - six reasons why geeks prefer filenames without spaces in them:
      1. It's irritating to put quotes around them when referenced on the command line (or elsewhere).
      2. Some older operating systems didn't support them and us old dogs are used to that.
      3. Some tools still don't support spaces in filenames at all, or not very well. (But they should.)
      4. It's irritating to escape spaces where spaces must be escaped, such as in URLs.
      5. Certain unenlightened services (e.g. file hosting, webmail) remove or replace spaces anyway!
      6. Names without spaces can be shorter, which is sometimes desirable as paths are limited.

    Read the article

  • How to fix a Postfix/MySQL/Dovecot Unknown Host Issue?

    - by thiesdiggity
    I am having an issue with one of my Postfix/Dovecot mail servers and I'm unsure how to fix the problem. I will try to explain it in detail; here goes: I have an Ubuntu server set up for virtual hosting with Postfix, Dovecot and MySQL. We have one domain set up as a virtual domain - for this example I am going to use mail.example.com - and under that domain we have one email address. I have another server (MS Exchange) set up on another one of my sub-domains, ex.example.com. The problem is that when I SMTP into the account on mail.example.com and try to send an email to an account on ex.example.com, the email is returned to us with an "unknown host" error. Now, I know that the mail.example.com server can resolve the ex.example.com domain because I can ping/dig it while SSH'd into the box. I can also connect to Postfix via telnet and send an email to an ex.example.com mailbox. I'm guessing it has something to do with Postfix/Dovecot looking locally for the domain in the virtual domain list because of the shared base domain (example.com)? If that's the case, how do I get Postfix/Dovecot to only treat the exact name (mail.example.com) as local and, if it doesn't match, send the mail to the correct server by looking up the MX/A records (which I know exist and are set up correctly)? I have been working on this all day and any guidance would be GREATLY appreciated! Thanks for your time!
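
    A few read-only checks can show whether Postfix is treating ex.example.com as one of its own domains before it ever tries DNS - the MySQL map filename below is a guess at a typical virtual-domain setup, not the actual config:

      postconf mydestination virtual_mailbox_domains virtual_alias_domains

      # ask the MySQL lookup table the same question Postfix asks
      postmap -q ex.example.com mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf

      # if the smtp client runs chrooted, make sure it can resolve names at all
      ls -l /var/spool/postfix/etc/resolv.conf

    If ex.example.com shows up in any of those domain lists the mail never leaves the box; if it does not, the "unknown host" points at name resolution from inside Postfix (e.g. a chrooted smtp client with no resolv.conf), which the last check helps rule out.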

    Read the article

  • Need to link WP Blog with Rails App on Heroku

    - by John Glass
    I have a client who wants to migrate his Rails app to Heroku. However, the client also has a blog associated with his domain that runs on WordPress. Currently the WordPress blog is running happily alongside the Rails app, but once we migrate to Heroku that clearly won't be possible. The URL for the app is like http://mydomain.com, and the URL for the blog is like http://mydomain.com/blog. I realize that the best long-term solution is to redo the blog in a Rails-friendly format like Toto or Jekyll. But in the short term, what is the best way to continue hosting the WP blog where it is (or somewhere) while using Heroku to run the app? The client doesn't want the blog on a subdomain; it should remain at mydomain.com/blog for SEO reasons and because the blog gets traffic. I have two ideas:
      1. Use rack_rewrite or refraction (or just a regular old 301 and Apache mod_rewrite) on the old (non-Heroku) server to redirect the main URL from the old site to Heroku (see the proxy sketch below). In this case I can just leave the WordPress blog running happily where it is. I think?? Is there a reason to choose one of those options (rack_rewrite, refraction, or mod_rewrite) over the others if I do it this way?
      2. Switch the DNS info to point to the Heroku site, and then use a 301 redirect from the blog to the old site. But then I'll have to get the old (non-Heroku) site onto a subdomain and use some kind of rewrite rules anyway so it looks like it isn't a subdomain.
    Are either of these approaches preferable, or is there another way to do it that's easier that I'm missing?
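
    The first idea can be done entirely on the old Apache box with mod_proxy, excluding /blog from a reverse proxy to Heroku - a sketch only, with a made-up Heroku app name, and it assumes mod_proxy/mod_proxy_http are enabled and the custom domain is added to the Heroku app:

      # keep serving the WordPress blog locally, proxy everything else to Heroku
      ProxyPass        /blog !
      ProxyPass        /      http://yourapp.herokuapp.com/
      ProxyPassReverse /      http://yourapp.herokuapp.com/

    Compared with rack_rewrite or refraction (which do the same kind of rewriting from inside a Rack app), this keeps the logic in one place on the server that already owns the domain, at the cost of every request still passing through that box.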

    Read the article

  • Apache directory structure with multiple hosted languages.

    - by anomareh
    I just got a new work machine up and running and I'm trying to decide how to set everything up directory-wise. I've done some digging around and really haven't been able to find anything conclusive. I know it's a question with a variety of answers, but I'm hoping there are some general guidelines or best practices to go by. With that said, here are a few things specific to my situation. I will be doing actual development and testing on the same machine as the server. It is a single-user machine in the sense that I will be the only one working on it. There will be multiple hosted languages, specifically PHP and RoR, possibly expanding later. I'd like the setup to translate well to a production environment. With those three things in mind there are a couple of things I've had in the back of my mind. Seeing as it's a single-user machine, I haven't been able to decide whether I should be working on things out of my home directory or whether they should live outside of it. I feel that outside of a user directory would translate better to a production environment, but I'm also not sure whether that will come with any permission annoyances or concerns, seeing as I'll be working on the same machine. Hosting multiple languages seems like it may be a bit quirky. With PHP I've found you're generally just dumping the project somewhere in the document root, whereas with something like a Rails app you have the entire project and you only want the public directory in the document root. Thanks for any insight, opinion, or just personal preference from experience anyone can offer.
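
    One layout that keeps both worlds consistent (an illustration only - the /srv/www root and the vhost names are arbitrary choices, not a standard):

      /srv/www/
          example-php/
              public/                    # vhost DocumentRoot points here; PHP files live below it
          example-rails/
              app/  config/  public/     # vhost DocumentRoot points at public/ only

      # /etc/apache2/sites-available/example-rails (e.g. served via Passenger)
      <VirtualHost *:80>
          ServerName rails.example.test
          DocumentRoot /srv/www/example-rails/public
      </VirtualHost>

    Keeping projects out of $HOME and granting your user (or a shared group) write access to /srv/www tends to translate more cleanly to production than serving from a home directory.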

    Read the article

  • iOS 6 in-app email does not send from within any app that supports it

    - by Joe Termine
    A strange problem -- last night I upgraded to the final release of iOS 6 on my iPhone 4S and my iPad 2. When I open an app that allows you to send emails from within the app (e.g. Adobe Reader, TurboScan, etc.) -- it doesn't matter which one -- I am prompted with the email dialog from within the app and I can compose the message, but when I go to send, one of two things happens: either the email-sending "swoosh" sound plays and the dialog closes (leading me to think it worked), or some apps with good error handling say there is an "error sending email." The error logs on my device are not reporting errors; it's just that the email doesn't really send. I have two Exchange mailboxes on these devices: one connects to a corporate network hosting on-premises Exchange 2007, and the other connects to Gmail over the Exchange interface. I have attempted to delete and re-add these accounts (one at a time) without any change. I'm wondering if others are experiencing this problem, or whether I should just wipe the devices and chalk it up to (another) failed upgrade. Thoughts much appreciated. Joe

    Read the article

  • RSA keys - virtual hosts

    - by Bosworth99
    Pardon my noobness, but I just got started with VPS (Linux) hosting, and setting up passwordless SSH for multiple users has proved to be kind of a pain. Currently I'm the single user of this Ubuntu 10.04 LTS VPS (linode.com). I was able to establish a single RSA key pair, with the public key under my /home/user/.ssh/authorized_keys location. Fine. PuTTY works as expected, and FileZilla (SFTP) links up as required. I've been working on a single site that this user owns, and that's not been a problem. Now I want to set up some other sites, and I've chosen Webmin with the Virtualmin plugin to make this work. I made another user (or, rather, Virtualmin did), but I've been unable to get FileZilla to link up as this new user. Could anyone with experience here explain what the setup is supposed to look like? I.e., can I use a single RSA key pair for all accounts (if, for example, I give ownership of the files to the original user)? Or is it standard practice to create a separate key pair for each user, and establish a separate PuTTY/FileZilla login for each? I've spent enough time dinking around with this to be frustrated. The "Server rejected the provided key" error sucks after the fifth hour. I'm about to set up an FTP server and call it a day. Any thoughts would be most welcome -
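
    The usual pattern is one key pair per login, with each public key appended to that user's own authorized_keys - a sketch with placeholder names (and the permissions sshd insists on, which are a frequent cause of "server rejected the key"):

      # on the workstation: a separate key for the new Virtualmin user
      ssh-keygen -t rsa -f ~/.ssh/id_rsa_site2

      # install it for that user on the VPS
      ssh-copy-id -i ~/.ssh/id_rsa_site2.pub site2user@vps.example.com

      # on the VPS, the permissions sshd requires before it will accept the key
      chmod 700 /home/site2user/.ssh
      chmod 600 /home/site2user/.ssh/authorized_keys
      chown -R site2user:site2user /home/site2user/.ssh

    For PuTTY and FileZilla the private key then needs converting to .ppk with PuTTYgen and loading for that specific login. A single shared key pair also works if every site runs under the original user, but separate keys keep the accounts (and any later revocation) independent.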

    Read the article

  • Migrating to Amazon AWS etc: What key statistics/questions should be analyzed and asked?

    - by cerd
    I searched SOverflow pretty extensively for something similar to this set of questions. BACKGROUND: We are a growing 'big(ish)' data chemical data company that is outgrowing our lab and our dedicated production workhorses. Make no mistake, we need to do some serious query optimization - our data comes from a certain govt. agency, so the schema and lack of indexing are atrocious. So yes, I know AWS or EC2 is not a silver bullet compared to spending the time to rework your queries and code entirely. With that said, I would appreciate any input on the following questions:
      1. We produce on CentOS and lab on Ubuntu LTS, which I prefer, especially with its growing cloud/AWS integration. If we are MySQL-centric, and our biggest problem is big cartesian products that produce slow queries, should we roll out what we know (Ubuntu/MySQL, after more optimization) with the added Amazon horsepower? Or is there some merit to the NoSQL and other technologies they offer?
      2. What are the key metrics I need to gather from Apache and MySQL, other than things like disk I/O operations, data up/down averages and trends, and special high-usage periods/scenarios? I've reviewed the AWS/EC2 fine print, but want second opinions.
      3. What other services, aside from the basic web/database ones, have proven valuable to you? I know nothing of Hadoop or many of the other technologies they offer. Echoing my previous question: do you sometimes find it worth it (initially a gamble, basic homework aside) to dive into a whole new environment and end up finding a way to produce your data/site product more efficiently?
      4. Anything I should watch out for in projecting costs, or any other general advice on working with AWS when your company is very niche and very, very technical (scientifically - or for anybody, for that matter)?
    Thanks very much for your input - I think this thread could be valuable to others as well.

    Read the article

  • Emails from web site sometimes blank or gibberish

    - by John Gardeniers
    Our company has one web site with an online store based on osCommerce. The system sends emails for various reasons, such as password changes, order confirmations, etc., using PHP's mail() function. We occasionally have customers report that the email they received is either blank (when the email is in plain-text format) or gibberish (when the email is in HTML format). In the latter case it's really just HTML being displayed as raw text, but of course the customers can't read it; the first opening tag's < (and sometimes a few more characters) has gone missing. In an attempt to determine whether this was happening only for certain customers or email systems, I configured the web site to send a CC of each message to a service account at my end. Those CC'd messages always arrive intact and display correctly in Outlook. For what it's worth, it seems to happen a little more frequently to Hotmail users, but it is certainly not limited to them. As the web site is on a shared (Debian) host there's precious little I can do about debugging things from that end, although if I made the right request I feel the hosting company staff would help me, even though they have limited resources to spend on such matters. Any suggestions on what else I might do to try and determine why those emails are not being received correctly by some customers, yet a CC copy arrives just fine?
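
    One thing worth comparing between the intact CC copies and a customer's mangled copy is the MIME headers; HTML mail sent through PHP's mail() should be explicitly labelled, roughly like this (addresses are placeholders, and osCommerce normally assembles these headers in its own email class, so this is only a reference point):

      <?php
      $headers  = "MIME-Version: 1.0\r\n";
      $headers .= "Content-Type: text/html; charset=UTF-8\r\n";
      $headers .= "Content-Transfer-Encoding: quoted-printable\r\n";
      $headers .= "From: Shop <shop@example.com>\r\n";

      $body = quoted_printable_encode("<html><body><p>Order received.</p></body></html>");

      mail("customer@example.com", "Order confirmation", $body, $headers);

    If the customers' copies arrive without a Content-Type, or with long 8-bit HTML lines that get re-wrapped along the way, that would account for both the "blank" case and the stripped leading characters, and would point at message construction rather than the firewall.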

    Read the article

  • Upgrading from SQL2000 database to SQL Express 2008 R2

    - by itwb
    Hi, we have a web application which uses an MSSQL 2000 backend database. We are currently paying a ridiculous amount for shared hosting, with the database costs alone running $150 per month (an extra 100 MB of MSSQL space is $40 per month). Our database size is 896.38 MB. I am looking at getting a Virtual Private Server and upgrading the database to an MSSQL 2008 Express database. I am aware that the Express version is limited to a 10 GB database (with R2) and is constrained to a single CPU. I have also been offered SQL Server 2008 Web Edition for $19 per month, but I cannot find many details on the difference between Express and Web. Any suggestions here? What I would also like to know is: if we upgrade the database to MSSQL 2008, are there any issues with possible data transformations in the future? I.e., is it possible to download the database and attach it with SQL Server 2008 Standard Edition? I'm more concerned about how to get data in and out of the database through SQL management tools. Are there any other issues that I might face? Thanks, Mike

    Read the article
