Search Results

Search found 10622 results on 425 pages for 'shared hosting'.


  • Apache: Isn't chmod 755 enough to set up symlink or alias on Apache httpd on Mac OS 10.5?

    - by eed3si9n
    On my Mac OS 10.5 machine, I would like to set up a subfolder of ~/Documents, like ~/Documents/foo/html, to be served as http://localhost/foo. The first thing I thought of doing is using Alias as follows:

        Alias /foo /Users/someone/Documents/foo/html
        <Directory "/Users/someone/Documents/foo/html">
            Options Indexes FollowSymLinks MultiViews
            Order allow,deny
            Allow from all
        </Directory>

    This got me 403 Forbidden. In the error_log I got:

        [error] [client ::1] (13)Permission denied: access to /foo denied

    The subfolder in question has chmod 755 access. I've tried specifying files directly, like http://localhost/foo/test.php, but that didn't work either. Next, I tried the symlink route: I went into /Library/WebServer/Documents and made a symlink to ~/Documents/foo/html. The document root has:

        Options Indexes FollowSymLinks MultiViews

    This still got me 403 Forbidden: "Symbolic link not allowed or link target not accessible: /Library/WebServer/Documents/foo". What else do I need to set this up? Solution:

        $ chmod 755 ~/Documents

    In general, the folder to be shared and all of its ancestor folders need to be viewable by the www service user.
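
    For illustration, a minimal sketch of the permission fix described in the solution above. It assumes the site lives under /Users/someone/Documents/foo/html and that Apache runs as the _www user (typical on Mac OS X); adjust the names to your own setup:

        # Reproduce the failure as the web server user: any ancestor directory
        # that blocks traversal shows up as "Permission denied".
        sudo -u _www ls /Users/someone/Documents/foo/html

        # Give "other" users execute (traverse) permission on every ancestor,
        # and read+traverse permission on the content itself.
        chmod o+x /Users/someone /Users/someone/Documents /Users/someone/Documents/foo
        chmod -R o+rX /Users/someone/Documents/foo/html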

    Read the article

  • Google Chrome no longer treats "Web Apps" specially

    - by Adrian Petrescu
    I'm running Google Chrome (Dev Channel), with the --enable-apps flag, on both OS X and Ubuntu. I have four or five web apps installed, and they appear in the "New Tab" page just fine. The problem is that before, when the feature first became available in the Dev Channel, the actual tabs hosting the web apps received special treatment; they would have a 3D, Dock-like look, and (more importantly) the tab bar would be hidden while using that tab. Sometime in the last few weeks, however, it seems that the special treatment just disappeared with one of the daily updates. The web apps still show up in the New Tab page, they still work in the sense that they capture all URLs going to that web app, and they use the right icons; but they've basically become indistinguishable from a regular pinned tab. The two special features mentioned above have disappeared on both Ubuntu and OS X. My questions are simply: a) Does this happen to anyone else? When exactly did it begin? b) Why did Google regress the feature? c) Is there any flag I can enable to get it back?

    Read the article

  • Drowning in documents - recommend doc management solutions?

    - by Martin Day
    I've been researching document management lately. I want to organise my docs at home and also at the office. Finding affordable solutions one can actually test-drive is quite hard. Some that I've downloaded just don't seem to work (testing on a brand new Vista PC). I've seen some software on Amazon, like PaperPort, but I'm not really sure what it's like. For home I'd like something to organise files, with full-text search, good scanner integration, a nice interface, etc. But for the office it seems harder. I need something that does proper workflow and keeps versions. It will have an audit trail. Documents can be approved, checked in/out, etc. I know a few clients who would like something similar. It would be great just to import thousands of documents from a shared drive and get them indexed with dupes killed. I'd like to be super clear about how/where the documents are being stored so that maintenance and backups are clear. My Google/Twitter searches lead back to the same tired and vague web pages pushing what look like expensive and custom-made solutions. Some might be very good I suppose, but it's darn hard to tell. I don't mind a hosted package, but all in all I don't think something like Google Docs, as good as it is now, will work. There are too many quirks and missing features (as compared to Office). Being able to work directly with the common Office file formats is important. I've noted a similar-sounding question asked here back in August, but it didn't seem to turn up too many solutions that I could easily and quickly apply. Also there could have been some changes since then, so I feel it's worth asking.

    Read the article

  • Dealing with upgrade of libevent on Amazon AWS

    - by Dreen
    I am building an application (in Python) on Amazon EC2 that has the following dependency chain:

        gevent-websocket ---> gevent ---> libevent

    The last one (libevent) got upgraded on Sunday and my server is now generating this error:

        (...)
        File "/usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/__init__.py", line 41, in <module>
            from gevent import core
        ImportError: libevent-1.4.so.2: cannot open shared object file: No such file or directory

    Not wanting to spend much time on the issue, I tried to mitigate it by creating a symlink to an always-recent version:

        $ sudo ln -s /usr/lib64/libevent.so /usr/lib64/libevent-1.4.so.2

    But it didn't quite work:

        (...)
        File "/usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/__init__.py", line 41, in <module>
            from gevent import core
        ImportError: /usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/core.so: undefined symbol: current_base

    I am a bit stumped as to how to proceed. Should I create more symlinks? To what? Or is there a better way to solve this problem... PS: For the record, I am using the Amazon AMI.
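
    A hedged sketch of an alternative to the symlink: rebuild the gevent C extension against whichever libevent is actually installed, so core.so stops referencing the old soname. It assumes yum and pip are available on the instance and that libevent-devel is the relevant header package:

        # headers for the currently installed libevent
        sudo yum install -y libevent-devel
        # recompile/reinstall gevent so its core.so links against that libevent
        sudo pip install --upgrade --force-reinstall gevent==0.13.7
        # quick smoke test
        python -c "from gevent import core; print('gevent imports cleanly')"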

    Read the article

  • Ubuntu's load average never drops below "0.00 0.01 0.05"

    - by Karma Fusebox
    I have several Ubuntu 12.04 VMs running on an Ubuntu 12.04 KVM host. The virtual machines that are totally idle with no services (except syslog and the other "small" standard stuff of a fresh installation) show a constant load of "0.00 0.01 0.05" in top/htop as the 1/5/15-minute averages. When "real" applications are running, the load averages behave perfectly normally, but they never fall below the values mentioned. While this doesn't affect performance at all and could easily be ignored, it screws up the monitoring graphs in a very annoying way (in the monitoring graph, not reproduced here, load15 only behaves nicely, dipping below 0.05, for a short time in the right half of the picture). Unfortunately I don't know what diagnostic outputs might be helpful for you, so here's some default stuff:

        # top
        top - 16:31:01 up 1:05, 1 user, load average: 0.00, 0.01, 0.05
        Tasks:  62 total,   1 running,  61 sleeping,   0 stopped,   0 zombie
        Cpu(s): 0.2%us, 0.2%sy, 0.0%ni, 99.2%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:   1019464k total,    73452k used,   946012k free,     6140k buffers
        Swap:        0k total,        0k used,        0k free,    22504k cached

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:           995         72        923          0          6         21
        -/+ buffers/cache:          43        951
        Swap:            0          0          0

        # iostat -x /dev/vda
        Linux 3.2.0-32-virtual (vm3)   11/15/2012   _x86_64_   (2 CPU)
        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.25    0.00    0.65    0.20    0.24   98.66
        Device:  rrqm/s  wrqm/s   r/s   w/s  rkB/s  wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
        vda        0.14    0.12  0.51  0.22   6.74   1.46    22.50     0.02  23.26   20.64   29.30   7.63   0.56

    Need something else? Has anyone ever seen this behavior? Might this be a bug in KVM/Ubuntu/kernel 3.x in the end? Thanks a lot!

    Read the article

  • Network topology for both direct and routed traffic between two nodes

    - by IndigoFire
    Despite its small size, this is the most difficult network design problem I've faced. There are three nodes in this network:

        - PC running Windows XP with an internal WiFi adapter
        - Base station with both WiFi and a Wireless Modem (WiModem)
        - Mobile device with both WiFi and WiModem

    The modem is a low-bandwidth but high-reliability connection. We'd like to use WiFi for high-bandwidth stuff like file transfers when the mobile is nearby, and the modem for control information. Here's the tricky part: we'd like the WiFi traffic to go directly from the mobile to the PC, as rebroadcasting packets on the same WiFi channel takes up double the bandwidth. We can do that with a manual configuration by giving both the PC and the base station two IP addresses for their WiFi interfaces: one on a subnet shared with the mobile, and one on their own subnet. The routes on the PC are set up so that any traffic going to the mobile via WiModem goes through the secondary IP address, so that return traffic from the mobile also goes through the WiModem. Here's what that looks like:

        PC
            WiFi 1: 192.168.2.10/24
            WiFi 2: 192.168.3.10/24
            Default route: 192.168.2.1
        Base Station
            WiFi 1: 192.168.2.1/24
            WiFi 2: 192.168.3.1/24
            WiModem: 192.168.4.1/24
        Mobile
            WiFi: 192.168.3.20/24
            WiModem: 192.168.4.20/24

    We'd like to move to having the base station automatically configure the mobile and PC, as the manual setup is problematic when you start having multiple mobiles and PCs. This means that the PC can only have one IP address and needs to be treated as being pretty simple. Is it possible to have a setup driven by DHCP on the base station that is efficient with bandwidth?
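
    One possible direction, sketched here only as an assumption about the base station's capabilities: if the base station can run dnsmasq as the DHCP server, it can hand the PC a single WiFi address plus a classless static route (DHCP option 121, RFC 3442) pointing the WiModem subnet at the base station, so direct WiFi traffic stays direct and only WiModem traffic is routed:

        # /etc/dnsmasq.conf on the base station (illustrative values)
        dhcp-range=192.168.3.50,192.168.3.150,12h
        # reach the WiModem subnet via the base station's WiFi address
        dhcp-option=121,192.168.4.0/24,192.168.3.1
        # Windows XP ignores option 121 but honours Microsoft's equivalent, option 249
        dhcp-option=249,192.168.4.0/24,192.168.3.1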

    Read the article

  • High disk I/O activity in CentOS server

    - by triiim
    I have about 16 websites on a dedicated CentOS server, and I am having some problems during high-traffic hours; it seems to be high disk I/O activity causing a general slowdown. I've installed atop and this is what I see at the bottom (the server has been restarted, that's why the values are so low):

        *** system and process activity since boot ***
        PID   RDDSK    WRDSK    WCANCL   DSK  CMD        1/18
        2176  1.7G     7.3G     854.4M   39   mysqld
        671   1248K    3.0G     0K       13   flush-8:0
        566   0K       1.1G     0K       5    jbd2/sda2-8
        2401  124.2M   529.1M   22408K   3    crond
        2032  2.2G     502.0M   0K       12   nginx
        2360  425.8M   115.3M   4188K    2    httpd

    flush-8:0 and jbd2/sda2-8 are the processes I see with iotop using 99% in the IO column, and they are the processes that write the most to the hard disk (after mysqld). From what I found on Google this could be caused by some ext4-related bug. The current kernel is:

        Linux srvr.com 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux

    I asked the hosting support to update the kernel and they tried, but they now say that the server won't boot with the newly installed kernel and they had to go back to the previous one; they are not helping very much. Does anyone have any idea how I could solve the high disk usage caused by the flush-8:0 and jbd2/sda2-8 processes?
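
    While waiting for a proper kernel fix, a hedged sketch of two mitigations that are commonly tried (the mount options are examples, not a recommendation for every workload):

        # confirm which processes actually issue the writes (active only, per process, accumulated)
        iotop -oPa

        # stop every read from becoming a metadata write, and batch journal commits
        # every 60s instead of the default 5s (assumes / is the busy ext4 filesystem)
        mount -o remount,noatime,commit=60 /
        # make it persistent by adding the same options to the / entry in /etc/fstab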

    Read the article

  • VirtualBox host-only adapter configuration

    - by Xoundboy
    I have VirtualBox 4 running on Windows 7 with a CentOS 6 guest VM set up for hosting my dev server. When I'm connected to my home network the guest can be accessed via a static IP address that I configured (192.168.56.2), but not when I'm in the office. I'm guessing that the DHCP server in the office doesn't have a gateway configured for the 192.168.56.x IP range. I read something about the VirtualBox host-only adapter that should allow me to set this guest VM up in such a way that I don't need to be on any network to be able to access the guest from the host using a static IP. I've not been able to find out exactly how to configure this though. Can anyone give me an example configuration? Thanks. UPDATE: Thanks for your responses. I've now set up a single virtual network adapter in VirtualBox and set it to host-only:

        C:\Users\Ben>vboxmanage list hostonlyifs
        Name:            VirtualBox Host-Only Ethernet Adapter
        GUID:            d419ef62-3c46-4525-ad2d-be506c90459a
        Dhcp:            Disabled
        IPAddress:       192.168.56.2
        NetworkMask:     255.255.255.0
        IPV6Address:     fe80:0000:0000:0000:78e3:b200:5af3:2a57
        IPV6NetworkMaskPrefixLength: 64
        HardwareAddress: 08:00:27:00:94:e8
        MediumType:      Ethernet
        Status:          Up
        VBoxNetworkName: HostInterfaceNetworking-VirtualBox Host-Only Ethernet Adapter

    On the guest I've set up eth0 to use the same IP address as the host-only adapter (192.168.56.2), but when I try to log in using PuTTY I still get "Network Error: connection refused". The VirtualBox DHCP server is enabled, but I can't ping the gateway (192.168.56.1) from either the host or the guest. There's no firewall running on either OS. What next?
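
    For reference, a sketch of a typical host-only layout (the VM name and adapter assignments are assumptions): the host-only adapter keeps the .1 address and the guest takes a different static address on the same subnet, rather than both using 192.168.56.2:

        VBoxManage hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.56.1 --netmask 255.255.255.0
        VBoxManage modifyvm "centos6-dev" --nic2 hostonly --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter"
        # inside the CentOS guest, /etc/sysconfig/network-scripts/ifcfg-eth1 (the host-only NIC):
        #   DEVICE=eth1  BOOTPROTO=none  ONBOOT=yes  IPADDR=192.168.56.2  NETMASK=255.255.255.0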

    Read the article

  • Windows 7 SSH file server

    - by Siriss
    Hello all - I have looked at the other posts but have not quite found an answer. I have a question about Windows file sharing over SSH. I have copssh installed and it is working for Remote Desktop connections. I have port 22 forwarded on my router, etc. I connect from a Mac or PuTTY with this command: ssh -l copsshusername 3391:localhost:3389 [external ip] That works fine. I would like to configure Windows 7 to give the SSH account that I log in with access to certain shared folders. I have documents and videos and things that I would like to be able to download externally. I have done this before on Linux, and a long time ago on XP, but I cannot figure out what I am missing on Windows 7. There is a designated SSH user that copssh uses to run the service and that I use to log in as. I have googled and googled and have not found a solution that does everything I need; that is why I am turning here for ideas. I hope I am explaining this correctly. Thank you very much for your help!

    Read the article

  • How to reduce celeryd memory consumption?

    - by Gringo Suave
    I'm using celery 2.5.1 with Django on a micro EC2 instance with 613 MB of memory, and as such I have to keep memory consumption down. Currently I'm using it only for the scheduler "celery beat", as a web interface to cron, though I hope to use it for more in the future. I've noticed it is the biggest consumer of memory on my micro machine even though I have configured the number of workers to one. I don't have many other options set in settings.py:

        import djcelery
        djcelery.setup_loader()
        BROKER_BACKEND = 'djkombu.transport.DatabaseTransport'
        CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
        CELERY_RESULT_BACKEND = 'database'
        BROKER_POOL_LIMIT = 2
        CELERYD_CONCURRENCY = 1
        CELERY_DISABLE_RATE_LIMITS = True
        CELERYD_MAX_TASKS_PER_CHILD = 20
        CELERYD_SOFT_TASK_TIME_LIMIT = 5 * 60
        CELERYD_TASK_TIME_LIMIT = 6 * 60

    Here are the details via top:

        PID   USER   NI  CPU%  VIRT  SHR   RES  MEM%  Command
        1065  wuser  10  0.0   283M  4548  85m  14.3  python manage_prod.py celeryd --beat
        1025  wuser  10  1.0   577M  6368  67m  11.2  python manage_prod.py celeryd --beat
        1071  wuser  10  0.0   578M  2384  62m  10.6  python manage_prod.py celeryd --beat

    That's about 214 MB of memory (and not much shared) to run a cron job occasionally. Have I done anything wrong, or can this be reduced about ten-fold somehow? ;) Update: here's my upstart config:

        description "Celery Daemon"
        start on (net-device-up and local-filesystems)
        stop on runlevel [016]
        nice 10
        respawn
        respawn limit 5 10
        chdir /home/wuser/wuser/
        env CELERYD_OPTS=--concurrency=1
        exec sudo -u wuser -H /usr/bin/python manage_prod.py celeryd --beat --concurrency=1 --loglevel info --logfile /var/tmp/celeryd.log

    Update 2: I notice there is one root process, one user child process, and two grandchildren from that. So I think it isn't a matter of duplicate startup.

        root   34580  1556   sudo -u wuser -H /usr/bin/python manage_prod.py celeryd
        wuser  577M   67548  +- python manage_prod.py celeryd --beat --concurrency=1
        wuser  578M   63784  +- python manage_prod.py celeryd --beat --concurrency=1
        wuser  271M   76260  +- python manage_prod.py celeryd --beat --concurrency=1
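
    One commonly suggested variation, sketched here with the same paths as the question (not a verified fix): run a single worker job and a separate lightweight scheduler job instead of embedding the scheduler with --beat, which at least makes it clearer which process actually holds the memory:

        # upstart job 1: the single worker
        exec sudo -u wuser -H /usr/bin/python manage_prod.py celeryd \
            --concurrency=1 --loglevel info --logfile /var/tmp/celeryd.log
        # upstart job 2: the scheduler only (command name assumes django-celery's management commands)
        # exec sudo -u wuser -H /usr/bin/python manage_prod.py celerybeat \
        #     --loglevel info --logfile /var/tmp/celerybeat.log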

    Read the article

  • When to upgrade a server to include more cores, versus more processors, versus an additional server?

    - by gkdsp
    The server hosting market is separated into single-, dual-, quad-processor (and so on) machines, where each processor has several cores, or CPUs. My company will offer a Linux-based web application that relies on an Apache web server and a middle tier for business logic. The middle tier is used to crunch math and return results to a client. Many clients may access the application simultaneously. The company will start with one processor having 4 cores. I'm trying to understand how the app uses the cores and then how to scale the application as the business grows, in terms of servers/processors/cores. For example, I'd assume initially one core would be used for Apache, and the other 3 used to process clients' requests for math crunching...

    Question 1: does that mean, with the 3 cores available, I can handle 3 separate client requests simultaneously (e.g. 1 for each of 3 cores)? I mean, except for the shared RAM, is this effectively like having 3 individual machines (from the point of view of processing client requests simultaneously)? Or can only one client's request be processed at any one time, with that request divided up across up to 3 cores, depending on the type of process doing the math crunching and whether or not it can take advantage of multithreading (so the number of cores impacts how fast any one client request completes)? I'm confused about what the cores mean to the application here.

    Question 2: As the business grows and more client requests need to be processed, should the server be upgraded to (A) a new machine with more cores, (B) a new machine with two processors, 4 cores each, or (C) should we keep the original server and add another server with a single processor? Which route provides the most efficient way to scale the application, in terms of processing more client requests per time interval? Is the choice, for example, limited by RAM (when you need more RAM than the box can handle, it's time to add another server), or something else?

    Question 3: Is the total number of client requests processed simultaneously equal to the number of cores times the number of servers (minus the one core for Apache)?
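
    As a rough illustration of Question 1 (all values are made up): concurrent requests are bounded by how many Apache workers you allow and how much RAM each one needs, not by a one-request-per-core rule; the cores determine how many of those workers make progress at the same instant:

        # illustrative prefork settings only -- not a tuning recommendation
        <IfModule mpm_prefork_module>
            StartServers          4
            MaxClients           40    # ~40 requests in flight if each child fits in RAM
            MaxRequestsPerChild 500
        </IfModule>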

    Read the article

  • Sync, share and backup policy using NAS

    - by Cue
    I'm trying to come up with a way to keep my music/photos and movies in sync while sharing them and keeping a backup. Currently I have an iMac in Greece and a MBP with me in the UK. As a result I've ended up with two iPhoto and iTunes libraries, not to mention Documents scattered here and there, user settings, etc. I also like to have a backup in case of a drive failure or the need to clean install. It seems that iPhoto and iTunes don't work really well with networked libraries. The way I think about it is to have a NAS where I keep my iTunes and iPhoto libraries, but also rsync daily to my MBP to have a local copy. That way my files are shared across the network and the local copy also acts as a backup. In addition I get to have my files wherever I take my MBP, but also have the ability to clean install. The tricky part comes from keeping the iMac, which is miles away, in sync. Again I'm considering a mirror setup (NAS, rsync to the iMac) as well as an rsync between the two NAS boxes. It pretty much resembles the way Dropbox works, sans the requirement to go through their servers, but I'm no "superuser" and don't really know if it is even feasible to have such a setup. Looks like there are so many things that can go wrong.
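
    A minimal sketch of the "NAS as master, laptop as mirror" idea described above (the host name and share paths are assumptions):

        # daily pull of the canonical libraries from the NAS to the MBP
        rsync -aH --delete nas.local:/volume1/media/iTunes/  ~/Music/iTunes/
        rsync -aH --delete nas.local:/volume1/media/iPhoto/  ~/Pictures/iPhoto\ Library/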

    Read the article

  • Connecting SVN from Remote Server

    - by Ashish
    I have hosted my repository on Assembla and it works fine. Now I want to write a script that can automate the build process:

        1. Take the code from the Assembla repository.
        2. Make a dump and copy it onto my web server.

    What I have researched on the net suggests using commands like svn co svn+ssh://[email protected]/home/svn/test. I believe I need to open a shell on my server and type these commands, but shell access has been disabled by my server admin. I tried to run the same from PHP using exec, but the admin has disabled that too. (I am using shared hosting and want to do an automated deployment using these simple steps; I don't want to bring my local system into this process.) Now I am not sure, even if I get shell access to my server, whether commands like svn will work there, as I don't have SVN installed on my server (it's installed on Assembla). Kindly let me know if any more explanation is required, or if I am going down the wrong track. I am a newbie so please be descriptive in answering :) Thanx in advance, Ace
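
    For reference, a hedged sketch of what the deployment commands could look like if the host ever grants shell or cron access (the repository URL is made up): a checkout over https avoids svn+ssh entirely and needs no local machine in the loop:

        # first deployment
        svn checkout https://subversion.assembla.com/svn/myrepo/trunk ~/public_html/app
        # every subsequent deployment
        svn update ~/public_html/app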

    Read the article

  • Managed LAMP platform for maximizing availability and global reach, not scalability

    - by user66819
    Assume a Linux/Apache/MySQL/PHP application for a small base of registered users. With small userbase, there are no traffic peaks so the scalability that cloud platforms offer is not imperative. But the system is mission-critical, so availability is the primary goal. Users are also distributed across Asia, Europe, and US, so multiple server locations that minimize users' network hops would be highly desirable. The dream: a managed VPS platform where we would configure a single server (uploading PHP and other files, manipulating database, etc.), and the platform would automatically mirror the server in a handful of key places around the world (say one on each US coast, one in Europe, one in east Asia). File system synchronization and MySQL replication would happen automatically. Core operating system is managed, so we don't need to do full system administration and security, and low-level backups are also done by service provider, though we also do our own backups as well. Couple this with some sort of DNS geo-detection, so users are routed to the nearest operational server... with support for https, of course. Does such a dream exist? If not, what are some approaches to accomplish the same end with minimal time investment and minimal monthly hosting costs?

    Read the article

  • How do I play nicely with MS SQL / SQL Server 2008

    - by acidzombie24
    Big problem. I have nearly given up. I am trying to port my prototype to use MS SQL so it will work on a server once I get it (the server will be SQL Server 2008, shared; I don't know any more info than that). So I tried to connect to SQL Server via the Visual Studio IDE and had no luck. I enabled TCP and named pipes and restarted the service (and the computer), still with no luck. I remembered about .mdf files, so I made one; after the obstacle of not being able to work out the required connection string, I figured out that Visual Studio has it in its properties, and I successfully connected with that. Then I had a problem with nested transactions. After not being able to figure out how to check for them, I wondered if I could configure the server to allow them somehow. I always thought all MS SQL editions were the same except for limitations, but SQL Server seems to support nested transactions, so there's no point trying to work around the problem with .mdf files, since I won't need them and really just used them to port the base of my SQL code and to check whether the syntax is correct. I tried installing SQL Server Management Studio, since people mentioned it several times (as a solution or at least a help). When installing it on Windows 7 it says it may not be compatible. After running it, it launched SQL Server Installation Center (64-bit), which doesn't seem to be the same thing, as I don't see a way to modify any of my server (networking) configurations or edit user permissions, etc. I am clueless about what to do next. Does anyone have any ideas? I'm posting here because I think my problem is more about configuration and SQL Server than programming.

    Read the article

  • Local user vs. domain user? What is the right way here?

    - by ebeeb
    I'm a software developer in a company with 6 employees. Everyone has a machine to themselves, so none of the machines is shared. I'm currently setting up my machine with Windows 8 and was experimenting a bit with domain and local user accounts. Please correct me if I'm wrong, but I think the idea is that domain users generally should not be able to modify the configuration of a machine (like installing software), since they are able to log in on every single machine in the domain. The local user (usually just one local administrator per machine) is the one who takes care of the machine's configuration. But in my case, logging into the domain is just for being able to access directories/servers in the domain (I do not really know the details; all I know is that logging into the domain user account is necessary). So overall I've got a local admin account and a domain account used on my machine. While working I'm logged in to my domain user account. But it annoys me that I always need to enter the credentials of my local user account when I'm about to update/install something, which happens quite often as a software developer. I fixed this by adding the domain account to the user accounts in Control Panel and putting it in the Administrators group. The only thing I want to know about this: is there something REALLY bad about doing this? Or is there maybe a more common way to be able to act like a local admin while logged in as a domain user? PS: I'm sorry about the tags, but I don't know the proper ones. I'd be glad if some of the superuser experts could fix this :-)

    Read the article

  • Handling emails on a web server - Making sure the FQDN is set correctly based on the website sending the email

    - by webnoob
    I have a Windows 2008 Web Edition server hosting multiple websites using IIS 7.5. At the moment, all the emails are sent via the IIS 6 SMTP service. The FQDN of the SMTP service is currently set to the computer name, which isn't correct as it doesn't resolve to a valid DNS entry and is not RFC compliant. Some questions:

        - Is there any way I can change the FQDN of the SMTP service based on the site sending the email?
        - Would it be OK to just set up mailserver.mydomain.com and use that as the FQDN for all the sites across multiple domains?
        - Should I be using some other mail server software to handle this better?

    The reason I am asking is that lots of emails are hitting spam folders because the settings are incorrect. I have access to the code that is running the websites, so if something needs to be done there then that shouldn't be a problem. The sites are written using ASP.NET 2.0. EDIT: I have just found an option to create an SMTP virtual server. Would this be the way forward? Create a virtual server for each site? Thanks.
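
    One way to vary the sending identity per site without touching the shared SMTP service itself is each site's own web.config; a hedged ASP.NET 2.0 sketch (the host follows the question's own placeholder name, and the from address is invented for illustration):

        <configuration>
          <system.net>
            <mailSettings>
              <smtp deliveryMethod="Network" from="noreply@site-one-domain.com">
                <network host="mailserver.mydomain.com" port="25" />
              </smtp>
            </mailSettings>
          </system.net>
        </configuration>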

    Read the article

  • IIS 6 Denies access to the default document

    - by Jim
    I've got Windows Server 2k3 with IIS 6 hosting a couple of ASP.NET MVC 2 applications (.NET 4), all in the Default Web Site. Most of them simply use Integrated authentication, but a couple use Forms authentication as well. All the applications work properly and are correctly accessible. The problem I'm trying to resolve is access to the default document. It is currently specified as index.htm. Both index.htm and the Default Web Site are configured to allow anonymous access (with none of the authenticated access boxes checked). However, access to the file is denied. Accessing via server.domain.tld/ and server.domain.tld/index.htm both yield 401 errors. However, server.domain.tld/default.htm (a file that does not exist) properly returns a 404. If I alter the file security on index.htm to allow Integrated authentication, then requesting /index.htm directly works properly for users with domain accounts, but anonymous users get a login prompt/401. How can I configure IIS to allow all users to view index.htm via server.domain.tld/?

    Read the article

  • Can Dovecot IMAP automatically create Maildir folders for new (virtual) users?

    - by user233441
    Hello everyone. I am learning to set up a Dovecot home IMAP server using a virtual Ubuntu 12.04 machine. My intention is eventually to have a home server that uses POP3 to take email from several addresses and remove it from my ISP's servers, while making it accessible through a home IMAP server (this is similar to the setup described at https://help.ubuntu.com/community/POP3Aggregator, which explains how to set up the system with Dovecot version 1 and is thus outdated). I intend to use the ISP's server directly when sending messages, and to BCC all sent messages to myself. I've completed the basic setup of the test server: getmail uses POP3 to fetch messages from two test email accounts and successfully delivers them to the respective Maildir-style "new" folders on the virtual machine. Dovecot then successfully sees these messages. I have two questions: 1) I had to set up the new, cur, and tmp folders for both of the test accounts manually to get this setup to work. Is there a way to get Dovecot to create these Maildir folders automatically when I create a new virtual user account (e.g., when I add a user and password combination to my Dovecot password file), or is it expected that I write a bash script to automate that task? 2) I would welcome any comments you have on how this approach could be improved as I learn to set it up. My motivations with this approach are 1) to enable archiving/storing emails from several hosting providers that impose a cap on server storage, and 2) to give me somewhat greater control over email storage without requiring that I set up and administer a mail server from scratch (which I'm not yet prepared to do) (this follows the recommendations at https://ssd.eff.org/tech/email). Thank you!
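
    On question 1, a hedged sketch of a small helper that creates the Maildir skeleton whenever a new virtual user is added to the passwd-file (the paths and the vmail user/group are assumptions):

        NEWUSER=jane
        MAILDIR=/var/vmail/$NEWUSER/Maildir
        mkdir -p "$MAILDIR"/{cur,new,tmp}
        chown -R vmail:vmail "/var/vmail/$NEWUSER"
        # Alternatively, delivering through Dovecot's own LDA/LMTP instead of writing
        # to the Maildir directly from getmail normally creates missing Maildirs on
        # first delivery, which removes the need for this script.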

    Read the article

  • Separate domains vs. one domain with alias-domains

    - by Quasdunk
    I tried to ask this question a few days ago, but I'm afraid it was not clear enough, so here's another try. I have set up a LAMP server using ISPConfig 3 for the administration. PHP is running over FastCGI. I have several domains, like my_site.com, my_site.net and my_site.org, but they all point to the same application/website. Each domain has its own web root folder and runs under its own user. The application itself is in a common directory which is owned by another user, like so:

        # path to my_application (owned by web1)
        /var/www/clients/client1/web1/web/my_application/

        # sym-link to my_application from the my_site.com web root (owned by web5)
        /var/www/my_site.com/web -> /var/www/clients/client1/web1/web/

        # sym-link to my_application from my_site.net (owned by web4)
        /var/www/my_site.net/web -> /var/www/clients/client1/web1/web/

    With a setup like this I have encountered a few problems concerning permissions when performing filesystem operations with PHP. For instance, if the application is called via my_site.com, the user web5 tries to write something to the application folder. But the application folder is owned by the user web1, so web5 is not allowed to write there. As far as I understand, this is how FastCGI works. After some research and asking a few people, the solution seems to be to break it all down to one domain (e.g. my_site.com) and define the other domains (my_site.org, my_site.net) as aliases for this one domain. That way, there would be only one user who has all the necessary permissions. However, this would mean that we'd have to buy a multi-domain SSL certificate - but we already have an SSL certificate for each domain. We were able to use them with our previous provider (managed hosting), and there we also had only one web directory and multiple domains. So if this was possible, I wonder: is putting all the domains together into one vhost, with one main domain and several alias domains, the right approach in this case? Or may I have misunderstood something?
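
    A different technique that is sometimes used instead of collapsing everything into one vhost, sketched with the user names from the layout above (this is not ISPConfig's own mechanism): put all the site users in one shared group and make the application directory group-writable with the setgid bit, so each FastCGI user can write there:

        groupadd appshare
        usermod -aG appshare web1
        usermod -aG appshare web4
        usermod -aG appshare web5
        chgrp -R appshare /var/www/clients/client1/web1/web/my_application
        chmod -R g+rwX    /var/www/clients/client1/web1/web/my_application
        find /var/www/clients/client1/web1/web/my_application -type d -exec chmod g+s {} \;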

    Read the article

  • Mapped network drive connection timeout

    - by Terix
    I have server "Alpha" and server "Beta". Server Beta has a shared folder that is mapped on server Alpha as "X:". On server Alpha there is a .vbs script that runs, takes some files on the local drive and copies them to the X: drive. My issue is that if no user logs on to server Alpha for a long time, it seems like the TCP connection underneath the mapped drive times out, and the .vbs script fails to copy the files. As soon as I log in with Remote Desktop on the Alpha server, the .vbs copies the files successfully. I have run many tests, using file logs to check what was happening, and I found no way to refresh the connection and let the .vbs copy files unattended. I always have to log in with Remote Desktop on the Alpha server to refresh the connection and let the .vbs copy the files without issues. What can I do to avoid logging in every time? The .vbs script runs 3 times a day and this is very annoying. I do not have control over server Beta so I cannot change anything there, and I am very limited in the changes I can make on server Alpha (I cannot change the registry and that sort of thing).

    Read the article

  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing - I've spent a fair bit of time researching it and various providers but am yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances though, it seems as though each instance should be completely decoupled from all other instances. i.e. If content is uploaded to the web server's local filesystem from a custom CMS backend then that content won't be available if subsequently requested from a different web server in the auto-scaling group. Is that right? I met with a representative of our existing hosting provider recently and he was claiming that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me...should I be skeptical about this? I'm a little worried because my (non-technical) boss who ultimately makes the decisions is all for signing up to this cloud solution because it won't require any extra work. I'm sure that they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server will always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that'll slow things down. I'd really appreciate it if somebody could clear this up to me. I'm all for switching to AWS and using S3+CloudFront for all static content, but that isn't looking very likely to happen at the moment.

    Read the article

  • Can nginx be a mail proxy for a backend server that does not accept cleartext logins?

    - by 84104
    Can nginx be a mail proxy for a backend server that does not accept cleartext logins? Preferably I'd like to know what directive to include so that it will invoke STARTTLS/STLS, but communication via IMAPS or POP3S is sufficient. The relevant(?) section of nginx.conf:

        mail {
            auth_http localhost:80/mailproxy/auth.php;
            proxy on;

            ssl_prefer_server_ciphers on;
            ssl_protocols TLSv1 SSLv3;
            ssl_ciphers HIGH:!ADH:!MD5:@STRENGTH;
            ssl_session_cache shared:TLSSL:16m;
            ssl_session_timeout 10m;
            ssl_certificate /etc/ssl/private/hostname.crt;
            ssl_certificate_key /etc/ssl/private/hostname.key;

            imap_capabilities "IMAP4rev1" "UIDPLUS";
            server {
                protocol imap;
                listen 143;
                starttls on;
            }
            server {
                protocol imap;
                listen 993;
                ssl on;
            }

            pop3_capabilities "TOP" "USER";
            server {
                protocol pop3;
                listen 110;
                starttls on;
                pop3_auth plain;
            }
            server {
                protocol pop3;
                listen 995;
                ssl on;
                pop3_auth plain;
            }
        }
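
    Regardless of which directives end up in the config, a quick way to verify from the outside which listeners actually offer STARTTLS or implicit TLS (the host name is a placeholder):

        openssl s_client -connect mail.example.com:143 -starttls imap   # explicit TLS upgrade
        openssl s_client -connect mail.example.com:993                  # implicit IMAPS
        openssl s_client -connect mail.example.com:110 -starttls pop3
        openssl s_client -connect mail.example.com:995                  # implicit POP3S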

    Read the article

  • What kind of server configuration is best for a chatting app? [closed]

    - by mohabitar
    I'm just now starting to go deeper into the world of cloud hosting and databases, and am getting overwhelmed by how deep this information goes. It's all a little too much to consume in a short amount of time. I get a lot of pricing information, but I'm unable to determine what that means to me. I'm making what you might compare to an email app. Users can send messages to one another. I just don't understand, out of the several options, what would be ideal for an app like this, where users would be constantly sending and receiving text data. With Amazon DynamoDB, I have to specify a pre-defined throughput with number of reads and writes per second. Sure I can just type 50, but I'm not exactly sure what 50 writes per second represents. I'm trying to determine what would be the most cost efficient solution, and I want to know what a throughput of 50 reads/writes/second compares to. Is that a high number? What is a good throughput number for a message sending app with say 50,000 daily users? I'm just providing specific numbers so I can understand what these throughput numbers represent. 100 transactions/second to me seems like a small number since I'm not familiar with this stuff, so I'm just looking to bring everything in context. What would 100 read/write/second be useful for? Are there any average example values available? And I'm not sure what each service is good for. For a message sending app, is there any reason I'd want to choose say Amazon DynamoDB over Google App Engine? Any insight would be greatly appreciated.

    Read the article

  • Would a PHP application benefit from being served from a RAM drive?

    - by Tom Marthenal
    I am in charge of hosting a PHP application that is large and slow, but easy to scale. The application is entirely static, with no writable disk storage needed. We've profiled the application, and the main bottleneck appears to come from loading the application and not the work the application does. The application is not CPU-intensive, although it does use a fair amount of memory (think Magento). Currently we distribute it by having a series of servers with the same PHP files on their hard drive and a load balancer in front of them. Easy but expensive. I've been reading about RAM disks and the IO benefits they offer, and was wondering if they would be well-suited to PHP applications. Since PHP applications are loaded from disk for every request and often involve lots of different files (as opposed to being kept in memory like with a Java application), I would figure that disk performance can be a severe bottleneck. Would placing the PHP files on a RAM disk and using the mount point as Apache's document root offer performance benefits? A startup script could create the RAM drive and then copy the files (which are plain-text and small) from a permanent location to the temporary RAM drive. Does this make sense, or should I just trust the Linux kernel to cache the appropriate files in memory by itself?
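
    A minimal sketch of the startup script described above (the paths and tmpfs size are examples):

        mkdir -p /mnt/app-ram
        mount -t tmpfs -o size=512m tmpfs /mnt/app-ram
        rsync -a /var/www/app-permanent/ /mnt/app-ram/
        # point Apache's DocumentRoot at /mnt/app-ram, then reload
        apachectl graceful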

    Read the article
