Search Results

Search found 10789 results on 432 pages for 'experience'.


  • Running multiple copies of openssh-server (sshd) on Ubuntu

    - by cecilkorik
     I may be attacking this problem the wrong way; if so, let me know. I have a server which is available through SSH from both the public internet and the local LAN. I would like to have two very different security policies for each, by running two copies of sshd with two different sshd_config files, each on a different port. Some of the things I'd like to change are to allow password or public-key authentication on the LAN, but public-key only from the internet. All (real) users could log in from the LAN side, but only certain authorized users would be individually whitelisted to log in through the internet. As far as I can tell this requires having two different SSH daemons running on different ports with different sshd_configs. I am fine with the different-ports part; I can easily forward port 22 to any port I want through my firewall. So my question is: what is the best way to actually START the second sshd under Ubuntu 10.04 LTS? Is there a recommended way to do something like this? Surely I am not the first person with this sort of need. I have a bit of experience with upstart, and I can manually hack the second sshd into /etc/init/ssh.conf I suppose, but I'm not sure if that will get overwritten by the package. However I do it, it's important to ensure both sshd processes always get restarted after any automatic or manual upgrade of the openssh-server package. Thanks in advance.
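
     One approach, sketched below, is to give the second daemon its own upstart job rather than editing the packaged /etc/init/ssh.conf: the openssh-server package only owns ssh.conf, so a separately named job file should survive upgrades, and upstart will respawn the process if it dies. The job name, config path and options here are assumptions to adapt, not a tested recipe.

         # /etc/init/ssh-internet.conf -- hypothetical name for a second-instance job;
         # the packaged job remains /etc/init/ssh.conf and is left untouched
         description "OpenSSH server (internet-facing policy)"

         start on filesystem or runlevel [2345]
         stop on runlevel [!2345]

         respawn
         respawn limit 10 5
         umask 022

         # -D keeps sshd in the foreground so upstart can supervise it;
         # -f points it at the second config file, which sets its own Port
         exec /usr/sbin/sshd -D -f /etc/ssh/sshd_config_internet

     After creating the job, sudo start ssh-internet should bring the second instance up, and status ssh-internet confirms upstart is supervising it.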


  • Multidimensional data table?

    - by ShreevatsaR
     [Apologies if this sort of question is off-topic for SuperUser. Please redirect to the right place if so.] There is a 3-dimensional array of values. (That is, instead of a table/2-dimensional array with values in a grid, the values can be thought of as filling a cube.) Is there a way to display this "cube" interactively, ideally on a webpage? Specifically, given the data, it would work something like this: the user selects two of the 3 variables. He then sees a "stack" of tables, one for each value of the third variable (cross-sections, in other words). By selecting the appropriate table from the stack, he can see the (i,j,k) value he wants. The "technology" for displaying such a thing (stacked tables, rotation, etc.) already exists, so this seems the sort of thing that someone ought to have written already. To be clear: I don't necessarily need sophisticated graphics, just the ability to select from cross-sections of variables. But I have no experience with what web gadgets exist (say, for displaying on a webpage), so I'm clueless how to even search for one. (Google searches like "multidimensional data visualization" didn't throw up anything useful. Google Spreadsheets can do a few kinds of charts which can be embedded on a webpage, but I cannot tell if this is one of them.) [I can imagine how it ought to work for higher dimensions. For four dimensions, instead of selecting just a stack, you'd first select an (i,j) from an "outer table", which would show all (k,l) values for that (i,j). For higher dimensions, inductively: you select (i,j), and then repeat what you'd do with 2 fewer dimensions.] So has this been written? Is this easy to write? Where ought one to look for such a thing?


  • OS X superuser folders automatically created. Per-user launchd process appears to kill 501

    - by Ric Pen
     New Apple laptop, OS X 10.8.2. I have used OS X, but many years previously, and am not familiar with subtleties or changes in com.apple.launchd.peruser.x... I have previously (and in retrospect, foolishly) made changes to these rapidly spawned new peruser accounts (my initial reaction was that if ipfw was disabled, then I might well be under hacker attack, which I have dealt with, years ago), but I believe I was wrong, and the results of my efforts at preserving the system's integrity have in fact been destructive, overreactive, and have resulted in much work to restore. My understanding from other posts is that superuser protocols have changed quite dramatically since I bought the first developer version of OS X many years ago. I haven't developed on Apple much since then, with the exception of WebObjects (IMO much underrated at that time, and more user-friendly than ASP prior to .NET, I vaguely recall). Creation of these apparently nasty peruser folders appears to confound the 501 process, which logs an inability to find the firewall (ipfw). Can someone help me with this? I am concerned that either the system is improperly configured or an application was improperly installed (although there is little here beyond Apple's SDK, which I find quite accommodating and intuitive). Still, I am a novice, I only sporadically develop at this time, and I would really just like to see this system running happily. Please offer assistance, in the form of potential info sources, or, if you have had a similar experience, perhaps scripts to suss out this issue. I do not wish to damage the system, but Apple's Developer Connection and discussion threads do not appear to have dealt with this particular issue recently... although I may well have missed something you have not - please apprise. Any assistance on this issue is very much appreciated - by an old guy who wants to do some things which were fun about 20 years ago.


  • Which CPU for XEN - LAMP testbed - Budget

    - by deploymonkey
     Dear serverfault knowledgeables, I'm in a decision dilemma right now which I can't resolve due to lack of hands-on experience. I need to build a testbed for basically virtualizing a LAMP application (OSes not yet decided), including server-side calculations. I'll opt for Xen since it seems better supported by cloud hosters at the moment. The hardware is for a proof of concept for a startup doing SaaS and might be used for a closed live alpha/beta later on. After testing, the testbed might be a) deployed as a colocated white-box server or b) used as a workstation. Requirements: a single socket is enough; we want ECC memory for reliability, which excludes most of Intel's consumer line; if an Intel CPU, then a threaded CPU (HT) is preferred; at least 16 GB of RAM; and, if justified by price and the reliability is not too bad, a high-quality desktop motherboard instead of a server motherboard would be worth a try. It came down to the Opteron 6128 vs. the Xeon 5620 for me after a lot of research, but I don't necessarily have to be right. Which CPU is preferable concerning TCO (motherboard price, power requirements 24/7...), the Opteron 6128 or the Xeon 5620? Which one offers better performance in real-world applications? (Do you have any other suggestions I probably overlooked?) Thank you for your consideration.


  • What is the quickest and safest way to test new software and revert all changes, if needed?

    - by calbar
     I'm looking for Windows software that will allow me to quickly create a "checkpoint", do whatever I might need to do to my computer - install programs/drivers/updates, create/delete personal files, reboot the system multiple times, open questionable attachments - and then revert the entire system back to when the checkpoint was created. Essentially I want Windows Restore Points that save my personal files and partitions, too. It sounds like disk imaging might be the ticket, but creating images is much too slow and the restore process too involved... I'm hoping to sacrifice full disaster recovery for speed. Creating a checkpoint should be as close to one-click as possible, and rolling back should be a matter of selecting a restore point and rebooting. Ding! I'm familiar with Sandboxie, True Image Home "Try and Decide", Returnil, and a number of other "virtual system" apps that actively "catch" changes and allow you to commit or reject them. I'm not interested in these for a number of reasons - I prefer the "cut and dry" restore point approach. Finally, I'll note that I've just recently become aware of Comodo Time Machine. It sounds absolutely perfect, however, a quick skim through the user forums shows more than a few horror stories of corrupted, unbootable systems. Any positive personal experience with the software to suppress my superstitions, or suggestions for more established alternatives, would be greatly appreciated - Comodo Time Machine seems relatively new. Thanks for your help!


  • Dell PowerEdge 860 won't boot but gets power

    - by fierflash
     I have a Dell PowerEdge 860 with Windows Server 2008 R2 running on it. Here is the scenario I'm currently in: I can boot it up and work as usual, but when I restart it, it just shuts down, doesn't reboot, and won't boot up again. Then it stays in this "mode": the on/off button blinking green (standby mode?) and the system indicator LED blinking amber. If I press the on/off button on the front panel it won't boot. You can hear the fan from the PSU running, but nothing else is trying to boot. If I unplug the power cable and then put it back in again it's the same: the extremely noisy fans won't start and it just sits there. If I let it "rest" for anywhere from 1 to 24 hours with the power cable unplugged and then plug it in again, it boots directly, without me having to push the on/off button. What I have tried: running diagnostics (not the one in the POST phase), where everything comes back fine; booting without any cables except the power cable; booting without memory installed; booting without the RAID cable plugged in; clearing NVRAM; reseating all the components and cables; Googling like a freak for any possible solution; and trying Dell Support, but I don't have any warranty left, so they can't help me unless I pay some ridiculous amount of money, I suppose. Does anyone have experience with the same issue? Do I need to replace something, or is there any other way to determine the problem? Any help will be greatly appreciated.


  • Proxmox drbd configuration split brain [on hold]

    - by AudioDan
     I am planning a Proxmox HA configuration with two Dell R710 machines (dual 6-core processors in each) with enterprise-level RAID arrays. I would be using DRBD with a quorum disk on a third machine, and would dedicate two 1 Gb NICs on each server to the DRBD communication. We would have approximately 12 to 14 virtual machines running on this pair of servers. The Proxmox manual recommends creating two DRBD resources: one for the virtual machines that normally run on server A and one for the virtual machines that normally run on server B. This is because of the Primary/Primary state in which this configuration runs: if both servers have VMs talking to the same DRBD resource and a split-brain situation occurs, there is potential for data corruption that must be resolved. While I understand it would take more effort to create new virtual machines, can anybody foresee any potential problems with running a separate DRBD resource for each VM instead? Does anyone have experience running a setup that way, and has it worked well? It seems to me that would allow more flexibility in moving machines back and forth.
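
     For what it's worth, a per-VM resource is mostly just more files and more minor numbers to manage. A rough sketch of one such resource follows, assuming DRBD 8.3-style syntax, one LVM logical volume per VM, and made-up hostnames, device numbers, addresses and ports; each additional VM would get its own .res file with a unique device and port.

         # /etc/drbd.d/vm-101.res -- hypothetical per-VM resource; every name and
         # address below is a placeholder, not a tested configuration
         resource vm-101 {
             protocol C;
             device    /dev/drbd101;
             disk      /dev/vg0/vm-101-disk-1;
             meta-disk internal;
             net {
                 allow-two-primaries;                # needed for Proxmox live migration
                 after-sb-0pri discard-zero-changes;
                 after-sb-1pri discard-secondary;
                 after-sb-2pri disconnect;
             }
             on nodeA { address 10.10.10.1:7101; }
             on nodeB { address 10.10.10.2:7101; }
         }

     The trade-off the manual points at just runs the other way: a dozen resources mean a dozen connections and ports to keep track of, but a split brain would then be confined to a single VM's disk.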


  • Building an Email server for mass emails

    - by EGHDK
     I recently started doing IT odd jobs for a company. The company has a pretty decent-sized mailing list that costs them over $3000 per month to send email from. The company is set on creating their own email server so that it can just run and send emails to the client lists. They only send out emails roughly once a month. Has anyone had experience with this? This wouldn't really be an email server, I guess, as it doesn't need to handle incoming messages; it just has to be able to send around 200,000 emails, once a month. What would be the best way to go about this? Services online like MailChimp have proved to be too pricey. It's not an ad that is being sent out, it's more of a monthly newsletter, so we don't need any crazy software for ROI or anything like that. If I could fit 200,000 people in Gmail, I'd do it, but I don't think I can (heh... maybe I should try).
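
     If the choice does land on a self-hosted MTA, the hard part is usually deliverability (reverse DNS, SPF/DKIM, IP reputation, bounce handling) rather than the raw sending. Purely as a sketch of the throttling side, assuming Postfix and made-up values, the relevant knobs in main.cf might look something like this:

         # /etc/postfix/main.cf additions -- a sketch only; every value here is an
         # assumption to tune against the receiving providers' rate limits
         myhostname = mail.example.com                 # must match forward and reverse DNS
         smtp_destination_concurrency_limit = 2        # parallel connections per destination domain
         smtp_destination_rate_delay = 1s              # pause between deliveries to the same domain
         default_destination_recipient_limit = 20      # recipients per single delivery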


  • How to make an "import image" button/field in a PDF form?

    - by Joe
     How to make an "import image" button/field in a PDF form? I am designing a "Lost Pet" poster for a local animal shelter. The idea is to make a PDF file that users of the shelter's website can download, insert their pet's information into, and then print out if their pet goes missing. I will be designing the poster in InDesign CS3, and then exporting it to a PDF file. I will then add form fields for the user to fill out. I am OK with making text entry form fields; that is a simple matter. What's not so simple is figuring out a way to allow the user to insert a photo of their lost pet into the PDF, save the file, and then print it out with the photo in it. I am running Adobe Acrobat Professional 8, on a Mac. All of the searches I've done have told me that if I were running Acrobat on Windows, I'd have access to another program called LiveCycle Designer, which has pre-built form item libraries, including an Image Field. But I cannot find any similar option in the Mac version. Has anyone had any experience doing something like this? If so, I could certainly use some tips on making this work. Just a quick clarification: this is a volunteer design project that I am doing for the shelter, so I don't want to spend money on any extra software/add-ons at all. I am hoping this can be done somehow with the pre-existing software I have.


  • How to set up mysql storage for certain rsyslog input matches?

    - by ylluminate
     I'm draining various logs from Heroku to an rsyslog Linux (Ubuntu) server and am starting to find I've bitten off a little more than I can chew in terms of working with my log histories. I need to be able to drill back in time based on more flexible details, and with more flexible access, than what the standard syslog file(s) provide. I'm thinking that logging to MySQL may be the correct approach, but how do I set this up so that it pulls only certain log entries into a table based on an identifier? For example, I see a long hex string identifying each log entry from a certain Heroku app instance. I assume that I can just pipe those into the MySQL socket vs. ALL rsyslog input into MySQL... Could someone please direct me to a resource that can walk me through the process of setting something like this up, or simply provide some details that can help? I have 15+ years of Unix experience, so I just need some nudging in the right direction, as I've not really done a tremendous amount of work with syslog daemons previously in terms of pooling various servers into one. Additionally, I'd be interested in any log review tools that could make drilling through log arrangements like this more handy for developers.
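
     In rsyslog this is normally a property-based filter placed in front of the ommysql output module (packaged as rsyslog-mysql on Ubuntu, which also ships the createDB.sql schema the module expects). A sketch, where the database credentials and the matched identifier are placeholders:

         # /etc/rsyslog.d/60-heroku-mysql.conf -- a sketch; the host/db/user/password
         # and the match string below are assumptions, not working values
         $ModLoad ommysql

         # Only messages containing this app instance's hex identifier go to MySQL...
         :msg, contains, "d.9173ea12" :ommysql:localhost,Syslog,rsyslog,secret
         # ...and are then (optionally) discarded so they don't also land in the flat files.
         & ~

     Restarting rsyslog applies the change; LogAnalyzer (formerly phpLogCon) is one of the usual web front ends for browsing the resulting table.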


  • Secure data from a server to a workstation using jumper hosts

    - by apalsson
     Hello. I have a WWW server; my problem is that the content is sensitive and should not be accessible to people without proper credentials. How can I improve the ease of use but still maintain security in the following scenario? The server is accessed through a "jumper host", i.e. the client connects to the jumper using a VPN connection and uses Remote Desktop to access the jumper. From the jumper he uses Remote Desktop again to access the server. Finally, on the server the user can access content using a WWW browser. Everything from the VPN client to the WWW browser requires authentication using a SmartCard token. This seems quite secure to me. Content only gets mirrored on the Remote Desktop between server and jumper, so no cached files to worry about. The connection between jumper and client is protected using VPN (SSL), so no eavesdropping. But it is quite cumbersome for the clients, with many steps and connections to open. :( So, how can I improve the user experience accessing my server without compromising security? Thanks.


  • sudo fdisk in a live session does not show all hard drives

    - by cornbread
     I am having Grub2 issues in my Ubuntu 10.04 dual-boot, two-hard-drive system, so I am attempting to follow the standard Grub2 reinstallation guide (can't post the link because the spam filter allows only one... ?_?). I don't know if this is the root of my problem, but my speedy internal HD with my OS on it is not showing up anywhere in a live session. Not in Nautilus, not behind fdisk... nowhere. When I can get the main system to boot, there is no issue seeing all available partitions, but the live session sees only the 1 TB internal media/backup hard drive. I need access to the other hard drive and its partitions to finish the Grub2 re-installation, but I am not sure anymore that it is the underlying issue. Anyone have experience with this? The issue I have identified as a Grub2 issue is fully described here. SandPvvr describes it exactly. Some notes: I do not see the Grub2 menu for my OSes; holding down the Shift key after my BIOS screen works maybe 10% of the time. This is not related to reinstalling a Windows OS - that hasn't been touched in a year. I do some web development, and the issue may have started when I was playing with Ruby and Django; not sure on this. Could a dev environment do this?

     fdisk in the live session:

         ubuntu@ubuntu:~$ sudo fdisk -l

         Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
         255 heads, 63 sectors/track, 121601 cylinders
         Units = cylinders of 16065 * 512 = 8225280 bytes
         Sector size (logical/physical): 512 bytes / 512 bytes
         I/O size (minimum/optimal): 512 bytes / 512 bytes
         Disk identifier: 0x0001d518

            Device Boot      Start         End      Blocks   Id  System
         /dev/sdb2               1      121601   976759939    5  Extended
         /dev/sdb5             487      110765   885816036   83  Linux
         /dev/sdb6          110766      121601    87040138+   b  W95 FAT32
         /dev/sdb7               1         486     3903700+  82  Linux swap / Solaris

         Partition table entries are not in disk order
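
     Before touching Grub2 it may be worth confirming whether the live kernel sees the second disk at all; if it does not, this is a BIOS/SATA-mode or drive-detection problem rather than a bootloader one. A few quick checks from the live session (device names are only examples):

         sudo fdisk -l                   # partition tables for every disk the kernel knows about
         ls /dev/sd?                     # block device nodes that have actually been created
         dmesg | grep -i -e ata -e sd    # kernel messages about drive detection and link errors
         sudo lshw -class disk           # hardware summary, if lshw is present on the live CD

     If the drive never appears in dmesg, no amount of grub-install will help until the BIOS/controller side is sorted out.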


  • Using NFS for scalable PHP/MySQL web application

    - by Jeroen Moons
    Here's the situation: I have a PHP/MySQL web application that accepts user uploads (pdf files). From these pdf files' pages a preview image is made on the fly and presented to the web app's users. Some pdfs might be on the large side, most will be under 50 MB but some extreme cases could be as large as a few hundred MB. A little waiting for the preview image for large pdf files is acceptable but no more than a minute let's say. Everything is running on one server for now, but soon the app will hit the server's limit on both storage and processing power. My idea to solve the problem: To deal with this situation I had the idea of having one or more pdf processing servers as needed, and one or more file storage servers. These two types of servers are mounted to the server on which the actual app runs using NFS. The app could then use GearMan to delegate pdf processing tasks to these processing servers. The processing server can mount the storage server and read the file stored there, process it and write its output to that server. The servers I'm talking about will be amazon ec2 instances. The web app returns a link to the resulting pdf preview image on the storage server that was used which can then be used on the front end to show the image to the user. My question: I have zero experience with apps that use multiple servers, is this idea viable or is there a better way to do it? Is an NFS setup fast and reliable enough for this situation?
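
     On the NFS half of the question, the mounts themselves are straightforward; whether they are fast enough will mostly come down to the EC2 instances' network throughput, so it is worth benchmarking with a representative pdf before committing. A sketch of a storage-server export mounted on the app and processing instances (hostname, path and options are assumptions to tune):

         # /etc/fstab on the web and processing instances -- placeholder names and options
         storage01:/export/pdfs  /mnt/pdfs  nfs  rw,hard,intr,noatime,rsize=32768,wsize=32768  0  0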


  • Apache Bench reports different result with same page

    - by Aspis
     I'm running into a little problem baselining an Apache2/fcgi/php-fpm server I am setting up. 1) If I run: ab -n 15000 http://mysite.com/index.php, Apache Bench returns Time per request: 41ms, but document length: 0 bytes and html transferred: 0 bytes. The transfer rate is 7.9Kb/s. 2) If I run: ab -n 15000 http://mysite.com/, Apache Bench returns Time per request: 83ms along with the accurate document length and html transferred totals. The APC cache status reports identical hit counts from both tests. Also, Apache Bench reports no errors in either case. Overall, no errors on any test sites and all logs are clean, etc. DocumentRoot is set to index.php, so I would expect both of these test runs to produce a similar result. My two questions are: 1) why the discrepancy? 2) which is the correct result? I've seen plenty of results like test 1 posted (without question), but frankly, from my own experience and that of others, accurate testing is hard to come by, even without goofy issues like this.


  • Poor performance on IIS7, only on Windows Server 2008, fine on Windows 7?

    - by user32005
     Hi, I'm new to IIS7 but have experience with other versions. I've been working on an application that works great in a dev environment (as always), but when I push it to a Windows Server 2008/IIS7 box, performance takes a noticeable hit. The dev environment is Windows 7/IIS7. The configuration in IIS is the same on the dev box as on the server. I've tried all sorts of things to find a reason for this, but I can't come to any conclusion. I've ruled out database problems on the live box, as all data is cached after the first request; I've confirmed this to be true and made sure there is no additional database traffic. I've ruled out network issues with a combination of monitoring requests with Fiddler and local debugging on the server. Whenever the code runs on the server there seems to be a performance issue. The server: Intel Core 2 Quad CPU Q6600 @ 2.40GHz with 2GB RAM. I know this is not fantastic, but I was expecting it to at least perform as well as my dev environment (which is running much more on a lower spec). CPU usage peaks at under 60%, and memory usage is less than half of what is available. I've enabled failed request tracing, and most of the time is spent in a custom HttpModule; this module handles every request, and I can't get any more detail as to what within the module may be causing the problem. Any ideas? I've been pulling my hair out for days now. Thanks


  • Windows desktop virtualization instead of replacing workstations

    - by Chris Marisic
     I'm head of the IT department at the small business I work for, but I am primarily a software architect and all of my system administration experience and knowledge is ancillary to software development. At some point this year or next we will be looking at upgrading our workstation environment to a uniform Windows 7 / Office 2010 environment, as opposed to the hodgepodge collection of various OEM-licensed editions of software that are on each different machine. It occurred to me that it is probably possible to forgo upgrading each workstation and instead have it be a dumb terminal to access a virtualization server, with each user's entire virtual workstation hosted on the server. Now I know basically anything is possible, but is this a feasible solution for a small business (25-50 workstations)? Assuming that this is feasible, what type of rough guidelines exist for calculating the required server resources? How exactly do solutions handle a user accessing their VM - do they log on normally to their physical workstation and then use Remote Desktop to access their VM, or is this usually done with a client piece of software? What types of software are available for administering and monitoring these VMs, and can this functionality be achieved out of the box with Microsoft Server 2008? I'm mostly interested in these questions relating to Server 2008 with Hyper-V, but feel free to offer insight on VMware's product line-up, especially if there are any compelling reasons to choose it over Hyper-V in a Microsoft shop. Edit: Just to add some more information: the implementation goal would be to upgrade our platform from a Win2k3 / XP environment to a full Windows 2008 / Win7 platform without having to perform all of the associated work on each of our differently configured workstations. Also, could anyone offer any realistic guidelines for how much hardware is needed to support 25-50 workstations virtually? The majority of the workstations do nothing except Office, Outlook and web. The only high-demand workstations are the development workstations, which would keep everything local.


  • Not getting IP from ISP on Multicast Network

    - by Johan Nielsen
     I'm having an odd issue with my ISP (COMX.dk). I have a managed access gateway box (Telsay) with three 8P8C ports for use with internet and IP-TV (respectively on different VLANs, so my ISP tells me). To utilize a port you need to register your device's MAC address through an online interface; your device is then paired with a static IP. I am using one port actively and I have registered another device (a router). The router is configured to listen for an active dhcpd on the network. When my router gets a lease, I get a private IP, 192.168.2.2 (not the one bound to my MAC), which is odd! I disconnected my router from the gateway and connected my laptop directly. The same thing happened: I was given a private address. I did a port scan on the gateway, found port 80 to be open and browsed to the IP. I was then presented with the management interface of a Belkin wireless router (HMMM!!!!) <-- by the way, not my gear. At this point I called the ISP to let them know of my issue/findings, only to be told "Well, we can't see any rogue DHCP servers" (thinking to myself, well I can). I then decided it could be fun to try the other port of my gateway, only to experience the same. So I reconnected my router and used the remaining port to make an observer (Wireshark, promiscuous mode, etc.). I am able to see my router trying to discover a DHCP server, but I can also see my ISP's IGMP and PIMv2 packets just repeating the same pattern. Hello...Hello...Hello :) So I called them again, only to get the same response: "We don't see any rogue DHCPs... we can't see the host you are talking to (the MAC address of the Belkin router)... you are definitely connected through wireless?!?" (no I'm not, there's no such thing as a wireless wire, I thought to myself). My questions are: What is going on (besides what I'm reporting here)? What am I seeing that they don't? What can I tell them in order for them to resolve mine/their issue?
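
     One thing that may strengthen the support case is capturing the rogue DHCP exchange itself, so you can hand the ISP the offending server's MAC and IP rather than a description. From the observer port, a capture along these lines shows which host answers the DISCOVERs (the interface name is an assumption):

         # Print DHCP traffic with link-layer (MAC) headers; in the verbose output the
         # Server-ID option identifies whatever is handing out the 192.168.2.x lease
         sudo tcpdump -i eth0 -n -e -vv 'udp port 67 or udp port 68'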


  • Why would I need a firewall if my server is well configured?

    - by Aitch
     I admin a handful of cloud-based (VPS) servers for the company I work for. The servers are minimal Ubuntu installs that run bits of LAMP stacks / inbound data collection (rsync). The data is large but not personal, financial or anything like that (i.e. not that interesting). Clearly on here people are forever asking about configuring firewalls and such like. I use a bunch of approaches to secure the servers, for example (but not restricted to): SSH on non-standard ports, with no password logins, only known SSH keys from known IPs; HTTPS and restricted shells (rssh), generally only from known keys/IPs; servers that are minimal, up to date and patched regularly; and things like rkhunter, cfengine, lynis and denyhosts for monitoring. I have extensive experience of Unix sysadmin. I'm confident I know what I'm doing in my setups. I configure /etc files. I have never felt a compelling need to install stuff like firewalls: iptables etc. Put aside for a moment the issues of physical security of the VPS. My question: I can't decide whether I am being naive or whether the incremental protection a firewall might offer is worth the effort of learning / installing it and the additional complexity (packages, config files, possible support etc.) on the servers. To date (touch wood) I've never had any problems with security, but I am not complacent about it either.
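
     For context on what the incremental step actually looks like: on a box like this a host firewall is mostly a default-deny backstop for anything that starts listening without you noticing (a package pulling in a daemon, a stray test process). A minimal sketch, assuming SSH on 2222 and public HTTP/HTTPS; run it from a console or wrap it in a script so a half-applied policy cannot lock you out, and note that these rules are not persisted across reboots by themselves:

         iptables -P INPUT DROP                                  # default-deny inbound
         iptables -P FORWARD DROP
         iptables -P OUTPUT ACCEPT
         iptables -A INPUT -i lo -j ACCEPT                       # loopback
         iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
         iptables -A INPUT -p tcp --dport 2222 -j ACCEPT         # SSH on its non-standard port
         iptables -A INPUT -p tcp --dport 80 -j ACCEPT           # HTTP, if the LAMP vhosts are public
         iptables -A INPUT -p tcp --dport 443 -j ACCEPT          # HTTPS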


  • Apache, Tomcat and mod_jk for load balancing

    - by pHk
     Hi guys. I've set up a basic Apache (2.2.x) and Tomcat (6.0.x) setup using mod_jk for load balancing, configured through the worker.properties file. Preliminary testing seems to show that this works relatively well, and it was quite easy to set up. However, the fact that it was so easy to set up has got me a little worried. We're dealing with 100 - 300 concurrent users using the same web application (deployed on 2 or 3 Tomcat instances). I have done a little Googling and looking around on here, and there seems to be more than one way to accomplish this (one example on here used a balancer:// style URL, which I've never seen before in an Apache config). For example, one question I ask myself is how reliable the load detection in mod_jk really is (Busyness, Session, Request, etc). In your experience, does this setup prove to be reliable in real-world scenarios? Any pointers on improvements, pitfalls or interesting literature/articles? I've worked with Apache before, but am in no way an expert. Thanks in advance.
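
     For reference, the balancer:// URLs you have seen belong to mod_proxy_balancer (paired with mod_proxy_ajp or mod_proxy_http), which is Apache's other, mod_jk-free way of doing the same job; both approaches are widely used at this level of concurrency. On the mod_jk side, a two-node worker.properties file is typically not much more than the sketch below (hosts, ports and the balancing method are assumptions):

         # worker.properties -- a sketch of a two-node load-balancer worker
         worker.list=lb,jkstatus

         worker.node1.type=ajp13
         worker.node1.host=10.0.0.11
         worker.node1.port=8009
         worker.node1.lbfactor=1

         worker.node2.type=ajp13
         worker.node2.host=10.0.0.12
         worker.node2.port=8009
         worker.node2.lbfactor=1

         worker.lb.type=lb
         worker.lb.balance_workers=node1,node2
         worker.lb.sticky_session=true
         worker.lb.method=Busyness

         worker.jkstatus.type=status

     The method line selects how load is measured (by request count, traffic, busyness or sessions), and the status worker, once mapped with a JkMount directive, gives a page for watching how the balancer is actually distributing requests.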


  • Smart card driven membership and door entry system

    - by Rob G
     I'm looking at putting in a smart-card-driven system at my local sports club (which doesn't have oodles of money), and since they're willing to pay for hardware and I'm willing to do the technical setup, I was wondering if anyone had any experience in setting something like this up. Writing any software needed is not the problem; I've pretty much got that covered with various open source projects out there and custom code I'll write. It's more the hardware side I'm not too sure about, and I'm looking for advice from people out there. I'm sure there are numerous complications, but on the surface it looks fairly simple. I'd basically like to enable members to swipe/touch a smart card at the door to gain entry to the club, then walk up to a touch-screen PC and swipe/touch a card reader there to "log in" to the system I create, which will allow them to book club facilities etc. I may even want that same card to then activate things like lights or music when they enter the room they've booked. Pretty utopian, I know, but still, we'd like to get as close as we can. As I said, the software shouldn't be a problem, and on the hardware side, so far I'm looking at: an all-in-one touch-screen PC running Windows 7 or Ubuntu; a USB card reader (not sure which one to buy); smart cards (again, never bought these before); and door/lighting hardware that could be triggered (not sure here either). If anyone has any advice on implementing something like this - especially the items I'm not sure about above, and of course anything I've missed out that's crucial - I'd be most grateful. Recommended hardware that you've used for something like this would be fantastic!


  • Struggling with proper way to setup Permissions on Linux/Apache Web Server

    - by Dr. DOT
     Your expert experience and assistance is greatly appreciated here. I have been running a LAMP server for a long time, yet I still struggle with the best way to set file & directory permissions for FTP and WWW protocol activity. My control panel is WHM/cPanel (not that it makes a difference), and out of the box: files are owned by the user account set up in WHM (e.g., "abc"); files have a group setting of "abc" as well; file permissions are created as 644; directories are owned by "abc"; directories have a group setting of "abc"; and directory permissions are created as 0755. Again, these are the default permission settings. Now everything is fine with FTP activity, but please advise me if any of these file/directory settings create issues, especially with security. Here's where my struggle comes into play. I have PHP apps that allow a visitor to create, edit, rename, delete, etc. sub-directories and files in certain selected directories. PHP runs as "nobody" on my server. So in order to get my PHP/web apps to work, I have had to run chown nobody *, chgrp nobody * and chmod 0777 * on everything in these certain selected sub-directories. I know this is probably a huge security hole (so don't ask me for any links :) but how should I set all the permissions to allow my FTP user to do his thing, while allowing the PHP apps to do their thing, while also minimizing any security risks and exposures? I know that big CMS systems like Drupal, Joomla, WordPress and so on handle this. Thanks ahead of time for reading through this and offering your expert advice!
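
     The usual alternative to world-writable 0777 is a shared group plus setgid directories, so the FTP user and the web-server user can both write while everyone else stays read-only. A sketch, using the "abc" and "nobody" users from the question and a hypothetical group name and upload path:

         # All names and paths here are placeholders, not a drop-in recipe
         groupadd webshare
         usermod -a -G webshare abc            # the FTP user
         usermod -a -G webshare nobody         # the user PHP/Apache runs as

         chown -R abc:webshare /home/abc/public_html/uploads
         # setgid on directories (the leading 2) makes new files inherit the webshare group
         find /home/abc/public_html/uploads -type d -exec chmod 2775 {} \;
         find /home/abc/public_html/uploads -type f -exec chmod 0664 {} \;

     Files PHP creates will still be owned by nobody, but as long as its umask leaves them group-writable (or the app chmods them to 0664 after creating them), the FTP user keeps full access through the group. The more thorough fix on cPanel is switching the PHP handler to suPHP so the scripts run as "abc" in the first place.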


  • Does a VPN requirement kill the concept of having a Web Application in the Cloud?

    - by Christian
     Recently I posted a question on SO, but so far I have got no answers. I wonder if I'm asking the wrong question. This is the problem: we need to design an application which offers a public HTTP web service, but at the same time it must consume some services through a VPN connection from another existing company. There is no alternative but to use a VPN connection to access those services. We want to host our application on some cloud infrastructure like Heroku or Amazon EC2, but there is no direct way to access the VPN services of the other company from there. The solution I'm considering, but don't like, is to have a separate server to expose the services from that VPN. But this would require the setup of another server, which I prefer to avoid. If that is the solution, can I use an Amazon EC2 instance to connect to a VPN? This is what I was thinking - is it correct? I don't have experience using VPNs, tunnels or that kind of networking stuff. I would really appreciate it if you could propose an alternative solution, or just give me a comment.
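
     On the narrow question: yes, an EC2 instance is an ordinary Linux (or Windows) host, so it can run whatever VPN client the other company's endpoint requires, whether that is IPsec or OpenVPN; platform services like Heroku are where this breaks down, since you cannot install a tunnel there. Purely as an illustration, if the partner endpoint happened to be OpenVPN, the client side on the instance might look roughly like this (every value is a placeholder their admins would actually supply):

         # /etc/openvpn/partner.conf -- illustrative only; endpoint, certificates and
         # any pushed routes are assumptions
         client
         dev tun
         proto udp
         remote vpn.partner-company.example 1194
         resolv-retry infinite
         nobind
         persist-key
         persist-tun
         ca ca.crt
         cert ec2-client.crt
         key ec2-client.key
         remote-cert-tls server
         verb 3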


  • Mavericks permission issues with Windows Server deduplicated shares

    - by dmohlmaster
     We have a number of 10.9-10.9.3 (Mavericks) machines installed throughout our facility. Much of the user content is pulled from shares stored on our Windows Server 2012 file servers with deduplication enabled. I have found that files newly written or unoptimized can be accessed without issue - read, written, modified, etc. Once a file gets optimized/deduplicated and Windows adds the P & L attributes (sparse and symlink), the Macs running Mavericks begin to have access issues. Once the files get deduplicated, users begin receiving read access errors when copying files (see error 1 below). This happens when copying to folders within the current folder tree or copying somewhere on the local system. If you 'stop' the copy operation and retry a few more times, it may eventually work for the specific instance but fail again later. I am, however, able to copy these files without issue via the terminal. Other systems running 10.7 do not experience the same issues and are able to access file server resources without issue. Many of the systems having issues are newer and thus not able to be downgraded to 10.8 or 10.7. I have tried Finder replacements such as Path Finder, but the results are the same. I know this is at least similar to the issues many Mac users are already experiencing and posting about, but I haven't seen it directly linked to deduplication and the attributes written by Windows Server. Has anyone seen this issue? Have any solutions been found? Error 1, when copying files after the P & L attributes have been set by deduplication: "One or more items can't be copied to "Foler" because you don't have permissions to read them." Via the system.log, I am also seeing the following error when accessing these deduplicated file shares (the reparse point tag listed below is IO_REPARSE_TAG_DEDUP): "smbfs_nget: filename.ext - unknown reparse point tag 0x80000013"


  • This web site needs a different Google Maps API key. A new key can be generated at http://code.googl

    - by MJI
     Apologies in advance if this is the wrong place to post. I tried searching for this issue and all that seemed to come up were questions posted by people who had this issue with their own web pages. I couldn't find questions related to this issue from a layperson's perspective. I'm not a developer. I have no domain, nor do I wish to have one at this time. Rather, I'm just a regular person who likes to upload photos to some photo-related sites. My uploading process constantly gets interrupted by one of these annoying API errors. I get it at least two times: once when I click the page to upload, and again right after it has uploaded. It also pops up if I go to edit a photo or delete one. This interrupts my browsing experience until I click OK. I just want a fix for the annoyance without having to register for a key. I tried before and it required a web domain. I'd rather not have to create a domain and go through such hoops just to fix this. Is there a solution for this problem that doesn't require registration? Another thing to note: I have used two computers. One has the message pop up and the other doesn't. What is different about the two computers?

