Search Results

Search found 19480 results on 780 pages for 'do your own homework'.


  • Cannot terminate process, "already terminated"

    - by felix-freiberger
    On Windows 8, I regularly get processes into a state where I can't terminate them. Skypekit.exe seems to be the process most likely to trigger the issue, but other processes can do it too. When I try to terminate such a process, I sometimes get an "access denied" message and sometimes nothing happens, but every following attempt to kill that process results in an "access denied" message as well, even though I:

    - have administrative rights (and ran Task Manager with them)
    - own that process
    - have the right to terminate it

    "Process Hacker 2" shows a more detailed error message, stating that I can't terminate the process because it is already terminated. Still, the process is most definitely still there, because every task manager I tested can still see it. Process Hacker's "Terminator" is unable to kill such a process, and when running the "Close the process' handles" tactic, Process Hacker gets stuck itself, leaving its windows "not responding". In that state, other task managers are in turn unable to kill Process Hacker. The only way I have found to actually end these processes is to shut down (which works without any problems).

    Why is this happening? How can I kill these processes?

    Read the article

  • Can't configure DNS properly on CentOS

    - by Nuker
    I am on a VPS that I must manage myself. I have network problems: over the last few days many of my users have reported that they can't reach my site through my domain, and it seems Google and Facebook can't either (this never happened before). However, I can reach my site without problems, and so can many other people. So I tested with a PHP include like this:

        <?php include 'http://mysite.com/somepage.php'; ?>

    and I get this error:

        Warning: include(): php_network_getaddresses: getaddrinfo failed: Name or service not known in

    I even tried including content from yahoo.com or facebook and that didn't work either. However, the includes do work if I use IPs instead of domain names. Do I have a DNS problem or something? What can I do to fix it?

    I'm on Linux 2.6.32-431.11.2.el6.x86_64 on x86_64, CentOS Linux 6.5. I have this in my resolv.conf:

        # Generated by NetworkManager
        # No nameservers found; try putting DNS servers into your
        # ifcfg files in /etc/sysconfig/network-scripts like so:
        #
        # DNS1=xxx.xxx.xxx.xxx
        # DNS2=xxx.xxx.xxx.xxx
        # DOMAIN=lab.foo.com bar.foo.com
        nameserver 8.8.8.8
        nameserver 8.8.4.4

    Thank you.
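
    A few hedged checks for narrowing down where resolution breaks (mysite.com stands in for the real domain; note that PHP's getaddrinfo resolves through glibc/nsswitch, not through dig):

        dig +short mysite.com @8.8.8.8                    # does the configured nameserver answer at all?
        getent hosts mysite.com                           # the same lookup path getaddrinfo uses
        grep '^hosts' /etc/nsswitch.conf                  # the resolution order glibc follows
        php -r 'var_dump(gethostbyname("mysite.com"));'   # what PHP itself resolves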

    Read the article

  • Proxmox: VMs and different public IPs

    - by Raj
    I have a server with two NICs, both connected directly to the internet. I have five public IP addresses available for the VMs. The host machine (Proxmox) doesn't need one (it will use a private IP, and that's all) but must have an internet connection. I've gone through the Proxmox documentation and I'm not able to put together the big picture of the right network configuration for my needs.

    In short, what I have is:

    - one server (Proxmox, host machine)
    - 5 VMs created on that server
    - 5 public IP addresses available (one for each VM), let's say: 80.123.21.1, 80.123.21.2, 80.123.21.3, 80.123.21.4, 80.123.21.5

    What I have now for the host is the following:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet manual

        auto eth1
        iface eth1 inet manual

        auto vmbr0
        iface vmbr0 inet static
                address 192.168.1.101
                netmask 255.255.255.0
                bridge_ports eth0
                bridge_stp off
                bridge_fd 0

        auto vmbr1
        iface vmbr1 inet manual

    The host can be reached from the internal network, so that's OK, and it has an internet connection, which is also OK. vmbr1 is going to be used by the VMs; each VM will have its own IP in its network interfaces configuration file. For some reason the VMs have no internet access and can't use their public IP addresses. If I use NAT, everything works, but then they do not use the public IPs allocated to them. Am I missing something?
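
    A minimal sketch of one thing that may be missing, assuming eth1 is the second internet-facing NIC and the provider routes the public addresses straight to it (the stanza and commands below are illustrative, not taken from the question):

        # Replace the bare vmbr1 stub in /etc/network/interfaces with a bridge that
        # includes the physical NIC, so the VMs' public IPs are bridged straight out:
        #
        #   auto vmbr1
        #   iface vmbr1 inet manual
        #           bridge_ports eth1
        #           bridge_stp off
        #           bridge_fd 0
        #
        ifdown vmbr1 2>/dev/null; ifup vmbr1   # or reboot, then attach each VM's virtual NIC to vmbr1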

    Read the article

  • Does anyone know where I could find a 2 input USB voltage meter?

    - by John O
    What we really need is a tiny UPS, of sorts. We'll be hooking up a solar cell and a battery to a single-board computer. Currently that SBC is a custom PIC32 device, and it does its own UPS and voltage-monitoring duties. I've been tasked with trying to replicate all of its features with off-the-shelf products, and for the most part I've succeeded. But I don't currently have any way to switch between two sources of juice, or to monitor when they're getting low.

    These guys have something: http://www.mini-box.com/picoUPS-100-12V-DC-micro-UPS-system-battery-backup-system I really like it, and the price is well within the budget. We might even work it in, even though it does 12V and I'll probably be using 5V; there are enough engineers on hand to figure something out. But I'd still have no idea what the voltage was for the PV or the battery. I was hoping there was some simple little USB multimeter I could use to monitor this, but I can't seem to come up with anything. I've found all sorts of cool hardware, but nothing that will help us. Does anyone know of anything?

    Read the article

  • Log connections to program

    - by Zac
    Besides using iptables to log incoming connections, is there a way to log established inbound connections to a service that you don't have the source for (suppose the service doesn't log this kind of thing on its own)? What I want to do is gather some information about who is connecting, to be able to tell things like what times of day the service is used the most, where in the world the main user base is, and so on.

    I am aware that I could use netstat and hook it up to a cron script, but that might not be accurate, since the script could only run as frequently as once a minute. Here is what I am thinking right now (a sketch of the first idea follows below):

    - Write a program that constantly polls netstat, looking for established connections that didn't appear in the previous poll. This idea seems like a waste of CPU time, though, since there may not be a new connection.
    - Write a wrapper program that accepts inbound connections on whatever port the service runs on, but then I wouldn't know how to pass that connection along to the real service.

    Edit: It just occurred to me that this question might be better suited to Stack Overflow, though I am not certain. Sorry if this is the wrong place.
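
    A rough sketch of the polling approach, assuming the service listens on TCP port 8080, a five-second poll is acceptable, and the log path is made up:

        #!/bin/bash
        # Log the peer address of each newly established connection to port 8080
        # by diffing successive netstat snapshots.
        PORT=8080
        PREV=$(mktemp)
        while sleep 5; do
            CURR=$(mktemp)
            netstat -tn | awk -v p=":$PORT" '$6 == "ESTABLISHED" && $4 ~ (p "$") {print $5}' | sort -u > "$CURR"
            comm -13 "$PREV" "$CURR" | while read -r peer; do
                echo "$(date '+%F %T') new connection from $peer" >> /var/log/myservice-connections.log
            done
            mv "$CURR" "$PREV"
        done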

    Read the article

  • Something like Dropbox for local use

    - by Casper
    I am looking for a solution to sync folder pairs between a NAS and multiple local Macs. Each of the Macs could edit files, and the other Macs should then get the changes synced automatically: basically my own local version of Dropbox without using cloud storage.

    I have looked into solutions using rsync. As I understand it, rsync is not really capable of doing a bi-directional sync. I also do not want to have to invoke the sync process myself; I would prefer a daemon running in the background, waiting and checking for changes and then syncing them live. The program should also be flexible enough to recognize that it sometimes (in the case of laptops) cannot reach the NAS. It should then just wait for the connection to come back, without bugging me every few minutes. I have looked into synk, folderwatch, rsync and a few others, but I haven't really found a solution. Isn't there something like "offline folders" from Microsoft for the Mac?

    Thanks. PS: Just for clarification, I don't want to sync for backup purposes; I want to sync so that all Macs have a local copy of the most recent changes to files.

    Read the article

  • Powershell Get-Process cannot connect to remote computer

    - by amandion
    I've been struggling with this for a few hours and can't figure it out. I have two Windows 7 computers. One is my workstation, where I use PowerShell for administrative maintenance. The other is the machine I'd like to run remote PowerShell cmdlets against.

    On both computers I've enabled PowerShell remoting and added all computers to TrustedHosts with the * value. On the remote computer I've started the Remote Registry service and made sure the DCOM, Winmgmt and WinRM services are running. The firewall is disabled on the remote machine too. The cmdlet I try to run is:

        Get-Process -ComputerName $name

    where $name is the name of the remote machine. I keep getting an error saying that it could not connect to the remote PC. I've also tried using the IP address and I get the same error. These PCs are not in a domain. I am able to do the following successfully:

        Invoke-Command {Get-Process} -ComputerName $name -Credential $creds

    where $name is the machine name and $creds is the user name and password for the remote computer's local Admin account. This gives me the output I would expect. While this is an acceptable workaround, I am curious: why doesn't Get-Process with -ComputerName work as it should? I've seen a few articles on the web suggesting people have had success with it on its own. Each time, I am using PowerShell on my workstation with elevated privileges. Any ideas?

    Read the article

  • Hostname vs webpage domain.

    - by Mark
    Hi all, I'm just starting to look at deploying a web page and getting into the joys of DNS etc., and I'm wondering how you set up multiple web servers, each with its own hostname and public IP address, and yet have them serve up a web page from one domain.

    For example, let's say you have a website example.com, and an A record in DNS that points at its IP address of 1.2.3.4. You want to have two servers, prod1 and prod2, with some kind of load balancer in front of them for failover reasons. The way I see it, you would want the hostnames of these servers to be prod1.example.com and prod2.example.com, and perhaps loadb.example.com.

    How would you set up the DNS so this would all work? That is, so you could SSH to any of the server names (prod1.example.com, prod2.example.com or loadb.example.com) and also just use the www.example.com URL to go to the website. And would all these server names be resolvable from the public internet, and is that safe?

    This would be a Linux environment, for argument's sake Ubuntu, with a Django-framework dynamic website running in Apache 2.2. Cheers, Mark
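
    A hedged sketch of the records this layout implies, in BIND-style zone syntax (the prod1/prod2 addresses are invented for illustration; 1.2.3.4 is the address the site already points at):

        # Zone entries for example.com:
        #   @       IN A     1.2.3.4
        #   www     IN CNAME example.com.
        #   loadb   IN A     1.2.3.4
        #   prod1   IN A     1.2.3.5
        #   prod2   IN A     1.2.3.6
        dig +short prod1.example.com    # each name should then resolve, so "ssh prod1.example.com" works
        dig +short www.example.com      # while the site name still points at the load balancer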

    Read the article

  • How would I change the DocumentRoot on the version of Apache that came pre-installed on my Mac OS X system?

    - by racl101
    OK, so I want to take advantage of the Apache server that comes installed on my Mac OS X system (that is, I'd rather not install my own copy of Apache when I can try to use what comes bundled), so I went to change some settings in the configuration file /etc/apache2/httpd.conf. Namely, I changed these two lines:

        DocumentRoot "/Users/myusername/Sites"
        <Directory "/Users/myusername/Sites">

    so that they pointed to a folder in my Dropbox folder (so I could have my docs sync to Dropbox):

        DocumentRoot "/Users/myusername/Dropbox/public_html"
        <Directory "/Users/myusername/Dropbox/public_html">

    That didn't work. So then I figured maybe it was too much to ask to make a folder in my Dropbox the document root. So then I thought, what if I make the document root another folder of my choosing, like so:

        DocumentRoot "/Users/myusername/dev-sites/public_html"
        <Directory "/Users/myusername/dev-sites/public_html">

    That didn't work either. After looking through the httpd.conf file for clues, it seems that only two directories work as document root paths for the Apache that comes bundled with Mac OS X: /Users/myusername/Sites (or ~/Sites) and /Library/WebServer/Documents/. Trying to use any other directory gets me 403 errors in my browser. I was wondering if there are some other settings to change in httpd.conf, or permissions to set, to make this work. Any help would be appreciated, and many thanks in advance.
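
    One hedged thing worth checking for those 403s: the user Apache runs as (_www on Mac OS X) has to be able to traverse every directory above the new document root, and in Apache 2.2 the matching <Directory> block needs an "Allow from all". A quick sketch (paths as in the question; the fix itself is an assumption):

        ls -led /Users/myusername /Users/myusername/dev-sites /Users/myusername/dev-sites/public_html
        chmod o+x /Users/myusername /Users/myusername/dev-sites      # hypothetical fix: allow traversal
        sudo apachectl configtest && sudo apachectl graceful         # recheck the config and reload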

    Read the article

  • Switched to a new router and now experiencing lag?

    - by Mr_CryptoPrime
    I switched from a Dynex 802.11b/g router to a Netgear 802.11b/g/n just yesterday. The router is downstairs because the phone line upstairs is useless, but my PS3 is still upstairs (SOCOM: Confrontation is the game I am having issues with). I have done everything I can to make sure the connection is solid; I have checked the signal status and it has been as high as 80% and usually lingers at about 60%. I thought about upgrading my bandwidth from 1.5 Mbps to 7 Mbps, but I am guessing something is wrong if it worked fine before.

    Now the game seems more laggy and my voice chat is choppy. Others seem to receive my voice data fine, because I can hear my own feedback clearly from other players (if you are in close proximity to another player and speak, and their volume is loud enough, you can sometimes hear yourself). I wonder if port forwarding or setting up a DMZ would fix it, but I am not sure and don't quite know how to do it. Has anyone else experienced this when switching routers? What did you do to fix it? Thanks!

    Read the article

  • crontab not running on VirtualBox unless I'm logged in

    - by Mike
    I am running Ubuntu Server 9.04 in VirtualBox on my work PC as a development environment. I have some scripts in my user's crontab that run throughout the day while I'm SSHed into the VM. Last night I closed PuTTY and all of my other running applications (except VirtualBox and the VM) and went home. I came back this morning to discover that my cron jobs hadn't run at all, yet when I SSHed into the VM, the next scheduled job ran. I set the schedule to every 5 minutes to test, disconnected again, and the jobs stopped running on schedule. They seem to run only if I'm logged in to the machine. Obviously, I want them to run on schedule even if I'm not logged in to the VM, otherwise there's no point. Is there something I've failed to configure correctly?

    New information: there are now 3 entries in /var/log/cron.log saying "Mount of private directory return code [256]", and the entries correspond to when the cron jobs were supposed to run. I thought they were supposed to run as my user ID? Why would my own user ID be unable to run a script in my home directory?

    Read the article

  • No partition on USB Flash Drive?

    - by Skytunnel
    A friend gave me a corrupted USB memory stick to try to recover data from, but I've had some unusual results, so I thought I'd share to see if anyone is familiar with this problem.

    First off, I just tried opening it from my own PC. Windows prompted me to format the drive, which I of course declined. I downloaded TestDisk to analyse the drive, and right away I noticed something strange: in the list of drives it comes up as

        Disk /dev/sdc - 6144 B - USB Flash Drive

    That's right, the first USB flash drive smaller than a floppy disk!? Moving on anyway, the first analysis comes up with:

        Partition sector doesn't have the endmark 0xAA55

    TestDisk's Quick Search gave no results, so I moved on to Deeper Search:

        No partition found or selected for recovery

    This left me stumped. I tried a couple of other programs with no success. I did manage to get a backup image, but it was just as small as TestDisk indicated, so there was nothing of use on it.

    After a few hours trying various suggestions from other sources, I gave in and just tried formatting the drive, but that returned the message "Windows was unable to complete the format." From googling that, the suggestion was to delete the partition, but there is no partition to delete in this case. Most recently I've tried formatting from cmd and got this result:

        Format D: /FS:FAT32
        The type of the file system is RAW
        The new file system is FAT32
        Verifying 0M
        11 bad sectors were encountered during the format. These sectors cannot be guaranteed to have been cleaned
        The volume is too small for FAT32

    Anyone got any suggestions?

    UPDATE: As per a suggestion from @Karen, I tried running CLEAN from DISKPART; the result was:

        DiskPart has encountered an error: The request could not be performed because of an I/O device error.

    Read the article

  • Problems bridging networks

    - by Eric
    In my setup I have 3 computers and 2 (wireless D-Link) routers.

    Computer1 has ethernet and wireless interfaces:
    - ethernet: 192.168.0.x (DHCP)
    - wireless: 192.168.10.254 (static)

    Computer2 has ethernet with two IPs:
    - ethernet1: 192.168.0.90 (static)
    - ethernet2: 192.168.10.110 (static)

    Computer3 is a particular device with a hardcoded IP that I can't change:
    - wireless: 192.168.10.41 (static)

    Router1 manages internet and DHCP for the network 192.168.0.0/24. Router2 is more complicated: I don't use DHCP on it, and I use it to bridge between both networks. Its static IP is 192.168.10.1.

    The current state:
    - Computer1 can ping Computer2.
    - Computer1 can ping Computer3.
    - Computer1 can ping Router1.
    - Computer1 cannot ping Router2.
    - Computer2 cannot ping Computer3.
    - Computer2 can ping Router2.
    - Router1 can ping Router1.
    - Router2 can ping Computer2.
    - Router2 cannot ping Computer1.
    - Router2 cannot ping Computer3.

    This is very weird. Router2 manages the wireless connection, so it should be able to ping its own computers, right? My question is obviously: how can I make it so Computer2 can access everything else? This is a traditional case of "it was working before Christmas and now it doesn't". The ethernet wiring is as follows:

        [ Computer1 ]----[ Router1 ]---[ Router2 ]---[ Computer3 ]

    I am using switch (LAN) ports on Router1/2.

    Read the article

  • Why does lighttpd keep static files in cache, even when modified on disk?

    - by Pixelastic
    I am using lighttpd to serve static files. I have a bunch of images in a directory that I update regularly. Updating changes the file content (and file size) as well as the modification date, but not the filenames. When I access the files over HTTP, the updates are not taken into account and lighttpd serves the old file.

    If I manually rename a file to something different, lighttpd returns a 404 error, and if I rename it back, I get the correct, updated version. It seems lighttpd is using some kind of cache mechanism of its own (which is fine) to return static files; unfortunately, that mechanism doesn't seem to update itself when files are modified.

    I checked with Wireshark, and my browser really is making a request for the file, so this is not a browser caching issue. lighttpd returns a 200 OK when the file is requested from an empty cache, and a 304 Not Modified otherwise, as expected. But the file is returned with a wrong Last-Modified header that does not reflect the real last modification date.

    Maybe there is some config directive that I am not aware of? I would like the files returned by lighttpd to reflect the changes made on disk directly, or at least to be able to invalidate its cache.
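
    A hedged way to compare what is on disk with what lighttpd sends, assuming one of the images lives at /var/www/images/foo.png (a made-up path); the stat-cache directive in the comment is a standard lighttpd option, but whether it is the culprit here is only a guess:

        stat -c '%y %s' /var/www/images/foo.png             # mtime and size on disk
        curl -sI http://example.com/images/foo.png | grep -iE 'last-modified|etag|content-length'
        # If the headers lag behind the file, try disabling lighttpd's stat cache in lighttpd.conf:
        #   server.stat-cache-engine = "disable"
        # then reload lighttpd and request the file again.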

    Read the article

  • Revamping an old and unstable office IT solution using Windows Server and OpenVPN

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT infrastructure of a customer's office. They are currently running Windows XP everywhere, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices sluggish behavior or has problems with Flash games. To say the least, this isn't working for them.

    Now, I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 on the new server and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the idea of the future AD domain controller also running a VPN server, because of stability issues if something goes to hell with either of them. There will be no redundancy, though.

    However, I'm not sure whether there is something to gain by installing the VPN solution on the Windows Server itself when it comes to accessing file shares on the network via VPN. I don't know how to let users logging in via the VPN access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the Windows domain, but rather to their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but not every computer will necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear.

    Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules for their IT solution. I'd rather leave that untouched and go on my merry way to the next assignment.

    Read the article

  • How do I prevent my computer from freezing when it starts to swap?

    - by cdauth
    I work as a Java programmer, so I often have to run several programs at the same time that consume a lot of memory. When my memory is full and Linux starts swapping, my computer almost completely freezes. I can see that it is writing heavily to the hard disk, and everything reacts really slowly, often not at all. Moving the mouse in X sometimes doesn't work at all, sometimes it has a delay of several seconds; clicking usually has a delay of several minutes. Sometimes it is possible to change to a TTY (with a long delay). There I can usually type without delay, but when I try to log in it takes several minutes after typing the user name until the password prompt appears, and usually an error message tells me that the login timed out. So the only option is usually to restart the computer.

    I have noticed that other intensive writing to the hard disk also slows down my computer significantly. Sometimes I have used rsync to limit the bandwidth when copying files around on my own computer, as otherwise the system would be almost unusable. How can this be?

    At the moment it seems more useful to me to turn off swapping completely. That might crash some processes, which is unfortunate, but the alternative at the moment is to crash all processes by turning off my computer. I am using Gentoo Linux with kernel 3.6.2-gentoo, and I have a 10 GB swap partition on an HDD.
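
    A minimal sketch of the two knobs implied above (the value is illustrative; disabling swap entirely trades the freeze for the OOM killer picking a process to kill):

        sudo sysctl vm.swappiness=1                        # make the kernel far less eager to swap
        echo 'vm.swappiness = 1' | sudo tee -a /etc/sysctl.conf
        sudo swapoff -a                                    # or turn swapping off completely
        # To keep it off permanently, also comment out the swap line in /etc/fstab.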

    Read the article

  • LAMP Stack Version Help -- Is there a website or version tracker source to help suggest the right versions of each part of a platform stack?

    - by Chris Adragna
    Taken singly, it's easy to research versions and compatibility. Version information is readily available for each individual part of a platform stack, such as MySQL. You can find out the latest version, the stable version, and sometimes even the percentage of people adopting each version (personally, I like seeing numbers on adoption rates). However, when trying to find the best possible mix of versions, I have a harder time. For example: "if you're using MySQL 5.5, you'll need PHP version XX or higher." It gets even more difficult when you throw higher-level platforms into the mix, such as Drupal, Joomla, etc.

    I do consider "wizard"-like installers, such as the Bitnami installers, to be beneficial. However, I always wonder if those solutions cater to the least common denominator, trying to be all things to all people, and as such I think I'd be better off installing things on my own. Such solutions also seem slow to adopt new versions, slower than necessary, I suspect.

    Is there a website or tool that consolidates versioning data to help a webmaster choose which versions to deploy or which upgrades to install, in consideration of all the other parts of the stack?

    Read the article

  • Graphics artifacts/distortion with Win7 and nVidia

    - by Gepard
    The problem I encounter is rather hard to describe, so I provide a screenshot: http://dl.dropbox.com/u/1732760/video-distortion.png As you can see, there are horizontal stripes in random colors. These stripes sometimes appear in all windowed apps, in games and on the desktop too. They tend to stay in place until I refresh the window (or force it to redraw, for example by minimizing and maximizing it again). They also tend to appear in the same place and shape multiple times; even if they disappear, it's very likely they will be in the same place again after a while. These artifacts do not blink or change if the computer is idling. If I don't touch anything and don't use the mouse, they will stay in place forever (unless some app redraws its window on its own).

    I first encountered this problem some weeks ago. Back then I thought it might be a cooling problem, so I took out the graphics card, removed the dust from the radiator and fan and put it back into the PC. I also ran a stress test using Furmark (peak temperature was ~65C) to see if the problem becomes more intense when the card gets hotter, but surprisingly no artifacts whatsoever appeared during the stress test.

    The graphics card is a Galaxy GeForce 7300 GT with DDR3 memory and was never overclocked. The drivers are the latest, from the Nvidia site. The OS is Windows 7 64-bit, updated. AMD64 3000+, 2 GB RAM. I'm running a dual-monitor setup with two 19'' Samsung LCDs, and the problem appears on both, so I assume it's not a monitor or cable issue.

    Read the article

  • How to add a writable folder to the PHP document root on linux

    - by Ron Whites
    We are building an example bash script for Linux use of our PHP TestCoverage Tool. The development environment is Ubuntu 12.04.1, but we intend the Linux example to work across as many Linux versions as possible without modification. The example script requires a variable to be set to the PHP document root path, and by default it uses a small PHP example source file to show the user how our GUI and text report show the covered and uncovered PHP code areas. The script is also intended to be easily alterable by the user, to automate the TestCoverage display of the user's own PHP code.

    The problem we are having with Ubuntu 12.04 (any Linux?) is that the PHP/Apache2 document root is defined in /etc/apache2/sites-available/default as /var/www, and /var/www defaults to "drwxr-xr-x", i.e. read-only access for us. So in order to add our own folder as /var/www/SDTestCoverage we must change /var/www to "drwxrwxrwx" (read-write) access. So it seems our script (at least on Ubuntu) will need to do the following (a sketch follows below):

    1. acquire and save the /var/www permissions, then
    2. sudo chmod 777 /var/www (to make it writable)
    3. mkdir -p /var/www/SDTestCoverage (create our folder under the document root)
    4. sudo chmod 777 /var/www/SDTestCoverage (make our subfolder writable)
    5. and finally restore the /var/www permissions.

    Thanks, and our questions are:

    1. Is this the standard way (on Ubuntu) to add a writable folder under the PHP document root?
    2. Is this the most general-purpose way to add a writable folder under the PHP document root on other versions of Linux?
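
    A hedged sketch of steps 1-5 as a script fragment; the document root path is as described above, while using stat to remember the original mode is our own assumption and may need adjusting on non-GNU systems:

        DOCROOT=/var/www
        SAVED_MODE=$(stat -c '%a' "$DOCROOT")        # 1. remember the current permissions
        sudo chmod 777 "$DOCROOT"                    # 2. make the document root writable
        mkdir -p "$DOCROOT/SDTestCoverage"           # 3. create our folder under the document root
        sudo chmod 777 "$DOCROOT/SDTestCoverage"     # 4. make the subfolder writable
        sudo chmod "$SAVED_MODE" "$DOCROOT"          # 5. restore the original permissions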

    Read the article

  • Migrating ODBC information through a batch file

    - by DeskSide
    I am a desktop support technician currently working on a large-scale migration project across multiple sites. I am looking for a way to transfer ODBC entries from Windows XP to Windows 7. If anyone knows of a program or anything prebuilt that already does this, please point me to it; I've already looked but haven't found anything, so I'm trying to build my own. I know enough basic programming to read the work of others and monkey around with something that already exists, but not much else.

    I have come across a custom batch file written at one site that (among other things) exports ODBC information from the old computer and stores it on a server (mapped as y: through net use at the beginning of the file), then later transfers it from the server to a new computer. The pre-existing code is for Windows XP to XP migrations. Here are the pertinent bits of code:

        echo Exporting ODBC Information
        start /wait regedit.exe /e "y:\%username%\odbc.reg" HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI

    and later on:

        echo Importing ODBC
        start /wait regedit /s "y:\%username%\odbc.reg"

    We are now migrating from Windows XP to 7, and this part of the batch file still seems to work for this particular site, where Oracle 8i and 10g are used. I'm looking to use my cut-down version of this code at multiple sites, and I'm wondering if the same lines of code will still work for anything other than Oracle. Also, my research on this issue has shown that 64-bit operating systems keep 32-bit and 64-bit ODBC entries in different registry locations, and I'm wondering what effect that has on the code. Could I copy the same data to both parts of the registry, in hopes of catching everything? Any assistance would be appreciated. Thank you for your time.

    Read the article

  • What should I encrypt in Debian during install?

    - by ianfuture
    I have seen various guides and recommendations on the web about how best to do this, but nothing that clearly explains the best way and why. As I understand it, part of a Debian install needs to remain unencrypted on its own partition to allow the system to boot; most guides call this /boot and set the boot flag. Next, I believe the best approach is to create another partition out of all the remaining disk space, encrypt it, create an LVM on top of that, and then within the LVM create my various volumes, name them, and select their sizes and filesystem types. Can I include swap in the encrypted LVM? Is this approach sound? If so, what partitions should I use? (This is going to be a minimal server install, with packages added as and when I need them for a dev server.) Finally, how does the installer know what to put in each partition I define?

    I appreciate there is more than one question here, but any help and suggestions would be appreciated. If further clarification is needed, please mention it in the comments.

    EDIT 16/3/2010: After Richard Holloway's reply I thought it relevant to add this info. The reason I want to do this is to explore maximising security on any server install and setup, due to my interest in the area of computer security and forensics. I am also trying to perform the task as if it were being done in an enterprise situation. On a technical note, once set up and configured with minimal packages and SSH, this server will not be physically easy to access, so I will only be entering via SSH. (Yes, I know: why encrypt something no one will ever be able to get their hands on? Because I can and I want to is the simple answer, but see above too.)
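
    A hedged outline of the layout described above, roughly equivalent to what the installer's guided encrypted-LVM option builds for you (device names, sizes and volume names are illustrative only):

        # /dev/sda1: small unencrypted /boot partition with the boot flag set
        cryptsetup luksFormat /dev/sda2               # encrypt the rest of the disk
        cryptsetup luksOpen /dev/sda2 cryptroot
        pvcreate /dev/mapper/cryptroot                # LVM on top of the encrypted device
        vgcreate vg0 /dev/mapper/cryptroot
        lvcreate -L 1G -n swap vg0                    # swap can live inside the encrypted LVM
        lvcreate -L 8G -n root vg0
        lvcreate -l 100%FREE -n var vg0
        mkswap /dev/vg0/swap
        mkfs.ext4 /dev/vg0/root
        mkfs.ext4 /dev/vg0/var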

    Read the article

  • NAS for Mac OS X Server

    - by SamAdmin
    I'm using Mac OS X Server and want to allow users who log in to their network accounts to store their data on a NAS drive. I want the users to authenticate against the Lion server, as this gives me better policies and management, while their AFP share is located on the NAS drive. I've looked into home directories and network logins; however, I don't want the users to get a different login environment, just authentication against their account on the Lion server, with the Finder taking them to their own storage area located on the NAS drive.

    Currently I am using FreeNAS for both authentication and storage; however, there are getting to be far too many people to manage each AFP share and account individually, and using FreeNAS alone is extremely limiting for expansion: if something goes wrong with one entity, the entire system goes down. Using the Lion server for user accounts and policies will be much better for this expanding business.

    I have looked into LDAP, using the Lion server as an LDAP server for FreeNAS to authenticate against; however, I have had issues with this and thought a different approach might work better from the other side of the situation: providing the account with somewhere to store data, rather than having the AFP share authenticate against an LDAP server. Am I wrong to try it this way? Is it possible to logically add storage to a Mac OS X Server so that it is recognised as a local drive and can be used for network accounts?

    Read the article

  • Apache with multiple domains, single IP, VirtualHost is catching the wrong traffic

    - by apuschak
    I have a SOAP web service that I provide on an Apache web server. There are six different clients (IPs) that request data, and three of them are hitting the wrong domain. I am trying to find a way to log which domain name the requests are coming from.

    Details:
    - ServerA is the primary
    - ServerB is the backup
    - domain1.com: the domain the web service is on
    - domain2.com: a separate domain that serves separate content on ServerB

    ServerA is standalone for now, with its own IP and DNS entry for domain1.com. This works for everyone. ServerB is a backup for the web service, but it already hosts domain2.com. I added entries to the Apache configuration file like:

        <VirtualHost *:443>
            ServerName domain2.com
            DocumentRoot /var/www/html/
            CustomLog logs/access_log_domain2443 common
            ErrorLog logs/ssl_error_log_domain2443
            LogLevel debug
            SSLEngine on
            ... etc SSL directives ...
        </VirtualHost>

    I have these for both ports 80 and 443 for domain1 and domain2, with domain1 listed second. The problem is that when we switch DNS for domain1 from ServerA to ServerB, three of the six clients show up in the debug logs as hitting domain2.com instead of domain1.com and fail their web service requests, because domain2.com is first in the Apache configuration file and catches all requests that don't match another VirtualHost, namely domain1.com. I don't know whether they are hitting www.domain1.com, domain1.com (although I added entries for both), the external IP address, or something else.

    Is there a way to see which URL they are hitting, not just the page requested, or some other way to see why the first domain is catching traffic meant for the second listed domain? In the meantime, I've put domain1.com higher in the Apache configuration than domain2.com. Now it catches the requests for all clients and works; however, I don't know what it is catching, and I would like to make domain2.com the first entry again, with a correct entry for domain1.com, however they are hitting it. Thank you for your help! Andrew
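
    Two hedged ways to see which VirtualHost is actually catching a request (%v and %{Host}i are standard Apache log tokens; SERVER_IP is a placeholder):

        apachectl -S                      # prints the vhost order and which one is the default catch-all
        # Log the Host header and the vhost that served each request:
        #   LogFormat "%h %{Host}i %v \"%r\" %>s" vhostdiag
        #   CustomLog logs/vhost_diag_log vhostdiag
        curl -sI -H 'Host: domain1.com' http://SERVER_IP/   # manual check of which vhost answers on port 80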

    Read the article

  • sudo or acl or setuid/setgid?

    - by Xavier Maillard
    For a reason I do not really understand, everyone wants sudo for all and everything. At work we even have as many sudoers entries as there are ways to read a log file (head/tail/cat/more, ...). I think sudo is self-defeating here. I'd rather use a mix of setgid/setuid directories and add ACLs here and there, but I really need to know the best practices before starting.

    Our servers have %admin, %production, %dba and %users, i.e. many groups and many users. Each service (mysql, apache, ...) has its own way of installing privileges, but members of the %production group must be able to consult configuration files and even log files. There is still the option of adding them to the right groups (mysql, ...) and setting the right permissions, but I do not want to usermod all users, and I do not want to modify the standard permissions, since they could change after each upgrade. On the other hand, setting ACLs and/or mixing setuid/setgid on directories is something I could easily do without "defacing" the standard distribution.

    What do you think about this? Taking the mysql example, it would look like this:

        setfacl d:g:production:rx,d:other::---,g:production:rx,other::--- /var/log/mysql /etc/mysql

    Do you think this is good practice, or should I definitely usermod -G mysql and play with the standard permission system? Thank you.
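
    For what it's worth, a sketch of that ACL with explicit setfacl flags (-m modifies the access ACL, -d -m sets the default ACL that new files inherit, and rX grants execute on directories only); whether this covers everything the %production group needs is an assumption:

        setfacl -R -m g:production:rX,o::--- /var/log/mysql /etc/mysql
        setfacl -R -d -m g:production:rX,o::--- /var/log/mysql /etc/mysql
        getfacl /var/log/mysql            # verify the effective and default entries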

    Read the article

  • cPanel FTP account access to sym links from parent directory

    - by totbar
    I would like to give a potential developer temporary access to some of my projects. I have almost everything in its own subdomain, and each directory is a sibling of my public_html directory. It looks something like this ("developer" is the cPanel account name):

        developer/        * the top-level directory for the cPanel account, "/home/developer"
            site1/        * site1.mysite.com
            site2/        * site2.mysite.com
            site3/        * site3.mysite.com
            public_html/  * www.mysite.com
            ... etc

    I created a directory inside public_html called tempdev and added symbolic links to each of the sibling directories listed above. My understanding of cPanel is that I can only assign one user with "Special FTP Access" per domain, and I really don't want to give a complete stranger my login credentials (it's just a development environment, but still). So I used the cPanel FTP account creation UI. It will not allow me to assign the user access to the directories outside of public_html; I can't even give access to public_html itself. So I made the tempdev directory in www and created the symlinks there. Using the new account, I can see the symlinks, but I can't go into them. Is there a better way to accomplish what I am attempting?

    Read the article
