Search Results


  • TC hashing filters - single rule deletion

    - by exa
    For traffic shaping I'm currently using a setup that looks exactly like the setup from LARTC, on this page: http://lartc.org/howto/lartc.adv-filter.hashing.html I have a simple problem with that - every time I want to modify something in the hash table (like assigning an IP to a different flowid), I need to delete the whole filter table and add it back filter by filter. (I don't actually do it by hand, I have a nice program that does it for me... but still...) There is a problem - I have roughly 10k filters allocated this way, and deleting and refilling the whole filter table can get pretty lengthy, which is not exactly good for traffic shaping. My program could easily manage to delete only the rules that need to be deleted (thus reducing the whole problem to several commands and milliseconds), but I simply don't know the command that deletes only the one hashing rule. My tc filter show:
        filter parent 1: protocol ip pref 1 u32
        filter parent 1: protocol ip pref 1 u32 fh 2: ht divisor 256
        filter parent 1: protocol ip pref 1 u32 fh 2:a:800 order 2048 key ht 2 bkt a flowid 1:101
          match 0a0a0a0a/ffffffff at 16
        filter parent 1: protocol ip pref 1 u32 fh 2:c:800 order 2048 key ht 2 bkt c flowid 1:102
          match 0a0a0a0c/ffffffff at 16
        filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1
        filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 link 2:
          match 00000000/00000000 at 16
          hash mask 000000ff at 16
    The wish: a 'tc filter del ...' command that removes only one specific filter (for example the 0a0a0a0a match, i.e. IP address 10.10.10.10). Removal of some small subgroup would also be good - for example, I could still recreate a single bucket (bkt a) pretty fast. My attempts: I tried to number all the filters using prio, but with no help -- that just creates something unusable (but deletable) below, and the bucketed filters remain there after it gets deleted. Any ideas? Edit - I'm adding a simplified tl;dr description of the problem: I created a hash filter on some interface, just like in http://lartc.org/howto/lartc.adv-filter.hashing.html, and I want to find a command that deletes one rule (e.g. for 1.2.1.123) from the table, leaving the rest untouched and working.
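
    A sketch of what that deletion could look like, assuming a reasonably recent iproute2: u32 filters can be addressed individually by their handle (the fh column in the output above, in ht:bucket:node form), so removing the 10.10.10.10 entry from hash table 2, bucket a, should come down to one command. The device name eth0 is illustrative.

        # hedged sketch: delete a single u32 hash entry by its handle
        tc filter del dev eth0 parent 1: protocol ip prio 1 handle 2:a:800 u32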

  • Load balancing with puppet

    - by Gonçalo Queirós
    Hi there. I'm trying to set up a load-balancing system. My load balancer (nginx) has a conf file where I should list all IPs of the upstream servers. I could put the IPs in the conf manually, but that way I would need to change the conf file every time I add or remove an upstream server. For now I've come up with two different ideas, but I don't like much of either:
    1 - Have every upstream machine use Exported Resources to create a file with its IP. The load balancer server will then have an "include conf_directory/*" and load all the files created by the upstream servers. Since the load balancer is using nginx this can be done, but if I later want to configure something that doesn't support "include" in its conf files, this solution will not work.
    2 - If the config doesn't support the "include" command, then again have every upstream server use Exported Resources to create a file with its IP, and later have the load balancer execute a command that picks up every file and generates the config.
    Both versions adopt the same technique; the difference is that version 2 is for when the server (that needs a conf generated) doesn't recognize a command like "include" inside its own conf. Now, my question is: is there any way to do this in a different form? I suspect there is; since Puppet is made to manage multiple servers, it seems a bit strange not to have an easy way to configure load balancers.
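
    One common variant of idea 1 is sketched below, assuming exported resources (storeconfigs) are enabled and the puppetlabs-concat module is available; the file path, tag, and upstream name are illustrative. Each upstream node exports a fragment, and the balancer assembles all fragments into a single file, so it also works for software with no "include" support.

        # on each upstream node: export one fragment carrying this node's IP
        @@concat::fragment { "upstream-${::hostname}":
          target  => '/etc/nginx/conf.d/upstream.conf',
          order   => '50',
          tag     => 'nginx-upstream',
          content => "  server ${::ipaddress};\n",
        }

        # on the load balancer: build the file from a header, the collected
        # exported fragments, and a footer
        concat { '/etc/nginx/conf.d/upstream.conf': }
        concat::fragment { 'upstream-header':
          target  => '/etc/nginx/conf.d/upstream.conf',
          order   => '01',
          content => "upstream backend {\n",
        }
        concat::fragment { 'upstream-footer':
          target  => '/etc/nginx/conf.d/upstream.conf',
          order   => '99',
          content => "}\n",
        }
        Concat::Fragment <<| tag == 'nginx-upstream' |>>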

  • How to block spam email using Microsoft Outlook 2011 (Mac)?

    - by tim8691
    I'm using Microsoft Outlook 2011 for Mac and I'm getting so much spam I'm not sure how to control it. In the past, I always applied "Block Sender" and "Mark as Junk" to any spam email messages I received. This doesn't seem to be enough nowadays. Then I started using Tools > Rules to create rules based on subject, but the same spammer keeps changing subject lines, so this isn't working. I've been tracking the IP addresses; they also seem to change with each email. Is there any key information in the email I can use in a rule to successfully place these spam emails in the junk folder? I'm using a "Low" level of junk email protection. The next higher level, "High", says it may eliminate valid emails, so I prefer not to use that option. There are maybe one or two spammers sending me emails, but the volume is very high now. I'm getting variations of the following Facebook email spam:
        Hi, Here's some activity you have missed. No matter how far away you are from friends and family, we can help you stay connected. Other people have asked to be your friend. Accept this invitation to see your previous friend requests
    Some variations on the subject line they've used include:
        Account Info Change
        Account Sender Mail
        Pending ticket notification
        Pending ticket status
        Support Center
        Support med center
        Pending Notification
        Reminder: Pending Notification
    How do people address this? Can it be done within Outlook, or is it better to get third-party commercial software to plug in or otherwise manage it? If so, why would the third party be better than Outlook's internal tools (e.g. what does it look for in the incoming email that Outlook doesn't look at)?

  • Get Safari to use different autocompletion on different URLs on same hostname

    - by Luke404
    I have a webserver publishing different services over the same SSL VirtualHost, the two most commonly used being phpMyAdmin and Cacti. These (and others) use 'cookie' style authentication, asking for user and password in an HTML form (thus not using HTTP Authentication). Being on the same hostname, the Safari browser didn't manage stored passwords too well: if I logged in to one app with user foo and then went to the other app, it would propose user foo and its password in the login form. Changing just the username to bar used to be sufficient to let Safari autocomplete the correct password in its form field. Annoying, but I could live with it - usernames are short and easy to remember compared to the passwords we use. After the update to Safari 5 this seems to no longer be true: if I store in Safari (actually the user keychain on OS X) credentials for https://www.foobarbaz.com/app1 AND credentials for https://www.foobarbaz.com/app2, there seems to be no way for it to autocomplete both based on the URL. Even editing the keychain to add the path (it stores only the hostname by default) does not help. Is there anything I can do to make it work the way I want while still keeping everything on one hostname? Modifying things server side is of course possible, but I can't switch the apps to HTTP Auth (and not every one supports it anyway) to use different 'realms'.

  • Windows VPN for remote site connection drawbacks

    - by Damo
    I'm looking for some thoughts on a particular way of setting up an estate of machines. We have a requirement to install machines into unmanned, remote locations. These machines will auto-login and perform tasks controlled from a central server. In order to manage patching, AV, updates etc., I want these machines joined to a dedicated domain for this estate. Some of the locations will only have 3G connectivity (via other hardware); others will be located on customer premises in internal networks. The central server (ours) and the domain controller will be on a public WAN. I see two ways of facilitating this:
    1. Install a router at each location and have a site-to-site VPN between the remote device and the data centre where the servers are located.
    2. Have the remote machine dial up and authenticate via a Windows VPN connection to the DC via RAS.
    Option one is more costly to set up and has a higher operational cost, but it offers better diagnostics if the remote PC goes down. Option two works well but is solely dependent on the VPN connection being made before any communication can reach the remote machine. In a simple test, I got a Windows 7 machine to dial a VPN prior to domain authentication, then automatically log in using domain credentials. If the VPN connection drops, it redials. I can also create a timed task to auto-connect every hour in case of other issues (see the sketch below). I'd like to know: why (if at all) is operating a remote network of devices located in various out-of-band locations in this way a bad idea? Consider 300-400 remote machines, all at different sites. I'd rather have 400 VPN connections to a 2008 server than 400 routers, but I'd like other opinions on this.
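
    A minimal sketch of that timed re-dial, assuming a phonebook entry named "EstateVPN" and placeholder credentials:

        rem re-dial the VPN entry; harmless if the link is already up
        rasdial "EstateVPN" vpnuser P@ssw0rd

        rem register it as an hourly scheduled task running as SYSTEM
        schtasks /create /tn "EstateVPN redial" /sc hourly /ru SYSTEM /tr "rasdial EstateVPN vpnuser P@ssw0rd"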

  • Webserver python update script

    - by ThePyCoder
    So I have made this website on which you can trade stocks based on real stock quotes with virtual money. The stock quotes are in a MySQL database and are updated using a Python script which runs every minute or so. Now, this works fine on my local machine with XAMPP, but how about moving the project to a commercial web server? Basically I want my page hosted by a professional company, but do those kinds of servers support Python scripts running in the background? A dedicated server would be too expensive, and the script does some other SQL tasks too, so it can't be replaced by PHP or the like... So, are there any good web hosting services out there that give me the possibility of running a script in the background and hosting a website in the foreground? What server specifications do I have to look for? Thanks in advance! PS: I've done some research, and I found a Python-supporting web host WITH ssh support. Is that what I need? Or is the ssh not allowed to start processes?
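
    With SSH access, the usual shared-hosting answer is cron rather than a permanently running background process; a hedged sketch, with the script path and name purely illustrative (added via crontab -e on the host):

        # run the quote updater every minute and keep a log of its output
        * * * * * /usr/bin/python /home/youruser/update_quotes.py >> /home/youruser/update_quotes.log 2>&1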

  • How to encourage Windows administrators to pick up scripting

    - by icelava
    When I worked as an administrator in my first job, I was frustrated that our administration processes with Windows servers were a series of point-and-clicks; we could never match the level of efficiency of the Unix servers, which had a group of shell scripts to automate a lot of the work. I soon read about WSH and ADSI and wasted no time learning just how much automation I was able to achieve with scripting. There was a huge problem though - almost none of my Windows colleagues were really interested in learning scripting. They seemed happy with the manual mouse-clicking chores and were never excited at the prospect of using scripts to do the work on their behalf. I struggled to convince them to pick up scripting skills despite the evident increases in efficiency. I left that job in pursuit of a full-time software development career thereafter. Almost a decade on, working in various environments with different customers, I still encounter Windows administrators mainly possessing this general "mood" where they avoid scripting as much as possible, despite the increasing level of accessibility Windows server technologies are opening up for scripting and automation. I am almost certain the majority of administrators are administrators precisely because they absolutely hate performing any kind of programming duties. What are some means to encourage and motivate administrators that scripting can really help them in the long run?

  • MySQL getting stuck, eating up disk i/o

    - by bonez05
    Hi all. Using MySQL 5.0.51 on Solaris. At intermittent times MySQL looks like it is getting 'stuck': the disk usage on the server spikes to 98% busy from reads. I used dtrace (specifically the DTrace Toolkit's iosnoop) to track down which processes were doing all the reads. MySQL was reading tablename.TDM hundreds of times per second. There was no more than average load on the webserver that could account for this. There were no cron jobs running, and no other utilities like mysqldump or anything. It is a master/slave replication setup. As a jury-rigged fix, I altered the MySQL table from 'tablename' to 'tablename2' and then back to 'tablename'. This fixed the problem temporarily and "unsticks" MySQL: the disk usage goes back down, and dtrace no longer shows hundreds of reads to 'tablename.TDM' per second. A couple of ideas I had are:
    1. a MySQL version bug
    2. an infinite loop somewhere in my application (though I'm not sure how likely that is)
    3. ??
    Has anybody seen this before or have any insight? Thanks
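
    If the rename trick works because it forces MySQL to close and reopen the table handler, a lighter-weight experiment for the next time it sticks might be the following (assuming sufficient privileges; it briefly blocks access to the table):

        -- hedged: close and reopen just this table's handler instead of renaming it twice
        FLUSH TABLES tablename;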

  • Can a website company that builds 4-5 websites a year afford dedicated hosting?

    - by Petras
    We manage about 30 websites that use shared ASP.NET SQL Server web hosting. These are typical small/medium business websites and they perform fine in this environment. Recently I was looking at VPS hosting in this thread: http://serverfault.com/questions/128329/how-do-you-host-multiple-public-facing-websites-on-a-vps After contacting a provider in one of the replies, I was told that VPS hosting is not recommended for 30 sites, even if they are small; the resource requirements might be too great even for VPS. So I should turn to dedicated hosting. The lowest-cost dedicated hosting is $219 per month (see http://www.serverintellect.com/dedicated/pentiumdservers.aspx). But this is only for a single processor, which seems too light for a machine running both IIS and SQL. In our office all the developers work on quad cores, so I assume I'd really need the quad processor. However, this starts at $599 monthly. Now, I won't be able to transfer all of our 30 sites to this machine; I'd only be able to transfer say 5 or 6. However, moving forward, I'd be able to host all future sites on this machine, which amounts to 4-5 per year. Let's look at the economics. Shared hosting costs are typically $16.95 monthly (see http://www.crystaltech.com/dotnet.aspx). So here's the dilemma:
        First month's costs: $599
        First month's revenue: 6 x $16.95 = $101.70
        Loss in first month: $497.30
        First year's costs: $599 x 12 = $7,188
        First year's revenue: 6 x $16.95 x 12 + 5 x $16.95 x 6 (averaged) = $1,728.90
        Loss in first year: $5,459.10
    Clearly it is going to take years for this server to pay for itself. It just doesn't seem economical! Am I missing something here, or is dedicated not the way to go with the number of sites we build?

  • Windows Explorer Hangs on Right-Click

    - by Bryan
    I am not sure if this is the right site to post this on, as I typically post coding questions on Stack Overflow. But I'll ask anyway, and hopefully someone can move it if it's in the wrong place. Currently I have a custom-built PC with an Intel i7 chip, a 1300 W PSU, 8 GB of RAM, and two video cards. Originally I had one video card (NVIDIA) that used the PSU and had two DVI outputs. After purchasing a third monitor I installed another (ATI) graphics card that needed no PSU connectors. After installing it and restarting, I noticed that when I right-click on my desktop or in Windows Explorer, Explorer will hang, freeze, then restart. Sometimes after Windows Explorer restarts, the problem dissipates. I checked to make sure everything was connected properly, and it was. I repaired the ATI Catalyst Control Center to see if that had an issue, and I checked whether either video card required updated drivers. Nothing worked. I tried restarting my PC, and that didn't work. I tried using ShellExView (NirSoft's shell extension viewer) and tried closing processes, but that didn't work either. Does anyone have any idea what could have caused this, or possible solutions I should try? Thanks in advance.

  • Disabling LDAP Signing on Windows PDC in Local Policy

    - by Golmaal
    I just tripped over my own feet, it seems. Playing around on a Windows 2008 R2 server (set up as a domain controller), I was intrigued by a certain warning event (event ID 2886) which says: "To enhance the security of directory servers, you can configure both Active Directory Domain Services (AD DS) and Active Directory Lightweight Directory Services (AD LDS) to require signed Lightweight Directory Access Protocol (LDAP) binds." So I thoughtlessly did some Googling and set the relevant policies which enforce LDAP signing. I don't remember exactly, but I may have done that using Local Policy. Now I have set up a pfSense box which must authenticate AD users via LDAP. While the firewall can communicate over a secure channel, it is difficult to arrange the same for other packages such as Squid and SquidGuard. So now I have to disable, i.e. undo, those policy changes. The problem is that they are greyed out! The policies in question are LDAP server signing and LDAP client signing. I don't remember what I did, but when I access these policies from the Local Policy editor on the server, they are set to "Require signing" and are greyed out. The same policies can still be set via the Default Domain Controllers option in the Group Policy editor. So how can I reset these greyed-out policies? Thanks
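
    If the Local Policy UI stays locked, it may be worth checking where these settings land in the registry; a hedged sketch (these value names are the documented backing store for the two signing policies; roughly, 2 means signing is required and 1 means it is only negotiated). Note that if a GPO is still enforcing the setting, it will be reapplied on the next policy refresh, so fix the GPO side too.

        rem inspect the current enforcement level
        reg query HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters /v LDAPServerIntegrity
        reg query HKLM\SYSTEM\CurrentControlSet\Services\ldap /v LDAPClientIntegrity

        rem relax the server-side requirement (reversible)
        reg add HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters /v LDAPServerIntegrity /t REG_DWORD /d 1 /f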

  • Windows Server 2003 print spooler

    - by mikenardone
    Hello everyone at Server Fault. I am new to this website; I have been coming here to fix my own problems, and I believe everyone here is great. I could not find this issue anywhere, though I am sure other people have had it. I have an IBM x3850 with 48 GB RAM, 2 TB of hard drives, four NIC cards and two 1.7 GHz Xeon CPUs. I am running VMware ESX (the paid version, I believe; if not, then it is ESXi). I have 7 servers on this host, all Windows Server 2003. On one of the servers the CPU keeps going to 100%. When I go into Task Manager and look at the processes, it is the print spooler. I have 30 different HP LaserJet printers and two HP copiers. I believe it is a driver issue, but I can't figure out which one is causing it. Are there any programs for Windows Server 2003 that find bad print drivers?
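
    There is no stock "bad driver finder" to point to, but a hedged first step is to clear any stuck jobs - a corrupt job going through a bad driver will often peg the spooler - and then watch the System event log for print errors naming a specific driver while the CPU spikes:

        rem stop the spooler, clear stuck jobs, restart it
        net stop spooler
        del /q /f %SystemRoot%\System32\spool\PRINTERS\*.*
        net start spooler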

  • HTC Diamond Touch sync problem

    - by Anders
    I have an HTC Diamond Touch with all my contacts etc. on it. However, I did not use it for six months while abroad. When I started the phone again, I realized the touch screen had stopped working. I have tried restarting, soft-resetting, turning it off, etc., but the touch screen just won't follow commands. However, I can operate the phone with the buttons, so it's not frozen; I can get into the phone and view contacts, but not use it to call, etc. The problem is: how do I get my 300 contacts out of the thing!? When I plug the phone in, it lets me choose between "Sync with Outlook" and "Use as storage device", and it automatically selects "Use as storage device". I cannot choose to sync using the buttons, and I cannot change this option afterwards either. In short, I have a phone with all of my contact data and am completely unable to get it out. Any tips/help/suggestions? If possible, preferably one that does not include sending the phone to a hardware workshop for three weeks to get it fixed :)

  • Using VMware Guest OS to enable Host OS to ssh to remote network

    - by Reuben L.
    Basically I have an issue because my host OS is 64-bit Linux Mint (Ubuntu-derived) and it doesn't seem to be compatible with the Juniper Network Connect client used by the network at my workplace. Thus, I am unable to ssh from a terminal to the network. I can't make changes to the workplace network either, so that leaves me looking for solutions on my end. The main reason for me to access the network from home is to check on my running processes or to issue more commands to a few workstations. PuTTY is the desperate choice I usually make, but it means I have to reboot into Windows and also have limited control. I've tried several other methods and they have all failed. Recently, I set up a VM with Windows 7 as the guest OS. Now half my problems are fixed, as I don't have to physically reboot the system - I just have to engage Juniper Network Connect on the VM. However, I would still like to use my Linux terminal to ssh to the network. It sounds plausible that I could somehow forward ports so as to connect to the remote network from the host OS, tunneled through the guest OS, but I really have no clue how to do so... Can anyone help?
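
    One hedged way to do exactly that: run an SSH server on the Windows 7 guest (Cygwin's OpenSSH, for example) and, while Network Connect is up inside the guest, tunnel from the Mint host through the guest to a work machine. The guest IP and work hostname below are illustrative.

        # forward a local port through the guest to a workstation on the work network
        ssh -L 2222:workstation.example.com:22 user@192.168.56.101

        # then, from another terminal on the host, ssh "directly" to work
        ssh -p 2222 workuser@localhost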

  • Firefox window disappears

    - by Lord Torgamus
    Now this is odd. At some point in the last half hour, my Firefox window disappeared. I didn't notice, as I was working in another program at the time. No Firefox icon shows up with Alt-Tab, and no Firefox listing shows up under the Applications tab in the task manager. There is a Firefox entry under the Processes tab. Normally, I probably wouldn't have noticed, just opened Firefox up again, but I'm listening to an Internet radio station and the stream never stopped. When I did open a new Firefox window, it showed up in the Task Manager's applications tab. I'm running Windows XP, and my Firefox has the add-ons Adblock Plus, BetterPrivacy, Cert Viewer Plus, DOM Inspector, Firebug, Greasemonkey, Java Quick Starter, Live HTTP headers, Microsoft .NET Framework Assistant, NoScript, WebDeveloper and XPather. The radio station is Slacker; it's never given me any trouble before, and I've been using it for months. I don't think there was anything unusual in my open tabs; just a few static pages at non-sketchy sites like Java APIs, plus GMail and the aforementioned Slacker. Googling brought up a handful of similar-but-not-quite-the-same errors, none of which had useful resolutions. Does anyone know how to bring that window back and/or prevent this from happening again?

  • Sudden problems with iptables not running

    - by Fourjays
    I've got a sudden issue with iptables not running on my CentOS 5.8/DirectAdmin Xen VPS. All I have done today is install PHP APC and run an update (although I admittedly didn't pay much attention today - I usually do). iptables had been running smoothly since I installed it over six months ago. Basically, when I try to run iptables -L it tells me:
        iptables v1.3.5: can't initialize iptables table `filter': iptables who? (do you need to insmod?)
        Perhaps iptables or your kernel needs to be upgraded.
    I've looked around and tried a few things, and it appears that maybe my kernel doesn't have the modules loaded. I've been reading this and tried the two commands they suggest, to no avail - except there does appear to be a mismatch in one bit of output:
        -bash-3.2# cd /lib/modules
        -bash-3.2# ls
        2.6.18-194.32.1.el5xen  2.6.18-238.5.1.el5xen  2.6.18-274.7.1.el5xen  2.6.39.1-cs-domU
        2.6.18-238.12.1.el5xen  2.6.18-238.9.1.el5xen  2.6.37.2-cs-domU       3.0.1-cs-domU
        -bash-3.2# depmod -a
        WARNING: Couldn't open directory /lib/modules/2.6.18-274.18.1.el5xen: No such file or directory
        FATAL: Could not open /lib/modules/2.6.18-274.18.1.el5xen/modules.dep.temp for writing: No such file or directory
    Does this mean the versions are out of sync? If so, what are my next steps to getting this fixed? As you can probably tell, I am still learning how to manage my server, so please be very clear in all advice. Many thanks :)
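
    It does look out of sync: depmod is complaining about 2.6.18-274.18.1.el5xen, which has no directory under /lib/modules. A hedged way to confirm and fix (the package name assumes stock CentOS 5 Xen kernel packaging; if the VPS provider supplies the kernel, as the -cs-domU entries hint, you may need their matching modules instead):

        # the running kernel must match one of the /lib/modules directories
        uname -r

        # if it reports 2.6.18-274.18.1.el5xen, reinstall that kernel so its
        # modules land on disk, then reboot (or boot a fully installed kernel)
        yum reinstall kernel-xen-2.6.18-274.18.1.el5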

  • Nginx + PHP-FPM Too Many Resources

    - by user3393046
    My server has the following specs: CPU: 6-core Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz; RAM: 32 GB. I have a problem with nginx + php-fpm: they are taking too many resources for an unknown reason. Even if I restart nginx + php-fpm, the start-up processes will use many resources. My nginx config is the following:
        user nginx;
        worker_processes auto;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;
        worker_rlimit_nofile 300000;

        events {
            worker_connections 6000;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            #gzip on;
            include /etc/nginx/conf.d/*.conf;
        }
    My php-fpm pool config is the following:
        [www]
        user = nginx
        group = nginx
        listen = /var/run/php5-fpm.sock
        listen.owner = nginx
        listen.group = nginx
        listen.allowed_clients = 127.0.0.1
        pm = ondemand
        pm.max_children = 1500;
        pm.process_idle_timeout = 5;
        chdir = /
        security.limit_extensions = .php
    I'm using pm = ondemand since my website has to support many concurrent connections at the same time and I was unable to do that with dynamic/static. I guess this isn't the problem, because as I said earlier, when I restart nginx + php-fpm at the same time they take too many resources without any requests. Here is a screenshot of the CPU usage: http://s28.postimg.org/v54q25zod/Untitled.png
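
    One hedged check: with pm = ondemand, pm.max_children = 1500 still permits an enormous worker pool, and each PHP worker costs CPU and memory; watching the live worker count while the usage climbs should show whether php-fpm or nginx is the one ballooning.

        # count live php-fpm workers once a second while the spike happens
        watch -n1 'ps -C php-fpm --no-headers | wc -l'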

  • IPv6 seems to be enabled - how do I configure it without interfering with IPv4?

    - by Mister IT Guru
    I noticed that some of my CentOS boxes have IPv6 enabled and seem to have addresses. I have no problem with this, but I would like to get a handle on it, and even connect to them using IPv6. This would really help if, for any reason, DHCP has a hiccup. But I'm a bit lost as to where the configuration on my CentOS box is. (I am also on Google researching this, but I like Server Fault! :) ) I am hoping to be able to log into these via the VPN, because every now and then that DHCP device has a bad morning and needs to be restarted. (I'm also looking into that issue, but someone else handles it - management separation gone mad!) It's a remote client, so it would be a lot easier for me to connect to these systems, which seem to self-configure, and use them as a pivot via ssh tunnels to get to other remote devices and continue to manage them while our main route is out. I guess my questions are:
    1. How can I configure IPv6 without interfering with IPv4?
    2. On CentOS, can I influence this auto-configuration I seem to be seeing?
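
    The addresses being seen are most likely the kernel's automatic link-local/SLAAC addresses. On CentOS the knobs live in the sysconfig files, and the IPv6 settings sit alongside the IPv4 ones without touching them; a hedged sketch with an illustrative documentation-prefix address:

        # /etc/sysconfig/network
        NETWORKING_IPV6=yes

        # /etc/sysconfig/network-scripts/ifcfg-eth0 (existing IPv4 lines stay as they are)
        IPV6INIT=yes
        IPV6ADDR=2001:db8:0:1::10/64
        IPV6_AUTOCONF=no

        # then: service network restart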

  • Deploy our own software using Puppet?

    - by Ken
    (Apologies in advance for the stupidity in this question. I'm normally a programmer, not a sysadmin, but I've taken it upon myself to automate some things and clean up some other things which are automated, but not in the prettiest way. :-) I've been looking around at various tools for automating software deployment to a bunch of servers, like cfengine, Puppet, and Chef. So far, Puppet looks the most appealing, but I've certainly not committed to anything yet. These tools all look like they can do a great job of keeping a bunch of servers up to date with prepackaged software. What I don't get is: how does one use a tool (like Puppet) to manage deployments of our own internal software? I think I'm at a loss because I've seen a thousand tutorials showing how to keep Apache ensure => latest (which is pretty cool), but nothing that quite corresponds to my use case today, which is something more like:
    1. when a human being pushes The Button, pull branch A from the version-control repository B
    2. run command C to compile it
    3. copy the binaries D to servers E1 through E10
    4. on each server, run command F to make all changes take effect
    Puppet sounds great, and I totally see the advantage of declarative, idempotent configuration over some shell scripts, but I've not seen any tutorials for "you want to update your shell scripts to Puppet (or Chef, or cfengine), so here's what you should do". Is there such a thing? Is it obvious to other people how to take the things provided in the Puppet docs and replicate the behavior I want? Am I just not getting it? What it sounds like to me, so far, is that the human being (#1) would manually package the software (#2 and #3) external to Puppet, then manually update the Puppet config, which would trigger Puppet to update the servers... maybe? (I'm a little confused here, as I'm sure you can tell.) Thanks!
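
    For what it's worth, the pattern described at the end is the common one: a build job ("The Button") compiles and publishes a versioned package to an internal repository, and Puppet's share of the job shrinks to declaring the package and what must happen when it changes. A hedged sketch, with all names illustrative:

        # steps 1-3 happen outside Puppet: build branch A, package the binaries
        # as an RPM/deb, and push it to an internal repo the servers already use
        package { 'ourapp':
          ensure => latest,    # or pin an exact version for controlled rollouts
        }

        # step 4: re-run the activation command whenever the package changes
        exec { 'activate-ourapp':
          command     => '/usr/local/bin/ourapp-activate',   # the illustrative "command F"
          refreshonly => true,
          subscribe   => Package['ourapp'],
        }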

  • Windows 7 Automatically Connecting To Unsecured Wireless Networks On Startup

    - by Xtend
    Most of the questions on this topic relate to folks connecting to somebody else's wireless network when their own was available, who could remedy the situation by going to their connections and unchecking the "connect automatically" box. See this: "Avoid automatically connecting to wireless network on windows 7" as an example. In my situation, I've noticed that Win 7 will automatically connect to any unsecured wifi network - even ones I have never connected to in the past. If I am traveling and boot Win 7, it will start and connect to what appears to be the best-signaled unsecured network, without prompting me for confirmation (note: in the above link, "Naveen" seems to have the same problem). Obviously, that is a security concern to me. Further, when I open "Network and Sharing" and "Manage wireless networks", the network is not displayed (probably because I labelled it a public network). Again, these are new wireless networks, never connected to before. I always promptly disconnect from them, but I don't want to have to be on constant guard against an automatic connection to a malicious network. This began about a month ago, as I recall; Win 7 did not behave like this in the past, I didn't monkey with wifi settings, and I don't use a 3rd-party connection manager. I did have to download some internet security certificates for army website access, but I don't think that should mess with network settings. Any ideas how I can tell Win 7 to cease automatically connecting to networks, or at least to prompt me for confirmation before connecting?
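
    Two hedged netsh checks that can help pin this down: list the saved profiles (an auto-connect profile can exist even when the GUI hides it), and, failing that, use wlan filters to make unknown networks opt-in. The SSID below is illustrative.

        rem show every saved wireless profile (inspect one with: netsh wlan show profile name="...")
        netsh wlan show profiles

        rem stricter, reversible option: block all infrastructure networks except allowed ones
        netsh wlan add filter permission=denyall networktype=infrastructure
        netsh wlan add filter permission=allow ssid="HomeNetwork" networktype=infrastructure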

  • "could not find suitable fingerprints matched to available hardware" error

    - by Alex
    I have a ThinkPad T61 with a UPEK fingerprint reader. I'm running Ubuntu 9.10 with fprint installed. Everything works fine (I am able to swipe my fingerprint to authenticate any permission dialogues or "sudo" prompts successfully) except actually logging onto my laptop when I boot up or end my session. I receive an error below the GNOME login that says "Could not locate any suitable fingerprints matched to available hardware." What is causing this? Here are the contents of the /etc/pam.d/common-auth file:
        #
        # /etc/pam.d/common-auth - authentication settings common to all services
        #
        # This file is included from other service-specific PAM config files,
        # and should contain a list of the authentication modules that define
        # the central authentication scheme for use on the system
        # (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
        # traditional Unix authentication mechanisms.
        #
        # As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
        # To take advantage of this, it is recommended that you configure any
        # local modules either before or after the default block, and use
        # pam-auth-update to manage selection of other modules. See
        # pam-auth-update(8) for details.

        # here are the per-package modules (the "Primary" block)
        auth    sufficient                      pam_fprint.so
        auth    [success=1 default=ignore]      pam_unix.so nullok_secure
        # here's the fallback if no module succeeds
        auth    requisite                       pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        auth    required                        pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        auth    optional                        pam_ecryptfs.so unwrap
        # end of pam-auth-update config
        #auth sufficient pam_fprint.so
        #auth required pam_unix.so nullok_secure

  • Photo managing software that supports network drives?

    - by musicfreak
    My dad is a photographer in his free time, and he's been using Lightroom to manage his photos. However, recently, we put all of our photos on a NAS drive to allow us to access them from any computer at any time. The problem with this is that Lightroom cannot load catalogs from network drives. We need support for network drives because we'd like to be able to browse the photos from any computer, and for any computer to be able to add photos to the collection. Right now we're just syncing the Lightroom catalog file between us, but the extra step is a pain, and doing it manually makes it error-prone. Is there any software (free or commercial) that has proper support for network drives? The only real feature I need is to be able to sort photos by date and by some sort of tags. I don't need any editing features like those found in Lightroom; my dad is comfortable using Photoshop to edit photos. Also, if there is another solution to this that I haven't thought of, feel free to share.

  • How to rename network printer on Windows 7?

    - by Adrian McCarthy
    This question is similar to "How do you rename a printer device in Windows 7 64 bit", except the answers there do not work, and I'll provide more information. This is a home network, not a domain. I have set up a Brother HL-5170DN. It is a network printer connected directly to an Ethernet hub. I can connect to it with Windows 7, but on Windows 7 it defaults to the name "binary_p1 on Brn37415f", which isn't very useful, and I cannot seem to change the name. I have it working with several Windows XP and Vista machines, and I can change the name on those machines. On Windows 7, in Printer properties: I can see the "binary_p1" name on the General tab. I can select the text, but I cannot change it; the field is not grayed out, but I cannot type anything into it. On the Ports tab, all of the controls are grayed out (disabled). The selected port is called "\\Brn_37415f\binary_p1", and it's described as "Client Side Rendering Provider"; the printer field says "binary_p1". On the Security tab, I can see that my account has "Manage this printer" permissions. If I choose Print Server Properties, I can select the port and click Configure Port, but I get a dialog that says, "An error occurred during port configuration. This option is not supported." I have found many forums with people asking the same question without getting an answer. Update: No more bounties to offer, but I'm still looking for a solution to this problem.
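
    Two hedged avenues. The port path (\\Brn_37415f\binary_p1) suggests Windows added this as a connection to the printer's built-in print share, and such connections inherit the remote name; removing the printer and re-adding it on a Standard TCP/IP Port aimed at the Brother's IP address lets you choose any name. Before going that far, the printui rename switch is worth one attempt (names copied from the question, so adjust as needed):

        rem attempt an in-place rename via printui
        rundll32 printui.dll,PrintUIEntry /Xs /n "binary_p1 on Brn37415f" PrinterName "Brother HL-5170DN"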

  • Proper web server setup

    - by DMin
    I just got myself a Slicehost basic slice to play around with so I can learn how to set up web servers. I have Ubuntu 10.04.2 installed on the server, and I was able to successfully get it up and running from scratch, following this tutorial. These were the steps I followed:
    1. Update and upgrade Ubuntu
    2. sudo apt-get install apache2 php5-mysql libapache2-mod-php5 mysql-server
    3. Back up and edit apache2.conf: set 'ServerTokens Full' to 'ServerTokens Prod' and 'ServerSignature On' to 'ServerSignature Off'
    4. Back up php.ini, then change "expose_php = On" to "expose_php = Off"
    5. Restart Apache
    6. Install the Shorewall firewall
    7. Configure Shorewall to only accept HTTP and SSH connections (in the rules file)
    8. Enable Shorewall on startup
    9. Add the website to the server:
        sudo usermod -g www-data root
        sudo chown -R www-data:www-data /var/www
        sudo chmod -R 775 /var/www
    I know this is probably just a starter's tutorial, so I was wondering if you guys can tell me what you like to do while setting up production servers. I want to make this Community Wiki but can't seem to find the option to do it. Please feel free to add any feedback on the processes and things I am doing right/wrong. Much appreciated, thanks! :)

  • How to get data out of a Maxtor Shared Storage II that fails to boot?

    - by Jonik
    I've got a Maxtor Shared Storage II (in RAID 1 mode) which has developed some hardware failure, apparently: it fails to boot properly and is unreachable via the network. When powering it on, it keeps making clunking/chirping disk noises and then sort of resets itself (with a flash of orange light in the usually-green LEDs); it then repeats this as if stuck in a loop. In fact, even the power button does nothing now - the only way I can affect the device at all is to plug in or pull out the power cord! (To be clear, I've come to regard this piece of garbage (which cost about 460 €) as my worst tech purchase ever. Even before this failure I had encountered many annoyances with the drive: 1) the software to manage it is rather crappy; 2) it is way noisier than this type of device should be; 3) when your Mac comes out of sleep, Maxtor's "EasyManage" cannot re-mount the drive automatically.) Anyway, the question at hand is: how do I get my data out of it? As a very concrete first step, is there a way to open this thing without breaking the plastic casing into pieces? It is far from obvious to me how to get beyond this stage; it opens a little from one end but not from the other. If I somehow got the disks out, I could try mounting them on one of the Macs or Linux boxes I have available (although I don't know yet if I'd need some adapters for that). (NB: for the purposes of this question, never mind any warranty or replacement issues - that's secondary to recovering the data.)
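
    If a disk does come out healthy, mounting is usually the easy part; a hedged sketch for a Linux box with a SATA/USB adapter (device and partition names are illustrative, and the unit's internal layout is an assumption to verify with the probing commands). If the partition turns out to be a Linux software-RAID member, assembling it degraded with mdadm would be the usual next step.

        # identify the newly attached disk and its partitions
        dmesg | tail
        fdisk -l /dev/sdb

        # probe the filesystem type, then mount the data partition read-only
        file -s /dev/sdb2
        mount -o ro /dev/sdb2 /mnt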
