Search Results

Search found 11188 results on 448 pages for 'django manage py'.


  • SharePoint 2010 Enterprise wiki - [New page] missing

    - by icelava
    I am trying to ramp up my knowledge of SharePoint deployment and usage (never did it before), because of a direction to use SharePoint 2010 as a repository platform (wiki format) for our customer's infrastructure documentation. On my test virtual server, a new site was set up from the Enterprise wiki template. I went into Site Actions > Manage Site Features to activate Wiki Page Home Page. The default sub-web then went from /Pages to /SitePages and looks like the default Team template. The odd thing is that Site Actions is missing the New Page option. My colleague does not understand why this is the case, as it ought to be there. The original /Pages sub-web does have the option. What conditions are in play that influence the appearance of that option? UPDATE: Another phenomenon observed is that in the Site Actions > View All Site Content view, the hyperlinks of the wiki document libraries listed in the grid (e.g. "Site Pages") lead straight to the default page. A library will not show its own table listing of pages, unlike the original Pages document library, which shows up as a listing as expected. I wonder if this hints at any problems.

    Read the article

  • How to re-add a RAID-10 failed drive on Ubuntu?

    - by thiesdiggity
    I have a problem that I can't seem to solve. We have an Ubuntu server set up with RAID-10 and two of the drives dropped out of the array. When I try to re-add them using the following command:
      mdadm --manage --re-add /dev/md2 /dev/sdc1
    I get the following error message:
      mdadm: Cannot open /dev/sdc1: Device or resource busy
    When I do a "cat /proc/mdstat" I get the following:
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [r$
      md2 : active raid10 sdb1[0] sdd1[3]
            1953519872 blocks 64K chunks 2 near-copies [4/2] [U__U]
      md1 : active raid1 sda2[0] sdc2[1]
            468853696 blocks [2/2] [UU]
      md0 : active raid1 sda1[0] sdc1[1]
            19530688 blocks [2/2] [UU]
      unused devices: <none>
    When I run "/sbin/mdadm --detail /dev/md2" I get the following:
      /dev/md2:
        Version : 00.90
        Creation Time : Mon Sep 5 23:41:13 2011
        Raid Level : raid10
        Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
        Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
        Raid Devices : 4
        Total Devices : 2
        Preferred Minor : 2
        Persistence : Superblock is persistent
        Update Time : Thu Oct 25 09:25:08 2012
        State : active, degraded
        Active Devices : 2
        Working Devices : 2
        Failed Devices : 0
        Spare Devices : 0
        Layout : near=2, far=1
        Chunk Size : 64K
        UUID : c6d87d27:aeefcb2e:d4453e2e:0b7266cb
        Events : 0.6688691

        Number  Major  Minor  RaidDevice  State
        0       8      17     0           active sync  /dev/sdb1
        1       0      0      1           removed
        2       0      0      2           removed
        3       8      49     3           active sync  /dev/sdd1
    Output of df -h is:
      Filesystem                     Size  Used  Avail  Use%  Mounted on
      /dev/md1                       441G  2.0G  416G   1%    /
      none                           32G   236K  32G    1%    /dev
      tmpfs                          32G   0     32G    0%    /dev/shm
      none                           32G   112K  32G    1%    /var/run
      none                           32G   0     32G    0%    /var/lock
      none                           32G   0     32G    0%    /lib/init/rw
      tmpfs                          64G   215M  63G    1%    /mnt/vmware
      none                           441G  2.0G  416G   1%    /var/lib/ureadahead/debugfs
      /dev/mapper/RAID10VG-RAID10LV  1.8T  139G  1.6T   8%    /mnt/RAID10
    When I do an "fdisk -l" I can see all the drives needed for the RAID-10. The RAID-10 is part of /dev/mapper; could that be the reason why the device is coming back as busy? Does anyone have any suggestions on what I can try to get the drives back into the array? Any help would be greatly appreciated. Thanks!
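
    Note that in the /proc/mdstat output above, /dev/sdc1 is already an active member of md0, which by itself would produce "Device or resource busy"; the slots missing from md2 most likely belonged to different partitions. A minimal diagnostic and re-add sketch in shell, where /dev/sde1 is a hypothetical candidate partition (not taken from the question) to be identified from fdisk -l and mdadm --examine first:
      # Check which array a partition really belongs to before re-adding it.
      mdadm --examine /dev/sdc1 | grep -i 'uuid\|raid level'
      mdadm --detail /dev/md2 | grep -i uuid      # compare the array UUIDs

      # Once the right partition is identified (hypothetical /dev/sde1 below):
      mdadm --manage /dev/md2 --remove detached   # clear any stale failed entries
      mdadm --manage /dev/md2 --re-add /dev/sde1
      # If --re-add is refused because the superblock no longer matches,
      # a plain --add triggers a full resync instead:
      # mdadm --manage /dev/md2 --add /dev/sde1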

    Read the article

  • Application for time and project management

    - by user10826
    I want to improve the way I organize my projects/tasks/schedule. What I do now is:
    - keep an Excel sheet with the names of the most important tasks/projects; I look at it at the beginning of each day and decide which ones I will focus on
    - on iCal I write down events for each day, or for a concrete time (13 to 14 hours); I set up each day the tasks I want to accomplish and allocate hours to them
    - I use Things (Cultured Code) to keep info about tasks and projects that are not very important and not time-allocated yet (GTD name = someday)
    - I use Mail on Mac and create folders, named after the different projects, for the mails I want to process
    - I save the main info for each project in FreeMind maps
    My system works well at the moment but it is pretty complicated to use. I want to make it better and I am looking for something with these requirements:
    - must be 100% offline accessible
    - should use as few programs/resources as possible; ideally just one program should be able to manage all my info
    - I can use the GTD methodology mixed with priorities, and I can allocate each task, converted to an event, on my calendar
    - I can have different daily/weekly, etc. views on a calendar to see the "big picture"
    - must run on Mac OS X Leopard
    - price does not matter, I will pay for this
    So, according to your experience, can you recommend something like this? Thanks

    Read the article

  • IP queue buffer

    - by summerbulb
    I seem to have an issue with IP queue. I have a Linux machine that I am using to run some experiments. The Linux machine is configured as a router, with two NICs, connecting two other computers and managing their network traffic. All incoming packets are captured using iptables and analyzed by a C application. The application analyzing the packets has a built-in delay, as part of the experiment. So I have one very fast computer sending packets through my Linux router and a (relatively) slow Linux router that analyses and deals with the packets, one by one. This leads to the fact that when I fire up a sender application on one of the computers connected to the Linux router, my IP queue on the router fills up (almost) instantaneously. The IP queue's max length is currently set to 1024, and if it overflows, the packets are dropped. This is expected and I'm OK with it. But (and this is where it gets interesting), every now and then I get the following error: "Failed to receive netlink message: No buffer space available". At first I thought this was due to the IP queue overflowing, but after some analysis I found that sometimes I get the error even if the IP queue buffer did not overflow, and sometimes I DON'T get the message even though the buffer DID overflow. When I run "cat /proc/net/ip_queue", I get the following table (also used to monitor the IP queue overflow):
      Peer PID          : 27389
      Copy mode         : 2
      Copy range        : 65535
      Queue length      : 0
      Queue max. length : 1024
      Queue dropped     : 1166875
      Netlink dropped   : 2916
    Looking at the last two values, "Queue dropped" seems to refer to packets that did not manage to get into the IP queue because the buffer was full; I can see this value rise as I bombard the Linux router. "Netlink dropped" (as its name implies :) ) seems to have to do with the error I'm getting. I did my best to search for material on this error, but wasn't able to find anything that seemed to point me in the required direction. Bottom line: why am I getting this error and what can I do to avoid it?
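
    One common cause of "No buffer space available" on a netlink socket is the socket receive buffer overflowing while the userspace reader is stalled, which would match the built-in delay described above. A sketch of one mitigation, assuming the defaults have not already been raised (the sizes below are arbitrary examples, not recommendations):
      # Current defaults and limits for socket receive buffers (bytes).
      sysctl net.core.rmem_default net.core.rmem_max

      # Raise them so the ip_queue netlink socket can absorb bursts while
      # the analyzer is busy inside its artificial delay.
      sysctl -w net.core.rmem_default=1048576
      sysctl -w net.core.rmem_max=4194304

      # Make the change persistent across reboots.
      cat >> /etc/sysctl.conf <<'EOF'
      net.core.rmem_default = 1048576
      net.core.rmem_max = 4194304
      EOF
      sysctl -p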

    Read the article

  • Gmail won't forward mail sent to myself.

    - by BHare
    I own a dedicated server with a domain, we'll say foobar.com. I use Google Apps to manage my email SMTP servers. Now, I don't want to check two Gmail inboxes: I have my own personal one, and then I have foobar.com's inbox from Google Apps. Naturally the easiest thing to do is just have all of foobar's emails forwarded to my personal one, so that I am only checking one inbox. This is all fine and dandy. I use MSMTP with a wrapper that uses /etc/aliases. I have it set so any mail attempting to go to root (things from cron, etc.) will go to [email protected]. So when Google Apps (foobar.com) gets an email from the address I have set up with it ([email protected]), it doesn't forward the message. This is a "feature" of Gmail/Google Apps, I suppose. How do I get around it? Workarounds? I could just set my alias to my personal email, but I wanted a place to have all foobar-related emails archived in one place (Google Apps).

    Read the article

  • Windows VPN for remote site connection drawbacks

    - by Damo
    I'm looking for some thoughts on a particular way of setting up an estate of machines. We have a requirement to install machines into unmanned, remote locations. These machines will auto-login and perform tasks controlled from a central server. In order to manage patching, AV, updates etc., I want these machines to be joined to a dedicated domain for this estate. Some of the locations will only have 3G connectivity (via other hardware); others will be located on customer premises in internal networks. The central server (of ours) and the domain controller will be on a public WAN. I see two ways of facilitating this:
    1. Install a router at each location and have a site-to-site VPN between the remote device and the data centre where the servers are located.
    2. Have the remote machine dial up and authenticate via a Windows VPN connection to the DC via RAS.
    Option one is more costly to set up and has a higher operational cost. It also offers better diagnostics if the remote PC goes down. Option two works well but is solely dependent on the VPN connection being made before any communication can be made to the remote machine. In a simple test, I got a Windows 7 machine to dial a VPN prior to authentication to the domain, then automatically log in to the machine using domain credentials. If the VPN connection drops, it redials. I can also create a timed task to auto-connect every hour in case of other issues. I'd like to know why (if at all) operating a remote network of devices located in various out-of-band locations in this way is a bad idea. Consider 300-400 remote machines, all at different sites. I'd rather have 400 VPN connections to a 2008 server than 400 routers, but I'd like to hear other opinions on this.

    Read the article

  • Why does my Intel Tolapai network chip not transmit packets?

    - by Hanno Fietz
    I'm trying to deploy an embedded system (NISE 110 by Nexcom) based on the Intel EP80579 (Tolapai) chip. Tolapai apparently integrates controllers for Ethernet etc. on a single chip (Intel homepage). The machine can't get a network connection. Diagnosis, as far as I could manage:
    Drivers
    - drivers from Intel compiled and installed without problems (version 1.0.3-144)
    - kernel version and Linux distribution (CentOS 5.2, 2.6.18) match the driver's installation instructions
    - drivers are loaded and show up in lsmod (module names are gcu and iegbe)
    - interfaces eth0 and eth1 show up in ifconfig
    ifconfig
    - I can bring up the interfaces with a fixed IP
    - pinging the interface locally works
    - ifconfig shows flag UP but not RUNNING
    Link
    - ethtool shows "Link detected: no", "Speed: unknown (65536)" and "Duplex: unknown (255)"
    - the link LED is on on the other side of the cable; there ethtool shows "Link detected: yes" and reports a speed of 1000 Mbps, which has allegedly been auto-negotiated with the problematic device
    Network traffic analysis
    - the device does not reply to ARP, ICMP echo or anything else (iptables is down)
    - when trying to send ICMP or DHCP requests, they never reach the other end
    - the activity LED is off on the device, on at the other end
    I tried the following without any effect:
    - different cables (2 straight, one crossed); I get the link LED lit up with each
    - three different devices on the other end (one PC, one netbook, one router)
    - fixed ARP table entries on both sides
    - connecting both network ports of the machine with each other; it won't ping through the cable, but will ping locally (tried straight and crossed cables for that)
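
    Given that ethtool reports no link and nonsense speed/duplex values while the peer sees a 1000 Mbps link, one low-risk test is to take auto-negotiation out of the picture on the Tolapai side. A sketch, assuming the iegbe driver honours these standard ethtool operations (not verified for this particular driver):
      # Confirm which driver is actually bound to the interface.
      ethtool -i eth0

      # Watch low-level counters for any sign of TX activity or errors.
      ethtool -S eth0 | grep -i -e tx -e err

      # Force a fixed speed/duplex instead of auto-negotiation
      # (100 Mbps full duplex as a conservative test; revert with autoneg on).
      ethtool -s eth0 autoneg off speed 100 duplex full
      ip link set eth0 down && ip link set eth0 up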

    Read the article

  • Get Safari to use different autocompletion on different URLs on same hostname

    - by Luke404
    I have a webserver publishing different services over the same SSL VirtualHost, the two most commonly used being phpMyAdmin and Cacti. These (and others) use "cookie"-style authentication, asking for a username and password in an HTML form (thus not using HTTP Authentication). Being on the same hostname, Safari didn't manage stored passwords too well: if I logged in to one app with user foo and then went to app two, it would propose user foo and its password in the login form. Changing just the username to bar used to be sufficient to let Safari autocomplete the correct password in its form field. Annoying, but I could live with it - usernames are short and easy to remember compared to the passwords we use. After the update to Safari 5 this no longer seems to be true: if I store in Safari (actually the user keychain on OS X) credentials for https://www.foobarbaz.com/app1 AND credentials for https://www.foobarbaz.com/app2, there seems to be no way for it to autocomplete both based on the URL. Even editing the keychain to add the path (it will store only the hostname by default) does not help. Is there anything I can do to let it work the way I want while still keeping everything on one hostname? Modifying anything server-side is of course possible, but I can't switch the apps to HTTP Auth (and not every one of them would support it anyway) to use different "realms".

    Read the article

  • Deleting "undeletable" files in Vista

    - by Nik Reiman
    I recently upgraded my workstation from XP SP3 to Vista Business, and during the upgrade Windows moved my old C:\Windows directory to C:\Windows.old. I got all of the stuff I needed out of that folder, but there are six "undeletable" files there so I cannot remove it. They are:
      Windows.old\Program1\Adobe\Reader 9.0\Resource\CMap\Identity-H
      Windows.old\Program1\Adobe\Reader 9.0\Resource\CMap\Identity-V
      Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelper.dll
      Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll
      Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroPDF.dll
      Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\pdfshell.dll
    Whenever I try to delete the files, either through Explorer or a command line, I get a permission-denied error. I have tried to grant myself full permission on the files, but again, permission denied. I don't even have Acrobat installed on my Vista machine, and I uninstalled Adobe Updater. However, I still can't manage to get rid of these files. How do I nuke them for good? Edit: I was able to take ownership of the files, but I still can't delete them. Renaming them did not work, as I was denied permission to do that as well. I'll try booting up in safe mode and getting rid of them there. Edit II: Booting up into safe mode did not allow me to delete the files. Bummer.

    Read the article

  • Solution to easily share large files with non-tech-savvy users?

    - by Tim
    Hey all, We've got a server set up at work which we'd like to use to exchange large files with known clients easily. We're looking into software to facilitate this, but somehow typing "large file hosting" into Google gives questionable results.. ;) We've come up with the following requirements, and I hope some of you can point us in the direction of a solution that offers this functionality, or is malleable to our needs:
    - Synchronization / revision management is of no concern; it's mostly single large (up to 1+ GB) file uploads & downloads we'll need.
    - We'd like to make the downloads expire & be removed after a certain number of days / downloads, to limit the amount of cleanup we'd have to do.
    - The data files exchanged sometimes hold confidential information, so the URLs generated should be random and not publicly visible.
    - Our users are of the less technically savvy variety, so a simple web form would be best over a desktop client (because we also have to support a mix of operating systems). As for use of the system, we'd either like to send out generated random URLs for them to upload their files, or have an easy way to manage & expire users.
    - Works on a Linux (Ubuntu) server (so nothing .NET-related please).
    Does anyone know of software that fits the above criteria? We've already seen a few instances of this within the scientific community, but nothing we could use directly.. Best regards, Tim

    Read the article

  • How to block spam email using Microsoft Outlook 2011 (Mac)?

    - by tim8691
    I'm using Microsoft Outlook 2011 for Mac and I'm getting so much spam that I'm not sure how to control it. In the past, I always applied "Block Sender" and "Mark as Junk" to any spam email messages I received. This doesn't seem to be enough nowadays. Then I started using Tools > Rules to create rules based on subject, but the same spammer keeps changing subject lines, so this isn't working. I've been tracking the IP addresses; they also seem to change with each email. Is there any key information I can use in the email to apply a rule that successfully places these spam emails in the junk folder? I'm using a "Low" level of junk email protection. The next higher level, "High", says it may eliminate valid emails, so I prefer not to use this option. There are maybe one or two spammers sending me emails, but the volume is very high now. I'm getting a variation of the following Facebook email spam:
      Hi, Here's some activity you have missed. No matter how far away you are from friends and family, we can help you stay connected. Other people have asked to be your friend. Accept this invitation to see your previous friend requests
    Some variations on the subject line they've used include: Account Info Change Account Sender Mail Pending ticket notification Pending ticket status Support Center Support med center Pending Notification Reminder: Pending Notification
    How do people address this? Can it be done within Outlook, or is it better to get third-party commercial software to plug in or otherwise manage it? If so, why would the third party be better than Outlook's internal tools (e.g. what does it look for in the incoming email that Outlook doesn't look at)?

    Read the article

  • Load balancing with puppet

    - by Gonçalo Queirós
    Hi there. I'm trying to set up a load-balancing system. My load balancer (nginx) has a conf file where I should list all the IPs of the upstream servers. I could put the IPs in the conf manually, but that way I would need to change the conf file every time I add or remove an upstream server. For now I've come up with two different ideas, but I don't much like either:
    1. Have every upstream machine use exported resources to create a file with its IP. Then the load balancer server will have an "include conf_directory/*" and load all the files created by the upstream servers. Since the load balancer is using nginx this can be done, but if I later want to configure something that doesn't support "include" in its conf files, this solution will not work.
    2. If the config doesn't support the "include" command, then we could again have every upstream server use exported resources to create a file with its IP, and later have the load balancer execute a command that picks up every file and generates the config.
    Both versions adopt the same technique; the difference is that version 2 is used when the server (that needs to have a conf generated) doesn't recognize a command like "include" inside its own conf. Now, my question is: is there any way to do this in a different form? I suspect that there is, since Puppet is made to manage multiple servers; it seems a bit strange not to have an easy way to configure load balancers.
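
    For variant 2 above, the "command that picks up every file and generates the config" can be a very small script run from an exec resource or cron on the load balancer. A rough sketch, assuming each exported fragment is a one-line file containing just an IP and dropped into /etc/nginx/upstream.d/ (the paths and upstream name are hypothetical):
      #!/bin/sh
      # Rebuild the nginx upstream block from per-node fragment files and
      # reload nginx only if the generated file actually changed.
      FRAG_DIR=/etc/nginx/upstream.d           # one file per upstream node, containing its IP
      OUT=/etc/nginx/conf.d/upstream_app.conf  # generated file included by nginx.conf

      TMP=$(mktemp)
      {
        echo "upstream app_backend {"
        for f in "$FRAG_DIR"/*; do
          [ -f "$f" ] || continue
          echo "    server $(cat "$f"):80;"
        done
        echo "}"
      } > "$TMP"

      if ! cmp -s "$TMP" "$OUT"; then
        mv "$TMP" "$OUT"
        nginx -t && nginx -s reload
      else
        rm -f "$TMP"
      fi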

    Read the article

  • TC hashing filters - single rule deletion

    - by exa
    For traffic shaping I'm currently using a setup that looks exactly like the setup from LARTC, on this page: http://lartc.org/howto/lartc.adv-filter.hashing.html I have a simple problem with it: every time I want to modify something in the hash table (like assigning an IP to a different flowid), I need to delete the whole filter table and add it again, filter by filter. (I actually don't do it by hand, I have a nice program that does it for me... but still...) There is a problem: I have roughly 10k filters allocated this way, and deleting and refilling the whole filter table can get pretty lengthy, which is not exactly good for traffic shaping. My program could easily manage to delete only the rules that need to be deleted (thus reducing the whole problem to several commands and milliseconds), but I simply don't know the command that deletes just one hashing rule. My tc filter show:
      filter parent 1: protocol ip pref 1 u32
      filter parent 1: protocol ip pref 1 u32 fh 2: ht divisor 256
      filter parent 1: protocol ip pref 1 u32 fh 2:a:800 order 2048 key ht 2 bkt a flowid 1:101
        match 0a0a0a0a/ffffffff at 16
      filter parent 1: protocol ip pref 1 u32 fh 2:c:800 order 2048 key ht 2 bkt c flowid 1:102
        match 0a0a0a0c/ffffffff at 16
      filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1
      filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 link 2:
        match 00000000/00000000 at 16
        hash mask 000000ff at 16
    The wish: a 'tc filter del ...' command that removes only one specific filter (for example the 0a0a0a0a IP match, i.e. IP address 10.10.10.10). Removal of some small subgroup would also be good - for example I could still recreate a bucket (bkt a) pretty fast. My attempts: I tried to number all the filters using prio, but with no help -- they just create something unusable (but deletable) below, and the bucketed filters remain there after that gets deleted. Any ideas? edit - I'm adding a simplified tl;dr description of the problem: I created a hash filter on some interface just like in http://lartc.org/howto/lartc.adv-filter.hashing.html and I want to find a command that deletes one rule (e.g. 1.2.1.123) from the table, leaving the rest untouched and working.
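
    For what it's worth, u32 filters can usually be addressed individually by the handle that "tc filter show" prints (the ht:bucket:item triple), so deleting just the 10.10.10.10 entry above should look something like the sketch below; the interface name eth0 is assumed and the exact syntax may vary between iproute2 versions:
      # Delete only the hash-table entry whose handle is shown as fh 2:a:800.
      tc filter del dev eth0 parent 1: protocol ip prio 1 handle 2:a:800 u32

      # Re-adding a single entry later works the same way as during setup;
      # the dump above matches at offset 16, i.e. the destination address.
      tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
          ht 2:a: match ip dst 10.10.10.10/32 flowid 1:101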

    Read the article

  • [tcpdump] Proxy delegate refusing connection?

    - by simtris
    Hi guys, I'm a little disappointed! My aim was to build a VERY simple SMTP proxy under Debian to handle mail arriving on one port (51234) and forward it to the standard port 25. I compiled and installed "delegate", which can handle that easily. It's working very well like this:
      delegated SERVER="smtp://anotherSmtpServer:25" -P51234
    The strange thing is, it works on my virtual test machine and on the dedicated server locally, but I can't manage to use it over the internet. I test it like this:
      telnet [mySrv] 51234
    Of course, no firewall, no hosts.deny, no inetd/xinetd, and the delegated service is listening on the right port... Two clues: the port answers over the internet with nmap as "51234/tcp open tcpwrapped", and have a look at the following tcpdump:
      22:50:54.864398 IP [myIp].1699 > [mySrv].51234: S 2486749330:2486749330(0) win 65535
      22:50:54.864449 IP [mySrv].51234 > [myIp].1699: S 2486963525:2486963525(0) ack 2486749331 win 5840
      22:50:54.948169 IP [myIp].1699 > [mySrv].51234: . ack 1 win 64240
      22:50:54.965134 IP [mySrv].43554 > [myIp].auth: S 2485396968:2485396968(0) win 5840
      22:50:55.243128 IP [myIp] > [mySrv]: ICMP [myIp] tcp port auth unreachable, length 68
      22:50:55.249646 IP [mySrv].51234 > [myIp].1699: F 1:1(0) ack 1 win 46
      22:50:55.309853 IP [myIp].1699 > [mySrv].51234: . ack 2 win 64240
      22:50:55.310126 IP [myIp].1699 > [mySrv].51234: F 1:1(0) ack 2 win 64240
      22:50:55.310137 IP [mySrv].51234 > [myIp].1699: . ack 2 win 46
    The "auth" part seems suspect to me but doesn't ring a bell. I could certainly do with some help. Thanks a lot!

    Read the article

  • Can a website company that builds 4-5 websites a year afford dedicated hosting?

    - by Petras
    We manage about 30 websites that use shared ASP.NET SQL Server web hosting. These are typical small/medium business websites and they perform fine in this environment. Recently I was looking at VPS hosting in this thread: http://serverfault.com/questions/128329/how-do-you-host-multiple-public-facing-websites-on-a-vps After contacting a provider mentioned in one of the replies, I was told that VPS hosting is not recommended for 30 sites, even if they are small; the resource requirements might be too great even for a VPS. So I should turn to dedicated hosting. The lowest-cost dedicated hosting is $219 per month (see http://www.serverintellect.com/dedicated/pentiumdservers.aspx). But this is only for a single processor, which seems too light for a machine running both IIS and SQL Server. In our office all the developers work on quad cores, so I assume I'd really need the quad processor option. However, this starts at $599 monthly. Now, I won't be able to transfer all of our 30 sites to this machine; I'd only be able to transfer say 5 or 6. However, moving forward, I'd be able to host all future sites on this machine, which amounts to 4-5 per year. Let's look at the economics. Shared hosting typically costs $16.95 monthly (see http://www.crystaltech.com/dotnet.aspx). So here's the dilemma:
      First month costs: $599
      First month revenue: 6 x $16.95 = $101.70
      Loss in first month: $497.30
      First year costs: $599 x 12 = $7,188
      First year revenue: 6 x $16.95 x 12 + 5 x $16.95 x 6 (averaged) = $1,728.90
      Loss in first year: $5,459.10
    Clearly it is going to take years for this server to pay for itself. It just doesn't seem economical! Am I missing something here, or is dedicated not the way to go with the number of sites we build?
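
    As a quick sanity check on those numbers, the break-even point is just the monthly server cost divided by the revenue per site; a throwaway calculation using the figures quoted above:
      # Sites needed on the $599/month box just to cover its own cost,
      # at $16.95/month revenue per site.
      echo "scale=1; 599 / 16.95" | bc    # about 35.3 sites

      # First-year loss in the scenario above: 6 sites migrated immediately,
      # roughly 5 new sites added over the year (averaged at 6 months each).
      echo "scale=2; 12 * 599 - (6*16.95*12 + 5*16.95*6)" | bc    # 5459.10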

    Read the article

  • Disabling LDAP Signing on Windows PDC in Local Policy

    - by Golmaal
    I just tripped over my own feet, it seems. Playing around on a Windows 2008 R2 server (set up as a domain controller), I was intrigued by a certain warning event (event ID 2886) which says: "To enhance the security of directory servers, you can configure both Active Directory Domain Services (AD DS) and Active Directory Lightweight Directory Services (AD LDS) to require signed Lightweight Directory Access Protocol (LDAP) binds." So I thoughtlessly did some Googling and set the relevant policies which enforce LDAP signing. Now, I don't remember, but I may have done that using Local Policy. I have since set up a pfSense box which must authenticate AD users via LDAP. While the firewall can communicate over a secure channel, it is difficult to manage the same for other packages such as Squid and SquidGuard. So now I have to disable, i.e. undo, those policy changes. The problem is that they are greyed out! The policies in question are LDAP server signing and LDAP client signing. I don't remember what I did, but when I access these policies from the Local Policy editor on the server, they are set to "Require Signing" and are greyed out. The same policies can still be set via the Default Domain Controller option in the Group Policy editor. So how can I reset these greyed-out policies? Thanks

    Read the article

  • "could not find suitable fingerprints matched to available hardware" error

    - by Alex
    I have a ThinkPad T61 with a UPEK fingerprint reader. I'm running Ubuntu 9.10, with fprint installed. Everything works fine (I am able to swipe my fingerprint to authenticate any permission dialogues or "sudo" prompts successfully) except for actually logging onto my laptop when I boot up or end my session. I receive an error below the GNOME login that says "Could not locate any suitable fingerprints matched to available hardware." What is causing this? Here are the contents of the /etc/pam.d/common-auth file:
      #
      # /etc/pam.d/common-auth - authentication settings common to all services
      #
      # This file is included from other service-specific PAM config files,
      # and should contain a list of the authentication modules that define
      # the central authentication scheme for use on the system
      # (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
      # traditional Unix authentication mechanisms.
      #
      # As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
      # To take advantage of this, it is recommended that you configure any
      # local modules either before or after the default block, and use
      # pam-auth-update to manage selection of other modules. See
      # pam-auth-update(8) for details.

      # here are the per-package modules (the "Primary" block)
      auth    sufficient                      pam_fprint.so
      auth    [success=1 default=ignore]      pam_unix.so nullok_secure
      # here's the fallback if no module succeeds
      auth    requisite                       pam_deny.so
      # prime the stack with a positive return value if there isn't one already;
      # this avoids us returning an error just because nothing sets a success code
      # since the modules above will each just jump around
      auth    required                        pam_permit.so
      # and here are more per-package modules (the "Additional" block)
      auth    optional                        pam_ecryptfs.so unwrap
      # end of pam-auth-update config
      #auth sufficient pam_fprint.so
      #auth required pam_unix.so nullok_secure

    Read the article

  • HTC Diamond Touch sync problem

    - by Anders
    I have an HTC Diamond Touch with all my contacts etc. on it. However, I did not use it for six months while I was abroad. When I started the phone again, I realized that the touch screen had stopped working. I have tried restarting, soft-resetting, shutting it off etc., but the touch screen just won't follow commands. However, I can manage the phone with the buttons, so it's not frozen. Hence I can get into the phone and view contacts, but not use it to call etc. The problem is: how do I get my 300 contacts out of the thing!? When I plug in the phone, it lets me choose between "Sync with Outlook" and "Use as storage device". It automatically selects "Use as storage device", and I cannot choose to sync using the buttons. I cannot change this option afterwards either. In short, I have a phone with all of my contact data and am completely unable to get it out. Any tips/help/suggestions? If possible, preferably ones that do not include sending the phone to a hardware workshop for three weeks to get it fixed :)

    Read the article

  • IPv6 seems to be enabled - How do I configure it without interfering with IPv4?

    - by Mister IT Guru
    I noticed that some of my CentOS boxes have IPv6 enabled, and they seem to have addresses. I have no problem with this, but I would like to get a handle on it, and even connect to them using IPv6. This would really help if DHCP has a hiccup for any reason. But I'm a bit lost as to where the configuration on my CentOS box is. (I am also on Google researching this, but I like Server Fault! :) ) I am hoping that I will be able to log into these boxes via the VPN, because every now and then the DHCP device has a bad morning and needs to be restarted. (I'm also looking into that issue, but someone else handles it - management separation gone mad!) It's a remote client, so it would be a lot easier for me to connect to these systems, which seem to self-configure, and use that as a pivot via SSH tunnels to reach other remote devices and continue to manage them while our main route is fixed. I guess my questions are: how can I configure IPv6 without interfering with IPv4, and, on CentOS, can I influence the auto-configuration I seem to be seeing?
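
    The addresses that appear on their own are most likely link-local (fe80::) plus stateless autoconfiguration from router advertisements; pinning a predictable IPv6 address alongside the existing IPv4/DHCP setup on CentOS 5 is usually just a matter of the sysconfig files. A sketch, with the interface name and the 2001:db8 documentation prefix as placeholders:
      # Enable IPv6 globally (CentOS 5 style).
      grep -q '^NETWORKING_IPV6' /etc/sysconfig/network || \
          echo 'NETWORKING_IPV6=yes' >> /etc/sysconfig/network

      # Give eth0 a static IPv6 address without touching its IPv4/DHCP settings.
      cat >> /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
      IPV6INIT=yes
      IPV6ADDR=2001:db8:1234::10/64
      IPV6_AUTOCONF=yes
      EOF

      service network restart

      # Check the result and test reachability over IPv6.
      ip -6 addr show dev eth0
      ping6 -c 3 2001:db8:1234::1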

    Read the article

  • Sudden problems with iptables not running

    - by Fourjays
    I've got a sudden issue with iptables not running on my CentOS 5.8/DirectAdmin XenVPS. All I have done today is install PHP APC and run an update (although I admittedly didn't pay much attention today - I usually do). Iptables has been running fairly smoothly since I installed it over 6 months ago. Basically when I try to run iptables -L it tells me:
      iptables v1.3.5: can't initialize iptables table `filter': iptables who? (do you need to insmod?)
      Perhaps iptables or your kernel needs to be upgraded.
    I've looked around and tried a few things and it appears that maybe my kernel doesn't have the modules loaded? I've been reading this and tried the two commands they suggest to no avail. Except there does appear to be a mismatch on one bit of output:
      -bash-3.2# cd /lib/modules
      -bash-3.2# ls
      2.6.18-194.32.1.el5xen   2.6.18-238.5.1.el5xen   2.6.18-274.7.1.el5xen   2.6.39.1-cs-domU
      2.6.18-238.12.1.el5xen   2.6.18-238.9.1.el5xen   2.6.37.2-cs-domU        3.0.1-cs-domU
      -bash-3.2# depmod -a
      WARNING: Couldn't open directory /lib/modules/2.6.18-274.18.1.el5xen: No such file or directory
      FATAL: Could not open /lib/modules/2.6.18-274.18.1.el5xen/modules.dep.temp for writing: No such file or directory
    Does this mean the versions are out of sync? If so, what are my next steps to getting this fixed? As you can probably tell I am still learning how to manage my server, so please be very clear in all advice. Many thanks :)
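
    The depmod error suggests the running kernel (2.6.18-274.18.1.el5xen) has no matching directory under /lib/modules, which would leave iptables unable to load ip_tables. A few checks along those lines; the package name is assumed, and on a Xen VPS the kernel may actually be supplied by the host, in which case the provider has to fix the mismatch:
      # Which kernel is actually running, and do we have modules for it?
      uname -r
      ls /lib/modules/$(uname -r) 2>/dev/null || echo "no module tree for this kernel"

      # Is a matching kernel package installed at all?
      rpm -q kernel-xen | sort

      # If the package exists but its module tree is missing or incomplete,
      # reinstalling it should restore /lib/modules/<version>, e.g.
      #   yum reinstall kernel-xen-$(uname -r | sed 's/xen$//')   # hypothetical package naming
      # Once the tree exists, try loading the filter table manually:
      modprobe ip_tables && modprobe iptable_filter && iptables -L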

    Read the article

  • Windows 7 Automatically Connecting To Unsecured Wireless Networks On Startup

    - by Xtend
    Most of the questions on this topic relate to folks connecting to somebody else's wireless network when their own was available, who could remedy the situation by going to their connections and unchecking the "connect automatically" box. See this: "Avoid automatically connecting to wireless network on windows 7" as an example. In my situation, I've noticed that Win 7 will automatically connect to any unsecured wifi network - even one I have never connected to in the past. If I am traveling and boot Win 7, it will start and connect to whatever unsecured network appears to have the best signal, without prompting me for confirmation (note: in the above link, "Naveen" seems to have the same problem). Obviously, that is a security concern to me. Further, when I open "Network and Sharing" and "Manage wireless networks", the network is not displayed (probably because I labelled it a public network). Again, these are new wireless networks, never connected to before. I always promptly disconnect from them, but I don't want to have to be on constant guard against an automatic connection to a malicious network. This began about a month ago; as I recall, Win 7 did not behave like this in the past, I didn't monkey with wifi settings, and I don't use a third-party connection manager. I did have to download some internet security certificates for army website access, but I don't think that should mess with network settings. Any ideas how I can tell Win 7 to stop automatically connecting to networks or, at least, to prompt me for confirmation before connecting?

    Read the article

  • Photo managing software that supports network drives?

    - by musicfreak
    My dad is a photographer in his free time, and he's been using Lightroom to manage his photos. However, recently, we put all of our photos on a NAS drive to allow us to access them from any computer at any time. The problem with this is that Lightroom cannot load catalogs from network drives. We need support for network drives because we'd like to be able to browse the photos from any computer, and for any computer to be able to add photos to the collection. Right now we're just syncing the Lightroom catalog file between us, but the extra step is a pain, and doing it manually makes it error-prone. Is there any software (free or commercial) that has proper support for network drives? The only real feature I need is to be able to sort photos by date and by some sort of tags. I don't need any editing features like those found in Lightroom; my dad is comfortable using Photoshop to edit photos. Also, if there is another solution to this that I haven't thought of, feel free to share.

    Read the article

  • How to get data out of a Maxtor Shared Storage II that fails to boot?

    - by Jonik
    I've got a Maxtor Shared Storage II (RAID1 mode) which has developed some hardware failure, apparently: it fails to boot properly and is unreachable via network. When powering it on, it keeps making clunking/chirping disk noise and then sort of resets itself (with a flash of orange light in the usually-green LEDs); it then repeats this as if stuck in a loop. In fact, even the power button does nothing now - the only way I can affect the device at all is to plug in or pull out the power cord! (To be clear, I've come to regard this piece of garbage (which cost about 460 €) as my worst tech purchase ever. Even before this failure I had encountered many annoyances with the drive: 1) the software to manage it is rather crappy; 2) it is way noisier than this type of device should be; 3) when your Mac comes out of sleep, Maxtor's "EasyManage" cannot re-mount the drive automatically.) Anyway, the question at hand is how to get my data out of it? As a very concrete first step, is there a way to open this thing without breaking the plastic casing into pieces? It is far from obvious to me how to get beyond this stage; it opens a little from one end but not from the other. If I somehow got the disks out, I could try mounting them in one of the Macs or Linux boxes I have available (although I don't know yet if I'd need some adapters for that). (NB: for the purposes of this question, never mind any warranty or replacement issues - that's secondary to recovering the data.)

    Read the article

  • Deploy our own software using Puppet?

    - by Ken
    (Apologies in advance for the stupidity in this question. I'm normally a programmer, not a sysadmin, but I've taken it upon myself to automate some things, and clean up some other things which are automated but not in the prettiest way. :-) I've been looking around at various tools for automation of software deployment to a bunch of servers, like cfengine, Puppet, and Chef. So far, Puppet looks the most appealing, but I've certainly not committed to anything yet. These tools all look like they can do a great job of keeping a bunch of servers up-to-date with prepackaged software. What I don't get is: how does one use a tool (like Puppet) to manage deployments of our own internal software? I think I'm at a loss because I've seen a thousand tutorials showing how to keep Apache ensure => latest (which is pretty cool), but nothing that quite corresponds to my use-case today, which is something more like:
    1. when a human being pushes The Button,
    2. pull branch A from the version-control repository B
    3. run command C to compile it
    4. copy the binaries D to servers E1 through E10
    5. on each server, run command F to make all changes take effect
    Puppet sounds great, and I totally see the advantage of declarative, idempotent configuration over some shell scripts, but I've not seen any tutorials for "you want to update your shell scripts to Puppet (or Chef, or cfengine) so here's what you should do". Is there such a thing? Is it obvious to other people how to take the things provided in the Puppet docs and replicate the behavior I want? Am I just not getting it? What it's sounding like to me, so far, is that the human being (#1) would manually package the software (#2 and #3) external to Puppet, manually update the Puppet config, which would trigger Puppet to update the servers ... maybe? (I'm a little confused here, as I'm sure you can tell.) Thanks!
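
    For comparison, the manual workflow in steps 1-5 is the kind of thing often kept as a plain script that Puppet (or whatever push tool) then invokes; a rough sketch of that script follows, where the repository URL, branch, build command, host list and restart command are all stand-ins rather than anything taken from the question:
      #!/bin/sh
      set -e
      # Hypothetical deploy script mirroring steps 1-5 above.
      REPO=git@example.com:ourapp.git
      BRANCH=release
      BUILD_CMD="make release"                        # step 3: "command C"
      HOSTS="app01 app02 app03"                       # stand-ins for servers E1..E10
      RESTART_CMD="sudo /etc/init.d/ourapp restart"   # step 5: "command F"

      workdir=$(mktemp -d)
      git clone --branch "$BRANCH" --depth 1 "$REPO" "$workdir"   # steps 1-2
      ( cd "$workdir" && $BUILD_CMD )                             # step 3

      for h in $HOSTS; do                                         # steps 4-5
        rsync -az "$workdir/build/" "$h:/opt/ourapp/"
        ssh "$h" "$RESTART_CMD"
      done

      rm -rf "$workdir"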

    Read the article
