Search Results

Search found 18729 results on 750 pages for 'edit'.

  • Linux: Can't overwrite files on samba store

    - by jonescb
    I'm using CentOS 5.5 with smbclient 3.0.33-3.28-el5 (the latest version in the repo), and I can't overwrite files in my Samba store. I am not the admin of the Samba server, so there isn't anything I can do server side, but I do have write permission to the share. I know the server runs Windows; whether it's XP or Server 2003, I don't know. I can delete a file and then copy the new version over, but I can't overwrite it in place. Using cp I get this error:

        [jonescb@localhost ~]$ cp foo.txt /mnt/si_storage/foo.txt
        cp: cannot create regular file `/mnt/si_storage/foo.txt': No such file or directory

    And if I edit a file on the server using vim, I can save it once, but if I save it again I get this:

        "/mnt/si_storage/foo.txt" E212: Can't open file for writing

    This is my /etc/fstab entry for the Samba server:

        //192.168.1.2/SI_STORAGE /mnt/si_storage cifs username=myuser,password=mypass 0 0

    Edit: I can overwrite files just fine from my XP machine. The CentOS box is the only one having problems.

  • Redirect from folder containing website

    - by Sam
    I have a website reached from this URL: http://www.mysite.com/cms/index.php, being served from this directory: public_html/cms/index.php. In public_html I have this .htaccess rule:

        RewriteRule (.*) cms/$1 [L]

    which lets me get to the site like this: http://www.mysite.com/index.php. But now if the 'old' address is referenced, I'd like to redirect to the rewritten address with a permanent redirect code. For example, http://www.mysite.com/cms/?q=node/1 is redirected to http://www.mysite.com/?q=node/1. How can I make this happen?

    EDIT: Also, in the .htaccess file supplied with Drupal (the CMS), this is written. I've tried enabling it, but it doesn't seem to have any effect.

        # Modify the RewriteBase if you are using Drupal in a subdirectory or in a
        # VirtualDocumentRoot and the rewrite rules are not working properly.
        # For example if your site is at http://example.com/drupal uncomment and
        # modify the following line:
        # RewriteBase /drupal

    EDIT: Including more of my .htaccess file - seems relevant.

        # Block access to "hidden" directories whose names begin with a period.
        RewriteRule "(^|/)\." - [F]

        #Strip cms folder from url
        RewriteRule (.*) cms/$1

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !=/favicon.ico
        RewriteRule ^ index.php [L]

        # Rules to correctly serve gzip compressed CSS and JS files.
        # Requires both mod_rewrite and mod_headers to be enabled.
        <IfModule mod_headers.c>
          # Serve gzip compressed CSS files if they exist and the client accepts gzip.
          RewriteCond %{HTTP:Accept-encoding} gzip
          RewriteCond %{REQUEST_FILENAME}\.gz -s
          RewriteRule ^(.*)\.css $1\.css\.gz [QSA]

          # Serve gzip compressed JS files if they exist and the client accepts gzip.
          RewriteCond %{HTTP:Accept-encoding} gzip
          RewriteCond %{REQUEST_FILENAME}\.gz -s
          RewriteRule ^(.*)\.js $1\.js\.gz [QSA]

          # Serve correct content types, and prevent mod_deflate double gzip.
          RewriteRule \.css\.gz$ - [T=text/css,E=no-gzip:1]
          RewriteRule \.js\.gz$ - [T=text/javascript,E=no-gzip:1]

          <FilesMatch "(\.js\.gz|\.css\.gz)$">
            # Serve correct encoding type.
            Header append Content-Encoding gzip
            # Force proxies to cache gzipped & non-gzipped css/js files separately.
            Header append Vary Accept-Encoding
          </FilesMatch>
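
    For reference only, and not something from the original question: a permanent redirect of that shape is usually handled by an extra mod_rewrite rule placed before the internal cms/ rewrite shown above. A minimal, untested sketch that keys off the request line so it doesn't loop with the internal rewrite:

        # Sketch: externally redirect old /cms/... requests to the stripped path (query string is kept automatically)
        RewriteCond %{THE_REQUEST} \s/cms/
        RewriteRule ^cms/(.*)$ /$1 [R=301,L]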

  • Possible disk IO issue

    - by Tim Meers
    I've been trying to figure out what the IOPS on my DB server array really are and whether they're just too much. The array is four 72.6 GB 15k RPM drives in RAID 5. To calculate IOPS for RAID 5 the following formula is used: (reads + (4 * writes)) / number of disks = total IOPS. The formula is from MSDN. I also want to calculate the average queue length; I'm not sure where they get that formula from, but I think it reads on that page as average queue length / number of disks = actual queue. To populate the formulas I used perfmon to gather the needed information. Under normal production load I came up with this:

        (873.982 + (4 * 28.999)) / 4 = 247.495 IOPS
        14.454 / 4 = 3.614 average disk queue length per disk

    So, to the question: am I wrong in thinking this array has very high disk IO?

    Edit: I got the chance to review it again this morning under normal/high load, this time with even bigger numbers and IOPS in excess of 600 for about 5 minutes before it died down again. I also took a look at Avg. Disk sec/Transfer, % Disk Time and % Idle Time. These numbers were taken when reads/writes per second were only 332.997/17.999 respectively:

        % Disk Time:            219.436
        % Idle Time:            0.300
        Avg. Disk Queue Length: 2.194
        Avg. Disk sec/Transfer: 0.006
        Pages/sec:              2927.802
        % Processor Time:       21.877

    Edit (again): Looks like I have the issue solved, thanks for the help. For a pretty slick parser of perfmon data I found this: http://pal.codeplex.com/ - it works well for breaking the data down into something usable.
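
    Purely as an illustration of the arithmetic above (not part of the original question), the same estimate can be scripted and rerun against fresh perfmon samples; the numbers are the ones quoted under normal production load:

        #!/bin/sh
        # RAID 5 back-end load estimate: (reads + 4 * writes) / number of disks
        reads=873.982; writes=28.999; disks=4; queue=14.454
        echo "scale=3; ($reads + 4 * $writes) / $disks" | bc   # roughly 247.5, the figure quoted above
        echo "scale=3; $queue / $disks" | bc                   # roughly 3.6 queued I/Os per disk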

  • Vim: How to Install Airline?

    - by MunkyCheez
    I'm just getting started with Vim. It's a fun experience, but I've found it to be kind of overwhelming. I'm trying to get the vim-airline plugin installed, but I'm having a lot of trouble. The Installation section on the GitHub page simply states: "copy all of the files into your ~/.vim directory". Presumably this means download the .zip, extract it, and copy all of those files into ~/.vim/. I did this, but Vim just starts up as normal, and running :help airline just gives:

        Sorry, no help for airline

    I assume this means it isn't getting installed. Also, the status bar remains the same. I'm new to Vim and would really like to get this working. Thanks in advance!

    EDIT: I also tried putting the files into /usr/share/vim/vim73/. No dice.

    EDIT 2: I ran :helptags ~/.vim/doc and now the help page displays when I type :help airline, but I'm still not getting the plugin itself (the status bar). Vim looks the same, but it can now display the help page.
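
    A guess from outside the question, offered only as a sketch: the airline status bar is only drawn when Vim is told to always show a statusline, so a ~/.vimrc along these lines is often part of getting it visible (this assumes the plugin files really are under ~/.vim):

        " minimal ~/.vimrc sketch
        " always show the statusline, even with a single window
        set laststatus=2
        " many terminal setups need 256 colours for airline's highlighting
        set t_Co=256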

  • umask seems to vary by user

    - by paullb
    I've got a development Ubuntu system on which I have several users: myself (with full sudo) and about 5 other users. (I've set the system up so that everything in this respect is still at its default setting.) I'm trying to arrange things so that multiple people can collaborate in a single directory by using grouping, and I want the default permissions to be 664. However, when some users edit files the permissions come out as 644.

    After a lot of investigating, most users have a umask (checked at the prompt) of 0002, and when they create files they are 664 (as expected), but there are two of us (myself and one other) who have a 0022 umask, so the files that come out are 644 and nobody else can write to them. I've looked everywhere but can't figure out why a couple of users wind up with a different umask (e.g. there is nothing in .bash_profile or anything like that). Any ideas for the source of the discrepancy?

    /etc/bashrc:

        if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
            umask 002
        else
            umask 022
        fi

    /etc/profile:

        if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
            umask 002
        else
            umask 022
        fi

    EDIT: My (bad) ~/.bashrc:

        # .bashrc

        # Source global definitions
        if [ -f /etc/bashrc ]; then
            . /etc/bashrc
        fi

        # User specific aliases and functions
        export LANG=en_US.utf8

    Other user's (good) .bashrc:

        # .bashrc

        # Source global definitions
        if [ -f /etc/bashrc ]; then
            . /etc/bashrc
        fi

        # User specific aliases and functions
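
    Purely as a diagnostic illustration (not from the original post): the branches above pick the 022 umask whenever the UID is 199 or below, or whenever the primary group name differs from the user name, so comparing those values for a "good" and a "bad" account shows which test is failing:

        # run as each user; the 002 branch needs UID > 199 and matching names
        id -u     # numeric UID
        id -gn    # primary group name...
        id -un    # ...must equal the user name for umask 002
        umask     # what the shell actually ended up with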

  • Why does GIMP start up so slow on my machine?

    - by muntoo
    EDIT 2: One way would be to disable script-fu. Does anyone know how to disable it at startup? Is it even possible? Would GIMP still work, and what wouldn't work? Could I start it later, after loading?

    Is there any way to speed up GIMP's startup time on Windows Vista Home Premium 32-bit with dual 1.6 GHz Intel processors? On XP (a different computer) it loads in less than 3 seconds. On Vista it takes 20 seconds:

        2 seconds  (other - fonts, brushes, etc.)
        18 seconds (extension-script-fu)

    It just freezes at extension-script-fu. Looking at Process Explorer, I see that it's not taking any CPU at all.

    EDIT 1: It does seem to be taking 50% of the CPU. It gets stuck for about 18 seconds and then starts working again; then the actual GIMP program pops up (...finally). I have the latest stable version running (I think). I tried it with XP SP2 compatibility mode and/or Run As Administrator, but that didn't help.

  • NTFS 'Owner' missing when accessing hard disk from external USB adapter

    - by trismarck
    I have a hard drive with Windows XP SP3 installed on it. When the drive is connected through the standard SATA connector inside the laptop, everything works as expected. However, when I remove the drive from the laptop and connect it through an external USB adapter, almost all files and folders lose the contents of the 'Owner' field. I was wondering why that could be. I've tried two USB adapters and this happens with each. I could take ownership of all of the files, but that would overwrite the Owner value (the value that is present when the drive is accessed through the standard SATA connector in the laptop).

    //edit: if the hard drive is used through the USB adapter, I can't access most of the files, at least until I take ownership of the files/folders. This is what it looks like (screenshots: HDD inside USB adapter vs. HDD inside laptop - note the Owner column).

    //edit: some of the files in the first screenshot have the Owner field filled in. That's because I took ownership of those files/folders to be able to access the files on the hard drive.

    //edit2: also, if the hard drive is connected through the USB adapter and I've taken ownership of some files as the 'ddd' user, then if I log in as a different user (let's say the 'eee' user), the Owner field is _still_ empty, and the 'eee' user can't access the 'ddd' folder. Both users have Administrator privileges.

  • Using keyboard disables touchpad left button for a second on Acer laptop in Windows 8.1

    - by Robert Kilar
    The problem is present in the whole system, not only in games: desktop, Chrome, games, everywhere. When I press any "input key" on the keyboard (for example on the desktop), I can't select a file with the left mouse button OR by tapping the touchpad for about one second (the right button works immediately). After that the LMB works fine. There is NO delay; the button is just deactivated for a second. In games that means that when I run I cannot shoot, for example. When I switched the LMB and RMB functions in the Windows Control Panel, it was still the LMB that got disabled and the RMB worked fine. By "input key" I mean a letter or a number; keys like Alt, Caps Lock and Ctrl do not affect the touchpad. I do not remember this problem when I used Windows 7. A USB mouse works as it should. The problem existed both when I was using the Elantech touchpad driver and after I uninstalled it and used the generic Windows 8.1 driver.

    EDIT: I installed the Elantech drivers and set the value of every "disable..." key to 0, but the problem is still present.

    EDIT 2: The laptop is an Acer V3-571G. I have turned off the disabling function in the touchpad settings, but that did not fix it. I know the touchpad is NOT broken: I turned on the animated touchpad icon from the Elantech drivers and put it on the taskbar, and when I type a letter and press the LMB, the icon displays the click, but the click is ignored.

  • Trouble with Russian pc's on my wifi

    - by hogni89
    I have created a WiFi hotspot for the local community. The problem is that some Russian PCs (Windows XP, Windows Vista and Windows 7) can't get an internet connection (we have a lot of passing Russian fishing vessels / cargo ships). The PCs obtain a valid IP address, and some of them even manage to send a few packets, but none of them are usable on the network. They all say "Limited internet access" or the Russian equivalent of that message. The thing these PCs have in common is that they all run a Russian installation of Windows. No one else has problems with the WiFi hotspot - Danish and English Windows, Linux and OS X all work like a charm. Can it be that there is a difference between the Danish/English Windows installations and the Russian installation?

    EDIT: They can't ping the router (one PC got one response - ONCE), they can't access any sites, and Windows never asks "Is this network public, home or work?".

    PS: The hotspot is an airMAX Rocket M from Ubiquiti Networks, Inc. (www.ubnt.com)

  • Steps to deploy a custom routing protocol

    - by user134589
    I'm a Ph.D. student researching a service-centric networking architecture with resource allocation on a large scale. What I'm looking to do is extend an existing routing protocol like OSPF with extra fields and some new message types that I need for communication between nodes. I want to manipulate the cost of a network link and have paths calculated as in OSPF v2/v3, but using the cost that my algorithms have computed.

    What I have: the source code of OSPF from Quagga. I am assuming I can edit this code however I want, including packet structures and creating new types. Yes, I am aware it won't be easy, but this is a 6-year research project and I am eager to develop something new and move forward.

    What I need: I would like to know how I can deploy the edited OSPF source files I have (written in C) on any type of server. I have a large testbed environment available with hundreds of virtual nodes and pretty much any OS out there. So if I want to test my extended protocol, how do I make all the nodes in a network use it to communicate? I do not understand which parts of the kernel I need to edit here. I have searched for days now and I am unable to find how to deploy a non-existing routing protocol without the use of an application-level framework. If somebody could push me in the right direction that'd be awesome.

    Note: I need this to be a routing protocol and not an application, since I want it to work on top of the network layer for performance reasons.

    Thanks!
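
    An aside that is not part of the original question: routing daemons such as Quagga's ospfd run entirely in user space on top of the kernel's ordinary routing table, so no kernel changes are needed - only the rebuilt daemon on every node. A rough sketch of pushing a modified build onto a testbed node (paths, flags and config locations depend on how Quagga is configured at build time):

        # on each virtual node (sketch; the /usr/local prefix is an assumption)
        ./configure --prefix=/usr/local && make && sudo make install
        sudo cp zebra.conf ospfd.conf /usr/local/etc/
        sudo /usr/local/sbin/zebra -d     # kernel routing-table manager
        sudo /usr/local/sbin/ospfd -d     # the modified OSPF daemon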

  • How can I make grub2 boot into Windows 7?

    - by Grzenio
    I had Windows 7 installed on my system, then I installed Debian testing with grub2 as its boot manager. Initially I couldn't see a Windows entry in grub at all, so I ran:

        aptitude install os-prober kcpuload
        update-grub

    Now I can see the entry, but when I select it I get only the Win7 system restore instead of the real thing. Any ideas how to make it work?

    EDIT: I tried the suggested approach of adding a new file to /etc/grub.d, which generated an entry in grub.cfg, but it does not appear in the grub menu on boot :( I have this:

        grzes:/home/ga# cat /etc/grub.d/11_Windows
        #! /bin/sh -e
        echo Adding Windows >&2
        cat << EOF
        menuentry “Windows 7” {
        set root=(hd0,2)
        chainloader +1
        }

    And I have the following grub.cfg file:

        grzes:/home/ga# cat /boot/grub/grub.cfg
        #
        # DO NOT EDIT THIS FILE
        #
        # It is automatically generated by /usr/sbin/grub-mkconfig using templates
        # from /etc/grub.d and settings from /etc/default/grub
        #

        ### BEGIN /etc/grub.d/00_header ###
        if [ -s $prefix/grubenv ]; then
          load_env
        fi
        set default="0"
        if [ ${prev_saved_entry} ]; then
          set saved_entry=${prev_saved_entry}
          save_env saved_entry
          set prev_saved_entry=
          save_env prev_saved_entry
          set boot_once=true
        fi

        function savedefault {
          if [ -z ${boot_once} ]; then
            saved_entry=${chosen}
            save_env saved_entry
          fi
        }

        insmod ext2
        set root=(hd0,3)
        search --no-floppy --fs-uuid --set 6ce3ff31-0ef7-41df-a6f5-b6b886db3a94
        if loadfont /usr/share/grub/unicode.pf2 ; then
          set gfxmode=640x480
          insmod gfxterm
          insmod vbe
          if terminal_output gfxterm ; then true ; else
            # For backward compatibility with versions of terminal.mod that don't
            # understand terminal_output
            terminal gfxterm
          fi
        fi
        set locale_dir=/boot/grub/locale
        set lang=en
        insmod gettext
        set timeout=5
        ### END /etc/grub.d/00_header ###
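
    For comparison only, and a guess rather than a confirmed fix: a custom /etc/grub.d script is normally written with plain ASCII quotes, a terminated here-document and the executable bit set, since grub-mkconfig only runs files it can execute. A sketch of that shape:

        #!/bin/sh -e
        echo "Adding Windows" >&2
        cat << EOF
        menuentry "Windows 7" {
            set root=(hd0,2)
            chainloader +1
        }
        EOF

    followed by chmod +x /etc/grub.d/11_Windows and another run of update-grub.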

  • Have .cmd in startup wait for system wide startup to run.

    - by Dan
    On a computer I'm not an administrator on, there is a startup file (a .cmd file in C:\Documents and Settings\All Users\Start Menu\Programs\Startup) for all users. It does several things I don't like, so I have created my own .cmd file, which I have placed in C:\Documents and Settings\[user]\Start Menu\Programs\Startup. I want my code to run after the system-wide one, because my script first undoes the network mappings the system-wide file made and then replaces them with my own network mappings. How can I make my script wait and start as soon as the system-wide one is done? (I cannot remove or edit the system-wide file.) Thanks.

    EDIT: The time the system-wide file takes to run varies, and I would like my file to run right after it. The "if exist" method seems a little too contingent, since the system-wide script can change or a file could be moved. I am hoping to give my script to a few of my coworkers, so I'd like it to work without me having to update anything. Also, being a Linux guy, I don't know cmd code, so please write out any coding suggestions.
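
    Purely illustrative and not a confirmed solution: one low-tech cmd pattern is to poll until a drive letter that the system-wide script maps shows up, then redo the mappings. The drive letter X: and the server/share names below are placeholders:

        @echo off
        rem wait (up to roughly a minute) for the system-wide script to map X:
        set tries=0
        :waitloop
        if exist X:\ goto remap
        set /a tries+=1
        if %tries% geq 12 goto remap
        ping -n 6 127.0.0.1 >nul
        goto waitloop

        :remap
        net use X: /delete /y
        net use X: \\myserver\myshare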

  • How to kill ostensibly immortal process?

    - by DeeDee
    I had some huge file transfers operating on an NFS mount. The server on which the mount point resided was carelessly rebooted, and now the server from which these large transfers were initiated seems to be bogged down by them. If I run top, I see the following:

        [screenshot of the top output showing the stuck processes]

    The first thing I tried was to run kill with each of the -1, -2, -9 and -15 flags against each of the process IDs shown above, in turn. This allowed me to proceed, but didn't kill the processes. The next thing I attempted was to reboot the server, but neither reboot nor shutdown -r now worked. When I ran shutdown -r now, the standard broadcast message was sent out, but the server did not reboot. I confirmed this by looking at the server uptime, which was 25 days. So now I'm a little stuck. I'm running these commands as root.

    EDIT: Here's another interesting tidbit: in top, I don't see any other processes using more than a fraction of a percent of memory or more than 5% of CPU.

    EDIT 2: output of /var/log/messages
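
    An aside that is not from the original question: processes blocked on a vanished NFS server typically sit in uninterruptible sleep (state D), which is why no signal touches them, so the usual things to inspect are the process state and the stale mount itself. A rough sketch (the mount path is an assumption):

        # list processes stuck in uninterruptible sleep and what they are waiting on
        ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'
        # try to detach the dead NFS mount, forcibly or lazily
        umount -f /mnt/nfs_share || umount -l /mnt/nfs_share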

  • Adding file to /etc/cron.d doesn't make it run (ubuntu 10.04)

    - by tom
    If I scp a cron file into /etc/cron.d, it doesn't run unless I edit the file and change the command; then crond seems to pick the cron file up. How can I make cron reload its cron files on Ubuntu 10.04? 'touch'ing the file doesn't work, nor does 'restart cron' or 'reload cron'. My cron file is set to run every minute and logs to a file. Nothing ends up in the log file until I edit the command, and there's no entry for it in /var/log/syslog. I'm stumped. Here's my cron file, saved to /etc/cron.d/runscript:

        # Runs the script every minute. This is safe because it will exit with success if it's already running
        * * * * * www-data if [ -f /usr/local/bin/thing ]; then exec /usr/bin/php /usr/local/bin/thing mode:prod -a 14 -d >> /var/log/thing/mything.log 2>&1; else echo `date +'[%D %T]'` "Thing not deployed. Command not run\n" >> /var/log/thing/mything.log; fi &
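
    A hedged aside rather than the asker's confirmed fix: Debian/Ubuntu cron is picky about files dropped into /etc/cron.d (root ownership, not writable by group or others) and it notices new files via the directory's timestamp, so the usual checks look something like:

        ls -l /etc/cron.d/runscript          # should be root-owned and not group/other-writable
        sudo chown root:root /etc/cron.d/runscript
        sudo chmod 644 /etc/cron.d/runscript
        sudo touch /etc/cron.d               # bump the directory mtime that cron watches
        sudo service cron restart            # the Ubuntu 10.04 service name is "cron"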

  • Connecting to SVN server from a computer outside of my LAN

    - by Tom Auger
    I've got a Fedora server running Subversion and svnserve on port 3690. My repo is at /var/svn/project_name. I have my router forwarding port 3690 to the local server (as well as ports 80, 21, 22 and a few others). When I connect locally to svn://192.168.0.2/project_name it works great. When I connect from an external server to svn://my.static.ip/project_name I get a time out connecting to the host. However, if I browse to http://my.static.ip there is no problem, so port forwarding is working (at least for port 80). I don't want to run WebDAV or svn via HTTP/S; I'd like it to work using svnserve, as documented in the svn book. What have I misconfigured?

    EDIT: Here is the last part of my iptables dump. I'm not an expert, but it looks OK to me:

        ACCEPT  tcp  --  anywhere  anywhere  state NEW tcp dpt:svn
        ACCEPT  udp  --  anywhere  anywhere  state NEW udp dpt:svn
        ACCEPT  tcp  --  anywhere  anywhere  state NEW tcp dpts:6680:6699
        ACCEPT  udp  --  anywhere  anywhere  state NEW udp dpts:6680:6699
        REJECT  all  --  anywhere  anywhere  reject-with icmp-host-prohibited

    EDIT 2: Results from sudo netstat -tulpn:

        tcp  0  0  0.0.0.0:3690  0.0.0.0:*  LISTEN  1455/svnserve

  • Changing SkyDrive default location on Windows 8.1

    - by jmblack
    Update: This issue is due to the use of any non-local location for the SkyDrive folder. I've currently worked around it by using a local folder, but this has not resolved the original problem.

    I am trying to change the location of the SkyDrive folder in Windows 8.1. This is the newly integrated SkyDrive, not a downloadable client. There are a couple of similar questions on this site, but they pertain to Windows 8 and the 8.1 Preview:

        Changing the SkyDrive default path in Windows 8.1 (Windows 8.1 Preview)
        How do I change the SkyDrive default folder in Windows (Windows 8)

    The SkyDrive folder is a system folder in 8.1 and, like other system folders, has a Location tab in its properties that theoretically lets you change its location. I was able to do this with other system folders such as Downloads and Pictures, but there is an error when I try to change it for SkyDrive.

    Edit: The location that causes this error is a mapped network drive. The error does not occur when trying to move SkyDrive to another location on the same drive. There are no permission restrictions on the mapped network drive.

    Edit #2: This is the method I followed that produced the error; a screenshot of the error can be found here. Has anyone else encountered this or have any information on it?

  • /etc/hosts: What is loghost? (fresh install of Solaris 10 update 9)

    - by cjavapro
        #
        # Internet host table
        #
        ::1             localhost
        127.0.0.1       localhost
        XX.XX.XX.XX     myserver        loghost

    What is the purpose of loghost? If it were not for having loghost in there, all the /etc/hosts files on all the servers in this particular network could be identical.

    Edit: I looked at /etc/syslog.conf:

        #ident  "@(#)syslog.conf        1.5     98/12/14 SMI"   /* SunOS 5.0 */
        #
        # Copyright (c) 1991-1998 by Sun Microsystems, Inc.
        # All rights reserved.
        #
        # syslog configuration file.
        #
        # This file is processed by m4 so be careful to quote (`') names
        # that match m4 reserved words. Also, within ifdef's, arguments
        # containing commas must be quoted.
        #
        *.err;kern.notice;auth.notice                   /dev/sysmsg
        *.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
        *.alert;kern.err;daemon.err                     operator
        *.alert                                         root
        *.emerg                                         *
        # if a non-loghost machine chooses to have authentication messages
        # sent to the loghost machine, un-comment out the following line:
        #auth.notice                    ifdef(`LOGHOST', /var/log/authlog, @loghost)
        mail.debug                      ifdef(`LOGHOST', /var/log/syslog, @loghost)
        #
        # non-loghost machines will use the following lines to cause "user"
        # log messages to be logged locally.
        #
        ifdef(`LOGHOST', ,
        user.err                                        /dev/sysmsg
        user.err                                        /var/adm/messages
        user.alert                                      `root, operator'
        user.emerg                                      *
        )

    Very interesting. When shutting down, alerts go to all users, probably through the *.emerg * line. Looking at ifdef, it seems that the first parameter checks whether the current machine is a loghost, the second parameter is what to do if it is, and the third parameter is what to do if it is not.

    Edit: If you want to test a logging rule you can use svcadm restart system-log to restart the logging service and then logger -p notice "test" to send a test log message, where notice can be replaced with any facility.level such as user.err, auth.notice, etc.

  • Google analytics ignoring "required step" in goals

    - by Matt Huggins
    I am A/B testing a landing page to see which converts better. The funnels are set up exactly the same in terms of the goal completion URL and funnel steps, with one exception: the first step (which is a required step) has a different URL to represent each of the two landing pages. Unfortunately, Google is tracking a conversion for both of these goals regardless of which landing page a user reaches! It looks like the "required step" is broken... or perhaps, if others haven't noticed it, it is a narrower issue, such as the required step only failing when the goal URL is the same between multiple goals. Here is an example of what I mean.

    Goal 1:
        Goal URL: /users/dashboard (head match)
        Funnel:
            1. /homepages/index1 (required step)
            2. /users/register
            3. /users/edit

    Goal 2:
        Goal URL: /users/dashboard (head match)
        Funnel:
            1. /homepages/index2 (required step)
            2. /users/register
            3. /users/edit

    As you can see, the only difference is step #1 of the funnel. Since I am A/B testing the landing page of the site, this should be the only difference. However, when I look at the goal page of Google Analytics, I see that the goal is being recorded for both of these regardless of the landing page being reached. The only tinkering I've done is attempting to wrap each funnel step's goal in ^ and $ characters for an exact regular expression match, but this didn't make a difference. Thoughts?

  • IPTABLES syntax help to forward Remote Desktop requests to a VM [CentOS host]

    - by NVRAM
    I have a VM running MS Windows XP hosted on my CentOS 5.4 machine. I can rdesktop into it from the hosting machine and work just fine using the private address (192.168.122.65), but I now need to allow Remote Desktop access from other computers, not just the machine hosting the VM. [Edit] I only need to allow access for a day or so, so I don't want to add a NIC (for XP activation reasons).

    Could someone help me with the iptables syntax? The VM is on a private/virtual network, 192.168.122.65, and my CentOS machine is on a physical network at 10.1.3.38 (with 192.168.122.1 as the gateway for the virtual net). I found this question, but none of the answers seemed to work, and I'm a bit timid at blindly trying variations. My FORWARD rules are as listed below. Thanks in advance.

        # iptables -L FORWARD
        Chain FORWARD (policy ACCEPT)
        target               prot opt source               destination
        ACCEPT               all  --  anywhere             192.168.122.0/24    state RELATED,ESTABLISHED
        ACCEPT               all  --  192.168.122.0/24     anywhere
        ACCEPT               all  --  anywhere             anywhere
        REJECT               all  --  anywhere             anywhere            reject-with icmp-port-unreachable
        REJECT               all  --  anywhere             anywhere            reject-with icmp-port-unreachable
        RH-Firewall-1-INPUT  all  --  anywhere             anywhere

    [Edit] If I do play "blindly", is there a simple way to reset the settings on CentOS (a la service network restart)?
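
    A sketch of the kind of rules being asked about, using the addresses from the question; this is untested and assumes RDP's default port 3389 and that forwarding is already allowed for the libvirt network:

        # send RDP arriving on the host's physical address to the VM
        iptables -t nat -A PREROUTING -d 10.1.3.38 -p tcp --dport 3389 \
                 -j DNAT --to-destination 192.168.122.65:3389
        # let the forwarded connection through the FORWARD chain
        iptables -I FORWARD -d 192.168.122.65 -p tcp --dport 3389 \
                 -m state --state NEW -j ACCEPT

    As for resetting afterwards, reloading the saved rules with service iptables restart puts the CentOS firewall back to whatever is stored in /etc/sysconfig/iptables.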

  • sudoer scheme to allow useful access to another web developer yet retain future control of a virtual private server

    - by Tchalvak
    Background: Virtual Private Server

    I have a virtual private server that I'm looking to host multiple websites on, and I want to provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from the other sites on the server that I will develop.

    The problem: retain control

    Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote users and other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - and become root and remove root control from me, for example.

    I need him not to be able to:
        - take away other admin permissions
        - change the root password
        - have control over other security/administrative functions

    I would like him to still be able to:
        - install software (through apt-get)
        - restart apache
        - access mysql
        - configure mysql/apache
        - reboot
        - edit web-development configuration type files in /etc/

    Other standard setups would be happily considered. I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings that I'm hoping for above.

    Edit: I have not yet finalized permissions, so standard, useful sudo setups are certainly an option; the lists above are more what I'm hoping I can do, and I don't know whether that setup is possible. I'm sure that people have solved this type of problem before somehow, though, and I'd like to go with something somewhat tested as opposed to something I've homegrown.
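
    Not a vetted policy, only a sketch of what such a sudoers entry tends to look like; the user name, paths and Debian-style service commands are assumptions, and unrestricted apt-get or editor access can usually be parlayed into root, so the lists above cannot be enforced perfectly by sudo alone:

        # /etc/sudoers - always edit with visudo
        Cmnd_Alias WEBDEV = /usr/bin/apt-get, \
                            /usr/sbin/service apache2 *, \
                            /usr/sbin/service mysql *, \
                            /sbin/reboot
        webdev  ALL = (root) WEBDEV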

  • With puppet, can you have the client ask to be a certain set of roles?

    - by Aitch
    I've recently got my puppetmaster and client up and running, have had the client correctly signed, and have requested and applied simple changes - all good. I have a growing number of machines (100). They are not consistently named (historical reasons). They fall into a handful of categories (think of it like: dataserver_type1, dataserver_type2, webserver_type1, webserver_type2...). New instances of these types of machines are added weekly.

    I don't understand (yet), or cannot see, how I can declare a "generic" node of (say) "dataserver_type1" that contains whatever modules it needs, and set something in the client's puppet.conf that says "I am a dataserver_type1", without using the hostname/FQDN. If I set the node name in the catalog as (say) "my-data-server-type1" - the certified hostname - it picks it up and works. I know you can use patterns for hostnames, but as I said, my server names are not consistent and I can't change them. It seems wrong to have to edit a file and manually add a node for each server when they continue to grow.

    Edit: Digging deeper, it seems roles may be what I want. But there still seems to be an element whereby the master has to contain a list of roles that a specific named server should take on. Perhaps what I am asking is: how can a client say "I want to be this role", without the server having to be updated?
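
    For illustration only (the module and fact names here are made up): the usual pattern for letting a client announce its own role is a custom fact set on each machine, which the master's site.pp then switches on instead of on hostnames:

        # on each client: an external fact (or a FACTER_role environment variable, depending on the Facter version)
        #   /etc/facter/facts.d/role.txt
        role=dataserver_type1

        # on the master: site.pp
        node default {
          case $::role {
            'dataserver_type1': { include profile::dataserver_type1 }
            'webserver_type1':  { include profile::webserver_type1 }
            default:            { notify { "unknown role on ${::hostname}": } }
          }
        }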

  • What is my BaseDN supposed to be with the following configuration of OpenLDAP?

    - by fuzzy lollipop
    I have the following in my OpenLDAP configuration, using the latest OpenLDAP on CentOS 5.3, installed with yum.

    From my /etc/openldap/slapd.conf:

        database bdb
        suffix   "dc=company,dc=com"
        rootdn   "cn=Manager,dc=company,dc=com"

    From my /etc/openldap/ldap.conf:

        BASE dc=company,dc=com

    I have successfully added an entry with ldapadd and retrieved it with ldapsearch from a local bash shell on the box. Now I am trying to get a graphical editor to connect to this server remotely so I can enter people from my laptop, but I am having no luck. I tried JXplorer, and it connects with an anonymous bind without me having to specify a BaseDN, but I can't edit anything that way. If I try to give it a user name and password (using Manager and the rootpw I have in clear text just for testing), every GUI client on my remote laptop complains about my BaseDN not being in the correct format when I enter dc=company,dc=com, and I also tried cn=Manager,dc=company,dc=com:

        Error opening connection: [LDAP: error code 34 - invalid DN]

    I have tried multiple clients and all of them connect as anonymous; none let me connect authenticated so that I can actually create or edit anything. I am using Manager as my username and the password from rootpw - is that correct?
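
    As an illustration only (the hostname and password are placeholders): what a GUI client labels "Base DN" and "Bind DN"/"User DN" map onto the -b and -D options of a command-line search, so an authenticated query against the configuration above would look roughly like this:

        ldapsearch -x -H ldap://ldap.company.com \
            -D "cn=Manager,dc=company,dc=com" -w secret \
            -b "dc=company,dc=com" "(objectClass=*)"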

  • Windows 7 mouse multi selection on windows explorer with CTRL and point to select / single click to open

    - by bortao
    This is something that has annoyed me since I changed to Windows 7. I haven't double-clicked for a long time now, since Windows introduced single-click selection (Folder Options -> Click items as follows -> Single-click to open an item). The problem is that when I hold Ctrl and hover over items, they select/deselect automatically with every small mouse movement I make. What I want is for items to select/deselect only when I enter/exit them. So to deselect an item I'm hovering over, I have to move the mouse out of it and then back in. I'm using the official IntelliMouse Explorer drivers.

    (edit) Here's another related annoyance: when you are hovering over something and move the mouse inside another item (holding Ctrl), the new item may or may not be selected. If you continue moving the mouse, it gets selected/deselected as you move.

    (more edit) I have found that the registry values HKCU\Control Panel\Mouse\MouseHoverHeight / MouseHoverWidth have some influence here. If they are set to 2, items select/deselect really quickly as you move the mouse. When set to larger numbers it's slower, but setting them to 20 or 200 doesn't seem to make much difference.

  • Firefox url / link to a group of saved bookmarks?

    - by This_Is_Fun
    In Firefox you can easily save a group of tabs together. When (re-)accessing this group, the 'cascading' bookmark menu shows each individual bookmark and, under a separator line, it says "Open All in Tabs". I'm looking for a way to launch those tabs without going up through the bookmark menu.

    Possible options:
        A) Record a simple macro with any number of "superuser" utilities ('A' is not the preferred option, since many little macros are hard to keep track of)
        B) Use AutoHotkey (similar to option 'A' and more flexible once you learn the basics)
        C) How does Firefox load all those tabs? The info must be stored somewhere (as a type of URL??)

    Quick summary: the moment I click on "Open All in Tabs", I am clicking on something very similar to a hyperlink. How do I find the content (exact code) of that 'hyperlink', and/or how do I easily launch the tabs?

    New EDIT #1: I'm looking for a way to launch those tabs without going up through the bookmark menu, or cluttering the bookmarks toolbar, which I hide anyway :o)

    New EDIT #2: I tried to keep the question simple and not mention AutoHotkey programming. The objective is to launch all the tabs using a button on an AHK GUI. When grawity said, "It's just an ordinary folder containing ordinary bookmarks," he (she) reminded me that I can easily find the folder - now how do I launch the URLs inside that folder? FYI, basic-level AHK works like this:

        ; Open one folder
        ButtonWinMerge_Files:
        Run, C:\Program Files\WinMerge\
        Return

        ; Use default web browser for one link
        ButtonGoogle:
        Run, http://google.com
        Return

    Question still open: the moment I click on "Open All in Tabs", I am clicking on something very similar to a hyperlink. How do I replicate the way Firefox launches the tabs with one click?

  • Windows Server 2008 is stuck at "configuring updates - stage 3 of 3 - 0% complete"

    - by Chris
    This has happened the last two times I've done updates to this system, and I really have no idea what is going on. It is installing only a month's worth of updates. It responds only to ping and no services are up, so I can't view the system remotely (I have to hook up a monitor to see this message). In the past I've just restarted the system at this point and it eventually finishes updating. I want to know what I can do to avoid this situation, how to diagnose what is going on, and how to get any kind of remote access during the updates.

    Edit: I can start the machine in safe mode (where I did nothing but back up some files). I restarted and it no longer tries to do a Windows update; it just goes to the desktop, where everything seems extremely broken. I can click on some things, but not launch most programs. I guess all I can do at this point is a system restore or something.

    Edit: Re-installed Windows on this system yesterday. That's my usual solution to issues I don't feel like diagnosing, like this one.
