Search Results

Search found 5099 results on 204 pages for 'distribution groups'.


  • chrooted sftp user with write permissions to /var/www

    - by matthew
    I am getting confused about this setup that I am trying to deploy. I hope one of you folks can lend me a hand; much appreciated. Background info: the server is Debian 6.0, ext3, with Apache2/SSL and Nginx in front as a reverse proxy. I need to provide SFTP access to the Apache root directory (/var/www), making sure that the SFTP user is chrooted to that path with RWX permissions, all without modifying any default permission in /var/www.

        drwxr-xr-x 9 root root 4096 Nov 4 22:46 www

    Inside /var/www:

        -rw-r----- 1 www-data www-data     177 Mar 11  2012 file1
        drwxr-x--- 6 www-data www-data    4096 Sep 10  2012 dir1
        drwxr-xr-x 7 www-data www-data    4096 Sep 28  2012 dir2
        -rw------- 1 root     root          19 Apr  6  2012 file2
        -rw------- 1 root     root     3548528 Sep 28  2012 file3
        drwxr-x--- 6 www-data www-data    4096 Aug 22 00:11 dir3
        drwxr-x--- 5 www-data www-data    4096 Jul 15  2012 dir4
        drwxr-x--- 2 www-data www-data  536576 Nov 24  2012 dir5
        drwxr-x--- 2 www-data www-data    4096 Nov  5 00:00 dir6
        drwxr-x--- 2 www-data www-data    4096 Nov  4 13:24 dir7

    What I have tried: I created a new group secureftp, then created a new SFTP user joined to the secureftp and www-data groups, with a nologin shell and / as the home directory. I edited sshd_config with:

        Subsystem sftp internal-sftp
        AllowTcpForwarding no
        Match Group <secureftp>
            ChrootDirectory /var/www
            ForceCommand internal-sftp

    I can log in with the SFTP user and list files, but no write action is allowed. The SFTP user is in the www-data group, but the group bits in /var/www are read/read+execute, so it doesn't work. I've also tried ACLs, but as I apply RWX ACL permissions for the SFTP user to /var/www (dirs and files recursively), the Unix permissions change as well, which is what I don't want. What can I do here? I was thinking I could enable the www-data user to log in over SFTP, so that it could modify the files and dirs that www-data owns in /var/www, but for some reason I think this would be a stupid move security-wise.
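    For reference, a minimal sketch of the sshd_config fragment this setup describes (the group name and chroot path are the question's; the umask flag is an assumption and needs OpenSSH 5.4 or later, which Debian 6 ships):

        # /etc/ssh/sshd_config (excerpt) - sketch only; reload sshd after editing
        Subsystem sftp internal-sftp
        AllowTcpForwarding no
        Match Group secureftp
            ChrootDirectory /var/www
            ForceCommand internal-sftp -u 0002

    Note that sshd insists that the ChrootDirectory itself (here /var/www) be owned by root and not group- or world-writable, so write access inside the chroot has to come from group or ACL permissions on the directories below it rather than from loosening /var/www itself; bindfs views or per-directory group grants are the usual workarounds when the original permissions must stay untouched.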

    Read the article

  • CLI package to replace Plesk

    - by dotancohen
    Another programmer and I are tasked with maintaining a few webservers. I prefer CLI tools; she prefers Plesk. However, I am adamant about not installing Plesk, for quite a few reasons. I have written a small Python script for adding new domains, and now I am about to add the ability to configure email addresses while abstracting the details of Postfix from her. Before I go that route, I have googled to see if anything already exists, and am surprised that I have come up with nothing! Are there any mature, stable "control panels" or "server admin" tools like Plesk, but which are accessed via the CLI over SSH? I am looking for the following features:
        Add / remove / configure domains served by Apache.
        Add / remove / configure email boxes and mail groups.
        Add / remove MySQL databases and users, and grant users access to databases.
        Provide basic monitoring of "server health": memory usage, disk usage, CPU usage, bandwidth usage.
        Possibly set up SFTP accounts so that only specific FTP users can access specific /var/www/someSite/ directories.
    Note that I was unsure whether this question is off-topic for ServerFault. As per the ServerFault about page (there seems to be no FAQ any more), this question meets two of the "ask about" criteria and none of the "don't ask about" ones, with the possible exception of being opinion-based. Therefore, to keep it on-topic, I would like to know about the available applications, but answers should stay objective rather than opinionated. Thank you!

    Read the article

  • Forcing users to change password on first login - Windows Server 2008 R2 Remote Desktop Services

    - by George Durzi
    I'm setting up a demo lab environment in which each demo lab user is assigned 4 accounts to use in the lab. Users access the lab via Remote Desktop to the "client" machine in the lab, exposed at demolab.mydomain.com.
        The client machine is a Windows Server 2008 R2 Enterprise Edition server.
        The Remote Desktop Services role is configured on this server.
        Remote connection settings are configured to allow users to connect with any version of the Remote Desktop Client.
        All accounts are members of the local Administrators and Remote Desktop Users groups.
        All accounts are configured to be forced to change the default password after first login.
    The user is instructed to remote into the lab with an account designated as their main account, and to establish 3 more Remote Desktop sessions within the lab using their 3 other assigned demo lab accounts. When establishing the initial Remote Desktop connection to the lab using their main account, the user sees the change-password dialog as expected. However, after logging in and trying to establish Remote Desktop connections to the server with their three other accounts, they are prompted that they need to change the password after logging in but can't continue with the login process; they don't see the expected change-password experience. After logging in with the primary account, it doesn't make a difference whether I establish the Remote Desktop connection to the environment using the name of the server, e.g. Client, or demolab.mydomain.com. I experimented with changing the settings for remote connections to require NLA, but that didn't make a difference. Appreciate any tips. Thanks
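    One thing worth ruling out from the command line: the "user must change password at next logon" flag can be checked and re-set per account with net user (a sketch; the account name is hypothetical, and these are local accounts so no /domain switch):

        rem sketch - inspect, then force the change-at-next-logon flag for a lab account
        net user DemoUser2
        net user DemoUser2 /logonpasswordchg:yes

    It is also worth knowing that when Network Level Authentication is enforced, credentials are validated before any desktop session exists, which is the classic reason an expired or must-change password cannot be changed over RDP; the usual workaround is to let those connections fall back to non-NLA security on the RDP-Tcp listener.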

    Read the article

  • vim coloring for git

    - by kelloti
    I'm on Windows and Vim loads with a terrible colorscheme. The messages are blue on black (so I can't see what I'm typing). I need to change the colorscheme, but :colorscheme slate doesn't do anything. :version reports:

        VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 27 2010 17:51:38)
        MS-Windows 32-bit console version
        Included patches: 1-46
        Compiled by Bram@KIBAALE
        Big version without GUI. Features included (+) or not (-):
        +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent +clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con +diff +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +gettext/dyn -hangul_input +iconv/dyn +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape +multi_byte +multi_lang -mzscheme -netbeans_intg -osfiletype +path_extra -perl +persistent_undo -postscript +printer -profile -python -python3 +quickfix +reltime +rightleft -ruby +scrollbind +signs +smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white -tcl -tgetent -termresponse +textobjects +title -toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -xfontset -xim -xterm_save -xpm_w32
        system vimrc file: "$VIM\vimrc"
        user vimrc file: "$HOME\_vimrc"
        2nd user vimrc file: "$VIM\_vimrc"
        user exrc file: "$HOME\_exrc"
        2nd user exrc file: "$VIM\_exrc"
        compilation: cl -c /w3 /nologo -i. -iproto -dhave_pathdef -dwin32 -dfeat_cscope -dwinver=0x0400 -d_win32_winnt=0x0400 /fo.\objc/ /ox /gl -dndebug /zl /mt -ddynamic_iconv -ddynamic_gettext -dfeat_big /fd.\objc/ /zi
        linking: link /release /nologo /subsystem:console /ltcg:status oldnames.lib kernel32.lib advapi32.lib shell32.lib gdi32.lib comdlg32.lib ole32.lib uuid.lib /machine:i386 /nodefaultlib libcmt.lib user32.lib /pdb:vim.pdb -debug

    My $HOME\_vimrc looks like:

        colorscheme slate
        syn on
        set shiftwidth=2
        set tabstop=2

    and my $VIM\vimrc is the stock vimrc that comes with the Windows Vim distribution. How do I change my console Vim colorscheme, especially for Git commits?

    Read the article

  • What's the easiest route to trying out Mono 2.6?

    - by E J
    We have several web applications built on Microsoft technologies (ASP.NET plus the MVC framework, built using VS2008, MS SQL Server). I have recently been playing with Ubuntu (9.10), installed using Wubi, and wanted to see if I can get our apps running on a FOSS software stack. I have got the hang of the very basics of PostgreSQL, and I have read that there is some support for LINQ to SQL in Mono (as of 2.6), as well as ASP.NET/MVC. However, I am unsure how to go about getting Mono 2.6 up and running. Here is what I have discovered so far:
        Ubuntu is not meant for the 'cutting edge'; it is designed to be stable, hence it sometimes takes a release cycle or two for new software to make it to the repositories.
        Mono is already installed by default, but it is likely to stay at version 2.4 for at least the 10.04 release.
        You can install parallel environments of Mono, if you know what you're doing.
    I have had a go at setting up parallel environments, but haven't had any luck yet (and TBH I am not certain that that will do what I think it's going to do). (tl;dr start here) Is there a distribution of Linux similar enough to Ubuntu that I wouldn't have to start the learning curve all over again, but that will let me install Mono 2.6, PostgreSQL (and possibly MonoDevelop 2.4)? Or should I persist with Ubuntu?
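    For what it's worth, a parallel Mono environment is usually just a prefix build; a sketch, assuming the 2.6.x source tarball has been downloaded and unpacked (prefix path is arbitrary):

        # build Mono 2.6.x into its own prefix, leaving the distro's 2.4 untouched
        sudo apt-get install build-essential gettext bison
        cd mono-2.6.7
        ./configure --prefix=/opt/mono-2.6
        make && sudo make install
        # activate it per shell session only
        export PATH=/opt/mono-2.6/bin:$PATH
        export PKG_CONFIG_PATH=/opt/mono-2.6/lib/pkgconfig:$PKG_CONFIG_PATH
        mono --version   # should now report 2.6.x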

    Read the article

  • Most scalable way of serving a small set of static HTTP content

    - by Ekevoo
    The story: Hi guys. I'm among the people responsible for serving the results of the most anticipated (by number of people participating) annual entrance exam in my state. As such, when our results are published, the interest is overwhelming. In the past we delegated the responsibility of serving the results to the media, but that spoils the officialness of these results a little. This year we went with a (long overdue) experiment of using lighttpd instead of Apache, as well as other physical network optimizations I wasn't directly involved with. The results were very satisfactory. The server didn't choke even once, nor did we see any of the usual Twitter complaints about unavailability and/or slowness that were previously common. However, because we still delegated the first publication of the results to the media, I'm still not 100% sure we can handle the load of actually publishing the results first. The question: since these files are only about 14 MB in total, and a truly lightweight Linux distribution isn't that big either, I'm thinking: what if next year we serve everything from a RAM drive? Is there such a thing? Is it useful? Is it worth it for a team that uses Debian almost exclusively? Are there other optimizations that I should be focusing on instead?
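    For scale: a RAM-backed document root on Debian is just a tmpfs mount; a sketch (paths and size are assumptions):

        # serve the ~14 MB result set from RAM (repopulate after every reboot)
        mkdir -p /srv/results-ram
        mount -t tmpfs -o size=64m,mode=0755 tmpfs /srv/results-ram
        cp -a /srv/results/. /srv/results-ram/
        # then point lighttpd's server.document-root at /srv/results-ram

    With only 14 MB of hot static files, the kernel page cache will normally keep everything in RAM after the first few requests anyway, so tmpfs mostly buys predictability; connection handling and outbound bandwidth are more likely to be the real limits.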

    Read the article

  • Setting Windows 7's Recycle Bin to automatically have a default disk space allocation for deleted files from newly mounted drives

    - by galacticninja
    How do I set Windows 7's Recycle Bin to automatically use a default disk-space allocation for deleted files from external hard drives and TrueCrypt-mounted volumes? I remember that in Windows XP I could set a percentage of total disk space that would automatically be used as storage capacity for deleted files by the Recycle Bin, and this was applied to all external HDs or TC-mounted volumes. Windows 7 defaults to the 'Don't move files to the Recycle Bin. Remove files immediately when deleted' setting for newly mounted external HDs and TC-mounted volumes. Since I am expecting deleted files to go to the Recycle Bin, this sometimes causes an 'oops' when I delete files on external hard drives or TC-mounted volumes, as Windows does not move deleted files to the Recycle Bin but deletes them permanently. I have to remember to manually set a custom Recycle Bin storage space for each new drive that is mounted by Windows to avoid this issue. I only use and mount TrueCrypt file containers, not drives. I also don't mount TrueCrypt file containers as removable drives ('Mount volume as removable medium' is unchecked in Mount Options). In my $Recycle.Bin > Properties > Security settings, 'System' and 'Administrators' are already set to 'Full Control', while 'Users' only have 'Special Permissions' checked in gray. There are no other groups. I haven't changed or edited anything in these settings. I am using Windows 7 Ultimate.
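    If scripting the setting per mounted volume is acceptable, the per-volume Recycle Bin configuration appears to live in the user's registry hive; a heavily hedged sketch (the key and value names are from memory - verify them on a test volume first, and {GUID} stands for the volume GUID reported by mountvol):

        rem sketch only - per-volume Recycle Bin settings under the current user
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\BitBucket\Volume\{GUID}" /v NukeOnDelete /t REG_DWORD /d 0 /f
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\BitBucket\Volume\{GUID}" /v MaxCapacity /t REG_DWORD /d 2048 /f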

    Read the article

  • Amazon EC2 Nat Instance - goes out but not back in

    - by nocode
    I've followed Amazon's steps; here is what I've done. I've created 6 subnets: 4 private (SN1: 10.50.1.0/24, SN2: 10.50.2.0/24, SN3: 10.50.3.0/24, SN4: 10.50.4.0/24) and 2 public (SN5: 10.50.101.0/24 and SN6: 10.50.102.0/24).
        - I have a bastion host and a NAT instance on SN5 and have assigned EIPs to both. I created a test instance on SN1.
        - Edit: the NAT instance has the source/destination check disabled.
        - On the NAT instance, I bootstrapped the following commands: echo 1 > /proc/sys/net/ipv4/ip_forward and iptables -t nat -A POSTROUTING -s 10.0.0.0/16 -j MASQUERADE
        - In my VPC, the private subnets have their own route table with 0.0.0.0/0 pointed at the NAT instance, and all 4 private subnets are associated with that route table. I have a second route table for my public subnets with 0.0.0.0/0 pointed at the IGW (and the other 2 subnets associated with it).
        - For security groups, the NAT instance accepts all traffic from each of the 4 private subnets, and all outbound traffic is allowed. For my test server, I have allowed all outbound access and allowed all traffic from the public subnet of the NAT host.
    I can ping internally with no issues. On my test instance, if I try to ping google.com, DNS resolves, but I don't get a reply back. On my NAT instance, I run a tcpdump and can see the request going out to google.com, but the reply is not sent back. My NAT host can ping and receive a reply from google.com. From the test host, when I ping the NAT instance, the tcpdump shows a request and a reply. Is there something I'm missing? EDIT: I've figured it out - I had to save the iptables config and restart the service.
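    Since the asker's own fix was persisting the NAT rule, here is what that looks like on an Amazon Linux NAT instance - a sketch of the steps already described, with the masquerade source widened to the VPC's 10.50.0.0/16 range (the 10.0.0.0/16 in the question would not match traffic coming from the 10.50.x.x subnets):

        # on the NAT instance
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 10.50.0.0/16 -j MASQUERADE
        # persist the rules so they survive a service restart or reboot
        service iptables save
        service iptables restart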

    Read the article

  • cannot reach munin port on other AWS instance

    - by Amedee Van Gasse
    2 AWS instances, in the same region but different availability zones; one is in regular EC2 and the other is in a VPC; both have an Elastic IP, and both are 64-bit Amazon Linux AMI 2014.03.1. Both are running munin-node. The instance in the VPC is running munin-cron. I have added incoming TCP and UDP port 4949 to the security groups of both instances. On the munin node, I added an allow line with the IP address (regular expression) of the munin server to /etc/munin/munin-node.conf. I bind munin-node to any interface using host *. Then I did sudo service munin-node restart and ran netstat:

        $ sudo netstat -at | grep munin
        tcp    0    0 *:munin    *:*    LISTEN

    So the port is open there. On the munin server AND on the munin node:

        $ nmap AMAZON-IP -p 80,4949 | grep tcp
        80/tcp   open   http
        4949/tcp closed munin

    On the munin node:

        $ nmap localhost -p 80,4949 | grep tcp
        80/tcp   open http
        4949/tcp open munin

    So from the outside, the HTTP port is open (Apache is running) but the munin port is closed. The node can't even reach the munin port on its own public IP address, but it can on localhost. I added port 80 as a sanity check, to be sure that there is network connectivity at all. So what am I overlooking here?
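    For comparison, a munin-node.conf along the lines the question describes (addresses are placeholders). The one subtlety with Elastic IPs is that traffic between the two instances arrives from the master's public address, so both the security-group rule and the allow regex have to match that public IP rather than the private one:

        # /etc/munin/munin-node.conf (excerpt) - sketch, addresses assumed
        host *
        port 4949
        allow ^127\.0\.0\.1$
        allow ^203\.0\.113\.10$     # the munin master's public IP as the node sees it
        # quick end-to-end test from the master:
        #   nc -vz <node-elastic-ip> 4949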

    Read the article

  • How do I replicate Gmail filtering (forwarding mostly)?

    - by projectdp
    I have reached the limits of Gmail forwarding. Previously there was no need to verify forwarding addresses. It's a problem for me now because the addresses I want to forward to are not natural inboxes but automated systems, with no way to read the verification email contents. I want to set up, for example: mobile - email - facebook-email - flickr-email - tumblr-email - posterous-email. How do I do this without Gmail filters? I think I need to use fetchmail to watch my inbox and then auto-forward to the above addresses. Is fetchmail the best solution to this issue? Any other MRAs? I'd also like to do some more complicated things with the emails in an automated fashion: how would I go about monitoring the inbox, doing some actions on each email before forwarding, and forwarding everywhere? Prerequisites:
        a server
        a fetchmail daemon to poll the account
        a local mailbox
        a script to clean and forward appropriately (probably Python)
        sendmail + a ~/.forward file
        a backup email account (probably Gmail)
    Any help would be greatly appreciated. I'm trying to automate my social content distribution.
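    A minimal fetchmail setup for this looks something like the following sketch; the account, app password, and processing script are hypothetical, and the script is where the cleanup and re-sending to the per-service addresses would happen:

        # ~/.fetchmailrc - sketch only (chmod 600)
        set daemon 300                        # poll every 5 minutes
        poll imap.gmail.com protocol IMAP
            user "me@gmail.com" password "app-password" ssl
            keep                              # leave a copy in Gmail as a backup
            mda "/home/me/bin/distribute.py"  # decides where each message is re-sent

    The script then hands each rewritten message to sendmail for the per-service addresses, which is effectively the fetchmail + ~/.forward pipeline the question sketches.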

    Read the article

  • Block users from Social networking websites while firewall is down

    - by SuperFurryToad
    We currently have a SonicWall firewall, which does a pretty good job of blocking social networking websites like Facebook and Bebo. The problem we are having is that sometimes we need to temporarily disable our firewall blocklist so we can update our company's page on Facebook, for example. Whenever we do this, we see an avalanche of users logging on to their Facebook pages during work time. So we need a way to block access while the blocklist is down. For the sake of argument, we have two groups of users - "management" and "standard users". "Standard users" would have no access to Facebook, but "management" users would have access. Perhaps something like a hosts-file redirect for non-management users. This could probably be enforced via Group Policy, which would call a bat file to copy down the hosts file, depending on whether the user is management or not. I'm keen to hear any suggestions for what the best practice would be for this in a Windows/AD environment. Yes, I know what we're doing here is trying to solve an HR problem using IT, but this is the way management wants it, and we have a lot of semi-autonomous branch offices that we don't have a lot of day-to-day contact with, so an automated way of enforcing this would be the most preferable method.
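    A sketch of the hosts-file approach (share path and file names are hypothetical): deploy the copy as a startup or logon script in a GPO that is security-filtered to the standard-users group only, so management never receives it and no in-script group check is needed:

        rem logon/startup script deployed only to "standard users" via GPO security filtering
        copy /y \\fileserver\netlogon\hosts.blocked %SystemRoot%\System32\drivers\etc\hosts

    hosts.blocked would simply map www.facebook.com, facebook.com, bebo.com and so on to 127.0.0.1; a second GPO (or none at all) can push the clean hosts file back out to management users.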

    Read the article

  • Disable RAID (switch to JBOD) on an IBM x3400 M2 server

    - by BanKtsu
    Hi, I just want to disable the default RAID on my IBM System x3400 M2 server (7837-24X); I have 3 SAS disk drives. I want to make them a JBOD ("Just a Bunch Of Disks"), because I want to install CentOS on drive 0 and use the other two for cache files for a Squid server. I disabled the RAID in the BIOS:
        System Settings / Adapters and UEFI drivers / LSI Logic Fusion MPT SAS Driver - PciRoot(0x0)/Pci(0x3,0x0)/Pci(0x0,0x0)
        LSI Logic MPT Setup Utility
        RAID Properties / Delete Array
    Later I booted the CentOS live CD and installed the OS on drive 0, with the others mounted like this:
        LVM Volume Groups
            vg_proxyserver 139508
                lv_root 51200 / ext4
                lv_home 84276 /home ext4
                lv_swap 4032
        Hard Drives
            sdb (/dev/sdb) free 140011
            sdc (/dev/sdc) free 140011
            sdd (/dev/sdd)
                sdd1 500 /boot ext4
                sdd2 139512 vg_proxyserver physical volume (LVM)
    But when I restart the server it gives me the error:

        Boot failed Hard Disk 0
        UEFI PXE PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/MAC(001A64B15130,0x0))
        ........PXE-E18: Server response timeout.
        UEFI PXE PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/MAC(001A64B15132,0x0))
        ........PXE-E18: Server response timeout.

    and the OS does not start. Does the IBM force me to use RAID? Why?

    Read the article

  • Why does my Intel Tolapai network chip not transmit packets?

    - by Hanno Fietz
    I'm trying to deploy an embedded system (NISE 110 by Nexcom) based on the Intel EP80579 (Tolapai) chip. Tolapai apparently integrates controllers for Ethernet etc. on a single chip (Intel homepage). The machine can't get a network connection. Diagnosis, as far as I could manage:
        Drivers: the drivers from Intel compiled and installed without problems (version 1.0.3-144). The kernel version and Linux distribution (CentOS 5.2, 2.6.18) match the driver's installation instructions. The drivers are loaded and show up in lsmod (module names are gcu and iegbe).
        Interfaces: eth0 and eth1 show up in ifconfig. I can bring up the interfaces with a fixed IP; pinging the interface locally works; ifconfig shows the UP flag but not RUNNING.
        Link: ethtool shows "Link detected: no", "Speed: unknown (65536)" and "Duplex: unknown (255)". The link LED is lit on the other side of the cable, where ethtool shows "Link detected: yes" and reports a speed of 1000 Mbps, allegedly auto-negotiated with the problematic device.
        Network traffic analysis: the device does not reply to ARP, ICMP echo or anything else (iptables is down). When trying to send ICMP or DHCP requests, they never reach the other end. The activity LED is off on the device and on at the other end.
    I tried the following without any effect:
        Different cables (2 straight, one crossed); I get the link LED lit up with each.
        Three different devices on the other end (one PC, one netbook, one router).
        Fixed ARP table entries on both sides.
        Connecting both network ports of the machine to each other; it won't ping through the cable, but will ping locally. I tried straight and crossed cables for that.
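    One further diagnostic that is cheap to try: take auto-negotiation out of the picture and watch the interface counters (values here are examples; revert with autoneg on):

        ethtool -s eth0 autoneg off speed 100 duplex full
        ip link set eth0 up
        ethtool eth0            # does "Link detected" change?
        ip -s link show eth0    # TX packet/error counters while pinging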

    Read the article

  • Can I use @import to import Kod's default style sheet into my own?

    - by Thomas Upton
    I understand that Kod is being actively developed and is prone to drastic changes in any area. I would like to modify some small things (like font face and size, or certain colors) while still being able to benefit from any changes or updates to the default Kod stylesheet. I thought that I would be able to @import the default stylesheet into my own to achieve this. This is what ~/.kod/custom.css would look like:

        @import url("file:///Applications/Kod.app/Contents/Resources/style/default.css");
        /* Change the default font face and color. */
        body {
          font-family: Menlo, monospace;
          color: #efefef;
        }

    This stylesheet was set with the following defaults command, per the comments at the top of Kod's default CSS file:

        defaults write se.hunch.kod style/url ~/.kod/custom.css

    Unfortunately, this didn't work. When I first tried to reload the style, Kod crashed. It opened fine again, but the @import statement wasn't working, and Kod crashed every time I saved the custom.css file. Am I doing something wrong? Did I write my @import statement wrong? Is that not how @import is supposed to work? Did I miss some sort of documentation or Kod Google Groups post that mentions that Kod explicitly disallows this?

    Read the article

  • Flash drive suddenly died. Why? Can I recover it?

    - by mg
    Hi, I have a flash drive that I did not use much, but after a few months of inactivity it died. I know that flash drives have a limited number of write cycles, but I am sure that this is not the problem. I tried to create a new partition table and format the drive; nothing worked. This is the output of mkfs.ext2:

        marco@pinguina:~$ sudo LANG=en.UTF-8 mkfs.ext2 -v -c /dev/sdc1
        [sudo] password for marco:
        mke2fs 1.41.11 (14-Mar-2010)
        fs_types for mke2fs.conf resolution: 'ext2', 'default'
        Calling BLKDISCARD from 0 to 4001431552 failed.
        Filesystem label=
        OS type: Linux
        Block size=4096 (log=2)
        Fragment size=4096 (log=2)
        Stride=0 blocks, Stripe width=0 blocks
        244320 inodes, 976912 blocks
        48845 blocks (5.00%) reserved for the super user
        First data block=0
        Maximum filesystem blocks=1002438656
        30 block groups
        32768 blocks per group, 32768 fragments per group
        8144 inodes per group
        Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736
        Running command: badblocks -b 4096 -X -s /dev/sdc1 976911
        badblocks: Input/output error during ext2fs_sync_device
        Checking for bad blocks (read-only test): done
        Block 0 in primary superblock/group descriptor area bad.
        Blocks 0 through 2 must be good in order to build a filesystem.
        Aborting....

    Is there something I can do to recover it?
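    Before any further formatting attempts, it is worth checking what the kernel says about the device and imaging whatever is still readable; a sketch (device name as in the question, output file names arbitrary):

        dmesg | tail -n 30                 # look for USB / I/O errors against sdc
        # GNU ddrescue (package "gddrescue") skips unreadable areas and logs them
        ddrescue -n /dev/sdc1 usbstick.img usbstick.log

    A read error at block 0, as in the badblocks output above, usually points at a failed controller or flash translation layer rather than worn-out cells, and in that case software-only recovery is unlikely.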

    Read the article

  • How to set up NTFS ACLs with Access-Based Enumeration

    - by Patrick Pellegrino
    We're in the process of migrating from Novell NetWare to a Windows 2008 R2 infrastructure (AD, file server, print server, etc.). My question is about ACLs. While NetWare and Windows are totally different, I want to be sure my thinking is sound before screwing everything up! Here's the scenario:

        F:
        |
        +-- DATA   <= shared as DATA with Access-Based Enumeration
            |
            +-- Folder 1
                +-- Team 1's Folder
                +-- Team 2's Folder
                ...

    In that case, by default, rights are inherited from F: down to the deepest folders. What we want:
        The Administrators group has full control top-down.
        From DATA, ABE lists only the folders a user has access to (e.g., if I'm in group Team 2, I see Team 2's Folder).
    From what I understand, at DATA I remove all inherited NTFS ACEs (e.g. the Users group), making sure to keep the Administrators group and the SYSTEM account. After that, I grant full control (or whatever rights are needed) on each folder to the groups or users that need access. Am I wrong? Anything I should take care of? Any help with my understanding will be very appreciated. Regards.
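    A sketch of what that looks like with icacls, run on the file server (group and folder names are the scenario's; adjust to the real AD groups and test on a copy of the tree first):

        rem convert inherited entries to explicit ones at the top of the share, then prune
        icacls "F:\DATA" /inheritance:d
        icacls "F:\DATA" /remove:g "Users"
        rem admins and SYSTEM keep full control top-down
        icacls "F:\DATA" /grant "Administrators":(OI)(CI)F "SYSTEM":(OI)(CI)F
        rem let everyone list the folders they must traverse (these folders only)
        icacls "F:\DATA" /grant "Domain Users":RX
        icacls "F:\DATA\Folder 1" /grant "Domain Users":RX
        rem each team gets Modify on its own folder only; ABE then hides the rest
        icacls "F:\DATA\Folder 1\Team 1's Folder" /grant "Team 1":(OI)(CI)M
        icacls "F:\DATA\Folder 1\Team 2's Folder" /grant "Team 2":(OI)(CI)M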

    Read the article

  • Access denied to external USB disk; update access rights fails in Windows 8

    - by gerard
    I used to work with 2 laptops (Windows Vista and Windows 7), with my work files on an external USB disk. My older laptop broke down, so I bought a new one; I had no option other than to take Windows 8. I suspect something changed with access rights, as my external disk started giving "access denied" errors on Windows 8. I was prompted (by Windows 8) to fix the access rights, which I tried to do through Properties - Security. This process was very slow and ended up saying "disk is not ready". Additionally, my external USB disk was somehow no longer recognized. Back on Windows 7, I was warned that my disk needed to be verified, which I did. In this process some files were lost (most of them I could recover from the found.00x folders, but I have a backup anyway). Also, I don't know why, but under Windows 7 all the folders showed with a lock. Then back again to Windows 8: same problem - access denied to my disk, and no way to change access rights, as it gets stuck on "disk is not ready". Now I am pretty sure there is some kind of bug or inconsistency between Windows 8 and Windows 7. I repeated steps 2 and 3 a few times. At some point I also got an access denied in Windows 7. I could restore access rights on the disk to "System" (Properties - Security - Edit, full control for the "System" group), but then I still get the same access-rights problem on Windows 8, and I get stuck in the process of restoring full control to the "System" and "Administrators" groups. I upgraded Windows 8 with the available Windows updates. It does not help.
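    If the goal is simply to get at the data again, the usual brute-force recovery from an elevated prompt on the Windows 8 machine is to take ownership and reset the ACLs; a sketch (X: is the external drive letter, assumed), noting that it discards any custom permissions on the drive:

        takeown /f X:\ /r /d y
        icacls X:\ /reset /t /c
        icacls X:\ /grant "Authenticated Users":(OI)(CI)M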

    Read the article

  • LVM2 volume group lost

    - by MrG
    I updated one of my servers, but - although I took care not to modify them - the volume groups on /dev/sdb1 were lost, although the physical volume seems to still be there:

        [root@server ~]# pvscan
          PV /dev/sda2   VG VolGroup   lvm2 [465,16 GiB / 0 free]
          PV /dev/sdb1                 lvm2 [1,82 TiB]
          Total: 2 [2,27 TiB] / in use: 1 [465,16 GiB] / in no VG: 1 [1,82 TiB]

        [root@server ~]# pvs -v
            Scanning for physical volume names
          PV         VG       Fmt  Attr PSize   PFree DevSize PV UUID
          /dev/sda2  VolGroup lvm2 a--  465,16g     0 465,16g HftbaD-MBs0-3p7D-6O13-CrzU-T9Gb-6W0ofB
          /dev/sdb1           lvm2 a--    1,82t 1,82t   1,82t dD4XZP-WStA-61xV-5Sff-ifmW-R4rR-JenHoU

        [root@server ~]# pvck -d -v /dev/sdb1
          Scanning /dev/sdb1
          Found label on /dev/sdb1, sector 1, type=LVM2 001
          Found text metadata area: offset=4096, size=1044480
          Found LVM2 metadata record at offset=10752, size=1037824, offset2=0 size2=0
          Found LVM2 metadata record at offset=9216, size=1536, offset2=0 size2=0
          Found LVM2 metadata record at offset=7168, size=2048, offset2=0 size2=0
          Found LVM2 metadata record at offset=5632, size=1536, offset2=0 size2=0

    I attempted to fix it as described here and was able to extract the 4 metadata sets listed above (using e.g. dd bs=1 skip=5632 count=1536 if=/dev/sdb1 of=output.file), but none of them includes the LV data which I'm missing. Please advise how I could access the files which should be on /dev/sdb1. Any help is appreciated!
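    For the record, the standard recovery path once a good copy of the metadata is available (one of the extracted records, or a file from /etc/lvm/backup or /etc/lvm/archive) is pvcreate --restorefile followed by vgcfgrestore; a sketch, with the VG name and backup file as placeholders and the PV UUID taken from the pvs output above - ideally run against an image of the disk rather than the disk itself:

        # sketch only - rewrites LVM metadata, does not touch the data area
        pvcreate --uuid dD4XZP-WStA-61xV-5Sff-ifmW-R4rR-JenHoU \
                 --restorefile /etc/lvm/backup/myvg /dev/sdb1
        vgcfgrestore -f /etc/lvm/backup/myvg myvg
        vgchange -ay myvg
        lvs myvg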

    Read the article

  • How do I push my initial snapshot to a subscriber server in SQL Server 2000?

    - by Kev
    I'm configuring transactional replication using the push model. The scenario is:
        The SQL Servers: SQL01 (publisher) and SQL02 (subscriber), both running SQL 2000 SP4.
        Both servers are standalone (i.e. not domain members).
        Both servers have their FQDN and NetBIOS names in their HOSTS files.
    I've managed to configure SQL01 to publish my database and configured a push subscription for SQL02 using the Push New Subscription wizard, with the Distribution Agent set to update the subscription continuously. On the Push Subscription wizard's "Initialise Subscription" page, I selected "Yes, initialise the schema and data" and ticked the "Start the Snapshot Agent to begin the initialisation process immediately" option. All the required services are running (SQL Agent). When I complete the wizard and browse the Replication - Publications folder, I can see my publication (blue book with arrow). The publication shows the push subscription and its status is Pending. If I look in the C:\Program Files\Microsoft SQL Server\Mssql\Repldata folder, I see a number of snapshot scripts and data files for each table, e.g. Products.bcp, Products.sch, Products.idx. What should happen now? Should my replicated database now (magically) appear on the subscriber server?

    Read the article

  • OpenLDAP ACLs are not working

    - by Dr I
    First things first: I'm currently working with OpenLDAP slapd 2.4.36 on Fedora release 19 (Schrödinger's Cat). I've just installed OpenLDAP with yum and my configuration is the following:

        ##### OpenLDAP Default configuration #####
        #
        ##### OpenLDAP CORE CONFIGURATION #####
        include /etc/openldap/schema/core.schema
        include /etc/openldap/schema/cosine.schema
        include /etc/openldap/schema/inetorgperson.schema
        include /etc/openldap/schema/nis.schema
        pidfile /var/lib/ldap/slapd.pid
        loglevel trace

        ##### Default Schema #####
        database mdb
        directory /var/lib/ldap/
        maxsize 1073741824
        suffix "dc=domain,dc=tld"
        rootdn "cn=root,dc=domain,dc=tld"
        rootpw {SSHA}SECRETP@SSWORD

        ##### Default ACL #####
        access to attrs=userpassword
            by self write
            by group.exact="cn=administrators,ou=builtin,ou=groups,dc=domain,dc=tld" write
            by anonymous auth
            by * none

    I launch my OpenLDAP service using:

        /usr/sbin/slapd -u ldap -h ldapi:/// ldap:/// -f /etc/openldap/slapd.conf

    As you can see, it's a pretty simple ACL which aims to give a specific group read-only access to the userPassword attribute, give the owner read and write access, require auth for anonymous binds, and refuse access to everyone else. The problem is: even using a valid user with the correct password, my ldapsearch ends with zero entries retrieved from the directory, plus I get a strange response on the result line:

        # search result
        search: 2
        result: 32 No such object
        # numResponses: 1

    Here is the ldapsearch request:

        ldapsearch -H ldap.domain.tld -W -b dc=domain,dc=tld -s sub -D cn=user,ou=service,ou=employees,ou=users,dc=domain,dc=tld

    I did not specify any filter, as I want to check that ldapsearch correctly prints only the allowed attributes.
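    Two things seem worth checking here. First, -H expects an LDAP URI (ldap://ldap.domain.tld), and result 32 on the search base can simply mean the base entry dc=domain,dc=tld was never added. Second, once any access directive is present, anything it does not match appears to fall through to the implicit default deny, so ordinary reads need an explicit catch-all rule; a sketch (the first rule is the question's, the trailing rule is the assumed addition):

        # /etc/openldap/slapd.conf (excerpt) - sketch
        access to attrs=userPassword
            by self write
            by group.exact="cn=administrators,ou=builtin,ou=groups,dc=domain,dc=tld" write
            by anonymous auth
            by * none
        access to *
            by users read
            by * none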

    Read the article

  • Cannot Start Passenger 3.0.18 Using Mountain Lion (OS X Server) and RVM

    - by LightBe Corp
    I recently did a clean install of Mountain Lion on my Mac Mini Server. I installed Passenger 3.0.18 from a gem according to the directions on http://www.phusionpassenger.com, with no errors that I could see:

        rvmsudo gem install passenger-enterprise-server-3.0.18.gem
        rvmsudo passenger-install-apache2-module

    Here are my entries in /etc/apache2/httpd.conf, with my username masked:

        LoadModule passenger_module /Users/username/.rvm/gems/ruby-1.9.3-p327/gems/passenger-enterprise-server-3.0.18/ext/apache2/mod_passenger.so
        PassengerRoot /Users/username/.rvm/gems/ruby-1.9.3-p327/gems/passenger-enterprise-server-3.0.18
        PassengerRuby /Users/username/.rvm/wrappers/ruby-1.9.3-p327/ruby

    I uncommented the following statement:

        Include /private/etc/apache2/extra/httpd-vhosts.conf

    Here is a sample virtual host entry (I have three of them in the file):

        <VirtualHost *:80>
            ServerName www.mydomain.com
            ServerAlias mydomain.com
            PassengerAppRoot /Users/username/Sites/myfolder/
            DocumentRoot /Users/username/Sites/myfolder/public
            <Directory /Users/username/Sites/myfolder/public>
                Allow from all
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    I have restarted Apache several times. Here is some information from my server:

        [~]$ ps -ef | grep Passenger
          501 18804   303   0 12:39PM ttys000    0:00.00 grep Passenger
        [~]$ rvmsudo passenger-status
        Password:
        **ERROR: Phusion Passenger doesn't seem to be running.**
        [~]$ rvmsudo passenger-config --version
        3.0.18

    I have searched online for this and was surprised that there is not much on this specific error, even though, from my understanding, Passenger has been around for a few years. I have posted this issue on the Phusion Passenger Google Groups but have not heard anything. Any help would be appreciated, the sooner the better LOL. Seriously, I need to have one of my three websites up by tomorrow evening; this is the only issue stopping that from happening. Thanks again.

    Read the article

  • 100% CPU load on Ubuntu 10.04.3 LTS 64bit

    - by deadtired
    I have 2 days since I am trying to fix this issue, with no success. The server is a mysql database server. Hardware: DELL Poweredge 1950, 2x Intel Xeon Quad Core E5345 @ 2.33GHz, 16 Gb mem, 2x 146Gb SAS (software RAID1) Software: Ubuntu 10.04.3 LTS, MySQL 5.1.41 Issue: while mysql is not used and runs with no database, everything seems alright. As soon as I install a database, it has the reason to bring all 8 cores in 100% with low memory consumption. So, you can imagine the load average goes high (I saw 212 load average for the first time). The server doesn't become unresponsive, but you can see it's slow while browsing the project installed. Additional info: the database used is not more than 24MB and it was moved from a server with less resources and a lot more larger databases. So it's not the database/project. my.cnf is not a reason also, as I used both default one and the one I use on the same distribution on another server.What is interesting is that mysql doesn't close any process and runs to the limit of the max_connections. Logs are quiet. Nothing there. I switched to this Ubuntu version after I suspected some problems in the newly Ubuntu 11.10 server. This one worked alright for an hour after I made a kernel upgrade to 3.0.1 (it was using the memory also) I tested disk speed and seems alright. Some more output on the running server: dstat -cndymlp -N total -D total 3: htop command: Idea? Did anyone meet the same problem? Any fix you can think of?

    Read the article

  • ssh many users to one home

    - by filippo
    Hiya, I want to allow some trusted users to scp files into my server (to a specific user), but I do not want to give these users a home directory or an SSH login. I'm having problems understanding the correct setup of users/groups I have to create to allow this to happen. Here's an example. Given:
        MyUser@MyServer; MyUser belongs to the group MyGroup; MyUser's home is, let's say, /home/MyUser
        SFTPGuy1@OtherBox1
        SFTPGuy2@OtherBox2
    They give me their id_dsa.pub's and I add them to my authorized_keys. I reckon I'd then do on my server something like:
        useradd -d /home/MyUser -s /bin/false SFTPGuy1   (and the same for the other)
    and lastly:
        useradd -G MyGroup SFTPGuy1   (then again, for the other guy)
    I'd then expect the SFTPGuys to be able to sftp -o IdentityFile=id_dsa MyServer and to be taken to MyUser's home... Well, this is not the case: sftp just keeps asking me for a password. Could someone point out what I'm missing? Thanks a mil, f.
    [EDIT: Messa on StackOverflow asked me whether the authorized_keys file was readable by the other users (members of MyGroup). It's an interesting point; this was my answer: well, it wasn't (it was 700), but then I changed the permissions of the .ssh dir and the auth file to 750, though still with no effect. I guess it's worth mentioning that my home dir (/home/MyUser) is also readable by the group; most dirs are 750 and the specific folder where they'd drop files is 770. Nevertheless, about the auth file, I reckon the authentication would be performed by the local user on MyServer, wouldn't it? If so, I don't understand the need for other users to read it... well, just wondering.]
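    One detail that commonly bites this exact layout: with StrictModes on (the default), sshd refuses keys whose file or containing directories are not owned by the user logging in, and here the home belongs to MyUser while the login accounts are SFTPGuy1/2. A hedged sketch of one way around it is to keep the keys outside the shared home and restrict the guest group to SFTP (group name and paths are assumptions):

        # /etc/ssh/sshd_config (excerpt) - sketch
        AuthorizedKeysFile /etc/ssh/authorized_keys/%u
        Subsystem sftp internal-sftp
        Match Group sftponly
            ForceCommand internal-sftp
            AllowTcpForwarding no

        # on MyServer, per guest account:
        #   groupadd sftponly
        #   useradd -d /home/MyUser -s /bin/false -G sftponly,MyGroup SFTPGuy1
        #   install -D -m 644 -o root -g root SFTPGuy1_id_dsa.pub /etc/ssh/authorized_keys/SFTPGuy1

    Running sshd in debug mode (or watching /var/log/auth.log) will show exactly why a key is being rejected, which is the quickest way to confirm whether the ownership check really is the culprit.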

    Read the article

  • Broken Python installation on CentOS 5.8

    - by Beckett
    I have already searched for a solution to my problem via Google and Stack Overflow's search facility, but haven't found anything related specifically to it. Here's the problem: I needed Python 2.7.3 on a CentOS 5.8 machine which has only Python 2.4.3 preinstalled. There is neither a suitable version in its repositories nor a way to upgrade the installed version. That's why I decided to build Python from source. But I made a mistake: instead of make altinstall I ran make install, thus changing the default Python of the installation. It was before I found this article - How to install Python 2.7.3 on CentOS 6.2. I guess versions 5.8 and 6.2 aren't so different that the article would be inapplicable. After installing the new Python version I installed pip, but once I tried to invoke it, I got a "No module named pkg_resources" error. To solve this issue I installed setuptools from the repository, but that only led to another error: "Distribution Not Found". My final step was to follow the guide I posted the link to, but I was unable to perform the last step: the easy_install-2.7 virtualenv command threw a "-bash: /usr/local/bin/easy_install-2.7: .: bad interpreter: Permission denied" error. Now when I try to invoke pip or pip-2.7, both commands raise the same error with different binary names after "-bash:". Is there any way to fix this problem, so I could install the new Python version (2.7.3) alongside the preinstalled one (2.4.3) according to the guide? Any help will be appreciated. P.S.: yum is working fine, although it needs Python to function, so I hope the damage I unknowingly caused isn't very severe. Also, I'm not a native English speaker, so I apologize for possible occasional grammatical and/or spelling errors.
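    A sketch of the usual way out of this state (paths are the stock CentOS ones; back up before touching /usr/bin/python, since yum depends on it):

        ls -l /usr/bin/python*                     # see what "python" points at now
        ln -sf /usr/bin/python2.4 /usr/bin/python  # give yum its 2.4 interpreter back
        head -n1 /usr/bin/yum                      # should read #!/usr/bin/python
        # then, from the Python 2.7.3 source tree:
        make altinstall                            # installs /usr/local/bin/python2.7 only
        /usr/local/bin/python2.7 -V

    The "bad interpreter: Permission denied" from easy_install-2.7 suggests its shebang line points at an interpreter that is missing or not executable, so that script is worth re-checking once the interpreters are straightened out.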

    Read the article

  • Problems installing Windows service via Group Policy in a domain

    - by CraneStyle
    I'm reasonably new to Group Policy administration and I'm trying to deploy an MSI installer via Active Directory to install a service. In reality, I'm a software developer trying to test how my service will be installed in a domain environment. My test environment:
        a Server 2003 domain controller
        about 10 machines (between XP SP3 and Server 2008), all joined to my domain
        no other real setup or Active Directory configuration apart from things like getting DNS right.
    I suspect that I may be missing a step in Group Policy that says I need to grant an explicit permission somewhere, but I have no idea where that might be or what it will say. What I've done: I followed the documentation from Microsoft in How to Deploy Software via Group Policy, so I believe all those steps are correct (I used the UNC path, verified NTFS permissions, and verified that the computers and users are members of groups that are assigned to receive the policy, etc.).
        If I deploy the software via the Computer Configuration, when I reboot the target machine I get the following: when the computer starts up it logs Event ID 108 and says "Failed to apply changes to software installation settings. Software changes could not be applied. A previous log entry with details should exist. The error was: An operations error occurred." There are no previous log entries to check, which is weird, because if it ever actually tried to invoke the Windows Installer it should log any failure of my application's installer. If I open a command prompt and manually run msiexec /qb /i \\[host]\[share]\installer.msi it installs the service just fine.
        If I deploy the software via the User Configuration, when I log that user in, the event log says that software changes were applied successfully, but my service isn't installed. However, when deployed via the User Configuration, even though it's not installed, if I go to Control Panel - Add/Remove Programs and click on Add New Programs, my service installer is advertised and I can install/remove it from there (this does not happen when it's assigned to computers).
    Hopefully that wall of text was enough information to get me going; thanks all for the help.
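    One permission that is easy to miss with computer-assigned packages: the MSI is fetched by the machine account at boot, so the computer accounts (e.g. "Domain Computers"), and not just the users, need Read access on both the share and the NTFS folder. A sketch, assuming the share lives on a 2008-era host where icacls is available (path and share name are hypothetical):

        rem run on the machine hosting the deployment share
        icacls D:\Deploy /grant "Domain Computers":(OI)(CI)RX "Authenticated Users":(OI)(CI)RX
        rem then on a test client: reboot, or force policy and re-check the Application event log
        gpupdate /force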

    Read the article
