Search Results

Search found 20755 results on 831 pages for 'custom map'.


  • lsof not showing what port a proc is listening on

    - by ericslaw
    I have many processes on a box listening on several ports, and I am trying to map ports to pids. The problem is that lsof is not telling me which ports belong to which process. Given an apache listening on port 80, I can see it listening via netstat:

        user@host% netstat -an | grep LISTEN | grep 80
        *.80  *.*  0  0  49152  0  LISTEN

    But when I try to map port 80 to a pid I get nothing:

        user@host% lsof -iTCP:80 -t

    When I try seeing what sockets that specific pid is using I get:

        user@host% lsof -lnP -p31 -a -i
        COMMAND   PID USER FD  TYPE DEVICE        SIZE/OFF NODE NAME
        libhttpd. 31  0    15u IPv4 0x6002d970b80 0t0      TCP  *:65535 (LISTEN)

    Notice the *:65535 in the NAME column. Does anyone know why lsof is not reporting the port in use? I am running as root. I am using a mix of lsof and OS versions: lsof v4.77 on Solaris 10 SPARC, lsof v4.72 on Redhat 4.2, etc. I know that Linux solutions can use "netstat -p", so I guess I'm only looking for why Solaris isn't working, but I find lsof is frequently silent and not showing me the data I expect.
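
    For what it's worth, a workaround often used on Solaris 10 when lsof stays silent is to ask pfiles(1) which process holds the port. A minimal sketch (run as root; the script name and port argument are just illustrative):

        #!/bin/bash
        # port2pid.sh: map a TCP port to pids on Solaris 10 via pfiles(1),
        # bypassing lsof entirely. Usage: ./port2pid.sh 80
        port="${1:?usage: $0 port}"
        for pid in $(ls /proc); do
            # pfiles prints a "port: N" field for every bound socket
            if pfiles "$pid" 2>/dev/null | grep -q "port: ${port}\$"; then
                echo "pid $pid ($(ps -o comm= -p "$pid")) holds port $port"
            fi
        done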


  • Administrative shares in Windows 7 Pro not visible

    - by Chris Tybur
    My desktop machine has a clean install of Windows 7 Professional. For some reason the standard administrative shares Admin$, C$, D$, etc are not visible, either in Computer Management - Shared Folders - Shares or via net share. I also have a laptop with a clean install of Windows 7 Professional, and I can see the admin shares in both places. As such, I can map to \\laptop\c$ from the desktop, but I can't map to \\desktop\c$ from the laptop. I pretty much took the defaults during the Windows 7 installations. I've tried adding LocalAccountTokenFilterPolicy to the registry on the desktop, but that didn't work. On the desktop I've also disabled UAC, turned off Windows firewall, removed it from a homegroup, made sure file and printer sharing is turned on, but nothing has worked. There is some subtle difference between the two machines that I can't seem to find. I'm logging into both machines using a local account that is in the Administrators group. Both accounts have the same name and password. I really don't want to have to create a new share for the desktop's C drive, especially since C$ is visible and working on the laptop and therefore I should be able to make it work on the desktop. Any idea why the admin shares would work on one machine and not another? Or why LocalAccountTokenFilterPolicy would fail?


  • FFmpeg convert video w/ dropped frames, out of sync

    - by preahkumpii
    I recorded a video using Bandicam with the MJPEG encoder to get the least amount of lag. Now I am trying to convert that massive file to an h264 avi using ffmpeg. I know there are dropped frames in the video stream, more than 100 in the first two minutes, which I assume is simply because Bandicam dropped some when it couldn't keep up. So when I convert the file to h264, the video and audio are out of sync, and they drift further apart as the output video progresses. Here is my basic command in ffmpeg:

        ffmpeg -i "C:\...\input.avi" -vcodec libx264 -q 5 -acodec libmp3lame -ar 44100 -ac 2 -b:a 128k "C:\...\output.avi"

    I have tried everything I can think of, including:

        -itsoffset [-]00:00:01: tried before and after the input file. This doesn't work, because the streams drift further out of sync as the video progresses.
        -async 1: doesn't work.
        -vsync 1: doesn't work, but it does show dropped frames being duplicated.
        Two inputs of the same file mapped with -map 0:0 -map 1:1: doesn't work.

    The source plays just fine. Any ideas how to convert it with ffmpeg and keep the audio and video synced? Thanks.
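
    For what it's worth, an approach often suggested for capture files with dropped frames is to force a constant output frame rate while resampling the audio against its timestamps, so gaps get padded instead of accumulating as drift. A hedged sketch (the paths and the 30 fps rate are placeholders, and it assumes an ffmpeg build with the aresample filter):

        ffmpeg -i "C:\...\input.avi" ^
            -vcodec libx264 -crf 20 -r 30 -vsync cfr ^
            -af "aresample=async=1000" ^
            -acodec libmp3lame -ar 44100 -ac 2 -b:a 128k ^
            "C:\...\output.avi"

    The async=1000 argument lets the resampler stretch or squeeze the audio by up to 1000 samples per second to stay aligned with the video timestamps.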


  • Should I expect ICMP transit traffic to show up when using debug ip packet with a mask on a Cisco IOS router?

    - by David Bullock
    So I am trying to trace an ICMP conversation between 192.168.100.230/32, which arrives via an EZVPN interface (Virtual-Access 3), and 192.168.100.20 on BVI4.

        # sh ip access-lists 199
        10 permit icmp 192.168.100.0 0.0.0.255 host 192.168.100.20
        20 permit icmp host 192.168.100.20 192.168.100.0 0.0.0.255
        # sh debug
        Generic IP:
          IP packet debugging is on for access list 199
        # sh ip route | incl 192.168.100
        192.168.100.0/24 is variably subnetted, 2 subnets, 2 masks
        C    192.168.100.0/24 is directly connected, BVI4
        S    192.168.100.230/32 [1/0] via x.x.x.x, Virtual-Access3
        # sh log | inc Buff
        Buffer logging: level debugging, 2145 messages logged, xml disabled,
        Log Buffer (16384 bytes):

    OK, so from my EZVPN client with IP address 192.168.100.230, I ping 192.168.100.20. I know the packet reaches the router across the VPN tunnel, because:

        policy exists on zp vpn-to-in
        Zone-pair: vpn-to-in
        Service-policy inspect : acl-based-policy
          Class-map: desired-traffic (match-all)
            Match: access-group name my-acl
            Inspect
              Number of Half-open Sessions = 1
              Half-open Sessions
                Session 84DB9D60 (192.168.100.230:8)=>(192.168.100.20:0) icmp SIS_OPENING
                  Created 00:00:05, Last heard 00:00:00
                  ECHO request
                  Bytes sent (initiator:responder) [64:0]
          Class-map: class-default (match-any)
            Match: any
            Drop
              176 packets, 12961 bytes

    But I get no debug log, and the debugging ACL hasn't matched:

        # sh log | inc IP:
        #
        # sh ip access-lists 198
        Extended IP access list 198
            10 permit icmp 192.168.100.0 0.0.0.255 host 192.168.100.20
            20 permit icmp host 192.168.100.20 192.168.100.0 0.0.0.255

    Am I going crazy, or should I not expect to see this debug log? Thanks!


  • Setup my domain at Whois.com as nameservers for a Dedicated Server with Kloxo

    - by BoDiE2003
    Hello, this is my first time at ServerFault; I'm a Stack Overflow user. I'm facing the following problem, and maybe it's just me, but I can't find proper guides for setting up a domain and nameservers for a dedicated box. I have a domain at whois.com and a dedicated server at Reliable Hosting Services. The server has 5 IPs, and I know that I need 2 of them for the nameservers. Right now my domain at whois.com is using the nsX.whois.com nameservers, and it has 2 child nameservers, ns1.mydomain.com and ns2.mydomain.com, pointing to 2 of the IPs from my server. What's next? I still cannot set that domain as my main server domain, since Kloxo says:

        To map an IP to a domain, the domain must ping to the same IP, otherwise, the domain will stop working. The domain you are trying to map this IP to, doesn't resolve back to the IP, and so it cannot be set as the default domain for the IP.

    I'm stuck at this step. What remains to be done to get my nameservers working and my main domain assigned to my server? Thank you very much and happy new year!!
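
    As a sanity check, it helps to confirm that the child nameservers resolve to the server's IPs and that the domain resolves back to the IP being assigned, since that is exactly what the Kloxo error is testing. A few illustrative dig queries (mydomain.com is a placeholder):

        # Do the registrar's glue records point where you think?
        dig +short ns1.mydomain.com A
        dig +short ns2.mydomain.com A

        # Does the domain itself resolve to the server IP being assigned?
        dig +short mydomain.com A

        # Do your own nameservers answer authoritatively for the zone?
        dig @ns1.mydomain.com mydomain.com SOA +norecurse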


  • Faking a Linux environment without chroot

    - by Pascal
    For a university project I want to test a C++11 program on a 32-core machine. Unfortunately the machine has Ubuntu 12.04 with GCC 4.6 installed (we need GCC 4.7 because of some C++11 threading features). In such an environment I would normally run a chroot with a custom Linux (say a debootstrap with Ubuntu 12.10). Since we don't get root access on the machine, we can't use chroot. So far I have prepared a run-time environment for our code using debootstrap, compiled the code in the debootstrap environment, then copied it onto the server (using rsync). In order to run our C++ code I set LD_LIBRARY_PATH:

        export LD_LIBRARY_PATH=~/debootstrap/usr/lib/:~/debootstrap/lib64/:~/debootstrap/usr/lib/x86_64-linux-gnu/:~/debootstrap/lib/x86_64-linux-gnu/:$LD_LIBRARY_PATH

    and so far our code seems to run. I'm however stuck with our Python code. It doesn't seem to be sufficient to set the paths manually:

        export PYTHONPATH=~/debootstrap/usr/lib/python2.7/dist-packages:~/debootstrap/usr/lib/python2.7:~/debootstrap/usr/lib/python2.7/plat-linux2:~/debootstrap/usr/lib/python2.7/lib-tk:~/debootstrap/usr/lib/python2.7/lib-dynload:~/debootstrap/usr/local/lib/python2.7/dist-packages:~/debootstrap/usr/lib/pymodules/python2.7:~/debootstrap/usr/lib/python2.7/dist-packages/PIL:~/debootstrap/usr/lib/python2.7/dist-packages/gtk-2.0:~/debootstrap/usr/lib/python2.7

    Executing our script results in:

        ImportError: No module named _path

    Is there an easier way to accomplish a "fake" chroot than just overriding and creating environment variables? Note that I need Python because we created a custom C++ Python module in order to run our tests. Maybe I should split this into two questions.
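
    For what it's worth, if it is available or can be compiled into a home directory, a user-space chroot such as proot can present the debootstrap tree as / without root privileges, which sidesteps the per-interpreter path variables entirely. A hedged sketch (assuming proot is on the PATH; the script path is a placeholder):

        # Run a shell inside the debootstrap tree without root
        proot -R ~/debootstrap /bin/bash

        # Or run the debootstrap's own Python directly against a test script
        proot -R ~/debootstrap /usr/bin/python2.7 /path/to/test_script.py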


  • Windows 7 Professional Cannot Connect to Share - Wrong password

    - by henryford
    I know that this question has been asked a few times before, but every solution I found didn't yield any results on my end, and I can't get my head around it: when I am trying to connect to a share on the network, I always get the response "The specified network password is incorrect". However, the password is definitely correct, and it works if I connect from another machine. I changed the LAN Manager authentication level to "Send LM & NTLM - use NTLMv2 session security if negotiated", I configured Kerberos encryption types to include all suites, and rebooted (several times), but still no luck. I can connect if I use my regular account with which I am logged in, but I need to connect with a different user, since my login user has insufficient privileges on the share. When I do that, the error above comes up. I'm really frustrated at the moment; this problem is driving me crazy. I'd be grateful for any possible solution. At the moment I'm using a workaround: I connect to a different machine via RDP, log in with the user I have to use for the network-share connection, and then I can map the drive and copy/paste from the RDP session to my local workstation. This also works when I connect via RDP with my current login user and map the drive with the other user who has sufficient privileges. Thanks in advance, Thomas


  • User-unique .vimrc file for servers as root user

    - by Scott
    I'm getting thrown into an IDE war at the office, where multiple users have root access on our servers and like to have everything their own way with VIM. Unfortunately, our servers are locked down enough that if you want to do anything, you need root access. Although this is frowned upon, we get tired of typing sudo before each command, which would require constantly retyping the wonderfully complex passwords mandated on us, so naturally we all just execute sudo su - upon login to avoid all of this. Of course, when it comes to VIM and custom .vimrc files, we are often stepping on someone else's custom .vimrc, and these files contain some whacked-out functionality that may override behavior we have no idea about, much less the patience to learn. When working as root on a Linux box, is there any way for all of us to maintain our own .vimrc files without overwriting each other's every time someone wants to use VIM? Ideally a universal solution across all servers would be best, as we have many virtual machines, all with VIM installed, and we do have our per-user Microsoft Windows home directories mounted on the servers under /home/username. Any recommendations for accommodating this?
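
    One low-tech sketch, under the assumption that engineers switch identities with sudo -i (which preserves $SUDO_USER) rather than sudo su - (which discards it): root's ~/.bashrc can point vim at the invoking user's own vimrc, so nobody has to overwrite root's copy.

        # In root's ~/.bashrc; assumes arrival via 'sudo -i' so $SUDO_USER is set
        if [ -n "$SUDO_USER" ] && [ -r "/home/$SUDO_USER/.vimrc" ]; then
            # -u tells vim to load only the named vimrc instead of root's
            alias vim='vim -u "/home/$SUDO_USER/.vimrc"'
        fi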


  • Recommended programming language for linux server management and web ui integration.

    - by Brendan Martens
    I am interested in making an in-house web UI to ease some of the management tasks I face administrating many servers; think Canonical's Landscape. This means doing things like applying package updates simultaneously across servers, perhaps installing a custom .deb (I use Ubuntu/Debian), reviewing server logs, executing custom scripts, and viewing status information for all my servers. I hope to be able to reuse existing command-line tools instead of rewriting the exact same operations in a different language myself. I really want to develop something that allows me to continue managing at the ssh level but offers the power of a web interface for easily applying the same infrastructure-wide changes; the two should not be mutually exclusive. What are some recommended programming languages for doing this kind of development and tying it into a web UI? Why do you recommend the language(s) you do? I am not an experienced programmer, but I view this as an opportunity to scratch some of my own itches as well as become a better programmer. I do not care specifically whether one language is harder than another; I am more interested in picking the best tools for the job from the beginning. Feel free to recommend any existing projects except Landscape (not free), Ebox (not entirely free, and more than I am looking for), and Webmin (I don't like it; it feels clunky and does not integrate well with the "Debian way" of maintaining a server, imo). Thanks for any ideas!


  • Are these three brand new sticks of RAM really dead?

    - by David Brown
    I'm working on a Dell Dimension 4700 desktop for a friend. It came with 512MB of DDR2 RAM (two sticks of 256MB). One morning it started blue-screening on startup with no helpful error messages. It refused to boot into any form of Windows installation, including Safe Mode, the original recovery disk, and my custom Windows PE disk. It did boot into the Ultimate Boot CD, so I ran memtest86, which reported errors everywhere. I removed one stick of RAM and the system booted up just fine. I moved the remaining stick into each slot and the system continued to operate normally, so I concluded that the stick I removed was dead.

    I ordered an exact replacement, along with 2 more sticks of 256MB DDR2 (again, exactly the same as the original), bringing the total system memory to 1GB. Upon installing the three brand new sticks, the system blue-screened again, this time stating that win32k.sys attempted to write to read-only memory. I inserted my custom Windows PE disk in order to get a better look at the memory dump with BlueScreenView, but it refused to boot and produced another blue screen, this time without an error message. I removed each new stick one by one, restarting each time. It continued to blue-screen until I was left with only the original stick. I then tried inserting the new sticks in various orders, but this only produced more blue screens. I reinserted all three sticks (along with the original) and ran memtest86 again, which reported errors all over the place.

    So now I'm right back where I started. I don't think it could be the slots themselves, because I can plug the original stick into any slot and it works just fine. System Setup, however, reports each stick correctly and shows the total as 1GB. It just seems strange to me that all three brand new sticks of RAM could be dead on arrival. Is there something I missed? Or should I just go ahead and RMA them?


  • Server 2003 and SSL Certificates

    - by Keith Stokes
    I have a Windows 2000 domain with dozens of Windows 2000 servers and a few 2003 servers. Each server runs a custom app talking to a 3rd party using self-signed certificates. To help troubleshooting, we've created a custom test app. The 2000 servers are able to talk within seconds. The 2003 servers take anywhere from 10-30 seconds using a domain account, and much less, usually under 5 seconds, using a local account. The only exception to the local account performance is a new account, which is slow initially and then faster. If you leave the test app open and reconnect repeatedly, it talks in seconds. If you leave it open for somewhere between 1 and 2 hours, it reverts back to the previous 10 seconds, so obviously something is being cached. Installing the destination certificates in the local 2003 server store makes no difference. I've installed the certificates in AD, and that apparently makes domain accounts work in 9-12 seconds, versus the 30 seconds that was usual before. Manually clearing the certificate store on the 2003 server makes no difference. I'm at a loss as to where the certs might be cached, and whether I'm using some sort of domain certificate store that's hiding from me.


  • Managing rolling deployments in the cloud

    - by Josh Nankin
    Recently I've been experimenting with various cloud management tools like RightScale, Scalr, and custom scripts for managing a variety of servers, each hosting several roles (app, db, load balancer, job queues, etc). The one thing I find lacking in most solutions is a way to do rolling deployments, i.e. running deployments sequentially across a number of servers with the same role. For instance, I don't want to rebuild all of my webservers at the same time, as that will almost certainly result in some downtime or 500s for my customers. I'd rather have one or two servers rebuild at a time, while the other servers are still available to handle requests. The obvious alternative is to launch new servers that automatically update themselves on boot, but this isn't as cost-effective, and it most likely requires more time for the build to complete (it's faster to build on an existing server than to launch a new server and kill old ones). We've all heard of the big companies having the famous "push to build" button (companies like Twilio, Etsy, etc.), but it seems that they all have custom implementations of it. I'm not talking about a simple ssh loop, clusterssh, or even mcollective; I would prefer something with a nice simple interface that allows me to specify something like a RightScript or a Scalr script to run on a set of servers with a specific role, and that builds them sequentially. Does anyone know of easy ways to get this done, or is this a candidate for a new open-source project?


  • Reverse Proxy issues IIS on Windows Server 2012

    - by ahwm
    I've tried searching, but nothing seems to be working. I have a feeling it might be due to our custom rewrite module. Here is the excerpt from the web.config that sets it up:

        <modules runAllManagedModulesForAllRequests="true">
          <add name="UrlRewriteModule" type="EShop.UrlRewriteModule"/>
        </modules>

    EShop.UrlRewriteModule is a custom class in App_Code which handles incoming requests. I have set up the rewrite rules, but it doesn't seem to want to work. I'm inclined to think that our rewrite class is intercepting requests before the proxy rules run and deciding that the page doesn't exist. Here's what we're trying to accomplish: we are working on a new site for a client, but they have a forum that they're not likely to want to move. I set up a new subdomain to point to the new server while the site is being completed (before we go live) and want the reverse proxy to forward test.domain.com/forum to www.domain.com/forum. After the site goes live, we'll need to forward using an IP address instead. I've set up a reverse proxy successfully with nginx, but we didn't want to set up another server if we didn't need to. Ideas?


  • Replacement for public folder workflow, I'm confused as to how sharepoint does it.

    - by RodH257
    For years Microsoft has been slowly phasing out public folders; perhaps Exchange 2010 really is the LAST TIME they'll be shipped... I've heard SharePoint is the replacement, but I don't fully understand it. Can someone give me an idea of how to replace this workflow? In our office we have projects, and each has a project number, e.g. 10353. Each job has a public folder, organized in a hierarchy like:

        Projects
          Year
            Folder
              Subfolders

    The main subfolder we use is for general correspondence. When an email is received that relates to a project, it is dragged and dropped (or moved via right-click) into a public folder. Adding public folder favourites for each user helps with this. When an email is sent, we have a custom email form, which is the default email form but with a project number field next to the subject line. When you enter the job number in there, it carbon-copies our filing system, which reads the job number and puts the email in the public folder for you. If you need to refer to emails, you go to the public folder and find them there. This isn't the best with large jobs, but it works OK. Now, I have limited experience with SharePoint (well, WSS); we've used it to build some neat discussion boards/polls etc. as an intranet site, but I haven't seen much of its integration with Outlook. The great thing about our solution is how tightly it integrates with Outlook, which is exactly where the emails are. If you want to forward an old email, you go to the public folder and forward it; simple. Any solution that replaces it should be at least as easy as this. Improvements we would like are better searching of emails and better support in Exchange (i.e. future versions); also, custom forms in Outlook (the VBA kind) are being phased out, so avoiding those would be good. Does SharePoint do this? Or what solutions handle this kind of thing?


  • How do I do a mail merge that includes images?

    - by Ian Ringrose
    I am trying to find out the practicalities of doing a mail merge when each "record" to be merged on includes some images. I need to print:

        letters
        envelopes

    Both the letters and the envelopes have:

        fixed text
        fixed images
        text that comes from the mail merge record
        images that come from the mail merge record

    I don't know if all images will be the same size for every record, so a bit of simple "on the fly" automatic formatting may be needed. I need to be able to repeat a single item if I get a problem (e.g. when folding the letter). What problems am I likely to have? Is Word 2007 up to this sort of mail merging, or should I be looking at a report-writing tool? How do I restart a print run after a printer jam etc.? What format should I store the "records" and their images in? E.g. can standard software cope with images that are stored in separate files named after the "CustomerId" that is in the "record"? (I can write custom software if needed, but would rather use standard "off-the-shelf" software for the printing. I am planning on custom software for the data creation, so I can output in whatever format is needed.)


  • Puppet write hosts using api call

    - by Ben Smith
    I'm trying to write a puppet function that calls my hosting environment (Rackspace Cloud at the moment) to list servers, then update my hosts file. My get_hosts function is currently this:

        require 'rubygems'
        require 'cloudservers'
        module Puppet::Parser::Functions
          newfunction(:get_hosts, :type => :rvalue) do |args|
            unless args.length == 1
              raise Puppet::ParseError, "Must provide the datacenter"
            end
            DC = args[0]
            USERNAME = DC == "us" ? "..." : "..."
            API_KEY = DC == "us" ? "..." : "..."
            AUTH_URL = DC == "us" ? CloudServers::AUTH_USA : CloudServers::AUTH_UK
            DOMAIN = "..."
            cs = CloudServers::Connection.new(:username => USERNAME, :api_key => API_KEY, :auth_url => AUTH_URL)
            cs.list_servers_detail.map {|server|
              server.map {|s| { s[:name] + "." + DC + DOMAIN => { :ip => s[:addresses][:private][0], :aliases => s[:name] }}}
            }
          end
        end

    And I have a hosts.pp that calls this and 'should' write it to /etc/hosts:

        class hosts::us {
          $hosts = get_hosts("us")
          hostentry { $hosts: }
        }

        define hostentry() {
          host{ $name:
            ip => $name[ip],
            host_aliases => $name[aliases]
          }
        }

    As you can imagine, this isn't currently working, and I'm getting a 'Symbol as array index at /etc/puppet/manifests/hosts.pp:2' error. I imagine that once I've realised what I'm currently doing wrong there will be more errors to come. Is this a good idea? Can someone help me work out how to do this?


  • Clone Microsoft VM > SharePoint issues

    - by Rob
    Hi, We have an existing (production) Hyper-V VM that I want to clone to create a staging server. This server is NOT on a domain/Active Directory; it uses all local computer accounts. It runs Windows Server 2008, SharePoint 2007, SQL Server 2008, Reporting Services, and some custom line-of-business web applications. We rename the server etc. as per the documentation we have, but are having trouble specifically with SharePoint. Some of the sites do not come up, and there are issues adding/updating site settings such as site owners. Does anyone know of a definitive resource for this process? Thanks

    Update: We seem to have most of it working, but the Shared Services Provider, and the SSP's admin site, are not working. In addition, the custom web parts that reference IIS Session objects are failing (which seems to be related to the Shared Services Provider). All the Windows user accounts for the various services are renamed during the server cloning, but the Windows account names in SQL Server are not automatically changed. We've tried to change them in SQL Server, but it still does not seem to work. It seems like a lot of work, and I'm thinking that there must be some guidance out there. We are also getting this:

        NullReferenceException: Object reference not set to an instance of an object.
        Microsoft.Office.Server.Administration.SqlSessionStateResolver.System.Web.IPartitionResolver.ResolvePartition(Object key) +135


  • Subdomain is preventing my search results from rising as expected in page rank

    - by culov
    My problem is that I have a site which requires a dedicated page for every city I choose to support. Early on, I decided to use subdomains rather than a directory after my domain (i.e. I used la.truxmap.com rather than truxmap.com/la). I realize now that this was a major mistake, because Google seems to treat la.truxmap.com as a completely different site from ny.truxmap.com. So, for instance, if I search "la food truck map" my site will be near the top, but if I search "nyc food truck map" I'm nowhere in sight, because ny.truxmap.com wouldn't rank very high by itself, and it doesn't get the boost that it ought to be getting from the better-known la.truxmap.com. So a mistake I made a year ago is now haunting my page rank. I'd like to know the most painless way of resolving my dilemma. I have received so much press at la.truxmap.com that I can't just kill the site, but could I redirect all requests at la.truxmap.com to truxmap.com/la, and do the same for all supported cities, without trashing the current, satisfactory page rank I'm getting from la.truxmap.com?

    EDIT: I left out some critical information. I am using Google Apps to manage my domain (that is, to add the subdomains) and Google App Engine to host my site. Thus, Google Apps provides a simple mechanism to mask truxmap.appspot.com (the App Engine domain) as la.truxmap.com, but I don't see how I can mask it as truxmap.com/la. If I can get this done, then I can just 301 redirect la.truxmap.com to truxmap.com/la as suggested below. Thanks so much!


  • EC2 kernel decision and issues with creating a new machine with my AMI

    - by roacha
    I could really use some advice. I started a new instance on EC2 using Amazon's AMI, and during the deployment process I selected a Kernel ID of "Use Default". I then configured my server the way that I wanted and took a snapshot of it. I then created my own AMI to create new servers with. When I try to create a new server with this AMI, the server fails to start and I get the error:

        EXT3-fs: sda1: couldn't mount because of unsupported optional features (240).

    This appears to happen because I am selecting a kernel ID of "Use default" again when building my second server. I have read that in order for this to work I need to choose the same kernel ID that was used by my original server. I have deleted my original server and don't know what it was using. What is the best process to follow in order to avoid these issues? Should I choose "Use Default" for my original server? How do you know which kernel it selected? Should I just document this and always specify it during the deployment of my next servers using my custom AMI? Or should I choose a specific kernel ID during the initial build and always use that one going forward, hoping Amazon never retires it? Thanks for any advice!
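
    For reference, a running instance can report which kernel it actually booted, so the AKI can be recorded before the snapshot is taken. A hedged sketch using the instance metadata service and the EC2 API tools (the instance and AKI IDs are placeholders):

        # From inside the running instance: which AKI did "Use Default" resolve to?
        curl -s http://169.254.169.254/latest/meta-data/kernel-id

        # From outside, inspect an instance's kernel attribute
        ec2-describe-instance-attribute i-12345678 --kernel

        # Then pin that AKI when registering the custom image
        ec2-register --kernel aki-xxxxxxxx ...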


  • Emacs 24.1: How do I restore i-search Ctrl-Y behavior from older versions?

    - by Eric
    In emacs 24.1, when you do Ctrl-Y in an interactive search, it yanks the kill ring into the search string ("it pastes the clipboard contents", in any-other-app's language) and tries to match it. In the last 20 versions or so, pressing Ctrl-Y matched the rest of the current line. I have two very common use cases:

        Match this line, revert the buffer, and search for the line
        (less often:) Where else is this text in the buffer?

    I tried modifying /lisp/isearch.el, switching the bindings for isearch-yank-line (which I want) and isearch-yank-kill (which I'm fine binding to the ridiculous \M-s\C-e key sequence). But I don't think this file even gets loaded; if I explicitly load it, I still get the 24.1 behavior. Here's my change:

        (add-hook 'isearch-mode-hook
                  (lambda ()
                    (define-key isearch-mode-map "\C-y" 'isearch-yank-line)
                    (define-key isearch-mode-map "\M-s\C-e" 'isearch-yank-kill)))

    No change in the behavior. I even tried hacking isearch.el, still no change. This is on Windows, btw, but I suspect it doesn't matter. Could someone tell me how I can restore the old binding?


  • Creating a partitioned raid1 array for booting a debian squeeze system

    - by gucki
    I'd like to have the following raid1 (mirror) setup: /dev/md0 consists of /dev/sda and /dev/sdb. I created this raid1 device using:

        mdadm --create --verbose /dev/md0 --auto=yes --level=1 --raid-devices=2 /dev/sda /dev/sdb

    This gave a warning about the metadata being 1.2 and my system possibly not booting. I cannot use 0.9 because it restricts the size of the raid to 2TB, and I assume the grub shipped with the latest Debian (squeeze) should be able to handle metadata 1.2. So then I created the needed partitions like this:

        # creating new label (partition table)
        parted -s /dev/md0 mklabel 'msdos'

        # creating partitions
        sfdisk -uM /dev/md0 << EOF
        0,4096
        ,1024,S
        ;
        EOF

        # making root filesystem
        mkfs -t ext4 -L boot -m 0 /dev/md0p1

        # making swap filesystem
        mkswap /dev/md0p2

        # making data filesystem
        mkfs -t ext4 -L data /dev/md0p3

    Then I mounted the root partition, copied a minimal Debian install inside, and temporarily mounted /dev, /proc and /sys. After this I chrooted to the new root folder and executed:

        grub-install --no-floppy --recheck /dev/md0

    However this fails badly with:

        /usr/sbin/grub-probe: error: unknown filesystem.
        Auto-detection of a filesystem of /dev/md0p1 failed.
        Please report this together with the output of "/usr/sbin/grub-probe --device-map=/boot/grub/device.map --target=fs -v /boot/grub" to

    I don't think it's a bug in grub (so I didn't report it yet) but a fault of mine. So I really wonder how to properly set up my raid1; everything I tried so far has failed.
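
    For what it's worth, the layout commonly reported to boot reliably with squeeze's grub is the reverse nesting: partition the raw disks first, then build one md array per partition pair, keeping the array that carries /boot on 1.0 metadata (stored at the end of the device, where grub-probe won't trip over it). A rough sketch, assuming three partitions per disk:

        # After partitioning /dev/sda, copy its table to /dev/sdb
        sfdisk -d /dev/sda | sfdisk /dev/sdb

        # Mirror partition pairs instead of whole disks
        mdadm --create /dev/md0 --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
        mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

        # grub is installed to each disk's MBR rather than to the md device
        grub-install --no-floppy --recheck /dev/sda
        grub-install --no-floppy --recheck /dev/sdb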


  • Unable to set initcwnd on a Hetzner server

    - by Sergi
    We just ordered a bunch of Hetzner EX40SSD servers with the minimal Debian install image that they provide, and everything is just fine except that, looking at tcpdumps from various locations while fine-tuning the network, the initcwnd parameter seems to be stuck at 6 no matter how we change it. By default, Debian 3.2 kernels should have that setting at 10, so it's pretty strange. Is it possible that the NIC driver or a custom setting in the Hetzner Debian image is limiting this parameter? Even if we set it to 4, like the old kernel default, it doesn't work. Any ideas would be much appreciated! Does anyone know if the NIC drivers provided by default by Debian have some kind of limitation? In a long thread at http://www.webhostingtalk.com/showthread.php?t=1200617&highlight=hetzner they talk about a page, http://wiki.hetzner.de/index.php/Installation_des_r8168-Treibers/en, where Hetzner states that the included Realtek r8168 driver is not working properly, but nowhere do they say that the initcwnd could be affected. Tomorrow I will try to install a CentOS image and see if Debian is the problem... The last resort would be to install a custom Debian image, but that is a pain in the ass! Thanks!
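
    For anyone comparing notes, this is the usual iproute2 way to set and verify initcwnd; if the value below still doesn't show up in captures, suspicion of the driver or image seems reasonable. A sketch (the gateway and interface are placeholders):

        # Show the current default route, including any initcwnd already set
        ip route show default

        # Raise the initial congestion window on the default route
        ip route change default via 203.0.113.1 dev eth0 initcwnd 10

        # Confirm the route now carries the option
        ip route show default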


  • Setting umask for all users

    - by Yarin
    I'm trying to set the default umask to 002 for all users, including root, on my CentOS box. According to this and other answers, this can be achieved by editing /etc/profile. However, the comments at the top of that file say:

        It's NOT a good idea to change this file unless you know what you are doing. It's much better to create a custom.sh shell script in /etc/profile.d/ to make custom changes to your environment, as this will prevent the need for merging in future updates.

    So I went ahead and created the file /etc/profile.d/myapp.sh with the single line:

        umask 002

    Now, when I create a file logged in as root, the file is born with 664 permissions, the way I had hoped. But files created by my Apache WSGI application, or files created with sudo, still default to 644 permissions:

        $ touch newfile (as root):                     Result = 664 (works)
        $ sudo touch newfile:                          Result = 644 (doesn't work)
        Files created by the Apache WSGI app:          Result = 644 (doesn't work)
        Files created by Python's RotatingFileHandler: Result = 644 (doesn't work)

    Why is this happening, and how can I ensure 664 file permissions system-wide, no matter what creates the file?

    UPDATE: I ended up finding a cleaner solution to this on a per-directory basis using ACLs, which I describe here.
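
    The per-directory ACL approach the update alludes to usually looks something like the following: a default ACL makes group write permission inheritable regardless of the creating process's umask. A sketch (the group and directory are placeholders):

        # Group ownership plus setgid, so new files inherit the group
        chgrp myapp /srv/myapp/shared
        chmod 2775 /srv/myapp/shared

        # Default ACL: anything created inside gets rw for the group,
        # independent of the umask of whoever (or whatever) creates it
        setfacl -d -m u::rwx,g::rwx,o::rx /srv/myapp/shared

        # Verify the default entries
        getfacl /srv/myapp/shared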


  • SSH Keys Authentication keeps asking for password

    - by Rhyuk
    I'm trying to set up access from ServerA (SunOS) to ServerB (some custom Linux with keyboard-interactive login) with SSH keys. As a proof of concept I was able to do it between 2 virtual machines. Now, in my real-life scenario, it isn't working. I created the keys on ServerA, copied them to ServerB, and chmod'd the .ssh folders to 700 on both ServerA and ServerB. Here is the log of what I get:

        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: Peer sent proposed langtags, ctos:
        debug1: Peer sent proposed langtags, stoc:
        debug1: We proposed langtags, ctos: en-US
        debug1: We proposed langtags, stoc: en-US
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: dh_gen_key: priv key bits set: 125/256
        debug1: bits set: 1039/2048
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'XXX.XXX.XXX.XXX' is known and matches the RSA host key.
        debug1: Found key in /XXX/.ssh/known_hosts:1
        debug1: bits set: 1061/2048
        debug1: ssh_rsa_verify: signature correct
        debug1: newkeys: mode 1
        debug1: set_newkeys: setting new keys for 'out' mode
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: newkeys: mode 0
        debug1: set_newkeys: setting new keys for 'in' mode
        debug1: SSH2_MSG_NEWKEYS received
        debug1: done: ssh_kex2.
        debug1: send SSH2_MSG_SERVICE_REQUEST
        debug1: got SSH2_MSG_SERVICE_ACCEPT
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Trying private key: /XXXX/.ssh/identity
        debug1: Trying public key: /xxx/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Trying private key: /xxx/.ssh/id_dsa
        debug1: Next authentication method: keyboard-interactive
        Password:
        Password:

    ServerB allows pretty limited actions, since it's a custom proprietary Linux. What could be happening?

    EDIT WITH ANSWER: The problem was that I didn't have those settings enabled in the sshd_config (refer to the accepted answer), AND that while pasting the key from ServerA to ServerB it would be interpreted as 3 separate lines. What I did, in case you can't use ssh-copy-id like I couldn't: paste the first line of your key into ServerB's authorized_keys file WITHOUT the last 2 characters, then type the missing characters from line 1 and the first one from line 2 yourself; this prevents a newline being added between the first and second lines of the key. Repeat with the 3rd line.
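
    As the edit suggests, the usual way to dodge the pasted-key line-wrap problem entirely is to pipe the key over ssh instead of pasting it. A sketch (user and host are placeholders):

        # From ServerA: append the public key with no manual pasting involved
        cat ~/.ssh/id_rsa.pub | ssh user@serverB 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'

        # Most sshd builds insist on tight permissions for these paths
        ssh user@serverB 'chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'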


  • How to disable Utility Manager (Windows Key + U)

    - by Skizz
    How do I disable the Windows + U hotkey in Windows XP? Alternatively, how do I stop Utility Manager from being active? The two are related: Utility Manager is currently providing a potential security hole and I need to remove it [1]. The system I'm developing uses a custom GINA to log in and start a custom shell. This removes most Windows key hotkeys, but Win + U still pops up the manager app.

    Update: things I've tried that don't work:

        NoWinKeys registry setting: this only affects Explorer hotkeys;
        Renaming utilman.exe: the program reappears at the next login;
        Third-party software: not really an option; these machines are audited by the clients, and additional third-party software would be unlikely to be accepted.

    Also, the procedure needs to be reasonably straightforward; it has to be done by field service engineers on existing machines (machines currently in Russia, Holland, France, Spain, Ireland and the USA).

    [1] The hole is via the Internet options in the help viewer that the utility app links to.

