Search Results

Search found 8543 results on 342 pages for 'documentation'.


  • Hiera can't find puppet environment

    - by quickshiftin
    I'm testing out hiera and hitting a snag on the hierarchy configuration. What I have is extremely simple; the part that isn't working is the selection of hiera data files based on environment. Here's the config file (/etc/hiera.yaml) I'm trying:

        ---
        :backends:
          - yaml
        :logger: console
        :hierarchy:
          - "%{::environment}"
        :yaml:
          :datadir: /var/lib/hiera

    Now, I have a file /var/lib/hiera/development.yaml containing:

        blah: meh

    When I run hiera it's not finding the file or the value:

        $ hiera -d blah
        DEBUG: Fri Oct 25 15:50:52 -0600 2013: Hiera YAML backend starting
        DEBUG: Fri Oct 25 15:50:52 -0600 2013: Looking up blah in YAML backend
        nil

    I've verified this agent is configured for the development environment:

        $ sudo puppet agent --configprint environment
        development

    Now let me prove hiera is capable of finding something; a change to the hiera.yaml file:

        :hierarchy:
          - development

    And now hiera finds the file and the value:

        $ hiera -d blah
        DEBUG: Fri Oct 25 15:53:25 -0600 2013: Hiera YAML backend starting
        DEBUG: Fri Oct 25 15:53:25 -0600 2013: Looking up blah in YAML backend
        DEBUG: Fri Oct 25 15:53:25 -0600 2013: Looking for data source development
        DEBUG: Fri Oct 25 15:53:25 -0600 2013: Found blah in development
        meh

    So why isn't it working with the dynamic environment configuration? I got that straight from the documentation. Note that I have tried running the hiera command via sudo with no change in the result.
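
    One detail worth noting when testing from the command line (a general observation, not something stated in the post): the standalone hiera binary has no Puppet scope, so %{::environment} interpolates to an empty string unless the variable is supplied on the command line; during a real agent run the fact comes from Puppet itself. A minimal sketch, assuming the data layout above:

        # pass the environment variable explicitly so the CLI can interpolate it
        hiera -d blah ::environment=development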

    Read the article

  • Setup of HP ProCurve 2810-24G for iSCSI?

    - by 3molo
    Hi, I have a pair of ProCurve 2810-24G switches that I will use with a Dell EqualLogic SAN and VMware ESXi. Since ESXi does MPIO, I am a little uncertain about the configuration of the links between the switches. Is a trunk the right way to go between the switches? I know that the ports for the SAN and the ESXi hosts should be untagged, so does that mean I want the VLAN tagged on the trunk ports? This is more or less the configuration:

        trunk 1-4 Trk1 Trunk
        snmp-server community "public" Unrestricted
        vlan 1
           name "DEFAULT_VLAN"
           untagged 24,Trk1
           ip address 10.180.3.1 255.255.255.0
           no untagged 5-23
           exit
        vlan 801
           name "Storage"
           untagged 5-23
           tagged Trk1
           jumbo
           exit
        no fault-finder broadcast-storm
        stack commander "sanstack"
        spanning-tree
        spanning-tree Trk1 priority 4
        spanning-tree force-version RSTP-operation

    The EqualLogic PS4000 SAN has two controllers, with two network interfaces each. Dell recommends that each controller be connected to each of the switches. From the VMware documentation, it seems creating one vmkernel port per pNIC is recommended. With MPIO, this could allow more than 1 Gbps of throughput.

    Read the article

  • Can't find standalone Chrome Gmail client that I know exists

    - by Carson
    I'm on Windows. A couple of years ago, when I switched from Outlook to Gmail (Google Apps), Google provided this awesome little standalone gmail client that was just a single-purpose Chrome install. It launched like a normal application and stayed updated when I updated Chrome. It was Chrome in a separate application that launched only gmail, stayed logged in really well, and "felt" like a gmail mail client, with the gmail interface. It had its own little red envelope icon; it was a Windows app. (I remember there was no Mac equivalent.) I found it while looking through the "this is how you get your company to switch to gmail" documentation that Google provided. I just repaved my box and now I'm looking for this thing again, and I had no idea it would be impossible to find. I've spent literally 2 hours looking, searching, googling, etc. I'm losing my mind. Anyone know how I can get my hands on this? I used it all day every day for 2 years, so I know it exists :), but I can not find it. Any assistance would be gratefully received.

    Read the article

  • Cannot upload files bigger than 8GB to Amazon S3 by multi-part upload due to broken pipe

    - by spencerho
    I implemented S3 multi-part upload, both the high-level and low-level versions, based on the sample code from http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?HLuploadFileJava.html and http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?llJavaUploadFile.html. When I uploaded files smaller than 4 GB, the upload processes completed without any problem. When I uploaded a file of 13 GB, the code started to throw IO exceptions (broken pipe). After retries, it still failed. Here is the way to reproduce the scenario with the 1.1.7.1 release:

        1. Create a new bucket in the US Standard region.
        2. Create a large EC2 instance as the client to upload the file.
        3. Create a file of 13 GB in size on the EC2 instance.
        4. Run the sample code from either the high-level or low-level API S3 documentation page on the EC2 instance.
        5. Test any one of three part sizes: the default part size (5 MB), or set the part size to 100,000,000 or 200,000,000 bytes.

    So far the problem shows up consistently. I attached a tcpdump file for you to compare. In there, the host on the S3 side kept resetting the socket.

    Read the article

  • How do you get Windows 7 to show time remaining in the battery meter?

    - by MrDaniel
    Running Microsoft Windows 7 Home Premium on an HP laptop. The system tray power meter never shows the time remaining; it only ever shows a percentage-remaining number, as pictured. The Windows help documentation on the battery meter seems to indicate that it should display a time-remaining estimate. Is this accurate?

        How accurate is the battery meter?
        The accuracy of what the battery meter reports—what percentage of a full charge remains and how long you can use your laptop before you must plug it in—depends on several factors. Most of these factors fall into the following two categories:
        What you use the laptop for. Because some activities drain the battery faster than others (for example, watching a DVD consumes more power than reading and writing e-mail), alternating between activities that have significantly different power requirements changes the rate at which your laptop uses battery power. This can vary the estimate of how much battery charge remains.
        Battery hardware and sensor circuitry. Newer, "smart" batteries are equipped with circuitry that calculates the measurements of charge remaining and reports the information to the battery meter. Older batteries use less sophisticated circuitry and might be less accurate.

    Read the article

  • Network Management Cable Labeling Techniques and their alternatives [closed]

    - by Alex
    Possible Duplicate: What is the most effective solution you used to label cables?

    Yes, I know there are a lot of howtos and already-answered questions about this topic, like this one: How do you organise the cables in your racks? Currently I am searching the web for different techniques (alternatives) for labeling the cables in server racks and/or data centers. Unfortunately I do not have any experience with labeling/documentation of network cables at a large scale. As far as I could find out so far, the usual labeling techniques are coloring and self-defined printed labels (numbering, text), possibly following a standard. I want to know whether QR codes, RFID (OK, RFID in a data center would be a bad idea because of the radio frequency, wouldn't it?), barcodes or similar have already been used by some administrators, or why they did not consider such techniques at all. Too complicated (with a QR scanner etc.) if you are in front of the cables and want quick feedback on what a cable is? What alternatives are out there? Advantages/disadvantages? Best practices? I would appreciate any help on this topic, thank you! Regards, Alex

    Read the article

  • Dismount USB External Drive using powershell

    - by JC
    Hello, I am attempting to dismount an external USB drive using PowerShell, and I cannot successfully do it. This is the script I use:

        # get the Win32_Volume object representing the volume I wish to eject
        $drive = Get-WmiObject Win32_Volume -Filter "DriveLetter = 'F:'"
        # call Dismount on that object, thereby ejecting the drive
        $drive.Dismount($Force, $Permanent)

    I then check my computer to see whether the drive is unmounted, but it is not. The boolean parameters $Force and $Permanent have been tried in different permutations to no avail. The exit code returned by the Dismount command changes when the parameters are toggled:

        (0,0) = exit code 0
        (0,1) = exit code 2
        (1,0) = exit code 0
        (1,1) = exit code 2

    The documentation for exit code 2 indicates that existing mount points are a reason why it cannot dismount. However, I am trying to dismount the only mount point that exists, so I am unsure what this exit code is trying to tell me. Having already trawled the web for people experiencing similar problems, I have only found one additional command to try:

        # executed after the .Dismount() command
        $drive.Put()

    This additional command does not help. I am running out of things to try, so any assistance anyone can give me would be greatly appreciated. Thanks.
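
    For what it's worth, a commonly suggested alternative (a sketch only, assuming the drive really is F: and is removable) is to ask the Windows shell to eject the device instead of calling Win32_Volume.Dismount:

        # minimal sketch: invoke the shell's "Eject" verb on the F: drive
        $shell = New-Object -ComObject Shell.Application
        $shell.Namespace(17).ParseName("F:\").InvokeVerb("Eject")   # 17 = ssfDRIVES ("My Computer")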

    Read the article

  • Ubuntu 12.04 - Pound Reverse Proxy and Adobe Flex/Flash Auth

    - by James
    First time posting. I have a completely fresh install of Ubuntu 12.04 acting as a reverse proxy gateway to our internal network. Our setup is that we have one external IP but three domains we would like to point at various webservers on our internal network. It's not so much a load balancing or caching issue etc.; it's merely routing some client browsers to a port 80 webpage (to adhere to some stricter corporate policies regarding placing port numbers after domain names). I have gone with Pound and everything seems to be working fine. Static pages load etc. Everything is good, with the exception of a Flash/Flex based web client for a digital asset management (DAM) program. The static page itself loads fine; it is just at the moment of entering credentials, be they correct or incorrect, and hitting login, that there is no response whatsoever, neither a rejection nor a confirmation. So the request back to the internal server can't be getting through. I have googled extensively and there might be a solution in a crossdomain.xml file? The documentation isn't very clear, and we are not the authors of the DAM app and have no control over the code on the Flash/Flex side. Questions:

    - Is there a particular config file/solution for Pound that allows Flash/Flex auth information to be forwarded?
    - Is there another reverse proxy program (nginx?) that allows this type of config?
    - Am I looking at this entirely the wrong way; should Flash/Flex fundamentally not be allowed to have this access?
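
    If the crossdomain.xml route does turn out to be the issue, a minimal policy file looks like the sketch below (illustrative only; the permissive domain="*" should be narrowed in production). It is served from the web root of the host the Flash client calls back to:

        <?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
        <cross-domain-policy>
            <!-- allow the Flex client to make requests back to this host -->
            <allow-access-from domain="*" />
        </cross-domain-policy>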

    Read the article

  • pam debugging "check pass; user unknown"

    - by lvc
    I am attempting to get Prosody authenticating with its auth_pam module. It is configured to use the PAM service name xmpp. The pam.d/xmpp file is copied straight from the one configured for dovecot (originally taken from, I think, dovecot's documentation), which is known to be working:

        # cat /etc/pam.d/xmpp
        auth     required pam_unix.so nullok debug
        account  required pam_unix.so debug

    Logging in with dovecot works wonderfully. Logging in with Prosody, with exactly the same username and password, causes Prosody to return 'Not authorized', and the following in journalctl -f:

        Oct 29 22:12:14 riscque.net prosody[9396]: c2s1d010b0: Client sent opening <stream:stream> to riscque.net
        Oct 29 22:12:14 riscque.net prosody[9396]: c2s1d010b0: Sent reply <stream:stream> to client
        Oct 29 22:12:14 riscque.net prosody[9396]: [178B blob data]
        Oct 29 22:12:14 riscque.net unix_chkpwd[9408]: check pass; user unknown
        Oct 29 22:12:14 riscque.net prosody[9396]: pam_unix(xmpp:auth): conversation failed
        Oct 29 22:12:14 riscque.net prosody[9396]: pam_unix(xmpp:auth): unable to obtain a password
        Oct 29 22:12:14 riscque.net prosody[9396]: pam_unix(xmpp:auth): auth could not identify password for [lvc]
        Oct 29 22:12:14 riscque.net prosody[9396]: riscque.net:saslauth: sasl reply: <failure xmlns='urn:ietf:params:xml:ns:xmpp-sasl'><not-authorized/><text>Unable to authorize you with the authentication credentials you&apos;ve sent.</text></failure>

    This series of errors seems mutually contradictory - first it says "user unknown", but then that it can't obtain the password for lvc - and this username certainly exists on the system. What is likely going on here, and how would I debug this further?
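
    One debugging avenue worth checking (an assumption about the cause, not something established in the post): pam_unix hands password checks for other users to the unix_chkpwd helper when the calling process cannot read /etc/shadow, and that is a classic source of "check pass; user unknown" from an unprivileged daemon. A quick sketch of the checks, assuming Prosody runs as a dedicated prosody user:

        # which user is the prosody daemon actually running as?
        ps aux | grep '[p]rosody'
        # can that user read the shadow database? (on Debian it is root:shadow, mode 640)
        ls -l /etc/shadow
        # one commonly suggested workaround is to add the daemon user to the shadow group
        sudo usermod -aG shadow prosody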

    Read the article

  • Exim 4 Virtual Domains and Catchall on Debian (Squeeze)

    - by parazuce
    Hello, I've been at it for about 4 hours now, searching as well as trying different tutorials. Here's my setup: I have 2 domains, both under my own DNS server (MX records set up as well). I have exim4 successfully running, and it is able to send messages from both of those domains. I have tested this using sendmail and manually setting the "From" attribute; Exim successfully delivers mail to users no matter which domain was specified. I'm fine with that, but I'm having an issue with setting up virtual domains and adding custom delivery options (such as a catch-all). I've been searching for about 4 hours and I can't find any up-to-date documentation on how to do this. The old method would be to add a line such as:

        domainlist local_domains = @:localhost:dsearch;/etc/exim4/virtual

    Once that line was added, I made a directory at /etc/exim4/virtual, then created files inside such as example.com, which would then contain the rules for delivery under that domain. This did not work, however. Searching further, I've found claims that exim no longer supports dsearch (I guess because they claim it never has?). This is where I'm stuck. I'm on a "split" configuration as well.
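
    For reference, a commonly cited recipe for per-domain alias files with a catch-all on Debian's split configuration looks roughly like the sketch below. This is illustrative only, not taken from the post: the router name and file paths are placeholders, and dsearch has to be compiled into the exim4 binary in use (the exim4-daemon-heavy package is usually suggested when lookups are missing).

        # /etc/exim4/conf.d/router/350_exim4-config_vdom_aliases  (sketch)
        vdom_aliases:
          driver = redirect
          allow_defer
          allow_fail
          domains = dsearch;/etc/exim4/virtual
          data = ${expand:${lookup{$local_part}lsearch*{/etc/exim4/virtual/$domain}}}

        # /etc/exim4/virtual/example.com  (sketch: per-address rules plus a catch-all)
        postmaster: admin@example.com
        *: catchall@example.com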

    Read the article

  • Setting up AJP with JBoss 7

    - by purlogic
    I have two different versions of JBoss on a server, JBoss 6.0 Final and JBoss 7.0.2. I can run one or the other by switching a couple of symlinks and issuing a "service jboss start" command. I am by no means an expert in JBoss; however, JBoss 6.0 appears to have AJP running out of the box on port 8009, with no initial configuration required. With JBoss 7, however, I had to edit the file standalone/configuration/standalone.xml and add a few entries. In the <subsystem /> tag, I added:

        <connector name="ajp" protocol="AJP/1.3" socket-binding="ajp" />

    In the <socket-binding-group /> tag I added:

        <socket-binding name="ajp" port="8009"/>

    Then in Apache's configuration file (httpd.conf), I added:

        <Proxy *>
            AddDefaultCharset Off
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /app ajp://localhost:8009/app
        ProxyPassReverse /app ajp://localhost:8009/app

    AJP proxying works with 6, but not with 7. I assume it's because I haven't properly set up AJP in JBoss 7, and I'm not entirely sure how to do that. I have searched the documentation on their site without finding many specifics on how to do so. Any help or insight into setting up AJP with JBoss 7 would be much appreciated!!
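
    Two quick checks that are easy to overlook in this kind of setup (generic suggestions, not from the post): confirm that Apache actually has the AJP proxy module loaded, and confirm that JBoss 7 is listening on 8009 after the standalone.xml change and a restart:

        # is mod_proxy_ajp loaded? (it is a separate module from mod_proxy)
        apachectl -M | grep -E 'proxy_module|proxy_ajp'
        # if missing, httpd.conf needs: LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

        # after restarting JBoss 7, is anything listening on the AJP port?
        netstat -tlnp | grep 8009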

    Read the article

  • How to create an MST for silent install using Orca?

    - by Sanarothe
    Hi. I'm trying to deploy 7zip via GPO; I assigned the original MSI, but the package installation simply doesn't take place. What I've gathered is that I need to create an MST. In the spirit of trying to learn as much as possible about it, I've opted to use Orca rather than a third-party automagic tool, but I'm at a loss as to which fields to edit. So far the only change that I've made is to give the license-accepted checkbox a value of "1" instead of pointing to another key that, still, just gave it a value of "1". So, to give this some structure:

    - How does creating an MST make the install non-interactive/silent (or what criteria should I consider)? Do you have to manually reconfigure the MSI to simply not perform the GUI aspects? Or do I have to execute the program in silent mode after defining the variables the installer requests? (Though, of course, it seems that would defeat the purpose of the MST.)
    - How do I determine which fields I need to edit? I've loaded the installer and it takes three inputs: license acceptance, feature set, and installation location. I want all of the default values: I'm just trying to deploy it at all, not customize the installation. I BELIEVE that I should be messing with some values in the Registry table, but I really don't know.

    If I'm not asking the right questions, can someone point me to a THOROUGH resource or documentation for this process? I've already gone over the TechNet articles on basic Orca use and deployment, but I couldn't really find anything on creating an MST that didn't involve a third-party program in which one runs a 'dummy' installer to get the before and after snapshots. Thank you very much, Cameron

    UPDATE: After spending the day troubleshooting, I finally got my server to send out 7zip, but not until I had also assigned Firefox. Not sure why it didn't want to send out 7zip by itself, but I also had some domain naming problems. Thanks for the input (GPResult helped enormously.)
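
    As a side note on how a transform is usually consumed (a generic sketch, not specific to the 7zip package or to GPO): when testing by hand, an MST is applied with the TRANSFORMS property, and the silence comes from msiexec's UI switches rather than from the transform itself. GPO assignment likewise installs with the UI suppressed.

        rem placeholder file names; /qn suppresses the installer UI entirely
        msiexec /i 7zip.msi TRANSFORMS=custom.mst /qn /norestart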

    Read the article

  • chef clients behind firewall

    - by tec
    I am currently learning about chef. What I understood so far: I have to install chef-server on a server of my own or use hosted chef; I have to install chef-client on the servers that I want to manage, aka nodes (manually or using knife bootstrap); and I have installed several chef tools on my own PC that I can use to manage the nodes, e.g. knife. Now, in my case the specialty is that the nodes are behind a firewall/load balancer/proxy. The nodes can access servers on the outside via NAT (http works, and I can configure chef-specific hosts to work as well). However, they can only be contacted from the outside via an ssh tunnel. There is a lot of documentation about chef available, but I did not find an answer to these questions:

    - When using knife, is it enough if I set up an ssh tunnel manually on my own PC, or does the chef server need to contact the nodes?
    - When using knife, can I configure it to set up an ssh tunnel automatically?
    - When using the chef server web UI, can I configure it to connect to the nodes via an ssh tunnel, or do I need a setup where I maintain the tunnel myself, e.g. using monit? Is this even possible with hosted chef?
    - Instead of using knife or the web UI: can I issue the same management commands directly on the nodes using chef-client?

    What solution would you recommend? Thanks a lot for taking the time to help and answering one or more of these related questions.
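
    For the manual-tunnel variant, a minimal sketch (host names, addresses and ports here are made up for illustration) is to forward a local port to the node's SSH port through the externally reachable gateway and then run commands against localhost:

        # forward local port 2222 to node 10.0.0.5:22 via the reachable gateway host
        ssh -f -N -L 2222:10.0.0.5:22 admin@gateway.example.com
        # then run commands on the node through the tunnel, e.g. a chef-client run
        ssh -p 2222 admin@localhost 'sudo chef-client'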

    Read the article

  • Puppet and Vim fighting over Ruby version

    - by devians
    I have installed puppet from the .dmg from Puppet Labs. If I remove ruby 1.9.3, puppet works, but other things like my vim install (dependent plugins) do not. According to http://docs.puppetlabs.com/guides/platforms.html#ruby-versions, 1.9.3 is supported. So what's going wrong with puppet?

        % uname -a
        Darwin Kusanagi.local 11.4.2 Darwin Kernel Version 11.4.2: Thu Aug 23 16:25:48 PDT 2012; root:xnu-1699.32.7~1/RELEASE_X86_64 x86_64
        % which ruby
        /usr/local/bin/ruby
        % ruby --version
        ruby 1.9.3p327 (2012-11-10 revision 37606) [x86_64-darwin11.4.2]
        % /usr/bin/ruby --version
        ruby 1.8.7 (2012-02-08 patchlevel 358) [universal-darwin11.0]
        % brew info ruby
        ruby: stable 1.9.3-p327, HEAD
        http://www.ruby-lang.org/en/
        Depends on: pkg-config, readline, gdbm, libyaml
        /usr/local/Cellar/ruby/1.9.3-p327 (796 files, 17M) *
        https://github.com/mxcl/homebrew/commits/master/Library/Formula/ruby.rb
        ==> Options
        --with-tcltk    Install with Tcl/Tk support
        --with-suffix   Suffix commands with "19"
        --universal     Build a universal binary
        --with-doc      Install documentation
        ==> Caveats
        NOTE: By default, gem installed binaries will be placed into:
          /usr/local/Cellar/ruby/1.9.3-p327/bin
        You may want to add this to your PATH.
        % puppet
        /usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- puppet/util/command_line (LoadError)
            from /usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
            from /usr/bin/puppet:3:in `<main>'
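
    A plausible reading of that traceback (an interpretation, not something confirmed in the post): the .dmg installs puppet's libraries into the load path of the Apple system ruby (1.8.7), so when the Homebrew ruby 1.9.3 at /usr/local/bin/ruby ends up running /usr/bin/puppet, it cannot find puppet/util/command_line. A quick way to test that theory is to force the system interpreter and to inspect where each ruby looks for libraries:

        # run puppet under the Apple system ruby instead of the Homebrew one
        sudo /usr/bin/ruby /usr/bin/puppet --version
        # compare the library search paths of the two interpreters
        /usr/bin/ruby -e 'puts $LOAD_PATH'
        /usr/local/bin/ruby -e 'puts $LOAD_PATH'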

    Read the article

  • external postfix forwarding to zimbra server

    - by Marko
    I want to migrate away from my current mail server (old_server) for my domain mydomain.com. The old_server setup is Postfix+LDAP+Cyrus. Now I want to migrate my domain's mail to a Zimbra server (zimbra), but I am considering the option of leaving the current mail server working in the first phase, and then having only a subset of email addresses forwarded to the zimbra server. It seems that Zimbra refers to this in their documentation as an 'edge MTA'.

        Current config:  mydomain.com MX -> old_server (receives and sends all mail)
        New config:      mydomain.com MX -> old_server (receives all mail, sends mail,
                         forwards selected addresses) ----> zimbra (receives the forwarded mail)

    I need the following: old_server should receive mail for my domain as before, but for some of the email addresses I want the mail to be delivered to the zimbra server, and I should be able to determine which email addresses are forwarded. I would also like to avoid possible false spam detections for mail from mydomain.com due to this setup. Questions:

    - How should I configure postfix on old_server to support this mail forwarding?
    - To avoid false spam detection, can I have outgoing mail from mydomain.com sent by zimbra, or should I use old_server?
    - Is there anything extra I would need to do in order to avoid the possibility of my outgoing mail being marked as spam on other servers?
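
    One standard Postfix mechanism for this kind of per-address routing (a sketch under the assumption that old_server keeps the MX; the addresses and the host name zimbra.mydomain.com are placeholders) is a transport map keyed on the individual recipient addresses:

        # /etc/postfix/transport  (placeholder entries)
        user1@mydomain.com    smtp:[zimbra.mydomain.com]
        user2@mydomain.com    smtp:[zimbra.mydomain.com]

        # main.cf
        transport_maps = hash:/etc/postfix/transport

        # rebuild the lookup table and reload
        postmap /etc/postfix/transport
        postfix reload

    Depending on how old_server validates recipients (LDAP/Cyrus), the forwarded addresses may also need to remain listed as valid recipients there so the mail is accepted before being rerouted.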

    Read the article

  • What should I do to make sure that IIS does not recycle my application?

    - by AngryHacker
    I have a WCF service app hosted in IIS. On startup, it fetches a really expensive (in terms of time and CPU) resource to use as a local cache. Unfortunately, IIS seems to recycle the process on a fairly regular basis, so I am trying to change the settings on the Application Pool to make sure that IIS does not recycle the application. So far, I've changed the following:

    - Limit Interval under CPU from 5 to 0.
    - Idle Time-out under Process Model from 20 to 0.
    - Regular Time Interval under Recycling from 1740 to 0.

    Will this be enough? And I have specific questions about the items I changed:

    - What specifically does the Limit Interval setting under CPU mean? Does it mean that if a certain CPU usage is exceeded, the application pool will be recycled?
    - What exactly does "recycled" mean? Is the application completely torn down and started up again?
    - What is the difference between "worker process shutdown" and "application pool recycling"? The documentation for the Idle Time-out under Process Model talks about shutting down the worker process, while the docs for Regular Time Interval under Recycling talk about application pool recycling. I don't quite grok the difference between the two; I thought w3wp.exe is the worker process which runs the application pool. Can someone explain what the difference means for the application?

    The reason for having IIS7 and IIS7.5 tags is that the app will run on both, and I hope the answers are the same between the versions. Image for reference:
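
    For what it's worth, the same three settings can also be applied from the command line with appcmd (a sketch; "MyAppPool" is a placeholder for the actual pool name, and the values are timespans where 00:00:00 disables the corresponding trigger):

        rem CPU "Limit Interval", idle time-out, and periodic recycle interval
        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /cpu.resetInterval:00:00:00
        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /processModel.idleTimeout:00:00:00
        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /recycling.periodicRestart.time:00:00:00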

    Read the article

  • Plesk 10 - creating and using vhost.conf

    - by MrFidge
    I'm having some issues setting up and using a vhost.conf for one of my domains. So far none of the domains have required any extra configuration, but now I need to use a PEAR module, so I'm looking to include /usr/share/pear in the PHP settings for the domain. The vhost file was created at /var/www/vhosts/domain.com/conf/vhost.conf:

        <Directory /var/www/vhosts/domain.com/httpdocs>
            php_admin_value include_path ".:/usr/share/pear"
        </Directory>

    I then restart Plesk using:

        /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=domain.com

    Or, since Plesk says that command is obsolete in Plesk 10, I've tried using:

        /usr/local/psa/admin/sbin/httpdmng --reconfigure-domain domain.com

    And for good luck I've restarted Apache too each time. Net result - none of the PEAR includes work unless I edit the include_path in /etc/php.ini! Any tips on how to get this working? I've had a look through the documentation, but to be honest I just don't have time to read 40 pages of the Plesk manual for one line of code; this can't be that hard, surely! Thanks for any pointers, H

    Read the article

  • Could this server log mean my server is being used as a proxy?

    - by So Over It
    I came across the following entry in my access.log:

        58.218.199.147 - - [05/Jun/2012:12:56:04 +1000] "GET http://proxyproxys.com/ HTTP/1.1" 200 183 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

    Normally when I see a full URL entry in my access.log I assume it is log spam from people trying to get me to access their site, and such entries are normally followed by a 404 response. The above entry is followed by a 200 'success' response! Doing some searching, it would seem that this can occur when someone is trying to use your server as a proxy. This disturbed me more, especially because the URL in question has the word proxy in it. Going to the site proxyproxys.com (using hidemyass.com to protect my own identity), the site returns what appears to be some sort of 'proxy judge':

        ----------------------------------------
        HTTP_ACCEPT=text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        HTTP_ACCEPT_LANGUAGE=en-US,en;q=0.8
        HTTP_USER_AGENT=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.53 Safari/536.5
        HTTP_CONNECTION=close
        REMOTE_PORT=56355
        REMOTE_HOST=74.63.112.142
        REMOTE_ADDR=74.63.112.142
        ----------------------------------------
        CS_ProxyJudge Result=HIGH_ANONYMITY
        ----------------------------------------

    Questions:
    1) Does the 200 success mean that someone has been able to successfully use my server as a proxy?
    2) Are there other means of confirming whether my server is being used as a proxy?
    3) Can you refer me to documentation to help 'close up' my security gap, if there is one?
    Thanks.
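
    Two generic checks that help with question 2 (suggestions, not from the post): try to use the server as a proxy yourself from an outside machine, and make sure the web server is not configured as a forward proxy. Assuming Apache and a placeholder hostname:

        # from another machine: ask your server to fetch a third-party URL, as an open proxy would
        curl -I -x http://your.server.example:80 http://www.example.com/
        # getting example.com's headers back with a 200 means the server really is proxying

        # in the Apache config, forward proxying should be off (it is the default):
        # ProxyRequests Off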

    Read the article

  • Requiring SSH-key Login From Specific IP Ranges

    - by Sean M
    I need to be able to access my server (Ubuntu 8.04 LTS) from remote sites, but I'd like to worry a bit less about password complexity. Thus, I'd like to require that SSH keys be used for login instead of name/password. However, I still have a lot to learn about security, and having already badly broken a test box when I was trying to set this up, I'm acutely aware of the chance of screwing myself while trying to accomplish this. So I have a second goal: I'd like to require that certain IP ranges (e.g. 10.0.0.0/8) may log in with name/password, but everyone else must use an SSH key to log in. How can I satisfy both of these goals? There already exists a very similar question here, but I can't quite figure out how to get to what I want from that information. Current tactic: reading through the PAM documentation (pam_access looks promising) and looking at /etc/ssh/sshd_config.

    Edit: Alternatively, is there a way to specify that certain users must authenticate with SSH keys, and others may authenticate with name/password?

    Solution that's currently working:

        # Globally deny logon via password, only allow SSH-key login.
        PasswordAuthentication no

        # But allow connections from the LAN to use passwords.
        Match Address 192.168.*.*
            PasswordAuthentication yes

    The Match Address block can also usefully be a Match User block, answering my secondary question. For now I'm just chalking the failure to parse CIDR addresses up to a quirk of my install, and resolving to try again when I go to Ubuntu 10.04 not too long from now. PAM turns out not to be necessary.

    Read the article

  • Socket options for a tcp server with 3G clients & frequent disconnections

    - by Joel
    I have a TCP server, written in Java, sending and receiving many short messages, from 500 bytes to 100 KB long. It's a chess game and chat server, to keep it simple. The server is running Debian 6. Half of the clients connect from 3G networks, and half over standard DSL. A portion of the 3G clients lose their connection pretty often; the error I get on the server and on the client socket is "Connection reset". I have come across this page in the Oracle documentation: socketOpt. I am wondering what I could tune there to lower the number of disconnections from 3G clients. I don't care about the ping or the transfer rate, just about the TCP disconnections. I am not skilled enough to understand the impact of each setting, but I sort of understood that the TCP window is important, although I don't know exactly how. So I'm asking whether anyone here has an idea. Thanks if you can help.
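
    One knob that is often suggested for flaky mobile links (a generic suggestion, not from the post) is TCP keepalive: enable SO_KEEPALIVE on the accepted sockets (in Java, socket.setKeepAlive(true)) and, on Linux, shorten the kernel's keepalive timers so dead 3G connections are detected and cleaned up sooner instead of lingering:

        # Debian: send the first keepalive probe after 120 s of idle time instead of 7200 s
        sysctl -w net.ipv4.tcp_keepalive_time=120
        sysctl -w net.ipv4.tcp_keepalive_intvl=30
        sysctl -w net.ipv4.tcp_keepalive_probes=5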

    Read the article

  • Getting a TTY in a Connectback Shell

    - by Asad R.
    I'm often asked by friends to help with small Linux problems, and more often than not I'm required to log in to the remote system. Usually there are a lot of issues with making an account and logging in (sometimes the box is behind a NAT device, sometimes SSHD isn't installed, etc.), so I usually just ask them to make a connect-back shell using netcat (nc -e /bin/bash ). If they don't have netcat, I can just ask them to grab a copy of a statically compiled binary, which isn't that hard or time-consuming to download and run. Though this works well enough for me to enter simple commands, I can't run any apps that require a tty (vi, for example) and can't use any job control functions. I managed to bypass this issue by running in.telnetd with a few arguments within the connect-back shell, which would assign me a terminal and drop me to a shell. Unfortunately, in.telnetd isn't usually installed by default on most systems. What's the easiest way to get a fully functional connect-back terminal shell without requiring any non-standard packages? (A small C program that does the job would be fine as well; I just can't seem to find much documentation on how a TTY is assigned/allocated. A solution that doesn't require me to plough through the source code for SSHD and TELNETD would be nice :))
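
    Two widely used tricks for getting a pty inside such a shell (generic suggestions, assuming Python or util-linux's script are present on the box, which they usually are on standard installs):

        # inside the connect-back shell: spawn bash on a freshly allocated pseudo-terminal
        python -c 'import pty; pty.spawn("/bin/bash")'
        # or use script, which also allocates a pty for the command it runs
        script -qc /bin/bash /dev/null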

    Read the article

  • What is the correct iptables rule when NATing multiple private subnets?

    - by Jose Mendez
    I have a CentOS 6.5 minimal install acting as a router. eth0 is connected to a Cisco switch trunk port, allowing VLANs 200-213. I have several VLAN interfaces, just as this link suggests: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-networkscripts-interfaces_802.1q-vlan-tagging.html, and I have IPv4 forwarding enabled, so all my network devices on any of the networks 200-213 can communicate with each other using this Linux box as their router. The problem is, I need them to access the Internet, so I added the following rule:

        iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -j SNAT --to 1.1.1.56

    1.1.1.56 is the "outside" address. This works fine: devices connected to the internal networks can ping Internet addresses. BUT they stop being able to talk to each other across subnets, so 192.168.211.55 can ping 8.8.8.8 but can't talk to 192.168.213.5. As soon as I do a "service iptables restart" to remove the rule, I can start talking across internal subnets again. What would be the correct way to set up NAT for multiple private subnets? Or maybe the correct way to set up forwarding?
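
    For reference, the usual approach (a sketch; the external interface name eth1 is an assumption, since the post only names eth0 as the trunk) is to make the SNAT rule apply only to traffic that actually leaves towards the Internet, either by matching the outbound interface or by excluding internal destinations:

        # only translate packets going out the external interface
        iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -o eth1 -j SNAT --to-source 1.1.1.56
        # or: translate everything except traffic destined for the internal ranges
        iptables -t nat -A POSTROUTING -s 192.168.0.0/16 ! -d 192.168.0.0/16 -j SNAT --to-source 1.1.1.56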

    Read the article

  • Mod_Perl configuration for multiple domains

    - by daliaessam
    Reading the mod_perl module documentation: can we configure it on a per-domain basis? What I mean is, can we configure it to run on every domain or on a specific domain only? What I see in the docs is:

        Registry Scripts
        To enable registry scripts add to httpd.conf:

            Alias /perl/ /home/httpd/2.0/perl/
            <Location /perl/>
                SetHandler perl-script
                PerlResponseHandler ModPerl::Registry
                PerlOptions +ParseHeaders
                Options +ExecCGI
            </Location>

        and now assuming that we have the following script:

            #!/usr/bin/perl
            print "Content-type: text/plain\n\n";
            print "mod_perl 2.0 rocks!\n";

        saved in /home/httpd/httpd-2.0/perl/rock.pl. Make the script executable and readable by everybody:

            % chmod a+rx /home/httpd/httpd-2.0/perl/rock.pl

        Of course the path to the script should be readable by the server too. In the real world you probably want to have tighter permissions, but for the purpose of testing that things are working, this is just fine.

    From what I understand above, we can run Perl scripts only from the one specific folder that we put the directive on. So the question again: can we make this directive per-domain, either for all domains or for a specific set of domains?
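
    A sketch of what a per-domain setup could look like (illustrative only; the ServerName and paths are placeholders): the same Alias/Location stanza from the docs is simply placed inside the <VirtualHost> block of each domain that should run registry scripts, and omitted from the others.

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /home/httpd/example.com/htdocs

            # only this domain gets ModPerl::Registry scripts under /perl/
            Alias /perl/ /home/httpd/example.com/perl/
            <Location /perl/>
                SetHandler perl-script
                PerlResponseHandler ModPerl::Registry
                PerlOptions +ParseHeaders
                Options +ExecCGI
            </Location>
        </VirtualHost>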

    Read the article

  • Change XRDP keyboard layout to en-gb Ubuntu 12.04

    - by Earl Sven
    Does anybody know how to change the keyboard layout to en-gb in an XRDP session on Ubuntu 12.04? I am using mstsc.exe to connect to an XRDP server hosting an Xvnc session; however, I cannot work out how to apply the UK keyboard layout. A bit of googling has yielded these instructions, which allow me to change the keymap; however, using the keymap file I downloaded from here, I lose the ability to use the arrow keys, home/end, etc. Comparing the file with the standard one, there are substantially more differences than I would expect considering the similarity between the layouts. I only have RDP access to the box, so I don't seem to be able to actually generate a new layout per the instructions above; maybe it's a local console thing? Also, I can't change either the RDP client used or the RDP server, as they are my only access to the system; I don't have local console access. I do have root privileges on the OS, however. Any thoughts?

    Edit: I have found documentation on the XRDP SourceForge page (http://xrdp.sourceforge.net/documents/keymap/newkeymap.html - apologies for not typing the link properly, but the antispam filter won't let me post more than 2 links) which describes the keymap file format. It indicates that the values in the keymap files are unicode, 0x64 etc.; however, the files I already have on my system seem to use a different format, 0:0 or 65307:27 etc. Does anybody know what the difference is?
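
    On the "generating a new layout" part: xrdp ships a helper that dumps the keymap of whatever X server it is pointed at, so in principle it can be run from inside the Xvnc session itself rather than at a local console. A sketch only; the display number and the km-0809.ini name for the UK layout are assumptions to verify against your install:

        # inside the Xvnc session started by xrdp (adjust DISPLAY to the session's display number)
        setxkbmap gb
        DISPLAY=:10 xrdp-genkeymap /etc/xrdp/km-0809.ini
        # then restart xrdp/sesman so the new keymap is picked up
        service xrdp restart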

    Read the article

  • How to combine try_files and sendfile on Nginx?

    - by hcalves
    I need Nginx to serve a file relative to the document root if it exists, and fall back to an upstream server if it doesn't. This can be accomplished with something like:

        server {
            listen 80;
            server_name localhost;

            location / {
                root /var/www/nginx/;
                try_files $uri @my_upstream;
            }

            location @my_upstream {
                internal;
                proxy_pass http://127.0.0.1:8000;
            }
        }

    Fair enough. The problem is, my upstream does not serve the contents of the URI directly; instead, it returns X-Accel-Redirect with a location relative to the document root (it generates this file on the fly):

        % curl -I http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg
        HTTP/1.0 200 OK
        Date: Mon, 26 Nov 2012 20:58:25 GMT
        Server: WSGIServer/0.1 Python/2.7.2
        X-Accel-Redirect: animals/kitten.jpg__100x100__crop.jpg
        Content-Type: text/html; charset=utf-8

    Apparently, this should work. The problem, though, is that Nginx tries to serve this file from some internal default document root instead of using the one specified in the location block:

        2012/11/26 18:44:55 [error] 824#0: *54 open() "/usr/local/Cellar/nginx/1.2.4/htmlanimals/kitten.jpg__100x100__crop.jpg" failed (2: No such file or directory), client: 127.0.0.1, server: localhost, request: "GET /animals/kitten.jpg__100x100__crop.jpg HTTP/1.1", upstream: "http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg", host: "127.0.0.1:80"

    How do I force Nginx to serve the file relative to the right document root? According to the XSendfile documentation the returned path should be relative, so my upstream is doing the right thing.
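
    One way to read that error (an interpretation, not established in the post): the X-Accel-Redirect restarts location matching with the path "animals/…", which, lacking a leading slash, never matches location / and so falls back to the compiled-in default root. A sketch of a workaround is to set the root at server level, so even internally redirected requests inherit it, and to have the upstream return a leading slash if that is under your control:

        server {
            listen 80;
            server_name localhost;
            root /var/www/nginx/;              # server-level root applies to internal redirects too

            location / {
                try_files $uri @my_upstream;
            }

            location @my_upstream {
                internal;
                proxy_pass http://127.0.0.1:8000;
            }
        }
        # and, if possible, have the upstream send:
        # X-Accel-Redirect: /animals/kitten.jpg__100x100__crop.jpg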

    Read the article
