Search Results

Search found 2046 results on 82 pages for 'agent ransack'.


  • converting node inheritance to hiera

    - by quickshiftin
    I'm working on moving a node inheritance tree over to Hiera, and am presently working on the hierarchy. Prior to Hiera, my nodes had a hierarchy like this:

        base
          pre-prod
            qa nodes
            staging nodes
            development nodes
          prod nodes

    Now I'm trying to get the same tiers with Hiera. Starting out I have this:

        :hierarchy:
          - base
          - "%{environment}"
          - "%{clientcert}"

    but I need another level to capture pre-prod and prod. My thought would be to add an entry to puppet.conf, something like:

        [agent]
        realm = pre-prod

    then:

        :hierarchy:
          - base
          - "%{realm}"
          - "%{environment}"
          - "%{clientcert}"

    A couple of questions: are you allowed to place arbitrary properties into puppet.conf, and will Hiera see the realm property?
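    A minimal sketch of one alternative, assuming Facter 1.7 or later (external facts support): rather than putting an arbitrary key in puppet.conf, which Hiera will not see as a fact, a "realm" fact can be dropped into /etc/facter/facts.d on each node so that %{realm} resolves in the hierarchy. The path and fact name here are illustrative, not taken from the question.

        #!/bin/sh
        # Illustrative only: publish a custom "realm" external fact so Hiera's
        # %{realm} lookup can resolve (requires Facter 1.7+ external facts).
        set -e
        mkdir -p /etc/facter/facts.d
        printf 'realm=pre-prod\n' > /etc/facter/facts.d/realm.txt
        # Verify the fact is visible before relying on it in hiera.yaml:
        facter realm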

    Read the article

  • My server appears to have been hacked: scanssh run by zabbix, is it normal?

    - by Niro
    I'm running a few EC2/Scalr instances with Zabbix monitoring. I received complaints about one of my servers port-scanning other servers; the logs show it accessing port 22 on consecutive IP addresses. I looked at the process list and saw scanssh running under the user zabbix. My question is: is scanssh part of Zabbix, and is it supposed to run? I have active autodiscovery on Zabbix, but it is looking at other IP addresses and definitely not port 20. Is it possible that something in the zabbix agent config is controlling this, rather than the settings on the Zabbix server? What can I do to find out whether Zabbix is somehow misbehaving or it is a hacker? Any advice is highly appreciated.
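    One way to tell whether that scanssh process really belongs to the Zabbix installation is to trace the running binary back to its parent process and owning package. A quick diagnostic sketch (replace <PID> with the PID found in the first step):

        #!/bin/sh
        # Locate the scanssh process and its parent, then see which package owns the binary.
        ps -eo pid,ppid,user,cmd | grep '[s]canssh'
        readlink /proc/<PID>/exe                     # path of the running executable
        ps -o pid,cmd -p "$(ps -o ppid= -p <PID>)"   # what started it (zabbix_agentd or something else?)
        rpm -qf "$(readlink /proc/<PID>/exe)"        # owning package on RPM-based systems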

    Read the article

  • Options for PCI-DSS on AWS - file integrity monitoring and intrusion detection

    - by Brill Pappin
    I need to deploy some file integrity monitoring and intrusion detection software on AWS instances. I really wanted to use OSSEC, but it does not work well in an environment where servers auto-deploy and shut down based on load, because it requires server-managed keys to be generated; baking the agent into the AMI therefore does not allow monitoring as soon as an instance comes up. There are many options out there, and several are listed in other posts on this site, but none that I've seen so far deal with the unique problems inherent in AWS or cloud-based deployments in general. Can anyone point me at some products, preferably open source, that we might use to cover the portions of PCI DSS that require this software? Has anyone else achieved this on AWS?
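    For what it's worth, newer OSSEC releases ship an ossec-authd/agent-auth pair that can hand out agent keys at instance boot, which sidesteps the pre-generated-key problem (whether that satisfies a PCI assessor is a separate question). A rough user-data sketch, assuming ossec-authd is already running on the manager; the manager hostname is a placeholder:

        #!/bin/sh
        # Illustrative only: request an OSSEC agent key on first boot via agent-auth.
        # "ossec.example.internal" is a placeholder for the manager's address.
        set -e
        MANAGER=ossec.example.internal
        /var/ossec/bin/agent-auth -m "$MANAGER" -A "$(hostname)"
        /var/ossec/bin/ossec-control restart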

    Read the article

  • Hiera + Puppet classes

    - by Amadan
    I'm trying to figure out Puppet (3.0) and how it relates to built-in Hiera. This is what I tried, an extremely simple example (I'll make a more complex hierarchy when I manage to get the simple one working):

        # /etc/puppet/hiera.yaml
        :backends:
          - yaml
        :hierarchy:
          - common
        :yaml:
          :datadir: /etc/puppet/hieradata

        # /etc/puppet/hieradata/common.yaml
        test::param: value

        # /etc/puppet/modules/test/manifests/init.pp
        class test ($param) {
          notice($param)
        }

        # /etc/puppet/manifests/site.pp
        include test

    If I apply it directly, it's fine:

        $ puppet apply /etc/puppet/manifests/site.pp
        Scope(Class[Test]): value

    If I go through the puppet master, it's not fine:

        $ puppet agent --test
        Could not retrieve catalog from remote server: Error 400 on SERVER: Must pass param to Class[Test] at /etc/puppet/manifests/site.pp:1 on node <nodename>

    What am I missing? EDIT: I just left the office, but a thought struck me: I should probably restart the puppet master so it can see the new hiera.yaml. I'll try that on Monday; in the meantime, if anyone figures out some other problem, I'd appreciate it :)
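    If it helps, two quick checks under a stock Puppet 3 layout: the master only reads /etc/puppet/hiera.yaml at startup, so it needs a restart after the file is created, and the hiera CLI can confirm the lookup independently of Puppet. Service name and paths may differ by distribution:

        #!/bin/sh
        # Restart the master so it rereads /etc/puppet/hiera.yaml (loaded once at startup).
        service puppetmaster restart
        # Confirm Hiera itself resolves the key the parameterized class needs:
        hiera -c /etc/puppet/hiera.yaml test::param
        # Then re-run the agent against the master:
        puppet agent --test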

    Read the article

  • Keeping Private SSH Keys Safe

    - by Carmen
    I have a central server where I store all the private ssh keys for the different machines that I want to ssh to. Currently, only sysadmins have access to this 'central' server. Given the above scenario, I'd like to ask the following questions: How do you protect your private ssh keys? I read about ssh-agent, but I am not sure how to use it or whether it can be used in this situation. If a sysadmin leaves and copies all the private ssh keys, he has access to all the servers. How do you deal with this situation?
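    A minimal sketch of the usual first-line protections: put a passphrase on each private key and load it into ssh-agent with a limited lifetime, so the decrypted key only lives in agent memory for a bounded time. Filenames here are placeholders:

        #!/bin/sh
        # Add (or change) the passphrase on an existing private key:
        ssh-keygen -p -f ~/.ssh/id_rsa_appserver
        # Start an agent for this session and load the key for one hour only:
        eval "$(ssh-agent -s)"
        ssh-add -t 3600 ~/.ssh/id_rsa_appserver
        # List loaded keys, and clear them all when done:
        ssh-add -l
        ssh-add -D

    None of this helps against an administrator copying the key files themselves; the departure scenario is usually handled with per-admin key pairs plus revocation (removing the leaver's public key from each authorized_keys) rather than a shared store of private keys.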

    Read the article

  • Puppet claims to be unable to resolve domains even though the domain properly resolves

    - by gparent
    I have a fairly simple Puppet setup: one master and one node, both running Debian Squeeze 6.0.4. I have DNS entries for the two machines, client and master respectively, and both entries resolve correctly on both machines to the right IPs. On my client, I have this configuration:

        [main]
        server = master.example.org
        logdir=/var/log/puppet
        vardir=/var/lib/puppet
        ssldir=/var/lib/puppet/ssl
        rundir=/var/run/puppet
        factpath=$vardir/lib/facter
        pluginsync=true
        templatedir=/var/lib/puppet/templates

    Key exchange seems to fail, according to these messages in /var/log/syslog:

        localhost puppet-agent[11364]: Could not request certificate: getaddrinfo: Name or service not known

    Why is resolution not working only for Puppet?
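    Since the error comes from getaddrinfo, it is worth checking resolution the same way the agent's Ruby process does, i.e. through the system resolver (which honours /etc/nsswitch.conf and /etc/hosts) rather than with dig alone. A quick diagnostic sketch:

        #!/bin/sh
        # dig queries DNS directly; getent goes through the same NSS path getaddrinfo uses.
        dig +short master.example.org
        getent hosts master.example.org
        cat /etc/resolv.conf
        grep '^hosts:' /etc/nsswitch.conf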

    Read the article

  • sql server log shipping

    - by voam
    I am setting up log shipping between two SQL Server 2008 servers connected over a VPN. As far as I know, as long as the SQL agent is able to access the share on the primary/secondary servers, everything will work. When I set up log shipping on the primary server using SQL Management Studio, it asks me to connect to the secondary server before I can set any of the "Secondary Database Settings". But I really don't want to open up the connection to the secondary server. I will be initializing this secondary database with a backup, so as long as the transaction logs get copied, everything should work. How do I work around the GUI not enabling any of the settings for the secondary server until I actually connect to it? Thanks in advance!

    Read the article

  • Log monitoring using Zabbix

    - by Supratik
    Hi, I am monitoring a log file using Zabbix 1.8.4. I created an item using the following details:

        Host: Zabbix server
        Description: logger_test
        Type: Zabbix agent (active)
        Key: log[/tmp/scribetest/test3/test3_current,error,,100]
        Type of Information: Log
        Update interval (in sec): 1 sec
        Keep history (in days): 90
        Status: Active
        Applications: Log files

    I created a trigger and attached it to the logger_test item using the following details:

        Name: logger_test_trigger
        Expression: {Zabbix server:log[/tmp/scribetest/test3/test3_current,error,,100].str(error)}=1
        Severity: disaster

    The above settings work fine the first time, but the next time the trigger shows ZBX_NOTSUPPORTED, and after that the item also shows a "not supported" message. Can you please tell me if I am doing anything wrong here? Warm regards, Supratik
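    One common reason for a log[] item flipping to ZBX_NOTSUPPORTED is the agent losing read access to the file, for example after rotation recreates it with different ownership; checking access as the zabbix user and reading the agent's own log is a cheap first test. A diagnostic sketch using the path from the question (the agent log location varies by packaging):

        #!/bin/sh
        # Can the agent's user read the monitored file right now?
        sudo -u zabbix head -n 1 /tmp/scribetest/test3/test3_current
        ls -l /tmp/scribetest/test3/test3_current
        # The agent log usually records why an active check became not supported:
        tail -n 50 /var/log/zabbix/zabbix_agentd.log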

    Read the article

  • Ad Agency storage/file server +backup needed (NAS or something else?)

    - by Rob
    Looking for a "this is all you need" recommendation. We're a small ad agency with both Macs & PCs that access and share files from a 3-year-old Windows 2000 box (no server software). We currently have 1TB on the "server" and back it up to two different Seagate FreeAgent Pro 1TB external drives. But we're low on space and are looking for something bigger that we can still access from Mac & PC, with an easy backup system, secure from viruses, and firewall enabled. Not sure if a NAS will work or if we should have a real server. We don't really get on that box except to restore files or run Norton on it. I hope I've provided enough for a general recommendation. Thanks. Rob Phx

    Read the article

  • How to prove that an email has been sent?

    - by bguiz
    Hi, I have a dispute on my hands in which the other party (the landlord's real estate agent) dishonestly claims not to have received an email that I truly did send. My question is: what are the ways to prove that the email was indeed sent? Thus far, the methods I have already thought of are: a screenshot of the mail in the outbox, and forwarding a copy of the original email. I am aware that other things like HTTP/SMTP headers exist as well. Are these useful for my purposes, and if so, how do I extract them? The email in question was sent using Yahoo webmail ( http://au.mail.yahoo.com/ ). Edit: I am not seeking legal advice here, just technical advice as to how to gather this information.

    Read the article

  • How can I flush my ssh keys on power management activity?

    - by Sam Halicke
    Hi all, I am using ssh-agent and private keys per the usual, and everything's working as normal. My question regards best practices for flushing keys from ssh-add on activity like sleep, suspend, hibernate, etc. I thought about writing a simple wrapper around those commands, but then wondered: are they even called, or does the kernel initiate this activity directly? Are the PM utilities strictly userland? I would like this additional layer of security beyond locking my screen, etc., and was wondering if anyone else has solved this elegantly or has best practices to recommend. Thanks.
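    On distributions that route power events through pm-utils, suspend and hibernate invoke hooks under /etc/pm/sleep.d, so the keys can be flushed there without wrapping any commands. A sketch of such a hook, assuming user agents keep their sockets in the default /tmp/ssh-*/ location; adapt as needed:

        #!/bin/sh
        # /etc/pm/sleep.d/00_flush_ssh_keys (illustrative; pm-utils runs hooks as root)
        case "$1" in
            suspend|hibernate)
                # Flush every user agent whose socket lives under /tmp/ssh-*/
                for sock in /tmp/ssh-*/agent.*; do
                    [ -S "$sock" ] || continue
                    owner=$(stat -c %U "$sock")
                    su - "$owner" -c "SSH_AUTH_SOCK=$sock ssh-add -D"
                done
                ;;
        esac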

    Read the article

  • Can you specify git-shell in .ssh/authorized_keys to restrict access to only git commands via ssh?

    - by Matt Connolly
    I'd like to be able to use an ssh key for authentication, but still restrict the commands that can be executed over the ssh tunnel. With Subversion, I've achieved this by using a .ssh/authorized_keys file like:

        command="/usr/local/bin/svnserve -t --tunnel-user matt -r /path/to/repository",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAAB3NzaC1yc2EAAAABIetc...

    I've tried this with "/usr/bin/git-shell" in the command, but I just get the funky old fatal: What do you think I am? A shell? error message.
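    For reference, git-shell is normally invoked by sshd as the user's login shell with -c and the original command line; when it is forced via command=, that original command has to be handed to it explicitly through SSH_ORIGINAL_COMMAND, otherwise it aborts with exactly that error. A sketch of an authorized_keys line built that way (key material truncated, and it assumes the account's login shell is still a normal shell so the forced command can run):

        # ~/.ssh/authorized_keys -- one line, illustrative
        command="/usr/bin/git-shell -c \"$SSH_ORIGINAL_COMMAND\"",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAAB3NzaC1yc2EAAAABIetc...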

    Read the article

  • Scheduled Jobs during hours of autumn time change

    - by NealWalters
    I'm wondering how other people deal with this scenario. What if you have a job scheduled to run at 1:30 am? In the autumn, when the time changes, the hour from 1:00:00 to 1:59:59 repeats itself, so that job would run twice. It could be Windows Task Scheduler, SQL Agent, or any other scheduling tool; most of these tools seem to be based on machine time, not UTC time. If I told it to run the job at a UTC time each night, then I wouldn't have the duplicate-hour issue.

    Read the article

  • Start jade via batch script as service on windows server 2012 amazon ec2 instance

    - by E. Lüders
    I would like to start the Jade agent platform with a batch script, as a service, on a Windows Server 2012 EC2 instance. The reason for this is that I want to start Jade automatically at system startup. For creating the service I use nssm. So far the service is created fine, but when I try to start it I get an error message:

        Windows could not start the NAME service on Local Computer. The service did not return an error...

    My batch file contains only one line:

        java jade.Boot -platform-id P%random%

    If I execute the script via cmd it works fine. Anybody got an idea why this is not working when I start the batch script as a service?

    Read the article

  • Can I pass the LHS of a cfengine3 processes: line to the RHS?

    - by joeforker
    I'm using cfengine to start the foobar process. Apparently the LHS is discarded when I use process_select? Can I simply pass the LHS to a function, rather than having to put the command match pattern in a body argument?

        bundle agent foobar
        {
        processes:
          "foobar"    # documented way would be to use .* here
            process_select => command("foobar"),
            restart_class => start_foobar;

        commands:
          start_foobar::
            "/usr/bin/foobar";
        }

        body process_select command(c)
        {
        command => "$(c)";
        process_result => "command";
        }

    Read the article

  • Sendmail Alias for Nonlocal Email Account

    - by Mark Roddy
    I admin a server which runs a number of web applications for a software dev team (source control, bug tracking, etc). The server has sendmail running solely as a transport to the departmental email server, over which I have no control. We have someone who is still in the department but no longer on the dev team, so I need to configure the transport agent to redirect all outgoing email (which would be coming from these applications) to the person who has taken their place. I added an entry in /etc/aliases like this:

        [email protected]: [email protected]

    But when I run /etc/init.d/sendmail newaliases I get the following error:

        /etc/mail/aliases: line 32: [email protected]... cannot alias non-local names

    So clearly I'm doing something I shouldn't. Is there a way to get aliases to work with non-local names, or alternatively is there a way to accomplish my goal of redirecting outgoing mail for this user to another one? Technical specs, if it matters: Ubuntu 6.06, sendmail 8.13 (Ubuntu-provided package).
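    As the error message says, the left-hand side of an /etc/aliases entry has to be a bare local name; only the right-hand side may point at a remote address. A sketch with hypothetical names (the real addresses are redacted in the question), followed by newaliases to rebuild the database:

        # /etc/aliases -- LHS must be a local part without a domain; RHS may be remote.
        # "departed.dev" and "replacement@dept.example" are hypothetical placeholders.
        departed.dev:   replacement@dept.example

    Whether this covers the "outgoing" case depends on the applications addressing mail to that local name on this server; if they send to the full departmental address directly, the rewrite has to happen on the departmental mail server instead.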

    Read the article

  • Need to set mailx variable to specify the From address

    - by user256817
    Running Oracle Linux 5.8 (which is just rebranded RedHat EL 5.8). I must change the From address, but we have scripts that use mailx which cannot be rewritten to use any extra flags, so I'd like to use internal variables instead, which the linux.die.net man page on mailx describes as an alternative to the -r flag:

        -r address
            Sets the From address. Overrides any from variable specified in environment or
            startup files. Tilde escapes are disabled. The -r address options are passed to
            the mail transfer agent unless SMTP is used. This option exists for compatibility
            only; it is recommended to set the from variable directly instead.

    (Source: http://linux.die.net/man/1/mailx)

    How can we use these mailx variables? I tried adding this to /root/.mailrc, with no luck:

        set [email protected]

    I also added that to /etc/mail.rc, with no luck either. So I am turning to you, SuperUsers...
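    Before fighting the variable syntax, it may be worth confirming which mailx implementation is installed: the linux.die.net page quoted above documents Heirloom mailx, and whether "set from=..." is honoured differs between that and the older mail/mailx builds shipped on EL5. A quick check, and one hedged way to feed the variable per invocation if it does turn out to be Heirloom mailx (path and address are placeholders):

        #!/bin/sh
        # Which mailx is this, and which package provides it?
        which mailx
        rpm -qf "$(which mailx)"
        # If it is Heirloom mailx, a resource file named via MAILRC can carry the variable:
        echo 'set from=noreply@ourdomain.example' > /etc/mailx-from.rc
        MAILRC=/etc/mailx-from.rc mailx -s "test" someone@example.com < /dev/null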

    Read the article

  • AWStats log format for tomcat access logs which has X-Forwarded-For

    - by Nix
    What should the AWStats log format be for the Tomcat access logs below? I tried these formats, but the external IP addresses do not show up in the AWStats reports.

        LogFormat="%host %other %logname %time1 %methodurl %code %bytesd %refererquot %uaquot %referer %other %other"
        LogFormat="%other %other %logname %time1 %methodurl %code %bytesd %refererquot %uaquot %host_proxy"

    Tomcat valve settings:

        pattern="%h %l %{USER_ID}s %t &quot;%r&quot; %s %b &quot;%{Referer}i&quot; &quot;%{User-Agent}i&quot; &quot;X-Forwarded-For=%{X-Forwarded-For}i&quot; &quot;JSESSIONID=%{JSESSIONID}c&quot; %D"

    Log entry:

        127.0.0.1 - - [04/Nov/2013:13:39:55 +0000] "GET / HTTP/1.1" 200 12345 "https://www.google.com/url?some_url" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36" "X-Forwarded-For=real_ip, proxy_server_internal_ip" "JSESSIONID=-" 12345

    Read the article

  • Server monitoring for medium scale UNIX network

    - by nbartolomeo
    I'm looking for suggestions for a good monitoring tool, or tools, to handle a mixed Linux (RedHat 4-5) and HP-UX environment. Currently we are using Hobbit, which is working reasonably well, but it is becoming harder to keep track of which alerts are sent out for which servers. Features I'd like to see: easy configuration of servers, and the ability to monitor CPU, network, memory, and specific processes. I've looked into Nagios, but from what I have seen it won't be easy to set up the configuration for all of our ~200 servers, and without installing a plugin into each agent I won't be able to monitor processes.

    Read the article

  • SPF record doesn't work (not sure which DNS server to tweak)

    - by Ion
    Problem: Google (and perhaps others) marks our emails as SPF neutral. Let me give you some background about the setup: initially we got a dedicated server (Hetzner) with Plesk installed to host a domain/web application, let's say bigjaws.com. Plesk automatically creates a DNS zone for it with some records for the various services it provides out of the box, e.g. webmail.bigjaws.com as a CNAME to bigjaws.com to provide Horde/whatever, etc. Let me point out four relevant records (where XXX.XXX.XXX.158 is our dedicated IP):

        bigjaws.com.         A    XXX.XXX.XXX.158
        mail.bigjaws.com.    A    XXX.XXX.XXX.158
        bigjaws.com          MX   (10) mail.bigjaws.com.
        bigjaws.com.         TXT  v=spf1 +a +mx -all

    The above records are not(?) valid anymore though, because after using this dedicated server for a while, our site got bigger and bigger, so we decided to move our operations over to AWS (EC2, RDS, ELB, etc.), but we retained the mail functionality as is, i.e. emails from [email protected] are sent by connecting to our dedicated server where Plesk takes care of things. This was decided in order not to set up anything from scratch. Of course for all DNS-related things we now use Route53. In Route53 I have the following records:

        mail.schoox.com.     A    XXX.XXX.XXX.158
        bigjaws.com.         MX   (10) mail.bigjaws.com
        bigjaws.com.         SPF  "v=spf1 +ip4:XXX.XXX.XXX.158 +mx ~all"

    From my understanding of SPF, the SPF status should have been "pass": I designate that all email being sent by bigjaws.com from XXX.XXX.XXX.158 is valid/not spam (I added +mx there but I'm not sure if it's needed). When a mail server receives an email, doesn't it look up the SPF record of the domain and check it against the IP it got the email from? Checking with spfquery:

        root@box:~# spfquery -ip XXX.XXX.XXX.158 -sender [email protected] -rcpt-to [email protected]
        StartError
        Context: Failed to query MAIL-FROM
        ErrorCode: (2) Could not find a valid SPF record
        Error: No DNS data for 'bigjaws.com'.
        EndError
        noneneutral
        Please see http://www.openspf.org/Why?id=employee1%40bigjaws.com&ip=XXX.XXX.XXX.158&receiver=spfquery : Reason: default
        spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com
        Received-SPF: neutral (spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com) client-ip=XXX.XXX.XXX.158; [email protected];

    If I go to the address listed above (openspf.org) it tells me that the message should have been accepted(!):

        spfquery rejected a message that claimed an envelope sender address of [email protected].
        spfquery received a message from static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) that claimed an envelope sender address of [email protected].
        The domain bigjaws.com has authorized static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) to send mail on its behalf, so the message should have been accepted. It is impossible for us to say why it was rejected.
        What should I do? If the problem persists, contact the bigjaws.com postmaster.

    Also, here are some headers from an email sent from one of our [email protected] addresses to a gmail.com address (by the way, bigjaws.de listed in the "Received: from" field was the initial domain hosted on the dedicated server before adding the .com one -- both are still listed as separate subscriptions under Plesk).
        Delivered-To: [email protected]
        Received: by 10.14.177.70 with SMTP id c46csp289656eem;
                Wed, 23 Oct 2013 01:11:00 -0700 (PDT)
        X-Received: by 10.14.102.66 with SMTP id c42mr306186eeg.47.1382515860386;
                Wed, 23 Oct 2013 01:11:00 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from bigjaws.de (static.158.XXX.XXX.XXX.clients.your-server.de. [XXX.XXX.XXX.158])
                by mx.google.com with ESMTPS id l4si19438578eew.161.2013.10.23.01.10.59
                for <[email protected]>
                (version=TLSv1 cipher=RC4-SHA bits=128/128);
                Wed, 23 Oct 2013 01:10:59 -0700 (PDT)
        Received-SPF: neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=XXX.XXX.XXX.158;
        Authentication-Results: mx.google.com; spf=neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]
        DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=default; d=bigjaws.com;
                b=WwRAS0WKjp9lO17iMluYPXOHzqRcOueiQT4rPdvy3WFf0QzoXiy6rLfxU/Ra53jL1vlPbwlLNa5gjoJBi7ZwKfUcvs3s02hJI7b3ozl0fEgJtTPKoCfnwl4bLPbtXNFu;
                h=Received:Received:Message-ID:Date:From:User-Agent:MIME-Version:To:Subject:Content-Type:Content-Transfer-Encoding;
        Received: (qmail 22722 invoked from network); 23 Oct 2013 10:10:59 +0200
        Received: from hostname.static.ISP.com (HELO ?192.168.1.60?) (YYY.YYY.ISP.IP)
                by static.158.XXX.XXX.XXX.clients.your-server.de. with ESMTPSA (DHE-RSA-AES256-SHA encrypted, authenticated); 23 Oct 2013 10:10:59 +0200
        Message-ID: <[email protected]>
        Date: Wed, 23 Oct 2013 11:11:00 +0300
        From: BigJaws Employee <[email protected]>
        User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.0.1
        MIME-Version: 1.0
        To: [email protected]
        Subject: test SPF
        Content-Type: text/plain; charset=ISO-8859-1
        Content-Transfer-Encoding: 7bit

        test SPF

    Any ideas why SPF is not working correctly? Also, are there any DNS settings that are not needed anymore and create a problem?
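    The spfquery output ("No DNS data for 'bigjaws.com'") suggests nothing usable is being published, and receivers such as Google only evaluate TXT records; the separate "SPF" record type that Route53 offers is deprecated and generally ignored. A quick check from any host, reusing the sender address from the spfquery run above:

        #!/bin/sh
        # SPF is evaluated from TXT records; compare what each record type returns.
        dig +short TXT bigjaws.com
        dig +short SPF bigjaws.com    # the deprecated RR type created in Route53
        # Re-test the policy once a TXT record carrying the v=spf1 string is in place:
        spfquery -ip XXX.XXX.XXX.158 -sender employee1@bigjaws.com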

    Read the article

  • How can I transfer a Zabbix item between hosts and keep the item's statistics?

    - by Stepchik
    There are two servers (srv1 and srv2), each with a MySQL server installed. The MySQL on srv1 contains a database (db1). The Zabbix server collects statistics through a configured agent user parameter (https://www.zabbix.com/documentation/2.0/manual/config/items/userparameters). Yesterday I copied database db1 from MySQL on srv1 to MySQL on srv2. I can clone the Zabbix server item (https://www.zabbix.com/documentation/2.0/manual/config/items) to srv2, but I would lose all of srv1's db1 statistics. Can you advise how to keep them?

    Read the article

  • Why does PsExec hang after successfully running a powershell script?

    - by Matt
    The script is fairly straightforward; it simply tries to start a bunch of Windows services. Execution works fine locally on the target machine. The script actually executes fine via PsExec as well; it just never returns until I hit the Enter key at my CMD prompt. This is a problem, because it is being called from TeamCity, and it makes the agent hang waiting for PsExec to return. I've tried the following: adding an exit and exit 0 at the end of the PowerShell script; adding a < NUL to the end of the PsExec call, per the answer in this SF question; adding a > stdout redirect. This is how I am actually calling psexec:

        psexec \\target -u domain\username -p password powershell c:\path\script.ps1

    No matter what I do, it hangs until I hit Enter locally on the cmd prompt. After I hit Enter, I get the message:

        powershell exited on target with error code 0.

    Read the article

  • Apache https configurations

    - by sissonb
    I am trying to set up my domain name with a self-signed cert. I created the cert and placed the server.key and server.crt files into C:/apache/config/. Then I updated my httpd.conf to include the following:

        <VirtualHost 192.168.5.250:443>
            DocumentRoot C:/www
            ServerName mydomain.com:443
            ServerAlias www.mydomain.com:443
            SSLEngine on
            SSLCertificateFile C:/apache/conf/server.crt
            SSLCertificateKeyFile C:/apache/conf/server.key
            SSLVerifyClient none
            SSLProxyEngine off
            SetEnvIf User-Agent ".*MSIE.*" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            CustomLog logs/ssl_request_log \
                "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
        </VirtualHost>

    Now when I go to https://mydomain.com I get the following error:

        SSL connection error
        Unable to make a secure connection to the server. This may be a problem with the server, or it may be requiring a client authentication certificate that you don't have.
        Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error.

    Can anyone see what I'm doing wrong? Thanks!
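    A client-side "Error 107 / SSL protocol error" usually means port 443 answered with something that was not a TLS handshake at all, rather than a bad certificate. A quick way to see the raw handshake from any machine with OpenSSL installed:

        #!/bin/sh
        # Does the server speak TLS on 443 at all, and which certificate does it present?
        openssl s_client -connect mydomain.com:443 -servername mydomain.com < /dev/null

    If the handshake fails outright, the usual suspects in httpd.conf are a missing Listen 443 and the SSL module/config not being loaded, rather than the certificate files themselves.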

    Read the article

  • Files downloaded via Firefox and curl have different sizes

    - by Arash Mousavi
    When I download a file from this link with Firefox, its size is 74580 B, but when I download it with curl, sending exactly the headers Firefox sent, its size is 79891 B (I copied all the headers from Firefox and pasted them into the curl command). What is the problem? If you need any additional data, ask me in a comment. My curl command:

        curl --header 'Host: members.tsetmc.com' \
             --header 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:29.0) Gecko/20100101 Firefox/29.0' \
             --header 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
             --header 'Accept-Language: en-US,en;q=0.5' \
             --header 'Referer: http://www.tsetmc.com/Loader.aspx?ParTree=15131F' \
             --header 'Cookie: ASP.NET_SessionId=pwzbckbdpjlzqj45vcdbd455' \
             --header 'Connection: keep-alive' \
             'http://members.tsetmc.com/tsev2/excel/MarketWatchPlus.aspx?d=0' \
             -o 'MarketWatchPlus-1393_3_14.xlsx' -L
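    One plausible explanation, given that the copied headers include "Accept-Encoding: gzip, deflate": the server may return a gzip-compressed body, which Firefox transparently decompresses before saving, while curl writes the compressed bytes to disk as-is unless told otherwise. Since an .xlsx is already a ZIP archive, gzipping it on the wire can make it slightly larger, which would match the curl copy being the bigger one. Two quick checks (standard curl/file options; the original cookie and header flags are omitted for brevity):

        #!/bin/sh
        # Is the saved file still gzip data instead of a Zip/xlsx archive?
        file MarketWatchPlus-1393_3_14.xlsx
        # Retry letting curl negotiate and decode the compression itself:
        curl --compressed -L -o test.xlsx 'http://members.tsetmc.com/tsev2/excel/MarketWatchPlus.aspx?d=0'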

    Read the article

  • iis php internal server error

    - by user1633206
    I developed a website using PHP/MySQL running on an IIS server with the CGI server API. After two weeks it suddenly started giving me error 500. It has a lot of scripts; the index.php home page is working, but other scripts that do a header redirection fail. What's wrong with my scripts? The Live HTTP Headers addon of Firefox says:

        GET /allplans.php?lang=ar&cat=1 HTTP/1.1
        Host: www.myhost.com... [edited]
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:14.0) Gecko/20100101 Firefox/14.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive
        Cache-Control: max-age=0

        HTTP/1.1 500 Internal Server Error
        Content-Type: text/html
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        Date: Mon, 03 Sep 2012 08:09:13 GMT
        Content-Length: 1208
        Connection: Keep-Alive

    Read the article
