Search Results

Search found 29612 results on 1185 pages for 'script console'.

  • curl http_code of 000

    - by Mikkel Paulson
    I have a shell script that I use to monitor loading times and response codes on my live server cluster. It runs a total of 250 iterations every 5 minutes, distributed across 10 servers and 6 sites. It uses curl with the -w flag to return pertinent information, which is then parsed by my shell script:

        curl -svw 'monitor_load_times %{time_total} %{http_code}' -b 'server=$server' -m 15 -o /dev/null $url 2>&1

    This information is then parsed by a graphing script that can display a number of different responses. However, curl will occasionally return a response code of "000". When this happens, it seems to happen multiple times at once despite being distributed over many iterations. What I'm trying to work out is whether this is a client-side issue that's skewing my results, or whether it's actually indicative of a server-side problem affecting my entire cluster. Does 000 mean that the connection was dropped? Database entries corresponding to curl iterations with that response code return "0.000" for the time_total value. All of the search results I've found for curl returning a code of 000 are related to HTTPS being unsupported, but all of my test URLs are HTTP. (The spike in 500 errors is a completely unrelated issue that affected my servers last night.)
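
    A "000" in %{http_code} just means curl never saw an HTTP status line, so the reason lives in curl's exit status rather than in the write-out string. A minimal sketch of recording both per probe ($server and $url are the placeholders from the command above; the exit-code meanings are standard curl ones):

        #!/bin/bash
        # Run one probe and capture the write-out fields plus curl's exit status.
        out=$(curl -s -w 'monitor_load_times %{time_total} %{http_code}' \
                   -b "server=$server" -m 15 -o /dev/null "$url" 2>/dev/null)
        rc=$?
        # rc 28 = timed out (-m 15), 7 = connection refused, 6 = DNS failure, 0 = OK
        echo "$out curl_exit=$rc"

    Logging curl_exit alongside the 000 rows should show whether the cluster is refusing connections (a server-side problem) or the monitoring host itself is hitting DNS failures or timeouts (client-side skew).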

    Read the article

  • Using Supermicro IPMI behind a Proxy?

    - by Stefan Lasiewski
    This is a SuperMicro server with an X8DT3 motherboard which contains an on-board IPMI BMC (in this case, a Winbond WPCM450; I believe many Dell servers use a similar BMC model). A common practice with IPMI is to isolate it on a private, non-routable network. In our case all IPMI cards are plugged into a private management LAN at 192.168.1.0/24 which has no route to the outside world. If I plug my laptop into the 192.168.1.0/24 network, I can verify that all IPMI features work as expected, including the remote console.

    I need to access all of the IPMI features from a different network, over some sort of encrypted connection. I tried SSH port forwarding. This works fine for a few servers; however, we have close to 100 of these servers, and maintaining an SSH client configuration to forward 6 ports on 100 servers is impractical. So I thought I would try a SOCKS proxy. This works, but it seems that the Remote Console application does not obey my system-wide proxy settings.

    I set up a SOCKS proxy; verbose logging lets me see network activity and whether ports are being forwarded:

        ssh -v -D 3333 [email protected]

    I configure my system to use the SOCKS proxy and confirm that Java is using the SOCKS proxy settings. The SOCKS proxy is working: I connect to the BMC at http://192.168.1.100/ using my web browser, and I can log in, view the Server Health, power the machine on or off, etc. Since SSH verbose logging is enabled, I can see the progress.

    Here's where it gets tricky: I click on the "Launch Console" button, which downloads a file called jviewer.jnlp. JNLP files are opened with Java Web Start. A Java window opens, with "Redirection Viewer" in the title bar and menus for "Video", "Keyboard", "Mouse", etc. This confirms that Java is able to download the application through the proxy and start it. 60 seconds later, the application times out and simply says "Error opening video socket". If this worked, I would see a VNC-style window. My SSH logs show no connection attempts to ports 5900/5901. This suggests that the Java application started the VNC viewer, but that the viewer ignores the system-wide proxy settings and is thus unable to connect to the remote host.

    Java seems to obey my system-wide proxy settings, but this VNC application seems to ignore them. Is there any way for me to force this VNC application to use my system-wide proxy settings?
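
    If the viewer really is opening its video socket outside Java's proxy-aware classes, one workaround to try (untested against this particular Winbond BMC; the config path and port are assumptions based on the ssh -D 3333 tunnel above) is to force every socket through the tunnel with a SOCKS wrapper such as proxychains:

        # /etc/proxychains.conf -- point the wrapper at the ssh -D tunnel
        strict_chain
        [ProxyList]
        socks5 127.0.0.1 3333

        # launch the JNLP viewer through the wrapper
        proxychains javaws jviewer.jnlp

    Since proxychains intercepts connections at the libc level via LD_PRELOAD, it can catch sockets that ignore Java's own proxy settings, though it may not intercept every JVM socket implementation.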

    Read the article

  • Hudson authentication via wget returns HTTP error 302

    - by Rafael
    Hello, I'm trying to make a script to authenticate to Hudson using wget and store the authentication cookie. The contents of the script are:

        wget \
            --no-check-certificate \
            --save-cookies /home/hudson/hudson-authentication-cookie \
            --output-document "-" \
            'https://myhudsonserver:8443/hudson/j_acegi_security_check?j_username=my_username&j_password=my_password&remember_me=true'

    Unfortunately, when I run this script, I get:

        --2011-02-03 13:39:29--  https://myhudsonserver:8443/hudson/j_acegi_security_check?j_username=my_username&j_password=my_password&remember_me=true
        Resolving myhudsonserver... 127.0.0.1
        Connecting to myhudsonserver|127.0.0.1|:8443... connected.
        WARNING: cannot verify myhudsonserver's certificate, issued by
        `/C=Unknown/ST=Unknown/L=Unknown/O=Unknown/OU=Unknown/CN=myhudsonserver':
          Self-signed certificate encountered.
        HTTP request sent, awaiting response... 302 Moved Temporarily
        Location: https://myhudson:8443/hudson/;jsessionid=087BD0B52C7A711E0AD7B8BD4B47585F [following]
        --2011-02-03 13:39:29--  https://myhudsonserver:8443/hudson/;jsessionid=087BD0B52C7A711E0AD7B8BD4B47585F
        Reusing existing connection to myhudsonserver:8443.
        HTTP request sent, awaiting response... 404 Not Found
        2011-02-03 13:39:29 ERROR 404: Not Found.

    There's no error log in any of Hudson's Tomcat log files. Does anyone have any idea about what might be happening? Thanks.
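
    Two things may be worth trying here, sketched below: Acegi's j_acegi_security_check endpoint normally expects the credentials as a POST rather than as query-string parameters, and wget silently discards session cookies unless --keep-session-cookies is given. (Host, path, and credential placeholders are carried over from the script above.)

        wget --no-check-certificate \
             --save-cookies /home/hudson/hudson-authentication-cookie \
             --keep-session-cookies \
             --post-data 'j_username=my_username&j_password=my_password&remember_me=true' \
             --output-document - \
             'https://myhudsonserver:8443/hudson/j_acegi_security_check'

    If authentication succeeds, the saved cookie file should then contain the JSESSIONID needed for later requests.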

    Read the article

  • How to get robocopy running in PowerShell?

    - by Mark Allison
    Hi, I'm trying to use robocopy inside PowerShell to mirror some directories on my home machines. Here's my script:

        param ($configFile)
        $config = Import-Csv $configFile
        $what = "/COPYALL /B /SEC/ /MIR"
        $options = "/R:0 /W:0 /NFL /NDL"
        $logDir = "C:\Backup\"
        foreach ($line in $config) {
            $source = $($line.SourceFolder)
            $dest = $($line.DestFolder)
            $logfile = $logDIr
            $logfile += Split-Path $dest -Leaf
            $logfile += ".log"
            robocopy "$source $dest $what $options /LOG:MyLogfile.txt"
        }

    The script takes in a CSV file with a list of source and destination directories. When I run the script I get these errors:

        -------------------------------------------------------------------------------
           ROBOCOPY     ::     Robust File Copy for Windows
        -------------------------------------------------------------------------------

          Started : Sat Apr 03 21:26:57 2010
           Source : P:\ C:\Backup\Photos \COPYALL \B \SEC\ \MIR \R:0 \W:0 \NFL \NDL \LOG:MyLogfile.txt\
             Dest -
            Files : *.*
          Options : *.* /COPY:DAT /R:1000000 /W:30

        ------------------------------------------------------------------------------

        ERROR : No Destination Directory Specified.

          Simple Usage :: ROBOCOPY source destination /MIR

                source :: Source Directory (drive:\path or \\server\share\path).
           destination :: Destination Dir  (drive:\path or \\server\share\path).
                  /MIR :: Mirror a complete directory tree.

          For more usage information run ROBOCOPY /?

        ****  /MIR can DELETE files as well as copy them !

    Any idea what I need to do to fix this? Thanks, Mark.

    Read the article

  • Setting static IP on CentOS without system-config-network

    - by Josh
    Background information: I have a problem installing system-config-network. It appears it cannot find an update to sqlite (it checks every mirror and comes back empty-handed). I have tried the skip-broken option and the package cleanups from yum-utils. Since I cannot get it installed, I decided to set an IP manually from the console, but a quick Google search didn't turn up a how-to guide that works. What do I need to do to change a currently configured DHCP IP to a static IP from the console? Thanks, Josh
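
    The usual approach on CentOS is to edit the interface's ifcfg file directly and restart networking. A minimal sketch (the interface name and all addresses below are assumptions; substitute your own):

        # /etc/sysconfig/network-scripts/ifcfg-eth0
        DEVICE=eth0
        BOOTPROTO=static      # was: dhcp
        IPADDR=192.168.1.50
        NETMASK=255.255.255.0
        GATEWAY=192.168.1.1
        ONBOOT=yes

        # then apply the change:
        service network restart

    DNS servers go in /etc/resolv.conf (nameserver lines), since DHCP will no longer populate that file for you.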

    Read the article

  • How can I do an Ubuntu Lucid Desktop installation with an HTTP preseed file?

    - by netvope
    Installation media: ubuntu-10.04-desktop-i386.iso. I tried a lot of different boot parameters, but either the installer ignored the preseed configuration, or it booted directly into the LiveCD. An example of the boot parameters I've tried:

        auto url=http://mydomain.com/path/preseed.cfg boot=casper only-ubiquity initrd=/casper/initrd.lz quiet splash --

    If I remove only-ubiquity, it boots as a LiveCD. If I remove boot=casper, it won't boot. If I add vga=normal locale=en_US console-setup/layoutcode=us console-setup/ask_detect=false interface=auto, it still can't do an automatic install. If I remove auto, it's the same. What are the correct boot parameters for launching such an installation?

    From the Apache log of the server hosting preseed.cfg, I can see that the installer has no problem fetching the preseed file. My preseed file is almost identical to the one at https://help.ubuntu.com/10.04/installation-guide/example-preseed.txt. Moreover, I have run debconf-set-selections -c preseed.cfg to ensure that the preseed file is correct.
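
    One more permutation that may be worth a try: casper has an automatic-ubiquity parameter specifically for unattended Ubiquity runs. This exact combination is an untested sketch, with the URL carried over from the question:

        boot=casper automatic-ubiquity initrd=/casper/initrd.lz url=http://mydomain.com/path/preseed.cfg quiet splash --

    The auto shortcut is aimed at the alternate-CD (debian-installer) images; the desktop CD runs Ubiquity under casper, which may be why most d-i preseed recipes don't carry over to it unchanged.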

    Read the article

  • Scripting an automated SQLServer 2008 DR move

    - by ItsAMystery
    Hi all. We use the built-in log shipping in SQL Server to log-ship to our DR site, but once a month we do a DR test which requires us to move back and forth between our live and backup servers. We run multiple (30) databases on the system, so manually backing up the final logs and disabling the jobs is too much work and takes too long. I thought no problem, I will script it, but I have run into trouble with it always complaining that the final log ship is too early to apply, even though I don't export the final log until putting the database into norecovery mode.

    Firstly, does anyone know a simple and reliable way of doing this? I have looked at some third-party software (Red Gate SQL Backup, I think it was) but that didn't make it easy in this situation either. What I want to be able to do is basically run a script (a series of stored procedures) to get me to DR, and run another to get me back, with no data loss.

    My scripts are very simplistic at the moment, but here they are. There are two servers: primary PARIS, secondary PARIST. StartAgentJobAndWait is a script written by someone else (thanks) which just checks that the jobs have finished, or quits them if they never end. At the moment I am just using a test database called BOB2, but if I can get it working I will pass in the database and job names.

    From PARIS:

        /* Disable backup job */
        exec msdb..sp_update_job @job_name = 'LSBackup_BOB2', @enabled = 0
        exec PARIST.msdb..sp_update_job @job_name = 'LSCopy_PARIS_BOB2', @enabled = 0
        exec PARIST.msdb..sp_update_job @job_name = 'LSRestore_PARIS_BOB2', @enabled = 0
        exec PARIST.master.dbo.DRStage2

    On PARIST, DRStage2:

        DECLARE @RetValue varchar (10)
        EXEC @RetValue = StartAgentJobAndWait LSCopy_PARIS_BOB2, 2
        SELECT ReturnValue=@RetValue
        if @RetValue = 1
        begin
            print 'The Copy Task completed Succesffuly'
        END
        ELSE
            print 'The Copy task failed, This may or may not be a problem, check restore state of database'
        SELECT @RetValue = 0
        EXEC @RetValue = StartAgentJobAndWait LSRestore_PARIS_BOB2, 2
        SELECT ReturnValue=@RetValue
        if @RetValue = 1
        begin
            print 'The Restore Task completed Succesffuly'
        END
        ELSE
            print 'The Copy task failed, This may or may not be a problem, check restore state of database'
        exec PARIS.master.dbo.DRStage3

    On PARIS, DRStage3:

        /* Do the last logship and move it to Trumpington */
        BACKUP log "BOB2" to disk='c:\drlogshipping\BOB2.bak' with compression, norecovery
        EXEC xp_cmdshell 'copy c:\drlogshipping \\192.168.7.11\drlogshipping'
        EXEC PARIST.master.dbo.DRTransferFinish

    And the body of DRTransferFinish on PARIST:

        AS BEGIN
        restore database "BOB2" from disk='c:\drlogshipping\bob2.bak' with recovery

    Read the article

  • openvpn TCP/UDP slow SSH/SMB performance

    - by Petr Latal
    I have a question about strange behavior of my OpenVPN configuration on Debian lenny. I have 2 server configs (one proto tcp-server based and one proto udp based). ISP bandwidth is 7Mbit/7Mbit. When I use proto tcp-server, my download rate is fine at around 6.4Mbit/s, but the upload rate is about 3Mbit/s. When I use proto udp, my download rate is around 3Mbit/s and the upload rate around 6.4Mbit/s. I tried tuning the MTU, MSSFIX, and cipher on/off on the server and client configs to even out the rates, but without success.

    Here is the TCP-based SERVER config:

        mode server
        tls-server
        port 1194
        proto tcp-server
        dev tap0
        ifconfig 11.10.15.1 255.255.255.0
        ifconfig-pool 11.10.15.2 11.10.15.20 255.255.255.0
        push "route 192.168.1.0 255.255.255.0"
        push "dhcp-option DNS 192.168.1.200"
        push "route-gateway 11.10.15.1"
        push "dhcp-option WINS 192.168.1.200"
        route-up /etc/openvpn/routeup.sh
        duplicate-cn
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh2048.pem
        log-append /var/log/openvpn.log
        status /var/run/vpn.status 10
        user nobody
        group nogroup
        keepalive 10 120
        comp-lzo
        verb 3
        script-security 3
        plugin /usr/lib/openvpn/openvpn-auth-pam.so system-auth
        persist-tun
        persist-key
        mssfix
        cipher BF-CBC

    Here is the UDP-based SERVER config:

        port 1194
        proto udp
        dev tun0
        local xx.xx.xx.xx
        server 11.10.15.0 255.255.255.0
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh2048.pem
        log-append /var/log/openvpn.log
        status /var/run/vpn.status 10
        user nobody
        group nogroup
        keepalive 10 120
        comp-lzo
        verb 3
        duplicate-cn
        script-security 3
        plugin /usr/lib/openvpn/openvpn-auth-pam.so system-auth
        persist-tun
        persist-key
        tun-mtu 1500
        mssfix 1212
        client-to-client
        ifconfig-pool-persist ipp.txt

    Here is the TCP/UDP-based Windows CLIENT config:

        remote xx.xx.xx.xx
        --socket-flags TCP_NODELAY
        tls-client
        port 1194
        proto tcp-client
        #proto udp
        dev tap
        #dev tun
        pull
        ca ca.crt
        cert latis.crt
        key latis.key
        mute 0
        comp-lzo adaptive
        verb 3
        resolv-retry infinite
        nobind
        persist-key
        auth-user-pass
        auth-nocache
        script-security 2
        mssfix
        cipher BF-CBC
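
    To separate tunnel overhead from protocol effects, it may help to measure raw throughput inside each tunnel with iperf. A sketch (assumes iperf is installed on both ends; the tunnel address comes from the tap config above):

        # on the Debian server, inside the tunnel:
        iperf -s

        # on the Windows client: test client -> server, then reverse with -r
        iperf -c 11.10.15.1 -t 30
        iperf -c 11.10.15.1 -t 30 -r

    If the asymmetry only shows up under proto tcp-server, that points at the well-known cost of carrying TCP streams inside a TCP tunnel (retransmissions compound in both layers) rather than at the MTU/mssfix settings.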

    Read the article

  • cset as non-root to set cpu affinity for running processes

    - by RaveTheTadpole
    I've been playing with cset to set CPU affinity for running processes. I'm recreating the built-in "shield" function manually with cset's set and proc subcommands, to add some subsets for specific threads of my application. I have a bash script that calls cset to create the sets and move the correct threads into the correct sets. It works when run with sudo.

    Now I'd like to make this script executable by another user, who does not have sudo powers. I trust this user enough to be responsible with cset, but I don't want to open up the wide powers of root. I thought that CAP_SYS_NICE -- which is needed for sched_setaffinity, which I just assume cset must use -- on the script would be sufficient, but that didn't work. I tried extending CAP_SYS_NICE to the cset program (which is a thin Python wrapper for the cset Python library). No dice. The output of cap_to_text on my CAP_SYS_NICE'd scripts is "=cap_ipc_lock,cap_sys_nice,cap_sys_resource+eip" (it has ipc_lock and sys_resource for other reasons; I think only sys_nice is relevant). Any ideas?
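
    One likely reason the capability route fails: the kernel ignores file capabilities on interpreted scripts (for the same reason setuid scripts are ignored; it is the interpreter binary that actually executes), and cset also has to write to the root-owned cpuset filesystem, which CAP_SYS_NICE does not cover. A narrowly scoped sudoers entry is a common fallback; a sketch (the user name and script path are hypothetical):

        # /etc/sudoers.d/cset-shield -- let one user run only this script as root
        alice ALL=(root) NOPASSWD: /usr/local/bin/shield-setup.sh

        # the user then runs:
        sudo /usr/local/bin/shield-setup.sh

    Keeping the script root-owned and not writable by that user prevents the entry from becoming a general root escalation.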

    Read the article

  • MaxStartups and MaxSessions configuration parameters for SSH connections?

    - by Webby
    I am copying files from machineB and machineC onto machineA, where my shell script runs. If a file isn't on machineB then it should definitely be on machineC, so I try copying from machineB first and fall back to machineC. I am copying the files in parallel using the GNU Parallel library and it is working fine; currently I copy 10 files in parallel. Below is my shell script:

        #!/bin/bash
        export PRIMARY=/test01/primary
        export SECONDARY=/test02/secondary
        readonly FILERS_LOCATION=(machineB machineC)
        export FILERS_LOCATION_1=${FILERS_LOCATION[0]}
        export FILERS_LOCATION_2=${FILERS_LOCATION[1]}
        PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers
        SECONDARY_PARTITION=(1643 1103 1372 1096 1369 1568) # this will have more file numbers
        export dir3=/testing/snapshot/20140103

        find "$PRIMARY" -mindepth 1 -delete
        find "$SECONDARY" -mindepth 1 -delete

        do_Copy() {
            el=$1
            PRIMSEC=$2
            scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/.
        }
        export -f do_Copy

        parallel --retries 10 -j 10 do_Copy {} $PRIMARY ::: "${PRIMARY_PARTITION[@]}" &
        parallel --retries 10 -j 10 do_Copy {} $SECONDARY ::: "${SECONDARY_PARTITION[@]}" &
        wait

        echo "All files copied."

    Problem statement: at some point the script starts getting this error:

        ssh_exchange_identification: Connection closed by remote host
        ssh_exchange_identification: Connection closed by remote host
        ssh_exchange_identification: Connection closed by remote host

    I guess the error is typically caused by too many ssh/scp sessions starting at the same time, which leads me to believe /etc/ssh/sshd_config:MaxStartups and MaxSessions are set too low. But on which server is that limit being hit: machineB and machineC, or machineA? And on which machines do I need to increase the values? On machineA this is what I can find:

        root@machineA:/home/david# grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10:30:60
        root@machineA:/home/david# grep MaxSessions /etc/ssh/sshd_config

    And on machineB and machineC this is what I can find:

        [root@machineB ~]$ grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10
        [root@machineB ~]$ grep MaxSessions /etc/ssh/sshd_config
        #MaxSessions 10
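
    MaxStartups limits concurrent unauthenticated connections on the sshd that receives them, so in this layout the cap is enforced by machineB and machineC (the scp targets), not by machineA, which only acts as the SSH client. A sketch of the change there (the values are illustrative, not tuned):

        # on machineB and machineC, in /etc/ssh/sshd_config:
        MaxStartups 50:30:100     # allow up to 100 pending handshakes
        MaxSessions 50

        # then reload sshd (path varies by distro):
        service sshd reload

    The commented-out lines in the configs above are just the defaults, so machineB and machineC are effectively running with a burst limit of 10; two parallel jobs at -j 10 plus retries can easily exceed that.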

    Read the article

  • Check for updates of a specific Debian package list

    - by Erwan Queffélec
    The setup: I run a Debian Squeeze host that I use to build a multi-language project (Python, Java, PHP...) and generate custom packages (Debian and RPM) automatically (through Jenkins).

    The problem: the target distributions of those Debian packages are Etch, Lenny and Squeeze, but our project has some native dependencies that are available only through the DebianRelease + 1 repository (i.e. Lenny + 1 == Squeeze, Squeeze + 1 == Wheezy). We, for example, need the jetty packages from Squeeze in Lenny, and the cyrus-imapd-2.4 packages from Wheezy in Squeeze. Some additional info: some packages we can simply 'backport by hand' by mirroring the DebianRelease + 1 packages to our own repositories. For instance, the jetty package from Squeeze will run fine on Lenny because it doesn't need a pile of additional dependencies. However, we do need to rebuild some packages. For instance, cyrus-imapd-2.4 from Wheezy has a lot of unsatisfied dependencies on Squeeze, so we need to rebuild it in Squeeze and then upload it to our repo.

    The question: I need a simple way of knowing if there are any updates to those extra packages (both "normal" and "security" updates). I could write a simple script that runs weekly, gets some parameters from a file, and generates an update report. Let's say the file looks like this:

        jetty:squeeze
        cyrus-imapd-2.4:wheezy

    The script should run as a normal user, so as not to mess up the system apt configuration, and issue the appropriate commands to generate that report. Does Debian have some built-in apt-* commands/options dedicated to that kind of problem which I could use to write this script? If not, can someone think of another clean solution to achieve what I need?
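
    One option that touches no local apt state and runs as a normal user is rmadison (from the devscripts package), which queries the Debian archive for the versions currently published in each suite. A sketch over the file format above (the input file name is a placeholder):

        #!/bin/bash
        # For each "package:release" line, report what the archive currently ships.
        # rmadison needs network access; its output columns are package|version|suite|arch.
        while IFS=: read -r pkg rel; do
            echo "== $pkg ($rel) =="
            rmadison "$pkg" | grep -w "$rel"
        done < watched-packages.txt

    Because grep -w "squeeze" also matches "squeeze-security" (the hyphen is a word boundary), security uploads show up in the same report; diffing successive weekly runs would then flag version changes.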

    Read the article

  • What can be the causes of an HTTP server crash?

    - by mithunmo
    Hello, I am using WAMP server on Windows XP: Apache 2.2.11, MySQL 5.1.36 (InnoDB engine), PHP 5.3.0. I observe that my WAMP server crashes in the following scenarios:

    - if I use a low-end PC (low processor speed and low RAM);
    - after making some changes to the httpd.conf file, e.g. changing the Allow from IP address (but here it crashes only once and then starts to work fine);
    - random crashes.

    Crash log:

        szAppName : httpd.exe
        szAppVer : 2.2.11.0
        szModName : php5ts.dll
        szModVer : 5.3.0.0
        offset : 0000c309
        C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\httpd.exe.mdmp
        C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\appcompat.txt

    My questions:

    - Can high CPU utilization or low RAM also cause the HTTP server to crash?
    - Excessive file reading, as in every 10 seconds?
    - Unlimited script execution time? I have set the maximum execution time in my PHP script to 0, as the script sometimes has to run for 2-3 days. Is there any way to avoid this?
    - Access to the database: should we take a lock before reading and writing?

    Can these be the reasons for random WAMP server crashes, or is it some other programming error? Please guide me. Regards, Mithun

    Read the article

  • Xen HVM networking won't work

    - by Nathan
    I'm trying to get a Xen HVM network working using route; however, I am failing. Xen PV works fine using Ubuntu, but when installing Ubuntu on HVM it fails to pick up the network. I'll say up front that I'm not that experienced with Xen, so I would appreciate any help. vm104 is the HVM that's causing me the problems; here are the configs that I believe should help resolve the problem.

        [root@eros vm104]# cat vm104.cfg
        import os, re
        arch = os.uname()[4]
        if re.search('64', arch):
            arch_libdir = 'lib64'
        else:
            arch_libdir = 'lib'
        kernel = '/usr/lib/xen/boot/hvmloader'
        builder = 'hvm'
        memory = 6000
        shadow_memory = '8'
        cpu_weight = 256
        name = 'vm104'
        vif = ['type=ioemu, ip=85.25.x.y, vifname=vifvm104.0, mac=00:16:3e:52:3d:fe, bridge=xenbr0']
        acpi = 1
        apic = 1
        vnc = 1
        vcpus = 4
        vncdisplay = 3
        vncviewer = 0
        vncconsole = 1
        vnclisten = '217.118.x.y'
        vncpasswd = 'kCfb5S4tE7'
        serial = 'pty'
        disk = ['phy:/dev/vpsvg/vm104_img,hda,w', 'file:/home/solusvm/xen/iso/Windows-Server-2008-RC2.iso,hdc:cdrom,r']
        device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'
        boot = 'cd'
        sdl = '0'
        usbdevice = 'tablet'
        pae=1

        [root@eros /]# cat /etc/xen/xend-config.sxp | egrep -v "(^#.*|^$)"
        (xend-unix-server yes)
        (xend-unix-path /var/lib/xend/xend-socket)
        (xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')
        (network-script network-route)
        (vif-script vif-route)
        (network-script 'network-route netdev=eth0')
        (dom0-min-mem 256)
        (dom0-cpus 0)
        (vnc-listen '0.0.0.0')
        (vncpasswd '')
        (keymap 'en-us')

    The Windows install will not pick up the network. I've tried setting the IP manually, using the Xen server's IP as the gateway and setting the main IP in Windows, but no luck. If anyone needs any more information let me know; I appreciate any input!

    Read the article

  • Can Windows-Security-SPP block execution of .exe?

    - by Kirk Marple
    We're seeing a strange situation where some executables won't run from a Windows command prompt (running as admin). Just running the command (say, filename.exe) gives no response on the console: no errors, no output, nothing. If we copy the same Windows .exe over from a different folder, it "magically" starts working, and we see the default console output. (This happens on both Win7 x64 and Win2008R2 x64; the application runs as a 32-bit process.) At the time the .exe is accessed, I can see events in the application and system logs regarding Windows-Security-SPP, which makes me believe the .exe is being blocked from execution. Does this sound familiar?

    Read the article

  • BitLocker with Windows DPAPI Encryption Key Management

    - by bigmac
    We need to enforce at-rest encryption on an iSCSI LUN that is accessible from within a Hyper-V virtual machine. We have implemented a working solution using BitLocker, on Windows Server 2012 running as a Hyper-V virtual machine which has iSCSI access to a LUN on our SAN. We were able to do this by using the "floppy disk key storage" hack described in this post. However, this method seems hokey to me.

    In my continued research, I found that the Amazon corporate IT team published a whitepaper that outlined exactly what I was looking for in a more elegant solution, without the floppy disk hack. On page 7, they state that they implemented Windows DPAPI encryption key management to securely manage their BitLocker keys. This is exactly what I am looking to do, but they say they had to write a script to do it, and they provide neither the script nor any pointers on how to create one. Does anyone have details on how to create a "script in conjunction with a service and a key-store file protected by the server's machine account DPAPI key" (as they state in the whitepaper) to manage and auto-unlock BitLocker volumes? Any advice is appreciated.

    Read the article

  • Lambda expressions in C#?

    - by user30457
    Hey, I'm rather new to these; could someone explain the significance, or perhaps give a link to some useful information on lambda expressions? I encountered the following code in a test and I am wondering why someone would do this:

        foo.MyEvent += (o, e) => { fCount++; Console.WriteLine(fCount); };
        foo.MyEvent -= (o, e) => { fCount++; Console.WriteLine(fCount); };

    My instinct tells me it is something simple and not a mistake, but I don't know enough about these expressions to understand why this is being done. Regards,

    Read the article

  • Are there netcat-like tools for Windows which are not quarantined as malware?

    - by Matthew Murdoch
    I used to use netcat for Windows to help track down network connectivity issues. However, these days my anti-virus software (Symantec, but I understand others display similar behaviour) quarantines netcat.exe as malware. Are there any alternative applications which provide at least the following functionality:

    - can connect to an open TCP socket and send data to it which is typed on the console;
    - can open and listen on a TCP socket and print received data to the console?

    I don't need the 'advanced' features (which are possibly the reason for the quarantining) such as port scanning or remote execution.
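
    For reference, both required operations look like this in Nmap's ncat, one commonly suggested stand-in (the host and port are placeholders, and whether your AV also flags it is worth checking first):

        ncat example.com 8080    # connect; console input is sent to the socket
        ncat -l 8080             # listen on TCP 8080 and print received data

    Because ncat ships as part of the signed Nmap distribution, it tends to fare better with AV vendors than a standalone netcat.exe, though that is no guarantee.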

    Read the article

  • hMail server - sending a copy of an e-mail while changing the sender

    - by Beggycev
    Dear All please help me with following request. I am using hMail server in a company(test.com) and have several hundred of guest e-mail accounts ([email protected]). I need to accomplish this: When any of the guest e-mails receives a message(either from internal or external sender) this e-mail(or its copy) is sent to another address "[email protected]" which is the same for all of these guest e-mails. But I need the sender to be identified as the [email protected] not as the original sender which happens when I use forwarding. I tried to prepare a simple VBS script using the OnAcceptMessage event to accomplish this. and it works on my testing machine without internet connectivity but not in the production environment. To be specific, if I send an e-mail to [email protected] in my test env it is delivered to the [email protected] with [email protected] being a sender. But in the production env the e-mail stays in the guest mailbox with the original sender. I am interested in any solution, using a rule in hMail or script, anything is welcome. Thank you for any help! The script: Sub OnAcceptMessage(oClient, oMessage) 'creating application object in order to perform operations as hMail server administrator Dim obApp Set obApp = CreateObject("hMailServer.Application") Dim adminLogin Dim adminPassword 'Enter actual values for administrator account and password 'CHANGE HERE: adminLogin = "Admin_login" adminPassword = "password" Call obApp.Authenticate(adminLogin, adminPassword) Dim addrStart 'Take first 5 characters of recipients address addrStart = Mid(oMessage.To, 1, 5) 'if the recipient's address start with "guest" if addrStart = "guest" then Dim recipient Dim recipientAddress 'enter name of the recipient and respective e-mail address() 'CHANGE HERE: recipient = "FINAL" recipientAddress = "[email protected]" 'change the sender and sender e-mail address to the guest oMessage.FromAddress = oMessage.To oMessage.From = oMessage.To & "<" & oMessage.To & ">" 'delete recipients and enter a new one - the actual mps and its e-mail from the variables set above oMessage.ClearRecipients() oMessage.AddRecipient recipient, recipientAddress 'save the e-mail oMessage.save end if End Sub

    Read the article

  • Smart card / auditable access for rack KVM tray

    - by Mark Henderson
    Is there such a thing as a KVM tray for a standard 19" rack whose use can be validated by a smartcard (or some other auditable authentication method)? It looks like we have a security requirement where just because users have a key to the rack doesn't mean they will be allowed to use the console inside it, and rather than just lock the console (and keep track of who has keys), we would prefer to be able to audit the actual user that was attached at the KVM. (It's worth mentioning that I'm aware of the Raritan devices, but they surely can't be the only ones.) (If these things existed, I don't think half of the traitors who somehow manage to infiltrate CTU on the TV show 24 would ever get away with anything.)

    Read the article

  • Birt on Tomcat unable to find JARs

    - by LostInTheWoods
    First, my setup:

    - BiRT Runtime: 3.7.2
    - Ubuntu 10.04
    - Tomcat 6
    - Sun Java 1.6.0

    I have a jar file I want to deploy onto the Tomcat server so it is usable by the BiRT runtime, so I placed the jar file in /var/lib/tomcat6/webapps/birt/WEB-INF/lib. As I understand it, this is the default location for jar files that are going to be used by a BiRT report. But the jar file is not accessible by the report that is trying to call it. In the BiRT logs I see:

        Error evaluating Javascript expression. Script engine error: ReferenceError: "DynDSinfo" is not defined.
        (/report/data-sources/oda-data-source[@id="54"]/method[@name="beforeOpen"]#20)
        Script source: /report/data-sources/oda-data-source[@id="54"]/method[@name="beforeOpen"], line: 0, text: __bm_beforeOpen()
        org.eclipse.birt.data.engine.core.DataException: Fail to execute script in function __bm_beforeOpen(). Source:

    "DynDSinfo" is the class I am trying to reference. And now for the kicker: this works fine on Tomcat 6 on Windows 7, with the same files in the same places. So is there some additional configuration or environment variable that needs to be set, or something else different on the Linux (Ubuntu) platform? All help or ideas gratefully received, Stephen
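
    Two platform differences are worth ruling out first, since the identical layout works on Windows: Linux filesystems are case-sensitive (a jar or class name differing only in case loads on Windows but not on Ubuntu), and jars added to WEB-INF/lib are only scanned when the webapp is (re)started. A quick hypothetical check (the jar name is a placeholder):

        # is the jar readable by the tomcat6 user, and does it contain the class?
        ls -l /var/lib/tomcat6/webapps/birt/WEB-INF/lib/mylib.jar
        unzip -l /var/lib/tomcat6/webapps/birt/WEB-INF/lib/mylib.jar | grep -i dyndsinfo

        # jars dropped into WEB-INF/lib are only picked up on (re)deploy
        sudo service tomcat6 restart

    The case-insensitive grep is deliberate: if it finds the class under a differently-cased name, that alone would explain the Windows/Linux difference.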

    Read the article

  • Dynamically add Server 2008 NLB Nodes

    - by Nick Jacques
    Hi all, I have a small NLB cluster for Terminal Servers. One of the things we're looking at doing for this particular project (this is for a college class) is dynamically creating Terminal Servers. What we've done is create policies for a certain OU that set the proper TS farm properties and install the Terminal Server role and NLB feature. Now what we'd like to do is create a script, to be run on our domain controller, that adds hosts to the pre-existing NLB cluster. On our Server 2008 R2 domain controller, I was thinking of running the following PowerShell script, which I've kind of hacked together. Any thoughts on whether this will work? Is there any way I can trigger this script to run on the DC once all the scripts to install roles are done on the various Terminal Servers? Thanks very much in advance!

        Import-Module NetworkLoadBalancingClusters
        $TermServs = @()
        $Interface = "Local Area Connection"
        $ou = [ADSI]"LDAP://OU=Term Servs,DC=example,DC=com"
        foreach ($child in $ou.psbase.Children)
        {
            if ($child.ObjectCategory -like '*computer*') {$TermServs += $child.Name}
        }
        foreach ($TS in $TermServs)
        {
            Get-NlbCluster 172.16.0.254 | Add-NlbClusterNode -NewNodeName $TS -NewNodeInterface $Interface
        }

    Read the article

  • How to give a user NTFS rights to a folder, via PowerShell

    - by Don
    I'm trying to build a script that will create a folder for a new user on our file server, then take the inherited rights away from that folder and add specific rights back in. I have it successfully adding the folder (if I give it a static entry in the script), giving domain admin rights, removing inheritance, etc., but I'm having trouble getting it to use a variable I set as the user. I don't want a static user each time; I want to be able to run this script, have it ask me for a username, and have it then create the folder and give that same user full rights to it based on the username I've supplied. I can use Smithd as a user, like this:

        New-Item \\fileserver\home$\Smithd -Type Directory

    But I can't get it to reference the user like this:

        New-Item \\fileserver\home$\$username -Type Directory

    Here's what I have (creating a new folder and setting NTFS permissions):

        $username = read-host -prompt "Enter User Name"
        New-Item \\fileserver\home$\$username -Type Directory
        Get-Acl \\fileserver\home$\$username
        $acl = Get-Acl \\fileserver\home$\$username
        $acl.SetAccessRuleProtection($True, $False)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Administrators","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain\Domain Admins","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain\"+$username,"FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        Set-Acl \\fileserver\home$\$username $acl

    I've tried several ways to get it to work, but no luck. Any ideas or suggestions would be welcome, thanks.

    Read the article

  • pure-ftpd not listening on specified port

    - by Jason McLaren
    I installed the pure-ftpd package (version 1.0.35-1) on an Ubuntu 12.04 box (an EC2 instance based on the standard Ubuntu 12.04 AMI). The pure-ftpd daemon is running (verified with ps), though there is no PID file (I expected one to be created by the /etc/init.d/pure-ftpd script). Here's the resulting command that gets run by the init.d script:

        /usr/sbin/pure-ftpd -l pam -O clf:/var/log/pure-ftpd/transfer.log -o -8 UTF-8 -u 1000 -E -B -g /var/run/pure-ftpd/pure-ftpd.pid

    Here's my real problem: the FTP server isn't actually listening on any port (checked with netstat and nmap), so I can't ftp to the server (either locally using localhost or remotely using the public IP address). I tried adding a Bind file to /etc/pure-ftpd/conf and restarting, but it didn't help.

    When I installed pure-ftpd, it replaced inetd with openbsd-inetd, but did not run it since there were no services enabled, so inetd is not listening on port 21 either. (Apparently Ubuntu has a no-inetd-by-default policy, according to https://lists.ubuntu.com/archives/ubuntu-users/2010-September/227905.html .) I want to run pure-ftpd by itself (not with inetd) anyway, since the /etc/init.d/pure-ftpd script requires no inetd if you use the UploadScript feature.

    I'm not familiar with how Ubuntu handles network services (and can't find any relevant docs besides generic man pages), so I'm probably missing something obvious. Nothing seems out of the ordinary with /etc/hosts.allow (empty) or hosts.deny (empty), and I didn't add any firewall rules (iptables -L shows that the firewall is in its initial state). I've checked the pure-ftpd docs; not sure what else to look at. Any help would be appreciated, thanks!
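
    A sketch of two quick checks, on the assumption that the daemon is silently failing to bind: confirm nothing else owns port 21, then run pure-ftpd in the foreground with an explicit bind so any error is printed to the terminal (-S is pure-ftpd's bind option, which is what a Bind file under /etc/pure-ftpd/conf should translate into; omitting -B keeps it in the foreground):

        sudo netstat -tlnp | grep ':21 '
        sudo /usr/sbin/pure-ftpd -l pam -S 21

    If the foreground run binds fine, the problem is likely in how the init script assembles its flags from /etc/pure-ftpd/conf; the file must be named exactly Bind and contain just the address/port, e.g. 21 or 0.0.0.0,21.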

    Read the article
