Search Results

Search found 17378 results on 696 pages for 'remote x session'.


  • PowerShell Remoting: No credentials are available in the security package

    - by TheSciz
    I'm trying to use the following script to remotely create a cluster (it's for testing purposes) on a Windows Server 2012 VM:

        $password = ConvertTo-SecureString "xxxx" -AsPlainText -Force
        $cred = New-Object System.Management.Automation.PSCredential("domain\Administrator", $password)
        $session = New-PSSession 192.168.xxx.xxx -Credential $cred
        Invoke-Command -Session $session -ScriptBlock { New-Cluster -Name "ClusterTest" -Node HOSTNAME }

    I'm getting the following error:

        An error occurred while performing the operation.
        An error occurred while creating the cluster 'ClusterTest'.
        An error occurred creating cluster 'ClusterTest'.
        No credentials are available in the security package
            + CategoryInfo          : NotSpecified: (:) [New-Cluster], ClusterCmdletException
            + FullyQualifiedErrorId : New-Cluster,Microsoft.FailoverClusters.PowerShell.NewClusterCommand

    All of my other remote commands (installing and making changes to DNS, DHCP, NPAS, GP, etc.) work without an issue. Why is this one any different? The only difference is in the -ScriptBlock parameter. Help!


  • Test Tomcat for SSL renegotiation vulnerability

    - by Jim
    How can I test whether my server is vulnerable to SSL renegotiation? I tried the following (using OpenSSL 0.9.8j-fips 07 Jan 2009):

        openssl s_client -connect 10.2.10.54:443

    I see it connect, it brings in the certificate chain, it shows the server certificate, and finally:

        SSL handshake has read 2275 bytes and written 465 bytes
        ---
        New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
        Server public key is 1024 bit
        Secure Renegotiation IS supported
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : DHE-RSA-AES256-SHA
            Session-ID: 50B4839724D2A1E7C515EB056FF4C0E57211B1D35253412053534C4A20202020
            Session-ID-ctx:
            Master-Key: 7BC673D771D05599272E120D66477D44A2AF4CC83490CB3FDDCF62CB3FE67ECD051D6A3E9F143AE7C1BA39D0BF3510D4
            Key-Arg   : None
            Start Time: 1354008417
            Timeout   : 300 (sec)
            Verify return code: 21 (unable to verify the first certificate)

    What does "Secure Renegotiation IS supported" mean? That SSL renegotiation is allowed? I then continued, but did not get an exception or the certificate again:

        verify error:num=20:unable to get local issuer certificate
        verify return:1
        verify error:num=27:certificate not trusted
        verify return:1
        verify error:num=21:unable to verify the first certificate
        verify return:1
        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/html;charset=ISO-8859-1
        Content-Length: 174
        Date: Tue, 27 Nov 2012 09:13:14 GMT
        Connection: close

    So is the server vulnerable to SSL renegotiation or not?
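    For reference, a non-interactive way to ask for a client-initiated renegotiation on the same port (a sketch; s_client treats an input line starting with "R" as a renegotiation request, and the GET just keeps the connection busy afterwards):

        (sleep 1; printf 'R\n'; sleep 1; printf 'GET / HTTP/1.0\r\n\r\n'; sleep 3) | \
            openssl s_client -connect 10.2.10.54:443
        # If the server allows the legacy renegotiation, a second handshake completes and the
        # GET is still answered; if it refuses, s_client reports a handshake failure or the
        # connection is closed at the "R".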


  • Upload a directory recursively to an FTP server

    - by Nicolas Raoul
    I am writing a Linux shell script to copy a local directory to a remote server (removing any existing files there). On the local server the ftp and lftp commands are available, but no ncftp or any graphical tools. The remote server is only accessible via FTP: no rsync, SSH or FXP. I am thinking about listing the local and remote files to generate an lftp script and then running it. Is there a better way? Note: uploading only modified files would be a plus, but is not required.
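    One option worth sketching is lftp's built-in reverse mirror, which already handles recursion, deletions and skipping unchanged files; the host, credentials and paths below are placeholders:

        lftp -u user,password ftp.example.com -e "
            mirror --reverse --delete --only-newer --verbose /local/dir /remote/dir
            bye"

    --reverse uploads instead of downloads, --delete removes remote files that no longer exist locally, and --only-newer covers the "only modified files" nice-to-have.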


  • Panic Transmit file upload

    - by 1ndivisible
    I've ditched Coda and bought Transmit. I'm a little confused by the file uploading. I have exactly the same folder structure remotely and locally, but if I right-click a file and choose Upload "SomeFileName.html", the file is always uploaded into the root of the remote site, even if the file is in a folder. If I choose to upload a file at assets/images/some_image.png, I would expect it to be uploaded to the same folder on the remote server, not the root. Coda dealt with this perfectly and also told me which files had been modified and needed uploading. Transmit doesn't seem to do either of these things. So my questions are:

        1. How can I upload a file to the same path on the remote server without having to drag and drop?
        2. Is there any way to have Transmit mark edited files, or upload only edited files?

    [There is no tag for Transmit, so if someone with more rep could make and add one, that would be grand.]


  • Git and Amazon EC2 public key denied

    - by MrNart
    I had git working before in /var/html/projectfolder, realized it was a security risk, so I made a new folder /projects off the root, tried to replicate what I did, and now it doesn't work. Here is the log of what I did on my local machine and the EC2 server.

    Server (EC2):

        1. I added my public key to the authorized_user file in the ~/.ssh folder.
        2. Created a bare repository:
               git init --bare
        3. Changed folder permissions:
               sudo chgrp -R ec2-user *
               sudo chmod -R g+ws *

    Local machine:

        1. Created a local repository with git init.
        2. Touched, added and committed a readme file.
        3. Pointed origin master at EC2 via:
               git remote add origin ssh://ec2-user@remote-ip/path/to/folder

    This is my output:

        Permission denied (publickey)
        fatal: The remote end hung up unexpectedly
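    A few checks that usually narrow a publickey failure down (a sketch, not a diagnosis; the paths assume the stock ec2-user account):

        # from the local machine: -v shows which keys the client offers and why they are rejected
        ssh -vT ec2-user@remote-ip

        # on the EC2 server: by default sshd looks for ~/.ssh/authorized_keys and
        # ignores it unless the permissions are strict
        ls -ld ~ ~/.ssh && ls -l ~/.ssh/authorized_keys
        chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys

        # back on the local machine: confirm origin really points at the new /projects path
        git remote -v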


  • Permissions needed to read event log messages remotely?

    - by Neolisk
    When running under a limited account, local event log messages display fine; for a remote computer I am getting this error:

        The description for Event ID ( xxxxx ) in Source ( yyyyy ) cannot be found. The local computer may not have the necessary registry information or message DLL files to display messages from a remote computer. You may be able to use the /AUXSOURCE= flag to retrieve this description; see Help and Support for details. The following information is part of the event: zzzzz.

    The same remote computer works fine under a domain administrator account. I am currently experimenting with just the Event Viewer, by using Run As; the original issue is a PowerShell script which does Get-EventLog. Are there any special permissions that need to be in place to be able to read event log messages remotely? Supposedly there is a simple solution in Windows 2008 and higher: just add the user to the Event Log Readers group. Is there anything like that for Windows 2003?


  • Cisco ASA - NAT'ing VPN traffic

    - by DrStalker
    I have an IPsec VPN set up like this:

        [Remote users]-[Remote ASA] <-VPN-> [My ASA]-[Subnet A]-[Router 2]-[Subnet B]

    The VPN is set to handle traffic between [Remote users] and [Subnet A]; it does not include [Subnet B]. Pretend the firewall rules on all routers permit everything. Now I want to redirect traffic that comes over the VPN to a specific IP on [Subnet A] (192.168.1.102) to an IP on [Subnet B] (10.1.1.133). If I add a rule on [My ASA] to NAT traffic with original IP 192.168.1.102 to new IP 10.1.1.133:

        1. Will this affect the connections coming in over the VPN? (i.e. are the VPN packets decrypted first and then NAT applied?)
        2. Will this work when the post-NAT target is on [Subnet B], which is not part of the VPN traffic selection?


  • Can't start a service (sudo) remotely from script and keep it running

    - by Greg Bernhardt
    I have a service (tomcat) that needs sudo to be started. I made a simple script on the remote server in /root/bin/test.sh:

        #!/bin/sh
        sudo service tomcat start
        read

    (The script needs to do other stuff too; it's just pared down for simplicity.) When I run it directly on the remote server, tomcat starts and continues running after I disconnect. When I run it remotely, the process starts (I can see it while the script is paused at the read), but once the script ends, it's gone. (While paused at the read, I run this command locally:)

        ps -ef | grep tomcat

    I've tried various combinations of nohup, screen, and & on the commands, both on the local machine and in the remote machine's test.sh script, but I can't seem to get it working:

        ssh -t [email protected] "/root/bin/test.sh"
        ssh -t [email protected] "nohup /root/bin/test.sh"
        ssh -t [email protected] "nohup /root/bin/test.sh &"
        ssh -t [email protected] "screen /root/bin/test.sh &"
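    A sketch of one way to keep tomcat alive after the ssh session ends: make sure the service start is not attached to the ssh tty, either inside test.sh or on the calling side (the log paths and the root@remote-server placeholder are assumptions):

        # inside /root/bin/test.sh: detach the start from the terminal
        sudo service tomcat start < /dev/null > /tmp/tomcat-start.log 2>&1

        # or from the local machine, assuming sudo does not need to prompt for a password here
        ssh root@remote-server "nohup /root/bin/test.sh > /tmp/test.log 2>&1 < /dev/null"

    Without the redirections, something in the script keeps holding the pseudo-terminal that -t allocated, and the processes attached to it are torn down when the session closes.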


  • How to start Internet Explorer 8 without any bars

    - by Slacker
    Hi. At work I use LogMeIn to do remote assistance for our customers. The only thing bugging me is that you can switch to full-screen mode, but it doesn't offer to open the remote session in a new window without the address bar, bookmarks bar and so on. So I'm looking for a way to open the LogMeIn Tech Console in a new browser window without any bars. Full-screen mode is nice but not usable when working with other applications besides the remote session. The Internet Explorer kiosk mode also doesn't fit my needs. I created a bookmarklet which works quite nicely, but having a link directly on my desktop would be perfect. Does anyone have a hint on that? I use IE8 on Windows XP SP3. Thanks in advance, Dennis


  • Windows Server 2008 R2 FTP blocking outside connections

    - by nbon
    I have a Windows Server 2008 R2 machine running IIS 7.5. I am trying to set up an FTP server in IIS, but I'm running into some annoying problems. Setting up the server works fine, but when I try to connect from a remote client the connection times out. I have tried connecting to the FTP server from localhost and it works flawlessly. I figured it must be some trouble with the firewall, so I went into the firewall settings and disabled the public profile, and my remote connections worked! In my inbound rules there are rules allowing FTP connections for all profiles, etc.; I guess they are made automatically when setting up the FTP server. Anyone got any idea how to allow remote connections without turning off the public firewall?


  • How can I protect files on my NGiNX server?

    - by Jean-Nicolas Boulay Desjardins
    I am trying to protect files on my server (of multiple types) with NGiNX and PHP. Basically I want people to have to sign in to the website if they want to access static files such as images. DropBox does this very well: it forces you to sign in to access any static files you put on their server. I thought about using the NGiNX Perl module and writing a Perl script that would check the session to see if the user is signed in before giving them access to a static file. I would prefer using PHP, because all my code runs under PHP and I am not sure how to check a session created by PHP from Perl. So basically my question is: how can I protect static files of any type so that only users who have signed in, and who have a valid session created by a PHP script, can access them?
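    One technique that fits this setup is nginx's internal redirect header: a PHP front controller checks the session and, if it is valid, hands the request back to nginx with an X-Accel-Redirect header, so nginx still serves the file itself. A minimal sketch, with made-up location names and paths:

        # files here can only be reached through an internal redirect, never directly
        location /protected/ {
            internal;
            alias /var/www/private-files/;
        }

        # every public download URL is routed through a PHP script first
        location /files/ {
            rewrite ^/files/(.+)$ /download.php?file=$1 last;
        }

        # download.php (hypothetical) calls session_start(), checks the user is signed in,
        # sanitises the file name, then replies with
        #     header('X-Accel-Redirect: /protected/' . $name);
        # and nginx serves the file from /var/www/private-files/ without PHP streaming it.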


  • MSDTC Port 135 open bi-directionally

    - by Stephen Lacy
    I have two servers: a web application server, and an SQL Server database running on its own server. There is a firewall between these two servers. Do I have to open port 135 on both the SQL Server and the web application server? Does the SQL Server open its own connection to the web application server on port 135 or any other port? Do I have to point the web application server's MSDTC at the SQL database server in Component Services? If the firewall is completely open and the Component Services settings allow remote connections, remote administration, etc., are there any other settings that need to be changed in order to allow remote connections to the SQL Server's MSDTC?


  • Make Exchange 2007 use the correct SSL certificate

    - by Neil
    I have an SBS 2008 server, contososerver.contosodomain.local, which is externally accessible at remote.contoso.com, and an SSL certificate for the external domain which we installed using the SBS 2008 wizard. This works great for OWA because IIS serves the remote.contoso.com certificate. I also want to turn on external POP3/IMAP4/SMTP; however, when I try, I get served the internal certificate that SBS generated automatically (using its internal CA), which has the alternate names remote.contoso.com, contososerver.contosodomain.local and contososerver. I tried removing this certificate from Exchange, but it won't let me because it needs it for its internal receive connector. So how do I tell Exchange 2007 to use the real certificate for external POP3/IMAP4/SMTP?


  • Iptables REDIRECT + openvpn problem

    - by Emilio
    I want to redirect connections to port 22 to the port my OpenVPN server is bound to, 60001. OpenVPN is running on the server on port 60001:

        server:~$ sudo netstat -apn | grep openvpn
        udp   0   0 67.xx.xx.137:60001   0.0.0.0:*   4301/openvpn

    On the server I redirect port 22 to 60001:

        server:~$ sudo iptables -F -t nat
        server:~$ sudo iptables -A PREROUTING -t nat -p udp --dport 22 -j REDIRECT --to-ports 60001

    I start the OpenVPN client (openvpn.conf is correct; it works with the remote port 22 replaced by 60001):

        client:~$ ./openvpn openvpn.conf
        Tue Apr 27 00:42:50 2010 OpenVPN 2.1.1 i686-pc-linux-gnu [SSL] [EPOLL] built on Mar 23 2010
        Tue Apr 27 00:42:50 2010 UDPv4 link local (bound): [undef]:1194
        Tue Apr 27 00:42:50 2010 UDPv4 link remote: 67.xx.xx.137:22
        Tue Apr 27 00:42:52 2010 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
        Tue Apr 27 00:42:55 2010 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
        ...

    It doesn't connect. iptables shows requests from client to server but no answers. What's wrong with it?
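    For what it's worth, a sketch of how to see where the packets go wrong (the interface name eth0 is an assumption, and the DNAT line is an alternative to compare against REDIRECT, not a known fix):

        # do the PREROUTING counters increase when the client tries to connect?
        sudo iptables -t nat -L PREROUTING -n -v

        # watch both ports on the wire: does anything answer, or only ICMP port unreachable?
        sudo tcpdump -ni eth0 'udp port 22 or udp port 60001 or icmp'

        # an explicit DNAT to the address OpenVPN is bound to can behave differently from REDIRECT
        sudo iptables -t nat -A PREROUTING -p udp --dport 22 -j DNAT --to-destination 67.xx.xx.137:60001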


  • How do I know if 'hg clone' is doing the work remotely?

    - by jjfine
    I've got a very simple Windows install of Mercurial on my machine. The 'central' repository is located at //mymachine/hg-repos/central. I want remote (VPN) users to be able to create clones of this repository in the hg-repos directory, because it gets daily backups. I have given these users full control of the hg-repos directory. My question is this: if I'm on a remote machine and I run the command

        hg clone //mymachine/hg-repos/central //mymachine/hg-repos/central-copy

    is the remote machine doing most of the work? I don't want the client to have to download all of the central repository and then upload it all back, because people are going to be using this from across the country. But I suspect this is what's happening here, since it works so easily.
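    If it turns out the calling machine is indeed doing the copying over the share, one sketch of a workaround is to run the clone on mymachine itself so the data never leaves it; the ssh access, account name and Unix-style paths are assumptions, and any remote-execution mechanism would do:

        # run the clone where the data lives
        ssh someadmin@mymachine "hg clone /hg-repos/central /hg-repos/central-copy"

        # or publish the repository read-only over HTTP so day-to-day pulls avoid the file share
        hg serve -R /hg-repos/central --port 8000 --daemon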


  • How to reduce the time git spends pulling each time you do a make world on the Xen source

    - by Registered User
    I am compiling Xen from source, and each time I do a make world it gives some error or other. My problem is not those errors (I am trying to debug them) but that every time I do a make world, Xen pulls the kernel tree from the git repository again:

        + rm -rf linux-2.6-pvops.git linux-2.6-pvops.git.tmp
        + mkdir linux-2.6-pvops.git.tmp
        + rmdir linux-2.6-pvops.git.tmp
        + git clone -o xen -n git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git linux-2.6-pvops.git.tmp
        Initialized empty Git repository in /usr/src/xen-4.0.1/linux-2.6-pvops.git.tmp/.git/
        remote: Counting objects: 1941611, done.
        remote: Compressing objects: 100% (319127/319127), done.
        remote: Total 1941611 (delta 1614302), reused 1930655 (delta 1604595)
        Receiving objects: 20% (1941611/1941611), 98.17 MiB | 87 KiB/s, done.

    As the last line shows, it is still consuming my bandwidth pulling things from the internet. How can I skip this step each time and use the existing git repository?
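    One way to keep that download off the network on every rebuild, sketched under the assumption that the build always clones that exact kernel.org URL: keep a local mirror and tell git to substitute it.

        # one-time: mirror the pvops tree somewhere local (the path is a placeholder)
        git clone --mirror git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git /usr/src/mirrors/linux-2.6-pvops.git

        # whenever anything clones the kernel.org URL, use the local mirror instead
        git config --global url."/usr/src/mirrors/linux-2.6-pvops.git".insteadOf "git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git"

        # refresh the mirror occasionally, instead of on every make world
        (cd /usr/src/mirrors/linux-2.6-pvops.git && git fetch)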


  • How do you launch an SSH connection with port forwarding without interrupting your screen access?

    - by vfclists
    I want to make an SSH connection to another server with port forwarding, but without having to log in to the remote server or interfere with the screen session I am working in. I also need to get at the connection so I can terminate it when I am finished with it. For example, say I want to do a MySQL backup on a remote server, so I use the command

        ssh user@remote -L 1234:localhost:3306

    but after issuing the password I want to run the mysql command in my session, and then be able to get at the SSH connection and terminate it when I have finished with mysql. Is there some way this can be done?
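    A sketch of the usual pattern: -f backgrounds ssh once authentication is done and -N skips the remote shell, so the current screen window stays usable; the tunnel is then stopped by killing that ssh process (the database names below are placeholders):

        # open the tunnel in the background; no remote shell is started
        ssh -f -N -L 1234:localhost:3306 user@remote

        # use the forwarded port locally
        mysqldump -h 127.0.0.1 -P 1234 -u dbuser -p somedb > somedb.sql

        # tear the tunnel down when finished
        pkill -f 'ssh -f -N -L 1234:localhost:3306'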


  • git-receive-pack: command not found

    - by Philippe Mongeau
    I made a git repo on a local machine with git init --bare and added it as the remote origin for the project on my main computer over ssh:

        git remote add origin [email protected]:repoName.git

    I was able to commit and push from my main computer to the other computer on the day I created the repo, but today it didn't work. When I did git push origin it returned this error:

        bash: line 1: git-receive-pack: command not found
        fatal: The remote end hung up unexpectedly

    The two machines are Macs, the main one running Leopard and the server running Tiger. I think it may be related to the $PATH of git on the server, but I'm not sure. I used these instructions to create my git server: http://blog.commonthread.com/2008/4/14/setting-up-a-git-server
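    Since the error comes from the shell on the server not finding git's helper binaries, here is a sketch of two workarounds, assuming git lives under /usr/local/git/bin there (check with "which git-receive-pack" in an interactive ssh session):

        # point this clone at the server-side helpers explicitly
        git config remote.origin.receivepack /usr/local/git/bin/git-receive-pack
        git config remote.origin.uploadpack /usr/local/git/bin/git-upload-pack
        git push origin master

        # or, on the server, put the helpers somewhere already on the default PATH
        sudo ln -s /usr/local/git/bin/git-receive-pack /usr/bin/git-receive-pack
        sudo ln -s /usr/local/git/bin/git-upload-pack /usr/bin/git-upload-pack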


  • MSSQL 2008 is claiming the firewall is blocking ports even from the local machine

    - by Mercurybullet
    I was just hoping to step through a couple of queries to see how the temp tables interact, and I'm getting this message:

        The Windows firewall on this machine is currently blocking remote debugging. Remote debugging requires that the debugger be allowed to receive information from the network. Remote debugging also requires DCOM (TCP port 135) and IPSEC (UDP 4500 / UDP 500) to be unblocked.

    Even when I walked over to the actual machine and tried running the debugger, I still got the same message. Am I missing something, or does the debugger try to run remotely even from the local machine? Since this was meant to be just a quick check, I don't need instructions on how to open up the firewall; I'm just hoping there is a way to run the debugger locally instead.


  • Live Mesh beta on Windows 7 - Did anybody get it working?

    - by Andrew J. Brehm
    I have the Live Mesh beta installed on 64-bit Windows 7 but cannot connect to the computer remotely. The "Manage devices" page gives this information:

        This device is synchronizing but cannot be remotely accessed. Check the Live Mesh notifier on the device to see if Live Mesh remote desktop enhancements need to be installed.

    However, I cannot find instructions on how to use the "notifier" to "see" whether the Live Mesh remote desktop enhancements are installed, or how I could install them. When I originally installed Mesh (both times) I did tell it to install the remote desktop enhancements. Does this just not work with Windows 7 and/or 64-bit?


  • "Reset" colors of terminal after ssh exit/logout

    - by dgo.a
    When I ssh into a remote server, I like the colors of the terminal to change. I use setterm in my remote ~/.bashrc file to get this done. However, when I exit, the terminal colors are not reset to the local ones. I solved the problem, but I am not sure it is the best solution. This is what I could come up with: in ~/.bash_logout on the remote server, I put

        echo -e "\033[0m"
        /usr/bin/clear

    Just out of curiosity: does anyone know of a better way? (I got the echo -e "\033[0m" line from http://edoceo.com/liber/linux-bash-shell)
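    A purely client-side variant, sketched here, is to do the reset locally right after ssh returns, which also covers dropped connections where the remote ~/.bash_logout never runs:

        # wrap ssh in a shell function (e.g. in the local ~/.bashrc): after the remote
        # session ends, reset any lingering colour attributes and clear the screen locally
        ssh() {
            command ssh "$@"
            tput sgr0    # same effect as echo -e "\033[0m"
            clear
        }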


  • Multiple public/private key pairs for the same user

    - by bruceb
    First, sorry if this question has already been asked and answered; I've searched, but perhaps I haven't recognised the answer. What we have is a cluster of servers which need to access a single remote server using sftp. We are migrating from one remote server to another at the same (remote) location. We also want to refresh the public/private key pairs in the configuration as part of an ongoing security review. My question is: can we have multiple public/private key pairs for the same user between server A and server B? I want to do this to allow for cutover testing, but am concerned that the software checking keys may only try one of each type (RSA/DSA?) before rejecting the connection method and moving to the next type of key. Hope it's a straightforward question; please let me know if I need to supply more details. Thanks in advance, Bruce
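    For the OpenSSH/sftp case this is usually handled on the client with per-alias entries in ~/.ssh/config, so the old and new keys can be tested side by side during cutover; the host names, user and key file names below are placeholders, and the server side simply lists every acceptable public key in authorized_keys:

        # ~/.ssh/config on each cluster node
        Host sftp-old
            HostName oldhost.example.com
            User transferuser
            IdentityFile ~/.ssh/id_rsa_old
            IdentitiesOnly yes

        Host sftp-new
            HostName newhost.example.com
            User transferuser
            IdentityFile ~/.ssh/id_rsa_new
            IdentitiesOnly yes

        # usage: "sftp sftp-old" or "sftp sftp-new" then picks the matching key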


  • If a SQL Server Replication Distributor and Subscriber are on the same server, should a PUSH or PULL subscription be used?

    - by userx
    Thanks in advance for any help. I'm setting up a new Microsoft SQL Server replication, and I have the Distributor and Subscriber running on the same server. The Publisher is on a remote server (as it is a production database, and MS recommends that for high volumes the Distributor should be remote). I don't know much about the inner workings of PUSH vs. PULL subscriptions, but my gut tells me that a PUSH subscription would be less resource intensive because (1) the Distributor is already remote, so this shouldn't negatively affect the Publisher, and (2) pushing the transactions from the Distributor to the Subscriber is more efficient than the Subscriber polling the Distribution database. Does anyone have any resources or insight into PUSH vs. PULL which would recommend one over the other? Is there really going to be that big of a difference in performance / reliability / security?


  • PPTP VPN server issue: server = CentOS & client = Windows 7

    - by jmassic
    I have a CentOS server configured as a PPTP VPN server. The client is a Windows 7 machine with "Use default gateway on remote network" enabled in the advanced TCP/IPv4 properties. He can connect to the CentOS server without any problem and can access:

        - the box of his ISP (http://192.168.1.254/)
        - the CentOS server
        - the website which is hosted by the server (over http://)

    But he canNOT access any other web service (google.com or 74.125.230.224). I am a beginner with web servers, so I do not know what can cause this problem.

    Note 0: The Windows 7 user must be able to access the whole internet through the CentOS PPTP proxy.
    Note 1: With "Use default gateway on remote network" UNCHECKED, it is the same problem.
    Note 2: With "Use default gateway on remote network" UNCHECKED and "Disable class based route addition" CHECKED, Windows 7 can access Google, but via the ISP IP (no use of the VPN...). See screenshot.
    Note 3: I have run:

        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
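    A sketch of the checks that usually matter for the "connected but no internet" symptom, assuming eth0 is the outbound interface (the FORWARD and MSS lines are standard additions for PPTP setups, not a confirmed fix for this particular one):

        # make forwarding permanent rather than a one-off echo
        echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
        sysctl -p

        # allow traffic between the ppp interfaces and eth0 if the FORWARD policy is not ACCEPT
        iptables -A FORWARD -i ppp+ -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o ppp+ -m state --state ESTABLISHED,RELATED -j ACCEPT

        # PPTP/GRE overhead often breaks large packets; clamp MSS on forwarded SYNs
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu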


  • How can I limit the number of messages Sendmail will receive in a single incoming connection?

    - by Mike B
    Is there a way to limit how many messages can be received by Sendmail in a given SMTP session? I have a Sendmail server, and an upstream application server is trying to send dozens (potentially hundreds) of messages to it in a single SMTP session (EHLO... MAIL FROM... RCPT TO... DATA... RSET... MAIL FROM... etc.). This is causing resource strain on the box, since the traffic isn't effectively load balanced. I'd like to implement a policy to have Sendmail allow only up to X messages in a given SMTP session, after which it requires the remote host to reconnect. I noticed that there's a confCONNECTION_RATE_THROTTLE option, but that seems to protect against many connections arriving at once, not against a single connection sending a bunch of emails.

