Search Results

Search found 19788 results on 792 pages for 'remote host'.


  • Problem starting up my RMI server (under an ISP) so that it can receive remote calls over the Internet -- Java

    - by Lokesh Kumar
    I'm creating a client/server application in which the server and client can be on the same machine or on different machines, but both sit behind an ISP router. My RMI programs:

    Remote interface:

        //Calculator.java
        public interface Calculator extends java.rmi.Remote {
            public long add(long a, long b) throws java.rmi.RemoteException;
            public long sub(long a, long b) throws java.rmi.RemoteException;
            public long mul(long a, long b) throws java.rmi.RemoteException;
            public long div(long a, long b) throws java.rmi.RemoteException;
        }

    Remote interface implementation:

        //CalculatorImpl.java
        public class CalculatorImpl extends java.rmi.server.UnicastRemoteObject implements Calculator {
            public CalculatorImpl() throws java.rmi.RemoteException {
                super();
            }
            public long add(long a, long b) throws java.rmi.RemoteException { return a + b; }
            public long sub(long a, long b) throws java.rmi.RemoteException { return a - b; }
            public long mul(long a, long b) throws java.rmi.RemoteException { return a * b; }
            public long div(long a, long b) throws java.rmi.RemoteException { return a / b; }
        }

    Server:

        //CalculatorServer.java
        import java.rmi.Naming;
        import java.rmi.server.RemoteServer;

        public class CalculatorServer {
            public CalculatorServer(String host) {
                try {
                    Calculator c = new CalculatorImpl();
                    Naming.rebind("rmi://" + host + ":1099/CalculatorService", c);
                } catch (Exception e) {
                    System.out.println("Trouble: " + e);
                }
            }
            public static void main(String args[]) {
                new CalculatorServer(args[0]);
            }
        }

    Client:

        //CalculatorClient.java
        import java.rmi.Naming;
        import java.rmi.RemoteException;
        import java.net.MalformedURLException;
        import java.rmi.NotBoundException;

        public class CalculatorClient {
            public static void main(String[] args) {
                try {
                    Calculator c = (Calculator) Naming.lookup("rmi://" + args[0] + "/CalculatorService");
                    System.out.println(c.sub(4, 3));
                    System.out.println(c.add(4, 5));
                    System.out.println(c.mul(3, 6));
                    System.out.println(c.div(9, 3));
                } catch (MalformedURLException murle) {
                    System.out.println();
                    System.out.println("MalformedURLException");
                    System.out.println(murle);
                } catch (RemoteException re) {
                    System.out.println();
                    System.out.println("RemoteException");
                    System.out.println(re);
                } catch (NotBoundException nbe) {
                    System.out.println();
                    System.out.println("NotBoundException");
                    System.out.println(nbe);
                } catch (java.lang.ArithmeticException ae) {
                    System.out.println();
                    System.out.println("java.lang.ArithmeticException");
                    System.out.println(ae);
                }
            }
        }

    When the server and client are on the same machine, I start the server by passing my router's static IP address, 192.168.1.35, in args[0], and the server starts fine. Passing the same static IP address in the client's args[0] also works fine.

    When the server and client are on different machines, I try to start the server by passing its public IP address, 59.178.198.247, in args[0] so that it can receive calls over the Internet, but the server fails to start and throws the following exception:

        Trouble: java.rmi.ConnectException: Connection refused to host: 59.178.198.247; nested exception is: java.net.ConnectException: Connection refused: connect

    I think this is a NAT problem, because I am behind an ISP router. So my question is: how can I start my RMI server behind the ISP so that it can receive remote calls from the Internet?
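    A commonly suggested way out of this (not part of the original post, so treat it as a hedged sketch): the server cannot bind or advertise 59.178.198.247 directly, because that address belongs to the router rather than to any interface on the host. The usual recipe is to start the registry locally, export the remote object on a fixed port that the router forwards, and set java.rmi.server.hostname to the public address so that the stubs handed to clients carry a reachable IP. The sketch below assumes the router forwards TCP 1099 and TCP 2099 (an arbitrary example port) to 192.168.1.35, and that CalculatorImpl gains a constructor that passes the port on to super(port).

        // CalculatorServer.java -- sketch only; the public IP and port 2099 are example values
        import java.rmi.Naming;
        import java.rmi.registry.LocateRegistry;

        public class CalculatorServer {
            public static void main(String[] args) throws Exception {
                // Stubs handed to remote clients will advertise this address
                // instead of the private LAN address.
                System.setProperty("java.rmi.server.hostname", "59.178.198.247");

                // Run the registry inside this JVM and bind through localhost,
                // not through the public IP.
                LocateRegistry.createRegistry(1099);

                Calculator c = new CalculatorImpl(2099);   // exported on a fixed, forwardable port
                Naming.rebind("rmi://localhost:1099/CalculatorService", c);
                System.out.println("CalculatorService bound");
            }
        }

    The client would then look up rmi://59.178.198.247:1099/CalculatorService; whether that call succeeds still depends on the router actually forwarding both ports to the server machine.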

    Read the article

  • [C#][XNA 3.1] How can I host two different XNA windows inside one Windows Form?

    - by secutos
    I am making a Map Editor for a 2D tile-based game. I would like to host two XNA controls inside the Windows Form - the first to render the map; the second to render the tileset. I used the code here to make the XNA control host inside the Windows Form. This all works very well - as long as there is only one XNA control inside the Windows Form. But I need two - one for the map; the second for the tileset. How can I run two XNA controls inside the Windows Form? While googling, I came across the terms "swap chain" and "multiple viewports", but I can't understand them and would appreciate support. Just as a side note, I know the XNA control example was designed so that even if you ran 100 XNA controls, they would all share the same GraphicsDevice - essentially, all 100 XNA controls would share the same screen. I tried modifying the code to instantiate a new GraphicsDevice for each XNA control, but the rest of the code doesn't work. The code is a bit long to post, so I won't post it unless someone needs it to be able to help me. Thanks in advance.
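    For what it's worth, the GraphicsDeviceControl sample the question refers to already presents each control into its own window handle, so two controls can share the single GraphicsDevice as long as each control draws only its own content; a second device is not required. A minimal sketch under that assumption (everything except the GraphicsDeviceControl base class is invented here):

        // Sketch only: relies on the GraphicsDeviceControl base class from the XNA WinForms sample.
        class MapControl : GraphicsDeviceControl
        {
            SpriteBatch spriteBatch;
            public Texture2D MapTexture { get; set; }   // hypothetical: set by the editor when a map is loaded

            protected override void Initialize()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
            }

            protected override void Draw()
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                if (MapTexture != null)
                {
                    spriteBatch.Begin();
                    spriteBatch.Draw(MapTexture, Vector2.Zero, Color.White);
                    spriteBatch.End();
                }
            }
        }

        // A TilesetControl would look the same apart from drawing the tileset texture.
        // Dropping one MapControl and one TilesetControl onto the form gives two
        // independent XNA views backed by the one shared GraphicsDevice.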

    Read the article

  • How frequently are IP packets fragmented at the source host?

    - by Methos
    I know that if the IP payload is larger than the MTU, routers usually fragment the IP packet, and all the fragments are reassembled at the destination using the IP-ID field, the fragment offsets, and the fragmentation flags. The maximum length of an IP payload is 64K, so it is quite plausible for L4 to hand over a payload close to 64K. If the L2 protocol is Ethernet, which it often is, the MTU will be about 1500 bytes, and hence the IP packet will be fragmented at the source host itself. However, a quick look at the IP implementation in Linux tells me that in recent kernels the L4 protocols are fragment-friendly, i.e. they try to save IP the fragmentation work by handing over buffers whose size is close to the MTU. Considering these two facts, I am wondering how frequently an IP packet actually gets fragmented at the source host itself. Does it happen sometimes, rarely, or never? Does anyone know whether there are exceptions to this rule in the Linux kernel (i.e. situations where the L4 protocols are not fragment-friendly)? How is this handled in other common OSes such as Windows? In general, how frequently are IP packets fragmented?
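    To put rough numbers on the Ethernet case described above, a small back-of-the-envelope sketch (Python; a 20-byte IP header with no options is assumed):

        # Rough arithmetic only: 20-byte IP header, no options.
        MTU = 1500
        IP_HEADER = 20
        payload = 65515                                     # roughly the largest payload one IP packet can carry

        per_fragment = (MTU - IP_HEADER) // 8 * 8           # fragment payloads come in multiples of 8 bytes -> 1480
        fragments = -(-payload // per_fragment)             # ceiling division -> 45 fragments
        offsets = [i * per_fragment // 8 for i in range(fragments)]  # 13-bit fragment offsets, in 8-byte units

        print(fragments, offsets[:3], offsets[-1])          # 45 [0, 185, 370] 8140

    So a 64K buffer that reaches IP in one piece really would be cut into dozens of fragments at the source, which is exactly what fragment-friendly L4 code is trying to avoid.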

    Read the article

  • How to extend an existing Ruby on Rails CMS to host multiple sites?

    - by Andrew
    I am trying to build a CMS I can use to host multiple sites. I know I'm going to end up reinventing the wheel a million times with this project, so I'm thinking about extending an existing open source Ruby on Rails CMS to meet my needs. One of those needs is to be able to run multiple sites, while using only one code-base. That way, when there's an update I want to make, I can update it in one place, and the change is reflected on all of the sites. I think that this will be able to scale by running multiple instances of the application. I think that I can use the domain/subdomain to determine which data to display. For example, someone goes to subdomain1.mysite.com and the application looks in the database for the content for subdomain1. The problem I see is with most pre-built CMS solutions, they are only designed to host one site, including the one I want to use. So the database is structured to work with one site. However, I had the idea that I could overcome this by "creating a new database" for each site, then specifying which database to connect to based on the domain/subdomain as I mentioned above. I'm thinking of hosting this on Heroku, so I'm wondering what my options for this might be. I'm not very familiar with Amazon S3, or Amazon SimpleDB, but I feel like there's some sort of "cloud database" that would make this solution a lot more realistic, than creating a new MySQL database for each site. What do you think? Am I thinking about this the wrong way? What advice do you have to offer in this area?
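    A sketch of the subdomain-based lookup described above, assuming one shared database and a hypothetical Site model (a subdomain column plus has_many :pages) rather than a database per site:

        # app/controllers/application_controller.rb -- sketch; Site and its pages association are assumptions
        class ApplicationController < ActionController::Base
          before_filter :set_current_site

          private

          # subdomain1.mysite.com -> the Site record whose subdomain is "subdomain1"
          def set_current_site
            @current_site = Site.find_by_subdomain!(request.subdomains.first)
          end
        end

        # Content lookups are then scoped to the tenant instead of the whole table, e.g.
        #   @page = @current_site.pages.find_by_slug!(params[:slug])

    Keeping every site in one database behind a site_id-style column is usually far simpler to run on Heroku than provisioning a separate database (or an S3/SimpleDB store) per site.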

    Read the article

  • Interactive login on a CGI script

    - by raindrop18
    I am new for perl-cgi script. and my objective is to create user/pass interactive script to log and get information from multiple device at once. instead of add the user/pass credential on the script itself. since i am new if some one show me how to write the interactive part of the script. thanks much!!! here is the current code usr/local/bin/perl -wl use CGI ':standard'; use Net::Telnet::Cisco; ### set the error fields to nulls $selerror = ""; ### Input from the screen - make sure data was input if (param() and param('Switches') ne "" and (param('Mac') ne "" or param('Interface Description') ne "" or param('VLAN') ne "" )) { ### Put the input devices into an array. @devices = param('Switches'); ### format the header data print header(); print start_html(-title=>"ShowSwitches",-BGCOLOR=>'aqua'); print "\n",h1("<CENTER>Show Switches</CENTER>"); print "\n",hr(),"\n"; ### Go thru the device array. foreach(@devices) { $error_msg = ""; $TAC_login_error = ""; $open_error = ""; $retry_open_error = ""; $prompt_error = ""; $password_error = ""; ### Take input host and use to send to Telnet $host = $_; $session = Net::Telnet::Cisco->new(Errmode => 'return', Timeout => 30); ### Connect to the host $session->open(Host =>"$host", Timeout => 15); $open_error = $session->errmsg; ### Login with TACACS if host can be connected to if ($open_error eq "") { $session->login('USER', 'PASS'); $TAC_login_error = $session->errmsg; ### Login with TACACS failed - try standard login if ($TAC_login_error ne "") { ### Connect to host $session->open(Host =>"$host", Timeout => 15); $retry_open_error = $session->errmsg; ### Wait for password prompt - multiple matches - devices may have different device prompts. if ($retry_open_error eq "") { $session->waitfor(Match => '/Password:.*$/', Match => '/Enter password:.*$/', Timeout => 20); $prompt_error = $session->errmsg; if ($prompt_error eq "") { ### Input password $session->print('getmeout'); $password_error = $session->errmsg; $session->waitfor('/.*>$/'); $password_error = $session->errmsg; } } } } ### No errors, then issue "show commands". if ($open_error eq "" and ($TAC_login_error eq "" or $retry_open_error eq "") and $prompt_error eq "" and $password_error eq "") { ### Show Mac if (param('Mac')) { $cmd = 'sh mac'; @output = $session->cmd("$cmd"); $show_error = ""; $show_error = $session->errmsg; print "\n",h2($host . ' - ' . $cmd); if ($show_error ne "") { $error_msg = 'Error for show mac - ' . $show_error; print b($error_msg),(br); print hr(),"\n"; $error_msg = ''; } else { print pre(@output); print hr(),"\n"; } } ### Show Interface Description if (param('Interface Description')) { $cmd = 'sh interface description'; @output = $session->cmd("$cmd"); $show_error = ""; $show_error = $session->errmsg; print "\n",h2($host . ' - ' . $cmd); if ($show_error ne "") { $error_msg = 'Error for show mac - ' . $show_error; print b($error_msg),(br); print hr(),"\n"; $error_msg = ''; } else { print pre(@output); print hr(),"\n"; } } ### Show VLAN if (param('VLAN')) { $cmd = 'sh vlan'; @output = $session->cmd("$cmd"); $show_error = ""; $show_error = $session->errmsg; print "\n",h2($host . ' - ' . $cmd); if ($show_error ne "") { $error_msg = 'Error for show vlan - ' . $show_error; print b($error_msg),(br); print hr(),"\n"; $error_msg = ''; } else { print pre(@output); print hr(),"\n"; } } } elsif ($TAC_login_error ne "" and $password_error ne "") { $error_msg = "Error - $host " . $TAC_login_error . 
' - possible incorrect TACACS or standard password parameters on device.'; } elsif ($open_error ne "") { $error_msg = "Error - $host " . $open_error . ' - cannot connect to host - is it down??'; } elsif ($prompt_error ne "") { $error_msg = "Error - $host " . $prompt_error . ' - password prompt not recognized - invalid TACACS (or user) password.'; } elsif ($password_error ne "") { $error_msg = "Error - $host " . $password_error . ' - possible incorrect user/password parameters on device.'; } if ($error_msg ne "" ) { print b($error_msg),(br); print hr(),"\n"; } print hr(),"\n"; print end_html(),"\n"; } } else { ### No Show command was selected. if (param('Submit') and param('Mac') eq "" and param('Interface Description') eq "" and param('VLAN') eq "" ) { $selerror = 'No Show Displays were selected. Try again please!!'; } elsif ### No switch was selected. (param('Submit') and param('Switches') eq "") { $selerror = 'No devices were selected. Try again please!!'; } ### This formats the initial Show Web page. print header(-Pragma='no-cache'), start_html(-title=>"Show Displays",-BGCOLOR=>'aqua'), h1("<CENTER>Show Switches</CENTER>"),hr(), start_form(), b("Select Show Commands:"), br(), br(), checkbox(-name=>'Mac'), checkbox(-name=>'Interface Description'), checkbox(-name=>'VLAN'), br(),br(),hr(),br(), b("Select One or More Devices:"), br(), br(), scrolling_list (-name => 'Switches', -default=> "NONE", -values => ['cs6a', 'cs7a', 'cs7b', 'cs8b', 'cs9a', 'c9b', 'csa' ], -multiple => 'true', -size => 7, ), p(submit('Submit'),reset('Reset')), b($selerror), end_form(),hr(), end_html(); } #
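    As for the interactive part that was actually asked about, one way to do it (a sketch; the field names cli_user and cli_pass are invented) is to add two input fields to the form and read them back with param() instead of hard-coding the credentials:

        ### In the form-building section, next to the existing checkboxes and scrolling_list:
        print b("Login:"), textfield(-name => 'cli_user', -size => 16), br(),
              b("Password:"), password_field(-name => 'cli_pass', -size => 16), br();

        ### In the processing branch, before the device loop:
        $cli_user = param('cli_user');
        $cli_pass = param('cli_pass');

        ### ...and inside the loop, in place of the hard-coded literals:
        $session->login($cli_user, $cli_pass);    # instead of $session->login('USER', 'PASS')

    Since the password travels with the request, the page should be served over HTTPS, and it is worth refusing to run when either field is left empty, just as the script already does for the device list.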

    Read the article

  • Interactive login on a Perl CGI script - updated question [closed]

    - by raindrop18
    I am new for perl-cgi script. and my objective is to create user/pass interactive script to log and get information from multiple device at once. instead of add the user/pass credential on the script itself. since i am new if some one show me how to write the interactive part of the script. thanks much!!! here is the current code usr/local/bin/perl -wl use CGI ':standard'; use Net::Telnet::Cisco; # ### set the error fields to nulls $selerror = ""; # ### Input from the screen - make sure data was input if (param() and param('Switches') ne "" and (param('Mac') ne "" or param('Interface Description') ne "" or param('VLAN') ne "" )) { # ### Put the input devices into an array. @devices = param('Switches'); # ### format the header data print header(); print start_html(-title=>"ShowSwitches",-BGCOLOR=>'aqua'); print "\n",h1("<CENTER>Show Switches</CENTER>"); print "\n",hr(),"\n"; # ### Go thru the device array. foreach(@devices) { $error_msg = ""; $TAC_login_error = ""; $open_error = ""; $retry_open_error = ""; $prompt_error = ""; $password_error = ""; # ### Take input host and use to send to Telnet $host = $_; $session = Net::Telnet::Cisco->new(Errmode => 'return', Timeout => 30); # ### Connect to the host $session->open(Host =>"$host", Timeout => 15); $open_error = $session->errmsg; # ### Login with TACACS if host can be connected to if ($open_error eq "") { $session->login('USER', 'PASS'); $TAC_login_error = $session->errmsg; # ### Login with TACACS failed - try standard login if ($TAC_login_error ne "") { # ### Connect to host $session->open(Host =>"$host", Timeout => 15); $retry_open_error = $session->errmsg; # ### Wait for password prompt - multiple matches - devices may have different device prompts. if ($retry_open_error eq "") { $session->waitfor(Match => '/Password:.*$/', Match => '/Enter password:.*$/', Timeout => 20); $prompt_error = $session->errmsg; if ($prompt_error eq "") { # ### Input password $session->print('getmeout'); $password_error = $session->errmsg; $session->waitfor('/.*>$/'); $password_error = $session->errmsg; } } } } # ### No errors, then issue "show commands". if ($open_error eq "" and ($TAC_login_error eq "" or $retry_open_error eq "") and $prompt_error eq "" and $password_error eq "") { # ### Show Mac if (param('Mac')) { $cmd = 'sh mac'; @output = $session->cmd("$cmd"); $show_error = ""; $show_error = $session->errmsg; print "\n",h2($host . ' - ' . $cmd); if ($show_error ne "") { $error_msg = 'Error for show mac - ' . $show_error; print b($error_msg),(br); print hr(),"\n"; $error_msg = ''; } else { print pre(@output); print hr(),"\n"; } } # ### Show Interface Description if (param('Interface Description')) { $cmd = 'sh interface description'; @output = $session->cmd("$cmd"); $show_error = ""; $show_error = $session->errmsg; print "\n",h2($host . ' - ' . $cmd); if ($show_error ne "") { $error_msg = 'Error for show mac - ' . $show_error; print b($error_msg),(br); print hr(),"\n"; $error_msg = ''; } else { print pre(@output); print hr(),"\n"; } } # ### Show VLAN if (param('VLAN')) { $cmd = 'sh vlan'; @output = $session->cmd("$cmd"); $show_error = ""; $show_error = $session->errmsg; print "\n",h2($host . ' - ' . $cmd); if ($show_error ne "") { $error_msg = 'Error for show vlan - ' . $show_error; print b($error_msg),(br); print hr(),"\n"; $error_msg = ''; } else { print pre(@output); print hr(),"\n"; } } } elsif ($TAC_login_error ne "" and $password_error ne "") { $error_msg = "Error - $host " . $TAC_login_error . 
' - possible incorrect TACACS or standard password parameters on device.'; } elsif ($open_error ne "") { $error_msg = "Error - $host " . $open_error . ' - cannot connect to host - is it down??'; } elsif ($prompt_error ne "") { $error_msg = "Error - $host " . $prompt_error . ' - password prompt not recognized - invalid TACACS (or user) password.'; } elsif ($password_error ne "") { $error_msg = "Error - $host " . $password_error . ' - possible incorrect user/password parameters on device.'; } if ($error_msg ne "" ) { print b($error_msg),(br); print hr(),"\n"; } print hr(),"\n"; print end_html(),"\n"; } } else { # ### No Show command was selected. if (param('Submit') and param('Mac') eq "" and param('Interface Description') eq "" and param('VLAN') eq "" ) { $selerror = 'No Show Displays were selected. Try again please!!'; } elsif # ### No switch was selected. (param('Submit') and param('Switches') eq "") { $selerror = 'No devices were selected. Try again please!!'; } # ### This formats the initial Show Web page. print header(-Pragma=>'no-cache'), start_html(-title=>"Show Displays",-BGCOLOR=>'aqua'), h1("<CENTER>Show Switches</CENTER>"),hr(), start_form(), b("Select Show Commands:"), br(), br(), checkbox(-name=>'Mac'), checkbox(-name=>'Interface Description'), checkbox(-name=>'VLAN'), br(),br(),hr(),br(), b("Select One or More Devices:"), br(), br(), scrolling_list (-name => 'Switches', -default=> "NONE", -values => ['cs6a', 'cs7a', 'cs7b', 'cs8b', 'cs9a', 'c9b', 'csa' ], -multiple => 'true', -size => 7, ), p(submit('Submit'),reset('Reset')), b($selerror), end_form(),hr(), end_html(); } #

    Read the article

  • Service Discovery in WCF 4.0 – Part 1

    - by Shaun
    When designing a service oriented architecture (SOA) system, there will be a lot of services with many service contracts, endpoints and behaviors. Besides the client calling the service, in a large distributed system a service may invoke other services. In this case, one service might need to know the endpoints it invokes. This might not be a problem in a small system. But when you have more than 10 services this might be a problem. For example in my current product, there are around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc..   Benefit of Discovery Service Since almost all my services need to invoke at least one other service. This would be a difficult task to make sure all services endpoints are configured correctly in every service. And furthermore, it would be a nightmare when a service changed its endpoint at runtime. Hence, we need a discovery service to remove the dependency (configuration dependency). A discovery service plays as a service dictionary which stores the relationship between the contracts and the endpoints for every service. By using the discovery service, when service X wants to invoke service Y, it just need to ask the discovery service where is service Y, then the discovery service will return all proper endpoints of service Y, then service X can use the endpoint to send the request to service Y. And when some services changed their endpoint address, all need to do is to update its records in the discovery service then all others will know its new endpoint. In WCF 4.0 Discovery it supports both managed proxy discovery mode and ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service. When a client wanted to invoke a service, it will broadcast an message (normally in UDP protocol) to the entire network with the service match criteria. All services which enabled the discovery behavior will receive this message and only those matched services will send their endpoint back to the client. The managed proxy discovery service works as I described above. In this post I will only cover the managed proxy mode, where there’s a discovery service. For more information about the ad-hoc mode please refer to the MSDN.   Service Announcement and Probe The main functionality of discovery service should be return the proper endpoint addresses back to the service who is looking for. In most cases the consume service (as a client) will send the contract which it wanted to request to the discovery service. And then the discovery service will find the endpoint and respond. Sometimes the contract and endpoint are not enough. It also contains versioning, extensions attributes. This post I will only cover the case includes contract and endpoint. When a client (or sometimes a service who need to invoke another service) need to connect to a target service, it will firstly request the discovery service through the “Probe” method with the criteria. Basically the criteria contains the contract type name of the target service. Then the discovery service will search its endpoint repository by the criteria. The repository might be a database, a distributed cache or a flat XML file. If it matches, the discovery service will grab the endpoint information (it’s called discovery endpoint metadata in WCF) and send back. And this is called “Probe”. 
Finally the client received the discovery endpoint metadata and will use the endpoint to connect to the target service. Besides the probe, discovery service should take the responsible to know there is a new service available when it goes online, as well as stopped when it goes offline. This feature is named “Announcement”. When a service started and stopped, it will announce to the discovery service. So the basic functionality of a discovery service should includes: 1, An endpoint which receive the service online message, and add the service endpoint information in the discovery repository. 2, An endpoint which receive the service offline message, and remove the service endpoint information from the discovery repository. 3, An endpoint which receive the client probe message, and return the matches service endpoints, and return the discovery endpoint metadata. WCF 4.0 discovery service just covers all these features in it's infrastructure classes.   Discovery Service in WCF 4.0 WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery which has all necessary classes and interfaces to build a WS-Discovery compliant discovery service. It supports ad-hoc and managed proxy modes. For the case mentioned in this post, what we need to build is a standalone discovery service, which is the managed proxy discovery service mode. To build a managed discovery service in WCF 4.0 just create a new class inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implemented and abstracted the procedures of service announcement and probe. And it exposes 8 abstract methods where we can implement our own endpoint register, unregister and find logic. These 8 methods are asynchronized, which means all invokes to the discovery service are asynchronously, for better service capability and performance. 1, OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: Invoked when a service sent the online announcement message. We need to add the endpoint information to the repository in this method. 2, OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: Invoked when a service sent the offline announcement message. We need to remove the endpoint information from the repository in this method. 3, OnBeginFind, OnEndFind: Invoked when a client sent the probe message that want to find the service endpoint information. We need to look for the proper endpoints by matching the client’s criteria through the repository in this method. 4, OnBeginResolve, OnEndResolve: Invoked then a client sent the resolve message. Different from the find method, when using resolve method the discovery service will return the exactly one service endpoint metadata to the client. In our example we will NOT implement this method.   Let’s create our own discovery service, inherit the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior in this class. Since the build-in discovery service host class only support the singleton mode, we must set its instance context mode to single. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using System.ServiceModel; 7:  8: namespace Phare.Service 9: { 10: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 11: public class ManagedProxyDiscoveryService : DiscoveryProxy 12: { 13: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 14: { 15: throw new NotImplementedException(); 16: } 17:  18: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 19: { 20: throw new NotImplementedException(); 21: } 22:  23: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 24: { 25: throw new NotImplementedException(); 26: } 27:  28: protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state) 29: { 30: throw new NotImplementedException(); 31: } 32:  33: protected override void OnEndFind(IAsyncResult result) 34: { 35: throw new NotImplementedException(); 36: } 37:  38: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 39: { 40: throw new NotImplementedException(); 41: } 42:  43: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 44: { 45: throw new NotImplementedException(); 46: } 47:  48: protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result) 49: { 50: throw new NotImplementedException(); 51: } 52: } 53: } Then let’s implement the online, offline and find methods one by one. WCF discovery service gives us full flexibility to implement the endpoint add, remove and find logic. For the demo purpose we will use an internal dictionary to store the services’ endpoint metadata. In the next post we will see how to serialize and store these information in database. Define a concurrent dictionary inside the service class since our it will be used in the multiple threads scenario. 1: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 2: public class ManagedProxyDiscoveryService : DiscoveryProxy 3: { 4: private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services; 5:  6: public ManagedProxyDiscoveryService() 7: { 8: _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>(); 9: } 10: } Then we can simply implement the logic of service online and offline. 
1: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 2: { 3: _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata); 4: return new OnOnlineAnnouncementAsyncResult(callback, state); 5: } 6:  7: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 8: { 9: OnOnlineAnnouncementAsyncResult.End(result); 10: } 11:  12: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 13: { 14: EndpointDiscoveryMetadata endpoint = null; 15: _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint); 16: return new OnOfflineAnnouncementAsyncResult(callback, state); 17: } 18:  19: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 20: { 21: OnOfflineAnnouncementAsyncResult.End(result); 22: } Regards the find method, the parameter FindRequestContext.Criteria has a method named IsMatch, which can be use for us to evaluate which service metadata is satisfied with the criteria. So the implementation of find method would be like this. 1: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 2: { 3: _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value)) 4: .Select(s => s.Value) 5: .All(meta => 6: { 7: findRequestContext.AddMatchingEndpoint(meta); 8: return true; 9: }); 10: return new OnFindAsyncResult(callback, state); 11: } 12:  13: protected override void OnEndFind(IAsyncResult result) 14: { 15: OnFindAsyncResult.End(result); 16: } As you can see, we checked all endpoints metadata in repository by invoking the IsMatch method. Then add all proper endpoints metadata into the parameter. Finally since all these methods are asynchronized we need some AsyncResult classes as well. Below are the base class and the inherited classes used in previous methods. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.Threading; 6:  7: namespace Phare.Service 8: { 9: abstract internal class AsyncResult : IAsyncResult 10: { 11: AsyncCallback callback; 12: bool completedSynchronously; 13: bool endCalled; 14: Exception exception; 15: bool isCompleted; 16: ManualResetEvent manualResetEvent; 17: object state; 18: object thisLock; 19:  20: protected AsyncResult(AsyncCallback callback, object state) 21: { 22: this.callback = callback; 23: this.state = state; 24: this.thisLock = new object(); 25: } 26:  27: public object AsyncState 28: { 29: get 30: { 31: return state; 32: } 33: } 34:  35: public WaitHandle AsyncWaitHandle 36: { 37: get 38: { 39: if (manualResetEvent != null) 40: { 41: return manualResetEvent; 42: } 43: lock (ThisLock) 44: { 45: if (manualResetEvent == null) 46: { 47: manualResetEvent = new ManualResetEvent(isCompleted); 48: } 49: } 50: return manualResetEvent; 51: } 52: } 53:  54: public bool CompletedSynchronously 55: { 56: get 57: { 58: return completedSynchronously; 59: } 60: } 61:  62: public bool IsCompleted 63: { 64: get 65: { 66: return isCompleted; 67: } 68: } 69:  70: object ThisLock 71: { 72: get 73: { 74: return this.thisLock; 75: } 76: } 77:  78: protected static TAsyncResult End<TAsyncResult>(IAsyncResult result) 79: where TAsyncResult : AsyncResult 80: { 81: if (result == null) 82: { 83: throw new ArgumentNullException("result"); 84: } 85:  86: TAsyncResult asyncResult = result as TAsyncResult; 87:  88: if (asyncResult == null) 89: { 90: throw new ArgumentException("Invalid async result.", "result"); 91: } 92:  93: if (asyncResult.endCalled) 94: { 95: throw new InvalidOperationException("Async object already ended."); 96: } 97:  98: asyncResult.endCalled = true; 99:  100: if (!asyncResult.isCompleted) 101: { 102: asyncResult.AsyncWaitHandle.WaitOne(); 103: } 104:  105: if (asyncResult.manualResetEvent != null) 106: { 107: asyncResult.manualResetEvent.Close(); 108: } 109:  110: if (asyncResult.exception != null) 111: { 112: throw asyncResult.exception; 113: } 114:  115: return asyncResult; 116: } 117:  118: protected void Complete(bool completedSynchronously) 119: { 120: if (isCompleted) 121: { 122: throw new InvalidOperationException("This async result is already completed."); 123: } 124:  125: this.completedSynchronously = completedSynchronously; 126:  127: if (completedSynchronously) 128: { 129: this.isCompleted = true; 130: } 131: else 132: { 133: lock (ThisLock) 134: { 135: this.isCompleted = true; 136: if (this.manualResetEvent != null) 137: { 138: this.manualResetEvent.Set(); 139: } 140: } 141: } 142:  143: if (callback != null) 144: { 145: callback(this); 146: } 147: } 148:  149: protected void Complete(bool completedSynchronously, Exception exception) 150: { 151: this.exception = exception; 152: Complete(completedSynchronously); 153: } 154: } 155: } 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using Phare.Service; 7:  8: namespace Phare.Service 9: { 10: internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult 11: { 12: public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state) 13: : base(callback, state) 14: { 15: this.Complete(true); 16: } 17:  18: public static void End(IAsyncResult result) 19: { 20: AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result); 21: } 22:  23: } 24:  25: sealed class OnOfflineAnnouncementAsyncResult : 
AsyncResult 26: { 27: public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state) 28: : base(callback, state) 29: { 30: this.Complete(true); 31: } 32:  33: public static void End(IAsyncResult result) 34: { 35: AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result); 36: } 37: } 38:  39: sealed class OnFindAsyncResult : AsyncResult 40: { 41: public OnFindAsyncResult(AsyncCallback callback, object state) 42: : base(callback, state) 43: { 44: this.Complete(true); 45: } 46:  47: public static void End(IAsyncResult result) 48: { 49: AsyncResult.End<OnFindAsyncResult>(result); 50: } 51: } 52:  53: sealed class OnResolveAsyncResult : AsyncResult 54: { 55: EndpointDiscoveryMetadata matchingEndpoint; 56:  57: public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state) 58: : base(callback, state) 59: { 60: this.matchingEndpoint = matchingEndpoint; 61: this.Complete(true); 62: } 63:  64: public static EndpointDiscoveryMetadata End(IAsyncResult result) 65: { 66: OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result); 67: return thisPtr.matchingEndpoint; 68: } 69: } 70: } Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service. So we can use ServiceHost on a console application, windows service, or in IIS as usual. The following code is how to host the discovery service we had just created in a console application. 1: static void Main(string[] args) 2: { 3: using (var host = new ServiceHost(new ManagedProxyDiscoveryService())) 4: { 5: host.Opened += (sender, e) => 6: { 7: host.Description.Endpoints.All((ep) => 8: { 9: Console.WriteLine(ep.ListenUri); 10: return true; 11: }); 12: }; 13:  14: try 15: { 16: // retrieve the announcement, probe endpoint and binding from configuration 17: var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 18: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 19: var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 20: var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress); 21: var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress); 22: probeEndpoint.IsSystemEndpoint = false; 23: // append the service endpoint for announcement and probe 24: host.AddServiceEndpoint(announcementEndpoint); 25: host.AddServiceEndpoint(probeEndpoint); 26:  27: host.Open(); 28:  29: Console.WriteLine("Press any key to exit."); 30: Console.ReadKey(); 31: } 32: catch (Exception ex) 33: { 34: Console.WriteLine(ex.ToString()); 35: } 36: } 37:  38: Console.WriteLine("Done."); 39: Console.ReadKey(); 40: } What we need to notice is that, the discovery service needs two endpoints for announcement and probe. In this example I just retrieve them from the configuration file. I also specified the binding of these two endpoints in configuration file as well. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> And this is the console screen when I ran my discovery service. As you can see there are two endpoints listening for announcement message and probe message.   Discoverable Service and Client Next, let’s create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to let the service send the online announcement message to the discovery service, as well as offline message before it shutdown. Just create a simple service which can make the incoming string to upper. The service contract and implementation would be like this. 1: [ServiceContract] 2: public interface IStringService 3: { 4: [OperationContract] 5: string ToUpper(string content); 6: } 1: public class StringService : IStringService 2: { 3: public string ToUpper(string content) 4: { 5: return content.ToUpper(); 6: } 7: } Then host this service in the console application. In order to make the discovery service easy to be tested the service address will be changed each time it’s started. 1: static void Main(string[] args) 2: { 3: var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString())); 4:  5: using (var host = new ServiceHost(typeof(StringService), baseAddress)) 6: { 7: host.Opened += (sender, e) => 8: { 9: Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri); 10: }; 11:  12: host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty); 13:  14: host.Open(); 15:  16: Console.WriteLine("Press any key to exit."); 17: Console.ReadKey(); 18: } 19: } Currently this service is NOT discoverable. We need to add a special service behavior so that it could send the online and offline message to the discovery service announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior. When we specified the announcement endpoint address and appended it to the service behaviors this service will be discoverable. 1: var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 2: var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 3: var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress); 4: var discoveryBehavior = new ServiceDiscoveryBehavior(); 5: discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint); 6: host.Description.Behaviors.Add(discoveryBehavior); The ServiceDiscoveryBehavior utilizes the service extension and channel dispatcher to implement the online and offline announcement logic. In short, it injected the channel open and close procedure and send the online and offline message to the announcement endpoint.   On client side, when we have the discovery service, a client can invoke a service without knowing its endpoint. 
WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoint by passing the criteria. In the code below I initialized the DiscoveryClient, specified the discovery service probe endpoint address. Then I created the find criteria by specifying the service contract I wanted to use and invoke the Find method. This will send the probe message to the discovery service and it will find the endpoints back to me. The discovery service will return all endpoints that matches the find criteria, which means in the result of the find method there might be more than one endpoints. In this example I just returned the first matched one back. In the next post I will show how to extend our discovery service to make it work like a service load balancer. 1: static EndpointAddress FindServiceEndpoint() 2: { 3: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 4: var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 5: var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress); 6:  7: EndpointAddress address = null; 8: FindResponse result = null; 9: using (var discoveryClient = new DiscoveryClient(discoveryEndpoint)) 10: { 11: result = discoveryClient.Find(new FindCriteria(typeof(IStringService))); 12: } 13:  14: if (result != null && result.Endpoints.Any()) 15: { 16: var endpointMetadata = result.Endpoints.First(); 17: address = endpointMetadata.Address; 18: } 19: return address; 20: } Once we probed the discovery service we will receive the endpoint. So in the client code we can created the channel factory from the endpoint and binding, and invoke to the service. When creating the client side channel factory we need to make sure that the client side binding should be the same as the service side. WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included. This is because the binding was not in the WS-Discovery specification. In the next post I will demonstrate how to add the binding information into the discovery service. At that moment the client don’t need to create the binding by itself. Instead it will use the binding received from the discovery service. 1: static void Main(string[] args) 2: { 3: Console.WriteLine("Say something..."); 4: var content = Console.ReadLine(); 5: while (!string.IsNullOrWhiteSpace(content)) 6: { 7: Console.WriteLine("Finding the service endpoint..."); 8: var address = FindServiceEndpoint(); 9: if (address == null) 10: { 11: Console.WriteLine("There is no endpoint matches the criteria."); 12: } 13: else 14: { 15: Console.WriteLine("Found the endpoint {0}", address.Uri); 16:  17: var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address); 18: factory.Opened += (sender, e) => 19: { 20: Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri); 21: }; 22: var proxy = factory.CreateChannel(); 23: using (proxy as IDisposable) 24: { 25: Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content)); 26: } 27: } 28:  29: Console.WriteLine("Say something..."); 30: content = Console.ReadLine(); 31: } 32: } Similarly, the discovery service probe endpoint and binding were defined in the configuration file. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> OK, now let’s have a test. Firstly start the discovery service, and then start our discoverable service. When it started it will announced to the discovery service and registered its endpoint into the repository, which is the local dictionary. And then start the client and type something. As you can see the client asked the discovery service for the endpoint and then establish the connection to the discoverable service. And more interesting, do NOT close the client console but terminate the discoverable service but press the enter key. This will make the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it started, currently it should be hosted on another address. If we enter something in the client we could see that it asked the discovery service and retrieve the new endpoint, and connect the the service.   Summary In this post I discussed the benefit of using the discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purpose, in this example I used the in memory dictionary as the discovery endpoint metadata repository. And when finding I also just return the first matched endpoint back. I also hard coded the bindings between the discoverable service and the client. In next post I will show you how to solve the problem mentioned above, as well as some additional feature for production usage. You can download the code here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Cisco ASA - Enable communication between interfaces at the same security level

    - by Conor
    I have recently inherited a network with a Cisco ASA (running version 8.2). I am trying to configure it to allow communication between two interfaces configured with the same security level (DMZ-DMZ) "same-security-traffic permit inter-interface" has been set, but hosts are unable to communicate between the interfaces. I am assuming that some NAT settings are causing my issue. Below is my running config: ASA Version 8.2(3) ! hostname asa enable password XXXXXXXX encrypted passwd XXXXXXXX encrypted names ! interface Ethernet0/0 switchport access vlan 400 ! interface Ethernet0/1 switchport access vlan 400 ! interface Ethernet0/2 switchport access vlan 420 ! interface Ethernet0/3 switchport access vlan 420 ! interface Ethernet0/4 switchport access vlan 450 ! interface Ethernet0/5 switchport access vlan 450 ! interface Ethernet0/6 switchport access vlan 500 ! interface Ethernet0/7 switchport access vlan 500 ! interface Vlan400 nameif outside security-level 0 ip address XX.XX.XX.10 255.255.255.248 ! interface Vlan420 nameif public security-level 20 ip address 192.168.20.1 255.255.255.0 ! interface Vlan450 nameif dmz security-level 50 ip address 192.168.10.1 255.255.255.0 ! interface Vlan500 nameif inside security-level 100 ip address 192.168.0.1 255.255.255.0 ! ftp mode passive clock timezone JST 9 same-security-traffic permit inter-interface same-security-traffic permit intra-interface object-group network DM_INLINE_NETWORK_1 network-object host XX.XX.XX.11 network-object host XX.XX.XX.13 object-group service ssh_2220 tcp port-object eq 2220 object-group service ssh_2251 tcp port-object eq 2251 object-group service ssh_2229 tcp port-object eq 2229 object-group service ssh_2210 tcp port-object eq 2210 object-group service DM_INLINE_TCP_1 tcp group-object ssh_2210 group-object ssh_2220 object-group service zabbix tcp port-object range 10050 10051 object-group service DM_INLINE_TCP_2 tcp port-object eq www group-object zabbix object-group protocol TCPUDP protocol-object udp protocol-object tcp object-group service http_8029 tcp port-object eq 8029 object-group network DM_INLINE_NETWORK_2 network-object host 192.168.20.10 network-object host 192.168.20.30 network-object host 192.168.20.60 object-group service imaps_993 tcp description Secure IMAP port-object eq 993 object-group service public_wifi_group description Service allowed on the Public Wifi Group. Allows Web and Email. 
service-object tcp-udp eq domain service-object tcp-udp eq www service-object tcp eq https service-object tcp-udp eq 993 service-object tcp eq imap4 service-object tcp eq 587 service-object tcp eq pop3 service-object tcp eq smtp access-list outside_access_in remark http traffic from outside access-list outside_access_in extended permit tcp any object-group DM_INLINE_NETWORK_1 eq www access-list outside_access_in remark ssh from outside to web1 access-list outside_access_in extended permit tcp any host XX.XX.XX.11 object-group ssh_2251 access-list outside_access_in remark ssh from outside to penguin access-list outside_access_in extended permit tcp any host XX.XX.XX.10 object-group ssh_2229 access-list outside_access_in remark http from outside to penguin access-list outside_access_in extended permit tcp any host XX.XX.XX.10 object-group http_8029 access-list outside_access_in remark ssh from outside to internal hosts access-list outside_access_in extended permit tcp any host XX.XX.XX.13 object-group DM_INLINE_TCP_1 access-list outside_access_in remark dns service to internal host access-list outside_access_in extended permit object-group TCPUDP any host XX.XX.XX.13 eq domain access-list dmz_access_in extended permit ip 192.168.10.0 255.255.255.0 any access-list dmz_access_in extended permit tcp any host 192.168.10.29 object-group DM_INLINE_TCP_2 access-list public_access_in remark Web access to DMZ websites access-list public_access_in extended permit object-group TCPUDP any object-group DM_INLINE_NETWORK_2 eq www access-list public_access_in remark General web access. (HTTP, DNS & ICMP and Email) access-list public_access_in extended permit object-group public_wifi_group any any pager lines 24 logging enable logging asdm informational mtu outside 1500 mtu public 1500 mtu dmz 1500 mtu inside 1500 no failover icmp unreachable rate-limit 1 burst-size 1 no asdm history enable arp timeout 60 global (outside) 1 interface global (dmz) 2 interface nat (public) 1 0.0.0.0 0.0.0.0 nat (dmz) 1 0.0.0.0 0.0.0.0 nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface 2229 192.168.0.29 2229 netmask 255.255.255.255 static (inside,outside) tcp interface 8029 192.168.0.29 www netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.13 192.168.10.10 netmask 255.255.255.255 dns static (dmz,outside) XX.XX.XX.11 192.168.10.30 netmask 255.255.255.255 dns static (dmz,inside) 192.168.0.29 192.168.10.29 netmask 255.255.255.255 static (dmz,public) 192.168.20.30 192.168.10.30 netmask 255.255.255.255 dns static (dmz,public) 192.168.20.10 192.168.10.10 netmask 255.255.255.255 dns static (inside,dmz) 192.168.10.0 192.168.0.0 netmask 255.255.255.0 dns access-group outside_access_in in interface outside access-group public_access_in in interface public access-group dmz_access_in in interface dmz route outside 0.0.0.0 0.0.0.0 XX.XX.XX.9 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy http server enable http 192.168.0.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 
telnet timeout 5 ssh 192.168.0.0 255.255.255.0 inside ssh timeout 20 console timeout 0 dhcpd dns 61.122.112.97 61.122.112.1 dhcpd auto_config outside ! dhcpd address 192.168.20.200-192.168.20.254 public dhcpd enable public ! dhcpd address 192.168.0.200-192.168.0.254 inside dhcpd enable inside ! threat-detection basic-threat threat-detection statistics host threat-detection statistics access-list no threat-detection statistics tcp-intercept ntp server 130.54.208.201 source public webvpn ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum client auto message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect ip-options inspect netbios inspect rsh inspect rtsp inspect skinny inspect esmtp inspect sqlnet inspect sunrpc inspect tftp inspect sip inspect xdmcp !
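    If NAT really is what is blocking the DMZ-to-DMZ traffic, the usual suggestion on 8.2 is to add an identity translation (or a nat 0 exemption) between the same-security pair so the firewall has a matching xlate for traffic crossing them. A sketch only -- the interface names and the second subnet are placeholders, since the second DMZ interface does not appear in the config above:

        ! Identity static NAT in both directions between the two same-security interfaces.
        ! dmz1/dmz2 and the subnets are placeholders; substitute the real names and networks.
        static (dmz1,dmz2) 192.168.10.0 192.168.10.0 netmask 255.255.255.0
        static (dmz2,dmz1) 192.168.30.0 192.168.30.0 netmask 255.255.255.0

    Running packet-tracer input dmz1 tcp <source-ip> 1234 <dest-ip> 80 is a quick way to confirm whether the drop is a NAT failure or an ACL failure.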

    Read the article

  • Oracle Linux Tips and Tricks: Using SSH

    - by Robert Chase
    Out of all of the utilities available to systems administrators ssh is probably the most useful of them all. Not only does it allow you to log into systems securely, but it can also be used to copy files, tunnel IP traffic and run remote commands on distant servers. It’s truly the Swiss army knife of systems administration. Secure Shell, also known as ssh, was developed in 1995 by Tau Ylonen after the University of Technology in Finland suffered a password sniffing attack. Back then it was common to use tools like rcp, rsh, ftp and telnet to connect to systems and move files across the network. The main problem with these tools is they provide no security and transmitted data in plain text including sensitive login credentials. SSH provides this security by encrypting all traffic transmitted over the wire to protect from password sniffing attacks. One of the more common use cases involving SSH is found when using scp. Secure Copy (scp) transmits data between hosts using SSH and allows you to easily copy all types of files. The syntax for the scp command is: scp /pathlocal/filenamelocal remoteuser@remotehost:/pathremote/filenameremote In the following simple example, I move a file named myfile from the system test1 to the system test2. I am prompted to provide valid user credentials for the remote host before the transfer will proceed.  If I were only using ftp, this information would be unencrypted as it went across the wire.  However, because scp uses SSH, my user credentials and the file and its contents are confidential and remain secure throughout the transfer.  [user1@test1 ~]# scp /home/user1/myfile user1@test2:/home/user1user1@test2's password: myfile                                    100%    0     0.0KB/s   00:00 You can also use ssh to send network traffic and utilize the encryption built into ssh to protect traffic over the wire. This is known as an ssh tunnel. In order to utilize this feature, the server that you intend to connect to (the remote system) must have TCP forwarding enabled within the sshd configuraton. To enable TCP forwarding on the remote system, make sure AllowTCPForwarding is set to yes and enabled in the /etc/ssh/sshd_conf file: AllowTcpForwarding yes Once you have this configured, you can connect to the server and setup a local port which you can direct traffic to that will go over the secure tunnel. The following command will setup a tunnel on port 8989 on your local system. You can then redirect a web browser to use this local port, allowing the traffic to go through the encrypted tunnel to the remote system. It is important to select a local port that is not being used by a service and is not restricted by firewall rules.  In the following example the -D specifies a local dynamic application level port forwarding and the -N specifies not to execute a remote command.   ssh –D 8989 [email protected] -N You can also forward specific ports on both the local and remote host. The following example will setup a port forward on port 8080 and forward it to port 80 on the remote machine. ssh -L 8080:farwebserver.com:80 [email protected] You can even run remote commands via ssh which is quite useful for scripting or remote system administration tasks. The following example shows how to  log in remotely and execute the command ls –la in the home directory of the machine. Because ssh encrypts the traffic, the login credentials and output of the command are completely protected while they travel over the wire. 
[rchase@test1 ~]$ ssh rchase@test2 'ls -la'rchase@test2's password: total 24drwx------  2 rchase rchase 4096 Sep  6 15:17 .drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..-rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history-rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout-rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile-rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc You can execute any command contained in the quotations marks as long as you have permission with the user account that you are using to log in. This can be very powerful and useful for collecting information for reports, remote controlling systems and performing systems administration tasks using shell scripts. To make your shell scripts even more useful and to automate logins you can use ssh keys for running commands remotely and securely without the need to enter a password. You can accomplish this with key based authentication. The first step in setting up key based authentication is to generate a public key for the system that you wish to log in from. In the following example you are generating a ssh key on a test system. In case you are wondering, this key was generated on a test VM that was destroyed after this article. [rchase@test1 .ssh]$ ssh-keygen -t rsaGenerating public/private rsa key pair.Enter file in which to save the key (/home/rchase/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/rchase/.ssh/id_rsa.Your public key has been saved in /home/rchase/.ssh/id_rsa.pub.The key fingerprint is:7a:8e:86:ef:59:70:ef:43:b7:ee:33:03:6e:6f:69:e8 rchase@test1The key's randomart image is:+--[ RSA 2048]----+|                 ||  . .            ||   o .           ||    . o o        ||   o o oS+       ||  +   o.= =      ||   o ..o.+ =     ||    . .+. =      ||     ...Eo       |+-----------------+ Now that you have the key generated on the local system you should to copy it to the target server into a temporary location. The user’s home directory is fine for this. [rchase@test1 .ssh]$ scp id_rsa.pub rchase@test2:/home/rchaserchase@test2's password: id_rsa.pub                  Now that the file has been copied to the server, you need to append it to the authorized_keys file. This should be appended to the end of the file in the event that there are other authorized keys on the system. [rchase@test2 ~]$ cat id_rsa.pub >> .ssh/authorized_keys Once the process is complete you are ready to login. Since you are using key based authentication you are not prompted for a password when logging into the system.   [rchase@test1 ~]$ ssh test2Last login: Fri Sep  6 17:42:02 2013 from test1 This makes it much easier to run remote commands. Here’s an example of the remote command from earlier. With no password it’s almost as if the command ran locally. [rchase@test1 ~]$ ssh test2 'ls -la'total 32drwx------  3 rchase rchase 4096 Sep  6 17:40 .drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..-rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history-rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout-rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile-rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc As a security consideration it's important to note the permissions of .ssh and the authorized_keys file.  .ssh should be 700 and authorized_keys should be set to 600.  This prevents unauthorized access to ssh keys from other users on the system.   An even easier way to move keys back and forth is to use ssh-copy-id. 
    Instead of copying the file and appending it manually to the authorized_keys file, ssh-copy-id does both steps at once for you. Here's an example of moving the same key using ssh-copy-id. The -i option specifies the path to the identity file, which in this case is /home/rchase/.ssh/id_rsa.pub.

        [rchase@test1]$ ssh-copy-id -i /home/rchase/.ssh/id_rsa.pub rchase@test2

    One of the last tips that I will cover is the ssh config file. By using the ssh config file you can set up host aliases to make logins to hosts with odd ports or long hostnames much easier to remember. Here's an example entry in our .ssh/config file.

        Host dev1
            Hostname somereallylonghostname.somereallylongdomain.com
            Port 28372
            User somereallylongusername12345678

    Let's compare the login process between the two. Which would you rather type and remember?

        ssh somereallylongusername12345678@somereallylonghostname.somereallylongdomain.com -p 28372
        ssh dev1

    I hope you find these tips useful. There are a number of tools used by system administrators to streamline processes and simplify workflows, and whether you are new to Linux or a longtime user, I'm sure you will agree that SSH offers useful features that can be used every day. Send me your comments and let us know the ways you use SSH with Linux. If you have other tools you would like to see covered in a similar post, send in your suggestions.
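    A closing aside for readers who automate from Java rather than from shell scripts: the same key-based, password-less remote command pattern can be driven programmatically through a third-party SSH library such as JSch. The sketch below is only an illustration under that assumption; it reuses the rchase/test2 names and key path from the examples above, relaxes host-key checking purely to keep it short, and is not part of the original article's toolchain.

        // Minimal sketch: run 'ls -la' on test2 over SSH using the key pair generated earlier.
        import com.jcraft.jsch.ChannelExec;
        import com.jcraft.jsch.JSch;
        import com.jcraft.jsch.Session;

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class RemoteLs {
            public static void main(String[] args) throws Exception {
                JSch jsch = new JSch();
                jsch.addIdentity("/home/rchase/.ssh/id_rsa");      // private key from ssh-keygen above

                Session session = jsch.getSession("rchase", "test2", 22);
                session.setConfig("StrictHostKeyChecking", "no");  // demo only; verify host keys in real use
                session.connect();

                ChannelExec channel = (ChannelExec) session.openChannel("exec");
                channel.setCommand("ls -la");
                BufferedReader out = new BufferedReader(new InputStreamReader(channel.getInputStream()));
                channel.connect();

                String line;
                while ((line = out.readLine()) != null) {
                    System.out.println(line);                      // same listing the shell example produced
                }

                channel.disconnect();
                session.disconnect();
            }
        }

    Because the key pair handles authentication, the program never touches a password, which mirrors the shell workflow described above.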

    Read the article

  • How can I set a local branch to pull / merge from a particular remote branch?

    - by John
    I have a local branch foo that started life as a branch off of master. Then I pushed it to my remote, and it's now happily living life with its siblings in remotes/origin. I want pull to automatically pull from remotes/origin/foo, and I want status -sb to show me how many changes I am ahead of remotes/origin/foo. I thought the way to do this was:

        git config branch.foo.merge 'refs/heads/foo'

    However, after doing that, I get this message:

        ? git status -sb
        ## foo
        ? git pull
        Your configuration specifies to merge with the ref 'foo' from the remote, but no such ref was fetched.

    What am I doing wrong?

    Read the article

  • xampp virtual host in config file causes forbidden error...

    - by Ronedog
    Any idea why the following virtual host causes xampp to give me a "403 forbidden error"?

        <VirtualHost *:80>
            ServerName rst
            DocumentRoot "C:/xampp/htdocs/administrator/login.php"
        </VirtualHost>

    If I delete this code then xampp loads web pages just fine... but I don't want to have to type in the whole URL; I want to just type in "rst" and go to my page. This is on a Windows Vista machine. Thanks for the help.

    Read the article

  • How can I use pywin32 with a virtualenv without having to include the host environment's site-packages?

    - by jkp
    I'm working with PyInstaller under Python 2.6, which is only partially supported due to the mess MS have created with their manifest nonsense, which now affects Python since it is MSVC8 compiled. The problem is that the manifest embedding support relies on the pywin32 extensions in order to build. This is a pain because, without including the host's site-packages folder when I create the virtualenv (which kind of defeats the point of a build environment), I cannot find a way to install the required extensions so that they are accessible to PyInstaller. Has anyone found a solution to this issue?

    Read the article

  • How can I run a command on a remote machine with Perl?

    - by Bharath Kumar
    I'm using the following code to connect to a remote machine and try to execute one simple command on it.

        cat tt.pl
        #!/usr/bin/perl
        #use strict;
        use warnings;
        use Net::Telnet;
        $telnet = new Net::Telnet ( Timeout=>2, Errmode=>'die');
        $telnet->open('172.168.12.58');
        $telnet->waitfor('/login:\s*/');
        $telnet->print('admin');
        $telnet->waitfor('/password:\s*/');
        $telnet->print('Blue');
        #$telnet->cmd('ver > C:\\log.txt');
        $telnet->cmd('mkdir gy');
        You have new mail in /var/spool/mail/root
        [root@localhost]#

    But when I execute this script it throws this error:

        [root@localhost]# perl tt.pl
        command timed-out at tt.pl line 12
        [root@localhost]#

    Please help me with this.

    Read the article

  • Separate php.ini file for each Apache virtual host?

    - by Calvin L
    Is it possible to have a separate php.ini file that overrides the default php.ini file for each virtual host? I'm running Apache/2.2.14, PHP 5.3.2-1. For example I have several vhosts pointing to domains in my /var/www/ directory:

        /var/www/website1.com
        /var/www/website2.com

    What I'd like is to be able to place a custom php.ini file in each directory that would override the default values only for that vhost, but keep the original defaults if the value isn't specified:

        /var/www/website1.com/htdocs/
        /var/www/website1.com/php.ini

    Read the article

  • Is it possible to use AWS as a web host?

    - by Matrym
    Is it possible to load / host an entire website using AWS? Or is it only a service that can load specific pieces of a website - such as images, etc. Obviously, I'd want to use my own domain. If you can use it, are there any limitations? Here's the AWS link, for context: http://aws.amazon.com/s3/

    Read the article

  • Aptana Studio 2.0: How to checkout SVN Project to SFTP remote dir?

    - by Brian Lacy
    If you're familiar with the excellent Aptana Studio IDE, you know it's based on Eclipse. You also know it comes pre-packaged with SFTP capability. I need to work on a remote server, where I have Apache installed; SFTP is ideal for this. I've installed the Subclipse plugin, and I can access and checkout projects from the Repo. I can create a new project from SVN source, which will download all the source to my chosen workspace or a specified location. But I can't figure out a way to combine these features! I need to create a Project on a remote server via SFTP but I need to link the source to a repository. Is there any way to do this?

    Read the article

  • Write directly to a remote Git repository, without adding objects to a local index/repo?

    - by Ryan B. Lynch
    Does Git support any commands that would allow me to commit directly from a local/working tree into a remote repository? The normal workflow requires a "git add", at least, to populate the object database with copies of the file contents, etc. I understand that this is NOT the normal, expected Git workflow. But I noticed that Git already supports downloading directly from the repository, with no local repo ("git archive"), so it seems reasonable that there might be a similar uploading operation. Alternatively, if there isn't such a command in the core Git itself, does any 3rd-party software support direct remote writes?

    Read the article

  • How to manage eclipse project on remote computer; ssh, ftp?

    - by Kirzilla
    Hello. Usually I create my project workspace on my localhost (Windows). As soon as my code is tested, I commit it into the repository. But some days ago I faced a little difficulty. My customer wants me to write code directly on his server, because he has some handmade binaries that work only on his machine (Solaris). I really don't know what to do. I've tried the Eclipse plugin for connecting to remote servers, but I'm still unable to create a remote project. Any ideas? PS: Sorry for my English :) Thank you.

    Read the article

  • Tomcat JMX connection - Authentication failed.

    - by ziggy
    I am having some problems setting up Tomcat for JMX. I added the following properties to CATALINA_OPTS:

        CATALINA_OPTS="-Dcom.sun.management.jmxremote.port=18070 -Dcom.sun.management.jmxremote.password.file=$CATALINA_BASE/conf/jmxremote.password -Dcom.sun.management.jmxremote.ssl=false"

    and I have added the jmxremote.password file to the conf directory. I wrote a client tool that connects to the JMX server running on port 18070. When I run the client program I get the following error:

        Exception in thread "main" java.lang.SecurityException: Authentication failed! Credentials required
            at com.sun.jmx.remote.security.JMXPluggableAuthenticator.authenticationFailure(JMXPluggableAuthenticator.java:193)
            at com.sun.jmx.remote.security.JMXPluggableAuthenticator.authenticate(JMXPluggableAuthenticator.java:145)
            at sun.management.jmxremote.ConnectorBootstrap$AccessFileCheckerAuthenticator.authenticate(ConnectorBootstrap.java:185)
            at javax.management.remote.rmi.RMIServerImpl.doNewClient(RMIServerImpl.java:213)
            at javax.management.remote.rmi.RMIServerImpl.newClient(RMIServerImpl.java:180)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
            at sun.rmi.transport.Transport$1.run(Transport.java:159)
            at java.security.AccessController.doPrivileged(Native Method)
            at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
            at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
            at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
            at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
            at java.lang.Thread.run(Thread.java:619)
            at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:255)
            at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:233)
            at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:142)
            at javax.management.remote.rmi.RMIServerImpl_Stub.newClient(Unknown Source)
            at javax.management.remote.rmi.RMIConnector.getConnection(RMIConnector.java:2312)
            at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:277)
            at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:248)
            at com.bt.c21sc.c21tkprobe.accessors.C21TkProbeJmxDAO.connect(Unknown Source)
            at com.bt.c21sc.c21tkprobe.service.C21TkProbeBD.execute(Unknown Source)
            at com.bt.c21sc.c21tkprobe.C21AppserverProbe.main(Unknown Source)

    If I change the CATALINA_OPTS properties to

        CATALINA_OPTS="-Dcom.sun.management.jmxremote.port=18070 -Dcom.sun.management.jmxremote.password.file=$CATALINA_BASE/conf/jmxremote.password -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"

    then it works fine. I think what I am confused about is what counts as remote access. I am running the client program away from the Tomcat instance, but both Tomcat and the client tool are on the same machine (i.e. different Java virtual machines but the same environment). I thought I had to configure remote authentication only if I access the JMX server remotely from a different machine.
By remote access do they mean accessing the JMX server from any VM either locally or remotely?
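    A minimal sketch of the client-side piece this question hinges on, assuming password authentication stays enabled: the credentials are passed in the environment map handed to JMXConnectorFactory.connect. The role name and password below are placeholders that would need matching entries in conf/jmxremote.password and conf/jmxremote.access; the port matches the CATALINA_OPTS above.

        import java.util.HashMap;
        import java.util.Map;

        import javax.management.MBeanServerConnection;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXConnectorFactory;
        import javax.management.remote.JMXServiceURL;

        public class JmxAuthClient {
            public static void main(String[] args) throws Exception {
                // Port 18070 matches the CATALINA_OPTS shown above.
                JMXServiceURL url = new JMXServiceURL(
                        "service:jmx:rmi:///jndi/rmi://localhost:18070/jmxrmi");

                // Placeholder role/password; they must exist in jmxremote.password
                // and be granted rights in jmxremote.access.
                Map<String, Object> env = new HashMap<String, Object>();
                env.put(JMXConnector.CREDENTIALS, new String[] { "monitorRole", "secret" });

                JMXConnector connector = JMXConnectorFactory.connect(url, env);
                MBeanServerConnection server = connector.getMBeanServerConnection();
                System.out.println("MBeans registered: " + server.getMBeanCount());
                connector.close();
            }
        }

    With jmxremote.authenticate=false the environment map can simply be null, which is why the second set of options works without credentials; when authentication is enabled, the check applies to any connection made through the remote connector, regardless of whether the client happens to run on the same machine.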

    Read the article

  • What's the UNC path for the local computer from a remote machine?

    - by KaluSingh Gabbar
    I am writing a small utility program in IronPython to install applications on a remote machine using ManagementClass, which uses WMI. The script installs an application on Machine_B from Machine_A. It works fine as long as the .msi file is on a local drive of the target machine (Machine_B, in this case). I want to be able to do the same thing with the .msi file being on the host machine (Machine_A).

        network_scope = r"\\%Machine_B\root\cimv2"
        scope = ManagementScope(network_scope, options)
        scope.Connect()
        mp = ManagementPath("Win32_Product")
        ogo = ObjectGetOptions()
        mc = ManagementClass(scope, mp, ogo)
        inParams = mc.GetMethodParameters ("Install")
        inParams["PackageLocation"] = r"C:\installs\python-3.1.1.msi"
        inParams["AllUsers"] = True
        retVal = mc.InvokeMethod ("Install", inParams, None)
        print retVal ["ReturnValue"].ToString()

    PROBLEM:
    [Machine A] --- where I am running the script, and where I want to host the .msi file
    [Machine B] --- where I want to install the application

    So, how can I define the UNC path for the local machine? What should inParams["PackageLocation"] be?

    Read the article

  • EJB3.1 Remote invocation - is it distributed automatically? is it expensive?

    - by Hank
    I'm building a JEE6 application with performance and scalability in the forefront of my mind. Business logic and the JPA2 facade are held in stateless session beans (EJB 3.1). As of right now, the SLSBs implement only @Remote interfaces. When a bean needs to access another bean, it does so via RMI. My reasoning behind this is the assumption that, once the application runs on a bunch of clustered application servers, the RMI part allows the execution to be distributed across the whole cluster automagically. Is that a correct assumption? I'm fine with dealing with the downsides of that (objects lose their entityManager session, pass-by-value), at least I think so. But I am wondering if constant remote invocation isn't adding more load than necessary.
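    Whether @Remote calls between beans deployed in the same JVM are actually spread across a cluster is container-specific; several servers short-circuit them into local calls, so the automatic-distribution assumption is worth checking against the vendor's clustering documentation. A common hedge is to expose both a @Local and a @Remote view of the same bean, so co-located callers skip the pass-by-value copying while remote clients keep the RMI path. The sketch below is only illustrative; the names are made up and each type would normally live in its own source file.

        import javax.ejb.Local;
        import javax.ejb.Remote;
        import javax.ejb.Stateless;

        @Local
        public interface QuoteServiceLocal {
            long quote(long amount);
        }

        @Remote
        public interface QuoteServiceRemote {
            long quote(long amount);
        }

        // In-VM callers inject QuoteServiceLocal (pass-by-reference semantics);
        // clients in other JVMs or on other cluster nodes look up QuoteServiceRemote.
        @Stateless
        public class QuoteServiceBean implements QuoteServiceLocal, QuoteServiceRemote {
            public long quote(long amount) {
                return amount * 2; // placeholder business logic
            }
        }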

    Read the article

< Previous Page | 160 161 162 163 164 165 166 167 168 169 170 171  | Next Page >