Search Results

Search found 29422 results on 1177 pages for 'port scanning service'.

Page 43/1177 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >

  • OSB, Service Callouts and OQL - Part 3

    - by Sabha
    In the previous sections of the "OSB, Service Callouts and OQL" series, we analyzed the threading model used by OSB for Service Callouts, examined OSB server threads hung in Service Callouts, and identified the Proxies and remote services involved in the hang using OQL. This final section of the series focuses on the corrective action to avoid Service Callout related OSB server hangs. Please refer to the blog post for more details.

    Read the article

  • Multiple Denial of Service vulnerabilities in Wireshark

    - by chandan
    CVE             Description                              CVSSv2 Base Score
    CVE-2012-0041   Denial of Service (DoS) vulnerability    1.9
    CVE-2012-0042   Denial of Service (DoS) vulnerability    2.9
    CVE-2012-0043   Buffer Overflow vulnerability            5.4
    CVE-2012-0066   Denial of Service (DoS) vulnerability    1.9
    CVE-2012-0067   Denial of Service (DoS) vulnerability    1.9
    CVE-2012-0068   Buffer Overflow vulnerability            4.4
    Component: Wireshark. Product and Resolution: Solaris 11 11/11 SRU 04.
    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • An API for Google's URL shortening service made available to Web developers

    An API for Google's URL shortening service has been made available to Web developers. Google has just published an API for Goo.gl, its URL shortening service. The API will let developers integrate the Goo.gl service into their Web pages and their various Web projects. The Goo.gl URL shortening service was launched by Google in 2009 to enter the URL-shortener market and compete with Bit.ly. Since its laun...
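
    Purely as a hedged illustration (not part of the announcement above), a shortening request against the v1 REST endpoint looked roughly like the sketch below at the time; treat the exact URL, JSON field names and any API-key requirement as assumptions to verify against Google's documentation.

        # Hedged sketch: shorten a URL via the Goo.gl v1 REST API
        # (endpoint and field names are assumptions based on the era's docs)
        curl -X POST 'https://www.googleapis.com/urlshortener/v1/url' \
             -H 'Content-Type: application/json' \
             -d '{"longUrl": "http://www.example.com/some/long/path"}'
        # Expected response, roughly:
        # {"kind": "urlshortener#url", "id": "http://goo.gl/xxxxx", "longUrl": "..."}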

    Read the article

  • ASP.NET MVC: Moving code from controller action to service layer

    - by DigiMortal
    I fixed one controller action in my application that didn't seem good enough to me. It wasn't a big change, but it is worth showing beginners how much nicer code becomes when you use correct layering in your application. As an example I use code from my posting ASP.NET MVC: How to implement invitation codes support.

    Problematic controller action
    Although my controller action works well, I don't like how it looks. It does too much for a controller action, in my opinion.

        [HttpPost]
        public ActionResult GetAccess(string accessCode)
        {
            if(string.IsNullOrEmpty(accessCode.Trim()))
            {
                ModelState.AddModelError("accessCode", "Insert invitation code!");
                return View();
            }

            Guid accessGuid;

            try
            {
                accessGuid = Guid.Parse(accessCode);
            }
            catch
            {
                ModelState.AddModelError("accessCode", "Incorrect format of invitation code!");
                return View();
            }

            using(var ctx = new EventsEntities())
            {
                var user = ctx.GetNewUserByAccessCode(accessGuid);
                if(user == null)
                {
                    ModelState.AddModelError("accessCode", "Cannot find account with given invitation code!");
                    return View();
                }

                user.UserToken = User.Identity.GetUserToken();
                ctx.SaveChanges();
            }

            Session["UserId"] = accessGuid;

            return Redirect("~/admin");
        }

    Looking at this code, my first thought is that all this access code logic must live somewhere else. We have working functionality in the wrong place and we should do something about it.

    Service layer
    I add layers to my application very carefully because I don't like to use a hand grenade to kill a fly. When I see a real need for some layer and it doesn't add too much complexity, I will add the new layer. Right now is a good time to add a service layer to my small application. After that it is time to move the code to the service layer and inject the service class into the controller.

        public interface IUserService
        {
            bool ClaimAccessCode(string accessCode, string userToken,
                                 out string errorMessage);

            // Other methods of user service
        }

    I need this interface when writing unit tests, because I need a fake service that doesn't communicate with the database and other external sources.

        public class UserService : IUserService
        {
            private readonly IDataContext _context;

            public UserService(IDataContext context)
            {
                _context = context;
            }

            public bool ClaimAccessCode(string accessCode, string userToken, out string errorMessage)
            {
                if (string.IsNullOrEmpty(accessCode.Trim()))
                {
                    errorMessage = "Insert invitation code!";
                    return false;
                }

                Guid accessGuid;
                if (!Guid.TryParse(accessCode, out accessGuid))
                {
                    errorMessage = "Incorrect format of invitation code!";
                    return false;
                }

                var user = _context.GetNewUserByAccessCode(accessGuid);
                if (user == null)
                {
                    errorMessage = "Cannot find account with given invitation code!";
                    return false;
                }

                user.UserToken = userToken;
                _context.SaveChanges();

                errorMessage = string.Empty;
                return true;
            }
        }

    For now I used a simple solution for errors and made the access code claiming method follow the usual TrySomething() pattern. This way I can keep error messages and their retrieval away from the controller; in the controller I just mediate the error message from the service to the view.

    Controller
    Now all the code has moved to the service layer, and we also need some modifications to the controller code so that it makes use of the user service. I don't show the DI/IoC details of how the service instance is given to the controller here (a hedged sketch follows at the end of this excerpt). The GetAccess() action of the controller now looks like this.

        [HttpPost]
        public ActionResult GetAccess(string accessCode)
        {
            var userToken = User.Identity.GetUserToken();
            string errorMessage;

            if (!_userService.ClaimAccessCode(accessCode, userToken,
                                              out errorMessage))
            {
                ModelState.AddModelError("accessCode", errorMessage);
                return View();
            }

            Session["UserId"] = Guid.Parse(accessCode);
            return Redirect("~/admin");
        }

    It's short and clean now, and it deals only with the web-site part of access code claiming. In case of an error the user is shown the access code claiming view with the error message that the ClaimAccessCode() method returns as an output parameter. If everything goes fine, the access code is reserved for the current user and the user is authenticated.

    Conclusion
    When a controller action grows big, you have to move code to the layers it actually belongs to. In this posting I showed you how I moved access code claiming functionality from a controller action to a user service class that belongs to the service layer of my application. As a result I have a controller action that coordinates the user interaction when going through the access code claiming process. The controller communicates with the service layer and gets information about whether access code claiming succeeded.
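
    The posting deliberately skips the DI/IoC wiring mentioned above. As a hedged sketch only (the controller name and the choice of container are assumptions; any IoC container or a hand-rolled controller factory could supply the instance), constructor injection of IUserService might look like this:

        using System.Web.Mvc;

        public class AccessController : Controller
        {
            private readonly IUserService _userService;

            // The IoC container (or a custom controller factory) passes in the
            // service instance; the controller never creates UserService itself.
            public AccessController(IUserService userService)
            {
                _userService = userService;
            }

            // GetAccess() from the posting lives here and calls
            // _userService.ClaimAccessCode(...) exactly as shown above.
        }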

    Read the article

  • Using the OAM Mobile & Social SDK to secure native mobile apps - Part 2 : OAM Mobile & Social Server configuration

    - by kanishkmahajan
    Objective
    In the second part of this blog post I'll now cover configuration of OAM to secure our sample native apps developed using the iOS SDK. First, here are some key server side concepts:
    Application Profiles: An application profile is a logical representation of your application within the OAM server. It could be a web (HTML/JavaScript) or native (iOS or Android) application. Applications may have different requirements for AuthN/AuthZ, and therefore each application that interacts with the OAM Mobile & Social REST services must be uniquely defined.
    Service Providers: Service providers represent the back-end services that are accessed by applications. With OAM Mobile & Social these services are in the areas of authentication, authorization and user profile access. A Service Provider then defines a type or class of service for authentication, authorization or user profiles. For example, the JWTAuthentication provider performs authentication and returns JWT (JSON Web Tokens) to the application. In contrast, the OAMAuthentication provider also performs authentication but uses OAM SSO tokens.
    Service Profiles: A Service Profile is a logical envelope that defines a service endpoint URL for a service provider for the OAM Mobile & Social Service. You can create multiple service profiles for a service provider to define token capabilities and service endpoints. Each service provider instance requires at least one corresponding service profile. The OAM Mobile & Social Service includes a pre-configured service profile for each pre-configured service provider.
    Service Domains: Service domains bind together application profiles and service profiles with an optional security handler.
    So now let's configure the OAM server. Additional details are in the OAM documentation; this post simply provides an outline of the configuration tasks required to configure OAM for securing native apps.

    Configuration
    Create The Application Profile
    Log on to the Oracle Access Management console and from System Configuration -> Mobile and Social -> Mobile Services, select "Create" under Application Profiles. You would do this step twice - once for each of the native apps - AvitekInventory and AvitekScheduler. Enter the parameters for the new Application Profile:
    Name: The application name. In this example we use 'InventoryApp' for the AvitekInventory app and 'SchedulerApp' for the AvitekScheduler app. The application name configured here must match the application name in the settings for the deployed iOS application.
    BaseSecret: Enter a password here. This does not need to match any existing password. It is used as an encryption key between the client and the OAM server.
    Mobile Configuration: Enable this checkbox for any mobile applications. This enables the SDK to collect and send mobile-specific attributes to the OAM server.
    Webview: Controls the type of browser that the iOS application will use. The embedded browser (default) will render the browser within the application. External will use the system standalone browser. External can sometimes be preferable for debugging.
    URLScheme: The URL scheme associated with the iOS apps, also used as a custom URL scheme to register O/S handlers that will take control when OAM transfers control to the device. For the AvitekInventory and AvitekScheduler apps I used osa:// and client:// respectively. You set this scheme in Xcode while developing your iOS apps under Info->URL Types (see the hedged Info.plist sketch at the end of this excerpt).
    Bundle Identifier: The fully qualified name of your iOS application. You typically set this when you create a new Xcode project, or under General->Identity in Xcode. For the AvitekInventory and AvitekScheduler apps these were com.us.oracle.AvitekInventory and com.us.oracle.AvitekScheduler respectively.

    Create The Service Domain
    Select "Create" under Service Domains. Create a name for your domain (AvitekDomain is what I've used). The name configured must match the service domain set in the iOS application settings. Under "Application Profile Selection" click the browse button. Choose the application profiles that you created in the previous step one by one. Set the InventoryApp as the SSO agent (with an automatic priority of 1) and the SchedulerApp as the SSO client. This associates these applications with this service domain and configures them in a 'circle of trust'. Advance to the next page of the wizard to configure the services for this domain. For this example we will use the following services:
    Authentication: This will use the JWT (JSON Web Token) format authentication provider. Upon successful authentication the iOS application will receive a signed JWT token from the OAM Mobile & Social service. This token will be used in subsequent calls to OAM. Use 'MobileOAMAuthentication' here.
    Authorization: The authorization provider. The SDK makes calls to this provider endpoint to obtain authorization decisions on resource requests. Use 'OAMAuthorization' here.
    User Profile Service: This is the service that provides user profile services (attribute lookup, attribute modification). It can be any directory configured as a data source in OAM.
    And that's it! We're done configuring our native apps. In the next section, let's look at some additional features that were mentioned in the earlier post and are automated by the SDK for the app developer, i.e. areas that require no additional coding when developing with the SDK, as they only require server side configuration.

    Additional Configuration
    Offline Authentication
    Select this option in the service domain configuration to allow users to log in and authenticate to the application locally. Clear the box to block users from authenticating locally.
    Strong Authentication
    By simply selecting the OAAMSecurityHandlerPlugin while configuring mobile-related Service Domains, the OAM Mobile & Social service allows sophisticated device and client application registration logic, as well as the advanced risk and fraud analysis logic found in OAAM, to be applied to mobile authentication. Let's look at some scenarios where the OAAMSecurityHandlerPlugin gets used. First, when we configure OAM and OAAM to integrate together using the TAP scheme, that integration kicks off by selecting the OAAMSecurityHandlerPlugin in the mobile service domain. This is how the mobile device is prompted for KBA, OTP etc., depending on the TAP scheme integration and the OAM users registered in the OAAM database. Second, when we configured the service domain, there were claim attributes that are already pre-configured in the OAM Mobile & Social service and we simply accepted the default values - these are the set of attributes that will be fetched from the device and passed to the server during registration/authentication as device profile attributes. When a mobile application requests a token through the Mobile Client SDK, the SDK logic will send the Device Profile attributes as part of an HTTP request. This set of Device Profile attributes enhances security by creating an audit trail for devices that assists device identification. When the OAAM Security Plug-in is used, a particular combination of Device Profile attribute values is treated as a device fingerprint, known as the Digital Fingerprint in the OAAM Administration Console. Each fingerprint is assigned a unique fingerprint number. Each OAAM session is associated with a fingerprint, and the fingerprint makes it possible to log (and audit) the devices that are performing authentication and token acquisition. Finally, if the jailbroken option is selected while configuring an application profile, the SDK detects that a device is jailbroken based on the configured policy; if the OAAM handler is configured, the plug-in can allow or block access to the client device depending on the OAAM policy, as well as detect blacklisted, lost or stolen devices and send a wipeout command that deletes all the Mobile & Social relevant data and blocks the device from future access.

    Social Logins
    Finally, let's complete this post by adding configuration for social logins for mobile applications. Although the Avitek sample apps do not demonstrate social logins, this would be an ideal exercise for you based on the sample code provided in the earlier post. I'll cover the server side configuration here (with Facebook as an example), and you can retrofit the code to accommodate social logins by following the steps outlined in "Invoking Authentication Services", adding code in LoginViewController and maybe creating a new delegate - AvitekRPDelegate - based on the description in the previous post. So, here all you will need to do is configure an application profile for social login, configure a new service domain that uses the social login application profile, register the app on Facebook and finally configure the Facebook OAuth provider in OAM with those settings. Navigate to Mobile and Social, click on "Internet Identity Services" and create a new application profile. Here are the relevant parameters for the new application profile (note that we're not registering the social user in OAM with the configuration below, however that is a key feature as well):
    Name: The application name. This must match the name of the mobile application profile created for your application under Mobile Services. We used InventoryApp for this example.
    SharedSecret: Enter a password here. This does not need to match any existing password. It is used as an encryption key between the client and the OAM Mobile and Social service.
    Mobile Application Return URL: After the Relying Party (social) login, the OAM Mobile & Social service will redirect to the iOS application using this URI. This is defined under Info->URL Types; we used 'osa', so we define this here as 'osa://'.
    Login Type: Choose to allow only internet identity authentication for this exercise.
    Authentication Service Endpoint: Make sure that /internetidentityauthentication is selected.
    Log in to http://developers.facebook.com using your Facebook account, click on Apps and register the app as InventoryApp. Note that the consumer key and API secret are generated automatically by the Facebook OAuth server. Navigate back to OAM and under Mobile and Social, click on "Internet Identity Services" and edit the Facebook OAuth Provider. Add the consumer key and API secret from the Facebook developers site to the Facebook OAuth Provider. Navigate to Mobile Services. Click on New to create a new service domain. In this example we call the domain "AvitekDomainRP". The type should be 'Mobile Application' and the application credential type 'User Token'. Add the application "InventoryApp" to the domain. Advance to the next page of the wizard. Select the default service profiles, but ensure that the Authentication Service is set to 'InternetIdentityAuthentication'. Finish the creation of the service domain.
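
    On the client side, the URL scheme mentioned above (osa:// for AvitekInventory) is registered in the app's Info.plist behind Xcode's Info->URL Types UI. A hedged sketch of what that entry typically looks like follows; only the scheme value and bundle identifier come from the post, the rest is standard plist structure.

        <!-- Hedged sketch of the Info.plist fragment that registers the osa://
             scheme so the app regains control when OAM redirects back to it. -->
        <key>CFBundleURLTypes</key>
        <array>
            <dict>
                <key>CFBundleURLName</key>
                <string>com.us.oracle.AvitekInventory</string>
                <key>CFBundleURLSchemes</key>
                <array>
                    <string>osa</string>
                </array>
            </dict>
        </array>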

    Read the article

  • What packages can I use to roll out a complete store with customer service?

    - by acidzombie24
    I haven't bought the server yet (possibly a VPS), but I am thinking about using Linux with Apache and Mono for ASP.NET support. I don't know much about this. What packages can I use together to have a store with customer support? What I'd like is:
    1) A store to purchase one item (it's digital). More may be possible, but they are likely to be add-ons which need the first item.
    2) Have the store send messages to my app, which will generate a registration key and deliver the digital item.
    3) Create an account for that customer on a support site used for tickets.
    4) A forum. I'd like a private forum for customers and may want their accounts to be disabled when their product license has expired.
    5) A mailing list. I'd like non-customers to be able to subscribe to a list, and I'd like to know if any customers are on it so I can send different emails to each if desired.
    Are there packages that make any of these easy? I don't mind writing glue code if I need to, but I haven't tried any stores, mailing lists or ticket systems, though I did install a forum once long ago. My mail server will likely be through Google Apps.

    Read the article

  • Having tried different ways but none worked - How do I disable a service from auto-start at boot in Ubuntu?

    - by Howard Guo
    This really doesn't make sense. I've been using many other distros and never had such difficulty managing autostart services. I found three ways of disabling autostart services, and none of them works for me:
    update-rc.d -f service_name remove
    chkconfig --level 12345 service_name off
    sysv-rc-conf
    I tried all three ways to disable the mysql daemon, mongo daemon, redis server and cups daemon, and all of the utilities confirmed that the daemons are disabled, yet they still automatically start on boot. Please suggest the most correct way to disable services from auto-start at boot. Thank you! BTW, it's running 12.04.
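
    One detail worth noting for 12.04: mysql and several of the other daemons listed are started by Upstart rather than by the SysV links that update-rc.d, chkconfig and sysv-rc-conf manage, which would explain why those tools report success without changing anything. A hedged sketch of the Upstart override mechanism (the service names are examples, check /etc/init for the real job names):

        # Upstart jobs live in /etc/init/*.conf; an .override file containing
        # "manual" keeps the job installed but stops it starting at boot.
        echo "manual" | sudo tee /etc/init/mysql.override
        echo "manual" | sudo tee /etc/init/mongodb.override

        # Check which mechanism actually owns a given service:
        ls /etc/init/mysql.conf /etc/init.d/mysql 2>/dev/null

        # Starting and stopping on demand still works:
        sudo service mysql stop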

    Read the article

  • Combining Shared Secret and Certificates

    - by Michael Stephenson
    As discussed in the introduction article, this walkthrough explains how you can implement WCF security with the Windows Azure Service Bus so that you can protect your endpoint in the cloud with a shared secret, but also combine this with certificates so that you can identify the sender of the message.

    Prerequisites
    As in the previous article, before going into the walkthrough I want to explain a few assumptions about the scenario we are implementing, but to keep the article shorter I am not going to walk through all of the steps of setting some of this up. In the solution we have a simple console application which will represent the client application. There is also the services WCF application which contains the WCF service we will expose via the Windows Azure Service Bus. The WCF Service application in this example was hosted in IIS 7 on Windows 2008 R2 with AppFabric Server installed and configured to auto-start the WCF listening services. I am not going to go through significant detail around the IIS setup because it should not matter in relation to this article; however, if you want to understand more about how to configure WCF and IIS for such a scenario, please refer to the following paper, which goes into a lot of detail about how to configure this. The link is: http://tinyurl.com/8s5nwrz

    Setting up the Certificates
    To keep the post and sample simple I am going to use the local computer store for all certificates, but this bit is really just the same as setting up certificates for an example where you are using WCF without the Windows Azure Service Bus. In the sample I have included two batch files which you can use to create the sample certificates or remove them. Basically you will end up with:
    - A certificate called PocServerCert in the personal store for the local computer, which will be used by the WCF Service component
    - A certificate called PocClientCert in the personal store for the local computer, which will be used by the client application
    - A root certificate in the Root store called PocRootCA with its associated revocation list, which is the root from which the client and server certificates were created
    For the sample I'm just using development certificates like you would normally, and you can see exactly how these are configured and placed in the stores from the batch files in the solution using makecert and certmgr.

    The Service Component
    To begin with, let's look at the service component and how it can be configured to listen to the service bus using a shared secret but also accept a username token from the client. In the sample the service component is called Acme.Azure.ServiceBus.Poc.Cert.Services. It has a single service, based on the Visual Studio template you get when you add a new WCF Service Application, so we have a service called Service1 with its Echo method. Nothing special so far! The next step is to look at the web.config file to see how we have configured the WCF service. In the services section of the WCF configuration you can see I have created my service and a local endpoint, which I simply used for a little bit of diagnostics and to check it was working, but more importantly there is the Windows Azure endpoint, which is using the ws2007HttpRelayBinding (note that this should also work just the same if you're using netTcpRelayBinding). The key points to note in the picture above are the service behavior called MyServiceBehaviour and the service bus endpoint's behavior called MyEndpointBehaviour. We will go into these in more detail later.

    The Relay Binding
    The relay binding for the service has been configured to use the TransportWithMessageCredential security mode. This is the important bit: the transport security relates to the interaction between the service and the Azure Service Bus it listens to, and the message credential is where we use our certificate, as specified in the message/clientCredentialType attribute. Note also that we have left the relayClientAuthenticationType set to RelayAccessToken. This means that authentication will be made against ACS for accessing the service bus, and messages will not be accepted from any sender who has not been authenticated by ACS.

    The Endpoint Behaviour
    In the picture below you can see the endpoint behavior, which is configured to use the shared secret client credential for accessing the service bus; for diagnostic purposes I have also included the service registry element. If you are familiar with the Windows Azure Service Bus relay feature, the above should look very familiar, and this is a very common setup for this section. There is nothing specific to the username token implementation here.

    The Service Behaviour
    Now we come to the bit with most of the certificate stuff in it. In the service behavior I have included the serviceCredentials element, set it up to use the clientCertificate check, and also specified the serviceCertificate with information on how to find the server's certificate in the store. I have also added a serviceAuthorization section where I implement my own authorization component to perform additional security checks after the service has validated that the message was signed with a good certificate. I also have the same serviceSecurityAudit configuration to log access to my service.

    My Authorization Manager
    The picture below shows the implementation of my authorization manager. WCF will eventually hand off the message to my authorization component before it calls the service code. This is where I can perform some logic to check whether the identity is allowed to access resources. In this case I am simply rejecting messages from anyone except the PocClientCertificate.

    The Client
    Now let's take a look at the client side of this solution and how we can configure the client to authenticate against ACS but also send a certificate over to the service component so it can implement additional security checks on-premise. I have a console application, and in the program class I want to use the proxy generated with Add Service Reference to send a message via the Azure Service Bus. You can see in my WCF client configuration below that I have set up my details for the Azure Service Bus URL and am using the ws2007HttpRelayBinding. Next is my configuration for the relay binding. You can see below that I have configured security to use TransportWithMessageCredential, so we will flow the token from a certificate with the message, together with the RelayAccessToken relayClientAuthenticationType, which means the component will validate against ACS before being allowed to access the relay endpoint to send a message. After the binding we need to configure the endpoint behavior as in the picture below. This contains the normal transportClientEndpointBehaviour to set up the ACS shared secret configuration, but we have also configured the clientCertificate to look for the PocClientCert. Finally, below we have the code of the client in the console application which will call the service bus. You can see that we have created our proxy and then called the WCF service in exactly the normal way, but the configuration will jump in and ensure that a token representing the client certificate is passed along.

    Conclusion
    As you can see from the above walkthrough, it is not too difficult to configure a service to use both a shared secret and a certificate-based token at the same time. This gives you the power and protection offered by the Access Control Service in the cloud, but also the ability to flow additional tokens to the on-premise component so that additional security features can be implemented.

    Sample
    The sample used in this post is available at the following location: https://s3.amazonaws.com/CSCBlogSamples/Acme.Azure.ServiceBus.Poc.Cert.zip
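
    The client code itself is only shown as an image in the original post. As a hedged sketch only, the console client boils down to an ordinary WCF call; the original uses an Add Service Reference proxy, while the sketch below uses a ChannelFactory against an assumed IService1 contract and an assumed endpoint configuration name ("relayEndpoint"), so treat those names as illustrations.

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IService1
        {
            [OperationContract]
            string Echo(string value);
        }

        class Program
        {
            static void Main()
            {
                // "relayEndpoint" refers to the ws2007HttpRelayBinding endpoint in
                // app.config; the shared-secret and clientCertificate behaviours
                // configured there are applied when the channel sends the message.
                var factory = new ChannelFactory<IService1>("relayEndpoint");
                IService1 proxy = factory.CreateChannel();

                Console.WriteLine(proxy.Echo("Hello via the Service Bus relay"));

                ((IClientChannel)proxy).Close();
                factory.Close();
            }
        }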

    Read the article

  • SSH from external network refused

    - by wulfsdad
    I've installed openssh-server on my home computer (running Lubuntu 12.04.1) in order to connect to it from school. This is how I've set up the sshd_config file:

        # Package generated configuration file
        # See the sshd_config(5) manpage for details
        # What ports, IPs and protocols we listen for
        #Port 22
        Port 2222
        # Use these options to restrict which interfaces/protocols sshd will bind to
        #ListenAddress ::
        #ListenAddress 0.0.0.0
        Protocol 2
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        HostKey /etc/ssh/ssh_host_ecdsa_key
        #Privilege Separation is turned on for security
        UsePrivilegeSeparation yes
        # Lifetime and size of ephemeral version 1 server key
        KeyRegenerationInterval 3600
        ServerKeyBits 768
        # Logging
        SyslogFacility AUTH
        #LogLevel INFO
        LogLevel VERBOSE
        # Authentication:
        LoginGraceTime 120
        PermitRootLogin no
        StrictModes yes
        RSAAuthentication yes
        PubkeyAuthentication yes
        #AuthorizedKeysFile %h/.ssh/authorized_keys
        # Don't read the user's ~/.rhosts and ~/.shosts files
        IgnoreRhosts yes
        # For this to work you will also need host keys in /etc/ssh_known_hosts
        RhostsRSAAuthentication no
        # similar for protocol version 2
        HostbasedAuthentication no
        # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
        #IgnoreUserKnownHosts yes
        # To enable empty passwords, change to yes (NOT RECOMMENDED)
        PermitEmptyPasswords no
        # Change to yes to enable challenge-response passwords (beware issues with
        # some PAM modules and threads)
        ChallengeResponseAuthentication no
        # Change to no to disable tunnelled clear text passwords
        #PasswordAuthentication yes
        # Kerberos options
        #KerberosAuthentication no
        #KerberosGetAFSToken no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes
        # GSSAPI options
        #GSSAPIAuthentication no
        #GSSAPICleanupCredentials yes
        X11Forwarding no
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        #UseLogin no
        #MaxStartups 10:30:60
        #Banner /etc/issue.net
        Banner /etc/sshbanner.net
        # Allow client to pass locale environment variables
        AcceptEnv LANG LC_*
        Subsystem sftp /usr/lib/openssh/sftp-server
        # Set this to 'yes' to enable PAM authentication, account processing,
        # and session processing. If this is enabled, PAM authentication will
        # be allowed through the ChallengeResponseAuthentication and
        # PasswordAuthentication. Depending on your PAM configuration,
        # PAM authentication via ChallengeResponseAuthentication may bypass
        # the setting of "PermitRootLogin without-password".
        # If you just want the PAM account and session checks to run without
        # PAM authentication, then enable this but set PasswordAuthentication
        # and ChallengeResponseAuthentication to 'no'.
        UsePAM yes
        #specify which accounts can use SSH
        AllowUsers onlyme

    I've also configured my router's port forwarding table to include:
    LAN Ports: 2222-2222
    Protocol: TCP
    LAN IP Address: the "IP Address" displayed by viewing "connection information" from the right-click menu of the system tray
    Remote Ports [optional]: n/a
    Remote IP Address [optional]: n/a
    I've tried various other configurations as well, using the primary and secondary DNS, and also with specifying remote ports 2222-2222. I've also tried with TCP/UDP (actually two rules, because my router requires separate rules for each protocol). With any router port forwarding configuration, I am able to log in with
    ssh -p 2222 -v localhost
    But when I try to log in from school using
    ssh -p 2222 onlyme@IP_ADDRESS
    I get a "No route to host" message. Same thing when I use the "Broadcast Address" or "Default Route/Primary DNS". When I use the "subnet mask", ssh just hangs. However, when I use the "secondary DNS" I receive a "Connection refused" message. :^( Someone please help me figure out how to make this work.
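
    One thing the question never pins down is which address is actually being dialled from school: all of the values mentioned come from the local connection-information dialog (LAN IP, broadcast, subnet mask, DNS servers) rather than the router's public WAN address. A hedged sketch of how to check and test that (ifconfig.me is one of several public what-is-my-IP services; PUBLIC_IP is a placeholder):

        # From the home machine: find the public address the router presents to
        # the internet (either command works).
        curl ifconfig.me
        dig +short myip.opendns.com @resolver1.opendns.com

        # From school: test whether the forwarded port is reachable there.
        nc -vz PUBLIC_IP 2222
        ssh -p 2222 onlyme@PUBLIC_IP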

    Read the article

  • Why does my .NET Windows service not start automatically sometimes?

    - by Tomek
    Hi all, I have modified a working Windows service that had always started correctly before. After adding the System.Management reference it now sometimes will not start automatically. I get the following error:
    Service cannot be started. System.Runtime.InteropServices.COMException (0x80010002): Call was canceled by the message filter. (Exception from HRESULT: 0x80010002 (RPC_E_CALL_CANCELED))
    I found another post here on SO with someone having the same issue: http://stackoverflow.com/questions/998883/why-wont-my-net-windows-service-start-automatically-after-a-reboot
    There, the proposed solution was to have the service start after the services it depends on have started. However, when I go to the Dependencies tab for my service, I see: Should I just use the workaround method of putting the thread to sleep, or is there a more proper way of getting this service to start correctly? Is this happening because .NET has not started before my service starts? Thanks, Tomek
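
    Since the error is an RPC call being cancelled at boot, one alternative to sleeping that is often suggested is declaring a service dependency in the installer, so the Service Control Manager only starts the service after, for example, WMI (Winmgmt) is up; System.Management ultimately talks to WMI. A hedged sketch only, assuming a standard ProjectInstaller class; the service name and the exact dependencies your service needs are assumptions to verify:

        using System.ComponentModel;
        using System.Configuration.Install;
        using System.ServiceProcess;

        [RunInstaller(true)]
        public class ProjectInstaller : Installer
        {
            public ProjectInstaller()
            {
                var processInstaller = new ServiceProcessInstaller
                {
                    Account = ServiceAccount.LocalSystem
                };

                var serviceInstaller = new ServiceInstaller
                {
                    ServiceName = "MyService",               // placeholder name
                    StartType = ServiceStartMode.Automatic,
                    // The SCM delays starting this service until these are running.
                    ServicesDependedOn = new[] { "Winmgmt", "RpcSs" }
                };

                Installers.Add(processInstaller);
                Installers.Add(serviceInstaller);
            }
        }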

    Read the article

  • What's the right way to start a node.js service?

    - by elliot42
    I'm running a node.js service (statsd) on CentOS 6. What's the proper way to daemonize and start such a service?
    Potential daemonizers - are daemonizers supposed to be language-specific or general?
    - forever (node-specific)
    - daemonize
    - nohup (presumably wrong)
    - start-stop-daemon (Debian-only? is this for daemonizing or for starting/stopping? what is the CentOS equivalent?)
    Should the app itself really know how to daemonize itself and then have a -d flag? (e.g. via node-daemonize2 or forever-monitor?)
    Service starters - should these be from the system/distro, or should they be from monitoring tools such as monit?
    - service? is really /etc/init.d on CentOS?
    - service? is really Upstart on Ubuntu?
    - monit?
    - daemontools?
    - runit?
    I'm unfortunately new to this - where can I read up on what is the most standard, classic, reliable way of doing this?
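
    For what it's worth, CentOS 6 ships Upstart alongside the classic /etc/init.d scripts, so one low-dependency option is a small Upstart job and letting the init system do the daemonizing and respawning. A hedged sketch only; the paths, user name and config file are assumptions:

        # /etc/init/statsd.conf  (hypothetical paths and user)
        description "statsd node.js service"

        start on runlevel [2345]
        stop on runlevel [016]

        # restart the process if it dies
        respawn

        # CentOS 6's Upstart has no setuid stanza, hence the su wrapper
        exec su -s /bin/sh -c 'exec node /opt/statsd/stats.js /opt/statsd/localConfig.js' statsd

    It would then be managed with "sudo initctl start statsd" and "sudo initctl stop statsd".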

    Read the article

  • HP ProCurve Port Mode Configuration Question Mark 2

    - by SvrGuy
    We have a ProCurve Switch 2810-48G (J9022A). We need to disable auto-negotiation on two ports and manually configure them as full-duplex GigE ports. From the web GUI (Configuration tab, Port Configuration sub-tab), I am only presented with the option to configure the port as Auto - 1000. I take this to mean: auto-negotiate the duplex, manually configure the speed to be GigE. From the CLI, when I try to set 1000-full I get the following error:
    Value 1000-full is not applicable to port 39 (or whatever port I try)
    The exact commands I have entered are:
    config
    interface 39
    speed-duplex 1000-full
    BTW: speed-duplex auto-1000 works (I also tried full-1000 and that did not work either). How do I manually configure the port so that it is set to use full duplex at 1000 Mb/s?

    Read the article

  • SPAN/Port mirroring on Linksys switch

    - by Bastien974
    Hi all, I'm trying to deploy a Snort box in my LAN. I have a Linksys SRW248G4 and am trying to configure port mirroring so that Snort can listen to everything on the network in promiscuous mode. So in ADMIN / Port Mirroring, I have 3 things:
    Source Port (e1, ... e48, g1 ... g4)
    Type (Rx, Tx, Both)
    Target (e1 ... e48, g1 ... g4)
    Last time I played with it, I killed all traffic on the switch and had to reboot it several times... so now I'm asking questions first: do I need to configure each Source Port (from 1 to 48) to forward to the single promiscuous port? 48 rules!? Is that correct? Thanks!
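
    Whatever mirroring layout the switch ends up with, it helps to verify that the sensor is actually receiving other hosts' traffic before blaming Snort. A hedged sketch (the interface name and the sensor's address are assumptions):

        # On the Snort box, listening on the interface plugged into the mirror port.
        # -n: don't resolve names; do NOT pass -p, which would disable promiscuous mode.
        sudo tcpdump -ni eth1 not host 192.168.1.50
        # 192.168.1.50 stands in for the sensor's own address; if frames exchanged
        # between *other* hosts show up, the mirror (SPAN) session is working.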

    Read the article

  • Mystery 0xc0000142 error on starting java from a service, as a different user.

    - by cpf
    This is a very convoluted setup, but effectively this is what goes down: the manager service (which I don't have control over), running as admin user X, starts my executable, which then starts Java as user Y using the standard C# StartInfo.Username/Password controls. Now, from a basic (not elevated or anything, just admin) command prompt I can run that executable, and Java pops up and works fine, running perfectly under the user it should be. When the service runs the same executable, however, Java silently fails. The only hint I see is this series of events in the event viewer:
    - Service starts
    - "Application popup: java.exe - Application Error: The application was unable to start correctly (0xc0000142). Click OK to close the application." (googling this reveals a lot of scam sites telling me to use their "free antivirus to fix 0xc0000142 errors easy!"... sigh)
    - Service stops (the java shutdown propagated, which is supposed to happen)
    And here's what Process Explorer has for the failure: as you can see, everything shows as a success. Now, I think this might have something to do with permissions (the user java.exe is running under has traverse permission for the entire drive and full permissions to Directory A, which is where the .jar is), but I just can't fathom how something that works fine from the command line (and this is an upgrade; the previous system, without the user-switching aspect, works fine from the service) can fail with such a cryptic message and so little showing up in the logs.
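
    For comparison, the start-as-user step described above boils down to something like the hedged sketch below. One setting that is frequently checked when 0xc0000142 appears only under a service is LoadUserProfile, since user Y's profile and environment are not loaded automatically in that context; this is a guess at a contributing factor rather than a diagnosis, and all paths and names are placeholders.

        using System.Diagnostics;
        using System.Security;

        class LauncherSketch
        {
            static void Main()
            {
                var password = new SecureString();
                foreach (var c in "placeholder-password") password.AppendChar(c);

                var startInfo = new ProcessStartInfo
                {
                    FileName = @"C:\path\to\java.exe",         // placeholder path
                    Arguments = @"-jar C:\DirectoryA\app.jar", // placeholder jar
                    UseShellExecute = false,     // required when supplying credentials
                    LoadUserProfile = true,      // user Y's profile is not loaded by default
                    WorkingDirectory = @"C:\DirectoryA",
                    UserName = "userY",
                    Password = password
                };

                Process.Start(startInfo);
            }
        }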

    Read the article

  • Is there a simple way to detect ISP port blocking?

    - by Will M
    Is there a way to tell the difference between my ISP blocking traffic on certain ports and my NAT router/firewall blocking that traffic? The sites “Shields Up” and “Can you see me” show my ports closed or not accessible, but I assume that is primarily due to the NAT router. (Obviously, I could just remove the router, connect directly and use those sites, but is there a simple way to test without doing that?)

    Read the article

  • Recommendation for a non-standard SSL port

    - by onurs
    Hey guys, on our server I have a single IP and need to host 2 different SSL sites. The sites have different owners, so they have different SSL certificates and can't share the same certificate with SAN. So as a last resort I have modified the web application to give it the ability to use a specified port for secure pages. For its simple look, I used port 200. However, I'm worried that some visitors may be unable to see the site because their firewalls / proxies block that port for SSL connections. I heard some people were unable to see the website (a home user and someone from an enterprise company); I don't know if this was the reason. So, any recommendations for a non-standard SSL port number (443 is used by the other site) which may work better for visitors than port 200? Like 8080 or 8443 perhaps? Thanks!

    Read the article

  • Can't Allow Specific Port in Windows Firewall Advanced Security - Windows 2008

    - by Jody
    In the Outbound Rules, I set up a rule to allow outbound connections on port 26. But it doesn't work. However, if I allow "all ports" for this rule, it works, but then all ports are allowed too. What is the reason? Is there a conflicting rule? I need to fix this as soon as possible. Edit to add: I'm trying to allow email access to a mail server outside (port 26). The thing is, even if I telnet using port 26, it will not work unless I allow "all ports". A specific port will not do.
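
    One detail that trips people up in the Advanced Security console is the local versus remote port distinction: for outbound mail traffic the mail server's port 26 is the remote port, while the local port is a random ephemeral one, so a rule scoped to local port 26 never matches. A hedged sketch of an equivalent rule scoped to the remote port (the rule name is arbitrary):

        netsh advfirewall firewall add rule name="SMTP alt port out" ^
            dir=out action=allow protocol=TCP remoteport=26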

    Read the article

  • Running PHPmyAdmin on Nginx, port 8080 passed to varnish not working well!

    - by amrnt
    I installed Nginx, Varnish and PHP-fpm. Then I installed PHPmyAdmin and made a virtual host for it:

        server {
            listen 8080;
            server_name phpmyadmin.Domain.com;
            access_log /var/log/phpmyadmin.access_log;
            error_log /var/log/phpmyadmin.error_log;

            location / {
                root /usr/share/phpmyadmin;
                index index.php;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
                include /opt/nginx/conf/fastcgi_params;
            }
        }

    When I go to phpmyadmin.Domain.com it works as expected! But after submitting username/password it redirects me to phpmyadmin.Domain.com:8080/index.php?... with a "page cannot be found" response as well! What could I do?
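
    A guess at what is happening: PHP sees the backend port (8080) and phpMyAdmin builds its post-login redirect from it. One tweak that is often tried in this kind of fronted setup is overriding the port the FastCGI backend is told about, inside the existing php location block; 80 here assumes Varnish answers on the default port, so treat it as an assumption.

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
            include /opt/nginx/conf/fastcgi_params;
            # Hedged addition: present the public-facing port to PHP so redirects
            # are built without :8080 (placed after the include so it takes effect).
            fastcgi_param SERVER_PORT 80;
        }

    Another knob commonly mentioned for phpMyAdmin of that era is setting $cfg['PmaAbsoluteUri'] in config.inc.php to the public URL.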

    Read the article

  • Postfix configuration w.r.t. port 25

    - by Monkey Boson
    After a considerable amount of research, I have configured my Postfix server to use Dovecot to accept SMTPS connections over port 465, and everything works swimmingly. Unfortunately, I forgot that unless I listen on port 25, I'm not going to receive any e-mail from the net. I'm hoping somebody knows off the top of their head how to open up port 25 on Postfix for anonymous users, but disallow relaying and any other bad things on that port, and to leave port 465 the way it is. As to my current configuration, I changed the master.cf file:

        smtps     inet  n       -       n       -       -       smtpd

    and the main.cf file:

        # Use our SSL certificates
        smtpd_tls_cert_file = .....cer
        smtpd_tls_key_file = .....key
        smtpd_tls_security_level = may
        # Use Dovecot for SASL authentication
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth
        broken_sasl_auth_clients = yes
        smtpd_recipient_restrictions = permit_sasl_authenticated, reject_unauth_destination

    Any help is appreciated!
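
    For what it's worth, the plain port 25 listener is normally just another smtpd line in master.cf, and the smtpd_recipient_restrictions already shown (ending in reject_unauth_destination) is what keeps that listener from being an open relay while still accepting mail addressed to your own domains. A hedged sketch of the master.cf side, keeping the existing smtps entry untouched:

        # master.cf -- hedged sketch: add (or re-enable) the standard port 25 listener
        smtp      inet  n       -       n       -       -       smtpd
        # existing SSL-wrapped listener on 465 stays as it is
        smtps     inet  n       -       n       -       -       smtpd

    After editing, "postfix reload" picks up the change.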

    Read the article

  • What is a descriptive name for a web service that responds with plain text?

    - by Rich Shealer
    I'm looking for the proper name for a web service that responds with plain text as opposed to XML. As an example, this "cardServer" service will return 5 cards for a poker hand:
    http://hostname:8080/cardServer/deal/Game=Poker&Qty=5
    The result could look like this:
    Card1=Ad
    Card2=Kc
    Card3=Ts
    Card4=5d
    Card5=3d
    The real-world example is not as trivial, but the concept is the same. Parameters, if any, are passed as POST variables. We currently parse the response into a string list and use the values. The process works just fine. What I'm wondering is whether this style of service has a name, whether a named style would come with easier tools for dealing with the responses, and therefore whether we reinvented the wheel. For the record, the service was provided and is maintained by a customer.

    Read the article

  • What is an Acer Converter Port?

    - by ShadowHero
    What is this "Acer Converter Port"? Very little information can be found on the internet. Is it basically the same as Mini DisplayPort? Can I connect a monitor like the Dell U2713H to it?
    EDIT: From this page: http://acer.custhelp.com/app/answers/detail/a_id/30837/~/frequently-asked-questions-on-the-aspire-r7-series
    "9. Can I connect a DisplayPort monitor to the Acer Converter Port? The Acer Converter Port uses the same physical port as a Mini DisplayPort, but is designed to connect to an Acer proprietary cable. If you connect a DisplayPort monitor, Acer cannot guarantee the functionality of the monitor. No damage should occur to either monitor or notebook by connecting a Mini DisplayPort cable."
    So, the question is, can someone confirm this will work for sure?

    Read the article

  • How do you launch an SSH connection with port forwarding without interrupting your screen access?

    - by vfclists
    I want to make an SSH connection to another server with port forwarding, but without having to log on to the remote server or interfere with the screen I am working on. I also need access to the connection so I can terminate it when I am finished with it. E.g. say I want to do a MySQL backup on a remote server, so I use the command
    ssh user@remote -L 1234:localhost:3306
    but after entering the password I want to run the mysql command in my session, and then be able to get back to the SSH connection and terminate it when I finish with mysql. Is there some way this can be done?
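
    A hedged sketch of one common pattern: let ssh background itself with only the forwarding (no remote shell), use the tunnel, then kill that one process when done. The database name, user and pkill pattern are placeholders.

        # -f: go to background after authentication; -N: no remote command, just the forward
        ssh -f -N -L 1234:localhost:3306 user@remote

        # use the tunnel from the current shell / screen window
        mysqldump -h 127.0.0.1 -P 1234 -u backupuser -p somedb > somedb.sql

        # find and terminate that specific forwarding process when finished
        pkill -f 'ssh -f -N -L 1234:localhost:3306'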

    Read the article

  • Printer demands specific USB port

    - by snitzr
    My HP laptop has 3 USB ports. When I add my printer driver by plugging into one of those ports, the driver installs fine.
    Problem: I have to install 3 identical drivers, one for each USB port. Depending on which USB port the printer is plugged into, I have to pick the driver for that port.
    Question: How do I print to any of the three ports automatically, based on which port the printer is in?
    Further info: In Devices and Printers > Printer Properties > Ports... I cannot select more than one port for a printer driver to print to.

    Read the article

  • Why does my router log crazy amounts of blocked traffic on port 1701?

    - by Vlad Seghete
    I have a 2701HGV-B 2Wire modem and router (AT&T). The log is basically full of entries similar to the following, with between a fifth and a third of a second between entries:
    src=86.156.7.170 dst=xxx.xxx.xxx.38 ipprot=17 sport=6882 dport=1701 Unknown inbound session stopped
    src=58.176.22.252 dst=xxx.xxx.xxx.38 ipprot=17 sport=21573 dport=1701 Unknown inbound session stopped
    src=91.221.6.250 dst=xxx.xxx.xxx.38 ipprot=17 sport=25902 dport=1701 Unknown inbound session stopped
    ...
    where the source IP is different for every entry. The entries accumulate constantly; every single second the router is on, several of them appear in the log. The destination is the WAN address of my router. I understand that this is somehow related to VNCs, but I don't know enough to figure out why my router is getting bombarded with requests for a VNC session. Is there anything fishy going on, or is this normal? If it is normal, how do I keep these entries from spamming my log files? Since there are about two or three of them every second, everything else gets drowned out.

    Read the article
