Search Results

Search found 12283 results on 492 pages for 'tcp port'.


  • SSH very slow when connecting from external [closed]

    - by wnstnsmth
    Possible Duplicate: ssh delay when connecting We have a CentOS server that we use for internal testing purposes, which has sshd enabled. When I (as a developer) am at the company, I use ssh [email protected] to connect to it - and it works flawlessly. Now, in order to work from home, accessing the server via the company's static IP, we set up another port for ssh, 2020. So I execute ssh -p 2020 [email protected] and am immediately prompted for a password. After entering the password, it takes up to 30 seconds until I can access the server. The same goes for SFTP (i.e. uploading files takes about 30 seconds until the transfer begins). As you can imagine, if you have to regularly upload files to a webserver via SFTP, this is very tedious. So I looked at similar questions and edited the sshd_config file on the server, setting UseDNS to "no" and GSSAPIAuthentication to "no" (the latter also in ssh_config on the client) - it did not work. Please have a look at the -vvv output when externally accessing the server: ssh -p 2020 -vvv [email protected] PasteBin: ssh What could it be? Do you need more info?
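
    One quick check (a suggestion of mine, not from the post): since UseDNS was only changed on the server side, it is worth confirming whether a reverse DNS lookup of the home connection's public IP is what stalls for about 30 seconds. A minimal Python sketch to run on the CentOS server, with the IP replaced by the real external address:

      # Hypothetical diagnostic: time a reverse (PTR) lookup of the connecting
      # client's public IP. If this hangs for tens of seconds, the delay is in
      # name resolution rather than in sshd or SFTP itself.
      import socket
      import time

      client_ip = "203.0.113.45"  # placeholder for the home connection's external IP

      start = time.time()
      try:
          print("resolved to", socket.gethostbyaddr(client_ip)[0])
      except socket.herror as exc:
          print("no PTR record:", exc)
      print("reverse lookup took %.1f seconds" % (time.time() - start))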

    Read the article

  • Trouble Starting MySQL Community Server on Windows 7

    - by CodeAngel
    I have installed Netbeans 7 on my Windows 7 machine. In addition, MySQL Community Server 5.6.12 is installed with the MSI installer on the same Windows 7 PC. The MySQL server is integrated with the Netbeans IDE. However, it is not possible to start or stop the MySQL server from the command prompt or the Netbeans IDE. I am only able to start or stop the server from the Windows 7 services tool. Also, it is difficult to run SQL queries from the Netbeans IDE even though it shows there is a connection to the MySQL server. I have added the my.ini file to the installation directory of the MySQL server, that is: C:\Program Files\MySQL\MySQL Server 5.6 Below is the my.ini file: # For advice on how to change settings please see # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the # *** default location during install, and will be replaced if you # *** upgrade to a newer version of MySQL. [mysqld] # Remove leading # and set to the amount of RAM for the most important data # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%. # innodb_buffer_pool_size = 128M # Remove leading # to turn on a very important data integrity option: logging # changes to the binary log between backups. # log_bin # These are commonly set, remove the # and set as required. # basedir = ..... # datadir = ..... port = 3306 # server_id = ..... # Remove leading # to set options mainly useful for reporting servers. # The server defaults are faster for transactions and fast SELECTs. # Adjust sizes as needed, experiment to find the optimal values. # join_buffer_size = 128M # sort_buffer_size = 2M # read_rnd_buffer_size = 2M sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES Any suggestion is welcome.
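
    A small sanity check (my suggestion, not something from the post): confirm from the same machine that whatever instance Windows started is actually listening on the port = 3306 set in my.ini. If the service shows as running but nothing answers on 3306, the service is probably reading a different my.ini than the one placed in C:\Program Files\MySQL\MySQL Server 5.6. A minimal Python sketch:

      # Hypothetical check: is anything accepting TCP connections on 3306?
      import socket

      try:
          with socket.create_connection(("127.0.0.1", 3306), timeout=3):
              print("a server is listening on 3306")
      except OSError as exc:
          print("nothing reachable on 3306:", exc)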

    Read the article

  • Seeking faster access/transfer times for accounting application

    - by Markaway
    Our accounting software, Sage 50, has been getting slower to open on the workstations and slower at reading the company file. The company file only contains 2 years' worth of transactions, and we just cleared out 2011, so the file size has gotten a lot smaller. There are 10 users, 6 of which are on it all day; 4 are on and off throughout the day. Our network is entirely GbE and the switches are set to prioritize traffic on that port number. Watching network traffic, we barely use 40% of the network capability on the workstation, so I don't think that is our bottleneck. Our server contains two older 150GB Raptors, SATA 2 (3 Gb/s), in RAID 1. We were considering switching to SSDs, but a lot of what I read says to stay away from MLC drives, especially in a production environment, and definitely to avoid putting them in a RAID config. So would upgrading to newer Raptors with SATA 3 (6 Gb/s) offer noticeable benefits? What other options are out there that aren't so expensive? Trying to keep it to 200-300 per drive. We need at least 150GB, but going to 250-300GB would be better as it gives us more room to grow. We have about 30% space remaining on what we have now.
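
    Before buying drives, it may be worth measuring whether the disks are actually the limiting factor. A rough sketch (the path is a placeholder, and the number will be optimistic if the file is already in the OS cache), run on the server against the company file:

      # Hypothetical benchmark: time a sequential read of the company file and
      # report MB/s, to compare against what the current Raptors can deliver.
      import time

      path = r"D:\SageData\company_file.dat"  # placeholder path
      chunk_size = 4 * 1024 * 1024
      total = 0
      start = time.time()
      with open(path, "rb") as f:
          while True:
              chunk = f.read(chunk_size)
              if not chunk:
                  break
              total += len(chunk)
      elapsed = max(time.time() - start, 1e-9)
      print("read %.1f MB in %.1f s (%.1f MB/s)" % (total / 1e6, elapsed, total / 1e6 / elapsed))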

    Read the article

  • Would a USB hub work in reverse?

    - by Tim
    Imagine for a moment a 4-port USB hub. Normally how this would work is the hub has one plug that goes to the computer, then 4 ports that you can plug other things into (thumb drive, keyboard, mouse, etc.). I am wondering if I can use it in reverse. So I would have 1 keyboard going into the hub, and then plug male-to-male USB cables from the 4 ports into 4 different PCs; my aim is that when a key is pressed on the keyboard, all 4 PCs will receive it as if the keyboard were plugged into them. Does anyone know if this would work? And if not, does anyone have any ideas how I could get the same effect? EDIT: So I am looking for more of a KVM switch type device rather than a USB hub. However, all of the KVM switches I've found use some sort of mechanism to select which computer you'll be using (some are physical switches/buttons, others do it via software "automatically" somehow). However, I need to have 1 keyboard hooked up to 2 computers, and when I press a key on the keyboard I want the keypress to be sent to both computers simultaneously, not to one or the other. Does anyone know if KVMs with this feature exist?
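
    For what it's worth, the "send to all machines at once" behaviour is usually done in software rather than with a hub (tools in the Synergy family work roughly this way). A minimal sketch of the idea, with placeholder hostnames: each PC runs a small listener agent, and the sender fans every line of input out to all of them. Real tools hook the keyboard driver instead of reading stdin, but the fan-out logic is the same.

      # Hypothetical sender: broadcast whatever is typed on stdin to listener
      # agents on each target PC over TCP.
      import socket
      import sys

      TARGETS = [("pc1.local", 9100), ("pc2.local", 9100)]  # placeholder addresses

      conns = [socket.create_connection(addr, timeout=5) for addr in TARGETS]
      for line in sys.stdin:
          data = line.encode("utf-8")
          for conn in conns:
              conn.sendall(data)  # every PC receives the same input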

    Read the article

  • How and where to implement basic authentication in Kibana 3

    - by Jabb
    I have put my elasticsearch server behind a Apache reverse proxy that provides basic authentication. Authenticating to Apache directly from the browser works fine. However, when I use Kibana 3 to access the server, I receive authentication errors. Obviously because no auth headers are sent along with Kibana's Ajax calls. I added the below to elastic-angular-client.js in the Kibana vendor directory to implement authentication quick and dirty. But for some reason it does not work. $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); What is the best approach and place to implement basic authentication in Kibana? /*! elastic.js - v1.1.1 - 2013-05-24 * https://github.com/fullscale/elastic.js * Copyright (c) 2013 FullScale Labs, LLC; Licensed MIT */ /*jshint browser:true */ /*global angular:true */ 'use strict'; /* Angular.js service wrapping the elastic.js API. This module can simply be injected into your angular controllers. */ angular.module('elasticjs.service', []) .factory('ejsResource', ['$http', function ($http) { return function (config) { var // use existing ejs object if it exists ejs = window.ejs || {}, /* results are returned as a promise */ promiseThen = function (httpPromise, successcb, errorcb) { return httpPromise.then(function (response) { (successcb || angular.noop)(response.data); return response.data; }, function (response) { (errorcb || angular.noop)(response.data); return response.data; }); }; // check if we have a config object // if not, we have the server url so // we convert it to a config object if (config !== Object(config)) { config = {server: config}; } // set url to empty string if it was not specified if (config.server == null) { config.server = ''; } /* implement the elastic.js client interface for angular */ ejs.client = { server: function (s) { if (s == null) { return config.server; } config.server = s; return this; }, post: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); console.log($http.defaults.headers); path = config.server + path; var reqConfig = {url: path, data: data, method: 'POST'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, get: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; // no body on get request, data will be request params var reqConfig = {url: path, params: data, method: 'GET'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, put: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; var reqConfig = {url: path, data: data, method: 'PUT'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, del: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; var reqConfig = {url: path, data: data, method: 'DELETE'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, head: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; // no body on HEAD request, data will be request params var reqConfig = {url: path, params: data, method: 'HEAD'}; return 
$http(angular.extend(reqConfig, config)) .then(function (response) { (successcb || angular.noop)(response.headers()); return response.headers(); }, function (response) { (errorcb || angular.noop)(undefined); return undefined; }); } }; return ejs; }; }]); UPDATE 1: I implemented Matts suggestion. However, the server returns a weird response. It seems that the authorization header is not working. Could it have to do with the fact, that I am running Kibana on port 81 and elasticsearch on 8181? OPTIONS /solar_vendor/_search HTTP/1.1 Host: 46.252.46.173:8181 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Origin: http://46.252.46.173:81 Access-Control-Request-Method: POST Access-Control-Request-Headers: authorization,content-type Connection: keep-alive Pragma: no-cache Cache-Control: no-cache This is the response HTTP/1.1 401 Authorization Required Date: Fri, 08 Nov 2013 23:47:02 GMT WWW-Authenticate: Basic realm="Username/Password" Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 346 Connection: close Content-Type: text/html; charset=iso-8859-1 UPDATE 2: Updated all instances with the modified headers in these Kibana files root@localhost:/var/www/kibana# grep -r 'ejsResource(' . ./src/app/controllers/dash.js: $scope.ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/querySrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/filterSrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/dashboard.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); And modified my vhost conf for the reverse proxy like this <VirtualHost *:8181> ProxyRequests Off ProxyPass / http://127.0.0.1:9200/ ProxyPassReverse / https://127.0.0.1:9200/ <Location /> Order deny,allow Allow from all AuthType Basic AuthName “Username/Password” AuthUserFile /var/www/cake2.2.4/.htpasswd Require valid-user Header always set Access-Control-Allow-Methods "GET, POST, DELETE, OPTIONS, PUT" Header always set Access-Control-Allow-Headers "Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization" Header always set Access-Control-Allow-Credentials "true" Header always set Cache-Control "max-age=0" Header always set Access-Control-Allow-Origin * </Location> ErrorLog ${APACHE_LOG_DIR}/error.log </VirtualHost> Apache sends back the new response headers but the request header still seems to be wrong somewhere. Authentication just doesn't work. 
Request Headers OPTIONS /solar_vendor/_search HTTP/1.1 Host: 46.252.26.173:8181 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Origin: http://46.252.26.173:81 Access-Control-Request-Method: POST Access-Control-Request-Headers: authorization,content-type Connection: keep-alive Pragma: no-cache Cache-Control: no-cache Response Headers HTTP/1.1 401 Authorization Required Date: Sat, 09 Nov 2013 08:48:48 GMT Access-Control-Allow-Methods: GET, POST, DELETE, OPTIONS, PUT Access-Control-Allow-Headers: Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization Access-Control-Allow-Credentials: true Cache-Control: max-age=0 Access-Control-Allow-Origin: * WWW-Authenticate: Basic realm="Username/Password" Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 346 Connection: close Content-Type: text/html; charset=iso-8859-1 SOLUTION: After doing some more research, I found out that this is definitely a configuration issue with regard to CORS. There are quite a few posts available regarding that topic but it appears that in order to solve my problem, it would be necessary to to make some very granular configurations on apache and also make sure that the right stuff is sent from the browser. So I reconsidered the strategy and found a much simpler solution. Just modify the vhost reverse proxy config to move the elastisearch server AND kibana on the same http port. This also adds even better security to Kibana. This is what I did: <VirtualHost *:8181> ProxyRequests Off ProxyPass /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/ ProxyPassReverse /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/ ProxyPass / http://127.0.0.1:9200/ ProxyPassReverse / https://127.0.0.1:9200/ <Location /> Order deny,allow Allow from all AuthType Basic AuthName “Username/Password” AuthUserFile /var/www/.htpasswd Require valid-user </Location> ErrorLog ${APACHE_LOG_DIR}/error.log </VirtualHost>
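
    A note on why the captures above look the way they do (my reading, not from the original post): the browser's CORS preflight is an OPTIONS request sent without the Authorization header, so an Apache location that requires auth on every method returns 401 before the CORS response headers ever matter, which is exactly what both traces show. The behaviour can be reproduced outside the browser with a short python-requests sketch; the URL and credentials are placeholders taken from the post:

      # Hypothetical reproduction of the browser's preflight vs. authenticated call.
      import requests

      url = "http://46.252.46.173:8181/solar_vendor/_search"  # placeholder from the post

      # Preflight: the browser never attaches credentials here.
      preflight = requests.options(url, headers={
          "Origin": "http://46.252.46.173:81",
          "Access-Control-Request-Method": "POST",
          "Access-Control-Request-Headers": "authorization,content-type",
      })
      print("preflight status:", preflight.status_code)  # 401 while OPTIONS requires auth

      # The actual request is fine once credentials are supplied.
      authed = requests.post(url, json={"query": {"match_all": {}}},
                             auth=("user", "Password"))
      print("authenticated status:", authed.status_code)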

    Read the article

  • How to consume PHP SOAP service using WCF

    - by mr.b
    I am new in web services so apologize me if I am making some cardinal mistake here, hehe. I have built SOAP service using PHP. Service is SOAP 1.2 compatible, and I have WSDL available. I have enabled sessions, so that I can track login status, etc. I don't need some super security here (ie message-level security), all I need is transport security (HTTPS), since this service will be used infrequently, and performances are not so much of an issue. I am having difficulties making it to work at all. C# throws some unclear exception ("Server returned an invalid SOAP Fault. Please see InnerException for more details.", which in turn says "Unbound prefix used in qualified name 'rpc:ProcedureNotPresent'."), but consuming service using PHP SOAP client behaves as expected (including session and all). So far, I have following code. note: due to amount of real code, I am posting minimal code configuration PHP SOAP server (using Zend Soap Server library), including class(es) exposed via service: <?php class Verification_LiteralDocumentProxy { protected $instance; public function __call($methodName, $args) { if ($this->instance === null) { $this->instance = new Verification(); } $result = call_user_func_array(array($this->instance, $methodName), $args[0]); return array($methodName.'Result' => $result); } } class Verification { private $guid = ''; private $hwid = ''; /** * Initialize connection * * @param string GUID * @param string HWID * @return bool */ public function Initialize($guid, $hwid) { $this->guid = $guid; $this->hwid = $hwid; return true; } /** * Closes session * * @return void */ public function Close() { // if session is working, $this->hwid and $this->guid // should contain non-empty values } } // start up session stuff $sess = Session::instance(); require_once 'Zend/Soap/Server.php'; $server = new Zend_Soap_Server('https://www.somesite.com/api?wsdl'); $server->setClass('Verification_LiteralDocumentProxy'); $server->setPersistence(SOAP_PERSISTENCE_SESSION); $server->handle(); WSDL: <definitions xmlns="http://schemas.xmlsoap.org/wsdl/" xmlns:tns="https://www.somesite.com/api" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap-enc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" name="Verification" targetNamespace="https://www.somesite.com/api"> <types> <xsd:schema targetNamespace="https://www.somesite.com/api"> <xsd:element name="Initialize"> <xsd:complexType> <xsd:sequence> <xsd:element name="guid" type="xsd:string"/> <xsd:element name="hwid" type="xsd:string"/> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="InitializeResponse"> <xsd:complexType> <xsd:sequence> <xsd:element name="InitializeResult" type="xsd:boolean"/> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="Close"> <xsd:complexType/> </xsd:element> </xsd:schema> </types> <portType name="VerificationPort"> <operation name="Initialize"> <documentation> Initializes connection with server</documentation> <input message="tns:InitializeIn"/> <output message="tns:InitializeOut"/> </operation> <operation name="Close"> <documentation> Closes session between client and server</documentation> <input message="tns:CloseIn"/> </operation> </portType> <binding name="VerificationBinding" type="tns:VerificationPort"> <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/> <operation name="Initialize"> <soap:operation soapAction="https://www.somesite.com/api#Initialize"/> <input> 
<soap:body use="literal"/> </input> <output> <soap:body use="literal"/> </output> </operation> <operation name="Close"> <soap:operation soapAction="https://www.somesite.com/api#Close"/> <input> <soap:body use="literal"/> </input> <output> <soap:body use="literal"/> </output> </operation> </binding> <service name="VerificationService"> <port name="VerificationPort" binding="tns:VerificationBinding"> <soap:address location="https://www.somesite.com/api"/> </port> </service> <message name="InitializeIn"> <part name="parameters" element="tns:Initialize"/> </message> <message name="InitializeOut"> <part name="parameters" element="tns:InitializeResponse"/> </message> <message name="CloseIn"> <part name="parameters" element="tns:Close"/> </message> </definitions> And finally, WCF C# consumer code: [ServiceContract(SessionMode = SessionMode.Required)] public interface IVerification { [OperationContract(Action = "Initialize", IsInitiating = true)] bool Initialize(string guid, string hwid); [OperationContract(Action = "Close", IsInitiating = false, IsTerminating = true)] void Close(); } class Program { static void Main(string[] args) { WSHttpBinding whb = new WSHttpBinding(SecurityMode.Transport, true); ChannelFactory<IVerification> cf = new ChannelFactory<IVerification>( whb, "https://www.somesite.com/api"); IVerification client = cf.CreateChannel(); Console.WriteLine(client.Initialize("123451515", "15498518").ToString()); client.Close(); } } Any ideas? What am I doing wrong here? Thanks!
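
    When a document/literal service misbehaves with one client, it can help to rule the service itself in or out with a second, independent client. Below is a sketch using the Python zeep library (my choice, not part of the question; the WSDL URL is the placeholder address from the post). If this succeeds while WCF still faults, the problem is on the WCF binding/contract side; if it fails the same way, the fault is in the PHP service or the WSDL.

      # Hypothetical cross-check of the PHP SOAP service with a second client.
      from zeep import Client

      client = Client("https://www.somesite.com/api?wsdl")  # placeholder URL
      result = client.service.Initialize(guid="123451515", hwid="15498518")
      print(result)
      client.service.Close()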

    Read the article

  • Unable to get HTTPS MEX endpoint to work

    - by Rahul
    I have been trying to configure WCF to work with Azure ACS. This WCF configuration has 2 bugs: It does not publish MEX end point. It does not invoke custom behaviour extension. (It just stopped doing that after I made some changes which I can't remember) What could be possibly wrong here? <configuration> <configSections> <section name="microsoft.identityModel" type="Microsoft.IdentityModel.Configuration.MicrosoftIdentityModelSection, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> </configSections> <location path="FederationMetadata"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> <system.web> <compilation debug="true" targetFramework="4.0"> <assemblies> <add assembly="Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </assemblies> </compilation> </system.web> <system.serviceModel> <services> <service name="production" behaviorConfiguration="AccessServiceBehavior"> <endpoint contract="IMetadataExchange" binding="mexHttpsBinding" address="mex" /> <endpoint address="" binding="customBinding" contract="Samples.RoleBasedAccessControl.Service.IService1" bindingConfiguration="serviceBinding" /> </service> </services> <behaviors> <serviceBehaviors> <behavior name="AccessServiceBehavior"> <federatedServiceHostConfiguration /> <sessionExtension/> <useRequestHeadersForMetadataAddress> <defaultPorts> <add scheme="http" port="8000" /> <add scheme="https" port="8443" /> </defaultPorts> </useRequestHeadersForMetadataAddress> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpsGetEnabled="true" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true" /> <serviceCredentials> <!--Certificate added by FedUtil. 
Subject='CN=DefaultApplicationCertificate', Issuer='CN=DefaultApplicationCertificate'.--> <serviceCertificate findValue="XXXXXXXXXXXXXXX" storeLocation="LocalMachine" storeName="My" x509FindType="FindByThumbprint" /> </serviceCredentials> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> <extensions> <behaviorExtensions> <add name="sessionExtension" type="Samples.RoleBasedAccessControl.Service.RsaSessionServiceBehaviorExtension, Samples.RoleBasedAccessControl.Service, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> <add name="federatedServiceHostConfiguration" type="Microsoft.IdentityModel.Configuration.ConfigureServiceHostBehaviorExtensionElement, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> </behaviorExtensions> </extensions> <protocolMapping> <add scheme="http" binding="customBinding" bindingConfiguration="serviceBinding" /> <add scheme="https" binding="customBinding" bindingConfiguration="serviceBinding"/> </protocolMapping> <bindings> <customBinding> <binding name="serviceBinding"> <security authenticationMode="SecureConversation" messageSecurityVersion="WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12BasicSecurityProfile10" requireSecurityContextCancellation="false"> <secureConversationBootstrap authenticationMode="IssuedTokenOverTransport" messageSecurityVersion="WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12BasicSecurityProfile10"> <issuedTokenParameters> <additionalRequestParameters> <AppliesTo xmlns="http://schemas.xmlsoap.org/ws/2004/09/policy"> <EndpointReference xmlns="http://www.w3.org/2005/08/addressing"> <Address>https://127.0.0.1:81/</Address> </EndpointReference> </AppliesTo> </additionalRequestParameters> <claimTypeRequirements> <add claimType="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name" isOptional="true" /> <add claimType="http://schemas.microsoft.com/ws/2008/06/identity/claims/role" isOptional="true" /> <add claimType="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" isOptional="true" /> <add claimType="http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider" isOptional="true" /> </claimTypeRequirements> <issuerMetadata address="https://XXXXYYYY.accesscontrol.windows.net/v2/wstrust/mex" /> </issuedTokenParameters> </secureConversationBootstrap> </security> <httpsTransport /> </binding> </customBinding> </bindings> </system.serviceModel> <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> </system.webServer> <microsoft.identityModel> <service> <audienceUris> <add value="http://127.0.0.1:81/" /> </audienceUris> <issuerNameRegistry type="Microsoft.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"> <trustedIssuers> <add thumbprint="THUMBPRINT HERE" name="https://XXXYYYY.accesscontrol.windows.net/" /> </trustedIssuers> </issuerNameRegistry> <certificateValidation certificateValidationMode="None" /> </service> </microsoft.identityModel> <appSettings> <add key="FederationMetadataLocation" value="https://XXXYYYY.accesscontrol.windows.net/FederationMetadata/2007-06/FederationMetadata.xml " /> </appSettings> </configuration> Edit: Further implementation details I have the following Behaviour Extension Element (which is not getting invoked currently) public class RsaSessionServiceBehaviorExtension : BehaviorExtensionElement { public override Type 
BehaviorType { get { return typeof(RsaSessionServiceBehavior); } } protected override object CreateBehavior() { return new RsaSessionServiceBehavior(); } } The namespaces and assemblies are correct in the config. There is more code involved for checking token validation, but in my opinion at least MEX should get published and CreateBehavior() should get invoked in order for me to proceed further.
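
    One simple outside-in check (not from the post) is to confirm whether any metadata is being served at all: with serviceMetadata httpsGetEnabled="true", the WSDL should come back from a plain HTTPS GET of the service address with ?wsdl appended, even though the mex endpoint itself speaks WS-MetadataExchange rather than plain GET. A sketch with a placeholder address and a self-signed dev certificate assumed:

      # Hypothetical metadata probe; the .svc path and port are guesses, adjust to the real ones.
      import requests

      wsdl_url = "https://localhost:8443/Service1.svc?wsdl"  # placeholder address

      resp = requests.get(wsdl_url, verify=False)  # dev certificate is self-signed
      print(resp.status_code)
      print(resp.text[:200])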

    Read the article

  • GridBagConstraints problem-moved to left and size isn't the same

    - by Damir
    I have in Java two panels which need to have same layout, there is my functions for initializations panels. private void InitializePanelCom(){ pnlCom=new JPanel(); pnlCom.setSize(300,160); pnlCom.setLocation(10, 60); add(pnlCom); GridBagLayout gb=new GridBagLayout(); GridBagConstraints gc=new GridBagConstraints(); pnlCom.setLayout(gb); jLabelcommPort = setJLabel("Com Port : "); jLabelbaudRate = setJLabel("Baud Rate : "); jLabelplcAddress = setJLabel("Plc Address : "); jLabelsendTime = setJLabel("Send Time : "); jLabelx50 = setJLabel(" x 50 ms (2 - 99)"); jComboBoxcommPort = setJComboBox(commPortList); jComboBoxbaudRate = setJComboBox(bitRateList); jTextAreaPlcAddress = setJTextField(""); jTextAreaSendTime = setJTextField(""); gc.insets = new Insets(10,0,0,0); gc.ipadx = 120; gc.weightx = 1; gc.gridx = 0; gc.gridy = 0; gc.anchor=GridBagConstraints.EAST; pnlCom.add(jLabelcommPort,gc); gc.insets = new Insets(10,0,0,0); gc.ipadx = 120; gc.weightx = 1; gc.gridx = 1; gc.gridy = 0; gc.anchor=GridBagConstraints.EAST; pnlCom.add(jComboBoxcommPort,gc); gc.insets=new Insets(10,0,0,0); gc.ipadx=120; gc.weightx=1; gc.gridx=0; gc.gridy=1; gc.anchor=GridBagConstraints.EAST; pnlCom.add(jLabelbaudRate,gc); gc.insets=new Insets(10,0,0,0); gc.ipadx=120; gc.weightx=1; gc.gridx=1; gc.gridy=1; gc.anchor=GridBagConstraints.EAST; pnlCom.add(jComboBoxbaudRate,gc); gc.insets=new Insets(10,0,0,0); gc.ipadx=120; gc.weightx=1; gc.gridx=0; gc.gridy=2; gc.anchor=GridBagConstraints.EAST; pnlCom.add(jLabelplcAddress,gc); gc.insets=new Insets(10,0,0,0); gc.ipadx=120; gc.weightx=1; gc.gridx=1; gc.gridy=2; gc.anchor=GridBagConstraints.EAST; pnlCom.add(jTextAreaPlcAddress,gc); gc.insets=new Insets(10,0,0,0); gc.ipadx=120; gc.weightx=1; gc.gridx=0; gc.gridy=3; gc.anchor=GridBagConstraints.EAST; pnlCom.add(jLabelsendTime,gc); gc.insets=new Insets(10,0,0,0); gc.ipadx=120; gc.weightx=1; gc.gridx=1; gc.gridy=3; gc.anchor=GridBagConstraints.EAST; pnlCom.add(jTextAreaSendTime,gc); gc.insets=new Insets(10,0,0,0); gc.ipadx=120; gc.weightx=1; gc.gridx=2; gc.gridy=3; gc.anchor=GridBagConstraints.EAST; pnlCom.add(jLabelx50,gc); } ![alt text][1] private void InitializePanelTcp(){ pnlTcp=new JPanel(); pnlTcp.setSize(300,160); pnlTcp.setLocation(10, 60); add(pnlTcp); GridBagLayout gb=new GridBagLayout(); GridBagConstraints gc=new GridBagConstraints(); pnlTcp.setLayout(gb); lblIPAddress=setJLabel("IP Address : "); txtIPAddress=setJTextField(""); lblPort=setJLabel("Port : "); txtPort=setJTextField(""); cmbBaudRateTCP = setJComboBox(bitRateList); lblBaudRateTCP = setJLabel("Baud Rate : "); lblParityCheck=setJLabel("Parity check : "); txtParityCheck=setJTextField(""); gc.insets = new Insets(10,0,0,0); //gc.ipadx = 20; gc.weightx = 0.3; gc.gridx = 0; gc.gridy = 0; gc.anchor=GridBagConstraints.WEST; pnlTcp.add(lblIPAddress,gc); gc.insets = new Insets(10,0,0,0); //gc.ipadx = 80; gc.weightx = 0.7; gc.gridx = 1; gc.gridy = 0; gc.anchor=GridBagConstraints.WEST; pnlTcp.add(txtIPAddress,gc); gc.insets=new Insets(10,0,0,0); //gc.ipadx=120; gc.weightx=0.3; gc.gridx=0; gc.gridy=1; gc.anchor=GridBagConstraints.WEST; pnlTcp.add(lblPort,gc); gc.insets=new Insets(10,0,0,0); //gc.ipadx=80; gc.weightx=0.7; gc.gridx=1; gc.gridy=1; gc.anchor=GridBagConstraints.WEST; pnlTcp.add(txtPort,gc); gc.insets=new Insets(10,0,0,0); //gc.ipadx=120; gc.weightx=0.3; gc.gridx=0; gc.gridy=2; gc.anchor=GridBagConstraints.WEST; pnlTcp.add(lblBaudRateTCP,gc); gc.insets=new Insets(10,0,0,0); //gc.ipadx=0; gc.weightx=0.7; gc.gridx=1; gc.gridy=2; gc.anchor=GridBagConstraints.WEST; 
pnlTcp.add(cmbBaudRateTCP,gc); gc.insets=new Insets(10,0,0,0); //gc.ipadx=120; gc.weightx=0.3; gc.gridx=0; gc.gridy=3; gc.anchor=GridBagConstraints.WEST; pnlTcp.add(lblParityCheck,gc); gc.insets=new Insets(10,0,0,0); //gc.ipadx=0; gc.weightx=1.7; gc.gridx=1; gc.gridy=3; gc.anchor=GridBagConstraints.WEST; pnlTcp.add(txtParityCheck,gc); } The problem is that the second panel (InitializePanelTcp, the second picture) doesn't look the same: the labels are pushed to the left and the sizing is different. Can anybody help? I am new to GridBagConstraints.

    Read the article

  • jetty - javax.naming.InvalidNameException: A flat name can only have a single component

    - by Dinesh Pillay
    I have been breaking my head against this for too much time now. I'm trying to get maven + jetty + jotm to play nice but it looks like its too much to ask for :( Below is my jetty.xml:- <?xml version="1.0"?> <!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd"> <Configure id="Server" class="org.mortbay.jetty.Server"> <New id="jotm" class="org.objectweb.jotm.Jotm"> <Arg type="boolean">true</Arg> <Arg type="boolean">false</Arg> <Call id="tm" name="getTransactionManager" /> <Call id="ut" name="getUserTransaction" /> </New> <New class="org.mortbay.jetty.plus.naming.Resource"> <Arg /> <Arg>javax.transaction.TransactionManager</Arg> <Arg><Ref id="ut" /></Arg> </New> <New id="tx" class="org.mortbay.jetty.plus.naming.Transaction"> <Arg><Ref id="ut" /></Arg> </New> <New class="org.mortbay.jetty.plus.naming.Resource"> <Arg>myxadatasource</Arg> <Arg> <New id="myxadatasourceA" class="org.enhydra.jdbc.standard.StandardXADataSource"> <Set name="DriverName">org.apache.derby.jdbc.EmbeddedDriver</Set> <Set name="Url">jdbc:derby:protodb;create=true</Set> <Set name="User"></Set> <Set name="Password"></Set> <Set name="transactionManager"> <Ref id="tm" /> </Set> </New> </Arg> </New> <New id="protodb" class="org.mortbay.jetty.plus.naming.Resource"> <Arg>jdbc/protodb</Arg> <Arg> <New class="org.enhydra.jdbc.pool.StandardXAPoolDataSource"> <Arg> <Ref id="myxadatasourceA" /> </Arg> <Set name="DataSourceName">myxadatasource</Set> </New> </Arg> </New> And this is the maven plugin configuration:- <plugin> <groupId>org.mortbay.jetty</groupId> <artifactId>maven-jetty-plugin</artifactId> <configuration> <scanIntervalSeconds>10</scanIntervalSeconds> <stopKey>ps</stopKey> <stopPort>7777</stopPort> <webAppConfig> <contextPath>/ps</contextPath> </webAppConfig> <connectors> <connector implementation="org.mortbay.jetty.nio.SelectChannelConnector"> <port>7070</port> <maxIdleTime>60000</maxIdleTime> </connector> </connectors> <jettyConfig>src/main/webapp/WEB-INF/jetty.xml</jettyConfig> </configuration> <executions> <execution> <id>start-jetty</id> <phase>pre-integration-test</phase> <goals> <goal>run</goal> </goals> <configuration> <scanIntervalSeconds>0</scanIntervalSeconds> <daemon>true</daemon> </configuration> </execution> <execution> <id>stop-jetty</id> <phase>post-integration-test</phase> <goals> <goal>stop</goal> </goals> </execution> </executions> <dependencies> <dependency> <groupId>org.apache.derby</groupId> <artifactId>derby</artifactId> <version>10.6.1.0</version> </dependency> <dependency> <groupId>jotm</groupId> <artifactId>jotm</artifactId> <version>2.0.10</version> <exclusions> <exclusion> <groupId>javax.resource</groupId> <artifactId>connector</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>com.experlog</groupId> <artifactId>xapool</artifactId> <version>1.5.0</version> </dependency> <dependency> <groupId>javax.resource</groupId> <artifactId>connector-api</artifactId> <version>1.5</version> </dependency> <dependency> <groupId>javax.transaction</groupId> <artifactId>jta</artifactId> <version>1.0.1B</version> </dependency> <!-- <dependency> <groupId>javax.jts</groupId> <artifactId>jts</artifactId> <version>1.0</version> </dependency> --> </dependencies> </plugin> I am using maven-jetty-plugin-6.1.24 cause I couldn't get the later one's to work either. 
When I execute this I get the following exception:- 2010-06-16 09:03:13.423:WARN::Config error at javax.transaction.TransactionManager java.lang.reflect.InvocationTargetException [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Failure A flat name can only have a single component [INFO] ------------------------------------------------------------------------ Caused by: javax.naming.InvalidNameException: A flat name can only have a single component at javax.naming.NameImpl.addAll(NameImpl.java:621) at javax.naming.CompoundName.addAll(CompoundName.java:442) at org.mortbay.jetty.plus.naming.NamingEntryUtil.makeNamingEntryName(NamingEntryUtil.java:136) at org.mortbay.jetty.plus.naming.NamingEntry.save(NamingEntry.java:196) at org.mortbay.jetty.plus.naming.NamingEntry.(NamingEntry.java:58) at org.mortbay.jetty.plus.naming.Resource.(Resource.java:34) ... 31 more Help!

    Read the article

  • Need help debugging a very basic PHP SOAP Hello world app

    - by WarDoGG
    I have been breaking my head at this, reading almost every article and tutorial there is on the web, but nothing doing.. i still cannot get my first web service application to work. I would really appreciate it if anyone could debug this code for me and provide me with a good explanation as to what is wrong and why. This will help indeed ! Thanks ! I have pasted below the entire codes that i am using making it easier to debug. I'm using the PHP5 SOAP extension. Here is my WSDL: <?xml version="1.0" encoding="utf-8"?> <wsdl:definitions name="testWebservice" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tm="http://microsoft.com/wsdl/mime/textMatching/" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/" xmlns:tns="http://tempuri.org/" xmlns:s1="http://microsoft.com/wsdl/types/" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/" xmlns:http="http://schemas.xmlsoap.org/wsdl/http/" targetNamespace="http://tempuri.org/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"> <wsdl:types> <s:schema elementFormDefault="qualified" targetNamespace="http://tempuri.org/"> <s:import namespace="http://microsoft.com/wsdl/types/" /> <s:element name="getUser"> <s:complexType> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="username" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="password" type="s:string" /> </s:sequence> </s:complexType> </s:element> <s:element name="getUserResponse"> <s:complexType> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="getUserResult" type="tns:userInfo" /> </s:sequence> </s:complexType> </s:element> <s:complexType name="userInfo"> <s:sequence> <s:element minOccurs="1" maxOccurs="1" name="ID" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="authkey" type="s:int" /> </s:sequence> </s:complexType> </s:schema> </wsdl:types> <wsdl:message name="getUserSoapIn"> <wsdl:part name="parameters" element="tns:getUser" /> </wsdl:message> <wsdl:message name="getUserSoapOut"> <wsdl:part name="parameters" element="tns:getUserResponse" /> </wsdl:message> <wsdl:portType name="testWebservice"> <wsdl:operation name="getUser"> <wsdl:input message="tns:getUserSoapIn" /> <wsdl:output message="tns:getUserSoapOut" /> </wsdl:operation> </wsdl:portType> <wsdl:binding name="testWebserviceBinding" type="tns:testWebservice"> <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http" /> <wsdl:operation name="getUser"> <soap:operation soapAction="http://tempuri.org/getUser" /> <wsdl:input> <soap:body use="literal" /> </wsdl:input> <wsdl:output> <soap:body use="literal" /> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:service name="testWebserviceService"> <wsdl:port name="testWebservicePort" binding="tns:testWebserviceBinding"> <soap:address location="http://127.0.0.1/nusoap/storytruck/index.php" /> </wsdl:port> </wsdl:service> </wsdl:definitions> and here is the PHP Code i use to setup the server: <?php function getUser($user,$pass) { return array('ID'=>1); } ini_set("soap.wsdl_cache_enabled", "0"); // disabling WSDL cache $server = new SoapServer("http://127.0.0.1/mywsdl.wsdl"); $server->addFunction('getUser'); $server->handle(); ?> and the code for the client: <?php $client = new SoapClient("http://127.0.0.1/index.php?wsdl", array('exceptions' => 0)); try { $result = $client->getUser("username","pass"); print_r($result); } catch (SoapFault $result) { print_r($result); } ?> Here is the ERROR output i am getting on the browser 
: SoapFault Object ( [message:protected] => Error cannot find parameter [string:Exception:private] => [code:protected] => 0 [file:protected] => C:\xampp\htdocs\client.php [line:protected] => 6 [trace:Exception:private] => Array ( [0] => Array ( [function] => __call [class] => SoapClient [type] => -> [args] => Array ( [0] => getUser [1] => Array ( [0] => username [1] => pass ) ) ) [1] => Array ( [file] => C:\xampp\htdocs\client.php [line] => 6 [function] => getUser [class] => SoapClient [type] => -> [args] => Array ( [0] => username [1] => pass ) ) ) [previous:Exception:private] => [faultstring] => Error cannot find parameter [faultcode] => SOAP-ENV:Client )

    Read the article

  • Problem transferring a file using .NET sockets

    - by Sergei
    I have a client and a server; the client sends a file to the server. When I transfer files on my own computer (locally) everything is OK (I tried a file over 700 MB). When I try to send a file over the Internet to my friend, at the end of the transfer this error appears on the server: "Input string is not in correct format". The error is raised by the expression fSize = Convert::ToUInt64(tokenes[0]); and I don't understand why it appears. The file should be transferred and the server should then wait for the next transfer. PS: sorry for so much code, but I want to find the solution. private: void CreateServer() { try{ IPAddress ^ipAddres = IPAddress::Parse(ipAdress); listener = gcnew System::Net::Sockets::TcpListener(ipAddres, port); listener->Start(); clientsocket =listener->AcceptSocket(); bool keepalive = true; array<wchar_t,1> ^split = gcnew array<wchar_t>(1){ '\0' }; array<wchar_t,1> ^split2 = gcnew array<wchar_t>(1){ '|' }; statusBar1->Text = "Connected" ; // while (keepalive) { array<Byte>^ size1 = gcnew array<Byte>(1024); clientsocket->Receive(size1); System::String ^notSplited = System::Text::Encoding::GetEncoding(1251)->GetString(size1); array<String^> ^ tokenes = notSplited->Split(split2); System::String ^fileName = tokenes[1]->ToString(); statusBar1->Text = "Receiving file" ; unsigned long fSize = 0; //IN THIS EXPRESSIN APPEARS ERROR fSize = Convert::ToUInt64(tokenes[0]); if (!Directory::Exists("Received")) Directory::CreateDirectory("Received"); System::String ^path = "Received\\"+ fileName; while (File::Exists(path)) { int dotPos = path->LastIndexOf('.'); if (dotPos == -1) { path += "[1]"; } else { path = path->Insert(dotPos, "[1]"); } } FileStream ^fs = gcnew FileStream(path, FileMode::CreateNew, FileAccess::Write); BinaryWriter ^f = gcnew BinaryWriter(fs); //bytes received unsigned long processed = 0; pBarFilesTr->Visible = true; pBarFilesTr->Minimum = 0; pBarFilesTr->Maximum = (int)fSize; // Set the initial value of the ProgressBar. 
pBarFilesTr->Value = 0; pBarFilesTr->Step = 1024; //loop for receive file array<Byte>^ buffer = gcnew array<Byte>(1024); while (processed < fSize) { if ((fSize - processed) < 1024) { int bytes ; array<Byte>^ buf = gcnew array<Byte>(1024); bytes = clientsocket->Receive(buf); if (bytes != 0) { f->Write(buf, 0, bytes); processed = processed + (unsigned long)bytes; pBarFilesTr->PerformStep(); } break; } else { int bytes = clientsocket->Receive(buffer); if (bytes != 0) { f->Write(buffer, 0, 1024); processed = processed + 1024; pBarFilesTr->PerformStep(); } else break; } } statusBar1->Text = "File was received" ; array<Byte>^ buf = gcnew array<Byte>(1); clientsocket->Send(buf,buf->Length,SocketFlags::None); f->Close(); fs->Close(); SystemSounds::Beep->Play(); } }catch(System::Net::Sockets::SocketException ^es) { MessageBox::Show(es->ToString()); } catch(System::Exception ^es) { MessageBox::Show(es->ToString()); } } private: void CreateClient() { clientsock = gcnew System::Net::Sockets::TcpClient(ipAdress, port); ns = clientsock->GetStream(); sr = gcnew StreamReader(ns); statusBar1->Text = "Connected" ; } private:void Send() { try{ OpenFileDialog ^openFileDialog1 = gcnew OpenFileDialog(); System::String ^filePath = ""; System::String ^fileName = ""; //file choose dialog if (openFileDialog1->ShowDialog() == System::Windows::Forms::DialogResult::OK) { filePath = openFileDialog1->FileName; fileName = openFileDialog1->SafeFileName; } else { MessageBox::Show("You must select a file", "Error", MessageBoxButtons::OK, MessageBoxIcon::Exclamation); return; } statusBar1->Text = "Sending file" ; NetworkStream ^writerStream = clientsock->GetStream(); System::Runtime::Serialization::Formatters::Binary::BinaryFormatter ^format = gcnew System::Runtime::Serialization::Formatters::Binary::BinaryFormatter(); array<Byte>^ buffer = gcnew array<Byte>(1024); FileStream ^fs = gcnew FileStream(filePath, FileMode::Open); BinaryReader ^br = gcnew BinaryReader(fs); //file size unsigned long fSize = (unsigned long)fs->Length; //transfer file size + name bFSize = Encoding::GetEncoding(1251)->GetBytes(Convert::ToString(fs->Length+"|"+fileName+"|")); writerStream->Write(bFSize, 0, bFSize->Length); //status bar pBarFilesTr->Visible = true; pBarFilesTr->Minimum = 0; pBarFilesTr->Maximum = (int)fSize; pBarFilesTr->Value = 0; // Set the initial value of the ProgressBar. pBarFilesTr->Step = 1024; //bytes transfered unsigned long processed = 0; int bytes = 1024; //loop for transfer while (processed < fSize) { if ((fSize - processed) < 1024) { bytes = (int)(fSize - processed); array<Byte>^ buf = gcnew array<Byte>(bytes); br->Read(buf, 0, bytes); writerStream->Write(buf, 0, buf->Length); pBarFilesTr->PerformStep(); processed = processed + (unsigned long)bytes; break; } else { br->Read(buffer, 0, 1024); writerStream->Write(buffer, 0, buffer->Length); pBarFilesTr->PerformStep(); processed = processed + 1024; } } array<Byte>^ bufsss = gcnew array<Byte>(100); writerStream->Read(bufsss,0,bufsss->Length); statusBar1->Text = "File was sent" ; btnSend->Enabled = true; fs->Close(); br->Close(); SystemSounds::Beep->Play(); newThread->Abort(); } catch(System::Net::Sockets::SocketException ^es) { MessageBox::Show(es->ToString()); } } UPDATE: ok, i can add checking if clientsocket->Receive(size1); equal zero, but why he begin receiving data again , in the ending of receiving. UPDATE:After adding this checking problem remains. AND WIN RAR SAY TO OPENING ARCHIVE - unexpected end of file! 
UPDATE: http://img153.imageshack.us/img153/3760/erorr.gif I think it continues receiving some bytes from the client (that remain in the stream), but then why does the while (processed < fSize) loop exist?
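
    The parse failure is consistent with treating TCP as message-based: the server assumes a single Receive(1024) returns exactly the "size|filename|" header, which tends to hold on a LAN but not across the Internet, where the header can arrive split or glued to the first file bytes. A sketch of explicit framing (written in Python just to keep it short; the same idea carries over to the C++/CLI code above):

      # Hypothetical framing: a fixed-size binary header (8-byte file size plus
      # 4-byte name length) followed by the name, and a helper that reads exactly
      # the number of bytes requested regardless of how TCP splits the stream.
      import socket
      import struct

      def send_header(sock: socket.socket, size: int, name: str) -> None:
          name_bytes = name.encode("utf-8")
          sock.sendall(struct.pack("!QI", size, len(name_bytes)) + name_bytes)

      def recv_exact(sock: socket.socket, n: int) -> bytes:
          buf = b""
          while len(buf) < n:
              chunk = sock.recv(n - len(buf))
              if not chunk:
                  raise ConnectionError("socket closed mid-frame")
              buf += chunk
          return buf

      def recv_header(sock: socket.socket):
          size, name_len = struct.unpack("!QI", recv_exact(sock, 12))
          return size, recv_exact(sock, name_len).decode("utf-8")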

    Read the article

  • SSL connection errors from Apache

    - by Yang
    I'm running a (self-signed) SSL cert site on Apache/2.2.14 on Ubuntu 10.04, but various browsers are giving errors on half the connection attempts. Just now saw this transient error from Chrome: "Error 126 (net::ERR_SSL_BAD_RECORD_MAC_ALERT): Unknown error." Hit refresh and the problem goes away for a while. wget too: $ wget --no-check-certificate https://dev.foo.com/deps/ --2010-09-08 19:30:26-- https://dev.foo.com/deps/ Resolving dev.foo.com... 184.72.53.220 Connecting to dev.foo.com|184.72.53.220|:443... connected. OpenSSL: error:0407006A:rsa routines:RSA_padding_check_PKCS1_type_1:block type is not 01 OpenSSL: error:04067072:rsa routines:RSA_EAY_PUBLIC_DECRYPT:padding check failed OpenSSL: error:1408D07B:SSL routines:SSL3_GET_KEY_EXCHANGE:bad signature Unable to establish SSL connection. Run it right away again and it works: $ wget --no-check-certificate https://dev.foo.com/deps/ --2010-09-08 19:30:29-- https://dev.foo.com/deps/ Resolving dev.foo.com... 184.72.53.220 Connecting to dev.foo.com|184.72.53.220|:443... connected. WARNING: cannot verify dev.foo.com's certificate, issued by `/CN=dev.foo.com': Self-signed certificate encountered. HTTP request sent, awaiting response... 200 OK Length: 3157 (3.1K) [text/html] Saving to: `index.html' 100%[======================================>] 3,157 --.-K/s in 0s 2010-09-08 19:30:29 (48.6 MB/s) - `index.html' saved [3157/3157] In my sites-enabled/default-ssl: SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key The cert: -----BEGIN CERTIFICATE----- MIIBszCCARwCCQCa0TzNwqLgsTANBgkqhkiG9w0BAQUFADAeMRwwGgYDVQQDExNk ZXYucGFydHlvbmRhdGEuY29tMB4XDTEwMDgyNzA2MzA1N1oXDTIwMDgyNDA2MzA1 N1owHjEcMBoGA1UEAxMTZGV2LnBhcnR5b25kYXRhLmNvbTCBnzANBgkqhkiG9w0B AQEFAAOBjQAwgYkCgYEAzXDEULpCUqIc9hV/ESFapkckR2uoYINA81DvG2aQZ9Ot Q30OwX2ae2CC4bSzJEIVlahU8vjVrWpmpa28NEhQbqh4ywwbl1XDrEVYI6Gkfimf snJhOKyaVrEhlwutYtBjmsz3ZIqwymMPm/6smVcSS5dJIynlSmtltxX6ivPcO8UC AwEAATANBgkqhkiG9w0BAQUFAAOBgQBGxHVkpSSOnZjzuySRepjhAlV/yhe9Fx23 fh12WrjQMEi98B7JEuNSLXDWckUN7O6XRc3RzKmazcGHJqzhn0Ov6gAmAE2XjZ/x VW21xmaLwk+KgYKFJbJJaP3jMSpU7I3aa11wqAkR2Zd4Nkm9N0YXYIzcBdfztTVI Et8mEHBFdg== -----END CERTIFICATE----- The cert is in turn generated via: $ make-ssl-cert generate-default-snakeoil --force-overwrite Apache version. $ apache2 -V Server version: Apache/2.2.14 (Ubuntu) Server built: Apr 13 2010 20:22:19 Server's Module Magic Number: 20051115:23 Server loaded: APR 1.3.8, APR-Util 1.3.9 Compiled using: APR 1.3.8, APR-Util 1.3.9 Architecture: 64-bit Server MPM: Worker threaded: yes (fixed thread count) forked: yes (variable process count) Server compiled with.... -D APACHE_MPM_DIR="server/mpm/worker" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="" -D SUEXEC_BIN="/usr/lib/apache2/suexec" -D DEFAULT_PIDLOG="/var/run/apache2.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types" -D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf" I don't administer the network, hardware, etc. - this is all running on Amazon EC2. I'm not running a load-balancer or anything else in front of the server. I'm making direct TCP connections to that host (AFAIK). Any ideas? Thanks in advance for any help.
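
    A loop like the following (the hostname is the placeholder from the post) makes the intermittent failure easier to quantify than refreshing a browser. Roughly half of handshakes failing with bad-signature or bad-record-MAC style errors while the other half succeed is the sort of pattern that is often caused by corrupted key material or flaky hardware/memory on the server rather than by client settings, so a stable failure rate is a useful first datapoint. This is only a diagnostic sketch, not a fix.

      # Hypothetical handshake loop against the self-signed server.
      import socket
      import ssl

      HOST = "dev.foo.com"  # placeholder from the post
      ctx = ssl.create_default_context()
      ctx.check_hostname = False
      ctx.verify_mode = ssl.CERT_NONE  # the certificate is self-signed

      ok = failed = 0
      for _ in range(20):
          try:
              with socket.create_connection((HOST, 443), timeout=5) as raw:
                  with ctx.wrap_socket(raw, server_hostname=HOST):
                      ok += 1
          except (ssl.SSLError, OSError):
              failed += 1
      print("handshakes ok:", ok, "failed:", failed)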

    Read the article

  • SQL Server Agent was not running on Server Dynamics CRM 2013

    - by No1_Melman
    I'm trying to install Dynamics CRM 2013 on a server. This server is a VM. There are several other VMs: an ADDS & DNS VM, an MSSQL VM and a web server VM. Each server runs Windows Server 2012 R2. The SQL Server is 2012 Enterprise. Each VM is part of the main domain, served by the ADDS & DNS VM. NSLookup confirms I can see the computer at the right IP address. Each VM has its own static IP, and its DNS is set to the ADDS & DNS server. I use the domain administrator to log into all the servers, and make that domain administrator a local administrator. I've set up all the domain users for the CRM and given them appropriate permissions; I have also added the accounts to the appropriate places, so that the CRM Deployment user is in the SQL security. The SQL Agent is running. SQL Server Configuration Manager has TCP/IP enabled under SQL Server Network Configuration to allow remote connections. The SQL server has the domain user as an administrator, which is the same user being used to install the CRM. In the CRM setup I point to [Servername]\[Instance], and I have also tried just [Servername]. To make this easier I called the server MSSQL and left the instance name at the default. I even installed the MSSQL instance as the domain administrator. CRM can find the ReportServer URL. I have enabled all the required ports, including 135, 1433, 1434 (UDP), 2382, 2383 and 4022. I feel like I have done absolutely everything, I have googled many times and tried all the different methods, and for the life of me I can't seem to get the CRM setup to find the SQL Server Agent. It passes everything else perfectly fine. I can even ping the MSSQL server. What is the problem? Why does CRM keep giving the error: SQLSERVERAGENT (SQLSERVERAGENT) service is not running on the server MSSQL
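
    A quick reachability sketch (my addition; the hostname is the one used in the post) run from the CRM VM can at least separate firewall problems from the Agent-detection problem itself. Note that 1434 is UDP (the SQL Browser service) and is not covered by a TCP connect test.

      # Hypothetical TCP reachability check from the CRM server to the SQL box.
      import socket

      SQL_HOST = "MSSQL"
      TCP_PORTS = [135, 1433, 2382, 2383, 4022]

      for port in TCP_PORTS:
          try:
              with socket.create_connection((SQL_HOST, port), timeout=3):
                  print(port, "open")
          except OSError as exc:
              print(port, "unreachable:", exc)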

    Read the article

  • Uwsgi starts from root but not as a service

    - by vittore
    I have an nginx + uWSGI setup for a Flask website. This is my nginx config: server { listen 80; server_name _; location /static/ { alias /var/www/site/app/static/; } location / { uwsgi_pass 127.0.0.1:5080; include uwsgi_params; } } And here is my uwsgi config.xml: <uwsgi> <socket>127.0.0.1:5080</socket> <autoload/> <daemonize>/var/log/uwsgi_webapp.log</daemonize> <pythonpath>/var/www/site/</pythonpath> <module>run:app</module> <plugins>python27</plugins> <virtualenv>/var/www/venv/</virtualenv> <processes>1</processes> <enable-threads/> <master /> <harakiri>60</harakiri> <max-requests>2000</max-requests> <limit-as>512</limit-as> <reload-on-as>256</reload-on-as> <reload-on-rss>192</reload-on-rss> <no-orphans/> <vacuum/> </uwsgi> When I try to start the uwsgi service (service uwsgi start) it says OK, but there is no uwsgi process and I see the following in the log: *** Starting uWSGI 1.0.3-debian (64bit) on [Fri Oct 25 00:43:13 2013] *** compiled with version: 4.6.3 on 17 July 2012 02:26:54 current working directory: / writing pidfile to /run/uwsgi/app/gsk/pid detected binary path: /usr/bin/uwsgi-core setgid() to 33 setuid() to 33 limiting address space of processes... your process address space limit is 536870912 bytes (512 MB) your memory page size is 4096 bytes *** WARNING: you have enabled harakiri without post buffering. Slow upload could be rejected on post-unbuffered webservers *** uwsgi socket 0 bound to TCP address 127.0.0.1:5080 fd 6 bind(): Permission denied [socket.c line 107] However, when I start uwsgi as root with uwsgi --socket 127.0.0.1:5080 --module run --callab app --harakiri 15 --harakiri-verbose --logto2 tmp/uwsgi.log it starts just fine, and after restarting nginx I can access the website. What could be the issue?
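
    The log shows the bind attempt happening after the setgid()/setuid() to uid/gid 33 (www-data), so a useful first check (my suggestion, not from the post) is whether that user can bind the port at all outside of uwsgi, for example with sudo -u www-data python bindtest.py and a sketch like this:

      # Hypothetical bind test for 127.0.0.1:5080 as the service user.
      import socket

      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      try:
          s.bind(("127.0.0.1", 5080))
          print("bind succeeded: the port itself is not the problem for this user")
      except OSError as exc:
          print("bind failed:", exc)
      finally:
          s.close()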

    Read the article

  • Looking for definitive answer to accessing a network drive/NAS/SMB drive via Windows 7 HOME and Windows 7 Professional. Is it possible and how?

    - by Rob
    I want to be able to access my Lacie 2Big network drive in Windows 7 Explorer. I have a machine with Windows 7 Home and one with Windows 7 Professional, and neither of them can access the drive. The Windows 7 Home machine displays the drive in Explorer, with its capacity, but on clicking the icon I get another window, blank, with the busy pointer that never stops. The drive itself is working perfectly. How do I know this? Because I can access it with no problems from my Apple Mac, Windows XP Home and Ubuntu machines on the same network as the Windows 7 machines. Except for the Windows XP Home machine, which required the Lacie Ethernet Agent program, the Mac and Ubuntu machines needed no setup; the drive appeared like any other drive.
    So my two questions:
    1. Is it possible to access a network share drive, e.g. a NAS like the Lacie 2big, in Windows 7 Home Premium and Windows 7 Professional? If so, how?
    2. I read on Microsoft's own forums and elsewhere that network share drives, e.g. via Samba/SMB, are NOT possible on Windows 7 Home. Is this true? http://social.technet.microsoft.com/Forums/en-US/w7itprovirt/thread/e08c3500-a722-4b44-b644-64f94f63c8e5/
    This question is a more comprehensive rewrite of my earlier question, "Windows 7 / TCP/IP network share guide - looking to resolve failure to mount Lacie network drive that works on XP, Linux, Mac", where I haven't received a solving answer, and I have tried to find a solution myself. Lacie themselves haven't offered a definitive answer either, but I suspect it's not just their drives; it seems to affect SMB/network shares/NAS in general. It is utterly pathetic that Windows 7 Home cannot access something as simple as a network drive, especially given that Windows XP Home can.
    My research so far: apparently it is possible on Windows 7 Professional via the Local Security Policy, but only on Professional, not on Home:
    http://www.sevenforums.com/tutorials/7357-local-security-policy-editor-open.html
    http://answers.microsoft.com/en-us/windows/forum/windows_7-security/accessing-local-security-policy-in-windows-7-home/0c8300d0-1d23-4de0-9b37-935c01a7d17a
    http://social.technet.microsoft.com/Forums/en-US/w7itprosecurity/thread/14fc5037-3386-4973-b5d8-2167272ff5ad/
    http://www.tomshardware.com/forum/75-63-windows-samba-issue
    Another solution offered is editing the registry. That doesn't look promising to me: fiddly, not guaranteed, and hard to turn into a complete solution given that everyone's registry can vary. Registry key edit solutions:
    https://www.lacie.com/uk/mystuff/ticket/ticket.htm?tid=101278940
    http://networksecurity.farzadbanifatemi.com/security-policy/how-to-access-local-security-policy-windows-7-home-premium
    Related: Does Windows 7 Home Premium support backing up to a network share; Network Copy to Windows 7 File Share Fails and Kills Network Connection
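    For what it's worth, the policy behind those Local Security Policy links is "Network security: LAN Manager authentication level", and on Home Premium the only way to change it is the registry value it maps to. A sketch of that edit (whether this particular Lacie firmware actually needs the older NTLM level is an assumption; export the Lsa key first so it can be restored):

        Windows Registry Editor Version 5.00

        ; Hedged example: lower the NTLM negotiation level so older NAS firmware can authenticate.
        ; This is the registry equivalent of the "LAN Manager authentication level" policy in secpol.msc.
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
        "LmCompatibilityLevel"=dword:00000001

    A reboot (or at least a log-off) is needed before retesting the share.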

    Read the article

  • Windows 7 extremely slow login, exchange performance, printer enumeration, etc...

    - by Jeff
    Background: I have a fresh copy of Windows 7 Professional x64 on a Dell Latitude E6500. The laptop has 8GB RAM, a 250GB drive, and all Intel peripherals (net/wifi/graphics). All available Windows updates, as well as hardware drivers, are installed. The IT folks where I work joined the computer to our Windows 2003-based Active Directory domain. There are no errors in any logs that we've looked at, and Group Policy templates appear to have applied properly.
    Problem: Every time I turn on or reboot the computer, it takes between 2 and 10 minutes (all times are actual) after successfully typing my username/password to get to my desktop. My login script does not always run. Sometimes I get a black screen, and a couple of minutes later the login script will pop up and take up to 10 minutes to complete. I can get around this by hitting Ctrl-Shift-Esc and running explorer.exe from the Task Manager; the login script continues to hang, but I can minimize it and go on about my business. Either way, it generally throws errors prior to completing. I often get slow or failed connectivity to Exchange via Outlook. When I bring up printer dialogs, they take several minutes to populate, and they block the calling app while doing so. Copies to SMB shares are very slow. On my home network, everything works fine. On both the work and home networks, I can use remote internet resources just fine: web pages pull up, remote VPNs work, I can max out bandwidth on the SpeakEasy speed test, and I can get almost max bandwidth transferring FTP/HTTP over a LAN. Another symptom is that when I first log in, the work network shows as "Identifying" for a long time in the Network and Sharing Center, and will often then change to the name of the work domain but say "Unauthenticated Network". Note that this computer previously ran Windows Vista with none of these problems.
    Attempts to fix:
    - Installed the Win7 admin pack
    - Uninstalled/reinstalled all hardware drivers
    - Verified Active Directory DNS settings (Vista works relatively well on the same network)
    - Reset all TCP/IP settings on all adapters using the netsh commands
    - Disabled IPv6 on all adapters
    - Disabled the wifi adapter while on the work network
    - Locked the network card to 100/Full and 1000/Full; also tried Auto
    - Added various important addresses to the hosts file (Exchange, DNS, AD); removed them when that didn't help
    - My background is a JPEG (sounds unrelated, but there is apparently a Win7 login bug related to solid color backgrounds)
    - More that I have forgotten
    The IT staff at my company believe this is due to having Windows 2003 AD servers and no Windows 2008 R2 AD servers. Other than that, they have no advice or assistance to offer beyond a rebuild (already tried that once, with similar symptoms) or a downgrade to Vista. Any thoughts out there?
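    A few read-only checks that can narrow down whether DC location, time skew, DNS or GPO processing is the slow step (a sketch; replace DOMAIN with the actual AD DNS name):

        rem Which DC did we authenticate against, and is a site assigned?
        nltest /dsgetdc:DOMAIN
        rem Time skew against the DCs (Kerberos breaks past 5 minutes)
        w32tm /monitor
        rem Summary of applied GPOs and any processing errors
        gpresult /r
        rem DNS view of the domain SRV records the client uses to find DCs
        nslookup -type=SRV _ldap._tcp.DOMAIN

    If nltest reports no site or a DC across a WAN link, that alone can explain multi-minute logons.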

    Read the article

  • Mono through FastCGI on nginx

    - by Stijn
    I'm going through http://www.mono-project.com/FastCGI_Nginx and can't get it to work. The FastCGI server seems to be running. The following is from the error log:

        upstream sent unexpected FastCGI record: 3 while reading response header from upstream, client: 192.168.1.125, server: arch, request: "GET /Default.aspx HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "arch"

    Command used to start the server (I've tried server2 and server4, using a simple .NET 2.0 or .NET 4.0 project):

        fastcgi-mono-server2 /applications=arch:/:/var/www/test/public/ /socket=tcp:127.0.0.1:9000 /stopable=True

    nginx config:

        server {
            listen 80;
            server_name arch;
            access_log /var/www/test/log/access.log;
            error_log /var/www/test/log/error.log;
            location / {
                root /var/www/test/public;
                index index.html index.htm default.aspx Default.aspx;
                fastcgi_index Default.aspx;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param PATH_INFO "";
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    Using xsp4 works fine; I can browse the site. I've enabled FastCGI logging, and this is the output:

        [2012-04-15 23:51:18Z] Debug Accepting an incoming connection.
        [2012-04-15 23:51:18Z] Notice Beginning to receive records on connection.
        [2012-04-15 23:51:18Z] Debug Record received. (Type: BeginRequest, ID: 1, Length: 8)
        [2012-04-15 23:51:18Z] Debug Record received. (Type: Params, ID: 1, Length: 386)
        [2012-04-15 23:51:18Z] Debug Record received. (Type: Params, ID: 1, Length: 0)
        [2012-04-15 23:51:18Z] Debug Read parameter. (PATH_INFO = )
        [2012-04-15 23:51:18Z] Debug Read parameter. (SCRIPT_FILENAME = /var/www/test/public/Home)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_HOST = arch)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_USER_AGENT = Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT_LANGUAGE = en-gb,en;q=0.5)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT_ENCODING = gzip, deflate)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_CONNECTION = keep-alive)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_COOKIE = ASP.NET_SessionId=2C3D702C9B0F23F69B80820B)
        [2012-04-15 23:51:18Z] Error Failed to process connection. Reason: Argument cannot be null. Parameter name: s
        [2012-04-15 23:51:18Z] Debug Record sent. (Type: EndRequest, ID: 1, Length: 8)
        [2012-04-15 23:51:18Z] Debug The FastCGI connection has been closed.
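    One thing visible in the posted log (a hedged observation, not a confirmed fix): the only parameters the Mono server receives are PATH_INFO, SCRIPT_FILENAME and the HTTP_* headers. The standard CGI variables (REQUEST_METHOD, SERVER_PROTOCOL, QUERY_STRING, ...) never arrive, because the location block does not include nginx's fastcgi_params file, and a null one of those would match the "Argument cannot be null" error followed by the early EndRequest record. A sketch of the location with the include added (the /etc/nginx/fastcgi_params path is the usual default and is an assumption here):

        location / {
            root /var/www/test/public;
            index index.html index.htm default.aspx Default.aspx;
            include /etc/nginx/fastcgi_params;   # supplies REQUEST_METHOD, SERVER_PROTOCOL, QUERY_STRING, ...
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO "";
            fastcgi_index Default.aspx;
            fastcgi_pass 127.0.0.1:9000;
        }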

    Read the article

  • VMWare Network bug in multiple VMWare Workstation versions if using a hardcoded IP address

    - by onyxruby
    I'm having a very tricky problem with some of my VM sessions being unable to reach the Internet or even ping the gateway. I have just set up a new VMware Workstation 7 installation on a W2K8 64-bit server (I'll be converting to ESXi 4 once I can find a decent book on it, so for the meanwhile I use Workstation). I have imported a number of VMs and set up some new ones on the server. In short, the problem is that some of the VMs cannot reach the Internet because they cannot reach the gateway. I've looked at a number of things and can pretty safely rule out the following: switch, router, DHCP server, DNS, client IP configuration, routes and typos.
    The problem is that some of the new clients cannot reach the gateway if their IP address is hardcoded; they can't even ping it by IP address. That rules out DNS and DHCP. If I allow them to get their IP address by DHCP, they can reach the gateway and the Internet without issue. The interesting thing is that this behavior occurs even if I leave the DNS information hardcoded under the TCP/IP settings. It doesn't work unless the IP and gateway are handed out by DHCP, even though the same IP information is being used by the host. Fundamentally, from the standpoint of the clients, they are trying to reach the exact same gateway using the exact same IP information regardless of whether it is hardcoded or assigned by DHCP.
    Here's an example of one client: IP address 192.168.7.66, subnet mask 255.255.255.0, gateway 192.168.7.254, DNS1 192.168.7.44, DNS2 192.168.7.254.
    The issue occurs across six different Microsoft operating systems: Windows 7 and Windows 2008 variants all have the issue, while my W2K3, XP, Vista and W98 clients all work without issue with hardcoded IP addresses. I have tried things like rearranging the DNS order and flushing DNS. It's not a routing or switch issue, as the clients work just fine when they get their IP by DHCP. It's not a parameter issue, as the exact same parameters are handed out by DHCP as I enter by hand. It's not a DNS issue, as clients can't reach other clients even by IP address alone. I have run a tracert to the gateway by IP address, and it times out on the very first hop before failing on hop 3 with "destination host unreachable". If I get the IP address by DHCP, the tracert finds the gateway (and the Internet) without issue.
    I have read a few other posts in forums describing this problem randomly occurring over the years in other VM versions as well, so I suspect some kind of long-standing bug. Does anyone have any ideas? Is it possibly a bug with Windows 7 and W2K8 clients under VM?
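    Nothing in the question pins down a cause, but since the only variable is how the address is applied, dumping the effective state in both modes and comparing the two sometimes exposes the difference (for example a stale ARP entry for the gateway, or a leftover persistent route). A sketch of read-only checks on an affected guest:

        rem Capture the effective state with DHCP, then again with the hardcoded address, and compare
        ipconfig /all  > net-state.txt
        route print   >> net-state.txt
        arp -a        >> net-state.txt

        rem Before retesting the static configuration, clear the cached neighbour entries
        arp -d *
        netsh interface ip delete arpcache

    If the ARP table shows no entry (or a wrong MAC) for 192.168.7.254 only in the hardcoded case, the problem sits at layer 2 inside the virtual switch rather than in TCP/IP settings.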

    Read the article

  • Windows Server 2003 IPSec Tunnel Connected, But Not Working (Possibly NAT/RRAS Related)

    - by Kevinoid
    Configuration: I have set up a "raw" IPSec tunnel between a Windows Server 2003 (SBS) machine and a Netgear FVG318 according to the instructions in Microsoft KB816514. The configuration is as follows (using the same conventions as the article):

        NetA        | SBS2003   | FVG318   | NetB
        10.0.0.0/24 | 216.x.x.x | 69.y.y.y | 10.0.254.0/24

    Both the Main Mode and Quick Mode Security Associations complete successfully and appear in the IP Security Monitor. I am also able to ping the SBS2003 server on its private address from any computer on NetB.
    The problem: Any traffic sent from a computer on NetA to NetB, or from SBS2003 to NetB (excluding ICMP ping responses), is sent out on the public network interface outside the IPSec tunnel (no encryption or header authentication, as if the tunnel were not there). Pings sent from a computer on NetB to a computer on NetA successfully reach computers on NetA, but the responses are silently discarded by SBS2003 (they do not go out in the clear and do not generate any encrypted traffic).
    Possible solutions:
    Incorrect configuration: I could have mistyped something somewhere, or KB816514 could be incorrect in some way. I have tried very hard to eliminate this option; I have re-created the configuration several times and tried tweaking and adjusting all the settings I could, without success (most changes prevent the SA from being established).
    NAT/RRAS: I have seen multiple posts elsewhere suggesting that this could be due to interaction between NAT and the IPSec filters. Possibly the NetA private addresses get rewritten to 216.x.x.x before being compared with the Quick Mode IPSec filters and don't get tunneled because of the mismatch. In fact, The Cable Guy article from June 2005, "TCP/IP Packet Processing Paths", suggests that this is the case (see steps 2 and 4 of the Transit Traffic path). If so, is there a way to exclude NetA-NetB traffic from NAT?
    Any thoughts, ideas, suggestions, and/or comments are appreciated.
    Update (2011-06-26): After failing to solve the problem, I resorted to paid Microsoft support. They were unable to solve it either. Since then I have implemented a solution based on Linux that is working quite well. I will attempt to evaluate any proposed answers as best I can, but current configurations and time constraints will make this slow...
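    For anyone still digging into the NAT-before-IPSec theory, Server 2003 exposes enough state from the command line to see whether outbound NetA-to-NetB packets ever match the quick-mode filter. These are read-only checks, not a fix:

        rem Dump the static policy, filter lists and filter actions as the policy agent sees them
        netsh ipsec static show all

        rem Show the quick-mode SAs and per-SA statistics; if the counters stay flat while
        rem NetA hosts send traffic toward NetB, the packets are being NATed (or routed)
        rem before they reach filter matching
        netsh ipsec dynamic show qmsa
        netsh ipsec dynamic show stats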

    Read the article

  • radvd is not assigning prefix

    - by Samik
    I'm currently trying to set up IPv6 address auto-configuration with the router advertisement daemon (radvd) on a virtual machine running CentOS 6.5, but the eth0 interface is not obtaining the advertised prefix. I've obtained the ULA prefix from here.
    Contents of /etc/sysctl.conf:

        # Kernel sysctl configuration file for Red Hat Linux
        #
        # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
        # sysctl.conf(5) for more details.
        # Controls IP packet forwarding
        net.ipv4.ip_forward = 0
        net.ipv6.conf.all.forwarding = 1
        # Controls source route verification
        net.ipv4.conf.default.rp_filter = 1
        # Do not accept source routing
        net.ipv4.conf.default.accept_source_route = 0
        # Controls the System Request debugging functionality of the kernel
        kernel.sysrq = 0
        # Controls whether core dumps will append the PID to the core filename.
        # Useful for debugging multi-threaded applications.
        kernel.core_uses_pid = 1
        # Controls the use of TCP syncookies
        net.ipv4.tcp_syncookies = 1
        # Disable netfilter on bridges.
        net.bridge.bridge-nf-call-ip6tables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-arptables = 0
        # Controls the default maxmimum size of a mesage queue
        kernel.msgmnb = 65536
        # Controls the maximum size of a message, in bytes
        kernel.msgmax = 65536
        # Controls the maximum shared segment size, in bytes
        kernel.shmmax = 68719476736
        # Controls the maximum number of shared memory segments, in pages
        kernel.shmall = 4294967296

    Contents of /etc/radvd.conf:

        # NOTE: there is no such thing as a working "by-default" configuration file.
        # At least the prefix needs to be specified. Please consult the radvd.conf(5)
        # man page and/or /usr/share/doc/radvd-*/radvd.conf.example for help.
        #
        interface eth0
        {
            AdvSendAdvert on;
            MinRtrAdvInterval 3;
            MaxRtrAdvInterval 10;
            AdvDefaultPreference low;
            AdvHomeAgentFlag off;
            prefix fd8a:8d9d:808f:1::/64
            {
                AdvOnLink on;
                AdvAutonomous on;
                AdvRouterAddr on;
            };
        };

    Contents of /etc/sysconfig/network-scripts/ifcfg-eth0:

        DEVICE=eth0
        HWADDR=52:54:00:74:d7:46
        TYPE=Ethernet
        UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1
        ONBOOT=yes
        NM_CONTROLLED=no
        BOOTPROTO=dhcp
        IPV6INIT=yes
        IPV6_AUTOCONF=yes

    I've also enabled radvd at startup through chkconfig, though I noticed that radvd starts after the interfaces are brought up. I've tried restarting the network service afterwards, but I still only get the following link-local address:

        # ip -6 addr show
        1: lo: mtu 16436
            inet6 ::1/128 scope host
                valid_lft forever preferred_lft forever
        2: eth0: mtu 1500 qlen 1000
            inet6 fe80::5054:ff:fe74:d746/64 scope link
                valid_lft forever preferred_lft forever

    Edit: Based on the answer given by Sander Steffann I still need clarification on some points, but I'm posting here what worked.
    Contents of /etc/sysconfig/network:

        NETWORKING=yes
        HOSTNAME=syslog-ng-server
        NETWORKING_IPV6=yes
        IPV6FORWARDING=yes

    Contents of /etc/sysconfig/network-scripts/ifcfg-eth0:

        DEVICE=eth0
        HWADDR=52:54:00:74:d7:46
        TYPE=Ethernet
        UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1
        ONBOOT=yes
        NM_CONTROLLED=no
        BOOTPROTO=dhcp
        IPV6INIT=yes
        IPV6_AUTOCONF=yes
        IPV6FORWARDING=no

    Removed the following line from /etc/sysctl.conf:

        net.ipv6.conf.all.forwarding = 1

    The contents of /etc/radvd.conf are unchanged.
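    A sketch of checks on the radvd host itself, consistent with the fix recorded in the edit: when IPv6 forwarding is enabled on an interface, the kernel treats the box as a router and ignores router advertisements on that interface unless accept_ra is set to 2, so the host never autoconfigures from its own RAs. The interface and prefix are the ones from the question; everything else is read-only:

        # Are we forwarding, and are we willing to accept RAs anyway?
        sysctl net.ipv6.conf.eth0.forwarding net.ipv6.conf.eth0.accept_ra

        # Watch the advertisements actually on the link (radvdump ships with radvd)
        radvdump
        # Or capture RAs (ICMPv6 type 134) with tcpdump if radvdump is unavailable:
        tcpdump -i eth0 -vv 'icmp6 and ip6[40] == 134'

    If radvdump shows the fd8a:8d9d:808f:1::/64 prefix going out but eth0 never picks it up, the accept_ra/forwarding combination on the receiving side is the usual culprit.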

    Read the article

  • Double MySQL Running on Mountain Lion

    - by Norris
    After a full restart, my Apache/PHP server doesn't connect to the local MySQL (connecting via 127.0.0.1, because localhost always fails for some reason). So I did this today:

        ? ~ mysqladmin shutdown -u root -p
        Enter password:
        ? ~ mysqladmin shutdown -u root -p
        Enter password:
        mysqladmin: connect to server at 'localhost' failed
        error: 'Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)'
        Check that mysqld is running and that the socket: '/tmp/mysql.sock' exists!

    which basically means that I succeeded in shutting MySQL down. But as soon as I did, Apache/PHP successfully connects to MySQL and my local sites work without a hiccup until the next restart. Here are a few other details (as you can tell, I've installed MySQL via brew):

        ? ~ sudo ps aux | grep mysql
        N 4774 0.0 0.0 2432768 620 s000 S+ 9:53AM 0:00.00 grep mysql
        N 4772 0.0 2.6 3030168 440688 ?? S 9:51AM 0:00.29 /usr/local/Cellar/mysql/5.6.13/bin/mysqld --basedir=/usr/local/Cellar/mysql/5.6.13 --datadir=/usr/local/var/mysql --plugin-dir=/usr/local/Cellar/mysql/5.6.13/lib/plugin --bind-address=127.0.0.1 --log-error=/usr/local/var/mysql/N.local.err --pid-file=/usr/local/var/mysql/N.local.pid
        N 4686 0.0 0.0 2433432 1000 ?? S 9:51AM 0:00.01 /bin/sh /usr/local/opt/mysql/bin/mysqld_safe --bind-address=127.0.0.1
        N 4362 0.0 2.7 3120276 458728 ?? S 9:47AM 0:00.45 mysqld

        ? ~ lsof -i | grep mysql
        mysqld 4362 N 16u IPv6 0x76959e40691f9f93 0t0 TCP *:mysql (LISTEN)

    This is the weird thing:

        ? ~ killall -9 mysqld

    MySQL is dead and Apache doesn't connect. Then, when I run:

        ? ~ sudo mysqladmin shutdown -u root -p
        Enter password:

    Apache is (again) able to successfully connect to MySQL. As far as I understand, this means I have two MySQL servers set up and both of them try to start at the same time, but I don't have the slightest idea how to fix it. I've tried reinstalling via brew, but that didn't help.

        ? ~ which mysqladmin
        /usr/local/bin/mysqladmin
        ? ~ whence -p mysql
        /usr/local/bin/mysql
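    The ps output does show two different daemons: one started directly from /usr/local/Cellar/mysql/5.6.13/bin/mysqld and one bare mysqld launched through /usr/local/opt/mysql/bin/mysqld_safe. A sketch of how to find every startup mechanism that references MySQL and keep only one (paths are the usual Homebrew and launchd locations and may differ on your machine):

        # Launch agents/daemons that mention MySQL
        ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons 2>/dev/null | grep -i mysql
        sudo launchctl list | grep -i mysql

        # If the homebrew services tap is installed, see what brew itself auto-starts
        brew services list

    Unloading the duplicate plist with launchctl unload -w (and leaving a single homebrew.mxcl.mysql.plist in place) is the usual way to end up with exactly one mysqld after a reboot.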

    Read the article

  • Having trouble Getting "RTSP over HTTP"

    - by Muhammad Adeel Zahid
    There is an Axis camera connected to our site (camba.tv) through the Axis One-Click Connection Component (which acts as a proxy). We can communicate with this camera only over HTTP, by setting the proxy to our OCCC server's address. If we want to get RTSP streams (H.264), we are only left with the "RTSP over HTTP" option. For this I have followed the Axis VAPIX 3 documentation, section 3.3. I issue requests through Fiddler but don't get any response. However, when I put the URL (axrtsphttp://1.00408CBEA38B/axis-media/media.amp) into Windows Media Player (with the proxy set to the OCCC server 212.78.237.156:3128), the player is able to get the RTSP stream over HTTP after logging in. I have captured a trace of the communication between the camera and Windows Media Player with Wireshark, and the request that brings the stream looks like this:

        http://1.00408cbea38b/axis-media/media.amp HTTP/1.1
        x-sessioncookie: 619
        User-Agent: Axis AMC
        Host: 1.00408CBEA38B
        Proxy-Connection: Keep-Alive
        Pragma: no-cache
        Authorization: Digest username="root",realm="AXIS_00408CBEA38B",nonce="000a8b40Y0100409c13ac7e6cceb069289041d8feb1691",uri="/axis-media/media.amp",cnonce="9946e2582bd590418c9b70e1b17956c7",nc=00000001,response="f3cab86fc84bfe33719675848e7fdc0a",qop="auth"

        HTTP/1.0 200 OK
        Content-Type: application/x-rtsp-tunnelled
        Date: Tue, 02 Nov 2010 11:45:23 GMT

        RTSP/1.0 200 OK
        CSeq: 1
        Content-Type: application/sdp
        Content-Base: rtsp://1.00408CBEA38B/axis-media/media.amp/
        Date: Tue, 02 Nov 2010 11:45:23 GMT
        Content-Length: 410

        v=0
        o=- 1288698323798001 1288698323798001 IN IP4 1.00408CBEA38B
        s=Media Presentation
        e=NONE
        c=IN IP4 0.0.0.0
        b=AS:50000
        t=0 0
        a=control:*
        a=range:npt=0.000000-
        m=video 0 RTP/AVP 96
        b=AS:50000
        a=framerate:30.0
        a=transform:1,0,0;0,1,0;0,0,1
        a=control:trackID=1
        a=rtpmap:96 H264/90000
        a=fmtp:96 packetization-mode=1; profile-level-id=420029; sprop-parameter-sets=Z0IAKeNQFAe2AtwEBAaQeJEV,aM48gA==

        RTSP/1.0 200 OK
        CSeq: 2
        Session: 3F4763D8; timeout=60
        Transport: RTP/AVP/TCP;unicast;interleaved=0-1;ssrc=060922C6;mode="PLAY"
        Date: Tue, 02 Nov 2010 11:45:24 GMT

        RTSP/1.0 200 OK
        CSeq: 3
        Session: 3F4763D8
        Range: npt=0-
        RTP-Info: url=rtsp://1.00408CBEA38B/axis-media/media.amp/trackID=1;seq=7392;rtptime=4190934902
        Date: Tue, 02 Nov 2010 11:45:24 GMT

        [Binary Stream Content]

    But when I copy this request into Fiddler, I only get a 200 status code with Content-Type set to application/x-rtsp-tunnelled, and there is no stream data. The only thing I do differently is to use Basic in the Authorization header instead of Digest, and I do not get a 401 (Unauthorized) status code. Can anyone explain what's happening here? How can I write the request sequence to get the stream in Fiddler? If needed, I can upload the Wireshark dump somewhere.
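    A hedged reading of why the copied request returns only the 200 header: in the usual RTSP-over-HTTP tunnelling scheme that Axis implements, the GET shown above is only the downstream half of the tunnel. The client is expected to open a second connection carrying the same x-sessioncookie and POST the base64-encoded RTSP commands (DESCRIBE/SETUP/PLAY, with the Digest authorization) on it; the camera then answers and streams on the GET connection. Replaying the GET alone, or swapping Digest for Basic, never starts the RTSP exchange. The shape of the two requests, sketched with assumed header values where they are not in the trace:

        # 1) Long-lived downstream channel; the camera replies with
        #    Content-Type: application/x-rtsp-tunnelled and later streams on it.
        GET /axis-media/media.amp HTTP/1.1
        Host: 1.00408CBEA38B
        x-sessioncookie: 619
        Accept: application/x-rtsp-tunnelled

        # 2) Second connection, SAME x-sessioncookie; the body is the base64-encoded
        #    RTSP request sequence (DESCRIBE, SETUP, PLAY) with Digest credentials.
        POST /axis-media/media.amp HTTP/1.1
        Host: 1.00408CBEA38B
        x-sessioncookie: 619
        Content-Type: application/x-rtsp-tunnelled
        Content-Length: <length of the base64 payload>

    Fiddler's composer only issues single request/response pairs, which is why it is a poor fit for this two-connection protocol.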

    Read the article

  • How to Block an HTTP Website along with All Its Subdomains Using iptables

    - by netnovice
    I run a small HTTP web proxy site. We cannot modify anything in the proxy program itself. A few users mainly use Yahoo webmail for spamming, so we need to block Yahoo webmail access through our proxy (blocking the complete Yahoo website would also be acceptable), especially .mail.yahoo.com. For example, we need to block URLs like http://uk-mg61.mail.yahoo.com and http://in-mg61.mail.yahoo.com, etc.
    Note: we generally open http://mail.yahoo.com in the browser, but after logging in it redirects to URLs like the above, and all of those are subdomains of mail.yahoo.com. My idea is that if I can get the full IP list for all available subdomains of mail.yahoo.com, I can block it completely. We can only use iptables. I know that within the proxy itself we could inspect the HTTP Host header for .mail.yahoo.com and block on that, but we cannot change the proxy.
    Solution so far: here is what I did with iptables. I collected as many Yahoo CIDR blocks as possible, mainly for Yahoo webmail (mail.yahoo.com), using the Linux host and whois commands (blocks like 66.163.160.0/19 and 98.136.0.0/14), and applied commands like:

        iptables -A OUTPUT -p tcp -d 66.163.160.0/19 -m state --state NEW -j DROP

    Things are mostly working and users cannot access Yahoo Mail, BUT the problem is that I need to keep the Yahoo CIDR list up to date. I am ready to do that every week, and I have collected many ranges from the net, but there are countless subdomains of mail.yahoo.com and it seems Yahoo adds new IPs every week. What I have observed is that sometimes a user can bypass our rules, obviously because not all the relevant IPs are in iptables yet.
    What we need is to enter all the IPs of mail.yahoo.com. But where do I find all the subdomains of mail.yahoo.com? I know we could get them from DNS, but I am not allowed to make DNS AXFR (zone transfer) queries, and doing reverse DNS lookups would cause performance issues. So: is there a way to get the full list of subdomains of .mail.yahoo.com, perhaps from a Yahoo site? I have the list of all Yahoo SMTP IPs (http://public.yahoo.com/carloc/ymail.html), but I need the webmail IPs. Can you please share your ideas? Thank you.
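    Since the traffic in question is plain HTTP, one alternative to chasing CIDR blocks is matching the Host header of the forwarded connections with the iptables string match. This is only a sketch: it assumes the kernel has the xt_string module available, it will not catch HTTPS, and the chain/interface should be adjusted to where the proxy's outbound traffic actually flows:

        # Drop outgoing HTTP connections whose payload carries the Yahoo Mail host header.
        # --algo bm selects Boyer-Moore matching.
        iptables -A OUTPUT -p tcp --dport 80 -m string --string "Host: mail.yahoo.com" --algo bm -j DROP
        # Also match the regional webmail farms (uk-mg61.mail.yahoo.com, in-mg61.mail.yahoo.com, ...)
        iptables -A OUTPUT -p tcp --dport 80 -m string --string ".mail.yahoo.com" --algo bm -j DROP

    This blocks by name rather than by address, so it keeps working when Yahoo adds new IP ranges.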

    Read the article

  • SQL Server High Availability - Mirroring with MSCS?

    - by David
    I'm looking at options for high availability for my SQL Server-powered application. The requirements are:
    - HA protection from storage failure.
    - Data accessibility while one of the DB servers is undergoing software updates (e.g. a planned outage for Windows Update / SQL Server service packs).
    - Must not involve much in the way of hardware procurement.
    The application is an ASP.NET web application, and each of its users gets their own database instance. I've seen two main options: SQL Server failover clustering and SQL Server mirroring. I understand that SQL Server failover clustering requires purchasing a shared disk array and doesn't offer any protection if the shared storage goes down (so the documentation recommends setting up mirroring between two clusters). Database mirroring seems the cheaper option (it only requires two database servers and a simple witness box), but I've heard it doesn't work well when you have a large number of databases. The application I'm developing gives each client their own database, so there could be hundreds of databases. Setting up the mirroring is no problem thanks to the automation systems we have in place.
    My final point concerns how failover works with respect to client connections. SQL Server failover clustering uses MSCS, which means the cluster is invisible to clients: a connection attempt might fail during the failover, but a simple reconnect will have it working again. Mirroring, however, as far as I know, requires that the client be aware of the mirrored partners: if the client cannot connect to the primary server, it tries the secondary. I'm wondering how this works with respect to connection pooling in ASP.NET applications: does the client-side failover mean there is a potential 2-second pause (assuming a 2000 ms TCP timeout policy) while the connection pool tries the primary server on every connection attempt?
    I read somewhere that mirroring can be used on top of MSCS, which would mean the client does not need to be aware of the mirroring (so there would be no potential delay during connection, and no changes would be needed on the client, not even the connection string); however, I'm finding it hard to get documentation or white papers on this approach. If true, it would mean the best method is mirroring (for HA) with MSCS (for client transparency and connection performance)... but how does this scale to a server instance that might contain hundreds of mirrored databases?
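    On the client-awareness point: with plain database mirroring the usual pattern is to put the mirror in the connection string rather than change application code. A sketch, with placeholder server and database names:

        Data Source=ServerA;Failover Partner=ServerB;Initial Catalog=ClientDb042;Integrated Security=True;Connect Timeout=15

    SqlClient also caches the partner name it learns from the principal after the first successful connection, so the explicit Failover Partner keyword mainly matters for the very first connection attempt made after a failover, which limits (but does not eliminate) the per-connection retry delay you are worried about.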

    Read the article

< Previous Page | 430 431 432 433 434 435 436 437 438 439 440 441  | Next Page >