Search Results

Search found 21501 results on 861 pages for 'slow connection'.


  • Configuring UCM cache to check for external Content Server changes

    - by Martin Deh
    Recently, I was involved in a customer scenario where they were modifying the Content Server's contributor data files directly through Content Server. This operation is, of course, completely supported. However, since the contributor data file was modified through the "backdoor", a running WebCenter Spaces page, which also used the same data file, would not get the updates immediately. There were two reasons for this. The first is that the Spaces page was using Content Presenter to display the contents of the data file. The second is that the Spaces application was using the "cached" version of the data file. Fortunately, there is a way to configure the cache so that backdoor changes can be picked up more quickly and automatically.

    First, a brief overview of Content Presenter. The Content Presenter task flow enables WebCenter Spaces users with Page-Edit permissions to precisely customize the selection and presentation of content in a WebCenter Spaces application. With Content Presenter, you can select a single item of content, contents under a folder, a list of items, or a query for content, and then select a Content Presenter-based template to render the content on a page in a Spaces application. In addition to displaying the folders and the files in a Content Server, Content Presenter integrates with Oracle Site Studio to allow you to create, access, edit, and display Site Studio contributor data files (Content Server documents) in either a Site Studio region template or in a custom Content Presenter display template. More information about creating Content Presenter display templates can be found in the OFM Developers Guide for WebCenter Portal.

    The easiest way to configure the cache is to modify the WebCenter Spaces Content Server service connection settings through Enterprise Manager. From here, under Cache Details, there is a section to set the Cache Invalidation Interval. Basically, this enables the cache to be monitored by the cache "sweeper" utility. The cache sweeper queries for changes in the Content Server, and then "marks" the object in cache as "dirty". This in turn causes the application to get a new copy of the document from the Content Server, which replaces the cached version. By default, the Cache Invalidation Interval is set to 0 (minutes), which means that the sweeper is OFF. To turn the sweeper ON, just set a value (in minutes); the minimal value that can be set is 2 (minutes).

    Just a note: in some instances, once the value of the Cache Invalidation Interval has been set (and saved) in the Enterprise Manager UI, it becomes "sticky" and the interval value cannot be set back to 0. The good news is that this value can also be updated through a WLST command. The WLST command to run is as follows:

        setJCRContentServerConnection(appName, name, [socketType, url, serverHost, serverPort,
            keystoreLocation, keystorePassword, privateKeyAlias, privateKeyPassword,
            webContextRoot, clientSecurityPolicy, cacheInvalidationInterval,
            binaryCacheMaxEntrySize, adminUsername, adminPassword, extAppId, timeout,
            isPrimary, server, applicationVersion])

    One way to get the required information for executing the command is to use the listJCRContentServerConnections('webcenter', verbose=true) command.
    For example, this is the sample output from the execution:

        ------------------ UCM ------------------
        Connection Name: UCM
        Connection Type: JCR
        External Appliction ID:
        Timeout: (not set)
        CIS Socket Type: socket
        CIS Server Hostname: webcenter.oracle.local
        CIS Server Port: 4444
        CIS Keystore Location:
        CIS Private Key Alias:
        CIS Web URL:
        Web Server Context Root: /cs
        Client Security Policy:
        Admin User Name: sysadmin
        Cache Invalidation Interval: 2
        Binary Cache Maximum Entry Size: 1024
        The Documents primary connection is "UCM"

    From this information, the completed setJCRContentServerConnection would be:

        setJCRContentServerConnection(appName='webcenter', name='UCM', socketType='socket',
            serverHost='webcenter.oracle.local', serverPort='4444', webContextRoot='/cs',
            cacheInvalidationInterval='0', binaryCacheMaxEntrySize='1024',
            adminUsername='sysadmin', isPrimary=1)

    Note: The Spaces managed server must be restarted for the change to take effect. More information about using WLST for WebCenter can be found here.

    Once the sweeper is turned ON, only cache objects that have been changed will be invalidated. To test this out, I will go through a simple scenario. The first thing to do is configure the Content Server so it can monitor and report on events. Log into the Content Server console application, and under the Administration menu item, select System Audit Information. Note: If your console is using the left menu display option, the Administration link will be located there. Under the Tracing Sections Information, add only "system" and "requestaudit" to the Active Sections. Check Full Verbose Tracing, check Save, then click the Update button. Once this is done, select the View Server Output menu option. This will change the browser view to display the log. This is all that is needed to configure the Content Server.

    For example, the following is the View Server Output with the cache invalidation interval set to 2 (minutes). Note the time stamps:

        requestaudit/6 08.30 09:52:26.001 IdcServer-68 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.016933999955654144(secs)
        requestaudit/6 08.30 09:52:26.010 IdcServer-69 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.006134999915957451(secs)
        requestaudit/6 08.30 09:52:26.014 IdcServer-70 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.004271999932825565(secs)
        ... other trace info ...
        requestaudit/6 08.30 09:54:26.002 IdcServer-71 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.020323999226093292(secs)
        requestaudit/6 08.30 09:54:26.011 IdcServer-72 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.017928000539541245(secs)
        requestaudit/6 08.30 09:54:26.017 IdcServer-73 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.010185999795794487(secs)

    Now that the tracing logs are reporting correctly, the next step is to set up the Spaces app to test the sweeper. I will use 2 different pages, each containing a Content Presenter task flow. Each task flow will use a different custom Content Presenter display template and will be assigned a different contributor data file (a document that will be in the cache). Initially, when the Space pages containing the content are loaded in the browser for the first time, you can see the tracing information in the Content Server output viewer.
        requestaudit/6 08.30 11:51:12.030 IdcServer-129 CLEAR_SERVER_OUTPUT [dUser=weblogic] 0.029171999543905258(secs)
        requestaudit/6 08.30 11:51:12.101 IdcServer-130 GET_SERVER_OUTPUT [dUser=weblogic] 0.025721000507473946(secs)
        requestaudit/6 08.30 11:51:26.592 IdcServer-131 VCR_GET_DOCUMENT_BY_NAME [dID=919][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.21525299549102783(secs)
        requestaudit/6 08.30 11:51:27.117 IdcServer-132 VCR_GET_CONTENT_TYPES [dUser=sysadmin][IsJava=1] 0.5059549808502197(secs)
        requestaudit/6 08.30 11:51:27.146 IdcServer-133 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.03360399976372719(secs)
        requestaudit/6 08.30 11:51:27.169 IdcServer-134 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.008806000463664532(secs)
        requestaudit/6 08.30 11:51:27.204 IdcServer-135 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.013265999965369701(secs)
        requestaudit/6 08.30 11:51:27.384 IdcServer-136 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.18119299411773682(secs)
        requestaudit/6 08.30 11:51:27.533 IdcServer-137 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.1519480049610138(secs)
        requestaudit/6 08.30 11:51:27.634 IdcServer-138 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.10827399790287018(secs)
        requestaudit/6 08.30 11:51:27.687 IdcServer-139 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.059702999889850616(secs)
        requestaudit/6 08.30 11:51:28.271 IdcServer-140 GET_USER_PERMISSIONS [dUser=weblogic][IsJava=1] 0.006703000050038099(secs)
        requestaudit/6 08.30 11:51:28.285 IdcServer-141 GET_ENVIRONMENT [dUser=sysadmin][IsJava=1] 0.010893999598920345(secs)
        requestaudit/6 08.30 11:51:30.433 IdcServer-142 GET_SERVER_OUTPUT [dUser=weblogic] 0.017318999394774437(secs)
        requestaudit/6 08.30 11:51:41.837 IdcServer-143 VCR_GET_DOCUMENT_BY_NAME [dID=508][dDocName=113_ES][dDocTitle=Landing Home][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.15937699377536774(secs)
        requestaudit/6 08.30 11:51:42.781 IdcServer-144 GET_FILE [dID=326][dDocName=WEBCENTERORACL000315][dDocTitle=Duke][dUser=anonymous][RevisionSelectionMethod=LatestReleased][dSecurityGroup=Public][xCollectionID=0] 0.16288499534130096(secs)

    The highlighted sections show where the 2 data files, DF_UCMCACHETESTER (P1 page) and 113_ES (P2 page), were called by the (Spaces) VCR connection to the Content Server. The most important line to notice is the VCR_GET_DOCUMENT_BY_NAME invocation. On subsequent refreshes of these 2 pages, you will notice (after you refresh the Content Server's View Server Output) that there are no further traces of the same VCR_GET_DOCUMENT_BY_NAME invocations. This is because the pages are getting the documents from the cache.

    The next step is to go through the "backdoor" and change one of the documents through the Content Server console. This can be done by first locating the data file document and then, from the Content Information page, selecting the Edit Data File menu option. This invokes the Site Studio Contributor, where the modifications can be made. Refreshing the Content Server View Server Output, the tracing displays the operations performed on the document.
        requestaudit/6 08.30 11:56:59.972 IdcServer-255 SS_CHECKOUT_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dUser=weblogic][dSecurityGroup=Public] 0.05558200180530548(secs)
        requestaudit/6 08.30 11:57:00.065 IdcServer-256 SS_GET_CONTRIBUTOR_CONFIG [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.08632399886846542(secs)
        requestaudit/6 08.30 11:57:00.470 IdcServer-259 DOC_INFO_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.02268899977207184(secs)
        requestaudit/6 08.30 11:57:10.177 IdcServer-264 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007652000058442354(secs)
        requestaudit/6 08.30 11:57:10.181 IdcServer-263 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.01868399977684021(secs)
        requestaudit/6 08.30 11:57:10.187 IdcServer-265 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.009367000311613083(secs)
        (internal)/6 08.30 11:57:26.118 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
        (internal)/6 08.30 11:57:26.121 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
        requestaudit/6 08.30 11:57:26.122 IdcServer-266 SS_SET_ELEMENT_DATA [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0][StatusCode=0][StatusMessage=Successfully checked in content item 'DF_UCMCACHETESTER'.] 0.3765290081501007(secs)
        requestaudit/6 08.30 11:57:30.710 IdcServer-267 DOC_INFO_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.07942699640989304(secs)
        requestaudit/6 08.30 11:57:30.733 IdcServer-268 SS_GET_CONTRIBUTOR_STRINGS [dUser=weblogic] 0.0044570001773536205(secs)

    After a few moments and a refresh of the P1 page, the updates have been applied. Note: The refresh time may vary, since the Cache Invalidation Interval (set to 2 minutes) is not tied to when the changes happened; the sweeper just runs every 2 minutes.

    Refreshing the Content Server View Server Output, the tracing displays the important information:

        requestaudit/6 08.30 11:59:10.171 IdcServer-270 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.00952600035816431(secs)
        requestaudit/6 08.30 11:59:10.179 IdcServer-271 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.011118999682366848(secs)
        requestaudit/6 08.30 11:59:10.182 IdcServer-272 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007447000127285719(secs)
        requestaudit/6 08.30 11:59:16.885 IdcServer-273 VCR_GET_DOCUMENT_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.0786449983716011(secs)

    After the specified interval the sweeper is invoked, which is noted by the GET_... calls. Since the history has noted the change, the next call is VCR_GET_DOCUMENT_BY_NAME, which retrieves the new version of the (modified) data file. Navigating back to the P2 page and viewing the server output, there are no further VCR_GET_DOCUMENT_BY_NAME calls to retrieve that data file. This simply means that the data file was retrieved from the cache.
    Upon further review of the server output, we can see that there was only 1 request for VCR_GET_DOCUMENT_BY_NAME:

        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor Request Audit Report over the last 120 Seconds for server webcenteroraclelocal16200****
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor -Num Requests 8 Errors 0 Reqs/sec. 0.06666944175958633 Avg. Latency (secs) 0.02762500010430813 Max Thread Count 2
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 1 Service VCR_GET_DOCUMENT_BY_NAME Total Elapsed Time (secs) 0.09200000017881393 Num requests 1 Num errors 0 Avg. Latency (secs) 0.09200000017881393
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 2 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.054999999701976776 Num requests 1 Num errors 0 Avg. Latency (secs) 0.054999999701976776
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 3 Service GET_FOLDER_HISTORY_REPORT Total Elapsed Time (secs) 0.028999999165534973 Num requests 2 Num errors 0 Avg. Latency (secs) 0.014499999582767487
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 4 Service GET_SERVER_OUTPUT Total Elapsed Time (secs) 0.017999999225139618 Num requests 1 Num errors 0 Avg. Latency (secs) 0.017999999225139618
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 5 Service GET_FILE Total Elapsed Time (secs) 0.013000000268220901 Num requests 1 Num errors 0 Avg. Latency (secs) 0.013000000268220901
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor ****End Audit Report*****
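
    For reference, here is the sequence of WLST calls described above, collected into one runnable form. This is a minimal sketch: the connect() URL and credentials are placeholders (they are not part of the original article), while the connection values come from the sample listJCRContentServerConnections output shown earlier.

        # WLST sketch - run with wlst.sh from the Oracle middleware home.
        # Placeholder admin URL and credentials; adjust for your environment.
        connect('weblogic', 'welcome1', 't3://adminhost:7001')

        # Inspect the current Content Server connection settings.
        listJCRContentServerConnections('webcenter', verbose=true)

        # Update the sweeper interval (here, back to 0 to turn it OFF).
        setJCRContentServerConnection(appName='webcenter', name='UCM',
            socketType='socket', serverHost='webcenter.oracle.local',
            serverPort='4444', webContextRoot='/cs',
            cacheInvalidationInterval='0', binaryCacheMaxEntrySize='1024',
            adminUsername='sysadmin', isPrimary=1)

        disconnect()
        # Remember: the Spaces managed server must be restarted for the change to take effect.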

    Read the article

  • How do I create a Linked Server in SQL Server 2005 to a password protected Access 95 database?

    - by Brad Knowles
    I need to create a linked server with SQL Server Management Studio 2005 to an Access 95 database, which happens to be password protected at the database level. User-level security has not been implemented. I cannot convert the Access database to a newer version. It is being used by a 3rd party application, so modifying it, in any way, is not allowed. I've tried using the Jet 4.0 OLE DB Provider and the ODBC OLE DB Provider. The 3rd party application creates a System DSN (with the proper database password), but I've not had any luck with either method.

    If I were using a standard connection string, I think it would look something like this:

        Provider=Microsoft.Jet.OLEDB.4.0;Data Source='C:\Test.mdb';Jet OLEDB:Database Password=####;

    I'm fairly certain I need to somehow incorporate Jet OLEDB:Database Password into the linked server setup, but haven't figured out how. I've posted the scripts I'm using along with the associated error messages below. Any help is greatly appreciated. I'll provide more details if needed, just ask. Thanks!

    Method #1 - Using the Jet 4.0 Provider

    When I try to run these statements to create the linked server:

        sp_dropserver 'Test', 'droplogins';
        EXEC sp_addlinkedserver
            @server = N'Test',
            @provider = N'Microsoft.Jet.OLEDB.4.0',
            @srvproduct = N'Access DB',
            @datasrc = N'C:\Test.mdb'
        GO
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname = N'Test',
            @useself = N'False',
            @locallogin = NULL,
            @rmtuser = N'Admin',
            @rmtpassword = '####'
        GO

    I get this error when testing the connection:

        TITLE: Microsoft SQL Server Management Studio
        ------------------------------
        "The test connection to the linked server failed."
        ------------------------------
        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        ------------------------------
        The OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test" reported an error. Authentication failed.
        Cannot initialize the data source object of OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test".
        OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test" returned message "Cannot start your application. The workgroup information file is missing or opened exclusively by another user.". (Microsoft SQL Server, Error: 7399)
        ------------------------------

    Method #2 - Using the ODBC Provider

        sp_dropserver 'Test', 'droplogins';
        EXEC sp_addlinkedserver
            @server = N'Test',
            @provider = N'MSDASQL',
            @srvproduct = N'ODBC',
            @datasrc = N'Test:DSN'
        GO
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname = N'Test',
            @useself = N'False',
            @locallogin = NULL,
            @rmtuser = N'Admin',
            @rmtpassword = '####'
        GO

    I get this error:

        TITLE: Microsoft SQL Server Management Studio
        ------------------------------
        "The test connection to the linked server failed."
        ------------------------------
        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        ------------------------------
        Cannot initialize the data source object of OLE DB provider "MSDASQL" for linked server "Test".
        OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed".
        OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed".
        OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Microsoft Access Driver] Cannot open database '(unknown)'. It may not be a database that your application recognizes, or the file may be corrupt.". (Microsoft SQL Server, Error: 7303)
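
    One avenue that may be worth exploring (an assumption, not something tried in the original post): sp_addlinkedserver also accepts an @provstr argument, which is handed to the OLE DB provider as extended properties - the natural place for Jet OLEDB:Database Password. A hedged sketch, reusing the placeholder path and password from the question:

        -- Sketch only: pass the Jet database password in the provider string.
        EXEC sp_addlinkedserver
            @server     = N'Test',
            @provider   = N'Microsoft.Jet.OLEDB.4.0',
            @srvproduct = N'Access DB',
            @datasrc    = N'C:\Test.mdb',
            @provstr    = N'Jet OLEDB:Database Password=####;';

        EXEC sp_addlinkedsrvlogin
            @rmtsrvname  = N'Test',
            @useself     = N'False',
            @locallogin  = NULL,
            @rmtuser     = N'Admin',
            @rmtpassword = '####';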

    Read the article

  • stunnel crashing

    - by Jay
    I'm trying to use stunnel to secure a legacy application's communications. I can't seem to get it set up and working. Can anyone provide any hints as to where I'm going wrong?

    Here's what I'm trying to accomplish: a Windows service on a client machine connects to a server on port 7000 using TCP. I'd like to encrypt the communication between client and server.

    Here's what I've tried: Created a new server that accepts SSL connections on port 7443. Got a certificate for the server and installed it. That seems to work with my test setup. Installed stunnel on my Windows machine (version 4.33 from the distribution archive file). Installed libssl32.dll and libeay32.dll in the same directory as stunnel.exe (from the openssl-0.9.8h-1 binary distribution). Installed it as a service using "stunnel -install". Configured stunnel as follows:

        debug=7
        output=C:\p4\internal\Utility\Proxy\proxy.log
        service=Proxy
        taskbar=no

        [exchange]
        accept=7000
        client=yes
        connect=proxy.blah.com:7443

    I changed my hosts file to trick the old application into connecting through stunnel:

        server.blah.com 127.0.0.1                    # when client looks up server it goes to stunnel
        proxy.blah.com IP-address-of-server.blah.com # stunnel connects to new server

    "server.blah.com" now resolves to the machine it's running on (i.e. stunnel). "proxy.blah.com" goes to the real server. stunnel should connect to the server. I start the stunnel service and try to connect. It looks like it's working, but the stunnel service just shuts down with no message.

        2010.04.19 13:16:21 LOG5[4924:3716]: stunnel 4.33 on x86-pc-mingw32-gnu with OpenSSL 0.9.8h 28 May 2008
        2010.04.19 13:16:21 LOG5[4924:3716]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6
        2010.04.19 13:16:49 LOG5[4924:3748]: Service exchange accepted connection from 127.0.0.1:4134
        2010.04.19 13:16:49 LOG6[4924:3748]: connect_blocking: connecting x.80.60.32:7443
        2010.04.19 13:16:49 LOG5[4924:3748]: connect_blocking: connected x.80.60.32:7443
        2010.04.19 13:16:49 LOG5[4924:3748]: Service exchange connected remote server from x.253.120.19:4135
        2010.04.19 13:20:24 LOG5[3668:3856]: Reading configuration from file stunnel.conf
        2010.04.19 13:20:24 LOG7[3668:3856]: Snagged 64 random bytes from C:/.rnd
        2010.04.19 13:20:24 LOG7[3668:3856]: Wrote 1024 new random bytes to C:/.rnd
        2010.04.19 13:20:24 LOG7[3668:3856]: RAND_status claims sufficient entropy for the PRNG
        2010.04.19 13:20:24 LOG7[3668:3856]: PRNG seeded successfully
        2010.04.19 13:20:24 LOG7[3668:3856]: SSL context initialized for service exchange
        2010.04.19 13:20:24 LOG5[3668:3856]: Configuration successful
        2010.04.19 13:20:24 LOG5[3668:3856]: No limit detected for the number of clients
        2010.04.19 13:20:24 LOG7[3668:3856]: FD=312 in non-blocking mode
        2010.04.19 13:20:24 LOG7[3668:3856]: Option SO_REUSEADDR set on accept socket
        2010.04.19 13:20:24 LOG7[3668:3856]: Service exchange bound to 0.0.0.0:7000
        2010.04.19 13:20:24 LOG7[3668:3856]: Service exchange opened FD=312
        2010.04.19 13:20:24 LOG5[3668:3856]: stunnel 4.33 on x86-pc-mingw32-gnu with OpenSSL 0.9.8h 28 May 2008
        2010.04.19 13:20:24 LOG5[3668:3856]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6
        2010.04.19 13:21:02 LOG7[3668:4556]: Service exchange accepted FD=372 from 127.0.0.1:4156
        2010.04.19 13:21:02 LOG7[3668:4556]: Creating a new thread
        2010.04.19 13:21:02 LOG7[3668:4556]: New thread created
        2010.04.19 13:21:02 LOG7[3668:3756]: Service exchange started
        2010.04.19 13:21:02 LOG7[3668:3756]: FD=372 in non-blocking mode
        2010.04.19 13:21:02 LOG5[3668:3756]: Service exchange accepted connection from 127.0.0.1:4156
        2010.04.19 13:21:02 LOG7[3668:3756]: FD=396 in non-blocking mode
        2010.04.19 13:21:02 LOG6[3668:3756]: connect_blocking: connecting x.80.60.32:7443
        2010.04.19 13:21:02 LOG7[3668:3756]: connect_blocking: s_poll_wait x.80.60.32:7443: waiting 10 seconds
        2010.04.19 13:21:02 LOG5[3668:3756]: connect_blocking: connected x.80.60.32:7443
        2010.04.19 13:21:02 LOG5[3668:3756]: Service exchange connected remote server from x.253.120.19:4157
        2010.04.19 13:21:02 LOG7[3668:3756]: Remote FD=396 initialized
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): before/connect initialization
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write client hello A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server hello A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server certificate A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server done A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write client key exchange A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write change cipher spec A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write finished A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 flush data
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read finished A

    The client thinks the connection is closed:

        No connection could be made because the target machine actively refused it 127.0.0.1:7000
           at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
           at System.Net.Sockets.Socket.Connect(EndPoint remoteEP)
           at Service.ConnUtility.Connect()

    Any suggestions?
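
    As an aside, the Windows hosts file expects the IP address in the first column. Written in the standard form, the mapping described above would look like the sketch below (x.80.60.32 stands in for the real server address, as it does in the logs):

        # %SystemRoot%\System32\drivers\etc\hosts - IP address first, then hostname
        127.0.0.1    server.blah.com    # old client resolves the server name to local stunnel
        x.80.60.32   proxy.blah.com     # stunnel's onward connection reaches the real server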

    Read the article

  • Cisco ASA (Client VPN) to LAN - through second VPN to second LAN

    - by user50855
    We have 2 sites that are linked by an IPSEC VPN between remote Cisco devices:

    Site 1: 1.5Mb T1 connection, Cisco(1) 2841
    Site 2: 1.5Mb T1 connection, Cisco 2841

    In addition, Site 1 has a 2nd WAN connection: a 3Mb bonded T1 with a Cisco ASA 5510 that connects to the same LAN as Cisco(1) 2841.

    Basically, Remote Access (VPN) users connecting through the Cisco ASA 5510 need access to a service at the end of Site 2. This is due to the way the service is sold - the Cisco 2841 routers are not under our management, and they are set up to allow connections from the local LAN VLAN 1 IP address range 10.20.0.0/24. My idea is to have all traffic from remote users through the Cisco ASA destined for Site 2 go via the VPN between Site 1 and Site 2, the end result being that all traffic hitting Site 2 has come via Site 1.

    I'm struggling to find a great deal of information on how this is set up. So, firstly, can anyone confirm that what I'm trying to achieve is possible? Secondly, can anyone help me to correct the configuration below, or point me in the direction of an example of such a configuration? Many thanks.

        interface Ethernet0/0
         nameif outside
         security-level 0
         ip address 7.7.7.19 255.255.255.240
        interface Ethernet0/1
         nameif inside
         security-level 100
         ip address 10.20.0.249 255.255.255.0
        object-group network group-inside-vpnclient
         description All inside networks accessible to vpn clients
         network-object 10.20.0.0 255.255.255.0
         network-object 10.20.1.0 255.255.255.0
        object-group network group-adp-network
         description ADP IP Address or network accessible to vpn clients
         network-object 207.207.207.173 255.255.255.255
        access-list outside_access_in extended permit icmp any any echo-reply
        access-list outside_access_in extended permit icmp any any source-quench
        access-list outside_access_in extended permit icmp any any unreachable
        access-list outside_access_in extended permit icmp any any time-exceeded
        access-list outside_access_in extended permit tcp any host 7.7.7.20 eq smtp
        access-list outside_access_in extended permit tcp any host 7.7.7.20 eq https
        access-list outside_access_in extended permit tcp any host 7.7.7.20 eq pop3
        access-list outside_access_in extended permit tcp any host 7.7.7.20 eq www
        access-list outside_access_in extended permit tcp any host 7.7.7.21 eq www
        access-list outside_access_in extended permit tcp any host 7.7.7.21 eq https
        access-list outside_access_in extended permit tcp any host 7.7.7.21 eq 5721
        access-list acl-vpnclient extended permit ip object-group group-inside-vpnclient any
        access-list acl-vpnclient extended permit ip object-group group-inside-vpnclient object-group group-adp-network
        access-list acl-vpnclient extended permit ip object-group group-adp-network object-group group-inside-vpnclient
        access-list PinesFLVPNTunnel_splitTunnelAcl standard permit 10.20.0.0 255.255.255.0
        access-list inside_nat0_outbound_1 extended permit ip 10.20.0.0 255.255.255.0 10.20.1.0 255.255.255.0
        access-list inside_nat0_outbound_1 extended permit ip 10.20.0.0 255.255.255.0 host 207.207.207.173
        access-list inside_nat0_outbound_1 extended permit ip 10.20.1.0 255.255.255.0 host 207.207.207.173
        ip local pool VPNPool 10.20.1.100-10.20.1.200 mask 255.255.255.0
        route outside 0.0.0.0 0.0.0.0 7.7.7.17 1
        route inside 207.207.207.173 255.255.255.255 10.20.0.3 1
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        crypto dynamic-map outside_dyn_map 20 set transform-set ESP-3DES-SHA
        crypto dynamic-map outside_dyn_map 20 set security-association lifetime seconds 288000
        crypto dynamic-map outside_dyn_map 20 set security-association lifetime kilobytes 4608000
        crypto dynamic-map outside_dyn_map 20 set reverse-route
        crypto map outside_map 20 ipsec-isakmp dynamic outside_dyn_map
        crypto map outside_map interface outside
        crypto map outside_dyn_map 20 match address acl-vpnclient
        crypto map outside_dyn_map 20 set security-association lifetime seconds 28800
        crypto map outside_dyn_map 20 set security-association lifetime kilobytes 4608000
        crypto isakmp identity address
        crypto isakmp enable outside
        crypto isakmp policy 20
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        group-policy YeahRightflVPNTunnel internal
        group-policy YeahRightflVPNTunnel attributes
         wins-server value 10.20.0.9
         dns-server value 10.20.0.9
         vpn-tunnel-protocol IPSec
         password-storage disable
         pfs disable
         split-tunnel-policy tunnelspecified
         split-tunnel-network-list value acl-vpnclient
         default-domain value YeahRight.com
        group-policy YeahRightFLVPNTunnel internal
        group-policy YeahRightFLVPNTunnel attributes
         wins-server value 10.20.0.9
         dns-server value 10.20.0.9 10.20.0.7
         vpn-tunnel-protocol IPSec
         split-tunnel-policy tunnelspecified
         split-tunnel-network-list value YeahRightFLVPNTunnel_splitTunnelAcl
         default-domain value yeahright.com
        tunnel-group YeahRightFLVPN type remote-access
        tunnel-group YeahRightFLVPN general-attributes
         address-pool VPNPool
        tunnel-group YeahRightFLVPNTunnel type remote-access
        tunnel-group YeahRightFLVPNTunnel general-attributes
         address-pool VPNPool
         authentication-server-group WinRadius
         default-group-policy YeahRightFLVPNTunnel
        tunnel-group YeahRightFLVPNTunnel ipsec-attributes
         pre-shared-key *

    Read the article

  • Using the Data Form Web Part (SharePoint 2010) Site Agnostically!

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/24/154465.aspx

    As a developer who has worked closely with web designers (power users) in a SharePoint environment, I have come across the issue of making the Data Form Web Part reusable across the site collection! In SharePoint 2007 it was very easy, and this blog pointed the way to make it happen: Josh Gaffey's Blog. In SharePoint 2010 something changed! This method failed except when using a Data Form Web Part that pointed to a list in the site collection root! I am framing this discussion around a developer who creates a solution (WSP) with all the artifacts embedded, where the user shouldn't have any involvement in the process except to activate features.

    The Scenario:

    1. A power user creates a Data Form Web Part using SharePoint Designer 2010! It is a great web part that uses all the power of SharePoint Designer and XSLT (conditional formatting, etc.).
    2. Other users in the site collection want to use that specific web part in sub sites in the site collection, pointing to a list with the same name, not at the site collection root!

    The Issues:

    1. The Data Form Web Part data source uses a list ID (GUID) to point to the specific list. This means the corresponding list in a sub site will have a new GUID, different from the one the web part was created with in SharePoint Designer! Obviously, the list needs to be the same list (fields, content types, etc.) with different data.
    2. How can we make this web part site agnostic, and dependent only on the list's name?

    I had this problem come up over and over and decided to put my solution forward!

    The Solution:

    1. Use the XSL of the Data Form Web Part created by the power user in SharePoint Designer!
    2. Extend the OOTB Data Form Web Part to use this XSL and point to a list by name.

    This is a hybrid solution that requires some coding (developer) and the XSL (power user) artifacts put together in a Visual Studio SharePoint solution. Here are the solution steps in summary:

    1. Create an empty SharePoint project in Visual Studio.
    2. Create a Module and Feature and put the XSL file created by the power user into it.
       a. Scope the feature to web.
    3. Create a feature receiver to create the list - the same list from which the power user created the Data Form Web Part.
       a. Scope the feature to web.
    4. Create a web part extending the Data Form Web Part.
       a. Point the Data Form Web Part to the list by name.
       b. Point the Data Form Web Part XSL link to the XSL added using the Module feature.
       c. Scope the feature to site.
          i. This is because all web parts are in the site collection web part gallery.

    So, in a narrative summary: we are creating a list in code which has the same name and site columns as the list from which the power user created the Data Form Web Part using SharePoint Designer, and we are creating a web part in code which extends the OOTB Data Form Web Part to point to a list by name and use the XSL created by the power user.

    Okay! Here are the steps with images and code! At the end of this post I will provide a link to the code for a solution which works in any site! I want to TOOT the HORN for the power of this solution! It is the mantra I use with all my clients! What is a basic skill for a SharePoint developer? Creating an application that uses the data from a SharePoint list and makes that data visible to the user in a manner which meets requirements!
    Create an empty SharePoint 2010 project. Here I am naming my project DJ.DataFormWebPart. Create a Code folder, copy and paste the Extension and Utilities classes (found in the solution provided at the end of this post), and change the namespace to match this project.

    The list from which the power user created the Data Form Web Part (and its XSL) in SharePoint Designer is now going to be created in code! If it is already created in code, then all the better! Here I am going to create a list in the site collection root and add some data to it! For the purpose of this discussion I will actually create this list in code before using SharePoint Designer, for simplicity! I will use a list I created before for demo purposes: the Footer List, which is used within the footer of my master page.

    Add a new Feature: here I name the feature FooterList and add a feature event receiver. Below is the code for the event receiver. I have a previous blog post about adding lists in code, so I will not take time to narrate it:

        using System;
        using System.Runtime.InteropServices;
        using System.Security.Permissions;
        using Microsoft.SharePoint;
        using DJ.DataFormWebPart.Code;

        namespace DJ.DataFormWebPart.Features.FooterList
        {
            /// <summary>
            /// This class handles events raised during feature activation, deactivation,
            /// installation, uninstallation, and upgrade.
            /// </summary>
            /// <remarks>
            /// The GUID attached to this class may be used during packaging and should not be modified.
            /// </remarks>
            [Guid("a58644fd-9209-41f4-aa16-67a53af7a9bf")]
            public class FooterListEventReceiver : SPFeatureReceiver
            {
                SPWeb currentWeb = null;
                SPSite currentSite = null;
                const string columnGroup = "DJ";
                const string ctName = "FooterContentType";

                public override void FeatureActivated(SPFeatureReceiverProperties properties)
                {
                    using (SPWeb spWeb = properties.GetWeb() as SPWeb)
                    {
                        using (SPSite site = new SPSite(spWeb.Site.ID))
                        {
                            using (SPWeb rootWeb = site.OpenWeb(site.RootWeb.ID))
                            {
                                // add the fields
                                addFields(rootWeb);

                                // add content type
                                SPContentType testCT = rootWeb.ContentTypes[ctName];
                                // we will not create the content type if it exists
                                if (testCT == null)
                                {
                                    // the content type does not exist, add it
                                    addContentType(rootWeb, ctName);
                                }

                                if ((spWeb.Lists.TryGetList("FooterList") == null))
                                {
                                    // create the list if it doesn't exist
                                    CreateFooterList(spWeb, site);
                                }
                            }
                        }
                    }
                }

                #region ContentType
                public void addFields(SPWeb spWeb)
                {
                    Utilities.addField(spWeb, "Link", SPFieldType.URL, false, columnGroup);
                    Utilities.addField(spWeb, "Information", SPFieldType.Text, false, columnGroup);
                }

                private static void addContentType(SPWeb spWeb, string name)
                {
                    SPContentType myContentType =
                        new SPContentType(spWeb.ContentTypes["Item"], spWeb.ContentTypes, name)
                        { Group = columnGroup };
                    spWeb.ContentTypes.Add(myContentType);
                    addContentTypeLinkages(spWeb, myContentType);
                    myContentType.Update();
                }

                public static void addContentTypeLinkages(SPWeb spWeb, SPContentType ct)
                {
                    Utilities.addContentTypeLink(spWeb, "Link", ct);
                    Utilities.addContentTypeLink(spWeb, "Information", ct);
                }

                private void CreateFooterList(SPWeb web, SPSite site)
                {
                    Guid newListGuid = web.Lists.Add("FooterList", "Footer List", SPListTemplateType.GenericList);
                    SPList newList = web.Lists[newListGuid];
                    newList.ContentTypesEnabled = true;
                    var footer = site.RootWeb.ContentTypes[ctName];
                    newList.ContentTypes.Add(footer);
                    newList.ContentTypes.Delete(newList.ContentTypes["Item"].Id);
                    newList.Update();

                    var view = newList.DefaultView;
                    // add all view fields here
                    // view.ViewFields.Add("NewsTitle");
                    view.ViewFields.Add("Link");
                    view.ViewFields.Add("Information");
                    view.Update();
                }
                #endregion
            }
        }

    Basically, this creates a content type with two site columns, Link and Information. I had to change some code, as we are working at the SPWeb level and need content types at the SPSite level. I'll use a new site collection for this demo (best practice) to keep old artifacts from impinging on development.

    Next we will add this list to the root of the site collection by deploying this solution, add some data, and then use SharePoint Designer to create a Data Form Web Part. With the list added and some data in it, let's add a Data Form Web Part in SharePoint Designer. Create a new web part page in the Site Pages library; I will name it TestWP.aspx and edit it in advanced mode. Add an empty Data Form Web Part to the web part zone, click on the web part to add a data source, and choose FooterList in the Data Source menu. Choose the appropriate fields and select "insert as multiple item view".

    Let's add some conditional formatting for when the Information field is not blank: choose Create (right side), apply formatting, choose the Information field, set the condition to "not null", and click Set Style. Okay! Not flashy, but simple enough for this demo. Remember, this is the job of the power user! All we want from this web part is the XSL style sheet out of SharePoint Designer. We are going to use it as the XSL for the web part we will be creating next.

    Let's add a web part to our project extending the OOTB Data Form Web Part.
    Add a new item from the Visual Studio add menu and choose Web Part. Change WebPart to DataFormWebPart (oh well, my namespace needs some improvement, but it will sure make it readily identifiable as an extended web part!). Below is the code for this web part:

        using System;
        using System.ComponentModel;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Web.UI.WebControls.WebParts;
        using Microsoft.SharePoint;
        using Microsoft.SharePoint.WebControls;
        using System.Text;

        namespace DJ.DataFormWebPart.DataFormWebPart
        {
            [ToolboxItemAttribute(false)]
            public class DataFormWebPart : Microsoft.SharePoint.WebPartPages.DataFormWebPart
            {
                protected override void OnInit(EventArgs e)
                {
                    base.OnInit(e);
                    this.ChromeType = PartChromeType.None;
                    this.Title = "FooterListDF";
                    try
                    {
                        //SPSite site = SPContext.Current.Site;
                        SPWeb web = SPContext.Current.Web;
                        SPList list = web.Lists.TryGetList("FooterList");
                        if (list != null)
                        {
                            string queryList1 = "<Query><Where><IsNotNull><FieldRef Name='Title' /></IsNotNull></Where><OrderBy><FieldRef Name='Title' Ascending='True' /></OrderBy></Query>";
                            uint maximumRowList1 = 10;
                            SPDataSource dataSourceList1 = GetDataSource(list.Title, web.Url, list, queryList1, maximumRowList1);
                            this.DataSources.Add(dataSourceList1);
                            this.XslLink = web.Url + "/Assests/Footer.xsl";
                            this.ParameterBindings = BuildDataFormParameters();
                            this.DataBind();
                        }
                    }
                    catch (Exception ex)
                    {
                        this.Controls.Add(new LiteralControl("ERROR: " + ex.Message));
                    }
                }

                private SPDataSource GetDataSource(string dataSourceId, string webUrl, SPList list, string query, uint maximumRow)
                {
                    SPDataSource dataSource = new SPDataSource();
                    dataSource.UseInternalName = true;
                    dataSource.ID = dataSourceId;
                    dataSource.DataSourceMode = SPDataSourceMode.List;
                    dataSource.List = list;
                    dataSource.SelectCommand = "" + query + "";
                    Parameter listIdParam = new Parameter("ListID");
                    listIdParam.DefaultValue = list.ID.ToString("B").ToUpper();
                    Parameter maximumRowsParam = new Parameter("MaximumRows");
                    maximumRowsParam.DefaultValue = maximumRow.ToString();
                    QueryStringParameter rootFolderParam = new QueryStringParameter("RootFolder", "RootFolder");
                    dataSource.SelectParameters.Add(listIdParam);
                    dataSource.SelectParameters.Add(maximumRowsParam);
                    dataSource.SelectParameters.Add(rootFolderParam);
                    dataSource.UpdateParameters.Add(listIdParam);
                    dataSource.DeleteParameters.Add(listIdParam);
                    dataSource.InsertParameters.Add(listIdParam);
                    return dataSource;
                }

                private string BuildDataFormParameters()
                {
                    StringBuilder parameters = new StringBuilder("<ParameterBindings><ParameterBinding Name=\"dvt_apos\" Location=\"Postback;Connection\"/><ParameterBinding Name=\"UserID\" Location=\"CAMLVariable\" DefaultValue=\"CurrentUserName\"/><ParameterBinding Name=\"Today\" Location=\"CAMLVariable\" DefaultValue=\"CurrentDate\"/>");
                    parameters.Append("<ParameterBinding Name=\"dvt_firstrow\" Location=\"Postback;Connection\"/>");
                    parameters.Append("<ParameterBinding Name=\"dvt_nextpagedata\" Location=\"Postback;Connection\"/>");
                    parameters.Append("<ParameterBinding Name=\"dvt_adhocmode\" Location=\"Postback;Connection\"/>");
                    parameters.Append("<ParameterBinding Name=\"dvt_adhocfiltermode\" Location=\"Postback;Connection\"/>");
                    parameters.Append("</ParameterBindings>");
                    return parameters.ToString();
                }
            }
        }

    We use the OnInit method to set the list name and the XSL link property of the Data Form Web Part.
    We do not yet have the XSL in our solution, so we will add it now. Add a Module from the Visual Studio add menu, rename Sample.txt in the module to footer.xsl, and then copy in the XSL from SharePoint Designer. Look at elements.xml to see where footer.xsl is being provisioned to - here, Assests/footer.xsl - and make sure the web part's XSL link points to this URL.

    Okay, we are good to go! Let's check our features and package: the DataFormWebPart feature should be scoped to site and contain the web part; the FooterList feature should be scoped to web and contain the Assests module (okay, I see, a spelling issue, but it won't affect this demo). If everything is correct, we should be able to click a couple of sub-site feature activations and have our list and web part in a sub site. (In fact, this solution can be activated anywhere.)

    Here is the list created at SubSite1 with new data in it. Next, let's add the web part on a test page and see if it works as expected: it does! So we now have a repeatable way to use a WSP to move a Data Form Web Part around our sites!

    Here is a link to the code: DataFormWebPart Solution

    Read the article

  • Oracle performance problem

    - by jreid42
    We are using an Oracle 11g machine that is very powerful and has redundant storage, etc. It's a beast, from what I have been told. We got this DB for a tool that, when I first came on as a co-op, had about 20 people using it; now it's upwards of 150 people, and I am the only one working on it :(

    We currently have a system in place that distributes Perl scripts across our entire data center, essentially giving us a sort of "grid" computing power. The Perl scripts run a sort of simulation and report the results back to the database; they do selects and inserts. The load is not very high for each script, but it could be happening across 20-50 systems at the same time. We then have multiple data centers and users all hitting the same database with this same approach.

    Our main problem is that our database is getting overloaded with connections and having to drop some. We sometimes have upwards of 500 connections. These are old Perl scripts and they do not handle this well: essentially they fail and the results are lost. I would rather avoid having to rewrite a lot of these, as they are poorly written and are a headache to even look at. The database itself is not overloaded; just the connection overhead is too high. We open a connection, make a quick query, and then drop the connection - very short connections, but many of them. The database team has basically said we need to lower the number of connections or they are going to ignore us.

    Because this is distributed across our farm, we can't implement persistent connections. I do this with our web server, but it's on a fixed system. The other ones are Perl scripts that get opened and closed by the distribution tool and thus aren't always running. What would be my best approach to resolving this issue? The scripts themselves can wait for a connection to be open; they do not need to act immediately. Some sort of queuing system?

    I've been advised to set up a few instances of a tool called "SQL Relay", maybe one in each data center. How reliable is this tool? How good is this approach? Would it work for what we need? We could have one for each data center and relay requests through it to our main database, keeping a pipeline of open persistent connections? Does this make sense? Are there any other suggestions you can make? Any ideas? Any help would be greatly appreciated.

    Sadly, I am just a co-op student working for a very big company, and somehow all of this has landed on my shoulders (there is literally nobody to ask for help; it's a hardware company, everybody is a hardware engineer, and the database team is useless and in India), and I am quite lost as to what the best approach would be. I am extremely overworked, and this problem is interfering with ongoing progress and basically needs to be resolved as quickly as possible; preferably without rewriting the whole system, purchasing hardware (not gonna happen), or shooting myself in the foot. HELP LOL!
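
    To make the "pipeline of open persistent connections" idea concrete, here is a minimal sketch of a relay process that keeps a small pool of long-lived database sessions and serves many short-lived requests from it. It is written in Python with cx_Oracle purely for illustration (the scripts in question are Perl); the DSN, credentials, and pool sizes are placeholders, not values from the original post.

        # Sketch of a per-datacenter relay holding a persistent connection pool.
        # Placeholder credentials/DSN; requires the cx_Oracle driver.
        import cx_Oracle

        # One long-lived pool per relay process instead of connect/query/disconnect
        # per script invocation; the database sees ~20 sessions instead of ~500.
        pool = cx_Oracle.SessionPool(user="scott", password="tiger",
                                     dsn="dbhost.example.com/orcl",
                                     min=2, max=20, increment=2, threaded=True)

        def run_query(sql, params=None):
            """Borrow a pooled session, run one statement, return the session."""
            conn = pool.acquire()
            try:
                cursor = conn.cursor()
                cursor.execute(sql, params or [])
                return cursor.fetchall()
            finally:
                pool.release(conn)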

    Read the article

  • Apache cyclic redirection problem

    - by slicedlime
    I have an extremely weird problem with one of my sites. I run a number of blogs off a single apache2 server with a shared WordPress install. Each site has a www.domain.com main domain, but a ServerAlias of domain.com. This works fine for all the blogs except one, which instead of redirecting to www.domain.com redirects to domain.com, causing a cyclic redirection. The configuration for each host looks like this:

        <VirtualHost *:80>
            ServerName www.domain.com
            ServerAlias domain.com
            DocumentRoot "/home/www/www.domain.com"
            <Directory "/home/www/www.domain.com">
                Options MultiViews Indexes Includes FollowSymLinks ExecCGI
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    As this didn't work, I tried a mod_rewrite rule for it, which still didn't redirect correctly. The weird thing here is that if I rewrite it to redirect to any other domain, it will redirect correctly - even to another subdomain. So a rewrite rule for domain.com that redirects to foo.domain.com works, but not one to www.domain.com. In the same way, trying to redirect to www.domain.com/foo/ ends me up with a redirection to domain.com/foo/.

    Even weirder, I tried setting up domain.com as a completely separate virtual host, and ran this PHP test script as index.php on it:

        <?php header('Location: http://www.domain.com/' . $_SERVER["request_uri"]); ?>

    Hitting domain.com still redirects to domain.com! Checking the headers sent to the server verifies that I get exactly the redirect URL I wanted, except with the "www." stripped. The same script works like a charm if I replace www. with foo or redirect to any other domain.

    This has now foiled me for a long time. I've diffed the vhost configs for a working domain and the faulty one, and the only difference is the domain name itself. I've diffed the .htaccess files for both sites, and the only difference is a path related to the sharing of the WordPress installation for the blogs:

        php_value include_path ".:/home/www/www.domain.com/local/:/home/www/www.domain.com/"

    I searched through everything in /etc (including the Apache conf) for the domain name of the faulty host and found nothing weird, searched through everything in /etc for one of the working ones to make sure it didn't differ, and I even went so far as to check the DNS setup of the two domains to make sure there wasn't anything strange going on. Here's the response for the faulty one:

        user@localhost dir $ wget -S domain.com
        --2010-03-20 21:47:24-- http://domain.com/
        Resolving domain.com... x.x.x.x
        Connecting to domain.com|x.x.x.x|:80... connected.
        HTTP request sent, awaiting response...
          HTTP/1.1 301 Moved Permanently
          Via: 1.1 ISA
          Connection: Keep-Alive
          Proxy-Connection: Keep-Alive
          Content-Length: 0
          Date: Sat, 20 Mar 2010 20:47:24 GMT
          Location: http://domain.com/
          Content-Type: text/html; charset=UTF-8
          Server: Apache
          X-Powered-By: PHP/5.2.10-pl0-gentoo
          X-Pingback: http://domain.com/xmlrpc.php
          Keep-Alive: timeout=15, max=100
        Location: http://domain.com/ [following]

    And a working one:

        user@localhost dir $ wget -S domain.com
        --2010-03-20 21:51:33-- http://domain.com/
        Resolving domain.com... x.x.x.x
        Connecting to domain.com|x.x.x.x|:80... connected.
        HTTP request sent, awaiting response...
          HTTP/1.1 301 Moved Permanently
          Via: 1.1 ISA
          Connection: Keep-Alive
          Proxy-Connection: Keep-Alive
          Content-Length: 0
          Date: Sat, 20 Mar 2010 20:51:33 GMT
          Location: http://www.domain.com/
          Content-Type: text/html; charset=UTF-8
          Server: Apache
          X-Powered-By: PHP/5.2.10-pl0-gentoo
          X-Pingback: http://www.domain.com/xmlrpc.php
          Keep-Alive: timeout=15, max=100
        Location: http://www.domain.com/ [following]

    I'm stumped. I've had this problem for a long time, and I feel like I've tried everything. I can't see why the different domains would act differently under the same installation with the same settings. Help :(
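
    For comparison, the canonical way to force the www prefix with mod_rewrite is shown below. This is a generic sketch, not the poster's actual rule set:

        # Generic host canonicalization (vhost or .htaccess context):
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^ http://www.domain.com%{REQUEST_URI} [R=301,L]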

    Read the article

  • JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue

    - by John-Brown.Evans
    This post is the second in a series of JMS articles which demonstrate how to use JMS queues in a SOA context. In the previous post, JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g, I showed you how to create a JMS queue and its dependent objects in WebLogic Server. In this article, we will use a sample program to write a message to that queue. Please review the previous post if you have not created those objects yet, as they will be required later in this example. The previous post also includes useful background information and links to the Oracle documentation for additional research. The following post in this series will show how to read the message from the queue again.

    1. Source code

    The following Java code will be used to write a message to the JMS queue. It is based on a sample program provided with the WebLogic Server installation. The sample is not installed by default, but needs to be installed manually using the WebLogic Server Custom Installation option, together with many other useful samples. You can either copy-paste the following code into your editor, or install all the samples.
The knowledge base article in My Oracle Support: How To Install WebLogic Server and JMS Samples in WLS 10.3.x (Doc ID 1499719.1) describes how to install the samples.

QueueSend.java

package examples.jms.queue;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Hashtable;
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

/** This example shows how to establish a connection
 * and send messages to the JMS queue. The classes in this
 * package operate on the same JMS queue. Run the classes together to
 * witness messages being sent and received, and to browse the queue
 * for messages. This class is used to send messages to the queue.
 *
 * @author Copyright (c) 1999-2005 by BEA Systems, Inc. All Rights Reserved.
 */
public class QueueSend {

    // Defines the JNDI context factory.
    public final static String JNDI_FACTORY = "weblogic.jndi.WLInitialContextFactory";

    // Defines the JNDI name of the JMS connection factory.
    public final static String JMS_FACTORY = "jms/TestConnectionFactory";

    // Defines the queue.
    public final static String QUEUE = "jms/TestJMSQueue";

    private QueueConnectionFactory qconFactory;
    private QueueConnection qcon;
    private QueueSession qsession;
    private QueueSender qsender;
    private Queue queue;
    private TextMessage msg;

    /**
     * Creates all the necessary objects for sending
     * messages to a JMS queue.
     *
     * @param ctx JNDI initial context
     * @param queueName name of queue
     * @exception NamingException if operation cannot be performed
     * @exception JMSException if JMS fails to initialize due to internal error
     */
    public void init(Context ctx, String queueName) throws NamingException, JMSException {
        qconFactory = (QueueConnectionFactory) ctx.lookup(JMS_FACTORY);
        qcon = qconFactory.createQueueConnection();
        qsession = qcon.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        queue = (Queue) ctx.lookup(queueName);
        qsender = qsession.createSender(queue);
        msg = qsession.createTextMessage();
        qcon.start();
    }

    /**
     * Sends a message to a JMS queue.
     *
     * @param message message to be sent
     * @exception JMSException if JMS fails to send message due to internal error
     */
    public void send(String message) throws JMSException {
        msg.setText(message);
        qsender.send(msg);
    }

    /**
     * Closes JMS objects.
     * @exception JMSException if JMS fails to close objects due to internal error
     */
    public void close() throws JMSException {
        qsender.close();
        qsession.close();
        qcon.close();
    }

    /** main() method.
     *
     * @param args WebLogic Server URL
     * @exception Exception if operation fails
     */
    public static void main(String[] args) throws Exception {
        if (args.length != 1) {
            System.out.println("Usage: java examples.jms.queue.QueueSend WebLogicURL");
            return;
        }
        InitialContext ic = getInitialContext(args[0]);
        QueueSend qs = new QueueSend();
        qs.init(ic, QUEUE);
        readAndSend(qs);
        qs.close();
    }

    private static void readAndSend(QueueSend qs) throws IOException, JMSException {
        BufferedReader msgStream = new BufferedReader(new InputStreamReader(System.in));
        String line = null;
        boolean quitNow = false;
        do {
            System.out.print("Enter message (\"quit\" to quit): \n");
            line = msgStream.readLine();
            if (line != null && line.trim().length() != 0) {
                qs.send(line);
                System.out.println("JMS Message Sent: " + line + "\n");
                quitNow = line.equalsIgnoreCase("quit");
            }
        } while (!quitNow);
    }

    private static InitialContext getInitialContext(String url) throws NamingException {
        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY, JNDI_FACTORY);
        env.put(Context.PROVIDER_URL, url);
        return new InitialContext(env);
    }
}

2. How to Use This Class

2.1 From the file system on UNIX/Linux

1. Log in to a machine with a WebLogic installation and create a directory matching the package name, e.g. $HOME/examples/jms/queue. Copy the QueueSend.java file above into this directory.

2. Set the CLASSPATH and environment to match the WebLogic server environment: go to $MIDDLEWARE_HOME/user_projects/domains/base_domain/bin and execute

   . ./setDomainEnv.sh

3. Collect the following information required to run the program:

   - The JNDI name of the JMS queue to use. In the WebLogic Server console, go to Services > Messaging > JMS Modules > (module name, e.g. TestJMSModule) > (JMS queue name, e.g. TestJMSQueue). Select the queue and note its JNDI name, e.g. jms/TestJMSQueue.
   - The JNDI name of a connection factory to connect to the queue. Follow the same path as above to get the connection factory for the queue, e.g. TestConnectionFactory, and note its JNDI name, e.g. jms/TestConnectionFactory.
   - The URL and port of the WebLogic server running the queue. Check which managed server the queue's JMS server is targeted to, for example soa_server1, then find the port this managed server is listening on by looking at its entry under Environment > Servers in the WLS console, e.g. 8001. The URL to give to the QueueSend program in this example is therefore t3://host.domain:8001, e.g. t3://jbevans-lx.de.oracle.com:8001.

4. Edit QueueSend.java and enter the queue name and connection factory collected above:

   public final static String JMS_FACTORY = "jms/TestConnectionFactory";
   public final static String QUEUE = "jms/TestJMSQueue";

5. Compile QueueSend.java using javac QueueSend.java.

6. Go to the source's top-level directory and execute it using

   java examples.jms.queue.QueueSend t3://jbevans-lx.de.oracle.com:8001

   This will prompt for text input, or "quit" to end.

7. In the WLS console, go to the queue and select Monitoring to confirm that a new message was written to the queue.

2.2 From JDeveloper

1. Create a new application in JDeveloper, called, for example, JMSTests. When prompted for a project name, enter QueueSend and select Java as the technology. Default Package = examples.jms.queue (but you can enter anything here, as you will overwrite it in the code later). Leave the other values at their defaults and press Finish.

2. Create a new Java class called QueueSend using the default values. This will create a file called QueueSend.java.

3. Open QueueSend.java, if it is not already open, and replace all its contents with the QueueSend Java code listed above.

4. Some lines might show warnings due to unresolved classes. These are caused by missing libraries in the JDeveloper project. Add them as follows: right-click the QueueSend project in the navigation menu, select Libraries and Classpath, then Add JAR/Directory. Go to the folder containing the JDeveloper installation and choose the file javax.jms_1.1.1.jar, e.g. at D:\oracle\jdev11116\modules\javax.jms_1.1.1.jar. Do the same for the weblogic.jar file, located for example at D:\oracle\jdev11116\wlserver_10.3\server\lib\weblogic.jar.

5. Now you should be able to compile the project, for example by selecting the Make or Rebuild icons.

6. If you try to execute the project, you will get a usage message, as it requires a parameter pointing to the WLS installation containing the JMS queue, for example t3://jbevans-lx.de.oracle.com:8001. You can pass this parameter to the program automatically from JDeveloper by editing the project's Run/Debug/Profile: select the project properties, select Run/Debug/Profile, edit the Default run configuration and add the connection parameter to the Program Arguments field. If you execute it again, you will see that the parameter is passed to the start command.

7. If you get a ClassNotFoundException for the class weblogic.jndi.WLInitialContextFactory, check that the weblogic.jar file was correctly added to the project in the earlier step.

8. Set the values of JMS_FACTORY and QUEUE the same way as described above for the file-system case:

   public final static String JMS_FACTORY = "jms/TestConnectionFactory";
   public final static String QUEUE = "jms/TestJMSQueue";

9. You need to make one more change to the project. If you execute it now, it will prompt for the payload of the JMS message, but by default you won't be able to enter it in JDeveloper. You need to enable program input for the project first: select the project's properties, then Tool Settings, check the Allow Program Input checkbox at the bottom and save. Now when you execute the project, you will get a text entry field at the bottom into which you can enter the payload. You can enter multiple messages until you enter "quit", which will cause the program to stop.

The following screenshot shows the TestJMSQueue's Monitoring page after a message was sent to the queue:

This concludes the sample. In the following post I will show you how to read the message from the queue again.
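In the meantime, here is a minimal sketch of what the receiving side could look like. This is not the original BEA/Oracle sample: the class name, the synchronous receive() loop, and the reuse of the same JNDI names are assumptions patterned on QueueSend above, using only standard JMS 1.1 API calls.

package examples.jms.queue;

import java.util.Hashtable;
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;

/** Minimal, hypothetical counterpart to QueueSend: connects to the same
 *  queue and synchronously receives text messages until "quit" arrives. */
public class QueueReceive {
    public final static String JNDI_FACTORY = "weblogic.jndi.WLInitialContextFactory";
    public final static String JMS_FACTORY = "jms/TestConnectionFactory";
    public final static String QUEUE = "jms/TestJMSQueue";

    public static void main(String[] args) throws Exception {
        if (args.length != 1) {
            System.out.println("Usage: java examples.jms.queue.QueueReceive WebLogicURL");
            return;
        }
        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY, JNDI_FACTORY);
        env.put(Context.PROVIDER_URL, args[0]);
        InitialContext ic = new InitialContext(env);

        QueueConnectionFactory qconFactory = (QueueConnectionFactory) ic.lookup(JMS_FACTORY);
        QueueConnection qcon = qconFactory.createQueueConnection();
        QueueSession qsession = qcon.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = (Queue) ic.lookup(QUEUE);
        QueueReceiver qreceiver = qsession.createReceiver(queue);
        qcon.start();

        while (true) {
            Message m = qreceiver.receive();        // blocks until a message arrives
            if (m instanceof TextMessage) {
                String text = ((TextMessage) m).getText();
                System.out.println("JMS Message Received: " + text);
                if (text.equalsIgnoreCase("quit")) break;
            }
        }
        qreceiver.close();
        qsession.close();
        qcon.close();
    }
}

Run it with the same t3:// URL as QueueSend; messages sent from one console should then appear in the other.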

    Read the article

  • PXE Boot not working

    - by Nishant
    Please explain the error in this screenshot. DHCP setting: this screenshot was taken after powering off the old computer, hence the server interface is shown as the wireless card - it becomes 192.168.0.1 when I connect the cable and power up the old laptop to boot via PXE. My scenario is simple: an old laptop and a new laptop, plus a crossover cable (which I made myself from CAT 6 cable by cutting it and connecting 4 wires as mentioned in some doc). The new laptop (the TFTP server) has a wireless card (with which I am browsing and writing this), and the cable is connected between the laptops. TFTP server (new laptop) details:

    Windows IP Configuration

    Ethernet adapter Local Area Connection:
       Connection-specific DNS Suffix . :
       Link-local IPv6 Address . . . . . : fe80::f511:3d4a:ca01:122e%16
       IPv4 Address. . . . . . . . . . . : 192.168.0.1
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . : 192.168.0.2

    Wireless LAN adapter Wireless Network Connection:
       Connection-specific DNS Suffix . : Achilles
       Link-local IPv6 Address . . . . . : fe80::99b1:8ae0:9e6c:f300%11
       IPv4 Address. . . . . . . . . . . : 192.168.2.3
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . : 192.168.2.1

    Read the article

  • All client browsers repeatedly asking for NTLM authentication when running through local proxy server

    - by Marko
    All client browsers are repeatedly asking for NTLM authentication when running through the local proxy server. When pointing browsers through the local proxy to the internet, some but not all clients are repeatedly prompted to authenticate to the proxy server. I have inspected the headers using Firefox Live HTTP Headers as well as Fiddler, and in all cases the authentication prompts happen when requesting SSL resources. An example of this would be as follows:

    GET http://gmail.google.com/mail/ HTTP/1.1
    Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/x-ms-application, application/x-ms-xbap, application/vnd.ms-xpsdocument, application/xaml+xml, */*
    Accept-Language: en-gb
    User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
    Accept-Encoding: gzip, deflate
    Proxy-Connection: Keep-Alive
    Host: gmail.google.com

    GET http://gmail.google.com/mail/ HTTP/1.1
    Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/x-ms-application, application/x-ms-xbap, application/vnd.ms-xpsdocument, application/xaml+xml, */*
    Accept-Language: en-gb
    User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
    Accept-Encoding: gzip, deflate
    Proxy-Connection: Keep-Alive
    Host: gmail.google.com
    Proxy-Authorization: NTLM TlRMTVNTUAABAAAAB7IIogkACQAvAAAABwAHACgAAAAFASgKAAAAD1dJTlhQMUdGTEFHU0hJUDc=

    GET http://gmail.google.com/mail/ HTTP/1.1
    Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/x-ms-application, application/x-ms-xbap, application/vnd.ms-xpsdocument, application/xaml+xml, */*
    Accept-Language: en-gb
    User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
    Accept-Encoding: gzip, deflate
    Proxy-Connection: Keep-Alive
    Proxy-Authorization: NTLM TlRMTVNTUAADA (more stuff goes here, I cut it short)
    Host: gmail.google.com

    At this point the username and password prompt has appeared in the browser. It does not matter what is typed into this box - correct credentials or random nonsense - the browser does not accept anything and the prompt keeps popping up. If I press cancel, I sometimes get an HTTP 407 error, but on other occasions when I click cancel the website proceeds to download and display normally. This is repeatable with some clients running through my proxy server, but in other cases it does not happen at all. In the cases where a client computer works normally, the only difference I can see is that the third request for the SSL resource comes back with a 200 response, see below:

    CONNECT gmail.google.com:443 HTTP/1.0
    User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0; MALC)
    Proxy-Connection: Keep-Alive
    Content-Length: 0
    Host: gmail.google.com
    Pragma: no-cache
    Proxy-Authorization: NTLM TlRMTVNTUAADAAAAGAAYAIAAAA

    A SSLv3-compatible ClientHello handshake was found.

    I have tried resetting user accounts as well as computer accounts in Active Directory. The user accounts and passwords being used are correct, and the passwords have been reset so they are not out of sync. I have removed the clients and even the proxy server from the domain and rejoined them.
    I have installed a completely separate proxy server and get exactly the same problem when I point clients to it on a different IP address.

    Read the article

  • stunnel crashing

    - by Jay
    I'm trying to use stunnel to secure a legacy application's communications. I can't seem to get it set up and working. Can anyone provide any hints where I'm going wrong?

    Here's what I'm trying to accomplish: a Windows service on a client machine connects to a server on port 7000 using TCP. I'd like to encrypt the communication between client and server.

    Here's what I've tried:

    - Created a new server that accepts SSL connections on port 7443.
    - Got a certificate for the server and installed it. That seems to work with my test setup.
    - Installed stunnel on my Windows machine (version 4.33 from the distribution archive file).
    - Installed libssl32.dll and libeay32.dll in the same directory as stunnel.exe (from the openssl-0.9.8h-1 binary distribution).
    - Installed it as a service using "stunnel -install".
    - Configured stunnel as follows:

    debug=7
    output=C:\p4\internal\Utility\Proxy\proxy.log
    service=Proxy
    taskbar=no

    [exchange]
    accept=7000
    client=yes
    connect=proxy.blah.com:7443

    I changed my hosts file to trick the old application into connecting through stunnel:

    server.blah.com 127.0.0.1 # when client looks up server it goes to stunnel
    proxy.blah.com IP-address-of-server.blah.com # stunnel connects to new server

    "server.blah.com" now resolves to the machine it's running on (i.e. stunnel). "proxy.blah.com" goes to the real server. stunnel should connect to the server.

    I start the stunnel service and try to connect. It looks like it's working, but the stunnel service just shuts down with no message.

    2010.04.19 13:16:21 LOG5[4924:3716]: stunnel 4.33 on x86-pc-mingw32-gnu with OpenSSL 0.9.8h 28 May 2008
    2010.04.19 13:16:21 LOG5[4924:3716]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6
    2010.04.19 13:16:49 LOG5[4924:3748]: Service exchange accepted connection from 127.0.0.1:4134
    2010.04.19 13:16:49 LOG6[4924:3748]: connect_blocking: connecting x.80.60.32:7443
    2010.04.19 13:16:49 LOG5[4924:3748]: connect_blocking: connected x.80.60.32:7443
    2010.04.19 13:16:49 LOG5[4924:3748]: Service exchange connected remote server from x.253.120.19:4135
    2010.04.19 13:20:24 LOG5[3668:3856]: Reading configuration from file stunnel.conf
    2010.04.19 13:20:24 LOG7[3668:3856]: Snagged 64 random bytes from C:/.rnd
    2010.04.19 13:20:24 LOG7[3668:3856]: Wrote 1024 new random bytes to C:/.rnd
    2010.04.19 13:20:24 LOG7[3668:3856]: RAND_status claims sufficient entropy for the PRNG
    2010.04.19 13:20:24 LOG7[3668:3856]: PRNG seeded successfully
    2010.04.19 13:20:24 LOG7[3668:3856]: SSL context initialized for service exchange
    2010.04.19 13:20:24 LOG5[3668:3856]: Configuration successful
    2010.04.19 13:20:24 LOG5[3668:3856]: No limit detected for the number of clients
    2010.04.19 13:20:24 LOG7[3668:3856]: FD=312 in non-blocking mode
    2010.04.19 13:20:24 LOG7[3668:3856]: Option SO_REUSEADDR set on accept socket
    2010.04.19 13:20:24 LOG7[3668:3856]: Service exchange bound to 0.0.0.0:7000
    2010.04.19 13:20:24 LOG7[3668:3856]: Service exchange opened FD=312
    2010.04.19 13:20:24 LOG5[3668:3856]: stunnel 4.33 on x86-pc-mingw32-gnu with OpenSSL 0.9.8h 28 May 2008
    2010.04.19 13:20:24 LOG5[3668:3856]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6
    2010.04.19 13:21:02 LOG7[3668:4556]: Service exchange accepted FD=372 from 127.0.0.1:4156
    2010.04.19 13:21:02 LOG7[3668:4556]: Creating a new thread
    2010.04.19 13:21:02 LOG7[3668:4556]: New thread created
    2010.04.19 13:21:02 LOG7[3668:3756]: Service exchange started
    2010.04.19 13:21:02 LOG7[3668:3756]: FD=372 in non-blocking mode
    2010.04.19 13:21:02 LOG5[3668:3756]: Service exchange accepted connection from 127.0.0.1:4156
    2010.04.19 13:21:02 LOG7[3668:3756]: FD=396 in non-blocking mode
    2010.04.19 13:21:02 LOG6[3668:3756]: connect_blocking: connecting x.80.60.32:7443
    2010.04.19 13:21:02 LOG7[3668:3756]: connect_blocking: s_poll_wait x.80.60.32:7443: waiting 10 seconds
    2010.04.19 13:21:02 LOG5[3668:3756]: connect_blocking: connected x.80.60.32:7443
    2010.04.19 13:21:02 LOG5[3668:3756]: Service exchange connected remote server from x.253.120.19:4157
    2010.04.19 13:21:02 LOG7[3668:3756]: Remote FD=396 initialized
    2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): before/connect initialization
    2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write client hello A
    2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server hello A
    2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server certificate A
    2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server done A
    2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write client key exchange A
    2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write change cipher spec A
    2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write finished A
    2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 flush data
    2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read finished A

    The client thinks the connection is closed:

    No connection could be made because the target machine actively refused it 127.0.0.1:7000
       at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
       at System.Net.Sockets.Socket.Connect(EndPoint remoteEP)
       at Service.ConnUtility.Connect()

    Any suggestions?
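    One way to see why the service exits is to run stunnel interactively instead of as a Windows service, so the exit reason stays visible. A debugging sketch (debug, output and taskbar are standard stunnel 4.x options; the log path here is an assumption):

    ; stunnel.conf - verbose logging while testing (paths are examples)
    debug = 7
    output = C:\stunnel\debug.log
    taskbar = yes        ; keep the tray icon so the GUI log window is reachable

    [exchange]
    accept  = 7000
    client  = yes
    connect = proxy.blah.com:7443

    Launching stunnel.exe directly (without -install/-start) with this configuration keeps the process in the foreground, so any fatal configuration or certificate error is shown in the log window before exit rather than being lost when the service stops.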

    Read the article

  • Static file download from browser breaking in varnish but works fine in Apache

    - by Ron
    I would at first like to thank everyone at Server Fault for this great website; I often end up here from Google while searching for various server-related issues and setups. I have an issue today, so I am posting here and hope the seniors can help me out.

    I set up a website on a dedicated server a few days ago, using Varnish 3 as the frontend to Apache2 on a Debian Lenny server, as the traffic was a bit high. The website serves several static file downloads of around 10-20 MB in size. The site looked fine in the first few days. I was checking from a 5 Mbps+ broadband connection, and the file downloads completed in seconds and worked fine.

    But today I realized that on a slow internet connection the file downloads were breaking off. When I tried to download the files from the website using a browser, the download broke off after a minute or so. It kept happening again and again, so it had nothing to do with the internet connection. The connection was around 512 kbps - not dial-up speed, but a decent speed at which files should download easily, if not that fast.

    Then I thought of trying the Apache backend port directly to check whether the problem still occurs. On adding the Apache port to the static file download URL, the files downloaded easily and did not break even once. I tried it several times to make sure it was not a coincidence, but every time I used the Apache port in the file download URL it downloaded fine, while it broke each time with the normal link, which is routed through Varnish.

    So it seems Varnish has somehow caused the broken file downloads. Could anyone give any idea as to why it is happening and how to fix the problem? For clarification, take this example: Apache backend set on port 8008, Varnish frontend set on port 80. When I download, say, http://mywebsite.com/directory/filename.extension then the download breaks off after a minute or so. I cannot be sure whether it is due to the time or the size; I am just assuming, and it may be some other reason too. But when I download using http://mywebsite.com:8008/directory/filename.extension then the file download does not break at all and completes fine. So it seems that Varnish is somehow breaking the file downloads, not Apache. Does anybody have any idea why this is happening and how it can be fixed? Any help would be highly appreciated.

    And my Varnish default.vcl is:

    backend apache {
        .host = "127.0.0.1";
        .port = "8008";
    }

    sub vcl_deliver {
        remove resp.http.X-Varnish;
        remove resp.http.Via;
        remove resp.http.Age;
        remove resp.http.Server;
        remove resp.http.X-Powered-By;
    }
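    If the breakage is time-based, one knob worth checking is Varnish's send_timeout runtime parameter, which closes client connections whose delivery takes too long - a plausible match for slow clients pulling 10-20 MB files. A sketch of raising it on Debian (send_timeout is a standard varnishd parameter; the file location, option layout and the two-hour value are assumptions about this particular install):

    # /etc/default/varnish - append a runtime parameter to the startup options
    DAEMON_OPTS="-a :80 \
                 -T localhost:6082 \
                 -f /etc/varnish/default.vcl \
                 -s malloc,256m \
                 -p send_timeout=7200"   # seconds; the default is far lower

    After restarting Varnish (/etc/init.d/varnish restart), a slow download should no longer be cut off mid-transfer if send_timeout was the cause.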

    Read the article

  • Cannot determine ethernet address for proxy ARP on PPTP

    - by Linux Intel
    I installed a PPTP server on a CentOS 6 64-bit server.

    PPTP Server IP : 55.66.77.10
    PPTP Local IP  : 10.0.0.1
    Client1 IP     : 10.0.0.60 (CentOS 5 64-bit)
    Client2 IP     : 10.0.0.61 (CentOS 5 64-bit)

    The PPTP server can ping Client1, and Client1 can ping the PPTP server. The PPTP server can ping Client2, and Client2 can ping the PPTP server. The problem is that Client1 cannot ping Client2, and I also get this error in the PPTP server error log:

    Cannot determine ethernet address for proxy ARP

    Ping from Client2 to Client1:

    PING 10.0.0.60 (10.0.0.60) 56(84) bytes of data.
    --- 10.0.0.60 ping statistics ---
    6 packets transmitted, 0 received, 100% packet loss, time 5000ms

    route -n on the PPTP server:

    Destination     Gateway         Genmask          Flags Metric Ref  Use Iface
    10.0.0.60       0.0.0.0         255.255.255.255  UH    0      0      0 ppp0
    10.0.0.61       0.0.0.0         255.255.255.255  UH    0      0      0 ppp1
    55.66.77.10     0.0.0.0         255.255.255.248  U     0      0      0 eth0
    10.0.0.0        0.0.0.0         255.0.0.0        U     0      0      0 eth0
    0.0.0.0         55.66.77.19     0.0.0.0          UG    0      0      0 eth0

    route -n on Client1:

    Destination     Gateway         Genmask          Flags Metric Ref  Use Iface
    10.0.0.1        0.0.0.0         255.255.255.255  UH    0      0      0 ppp0
    55.66.77.10     70.14.13.19     255.255.255.255  UGH   0      0      0 eth0
    10.0.0.0        0.0.0.0         255.0.0.0        U     0      0      0 eth1
    0.0.0.0         70.14.13.19     0.0.0.0          UG    0      0      0 eth0

    route -n on Client2:

    Destination     Gateway         Genmask          Flags Metric Ref  Use Iface
    10.0.0.1        0.0.0.0         255.255.255.255  UH    0      0      0 ppp0
    55.66.77.10     84.56.120.60    255.255.255.255  UGH   0      0      0 eth1
    10.0.0.0        0.0.0.0         255.0.0.0        U     0      0      0 eth0
    0.0.0.0         84.56.120.60    0.0.0.0          UG    0      0      0 eth1

    cat /etc/ppp/options.pptpd on the PPTP server:

    ###############################################################################
    # $Id: options.pptpd,v 1.11 2005/12/29 01:21:09 quozl Exp $
    #
    # Sample Poptop PPP options file /etc/ppp/options.pptpd
    # Options used by PPP when a connection arrives from a client.
    # This file is pointed to by /etc/pptpd.conf option keyword.
    # Changes are effective on the next connection. See "man pppd".
    #
    # You are expected to change this file to suit your system. As
    # packaged, it requires PPP 2.4.2 and the kernel MPPE module.
    ###############################################################################

    # Authentication

    # Name of the local system for authentication purposes
    # (must match the second field in /etc/ppp/chap-secrets entries)
    name pptpd

    # Strip the domain prefix from the username before authentication.
    # (applies if you use pppd with chapms-strip-domain patch)
    #chapms-strip-domain

    # Encryption
    # (There have been multiple versions of PPP with encryption support,
    # choose which of the following sections you will use.)

    # BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o
    # {{{
    refuse-pap
    refuse-chap
    refuse-mschap
    # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
    # Challenge Handshake Authentication Protocol, Version 2] authentication.
    require-mschap-v2
    # Require MPPE 128-bit encryption
    # (note that MPPE requires the use of MSCHAP-V2 during authentication)
    require-mppe-128
    # }}}

    # OpenSSL licensed ppp-2.4.1 fork with MPPE only, kernel module mppe.o
    # {{{
    #-chap
    #-chapms
    # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
    # Challenge Handshake Authentication Protocol, Version 2] authentication.
    #+chapms-v2
    # Require MPPE encryption
    # (note that MPPE requires the use of MSCHAP-V2 during authentication)
    #mppe-40 # enable either 40-bit or 128-bit, not both
    #mppe-128
    #mppe-stateless
    # }}}

    # Network and Routing

    # If pppd is acting as a server for Microsoft Windows clients, this
    # option allows pppd to supply one or two DNS (Domain Name Server)
    # addresses to the clients. The first instance of this option
    # specifies the primary DNS address; the second instance (if given)
    # specifies the secondary DNS address.
    #ms-dns 10.0.0.1
    #ms-dns 10.0.0.2

    # If pppd is acting as a server for Microsoft Windows or "Samba"
    # clients, this option allows pppd to supply one or two WINS (Windows
    # Internet Name Services) server addresses to the clients. The first
    # instance of this option specifies the primary WINS address; the
    # second instance (if given) specifies the secondary WINS address.
    #ms-wins 10.0.0.3
    #ms-wins 10.0.0.4

    # Add an entry to this system's ARP [Address Resolution Protocol]
    # table with the IP address of the peer and the Ethernet address of this
    # system. This will have the effect of making the peer appear to other
    # systems to be on the local ethernet.
    # (you do not need this if your PPTP server is responsible for routing
    # packets to the clients -- James Cameron)
    proxyarp

    # Normally pptpd passes the IP address to pppd, but if pptpd has been
    # given the delegate option in pptpd.conf or the --delegate command line
    # option, then pppd will use chap-secrets or radius to allocate the
    # client IP address. The default local IP address used at the server
    # end is often the same as the address of the server. To override this,
    # specify the local IP address here.
    # (you must not use this unless you have used the delegate option)
    #10.8.0.100

    # Logging

    # Enable connection debugging facilities.
    # (see your syslog configuration for where pppd sends to)
    debug

    # Print out all the option values which have been set.
    # (often requested by mailing list to verify options)
    #dump

    # Miscellaneous

    # Create a UUCP-style lock file for the pseudo-tty to ensure exclusive
    # access.
    lock

    # Disable BSD-Compress compression
    nobsdcomp

    # Disable Van Jacobson compression
    # (needed on some networks with Windows 9x/ME/XP clients, see posting to
    # poptop-server on 14th April 2005 by Pawel Pokrywka and followups,
    # http://marc.theaimsgroup.com/?t=111343175400006&r=1&w=2 )
    novj
    novjccomp

    # turn off logging to stderr, since this may be redirected to pptpd,
    # which may trigger a loopback
    nologfd

    # put plugins here
    # (putting them higher up may cause them to send messages to the pty)

    cat /etc/ppp/options.pptp on Client1 and Client2:

    ###############################################################################
    # $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $
    #
    # Sample PPTP PPP options file /etc/ppp/options.pptp
    # Options used by PPP when a connection is made by a PPTP client.
    # This file can be referred to by an /etc/ppp/peers file for the tunnel.
    # Changes are effective on the next connection. See "man pppd".
    #
    # You are expected to change this file to suit your system. As
    # packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/
    # and the kernel MPPE module available from the CVS repository also on
    # http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe.
    ###############################################################################

    # Lock the port
    lock

    # Authentication

    # We don't need the tunnel server to authenticate itself
    noauth

    # We won't do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2
    # (you may need to remove these refusals if the server is not using MPPE)
    refuse-pap
    refuse-eap
    refuse-chap
    refuse-mschap

    # Compression
    # Turn off compression protocols we know won't be used
    nobsdcomp
    nodeflate

    # Encryption
    # (There have been multiple versions of PPP with encryption support,
    # choose which of the following sections you will use. Note that MPPE
    # requires the use of MSCHAP-V2 during authentication)
    #
    # Note that using PPTP with MPPE and MSCHAP-V2 should be considered
    # insecure:
    # http://marc.info/?l=pptpclient-devel&m=134372640219039&w=2
    # https://github.com/moxie0/chapcrack/blob/master/README.md
    # http://technet.microsoft.com/en-us/security/advisory/2743314

    # http://ppp.samba.org/ the PPP project version of PPP by Paul Mackarras
    # ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o
    # If the kernel is booted in FIPS mode (fips=1), the ppp_mppe.ko module
    # is not allowed and PPTP-MPPE is not available.
    # {{{
    # Require MPPE 128-bit encryption
    #require-mppe-128
    # }}}

    # http://mppe-mppc.alphacron.de/ fork from PPP project by Jan Dubiec
    # ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o
    # {{{
    # Require MPPE 128-bit encryption
    #mppe required,stateless
    # }}}

    IPtables is stopped on the clients and the server, and net.ipv4.ip_forward = 1 is enabled on the PPTP server. How can I solve this problem?
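    Since the tunnels terminate on separate ppp interfaces, client-to-client traffic has to be routed by the server itself; proxyarp only helps when the tunnel addresses sit inside a real Ethernet subnet of the server, which explains the "Cannot determine ethernet address for proxy ARP" warning here. A sketch of what is typically needed (standard ip/iptables commands, but treat them as an assumption about this setup, not a confirmed fix):

    # on the PPTP server: allow forwarding between the two tunnels
    # (ip_forward is already on here; the FORWARD rule matters once iptables runs)
    iptables -A FORWARD -i ppp+ -o ppp+ -j ACCEPT

    # on each client: send the other client's address into the tunnel instead
    # of the local 10.0.0.0/8 LAN route that currently shadows it
    ip route add 10.0.0.61/32 dev ppp0   # run on Client1
    ip route add 10.0.0.60/32 dev ppp0   # run on Client2

    The route tables above show 10.0.0.0/8 bound to eth0/eth1 on both clients, so without the /32 routes a ping to the other client never enters ppp0 at all.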

    Read the article

  • nagios NRPE: Unable to read output

    - by user555854
    I set up a script to restart my HTTP servers + php5-fpm, but I can't get it to work. I have googled around and found that permissions are usually the cause of this error, but I can't figure it out. I start my script using:

    /usr/lib/nagios/plugins/check_nrpe -H bart -c restart_http

    This is the output in the syslog on the node I want to restart:

    Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 port 25028
    Jun 27 06:29:35 bart nrpe[8926]: Host address is in allowed_hosts
    Jun 27 06:29:35 bart nrpe[8926]: Handling the connection...
    Jun 27 06:29:35 bart nrpe[8926]: Host is asking for command 'restart_http' to be run...
    Jun 27 06:29:35 bart nrpe[8926]: Running command: /usr/bin/sudo /usr/lib/nagios/plugins/http-restart
    Jun 27 06:29:35 bart nrpe[8926]: Command completed with return code 1 and output:
    Jun 27 06:29:35 bart nrpe[8926]: Return Code: 1, Output: NRPE: Unable to read output
    Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 closed.

    If I run the command myself (as the nagios user) it runs fine, but asks for a password. These are the script permissions and the script contents:

    -rwxrwxrwx 1 nagios nagios 142 Jun 26 21:41 /usr/lib/nagios/plugins/http-restart

    #!/bin/bash
    echo "ok"
    /etc/init.d/nginx stop
    /etc/init.d/nginx start
    /etc/init.d/php5-fpm stop
    /etc/init.d/php5-fpm start
    echo "done"

    I also added this line via visudo:

    nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/

    My local nagios nrpe.cfg:

    #############################################################################
    # Sample NRPE Config File
    # Written by: Ethan Galstad ([email protected])
    #
    #
    # NOTES:
    # This is a sample configuration file for the NRPE daemon. It needs to be
    # located on the remote host that is running the NRPE daemon, not the host
    # from which the check_nrpe client is being executed.
    #############################################################################

    # LOG FACILITY
    # The syslog facility that should be used for logging purposes.
    log_facility=daemon

    # PID FILE
    # The name of the file in which the NRPE daemon should write it's process ID
    # number. The file is only written if the NRPE daemon is started by the root
    # user and is running in standalone mode.
    pid_file=/var/run/nagios/nrpe.pid

    # PORT NUMBER
    # Port number we should wait for connections on.
    # NOTE: This must be a non-priviledged port (i.e. > 1024).
    # NOTE: This option is ignored if NRPE is running under either inetd or xinetd
    server_port=5666

    # SERVER ADDRESS
    # Address that nrpe should bind to in case there are more than one interface
    # and you do not want nrpe to bind on all interfaces.
    # NOTE: This option is ignored if NRPE is running under either inetd or xinetd
    #server_address=127.0.0.1

    # NRPE USER
    # This determines the effective user that the NRPE daemon should run as.
    # You can either supply a username or a UID.
    #
    # NOTE: This option is ignored if NRPE is running under either inetd or xinetd
    nrpe_user=nagios

    # NRPE GROUP
    # This determines the effective group that the NRPE daemon should run as.
    # You can either supply a group name or a GID.
    #
    # NOTE: This option is ignored if NRPE is running under either inetd or xinetd
    nrpe_group=nagios

    # ALLOWED HOST ADDRESSES
    # This is an optional comma-delimited list of IP address or hostnames
    # that are allowed to talk to the NRPE daemon.
    #
    # Note: The daemon only does rudimentary checking of the client's IP
    # address. I would highly recommend adding entries in your /etc/hosts.allow
    # file to allow only the specified host to connect to the port
    # you are running this daemon on.
    #
    # NOTE: This option is ignored if NRPE is running under either inetd or xinetd
    allowed_hosts=127.0.0.1,192.168.133.17

    # COMMAND ARGUMENT PROCESSING
    # This option determines whether or not the NRPE daemon will allow clients
    # to specify arguments to commands that are executed. This option only works
    # if the daemon was configured with the --enable-command-args configure script
    # option.
    #
    # *** ENABLING THIS OPTION IS A SECURITY RISK! ***
    # Read the SECURITY file for information on some of the security implications
    # of enabling this variable.
    #
    # Values: 0=do not allow arguments, 1=allow command arguments
    dont_blame_nrpe=0

    # COMMAND PREFIX
    # This option allows you to prefix all commands with a user-defined string.
    # A space is automatically added between the specified prefix string and the
    # command line from the command definition.
    #
    # *** THIS EXAMPLE MAY POSE A POTENTIAL SECURITY RISK, SO USE WITH CAUTION! ***
    # Usage scenario:
    # Execute restricted commmands using sudo. For this to work, you need to add
    # the nagios user to your /etc/sudoers. An example entry for alllowing
    # execution of the plugins from might be:
    #
    # nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/
    #
    # This lets the nagios user run all commands in that directory (and only them)
    # without asking for a password. If you do this, make sure you don't give
    # random users write access to that directory or its contents!
    command_prefix=/usr/bin/sudo

    # DEBUGGING OPTION
    # This option determines whether or not debugging messages are logged to the
    # syslog facility.
    # Values: 0=debugging off, 1=debugging on
    debug=1

    # COMMAND TIMEOUT
    # This specifies the maximum number of seconds that the NRPE daemon will
    # allow plugins to finish executing before killing them off.
    command_timeout=60

    # CONNECTION TIMEOUT
    # This specifies the maximum number of seconds that the NRPE daemon will
    # wait for a connection to be established before exiting. This is sometimes
    # seen where a network problem stops the SSL being established even though
    # all network sessions are connected. This causes the nrpe daemons to
    # accumulate, eating system resources. Do not set this too low.
    connection_timeout=300

    # WEAK RANDOM SEED OPTION
    # This directive allows you to use SSL even if your system does not have
    # a /dev/random or /dev/urandom (on purpose or because the necessary patches
    # were not applied). The random number generator will be seeded from a file
    # which is either a file pointed to by the environment variable $RANDFILE
    # or $HOME/.rnd. If neither exists, the pseudo random number generator will
    # be initialized and a warning will be issued.
    # Values: 0=only seed from /dev/[u]random, 1=also seed from weak randomness
    #allow_weak_random_seed=1

    # INCLUDE CONFIG FILE
    # This directive allows you to include definitions from an external config file.
    #include=<somefile.cfg>

    # INCLUDE CONFIG DIRECTORY
    # This directive allows you to include definitions from config files (with a
    # .cfg extension) in one or more directories (with recursion).
    #include_dir=<somedirectory>
    #include_dir=<someotherdirectory>

    # COMMAND DEFINITIONS
    # Command definitions that this daemon will run. Definitions
    # are in the following format:
    #
    # command[<command_name>]=<command_line>
    #
    # When the daemon receives a request to return the results of <command_name>
    # it will execute the command specified by the <command_line> argument.
    #
    # Unlike Nagios, the command line cannot contain macros - it must be
    # typed exactly as it should be executed.
    #
    # Note: Any plugins that are used in the command lines must reside
    # on the machine that this daemon is running on! The examples below
    # assume that you have plugins installed in a /usr/local/nagios/libexec
    # directory. Also note that you will have to modify the definitions below
    # to match the argument format the plugins expect. Remember, these are
    # examples only!

    # The following examples use hardcoded command arguments...
    command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
    command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
    command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1
    command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
    command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200

    # The following examples allow user-supplied arguments and can
    # only be used if the NRPE daemon was compiled with support for
    # command arguments *AND* the dont_blame_nrpe directive in this
    # config file is set to '1'. This poses a potential security risk, so
    # make sure you read the SECURITY file before doing this.
    #command[check_users]=/usr/lib/nagios/plugins/check_users -w $ARG1$ -c $ARG2$
    #command[check_load]=/usr/lib/nagios/plugins/check_load -w $ARG1$ -c $ARG2$
    #command[check_disk]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$
    #command[check_procs]=/usr/lib/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -s $ARG3$

    command[restart_http]=/usr/lib/nagios/plugins/http-restart

    #
    # local configuration:
    # if you'd prefer, you can instead place directives here
    include=/etc/nagios/nrpe_local.cfg

    #
    # you can place your config snipplets into nrpe.d/
    include_dir=/etc/nagios/nrpe.d/

    My sudoers file:

    # /etc/sudoers
    #
    # This file MUST be edited with the 'visudo' command as root.
    #
    # See the man page for details on how to write a sudoers file.
    #
    Defaults env_reset

    # Host alias specification

    # User alias specification

    # Cmnd alias specification

    # User privilege specification
    root ALL=(ALL) ALL
    nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/

    # Allow members of group sudo to execute any command
    # (Note that later entries override this, so you might need to move
    # it further down)
    %sudo ALL=(ALL) ALL
    #
    #includedir /etc/sudoers.d

    Hopefully someone can help!
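    For what it's worth, a trailing-slash path in sudoers does cover the files in that directory, but environment details can still block the command; a common culprit is sudo's requiretty default on some distributions, and a tighter rule avoids handing the whole plugin directory to root anyway. A sketch of sudoers entries that usually make this NRPE-plus-sudo pattern work (whether this is the exact fix for this box is an assumption):

    # /etc/sudoers (edit with visudo)
    Defaults:nagios !requiretty                # NRPE has no tty; requiretty would break sudo
    nagios ALL=(root) NOPASSWD: /usr/lib/nagios/plugins/http-restart

    Running sudo -l -U nagios (as root) shows whether the rule actually matches the command NRPE executes; if sudo still writes a password prompt or a "you must have a tty" message to stderr instead of stdout, NRPE reports exactly this "Unable to read output" failure.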

    Read the article

  • Load balancing with multiple gateways

    - by ttouch
    I have two different ISPs, each on its own network. The main one connects via Ethernet and the secondary via WiFi. The two networks have no relation at all; I just connect to them simultaneously. The reason I want to load balance between them is to achieve higher Internet speeds. Note: I have no advanced network hardware, just my PC and the two routers, to which I have no access...

    main network:
    if: eth0
    gw: 192.168.178.1
    my ip: 192.168.178.95
    speed: 400 kbit/s

    secondary network:
    if: wlan0
    gw: 192.168.1.1
    my ip: 192.168.1.95
    speed: 300 kbit/s

    A diagram to explain the situation: http://i.imgur.com/NZdsv.jpg

    I'm on Arch Linux x64 and use netcfg to configure the interfaces. Configs:

    # /etc/network.d/main
    CONNECTION='ethernet'
    DESCRIPTION='A basic static ethernet connection using iproute'
    INTERFACE='eth0'
    IP='static'
    ADDR='192.168.178.95'

    # /etc/network.d/second
    CONNECTION='wireless'
    DESCRIPTION='A simple WEP encrypted wireless connection'
    INTERFACE='wlan0'
    SECURITY='wep'
    ESSID='wifi_essid'
    KEY='the_password'
    IP="static"
    ADDR='192.168.1.95'

    And I use iptables to load balance; the rules:

    #!/bin/bash
    /usr/sbin/ip route flush table ISP1 2>/dev/null
    /usr/sbin/ip rule del fwmark 101 table ISP1 2>/dev/null
    /usr/sbin/ip route add table ISP1 192.168.178.0/24 dev eth0 proto kernel scope link src 192.168.178.95 metric 202
    /usr/sbin/ip route add table ISP1 default via 192.168.178.1 dev eth0
    /usr/sbin/ip rule add fwmark 101 table ISP1

    /usr/sbin/ip route flush table ISP2 2>/dev/null
    /usr/sbin/ip rule del fwmark 102 table ISP2 2>/dev/null
    /usr/sbin/ip route add table ISP2 192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.95 metric 202
    /usr/sbin/ip route add table ISP2 default via 192.168.1.1 dev wlan0
    /usr/sbin/ip rule add fwmark 102 table ISP2

    /usr/sbin/iptables -t mangle -F
    /usr/sbin/iptables -t mangle -X

    /usr/sbin/iptables -t mangle -N MARK-gw1
    /usr/sbin/iptables -t mangle -A MARK-gw1 -m comment --comment 'send via 192.168.178.1' -j MARK --set-mark 101
    /usr/sbin/iptables -t mangle -A MARK-gw1 -j CONNMARK --save-mark
    /usr/sbin/iptables -t mangle -A MARK-gw1 -j RETURN

    /usr/sbin/iptables -t mangle -N MARK-gw2
    /usr/sbin/iptables -t mangle -A MARK-gw2 -m comment --comment 'send via 192.168.1.1' -j MARK --set-mark 102
    /usr/sbin/iptables -t mangle -A MARK-gw2 -j CONNMARK --save-mark
    /usr/sbin/iptables -t mangle -A MARK-gw2 -j RETURN

    /usr/sbin/iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
    /usr/sbin/iptables -t mangle -A PREROUTING -m comment --comment "this stream is already marked; escape early" -m mark ! --mark 0 -j ACCEPT
    /usr/sbin/iptables -t mangle -A PREROUTING -m comment --comment 'prevent asynchronous routing' -i eth0 -m conntrack --ctstate NEW -j MARK-gw1
    /usr/sbin/iptables -t mangle -A PREROUTING -m comment --comment 'prevent asynchronous routing' -i wlan0 -m conntrack --ctstate NEW -j MARK-gw2

    /usr/sbin/iptables -t mangle -N DEF_POL
    /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'default balancing' -p tcp -m conntrack --ctstate ESTABLISHED,RELATED -j CONNMARK --restore-mark
    /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'default balancing' -p udp -m conntrack --ctstate ESTABLISHED,RELATED -j CONNMARK --restore-mark
    /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j MARK-gw1
    /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j ACCEPT
    /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j MARK-gw2
    /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j ACCEPT
    /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j MARK-gw1
    /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j ACCEPT
    /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j MARK-gw2
    /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j ACCEPT
    /usr/sbin/iptables -t mangle -A PREROUTING -j DEF_POL

    /usr/sbin/iptables -t nat -A POSTROUTING -m comment --comment 'snat outbound eth0' -o eth0 -s 192.168.0.0/16 -m mark --mark 101 -j SNAT --to-source 192.168.178.95
    /usr/sbin/iptables -t nat -A POSTROUTING -m comment --comment 'snat outbound wlan0' -o wlan0 -s 192.168.0.0/16 -m mark --mark 102 -j SNAT --to-source 192.168.1.95

    /usr/sbin/ip route flush cache

    (This script was made by fukawi2; I don't know how to use iptables.) But I have no Internet connection... Output of iptables -t mangle -nvL:

    Chain PREROUTING (policy ACCEPT 1254K packets, 1519M bytes)
     pkts bytes target    prot opt in    out   source      destination
    1278K 1535M CONNMARK  all  --  *     *     0.0.0.0/0   0.0.0.0/0   CONNMARK restore
    21532   15M ACCEPT    all  --  *     *     0.0.0.0/0   0.0.0.0/0   /* this stream is already marked; escape early */ mark match !0x0
      582 72579 MARK-gw1  all  --  eth0  *     0.0.0.0/0   0.0.0.0/0   /* prevent asynchronous routing */ ctstate NEW
     2376  696K MARK-gw2  all  --  wlan0 *     0.0.0.0/0   0.0.0.0/0   /* prevent asynchronous routing */ ctstate NEW
    1257K 1520M DEF_POL   all  --  *     *     0.0.0.0/0   0.0.0.0/0

    Chain INPUT (policy ACCEPT 1276K packets, 1535M bytes)
     pkts bytes target    prot opt in    out   source      destination

    Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target    prot opt in    out   source      destination

    Chain OUTPUT (policy ACCEPT 870K packets, 97M bytes)
     pkts bytes target    prot opt in    out   source      destination

    Chain POSTROUTING (policy ACCEPT 870K packets, 97M bytes)
     pkts bytes target    prot opt in    out   source      destination

    Chain DEF_POL (1 references)
     pkts bytes target    prot opt in    out   source      destination
    1236K 1517M CONNMARK  tcp  --  *     *     0.0.0.0/0   0.0.0.0/0   /* default balancing */ ctstate RELATED,ESTABLISHED CONNMARK restore
    15163 2041K CONNMARK  udp  --  *     *     0.0.0.0/0   0.0.0.0/0   /* default balancing */ ctstate RELATED,ESTABLISHED CONNMARK restore
      555 33176 MARK-gw1  tcp  --  *     *     0.0.0.0/0   0.0.0.0/0   /* balance gw1 tcp */ ctstate NEW statistic mode nth every 2
      555 33176 ACCEPT    tcp  --  *     *     0.0.0.0/0   0.0.0.0/0   /* balance gw1 tcp */ ctstate NEW statistic mode nth every 2
      277 16516 MARK-gw2  tcp  --  *     *     0.0.0.0/0   0.0.0.0/0   /* balance gw2 tcp */ ctstate NEW statistic mode nth every 2 packet 1
      277 16516 ACCEPT    tcp  --  *     *     0.0.0.0/0   0.0.0.0/0   /* balance gw2 tcp */ ctstate NEW statistic mode nth every 2 packet 1
     1442  384K MARK-gw1  udp  --  *     *     0.0.0.0/0   0.0.0.0/0   /* balance gw1 udp */ ctstate NEW statistic mode nth every 2
     1442  384K ACCEPT    udp  --  *     *     0.0.0.0/0   0.0.0.0/0   /* balance gw1 udp */ ctstate NEW statistic mode nth every 2
      720  189K MARK-gw2  udp  --  *     *     0.0.0.0/0   0.0.0.0/0   /* balance gw2 udp */ ctstate NEW statistic mode nth every 2 packet 1
      720  189K ACCEPT    udp  --  *     *     0.0.0.0/0   0.0.0.0/0   /* balance gw2 udp */ ctstate NEW statistic mode nth every 2 packet 1

    Chain MARK-gw1 (3 references)
     pkts bytes target    prot opt in    out   source      destination
     2579  490K MARK      all  --  *     *     0.0.0.0/0   0.0.0.0/0   /* send via 192.168.178.1 */ MARK set 0x65
     2579  490K CONNMARK  all  --  *     *     0.0.0.0/0   0.0.0.0/0   CONNMARK save
     2579  490K RETURN    all  --  *     *     0.0.0.0/0   0.0.0.0/0

    Chain MARK-gw2 (3 references)
     pkts bytes target    prot opt in    out   source      destination
     3373  901K MARK      all  --  *     *     0.0.0.0/0   0.0.0.0/0   /* send via 192.168.1.1 */ MARK set 0x66
     3373  901K CONNMARK  all  --  *     *     0.0.0.0/0   0.0.0.0/0   CONNMARK save
     3373  901K RETURN    all  --  *     *     0.0.0.0/0   0.0.0.0/0
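    Two things this script does not set up, and which multi-homed Linux setups generally need, are a default route in the main routing table (for locally generated packets that carry no mark at routing time) and relaxed reverse-path filtering (strict rp_filter silently drops replies arriving on the "wrong" uplink). A sketch of what I would check first (interface names and gateways match the question; the multipath weights are an assumption):

    #!/bin/bash
    # relax reverse-path filtering so each uplink accepts its own replies
    sysctl -w net.ipv4.conf.all.rp_filter=0
    sysctl -w net.ipv4.conf.eth0.rp_filter=0
    sysctl -w net.ipv4.conf.wlan0.rp_filter=0

    # give the main table a multipath default route as a fallback for
    # traffic that the fwmark rules never matched
    ip route replace default scope global \
        nexthop via 192.168.178.1 dev eth0  weight 4 \
        nexthop via 192.168.1.1   dev wlan0 weight 3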

    Read the article

  • IPCop server slows down download speed

    - by noocyte
    I have an IPCop server running at home; it had been doing just fine for ~5 months, but last week I suddenly started getting time-outs and slow downloads from the 'net. I first thought my ISP was acting up, then I suspected one of my 3 switches or some of my cabling. In due order I've tested all of the above and found everything working as it should. The only factor remaining is the IPCop server itself. Facts: I've got a 15/15 Mbit line (fiber) and I get ~15 Mbit upload, but only 0.5 Mbit download, with the IPCop box as router (ISP router set in bridge mode). If I connect without the IPCop box (using the ISP router) I get ~12 Mbit upload and ~15 Mbit download. The load on the IPCop box appears to be light, and it handled this same traffic just fine 2 weeks ago. Memory usage is ~60%; after a restart and retest it fell to ~50% (the box had 5 months of uptime). I'm thinking one of the NICs is busted, but I'm somewhat perplexed that this could be the outcome: slow download but full-speed upload. Has anybody seen that happen before? Could it just be one of the NICs needing replacement? I will try that as soon as I can get my hands on a couple of new ones.
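    Asymmetric throughput (one direction fast, the other crawling) is a classic symptom of an Ethernet duplex mismatch rather than outright NIC failure, so checking the negotiated link state is cheap before swapping hardware. A sketch of what to run on the IPCop shell (these are standard Linux tools; whether they ship on a given IPCop build is an assumption):

    # show negotiated speed/duplex for each interface (GREEN/RED NICs)
    ethtool eth0
    ethtool eth1

    # older NICs may only answer to mii-tool
    mii-tool -v eth0

    # force a sane setting if autonegotiation picked half duplex
    ethtool -s eth0 speed 100 duplex full autoneg off

    If one side of a link reports half duplex while the other reports full, the collision-driven retransmits hit mostly one traffic direction, which would match the fast-upload/slow-download pattern here.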

    Read the article

  • ATI gpu (video accel, decode, encode, ATI Stream, DXVA)

    - by Shiki
    Okay, it's a long question title, for sure. I'm looking for a new video card (yes, SU is not a shopping page, but bear with me). I've been a loyal NVIDIA customer for years, currently using an 8600GTS. Old but still somewhat good; it's a bit slow, though. I want an upgrade because the 8600GTS won't support better VDPAU and newer features. I checked the prices and the documentation: I would need a GTX260 card, which costs... well... a lot. ATI performs much better for that price (at least on every test it outperforms the GTX260). However, as far as I know there is no GPU acceleration with ATI; the only thing you can use is DXVA, no other method. Could you correct me if I'm wrong? Will there be GPU acceleration for ATI as well, or is one available already? (DXVA is not bad, but it's kind of slow compared to NVIDIA's CUDA.) What about OpenCL? How well does ATI support that? (I'm talking about the ATI 5850 at the moment; I would buy that instead of the NVIDIA card.)

    Read the article

  • SMB access from XP to Windows 2008 R2

    - by Pablo
    Here's the thing... I'm seeing very slow file copy performance from Windows XP clients to Windows 2008 R2 servers. Here are the facts:

    Windows XP to Windows 2K3: Fast
    Windows XP to Windows 2K8: Very slow
    Windows 7 to Windows (any): Fast

    Despite the obvious solution being an upgrade to Windows 7, we have 900 desktops, so it's not an option in the short term. I have tried everything: disabling SMB 2.0, disabling security signatures, changing the TCP window size, disabling the W2K8 auto-tuning, upgrading the drivers, etc. We eliminated the network: both the server and the client are connected to the same core switch (no hops, no routers, same VLAN). Upon monitoring the network with a packet capture utility, we see that the SMB packets exchanged between the W2K8 and XP machines are very small (256 bytes), despite the fact that the MTUs are properly set (1500) and there is no fragmentation whatsoever. In fact, those SMB packets show, in the IP datagram, a window of 65535 or close. The same trace, made using the same application but against a Windows XP share instead of the W2K8 share (and that goes fast), shows SMB packets of 4096 bytes. I can post the traces if necessary. So, why does the XP-W2K8 negotiation arrange for a 24-byte SMB payload, whereas the XP-XP negotiation arranges for 4096-byte SMB packets? Any ideas? I am running short of those...
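    One server-side knob tied directly to the negotiated SMB buffer size is SizReqBuf on the LanmanServer service. I can't confirm it is the culprit here, so treat this purely as a hedged experiment; the value is an example, and the Server service must be restarted (or the box rebooted) to apply it:

    rem raise the SMB1 request buffer size on the Windows 2008 R2 server
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" ^
        /v SizReqBuf /t REG_DWORD /d 16384 /f

    rem restart the Server service to apply
    net stop server /y
    net start server

    If the capture then shows larger SMB payloads from the XP clients, the buffer negotiation was the bottleneck; if not, the registry change can simply be deleted again.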

    Read the article

  • How can I get MySQL 5.5 to log warnings to one of the log files?

    - by Wodin
    I have found various sources saying that you can log warnings to the MySQL error log, but I have not been able to actually make it happen. I do have the error log working, and MySQL prints to it on startup and shutdown and occasionally at other times, but if I e.g. SELECT CAST('123' AS DATE); and then SHOW WARNINGS; I can see the warning, yet it does not show up in any log. I've also tried enabling the general log and the slow query log, but these don't show the warnings either. I've tried with log_warnings = 1 and log_warnings = 2, but still no warnings are logged. What am I doing wrong?

    mysql> show variables like '%error%';
    +--------------------+--------------------------+
    | Variable_name      | Value                    |
    +--------------------+--------------------------+
    | error_count        | 0                        |
    | log_error          | /var/log/mysql/mysql.err |
    | max_connect_errors | 10                       |
    | max_error_count    | 1024                     |
    | slave_skip_errors  | OFF                      |
    +--------------------+--------------------------+

    mysql> show variables like '%warn%';
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | log_warnings  | 1     |
    | sql_warnings  | OFF   |
    | warning_count | 0     |
    +---------------+-------+
    3 rows in set (0.06 sec)

    mysql> show variables like '%log%';
    +-----------------------------------------+-------------------------------+
    | Variable_name                           | Value                         |
    +-----------------------------------------+-------------------------------+
    ...
    | general_log                             | ON                            |
    | general_log_file                        | /var/log/mysql/general.log    |
    ...
    | log                                     | ON                            |
    ...
    | log_error                               | /var/log/mysql/mysql.err      |
    | log_output                              | FILE                          |
    | log_queries_not_using_indexes           | ON                            |
    ...
    | log_warnings                            | 1                             |
    ...
    | slow_query_log                          | ON                            |
    | slow_query_log_file                     | /var/log/mysql/mysql-slow.log |
    ...
    +-----------------------------------------+-------------------------------+

    Edit:

    mysql> show global status like 'Aborted%';
    +------------------+-------+
    | Variable_name    | Value |
    +------------------+-------+
    | Aborted_clients  | 24    |
    | Aborted_connects | 15    |
    +------------------+-------+
    2 rows in set (0.08 sec)

    Edit: Clarification: I do get [Warning] Aborted connection 1 to db... and [Warning] Access denied for user... messages logged, but not the warnings you can see via SHOW WARNINGS after e.g. inserting something or running LOAD DATA INFILE..., which is what I'm looking for.
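    As far as I know, statement-level warnings in MySQL 5.5 live only in the per-session diagnostics area and are never written to the error, general, or slow logs; log_warnings only governs server-level events such as the aborted connections above. A small sketch of pulling them from the session instead, which an application or wrapper script could log for itself:

    -- trigger a statement-level warning
    SELECT CAST('123' AS DATE);

    -- count and inspect it from the same session
    SHOW COUNT(*) WARNINGS;      -- number of warnings from the last statement
    SELECT @@warning_count;      -- same figure, usable programmatically
    SHOW WARNINGS LIMIT 10;      -- level, code and message text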

    Read the article

  • MS SQL Server slows down over time?

    - by Dave Holland
    Have any of you experienced the following, and have you found a solution? A large part of our website's back-end is MS SQL Server 2005. Every week or two the site begins running slower, and I see queries taking longer and longer to complete in SQL. I have a query that I like to use:

    USE master
    SELECT text, wait_time, blocking_session_id AS "Block", percent_complete, *
    FROM sys.dm_exec_requests
    CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
    ORDER BY start_time ASC

    It is fairly useful: it gives a snapshot of everything running right at that moment against your SQL server. What's nice is that even if your CPU is pegged at 100% for some reason and Activity Monitor refuses to load (I'm sure some of you have been there), this query still returns and you can see what query is killing your DB. When I run this, or Activity Monitor, during the times that SQL has begun to slow down, I don't see any specific queries causing the issue - they are ALL running slower across the board. If I restart the MS SQL service then everything is fine; it speeds right up - for a week or two, until it happens again. Nothing that I can think of has changed, but this just started a few months ago... Ideas?

    --Added

    Please note that when this database slowdown happens, it doesn't matter whether we are getting 100K page views an hour (busier time of day) or 10K page views an hour (slow time): the queries all take longer than normal to complete. The server isn't really under stress - the CPU isn't high, the disk usage doesn't seem to be out of control... it feels like index fragmentation or something of the sort, but that doesn't seem to be the case. As for pasting the results of the query above, I really can't do that. It lists the login of the user performing the task, the entire query, etc., and I'd rather not hand out the names of my databases, tables, columns and logins online :)... I can tell you that the queries running at that time are normal, standard queries for our site that run all the time, nothing out of the norm.
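    Given that a service restart fixes it, one thing a restart does that nothing in the description rules out is flushing the plan cache; stale or parameter-sniffed plans can degrade everything at once. A hedged test (standard T-SQL, but whether it applies to this server is an assumption) is to flush the caches without restarting and see whether the speed comes back:

    -- flush cached query plans; if performance instantly recovers, plan cache
    -- bloat or parameter sniffing is the likely culprit
    DBCC FREEPROCCACHE;

    -- optionally also drop clean buffer pages to mimic a cold restart
    DBCC DROPCLEANBUFFERS;

    If DBCC FREEPROCCACHE alone restores performance, the next step would be hunting for the specific plans involved rather than restarting the whole service every other week.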

    Read the article

  • mysql startup, shutdown and logging on osx

    - by Joelio
    Hi, I am trying to troubleshoot some MySQL problems (I have a table I can't seem to delete or drop; it hangs forever). I'm on OS X 10.5.8 and I don't remember how (or even if) I installed MySQL. Here is what I know:

    It automatically starts on boot, and the process looks like this:

    /usr/local/mysql/libexec/mysqld --basedir=/usr/local/mysql --datadir=/usr/local/mysql/var --pid-file=/usr/local/mysql/var/Joels-New-Pro.local.pid
    _mysql 96 0.0 0.0 75884 684 ?? Ss Sat06PM 0:00.02 /bin/sh /usr/local/mysql/bin/mysqld_safe

    When I run /usr/local/mysql/libexec/mysqld --verbose --help it says:

    /usr/local/mysql/libexec/mysqld Ver 5.0.45 for apple-darwin9.1.0 on i686 (Source distribution)

    It seems to use my.cnf from /etc/my.cnf.

    Now here are my questions. I don't see anything in the startup items that remotely looks like MySQL:

    ls /Library/StartupItems/
    BRESINKx86Monitoring ChmodBPF HP IO HP Trap Monitor Parallels ParallelsTransporter

    1.) So how does it start up automatically?
    2.) How do I start and stop this type of installation?

    Also, looking at the config, the logs have no values:

    /usr/local/mysql/libexec/mysqld --verbose --help | grep '^log'
    log (No default value)
    log-bin (No default value)
    log-bin-index (No default value)
    log-bin-trust-function-creators FALSE
    log-bin-trust-routine-creators FALSE
    log-error
    log-isam myisam.log
    log-queries-not-using-indexes FALSE
    log-short-format FALSE
    log-slave-updates FALSE
    log-slow-admin-statements FALSE
    log-slow-queries (No default value)
    log-tc tc.log
    log-tc-size 24576
    log-update (No default value)
    log-warnings 1

    3.) Does that mean there is no logging enabled in my setup?

    Thanks in advance! Joel
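    Since the process tree shows mysqld running under mysqld_safe rather than under a StartupItem, the launcher is probably a launchd job; either way, a source-style install like this can be controlled by hand. A sketch (standard MySQL 5.0 commands; the launchd label and whether a root password is set are assumptions):

    # look for a launchd job instead of a StartupItem
    ls /Library/LaunchDaemons/ | grep -i mysql
    sudo launchctl list | grep -i mysql

    # stop the server cleanly (add -p if a MySQL root password is set)
    /usr/local/mysql/bin/mysqladmin -u root shutdown

    # start it again by hand
    sudo /usr/local/mysql/bin/mysqld_safe --user=mysql &

    Stopping and starting by hand also gives a clean window to add log settings to /etc/my.cnf (e.g. log-slow-queries), which would answer question 3: with everything at "No default value", no query logging is currently enabled.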

    Read the article

  • RDP problem with Vista and Windows 7 destination

    - by MadBison
    I use a server at home to host a bunch of concurrently running Hyper-V VMs with different OSes and software for testing. I have Vista on the laptop, with all the latest SPs and patches. The server is Server 2008 R2, fully patched. The guests are a mix of XP, Vista, Server 2008 and Windows 7. If I connect to the Win XP or Server 2008 guests using RDP, it is always good: very quick, no speed issues. If I connect to the Vista or Win 7 guests, the response time is so slow it is unusable - usually 6 or 8 seconds, and at times too long to measure! This happens both from the laptop running Vista and from the server running Server 2008 R2. Does anyone know what the issue is with RDP to Vista and Windows 7 destinations? I did read this: http://blog.tmcnet.com/blog/tom-keating/microsoft/remote-desktop-slow-problem-solved.asp and that is not the problem; I have applied that change to all PCs.
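    For reference, the tweak that kind of post usually describes is the receive-window autotuning switch below; since it is already ruled out here, the companion offload settings on the same TCP stack are the next things worth toggling one at a time (these are real netsh options on Vista/2008-era Windows, but their relevance to this particular setup is an assumption):

    rem receive-window autotuning (the commonly cited RDP fix)
    netsh interface tcp set global autotuninglevel=disabled

    rem related offload features worth testing individually
    netsh interface tcp set global chimney=disabled
    netsh interface tcp set global rss=disabled

    Because only the Vista/Win7 guests are slow, it may also be worth comparing their virtual NIC type (synthetic vs. legacy) against the fast XP/2008 guests before blaming the client-side stack.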

    Read the article

  • Creating mdraid device on top of other existing mdraid devices

    - by Dmitriusan
    I'm considering creating something like a "hierarchical RAID" and wondering whether it is possible using pure mdraid. Moreover, I'm going to boot from this device. I'm using Ubuntu Server 12.04 LTS with the GRUB 2 bootloader.

    The motivation: I have 4 x 1 TB 7200 rpm disks. Two are newer and faster (up to 200 MB/s) and the other two are slower (up to 140 MB/s). I want to create a RAID-0 device from them. When creating such a RAID-0 directly from the 4 hard disks, I get a total speed of up to ~480 MB/s. That is roughly 4 * 120 MB/s, so the RAID-0 runs at the speed of the slowest device.

    My idea is to create a separate RAID-0 device, md0, from 500 GB partitions of the slower hard disks. Theoretically, this md0 device will reach up to 2 * 140 = 280 MB/s. After that, I'm going to add this md0 device to a RAID-0 with the faster disks, finishing with up to 3 * 200 = 600 MB/s. The stripe width for this outer RAID would be twice as big as for the underlying RAID of slow disks.

    Questions are:
    - is it possible, or am I missing something?
    - will it work as expected?
    - can I boot from such a consolidated RAID device?
    - any better ideas?
    - any pitfalls?

    I don't want to use fakeraid for consolidating the slow disks, for multiple reasons (portability, ability to customize parameters, and so on).

    PS: The speed is needed for a home virtualization server, and partly just for experience/fun. Reliability is provided via regular automatic backups to a separate device.

    PPS: I also considered using different stripe widths for hard disks with different speeds in a single RAID, but mdraid does not seem to support that.
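    For what it's worth, mdadm does accept an md device as a member of another array, so the layered layout can at least be expressed. A minimal sketch (device names and partition layout are assumptions, and the --chunk values are only a starting point for the double-stripe-width idea; whether GRUB 2 can boot from the result is exactly the open question):

    # inner RAID-0 across the two slower disks (500 GB partitions)
    mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=512 \
          /dev/sdc1 /dev/sdd1

    # outer RAID-0: the two fast disks plus the inner array as a third member
    mdadm --create /dev/md1 --level=0 --raid-devices=3 --chunk=1024 \
          /dev/sda1 /dev/sdb1 /dev/md0

    # persist the layout so the initramfs can assemble it at boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u

    The outer chunk being twice the inner one means each 1024 KiB slice landing on md0 is itself striped as 2 x 512 KiB across the slow disks, which is the stated intent.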

    Read the article

  • PHP + MySQL site performance

    - by Diego
    I have to manage a site which wasn't developed by me. It is in PHP with a MySQL database, which is located on the web server. The site sometimes (when visitor numbers climb too high) stops responding, or responds too slowly. I have developed some sites in PHP but never took care of the management, so I really don't know where to start. The server hardware seems to be fine: when the site stops responding, the CPU is at about 55% and there is a lot of memory free. I'm not asking someone to solve this issue; I only would really like a few tips about where I can find logs and how I should read and interpret them. That way I would be able to tell whether it's the network traffic, the database (and which queries), or something else. Thanks!

    Update: Forgot to say: it is a Windows Server 2003 machine.

    Note: I've recorded about a day with Jet Profiler. I don't really understand all the information it provides, but there is one query which it marks as really slow. It makes sense, because it is a SELECT with a WHERE clause containing three LIKE conditions. Initially I didn't include this in my question because when I run the query from MySQL Query Browser it doesn't take long at all - it stays under 0.01 seconds.
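    As a starting point for the "where are the logs" part: MySQL's slow query log records exactly the statements that drag under load. A sketch of enabling it in my.ini on Windows (the option names assume MySQL 5.1 or later; older 5.0 servers use log-slow-queries instead, and the file path is an example):

    # my.ini - [mysqld] section
    [mysqld]
    slow_query_log      = 1
    slow_query_log_file = "C:/mysql-logs/mysql-slow.log"
    long_query_time     = 2            # seconds; log anything slower than this
    log_queries_not_using_indexes = 1  # also catch unindexed LIKE '%...%' scans

    Restart the MySQL service afterwards. Under real traffic the log then shows whether the three-LIKE query only misbehaves under concurrency, which running it alone in Query Browser on an idle server would never reveal.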

    Read the article

  • Configuring OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (java version 1.6.0_17) trigger seeding of SecureRandom done seeding SecureRandom %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_W ITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SH A, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA] Compression Methods: { 0 } *** Thread-0, WRITE: TLSv1 Handshake, length = 73 Thread-0, WRITE: SSLv2 client hello message, length = 98 Thread-0, received EOFException: error Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure Thread-0, WRITE: TLSv1 Alert, length = 2 Thread-0, called closeSocket() main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during hands hake] at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source) at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source) at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source) at javax.naming.spi.NamingManager.getInitialContext(Unknown Source) at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source) at javax.naming.InitialContext.init(Unknown Source) at javax.naming.InitialContext.<init>(Unknown Source) at javax.naming.directory.InitialDirContext.<init>(Unknown Source) at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43) at LDAPConnector.main(LDAPConnector.java:237) Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at com.sun.jndi.ldap.Connection.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.EOFException: SSL peer shut down incorrectly at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source) ... 
    I am able to connect to the same secure LDAP server, however, if I use another version of Java (1.6.0_14). I have created and installed the server certificates in the cacerts of both JREs, as mentioned in this guide -- OpenLDAP with SSL.

    When I run ldapsearch -x on the server I get:

    # extended LDIF
    #
    # LDAPv3
    # base <dc=localdomain> (default) with scope subtree
    # filter: (objectclass=*)
    # requesting: ALL
    #

    # localdomain
    dn: dc=localdomain
    objectClass: top
    objectClass: dcObject
    objectClass: organization
    o: localdomain
    dc: localdomain

    # admin, localdomain
    dn: cn=admin,dc=localdomain
    objectClass: simpleSecurityObject
    objectClass: organizationalRole
    cn: admin
    description: LDAP administrator

    # search result
    search: 2
    result: 0 Success

    # numResponses: 3
    # numEntries: 2

    On running openssl s_client -connect ldap.natraj.com:636 -showcerts, I obtain the self-signed certificate.

    My slapd.conf file is as follows:

    #######################################################################
    # Global Directives:

    # Features to permit
    #allow bind_v2

    # Schema and objectClass definitions
    include /etc/ldap/schema/core.schema
    include /etc/ldap/schema/cosine.schema
    include /etc/ldap/schema/nis.schema
    include /etc/ldap/schema/inetorgperson.schema

    # Where the pid file is put. The init.d script
    # will not stop the server if you change this.
    pidfile /var/run/slapd/slapd.pid

    # List of arguments that were passed to the server
    argsfile /var/run/slapd/slapd.args

    # Read slapd.conf(5) for possible values
    loglevel none

    # Where the dynamically loaded modules are stored
    modulepath /usr/lib/ldap
    moduleload back_hdb

    # The maximum number of entries that is returned for a search operation
    sizelimit 500

    # The tool-threads parameter sets the actual amount of cpu's that is used
    # for indexing.
    tool-threads 1

    #######################################################################
    # Specific Backend Directives for hdb:
    # Backend specific directives apply to this backend until another
    # 'backend' directive occurs
    backend hdb

    #######################################################################
    # Specific Backend Directives for 'other':
    # Backend specific directives apply to this backend until another
    # 'backend' directive occurs
    #backend <other>

    #######################################################################
    # Specific Directives for database #1, of type hdb:
    # Database specific directives apply to this database until another
    # 'database' directive occurs
    database hdb

    # The base of your directory in database #1
    suffix "dc=localdomain"

    # rootdn directive for specifying a superuser on the database. This is needed
    # for syncrepl.
    rootdn "cn=admin,dc=localdomain"

    # Where the database files are physically stored for database #1
    directory "/var/lib/ldap"

    # The dbconfig settings are used to generate a DB_CONFIG file the first
    # time slapd starts. They do NOT override an existing DB_CONFIG
    # file. You should therefore change these settings in DB_CONFIG directly
    # or remove DB_CONFIG and restart slapd for changes to take effect.

    # For the Debian package we use 2MB as default but be sure to update this
    # value if you have plenty of RAM
    dbconfig set_cachesize 0 2097152 0

    # Sven Hartge reported that he had to set this value incredibly high
    # to get slapd running at all. See http://bugs.debian.org/303057 for more
    # information.

    # Number of objects that can be locked at the same time.
    dbconfig set_lk_max_objects 1500
    # Number of locks (both requested and granted)
    dbconfig set_lk_max_locks 1500
    # Number of lockers
    dbconfig set_lk_max_lockers 1500

    # Indexing options for database #1
    index objectClass eq

    # Save the time that the entry gets modified, for database #1
    lastmod on

    # Checkpoint the BerkeleyDB database periodically in case of system
    # failure and to speed slapd shutdown.
    checkpoint 512 30

    # Where to store the replica logs for database #1
    # replogfile /var/lib/ldap/replog

    # The userPassword by default can be changed
    # by the entry owning it if they are authenticated.
    # Others should not be able to see it, except the
    # admin entry below
    # These access lines apply to database #1 only
    access to attrs=userPassword,shadowLastChange
            by dn="cn=admin,dc=localdomain" write
            by anonymous auth
            by self write
            by * none

    # Ensure read access to the base for things like
    # supportedSASLMechanisms. Without this you may
    # have problems with SASL not knowing what
    # mechanisms are available and the like.
    # Note that this is covered by the 'access to *'
    # ACL below too but if you change that as people
    # are wont to do you'll still need this if you
    # want SASL (and possible other things) to work
    # happily.
    access to dn.base="" by * read

    # The admin dn has full write access, everyone else
    # can read everything.
    access to *
            by dn="cn=admin,dc=localdomain" write
            by * read

    # For Netscape Roaming support, each user gets a roaming
    # profile for which they have write access to
    #access to dn=".*,ou=Roaming,o=morsnet"
    #        by dn="cn=admin,dc=localdomain" write
    #        by dnattr=owner write

    #######################################################################
    # Specific Directives for database #2, of type 'other' (can be hdb too):
    # Database specific directives apply to this database until another
    # 'database' directive occurs
    #database <other>

    # The base of your directory for database #2
    #suffix "dc=debian,dc=org"

    #######################################################################
    # SSL:
    # Uncomment the following lines to enable SSL and use the default
    # snakeoil certificates.
    #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
    #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    TLSCipherSuite TLS_RSA_AES_256_CBC_SHA
    TLSCACertificateFile /etc/ldap/ssl/server.pem
    TLSCertificateFile /etc/ldap/ssl/server.pem
    TLSCertificateKeyFile /etc/ldap/ssl/server.pem

    My ldap.conf file is:

    #
    # LDAP Defaults
    #

    # See ldap.conf(5) for details
    # This file should be world readable but not world writable.

    HOST ldap.natraj.com
    PORT 636
    BASE dc=localdomain
    URI ldaps://ldap.natraj.com

    TLS_CACERT /etc/ldap/ssl/server.pem
    TLS_REQCERT allow

    #SIZELIMIT 12
    #TIMELIMIT 15
    #DEREF never

    Why is it that I can connect to the same server using one version of the JRE but not with another?
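    For reference, here is a minimal sketch of the kind of JNDI client the stack trace implies: an InitialDirContext performing a simple bind over ldaps://. The class name LdapsProbe, the bind DN, and the password are placeholders, not taken from the question:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingException;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;

    public class LdapsProbe {
        public static void main(String[] args) throws NamingException {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "com.sun.jndi.ldap.LdapCtxFactory");
            // An ldaps:// URL makes JNDI open an SSL socket to port 636.
            env.put(Context.PROVIDER_URL, "ldaps://ldap.natraj.com:636");
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            env.put(Context.SECURITY_PRINCIPAL, "cn=admin,dc=localdomain"); // placeholder bind DN
            env.put(Context.SECURITY_CREDENTIALS, "secret");                // placeholder password
            // The SSL handshake happens inside this constructor, which is where
            // the SSLHandshakeException in the trace above is thrown.
            DirContext ctx = new InitialDirContext(env);
            System.out.println("Bound as: " + ctx.getNameInNamespace());
            ctx.close();
        }
    }

    Since the trace shows the server closing the connection immediately after the ClientHello, and slapd.conf pins the server to the single suite TLS_RSA_AES_256_CBC_SHA, one avenue worth checking is whether the two JRE updates offer different cipher suites: the ClientHello above contains no AES-256 suite, and in Java 6 the AES-256 suites are only enabled once the unlimited-strength JCE policy files are installed. This is an observation about the posted output, not a confirmed diagnosis.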

