Search Results

Search found 3760 results on 151 pages for 'multiple entries'.


  • OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (java version 1.6.0_17) trigger seeding of SecureRandom done seeding SecureRandom %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_W ITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SH A, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA] Compression Methods: { 0 } *** Thread-0, WRITE: TLSv1 Handshake, length = 73 Thread-0, WRITE: SSLv2 client hello message, length = 98 Thread-0, received EOFException: error Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure Thread-0, WRITE: TLSv1 Alert, length = 2 Thread-0, called closeSocket() main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during hands hake] at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source) at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source) at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source) at javax.naming.spi.NamingManager.getInitialContext(Unknown Source) at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source) at javax.naming.InitialContext.init(Unknown Source) at javax.naming.InitialContext.<init>(Unknown Source) at javax.naming.directory.InitialDirContext.<init>(Unknown Source) at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43) at LDAPConnector.main(LDAPConnector.java:237) Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at com.sun.jndi.ldap.Connection.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.EOFException: SSL peer shut down incorrectly at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source) ... 
9 more I am able to connect to the same secure LDAP server however if I use another version of java (1.6.0_14) I have created and installed the server certificates in the cacerts of both the JRE's as mentioned in this guide -- OpenLDAP with SSL When I run ldapsearch -x on the server I get # extended LDIF # # LDAPv3 # base <dc=localdomain> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # localdomain dn: dc=localdomain objectClass: top objectClass: dcObject objectClass: organization o: localdomain dc: localdomain # admin, localdomain dn: cn=admin,dc=localdomain objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator # search result search: 2 result: 0 Success # numResponses: 3 # numEntries: 2 On running openssl s_client -connect ldap.natraj.com:636 -showcerts , I obtain the self signed certificate. My slapd.conf file is as follows ####################################################################### # Global Directives: # Features to permit #allow bind_v2 # Schema and objectClass definitions include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema # Where the pid file is put. The init.d script # will not stop the server if you change this. pidfile /var/run/slapd/slapd.pid # List of arguments that were passed to the server argsfile /var/run/slapd/slapd.args # Read slapd.conf(5) for possible values loglevel none # Where the dynamically loaded modules are stored modulepath /usr/lib/ldap moduleload back_hdb # The maximum number of entries that is returned for a search operation sizelimit 500 # The tool-threads parameter sets the actual amount of cpu's that is used # for indexing. tool-threads 1 ####################################################################### # Specific Backend Directives for hdb: # Backend specific directives apply to this backend until another # 'backend' directive occurs backend hdb ####################################################################### # Specific Backend Directives for 'other': # Backend specific directives apply to this backend until another # 'backend' directive occurs #backend <other> ####################################################################### # Specific Directives for database #1, of type hdb: # Database specific directives apply to this databasse until another # 'database' directive occurs database hdb # The base of your directory in database #1 suffix "dc=localdomain" # rootdn directive for specifying a superuser on the database. This is needed # for syncrepl. rootdn "cn=admin,dc=localdomain" # Where the database file are physically stored for database #1 directory "/var/lib/ldap" # The dbconfig settings are used to generate a DB_CONFIG file the first # time slapd starts. They do NOT override existing an existing DB_CONFIG # file. You should therefore change these settings in DB_CONFIG directly # or remove DB_CONFIG and restart slapd for changes to take effect. # For the Debian package we use 2MB as default but be sure to update this # value if you have plenty of RAM dbconfig set_cachesize 0 2097152 0 # Sven Hartge reported that he had to set this value incredibly high # to get slapd running at all. See http://bugs.debian.org/303057 for more # information. # Number of objects that can be locked at the same time. 
dbconfig set_lk_max_objects 1500 # Number of locks (both requested and granted) dbconfig set_lk_max_locks 1500 # Number of lockers dbconfig set_lk_max_lockers 1500 # Indexing options for database #1 index objectClass eq # Save the time that the entry gets modified, for database #1 lastmod on # Checkpoint the BerkeleyDB database periodically in case of system # failure and to speed slapd shutdown. checkpoint 512 30 # Where to store the replica logs for database #1 # replogfile /var/lib/ldap/replog # The userPassword by default can be changed # by the entry owning it if they are authenticated. # Others should not be able to see it, except the # admin entry below # These access lines apply to database #1 only access to attrs=userPassword,shadowLastChange by dn="cn=admin,dc=localdomain" write by anonymous auth by self write by * none # Ensure read access to the base for things like # supportedSASLMechanisms. Without this you may # have problems with SASL not knowing what # mechanisms are available and the like. # Note that this is covered by the 'access to *' # ACL below too but if you change that as people # are wont to do you'll still need this if you # want SASL (and possible other things) to work # happily. access to dn.base="" by * read # The admin dn has full write access, everyone else # can read everything. access to * by dn="cn=admin,dc=localdomain" write by * read # For Netscape Roaming support, each user gets a roaming # profile for which they have write access to #access to dn=".*,ou=Roaming,o=morsnet" # by dn="cn=admin,dc=localdomain" write # by dnattr=owner write ####################################################################### # Specific Directives for database #2, of type 'other' (can be hdb too): # Database specific directives apply to this databasse until another # 'database' directive occurs #database <other> # The base of your directory for database #2 #suffix "dc=debian,dc=org" ####################################################################### # SSL: # Uncomment the following lines to enable SSL and use the default # snakeoil certificates. #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key TLSCipherSuite TLS_RSA_AES_256_CBC_SHA TLSCACertificateFile /etc/ldap/ssl/server.pem TLSCertificateFile /etc/ldap/ssl/server.pem TLSCertificateKeyFile /etc/ldap/ssl/server.pem My ldap.conf file is # # LDAP Defaults # # See ldap.conf(5) for details # This file should be world readable but not world writable. HOST ldap.natraj.com PORT 636 BASE dc=localdomain URI ldaps://ldap.natraj.com TLS_CACERT /etc/ldap/ssl/server.pem TLS_REQCERT allow #SIZELIMIT 12 #TIMELIMIT 15 #DEREF never
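    One detail worth checking from the trace above: the ClientHello only offers 128-bit AES suites (TLS_RSA_WITH_AES_128_CBC_SHA and weaker), while slapd.conf restricts TLSCipherSuite to TLS_RSA_AES_256_CBC_SHA, so the server may simply find no cipher in common with the 1.6.0_17 client and drop the connection during the handshake. A minimal sketch for listing what a given JRE actually enables, using nothing beyond the default JSSE socket factory:

        import javax.net.ssl.SSLSocketFactory;

        public class CipherSuiteCheck {
            public static void main(String[] args) {
                SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
                System.out.println("Enabled by default in this JRE:");
                for (String suite : factory.getDefaultCipherSuites()) {
                    System.out.println("  " + suite);
                }
                System.out.println("Supported but not enabled by default:");
                for (String suite : factory.getSupportedCipherSuites()) {
                    System.out.println("  " + suite);
                }
            }
        }

    Running this under both 1.6.0_14 and 1.6.0_17 (and again after installing the JCE unlimited-strength policy files, if AES-256 is missing) shows whether the failing JRE can actually negotiate the one suite the server allows.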


  • Custom Rails route problem with 2.3.8 and Mongrel

    - by CHsurfer
    I have a controller called 'exposures' which I created automatically with the script/generate scaffold call. The scaffold pages work fine. I created a custom action called 'test' in the exposures controller. When I try to call the page (http://127.0.0.1:3000/exposures/test/1) I get a blank, white screen with no text at all in the source. I am using Rails 2.3.8 and mongrel in the development environment. There are no entries in development.log and the console that was used to open mongrel has the following error: You might have expected an instance of Array. The error occurred while evaluating nil.split D:/Rails/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/cgi_process.rb:52:in dispatch_cgi' D:/Rails/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/dispatcher.rb:101:in dispatch_cgi' D:/Rails/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/dispatcher.rb:27:in dispatch' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/rails.rb:76:in process' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/rails.rb:74:in synchronize' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/rails.rb:74:in process' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:159:in process_client' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:158:in each' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:158:in process_client' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:285:in run' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:285:in initialize' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:285:in new' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:285:in run' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:268:in initialize' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:268:in new' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:268:in run' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/configurator.rb:282:in run' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/configurator.rb:281:in each' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/configurator.rb:281:in run' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/mongrel_rails:128:in run' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/command.rb:212:in run' D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/mongrel_rails:281 D:/Rails/ruby/bin/mongrel_rails:19:in load' D:/Rails/ruby/bin/mongrel_rails:19 Here is the exposures_controller code: class ExposuresController < ApplicationController # GET /exposures # GET /exposures.xml def index @exposures = Exposure.all respond_to do |format| format.html # index.html.erb format.xml { render :xml => @exposures } end end #/exposure/graph/1 def graph @exposure = Exposure.find(params[:id]) project_name = @exposure.tender.project.name group_name = @exposure.tender.user.group.name tender_desc = @exposure.tender.description direction = "Cash Out" direction = "Cash In" if @exposure.supply currency_1_and_2 = "#{@exposure.currency_in} = 
#{@exposure.currency_out}" title = "#{project_name}:#{group_name}:#{tender_desc}/n" title += "#{direction}:#{currency_1_and_2}" factors = Array.new carrieds = Array.new days = Array.new @exposure.rates.each do |r| factors << r.factor carrieds << r.carried days << r.day.to_s end max = (factors+carrieds).max min = (factors+carrieds).min g = Graph.new g.title(title, '{font-size: 12px;}') g.set_data(factors) g.line_hollow(2, 4, '0x80a033', 'Bounces', 10) g.set_x_labels(days) g.set_x_label_style( 10, '#CC3399', 2 ); g.set_y_min(min*0.9) g.set_y_max(max*1.1) g.set_y_label_steps(5) render :text = g.render end def test render :text = "this works" end # GET /exposures/1 # GET /exposures/1.xml def show @exposure = Exposure.find(params[:id]) @graph = open_flash_chart_object(700,250, "/exposures/graph/#{@exposure.id}") #@graph = "/exposures/graph/#{@exposure.id}" respond_to do |format| format.html # show.html.erb format.xml { render :xml => @exposure } end end # GET /exposures/new # GET /exposures/new.xml def new @exposure = Exposure.new respond_to do |format| format.html # new.html.erb format.xml { render :xml => @exposure } end end # GET /exposures/1/edit def edit @exposure = Exposure.find(params[:id]) end # POST /exposures # POST /exposures.xml def create @exposure = Exposure.new(params[:exposure]) respond_to do |format| if @exposure.save flash[:notice] = 'Exposure was successfully created.' format.html { redirect_to(@exposure) } format.xml { render :xml => @exposure, :status => :created, :location => @exposure } else format.html { render :action => "new" } format.xml { render :xml => @exposure.errors, :status => :unprocessable_entity } end end end # PUT /exposures/1 # PUT /exposures/1.xml def update @exposure = Exposure.find(params[:id]) respond_to do |format| if @exposure.update_attributes(params[:exposure]) flash[:notice] = 'Exposure was successfully updated.' format.html { redirect_to(@exposure) } format.xml { head :ok } else format.html { render :action => "edit" } format.xml { render :xml => @exposure.errors, :status => :unprocessable_entity } end end end # DELETE /exposures/1 # DELETE /exposures/1.xml def destroy @exposure = Exposure.find(params[:id]) @exposure.destroy respond_to do |format| format.html { redirect_to(exposures_url) } format.xml { head :ok } end end end Clever readers will notice the 'graph' action. This is what I really want to work, but if I can't even get the test action working, then I'm sure I have no chance. Any ideas? I have restarted mongrel a few times with no change. Here is the output of Rake routes (but I don't believe this is the problem. The error would be in the form of and HTML error response). D:\Rails\rails_apps\fxrake routes (in D:/Rails/rails_apps/fx) DEPRECATION WARNING: Rake tasks in vendor/plugins/open_flash_chart/tasks are deprecated. Use lib/tasks instead. 
(called from D:/ by/gems/1.8/gems/rails-2.3.8/lib/tasks/rails.rb:10) rates GET /rates(.:format) {:controller="rates", :action="index"} POST /rates(.:format) {:controller="rates", :action="create"} new_rate GET /rates/new(.:format) {:controller="rates", :action="new"} edit_rate GET /rates/:id/edit(.:format) {:controller="rates", :action="edit"} rate GET /rates/:id(.:format) {:controller="rates", :action="show"} PUT /rates/:id(.:format) {:controller="rates", :action="update"} DELETE /rates/:id(.:format) {:controller="rates", :action="destroy"} tenders GET /tenders(.:format) {:controller="tenders", :action="index"} POST /tenders(.:format) {:controller="tenders", :action="create"} new_tender GET /tenders/new(.:format) {:controller="tenders", :action="new"} edit_tender GET /tenders/:id/edit(.:format) {:controller="tenders", :action="edit"} tender GET /tenders/:id(.:format) {:controller="tenders", :action="show"} PUT /tenders/:id(.:format) {:controller="tenders", :action="update"} DELETE /tenders/:id(.:format) {:controller="tenders", :action="destroy"} exposures GET /exposures(.:format) {:controller="exposures", :action="index"} POST /exposures(.:format) {:controller="exposures", :action="create"} new_exposure GET /exposures/new(.:format) {:controller="exposures", :action="new"} edit_exposure GET /exposures/:id/edit(.:format) {:controller="exposures", :action="edit"} exposure GET /exposures/:id(.:format) {:controller="exposures", :action="show"} PUT /exposures/:id(.:format) {:controller="exposures", :action="update"} DELETE /exposures/:id(.:format) {:controller="exposures", :action="destroy"} currencies GET /currencies(.:format) {:controller="currencies", :action="index"} POST /currencies(.:format) {:controller="currencies", :action="create"} new_currency GET /currencies/new(.:format) {:controller="currencies", :action="new"} edit_currency GET /currencies/:id/edit(.:format) {:controller="currencies", :action="edit"} currency GET /currencies/:id(.:format) {:controller="currencies", :action="show"} PUT /currencies/:id(.:format) {:controller="currencies", :action="update"} DELETE /currencies/:id(.:format) {:controller="currencies", :action="destroy"} projects GET /projects(.:format) {:controller="projects", :action="index"} POST /projects(.:format) {:controller="projects", :action="create"} new_project GET /projects/new(.:format) {:controller="projects", :action="new"} edit_project GET /projects/:id/edit(.:format) {:controller="projects", :action="edit"} project GET /projects/:id(.:format) {:controller="projects", :action="show"} PUT /projects/:id(.:format) {:controller="projects", :action="update"} DELETE /projects/:id(.:format) {:controller="projects", :action="destroy"} groups GET /groups(.:format) {:controller="groups", :action="index"} POST /groups(.:format) {:controller="groups", :action="create"} new_group GET /groups/new(.:format) {:controller="groups", :action="new"} edit_group GET /groups/:id/edit(.:format) {:controller="groups", :action="edit"} group GET /groups/:id(.:format) {:controller="groups", :action="show"} PUT /groups/:id(.:format) {:controller="groups", :action="update"} DELETE /groups/:id(.:format) {:controller="groups", :action="destroy"} users GET /users(.:format) {:controller="users", :action="index"} POST /users(.:format) {:controller="users", :action="create"} new_user GET /users/new(.:format) {:controller="users", :action="new"} edit_user GET /users/:id/edit(.:format) {:controller="users", :action="edit"} user GET /users/:id(.:format) {:controller="users", :action="show"} 
PUT /users/:id(.:format) {:controller="users", :action="update"} DELETE /users/:id(.:format) {:controller="users", :action="destroy"} /:controller/:action/:id /:controller/:action/:id(.:format) D:\Rails\rails_apps\fxrails -v Rails 2.3.8 Thanks in advance for the help -Jon
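    Two things stand out in the paste above. First, the hand-written actions use `render :text = ...`, which is a syntax error as shown; Rails 2.3 needs the hash rocket, as the scaffold actions already use elsewhere (this may only be a copy/paste casualty). Second, a completely blank response together with `nil.split` inside cgi_process.rb is commonly reported as a Rails 2.3.8 / Mongrel 1.1.x dispatcher incompatibility rather than a routing problem, which would explain why rake routes looks fine. A quick way to separate the two, assuming only the stock script/server:

        # app/controllers/exposures_controller.rb (excerpt)
        def test
          render :text => "this works"   # hash rocket, not `render :text = "this works"`
        end

        # Then serve the app with WEBrick instead of Mongrel and hit the route:
        #   ruby script/server webrick
        #   curl -i http://127.0.0.1:3000/exposures/test/1

    If WEBrick returns "this works", the controller and route are fine and the fix belongs at the Mongrel/Rails 2.3.8 layer (a newer Mongrel or a different app server) rather than in the application code.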


  • Why is this PHP loop rendering every row twice?

    - by Christopher
    I'm working on a real frankensite here not of my own design. There's a rudimentary CMS and one of the pages shows customer records from a MySQL DB. For some reason, it has no probs picking up the data from the DB - there's no duplicate records - but it renders each row twice. <?php $limit = 500; $area = 'customers_list'; $prc = 'customer_list.php'; if($_GET['page']) { include('inc/functions.php'); $page = $_GET['page']; } else { $page = 1; } $limitvalue = $page * $limit - ($limit); $customers_check = get_customers(); $customers = get_customers($limitvalue, $limit); $totalrows = count($customers_check); ?> <!-- pid: customer_list --> <table border="0" width="100%" cellpadding="0" cellspacing="0" style="float: left; margin-bottom: 20px;"> <tr> <td class="col_title" width="200">Name</td> <td></td> <td class="col_title" width="200">Town/City</td> <td></td> <td class="col_title">Telephone</td> <td></td> </tr> <?php for ($i = 0; $i < count($customers); $i++) { ?> <tr> <td colspan="2" class="cus_col_1"><a href="customer_details.php?id=<?php echo $customers[$i]['customer_id']; ?>"><?php echo $customers[$i]['surname'].', '.$customers[$i]['first_name']; ?></a></td> <td colspan="2" class="cus_col_2"><?php echo $customers[$i]['town']; ?></td> <td class="cus_col_1"><?php echo $customers[$i]['telephone']; ?></td> <td class="cus_col_2"> <a href="javascript: single_execute('prc/customers.prc.php?delete=yes&id=<?php echo $customers[$i]['customer_id']; ?>')" onClick="return confirmdel();" class="btn_maroon_small" style="margin: 0px; float: right; margin-right: 10px;"><div class="btn_maroon_small_left"> <div class="btn_maroon_small_right">Delete Account</div> </div></a> <a href="customer_edit.php?id=<?php echo $customers[$i]['customer_id']; ?>" class="btn_black" style="margin: 0px; float: right; margin-right: 10px;"><div class="btn_black_left"> <div class="btn_black_right">Edit Account</div> </div></a> <a href="mailto: <?php echo $customers[$i]['email']; ?>" class="btn_black" style="margin: 0px; float: right; margin-right: 10px;"><div class="btn_black_left"> <div class="btn_black_right">Email Customer</div> </div></a> </td> </tr> <tr><td class="col_divider" colspan="6"></td></tr> <?php }; ?> </table> <!--///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////--> <!--// PAGINATION--> <!--///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////--> <div class="pagination_holder"> <?php if($page != 1) { $pageprev = $page-1; ?> <a href="javascript: change('<?php echo $area; ?>', '<?php echo $prc; ?>?page=<?php echo $pageprev; ?>');" class="pagination_left">Previous</a> <?php } else { ?> <div class="pagination_left, page_grey">Previous</div> <?php } ?> <div class="pagination_middle"> <?php $numofpages = $totalrows / $limit; for($i = 1; $i <= $numofpages; $i++) { if($i == $page) { ?> <div class="page_number_selected"><?php echo $i; ?></div> <?php } else { ?> <a href="javascript: change('<?php echo $area; ?>', '<?php echo $prc; ?>?page=<?php echo $i; ?>');" class="page_number"><?php echo $i; ?></a> <?php } } if(($totalrows % $limit) != 0) { if($i == $page) { ?> <div class="page_number_selected"><?php echo $i; ?></div> <?php } else { ?> <a href="javascript: change('<?php echo $area; ?>', '<?php echo $prc; ?>?page=<?php echo $i; ?>');" class="page_number"><?php echo $i; ?></a> <?php } } ?> </div> <?php if(($totalrows - ($limit * 
$page)) > 0) { $pagenext = $page+1; ?> <a href="javascript: change('<?php echo $area; ?>', '<?php echo $prc; ?>?page=<?php echo $pagenext; ?>');" class="pagination_right">Next</a> <?php } else { ?> <div class="pagination_right, page_grey">Next</div> <?php } ?> </div> <!--///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////--> <!--// END PAGINATION--> <!--///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////--> I'm not the world's best PHP expert but I think I can see an error in a for loop when there is one... But everything looks ok to me. You'll notice that the customer name is clickable; clicking takes you to another page where you can view their full info as held in the DB - and for both rows, the customer ID is identical, and manually checking the DB shows there's no duplicate entries. The code is definitely rendering each row twice, but for what reason I have no idea. All pointers / advice appreciated.
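    Since the loop only emits one row per element of $customers, duplicate rows with identical customer ids usually mean either that get_customers() itself returns every record twice (a JOIN fanning out, for instance) or that the whole template is being rendered twice - once by the normal page load and once more by the change()/single_execute() Ajax helpers that load customer_list.php into the page. A small diagnostic sketch, assuming get_customers() comes from inc/functions.php as the page suggests:

        <?php
        include('inc/functions.php');          // assumed home of get_customers()

        $customers = get_customers(0, 500);
        $ids = array();
        foreach ($customers as $row) {
            $ids[] = $row['customer_id'];
        }

        // If both counts match the table in MySQL but the rendered HTML shows twice
        // as many rows, the template is being included or requested twice rather
        // than this loop duplicating anything.
        echo 'rows fetched: ' . count($customers) . "\n";
        echo 'unique ids:   ' . count(array_unique($ids)) . "\n";
        ?>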


  • what does calling 'this' outside of a jQuery plugin refer to

    - by Richard
    Hi, I am using the liveTwitter plugin The problem is that I need to stop the plugin from hitting the Twitter api. According to the documentation I need to do this $("#tab1 .container_twitter_status").each(function(){ this.twitter.stop(); }); Already, the each does not make sense on an id and what does this refer to? Anyway, I get an undefined error. I will paste the plugin code and hope it makes sense to somebody MY only problem thusfar with this plugin is that I need to be able to stop it. thanks in advance, Richard /* * jQuery LiveTwitter 1.5.0 * - Live updating Twitter plugin for jQuery * * Copyright (c) 2009-2010 Inge Jørgensen (elektronaut.no) * Licensed under the MIT license (MIT-LICENSE.txt) * * $Date: 2010/05/30$ */ /* * Usage example: * $("#twitterSearch").liveTwitter('bacon', {limit: 10, rate: 15000}); */ (function($){ if(!$.fn.reverse){ $.fn.reverse = function() { return this.pushStack(this.get().reverse(), arguments); }; } $.fn.liveTwitter = function(query, options, callback){ var domNode = this; $(this).each(function(){ var settings = {}; // Handle changing of options if(this.twitter) { settings = jQuery.extend(this.twitter.settings, options); this.twitter.settings = settings; if(query) { this.twitter.query = query; } this.twitter.limit = settings.limit; this.twitter.mode = settings.mode; if(this.twitter.interval){ this.twitter.refresh(); } if(callback){ this.twitter.callback = callback; } // ..or create a new twitter object } else { // Extend settings with the defaults settings = jQuery.extend({ mode: 'search', // Mode, valid options are: 'search', 'user_timeline' rate: 15000, // Refresh rate in ms limit: 10, // Limit number of results refresh: true }, options); // Default setting for showAuthor if not provided if(typeof settings.showAuthor == "undefined"){ settings.showAuthor = (settings.mode == 'user_timeline') ? 
false : true; } // Set up a dummy function for the Twitter API callback if(!window.twitter_callback){ window.twitter_callback = function(){return true;}; } this.twitter = { settings: settings, query: query, limit: settings.limit, mode: settings.mode, interval: false, container: this, lastTimeStamp: 0, callback: callback, // Convert the time stamp to a more human readable format relativeTime: function(timeString){ var parsedDate = Date.parse(timeString); var delta = (Date.parse(Date()) - parsedDate) / 1000; var r = ''; if (delta < 60) { r = delta + ' seconds ago'; } else if(delta < 120) { r = 'a minute ago'; } else if(delta < (45*60)) { r = (parseInt(delta / 60, 10)).toString() + ' minutes ago'; } else if(delta < (90*60)) { r = 'an hour ago'; } else if(delta < (24*60*60)) { r = '' + (parseInt(delta / 3600, 10)).toString() + ' hours ago'; } else if(delta < (48*60*60)) { r = 'a day ago'; } else { r = (parseInt(delta / 86400, 10)).toString() + ' days ago'; } return r; }, // Update the timestamps in realtime refreshTime: function() { var twitter = this; $(twitter.container).find('span.time').each(function(){ $(this).html(twitter.relativeTime(this.timeStamp)); }); }, // Handle reloading refresh: function(initialize){ var twitter = this; if(this.settings.refresh || initialize) { var url = ''; var params = {}; if(twitter.mode == 'search'){ params.q = this.query; if(this.settings.geocode){ params.geocode = this.settings.geocode; } if(this.settings.lang){ params.lang = this.settings.lang; } if(this.settings.rpp){ params.rpp = this.settings.rpp; } else { params.rpp = this.settings.limit; } // Convert params to string var paramsString = []; for(var param in params){ if(params.hasOwnProperty(param)){ paramsString[paramsString.length] = param + '=' + encodeURIComponent(params[param]); } } paramsString = paramsString.join("&"); url = "http://search.twitter.com/search.json?"+paramsString+"&callback=?"; } else if(twitter.mode == 'user_timeline') { url = "http://api.twitter.com/1/statuses/user_timeline/"+encodeURIComponent(this.query)+".json?count="+twitter.limit+"&callback=?"; } else if(twitter.mode == 'list') { var username = encodeURIComponent(this.query.user); var listname = encodeURIComponent(this.query.list); url = "http://api.twitter.com/1/"+username+"/lists/"+listname+"/statuses.json?per_page="+twitter.limit+"&callback=?"; } $.getJSON(url, function(json) { var results = null; if(twitter.mode == 'search'){ results = json.results; } else { results = json; } var newTweets = 0; $(results).reverse().each(function(){ var screen_name = ''; var profile_image_url = ''; if(twitter.mode == 'search') { screen_name = this.from_user; profile_image_url = this.profile_image_url; created_at_date = this.created_at; } else { screen_name = this.user.screen_name; profile_image_url = this.user.profile_image_url; // Fix for IE created_at_date = this.created_at.replace(/^(\w+)\s(\w+)\s(\d+)(.*)(\s\d+)$/, "$1, $3 $2$5$4"); } var userInfo = this.user; var linkified_text = this.text.replace(/[A-Za-z]+:\/\/[A-Za-z0-9-_]+\.[A-Za-z0-9-_:%&\?\/.=]+/, function(m) { return m.link(m); }); linkified_text = linkified_text.replace(/@[A-Za-z0-9_]+/g, function(u){return u.link('http://twitter.com/'+u.replace(/^@/,''));}); linkified_text = linkified_text.replace(/#[A-Za-z0-9_\-]+/g, function(u){return u.link('http://search.twitter.com/search?q='+u.replace(/^#/,'%23'));}); if(!twitter.settings.filter || twitter.settings.filter(this)) { if(Date.parse(created_at_date) > twitter.lastTimeStamp) { newTweets += 1; var tweetHTML = '<div 
class="tweet tweet-'+this.id+'">'; if(twitter.settings.showAuthor) { tweetHTML += '<img width="24" height="24" src="'+profile_image_url+'" />' + '<p class="text"><span class="username"><a href="http://twitter.com/'+screen_name+'">'+screen_name+'</a>:</span> '; } else { tweetHTML += '<p class="text"> '; } tweetHTML += linkified_text + ' <span class="time">'+twitter.relativeTime(created_at_date)+'</span>' + '</p>' + '</div>'; $(twitter.container).prepend(tweetHTML); var timeStamp = created_at_date; $(twitter.container).find('span.time:first').each(function(){ this.timeStamp = timeStamp; }); if(!initialize) { $(twitter.container).find('.tweet-'+this.id).hide().fadeIn(); } twitter.lastTimeStamp = Date.parse(created_at_date); } } }); if(newTweets > 0) { // Limit number of entries $(twitter.container).find('div.tweet:gt('+(twitter.limit-1)+')').remove(); // Run callback if(twitter.callback){ twitter.callback(domNode, newTweets); } // Trigger event $(domNode).trigger('tweets'); } }); } }, start: function(){ var twitter = this; if(!this.interval){ this.interval = setInterval(function(){twitter.refresh();}, twitter.settings.rate); this.refresh(true); } }, stop: function(){ if(this.interval){ clearInterval(this.interval); this.interval = false; } } }; var twitter = this.twitter; this.timeInterval = setInterval(function(){twitter.refreshTime();}, 5000); this.twitter.start(); } }); return this; }; })(jQuery);


  • How to display a JSON error message?

    - by Tiny Giant Studios
    I'm currently developing a tumblr theme and have built a jQuery JSON thingamabob that uses the Tumblr API to do the following: The user would click on the "post type" link (e.g. Video Posts), at which stage jQuery would use JSON to grab all the posts that's related to that type and then dynamically display them in a designated area. Now everything works absolutely peachy, except that with Tumblr being Tumblr and their servers taking a knock every now and then, the Tumblr API thingy is sometimes offline. Now I can't foresee when this function will be down, which is why I want to display some generic error message if JSON (for whatever reason) was unable to load the post. You'll see I've already written some code to show an error message when jQuery can't find any posts related to that post type BUT it doesn't cover any server errors. Note: I sometimes get this error: Failed to load resource: the server responded with a status of 503 (Service Temporarily Unavailable) It is for this 503 Error message that I need to write some code, but I'm slightly clueless :) Here's the jQuery JSON code: $('ul.right li').find('a').click(function() { var postType = this.className; var count = 0; byCategory(postType); return false; function byCategory(postType, callback) { $.getJSON('{URL}/api/read/json?type=' + postType + '&callback=?', function(data) { var article = []; $.each(data.posts, function(i, item) { // i = index // item = data for a particular post switch(item.type) { case 'photo': article[i] = '<div class="post_wrap"><div class="photo" style="padding-bottom:5px;">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/XSTldh6ds/photo_icon.png" alt="type_icon"/></a>' + '<a href="' + item.url + '" title="{Title}"><img src="' + item['photo-url-500'] + '"alt="image" /></a></div></div>'; count = 1; break; case 'video': article[i] = '<div class="post_wrap"><div class="video" style="padding-bottom:5px;">' + '<a href="' + item.url + '" title="{Title}" class="type_icon">' + '<img src="http://static.tumblr.com/ewjv7ap/nuSldhclv/video_icon.png" alt="type_icon"/></a>' + '<span style="margin: auto;">' + item['video-player'] + '</span>' + '</div></div>'; count = 1; break; case 'audio': if (use_IE == true) { article[i] = '<div class="post_wrap"><div class="regular">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/R50ldh5uj/audio_icon.png" alt="type_icon"/></a>' + '<h3><a href="' + item.url + '">' + item['id3-artist'] +' - ' + item['id3-title'] + '</a></h3>' + '</div></div>'; } else { article[i] = '<div class="post_wrap"><div class="regular">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/R50ldh5uj/audio_icon.png" alt="type_icon"/></a>' + '<h3><a href="' + item.url + '">' + item['id3-artist'] +' - ' + item['id3-title'] + '</a></h3><div class="player">' + item['audio-player'] + '</div>' + '</div></div>'; }; count = 1; break; case 'regular': article[i] = '<div class="post_wrap"><div class="regular">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/dwxldhck1/regular_icon.png" alt="type_icon"/></a><h3><a href="' + item.url + '">' + item['regular-title'] + '</a></h3><div class="description_container">' + item['regular-body'] + '</div></div></div>'; count = 1; break; case 'quote': article[i] = '<div class="post_wrap"><div class="quote">' + '<a href="' + item.url + '" title="{Title}" 
class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/loEldhcpr/quote_icon.png" alt="type_icon"/></a><blockquote><h3><a href="' + item.url + '" title="{Title}">' + item['quote-text'] + '</a></h3></blockquote><cite>- ' + item['quote-source'] + '</cite></div></div>'; count = 1; break; case 'conversation': article[i] = '<div class="post_wrap"><div class="chat">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/MVuldhcth/conversation_icon.png" alt="type_icon"/></a><h3><a href="' + item.url + '">' + item['conversation-title'] + '</a></h3></div></div>'; count = 1; break; case 'link': article[i] = '<div class="post_wrap"><div class="link">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/EQGldhc30/link_icon.png" alt="type_icon"/></a><h3><a href="' + item['link-url'] + '" target="_blank">' + item['link-text'] + '</a></h3></div></div>'; count = 1; break; default: alert('No Entries Found.'); }; }) // end each if (!(count == 0)) { $('#content_right') .hide('fast') .html('<div class="first_div"><span class="left_corner"></span><span class="right_corner"></span><h2>Displaying ' + postType + ' Posts Only</h2></div>' + article.join('')) .slideDown('fast') } else { $('#content_right') .hide('fast') .html('<div class="first_div"><span class="left_corner"></span><span class="right_corner"></span><h2>Hmmm, currently there are no ' + postType + ' posts to display</h2></div>') .slideDown('fast') } // end getJSON }); // end byCategory } }); If you'd like to see the demo in action, check out Elegantem but do note that everything might work absolutely fine for you (or not), depending on Tumblr's temperament.


  • calling and killing a parent function with onmouseover and onmouseout events

    - by Zoolu
    I want to call the function upon the onmouseover="ParentFunction();" then kill it onmouseout="killParent();". Note: in my code the parent function is called initiate(); and the killer function is called reset(); which lies outside the parent function at the bottom of the script. I don't know how to kill the intitiate() function my first guess was: var reset = function(){ return initiate(); }; here's my source code: any suggestions and help appreciated. <!doctype html> <html> <head> <title> function/event prototype </title> <link rel="stylesheet" type="text/css" href="styling.css" /> </head> <body> <h2> <em>Fantastical place<br/>prototype</em> </h2> <div id="button-container"> <div id="button-box"> <button id="activate" onmouseover="initiate()" onmouseout="reset();" width="50px" height="50px" title="Activate"> </button> </div> <div id="text-box"> </div> </div> <div id="container"> <canvas id="playground" width="200px" height="250px"> </canvas> <canvas id="face" width="400px" height="200px"> </canvas> <!-- <div id="clear"> </div> --> </div> <script> alert("Welcome, there are x entries as of" +""+new Date().getHours()); //global scope var i=0; var c1 = []; //c is short for collect var c2 = []; var c3 = []; var c4 = []; var c5 = []; var c6 = []; var initiate = function(){ //the button that triggers the program var timer = setInterval(function(){clock()},90); //copy this block for ref. function clock(){ i+=1; var a = Math.round(Math.random()*200); var b = Math.round(Math.random()*250); var c = Math.round(Math.random()*200); var d = Math.round(Math.random()*250); var e = Math.round(Math.random()*200); var f = Math.round(Math.random()*250); c1.push(a); c2.push(b); c3.push(c); c4.push(d); c5.push(e); c6.push(f); // document.write(i); var c = document.getElementById("playground"); var ctx = c.getContext("2d"); ctx.beginPath(); ctx.moveTo(c3[i-2], c4[i-2]); ctx.bezierCurveTo(c1[i-2],c2[i-2],c5[i-2],c6[i-2],c3[i-1], c4[i-1]); // ctx.lineTo(c3[i-1], c4[i-1]); if(a<200){ ctx.strokeStyle="#FF33CC"; } else if(a<400){ ctx.strokeStyle="#FF33aa"; } else{ ctx.strokeStyle="#FF3388"; } ctx.stroke(); document.getElementById("text-box").innerHTML=i+"<p>Thoughts.</p>"; if(i===20){ //alert("15 reached"); clearInterval(timer);//to clearInterval must be using a global scoped variable. return; } }; //end of clock //setInterval(clock,150); var targetFace = document.getElementById("face"); var face = targetFace.getContext("2d"); var faceTimer = setInterval(function(){faceAnim()},80); //copy this block for ref. global scoped. 
function faceAnim(){ face.beginPath(); face.strokeStyle="#FF33CC"; face.moveTo(100,104); //eye line face.bezierCurveTo(150,125,250,125,300,104); face.moveTo(200,1); //centre line face.lineTo(200,400); face.moveTo(125,111);//left eye lid face.bezierCurveTo(160,135,170,130,185,120); face.moveTo(150,116);//left eye face.bezierCurveTo(155,125,165,125,170,118); face.moveTo(275,111);//right eye lid face.bezierCurveTo(240,135,230,130,215,120); face.moveTo(250,116);//right eye face.bezierCurveTo(245,125,235,125,230,118); face.moveTo(195, 118); //left nose face.lineTo(190, 160); face.lineTo(200,170); face.moveTo(190,160); //left nostroll face.lineTo(180,160); face.lineTo(191,154); face.moveTo(180,160); //left lower nostrol face.lineTo(200,170); face.moveTo(205, 118); //right nose face.lineTo(210, 160); face.lineTo(200,170); face.moveTo(210,160); //right nostroll face.lineTo(220,160); face.lineTo(209,154); face.moveTo(220,160); //right lower nostrol face.lineTo(200,170); face.moveTo(200,140); //outer triad face.lineTo(170, 100); face.lineTo(230, 100); face.lineTo(200, 140); face.moveTo(200,145); //outer triad drop shadow face.lineTo(170, 100); face.lineTo(230, 100); face.lineTo(200, 145); face.moveTo(200,130); //inner triad face.lineTo(180, 105); face.lineTo(220, 105); face.lineTo(200, 130); //face.lineWidth =0.6; face.moveTo(280,111);//outer right eye lid face.bezierCurveTo(240,140,230,135,210,120); face.moveTo(120,111);//outer left eye lid face.bezierCurveTo(160,140,170,135,190,120); face.moveTo(162,174); //upper mouth line face.bezierCurveTo(170,180,230,180,238,174); face.moveTo(165,175); //mouth line bottom face.bezierCurveTo(190,Math.floor(Math.random()*25+180),210,Math.floor(Math.random()*25+180),235,175); face.moveTo(232,204); //head shape face.lineTo(340, 20); face.moveTo(168,204); //head shape face.lineTo(60, 20); face.stroke(); //exicute all co-ords. }; //end of face anim var clearFace = function(){ document.getElementById('face').getContext('2d').clearRect(0, 0, 700, 750); }; setInterval(clearFace,90); }; //end of parent function var reset = function(){ document.getElementById('playground').getContext('2d').clearRect(0, 0, 700, 750); //clearInterval(faceTimer); //delete initiate(); }; </script> </body> </html>
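    The reason reset() cannot stop anything is scope: `timer`, `faceTimer` and the clearFace interval are all created inside initiate(), so no code outside that function can hand them to clearInterval(), and calling initiate() again from reset() would only start more timers. A minimal sketch of the start/stop pattern with the interval ids hoisted to the outer scope; the empty callbacks stand in for the clock(), faceAnim() and clearFace() drawing code already in the page:

        var timers = [];                       // interval ids visible to both handlers

        function initiate() {                  // onmouseover="initiate()"
            timers.push(setInterval(function () { /* clock() drawing code */ }, 90));
            timers.push(setInterval(function () { /* faceAnim() drawing code */ }, 80));
            timers.push(setInterval(function () { /* clearFace() wipe */ }, 90));
        }

        function reset() {                     // onmouseout="reset();"
            while (timers.length) {
                clearInterval(timers.pop());   // stop every loop initiate() started
            }
            document.getElementById('playground').getContext('2d').clearRect(0, 0, 700, 750);
            document.getElementById('face').getContext('2d').clearRect(0, 0, 700, 750);
        }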


  • Something is making my page perform an Ajax call multiple times... [read: I've never been more frustrated]

    - by Jack Webb-Heller
    NOTE: This is a long question. I've explained all the 'basics' at the top and then there's some further (optional) information for if you need it. Hi folks Basically last night this started happening at about 9PM whilst I was trying to restructure my code to make it a bit nicer for the designer to add a few bits to. I tried to fix it until 2AM at which point I gave up. Came back to it this morning, still baffled. I'll be honest with you, I'm a pretty bad Javascript developer. Since starting this project Javascript has been completely new to me and I've just learn as I went along. So please forgive me if my code structure is really bad (perhaps give a couple of pointers on how to improve it?). So, to the problem: to reproduce it, visit http://furnace.howcode.com (it's far from complete). This problem is a little confusing but I'd really appreciate the help. So in the second column you'll see three tabs The 'Newest' tab is selected by default. Scroll to the bottom, and 3 further results should be dynamically fetched via Ajax. Now click on the 'Top Rated' tab. You'll see all the results, but ordered by rating Scroll to the bottom of 'Top Rated'. You'll see SIX results returned. This is where it goes wrong. Only a further three should be returned (there are 18 entries in total). If you're observant you'll notice two 'blocks' of 3 returned. The first 'block' is the second page of results from the 'Newest' tab. The second block is what I just want returned. Did that make any sense? Never mind! So basically I checked this out in Firebug. What happens is, from a 'Clean' page (first load, nothing done) it calls ONE POST request to http://furnace.howcode.com/code/loadmore . But every time you load a new one of the tabs, it makes an ADDITIONAL POST request each time where there should normally only be ONE. So, can you help me? I'd really appreciate it! At this point you could start independent investigation or read on for a little further (optional) information. Thanks! Jack Further Info (may be irrelevant but here for reference): It's almost like there's some Javascript code or something being left behind that duplicates it each time. I thought it might be this code that I use to detect when the browser is scrolled to the bottom: var col = $('#col2'); col.scroll(function(){ if (col.outerHeight() == (col.get(0).scrollHeight - col.scrollTop())) loadMore(1); }); So what I thought was that code was left behind, and so every time you scroll #col2 (which contains different data for each tab) it detected that and added it for #newest as well. So, I made each tab click give #col2 a dynamic class - either .newestcol, .featuredcol, or .topratedcol. And then I changed the var col=$('.newestcol');dynamically so it would only detect it individually for each tab (makin' any sense?!). But hey, that didn't do anything. 
Another useful tidbit: here's the PHP for http://furnace.howcode.com/code/loadmore: $kind = $this->input->post('kind'); if ($kind == 1){ // kind is 1 - newest $start = $this->input->post('currentpage'); $data['query'] = "SELECT code.id AS codeid, code.title AS codetitle, code.summary AS codesummary, code.author AS codeauthor, code.rating AS rating, code.date, code_tags.*, tags.*, users.firstname AS authorname, users.id AS authorid, GROUP_CONCAT(tags.tag SEPARATOR ', ') AS taggroup FROM code, code_tags, tags, users WHERE users.id = code.author AND code_tags.code_id = code.id AND tags.id = code_tags.tag_id GROUP BY code_id ORDER BY date DESC LIMIT $start, 15 "; $this->load->view('code/ajaxlist',$data); } elseif ($kind == 2) { // kind is 2 - featured So my jQuery code sends a variable 'kind'. If it's 1, it runs the query for Newest, etc. etc. The PHP code for furnace.howcode.com/code/ajaxlist is: <?php // Our query base // SELECT * FROM code ORDER BY date DESC $query = $this->db->query($query); foreach($query->result() as $row) { ?> <script type="text/javascript"> $('#title-<?php echo $row->codeid;?>').click(function() { var form_data = { id: <?php echo $row->codeid; ?> }; $('#col3').fadeOut('slow', function() { $.ajax({ url: "<?php echo site_url('code/viewajax');?>", type: 'POST', data: form_data, success: function(msg) { $('#col3').html(msg); $('#col3').fadeIn('fast'); } }); }); }); </script> <div class="result"> <div class="resulttext"> <div id="title-<?php echo $row->codeid; ?>" class="title"> <?php echo anchor('#',$row->codetitle); ?> </div> <div class="summary"> <?php echo $row->codesummary; ?> </div> <!-- Now insert the 5-star rating system --> <?php include($_SERVER['DOCUMENT_ROOT']."/fivestars/5star.php");?> <div class="bottom"> <div class="author"> Submitted by <?php echo anchor('auth/profile/'.$row->authorid,''.$row->authorname);?> </div> <?php // Now we need to take the GROUP_CONCATted tags and split them using the magic of PHP into seperate tags $tagarray = explode(", ", $row->taggroup); foreach ($tagarray as $tag) { ?> <div class="tagbutton" href="#"> <span><?php echo $tag; ?></span> </div> <?php } ?> </div> </div> </div> <?php } echo "&nbsp;";?> <script type="text/javascript"> var newpage = <?php echo $this->input->post('currentpage') + 15;?>; </script> So that's everything in PHP. The rest you should be able to view with Firebug or by viewing the Source code. I've put all the Tab/clicking/Ajaxloading bits in the tags at the very bottom. There's a comment before it all kicks off. Thanks so much for your help!
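    The pattern described above - one extra POST to /code/loadmore for every tab that has been opened - is the classic symptom of the scroll handler being bound again each time a tab's content, and the inline script that travels with it, is loaded into the page, so a single scroll to the bottom fires loadMore() once per surviving handler. Binding with a namespaced event and unbinding first keeps exactly one handler alive no matter how often the snippet runs; a sketch using the same #col2 and loadMore() names as the page:

        var col = $('#col2');

        // Remove whatever handler a previous tab load attached before adding this one.
        col.unbind('scroll.loadmore').bind('scroll.loadmore', function () {
            if (col.outerHeight() == (col.get(0).scrollHeight - col.scrollTop())) {
                loadMore(1);   // fetch the next page for the currently active tab
            }
        });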


  • css: problems with floating a sidebar

    - by user239831
    hey guys, i can't seem to get it working. i have a div.post with #comments and a #respond form underneath it. the div.post contains the #comments and the #respond form. i simply want to float the sidebar to the right of the entire div.post and i cannot seem to get it work. here is an example. any idea how to solve that - its probably quite simple. :) <!DOCTYPE html> <html lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>layout</title> <style type="text/css"> body { margin:0; padding:0; } #main { width:100%; background:#cfcfcf; } .inner { margin: 0 auto; padding: 96px 72px 0; width: 1068px; border-color: #000; border-style: solid; border-width: 0 1px; color: #3C3C3C; } .post { width: 705px; background:#999; } #comments, #respond { width: 705px; background:#999; } #sidebar { width:338px; background:#777; margin-left:730px; } </style> </head> <body> <div id="main"> <div class="inner"> <div id="post" class="post"> <h2>Lorem Ipsum Test Page</h2> <div class="entry"> <p>Lorem ipsum sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet.</p> <p>Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet.</p> </div> <!-- entry --> <div id="comments"> <h2>One Response</h2> <ol class="commentlist"> <li id="comment" class="comment"> <div class="comment-body"> <div class="comment-author vcard"> Tom says: </div> <p>Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea found. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. 
Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet.</p> </div> </li> </ol> </div> <!-- comments --> <div id="respond"> <h2>Leave a Reply</h2> <form id="commentform" method="post" action=""> <input type="text" aria-required="true" tabindex="1" size="22" value="" id="author" name="author" gtbfieldid="230"> <label for="author">Name (required)</label> <input type="text" aria-required="true" tabindex="2" size="22" value="" id="email" name="email" gtbfieldid="231"> <label for="email">Mail (will not be published) (required)</label> <input type="text" tabindex="3" size="22" value="" id="url" name="url" gtbfieldid="232"> <label for="url">Website</label> </div> <textarea tabindex="4" rows="10" cols="58" id="comment" name="comment"></textarea> <input type="submit" value="Submit Comment" tabindex="5" id="submit" name="submit"> <input type="hidden" id="comment_post_ID" value="36" name="comment_post_ID"> <input type="hidden" value="0" id="comment_parent" name="comment_parent"> </form> </div> <!-- respond --> </div> <!-- post --> <div id="sidebar"> <h2>Meta</h2> <ul> <li>Login</li> <li>Anything</li> </ul> <h2>Subscribe</h2> <ul> <li>Entries (RSS)</li> <li>Comments (RSS)</li> </ul> </div> <!-- sidebar --> </div> <!-- inner --> </div> <!-- main --> </body> </html> edit: can you see any errors in my html. firebug says that the sidebar div is actually outside the .inner div. however if i look at the code it's inside.
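    Two things are going on here, and the Firebug observation at the end is the clue. The markup has a stray `</div>` just after the Website label inside #commentform with no matching opening tag; the browser recovers by closing #respond and then .inner early, which is why the inspector shows #sidebar outside .inner even though the source has it inside. With that tag removed, floating both columns is enough to put the sidebar alongside the post; a sketch reusing the widths already defined in the stylesheet above:

        .post     { float: left;  width: 705px; background: #999; }
        #sidebar  { float: right; width: 338px; background: #777; } /* instead of margin-left: 730px */

        /* Contain the floats so .inner keeps its border around both columns. */
        .inner:after { content: ""; display: block; clear: both; }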


  • Edit Contact code worked in 1.6 but doesn't work on Droid 2.1?

    - by user225405
    Hi All, I had some fairly simple code in my app to invoke Edit Contact activity on a known good contact index that worked in Android 1.6 but is broken for me now in Android 2.1 on the Droid. I built a sample activity/app 'EdCon' to show this: package com.jbh; import android.app.Activity; import android.content.Intent; import android.net.Uri; import android.os.Bundle; public class EdCon extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); // Build an intent to edit a known good contact index Intent i; i = new Intent(Intent.ACTION_EDIT); i.setData(Uri.parse("content://contacts/people/10")); startActivity(i); } } When I run this on my G1 running 1.6 it works as expected i.e. brings up the Edit Contact screen for the known index and then I can hit BACK to return to "Hello World, EdCon". When I run this on the Droid under 2.1 I get the following: 05-07 15:35:57.787: INFO/ActivityManager(1013): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.jbh/.EdCon } 05-07 15:35:57.826: DEBUG/AndroidRuntime(13780): Shutting down VM 05-07 15:35:57.826: DEBUG/dalvikvm(13780): DestroyJavaVM waiting for non-daemon threads to exit 05-07 15:35:57.928: DEBUG/dalvikvm(13780): DestroyJavaVM shutting VM down 05-07 15:35:57.928: DEBUG/dalvikvm(13780): HeapWorker thread shutting down 05-07 15:35:57.928: DEBUG/dalvikvm(13780): HeapWorker thread has shut down 05-07 15:35:57.928: DEBUG/jdwp(13780): JDWP shutting down net... 05-07 15:35:57.928: DEBUG/jdwp(13780): Got wake-up signal, bailing out of select 05-07 15:35:57.928: INFO/dalvikvm(13780): Debugger has detached; object registry had 1 entries 05-07 15:35:57.928: DEBUG/dalvikvm(13780): VM cleaning up 05-07 15:35:57.935: INFO/ActivityManager(1013): Start proc com.jbh for activity com.jbh/.EdCon: pid=13802 uid=10052 gids={1015} 05-07 15:35:57.967: ERROR/AndroidRuntime(13780): ERROR: thread attach failed 05-07 15:35:58.053: INFO/ActivityThread(13792): Publishing provider com.android.vending.SuggestionsProvider: com.android.vending.SuggestionsProvider 05-07 15:35:58.154: INFO/dalvikvm(13802): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=38) 05-07 15:35:58.209: DEBUG/dalvikvm(13780): LinearAlloc 0x0 used 639500 of 5242880 (12%) 05-07 15:35:58.365: INFO/dalvikvm(13802): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=18) 05-07 15:35:58.639: INFO/ActivityManager(1013): Starting activity: Intent { act=android.intent.action.EDIT dat=content://contacts/people/10 cmp=com.android.contacts/.ui.EditContactActivity } 05-07 15:35:58.975: DEBUG/dalvikvm(13137): GC freed 2902 objects / 166768 bytes in 61ms 05-07 15:35:59.100: DEBUG/vending(13792): com.android.vending.LocalDbSyncService.run(): Syncing local DB with package manager... 05-07 15:35:59.100: DEBUG/vending(13792): com.android.vending.LocalDbSyncService.syncLocalDbWithPackageManager(): No INSTALLING or UNINSTALLING assets. 05-07 15:35:59.115: INFO/ActivityManager(1013): Displayed activity com.android.contacts/.ui.EditContactActivity: 387 ms (total 1296 ms) 05-07 15:35:59.185: DEBUG/Sources(13137): Creating external source for type=com.facebook.auth.login, packageName=com.facebook.katana 05-07 15:35:59.225: DEBUG/vending(13792): com.android.vending.LocalDbSyncService.run(): Syncing done. 
05-07 15:35:59.232: WARN/dalvikvm(13137): threadid=27: thread exiting with uncaught exception (group=0x4001b180) 05-07 15:35:59.232: ERROR/AndroidRuntime(13137): Uncaught handler: thread AsyncTask #1 exiting due to uncaught exception 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): java.lang.RuntimeException: An error occured while executing doInBackground() 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.os.AsyncTask$3.done(AsyncTask.java:200) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at java.util.concurrent.FutureTask.setException(FutureTask.java:124) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at java.util.concurrent.FutureTask.run(FutureTask.java:137) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at java.lang.Thread.run(Thread.java:1096) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): Caused by: android.database.sqlite.SQLiteException: no such column: raw_contact_id: , while compiling: SELECT account_name, account_type, sourceid, version, dirty, data_id, res_package, mimetype, data1, data2, data3, data4, data5, data6, data7, data8, data9, data10, data11, data12, data13, data14, data15, data_sync1, data_sync2, data_sync3, data_sync4, _id, is_primary, is_super_primary, data_version, group_sourceid, sync1, sync2, sync3, sync4, deleted, contact_id, starred, is_restricted FROM contact_entities_view WHERE (1) AND (raw_contact_id=10) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.database.sqlite.SQLiteProgram.native_compile(Native Method) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.database.sqlite.SQLiteProgram.compile(SQLiteProgram.java:110) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.database.sqlite.SQLiteProgram.(SQLiteProgram.java:59) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.database.sqlite.SQLiteQuery.(SQLiteQuery.java:49) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.database.sqlite.SQLiteDirectCursorDriver.query(SQLiteDirectCursorDriver.java:49) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.database.sqlite.SQLiteDatabase.rawQueryWithFactory(SQLiteDatabase.java:1221) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.database.sqlite.SQLiteQueryBuilder.query(SQLiteQueryBuilder.java:316) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at com.android.providers.contacts.ContactsProvider2.query(ContactsProvider2.java:3850) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at com.android.providers.contacts.ContactsProvider2.query(ContactsProvider2.java:3840) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at com.android.providers.contacts.ContactsProvider2$RawContactsEntityIterator.(ContactsProvider2.java:4498) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at com.android.providers.contacts.ContactsProvider2.queryEntities(ContactsProvider2.java:4751) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.content.ContentProvider$Transport.queryEntities(ContentProvider.java:140) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at 
android.content.ContentProviderClient.queryEntities(ContentProviderClient.java:98) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.content.ContentResolver.queryEntities(ContentResolver.java:296) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at com.android.contacts.model.EntitySet.fromQuery(EntitySet.java:72) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at com.android.contacts.ui.EditContactActivity$QueryEntitiesTask.doInBackground(EditContactActivity.java:191) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at com.android.contacts.ui.EditContactActivity$QueryEntitiesTask.doInBackground(EditContactActivity.java:154) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at com.android.contacts.util.WeakAsyncTask.doInBackground(WeakAsyncTask.java:45) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.os.AsyncTask$2.call(AsyncTask.java:185) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): ... 4 more 05-07 15:35:59.303: INFO/Process(1013): Sending signal. PID: 13137 SIG: 3 05-07 15:35:59.303: INFO/dalvikvm(13137): threadid=7: reacting to signal 3 05-07 15:35:59.303: ERROR/dalvikvm(13137): Unable to open stack trace file '/data/anr/traces.txt': Permission denied 05-07 15:35:59.506: INFO/DumpStateReceiver(1013): Added state dump to 1 crashes 05-07 15:36:07.053: DEBUG/dalvikvm(12901): GC freed 389 objects / 25056 bytes in 145ms 05-07 15:36:17.287: DEBUG/dalvikvm(11649): GC freed 154 objects / 6816 bytes in 136ms 05-07 15:36:22.365: DEBUG/dalvikvm(13574): GC freed 348 objects / 67848 bytes in 112ms 05-07 15:36:27.451: DEBUG/dalvikvm(11836): GC freed 267 objects / 17432 bytes in 65ms 05-07 15:36:32.553: DEBUG/dalvikvm(12757): GC freed 1888 objects / 92440 bytes in 67ms 05-07 15:36:38.803: INFO/power(1013): * set_screen_state 0 05-07 15:36:38.813: DEBUG/SurfaceFlinger(1013): About to give-up screen, flinger = 0x114c30 05-07 15:36:38.826: DEBUG/Sensors(1013): using accelerometer (name=accelerometer) 05-07 15:36:38.834: DEBUG/PhoneWindow(13137): couldn't save which view has focus because the focused view android.widget.ScrollView@44883558 has no id. 05-07 15:36:38.865: DEBUG/WifiService(1013): ACTION_SCREEN_OFF 05-07 15:36:38.889: DEBUG/WifiService(1013): setting ACTION_DEVICE_IDLE timer for 900000ms 05-07 15:36:44.107: DEBUG/dalvikvm(1013): GC freed 7351 objects / 521440 bytes in 130ms 05-07 15:36:49.373: DEBUG/dalvikvm(13553): GC freed 321 objects / 12056 bytes in 102ms The no such column: raw_contact_id: looks like the issue but I'm not sure how or why that would happen or what it means. Any help appreciated! [email protected]
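    One workaround often suggested for this (not verified against the poster's exact setup) is to build the edit Intent from the ContactsContract provider introduced in API level 5 instead of the legacy content://contacts/people URI; the failing query above is issued while EditContactActivity resolves that legacy id. A minimal sketch, assuming the same "known good" index 10 and a hypothetical activity name (ids in the new provider are not guaranteed to match the old people ids, so in practice the id would come from a ContactsContract query or lookup URI):

    import android.app.Activity;
    import android.content.ContentUris;
    import android.content.Intent;
    import android.net.Uri;
    import android.os.Bundle;
    import android.provider.ContactsContract;

    public class EdCon2 extends Activity {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Append the contact id to the ContactsContract Contacts URI
            // instead of the pre-2.0 content://contacts/people path.
            long contactId = 10; // sample index from the question
            Uri contactUri = ContentUris.withAppendedId(
                    ContactsContract.Contacts.CONTENT_URI, contactId);
            startActivity(new Intent(Intent.ACTION_EDIT, contactUri));
        }
    }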

    Read the article

  • g++ SSE intrinsics dilemma - value from intrinsic "saturates"

    - by Sriram
    Hi, I wrote a simple program to implement SSE intrinsics for computing the inner product of two large (100000 or more elements) vectors. The program compares the execution time for both, inner product computed the conventional way and using intrinsics. Everything works out fine, until I insert (just for the fun of it) an inner loop before the statement that computes the inner product. Before I go further, here is the code: //this is a sample Intrinsics program to compute inner product of two vectors and compare Intrinsics with traditional method of doing things. #include <iostream> #include <iomanip> #include <xmmintrin.h> #include <stdio.h> #include <time.h> #include <stdlib.h> using namespace std; typedef float v4sf __attribute__ ((vector_size(16))); double innerProduct(float* arr1, int len1, float* arr2, int len2) { //assume len1 = len2. float result = 0.0; for(int i = 0; i < len1; i++) { for(int j = 0; j < len1; j++) { result += (arr1[i] * arr2[i]); } } //float y = 1.23e+09; //cout << "y = " << y << endl; return result; } double sse_v4sf_innerProduct(float* arr1, int len1, float* arr2, int len2) { //assume that len1 = len2. if(len1 != len2) { cout << "Lengths not equal." << endl; exit(1); } /*steps: * 1. load a long-type (4 float) into a v4sf type data from both arrays. * 2. multiply the two. * 3. multiply the same and store result. * 4. add this to previous results. */ v4sf arr1Data, arr2Data, prevSums, multVal, xyz; //__builtin_ia32_xorps(prevSums, prevSums); //making it equal zero. //can explicitly load 0 into prevSums using loadps or storeps (Check). float temp[4] = {0.0, 0.0, 0.0, 0.0}; prevSums = __builtin_ia32_loadups(temp); float result = 0.0; for(int i = 0; i < (len1 - 3); i += 4) { for(int j = 0; j < len1; j++) { arr1Data = __builtin_ia32_loadups(&arr1[i]); arr2Data = __builtin_ia32_loadups(&arr2[i]); //store the contents of two arrays. multVal = __builtin_ia32_mulps(arr1Data, arr2Data); //multiply. xyz = __builtin_ia32_addps(multVal, prevSums); prevSums = xyz; } } //prevSums will hold the sums of 4 32-bit floating point values taken at a time. Individual entries in prevSums also need to be added. __builtin_ia32_storeups(temp, prevSums); //store prevSums into temp. cout << "Values of temp:" << endl; for(int i = 0; i < 4; i++) cout << temp[i] << endl; result += temp[0] + temp[1] + temp[2] + temp[3]; return result; } int main() { clock_t begin, end; int length = 100000; float *arr1, *arr2; double result_Conventional, result_Intrinsic; // printStats("Allocating memory."); arr1 = new float[length]; arr2 = new float[length]; // printStats("End allocation."); srand(time(NULL)); //init random seed. 
// printStats("Initializing array1 and array2"); begin = clock(); for(int i = 0; i < length; i++) { // for(int j = 0; j < length; j++) { // arr1[i] = rand() % 10 + 1; arr1[i] = 2.5; // arr2[i] = rand() % 10 - 1; arr2[i] = 2.5; // } } end = clock(); cout << "Time to initialize array1 and array2 = " << ((double) (end - begin)) / CLOCKS_PER_SEC << endl; // printStats("Finished initialization."); // printStats("Begin inner product conventionally."); begin = clock(); result_Conventional = innerProduct(arr1, length, arr2, length); end = clock(); cout << "Time to compute inner product conventionally = " << ((double) (end - begin)) / CLOCKS_PER_SEC << endl; // printStats("End inner product conventionally."); // printStats("Begin inner product using Intrinsics."); begin = clock(); result_Intrinsic = sse_v4sf_innerProduct(arr1, length, arr2, length); end = clock(); cout << "Time to compute inner product with intrinsics = " << ((double) (end - begin)) / CLOCKS_PER_SEC << endl; //printStats("End inner product using Intrinsics."); cout << "Results: " << endl; cout << " result_Conventional = " << result_Conventional << endl; cout << " result_Intrinsics = " << result_Intrinsic << endl; return 0; } I use the following g++ invocation to build this: g++ -W -Wall -O2 -pedantic -march=i386 -msse intrinsics_SSE_innerProduct.C -o innerProduct Each of the loops above, in both the functions, runs a total of N^2 times. However, given that arr1 and arr2 (the two floating point vectors) are loaded with a value 2.5, the length of the array is 100,000, the result in both cases should be 6.25e+10. The results I get are: Results: result_Conventional = 6.25e+10 result_Intrinsics = 5.36871e+08 This is not all. It seems that the value returned from the function that uses intrinsics "saturates" at the value above. I tried putting other values for the elements of the array and different sizes too. But it seems that any value above 1.0 for the array contents and any size above 1000 meets with the same value we see above. Initially, I thought it might be because all operations within SSE are in floating point, but floating point should be able to store a number that is of the order of e+08. I am trying to see where I could be going wrong but cannot seem to figure it out. I am using g++ version: g++ (GCC) 4.4.1 20090725 (Red Hat 4.4.1-2). Any help on this is most welcome. Thanks, Sriram.
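    Independent of the intrinsics, the plateau is consistent with plain 32-bit float rounding: in the intrinsic version the inner j loop re-loads the same four elements, so each SSE lane just adds 2.5 * 2.5 = 6.25 to its running sum over and over, and once a lane's sum reaches 2^27 an increment of 6.25 is below half the spacing of representable floats and rounds away. Each lane therefore stalls near 1.342e8, and four lanes give about 5.36871e8, the value reported. (The scalar loop can escape this if the compiler happens to keep its accumulator in a wider register.) A small sketch of the numeric behaviour, written in Java purely for illustration rather than as SSE code:

    // Demonstrates 32-bit float accumulator saturation; 6.25f mirrors one
    // 2.5 * 2.5 product per SSE lane in the question.
    public class FloatSaturation {
        public static void main(String[] args) {
            float sum = 0.0f;
            long adds = 0;
            while (true) {
                float next = sum + 6.25f;
                if (next == sum) {
                    // Adding 6.25 no longer changes the sum: it is smaller than
                    // half the gap between adjacent floats near this magnitude.
                    break;
                }
                sum = next;
                adds++;
            }
            System.out.println("One lane saturates at " + sum + " after " + adds + " adds");
            System.out.println("Four lanes would plateau near " + (4.0 * sum));
        }
    }

    Accumulating into doubles, keeping several partial sums that are only combined in double precision at the end, or Kahan summation would avoid the plateau.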

    Read the article

  • Form Validation using JavaScript in Joomla...

    - by Ankur
    I want to use form validation. I have used javascript for this and I have downloaded the com_php0.1alpha-J15.tar component for writing php code but the blank entries are goes to the database. Please guide me... sample code is here... <script language="javascript" type="text/javascript"> function Validation() { if(document.getElementById("name").value=="") { document.getElementById("nameerr").innerHTML="Enter Name"; document.getElementById("name").style.backgroundColor = "yellow"; } else { document.getElementById("nameerr").innerHTML=""; document.getElementById("name").style.backgroundColor = "White"; } if(document.getElementById("email").value=="") { document.getElementById("emailerr").innerHTML="Enter Email"; document.getElementById("email").style.backgroundColor = "yellow"; } else { document.getElementById("emailerr").innerHTML=""; document.getElementById("email").style.backgroundColor = "White"; } if(document.getElementById("phone").value=="") { document.getElementById("phoneerr").innerHTML="Enter Contact No"; document.getElementById("phone").style.backgroundColor = "yellow"; } else { document.getElementById("phoneerr").innerHTML=""; document.getElementById("phone").style.backgroundColor = "White"; } if(document.getElementById("society").value=="") { document.getElementById("societyerr").innerHTML="Enter Society"; document.getElementById("society").style.backgroundColor = "yellow"; } else { document.getElementById("societyerr").innerHTML=""; document.getElementById("society").style.backgroundColor = "White"; } if(document.getElementById("occupation").value=="") { document.getElementById("occupationerr").innerHTML="Enter Occupation"; document.getElementById("occupation").style.backgroundColor = "yellow"; } else { document.getElementById("occupationerr").innerHTML=""; document.getElementById("occupation").style.backgroundColor = "White"; } if(document.getElementById("feedback").value=="") { document.getElementById("feedbackerr").innerHTML="Enter Feedback"; document.getElementById("feedback").style.backgroundColor = "yellow"; } else { document.getElementById("feedbackerr").innerHTML=""; document.getElementById("feedback").style.backgroundColor = "White"; } if(document.getElementById("name").value=="" || document.getElementById("email").value=="" || document.getElementById("phone").value=="" || document.getElementById("society").value=="" || document.getElementById("occupation").value=="" || document.getElementById("feedback").value=="") return false; else return true; } </script> <?php if(isset($_POST['submit'])) { $conn = mysql_connect('localhost','root',''); mysql_select_db('society_f',$conn); $name = $_POST['name']; $email = $_POST['email']; $phone = $_POST['phone']; $society = $_POST['society']; $occupation = $_POST['occupation']; $feedback = $_POST['feedback']; $qry = "insert into feedback values(null". ",'" . $name . "','" . $email . "','" . $phone . "','" . $society . "','" . $occupation . "','" . $feedback . "')" ; $res = mysql_query($qry); if(!$res) { echo "Could not run a query" . 
mysql_error(); exit(); } } ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <title>Untitled Document</title> <style type="text/css"> .td{ background-color:#FFFFFF; color:#000066; width:100px; } .text{ border:1px solid #0033FF; color:#000000; } </style> </head> <body> <form id="form1" method="post"> <input type="hidden" name="check" value="post"/> <table border="0" align="center" cellpadding="2" cellspacing="2"> <tr> <td colspan="3" style="text-align:center;"><span style="font-size:24px;color:#000066;">Feedback Form</span></td> </tr> <tr> <td class="td">Name</td> <td><input type="text" id="name" name="name" class="text" ></td> <td style="font-style:italic;color:#FF0000;" id="nameerr"></td> </tr> <tr> <td class="td">E-mail</td> <td><input type="text" id="Email" name="Email" class="text"></td> <td style="font-style:italic;color:#FF0000;" id="emailerr"></td> </tr> <tr> <td class="td">Contact No</td> <td><input type="text" id="Phone" name="Phone" maxlength="15" class="text"></td> <td style="font-style:italic;color:#FF0000;" id="Phoneerr"></td> </tr> <tr> <td class="td">Your Society</td> <td><input type="text" id="society" name="society" class="text"></td> <td style="font-style:italic;color:#FF0000;" id="societyerr"></td> </tr> <tr> <td class="td">Occupation</td> <td><input type="text" id="occupation" name="occupation" class="text"></td> <td style="font-style:italic;color:#FF0000;" id="occupationerr"></td> </tr> <tr> <td class="td">Feedback</td> <td><textarea name="feedback" id="feedback" class="text"></textarea></td> <td style="font-style:italic;color:#FF0000;" id="feedbackerr"></td> </tr> <tr> <td colspan="3" style="text-align:center;"> <input type="submit" value="Submit" id="submit" onClick="Validation();" /> <input type="reset" value="Reset" onClick="Resetme();" /> </td> </tr> </table> </form> </body> </html>
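    Two things in the posted code let blank rows through regardless of the script. First, the submit button uses onClick="Validation();" without returning the result, so the browser submits the form even when Validation() returns false (it would need onClick="return Validation();" or an onsubmit handler on the form). Second, the script looks up ids "email" and "phone" while the inputs are declared as id="Email" and id="Phone"; getElementById is case sensitive, so the handler fails before it can block anything. Client-side checks alone are also not enough: the PHP inserts whatever arrives, and the string-built INSERT is open to SQL injection. A rough server-side sketch of the same guard, written in Java/JDBC rather than the PHP used in the post (the table layout is taken from the question's INSERT; the class name, JDBC URL and credentials are assumptions):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class FeedbackDao {
        // Validate on the server as well, not only in JavaScript.
        static boolean isBlank(String s) {
            return s == null || s.trim().isEmpty();
        }

        public static boolean saveFeedback(String name, String email, String phone,
                                           String society, String occupation,
                                           String feedback) throws Exception {
            if (isBlank(name) || isBlank(email) || isBlank(phone)
                    || isBlank(society) || isBlank(occupation) || isBlank(feedback)) {
                return false; // blank entries never reach the database
            }
            // Placeholder connection details; the post uses society_f on localhost.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost/society_f", "root", "");
                 PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO feedback VALUES (NULL, ?, ?, ?, ?, ?, ?)")) {
                ps.setString(1, name);
                ps.setString(2, email);
                ps.setString(3, phone);
                ps.setString(4, society);
                ps.setString(5, occupation);
                ps.setString(6, feedback);
                return ps.executeUpdate() == 1;
            }
        }
    }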

    Read the article


  • Having trouble returning a value from a method call when sending an array; the program errors out at runtime on the sort

    - by programmerNOOB
    I am getting the following output when this program is run: Please enter the Social Security Number for taxpayer 0: 111111111 Please enter the gross income for taxpayer 0: 20000 Please enter the Social Security Number for taxpayer 1: 555555555 Please enter the gross income for taxpayer 1: 50000 Please enter the Social Security Number for taxpayer 2: 333333333 Please enter the gross income for taxpayer 2: 5464166 Please enter the Social Security Number for taxpayer 3: 222222222 Please enter the gross income for taxpayer 3: 645641 Please enter the Social Security Number for taxpayer 4: 444444444 Please enter the gross income for taxpayer 4: 29000 Taxpayer # 1 SSN: 111111111, Income is $20,000.00, Tax is $0.00 Taxpayer # 2 SSN: 555555555, Income is $50,000.00, Tax is $0.00 Taxpayer # 3 SSN: 333333333, Income is $5,464,166.00, Tax is $0.00 Taxpayer # 4 SSN: 222222222, Income is $645,641.00, Tax is $0.00 Taxpayer # 5 SSN: 444444444, Income is $29,000.00, Tax is $0.00 Unhandled Exception: System.InvalidOperationException: Failed to compare two elements in the array. --- System.ArgumentException: At least one object must implement IComparable. at System.Collections.Comparer.Compare(Object a, Object b) at System.Collections.Generic.ObjectComparer`1.Compare(T x, T y) at System.Collections.Generic.ArraySortHelper`1.SwapIfGreaterWithItems(T[] keys, IComparer`1 comparer, Int32 a, Int32 b) at System.Collections.Generic.ArraySortHelper`1.QuickSort(T[] keys, Int32 left, Int32 right, IComparer`1 comparer) at System.Collections.Generic.ArraySortHelper`1.Sort(T[] keys, Int32 index, Int32 length, IComparer`1 comparer) --- End of inner exception stack trace --- at System.Collections.Generic.ArraySortHelper`1.Sort(T[] keys, Int32 index, Int32 length, IComparer`1 comparer) at System.Array.Sort[T](T[] array, Int32 index, Int32 length, IComparer`1 comparer) at System.Array.Sort[T](T[] array) at Assignment5.Taxpayer.Main(String[] args) in Program.cs:line 150 Notice the 0s at the end of the line that should be the tax amount??? Here is the code: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace taxes { class Rates { // Create a class named rates that has the following data members: private int incLimit; private double lowTaxRate; private double highTaxRate; // use read-only accessor public int IncomeLimit { get { return incLimit; } } public double LowTaxRate { get { return lowTaxRate; } } public double HighTaxRate { get { return highTaxRate; } } //A class constructor that assigns default values public Rates() { int limit = 30000; double lowRate = .15; double highRate = .28; incLimit = limit; lowTaxRate = lowRate; highTaxRate = highRate; } //A class constructor that takes three parameters to assign input values for limit, low rate and high rate. public Rates(int limit, double lowRate, double highRate) { } // A CalculateTax method that takes an income parameter and computes the tax as follows: public int CalculateTax(int income) { int limit = 0; double lowRate = 0; double highRate = 0; int taxOwed = 0; // If income is less than the limit then return the tax as income times low rate. if (income < limit) taxOwed = Convert.ToInt32(income * lowRate); // If income is greater than or equal to the limit then return the tax as income times high rate. if (income >= limit) taxOwed = Convert.ToInt32(income * highRate); return taxOwed; } } //end class Rates // Create a class named Taxpayer that has the following data members: public class Taxpayer { //Use get and set accessors. 
string SSN { get; set; } int grossIncome { get; set; } // Use read-only accessor. public int taxOwed { get { return taxOwed; } } // The Taxpayer class should be set up so that its objects are comparable to each other based on tax owed. class taxpayer : IComparable { public int taxOwed { get; set; } public int income { get; set; } int IComparable.CompareTo(Object o) { int returnVal; taxpayer temp = (taxpayer)o; if (this.taxOwed > temp.taxOwed) returnVal = 1; else if (this.taxOwed < temp.taxOwed) returnVal = -1; else returnVal = 0; return returnVal; } // End IComparable.CompareTo } //end taxpayer IComparable class // **The tax should be calculated whenever the income is set. // The Taxpayer class should have a getRates class method that has the following. public static void GetRates() { // Local method data members for income limit, low rate and high rate. int incLimit = 0; double lowRate; double highRate; string userInput; // Prompt the user to enter a selection for either default settings or user input of settings. Console.Write("Would you like the default values (D) or would you like to enter the values (E)?: "); /* If the user selects default the default values you will instantiate a rates object using the default constructor * and set the Taxpayer class data member for tax equal to the value returned from calling the rates object CalculateTax method.*/ userInput = Convert.ToString(Console.ReadLine()); if (userInput == "D" || userInput == "d") { Rates rates = new Rates(); rates.CalculateTax(incLimit); } // end if /* If the user selects to enter the rates data then prompt the user to enter values for income limit, low rate and high rate, * instantiate a rates object using the three-argument constructor passing those three entries as the constructor arguments and * set the Taxpayer class data member for tax equal to the valuereturned from calling the rates object CalculateTax method. */ if (userInput == "E" || userInput == "e") { Console.Write("Please enter the income limit: "); incLimit = Convert.ToInt32(Console.ReadLine()); Console.Write("Please enter the low rate: "); lowRate = Convert.ToDouble(Console.ReadLine()); Console.Write("Please enter the high rate: "); highRate = Convert.ToDouble(Console.ReadLine()); Rates rates = new Rates(incLimit, lowRate, highRate); rates.CalculateTax(incLimit); } } static void Main(string[] args) { Taxpayer[] taxArray = new Taxpayer[5]; Rates taxRates = new Rates(); // Implement a for-loop that will prompt the user to enter the Social Security Number and gross income. for (int x = 0; x < taxArray.Length; ++x) { taxArray[x] = new Taxpayer(); Console.Write("Please enter the Social Security Number for taxpayer {0}: ", x + 1); taxArray[x].SSN = Console.ReadLine(); Console.Write("Please enter the gross income for taxpayer {0}: ", x + 1); taxArray[x].grossIncome = Convert.ToInt32(Console.ReadLine()); } Taxpayer.GetRates(); // Implement a for-loop that will display each object as formatted taxpayer SSN, income and calculated tax. 
for (int i = 0; i < taxArray.Length; ++i) { Console.WriteLine("Taxpayer # {0} SSN: {1}, Income is {2:c}, Tax is {3:c}", i + 1, taxArray[i].SSN, taxArray[i].grossIncome, taxRates.CalculateTax(taxArray[i].grossIncome)); } // end for // Implement a for-loop that will sort the five objects in order by the amount of tax owed Array.Sort(taxArray); Console.WriteLine("Sorted by tax owed"); for (int i = 0; i < taxArray.Length; ++i) { Console.WriteLine("Taxpayer # {0} SSN: {1}, Income is {2:c}, Tax is {3:c}", i + 1, taxArray[i].SSN, taxArray[i].grossIncome, taxRates.CalculateTax(taxArray[i].grossIncome)); } } //end main } // end Taxpayer class } //end Any clues as to why the dollar amount is coming up as 0 and why the sort is not working?
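    Two separate problems are visible in the posted code. The sort throws because the objects in taxArray are instances of the outer Taxpayer class, which never implements IComparable; only the unused nested taxpayer class does. The $0.00 tax comes from Rates.CalculateTax comparing against local variables limit, lowRate and highRate that are all left at zero instead of using the instance fields (the three-argument Rates constructor also never assigns its parameters, and the read-only taxOwed property returns itself, which would recurse if it were ever called). A compact sketch of the intended shape, written in Java rather than C# (Comparable<T>/compareTo corresponds to IComparable/CompareTo), with the tax computed when the income is set and the sort keyed on tax owed:

    import java.util.Arrays;

    // Hypothetical Java mirror of the question's Taxpayer/Rates split.
    class Rates {
        private final int incomeLimit;
        private final double lowRate;
        private final double highRate;

        Rates() { this(30000, 0.15, 0.28); }            // defaults from the question

        Rates(int incomeLimit, double lowRate, double highRate) {
            this.incomeLimit = incomeLimit;             // actually store the arguments
            this.lowRate = lowRate;
            this.highRate = highRate;
        }

        double calculateTax(int income) {
            // Use the instance fields, not freshly zeroed locals.
            return income < incomeLimit ? income * lowRate : income * highRate;
        }
    }

    public class Taxpayer implements Comparable<Taxpayer> {
        final String ssn;
        final int grossIncome;
        final double taxOwed;

        Taxpayer(String ssn, int grossIncome, Rates rates) {
            this.ssn = ssn;
            this.grossIncome = grossIncome;
            this.taxOwed = rates.calculateTax(grossIncome); // tax fixed when income is set
        }

        @Override
        public int compareTo(Taxpayer other) {
            return Double.compare(this.taxOwed, other.taxOwed); // sort by tax owed
        }

        public static void main(String[] args) {
            Rates rates = new Rates();
            Taxpayer[] taxArray = {
                new Taxpayer("111111111", 20000, rates),
                new Taxpayer("555555555", 50000, rates)
            };
            Arrays.sort(taxArray);                      // works: Taxpayer is Comparable
            for (Taxpayer t : taxArray) {
                System.out.printf("SSN: %s, Income is $%,d, Tax is $%,.2f%n",
                        t.ssn, t.grossIncome, t.taxOwed);
            }
        }
    }

    In the C# original the equivalent changes would be declaring public class Taxpayer : IComparable with a CompareTo based on taxOwed, computing the tax inside the grossIncome setter, and having CalculateTax read the Rates fields that the constructors actually assign.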

    Read the article

  • FreeBSD performance tuning. Sysctls, loader.conf, kernel

    - by SaveTheRbtz
    I wanted to share knowledge of tuning FreeBSD via sysctl.conf/loader.conf/KENCONF. It was initially based on Igor Sysoev's (author of nginx) presentation about FreeBSD tuning up to 100,000-200,000 active connections. Tunings are for FreeBSD-CURRENT. Since 7.2 amd64 some of them are tuned well by default. Prior 7.0 some of them are boot only (set via /boot/loader.conf) or does not exist at all. sysctl.conf: # No zero mapping feature # May break wine # (There are also reports about broken samba3) #security.bsd.map_at_zero=0 # If you have really busy webserver with apache13 you may run out of processes #kern.maxproc=10000 # Same for servers with apache2 / Pound #kern.threads.max_threads_per_proc=4096 # Max. backlog size kern.ipc.somaxconn=4096 # Shared memory // 7.2+ can use shared memory > 2Gb kern.ipc.shmmax=2147483648 # Sockets kern.ipc.maxsockets=204800 # Can cause this on older kernels: # http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 ) kern.ipc.maxsockbuf=10485760 # Mbuf 2k clusters (on amd64 7.2+ 25600 is default) # For such high value vm.kmem_size must be increased to 3G kern.ipc.nmbclusters=262144 # Jumbo pagesize(_SC_PAGESIZE) clusters # Used as general packet storage for jumbo frames # can be monitored via `netstat -m` #kern.ipc.nmbjumbop=262144 # Jumbo 9k/16k clusters # If you are using them #kern.ipc.nmbjumbo9=65536 #kern.ipc.nmbjumbo16=32768 # For lower latency you can decrease scheduler's maximum time slice # default: stathz/10 (~ 13) #kern.sched.slice=1 # Increase max command-line length showed in `ps` (e.g for Tomcat/Java) # Default is PAGE_SIZE / 16 or 256 on x86 # This avoids commands to be presented as [executable] in `ps` # For more info see: http://www.freebsd.org/cgi/query-pr.cgi?pr=120749 kern.ps_arg_cache_limit=4096 # Every socket is a file, so increase them kern.maxfiles=204800 kern.maxfilesperproc=200000 kern.maxvnodes=200000 # On some systems HPET is almost 2 times faster than default ACPI-fast # Useful on systems with lots of clock_gettime / gettimeofday calls # See http://old.nabble.com/ACPI-fast-default-timecounter,-but-HPET-83--faster-td23248172.html # After revision 222222 HPET became default: http://svnweb.freebsd.org/base?view=revision&revision=222222 kern.timecounter.hardware=HPET # Small receive space, only usable on http-server, on file server this # should be increased to 65535 or even more #net.inet.tcp.recvspace=8192 # This is useful on Fat-Long-Pipes #net.inet.tcp.recvbuf_max=10485760 #net.inet.tcp.recvbuf_inc=65535 # Small send space is useful for http servers that serve small files # Autotuned since 7.x net.inet.tcp.sendspace=16384 # This is useful on Fat-Long-Pipes #net.inet.tcp.sendbuf_max=10485760 #net.inet.tcp.sendbuf_inc=65535 # Turn off receive autotuning # You can play with it. #net.inet.tcp.recvbuf_auto=0 #net.inet.tcp.sendbuf_auto=0 # This should be enabled if you going to use big spaces (>64k) # Also timestamp field is useful when using syncookies net.inet.tcp.rfc1323=1 # Turn this off on high-speed, lossless connections (LAN 1Gbit+) # If you set it there is no need in TCP_NODELAY sockopt (see man tcp) net.inet.tcp.delayed_ack=0 # This feature is useful if you are serving data over modems, Gigabit Ethernet, # or even high speed WAN links (or any other link with a high bandwidth delay product), # especially if you are also using window scaling or have configured a large send window. 
# Automatically disables on small RTT ( http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_subr.c?#rev1.237 ) # This sysctl was removed in 10-CURRENT: # See: http://www.mail-archive.com/[email protected]/msg06178.html #net.inet.tcp.inflight.enable=0 # TCP slowstart algorithm tunings # We assuming we have very fast clients #net.inet.tcp.slowstart_flightsize=100 #net.inet.tcp.local_slowstart_flightsize=100 # Disable randomizing of ports to avoid false RST # Before usage check SA here www.bsdcan.org/2006/papers/ImprovingTCPIP.pdf # (it's also says that port randomization auto-disables at some conn.rates, but I didn't checked it thou) #net.inet.ip.portrange.randomized=0 # Increase portrange # For outgoing connections only. Good for seed-boxes and ftp servers. net.inet.ip.portrange.first=1024 net.inet.ip.portrange.last=65535 # # stops route cache degregation during a high-bandwidth flood # http://www.freebsd.org/doc/en/books/handbook/securing-freebsd.html #net.inet.ip.rtexpire=2 net.inet.ip.rtminexpire=2 net.inet.ip.rtmaxcache=1024 # Security net.inet.ip.redirect=0 net.inet.ip.sourceroute=0 net.inet.ip.accept_sourceroute=0 net.inet.icmp.maskrepl=0 net.inet.icmp.log_redirect=0 net.inet.icmp.drop_redirect=1 net.inet.tcp.drop_synfin=1 # # There is also good example of sysctl.conf with comments: # http://www.thern.org/projects/sysctl.conf # # icmp may NOT rst, helpful for those pesky spoofed # icmp/udp floods that end up taking up your outgoing # bandwidth/ifqueue due to all that outgoing RST traffic. # #net.inet.tcp.icmp_may_rst=0 # Security net.inet.udp.blackhole=1 net.inet.tcp.blackhole=2 # IPv6 Security # For more info see http://www.fosslc.org/drupal/content/security-implications-ipv6 # Disable Node info replies # To see this vulnerability in action run `ping6 -a sglAac ::1` or `ping6 -w ::1` on unprotected node net.inet6.icmp6.nodeinfo=0 # Turn on IPv6 privacy extensions # For more info see proposal http://unix.derkeiler.com/Mailing-Lists/FreeBSD/net/2008-06/msg00103.html net.inet6.ip6.use_tempaddr=1 net.inet6.ip6.prefer_tempaddr=1 # Disable ICMP redirect net.inet6.icmp6.rediraccept=0 # Disable acceptation of RA and auto linklocal generation if you don't use them #net.inet6.ip6.accept_rtadv=0 #net.inet6.ip6.auto_linklocal=0 # Increases default TTL, sometimes useful # Default is 64 net.inet.ip.ttl=128 # Lessen max segment life to conserve resources # ACK waiting time in miliseconds # (default: 30000. RFC from 1979 recommends 120000) net.inet.tcp.msl=5000 # Max bumber of timewait sockets net.inet.tcp.maxtcptw=200000 # Don't use tw on local connections # As of 15 Apr 2009. Igor Sysoev says that nolocaltimewait has some buggy realization. # So disable it or now till get fixed #net.inet.tcp.nolocaltimewait=1 # FIN_WAIT_2 state fast recycle net.inet.tcp.fast_finwait2_recycle=1 # Time before tcp keepalive probe is sent # default is 2 hours (7200000) #net.inet.tcp.keepidle=60000 # Should be increased until net.inet.ip.intr_queue_drops is zero net.inet.ip.intr_queue_maxlen=4096 # Interrupt handling via multiple CPU, but with context switch. # You can play with it. 
Default is 1; #net.isr.direct=0 # This is for routers only #net.inet.ip.forwarding=1 #net.inet.ip.fastforwarding=1 # This speed ups dummynet when channel isn't saturated net.inet.ip.dummynet.io_fast=1 # Increase dummynet(4) hash #net.inet.ip.dummynet.hash_size=2048 #net.inet.ip.dummynet.max_chain_len # Should be increased when you have A LOT of files on server # (Increase until vfs.ufs.dirhash_mem becomes lower) vfs.ufs.dirhash_maxmem=67108864 # Note from commit http://svn.freebsd.org/base/head@211031 : # For systems with RAID volumes and/or virtualization envirnments, where # read performance is very important, increasing this sysctl tunable to 32 # or even more will demonstratively yield additional performance benefits. vfs.read_max=32 # Explicit Congestion Notification (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification) net.inet.tcp.ecn.enable=1 # Flowtable - flow caching mechanism # Useful for routers #net.inet.flowtable.enable=1 #net.inet.flowtable.nmbflows=65535 # Extreme polling tuning #kern.polling.burst_max=1000 #kern.polling.each_burst=1000 #kern.polling.reg_frac=100 #kern.polling.user_frac=1 #kern.polling.idle_poll=0 # IPFW dynamic rules and timeouts tuning # Increase dyn_buckets till net.inet.ip.fw.curr_dyn_buckets is lower net.inet.ip.fw.dyn_buckets=65536 net.inet.ip.fw.dyn_max=65536 net.inet.ip.fw.dyn_ack_lifetime=120 net.inet.ip.fw.dyn_syn_lifetime=10 net.inet.ip.fw.dyn_fin_lifetime=2 net.inet.ip.fw.dyn_short_lifetime=10 # Make packets pass firewall only once when using dummynet # i.e. packets going thru pipe are passing out from firewall with accept #net.inet.ip.fw.one_pass=1 # shm_use_phys Wires all shared pages, making them unswappable # Use this to lessen Virtual Memory Manager's work when using Shared Mem. # Useful for databases #kern.ipc.shm_use_phys=1 # ZFS # Enable prefetch. Useful for sequential load type i.e fileserver. # FreeBSD sets vfs.zfs.prefetch_disable to 1 on any i386 systems and # on any amd64 systems with less than 4GB of avaiable memory # For additional info check this nabble thread http://old.nabble.com/Samba-read-speed-performance-tuning-td27964534.html #vfs.zfs.prefetch_disable=0 # On highload servers you may notice following message in dmesg: # "Approaching the limit on PV entries, consider increasing either the # vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable" vm.pmap.shpgperproc=2048 loader.conf: # Accept filters for data, http and DNS requests # Useful when your software uses select() instead of kevent/kqueue or when you under DDoS # DNS accf available on 8.0+ accf_data_load="YES" accf_http_load="YES" accf_dns_load="YES" # Async IO system calls aio_load="YES" # Linux specific devices in /dev # As for 8.1 it only /dev/full #lindev_load="YES" # Adds NCQ support in FreeBSD # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+ # 8.0+ only #ahci_load="YES" #siis_load="YES" # FreeBSD 8.2+ # New Congestion Control for FreeBSD # http://caia.swin.edu.au/urp/newtcp/tools/cc_chd-readme-0.1.txt # http://www.ietf.org/proceedings/78/slides/iccrg-5.pdf # Initial merge commit message http://www.mail-archive.com/[email protected]/msg31410.html #cc_chd_load="YES" # Increase kernel memory size to 3G. # # Use ONLY if you have KVA_PAGES in kernel configuration, and you have more than 3G RAM # Otherwise panic will happen on next reboot! 
# # It's required for high buffer sizes: kern.ipc.nmbjumbop, kern.ipc.nmbclusters, etc # Useful on highload stateful firewalls, proxies or ZFS fileservers # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #vm.kmem_size="3G" # If your server has lots of swap (>4Gb) you should increase following value # according to http://lists.freebsd.org/pipermail/freebsd-hackers/2009-October/029616.html # Otherwise you'll be getting errors # "kernel: swap zone exhausted, increase kern.maxswzone" # kern.maxswzone="256M" # Older versions of FreeBSD can't tune maxfiles on the fly #kern.maxfiles="200000" # Useful for databases # Sets maximum data size to 1G # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #kern.maxdsiz="1G" # Maximum buffer size(vfs.maxbufspace) # You can check current one via vfs.bufspace # Should be lowered/upped depending on server's load-type # Usually decreased to preserve kmem # (default is 10% of mem) #kern.maxbcache="512M" # Sendfile buffers # For i386 only #kern.ipc.nsfbufs=10240 # FreeBSD 9+ # HPET "legacy route" support. It should allow HPET to work per-CPU # See http://www.mail-archive.com/[email protected]/msg03603.html #hint.atrtc.0.clock=0 #hint.attimer.0.clock=0 #hint.hpet.0.legacy_route=1 # syncache Hash table tuning net.inet.tcp.syncache.hashsize=1024 net.inet.tcp.syncache.bucketlimit=512 net.inet.tcp.syncache.cachelimit=65536 # Increased hostcache # Later host cache can be viewed via net.inet.tcp.hostcache.list hidden sysctl # Very useful for it's RTT RTTVAR # Must be power of two net.inet.tcp.hostcache.hashsize=65536 # hashsize * bucketlimit (which is 30 by default) # It allocates 255Mb (1966080*136) of RAM net.inet.tcp.hostcache.cachelimit=1966080 # TCP control-block Hash table tuning net.inet.tcp.tcbhashsize=4096 # Disable ipfw deny all # Should be uncommented when there is a chance that # kernel and ipfw binary may be out-of sync on next reboot #net.inet.ip.fw.default_to_accept=1 # # SIFTR (Statistical Information For TCP Research) is a kernel module that # logs a range of statistics on active TCP connections to a log file. # See prerelease notes http://groups.google.com/group/mailing.freebsd.current/browse_thread/thread/b4c18be6cdce76e4 # and man 4 sitfr #siftr_load="YES" # Enable superpages, for 7.2+ only # Also read http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html vm.pmap.pg_ps_enabled=1 # Usefull if you are using Intel-Gigabit NIC #hw.em.rxd=4096 #hw.em.txd=4096 #hw.em.rx_process_limit="-1" # Also if you have ALOT interrupts on NIC - play with following parameters # NOTE: You should set them for every NIC #dev.em.0.rx_int_delay: 250 #dev.em.0.tx_int_delay: 250 #dev.em.0.rx_abs_int_delay: 250 #dev.em.0.tx_abs_int_delay: 250 # There is also multithreaded version of em/igb drivers can be found here: # http://people.yandex-team.ru/~wawa/ # # for additional em monitoring and statistics use # sysctl dev.em.0.stats=1 ; dmesg # sysctl dev.em.0.debug=1 ; dmesg # Also after r209242 (-CURRENT) there is a separate sysctl for each stat variable; # Same tunings for igb #hw.igb.rxd=4096 #hw.igb.txd=4096 #hw.igb.rx_process_limit=100 # Some useful netisr tunables. 
See sysctl net.isr #net.isr.maxthreads=4 #net.isr.defaultqlimit=4096 #net.isr.maxqlimit: 10240 # Bind netisr threads to CPUs #net.isr.bindthreads=1 # # FreeBSD 9.x+ # Increase interface send queue length # See commit message http://svn.freebsd.org/viewvc/base?view=revision&revision=207554 #net.link.ifqmaxlen=1024 # Nicer boot logo =) loader_logo="beastie" And finally here is KERNCONF: # Just some of them, see also # cat /sys/{i386,amd64,}/conf/NOTES # This one useful only on i386 #options KVA_PAGES=512 # You can play with HZ in environments with high interrupt rate (default is 1000) # 100 is for my notebook to prolong it's battery life #options HZ=100 # Polling is goot on network loads with high packet rates and low-end NICs # NB! Do not enable it if you want more than one netisr thread #options DEVICE_POLLING # Eliminate datacopy on socket read-write # To take advantage with zero copy sockets you should have an MTU >= 4k # This req. is only for receiving data. # Read more in man zero_copy_sockets # Also this epic thread on kernel trap: # http://kerneltrap.org/node/6506 # Here Linus says that "anybody that does it that way (FreeBSD) is totally incompetent" #options ZERO_COPY_SOCKETS # Support TCP sign. Used for IPSec options TCP_SIGNATURE # There was stackoverflow found in KAME IPSec stack: # See http://secunia.com/advisories/43995/ # For quick workaround you can use `ipfw add deny proto ipcomp` options IPSEC # This ones can be loaded as modules. They described in loader.conf section #options ACCEPT_FILTER_DATA #options ACCEPT_FILTER_HTTP # Adding ipfw, also can be loaded as modules options IPFIREWALL # On 8.1+ you can disable verbose to see blocked packets on ipfw0 interface. # Also there is no point in compiling verbose into the kernel, because # now there is net.inet.ip.fw.verbose tunable. #options IPFIREWALL_VERBOSE #options IPFIREWALL_VERBOSE_LIMIT=10 options IPFIREWALL_FORWARD # Adding kernel NAT options IPFIREWALL_NAT options LIBALIAS # Traffic shaping options DUMMYNET # Divert, i.e. for userspace NAT options IPDIVERT # This is for OpenBSD's pf firewall device pf device pflog # pf's QoS - ALTQ options ALTQ options ALTQ_CBQ # Class Bases Queuing (CBQ) options ALTQ_RED # Random Early Detection (RED) options ALTQ_RIO # RED In/Out options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC) options ALTQ_PRIQ # Priority Queuing (PRIQ) options ALTQ_NOPCC # Required for SMP build # Pretty console # Manual can be found here http://forums.freebsd.org/showthread.php?t=6134 #options VESA #options SC_PIXEL_MODE # Disable reboot on Ctrl Alt Del #options SC_DISABLE_REBOOT # Change normal|kernel messages color options SC_NORM_ATTR=(FG_GREEN|BG_BLACK) options SC_KERNEL_CONS_ATTR=(FG_YELLOW|BG_BLACK) # More scroll space options SC_HISTORY_SIZE=8192 # Adding hardware crypto device device crypto device cryptodev # Useful network interfaces device vlan device tap #Virtual Ethernet driver device gre #IP over IP tunneling device if_bridge #Bridge interface device pfsync #synchronization interface for PF device carp #Common Address Redundancy Protocol device enc #IPsec interface device lagg #Link aggregation interface device stf #IPv4-IPv6 port # Also for my notebook, but may be used with Opteron device amdtemp # Same for Intel processors device coretemp # man 4 cpuctl device cpuctl # CPU control pseudo-device # Support for ECMP. More than one route for destination # Works even with default route so one can use it as LB for two ISP # For now code is unstable and panics (panic: rtfree 2) on route deletions. 
#options RADIX_MPATH # Multicast routing #options MROUTING #options PIM # Debug & DTrace options KDB # Kernel debugger related code options KDB_TRACE # Print a stack trace for a panic options KDTRACE_FRAME # amd64-only(?) options KDTRACE_HOOKS # all architectures - enable general DTrace hooks #options DDB #options DDB_CTF # all architectures - kernel ELF linker loads CTF data # Adaptive spining in lockmgr (8.x+) # See http://www.mail-archive.com/[email protected]/msg10782.html options ADAPTIVE_LOCKMGRS # UTF-8 in console (8.x+) #options TEKEN_UTF8 # FreeBSD 8.1+ # Deadlock resolver thread # For additional information see http://www.mail-archive.com/[email protected]/msg18124.html # (FYI: "resolution" is panic so use with caution) #options DEADLKRES # Increase maximum size of Raw I/O and sendfile(2) readahead #options MAXPHYS=(1024*1024) #options MAXBSIZE=(1024*1024) # For scheduler debug enable following option. # Debug will be available via `kern.sched.stats` sysctl # For more information see http://svnweb.freebsd.org/base/head/sys/conf/NOTES?view=markup #options SCHED_STATS If you are tuning network for maximum performance you may wish to play with ifconfig options like: # You can list all capabilities via `ifconfig -m` ifconfig [-]rxcsum [-]txcsum [-]tso [-]lro mtu In case you've enabled DDB in kernel config, you should edit your /etc/ddb.conf and add something like this to enable automatic reboot (and textdump as bonus): script kdb.enter.panic=textdump set; capture on; show pcpu; bt; ps; alltrace; capture off; call doadump; reset script kdb.enter.default=textdump set; capture on; bt; ps; capture off; call doadump; reset And do not forget to add ddb_enable="YES" to /etc/rc.conf Since FreeBSD 9 you can select to enable/disable flowcontrol on your NIC: # See http://en.wikipedia.org/wiki/Ethernet_flow_control and # http://www.mail-archive.com/[email protected]/msg07927.html for additional info ifconfig bge0 media auto mediaopt flowcontrol PS. Also most of FreeBSD's limits can be monitored by # vmstat -z and # limits PPS. variety of network counters can be monitored via # netstat -s In FreeBSD-9 netstat's -Q option appeared, try following command to display netisr stats # netstat -Q PPPS. also see # man 7 tuning PPPPS. I wanted to thank FreeBSD community, especially author of nginx - Igor Sysoev, nginx-ru@ and FreeBSD-performance@ mailing lists for providing useful information about FreeBSD tuning. FreeBSD WIP * Whats cooking for FreeBSD 7? * Whats cooking for FreeBSD 8? * Whats cooking for FreeBSD 9? So here is the question: What tunings are you using on yours FreeBSD servers? You can also post your /etc/sysctl.conf, /boot/loader.conf, kernel options, etc with description of its' meaning (do not copy-paste from sysctl -d). Don't forget to specify server type (web, smb, gateway, etc) Let's share experience!

    Read the article

  • Oracle Support Master Note for Troubleshooting Advanced Queuing and Oracle Streams Propagation Issues (Doc ID 233099.1)

    - by faye.todd(at)oracle.com
    Master Note for Troubleshooting Advanced Queuing and Oracle Streams Propagation Issues (Doc ID 233099.1) Copyright (c) 2010, Oracle Corporation. All Rights Reserved. In this Document  Purpose  Last Review Date  Instructions for the Reader  Troubleshooting Details     1. Scope and Application      2. Definitions and Classifications     3. How to Use This Guide     4. Basic AQ Propagation Troubleshooting     5. Additional Troubleshooting Steps for AQ Propagation of User-Enqueued and Dequeued Messages     6. Additional Troubleshooting Steps for Propagation in an Oracle Streams Environment     7. Performance Issues  References Applies to: Oracle Server - Enterprise Edition - Version: 8.1.7.0 to 11.2.0.2 - Release: 8.1.7 to 11.2Information in this document applies to any platform. Purpose This document presents a step-by-step methodology for troubleshooting and resolving problems with Advanced Queuing Propagation in both Streams and basic Advanced Queuing environments. It also serves as a master reference for other more specific notes on Oracle Streams Propagation and Advanced Queuing Propagation issues. Last Review Date December 20, 2010 Instructions for the Reader A Troubleshooting Guide is provided to assist in debugging a specific issue. When possible, diagnostic tools are included in the document to assist in troubleshooting. Troubleshooting Details 1. Scope and Application This note is intended for Database Administrators of Oracle databases where issues are being encountered with propagating messages between advanced queues, whether the queues are used for user-created messaging systems or for Oracle Streams. It contains troubleshooting steps and links to notes for further problem resolution.It can also be used a template to document a problem when it is necessary to engage Oracle Support Services. Knowing what is NOT happening can frequently speed up the resolution process by focusing solely on the pertinent problem area. This guide is divided into five parts: Section 2: Definitions and Classifications (discusses the different types and features of propagations possible - helpful for understanding the rest of the guide) Section 3: How to Use this Guide (to be used as a start part for determining the scope of the problem and what sections to consult) Section 4. Basic AQ propagation troubleshooting (applies to both AQ propagation of user enqueued and dequeued messages as well as Oracle Streams propagations) Section 5. Additional troubleshooting steps for AQ propagation of user enqueued and dequeued messages Section 6. Additional troubleshooting steps for Oracle Streams propagation Section 7. Performance issues 2. Definitions and Classifications Given the potential scope of issues that can be encountered with AQ propagation, the first recommended step is to do some basic diagnosis to determine the type of problem that is being encountered. 2.1. What Type of Propagation is Being Used? 2.1.1. Buffered Messaging For an advanced queue, messages can be maintained on disk (persistent messaging) or in memory (buffered messaging). To determine if a queue is buffered or not, reference the GV_$BUFFERED_QUEUES view. If the queue does not appear in this view, it is persistent. 2.1.2. Propagation mode - queue-to-dblink vs queue-to-queue As of 10.2, an AQ propagation can also be defined as queue-to-dblink, or queue-to-queue: queue-to-dblink: The propagation delivers messages or events from the source queue to all subscribing queues at the destination database identified by the dblink. 
A single propagation schedule is used to propagate messages to all subscribing queues. Hence any changes made to this schedule will affect message delivery to all the subscribing queues. This mode does not support multiple propagations from the same source queue to the same target database. queue-to-queue: Added in 10.2, this propagation mode delivers messages or events from the source queue to a specific destination queue identified on the database link. This allows the user to have fine-grained control on the propagation schedule for message delivery. This new propagation mode also supports transparent failover when propagating to a destination Oracle RAC system. With queue-to-queue propagation, you are no longer required to re-point a database link if the owner instance of the queue fails on Oracle RAC. This mode supports multiple propagations to the same target database if the target queues are different. The default is queue-to-dblink. To verify if queue-to-queue propagation is being used, in non-Streams environments query DBA_QUEUE_SCHEDULES.DESTINATION - if a remote queue is listed along with the remote database link, then queue-to-queue propagation is being used. For Streams environments, the DBA_PROPAGATION.QUEUE_TO_QUEUE column can be checked.See the following note for a method to switch between the two modes:Document 827473.1 How to alter propagation from queue-to-queue to queue-to-dblink 2.1.3. Combined Capture and Apply (CCA) for Streams In 11g Oracle Streams environments, an optimization called Combined Capture and Apply (CCA) is implemented by default when possible. Although a propagation is configured in this case, Streams does not use it; instead it passes information directly from capture to an apply receiver. To see if CCA is in use: COLUMN CAPTURE_NAME HEADING 'Capture Name' FORMAT A30COLUMN OPTIMIZATION HEADING 'CCA Mode?' FORMAT A10SELECT CAPTURE_NAME, DECODE(OPTIMIZATION,0, 'No','Yes') OPTIMIZATIONFROM V$STREAMS_CAPTURE; Also, see the following note:Document 463820.1 Streams Combined Capture and Apply in 11g 2.2. Queue Table Compatibility There are three types of queue table compatibility. In more recent databases, queue tables may be present in all three modes of compatibility: 8.0 - earliest version, deprecated in 10.2 onwards 8.1 - support added for RAC, asynchronous notification, secure queues, queue level access control, rule-based subscribers, separate storage of history information 10.0 - if the database is in 10.1-compatible mode, then the default value for queue table compatibility is 10.0 2.3. Single vs Multiple Consumer Queue Tables If more than one recipient can dequeue a message from a queue, then its queue table is multiple consumer. You can propagate messages from a multiple-consumer queue to a single-consumer queue. Propagation from a single-consumer queue to a multiple-consumer queue is not possible. 3. How to Use This Guide 3.1. Are Messages Being Propagated at All, or is the Propagation Just Slow? Run the following query on the source database for the propagation (assuming that it is running): select TOTAL_NUMBER from DBA_QUEUE_SCHEDULES where QNAME='<source_queue_name>'; If TOTAL_NUMBER is increasing, then propagation is most likely functioning, although it may be slow. For performance issues, see Section 7. 3.2. Propagation Between Persistent User-Created Queues See Sections 4 and 5 (and optionally Section 6 if performance is an issue). 3.3. 
Propagation Between Buffered User-Created Queues See Sections 4, 5, and 6 (and optionally Section 7 if performance is an issue). 3.4. Propagation between Oracle Streams Queues (without Combined Capture and Apply (CCA) Optimization) See Sections 4 and 6 (and optionally Section 7 if performance is an issue). 3.5. Propagation between Oracle Streams Queues (with Combined Capture and Apply (CCA) Optimization) Although an AQ propagation is not used directly in this case, some characteristics of the message transfer are inferred from the propagation parameters used. Some parts of Sections 4 and 6 still apply. 3.6. Messaging Gateway Propagations This note does not apply to Messaging Gateway propagations. 4. Basic AQ Propagation Troubleshooting 4.1. Double-check Your Code Make sure that you are consistent in your usage of the database link(s) names, queue names, etc. It may be useful to plot a diagram of which queues are connected via which database links to make sure that the logical structure is correct. 4.2. Verify that Job Queue Processes are Running 4.2.1. Versions 10.2 and Lower - DBA_JOBS Package For versions 10.2 and lower, a scheduled propagation is managed by DBMS_JOB package. The propagation is performed by job queue process background processes. Therefore we need to verify that there are sufficient processes available for the propagation process. We should have at least 4 job queue processes running and preferably more depending on the number of other jobs running in the database. It should be noted that for AQ specific work, AQ will only ever use half of the job queue processes available.An issue caused by an inadequate job queue processes parameter setting is described in the following note:Document 298015.1 Kwqjswproc:Excep After Loop: Assigning To Self 4.2.1.1. Job Queue Processes in Initalization Parameter File The parameter JOB_QUEUE_PROCESSES in the init.ora/spfile should be > 0. The value can be changed dynamically via connect / as sysdbaalter system set JOB_QUEUE_PROCESSES=10; 4.2.1.2. Job Queue Processes in Memory The following command will show how many job queue processes are currentlyin use by this instance (this may be different than what is in the init.ora/spfile): connect / as sysdbashow parameter job; 4.2.1.3. OS PIDs Corresponding to Job Queue Processes Identify the operating system process ids (spids) of job queue processes involved in propagation via select p.SPID, p.PROGRAM from V$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j where s.SID=jr.SID and s.PADDR=p.ADDR and jr.JOB=j.JOBand j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%'; and these SPIDs can be used to check at the operating system level that they exist.In 8i a job queue process will have a name similar to: ora_snp1_<instance_name>.In 9i onwards you will see a coordinator process: ora_cjq0_ and multiple slave processes: ora_jnnn_<instance_name>, where nnn is an integer between 1 and 999. 4.2.2. Version 11.1 and Above - Oracle Scheduler In version 11.1 and above, Oracle Scheduler is used to perform AQ and Streams propagations. Oracle Scheduler automatically tunes the number of slave processes for these jobs based on the load on the computer system, and the JOB_QUEUE_PROCESSES initialization parameter is only used to specify the maximum number of slave processes. Therefore, the JOB_QUEUE_PROCESSES initialization parameter does not need to be set (it defaults to a very high number), unless you want to limit the number of slaves that can be created. 
If JOB_QUEUE_PROCESSES = 0, no propagation jobs will run.See the following note for a discussion of Oracle Streams 11g and Oracle Scheduler:Document 1083608.1 11g Streams and Oracle Scheduler 4.2.2.1. Job Queue Processes in Initalization Parameter File The parameter JOB_QUEUE_PROCESSES in the init.ora/spfile should be > 0, and preferably be left at its default value. The value can be changed dynamically via connect / as sysdbaalter system set JOB_QUEUE_PROCESSES=10; To set the JOB_QUEUE_PROCESSES parameter to its default value, run: connect / as sysdbaalter system reset JOB_QUEUE_PROCESSES; and then bounce the instance. 4.2.2.2. Job Queue Processes in Memory The following command will show how many job queue processes are currently in use by this instance (this may be different than what is in the init.ora/spfile): connect / as sysdbashow parameter job; 4.2.2.3. OS PIDs Corresponding to Job Queue Processes Identify the operating system process ids (SPIDs) of job queue processes involved in propagation via col PROGRAM for a30select p.SPID, p.PROGRAM, j.JOB_namefrom v$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j where s.SID=jr.SESSION_ID and s.PADDR=p.ADDRand jr.JOB_name=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%'; and these SPIDs can be used to check at the operating system level that they exist.You will see a coordinator process: ora_cjq0_ and multiple slave processes: ora_jnnn_<instance_name>, where nnn is an integer between 1 and 999. 4.3. Check the Alert Log and Any Associated Trace Files The first place to check for propagation failures is the alert logs at all sites (local and if relevant all remote sites). When a job queue process attempts to execute a schedule and fails it will always write an error stack to the alert log. This error stack will also be written in a job queue process trace file, which will be written to the BACKGROUND_DUMP_DEST location for 10.2 and below, and in the DIAGNOSTIC_DEST location for 11g. The fact that errors are written to the alert log demonstrates that the schedule is executing. This means that the problem could be with the set up of the schedule. In this example the ORA-02068 demonstrates that the failure was at the remote site. Further investigation revealed that the remote database was not open, hence the ORA-03114 error. Starting the database resolved the problem. 
Thu Feb 14 10:40:05 2002
Propagation Schedule for (AQADM.MULTIPLEQ, SHANE816.WORLD) encountered following error:
ORA-04052: error occurred when looking up Remote object [email protected]: error occurred at recursive SQL level 4
ORA-02068: following severe error from SHANE816
ORA-03114: not connected to ORACLE
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 4770
ORA-06512: at "SYS.DBMS_AQADM", line 548
ORA-06512: at line 1

Other potential errors that may be written to the alert log can be found in the following notes:
Document 827184.1 AQ Propagation with CLOB data types Fails with ORA-22990 (11.1)
Document 846297.1 AQ Propagation Fails : ORA-00600[kope2upic2954] or Ora-00600[Kghsstream_copyn] (10.2, 11.1)
Document 731292.1 ORA-25215 Reported on Local Propagation When Using Transformation with ANYDATA queue tables (10.2, 11.1, 11.2)
Document 365093.1 ORA-07445 [kwqppay2aqe()+7360] Reported on Propagation of a Transformed Message (10.1, 10.2)
Document 219416.1 Advanced Queuing Propagation Fails with ORA-22922 (9.0)
Document 1203544.1 AQ Propagation Aborted with ORA-600 [ociksin: invalid status] on SYS.DBMS_AQADM_SYS.AQ$_PROPAGATION_PROCEDURE After Upgrade (11.1, 11.2)
Document 1087324.1 ORA-01405 ORA-01422 reported by Advanced Queuing Propagation schedules after RAC reconfiguration (10.2)
Document 1079577.1 Advanced Queuing Propagation Fails With "ORA-22370 incorrect usage of method" (9.2, 10.2, 11.1, 11.2)
Document 332792.1 ORA-04061 error relating to SYS.DBMS_PRVTAQIP reported when setting up Statspack (8.1, 9.0, 9.2, 10.1)
Document 353325.1 ORA-24056: Internal inconsistency for QUEUE <queue_name> and destination <dblink> (8.1, 9.0, 9.2, 10.1, 10.2, 11.1, 11.2)
Document 787367.1 ORA-22275 reported on Propagating Messages with LOB component when propagating between 10.1 and 10.2 (10.1, 10.2)
Document 566622.1 ORA-22275 when propagating >4K AQ$_JMS_TEXT_MESSAGEs from 9.2.0.8 to 10.2.0.1 (9.2, 10.1)
Document 731539.1 ORA-29268: HTTP client error 401 Unauthorized Error when the AQ Servlet attempts to Propagate a message via HTTP (9.0, 9.2, 10.1, 10.2, 11.1)
Document 253131.1 Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555) (9.2)
Document 118884.1 How to unschedule a propagation schedule stuck in pending state
Document 222992.1 DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE Returns ORA-24082
Document 282987.1 Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
Document 1204080.1 AQ Propagation Failing With ORA-25329 After Upgraded From 8i or 9i to 10g or 11g
Document 1233675.1 AQ Propagation stops after upgrade to 11.2.0.1 ORA-30757

4.3.1. Errors Related to Incorrect Network Configuration
The most common propagation errors result from an incorrect network configuration. The list below contains common errors caused by the tnsnames.ora file or database links being configured incorrectly:
- ORA-12154: TNS:could not resolve service name
- ORA-12505: TNS:listener does not currently know of SID given in connect descriptor
- ORA-12514: TNS:listener could not resolve SERVICE_NAME
- ORA-12541: TNS:no listener

4.4. Check the Database Links Exist and are Functioning Correctly
For schedules to remote databases, confirm that the database link exists via:
SQL> col DBLINK for a45
SQL> select QNAME, NVL(REGEXP_SUBSTR(DESTINATION, '[^@]+', 1, 2), DESTINATION) dblink
  2  from DBA_QUEUE_SCHEDULES
  3  where MESSAGE_DELIVERY_MODE = 'PERSISTENT';

QNAME                          DBLINK
------------------------------ ---------------------------------------------
MY_QUEUE                       ORCL102B.WORLD

Connect as the owner of the link and select across it to verify that it works and connects to the database we expect, i.e.

select * from ALL_QUEUES@ORCL102B.WORLD;

You need to ensure that the userid that scheduled the propagation (using DBMS_AQADM.SCHEDULE_PROPAGATION, or DBMS_PROPAGATION_ADM.CREATE_PROPAGATION if using Streams) has access to the database link for the destination.

4.5. Has Propagation Been Correctly Scheduled?
Check that the propagation schedule has been created and that a job queue process has been assigned. Look for the entry in DBA_QUEUE_SCHEDULES and SYS.AQ$_SCHEDULES for your schedule. For 10g and below, check that it has a JOBNO entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_JOBS with that JOBNO. For 11g and above, check that the schedule has a JOB_NAME entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_SCHEDULER_JOBS with that JOB_NAME. Check the destination is as intended and spelled correctly.

SQL> select SCHEMA, QNAME, DESTINATION, SCHEDULE_DISABLED, PROCESS_NAME from DBA_QUEUE_SCHEDULES;

SCHEMA  QNAME      DESTINATION        S PROCESS
------- ---------- ------------------ - -------
AQADM   MULTIPLEQ  AQ$_LOCAL          N J000

AQ$_LOCAL in the DESTINATION column shows that the queue to which we are propagating is in the same database as the source queue. If the propagation were to a remote (different) database, a database link would appear in the DESTINATION column. The entry in the SCHEDULE_DISABLED column, N, means that the schedule is NOT disabled. If Y (yes) appears in this column, propagation is disabled and the schedule will not be executed. If not using Oracle Streams, propagation should resume once you have enabled the schedule by invoking DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE (for 10.2 Oracle Streams and above, the DBMS_PROPAGATION_ADM.START_PROPAGATION procedure should be used). The PROCESS_NAME is the name of the job queue process currently allocated to execute the schedule. This process is allocated dynamically at execution time. If the PROCESS_NAME column is null (empty), the schedule is not currently executing. You may need to execute this statement a number of times to verify whether a process is being allocated. If a process is at some time allocated to the schedule, it is attempting to execute.

SQL> select SCHEMA, QNAME, LAST_RUN_DATE, NEXT_RUN_DATE from DBA_QUEUE_SCHEDULES;

SCHEMA QNAME     LAST_RUN_DATE        NEXT_RUN_DATE
------ --------- -------------------- --------------------
AQADM  MULTIPLEQ 13-FEB-2002 13:18:57 13-FEB-2002 13:20:30

In 11g, these dates are expressed in TIMESTAMP WITH TIME ZONE datatypes. If the NEXT_RUN_DATE and NEXT_RUN_TIME columns are null when this statement is executed, the scheduled propagation is currently in progress. If they never change, it would suggest that the schedule itself is never executing. If the next scheduled execution is too far away, change the NEXT_TIME parameter of the schedule so that schedules are executed more frequently (assuming that the window is not set to be infinite). Parameters of a schedule can be changed using the DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE call. In 10g and below, scheduling propagation posts a job in the DBA_JOBS view.
The columns are more or less the same as DBA_QUEUE_SCHEDULES, so you just need to recognize the job and verify that it exists.

SQL> select JOB, WHAT from DBA_JOBS where WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';

JOB  WHAT
---- -----------------
 720 next_date := sys.dbms_aqadm.aq$_propaq(job);

For 11g, scheduling propagation posts a job in DBA_SCHEDULER_JOBS instead:

SQL> select JOB_NAME from DBA_SCHEDULER_JOBS where JOB_NAME like 'AQ_JOB$_%';

JOB_NAME
------------------------------
AQ_JOB$_41

If no job exists, check DBA_QUEUE_SCHEDULES to make sure that the schedule has not been disabled. For 10g and below, the job number is dynamic for AQ propagation schedules. The procedure that is executed to expedite a propagation schedule runs, removes itself from DBA_JOBS, and then reposts a new job for the next scheduled propagation. The job number should therefore always increment unless the schedule has been set up to run indefinitely.

4.6. Is the Schedule Executing but Failing to Complete?
Run the following query:

SQL> select FAILURES, LAST_ERROR_MSG from DBA_QUEUE_SCHEDULES;

FAILURES LAST_ERROR_MSG
-------- --------------------------------------------------------------------
       1 ORA-25207: enqueue failed, queue AQADM.INQ is disabled from enqueueing
         ORA-02063: preceding line from SHANE816

The FAILURES column shows how many times we have attempted to execute the schedule and failed. Oracle will attempt to execute the schedule 16 times, after which it will be removed from the DBA_JOBS or DBA_SCHEDULER_JOBS view and the schedule will become disabled; the column DBA_QUEUE_SCHEDULES.SCHEDULE_DISABLED will show 'Y'. For 11g and above, the DBA_SCHEDULER_JOBS.STATE column will show 'BROKEN' for the job corresponding to DBA_QUEUE_SCHEDULES.JOB_NAME. Prior to 10g the back-off algorithm for failures was exponential, whereas from 10g onwards it is linear. The propagation will become disabled on the 17th attempt. Only the last execution failure will be reflected in the LAST_ERROR_MSG column. That is, if the schedule fails 5 times for 5 different reasons, only the last set of errors will be recorded in DBA_QUEUE_SCHEDULES. Any errors need to be resolved to allow propagation to continue. If propagation has also become disabled due to 17 failures, first resolve the reason for the error and then re-enable the schedule using the DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE procedure, or DBMS_PROPAGATION_ADM.START_PROPAGATION if using 10.2 or above Oracle Streams. As soon as the schedule executes successfully, the error message entries will be deleted. Oracle does not keep a history of past failures. However, when using Oracle Streams, the errors will be retained in the DBA_PROPAGATION view even after the schedule resumes successfully. See the following note for instructions on how to clear out the errors from the DBA_PROPAGATION view:
Document 808136.1 How to clear the old errors from DBA_PROPAGATION view?
If a schedule is active and no errors are being reported, then the source queue may not have any messages to be propagated.

4.7. Do the Propagation Notification Queue Table and Queue Exist?
Check that the propagation notification queue table and queue exist and are enabled for enqueue and dequeue. Propagation makes use of the propagation notification queue for handling propagation run-time events, and the messages in this queue are stored in a SYS-owned queue table. This queue should never be stopped or dropped, and the corresponding queue table should never be dropped.
10g and below
The propagation notification queue table is of the format SYS.AQ$_PROP_TABLE_n, where 'n' is the RAC instance number, i.e. '1' for a non-RAC environment. This queue and queue table are created implicitly when propagation is first scheduled. If propagation has been scheduled and these objects do not exist, try unscheduling and rescheduling propagation. If they still do not exist, contact Oracle Support.

SQL> select QUEUE_TABLE from DBA_QUEUE_TABLES
  2  where QUEUE_TABLE like '%PROP_TABLE%' and OWNER = 'SYS';

QUEUE_TABLE
------------------------------
AQ$_PROP_TABLE_1

SQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
  2  from DBA_QUEUES where owner='SYS'
  3  and QUEUE_TABLE like '%PROP_TABLE%';

NAME                           ENQUEUE DEQUEUE
------------------------------ ------- -------
AQ$_PROP_NOTIFY_1              YES     YES
AQ$_AQ$_PROP_TABLE_1_E         NO      NO

If the AQ$_PROP_NOTIFY_1 queue is not enabled for enqueue or dequeue, it should be so enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ$_PROP_TABLE_1_E should not be enabled for enqueue or dequeue.

11g and above
The propagation notification queue table is of the format SYS.AQ_PROP_TABLE, and is created when the database is created. If these objects do not exist, contact Oracle Support.

SQL> select QUEUE_TABLE from DBA_QUEUE_TABLES
  2  where QUEUE_TABLE like '%PROP_TABLE%' and OWNER = 'SYS';

QUEUE_TABLE
------------------------------
AQ_PROP_TABLE

SQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
  2  from DBA_QUEUES where owner='SYS'
  3  and QUEUE_TABLE like '%PROP_TABLE%';

NAME                           ENQUEUE DEQUEUE
------------------------------ ------- -------
AQ_PROP_NOTIFY                 YES     YES
AQ$_AQ_PROP_TABLE_E            NO      NO

If the AQ_PROP_NOTIFY queue is not enabled for enqueue or dequeue, it should be so enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ_PROP_TABLE_E should not be enabled for enqueue or dequeue.

4.8. Does the Remote Queue Exist and is it Enabled for Enqueueing?
Check that the remote queue the propagation is transferring messages to exists and is enabled for enqueue:

SQL> select DESTINATION from USER_QUEUE_SCHEDULES where QNAME = 'OUTQ';

DESTINATION
-----------------------------------------------------------------------------
"AQADM"."INQ"@M2V102.ES

SQL> select OWNER, NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED from [email protected];

OWNER    NAME   ENQUEUE DEQUEUE
-------- ------ ------- -------
AQADM    INQ    YES     YES

4.9. Do the Target and Source Database Charactersets Differ?
If a message fails to propagate, check the database charactersets of the source and target databases. Investigate whether the same message can propagate between databases with the same characterset, or whether it is only a particular combination of charactersets that causes a problem.

4.10. Check the Queue Table Type Agreement
Propagation is not possible between queue tables which have types that differ in some respect. One way to determine if this is the case is to run the DBMS_AQADM.VERIFY_QUEUE_TYPES procedure for the two queues that the propagation operates on.
If the types do not agree, DBMS_AQADM.VERIFY_QUEUE_TYPES will return '0'. For AQ propagation between databases which have different NLS_LENGTH_SEMANTICS settings, propagation will not work, unless the queues are Oracle Streams ANYDATA queues. See the following notes for issues caused by lack of type agreement:
Document 1079577.1 Advanced Queuing Propagation Fails With "ORA-22370: incorrect usage of method"
Document 282987.1 Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
Document 353754.1 Streams Messaging Propagation Fails between Single and Multi-byte Charactersets when using Character Length Semantics in the ADT

4.11. Enable Propagation Tracing

4.11.1. System Level
This is set in the init.ora/spfile as follows:

event="24040 trace name context forever, level 10"

and restart the instance. Before version 10.2 this event cannot be set dynamically; from 10.2 onwards it can be set with an alter system command:

SQL> alter system set events '24040 trace name context forever, level 10';

To unset the event:

SQL> alter system set events '24040 trace name context off';

Debugging information will be logged to job queue trace file(s) (jnnn) as propagation takes place. You can check the trace file for errors, and for statements indicating that messages have been sent. For the most part the trace information is understandable. This trace should also be uploaded to Oracle Support if a service request is created.

4.11.2. Attaching to a Specific Process
We can also attach to an existing job queue process that is running a propagation schedule and trace it individually using the oradebug utility, as follows:

10.2 and below
connect / as sysdba
select p.SPID, p.PROGRAM
from V$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j
where s.SID=jr.SID and s.PADDR=p.ADDR and jr.JOB=j.JOB
and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';
-- For the process id (SPID) attach to it via oradebug and generate the following trace
oradebug setospid <SPID>
oradebug unlimit
oradebug Event 10046 trace name context forever, level 12
oradebug Event 24040 trace name context forever, level 10
-- Trace the process for 5 minutes
oradebug Event 10046 trace name context off
oradebug Event 24040 trace name context off
-- The following command returns the pathname/filename of the file being written to
oradebug tracefile_name

11g
connect / as sysdba
col PROGRAM for a30
select p.SPID, p.PROGRAM, j.JOB_NAME
from V$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j
where s.SID=jr.SESSION_ID and s.PADDR=p.ADDR
and jr.JOB_NAME=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%';
-- For the process id (SPID) attach to it via oradebug and generate the following trace
oradebug setospid <SPID>
oradebug unlimit
oradebug Event 10046 trace name context forever, level 12
oradebug Event 24040 trace name context forever, level 10
-- Trace the process for 5 minutes
oradebug Event 10046 trace name context off
oradebug Event 24040 trace name context off
-- The following command returns the pathname/filename of the file being written to
oradebug tracefile_name

4.11.3. Further Tracing
The previous tracing steps only trace the job queue process executing the propagation on the source.
At times it is useful to trace the propagation receiver process (the session which is enqueueing the messages into the target queue) on the target database which is associated with the job queue process on the source database. The following queries provide ways of identifying the processes involved in propagation so that you can attach to them via oradebug to generate trace information. In order to identify the propagation receiver process, you need to execute the query as a user with privileges to access the V$ views in both the local and remote databases, so the database link must connect as a user with those privileges in the remote database. The <DBLINK> in the queries should be replaced by the appropriate database link. The queries have two forms due to the differences between operating systems. The value returned by 'Rem Process' is the operating system identifier of the propagation receiver on the remote database. Once identified, this process can be attached to and traced on the remote database using the commands given in Section 4.11.2.

10.2 and below - Windows
select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
from V$PROCESS pl, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j, V$SESSION@<DBLINK> sr
where s.SID=jr.SID and s.PADDR=pl.ADDR and jr.JOB=j.JOB
and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%'
and pl.SPID=substr(sr.PROCESS, instr(sr.PROCESS,':')+1);

10.2 and below - Unix
select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
from V$PROCESS pl, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j, V$SESSION@<DBLINK> sr
where s.SID=jr.SID and s.PADDR=pl.ADDR and jr.JOB=j.JOB
and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%'
and pl.SPID=sr.PROCESS;

11g - Windows
select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
from V$PROCESS pl, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j, V$SESSION@<DBLINK> sr
where s.SID=jr.SESSION_ID and s.PADDR=pl.ADDR and jr.JOB_NAME=j.JOB_NAME
and j.JOB_NAME like '%AQ_JOB$_%'
and pl.SPID=substr(sr.PROCESS, instr(sr.PROCESS,':')+1);

11g - Unix
select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
from V$PROCESS pl, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j, V$SESSION@<DBLINK> sr
where s.SID=jr.SESSION_ID and s.PADDR=pl.ADDR and jr.JOB_NAME=j.JOB_NAME
and j.JOB_NAME like '%AQ_JOB$_%'
and pl.SPID=sr.PROCESS;

5. Additional Troubleshooting Steps for AQ Propagation of User-Enqueued and Dequeued Messages

5.1. Check the Privileges of All Users Involved
Ensure that the owner of the database link has the necessary privileges on the AQ packages.

SQL> select TABLE_NAME, PRIVILEGE from USER_TAB_PRIVS;

TABLE_NAME                     PRIVILEGE
------------------------------ ----------------------------------------
DBMS_LOCK                      EXECUTE
DBMS_AQ                        EXECUTE
DBMS_AQADM                     EXECUTE
DBMS_AQ_BQVIEW                 EXECUTE
QT52814_BUFFER                 SELECT

Note that when a queue table is created, a view called QT<nnn>_BUFFER is created in the SYS schema, and the queue table owner is given SELECT privileges on it. The <nnn> corresponds to the object_id of the associated queue table.

SQL> select * from USER_ROLE_PRIVS;

USERNAME                       GRANTED_ROLE                   ADM DEF OS_
------------------------------ ------------------------------ --- --- ---
AQ_USER1                       AQ_ADMINISTRATOR_ROLE          NO  YES NO
AQ_USER1                       CONNECT                        NO  YES NO
AQ_USER1                       RESOURCE                       NO  YES NO

It is good practice to configure a central AQ administrative user. All admin and processing jobs are created, executed and administered as this user.
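The grants for such a central administrative user, and for an ordinary queue user, typically look like the following sketch. The user names (AQ_ADMIN, AQ_USER1), queue names and payload type here are hypothetical examples; they simply mirror the privilege lists given below.

-- AQ administrative user (hypothetical name AQ_ADMIN):
grant AQ_ADMINISTRATOR_ROLE to aq_admin;
grant execute on DBMS_AQADM to aq_admin;
grant execute on DBMS_AQ to aq_admin;

-- Ordinary AQ user (hypothetical name AQ_USER1):
grant execute on DBMS_AQ to aq_user1;
grant execute on aq_admin.message_typ to aq_user1;   -- the message payload type
begin
  -- dequeue privilege on the originating (source) queue
  DBMS_AQADM.GRANT_QUEUE_PRIVILEGE('DEQUEUE', 'aq_admin.outq', 'aq_user1', FALSE);
end;
/
-- At the destination database, enqueue privilege on the remote queue must be granted
-- to the user that the database link connects as, for example:
-- exec DBMS_AQADM.GRANT_QUEUE_PRIVILEGE('ENQUEUE', 'aq_admin.inq', 'aq_user1', FALSE);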
This configuration is not mandatory, however, and the database link can be owned by any existing queue user. If this latter configuration is used, ensure that the connecting user has the necessary privileges on the AQ packages and objects involved.

Privileges for an AQ administrative user:
- Execute on DBMS_AQADM
- Execute on DBMS_AQ
- Granted the AQ_ADMINISTRATOR_ROLE

Privileges for an AQ user:
- Execute on DBMS_AQ
- Execute on the message payload
- Enqueue privileges on the remote queue
- Dequeue privileges on the originating queue

Privileges need to be confirmed on both sites when propagation is scheduled to remote destinations. Verify that the user ID used to log in to the destination through the database link has been granted privileges to use AQ.

5.2. Verify Queue Payload Types
AQ will not propagate messages from one queue to another if the payload types of the two queues are not verified to be equivalent. An AQ administrator can verify whether the source and destination payload types match by executing the DBMS_AQADM.VERIFY_QUEUE_TYPES procedure. The results of the type checking are stored in the SYS.AQ$_MESSAGE_TYPES table. This table can be accessed using the object identifier (OID) of the source queue and the address (database link) of the destination queue, i.e. [schema.]queue_name[@destination]. Prior to Oracle 9i the payload (message type) had to be the same for all the queue tables involved in propagation. From Oracle9i onwards a transformation can be used so that payloads can be converted from one type to another. The following procedural call, made on the source database, can verify whether we can propagate between the source and the destination queue tables.

connect aq_user1/[email protected]
set serverout on

DECLARE
  rc_value number;
BEGIN
  DBMS_AQADM.VERIFY_QUEUE_TYPES(
    src_queue_name  => 'AQ_USER1.Q_1',
    dest_queue_name => 'AQ_USER2.Q_2',
    destination     => 'dbl_aq_user2.es',
    rc              => rc_value);
  dbms_output.put_line('rc_value code is '||rc_value);
END;
/

If propagation is possible, then the return code value will be 1. If it is 0, then propagation is not possible and further investigation of the types and transformations used by and in conjunction with the queue tables is required. With regard to comparison of the types, the following SQL can be used to extract the DDL for a specific type, with '%' changed appropriately on the source and target. The output can then be compared between source and target.

SET LONG 20000
set pagesize 50
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', false);
SELECT DBMS_METADATA.GET_DDL('TYPE', t.type_name) from user_types t WHERE t.type_name like '%';
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'DEFAULT');

5.3. Check Message State and Destination
The first step in this process is to identify the queue table associated with the problem source queue. Although you schedule propagation for a specific queue, most of the metadata associated with that queue is stored in the underlying queue table. The following statement finds the queue table for a given queue (note that this is a multiple-consumer queue table).
SQL> select QUEUE_TABLE from DBA_QUEUES where NAME = 'MULTIPLEQ';

QUEUE_TABLE
--------------------
MULTIPLEQTABLE

For a small number of messages in a multiple-consumer queue table, the following query can be run:

SQL> select MSG_STATE, CONSUMER_NAME, ADDRESS from AQ$MULTIPLEQTABLE where QUEUE = 'MULTIPLEQ';

MSG_STATE      CONSUMER_NAME           ADDRESS
-------------- ----------------------- -------------------
READY          AQUSER2                 AQADM.INQ@M2V102.ES
READY          AQUSER1
READY          AQUSER3                 AQADM.INQ

In this example we see 2 messages ready to be propagated to other queues and 1 that is not. If the ADDRESS column is blank, the message is not scheduled for propagation and can only be dequeued from the queue upon which it was enqueued. The MSG_STATE column values are discussed in Document 102330.1 Advanced Queueing MSG_STATE Values and their Interpretation. If the ADDRESS column has a value, the message has been enqueued for propagation to another queue. The first row in the example includes a database link (@M2V102.ES). This demonstrates that the message should be propagated to a queue at a remote database. The third row does not include a database link, so the message will be propagated to a queue that resides on the same database as the source queue. The consumer name is the intended recipient at the target queue. Note that we are not querying the base queue table directly; rather, we are querying a view that is available on top of every queue table, AQ$<queue_table_name>.

A more realistic query in an environment where the queue table contains thousands of messages is:

8.0.3-compatible multiple-consumer queue tables and all compatibility single-consumer queue tables
select count(*), MSG_STATE, QUEUE from AQ$<queue_table_name> group by MSG_STATE, QUEUE;

8.1.3 and 10.0-compatible queue tables
select count(*), MSG_STATE, QUEUE, CONSUMER_NAME from AQ$<queue_table_name> group by MSG_STATE, QUEUE, CONSUMER_NAME;

For multiple-consumer queue tables, if you did not see the expected CONSUMER_NAME, check the syntax of the enqueue code and verify the recipients are declared correctly. If a recipients list is not used on enqueue, check the subscriber list in the AQ$<queue_table_name>_S view (note that a single-consumer queue table does not have a subscriber view). This view records all members of the default subscription list which were added using the DBMS_AQADM.ADD_SUBSCRIBER procedure and also those enqueued using a recipient list.

SQL> select QUEUE, NAME, ADDRESS from AQ$MULTIPLEQTABLE_S;

QUEUE      NAME        ADDRESS
---------- ----------- -------------------
MULTIPLEQ  AQUSER2     AQADM.INQ@M2V102.ES
MULTIPLEQ  AQUSER1

In this example we have 2 subscribers registered with the queue. We have a local subscriber, AQUSER1, and a remote subscriber, AQUSER2, on the queue INQ, owned by AQADM, at M2V102.ES. Unless overridden with a recipient list during enqueue, every message enqueued to this queue will be propagated to INQ at M2V102.ES.

For 8.1 style and above multiple-consumer queue tables, you can also check the following information at the target:

select CONSUMER_NAME, DEQ_TXN_ID, DEQ_TIME, DEQ_USER_ID, PROPAGATED_MSGID
from AQ$<queue_table_name> where QUEUE = '<QUEUE_NAME>';

For 8.0 style queues, if the queue table supports multiple consumers, you can obtain the same information from the history column of the queue table:

select h.CONSUMER, h.TRANSACTION_ID, h.DEQ_TIME, h.DEQ_USER, h.PROPAGATED_MSGID
from AQ$<queue_table_name> t, table(t.history) h
where t.Q_NAME = '<QUEUE_NAME>';

A non-NULL TRANSACTION_ID indicates that the message was successfully propagated.
Further, the DEQ_TIME indicates the time of propagation, the DEQ_USER indicates the userid used for propagation, and the PROPAGATED_MSGID indicates the message ID of the message that was enqueued at the destination.

6. Additional Troubleshooting Steps for Propagation in an Oracle Streams Environment

6.1. Is the Propagation Enabled?
For a propagation job to propagate messages, the propagation must be enabled. For Streams, a special view called DBA_PROPAGATION exists to convey information about Streams propagations. If messages are not being propagated by a propagation as expected, then the propagation might not be enabled. To query for this:

SELECT p.PROPAGATION_NAME,
       DECODE(s.SCHEDULE_DISABLED, 'Y', 'Disabled', 'N', 'Enabled') SCHEDULE_DISABLED,
       s.PROCESS_NAME, s.FAILURES, s.LAST_ERROR_MSG
FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
WHERE p.DESTINATION_DBLINK = NVL(REGEXP_SUBSTR(s.DESTINATION, '[^@]+', 1, 2), s.DESTINATION)
AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
AND s.QNAME = p.SOURCE_QUEUE_NAME
AND MESSAGE_DELIVERY_MODE = 'PERSISTENT'
order by PROPAGATION_NAME;

At times, the propagation job may become "broken" or fail to start after an error has been encountered or after a database restart. If an error is indicated by the above query, an attempt can be made to disable the propagation and then re-enable it. In the examples below, for the propagation named STRMADMIN_PROPAGATE, where the queue name is STREAMS_QUEUE owned by STRMADMIN and the destination database link is ORCL2.WORLD, the commands would be:

10.2 and above
exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE');
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');

If the above does not fix the problem, stop the propagation specifying the force parameter (the 2nd parameter of stop_propagation) as TRUE:

exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE',true);
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');

The statistics for the propagation, as well as any old error messages, are cleared when the force parameter is set to TRUE. Therefore, if the propagation schedule is stopped with FORCE set to TRUE, and upon restart there is still an error message in DBA_PROPAGATION, then the error message is current.

9.2 or 10.1
exec dbms_aqadm.disable_propagation_schedule('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD');
exec dbms_aqadm.enable_propagation_schedule('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD');

If the above does not fix the problem, perform an unschedule of propagation and then schedule_propagation:

exec dbms_aqadm.unschedule_propagation('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD');
exec dbms_aqadm.schedule_propagation('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD');

Typically, if the error from the first query in Section 6.1 recurs after restarting the propagation as shown above, further troubleshooting of the error is needed.

6.2. Check Propagation Rule Sets and Transformations
Inspect the configuration of the rules in the rule set that is associated with the propagation process to make sure that they evaluate to TRUE as expected. If not, then the object or schema will not be propagated. Remember that when a negative rule evaluates to TRUE, the specified object or schema will not be propagated.
Finally, inspect any rule-based transformations that are implemented with propagation to make sure they are changing the data in the intended way.

The following query shows what rule sets are assigned to a propagation:

select PROPAGATION_NAME,
       RULE_SET_OWNER||'.'||RULE_SET_NAME "Positive Rule Set",
       NEGATIVE_RULE_SET_OWNER||'.'||NEGATIVE_RULE_SET_NAME "Negative Rule Set"
from DBA_PROPAGATION;

The next two queries list the propagation rules and their conditions. The first is for the positive rule set, the second is for the negative rule set:

set long 4000
select rsr.RULE_SET_OWNER||'.'||rsr.RULE_SET_NAME RULE_SET,
       rsr.RULE_OWNER||'.'||rsr.RULE_NAME RULE_NAME,
       r.RULE_CONDITION CONDITION
from DBA_RULE_SET_RULES rsr, DBA_RULES r
where rsr.RULE_NAME = r.RULE_NAME and rsr.RULE_OWNER = r.RULE_OWNER
and RULE_SET_NAME in (select RULE_SET_NAME from DBA_PROPAGATION)
order by rsr.RULE_SET_OWNER, rsr.RULE_SET_NAME;

set long 4000
select c.PROPAGATION_NAME,
       rsr.RULE_SET_OWNER||'.'||rsr.RULE_SET_NAME RULE_SET,
       rsr.RULE_OWNER||'.'||rsr.RULE_NAME RULE_NAME,
       r.RULE_CONDITION CONDITION
from DBA_RULE_SET_RULES rsr, DBA_RULES r, DBA_PROPAGATION c
where rsr.RULE_NAME = r.RULE_NAME and rsr.RULE_OWNER = r.RULE_OWNER
and rsr.RULE_SET_OWNER=c.NEGATIVE_RULE_SET_OWNER and rsr.RULE_SET_NAME=c.NEGATIVE_RULE_SET_NAME
and rsr.RULE_SET_NAME in (select NEGATIVE_RULE_SET_NAME from DBA_PROPAGATION)
order by rsr.RULE_SET_OWNER, rsr.RULE_SET_NAME;

6.3. Determining the Total Number of Messages and Bytes Propagated
As in Section 3.1, determining whether messages are flowing can be instructive to see whether the propagation is entirely hung or just slow. If the propagation is not in flow control (see Section 6.5.2), but the statistics are incrementing slowly, there may be a performance issue. For Streams implementations, two views are available that can assist with this by showing the number of messages sent by a propagation, as well as the number of acknowledgements being returned from the target site: the V$PROPAGATION_SENDER view at the source site and the V$PROPAGATION_RECEIVER view at the destination site. It is helpful to query both to determine if messages are being delivered to the target. Look for the statistics to increase.

Source:
select QUEUE_SCHEMA, QUEUE_NAME, DBLINK, HIGH_WATER_MARK, ACKNOWLEDGEMENT, TOTAL_MSGS, TOTAL_BYTES
from V$PROPAGATION_SENDER;

Target:
select SRC_QUEUE_SCHEMA, SRC_QUEUE_NAME, SRC_DBNAME, DST_QUEUE_SCHEMA, DST_QUEUE_NAME, HIGH_WATER_MARK, ACKNOWLEDGEMENT, TOTAL_MSGS
from V$PROPAGATION_RECEIVER;

6.4. Check Buffered Subscribers
The V$BUFFERED_SUBSCRIBERS view displays information about subscribers for all buffered queues in the instance. This view can be queried to make sure that the site that the propagation is propagating to is listed as a subscriber address for the site being propagated from:

select QUEUE_SCHEMA, QUEUE_NAME, SUBSCRIBER_ADDRESS from V$BUFFERED_SUBSCRIBERS;

The SUBSCRIBER_ADDRESS column will not be populated when the propagation is local (between queues on the same database).

6.5. Common Streams Propagation Errors

6.5.1. ORA-02082: A loopback database link must have a connection qualifier
This error can occur if you use the Streams Setup Wizard in Oracle Enterprise Manager without first configuring the GLOBAL_NAME for your database.
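A quick way to check for this condition is to compare the database's global name with the database link names in use. The following is a minimal sketch; the global name shown in the comment is only a hypothetical example.

select * from GLOBAL_NAME;

-- If the global name has not been configured correctly, it can be changed, for example:
-- alter database rename global_name to ORCL2.WORLD;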
6.5.2. ORA-25307: Enqueue rate too high. Enable flow control
DBA_QUEUE_SCHEDULES will display this informational message for a propagation when automatic flow control (a 10g feature of Streams) has been invoked. Similar to Streams capture processes, a Streams propagation process can also go into a state of 'flow control'. This is an informative message that indicates flow control has been automatically enabled to reduce the rate at which messages are being enqueued into the target queue. This typically occurs when the target site is unable to keep up with the rate of messages flowing from the source site. Other than checking that the apply process is running normally on the target site, usually no action is required by the DBA. Propagation and the capture process will resume automatically when the target site is able to accept more messages. The following document contains more information:
Document 302109.1 Streams Propagation Error: ORA-25307 Enqueue rate too high. Enable flow control
See the following document for one potential cause of this situation:
Document 1097115.1 Oracle Streams Apply Reader is in 'Paused' State

6.5.3. ORA-25315 unsupported configuration for propagation of buffered messages
This error typically occurs when the target database is RAC, and usually indicates that an attempt was made to propagate buffered messages with the database link pointing to an instance in the destination database which is not the owner instance of the destination queue. To resolve the problem, use queue-to-queue propagation for buffered messages.

6.5.4. ORA-600 [KWQBMCRCPTS101] after dropping / recreating propagation
For cause/fixes refer to:
Document 421237.1 ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation

6.5.5. Stopping or Dropping a Streams Propagation Hangs
See the following note:
Document 1159787.1 Troubleshooting Streams Propagation When It is Not Functioning and Attempts to Stop It Hang

6.6. Streams Propagation-Related Notes for Common Issues
Document 437838.1 Streams Specific Patches
Document 749181.1 How to Recover Streams After Dropping Propagation
Document 368912.1 Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environment
Document 564649.1 ORA-02068/ORA-03114/ORA-03113 Errors From Streams Propagation Process - Remote Database is Available and Unschedule/Reschedule Does Not Resolve
Document 553017.1 Stream Propagation Process Errors Ora-4052 Ora-6554 From 11g To 10201
Document 944846.1 Streams Propagation Fails Ora-7445 [kohrsmc]
Document 745601.1 ORA-23603 'STREAMS enqueue aborted due to low SGA' Error from Streams Propagation, and V$STREAMS_CAPTURE.STATE Hanging on 'Enqueuing Message'
Document 333068.1 ORA-23603: Streams Enqueue Aborted Due To Low SGA
Document 363496.1 Ora-25315 Propagating on RAC Streams
Document 368237.1 Unable to Unschedule Propagation. Streams Queue is Invalid
Document 436332.1 dbms_propagation_adm.stop_propagation hangs
Document 727389.1 Propagation Fails With ORA-12528
Document 730911.1 ORA-4063 Is Reported After Dropping Negative Prop.Ruleset
Document 460471.1 Propagation Blocked by Qmon Process - Streams_queue_table / 'library cache lock' waits
Document 1165583.1 ORA-600 [kwqpuspse0-ack] In Streams Environment
Document 1059029.1 Combined Capture and Apply (CCA) : Capture aborts : ORA-1422 after schedule_propagation
Document 556309.1 Changing Propagation/ queue_to_queue : false -> true does not work; no LCRs propagated
Document 839568.1 Propagation failing with error: ORA-01536: space quota exceeded for tablespace ''
Document 311021.1 Streams Propagation Process : Ora 12154 After Reboot with Transparent Application Failover TAF configured
Document 359971.1 STREAMS propagation to Primary of physical Standby configuration errors with Ora-01033, Ora-02068
Document 1101616.1 DBMS_PROPAGATION_ADM.DROP_PROPAGATION FAILS WITH ORA-1747

7. Performance Issues
A propagation may seem to be slow if the queries from Sections 3.1 and 6.3 show that the message statistics are not changing quickly. In Oracle Streams, this is more usually due to a slow apply process at the target rather than a slow propagation. Propagation could be inferred to be slow if the message statistics are changing, and the state of a capture process according to V$STREAMS_CAPTURE.STATE is PAUSED FOR FLOW CONTROL, but an ORA-25307 'Enqueue rate too high. Enable flow control' warning is NOT observed in DBA_QUEUE_SCHEDULES per Section 6.5.2. If this is the case, see the following notes / white papers for suggestions to increase performance:
Document 335516.1 Master Note for Streams Performance Recommendations
Document 730036.1 Overview for Troubleshooting Streams Performance Issues
Document 780733.1 Streams Propagation Tuning with Network Parameters
White Paper: http://www.oracle.com/technetwork/database/features/availability/maa-wp-10gr2-streams-performance-130059.pdf
White Paper: Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2, http://www.oracle.com/technetwork/database/features/availability/maa-10gr2-streams-configuration-132039.pdf, see APPENDIX A: USING STREAMS CONFIGURATIONS OVER A NETWORK
For basic AQ propagation, the network tuning in the aforementioned Appendix A of the white paper 'Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2' is applicable.
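To separate the slow-apply case from the slow-propagation case described above, the two checks can be combined. This is a minimal sketch using only the views already referenced in this note; the names returned are of course site-specific.

-- Is a capture process waiting on flow control?
select CAPTURE_NAME, STATE from V$STREAMS_CAPTURE;

-- Is the propagation schedule reporting ORA-25307 (flow control) or some other error?
select SCHEMA, QNAME, MESSAGE_DELIVERY_MODE, FAILURES, LAST_ERROR_MSG
from   DBA_QUEUE_SCHEDULES;

-- A capture state of PAUSED FOR FLOW CONTROL with no ORA-25307 in LAST_ERROR_MSG
-- points at the propagation itself rather than the apply side; see the notes and
-- white papers listed above for tuning suggestions.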
References
NOTE:102330.1 - Advanced Queueing MSG_STATE Values and their Interpretation
NOTE:102771.1 - Advanced Queueing Propagation using PL/SQL
NOTE:1059029.1 - Combined Capture and Apply (CCA) : Capture aborts : ORA-1422 after schedule_propagation
NOTE:1079577.1 - Advanced Queuing Propagation Fails With "ORA-22370: incorrect usage of method"
NOTE:1083608.1 - 11g Streams and Oracle Scheduler
NOTE:1087324.1 - ORA-01405 ORA-01422 reported by Advanced Queueing Propagation schedules after RAC reconfiguration
NOTE:1097115.1 - Oracle Streams Apply Reader is in 'Paused' State
NOTE:1101616.1 - DBMS_PROPAGATION_ADM.DROP_PROPAGATION Fails With ORA-1747
NOTE:1159787.1 - Troubleshooting Streams Propagation When It is Not Functioning and Attempts to Stop It Hang
NOTE:1165583.1 - ORA-600 [kwqpuspse0-ack] In Streams Environment
NOTE:118884.1 - How to unschedule a propagation schedule stuck in pending state
NOTE:1203544.1 - AQ Propagation Aborted with ORA-600 [ociksin: invalid status] on SYS.DBMS_AQADM_SYS.AQ$_PROPAGATION_PROCEDURE After Upgrade
NOTE:1204080.1 - AQ Propagation Failing With ORA-25329 After Upgraded From 8i or 9i to 10g or 11g
NOTE:219416.1 - Advanced Queuing Propagation fails with ORA-22922
NOTE:222992.1 - DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE Returns ORA-24082
NOTE:253131.1 - Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555)
NOTE:282987.1 - Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
NOTE:298015.1 - Kwqjswproc:Excep After Loop: Assigning To Self
NOTE:302109.1 - Streams Propagation Error: ORA-25307 Enqueue rate too high. Enable flow control
NOTE:311021.1 - Streams Propagation Process : Ora 12154 After Reboot with Transparent Application Failover TAF configured
NOTE:332792.1 - ORA-04061 error relating to SYS.DBMS_PRVTAQIP reported when setting up Statspack
NOTE:333068.1 - ORA-23603: Streams Enqueue Aborted Due To Low SGA
NOTE:335516.1 - Master Note for Streams Performance Recommendations
NOTE:353325.1 - ORA-24056: Internal inconsistency for QUEUE <queue_name> and destination <dblink>
NOTE:353754.1 - Streams Messaging Propagation Fails between Single and Multi-byte Charactersets when using Character Length Semantics in the ADT
NOTE:359971.1 - STREAMS propagation to Primary of physical Standby configuration errors with Ora-01033, Ora-02068
NOTE:363496.1 - Ora-25315 Propagating on RAC Streams
NOTE:365093.1 - ORA-07445 [kwqppay2aqe()+7360] reported on Propagation of a Transformed Message
NOTE:368237.1 - Unable to Unschedule Propagation. Streams Queue is Invalid
NOTE:368912.1 - Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environment
NOTE:421237.1 - ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation
NOTE:436332.1 - dbms_propagation_adm.stop_propagation hangs
NOTE:437838.1 - Streams Specific Patches
NOTE:460471.1 - Propagation Blocked by Qmon Process - Streams_queue_table / 'library cache lock' waits
NOTE:463820.1 - Streams Combined Capture and Apply in 11g
NOTE:553017.1 - Stream Propagation Process Errors Ora-4052 Ora-6554 From 11g To 10201
NOTE:556309.1 - Changing Propagation/ queue_to_queue : false -> true does not work; no LCRs propagated
NOTE:564649.1 - ORA-02068/ORA-03114/ORA-03113 Errors From Streams Propagation Process - Remote Database is Available and Unschedule/Reschedule Does Not Resolve
NOTE:566622.1 - ORA-22275 when propagating >4K AQ$_JMS_TEXT_MESSAGEs from 9.2.0.8 to 10.2.0.1
NOTE:727389.1 - Propagation Fails With ORA-12528
NOTE:730036.1 - Overview for Troubleshooting Streams Performance Issues
NOTE:730911.1 - ORA-4063 Is Reported After Dropping Negative Prop.Ruleset
NOTE:731292.1 - ORA-25215 Reported On Local Propagation When Using Transformation with ANYDATA queue tables
NOTE:731539.1 - ORA-29268: HTTP client error 401 Unauthorized Error when the AQ Servlet attempts to Propagate a message via HTTP
NOTE:745601.1 - ORA-23603 'STREAMS enqueue aborted due to low SGA' Error from Streams Propagation, and V$STREAMS_CAPTURE.STATE Hanging on 'Enqueuing Message'
NOTE:749181.1 - How to Recover Streams After Dropping Propagation
NOTE:780733.1 - Streams Propagation Tuning with Network Parameters
NOTE:787367.1 - ORA-22275 reported on Propagating Messages with LOB component when propagating between 10.1 and 10.2
NOTE:808136.1 - How to clear the old errors from DBA_PROPAGATION view?
NOTE:827184.1 - AQ Propagation with CLOB data types Fails with ORA-22990
NOTE:827473.1 - How to alter propagation from queue_to_queue to queue_to_dblink
NOTE:839568.1 - Propagation failing with error: ORA-01536: space quota exceeded for tablespace ''
NOTE:846297.1 - AQ Propagation Fails : ORA-00600[kope2upic2954] or Ora-00600[Kghsstream_copyn]
NOTE:944846.1 - Streams Propagation Fails Ora-7445 [kohrsmc]


  • Silverlight for Windows Embedded tutorial (step 4)

    - by Valter Minute
I’m back with my Silverlight for Windows Embedded tutorial. Sorry for the long delay between step 3 and step 4; the MVP summit and some work-related issues prevented me from working on the tutorial over the last few weeks. In our first, second and third tutorial steps we implemented some very simple applications, just to understand the basic structure of a Silverlight for Windows Embedded application, learn how to handle events and how to operate on images. In this fourth step our sample application will be slightly more complicated, to introduce two new topics: list boxes and custom controls. We will also learn how to create controls at runtime. I chose to explain those topics together and provide a sample a bit more complicated than usual, just to start to give the feeling of how a “real” Silverlight for Windows Embedded application is organized. As usual we can start using Expression Blend to define our main page. In this case we will have a listbox and a textblock. Here’s the XAML code:

<UserControl
  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
  x:Class="ListDemo.Page"
  Width="640" Height="480"
  x:Name="ListPage"
  xmlns:ListDemo="clr-namespace:ListDemo">

  <Grid x:Name="LayoutRoot" Background="White">
    <ListBox Margin="19,57,19,66" x:Name="FileList" SelectionChanged="Filelist_SelectionChanged"/>
    <TextBlock Height="35" Margin="19,8,19,0" VerticalAlignment="Top" TextWrapping="Wrap" x:Name="CurrentDir" Text="TextBlock" FontSize="20"/>
  </Grid>
</UserControl>

In our listbox we will load a list of directories, starting from the filesystem root (there are no drives in Windows CE; the filesystem has a single root named “\”). When the user clicks on an item inside the list, the corresponding directory path will be displayed in the TextBlock object and the subdirectories of the selected branch will be shown inside the list. As you can see, we declared an event handler for the SelectionChanged event of our listbox. We also used a different font size for the TextBlock, to make it more readable. XAML and Expression Blend allow you to customize your UI pretty heavily; experiment with the tools and discover how you can completely change the aspect of your application without changing a single line of code! Inside our ListBox we want to show the directories, presenting a nice icon and their name, just like you are used to seeing them in the Windows 7 file explorer, for example. To get this we will define a user control. This is a custom object that will behave like “regular” Silverlight for Windows Embedded objects inside our application.
First of all we have to define the look of our custom control, named DirectoryItem, using XAML: <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" x:Class="ListDemo.DirectoryItem" Width="500" Height="80">   <StackPanel x:Name="LayoutRoot" Orientation="Horizontal"> <Canvas Width="31.6667" Height="45.9583" Margin="10,10,10,10" RenderTransformOrigin="0.5,0.5"> <Canvas.RenderTransform> <TransformGroup> <ScaleTransform/> <SkewTransform/> <RotateTransform Angle="-31.27"/> <TranslateTransform/> </TransformGroup> </Canvas.RenderTransform> <Rectangle Width="31.6667" Height="45.8414" Canvas.Left="0" Canvas.Top="0.116943" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142631,0.75344" EndPoint="1.01886,0.75344"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142631" CenterY="0.75344" AngleX="19.3128" AngleY="0"/> <RotateTransform CenterX="0.142631" CenterY="0.75344" Angle="-35.3436"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FF7B6802" Offset="0"/> <GradientStop Color="#FFF3D42C" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle Width="29.8441" Height="43.1517" Canvas.Left="0.569519" Canvas.Top="1.05249" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142632,0.753441" EndPoint="1.01886,0.753441"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142632" CenterY="0.753441" AngleX="19.3127" AngleY="0"/> <RotateTransform CenterX="0.142632" CenterY="0.753441" Angle="-35.3437"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FFCDCDCD" Offset="0.0833333"/> <GradientStop Color="#FFFFFFFF" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle Width="29.8441" Height="43.1517" Canvas.Left="0.455627" Canvas.Top="2.28036" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142631,0.75344" EndPoint="1.01886,0.75344"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142631" CenterY="0.75344" AngleX="19.3128" AngleY="0"/> <RotateTransform CenterX="0.142631" CenterY="0.75344" Angle="-35.3436"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FFCDCDCD" Offset="0.0833333"/> <GradientStop Color="#FFFFFFFF" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle Width="29.8441" Height="43.1517" Canvas.Left="0.455627" Canvas.Top="1.34485" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142631,0.75344" EndPoint="1.01886,0.75344"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142631" CenterY="0.75344" AngleX="19.3128" AngleY="0"/> <RotateTransform CenterX="0.142631" CenterY="0.75344" Angle="-35.3436"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FFCDCDCD" Offset="0.0833333"/> <GradientStop Color="#FFFFFFFF" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle 
Width="26.4269" Height="45.8414" Canvas.Left="0.227798" Canvas.Top="0" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142631,0.75344" EndPoint="1.01886,0.75344"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142631" CenterY="0.75344" AngleX="19.3127" AngleY="0"/> <RotateTransform CenterX="0.142631" CenterY="0.75344" Angle="-35.3436"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FF7B6802" Offset="0"/> <GradientStop Color="#FFF3D42C" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle Width="1.25301" Height="45.8414" Canvas.Left="1.70862" Canvas.Top="0.116943" Stretch="Fill" Fill="#FFEBFF07"/> </Canvas> <TextBlock Height="80" x:Name="Name" Width="448" TextWrapping="Wrap" VerticalAlignment="Center" FontSize="24" Text="Directory"/> </StackPanel> </UserControl> As you can see, this XAML contains many graphic elements. Those elements are used to design the folder icon. The original drawing has been designed in Expression Design and then exported as XAML. In Silverlight for Windows Embedded you can use vector images. This means that your images will look good even when scaled or rotated. In our DirectoryItem custom control we have a TextBlock named Name, that will be used to display….(suspense)…. the directory name (I’m too lazy to invent fancy names for controls, and using “boring” intuitive names will make code more readable, I hope!). Now that we have some XAML code, we may execute XAML2CPP to generate part of the aplication code for us. We should then add references to our XAML2CPP generated resource file and include in our code and add a reference to the XAML runtime library to our sources file (you can follow the instruction of the first tutorial step to do that), To generate the code used in this tutorial you need XAML2CPP ver 1.0.1.0, that is downloadable here: http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2010/03/08/xaml2cpp-1.0.1.0.aspx We can now create our usual simple Win32 application inside Platform Builder, using the same step described in the first chapter of this tutorial (http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2009/10/01/silverlight-for-embedded-tutorial.aspx). We can declare a class for our main page, deriving it from the template that XAML2CPP generated for us: class ListPage : public TListPage<ListPage> { ... } We will see the ListPage class code in a short time, but before we will see the code of our DirectoryItem user control. This object will be used to populate our list, one item for each directory. To declare a user control things are a bit more complicated (but also in this case XAML2CPP will write most of the “boilerplate” code for use. To interact with a user control you should declare an interface. An interface defines the functions of a user control that can be called inside the application code. Our custom control is currently quite simple and we just need some member functions to store and retrieve a full pathname inside our control. The control will display just the last part of the path inside the control. An interface is declared as a C++ class that has only abstract virtual members. It should also have an UUID associated with it. UUID means Universal Unique IDentifier and it’s a 128 bit number that will identify our interface without the need of specifying its fully qualified name. 
UUIDs are used to identify COM interfaces and, as we discovered in chapter one, Silverlight for Windows Embedded is based on COM or, at least, provides a COM-like Application Programming Interface (API). Here’s the declaration of the DirectoryItem interface: class __declspec(novtable,uuid("{D38C66E5-2725-4111-B422-D75B32AA8702}")) IDirectoryItem : public IXRCustomUserControl { public:   virtual HRESULT SetFullPath(BSTR fullpath) = 0; virtual HRESULT GetFullPath(BSTR* retval) = 0; }; The interface is derived from IXRCustomControl, this will allow us to add our object to a XAML tree. It declares the two functions needed to set and get the full path, but don’t implement them. Implementation will be done inside the control class. The interface only defines the functions of our control class that are accessible from the outside. It’s a sort of “contract” between our control and the applications that will use it. We must support what’s inside the contract and the application code should know nothing else about our own control. To reference our interface we will use the UUID, to make code more readable we can declare a #define in this way: #define IID_IDirectoryItem __uuidof(IDirectoryItem) Silverlight for Windows Embedded objects (like COM objects) use a reference counting mechanism to handle object destruction. Every time you store a pointer to an object you should call its AddRef function and every time you no longer need that pointer you should call Release. The object keeps an internal counter, incremented for each AddRef and decremented on Release. When the counter reaches 0, the object is destroyed. Managing reference counting in our code can be quite complicated and, since we are lazy (I am, at least!), we will use a great feature of Silverlight for Windows Embedded: smart pointers.A smart pointer can be connected to a Silverlight for Windows Embedded object and manages its reference counting. To declare a smart pointer we must use the XRPtr template: typedef XRPtr<IDirectoryItem> IDirectoryItemPtr; Now that we have defined our interface, it’s time to implement our user control class. XAML2CPP has implemented a class for us, and we have only to derive our class from it, defining the main class and interface of our new custom control: class DirectoryItem : public DirectoryItemUserControlRegister<DirectoryItem,IDirectoryItem> { ... } XAML2CPP has generated some code for us to support the user control, we don’t have to mind too much about that code, since it will be generated (or written by hand, if you like) always in the same way, for every user control. But knowing how does this works “under the hood” is still useful to understand the architecture of Silverlight for Windows Embedded. Our base class declaration is a bit more complex than the one we used for a simple page in the previous chapters: template <class A,class B> class DirectoryItemUserControlRegister : public XRCustomUserControlImpl<A,B>,public TDirectoryItem<A,XAML2CPPUserControl> { ... } This class derives from the XAML2CPP generated template class, like the ListPage class, but it uses XAML2CPPUserControl for the implementation of some features. This class shares the same ancestor of XAML2CPPPage (base class for “regular” XAML pages), XAML2CPPBase, implements binding of member variables and event handlers but, instead of loading and creating its own XAML tree, it attaches to an existing one. The XAML tree (and UI) of our custom control is created and loaded by the XRCustomUserControlImpl class. 
This class is part of the Silverlight for Windows Embedded framework and implements most of the functions needed to build-up a custom control in Silverlight (the guys that developed Silverlight for Windows Embedded seem to care about lazy programmers!). We have just to initialize it, providing our class (DirectoryItem) and interface (IDirectoryItem). Our user control class has also a static member: protected:   static HINSTANCE hInstance; This is used to store the HINSTANCE of the modules that contain our user control class. I don’t like this implementation, but I can’t find a better one, so if somebody has good ideas about how to handle the HINSTANCE object, I’ll be happy to hear suggestions! It also implements two static members required by XRCustomUserControlImpl. The first one is used to load the XAML UI of our custom control: static HRESULT GetXamlSource(XRXamlSource* pXamlSource) { pXamlSource->SetResource(hInstance,TEXT("XAML"),IDR_XAML_DirectoryItem); return S_OK; }   It initializes a XRXamlSource object, connecting it to the XAML resource that XAML2CPP has included in our resource script. The other method is used to register our custom control, allowing Silverlight for Windows Embedded to create it when it load some XAML or when an application creates a new control at runtime (more about this later): static HRESULT Register() { return XRCustomUserControlImpl<A,B>::Register(__uuidof(B), L"DirectoryItem", L"clr-namespace:DirectoryItemNamespace"); } To register our control we should provide its interface UUID, the name of the corresponding element in the XAML tree and its current namespace (namespaces compatible with Silverlight must use the “clr-namespace” prefix. We may also register additional properties for our objects, allowing them to be loaded and saved inside XAML. In this case we have no permanent properties and the Register method will just register our control. An additional static method is implemented to allow easy registration of our custom control inside our application WinMain function: static HRESULT RegisterUserControl(HINSTANCE hInstance) { DirectoryItemUserControlRegister::hInstance=hInstance; return DirectoryItemUserControlRegister<A,B>::Register(); } Now our control is registered and we will be able to create it using the Silverlight for Windows Embedded runtime functions. But we need to bind our members and event handlers to have them available like we are used to do for other XAML2CPP generated objects. To bind events and members we need to implement the On_Loaded function: virtual HRESULT OnLoaded(__in IXRDependencyObject* pRoot) { HRESULT retcode; IXRApplicationPtr app; if (FAILED(retcode=GetXRApplicationInstance(&app))) return retcode; return ((A*)this)->Init(pRoot,hInstance,app); } This function will call the XAML2CPPUserControl::Init member that will connect the “root” member with the XAML sub tree that has been created for our control and then calls BindObjects and BindEvents to bind members and events to our code. 
Now we can go back to our application code (the code that you’ll have to actually write) to see the contents of our DirectoryItem class: class DirectoryItem : public DirectoryItemUserControlRegister<DirectoryItem,IDirectoryItem> { protected:   WCHAR fullpath[_MAX_PATH+1];   public:   DirectoryItem() { *fullpath=0; }   virtual HRESULT SetFullPath(BSTR fullpath) { wcscpy_s(this->fullpath,fullpath);   WCHAR* p=fullpath;   for(WCHAR*q=wcsstr(p,L"\\");q;p=q+1,q=wcsstr(p,L"\\")) ;   Name->SetText(p); return S_OK; }   virtual HRESULT GetFullPath(BSTR* retval) { *retval=SysAllocString(fullpath); return S_OK; } }; It’s pretty easy and contains a fullpath member (used to store that path of the directory connected with the user control) and the implementation of the two interface members that can be used to set and retrieve the path. The SetFullPath member parses the full path and displays just the last branch directory name inside the “Name” TextBlock object. As you can see, implementing a user control in Silverlight for Windows Embedded is not too complex and using XAML also for the UI of the control allows us to re-use the same mechanisms that we learnt and used in the previous steps of our tutorial. Now let’s see how the main page is managed by the ListPage class. class ListPage : public TListPage<ListPage> { protected:   // current path TCHAR curpath[_MAX_PATH+1]; It has a member named “curpath” that is used to store the current directory. It’s initialized inside the constructor: ListPage() { *curpath=0; } And it’s value is displayed inside the “CurrentDir” TextBlock inside the initialization function: virtual HRESULT Init(HINSTANCE hInstance,IXRApplication* app) { HRESULT retcode;   if (FAILED(retcode=TListPage<ListPage>::Init(hInstance,app))) return retcode;   CurrentDir->SetText(L"\\"); return S_OK; } The FillFileList function is used to enumerate subdirectories of the current dir and add entries for each one inside the list box that fills most of the client area of our main page: HRESULT FillFileList() { HRESULT retcode; IXRItemCollectionPtr items; IXRApplicationPtr app;   if (FAILED(retcode=GetXRApplicationInstance(&app))) return retcode; // retrieves the items contained in the listbox if (FAILED(retcode=FileList->GetItems(&items))) return retcode;   // clears the list if (FAILED(retcode=items->Clear())) return retcode;   // enumerates files and directory in the current path WCHAR filemask[_MAX_PATH+1];   wcscpy_s(filemask,curpath); wcscat_s(filemask,L"\\*.*");   WIN32_FIND_DATA finddata; HANDLE findhandle;   findhandle=FindFirstFile(filemask,&finddata);   // the directory is empty? 
if (findhandle==INVALID_HANDLE_VALUE) return S_OK;   do { if (finddata.dwFileAttributes&=FILE_ATTRIBUTE_DIRECTORY) { IXRListBoxItemPtr listboxitem;   // add a new item to the listbox if (FAILED(retcode=app->CreateObject(IID_IXRListBoxItem,&listboxitem))) { FindClose(findhandle); return retcode; }   if (FAILED(retcode=items->Add(listboxitem,NULL))) { FindClose(findhandle); return retcode; }   IDirectoryItemPtr directoryitem;   if (FAILED(retcode=app->CreateObject(IID_IDirectoryItem,&directoryitem))) { FindClose(findhandle); return retcode; }   WCHAR fullpath[_MAX_PATH+1];   wcscpy_s(fullpath,curpath); wcscat_s(fullpath,L"\\"); wcscat_s(fullpath,finddata.cFileName);   if (FAILED(retcode=directoryitem->SetFullPath(fullpath))) { FindClose(findhandle); return retcode; }   XAML2CPPXRValue value((IXRDependencyObject*)directoryitem);   if (FAILED(retcode=listboxitem->SetContent(&value))) { FindClose(findhandle); return retcode; } } } while (FindNextFile(findhandle,&finddata));   FindClose(findhandle); return S_OK; } This functions retrieve a pointer to the collection of the items contained in the directory listbox. The IXRItemCollection interface is used by listboxes and comboboxes and allow you to clear the list (using Clear(), as our function does at the beginning) and change its contents by adding and removing elements. This function uses the FindFirstFile/FindNextFile functions to enumerate all the objects inside our current directory and for each subdirectory creates a IXRListBoxItem object. You can insert any kind of control inside a list box, you don’t need a IXRListBoxItem, but using it will allow you to handle the selected state of an item, highlighting it inside the list. The function creates a list box item using the CreateObject function of XRApplication. The same function is then used to create an instance of our custom control. The function returns a pointer to the control IDirectoryItem interface and we can use it to store the directory full path inside the object and add it as content of the IXRListBox item object, adding it to the listbox contents. The listbox generates an event (SelectionChanged) each time the user clicks on one of the items contained in the listbox. We implement an event handler for that event and use it to change our current directory and repopulate the listbox. The current directory full path will be displayed in the TextBlock: HRESULT Filelist_SelectionChanged(IXRDependencyObject* source,XRSelectionChangedEventArgs* args) { HRESULT retcode;   IXRListBoxItemPtr listboxitem;   if (!args->pAddedItem) return S_OK;   if (FAILED(retcode=args->pAddedItem->QueryInterface(IID_IXRListBoxItem,(void**)&listboxitem))) return retcode;   XRValue content; if (FAILED(retcode=listboxitem->GetContent(&content))) return retcode;   if (content.vType!=VTYPE_OBJECT) return E_FAIL;   IDirectoryItemPtr directoryitem;   if (FAILED(retcode=content.pObjectVal->QueryInterface(IID_IDirectoryItem,(void**)&directoryitem))) return retcode;   content.pObjectVal->Release(); content.pObjectVal=NULL;   BSTR fullpath=NULL;   if (FAILED(retcode=directoryitem->GetFullPath(&fullpath))) return retcode;   CurrentDir->SetText(fullpath);   wcscpy_s(curpath,fullpath); FillFileList(); SysFreeString(fullpath);     return S_OK; } }; The function uses the pAddedItem member of the XRSelectionChangedEventArgs object to retrieve the currently selected item, converts it to a IXRListBoxItem interface using QueryInterface, and then retrives its contents (IDirectoryItem object). 
Using the GetFullPath method we can get the full path of the selected directory and assign it to the curpath member. A call to FillFileList will then update the listbox contents, displaying the list of subdirectories of the selected folder. To build our sample we just need to add code to our WinMain function: int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow) { if (!XamlRuntimeInitialize()) return -1;   HRESULT retcode;   IXRApplicationPtr app; if (FAILED(retcode=GetXRApplicationInstance(&app))) return -1;   if (FAILED(retcode=DirectoryItem::RegisterUserControl(hInstance))) return retcode;   ListPage page;   if (FAILED(page.Init(hInstance,app))) return -1;   page.FillFileList();   UINT exitcode;   if (FAILED(page.GetVisualHost()->StartDialog(&exitcode))) return -1;   return 0; } This code is very similar to the WinMain functions of our previous samples. The main differences are that we register our custom control (you should do that as soon as you have initialized the XAML runtime) and call FillFileList after the initialization of our ListPage object to load the contents of the root folder of our device into the listbox. As usual you can download the full sample source code from here: http://cid-9b7b0aefe3514dc5.skydrive.live.com/self.aspx/.Public/ListBoxTest.zip

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionState and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers and even all the way back to IE8, so you can use them safely in your Web applications. SessionState and LocalStorage are easy The APIs that make up sessionState and localStorage are very simple. Both object feature the same API interface which  is a simple, string based key value store that has getItem, setItem, removeitem, clear and  key methods. The objects are also pseudo array objects and so can be iterated like an array with  a length property and you have array indexers to set and get values with. Basic usage  for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):// set var lastAccess = new Date().getTime(); if (sessionStorage) sessionStorage.setItem("myapp_time", lastAccess.toString()); // retrieve in another page or on a refresh var time = null; if (sessionStorage) time = sessionStorage.getItem("myapp_time"); if (time) time = new Date(time * 1); else time = new Date(); sessionState stores data that is browser session specific and that has a liftetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the lifetime of the data is permanently stored in the browsers storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store especially since other applications might be running on the same domain and also use the storage mechanisms. That said 2.5mb worth of character data is quite a bit and would go a long way. The easiest way to get a feel for how sessionState and localStorage work is to look at a simple example. You can go check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview which looks like this: Plunker is an online HTML/JavaScript editor that lets you write and run Javascript code and similar to JsFiddle, but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact. 
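Because of the size limits mentioned above, writes to sessionStorage or localStorage can fail once the quota is exhausted. The wrapper below is only a defensive sketch (browsers report the quota error in slightly different ways, so the catch block is deliberately generic, and the key name is made up):

// Attempt to store a value and report failure instead of throwing.
function trySetItem(storage, key, value) {
    if (!storage) return false;      // storage unavailable (old browser or disabled)
    try {
        storage.setItem(key, value);
        return true;
    } catch (e) {
        // most browsers throw a quota-related exception when the limit is reached
        console.log("storage write failed for '" + key + "': " + e);
        return false;
    }
}

// usage: trySetItem(sessionStorage, "myapp_time", new Date().getTime().toString());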
In the example, the code that deals with sessionStorage (the localStorage version is not shown here) is isolated into a couple of wrapper methods to simplify the code: function getSessionCount() { var count = 0; if (sessionStorage) { count = sessionStorage.getItem("ss_count"); count = !count ? 0 : count * 1; } $("#txtSession").val(count); return count; } function setSessionCount(count) { if (sessionStorage) sessionStorage.setItem("ss_count", count.toString()); } These two functions essentially load and store a session counter value. The two key methods used here are: sessionStorage.getItem(key); sessionStorage.setItem(key,stringVal); Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough store JSON data in a variable so it's quite possible to pass complex objects and store them into a single sessionStorage value: var user = { name: "Rick", id: "ricks", level: 8 }; sessionStorage.setItem("app_user", JSON.stringify(user)); To retrieve it: var user = sessionStorage.getItem("app_user"); if (user) user = JSON.parse(user); Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resources tab. You can also use this tool to refresh or remove entries from storage. What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can either store some state information and data on the client (i.e. store a JSON object and carry it forth between server rendered HTML requests) or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example. Why do I need Client-side Page Caching for Server Rendered HTML? I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to drill down into more specific data either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This is a pretty common scenario with server rendered HTML content where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverflow, to see what I mean here. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal.
On StackOverFlow that sort of works because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in there's no way back to that position. Essentially anytime you're actively browsing the items in the list, that's when state becomes important and if it's not handled the user experience can be really disrupting. Content Caching If you're building client centric SPA style applications this is a fairly easy to solve problem - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this. A real World Use Case Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it: frames based - and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with Frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/ However when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was for mobile displays which work horribly with frames. So there's a somewhat mobile friendly interface to the interface, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile  (or browse the base url with your browser width under 800px)   Here's what the mobile, non-frames view looks like:   As you can see this means that the list of classifieds posts now is a list and there's a separate page for drilling down into the item. And of course… originally we ran into that usability issue I mentioned earlier where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis the full list is never fully loaded, but rather there's a "Load Additional Listings"  entry at the button. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items are no longer there because the list was now rendering  as if it was the first page hit. The additional listings, and any filters, the selection of an item all were lost. Major Suckage! 
Using Client SessionStorage to cache Server Rendered Content To work around this problem I decided to cache the rendered page content from the list in SessionStorage. Anytime the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in Session Storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client side code retrieves the data from the sessionState cache and simply inserts it into the page. It sounds pretty simple, and the overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation. Let's start with the server side here because that'll give a quick idea of the doc structure. As I mentioned the server renders data from an ASP.NET MVC view. On the list page when returning to the list page from the display page (or a host of other pages) looks like this: https://classifieds.gorge.net/list?back=True The query string value is a flag, that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:@model MessageListViewModel @{ ViewBag.Title = "Classified Listing"; bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]); } <form method="post" action="@Url.Action("list")"> <div id="SizingContainer"> @if (!isBack) { @Html.Partial("List_CommandBar_Partial", Model) <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;"> @Html.Partial("List_Items_Partial", Model) @if (Model.RequireLoadEntry) { <div class="postitem loadpostitems" style="padding: 15px;"> <div id="LoadProgress" class="smallprogressright"></div> <div class="control-progress"> Load additional listings... </div> </div> } </div> } </div> </form> As you can see the query string triggers a conditional block that if set is simply not rendered. The content inside of #SizingContainer basically holds  the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations the fact that the menu or filter options might be dynamically updated might make you only cache the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result as both the list along with any filter conditions in the form inputs are restored. Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionState into a separate function because they are almost always used in several places.page.saveData = function(id) { if (!sessionStorage) return; var data = { id: id, scroll: $("#PostItemContainer").scrollTop(), html: $("#SizingContainer").html() }; sessionStorage.setItem("list_html",JSON.stringify(data)); }; page.restoreData = function() { if (!sessionStorage) return; var data = sessionStorage.getItem("list_html"); if (!data) return null; return JSON.parse(data); }; The data that is saved is an object which contains an ID which is the selected element when the user clicks and a scroll position. These two values are used to reset the scroll position when the data is used from the cache. 
Finally the html from the #SizingContainer element is stored, which makes for the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization that's Ok. Since I'm using SessionStorage the storage space has no immediate limits. Next is the core logic to handle saving and restoring the page state. At first though this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:$( function() { … var isBack = getUrlEncodedKey("back", location.href); if (isBack) { // remove the back key from URL setUrlEncodedKey("back", "", location.href); var data = page.restoreData(); // restore from sessionState if (!data) { // no data - force redisplay of the server side default list window.location = "list"; return; } $("#SizingContainer").html(data.html); var el = $(".postitem[data-id=" + data.id + "]"); $(".postitem").removeClass("highlight"); el.addClass("highlight"); $("#PostItemContainer").scrollTop(data.scroll); setTimeout(function() { el.removeClass("highlight"); }, 2500); } else if (window.noFrames) page.saveData(null); // save when page loads $("#SizingContainer").on("click", ".postitem", function() { var id = $(this).attr("data-id"); if (!id) return true; if (window.noFrames) page.saveData(id); var contentFrame = window.parent.frames["Content"]; if (contentFrame) contentFrame.location.href = "show/" + id; else window.location.href = "show/" + id; return false; }); … The code starts out by checking for the back query string flag which triggers restoring from the client cache. If cached the cached data structure is read from sessionStorage. It's important here to check if data was returned. If the user had back=true on the querystring but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued the page would render no data - so make sure to always check the cache retrieval result. Always! If there is data the it's loaded and the data.html data is restored back into the document by simply injecting the HTML back into the document's #SizingContainer element:$("#SizingContainer").html(data.html); It's that simple and it's quite quick even with a fully loaded list of additional items and on a phone. The actual HTML data is stored to the cache on every page load initially and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates with additional information for the selected element. For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry as well as storing that Id value in the saved cached data. The id is used to reset the selection by searching for the data-id value in the restored elements. 
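The getUrlEncodedKey and setUrlEncodedKey helpers used in the snippet above come from the author's own script library and are not shown in this excerpt. Purely as an illustration, a minimal query string reader along those lines might look like this (the name matches the call above, but the implementation is an assumption):

// Return the value of a query string parameter from a URL, or null if it is missing.
function getUrlEncodedKey(key, url) {
    var idx = url.indexOf("?");
    if (idx < 0) return null;                        // no query string at all
    var query = url.substring(idx + 1);
    var match = new RegExp("(^|&)" + key + "=([^&]*)").exec(query);
    return match ? decodeURIComponent(match[2]) : null;
}

// setUrlEncodedKey would rewrite the query string in a similar fashion, removing the
// parameter when an empty value is passed.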
The overall process of this save/restore process is pretty straight forward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or anybody who uses the non-frames view). Some things to watch out for As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware what type of content you are caching. The code above is all that's specific to cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to: Event Handling Logic Timing of manipulating DOM events Inline Script Code Bookmarking to the Cache Url when no cache exists Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign or it may not run at the time you think it would normally in the DOM rendering cycle. JavaScript Event Hookups The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:$("#btnSearch").click( function() {…}); that works fine when the page loads with server rendered HTML, but that code breaks when you now load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:$("#SelectionContainer").on("click","#btnSearch", function() {…}); which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document bindings still work. For any cached content use deferred events. Timing of manipulating DOM Elements Along the same lines make sure that your DOM manipulation code follows the code that loads the cached content into the page so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line. Inline Script Code Here's another small problem I ran into: I use a DateTime Picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it that uses JavaScript logic. Because MVC's limited 'object model' the only way to embed widget content into the page is through inline script. This code broken when I inserted the cached HTML into the page because the script code was not available when the component actually got injected into the page. As the last bullet - it's a matter of timing. There's no good work around for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant to be primarily to serve mobile devices which actually support date input through the browser (unlike desktop browsers of which only WebKit seems to support it). Bookmarking Cached Urls When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server. 
In the Classifieds app I didn't render server side content so if the user comes to the page with back=True and there is no cached content I have to a have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and if not have a safe URL to go back to - in this case to the plain uncached list URL which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything. Don't use <body> to replace Content Since we're practically replacing all the HTML in the page it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads. Specifically script tags and elements and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content and wrap just around the actual content you want to replace. In the app above the #SizingContainer is that container. Other Approaches The approach I've used for this application is kind of specific to the existing server rendered application we're running and so it's just one approach you can take with caching. However for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind: Using Partial HTML Rendering via AJAXInstead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML and embedding it into the page from a Partial View. This effectively makes the initial rendering and the cached rendering logic identical and removes the server having to decide whether this request needs to be rendered or not (ie. not checking for a back=true switch). All the logic related to caching is made on the client in this case. Using JSON Data and Client RenderingThe hardcore client option is to do the whole UI SPA style and pull data from the server and then use client rendering or databinding to pull the data down and render using templates or client side databinding with knockout/angular et al. As with the Partial Rendering approach the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch other than the initial check for the cache request. Of course if the app is a  full on SPA app, then caching may not be required even - the list could just stay in memory and be hidden and reactivated. I'm sure there are a number of other ways this can be handled as well especially using  AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. So there are always tradeoffs and it's important to look at all angles before deciding on any sort of caching solution in general. State of the Session SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features of content and data. 
In this post I’ve shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here… Resources Overview of localStorage (also applies to sessionStorage) Web Storage Compatibility Modernizr Test Suite © Rick Strahl, West Wind Technologies, 2005-2013. Posted in JavaScript  HTML5  ASP.NET  MVC
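Circling back to the "Partial HTML Rendering via AJAX" option mentioned under Other Approaches, a minimal sketch of that idea could look like the following. The /list/partial endpoint and the single cached HTML string are hypothetical simplifications and are not part of the classifieds application described above:

// Load the list from sessionStorage if cached, otherwise from a partial-rendering endpoint.
function loadList() {
    var cached = sessionStorage ? sessionStorage.getItem("list_html") : null;
    if (cached) {
        $("#SizingContainer").html(cached);      // restore previously rendered HTML
        return;
    }
    $.get("/list/partial", function(html) {      // hypothetical URL returning just the list markup
        $("#SizingContainer").html(html);
        if (sessionStorage) sessionStorage.setItem("list_html", html);
    });
}

With this shape the initial render and the cache restore go through exactly the same code path, which is the main attraction of that approach.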

    Read the article

  • ls -l freezes terminal locally and remotely

    - by Jakobud
    I've been reading other SF threads regarding ls not returning results or freezing and stalling terminal sessions and it appears they usually the fault of network problems. My problem however, occurs both over remote SSH sessions but also if I am physically at the server itself... I just installed CentOS 5.4 on one of our servers. I'm setting up some rdiff-backup scripts and when I downloaded librsync and untared it, thats when I started seeing some weird behavior with ls -l. wget http://sourceforge.net/projects/librsync/files/librsync/0.9.7/librsync-0.9.7.tar.gz/download /tmp cd /tmp tar -xzf librsync-0.9.7.tar.gz Simple enough. To view the files in this directory I did this: ls results: librsync-0.9.7 librsync-0.9.7.tar.gz Now, if I ls -l, my terminal freezes. I have to re-ssh in to keep going. After reading SF threads, I thought it was network related. So I was extremely surprised to go sit down at the server itself and see the exact same thing happen... So its obviously not a network issues. Even if I ls /tmp/librsync-0.9.7, my terminal freezes just the same... Next I did an strace and got this (warning: wall of text coming....): strace ls -l /tmp execve("/bin/ls", ["ls", "-l", "/tmp"], [/* 21 vars */]) = 0 brk(0) = 0x1c521000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8582cc0000 uname({sys="Linux", node="massive.answeron.com", ...}) = 0 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=71746, ...}) = 0 mmap(NULL, 71746, PROT_READ, MAP_PRIVATE, 3, 0) = 0x2b8582cc1000 close(3) = 0 open("/lib64/librt.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0 \"\200\2730\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=53448, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8582cd3000 mmap(0x30bb800000, 2132936, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x30bb800000 mprotect(0x30bb807000, 2097152, PROT_NONE) = 0 mmap(0x30bba07000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x30bba07000 close(3) = 0 open("/lib64/libacl.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\31@\2740\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=28008, ...}) = 0 mmap(0x30bc400000, 2120992, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x30bc400000 mprotect(0x30bc406000, 2093056, PROT_NONE) = 0 mmap(0x30bc605000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5000) = 0x30bc605000 close(3) = 0 open("/lib64/libselinux.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0`E\300\2730\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=95464, ...}) = 0 mmap(0x30bbc00000, 2192784, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x30bbc00000 mprotect(0x30bbc15000, 2097152, PROT_NONE) = 0 mmap(0x30bbe15000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x15000) = 0x30bbe15000 mmap(0x30bbe17000, 1424, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x30bbe17000 close(3) = 0 open("/lib64/libc.so.6", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\220\332\201\2720\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=1717800, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8582cd4000 mmap(0x30ba800000, 3498328, PROT_READ|PROT_EXEC, 
MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x30ba800000 mprotect(0x30ba94d000, 2097152, PROT_NONE) = 0 mmap(0x30bab4d000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x14d000) = 0x30bab4d000 mmap(0x30bab52000, 16728, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x30bab52000 close(3) = 0 open("/lib64/libpthread.so.0", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\220W\0\2730\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=145824, ...}) = 0 mmap(0x30bb000000, 2204528, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x30bb000000 mprotect(0x30bb016000, 2093056, PROT_NONE) = 0 mmap(0x30bb215000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x15000) = 0x30bb215000 mmap(0x30bb217000, 13168, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x30bb217000 close(3) = 0 open("/lib64/libattr.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320\17\300\2750\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=17888, ...}) = 0 mmap(0x30bdc00000, 2110728, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x30bdc00000 mprotect(0x30bdc04000, 2093056, PROT_NONE) = 0 mmap(0x30bde03000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x30bde03000 close(3) = 0 open("/lib64/libdl.so.2", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\20\16\300\2720\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=23360, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8582cd5000 mmap(0x30bac00000, 2109696, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x30bac00000 mprotect(0x30bac02000, 2097152, PROT_NONE) = 0 mmap(0x30bae02000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x30bae02000 close(3) = 0 open("/lib64/libsepol.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0=\0\2740\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=247496, ...}) = 0 mmap(0x30bc000000, 2383136, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x30bc000000 mprotect(0x30bc03b000, 2097152, PROT_NONE) = 0 mmap(0x30bc23b000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3b000) = 0x30bc23b000 mmap(0x30bc23c000, 40224, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x30bc23c000 close(3) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8582cd6000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8582cd7000 arch_prctl(ARCH_SET_FS, 0x2b8582cd6c50) = 0 mprotect(0x30bba07000, 4096, PROT_READ) = 0 mprotect(0x30bab4d000, 16384, PROT_READ) = 0 mprotect(0x30bb215000, 4096, PROT_READ) = 0 mprotect(0x30ba61b000, 4096, PROT_READ) = 0 mprotect(0x30bae02000, 4096, PROT_READ) = 0 munmap(0x2b8582cc1000, 71746) = 0 set_tid_address(0x2b8582cd6ce0) = 24102 set_robust_list(0x2b8582cd6cf0, 0x18) = 0 futex(0x7fff72d02d6c, FUTEX_WAKE_PRIVATE, 1) = 0 rt_sigaction(SIGRTMIN, {0x30bb005370, [], SA_RESTORER|SA_SIGINFO, 0x30bb00e7c0}, NULL, 8) = 0 rt_sigaction(SIGRT_1, {0x30bb0052b0, [], SA_RESTORER|SA_RESTART|SA_SIGINFO, 0x30bb00e7c0}, NULL, 8) = 0 rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0 getrlimit(RLIMIT_STACK, {rlim_cur=10240*1024, rlim_max=RLIM_INFINITY}) = 0 access("/etc/selinux/", F_OK) = 0 brk(0) = 0x1c521000 brk(0x1c542000) = 0x1c542000 open("/etc/selinux/config", O_RDONLY) = 3 fstat(3, 
{st_mode=S_IFREG|0644, st_size=448, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8582cc1000 read(3, "# This file controls the state o"..., 4096) = 448 read(3, "", 4096) = 0 close(3) = 0 munmap(0x2b8582cc1000, 4096) = 0 open("/proc/mounts", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8582cc1000 read(3, "rootfs / rootfs rw 0 0\n/dev/root"..., 4096) = 577 close(3) = 0 munmap(0x2b8582cc1000, 4096) = 0 open("/selinux/mls", O_RDONLY) = 3 read(3, "1", 19) = 1 close(3) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 connect(3, {sa_family=AF_FILE, path="/var/run/setrans/.setrans-unix"...}, 110) = 0 sendmsg(3, {msg_name(0)=NULL, msg_iov(5)=[{"\1\0\0\0", 4}, {"\1\0\0\0", 4}, {"\1\0\0\0", 4}, {"\0", 1}, {"\0", 1}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 14 readv(3, [{"\1\0\0\0", 4}, {"\1\0\0\0", 4}, {"\0\0\0\0", 4}], 3) = 12 readv(3, [{"\0", 1}], 1) = 1 close(3) = 0 open("/usr/lib/locale/locale-archive", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=56413824, ...}) = 0 mmap(NULL, 56413824, PROT_READ, MAP_PRIVATE, 3, 0) = 0x2b8582cd8000 close(3) = 0 ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(1, TIOCGWINSZ, {ws_row=65, ws_col=137, ws_xpixel=0, ws_ypixel=0}) = 0 open("/usr/share/locale/locale.alias", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=2528, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(3, "# Locale name alias data base.\n#"..., 4096) = 2528 read(3, "", 4096) = 0 close(3) = 0 munmap(0x2b85862a5000, 4096) = 0 open("/usr/share/locale/en_US.UTF-8/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en_US.utf8/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en_US/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en.UTF-8/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en.utf8/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) lstat("/tmp", {st_mode=S_IFDIR|S_ISVTX|0777, st_size=4096, ...}) = 0 getxattr("/tmp", "system.posix_acl_access", 0x0, 0) = -1 ENODATA (No data available) getxattr("/tmp", "system.posix_acl_default", 0x0, 0) = -1 ENODATA (No data available) socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 open("/etc/nsswitch.conf", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=1711, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(3, "#\n# /etc/nsswitch.conf\n#\n# An ex"..., 4096) = 1711 read(3, "", 4096) = 0 close(3) = 0 munmap(0x2b85862a5000, 4096) = 0 open("/etc/ld.so.cache", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=71746, ...}) = 0 mmap(NULL, 71746, PROT_READ, MAP_PRIVATE, 3, 0) = 0x2b85862a5000 close(3) = 0 open("/lib64/libnss_files.so.2", O_RDONLY) = 3 read(3, 
"\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\340\37\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=53880, ...}) = 0 mmap(NULL, 2139432, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x2b85862b7000 mprotect(0x2b85862c1000, 2093056, PROT_NONE) = 0 mmap(0x2b85864c0000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x9000) = 0x2b85864c0000 close(3) = 0 mprotect(0x2b85864c0000, 4096, PROT_READ) = 0 munmap(0x2b85862a5000, 71746) = 0 open("/etc/passwd", O_RDONLY) = 3 fcntl(3, F_GETFD) = 0 fcntl(3, F_SETFD, FD_CLOEXEC) = 0 fstat(3, {st_mode=S_IFREG|0644, st_size=1823, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(3, "root:x:0:0:root:/root:/bin/bash\n"..., 4096) = 1823 close(3) = 0 munmap(0x2b85862a5000, 4096) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 open("/etc/group", O_RDONLY) = 3 fcntl(3, F_GETFD) = 0 fcntl(3, F_SETFD, FD_CLOEXEC) = 0 fstat(3, {st_mode=S_IFREG|0644, st_size=743, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(3, "root:x:0:root\nbin:x:1:root,bin,d"..., 4096) = 743 close(3) = 0 munmap(0x2b85862a5000, 4096) = 0 open("/tmp", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = 3 fcntl(3, F_SETFD, FD_CLOEXEC) = 0 getdents(3, /* 8 entries */, 32768) = 264 lstat("/tmp/librsync-0.9.7.tar.gz", {st_mode=S_IFREG|0644, st_size=453802, ...}) = 0 getxattr("/tmp/librsync-0.9.7.tar.gz", "system.posix_acl_access", 0x0, 0) = -1 ENODATA (No data available) getxattr("/tmp/librsync-0.9.7.tar.gz", "system.posix_acl_default", 0x0, 0) = -1 ENODATA (No data available) lstat("/tmp/librsync-0.9.7", {st_mode=S_IFDIR|0777, st_size=4096, ...}) = 0 getxattr("/tmp/librsync-0.9.7", "system.posix_acl_access", 0x0, 0) = -1 ENODATA (No data available) getxattr("/tmp/librsync-0.9.7", "system.posix_acl_default", 0x0, 0) = -1 ENODATA (No data available) open("/etc/passwd", O_RDONLY) = 4 fcntl(4, F_GETFD) = 0 fcntl(4, F_SETFD, FD_CLOEXEC) = 0 fstat(4, {st_mode=S_IFREG|0644, st_size=1823, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(4, "root:x:0:0:root:/root:/bin/bash\n"..., 4096) = 1823 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2b85862a5000, 4096) = 0 open("/etc/ld.so.cache", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=71746, ...}) = 0 mmap(NULL, 71746, PROT_READ, MAP_PRIVATE, 4, 0) = 0x2b85862a5000 close(4) = 0 open("/lib64/libnss_ldap.so.2", O_RDONLY) = 4 read(4, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\300r\4\0\0\0\0\0"..., 832) = 832 fstat(4, {st_mode=S_IFREG|0755, st_size=3169960, ...}) = 0 mmap(NULL, 5329912, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 4, 0) = 0x2b85864c2000 mprotect(0x2b858679e000, 2093056, PROT_NONE) = 0 mmap(0x2b858699d000, 176128, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 4, 0x2db000) = 0x2b858699d000 mmap(0x2b85869c8000, 62456, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x2b85869c8000 close(4) = 0 open("/lib64/libcom_err.so.2", O_RDONLY) = 4 read(4, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320\n\300\2770\0\0\0"..., 832) = 832 fstat(4, 
{st_mode=S_IFREG|0755, st_size=10000, ...}) = 0 mmap(0x30bfc00000, 2103048, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 4, 0) = 0x30bfc00000 mprotect(0x30bfc02000, 2093056, PROT_NONE) = 0 mmap(0x30bfe01000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 4, 0x1000) = 0x30bfe01000 close(4) = 0 open("/lib64/libkeyutils.so.1", O_RDONLY) = 4 read(4, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0`\n@\2760\0\0\0"..., 832) = 832 fstat(4, {st_mode=S_IFREG|0755, st_size=9472, ...}) = 0 mmap(0x30be400000, 2102416, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 4, 0) = 0x30be400000 mprotect(0x30be402000, 2093056, PROT_NONE) = 0 mmap(0x30be601000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 4, 0x1000) = 0x30be601000 close(4) = 0 open("/lib64/libresolv.so.2", O_RDONLY) = 4 read(4, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\2402\0\2760\0\0\0"..., 832) = 832 fstat(4, {st_mode=S_IFREG|0755, st_size=92736, ...}) = 0 mmap(0x30be000000, 2181864, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 4, 0) = 0x30be000000 mprotect(0x30be011000, 2097152, PROT_NONE) = 0 mmap(0x30be211000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 4, 0x11000) = 0x30be211000 mmap(0x30be213000, 6888, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x30be213000 close(4) = 0 mprotect(0x30be211000, 4096, PROT_READ) = 0 munmap(0x2b85862a5000, 71746) = 0 rt_sigaction(SIGPIPE, {0x1, [], SA_RESTORER, 0x30ba8302d0}, {SIG_DFL, [], 0}, 8) = 0 geteuid() = 0 futex(0x2b85869c7708, FUTEX_WAKE_PRIVATE, 2147483647) = 0 open("/etc/ldap.conf", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=9119, ...}) = 0 fstat(4, {st_mode=S_IFREG|0644, st_size=9119, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(4, "# @(#)$Id: ldap.conf,v 1.38 2006"..., 4096) = 4096 read(4, "Use the OpenLDAP password change"..., 4096) = 4096 read(4, " OpenLDAP 2.0 and earlier is \"no"..., 4096) = 927 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2b85862a5000, 4096) = 0 uname({sys="Linux", node="massive.answeron.com", ...}) = 0 open("/etc/resolv.conf", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=107, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(4, "; generated by /sbin/dhclient-sc"..., 4096) = 107 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2b85862a5000, 4096) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 4 fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(4, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(4) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 4 fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(4, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(4) = 0 open("/etc/host.conf", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=17, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(4, "order hosts,bind\n", 4096) = 17 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2b85862a5000, 4096) = 0 futex(0x30bab54d44, FUTEX_WAKE_PRIVATE, 2147483647) = 0 open("/etc/hosts", O_RDONLY) = 4 fcntl(4, F_GETFD) = 0 fcntl(4, F_SETFD, FD_CLOEXEC) = 0 fstat(4, {st_mode=S_IFREG|0644, st_size=187, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(4, "# Do not remove the following li"..., 4096) = 187 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2b85862a5000, 4096) = 0 
open("/etc/ld.so.cache", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=71746, ...}) = 0 mmap(NULL, 71746, PROT_READ, MAP_PRIVATE, 4, 0) = 0x2b85862a5000 close(4) = 0 open("/lib64/libnss_dns.so.2", O_RDONLY) = 4 read(4, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\340\17\0\0\0\0\0\0"..., 832) = 832 fstat(4, {st_mode=S_IFREG|0755, st_size=23736, ...}) = 0 mmap(NULL, 2113792, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 4, 0) = 0x2b85869d8000 mprotect(0x2b85869dc000, 2093056, PROT_NONE) = 0 mmap(0x2b8586bdb000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 4, 0x3000) = 0x2b8586bdb000 close(4) = 0 mprotect(0x2b8586bdb000, 4096, PROT_READ) = 0 munmap(0x2b85862a5000, 71746) = 0 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 4 connect(4, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.10.20")}, 28) = 0 fcntl(4, F_GETFL) = 0x2 (flags O_RDWR) fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0 gettimeofday({1276265920, 823870}, NULL) = 0 poll([{fd=4, events=POLLOUT}], 1, 0) = 1 ([{fd=4, revents=POLLOUT}]) sendto(4, "C\v\1\0\0\1\0\0\0\0\0\0\7massive\10answeron\3co"..., 38, MSG_NOSIGNAL, NULL, 0) = 38 poll([{fd=4, events=POLLIN}], 1, 5000) = 1 ([{fd=4, revents=POLLIN}]) ioctl(4, FIONREAD, [122]) = 0 recvfrom(4, "C\v\205\200\0\1\0\1\0\2\0\2\7massive\10answeron\3co"..., 1024, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.10.20")}, [16]) = 122 close(4) = 0 open("/etc/openldap/ldap.conf", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=335, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(4, "#\n# LDAP Defaults\n#\n\n# See ldap."..., 4096) = 335 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2b85862a5000, 4096) = 0 getuid() = 0 geteuid() = 0 getgid() = 0 getegid() = 0 open("/root/ldaprc", O_RDONLY) = -1 ENOENT (No such file or directory) open("/root/.ldaprc", O_RDONLY) = -1 ENOENT (No such file or directory) stat("/etc/ldap.conf", {st_mode=S_IFREG|0644, st_size=9119, ...}) = 0 geteuid() = 0 brk(0x1c566000) = 0x1c566000 open("/etc/hosts", O_RDONLY) = 4 fcntl(4, F_GETFD) = 0 fcntl(4, F_SETFD, FD_CLOEXEC) = 0 fstat(4, {st_mode=S_IFREG|0644, st_size=187, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(4, "# Do not remove the following li"..., 4096) = 187 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2b85862a5000, 4096) = 0 open("/etc/hosts", O_RDONLY) = 4 fcntl(4, F_GETFD) = 0 fcntl(4, F_SETFD, FD_CLOEXEC) = 0 fstat(4, {st_mode=S_IFREG|0644, st_size=187, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(4, "# Do not remove the following li"..., 4096) = 187 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2b85862a5000, 4096) = 0 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 4 connect(4, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.10.20")}, 28) = 0 fcntl(4, F_GETFL) = 0x2 (flags O_RDWR) fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0 gettimeofday({1276265920, 855948}, NULL) = 0 poll([{fd=4, events=POLLOUT}], 1, 0) = 1 ([{fd=4, revents=POLLOUT}]) sendto(4, "\32 \1\0\0\1\0\0\0\0\0\0\4ldap\10answeron\3com\0\0"..., 35, MSG_NOSIGNAL, NULL, 0) = 35 poll([{fd=4, events=POLLIN}], 1, 5000) = 1 ([{fd=4, revents=POLLIN}]) ioctl(4, FIONREAD, [104]) = 0 recvfrom(4, "\32 \205\200\0\1\0\1\0\1\0\0\4ldap\10answeron\3com\0\0"..., 1024, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.10.20")}, [16]) = 104 close(4) = 0 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 4 connect(4, 
{sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.10.20")}, 28) = 0 fcntl(4, F_GETFL) = 0x2 (flags O_RDWR) fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0 gettimeofday({1276265920, 858536}, NULL) = 0 poll([{fd=4, events=POLLOUT}], 1, 0) = 1 ([{fd=4, revents=POLLOUT}]) sendto(4, "I\375\1\0\0\1\0\0\0\0\0\0\4ldap\10answeron\3com\0\0"..., 35, MSG_NOSIGNAL, NULL, 0) = 35 poll([{fd=4, events=POLLIN}], 1, 5000) = 1 ([{fd=4, revents=POLLIN}]) ioctl(4, FIONREAD, [139]) = 0 recvfrom(4, "I\375\205\200\0\1\0\2\0\2\0\2\4ldap\10answeron\3com\0\0"..., 1024, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.10.20")}, [16]) = 139 close(4) = 0 socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 4 fcntl(4, F_SETFD, FD_CLOEXEC) = 0 setsockopt(4, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0 setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0 fcntl(4, F_GETFL) = 0x2 (flags O_RDWR) fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(4, {sa_family=AF_INET, sin_port=htons(389), sin_addr=inet_addr("10.20.0.30")}, 16) = -1 EINPROGRESS (Operation now in progress) poll([{fd=4, events=POLLOUT|POLLERR|POLLHUP}], 1, 120000 And thats where it stops, right there after that last 120000.... Using strace, I can obviously CTRL+C to keep going. But like I said, normally the terminal completely freezes. Anyone have any clues?
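The trace above stops inside poll() right after a non-blocking connect() to port 389 (LDAP), which libnss_ldap uses to turn the file owners' UIDs into names for the long listing. A few standard commands that can help confirm whether the hang is in those lookups rather than in the filesystem (nothing here is specific to this box):

# -n prints numeric UIDs/GIDs and skips the passwd/group (and therefore LDAP) lookups;
# if this returns instantly while 'ls -l' hangs, name resolution is the likely culprit
ls -ln /tmp

# exercise the same NSS lookup path directly for one of the file owners
time getent passwd 0

# watch only the network calls ls makes while producing the listing
strace -e trace=network ls -l /tmp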

    Read the article

  • Building a better mouse-trap &ndash; Improving the creation of XML Message Requests using Reflection, XML &amp; XSLT

    - by paulschapman
    Introduction The way I previously created messages to send to the GovTalk service I used the XMLDocument to create the request. While this worked it left a number of problems; not least that for every message a special function would need to created. This is OK for the short term but the biggest cost in any software project is maintenance and this would be a headache to maintain. So the following is a somewhat better way of achieving the same thing. For the purposes of this article I am going to be using the CompanyNumberSearch request of the GovTalk service – although this technique would work for any service that accepted XML. The C# functions which send and receive the messages remain the same. The magic sauce in this is the XSLT which defines the structure of the request, and the use of objects in conjunction with reflection to provide the content. It is a bit like Sweet Chilli Sauce added to Chicken on a bed of rice. So on to the Sweet Chilli Sauce The Sweet Chilli Sauce The request to search for a company based on it’s number is as follows; <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > <EnvelopeVersion>1.0</EnvelopeVersion> <Header> <MessageDetails> <Class>NumberSearch</Class> <Qualifier>request</Qualifier> <TransactionID>1</TransactionID> </MessageDetails> <SenderDetails> <IDAuthentication> <SenderID>????????????????????????????????</SenderID> <Authentication> <Method>CHMD5</Method> <Value>????????????????????????????????</Value> </Authentication> </IDAuthentication> </SenderDetails> </Header> <GovTalkDetails> <Keys/> </GovTalkDetails> <Body> <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd"> <PartialCompanyNumber>99999999</PartialCompanyNumber> <DataSet>LIVE</DataSet> <SearchRows>1</SearchRows> </NumberSearchRequest> </Body> </GovTalkMessage> This is the XML that we send to the GovTalk Service and we get back a list of companies that match the criteria passed A message is structured in two parts; The envelope which identifies the person sending the request, with the name of the request, and the body which gives the detail of the company we are looking for. The Chilli What makes it possible is the use of XSLT to define the message – and serialization to convert each request object into XML. To start we need to create an object which will represent the contents of the message we are sending. However there is a common properties in all the messages that we send to Companies House. These properties are as follows SenderId – the id of the person sending the message SenderPassword – the password associated with Id TransactionId – Unique identifier for the message AuthenticationValue – authenticates the request Because these properties are unique to the Companies House message, and because they are shared with all messages they are perfect candidates for a base class. 
The class is as follows; using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Security.Cryptography; using System.Text; using System.Text.RegularExpressions; using Microsoft.WindowsAzure.ServiceRuntime; namespace CompanyHub.Services { public class GovTalkRequest { public GovTalkRequest() { try { SenderID = RoleEnvironment.GetConfigurationSettingValue("SenderId"); SenderPassword = RoleEnvironment.GetConfigurationSettingValue("SenderPassword"); TransactionId = DateTime.Now.Ticks.ToString(); AuthenticationValue = EncodePassword(String.Format("{0}{1}{2}", SenderID, SenderPassword, TransactionId)); } catch (System.Exception ex) { throw ex; } } /// <summary> /// returns the Sender ID to be used when communicating with the GovTalk Service /// </summary> public String SenderID { get; set; } /// <summary> /// return the password to be used when communicating with the GovTalk Service /// </summary> public String SenderPassword { get; set; } // end SenderPassword /// <summary> /// Transaction Id - uses the Time and Date converted to Ticks /// </summary> public String TransactionId { get; set; } // end TransactionId /// <summary> /// calculate the authentication value that will be used when /// communicating with /// </summary> public String AuthenticationValue { get; set; } // end AuthenticationValue property /// <summary> /// encodes password(s) using MD5 /// </summary> /// <param name="clearPassword"></param> /// <returns></returns> public static String EncodePassword(String clearPassword) { MD5CryptoServiceProvider md5Hasher = new MD5CryptoServiceProvider(); byte[] hashedBytes; UTF32Encoding encoder = new UTF32Encoding(); hashedBytes = md5Hasher.ComputeHash(ASCIIEncoding.Default.GetBytes(clearPassword)); String result = Regex.Replace(BitConverter.ToString(hashedBytes), "-", "").ToLower(); return result; } } } There is nothing particularly clever here, except for the EncodePassword method which hashes the value made up of the SenderId, Password and Transaction id. Each message inherits from this object. So for the Company Number Search, in addition to the properties above, we need a partial company number, the dataset to search and the number of search rows. For the purposes of the project we only need to search the LIVE set, so this can be set in the constructor along with SearchRows. Again all are exposed as properties, with SearchRows and DataSet initialized in the constructor. public class CompanyNumberSearchRequest : GovTalkRequest, IDisposable { /// <summary> /// /// </summary> public CompanyNumberSearchRequest() : base() { DataSet = "LIVE"; SearchRows = 1; } /// <summary> /// Company Number to search against /// </summary> public String PartialCompanyNumber { get; set; } /// <summary> /// What DataSet should be searched for the company /// </summary> public String DataSet { get; set; } /// <summary> /// How many rows should be returned /// </summary> public int SearchRows { get; set; } public void Dispose() { DataSet = String.Empty; PartialCompanyNumber = String.Empty; DataSet = "LIVE"; SearchRows = 1; } } As well as inheriting from our base class, I have also inherited from IDisposable – not just because it is plain good practice to dispose of objects when coding, but because it also gives us more versatility when using the object. 
There are four stages in making a request, and this is reflected in the four methods we execute in making a call to the Companies House service: create a request; send the request; check the status; and, if OK, get the results of the request. I’ve implemented each of these stages within a static class called Toolbox – which also means I don’t need to create an instance of the class to use it. When building the request itself there are three stages: get the template for the message; serialize the object representing the message; and transform the serialized object using a predefined XSLT file. I have defined each of my templates as an embedded resource. When retrieving a resource of this kind we have to include the full namespace to the resource. To make the code as re-usable as possible I defined the full ‘path’ within the GetRequest method. requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile); So we now have the full path of the file within the assembly. Now all we need to do is retrieve the assembly and get the resource. asm = Assembly.GetExecutingAssembly(); sr = asm.GetManifestResourceStream(requestFile); Once retrieved, this can be returned to the calling function, and we now have a stream of XSLT to define the message. Time now to serialize the request to create the other side of this message. // Serialize object containing Request, Load into XML Document t = Obj.GetType(); ms = new MemoryStream(); serializer = new XmlSerializer(t); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); serializer.Serialize(xmlTextWriter, Obj); ms = (MemoryStream)xmlTextWriter.BaseStream; GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray()); First off we need the type of the object, so we make a call to the GetType method of the object containing the Message properties. Next we need a MemoryStream, XmlSerializer and an XmlTextWriter, so these can be initialized. The object is serialized by making the call to the Serialize method of the serializer object. The result of that is then converted into a MemoryStream. That MemoryStream is then converted into a string. ConvertByteArrayToString This is a fairly simple function which uses an ASCIIEncoding object found within the System.Text namespace to convert an array of bytes into a string. public static String ConvertByteArrayToString(byte[] bytes) { System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding(); return enc.GetString(bytes); } I only put it into a function because I will be using this in various places. The Sauce When adding support for other messages, outside of creating a new object to store the properties of the message, the C# components do not need to change. It is in the XSLT file that the versatility of the technique lies. The XSLT file determines the format of the message. 
For the CompanyNumberSearch the XSLT file is as follows; <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > <EnvelopeVersion>1.0</EnvelopeVersion> <Header> <MessageDetails> <Class>NumberSearch</Class> <Qualifier>request</Qualifier> <TransactionID> <xsl:value-of select="CompanyNumberSearchRequest/TransactionId"/> </TransactionID> </MessageDetails> <SenderDetails> <IDAuthentication> <SenderID><xsl:value-of select="CompanyNumberSearchRequest/SenderID"/></SenderID> <Authentication> <Method>CHMD5</Method> <Value> <xsl:value-of select="CompanyNumberSearchRequest/AuthenticationValue"/> </Value> </Authentication> </IDAuthentication> </SenderDetails> </Header> <GovTalkDetails> <Keys/> </GovTalkDetails> <Body> <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd"> <PartialCompanyNumber> <xsl:value-of select="CompanyNumberSearchRequest/PartialCompanyNumber"/> </PartialCompanyNumber> <DataSet> <xsl:value-of select="CompanyNumberSearchRequest/DataSet"/> </DataSet> <SearchRows> <xsl:value-of select="CompanyNumberSearchRequest/SearchRows"/> </SearchRows> </NumberSearchRequest> </Body> </GovTalkMessage> </xsl:template> </xsl:stylesheet> The outer two tags define that this is an XSLT stylesheet and the root tag from which the nodes are searched. The GovTalkMessage is the format of the message that will be sent to Companies House. We first set up the XslCompiledTransform object which will transform the XSLT template and the serialized object into the request to Companies House. xslt = new XslCompiledTransform(); resultStream = new MemoryStream(); writer = new XmlTextWriter(resultStream, Encoding.ASCII); doc = new XmlDocument(); The transformation requires an XmlTextWriter to write the XML (writer) and a stream to place the transformed output into (resultStream). The XML will be loaded into an XmlDocument object (doc) prior to the transformation. // create XSLT Template xslTemplate = Toolbox.GetRequest(Template); xslTemplate.Seek(0, SeekOrigin.Begin); templateReader = XmlReader.Create(xslTemplate); xslt.Load(templateReader); I have stored all the templates as a series of Embedded Resources, and the GetRequest call takes the name of the template and extracts the relevant XSLT file. /// <summary> /// Gets the framework XML which makes the request /// </summary> /// <param name="RequestFile"></param> /// <returns></returns> public static Stream GetRequest(String RequestFile) { String requestFile = String.Empty; Stream sr = null; Assembly asm = null; try { requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile); asm = Assembly.GetExecutingAssembly(); sr = asm.GetManifestResourceStream(requestFile); } catch (Exception) { throw; } finally { asm = null; } return sr; } // end private static stream GetRequest We first take the template name and expand it to include the full namespace to the Embedded Resource. I like to keep all my schemas in the same directory, and so the namespace reflects this. The rest is the default namespace for the project. 
Then we get the currently executing assembly (which will contain the resources) with the call to GetExecutingAssembly(). Finally we get a stream which contains the XSLT file. We use this stream to load an XmlReader with the contents of the template, and that is in turn loaded into the XslCompiledTransform object. We convert the object containing the message properties into XML by serializing it; calling the Serialize() method of the XmlSerializer object. To set up the object we do the following; t = Obj.GetType(); ms = new MemoryStream(); serializer = new XmlSerializer(t); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); We first determine the type of the object being serialized by calling GetType(). We create an XmlSerializer object by passing the type of the object being serialized. The serializer writes to a memory stream and that is linked to an XmlTextWriter. The next job is to serialize the object and load it into an XmlDocument. serializer.Serialize(xmlTextWriter, Obj); ms = (MemoryStream)xmlTextWriter.BaseStream; xmlRequest = new XmlTextReader(ms); GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray()); doc.LoadXml(GovTalkRequest); Time to transform the XML to construct the full request. xslt.Transform(doc, writer); resultStream.Seek(0, SeekOrigin.Begin); request = Toolbox.ConvertByteArrayToString(resultStream.ToArray()); So that creates the full request to be sent to Companies House. Sending the request So far we have a string with a request for the Companies House service. Now we need to send the request to the Companies House Service. Configuration within an Azure project There are entire blog entries written about configuration within an Azure project – most of this is out of scope for this article but the following is a summary. Configuration is defined in two files within the parent project; *.csdef contains the definition of each configuration setting. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WebRole name="CompanyHub.Host"> <InputEndpoints> <InputEndpoint name="HttpIn" protocol="http" port="80" /> </InputEndpoints> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> <Setting name="DataConnectionString" /> </ConfigurationSettings> </WebRole> <WebRole name="CompanyHub.Services"> <InputEndpoints> <InputEndpoint name="HttpIn" protocol="http" port="8080" /> </InputEndpoints> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> <Setting name="SenderId"/> <Setting name="SenderPassword" /> <Setting name="GovTalkUrl"/> </ConfigurationSettings> </WebRole> <WorkerRole name="CompanyHub.Worker"> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> </ConfigurationSettings> </WorkerRole> </ServiceDefinition> Above is the configuration definition from the project. What we are interested in, however, is the ConfigurationSettings tag of the CompanyHub.Services WebRole. 
There are four configuration settings here, but at the moment we are interested in the second to fourth settings: SenderId, SenderPassword and GovTalkUrl. The values of these settings are defined in the ServiceConfiguration.cscfg file; <?xml version="1.0"?> <ServiceConfiguration serviceName="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"> <Role name="CompanyHub.Host"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> <Role name="CompanyHub.Services"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="SenderId" value="UserID"/> <Setting name="SenderPassword" value="Password"/> <Setting name="GovTalkUrl" value="http://xmlgw.companieshouse.gov.uk/v1-0/xmlgw/Gateway"/> </ConfigurationSettings> </Role> <Role name="CompanyHub.Worker"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Look for the Role tag that contains our project name (CompanyHub.Services). Having configured the parameters we can now transmit the request. This is done by ‘POST’ing a stream of XML to the Companies House servers. govTalkUrl = RoleEnvironment.GetConfigurationSettingValue("GovTalkUrl"); request = WebRequest.Create(govTalkUrl); request.Method = "POST"; request.ContentType = "text/xml"; writer = new StreamWriter(request.GetRequestStream()); writer.WriteLine(RequestMessage); writer.Close(); We use the WebRequest object to send the request. Set the method of sending to ‘POST’ and the type of data as text/xml. Once set up, all we do is write the request to the writer – this sends the request to Companies House. Did the Request Work Part I – Getting the response Having sent a request – we now need the result of that request. response = request.GetResponse(); reader = response.GetResponseStream(); result = Toolbox.ConvertByteArrayToString(Toolbox.ReadFully(reader)); The WebRequest object has a GetResponse() method which allows us to get the response sent back. Like many of these calls the results come in the form of a stream, which we convert into a string. Did the Request Work Part II – Translating the Response Much as XSLT and XML were used to create the original request, so they can be used to extract the response, and by deserializing the result we create an object that contains the response. Did it work? It would be really great if everything worked all the time. 
Of course if it did then I don’t suppose people would pay me and others the big bucks so that our programmes do not a) Collapse in a heap (this is an area of memory) b) Blow every fuse in the place in a shower of sparks (this will probably not happen, this being real life and not a Hollywood movie, but it was possible to blow the sound system of a BBC Model B with a poorly coded setting) c) Go nuts and trap everyone outside the airlock (this was from a movie, and unless NASA get a manned moon/Mars mission set up, unlikely to happen) d) Go nuts and take over the world (this was also from a movie, but please note life has a habit of exceeding the wildest imaginations of Hollywood writers (note writers – Hollywood executives have no imagination and judging by recent output of that town have turned plagiarism into an art form)). e) Freeze in total confusion because the cleaner pulled the plug to the internet router (this has happened). So anyway – we need to check to see if our request actually worked. Within the GovTalk response there is a section that details the status of the message and a description of what went wrong (if anything did). I have defined an XSLT template which will extract these into an XML document. <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <xsl:template match="/"> <GovTalkStatus xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Status> <xsl:value-of select="ev:GovTalkMessage/ev:Header/ev:MessageDetails/ev:Qualifier"/> </Status> <Text> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Text"/> </Text> <Location> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Location"/> </Location> <Number> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Number"/> </Number> <Type> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Type"/> </Type> </GovTalkStatus> </xsl:template> </xsl:stylesheet> The only thing different from the previous XSL files is the reference to two namespaces, ev & gt. These are defined at the top of the GovTalk response; xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" If we do not put these references into the XSLT template then the XslCompiledTransform object will not be able to find the relevant tags. Deserialization is a fairly simple activity. encoder = new ASCIIEncoding(); ms = new MemoryStream(encoder.GetBytes(statusXML)); serializer = new XmlSerializer(typeof(GovTalkStatus)); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); messageStatus = (GovTalkStatus)serializer.Deserialize(ms); We set up a serialization object using the object type containing the error state, and pass to it the results of a transformation between the XSLT above and the GovTalk response. Now we have an object containing any error state, and the error message. All we need to do is check the status. If there is an error then we can flag an error. 
If not, then we extract the results and pass them as an object back to the calling function. We do this by – guess what – defining an XSLT template for the result and using that to create an XML stream which can be deserialized into a .NET object. In this instance the XSLT to create the result of a Company Number Search is; <?xml version="1.0" encoding="us-ascii"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:sch="http://xmlgw.companieshouse.gov.uk/v1-0/schema" exclude-result-prefixes="ev"> <xsl:template match="/"> <CompanySearchResult xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <CompanyNumber> <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyNumber"/> </CompanyNumber> <CompanyName> <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyName"/> </CompanyName> </CompanySearchResult> </xsl:template> </xsl:stylesheet> and the object definition is; using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace CompanyHub.Services { public class CompanySearchResult { public CompanySearchResult() { CompanyNumber = String.Empty; CompanyName = String.Empty; } public String CompanyNumber { get; set; } public String CompanyName { get; set; } } } Our entire code to send a request and interpret the results is; String request = String.Empty; String response = String.Empty; GovTalkStatus status = null; fault = null; try { using (CompanyNumberSearchRequest requestObj = new CompanyNumberSearchRequest()) { requestObj.PartialCompanyNumber = CompanyNumber; request = Toolbox.CreateRequest(requestObj, "CompanyNumberSearch.xsl"); response = Toolbox.SendGovTalkRequest(request); status = Toolbox.GetMessageStatus(response); if (status.Status.ToLower() == "error") { fault = new HubFault() { Message = status.Text }; } else { Object obj = Toolbox.GetGovTalkResponse(response, "CompanyNumberSearchResult.xsl", typeof(CompanySearchResult)); } } } catch (FaultException<ArgumentException> ex) { fault = new HubFault() { FaultType = ex.Detail.GetType().FullName, Message = ex.Detail.Message }; } catch (System.Exception ex) { fault = new HubFault() { FaultType = ex.GetType().FullName, Message = ex.Message }; } finally { } Wrap up So there we have it – a reusable set of functions to send and interpret XML results from an internet based service. The code is reusable, with a little change, for any service which uses XML as a transport mechanism – and as for the Companies House GovTalk service, all I need to do is create various objects for the result and message sent and the relevant XSLT files. I might need minor changes for other services but something like 70-90% will be exactly the same.
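Two pieces referenced above never appear in full: the GovTalkStatus object that the status XSLT is deserialized into, and the Toolbox.GetGovTalkResponse helper used in the final listing. Their shape can be inferred from how they are used, so the following is only a hedged sketch of what they might look like rather than the author's actual code; the class and method names (ToolboxSketch in particular) are assumptions made purely for illustration.

using System;
using System.IO;
using System.Reflection;
using System.Text;
using System.Xml;
using System.Xml.Serialization;
using System.Xml.Xsl;

namespace CompanyHub.Services
{
    // Sketch only: property names mirror the elements produced by the status XSLT.
    public class GovTalkStatus
    {
        public String Status { get; set; }
        public String Text { get; set; }
        public String Location { get; set; }
        public String Number { get; set; }
        public String Type { get; set; }
    }

    public static class ToolboxSketch
    {
        // Assumed counterpart to GetMessageStatus: transform the GovTalk response
        // with the supplied result template (e.g. CompanyNumberSearchResult.xsl)
        // and deserialize the transformed XML into the requested type.
        public static Object GetGovTalkResponse(String response, String template, Type resultType)
        {
            // Same embedded-resource lookup as the author's GetRequest method.
            String resourceName = String.Format("CompanyHub.Services.Schemas.{0}", template);
            Stream xslStream = Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName);

            XslCompiledTransform xslt = new XslCompiledTransform();
            using (XmlReader templateReader = XmlReader.Create(xslStream))
            {
                xslt.Load(templateReader);
            }

            XmlDocument doc = new XmlDocument();
            doc.LoadXml(response);

            using (MemoryStream resultStream = new MemoryStream())
            {
                XmlTextWriter writer = new XmlTextWriter(resultStream, Encoding.ASCII);
                xslt.Transform(doc, writer);
                writer.Flush();
                resultStream.Seek(0, SeekOrigin.Begin);

                XmlSerializer serializer = new XmlSerializer(resultType);
                return serializer.Deserialize(resultStream);
            }
        }
    }
}

In the author's project this logic would presumably live alongside CreateRequest, SendGovTalkRequest and GetMessageStatus in the Toolbox class; it is shown separately here only to keep the sketch self-contained.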

    Read the article

  • OpenVPN - Windows 8 to Windows 2008 Server, not connecting

    - by niico
    I have followed this tutorial about setting up an OpenVPN Server on Windows Server - and a client on Windows (in this case Windows 8). The server appears to be running fine - but it is not connecting with this error: Mon Jul 22 19:09:04 2013 Warning: cannot open --log file: C:\Program Files\OpenVPN\log\my-laptop.log: Access is denied. (errno=5) Mon Jul 22 19:09:04 2013 OpenVPN 2.3.2 x86_64-w64-mingw32 [SSL (OpenSSL)] [LZO] [PKCS11] [eurephia] [IPv6] built on Jun 3 2013 Mon Jul 22 19:09:04 2013 MANAGEMENT: TCP Socket listening on [AF_INET]127.0.0.1:25340 Mon Jul 22 19:09:04 2013 Need hold release from management interface, waiting... Mon Jul 22 19:09:05 2013 MANAGEMENT: Client connected from [AF_INET]127.0.0.1:25340 Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'state on' Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'log all on' Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'hold off' Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'hold release' Mon Jul 22 19:09:05 2013 Socket Buffers: R=[65536->65536] S=[65536->65536] Mon Jul 22 19:09:05 2013 UDPv4 link local: [undef] Mon Jul 22 19:09:05 2013 UDPv4 link remote: [AF_INET]66.666.66.666:9999 Mon Jul 22 19:09:05 2013 MANAGEMENT: >STATE:1374494945,WAIT,,, Mon Jul 22 19:10:05 2013 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity) Mon Jul 22 19:10:05 2013 TLS Error: TLS handshake failed Mon Jul 22 19:10:05 2013 SIGUSR1[soft,tls-error] received, process restarting Mon Jul 22 19:10:05 2013 MANAGEMENT: >STATE:1374495005,RECONNECTING,tls-error,, Mon Jul 22 19:10:05 2013 Restart pause, 2 second(s) Note I have changed the IP and port no (it uses a non-standard port for security reasons). That port is open on the hardware firewall. The server logs are showing a connection attempt from my client: TLS: Initial packet from [AF_INET]118.68.xx.xx:65011, sid=081af4ed xxxxxxxx Mon Jul 22 14:19:15 2013 118.68.xx.xx:65011 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity) How can I problem solve this & find the problem? Thx Update - Client config file: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. ;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. ;proto tcp proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. remote 00.00.00.00 1194 ;remote 00.00.00.00 9999 ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. ;remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. 
Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) ;user nobody ;group nobody # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca "C:\\Program Files\\OpenVPN\\config\\ca.crt" cert "C:\\Program Files\\OpenVPN\\config\\my-laptop.crt" key "C:\\Program Files\\OpenVPN\\config\\my-laptop.key" # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. # Don't enable this unless it is also # enabled in the server config file. comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 Server config file: ################################################# # Sample OpenVPN 2.0 config file for # # multi-client server. # # # # This file is for the server side # # of a many-clients <-> one-server # # OpenVPN configuration. # # # # OpenVPN also supports # # single-machine <-> single-machine # # configurations (See the Examples page # # on the web site for more info). # # # # This config should work on Windows # # or Linux/BSD systems. Remember on # # Windows to quote pathnames and use # # double backslashes, e.g.: # # "C:\\Program Files\\OpenVPN\\config\\foo.key" # # # # Comments are preceded with '#' or ';' # ################################################# # Which local IP address should OpenVPN # listen on? (optional) ;local 00.00.00.00 # Which TCP/UDP port should OpenVPN listen on? # If you want to run multiple OpenVPN instances # on the same machine, use a different port # number for each one. You will need to # open up this port on your firewall. std 1194 port 1194 # TCP or UDP server? ;proto tcp proto udp # "dev tun" will create a routed IP tunnel, # "dev tap" will create an ethernet tunnel. # Use "dev tap0" if you are ethernet bridging # and have precreated a tap0 virtual interface # and bridged it with your ethernet interface. # If you want to control access policies # over the VPN, you must create firewall # rules for the the TUN/TAP interface. # On non-Windows systems, you can give # an explicit unit number, such as tun0. # On Windows, use "dev-node" for this. 
# On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. ;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel if you # have more than one. On XP SP2 or higher, # you may need to selectively disable the # Windows firewall for the TAP adapter. # Non-Windows systems usually don't need this. ;dev-node MyTap # SSL/TLS root certificate (ca), certificate # (cert), and private key (key). Each client # and the server must have their own cert and # key file. The server and all clients will # use the same ca file. # # See the "easy-rsa" directory for a series # of scripts for generating RSA certificates # and private keys. Remember to use # a unique Common Name for the server # and each of the client certificates. # # Any X509 key management system can be used. # OpenVPN can also use a PKCS #12 formatted key file # (see "pkcs12" directive in man page). ca "C:\\Program Files\\OpenVPN\\config\\ca.crt" cert "C:\\Program Files\\OpenVPN\\config\\server.crt" key "C:\\Program Files\\OpenVPN\\config\\server.key" # Diffie hellman parameters. # Generate your own with: # openssl dhparam -out dh1024.pem 1024 # Substitute 2048 for 1024 if you are using # 2048 bit keys. dh "C:\\Program Files\\OpenVPN\\config\\dh2048.pem" # Configure server mode and supply a VPN subnet # for OpenVPN to draw client addresses from. # The server will take 10.8.0.1 for itself, # the rest will be made available to clients. # Each client will be able to reach the server # on 10.8.0.1. Comment this line out if you are # ethernet bridging. See the man page for more info. server 10.8.0.0 255.255.255.0 # Maintain a record of client <-> virtual IP address # associations in this file. If OpenVPN goes down or # is restarted, reconnecting clients can be assigned # the same virtual IP address from the pool that was # previously assigned. ifconfig-pool-persist ipp.txt # Configure server mode for ethernet bridging. # You must first use your OS's bridging capability # to bridge the TAP interface with the ethernet # NIC interface. Then you must manually set the # IP/netmask on the bridge interface, here we # assume 10.8.0.4/255.255.255.0. Finally we # must set aside an IP range in this subnet # (start=10.8.0.50 end=10.8.0.100) to allocate # to connecting clients. Leave this line commented # out unless you are ethernet bridging. ;server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100 # Configure server mode for ethernet bridging # using a DHCP-proxy, where clients talk # to the OpenVPN server-side DHCP server # to receive their IP address allocation # and DNS server addresses. You must first use # your OS's bridging capability to bridge the TAP # interface with the ethernet NIC interface. # Note: this mode only works on clients (such as # Windows), where the client-side TAP adapter is # bound to a DHCP client. ;server-bridge # Push routes to the client to allow it # to reach other private subnets behind # the server. Remember that these # private subnets will also need # to know to route the OpenVPN client # address pool (10.8.0.0/255.255.255.0) # back to the OpenVPN server. ;push "route 192.168.10.0 255.255.255.0" ;push "route 192.168.20.0 255.255.255.0" # To assign specific IP addresses to specific # clients or if a connecting client has a private # subnet behind it that should also have VPN access, # use the subdirectory "ccd" for client-specific # configuration files (see man page for more info). 
# EXAMPLE: Suppose the client # having the certificate common name "Thelonious" # also has a small subnet behind his connecting # machine, such as 192.168.40.128/255.255.255.248. # First, uncomment out these lines: ;client-config-dir ccd ;route 192.168.40.128 255.255.255.248 # Then create a file ccd/Thelonious with this line: # iroute 192.168.40.128 255.255.255.248 # This will allow Thelonious' private subnet to # access the VPN. This example will only work # if you are routing, not bridging, i.e. you are # using "dev tun" and "server" directives. # EXAMPLE: Suppose you want to give # Thelonious a fixed VPN IP address of 10.9.0.1. # First uncomment out these lines: ;client-config-dir ccd ;route 10.9.0.0 255.255.255.252 # Then add this line to ccd/Thelonious: # ifconfig-push 10.9.0.1 10.9.0.2 # Suppose that you want to enable different # firewall access policies for different groups # of clients. There are two methods: # (1) Run multiple OpenVPN daemons, one for each # group, and firewall the TUN/TAP interface # for each group/daemon appropriately. # (2) (Advanced) Create a script to dynamically # modify the firewall in response to access # from different clients. See man # page for more info on learn-address script. ;learn-address ./script # If enabled, this directive will configure # all clients to redirect their default # network gateway through the VPN, causing # all IP traffic such as web browsing and # and DNS lookups to go through the VPN # (The OpenVPN server machine may need to NAT # or bridge the TUN/TAP interface to the internet # in order for this to work properly). ;push "redirect-gateway def1 bypass-dhcp" # Certain Windows-specific network settings # can be pushed to clients, such as DNS # or WINS server addresses. CAVEAT: # http://openvpn.net/faq.html#dhcpcaveats # The addresses below refer to the public # DNS servers provided by opendns.com. ;push "dhcp-option DNS 208.67.222.222" ;push "dhcp-option DNS 208.67.220.220" # Uncomment this directive to allow differenta # clients to be able to "see" each other. # By default, clients will only see the server. # To force clients to only see the server, you # will also need to appropriately firewall the # server's TUN/TAP interface. ;client-to-client # Uncomment this directive if multiple clients # might connect with the same certificate/key # files or common names. This is recommended # only for testing purposes. For production use, # each client should have its own certificate/key # pair. # # IF YOU HAVE NOT GENERATED INDIVIDUAL # CERTIFICATE/KEY PAIRS FOR EACH CLIENT, # EACH HAVING ITS OWN UNIQUE "COMMON NAME", # UNCOMMENT THIS LINE OUT. ;duplicate-cn # The keepalive directive causes ping-like # messages to be sent back and forth over # the link so that each side knows when # the other side has gone down. # Ping every 10 seconds, assume that remote # peer is down if no ping received during # a 120 second time period. keepalive 10 120 # For extra security beyond that provided # by SSL/TLS, create an "HMAC firewall" # to help block DoS attacks and UDP port flooding. # # Generate with: # openvpn --genkey --secret ta.key # # The server and each client must have # a copy of this key. # The second parameter should be '0' # on the server and '1' on the clients. ;tls-auth ta.key 0 # This file is secret # Select a cryptographic cipher. # This config item must be copied to # the client config file as well. ;cipher BF-CBC # Blowfish (default) ;cipher AES-128-CBC # AES ;cipher DES-EDE3-CBC # Triple-DES # Enable compression on the VPN link. 
# If you enable it here, you must also # enable it in the client config file. comp-lzo # The maximum number of concurrently connected # clients we want to allow. ;max-clients 100 # It's a good idea to reduce the OpenVPN # daemon's privileges after initialization. # # You can uncomment this out on # non-Windows systems. ;user nobody ;group nobody # The persist options will try to avoid # accessing certain resources on restart # that may no longer be accessible because # of the privilege downgrade. persist-key persist-tun # Output a short status file showing # current connections, truncated # and rewritten every minute. status openvpn-status.log # By default, log messages will go to the syslog (or # on Windows, if running as a service, they will go to # the "\Program Files\OpenVPN\log" directory). # Use log or log-append to override this default. # "log" will truncate the log file on OpenVPN startup, # while "log-append" will append to it. Use one # or the other (but not both). ;log openvpn.log ;log-append openvpn.log # Set the appropriate level of log # file verbosity. # # 0 is silent, except for fatal errors # 4 is reasonable for general usage # 5 and 6 can help to debug connection problems # 9 is extremely verbose verb 3 # Silence repeating messages. At most 20 # sequential messages of the same message # category will be output to the log. ;mute 20 I have changed IP's for security

    Read the article

  • Solr DataImportHandler configuration

    - by talo
    I want to get data from a MySQL database with the help of the DataImportHandler so I can create indexes. Now I've configured my Solr instance so that it works on Tomcat (the example admin page), but if I try to change the solrconfig.xml file I'll get an error message. I'm working with Solr 3.6. So my configuration is: In solrconfig.xml I added: <dataDir>${solr.data.dir:/usr/share/tomcat7/solr2}</dataDir> to specify my working directory, and then <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler"> <lst name="defaults"> <str name="config">/usr/share/tomcat7/solr2/conf/data-config.xml</str> </lst> </requestHandler> to specify the new request handler. These two are my lib directives for DIH. Do I need to change them? <lib dir="../../dist/" regex="apache-solr-dataimporthandler-\d.*\.jar" /> <lib dir="../../contrib/dataimporthandler/lib/" regex=".*\.jar" /> I also created the data-config.xml file and added the following: <?xml version="1.0" encoding="UTF-8"?> <dataConfig> <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/ethicsweb_experts" user="root" password=""/> <document> <entity name="experts" datasource="mysql" pk="mainid" query="SELECT experts.mainid as mainid FROM experts WHERE validRec = 'y'"> <field column="mainid" name="mainid"/> <field column="validRec" name="validRec"/> </entity> </document> I've copied the following jars to the tomcat/lib folder (DIH jar files and the MySQL JDBC connector jar file): apache-solr-dataimporthandler-3.6.0.jar apache-solr-dataimporthandler-extras-3.6.0.jar mysql-connector-java-5.1.20-bin.jar Also in the schema.xml file I added the following fields: <field name="mainid" type="int" indexed="true" stored="true" /> <field name="validRec" type="string" indexed="true" stored="true" /> <field name="recSource" type="string" indexed="true" stored="true" /> <uniqueKey>mainid</uniqueKey> But now when I try to access: http://localhost:8080/solr2/ I get the following output: HTTP Status 500 - Severe errors in solr configuration. Check your log files for more detailed information on what may be wrong. 
If you want solr to continue after configuration errors, change: false in solr.xml ------------------------------------------------------------- org.apache.solr.common.SolrException: No cores were created, please check the logs for errors at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:172) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) ------------------------------------------------------------- java.lang.NoClassDefFoundError: org/apache/solr/util/plugin/SolrCoreAware at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:791) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:449) at java.net.URLClassLoader.access$100(URLClassLoader.java:71) at java.net.URLClassLoader$1.run(URLClassLoader.java:361) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:423) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1698) at java.lang.ClassLoader.loadClass(ClassLoader.java:410) at java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:789) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:378) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:419) at org.apache.solr.core.SolrCore.createRequestHandler(SolrCore.java:455) at org.apache.solr.core.RequestHandlers.initHandlersFromConfig(RequestHandlers.java:159) at org.apache.solr.core.SolrCore.(SolrCore.java:563) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:483) at 
org.apache.solr.core.CoreContainer.load(CoreContainer.java:335) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:219) at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:161) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) Caused by: java.lang.ClassNotFoundException: org.apache.solr.util.plugin.SolrCoreAware at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:423) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) ... 
47 more My log entries show me that: SEVERE: java.lang.NoClassDefFoundError: org/apache/solr/util/plugin/SolrCoreAware at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:791) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:449) at java.net.URLClassLoader.access$100(URLClassLoader.java:71) at java.net.URLClassLoader$1.run(URLClassLoader.java:361) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:423) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1698) at java.lang.ClassLoader.loadClass(ClassLoader.java:410) at java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:789) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:378) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:419) at org.apache.solr.core.SolrCore.createRequestHandler(SolrCore.java:455) at org.apache.solr.core.RequestHandlers.initHandlersFromConfig(RequestHandlers.java:159) at org.apache.solr.core.SolrCore.(SolrCore.java:563) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:483) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:335) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:219) at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:161) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) Caused by: java.lang.ClassNotFoundException: org.apache.solr.util.plugin.SolrCoreAware at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at 
java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:423) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) ... 47 more Jun 15, 2012 4:07:50 PM org.apache.solr.servlet.SolrDispatchFilter init SEVERE: Could not start Solr. Check solr/home property and the logs org.apache.solr.common.SolrException: No cores were created, please check the logs for errors at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:172) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) Jun 15, 2012 4:07:50 PM org.apache.solr.common.SolrException log SEVERE: org.apache.solr.common.SolrException: No cores were created, please check the logs for errors at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:172) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) So now I wonder where I screwed up the configuration for the DataImportHandler. Do I need to specify more jar files? Did I put the jar files in the correct directory? Any help would be greatly appreciated.

    Read the article

  • Hibernate + PostgreSQL : relation does not exist - SQL Error: 0, SQLState: 42P01

    - by tommy599
    Hello, I am having some problems trying to work with PostgreSQL and Hibernate, more specifically, the issue mentioned in the title. I've been searching the net for a few hours now but none of the found solutions worked for me. I am using Eclipse Java EE IDE for Web Developers. Build id: 20090920-1017 with HibernateTools, Hibernate 3, PostgreSQL 8.4.3 on Ubuntu 9.10. Here are the relevant files: Message.class package hello; public class Message { private Long id; private String text; public Message() { } public Long getId() { return id; } public void setId(Long id) { this.id = id; } public String getText() { return text; } public void setText(String text) { this.text = text; } } Message.hbm.xml <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <hibernate-mapping package="hello"> <class name="Message" table="public.messages"> <id name="id" column="id"> <generator class="assigned"/> </id> <property name="text" column="messagetext"/> </class> </hibernate-mapping> hibernate.cfg.xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd"> <hibernate-configuration> <session-factory> <property name="hibernate.connection.driver_class">org.postgresql.Driver</property> <property name="hibernate.connection.password">bar</property> <property name="hibernate.connection.url">jdbc:postgresql:postgres/tommy</property> <property name="hibernate.connection.username">foo</property> <property name="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</property> <property name="show_sql">true</property> <property name="log4j.logger.org.hibernate.type">DEBUG</property> <mapping resource="hello/Message.hbm.xml"/> </session-factory> </hibernate-configuration> Main package hello; import org.hibernate.Session; import org.hibernate.SessionFactory; import org.hibernate.Transaction; import org.hibernate.cfg.Configuration; public class App { public static void main(String[] args) { SessionFactory sessionFactory = new Configuration().configure() .buildSessionFactory(); Message message = new Message(); message.setText("Hello Cruel World"); message.setId(2L); Session session = null; Transaction transaction = null; try { session = sessionFactory.openSession(); transaction = session.beginTransaction(); session.save(message); } catch (Exception e) { System.out.println("Exception attemtping to Add message: " + e.getMessage()); } finally { if (session != null && session.isOpen()) { if (transaction != null) transaction.commit(); session.flush(); session.close(); } } } } Table structure: foo=# \d messages Table "public.messages" Column | Type | Modifiers -------------+---------+----------- id | integer | messagetext | text | Eclipse console output when I run it Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Environment <clinit> INFO: Hibernate 3.5.1-Final Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Environment <clinit> INFO: hibernate.properties not found Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Environment buildBytecodeProvider INFO: Bytecode provider name : javassist Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Environment <clinit> INFO: using JDK 1.4 java.sql.Timestamp handling Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Configuration configure INFO: configuring from resource: /hibernate.cfg.xml Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Configuration 
getConfigurationInputStream INFO: Configuration resource: /hibernate.cfg.xml Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Configuration addResource INFO: Reading mappings from resource : hello/Message.hbm.xml Apr 28, 2010 11:13:54 PM org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues INFO: Mapping class: hello.Message -> public.messages Apr 28, 2010 11:13:54 PM org.hibernate.cfg.Configuration doConfigure INFO: Configured SessionFactory: null Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: Using Hibernate built-in connection pool (not for production use!) Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: Hibernate connection pool size: 20 Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: autocommit mode: false Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: using driver: org.postgresql.Driver at URL: jdbc:postgresql:postgres/tommy Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: connection properties: {user=foo, password=****} Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: RDBMS: PostgreSQL, version: 8.4.3 Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JDBC driver: PostgreSQL Native Driver, version: PostgreSQL 8.4 JDBC4 (build 701) Apr 28, 2010 11:13:54 PM org.hibernate.dialect.Dialect <init> INFO: Using dialect: org.hibernate.dialect.PostgreSQLDialect Apr 28, 2010 11:13:54 PM org.hibernate.engine.jdbc.JdbcSupportLoader useContextualLobCreation INFO: Disabling contextual LOB creation as createClob() method threw error : java.lang.reflect.InvocationTargetException Apr 28, 2010 11:13:54 PM org.hibernate.transaction.TransactionFactoryFactory buildTransactionFactory INFO: Using default transaction strategy (direct JDBC transactions) Apr 28, 2010 11:13:54 PM org.hibernate.transaction.TransactionManagerLookupFactory getTransactionManagerLookup INFO: No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Automatic flush during beforeCompletion(): disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Automatic session close at end of transaction: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JDBC batch size: 15 Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JDBC batch updates for versioned data: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Scrollable result sets: enabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JDBC3 getGeneratedKeys(): enabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Connection release mode: auto Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Default batch fetch size: 1 Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Generate SQL with comments: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Order SQL updates by primary key: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Order SQL inserts for batching: disabled Apr 28, 2010 11:13:54 PM 
org.hibernate.cfg.SettingsFactory createQueryTranslatorFactory INFO: Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory Apr 28, 2010 11:13:54 PM org.hibernate.hql.ast.ASTQueryTranslatorFactory <init> INFO: Using ASTQueryTranslatorFactory Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Query language substitutions: {} Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JPA-QL strict compliance: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Second-level cache: enabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Query cache: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory createRegionFactory INFO: Cache region factory : org.hibernate.cache.impl.NoCachingRegionFactory Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Optimize cache for minimal puts: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Structured second-level cache entries: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Echoing all SQL to stdout Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Statistics: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Deleted entity synthetic identifier rollback: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Default entity-mode: pojo Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Named query checking : enabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Check Nullability in Core (should be disabled when Bean Validation is on): enabled Apr 28, 2010 11:13:54 PM org.hibernate.impl.SessionFactoryImpl <init> INFO: building session factory Apr 28, 2010 11:13:55 PM org.hibernate.impl.SessionFactoryObjectFactory addInstance INFO: Not binding factory to JNDI, no JNDI name configured Hibernate: insert into public.messages (messagetext, id) values (?, ?) Apr 28, 2010 11:13:55 PM org.hibernate.util.JDBCExceptionReporter logExceptions WARNING: SQL Error: 0, SQLState: 42P01 Apr 28, 2010 11:13:55 PM org.hibernate.util.JDBCExceptionReporter logExceptions SEVERE: Batch entry 0 insert into public.messages (messagetext, id) values ('Hello Cruel World', '2') was aborted. Call getNextException to see the cause. 
Apr 28, 2010 11:13:55 PM org.hibernate.util.JDBCExceptionReporter logExceptions WARNING: SQL Error: 0, SQLState: 42P01 Apr 28, 2010 11:13:55 PM org.hibernate.util.JDBCExceptionReporter logExceptions SEVERE: ERROR: relation "public.messages" does not exist Position: 13 Apr 28, 2010 11:13:55 PM org.hibernate.event.def.AbstractFlushingEventListener performExecutions SEVERE: Could not synchronize database state with session org.hibernate.exception.SQLGrammarException: Could not execute JDBC batch update at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:92) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:179) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206) at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:375) at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137) at hello.App.main(App.java:31) Caused by: java.sql.BatchUpdateException: Batch entry 0 insert into public.messages (messagetext, id) values ('Hello Cruel World', '2') was aborted. Call getNextException to see the cause. at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2569) at org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:459) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1796) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:407) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2708) at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268) ... 8 more Exception in thread "main" org.hibernate.exception.SQLGrammarException: Could not execute JDBC batch update at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:92) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:179) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206) at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:375) at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137) at hello.App.main(App.java:31) Caused by: java.sql.BatchUpdateException: Batch entry 0 insert into public.messages (messagetext, id) values ('Hello Cruel World', '2') was aborted. Call getNextException to see the cause. 
at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2569) at org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:459) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1796) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:407) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2708) at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268) ... 8 more PostgreSQL log file 2010-04-28 23:13:55 EEST LOG: execute S_1: BEGIN 2010-04-28 23:13:55 EEST ERROR: relation "public.messages" does not exist at character 13 2010-04-28 23:13:55 EEST STATEMENT: insert into public.messages (messagetext, id) values ($1, $2) 2010-04-28 23:13:55 EEST LOG: unexpected EOF on client connection
If I copy/paste the query into the PostgreSQL command line, fill in the values, and add a ; after it, it works. Everything is lowercase, so I don't think that's the issue. If I switch to MySQL with the same code and the same project (I only change the driver, URL, and authentication), it works. In the Eclipse Data Source Explorer, I can ping the DB and it succeeds. The weird thing is that I can't see the tables from there either: it expands the public schema but doesn't list the tables. Could it be some permission issue? Thanks!
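One likely explanation is that Hibernate is connected to a different database than the one holding the table: the psql session above runs at a foo=# prompt, while hibernate.cfg.xml uses jdbc:postgresql:postgres/tommy. A minimal JDBC probe against the suspected database makes this easy to verify; the host, port, and database name below are assumptions to adjust to the actual setup:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class TableCheck {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            // Assumption: the table was created in the database shown at the foo=# prompt.
            // Point this URL at whichever database should hold public.messages.
            String url = "jdbc:postgresql://localhost:5432/foo";
            Connection con = DriverManager.getConnection(url, "foo", "bar");
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("select count(*) from public.messages");
            rs.next();
            System.out.println("public.messages rows: " + rs.getLong(1));
            rs.close();
            st.close();
            con.close();
        }
    }

If the count succeeds here but the same query fails with the URL taken from hibernate.cfg.xml, the 42P01 error is a wrong-database (or search_path) problem rather than a permissions one, and correcting the hibernate.connection.url property should be enough.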

    Read the article

  • Hibernate Query Exception

    - by dharga
    I've got a hibernate query I'm trying to get working but keep getting an exception with a not so helpful stack trace. I'm including the code, the stack trace, and hibernate chatter before the exception is thrown. If you need me to include the entity classes for MessageTarget and GrpExclusion let me know in comments and I'll add them. public List<MessageTarget> findMessageTargets(int age, String gender, String businessCode, String groupId, String systemCode) { Session session = getHibernateTemplate().getSessionFactory().openSession(); List<MessageTarget> results = new ArrayList<MessageTarget>(); try { String hSql = "from MessageTarget mt where " + "not exists (select GrpExclusion where grp_no = ?) and " + "(trgt_gndr_cd = 'A' or trgt_gndr_cd = ?) and " + "sys_src_cd = ? and " + "bampi_busn_sgmnt_cd = ? and " + "trgt_low_age <= ? and " + "trgt_high_age >= ? and " + "(effectiveDate is null or effectiveDate <= ?) and " + "(termDate is null or termDate >= ?)"; results = session.createQuery(hSql) .setParameter(0, groupId) .setParameter(1, gender) .setParameter(2, systemCode) .setParameter(3, businessCode) .setParameter(4, age) .setParameter(5, age) .setParameter(6, new Date()) .setParameter(7, new Date()) .list(); } catch (Exception e) { System.err.println(e.getMessage()); e.printStackTrace(); } finally { session.close(); } return results; } Here's the stacktrace. [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R java.lang.NullPointerException [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.ast.util.SessionFactoryHelper.findSQLFunction(SessionFactoryHelper.java:365) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.ast.tree.IdentNode.getDataType(IdentNode.java:289) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.ast.tree.SelectClause.initializeExplicitSelectClause(SelectClause.java:165) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.ast.HqlSqlWalker.useSelectClause(HqlSqlWalker.java:831) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.ast.HqlSqlWalker.processQuery(HqlSqlWalker.java:619) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.query(HqlSqlBaseWalker.java:672) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.collectionFunctionOrSubselect(HqlSqlBaseWalker.java:4465) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.comparisonExpr(HqlSqlBaseWalker.java:4165) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1864) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1839) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1789) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1789) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1789) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1789) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1789) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at 
org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1789) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1789) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.whereClause(HqlSqlBaseWalker.java:818) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.query(HqlSqlBaseWalker.java:604) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectStatement(HqlSqlBaseWalker.java:288) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.statement(HqlSqlBaseWalker.java:231) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.ast.QueryTranslatorImpl.analyze(QueryTranslatorImpl.java:254) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.ast.QueryTranslatorImpl.doCompile(QueryTranslatorImpl.java:185) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.ast.QueryTranslatorImpl.compile(QueryTranslatorImpl.java:136) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:101) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:80) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.hibernate.engine.query.QueryPlanCache.getHQLQueryPlan(QueryPlanCache.java:94) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.hibernate.impl.AbstractSessionImpl.getHQLQueryPlan(AbstractSessionImpl.java:156) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.hibernate.impl.AbstractSessionImpl.createQuery(AbstractSessionImpl.java:135) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.hibernate.impl.SessionImpl.createQuery(SessionImpl.java:1651) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.bcbst.bamp.ws.dao.MessageTargetDAOImpl.findMessageTargets(MessageTargetDAOImpl.java:30) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.bcbst.bamp.ws.common.AlertReminder.findMessageTargets(AlertReminder.java:22) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at java.lang.reflect.Method.invoke(Method.java:599) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.apache.axis2.jaxws.server.dispatcher.JavaDispatcher.invokeTargetOperation(JavaDispatcher.java:81) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.apache.axis2.jaxws.server.dispatcher.JavaBeanDispatcher.invoke(JavaBeanDispatcher.java:98) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.apache.axis2.jaxws.server.EndpointController.invoke(EndpointController.java:109) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.apache.axis2.jaxws.server.JAXWSMessageReceiver.receive(JAXWSMessageReceiver.java:159) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:188) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.apache.axis2.transport.http.HTTPTransportUtils.processHTTPPostRequest(HTTPTransportUtils.java:275) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at 
com.ibm.ws.websvcs.transport.http.WASAxis2Servlet.doPost(WASAxis2Servlet.java:1389) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at javax.servlet.http.HttpServlet.service(HttpServlet.java:738) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at javax.servlet.http.HttpServlet.service(HttpServlet.java:831) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1536) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:829) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:458) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.servlet.ServletWrapperImpl.handleRequest(ServletWrapperImpl.java:175) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:3742) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.webapp.WebGroup.handleRequest(WebGroup.java:276) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:929) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.WSWebContainer.handleRequest(WSWebContainer.java:1583) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:178) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:455) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:384) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.ready(HttpInboundLink.java:272) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:214) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:113) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:165) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:204) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:775) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:905) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1550) Here's the hibernate chatter. 
[5/6/10 15:05:20:651 EDT] 00000017 XmlBeanDefini I org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions Loading XML bean definitions from class path resource [beans.xml] [5/6/10 15:05:20:823 EDT] 00000017 Configuration I org.slf4j.impl.JCLLoggerAdapter info configuring from url: file:/C:/workspaces/bampi/AlertReminderWS/WebContent/WEB-INF/classes/hibernate.cfg.xml [5/6/10 15:05:20:838 EDT] 00000017 Configuration I org.slf4j.impl.JCLLoggerAdapter info Configured SessionFactory: java:hibernate/Alert/SessionFactory1.0.3 [5/6/10 15:05:20:838 EDT] 00000017 AnnotationBin I org.hibernate.cfg.AnnotationBinder bindClass Binding entity from annotated class: com.bcbst.bamp.ws.model.MessageTarget [5/6/10 15:05:20:838 EDT] 00000017 EntityBinder I org.hibernate.cfg.annotations.EntityBinder bindTable Bind entity com.bcbst.bamp.ws.model.MessageTarget on table MessageTarget [5/6/10 15:05:20:854 EDT] 00000017 AnnotationBin I org.hibernate.cfg.AnnotationBinder bindClass Binding entity from annotated class: com.bcbst.bamp.ws.model.GrpExclusion [5/6/10 15:05:20:854 EDT] 00000017 EntityBinder I org.hibernate.cfg.annotations.EntityBinder bindTable Bind entity com.bcbst.bamp.ws.model.GrpExclusion on table GrpExclusion [5/6/10 15:05:20:854 EDT] 00000017 CollectionBin I org.hibernate.cfg.annotations.CollectionBinder bindOneToManySecondPass Mapping collection: com.bcbst.bamp.ws.model.MessageTarget.exclusions -> GrpExclusion [5/6/10 15:05:20:885 EDT] 00000017 AnnotationSes I org.springframework.orm.hibernate3.LocalSessionFactoryBean buildSessionFactory Building new Hibernate SessionFactory [5/6/10 15:05:20:901 EDT] 00000017 ConnectionPro I org.slf4j.impl.JCLLoggerAdapter info Initializing connection provider: org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider [5/6/10 15:05:20:901 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info RDBMS: Microsoft SQL Server, version: 9.00.4035 [5/6/10 15:05:20:901 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info JDBC driver: Microsoft SQL Server 2005 JDBC Driver, version: 1.2.2828.100 [5/6/10 15:05:20:901 EDT] 00000017 Dialect I org.slf4j.impl.JCLLoggerAdapter info Using dialect: org.hibernate.dialect.SQLServerDialect [5/6/10 15:05:20:916 EDT] 00000017 TransactionFa I org.slf4j.impl.JCLLoggerAdapter info Transaction strategy: org.springframework.orm.hibernate3.SpringTransactionFactory [5/6/10 15:05:20:916 EDT] 00000017 TransactionMa I org.slf4j.impl.JCLLoggerAdapter info No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) [5/6/10 15:05:20:916 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Automatic flush during beforeCompletion(): disabled [5/6/10 15:05:20:916 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Automatic session close at end of transaction: disabled [5/6/10 15:05:20:916 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Scrollable result sets: enabled [5/6/10 15:05:20:916 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info JDBC3 getGeneratedKeys(): enabled [5/6/10 15:05:20:916 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Connection release mode: auto [5/6/10 15:05:20:916 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Default batch fetch size: 1 [5/6/10 15:05:20:916 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Generate SQL with comments: disabled [5/6/10 15:05:20:916 EDT] 00000017 
SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Order SQL updates by primary key: disabled [5/6/10 15:05:20:932 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Order SQL inserts for batching: disabled [5/6/10 15:05:20:932 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory [5/6/10 15:05:20:932 EDT] 00000017 ASTQueryTrans I org.slf4j.impl.JCLLoggerAdapter info Using ASTQueryTranslatorFactory [5/6/10 15:05:20:932 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Query language substitutions: {} [5/6/10 15:05:20:932 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info JPA-QL strict compliance: disabled [5/6/10 15:05:20:932 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Second-level cache: enabled [5/6/10 15:05:20:932 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Query cache: disabled [5/6/10 15:05:20:932 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge [5/6/10 15:05:20:932 EDT] 00000017 RegionFactory I org.slf4j.impl.JCLLoggerAdapter info Cache provider: org.hibernate.cache.NoCacheProvider [5/6/10 15:05:20:948 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Optimize cache for minimal puts: disabled [5/6/10 15:05:20:948 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Structured second-level cache entries: disabled [5/6/10 15:05:20:948 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Statistics: disabled [5/6/10 15:05:20:948 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Deleted entity synthetic identifier rollback: disabled [5/6/10 15:05:20:948 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Default entity-mode: pojo [5/6/10 15:05:20:948 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Named query checking : enabled [5/6/10 15:05:20:979 EDT] 00000017 SessionFactor I org.slf4j.impl.JCLLoggerAdapter info building session factory [5/6/10 15:05:21:010 EDT] 00000017 SessionFactor I org.slf4j.impl.JCLLoggerAdapter info Factory name: java:hibernate/Alert/SessionFactory1.0.3 [5/6/10 15:05:21:010 EDT] 00000017 NamingHelper I org.slf4j.impl.JCLLoggerAdapter info JNDI InitialContext properties:{} [5/6/10 15:05:21:010 EDT] 00000017 NamingHelper I org.slf4j.impl.JCLLoggerAdapter info Creating subcontext: java:hibernate [5/6/10 15:05:21:010 EDT] 00000017 NamingHelper I org.slf4j.impl.JCLLoggerAdapter info Creating subcontext: Alert [5/6/10 15:05:21:010 EDT] 00000017 SessionFactor I org.slf4j.impl.JCLLoggerAdapter info Bound factory to JNDI name: java:hibernate/Alert/SessionFactory1.0.3 [5/6/10 15:05:21:026 EDT] 00000017 SessionFactor W org.slf4j.impl.JCLLoggerAdapter warn InitialContext did not implement EventContext [5/6/10 15:05:21:041 EDT] 00000017 PARSER E org.slf4j.impl.JCLLoggerAdapter error <AST>:0:0: unexpected end of subtree
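The subquery in the HQL string looks like the trigger here: "select GrpExclusion where grp_no = ?" has no from clause, and grp_no, trgt_gndr_cd, sys_src_cd and the other identifiers are database column names rather than mapped property names. HQL only understands mapped properties, and an identifier the parser cannot resolve in a select position typically surfaces as exactly this NullPointerException inside SessionFactoryHelper.findSQLFunction. A hedged rewrite is sketched below; the property names (ge.groupNumber, mt.targetGenderCode and so on) are assumptions, since the MessageTarget and GrpExclusion classes are not shown, so substitute whatever the entities actually expose, and add a correlation to mt inside the exists clause if the exclusion is meant to apply per target:

    // Sketch only: property names are placeholders for the real mapped properties.
    String hSql =
          "from MessageTarget mt where "
        + "not exists (from GrpExclusion ge where ge.groupNumber = :groupId) and "
        + "(mt.targetGenderCode = 'A' or mt.targetGenderCode = :gender) and "
        + "mt.systemSourceCode = :systemCode and "
        + "mt.businessSegmentCode = :businessCode and "
        + "mt.targetLowAge <= :age and mt.targetHighAge >= :age and "
        + "(mt.effectiveDate is null or mt.effectiveDate <= :now) and "
        + "(mt.termDate is null or mt.termDate >= :now)";

    results = session.createQuery(hSql)
        .setParameter("groupId", groupId)
        .setParameter("gender", gender)
        .setParameter("systemCode", systemCode)
        .setParameter("businessCode", businessCode)
        .setInteger("age", age)
        .setTimestamp("now", new Date())
        .list();

Named parameters also remove the need to keep the positional indexes in step with the growing where clause.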

    Read the article

  • C# WCF Server retrieves 'List<T>' with 1 entry, but client doesn't receive it?! Please help Urgentl

    - by Neville
    Hi Everyone, I've been battling and trying to research this issue for over 2 days now with absolutely no luck. I am trying to retrieve a list of clients from the server (server using fluentNHibernate). The client object is as follow: [DataContract] //[KnownType(typeof(System.Collections.Generic.List<ContactPerson>))] //[KnownType(typeof(System.Collections.Generic.List<Address>))] //[KnownType(typeof(System.Collections.Generic.List<BatchRequest>))] //[KnownType(typeof(System.Collections.Generic.List<Discount>))] [KnownType(typeof(EClientType))] [KnownType(typeof(EComType))] public class Client { #region Properties [DataMember] public virtual int ClientID { get; set; } [DataMember] public virtual EClientType ClientType { get; set; } [DataMember] public virtual string RegisterID {get; set;} [DataMember] public virtual string HerdCode { get; set; } [DataMember] public virtual string CompanyName { get; set; } [DataMember] public virtual bool InvoicePerBatch { get; set; } [DataMember] public virtual EComType ResultsComType { get; set; } [DataMember] public virtual EComType InvoiceComType { get; set; } //[DataMember] //public virtual IList<ContactPerson> Contacts { get; set; } //[DataMember] //public virtual IList<Address> Addresses { get; set; } //[DataMember] //public virtual IList<BatchRequest> Batches { get; set; } //[DataMember] //public virtual IList<Discount> Discounts { get; set; } #endregion #region Overrides public override bool Equals(object obj) { var other = obj as Client; if (other == null) return false; return other.GetHashCode() == this.GetHashCode(); } public override int GetHashCode() { return ClientID.GetHashCode() | ClientType.GetHashCode() | RegisterID.GetHashCode() | HerdCode.GetHashCode() | CompanyName.GetHashCode() | InvoicePerBatch.GetHashCode() | ResultsComType.GetHashCode() | InvoiceComType.GetHashCode();// | Contacts.GetHashCode() | //Addresses.GetHashCode() | Batches.GetHashCode() | Discounts.GetHashCode(); } #endregion } As you can see, I have allready tried to remove the sub-lists, though even with this simplified version of the client I still run into the propblem. my fluent mapping is: public class ClientMap : ClassMap<Client> { public ClientMap() { Table("Clients"); Id(p => p.ClientID); Map(p => p.ClientType).CustomType<EClientType>(); ; Map(p => p.RegisterID); Map(p => p.HerdCode); Map(p => p.CompanyName); Map(p => p.InvoicePerBatch); Map(p => p.ResultsComType).CustomType<EComType>(); Map(p => p.InvoiceComType).CustomType<EComType>(); //HasMany<ContactPerson>(p => p.Contacts) // .KeyColumns.Add("ContactPersonID") // .Inverse() // .Cascade.All(); //HasMany<Address>(p => p.Addresses) // .KeyColumns.Add("AddressID") // .Inverse() // .Cascade.All(); //HasMany<BatchRequest>(p => p.Batches) // .KeyColumns.Add("BatchID") // .Inverse() // .Cascade.All(); //HasMany<Discount>(p => p.Discounts) // .KeyColumns.Add("DiscountID") // .Inverse() // .Cascade.All(); } The client method, seen below, connects to the server. The server retrieves the list, and everything looks right in the object, still, when it returns, the client doesn't receive anything (it receive a List object, but with nothing in it. 
Herewith the calling method: public List<s.Client> GetClientList() { try { s.DataServiceClient svcClient = new s.DataServiceClient(); svcClient.Open(); List<s.Client> clients = new List<s.Client>(); clients = svcClient.GetClientList().ToList<s.Client>(); svcClient.Close(); //when receiving focus from server, the clients object has a count of 0 return clients; } catch (Exception e) { MessageBox.Show(e.Message); } return null; } and the server method: public IList<Client> GetClientList() { var clients = new List<Client>(); try { using (var session = SessionHelper.OpenSession()) { clients = session.Linq<Client>().Where(p => p.ClientID > 0).ToList<Client>(); } } catch (Exception e) { EventLog.WriteEntry("eCOWS.Data", e.Message); } return clients; //returns a list with 1 client in it } the server method interface is: [UseNetDataContractSerializer] [OperationContract] IList<Client> GetClientList(); for final references, here is my client app.config entries: <system.serviceModel> <bindings> <netTcpBinding> <binding name="NetTcpBinding_IDataService" listenBacklog="10" maxConnections="10" transferMode="Buffered" transactionProtocol="OleTransactions" maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" receiveTimeout="00:10:00" sendTimeout="00:10:00"> <readerQuotas maxDepth="51200000" maxStringContentLength="51200000" maxArrayLength="51200000" maxBytesPerRead="51200000" maxNameTableCharCount="51200000" /> <security mode="Transport"/> </binding> </netTcpBinding> </bindings> <client> <endpoint address="net.tcp://localhost:9000/eCOWS/DataService" binding="netTcpBinding" bindingConfiguration="NetTcpBinding_IDataService" contract="eCowsDataService.IDataService" name="NetTcpBinding_IDataService" behaviorConfiguration="eCowsEndpointBehavior"> </endpoint> <endpoint address="MEX" binding="mexHttpBinding" contract="IMetadataExchange" /> </client> <behaviors> <endpointBehaviors> <behavior name="eCowsEndpointBehavior"> <dataContractSerializer maxItemsInObjectGraph="2147483647"/> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> and my server app.config: <system.serviceModel> <bindings> <netTcpBinding> <binding name="netTcpBinding" maxConnections="10" listenBacklog="10" transferMode="Buffered" transactionProtocol="OleTransactions" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647" sendTimeout="00:10:00" receiveTimeout="00:10:00"> <readerQuotas maxDepth="51200000" maxStringContentLength="51200000" maxArrayLength="51200000" maxBytesPerRead="51200000" maxNameTableCharCount="51200000" /> <security mode="Transport"/> </binding> </netTcpBinding> </bindings> <services> <service name="eCows.Data.Services.DataService" behaviorConfiguration="eCowsServiceBehavior"> <host> <baseAddresses> <add baseAddress="http://localhost:9001/eCOWS/" /> <add baseAddress="net.tcp://localhost:9000/eCOWS/" /> </baseAddresses> </host> <endpoint address="DataService" binding="netTcpBinding" contract="eCows.Data.Services.IDataService" behaviorConfiguration="eCowsEndpointBehaviour"> </endpoint> <endpoint address="MEX" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services> <behaviors> <endpointBehaviors> <behavior name="eCowsEndpointBehaviour"> <dataContractSerializer maxItemsInObjectGraph="2147483647" /> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="eCowsServiceBehavior"> <serviceMetadata httpGetEnabled="True"/> <serviceThrottling maxConcurrentCalls="10" maxConcurrentSessions="10"/> <serviceDebug includeExceptionDetailInFaults="False" /> </behavior> <behavior 
name="MexBehaviour"> <serviceMetadata /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> I use to run into "socket closed / network or timeout" errors, and the trace showed clearly that on the callback it was looking for a listening endpoint, but couldn't find one. Anyway, after adding the UseNetSerializer that error went away, yet now I'm just not getting anything. Oh PS. if I add all the commented out List items, I still retrieve an entry from the DB, but also still not receive anything on the client. if I remove the [UseNetDataContractSerializer] I get the following error(s) in the svclog : WARNING: Description Faulted System.ServiceModel.Channels.ServerSessionPreambleConnectionReader+ServerFramingDuplexSessionChannel WARNING: Description Faulted System.ServiceModel.Channels.ServiceChannel ERROR: Initializing[eCows.Data.Models.Client#3]-failed to lazily initialize a collection of role: eCows.Data.Models.Client.Addresses, no session or session was closed ... ERROR: Could not find default endpoint element that references contract 'ILogbookManager' in the ServiceModel client configuration section. This might be because no configuration file was found for your application, or because no endpoint element matching this contract could be found in the client element. If I add a .Not.LazyLoad to the List mapping items, I'm back at not receiving errors, but also not receiving any client information.. Sigh! Please, if anyone can help with this I'd be extremely grateful. I'm probably just missing something small.. but... what is it :) hehe. Thanks in advance! Neville

    Read the article

  • Memory leak / GLib issue.

    - by Andrei Ciobanu
    1: /* 2: * File: xyn-playlist.c 3: * Author: Andrei Ciobanu 4: * 5: * Created on June 4, 2010, 12:47 PM 6: */ 7:   8: #include <dirent.h> 9: #include <glib.h> 10: #include <stdio.h> 11: #include <stdlib.h> 12: #include <sys/stat.h> 13: #include <unistd.h> 14:   15: /** 16: * Returns a list all the file(paths) from a directory. 17: * Returns 'NULL' if a certain error occurs. 18: * @param dir_path. 19: * @param A list of gchars* indicating what file patterns to detect. 20: */ 21: GSList *xyn_pl_get_files(const gchar *dir_path, GSList *file_patterns) { 22: /* Returning list containing file paths */ 23: GSList *fpaths = NULL; 24: /* Used to scan directories for subdirs. Acts like a 25: * stack, to avoid recursion. */ 26: GSList *dirs = NULL; 27: /* Current dir */ 28: DIR *cdir = NULL; 29: /* Current dir entries */ 30: struct dirent *cent = NULL; 31: /* File stats */ 32: struct stat cent_stat; 33: /* dir_path duplicate, on the heap */ 34: gchar *dir_pdup; 35:   36: if (dir_path == NULL) { 37: return NULL; 38: } 39:   40: dir_pdup = g_strdup((const gchar*) dir_path); 41: dirs = g_slist_append(dirs, (gpointer) dir_pdup); 42: while (dirs != NULL) { 43: cdir = opendir((const gchar*) dirs->data); 44: if (cdir == NULL) { 45: g_slist_free(dirs); 46: g_slist_free(fpaths); 47: return NULL; 48: } 49: chdir((const gchar*) dirs->data); 50: while ((cent = readdir(cdir)) != NULL) { 51: lstat(cent->d_name, &cent_stat); 52: if (S_ISDIR(cent_stat.st_mode)) { 53: if (g_strcmp0(cent->d_name, ".") == 0 || 54: g_strcmp0(cent->d_name, "..") == 0) { 55: /* Skip "." and ".." dirs */ 56: continue; 57: } 58: dirs = g_slist_append(dirs, 59: g_strconcat((gchar*) dirs->data, "/", cent->d_name, NULL)); 60: } else { 61: fpaths = g_slist_append(fpaths, 62: g_strconcat((gchar*) dirs->data, "/", cent->d_name, NULL)); 63: } 64: } 65: g_free(dirs->data); 66: dirs = g_slist_delete_link(dirs, dirs); 67: closedir(cdir); 68: } 69: return fpaths; 70: } 71:   72: int main(int argc, char** argv) { 73: GSList *l = NULL; 74: l = xyn_pl_get_files("/home/andrei/Music", NULL); 75: g_slist_foreach(l,(GFunc)printf,NULL); 76: printf("%d\n",g_slist_length(l)); 77: g_slist_free(l); 78: return (0); 79: } 80:   81:   82: -----------------------------------------------------------------------------------------------==15429== 83: ==15429== HEAP SUMMARY: 84: ==15429== in use at exit: 751,451 bytes in 7,263 blocks 85: ==15429== total heap usage: 8,611 allocs, 1,348 frees, 22,898,217 bytes allocated 86: ==15429== 87: ==15429== 120 bytes in 1 blocks are possibly lost in loss record 1 of 11 88: ==15429== at 0x4024106: memalign (vg_replace_malloc.c:581) 89: ==15429== by 0x4024163: posix_memalign (vg_replace_malloc.c:709) 90: ==15429== by 0x40969C1: ??? 
(in /lib/libglib-2.0.so.0.2400.1) 91: ==15429== by 0x40971F6: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 92: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 93: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 94: ==15429== by 0x8048848: main (main.c:18) 95: ==15429== 96: ==15429== 129 bytes in 1 blocks are possibly lost in loss record 2 of 11 97: ==15429== at 0x4024F20: malloc (vg_replace_malloc.c:236) 98: ==15429== by 0x4081243: g_malloc (in /lib/libglib-2.0.so.0.2400.1) 99: ==15429== by 0x409B85B: g_strconcat (in /lib/libglib-2.0.so.0.2400.1) 100: ==15429== by 0x80489FE: xyn_pl_get_files (xyn-playlist.c:62) 101: ==15429== by 0x8048848: main (main.c:18) 102: ==15429== 103: ==15429== 360 bytes in 3 blocks are possibly lost in loss record 3 of 11 104: ==15429== at 0x4024106: memalign (vg_replace_malloc.c:581) 105: ==15429== by 0x4024163: posix_memalign (vg_replace_malloc.c:709) 106: ==15429== by 0x40969C1: ??? (in /lib/libglib-2.0.so.0.2400.1) 107: ==15429== by 0x4097222: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 108: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 109: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 110: ==15429== by 0x8048848: main (main.c:18) 111: ==15429== 112: ==15429== 508 bytes in 1 blocks are still reachable in loss record 4 of 11 113: ==15429== at 0x402425F: calloc (vg_replace_malloc.c:467) 114: ==15429== by 0x408113B: g_malloc0 (in /lib/libglib-2.0.so.0.2400.1) 115: ==15429== by 0x409624D: ??? (in /lib/libglib-2.0.so.0.2400.1) 116: ==15429== by 0x409710C: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 117: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 118: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 119: ==15429== by 0x8048848: main (main.c:18) 120: ==15429== 121: ==15429== 508 bytes in 1 blocks are still reachable in loss record 5 of 11 122: ==15429== at 0x402425F: calloc (vg_replace_malloc.c:467) 123: ==15429== by 0x408113B: g_malloc0 (in /lib/libglib-2.0.so.0.2400.1) 124: ==15429== by 0x409626F: ??? (in /lib/libglib-2.0.so.0.2400.1) 125: ==15429== by 0x409710C: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 126: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 127: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 128: ==15429== by 0x8048848: main (main.c:18) 129: ==15429== 130: ==15429== 508 bytes in 1 blocks are still reachable in loss record 6 of 11 131: ==15429== at 0x402425F: calloc (vg_replace_malloc.c:467) 132: ==15429== by 0x408113B: g_malloc0 (in /lib/libglib-2.0.so.0.2400.1) 133: ==15429== by 0x4096291: ??? (in /lib/libglib-2.0.so.0.2400.1) 134: ==15429== by 0x409710C: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 135: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 136: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 137: ==15429== by 0x8048848: main (main.c:18) 138: ==15429== 139: ==15429== 1,200 bytes in 10 blocks are possibly lost in loss record 7 of 11 140: ==15429== at 0x4024106: memalign (vg_replace_malloc.c:581) 141: ==15429== by 0x4024163: posix_memalign (vg_replace_malloc.c:709) 142: ==15429== by 0x40969C1: ??? 
(in /lib/libglib-2.0.so.0.2400.1) 143: ==15429== by 0x40971F6: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 144: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 145: ==15429== by 0x8048A0D: xyn_pl_get_files (xyn-playlist.c:61) 146: ==15429== by 0x8048848: main (main.c:18) 147: ==15429== 148: ==15429== 2,040 bytes in 1 blocks are still reachable in loss record 8 of 11 149: ==15429== at 0x402425F: calloc (vg_replace_malloc.c:467) 150: ==15429== by 0x408113B: g_malloc0 (in /lib/libglib-2.0.so.0.2400.1) 151: ==15429== by 0x40970AB: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 152: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 153: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 154: ==15429== by 0x8048848: main (main.c:18) 155: ==15429== 156: ==15429== 4,320 bytes in 36 blocks are possibly lost in loss record 9 of 11 157: ==15429== at 0x4024106: memalign (vg_replace_malloc.c:581) 158: ==15429== by 0x4024163: posix_memalign (vg_replace_malloc.c:709) 159: ==15429== by 0x40969C1: ??? (in /lib/libglib-2.0.so.0.2400.1) 160: ==15429== by 0x4097222: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 161: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 162: ==15429== by 0x80489D2: xyn_pl_get_files (xyn-playlist.c:58) 163: ==15429== by 0x8048848: main (main.c:18) 164: ==15429== 165: ==15429== 56,640 bytes in 472 blocks are possibly lost in loss record 10 of 11 166: ==15429== at 0x4024106: memalign (vg_replace_malloc.c:581) 167: ==15429== by 0x4024163: posix_memalign (vg_replace_malloc.c:709) 168: ==15429== by 0x40969C1: ??? (in /lib/libglib-2.0.so.0.2400.1) 169: ==15429== by 0x4097222: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 170: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 171: ==15429== by 0x8048A0D: xyn_pl_get_files (xyn-playlist.c:61) 172: ==15429== by 0x8048848: main (main.c:18) 173: ==15429== 174: ==15429== 685,118 bytes in 6,736 blocks are definitely lost in loss record 11 of 11 175: ==15429== at 0x4024F20: malloc (vg_replace_malloc.c:236) 176: ==15429== by 0x4081243: g_malloc (in /lib/libglib-2.0.so.0.2400.1) 177: ==15429== by 0x409B85B: g_strconcat (in /lib/libglib-2.0.so.0.2400.1) 178: ==15429== by 0x80489FE: xyn_pl_get_files (xyn-playlist.c:62) 179: ==15429== by 0x8048848: main (main.c:18) 180: ==15429== 181: ==15429== LEAK SUMMARY: 182: ==15429== definitely lost: 685,118 bytes in 6,736 blocks 183: ==15429== indirectly lost: 0 bytes in 0 blocks 184: ==15429== possibly lost: 62,769 bytes in 523 blocks 185: ==15429== still reachable: 3,564 bytes in 4 blocks 186: ==15429== suppressed: 0 bytes in 0 blocks 187: ==15429== 188: ==15429== For counts of detected and suppressed errors, rerun with: -v 189: ==15429== ERROR SUMMARY: 7 errors from 7 contexts (suppressed: 17 from 8) 190: ---------------------------------------------------------------------------------------------- I am using the above code in order to create a list with all the filepaths in a certain directory. (In my case fts.h or ftw.h are not an option). I am using GLib as the data structures library. Still I have my doubts in regarding the way GLib is allocating, de-allocating memory ? When invoking g_slist_free(list) i also free the data contained by the elements ? Why all those memory leaks appear ? Is valgrind a suitable tool for profilinf memory issues when using a complex library like GLib ? 
LATER EDIT: If I call g_slist_foreach(l, (GFunc) g_free, NULL); before freeing the list, the valgrind report changes: all the memory from 'definitely lost' moves to 'indirectly lost'. I still don't see the point. Don't the GLib collections provide a way to free their contents?
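g_slist_free() only releases the GSList cells; the strings built with g_strdup() and g_strconcat() are separate heap blocks, which is exactly what the "definitely lost ... g_strconcat" record points at. The foreach/free pair from the later edit is therefore the right idea, and the two calls belong together (g_slist_free_full() would do both in one step, but it only appeared in GLib 2.28, newer than the 2.24.1 visible in the valgrind output). A sketch of main() with the extra pass, assuming the same includes and prototype as xyn-playlist.c:

    int main(int argc, char **argv) {
        GSList *l = NULL;
        l = xyn_pl_get_files("/home/andrei/Music", NULL);
        g_slist_foreach(l, (GFunc) printf, NULL);
        printf("%d\n", g_slist_length(l));
        /* Free the element data first, then the list cells:
         * g_slist_free() never touches the g_strconcat()'d paths. */
        g_slist_foreach(l, (GFunc) g_free, NULL);
        g_slist_free(l);
        return 0;
    }

The error path inside xyn_pl_get_files() has the same problem: the two g_slist_free() calls there drop the lists but leak every string still attached to them. The remaining "possibly lost" and "still reachable" records ending in g_slice_alloc come from GLib's slice allocator caching memory; running valgrind with G_SLICE=always-malloc and G_DEBUG=gc-friendly set in the environment usually clears those reports, so valgrind remains a perfectly suitable tool alongside GLib.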

    Read the article
