Search Results

Search found 119 results on 5 pages for 'xa'.

Page 3/5 | < Previous Page | 1 2 3 4 5  | Next Page >

  • polynomial multiplication using fast Fourier transform

    - by mawia
    I am going through the above topic from CLRS (Cormen), page 834, and I got stuck at this point. Can anybody please explain how the expression $A(x) = A^{[0]}(x^2) + x\,A^{[1]}(x^2)$ follows from $A(x) = \sum_{j=0}^{n-1} a_j x^j$, where $A^{[0]}(x) = a_0 + a_2 x + a_4 x^2 + \dots + a_{n-2} x^{n/2-1}$ and $A^{[1]}(x) = a_1 + a_3 x + a_5 x^2 + \dots + a_{n-1} x^{n/2-1}$? With regards, thanks.
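
    The identity is just the polynomial split into its even- and odd-indexed coefficients. A short derivation using only the definitions quoted above:

        A(x) = \sum_{j=0}^{n-1} a_j x^j
             = \sum_{k=0}^{n/2-1} a_{2k} x^{2k} + \sum_{k=0}^{n/2-1} a_{2k+1} x^{2k+1}
             = \sum_{k=0}^{n/2-1} a_{2k} (x^2)^k + x \sum_{k=0}^{n/2-1} a_{2k+1} (x^2)^k
             = A^{[0]}(x^2) + x\,A^{[1]}(x^2)

    Substituting x^2 as the argument of A^{[0]} and A^{[1]} reproduces exactly the even- and odd-degree terms of A, which is what makes the FFT's divide-and-conquer recursion work.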

    Read the article

  • solving the origin of two vectors

    - by Mike
    I have the two endpoints (xa, ya) and (xb, yb) of two vectors, respectively a and b, originating from the same point (xo, yo). Also, I know that |a| = |b| + s, where s is a constant. I tried to compute the origin (xo, yo) but seem to fail at some point. How can I solve this?
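
    Writing the stated condition out as an equation shows why the computation keeps failing: |a| = |b| + s is a single equation in the two unknowns (x_o, y_o),

        \sqrt{(x_a - x_o)^2 + (y_a - y_o)^2} - \sqrt{(x_b - x_o)^2 + (y_b - y_o)^2} = s

    and a constant difference of distances to two fixed points is precisely the definition of one branch of a hyperbola with foci (x_a, y_a) and (x_b, y_b). Every point on that branch is a valid origin, so a second, independent constraint (a direction, a known coordinate, etc.) is needed before (x_o, y_o) is unique.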

    Read the article

  • How to set the transaction isolation level of a Postgres data source

    - by Michael Wiles
    How do I set the global transaction isolation level for a Postgres data source? I'm running on JBoss and I'm using Hibernate to connect. I know that I can set the isolation level from Hibernate; does this work for Postgres? This would be done by setting the hibernate.connection.isolation Hibernate property to 1, 2, 4 or 8 - the values of the relevant java.sql.Connection static fields. I'm using the org.postgresql.xa.PGXADataSource.
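
    Postgres does honor per-connection isolation levels set this way (with the caveat that it silently treats READ UNCOMMITTED as READ COMMITTED). A quick way to confirm the database side independently of Hibernate is a minimal sketch with psycopg2; the connection string is a placeholder:

        import psycopg2
        from psycopg2 import extensions

        conn = psycopg2.connect("dbname=test user=postgres")  # placeholder DSN

        # same semantics as hibernate.connection.isolation = 8 (SERIALIZABLE)
        conn.set_isolation_level(extensions.ISOLATION_LEVEL_SERIALIZABLE)

        cur = conn.cursor()
        cur.execute("SHOW transaction_isolation")  # reports the effective level
        print(cur.fetchone()[0])
        conn.close()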

    Read the article

  • Unicode Regex; Invalid XML characters

    - by Ambush Commander
    The list of valid XML characters is well known; as defined by the spec, it is: #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]. My question is whether or not it's possible to make a PCRE regular expression for this (or its inverse) without actually hard-coding the codepoints, by using Unicode general categories. An inverse might be something like [\p{Cc}\p{Cs}\p{Cn}], except that this improperly covers linefeeds and tabs and misses some other invalid characters.
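
    General categories alone probably cannot express this class exactly: the XML restriction is positional (three specific control characters are allowed while the rest of Cc is not), which no category combination captures. For reference, the hard-coded class is short; sketched here in Python, where PCRE would use \x{...} escapes for the same ranges:

        import re

        # inverse of the valid XML 1.0 character class quoted above
        INVALID_XML = re.compile(
            '[^\u0009\u000A\u000D\u0020-\uD7FF'
            '\uE000-\uFFFD\U00010000-\U0010FFFF]'
        )

        def strip_invalid(text: str) -> str:
            """Drop characters that may not appear in an XML 1.0 document."""
            return INVALID_XML.sub('', text)

        print(strip_invalid('ok\x00bad'))  # -> 'okbad'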

    Read the article

  • Question about multiple 'catch'

    - by chun
    Can anyone tell me why the output of this class is 'xa'? Why won't the other exceptions be caught?

        public class Tree {
            public static void main(String... args) {
                try {
                    // the nested Exception is only used to build the message string
                    throw new NullPointerException(new Exception().toString());
                } catch (NullPointerException e) {
                    System.out.print("x");   // first matching catch wins
                } catch (RuntimeException e) {
                    System.out.print("y");   // never reached for an NPE
                } catch (Exception e) {
                    System.out.print("z");   // never reached for an NPE
                } finally {
                    System.out.println("a"); // finally always runs
                }
            }
        }
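
    Exactly one catch clause runs per try block: the first one, top to bottom, whose type matches the thrown exception. A NullPointerException matches the first clause, so "x" is printed, the remaining clauses are skipped, and the finally block then prints "a". The same first-match rule, sketched in Python for comparison:

        class Base(Exception): pass
        class Derived(Base): pass

        try:
            raise Derived()
        except Derived:            # first matching handler wins...
            print("x", end="")
        except Base:               # ...so this is skipped, even though
            print("y", end="")     # a Derived is also a Base
        finally:
            print("a")             # finally always runs -> output "xa"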

    Read the article

  • ubuntu 10.04; kvm bridged networking not working with public ip addresses

    - by senorsmile
    I have a dedicated hosted server box with ubuntu 10.04 64-bit installed. I would like to run KVM with ubuntu 8.04 installed for some PHP 5.2-compatible apps (they don't work right with PHP 5.3, the default in ubuntu 10.04). I installed KVM as instructed at https://help.ubuntu.com/community/KVM/Installation . I installed the VM using virt-manager; I never could figure out how to use virt-install or any of those automated installers, so I just installed it using the disc. I set up bridged networking as per https://help.ubuntu.com/community/KVM/Networking . However, the bridged connection doesn't work. Here's my /etc/network/interfaces on the host, running ubuntu 10.04 (with the specific public IPs blanked):

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet manual

        auto br0
        iface br0 inet static
                address xx.xx.xx.xx
                netmask 255.255.255.248
                gateway xx.xx.xx.xa
                bridge_ports eth0
                bridge_stp on
                bridge_fd 0
                bridge_maxwait 10

    Here's my /etc/network/interfaces on the guest, running ubuntu 8.04:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
                address xx.xx.xx.xy
                netmask 255.255.255.248
                gateway xx.xx.xx.xa

    The two machines can communicate with each other, but the guest VM can't access anyone in the real world. Here's my /etc/libvirt/qemu/store_804.xml:

        <domain type='kvm'>
          <name>store_804</name>
          <uuid>27acfb75-4f90-a34c-9a0b-70a6927ae84c</uuid>
          <memory>2097152</memory>
          <currentMemory>2097152</currentMemory>
          <vcpu>2</vcpu>
          <os>
            <type arch='x86_64' machine='pc-0.12'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <clock offset='utc'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/var/lib/libvirt/images/store_804.img'/>
              <target dev='hda' bus='ide'/>
            </disk>
            <disk type='block' device='cdrom'>
              <driver name='qemu' type='raw'/>
              <target dev='hdc' bus='ide'/>
              <readonly/>
            </disk>
            <interface type='bridge'>
              <mac address='52:54:00:26:0b:c6'/>
              <source bridge='br0'/>
              <model type='virtio'/>
            </interface>
            <console type='pty'>
              <target port='0'/>
            </console>
            <console type='pty'>
              <target port='0'/>
            </console>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='-1' autoport='yes'/>
            <sound model='es1370'/>
            <video>
              <model type='cirrus' vram='9216' heads='1'/>
            </video>
          </devices>
        </domain>

    Any idea where I've gone wrong?

    Read the article

  • Data Source Connection Pool Sizing

    - by Steve Felts
    One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead. That argues for using the data source instead of accessing the database driver directly.

    Configuring the size of the pool in the data source is somewhere between an art and a science - this article will try to move it closer to science. From the beginning, the WLS data source has had initial capacity and maximum capacity configuration values. When the system starts up and when it shrinks, initial capacity is used. The pool can grow to maximum capacity. Customers found that they might want to set the initial capacity to 0 (more on that later) but didn't want the pool to shrink to 0. In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink. If minimum capacity is not set, it defaults to the initial capacity for upward compatibility. We also did some work on the shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically the unused connections were all released) was changed to shrink by half of the unused connections.

    The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity. Doing this creates all connections at startup, avoiding creating connections on demand, and the pool is stable. However, there are a number of reasons not to take this simple approach.

    When WLS is booted, the deployment of the data source includes synchronously creating the connections. The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS but none that are available). Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time). WLS has a solution for that by setting the login delay seconds on the pool, but that also increases the boot time.

    There are a number of cases where it is desirable to set the initial capacity to 0. By doing that, the overhead of creating connections is deferred out of the boot and the database doesn't need to be available. An application may not want WLS to automatically connect to the database until it is actually needed, such as for some cold/warm failover configurations.

    There are a number of cases where minimum capacity should be less than maximum capacity. Connections are generally expensive to keep around. They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process). Depending on the vendor, connection usage may cost money. If the workload is not constant, then database connections can be freed up by shrinking the pool when connections are not in use. When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment.

    Shrinking is an effective technique for clearing the pool when connections are not in use. In addition to the obvious reason that there are times when the workload is lighter, there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable. There are also some data source features where the connection has state and cannot be used again unless the state matches the request. Examples of this are identity-based pooling, where the connection has a particular owner, and XA affinity, where the connection is associated with a particular RAC node. At this point, WLS does not re-purpose (discard/replace) connections, and shrinking is a way to get rid of the unused existing connection and get a new one with the correct state when needed.

    So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity. Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common. The applications should be written to only reserve and close connections as needed, but multiple statements, if needed, should be done in one reservation (don't get/close more often than necessary). This means that the size of the pool is likely to be significantly smaller than the number of users. If possible, you can pick a size and see how it performs under simulated or real load. There is a high-water mark statistic (ActiveConnectionsHighCount) that tracks the maximum connections concurrently used. In general, you want the size to be big enough so that you never run out of connections, but no bigger. It will need to deal with spikes in usage, which is where shrinking after the spike is important. Of course, the database capacity also has a big influence on the decision, since it's important not to overload the database machine. Planning also needs to happen if you are running in a Multi-Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down. For XA affinity, additional headroom is also recommended. In summary, setting initial and maximum capacity to be the same may be simple, but there are many other factors that may be important in making the decision about sizing.

    Read the article

  • MySQL 5.1.49 freezing every two days

    - by maximus
    Hi all, our MySQL system is "freezing" every two days. By "freezing" I mean the following:

    - it doesn't respond to ping
    - we can't log in with SSH
    - we don't get any answer from MySQL
    - there is no entry in the error logs, neither from Linux nor from MySQL

    We have already changed to completely new hardware and we have the same problem, so it's definitely not a hardware problem. We do not have any other software installed except a firewall (iptables rules). We can restart the server from another server using rsyslog (www.rsyslog.com) (software reset). Could someone help me by giving me some pointers as to what I could do to figure out the problem? I have included every detail about our settings. Thank you in advance for your help. Max.

    Our system parameters and settings:

    - System memory: 12GB
    - Processor: Intel i7-920 quad-core
    - Operating system: Debian 5 (lenny) 64-bit
    - MySQL 5.1.49
    - Databases: (a) a small phpBB forum, (b) a 6GB database, 3 tables with about 15 million rows

    my.cnf:

        #
        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # escpecially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port     = 3306
        socket   = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket   = /var/run/mysqld/mysqld.sock
        nice     = 0

        [mysqld]
        #
        # * Basic Settings
        #
        user     = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket   = /var/run/mysqld/mysqld.sock
        port     = 3306
        basedir  = /usr
        datadir  = /var/lib/mysql
        tmpdir   = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = our-ip-address
        #
        # * Fine Tuning
        #
        key_buffer         = 16M
        max_allowed_packet = 16M
        thread_stack       = 256K
        thread_cache_size  = 32
        max_connections    = 300
        table_cache        = 2048
        #thread_concurrency = 4

        # Used for InnoDB tables recommended to 50%-80% available memory
        innodb_buffer_pool_size = 6G
        # 20MB sometimes larger
        innodb_additional_mem_pool_size = 20M
        # 8M-16M is good for most situations
        innodb_log_buffer_size = 8M
        # Disable XA support because we do not use it
        innodb-support-xa = 0
        # 1 is default wich is 100% secure but 2 offers better performance
        innodb_flush_log_at_trx_commit = 1
        innodb_flush_method = O_DIRECT
        #innodb_thread_concurency = 8
        # Recommended 64M - 512M depending on server size
        innodb_log_file_size = 512M
        # One file per table
        innodb_file_per_table
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size  = 16M
        #query_cache_type = 1
        #query_cache_min_res_unit = 2K
        #join_buffer_size = 1M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file = /var/log/mysql/mysql.log
        #general_log      = 1
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time  = 2
        log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        #server-id = 1
        log_bin = /var/log/mysql/mysql-bin.log
        # WARNING: Using expire_logs_days without bin_log crashes the server! See README.Debian!
        expire_logs_days = 10
        max_binlog_size  = 100M
        #binlog_do_db     = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # * InnoDB plugin
        # As of MySQL 5.1.38, the InnoDB plugin from Oracle is included in the MySQL source code.
        # It has many improvements and better performances than the built-in InnoDB storage engine.
        # Please read http://www.innodb.com/products/innodb_plugin/ for more information.
        # Uncommenting the two following lines to use the InnoDB plugin.
        ignore_builtin_innodb
        plugin-load=innodb=ha_innodb_plugin.so
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 16M

        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #
        !includedir /etc/mysql/conf.d/

    UPDATE: After installing sysstat and configuring it to collect data every minute, I have the following data. I used sar to generate the output; the log file is too big to include here, so I uploaded it to box.net. The link is http://www.box.net/shared/xc6rh7qqob

    SECOND UPDATE: We started a ping command in the background, and that solved the problem. The server has now been working for more than a week. We still don't know what the problem was.

    Read the article

  • Zend Optimizer not Functioning Correctly on Plesk 9.3.0 VPS

    - by dallasclark
    I have a new VPS running Plesk 9.3.0 without many modifications to any settings. I've moved a site to this VPS and I'm receiving a page full of random gibberish characters like:

        Zend2003120702116268102798xù Ÿ2½}MŒ%ÇqæCwËg¸„ÖXXZ[ÆùÿCK¢FŠäš’(’¢-ÂÒèu¿zš6gºÇÝ=$Ec:-xá=èàƒÃ ôžL/`,¼'û$èdû$ð ›±OYïUUdfde½á›GâcWTfDdF|‘™‘QÕ_nN‡OÝ›Ÿ/ú9¾¢»"…çÎ =B³øo/=÷…?úúW?·/LX5¯ß½ ðtEÍ ãB„ð÷øìÞéåU®•òÊëZÈi^¿lN/NÎNoÞ›/šÅC׸”šÅLËÏåùÉ+Ü á¸a6Ê÷Ž..ϯrç…Õ–)Õþñòüvsz•{å mî!F³ã[çWsÖZ%k'-ÐÝ<¬þZ1B¡¼ "-ÏîH @/Ü´b.Ï›ù"ü tb¼Ò!”]œ¼ïŠ6–Ál \Ü;½hÎOößh®^“4#…s¡CÀ†æôUèP³Ð§3¦¬“; –j‡ìþb¤÷š»¶³Wçç7÷îÜ…w•bÞs«[ÆÎav,@ÿ´ÜéÖåÌfž¯þVÚlö‹½ÎÛØå#Èoòudñ^÷чW+ÕSsÐý¹w˜7Ÿò«{ò…?<Ìo1»èZÄN_ð³»·îqr÷Vs¾"ýµ¾§þˆ¡v Ù.j†Çï®#{îÞüÞú¿ºý²Q0âLõ$rv¥{»[à|sÝwxþðúy¯)þ • 7ÛŽ È^YËZá‘JV<|·g“l2£{µ«Ù›=é§eCÍîõÖ»ÓÖQtL´D?ε܃ÁªÇ3=ﯸ^=þAIÏjöÐÁ0¡ò¥ 2øÙŸÞçÝÊéqÔ€Lï÷*+Jo¬õLͺFøì x¨ÕìÛ'GH“æådD)ÿ:¨5¼q±¦rÖøLf“Ðj îÅõ¬éa÷[!_zöN?þ"™†á©›0Ý{ˆWóª‘ÁH4µx5+Ë^–Ž›·ÉöŠd1¹Õ¬

    phpinfo() shows PHP is running on the Zend engine. This server is unmanaged so I cannot ask the hosting provider for assistance. Any help big or small will be appreciated.

    Read the article

  • Looping through a batch file only if the response is "everything is okay"

    - by PeanutsMonkey
    I have a batch file that loops through the contents of a directory and compresses the files in the directory as follows:

        for %%a in (c:\data\*.*) do if "%%~xa" == "" "C:\Program Files\7-Zip\7za.exe" a -tzip -mx9 "%%a.zip" "%%a"

    Since I am using 7-Zip to compress the files, it prints the message "everything is okay" when it has successfully compressed a file, and it then moves on to the next file, if any. What I would like to do is the following:

    1. Only move to the next file if the response is "everything is okay".
    2. If the response is anything but "everything is okay", the error is logged.
    3. Since an error has occurred, it attempts to compress the file again.
    4. Once it has succeeded, i.e. "everything is okay", it goes on to the next file.

    Steps 3 and 4 only occur a maximum of 3 times before it gives up and moves on to the next file. How can I achieve this?
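
    Rather than parsing the "everything is okay" message, it is more robust to test the compressor's exit code: 7-Zip returns 0 on success and non-zero on any error. Here is the retry logic sketched in Python (the log file name is a placeholder; a pure batch version would express the same loop with call subroutines and errorlevel checks):

        import subprocess
        from pathlib import Path

        SEVENZIP = r"C:\Program Files\7-Zip\7za.exe"
        MAX_TRIES = 3

        for src in Path(r"c:\data").iterdir():
            if not src.is_file() or src.suffix:  # mirror the batch test:
                continue                         # extension-less files only
            for attempt in range(1, MAX_TRIES + 1):
                result = subprocess.run(
                    [SEVENZIP, "a", "-tzip", "-mx9", f"{src}.zip", str(src)])
                if result.returncode == 0:       # 7-Zip's "Everything is Okay"
                    break
                with open("compress-errors.log", "a") as log:
                    log.write(f"attempt {attempt} failed for {src} "
                              f"(exit code {result.returncode})\n")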

    Read the article

  • Linksys WRT120N wireless router not connecting to ADSL

    - by pradeetp
    I have a Linksys WRT120N wireless router that was connected to a Sterlite ADSL modem, and the internet connection was working fine through wireless access. I changed the ADSL router because it developed some power issues and got it replaced with the same model/brand from the service provider. However, I am not able to access the internet now. Here is the current setting:

    - Sterlite SAM300 XA, IP address: 192.168.2.1 (I changed it to this IP because the IP address of my ADSL router was the same)
    - Linksys WRT120N wireless router, IP address: 192.168.1.1

    If I connect my laptop to the ADSL router directly, the internet works fine. It is only when I access it wirelessly through the wireless router that I am not able to connect to the internet. The IP address setting has been configured to obtain an IP address automatically. I have Windows 7 Home edition. Could someone please help me fix this issue?

    Read the article

  • New Release of Oracle Berkeley DB

    - by Eric Jensen
    We are pleased to announce that a new release of Oracle Berkeley DB, version 11.2.5.2.28, is available today. Our latest release includes yet more value-added features for SQLite users, as well as several performance enhancements and new customer-requested features for the key-value pair API. We continue to provide technology leadership, features and performance for SQLite applications. This release introduces additional features that are not available in native SQLite, and adds functionality allowing customers to create richer, more scalable, more concurrent applications using the Berkeley DB SQL API.

    This release is compelling to Oracle's customers and partners because it:

    - delivers a complete, embeddable SQL92 database as a library under 1MB in size
    - is drop-in API compatible with SQLite version 3
    - offers no-oversight, zero-touch database administration
    - provides the industrial-quality, battle-tested Berkeley DB B-TREE for concurrent transactional data storage

    New features include:

    - MVCC support for even higher concurrency
    - direct SQL support for HA/replication
    - transactionally protected sequence number generation functions
    - lower memory requirements, shared memory regions, and faster/smaller memory use on startup
    - easier B-TREE page size configuration with the new "db_tuner" utility

    New key-value API features include:

    - HEAP access method for constrained disk-space applications
    - faster QUEUE access method operations for highly concurrent applications, up to 2-3X faster!
    - new X/Open compliant XA resource manager, easily integrated with Oracle Tuxedo
    - additional HA/replication management and communication options
    - and a lot more!

    BDB is hands-down the best edge, mobile, and embedded database available to developers. Downloads are available today on the Berkeley DB download page. Product documentation.

    Read the article

  • Top 10 solution documents for Weblogic Server J2EE Feb 2014 - May 2014

    - by jhpierce -Oracle
    The following are the top 10 documents linked to SRs as solutions for WebLogic Server J2EE issues, from February 2014 through May 2014.

    1. 1163020.1 How to configure Filtering class loader in weblogic.xml. To configure the Filtering Class Loader to specify that a certain package is loaded from an application, add a prefer-application-packages descriptor element.
    2. 1276593.1 WLS - How to supress servlet/JSP version details In WebLogic HTTP response header. The string "X-Powered-By: Servlet/2.4 JSP/2.0" is showing up in the servlet response header. How to stop WebLogic from including servlet/JSP version details in the X-Powered-By HTTP response header.
    3. 1490080.1 WebLogic Server 12.1.1.0 in a Cluster Environment Throws NotSerializableException for CDI Applications at com.sun.jersey.server.impl.cdi.CDIExtension. When running in a clustered environment, server start-up is not clean when you have CDI applications deployed.
    4. 1268138.1 Sample TwoWay SSL implementation for JAX-WS Webservice. In this sample the recipient checks for the initiator's public certificate. Note that the client certificate can be used for authentication.
    5. 1584779.1 Socket Leaks When Calling Web-Service Over SSL. This is known bug 16810786.
    6. 1598617.1 Secure WebService call throwing CANNOT RESOLVE URL FOR PROTOCOL HTTP/HTTPS through web server (Apache) plug-in.
    7. 1056121.1 How to Timeout Weblogic Webservice Client. How to time out a WebService client with and without using stubs.
    8. 1568638.1 When packaging Jersey JAX-RS libraries into webapp throws NoSuchMethodError(). Seen when attempting to include custom Jersey implementation libraries in a web application in an OSB domain.
    9. 1118264.1 WLS 10.3: Intermittent XA error: XAResource.XAER_RMERR. In WebLogic 10.3, a CMP EJB sometimes throws this exception.
    10. 1608951.1 How to get More Details About Error BEA-101215 Malformed Request. Request parsing failed Code: -1. Seen when accessing the application via a load balancer.

    Read the article

  • How-To: Run CMSDK against a RAC cluster

    - by frank.closheim
    Using CMSDK in a production environment often requires a robust, reliable and failover-enabled repository. When using Oracle Real Application Clusters (RAC) with your CMSDK repository, you need to have a specific configuration in place to support such a setup. This post will explain the configuration steps required when running CMSDK 9.0.4.6 with Oracle WebLogic Server (WLS).

    In the previous CMSDK 9.0.4.2 version, a RAC-enabled connect string looked like this:

        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac2)(PORT = 1521))
          (LOAD_BALANCE = NO)
          (FAILOVER = ON)
          (CONNECT_DATA =
            (SERVICE_NAME = rac)
            (failover_mode = (type=select)(method=basic))))

    CMSDK 9.0.4.6 makes use of data sources to connect to the underlying database. These data sources are configured inside your application server, such as Oracle WebLogic Server.

    In Oracle WebLogic Server 10.3.4, a single data source implementation has been introduced to support a RAC cluster. It responds to Fast Application Notification (FAN) events to provide Fast Connection Failover (FCF), Runtime Connection Load-Balancing (RCLB), and RAC instance graceful shutdown. XA affinity is supported at the global transaction Id level. The new feature is called WebLogic Active GridLink for RAC, which is implemented as the GridLink data source within WebLogic Server.

    This GridLink data source also works with Oracle Single Client Access Name (SCAN). SCAN is a feature used in RAC environments that provides a single name for clients to access any Oracle Database running in a cluster. You can think of SCAN as a cluster alias for databases in the cluster. The benefit is that the client's connect information does not need to change if you add or remove nodes or databases in the cluster.

    The CMSDK 9.0.4.6 documentation describes how to create a regular JDBC data source named jdbc/OracleDS. Please refer to the following document, which describes in detail how to create a GridLink data source in WLS.

    Read the article

  • WebLogic Server 12c (Japanese-language article)

    - by ???02
    [Japanese-language article; most of the original text was lost to encoding damage. The legible fragments indicate that it covered the high-availability features of WebLogic Server 12c: Active GridLink for Oracle RAC (runtime connection load balancing and XA affinity with RAC nodes), web session affinity, the JDBC TLOG store for keeping transaction logs in an Oracle Database protected by Oracle Data Guard, and the RESTful Management Services, which expose monitoring data over HTTP as HTML, JSON, or XML, along with Java EE 6 / Java SE 7 support.]

    Read the article

  • Neo4j OutOfMemory problem

    - by Edward83
    Hi! This is the source code of my Main.java. It was grabbed from the neo4j-apoc-1.0 examples. The goal of the modification is to store 1M records of 2 nodes and 1 relation:

        package javaapplication2;

        import org.neo4j.graphdb.GraphDatabaseService;
        import org.neo4j.graphdb.Node;
        import org.neo4j.graphdb.RelationshipType;
        import org.neo4j.graphdb.Transaction;
        import org.neo4j.kernel.EmbeddedGraphDatabase;

        public class Main {

            private static final String DB_PATH = "neo4j-store-1M";
            private static final String NAME_KEY = "name";

            private static enum ExampleRelationshipTypes implements RelationshipType {
                EXAMPLE
            }

            public static void main(String[] args) {
                GraphDatabaseService graphDb = null;
                try {
                    System.out.println("Init database...");
                    graphDb = new EmbeddedGraphDatabase(DB_PATH);
                    registerShutdownHook(graphDb);
                    System.out.println("Start of creating database...");
                    int valIndex = 0;
                    for (int i = 0; i < 1000; ++i) {
                        for (int j = 0; j < 1000; ++j) {
                            // one small transaction per node pair: 1M transactions in total
                            Transaction tx = graphDb.beginTx();
                            try {
                                Node firstNode = graphDb.createNode();
                                firstNode.setProperty(NAME_KEY, "Hello" + valIndex);
                                Node secondNode = graphDb.createNode();
                                secondNode.setProperty(NAME_KEY, "World" + valIndex);
                                firstNode.createRelationshipTo(secondNode,
                                        ExampleRelationshipTypes.EXAMPLE);
                                tx.success();
                                ++valIndex;
                            } finally {
                                tx.finish();
                            }
                        }
                    }
                    System.out.println("Ok, client processing finished!");
                } finally {
                    System.out.println("Shutting down database ...");
                    graphDb.shutdown();
                }
            }

            private static void registerShutdownHook(final GraphDatabaseService graphDb) {
                // Registers a shutdown hook for the Neo4j instance so that it
                // shuts down nicely when the VM exits (even if you "Ctrl-C" the
                // running example before it's completed)
                Runtime.getRuntime().addShutdownHook(new Thread() {
                    @Override
                    public void run() {
                        graphDb.shutdown();
                    }
                });
            }
        }

    After a few iterations (around 150K) I got this error message:

        java.lang.OutOfMemoryError: Java heap space
            at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
            at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
            at org.neo4j.kernel.impl.nioneo.store.PlainPersistenceWindow.<init>(PlainPersistenceWindow.java:30)
            at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.allocateNewWindow(PersistenceWindowPool.java:534)
            at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.refreshBricks(PersistenceWindowPool.java:430)
            at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.acquire(PersistenceWindowPool.java:122)
            at org.neo4j.kernel.impl.nioneo.store.CommonAbstractStore.acquireWindow(CommonAbstractStore.java:459)
            at org.neo4j.kernel.impl.nioneo.store.AbstractDynamicStore.updateRecord(AbstractDynamicStore.java:240)
            at org.neo4j.kernel.impl.nioneo.store.PropertyStore.updateRecord(PropertyStore.java:209)
            at org.neo4j.kernel.impl.nioneo.xa.Command$PropertyCommand.execute(Command.java:513)
            at org.neo4j.kernel.impl.nioneo.xa.NeoTransaction.doCommit(NeoTransaction.java:443)
            at org.neo4j.kernel.impl.transaction.xaframework.XaTransaction.commit(XaTransaction.java:316)
            at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commit(XaResourceManager.java:399)
            at org.neo4j.kernel.impl.transaction.xaframework.XaResourceHelpImpl.commit(XaResourceHelpImpl.java:64)
            at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:514)
            at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:571)
            at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:543)
            at org.neo4j.kernel.impl.transaction.TransactionImpl.commit(TransactionImpl.java:102)
            at org.neo4j.kernel.EmbeddedGraphDbImpl$TransactionImpl.finish(EmbeddedGraphDbImpl.java:329)
            at javaapplication2.Main.main(Main.java:62)
        28.05.2010 9:52:14 org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool logWarn
        WARNING: [neo4j-store-1M\neostore.propertystore.db.strings] Unable to allocate direct buffer

    Guys! Help me please, what did I do wrong and how can I repair it? Tested on Windows XP 32-bit SP3. Maybe the solution lies in creating a custom configuration? Thanks for every piece of advice!

    Read the article

  • Scala: Recursively building all paths in a graph?

    - by DarqMoth
    Trying to build all existing paths for an undirected graph defined as a map of edges, using the following algorithm:

    1. Start with a given vertex A.
    2. Find an edge (X.A, X.B) or (X.B, X.A); add this edge to the path.
    3. Find all edges Ys for which either (Y.C, Y.B) or (Y.B, Y.C) is true.
    4. For each Ys: A = B, go to Start.

    Provided edges are defined as the following map, where keys are tuples consisting of two vertices:

        val edges = Map(
          ("n1", "n2") -> "n1n2",
          ("n1", "n3") -> "n1n3",
          ("n3", "n4") -> "n3n4",
          ("n5", "n1") -> "n5n1",
          ("n5", "n4") -> "n5n4")

    As output I need to get a list of ALL paths, where each path is a list of adjacent edges, like this:

        val allPaths = List(
          List(("n1", "n2") -> "n1n2"),
          List(("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4"),
          List(("n5", "n1") -> "n5n1"),
          List(("n5", "n4") -> "n5n4"),
          List(("n2", "n1") -> "n1n2", ("n1", "n3") -> "n1n3",
               ("n3", "n4") -> "n3n4", ("n5", "n4") -> "n5n4"))
          //...
          //... more paths to go

    Note: edge XY = (x,y) -> "xy" and YX = (y,x) -> "yx" exist as one instance only, either as XY or YX. So far I have managed to implement code that duplicates edges in the path, which is wrong, and I cannot find the error:

        object Graph2 {
          type Vertice = String
          type Edge = ((String, String), String)
          type Path = List[((String, String), String)]

          val edges = Map(
            //(("v1", "v2"), "v1v2"),
            (("v1", "v3"), "v1v3"),
            (("v3", "v4"), "v3v4")
            //(("v5", "v1"), "v5v1"),
            //(("v5", "v4"), "v5v4")
          )

          def main(args: Array[String]): Unit = {
            val processedVerticies: Map[Vertice, Vertice] = Map()
            val processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)] = Map()
            val path: Path = List()
            println(buildPath(path, "v1", processedVerticies, processedEdges))
          }

          /**
           * Builds paths from vertices connected by edges, starting from a given vertice.
           * Input: map of edges
           * Output: list of connected edges like:
           *   List(("n1", "n2") -> "n1n2"),
           *   List(("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4"), ...
           */
          def buildPath(path: Path, vertice: Vertice,
                        processedVerticies: Map[Vertice, Vertice],
                        processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)]): List[Path] = {
            println("V: " + vertice + " VM: " + processedVerticies + " EM: " + processedEdges)
            if (!processedVerticies.contains(vertice)) {
              val edges = children(vertice)
              println("Edges: " + edges)
              val x = edges.map(edge => {
                if (!processedEdges.contains(edge._1)) {
                  addToPath(vertice, processedVerticies.++(Map(vertice -> vertice)),
                            processedEdges, path, edge)
                } else {
                  println("Already have edge: " + edge + " Return path:" + path)
                  path
                }
              })
              val y = x.toList
              y
            } else {
              List(path)
            }
          }

          def addToPath(vertice: Vertice,
                        processedVerticies: Map[Vertice, Vertice],
                        processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)],
                        path: Path, edge: Edge): Path = {
            val newPath: Path = path ::: List(edge)
            val key = edge._1
            val nextVertice = neighbor(vertice, key)
            val x = buildPath(newPath, nextVertice, processedVerticies,
              processedEdges ++ (Map((vertice, nextVertice) -> (vertice, nextVertice)))
            ).flatten // need to define buildPath type
            x
          }

          def children(vertice: Vertice) = {
            edges.filter(p => (p._1)._1 == vertice || (p._1)._2 == vertice)
          }

          def containsPair(x: (Vertice, Vertice),
                           m: Map[(Vertice, Vertice), (Vertice, Vertice)]): Boolean = {
            m.contains((x._1, x._2)) || m.contains((x._2, x._1))
          }

          def neighbor(vertice: String, key: (String, String)): String = key match {
            case (`vertice`, x) => x
            case (x, `vertice`) => x
          }
        }

    Running this results in: List(List(((v1,v3),v1v3), ((v1,v3),v1v3), ((v3,v4),v3v4))). Why is that?
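
    The duplication most likely comes from addToPath flattening the list of paths returned by buildPath into a single path, so sibling branches get concatenated into one list of edges. For comparison, here is a minimal sketch of the same enumeration in Python, giving each branch its own immutable set of used edges so that branches never bleed into each other (function and variable names are illustrative):

        def all_paths_from(edges, start, used=frozenset(), path=()):
            """Enumerate simple edge paths of an undirected graph from `start`.

            `edges` maps (u, v) tuples to labels; each undirected edge is
            stored in one orientation only, as in the Scala code above.
            """
            paths = []
            if path:                      # every non-empty prefix is a path
                paths.append(list(path))
            for key, label in edges.items():
                u, v = key
                if key in used or start not in (u, v):
                    continue
                nxt = v if start == u else u
                paths.extend(all_paths_from(edges, nxt,
                                            used | {key},   # fresh set per branch
                                            path + ((key, label),)))
            return paths

        edges = {("n1", "n2"): "n1n2", ("n1", "n3"): "n1n3",
                 ("n3", "n4"): "n3n4", ("n5", "n1"): "n5n1",
                 ("n5", "n4"): "n5n4"}
        for p in all_paths_from(edges, "n1"):
            print(p)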

    Read the article

  • Structs in C, are they efficient?

    - by pygabriel
    I'm reading some C code like this:

        double function( int lena, double xa, double ya, double za, double *acoefs, ...,
                         int lenb, double xb, double yb, double zb, double *bcoefs, ...,
                         same for c, same for d )

    This function is called in the code more than 100,000 times, so it's performance-critical. I'm trying to extend this code, but I want to know if it's efficient or not (and how much this influences the speed) to encapsulate all the parameters in a struct like this:

        struct PGTO {
            int len;
            double x, y, z;
            double *acoefs;
        };

    and then access the parameters in the function.

    Read the article

  • Batch file to check if a file exists and raise an error if it doesn't

    - by cfdev9
    This batch file releases a build from TEST to LIVE. I want to add a check constraint in this file that ensures there is an accompanying release document in a specific folder.

        "C:\Program Files\Windows Resource Kits\Tools\robocopy.exe" "\\testserver\testapp$" "\\liveserver\liveapp$" *.* /E /XA:H /PURGE /XO /XD ".svn" /NDL /NC /NS /NP
        del "\\liveserver\liveapp$\web.config"
        ren "\\liveserver\liveapp$\web.live.config" web.config

    So I have a couple of questions about how to achieve this. There is a version.txt file in the \\testserver\testapp$ folder, and the only contents of this file is the build number (e.g. 45 for build 45).

    1. How do I read the contents of the version.txt file into a variable in the batch file?
    2. How do I check if a file \\fileserver\myapp\releasedocs\{build}.doc exists, using the variable from part 1 in place of {build}?
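
    The check itself is two steps: read the single line of version.txt into a variable, then test for the release document's existence and abort if it is missing (in batch, the corresponding idioms are set /p build=< version.txt and if not exist). The logic, sketched in Python with the UNC paths from the question:

        import sys
        from pathlib import Path

        version_file = Path(r"\\testserver\testapp$") / "version.txt"
        build = version_file.read_text().strip()          # e.g. "45"

        release_doc = Path(r"\\fileserver\myapp\releasedocs") / f"{build}.doc"
        if not release_doc.exists():
            sys.exit(f"Release document {release_doc} is missing; aborting release.")
        print(f"Found {release_doc}; safe to release.")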

    Read the article

  • C++ char* wrapped by SWIG has a problem in Python 3.0

    - by gpliu3
    Our C++ lib works fine with Python 2.4 using SWIG, returning a C++ char* back to a Python str. But this solution hits a problem in Python 3.0; the error is:

        Exception=(<class 'UnicodeDecodeError'>, UnicodeDecodeError('utf8', b"\xb6\x9d\xa.....", 0, 1, 'unexpected code byte'))

    Our definition is like this (working fine in Python 2.4):

        void cGetPubModulus( void* pSslRsa, char* cMod, int* nLen );

        %include "cstring.i"
        %cstring_output_withsize( char* cMod, int* nLen );

    I suspect SWIG is doing a bytes-to-str conversion automatically. In Python 2.4 it can be implicit, but in Python 3.0 it's no longer allowed. Anyone got a good idea? Thanks.
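
    The data here is a raw RSA modulus, i.e. arbitrary bytes rather than UTF-8 text, so any attempt to decode it as 'utf8' is bound to fail on the first high byte. A hedged sketch of the Python 3 side, assuming the wrapper can be made to hand back bytes (for example via a byte-oriented SWIG typemap): keep the value as bytes, and only convert through a lossless 8-bit codec if a str is truly required:

        raw = b"\xb6\x9d\xa7"        # stand-in for the modulus bytes from C++

        # binary data should stay bytes in Python 3; hex is often what you want
        print(raw.hex())             # -> 'b69da7'

        # if a str is unavoidable, latin-1 maps bytes 0-255 one-to-one
        text = raw.decode("latin-1")
        assert text.encode("latin-1") == raw   # round-trips losslessly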

    Read the article

  • How to get the next prefix in C++?

    - by Vicente Botet Escriba
    Given a sequence (for example the string "Xa"), I want to get the next prefix in lexicographic order (i.e. "Xb"). As I don't want to reinvent the wheel, I'm wondering if there is any function in the C++ STL or Boost that can help to define this generic function easily. If not, do you think such a function could be useful? Notes:

    - The next of "aZ" should be "b".
    - Even if the examples are strings, the function should work for any Sequence.
    - The lexicographic order should be a template parameter of the function.
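
    I am not aware of a ready-made STL or Boost algorithm for this. The usual trick, sketched below in Python under the assumption of plain character ordering, is to strip trailing maximal symbols and then increment the last remaining one; the "maximal symbol" test and the increment step are exactly where the questioner's ordering parameter would plug in:

        def next_prefix(s, max_symbol="z"):
            """Smallest string greater than every string with prefix s."""
            s = s.rstrip(max_symbol)       # trailing maxima cannot be bumped
            if not s:
                return None                # an all-maximal prefix has no successor
            return s[:-1] + chr(ord(s[-1]) + 1)

        print(next_prefix("Xa"))   # -> 'Xb'
        print(next_prefix("az"))   # -> 'b' (the 'z' is dropped, 'a' is bumped)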

    Read the article

  • xsltproc killed, out of memory

    - by David Parks
    I'm trying to split up a 13GB XML file into small ~50MB XML files with this XSLT stylesheet. But the process kills xsltproc after I see it taking up over 1.7GB of memory (that's the total on the system). Is there any way to deal with huge XML files with xsltproc? Can I change my stylesheet? Or should I use a different processor? Or am I just S.O.L.?

        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"
                        xmlns:exsl="http://exslt.org/common"
                        extension-element-prefixes="exsl"
                        xmlns:fn="http://www.w3.org/2005/xpath-functions">
          <xsl:output method="xml" indent="yes"/>
          <xsl:strip-space elements="*"/>
          <xsl:param name="block-size" select="75000"/>

          <xsl:template match="/">
            <xsl:copy>
              <xsl:apply-templates select="mysqldump/database/table_data/row[position() mod $block-size = 1]"/>
            </xsl:copy>
          </xsl:template>

          <xsl:template match="row">
            <exsl:document href="chunk-{position()}.xml">
              <add>
                <xsl:for-each select=". | following-sibling::row[position() &lt; $block-size]">
                  <doc>
                    <xsl:for-each select="field">
                      <field>
                        <xsl:attribute name="name"><xsl:value-of select="./@name"/></xsl:attribute>
                        <xsl:value-of select="."/>
                      </field>
                      <xsl:text>&#xa;</xsl:text>
                    </xsl:for-each>
                  </doc>
                </xsl:for-each>
              </add>
            </exsl:document>
          </xsl:template>
        </xsl:stylesheet>
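
    xsltproc, like XSLT 1.0 processors generally, must build the whole input tree in memory, so a 13GB document cannot be transformed on a machine with less RAM than that; the split has to be done with a streaming parser instead. A minimal sketch with Python's xml.etree iterparse, assuming the mysqldump/database/table_data/row structure targeted by the stylesheet above:

        import xml.etree.ElementTree as ET

        BLOCK_SIZE = 75000        # rows per chunk, as in the stylesheet
        chunk, rows = 1, []

        def flush(chunk, rows):
            root = ET.Element("add")
            root.extend(rows)
            ET.ElementTree(root).write(f"chunk-{chunk}.xml", encoding="utf-8")

        for event, elem in ET.iterparse("dump.xml"):   # fires on each end tag
            if elem.tag == "row":
                doc = ET.Element("doc")
                for field in elem.iter("field"):
                    f = ET.SubElement(doc, "field", name=field.get("name", ""))
                    f.text = field.text
                rows.append(doc)
                elem.clear()              # free the parsed row immediately
                if len(rows) >= BLOCK_SIZE:
                    flush(chunk, rows)
                    chunk, rows = chunk + 1, []
        if rows:
            flush(chunk, rows)

    One caveat: cleared row elements still leave empty placeholders on their parent, so for truly enormous inputs the parent's children should be deleted periodically as well.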

    Read the article

  • ubuntu 13.04 recognizes usb mobile broadband modem as ethernet connection

    - by Bence Mihalka
    When I plug in my USB mobile broadband modem (ZTE MF-667), in the network manager instead of a mobile broadband connection I get an ethernet connection, called "Ethernet Network (ZTE WCDMA Technologies MSM)", which of course doesn't work. Here is my lsusb output and the relevant parts of the dmesg output:

        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 0cf3:3005 Atheros Communications, Inc. AR3011 Bluetooth
        Bus 001 Device 004: ID 04f2:b1b9 Chicony Electronics Co., Ltd Asus Integrated Webcam
        Bus 001 Device 005: ID 058f:6366 Alcor Micro Corp. Multi Flash Reader
        Bus 002 Device 004: ID 19d2:1405 ZTE WCDMA Technologies MSM

        [  195.328467] usb 2-1.1: new high-speed USB device number 3 using ehci-pci
        [  195.423545] usb 2-1.1: New USB device found, idVendor=19d2, idProduct=1225
        [  195.423555] usb 2-1.1: New USB device strings: Mfr=3, Product=2, SerialNumber=4
        [  195.423561] usb 2-1.1: Product: ZTE WCDMA Technologies MSM
        [  195.423567] usb 2-1.1: Manufacturer: ZTE,Incorporated
        [  195.423572] usb 2-1.1: SerialNumber: P680A1ZTED000000
        [  195.426319] scsi7 : usb-storage 2-1.1:1.0
        [  196.425354] scsi 7:0:0:0: CD-ROM CWID USB SCSI CD-ROM 2.31 PQ: 0 ANSI: 2
        [  197.447919] usb 2-1.1: USB disconnect, device number 3
        [  197.457582] sr0: scsi3-mmc drive: 243x/186x xa/form2 cdda pop-up
        [  197.457594] cdrom: Uniform CD-ROM driver Revision: 3.20
        [  197.459058] sr 7:0:0:0: Attached scsi CD-ROM sr0
        [  197.459483] sr 7:0:0:0: Attached scsi generic sg2 type 5
        [  197.759186] usb 2-1.1: new high-speed USB device number 4 using ehci-pci
        [  197.854543] usb 2-1.1: New USB device found, idVendor=19d2, idProduct=1405
        [  197.854556] usb 2-1.1: New USB device strings: Mfr=4, Product=3, SerialNumber=5
        [  197.854564] usb 2-1.1: Product: ZTE WCDMA Technologies MSM
        [  197.854572] usb 2-1.1: Manufacturer: ZTE,Incorporated
        [  197.854579] usb 2-1.1: SerialNumber: P680A1ZTED010000
        [  197.957739] scsi8 : usb-storage 2-1.1:1.2
        [  198.076554] cdc_ether 2-1.1:1.0 eth1: register 'cdc_ether' at usb-0000:00:1d.0-1.1, CDC Ethernet Device, 00:a0:c6:00:00:00
        [  198.076583] usbcore: registered new interface driver cdc_ether
        [  198.955985] scsi 8:0:0:0: CD-ROM CWID USB SCSI CD-ROM 2.31 PQ: 0 ANSI: 2
        [  198.956797] scsi 8:0:0:1: Direct-Access ZTE MMC Storage 2.31 PQ: 0 ANSI: 2

    I created the appropriate mobile broadband connection manually, but I cannot enable it in network manager, since the device is not recognized as mobile broadband. Any tips on how to make it work?

    Read the article

  • Move Data into the grid for scalable, predictable response times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today's web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties.

    CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in its limited beta release, the LLAPI gives developers the ability to use standard put/remove logic available in Coherence and then wrap logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of LLAPI is the ability to join transactions. This is a common outcome for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data. This results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these "multi-client" transactions at speed with no loss in ACID properties.

    Developing software around an IMDG like Oracle Coherence is an important choice for today's web-scale applications and services. But this introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without using CloudTran, developers are faced with an incredibly difficult task to ensure data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly.

    For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.ph , or send your questions to [email protected].

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-05

    - by Bob Rhubart
    - Why is enterprise software often so complicated? | Rajesh Raheja (rraheja.wordpress.com). Rajesh Raheja shares "a few examples of requirements that lead to creation of complex platform infrastructures that up the complex enterprise software."

    - Educause Top-Ten IT Issues - the most change in a decade or more | Cole Clark (blogs.oracle.com). Cole Clark discusses why "higher education IT must change in order to fully realize the potential for transforming the institution, and therefore its people must learn new skills, understand and accept new ways of solving problems, and not be tied down by past practices or institutional inertia."

    - Oracle VM RAC template - what it took | Wim Coekaerts (blogs.oracle.com). Wim Coekaerts shares an example that shows how easy it is to deploy a complete Oracle RAC cluster with Oracle VM.

    - Oracle Cloud and Oracle Platinum Services Announcements (oracle.com). Featuring Larry Ellison and Mark Hurd. Wednesday, June 06, 2012, 1:00 p.m. PT - 2:30 p.m. PT.

    - Creating an Oracle Endeca Information Discovery 2.3 Application Part 1: Scoping and Design | Mark Rittman (www.rittmanmead.com). Oracle ACE Director Mark Rittman launches a new series that dives into "the various stages in building a simple Oracle Endeca Information Discovery application, using the recent Endeca Information Discovery 2.3 release."

    - Introducing Decision Tables in the SOA Suite 11g | Lucas Jellema (technology.amis.nl). Oracle ACE Director Lucas Jellema demonstrates how "the decision table can be put to good use to implement the business logic behind the classical game of Rock, Paper and Scissors."

    - Application integration: reorganise, recycle, repurpose | Andrew Clarke (radiofreetooting.blogspot.com). "Integration is a topic which is in everybody's bailiwick," says Oracle ACE Andrew Clarke. "The business people want to get the best value from their existing IT investments. The architects need to understand the interfaces between the silos and across the layers. The developers have to implement it."

    - Using XA Transactions in Coherence-based Applications | Jonathan Purdy (blogs.oracle.com). Purdy shares "a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager."

    Thought for the Day: "The difficulty lies, not in the new ideas, but in escaping from the old ones..." (John Maynard Keynes, June 5, 1883 - April 4, 1946). Source: Quotations Page

    Read the article
