Search Results

Search found 21392 results on 856 pages for 'order of operations'.


  • ORM Release History: Q1 2010 SP 1 (v2010.01.0527)

    Enhancements: Full support for Visual Studio 2010 - the visual designer now works in Visual Studio 2010. Ria Provider beta - supports all basic operations (query, insert, update, delete). New Enhancer - the enhancer has been replaced by a new implementation based on Mono Cecil, which fixes all known enhancer bugs and speeds up the enhancing process as well. Data Services Wizard integration - the Data Services Wizard is now integrated into the OpenAccess product. You can start it by using the...

    Read the article

  • Different execution plan for similar queries

    - by Graham Clements
    I am running two very similar update queries, but for a reason unknown to me they are using completely different execution plans. Normally this wouldn't be a problem, but they both update exactly the same number of rows, and one of them uses an execution plan that is far inferior to the other (4 secs vs 2 mins), which causes a massive problem when scaled up. The only difference between the two queries is that one uses the column CLI and the other DLI. These columns have exactly the same datatype and are both indexed exactly the same, but in the DLI query's execution plan the index is not used. Any help as to why this is happening is much appreciated.

    -- Query 1
    UPDATE a SET DestKey = ( SELECT TOP 1 b.PrefixKey FROM refPrefixDetail AS b WHERE a.DLI LIKE b.Prefix + '%' ORDER BY len(b.Prefix) DESC ) FROM CallData AS a

    -- Query 2
    UPDATE a SET DestKey = ( SELECT TOP 1 b.PrefixKey FROM refPrefixDetail b WHERE a.CLI LIKE b.Prefix + '%' ORDER BY len(b.Prefix) DESC ) FROM CallData AS a
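
    A hedged first step, not from the original post: refreshing statistics on both tables and forcing a statement-level recompile often lets the optimizer re-cost the plan and pick the index again. The sketch below simply re-runs the poster's Query 1 with those additions.

        -- Sketch only: refresh optimizer statistics, then re-run the slow update
        -- with a statement-level recompile so the plan is re-costed.
        UPDATE STATISTICS CallData;
        UPDATE STATISTICS refPrefixDetail;

        UPDATE a
        SET DestKey = ( SELECT TOP 1 b.PrefixKey
                        FROM refPrefixDetail AS b
                        WHERE a.DLI LIKE b.Prefix + '%'
                        ORDER BY len(b.Prefix) DESC )
        FROM CallData AS a
        OPTION (RECOMPILE);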

    Read the article

  • DocuSign Connect xsd [on hold]

    - by Brian
    I'm trying to parse XML inbound to our system from the DocuSign Connect service (which publishes status updates to a given web service). I typically use JAXB to process XML, but as far as I know I need XSDs in order to generate the JAXB classes. I don't have (or can't find) XSDs from DocuSign to do this. I could use other Java classes/libraries to parse the XML, but without XSDs I'd have to guess at types, restrictions and formats. Is there a way to generate JAXB bindings without having XSDs?

    Read the article

  • Navigant Consulting Implements Oracle's PeopleSoft Enterprise 9.1 to Integrate Financial and HR Information

    - by jay.richey
    Integration to Help Global Consultancy Increase Business Productivity and Streamline Operations Redwood Shores, Calif. - Dec. 15, 2010 "Our business is based on the seamless execution and expertise of our highly-trained consultants and we're always seeking ways to improve processes so they can focus on providing excellent client service," said Changappa Kodendera, CIO, Navigant Consulting. "Our phased implementation of Oracle's PeopleSoft Enterprise 9.1 will provide us with a solid technology foundation that we can rely on to support our global consulting business, with a scalable platform that facilitates further improvement." Read the press release Watch their video

    Read the article

  • How do you test an ICF based connector using Connector Facade Standalone?

    - by Shashidhar Malyala
    The following code helps in writing a standalone Java program to test an ICF-based connector. The sample code in this example uses an ICF-based flat file connector. It is possible to test various operations like create, update, delete, search etc. It is also possible to set values for the connector configuration parameters and to add/remove attributes and their values.

    // Imports assume the standard ICF (Identity Connector Framework) packages.
    import java.io.File;
    import java.io.IOException;
    import java.net.URL;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    import org.identityconnectors.common.IOUtil;
    import org.identityconnectors.framework.api.*;
    import org.identityconnectors.framework.common.objects.*;

    public class FlatFile {
        private static final java.lang.String BUNDLE_NAME = "<PACKAGE_NAME>"; // Ex: org.info.icf.flatfile
        private static final java.lang.String BUNDLE_VERSION = "1.0.0";
        // Name of the connector class, i.e. the class implementing the connector SPI operations
        private static final java.lang.String CONNECTOR_NAME = "org.info.icf.flatfile.FlatFileConnector";

        public ConnectorFacade getFacade() throws IOException {
            ConnectorInfoManagerFactory fact = ConnectorInfoManagerFactory.getInstance();
            File bundleDirectory = new File("<BUNDLE_LOCATION>"); // Ex: /usr/oracle/connector_bundles/
            URL url = IOUtil.makeURL(bundleDirectory, "org.info.icf.flatfile-1.0.0.jar");
            ConnectorInfoManager manager = fact.getLocalManager(url);
            ConnectorKey key = new ConnectorKey(BUNDLE_NAME, BUNDLE_VERSION, CONNECTOR_NAME);
            ConnectorInfo info = manager.findConnectorInfo(key);

            // From the ConnectorInfo object, create the default APIConfiguration.
            APIConfiguration apiConfig = info.createDefaultAPIConfiguration();

            // From the default APIConfiguration, retrieve the ConfigurationProperties.
            ConfigurationProperties properties = apiConfig.getConfigurationProperties();

            // Print out what the properties are (not necessary)
            List<String> propertyNames = properties.getPropertyNames();
            for (String propName : propertyNames) {
                ConfigurationProperty prop = properties.getProperty(propName);
                System.out.println("Property Name: " + prop.getName()
                        + "\tProperty Type: " + prop.getType());
            }

            properties.setPropertyValue("fileLocation", "/usr/oracle/accounts.csv");

            // Set all of the ConfigurationProperties needed by the connector.
            // properties.setPropertyValue("host", FOOBAR_HOST);
            // properties.setPropertyValue("adminName", FOOBAR_ADMIN);
            // properties.setPropertyValue("adminPassword", FOOBAR_PASSWORD);
            // properties.setPropertyValue("useSSL", false);

            // Use the ConnectorFacadeFactory's newInstance() method to get a new connector.
            ConnectorFacade connFacade = ConnectorFacadeFactory.getInstance().newInstance(apiConfig);

            // Make sure we have set up the Configuration properly
            connFacade.validate();
            return connFacade;
        }

        public static void main(String[] args) throws IOException {
            FlatFile file = new FlatFile();
            ConnectorFacade cfac = file.getFacade();

            Set<Attribute> attrSet = new HashSet<Attribute>();
            attrSet.add(AttributeBuilder.build(Name.NAME, "Test01"));
            attrSet.add(AttributeBuilder.build("FIRST_NAME", "Test_First"));
            attrSet.add(AttributeBuilder.build("LAST_NAME", "Test_Last"));

            // Create
            Uid uid = cfac.create(ObjectClass.ACCOUNT, attrSet, null);

            // Delete
            Uid uidP = new Uid("Test01");
            cfac.delete(ObjectClass.ACCOUNT, uidP, null);
        }
    }

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state).
    The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution, of course, is to mitigate each of those factors, but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

    In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.

    Read the article

  • How to create temporary files on the client machine, from Web Application?

    - by Gaurav Srivastava
    I am creating a web application using JSP, Struts, EJB and Servlets. The application is a combined CRM and accounting package, so the database is very large. In order to make execution faster, I want to prevent round trips to the database. For that purpose, what I want to do is create some temporary XML files on the client machine and use them whenever required. How can I do this, since JavaScript does not permit me to do so? Is there any way of doing this? Or is there any other solution I can adopt in order to make my application faster?

    Read the article

  • Oracle Process Accelerators Release 11.1.1.7.0 Now Available

    - by Cesare Rotundo
    The new Oracle Process Accelerators (PA) Release (11.1.1.7.0) delivers key functionality in many dimensions: new PAs across industries, new functionality in preexisting PAs, and an improved installation process. All PAs in Release 11.1.1.7.0 run on the latest Oracle BPM Suite and SOA Suite, 11.1.1.7. New PAs include: Financial Reports Approval (FRA): an end-to-end solution for an efficient and controlled financial report review and approval process, enabling financial analysts and decision makers to collaborate around Excel. Electronic Forms Management (EFM): supports the process to design and expose eForms, with the ability to quickly design eForms and associate approval processes with them, and to then enable users to select, fill, and submit eForms for approval. Mobile Data Offloading (MDO): enables telecommunications providers to reduce congestion on cellular networks and lower cost of operations by using Oracle Event Processing (OEP) and BAM to switch devices from cellular networks to Wi-Fi. By adopting the latest PA release, customers will also be able to better identify and kick-start smart extension of their processes where business steps are supported by Apps: PA 11.1.1.7.0 includes out-of-the-box business process extension scenarios with Oracle Apps such as Siebel (FSLO) and PeopleSoft (EOB).

    Read the article

  • C++ assignment question [closed]

    - by Adam Joof
    (Bubble Sort) In the bubble sort algorithm, smaller values gradually "bubble" their way upward to the top of the array like air bubbles rising in water, while the larger values sink to the bottom. The bubble sort makes several passes through the array. On each pass, successive pairs of elements are compared. If a pair is in increasing order (or the values are identical), we leave the values as they are. If a pair is in decreasing order, their values are swapped in the array. Write a program that sorts an array of 10 integers using bubble sort.

    Read the article

  • SQL Server: How to remove empty lines in SSMS?

    - by atricapilla
    I have many .sql files with lots of empty lines, e.g.:

    WITH cteTotalSales (SalesPersonID, NetSales) AS ( SELECT SalesPersonID, ROUND(SUM(SubTotal), 2) FROM Sales.SalesOrderHeader WHERE SalesPersonID IS NOT NULL GROUP BY SalesPersonID ) SELECT sp.FirstName + ' ' + sp.LastName AS FullName, sp.City + ', ' + StateProvinceName AS Location, ts.NetSales FROM Sales.vSalesPerson AS sp INNER JOIN cteTotalSales AS ts ON sp.BusinessEntityID = ts.SalesPersonID ORDER BY ts.NetSales DESC

    Is there a way to remove these empty lines in SQL Server Management Studio? This is what I would like to have:

    WITH cteTotalSales (SalesPersonID, NetSales) AS ( SELECT SalesPersonID, ROUND(SUM(SubTotal), 2) FROM Sales.SalesOrderHeader WHERE SalesPersonID IS NOT NULL GROUP BY SalesPersonID ) SELECT sp.FirstName + ' ' + sp.LastName AS FullName, sp.City + ', ' + StateProvinceName AS Location, ts.NetSales FROM Sales.vSalesPerson AS sp INNER JOIN cteTotalSales AS ts ON sp.BusinessEntityID = ts.SalesPersonID ORDER BY ts.NetSales DESC

    Read the article

  • Join a single row in one table to n random rows in another

    - by Einar Egilsson
    Is it possible to make a join in SQL server that joins each row from table A to n random rows in another? For example, say I have a Customer table, a Product table and an Order table. I want to join each customer to 5 random products and insert these rows into the order table. (And each customer should be joined to 5 random rows of his own, I don't want all customers joining to the same 5 rows). Is this possible? I'm using SQL Server 2005 and it's fine if the solution is specific to that. This is a weird requirement but I'm basically making a small data generator to generate some random data.
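
    A hedged sketch of one way to do this in SQL Server 2005 (table and column names are assumed, not taken from the question): CROSS APPLY re-evaluates the TOP 5 subquery for each customer, and referencing the outer customer key inside the subquery keeps the random pick independent per customer.

        -- Assumed schema: Customer(CustomerID), Product(ProductID),
        -- Orders(CustomerID, ProductID). A sketch, not a drop-in answer.
        INSERT INTO Orders (CustomerID, ProductID)
        SELECT c.CustomerID, p.ProductID
        FROM Customer AS c
        CROSS APPLY (
            SELECT TOP 5 pr.ProductID
            FROM Product AS pr
            -- Referencing c.CustomerID correlates the subquery so each
            -- customer gets its own random sample of 5 products.
            ORDER BY CHECKSUM(NEWID(), c.CustomerID)
        ) AS p;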

    Read the article

  • Retrieve part of a MySQL column with PHP

    - by Gerardo Marset
    For instance, if I have the following table:

    +----+---+----------+
    | id | a | position |
    +----+---+----------+
    | 0  | 0 | 0        |
    | 1  | 0 | 1        |
    | 2  | 1 | 4        |
    | 3  | 1 | 9        |
    | 4  | 1 | 6        |
    | 5  | 1 | 1        |
    +----+---+----------+

    and I want to get an array that contains the first 100 values from position where a is 1, in ascending order, what would I do? I'm guessing something like this:

    $col = mysql_fetch_array( mysql_query(' SELECT `position` FROM `table` WHERE `a`="1" ORDER BY `position` ASC LIMIT 100 '));

    I'd expect to get the following array:

    +-------+-------+
    | index | value |
    +-------+-------+
    | 0     | 1     |
    | 1     | 4     |
    | 2     | 6     |
    | 3     | 9     |
    +-------+-------+

    but it doesn't work. What should I do to make it work? Thanks

    Read the article

  • UPK Pre-Built Content Update

    - by Karen Rihs
    UPK pre-built content development efforts are always underway and growing. Over the last few months, the following new and upgraded modules became available:  NEW CONTENT RELEASES E-Business Suite 12.1 Field Service Manufacturing Operations Center Process Manufacturing:  System Administration Strategic Network Optimization U.S. Federal Financials Oracle Communications 11.1 Oracle Communications UPK for Pricing Design Center, Voice and Data Offerings Oracle Mobile Workforce 2.1.0 Administrative Setup User Tasks Primavera Primavera Portfolio Management 9.0 UPK CONTENT UPGRADES JDE E1 9.1 HCM Fundamentals for EnterpriseOne Manufacturing - Product Data Management Manufacturing Management Discrete Shop Floor Management Procurement and Subcontract Management JDE World A9.3 Accounts Payable Address Book  Common Foundation General Ledger For a list of modules currently available for each product line, visit the UPK Resource Library on Oracle.com. For more information on how your organization can take advantage of UPK pre-built content, see our previous blog,  The Value of UPK Pre-Built Content. - Karen Rihs, UPK Outbound Product Management

    Read the article

  • OpenGL: How to draw a Bezier curve of degree higher than 8?

    - by maciekp
    I am trying to draw a high-order Bezier curve using OpenGL evaluators:

    glMap1f(GL_MAP1_VERTEX_3, 0.0, 1.0, 3, 30, &points[0][0]);
    glMapGrid1f(30, 0, 1);
    glEvalMesh1(GL_LINE, 0, 30);

    or

    glBegin(GL_LINE_STRIP);
    for (int i = 0; i <= 30; i++)
        glEvalCoord1f((GLfloat) i/30.0);
    glEnd();

    When the number of points exceeds 8, the curve disappears. How can I draw a higher-order Bezier curve using evaluators?

    Read the article

  • Vendor neutral SQL

    - by Sparafusile
    I'm currently working on a project for a web application that may be installed on several different servers with various software configurations. I want to make my application as flexible as possible by allowing the user to have various SQL servers installed. The problem is that the SQL syntax used by any two server vendors does not match up. For a simple example, here is the same SELECT statement for MS SQL and MySQL:

    MS SQL - SELECT TOP 1 * FROM MyTable ORDER BY DateCreated DESC
    MySQL  - SELECT * FROM MyTable ORDER BY DateCreated DESC LIMIT 1

    Is there any standard way to abstract the statement creation for various vendors? Any online resources or books discussing this problem? Any hints or smart-alec remarks that I'd find useful? Further information: I'm writing my web application in vanilla ASP running on a Windows server. Thanks, Spara
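
    One hedged illustration of the kind of rewrite an abstraction layer might target (my example, not from the question): express "latest row" without TOP or LIMIT, with the caveat that ties on DateCreated return more than one row.

        -- Portable alternative to "TOP 1 ... ORDER BY DateCreated DESC" and
        -- "... ORDER BY DateCreated DESC LIMIT 1": runs unchanged on MS SQL
        -- and MySQL, but returns every row that shares the maximum DateCreated.
        SELECT *
        FROM MyTable
        WHERE DateCreated = (SELECT MAX(DateCreated) FROM MyTable);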

    Read the article

  • Per query relevance elevation for solr?

    - by plusplus
    I want to tune the relevance of solr search results on a per user basis - based on the number of times the user has clicked through a result before. Frequently hit items FOR THAT USER should rise to the top of their search results. Is there a way to provide custom boost/elevation for particular document ids on the query? I'm thinking in the order of ~100s of particular documents to elevate. The elevation should have no effect if the rest of the query doesn't find those documents. Alternatively, if this isn't possible, what is a sane way for setting up an alternative indexing approach that would make this possible? Could I add a field per user in the index to store their scores? I'm thinking in the order of 1000 users. The major drawback of that approach is the number of times a document would need to be reindexed (i.e. each time it was used by the user).

    Read the article

  • IP address numbers in MySQL subquery

    - by Iain Collins
    I have a problem with a subquery involving IPv4 addresses stored in MySQL (MySQL 5.0). The IP addresses are stored in two tables, both in network number format - e.g. the format output by MySQL's INET_ATON(). The first table ('events') contains lots of rows with IP addresses associated with them; the second table ('network_providers') contains a list of provider information for given netblocks.

    events table (~4,000,000 rows): event_id (int), event_name (varchar), ip_address (unsigned 4 byte int)
    network_providers table (~60,000 rows): ip_start (unsigned 4 byte int), ip_end (unsigned 4 byte int), provider_name (varchar)

    Simplified for the purposes of the problem I'm having, the goal is to create an export along the lines of: event_id,event_name,ip_address,provider_name. If I do a query along the lines of either of the following, I get the result I expect:

    SELECT provider_name FROM network_providers WHERE INET_ATON('192.168.0.1') >= network_providers.ip_start ORDER BY network_providers.ip_start DESC LIMIT 1

    SELECT provider_name FROM network_providers WHERE 3232235521 >= network_providers.ip_start ORDER BY network_providers.ip_start DESC LIMIT 1

    That is to say, it returns the correct provider_name for whatever IP I look up (of course I'm not really using 192.168.0.1 in my queries). However, when performing this same query as a subquery, in the following manner, it doesn't yield the result I would expect:

    SELECT event.id, event.event_name, (SELECT provider_name FROM network_providers WHERE event.ip_address >= network_providers.ip_start ORDER BY network_providers.ip_start DESC LIMIT 1) as provider FROM events

    Instead, a different (incorrect) value for network_provider is returned - over 90% (but curiously not all) of the values returned in the provider column contain the wrong provider information for that IP. Using event.ip_address in a subquery just to echo out the value confirms it contains the value I'd expect and that the subquery can parse it. Replacing event.ip_address with an actual network number also works; just using it dynamically in the subquery in this manner doesn't work for me. I suspect the problem is that there is something fundamental and important about subqueries in MySQL that I don't get. I've worked with IP addresses like this in MySQL quite a bit before, but haven't previously done lookups for them using a subquery.

    The question: I'd really appreciate an example of how I could get the output I want, and if someone here knows, some enlightenment as to why what I'm doing doesn't work so I can avoid making this mistake again.

    Notes: The actual real-world usage I'm trying to do is considerably more complicated (involving joining two or three tables). This is a simplified version, to avoid overly complicating the question. Additionally, I know I'm not using a BETWEEN on ip_start & ip_end - that's intentional (the DBs can be out of date, and in such cases the owner in the DB is almost always in the next specified range, and a 'best guess' is fine in this context); however I'm grateful for any suggestions for improvement that relate to the question. Efficiency is always nice, but in this case absolutely not essential - any help appreciated.
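
    For what it's worth, here is one join-based way to express the same "largest ip_start not above the address" lookup without a correlated subquery, sketched against the question's table names (it assumes each ip_start value appears only once in network_providers):

        SELECT e.event_id, e.event_name, e.ip_address, np.provider_name
        FROM events AS e
        JOIN (
            -- For each event, find the highest ip_start that does not exceed its address
            SELECT e2.event_id, MAX(np2.ip_start) AS ip_start
            FROM events AS e2
            JOIN network_providers AS np2 ON np2.ip_start <= e2.ip_address
            GROUP BY e2.event_id
        ) AS best ON best.event_id = e.event_id
        JOIN network_providers AS np ON np.ip_start = best.ip_start;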

    Read the article

  • STL Static-Const Member Definitions

    - by javery
    How does the following work?

    #include <limits>

    int main() {
        const int* const foo = &std::numeric_limits<int>::digits;
    }

    I was under the impression that in order to take the address of a static constant member we had to physically define it in some translation unit in order to please the linker. That said, after looking at the preprocessed code for this TU, I couldn't find an external definition for the digits member (or any other relevant members). I tested this on two compilers (VC++ 10 and g++ 4.2.4) and got identical results (i.e., it works). Does the linker auto-magically link against an object file where this stuff is defined, or am I missing something obvious here?

    Read the article

  • Lock HTML select element, allow value to be sent on submit

    - by ILMV
    I have a select box (for a customer field) on a complex order form; when the user starts to add lines to the order they should not be allowed to change the customer select box (unless all lines are deleted). My immediate thought was that I could use the disabled attribute, but when the box is disabled the selected value is no longer passed to the target. When the problem arose a while ago, one of the other developers worked around this by looping through all the options and disabling all but the selected option; sure enough the value was passed to the target, and we've been using that since. But now I'm looking for a proper solution: I don't want to loop through all the options because our data is expanding and it's starting to introduce performance issues. I'd prefer not to enable this / all the elements when the submit button is hit. How can I lock the input, whilst maintaining the selected option and passing that value to the target script? I would prefer a non-JavaScript solution if possible, but if needed we are running jQuery 1.4.2 so that could be used.

    Read the article

  • Return only the new database items since last check in Rails

    - by Smith
    I'm fairly new to Ruby, and currently trying to implement an AJAX style commenting system. When the user views a topic, all the current comments on that topic will be displayed. The user can post a comment on the page of a topic and it should automatically display without having to refresh the page, along with any new comments that have been posted since the last comment currently displayed to the user. The comments should also automatically refresh at a specified frequency. I currently have the following code: views/idea/view.html.erb <%= periodically_call_remote(:update => "div_chat", :frequency => 1, :position => "top", :url => {:controller => "comment", :action => :test_view, :idea_id => @idea.id } ) %> <div id="div_chat"> </div> views/comment/test_view.html.erb <% @comments.each do |c| %><div id="comment"> <%= c.comment %> </div> <% end %> controllers/comment_controller.rb class CommentController < ApplicationController before_filter :start_defs def add_comment @comment = Comment.new params[:comment] if @comment.save flash[:notice] = "Successfully commented." else flash[:notice] = "UnSuccessfully commented." end end def test_render @comments = Comment.find_all_by_idea_id(params[:idea_id], :order => "created_at DESC", :conditions => ["created_at > ?", @latest_time] ) @latest = Comment.find(:first, :order => "created_at DESC") @latest_time = @latest.created_at end def start_defs @latest = Comment.find(:first, :order => "created_at ASC") @latest_time = @latest.created_at end end The problem is that every time periodically_call_remote makes a call, it returns the entire list of comments for that topic. From what I can tell, the @latest_time gets constantly reset to the earliest created_at, rather than staying updated to the latest created_at after the comments have been retrieved. I'm also not sure how I should directly refresh the comments when a comment is posted. Is it possible to force a call to periodically_call_remote on a successful save?

    Read the article

  • What are the Options for Storing Hierarchical Data in a Relational Database?

    - by orangepips
    Good Overviews
    - One more Nested Intervals vs. Adjacency List comparison: the best comparison of Adjacency List, Materialized Path, Nested Set and Nested Interval I've found.
    - Models for hierarchical data: slides with good explanations of tradeoffs and example usage.
    - Representing hierarchies in MySQL: very good overview of Nested Set in particular.
    - Hierarchical data in RDBMSs: most comprehensive and well organized set of links I've seen, but not much in the way of explanation.

    Options (ones I am aware of, and general features):

    Adjacency List:
    - Columns: ID, ParentID
    - Easy to implement.
    - Cheap node moves, inserts, and deletes.
    - Expensive to find level (can store as a computed column), ancestry & descendants (Bridge Hierarchy combined with level column can solve), path (Lineage Column can solve).
    - Use Common Table Expressions in those databases that support them to traverse (see the sketch after this list).

    Nested Set (a.k.a. Modified Preorder Tree Traversal):
    - First described by Joe Celko - covered in depth in his book Trees and Hierarchies in SQL for Smarties.
    - Columns: Left, Right
    - Cheap level, ancestry, descendants.
    - Compared to Adjacency List, moves, inserts, and deletes are more expensive.
    - Requires a specific sort order (e.g. created), so sorting all descendants in a different order requires additional work.

    Nested Intervals:
    - Combination of Nested Sets and Materialized Path where left/right columns are floating point decimals instead of integers and encode the path information.

    Bridge Table (a.k.a. Closure Table; some good ideas about how to use triggers for maintaining this approach):
    - Columns: ancestor, descendant
    - Stands apart from the table it describes.
    - Can include some nodes in more than one hierarchy.
    - Cheap ancestry and descendants (albeit not in what order).
    - For complete knowledge of a hierarchy, needs to be combined with another option.

    Flat Table:
    - A modification of the Adjacency List that adds a Level and Rank (e.g. ordering) column to each record.
    - Expensive move and delete.
    - Cheap ancestry and descendants.
    - Good use: threaded discussion - forums / blog comments.

    Lineage Column (a.k.a. Materialized Path, Path Enumeration):
    - Column: lineage (e.g. /parent/child/grandchild/etc...)
    - Limit to how deep the hierarchy can be.
    - Descendants cheap (e.g. LEFT(lineage, #) = '/enumerated/path').
    - Ancestry tricky (database-specific queries).

    Database Specific Notes:
    - MySQL: use session variables for Adjacency List.
    - Oracle: use CONNECT BY to traverse Adjacency Lists.
    - PostgreSQL: ltree datatype for Materialized Path.
    - SQL Server: general summary; 2008 offers the HierarchyId data type, which appears to help with the Lineage Column approach and expand the depth that can be represented.
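
    To make the Adjacency List option concrete, here is a minimal sketch (table and column names are mine, not from the answer). The recursive CTE uses the ANSI form; SQL Server accepts the same query without the RECURSIVE keyword.

        -- Adjacency List: each row points at its parent.
        CREATE TABLE category (
            id        INT PRIMARY KEY,
            parent_id INT NULL REFERENCES category(id),
            name      VARCHAR(100) NOT NULL
        );

        -- All descendants of node 1, with their depth (databases with CTE support).
        WITH RECURSIVE subtree (id, parent_id, name, depth) AS (
            SELECT id, parent_id, name, 0
            FROM category
            WHERE id = 1
            UNION ALL
            SELECT c.id, c.parent_id, c.name, s.depth + 1
            FROM category AS c
            JOIN subtree AS s ON c.parent_id = s.id
        )
        SELECT * FROM subtree;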

    Read the article

  • JSF actionListener is called multiple times from within HtmlTable

    - by Rose
    I have a mix of columns in my HTML data table: one column is an action listener, two columns are actions, and the other columns are simple output.

    <h:dataTable styleClass="table" id="orderTable" value="#{table.dataModel}" var="anOrder" binding="#{table.dataTable}" rows="#{table.rows}">
        <an:listenerColumn backingBean="${orderEntry}" entity="${anOrder}" actionListener="closeOrder"/>
        <an:column label="#{msg.hdr_orderStatus}" entity="#{anOrder}" propertyName="orderStatus" />
        <an:actionColumn backingBean="${orderEntry}" entity="${anOrder}" action="editOrder" />
        <an:actionColumn backingBean="${orderEntry}" entity="${anOrder}" action="viewOrder"/>
        ....

    I'm using custom tags, but it's the same behavior if I use the default column tags. I've noticed a very strange effect: when clicking the action listener column, the action event is handled 3 times. If I remove the 2 action columns, then the action event is handled only once. The managed bean has session scope; the bean method:

    public void closeOrder(ActionEvent event) {
        OrdersDto order;
        if ((order = orderRow()) == null) {
            return;
        }
        System.out.println("closeOrder() 1 ");
        orderManager.closeOrder();
        System.out.println("closeOrder() 2 ");
    }

    The console prints the 'debug' text 3 times.

    Read the article

  • Enzo Backup for SQL Azure Beta Released!

    - by ScottKlein
    Blue Syntax is happy to announce the release of their SQL Azure database backup product! Enzo Backup for SQL Azure offers unparalleled backup and restore functionality and flexibility for SQL Azure databases. You can download the beta release here: http://www.bluesyntax.net/backup.aspx

    With Enzo Backup for SQL Azure, you can:
    - Create a backup blob, or a backup file, from a SQL Azure database
    - Restore a SQL Azure database from a backup blob, or a backup file
    - Perform limited backup and restore of SQL Server databases (see details)
    - Run backups entirely in the cloud using a remote agent
    - Backup a single schema of a database
    - Restore specific tables only
    - Copy backup devices from on-premise to the cloud
    - Use a command-line utility to perform backup operations
    - Perform transactionally consistent backups for SQL Azure

    Please download it and provide us with your feedback!

    Read the article
