Search Results

Search found 2725 results on 109 pages for 'nodes'.


  • How to optimize my PostgreSQL DB for prefix search?

    - by asmaier
    I have a table called "nodes" with roughly 1.7 million rows in my PostgreSQL db:

    =# \d nodes
             Table "public.nodes"
     Column |          Type          | Modifiers
    --------+------------------------+-----------
     id     | integer                | not null
     title  | character varying(256) |
     score  | double precision       |
    Indexes:
        "nodes_pkey" PRIMARY KEY, btree (id)

    I want to use information from that table for autocompletion of a search field, showing the user a list of the ten titles having the highest score fitting his input. So I used this query (here searching for all titles starting with "s"):

    =# explain analyze select title,score from nodes where title ilike 's%' order by score desc;

    QUERY PLAN
    -----------------------------------------------------------------------------------------------------------------------
    Sort (cost=64177.92..64581.38 rows=161385 width=25) (actual time=4930.334..5047.321 rows=161264 loops=1)
      Sort Key: score
      Sort Method: external merge Disk: 5712kB
      -> Seq Scan on nodes (cost=0.00..46630.50 rows=161385 width=25) (actual time=0.611..4464.413 rows=161264 loops=1)
         Filter: ((title)::text ~~* 's%'::text)
    Total runtime: 5260.791 ms
    (6 rows)

    This was much too slow for use with autocomplete. With some information from Using PostgreSQL in Web 2.0 Applications I was able to improve that with a special index:

    =# create index title_idx on nodes using btree(lower(title) text_pattern_ops);
    =# explain analyze select title,score from nodes where lower(title) like lower('s%') order by score desc limit 10;

    QUERY PLAN
    ------------------------------------------------------------------------------------------------------------------------------------------
    Limit (cost=18122.41..18122.43 rows=10 width=25) (actual time=1324.703..1324.708 rows=10 loops=1)
      -> Sort (cost=18122.41..18144.60 rows=8876 width=25) (actual time=1324.700..1324.702 rows=10 loops=1)
         Sort Key: score
         Sort Method: top-N heapsort Memory: 17kB
         -> Bitmap Heap Scan on nodes (cost=243.53..17930.60 rows=8876 width=25) (actual time=96.124..1227.203 rows=161264 loops=1)
            Filter: (lower((title)::text) ~~ 's%'::text)
            -> Bitmap Index Scan on title_idx (cost=0.00..241.31 rows=8876 width=0) (actual time=90.059..90.059 rows=161264 loops=1)
               Index Cond: ((lower((title)::text) ~>=~ 's'::text) AND (lower((title)::text) ~<~ 't'::text))
    Total runtime: 1325.085 ms
    (9 rows)

    So this gave me a speedup of factor 4. But can this be further improved? What if I want to use '%s%' instead of 's%'? Do I have any chance of getting decent performance with PostgreSQL in that case, too? Or should I try a different solution (Lucene? Sphinx?) for implementing my autocomplete feature?
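    For the '%s%' (infix) case there is also an option inside PostgreSQL worth evaluating before reaching for Lucene or Sphinx: a trigram index from the contrib module pg_trgm. This is a sketch, not from the question; note that pg_trgm only accelerates LIKE/ILIKE patterns on PostgreSQL 9.1 and later, while older releases index only its similarity operators:

    -- on 9.1+; older releases install pg_trgm via its contrib SQL script
    CREATE EXTENSION pg_trgm;

    -- a GIN trigram index over the same expression used in the queries above
    CREATE INDEX title_trgm_idx ON nodes USING gin (lower(title) gin_trgm_ops);

    -- the planner can then answer infix searches with a bitmap index scan
    EXPLAIN ANALYZE
    SELECT title, score
    FROM nodes
    WHERE lower(title) LIKE '%s%'
    ORDER BY score DESC
    LIMIT 10;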


  • Change TreeNode image on expand-collapse events

    - by Alexander Stalt
    I have a TreeView with many nodes. I want some nodes to change their image when the node is collapsed/expanded. How can I do it? Unfortunately, TreeNode doesn't have properties like ExpandNodeImage/CollapseNodeImage. The TreeView can change very often, so nodes can be deleted/added, I can delete child nodes, and so on. Maybe there is a class like ExpandAndCollapseNode?
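    There is no per-node expand/collapse image property, but the TreeView control raises AfterExpand and AfterCollapse events, so one sketch is to swap the node's image there; the image list keys below are made up for illustration:

    // assumes the TreeView's ImageList contains "folderClosed" and "folderOpen"
    treeView.AfterExpand += (sender, e) =>
    {
        e.Node.ImageKey = "folderOpen";
        e.Node.SelectedImageKey = "folderOpen";
    };
    treeView.AfterCollapse += (sender, e) =>
    {
        e.Node.ImageKey = "folderClosed";
        e.Node.SelectedImageKey = "folderClosed";
    };

    Because the handlers read the node from the event arguments, nodes can be added and deleted freely without any per-node bookkeeping.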


  • In an extension method how do I create an object based on the implementation class

    - by Greg
    Hi, in an extension method, how do I create an object based on the implementation class? In the code below I wanted to add an "AddRelationship" extension method, but I'm not sure how I can create a Relationship object within the extension method, i.e. I don't want to tie the extension method to this particular implementation of relationship.

    public static class TopologyExtns
    {
        public static void AddNode<T>(this ITopology<T> topIf, INode<T> node)
        {
            topIf.Nodes.Add(node.Key, node);
        }
        public static INode<T> FindNode<T>(this ITopology<T> topIf, T searchKey)
        {
            return topIf.Nodes[searchKey];
        }
        public static bool AddRelationship<T>(this ITopology<T> topIf, INode<T> parentNode, INode<T> childNode)
        {
            var rel = new RelationshipImp();   // ** How do I create an object from the implementation?
            // Add nodes to Relationship
            // Add relationships to Nodes
        }
    }

    public interface ITopology<T>
    {
        //List<INode> Nodes { get; set; }
        Dictionary<T, INode<T>> Nodes { get; set; }
    }

    public interface INode<T>
    {
        // Properties
        List<IRelationship<T>> Relationships { get; set; }
        T Key { get; }
    }

    public interface IRelationship<T>
    {
        // Parameters
        INode<T> Parent { get; set; }
        INode<T> Child { get; set; }
    }

    namespace TopologyLibrary_Client
    {
        class RelationshipsImp : IRelationship<string>
        {
            public INode<string> Parent { get; set; }
            public INode<string> Child { get; set; }
        }
    }

    public class TopologyImp<T> : ITopology<T>
    {
        public Dictionary<T, INode<T>> Nodes { get; set; }
        public TopologyImp()
        {
            Nodes = new Dictionary<T, INode<T>>();
        }
    }

    Thanks
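    One way to avoid hard-coding RelationshipImp is to let the caller name the concrete type through a second type parameter with a new() constraint. A sketch; the TRel parameter is an addition, not part of the question's code:

    public static bool AddRelationship<T, TRel>(this ITopology<T> topIf,
                                                INode<T> parentNode, INode<T> childNode)
        where TRel : IRelationship<T>, new()
    {
        var rel = new TRel { Parent = parentNode, Child = childNode };
        // wire the relationship into both nodes
        parentNode.Relationships.Add(rel);
        childNode.Relationships.Add(rel);
        return true;
    }

    // the call site names the implementation exactly once:
    // topology.AddRelationship<string, RelationshipsImp>(parent, child);

    An alternative with the same effect is to accept a Func<IRelationship<T>> factory argument, which also works for implementations without a parameterless constructor.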


  • How do I tie a cmbBox that selects all drives (local and network) into a TreeView in VB

    - by jpavlov
    How do I tie in a selected item from a cmbBox with a treeView? I am looking to just obtain the value of the one selected drive. Thanks.

    Imports System
    Imports System.IO
    Imports System.IO.File
    Imports System.Windows.Forms

    Public Class F_Treeview_Demo

        Private Sub F_Treeview_Demo_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            ' Initialize the local directory treeview
            Dim nodeText As String = ""
            Dim sb As New C_StringBuilder
            With My.Computer.FileSystem
                ' Read in the number of drives
                For i As Integer = 0 To .Drives.Count - 1
                    '** Build the drive's node text
                    sb.ClearText()
                    sb.AppendText(.Drives(i).Name)
                    cmbDrives.Items.Add(sb.FullText)
                Next
            End With
            ListRootNodes()
        End Sub

        Private Sub btnExit_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnExit.Click
            Application.Exit()
        End Sub

        Private Sub tvwLocalFolders_AfterSelect(ByVal sender As Object, ByVal e As System.Windows.Forms.TreeViewEventArgs) _
                Handles tvwLocalFolders.AfterSelect
            ' Display the path for the selected node
            Dim folder As String = tvwLocalFolders.SelectedNode.Tag
            lblLocalPath.Text = folder
            ListView1.Items.Clear()
            Dim childNode As TreeNode = e.Node.FirstNode
            Dim parentPath As String = AddChar(e.Node.Tag)
        End Sub

        Private Sub AddToList(ByVal nodes As TreeNodeCollection)
            For Each node As TreeNode In nodes
                If node.Checked Then
                    ListView1.Items.Add(node.Text)
                    ListView1.Items.Add(Chr(13))
                    AddToList(node.Nodes)
                End If
            Next
        End Sub

        Private Sub tvwLocalFolders_BeforeExpand(ByVal sender As Object, ByVal e As System.Windows.Forms.TreeViewCancelEventArgs) _
                Handles tvwLocalFolders.BeforeExpand
            ' Display the path for the selected node
            lblLocalPath.Text = e.Node.Tag
            ' Populate all child nodes below the selected node
            Dim parentPath As String = AddChar(e.Node.Tag)
            tvwLocalFolders.BeginUpdate()
            Dim childNode As TreeNode = e.Node.FirstNode
            ' this i added
            Dim smallNode As TreeNode = e.Node.FirstNode
            Do While childNode IsNot Nothing
                ListLocalSubFolders(childNode, parentPath & childNode.Text)
                childNode = childNode.NextNode
                ' this i added
                ListLocalFiles(smallNode, parentPath & smallNode.Text)
            Loop
            tvwLocalFolders.EndUpdate()
            tvwLocalFolders.Refresh()
            ' Select the node being expanded
            tvwLocalFolders.SelectedNode = e.Node
            ListView1.Items.Clear()
            AddToList(tvwLocalFolders.Nodes)
            ListView1.Items.Add(Environment.NewLine)
        End Sub

        Private Sub ListRootNodes()
            ' Add all local drives to the Local treeview
            Dim nodeText As String = ""
            Dim sb As New C_StringBuilder
            With My.Computer.FileSystem
                For i As Integer = 0 To .Drives.Count - 1
                    '** Build the drive's node text
                    sb.ClearText()
                    sb.AppendText(.Drives(i).Name)
                    nodeText = sb.FullText
                    nodeText = Me.cmbDrives.SelectedItem
                    '** Add the drive to the treeview
                    Dim driveNode As TreeNode
                    driveNode = tvwLocalFolders.Nodes.Add(nodeText)
                    'driveNode.Tag = .Drives(i).Name
                    '** Add the next level of subfolders
                    'ListLocalSubFolders(driveNode, .Drives(i).Name)
                    ListLocalSubFolders(driveNode, nodeText)
                    'driveNode = Nothing
                Next
            End With
        End Sub

        Private Sub ListLocalFiles(ByVal ParentNode As TreeNode, ByVal PParentPath As String)
            Dim FileNode As String = ""
            Try
                For Each FileNode In Directory.GetFiles(PParentPath)
                    Dim smallNode As TreeNode
                    smallNode = ParentNode.Nodes.Add(FilenameFromPath(FileNode))
                    With smallNode
                        .ImageIndex = 0
                        .SelectedImageIndex = 1
                        .Tag = FileNode
                    End With
                    smallNode = Nothing
                Next
            Catch ex As Exception
            End Try
        End Sub

        Private Sub ListLocalSubFolders(ByVal ParentNode As TreeNode, _
                ByVal ParentPath As String)
            ' Add all local subfolders below the passed Local treeview node
            Dim FolderNode As String = ""
            Try
                For Each FolderNode In Directory.GetDirectories(ParentPath)
                    Dim childNode As TreeNode
                    childNode = ParentNode.Nodes.Add(FilenameFromPath(FolderNode))
                    With childNode
                        .ImageIndex = 0
                        .SelectedImageIndex = 1
                        .Tag = FolderNode
                    End With
                    childNode = Nothing
                Next
            Catch ex As Exception
            End Try
        End Sub

        Private Sub ComboBox1_SelectedIndexChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmbDrives.SelectedIndexChanged
        End Sub

        Private Sub lblLocalPath_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles lblLocalPath.Click
        End Sub

        Private Sub grpLocalFileSystem_Enter(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles grpLocalFileSystem.Enter
        End Sub

        Private Sub btn1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btn1.Click
            ' lbl1.Text =
        End Sub

        Private Sub ListView1_SelectedIndexChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles ListView1.SelectedIndexChanged
        End Sub

    End Class
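    To read the one selected drive, a minimal sketch is to fill in the empty cmbDrives SelectedIndexChanged handler above and rebuild the tree from the chosen root; it reuses the question's own ListLocalSubFolders routine:

    Private Sub ComboBox1_SelectedIndexChanged(ByVal sender As System.Object, _
            ByVal e As System.EventArgs) Handles cmbDrives.SelectedIndexChanged
        ' the selected drive, e.g. "C:\"
        Dim drive As String = CStr(cmbDrives.SelectedItem)
        ' rebuild the tree rooted at the chosen drive
        tvwLocalFolders.Nodes.Clear()
        Dim driveNode As TreeNode = tvwLocalFolders.Nodes.Add(drive)
        driveNode.Tag = drive
        ListLocalSubFolders(driveNode, drive)
    End Sub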


  • QuickGraph - is there an algorithm to find all parents (up to root vertices) of a set of vertices

    - by Greg
    Hi, in QuickGraph, is there an algorithm to find all parents (up to the root vertices) of a set of vertices? In other words, all vertices which have somewhere under them (on the way to the leaf nodes) one or more of the input vertices. So if the vertices were nodes, and the edges were a "depends on" relationship, find all nodes that would be impacted by a given set of nodes. If not, how hard is it to write one's own algorithm?
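    I am not aware of a ready-made "all ancestors of a vertex set" algorithm shipping with QuickGraph, but writing one is a short breadth-first walk over incoming edges. A library-agnostic sketch; parentsOf is a hypothetical delegate you would back with your graph's in-edge lookup:

    using System;
    using System.Collections.Generic;

    static HashSet<T> FindImpacted<T>(IEnumerable<T> seeds, Func<T, IEnumerable<T>> parentsOf)
    {
        var impacted = new HashSet<T>(seeds);    // the input vertices count as impacted
        var frontier = new Queue<T>(seeds);
        while (frontier.Count > 0)
        {
            foreach (var parent in parentsOf(frontier.Dequeue()))
                if (impacted.Add(parent))        // Add returns false for already-visited vertices
                    frontier.Enqueue(parent);
        }
        return impacted;
    }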


  • NUMA-aware placement of communication variables

    - by Dave
    For classic NUMA-aware programming I'm typically most concerned about simple cold, capacity and compulsory misses and whether we can satisfy the miss by locally connected memory or whether we have to pull the line from its home node over the coherent interconnect -- we'd like to minimize channel contention and conserve interconnect bandwidth. That is, for this style of programming we're quite aware of where memory is homed relative to the threads that will be accessing it. Ideally, a page is collocated on the node with the thread that's expected to most frequently access the page, as simple misses on the page can be satisfied without resorting to transferring the line over the interconnect. The default "first touch" NUMA page placement policy tends to work reasonably well in this regard. When a virtual page is first accessed, the operating system will attempt to provision and map that virtual page to a physical page allocated from the node where the accessing thread is running. It's worth noting that the node-level memory interleaving granularity is usually a multiple of the page size, so we can say that a given page P resides on some node N. That is, the memory underlying a page resides on just one node.

    But when thinking about accesses to heavily-written communication variables we normally consider what caches the lines underlying such variables might be resident in, and in what states. We want to minimize coherence misses and cache probe activity and interconnect traffic in general. I don't usually give much thought to the location of the home NUMA node underlying such highly shared variables. On a SPARC T5440, for instance, which consists of 4 T2+ processors connected by a central coherence hub, the home node and placement of heavily accessed communication variables has very little impact on performance. The variables are frequently accessed, so likely in M-state in some cache, and the location of the home node is of little consequence because a requester can use cache-to-cache transfers to get the line. Or at least that's what I thought.

    Recently, though, I was exploring a simple shared memory point-to-point communication model where a client writes a request into a request mailbox and then busy-waits on a response variable. It's a simple example of delegation based on message passing. The server polls the request mailbox, and having fetched a new request value, performs some operation and then writes a reply value into the response variable. As noted above, on a T5440 performance is insensitive to the placement of the communication variables -- the request and response mailbox words. But on a Sun/Oracle X4800 I noticed that was not the case, and that NUMA placement of the communication variables was actually quite important.

    For background, an X4800 system consists of 8 Intel X7560 Xeons. Each package (socket) has 8 cores with 2 contexts per core, so the system is 8x8x2. Each package is also a NUMA node and has locally attached memory. Every package has 3 point-to-point QPI links for cache coherence, and the system is configured with a twisted ladder "mobius" topology. The cache coherence fabric is glueless -- there's no central arbiter or coherence hub. The maximum distance between any two nodes is just 2 hops over the QPI links. For any given node, 3 other nodes are 1 hop distant and the remaining 4 nodes are 2 hops distant.

    Using a single request (client) thread and a single response (server) thread, a benchmark harness explored all permutations of NUMA placement for the two threads and the two communication variables, measuring the average round-trip time and throughput rate between the client and server. In this benchmark the server simply acts as a transponder, writing the request value plus 1 back into the reply field, so there's no particular computation phase and we're only measuring communication overheads. In addition to varying the placement of communication variables over pairs of nodes, we also explored variations where both variables were placed on one page (and thus on one node) -- either on the same cache line or different cache lines -- while varying the node where the variables reside along with the placement of the threads.

    The key observation was that if the client and server threads were on different nodes, then the best placement of variables was to have the request variable (written by the client and read by the server) reside on the same node as the client thread, and to place the response variable (written by the server and read by the client) on the same node as the server. That is, if you have a variable that's to be written by one thread and read by another, it should be homed with the writer thread. For our simple client-server model that means using split request and response communication variables with unidirectional message flow on a given page. This can yield up to twice the throughput of less favorable placement strategies.

    Our X4800 uses the QPI 1.0 protocol with source-based snooping. Briefly, when node A needs to probe a cache line it fires off snoop requests to all the nodes in the system. Those recipients then forward their response not to the original requester, but to the home node H of the cache line. H waits for and collects the responses, adjudicates and resolves conflicts and ensures memory-model ordering, and then sends a definitive reply back to the original requester A. If some node B needs to transfer the line to A, it will do so by cache-to-cache transfer and let H know about the disposition of the cache line. A needs to wait for the authoritative response from H. So if a thread on node A wants to write a value to be read by a thread on node B, the latency is dependent on the distances between A, B, and H. We observe the best performance when the written-to variable is co-homed with the writer A. That is, we want H and A to be the same node, as the writer doesn't need the home to respond over the QPI link, given that the writer and the home reside on the very same node. With architecturally informed placement of communication variables we eliminate at least one QPI hop from the critical path.

    Newer Intel processors use the QPI 1.1 coherence protocol with home-based snooping. As noted above, under source-snooping a requester broadcasts snoop requests to all nodes. Those nodes send their response to the home node of the location, which provides memory ordering, reconciles conflicts, etc., and then posts a definitive reply to the requester. In home-based snooping the snoop probe goes directly to the home node and is not broadcast. The home node can consult snoop filters -- if present -- and send out requests to retrieve the line if necessary. The 3rd-party owner of the line, if any, can respond either to the home or the original requester (or even to both) according to the protocol policies.

    There are myriad variations that have been implemented, and unfortunately terminology doesn't always agree between vendors or with the academic taxonomy papers. The key is that home-snooping enables the use of a snoop filter to reduce interconnect traffic. And while home-snooping might have a longer critical path (latency) than source-based snooping, it may also require fewer messages and less overall bandwidth. It'll be interesting to reprise these experiments on a platform with home-based snooping.

    While collecting data I also noticed that there are placement concerns even in the seemingly trivial case when both threads and both variables reside on a single node. Internally, the cores on each X7560 package are connected by an internal ring (actually there are multiple contra-rotating rings). And the last-level on-chip cache (LLC) is partitioned in banks or slices, with each slice being associated with a core on the ring topology. A hardware hash function associates each physical address with a specific home bank. Thus we face distance and topology concerns even for intra-package communications, although the latencies are not nearly the magnitude we see inter-package. I've not seen such communication distance artifacts on the T2+, where the cache banks are connected to the cores via a high-speed crossbar instead of a ring -- communication latencies seem more regular.
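    To experiment with writer-homed placement on Linux, a rough sketch with libnuma follows; the mailbox layout and the idea of one allocation per writer are illustrative assumptions, not code from the article:

    #include <numa.h>
    #include <stdint.h>

    /* one cache-line-sized mailbox; padding avoids false sharing */
    typedef struct {
        volatile uint64_t word;
        char pad[64 - sizeof(uint64_t)];
    } mailbox_t;

    /* Home each mailbox with its writer: the request word on the client's node,
     * the response word on the server's node. numa_alloc_onnode is page-granular,
     * so the two variables also end up on different pages and different nodes. */
    mailbox_t *make_mailbox(int home_node) {
        return (mailbox_t *)numa_alloc_onnode(sizeof(mailbox_t), home_node);
    }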


  • Dynamic Tree control in Flex 3

    - by nimmyliji
    I am looking for sample code to create a dynamic Tree control in Flex using a collection of objects obtained from the backend (Perl CGI). Initially the Tree will display the root nodes. Clicking a root node will fetch the data for populating its child nodes (basically adding child nodes on demand). Clicking a child node will pull another collection and add child nodes of that child node. So, let's assume the Tree initially displays:

    Root1
    Root2
    Root3

    Clicking Root1 will display something like this:

    Root1
      Child 1
      Child 2
    Root2
    Root3

    And clicking Child 1 will display:

    Root1
      Child 1
        Child1 of Child 1
        Child2 of Child 1
      Child 2
    Root2
    Root3

    Is it possible? Thanks in advance...
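    Yes, it is possible; the usual pattern is lazy loading on the tree's itemOpen event. A rough sketch with an XML-backed mx:Tree in Flex 3 -- the event wiring is standard, but the backend call is only indicated in comments since it depends on the Perl CGI's interface:

    <mx:Tree id="tree" dataProvider="{treeData}" labelField="@label"
             itemOpen="onItemOpen(event)"/>

    <mx:Script><![CDATA[
        import mx.events.TreeEvent;

        private function onItemOpen(event:TreeEvent):void {
            var node:XML = XML(event.item);
            if (node.children().length() == 0) {   // fetch each branch only once
                // call the backend here (e.g. an HTTPService against the Perl CGI),
                // then append the returned children to this node:
                // node.appendChild(<node label="Child 1"/>);
            }
        }
    ]]></mx:Script>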


  • AWS: setting up auto-scale for EC2 instances

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/16/aws-setting-up-auto-scale-for-ec2-instances.aspx

    With Amazon Web Services, there’s no direct equivalent to Azure Worker Roles – no Elastic Beanstalk-style application for .NET background workers. But you can get the auto-scale part by configuring an auto-scaling group for your EC2 instance. This is a step-by-step guide that shows you how to create the auto-scaling configuration, which for EC2 you need to do with the command line, and then link your scaling policies to CloudWatch alarms in the Web console. I’m using queue size as my metric for CloudWatch, which is a good fit if your background workers are pulling messages from a queue and processing them. If the queue is getting too big, the “high” alarm will fire and spin up a new instance to share the workload. If the queue is draining down, the “low” alarm will fire and shut down one of the instances.

    To start with, you need to manually set up your app in an EC2 VM; for a background worker that would mean hosting your code in a Windows Service (I always use Topshelf). If you’re dual-running Azure and AWS, then you can isolate your logic in one library, with a generic entry point that has Start() and Stop() functions, so your Worker Role and Windows Service are essentially using the same code. When you have your instance set up with the Windows Service running automatically, and you’ve tested it starts up and works properly from a reboot, shut the machine down and take an image of the VM, using Create Image (EBS AMI) from the Web Console. When that completes, you’ll have your own AMI which you can use to spin up new instances, and you’re ready to create your auto-scaling group. You need to dip into the command-line tools for this, so follow this guide to set up the AWS autoscale command line tool. Now we’re ready to go.

    1. Create a launch configuration

    This launch configuration tells AWS what to do when a new instance needs to be spun up. You create it with the as-create-launch-config command, which looks like this:

    as-create-launch-config sc-xyz-launcher  # name of the launch config
      --image-id ami-7b9e9f12                # id of the AMI you extracted from your VM
      --region eu-west-1                     # which region the new instance gets created in
      --instance-type t1.micro               # size of the instance to create
      --group quicklaunch-1                  # security group for the new instance

    2. Create an auto-scaling group

    The auto-scaling group links to the launch config, and defines the overall configuration of the collection of instances:

    as-create-auto-scaling-group sc-xyz-asg  # auto-scaling group name
      --region eu-west-1                     # region to create in
      --launch-configuration sc-xyz-launcher # name of the launch config to invoke for new instances
      --min-size 1                           # minimum number of nodes in the group
      --max-size 5                           # maximum number of nodes in the group
      --default-cooldown 300                 # period to wait (in seconds) after each scaling event, before checking if another scaling event is required
      --availability-zones eu-west-1a eu-west-1b eu-west-1c  # which availability zones you want your instances to be allocated in – multiple entries means EC2 will use any of them

    3. Create a scale-up policy

    The policy dictates what will happen in response to a scaling event being triggered from a “high” alarm being breached. It links to the auto-scaling group; this sample results in one additional node being spun up:

    as-put-scaling-policy scale-up-policy    # policy name
      -g sc-psod-woker-asg                   # auto-scaling group the policy works with
      --adjustment 1                         # size of the adjustment
      --region eu-west-1                     # region
      --type ChangeInCapacity                # type of adjustment; this specifies a fixed number of nodes, but you can use PercentChangeInCapacity to make an adjustment relative to the current number of nodes, e.g. increasing by 50%

    4. Create a scale-down policy

    The policy dictates what will happen in response to a scaling event being triggered from a “low” alarm being breached. It links to the auto-scaling group; this sample results in one node from the group being taken offline:

    as-put-scaling-policy scale-down-policy
      -g sc-psod-woker-asg
      "--adjustment=-1"                      # in Windows, use double-quotes to surround a negative adjustment value
      --type ChangeInCapacity
      --region eu-west-1

    5. Create a “high” CloudWatch alarm

    We’re done with the command line now. In the Web Console, open up the CloudWatch view and create a new alarm. This alarm will monitor your metrics and invoke the scale-up policy from your auto-scaling group, when the group is working too hard. Configure your metric – this example will fire the alarm if there are more than 10 messages in my queue for over a minute – then link the alarm to the scale-up policy in your group.

    6. Create a “low” CloudWatch alarm

    The opposite of step 5, this alarm will trigger when the instances in your group don’t have enough work to do (e.g. fewer than 2 messages in the queue for 1 minute), and will invoke the scale-down policy. And that’s it. You don’t need your original VM, as the auto-scale group has a minimum number of nodes connected. You can test out the scaling by flexing your CloudWatch metric – in this example, filling up a queue from a stub publisher – and watching AWS create new nodes as required, then stopping the publisher and watching AWS kill off the spare nodes.


  • Routing configuration in cakephp

    - by ShiVik
    Hello all, I am trying to implement routing in CakePHP. I want the URLs to be mapped like this:

    www.example.com/nodes/main  ->  www.example.com/main
    www.example.com/nodes/about ->  www.example.com/about

    So for this I wrote in my config/routes.php file:

    Router::connect('/:action', array('controller' => 'nodes'));

    Now, I got the thing going, but when I click on the links, the URL in the browser appears like:

    www.example.com/nodes/main
    www.example.com/nodes/about

    Is there some way I can get the URLs to appear the way they are routed? Setting it in .htaccess or httpd.conf would be easy – but I don't have access to that. Regards, Vikram
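    The short URLs come from CakePHP's reverse routing: as long as links are built from controller/action arrays rather than hard-coded strings, the router renders them in the connected form. A sketch against the CakePHP 1.x helper API (in 1.3 the helper is reached as $this->Html):

    // config/routes.php
    Router::connect('/:action', array('controller' => 'nodes'));

    // in a view: build links from arrays so reverse routing emits "/about",
    // not a hard-coded "/nodes/about"
    echo $html->link('About', array('controller' => 'nodes', 'action' => 'about'));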


  • Relative XPath selection using XmlNode (c#)

    - by maxp
    Say I have the following XML file:

    <a>
      <b>
        <c></c>
      </b>
      <b>
        <c></c>
      </b>
    </a>

    var nodes = doc.SelectNodes("/a/b");

    will select the two b nodes. I then loop over these two nodes, such as:

    foreach (XmlNode node in nodes) { }

    However, when I call node.SelectNodes("/a/b/c"); it still returns both values and not just the descendants. Is it possible to select only nodes descending from the current node?
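    An XPath starting with / is absolute: it is evaluated from the document root no matter which node SelectNodes is called on. Using a relative path scopes the query to the current node; a minimal sketch:

    foreach (XmlNode node in doc.SelectNodes("/a/b"))
    {
        // relative path: only this <b> element's <c> children
        XmlNodeList own = node.SelectNodes("c");

        // ".//c" would select all <c> descendants of this node;
        // "//c" would again search the entire document
    }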


  • LINQ2SQL: orderby node.hasChildren(), name ascending

    - by Peter Bridger
    I have a hierarchical data structure which I'm displaying in a webpage as a treeview. I want the data to be ordered to first show nodes which have no children, ordered alphabetically, then under these the nodes which have children, again ordered alphabetically. Currently I'm ordering all nodes in one group, which means nodes with children appear next to nodes with no children. I'm using a recursive method to build up the treeview, which has this LINQ code at its heart:

    var filteredCategory = from c in category
                           orderby c.Name ascending
                           where c.ParentCategoryId == parentCategoryId && c.Active == true
                           select c;

    So this is the orderby statement I want to enhance. Shown below is the database table structure:

    [dbo].[Category](
        [CategoryId] [int] IDENTITY(1,1) NOT NULL,
        [Name] [varchar](100) NOT NULL,
        [Level] [tinyint] NOT NULL,
        [ParentCategoryId] [int] NOT NULL,
        [Selectable] [bit] NOT NULL CONSTRAINT [DF_Category_Selectable] DEFAULT ((1)),
        [Active] [bit] NOT NULL CONSTRAINT [DF_Category_Active] DEFAULT ((1))
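    One hedged way to get the two-group ordering in a single query is to sort on a boolean "has children" expression before the name. The sketch below assumes LINQ to SQL generated a self-referencing association from ParentCategoryId; the name Children is an assumption, the designer may call it Categories:

    var filteredCategory = from c in category
                           where c.ParentCategoryId == parentCategoryId && c.Active == true
                           // false orders before true, so childless nodes come first
                           orderby c.Children.Any(), c.Name ascending
                           select c;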


  • Data structure for unrooted trees

    - by Esmond
    I'm having problems figuring out how to build an unrooted tree with weighted edges, and what data structure to store such a tree in. An example of an unrooted tree would be the one here: http://www.bio.davidson.edu/courses/GENOMICS/seq/unrooted.gif The problem I am having is that the leaves would only have 1 link to the internal nodes, while the internal nodes would have 3 links (an internal node has 2 children and a link to another internal node). Do I have to distinguish between the 2 different kinds of nodes, or can I have one class serving the function of both types of nodes?
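    A single node class is enough if the tree is stored as a weighted adjacency list: a leaf is simply a vertex of degree 1, and the internal nodes naturally end up with degree 3. A sketch:

    #include <vector>

    struct UnrootedTree {
        struct Edge { int to; double weight; };
        std::vector<std::vector<Edge>> adj;   // adj[v] = edges incident to vertex v

        int addNode() { adj.push_back({}); return (int)adj.size() - 1; }

        void addEdge(int u, int v, double w) {
            adj[u].push_back({v, w});         // undirected: store the edge on both endpoints
            adj[v].push_back({u, w});
        }

        bool isLeaf(int v) const { return adj[v].size() == 1; }
    };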


  • How can I make this code more generic

    - by Greg
    Hi, how could I make this code more generic, in the sense that the Dictionary key could be a different type, depending on what the user of the library wanted? For example, someone might want to use the extension methods/interfaces in a case where the "unique key", so to speak, for Node is actually an int, not a string.

    public interface ITopology
    {
        Dictionary<string, INode> Nodes { get; set; }
    }

    public static class TopologyExtns
    {
        public static void AddNode(this ITopology topIf, INode node)
        {
            topIf.Nodes.Add(node.Name, node);
        }
        public static INode FindNode(this ITopology topIf, string searchStr)
        {
            return topIf.Nodes[searchStr];
        }
    }

    public class TopologyImp : ITopology
    {
        public Dictionary<string, INode> Nodes { get; set; }
        public TopologyImp()
        {
            Nodes = new Dictionary<string, INode>();
        }
    }
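    A sketch of one generic-key variant: promote the key to a type parameter and let INode expose its own key, so the extension methods stop assuming string (the INode members shown are assumptions based on the code above):

    using System.Collections.Generic;

    public interface INode<TKey>
    {
        TKey Key { get; }
    }

    public interface ITopology<TKey>
    {
        Dictionary<TKey, INode<TKey>> Nodes { get; set; }
    }

    public static class TopologyExtns
    {
        public static void AddNode<TKey>(this ITopology<TKey> topIf, INode<TKey> node)
        {
            topIf.Nodes.Add(node.Key, node);    // the key now comes from the node itself
        }

        public static INode<TKey> FindNode<TKey>(this ITopology<TKey> topIf, TKey searchKey)
        {
            return topIf.Nodes[searchKey];
        }
    }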


  • Drupal Views: Render Null Result for Relationship as 0

    - by Kyle S
    I have a View configured in Drupal to return nodes, sorting them by their average vote in descending order. For the purpose of the View, the value of the average votes is a Relationship. I noticed that nodes with no votes are displayed after nodes with a negative average. Nodes with no votes should have an average of 0, but I believe the MySQL JOIN is causing NULL values to be returned (as there are no matching rows in the joined table, since a row is created after the first vote is cast for that item). I discovered that with MySQL it is possible to output all values that are NULL in a column as another value with IFNULL(column_name,'other value'). I feel like I would need to modify the Views module in order to obtain this functionality, but I'm hoping that there is some sort of option that returns NULL values in a relation (a relation doesn't exist for the item) as 0 instead of NULL, so that I can properly sort the nodes. The modules I am using include Views, Voting API, Vote Up/Down, and CTools. Thanks.
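    If patching the module turns out to be unnecessary, Views 2 does expose hook_views_query_alter(), where the ORDER BY clause can be wrapped in COALESCE. This is an untested sketch; the view name and the votingapi_cache alias are placeholders to check against the view's actual generated query:

    function mymodule_views_query_alter(&$view, &$query) {
      if ($view->name == 'my_vote_view') {
        foreach ($query->orderby as $i => $clause) {
          // treat missing vote rows as an average of 0 instead of NULL
          $query->orderby[$i] = str_replace(
            'votingapi_cache.value',
            'COALESCE(votingapi_cache.value, 0)',
            $clause
          );
        }
      }
    }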


  • Programmatically disclosing a node in af:tree and af:treeTable

    - by Frank Nimphius
    A common developer requirement when working with af:tree or af:treeTable components is to programmatically disclose (expand) a specific node in the tree. If the node to disclose is not a top-level node, like a location in a LocationsView -> DepartmentsView -> EmployeesView hierarchy, you need to also disclose the node's parent node hierarchy for application users to see the fully expanded tree node structure. Working on ADF Code Corner sample #101, I wrote the following code lines that show a generic option for disclosing a tree node starting from a handle to the node to disclose. The use case in ADF Code Corner sample #101 is a drag and drop operation from a table component to a tree to relocate employees to a new department. The tree node that receives the drop is a department node contained in a location. In theory the location could be part of a country and so on, to indicate the depth the tree may have. Based on this structure, the code below provides a generic solution to parse the current node's parent nodes and its child nodes.

    The drop event provided a rowKey for the tree node that received the drop. Like in af:table, the tree row key is not of type oracle.jbo.domain.Key but an implementation of java.util.List that contains the row keys. The JUCtrlHierBinding class in the ADF binding layer, which represents the ADF tree binding at runtime, provides a method named findNodeByKeyPath that allows you to get a handle to the JUCtrlHierNodeBinding instance that represents a tree node in the binding layer.

    CollectionModel model = (CollectionModel) your_af_tree_reference.getValue();
    JUCtrlHierBinding treeBinding = (JUCtrlHierBinding) model.getWrappedData();
    JUCtrlHierNodeBinding treeDropNode = treeBinding.findNodeByKeyPath(dropRowKey);

    To disclose the tree node, you need to create a RowKeySet, which you do using the RowKeySetImpl class. Because the RowKeySet replaces any existing row key set in the tree, all other nodes are automatically closed.

    RowKeySetImpl rksImpl = new RowKeySetImpl();
    // the first key to add is the node that received the drop operation (departments)
    rksImpl.add(dropRowKey);

    Similarly, the root node can be obtained from the tree binding. The root node is the end of all parent node iteration and therefore important.

    JUCtrlHierNodeBinding rootNode = treeBinding.getRootNodeBinding();

    The following code obtains a reference to the hierarchy of parent nodes until the root node is found.

    JUCtrlHierNodeBinding dropNodeParent = treeDropNode.getParent();
    // walk up the tree to expand all parent nodes
    while (dropNodeParent != null && dropNodeParent != rootNode) {
        // add the node's keyPath (remember it's a List) to the row key set
        rksImpl.add(dropNodeParent.getKeyPath());
        dropNodeParent = dropNodeParent.getParent();
    }

    Next, you disclose the drop node's immediate child nodes, as otherwise all you see is the department node. It's not quite exactly "dinner for one", but the procedure is very similar to the one handling the parent node keys:

    ArrayList<JUCtrlHierNodeBinding> childList = (ArrayList<JUCtrlHierNodeBinding>) treeDropNode.getChildren();
    for (JUCtrlHierNodeBinding nb : childList) {
        rksImpl.add(nb.getKeyPath());
    }

    Next, the row key set is defined as the disclosed row keys on the tree, so when you refresh (PPR) the tree, the new disclosed state shows:

    tree.setDisclosedRowKeys(rksImpl);
    AdfFacesContext.getCurrentInstance().addPartialTarget(tree.getParent());

    The refresh in my use case is on the tree parent component (a layout container), which usually shows the best effect for refreshing the tree component.


  • maintaining a large list in python

    - by Oren
    I need to maintain a large list of Python pickleable objects that will quickly execute the following algorithm: the list will have 4 pointers and can:

    - Read/write each of the 4 pointed nodes
    - Insert new nodes before the pointed nodes
    - Increase a pointer by 1
    - Pop nodes from the beginning of the list (without overlapping the pointers)
    - Add nodes at the end of the list

    The list can be very large (300-400 MB), so it shouldn't be contained in RAM. The nodes are small (100-200 bytes) but can contain various types of data. The most efficient way it can be done, in my opinion, is to implement some paging mechanism. On the hard drive there will be a database of a linked list of pages, where at every moment up to 5 of them are loaded in memory. On insertion, if some max page size is reached, the page will be split into 2 small pages, and on pointer increment, if a pointer is increased beyond its loaded page, the following page will be loaded instead. This is a good solution, but I don't have the time to write it, especially if I want to make it generic and implement all the python-list features. Using SQL tables is not good either, since I'd need to implement a complex index-key mechanism. I tried ZODB and zc.blist, which implement a BTree-based list that can be stored in a ZODB database file, but I don't know how to configure them so the above features will run in reasonable time. I don't need all the multi-threading/transactioning features. No one else will touch the database file except for my single-thread program. Can anyone explain to me how to configure ZODB/zc.blist so the above features will run fast, or show me a different large-list implementation?
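    For what it's worth, a minimal sketch of the ZODB + zc.blist setup; the import paths follow those packages, but the cache number is a guess to tune, not a recommendation:

    import transaction
    from ZODB.DB import DB
    from ZODB.FileStorage import FileStorage
    from zc.blist import BList

    storage = FileStorage('biglist.fs')
    db = DB(storage, cache_size=10000)   # objects kept live in RAM; tune against the 5-page budget
    conn = db.open()
    root = conn.root()

    if 'items' not in root:
        root['items'] = BList()          # a BTree-backed sequence; loads its buckets lazily
    items = root['items']

    items.extend(range(1000))            # add nodes at the end
    items.insert(10, 'new-node')         # insert before a "pointer"
    del items[0]                         # pop from the beginning
    transaction.commit()                 # commit in batches; every commit flushes to disk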


  • Help with Donald B. Johnson's algorithm: I cannot understand the pseudo code (PART II)

    - by Pitelk
    Hi all, I cannot understand a certain part of the paper published by Donald Johnson about finding cycles (circuits) in a graph. More specifically, I cannot understand what the matrix Ak is, which is mentioned in the following line of the pseudo code:

    Ak := adjacency structure of strong component K with least vertex in subgraph of G induced by {s, s+1, ..., n};

    To make things worse, a few lines later it mentions "for i in Vk do" without declaring what Vk is... As far as I understand, we have the following:

    1) In general, a strong component is a sub-graph of a graph, in which for every node of this sub-graph there is a path to any node of the sub-graph (in other words, you can reach any node of the sub-graph from any other node of the sub-graph).
    2) A sub-graph induced by a list of nodes is a graph containing all these nodes plus all the edges connecting these nodes. In the paper the mathematical definition is "F is a subgraph of G induced by W if W is a subset of V and F = (W, {(u,y) | u,y in W and (u,y) in E})", where u, y are nodes, E is the set of all the edges in the graph, and W is a set of nodes.
    3) In the code implementation the nodes are named by the integer numbers 1 ... n.
    4) I suspect that Vk is the set of nodes of the strong component K.

    Now to the question. Let's say we have a graph G = (V,E) with V = {1,2,3,4,5,6,7,8,9}, which can be divided into 3 strong components SC1 = {1,4,7,8}, SC2 = {2,3,9}, SC3 = {5,6} (and their edges). Can anybody give me an example, for s = 1, s = 2, s = 5, of what Vk and Ak are going to be according to the code? The pseudo code is in my previous question at http://stackoverflow.com/questions/2908575/help-in-the-donalds-b-johnsons-algorithm-i-cannot-understand-the-pseudo-code and the paper can be found at http://stackoverflow.com/questions/2908575/help-in-the-donalds-b-johnsons-algorithm-i-cannot-understand-the-pseudo-code Thank you in advance.
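    As a worked reading of the definitions above for the s = 1 case (the only one the example fully determines): the subgraph of G induced by {1, ..., 9} is G itself, its strong components are SC1, SC2 and SC3, and the component containing the least vertex is SC1 (its least vertex is 1). So Vk = {1, 4, 7, 8}, and Ak is the adjacency structure of SC1, i.e. the edges of G restricted to those four nodes. For s = 2 and s = 5 the answer depends on the concrete edges, which the example leaves unspecified: dropping the vertices below s may split SC1 and SC2 into smaller components.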


  • problem with AddSort method

    - by netNewbi3
    Hi, could you let me know what the problem is with sorting in this code? It doesn't work.

    My XML:

    CONTRACTS
      CONTRACT
        SUPPLIER
        COMMODITIES
          COMMODITY
            COMMODITYNAME

    My code:

    Dim myString As StringBuilder = New StringBuilder(200)
    Dim xdoc As New XPathDocument("local_xml.xml")
    Dim nav As XPathNavigator = xdoc.CreateNavigator()
    Dim expr As XPathExpression
    expr = nav.Compile("/pf:CONTRACTS/pf:CONTRACT")
    Dim namespaceManager As XmlNamespaceManager = New XmlNamespaceManager(nav.NameTable)
    namespaceManager.AddNamespace("pf", "http://namespace.ac.uk/")
    expr.AddSort("pf:SUPPLIER", XmlSortOrder.Ascending, XmlCaseOrder.None, String.Empty, XmlDataType.Text)
    expr.SetContext(namespaceManager)
    Dim nodes As XPathNodeIterator = nav.Select(expr)
    If nodes.Count > 0 Then
        myString.AppendLine("<table width='96%' border='0' cellpadding='0' cellspacing='0' border='0' class='datatable1'>")
        myString.AppendLine("<th width='35%'>Name</th><th width='35%'>Commodity</th><th width='20%'>Supplier</a></th>")
        While nodes.MoveNext()
            Dim node As XPathNavigator = nodes.Current.SelectSingleNode("pf:NAME", namespaceManager)
            Dim supplier As XPathNavigator = nodes.Current.SelectSingleNode("pf:SUPPLIER", namespaceManager)
            Dim commodity As XPathNavigator = nodes.Current.SelectSingleNode("pf:COMMODITIES/pf:COMMODITY/pf:COMMODITYNAME", namespaceManager)
            Dim sChars As String = " "
            myString.AppendLine("<tr>")
            myString.AppendLine("<td>")
            myString.AppendLine(node.ToString())
            myString.AppendLine("</td>")
            myString.AppendLine("<td>")
            myString.AppendLine(commodity.ToString())
            myString.AppendLine("</td>")
            myString.AppendLine("<td>")
            myString.AppendLine(supplier.ToString())
            myString.AppendLine("</td>")
            myString.AppendLine("</tr>")
        End While
        myString.AppendLine("</table>")
        Dim strOutput As String = myString.ToString()
        lblOutput.Text = strOutput
    Else
        lblOutput.Text = "No results for your search<br/>"
    End If
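    One likely culprit is the sort key: "pf:SUPPLIER" carries a namespace prefix, so it needs namespace context just like the main expression. A hedged variant that compiles the sort key as its own expression, and sets the context before the sort is added:

    Dim namespaceManager As New XmlNamespaceManager(nav.NameTable)
    namespaceManager.AddNamespace("pf", "http://namespace.ac.uk/")

    Dim expr As XPathExpression = nav.Compile("/pf:CONTRACTS/pf:CONTRACT")
    expr.SetContext(namespaceManager)

    ' compile the sort key with the same namespace context before adding the sort
    Dim sortKey As XPathExpression = nav.Compile("pf:SUPPLIER")
    sortKey.SetContext(namespaceManager)
    expr.AddSort(sortKey, XmlSortOrder.Ascending, XmlCaseOrder.None, String.Empty, XmlDataType.Text)

    Dim nodes As XPathNodeIterator = nav.Select(expr)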


  • Neural Network with softmax activation

    - by Cambium
    This is more or less a research project for a course, and my understanding of NNs is very/fairly limited, so please be patient :)

    I am currently in the process of building a neural network that attempts to examine an input dataset and output the probability/likelihood of each classification (there are 5 different classifications). Naturally, the sum of all output nodes should add up to 1. Currently, I have two layers, and I set the hidden layer to contain 10 nodes. I came up with two different types of implementations:

    1) Logistic sigmoid for hidden layer activation, softmax for output activation
    2) Softmax for both hidden layer and output activation

    I am using gradient descent to find local maximums in order to adjust the hidden nodes' weights and the output nodes' weights. I am certain that I have this correct for sigmoid. I am less certain with softmax (or whether I can use gradient descent at all). After a bit of researching, I couldn't find the answer and decided to compute the derivative myself, obtaining softmax'(x) = softmax(x) - softmax(x)^2 (this returns a column vector of size n). I have also looked into the MATLAB NN toolkit; the derivative of softmax provided by the toolkit returned a square matrix of size n×n, where the diagonal coincides with the softmax'(x) that I calculated by hand, and I am not sure how to interpret the output matrix. I ran each implementation with a learning rate of 0.001 and 1000 iterations of back propagation. However, my NN returns 0.2 (an even distribution) for all five output nodes, for any subset of the input dataset. My conclusions:

    - I am fairly certain that my gradient descent is incorrectly done, but I have no idea how to fix this.
    - Perhaps I am not using enough hidden nodes
    - Perhaps I should increase the number of layers

    Any help would be greatly appreciated! The dataset I am working with can be found here (processed Cleveland): http://archive.ics.uci.edu/ml/datasets/Heart+Disease
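    For reference, the full derivative of softmax is a Jacobian, which reconciles the two results above: the toolkit's n×n matrix is the general case, and the hand-derived softmax(x) - softmax(x)^2 is exactly its diagonal:

    \frac{\partial s_i}{\partial x_j} = s_i \left( \delta_{ij} - s_j \right),
    \qquad s_i = \operatorname{softmax}_i(x) = \frac{e^{x_i}}{\sum_k e^{x_k}}

    On the diagonal (i = j) this reduces to s_i - s_i^2, the column vector computed by hand; the off-diagonal entries -s_i s_j also enter the backpropagated gradient, so dropping them is one plausible reason the outputs stay stuck at an even 0.2.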


  • Combining multiple content types into a single search result with Drupal 6 and Views 2

    - by Chaulky
    Hi all, I need to create a somewhat advanced search functionality for my Drupal 6 site. I have a one-to-many relationship between two content types and need to search them, respecting that relationship. To make things more clear... I have content types TypeX and TypeY. TypeY has a node-reference CCK field that relates it to a single node of TypeX. So, many nodes of TypeY reference the same node of TypeX. I want to use Views 2 to create a search page for these nodes. I want each search result to be a node of TypeX, along with all the nodes of TypeY that reference it. I know I could just theme the individual results and use a view to add the nodes of TypeY to the single node of TypeX... but that won't allow users to actually search TypeY... it would only search TypeX and merely display some nodes of TypeY along with it. Is there any way to get the search to account for content in nodes of both content types, but merge the TypeY results into the "parent" node of TypeX? In database terms, it seems like I need to do a join, then filter by the search terms. But I can't figure out how to do this in Views. Thanks for any help I can get!!!


  • Drupal view filter to show only one of a certain item

    - by Joel
    I'm fairly new to Drupal, and am using Node Import to take a TSV file and turn it into nodes. I'm hitting a problem, though, with automating updates to the nodes. Again, I'd like to take a tab-separated-values text file, load it into my site via Node Import (or whatever else anyone might suggest), and then only show updated nodes. Here's a specific example. I have a node with the following info:

    StoreId  Name   Address   Phone   Contact
    01       Name1  Address1  Phone1  Contact1
    02       Name2  Address2  Phone2  Contact2
    etc.

    The info pulls into the nodes just fine (thank you, Node Import!), but we also want to process updates to the nodes. So far I have two ideas... figure out how to delete duplicate (previous) instances of the same StoreId, or just save the node with the duplicate StoreId (and new other info) and just display the most current version. In Views, I can get it to show the nodes and everything, but I can't figure out how to only display the most recent version of each StoreId. A view of views would work, but I can't seem to get that to work, either. Any ideas or other approaches I could take? Thanks in advance for the help!


  • Can I get an XPathNodeIterator directly from an XPath?

    - by Val
    I hope I'm just missing something obvious. I have a number of repeating nodes in an XML document:

    <root>
      <parent>
        <child/>
        <child/>
      </parent>
    </root>

    I need to examine the contents of each of the <child> elements in turn, so I need an XPathNodeIterator containing the nodeset of all the <child> nodes. If I have an XPath that would select the child nodes, e.g. /root/parent/child, is there any way to feed that directly to a new XPathNodeIterator? Everything I see in the docs and examples indicates I have to first get an XPathNavigator to the <parent>, then Select the child nodes, like:

    XPathNavigator nav = datasource.CreateNavigator().SelectSingleNode("/root/parent");
    XPathNodeIterator it = nav.Select("./child");
    foreach (child in it) { /* do something */ }

    I was hoping to skip the XPathNavigator, and initialize the XPathNodeIterator with the XPath to the child nodes directly, something like:

    XpathNodeIterator it = new XpathNodeIterator("/root/parent/child");
    foreach (child in it) { /* do something */ }

    Is this possible? The benefit is not only saving a line of code, but I could use a single XPath expression, rather than splitting the path to the <child> nodes in two: first to get the parent element, then to select its children.
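    XPathNodeIterator has no public constructor taking an XPath string, but the navigator's Select accepts the full path, which collapses the two steps into a single expression:

    // one XPath expression, no intermediate SelectSingleNode
    XPathNodeIterator it = datasource.CreateNavigator().Select("/root/parent/child");
    while (it.MoveNext())
    {
        // it.Current is an XPathNavigator positioned on a <child> node
    }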


  • Slides of my HOL on MySQL Cluster

    - by user13819847
    Hi! Thanks to everyone who attended my hands-on lab on MySQL Cluster at MySQL Connect last Saturday. The following are the links for the slides, the HOL instructions, and the code examples. I'll try to summarize my HOL below.

    The aim of the HOL was to help attendees familiarize themselves with MySQL Cluster. In particular, by learning:

    - the basics of the MySQL Cluster architecture
    - the basics of MySQL Cluster configuration and administration
    - how to start a new Cluster for evaluation purposes and how to connect to it

    We started by introducing MySQL Cluster. MySQL Cluster is a proven technology that today is successfully servicing the most performance-intensive workloads. MySQL Cluster is deployed across telecom networks and is powering mission-critical web applications. Without trading off use of commodity hardware, transactional consistency and use of complex queries, MySQL Cluster provides:

    - Web Scalability (web-scale performance on both reads and writes)
    - Carrier Grade Availability (99.999%)
    - Developer Agility (freedom to use SQL or NoSQL access methods)

    MySQL Cluster implements an auto-sharding, multi-master, shared-nothing architecture, where independent nodes can scale horizontally on commodity hardware with no shared disks, no shared memory, and no single point of failure.

    In the architecture of MySQL Cluster it is possible to find three types of nodes:

    - management nodes: responsible for reading the configuration files, maintaining logs, and providing an interface to the administration of the entire cluster
    - data nodes: where data and indexes are stored
    - api nodes: provide the external connectivity (e.g. the NDB engine of the MySQL Server, APIs, Connectors)

    MySQL Cluster is recommended in situations where:

    - it is crucial to reduce service downtime, because this produces a heavy impact on business
    - sharding the database to scale write performance highly impacts development of the application (in MySQL Cluster the sharding is automatic and transparent to the application)
    - there are real-time needs
    - there are unpredictable scalability demands
    - it is important to have data-access flexibility (SQL & NoSQL)

    MySQL Cluster is available in two editions:

    - Community Edition (open source, freely downloadable from mysql.com)
    - Carrier Grade Edition (commercial edition, can be downloaded from eDelivery for evaluation purposes)

    MySQL Carrier Grade Edition adds on top of the Community Edition:

    - Commercial Extensions (MySQL Cluster Manager, MySQL Enterprise Monitor, MySQL Cluster Installer)
    - Oracle's Premium Support Services (largest team of MySQL experts backed by MySQL developers, forward-compatible hot fixes, multi-language support, and more)

    We concluded by talking about the MySQL Cluster vision: MySQL Cluster is the default database for anyone deploying rapidly evolving, realtime transactional services at web-scale, where downtime is simply not an option.

    From a practical point of view the HOL's steps were:

    - MySQL Cluster installation
    - start & monitoring of the MySQL Cluster processes
    - client connection to the Management Server and to an SQL node
    - connection using the NoSQL NDB API and Connector/J

    In the hope that this blog post can help you get started with MySQL Cluster, I take the opportunity to thank you for the questions you asked both during the HOL and at the MySQL Cluster booth. The slides are also on SlideShare: Santo Leto - MySQL Connect 2012 - Getting Started with Mysql Cluster. Happy Clustering!


  • 2D Grid Map Connectivity Check (avoiding stack overflow)

    - by SombreErmine
    I am trying to create a routine in C++ that will run before a more expensive A* algorithm that checks to see if two nodes on a 2D grid map are connected or not. What I need to know is a good way to accomplish this sequentially rather than recursively to avoid overflowing the stack. What I've Done Already I've implemented this with ease using a recursive algorithm; however, depending upon different situations it will generate a stack overflow. Upon researching this, I've come to the conclusion that it is overflowing the stack because of too many recursive function calls. I am sure that my recursion does not enter an infinite loop. I generate connected sets at the beginning of the level, and then I use those connected sets to determine connectivity on the fly later. Basically, the generating algorithm starts from left-to-right top-to-bottom. It skips wall nodes and marks them as visited. Whenever it reaches a walkable node, it recursively checks in all four cardinal directions for connected walkable nodes. Every node that gets checked is marked as visited so they aren't handled twice. After checking a node, it is added to either a walls set, a doors set, or one of multiple walkable nodes sets. Once it fills that area, it continues the original ltr ttb loop skipping already-visited nodes. I've also looked into flood-fill algorithms, but I can't make sense of the sequential algorithms and how to adapt them. Can anyone suggest a better way to accomplish this without causing a stack overflow? The only way I can think of is to do the left-to-right top-to-bottom loop generating connected sets on a row basis. Then check the previous row to see if any of the connected sets are connected and then join the sets that are. I haven't decided on the best data structures to use for that though. I also just thought about having the connected sets pre-generated outside the game, but I wouldn't know where to start with creating a tool for that. Any help is appreciated. Thanks!
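    A sketch of the sequential version: the recursion is replaced by an explicit queue (an explicit stack works identically and gives DFS order instead), so the only memory that grows with region size is heap-allocated. The grid representation here is an assumption; adapt it to the actual map type:

    #include <queue>
    #include <utility>
    #include <vector>

    // Labels each 4-connected walkable region with a component id; -1 stays on walls.
    std::vector<std::vector<int>> labelComponents(const std::vector<std::vector<bool>>& walkable)
    {
        const int h = (int)walkable.size();
        const int w = h > 0 ? (int)walkable[0].size() : 0;
        std::vector<std::vector<int>> label(h, std::vector<int>(w, -1));
        const int dx[4] = {1, -1, 0, 0};
        const int dy[4] = {0, 0, 1, -1};
        int next = 0;
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                if (!walkable[y][x] || label[y][x] != -1)
                    continue;                       // wall, or already labelled
                std::queue<std::pair<int, int>> q;  // explicit queue instead of the call stack
                q.push(std::make_pair(x, y));
                label[y][x] = next;
                while (!q.empty()) {
                    const int cx = q.front().first;
                    const int cy = q.front().second;
                    q.pop();
                    for (int d = 0; d < 4; ++d) {
                        const int nx = cx + dx[d];
                        const int ny = cy + dy[d];
                        if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                            walkable[ny][nx] && label[ny][nx] == -1) {
                            label[ny][nx] = next;   // mark when enqueued, so nothing is visited twice
                            q.push(std::make_pair(nx, ny));
                        }
                    }
                }
                ++next;                             // finished one connected set
            }
        }
        return label;
    }

    Connectivity between two cells is then a constant-time comparison, label[y1][x1] == label[y2][x2], which is the cheap pre-check to run before the more expensive A*.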

