Search Results

Search found 3826 results on 154 pages for 'graph theory'.

Page 45 of 154

  • A Guided Tour of Complexity

    - by JoshReuben
    I just re-read Complexity – A Guided Tour by Melanie Mitchell, protégée of Douglas Hofstadter (author of “Gödel, Escher, Bach”) http://www.amazon.com/Complexity-Guided-Tour-Melanie-Mitchell/dp/0199798109/ref=sr_1_1?ie=UTF8&qid=1339744329&sr=8-1 – here are some notes and links. The field evolved from Cybernetics, General Systems Theory and Synergetics. Some interesting transdisciplinary fields to investigate: Chaos Theory - http://en.wikipedia.org/wiki/Chaos_theory – small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for chaotic systems, rendering long-term prediction impossible. System Dynamics / Cybernetics - http://en.wikipedia.org/wiki/System_Dynamics – the study of how feedback changes system behavior. Network Theory - http://en.wikipedia.org/wiki/Network_theory – leverages Graph Theory to analyze symmetric / asymmetric relations between discrete objects. Algebraic Topology - http://en.wikipedia.org/wiki/Algebraic_topology – leverages abstract algebra to analyze topological spaces.

    There are limits to deterministic systems and to computation. Chaos Theory definitely applies to training an ANN (artificial neural network) – different weights will emerge depending upon the random selection of the training set. In recursive non-linear systems http://en.wikipedia.org/wiki/Nonlinear_system the output is not directly inferable from the input, e.g. the logistic map: x(t+1) = R·x(t)·(1 − x(t)). Different types of bifurcations, attractor states and oscillations may occur – e.g. the Lorenz Attractor http://en.wikipedia.org/wiki/Lorenz_system. The Feigenbaum Constants http://en.wikipedia.org/wiki/Feigenbaum_constants express ratios in the bifurcation diagram of a non-linear map – the first constant, δ ≈ 4.6692016, is the limiting ratio of successive intervals between period-doubling bifurcations.

    Maxwell’s Demon - http://en.wikipedia.org/wiki/Maxwell%27s_demon - the Second Law of Thermodynamics has only a statistical certainty – the universe (and thus information) tends towards entropy. While any computation can theoretically be done without expending energy, with finite memory the act of erasing memory is permanent and increases entropy. Life and thought are a counter-example to the universe’s tendency towards entropy. Leo Szilard and later Claude Shannon developed the information-theoretic notion of entropy - http://en.wikipedia.org/wiki/Entropy_(information_theory) - whereby Shannon entropy quantifies the expected information content of a message in bits, in order to determine channel capacity and leverage Coding Theory (compression analysis). Ludwig Boltzmann came up with Statistical Mechanics - http://en.wikipedia.org/wiki/Statistical_mechanics – whereby our Newtonian perception of continuous reality is a probabilistic and statistical aggregate of many discrete quantum microstates. This is relevant for Quantum Information Theory http://en.wikipedia.org/wiki/Quantum_information and the Physics of Information - http://en.wikipedia.org/wiki/Physical_information.

    Hilbert’s Problems http://en.wikipedia.org/wiki/Hilbert's_problems pondered whether mathematics is complete, consistent, and decidable (the Decision Problem – http://en.wikipedia.org/wiki/Entscheidungsproblem – is there always an algorithm that can determine whether a statement is true?). Gödel’s Incompleteness Theorems http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems proved that mathematics cannot be both complete and consistent (e.g. “This statement is not provable”).
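    Returning to the logistic map for a moment – as a quick illustration (mine, not the book's) of the period-doubling route to chaos, this Python sketch iterates the map for a few values of R and reports the cycle the orbit settles into; the parameter values are arbitrary:

        # logistic_map.py - iterate x(t+1) = R*x(t)*(1-x(t)) and report the attractor
        def attractor(R, x0=0.2, warmup=1000, sample=64):
            x = x0
            for _ in range(warmup):              # discard the transient
                x = R * x * (1 - x)
            seen = []
            for _ in range(sample):              # record the values the orbit visits
                x = R * x * (1 - x)
                seen.append(round(x, 6))
            return sorted(set(seen))

        for R in (2.8, 3.2, 3.5, 3.9):
            cycle = attractor(R)
            label = "chaotic" if len(cycle) > 16 else "period %d" % len(cycle)
            print("R = %.2f -> %s" % (R, label))

    At R = 2.8 the orbit settles on a fixed point, at 3.2 it alternates between two values, at 3.5 between four, and by 3.9 it is chaotic – the geometry of those successive doublings is exactly what the Feigenbaum constant measures.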
    Turing showed, through the use of Turing Machines (http://en.wikipedia.org/wiki/Turing_machine symbol processors that can prove mathematical statements) and Universal Turing Machines (http://en.wikipedia.org/wiki/Universal_Turing_machine Turing Machines that can emulate any other Turing Machine by accepting programs as well as data as input symbols), that computation is limited, by demonstrating the Halting Problem http://en.wikipedia.org/wiki/Halting_problem (it is not possible in general to know whether a program will complete – you cannot build an infinite-loop detector). You may be used to thinking of 1 / 2 / 3 dimensional systems, but Fractal http://en.wikipedia.org/wiki/Fractal systems are defined by self-similarity and have non-integer Hausdorff Dimensions! http://en.wikipedia.org/wiki/List_of_fractals_by_Hausdorff_dimension – the fractal dimension quantifies the number of copies of a self-similar object at each level of detail – e.g. the Koch Snowflake - http://en.wikipedia.org/wiki/Koch_snowflake. Definitions of complexity: size, Shannon entropy, Algorithmic Information Content (http://en.wikipedia.org/wiki/Algorithmic_information_theory - the size of the shortest program that can generate a description of an object), logical depth (amount of information processed), thermodynamic depth (resources required). Complexity is statistical and fractal.

    John von Neumann’s other machine was the Self-Reproducing Automaton http://en.wikipedia.org/wiki/Self-replicating_machine. Cellular Automata http://en.wikipedia.org/wiki/Cellular_automaton are an alternative form of Universal Turing Machine to traditional von Neumann machines, where grid cells are locally synchronized with their neighbors according to a rule. Conway’s Game of Life http://en.wikipedia.org/wiki/Conway's_Game_of_Life demonstrates various emergent constructs such as “Glider Guns” and “Spaceships”. Cellular Automata are not practical because logical ops require a large number of cells – wasteful and inefficient – and there are no compilers or general-purpose programming languages available for them (as far as I am aware). Random Boolean Networks http://en.wikipedia.org/wiki/Boolean_network are extensions of cellular automata where nodes are connected at random (not to spatial neighbors) and each node has its own rule – they demonstrate the emergence of complex and self-organized behavior.

    Stephen Wolfram’s (creator of Mathematica, so give him the benefit of the doubt) New Kind of Science http://en.wikipedia.org/wiki/A_New_Kind_of_Science proposes that the universe may be a discrete Finite State Automaton http://en.wikipedia.org/wiki/Finite-state_machine whereby reality emerges from simple rules. I am 2/3 through this book. It is feasible that the universe is quantum discrete at the Planck scale and that it computes itself – Digital Physics: http://en.wikipedia.org/wiki/Digital_physics – a simulated reality? Anyway, all behavior is supposedly derived from simple algorithmic rules and falls into 4 patterns: uniform, nested / cyclical, random (Rule 30 http://en.wikipedia.org/wiki/Rule_30) and mixed (Rule 110 - http://en.wikipedia.org/wiki/Rule_110 – localized structures – it is this that is interesting). Interaction between colliding propagating signal inputs is then information processing. Wolfram proposes the Principle of Computational Equivalence - http://mathworld.wolfram.com/PrincipleofComputationalEquivalence.html - all processes that are not obviously simple can be viewed as computations of equivalent sophistication.
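    The elementary cellular automata mentioned above (Rule 30, Rule 110) are tiny to implement – here is a throwaway Python sketch (mine, not from the book) that evolves Rule 30 from a single live cell; changing RULE to 110 shows the “mixed” class with its localized structures:

        # rule30.py - evolve a one-dimensional cellular automaton from a single seed cell
        RULE = 30                                # try 110 for the 'mixed' class
        WIDTH, STEPS = 79, 32

        row = [0] * WIDTH
        row[WIDTH // 2] = 1                      # single live cell in the middle
        for _ in range(STEPS):
            print("".join("#" if cell else " " for cell in row))
            # each cell's next state is the RULE bit indexed by its 3-cell neighborhood
            row = [(RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
                   for i in range(WIDTH)]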
    Meaning in information may emerge from analogy and conceptual slippages – see the CopyCat program: http://cognitrn.psych.indiana.edu/rgoldsto/courses/concepts/copycat.pdf. Scale-Free Networks http://en.wikipedia.org/wiki/Scale-free_network have a degree distribution governed by a Power Law (http://en.wikipedia.org/wiki/Power_law - much more common in nature than the Normal Distribution). They are characterized by hubs (resilience to random deletion of nodes), heterogeneity of degree values, self-similarity, and small-world structure. They grow via preferential attachment http://en.wikipedia.org/wiki/Preferential_attachment – tipping points triggered by positive feedback loops. Two theories of cascading failures in complex systems are Self-Organized Criticality http://en.wikipedia.org/wiki/Self-organized_criticality and Highly Optimized Tolerance http://en.wikipedia.org/wiki/Highly_optimized_tolerance. Computational Mechanics http://en.wikipedia.org/wiki/Computational_mechanics – the use of computational methods to study phenomena governed by the principles of mechanics. This book is a great intuition pump, but does not cover the more mathematical subject of Computational Complexity Theory – http://en.wikipedia.org/wiki/Computational_complexity_theory. I am currently reading a book on that subject: http://www.amazon.com/Computational-Complexity-Christos-H-Papadimitriou/dp/0201530821/ref=pd_sim_b_1 – stay tuned for that review!
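    On the preferential attachment point above: it is another one-screen experiment. The following Python sketch (an illustration of the idea, not code from the book) grows a network by attaching each new node to existing nodes with probability proportional to their degree, and the familiar hubs appear:

        # preferential_attachment.py - grow a scale-free network, Barabasi-Albert style
        import random
        from collections import Counter

        def grow(n_nodes=2000, edges_per_node=2):
            edges = [(0, 1)]
            pool = [0, 1]                        # each node appears once per incident edge
            for new in range(2, n_nodes):
                chosen = set()
                while len(chosen) < edges_per_node:
                    chosen.add(random.choice(pool))      # degree-proportional choice
                for old in chosen:
                    edges.append((new, old))
                    pool += [new, old]
            return edges

        degree = Counter()
        for a, b in grow():
            degree[a] += 1
            degree[b] += 1
        print("highest-degree hubs:", degree.most_common(5))

    The earliest nodes end up with a hugely disproportionate share of the links – the power-law tail and hub structure described above.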

    Read the article

  • Examples of different architecture methodologies

    - by Lane
    Is there a resource or site which illustrates building the same application (desktop or web) using several different contrasting architectures? Such as MVP versus MVVM versus MVC, etc. It would be very helpful to see how they look side-by-side using real-world code instead of comparing written theory to written theory. I've often found that something can be described well in a book, but when you go to implement it, the subtleties and weaknesses of the theory become readily apparent.

    Read the article

  • Why does Graphviz fail on gvLayout?

    - by David Brown
    Once again, here I am writing C without really knowing what I'm doing... I've slapped together a simple function that I can call from a C# program that takes a DOT string, an output format, and a file name and renders a graph using Graphviz. #include "types.h" #include "graph.h" #include "gvc.h" #define FUNC_EXPORT __declspec(dllexport) // Return codes #define GVUTIL_SUCCESS 0 #define GVUTIL_ERROR_GVC 1 #define GVUTIL_ERROR_DOT 2 #define GVUTIL_ERROR_LAYOUT 3 #define GVUTIL_ERROR_RENDER 4 FUNC_EXPORT int RenderDot(char * dotData, const char * format, const char * fileName) { Agraph_t * g; // The graph GVC_t * gvc; // The Graphviz context int result; // Result of layout and render operations // Create a new graphviz context gvc = gvContext(); if (!gvc) return GVUTIL_ERROR_GVC; // Read the DOT data into the graph g = agmemread(dotData); if (!g) return GVUTIL_ERROR_DOT; // Layout the graph result = gvLayout(gvc, g, "dot"); if (result) return GVUTIL_ERROR_LAYOUT; // Render the graph result = gvRenderFilename(gvc, g, format, fileName); if (result) return GVUTIL_ERROR_RENDER; // Free the layout gvFreeLayout(gvc, g); // Close the graph agclose(g); // Free the graphviz context gvFreeContext(gvc); return GVUTIL_SUCCESS; } It compiles fine, but when I call it, I get GVUTIL_ERROR_LAYOUT. At first, I thought it might have been how I was declaring my P/Invoke signature, so I tested it from a C program instead, but it still failed in the same way. RenderDot("digraph graphname { a -> b -> c; }", "png", "C:\testgraph.png"); Did I miss something? EDIT If there's a chance it has to do with how I'm compiling the code, here's the command I'm using: cl gvutil.c /I "C:\Program Files (x86)\Graphviz2.26\include\graphviz" /LD /link /LIBPATH:"C:\Program Files (x86)\Graphviz2.26\lib\release" gvc.lib graph.lib cdt.lib pathplan.lib I've been following this tutorial that explains how to use Graphviz as a library, so I linked to the .lib files that it listed.

    Read the article

  • Core-Plot crash only in Release Configuration

    - by denbec
    Hey everyone, I really don't have a solution, or even an idea, for getting around the following failure. It only happens in the Release configuration on the device – the Simulator and the Debug configuration work fine. It also only appears on the second launch. So if I have the phone connected to my Mac, build the application and run it, everything works fine. If I then close the app and restart it, it crashes. After a long search, it seems that the error comes from the following line: x.majorIntervalLength = CPDecimalFromFloat(2.0f); The code before: CPLayerHostingView *chartView = [[CPLayerHostingView alloc] initWithFrame:CGRectMake(0, 0, 320, 160)]; [self addSubview:chartView]; // create an CPXYGraph and host it inside the view CPTheme *theme = [CPTheme themeNamed:kCPPlainWhiteTheme]; CPXYGraph *graph = (CPXYGraph *)[theme newGraph]; chartView.hostedLayer = graph; graph.paddingLeft = 20.0; graph.paddingTop = 10.0; graph.paddingRight = 10.0; graph.paddingBottom = 20.0; CPXYPlotSpace *plotSpace = (CPXYPlotSpace *)graph.defaultPlotSpace; plotSpace.xRange = [CPPlotRange plotRangeWithLocation:CPDecimalFromFloat(0) length:CPDecimalFromFloat(100)]; plotSpace.yRange = [CPPlotRange plotRangeWithLocation:CPDecimalFromFloat(0) length:CPDecimalFromFloat(10)]; CPXYAxisSet *axisSet = (CPXYAxisSet *)graph.axisSet; CPXYAxis *x = axisSet.xAxis; x.majorIntervalLength = CPDecimalFromFloat(2.0f); If I comment out the last line, everything works fine (of course the interval length is not correct then). I would appreciate any help! Thanks in advance!

    Read the article

  • Can you authenticate Facebook Graph entirely from command line with Python?

    - by Sebastian
    I'm writing a (tabbed) application for Facebook that requires a background process to run on a server and, periodically, upload images to an album on this application's page. What I'm trying to do is create a script that will: a) authenticate me with the program b) upload an image to a specific album All of this entirely from the command line and completely with the new Graph API. My problem right now is trying to locate the documentation that will allow me to get a token without a pop-up window of sorts. Thoughts?
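    One hedged possibility, sketched in Python with the requests library, is below. The app access token comes from the client_credentials grant, which involves no browser pop-up; whether that token (rather than a user or page token with publish permissions, which does require a one-time browser authorization) is allowed to post into your album depends on the app/page setup, so treat this as a starting point rather than a confirmed recipe. APP_ID, APP_SECRET, ALBUM_ID and chart.png are placeholders.

        # fb_upload.py - sketch: fetch an app access token, then POST a photo to an album
        import requests

        APP_ID, APP_SECRET, ALBUM_ID = "your-app-id", "your-app-secret", "your-album-id"

        # 1) client_credentials grant - no interactive pop-up involved
        resp = requests.get("https://graph.facebook.com/oauth/access_token",
                            params={"client_id": APP_ID,
                                    "client_secret": APP_SECRET,
                                    "grant_type": "client_credentials"})
        # older Graph API versions return 'access_token=...'; newer ones return JSON
        token = resp.text.split("access_token=", 1)[1]

        # 2) multipart POST of the image into the album
        upload = requests.post("https://graph.facebook.com/%s/photos" % ALBUM_ID,
                               data={"access_token": token, "message": "nightly render"},
                               files={"source": open("chart.png", "rb")})
        print(upload.status_code, upload.text)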

    Read the article

  • One-liner javascript to collect values from object graph?

    - by Kevin Pauli
    Given the following object graph: { "children": [ { "child": { "pets": [ { "pet": { "name": "fido" } }, { "pet": { "name": "fluffy" } } ] } }, { "child": { "pets": [ { "pet": { "name": "spike" } } ] } } ] } What would be a nice one-liner (or two) to collect the names of my grandchildren's pets? The result should be ["fido", "fluffy", "spike"] I don't want to write custom methods for this... I'm looking for something like the way jQuery works in selecting dom nodes, where you can just give it a CSS-like path and it collects them up for you. I would expect the expression path to look something like "children child pets pet name"
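    There is no built-in selector engine for plain object graphs, but the traversal being asked for is small to sketch. Purely as an illustration of the idea (written in Python, so not a drop-in answer to the JavaScript question): walk the structure recursively and collect every scalar value stored under a given key.

        # collect.py - gather values for a key anywhere in a nested structure
        def collect(node, key):
            found = []
            if isinstance(node, dict):
                for k, v in node.items():
                    if k == key and not isinstance(v, (dict, list)):
                        found.append(v)
                    else:
                        found += collect(v, key)
            elif isinstance(node, list):
                for item in node:
                    found += collect(item, key)
            return found

        graph = {"children": [
            {"child": {"pets": [{"pet": {"name": "fido"}}, {"pet": {"name": "fluffy"}}]}},
            {"child": {"pets": [{"pet": {"name": "spike"}}]}},
        ]}
        print(collect(graph, "name"))     # -> ['fido', 'fluffy', 'spike']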

    Read the article

  • Why are these two sql statements deadlocking? (Deadlock graph + details included).

    - by Pure.Krome
    Hi folks, I've got the following deadlock graph that describes two SQL statements that are deadlocking each other. I'm just not sure how to analyse this and then fix up my SQL code to prevent this from happening. (Images: main deadlock graph, left-side details, right-side details.) What is the code doing? I'm reading in a number of files (e.g. let's say 3, for this example). Each file contains different data BUT the same type of data. I then insert data into the LogEntries table and then (if required) I insert or delete something from the ConnectedClients table. Here's my code. using (TransactionScope transactionScope = new TransactionScope()) { _logEntryRepository.InsertOrUpdate(logEntry); // Now, if this log entry was a NewConnection or a LostConnection, then we need to make sure we update the ConnectedClients. if (logEntry.EventType == EventType.NewConnection) { _connectedClientRepository.Insert(new ConnectedClient { LogEntryId = logEntry.LogEntryId }); } // A (PB) BanKick does _NOT_ register a lost connection .. so we need to make sure we handle those scenarios as a LostConnection. if (logEntry.EventType == EventType.LostConnection || logEntry.EventType == EventType.BanKick) { _connectedClientRepository.Delete(logEntry.ClientName, logEntry.ClientIpAndPort); } _unitOfWork.Commit(); transactionScope.Complete(); } Now each file has its own UnitOfWork instance (which means it has its own database connection, transaction and repository context). So I'm assuming this means there are 3 different connections to the db all happening at the same time. Finally, this is using Entity Framework as the repository, but please don't let that stop you from having a think about this problem. According to a profiling tool, the Isolation Level is Serializable. I've also tried ReadCommitted and ReadUncommitted, but they both error: ReadCommitted: same as above. Deadlock. ReadUncommitted: different error. EF exception that says it expected some result back, but got nothing. I'm guessing this is the LogEntryId Identity (scope_identity) value that is expected but not retrieved because of the dirty read. Please help! PS. It's SQL Server 2008, btw.

    Read the article

  • Can XCode draw the call graph of a program?

    - by Werner
    Hi, I am new to Mac OS X, and I wonder if Xcode can generate, for given C++ source code, the call graph of the program in a visual way. I also wonder whether, after a run, it can print the percentage of time spent in each function. If so, I would really appreciate some links to tutorials or info; after googling I did not find anything relevant. Thanks

    Read the article

  • Core Plot causing crash on device but not simulator.

    - by Eric
    I'm using core plot to create a small plot in one of my view controllers. I have been pulling my hair out trying to track down this error. I install on the simulator and it works fine but as soon as I put it on my device I get the following error: 2010-02-04 22:15:37.394 Achieve[127:207] *** -[NSCFString drawAtPoint:withTextStyle:inContext:]: unrecognized selector sent to instance 0x108530 2010-02-04 22:15:37.411 Achieve[127:207] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSCFString drawAtPoint:withTextStyle:inContext:]: unrecognized selector sent to instance 0x108530' 2010-02-04 22:15:37.427 Achieve[127:207] Stack: ( 843263261, 825818644, 843267069, 842763033, 842725440, 253481, 208187, 823956912, 823956516, 823956336, 823953488, 823952500, 823985628, 842717233, 843010887, 843009055, 860901832, 843738160, 843731504, 8797, 8692 ) terminate called after throwing an instance of 'NSException' Program received signal: “SIGABRT”. Debugger Output (as requested): #0 0x33b3db2c in __kill #1 0x33b3db20 in kill #2 0x33b3db14 in raise #3 0x33b54e3a in abort #4 0x33c5c398 in __gnu_cxx::__verbose_terminate_handler #5 0x313918a0 in _objc_terminate #6 0x33c59a8c in __cxxabiv1::__terminate #7 0x33c59b04 in std::terminate #8 0x33c59c2c in __cxa_throw #9 0x3138fe5c in objc_exception_throw #10 0x32433bfc in -[NSObject doesNotRecognizeSelector:] #11 0x323b8b18 in ___forwarding___ #12 0x323af840 in __forwarding_prep_0___ #13 0x0003de28 in -[CPTextLayer renderAsVectorInContext:] at CPTextLayer.m:117 #14 0x00032d3a in -[CPLayer drawInContext:] at CPLayer.m:146 #15 0x311c95b0 in -[CALayer _display] #16 0x311c9424 in -[CALayer display] #17 0x311c9370 in CALayerDisplayIfNeeded #18 0x311c8850 in CA::Context::commit_transaction #19 0x311c8474 in CA::Transaction::commit #20 0x311d05dc in CA::Transaction::observer_callback #21 0x323ad830 in __CFRunLoopDoObservers #22 0x323f5346 in CFRunLoopRunSpecific #23 0x323f4c1e in CFRunLoopRunInMode #24 0x335051c8 in GSEventRunModal #25 0x324a6c30 in -[UIApplication _run] #26 0x324a5230 in UIApplicationMain #27 0x0000225c in main at main.m:14 Here is my viewDidLoad method: - (void)viewDidLoad { [super viewDidLoad]; self.view.backgroundColor = [[UIColor alloc] initWithPatternImage:[UIImage imageNamed:@"bg.png"]]; [self loadData]; self.graph = [[CPXYGraph alloc] initWithFrame: self.plotView.frame]; CPLayerHostingView *hostingView = self.plotView; hostingView.hostedLayer = graph; graph.paddingLeft = 50; graph.paddingTop = 10; graph.paddingRight = 10; graph.paddingBottom = 10; percentFormatter = [[NSNumberFormatter alloc] init]; [percentFormatter setPercentSymbol:@"%"]; [percentFormatter setNumberStyle:NSNumberFormatterPercentStyle]; [percentFormatter setLocale: [NSLocale currentLocale]]; [percentFormatter setMultiplier:[NSNumber numberWithInt:1]]; [percentFormatter setMaximumFractionDigits:0]; CPXYPlotSpace *plotSpace = (CPXYPlotSpace *)graph.defaultPlotSpace; plotSpace.xRange = [CPPlotRange plotRangeWithLocation:CPDecimalFromFloat(0) length:CPDecimalFromFloat(maxX)]; plotSpace.yRange = [CPPlotRange plotRangeWithLocation:CPDecimalFromFloat(minY) length:CPDecimalFromFloat(maxY-minY)]; CPLineStyle *lineStyle = [[CPLineStyle lineStyle]retain]; lineStyle.lineColor = [CPColor grayColor]; lineStyle.lineWidth = 1.0f; CPTextStyle *whiteText = [CPTextStyle textStyle]; whiteText.color = [CPColor whiteColor]; CPXYAxisSet *axisSet = (CPXYAxisSet *)graph.axisSet; // axisSet.xAxis.majorIntervalLength = [[NSDecimalNumber 
decimalNumberWithString:@"0"]decimalValue]; axisSet.xAxis.minorTicksPerInterval = 0; axisSet.xAxis.majorTickLineStyle = nil; axisSet.xAxis.minorTickLineStyle = nil; axisSet.xAxis.axisLineStyle = lineStyle; axisSet.xAxis.minorTickLength = 0; axisSet.xAxis.majorTickLength = 0; axisSet.xAxis.labelFormatter = nil; axisSet.xAxis.labelTextStyle = nil; axisSet.yAxis.majorIntervalLength = [[NSDecimalNumber decimalNumberWithString:intY]decimalValue]; axisSet.yAxis.minorTicksPerInterval = 5; axisSet.yAxis.majorTickLineStyle = lineStyle; axisSet.yAxis.minorTickLineStyle = lineStyle; axisSet.yAxis.axisLineStyle = lineStyle; axisSet.yAxis.minorTickLength = 2.0f; axisSet.yAxis.majorTickLength = 4.0f; axisSet.yAxis.labelFormatter = percentFormatter; axisSet.yAxis.labelTextStyle = whiteText; CPScatterPlot *xSquaredPlot = [[[CPScatterPlot alloc]initWithFrame:graph.defaultPlotSpace.graph.bounds] autorelease]; xSquaredPlot.identifier = @"Plot"; xSquaredPlot.dataLineStyle.lineWidth = 4.0f; xSquaredPlot.dataLineStyle.lineColor = [CPColor yellowColor]; xSquaredPlot.dataSource = self; [graph addPlot:xSquaredPlot]; } Any help would be appreciated!

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies.

    1. Cross-schema dependencies

    Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1:

    SOURCE:
      CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
      CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);
    TARGET:
      CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1));
      CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100) PRIMARY KEY);

    We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of: (1) creating a table with the new schema; (2) inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER); (3) dropping the old table; (4) renaming the new table to the same name as the old table. Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is: (1) drop the foreign key constraint on SchemaA.Table1; (2) rebuild SchemaB.Table1; (3) rebuild SchemaA.Table1, adding the foreign key constraint to the new table. This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA.

    Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we're populating, or point to objects in the schemas we're populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies, e.g. a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn't enough.

    2. Dependency chains

    The solution above will only get the immediate dependencies of objects in populated schemas. What if there's a chain of dependencies? A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1 If we're only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents.
    Fortunately, we don't have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL – CONNECT BY. So, we can put all the dependencies we want to include together in a big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we're populating. We should end up with all the objects that might be affected by modifications in the initial schema we're populating.

    Good solution? Well, no. For one thing, it's sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance.

    3. Comparison dependencies

    Consider the following schema:

    SOURCE:
      CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(col1));
      CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);
    TARGET:
      CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100));
      CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100));

    What will happen if we use the dependency algorithm above on the source & target databases? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference. Therefore SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two object models, what you will end up with is:

    SOURCE            TARGET
    SchemaA.Table1 -> SchemaA.Table1
    SchemaB.Table1 -> (no object exists)

    When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions: (1) create SchemaB.Table1; (2) rebuild SchemaA.Table1, with a foreign key to SchemaB.Table1. Oops. Because the dependencies are only followed within a single database, we've tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won't try and create objects that already exist.

    All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1):

    SOURCE:
      CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(col1));
      CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);
      CREATE TABLE SchemaC.Table1 (Col1 NUMBER);
    TARGET:
      CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100));
      CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100) PRIMARY KEY);
      CREATE TABLE SchemaC.Table1 (Col1 VARCHAR2(100) REFERENCES SchemaB.Table1);

    Although we're now including SchemaB.Table1 on both sides of the comparison, there's a third table (SchemaC.Table1) that we don't know about that will cause the rebuild of SchemaB.Table1 to fail if we try and synchronize SchemaA.Table1. That's because we're only running the dependency query on the schemas we're explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database.
    Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:

    1. Find initial dependencies of schemas the user has selected to compare on the source and target
    2. Include these objects in both the source and target object populations
    3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
    4. Repeat 2 & 3 until no more objects are found

    For the schema above, this will result in the following sequence of actions (a rough sketch of this back-and-forth traversal appears at the end of this post):

    1. Find initial dependencies: SchemaA.Table1 -> SchemaB.Table1 found on source; no objects found on target
    2. Include objects in both source and target: SchemaB.Table1 included in source and target
    3. Run dependency query, starting with found objects: no objects to start with on source; SchemaB.Table1 -> SchemaC.Table1 found on target
    4. Include objects in both source and target: SchemaC.Table1 included in source and target
    5. Run dependency query on found objects: no objects found in source; no objects to start with in target
    6. Stop

    This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don't read the entire dependency graph onto the client we also pull the graph across in bits – we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don't yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.

    4. Object blacklists and fast dependencies

    When we tested this solution, we were puzzled that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc.) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (e.g. the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in those schemas. To solve this we introduced blacklists of objects we wouldn't follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY, ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow-up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that some of the clauses supplying dependency information we required were querying system tables with no indexes on them!
To cut a long story short, running the following query: SELECT * FROM all_tab_cols WHERE data_type_owner = ‘XDB’; results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the ‘Ignore slow dependencies’ option was born – not querying this and a couple of similar clauses to drastically speed up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!
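    As an editorial aside (this is not Red Gate's actual implementation, which lives in the C# engine): the back-and-forth traversal described in section 3 can be sketched in a few lines of Python. Here the two dictionaries stand in for the per-database dependency queries, mapping an object to the related objects the query would find for it, and the blacklist handling is omitted for brevity:

        # dependency_closure.py - sketch of the source/target 'ping-pong' dependency traversal
        def closure(selected, deps_source, deps_target):
            included = set(selected)               # objects to populate on both sides
            frontier = set(selected)
            while frontier:
                found = set()
                for graph in (deps_source, deps_target):     # query source, then target
                    for obj in frontier:
                        for dep in graph.get(obj, ()):
                            if dep not in included:
                                found.add(dep)
                included |= found
                frontier = found                   # repeat until no more objects are found
            return included

        deps_source = {"SchemaA.Table1": ["SchemaB.Table1"]}
        deps_target = {"SchemaB.Table1": ["SchemaC.Table1"]}
        print(sorted(closure({"SchemaA.Table1"}, deps_source, deps_target)))
        # -> ['SchemaA.Table1', 'SchemaB.Table1', 'SchemaC.Table1']

    Running it on the SchemaA/SchemaB/SchemaC example reproduces the trace above: SchemaB.Table1 is found from the source, SchemaC.Table1 from the target, and the loop stops once a pass finds nothing new.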

    Read the article

  • Change the size of the text in legend according to the length of the legend vector in the graph

    - by user1021713
    I have to draw 20 plots and horizontally place a legend in each plot. I gave the following command for the first plot: plot(x=1:4,y=1:4) legend("bottom",legend = c("a","b","c","d"),horiz=TRUE,text.font=2,cex=0.64) Then for the second plot I tried: plot(x=1:2,y=1:2) legend("bottom",legend = c("a","b"),horiz=TRUE,text.font=2,cex=0.64) But because the lengths of the character vectors passed to the legend argument are different, the legends come out at different sizes. Since I have to produce so many different plots with legends of varying sizes, I would like to do this in an automated fashion. Is there a way to fix the size of the legend across all the plots and fit it to the graph size?

    Read the article

  • Why does a Ruby object have two methods, to_s and inspect, that do the same thing? Or, so it seems.

    - by prosseek
    p calls inspect, and puts/print call to_s to represent an object. If I run class Graph def initialize @nodeArray = Array.new @wireArray = Array.new end def to_s # called with print / puts "Graph : #{@nodeArray.size}" end def inspect # called with p "G" end end if __FILE__ == $0 gr = Graph.new p gr print gr puts gr end I get G Graph : 0Graph : 0 Then why does Ruby have two functions that do the same thing? What is the difference between to_s and inspect? And what is the difference between puts/print/p? If I comment out the to_s or inspect method, I get the following. #<Graph:0x100124b88>#<Graph:0x100124b88>

    Read the article

  • How to apply custom BidirectionalGraph from QuickGraph to GraphLayout from Graph#?

    - by Dmitry
    What's wrong with the following? using QuickGraph; using GraphSharp; public class State { public string Name { get; set; } public override string ToString() { return Name; } } public class Event { public string Name; public override string ToString() { return Name; } } BidirectionalGraph<State, TaggedEdge<State, Event>> x = new BidirectionalGraph<State, TaggedEdge<State, Event>>(); GraphLayout graphLayout = new GraphLayout(); graphLayout.Graph = x; Error: Cannot implicitly convert type 'QuickGraph.BidirectionalGraph' to 'QuickGraph.IBidirectionalGraph'. An explicit conversion exists (are you missing a cast?) If I add the cast, the application crashes on startup without any further information. What's wrong?

    Read the article

  • Why does gcc think that I am trying to make a function call in my template function signature?

    - by nieldw
    GCC seems to think that I am trying to make a function call in my template function signature. Can anyone please tell me what is wrong with the following? 227 template<class edgeDecor, class vertexDecor, bool dir> 228 vector<Vertex<edgeDecor,vertexDecor,dir>> Graph<edgeDecor,vertexDecor,dir>::vertices() 229 { 230 return V; 231 }; GCC is giving the following: graph.h:228: error: a function call cannot appear in a constant-expression graph.h:228: error: template argument 3 is invalid graph.h:228: error: template argument 1 is invalid graph.h:228: error: template argument 2 is invalid graph.h:229: error: expected unqualified-id before ‘{’ token Thanks a lot.

    Read the article

  • How do I draw a graph including X- and Y-axis lines?

    - by Rajendra Bhole
    Hi, I want to make an application in which I draw a simple graph using an NSObject subclass and CGContext calls. All lines should be drawn dynamically, along with the X- and Y-axis interval text; I am trying to develop something like the following code, CGContextSetRGBStrokeColor(ctx, 2.0, 2.0, 2.0, 1.0); CGContextSetLineWidth(ctx, 2.0); //(number of lines) CGContextMoveToPoint(ctx, 30.0, 230.0); CGContextAddLineToPoint(ctx, 30.0, 440.0); //CGContextAddLineToPoint(ctx, 320.0, 420.0); //CGContextStrokePath(ctx); for(float x = 20.0; x <= 320.0; x++) { CGContextSetRGBStrokeColor(ctx, 2.0, 2.0, 2.0, 1.0); CGContextMoveToPoint(ctx, x, 420.0); CGContextAddLineToPoint(ctx, x+45.0, 420.0); CGContextStrokePath(ctx); } How do I develop this using the above functions? Thanks.

    Read the article

  • Databinding in windows forms on an object graph with possible null properties?

    - by Fredrik
    If I have an object graph like this: class Company { public Address MainAddress {...} } class Address { public string City { ... } } Company c = new Company(); c.MainAddress = new Address(); c.MainAddress.City = "Stockholm"; and databind to a control using: textBox1.DataBinding.Add( "Text", c, "MainAddress.City" ); everything is fine. But if I bind to a company created as Company c2 = new Company(); using the same syntax, it crashes since the MainAddress property is null. I wonder if there is a custom Binding class that can set up listeners for all the possible paths here and bind to the actual object dynamically when/if I later set the MainAddress property somewhere in the application.
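    Just to illustrate the shape of the problem (a Python sketch, not a WinForms answer): a path binding has to resolve each segment of "MainAddress.City" lazily, tolerate a null link in the middle, and re-resolve once the intermediate property is assigned later.

        # path_binding.py - illustration: resolve a dotted path while tolerating None links
        class Address:
            def __init__(self, city=None):
                self.city = city

        class Company:
            def __init__(self):
                self.main_address = None          # may be filled in later

        def resolve(obj, path):
            """Walk a dotted path, returning None as soon as a link is missing."""
            for segment in path.split("."):
                if obj is None:
                    return None
                obj = getattr(obj, segment, None)
            return obj

        c2 = Company()
        print(resolve(c2, "main_address.city"))   # None instead of a crash
        c2.main_address = Address("Stockholm")
        print(resolve(c2, "main_address.city"))   # Stockholm once the link exists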

    Read the article

  • Why not speed up testing by using function dependency graph?

    - by Maltrap
    It seems logical to me that if you have a dependency graph of your source code (a graph showing the caller/callee relationships between all functions in your code base) you should be able to save a tremendous amount of time doing functional and integration tests after each release. Essentially you will be able to tell the testers exactly what functionality to test, as the rest of the features remain unchanged from a source code point of view. If, for instance, you fix a spelling mistake in one piece of the code, there is no reason to run through your whole test script again "just in case" you introduced a critical bug (see the sketch below for the idea). My question: why are dependency graphs not used in software engineering, and if you use them, how do you maintain them? What tools are available that generate these graphs for C# .NET, C++ and C source code?
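    A hypothetical sketch of the idea in Python (the call graph is hand-written here and the names are made up; in practice a tool would extract the graph from the code): invert the call graph, find every function from which a changed function is reachable, and select only the tests whose entry points fall in that set.

        # impact_analysis.py - select tests affected by a change via the call graph
        from collections import defaultdict

        calls = {                                  # calls[f] = functions that f calls
            "checkout": ["price_order", "charge_card"],
            "price_order": ["apply_discount"],
            "report": ["format_totals"],
        }
        tests = {                                  # entry point each test exercises
            "test_checkout": "checkout",
            "test_reporting": "report",
        }

        callers = defaultdict(set)                 # inverted graph: who calls g?
        for f, callees in calls.items():
            for g in callees:
                callers[g].add(f)

        def impacted(changed):
            """Every function from which a changed function is reachable."""
            seen, stack = set(changed), list(changed)
            while stack:
                f = stack.pop()
                for caller in callers[f]:
                    if caller not in seen:
                        seen.add(caller)
                        stack.append(caller)
            return seen

        hit = impacted({"apply_discount"})
        print([name for name, entry in tests.items() if entry in hit])   # ['test_checkout']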

    Read the article

  • I need help to design code in C++

    - by user344987
    A. Design and implement a Graph data structure. Use an adjacency matrix to implement the unweighted graph edges. The Graph must support the following operations: 1. Constructor 2. Destructor 3. Copy constructor 4. A function to add an edge between two nodes in the graph 5. A display function that outputs all the edges of the graph 6. A function edge that accepts two nodes and returns true if there is an edge between the passed nodes, and false otherwise. B. (100 points) Depth-first search and breadth-first search functions. C. (100 points) A function to output a spanning tree of the graph; use any algorithm you find appropriate, and make the necessary changes to the data structure in A to implement your algorithm.

    Read the article
