Search Results

Search found 32391 results on 1296 pages for 'graph database'.


  • Java graph library for comparing 2 graphs

    - by user311909
    Hello, does anyone know a good Java library for comparing two graphs by finding a maximal common subgraph isomorphism, in order to get a measure of their similarity? I do not want to compare graphs based on node labels. Alternatively, is there any other way to compare graphs topologically with a good Java library? I am currently using the SimPack library, and it is useful, but I need something more. Any suggestions would be very helpful. Thanks in advance.
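
    One library worth trying (not mentioned in the question, so treat it as a suggestion) is JGraphT; as far as I know it does not compute a maximal common subgraph directly, but its VF2-based inspector can test whether one graph occurs as an (induced) subgraph of another, which can serve as a building block for a structural similarity check. A minimal sketch, assuming the org.jgrapht.alg.isomorphism.VF2SubgraphIsomorphismInspector class available in recent JGraphT versions:

    ```java
    import org.jgrapht.Graph;
    import org.jgrapht.alg.isomorphism.VF2SubgraphIsomorphismInspector;
    import org.jgrapht.graph.DefaultEdge;
    import org.jgrapht.graph.SimpleGraph;

    public class GraphCompareSketch {
        public static void main(String[] args) {
            Graph<String, DefaultEdge> g1 = new SimpleGraph<>(DefaultEdge.class);
            Graph<String, DefaultEdge> g2 = new SimpleGraph<>(DefaultEdge.class);

            // Two small example graphs; the labels are only identifiers here,
            // the comparison itself is purely structural.
            for (String v : new String[]{"a", "b", "c", "d"}) g1.addVertex(v);
            g1.addEdge("a", "b"); g1.addEdge("b", "c"); g1.addEdge("c", "d");

            for (String v : new String[]{"x", "y", "z"}) g2.addVertex(v);
            g2.addEdge("x", "y"); g2.addEdge("y", "z");

            // VF2 checks whether g2 occurs as a subgraph of g1; for a similarity
            // score you would typically compare candidate subgraphs of both inputs.
            VF2SubgraphIsomorphismInspector<String, DefaultEdge> inspector =
                    new VF2SubgraphIsomorphismInspector<>(g1, g2);
            System.out.println("g2 subgraph-isomorphic to g1: " + inspector.isomorphismExists());
        }
    }
    ```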


  • I am trying to build a list of limitations of all graph algorithms

    - by Jack
    Single-source shortest path:
    - Dijkstra's - directed and undirected - works only for positive edge weights - cycles??
    - Bellman-Ford - directed - no cycles should exist
    All-pairs shortest path:
    - Floyd-Warshall - no info
    Minimum spanning tree (no info about edge weights, nature of the graph, or cycles):
    - Kruskal's
    - Prim's - undirected
    - Boruvka's


  • Facebook Graph API authorization problem

    - by kujawk
    If I load the following URL in Firefox and log in to Facebook, I get a page displaying "An invalid next or cancel parameter was specified." https://graph.facebook.com/oauth/authorize?client_id=c8caf78d724d142ee82334131ef5c9ce&redirect_uri=http://www.facebook.com/connect/login_success.html&type=user_agent&display=touch&scope=offline_access,publish_stream But if I change the display parameter to display=page, I no longer get this error. Any ideas as to why?


  • is there something analogous to Connect.registerUsers in the Graph API

    - by timpone
    I am trying to figure this out. Basically, I'd like to reconcile my local e-mails with a user's list of friends. I can see that once the OAuth token is established I can get a list of the user's friends via http://graph.facebook.com/me/friends Would the solution be to keep track of the list of a user's friends over time, reconcile it with our local IDs (in other words, I know my own ID), and then use the friends API call to see whether the friends are the same as in the local DB?
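
    A minimal reconciliation sketch in Java, assuming the OAuth token has already been obtained and the locally known friend IDs are available as a set; the /me/friends endpoint is the one from the question, while the regex-based ID extraction is only a stand-in for a proper JSON parser:

    ```java
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.HashSet;
    import java.util.Set;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class FriendReconcileSketch {
        public static void main(String[] args) throws Exception {
            String accessToken = "REPLACE_WITH_OAUTH_TOKEN";        // assumption: token already obtained
            Set<String> localFriendIds = Set.of("1001", "1002");    // IDs already stored in the local DB

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://graph.facebook.com/me/friends?access_token=" + accessToken))
                    .GET()
                    .build();
            String body = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString())
                    .body();

            // Naive extraction of "id" values from the JSON response; a real
            // implementation should use a JSON library instead.
            Set<String> remoteFriendIds = new HashSet<>();
            Matcher m = Pattern.compile("\"id\"\\s*:\\s*\"(\\d+)\"").matcher(body);
            while (m.find()) remoteFriendIds.add(m.group(1));

            // Friends present on Facebook but not yet linked to a local user.
            Set<String> unknown = new HashSet<>(remoteFriendIds);
            unknown.removeAll(localFriendIds);
            System.out.println("Friends not yet in the local DB: " + unknown);
        }
    }
    ```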


  • Real-time Java graph / chart library?

    - by Joonas Pulakka
    There was an earlier thread on Java graph or chart libraries, where JFreeChart was found to be quite good, but, as stated in its FAQ, it's not meant for real-time rendering. Can anyone recommend a comparable library that supports real-time rendering? Just some basic XY rendering - for instance, getting a voltage signal from a data acquisition system and plotting it as it arrives (time on the x-axis, voltage on the y-axis).
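
    For modest sample rates, one workable approach is still JFreeChart: append each new sample to a TimeSeries and let the chart repaint itself. A minimal sketch, assuming the standard JFreeChart 1.x API (TimeSeries, TimeSeriesCollection, ChartFactory.createTimeSeriesChart) and a simulated signal in place of the real data acquisition input:

    ```java
    import javax.swing.JFrame;
    import javax.swing.Timer;
    import org.jfree.chart.ChartFactory;
    import org.jfree.chart.ChartPanel;
    import org.jfree.chart.JFreeChart;
    import org.jfree.data.time.Millisecond;
    import org.jfree.data.time.TimeSeries;
    import org.jfree.data.time.TimeSeriesCollection;

    public class LiveVoltagePlot {
        public static void main(String[] args) {
            TimeSeries series = new TimeSeries("Voltage");
            JFreeChart chart = ChartFactory.createTimeSeriesChart(
                    "Live signal", "Time", "Voltage (V)",
                    new TimeSeriesCollection(series), false, false, false);

            JFrame frame = new JFrame("Real-time plot");
            frame.setContentPane(new ChartPanel(chart));
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.pack();
            frame.setVisible(true);

            // Every 100 ms, append the latest sample; in a real application this
            // value would come from the data acquisition system.
            new Timer(100, e -> series.addOrUpdate(new Millisecond(), Math.random() * 5.0)).start();
        }
    }
    ```

    How far this scales depends on the update rate; for very fast signals a library designed for streaming data would still be the better fit.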


  • Publish to Facebook Stream via PHP using Graph API

    - by Liquid
    I'm trying to post a message to a user's wall using the new Graph API and PHP. The connection seems to work fine, but no post appears. I'm not sure how to set up the posting code correctly. Please help me out. Am I missing an extended permissions request, or is that taken care of in this code? Below is my full code:

        <?php
        include_once 'facebook.php';

        $facebook = new Facebook(array(
            'appId'  => 'xxxxxxxxxxxxxxxxxx',
            'secret' => 'xxxxxxxxxxxxxxxxxxxxxxxxxx',
            'cookie' => true
        ));

        $session = $facebook->getSession();

        if (!$session) {
            $url = $facebook->getLoginUrl(array(
                'canvas'    => 1,
                'fbconnect' => 0
            ));
            echo "<script type='text/javascript'>top.location.href = '$url';</script>";
        } else {
            try {
                $uid = $facebook->getUser();
                $me  = $facebook->api('/me');

                $updated = date("l, F j, Y", strtotime($me['updated_time']));
                echo "Hello " . $me['name'] . "<br />";
                echo "You last updated your profile on " . $updated;

                $connectUrl = $facebook->getUrl(
                    'www', 'login.php',
                    array_merge(array(
                        'api_key'         => $facebook->getAppId(),
                        'cancel_url'      => 'http://www.test.com',
                        'req_perms'       => 'publish_stream',
                        'display'         => 'page',
                        'fbconnect'       => 1,
                        'next'            => 'http://www.test.com',
                        'return_session'  => 1,
                        'session_version' => 3,
                        'v'               => '1.0',
                    ), $params)
                );

                $result = $facebook->api(
                    '/me/feed/',
                    'post',
                    array(
                        'access_token' => $facebook->access_token,
                        'message'      => 'Playing around with FB Graph..'
                    )
                );
            } catch (FacebookApiException $e) {
                echo "Error:" . print_r($e, true);
            }
        }
        ?>


  • All possible paths in a cyclic undirected graph

    - by Elias
    I'm trying to develop an algorithm that identifies all possible paths between two nodes in a graph, as in this example. In fact, I just need to know which nodes appear in all existing paths. On the web I only found references to DFS, A* or Dijkstra, but I don't think they work in this case. Does anyone know how to solve it?
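
    Single-path algorithms such as A*, Dijkstra or a first-hit DFS indeed stop at one path; enumerating every simple path needs an exhaustive DFS that backtracks after exploring each branch. A minimal sketch, assuming the graph is held as an adjacency-list map and is small enough that the (potentially exponential) number of paths is manageable; the nodes that appear in all paths are then just the intersection of the collected paths:

    ```java
    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class AllPathsSketch {
        // Collects every simple path (no repeated node) from current to target.
        static void allPaths(Map<Integer, List<Integer>> graph, int current, int target,
                             Set<Integer> visited, Deque<Integer> path, List<List<Integer>> out) {
            visited.add(current);
            path.addLast(current);
            if (current == target) {
                out.add(new ArrayList<>(path));
            } else {
                for (int next : graph.getOrDefault(current, List.of())) {
                    if (!visited.contains(next)) {
                        allPaths(graph, next, target, visited, path, out);
                    }
                }
            }
            path.removeLast();        // backtrack
            visited.remove(current);
        }

        public static void main(String[] args) {
            // Small cyclic undirected graph: 1-2, 1-3, 2-3, 3-4 (each edge listed both ways).
            Map<Integer, List<Integer>> g = Map.of(
                    1, List.of(2, 3),
                    2, List.of(1, 3),
                    3, List.of(1, 2, 4),
                    4, List.of(3));
            List<List<Integer>> paths = new ArrayList<>();
            allPaths(g, 1, 4, new HashSet<>(), new ArrayDeque<>(), paths);
            paths.forEach(System.out::println);   // e.g. [1, 3, 4] and [1, 2, 3, 4]

            // Nodes common to every path:
            Set<Integer> common = new HashSet<>(paths.get(0));
            paths.forEach(common::retainAll);
            System.out.println("Nodes on all paths: " + common);
        }
    }
    ```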


  • Calculate area under FFT graph in Matlab

    - by lytheone
    Hiya, I have computed the FFT of a set of data, which gives me a plot with frequency on the x-axis and amplitude on the y-axis. I would like to calculate the area under the graph to give me the energy. I am not sure how to determine the area, because I don't have an equation for the curve, and I only want a certain region of the plot rather than the whole area under it. Is there a way I can do it?
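
    Since the spectrum is only available at discrete frequency bins, the area can be approximated numerically with the trapezoidal rule over just the bins of interest. A sketch of the formula, writing f_k for the bin frequencies and |X_k| for the plotted amplitudes, with the sum restricted to the bins a..b covering the frequency range you care about:

    ```latex
    % Trapezoidal estimate of the area under the spectrum between bins a and b.
    A \approx \sum_{k=a}^{b-1} \frac{|X_k| + |X_{k+1}|}{2}\,\bigl(f_{k+1} - f_k\bigr)
    ```

    In MATLAB this corresponds to something like trapz(f(idx), abs(X(idx))), where idx selects the frequency range of interest; use abs(X(idx)).^2 instead if an energy/power-style quantity (amplitude squared) is what you actually want.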


  • plot a line graph in vb.net

    - by Husna5207
    This is my function for plotting a graph in VB.NET. How can I replace the hard-coded points such as ("Jon", 10) and ("Jordan", 30) with values that I look up in the database?

        Private Sub chart_btn_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles chart_btn.Click
            Chart1.Series("Student").Points.AddXY("Jon", 10)
            Chart1.Series("Student").Points.AddXY("Jon", 10)
            Chart1.Series("Student").ChartType = System.Windows.Forms.DataVisualization.Charting.SeriesChartType.Bar
        End Sub


  • LINQ-to-SQL eagerly load entire object graph

    - by Paddy
    I have a need to load an entire LINQ-to-SQL object graph from a certain point downwards, loading all child collections and the objects within them, etc. This is going to be used to dump the object structure and data out to XML. Is there a way to do this without generating a large, hard-coded set of DataLoadOptions to 'shape' my data?


  • transpose graph

    - by rana123
    How can I write a program in Java to find the transpose of a graph G, where both the input and the output of the program are represented as adjacency lists? For example: input: 12341, output: 1432
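
    A minimal sketch of the transpose itself, assuming the adjacency list is stored as a map from node to successor list (the exact I/O format from the question is left out): for every edge u -> v in the input, the output contains v -> u.

    ```java
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class TransposeSketch {
        // Returns the transpose of a directed graph given as an adjacency list.
        static Map<Integer, List<Integer>> transpose(Map<Integer, List<Integer>> g) {
            Map<Integer, List<Integer>> t = new HashMap<>();
            g.keySet().forEach(u -> t.put(u, new ArrayList<>()));   // keep nodes with no incoming edges
            for (Map.Entry<Integer, List<Integer>> entry : g.entrySet()) {
                for (int v : entry.getValue()) {
                    t.computeIfAbsent(v, k -> new ArrayList<>()).add(entry.getKey());
                }
            }
            return t;
        }

        public static void main(String[] args) {
            Map<Integer, List<Integer>> g = new HashMap<>();
            g.put(1, List.of(2));   // edges 1->2, 2->3, 3->4, 4->1
            g.put(2, List.of(3));
            g.put(3, List.of(4));
            g.put(4, List.of(1));
            System.out.println(transpose(g));   // {1=[4], 2=[1], 3=[2], 4=[3]}
        }
    }
    ```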


  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is storage management, driven by the relentless growth of information. Even if you have storage appliances and plenty of terabytes, things like backup, compression, deduplication, storage relocation, encryption and availability can become a nightmare. One standard option you have with Oracle WebCenter Content is to store the data in the database, and the Oracle Database lets you leverage features like compression, deduplication, encryption and seamless backup. But with a huge volume, the challenge is passed on to the DBA, who has to keep the WebCenter Content database up and running. One solution is to use database partitions for your content repository, but what are the implications, and can it fit your business requirements? Well, yes, it can. How you manage it is up to you; you just need a good plan.

    During your "storage brainstorm", keep in mind what you actually need. Do you have to store petabytes of documents? Does everything need to stay online? Is there a way to logically separate the "good content" from the "legacy content"? The first idea that comes to mind is to use the creation date of the document, but remember that a document can receive many revisions, so you may prefer the revision creation date instead. Your plan can also use more complex rules, such as per document type, per custom metadata field (a department, for example), or a hybrid of date, document type and a specific virtual folder. Taking this further, you can distribute your repository across different servers, different disks and different disk types (SSD, SAS, SATA, tape, ...), separated according to your business requirements, keeping the "hot" content apart from the legacy content and more easily meeting your compliance requirements. If you decide to partition by revision, the simplest approach is to use dId, the sequential unique id of every content item created in WebCenter Content, or dLastModified, the date column of the FileStorage table that records when the content was stored in the table using SecureFiles.

    For the scenario of a partitioned repository with a hierarchical separation by date, we will transform the FileStorage table into a partitioned table using "partition by range" on the dLastModified column (you could use dId, or a join with other tables for metadata such as dDocType, security, etc.). The test scenario below covers:

    - Pre-existing data in the JDBC Storage to be migrated to the new partitioned JDBC Storage
    - Partitioning by date
    - Automatic generation of new partitions based on a pre-defined interval (available only with Oracle Database 11g and later)
    - Deduplication and compression for legacy data
    - Oracle WebCenter Content 11g PS5 (it may include some customizations that do not affect the test scenario)

    For the test case you need some data stored using JDBC Storage to serve as the "legacy" data. If you have not done so before, just create a storage rule pointing to JDBC Storage, enable the StorageRule metadata field in the UI, and upload some documents using this rule. You can run this test case as the schema owner or as a DBA user; we will use the schema owner TESTS_OCS. I cannot forget to say that this is just a test, and you should take a proper backup of your environment first. If you use the schema owner, it needs some privileges; grant them with the DBA user:

        REM Grant privileges required for online redefinition.
        GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS;
        GRANT ALTER ANY TABLE TO TESTS_OCS;
        GRANT DROP ANY TABLE TO TESTS_OCS;
        GRANT LOCK ANY TABLE TO TESTS_OCS;
        GRANT CREATE ANY TABLE TO TESTS_OCS;
        GRANT SELECT ANY TABLE TO TESTS_OCS;

        REM Privileges required to perform cloning of dependent objects.
        GRANT CREATE ANY TRIGGER TO TESTS_OCS;
        GRANT CREATE ANY INDEX TO TESTS_OCS;

    In our test scenario we will separate the content into Legacy, Day1, Day2, Day3 and Future. The last one will be partitioned automatically, using three tablespaces in round-robin mode. In a real scenario the partition rule could be per month, per year, or any other rule you choose. Tablespaces for the test scenario:

        CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat'
          SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat'
          SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat'
          SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat'
          SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A DATAFILE 'tests_ocs_part_round_robin_a.dat'
          SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B DATAFILE 'tests_ocs_part_round_robin_b.dat'
          SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C DATAFILE 'tests_ocs_part_round_robin_c.dat'
          SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;

    Before starting, gather optimizer statistics on the current FileStorage table:

        EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE);

    Now check whether it is possible to run the redefinition process:

        EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage', DBMS_REDEFINITION.CONS_USE_PK);

    If there are no error messages, you are good to go. The next step is to create a partitioned interim FileStorage table.
    You need to create a new table with the partitioning definition to act as the interim table:

        CREATE TABLE FILESTORAGE_Part (
          DID NUMBER(*,0) NOT NULL ENABLE,
          DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
          DLASTMODIFIED TIMESTAMP (6),
          DFILESIZE NUMBER(*,0),
          DISDELETED VARCHAR2(1 CHAR),
          BFILEDATA BLOB
        )
        LOB (BFILEDATA) STORE AS SECUREFILE (
          ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS
        )
        PARTITION BY RANGE (DLASTMODIFIED)
        INTERVAL (NUMTODSINTERVAL(1,'DAY'))
        STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C)
        (
          PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN
            (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
            TABLESPACE TESTS_OCS_PART_LEGACY
            LOB (BFILEDATA) STORE AS SECUREFILE (
              TABLESPACE TESTS_OCS_PART_LEGACY RETENTION NONE DEDUPLICATE COMPRESS HIGH
            ),
          PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN
            (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
            TABLESPACE TESTS_OCS_PART_DAY1
            LOB (BFILEDATA) STORE AS SECUREFILE (
              TABLESPACE TESTS_OCS_PART_DAY1 RETENTION AUTO KEEP_DUPLICATES COMPRESS
            ),
          PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN
            (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
            TABLESPACE TESTS_OCS_PART_DAY2
            LOB (BFILEDATA) STORE AS SECUREFILE (
              TABLESPACE TESTS_OCS_PART_DAY2 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS
            ),
          PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN
            (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
            TABLESPACE TESTS_OCS_PART_DAY3
            LOB (BFILEDATA) STORE AS SECUREFILE (
              TABLESPACE TESTS_OCS_PART_DAY3 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS
            )
        );

    After the creation you should see your partitions defined. Note that at this point only the fixed range partitions have been created; none of the interval partitions exist yet. Start the redefinition process:

        BEGIN
          DBMS_REDEFINITION.START_REDEF_TABLE(
            uname         => 'TESTS_OCS'
            ,orig_table   => 'FileStorage'
            ,int_table    => 'FileStorage_PART'
            ,col_mapping  => NULL
            ,options_flag => DBMS_REDEFINITION.CONS_USE_PK
          );
        END;

    This operation can take some time to complete, depending on how much content you have and on the size of the table. Using the DBA user you can check the progress with this command:

        SELECT * FROM v$sesstat WHERE sid = 1;

    Copy the dependent objects:

        DECLARE
          redefinition_errors PLS_INTEGER := 0;
        BEGIN
          DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
            uname             => 'TESTS_OCS'
            ,orig_table       => 'FileStorage'
            ,int_table        => 'FileStorage_PART'
            ,copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS
            ,copy_triggers    => TRUE
            ,copy_constraints => TRUE
            ,copy_privileges  => TRUE
            ,ignore_errors    => TRUE
            ,num_errors       => redefinition_errors
            ,copy_statistics  => FALSE
            ,copy_mvlog       => FALSE
          );
          IF (redefinition_errors > 0) THEN
            DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: '
              || TO_CHAR(redefinition_errors));
          END IF;
        END;

    With the DBA user, verify that there are no errors:

        SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS;

    Note that this will show two lines related to the constraints; this is expected. Synchronize the interim table FileStorage_PART:

        BEGIN
          DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
            uname      => 'TESTS_OCS',
            orig_table => 'FileStorage',
            int_table  => 'FileStorage_PART');
        END;

    Gather statistics on the new table:

        EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE);

    Complete the redefinition:

        BEGIN
          DBMS_REDEFINITION.FINISH_REDEF_TABLE(
            uname      => 'TESTS_OCS',
            orig_table => 'FileStorage',
            int_table  => 'FileStorage_PART');
        END;

    During this step the FileStorage table is locked in exclusive mode until the operation finishes. After the last command the FileStorage table is partitioned. If you have content outside the fixed ranges, you should see the corresponding interval partitions created automatically, so no error is raised if you "forgot" to define all the future ranges. You can now drop the FileStorage_PART table:

        DROP TABLE FileStorage_PART PURGE;

    To check that the FileStorage table is valid and partitioned, use:

        SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE';

    You can also list the contents of the FileStorage table in a specific partition, for example:

        SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY);

    Some useful commands to check the partitions (these need to be run as a DBA user):

        SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';
        SELECT * FROM DBA_TABLESPACES WHERE tablespace_name like 'TESTS_OCS%';

    Once the redefinition process is complete, you have a new FileStorage table storing all content whose storage rule points to JDBC Storage, partitioned according to the rules defined when the temporary interim FileStorage_PART table was created. At this point you can test WebCenter Content by downloading documents (originals and renditions). Note that the content may already be in the cache area: look in the weblayout directory for a file with the same id, then click on the web rendition of your test file; if the file is created and you can open it, everything is working. The redefinition process can be repeated many times, which lets you test which layout works best, over and over again.

    Now some interesting maintenance actions related to the partitions. Making a tablespace read-only: there is no issue with viewing, since WebCenter Content does not alter existing revisions. However, when you try to delete a content item that lives in a read-only tablespace, an error occurs and the document is not deleted. The only way to prevent these errors today is to build a custom component that checks the partitions and, for a document stored in a "read only" repository, deletes the metadata and marks the document to be physically removed during the next database maintenance, such as a new redefinition. Taking a tablespace offline, for archiving purposes or any other reason:
    when you try to open a document stored in that tablespace, you receive an error saying the content could not be retrieved, but the other, online tablespaces are not affected. The same behaviour applies when deleting documents. Again, a custom component is the solution: if a document is "out of range", the component can show a message saying that the repository for that document is offline, and this can be extended to offer the user an option to request that it be brought back online.

    Another maintenance option is moving some legacy content to an offline repository (table), using the exchange option to move the content from one partition to an empty non-partitioned table such as FileStorage_LEGACY. Note that this removes the rows from FileStorage, and the stored content can no longer be opened. You always need to keep the indexes and constraints in mind.

    Finally, you can run a redefinition that separates the original content (vault) from the renditions and partitions by date at the same time. This can be an option for DAM environments that want a special place for the renditions and prefer to put the original files on cheaper, lower-performance storage. The process is the same; you just need to change the interim table script to use composite partitioning. It will be something like:

        CREATE TABLE FILESTORAGE_RenditionPart (
          DID NUMBER(*,0) NOT NULL ENABLE,
          DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
          DLASTMODIFIED TIMESTAMP (6),
          DFILESIZE NUMBER(*,0),
          DISDELETED VARCHAR2(1 CHAR),
          BFILEDATA BLOB
        )
        LOB (BFILEDATA) STORE AS SECUREFILE (
          ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS
        )
        PARTITION BY LIST (DRENDITIONID)
        SUBPARTITION BY RANGE (DLASTMODIFIED)
        (
          PARTITION Vault VALUES ('primaryFile')
          (
            SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN
              (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
              LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN
              (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
              LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN
              (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
              LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN
              (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
              LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE)
          ),
          PARTITION WebLayout VALUES ('webViewableFile')
          (
            SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN
              (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
              LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN
              (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
              LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN
              (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
              LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN
              (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
              LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE)
          ),
          PARTITION Special VALUES ('Special')
          (
            SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN
              (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
              LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN
              (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
              LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN
              (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
              LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN
              (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
              LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE)
          )
        ) ENABLE ROW MOVEMENT;

    The next post on partitioned repositories will come with a sample component that handles the possible exceptions when you need to take a tablespace/partition offline or move it somewhere else, and we can also include some integration with Retention Management and Records Management. Another subject related to partitioning is the ability to create a FileStore Provider pointing to a different database, raising the level of distributed storage vs. performance. Let us know if this is important to you, or if you have a use case not listed here, by leaving a comment. Cross-posted on the blog.ContentrA.com


  • Outputting a bar chart to an iPhone application?

    - by Moddy
    Right, I really want to output a bar graph from an Obj-C iPhone application - now I may be missing a vital SDK class or something, but right now my solution is to embed a WebView and inside that have a jQuery/Flot based graph - not totally ideal, I know! Just wondering if anybody has any other creative solutions, or whether a WebView/AJAX solution is the way to go? (For the record, my data source will be figures returned from an external source - i.e. downloaded from the internet. So I was even toying with having a PHP proxy/script do all the work on that and return the figures AND a graph to the application - but then I risk placing extra strain on the server!)


  • Explain BFS and DFS in terms of backtracking

    - by HH
    Wikipedia says about DFS: "Depth-first search (DFS) is an algorithm for traversing or searching a tree, tree structure, or graph. One starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking." So is BFS "an algorithm that chooses a starting node, checks all nodes -- backtracks --, chooses the shortest path, chooses neighbour nodes -- backtracks --, chooses the shortest path -- and finally finds the optimal path because it traverses each path thanks to continuous backtracking"? The term backtracking confuses me because of its variety of uses: an SO user explained UNIX find's pruning in terms of backtracking, and RegexBuddy uses the term "catastrophic backtracking" if you do not limit the scope of your regexes. It seems to be too wide an umbrella term. So: how do you define "backtracking" graph-theoretically, and what is "backtracking" in BFS and DFS?
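
    One way to pin the term down graph-theoretically: in a recursive DFS, "backtracking" is simply the return from the recursion once a vertex has no unvisited neighbours left, after which the previous vertex tries its remaining neighbours; BFS never backtracks in that sense, it just expands a FIFO frontier level by level. A small sketch contrasting the two, assuming an adjacency-list map:

    ```java
    import java.util.ArrayDeque;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Queue;
    import java.util.Set;

    public class TraversalSketch {
        // DFS: go as deep as possible; returning from the recursion is the "backtrack".
        static void dfs(Map<Integer, List<Integer>> g, int v, Set<Integer> seen) {
            seen.add(v);
            System.out.println("visit " + v);
            for (int next : g.getOrDefault(v, List.of())) {
                if (!seen.contains(next)) {
                    dfs(g, next, seen);
                    System.out.println("backtrack to " + v);   // branch under 'next' is exhausted
                }
            }
        }

        // BFS: no backtracking, just a FIFO frontier expanded level by level.
        static void bfs(Map<Integer, List<Integer>> g, int start) {
            Set<Integer> seen = new HashSet<>(Set.of(start));
            Queue<Integer> queue = new ArrayDeque<>(List.of(start));
            while (!queue.isEmpty()) {
                int v = queue.poll();
                System.out.println("visit " + v);
                for (int next : g.getOrDefault(v, List.of())) {
                    if (seen.add(next)) queue.add(next);
                }
            }
        }

        public static void main(String[] args) {
            Map<Integer, List<Integer>> g =
                    Map.of(1, List.of(2, 3), 2, List.of(4), 3, List.of(), 4, List.of());
            dfs(g, 1, new HashSet<>());
            bfs(g, 1);
        }
    }
    ```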


  • Facebook App Wall Posting no longer showing in Facebook iPhone App

    - by David Hsu
    I use the Graph API with Django for Facebook wall postings. Since yesterday, the wall posts only show up in the Facebook web app but not in the Facebook iPhone app. I tried Yelp, and their postings still show up. How can I debug this? Has anyone else noticed this issue with their Facebook Connect integration? Is this a Facebook algorithm issue? Code for the wall post:

        graph = facebook.GraphAPI(access_token)
        attachment = {"name": name,
                      "link": link,
                      #"caption": "{*actor*} posted a new review",
                      "description": desc,
                      "picture": picture}
        graph.put_wall_post("", attachment)


  • Query to find all the nodes that are two steps away from a particular node.

    - by iecut
    Suppose I have two columns in a table that represents a graph; the first column is FROMNODE and the second one is TONODE. What I would like to know is how to find all the nodes that are two steps away from a particular node. Let's suppose I have a node numbered '1' and I would like to know all the nodes that are two steps away from it. Assuming the table is named GRAPH, I have tried: SELECT FROMNODE FROM GRAPH WHERE TONODE=1 (this selects all the nodes that are connected to node 1, but I couldn't figure out how to find all the nodes that are two steps away from node 1).


  • Posting an action works... but no image

    - by Brian Rice
    I'm able to post an Open Graph action to Facebook using the following URL: https://graph.facebook.com/me/video.watches with the following post data: video=http://eqnetwork.com/home/video.html?f=8e7b4f27-8cbd-4430-84df-d9ccb46da45f.mp4 It seems to be getting the title from the Open Graph meta tags on the "video" object, but it's not getting the image (even though one is specified in the "og:image" meta tag). Also, if I add this to the post data: picture=http://eqnetwork.com/icons/mgen/overplayThumbnail.ms?drid=14282&subType=ljpg&w=120&h=120&o=1&thumbnail= there is still no image. Any thoughts? Brian


  • Calculating Divergent Paths on Subtending Rings

    - by Russ
    I need to calculate two paths from A to B in the following graph, with the constraint that the paths can't share any edges: hmm, okay, can't post images, here's a link. All edges have positive weights; for this example I think we can assume that they're equal. My naive approach is to use Dijkstra's algorithm to calculate the first path, shown in the second graph in the above image. Then I remove its edges from the graph and try to calculate the second path, which fails. Is there a variation of Dijkstra or Bellman-Ford (or anything else) that will calculate the paths shown in the third diagram above? (Without special knowledge and removal of the subtending link, is what I mean.)


  • FlockDB - What is it? And best cases for its use.

    - by Guru
    Just came across the FlockDB graph database. Details at github/flockDB. Twitter claims it uses FlockDB for the following: "Twitter runs FlockDB on a large cluster of machines. We use it to store social graphs (who follows whom, who blocks whom) and secondary indices at Twitter." At first glance, setting it up and trying it doesn't look straightforward. Has anyone already set it up or used it? If so, please answer the following general queries: What kind of applications is it better suited for? (Twitter claims it is simple and very rough; it remains to be seen what that means, though.) How is FlockDB better than other graph DBs / NoSQL DBs? Have you set up FlockDB and used it for an application? Any early advice? Note: I am evaluating FlockDB and other graph databases mainly to learn them. Perhaps I will build an application with one of them.


  • list all likes to a certain post coming from my friends

    - by Max Favilli
    I know I can get the posts (paginated, 25 at a time) of a certain user with this: https://graph.facebook.com/575128756/posts And the likes of a certain post (paginated, 25 at a time) with this Graph API call (I omit the first part of the URL): /575128756_176517292390301/likes But what if I want to get only the likes coming from friends of the post's owner? FQL could be an alternative (from the like table, with subqueries on friend and stream), but FQL seems so buggy in my tests that I don't feel comfortable using it in a production system. Is there any way to achieve this using the Graph API?

