Search Results

Search found 4133 results on 166 pages for 'boost graph'.

Page 11 of 166

  • Eliminating cyclic flows from a graph

    - by Jon Harrop
    I have a directed graph with flow volumes along edges and I would like to simplify it by removing all cyclic flows. This can be done by finding the minimum of the flow volumes along each edge in any given cycle and reducing the flows of every edge in the cycle by that minimum volume, deleting edges with zero flow. When all cyclic flows have been removed the graph will be acyclic. For example, if I have a graph with vertices A, B and C with flows of 1 from A→B, 2 from B→C and 3 from C→A then I can rewrite this with no flow from A→B, 1 from B→C and 2 from C→A. The number of edges in the graph has reduced from 3 to 2 and the resulting graph is acyclic. Which algorithm(s), if any, solve this problem?
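
    One straightforward answer is iterative cycle cancelling: find any directed cycle, subtract the minimum flow on it, delete drained edges, repeat. A minimal sketch, assuming a map-based edge representation (all names are made up for illustration):

        // A sketch of the cancelling loop, with flows kept in a map from
        // (from, to) pairs to volumes. find_cycle() is a hypothetical helper:
        // any standard DFS that returns one directed cycle as a vertex
        // sequence v0, v1, ..., v0, or an empty vector when none exists.
        #include <algorithm>
        #include <cstddef>
        #include <limits>
        #include <map>
        #include <utility>
        #include <vector>

        using Edge = std::pair<int, int>;
        using FlowMap = std::map<Edge, double>;

        std::vector<int> find_cycle(const FlowMap& flows);  // hypothetical helper

        void cancel_cycles(FlowMap& flows) {
            for (;;) {
                std::vector<int> cycle = find_cycle(flows);
                if (cycle.empty()) return;              // no cycles left: acyclic
                double m = std::numeric_limits<double>::max();
                for (std::size_t i = 0; i + 1 < cycle.size(); ++i)
                    m = std::min(m, flows[{cycle[i], cycle[i + 1]}]);
                for (std::size_t i = 0; i + 1 < cycle.size(); ++i) {
                    Edge e{cycle[i], cycle[i + 1]};
                    if ((flows[e] -= m) <= 0) flows.erase(e);  // drop drained edges
                }
            }
        }

    Each round removes at least one edge (the minimum-flow one), so the loop terminates after at most E rounds; with an O(V+E) DFS per round that is O(E·(V+E)) overall.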

    Read the article

  • async_write/async_read problems while trying to implement question-answer logic

    - by Max
    Good day. I'm trying to implement question-answer logic using boost::asio. On the Client I have: void Send_Message() { .... boost::asio::async_write(server_socket, boost::asio::buffer(&Message, sizeof(Message)), boost::bind(&Client::Handle_Write_Message, this, boost::asio::placeholders::error)); .... } void Handle_Write_Message(const boost::system::error_code& error) { .... std::cout << "Message was sent.\n"; .... boost::asio::async_read(server_socket_, boost::asio::buffer(&Message, sizeof(Message)), boost::bind(&Client::Handle_Read_Message, this, boost::asio::placeholders::error)); .... } void Handle_Read_Message(const boost::system::error_code& error) { .... std::cout << "I have a new message.\n"; .... } And on the Server I have the "same logic" code: void Read_Message() { .... boost::asio::async_read(client_socket, boost::asio::buffer(&Message, sizeof(Message)), boost::bind(&Server::Handle_Read_Message, this, boost::asio::placeholders::error)); .... } void Handle_Read_Message(const boost::system::error_code& error) { .... std::cout << "I have a new message.\n"; .... boost::asio::async_write(client_socket_, boost::asio::buffer(&Message, sizeof(Message)), boost::bind(&Server::Handle_Write_Message, this, boost::asio::placeholders::error)); .... } void Handle_Write_Message(const boost::system::error_code& error) { .... std::cout << "Message was sent back.\n"; .... } Message is just a structure. The output on the Client is: Message was sent. The output on the Server is: I have a new message. And that's all. After this both programs keep running but nothing happens. I tried adding code like: if (!error) { .... } else { // close sockets, etc. } but there are no errors in reading or writing. Both programs run normally but don't interact with each other. This code is quite obvious, but I can't understand why it's not working. Thanks in advance for any advice.
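
    One hedged observation and a diagnostic sketch, not a confirmed fix: async_read with a buffer of sizeof(Message) completes only once exactly that many bytes have arrived, so if the two binaries disagree about sizeof(Message) (different packing or padding), the pending read never fires and both programs sit idle, which matches the symptom. Binding bytes_transferred into the handlers makes this visible:

        // boost::bind silently ignores unbound arguments, which is why the
        // one-placeholder handlers above compile; binding the second
        // placeholder too shows how many bytes each operation actually moved.
        boost::asio::async_read(client_socket,
            boost::asio::buffer(&Message, sizeof(Message)),
            boost::bind(&Server::Handle_Read_Message, this,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));

        void Handle_Read_Message(const boost::system::error_code& error,
                                 std::size_t bytes_transferred)
        {
            std::cout << "read " << bytes_transferred << " of "
                      << sizeof(Message) << " expected bytes\n";
            // ....
        }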

    Read the article

  • g++ linker error--typeinfo, but not vtable

    - by James
    I know the standard answer for a linker error about missing typeinfo usually also involves vtable and some virtual function that I forgot to actually define. I'm fairly certain that's not the situation this time. Here's the error: UI.o: In function `boost::shared_ptr<Graphics::Widgets::WidgetSet>::shared_ptr<Graphics::Resource::GroupByState>(boost::shared_ptr<Graphics::Resource::GroupByState> const&, boost::detail::dynamic_cast_tag)': UI.cpp:(.text._ZN5boost10shared_ptrIN8Graphics7Widgets9WidgetSetEEC1INS1_8Resource12GroupByStateEEERKNS0_IT_EENS_6detail16dynamic_cast_tagE[boost::shared_ptr<Graphics::Widgets::WidgetSet>::shared_ptr<Graphics::Resource::GroupByState>(boost::shared_ptr<Graphics::Resource::GroupByState> const&, boost::detail::dynamic_cast_tag)]+0x30): undefined reference to `typeinfo for Graphics::Widgets::WidgetSet' Running c++filt on the obnoxious mangled name shows that it actually is looking at boost::shared_ptr::shared_ptr(boost::shared_ptr const&, boost::detail::dynamic_cast_tag) The inheritance hierarchy looks something like class AbstractGroup { typedef boost::shared_ptr<AbstractGroup> Ptr; ... }; class WidgetSet : public AbstractGroup { typedef boost::shared_ptr<WidgetSet> Ptr; ... }; class GroupByState : public AbstractGroup { ... }; Then there's this: class UI : public GroupByState { ... void LoadWidgets( GroupByState::Ptr resource ); }; Then the original implementation: void UI::LoadWidgets( GroupByState::Ptr resource ) { WidgetSet::Ptr tmp( boost::dynamic_pointer_cast< WidgetSet >(resource) ); if( tmp ) { ... } } Stupid error on my part (trying to cast to a sibling class with a shared parent), even if the error is kind of cryptic. Changing to this: void UI::LoadWidgets( AbstractGroup::Ptr resource ) { WidgetSet::Ptr tmp( boost::dynamic_pointer_cast< WidgetSet >(resource) ); if( tmp ) { ... } } (which I'm fairly sure is what I actually meant to be doing) left me with a very similar error: UI.o: In function `boost::shared_ptr<Graphics::Widgets::WidgetSet>::shared_ptr<Graphics::_Drawer::Group>(boost::shared_ptr<Graphics::_Drawer::Group> const&, boost::detail::dynamic_cast_tag)': UI.cpp:(.text._ZN5boost10shared_ptrIN8Graphics7Widgets9WidgetSetEEC1INS1_7_Drawer5GroupEEERKNS0_IT_EENS_6detail16dynamic_cast_tagE[boost::shared_ptr<Graphics::Widgets::WidgetSet>::shared_ptr<Graphics::_Drawer::Group>(boost::shared_ptr<Graphics::_Drawer::Group> const&, boost::detail::dynamic_cast_tag)]+0x30): undefined reference to `typeinfo for Graphics::Widgets::WidgetSet' collect2: ld returned 1 exit status dynamic_cast_tag is just an empty struct in boost/shared_ptr.hpp. It's just a guess that boost might have anything at all to do with the error. Passing in a WidgetSet::Ptr totally eliminates the need for a cast, and it builds fine (which is why I think there's more going on than the standard answer for this question). Obviously, I'm trimming away a lot of details that might be important. My next step is to cut it down to the smallest example that fails to build, but I figured I'd try the lazy way out and take a stab on here first. TIA!
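
    For reference, a minimal sketch of the usual mechanics behind this message (borrowing the question's class names; not the asker's real code): g++ emits a polymorphic class's typeinfo and vtable in the translation unit that defines its first non-inline virtual member function (the "key function"), and dynamic_pointer_cast references that typeinfo directly.

        // If the key function is declared but its definition never gets
        // compiled and linked, any dynamic_cast to the class fails to link
        // with "undefined reference to typeinfo", even though code that
        // never casts to it builds fine.
        struct AbstractGroup {
            virtual ~AbstractGroup() {}
        };
        struct WidgetSet : AbstractGroup {
            virtual void Draw();             // key function: declared here...
        };
        void WidgetSet::Draw() { /* ... */ } // ...must live in a linked .cpp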

    Read the article

  • What is wrong with this attempt of sending a break-signal?

    - by Jook
    I have quite a headache about this seemingly easy task: send a break signal to my device, like wxTerm (or any similar terminal application) does. This signal has to be 125ms long, according to my tests and the device's specification. It should result in a specific response, but what I get is a longer response than expected, and the transmitted data is wrong. E.g. what it should respond: 08 00 81 00 00 01 07 00; what it does respond: 08 01 0A 0C 10 40 40 07 00 7F. What really boggles me is that after I have used wxTerm to look at my available com-ports (without connecting or sending anything), my code starts to work! I can then send as many breaks as I like and I get my response right from then on. I have to reset my PC in order to try it again. What the heck is going on here?! Here is my code for a reset through a break-signal: minicom_client(boost::asio::io_service& io_service, unsigned int baud, const string& device) : active_(true), io_service_(io_service), serialPort(io_service, device) { if (!serialPort.is_open()) { cerr << "Failed to open serial port\n"; return; } boost::asio::serial_port_base::flow_control FLOW( boost::asio::serial_port_base::flow_control::hardware ); boost::asio::serial_port_base::baud_rate baud_option(baud); serialPort.set_option(FLOW); serialPort.set_option(baud_option); read_start(); std::cout << SetCommBreak(serialPort.native_handle()) << std::endl; std::cout << GetLastError() << std::endl; boost::posix_time::ptime mst1 = boost::posix_time::microsec_clock::local_time(); boost::this_thread::sleep(boost::posix_time::millisec(125)); boost::posix_time::ptime mst2 = boost::posix_time::microsec_clock::local_time(); std::cout << ClearCommBreak(serialPort.native_handle()) << std::endl; std::cout << GetLastError() << std::endl; boost::posix_time::time_duration msdiff = mst2 - mst1; std::cout << msdiff.total_milliseconds() << std::endl; } Edit: It was only necessary to look at the combo-box selection of com-ports in wxTerm - no active connection had to be established to make my code work. I am guessing that there is some sort of initialisation missing which is done when wxTerm creates the list for the serial-port combo-box.
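
    Following the asker's own hunch about missing initialisation, a hedged sketch (plain Win32 calls throughout; the frame settings are assumptions to be matched to the device): set up the DCB explicitly before pulling the line low, the way a terminal program would as a side effect of probing the port.

        #include <windows.h>

        // h would be serialPort.native_handle(); baud is an assumption that
        // must match the device.
        void send_break(HANDLE h, DWORD baud) {
            DCB dcb = {0};
            dcb.DCBlength = sizeof(dcb);
            if (GetCommState(h, &dcb)) {
                dcb.BaudRate = baud;
                dcb.ByteSize = 8;            // assumed 8N1 framing
                dcb.Parity   = NOPARITY;
                dcb.StopBits = ONESTOPBIT;
                SetCommState(h, &dcb);
            }
            SetCommBreak(h);                 // line low...
            Sleep(125);                      // ...for the specified 125 ms...
            ClearCommBreak(h);               // ...then release
        }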

    Read the article

  • Would a model like this translate well to a document or graph database?

    - by Eric
    I'm trying to understand what types of models that I have traditionally persisted relationally would translate well to some kind of NoSQL database. Suppose I have a model with the following relationships: Product 1-----0..N Order Customer 1-----0..N Order And suppose I need to frequently query things like All Orders, All Products, All Customers, All Orders for Given Customer, All Orders for Given Product. My feeling is that this kind of model would not denormalize cleanly: if I had Product and Customer documents with embedded Orders, both documents would have duplicate orders. So I think I'd need separate documents for all three entities. Does a characteristic like this typically indicate that a document database is not well suited for a given model? Generally speaking, would a document database perform as well as a relational database in this kind of situation? I know very little about graph databases, but I understand that a graph database handles relationships more performantly than a document database - would a graph database be suited for this kind of model?

    Read the article

  • Zlib compression in boost::iostreams not compatible with zlib.NET

    - by Johan
    Hello, I want to send compressed data in ZLIB format from my C# application to a C++ application. In C++, I use the zlib_compressor/zlib_decompressor available in boost::iostreams. In C#, I am currently using the ZOutputStream available in the zlib.NET library. First of all, when I compress the same data using both libraries, the results look different: boost::iostreams::zlib_compressor: FF 13 49 48 00 00 01 00 01 00 00 00 63 61 60 60 F8 00 C4 C1 25 45 99 79 E9 23 87 04 00 zlib.NET (zlib.ZOutputStream): FF 13 49 48 00 00 01 00 01 00 00 00 78 9C 63 61 60 60 F8 00 C4 C1 25 45 99 79 E9 23 87 04 00 4F 31 63 8D (Note the 78 9C pattern that is present in zlib.NET, but not in boost.) Furthermore, when I decompress data in boost that I compressed in zlib.NET, I am not able to read from the stream, suggesting something is wrong. It does work when I try to decompress data compressed in boost. Does anybody know what is going wrong? Thank you, Johan
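
    For what it's worth, 78 9C is the standard two-byte zlib stream header, and the four extra trailing bytes from zlib.NET (4F 31 63 8D) have the size of an Adler-32 checksum, so the boost output looks like a raw deflate stream without the zlib wrapper. A sketch of explicitly requesting the wrapper from boost (worth verifying against your boost version's defaults):

        #include <boost/iostreams/filter/zlib.hpp>
        #include <boost/iostreams/filtering_stream.hpp>

        namespace io = boost::iostreams;

        io::zlib_params params;
        params.noheader = false;   // ask for the 78 9C header + Adler-32 trailer
        io::filtering_ostream out;
        out.push(io::zlib_compressor(params));
        // push the destination device next, then write the payload to 'out'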

    Read the article

  • Boost.python building

    - by Ockonal
    Hi guys, I really can't understand how to correctly build a project that uses boost.python. I've included boost_(python/thread/system)-mt. Here is a simple module file: #include <boost/python.hpp> #include "script.hpp" #include "boost/python/detail/wrap_python.hpp" BOOST_PYTHON_MODULE(temp) { namespace py = boost::python; py::def("PyLog", &engine::log); } Here is the build log: http://dpaste.com/179232/. Can't imagine what I forgot. System: Arch Linux; ls /usr/lib | grep boost: http://dpaste.com/179233/

    Read the article

  • Using boost::iostreams to parse a binary file byte by byte

    - by Zsol
    So I would like to parse a binary file and extract some data from it. The problem I am facing with this is that I need to convert a stream of chars to a stream of unsigned chars. Reading the boost documentation, it seems that boost::iostreams::code_converter should be the solution for this, so I tried this: typedef unsigned char uint8_t; typedef boost::iostreams::stream<boost::iostreams::code_converter< boost::iostreams::basic_array_source<uint8_t> >, std::codecvt<uint8_t, char, std::mbstate_t> > array_stream; The idea was to specify a codecvt with InternalType=uint8_t and ExternalType=char. Unfortunately this does not compile. So the question is: how do I convert a stream of chars to a stream of uint8_ts?
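
    std::codecvt only has standard specializations for the wide/narrow character pairs, which is the usual reason a codecvt<unsigned char, char, mbstate_t> instantiation refuses to compile. Since unsigned char and char have the same size and representation for this purpose, a sketch that sidesteps code_converter entirely (the helper name is made up):

        #include <cstdint>
        #include <fstream>
        #include <iterator>
        #include <vector>

        // Read plain chars from the binary file and store them as uint8_t;
        // no character-set conversion is actually needed here.
        std::vector<std::uint8_t> read_bytes(const char* path) {  // hypothetical helper
            std::ifstream in(path, std::ios::binary);
            return std::vector<std::uint8_t>(std::istreambuf_iterator<char>(in),
                                             std::istreambuf_iterator<char>());
        }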

    Read the article

  • Building Boost with LSB C++ Compiler

    - by Alex Farber
    I want to build my program with the LSB C++ Compiler from the Linux Standard Base http://www.linuxfoundation.org/collaborate/workgroups/lsb. The program depends on the Boost library, which was built with gcc 4.4. Compilation fails. Is it possible to build the Boost library with the LSB C++ Compiler? Alternatively, is it possible to build the Boost library with some old gcc version, and what version is recommended? My final goal is to get my executable and the third-party Boost libraries running on most Linux distributions. Generally, what can be done to get better binary compatibility across Linux distributions when developing a closed-source C++ application that depends on the Boost library?

    Read the article

  • Python extension building with boost

    - by user1544053
    Hey guys, I'm fairly new to the Boost C++ library. I downloaded the Boost library and built it. I created a very simple Python module in C++ using the Boost.Python interface (actually it is example code given in the documentation) and built it into a dll file. The documentation reads that this dll is exposed to Python, and it just shows the import statement in Python that includes the created library. I don't understand how to expose that dll to Python and load the library in the traditional ('import') manner. In case you want to look at the code, here it is: #include <boost/python.hpp> char const* greet() { return "hello, world"; } BOOST_PYTHON_MODULE(hello_ext) { using namespace boost::python; def("greet", greet); } Please help, I really want to build applications with C/C++ and Python. I simply want to use hello_ext as: >>>import hello_ext >>>print hello_ext.greet() Thank you.

    Read the article

  • boost::this_thread::disable_interruption usage confusion

    - by Evgenii
    boost/thread/pthread/shared_mutex.hpp contains this code: ... #include <boost/thread/detail/thread_interruption.hpp> ... class shared_mutex { ... void lock_shared() { boost::this_thread::disable_interruption do_not_disturb; boost::mutex::scoped_lock lk(state_change); while(state.exclusive || state.exclusive_waiting_blocked) { shared_cond.wait(lk); } ++state.shared_count; } ... }; but boost/thread/detail/thread_interruption.hpp does not contain the implementation of disable_interruption, only the prototype, and boost_1_42_0/libs/thread/src/pthread doesn't have the implementation either. How does it work?

    Read the article

  • How do I get the size of the boost buffer

    - by Anonymous
    I am trying to make an asynchronous server in Visual Studio, and I use boost::asio::async_read(m_socket, boost::asio::buffer(m_buffer), boost::bind(&tcp_connection::handle_read, shared_from_this(), boost::asio::placeholders::error)); to read incoming data into m_buffer: boost::array<char, 256> m_buffer; But how do I get the size of what was actually received into m_buffer? size() didn't work, end() didn't work. Any help would be fantastic. Thanks in advance.
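
    A sketch of the usual approach: the fixed-size boost::array can only ever report size() == 256, while the number of bytes a read actually produced is passed to the completion handler once you bind the second placeholder:

        boost::asio::async_read(m_socket, boost::asio::buffer(m_buffer),
            boost::bind(&tcp_connection::handle_read, shared_from_this(),
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));

        void handle_read(const boost::system::error_code& error,
                         std::size_t bytes_transferred)
        {
            // only the first bytes_transferred bytes of m_buffer are valid
        }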

    Read the article

  • Boost ASIO read X bytes synchronously into a vector

    - by xeross
    Hey, I've been attempting to write a client/server app with boost. So far it sends and receives, but I can't seem to just read X bytes into a vector. If I use the following code vector<uint8_t> buf; for (;;) { buf.resize(4); boost::system::error_code error; size_t len = socket.read_some(boost::asio::buffer(buf), error); if (error == boost::asio::error::eof) break; // Connection closed cleanly by peer. else if (error) throw boost::system::system_error(error); // Some other error. } and the packet is bigger than 4 bytes, it seems it keeps writing into those 4 bytes until the entire packet has been received. However, I want it to fetch 4 bytes, allow me to parse them, and then get the rest of the packet. Can anyone provide me with a working example, or at least a pointer on how to make it work properly? Regards, Xeross
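
    A hedged sketch of the usual header-then-body pattern: read_some hands back whatever happens to be available, while the free function boost::asio::read blocks until the supplied buffer is completely filled (parse_header is a hypothetical stand-in for your own parsing):

        std::vector<uint8_t> buf(4);
        boost::system::error_code error;
        boost::asio::read(socket, boost::asio::buffer(buf), error);  // exactly 4 bytes
        if (!error) {
            std::size_t body_len = parse_header(buf);  // hypothetical parser
            buf.resize(body_len);
            boost::asio::read(socket, boost::asio::buffer(buf), error);  // the rest
        }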

    Read the article

  • how to use replace_regex_copy() from boost::algorithm library?

    - by Vincenzo
    This is my code: #include <string> #include <boost/algorithm/string/regex.hpp> string f(const string& s) { using namespace boost::algorithm; return replace_regex_copy(s, "\\w", "?"); } This is what the compiler says: no matching function for call to ‘replace_regex_copy(const std::basic_string<char, std::char_traits<char>, std::allocator<char> >&, std::string, std::string) The link to the library: http://www.boost.org/doc/libs/1_43_0/doc/html/boost/algorithm/replace_regex_copy.html Could anyone please help? Thanks! P.S. The Boost library is in place, since other functions from it work fine.
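
    The overload takes a boost::regex object rather than a string literal, which is what the "no matching function" complaint is about. A sketch of the compiling call (it also needs boost/regex.hpp and linking against boost_regex; use replace_all_regex_copy to replace every match rather than just the first):

        #include <string>
        #include <boost/algorithm/string/regex.hpp>
        #include <boost/regex.hpp>

        // The regex and the format string must be real boost::regex and
        // std::string objects so the template arguments deduce correctly.
        std::string f(const std::string& s) {
            using namespace boost::algorithm;
            return replace_regex_copy(s, boost::regex("\\w"), std::string("?"));
        }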

    Read the article

  • Using * Width & Precision Specifiers With boost::format

    - by John Dibling
    I am trying to use width and precision specifiers with boost::format, like this: #include <boost\format.hpp> #include <string> int main() { int n = 5; std::string s = (boost::format("%*.*s") % (n*2) % (n*2) % "Hello").str(); return 0; } But this doesn't work because boost::format doesn't support the * specifier. Boost throws an exception when parsing the string. Is there a way to accomplish the same goal, preferably using a drop-in replacement?
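
    One workaround sketch: boost::format has no '*' specifier, but the printf-style directives it does accept can be assembled at run time, which achieves the same effect:

        #include <boost/format.hpp>
        #include <sstream>
        #include <string>

        int main() {
            int n = 5;
            std::ostringstream spec;              // build "%10.10s" at run time
            spec << "%" << (n * 2) << "." << (n * 2) << "s";
            std::string s = (boost::format(spec.str()) % "Hello").str();
            return 0;
        }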

    Read the article

  • How to use FB Graph to post a message on a feed (wall)

    - by qualbeen
    I have created an app, and now I want to post a message on one of my friends' walls using the new Graph API. Is this do-able? I am already using OAuth and the Graph API to get a list of all my friends. The API at http://developers.facebook.com/docs/api tells me to cURL https://graph.facebook.com/[userid]/feed to read the feed, but it also tells me how to post a message: curl -F 'access_token=[...]' -F 'message=Hello, Arjun. I like this new API.' https://graph.facebook.com/arjun/feed Of course this doesn't work, and I can't find out why. Here is my PHP code: require_once 'facebook.php'; // PHP-SDK downloaded from http://github.com/facebook/php-sdk $facebook = new Facebook(array(appId=>123, secret=>'secret')); $result = $facebook->api( '/me/feed/', array('access_token' => $this->access_token, 'message' => 'Playing around with FB Graph..') ); This code does not throw any error, and I know my access_token is correct (otherwise I couldn't run $facebook->api('/me?access_token='.$this->access_token); to get my user object). Has anyone out there successfully posted a message using the Graph API? Then I need your help! :-)

    Read the article

  • Big problem with Dijkstra algorithm in a linked list graph implementation

    - by Nazgulled
    Hi, I have my graph implemented with linked lists, for both vertices and edges, and that is becoming an issue for the Dijkstra algorithm. As I said in a previous question, I'm converting code that uses an adjacency matrix to work with my graph implementation. The problem is that when I find the minimum value I get an array index. This index would match the vertex index if the graph vertices were stored in an array instead, and access to the vertex would be constant. I don't have time to change my graph implementation, but I do have a hash table, indexed by a unique number (one that does not start at 0; it's like 100090000), which is the problem I'm having. Whenever I need to, I use the modulo operator to get a number between 0 and the total number of vertices. This works fine when I need an array index from the number, but when I need the number from the array index (to access the calculated minimum-distance vertex in constant time), not so much. I tried to search for how to invert the modulo operation, like 100090000 mod 18000 = 10000 and 10000 invmod 18000 = 100090000, but couldn't find a way to do it. My next alternative is to build some sort of reference array where, in the example above, arr[10000] = 100090000. That would fix the problem but would require looping over the whole graph one more time. Do I have any better/easier solution with my current graph implementation?
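
    The modulo mapping can't be inverted arithmetically (it is many-to-one), but the inverse can be recorded for free while the hash table is first filled, avoiding the feared extra pass over the graph. A sketch with made-up names, assuming the modulo scheme is collision-free as the question implies:

        std::vector<long> index_to_id(num_vertices);   // one extra array, O(V) space

        // wherever a vertex enters the hash table:
        std::size_t index = vertex_id % num_vertices;  // the existing scheme
        index_to_id[index] = vertex_id;                // remember the inverse

        // later, converting the minimum-distance index back, in O(1):
        long id = index_to_id[min_index];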

    Read the article

  • Detecting Asymptotes in a Graph

    - by nasufara
    I am creating a graphing calculator in Java as a project for my programming class. There are two main components to this calculator: the graph itself, which draws the line(s), and the equation evaluator, which takes in an equation as a String and... well, evaluates it. To create the line, I create a Path2D.Double instance and loop through the points on the line. To do this, I calculate as many points as the graph is wide (e.g. if the graph itself is 500px wide, I calculate 500 points), and then scale it to the window of the graph. Now, this works perfectly for almost any line. However, it does not when dealing with asymptotes. If, when calculating points, the graph encounters a domain error (such as 1/0), the graph closes the shape in the Path2D.Double instance and starts a new line, so that the line looks mathematically correct. Example: However, because of the way it scales, sometimes it is rendered correctly, sometimes it isn't. When it isn't, the actual asymptotic line is shown, because within those 500 points, it skipped over x = 2.0 in the equation 1 / (x-2), and only did x = 1.98 and x = 2.04, which are perfectly valid in that equation. Example: In that case, I increased the window on the left and right one unit each. My question is: Is there a way to deal with asymptotes using this method of scaling so that the resulting line looks mathematically correct? I have thought of implementing a binary-search-esque method where, if one calculated point is wildly far from the next, it searches between those points for a domain error. I had trouble figuring out how to make it work in practice, however. Thank you for any help you may give!
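
    One workable test, sketched below in C++ (the logic ports directly to Java): treat a sign change combined with a very large jump between two adjacent samples as a pole, and start a new subpath there instead of connecting the points. The threshold is an assumption to tune, e.g. several times the visible graph height.

        #include <cmath>

        // Returns true when the segment between two adjacent samples most
        // likely crosses a vertical asymptote rather than the x axis.
        bool looks_like_pole(double y_prev, double y_next, double threshold) {
            bool sign_change = (y_prev > 0.0) != (y_next > 0.0);
            bool huge_jump   = std::abs(y_next - y_prev) > threshold;
            return sign_change && huge_jump;
        }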

    Read the article

  • Detecting Singularities in a Graph

    - by nasufara
    I am creating a graphing calculator in Java as a project for my programming class. There are two main components to this calculator: the graph itself, which draws the line(s), and the equation evaluator, which takes in an equation as a String and... well, evaluates it. To create the line, I create a Path2D.Double instance and loop through the points on the line. To do this, I calculate as many points as the graph is wide (e.g. if the graph itself is 500px wide, I calculate 500 points), and then scale it to the window of the graph. Now, this works perfectly for almost any line. However, it does not when dealing with singularities. If, when calculating points, the graph encounters a domain error (such as 1/0), the graph closes the shape in the Path2D.Double instance and starts a new line, so that the line looks mathematically correct. Example: However, because of the way it scales, sometimes it is rendered correctly, sometimes it isn't. When it isn't, the actual asymptotic line is shown, because within those 500 points, it skipped over x = 2.0 in the equation 1 / (x-2), and only did x = 1.98 and x = 2.04, which are perfectly valid in that equation. Example: In that case, I increased the window on the left and right one unit each. My question is: Is there a way to deal with singularities using this method of scaling so that the resulting line looks mathematically correct? I have thought of implementing a binary-search-esque method where, if one calculated point is wildly far from the next, it searches between those points for a domain error. I had trouble figuring out how to make it work in practice, however. Thank you for any help you may give!

    Read the article

  • Scientific Data processing (Graph comparison and interpretation)

    - by pinkynobrain
    Hi stackoverflow friends, I'm trying to write a program to automate one of my more boring and repetitive work tasks. I have some programming experience but none with processing or interpreting large volumes of data, so I am seeking your advice (both suggestions of techniques to try and also things to read to learn more about doing this stuff). I have a piece of equipment that monitors an experiment by taking repeated samples and displays the readings on its screen as a graph. The input of the experiment can be altered, and one of these changes should produce a change in a section of the graph which I currently identify by eye and is what I'm looking for in the experiment. I want to automate it so that a computer looks at a set of results and spots the experiment input that causes the change. I can already extract the results from the machine. Currently the results for a run are in the form of an integer array, with the index being the sample number and the corresponding value being the measurement. The overall shape of the graph will be similar for each experiment run. The change I'm looking for will be roughly the same and will occur in approximately the same place every time for the correct experiment input. Unfortunately there are a few gotchas that make this problem more difficult. There is some noise in the measuring process which means there is some random variation in the measured values between different runs, although the overall shape of the graph remains the same. The time the experiment takes varies slightly each run, causing two effects. First, the whole graph may be shifted slightly on the x axis relative to another run's graph. Second, individual features may appear slightly wider or narrower in different runs. In both these cases the variation isn't particularly large, and you can assume that the only non-random variation is caused by the correct input being found. Thank you for your time, Pinky
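
    A common first step for the x-shift gotcha, sketched under the assumption that runs have equal length: slide one run across the other and keep the offset with the smallest mean squared difference (a crude cross-correlation). The stretching of individual features would need something more, e.g. dynamic time warping.

        #include <cstddef>
        #include <limits>
        #include <vector>

        // Returns the shift (in samples) that best aligns run b onto run a.
        int best_shift(const std::vector<int>& a, const std::vector<int>& b,
                       int max_shift) {
            double best = std::numeric_limits<double>::max();
            int best_s = 0;
            for (int s = -max_shift; s <= max_shift; ++s) {
                double err = 0.0;
                int overlap = 0;
                for (std::size_t i = 0; i < a.size(); ++i) {
                    int j = static_cast<int>(i) + s;
                    if (j < 0 || j >= static_cast<int>(b.size())) continue;
                    double d = static_cast<double>(a[i]) - b[j];
                    err += d * d;
                    ++overlap;
                }
                if (overlap > 0 && err / overlap < best) {
                    best = err / overlap;
                    best_s = s;
                }
            }
            return best_s;
        }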

    Read the article

  • Scientific Data processing (Graph comparison and interpretation)

    - by pinkynobrain
    Hi stackoverflow friends, I'm trying to write a program to automate one of my more boring and repetitive work tasks. I have some programming experience but none with processing or interpreting large volumes of data, so I am seeking your advice (both suggestions of techniques to try and also things to read to learn more about doing this stuff). I have a piece of equipment that monitors an experiment by taking repeated samples and displays the readings on its screen as a graph. The input of the experiment can be altered, and one of these changes should produce a change in a section of the graph which I currently identify by eye and is what I'm looking for in the experiment. I want to automate it so that a computer looks at a set of results and spots the experiment input that causes the change. I can already extract the results from the machine. Currently the results for a run are in the form of an integer array, with the index being the sample number and the corresponding value being the measurement. The overall shape of the graph will be similar for each experiment run. The change I'm looking for will be roughly the same and will occur in approximately the same place every time for the correct experiment input. Unfortunately there are a few gotchas that make this problem more difficult. There is some noise in the measuring process which means there is some random variation in the measured values between different runs, although the overall shape of the graph remains the same. The time the experiment takes varies slightly each run, causing two effects. First, the whole graph may be shifted slightly on the x axis relative to another run's graph. Second, individual features may appear slightly wider or narrower in different runs. In both these cases the variation isn't particularly large, and you can assume that the only non-random variation is caused by the correct input being found. Thank you for your time, Pinky

    Read the article

  • boost::filesystem - how to create a boost path from a windows path string on posix platforms?

    - by VolkA
    I'm reading path names from a database which are stored as relative paths in Windows format, and I try to create a boost::filesystem::path from them on a Unix system. What happens is that the constructor call interprets the whole string as the filename. I need the path to be converted to a correct Posix path, as it will be used locally. I didn't find any conversion functions in the boost::filesystem reference, nor through Google. Am I just blind, or is there an obvious solution? If not, how would you do this? Example: std::string win_path("foo\\bar\\asdf.xml"); std::string posix_path("foo/bar/asdf.xml"); // loops just once, as part is the whole win_path interpreted as a filename boost::filesystem::path boost_path(win_path); BOOST_FOREACH(boost::filesystem::path part, boost_path) { std::cout << part << std::endl; } // prints each path component separately boost::filesystem::path boost_path_posix(posix_path); BOOST_FOREACH(boost::filesystem::path part, boost_path_posix) { std::cout << part << std::endl; }
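
    One straightforward workaround, assuming backslash only ever appears as a separator in these database paths (true for paths that came from Windows, where it cannot occur inside a name): swap the separator before constructing the path. The helper name is made up:

        #include <algorithm>
        #include <string>
        #include <boost/filesystem.hpp>

        // Turn a relative Windows-style path into one the POSIX path
        // constructor will split into components.
        boost::filesystem::path from_windows(std::string s) {  // hypothetical helper
            std::replace(s.begin(), s.end(), '\\', '/');
            return boost::filesystem::path(s);
        }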

    Read the article

  • Which technology is best suited to store and query a huge readonly graph?

    - by asmaier
    I have a huge directed graph: It consists of 1.6 million nodes and 30 million edges. I want the users to be able to find all the shortest connections (including incoming and outgoing edges) between two nodes of the graph (via a web interface). At the moment I have stored the graph in a PostgreSQL database. But that solution is not very efficient or elegant; I basically need to store all the edges of the graph twice (see my question PostgreSQL: How to optimize my database for storing and querying a huge graph). It was suggested to me to use a GraphDB like neo4j or AllegroGraph. However the free version of AllegroGraph is limited to 50 million nodes and also has a very high-level API (RDF), which seems too powerful and complex for my problem. Neo4j on the other hand has only a very low-level API (and the python interface is not mature yet). Both of them seem to be more suited for problems where nodes and edges are frequently added to or removed from a graph. For a simple search on a graph, these GraphDBs seem too complex. One idea I had would be to "misuse" a search engine like Lucene for the job, since I'm basically only searching for connections in a graph. Another idea would be to have a server process storing the whole graph (500MB to 1GB) in memory. The clients could then query the server process and could traverse the graph very quickly, since the graph is stored in memory. Is there an easy possibility to write such a server (preferably in Python) using some existing framework? Which technology would you use to store and query such a huge readonly graph?
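
    The in-memory idea is cheap to prototype: 30 million edges stored as 32-bit ids come to a couple of hundred MB, consistent with the 500MB-1GB estimate, and an unweighted shortest connection is a plain BFS. A sketch of the core query (shown in C++; the same structure works inside any long-running server process):

        #include <queue>
        #include <vector>

        // Breadth-first search over adjacency lists; dist[v] ends up as the
        // hop count from src to v, or -1 when v is unreachable.
        std::vector<int> bfs_dist(const std::vector<std::vector<int>>& adj, int src) {
            std::vector<int> dist(adj.size(), -1);
            std::queue<int> q;
            dist[src] = 0;
            q.push(src);
            while (!q.empty()) {
                int u = q.front(); q.pop();
                for (int v : adj[u])
                    if (dist[v] == -1) { dist[v] = dist[u] + 1; q.push(v); }
            }
            return dist;
        }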

    Read the article

  • Add a wall post to a page or application wall as page or application with facebook graph API

    - by blauesocke
    Hi, I want to create a new wall post on an application page or a "normal" page with the facebook graph API. Is there a way to "post as page"? With the old REST API it worked like this: $facebook->api_client->stream_publish($message, NULL, $links, $targetPageId, $asPageId); So, if I passed equal IDs for $targetPageId and $asPageId, I was able to post a "real" wall post not caused by my own facebook account. Thanks!

    Read the article

  • How can I get node coordinates from a graph, using Perl?

    - by jonny
    Ok, I have a flowchart definition (basically, an array of nodes and edges for each node). Now I want to calculate coordinates for every task in the flow, preferably in a hierarchical style. I need something like Graph::Easy::Layout, but I have no idea how to get node coordinates: I render nodes myself and only want to retrieve box coordinates/sizes. Any suggestions? What I need is a CPAN module available even in the Debian repository.

    Read the article
