Search Results

Search found 42001 results on 1681 pages for 'type theory'.

  • 3d symmetry search algorithm

    - by aaa
    This may be more appropriate for Math Overflow, but nevertheless: given a 3D structure (for example, a molecule), what is a good approach/algorithm for finding its symmetries (rotational/reflection/inversion/etc.)? I came up with a brute-force naive algorithm, but it seems there should be a better approach. I am not so much interested in genetic algorithms, as I would like the best symmetry rather than an almost-best symmetry. A link to a website/paper would be great. Thanks
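
    As a rough, hedged sketch of the brute-force idea mentioned above (assuming the structure is an N x 3 array of coordinates of identical atoms, and NumPy is available): center the points and test each candidate orthogonal transform by checking whether it maps the point set onto itself within a tolerance. The candidate set here contains only the 48 signed axis permutations, so it only finds symmetries aligned with the coordinate axes; a real molecular symmetry finder would derive candidate axes from the geometry (e.g. from the inertia tensor).

        # Brute-force symmetry check: an illustrative sketch, not a production point-group finder.
        import itertools
        import numpy as np

        def candidate_transforms():
            """Yield the 48 signed permutation matrices (axis-aligned rotations/reflections)."""
            for perm in itertools.permutations(range(3)):
                for signs in itertools.product((1, -1), repeat=3):
                    m = np.zeros((3, 3))
                    for row, (col, s) in enumerate(zip(perm, signs)):
                        m[row, col] = s
                    yield m

        def is_symmetry(points, transform, tol=1e-6):
            """True if `transform` maps the centered point set onto itself."""
            centered = points - points.mean(axis=0)
            mapped = centered @ transform.T
            # Every mapped point must coincide with some original point.
            dists = np.linalg.norm(mapped[:, None, :] - centered[None, :, :], axis=2)
            return bool(np.all(dists.min(axis=1) < tol))

        def find_symmetries(points):
            return [t for t in candidate_transforms() if is_symmetry(points, t)]

        square = np.array([[1, 1, 0], [1, -1, 0], [-1, -1, 0], [-1, 1, 0]], float)
        print(len(find_symmetries(square)))  # 16 axis-aligned symmetries of a square in the xy-plane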

    Read the article

  • How to draw a graph in LaTeX?

    - by Amir Rachum
    First of all, let me say I'm using LyX, though I have no problem using ERT. Secondly, what is the simplest way to draw a simple graph like this in LaTeX? I've seen some documents with graphs and I've seen some examples, but I couldn't figure out how to just draw a simple graph. What packages do I need, etc.?

    Read the article

  • Complex behavior generated by simple computation

    - by Yuval A
    Stephen Wolfram gave a fascinating talk at TED about his work with Mathematica and Wolfram Alpha. Amongst other things, he pointed out how very simple computations can yield extremely complex behaviors. (He goes on to discuss his ambition for computing the entire physical universe. Say what you will, you gotta give the guy some credit for his wild ideas...) As an example he showed several cellular automata. What other examples of simple computations do you know of that yield fascinating results?
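
    As a concrete, hedged illustration of the cellular automata mentioned above: an elementary cellular automaton such as Rule 30 updates each cell from just the three cells above it, yet produces a famously irregular pattern. The grid width and step count below are arbitrary choices.

        # Elementary cellular automaton, Rule 30: each new cell depends only on the
        # three cells above it, yet the overall pattern looks chaotic.
        RULE = 30
        WIDTH, STEPS = 63, 32

        def step(cells, rule=RULE):
            out = []
            for i in range(len(cells)):
                left = cells[i - 1]                       # wraps around at the edges
                right = cells[(i + 1) % len(cells)]
                pattern = (left << 2) | (cells[i] << 1) | right
                out.append((rule >> pattern) & 1)         # look up the rule bit
            return out

        row = [0] * WIDTH
        row[WIDTH // 2] = 1                               # single live cell in the middle
        for _ in range(STEPS):
            print("".join("#" if c else "." for c in row))
            row = step(row)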

    Read the article

  • Provable planarity of flowcharts

    - by Nikolaos Kavvadias
    Hi all, I have a question: is there any reference (e.g. a paper) with a proof of the planarity of flowchart layouts? Can anyone suggest an algorithm for generating (planar) flowchart layouts? I know that there are some code-to-flowchart tools out there, but I'm unaware of their internals. Thanks in advance -kavi
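
    Not a proof, but as a hedged starting point for experiments: if a particular flowchart is modeled as a graph, its planarity can be tested and a planar embedding extracted. The sketch below assumes the networkx library and a made-up five-node flowchart; edge direction is irrelevant to planarity, so an undirected graph is used.

        # Quick planarity test for a small, hypothetical flowchart graph.
        import networkx as nx

        # start -> decision -> (then / else) -> end, with the branches rejoining
        flowchart = nx.Graph([
            ("start", "decision"),
            ("decision", "then_branch"),
            ("decision", "else_branch"),
            ("then_branch", "end"),
            ("else_branch", "end"),
        ])

        is_planar, embedding = nx.check_planarity(flowchart)
        print("planar?", is_planar)              # True for this graph
        if is_planar:
            # The combinatorial embedding can be turned into coordinates for a drawing.
            positions = nx.planar_layout(flowchart)
            for node, (x, y) in positions.items():
                print(f"{node}: ({x:.2f}, {y:.2f})")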

    Read the article

  • Cast from Void* to TYPE* using C++ style cast: static_cast or reinterpret_cast

    - by David Relihan
    So if you're converting from void* to Type* or from Type* to void*, should you use:

        void func(void *p)
        {
            Params *params = static_cast<Params*>(p);
        }

    or

        void func(void *p)
        {
            Params *params = reinterpret_cast<Params*>(p);
        }

    To me static_cast seems the more correct, but I've seen both used for the same purpose. Also, does the direction of the conversion matter? I.e. should I still use static_cast for:

        _beginthread(func, 0, static_cast<void*>(params));

    I have read the other questions on C++-style casting, but I'm still not sure what the correct way is for this scenario (I think it is static_cast).

    Read the article

  • C# Interview Question - Anonymous Type

    - by Amutha
    Recently I was asked to prove the power of C# 3.0 in a single line (might be tricky). I wrote:

        new int[] { 1, 2, 3 }.Union(new int[] { 10, 23, 45 }).ToList().ForEach(x => Console.WriteLine(x));

    and explained that you can have (i) an anonymous array, (ii) an extension method, and (iii) a lambda and closure, all in a single line. I got a spot offer. But... the interviewer asked me how I would convert an anonymous type into a known type. :( I am 100% sure we cannot do that. The interviewer replied there is a 200% chance to do that if you have a small workaround. I was clueless. As usual, I am waiting for your valuable reply (is it possible?).

    Read the article

  • How to minimize total cost of shortest path tree

    - by Michael
    I have a directed acyclic graph with positive edge-weights. It has a single source and a set of targets (vertices furthest from the source). I find the shortest paths from the source to each target. Some of these paths overlap. What I want is a shortest path tree which minimizes the total sum of weights over all edges. For example, consider two of the targets. Given all edge weights equal, if they share a single shortest path for most of their length, then that is preferable to two mostly non-overlapping shortest paths (fewer edges in the tree equals lower overall cost). Another example: two paths are non-overlapping for a small part of their length, with high cost for the non-overlapping paths, but low cost for the long shared path (low combined cost). On the other hand, two paths are non-overlapping for most of their length, with low costs for the non-overlapping paths, but high cost for the short shared path (also, low combined cost). There are many combinations. I want to find solutions with the lowest overall cost, given all the shortest paths from source to target. Does this ring any bells with anyone? Can anyone point me to relevant algorithms or analogous applications? Cheers!
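
    As a hedged sketch of one greedy heuristic (not a provably optimal algorithm; minimizing the total tree weight is related to Steiner-tree problems): compute the shortest-path distances once, keep only edges that lie on some shortest path, then trace each target back to the source, preferring predecessors already connected to the tree so that paths are shared where possible. The function names and the graph encoding (a dict of dicts) are illustrative assumptions.

        import heapq

        def dijkstra(graph, source):
            """graph: {u: {v: weight}}, directed with positive weights.
            Returns shortest distances from source."""
            dist = {source: 0}
            heap = [(0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in graph.get(u, {}).items():
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return dist

        def greedy_shortest_path_tree(graph, source, targets):
            """Union of one shortest path per target, greedily reusing tree nodes.
            Every target is still reached along a true shortest path, but the total
            edge weight of the tree is not guaranteed to be minimal."""
            dist = dijkstra(graph, source)
            # Predecessors of v along *some* shortest path: edges with dist[u] + w == dist[v].
            preds = {v: [] for v in dist}
            for u, nbrs in graph.items():
                for v, w in nbrs.items():
                    if u in dist and v in dist and dist[u] + w == dist[v]:
                        preds[v].append(u)
            tree_edges, in_tree = set(), {source}
            for t in sorted(targets, key=lambda v: -dist[v]):        # farthest targets first
                v = t
                while v not in in_tree:
                    # Prefer a predecessor already connected to the source in the tree.
                    u = next((p for p in preds[v] if p in in_tree), None)
                    if u is None:
                        u = min(preds[v], key=lambda p: graph[p][v])  # otherwise the cheapest edge
                    tree_edges.add((u, v))
                    in_tree.add(v)
                    v = u
            return tree_edges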

    Read the article

  • jquery ajax type GET question

    - by Andrew Jackson
    Hi all! I have a simple question. I have a server running which takes actions according to parameters in the URL. For example, if I type in the browser: http://localhost:8081/Edit?action=renameModule&newName=Module2 this works correctly. I would like to know the equivalent jQuery ajax call to perform the same thing. I have tried:

        $.ajax({
            url: 'http://localhost:8081/Edit',
            type: 'GET',
            data: 'action=renameModule&newName=Module2'
        });

    It is not working. I would be very grateful for any help. Thanks

    Read the article

  • Can Haskell's Parsec library be used to implement a recursive descent parser with backup?

    - by Thor Thurn
    I've been considering using Haskell's Parsec parsing library to parse a subset of Java as a recursive descent parser, as an alternative to more traditional parser-generator solutions like Happy. Parsec seems very easy to use, and parse speed is definitely not a factor for me. I'm wondering, though, if it's possible to implement "backup" with Parsec, a technique which finds the correct production to use by trying each one in turn. For a simple example, consider the very start of the JLS Java grammar:

        Literal:
            IntegerLiteral
            FloatingPointLiteral

    I'd like a way to not have to figure out how I should order these two rules to get the parse to succeed. As it stands, a naive implementation like this:

        literal = do { x <- try (do { v <- integer; return (IntLiteral v) })
                            <|> (do { v <- float; return (FPLiteral v) });
                       return (Literal x) }

    will not work... inputs like "15.2" will cause the integer parser to succeed first, and then the whole thing will choke on the "." symbol. In this case, of course, it's obvious that you can solve the problem by re-ordering the two productions. In the general case, though, finding things like this is going to be a nightmare, and it's very likely that I'll miss some cases. Ideally, I'd like a way to have Parsec figure out stuff like this for me. Is this possible, or am I simply trying to do too much with the library? The Parsec documentation claims that it can "parse context-sensitive, infinite look-ahead grammars", so it seems like I should be able to do something here.

    Read the article

  • Is information a subset of data?

    - by Jason Baker
    I apologize as I don't know whether this is more of a math question that belongs on mathoverflow or if it's a computer science question that belongs here. That said, I believe I understand the fundamental difference between data, information, and knowledge. My understanding is that information carries both data and meaning. One thing that I'm not clear on is whether information is data. Is information considered a special kind of data, or is it something completely different?

    Read the article

  • create table based on a user defined type

    - by Glen
    Suppose I have a user-defined type:

        CREATE OR REPLACE TYPE TEST_TYPE AS OBJECT
        (
          f1 varchar2(10),
          f2 number(5)
        );

    Now, I want to create a table to hold these types. I can do the following:

        create table test_type_table
        (
          test_type_field test_type
        );

    This gives me a table with one column, test_type_field. Is there an easy and automated way to instead create a table such that it has two columns, f1 and f2? So that it's equivalent to writing:

        create table test_type_table
        (
          f1 varchar2(10),
          f2 number(5)
        );

    Read the article

  • How to expose a Delphi set type via Soap

    - by Wouter van Nifterick
    I'm currently creating SOAP wrappers for some Delphi functions so that we can easily use them from PHP, C#, and Delphi. I wonder what's the best way to expose sets.

        type
          TCountry = (countryUnknown, countryNL, countryD, countryB, countryS, countryFIN,
                      countryF, countryE, countryP, countryPl, countryL);
          TCountrySet = set of TCountry;

        function GetValidCountrySet(const LicensePlate: string;
          const PossibleCountriesSet: TCountrySet): TCountrySet;

    I'm currently wrapping it like this for the SOAP server:

        type
          TCountryArray = array of TCountry;

        function TVehicleInfo.GetValidCountrySet(const LicensePlate: string;
          const PossibleCountriesSet: TCountryArray): TCountryArray;

    It works, but I need to write a lot of useless and ugly code to convert sets to arrays and arrays to sets. Is there an easier, more elegant, or more generic way to do this?

    Read the article

  • Why must fixed size buffers (arrays) be unsafe?

    - by brickner
    Let's say I want to have a value type of 7 bytes (or 3 or 777). I can define it like that:

        public struct Buffer71
        {
            public byte b0;
            public byte b1;
            public byte b2;
            public byte b3;
            public byte b4;
            public byte b5;
            public byte b6;
        }

    A simpler way to define it is using a fixed buffer:

        public struct Buffer72
        {
            public unsafe fixed byte bs[7];
        }

    Of course the second definition is simpler. The problem lies with the unsafe keyword that must be provided for fixed buffers. I understand that this is implemented using pointers and is hence unsafe. My question is: why does it have to be unsafe? Why can't C# provide arbitrary constant-length arrays and keep them as a value type, instead of making it a C# reference-type array or an unsafe buffer?

    Read the article

  • Find all complete sub-graphs within a graph

    - by mvid
    Is there a known algorithm or method to find all complete sub-graphs within a graph? I have an undirected, unweighted graph and I need to find all subgraphs within it where each node in the subgraph is connected to each other node in the subgraph. Is there an existing algorithm for this?
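
    If "all complete sub-graphs" means all maximal cliques (every complete subgraph is contained in a maximal one), the classic approach is the Bron-Kerbosch algorithm; note that the number of maximal cliques can grow exponentially with graph size. A compact, hedged sketch, assuming the graph is given as an adjacency dict of sets:

        def bron_kerbosch(adj, r=None, p=None, x=None):
            """Yield every maximal clique of an undirected graph.
            adj: {node: set(neighbours)}; r, p, x are the usual recursion sets."""
            if r is None:
                r, p, x = set(), set(adj), set()
            if not p and not x:
                yield set(r)                       # r cannot be extended: maximal clique
                return
            for v in list(p):
                yield from bron_kerbosch(adj, r | {v}, p & adj[v], x & adj[v])
                p.remove(v)
                x.add(v)

        # Example: a triangle plus a pendant vertex.
        graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
        print(list(bron_kerbosch(graph)))          # the maximal cliques {1, 2, 3} and {3, 4}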

    Read the article

  • Find all cycles in graph, redux

    - by Shadow
    Hi, I know there are already quite some answers to this question. However, I found none of them really bringing it to the point. Some argue that a cycle is (almost) the same as a strongly connected component (see http://stackoverflow.com/questions/546655/finding-all-cycles-in-graph/549402#549402), so one could use algorithms designed for that goal. Some argue that finding a cycle can be done via DFS and checking for back-edges (see the Boost Graph documentation on file dependencies). I would now like some suggestions on whether all cycles in a graph can be detected via DFS and checking for back-edges. My opinion is that it could indeed work that way, as DFS-VISIT (see the pseudocode of DFS) freshly enters each node that was not yet visited. In that sense, each vertex exhibits a potential start of a cycle. Additionally, as DFS visits each edge once, each edge leading to the starting point of a cycle is also covered. Thus, by using DFS and back-edge checking it should indeed be possible to detect all cycles in a graph. Note that, if cycles with different numbers of participating nodes exist (e.g. triangles, rectangles, etc.), additional work has to be done to discriminate the actual "shape" of each cycle.
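
    A small, hedged sketch of the back-edge idea under discussion, for a directed graph given as an adjacency dict: a DFS that reports every back edge it meets; each back edge closes at least one cycle. As noted above, listing every individual cycle (rather than just the back edges) still takes extra work, e.g. a dedicated enumeration algorithm such as Johnson's.

        def find_back_edges(graph):
            """Iterative DFS over a directed graph {u: [v, ...]}.
            Returns the back edges; each back edge (u, v) closes at least one cycle v .. u -> v."""
            WHITE, GRAY, BLACK = 0, 1, 2           # unvisited / on the current DFS path / finished
            color = {u: WHITE for u in graph}
            back_edges = []

            def dfs(start):
                color[start] = GRAY
                stack = [(start, iter(graph[start]))]
                while stack:
                    u, it = stack[-1]
                    for v in it:
                        if color.get(v, WHITE) == WHITE:
                            color[v] = GRAY
                            stack.append((v, iter(graph.get(v, []))))
                            break
                        elif color[v] == GRAY:     # v is an ancestor on the current path
                            back_edges.append((u, v))
                    else:
                        color[u] = BLACK           # all neighbours explored
                        stack.pop()

            for node in graph:
                if color[node] == WHITE:
                    dfs(node)
            return back_edges

        # Example: the cycles 1 -> 2 -> 3 -> 1 and 2 -> 3 -> 4 -> 2 share an edge.
        g = {1: [2], 2: [3], 3: [1, 4], 4: [2]}
        print(find_back_edges(g))                  # [(3, 1), (4, 2)] -- one back edge per cycle here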

    Read the article

  • What are logical and path queries

    - by NomeN
    I'm reading a paper which mentions that a language for refactoring has three specific requirements:

        1. functional features (like ML)
        2. logical queries (like Datalog)
        3. path queries (like Datalog)

    I know what they mean by functional features, but I'm not totally clear on the latter two and can't find a clear explanation either. Although I have a good idea after what I could find on the subjects, I need to be sure, so here goes: could the SO community please clearly explain to me what logical queries and path queries are? Or at the very least, what did the people from the paper mean?

    Read the article

  • Computationally simple Pseudo-Gaussian Distribution with varying mean and standard deviation?

    - by mstksg
    This picture from Wikipedia has a nice example of the sort of functions I'd ideally like to generate: http://en.wikipedia.org/wiki/File:Normal_Distribution_PDF.svg Right now I'm using the Irwin-Hall distribution, which is more or less a polynomial approximation of the Gaussian distribution... basically, you use a uniform random number generator, iterate it x times, and take the average. The more iterations, the more like a Gaussian distribution it is. It's pretty nice; however, I'd like to be able to have one where I can vary the mean. For example, let's say I wanted a number between the range 0 and 10, but around 7. That is, the mean (if I repeated this function multiple times) would turn out to be 7, but the actual range is 0-10. Is there one I should look up, or should I work on doing some fancy maths with standard Gaussian distributions?
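
    One hedged alternative to stacking uniforms: a Beta distribution scaled onto the desired range, with its parameters derived from the target mean and a "concentration" knob that plays the role of the standard deviation. This is not the Irwin-Hall construction described above, just one computationally cheap option; the helper name and the concentration value are arbitrary choices.

        import random

        def bounded_bell(low, high, mean, concentration=20.0):
            """Sample from [low, high] with the given mean, roughly bell-shaped.
            Uses a Beta(a, b) scaled onto the interval: its mean is a / (a + b),
            and a larger `concentration` (= a + b) gives a narrower peak."""
            frac = (mean - low) / (high - low)          # target mean mapped to (0, 1)
            a = frac * concentration
            b = (1.0 - frac) * concentration
            return low + (high - low) * random.betavariate(a, b)

        # The sample mean should come out near 7 even though the range stays 0..10.
        samples = [bounded_bell(0, 10, 7) for _ in range(100_000)]
        print(sum(samples) / len(samples))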

    Read the article

  • How do I compute the approximate entropy of a bit string?

    - by dreeves
    Is there a standard way to do this? Googling -- "approximate entropy" bits -- uncovers multiple academic papers but I'd like to just find a chunk of pseudocode defining the approximate entropy for a given bit string of arbitrary length. (In case this is easier said than done and it depends on the application, my application involves 16,320 bits of encrypted data (cyphertext). But encrypted as a puzzle and not meant to be impossible to crack. I thought I'd first check the entropy but couldn't easily find a good definition of such. So it seemed like a question that ought to be on StackOverflow! Ideas for where to begin with de-cyphering 16k random-seeming bits are also welcome...) See also this related question: http://stackoverflow.com/questions/510412/what-is-the-computer-science-definition-of-entropy
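
    As a hedged first look at the cyphertext: the sketch below is the ordinary frequency-based Shannon-entropy estimate over fixed-size blocks, not the formal "approximate entropy" statistic from the literature, and with only about two thousand 8-bit blocks in 16,320 bits the estimate is slightly biased downward.

        import math
        import random
        from collections import Counter

        def shannon_entropy_per_bit(bits, block_size=8):
            """Empirical Shannon entropy of `bits` (a string of '0'/'1'),
            estimated over non-overlapping blocks and normalized to bits per bit.
            Values near 1.0 mean the blocks look uniformly random; lower means structure."""
            blocks = [bits[i:i + block_size]
                      for i in range(0, len(bits) - block_size + 1, block_size)]
            counts = Counter(blocks)
            total = len(blocks)
            entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
            return entropy / block_size

        random_bits = "".join(random.choice("01") for _ in range(16320))
        print(shannon_entropy_per_bit(random_bits))   # close to 1.0
        print(shannon_entropy_per_bit("01" * 8160))   # 0.0: perfectly periodic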

    Read the article

  • Gaining information from nodes of tree

    - by jainp
    I am working with a tree data structure and trying to come up with a way to calculate the information I can gain from the nodes of the tree. I am wondering if there are any existing techniques that can assign higher numerical importance to a node that appears less frequently at a lower level (distance from the root of the tree) than to the same node appearing at a higher level with high frequency. To give an example, I want to give more significance to the node Book appearing once at level 2 than appearing thrice at level 3. Will appreciate any suggestions/pointers to techniques which achieve something similar. Thanks, Prateek

    Read the article

  • Examples of useful or non-trivial dual interfaces

    - by Scott Weinstein
    Recently Erik Meijer and others have shown how IObservable/IObserver is the dual of IEnumerable/IEnumerator. The fact that they are dual means that any operation on one interface is valid on the other, thus providing a theoretical foundation for the Reactive Extensions for .NET. Do other dual interfaces exist? I'm interested in any example, not just .NET-based ones.

    Read the article

  • Sort by an object's type

    - by Richard Levasseur
    Hi all, I have code that statically registers (type, handler_function) pairs at module load time, resulting in a dict like this:

        HANDLERS = {
            str: HandleStr,
            int: HandleInt,
            ParentClass: HandleCustomParent,
            ChildClass: HandleCustomChild,
        }

        def HandleObject(obj):
            for data_type in sorted(HANDLERS.keys(), ???):
                if isinstance(obj, data_type):
                    HANDLERS[data_type](obj)

    where ChildClass inherits from ParentClass. The problem is that, since it's a dict, the order isn't defined - but how do I introspect type objects to figure out a sort key? The resulting order should be child classes followed by their superclasses (most specific types first). E.g. str comes before basestring, and ChildClass comes before ParentClass. If types are unrelated, it doesn't matter where they go relative to each other.
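
    One hedged possibility for the missing sort key (an illustrative sketch, not necessarily the intended answer): order the registered types by the length of their method resolution order, so that subclasses, whose MRO is strictly longer than their base's, come first; unrelated types end up in an arbitrary but harmless relative order.

        # Sketch: most-specific-first ordering via the length of each type's MRO.
        # A subclass's MRO contains its base's MRO plus itself, so it is strictly longer.
        def specificity(data_type):
            return len(data_type.__mro__)

        def handle_object(obj, handlers):
            for data_type in sorted(handlers, key=specificity, reverse=True):
                if isinstance(obj, data_type):
                    handlers[data_type](obj)
                    break          # stop at the most specific match; drop this to run all matches

        # Tiny demonstration with hypothetical handler functions.
        class ParentClass: pass
        class ChildClass(ParentClass): pass

        handlers = {
            object: lambda o: print("object handler"),
            ParentClass: lambda o: print("parent handler"),
            ChildClass: lambda o: print("child handler"),
        }
        handle_object(ChildClass(), handlers)   # prints "child handler"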

    Read the article
