Search Results

Search found 27 results on 2 pages for 'triples'.


  • Creating a triple store query using SQL - how to find all triples that have a common predicate and object

    - by Ankur
    I have a database that acts like a triple store, except that it is just a simple MySQL database. I want to select all triples that have a common predicate and object. (Info about RDF and triples.) I can't seem to work out the SQL. If I had just a single predicate and object to match I would do:

        select TRIPLE from TRIPLES where PREDICATE="predicateName" and OBJECT="objectName"

    But if I have a list (HashMap) of many (predicateName, objectName) pairs I am not sure what I need to do. Please let me know if I need to provide more info; I am not sure that I have made this quite clear, but I am wary of providing too much info and confusing the issue.
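
    One way to read the question, sketched below in Python rather than the asker's Java: fold every (predicateName, objectName) pair into a single WHERE clause of OR-ed pairs and pass the values as bind parameters. The table and column names come from the question; the sample pairs and the DB-API cursor are made up for illustration.

        pairs = {"livesIn": "London", "worksFor": "Acme"}   # hypothetical (predicateName, objectName) pairs

        clause = " or ".join("(PREDICATE = %s and OBJECT = %s)" for _ in pairs)
        sql = "select TRIPLE from TRIPLES where " + clause
        params = [v for item in pairs.items() for v in item]

        print(sql)     # select TRIPLE from TRIPLES where (PREDICATE = %s and OBJECT = %s) or (PREDICATE = %s and OBJECT = %s)
        print(params)  # ['livesIn', 'London', 'worksFor', 'Acme']
        # cursor.execute(sql, params); rows = cursor.fetchall()   # with any MySQL DB-API driver

    MySQL also accepts a row-value list, i.e. where (PREDICATE, OBJECT) in (('p1','o1'), ('p2','o2')), which expresses the same thing without the string building.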

    Read the article

  • Solving Naked Triples in Sudoku

    - by Kave
    Hello, I wish I had paid more attention to the math classes back in Uni. :) How do I implement this math formula for naked triples? Naked Triples: take three cells C = {c1, c2, c3} that share a unit U, and take three numbers N = {n1, n2, n3}. If each cell in C has as its candidates ci ⊆ N, then we can remove all ni ∈ N from the other cells in U. I have a method that takes a unit (e.g. a box, a row or a column) as parameter. The unit contains 9 cells, therefore I need to compare all combinations of 3 cells at a time that form the box, perhaps put them into a stack or collection for further calculation. The next step would be taking these 3-cell combinations one by one and comparing their candidates against 3 numbers. Again, these 3 numbers can be any possible combination from 1 to 9. That's all I need. But how would I do that? How many combinations would I get? Do I get 3 x 9 = 27 combinations for cells and then the same for the numbers (N)? How would you solve this in classic C# loops? No lambda expressions please, I am already confused enough :) Many thanks for any help,
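
    For the counting part, a small sketch (Python here rather than the asker's C#, purely for illustration): there are C(9,3) = 84 three-cell combinations per unit, and taking the union of the three candidate sets makes a separate enumeration of number triples unnecessary. The cell indices and candidate sets below are made up.

        from itertools import combinations

        # Hypothetical unit: cell index -> remaining candidate digits
        unit = {0: {1, 2}, 1: {2, 3}, 2: {1, 3}, 3: {1, 4, 5}, 4: {4, 6},
                5: {5, 6, 7}, 6: {7, 8}, 7: {8, 9}, 8: {2, 9}}

        cells = list(unit)
        print(len(list(combinations(cells, 3))))   # 84 = C(9,3) three-cell combinations

        for triple in combinations(cells, 3):
            numbers = set().union(*(unit[c] for c in triple))
            if len(numbers) == 3:                   # candidates of all three cells fit inside a 3-number set N
                others = [c for c in cells if c not in triple]
                print("naked triple", triple, "allows removing", numbers, "from cells", others)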

    Read the article

  • How to select a subset of results from a select statement

    - by Ankur
    I have a table that stores RDF triples: triples(triple_id, sub_id, pre_id, obj_id). The method (I need to write) will receive an array of numbers which correspond to pre_id values. I want to select all sub_id values that have a corresponding pre_id for all the pre_ids in the array that is passed in. E.g. if I had a single pre_id value passed in... let's call the value passed in preId, I would do:

        select sub_id from triples where pre_id=preId;

    However, since I have multiple pre_id values, I want to keep iterating through the pre_id values and only keep the sub_id values corresponding to the "triples" records that have both. E.g. imagine there are five records:

        triples(1, 34, 65, 23)
        triples(2, 31, 35, 28)
        triples(3, 32, 32, 19)
        triples(4, 12, 65, 28)
        triples(5, 76, 32, 34)

    If I pass in an array of pre_id values [65,32] then I want to select the first, third, fourth and fifth records. What would I do for that?
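
    For reference, a sketch of the usual GROUP BY / HAVING pattern for this kind of query (SQLite is used here only so the snippet runs as-is; the rows are the ones from the question). The HAVING count gives the "all of the passed-in pre_ids" reading, while the plain IN gives the "any of them" reading that matches the first, third, fourth and fifth records.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("create table triples (triple_id integer, sub_id integer, pre_id integer, obj_id integer)")
        con.executemany("insert into triples values (?,?,?,?)",
                        [(1, 34, 65, 23), (2, 31, 35, 28), (3, 32, 32, 19),
                         (4, 12, 65, 28), (5, 76, 32, 34)])

        pre_ids = [65, 32]
        placeholders = ",".join("?" * len(pre_ids))

        # sub_ids whose triples cover every pre_id in the list
        sql_all = ("select sub_id from triples where pre_id in (%s) "
                   "group by sub_id having count(distinct pre_id) = ?" % placeholders)
        print(con.execute(sql_all, pre_ids + [len(pre_ids)]).fetchall())   # [] for these sample rows

        # triples whose pre_id matches any of the values
        sql_any = "select triple_id, sub_id from triples where pre_id in (%s) order by triple_id" % placeholders
        print(con.execute(sql_any, pre_ids).fetchall())   # [(1, 34), (3, 32), (4, 12), (5, 76)]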

    Read the article

  • Reading Ontology with Jena, feeding it with RDF triples, and producing correct RDF string output.

    - by JonB
    Hi, I have an ontology, which I read in with Jena to help me scrape some RDFa triples from a website. I don't currently store these triples in a Jena model, but that is fairly straightforward to do; it's next on my to-do list. The area I am struggling with, though, is getting Jena to output correct RDF for the ontology I have. The ontology uses OWL and RDFS definitions, but when I pass some example triples into the model, they don't appear correctly, almost as if it doesn't know anything about the ontology. The output is, however, still valid RDF, just not coming out in the form I was hoping for. Am I correct in thinking that Jena should be able to produce well-written RDF (not just valid RDF) for the triples I have collected, based on the ontology, or does this stretch beyond what it is capable of? Many thanks for any input.

    Read the article

  • SWI-Prolog: how to load rdf triples using semweb/rdf_db library?

    - by Li Li
    Hi, I have an RDF file (file.trp) in N-Triples format, where each line is: "subject predicate object ." I tried to use rdf_load in semweb/rdf_db to load it into memory, but failed. Here is what I tried:

        ?- rdf_load('file.trp').
        ?- rdf_load('file.trp', [format(triples)]).

    The manual says it supports xml and triples, but it only loads RDF/XML files. How can I load such an RDF triple file? Thanks, Li

    Read the article

  • How can one extract rdf:about or rdf:ID properties from triples using SPARQL?

    - by lennyks
    It seemed a trivial matter at the beginning but so far I have not managed to get the unique identifier for a given resource using SPARQL. What I mean is given, e.g., rdf:Description rdf:about="http://..." and then some properties identifying this resource, what I want to do is to first find this very resource and then retrieve all the triples given some URI. I have tried naïve approaches by writing statements in a WHERE clause such as: ?x rdf:about ?y and ?x rdfs:about ?y I hope I am being precise.
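
    A small illustration of the underlying point (Python/rdflib here, purely as a sketch; the example.org document is invented): rdf:about is not stored as a predicate at all; the URI it carries becomes the subject of every triple about that resource, so in SPARQL you bind the subject variable rather than look for an "about" property.

        from rdflib import Graph, URIRef

        data = """<?xml version="1.0"?>
        <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                 xmlns:ex="http://example.org/">
          <rdf:Description rdf:about="http://example.org/book1">
            <ex:title>A hypothetical book</ex:title>
          </rdf:Description>
        </rdf:RDF>"""

        g = Graph()
        g.parse(data=data, format="xml")

        # Every distinct subject is an rdf:about / rdf:ID identifier.
        for (s,) in g.query("SELECT DISTINCT ?s WHERE { ?s ?p ?o }"):
            print(s)                              # http://example.org/book1

        # Once you have the URI, fetch all of its triples.
        uri = URIRef("http://example.org/book1")
        for p, o in g.predicate_objects(uri):
            print(p, o)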

    Read the article

  • How can one extract rdf:about or rdf:ID properties from triples using SPARQL?

    - by lennyks
    It seemed a trivial matter at the beginning, but so far I have not managed to get the unique identifier for a given resource using SPARQL. What I mean is: given, let's say, rdf:Description rdf:about="http://..." and then some properties identifying this resource, what I want to do is to first find this very resource and then retrieve all the triples given some URI. I have tried naive approaches by writing statements in a WHERE clause such as ?x rdf:about ?y and ?x rdfs:about ?y. I hope I am being precise.

    Read the article

  • How can one extract rdf:about or rdf:ID properties from triples using SPARQL?

    - by lennyks
    It seemed a trivial matter at the beginning, but so far I have not managed to get the unique identifier for a given resource using SPARQL. What I mean is: given rdf:Description rdf:about="http://..." and then some properties identifying this resource, in a perfect world I want to first know how to retrieve an individual triple given some URI. I have tried naive approaches by writing statements in a WHERE clause such as ?x rdf:about ?y and ?x rdfs:about ?y. I hope I am being precise.

    Read the article

  • Handling inheritance with overriding efficiently

    - by Fyodor Soikin
    I have the following two data structures. First, a list of properties applied to object triples:

        Object1  Object2  Object3  Property  Value
        O1       O2       O3       P1        "abc"
        O1       O2       O3       P2        "xyz"
        O1       O3       O4       P1        "123"
        O2       O4       O5       P1        "098"

    Second, an inheritance tree:

        O1
          O2
            O4
          O3
            O5

    Or viewed as a relation:

        Object  Parent
        O2      O1
        O4      O2
        O3      O1
        O5      O3
        O1      null

    The semantics of this being that O2 inherits properties from O1; O4 - from O2 and O1; O3 - from O1; and O5 - from O3 and O1, in that order of precedence.
    NOTE 1: I have an efficient way to select all children or all parents of a given object. This is currently implemented with left and right indexes, but hierarchyid could also work. This does not seem important right now.
    NOTE 2: I have triggers in place that make sure that the "Object" column always contains all possible objects, even when they do not really have to be there (i.e. have no parent or children defined). This makes it possible to use inner joins rather than severely less efficient outer joins.
    The objective is: given a pair of (Property, Value), return all object triples that have that property with that value, either defined explicitly or inherited from a parent.
    NOTE 1: An object triple (X,Y,Z) is considered a "parent" of triple (A,B,C) when it is true that either X = A or X is a parent of A, and the same is true for (Y,B) and (Z,C).
    NOTE 2: A property defined on a closer parent "overrides" the same property defined on a more distant parent.
    NOTE 3: When (A,B,C) has two parents, (X1,Y1,Z1) and (X2,Y2,Z2), then (X1,Y1,Z1) is considered a "closer" parent when:
      (a) X2 is a parent of X1, or
      (b) X2 = X1 and Y2 is a parent of Y1, or
      (c) X2 = X1 and Y2 = Y1 and Z2 is a parent of Z1
    In other words, the "closeness" in ancestry for triples is defined based on the first components of the triples first, then on the second components, then on the third components. This rule establishes an unambiguous partial order for triples in terms of ancestry.
    For example, given the pair (P1, "abc"), the result set of triples will be:

        O1, O2, O3 -- Defined explicitly
        O1, O2, O5 -- Because O5 inherits from O3
        O1, O4, O3 -- Because O4 inherits from O2
        O1, O4, O5 -- Because O4 inherits from O2 and O5 inherits from O3
        O2, O2, O3 -- Because O2 inherits from O1
        O2, O2, O5 -- Because O2 inherits from O1 and O5 inherits from O3
        O2, O4, O3 -- Because O2 inherits from O1 and O4 inherits from O2
        O3, O2, O3 -- Because O3 inherits from O1
        O3, O2, O5 -- Because O3 inherits from O1 and O5 inherits from O3
        O3, O4, O3 -- Because O3 inherits from O1 and O4 inherits from O2
        O3, O4, O5 -- Because O3 inherits from O1 and O4 inherits from O2 and O5 inherits from O3
        O4, O2, O3 -- Because O4 inherits from O1
        O4, O2, O5 -- Because O4 inherits from O1 and O5 inherits from O3
        O4, O4, O3 -- Because O4 inherits from O1 and O4 inherits from O2
        O5, O2, O3 -- Because O5 inherits from O1
        O5, O2, O5 -- Because O5 inherits from O1 and O5 inherits from O3
        O5, O4, O3 -- Because O5 inherits from O1 and O4 inherits from O2
        O5, O4, O5 -- Because O5 inherits from O1 and O4 inherits from O2 and O5 inherits from O3

    Note that the triple (O2, O4, O5) is absent from this list. This is because property P1 is defined explicitly for the triple (O2, O4, O5) and this prevents that triple from inheriting that property from (O1, O2, O3). Note that the triple (O4, O4, O5) is also absent. This is because that triple inherits its value of P1="098" from (O2, O4, O5), because it is a closer parent than (O1, O2, O3).
    The straightforward way to do it is the following.
    First, for every triple that a property is defined on, select all possible child triples:

        select Children1.Id as O1, Children2.Id as O2, Children3.Id as O3, tp.Property, tp.Value
        from TriplesAndProperties tp
        -- Select corresponding objects of the triple
        inner join Objects as Objects1 on Objects1.Id = tp.O1
        inner join Objects as Objects2 on Objects2.Id = tp.O2
        inner join Objects as Objects3 on Objects3.Id = tp.O3
        -- Then add all possible children of all those objects
        inner join Objects as Children1 on Objects1.Id [isparentof] Children1.Id
        inner join Objects as Children2 on Objects2.Id [isparentof] Children2.Id
        inner join Objects as Children3 on Objects3.Id [isparentof] Children3.Id

    But this is not the whole story: if some triple inherits the same property from several parents, this query will yield conflicting results. Therefore, the second step is to select just one of those conflicting results:

        select * from (
            select Children1.Id as O1, Children2.Id as O2, Children3.Id as O3,
                   tp.Property, tp.Value,
                   row_number() over(
                       partition by Children1.Id, Children2.Id, Children3.Id, tp.Property
                       order by Objects1.[depthInTheTree] descending,
                                Objects2.[depthInTheTree] descending,
                                Objects3.[depthInTheTree] descending
                   ) as InheritancePriority
            from ... (see above)
        )
        where InheritancePriority = 1

    The window function row_number() over( ... ) does the following: for every unique combination of object triple and property, it sorts all values by the ancestral distance from the triple to the parents that the value is inherited from, and then I only select the very first of the resulting list of values. A similar effect can be achieved with GROUP BY and ORDER BY statements, but I just find the window function semantically cleaner (the execution plans they yield are identical). The point is, I need to select the closest of the contributing ancestors, and for that I need to group and then sort within the group. And finally, now I can simply filter the result set by Property and Value.
    This scheme works. Very reliably and predictably. It has proven to be very powerful for the business task it implements. The only trouble is, it is awfully slow. One might point out that the join of seven tables might be slowing things down, but that is actually not the bottleneck. According to the actual execution plan I'm getting from SQL Management Studio (as well as SQL Profiler), the bottleneck is the sorting. The problem is, in order to satisfy my window function, the server has to sort by Children1.Id, Children2.Id, Children3.Id, tp.Property, Parents1.[depthInTheTree] descending, Parents2.[depthInTheTree] descending, Parents3.[depthInTheTree] descending, and there can be no indexes it can use, because the values come from a cross join of several tables.
    EDIT: Per Michael Buen's suggestion (thank you, Michael), I have posted the whole puzzle to sqlfiddle here. One can see in the execution plan that the Sort operation accounts for 32% of the whole query, and that is going to grow with the number of total rows, because all the other operations use indexes. Usually in such cases I would use an indexed view, but not in this case, because indexed views cannot contain self-joins, of which there are six. The only way that I can think of so far is to create six copies of the Objects table and then use them for the joins, thus enabling an indexed view. Has the time come that I shall be reduced to that kind of hack? The despair sets in.

    Read the article

  • OWL inferencing question

    - by user439170
    I am using the Jena semantic web framework version 2.6.3. I have code that creates a model with OWL inferencing and then adds the following triples:

        [:bnode-3 rdf:type owl:Restriction]
        [:bnode-3 owl:onProperty :offspringOf]
        [:bnode-3 owl:someValuesFrom :Person]
        [:bnode-3 rdfs:subClassOf :Person]

    bnode-3 is supposed to be a restriction class which, for example, would contain :joe if :bob is a :Person and the following triple were asserted: [:joe :offspringOf :bob]. Then, since the restriction class is a subclass of Person, :joe would also be a person. And, in fact, this works. What's confusing to me is that after I assert just the 4 triples at the top of this post, the inferencer creates a blank node which is a Person. In other words, the following triple is now in the model: [_:b0 rdf:type :Person]. I don't understand why it would do this. Any help in understanding this would be greatly appreciated. Thanks. Kent.

    Read the article

  • SWI-Prolog Semantic Web Library and Python Interface

    - by John Peter Thompson Garcés
    I want to write a Python web application that queries RDF triples using Prolog. I found pyswip for interfacing Python with SWI-Prolog, and I am currently looking into SWI-Prolog's RDF capabilities. I am wondering if anyone has tried this before--and if anyone has: what did your setup look like? How do you get pyswip to work with the SWI-Prolog semantic web library? Or is there another Python-Prolog interface that makes this easier?
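
    One possible wiring, sketched rather than tested: pyswip embeds SWI-Prolog, so library(semweb/rdf_db) can be loaded and queried through ordinary goals ('data.rdf' below is a placeholder path).

        from pyswip import Prolog

        prolog = Prolog()
        list(prolog.query("use_module(library(semweb/rdf_db))"))   # load the semantic web library
        list(prolog.query("rdf_load('data.rdf')"))                 # placeholder RDF file

        # Each solution binds S, P, O to one triple held by rdf_db.
        for sol in prolog.query("rdf(S, P, O)"):
            print(sol["S"], sol["P"], sol["O"])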

    Read the article

  • Database Schema for Machine Tags?

    - by Gabriel
    Machine tags are more precise tags: http://www.flickr.com/groups/api/discuss/72157594497877875. They allow a user to basically tag anything as an object in the format object:property=value. Any tips on an RDBMS schema that implements this? Just wondering if anyone has already dabbled with this. I imagine the schema is quite similar to implementing RDF triples in an RDBMS.
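
    A sketch of one obvious layout (every table and column name below is invented): a machine tag object:property=value on an item becomes a single row, much like a subject-predicate-object triple with the tagged item as the subject.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            create table machine_tags (
                item_id   integer not null,   -- the tagged thing (the "subject")
                namespace text    not null,   -- the "object" part of object:property=value
                predicate text    not null,   -- the "property" part
                value     text    not null,   -- the "value" part
                primary key (item_id, namespace, predicate, value)
            );
            create index idx_tag_lookup on machine_tags (namespace, predicate, value);
        """)
        con.execute("insert into machine_tags values (?,?,?,?)", (42, "geo", "lat", "51.5"))
        print(con.execute("select * from machine_tags where namespace=? and predicate=?",
                          ("geo", "lat")).fetchall())    # [(42, 'geo', 'lat', '51.5')]

    Indexing (namespace, predicate, value) supports the typical lookup of everything tagged with a given machine tag.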

    Read the article

  • How do you prevent a Hudson slave from archiving artifacts?

    - by Wilfred Springer
    It turns out our slaves spend a considerable amount of time moving the archived artifacts back to the master Hudson node. It at least triples the duration of the build. It would be nice if there were a way to prevent it. However, setting the maximum number of builds to keep doesn't have any influence at all. Is there another way to prevent sending the results back to the central Hudson master?

    Read the article

  • How can I reduce the CPU usage of Offline Files?

    - by Diego
    Whenever I have the Offline Files service running, there is a constant 25% CPU usage on svchost.exe (this is a quad core, so that means it's using up one core). This, in turn, triples the power consumption and keeps the machine hot... I do have several GB synchronized (music collection), but they are not changing at all, on either side. Am I misusing this feature? Is there anything I can configure to keep it down when there's nothing to do? Or should I forget about it and synchronize big folders manually?

    Read the article

  • Explaining the difference between OData & RDF by way of analogy

    - by jamiet
    A couple of months back I wrote a blog post entitled Microsoft, OData and RDF where I gave a high level view of the OData protocol and how it compares to RDF. I talked about linked data, triples and such like, which may have been somewhat useful, however jargon-heavy. Earlier today Dr Michael Hausenblas (blog | twitter) offered an analogy which I think is probably more useful, and with Michael's permission I'm re-posting it here:

        Imagine a Web (a Web of Documents, if you wish), which is not based on HTML and hyperlinks, but on MS Word documents. The documents are all available on the Internet, so you can download them and consume the content. But after you’re done with a certain document that talks about a book, how do you learn more about it? For example, reviews about the book or where you can purchase it? Maybe the original document mentions that there is some more related information on another server. So you’d need to go there and look for the related bit of information yourself. You see? That’s what the Web is great at – you just click on a hyperlink and it takes you to the document (or section) you’re interested in. All the legwork is taken care of for you through HTML, URIs and HTTP. Hm, right, but how is this related to OData? Well, OData feels a bit like the above mentioned scenario, just concerning data. Of course you – well actually rather a software program I guess – can consume it (a single source), but that’s it.

        from Oh – it is data on the Web by Michael Hausenblas

    I believe that OData has loads of use cases but it's important to understand its limitations as well, and I think Michael has done a good job of explaining those limitations. @Jamiet

    Read the article

  • Web hosting deciding to pay for hosting or host your own?

    - by pllee
    Is there a guide out there on how to choose when to pay for web hosting vs. hosting your own? Assuming that root access is a must, I would like to compare things like cost, scalability and personal stress. Here is what I could come up with.
    Paying for web hosting:
      Benefits:
        - Much cheaper at a small scale. I assume anything under $50 a month would be cheaper than paying for the bandwidth of hosting.
        - No stress in dealing with power outages, server restarts or internet going down.
        - For the most part, less busy work involved with setting up.
      Negatives:
        - Cost goes way up when higher specs are needed (for example, monthly cost triples with the ability to use 8 GB of RAM that you could buy for $90). This means you have to target a particular RAM usage and monitor it so your instance stays within the threshold.
        - Root access is, for the most part, a premium.
        - You may get tied into a vendor-specific deployment process.
    Hosting your own:
      Positives:
        - 100% control of specs and software. Once you get past paying for the bandwidth, you get much more bang for your buck by building your own machine.
      Negatives:
        - Doesn't make financial sense if bandwidth costs are more than web hosting costs.
        - Having to deal with power outages, server restarts or internet going down.
    I think the best of both worlds would be if there were a place that dealt with bandwidth, power outages and server restarts but you provided your own server. Kind of like a 24-hour day care for a server. Does anything like that exist?

    Read the article

  • Which rdfa parser for java that supports currently used rdfa attributes?

    - by lennyks
    I am building an app in Java using Jena for semantic information scraping. I am looking for an RDFa parser that would allow me to correctly extract all the RDFa statements. Specifically, one that extracts info about the namespaces used and, presuming that the RDFa tags in the page are correct, produces correct triples, ones that distinguish between object and data properties. I went through all the RDFa parsers for Java from the site http://rdfa.info/wiki/Consume. They all struggle to extract any RDFa statements; if they do not crash, the Jena RDFa parser shows plenty of errors and then dies a terrible death, and the data is of little use as it is incorrectly processed and generally mixed up. I am a newbie in this area so please be gentle:) I was also thinking of using a library written in a different language, but then again I don't really know how to plug it into Java code. Any suggestions?

    Read the article

  • Using Apache Velocity with StringBuilders/CharSequences

    - by mindas
    We are using Apache Velocity for dynamic templates. At the moment Velocity has the following methods for evaluation/replacing:

        public static boolean evaluate(Context context, Writer writer, String logTag, Reader reader)
        public static boolean evaluate(Context context, Writer out, String logTag, String instring)

    We use these methods by providing a StringWriter to write evaluation results. Our incoming data comes in StringBuilder format, so we use StringBuilder.toString and feed it as instring. The problem is that our templates are fairly large (they can be megabytes, tens of MB in rare cases), replacements occur very frequently, and each replacement operation triples the amount of required memory (incoming data + StringBuilder.toString(), which creates a new copy, + outgoing data). I was wondering if there is a way to improve this. E.g. if I could find a way to provide a Reader and Writer on top of the same StringBuilder instance that only uses extra memory for in/out differences, would that be a good approach? Has anybody done anything similar and could share any source for such a class? Or are there any better solutions to the given problem?

    Read the article

  • Generating a beveled edge for a 2D polygon

    - by Metaphile
    I'm trying to programmatically generate beveled edges for geometric polygons. For example, given an array of 4 vertices defining a square, I want to generate something like this. But computing the vertices of the inner shape is baffling me. Simply creating a copy of the original shape and then scaling it down will not produce the desired result most of the time. My algorithm so far involves analyzing adjacent edges (triples of vertices; e.g., the bottom-left, top-left, and top-right vertices of a square). From there, I need to find the angle between them, and then create a vertex somewhere along that angle, depending on how deep I want the bevel to be. And because I don't have much of a math background, that's where I'm stuck. How do I find that center angle? Or is there a much simpler way of attacking this problem?
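
    One standard way to get the inner ring, sketched below under the assumption of a convex polygon with counter-clockwise vertices (the function name and the depth parameter are invented): offset every edge inward along its normal by the bevel depth, then intersect each pair of neighbouring offset edges to obtain the inner vertices. This sidesteps computing the corner angle explicitly.

        import math

        def inset_polygon(pts, depth):
            n = len(pts)
            lines = []
            for i in range(n):
                (x1, y1), (x2, y2) = pts[i], pts[(i + 1) % n]
                ex, ey = x2 - x1, y2 - y1
                length = math.hypot(ex, ey)
                nx, ny = -ey / length, ex / length          # left normal = inward for CCW polygons
                lines.append(((x1 + nx * depth, y1 + ny * depth), (ex, ey)))   # (point on offset line, direction)

            def intersect(a, b):
                (px, py), (dx, dy) = a
                (qx, qy), (ex, ey) = b
                denom = dx * ey - dy * ex                   # 2x2 determinant; zero only for parallel edges
                t = ((qx - px) * ey - (qy - py) * ex) / denom
                return (px + t * dx, py + t * dy)

            # Inner vertex i = intersection of the offsets of the incoming and outgoing edges.
            return [intersect(lines[i - 1], lines[i]) for i in range(n)]

        square = [(0, 0), (4, 0), (4, 4), (0, 4)]           # CCW
        print(inset_polygon(square, 1))                     # [(1.0, 1.0), (3.0, 1.0), (3.0, 3.0), (1.0, 3.0)]

    For a deep bevel on a very sharp corner the intersection point can shoot far inward (the usual miter problem), so a cap on the offset distance is worth adding in practice.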

    Read the article

  • What is the target color profile in Image.FromFile?

    - by Jan Zich
    I am curious what the useEmbeddedColorManagement parameter in System.Drawing.Image.FromFile actually does. This parameter directly corresponds to a GDI+ parameter in the same method of the same class, so debugging the .NET source does not lead anywhere. If my understanding of color profiles is correct, a color profile is basically a mapping which describes how particular RGB triples (or CMYK or something else) map into the so-called Profile Connection Space (CIELAB or CIEXYZ). Now, if I open an image with an embedded color profile in .NET, setting useEmbeddedColorManagement to true, my experience is that I get an image whose RGB values are not exactly the same as the original values in the file, i.e. it is transformed. Since the original image was RGB and the new one is also RGB, there must have been a transformation from the embedded color profile to a Profile Connection Space and then back to RGB. The thing which I don't understand is: what is the target color system? Is it some default Windows color profile? Is it the current monitor profile? Is it sRGB?

    Read the article

  • Curve fitting: Find the smoothest function that satisfies a list of constraints.

    - by dreeves
    Consider the set of non-decreasing surjective (onto) functions from (-inf,inf) to [0,1]. (Typical CDFs satisfy this property.) In other words, for any real number x, 0 <= f(x) <= 1. The logistic function is perhaps the most well-known example. We are now given some constraints in the form of a list of x-values and for each x-value, a pair of y-values that the function must lie between. We can represent that as a list of {x,ymin,ymax} triples such as

        constraints = {{0, 0, 0}, {1, 0.00311936, 0.00416369}, {2, 0.0847077, 0.109064},
                       {3, 0.272142, 0.354692}, {4, 0.53198, 0.646113},
                       {5, 0.623413, 0.743102}, {6, 0.744714, 0.905966}}

    Graphically that looks like this: We now seek a curve that respects those constraints. For example: Let's first try a simple interpolation through the midpoints of the constraints:

        mids = ({#1, Mean[{#2,#3}]}&) @@@ constraints
        f = Interpolation[mids, InterpolationOrder->0]

    Plotted, f looks like this: That function is not surjective. Also, we'd like it to be smoother. We can increase the interpolation order but now it violates the constraint that its range is [0,1]: The goal, then, is to find the smoothest function that satisfies the constraints:
      1. Non-decreasing.
      2. Tends to 0 as x approaches negative infinity and tends to 1 as x approaches infinity.
      3. Passes through a given list of y-error-bars.
    The first example I plotted above seems to be a good candidate but I did that with Mathematica's FindFit function assuming a lognormal CDF. That works well in this specific example but in general there need not be a lognormal CDF that satisfies the constraints.
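
    The same special-case idea the question mentions trying with FindFit, sketched in Python (scipy is assumed; the lognormal family is only one candidate and, as the question notes, need not satisfy the constraints in general): fit a lognormal CDF through the midpoints of the error bars and then check it against the bands.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import lognorm

        constraints = [(0, 0, 0), (1, 0.00311936, 0.00416369), (2, 0.0847077, 0.109064),
                       (3, 0.272142, 0.354692), (4, 0.53198, 0.646113),
                       (5, 0.623413, 0.743102), (6, 0.744714, 0.905966)]

        xs   = np.array([c[0] for c in constraints], dtype=float)
        mids = np.array([(c[1] + c[2]) / 2 for c in constraints])

        def lognormal_cdf(x, s, scale):
            return lognorm.cdf(x, s, scale=scale)

        (s, scale), _ = curve_fit(lognormal_cdf, xs, mids, p0=(1.0, 3.0),
                                  bounds=([1e-6, 1e-6], [10.0, 100.0]))
        fitted = lognormal_cdf(xs, s, scale)
        for (x, lo, hi), y in zip(constraints, fitted):
            print(x, lo, "<=", round(float(y), 4), "<=", hi, (lo <= y <= hi))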

    Read the article

  • Curve fitting: Find a CDF (or any function) that satisfies a list of constraints.

    - by dreeves
    I have some constraints on a CDF in the form of a list of x-values and, for each x-value, a pair of y-values that the CDF must lie between. We can represent that as a list of {x,y1,y2} triples such as

        constraints = {{0, 0, 0}, {1, 0.00311936, 0.00416369}, {2, 0.0847077, 0.109064},
                       {3, 0.272142, 0.354692}, {4, 0.53198, 0.646113},
                       {5, 0.623413, 0.743102}, {6, 0.744714, 0.905966}}

    Graphically that looks like this: And since this is a CDF there's an additional implicit constraint of {Infinity, 1, 1}, i.e. the function must never exceed 1. Also, it must be monotone. Now, without making any assumptions about its functional form, we want to find a curve that respects those constraints. For example: (I cheated to get that one: I actually started with a nice log-normal distribution and then generated fake constraints based on it.) One possibility is a straight interpolation through the midpoints of the constraints:

        mids = ({#1, Mean[{#2,#3}]}&) @@@ constraints
        f = Interpolation[mids, InterpolationOrder->0]

    Plotted, f looks like this: That sort of technically satisfies the constraints, but it needs smoothing. We can increase the interpolation order, but now it violates the implicit constraints (always less than one, and monotone): How can I get a curve that looks as much like the first one above as possible? Note that NonLinearModelFit with a LogNormalDistribution will do the trick in this example but is insufficiently general, as sometimes there will not exist a log-normal distribution satisfying the constraints.

    Read the article

  • Can't get RDFlib to work on windows

    - by john
    I have installed RDFlib 3.0 and everything that is needed, but when I run the following code I get an error. The code below is from: http://code.google.com/p/rdflib/wiki/IntroSparql. I have tried for hours to fix this but with no success. Can someone please help?

        import rdflib
        rdflib.plugin.register('sparql', rdflib.query.Processor, 'rdfextras.sparql.processor', 'Processor')
        rdflib.plugin.register('sparql', rdflib.query.Result, 'rdfextras.sparql.query', 'SPARQLQueryResult')
        from rdflib import ConjunctiveGraph
        g = ConjunctiveGraph()
        g.parse("http://bigasterisk.com/foaf.rdf")
        g.parse("http://www.w3.org/People/Berners-Lee/card.rdf")
        from rdflib import Namespace
        FOAF = Namespace("http://xmlns.com/foaf/0.1/")
        g.parse("http://danbri.livejournal.com/data/foaf")
        [g.add((s, FOAF['name'], n)) for s,_,n in g.triples((None, FOAF['member_name'], None))]
        for row in g.query(
                """SELECT ?aname ?bname
                   WHERE { ?a foaf:knows ?b .
                           ?a foaf:name ?aname .
                           ?b foaf:name ?bname . }""",
                initNs=dict(foaf=Namespace("http://xmlns.com/foaf/0.1/"))):
            print "%s knows %s" % row

    The error I get is:

        Traceback (most recent call last):
          File "...", line 18 in <module>
            initNs=dict(foaf=Namespace("http://xmlns.com/foaf/0.1/"))):
        TypeError: query() got an unexpected keyword argument 'initNS'

    Read the article

  • Why does the answer print out twice?

    - by rEgonicS
    I made a program that returns the product a*b*c, where (a,b,c) is a Pythagorean triple adding up to 1000. The program does output the correct answer, but does it twice. I was curious as to why this is so. After playing around with it a bit, I found out that it prints once when a = 200, b = 375, c = 425, and once again when a = 375, b = 200, c = 425. *How do you put your code into a "code" block? It does it automatically for part of the code but as you can see the top portion and bottom line is left out.

        #include <iostream>
        using namespace std;

        bool isPythagTriple(int a, int b, int c);

        int main()
        {
            for(int a = 1; a < 1000; a++)
            {
                for(int b = 1; b < 1000; b++)
                {
                    for(int c = 1; c < 1000; c++)
                    {
                        if( ((a+b+c)==1000) && isPythagTriple(a,b,c) )
                        {
                            cout << a*b*c << " ";
                            break;
                        }
                    }
                }
            }
            return 0;
        }

        bool isPythagTriple(int a, int b, int c)
        {
            if( (a*a)+(b*b)-(c*c) == 0 )
                return true;
            else
                return false;
        }

    Read the article

  • Good functions and techniques for dealing with haskell tuples?

    - by toofarsideways
    I've been doing a lot of work with tuples and lists of tuples recently, and I've been wondering if I'm being sensible. Things feel awkward and clunky, which for me signals that I'm doing something wrong. For example, I've written three convenience functions for getting the first, second and third value in a tuple of 3 values. Is there a better way I'm missing? Are there more general functions that allow you to compose and manipulate tuple data? Here are some things I am trying to do that feel like they should be generalisable.
    Extracting values: Do I need to create a version of fst, snd, etc... for tuples of size two, three, four and five, etc...?

        fst3 (x, _, _) = x
        fst4 (x, _, _, _) = x

    Manipulating values: Can you increment the last value in a list of pairs and then use that same function to increment the last value in a list of triples?
    Zipping and unzipping values: There is a zip and a zip3. Do I also need a zip4? Or is there some way of creating a general zip function?
    Sorry if this seems subjective; I honestly don't know if this is even possible or if I'm wasting my time writing 3 extra functions every time I need a general solution. Thank you for any help you can give!

    Read the article
