Search Results

Search found 6083 results on 244 pages for 'graphical algorithm'.


  • Getting started with character and text processing (encoding, regular expressions)

    - by TK
    I'd like to learn the foundations of encodings, characters and text. Understanding these is important for dealing with large sets of text, whether they are log files or text sources for building collective-intelligence algorithms. My current knowledge is pretty basic: something like "As long as I use UTF-8, I'm okay." I don't say I need to learn about advanced topics right away. But I need to know:

    - Bit- and byte-level knowledge of encodings.
    - Characters and alphabets not used in English.
    - Multi-byte encodings. (I understand some Chinese and Japanese, and parsing them is important.)
    - Regular expressions.
    - Algorithms for text processing.
    - Parsing natural languages.

    I also need an understanding of mathematics and corpus linguistics. The current and future web (semantic, intelligent, real-time) needs processing, parsing and analyzing of large amounts of text. I'm looking for some resources (maybe books?) that get me started on some of these bullets. (I find many helpful discussions on regular expressions here on Stack Overflow, so you don't need to suggest resources on that topic.)
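
    As a taste of the bit-and-byte level asked about in the first bullet, here is a minimal Python sketch (added for illustration, not from the question) showing that "one character" can be one or several bytes depending on the encoding:

        # ASCII characters are a single byte in UTF-8; other scripts are not
        print("A".encode("utf-8"))       # b'A'             -> 1 byte
        print("あ".encode("utf-8"))      # b'\xe3\x81\x82'  -> 3 bytes
        print("あ".encode("utf-16-le"))  # b'B0'            -> 2 bytes (one UTF-16 code unit)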

    Read the article

  • The subset-sum problem and the solvability of NP-complete problems

    - by G.E.M.
    I was reading about the subset-sum problem when I came up with what appears to be a general-purpose algorithm for solving it:

        (defun subset-contains-sum (set sum)
          (let ((subsets) (new-subset) (new-sum))
            (dolist (element set)
              (dolist (subset-sum subsets)
                (setf new-subset (cons element (car subset-sum)))
                (setf new-sum (+ element (cdr subset-sum)))
                (if (= new-sum sum)
                    (return-from subset-contains-sum new-subset))
                (setf subsets (cons (cons new-subset new-sum) subsets)))
              (setf subsets (cons (cons element element) subsets)))))

    "set" is a list not containing duplicates and "sum" is the sum to search subsets for. "subsets" is a list of cons cells where the "car" is a subset list and the "cdr" is the sum of that subset. New subsets are created from old ones in O(1) time by just consing the element onto the front. I am not sure what its runtime complexity is, but it appears that with each element "set" grows by, the size of "subsets" doubles, plus one, so it appears to me to be at least quadratic. I am posting this because my impression before was that NP-complete problems tend to be intractable and that the best one can usually hope for is a heuristic, but this appears to be a general-purpose solution that will, assuming you have the CPU cycles, always give you the correct answer. How many other NP-complete problems can be solved like this one?
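
    For readers who don't speak Lisp, here is a rough Python equivalent of the same enumeration (an editorial sketch, not the poster's code). The comments mark the growth the question is reasoning about: the candidate list roughly doubles with every element processed, which is exponential rather than quadratic:

        def subset_contains_sum(elements, target):
            subsets = []  # list of (subset, running_sum) pairs
            for element in elements:
                # snapshot, since we append while iterating
                for subset, running_sum in list(subsets):
                    new_subset = [element] + subset
                    new_sum = element + running_sum
                    if new_sum == target:
                        return new_subset
                    subsets.append((new_subset, new_sum))
                if element == target:
                    return [element]
                subsets.append(([element], element))
            return None  # len(subsets) ends up around 2**len(elements)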

    Read the article

  • LaTeX limitation?

    - by Jayen
    Hi, I've hit an annoying problem in LaTeX. I've got a .tex file of about 1000 lines. I've already got a few figures, but when I try to add another figure, it barfs with:

        ! Undefined control sequence.
        <argument> ... \sf@size \z@ \selectfont \@currbox
        l.937 \begin{figure}[t]

    If I move the figure to other parts of the file, I can get similar errors on different lines:

        ! Undefined control sequence.
        <argument> ... \sf@size \z@ \selectfont \@currbox
        l.657 \paragraph {A Centering Algorithm}

    If I comment out the figure, all is OK:

        %\begin{figure}[t]
        %  \caption{Example decision tree, from Reiter and Dale [2000]}
        %  \label{fig:relation-decision-tree}
        %  \centering
        %  \includegraphics[keepaspectratio=true]{./relation-decision-tree.eps}
        %\end{figure}

    If I keep just the begin and end, like:

        \begin{figure}%[t]
        %  \caption{Example decision tree, from Reiter and Dale [2000]}
        %  \label{fig:relation-decision-tree}
        %  \centering
        %  \includegraphics[keepaspectratio=true]{./relation-decision-tree.eps}
        \end{figure}

    I get:

        ! Undefined control sequence.
        <argument> ... \sf@size \z@ \selectfont \@currbox
        l.942 \end {figure}

    At first I thought maybe LaTeX had hit some limit, and I tried playing with the ulimits, but that didn't help. Any ideas? I've got other figures with graphics already. My preamble looks like:

        \documentclass[acmcsur,acmnow]{acmtrans2n}
        \usepackage{array}
        \usepackage{lastpage}
        \usepackage{pict2e}
        \usepackage{amsmath}
        \usepackage{varioref}
        \usepackage{epsfig}
        \usepackage{graphics}
        \usepackage{qtree}
        \usepackage{rotating}
        \usepackage{tree-dvips}
        \usepackage{mdwlist}
        \makecompactlist{quote*}{quote}
        \usepackage{verbatim}
        \usepackage{ulem}

    Read the article

  • Beginner video capture and processing/Camera selection

    - by mattbauch
    I'll soon be undertaking a research project in real-time event recognition but have no experience with the programming aspect of video capture (I'm an upperclassman undergraduate in computer engineering). I want to start off on the right foot, so advice from anyone with experience would be great. The ultimate goal is to track events such as a person standing up/sitting down, entering/leaving a room, possibly even shrugging/slumping in posture, etc. from a security-camera-like vantage point.

    First of all, which cameras/companies would you recommend? I'm looking to spend ~$100, more if necessary but not much. Great resolution isn't a must, but is desirable if affordable. What about IP network cameras vs. a USB-type webcam? Webcams are less expensive, but IP cameras seem like they'd be much less work to deal with in software. What features should I look for in the camera?

    Once I've selected a camera, what does converting its output to a series of RGB bitmaps entail? I've never dealt with video encoding/decoding, so a starting point or a tutorial that will guide me up to this point would be great if anyone has suggestions.

    Finally, what is the best (least complicated/most efficient) way to display video from the camera plus my own superimposed images (boxes around events in progress, for instance) in a GUI application? I can work on any operating system in any language. I have some experience with Win32 GUIs and Java GUIs. The focus of the project is on the algorithm, so I'm trying to get the video capture/display portion of the app done cleanly and quickly. Thanks for any responses!

    Read the article

  • Extra fulltext ordering criteria beyond default relevance

    - by Jeremy Warne
    I'm implementing an ingredient text search, for adding ingredients to a recipe. I currently have a full-text index on the ingredient name, which is stored in a single text field, like so:

        "Sauce, tomato, lite, Heinz"

    I've found that because there are a lot of ingredients with very similar names in the database, simply sorting by relevance doesn't work that well a lot of the time. So I've found myself sorting by a bunch of my own rules of thumb, which probably duplicate a lot of the full-text search algorithm that spits out a numerical relevance. For instance (abridged):

        ORDER BY
          [ingredient name is exactly search term],
          [ingredient name starts with search term],
          [ingredient name starts with any word from the search and contains all search terms in some order],
          [ingredient name contains all search terms in some order],
          ...and so on.

    Each of these is defined in the SELECT specification as an expression returning either 1 or 0, and I order by those in sequential order. I would love to hear suggestions for:

    - A better way to define complicated ORDER BY criteria in one place, say perhaps in a view or stored procedure that you can pass just the search term to and get back a set of results without having to worry about how they're ordered?
    - A better tool for this than MySQL's fulltext engine -- perhaps if I were using Sphinx or something (which I've heard of but not used before), would I find some sort of complicated config option designed to solve problems like this?
    - Some Google search terms which might turn up discussion on how to order text items within a specific domain like this? I haven't found much that's of use.

    Thanks for reading!
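
    The tiered ordering itself is easy to picture outside SQL. A hedged Python sketch (an editorial illustration; the criteria are simplified stand-ins) that sorts with a tuple of booleans, most specific criterion first -- which is exactly what the chain of 1/0 expressions in the ORDER BY does:

        def sort_key(name, term):
            n, t = name.lower(), term.lower()
            # False sorts before True, so negate each "matches" test
            return (
                n != t,               # exact match first
                not n.startswith(t),  # then prefix matches
                t not in n,           # then substring matches
            )

        term = "tomato"
        results = ["Sauce, tomato, lite, Heinz", "Tomato", "Tomato paste"]
        results.sort(key=lambda name: sort_key(name, term))
        # -> ['Tomato', 'Tomato paste', 'Sauce, tomato, lite, Heinz']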

    Read the article

  • Why can a struct defined inside a function not be used as functor to std::for_each?

    - by Oswald
    The following code won't compile. The compiler complains about "no matching function for call to for_each". Why is this so?

        #include <map>
        #include <algorithm>

        struct Element {
            void flip() {}
        };

        void flip_all(std::map<Element*, Element*> input)
        {
            struct FlipFunctor
            {
                void operator() (std::pair<Element* const, Element*>& item)
                {
                    item.second->flip();
                }
            };

            std::for_each(input.begin(), input.end(), FlipFunctor());
        }

    When I move struct FlipFunctor before the function flip_all, the code compiles. Full error message:

        no matching function for call to ‘for_each(std::_Rb_tree_iterator<std::pair<Element* const, Element*> >, std::_Rb_tree_iterator<std::pair<Element* const, Element*> >, flip_all(std::map<Element*, Element*, std::less<Element*>, std::allocator<std::pair<Element* const, Element*> > >)::FlipFunctor)’

    Read the article

  • Multiple sendto() using UDP socket

    - by ereOn
    Hi, I have a piece of network software which uses UDP to communicate with other instances of the same program. For various reasons, I must use UDP here. I recently had problems sending huge amounts of data over UDP and had to implement a fragmentation system to split my messages into small data chunks. So far it worked well, but I now encounter an issue when I have to send a lot of data chunks. I have the following algorithm:

    - Split the message into small data chunks (around 1500 bytes).
    - Iterate over the data chunk list and send each chunk using sendto().

    However, when I send a lot of data chunks, the receiver only gets the first 6 messages. Sometimes it misses the sixth and receives the seventh; it depends. Anyway, sendto() always indicates success. This always happens when I test my software over a loopback interface (127.0.0.1) but never over my LAN network. If I add something like std::cout << "test" << std::endl; between the sendto() calls, then every frame is received. I am aware that UDP allows packet loss and that my frames might be lost for a lot of reasons, and I suppose it has to do with the rate at which I am sending the data chunks. What would be the right approach here?

    - Implementing some acknowledgement mechanism (just like TCP) seems overkill.
    - Adding some arbitrary waiting time between the sendto() calls is ugly and will probably decrease performance.
    - Increasing (if possible) the receiver's internal UDP buffer? I don't even know if this is possible.
    - Something else?

    I really need your advice here. Thanks very much.
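
    On the last point: a receive buffer can indeed be enlarged on most platforms via the SO_RCVBUF socket option. A minimal Python sketch of the described send loop plus that option (an editorial illustration; OS limits such as net.core.rmem_max on Linux still cap what you actually get):

        import socket

        CHUNK = 1500  # roughly one Ethernet MTU per datagram, as in the question

        def send_in_chunks(data, addr):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            # back-to-back datagrams can overrun the receiver's buffer
            # (even on loopback), so unpaced sends may be silently dropped
            for off in range(0, len(data), CHUNK):
                sock.sendto(data[off:off + CHUNK], addr)

        def make_receiver(port):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            # ask the OS for a larger receive buffer before binding
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
            sock.bind(("127.0.0.1", port))
            return sock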

    Read the article

  • Recurrence relation solution

    - by Travis
    I'm revising past midterms for a final exam this week and am trying to make sense of a solution my professor posted for one of the past exams. (You can see the original pdf here, question #6.) I'm given the recurrence relation T(n) = 3T(n/2) + n and am told T(1) = 1. I'm pretty sure the solution I've been given is wrong in a few places. The solution is as follows:

        Let n = 2^m
        T(2^m) = 3T(2^(m-1)) + 2^m
        3T(2^(m-1)) = 3^2 * T(2^(m-2)) + 2^(m-1) * 3
        ...
        3^(m-1) T(2) = T(1) + 2 * 3^(m-1)

    I'm pretty sure this last line is incorrect, and they forgot to multiply T(1) by 3^m. He then (tries to) sum the expressions:

        T(2^m) = 1 + (2^m + 2^(m-1)*3 + ... + 2*3^(m-1))
               = 1 + 2^m * (1 + (3/2)^1 + (3/2)^2 + ... + (3/2)^(m-1))
               = 1 + 2^m * ((3/2)^m - 1) * (1/2)
               = 1 + 3^m - 2^(m-1)
               = 1 + n^(log 3) - n/2

    Thus the algorithm is big-Theta of (n^(log 3)). I'm pretty sure that he also got the summation wrong here. By my calculations it should be as follows:

        T(2^m) = 2^m + 3 * 2^(m-1) + 3^2 * 2^(m-2) + ... + 3^m
                 (3^m because 3^m * T(1) = 3^m should be added, not 1)
               = 2^m * ((3/2)^0 + (3/2)^1 + ... + (3/2)^m)
               = 2^m * sum of (3/2)^i from i=0 to m
               = 2^m * ((3/2)^(m+1) - 1) / (3/2 - 1)
               = 2^m * ((3/2)^(m+1) - 1) / (1/2)
               = 2^(m+1) * 3^(m+1)/2^(m+1) - 2^(m+1)
               = 3^(m+1) - 2 * 2^m

        Replacing n = 2^m, and from that m = log n:
        T(n) = 3 * 3^(log n) - 2n

    n is O(3^(log n)), thus the runtime is big-Theta of (3^(log n)). Does this seem right? Thanks for your help!
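
    The closed form is easy to sanity-check numerically. A small Python snippet (an editorial addition) comparing the direct recurrence against the formula derived above:

        def T(n):
            # direct evaluation of T(n) = 3T(n/2) + n with T(1) = 1,
            # for n a power of two
            return 1 if n == 1 else 3 * T(n // 2) + n

        # claimed closed form: T(2^m) = 3^(m+1) - 2^(m+1)
        for m in range(15):
            assert T(2 ** m) == 3 ** (m + 1) - 2 ** (m + 1)
        print("closed form matches the recurrence")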

    Read the article

  • Why is this C++ code not doing its job?

    - by hamza
    I want to create a program that writes all the primes to a file (I know that it's a popular algorithm, but I'm trying to make it by myself), but it keeps showing all the numbers instead of just the primes, and I don't know why. Can someone please tell me why?

        #include <iostream>
        #include <stdlib.h>
        #include <stdio.h>

        void afficher_sur_un_ficher (FILE* ficher , int nb_bit );

        int main()
        {
            FILE* p_fich ;
            char tab[4096] , mask ;
            int nb_bit = 0 , x ;
            for (int i = 0 ; i < 4096 ; i++ )
            {
                tab[i] = 0xff ;
            }
            for (int i = 0 ; i < 4096 ; i++ )
            {
                mask = 0x01 ;
                for (int j = 0 ; j < 8 ; j++)
                {
                    if ((tab[i] & mask) != 0 )
                    {
                        x = nb_bit ;
                        while (( x > 1 )&&(x < 16384))
                        {
                            tab[i] = tab[i] ^ mask ;
                            x = x * 2 ;
                        }
                        afficher_sur_un_ficher (p_fich , nb_bit ) ;
                    }
                    mask = mask<<1 ;
                    nb_bit++ ;
                }
            }
            system ("PAUSE");
            return 0 ;
        }

        void afficher_sur_un_ficher (FILE* ficher , int nb_bit )
        {
            ficher = fopen("res.txt","a+");
            fprintf (ficher ,"%d \t" , nb_bit);
            int static z ;
            z = z+1 ;
            if ( z%10 == 0)
                fprintf (ficher , "\n");
            fclose(ficher);
        }
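
    For reference, the crossing-off step this code seems to be aiming for is the Sieve of Eratosthenes, where each prime's multiples are cleared across the whole table (rather than the same bit being toggled repeatedly). A compact Python version of the intended algorithm (an editorial sketch, not a fix of the C++ above):

        def primes_up_to(limit):
            is_prime = [True] * (limit + 1)
            is_prime[0] = is_prime[1] = False
            for n in range(2, int(limit ** 0.5) + 1):
                if is_prime[n]:
                    # clear every multiple of n, starting at n*n
                    for multiple in range(n * n, limit + 1, n):
                        is_prime[multiple] = False
            return [n for n, prime in enumerate(is_prime) if prime]

        print(primes_up_to(50))  # [2, 3, 5, 7, 11, ..., 47]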

    Read the article

  • Source of parsers for programming languages?

    - by Arkaaito
    I'm dusting off an old project of mine which calculates a number of simple metrics about large software projects. One of the metrics is the length of files/classes/methods. Currently my code "guesses" where class/method boundaries are based on a very crude algorithm (traverse the file, maintaining a "current depth" and adjusting it whenever you encounter unquoted brackets; when you return to the level a class or method began on, consider it exited). However, there are many problems with this procedure, and a "simple" way of detecting when your depth has changed is not always effective.

    To make this give accurate results, I need to use the canonical way (in each language) of detecting function definitions, class definitions and depth changes. This amounts to writing a simple parser to generate parse trees containing at least these elements for every language I want my project to be applicable to. Obviously parsers have been written for all these languages before, so it seems like I shouldn't have to duplicate that effort (even though writing parsers is fun).

    Is there some open-source project which collects ready-to-use parser libraries for a bunch of source languages? Or should I just be using ANTLR to make my own from scratch? (Note: I'd be delighted to port the project to another language to make use of a great existing resource, so if you know of one, it doesn't matter what language it's written in.)

    Read the article

  • For "draggable" div tags that are NOT nested: JQuery/JavaScript div tag “containment” approach/algor

    - by Pete Alvin
    Background: I've created an online circuit design application where .draggable() div tags are containers that contain smaller div containers, and so forth.

    Question: For any particular div tag I need to quickly identify whether it contains other div tags (which may in turn contain other div tags). Since the div tags are draggable, in the DOM they are NOT nested inside each other; I think they are absolutely positioned. So I think that a "hit testing" approach is the only way to determine containment, unless there is some "secret" built-in routine somewhere that could help with this. I've searched jQuery and I don't see any built-in routine for this.

    Does anyone know of an algorithm that's quicker than O(n^2)? It seems like I have to walk the list of div tags in an outer loop (n) and have an inner loop (another n) to compare against all other div tags and do a "containment test" (position, width, height), building a list of contained div tags. That's n-squared. Then I have to build a list of all nested div tags by concatenating contained lists. So the total would be O(n^2) + n. There must be a better way?
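
    For concreteness, the pairwise "containment test" described above comes down to a rectangle predicate plus a double loop. A hedged Python sketch of that O(n^2) baseline (an editorial illustration; the x/y/w/h field names are assumptions):

        def contains(outer, inner):
            # outer fully encloses inner, using (x, y, w, h) rectangles
            return (outer["x"] <= inner["x"]
                    and outer["y"] <= inner["y"]
                    and outer["x"] + outer["w"] >= inner["x"] + inner["w"]
                    and outer["y"] + outer["h"] >= inner["y"] + inner["h"])

        def containment_lists(divs):
            # the O(n^2) pass: every div tested against every other
            return {
                i: [j for j, inner in enumerate(divs)
                    if i != j and contains(outer, inner)]
                for i, outer in enumerate(divs)
            }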

    Read the article

  • 48-bit bitwise operations in Javascript?

    - by randomhelp
    I've been given the task of porting Java's java.util.Random to JavaScript, and I've run across a huge performance hit/inaccuracy using bitwise operators in JavaScript on sufficiently large numbers. Some cursory research states that "bitwise operators in JavaScript are inherently slow," because internally it appears that JavaScript will cast all of its double values into signed 32-bit integers to do the bitwise operations (see https://developer.mozilla.org/En/Core_JavaScript_1.5_Reference/Operators/Bitwise_Operators for more on this). Because of this, I can't do a direct port of the Java random number generator, and I need to get the same numeric results as java.util.Random. Writing something like

        this.next = function(bits) {
            if (!bits) {
                bits = 48;
            }
            this.seed = (this.seed * 25214903917 + 11) & ((1 << 48) - 1);
            return this.seed >>> (48 - bits);
        };

    (which is an almost-direct port of the java.util.Random code) won't work properly, since JavaScript can't do bitwise operations on an integer that size. I've figured out that I can just make a seedable random number generator in 32-bit space using the Lehmer algorithm, but the trick is that I need to get the same values as I would with java.util.Random. What should I do to make a faster, functional port?
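
    The semantics the port has to reproduce are just a 48-bit linear congruential generator. Here it is in Python (an editorial illustration -- Python's arbitrary-precision integers make the 48-bit masking exact, which is precisely what 32-bit JavaScript bitwise operations cannot do; the constants are the ones quoted in the question):

        MASK_48 = (1 << 48) - 1
        MULTIPLIER, ADDEND = 25214903917, 11

        def next_bits(seed, bits=32):
            # one LCG step: new_seed = (seed * m + a) mod 2^48,
            # then return the top `bits` bits of the new seed
            seed = (seed * MULTIPLIER + ADDEND) & MASK_48
            return seed, seed >> (48 - bits)

    A JavaScript port typically splits the 48-bit state into two 24-bit halves so that every intermediate product stays exactly representable as a double.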

    Read the article

  • Editing 8bpp indexed Bitmaps

    - by Pedro Sá
    Hi, I'm trying to edit the pixels of an 8bpp bitmap. Since this PixelFormat is indexed, I'm aware that it uses a color table to map the pixel values. Even though I can edit the bitmap by converting it to 24bpp, 8bpp editing is much faster (13ms vs 3ms). But changing each value when accessing the 8bpp bitmap results in some random RGB colors, even though the PixelFormat remains 8bpp. I'm currently developing in C# and the algorithm is as follows:

    (C#)
    1. Load the original bitmap at 8bpp.
    2. Create an empty temp bitmap with 8bpp, the same size as the original.
    3. LockBits on both bitmaps and, using P/Invoke, call a C++ method where I pass the Scan0 of each BitmapData object. (I used a C++ method as it offers better performance when iterating through the bitmap's pixels.)

    (C++)
    4. Create an int[256] palette according to some parameters and edit the temp bitmap's bytes by passing the original's pixel values through the palette.

    (C#)
    5. UnlockBits.

    My question is how I can edit the pixel values without getting the strange RGB colors, or even better, edit the 8bpp bitmap's color table. Regards, Pedro
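
    The key property of indexed formats is that each stored byte is an index into the palette, not a color. A small Pillow (Python) sketch of the same idea (an editorial illustration in a different stack, not the C#/GDI+ API):

        from PIL import Image

        img = Image.new("P", (16, 16), 0)  # "P" = 8bpp palette mode
        palette = [0, 0, 0] * 256          # flat [R, G, B, R, G, B, ...] table
        palette[3:6] = [255, 0, 0]         # palette entry 1 -> red
        img.putpalette(palette)

        img.putpixel((0, 0), 1)            # writes index 1, not an RGB triple
        # remapping pixels means changing indices or rewriting the palette;
        # writing raw bytes that assume a different palette gives "random" colors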

    Read the article

  • How to determine the (natural) language of a document?

    - by Robert Petermeier
    I have a set of documents in two languages: English and German. There is no usable meta information about these documents; a program can look at the content only. Based on that, the program has to decide which of the two languages the document is written in.

    Is there any "standard" algorithm for this problem that can be implemented in a few hours' time? Or alternatively, a free .NET library or toolkit that can do this? I know about LingPipe, but it is (a) Java, and (b) not free for "semi-commercial" usage.

    This problem seems to be surprisingly hard. I checked out the Google AJAX Language API (which I found by searching this site first), but it was ridiculously bad. For six web pages in German to which I pointed it, only one guess was correct. The other guesses were Swedish, English, Danish and French...

    A simple approach I came up with is to use a list of stop words. My app already uses such a list for German documents in order to analyze them with Lucene.Net. If my app scans the documents for occurrences of stop words from either language, the one with more occurrences would win. A very naive approach, to be sure, but it might be good enough. Unfortunately I don't have the time to become an expert at natural-language processing, although it is an intriguing topic.
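
    The stop-word idea from the last paragraph fits in a dozen lines. A minimal Python sketch (an editorial illustration; the word sets are abbreviated stand-ins for real stop-word lists such as Lucene's):

        GERMAN_STOPWORDS = {"der", "die", "das", "und", "ist", "nicht", "ein", "ich"}
        ENGLISH_STOPWORDS = {"the", "and", "is", "not", "a", "of", "to", "it"}

        def guess_language(text):
            # whichever language contributes more stop-word hits wins
            words = text.lower().split()
            german_hits = sum(w in GERMAN_STOPWORDS for w in words)
            english_hits = sum(w in ENGLISH_STOPWORDS for w in words)
            return "de" if german_hits > english_hits else "en"

        print(guess_language("Das ist ein Test und der Test ist nicht schwer"))  # de

    With full stop-word lists this tends to work well on documents longer than a sentence or two; character n-gram models are the usual next step when it doesn't.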

    Read the article

  • Diffie-Hellman in Silverlight

    - by cmaduro
    I am trying to devise a security scheme for encrypting the application-level data between a Silverlight client and a PHP web service that I created. Since I am dealing with a public website, the information I am pulling from the service is public, but the information I'm submitting to the web service is not. There is also a back end to the website for administration, so naturally all application data being pushed and pulled between the web service and the Silverlight administration back end must also be encrypted.

    Silverlight does not support asymmetric encryption, which would work for the public website. Symmetric encryption would only work on the back end, because users do not log in to the public website, so no password-based keys could be derived. Still, symmetric encryption would be great, but I cannot securely save the private key in the Silverlight client, because it would either have to be hardcoded or read from some kind of config file. None of that is considered secure.

    So... plan B. My final alternative would be to implement the Diffie-Hellman algorithm, which supports symmetric encryption by means of key agreement. However, Diffie-Hellman is vulnerable to man-in-the-middle attacks. In other words, there is no guarantee that either side is sure of the other's identity, making it possible for communication to be intercepted and altered without the receiving party knowing about it. It is thus recommended to use a private shared key to encrypt the key-agreement handshake, so that the identity of either party is confirmed. This brings me back to my initial problem that resulted in me needing to use Diffie-Hellman: how can I use a private key in a Silverlight client without hardcoding it either in the code or in an XML file? I'm all out of love on this one... is there any answer to this?

    Read the article

  • Efficient Context-Free Grammar parser, preferably Python-friendly

    - by Max Shawabkeh
    I need to parse a small subset of English for one of my projects, described as a context-free grammar with (1-level) feature structures (example), and I need to do it efficiently. Right now I'm using NLTK's parser, which produces the right output but is very slow. For my grammar of ~450 fairly ambiguous non-lexicon rules and half a million lexical entries, parsing simple sentences can take anywhere from 2 to 30 seconds, depending, it seems, on the number of resulting trees. Lexical entries have little to no effect on performance. Another problem is that loading the (25MB) grammar+lexicon at the beginning can take up to a minute.

    From what I can find in the literature, the running time of the algorithms used to parse such a grammar (Earley or CKY) should be linear in the size of the grammar and cubic in the size of the input token list. My experience with NLTK indicates that ambiguity is what hurts performance most, not the absolute size of the grammar.

    So now I'm looking for a CFG parser to replace NLTK's. I've been considering PLY, but I can't tell whether it supports feature structures in CFGs, which are required in my case, and the examples I've seen seem to be doing a lot of procedural parsing rather than just specifying a grammar. Can anybody show me an example of PLY both supporting feature structs and using a declarative grammar? I'm also fine with any other parser that can do what I need efficiently. A Python interface is preferable but not absolutely necessary.

    Read the article

  • Automatically Adjusting Range Table

    - by Bradford
    I have a table with a start date range, an end date range, and a few other additional columns. On input of a new record, I want to automatically adjust any overlapping date ranges (shrinking them to make room for the new input). I also want to ensure that no overlapping records can accidentally be inserted into this table. I'm using Oracle and Java for my application code.

    How should I enforce the prevention of overlapping date ranges and also allow for automatically adjusting overlapping ranges? Should I create an AFTER INSERT trigger, with a dbms_lock to serialize access, to prevent the overlapping data, and then apply the logic to auto-adjust everything in Java? Or should that part be in a PL/SQL stored procedure call? This is something we need for a couple of other tables, so it'd be nice to abstract. If anyone has something like this already written, please share :) I did find this reference: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:474221407101

    Here's an example of how each of the 4 overlapping cases should be handled for adjustment on insert:

        = Example 1 =
        In DB (Start, End, Value):  (0, 10, 'X')  **(30, 100, 'Z')  (200, 500, 'Y')
        Input (20, 50, 'A') gives:  (0, 10, 'X')  **(20, 50, 'A')  **(51, 100, 'Z')  (200, 500, 'Y')

        = Example 2 =
        In DB (Start, End, Value):  (0, 10, 'X')  **(30, 100, 'Z')  (200, 500, 'Y')
        Input (40, 80, 'A') gives:  (0, 10, 'X')  **(30, 39, 'Z')  **(40, 80, 'A')  **(81, 100, 'Z')  (200, 500, 'Y')

        = Example 3 =
        In DB (Start, End, Value):  (0, 10, 'X')  **(30, 100, 'Z')  (200, 500, 'Y')
        Input (50, 120, 'A') gives: (0, 10, 'X')  **(30, 49, 'Z')  **(50, 120, 'A')  (200, 500, 'Y')

        = Example 4 =
        In DB (Start, End, Value):  (0, 10, 'X')  **(30, 100, 'Z')  (200, 500, 'Y')
        Input (20, 120, 'A') gives: (0, 10, 'X')  **(20, 120, 'A')  (200, 500, 'Y')

    The algorithm is as follows (given range = g, input range = i, output range set = o):

        if i.start <= g.start
            if i.end >= g.end
                o_1 = i
            else
                o_1 = i
                o_2 = (i.end + 1, g.end)
        else
            if i.end >= g.end
                o_1 = (g.start, i.start - 1)
                o_2 = i
            else
                o_1 = (g.start, i.start - 1)
                o_2 = i
                o_3 = (i.end + 1, g.end)
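
    The per-row adjustment logic is compact enough to prototype outside the database. A Python sketch of the algorithm above (an editorial illustration; the trigger/locking question is a separate matter), checked against the question's examples:

        def split_around(g, i):
            """Return the surviving pieces of stored range g after input range i."""
            g_start, g_end, g_val = g
            i_start, i_end, _ = i
            pieces = []
            if i_start > g_start:
                pieces.append((g_start, i_start - 1, g_val))  # left remainder
            if i_end < g_end:
                pieces.append((i_end + 1, g_end, g_val))      # right remainder
            return pieces

        def insert_range(rows, new):
            out = []
            for start, end, val in rows:
                if end < new[0] or start > new[1]:
                    out.append((start, end, val))  # disjoint: keep as-is
                else:
                    out.extend(split_around((start, end, val), new))
            out.append(new)
            return sorted(out)

        # Example 2 from the question:
        db = [(0, 10, 'X'), (30, 100, 'Z'), (200, 500, 'Y')]
        print(insert_range(db, (40, 80, 'A')))
        # [(0, 10, 'X'), (30, 39, 'Z'), (40, 80, 'A'), (81, 100, 'Z'), (200, 500, 'Y')]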

    Read the article

  • Soft Paint Bucket Fill: Colour Equality

    - by Bart van Heukelom
    I'm making a small app where children can fill preset illustrations with colours. I've successfully implemented an MS Paint-style paint bucket using the flood fill algorithm. However, near the edges of image elements, pixels are left unfilled because the lines are anti-aliased. This is because the current condition on whether to fill is colourAtCurrentPixel == colourToReplace (the colours are RGB uints). I'd like to add a smoothing/threshold option like in Photoshop and other sophisticated tools, but what's the algorithm to determine the equality/distance between two colours?

        if (match(pixel(x,y), colourToReplace))
            setpixel(x,y,colourToReplaceWith)

    How do I fill in match()? Here's my current full code:

        var b:BitmapData = settings.background;
        b.lock();
        var from:uint = b.getPixel(x,y);
        var q:Array = [];
        var xx:int;
        var yy:int;
        var w:int = b.width;
        var h:int = b.height;
        q.push(y*w + x);
        while (q.length != 0) {
            var xy:int = q.shift();
            xx = xy % w;
            yy = (xy - xx) / w;
            if (b.getPixel(xx,yy) == from) {
                b.setPixel(xx,yy,to);
                if (xx != 0) q.push(xy-1);
                if (xx != w-1) q.push(xy+1);
                if (yy != 0) q.push(xy-w);
                if (yy != h-1) q.push(xy+w);
            }
        }
        b.unlock(null);
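
    One common way to fill in match() is a per-channel distance with a tolerance, e.g. squared Euclidean distance in RGB space. A hedged Python sketch of the predicate (an editorial illustration; perceptual colour spaces like Lab are the fancier option):

        def match(c1, c2, tolerance):
            # unpack 0xRRGGBB uints into channels
            r1, g1, b1 = (c1 >> 16) & 0xFF, (c1 >> 8) & 0xFF, c1 & 0xFF
            r2, g2, b2 = (c2 >> 16) & 0xFF, (c2 >> 8) & 0xFF, c2 & 0xFF
            # squared Euclidean distance, compared against squared tolerance
            return (r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2 <= tolerance ** 2

        print(match(0xFF0000, 0xF00000, 32))  # True: near-red within tolerance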

    Read the article

  • How to create Fibonacci Sequence in Java

    - by rfkrocktk
    I really suck at math. I mean, I REALLY suck at math. I'm trying to make a simple Fibonacci sequence class for an algorithm I'll be using. I have seen the Python example, which looks something like this:

        a = 0
        b = 1
        while b < 10:
            print b
            a, b = b, b + a

    The problem is that I can't really make this work in any other language. I'd like to make it work in Java, since I can pretty much translate it into the other languages I use from there. This is the general thought:

        public class FibonacciAlgorithm {

            private Integer a = 0;
            private Integer b = 1;

            public FibonacciAlgorithm() {
            }

            public Integer increment() {
                a = b;
                b = a + b;  // note: a has already been overwritten with b here, so this
                            // computes 2*b -- unlike Python's simultaneous a, b = b, b+a
                return b;
            }

            public Integer getValue() {
                return b;
            }
        }

    All that I end up with is doubling, which I could do with multiplication :( Can anyone help me out? Math pwns me.

    Read the article

  • Linq Query to Update Nested Array Items?

    - by Brett
    I have an object structure generated from xsd.exe. Roughly, it consists of three nested arrays: protocols, sources and reports. The XML looks like this:

        <protocols>
          <protocol>
            <source>
              <report />
              <report />
            </source>
            <source>
              <report />
              <report />
            </source>
          </protocol>
          <!-- more protocols -->
        </protocols>

    I need to update a single Report within the data structure. A brute-force algorithm is shown below. I know that this could be done using XDocument and LINQ, but I'd prefer to update the data structure and then serialize it back to disk. Thoughts? Brett

        bool updated = false;
        foreach (ProtocolsProtocol protocol in protocols.Protocol)
        {
            if (updated) break;
            foreach (ProtocolsProtocolSource source in protocol.Source)
            {
                if (updated) break;
                for (int i = 0; i < source.Report.Length; i++)
                {
                    ProtocolsProtocolSourceReport currentReport = source.Report[i];
                    if (currentReport.Id == report.Id)
                    {
                        currentReport.Attribute1 = report.Attribute1;
                        currentReport.Attribute2 = report.Attribute2;
                        updated = true;
                        break;
                    }
                }
            }
        }
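
    The "flatten three levels and stop at the first hit" idea is easy to see outside C#. A Python sketch (an editorial illustration; the dictionary keys mirror the generated classes but are assumptions):

        def find_report(protocols, report_id):
            # one pass over the flattened nesting; next() stops at the first match
            return next(
                (report
                 for protocol in protocols
                 for source in protocol["sources"]
                 for report in source["reports"]
                 if report["id"] == report_id),
                None,
            )

        protocols = [{"sources": [{"reports": [{"id": 1}, {"id": 2}]},
                                  {"reports": [{"id": 3}]}]}]
        print(find_report(protocols, 3))  # {'id': 3}

    A LINQ equivalent would chain SelectMany over the two inner levels and finish with FirstOrDefault.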

    Read the article

  • How to exploit Diffie-Hellman to perform a man-in-the-middle attack

    - by jfisk
    I'm doing a project where Alice and Bob send each other messages using the Diffie-Hellman key exchange. What is throwing me for a loop is how to incorporate the certificate they are using, so I can obtain their secret messages. From what I understand about MITM attacks, the MITM acts as an imposter, as seen in this diagram:

    Below are the details for my project. I understand that they both have g and p agreed upon before communicating, but how would I be able to implement this with both of them having a certificate to verify their signatures?

    1. Alice prepares ⟨signA(NA, Bob), pkA, certA⟩ where signA is the digital signature algorithm used by Alice, "Bob" is Bob's name, pkA is the public key of Alice, which equals g^x mod p encoded according to X.509 for fixed g, p as specified in the Diffie-Hellman key exchange, and certA is the certificate of Alice that contains Alice's public key that verifies the signature. Finally, NA is a nonce (random string) that is 8 bytes long.
    2. Bob checks Alice's signature and responds with ⟨signB{NA, NB, Alice}, pkB, certB⟩.
    3. Alice gets the message, checks her nonce NA and calculates the joint key based on pkA, pkB according to the Diffie-Hellman key exchange. Then Alice submits the message ⟨signA{NA, NB, Bob}, EK(MA), certA⟩ to Bob, and Bob responds with ⟨signB{NA, NB, Alice}, EK(MB), certB⟩, where MA and MB are their corresponding secret messages.
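
    For reference, the arithmetic underneath pkA = g^x mod p and the "joint key" is plain modular exponentiation. A toy Python sketch (an editorial illustration; the parameters are tiny demo values, nothing like real-world sizes), which also shows why an unauthenticated exchange is MITM-able:

        import secrets

        p, g = 23, 5                        # toy public parameters (never use in practice)

        x_a = secrets.randbelow(p - 2) + 1  # Alice's secret exponent
        x_b = secrets.randbelow(p - 2) + 1  # Bob's secret exponent
        pk_a = pow(g, x_a, p)               # pkA = g^x mod p
        pk_b = pow(g, x_b, p)

        # both sides derive the same joint key from the other's public value
        assert pow(pk_b, x_a, p) == pow(pk_a, x_b, p)

        # a MITM who substitutes his own public value for each side's ends up
        # sharing one key with Alice and another with Bob -- which is exactly
        # what the signatures over the nonces and public keys are there to prevent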

    Read the article

  • Using datetime float representation as primary key

    - by devanalyst
    From my experience I have learned that using a surrogate INT column as the primary key, especially an IDENTITY column, offers better performance than using a GUID or char/varchar column as the primary key. I try to use an IDENTITY key as the primary key wherever possible. But recently I came across a schema where the tables were horizontally partitioned and were managed via a partitioned view. So the tables could not have an IDENTITY column, since that would make the partitioned view non-updatable.

    One workaround was to create a dummy "keygenerator" table with an identity column to generate IDs for the primary key. But this would mean having a keygenerator table for each of the partitioned views. My next thought was to use a float as the primary key, based on the following key-generation algorithm I devised:

        DECLARE @KEY FLOAT
        SET @KEY = CONVERT(FLOAT, GETDATE()) / 100000.0
        SET @KEY = @EMP_ID + @KEY

    Here's how it works:

    - CONVERT(FLOAT, GETDATE()) gives the float representation of the current datetime, since internally all datetimes are represented by SQL as float values.
    - CONVERT(FLOAT, GETDATE()) / 100000.0 converts the float representation into a complete decimal value, i.e. all digits are pushed to the right side of the decimal point.
    - @KEY = @EMP_ID + @KEY adds the employee ID, which is an integer, to this decimal value.

    The logic is that the employee ID is guaranteed to be unique across sessions, since an employee cannot connect to the application more than once at the same time, and for the same employee the current datetime will be unique each time a key is generated. In all, a unique key across all employee sessions and across time. So for Emp IDs 11 and 12, I have key values like 12.40046693321566357 and 11.40046693542361111.

    But my concern is whether a float primary key offers benefits compared to choosing a GUID or char/varchar primary key. Also important: because of the partitioning, the float column is going to be part of a composite key.

    Read the article

  • Padding is invalid and cannot be removed

    - by Ajay
    I have hosted an ASP.NET 2.0 site. Every day, I am getting the error "Padding is invalid and cannot be removed" 2-3 times. The backend used is SQL Server 2005, and the site is controlled via the Plesk 9.2 control panel. Pooling is enabled with a timeout of 120 minutes in IIS; can that be the reason for this? I have not used any encryption except for the stored passwords (MD5). The error message is:

        Base Exception = : Padding is invalid and cannot be removed.
        - Source : System.Web
        - TargetSite : Void ThrowError(System.Exception, System.String, System.String, Boolean)
        Message : Validation of viewstate MAC failed. If this application is hosted by a Web Farm
        or cluster, ensure that configuration specifies the same validationKey and validation
        algorithm. AutoGenerate cannot be used in a cluster.

    And the system log (Application) says:

        Event code: 4009
        Event message: Viewstate verification failed. Reason: The viewstate supplied failed
        integrity check.
        Event detail code: 50203
        ViewStateException information:
        Exception message: Invalid viewstate.
        Port: 31235
        User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)

    I have hosted it on a dedicated server, not in a web farm. Will keeping a static machine key help resolve this issue? Please guide me on this.

    Read the article

  • RESTful API design question - how should one allow users to create new resource instances?

    - by Tamás
    I'm working in a research group where we intend to publish implementations of some of the algorithms we develop on the web via a RESTful API. Most of these algorithms work on small to medium-size datasets, and in many cases a user of our services might want to run multiple queries (with different parameters) on the same dataset, so it seems reasonable to let users upload their datasets in advance and refer to them in their queries later. In this sense, a dataset could be one resource in my API, and an algorithm could be another.

    My question is: how should I let users upload their own datasets? I cannot simply let users upload their data to /dataset/dataset_id, as letting users invent their own dataset_ids might result in ID collisions and users overwriting each other's datasets by accident. (I believe one of the most frequently used dataset IDs would be test.) I think an ideal way would be to have a dedicated URL (like /dataset/upload) where users can POST their datasets, with the response containing a unique ID under which the dataset was stored, but I'm not sure that this doesn't violate the basic principles of REST. What is the preferred way of dealing with such scenarios?
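
    The pattern described here -- POST to a collection and let the server mint the ID -- is conventional REST; the usual refinement is to POST to the collection URL itself (e.g. /datasets) rather than a verb-like /dataset/upload, return 201 Created, and put the new resource's URL in the Location header. A hedged Flask sketch (an editorial illustration; Flask is just a convenient stand-in for whatever framework is in use):

        import uuid
        from flask import Flask, request, jsonify

        app = Flask(__name__)
        datasets = {}  # in-memory stand-in for real storage

        @app.route("/datasets", methods=["POST"])
        def create_dataset():
            dataset_id = uuid.uuid4().hex  # server-assigned, collision-free ID
            datasets[dataset_id] = request.get_data()
            response = jsonify(id=dataset_id)
            response.status_code = 201
            response.headers["Location"] = f"/datasets/{dataset_id}"
            return response

        @app.route("/datasets/<dataset_id>", methods=["GET"])
        def get_dataset(dataset_id):
            return datasets[dataset_id]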

    Read the article

  • AES Encryption Java Invalid Key length

    - by wuntee
    I am trying to create an AES encryption method, but for some reason I keep getting a 'java.security.InvalidKeyException: Key length not 128/192/256 bits'. Here is the code:

        public static SecretKey getSecretKey(char[] password, byte[] salt)
                throws NoSuchAlgorithmException, InvalidKeySpecException {
            SecretKeyFactory factory = SecretKeyFactory.getInstance("PBEWithMD5AndDES");
            // NOTE: last argument is the key length, and it is 256
            KeySpec spec = new PBEKeySpec(password, salt, 1024, 256);
            SecretKey tmp = factory.generateSecret(spec);
            SecretKey secret = new SecretKeySpec(tmp.getEncoded(), "AES");
            return(secret);
        }

        public static byte[] encrypt(char[] password, byte[] salt, String text)
                throws NoSuchAlgorithmException, InvalidKeySpecException,
                       NoSuchPaddingException, InvalidKeyException,
                       InvalidParameterSpecException, IllegalBlockSizeException,
                       BadPaddingException, UnsupportedEncodingException {
            SecretKey secret = getSecretKey(password, salt);
            Cipher cipher = Cipher.getInstance("AES");
            // NOTE: This is where the Exception is being thrown
            cipher.init(Cipher.ENCRYPT_MODE, secret);
            byte[] ciphertext = cipher.doFinal(text.getBytes("UTF-8"));
            return(ciphertext);
        }

    Can anyone see what I am doing wrong? I am thinking it may have something to do with the SecretKeyFactory algorithm, but that is the only one I can find that is supported on the end system I am developing against. Any help would be appreciated. Thanks.
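
    The suspicion about the factory is the usual diagnosis: PBEWithMD5AndDES is a DES-era scheme, and the key length requested in the PBEKeySpec is ignored for it, so the bytes handed to the AES SecretKeySpec aren't 128/192/256 bits. The derivation one actually wants here is PBKDF2 (on the Java side the factory name is typically "PBKDF2WithHmacSHA1" where available). A minimal Python sketch of deriving a proper 256-bit AES key from a password and salt (an editorial illustration of the derivation step, not of the Java API):

        import hashlib

        def derive_aes_key(password: str, salt: bytes) -> bytes:
            # PBKDF2-HMAC-SHA256, 1024 iterations as in the question,
            # dklen=32 bytes -> a 256-bit key suitable for AES-256
            return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                       salt, 1024, dklen=32)

        key = derive_aes_key("hunter2", b"\x00" * 16)  # hypothetical password/salt
        assert len(key) * 8 == 256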

    Read the article
