Search Results

Search found 5104 results on 205 pages for 'evolutionary algorithm'.

Page 180/205

  • Beginner video capture and processing/Camera selection

    - by mattbauch
    I'll soon be undertaking a research project in real-time event recognition but have no experience with the programming aspect of video capture (I'm an upperclassman undergraduate in computer engineering). I want to start off on the right foot so advice from anyone with experience would be great. The ultimate goal is to track events such as a person standing up/sitting down, entering/leaving a room, possibly even shrugging/slumping in posture, etc. from a security camera-like vantage point. First of all, which cameras/companies would you recommend? I'm looking to spend ~$100, more if necessary but not much. Great resolution isn't a must, but is desirable if affordable. What about IP network cameras vs. a USB type webcam? Webcams are less expensive, but IP cameras seem like they'd be much less work to deal with in software. What features should I look for in the camera? Once I've selected a camera, what does converting its output to a series of RGB bitmaps entail? I've never dealt with video encoding/decoding so a starting point or a tutorial that will guide me up to this point would be great if anyone has suggestions. Finally, what is the best (least complicated/most efficient) way to display video from the camera plus my own superimposed images (boxes around events in progress, for instance) in a GUI application? I can work on any operating system in any language. I have some experience with win32 GUIs and Java GUIs. The focus of the project is on the algorithm and so I'm trying to get the video capture/display portion of the app done cleanly and quickly. Thanks for any responses!!

    Read the article

  • Extra fulltext ordering criteria beyond default relevance

    - by Jeremy Warne
    I'm implementing an ingredient text search, for adding ingredients to a recipe. I've currently got a full text index on the ingredient name, which is stored in a single text field, like so: "Sauce, tomato, lite, Heinz" I've found that because there are a lot of ingredients with very similar names in the database, simply sorting by relevance doesn't work that well a lot of the time. So, I've found myself sorting by a bunch of my own rules of thumb, which probably duplicates a lot of the full-text search algorithm which spits out a numerical relevance. For instance (abridged): ORDER BY [ingredient name is exactly search term], [ingredient name starts with search term], [ingredient name starts with any word from the search and contains all search terms in some order], [ingredient name contains all search terms in some order], ...and so on. Each of these is defined in the SELECT specification as an expression returning either 1 or 0, and so I order by those in sequential order. I would love to hear suggestions for: A better way to define complicated order-by criteria in one place, say perhaps in a view or stored procedure that you can pass just the search term to and get back a set of results without having to worry about how they're ordered? A better tool for this than MySQL's fulltext engine -- perhaps if I was using Sphinx or something [which I've heard of but not used before], would I find some sort of complicated config option designed to solve problems like this? Some google search terms which might turn up discussion on how to order text items within a specific domain like this? I haven't found much that's of use. Thanks for reading!

    Read the article

  • Why can a struct defined inside a function not be used as functor to std::for_each?

    - by Oswald
    The following code won't compile. The compiler complains about *no matching function for call to for_each*. Why is this so? #include <map> #include <algorithm> struct Element { void flip() {} }; void flip_all(std::map<Element*, Element*> input) { struct FlipFunctor { void operator() (std::pair<Element* const, Element*>& item) { item.second->flip(); } }; std::for_each(input.begin(), input.end(), FlipFunctor()); } When I move struct FlipFunctor before function flip_all, the code compiles. Full error message: no matching function for call to ‘for_each(std::_Rb_tree_iterator<std::pair<Element* const, Element*> >, std::_Rb_tree_iterator<std::pair<Element* const, Element*> >, flip_all(std::map<Element*, Element*, std::less<Element*>, std::allocator<std::pair<Element* const, Element*> > >)::FlipFunctor)’

    Read the article

  • Multiple sendto() using UDP socket

    - by ereOn
    Hi, I have network software which uses UDP to communicate with other instances of the same program. For various reasons, I must use UDP here. I recently had problems sending huge amounts of data over UDP and had to implement a fragmentation system to split my messages into small data chunks. So far it has worked well, but I now encounter an issue when I have to send a lot of data chunks. I have the following algorithm: split the message into small data chunks (around 1500 bytes), then iterate over the data chunk list and send each one using sendto(). However, when I send a lot of data chunks, the receiver only gets the first 6 messages. Sometimes it misses the sixth and receives the seventh; it depends. In any case, sendto() always indicates success. This always happens when I test my software over the loopback interface (127.0.0.1) but never over my LAN. If I add something like std::cout << "test" << std::endl; between the sendto() calls, then every frame is received. I am aware that UDP allows packet loss and that my frames might be lost for many reasons, and I suppose it has to do with the rate at which I am sending the data chunks. What would be the right approach here? Implementing some acknowledgement mechanism (just like TCP) seems overkill. Adding some arbitrary waiting time between the sendto() calls is ugly and will probably decrease performance. Increasing (if possible) the receiver's internal UDP buffer? I don't even know if that is possible. Something else? I really need your advice here. Thanks very much.
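
    A middle ground that is often suggested between doing nothing and building TCP-style acknowledgements is to enlarge the receiver's socket buffer (where the OS allows it) and to pace bursts on the sender. Below is a rough sketch of the sender side, written in Java purely for illustration (the post itself uses sendto() from C++); the chunk list, the batch size of 8 and the 1 ms pause are assumptions to be tuned, not recommendations:

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;
        import java.util.List;

        public class PacedSender {
            // Send each ~1500-byte chunk, pausing briefly every few packets so a fast
            // sender (especially over loopback) does not overrun the receive buffer.
            public static void sendChunks(DatagramSocket socket, List<byte[]> chunks,
                                          InetAddress dest, int port) throws Exception {
                int sent = 0;
                for (byte[] chunk : chunks) {
                    socket.send(new DatagramPacket(chunk, chunk.length, dest, port));
                    if (++sent % 8 == 0) {
                        Thread.sleep(1); // crude pacing; a token bucket would be smoother
                    }
                }
            }
        }

    On the receiving side, the equivalent knob is DatagramSocket.setReceiveBufferSize(), which is only a request to the operating system and may be capped by system limits.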

    Read the article

  • Recurrence relation solution

    - by Travis
    I'm revising past midterms for a final exam this week and am trying to make sense of a solution my professor posted for one of the past exams. (You can see the original pdf here, question #6). I'm given the original recurrence relation T(n) = 3T(n/2) + n and am told T(1) = 1. I'm pretty sure the solution I've been given is wrong in a few places. The solution is as follows: Let n=2^m T(2^m) = 3T(2^(m-1)) + 2^m 3T(2^(m-1)) = 3^2*T(2^(m-2)) + 2^(m-1)*3 ... 3^(m-1)T(2) = T(1) + 2*3^(m-1) I'm pretty sure this last line is incorrect and they forgot to multiply T(1) by 3^m. He then (tries to) sum the expressions: T(2^m) = 1 + (2^m + 2^(m-1)*3 + ... + 2*3^(m-1)) = 1 + 2^m(1 + (3/2)^1 + (3/2)^2 + ... + (3/2)^(m-1)) = 1 + 2^m((3/2)^m-1)*(1/2) = 1 + 3^m - 2^(m-1) = 1 + n^log 3 - n/2 Thus the algorithm is big Theta of (n^log 3). I'm pretty sure that he also got the summation wrong here. By my calculations this should be as follows: T(2^m) = 2^m + 3 * 2^(m-1) + 3^2 * 2^(m-2) + ... + 3^m (3^m because 3^m*T(1) = 3^m should be added, not 1) = 2^m * ((3/2)^1 + (3/2)^2 + ... + (3/2)^m) = 2^m * sum of (3/2)^i from i=0 to m = 2^m * ((3/2)^(m+1) - 1)/(3/2 - 1) = 2^m * ((3/2)^(m+1) - 1)/(1/2) = 2^(m+1) * 3^(m+1)/2^(m+1) - 2^(m+1) = 3^(m+1) - 2 * 2^m Replacing n = 2^m, and from that m = log n T(n) = 3*3^(log n) - 2*n n is O(3^log n), thus the runtime is big Theta of (3^log n) Does this seem right? Thanks for your help!
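
    For reference, a compact unrolling of the recurrence, assuming n = 2^m and T(1) = 1, written out in LaTeX:

        T(2^m) = 3^m\,T(1) + \sum_{i=0}^{m-1} 3^i\,2^{m-i}
               = 3^m + 2^m \sum_{i=0}^{m-1} \left(\tfrac{3}{2}\right)^i
               = 3^m + 2^m \cdot \frac{(3/2)^m - 1}{3/2 - 1}
               = 3^m + 2\,(3^m - 2^m)
               = 3^{m+1} - 2^{m+1}.

    Substituting m = log_2 n gives T(n) = 3 n^{log_2 3} - 2n, which agrees with the poster's final expression (3^{log n} and n^{log 3} are the same quantity when the logarithm is base 2) and with the Master theorem bound Theta(n^{log_2 3}) for T(n) = 3T(n/2) + n.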

    Read the article

  • Why is this C++ code not doing its job?

    - by hamza
    I want to create a program that writes all the primes to a file (I know it's a popular algorithm, but I'm trying to make it by myself), but it still shows all the numbers instead of just the primes, and I don't know why. Can someone please tell me why? #include <iostream> #include <stdlib.h> #include <stdio.h> void afficher_sur_un_ficher (FILE* ficher , int nb_bit ); int main() { FILE* p_fich ; char tab[4096] , mask ; int nb_bit = 0 , x ; for (int i = 0 ; i < 4096 ; i++ ) { tab[i] = 0xff ; } for (int i = 0 ; i < 4096 ; i++ ) { mask = 0x01 ; for (int j = 0 ; j < 8 ; j++) { if ((tab[i] & mask) != 0 ) { x = nb_bit ; while (( x > 1 )&&(x < 16384)) { tab[i] = tab[i] ^ mask ; x = x * 2 ; } afficher_sur_un_ficher (p_fich , nb_bit ) ; } mask = mask<<1 ; nb_bit++ ; } } system ("PAUSE"); return 0 ; } void afficher_sur_un_ficher (FILE* ficher , int nb_bit ) { ficher = fopen("res.txt","a+"); fprintf (ficher ,"%d \t" , nb_bit); int static z ; z = z+1 ; if ( z%10 == 0) fprintf (ficher , "\n"); fclose(ficher); }
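
    For comparison, the marking step of a sieve of Eratosthenes has to clear every multiple of each prime exactly once, whereas the inner while loop above keeps XOR-ing the same bit of tab[i] with the same mask, toggling it back and forth. A minimal sketch of the intended logic, written here in Java simply to show the structure (the bit packing into a char array and the ten-per-line file formatting are left out):

        import java.io.FileWriter;
        import java.io.IOException;

        public class Sieve {
            public static void main(String[] args) throws IOException {
                final int limit = 32768;               // 4096 bytes * 8 bits in the original
                boolean[] composite = new boolean[limit];
                try (FileWriter out = new FileWriter("res.txt")) {
                    for (int n = 2; n < limit; n++) {
                        if (!composite[n]) {
                            out.write(n + "\t");        // n is prime
                            for (long m = (long) n * n; m < limit; m += n) {
                                composite[(int) m] = true;  // clear each multiple once
                            }
                        }
                    }
                }
            }
        }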

    Read the article

  • Source of parsers for programming languages?

    - by Arkaaito
    I'm dusting off an old project of mine which calculates a number of simple metrics about large software projects. One of the metrics is the length of files/classes/methods. Currently my code "guesses" where class/method boundaries are based on a very crude algorithm (traverse the file, maintaining a "current depth" and adjusting it whenever you encounter unquoted brackets; when you return to the level a class or method began on, consider it exited). However, there are many problems with this procedure, and a "simple" way of detecting when your depth has changed is not always effective. To make this give accurate results, I need to use the canonical way (in each language) of detecting function definitions, class definitions and depth changes. This amounts to writing a simple parser to generate parse trees containing at least these elements for every language I want my project to be applicable to. Obviously parsers have been written for all these languages before, so it seems like I shouldn't have to duplicate that effort (even though writing parsers is fun). Is there some open-source project which collects ready-to-use parser libraries for a bunch of source languages? Or should I just be using ANTLR to make my own from scratch? (Note: I'd be delighted to port the project to another language to make use of a great existing resource, so if you know of one, it doesn't matter what language it's written in.)

    Read the article

  • For "draggable" div tags that are NOT nested: JQuery/JavaScript div tag “containment” approach/algor

    - by Pete Alvin
    Background: I've created an online circuit design application where .draggable() div tags are containers that contain smaller div containers and so forth. Question: For any particular div tag I need to quickly identify if it contains other div tags (that may in turn contain other div tags). -- Since the div tags are draggable, in the DOM they are NOT nested inside each other but I think are absolutely positioned. So I think that a "hit testing" approach is the only way to determine containment, unless there is some "secret" routine built-in somewhere that could help with this. I've searched JQuery and I don't see any built-in routine for this. Does anyone know of an algorithm that's quicker than O(n^2)? Seems like I have to walk the list of div tags in an outer loop (n) and have an inner loop (another n) to compare against all other div tags and do a "containment test" (position, width, height), building a list of contained div tags. That's n-squared. Then I have to build a list of all nested div tags by concatenating contained lists. So the total would be O(n^2)+n. There must be a better way?

    Read the article

  • 48-bit bitwise operations in Javascript?

    - by randomhelp
    I've been given the task of porting Java's Java.util.Random() to JavaScript, and I've run across a huge performance hit/inaccuracy using bitwise operators in Javascript on sufficiently large numbers. Some cursory research states that "bitwise operators in JavaScript are inherently slow," because internally it appears that JavaScript will cast all of its double values into signed 32-bit integers to do the bitwise operations (see https://developer.mozilla.org/En/Core_JavaScript_1.5_Reference/Operators/Bitwise_Operators for more on this.) Because of this, I can't do a direct port of the Java random number generator, and I need to get the same numeric results as Java.util.Random(). Writing something like this.next = function(bits) { if (!bits) { bits = 48; } this.seed = (this.seed * 25214903917 + 11) & ((1 << 48) - 1); return this.seed >>> (48 - bits); }; (which is an almost-direct port of the Java.util.Random()) code won't work properly, since Javascript can't do bitwise operations on an integer that size.) I've figured out that I can just make a seedable random number generator in 32-bit space using the Lehmer algorithm, but the trick is that I need to get the same values as I would with Java.util.Random(). What should I do to make a faster, functional port?
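
    For reference, the 48-bit step that the java.util.Random documentation gives for next(bits), written out in Java with the mask made explicit; a JavaScript port has to reproduce exactly this multiply-and-add modulo 2^48, which is why ports usually split the seed into two halves (or use a big-number library) instead of relying on the native 32-bit bitwise operators:

        public class Lcg48 {
            private static final long MULTIPLIER = 0x5DEECE66DL;   // 25214903917
            private static final long INCREMENT  = 0xBL;           // 11
            private static final long MASK       = (1L << 48) - 1;

            private long seed;

            public Lcg48(long seed) {
                this.seed = (seed ^ MULTIPLIER) & MASK;  // same scrambling as setSeed()
            }

            // Equivalent to java.util.Random.next(bits): advance the seed mod 2^48
            // and return the top `bits` bits.
            public int next(int bits) {
                seed = (seed * MULTIPLIER + INCREMENT) & MASK;
                return (int) (seed >>> (48 - bits));
            }
        }

    A port that reproduces both the seed scrambling and the next(bits) step bit-for-bit will emit the same sequence as the Java original.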

    Read the article

  • Editing 8bpp indexed Bitmaps

    - by Pedro Sá
    Hi, I'm trying to edit the pixels of an 8bpp bitmap. Since this PixelFormat is indexed, I'm aware that it uses a Color Table to map the pixel values. Even though I can edit the bitmap by converting it to 24bpp, 8bpp editing is much faster (13ms vs 3ms). But changing each value when accessing the 8bpp bitmap results in some random RGB colors even though the PixelFormat remains 8bpp. I'm currently developing in C# and the algorithm is as follows: (C#) 1. Load the original Bitmap at 8bpp. 2. Create an empty temp Bitmap at 8bpp with the same size as the original. 3. LockBits both bitmaps and, using P/Invoke, call a C++ method where I pass the Scan0 of each BitmapData object. (I used a C++ method as it offers better performance when iterating through the Bitmap's pixels.) (C++) 4. Create an int[256] palette according to some parameters and edit the temp bitmap bytes by passing the original's pixel values through the palette. (C#) 5. UnlockBits. My question is how I can edit the pixel values without getting the strange RGB colors, or even better, edit the 8bpp bitmap's Color Table? Regards, Pedro

    Read the article

  • How to determine the (natural) language of a document?

    - by Robert Petermeier
    I have a set of documents in two languages: English and German. There is no usable meta information about these documents, a program can look at the content only. Based on that, the program has to decide which of the two languages the document is written in. Is there any "standard" algorithm for this problem that can be implemented in a few hours' time? Or alternatively, a free .NET library or toolkit that can do this? I know about LingPipe, but it is Java Not free for "semi-commercial" usage This problem seems to be surprisingly hard. I checked out the Google AJAX Language API (which I found by searching this site first), but it was ridiculously bad. For six web pages in German to which I pointed it only one guess was correct. The other guesses were Swedish, English, Danish and French... A simple approach I came up with is to use a list of stop words. My app already uses such a list for German documents in order to analyze them with Lucene.Net. If my app scans the documents for occurrences of stop words from either language the one with more occurrences would win. A very naive approach, to be sure, but it might be good enough. Unfortunately I don't have the time to become an expert at natural-language processing, although it is an intriguing topic.
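
    A sketch of the stop-word tally described above, written in Java (the poster wants .NET, but the idea translates directly); the two word lists here are abbreviated placeholders and would need a few dozen high-frequency function words per language to be reliable:

        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.Set;

        public class LanguageGuesser {
            // Abbreviated placeholder lists; real lists should be much longer.
            private static final Set<String> ENGLISH = new HashSet<>(Arrays.asList(
                    "the", "and", "of", "to", "in", "is", "that", "for", "with", "not"));
            private static final Set<String> GERMAN = new HashSet<>(Arrays.asList(
                    "der", "die", "das", "und", "nicht", "ist", "ein", "mit", "für", "auf"));

            // Count stop-word hits per language and return the better match.
            public static String guess(String text) {
                int en = 0, de = 0;
                for (String token : text.toLowerCase().split("[^\\p{L}]+")) {
                    if (ENGLISH.contains(token)) en++;
                    if (GERMAN.contains(token)) de++;
                }
                return en >= de ? "en" : "de";
            }
        }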

    Read the article

  • Diffie-Hellman in Silverlight

    - by cmaduro
    I am trying to devise a security scheme for encrypting the application level data between a silverlight client, and a php webservice that I created. Since I am dealing with a public website the information I am pulling from the service is public, but the information I'm submitting to the webservice is not public. There is also a back end to the website for administration, so naturally all application data being pushed and pulled from the webservice to the silverlight administration back end must also be encrypted. Silverlight does not support asymmetric encryption, which would work for the public website. Symmetric encryption would only work on the back end because users do not log in to the public website, so no password based keys could be derived. Still symmetric encryption would be great, but I cannot securely save the private key in the silverlight client. Because it would either have to be hardcoded or read from some kind of config file. None of that is considered secure. So... plan B. My final alternative would be then to implement the Diffie-Hellman algorithm, which supports symmetric encryption by means of key agreement. However Diffie-Hellman is vulnerable to man-in-the-middle attacks. In other words, there is no guarantee that either side is sure of each others identity, making it possible for communication to be intercepted and altered without the receiving party knowing about it. It is thus recommended to use a private shared key to encrypt the key agreement handshaking, so that the identity of either party is confirmed. This brings me back to my initial problem that resulted in me needing to use Diffie-Hellman, how can I use a private key in a silverlight client without hardcoding it either in the code or an xml file. I'm all out of love on this one... is there any answer to this?

    Read the article

  • Automatic Adjusting Range Table

    - by Bradford
    I have a table with a start date range, an end date range, and a few other additional columns. On input of a new record, I want to automatically adjust any overlapping date ranges (shrinking them to allow for the new input). I also want to ensure that no overlapping records can accidentally be inserted into this table. I'm using Oracle and Java for my application code. How should I enforce the prevention of overlapping date ranges and also allow for automatically adjusting overlapping ranges? Should I create an AFTER INSERT trigger, with a dbms_lock to serialize access, to prevent the overlapping data. Then in Java, apply the logic to auto adjust everything? Or should that part be in PL/SQL in stored procedure call? This is something that we need for a couple other tables so it'd be nice to abstract. If anyone has something like this already written, please share :) I did find this reference: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:474221407101 Here's an example of how each of the 4 overlapping cases should be handled for adjustment on insert: = Example 1 = In DB (Start, End, Value): (0, 10, 'X') **(30, 100, 'Z') (200, 500, 'Y') Input (20, 50, 'A') Gives (0, 10, 'X') **(20, 50, 'A') **(51, 100, 'Z') (200, 500, 'Y') = Example 2 = In DB (Start, End, Value): (0, 10, 'X') **(30, 100, 'Z') (200, 500, 'Y') Input (40, 80, 'A') Gives (0, 10, 'X') **(30, 39, 'Z') **(40, 80, 'A') **(81, 100, 'Z') (200, 500, 'Y') = Example 3 = In DB (Start, End, Value): (0, 10, 'X') **(30, 100, 'Z') (200, 500, 'Y') Input (50, 120, 'A') Gives (0, 10, 'X') **(30, 49, 'Z') **(50, 120, 'A') (200, 500, 'Y') = Example 4 = In DB (Start, End, Value): (0, 10, 'X') **(30, 100, 'Z') (200, 500, 'Y') Input (20, 120, 'A') Gives (0, 10, 'X') **(20, 120, 'A') (200, 500, 'Y') The algorithm is as follows: given range = g; input range = i; output range set = o if i.start <= g.start if i.end >= g.end o_1 = i else o_1 = i o_2 = (o.end + 1, g.end) else if i.end >= g.end o_1 = (g.start, i.start - 1) o_2 = i else o_1 = (g.start, i.start - 1) o_2 = i o_3 = (i.end + 1, i.end)
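
    A sketch of the splitting logic, in Java; it assumes the final branch of the pseudocode is meant to read (i.end + 1, g.end) rather than (i.end + 1, i.end), which is what Examples 1-3 imply, and it only computes the pieces that replace one existing row g when the new row i is inserted (the locking/trigger question is separate):

        import java.util.ArrayList;
        import java.util.List;

        public class RangeAdjuster {

            public static class Range {
                final long start, end;
                final String value;
                Range(long start, long end, String value) {
                    this.start = start; this.end = end; this.value = value;
                }
            }

            // Pieces that should replace an existing row g when a new row i is inserted.
            // The new row i is always kept whole; g is trimmed around it.
            public static List<Range> adjust(Range g, Range i) {
                List<Range> out = new ArrayList<Range>();
                if (i.start > g.end || i.end < g.start) {   // no overlap: keep g untouched
                    out.add(g);
                    return out;
                }
                if (i.start > g.start) {
                    out.add(new Range(g.start, i.start - 1, g.value));  // left remainder of g
                }
                if (i.end < g.end) {
                    out.add(new Range(i.end + 1, g.end, g.value));      // right remainder of g
                }
                return out;   // empty list means g is fully covered and should be deleted
            }
        }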

    Read the article

  • Soft Paint Bucket Fill: Colour Equality

    - by Bart van Heukelom
    I'm making a small app where children can fill preset illustrations with colours. I've successfully implemented an MS Paint-style paint bucket using the flood fill algorithm. However, near the edges of image elements pixels are left unfilled, because the lines are anti-aliased. This is because the current condition on whether to fill is colourAtCurrentPixel == colourToReplace (the colours are RGB uints). I'd like to add a smoothing/threshold option like in Photoshop and other sophisticated tools, but what's the algorithm to determine the equality/distance between two colours? if (match(pixel(x,y), colourToReplace)) setpixel(x,y,colourToReplaceWith) How do I fill in match()? Here's my current full code: var b:BitmapData = settings.background; b.lock(); var from:uint = b.getPixel(x,y); var q:Array = []; var xx:int; var yy:int; var w:int = b.width; var h:int = b.height; q.push(y*w + x); while (q.length != 0) { var xy:int = q.shift(); xx = xy % w; yy = (xy - xx) / w; if (b.getPixel(xx,yy) == from) { b.setPixel(xx,yy,to); if (xx != 0) q.push(xy-1); if (xx != w-1) q.push(xy+1); if (yy != 0) q.push(xy-w); if (yy != h-1) q.push(xy+w); } } b.unlock(null);
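
    For match() itself, a common starting point is squared Euclidean distance in RGB compared against a tolerance, which is roughly how the simple tolerance sliders in paint programs behave; a language-neutral sketch (written here in Java) with the channels unpacked from a 0xRRGGBB uint:

        public final class ColorMatch {
            // Returns true if the two packed 0xRRGGBB colours are within `tolerance`
            // of each other, using squared Euclidean distance over the three channels.
            public static boolean match(int c1, int c2, int tolerance) {
                int dr = ((c1 >> 16) & 0xFF) - ((c2 >> 16) & 0xFF);
                int dg = ((c1 >> 8) & 0xFF)  - ((c2 >> 8) & 0xFF);
                int db = (c1 & 0xFF)         - (c2 & 0xFF);
                return dr * dr + dg * dg + db * db <= tolerance * tolerance;
            }
        }

    A perceptually weighted distance (or comparing in HSV/Lab) gives nicer edges on anti-aliased lines, but the plain RGB version is usually good enough for a fill tolerance.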

    Read the article

  • Efficient Context-Free Grammar parser, preferably Python-friendly

    - by Max Shawabkeh
    I am in need of parsing a small subset of English for one of my project, described as a context-free grammar with (1-level) feature structures (example) and I need to do it efficiently . Right now I'm using NLTK's parser which produces the right output but is very slow. For my grammar of ~450 fairly ambiguous non-lexicon rules and half a million lexical entries, parsing simple sentences can take anywhere from 2 to 30 seconds, depending it seems on the number of resulting trees. Lexical entries have little to no effect on performance. Another problem is that loading the (25MB) grammar+lexicon at the beginning can take up to a minute. From what I can find in literature, the running time of the algorithm used to parse such a grammar (Earley or CKY) should be linear to the size of the grammar and cubic to the size of the input token list. My experience with NLTK indicates that ambiguity is what hurts the performance most, not the absolute size of the grammar. So now I'm looking for a CFG parser to replace NLTK. I've been considering PLY but I can't tell whether it supports feature structures in CFGs, which are required in my case, and the examples I've seen seem to be doing a lot of procedural parsing rather than just specifying a grammar. Can anybody show me an example of PLY both supporting feature structs and using a declarative grammar? I'm also fine with any other parser that can do what I need efficiently. A Python interface is preferable but not absolutely necessary.

    Read the article

  • How to create Fibonacci Sequence in Java

    - by rfkrocktk
    I really suck at math. I mean, I REALLY suck at math. I'm trying to make a simple fibonacci sequence class for an algorithm I'll be using. I have seen the python example which looks something like this: a = 0 b = 1 while b < 10: print b a, b = b, b+a The problem is that I can't really make this work in any other language. I'd like to make it work in Java, since I can pretty much translate it into the other languages I use from there. This is the general thought: public class FibonacciAlgorithm { private Integer a = 0; private Integer b = 1; public FibonacciAlgorithm() { } public Integer increment() { a = b; b = a + b; return value; } public Integer getValue() { return b; } } All that I end up with is doubling, which I could do with multiplication :( Can anyone help me out? Math pwns me.
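
    The doubling happens because a is overwritten before it is read: after a = b, the next line computes b + b. A corrected sketch of the same class, using a temporary to hold the new value (and long instead of Integer, since Fibonacci numbers overflow int quickly):

        public class FibonacciAlgorithm {
            private long a = 0;   // F(n)
            private long b = 1;   // F(n + 1)

            // Advance one step and return the new current value.
            public long increment() {
                long next = a + b;
                a = b;
                b = next;
                return b;
            }

            public long getValue() {
                return b;
            }
        }

    Reading getValue() once and then calling increment() repeatedly reproduces the Python loop's 1, 1, 2, 3, 5, 8, ...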

    Read the article

  • Linq Query to Update Nested Array Items?

    - by Brett
    I have an object structure generated from xsd.exe. Roughly, it consists of 3 nested arrays: protocols, sources and reports. The xml looks like this: <protocols> <protocol> <source> <report /> <report /> </source> <source> <report /> <report /> </source> </protocol> <!-- more protocols --> </protocols> I need to update a single "Report" within the data structure. A brute force algorithm is shown below. I know that this could be done using XDocument and Linq, but I'd prefer to update the data structure and then serialize the structure back to disk. Thoughts? Brett bool updated = false; foreach (ProtocolsProtocol protocol in protocols.Protocol) { if (updated) break; foreach (ProtocolsProtocolSource source in protocol.Source) { if (updated) break; for (int i = 0; i < source.Report.Length; i++) { ProtocolsProtocolSourceReport currentReport = source.Report[i]; if (currentReport.Id == report.Id) { currentReport.Attribute1 = report.Attribute1; currentReport.Attribute2 = report.Attribute2; updated = true; break; } } } }

    Read the article

  • How to exploit Diffie-Hellman to perform a man-in-the-middle attack

    - by jfisk
    I'm doing a project where Alice and Bob send each other messages using the Diffie-Hellman key exchange. What is throwing me for a loop is how to incorporate the certificate they are using, so I can obtain their secret messages. From what I understand about MITM attacks, the MITM acts as an imposter, as seen on this diagram: Below are the details for my project. I understand that they both have g and p agreed upon before communicating, but how would I be able to implement this with both of them having a certificate to verify their signatures? Alice prepares ⟨signA(NA, Bob), pkA, certA⟩ where signA is the digital signature algorithm used by Alice, “Bob” is Bob's name, pkA is the public key of Alice, which equals g^x mod p encoded according to X.509 for a fixed g, p as specified in the Diffie-Hellman key exchange, and certA is the certificate of Alice that contains Alice's public key that verifies the signature; finally, NA is a nonce (random string) that is 8 bytes long. Bob checks Alice's signature and responds with ⟨signB{NA,NB,Alice}, pkB, certB⟩. When Alice gets the message she checks her nonce NA and calculates the joint key based on pkA, pkB according to the Diffie-Hellman key exchange. Then Alice submits the message ⟨signA{NA,NB,Bob}, EK(MA), certA⟩ to Bob and Bob responds with ⟨SignB{NA,NB,Alice}, EK(MB), certB⟩, where MA and MB are their corresponding secret messages.
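
    Whatever the signature and certificate layer looks like, the Diffie-Hellman arithmetic underneath is plain modular exponentiation, so each party's pk value and the joint key can be computed as below; this is only a sketch with BigInteger, and it assumes g, p, the 256-bit private exponent size and all certificate handling are supplied elsewhere:

        import java.math.BigInteger;
        import java.security.SecureRandom;

        public class DhParty {
            private final BigInteger g, p, x;   // x is this party's private exponent

            public DhParty(BigInteger g, BigInteger p) {
                this.g = g;
                this.p = p;
                this.x = new BigInteger(256, new SecureRandom());  // private value
            }

            // pkA / pkB in the protocol description: g^x mod p
            public BigInteger publicValue() {
                return g.modPow(x, p);
            }

            // Joint key K = (other side's public value)^x mod p
            public BigInteger sharedSecret(BigInteger otherPublic) {
                return otherPublic.modPow(x, p);
            }
        }

    A man-in-the-middle simply runs one such exchange toward Alice and another toward Bob and relays messages between them, which is exactly the substitution the signed ⟨..., pk, cert⟩ messages are designed to make detectable.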

    Read the article

  • Using datetime float representation as primary key

    - by devanalyst
    From my experience I have learned that using a surrogate INT column as the primary key, especially an IDENTITY column, offers better performance than using a GUID or char/varchar column as the primary key. I try to use an IDENTITY key as the primary key wherever possible. But recently I came across a schema where the tables were horizontally partitioned and were managed via a partitioned view, so the tables could not have an IDENTITY column, since that would make the partitioned view non-updatable. One workaround for this was to create a dummy 'keygenerator' table with an identity column to generate IDs for the primary key, but this would mean having a 'keygenerator' table for each partitioned view. My next thought was to use float as a primary key. The reason is the following key algorithm that I devised: DECLARE @KEY FLOAT SET @KEY = CONVERT(FLOAT,GETDATE())/100000.0 SET @KEY = @EMP_ID + @KEY Here's how it works. CONVERT(FLOAT,GETDATE()) gives the float representation of the current datetime, since internally all datetimes are represented by SQL as a float value. CONVERT(FLOAT,GETDATE())/100000.0 converts the float representation into a pure decimal value, i.e. all digits are pushed to the right side of the ".". @KEY = @EMP_ID + @KEY adds the Employee ID, which is an integer, to this decimal value. The logic is that the Employee ID is guaranteed to be unique across sessions, since an employee cannot connect to the application more than once at the same time, and for the same employee the current datetime will be unique each time a key is generated. In all, a unique key across all employee sessions and across time. So for Emp IDs 11 and 12, I get key values like 12.40046693321566357 and 11.40046693542361111. But my concern is whether a float data type as primary key offers benefits compared to choosing GUID or char/varchar primary keys. Also important: because of partitioning, the float column is going to be part of a composite key.

    Read the article

  • Padding is invalid and cannot be removed

    - by Ajay
    I have hosted an ASP.NET 2.0 site. Every day, I am getting the error "Padding is invalid and cannot be removed" 2-3 times. The backend is SQL Server 2005, and the site is controlled via the Plesk 9.2 control panel. Pooling is enabled in IIS with a timeout of 120 minutes. Can that be the reason for this? I have not used any encryption except for the stored passwords (MD5). The error message is: " Base Exception = : Padding is invalid and cannot be removed. - Source : System.Web - TargetSite :Void ThrowError(System.Exception, System.String, System.String, Boolean)Message : Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster. " and the system log (Application) says " Event code: 4009 Event message: Viewstate verification failed. Reason: The viewstate supplied failed integrity check. Event detail code: 50203 ViewStateException information: Exception message: Invalid viewstate. Port: 31235 User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) " I have hosted it on a dedicated server and not in a web farm. Will keeping a static machine key help resolve this issue? Please guide me on this.

    Read the article

  • RESTful API design question - how should one allow users to create new resource instances?

    - by Tamás
    I'm working in a research group where we intend to publish implementations of some of the algorithms we develop on the web via a RESTful API. Most of these algorithms work on small to medium size datasets, and in many cases, a user of our services might want to run multiple queries (with different parameters) on the same dataset, so for me it seems reasonable to allow users to upload their datasets in advance and refer to them in their queries later. In this sense, a dataset could be a resource in my API, and an algorithm could be another. My question is: how should I let the users upload their own datasets? I cannot simply let users upload their data to /dataset/dataset_id as letting the users invent their own dataset_ids might result in ID collision and users overwriting each other's datasets by accident. (I believe one of the most frequently used dataset ID would be test). I think an ideal way would be to have a dedicated URL (like /dataset/upload) where users can POST their datasets and the response would contain a unique ID under which the dataset was stored, but I'm not sure that it does not violate the basic principles of REST. What is the preferred way of dealing with such scenarios?

    Read the article

  • AES Encryption Java Invalid Key length

    - by wuntee
    I am trying to create an AES encryption method, but for some reason I keep getting a 'java.security.InvalidKeyException: Key length not 128/192/256 bits'. Here is the code: public static SecretKey getSecretKey(char[] password, byte[] salt) throws NoSuchAlgorithmException, InvalidKeySpecException{ SecretKeyFactory factory = SecretKeyFactory.getInstance("PBEWithMD5AndDES"); // NOTE: last argument is the key length, and it is 256 KeySpec spec = new PBEKeySpec(password, salt, 1024, 256); SecretKey tmp = factory.generateSecret(spec); SecretKey secret = new SecretKeySpec(tmp.getEncoded(), "AES"); return(secret); } public static byte[] encrypt(char[] password, byte[] salt, String text) throws NoSuchAlgorithmException, InvalidKeySpecException, NoSuchPaddingException, InvalidKeyException, InvalidParameterSpecException, IllegalBlockSizeException, BadPaddingException, UnsupportedEncodingException{ SecretKey secret = getSecretKey(password, salt); Cipher cipher = Cipher.getInstance("AES"); // NOTE: This is where the Exception is being thrown cipher.init(Cipher.ENCRYPT_MODE, secret); byte[] ciphertext = cipher.doFinal(text.getBytes("UTF-8")); return(ciphertext); } Can anyone see what I am doing wrong? I am thinking it may have something to do with the SecretKeyFactory algorithm, but that is the only one I can find that is supported on the end system I am developing against. Any help would be appreciated. Thanks.
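
    The usual culprit is that the PBEWithMD5AndDES factory does not honour the 256-bit keyLength argument, so the bytes handed to SecretKeySpec are not a legal AES key size. If the target platform offers it (the poster notes algorithm availability is constrained), PBKDF2WithHmacSHA1 does produce a key of the requested length; a sketch, with the caveat that 256-bit AES may additionally require the JCE unlimited-strength policy files on older JREs:

        import javax.crypto.SecretKey;
        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.PBEKeySpec;
        import javax.crypto.spec.SecretKeySpec;
        import java.security.spec.KeySpec;

        public class Keys {
            // Derive a 256-bit AES key from a password and salt via PBKDF2.
            public static SecretKey getSecretKey(char[] password, byte[] salt) throws Exception {
                SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
                KeySpec spec = new PBEKeySpec(password, salt, 1024, 256);  // 256-bit key
                byte[] keyBytes = factory.generateSecret(spec).getEncoded();
                return new SecretKeySpec(keyBytes, "AES");                  // 32 bytes -> valid AES key
            }
        }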

    Read the article

  • ai: Determining what tests to run to get most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com I have a system (see about page on site for details) where: I need to output a ranked list, with confidences, of categories that match a particular feature vector the binary feature vectors are a list of site IDs & whether this session detected a hit feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit) categories are a large, non-closed set (user IDs) my total feature space is approximately 50 million items (URLs) for any given test, I can only query approx. 0.2% of that space I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc) getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap sql queries I have training data that can say authoritatively that any two feature vectors are the same category but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user id) I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to take into balance exploitation (test based on prior test data) and exploration (test stuff that's not been tested enough to find out how it performs). There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far. Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved. I imagine that this is a fairly standard problem in AI - having a cheap heuristic for what expensive queries to make - but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?

    Read the article

  • Development process for an embedded project with significant hardware change

    - by pierr
    Hi, I have a good idea of the Agile development process, but it seems it does not fit well with an embedded project with significant hardware change. I will describe below what we are currently doing (ad hoc, no defined process yet). The changes are divided into three categories, and a different process is used for each: Complete hardware change (example: use a different video codec IP): a) study the new IP; b) RTL/FPGA simulation; c) implement the legacy interface - go to b); d) wait until the hardware (tape-out) is ready; e) test on the real hardware. Hardware improvement (example: enhance the image display quality by improving the underlying algorithm): a) RTL/FPGA simulation; b) wait until hardware is ready and test on the hardware. Minor change (example: only change the hardware register mapping): a) wait until hardware is ready and test on the hardware. The worry is that we don't have much control over, or confidence in, software maturity for a hardware change, as the bring-up schedule is always very tight and the customer wants a seamless change when updating to a new hardware version. How did you manage this kind of hardware change? Did you solve it with a Hardware Abstraction Layer (HAL)? Did you have automated tests for the HAL layer? How did you test when the hardware platform was not even ready? Do you have a well-documented process for this kind of change? Thanks for your insight.

    Read the article

  • So can unique_ptr be used safely in stl collections?

    - by DanDan
    I am confused with unique_ptr and rvalue move philosophy. Let's say we have two collections: std::vector<std::auto_ptr<int>> autoCollection; std::vector<std::unique_ptr<int>> uniqueCollection; Now I would expect the following to fail, as there is no telling what the algorithm is doing internally and maybe making internal pivot copies and the like, thus ripping away ownership from the auto_ptr: std::sort(autoCollection.begin(), autoCollection.end()); I get this. And the compiler rightly disallows this happening. But then I do this: std::sort(uniqueCollection.begin(), uniqueCollection.end()); And this compiles. And I do not understand why. I did not think unique_ptrs could be copied. Does this mean a pivot value cannot be taken, so the sort is less efficient? Or is this pivot actually a move, which in fact is as dangerous as the collection of auto_ptrs, and should be disallowed by the compiler? I think I am missing some crucial piece of information, so I eagerly await someone to supply me with the aha! moment.

    Read the article
