Search Results

Search found 1375 results on 55 pages for 'asymptotic complexity'.


  • Sorting by dates (including nil) with NSFetchedResultsController

    - by glorifiedHacker
    In my NSFetchedResultsController, I set a sortDescriptor that sorts based on the date property of my managed objects. The problem that I have encountered (along with several others according to Google) is that nil values are sorted at the earliest end rather than the latest end of the date spectrum. I want my list to be sorted earliest, earlier, now, later, latest, nil. As I understand it, this sorting is done at the database level in SQLite and so I cannot construct my own compare: method to provide the sorting I want. I don't want to manually sort in memory, because I would have to give up all of the benefits of NSFetchedResultsController. I can't do compound sorting because the sectionNameKeyPaths are tightly coupled to the date ranges. I could write a routine that redirects indexPath requests so that section 0 in the results controller gets mapped to the last section of the tableView, but I fear that would add a lot of overhead, severely increase the complexity of my code, and be very, very error-prone. The latest idea that I am considering is to map all nil dates to the furthest future date that NSDate supports. My left brain hates this idea, as it feels more like a hack. It will also take a bit of work to implement, since checking for nil factors heavily into how I process dates in my app. I don't want to go this route without first checking for better options. Can anyone think of a better way to get around this problem?

    Read the article

  • How to implement Excel Solver functionality in C#?

    - by Vic
    Hi, I have an application in C# and I need to do some optimization calculations, like the Excel Solver add-in does. One option is certainly to write my own solver implementation, but I'm short on time, so I'm looking into libraries that already exist that can help me with this. I've been trying the Microsoft Solver Foundation, which seems pretty neat and cool; the problem is that it doesn't seem to work with the kind of calculations that I need to do. At the end of this question I'm adding the information about the calculations I need to perform and optimize. So basically my question is whether any of you know of another library that I can use for this purpose, any tutorial that can help me write my own solver, or any idea that gives me a lead on this issue. Thanks. Additional info: this is the data I need to calculate. I have 7 variables; let's call them var1, var2, ..., var7. The constraints on these variables are: each must satisfy 0 <= varn <= 0.5 (where n is the number of the variable), and the sum of all the variables must equal 1. The objective is to maximize the target formula, which in Excel looks like this: (MMULT(TRANSPOSE(L26:L32),M14:M20)) / (SQRT(MMULT(MMULT(TRANSPOSE(L26:L32),M4:S10),L26:L32))) The range that you see in this formula, L26:L32, is actually the range with the variables from above, var1, var2, ..., var7. M14:M20 and M4:S10 are ranges with data that I get from different sources; these are most likely decimal values. As I said before, I was using the Microsoft Solver Foundation: I modeled pretty much everything with it and created functions that handle the operations of the target formula, but when I try to solve the model it always fails; I think it is because of the complexity of the operations. In any case, I just wanted to show this data so you can have an idea of the kind of calculations I need to implement.
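
    For what it's worth, the target formula is the ratio of a weighted sum to the square root of a quadratic form, which general-purpose nonlinear solvers handle directly. A minimal sketch with Python's SciPy, purely to illustrate the shape of the problem (the m and S arrays are random placeholders standing in for the M14:M20 and M4:S10 ranges; a C# port would need an equivalent nonlinear solver):

        import numpy as np
        from scipy.optimize import minimize

        # Placeholder data standing in for the Excel ranges (assumed shapes):
        # m ~ M14:M20 (7-vector), S ~ M4:S10 (7x7 matrix).
        rng = np.random.default_rng(0)
        m = rng.random(7)
        A = rng.random((7, 7))
        S = A @ A.T  # symmetric positive definite, so the square root is well-defined

        def neg_objective(w):
            # Excel: MMULT(TRANSPOSE(w), m) / SQRT(MMULT(MMULT(TRANSPOSE(w), S), w))
            # negated because minimize() minimizes
            return -(w @ m) / np.sqrt(w @ S @ w)

        w0 = np.full(7, 1.0 / 7.0)  # feasible starting point: equal weights
        res = minimize(
            neg_objective, w0, method="SLSQP",
            bounds=[(0.0, 0.5)] * 7,                                       # 0 <= varn <= 0.5
            constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # sum equals 1
        )
        print(res.x, -res.fun)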

    Read the article

  • How to export Configuration of each IIS 6 Web Site using WMI

    - by Diego
    Hi all, I need to export the Web Site configuration (MetaBase) from IIS 6 for a whole server, but I need each Web Site's configuration to be saved separately. The result should be the same as what can be obtained by right-clicking on each Web Site in the IIS snap-in and selecting All Tasks > Save Configuration to a File. My idea is quite simple: 1) using WMI, retrieve a list of Web Sites; 2) also via WMI, export each configuration to a file having the same name as the Web Site. However, I'm having some difficulties, mainly due to the complexity of WMI and, in my opinion, the poor documentation of its classes. At the moment I have found out how to enumerate Virtual Directories, but not Web Sites; while it's true that this method returns Web Sites as well, I can't see how to identify them among the other virtual folders (a Web Site has different properties, which are not returned in the Virtual Directory object), and I can't find the Web Site Description property, which I'd like to use to name the exported file. I thought about a workaround for all the above issues, but it would be more a hack than a real solution... Any help is appreciated. Thanks.

    Read the article

  • Constructing human readable sentences based on a survey

    - by Joshua
    The following is a survey given to course attendees to assess an instructor at the end of the course.

    Communication Skills
    1. The instructor communicated course material clearly and accurately. (Yes/No)
    2. The instructor explained course objectives and learning outcomes. (Yes/No)
    3. In the event of not understanding course materials, the instructor was available outside of class. (Yes/No)
    4. Was instructor feedback and the grading process clear and helpful? (Yes/No)
    5. Do you feel that your oral and written skills have improved while in this course? (Yes/No)

    We would like to summarize each attendee's selections based on the choices he made. If the provided answers were [No, No, Yes, Yes, Yes], then we would summarize this as: "The instructor was not able to explain course objectives and learning outcomes clearly, but was available and usually helpful outside of class. The instructor feedback and grading process was clear and helpful, and I feel that my oral and written skills have improved because of this course." Based on the selections chosen by the attendee, the summary would be quite different; this leads to many possible summaries depending on the choices selected and the number of such questions in the survey. The questions are usually provided by the training organization. How do you come up with a generic solution so that the answers can be effectively translated into a human-readable form? I am looking for tools or libraries (Java based), or suggestions, which will help me create such human-readable output. I would like to hide the complexity from the end users as much as possible.
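
    One low-tech approach, sketched below in Python with invented phrasings: pair every question with a positive and a negative sentence fragment, pick one fragment per answer, and join the fragments into sentences. Grammar-aware realizers such as SimpleNLG (Java) are the heavier version of the same idea.

        # Each question maps to (fragment if Yes, fragment if No); the texts are
        # made up for illustration, not taken from any real survey engine.
        TEMPLATES = [
            ("communicated course material clearly and accurately",
             "did not communicate course material clearly and accurately"),
            ("explained course objectives and learning outcomes",
             "did not explain course objectives and learning outcomes"),
            ("was available outside of class",
             "was not available outside of class"),
            ("gave clear and helpful feedback",
             "did not give clear and helpful feedback"),
            ("helped me improve my oral and written skills",
             "did not noticeably improve my oral and written skills"),
        ]

        def summarize(answers):
            """answers: list of booleans, one per question, in survey order."""
            fragments = [pos if ans else neg
                         for ans, (pos, neg) in zip(answers, TEMPLATES)]
            # Naive aggregation: group the fragments into two sentences.
            return ("The instructor " + ", ".join(fragments[:3]) + ". "
                    "The instructor " + fragments[3] + " and " + fragments[4] + ".")

        print(summarize([False, False, True, True, True]))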

    Read the article

  • How to protect copyright on large web applications?

    - by Saif Bechan
    Recently I read about Myows, described as "the universal copyright management and protection app for smart creatives". It is used to protect copyright and more. Currently I am working on a large web application which is in a late testing phase. Because of the complexity of the app there are not many versions of it online, so copyright will be a huge issue for me, as much of the code is in JavaScript and is easily copied. I was glad to see that there is a company out there that provides such services, and naturally I wanted to know if there were people using it. I did not know that this type of concept was so new. Is protecting copyright a good idea for a large web application? If so, do you think Myows will be worth using, or are there better ways to achieve that? Update: Wow, there is no better person to have answered this question than the creator himself. There were some nice points made in the answer, and I think it will be a good service for people like me. In the next couple of weeks I will be looking further into the subject and start uploading my code to see how it works out. I will leave this question up because I do want some more suggestions on this topic.

    Read the article

  • Small-o(n^2) implementation of Polynomial Multiplication

    - by AlanTuring
    I'm having a little trouble with this problem that is listed at the back of my book; I'm currently in the middle of test prep, but I can't seem to locate anything regarding this in the book. Anyone got an idea? A real polynomial of degree n is a function of the form f(x) = a_n x^n + ... + a_1 x + a_0, where a_n, ..., a_1, a_0 are real numbers. In computational situations, such a polynomial is represented by the sequence of its coefficients (a_0, a_1, ..., a_n). Assuming that any two real numbers can be added/multiplied in O(1) time, design an o(n^2)-time algorithm to compute, given two real polynomials f(x) and g(x) both of degree n, the product h(x) = f(x)g(x). Your algorithm should **not** be based on the Fast Fourier Transform (FFT) technique. Please note it needs to be small-o(n^2), which means its complexity must be sub-quadratic. The obvious solution that I keep finding is indeed the FFT, but of course I can't use that. There is another method I have found called convolution, where you take polynomial A to be a signal and polynomial B to be a filter: A passed through B yields a shifted signal that has been "smoothed" by the filter, and the result is A*B. This is supposed to work in O(n log n) time. Of course I am completely unsure about the implementation. If anyone has any ideas of how to achieve a small-o(n^2) implementation, please do share, thanks.
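
    For reference, the textbook non-FFT answer is Karatsuba's divide-and-conquer multiplication, which replaces four half-size products with three and therefore runs in O(n^log2(3)), about O(n^1.585), which is small-o(n^2). A minimal sketch in Python (coefficient lists with a[i] the coefficient of x^i; the tiny base-case cutoff is a tuning choice, and a real implementation would use a larger one, e.g. 32, for speed):

        def poly_add(p, q):
            # coefficient-wise sum; pads the shorter list
            if len(p) < len(q):
                p, q = q, p
            out = list(p)
            for i, c in enumerate(q):
                out[i] += c
            return out

        def karatsuba(a, b):
            if not a or not b:
                return []
            n = max(len(a), len(b))
            if n <= 2:  # schoolbook multiplication below the cutoff
                out = [0] * (len(a) + len(b) - 1)
                for i, x in enumerate(a):
                    for j, y in enumerate(b):
                        out[i + j] += x * y
                return out
            m = n // 2
            a_lo, a_hi = a[:m], a[m:]
            b_lo, b_hi = b[:m], b[m:]
            lo = karatsuba(a_lo, b_lo)    # a_lo * b_lo
            hi = karatsuba(a_hi, b_hi)    # a_hi * b_hi
            mid = karatsuba(poly_add(a_lo, a_hi), poly_add(b_lo, b_hi))
            # Three recursive products instead of four:
            # a*b = lo + (mid - lo - hi) * x^m + hi * x^(2m)
            for i, c in enumerate(lo):
                mid[i] -= c
            for i, c in enumerate(hi):
                mid[i] -= c
            out = [0] * (len(a) + len(b) - 1)
            for i, c in enumerate(lo):
                out[i] += c
            for i, c in enumerate(mid):
                out[i + m] += c
            for i, c in enumerate(hi):
                out[i + 2 * m] += c
            return out

        # (1 + 2x + 3x^2 + 4x^3)^2
        print(karatsuba([1, 2, 3, 4], [1, 2, 3, 4]))  # [1, 4, 10, 20, 25, 24, 16]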

    Read the article

  • Help understanding the Single Responsibility Principle

    - by user204588
    I'm trying to understand what a responsibility actually is, so I want to use an example of something I'm currently working on. I have an app that imports product information from one system into another system. The user of the app gets to choose various settings for which product fields in one system they want to use in the other system. So I have a class, say ProductImporter, and its responsibility is to import products. This class is large, probably too large. The methods in this class are complex; one example would be getDescription. This method doesn't simply grab a description from the other system, but sets a product description based on various settings chosen by the user. If I were to add a setting and a new way to get a description, this class would change. So, is that two responsibilities? Is there one that imports products and one that gets a description? It would seem this way, but then almost every method I have would be in its own class, and that seems like overkill. I really need a good description of this principle because it's hard for me to completely understand. I don't want needless complexity.
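
    A sketch of one possible split, with invented names, just to make the idea concrete: the importer keeps the single responsibility of moving products, while the description rules live behind their own class, so adding a setting changes the builder and nothing else. The usual test is that a responsibility is a reason to change, not a method.

        class DescriptionBuilder:
            """One responsibility: turning raw product data plus user settings
            into a description. New settings change only this class."""

            def __init__(self, settings):
                self.settings = settings

            def build(self, source_product):
                description = source_product.get("description", "")
                if self.settings.get("append_sku"):
                    description += f" (SKU: {source_product.get('sku', 'n/a')})"
                if self.settings.get("uppercase"):
                    description = description.upper()
                return description

        class ProductImporter:
            """A different responsibility: moving products from the source
            system to the target system."""

            def __init__(self, source, target, description_builder):
                self.source = source
                self.target = target
                self.description_builder = description_builder

            def run(self):
                for product in self.source.fetch_products():
                    product["description"] = self.description_builder.build(product)
                    self.target.save(product)

        # Tiny in-memory stand-ins for the two systems, to make this runnable:
        class ListSource:
            def __init__(self, products): self._products = products
            def fetch_products(self): return self._products

        class ListTarget:
            def __init__(self): self.saved = []
            def save(self, product): self.saved.append(product)

        target = ListTarget()
        ProductImporter(ListSource([{"description": "Blue widget", "sku": "W-1"}]),
                        target,
                        DescriptionBuilder({"append_sku": True})).run()
        print(target.saved)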

    Read the article

  • How to target multiple versions of .NET Framework from MSBuild?

    - by McKAMEY
    I am improving the builds for an open source project which currently supports .NET Framework v2.0, v3.5, and now v4.0. Up until now, I've restricted myself to v2.0 to ensure compatibility, but with VS2010 I am interested in having real targeted builds. I'm looking for some guidance on how to edit the MSBuild csproj/sln to be able to cleanly produce builds for each target. I'm willing to have complexity in the csproj and in a batch file to control the build. My goal is to be able to have a command line script that could produce the builds without needing Visual Studio installed, but only the necessary .NET Framework(s). Ideally, I'd like to minimize dependencies on additional software. I notice that a lot of people use NAnt (e.g. Ninject builds many targets with NAnt) but I'm unsure if this is necessary or if they are just more familiar with it. I'm pretty sure this can be done but am having trouble finding a definitive guide on setting it up and best practices. Bonus: my next step after getting this set up will be to better support Mono Framework. Any help on doing this same thing for Mono would be much appreciated.

    Read the article

  • SQL 2008: Using separate tables for each datatype to return single row

    - by Thomas C
    Hi all, I thought I'd be flexible this time around and let the users decide what contact information they wish to store in their database. In theory it would look like a single row containing, for instance: name, address, zipcode, Category X, List items A. Example FieldType table defining the datatypes available to a user:

    FieldTypeID, FieldTypeName, TableName
    1, "Integer", "tblContactInt"
    2, "String50", "tblContactStr50"
    ...

    A user then defines his fields in the FieldDefinition table:

    FieldDefinitionID, FieldTypeID, FieldDefinitionName
    11, 2, "Name"
    12, 2, "Adress"
    13, 1, "Age"

    Finally we store the actual contact data in separate tables depending on its datatype. The master table only contains the ContactID.

    tblContact: ContactID
    21
    22

    tblContactStr50: ContactStr50ID, ContactID, FieldDefinitionID, ContactStr50Value
    31, 21, 11, "Person A"
    32, 21, 12, "Adress of person A"
    33, 22, 11, "Person B"

    tblContactInt: ContactIntID, ContactID, FieldDefinitionID, ContactIntValue
    41, 22, 13, 27

    Question: is it possible to return the content of these tables in two rows like this?

    ContactID, Name, Adress, Age
    21, "Person A", "Adress of person A", NULL
    22, "Person B", NULL, 27

    I have looked into using COALESCE and temp tables, wondering if this is at all possible. Even if it is: maybe I'm only adding complexity whilst sacrificing performance for a benefit in data storage and user-definition options. What do you think? Best Regards /Thomas C
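
    A sketch of the standard EAV pivot, run here on SQLite from Python purely so it is executable as-is; the same MAX(CASE ...) ... GROUP BY pattern works in SQL Server 2008 (where PIVOT is an alternative), and a production version would generate the CASE branches dynamically from FieldDefinition rather than hard-coding IDs 11-13:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE tblContact      (ContactID INTEGER);
        CREATE TABLE tblContactStr50 (ContactStr50ID INTEGER, ContactID INTEGER,
                                      FieldDefinitionID INTEGER, ContactStr50Value TEXT);
        CREATE TABLE tblContactInt   (ContactIntID INTEGER, ContactID INTEGER,
                                      FieldDefinitionID INTEGER, ContactIntValue INTEGER);
        INSERT INTO tblContact VALUES (21), (22);
        INSERT INTO tblContactStr50 VALUES (31, 21, 11, 'Person A'),
                                           (32, 21, 12, 'Adress of person A'),
                                           (33, 22, 11, 'Person B');
        INSERT INTO tblContactInt   VALUES (41, 22, 13, 27);
        """)

        # One output column per field definition: the aggregate collapses the
        # joined value rows back into a single row per contact.
        rows = conn.execute("""
        SELECT c.ContactID,
               MAX(CASE WHEN s.FieldDefinitionID = 11 THEN s.ContactStr50Value END) AS Name,
               MAX(CASE WHEN s.FieldDefinitionID = 12 THEN s.ContactStr50Value END) AS Adress,
               MAX(CASE WHEN i.FieldDefinitionID = 13 THEN i.ContactIntValue END)   AS Age
        FROM tblContact c
        LEFT JOIN tblContactStr50 s ON s.ContactID = c.ContactID
        LEFT JOIN tblContactInt   i ON i.ContactID = c.ContactID
        GROUP BY c.ContactID
        """).fetchall()

        for row in rows:
            print(row)  # (21, 'Person A', 'Adress of person A', None) / (22, 'Person B', None, 27)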

    Read the article

  • How to pass binary data between two apps using Content Provider?

    - by Viktor
    I need to pass some binary data between two Android apps using a Content Provider (sharedUserId is not an option). I would prefer not to pass the data (a savegame stored as a file, small in size, < 20k) as a file (i.e. overriding openFile()), since this would necessitate some complicated temp-file scheme to cope with concurrent content provider accesses while a game is running. I would like to read the file into memory under a mutex lock and then pass the binary array in the simplest way possible. How do I do this? It seems creating a file in memory is not a possibility due to the return type of openFile(). query() needs to return a Cursor. Using MatrixCursor is not possible since it applies toString() to all stored objects when reading it. What do I need to do? Implement a custom Cursor? That class has 30 abstract methods. Do I read the file, put it in a SQLite db and return the cursor? The complexity of this seemingly simple task is mind-boggling.

    Read the article

  • Calling function using 'new' is less expensive than without it?

    - by Matthew Taylor
    Given this very familiar model of prototypal construction:

        function Rectangle(w,h) {
            this.width = w;
            this.height = h;
        }
        Rectangle.prototype.area = function() {
            return this.width * this.height;
        };

    Can anyone explain why calling "new Rectangle(2,3)" is consistently 10x FASTER than calling "Rectangle(2,3)" without the 'new' keyword? I would have assumed that because new adds more complexity to the execution of a function by getting prototypes involved, it would be slower. Example:

        var myTime;
        function startTrack() {
            myTime = new Date();
        }
        function stopTrack(str) {
            var diff = new Date().getTime() - myTime.getTime();
            println(str + ' time in ms: ' + diff);
        }
        function trackFunction(desc, func, times) {
            var i;
            if (!times) times = 1;
            startTrack();
            for (i = 0; i < times; i++) {
                func();
            }
            stopTrack('(' + times + ' times) ' + desc);
        }

        var TIMES = 1000000;
        trackFunction('new rect classic', function() { new Rectangle(2,3); }, TIMES);
        trackFunction('rect classic (without new)', function() { Rectangle(2,3); }, TIMES);

    Yields (in Chrome):

        (1000000 times) new rect classic time in ms: 33
        (1000000 times) rect classic (without new) time in ms: 368
        (1000000 times) new rect classic time in ms: 35
        (1000000 times) rect classic (without new) time in ms: 374
        (1000000 times) new rect classic time in ms: 31
        (1000000 times) rect classic (without new) time in ms: 368

    Read the article

  • Maintaining state and data context between requests in ASP.NET + EF4

    - by Nick
    I have an EF4/ASP.NET web application that is structured to use POCOs and generic repositories, based essentially on this excellent article. The application is relatively sophisticated, with one page that involves selection and linking of multiple entities to build up a complex user profile. This requires access to multiple entity types (20 or so) and associated repositories across multiple posts. When a repository is first accessed it uses the existing data context if one exists, else it creates a new context. The problem is that if the lifetime of the context is only per-request (as suggested in the article) then you have to deal with multiple contexts and the complexity around detaching and attaching entities between contexts. My solution is to share the context between posts by creating a single view model that includes all required repositories (initialised to share the same context) plus any associated data, storing this model in a Session variable, and retrieving it from Session on subsequent page requests, therefore maintaining the same context across all posts until the profile is saved. This works fine, BUT I am concerned that I don't actually know exactly what is stored in the model Session variable, or more importantly the size of the Session variable. So two questions, I suppose: firstly, should I look for a better solution to handle the shared-context-across-posts issue (any suggestions welcome)? And secondly, what is actually stored in the Session when it includes a repository plus context? Any help appreciated!

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:

    +---------+---------+------------+---------------------+---------------------+--------+------+
    | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
    +---------+---------+------------+---------------------+---------------------+--------+------+
    | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
    +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation reprocessing we would like to do would transform the table content based on a granularity of minutes, rather than the current production-event ("Event Start Time" and "Event End Time") granularity. The resulting reprocessing of existing table rows would look like:

    +---------+---------+------------+---------------------+--------+
    | User ID | Work ID | Machine ID | Production Minute   | Output |
    +---------+---------+------------+---------------------+--------+
    | 080025  | ABC123  | M01        | 2008-01-24 16:19    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:20    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:21    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:22    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:23    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:24    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:25    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:26    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:27    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:28    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:29    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:30    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:31    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:32    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:33    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:34    | 133    |
    +---------+---------+------------+---------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and change the granularity to minutes, eliminating the now-redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides Output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL INSERT statement (or otherwise entirely in MySQL)? I am thinking of an INSERT INTO ... SELECT construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
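
    This can be done entirely in MySQL by joining the event table against a persistent integers/minutes helper table in an INSERT INTO ... SELECT, but the expansion logic is easier to see in code. A hedged sketch in Python (column order follows the post; the round-half-up rule is an assumption chosen to reproduce the 133 in the example):

        from datetime import datetime, timedelta

        def disaggregate(user_id, work_id, machine_id, start, end, output):
            """Expand one production event into one row per elapsed minute."""
            first = start.replace(second=0, microsecond=0)
            last = end.replace(second=0, microsecond=0)
            n = int((last - first).total_seconds() // 60) + 1  # minute difference + 1
            per_minute = int(output / n + 0.5)  # constant rate, rounded half up
            return [(user_id, work_id, machine_id,
                     (first + timedelta(minutes=i)).strftime("%Y-%m-%d %H:%M"),
                     per_minute)
                    for i in range(n)]

        rows = disaggregate("080025", "ABC123", "M01",
                            datetime(2008, 1, 24, 16, 19, 15),
                            datetime(2008, 1, 24, 16, 34, 45), 2120)
        for r in rows:  # 16 rows, 16:19 through 16:34, Output 133 each
            print(r)

    These tuples would feed a single batched INSERT; in pure MySQL the same effect comes from INSERT INTO ... SELECT joined to a table of integers 0..1439, adding that many minutes to Event Start Time and filtering on the event's minute count.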

    Read the article

  • Perform Grouping of Resultsets in Code, not on Database Level

    - by NinjaBomb
    Stackoverflowers, I have a resultset from a SQL query in the form of:

    Category  Column2  Column3
    A         2        3.50
    A         3        2
    B         3        2
    B         1        5
    ...

    I need to group the resultset based on the Category column and sum the values of Column2 and Column3. I have to do it in code because I cannot perform the grouping in the SQL query that gets the data, due to the complexity of the query (long story). This grouped data will then be displayed in a table. I have it working for a specific set of values in the Category column, but I would like a solution that handles any possible values that appear in the Category column. I know there has to be a straightforward, efficient way to do it, but I cannot wrap my head around it right now. How would you accomplish it? EDIT: I have attempted to group the result in SQL using the exact same grouping query suggested by Thomas Levesque, and both times our entire RDBMS crashed trying to process the query. I was under the impression that LINQ was not available until .NET 3.5. This is a .NET 2.0 web application, so I did not think it was an option. Am I wrong in thinking that? EDIT: Starting a bounty because I believe this would be a good technique to have in the toolbox, no matter where the different resultsets come from. I believe knowing the most concise way to group any two somewhat similar sets of data in code (without .NET LINQ) would be beneficial to more people than just me.
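
    For reference, the usual shape of the answer is a single pass that accumulates sums in a dictionary keyed by category, which handles any set of category values. Sketched in Python for brevity; the same pattern works in .NET 2.0 with a Dictionary<string, double[]> and no LINQ:

        rows = [
            ("A", 2, 3.50),
            ("A", 3, 2),
            ("B", 3, 2),
            ("B", 1, 5),
        ]

        totals = {}  # category -> [sum of Column2, sum of Column3]
        for category, col2, col3 in rows:
            bucket = totals.setdefault(category, [0, 0])
            bucket[0] += col2
            bucket[1] += col3

        for category in sorted(totals):
            col2, col3 = totals[category]
            print(category, col2, col3)  # A 5 5.5 / B 4 7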

    Read the article

  • Objective-C Implementation Pointers

    - by Dwaine Bailey
    Hi, I am currently writing an XML parser that parses a lot of data, with a lot of different nodes (the XML isn't designed by me, and I have no control over the content...). Anyway, it currently takes an unacceptably long time to download and read in (about 13 seconds), and so I'm looking for ways to increase the efficiency of the read. I've written a function to create hash values, so that the program no longer has to do a lot of string comparison (just NSUInteger comparison), but this still isn't reducing the complexity of the read-in... So I thought maybe I could create an array of IMPs, so that I could then do something like:

        for (int i = 0; i < [hashValues count]; i++) {
            if (currHash == [[hashValues objectAtIndex:i] unsignedIntValue]) {
                [impArray objectAtIndex:i];
            }
        }

    Or something like that. The only problem is that I don't know how to actually make the call to the IMP function. I've read that I perform the selector that an IMP defines by going

        IMP tImp = [impArray objectAtIndex:i];
        tImp(self, @selector(methodName));

    But, if I need to know the name of the selector anyway, what's the point? Can anybody help me out with what I want to do? Or even just some more ways to increase the efficiency of the parser...

    Read the article

  • What options to parse a DTD using PHP

    - by Chadwick
    I need to parse DTDs using PHP and am hoping there's a simple library to help out. Each DTD has numerous <!ENTITY... and <!-- Comment... elements, which I need to act upon. Note that I do not need to validate anything against these DTDs, simply parse them as data files themselves. A few options I've looked at:
    - James Clark's SP, which is an option of last resort, since I'd like to avoid the complexity of building/installing/configuring code external to PHP. I'm not sure it's even possible in my situation.
    - PEAR has an XML_DTD_Parser, which requires installing/configuring PEAR and a number of PEAR modules, which I'm also not sure is possible, and would rather avoid. Has anyone used it with success?
    - PHP XML Classes has the class_path_parser, which another site suggested, but it fails to read ENTITY elements. It appears to use PHP's built-in XML parsing capabilities, which use expat.
    PHP's DOMDocument will validate against a DTD, so it must be able to read them, though at first glance I don't see how to get at the DTD parser directly.

    Read the article

  • VB.NET vs. C#.NET?

    - by Onion-Knight
    Hello everyone, The company I work for has all of its legacy ("legacy" being used rather liberally in this context) code in VB.NET. They have about 6000+ lines of VB.NET code, so all of the developers are comfortable with it. We have started to develop a new product, and are finding that some modules are easier to complete in C# than in VB.NET, such as interprocess communication via WCF. The things our product will eventually need to do are as follows:
    - Communicate via IPC between Windows Services, Silverlight, and WinForms
    - Handle parallelization, and all the complexity that comes along with it
    - Windows Service and WinForms development
    - ASP.NET, AJAX, and Silverlight development
    - Database (SQL) access
    - Lots of event handling (sync and async events)
    My question is: given the type of work we will be doing to complete our product, are there features of one language that will make life easier that the other does not have? And if so, is it worth asking the developers to switch to a language they are less comfortable with? I was hoping to keep this as objective as possible by listing exactly what type of work we will be doing with the product. Please don't turn this into a VB/C# holy war. Thanks, Onion-Knight

    Read the article

  • searching for a programming platform with hot code swap

    - by Andreas
    I'm currently brainstorming over how to upgrade a program while it is running. (Not while debugging; a "production" system.) One thing that is required for this is to actually submit the changed source code or compiled byte code into the running process. Pseudo code:

        var method = typeof(MyClass).GetMethod("Method1");
        var content = // get it from a database (bytecode or source code):
                      // SELECT content FROM methods WHERE id=? AND version=?
        method.SetContent(content);

    At first, I want the system to work without the complexity of object orientation. That leads to the following requirements:
    - change the source code or byte code of a function
    - drop functions
    - add new functions
    - change the signature of a function
    With .NET (and others) I could inject a class via an IoC container and could thus change the source code. But the loading would be cumbersome, because everything has to be in an Assembly or created via Emit. Maybe with Java this would be easier? The whole ClassLoader is replaceable, I think. With JavaScript I could achieve many of the goals: simply eval a new function (MyMethod_V25) and assign it to MyClass.prototype.MyMethod. I think one can also drop functions somehow with "delete". Which general-purpose platform can handle such things?
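
    Python is one mainstream platform where most of that list is trivial, because functions are ordinary objects bound to names: replacing, adding, or dropping one at runtime is a plain assignment. A small illustration (the new source is a hard-coded string standing in for the database lookup); Erlang, with hot code swap built into the VM, is the other usual answer:

        class MyClass:
            def method1(self):
                return "v1"

        obj = MyClass()
        print(obj.method1())  # -> v1

        # Pretend this string came from:
        # SELECT content FROM methods WHERE id=? AND version=?
        content = """
        def method1(self):
            return "v2 (hot-swapped)"
        """

        namespace = {}
        exec(content, namespace)                # compile the new source at runtime
        MyClass.method1 = namespace["method1"]  # rebind: existing instances see it too
        print(obj.method1())  # -> v2 (hot-swapped)

        del MyClass.method1   # dropping a function is also just a statement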

    Read the article

  • Search implementation dilemma: full text vs. plain SQL

    - by Ethan
    I have a MySQL/Rails app that needs search. Here's some info about the data: Users search within their own data only, so searches are narrowed down by user_id to begin with. Each user will have up to about five thousand records (they accumulate over time). I wrote out a typical user's records to a text file; the file size is 2.9 MB. Search has to cover two columns: title and body. title is a varchar(255) column; body is of column type text. This will be lightly used: if I averaged a few searches per second, that would be surprising. It's running on a 500 MB CentOS 5 VPS machine. I don't want relevance ranking or any kind of fuzziness: searches should be for exact strings and reliably return all records containing the string. Simple date order, newest to oldest. I'm using the InnoDB table type. I'm looking at plain SQL search (through the searchlogic gem) or full-text search using Sphinx and the Thinking Sphinx gem. Sphinx is very fast and Thinking Sphinx is cool, but it adds complexity: a daemon to maintain, cron jobs to maintain the index. Can I get away with plain SQL search for a small-scale app?
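
    For context, a sketch of what the plain-SQL option amounts to (SQLite here so it runs as-is; on MySQL/InnoDB the same query leans on an index on (user_id, created_at), with the LIKE filter scanning only that user's few thousand rows):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE records (
            id INTEGER PRIMARY KEY, user_id INTEGER,
            title TEXT, body TEXT, created_at TEXT)""")
        conn.execute("CREATE INDEX idx_user_date ON records (user_id, created_at)")
        conn.executemany(
            "INSERT INTO records (user_id, title, body, created_at) VALUES (?, ?, ?, ?)",
            [(1, "Meeting notes", "discussed the budget", "2010-03-01"),
             (1, "Todo", "budget review on Friday", "2010-03-02"),
             (2, "Other user", "budget", "2010-03-03")])

        term = "budget"
        pattern = f"%{term}%"   # exact substring match, no fuzziness or ranking
        rows = conn.execute(
            """SELECT id, title FROM records
               WHERE user_id = ? AND (title LIKE ? OR body LIKE ?)
               ORDER BY created_at DESC""",
            (1, pattern, pattern)).fetchall()
        print(rows)  # user 1's matches only, newest first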

    Read the article

  • What is the best software design to use in this scenario

    - by domdefelice
    I need to generate HTML snippets using jQuery. The creation of those snippets depends on some data. The data is stored server-side, in session (where PHP is used). At the moment I have achieved this by:
    - retrieving the data from the server via AJAX, in the form of JSON
    - building the snippets via specific JavaScript functions that read that data
    The problem is that the complexity of the data is growing, and hence serializing it into JSON is getting even more difficult, since I can't do it automatically. I can't do it automatically because some of the information is sensitive, so I generate a "stripped" version to send to the client. I know it is difficult to understand without any code to read, but I am hoping this is a common scenario and would be glad for any tip, suggestion, or even design pattern you can give me. Should I store both a complete and a stripped version of the data on the server and then use some library to automatically generate the JSON from the stripped data? But this also means I have to keep the two data structures synchronized. Or maybe I could move the logic server-side, this way avoiding sending the data. But this means sending JavaScript code (since I rely on jQuery); maybe not a good idea. Feel free to ask me for more details if this is not clear. Thank you for any help.
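
    One common pattern for the sensitive-fields problem, sketched in Python with invented field names: derive the stripped view from the full data through a single whitelist at serialization time, so there is only one copy of the data and one place that decides what the client may see (PHP's json_encode over an array filtered the same way is the direct equivalent):

        import json

        # Which fields are safe for the client, per record kind; keeping this
        # policy in exactly one place avoids the two-copies synchronization problem.
        PUBLIC_FIELDS = {
            "user": {"name", "avatar_url"},
            "order": {"id", "items", "total"},
        }

        def to_client_json(kind, record):
            """Serialize only the whitelisted fields of a record."""
            allowed = PUBLIC_FIELDS[kind]
            return json.dumps({k: v for k, v in record.items() if k in allowed})

        user = {"name": "Ada", "avatar_url": "/a.png", "email": "secret@example.com"}
        print(to_client_json("user", user))  # the email never reaches the client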

    Read the article

  • Help needed in grokking password hashes and salts

    - by javafueled
    I've read a number of SO questions on this topic, but grokking the applied practice of storing a salted hash of a password eludes me. Let's start with some ground rules:
    - a password, "foobar12" (we are not discussing the strength of the password)
    - a language, Java 1.6 for this discussion
    - a database: PostgreSQL, MySQL, SQL Server, or Oracle
    Several options are available for storing the password, but I want to think about one (1): store the password hashed with a random salt in the DB, in one column. Found on SO and elsewhere is the automatic fail of plaintext, MD5/SHA1, and dual columns. The latter have pros and cons. MD5/SHA1 is simple: MessageDigest in Java provides MD5 and SHA1 (through SHA512 in modern implementations, certainly 1.6). Additionally, most of the RDBMSs listed provide methods for MD5 encryption functions on inserts, updates, etc. The problems become evident once one groks "rainbow tables" and MD5 collisions (and I've grokked these concepts). Dual-column solutions rest on the idea that the salt does not need to be secret (grok it). However, a second column introduces a complexity that might not be a luxury if you have a legacy system with one (1) column for the password, where the cost of updating the table and the code could be too high. But it is storing the password hashed with a random salt in a single DB column that I need to understand better, with practical application. I like this solution for a couple of reasons: a salt is expected, and it respects legacy boundaries. Here's where I get lost: if the salt is random and hashed with the password, how can the system ever match the password? I have a theory on this, and as I type I might be grokking the concept: given a random salt of 128 bytes and a password of 8 bytes ('foobar12'), it could be programmatically possible to remove the part of the hash that was the salt, by hashing a random 128-byte salt and getting the substring of the original hash that is the hashed password, then re-hashing to match using the hash algorithm...??? So... any takers on helping. :) Am I close?
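
    The piece the theory above is missing: the salt is never recovered from the hash; it is stored next to the hash in the same column and read back at login, then the login attempt is re-hashed with that same salt and compared. A minimal sketch, with Python's hashlib/os.urandom standing in for Java's MessageDigest/SecureRandom and an invented salt$hash format:

        import hashlib
        import hmac
        import os

        def hash_password(password: str) -> str:
            """Return 'salt$hash', both hex, for storage in a single column."""
            salt = os.urandom(16)  # random, generated per password
            digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
            return salt.hex() + "$" + digest

        def verify_password(password: str, stored: str) -> bool:
            salt_hex, expected = stored.split("$")
            salt = bytes.fromhex(salt_hex)  # salt read back out of the column
            digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
            return hmac.compare_digest(digest, expected)  # constant-time compare

        stored = hash_password("foobar12")          # e.g. '3f2a...$9c41...'
        print(verify_password("foobar12", stored))  # True
        print(verify_password("foobar13", stored))  # False

    A single fast hash round is still brute-forceable even when salted; in practice a deliberately slow KDF such as bcrypt or PBKDF2 stores salt and hash together in one column in the same way.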

    Read the article

  • MongoDB - proper use of collections?

    - by zmg
    In Mongo, my understanding is that you can have databases and collections. I'm working on a social-type app that will have blogs and comments (among other things), and had previously been using MySQL and pretty heavy partitioning in an attempt to limit possible concurrency issues. With MySQL I've stuffed all my user data into a _user database with several tables to further partition the data (blogs, pages, etc). My immediate reaction with Mongo would be to create a 'users' database with one collection per user. In this way user 'zach's blog entries would go into the 'zach' collection, with associated comments and such becoming sub-objects in the same collection. Basically like dynamically creating one table per user in MySQL, but apparently without the complexity and limitations that might impose. Of course, since I haven't really used Mongo before, I'm having trouble gauging the (ahem...) quality of this idea and the potential problems it might cause down the road. I'd like user data to be treated a lot like a user's directory in a *nix environment, where user-created/non-shared (mostly) data gets put into one place (currently with MySQL that would be the appname_users database mentioned above). Most of the user data will be specific to the user's page(s). Some of the user data which is queried across all site users (searchable user profiles) is currently kept in a separate database/table, and I expect things like this could be put into an appname_system database and be broken up into collections and/or application-specific databases (appname_profiles). Anyway, since the available documentation on this is currently a little thin and my experience is extremely limited, I thought I might find a little guidance from someone with a better working understanding of the system. On the plus side, I'd really already been attempting to treat MySQL as a schema-less document store, and doing this with Mongo seems much more intuitive/sane/rational, so I'm really looking forward to getting started. Thanks, Zach
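
    For contrast, a hedged sketch of the other common layout, one shared collection per entity type with a user field, which keeps the number of collections fixed no matter how many users sign up (pymongo; field names invented; assumes a local mongod):

        from pymongo import DESCENDING, MongoClient

        client = MongoClient("mongodb://localhost:27017")  # assumed local instance
        db = client["appname"]

        # One 'blogs' collection for all users; per-user isolation is a query
        # filter, and comments live as sub-documents exactly as envisioned above.
        db.blogs.create_index([("user", 1), ("posted", DESCENDING)])
        db.blogs.insert_one({
            "user": "zach",
            "title": "First post",
            "body": "...",
            "comments": [{"by": "anna", "text": "nice"}],
        })

        for post in db.blogs.find({"user": "zach"}).sort("posted", DESCENDING):
            print(post["title"])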

    Read the article

  • Fast Lightweight Image Comparisson Metric Algorithm

    - by gav
    Hi All, I am developing an application for the Android platform which contains 1000+ image filters that have been 'evolved'. When a user selects a photo I want to present the most relevant filters first, where 'relevance' should depend on previous use cases. I have already developed tools that register when a filtered image is saved; this combination of filter and image can be seen as the training data for my system. The issue is that the comparison must occur between selecting an image and the next screen coming up. From a UI point of view I need the whole process to take less than 4 seconds: select an image, obtain a metric to use for similarity, check against use cases, return the 6 closest matches. I figure with 4 seconds I can use animations and progress dialogs to keep the user happy. Due to platform constraints I am fairly limited in the computational expense of the algorithm. I have implemented a technique, adapted from various online tutorials, for running C code on the G1, and hence this language is available. Specific constraints:
    - Qualcomm® MSM7201A™, 528 MHz processor
    - 320 x 480 pixel bitmap in 32-bit ARGB
    - ~2 seconds of computational time for the native method to get the metric
    - ~2 seconds to compare the metric of the current image with the training data
    This is an academic project, so all ideas are welcome; anything you can think of or have heard about would be of interest to me. My ideas:
    - I want to keep the complexity down (O(n*m)?) by using pixel data only, rather than a neighbourhood function
    - I was looking at using the colour histogram/greyscale histogram/texture/entropy of the image, combining them to make the measure
    There will be an obvious loss of information, but I need the resultant metric to be substantially smaller than the memory footprint of the image (~0.512 MB). As I said, any ideas to direct my research would be fantastic. Kind regards, Gavin
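
    A sketch of the colour-histogram part of the idea, in plain Python for clarity (the same loop is what the native C pass would do inside the ~2-second budget). Eight bins per channel give a 512-count signature, far under the 0.512 MB target, and comparing two signatures is O(512); normalising the counts by the pixel total would make the metric robust to image size. All names below are illustrative:

        def argb_histogram(pixels):
            """Coarse colour histogram from 32-bit ARGB ints: 8 bins per
            channel, so the whole signature is 512 counts."""
            hist = [0] * 512
            for p in pixels:
                r = ((p >> 16) & 0xFF) >> 5   # keep the top 3 bits of each channel
                g = ((p >> 8) & 0xFF) >> 5
                b = (p & 0xFF) >> 5
                hist[(r << 6) | (g << 3) | b] += 1
            return hist

        def l1_distance(h1, h2):
            """Smaller means more similar; a cheap bin-by-bin comparison."""
            return sum(abs(a - b) for a, b in zip(h1, h2))

        # Toy usage: two tiny 'images' as flat ARGB pixel lists.
        red_img  = [0xFFFF0000] * 100
        pink_img = [0xFFFF8080] * 100
        print(l1_distance(argb_histogram(red_img), argb_histogram(pink_img)))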

    Read the article

  • latin1/unicode conversion problem with ajax request and special characters

    - by mfn
    The server is PHP5 and the HTML charset is latin1 (iso-8859-1). With regular form POST requests, there's no problem with "special" characters like the em dash (–), for example. Although I don't know for sure why, it works; probably because there exists a representable character for the browser at char code 150 (which is what I see in PHP on the server, via ord, for a literal em dash). Now our application also provides a kind of preview mechanism via AJAX: the text is sent to the server and a complete HTML preview is sent back. However, the ordinary char-code-150 em dash character, when sent via AJAX (tested with GET and POST), mutates into something more: %E2%80%93. I see this already in the Apache log. According to various sources I found, e.g. http://www.tachyonsoft.com/uc0020.htm , this is the UTF-8 byte representation of the em dash, and my current understanding is that JavaScript handles everything in Unicode. However, within my app I need everything in latin1. Simply said: just like a regular POST request would have given me that em dash as char code 150, I need the same for the translated UTF-8 representation. That's where I'm failing: in PHP on the server I try to decode it with either utf8_decode(...) or iconv('UTF-8', 'iso-8859-1', ...), but in both cases I get a regular ? representing this character (and iconv also throws me a notice: "Detected an illegal character in input string"). My goal is to find an automated solution, but maybe I'm trying to be überclever in this case? I've found other people simply doing manual replacement with a predefined input/output set, but that would always give me the feeling that I could lose characters. The observant reader will note that I'm behind on understanding the full complexity of Unicode and character conversion, and I definitely prefer to understand the thing as a whole rather than do a simple manual mapping. Thanks.
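
    The missing step: %E2%80%93 is the UTF-8 encoding of U+2013, a character that has no slot in ISO-8859-1 at all, which is why iconv('UTF-8', 'iso-8859-1', ...) reports an illegal character. Byte 150 comes from windows-1252, the superset that browsers actually assume when a page declares latin1. A round-trip demonstration in Python for neutrality (in PHP, iconv('UTF-8', 'Windows-1252', $s) should be the direct equivalent):

        raw = b"\xe2\x80\x93"       # the bytes PHP sees in the AJAX request
        char = raw.decode("utf-8")  # -> '\u2013', the dash character

        # ISO-8859-1 only covers U+0000..U+00FF, so this raises UnicodeEncodeError:
        try:
            char.encode("iso-8859-1")
        except UnicodeEncodeError as e:
            print("latin1 proper has no such character:", e)

        # windows-1252 is what browsers treat 'latin1' pages as in practice:
        print(char.encode("cp1252"))  # -> b'\x96', i.e. char code 150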

    Read the article
