Search Results

Search found 4125 results on 165 pages for 'hash cluster'.

Page 77/165

  • Apache basic auth, mod_authn_dbd and password salt

    - by Cristian Vrabie
    Using Apache mod_auth_basic and mod_authn_dbd you can authenticate a user by looking up that user's password in the database. I can see that working if the password is stored in the clear, but what if we use a random string as a salt (also stored in the database) and then store the hash of the concatenation? mod_authn_dbd requires you to specify a query that selects the password; it does not let you decide whether the user is authenticated or not. So you cannot use that query to concatenate the user-provided password with the salt and then compare the result with the stored hash.

        AuthDBDUserRealmQuery "SELECT password FROM authn WHERE user = %s AND realm = %s"

    Is there a way to make this work?

    Read the article

  • How to use SQL - INSERT...ON DUPLICATE KEY UPDATE?

    - by Probocop
    Hi, I have a script which captures tweets and puts them into a database. I will be running the script on a cron job and then displaying the tweets on my site from the database, to avoid hitting the limit on the Twitter API. I don't want duplicate tweets in my database; I understand I can use INSERT ... ON DUPLICATE KEY UPDATE to achieve this, but I don't quite understand how to use it. My database structure is as follows:

        Table - Hash
        id (auto_increment)
        tweet
        user
        user_url

    And currently my SQL to insert is as follows:

        $tweet = $clean_content[0];
        $user_url = $clean_uri[0];
        $user = $clean_name[0];
        $query = 'INSERT INTO hash (tweet, user, user_url) VALUES ("'.$tweet.'", "'.$user.'", "'.$user_url.'")';
        mysql_query($query);

    How would I correctly use INSERT ... ON DUPLICATE KEY UPDATE to insert only if a tweet doesn't exist, and update it if it does? Thanks
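
    A minimal sketch of the pattern (Python with a MySQLdb-style driver rather than the original PHP, purely for illustration; it assumes a UNIQUE index exists on the tweet column, without which ON DUPLICATE KEY UPDATE never fires):

        # Assumption: a unique key exists, e.g. ALTER TABLE hash ADD UNIQUE KEY uniq_tweet (tweet(255));
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="tweets")
        cur = conn.cursor()

        sql = """INSERT INTO hash (tweet, user, user_url)
                 VALUES (%s, %s, %s)
                 ON DUPLICATE KEY UPDATE user = VALUES(user), user_url = VALUES(user_url)"""

        tweet, user, user_url = "hello world", "probocop", "http://example.com"
        cur.execute(sql, (tweet, user, user_url))  # parameterized, so it also avoids SQL injection
        conn.commit()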

    Read the article

  • Django. default=datetime.now() problem

    - by Shamanu4
    Hello. I have the following model:

        from datetime import datetime

        class TermPayment(models.Model):
            dev_session = models.ForeignKey(DeviceSession, related_name='payments')
            user_session = models.ForeignKey(UserSession, related_name='payment')
            date = models.DateTimeField(default=datetime.now(), blank=True)
            sum = models.FloatField(default=0)
            cnt = models.IntegerField(default=0)

            class Meta:
                db_table = 'term_payments'
                ordering = ['-date']

    and here a new instance is added:

        # ...
        tp = TermPayment()
        tp.dev_session = self.conn.session  # device session hash
        tp.user_session = self.session      # user session hash
        tp.sum = sum
        tp.cnt = cnt
        tp.save()

    But I have a problem: all records in the database have the same value in the date field - the date of the first payment. After a server restart, one record gets a new date and the others get the same date as that first record. It looks as if some data cache is being used, but I can't find where. Database: MySQL 5.1.25, Django 1.1.1.
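
    For reference, a minimal sketch of the distinction the question hinges on (Django evaluates a plain value once, when the module is imported, but calls a callable on every save):

        from datetime import datetime
        from django.db import models

        class TermPayment(models.Model):
            # datetime.now() (with parentheses) is evaluated once, at class-definition
            # time; datetime.now (the callable itself) is invoked for each new row.
            date = models.DateTimeField(default=datetime.now, blank=True)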

    Read the article

  • R: How to get a stack trace from the snow package

    - by Keith
    How can I get a stack trace back from a snow node after an error occurs? I am using the snow package (version 0.3-3) on R 2.10.1, and I'm getting errors when I use parSapply that do not occur when I use sapply. Snow is nice enough to give me the error message, but it would be much more useful for me to have the kind of stack trace you can get from traceback(). So far I have tried:

        options(showWarnCalls = T, showErrorCalls = T)
        setDefaultClusterOptions(outfile = "/dev/tty")

    and

        options(error = traceback)
        setDefaultClusterOptions(outfile = "/dev/tty")

    without luck. I'm currently just testing with a local cluster, i.e. makeSOCKcluster(c("localhost", "localhost")), but I will eventually be using an MPI cluster. Thanks.

    Read the article

  • Is MD5 really that bad?

    - by Col. Shrapnel
    Everyone says that MD5 is "broken", though I have never seen code that can show its weakness. So I hope one of the local experts can prove it with a simple test. I have an MD5 hash:

        c1e877411f5cb44d10ece283a37e1668

    and simple code to produce it:

        $salt = "#bh35^&Res%";
        $pass = "***";
        echo $hash = md5($salt.$pass);

    So, the questions are: 1. Is MD5 really that bad? 2. If so, what's the pass behind the asterisks?
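
    For context, the attack most people have in mind against a hash like this is plain brute force rather than an MD5 collision; a rough Python sketch of that idea (illustrative only, with a deliberately tiny search space, and it makes no claim about the actual password above):

        import hashlib
        from itertools import product
        from string import ascii_lowercase

        SALT = "#bh35^&Res%"
        TARGET = "c1e877411f5cb44d10ece283a37e1668"

        def brute_force(max_len=4, alphabet=ascii_lowercase):
            # Exhaustively hash short candidates; real crackers use wordlists and GPUs,
            # but the principle is the same: MD5 is fast, so short passwords fall quickly.
            for length in range(1, max_len + 1):
                for chars in product(alphabet, repeat=length):
                    candidate = "".join(chars)
                    if hashlib.md5((SALT + candidate).encode()).hexdigest() == TARGET:
                        return candidate
            return None

        print(brute_force())  # None unless the password happens to be in the searched space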

    Read the article

  • rails data aggregation

    - by ash34
    Hi, I have to create a hash of the form h[:bill] = ["Billy", "NA", 20, "PROJ_A"], keyed by login, where 20 is the cumulative number of hours reported by that login across all task transactions returned by the query (each login has multiple reported transactions). Did I do this in a bad way, or does this seem alright?

        h = Hash.new
        Task.find_each(:include => [:user], :joins => :user,
                       :conditions => ["from_date >= ? AND from_date <= ? AND category = ?",
                                       Date.today - 30, Date.today + 30, 'PROJ1']) do |t|
          h[t.login.intern] = [t.user.name, 'NA',
                               h[t.login.intern].nil? ? (t.hrs_per_day * t.num_days)
                                                      : h[t.login.intern][2] + (t.hrs_day * t.workdays),
                               t.category]
        end

    Also, if I have to aggregate this data not just by login but by login and category, how do I accomplish this? thanks, ash
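
    The aggregation pattern itself, sketched in Python rather than Ruby purely to show grouping on a compound (login, category) key (all field names and figures are illustrative):

        from collections import namedtuple

        Task = namedtuple("Task", "login user_name category hrs_per_day num_days")
        tasks = [
            Task("bill", "Billy", "PROJ_A", 4, 3),
            Task("bill", "Billy", "PROJ_A", 8, 1),
            Task("ann",  "Anna",  "PROJ_B", 6, 2),
        ]

        # Accumulate hours under a compound key instead of login alone.
        totals = {}
        for t in tasks:
            key = (t.login, t.category)
            entry = totals.setdefault(key, [t.user_name, "NA", 0, t.category])
            entry[2] += t.hrs_per_day * t.num_days

        print(totals)
        # {('bill', 'PROJ_A'): ['Billy', 'NA', 20, 'PROJ_A'],
        #  ('ann', 'PROJ_B'): ['Anna', 'NA', 12, 'PROJ_B']}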

    Read the article

  • Ruby on Rails: restrict file type with Paperclip using a flash uploader

    - by aperture
    I have a pretty basic Paperclip Upload model that is attached to a User model through has_many, and am using Uploadify to do the actual uploading. Flash sends all files with the content type "application/octet-stream", so using validates_attachment_content_type rejects all files. In my create action I am able to get the mime-type from the original file name, but only after it's been saved, with:

        def coerce(params)
          h = Hash.new
          h[:upload] = Hash.new
          h[:upload][:attachment].content_type = MIME::Types.type_for(h[:upload][:attachment].original_filename).to_s
          ...
        end

    and

        def create
          diff_params = coerce(params)
          @upload = Upload.new(diff_params[:upload])
          ...
        end

    What would be the best way of whitelisting file types? I am thinking a before_validation method, but I'm not sure how that would work. Any ideas would be welcome.

    Read the article

  • Python - from file to data structure?

    - by Seafoid
    Hi, I have a large file comprising ~100,000 lines. Each line corresponds to a cluster and each entry within a line is a reference i.d. for another file (a protein structure in this case), e.g.

        1hgn 1dju 3nmj 8kfn
        9opu 7gfb
        4bui

    I need to read in the file as a list of lists where each line is a sublist, thus preserving the integrity of each cluster, e.g.

        nested_list = [['1hgn', '1dju', '3nmj', '8kfn'], ['9opu', '7gfb'], ['4bui']]

    My current code creates a nested list, but the entries within each sublist end up as a single string rather than separate items. Therefore I cannot slice the list with indices so easily. Any help greatly appreciated. Thanks, S :-)
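
    A minimal sketch of the usual approach, splitting each line on whitespace (the filename is illustrative):

        nested_list = []
        with open("clusters.txt") as f:      # illustrative filename
            for line in f:
                ids = line.split()           # splits on any whitespace and drops the newline
                if ids:                      # skip blank lines
                    nested_list.append(ids)

        # nested_list[0]    -> ['1hgn', '1dju', '3nmj', '8kfn']
        # nested_list[0][2] -> '3nmj'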

    Read the article

  • jQuery Dynamic Page Loading won't work, not sure why, any ideas?

    - by Luke
    Live demo here: http://webcallonline.exoflux.co.uk/html/

        $(function() {
          var url = $(this).attr("href");
          $("nav").delegate("a", "click", function(event) {
            event.preventDefault();
            window.location.hash = $(this).attr('href');
            $("#main").slideUp('slow', function(){
              $("#main").load(url + " #main", function() {
                $("#main").slideDown('slow');
              });
            });
          });
          $(window).bind('hashchange', function(){
            newHash = window.location.hash.substring(1);
          });
          $(window).trigger('hashchange');
        });

    Does anyone have any ideas?

    Read the article

  • create a dataset by using modulo division method

    - by ayoom
    Create a dataset with 101 integers. Use the modulo-division method of hashing to store the random data values into hash tables with table sizes of 7, 51, and 151. Use the linear probing and quadratic probing methods of collision resolution. Print out the tables after the data values have been stored. Search for 10 different values in each of the three hash tables, counting the number of comparisons necessary. Print out the number of comparisons necessary in each case, in tabular form.
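
    A rough Python sketch of the core pieces being described - modulo-division hashing with linear probing and a comparison-counting search (the data values and the table size of 151 are illustrative; the smaller tables would use the same code with a different size):

        import random

        def build_table(values, size):
            """Insert values using h(k) = k % size, with linear probing on collision."""
            table = [None] * size
            for v in values:
                slot = v % size
                while table[slot] is not None:     # probe forward until an empty slot
                    slot = (slot + 1) % size
                table[slot] = v
            return table

        def search(table, key):
            """Return (found, number_of_comparisons) for a linear-probing lookup."""
            size = len(table)
            slot = key % size
            comparisons = 0
            while table[slot] is not None and comparisons < size:
                comparisons += 1
                if table[slot] == key:
                    return True, comparisons
                slot = (slot + 1) % size
            return False, comparisons

        data = random.sample(range(1000), 101)
        table = build_table(data, 151)             # 151 slots comfortably hold 101 values
        print(search(table, data[0]))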

    Read the article

  • How can I create a custom cleanup mode for git?

    - by Danny
    Git's default cleanup mode, strip, removes all lines starting with a # character. Unfortunately, the Trac wiki formatter uses hashes at the beginning of a code block to denote the syntax type, and any code added verbatim might include hashes as well, since they are a common comment prefix; Perl comes to mind. In the following example the comments all get destroyed by git's cleanup mode. Example:

        {{{
        #!/usr/bin/perl
        use strict;

        # say hi to the user.
        print "hello world\n";
        }}}

    I'd like to use a custom filter that removes lines beginning with a hash only from the bottom of the file upwards, leaving alone the hash-prefixed lines that are embedded in the commit message I wrote. Where or how can I specify this in git? Note: creating a sed or perl script to perform the operation is not a problem; knowing where to hook it into git is the question.

    Read the article

  • Form action with #hashtag not working in internet explorer

    - by Stephane
    I am using jquery-ui tabs and I've set it up to select the correct tab depending on the #hash of the requested URL. I have a form which performs a search, and each tab presents the results from a different provider. So if the form submits to the action "/myAction#tab1", the corresponding tab gets selected when the results load. This works perfectly in every browser except IE: when my form is submitted, it somehow loses the #hash which says which tab to select. Is this yet another bug in IE, or am I doing something wrong? I could not find much information about this, but I can hardly believe that this is not a common problem.

    Read the article

  • Cache an FTP connection via session variables for use via AJAX?

    - by Chad Johnson
    I'm working on a Ruby web application that uses the Net::FTP library. One part of it allows users to interact with an FTP site via AJAX. When the user does something, an AJAX call is made, and then Ruby reconnects to the FTP server, performs an action, and outputs information. Every time the AJAX call is made, Ruby has to reconnect to the FTP server, and that's slow. Is there a way I could cache this FTP connection? I've tried caching it in the session hash, but "We're sorry, but something went wrong" is displayed and a TCP dump appears in my logs whenever I attempt to store it in the session hash. I haven't tried memcache yet. Any suggestions?

    Read the article

  • help regarding Perl programs

    - by riya
    Could someone write simple Perl programs for the following scenarios?

    1) Convert a list from {1,2,3,4,5,7,9,10,11,12,34} to {1-5,7,9-12,34} (see the sketch after this list).
    2) Sort a list of negative numbers.
    3) Insert values into a hash.
    4) There is a file with the content:

        c1 c2 c3 c4
        r1 r2 r3 r4

       Put it into a hash where keys = {c1,c2,c3,c4} and values = {r1,r2,r3,r4}.
    5) There are test cases running; each test case runs as a process and has a process ID. The logs are written to a logfile with the process ID appended to each line. Write a program to find out whether each test case has passed or failed. The program should keep running while the processes are running and display output.
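
    A language-neutral illustration of the first item (the same logic ports directly to Perl): collapse consecutive runs of integers into ranges.

        def collapse_ranges(nums):
            """[1,2,3,4,5,7,9,10,11,12,34] -> '1-5,7,9-12,34'"""
            nums = sorted(nums)
            parts = []
            start = prev = nums[0]
            for n in nums[1:]:
                if n == prev + 1:            # still inside a consecutive run
                    prev = n
                    continue
                parts.append(str(start) if start == prev else f"{start}-{prev}")
                start = prev = n
            parts.append(str(start) if start == prev else f"{start}-{prev}")
            return ",".join(parts)

        print(collapse_ranges([1, 2, 3, 4, 5, 7, 9, 10, 11, 12, 34]))  # 1-5,7,9-12,34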

    Read the article

  • Security when writing a PHP webservice?

    - by chustar
    I am writing a web service in PHP for the first time and have run into some security problems. 1) I am planning to hash passwords using md5() before I write them to the database (or to authenticate the user), but I realize that to do that I would have to transmit the password in plaintext to the server and hash it there. Because of this I thought of md5()ing it with JavaScript on the client side and then rehashing on the server, but then if JavaScript is disabled the user can't log in, right? 2) I have heard that when the action is read-only you should use GET, but if it modifies the database you should use POST. Isn't POST just as transparent as GET, just not in the address bar?

    Read the article

  • Webservice for uploading data: security considerations

    - by Philip Daubmeier
    Hi everyone! I'm not sure what authentication method I should use for my web service. I've searched on SO and found nothing that helped me.

    Preliminary: I'm building an application that uploads data from a local database to a server (running my web service), where all records are merged and stored in a central database. I am currently binary-serializing a DataTable that holds a small fragment of the local database, with all the uninteresting stuff already filtered out. The byte[] (the serialized DataTable), together with the user id and a hash of the user's password, is then uploaded to the web service via SOAP. The application and the web service already work exactly as intended.

    The problem: The issue I am thinking about is: what if someone just sniffs the network traffic, 'steals' the user's id and password hash, and sends his own SOAP message with modified data that corrupts my database?

    Options: The approaches to solving that problem I have already thought of are:
    1. Using SSL + certificates to establish the connection: I don't really want to use SSL; I would prefer a simpler solution. After all, every piece of information transferred to the web service can be seen on the website later on. What I want to say is: there is no secret/financial/business-critical information that has to be hidden. I think SSL would be sort of an overkill for that task.
    2. Encrypting the byte[]: I think that would be a performance killer, considering that the goal of the exercise is simply to authenticate the user.
    3. Hashing the user's password together with the data: I kind of like this idea: create a checksum from the data, concatenate that checksum with the password hash, and hash the whole thing again. That would assure the data was sent by this specific user and that the data wasn't modified.

    The actual question: So, what do you think is the best approach in terms of meeting the following requirements? A rather simple solution (it doesn't have to be super secure; no secret/business-critical information is transferred), one that is easily implementable retrospectively (I don't want to write it all again :) ), and one that doesn't impact performance too much. What do you think of my preferred solution, the last one in the list above? Is there any alternative solution I didn't mention that would fit better? You don't have to answer every question in detail, just push me in the right direction. I very much appreciate every well-grounded opinion. Thanks in advance!
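
    For what it's worth, option 3 is essentially a hand-rolled message authentication code; the standard construction for that is an HMAC keyed with a per-user secret. A minimal sketch (Python purely for illustration; note it authenticates the message but does not by itself prevent replaying a captured request):

        import hashlib
        import hmac

        def sign(payload: bytes, user_secret: str) -> str:
            """HMAC over the serialized payload, keyed with a secret shared with the server.
            The signature travels alongside the user id and payload; the server recomputes it."""
            return hmac.new(user_secret.encode(), payload, hashlib.sha256).hexdigest()

        payload = b"...binary-serialized DataTable..."                    # illustrative stand-in
        user_secret = "per-user secret, e.g. the stored password hash"    # illustrative

        signature = sign(payload, user_secret)
        # Server side: recompute and compare in constant time to detect tampering.
        assert hmac.compare_digest(signature, sign(payload, user_secret))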

    Read the article

  • Ordering by multiple columns in mysql with subquery

    - by Scarface
    Hey guys, I have a query that selects the data I want, but not in the correct order. What I want to do is select all the comments for a user in that week, group them by topic, and then sort the clusters by the latest comment timestamp in each respective cluster. My current query selects the right data, but in a seemingly random order. Does anyone have any ideas?

        select * from (
            SELECT topic.topic_title, topic.topic_id
            FROM comments
            JOIN topic ON topic.topic_id = comments.topic_id
            WHERE comments.user = '$user' AND comments.timestamp > $week
            order by comments.timestamp desc
        ) derived_table
        group by topic_id

    Read the article

  • The C vs. C++ way

    - by amc
    Hi, I have to write a program that will iterate through an image and record the pixel locations corresponding to each color that appears in it. For example, given http://www.socuteurl.com/fishywishykissy I need to find the coordinates of all yellow, purple, dark pink, etc. pixels. In C++ I would use a hash table to do this: iterate through the image, check each pixel's value, look up that value, and either append to that color's vector of pixel coordinates if it were found or add a new entry to the table if the value were not already there. The problem is that I may need to write this program in pure C instead of C++. How would I go about doing this in C? I feel like implementing a hash table would be pretty obnoxious and error-prone: should I avoid doing that? I'm pretty inexperienced with C and have a fair amount of C++ experience, if that matters. Thanks.

    Read the article

  • .NET: efficient way to produce a string from a Dictionary<K,V> ?

    - by Cheeso
    Suppose I have a Dictionary<String,String>, and I want to produce a string representation of it. The "stone tools" way of doing it would be:

        private static string DictionaryToString(Dictionary<String,String> hash)
        {
            var list = new List<String>();
            foreach (var kvp in hash)
            {
                list.Add(kvp.Key + ":" + kvp.Value);
            }
            var result = String.Join(", ", list.ToArray());
            return result;
        }

    Is there an efficient way to do this in C# using existing extension methods? I know about the ConvertAll() and ForEach() methods on List, which can be used to eliminate foreach loops. Is there a similar method I can use on Dictionary to iterate through the items and accomplish what I want?

    Read the article

  • Is there a class like a Dictionary without a Value template? Is HashSet<T> the correct answer?

    - by myotherme
    I have 3 tables: Foos, Bars and FooBarConfirmations. I want to have an in-memory list of FooBarConfirmations keyed by their hash:

        FooID  BarID  Hash
        1      1      1_1
        2      1      2_1
        1      2      1_2
        2      2      2_2

    What would be the best class to use to store this type of structure in memory, so that I can quickly check whether a combination exists, like so:

        list.Contains("1_2");

    I can do this with Dictionary<string, anything>, but it "feels" wrong. HashSet<T> looks like the right tool for the job, but does it use some form of hashing algorithm in the background to do the lookups efficiently?

    Read the article

  • Is SHA sufficient for checking file duplication? (sha1_file in PHP)

    - by wag2639
    Suppose you wanted to make a file hosting site for people to upload their files and send a link to their friends to retrieve it later and you want to insure files are duplicated where we store them, is PHP's sha1_file good enough for the task? Is there any reason to not use md5_file instead? For the frontend, it'll be obscured using the original file name store in a database but some additional concerns would be if this would reveal anything about the original poster. Does a file inherit any meta information with it like last modified or who posted it or is this stuff based in the file system? Also, is using a salt frivolous since security in regards of rainbow table attack mean nothing to this and the hash could later be used as a checksum? One last thing, scalability? initially, it's only going to be used for small files a couple of megs big but eventually... Edit 1: The point of the hash is primarily to avoid file duplication, not to create obscurity.

    Read the article

  • Enumerating combinations in a distributed manner

    - by Reyzooti
    I have a problem where I must analyse all 500C5 combinations (255,244,687,600) of something. Distributing them over a 10-node cluster, where each node processes roughly 10^6 combinations per second, means the job will be complete in about 7 hours. The problem I have is distributing the 255,244,687,600 combinations over the 10 nodes. I'd like to present each node with 25,524,468,760 of them, but the algorithms I'm using can only produce the combinations sequentially. I'd like to be able to pass the set of elements and a range of combination indices, e.g. [0, 10^7) or [10^7, 2*10^7), and have the nodes themselves figure out the combinations. The algorithms I'm using at the moment are from http://home.roadrunner.com/~hinnant/combinations.html. I've considered using a master node that enumerates the combinations and sends work to each of the nodes, but the overhead of iterating the combinations on a single node and communicating the work back and forth is enormous, and the master node would become the bottleneck. Are there any good combination-iterating algorithms geared towards efficient/optimal distributed enumeration?
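
    One standard way to get exactly that "index range to combinations" behaviour is combinatorial unranking: each node converts its starting index directly into a combination, then iterates sequentially from there. A Python sketch of lexicographic unranking (the same arithmetic ports straight to C++):

        from math import comb

        def unrank_combination(n, k, m):
            """Return the m-th (0-based) k-combination of range(n) in lexicographic order."""
            combo = []
            x = 0                                    # smallest element still available
            for remaining in range(k, 0, -1):
                # skip whole blocks of combinations whose next element is smaller than the target
                while comb(n - x - 1, remaining - 1) <= m:
                    m -= comb(n - x - 1, remaining - 1)
                    x += 1
                combo.append(x)
                x += 1
            return combo

        print(unrank_combination(500, 5, 0))             # [0, 1, 2, 3, 4]
        print(unrank_combination(500, 5, 255244687599))  # [495, 496, 497, 498, 499]
        # Each node unranks its range start once, then steps through successive combinations locally.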

    Read the article

  • aspx page gives viewstate error

    - by Priya10
    Hi, I have a simple aspx page with one GridView. When deployed on the server and accessed from that machine, it works fine. However, when connected through the load balancer, we get the error below when clicking any button; the page works again when refreshed with F5.

        Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

    Any idea what is happening here?

    Read the article

  • Prevent auto scrolling when clicking tab anchor

    - by JohnCrossy
    On my page I have some tabs to which I've added some JavaScript so that when they're clicked the page loads the tab's anchor URL instead of simply changing the view with AJAX, but I now want to stop the browser from auto-scrolling to the anchor location. Having done some research, I'm pretty sure I just need to add a couple of lines so that when clicked it returns false or prevents the default, but, being a noob, I've no idea where to put them! I know it's cheeky, but if any answers could include the full script (i.e. the below) with the solution, that way I'll understand better and hopefully learn some valuable lessons too :) Here's the code I'm using:

        jQuery( function() {
          jQuery('.ui-tabs').bind( 'tabsselect', function( e, ui ) {
            window.location.hash = ui.tab.hash
          });
        });

    Read the article

  • Cache an FTP connection for use via AJAX?

    - by Chad Johnson
    I'm working on a Ruby web application that uses the Net::FTP library. One part of it allows users to interact with an FTP site via AJAX. When the user does something, an AJAX call is made, and then Ruby reconnects to the FTP server, performs an action, and outputs information. Every time the AJAX call is made, Ruby has to reconnect to the FTP server, and that's slow. Is there a way I could cache this FTP connection? I've tried caching it in the session hash, but "We're sorry, but something went wrong" is displayed and a TCP dump appears in my logs whenever I attempt to store it in the session hash. I haven't tried memcache yet. Any suggestions?

    Read the article
