Search Results

Search found 4650 results on 186 pages for 'perfect hash'.

Page 39/186

  • Unique identifier for an email

    - by Skywalker
I am writing a C# application which allows users to store emails in an MS SQL Server database. Many times, multiple users will be copied on an email from a customer. If they all try to add the same email to the database, I want to make sure that the email is only added once. MD5 springs to mind as a way to do this. I don't need to worry about malicious tampering, only to make sure that the same email will map to the same hash and that no two emails with different content will map to the same hash. My question really boils down to how one would combine multiple fields into one MD5 (or other) hash value. Some of these fields will have a single value per email (e.g. subject, body, sender email address) while others will have multiple values (varying numbers of attachments, recipients). I want to develop a way of uniquely identifying an email that will be platform and language independent (not based on serialization). Any advice?
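
    A minimal sketch of one such canonicalization scheme, in Ruby for brevity (the scheme, not the language, is the point here); the field names and normalization rules are assumptions, not a standard:

        require 'digest'

        # Single-valued fields are normalized and concatenated in a fixed
        # order; multi-valued fields (recipients, attachments) are sorted
        # first so ordering differences don't change the hash.
        def email_fingerprint(subject:, body:, sender:, recipients:, attachment_digests:)
          parts = [
            subject.strip,
            body.strip,
            sender.strip.downcase,
            recipients.map { |r| r.strip.downcase }.sort.join(","),
            attachment_digests.sort.join(",")
          ]
          # A length prefix before each field keeps two different field
          # combinations from collapsing into the same byte stream.
          canonical = parts.map { |p| "#{p.bytesize}:#{p}" }.join("|")
          Digest::MD5.hexdigest(canonical)
        end

    Any language with MD5 and the same normalization rules reproduces the same fingerprint, which is the platform independence the question asks for.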

  • Are some data structures more suitable for functional programming than others?

    - by Rob Lachlan
In Real World Haskell, there is a section titled "Life without arrays or hash tables" where the authors suggest that lists and trees are preferred in functional programming, whereas an array or a hash table might be used instead in an imperative program. This makes sense, since it's much easier to reuse part of an (immutable) list or tree when creating a new one than to do so with an array. So my questions are: Are there really significantly different usage patterns for data structures between functional and imperative programming? If so, is this a problem? What if you really do need a hash table for some application? Do you simply swallow the extra expense incurred for modifications?
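
    A minimal sketch of the sharing argument, in Ruby (the cons cell is hand-rolled; the point is language-independent):

        # Prepending to an immutable cons list is O(1) and reuses the old
        # list as the tail; the array equivalent copies every element.
        Cons = Struct.new(:head, :tail)

        old_list  = Cons.new(2, Cons.new(3, nil))
        new_list  = Cons.new(1, old_list)    # shares old_list entirely

        old_array = [2, 3]
        new_array = [1, *old_array]          # full copy, nothing shared

        new_list.tail.equal?(old_list)       # => true: same object reused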

  • multiple records using the select_tag

    - by JZ
I'm attempting to display multiple records, each with its own selection box. On submit, I would like the form to pass a hash with each record's add.id as the key and the selected option as the value. How could I fix my code? Is this even possible with the select_tag method? (A sketch of one approach follows the terminal output below.)

        <%= form_tag yardsign_adds_path, :method => :post do %>
          <%= select_tag "support_code[]",
                options_for_select([[ "1 - Strong Supporter", add.id ],
                                    [ "2 - Likely Voter" ],
                                    [ "3 - Undecided" ],
                                    [ "4 - Likely Opposed" ],
                                    [ "5 - Strongly Opposed" ]]) %>
          <%= submit_tag "Update" %>
        <% end %>

    Current terminal output:

        Started POST "/adds/yardsign" for 127.0.0.1 at 2010-04-17 01:36:03
        Processing by AddsController#yardsign as HTML
          Parameters: {"commit"=>"Update", "authenticity_token"=>"VQ2jVfzHI7pB+87lQa9NWqvUK3zwJWiJE7CwAnIewiw=", "support_code"=>["1", "3 - Undecided", "3 - Undecided"]}
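
    A hedged sketch of one possible fix (not from the original post; it assumes the records are available as @adds): keying each select's name by the record id makes Rails collect the submissions into a hash keyed by add.id.

        <%= form_tag yardsign_adds_path, :method => :post do %>
          <% @adds.each do |add| %>
            <%= select_tag "support_code[#{add.id}]",
                  options_for_select([[ "1 - Strong Supporter", 1 ],
                                      [ "2 - Likely Voter", 2 ],
                                      [ "3 - Undecided", 3 ],
                                      [ "4 - Likely Opposed", 4 ],
                                      [ "5 - Strongly Opposed", 5 ]]) %>
          <% end %>
          <%= submit_tag "Update" %>
        <% end %>

    With names like support_code[7], the params arrive as "support_code" => { "7" => "1", ... } instead of a flat array.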

  • Random Page Cost and Planning

    - by Dave Jarvis
The query below extracts climate data from weather stations within a given radius of a city, using the dates for which those weather stations actually have data. The query uses the table's only index, rather effectively:

        CREATE UNIQUE INDEX measurement_001_stc_idx
          ON climate.measurement_001
          USING btree (station_id, taken, category_id);

    Reducing the server's configuration value for random_page_cost from 2.0 to 1.1 brought a massive performance improvement for the given range (nearly an order of magnitude) because it suggested to PostgreSQL that it should use the index. While the results now return in 5 seconds (down from ~85 seconds), a problem remains: bumping the query's end date by a single year causes a full table scan.

        sc.taken_start >= '1900-01-01'::date AND
        sc.taken_end <= '1997-12-31'::date AND

    How do I persuade PostgreSQL to use the indexes regardless of the number of years between the two dates? (A full table scan against 43 million rows is probably not the best plan.) Find the EXPLAIN ANALYSE results below the query. Thank you!

    Query:

        SELECT extract(YEAR FROM m.taken) AS year,
               avg(m.amount) AS amount
          FROM climate.city c,
               climate.station s,
               climate.station_category sc,
               climate.measurement m
         WHERE c.id = 5182
           AND earth_distance(
                 ll_to_earth(c.latitude_decimal, c.longitude_decimal),
                 ll_to_earth(s.latitude_decimal, s.longitude_decimal)) / 1000 <= 30
           AND s.elevation BETWEEN 0 AND 3000
           AND s.applicable = TRUE
           AND sc.station_id = s.id
           AND sc.category_id = 1
           AND sc.taken_start >= '1900-01-01'::date
           AND sc.taken_end <= '1996-12-31'::date
           AND m.station_id = s.id
           AND m.taken BETWEEN sc.taken_start AND sc.taken_end
           AND m.category_id = sc.category_id
         GROUP BY extract(YEAR FROM m.taken)
         ORDER BY extract(YEAR FROM m.taken)

    1900 to 1996: Index

        Sort  (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)
          Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
          Sort Method: quicksort  Memory: 32kB
          ->  HashAggregate  (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)
                ->  Nested Loop  (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)
                      Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))
                      ->  Nested Loop  (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)
                            Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                            ->  Index Scan using city_pkey1 on city c  (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)
                                  Index Cond: (id = 5182)
                            ->  Nested Loop  (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)
                                  ->  Seq Scan on station_category sc  (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)
                                        Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))
                                  ->  Index Scan using station_pkey1 on station s  (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)
                                        Index Cond: (s.id = sc.station_id)
                                        Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))
                      ->  Append  (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)
                            ->  Seq Scan on measurement m  (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)
                                  Filter: (m.category_id = 1)
                            ->  Bitmap Heap Scan on measurement_001 m  (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)
                                  Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
                                  ->  Bitmap Index Scan on measurement_001_stc_idx  (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)
                                        Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
        Total runtime: 2269.264 ms

    1900 to 1997: Full Table Scan

        Sort  (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)
          Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
          Sort Method: quicksort  Memory: 32kB
          ->  HashAggregate  (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)
                ->  Hash Join  (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)
                      Hash Cond: (m.station_id = sc.station_id)
                      Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))
                      ->  Append  (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)
                            ->  Seq Scan on measurement m  (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)
                                  Filter: (category_id = 1)
                            ->  Seq Scan on measurement_001 m  (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)
                                  Filter: (category_id = 1)
                      ->  Hash  (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)
                            ->  Nested Loop  (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)
                                  Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                                  ->  Index Scan using city_pkey1 on city c  (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)
                                        Index Cond: (id = 5182)
                                  ->  Hash Join  (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)
                                        Hash Cond: (s.id = sc.station_id)
                                        ->  Seq Scan on station s  (cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949 loops=1)
                                              Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))
                                        ->  Hash  (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)
                                              ->  Bitmap Heap Scan on station_category sc  (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)
                                                    Recheck Cond: (category_id = 1)
                                                    Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))
                                                    ->  Bitmap Index Scan on station_category_station_category_idx  (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)
                                                          Index Cond: (category_id = 1)
        Total runtime: 86165.936 ms

  • Prevent Ruby on Rails from sending the session header

    - by hurikhan77
How do I prevent Rails from always sending the session header (Set-Cookie)? This is a security problem if the application also sends the Cache-Control: public header. My application touches (but does not modify) the session hash in some/most actions. These pages display no private content, so I want them to be cacheable, but Rails always sends the cookie header regardless of whether the session hash it sends differs from the previous one. What I want to achieve is to send the hash only if it differs from the one received from the client. How can you do that? And should that fix perhaps also go into an official Rails release? What do you think?
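
    One hedged sketch of a workaround (an assumption, not an official Rails API): a Rack middleware that strips Set-Cookie from responses the application has explicitly marked publicly cacheable.

        # Drops the session cookie whenever the response is shared-cacheable,
        # since a per-user cookie would defeat the shared cache anyway.
        class StripSessionCookie
          def initialize(app)
            @app = app
          end

          def call(env)
            status, headers, body = @app.call(env)
            headers.delete("Set-Cookie") if headers["Cache-Control"].to_s.include?("public")
            [status, headers, body]
          end
        end

        # assumed wiring, e.g. in the app configuration:
        # config.middleware.use StripSessionCookie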

  • Racket: change dotted pair to list

    - by user2963128
I have a program that recursively calls a hashtable and prints out data from it. Unfortunately, my hashtable seems to be saving data as dotted pairs, so when I call the hashtable I get an error saying there is no data for it, because it's trying to search the hashtable for a dotted pair instead of a list. Is there an easy way to turn the dotted pair into a regular list? I.e., I'm getting

        '("was" . "beginning")

    instead of

        '("was" "beginning")

    Is there a way to change this without rewriting how my hashtable stores things? I'm using let to set a variable to this and then calling another function based on that variable:

        (let ((data (list-ref (hash-ref Ngram-table key)
                              (random (length (hash-ref Ngram-table key))))))

    Is there a way to make the value stored in data just a list like '("var1" "var2") instead of a dotted pair? Edit: I'm getting dotted pairs because I'm using let to set data to part of the hashtable's key and one of the elements in that hash.

  • Compile time string hashing

    - by Caspin
I have read in a few different places that using C++0x's new string literals it might be possible to compute a string's hash at compile time. However, no one seems ready to come out and say that it will be possible or how it would be done. Is this possible? What would the operator look like? I'm particularly interested in use cases like this:

        void foo( const std::string& value )
        {
            switch( std::hash(value) )
            {
                case "one"_hash: one(); break;
                case "two"_hash: two(); break;
                /* many more cases */
                default: other(); break;
            }
        }

    Note: the compile-time hash function doesn't have to look exactly as I've written it. I did my best to guess what the final solution would look like, but meta_hash<"string"_meta>::value could also be a viable solution.

  • Strange: Planner picks the plan with the lower cost, but a (very) long query runtime

    - by S38
Facts:

    - PostgreSQL 8.4.2 on Linux
    - I make use of table inheritance
    - each table contains 3 million rows
    - indexes on joining columns are set
    - table statistics (analyze, vacuum analyze) are up-to-date
    - the only table used is "node", with various partitioned sub-tables
    - recursive query (pg >= 8.4)

    Now here is the explained query:

        WITH RECURSIVE rows AS (
          SELECT *
            FROM (SELECT r.id, r.set, r.parent, r.masterid
                    FROM d_storage.node_dataset r
                   WHERE masterid = 3533933) q
          UNION ALL
          SELECT *
            FROM (SELECT c.id, c.set, c.parent, r.masterid
                    FROM rows r
                    JOIN a_storage.node c ON c.parent = r.id) q
        )
        SELECT r.masterid, r.id AS nodeid
          FROM rows r

        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far, so bad: the planner decides to choose hash joins (good) but no indexes (bad). Now, after doing the following:

        SET enable_hashjoin TO false;

    the explained query looks like this:

        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    Incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than that of the first. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.

  • SHA1 from .exe file in C#

    - by alibipl
Hi! I hope someone can help me with reading .exe files in C# and creating a SHA1 hash from them. I have tried to read from an executable file using StreamReader and BinaryReader, then create a hash using the built-in SHA1 algorithm, but without success. The result for StreamReader was "AEUj+Ppo5QdHoeboidah3P65N3s=" and for BinaryReader it was "rWXzn/CoLLPBWqMCE4qcE3XmUKw=". Can anyone help me obtain the correct SHA1 hash of an exe file? Thx. BTW Sorry for my English ;)
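
    The underlying point, sketched in Ruby for brevity: hash the raw bytes of the file, never a text-decoded version of it. Decoding the executable as text is what makes the two readers disagree with each other (and with sha1sum).

        require 'digest'

        bytes = File.binread("app.exe")     # hypothetical path; read as raw bytes
        puts Digest::SHA1.base64digest(bytes)

        # equivalent streaming form for large files:
        puts Digest::SHA1.file("app.exe").base64digest

    In C# the analogous move is to feed the opened file stream's bytes to the hash directly instead of going through a text reader.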

  • Is there any way to limit the size of an STL Map?

    - by Nathan Fellman
I want to implement some sort of lookup table in C++ that will act as a cache. It is meant to emulate a piece of hardware I'm simulating. The keys are non-integer, so I'm guessing a hash is in order. I have no intention of reinventing the wheel, so I intend to use std::map for this (though suggestions for alternatives are welcome). The question is: is there any way to limit the size of the map to emulate the fact that my hardware is of finite size? I'd expect the map's insert method to return an error or throw an exception if the limit is reached. If there is no such way, I'll simply check its size before trying to insert, but that seems like an inelegant way to do it.
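
    A minimal sketch of the wrap-and-check idea, in Ruby for the shape only (with std::map you would wrap insert the same way behind a small class):

        class BoundedTable
          class CapacityExceeded < StandardError; end

          def initialize(capacity)
            @capacity = capacity
            @entries  = {}
          end

          # Refuse new keys once the table is full, mimicking finite hardware.
          def insert(key, value)
            if @entries.size >= @capacity && !@entries.key?(key)
              raise CapacityExceeded, "table is full (#{@capacity} entries)"
            end
            @entries[key] = value
          end

          def [](key)
            @entries[key]
          end
        end

    For a cache you might prefer evicting an old entry (e.g. least recently used) instead of raising; that is the same wrapper with a different full-table policy.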

  • How should I change my Graph structure (very slow insertion)?

    - by Nazgulled
Hi, this program I'm working on is about a social network, which means there are users and their profiles. The profiles structure is UserProfile. Now, there are various possible Graph implementations, and I don't think I'm using the best one. I have a Graph structure and, inside, a pointer to a linked list of type Vertex. Each Vertex element has a value, a pointer to the next Vertex, and a pointer to a linked list of type Edge. Each Edge element has a value (so I can define weights and whatever else is needed), a pointer to the next Edge, and a pointer to the Vertex owner.

    I have 2 sample files with data to process (in CSV style) and insert into the Graph. The first one holds the user data (one user per line); the second one holds the user relations (for the graph). The first file is inserted into the graph quickly, because I always insert at the head and there are only ~18000 users. The second file takes ages, even though I still insert the edges at the head. It has about ~520000 lines of user relations and takes between 13 and 15 minutes to insert into the Graph. I made a quick test, and reading the data is pretty quick, instantaneous really. The problem is the insertion.

    This problem exists because I have a Graph implemented with linked lists for the vertices. Every time I need to insert a relation, I need to look up 2 vertices so I can link them together. Doing this for ~520000 relations takes a while. How should I solve this?

    Solution 1) Some people recommended implementing the Graph (the vertices part) as an array instead of a linked list. This way I have direct access to every vertex, and the insertion time will probably drop considerably. But I don't like the idea of allocating an array with [18000] elements. How practical is this? My sample data has ~18000 users, but what if I need far fewer or far more? The linked-list approach has that flexibility: I can have whatever size I want as long as there's memory for it. The array doesn't, so how am I going to handle such a situation? What are your suggestions? Using linked lists is good for space complexity but bad for time complexity, and using an array is the reverse. Any thoughts about this solution?

    Solution 2) This project also demands some sort of data structure that allows quick lookups based on a name index and an ID index. For this I decided to use hash tables. My tables are implemented with separate chaining as collision resolution, and when a load factor of 0.70 is reached, I recreate the table. I base the next table size on http://planetmath.org/encyclopedia/GoodHashTablePrimes.html. Currently, both hash tables hold a pointer to the UserProfile instead of duplicating the user profile itself. That would be stupid: changing data would require 3 changes, and it's really dumb to do it that way. So I just save the pointer to the UserProfile. The same user profile pointer is also saved as the value in each Graph Vertex.

    So, I have 3 data structures, one Graph and two hash tables, and every single one of them points to the same exact UserProfile. The Graph structure serves the purpose of finding the shortest path and things like that, while the hash tables serve as quick indexes by name and ID. What I'm thinking of doing to solve my Graph problem is, instead of having the hash tables' values point to the UserProfile, to point them to the corresponding Vertex. It's still a pointer; no more and no less space is used, I just change what I point to. Like this, I can easily and quickly look up each Vertex I need and link them together, which should insert the ~520000 relations pretty quickly (a small sketch of this idea follows below). I thought of this solution because I already have the hash tables and I need to have them, so why not take advantage of them for indexing the Graph vertices instead of the user profiles? It's basically the same thing: I can still access the UserProfile pretty quickly, just go to the Vertex and then to the UserProfile. But do you see any cons of this second solution compared to the first one? Or only pros that outweigh the pros and cons of the first solution?

    Other Solution) If you have any other solution, I'm all ears, but please explain the pros and cons of that solution over the previous 2. I really don't have much time to waste on this right now; I need to move on with this project, so if I'm going to make such a change, I need to understand exactly what to change and whether that's really the way to go. Hopefully no one fell asleep reading this and closed the browser; sorry for the big testament. But I really need to decide what to do about this, and I really need to make a change.

    P.S.: When answering my proposed solutions, please enumerate them as I did, so I know exactly which one you are talking about and don't confuse me more than I already am.
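
    A minimal sketch of Solution 2's lookup path, in Ruby for the shape only (the names are assumptions; the original is presumably C): the indexes point at vertices, so linking two users costs two O(1) lookups instead of two list scans.

        Vertex = Struct.new(:profile, :edges)

        vertices_by_id   = {}   # id   => Vertex
        vertices_by_name = {}   # name => Vertex

        def add_user(vertices_by_id, vertices_by_name, profile)
          v = Vertex.new(profile, [])
          vertices_by_id[profile[:id]]     = v
          vertices_by_name[profile[:name]] = v
        end

        def add_relation(vertices_by_id, a_id, b_id)
          a = vertices_by_id.fetch(a_id)   # O(1) instead of walking a list
          b = vertices_by_id.fetch(b_id)
          a.edges << b                     # undirected: link both ways
          b.edges << a
        end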

  • Generating short license keys with OpenSSL

    - by Marc Charbonneau
I'm working on a new licensing scheme for my software, based on OpenSSL public / private key encryption. My past approach, based on this article, was to use a large private key size and encrypt an SHA1 hashed string, which I sent to the customer as a license file (the base64 encoded hash is about a paragraph in length). I know someone could still easily crack my application, but it prevented someone from making a key generator, which I think would hurt more in the long run. For various reasons I want to move away from license files and simply email a 16 character base32 string the customer can type into the application. Even using small private keys (which I understand are trivial to crack), it's hard to get the encrypted hash this small. Would there be any benefit to using the same strategy to generate an encrypted hash, but simply using the first 16 characters as a license key? If not, is there a better alternative that will create keys in the format I want?
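
    A hedged alternative sketch, in Ruby (it swaps the asymmetric signature for an HMAC, which is a different trade-off): sign the license fields with a vendor-held secret, keep the first 80 bits, and emit exactly 16 base32 characters. Unlike a truncated RSA signature this stays verifiable, but the secret has to ship inside the application to check keys, so it raises the bar against keygens rather than eliminating them.

        require 'openssl'

        B32 = ("A".."Z").to_a + ("2".."7").to_a   # RFC 4648 base32 alphabet

        def license_key(secret, customer_email, product_id)
          mac  = OpenSSL::HMAC.digest("SHA256", secret, "#{customer_email}|#{product_id}")
          bits = mac.unpack1("B80")                           # first 80 bits of the MAC
          bits.scan(/.{5}/).map { |c| B32[c.to_i(2)] }.join   # 16 chars, 5 bits each
        end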

  • Optimal password salt length

    - by Juliusz Gonera
I tried to find the answer to this question on Stack Overflow without any success. Let's say I store passwords using a SHA-1 hash (so it's 160 bits), and let's assume that SHA-1 is enough for my application. How long should the salt used to generate the password's hash be? The only answer I found was that there's no point in making it longer than the hash itself (160 bits in this case), which sounds logical, but should I make it that long? E.g., Ubuntu uses an 8-byte salt with SHA-512 (I guess), so would 8 bytes be enough for SHA-1 too, or would that be too much?
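
    For scale, a minimal sketch of what the salt actually has to do, namely be unique per password and be stored next to the hash; 8 random bytes already makes collisions across users vanishingly unlikely:

        require 'digest'
        require 'securerandom'

        def hash_password(password)
          salt = SecureRandom.hex(8)        # 8 random bytes, hex-encoded
          "#{salt}$#{Digest::SHA1.hexdigest(salt + password)}"
        end

        def check_password(stored, attempt)
          salt, digest = stored.split("$", 2)
          Digest::SHA1.hexdigest(salt + attempt) == digest
        end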

  • MVC Implementation PHP Zend PDF generation

    - by zod
I'm using the Zend Framework with PHP, and I'm going to generate a PDF using Zend. So the view here is the PDF itself; there is no PHTML. But if I don't use PHTML in the view, is it still proper MVC? If I want proper MVC, should I do the database retrieval, variable declaration, and assignment in the controller, and call all the PDF functions in a PHTML view file? Would that make it proper MVC? What is the advantage of MVC in this case? :-) Can I include the Zend PDF component in the PHTML file or in the controller PHP file, and what is the difference?

  • How can I compare my PHPASS-hashed stored password to my incoming POST data?

    - by Ygam
Here's a better example of a simple check. The stored row in the database has password: fafa (hashed with phpass at registration) and username: fafa; I am using the phpass password hashing framework.

        public function demoHash($data) // $data is the POST data named "password"
        {
            $hash = new PasswordHash(8, false);
            $query = ORM::factory('user');
            $result = $query
                ->select('username, password')
                ->where('username', 'fafa')
                ->find();
            $hashed = $hash->HashPassword($data);
            $check = $hash->CheckPassword($hashed, $result->password);
            echo $result->username . "<br/>";
            echo $result->password . "<br/>";
            return $check;
        }

    $check is returning false.

  • My Ruby Code: How can I improve? (Java to Ruby guy)

    - by steve
Greetings, I get the feeling that I'm using Ruby in an ugly way and possibly missing out on tonnes of useful features. I was wondering if anyone could point out a cleaner, better way to write my code, which is pasted below. The code itself simply scrapes some data from Yelp and processes it into a JSON format. The reason I'm not using hash.to_json is that it throws some sort of stack error, which I can only assume is due to the hash being too large (it's not particularly large). response object = a hash; text = the output, which is saved to a file. Anyway, guidance appreciated.

        def mineLocation
          client = Yelp::Client.new
          request = Yelp::Review::Request::GeoPoint.new(:latitude => 13.3125,
                                                        :longitude => -6.2468,
                                                        :yws_id => 'nicetry')
          response = client.search(request)
          response['businesses'].length.times do |businessEntry|
            text = ""
            response['businesses'][businessEntry].each { |key, value|
              if value.class == Array
                value.length.times { |arrayEntry|
                  text += "\"#{key}\":["
                  value[arrayEntry].each { |arrayKey, arrayValue|
                    text += "{\"#{arrayKey}\":\"#{arrayValue}\"},"
                  }
                  text += "]"
                }
              else
                text += "\"#{arrayKey}\":\"#{arrayValue}\","
              end
            }
          end
        end
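
    A hedged sketch of one cleaner shape (the JSON layout is inferred from the string-building above, and the claim about to_json is an assumption): build plain arrays and hashes first, then serialize once at the end. If to_json chokes on the raw Yelp response objects, copying them into plain structures first usually avoids that as well.

        require 'json'

        def mine_location
          client   = Yelp::Client.new
          request  = Yelp::Review::Request::GeoPoint.new(:latitude => 13.3125,
                                                         :longitude => -6.2468,
                                                         :yws_id => 'nicetry')
          response = client.search(request)

          rows = response['businesses'].map do |business|
            business.each_with_object({}) do |(key, value), row|
              row[key] = value   # arrays and scalars both serialize fine
            end
          end
          JSON.generate(rows)
        end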

  • Checking server load with PHP and taking appropriate action

    - by teehoo
    Hi, I'm creating a project in which a server receives operations from clients to apply to a local server document. The server and client both share the same document and therefore each message the client sends contains an MD5 hash, which the server compares to after generating its own hash to ensure the server and client documents are synchronized. My question is, if the server is overloaded, could I somehow detect this in PHP, which would in turn let me decide whether I want to execute the hash generation function or not? Perhaps in the scenario defined, this is not a perfect use-case, but I'm interested in this approach in general.
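
    In PHP the usual probe is sys_getloadavg(); here is the same idea sketched in Ruby (a Linux-only assumption, reading /proc/loadavg), with a threshold you would tune to the machine:

        LOAD_THRESHOLD = 4.0   # e.g. roughly the core count

        def server_busy?
          File.read("/proc/loadavg").split.first.to_f > LOAD_THRESHOLD
        end

        skip_hash_check = server_busy?   # degrade gracefully under load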

  • Prevent strings stored in memory from being read by other programs

    - by Roy
Some programs, like Process Explorer, are able to read strings in memory (for example, an error message written in my code can be displayed easily, even though the program is already compiled). Imagine I have a password string "123456" allocated sequentially in memory. What if hackers are able to get hold of the password typed by the user? Is there any way to prevent strings from being read so easily? Oh yes, also: if I hash the password and send it from client to server to compare against the stored database hash value, won't a hacker be able to capture the same hash and replay it to gain access to the user account? Is there any way to prevent replaying? Thank you!
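
    For the replay part, a minimal sketch of the standard challenge-response shape (illustrative only; stored_secret stands in for whatever verifier both sides hold): the server issues a one-time nonce, so a captured proof is useless for the next login.

        require 'openssl'
        require 'securerandom'

        stored_secret = "...the verifier both sides already share..."   # placeholder

        # server: fresh nonce per login attempt
        nonce = SecureRandom.hex(16)

        # client: prove knowledge of the secret without sending it
        proof = OpenSSL::HMAC.hexdigest("SHA256", stored_secret, nonce)

        # server: recompute and compare (use a constant-time compare in practice)
        authenticated = (proof == OpenSSL::HMAC.hexdigest("SHA256", stored_secret, nonce))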

  • Silverlight: Creating a round button template

    - by Jeremy
I decided to try making a circular button, so using Expression Blend I dropped a button control on my XAML. I then created a template from it by choosing "Edit Control Parts (Template)" -> "Edit a Copy". I am trying to design it so that the left and right sides of the button are always perfect semicircles: no matter how tall or wide the button grows, the corner radius should max out at either half the width or half the height of the button, whichever is smaller. That way, if the button is stretched tall, the top and bottom are perfect half circles, and if the button is stretched wide, the left and right are perfect half circles. Is it possible to do this?

  • GetHashCode Method reliability in Silverlight/WP7.1

    - by abhinav
I am attempting to hash and keep (the hash of) an object of type IEnumerable<anotherobject> which has about a 1000 entries. I'll be generating another such object, but this time I'd like to check for any changes in the values of the entries using the hash codes of the two objects. Basically, I was wondering whether GetHashCode() is apt for this, both from a performance perspective and a reliability perspective (always getting different values for different object values, and the same values for the same object values). If I have to override it, what would be a good way to do so? Does it always depend on the type of anotherobject and on what Equals means when comparing two anotherobjects? Is there a generic way to do it? I ask because my objects can be quite big.
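
    The usual combine pattern, sketched in Ruby (C#'s GetHashCode has its own per-type contract; this only shows the shape and its limits): equal sequences hash equal, but unequal sequences can still collide, so a matching hash means "probably unchanged", never "definitely unchanged".

        # Order-sensitive fold of the element hashes into one 32-bit value.
        def sequence_hash(enumerable)
          enumerable.reduce(17) { |acc, e| (acc * 31 + e.hash) & 0xFFFFFFFF }
        end

    That caveat is the reliability answer: hash equality alone cannot prove that two collections hold the same values.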

  • Is there a secure p2p distributed database?

    - by p2pgirl
I'm looking for a distributed hash table to store and retrieve values securely. These are my requirements:

    - It must use an existing popular p2p network (I must guarantee my key/value will be stored and kept on multiple peers).
    - Nobody but me should be able to edit or delete the key/value. Ideally, an encryption key that only I have access to would be required to edit my key/value.
    - All peers would be able to read the key/value (read-only access; only the key holder can edit the value).

    Is there such a p2p distributed hash table? Would the BitTorrent distributed hash table meet my requirements? Where could I find documentation?

  • Storing csv data for further manipulation using Ruby

    - by ischnura
I am dealing with a csv file that has some customer information (email, name, address, amount, [shopping_list: item 1, item 2]). I would like to work with the data and produce some labels for printing... as well as gather some extra information (total amounts, total item 1s...). My main concern is finding the appropriate structure to store the data in Ruby for future manipulation. For now I have thought about the following possibilities:

    - multidimensional arrays: pretty simple to build, but pretty hard to access the data in a beautiful Ruby way
    - hashes: having the email as the key, and storing the information in different hashes (one hash for name, another for address, another for shopping list...)
    - getting the csv data into a database and working with the data from Ruby

    I would really appreciate your advice and guidance!!
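
    A minimal sketch of a fourth option (column names assumed from the description): one object per customer row via the csv standard library, which tends to read better than parallel hashes keyed by email.

        require 'csv'

        Customer = Struct.new(:email, :name, :address, :amount, :shopping_list)

        customers = CSV.foreach("customers.csv", headers: true).map do |row|
          Customer.new(row["email"], row["name"], row["address"],
                       row["amount"].to_f,
                       row["shopping_list"].to_s.split(","))
        end

        total_amount = customers.sum(&:amount)   # aggregates read naturally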

  • How do I assign a value from params, or session, whichever exists?

    - by irkenInvader
    What is the "Rails-way" or the "Ruby-way" of doing the following: In my controller, I'm creating and instance of an Options class. It will be initialized with information in the params hash if the params hash exists. Otherwise, it will check the sessions hash for the information. Finally, it will initialize with defaults if neither params nor session has the data it needs. Here is how I'm doing it now (it works fine, but it seems a little ugly): if params[:cust_options] @options = CustomOptions.new( params[:cust_options] ) else if session[:cust_options @options = CustomOptions.new( session[:cust_options] ) else @options = CustomOptions.new end end session[:cust_options] = @options.to_hash Like I said, everything is working fine, I'm just looking for a more idiomatically Ruby way of writing this block of code.

  • How to change password hashing algorithm when using spring security?

    - by harry
I'm working on a legacy Spring MVC based web application which is using a (by current standards) inappropriate hashing algorithm. Now I want to gradually migrate all hashes to bcrypt. My high-level strategy is:

    - New hashes are generated with bcrypt by default.
    - When a user successfully logs in and still has a legacy hash, the app replaces the old hash with a new bcrypt hash.

    What is the most idiomatic way of implementing this strategy with Spring Security? Should I use a custom Filter, or my own AccessDecisionManager, or ...?
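
    The flow itself, sketched framework-agnostically in Ruby with the bcrypt gem (Spring Security wires this differently; the user fields and the legacy_digest helper are assumptions): verify against whichever scheme the stored hash declares, then rehash with bcrypt while the plaintext is briefly available.

        require 'bcrypt'

        def authenticate(user, password)
          valid =
            if user.hash_scheme == "bcrypt"
              BCrypt::Password.new(user.password_hash) == password
            else
              legacy_digest(password, user.salt) == user.password_hash
            end

          if valid && user.hash_scheme != "bcrypt"
            # login is the only moment the plaintext exists server-side:
            # upgrade the stored hash now
            user.update(password_hash: BCrypt::Password.create(password),
                        hash_scheme:   "bcrypt")
          end
          valid
        end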

  • How can I allow only privileged users to download a pdf with php?

    - by ThinkingInBits
Let's say I have some pdf files stored on my server, and I only want to allow a person who has paid to download a particular pdf. So, for example, let's say I have a bunch of e-books. The only way a user would be able to download e-book A is if his account contains the right credentials for that particular book. What's the best way to accomplish this? Any ideas/advice on how to improve my idea are greatly appreciated! My current idea:

    1. A user places an order.
    2. Upon success, a new folder is created under their /account_num/order_id/.
    3. A copy of the particular file is stored in that directory.
    4. Have PHP generate an .htaccess file that only allows access from a URL with a random hash embedded in it. The only way a user can access this random hashed page is if they are signed in as the right user and the hash matches the hash stored in the database; otherwise, they are redirected to the home page.
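
    A hedged sketch of the common alternative (Sinatra-style Ruby for brevity; the same shape works in PHP with readfile after the same checks, and current_user / purchased? are assumed helpers): keep a single copy of each pdf outside the web root and stream it only after checking the purchase record, which avoids copying files and generating per-user .htaccess rules.

        require 'sinatra'

        PDF_ROOT = "/srv/ebooks"   # assumed path, outside the web root

        get "/download/:book_id" do
          halt 401 unless current_user                              # must be signed in
          halt 403 unless purchased?(current_user, params[:book_id])
          send_file File.join(PDF_ROOT, "#{params[:book_id]}.pdf"),
                    :type => "application/pdf"
        end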
