Search Results

Search found 2495 results on 100 pages for 'hash of hashes'.

Page 36/100

  • Racket: change dotted pair to list

    - by user2963128
    I have a program that recursively calls a hashtable and prints out data from it. Unfortunately, my hashtable seems to be saving data as dotted pairs, so when I query the hashtable I get an error saying that there is no data, because it's trying to search the hashtable for a dotted pair instead of a list. Is there an easy way to turn the dotted pair into a regular list? I.e., I'm getting '("was" . "beginning") instead of '("was" "beginning"). Is there a way to change this without rewriting how my hashtable stores things? I'm using let to bind a variable to this and then calling another function based on that variable:

        (let ((data (list-ref (hash-ref Ngram-table key)
                              (random (length (hash-ref Ngram-table key))))))

    Is there a way to make the value stored in data just a list like '("var1" "var2") instead of a dotted pair? Edit: I'm getting dotted pairs because I'm using let to set data to part of the hashtable's key and one of the elements in that hash.

    Read the article

  • Prevent Ruby on Rails from sending the session header

    - by hurikhan77
    How do I prevent Rails from always sending the session header (Set-Cookie)? This is a security problem if the application also sends the Cache-Control: public header. My application touches (but does not modify) the session hash in some/most actions. These pages display no private content, so I want them to be cacheable - but Rails always sends the cookie header, whether or not the session hash differs from the one the client sent. What I want to achieve is to only send the hash if it is different from the one received from the client. How can I do that? And should such a fix perhaps also go into an official Rails release? What do you think?
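
    A sketch of one possible direction, assuming Rails 3-style Rack sessions (the option name may differ by version, so treat this as a starting point rather than the exact Rails API of every release):

        class PagesController < ApplicationController
          before_filter :skip_session_cookie, :only => [:index, :show]

          private

          # Ask the Rack session middleware not to write a Set-Cookie header
          # for this request, so the response stays publicly cacheable.
          def skip_session_cookie
            request.session_options[:skip] = true
          end
        end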

    Read the article

  • Compile time string hashing

    - by Caspin
    I have read in a few different places that using C++0x's new string literals it might be possible to compute a string's hash at compile time. However, no one seems ready to come out and say that it is possible or how it would be done. Is this possible? What would the operator look like? I'm particularly interested in use cases like this:

        void foo(const std::string& value)
        {
            switch (std::hash(value))
            {
                case "one"_hash: one(); break;
                case "two"_hash: two(); break;
                /* many more cases */
                default: other(); break;
            }
        }

    Note: the compile-time hash function doesn't have to look exactly as I've written it. I did my best to guess what the final solution would look like, but meta_hash<"string"_meta>::value could also be a viable solution.

    Read the article

  • Strange: Planner chooses the plan with the lower cost, but (very) long query runtime

    - by S38
    Facts:

    - PGSQL 8.4.2, Linux
    - I make use of table inheritance
    - Each table contains 3 million rows
    - Indexes on joining columns are set
    - Table statistics (analyze, vacuum analyze) are up-to-date
    - Only used table is "node", with various partitioned sub-tables
    - Recursive query (pg >= 8.4)

    Now here is the explained query:

        WITH RECURSIVE rows AS (
            SELECT * FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT * FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r

        QUERY PLAN
        ---------------------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far so bad: the planner decides to use hash joins (good) but no indexes (bad). Now, after doing the following:

        SET enable_hashjoin TO false;

    the explained query looks like this:

        QUERY PLAN
        ---------------------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    Incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than that of the first. So the main question is: why does the planner make the first decision instead of the second?

    Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.

    Read the article

  • SHA1 from .exe file in C#

    - by alibipl
    Hi! I hope that someone can help me with reading .exe files in C# and creating a SHA1 hash from them. I have tried to read the executable file using StreamReader and BinaryReader, then creating a hash with the built-in SHA1 algorithm, but without success. The result for StreamReader was "AEUj+Ppo5QdHoeboidah3P65N3s=" and for BinaryReader it was "rWXzn/CoLLPBWqMCE4qcE3XmUKw=". Can anyone help me get the SHA1 hash of an .exe file? Thx. BTW, sorry for my English ;)
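
    For illustration, the underlying pitfall is reading the executable as text instead of raw bytes; a text reader decodes and mangles the input, which changes the digest. A minimal sketch of the byte-oriented approach in Ruby (the path is made up):

        require 'digest/sha1'

        bytes = File.binread('program.exe')    # binary-safe read, no text decoding
        puts Digest::SHA1.base64digest(bytes)  # base64 form, like the outputs above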

    Read the article

  • Is there any way to limit the size of an STL Map?

    - by Nathan Fellman
    I want to implement some sort of lookup table in C++ that will act as a cache. It is meant to emulate a piece of hardware I'm simulating. The keys are non-integer, so I'm guessing a hash is in order. I have no intention of reinventing the wheel, so I intend to use std::map for this (though suggestions for alternatives are welcome). The question is: is there any way to limit the size of the map to emulate the fact that my hardware is of finite size? I'd expect the insert method to return an error or throw an exception if the limit is reached. If there is no such way, I'll simply check the size before trying to insert, but that seems like an inelegant way to do it.
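
    The question is about C++, but the wrap-and-check behaviour being asked for can be sketched in a few lines of Ruby (illustrative only; in C++ one would wrap std::map the same way):

        # A lookup table that refuses new keys once a fixed capacity is
        # reached, emulating hardware of finite size.
        class BoundedTable
          class FullError < StandardError; end

          def initialize(capacity)
            @capacity = capacity
            @entries  = {}
          end

          def [](key)
            @entries[key]
          end

          def []=(key, value)
            if !@entries.key?(key) && @entries.size >= @capacity
              raise FullError, "capacity #{@capacity} reached"
            end
            @entries[key] = value
          end
        end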

    Read the article

  • How should I change my Graph structure (very slow insertion)?

    - by Nazgulled
    Hi, this program I'm doing is about a social network, which means there are users and their profiles. The profiles structure is UserProfile.

    Now, there are various possible Graph implementations and I don't think I'm using the best one. I have a Graph structure and inside there's a pointer to a linked list of type Vertex. Each Vertex element has a value, a pointer to the next Vertex, and a pointer to a linked list of type Edge. Each Edge element has a value (so I can define weights and whatever is needed), a pointer to the next Edge, and a pointer to the Vertex owner.

    I have 2 sample files with data to process (in CSV style) and insert into the Graph. The first one is the user data (one user per line); the second one is the user relations (for the graph). The first file is quickly inserted into the graph because I always insert at the head, and there are ~18000 users. The second file takes ages, even though I still insert the edges at the head. The file has about ~520000 lines of user relations and takes 13-15 minutes to insert into the Graph. I made a quick test and reading the data is pretty quick, instantaneous really. The problem is the insertion.

    This problem exists because I have a Graph implemented with linked lists for the vertices. Every time I need to insert a relation, I need to look up 2 vertices so I can link them together. Doing this for ~520000 relations takes a while. How should I solve this?

    Solution 1) Some people recommended I implement the Graph (the vertices part) as an array instead of a linked list. This way I have direct access to every vertex and the insertion time will probably drop considerably. But I don't like the idea of allocating an array with [18000] elements. How practical is this? My sample data has ~18000, but what if I need much less or much more? The linked list approach has that flexibility: I can have whatever size I want as long as there's memory for it. But the array doesn't, so how am I going to handle such a situation? What are your suggestions? Using linked lists is good for space complexity but bad for time complexity, and using an array is good for time complexity but bad for space complexity. Any thoughts about this solution?

    Solution 2) This project also demands that I have some sort of data structure that allows quick lookups based on a name index and an ID index. For this I decided to use hash tables. My tables are implemented with separate chaining as collision resolution, and when a load factor of 0.70 is reached I normally recreate the table. I base the next table size on this: http://planetmath.org/encyclopedia/GoodHashTablePrimes.html. Currently, both hash tables hold a pointer to the UserProfile instead of duplicating the user profile itself. That would be stupid: changing data would require 3 changes, and it's really dumb to do it that way. So I just save the pointer to the UserProfile. The same user profile pointer is also saved as the value in each Graph Vertex.

    So, I have 3 data structures, one Graph and two hash tables, and every single one of them points to the same exact UserProfile. The Graph structure will serve the purpose of finding the shortest path and stuff like that, while the hash tables serve as quick indexes by name and ID. What I'm thinking of doing to solve my Graph problem is, instead of having the hash tables' values point to the UserProfile, point them to the corresponding Vertex. It's still a pointer, no more and no less space is used; I just change what I point to. Like this, I can easily and quickly look up each Vertex I need and link them together. This will insert the ~520000 relations pretty quickly. I thought of this solution because I already have the hash tables and I need to have them, so why not take advantage of them for indexing the Graph vertices instead of the user profile? It's basically the same thing; I can still access the UserProfile pretty quickly: just go to the Vertex and then to the UserProfile. But do you see any cons of this second solution against the first one? Or only pros that overpower the pros and cons of the first solution?

    Other Solution) If you have any other solution, I'm all ears, but please explain the pros and cons of that solution over the previous 2. I really don't have much time to waste with this right now; I need to move on with this project, so if I'm going to make such a change I need to understand exactly what to change and whether that's really the way to go.

    Hopefully no one fell asleep reading this and closed the browser; sorry for the big testament. But I really need to decide what to do about this, and I really need to make a change.

    P.S.: When answering my proposed solutions, please enumerate them as I did, so I know exactly what you are talking about and don't confuse me more than I already am.
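
    A minimal sketch of Solution 2 in Ruby (field names are illustrative): the index maps IDs straight to vertices, so inserting a relation becomes two O(1) hash lookups plus two list appends, instead of two scans over a ~18000-element vertex list.

        UserProfile = Struct.new(:id, :name)
        Vertex      = Struct.new(:profile, :edges)

        vertex_by_id = {}   # the hash table now points at vertices

        def add_user(vertex_by_id, profile)
          vertex_by_id[profile.id] = Vertex.new(profile, [])
        end

        def add_relation(vertex_by_id, id_a, id_b)
          a = vertex_by_id[id_a]
          b = vertex_by_id[id_b]
          a.edges << b
          b.edges << a
        end

        add_user(vertex_by_id, UserProfile.new(1, "alice"))
        add_user(vertex_by_id, UserProfile.new(2, "bob"))
        add_relation(vertex_by_id, 1, 2)   # no list scan involved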

    Read the article

  • Generating short license keys with OpenSSL

    - by Marc Charbonneau
    I'm working on a new licensing scheme for my software, based on OpenSSL public/private key encryption. My past approach, based on this article, was to use a large private key to encrypt an SHA1-hashed string, which I sent to the customer as a license file (the base64-encoded hash is about a paragraph in length). I know someone could still easily crack my application, but it prevented someone from making a key generator, which I think would hurt more in the long run. For various reasons I want to move away from license files and simply email a 16-character base32 string the customer can type into the application. Even using small private keys (which I understand are trivial to crack), it's hard to get the encrypted hash this small. Would there be any benefit to using the same strategy to generate an encrypted hash, but simply using the first 16 characters as a license key? If not, is there a better alternative that will create keys in the format I want?
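
    One hedged sketch of the truncation idea in Ruby: derive the key with an HMAC and keep only the first 80 bits, which is exactly 16 base32 characters. Note this swaps the public/private-key signature for a shared secret, so it only resists key generators if the secret never ships inside the application (e.g. keys are issued and verified server-side):

        require 'openssl'

        B32 = ('A'..'Z').to_a + ('2'..'7').to_a   # RFC 4648 base32 alphabet

        def base32_encode(bytes)
          bits = bytes.unpack('B*').first                     # bit string
          bits.scan(/.{5}/).map { |c| B32[c.to_i(2)] }.join   # 5 bits per char
        end

        def license_key(customer_id, secret)
          mac = OpenSSL::HMAC.digest(OpenSSL::Digest::SHA1.new, secret, customer_id)
          base32_encode(mac[0, 10])   # first 10 bytes = 80 bits = 16 chars
        end

        puts license_key('customer-42', 'server-side secret')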

    Read the article

  • Optimal password salt length

    - by Juliusz Gonera
    I tried to find the answer to this question on Stack Overflow without any success. Let's say I store passwords using a SHA-1 hash (so it's 160 bits) and let's assume that SHA-1 is enough for my application. How long should the salt used to generate the password's hash be? The only answer I found was that there's no point in making it longer than the hash itself (160 bits in this case), which sounds logical, but should I make it that long? E.g. Ubuntu uses an 8-byte salt with SHA-512 (I guess), so would 8 bytes be enough for SHA-1 too, or would that maybe be too much?
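
    A small Ruby sketch of the usual practice (the password is illustrative): the salt only has to be unique per password, not secret, so 8-16 random bytes is the common range, and it is stored next to the hash:

        require 'digest/sha1'
        require 'securerandom'

        password = 'hunter2'
        salt     = SecureRandom.random_bytes(8)       # 64 bits, Ubuntu-style
        digest   = Digest::SHA1.hexdigest(salt + password)
        record   = [salt].pack('m0') + '$' + digest   # keep the salt with the hash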

    Read the article

  • How can I compare my PHPASS-hashed stored password to my incoming POST data?

    - by Ygam
    Here's a better example, just simple checking. The stored value in the database has password: fafa (hashed with phpass at registration) and username: fafa; I am using the phpass password hashing framework.

        public function demoHash($data) // $data is the POST data named password
        {
            $hash = new PasswordHash(8, false);
            $query = ORM::factory('user');
            $result = $query
                ->select('username, password')
                ->where('username', 'fafa')
                ->find();
            $hashed = $hash->HashPassword($data);
            $check = $hash->CheckPassword($hashed, $result->password);
            echo $result->username . "<br/>";
            echo $result->password . "<br/>";
            return $check;
        }

    $check is returning false.

    Read the article

  • My Ruby Code: How can I improve? (Java to Ruby guy)

    - by steve
    Greetings, I get the feeling that I'm using Ruby in an ugly way and possibly missing out on tonnes of useful features. I was wondering if anyone could point out a cleaner, better way to write my code, which is pasted here. The code itself simply scrapes some data from Yelp and processes it into a JSON format. The reason I'm not using hash.to_json is because it throws some sort of stack error, which I can only assume is due to the hash being too large (it's not particularly large). The response object is a hash; text is the output which saves to file. Anyway, guidance appreciated.

        def mineLocation
          client = Yelp::Client.new
          request = Yelp::Review::Request::GeoPoint.new(:latitude => 13.3125,
                                                        :longitude => -6.2468,
                                                        :yws_id => 'nicetry')
          response = client.search(request)
          response['businesses'].length.times do |businessEntry|
            text = ""
            response['businesses'][businessEntry].each { |key, value|
              if value.class == Array
                value.length.times { |arrayEntry|
                  text += "\"#{key}\":["
                  value[arrayEntry].each { |arrayKey, arrayValue|
                    text += "{\"#{arrayKey}\":\"#{arrayValue}\"},"
                  }
                  text += "]"
                }
              else
                text += "\"#{arrayKey}\":\"#{arrayValue}\","
              end
            }
          end
        end
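
    One hedged possibility, assuming the stack error comes from serializing the whole response at once: build plain hashes/arrays and let the json library serialize each business separately, instead of concatenating strings by hand:

        require 'json'

        # Illustrative stand-in for the Yelp response used in the question:
        response = { 'businesses' => [{ 'name' => 'cafe', 'tags' => ['coffee', 'wifi'] }] }

        texts = response['businesses'].map { |business| JSON.generate(business) }
        # => ["{\"name\":\"cafe\",\"tags\":[\"coffee\",\"wifi\"]}"]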

    Read the article

  • Checking server load with PHP and taking appropriate action

    - by teehoo
    Hi, I'm creating a project in which a server receives operations from clients to apply to a local server document. The server and client both share the same document and therefore each message the client sends contains an MD5 hash, which the server compares to after generating its own hash to ensure the server and client documents are synchronized. My question is, if the server is overloaded, could I somehow detect this in PHP, which would in turn let me decide whether I want to execute the hash generation function or not? Perhaps in the scenario defined, this is not a perfect use-case, but I'm interested in this approach in general.

    Read the article

  • Prevent strings stored in memory from being read by other programs

    - by Roy
    Some programs like Process Explorer are able to read strings in memory (for example, an error message written in my code can be displayed easily, even though the program is already compiled). Imagine I have a password string "123456" allocated sequentially in memory. What if hackers are able to get hold of the password typed by the user? Is there any way to prevent strings from being seen so clearly? Oh yes, also: if I hash the password and send it from client to server to compare against the stored database hash value, won't a hacker be able to capture the same hash and replay it to gain access to the user account? Is there any way to prevent replaying? Thank you!
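
    On the replay question, the classic answer is a challenge-response: the server issues a fresh nonce per login, so a captured response is single-use. A minimal Ruby sketch (values illustrative; note that whatever the client must know to answer becomes the effective secret):

        require 'digest/sha1'
        require 'openssl'
        require 'securerandom'

        stored_hash = Digest::SHA1.hexdigest('123456')   # what the server keeps
        nonce       = SecureRandom.hex(16)               # server -> client, fresh each login

        # Client proves knowledge of the secret without sending a reusable value:
        proof = OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA256.new, stored_hash, nonce)

        # Server recomputes and compares; an old proof fails against a new nonce:
        expected = OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA256.new, stored_hash, nonce)
        authenticated = (proof == expected)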

    Read the article

  • Is there a secure p2p distributed database?

    - by p2pgirl
    I'm looking for a distributed hash table to store and retrieve values securely. These are my requirements:

    - It must use an existing popular p2p network (I must guarantee my key/value will be stored and kept by multiple peers).
    - No one but me should be able to edit or delete the key/value; ideally, an encryption key that only I have access to would be required to edit my key's value.
    - All peers would be able to read the key's value (read-only access; only the key holder can edit the value).

    Is there such a p2p distributed hash table? Would the BitTorrent distributed hash table meet my requirements? Where could I find documentation?
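
    The usual building block for the second requirement is to sign values and derive the key from the public key, so any peer can verify an update but only the key holder can produce one (BitTorrent's DHT later standardized a similar scheme for signed mutable items in BEP 44). A conceptual Ruby sketch, not a real DHT client:

        require 'openssl'

        keypair   = OpenSSL::PKey::RSA.new(2048)
        value     = 'my replicated data'
        signature = keypair.sign(OpenSSL::Digest::SHA256.new, value)
        key       = OpenSSL::Digest::SHA1.hexdigest(keypair.public_key.to_der)

        # Peers store {key => [value, signature, public key]} and verify updates:
        ok = keypair.public_key.verify(OpenSSL::Digest::SHA256.new, signature, value)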

    Read the article

  • GetHashCode Method reliability in Silverlight/WP7.1

    - by abhinav
    I am attempting to hash, and keep the hash of, an object of type IEnumerable<anotherobject> which has about 1000 entries. I'll be generating another such object, but this time I'd like to check for any changes in the values of the entries by comparing the hash codes of the two objects. Basically, I was wondering if GetHashCode() is apt for this, both from a performance perspective and a reliability perspective (getting different values for different object values and the same value for equal object values, always). If I have to override it, what would be a good way to do so? Does it always depend on the type of anotherobject and on what Equals means when comparing two anotherobjects? Is there a generic way to do it? This concern is because my object can be quite big.

    Read the article

  • Storing csv data for further manipulation using Ruby

    - by ischnura
    I am dealing with a csv file that has some customer information (email, name, address, amount, [shopping_list: item 1, item 2]). I would like to work with the data and produce some labels for printing, as well as gather some extra information (total amounts, total of item 1...). My main concern is to find the appropriate structure to store the data in Ruby for future manipulation. For now I have thought about the following possibilities:

    - multidimensional arrays: pretty simple to build, but pretty hard to access the data in a beautiful Ruby way;
    - hashes: having the email as key, and storing the information in different hashes (one hash for name, another hash for address, another hash for shopping list...);
    - getting the csv data into a database and working with the data from Ruby?

    I would really appreciate your advice and guidance!
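
    A sketch of a one-object-per-row approach with Ruby's standard csv library (file name and header names assumed):

        require 'csv'

        Customer = Struct.new(:email, :name, :address, :amount, :shopping_list)

        customers = {}   # keyed by email, one object per customer
        CSV.foreach('customers.csv', :headers => true) do |row|
          items = row['shopping_list'].to_s.split(',').map(&:strip)
          customers[row['email']] =
            Customer.new(row['email'], row['name'], row['address'],
                         row['amount'].to_f, items)
        end

        total_amount = customers.values.map(&:amount).inject(0, :+)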

    Read the article

  • How do I assign a value from params, or session, whichever exists?

    - by irkenInvader
    What is the "Rails way" or the "Ruby way" of doing the following: in my controller, I'm creating an instance of an Options class. It will be initialized with information from the params hash if that exists; otherwise, it will check the session hash for the information. Finally, it will initialize with defaults if neither params nor session has the data it needs. Here is how I'm doing it now (it works fine, but it seems a little ugly):

        if params[:cust_options]
          @options = CustomOptions.new(params[:cust_options])
        elsif session[:cust_options]
          @options = CustomOptions.new(session[:cust_options])
        else
          @options = CustomOptions.new
        end
        session[:cust_options] = @options.to_hash

    Like I said, everything is working fine; I'm just looking for a more idiomatically Ruby way of writing this block of code.
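
    One common idiom, assuming CustomOptions.new treats the hashes the same way as above, is to take the first source that exists with ||:

        source   = params[:cust_options] || session[:cust_options]
        @options = source ? CustomOptions.new(source) : CustomOptions.new
        session[:cust_options] = @options.to_hash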

    Read the article

  • How to change password hashing algorithm when using spring security?

    - by harry
    I'm working on a legacy Spring MVC based web application which is using a - by current standards - inappropriate hashing algorithm. Now I want to gradually migrate all hashes to bcrypt. My high-level strategy is:

    - New hashes are generated with bcrypt by default.
    - When a user successfully logs in and still has a legacy hash, the app replaces the old hash with a new bcrypt hash.

    What is the most idiomatic way of implementing this strategy with Spring Security? Should I use a custom Filter, or my own AccessDecisionManager, or ...?
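
    The shape of the rehash-on-login step, sketched in Ruby with the bcrypt gem (legacy_hash? and legacy_digest are placeholders for the old scheme; in Spring Security the same logic would live behind whatever authentication hook the application uses):

        require 'bcrypt'

        def authenticate(user, password)
          if legacy_hash?(user.password_hash)
            return false unless legacy_digest(password) == user.password_hash
            # Successful login with a legacy hash: upgrade it in place.
            user.password_hash = BCrypt::Password.create(password)
            user.save
          else
            return false unless BCrypt::Password.new(user.password_hash) == password
          end
          true
        end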

    Read the article

  • How can I allow only privileged users to download a PDF with PHP?

    - by ThinkingInBits
    Let's say I have some PDF files stored on my server and I only want to allow a person who has paid to download a particular PDF. So for example, let's say I have a bunch of e-books. The only way a user would be able to download e-book A is if his account contains the right credentials for that particular book. What's the best way to accomplish this? Any ideas/advice on how to improve my idea are greatly appreciated! My current idea:

    - A user places an order.
    - Upon success, a new folder is created at /account_num/order_id/.
    - A copy of the particular file is stored in this directory.
    - Have PHP generate an .htaccess file that only allows access from a URL containing a random hash. The only way a user can access this hashed page is if they are signed in as the right user and the hash matches the one stored in the database; otherwise they are redirected to the home page.

    Read the article

  • Intersection between sets containing different types of variables

    - by Gacek
    Let's assume we have two collections:

        List<double> values
        List<SomePoint> points

    where SomePoint is a type containing the three coordinates of a point:

        SomePoint
        {
            double X;
            double Y;
            double Z;
        }

    Now I would like to perform an intersection between these two collections, to find the points in points whose Z coordinate is equal to one of the elements of values. I created something like this:

        HashSet<double> hash = new HashSet<double>(points.Select(p => p.Z));
        hash.IntersectWith(values);

        var result = new List<SomePoint>();
        foreach (var h in hash)
            result.Add(points.Find(p => p.Z == h));

    But it won't return those points which have the same Z value but different X and Y. Is there any better way to do it?

    Read the article

  • Purely functional equivalent of weakhashmap?

    - by Jon Harrop
    Weak hash tables like Java's weak hash map use weak references to track the collection of unreachable keys by the garbage collector and remove bindings with that key from the collection. Weak hash tables are typically used to implement indirections from one vertex or edge in a graph to another because they allow the garbage collector to collect unreachable portions of the graph. Is there a purely functional equivalent of this data structure? If not, how might one be created? This seems like an interesting challenge. The internal implementation cannot be pure because it must collect (i.e. mutate) the data structure in order to remove unreachable parts but I believe it could present a pure interface to the user, who could never observe the impurities because they only affect portions of the data structure that the user can, by definition, no longer reach.

    Read the article

  • Is there a way to pass a regex capture to a block in Ruby?

    - by Gordon Fontenot
    I have a hash with a regex for the key and a block for the value. Something like the following:

        { 'test (.+?)' => { puts $1 } }

    Not exactly like that, obviously, since the block is being stored as a Proc, but that's the idea. I then have a regex match later on that looks a lot like this:

        hash.each do |pattern, action|
          if /#{pattern}/i.match(string)
            action.call
          end
        end

    The idea was to store the block away in the hash to make it a bit easier for me to expand upon in the future, but now the regex capture doesn't get passed to the block. Is there a way to do this cleanly that would support any number of captures I put in the regex (as in, some regex patterns may have 1 capture, others may have 3)?
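
    A sketch of one clean way to do it: store lambdas that accept the MatchData, so each handler pulls out exactly as many captures as its pattern defines (patterns and strings below are made up):

        handlers = {
          /test (.+)/i           => lambda { |m| puts m[1] },
          /move (\w+) to (\w+)/i => lambda { |m| puts "#{m[1]} -> #{m[2]}" }
        }

        string = 'move pawn to e4'
        handlers.each do |pattern, action|
          if (m = pattern.match(string))
            action.call(m)   # the block decides which captures it needs
          end
        end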

    Read the article

  • CouchDB emit with a lookup key that is an array, such that the order of array elements is ignored

    - by MatternPatching
    When indexing a CouchDB view, you can emit an array as the key, such as:

        emit(["one", "two", "three"], doc);

    I appreciate the fact that when searching the view the order is important, but sometimes I would like the view to ignore it. I have thought of a couple of options:

    1. By convention, just emit the contents in alphabetical order, and ensure that lookups use the same convention.
    2. Somehow hash in a manner that disregards the order, and emit/search based on that hash. (This is fairly easy if you simply hash each element individually, "sum" the hashes, then mod.)

    Note: I'm sure this may be covered somewhere in the authoritative guide, but I was unsuccessful in finding it.

    Read the article

  • Is it possible to change an iframe's src using JavaScript running in the page inside that iframe?

    - by Amr ElGarhy
    I want to change the iframe source when something changes in this iframe's content; the iframe is on a different domain than the parent page. Can this be done or not? I just need to add a hash to the iframe's src, so that I can access this hash value from the parent page and do some stuff based on it. What I did: in the iframe page I wrote:

        window.location.hash = "close_child";

    and in the parent page I wrote:

        if (document.getElementById("MyIFrameId").src.indexOf("#close_child") > 0) {

    but I always find that the src is empty.

    Read the article

  • Why does hashCode() return the same value for an object in all consecutive executions?

    - by Vijay Shanker
    Hi, I am trying some code around object equality in Java. As I have read somewhere, hashCode() is a number generated by applying a hash function; the hash function can be different for each class but can also be the same. At the Object level, it returns the memory address of the object. Now, I have a sample program which I run 10 times, consecutively. Every time I run the program I get the same value as the hash code. If hashCode() returns the memory location of the object, how come the JVM stores the object at the same memory address in consecutive runs? Can you please give me some insight and your view on this issue?

    Read the article
