Search Results

Search found 2495 results on 100 pages for 'hash of hashes'.

Page 41/100 | < Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >

  • Drupal Views API, add simple argument handler

    - by LanguaFlash
    Background: I have a complex search form that stores the query and its hash in a cache. Once the cache is set, I redirect to something like /searchresults/e6c86fadc7e4b7a2d068932efc9cc358, where that big long string on the end is the md5 hash of my query. I need to create a new Views argument so Views knows what that hash is for. The reason for all this hassle is that my original search form is way too complex and has way too many arguments to consider putting them all into the path and doing the filtering with the normal Views arguments. Now for my question. I have been reading the Views 2 documentation but can't figure out how to build this custom argument. It doesn't seem like this should be as hard as it appears to be. Leaving aside any knowledge of the Views API, it would seem that all I need is a callback function that takes the argument from the path as its only argument and returns a list of node IDs to filter to. Can anyone point me to a solution or give me some example code? Thanks for your help! You guys are great. PS. I am pretty sure that my design is the best I can come up with, so let's not get off my question and into cross-checking my design logic if we can help it.

    Read the article

  • Is a Hashtable that fast?

    - by Costa
    Hi. s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1] is the hash function of a Java string, and I assume the other languages' implementations are similar or close to this. Suppose we have a hashtable and a list, each of 50 elements, where each element is a 7-character string: ABCDEF1, ABCDEF2, ABCDEF3, ... ABCDEFn. Suppose each bucket of the hashtable contains 5 strings (I think this function will actually put one string per bucket, but let us assume it is 5). If we call col.Contains("ABCDEFn") on the list, each comparison will match the first 6 characters and discover the difference on the 7th. The hashtable will take around 70 operations (multiplications and additions) to get the hashcode and to compare with the 5 strings in the bucket, and BANG, it's found. For the list it will take around 300 comparisons to find it. For the case where there are only 10 elements, the list will take around 70 operations but the hashtable will take around 50 operations, and note that hashtable operations are more time consuming (multiplications). I conclude that HybridDictionary in .NET is probably the best choice for most cases that require a hashtable of unknown size, because it will let me use a list until the list grows past 10 elements. I still need something like a HashSet rather than a Dictionary of keys and values, though, and I wonder why there is no HybridSet! So what do you think? Thanks
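
    A quick way to see the crossover the question is reasoning about is to time membership tests directly. A rough sketch in Python (sizes and strings are the hypothetical ones from the question; the constant factors will differ from .NET's, but the shape of the result is the same):

        import timeit

        keys = ["ABCDEF%d" % i for i in range(50)]
        as_list, as_set = list(keys), set(keys)
        target = "ABCDEF49"  # worst case for the list: the last element

        # The list compares strings one by one; the set hashes the target once
        # and probes a bucket, so its cost stays flat as n grows.
        print(timeit.timeit(lambda: target in as_list, number=100000))
        print(timeit.timeit(lambda: target in as_set, number=100000))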

    Read the article

  • What's the recommended implementation for hashing OLE Variants?

    - by Barry Kelly
    OLE Variants, as used by older versions of Visual Basic and pervasively in COM Automation, can store lots of different types: basic types like integers and floats, more complicated types like strings and arrays, and all the way up to IDispatch implementations and pointers in the form of ByRef variants. Variants are also weakly typed: they convert the value to another type without warning, depending on which operator you apply and what the current types of the values passed to the operator are. For example, comparing two variants, one containing the integer 1 and another containing the string "1", for equality will return True. So assuming that I'm working with variants at the underlying data level (e.g. VARIANT in C++ or TVarData in Delphi, i.e. the big union of different possible values), how should I hash variants consistently so that they obey the right rules? The rules: variants that hash unequally should compare as unequal, both in sorting and direct equality; variants that compare as equal for both sorting and direct equality should hash as equal. It's OK if I have to use different sorting and direct comparison rules in order to make the hashing fit. The way I'm currently working is to normalize the variants to strings (if they fit) and treat them as strings; otherwise I work with the variant data as if it were an opaque blob, and hash and compare its raw bytes. That has some limitations, of course: numbers 1..10 sort as [1, 10, 2, ... 9], etc. This is mildly annoying, but it is consistent and very little work. However, I do wonder if there is an accepted practice for this problem.
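
    A hedged sketch of the normalize-then-hash idea, in Python for brevity: parse anything that reads as a number into one canonical numeric key, and fall back to the string form otherwise. Values that are equal after coercion (1, 1.0, "1") then share a hash, and sorting on the tagged key also fixes the [1, 10, 2, ...] artifact. This only illustrates the rule; it is not Variant-complete (arrays, IDispatch and ByRef cases still need their own branches):

        def variant_key(v):
            # Canonical key: numbers (and numeric strings) collapse to one
            # representation; everything else keys on its string form.
            s = str(v)
            try:
                return ('num', float(s))
            except ValueError:
                return ('str', s)

        def variant_hash(v):
            return hash(variant_key(v))

        assert variant_hash(1) == variant_hash("1") == variant_hash(1.0)
        print(sorted([10, 2, "1", 9], key=variant_key))  # ['1', 2, 9, 10]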

    Read the article

  • autologin component doesn't work on remote server

    - by user606521
    I am using the AutoLogin component from http://milesj.me/code/cakephp/auto-login (v 3.5.1). It works on my localhost WAMP server but fails on a remote server. I am using these settings in the beforeFilter() callback:

        $this->AutoLogin->settings = array(
            // Model settings
            'model' => 'User',
            'username' => 'username',
            'password' => 'password',
            // Controller settings
            'plugin' => '',
            'controller' => 'users',
            // Cookie settings
            'cookieName' => 'rememberMe',
            'expires' => '+1 month',
            // Process logic
            'active' => true,
            'redirect' => true,
            'requirePrompt' => true
        );

    On the remote server it simply doesn't log users back in after the browser was closed. I can't figure out what may be causing the problem.

    Edit: I figured out what is causing the problem, but I don't know how to fix it. First of all, the cookie is set like this:

        $this->Cookie->write('key', array('username' => 'someusername', 'hash' => 'somehash', ...));

    Then it's read back like this:

        $cookie = $this->Cookie->read('key');

    On my WAMP server $cookie is array('username' => 'someusername', 'hash' => 'somehash', ...), but on the remote server the returned $cookie is string(159) "{\"username\":\"YWxlay5iYXJzsdsdZXdza2ldssd21haWwuY29t\",\"password\":\"YWxlazc3ODEy\",\"hash\":\"aa15bffff9ca12cdcgfgb351d8bfg2f370bf458\",\"time\":1339923926}" when it should be array('username' => "YWxlay5iYXJzsdsdZXdza2ldssd21haWwuY29t", 'password' => "YWxlazc3ODEy", ...). Why is the returned cookie a string and not an array?
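
    One hedged observation: the escaped dump is exactly the JSON encoding of the expected array, which suggests the remote server's cookie layer is handing back the serialized body without decoding it (a CakePHP version or configuration difference would be the thing to chase). Decoding-if-string is the generic workaround; sketched in Python here, the CakePHP equivalent would be json_decode($cookie, true):

        import json

        raw = '{"username": "YWxl...", "hash": "aa15...", "time": 1339923926}'
        # Tolerate both behaviors: decode when the cookie arrives as a
        # JSON string, pass it through when it is already a mapping.
        cookie = json.loads(raw) if isinstance(raw, str) else raw
        print(cookie["hash"])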

    Read the article

  • LINQ Joins - Performance

    - by Meiscooldude
    I am curious how exactly LINQ (not LINQ to SQL) performs its joins behind the scenes, in relation to how SQL Server performs joins. SQL Server, before executing a query, generates an Execution Plan. The Execution Plan is basically an expression tree of what it believes is the best way to execute the query. Each node provides information on whether to do a Sort, Scan, Select, Join, etc. On a 'Join' node in our execution plan, we can see three possible algorithms: Hash Join, Merge Join, and Nested Loops Join. SQL Server will choose which algorithm to use for each join operation based on the expected number of rows in the inner and outer tables, what type of join we are doing (some algorithms don't support all types of joins), whether we need data ordered, and probably many other factors.

    Join algorithms:

    Nested Loops Join: best for small inputs; can be optimized with an ordered inner table.
    Merge Join: best for medium to large sorted inputs, or an output that needs to be ordered.
    Hash Join: best for medium to large inputs; can be parallelized to scale linearly.

    LINQ query:

        DataTable firstTable, secondTable;
        ...
        var rows = from firstRow in firstTable.AsEnumerable()
                   join secondRow in secondTable.AsEnumerable()
                   on firstRow.Field<object>(randomObject.Property)
                   equals secondRow.Field<object>(randomObject.Property)
                   select new { firstRow, secondRow };

    SQL query:

        SELECT *
        FROM firstTable fT
        INNER JOIN secondTable sT ON fT.Property = sT.Property

    SQL Server might use a Nested Loops Join if it knows there are a small number of rows in each table, a Merge Join if it knows one of the tables has an index, and a Hash Join if it knows there are a lot of rows in either table and neither has an index. Does LINQ choose its algorithm for joins, or does it always use one?
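
    For what it's worth, LINQ to Objects has no optimizer to consult: Enumerable.Join always builds a hash-based lookup over one sequence and streams the other past it. A minimal sketch of that hash-join shape, in Python for brevity:

        def hash_join(outer, inner, outer_key, inner_key):
            # Build phase: bucket every inner row by its join key.
            buckets = {}
            for row in inner:
                buckets.setdefault(inner_key(row), []).append(row)
            # Probe phase: stream the outer rows and emit matching pairs.
            for row in outer:
                for match in buckets.get(outer_key(row), []):
                    yield row, match

        pairs = list(hash_join([(1, "a"), (2, "b")], [(1, "x")],
                               outer_key=lambda r: r[0], inner_key=lambda r: r[0]))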

    Read the article

  • SQL Server compare table entries for update

    - by Dave
    I have a trade table with several million rows. Each row represents a version of a trade. If I'm given a possibly-new trade, I compare it to the latest version in the trade table. If it has changed I add a new version; otherwise I do nothing. In order to compare the two trades I read the version from the trade table into my application. This doesn't work well when I'm given tens of thousands of possibly-new trades: even batching the reads to pull in 1,000 trades at a time and compare them, the whole process can take several minutes, and all the time is spent in the DB. I'm trying to find a way to compare the possibly-new trades to the ones in the trade table without so much I/O. What I've come up with so far is adding a hash column to each row in the trade table, where the hash is of all the trade fields. When I'm given possibly-new trades I compute their hashes, put the values into a temporary table, then find the ones that are different. This feels very hacky. Is there a better way of doing it? Thanks
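
    The hash-column approach is less hacky than it feels; it is essentially how change-data-capture tools diff wide rows. The one real trap is hashing a naive concatenation of fields. A sketch of a safer canonical encoding (Python, hypothetical field names; inside SQL Server itself, HASHBYTES over the same canonical string would serve):

        import hashlib

        FIELDS = ["trade_id", "qty", "price", "counterparty"]  # fixed field order

        def trade_hash(trade):
            # repr() keeps None, "" and numerics distinct; the \x1f separator
            # stops ("ab", "c") and ("a", "bc") from hashing identically.
            payload = "\x1f".join(repr(trade[f]) for f in FIELDS)
            return hashlib.sha256(payload.encode("utf-8")).hexdigest()

        old = {"trade_id": 42, "qty": 100, "price": 9.50, "counterparty": "ACME"}
        new = dict(old, price=9.75)
        print(trade_hash(old) == trade_hash(new))  # False -> insert a new version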

    Read the article

  • PHP does not store all variables in session

    - by Flurin Juvalta
    Dear all, I'm assigning session variables by filling the $_SESSION array throughout my script. My problem is that, for some reason, not all variables are available in the session. Here is a shortened version of my code to explain the issue:

        session_start();
        print_r($_SESSION);
        $_SESSION['lang'] = 'de';
        $_SESSION['location_id'] = 11;
        $_SESSION['region_id'] = 1;
        $_SESSION['userid'] = 'eccbc87e4b5ce2fe28308fd9f2a7baf3';
        $_SESSION['hash'] = 'dce57f1e3bc6fba32afab93b0c38b662';
        print_r($_SESSION);

    The first call prints something like this:

        Array ( )
        Array ( [lang] => de [location_id] => 11 [region_id] => 1 [userid] => eccbc87e4b5ce2fe28308fd9f2a7baf3 [hash] => dce57f1e3bc6fba32afab93b0c38b662 )

    The second call prints:

        Array ( [lang] => de [location_id] => 11 [region_id] => 1 )
        Array ( [lang] => de [location_id] => 11 [region_id] => 1 [userid] => eccbc87e4b5ce2fe28308fd9f2a7baf3 [hash] => dce57f1e3bc6fba32afab93b0c38b662 )

    As you can see, the important login information is not stored in the session. Does anybody have an idea what could be wrong with my session? Thanks for your answers!

    Read the article

  • Digitally sign MS Office (Word, Excel, etc..) and PDF files on the server

    - by Sébastien Nussbaumer
    I need to digitally sign MS Office and PDF files that are stored on a server. I really mean a digital signature that is integrated in the document, according to each specific file format. This is the process I had in mind:

    1. Create a hash of the file's content.
    2. Send the hash to a custom-written Java applet in the browser.
    3. The user encrypts the hash with his/her private key (on a USB token via PKCS#11, for example), thus effectively signing the file.
    4. The applet then sends the signature to the server.
    5. On the server, I incorporate the signature into the file (MS Office and PDF formats can carry a signature without changing the document content, probably by just setting some metadata field).

    What is cool is that you never have to download and upload the complete file to the server again. What is even cooler, the customer doesn't need Office or a PDF writer to sign the files. Parts 2, 3 and 4 are OK for me; my company bought all the Java technology I need for that for a previous project I worked on. Problem: I can't seem to find any documentation/examples for doing parts 1 and 5 for Office files. Are my Google skills failing me this time? Do you have any pointers to documentation or examples for doing that for MS Office files? The underlying technology isn't that important to me: I can use Java, .Net, COM; any working technology is OK! Note: I'm 95% sure I can nail points 1 and 5 for PDF files using iText. Thanks. Edit: If I can't do that with hashes and must download the complete file to the client, that's also possible. But then I still need the documentation to be able to sign Office files... in Java this time (from an applet).
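
    On part 1, one caveat worth hedging: for a signature that validates inside the document, the hash usually has to cover the exact byte ranges or content parts the format defines (PDF's /ByteRange, the signed package parts in Office Open XML), not the raw file bytes. The plumbing itself is the easy half; a sketch in Python of streaming a digest over content:

        import hashlib

        def content_hash(path, chunk_size=1 << 20):
            # Stream the digest so large server-side files never load whole.
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.digest()  # this is what would go to the applet for signing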

    Read the article

  • If attacker has original data, and encrypted data, can they determine the passphrase?

    - by Brad Cupit
    If an attacker has several distinct items (for example: e-mail addresses) and knows the encrypted value of each item, can the attacker more easily determine the secret passphrase used to encrypt those items? Meaning, can they determine the passphrase without resorting to brute force? This question may sound strange, so let me provide a use-case:

    1. A user signs up to a site with their e-mail address.
    2. The server sends that e-mail address a confirmation URL (for example: https://my.app.com/confirmEmailAddress/bill%40yahoo.com).
    3. An attacker can guess the confirmation URL and therefore can sign up with someone else's e-mail address and 'confirm' it without ever having to sign in to that person's e-mail account and see the confirmation URL. This is a problem.

    Instead of sending the e-mail address as plain text in the URL, we'll send it encrypted with a secret passphrase. (I know the attacker could still intercept the e-mail sent by the server, since e-mails are plain text, but bear with me here.) If an attacker then signs up with multiple free e-mail accounts and sees multiple URLs, each with the corresponding encrypted e-mail address, could the attacker more easily determine the passphrase used for encryption?

    Alternative solution: I could instead send a random number or a one-way hash of their e-mail address (plus random salt). This eliminates storing the secret passphrase, but it means I need to store that random number/hash in the database. The original approach above does not require this extra table. I'm leaning towards the one-way hash + extra table solution, but I still would like to know the answer: does having multiple unencrypted e-mail addresses and their encrypted counterparts make it easier to determine the passphrase used?
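
    A hedged middle path between the two designs: a keyed hash (HMAC) is one-way like the hash option but needs no extra table, because the server can recompute the token from the e-mail address and a server-side key, and known (address, token) pairs don't let an attacker recover the key. A sketch in Python (the secret and URL shape are hypothetical):

        import hmac, hashlib

        SECRET = b"long-random-server-side-key"  # never leaves the server

        def confirm_token(email):
            # Deterministic and one-way: recompute on the incoming request
            # and compare, instead of storing tokens in a table.
            return hmac.new(SECRET, email.encode("utf-8"), hashlib.sha256).hexdigest()

        email = "bill@yahoo.com"
        url = "https://my.app.com/confirmEmailAddress/%s/%s" % (email, confirm_token(email))

    On verification, the comparison should use a constant-time check (hmac.compare_digest) rather than ==.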

    Read the article

  • jQuery / Loading content into a div and changing URLs (working but buggy)

    - by Bruno
    This is working, but I'm not able to set an index.html file on my server root where I can specify the first page to go to. It also gets very buggy in some situations. Basically it's a common site (menu + content), but the idea is to load the content without refreshing the page, defining the div to load the content into, and making each page accessible by its URL. One of the biggest problems here is dealing with all the URL situations that may occur. The ideal would be to have a rel="divToLoadOn" and then pass it to my loadContent() function... so I would like your ideas/solutions for this, please. Thanks in advance!

        //if page comes from URL
        if (window.location.hash != '') {
            var url = window.location.hash;
            url = '..' + url.substr(1, url.length);
            loadContent(url);
        }

        //if page comes from an internal link
        $("a:not([target])").click(function(e){
            e.preventDefault();
            var url = $(this).attr("href");
            if (url != '#') {
                loadContent($(this).attr("href"));
            }
        });

        //LOAD CONTENT
        function loadContent(url){
            var contentContainer = $("#content");
            //set load animation
            $(contentContainer).ajaxStart(function() {
                $(this).html('loading...');
            });
            $.ajax({
                url: url,
                dataType: "html",
                success: function(data){
                    //store data globally so it can be used on complete
                    window.data = data;
                },
                complete: function(){
                    var content = $(data).find("#content").html();
                    var contentTitle = $(data).find("title").text();
                    //change url
                    var parsedUrl = url.substr(2, url.length);
                    window.location.hash = parsedUrl;
                    //change title
                    var titleRegex = /<title>(.*)<\/title>/.exec(data);
                    contentTitle = titleRegex[1];
                    document.title = contentTitle;
                    //renew content
                    $(contentContainer).fadeOut(function(){
                        $(this).html(content).fadeIn();
                    });
                }
            });
        }

    Read the article

  • How to run href="#var=something" before the onclick function fires in JavaScript?

    - by korben
    I'm using the JavaScript below to change an image on an .aspx page in ASP.NET/C#:

        <script language="JavaScript" type="text/javascript">
            var updateImageWhenHashChanges = function() {
                theImage = document.getElementById("ctl00_ContentPlaceHolder1_Image1a");
                if (window.location.hash == "#size=2") {
                    theImage.src = "<%# Eval("realfilename", "/files/l{0}") %>";
                } else if (window.location.hash == "#size=3") {
                    theImage.src = "<%# Eval("realfilename", "/files/{0}") %>";
                } else if (window.location.hash == "#size=1") {
                    theImage.src = "<%# Eval("fullthumbname", "/thumbnails/{0}") %>";
                } else {
                }
            }
        </script>

    Here's how I call it with a link:

        <a href="#size=3" onclick="updateImageWhenHashChanges();">test</a>

    The problem is that it only does what I'm expecting on the SECOND click of the link, because it seems onclick fires before the href is applied, so the first time I'm just placing the var and the second time I'm actually getting what I want. Does anyone know how I can fix this? I'm trying to get the image to change on each click.

    Read the article

  • Data structure choices for high-speed, memory-efficient detection of duplicate strings

    - by Jonathan Holland
    I have an interesting problem that could be solved in a number of ways: I have a function that takes in a string. If this function has never seen this string before, it needs to perform some processing; if it has seen the string before, it needs to skip processing. After a specified amount of time, the function should accept duplicate strings again. This function may be called thousands of times per second, and the string data may be very large. This is a highly abstracted explanation of the real application, just trying to get down to the core concept for the purpose of the question. The function will need to store state in order to detect duplicates. It also will need to store an associated timestamp in order to expire duplicates. It does NOT need to store the strings; a unique hash of the string would be fine, provided there are no false positives due to collisions (use a perfect hash?) and the hash function is performant enough. The naive implementation would be simply (in C#) Dictionary<String,DateTime>, though in the interest of lowering the memory footprint and potentially increasing performance I'm evaluating custom data structures to handle this instead of a basic hashtable. So, given these constraints, what would you use? EDIT, some additional information that might change proposed implementations: 99% of the strings will not be duplicates. Almost all of the duplicates will arrive back to back, or nearly sequentially. In the real world, the function will be called from multiple worker threads, so state management will need to be synchronized.
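
    As a baseline to measure any custom structure against, here is a minimal sketch in Python of the digest-plus-timestamp map (names are illustrative). A 128-bit digest stands in for the string to bound memory; collision risk is then negligible but not strictly zero, which is the trade-off a perfect hash would remove:

        import time, hashlib

        class DedupeWindow:
            def __init__(self, ttl_seconds):
                self.ttl = ttl_seconds
                self.seen = {}  # 16-byte digest -> last-seen timestamp

            def is_duplicate(self, s):
                now = time.monotonic()
                key = hashlib.blake2b(s.encode("utf-8"), digest_size=16).digest()
                last = self.seen.get(key)
                self.seen[key] = now
                return last is not None and (now - last) < self.ttl

    A production version would also evict expired digests (a periodic sweep, or a ring of time-bucketed sets) and wrap the map in a lock or a concurrent map for the multi-threaded case.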

    Read the article

  • Why doesn't String's hashCode() cache 0?

    - by polygenelubricants
    I noticed in the Java 6 source code for String that hashCode only caches values other than 0. The difference in performance is exhibited by the following snippet:

        public class Main {
            static void test(String s) {
                long start = System.currentTimeMillis();
                for (int i = 0; i < 10000000; i++) {
                    s.hashCode();
                }
                System.out.format("Took %d ms.%n", System.currentTimeMillis() - start);
            }
            public static void main(String[] args) {
                String z = "Allocator redistricts; strict allocator redistricts strictly.";
                test(z);
                test(z.toUpperCase());
            }
        }

    Running this on ideone.com gives the following output:

        Took 1470 ms.
        Took 58 ms.

    So my questions are: Why doesn't String's hashCode() cache 0? What is the probability that a Java string hashes to 0? What's the best way to avoid the performance penalty of recomputing the hash value every time for strings that hash to 0? Is this the best-practice way of caching values (i.e. cache all except one)? For your amusement, each line here is a string that hashes to 0:

        pollinating sandboxes
        amusement & hemophilias
        schoolworks = perversive
        electrolysissweeteners.net
        constitutionalunstableness.net
        grinnerslaphappier.org
        BLEACHINGFEMININELY.NET
        WWW.BUMRACEGOERS.ORG
        WWW.RACCOONPRUDENTIALS.NET
        Microcomputers: the unredeemed lollipop...
        Incentively, my dear, I don't tessellate a derangement.
        A person who never yodelled an apology, never preened vocalizing transsexuals.
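
    The pattern at issue is lazy caching with 0 as the "not yet computed" sentinel, which makes a true hash of 0 indistinguishable from "never hashed". A sketch of the same scheme in Python, mimicking Java's 31-based hash truncated to 32 bits:

        class CachedHashString:
            def __init__(self, s):
                self.s = s
                self._hash = 0  # doubles as the "not computed" sentinel

            def __hash__(self):
                h = self._hash
                if h == 0:  # a string whose real hash is 0 recomputes every call
                    for ch in self.s:
                        h = (31 * h + ord(ch)) & 0xFFFFFFFF
                    self._hash = h
                return h

    The usual fixes are a separate boolean flag, or remapping the unlucky value (e.g. storing 1 when the computed hash is 0), trading a one-in-2^32 bias for a guaranteed cache hit.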

    Read the article

  • Been asked a dozen times, but no luck from what I've read. Prevent Anchor Jumping on page load

    - by jasenmp
    I'm currently working with a WP theme that can be found here: sanjay.dmediastudios.com. I'm using 'smooth scroll' on my page, and I'm attempting to have the page smoothly scroll to the requested section when coming from an external link (for instance, coming from the blog page takes you to sanjay.dmediastudios.com/#portfolio); from there I want the page to start at the top and THEN scroll to the portfolio section. What's happening is it briefly displays the portfolio section (anchor jump) and THEN resets to the top and scrolls down. It's driving me nuts :(. Here is the code I'm using.

    Click function for smooth scroll:

        $(function() {
            $('.menu li a').click(function() {
                if (location.pathname.replace(/^\//, '') == this.pathname.replace(/^\//, '')
                        && location.hostname == this.hostname) {
                    var target = $(this.hash);
                    target = target.length ? target : $('[name=' + this.hash.slice(1) + ']');
                    if (target.length) {
                        $root.animate({
                            scrollTop: target.offset().top - 75
                        }, 800, 'swing');
                        return false;
                    }
                }
            }); //end of click function
        });

    The page load function:

        $(window).on("load", function() {
            if (location.hash) { // do the test straight away
                window.scrollTo(0, 0); // execute it straight away
                setTimeout(function() {
                    window.scrollTo(0, 0); // run it a bit later also for browser compatibility
                }, 1);
            }
            var urlHash = window.location.href.split("#")[1];
            if (urlHash && $('#' + urlHash).length)
                $('html,body').animate({
                    scrollTop: $('#' + urlHash).offset().top - 75
                }, 800, 'swing');
        });

    Any help would be MUCH appreciated.

    Read the article

  • Simple aggregating query very slow in PostgreSQL, any way to improve?

    - by Ash
    Hi. I have a table which holds files and their types, such as:

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as:

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ... -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to do aggregating queries, for example finding the average size and standard deviation for all file types. But it's very slow: around 70 seconds for that query. I understand it needs a sequential scan, but it still seems too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files AS f, properties AS pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the explain:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow and how to make it faster?

    Read the article

  • In Rails 3, how does one render HTML within a JSON response?

    - by ylg
    I'm porting an application from Merb 1.1 / 1.8.7 to Rails 3 (beta) / 1.9.1 that uses JSON responses containing HTML fragments, e.g., a JSON container specifying an update on a user record, where the payload carries the updated user row's markup. In Merb, since whatever a controller method returns is given to the client, one can put together a Hash, assign a rendered partial to one of the keys, and return hash.to_json (though that certainly may not be the best way). In Rails, it seems that to get data back to the client one must use render, and render can only be called once, so rendering the hash to JSON won't work because of the partial render. From reading around, it seems one could put that data into a JSON .erb view file, with <%= render partial %> in it, and render that. Is there a Rails way of solving this problem (returning JSON containing one or more HTML fragments) other than that?

    In Merb:

        only_provides :json
        ...
        self.status = 204 # or appropriate if not async
        return {
          'action' => 'update',
          'type'   => 'user',
          'id'     => @user.id,
          'html'   => partial('user_row', format: :html, user: @user)
        }.to_json

    In Rails?

    Read the article

  • Python: Behavior of object in set operations

    - by Josh Arenberg
    I'm trying to create a custom object that behaves properly in set operations. I've generally got it working, but I want to make sure I fully understand the implications. In particular, I'm interested in the behavior when there is additional data in the object that is not included in the equality/hash methods. It seems that the 'intersection' operation returns the set of objects that are being compared to, whereas the 'union' operation returns the set of objects that are being compared. To illustrate:

        class MyObject:
            def __init__(self, value, meta):
                self.value = value
                self.meta = meta
            def __eq__(self, other):
                if self.value == other.value:
                    return True
                else:
                    return False
            def __hash__(self):
                return hash(self.value)

        a = MyObject('1', 'left')
        b = MyObject('1', 'right')
        c = MyObject('2', 'left')
        d = MyObject('2', 'right')
        e = MyObject('3', 'left')

        print a == b  # True
        print a == c  # False

        for i in set([a, c, e]).intersection(set([b, d])):
            print "%s %s" % (i.value, i.meta)
        # returns:
        # 1 right
        # 2 right

        for i in set([a, c, e]).union(set([b, d])):
            print "%s %s" % (i.value, i.meta)
        # returns:
        # 1 left
        # 3 left
        # 2 left

    Is this behavior documented somewhere and deterministic? If so, what is the governing principle?

    Read the article

  • jQuery: part of a function not executing

    - by SODA
    Hi. I have a tabbed setup on the page, and I want to automatically make the corresponding menu tab highlighted, as well as the corresponding content div shown, depending on the # hash. Example: http://design.vitalbmx.com/user_menu/member_profile_so.html (no hash, opens the 1st tab); http://design.vitalbmx.com/user_menu/member_profile_so.html#setup (#setup, should open the "Setup" tab). As you can see, it works for highlighting the "Setup" tab, but the content div does not change. The script is below:

        var tab_content_current = 1;
        switch (window.location.hash) {
            case '#activity': tab_content_current = 1; break;
            case '#friends': tab_content_current = 2; break;
            case '#photos': tab_content_current = 3; break;
            case '#videos': tab_content_current = 4; break;
            case '#setup': tab_content_current = 5; break;
            case '#forum': tab_content_current = 6; break;
            case '#blog': tab_content_current = 7; break;
            case '#comments': tab_content_current = 8; break;
            case '#favorites': tab_content_current = 9; break;
            case '#profile-comments': tab_content_current = 10; break;
            default: tab_content_current = 1;
        }

        if (tab_content_current != 1) {
            change_active_tab(tab_content_current);
        }

        function tabs_toggle(id) {
            if (id != tab_content_current) {
                change_active_tab(id);
                tab_content_current = id;
            }
        }

        function change_active_tab(id) {
            $j('.profile_tabs li').removeClass('active');
            if (id < 8) $j('.profile_tab_' + id).addClass('active');
            $j('.profile_content').hide();
            $j('#profile_content_' + id).fadeIn();
        }

    Note that it works when you actually click the menu tabs. Any help to fix this problem would be greatly appreciated.

    Read the article

  • MySQL table doesn't update, can't find the error message

    - by mobius1ski
    My knowledge level here is like zilch, but please bear with me. I have a site built in PHP/MySQL that uses the Smarty template engine. There's a registration form that, for some reason, isn't posting the data to the DB. Here's the function:

        $u = new H_User;
        $u->setFrom($p);
        $smarty->assign('user', $u);
        $val = $u->validate();
        if ($val === true) {
            $temp = new H_User;
            $temp->orderBy('user_id desc');
            $temp->find(true);
            $next_id = $temp->user_id + 1;
            $u->user_id = $next_id;
            $u->user_password = md5($p['user_password']);
            $u->user_regdate = mktime();
            $u->user_active = 0;
            $u->insert();
            $hash = md5($u->user_email . $u->user_regdate);
            $smarty->assign('hash', $hash);
            $smarty->assign('user', $u);
            $smarty->assign('registration_complete', true);
            $d = new H_Demographic;
            $d->setFrom($p);
            $d->insert();

    How can I figure out what's wrong here? I don't get any PHP errors, and I don't know how to get MySQL to display the errors that might indicate what's wrong with that syntax.

    Read the article

  • List of values as keys for a Map

    - by thr
    I have lists of variable length where each item can be one of four unique values, and I need to use these lists as keys for another object in a map. Assume that each value can be either 0, 1, 2 or 3 (it's not an integer in my real code, but it's a lot easier to explain this way), so a few examples of key lists could be:

        [1, 0, 2, 3]
        [3, 2, 1]
        [1, 0, 0, 1, 1, 3]
        [2, 3, 1, 1, 2]
        [1, 2]

    So, to reiterate: each item in the list can be either 0, 1, 2 or 3, and there can be any number of items in a list. My first approach was to try to hash the contents of the array, using the built-in GetHashCode() in .NET to combine the hash of each element. But since this would return an int, I would have to deal with collisions manually (two equal int values are identical to a Dictionary). So my second approach was to use a quad tree, breaking down each item in the list into a Node that has four pointers (one for each possible value) to the next four possible values (with the root node representing [], an empty list). Inserting [1, 0, 2] => Foo, [1, 3] => Bar and [1, 0] => Baz into this tree would look like this: (figure omitted; grey nodes are unused pointers/nodes). I worry about the performance of this setup, but there will be no need to deal with hash collisions and the tree won't become too deep (there will mostly be lists with 2-6 items stored, rarely over 6). Is there some other magic way to store items with lists of values as keys that I have missed?
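
    For contrast, in a language whose tuples hash structurally, the built-in map already does the whole job, collisions included; a Python sketch (the .NET analogue would be a custom IEqualityComparer over the arrays doing the same element-wise combine):

        # Tuples are immutable, hash on their contents and compare
        # element-wise, so equal value-lists are equal keys.
        table = {}
        table[(1, 0, 2)] = "Foo"
        table[(1, 3)] = "Bar"
        table[(1, 0)] = "Baz"

        key = tuple([1, 0, 2])  # convert the list form before lookup
        print(table[key])       # Foo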

    Read the article

  • Can I retrieve objects from a complex query that limits results to fields from a single table?

    - by Sean Redmond
    I have a model whose rows I always want sorted based on the values in another, associated model, and I was thinking that the way to implement this would be to use set_dataset in the model. This is causing query results to be returned as hashes rather than objects, though, so none of the methods from the class can be used when iterating over the dataset. I basically have two classes:

        class SortFields < Sequel::Model(:sort_fields)
          set_primary_key :objectid
        end

        class Items < Sequel::Model(:items)
          set_primary_key :objectid
          one_to_one :sort_fields, :class => SortFields, :key => :objectid
        end

    Some backstory: the data is imported from a legacy system into MySQL. The values in sort_fields are calculated from multiple other associated tables (some one-to-many, some many-to-many) according to some complicated rules. The likely solution will be to just add the values in sort_fields to items (I want to keep the imported data separate from the calculated data, but I don't have to). First, though, I just want to understand how far you can go with a dataset and still get objects rather than hashes. If I set the dataset to sort on a field in items like so:

        class Items < Sequel::Model(:items)
          set_primary_key :objectid
          one_to_one :sort_fields, :class => SortFields, :key => :objectid
          set_dataset(order(:sortnumber))
        end

    then the expected clause is added to the generated SQL, e.g.:

        >> Items.limit(1).sql
        => "SELECT * FROM `items` ORDER BY `sortnumber` LIMIT 1"

    and queries still return objects:

        >> Items.limit(1).first.class
        => Items

    If I order it by the associated fields, though...

        class Items < Sequel::Model(:items)
          set_primary_key :objectid
          one_to_one :sort_fields, :class => SortFields, :key => :objectid
          set_dataset(
            eager_graph(:sort_fields).
            order(:sort1, :sort2, :sort3)
          )
        end

    ...I get hashes:

        ?> Items.limit(1).first.class
        => Hash

    My first thought was that this happens because all fields from sort_fields are included in the results, and maybe if I selected only the fields from items I would get Items objects again:

        class Items < Sequel::Model(:items)
          set_primary_key :objectid
          one_to_one :sort_fields, :class => SortFields, :key => :objectid
          set_dataset(
            eager_graph(:sort_fields).
            select(:items.*).
            order(:sort1, :sort2, :sort3)
          )
        end

    The generated SQL is what I would expect:

        >> Items.limit(1).sql
        => "SELECT `items`.* FROM `items` LEFT OUTER JOIN `sort_fields` ON (`sort_fields`.`objectid` = `items`.`objectid`) ORDER BY `sort1`, `sort2`, `sort3` LIMIT 1"

    It returns the same rows as the set_dataset(order(:sortnumber)) version, but it still doesn't work:

        >> Items.limit(1).first.class
        => Hash

    Before I add the sort fields to the items table so that they can all live happily in the same model: is there a way to tell Sequel to return an object when it wants to return a hash?

    Read the article

  • Postfix Submission port issue

    - by RevSpot
    I have set up postfix + mailman on my Debian server, and I have an issue with the postfix submission port. My ISP blocks SMTP on port 25 to prevent spam, so I must use the submission port (587). I have uncommented the following line in master.cf (/etc/postfix/), but nothing happens:

        submission inet n - - - - smtpd

    This is my mail log when I try to invite a user to a mailman list:

        Nov 6 00:35:34 myhostname postfix/qmgr[1763]: C90BF1060D: from=<[email protected]>, size=1743, nrcpt=1 (queue active)
        Nov 6 00:35:34 myhostname postfix/qmgr[1763]: DF54B10608: from=<[email protected]>, size=488, nrcpt=1 (queue active)
        Nov 6 00:35:34 myhostname postfix/qmgr[1763]: 80F0D10609: from=<[email protected]>, size=483, nrcpt=1 (queue active)
        Nov 6 00:35:55 myhostname postfix/smtp[2269]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out
        Nov 6 00:35:55 myhostname postfix/smtp[2270]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out
        Nov 6 00:35:55 myhostname postfix/smtp[2271]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out
        Nov 6 00:36:16 myhostname postfix/smtp[2269]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out
        Nov 6 00:36:16 myhostname postfix/smtp[2270]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out
        Nov 6 00:36:16 myhostname postfix/smtp[2271]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out
        Nov 6 00:36:37 myhostname postfix/smtp[2269]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out
        Nov 6 00:36:37 myhostname postfix/smtp[2270]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out
        Nov 6 00:36:37 myhostname4 postfix/smtp[2271]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out
        Nov 6 00:36:58 myhostname postfix/smtp[2269]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out
        Nov 6 00:36:58 myhostname postfix/smtp[2270]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out
        Nov 6 00:36:58 myhostname postfix/smtp[2271]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out
        Nov 6 00:37:19 myhostname postfix/smtp[2269]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out
        Nov 6 00:37:19 myhostname postfix/smtp[2270]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out
        Nov 6 00:37:19 myhostname postfix/smtp[2269]: C90BF1060D: to=<[email protected]>, relay=none, delay=23711, delays=23606/0.03/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out)
        Nov 6 00:37:19 myhostname postfix/smtp[2271]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out
        Nov 6 00:37:19 myhostname postfix/smtp[2270]: DF54B10608: to=<[email protected]>, relay=none, delay=23882, delays=23777/0.03/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out)
        Nov 6 00:37:19 myhostname postfix/smtp[2271]: 80F0D10609: to=<[email protected]>, relay=none, delay=23875, delays=23770/0.04/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out)

    main.cf:

        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        myhostname = mail.mydomain.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = mail.mydomain.com, localhost.mydomain.com, localhost
        relayhost =
        relay_domains = $mydestination, mail.mydomain.com
        relay_recipient_maps = hash:/var/lib/mailman/data/virtual-mailman
        transport_maps = hash:/etc/postfix/transport
        mailman_destination_recipient_limit = 1
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        local_recipient_maps =

    master.cf:

        smtp inet n - - - - smtpd
        submission inet n - - - - smtpd
        #  -o smtpd_tls_security_level=encrypt
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #smtps inet n - - - - smtpd
        #  -o smtpd_tls_wrappermode=yes
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #628 inet n - - - - qmqpd
        pickup fifo n - - 60 1 pickup
        cleanup unix n - - - 0 cleanup
        qmgr fifo n - n 300 1 qmgr
        #qmgr fifo n - - 300 1 oqmgr
        tlsmgr unix - - - 1000? 1 tlsmgr
        rewrite unix - - - - - trivial-rewrite
        bounce unix - - - - 0 bounce
        defer unix - - - - 0 bounce
        trace unix - - - - 0 bounce
        verify unix - - - - 1 verify
        flush unix n - - 1000? 0 flush
        proxymap unix - - n - - proxymap
        proxywrite unix - - n - 1 proxymap
        smtp unix - - - - - smtp
        # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
        relay unix - - - - - smtp -o smtp_fallback_relay=
        #  -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
        showq unix n - - - - showq
        error unix - - - - - error
        retry unix - - - - - error
        discard unix - - - - - discard
        local unix - n n - - local
        virtual unix - n n - - virtual
        lmtp unix - - - - - lmtp
        anvil unix - - - - 1 anvil
        scache unix - - - - 1 scache
        # ====================================================================
        # maildrop. See the Postfix MAILDROP_README file for details.
        # Also specify in main.cf: maildrop_destination_recipient_limit=1
        maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
        # ====================================================================
        # See the Postfix UUCP_README file for configuration details.
        uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
        # Other external delivery methods.
        ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
        bsmtp unix - n n - - pipe flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
        scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
        mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}

    Read the article

  • postfix relaying all mail through office365 problems

    - by amrith
    This is a rather long question with a long list of things tried, so please bear with me. The summary is this: I am able to relay email from Ubuntu through Office365 using postfix; the configuration works. It only works as one of the users; more specifically, the user who authenticates against Office365 is the only valid "from". More details follow.

    I have a machine in Amazon's cloud on which I run a bunch of jobs, and I would like to have statuses mailed over to me. I use Office365 at work, so I want to relay mail through Office365. I'm most familiar with postfix, so I used that as the MTA. The configuration is Ubuntu 12.04 LTS; I've installed postfix and mail-utils. For this example, let me say my company is "company.com" and the machine in question (through an elastic IP and a DNS entry) is called "plaything.company.com". hostname is set to "plaything.company.com", and so is /etc/mailname. On plaything, I have the following users registered: alpha, bravo, and charlie. I have the following configuration files:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        inet_protocols = ipv4
        mailbox_size_limit = 0
        mydestination = plaything.company.com, localhost.company.com, , localhost
        myhostname = plaything.company.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost = [smtp.office365.com]:587
        sender_canonical_maps = hash:/etc/postfix/sender_canonical
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    As the machine is called plaything.company.com, I went through the exercise of registering all the appropriate DNS entries to make Office365 recognize that I owned plaything.company.com, which allowed me to create a user called [email protected] in Office365. In Office365, I set up [email protected] as having another email address of [email protected]. Then I made the following sender_canonical:

        [email protected] [email protected]

    I created a sasl_passwd file that reads:

        smtp.office365.com [email protected]:123456password123456

    (let's just say that the password for [email protected] is 1234...456). With all this set up, log in as alpha and run:

        mail [email protected]
        Cc:
        Subject: test
        test

    and the whole thing works wonderfully: the email gets sent off by postfix, TLS works like a champ, it authenticates as daemon@..., and [email protected] in Office365 gets the message. The issue comes up when logged in as bravo on the machine. The sender is [email protected], and Office365 says:

        status=bounced (host smtp.office365.com[132.245.12.25] said: 550 5.7.1 Client does not have permissions to send as this sender (in reply to end of DATA command))

    This is because I'm trying to send mail as bravo@... while authenticating with Office365 as daemon@... The reason it works with alpha@... is that in Office365 I set up [email protected] as having another email address of [email protected].
    In Postfix Relay to Office365, Miles Erickson answers the question thusly:

    1. Don't send mail to Office365 as a user from your Office365-hosted e-mail domain. Use a subdomain instead, e.g. [email protected] instead of [email protected]. It wouldn't hurt to set up an SPF record for services.mydomain.com or whatever you decide to use.
    2. Don't authenticate against mail.messaging.microsoft.com as an Office365 user. Just connect on port 25 and deliver the mail to your domain as any foreign SMTP agent would do.

    OK, I've done #1; I have those records in DNS, but for the most part they are not relevant once Office365 recognizes that I own the domain. Here are those records:

        CNAME records: msoid.plaything.company.com, autodiscover.plaything.company.com
        MX record: plaything.company.com (plaything-company-com.mail.protection.outlook.com)
        TXT record: plaything.company.com (v=spf1 include:spf.protection.outlook.com -all)

    I've tried #2, but no matter what I do, Office365 just blows away the connection with "not authenticated". I can try even a simple telnet to port 25 and attempt to send, and it doesn't work:

        250 BY2PR01CA007.outlook.office365.com Hello [54.221.245.236]
        530 5.7.1 Client was not authenticated
        Connection closed by foreign host.

    Is there someone out there who has this kind of configuration working, where multiple users on a Linux machine are able to relay mail using postfix through Office365? There has to be someone out there doing this who can tell me what is wrong with my setup...
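
    The bounce text pins down the rule in play: the authenticated Office365 account must own (or have send-as rights for) whatever appears as the sender. One way to see the behavior in isolation, away from postfix, is a minimal authenticated submission client; a Python sketch with obviously hypothetical addresses and credentials:

        import smtplib
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "daemon@example.com"  # must be the authenticated account or a registered alias
        msg["To"] = "someone@example.org"
        msg["Subject"] = "relay test"
        msg.set_content("test body")

        with smtplib.SMTP("smtp.office365.com", 587) as s:
            s.starttls()  # submission requires TLS before AUTH
            s.login("daemon@example.com", "app-password-here")
            s.send_message(msg)  # swap From to an unowned address to reproduce the 550 5.7.1

    In postfix terms, that constraint is why sender_canonical rewriting of every local user (not just one) toward the authenticated account is the usual workaround when a single relay account must carry all system mail.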

    Read the article

  • Postfix Not Sending Email to Some Addresses?

    - by Jake
    I'm using Jetpack on WordPress, and it wasn't working. I was getting the following error:

        Diagnostic-Code: X-Postfix; unknown user: "jake"
        --60FD1138CAD.1354039466/example.com
        Content-Description: Undelivered Message

    (example.com substituted for our domain). We set up a test mail function, and that wasn't sending either. We changed the address to an outside email and it worked. Any thoughts on why it won't send to an email at the same domain, or why it sends to some addresses but not others? Upon running postconf -n, I get the following:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        inet_protocols = all
        mailbox_size_limit = 0
        mydestination = example.com, Example, localhost.localdomain, localhost
        myhostname = example.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    Read the article

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this:

        Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed
        Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory
        Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory
        Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory

    Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters):

        current values are: 45984 61312 91968
        new values are: 183936 245248 367872

    After that, we started seeing a bizarre error message:

        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long!
        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval!

    Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval, which defaults to 600 and controls periodic flushing of the route cache:

    The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad. The CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However we set this to run once a day to keep memory leaks at bay (though we've never had one).

    While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity, which seems to be a better option for keeping the route table size in check:

    gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes.

    We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here:

        /proc/sys/net/ipv4/route/gc_elasticity (8)
        /proc/sys/net/ipv4/route/gc_interval (60)
        /proc/sys/net/ipv4/route/gc_min_interval (0)
        /proc/sys/net/ipv4/route/gc_timeout (300)
        /proc/sys/net/ipv4/route/secret_interval (600)
        /proc/sys/net/ipv4/route/gc_thresh (?)
        rhash_entries (kernel parameter, default unknown?)

    We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune for a high-traffic HAProxy instance?

    Read the article

< Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >