Search Results

Search found 3659 results on 147 pages for 'sorted hash'.

Page 42/147

  • tc rules block traffic from some hosts on the network

    - by user139430
    I have a problem I cannot solve. The script below, which sets the rules for traffic shaping, is blocking traffic from some hosts. If I remove all the rules, then it works. I cannot understand why. Here is my script:

        #!/bin/sh
        cmdTC=/sbin/tc

        rateLANDl="60mbit"
        ceilLANDl="60mbit"
        rateLANUl="40mbit"
        ceilLANUl="40mbit"
        quantLAN="1514"

        # The current bandwidth limit is 100mbit.
        # We divide it into a 60mbit download and a 40mbit upload band.
        rateHiDl="30mbit"
        ceilHiDl="60mbit"
        rateHiUl="20mbit"
        ceilHiUl="40mbit"
        quantHi="1514"

        rateLoDl="30mbit"
        ceilLoDl="60mbit"
        rateLoUl="20mbit"
        ceilLoUl="40mbit"
        quantLo="1514"

        devNIF=eth0
        devFIF=ifb0

        modprobe ifb
        ip link set $devFIF up 2>/dev/null
        #exit 0

        # Remove disciplines from network and fake interfaces
        $cmdTC qdisc del dev $devNIF root 2>/dev/null
        $cmdTC qdisc del dev $devFIF root 2>/dev/null
        $cmdTC qdisc del dev $devNIF ingress 2>/dev/null

        if [ "$1" = "down" ]; then
            exit 0
        fi

        # Create disciplines for network interface
        $cmdTC qdisc add dev $devNIF root handle 1:0 htb default 12

        # Create classes for network interface
        $cmdTC class add dev $devNIF parent 1:0 classid 1:1 htb rate ${rateLANDl} ceil ${ceilLANDl} quantum ${quantLAN}
        $cmdTC class add dev $devNIF parent 1:1 classid 1:11 htb rate ${rateHiDl} ceil ${ceilHiDl} quantum ${quantHi}
        $cmdTC class add dev $devNIF parent 1:1 classid 1:12 htb rate ${rateLoDl} ceil ${ceilLoDl} quantum ${quantLo}
        $cmdTC qdisc add dev $devNIF parent 1:11 handle 111: sfq perturb 10
        $cmdTC qdisc add dev $devNIF parent 1:12 handle 112: sfq perturb 10

        # Create filters for network interface
        $cmdTC filter add dev $devNIF protocol all parent 1:0 u32 match ip dst 10.252.2.0/24 flowid 1:11
        $cmdTC filter add dev $devNIF protocol all parent 111: handle 111 flow hash keys dst divisor 1024 baseclass 1:11
        $cmdTC filter add dev $devNIF protocol all parent 112: handle 112 flow hash keys dst divisor 1024 baseclass 1:12

        # Create disciplines for fake interface
        $cmdTC qdisc add dev $devFIF root handle 1:0 htb default 12

        # Create classes for fake interface
        $cmdTC class add dev $devFIF parent 1:0 classid 1:1 htb rate ${rateLANUl} ceil ${ceilLANUl} quantum ${quantLAN}
        $cmdTC class add dev $devFIF parent 1:1 classid 1:11 htb rate ${rateHiUl} ceil ${ceilHiUl} quantum ${quantHi}
        $cmdTC class add dev $devFIF parent 1:1 classid 1:12 htb rate ${rateLoUl} ceil ${ceilLoUl} quantum ${quantLo}
        $cmdTC qdisc add dev $devFIF parent 1:11 handle 111: sfq perturb 10
        $cmdTC qdisc add dev $devFIF parent 1:12 handle 112: sfq perturb 10

        # Create filters for fake interface
        $cmdTC filter add dev $devFIF protocol all parent 1:0 u32 match ip src 10.252.2.0/24 flowid 1:11
        $cmdTC filter add dev $devFIF protocol all parent 111: handle 111 flow hash keys src divisor 1024 baseclass 1:11
        $cmdTC filter add dev $devFIF protocol all parent 112: handle 112 flow hash keys src divisor 1024 baseclass 1:12

        # Create redirect disciplines from network to fake interface
        $cmdTC qdisc add dev $devNIF handle ffff:0 ingress
        $cmdTC filter add dev $devNIF parent ffff:0 protocol all u32 match u32 0 0 action mirred egress redirect dev $devFIF

    Here is my /etc/modules:

        loop
        ifb
        ppp_mppe
        nf_conntrack_pptp
        nt_conntrack_proto_gre
        nf_nat_pptp
        nf_nat_proto_gre

    The system is Linux wall 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux.

    Read the article

  • How to generate Serial Keys? [closed]

    - by vincent mathew
    Which software can I use to generate product keys if I have the GroupId, KeyId, Secret, and Hash for the generation? Edit: I had seen a post which generated product keys using this information: [Additional Key Details/Activation Decryption*: GroupId = 86f 2159, KeyId = ed46 60742, Secret = e0cdc320ba048 3954789545910344, Hash = 5f 95]. So I was wondering if there is any software which could generate keys using this information? Thanks.

    Read the article

  • (PHP) SHA1 vs md5 vs SHA256: which to use for a PHP login?

    - by hatorade
    I'm making a PHP login, and I'm trying to decide whether to use SHA1, MD5, or SHA256 (which I read about in another Stack Overflow article). Are any of them more secure than the others? For SHA1/SHA256, do I still use a salt? Also, is this a secure way to store the password as a hash in MySQL?

        function createSalt()
        {
            $string = md5(uniqid(rand(), true));
            return substr($string, 0, 3);  // 3-character salt
        }
        $salt = createSalt();
        // assuming $password holds the submitted password; the original
        // line read sha1($salt . $hash), using $hash before it was defined
        $hash = sha1($salt . $password);
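
    For context, a minimal salted-hash sketch in Python using the standard library's PBKDF2 (the names and iteration count are illustrative assumptions, not the poster's code; the same pattern ports to PHP):

        import hashlib
        import hmac
        import os

        def hash_password(password: str) -> tuple[bytes, bytes]:
            salt = os.urandom(16)  # random per-user salt, stored next to the digest
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return salt, digest

        def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return hmac.compare_digest(candidate, digest)

    A salt defeats precomputed (rainbow) tables, and an iterated KDF such as PBKDF2 slows brute force far more than a single SHA1 or SHA256 pass.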

    Read the article

  • How do I send DTMF tones and pauses using Android ACTION_CALL Intent with commas in the number?

    - by Rob Kent
    I have an application that calls a number stored by the user. Everything works okay unless the number contains commas or hash signs, in which case the Uri gets truncated after the digits. I have read that you need to encode the hash sign, but even doing that, or without a hash sign, the commas never get passed through. However, they do get passed through if you just pick the number from your contacts. I must be doing something wrong. For example:

        String number = "1234,,,,4#1";
        Uri uri = Uri.parse(String.format("tel:%s", number));
        try {
            startActivity(new Intent(callType, uri));
        } catch (ActivityNotFoundException e) {
            ...

    Only the number '1234' would end up in the dialer.
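
    For reference, '#' is a reserved character in a tel: URI and must be percent-encoded as %23, while commas are valid pause characters. A quick illustration of the encoding step in Python (the number is the example from the question; in Android the rough equivalent would be applying Uri.encode to the number part):

        from urllib.parse import quote

        number = "1234,,,,4#1"
        # Keep the comma pauses literal; '#' becomes %23.
        encoded = quote(number, safe=",")
        print("tel:" + encoded)  # tel:1234,,,,4%231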

    Read the article

  • Best way to cache resized images using PHP and MySQL

    - by Chris Hawes
    What would be the best-practice way to handle the caching of images using PHP? The filename is currently stored in a MySQL database and is renamed to a GUID on upload, along with the original filename and alt tag. When the image is put into the HTML pages, it is done so using a URL such as '/images/get/200x200/{guid}.jpg', which is rewritten to a PHP script. This allows my designers to specify (roughly; the source image may be smaller) the file size. The PHP script then creates a hash of the size (200x200 in the URL) and the GUID filename, and if the file has been generated before (a file with the name of the hash exists in the TMP directory), it sends the file from the application TMP directory. If the hashed filename does not exist, then it is created, written to disk, and served up in the same manner. Is this as efficient as it could be? (It also supports watermarking the images, and the watermarking settings are stored in the hash as well, but that's out of scope for this.)
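
    A compact sketch of the hash-keyed cache path described above, in Python (the paths and the resize_source helper are illustrative assumptions, not the poster's code):

        import hashlib
        from pathlib import Path

        CACHE_DIR = Path("/tmp/imagecache")

        def cached_path(size: str, guid: str) -> Path:
            # One cache file per (size, guid) pair; the hash is the filename.
            key = hashlib.md5(f"{size}:{guid}".encode()).hexdigest()
            return CACHE_DIR / f"{key}.jpg"

        def serve(size: str, guid: str) -> bytes:
            path = cached_path(size, guid)
            if not path.exists():
                # Cache miss: resize once, write to disk, serve from disk thereafter.
                path.write_bytes(resize_source(guid, size))  # hypothetical resizer
            return path.read_bytes()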

    Read the article

  • Response.Redirect with a fragment identifier causes unexpected refresh when later using location.hash

    - by Matt
    Hi All, I was hoping someone could assist in describing a workaround for the following issue I am running into on my ASP.NET website in IE. In the following I will describe the bug and clarify the requirements of the needed solution.

    Repro steps:
    1. User visits A.aspx
    2. A.aspx uses Response.Redirect to bring the user to B.aspx#house
    3. On B.aspx#house, the user clicks a button that sets window.location.hash='test'

    Actual results: B.aspx is loaded again. The URL now shows B.aspx#test
    Expected results: No reload. The URL just changes to B.aspx#test

    Requirements:
    - Page A must redirect to page B with a fragment identifier in the URL
    - Any user action on page B will set location.hash
    - Setting location.hash must not make page B refresh
    - This must work in IE

    Notes: The bug only repros in IE (tested on IE 6, 7, and 8). Opera, FF, Chrome, and Safari all show the expected result of no reload. This error may have nothing to do with ASP.NET, and everything to do with IE. For any kind soul willing to have a look at this, I have created a minimal ASP.NET web project to make it easy to repro here.

    Read the article

  • Unique identifier for an email

    - by Skywalker
    I am writing a C# application which allows users to store emails in a MS SQL Server database. Many times, multiple users will be copied on an email from a customer. If they all try to add the same email to the database, I want to make sure that the email is only added once. MD5 springs to mind as a way to do this. I don't need to worry about malicious tampering, only to make sure that the same email will map to the same hash and that no two emails with different content will map to the same hash. My question really boils down to how one would combine multiple fields into one MD5 (or other) hash value. Some of these fields will have a single value per email (e.g. subject, body, sender email address) while others will have multiple values (varying numbers of attachments, recipients). I want to develop a way of uniquely identifying an email that will be platform and language independent (not based on serialization). Any advice?
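
    One common approach is to hash a canonical byte string: fix the field order, length-prefix each value, and sort multi-valued fields first. A hedged sketch in Python (the field names are illustrative; the same recipe ports to C#):

        import hashlib

        def email_fingerprint(subject, body, sender, recipients, attachment_hashes):
            parts = [subject, body, sender.lower()]
            parts += sorted(r.lower() for r in recipients)  # order-independent
            parts += sorted(attachment_hashes)
            h = hashlib.sha256()
            for part in parts:
                data = part.encode("utf-8")
                h.update(len(data).to_bytes(8, "big"))  # length prefix
                h.update(data)
            return h.hexdigest()

    The length prefix matters: without it, ("ab", "c") and ("a", "bc") would concatenate to the same bytes and collide.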

    Read the article

  • sort associative array in awk - help?

    - by Richard
    Hi all, I have an associative array in awk that gets populated like this:

        chr_count[$3]++

    When I try to print my chr_counts, I use this:

        for (i in chr_count) {
            print i, ":", chr_count[i];
        }

    But not surprisingly, the order of i is not sorted in any way. Is there an easy way to iterate over the sorted "keys" of chr_count?
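
    In GNU awk, asorti() copies the indices of an array into a second array in sorted order, which can then be walked with a numeric for loop. The same idea, sketched in Python for clarity (sample data is made up):

        chr_count = {"chr2": 7, "chr10": 3, "chr1": 12}

        # Iterate over the keys in sorted order rather than hash order.
        for key in sorted(chr_count):
            print(key, ":", chr_count[key])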

    Read the article

  • Are some data structures more suitable for functional programming than others?

    - by Rob Lachlan
    In Real World Haskell, there is a section titled "Life without arrays or hash tables" where the authors suggest that lists and trees are preferred in functional programming, whereas an array or a hash table might be used instead in an imperative program. This makes sense, since it's much easier to reuse part of an (immutable) list or tree when creating a new one than to do so with an array. So my questions are: Are there really significantly different usage patterns for data structures between functional and imperative programming? If so, is this a problem? What if you really do need a hash table for some application? Do you simply swallow the extra expense incurred for modifications?
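
    For intuition on why immutable lists are cheap to "modify": prepending to a cons-style list allocates one cell and reuses the whole old list as the tail. A tiny sketch in Python, with tuples standing in for cons cells:

        # A cons list: each cell is (head, tail); None is the empty list.
        xs = (3, (2, (1, None)))

        # "Adding" an element builds one new cell; xs is shared, not copied.
        ys = (4, xs)

        assert ys[1] is xs  # structural sharing: the old list is the new tail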

    Read the article

  • multiple records using the select_tag

    - by JZ
    I'm attempting to display multiple records, with each record displaying its own selection box. I would like to submit this form and pass a hash, with the Add.id serving as the hash key and the selected option as the value. How could I fix my code? Is this even possible with the select_tag method?

        <%= form_tag yardsign_adds_path, :method => :post do %>
          <%= select_tag "support_code[]",
              options_for_select([[ "1 - Strong Supporter", add.id ],
                                  [ "2 - Likely Voter" ],
                                  [ "3 - Undecided" ],
                                  [ "4 - Likely Opposed" ],
                                  [ "5 - Strongly Opposed" ]]) %>
          <%= submit_tag "Update" %>
        <% end %>

    Current terminal output:

        Started POST "/adds/yardsign" for 127.0.0.1 at 2010-04-17 01:36:03
        Processing by AddsController#yardsign as HTML
          Parameters: {"commit"=>"Update", "authenticity_token"=>"VQ2jVfzHI7pB+87lQa9NWqvUK3zwJWiJE7CwAnIewiw=", "support_code"=>["1", "3 - Undecided", "3 - Undecided"]}

    Read the article

  • Ember.js CollectionView order

    - by ilia choly
    I have an ArrayController which is periodically updated. It has a sorted computed property which keeps things in order. I'm trying to use a CollectionView with its content property bound to the sorted property, but it's not rendering the items in the correct order (demo). I've obviously made the false assumption that order is maintained between the content and childViews properties. What is the correct way to do this?

    Read the article

  • Random Page Cost and Planning

    - by Dave Jarvis
    The query below extracts climate data from weather stations within a given radius of a city, using the dates for which those weather stations actually have data. It uses the table's only index, rather effectively:

        CREATE UNIQUE INDEX measurement_001_stc_idx
          ON climate.measurement_001
          USING btree (station_id, taken, category_id);

    Reducing the server's configuration value for random_page_cost from 2.0 to 1.1 gave a massive performance improvement for the given range (nearly an order of magnitude) because it suggested to PostgreSQL that it should use the index. While the results now return in 5 seconds (down from ~85 seconds), problematic lines remain. Bumping the query's end date by a single year causes a full table scan:

        sc.taken_start >= '1900-01-01'::date AND
        sc.taken_end <= '1997-12-31'::date AND

    How do I persuade PostgreSQL to use the indexes regardless of the number of years between the two dates? (A full table scan against 43 million rows is probably not the best plan.) Find the EXPLAIN ANALYSE results below the query. Thank you!

    Query:

        SELECT
          extract(YEAR FROM m.taken) AS year,
          avg(m.amount) AS amount
        FROM
          climate.city c,
          climate.station s,
          climate.station_category sc,
          climate.measurement m
        WHERE
          c.id = 5182 AND
          earth_distance(
            ll_to_earth(c.latitude_decimal, c.longitude_decimal),
            ll_to_earth(s.latitude_decimal, s.longitude_decimal)) / 1000 <= 30 AND
          s.elevation BETWEEN 0 AND 3000 AND
          s.applicable = TRUE AND
          sc.station_id = s.id AND
          sc.category_id = 1 AND
          sc.taken_start >= '1900-01-01'::date AND
          sc.taken_end <= '1996-12-31'::date AND
          m.station_id = s.id AND
          m.taken BETWEEN sc.taken_start AND sc.taken_end AND
          m.category_id = sc.category_id
        GROUP BY
          extract(YEAR FROM m.taken)
        ORDER BY
          extract(YEAR FROM m.taken)

    1900 to 1996: Index

        Sort  (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)
          Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
          Sort Method: quicksort  Memory: 32kB
          ->  HashAggregate  (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)
                ->  Nested Loop  (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)
                      Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))
                      ->  Nested Loop  (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)
                            Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                            ->  Index Scan using city_pkey1 on city c  (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)
                                  Index Cond: (id = 5182)
                            ->  Nested Loop  (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)
                                  ->  Seq Scan on station_category sc  (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)
                                        Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))
                                  ->  Index Scan using station_pkey1 on station s  (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)
                                        Index Cond: (s.id = sc.station_id)
                                        Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))
                      ->  Append  (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)
                            ->  Seq Scan on measurement m  (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)
                                  Filter: (m.category_id = 1)
                            ->  Bitmap Heap Scan on measurement_001 m  (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)
                                  Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
                                  ->  Bitmap Index Scan on measurement_001_stc_idx  (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)
                                        Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
        Total runtime: 2269.264 ms

    1900 to 1997: Full Table Scan

        Sort  (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)
          Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
          Sort Method: quicksort  Memory: 32kB
          ->  HashAggregate  (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)
                ->  Hash Join  (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)
                      Hash Cond: (m.station_id = sc.station_id)
                      Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))
                      ->  Append  (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)
                            ->  Seq Scan on measurement m  (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)
                                  Filter: (category_id = 1)
                            ->  Seq Scan on measurement_001 m  (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)
                                  Filter: (category_id = 1)
                      ->  Hash  (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)
                            ->  Nested Loop  (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)
                                  Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                                  ->  Index Scan using city_pkey1 on city c  (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)
                                        Index Cond: (id = 5182)
                                  ->  Hash Join  (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)
                                        Hash Cond: (s.id = sc.station_id)
                                        ->  Seq Scan on station s  (cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949 loops=1)
                                              Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))
                                        ->  Hash  (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)
                                              ->  Bitmap Heap Scan on station_category sc  (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)
                                                    Recheck Cond: (category_id = 1)
                                                    Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))
                                                    ->  Bitmap Index Scan on station_category_station_category_idx  (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)
                                                          Index Cond: (category_id = 1)
        Total runtime: 86165.936 ms

    Read the article

  • Racket: change dotted pair to list

    - by user2963128
    I have a program that recursively calls a hashtable and prints out data from it. Unfortunately my hashtable seems to be saving data as dotted pairs, so when I call the hashtable I get an error saying that there is no data, because it's trying to search the hashtable for a dotted pair instead of a list. Is there an easy way to make the dotted pair into a regular list? I.e. I'm getting '("was" . "beginning") instead of '("was" "beginning"). Is there a way to change this without rewriting how my hashtable stores stuff? I'm using let to set a variable to this and then calling another function based on that variable:

        (let ((data (list-ref (hash-ref Ngram-table key)
                              (random (length (hash-ref Ngram-table key))))))

    Is there a way to make the value stored in data just a list like '("var1" "var2") instead of a dotted pair?

    Edit: I'm getting dotted pairs because I'm using let to set data to part of the hashtable's key and one of the elements in that hash.

    Read the article

  • Prevent Ruby on Rails from sending the session header

    - by hurikhan77
    How do I prevent Rails from always sending the session header (Set-Cookie)? This is a security problem if the application also sends a "Cache-Control: public" header. My application touches (but does not modify) the session hash in some/most actions. These pages display no private content, so I want them to be cacheable. But Rails always sends the cookie header, no matter whether the session hash being sent differs from the previous one or not. What I want to achieve is to send the hash only if it differs from the one received from the client. How can you do that? And should that fix perhaps also go into an official Rails release? What do you think?

    Read the article

  • sorting hashes inside an array on values

    - by srk
    @aoh = (
        { 3 => 15, 4 => 8,  5 => 9,  },
        { 3 => 11, 4 => 25, 5 => 6,  },
        { 3 => 5,  4 => 18, 5 => 5,  },
        { 0 => 16, 1 => 11, 2 => 7,  },
        { 0 => 21, 1 => 13, 2 => 31, },
        { 0 => 11, 1 => 14, 2 => 31, },
    );

    I want the hashes in each array index sorted in reverse order based on values:

        @sorted = sort { ... please fill this ... } @aoh;

    Expected output:

    @aoh = (
        { 4 => 8,  5 => 9,  3 => 15, },
        { 5 => 6,  3 => 11, 4 => 25, },
        { 5 => 5,  3 => 5,  4 => 18, },
        { 2 => 7,  1 => 11, 0 => 16, },
        { 1 => 13, 0 => 21, 2 => 31, },
        { 0 => 11, 1 => 14, 2 => 31, },
    );

    Please help. Thanks in advance. Stating my request again: I only want the hashes in each array index to be sorted by values; I don't want the array itself to be sorted.
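
    Perl hashes are inherently unordered, so entries cannot be stored "sorted"; the usual answer is to sort the keys by value at the point of iteration. The intended result is easier to show in Python, where dicts do preserve insertion order (a sketch, not the poster's code):

        aoh = [
            {3: 15, 4: 8, 5: 9},
            {3: 11, 4: 25, 5: 6},
        ]

        # Rebuild each dict with its items inserted in order of ascending value.
        sorted_aoh = [dict(sorted(d.items(), key=lambda kv: kv[1])) for d in aoh]
        print(sorted_aoh)  # [{4: 8, 5: 9, 3: 15}, {5: 6, 3: 11, 4: 25}]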

    Read the article

  • Compile time string hashing

    - by Caspin
    I have read in a few different places that using C++0x's new string literals it might be possible to compute a string's hash at compile time. However, no one seems to be ready to come out and say that it will be possible or how it would be done. Is this possible? What would the operator look like? I'm particularly interested in use cases like this:

        void foo( const std::string& value )
        {
            switch( std::hash(value) )
            {
                case "one"_hash: one(); break;
                case "two"_hash: two(); break;
                /* many more cases */
                default: other(); break;
            }
        }

    Note: the compile-time hash function doesn't have to look exactly as I've written it. I did my best to guess what the final solution would look like, but meta_hash<"string"_meta>::value could also be a viable solution.
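
    Such literals are typically implemented as a constexpr FNV-1a fold over the characters. The arithmetic itself, sketched in Python with the standard 64-bit FNV-1a constants (the C++ version would be the same loop in a constexpr function):

        FNV_OFFSET = 0xcbf29ce484222325  # 64-bit FNV-1a offset basis
        FNV_PRIME = 0x100000001b3        # 64-bit FNV-1a prime

        def fnv1a(s: str) -> int:
            h = FNV_OFFSET
            for byte in s.encode():
                h = ((h ^ byte) * FNV_PRIME) & 0xFFFFFFFFFFFFFFFF  # wrap to 64 bits
            return h

        # A switch over string hashes reduces to comparing integers.
        assert fnv1a("one") != fnv1a("two")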

    Read the article

  • Need to sort using Obout Grid?

    - by Anand
    By default, clicking each column header sorts that column automatically. But I have placed an image in a header column, and by clicking that image the column has to be sorted by priority level, such as 0, 1, 2, ... The problem is that if I set the datafield to priority, the image disappears. I want to use the priority datafield and sort by priority via the image. If anyone has a suggestion, please reply asap. Thank you.

    Read the article

  • Strange: planner picks the plan with lower cost, but (very) long query runtime

    - by S38
    Facts:
    - PostgreSQL 8.4.2, Linux
    - I make use of table inheritance
    - Each table contains 3 million rows
    - Indexes on joining columns are set
    - Table statistics (ANALYZE, VACUUM ANALYZE) are up to date
    - The only table used is "node", with various partitioned sub-tables
    - Recursive query (pg >= 8.4)

    Now here is the explained query:

        WITH RECURSIVE rows AS (
          SELECT *
          FROM (
            SELECT r.id, r.set, r.parent, r.masterid
            FROM d_storage.node_dataset r
            WHERE masterid = 3533933
          ) q
          UNION ALL
          SELECT *
          FROM (
            SELECT c.id, c.set, c.parent, r.masterid
            FROM rows r
            JOIN a_storage.node c ON c.parent = r.id
          ) q
        )
        SELECT r.masterid, r.id AS nodeid
        FROM rows r

        QUERY PLAN
        ---------------------------------------------------------------------------
        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far so bad: the planner decides to use hash joins (good) but no indexes (bad). Now, after doing the following (note: the GUC is enable_hashjoin):

        SET enable_hashjoin TO false;

    the explained query looks like this:

        QUERY PLAN
        ---------------------------------------------------------------------------
        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    This is incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than that of the first query. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.

    Read the article

  • SHA1 from .exe file in C#

    - by alibipl
    Hi! I hope someone can help me with reading .exe files in C# and creating a SHA1 hash from them. I have tried to read from an executable file using StreamReader and BinaryReader. Then, using the built-in SHA1 algorithm, I tried to create a hash, but without success. The algorithm result for StreamReader was "AEUj+Ppo5QdHoeboidah3P65N3s=" and for BinaryReader was "rWXzn/CoLLPBWqMCE4qcE3XmUKw=". Can anyone help me to achieve a SHA1 hash from an exe file? Thx. BTW Sorry for my English ;)
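
    The usual pitfall here is reading a binary file as text: a StreamReader decodes bytes through a text encoding and can corrupt them, so the hash must be computed over the raw bytes. A minimal sketch of the byte-level approach in Python (in C#, the analogue would be hashing the FileStream directly with SHA1):

        import base64
        import hashlib

        def sha1_of_file(path: str) -> str:
            h = hashlib.sha1()
            with open(path, "rb") as f:  # "rb": raw bytes, no text decoding
                for chunk in iter(lambda: f.read(8192), b""):
                    h.update(chunk)
            # Base64, matching the digest format quoted in the question.
            return base64.b64encode(h.digest()).decode()

        print(sha1_of_file("example.exe"))  # hypothetical file name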

    Read the article

  • Is there any way to limit the size of an STL Map?

    - by Nathan Fellman
    I want to implement some sort of lookup table in C++ that will act as a cache. It is meant to emulate a piece of hardware I'm simulating. The keys are non-integer, so I'm guessing a hash is in order. I have no intention of reinventing the wheel, so I intend to use std::map for this (though suggestions for alternatives are welcome). The question is: is there any way to limit the size of the map to emulate the fact that my hardware is of finite size? I'd expect the map's insert method to return an error or throw an exception if the limit is reached. If there is no such way, I'll simply check its size before trying to insert, but that seems like an inelegant way to do it.
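
    std::map has no built-in capacity cap, so the usual pattern is a thin wrapper that checks the size on insert and either rejects the entry or evicts an old one. A hedged sketch of the evicting (LRU) variant in Python via collections.OrderedDict, mirroring what a map plus a recency list would give in C++:

        from collections import OrderedDict

        class BoundedCache:
            def __init__(self, capacity: int):
                self.capacity = capacity
                self.entries = OrderedDict()

            def get(self, key):
                value = self.entries.pop(key)  # raises KeyError on a miss
                self.entries[key] = value      # re-insert: mark as recently used
                return value

            def put(self, key, value):
                if key in self.entries:
                    self.entries.pop(key)
                elif len(self.entries) >= self.capacity:
                    self.entries.popitem(last=False)  # evict least recently used
                self.entries[key] = value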

    Read the article

  • Sort files by month name in ColdFusion

    - by Simon Guo
    I have a directory that contains files like january2009.xml, february2009.xml, march2009.xml, april2009.xml, january2010.xml, february2010.xml, march2010.xml, april2010.xml ... I use cfdirectory to get the files by year. Right now, I want to display them sorted by month. Say I only want the year 2009 data: I want it sorted as january2009.xml, february2009.xml, march2009.xml, april2009.xml, not april2009.xml, february2009.xml, january2009.xml, march2009.xml. Does anyone have an easy way to do this in ColdFusion?
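
    The standard trick is to sort on (year, month number) instead of on the raw string, mapping each month name to its position. The key function, sketched in Python (filenames as in the question; a ColdFusion version would derive the same key and sort the query or array by it):

        MONTHS = ["january", "february", "march", "april", "may", "june",
                  "july", "august", "september", "october", "november", "december"]

        def month_key(filename: str):
            stem = filename.removesuffix(".xml")
            name, year = stem[:-4], int(stem[-4:])  # e.g. ("january", 2009)
            return (year, MONTHS.index(name))

        files = ["april2009.xml", "february2009.xml", "january2009.xml", "march2009.xml"]
        print(sorted(files, key=month_key))
        # ['january2009.xml', 'february2009.xml', 'march2009.xml', 'april2009.xml']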

    Read the article

  • How to sort a QuantumGrid on data from a different column

    - by norgepaul
    Is there a way to sort DevExpress QuantumGrid rows on data from a different column than the one whose header has been clicked? For example, when the header of column A is clicked, the rows of the grid are sorted on the data from column B. Visually, it should still appear that column A has been sorted, as the sort glyphs will be shown in column A's header.

    Read the article

  • How should I change my Graph structure (very slow insertion)?

    - by Nazgulled
    Hi, this program I'm doing is about a social network, which means there are users and their profiles. The profiles structure is UserProfile.

    Now, there are various possible Graph implementations and I don't think I'm using the best one. I have a Graph structure and, inside, there's a pointer to a linked list of type Vertex. Each Vertex element has a value, a pointer to the next Vertex, and a pointer to a linked list of type Edge. Each Edge element has a value (so I can define weights and whatever is needed), a pointer to the next Edge, and a pointer to the Vertex owner.

    I have 2 sample files with data to process (in CSV style) and insert into the Graph. The first one is the user data (one user per line); the second one is the user relations (for the graph). The first file is quickly inserted into the graph because I always insert at the head and there are ~18000 users. The second file takes ages, even though I still insert the edges at the head. It has about ~520000 lines of user relations and takes between 13 and 15 minutes to insert into the Graph. I made a quick test, and reading the data is pretty quick, instantaneous really. The problem is the insertion.

    This problem exists because I have a Graph implemented with linked lists for the vertices. Every time I need to insert a relation, I need to look up 2 vertices so I can link them together. Doing this for ~520000 relations takes a while. How should I solve this?

    Solution 1) Some people recommended implementing the Graph (the vertices part) as an array instead of a linked list. This way I have direct access to every vertex, and the insertion time will probably drop considerably. But I don't like the idea of allocating an array with [18000] elements. How practical is this? My sample data has ~18000, but what if I need much less or much more? The linked list approach has that flexibility: I can have whatever size I want as long as there's memory for it. But the array doesn't, so how am I going to handle such a situation? What are your suggestions? Using linked lists is good for space complexity but bad for time complexity, and using an array is good for time complexity but bad for space complexity. Any thoughts about this solution?

    Solution 2) This project also demands that I have some sort of data structure that allows quick lookup based on a name index and an ID index. For this I decided to use hash tables. My tables are implemented with separate chaining for collision resolution, and when a load factor of 0.70 is reached, I normally recreate the table. I base the next table size on http://planetmath.org/encyclopedia/GoodHashTablePrimes.html. Currently, both hash tables hold a pointer to the UserProfile instead of duplicating the user profile itself. That would be stupid: changing data would require 3 changes, and it's really dumb to do it that way. So I just save the pointer to the UserProfile. The same user profile pointer is also saved as the value in each Graph Vertex.

    So, I have 3 data structures, one Graph and two hash tables, and every single one of them points to the same exact UserProfile. The Graph structure will serve the purpose of finding the shortest path and things like that, while the hash tables serve as quick indexes by name and ID. What I'm thinking of, to solve my Graph problem, is to have the hash table values point to the corresponding Vertex instead of the UserProfile. It's still a pointer, no more and no less space is used; I just change what I point to. Like this, I can easily and quickly look up each Vertex I need and link them together. This would insert the ~520000 relations pretty quickly (see the sketch after this question).

    I thought of this solution because I already have the hash tables and I need to have them; then, why not take advantage of them for indexing the Graph vertices instead of the user profile? It's basically the same thing; I can still access the UserProfile pretty quickly, just go to the Vertex and then to the UserProfile. But do you see any cons of this second solution against the first one? Or only pros that overpower the pros and cons of the first solution?

    Other Solution) If you have any other solution, I'm all ears. But please explain the pros and cons of that solution over the previous 2. I really don't have much time to be wasting with this right now; I need to move on with this project, so, if I'm going to make such a change, I need to understand exactly what to change and if that's really the way to go.

    Hopefully no one fell asleep reading this and closed the browser; sorry for the big testament. But I really need to decide what to do about this, and I really need to make a change.

    P.S.: When answering my proposed solutions, please enumerate them as I did, so I know exactly what you are talking about and don't confuse myself more than I already am.
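
    The heart of Solution 2 is a hash index from user ID to vertex, which turns each relation insert into two O(1) lookups plus two adjacency-list appends. A compact sketch in Python (names are illustrative, not the poster's C structures):

        class Vertex:
            def __init__(self, profile):
                self.profile = profile  # shared UserProfile record
                self.edges = []         # adjacency list

        vertex_by_id = {}  # hash index: user ID -> Vertex

        def add_user(user_id, profile):
            vertex_by_id[user_id] = Vertex(profile)

        def add_relation(id_a, id_b):
            # Two O(1) hash lookups replace two linked-list scans.
            a, b = vertex_by_id[id_a], vertex_by_id[id_b]
            a.edges.append(b)
            b.edges.append(a)

    With ~520000 relations this stays linear overall, versus the roughly quadratic cost of scanning an ~18000-element vertex list for every edge.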

    Read the article
