Search Results

Search found 3489 results on 140 pages for 'all numeric no hash'.


  • Configuring varnish and django (apache/modwsgi)

    - by Hedde
    I am trying to work out why my application keeps hitting the database while I have set up Varnish in front of Apache. I think I am missing some vital configuration; any tips are welcome. This is my curl result:

    ```
    HTTP/1.1 200 OK
    Server: Apache/2.2.16 (Debian)
    Content-Language: en-us
    Vary: Accept,Accept-Encoding,Accept-Language,Cookie
    Cache-Control: s-maxage=60, no-transform, max-age=60
    Content-Type: application/json; charset=utf-8
    Date: Sat, 15 Sep 2012 08:19:17 GMT
    Connection: keep-alive
    ```

    My varnishlog:

    ```
    13 BackendClose - apache
    13 BackendOpen b apache 127.0.0.1 47665 127.0.0.1 8000
    13 TxRequest b GET
    13 TxURL b /api/v1/events/?format=json
    13 TxProtocol b HTTP/1.1
    13 TxHeader b User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
    13 TxHeader b Host: foobar.com
    13 TxHeader b Accept: */*
    13 TxHeader b X-Forwarded-For: 92.64.200.145
    13 TxHeader b X-Varnish: 979305817
    13 TxHeader b Accept-Encoding: gzip
    13 RxProtocol b HTTP/1.1
    13 RxStatus b 200
    13 RxResponse b OK
    13 RxHeader b Date: Sat, 15 Sep 2012 08:21:28 GMT
    13 RxHeader b Server: Apache/2.2.16 (Debian)
    13 RxHeader b Content-Language: en-us
    13 RxHeader b Content-Encoding: gzip
    13 RxHeader b Vary: Accept,Accept-Encoding,Accept-Language,Cookie
    13 RxHeader b Cache-Control: s-maxage=60, no-transform, max-age=60
    13 RxHeader b Content-Length: 6399
    13 RxHeader b Content-Type: application/json; charset=utf-8
    13 Fetch_Body b 4(length) cls 0 mklen 1
    13 Length b 6399
    13 BackendReuse b apache
    11 SessionOpen c 92.64.200.145 53236 :80
    11 ReqStart c 92.64.200.145 53236 979305817
    11 RxRequest c HEAD
    11 RxURL c /api/v1/events/?format=json
    11 RxProtocol c HTTP/1.1
    11 RxHeader c User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
    11 RxHeader c Host: foobar.com
    11 RxHeader c Accept: */*
    11 VCL_call c recv lookup
    11 VCL_call c hash
    11 Hash c /api/v1/events/?format=json
    11 Hash c foobar.com
    11 VCL_return c hash
    11 VCL_call c miss fetch
    11 Backend c 13 apache apache
    11 TTL c 979305817 RFC 60 -1 -1 1347697289 0 1347697288 0 60
    11 VCL_call c fetch deliver
    11 ObjProtocol c HTTP/1.1
    11 ObjResponse c OK
    11 ObjHeader c Date: Sat, 15 Sep 2012 08:21:28 GMT
    11 ObjHeader c Server: Apache/2.2.16 (Debian)
    11 ObjHeader c Content-Language: en-us
    11 ObjHeader c Content-Encoding: gzip
    11 ObjHeader c Vary: Accept,Accept-Encoding,Accept-Language,Cookie
    11 ObjHeader c Cache-Control: s-maxage=60, no-transform, max-age=60
    11 ObjHeader c Content-Type: application/json; charset=utf-8
    11 Gzip c u F - 6399 69865 80 80 51128
    11 VCL_call c deliver deliver
    11 TxProtocol c HTTP/1.1
    11 TxStatus c 200
    11 TxResponse c OK
    11 TxHeader c Server: Apache/2.2.16 (Debian)
    11 TxHeader c Content-Language: en-us
    11 TxHeader c Vary: Accept,Accept-Encoding,Accept-Language,Cookie
    11 TxHeader c Cache-Control: s-maxage=60, no-transform, max-age=60
    11 TxHeader c Content-Type: application/json; charset=utf-8
    11 TxHeader c Date: Sat, 15 Sep 2012 08:21:29 GMT
    11 TxHeader c Connection: keep-alive
    11 Length c 0
    11 ReqEnd c 979305817 1347697288.292612076 1347697289.456128597 0.000086784 1.163468122 0.000048399
    ```
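    One detail worth noticing in both traces is the backend's Vary: Accept,Accept-Encoding,Accept-Language,Cookie response header: an HTTP cache must store a separate object for every distinct combination of those request-header values, so a session cookie in the request gives each client a private cache slot and sends the request through to the backend. If that is what is happening here, unsetting req.http.Cookie in vcl_recv for cacheable URLs (or trimming Vary in the Django response) is the usual fix. A rough sketch of the mechanism in Python, as an illustration rather than Varnish's actual code:

    ```python
    import hashlib

    VARY = ["Accept", "Accept-Encoding", "Accept-Language", "Cookie"]

    def cache_key(url: str, request_headers: dict) -> str:
        """Build a cache key the way a Vary-aware HTTP cache conceptually does:
        the URL plus the value of every request header named in Vary."""
        parts = [url] + ["%s=%s" % (h, request_headers.get(h, "")) for h in VARY]
        return hashlib.sha1("|".join(parts).encode("utf-8")).hexdigest()

    # Same URL, different cookies: two distinct keys, so the second request
    # misses the cache and is fetched from the backend again.
    print(cache_key("/api/v1/events/?format=json", {"Cookie": "sessionid=abc"}))
    print(cache_key("/api/v1/events/?format=json", {"Cookie": "sessionid=def"}))
    ```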


  • postfix error: open database /var/lib/mailman/data/aliases.db: No such file

    - by Thufir
    In trying to follow the Ubuntu guide for Postfix and Mailman, I do not understand these directions:

    ```
    This build of mailman runs as list. It must have permission to read /etc/aliases
    and read and write /var/lib/mailman/data/aliases. Do this with these commands:

    sudo chown root:list /var/lib/mailman/data/aliases
    sudo chown root:list /etc/aliases

    Save and run:

    sudo newaliases
    ```

    I'm getting this kind of error:

    ```
    root@dur:~# telnet localhost 25
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    220 dur.bounceme.net ESMTP Postfix (Ubuntu)
    ehlo dur
    250-dur.bounceme.net
    250-PIPELINING
    250-SIZE 10240000
    250-VRFY
    250-ETRN
    250-STARTTLS
    250-ENHANCEDSTATUSCODES
    250-8BITMIME
    250 DSN
    quit
    221 2.0.0 Bye
    Connection closed by foreign host.
    root@dur:~# tail /var/log/mail.log
    Aug 28 01:16:43 dur postfix/master[19444]: terminating on signal 15
    Aug 28 01:16:43 dur postfix/postfix-script[19558]: starting the Postfix mail system
    Aug 28 01:16:43 dur postfix/master[19559]: daemon started -- version 2.9.1, configuration /etc/postfix
    Aug 28 01:16:45 dur postfix/postfix-script[19568]: stopping the Postfix mail system
    Aug 28 01:16:45 dur postfix/master[19559]: terminating on signal 15
    Aug 28 01:16:45 dur postfix/postfix-script[19673]: starting the Postfix mail system
    Aug 28 01:16:45 dur postfix/master[19674]: daemon started -- version 2.9.1, configuration /etc/postfix
    Aug 28 01:17:22 dur postfix/smtpd[19709]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory
    Aug 28 01:17:22 dur postfix/smtpd[19709]: connect from localhost[127.0.0.1]
    Aug 28 01:18:37 dur postfix/smtpd[19709]: disconnect from localhost[127.0.0.1]
    root@dur:~# postconf -n
    alias_database = hash:/etc/aliases
    alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
    append_dot_mydomain = no
    biff = no
    broken_sasl_auth_clients = yes
    config_directory = /etc/postfix
    default_transport = smtp
    home_mailbox = Maildir/
    inet_interfaces = loopback-only
    mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}"
    mailbox_size_limit = 0
    mailman_destination_recipient_limit = 1
    mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost
    myhostname = dur.bounceme.net
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    readme_directory = no
    recipient_delimiter = +
    relay_domains = lists.dur.bounceme.net
    relay_transport = relay
    relayhost =
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    smtp_use_tls = yes
    smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
    smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_authenticated_header = yes
    smtpd_sasl_local_domain = $myhostname
    smtpd_sasl_path = private/dovecot-auth
    smtpd_sasl_security_options = noanonymous
    smtpd_sasl_type = dovecot
    smtpd_tls_auth_only = yes
    smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem
    smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key
    smtpd_tls_mandatory_ciphers = medium
    smtpd_tls_mandatory_protocols = SSLv3, TLSv1
    smtpd_tls_received_header = yes
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    smtpd_use_tls = yes
    tls_random_source = dev:/dev/urandom
    transport_maps = hash:/etc/postfix/transport
    ```

    and I am wondering what the connection might be. I do see that I don't have the requisite files:

    ```
    root@dur:~# ll /var/lib/mailman/data/aliases
    ls: cannot access /var/lib/mailman/data/aliases: No such file or directory
    ```

    At what stage were those aliases created? How can I create them? Is that what's causing the error open database /var/lib/mailman/data/aliases.db: No such file or directory?
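    For background on what Postfix is complaining about: a hash: table is a compiled on-disk database (aliases.db) that newaliases/postalias builds from a plain-text source file (aliases), so if the text file was never created there is nothing to compile and lookups fail exactly like this; for Mailman specifically, its genaliases utility is what regenerates the aliases file. The compile-then-lookup round trip, sketched with Python's dbm module (the paths are illustrative stand-ins, and this is not Postfix's actual Berkeley DB code):

    ```python
    import dbm

    SRC, DB = "/tmp/aliases.txt", "/tmp/aliases"   # stand-ins for aliases and aliases.db

    with open(SRC, "w") as f:                      # the plain-text source file
        f.write("mailman: post@lists.example\n")

    with dbm.open(DB, "c") as db:                  # the "newaliases/postalias" step
        for line in open(SRC):
            name, _, target = line.partition(":")
            db[name.strip()] = target.strip()

    with dbm.open(DB, "r") as db:                  # what smtpd does at lookup time
        print(db[b"mailman"])                      # b'post@lists.example'
    ```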


  • Why MySQL sat for 2 minutes doing nothing?

    - by Alex R
    This was a one-time thing, not reproducible... but I saved the SHOW INNODB STATUS output. Can anybody tell what's going on here? The simple insert took almost 3 minutes to complete.

    ```
    =====================================
    110201 15:58:10 INNODB MONITOR OUTPUT
    =====================================
    Per second averages calculated from the last 34 seconds
    ----------
    SEMAPHORES
    ----------
    OS WAIT ARRAY INFO: reservation count 11963, signal count 11766
    --Thread 1824 has waited at .\btr\btr0cur.c line 443 for 118.00 seconds the semaphore:
    S-lock on RW-latch at 09D6453C created in file .\buf\buf0buf.c line 550
    a writer (thread id 1824) has reserved it in mode wait exclusive
    number of readers 1, waiters flag 1
    Last time read locked in file .\buf\buf0flu.c line 599
    Last time write locked in file .\btr\btr0cur.c line 443
    Mutex spin waits 0, rounds 527817, OS waits 7133
    RW-shared spins 2532, OS waits 1226; RW-excl spins 1652, OS waits 1118
    ------------
    TRANSACTIONS
    ------------
    Trx id counter 0 95830
    Purge done for trx's n:o < 0 95814 undo n:o < 0 0
    History list length 11
    LIST OF TRANSACTIONS FOR EACH SESSION:
    ---TRANSACTION 0 0, not started, OS thread id 3704
    MySQL thread id 551, query id 2702112 localhost 127.0.0.1 root
    show innodb status
    ---TRANSACTION 0 95829, not started, OS thread id 3132
    MySQL thread id 534, query id 2702020 localhost 127.0.0.1 root
    ---TRANSACTION 0 95828, not started, OS thread id 3152
    MySQL thread id 527, query id 2701973 localhost 127.0.0.1 root
    ---TRANSACTION 0 95827, ACTIVE 118 sec, OS thread id 1824 inserting, thread declared inside InnoDB 500
    mysql tables in use 1, locked 1
    1 lock struct(s), heap size 320, 0 row lock(s)
    MySQL thread id 526, query id 2701972 localhost 127.0.0.1 root update
    INSERT INTO log_searchcriteria (userid,search_criteria,date,search_type) VALUES (NAME_CONST('userid',NULL), NAME_CONST('search_criteria',_latin1' SELECT SQL_CALC_FOUND_ROWS idx_search.CTCX_LATITUDE, idx_search.CTCX_LONGITUDE, idx_search.building_id, idx_search.LN_LIST_NUMBER, idx_search.LP_LIST_PRICE, idx_search.HSN_ADRESS_HOUSE_NUMBER, idx_search.STR_ADDRESS_STREET, idx_search.CP_ADDRESS_COMPASS_POINT, idx_search.UN_UNIT, idx_search.CIT_CITY, idx_search.ZP_ZIP_CODE, idx_search.AR_AREA_NAME, idx_search.BR_BEDROOMS, idx_search.BTH_BATHS, idx_search.ST_STATUS, idx_search.CTCX_STYLE_TYPE, idx_s
    --------
    FILE I/O
    --------
    I/O thread 0 state: wait Windows aio (insert buffer thread)
    I/O thread 1 state: wait Windows aio (log thread)
    I/O thread 2 state: wait Windows aio (read thread)
    I/O thread 3 state: wait Windows aio (write thread)
    Pending normal aio reads: 0, aio writes: 1, ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
    Pending flushes (fsync) log: 0; buffer pool: 0
    151006 OS file reads, 120758 OS file writes, 6844 OS fsyncs
    0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
    -------------------------------------
    INSERT BUFFER AND ADAPTIVE HASH INDEX
    -------------------------------------
    Ibuf: size 1, free list len 5, seg size 7, 24664 inserts, 24664 merged recs, 4612 merges
    Hash table size 553253, node heap has 629 buffer(s)
    0.00 hash searches/s, 0.00 non-hash searches/s
    ---
    LOG
    ---
    Log sequence number 5 2318193115
    Log flushed up to   5 2318193115
    Last checkpoint at  5 2318129891
    0 pending log writes, 0 pending chkp writes
    3036 log i/o's done, 0.00 log i/o's/second
    ----------------------
    BUFFER POOL AND MEMORY
    ----------------------
    Total memory allocated 213459462; in additional pool allocated 1720192
    Dictionary memory allocated 240416
    Buffer pool size   8192
    Free buffers       0
    Database pages     7563
    Modified db pages  18
    Pending reads 0
    Pending writes: LRU 0, flush list 18, single page 0
    Pages read 150973, created 28788, written 115137
    0.00 reads/s, 0.00 creates/s, 0.00 writes/s
    No buffer pool page gets since the last printout
    --------------
    ROW OPERATIONS
    --------------
    1 queries inside InnoDB, 0 queries in queue
    1 read views open inside InnoDB
    Main thread id 2992, state: flushing buffer pool pages
    Number of rows inserted 794294, updated 89203, deleted 13698, read 1453084305
    0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
    ----------------------------
    END OF INNODB MONITOR OUTPUT
    ============================
    ```

    Thanks
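    A back-of-the-envelope reading of the numbers above (this assumes InnoDB's default 16 KiB page size, which the output does not state): the pool is small and completely full, and the 118-second semaphore wait matches the 118-second ACTIVE transaction, so one plausible reading is that the insert sat waiting for a buffer-pool page latch while the main thread was busy flushing pages.

    ```python
    # Rough arithmetic on the monitor output (assumes 16 KiB InnoDB pages).
    PAGE = 16 * 1024
    pool_pages, free_pages, dirty_pages = 8192, 0, 18

    print("buffer pool: %.0f MiB" % (pool_pages * PAGE / 2**20))    # 128 MiB
    print("free: %d pages, dirty: %d" % (free_pages, dirty_pages))  # free: 0 pages
    ```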


  • Postfix issues sending mail to addresses under domain located on server

    - by iamthewit
    I recently installed Virtualmin on my nice shiny new Rackspace cloud server. Everything went seamlessly, but I've been having some issues getting emails to send properly. The problem seems to be that the server cannot send mail to email addresses whose domain is hosted on my server. For example, on my server I run multiple virtual domains; let's call this one test.com. When I run the mail command from the shell (mail [email protected]) I get the following back in my maillog:

    ```
    Oct 6 14:55:18 test postfix/pickup[8737]: DC1131612CC: uid=0 from=<root>
    Oct 6 14:55:18 test postfix/cleanup[8769]: DC1131612CC: [email protected]
    Oct 6 14:55:18 test postfix/qmgr[8738]: DC1131612CC: [email protected], size=353, nrcpt=1 (queue active)
    Oct 6 14:55:18 test postfix/error[8771]: DC1131612CC: [email protected], relay=none, delay=0, delays=0/0/0/0, dsn=5.0.0, status=bounced (User unknown in virtual alias table)
    Oct 6 14:55:18 test postfix/cleanup[8769]: DD07D1612D1: [email protected]
    Oct 6 14:55:18 test postfix/bounce[8772]: DC1131612CC: sender non-delivery notification: DD07D1612D1
    Oct 6 14:55:18 test postfix/qmgr[8738]: DD07D1612D1: from=<>, size=2268, nrcpt=1 (queue active)
    Oct 6 14:55:18 test postfix/qmgr[8738]: DC1131612CC: removed
    Oct 6 14:55:18 test postfix/local[8773]: DD07D1612D1: [email protected], relay=local, delay=0.03, delays=0/0/0/0.03, dsn=2.0.0, status=sent (delivered to command: /usr/bin/procmail-wrapper -o -a $DOMAIN -d $LOGNAME)
    Oct 6 14:55:18 test postfix/qmgr[8738]: DD07D1612D1: removed
    ```

    When I run mail [email protected] the message is sent and received perfectly fine. I'm a bit of a noob when it comes to servers, but I pick things up fairly quickly, so please excuse any incorrect terminology and my general noobiness. Any help would be greatly appreciated; I've been googling for quite a while but I haven't found a solution yet. Cheers guys. Here is the reformatted postconf output; do you want the reformatted main.cf file too, or is this enough?

    ```
    alias_database = hash:/etc/postfix/aliases
    alias_maps = hash:/etc/postfix/aliases
    broken_sasl_auth_clients = yes
    command_directory = /usr/sbin
    config_directory = /etc/postfix
    daemon_directory = /usr/libexec/postfix
    debug_peer_level = 2
    home_mailbox = Maildir/
    html_directory = no
    mailbox_command = /usr/bin/procmail-wrapper -o -a $DOMAIN -d $LOGNAME
    mailq_path = /usr/bin/mailq.postfix
    manpage_directory = /usr/share/man
    myhostname = server.test.com
    newaliases_path = /usr/bin/newaliases.postfix
    readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
    sample_directory = /usr/share/doc/postfix-2.3.3/samples
    sender_bcc_maps = hash:/etc/postfix/bcc
    sendmail_path = /usr/sbin/sendmail.postfix
    setgid_group = postdrop
    smtpd_recipient_restrictions = permit_mynetworks permit_sasl_authenticated reject_unauth_destination
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_security_options = noanonymous
    unknown_local_recipient_reject_code = 550
    virtual_alias_maps = hash:/etc/postfix/virtual
    ```


  • tc rules block traffic from some hosts on the network

    - by user139430
    I have a problem I cannot solve. The script, which sets up the rules for traffic shaping, is blocking traffic from some hosts. If I remove all the rules, then it works. I cannot understand why. Here is my script:

    ```sh
    #!/bin/sh
    cmdTC=/sbin/tc

    rateLANDl="60mbit"
    ceilLANDl="60mbit"
    rateLANUl="40mbit"
    ceilLANUl="40mbit"
    quantLAN="1514"

    # Nowadays the bandwidth limit is set to 100mbit.
    # We divide it into a 60mbit download and a 40mbit upload band.
    rateHiDl="30mbit"
    ceilHiDl="60mbit"
    rateHiUl="20mbit"
    ceilHiUl="40mbit"
    quantHi="1514"

    rateLoDl="30mbit"
    ceilLoDl="60mbit"
    rateLoUl="20mbit"
    ceilLoUl="40mbit"
    quantLo="1514"

    devNIF=eth0
    devFIF=ifb0

    modprobe ifb
    ip link set $devFIF up 2>/dev/null
    #exit 0

    ################################################################################
    # Remove disciplines from network and fake interfaces
    ################################################################################
    $cmdTC qdisc del dev $devNIF root 2>/dev/null
    $cmdTC qdisc del dev $devFIF root 2>/dev/null
    $cmdTC qdisc del dev $devNIF ingress 2>/dev/null

    if [ "$1" = "down" ]; then
        exit 0
    fi

    ################################################################################
    # Create disciplines for network interface
    ################################################################################
    $cmdTC qdisc add dev $devNIF root handle 1:0 htb default 12

    # Create classes for network interface
    $cmdTC class add dev $devNIF parent 1:0 classid 1:1 htb rate ${rateLANDl} ceil ${ceilLANDl} quantum ${quantLAN}
    $cmdTC class add dev $devNIF parent 1:1 classid 1:11 htb rate ${rateHiDl} ceil ${ceilHiDl} quantum ${quantHi}
    $cmdTC class add dev $devNIF parent 1:1 classid 1:12 htb rate ${rateLoDl} ceil ${ceilLoDl} quantum ${quantLo}
    $cmdTC qdisc add dev $devNIF parent 1:11 handle 111: sfq perturb 10
    $cmdTC qdisc add dev $devNIF parent 1:12 handle 112: sfq perturb 10

    # Create filters for network interface
    $cmdTC filter add dev $devNIF protocol all parent 1:0 u32 match ip dst 10.252.2.0/24 flowid 1:11
    $cmdTC filter add dev $devNIF protocol all parent 111: handle 111 flow hash keys dst divisor 1024 baseclass 1:11
    $cmdTC filter add dev $devNIF protocol all parent 112: handle 112 flow hash keys dst divisor 1024 baseclass 1:12

    ################################################################################
    # Create disciplines for fake interface
    ################################################################################
    $cmdTC qdisc add dev $devFIF root handle 1:0 htb default 12

    # Create classes for fake interface
    $cmdTC class add dev $devFIF parent 1:0 classid 1:1 htb rate ${rateLANUl} ceil ${ceilLANUl} quantum ${quantLAN}
    $cmdTC class add dev $devFIF parent 1:1 classid 1:11 htb rate ${rateHiUl} ceil ${ceilHiUl} quantum ${quantHi}
    $cmdTC class add dev $devFIF parent 1:1 classid 1:12 htb rate ${rateLoUl} ceil ${ceilLoUl} quantum ${quantLo}
    $cmdTC qdisc add dev $devFIF parent 1:11 handle 111: sfq perturb 10
    $cmdTC qdisc add dev $devFIF parent 1:12 handle 112: sfq perturb 10

    # Create filters for fake interface
    $cmdTC filter add dev $devFIF protocol all parent 1:0 u32 match ip src 10.252.2.0/24 flowid 1:11
    $cmdTC filter add dev $devFIF protocol all parent 111: handle 111 flow hash keys src divisor 1024 baseclass 1:11
    $cmdTC filter add dev $devFIF protocol all parent 112: handle 112 flow hash keys src divisor 1024 baseclass 1:12

    ################################################################################
    # Create redirect disciplines from network to fake interface
    ################################################################################
    $cmdTC qdisc add dev $devNIF handle ffff:0 ingress
    $cmdTC filter add dev $devNIF parent ffff:0 protocol all u32 match u32 0 0 action mirred egress redirect dev $devFIF
    ```

    Here is my /etc/modules:

    ```
    loop
    ifb
    ppp_mppe
    nf_conntrack_pptp
    nt_conntrack_proto_gre
    nf_nat_pptp
    nf_nat_proto_gre
    ```

    The system is Linux wall 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux.


  • How to generate Serial Keys? [closed]

    - by vincent mathew
    Which software can I use to generate product keys if I have the GroupId, KeyId, Secret and Hash for the generation? Edit: I had seen a post which generated product keys using this information:

    ```
    Additional Key Details / Activation Decryption:
    GroupId = 86f 2159
    KeyId   = ed46 60742
    Secret  = e0cdc320ba048 3954789545910344
    Hash    = 5f 95
    ```

    So I was wondering if there is any software which could generate keys using this information? Thanks.


  • (PHP) SHA1 vs md5 vs SHA256: which to use for a PHP login?

    - by hatorade
    I'm making a PHP login, and I'm trying to decide whether to use SHA1 or MD5, or the SHA256 which I read about in another Stack Overflow article. Are any of them more secure than the others? For SHA1/SHA256, do I still use a salt? Also, is this a secure way to store the password as a hash in MySQL?

    ```php
    function createSalt()
    {
        $string = md5(uniqid(rand(), true));
        return substr($string, 0, 3);
    }

    $salt = createSalt();
    $hash = sha1($salt . $hash);
    ```
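    Whichever digest is chosen, the shape of the scheme matters more than SHA1 vs. MD5 vs. SHA256: a per-user random salt plus a deliberately slow key-derivation function is the standard pattern. A minimal sketch of that pattern in Python rather than PHP (the iteration count and salt length are illustrative choices, not requirements):

    ```python
    import hashlib, hmac, os

    ITERATIONS = 100_000  # deliberately slow; tune to your hardware

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Return (salt, digest); store both alongside the user row."""
        salt = os.urandom(16)  # random per-user salt
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison

    salt, digest = hash_password("hunter2")
    print(verify_password("hunter2", salt, digest))   # True
    print(verify_password("wrong", salt, digest))     # False
    ```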


  • How do I send DTMF tones and pauses using Android ACTION_CALL Intent with commas in the number?

    - by Rob Kent
    I have an application that calls a number stored by the user. Everything works okay unless the number contains commas or hash signs, in which case the Uri gets truncated after the digits. I have read that you need to encode the hash sign, but even doing that, or without a hash sign, the commas never get passed through. However, they do get passed through if you just pick the number from your contacts. I must be doing something wrong. For example:

    ```java
    String number = "1234,,,,4#1";
    Uri uri = Uri.parse(String.format("tel:%s", number));
    try {
        startActivity(new Intent(callType, uri));
    } catch (ActivityNotFoundException e) {
        ...
    ```

    Only the number '1234' would end up in the dialer.


  • Best way to cache resized images using PHP and MySQL

    - by Chris Hawes
    What would be the best-practice way to handle the caching of images using PHP? The filename is currently stored in a MySQL database and is renamed to a GUID on upload, along with the original filename and alt tag. When the image is put into the HTML pages, it is done so using a URL such as /images/get/200x200/{guid}.jpg, which is rewritten to a PHP script. This allows my designers to specify (roughly; the source image may be smaller) the file size. The PHP script then creates a hash of the size (200x200 in the URL) and the GUID filename, and if the file has been generated before (a file with the name of the hash exists in the TMP directory) it sends the file from the application TMP directory. If the hashed filename does not exist, it is created, written to disk and served up in the same manner. Is this as efficient as it could be? (It also supports watermarking the images, and the watermarking settings are stored in the hash as well, but that's out of scope for this.)
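    The scheme described is essentially content-addressed caching, and the key thing it gets right is that everything affecting the output bytes (GUID, target size, watermark settings) goes into the hash. A minimal sketch of the same idea in Python rather than PHP, where resize_image and TMP_DIR are hypothetical stand-ins for the real resizing code and cache location:

    ```python
    import hashlib, os

    TMP_DIR = "/tmp/image-cache"   # hypothetical cache directory

    def resize_image(guid, size, watermark, dest):
        """Hypothetical stand-in for the actual GD/ImageMagick resizing code."""
        open(dest, "wb").close()

    def cached_path(guid, size, watermark=""):
        """Name the cache file after a hash of every input that shapes the output."""
        key = hashlib.sha1(f"{guid}:{size}:{watermark}".encode()).hexdigest()
        return os.path.join(TMP_DIR, key + ".jpg")

    def get_image(guid, size, watermark=""):
        path = cached_path(guid, size, watermark)
        if not os.path.exists(path):                   # generate once...
            os.makedirs(TMP_DIR, exist_ok=True)
            resize_image(guid, size, watermark, path)
        return path                                    # ...serve from disk after

    print(get_image("9f1c-example-guid", "200x200"))
    ```

    Beyond that, the cheapest win is usually serving the cached file with long-lived Expires/Cache-Control headers (or letting the web server serve the cache directory directly), so repeat requests never reach PHP at all.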


  • Response.Redirect with a fragment identifier causes unexpected refresh when later using location.hash

    - by Matt
    Hi all, I was hoping someone can assist in describing a workaround for the following issue I am running into on my ASP.NET website on IE.

    Repro steps:
    1. User visits A.aspx
    2. A.aspx uses Response.Redirect to bring the user to B.aspx#house
    3. On B.aspx#house, the user clicks a button that sets window.location.hash = 'test'

    Actual results: B.aspx is loaded again. The URL now shows B.aspx#test.

    Expected results: No reload. The URL just changes to B.aspx#test.

    Requirements:
    - Page A must redirect to page B with a fragment identifier in the URL
    - Any user action on page B will set location.hash
    - Setting location.hash must not make page B refresh
    - This must work on IE

    Notes: The bug only repros on IE (tested on IE 6/7/8); Opera, FF, Chrome and Safari all have the expected result of no reload. This error may have nothing to do with ASP.NET and everything to do with IE. For any kind soul willing to have a look at this, I have created a minimal ASP.NET web project to make it easy to repro here


  • Nominal Attributes in LibSVM

    - by Chris S
    When creating a libsvm training file, how do you differentiate between a nominal attribute and a numeric attribute? I'm trying to encode certain nominal attributes as integers, but I want to ensure libsvm doesn't misinterpret them as numeric values. Unfortunately, libsvm's site seems to have very little documentation. Pentaho's docs seem to imply libsvm makes this distinction, but I'm still not clear how it's made.
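    As far as the file format goes, libsvm's index:value pairs are all treated as numeric; the usual way to keep an integer-coded nominal attribute from being read as ordered is to one-hot encode it into k separate 0/1 features. A small sketch of that encoding (the category list and feature values are made up for illustration):

    ```python
    def one_hot(value, categories):
        """Expand one nominal value into len(categories) 0/1 features so a model
        cannot read an ordering into arbitrary integer category codes."""
        return [1.0 if value == c else 0.0 for c in categories]

    COLORS = ["red", "green", "blue"]
    features = one_hot("green", COLORS) + [3.7]   # nominal attribute + a numeric one

    # libsvm's sparse training format: "<label> <index>:<value> ...",
    # with 1-based indices and zero-valued features omitted.
    line = "1 " + " ".join(f"{i}:{v}" for i, v in enumerate(features, start=1) if v)
    print(line)   # -> 1 2:1.0 4:3.7
    ```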


  • Android 2.2 and Exchange password policy enforcement

    - by Moshe
    Hi, on the Android 2.2 site it is written: "Improved security with the addition of numeric pin or alpha-numeric password options to unlock device. Exchange administrators can enforce password policy across devices." But while I'm using an N1 with 2.2 and trying to connect to my company's Exchange server, it didn't force me to set a password, although connecting to the same server from a Windows Mobile 6 device does enforce this. I know that the Exchange server is configured to enforce passwords. Is there anything special the administrator needs to do? Thank you, Moshe


  • Unique identifier for an email

    - by Skywalker
    I am writing a C# application which allows users to store emails in a MS SQL Server database. Many times, multiple users will be copied on an email from a customer. If they all try to add the same email to the database, I want to make sure that the email is only added once. MD5 springs to mind as a way to do this. I don't need to worry about malicious tampering, only to make sure that the same email will map to the same hash and that no two emails with different content will map to the same hash. My question really boils down to how one would combine multiple fields into one MD5 (or other) hash value. Some of these fields will have a single value per email (e.g. subject, body, sender email address) while others will have multiple values (varying numbers of attachments, recipients). I want to develop a way of uniquely identifying an email that will be platform and language independent (not based on serialization). Any advice?
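    One way to make such a fingerprint deterministic across platforms and languages is to define the canonical byte encoding by hand: length-prefix every field so field boundaries cannot collide, and sort the multi-valued fields so recipient or attachment order cannot change the result. A sketch of that idea in Python (the field choice and separator are illustrative; SHA-256 is used here but MD5 is equally fine for non-adversarial deduplication):

    ```python
    import hashlib

    def email_fingerprint(subject, body, sender, recipients, attachment_hashes):
        """Canonical, order-independent fingerprint of an email's content."""
        h = hashlib.sha256()

        def feed(value: str):
            data = value.encode("utf-8")
            h.update(str(len(data)).encode() + b":" + data)  # length-prefixed field

        for field in (subject, body, sender):
            feed(field)
        for r in sorted(recipients):            # order must not matter
            feed(r)
        for a in sorted(attachment_hashes):     # e.g. one digest per attachment body
            feed(a)
        return h.hexdigest()

    print(email_fingerprint("Hi", "See attached.", "a@example.com",
                            ["b@example.com", "c@example.com"], []))
    ```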


  • Are some data structures more suitable for functional programming than others?

    - by Rob Lachlan
    In Real World Haskell, there is a section titled "Life without arrays or hash tables" where the authors suggest that lists and trees are preferred in functional programming, whereas an array or a hash table might be used instead in an imperative program. This makes sense, since it's much easier to reuse part of an (immutable) list or tree when creating a new one than to do so with an array. So my questions are: Are there really significantly different usage patterns for data structures between functional and imperative programming? If so, is this a problem? What if you really do need a hash table for some application? Do you simply swallow the extra expense incurred for modifications?
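    The reuse point can be made concrete with the simplest persistent structure there is, a cons list; here is a tiny illustration in Python, with tuples standing in for immutable cells:

    ```python
    # A cons list as nested pairs: (head, rest); None is the empty list.
    def cons(head, tail):
        return (head, tail)

    xs = cons(1, cons(2, cons(3, None)))   # the list [1, 2, 3]
    ys = cons(0, xs)                       # "adds" an element: [0, 1, 2, 3]

    # ys was built in O(1) and shares xs wholesale: nothing was copied, and xs
    # is still intact. That cheap structural sharing is why immutable lists and
    # trees suit functional code, while an array "update" would copy everything.
    print(ys[1] is xs)   # True
    ```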


  • multiple records using the select_tag

    - by JZ
    I'm attempting to display multiple records, with each record displaying its own selection box. I would like to submit this form and pass a hash, with each Add's id serving as the key of the hash and the selected option as its value. How could I fix my code? Is this even possible with the select_tag method?

    ```erb
    <%= form_tag yardsign_adds_path, :method => :post do %>
      <%= select_tag "support_code[]",
            options_for_select([[ "1 - Strong Supporter", add.id ],
                                [ "2 - Likely Voter" ],
                                [ "3 - Undecided" ],
                                [ "4 - Likely Opposed" ],
                                [ "5 - Strongly Opposed" ]]) %>
      <%= submit_tag "Update" %>
    <% end %>
    ```

    Current terminal output:

    ```
    Started POST "/adds/yardsign" for 127.0.0.1 at 2010-04-17 01:36:03
    Processing by AddsController#yardsign as HTML
      Parameters: {"commit"=>"Update", "authenticity_token"=>"VQ2jVfzHI7pB+87lQa9NWqvUK3zwJWiJE7CwAnIewiw=", "support_code"=>["1", "3 - Undecided", "3 - Undecided"]}
    ```


  • Random Page Cost and Planning

    - by Dave Jarvis
    The query below extracts climate data from weather stations within a given radius of a city, using the dates for which those weather stations actually have data. The query uses the table's only index, rather effectively:

    ```sql
    CREATE UNIQUE INDEX measurement_001_stc_idx
      ON climate.measurement_001
      USING btree (station_id, taken, category_id);
    ```

    Reducing the server's configuration value for random_page_cost from 2.0 to 1.1 had a massive performance improvement for the given range (nearly an order of magnitude) because it suggested to PostgreSQL that it should use the index. While the results now return in 5 seconds (down from ~85 seconds), problematic lines remain. Bumping the query's end date by a single year causes a full table scan:

    ```sql
    sc.taken_start >= '1900-01-01'::date AND
    sc.taken_end <= '1997-12-31'::date AND
    ```

    How do I persuade PostgreSQL to use the indexes regardless of the number of years between the two dates? (A full table scan against 43 million rows is probably not the best plan.) Find the EXPLAIN ANALYSE results below the query. Thank you!

    Query:

    ```sql
    SELECT
      extract(YEAR FROM m.taken) AS year,
      avg(m.amount) AS amount
    FROM
      climate.city c,
      climate.station s,
      climate.station_category sc,
      climate.measurement m
    WHERE
      c.id = 5182 AND
      earth_distance(
        ll_to_earth(c.latitude_decimal, c.longitude_decimal),
        ll_to_earth(s.latitude_decimal, s.longitude_decimal)) / 1000 <= 30 AND
      s.elevation BETWEEN 0 AND 3000 AND
      s.applicable = TRUE AND
      sc.station_id = s.id AND
      sc.category_id = 1 AND
      sc.taken_start >= '1900-01-01'::date AND
      sc.taken_end <= '1996-12-31'::date AND
      m.station_id = s.id AND
      m.taken BETWEEN sc.taken_start AND sc.taken_end AND
      m.category_id = sc.category_id
    GROUP BY
      extract(YEAR FROM m.taken)
    ORDER BY
      extract(YEAR FROM m.taken)
    ```

    1900 to 1996: Index

    ```
    "Sort (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)"
    " Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))"
    " Sort Method: quicksort Memory: 32kB"
    " -> HashAggregate (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)"
    " -> Nested Loop (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)"
    " Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))"
    " -> Nested Loop (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)"
    " Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)"
    " -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)"
    " Index Cond: (id = 5182)"
    " -> Nested Loop (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)"
    " -> Seq Scan on station_category sc (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)"
    " Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))"
    " -> Index Scan using station_pkey1 on station s (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)"
    " Index Cond: (s.id = sc.station_id)"
    " Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))"
    " -> Append (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)"
    " -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)"
    " Filter: (m.category_id = 1)"
    " -> Bitmap Heap Scan on measurement_001 m (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)"
    " Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))"
    " -> Bitmap Index Scan on measurement_001_stc_idx (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)"
    " Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))"
    "Total runtime: 2269.264 ms"
    ```

    1900 to 1997: Full Table Scan

    ```
    "Sort (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)"
    " Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))"
    " Sort Method: quicksort Memory: 32kB"
    " -> HashAggregate (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)"
    " -> Hash Join (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)"
    " Hash Cond: (m.station_id = sc.station_id)"
    " Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))"
    " -> Append (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)"
    " -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)"
    " Filter: (category_id = 1)"
    " -> Seq Scan on measurement_001 m (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)"
    " Filter: (category_id = 1)"
    " -> Hash (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)"
    " -> Nested Loop (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)"
    " Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)"
    " -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)"
    " Index Cond: (id = 5182)"
    " -> Hash Join (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)"
    " Hash Cond: (s.id = sc.station_id)"
    " -> Seq Scan on station s (cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949 loops=1)"
    " Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))"
    " -> Hash (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)"
    " -> Bitmap Heap Scan on station_category sc (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)"
    " Recheck Cond: (category_id = 1)"
    " Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))"
    " -> Bitmap Index Scan on station_category_station_category_idx (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)"
    " Index Cond: (category_id = 1)"
    "Total runtime: 86165.936 ms"
    ```


  • Racket: change dotted pair to list

    - by user2963128
    I have a program that recursively calls a hashtable and prints out data from it. Unfortunately, my hashtable seems to be saving data as dotted pairs, so when I call the hashtable I get an error saying that there is no data for it, because it's trying to search the hashtable for a dotted pair instead of a list. Is there an easy way to make the dotted pair into a regular list? I.e. I'm getting '("was" . "beginning") instead of '("was" "beginning"). Is there a way to change this without rewriting how my hashtable stores things? I'm using the let form to set a variable to this and then calling another function based on that variable:

    ```racket
    (let ((data (list-ref (hash-ref Ngram-table key)
                          (random (length (hash-ref Ngram-table key))))))
    ```

    Is there a way to make the value stored in data just a list like '("var1" "var2") instead of a dotted pair? Edit: I'm getting dotted pairs because I'm using let to set data to part of the hashtable's key and one of the elements in that hash.


  • Prevent Ruby on Rails from sending the session header

    - by hurikhan77
    How do I prevent Rails from always sending the session header (Set-Cookie)? This is a security problem if the application also sends a Cache-Control: public header. My application touches (but does not modify) the session hash in some or most actions. These pages display no private content, so I want them to be cacheable, but Rails always sends the cookie header, whether or not the session hash differs from the one previously sent. What I want to achieve is to send the hash only if it differs from the one received from the client. How can you do that? And should that fix perhaps also go into an official Rails release? What do you think?


  • wrapping boost::ublas with swig

    - by leon
    I am trying to pass data between the numpy and boost::ublas layers. I have written an ultra-thin wrapper because SWIG cannot parse ublas' header correctly. The code is shown below:

    ```cpp
    #include <boost/numeric/ublas/vector.hpp>
    #include <boost/numeric/ublas/matrix.hpp>
    #include <boost/lexical_cast.hpp>
    #include <algorithm>
    #include <sstream>
    #include <string>

    using std::copy;
    using namespace boost;

    typedef boost::numeric::ublas::matrix<double> dm;
    typedef boost::numeric::ublas::vector<double> dv;

    class dvector : public dv {
    public:
        dvector(const int rhs) : dv(rhs) {;};
        dvector();
        dvector(const int size, double* ptr) : dv(size) {
            // Note: std::copy's pointer arithmetic is in elements, not bytes,
            // so this reads sizeof(double)*size doubles -- 8x past the end of
            // the input buffer; ptr + size would copy exactly `size` doubles.
            copy(ptr, ptr + sizeof(double) * size, &(dv::data()[0]));
        }
        ~dvector() {}
    };
    ```

    with a SWIG interface that looks something like:

    ```swig
    %apply(int DIM1, double* INPLACE_ARRAY1) {(const int size, double* ptr)}

    class dvector {
    public:
        dvector(const int rhs);
        dvector();
        dvector(const int size, double* ptr);
        %newobject toString;
        char* toString();
        ~dvector();
    };
    ```

    I have compiled them successfully with gcc 4.3 and VC++ 9.0. However, when I simply run a = dvector(array([1.,2.,3.])) it gives me a segfault. This is the first time I have used SWIG with numpy, and I don't yet fully understand the data conversion and memory-buffer passing. Does anyone see something obvious I have missed? I have tried to trace through with a debugger, but it crashed within the assemblies of python.exe. I have no clue whether this is a SWIG problem or a problem with my simple wrapper. Anything is appreciated.


  • SQL Server cast fails with arithmetic overflow

    - by Jamie Ide
    According to the entry for the decimal and numeric data types in SQL Server 2008 Books Online, precision is:

    p (precision): The maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point. The precision must be a value from 1 through the maximum precision of 38. The default precision is 18.

    However, the second SELECT below fails with "Arithmetic overflow error converting int to data type numeric."

    ```sql
    SELECT CAST(123456789 as decimal(9,0))
    SELECT CAST(123456789 as decimal(9,1))
    ```
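    What the quoted definition leaves implicit is that the scale is carved out of the precision: decimal(9,1) reserves one of the nine digits for the fractional part, leaving 9 - 1 = 8 digits for the integer part, and 123456789 has nine. A throwaway Python check of whether an integer fits a given decimal(p, s):

    ```python
    def fits_decimal(value: int, precision: int, scale: int) -> bool:
        """An integer fits decimal(p, s) iff it needs at most p - s digits."""
        return len(str(abs(value))) <= precision - scale

    print(fits_decimal(123456789, 9, 0))   # True  -> the first CAST succeeds
    print(fits_decimal(123456789, 9, 1))   # False -> arithmetic overflow
    ```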


  • What Date Format Should I Send When Using Oracle.DataAccess?

    - by discwiz
    I am converting from using Microsoft's System.Data.OracleClient to what I believe is called Oracle's ODT (Oracle.DataAccess 10.2.0.100). When I try to send a date I get the error "ORA-1858: a non-numeric character was found where a numeric was expected". This code worked great using System.Data.OracleClient:

    ```vb
    cmd.Parameters.Add(New OracleParameter("I_FIRST_LOSS_EVENT_DATE", OracleDbType.Date)).Value = .LossEventsMessages(0).LossEventTime
    ```

    Thanks, Dave


  • Compile time string hashing

    - by Caspin
    I have read in a few different places that using C++0x's new string literals it might be possible to compute a string's hash at compile time. However, no one seems ready to come out and say that it will be possible or how it would be done. Is this possible? What would the operator look like? I'm particularly interested in use cases like this:

    ```cpp
    void foo( const std::string& value )
    {
        switch( std::hash(value) )
        {
        case "one"_hash:   one();   break;
        case "two"_hash:   two();   break;
        /* many more cases */
        default:           other(); break;
        }
    }
    ```

    Note: the compile-time hash function doesn't have to look exactly as I've written it. I did my best to guess what the final solution would look like, but meta_hash<"string"_meta>::value could also be a viable solution.
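    Whatever the eventual C++ syntax turns out to be, the underlying trick is dispatching on string hashes that were precomputed ahead of time, and the hash has to be simple enough to evaluate at compile time; FNV-1a is a common choice for that. A language-neutral sketch of the dispatch pattern in Python (the constants are the standard 32-bit FNV-1a parameters; everything else is illustrative):

    ```python
    def fnv1a(s: str) -> int:
        """32-bit FNV-1a: a few XORs and multiplies, easy to evaluate at compile time."""
        h = 0x811C9DC5
        for byte in s.encode("utf-8"):
            h = ((h ^ byte) * 0x01000193) & 0xFFFFFFFF
        return h

    def one():   print("one")
    def two():   print("two")
    def other(): print("other")

    # The switch's case labels become hashes computed ahead of time.
    HANDLERS = {fnv1a("one"): one, fnv1a("two"): two}

    def foo(value: str) -> None:
        HANDLERS.get(fnv1a(value), other)()

    foo("two")     # -> two
    foo("three")   # -> other
    ```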


  • hibernate sybase db power function

    - by Vipin Thomas
    We are trying to use the Sybase POWER function to do a mathematical calculation on one of the DB columns. Hibernate generates the power function as pow(?, xyzo0_.AmtScale), whereas Sybase supports the function with this syntax:

    ```
    POWER( numeric-expression-1, numeric-expression-2 )
    ```

    We have tried modifying hibernate.dialect, using org.hibernate.dialect.SybaseASE15Dialect, org.hibernate.dialect.Sybase11Dialect and org.hibernate.dialect.SybaseAnywhereDialect, but all of these dialects generate the power function as pow(?, xyzo0_.AmtScale). Is this a Hibernate issue, or are we missing something?


  • Strange: planner takes decision with lower cost, but (very) long query runtime

    - by S38
    Facts:

    - PostgreSQL 8.4.2 on Linux
    - I make use of table inheritance
    - Each table contains 3 million rows
    - Indexes on joining columns are set
    - Table statistics (analyze, vacuum analyze) are up to date
    - The only table used is "node", with various partitioned sub-tables
    - Recursive query (pg >= 8.4)

    Now here is the explained query:

    ```sql
    WITH RECURSIVE rows AS (
        SELECT *
        FROM (
            SELECT r.id, r.set, r.parent, r.masterid
            FROM d_storage.node_dataset r
            WHERE masterid = 3533933
        ) q
        UNION ALL
        SELECT *
        FROM (
            SELECT c.id, c.set, c.parent, r.masterid
            FROM rows r
            JOIN a_storage.node c ON c.parent = r.id
        ) q
    )
    SELECT r.masterid, r.id AS nodeid
    FROM rows r
    ```

    ```
                                                                       QUERY PLAN
    -----------------------------------------------------------------------------------------------------------------------------------------------------------------
     CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
       CTE rows
         ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
               ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                     Index Cond: (masterid = 3533933)
               ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                     Hash Cond: (c.parent = r.id)
                     ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                           ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                           ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                           ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                           ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                           ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                     ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                           ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
     Total runtime: 172111.371 ms
    (16 rows)
    ```

    So far so bad: the planner decides to choose hash joins (good) but no indexes (bad). Now, after doing the following:

    ```sql
    SET enable_hashjoin TO false;
    ```

    the explained query looks like this:

    ```
                                                                                      QUERY PLAN
    -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
     CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
       CTE rows
         ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
               ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                     Index Cond: (masterid = 3533933)
               ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                     Join Filter: (r.id = c.parent)
                     ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                     ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                           ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                           ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                 Recheck Cond: (c.parent = r.id)
                                 ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                       Index Cond: (c.parent = r.id)
                           ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                 Index Cond: (c.parent = r.id)
                           ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                 Index Cond: (c.parent = r.id)
                           ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                 Index Cond: (c.parent = r.id)
     Total runtime: 49.349 ms
    (21 rows)
    ```

    which is incredibly faster, because indexes were used. Notice that the cost of the second query is somewhat higher than that of the first query. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help with this confusing situation? thx, R.

