Search Results

Search found 20761 results on 831 pages for 'chef client'.

Page 502/831

  • centos6.3 varnish3.03 get the wrong backend

    - by Sola.Shawn
    I installed varnish 3.03 with yum. I've got a problem with it; my varnish config is below: #backend weibo { .host = "192.168.1.178"; .port = "8080"; .connect_timeout=20s; .first_byte_timeout=20s; .between_bytes_timeout=20s; } #backend smth { .host = "192.168.1.115"; .port = "8080"; .connect_timeout=20s; .first_byte_timeout=20s; .between_bytes_timeout=20s; } #sub vcl_recv { if (req.restarts == 0) { if (req.http.x-forwarded-for) { set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip; } else { set req.http.X-Forwarded-For = client.ip; } } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { # /* Non-RFC2616 or CONNECT which is weird. */ return(pipe); } if (req.request != "GET" && req.request != "HEAD") { # /* We only deal with GET and HEAD by default */ return(pass); } if (req.http.Authorization || req.http.Cookie) { /* Not cacheable by default */ return(pass); } if (req.http.host ~ "^(hk.)?weibo.com"){ set req.http.host = "hk.weibo.com"; set req.backend = weibo; } elseif (req.http.host ~ "^(www.)?newsmth.net"){ set req.http.host = "www.newsmth.net"; set req.backend = smth; } else { error 404 "Unknown virtual host"; } return(lookup); } ##sub vcl_pipe { return(pipe); } #sub vcl_pass { return(pass); } #sub vcl_hash { hash_data(req.url); if(req.http.host) { hash_data(req.http.host); } else { hash_data(server.ip); } return(hash); } #sub vcl_hit { if(req.http.Cache-Control~"no-cache"||req.http.Cache-Control~"max-age=0"||req.http.Pragma~"no-cache"){ set obj.ttl=0s; return (restart); } return(deliver); } #sub vcl_miss { return(fetch); } #sub vcl_fetch { if (beresp.ttl <= 120s || beresp.http.Set-Cookie || beresp.http.Vary == "*") { /* * Mark as "Hit-For-Pass" for the next 2 minutes */ set beresp.ttl = 10s; return (hit_for_pass); } return(deliver); } #sub vcl_deliver { return(deliver); } #sub vcl_init { return(ok); } #sub vcl_fini { return(ok); } In my Win7 hosts file I added: 192.168.1.178 www.newsmth.net 192.168.1.178 hk.weibo.com I start varnish with: varnishd -f /etc/varnish/dd.vcl -s malloc,100M -a 0.0.0.0:8000 -T 0.0.0.0:3500 When I access "hk.weibo.com:8000" it works fine and I get: Hello,I am hk.weibo.com! But when I access http://www.newsmth.net:8000/ I also get: Hello,I am hk.weibo.com! My question is: why isn't it "Hello,I am www.newsmth.net!"? Varnish fetched the content from the wrong backend. Does anyone know how to fix this?
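
    A quick way to narrow this down from the client side is to send the same request through Varnish with each Host header explicitly set and compare what comes back; browsers send the port in the Host header (e.g. www.newsmth.net:8000), so it is also worth checking that value against the regexes in vcl_recv. The sketch below is not from the question; the Varnish address is assumed to be 192.168.1.178:8000.

      # Minimal sketch (not from the original post): send the same request through
      # Varnish on :8000 with different Host headers and compare the bodies, to see
      # which backend actually answered. The IP and hostnames come from the question;
      # adjust them for your setup.
      import http.client

      VARNISH = "192.168.1.178"   # host running varnishd -a 0.0.0.0:8000 (assumed)
      PORT = 8000

      for host in ("hk.weibo.com", "www.newsmth.net"):
          conn = http.client.HTTPConnection(VARNISH, PORT, timeout=10)
          # The Host header is what vcl_recv matches against req.http.host
          conn.request("GET", "/", headers={"Host": host})
          resp = conn.getresponse()
          body = resp.read(200).decode("utf-8", errors="replace")
          print(f"Host: {host} -> {resp.status} {body.strip()!r}")
          conn.close()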

    Read the article

  • /usr/bin/sshd isn't linked against PAM on one of my systems. What is wrong and how can I fix it?

    - by marc.riera
    Hi, I'm using AD as my user account server with ldap. Most of the servers run with UsePam yes except this one, it has lack of pam support on sshd. root@linserv9:~# ldd /usr/sbin/sshd linux-vdso.so.1 => (0x00007fff621fe000) libutil.so.1 => /lib/libutil.so.1 (0x00007fd759d0b000) libz.so.1 => /usr/lib/libz.so.1 (0x00007fd759af4000) libnsl.so.1 => /lib/libnsl.so.1 (0x00007fd7598db000) libcrypto.so.0.9.8 => /usr/lib/libcrypto.so.0.9.8 (0x00007fd75955b000) libcrypt.so.1 => /lib/libcrypt.so.1 (0x00007fd759323000) libc.so.6 => /lib/libc.so.6 (0x00007fd758fc1000) libdl.so.2 => /lib/libdl.so.2 (0x00007fd758dbd000) /lib64/ld-linux-x86-64.so.2 (0x00007fd759f0e000) I have this packages installed root@linserv9:~# dpkg -l|grep -E 'pam|ssh' ii denyhosts 2.6-2.1 an utility to help sys admins thwart ssh hac ii libpam-modules 0.99.7.1-5ubuntu6.1 Pluggable Authentication Modules for PAM ii libpam-runtime 0.99.7.1-5ubuntu6.1 Runtime support for the PAM library ii libpam-ssh 1.91.0-9.2 enable SSO behavior for ssh and pam ii libpam0g 0.99.7.1-5ubuntu6.1 Pluggable Authentication Modules library ii libpam0g-dev 0.99.7.1-5ubuntu6.1 Development files for PAM ii openssh-blacklist 0.1-1ubuntu0.8.04.1 list of blacklisted OpenSSH RSA and DSA keys ii openssh-client 1:4.7p1-8ubuntu1.2 secure shell client, an rlogin/rsh/rcp repla ii openssh-server 1:4.7p1-8ubuntu1.2 secure shell server, an rshd replacement ii quest-openssh 5.2p1_q13-1 Secure shell root@linserv9:~# What I'm doing wrong? thanks. Edit: root@linserv9:~# cat /etc/pam.d/sshd # PAM configuration for the Secure Shell service # Read environment variables from /etc/environment and # /etc/security/pam_env.conf. auth required pam_env.so # [1] # In Debian 4.0 (etch), locale-related environment variables were moved to # /etc/default/locale, so read that as well. auth required pam_env.so envfile=/etc/default/locale # Standard Un*x authentication. @include common-auth # Disallow non-root logins when /etc/nologin exists. account required pam_nologin.so # Uncomment and edit /etc/security/access.conf if you need to set complex # access limits that are hard to express in sshd_config. # account required pam_access.so # Standard Un*x authorization. @include common-account # Standard Un*x session setup and teardown. @include common-session # Print the message of the day upon successful login. session optional pam_motd.so # [1] # Print the status of the user's mailbox upon successful login. session optional pam_mail.so standard noenv # [1] # Set up user limits from /etc/security/limits.conf. session required pam_limits.so # Set up SELinux capabilities (need modified pam) # session required pam_selinux.so multiple # Standard Un*x password updating. @include common-password Edit2: UsePAM yes fails With this configuration ssh fails to start : root@linserv9:/home/admmarc# cat /etc/ssh/sshd_config |grep -vE "^[ \t]*$|^#" Port 22 Protocol 2 ListenAddress 0.0.0.0 RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile .ssh/authorized_keys ChallengeResponseAuthentication yes UsePAM yes Subsystem sftp /usr/lib/sftp-server root@linserv9:/home/admmarc# The error it gives is as follows root@linserv9:/home/admmarc# /etc/init.d/ssh start * Starting OpenBSD Secure Shell server sshd /etc/ssh/sshd_config: line 75: Bad configuration option: UsePAM /etc/ssh/sshd_config: terminating, 1 bad configuration options ...fail! root@linserv9:/home/admmarc#
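
    The package list shows quest-openssh installed alongside the stock openssh-server, and a build of sshd compiled without PAM would explain both the missing libpam in the ldd output and the "Bad configuration option: UsePAM" error. A small sketch to check which sshd binaries actually link libpam (the second path is a guess; the Quest build usually installs its own sshd outside /usr/sbin):

      # Sketch: report whether a given sshd binary is linked against libpam, via ldd.
      # Paths are assumptions; check every sshd you might actually be running.
      import subprocess

      def links_libpam(binary):
          out = subprocess.run(["ldd", binary], capture_output=True, text=True)
          if out.returncode != 0:
              return f"{binary}: ldd failed ({out.stderr.strip() or out.stdout.strip()})"
          linked = any("libpam" in line for line in out.stdout.splitlines())
          return f"{binary}: {'links libpam' if linked else 'NOT linked against libpam'}"

      for path in ("/usr/sbin/sshd", "/opt/quest/sbin/sshd"):  # second path is a guess
          print(links_libpam(path))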

    Read the article

  • Forwarding rsyslog to syslog-ng, with FQDN and facility separation

    - by Joshua Miller
    I'm attempting to configure my rsyslog clients to forward messages to my syslog-ng log repository systems. Forwarding messages works "out of the box", but my clients are logging short names, not FQDNs. As a result the messages on the syslog repo use short names as well, which is a problem because one can't determine which system the message originated from easily. My clients get their names through DHCP / DNS. I've tried a number of solutions trying to get this working, but without success. I'm using rsyslog 4.6.2 and syslog-ng 3.2.5. I've tried setting $PreserveFQDN on as the first directive in /etc/rsyslog.conf (and restarting rsyslog of course). It seems to have no effect. hostname --fqdn on the client returns the proper FQDN, so the problem isn't whether the system can actually figure out its own FQDN. $LocalHostName <fqdn> looked promising, but this directive isn't available in my version of rsyslog (Available since 4.7.4+, 5.7.3+, 6.1.3+). Upgrading isn't an option at the moment. Configuring the syslog-ng server to populate names based on reverse lookups via DNS isn't an option. There are complexities with reverse DNS and the public cloud. Specifying for the forwarder to use a custom template seems like a viable option at first glance. I can specify the following, which causes local logging to begin using the FQDN on the syslog-ng repo. $template MyTemplate, "%timestamp% <FQDN> %syslogtag%%msg%" $ActionForwardDefaultTemplate MyTemplate However, when I put this in place syslog-ng seems to be unable to categorize messages by facility or priority. Messages come in as FQDN, but everything is put in to user.log. When I don't use the custom template, messages are properly categorized under facility and priority, but with the short name. So, in summary, if I manually trick rsyslog into including the FQDN, priority and facility becomes lost details to syslog-ng. How can I get rsyslog to do FQDN logging which works properly going to a syslog-ng repository? rsyslog client config: $ModLoad imuxsock.so # provides support for local system logging (e.g. via logger command) $ModLoad imklog.so # provides kernel logging support (previously done by rklogd) $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat *.info;mail.none;authpriv.none;cron.none /var/log/messages authpriv.* /var/log/secure mail.* -/var/log/maillog cron.* /var/log/cron *.emerg * uucp,news.crit /var/log/spooler local7.* /var/log/boot.log $WorkDirectory /var/spool/rsyslog # where to place spool files $ActionQueueFileName fwdRule1 # unique name prefix for spool files $ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible) $ActionQueueSaveOnShutdown on # save messages to disk on shutdown $ActionQueueType LinkedList # run asynchronously $ActionResumeRetryCount -1 # infinite retries if host is down *.* @syslog-ng1.example.com *.* @syslog-ng2.example.com syslog-ng configuration (abridged for brevity): options { flush_lines (0); time_reopen (10); log_fifo_size (1000); long_hostnames (off); use_dns (no); use_fqdn (yes); create_dirs (no); keep_hostname (yes); }; source src { unix-stream("/dev/log"); internal(); udp(ip(0.0.0.0) port(514)); }; destination per_host_destination { file( "/var/log/syslog-ng/devices/$HOST/$FACILITY.log" owner("root") group("root") perm(0644) dir_owner(root) dir_group(root) dir_perm(0775) create_dirs(yes)); }; log { source(src); destination(per_facility_destination); };
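
    For completeness, a stdlib sketch that shows what the resolver reports for the short name versus the FQDN, then emits a single RFC 3164-style test message with the FQDN forced into the HOSTNAME field; that makes it easy to confirm whether syslog-ng files such a message under the right facility. The destination host is taken from the rsyslog config above; the tag and facility are arbitrary choices for the test.

      # Sketch: compare short name vs. FQDN, then send one RFC 3164-style test
      # message with the FQDN in the HOSTNAME field to the syslog-ng host, so you
      # can check how it is filed on the repository side.
      import socket
      import time

      short = socket.gethostname()
      fqdn = socket.getfqdn()
      print(f"short name: {short}  fqdn: {fqdn}")

      PRI = 8 * 23 + 6        # facility local7 (23), severity info (6) -> <190>
      timestamp = time.strftime("%b %d %H:%M:%S")
      msg = f"<{PRI}>{timestamp} {fqdn} fqdn-test: hello from {fqdn}"

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sock.sendto(msg.encode("ascii"), ("syslog-ng1.example.com", 514))  # host from the question
      sock.close()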

    Read the article

  • obtaining nimbuzz server certificate for nmdecrypt expert in NetMon

    - by lurscher
    I'm using Network Monitor 3.4 with the nmdecrypt expert. I'm opening a nimbuzz conversation node in the conversation window and i click Expert- nmDecrpt - run Expert that shows up a window where i have to add the server certificate. I am not sure how to retrieve the server certificate for nimbuzz XMPP chat service. Any idea how to do this? this question is a follow up question of this one. Edit for some background so it might be that this is encrypted with the server pubkey and i cannot retrieve the message, unless i debug the native binary and try to intercept the encryption code. I have a test client (using agsXMPP) that is able to connect with nimbuzz with no problems. the only thing that is not working is adding invisible mode. It seems this is some packet sent from the official client during login which i want to obtain. any suggestions to try to grab this info would be greatly appreciated. Maybe i should get myself (and learn) IDA pro? This is what i get inspecting the TLS frames on Network Monitor: Frame: Number = 81, Captured Frame Length = 769, MediaType = ETHERNET + Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[...],SourceAddress:[....] + Ipv4: Src = ..., Dest = 192.168.2.101, Next Protocol = TCP, Packet ID = 9939, Total IP Length = 755 - Tcp: Flags=...AP..., SrcPort=5222, DstPort=3578, PayloadLen=715, Seq=4101074854 - 4101075569, Ack=1127356300, Win=4050 (scale factor 0x0) = 4050 SrcPort: 5222 DstPort: 3578 SequenceNumber: 4101074854 (0xF4716FA6) AcknowledgementNumber: 1127356300 (0x4332178C) + DataOffset: 80 (0x50) + Flags: ...AP... Window: 4050 (scale factor 0x0) = 4050 Checksum: 0x8841, Good UrgentPointer: 0 (0x0) TCPPayload: SourcePort = 5222, DestinationPort = 3578 TLSSSLData: Transport Layer Security (TLS) Payload Data - TLS: TLS Rec Layer-1 HandShake: Server Hello.; TLS Rec Layer-2 HandShake: Certificate.; TLS Rec Layer-3 HandShake: Server Hello Done. - TlsRecordLayer: TLS Rec Layer-1 HandShake: ContentType: HandShake: - Version: TLS 1.0 Major: 3 (0x3) Minor: 1 (0x1) Length: 42 (0x2A) - SSLHandshake: SSL HandShake ServerHello(0x02) HandShakeType: ServerHello(0x02) Length: 38 (0x26) - ServerHello: 0x1 + Version: TLS 1.0 + RandomBytes: SessionIDLength: 0 (0x0) TLSCipherSuite: TLS_RSA_WITH_AES_256_CBC_SHA { 0x00, 0x35 } CompressionMethod: 0 (0x0) - TlsRecordLayer: TLS Rec Layer-2 HandShake: ContentType: HandShake: - Version: TLS 1.0 Major: 3 (0x3) Minor: 1 (0x1) Length: 654 (0x28E) - SSLHandshake: SSL HandShake Certificate(0x0B) HandShakeType: Certificate(0x0B) Length: 650 (0x28A) - Cert: 0x1 CertLength: 647 (0x287) - Certificates: CertificateLength: 644 (0x284) - X509Cert: Issuer: nimbuzz.com,Nimbuzz,NL, Subject: nimbuzz.com,Nimbuzz,NL + SequenceHeader: - TbsCertificate: Issuer: nimbuzz.com,Nimbuzz,NL, Subject: nimbuzz.com,Nimbuzz,NL + SequenceHeader: + Tag0: + Version: (2) + SerialNumber: -1018418383 + Signature: Sha1WithRSAEncryption (1.2.840.113549.1.1.5) - Issuer: nimbuzz.com,Nimbuzz,NL - RdnSequence: nimbuzz.com,Nimbuzz,NL + SequenceOfHeader: 0x1 + Name: NL + Name: Nimbuzz + Name: nimbuzz.com + Validity: From: 02/22/10 20:22:32 UTC To: 02/20/20 20:22:32 UTC + Subject: nimbuzz.com,Nimbuzz,NL - SubjectPublicKeyInfo: RsaEncryption (1.2.840.113549.1.1.1) + SequenceHeader: + Algorithm: RsaEncryption (1.2.840.113549.1.1.1) - SubjectPublicKey: - AsnBitStringHeader: - AsnId: BitString type (Universal 3) - LowTag: Class: (00......) Universal (0) Type: (..0.....) 
Primitive TagValue: (...00011) 3 - AsnLen: Length = 141, LengthOfLength = 1 LengthType: LengthOfLength = 1 Length: 141 bytes BitString: + Tag3: + Extensions: - SignatureAlgorithm: Sha1WithRSAEncryption (1.2.840.113549.1.1.5) - SequenceHeader: - AsnId: Sequence and SequenceOf types (Universal 16) + LowTag: - AsnLen: Length = 13, LengthOfLength = 0 Length: 13 bytes, LengthOfLength = 0 + Algorithm: Sha1WithRSAEncryption (1.2.840.113549.1.1.5) - Parameters: Null Value - Sha1WithRSAEncryption: Null Value + AsnNullHeader: - Signature: - AsnBitStringHeader: - AsnId: BitString type (Universal 3) - LowTag: Class: (00......) Universal (0) Type: (..0.....) Primitive TagValue: (...00011) 3 - AsnLen: Length = 129, LengthOfLength = 1 LengthType: LengthOfLength = 1 Length: 129 bytes BitString: + TlsRecordLayer: TLS Rec Layer-3 HandShake:
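
    On port 5222 an XMPP server normally starts in plain text and upgrades via STARTTLS (RFC 6120), so the certificate has to be fetched after a minimal stream negotiation rather than by opening a TLS socket directly. A rough stdlib sketch follows; the host name is an assumption and the recv() handling is naive. Note also that the public certificate alone will not let nmDecrypt decrypt the traffic; that requires the server's private key, which cannot be recovered from the wire.

      # Rough sketch (stdlib only): minimal XMPP STARTTLS negotiation against 5222,
      # then save the server certificate as PEM. Host/domain are assumptions; the
      # recv() calls assume each small reply arrives in one chunk.
      import socket
      import ssl

      HOST, DOMAIN = "o.nimbuzz.com", "nimbuzz.com"   # assumed XMPP server/domain

      s = socket.create_connection((HOST, 5222), timeout=15)
      s.sendall(("<stream:stream to='%s' xmlns='jabber:client' "
                 "xmlns:stream='http://etherx.jabber.org/streams' version='1.0'>" % DOMAIN).encode())
      print(s.recv(8192).decode(errors="replace"))     # stream header + features (should list starttls)
      s.sendall(b"<starttls xmlns='urn:ietf:params:xml:ns:xmpp-tls'/>")
      print(s.recv(8192).decode(errors="replace"))     # expect <proceed .../>

      ctx = ssl.create_default_context()
      ctx.check_hostname = False                       # we only want to fetch the cert
      ctx.verify_mode = ssl.CERT_NONE
      tls = ctx.wrap_socket(s, server_hostname=DOMAIN)
      der = tls.getpeercert(binary_form=True)
      with open("nimbuzz-server.pem", "w") as f:
          f.write(ssl.DER_cert_to_PEM_cert(der))
      print("wrote nimbuzz-server.pem")
      tls.close()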

    Read the article

  • Cannot connect to website - SSL handshaking fails

    - by ravenspoint
    So I cannot connect to certain websites. Just a few, most are OK. The one I really care about is paypal.com. I have done the usual things. Let's see: Checked my etc/hosts Flushed the DNS cache Checked firewall Switched on & off virus protection Switched on and off ad blocking pinged the sites Eventually, I decided to look at what curl is saying in detail == Info: About to connect() to www.paypal.com port 443 (#0) == Info: Trying 66.211.169.2... == Info: connected == Info: SSLv3, TLS handshake, Client hello (1): => Send SSL data, 110 bytes (0x6e) 0000: 01 00 00 6a 03 01 4f 6c aa 8c 57 2b 3d 1e 74 64 ...j..Ol..W+=.td 0010: c1 27 25 a5 3a 12 7f 3f 41 0a 17 15 2e c9 67 7c .'%.:.?A.....g| 0020: b3 e1 f6 9a db a9 00 00 2a 00 39 00 38 00 35 00 ........*.9.8.5. 0030: 16 00 13 00 0a 00 33 00 32 00 2f 00 07 00 05 00 ......3.2./..... 0040: 04 00 15 00 12 00 09 00 14 00 11 00 08 00 06 00 ................ 0050: 03 00 ff 01 00 00 17 00 00 00 13 00 11 00 00 0e ................ 0060: 77 77 77 2e 70 61 79 70 61 6c 2e 63 6f 6d www.paypal.com (hangs here for ever) This looks to me like paypal is refusing to reply to the first SSL handshake. I don't know much about SSL, but compaing to the output from a site that works for me seems to make it obvious == Info: About to connect() to www.cibc.com port 443 (#0) == Info: Trying 159.231.80.200... == Info: connected == Info: SSLv3, TLS handshake, Client hello (1): => Send SSL data, 108 bytes (0x6c) 0000: 01 00 00 68 03 01 4f 6c ad 6a 1f 67 d5 84 c4 4b ...h..Ol.j.g...K 0010: 0d 49 ae d6 b9 5b c3 63 f9 48 aa 18 da 43 d1 32 .I...[.c.H...C.2 0020: 47 ae 17 e5 cd e9 00 00 2a 00 39 00 38 00 35 00 G.......*.9.8.5. 0030: 16 00 13 00 0a 00 33 00 32 00 2f 00 07 00 05 00 ......3.2./..... 0040: 04 00 15 00 12 00 09 00 14 00 11 00 08 00 06 00 ................ 0050: 03 00 ff 01 00 00 15 00 00 00 11 00 0f 00 00 0c ................ 0060: 77 77 77 2e 63 69 62 63 2e 63 6f 6d www.cibc.com == Info: SSLv3, TLS handshake, Server hello (2): <= Recv SSL data, 74 bytes (0x4a) 0000: 02 00 00 46 03 01 00 00 58 cf 26 e2 e1 65 db 11 ...F....X.&..e.. 0010: bc 6f 26 7b 3b 6d eb 14 5f ad 47 dd 86 ea 4d a3 .o&{;m.._.G...M. 0020: fb 9f b7 2a 54 3e 20 5f 6b 04 5a 12 38 64 5d 18 ...*T> _k.Z.8d]. 0030: 65 9e e9 cd 61 eb 91 c1 16 25 61 30 bb 08 2a 78 e...a....%a0..*x 0040: b8 ee b8 7e f2 65 6a 00 04 00 ...~.ej... == Info: SSLv3, TLS handshake, CERT (11): ... and so on - working nicely eventually get some nice HTML Now I am reaaly stuck. This has been going on for five days, so I am pretty sure that the problem is not with paypal. But what on my system could be interfering with the SSL handshaking done by curl with this particular site? I suppose I could not be offering any certificates that PayPal accepts, but wouldn't I get a reply telling me so, or at least giving an error?
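
    One way to take curl out of the equation is to attempt the handshake with a hard timeout and see whether it fails the same way; if it does, the problem is in the network path (MTU/fragmentation or a middlebox dropping the larger ClientHello are common culprits) rather than in your certificates, since the server never answers at all. A minimal sketch:

      # Sketch: attempt a TLS handshake against a few hosts with a hard timeout,
      # so a hang shows up as a timeout error instead of blocking forever.
      import socket
      import ssl

      def try_handshake(host, port=443, timeout=10):
          ctx = ssl.create_default_context()
          try:
              with socket.create_connection((host, port), timeout=timeout) as raw:
                  with ctx.wrap_socket(raw, server_hostname=host) as tls:
                      return f"{host}: OK, {tls.version()}, cipher={tls.cipher()[0]}"
          except Exception as exc:                      # timeout, reset, cert error...
              return f"{host}: FAILED ({exc.__class__.__name__}: {exc})"

      for host in ("www.paypal.com", "www.cibc.com"):
          print(try_handshake(host))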

    Read the article

  • Email from my new vps is marked as spam

    - by Chriswede
    I got a new vps from x10vps (x10hosting) and set up the domain via cloudflare. This is what the email looks like: Delivered-To: [email protected] Received: by 10.64.19.240 with SMTP id i16csp357708iee; Tue, 9 Oct 2012 01:29:48 -0700 (PDT) Received: by 10.50.57.130 with SMTP id i2mr908846igq.56.1349771387599; Tue, 09 Oct 2012 01:29:47 -0700 (PDT) Return-Path: <[email protected]> Received: from power.SOURCEAPE.COM ([198.91.90.116]) by mx.google.com with ESMTPS id v8si25630942ica.46.2012.10.09.01.29.46 (version=TLSv1/SSLv3 cipher=OTHER); Tue, 09 Oct 2012 01:29:47 -0700 (PDT) Received-SPF: temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) client-ip=198.91.90.116; Authentication-Results: mx.google.com; spf=temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) [email protected] Received: from nk11p03mm-asmtp010.mac.com ([17.158.232.169]:54276) by power.SOURCEAPE.COM with esmtp (Exim 4.80) (envelope-from <[email protected]>) id 1TLVBD-0004Ig-1Y for [email protected]; Tue, 09 Oct 2012 12:28:43 +0400 I then tried to enable SPF and DKIM and got following massage In order to ensure that SPF or DKIM takes effect, you must confirm that this server is an authoritative nameserver for chvw.de. If you need help, contact your hosting provider. Status: Enabled Warning: cPanel is unable to verify that this server is an authoritative nameserver for chvw.de. [?] and the email header now looks like this: Delivered-To: [email protected] Received: by 10.50.183.227 with SMTP id ep3csp14506igc; Tue, 9 Oct 2012 01:55:23 -0700 (PDT) Received: by 10.50.40.133 with SMTP id x5mr992934igk.32.1349772923717; Tue, 09 Oct 2012 01:55:23 -0700 (PDT) Return-Path: <[email protected]> Received: from power.SOURCEAPE.COM ([198.91.90.116]) by mx.google.com with ESMTPS id ng8si25688859icb.42.2012.10.09.01.55.23 (version=TLSv1/SSLv3 cipher=OTHER); Tue, 09 Oct 2012 01:55:23 -0700 (PDT) Received-SPF: temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) client-ip=198.91.90.116; Authentication-Results: mx.google.com; spf=temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) [email protected]; dkim=neutral (bad format) [email protected] DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=chvw.de; s=default; h=Message-ID:Subject:To:From:Date:Content-Transfer-Encoding:Content-Type:MIME-Version; bh=iugsx3Lx0KnqjR7dj3wyQHnJ9pe/z3ntYEVk80k8rx4=; b=IrYsCtHdoPubXVOvLqxd7sLE/TyQTS5P3OrEg5SSUSKnQQcQ/fWWyBrmsrgkFSsw6jCmmRWMDR09vH5bQRpFPMA57B7pf8QRKhwXOWFBV+GnVUqICsfRjnNPvhx/lNp5; Received: from localhost ([127.0.0.1]:46539 helo=direct.chvw.de) by power.SOURCEAPE.COM with esmtpa (Exim 4.80) (envelope-from <[email protected]>) id 1TLVb0-0004dZ-Kd for [email protected]; Tue, 09 Oct 2012 12:55:22 +0400
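
    Google stamped both messages with spf=temperror (DNS timeout), so before worrying about the DKIM format it is worth confirming that the TXT/SPF records for the sending domains can actually be resolved quickly from outside. A short sketch, assuming the third-party dnspython package (v2.x) is installed; the domains are the ones that appear in the headers:

      # Sketch (requires dnspython 2.x): check whether the domains' TXT/SPF records
      # can be fetched within a short timeout, since Google reported a DNS timeout.
      import dns.resolver

      def check_txt(domain, timeout=5.0):
          resolver = dns.resolver.Resolver()
          resolver.lifetime = timeout
          try:
              for rdata in resolver.resolve(domain, "TXT"):
                  print(f"{domain} TXT: {rdata}")
          except Exception as exc:
              print(f"{domain}: lookup failed -> {exc.__class__.__name__}: {exc}")

      for name in ("chvw.de", "sourceape.com"):   # domains that appear in the headers
          check_txt(name)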

    Read the article

  • cannot send mail to postfix /w iptables linux proxy

    - by Juzzam
    I have two separate servers, both running Ubuntu 8.04. Server 1 has the real domain name of our site, let's refer to it as example.com. Server 2 is a mail server I have setup with postfix/courier. The hostname for this server is mail.example.com. I've setup iptables on Server 1 to forward all traffic on port 25 to Server 2. I used this script (except I changed the target ip address and the port from 80 to 25). When I send an email to [email protected] it works. However, when I try to send an email to [email protected] from gmail, I get this error: 550 550 #5.1.0 Address rejected [email protected] (state 14) /var/log/mail.log shows no new lines when this happens. What is strange is that it works with telnet from my local machine. For example: $ telnet example.com 25 220 VO13421.localdomain SMTP Postfix EHLO example.com 250-VO13421.localdomain 250-PIPELINING 250-SIZE 10240000 250-ETRN 250-STARTTLS 250-ENHANCEDSTATUSCODES 250-8BITMIME 250 DSN MAIL FROM: [email protected] 250 2.1.0 Ok RCPT TO: [email protected] 250 2.1.5 Ok data 354 Please start mail input. hello user... how have you been? . 250 Mail queued for delivery. quit 221 Closing connection. Good bye. /var/log/mail.log shows success (and the email goes to the maildr): Feb 24 09:47:36 VO13421 postfix/smtpd[2212]: connect from 81.208.68.208.static.dnsptr.net[208.68.xxx.xxx] Feb 24 09:48:01 VO13421 postfix/smtpd[2212]: warning: restriction `smtpd_data_restrictions' after `permit' is ignored Feb 24 09:48:01 VO13421 postfix/smtpd[2212]: 65C68120321: client=81.208.68.208.static.dnsptr.net[208.68.xxx.xxx] Feb 24 09:48:29 VO13421 postfix/smtpd[2212]: warning: restriction `smtpd_data_restrictions' after `permit' is ignored Feb 24 09:48:29 VO13421 postfix/smtpd[2212]: 6BDFA120321: client=81.208.68.208.static.dnsptr.net[208.68.xxx.xxx] Feb 24 09:48:29 VO13421 postfix/cleanup[2216]: 6BDFA120321: message-id= Feb 24 09:48:29 VO13421 postfix/qmgr[2042]: 6BDFA120321: from=, size=395, nrcpt=1 (queue active) Feb 24 09:48:29 VO13421 postfix/virtual[2217]: 6BDFA120321: to=, relay=virtual, delay=0.28, delays=0.25/0.02/0/0.01, dsn=2.0.0, status=sent (delivered to maildir) Feb 24 09:48:29 VO13421 postfix/qmgr[2042]: 6BDFA120321: removed Feb 24 09:48:30 VO13421 postfix/smtpd[2212]: disconnect from 81.208.68.208.static.dnsptr.net[208.68.xxx.xxx] iptables -L -n -v --line on example.com yields the following. Anyone know an iptables command to see the port forwarding? Also, it seems to accept all traffic, that's probably bad right? 
;] num pkts bytes target prot opt in out source destination 1 14041 1023K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 338 20722 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 419K packets, 425M bytes) num pkts bytes target prot opt in out source destination 1 13711 2824K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 postconf -n results in: alias_database = hash:/etc/postfix/aliases alias_maps = hash:/etc/postfix/aliases append_dot_mydomain = no biff = no config_directory = /etc/postfix delay_warning_time = 4h disable_vrfy_command = yes inet_interfaces = all local_recipient_maps = mailbox_size_limit = 0 masquerade_domains = mail.example.com mail1.example.com masquerade_exceptions = root maximal_backoff_time = 8000s maximal_queue_lifetime = 7d minimal_backoff_time = 1000s mydestination = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mynetworks_style = host myorigin = example.com readme_directory = no recipient_delimiter = + relayhost = smtp_helo_timeout = 60s smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_banner = $myhostname SMTP $mail_name smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org smtpd_delay_reject = yes smtpd_hard_error_limit = 12 smtpd_helo_required = yes smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit smtpd_recipient_limit = 16 smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit smtpd_data_restrictions = reject_unauth_pipelining smtpd_sender_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit smtpd_soft_error_limit = 3 smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes unknown_local_recipient_reject_code = 450 virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf virtual_gid_maps = mysql:/etc/postfix/mysql_gid.cf virtual_mailbox_base = /var/spool/mail/virtual virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf virtual_uid_maps = mysql:/etc/postfix/mysql_uid.cf
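
    Two things worth checking: iptables -L only shows the filter table, so the port-25 forward will only be visible with iptables -t nat -L -n -v; and since the telnet test from your own machine succeeds, the failing step is best reproduced from an external host. A minimal smtplib sketch for that second check (addresses are placeholders, not the redacted ones from the question):

      # Sketch: replay the manual telnet test with smtplib from an *external* host,
      # so you see exactly which SMTP step fails when mail arrives via the DNAT.
      import smtplib

      try:
          smtp = smtplib.SMTP("example.com", 25, timeout=30)
          print("banner/ehlo:", smtp.ehlo())
          print("mail from:  ", smtp.mail("someone@gmail.com"))      # placeholder
          print("rcpt to:    ", smtp.rcpt("user@example.com"))       # placeholder
          smtp.quit()
      except Exception as exc:
          print("failed:", exc)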

    Read the article

  • Why don't mails show up in the recipient's mailspool?

    - by Jason
    I have postfix dovecot running with local email system on thunderbird. I have two users on by ubuntu, mailuser 1 and mailuser 2 whom i added to thunderbird. Everything went fine, except the users dont have anything on their inbox on thunderbird and sent mails dont get through. Im using maildir as well. Checking /var/log/mail.log reveals this This what is happining: Restarting postfix and dovecot and then sending mail from one user to another user... I believe this line is the problem May 30 18:31:55 postfix/smtpd[12804]: disconnect from localhost[127.0.0.1] Why is it not connecting ? What could be wrong ? /var/log/mail.log May 30 18:30:21 dovecot: imap: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill) May 30 18:30:21 dovecot: master: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill) May 30 18:30:21 dovecot: imap: Server shutting down. in=467 out=475 May 30 18:30:21 dovecot: config: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill) May 30 18:30:21 dovecot: log: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill) May 30 18:30:21 dovecot: anvil: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill) May 30 18:30:21 dovecot: master: Dovecot v2.2.9 starting up (core dumps disabled) May 30 18:30:54 dovecot: imap-login: Login: user=<mailuser2>, method=PLAIN, rip=::1, lip=::1, mpid=12638, TLS, session=<xUfQkaD66gAAAAAAAAAAAAAAAAAAAAAB> May 30 18:31:04 postfix/master[12245]: terminating on signal 15 May 30 18:31:04 postfix/master[12795]: daemon started -- version 2.11.0, configuration /etc/postfix May 30 18:31:55 postfix/postscreen[12803]: CONNECT from [127.0.0.1]:33668 to [127.0.0.1]:25 May 30 18:31:55 postfix/postscreen[12803]: WHITELISTED [127.0.0.1]:33668 May 30 18:31:55 postfix/smtpd[12804]: connect from localhost[127.0.0.1] May 30 18:31:55 postfix/smtpd[12804]: 1ED7120EB9: client=localhost[127.0.0.1] May 30 18:31:55 postfix/cleanup[12809]: 1ED7120EB9: message-id=<[email protected]> May 30 18:31:55 postfix/qmgr[12799]: 1ED7120EB9: from=<[email protected]>, size=546, nrcpt=1 (queue active) May 30 18:31:55 postfix/local[12810]: 1ED7120EB9: to=<mailuser2@mysitecom>, relay=local, delay=0.03, delays=0.02/0.01/0/0, dsn=2.0.0, status=sent (delivered to maildir) May 30 18:31:55 postfix/qmgr[12799]: 1ED7120EB9: removed May 30 18:31:55 postfix/smtpd[12804]: disconnect from localhost[127.0.0.1] May 30 18:31:55 dovecot: imap-login: Login: user=<mailuser1>, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=12814, TLS, session=<sD9plaD6PgB/AAAB> This is my postfix main.cf See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. 
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination myhostname = server mydomain = mysite.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = $mydomain mydestination = mysite.com #relayhost = smtp.192.168.10.1.com mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.10.0/24 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all home_mailbox = Maildir / mailbox_command= All ports are listening tcp 0 0 *:imaps *:* LISTEN tcp 0 0 *:submission *:* LISTEN tcp 0 0 *:imap2 *:* LISTEN tcp 0 0 s148134.s148134.:domain *:* LISTEN tcp 0 0 192.168.56.101:domain *:* LISTEN tcp 0 0 10.0.2.15:domain *:* LISTEN tcp 0 0 localhost:domain *:* LISTEN tcp 0 0 *:ssh *:* LISTEN tcp 0 0 *:smtp *:* LISTEN tcp 0 0 localhost:953 *:* LISTEN tcp6 0 0 [::]:imaps [::]:* LISTEN tcp6 0 0 [::]:submission [::]:* LISTEN tcp6 0 0 [::]:imap2 [::]:* LISTEN tcp6 0 0 [::]:domain [::]:* LISTEN tcp6 0 0 [::]:ssh [::]:* LISTEN tcp6 0 0 [::]:smtp [::]:* LISTEN tcp6 0 0 localhost:953 [::]:* LISTEN
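
    The log line status=sent (delivered to maildir) says Postfix handed the message off, so the next question is whether Dovecot is reading the same maildir; it is also worth double-checking that home_mailbox really is Maildir/ with a trailing slash and no space, since the value above appears as "Maildir /". A small imaplib sketch to see what Dovecot itself reports for the user (credentials are placeholders):

      # Sketch: confirm what Dovecot sees for the user, independent of Thunderbird.
      # Host and credentials are placeholders.
      import imaplib

      M = imaplib.IMAP4("localhost", 143)
      M.starttls()                      # the server advertises IMAP with TLS
      M.login("mailuser2", "password")  # placeholder credentials
      typ, data = M.select("INBOX", readonly=True)
      print("SELECT:", typ, data)
      typ, nums = M.search(None, "ALL")
      print("messages in INBOX:", len(nums[0].split()) if nums and nums[0] else 0)
      M.logout()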

    Read the article

  • can't connect 2 subnets through RRAS 2008 r2

    - by mcdwight6
    I'm working on a project for a networking class. In VMWare Workstation, I have to set up a 2008 r2 server with DHCP reservations for 2 clients on separate subnets and have them ping each other. Here is the output of the route print command: =========================================================================== Interface List 13 ...00 50 56 2a e7 11 ...... Intel(R) PRO/1000 MT Network Connection #3 10 ...00 0c 29 66 88 dd ...... Intel(R) PRO/1000 MT Network Connection 1 ........................... Software Loopback Interface 1 24 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 11 ...02 00 54 55 4e 01 ...... Teredo Tunneling Pseudo-Interface 14 ...00 00 00 00 00 00 00 e0 6TO4 Adapter 16 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 17 ...00 00 00 00 00 00 00 e0 isatap.{5B8FB196-616F-4168-A020-03E63A309CEC} =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 On-link 10.0.0.2 266 0.0.0.0 0.0.0.0 On-link 223.6.6.2 266 10.0.0.0 255.0.0.0 On-link 10.0.0.2 266 10.0.0.2 255.255.255.255 On-link 10.0.0.2 266 10.255.255.255 255.255.255.255 On-link 10.0.0.2 266 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 128.6.0.0 255.255.0.0 On-link 10.0.0.2 11 128.6.255.255 255.255.255.255 On-link 10.0.0.2 266 223.6.6.0 255.255.255.0 On-link 10.0.0.2 11 223.6.6.0 255.255.255.0 On-link 223.6.6.2 266 223.6.6.2 255.255.255.255 On-link 223.6.6.2 266 223.6.6.255 255.255.255.255 On-link 10.0.0.2 266 223.6.6.255 255.255.255.255 On-link 223.6.6.2 266 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 10.0.0.2 266 224.0.0.0 240.0.0.0 On-link 223.6.6.2 266 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 10.0.0.2 266 255.255.255.255 255.255.255.255 On-link 223.6.6.2 266 =========================================================================== Persistent Routes: Network Address Netmask Gateway Address Metric 0.0.0.0 0.0.0.0 10.0.0.2 Default 0.0.0.0 0.0.0.0 128.6.0.2 Default 0.0.0.0 0.0.0.0 223.6.6.2 Default 128.6.0.0 255.255.0.0 10.0.0.2 1 223.6.6.0 255.255.255.0 10.0.0.2 1 =========================================================================== IPv6 Route Table =========================================================================== Active Routes: If Metric Network Destination Gateway 1 306 ::1/128 On-link 14 1010 2002::/16 On-link 14 266 2002:8006:2::8006:2/128 On-link 1 306 ff00::/8 On-link =========================================================================== Persistent Routes: None My problem is that although I have set up both dynamic and persistent static routes in my r2 server, neither of the clients can ping even the NIC outside its own subnet. For example Client A can ping the NIC at 10.0.0.2 and vice-versa, but it gets a general transmit failure when it tries to ping the card at 223.6.6.2, let alone trying to ping the other client. I have completely disabled the firewalls on all machines and anything else I could think of, without success. What am I missing? Edit: Since posting this, I also noticed that the default gateways on my 2 NICs keep getting zeroed out. Does anyone know a fix for this?
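
    A quick sanity check of the addressing plan can be done with the stdlib ipaddress module: confirm which network each RRAS interface and each client actually falls in, because a "general transmit failure" when pinging the far interface often points at the client's own mask or gateway rather than at the router. The client addresses below are assumptions; the interface addresses come from the route table:

      # Sketch: sanity-check the subnet layout with the stdlib ipaddress module.
      # Client addresses are assumptions; interface addresses are from the question.
      import ipaddress

      interfaces = {
          "NIC A": ipaddress.ip_interface("10.0.0.2/8"),
          "NIC B": ipaddress.ip_interface("223.6.6.2/24"),
      }
      clients = {
          "client A (assumed)": ipaddress.ip_address("10.0.0.10"),
          "client B (assumed)": ipaddress.ip_address("223.6.6.10"),
      }

      for cname, caddr in clients.items():
          for iname, iface in interfaces.items():
              print(f"{cname} {caddr} in {iname} network {iface.network}: "
                    f"{caddr in iface.network}")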

    Read the article

  • duplicate cache pages: Varnish

    - by Sukhjinder Singh
    Recently we have configured Varnish on our server, it was successfully setup but we noticed that if we open any page in multiple browsers, the Varnish send request to Apache not matter page is cached or not. If we refresh twice on each browser it creates duplicate copies of the same page. What exactly should happen: If any page is cached by Varnish, the subsequent request should be served from Varnish itself when we are opening the same page in browser OR we are opening that page from different IP address. Following is my default.vcl file backend default { .host = "127.0.0.1"; .port = "80"; } sub vcl_recv { if( req.url ~ "^/search/.*$") { }else { set req.url = regsub(req.url, "\?.*", ""); } if (req.restarts == 0) { if (req.http.x-forwarded-for) { set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip; } else { set req.http.X-Forwarded-For = client.ip; } } if (!req.backend.healthy) { unset req.http.Cookie; } set req.grace = 6h; if (req.url ~ "^/status\.php$" || req.url ~ "^/update\.php$" || req.url ~ "^/admin$" || req.url ~ "^/admin/.*$" || req.url ~ "^/flag/.*$" || req.url ~ "^.*/ajax/.*$" || req.url ~ "^.*/ahah/.*$") { return (pass); } if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") { unset req.http.Cookie; } if (req.http.Cookie) { set req.http.Cookie = ";" + req.http.Cookie; set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";"); set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1="); set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", ""); set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", ""); if (req.http.Cookie == "") { unset req.http.Cookie; } else { return (pass); } } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") {return(pipe);} /* Non-RFC2616 or CONNECT which is weird. */ if (req.request != "GET" && req.request != "HEAD") { return (pass); } if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") { # No point in compressing these remove req.http.Accept-Encoding; } else if (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { set req.http.Accept-Encoding = "deflate"; } else { # unknown algorithm remove req.http.Accept-Encoding; } } return (lookup); } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Varnish-Cache = "HIT"; } else { set resp.http.X-Varnish-Cache = "MISS"; } } sub vcl_fetch { if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) { set beresp.ttl = 10m; } if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") { unset beresp.http.set-cookie; } set beresp.grace = 6h; } sub vcl_hash { hash_data(req.url); if (req.http.host) { hash_data(req.http.host); } else { hash_data(server.ip); } return (hash); } sub vcl_pipe { set req.http.connection = "close"; } sub vcl_hit { if (req.request == "PURGE") {ban_url(req.url); error 200 "Purged";} if (!obj.ttl > 0s) {return(pass);} } sub vcl_miss { if (req.request == "PURGE") {error 200 "Not in cache";} }
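
    Since vcl_deliver above already sets an X-Varnish-Cache header, the quickest test is to repeat the same request while holding the URL, the Host header and the cookies constant and watch that header flip from MISS to HIT. vcl_hash keys on URL plus Host, and any request that still carries a cookie is passed, so differing Host headers (www versus bare domain) or per-browser cookies would each look like a duplicate copy. A small sketch (host and URL are placeholders):

      # Sketch: request the same URL a few times with a fixed Host header and no
      # cookies, and print the X-Varnish-Cache header set in vcl_deliver above.
      import http.client

      HOST_HEADER = "www.example.com"   # keep identical across runs (placeholder)
      SERVER = "127.0.0.1"              # the Varnish frontend (placeholder)
      PATH = "/"

      for i in range(3):
          conn = http.client.HTTPConnection(SERVER, 80, timeout=10)
          conn.request("GET", PATH, headers={"Host": HOST_HEADER})
          resp = conn.getresponse()
          resp.read()
          print(i, resp.status, "X-Varnish-Cache:", resp.getheader("X-Varnish-Cache"))
          conn.close()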

    Read the article

  • mySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our mySQL configuration for our large Magento website. The reason I believe that mySQL needs to be configured further is because New Relic has shown that our SELECT queries are taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results... (Disclaimer: I restarted mySQL earlier after tweaking some settings, and so the results here may not be 100% accurate): >> MySQLTuner 1.3.0 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering [OK] Currently running supported MySQL version 5.5.37-35.0 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM [--] Data in MyISAM tables: 7G (Tables: 332) [--] Data in InnoDB tables: 213G (Tables: 8714) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [--] Data in MEMORY tables: 0B (Tables: 353) [!!] Total fragmented tables: 5492 -------- Security Recommendations ------------------------------------------- [!!] User '@host5.server1.autopartsnetwork.com' has no password set. [!!] User '@localhost' has no password set. [!!] User 'root@%' has no password set. -------- Performance Metrics ------------------------------------------------- [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B) [--] Reads / Writes: 95% / 5% [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads) [!!] Maximum possible memory usage: 220.0G (174% of installed RAM) [OK] Slow queries: 0% (6K/5M) [OK] Highest usage of available connections: 5% (61/1024) [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads) [OK] Query cache efficiency: 66.9% (3M cached / 5M selects) [!!] Query cache prunes per day: 3486361 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts) [!!] Joins performed without indexes: 1328 [OK] Temporary tables created on disk: 11% (126K on disk / 1M total) [OK] Thread cache hit rate: 99% (61 created / 42K connections) [!!] Table cache hit rate: 19% (9K open / 49K opened) [OK] Open file limit used: 2% (712/25K) [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks) [!!] InnoDB buffer pool / data size: 32.0G/213.4G [OK] InnoDB log waits: 0 -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce your overall MySQL memory footprint for system stability Enable the slow query log to troubleshoot bad queries Increasing the query_cache size over 128M may reduce performance Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C Variables to adjust: *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** query_cache_size (> 512M) [see warning above] join_buffer_size (> 128.0M, or always use indexes with joins) table_cache (> 12288) innodb_buffer_pool_size (>= 213G) My my.cnf configuration is as follows... 
[client] port = 3306 [mysqld_safe] nice = 0 [mysqld] tmpdir = /var/lib/mysql/tmp user = mysql port = 3306 skip-external-locking character-set-server = utf8 collation-server = utf8_general_ci event_scheduler = 0 key_buffer = 512M max_allowed_packet = 64M thread_stack = 512K thread_cache_size = 512 sort_buffer_size = 24M read_buffer_size = 8M read_rnd_buffer_size = 24M join_buffer_size = 128M # for some nightly processes client sessions set the join buffer to 8 GB auto-increment-increment = 1 auto-increment-offset = 1 myisam-recover = BACKUP max_connections = 1024 # max connect errors artificially high to support behaviors of NetScaler monitors max_connect_errors = 999999 concurrent_insert = 2 connect_timeout = 5 wait_timeout = 180 net_read_timeout = 120 net_write_timeout = 120 back_log = 128 # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384) table_open_cache = 12288 tmp_table_size = 512M max_heap_table_size = 512M bulk_insert_buffer_size = 512M open-files-limit = 8192 open-files = 1024 query_cache_type = 1 # large query limit supports SOAP and REST API integrations query_cache_limit = 4M # larger than 512 MB query cache size is problematic; this is typically ~60% full query_cache_size = 512M # set to true on read slaves read_only = false slow_query_log_file = /var/log/mysql/slow.log slow_query_log = 0 long_query_time = 0.2 expire_logs_days = 10 max_binlog_size = 1024M binlog_cache_size = 32K sync_binlog = 0 # SSD RAID10 technically has a write capacity of 10000 IOPS innodb_io_capacity = 400 innodb_file_per_table innodb_table_locks = true innodb_lock_wait_timeout = 30 # These servers have 80 CPU threads; match 1:1 innodb_thread_concurrency = 48 innodb_commit_concurrency = 2 innodb_support_xa = true innodb_buffer_pool_size = 32G innodb_file_per_table innodb_flush_log_at_trx_commit = 1 innodb_log_buffer_size = 2G skip-federated [mysqldump] quick quote-names single-transaction max_allowed_packet = 64M I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!
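
    The most alarming line in the MySQLTuner output is the 220 GB "maximum possible memory usage"; the arithmetic behind it is simply the global buffers plus the per-thread buffers multiplied by max_connections, which the short sketch below reproduces so you can see what lowering max_connections (or the 128M join_buffer_size) actually buys:

      # Reproduce MySQLTuner's worst-case memory figure from the values above:
      # global buffers + (per-thread buffers * max_connections).
      GIB = 1024  # work in MiB

      global_buffers_mib = 35.5 * GIB     # "Total buffers: 35.5G global"
      per_thread_mib = 184.5              # "+ 184.5M per thread"
      max_connections = 1024

      worst_case_mib = global_buffers_mib + per_thread_mib * max_connections
      print(f"worst case: {worst_case_mib / GIB:.1f} GiB")   # -> 220.0 GiB

      # Same calculation with a lower connection cap, e.g. 256:
      print(f"with 256 connections: {(global_buffers_mib + per_thread_mib * 256) / GIB:.1f} GiB")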

    Read the article

  • Consume WCF Service InProcess using Agatha and WCF

    - by REA_ANDREW
    I have been looking into this lately for a specific reason.  Some integration tests I want to write I want to control the types of instances which are used inside the service layer but I want that control from the test class instance.  One of the problems with just referencing the service is that a lot of the time this will by default be done inside a different process.  I am using StructureMap as my DI of choice and one of the tools which I am using inline with RhinoMocks is StructureMap.AutoMocking.  With StructureMap the main entry point is the ObjectFactory.  This will be process specific so if I decide that the I want a certain instance of a type to be used inside the ServiceLayer I cannot configure the ObjectFactory from my test class as that will only apply to the process which it belongs to. This is were I started thinking about two things: Running a WCF in process Being able to share mocked instances across processes A colleague in work pointed me to a project which is for the latter but I thought that it would be a better solution if I could run the WCF Service in process.  One of the projects which I use when I think about WCF Services is AGATHA, and the one which I have to used to try and get my head around doing this. Another asset I have is a book called Programming WCF Services by Juval Lowy and if you have not heard of it or read it I would definately recommend it.  One of the many topics that is inside this book is the type of configuration you need to communicate with a service in the same process, and it turns out to be quite simple from a config point of view. <system.serviceModel> <services> <service name="Agatha.ServiceLayer.WCF.WcfRequestProcessor"> <endpoint address ="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/> </service> </services> <client> <endpoint name="MyEndpoint" address="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/> </client> </system.serviceModel>   You can see here that I am referencing the Agatha object and contract here, but also that my binding and the address is something called Named Pipes.  THis is sort of the “Magic” which makes it happen in the same process. Next I need to open the service prior to calling the methods on a proxy which I also need.  My initial attempt at the proxy did not use any Agatha specific coding and one of the pains I found was that you obviously need to give your proxy the known types which the serializer can be aware of.  So we need to add to the known types of the proxy programmatically.  I came across the following blog post which showed me how easy it was http://bloggingabout.net/blogs/vagif/archive/2009/05/18/how-to-programmatically-define-known-types-in-wcf.aspx. First Pass So with this in mind, and inside a console app this was my first pass at consuming a service in process.  First here is the proxy which I made making use of the Agatha IWcfRequestProcessor contract. 
public class InProcProxy : ClientBase<Agatha.Common.WCF.IWcfRequestProcessor>, Agatha.Common.WCF.IWcfRequestProcessor { public InProcProxy() { } public InProcProxy(string configurationName) : base(configurationName) { } public Agatha.Common.Response[] Process(params Agatha.Common.Request[] requests) { return Channel.Process(requests); } public void ProcessOneWayRequests(params Agatha.Common.OneWayRequest[] requests) { Channel.ProcessOneWayRequests(requests); } } So with the proxy in place I could then use this after opening the service so here is the code which I use inside the console app make the request. static void Main(string[] args) { ComponentRegistration.Register(); ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor)); serviceHost.Open(); Console.WriteLine("Service is running...."); using (var proxy = new InProcProxy()) { foreach (var operation in proxy.Endpoint.Contract.Operations) { foreach (var t in KnownTypeProvider.GetKnownTypes(null)) { operation.KnownTypes.Add(t); } } var request = new GetProductsRequest(); var responses = proxy.Process(new[] { request }); var response = (GetProductsResponse)responses[0]; Console.WriteLine("{0} Products have been retrieved", response.Products.Count); } serviceHost.Close(); Console.WriteLine("Finished"); Console.ReadLine(); } So what I used here is the KnownTypeProvider of Agatha to easily get all the types I need for the service/proxy and add them to the proxy.  My Request handler for this was just a test one which always returned 2 products. public class GetProductsHandler : RequestHandler<GetProductsRequest,GetProductsResponse> { public override Agatha.Common.Response Handle(GetProductsRequest request) { return new GetProductsResponse { Products = new List<ProductDto> { new ProductDto{}, new ProductDto{} } }; } } Second Pass Now after I did this I started reading up some more on some resources including more by Davy Brion and others on Agatha.  Now it turns out that the work I did above to create a derived class of the ClientBase implementing Agatha.Common.WCF.IWcfRequestProcessor was not necessary due to a nice class which is present inside the Agatha code base, RequestProcessorProxy which takes care of this for you! :-) So disregarding that class I made for the proxy and changing my code to use it I am now left with the following: static void Main(string[] args) { ComponentRegistration.Register(); ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor)); serviceHost.Open(); Console.WriteLine("Service is running...."); using (var proxy = new RequestProcessorProxy()) { var request = new GetProductsRequest(); var responses = proxy.Process(new[] { request }); var response = (GetProductsResponse)responses[0]; Console.WriteLine("{0} Products have been retrieved", response.Products.Count); } serviceHost.Close(); Console.WriteLine("Finished"); Console.ReadLine(); }   Cheers for now, Andy References Agatha WCF InProcess Without WCF StructureMap.AutoMocking Cross Process Mocking Agatha Programming WCF Services by Juval Lowy

    Read the article

  • ActiveX component can't create Object Error? Check 64 bit Status

    - by Rick Strahl
    If you're running on IIS 7 and a 64 bit operating system you might run into the following error using ASP classic or ASP.NET with COM interop. In classic ASP applications the error will show up as: ActiveX component can't create object   (Error 429) (actually without error handling the error just shows up as 500 error page) In my case the code that's been giving me problems has been a FoxPro COM object I'd been using to serve banner ads to some of my pages. The code basically looks up banners from a database table and displays them at random. The ASP classic code that uses it looks like this: <% Set banner = Server.CreateObject("wwBanner.aspBanner") banner.BannerFile = "wwsitebanners" Response.Write(banner.GetBanner(-1)) %> Originally this code had no specific error checking as above so the ASP pages just failed with 500 error pages from the Web server. To find out what the problem is this code is more useful at least for debugging: <% ON ERROR RESUME NEXT Set banner = Server.CreateObject("wwBanner.aspBanner") Response.Write(err.Number & " - " & err.Description) banner.BannerFile = "wwsitebanners" Response.Write(banner.GetBanner(-1)) %> which results in: 429 - ActiveX component can't create object which at least gives you a slight clue. In ASP.NET invoking the same COM object with code like this: <% dynamic banner = wwUtils.CreateComInstance("wwBanner.aspBanner") as dynamic; banner.cBANNERFILE = "wwsitebanners"; Response.Write(banner.getBanner(-1)); %> results in: Retrieving the COM class factory for component with CLSID {B5DCBB81-D5F5-11D2-B85E-00600889F23B} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)). The class is in fact registered though and the COM server loads fine from a command prompt or other COM client. This error can be caused by a COM server that doesn't load. It looks like a COM registration error. There are a number of traditional reasons why this error can crop up of course. The server isn't registered (run regserver32 to register a DLL server or /regserver on an EXE server) Access permissions aren't set on the COM server (Web account has to be able to read the DLL ie. Network service) The COM server fails to load during initialization ie. failing during startup One thing I always do to check for COM errors fire up the server in a COM client outside of IIS and ensure that it works there first - it's almost always easier to debug a server outside of the Web environment. In my case I tried the server in Visual FoxPro on the server with: loBanners = CREATEOBJECT("wwBanner.aspBanner") loBanners.cBannerFile = "wwsitebanners" ? loBanners.GetBanner(-1) and it worked just fine. If you don't have a full dev environment on the server you can also use VBScript do the same thing and run the .vbs file from the command prompt: Set banner = Server.CreateObject("wwBanner.aspBanner") banner.BannerFile = "wwsitebanners" MsgBox(banner.getBanner(-1)) Since this both works it tells me the server is registered and working properly. This leaves startup failures or permissions as the problem. I double checked permissions for the Application Pool and the permissions of the folder where the DLL lives and both are properly set to allow access by the Application Pool impersonated user. Just to be sure I assigned an Admin user to the Application Pool but still no go. So now what? 64 bit Servers Ahoy A couple of weeks back I had set up a few of my Application pools to 64 bit mode. 
My server is Server 2008 64 bit and by default Application Pools run 64 bit. Originally when I installed the server I set up most of my Application Pools to 32 bit, mainly for backwards compatibility. But as more of my code migrates to 64 bit OS's I figured it'd be a good idea to see how well my code runs under 64 bit. The transition has been mostly painless - until today, when I noticed the problem with the code above while scrolling through my IIS logs and noticing a lot of 500 errors on many of my ASP classic pages. The code in question in most of these pages deals with this single simple COM object. It took a while to figure out that the problem is caused by the Application Pool running in 64 bit mode. The issue is that 32 bit COM objects (ie. my old Visual FoxPro COM component) cannot be loaded in a 64 bit Application Pool. The ASP pages using this COM component broke on the day I switched my main Application Pool into 64 bit mode, but I didn't find the problem until I searched my logs for errors by pure chance. The fix is easy enough once you know what the problem is: switch the Application Pool to Enable 32-bit Applications. Once this is done the COM objects started working correctly again.

64 bit ASP and ASP.NET with DCOM Servers

This is kind of off topic, but incidentally it's possible to load 32 bit DCOM (out of process) servers from ASP.NET and ASP classic even if those applications run in 64 bit application pools. In fact, in West Wind Web Connection I use this capability to run a 64 bit ASP.NET handler that talks to a 32 bit FoxPro COM server, which allows West Wind Web Connection to run in native 64 bit mode without custom configuration (which is actually quite useful). It's probably not a common usage scenario but it's good to know that you can actually access 32 bit COM objects this way from ASP.NET. For West Wind Web Connection this works out well as the DCOM interface only makes one non-chatty call to the backend server that handles all the rest of the request processing.

Application Pool Isolation is your Friend

For me the recent incident of failure in the classic ASP pages has just been another reminder to be very careful with moving applications to 64 bit operation. There are many little traps when switching to 64 bit that are very difficult to track and test for. I described one issue I had a couple of months ago where one of the default ASP.NET filters was loading the wrong version (32 bit instead of 64 bit), which was extremely difficult to track down and was caused by a very sneaky configuration switch error (basically 3 different entries for the same ISAPI filter, all with different bitness settings). It took me almost a full day to track this down. Recently I've taken to isolating individual applications into separate Application Pools rather than my past practice of combining many apps into shared AppPools. This is a good practice assuming you have enough memory to make this work. Application Pool isolation provides more modularity and allows me to selectively move applications to 64 bit. The error above came about precisely because I moved one of my most populous app pools to 64 bit and forgot about the minimal COM object use in some of my old pages. It's easy to forget.

To 64bit or Not

Is it worth it to move to 64 bit? Currently I'd say - not really. In my - admittedly limited - testing I don't see any significant performance increases.
In fact 64 bit apps just seem to consume considerably more memory (30-50% more in my pools on average) and performance is minimally improved (less than 5% at the very best) in the load testing I've performed on a couple of sites in both modes. The only real incentive for 64 bit would be applications that require huge data spaces that exceed the 32 bit 4 gigabyte memory limit. However I have a hard time imagining an application that needs 4 gigs of memory in a single Application Pool :-). I'm curious to hear other opinions on the benefits of 64 bit operation. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in COM, ASP.NET, FoxPro
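As a quick diagnostic aid (my own addition, not from the original post), a small C# sketch like the following can be dropped into a site to confirm which bitness the Application Pool is actually running under before chasing COM registration errors; the handler name is made up:

using System;
using System.Web;

// Hypothetical diagnostic handler: reports whether the worker process is 32 or 64 bit.
public class BitnessCheckHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // IntPtr.Size is 8 bytes in a 64 bit process and 4 bytes in a 32 bit process.
        // On .NET 4 you could use Environment.Is64BitProcess instead.
        bool is64Bit = IntPtr.Size == 8;

        context.Response.ContentType = "text/plain";
        context.Response.Write("Worker process is " + (is64Bit ? "64 bit" : "32 bit") +
            " - a 32 bit COM server will only load if this reports 32 bit.");
    }

    public bool IsReusable { get { return true; } }
}

If this reports 64 bit while the COM server is 32 bit, the Enable 32-bit Applications switch described above is the fix.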

    Read the article

  • I have finally traded my Blackberry in for a Droid!

    - by Bob Porter
    Over the years I have used a number of different types of phones: Windows Mobile, Blackberry, Nokia, and now Android. Until the Blackberry, which was my last phone (and I still have one issued from my office), I had never found a phone that “just worked”, especially with email and messaging. The Blackberry did, and does, excel at those functions. My last personal phone was a Storm 1, which was Blackberry’s first touch screen phone. The Storm 2 was an improved version that fixed some screen press detection issues from the first model and added Wifi. Over the last few years I have watched others acquire and fall in love with their ‘Droids, including a number of iPhone users, which surprised me. Our office has until recently only supported Blackberry phones, adding iPhones within the last year or so. When I spoke with our internal telecom folks they confirmed they were evaluating Android phones, but felt they still were not secure enough out of the box for corporate use and SOX compliance. That being said, as a personal phone, the Droid Rocks! I am impressed with its speed, the number of apps available, and the overall design. It is not as “flashy” as an iPhone but it does everything that I care about and more. The model I bought is the Motorola Droid 2 Global from Verizon. It is currently running Android 2.2 for its OS; 2.3 is just around the corner. It has 8 gigs of internal flash memory and can handle up to a 32 gig SDCard. (I currently have two 8 gig cards, one for backups, and have ordered a 16 gig card!) Being a geek at heart, I “rooted” the phone, which means I gained superuser access to the OS on the phone, and it opens a number of doors for further modifications down the road. Also being a geek meant I have already set up a development environment and built and deployed the obligatory “Hello Droid” application. I will be writing about my development experiences with this new platform here often; to start off I thought I would share my current application list to give you an idea what I am using.
Zedge: http://market.android.com/details?id=net.zedge.android XDA: http://market.android.com/details?id=com.quoord.tapatalkxda.activity WRAL.com: http://market.android.com/details?id=com.mylocaltv.wral Wireless Tether: http://market.android.com/details?id=android.tether Winamp: http://market.android.com/details?id=com.nullsoft.winamp Win7 Clock: http://market.android.com/details?id=com.androidapps.widget.toggles.win7 Wifi Analyzer: http://market.android.com/details?id=com.farproc.wifi.analyzer WeatherBug: http://market.android.com/details?id=com.aws.android Weather Widget Forecast Addon: http://market.android.com/details?id=com.androidapps.weather.forecastaddon Weather & Toggle Widgets: http://market.android.com/details?id=com.androidapps.widget.weather2 Vlingo: http://market.android.com/details?id=com.vlingo.client VirtualTENHO-G: http://market.android.com/details?id=jp.bustercurry.virtualtenho_g Twitter: http://market.android.com/details?id=com.twitter.android TweetDeck: http://market.android.com/details?id=com.thedeck.android.app Tricorder: http://market.android.com/details?id=org.hermit.tricorder Titanium Backup PRO: http://market.android.com/details?id=com.keramidas.TitaniumBackupPro Titanium Backup: http://market.android.com/details?id=com.keramidas.TitaniumBackup Terminal Emulator: http://market.android.com/details?id=jackpal.androidterm Talking Tom Free: http://market.android.com/details?id=com.outfit7.talkingtom Stock Blue: http://market.android.com/details?id=org.adw.theme.stockblue ST: Red Alert Free: http://market.android.com/details?id=com.oldplanets.redalertwallpaper ST: Red Alert: http://market.android.com/details?id=com.oldplanets.redalertwallpaperplus Solitaire: http://market.android.com/details?id=com.kmagic.solitaire Skype: http://market.android.com/details?id=com.skype.raider Silent Time Lite: http://market.android.com/details?id=com.QuiteHypnotic.SilentTime ShopSavvy: http://market.android.com/details?id=com.biggu.shopsavvy Shopper: http://market.android.com/details?id=com.google.android.apps.shopper Shiny clock: http://market.android.com/details?id=com.androidapps.clock.shiny ShareMyApps: http://market.android.com/details?id=com.mattlary.shareMyApps Sense Glass ADW Theme: http://market.android.com/details?id=com.dtanquary.senseglassadwtheme ROM Manager: http://market.android.com/details?id=com.koushikdutta.rommanager Roboform Bookmarklet Installer: http://market.android.com/details?id=roboformBookmarkletInstaller.android.com RealCalc: http://market.android.com/details?id=uk.co.nickfines.RealCalc Package Buddy: http://market.android.com/details?id=com.psyrus.packagebuddy Overstock: http://market.android.com/details?id=com.overstock OMGPOP Toggle: http://market.android.com/details?id=com.androidapps.widget.toggle.omgpop OI File Manager: http://market.android.com/details?id=org.openintents.filemanager nook: http://market.android.com/details?id=bn.ereader MyAtlas-Google Maps Navigation ext: http://market.android.com/details?id=com.adaptdroid.navbookfree3 MSN Droid: http://market.android.com/details?id=msn.droid.im Matrix Live Wallpaper: http://market.android.com/details?id=com.jarodyv.livewallpaper.matrix LogMeIn: http://market.android.com/details?id=com.logmein.ignitionpro.android Liveshare: http://market.android.com/details?id=com.cooliris.app.liveshare Kobo: http://market.android.com/details?id=com.kobobooks.android Instant Heart Rate: http://market.android.com/details?id=si.modula.android.instantheartrate IMDb: http://market.android.com/details?id=com.imdb.mobile Home 
Plus Weather: http://market.android.com/details?id=com.androidapps.widget.skin.weather.homeplus Handcent SMS: http://market.android.com/details?id=com.handcent.nextsms H7C Clock: http://market.android.com/details?id=com.androidapps.widget.clock.skin.h7c GTasks: http://market.android.com/details?id=org.dayup.gtask GPS Status: http://market.android.com/details?id=com.eclipsim.gpsstatus2 Google Voice: http://market.android.com/details?id=com.google.android.apps.googlevoice Google Sky Map: http://market.android.com/details?id=com.google.android.stardroid Google Reader: http://market.android.com/details?id=com.google.android.apps.reader GoMarks: http://market.android.com/details?id=com.androappsdev.gomarks Goggles: http://market.android.com/details?id=com.google.android.apps.unveil Glossy Black Weather: http://market.android.com/details?id=com.androidapps.widget.weather.skin.glossyblack Fox News: http://market.android.com/details?id=com.foxnews.android Foursquare: http://market.android.com/details?id=com.joelapenna.foursquared FBReader: http://market.android.com/details?id=org.geometerplus.zlibrary.ui.android Fandango: http://market.android.com/details?id=com.fandango Facebook: http://market.android.com/details?id=com.facebook.katana Extensive Notes Pro: http://market.android.com/details?id=com.flufflydelusions.app.extensive_notes_donate Expense Manager: http://market.android.com/details?id=com.expensemanager Espresso UI (LightShow w/ Slide): http://market.android.com/details?id=com.jaguirre.slide.lightshow Engadget: http://market.android.com/details?id=com.aol.mobile.engadget Earth: http://market.android.com/details?id=com.google.earth Drudge: http://market.android.com/details?id=com.iavian.dreport Dropbox: http://market.android.com/details?id=com.dropbox.android DroidForums: http://market.android.com/details?id=com.quoord.tapatalkdrodiforums.activity DroidArmor ADW: http://market.android.com/details?id=mobi.addesigns.droidarmorADW Droid Weather Icons: http://market.android.com/details?id=com.androidapps.widget.weather.skins.white Droid 2 Bootstrapper: http://market.android.com/details?id=com.koushikdutta.droid2.bootstrap doubleTwist: http://market.android.com/details?id=com.doubleTwist.androidPlayer Documents To Go: http://market.android.com/details?id=com.dataviz.docstogo Digital Clock Widget: http://market.android.com/details?id=com.maize.digitalClock Desk Home: http://market.android.com/details?id=com.cowbellsoftware.deskdock Default Clock: http://market.android.com/details?id=com.androidapps.widget.clock.skins.defaultclock Daily Expense Manager: http://market.android.com/details?id=com.techahead.ExpenseManager ConnectBot: http://market.android.com/details?id=org.connectbot Colorized Weather Icons: http://market.android.com/details?id=com.androidapps.widget.weather.colorized Chrome to Phone: http://market.android.com/details?id=com.google.android.apps.chrometophone CardStar: http://market.android.com/details?id=com.cardstar.android Books: http://market.android.com/details?id=com.google.android.apps.books Black Ipad Toggle: http://market.android.com/details?id=com.androidapps.toggle.widget.skin.blackipad Black Glass ADW Theme: http://market.android.com/details?id=com.dtanquary.blackglassadwtheme Bing: http://market.android.com/details?id=com.microsoft.mobileexperiences.bing BeyondPod Unlock Key: http://market.android.com/details?id=mobi.beyondpod.unlockkey BeyondPod: http://market.android.com/details?id=mobi.beyondpod BeejiveIM: http://market.android.com/details?id=com.beejive.im Beautiful 
Widgets Animations Addon: http://market.android.com/details?id=com.levelup.bw.forecast Beautiful Widgets: http://market.android.com/details?id=com.levelup.beautifulwidgets Beautiful Live Weather: http://market.android.com/details?id=com.levelup.beautifullive BBC News: http://market.android.com/details?id=net.jimblackler.newswidget Barnacle Wifi Tether: http://market.android.com/details?id=net.szym.barnacle Barcode Scanner: http://market.android.com/details?id=com.google.zxing.client.android ASTRO SMB Module: http://market.android.com/details?id=com.metago.astro.smb ASTRO Pro: http://market.android.com/details?id=com.metago.astro.pro ASTRO Bluetooth Module: http://market.android.com/details?id=com.metago.astro.network.bluetooth ASTRO: http://market.android.com/details?id=com.metago.astro AppBrain App Market: http://market.android.com/details?id=com.appspot.swisscodemonkeys.apps App Drawer Icon Pack: http://market.android.com/details?id=com.adwtheme.appdrawericonpack androidVNC: http://market.android.com/details?id=android.androidVNC AndroidGuys: http://market.android.com/details?id=com.handmark.mpp.AndroidGuys Android System Info: http://market.android.com/details?id=com.electricsheep.asi AndFTP: http://market.android.com/details?id=lysesoft.andftp ADWTheme Red: http://market.android.com/details?id=adw.theme.red ADWLauncher EX: http://market.android.com/details?id=org.adwfreak.launcher ADW.Theme.One: http://market.android.com/details?id=org.adw.theme.one ADW.Faded theme: http://market.android.com/details?id=com.xrcore.adwtheme.faded ADW Gingerbread: http://market.android.com/details?id=me.robertburns.android.adwtheme.gingerbread Advanced Task Killer Free: http://market.android.com/details?id=com.rechild.advancedtaskkiller Adobe Reader: http://market.android.com/details?id=com.adobe.reader Adobe Flash Player 10.1: http://market.android.com/details?id=com.adobe.flashplayer Adobe AIR: http://market.android.com/details?id=com.adobe.air 3G Auto OnOff: http://market.android.com/details?id=com.yuantuo --- Generated by ShareMyApps http://market.android.com/details?id=com.mattlary.shareMyApps Sent from my Droid

    Read the article

  • Ajax Control Toolkit Now Supports jQuery

    - by Stephen.Walther
    I’m excited to announce the September 2013 release of the Ajax Control Toolkit, which now supports building new Ajax Control Toolkit controls with jQuery. You can download the latest release of the Ajax Control Toolkit from http://AjaxControlToolkit.CodePlex.com or you can install the Ajax Control Toolkit directly within Visual Studio by executing the following NuGet command: The New jQuery Extender Base Class This release of the Ajax Control Toolkit introduces a new jQueryExtender base class. This new base class enables you to create Ajax Control Toolkit controls with jQuery instead of the Microsoft Ajax Library. Currently, only one control in the Ajax Control Toolkit has been rewritten to use the new jQueryExtender base class (only one control has been jQueryized). The ToggleButton control is the first of the Ajax Control Toolkit controls to undergo this dramatic transformation. All of the other controls in the Ajax Control Toolkit are written using the Microsoft Ajax Library. We hope to gradually rewrite these controls as jQuery controls over time. You can view the new jQuery ToggleButton live at the Ajax Control Toolkit sample site: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToggleButton/ToggleButton.aspx Why are we rewriting Ajax Control Toolkits with jQuery? There are very few developers actively working with the Microsoft Ajax Library while there are thousands of developers actively working with jQuery. Because we want talented developers in the community to continue to contribute to the Ajax Control Toolkit, and because almost all JavaScript developers are familiar with jQuery, it makes sense to support jQuery with the Ajax Control Toolkit. Also, we believe that the Ajax Control Toolkit is a great framework for Web Forms developers who want to build new ASP.NET controls that use JavaScript. The Ajax Control Toolkit has great features such as automatic bundling, minification, caching, and compression. We want to make it easy for ASP.NET developers to build new controls that take advantage of these features. Instantiating Controls with data-* Attributes We took advantage of the new JQueryExtender base class to change the way that Ajax Control Toolkit controls are instantiated. In the past, adding an Ajax Control Toolkit to a page resulted in inline JavaScript being injected into the page. For example, adding the ToggleButton control to a page injected the following HTML and script: <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked" /> <script type="text/javascript"> //<![CDATA[ Sys.Application.add_init(function() { $create(Sys.Extended.UI.ToggleButtonBehavior, {"CheckedImageAlternateText":"Check", "CheckedImageUrl":"ToggleButton_Checked.gif", "ImageHeight":19, "ImageWidth":19, "UncheckedImageAlternateText":"UnCheck", "UncheckedImageUrl":"ToggleButton_Unchecked.gif", "id":"ctl00_SampleContent_ToggleButtonExtender1"}, null, null, $get("ctl00_SampleContent_CheckBox1")); }); //]]> </script> Notice the call to the JavaScript $create() method at the bottom of the page. When using the Microsoft Ajax Library, this call to the $create() method is necessary to create the Ajax Control Toolkit control. This inline script looks pretty ugly to a modern JavaScript developer. Inline script! Horrible! 
The jQuery version of the ToggleButton injects the following HTML and script into the page: <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked" data-act-togglebuttonextender="imageWidth:19, imageHeight:19, uncheckedImageUrl:'ToggleButton_Unchecked.gif', checkedImageUrl:'ToggleButton_Checked.gif', uncheckedImageAlternateText:'I don&#39;t understand why you don&#39;t like ASP.NET', checkedImageAlternateText:'It&#39;s really nice to hear from you that you like ASP.NET'" /> Notice that there is no script! There is no call to the $create() method. In fact, there is no inline JavaScript at all. The jQuery version of the ToggleButton uses an HTML5 data-* attribute instead of an inline script. The ToggleButton control is instantiated with a data-act-togglebuttonextender attribute. Using data-* attributes results in much cleaner markup (You don’t need to feel embarrassed when selecting View Source in your browser). Ajax Control Toolkit versus jQuery So in a jQuery world why is the Ajax Control Toolkit needed at all? Why not just use jQuery plugins instead of the Ajax Control Toolkit? For example, there are lots of jQuery ToggleButton plugins floating around the Internet. Why not just use one of these jQuery plugins instead of using the Ajax Control Toolkit ToggleButton control? There are three main reasons why the Ajax Control Toolkit continues to be valuable in a jQuery world: Ajax Control Toolkit controls run on both the server and client jQuery plugins are client only. A jQuery plugin does not include any server-side code. If you need to perform any work on the server – think of the AjaxFileUpload control – then you can’t use a pure jQuery solution. Ajax Control Toolkit controls provide a better Visual Studio experience You don’t get any design time experience when you use jQuery plugins within Visual Studio. Ajax Control Toolkit controls, on the other hand, are designed to work with Visual Studio. For example, you can use the Visual Studio Properties window to set Ajax Control Toolkit control properties. Ajax Control Toolkit controls shield you from working with JavaScript I like writing code in JavaScript. However, not all developers like JavaScript and some developers want to completely avoid writing any JavaScript code at all. The Ajax Control Toolkit enables you to take advantage of JavaScript (and the latest features of HTML5) in your ASP.NET Web Forms websites without writing a single line of JavaScript. Better ToolkitScriptManager Documentation With this release, we have added more detailed documentation for using the ToolkitScriptManager. In particular, we added documentation that describes how to take advantage of the new bundling, minification, compression, and caching features of the Ajax Control Toolkit. The ToolkitScriptManager documentation is part of the Ajax Control Toolkit sample site and it can be read here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToolkitScriptManager/ToolkitScriptManager.aspx Other Fixes This release of the Ajax Control Toolkit includes several important bug fixes. For example, the Ajax Control Toolkit Twitter control was completely rewritten with this release. Twitter is in the process of retiring the first version of their API. You can read about their plans here: https://dev.twitter.com/blog/planning-for-api-v1-retirement We completely rewrote the Ajax Control Toolkit Twitter control to use the new Twitter API. 
To take advantage of the new Twitter API, you must get a key and access token from Twitter and add the key and token to your web.config file. Detailed instructions for using the new version of the Ajax Control Toolkit Twitter control can be found here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/Twitter/Twitter.aspx   Summary We’ve made some really great changes to the Ajax Control Toolkit over the last two releases to modernize the toolkit. In the previous release, we updated the Ajax Control Toolkit to use a better bundling, minification, compression, and caching system. With this release, we updated the Ajax Control Toolkit to support jQuery. We also continue to update the Ajax Control Toolkit with important bug fixes. I hope you like these changes and I look forward to hearing your feedback.

    Read the article

  • WinInet Apps failing when Internet Explorer is set to Offline Mode

    - by Rick Strahl
    Ran into a nasty issue last week when all of a sudden many of my old applications that are using WinInet for HTTP access started failing. Specifically, the WinInet HttpSendRequest() call started failing with an error of 2, which when retrieving the error boils down to: WinInet Error 2: The system cannot find the file specified Now this error can pop up in many legitimate scenarios with WinInet such as when no Internet connection is available or the HTTP configuration (usually configured in Internet Explorer’s options) is misconfigured. The error typically means that the server in question cannot be found or more specifically an Internet connection can’t be established. In this case the problem started suddenly and was causing some of my own applications (old Visual FoxPro apps using my own wwHttp library) and all Adobe Air applications (which apparently uses WinInet for its basic HTTP stack) along with a few more oddball applications to fail instantly when trying to connect via HTTP. Most other applications – all of my installed browsers, email clients, various social network updaters all worked just fine. It seems it was only WinInet apps that were failing. Yet oddly Internet Explorer appeared to be working. So the problem seemed to be isolated to those ‘classic’ applications using WinInet. WinInet’s base configuration uses the Internet Explorer options dialog. To check this out I typically go to the Internet Explorer options and find the Connection tab, and check out the LAN Setup. Make sure there are no rogue proxy settings or configuration scripts that are invalid. Trying with Auto-configuration on and off also can often fix ‘real’ configuration errors. This time however this wasn’t a problem – nothing in the LAN configuration was set (all default). I also played with the Automatic detection of settings which also had no effect. I also tried to use Fiddler to see if that would tell me something. Fiddler has a few additional WinInet configuration options in its configuration. Running Fiddler and hitting an HTTP request using WinInet would never actually hit Fiddler – the failure would occur before WinInet ever fired up the HTTP connection to go through the Fiddler HTTP proxy. And the Culprit is: Internet Explorer’s Work Offline Option The culprit in this situation was Internet Explorer which at some point, unknown to me switched into Offline Mode and was then shut down: When this Offline mode is checked when IE is running *or* if IE gets shut down with this flag set, all applications using WinInet by default assume that it’s running in offline mode. Depending on your caching HTTP headers and whether the page was cached previously you may or may not get a response or an error. For an independent non-browser application this will be highly unpredictable and likely result in failures getting online – especially if the application forces requests to always reload by disabling HTTP caching (as I do on most of my dynamic HTTP clients). What makes this especially tricky is that even when IE is in offline mode in the browser, you can still browse around the Web *if* you have a connection. IE will try to load anything it has cached from the local cache, but as soon as you hit a URL that isn’t cached it will automatically try to access that URL and uncheck the Work Offline option. Conversely if you get knocked off the Internet and browse in IE 9, IE will automatically go into offline mode. 
I never explicitly set offline mode – it just automatically sets itself on and off depending on the connection. Problem is if you’re not using IE all the time (as I do – rarely and just for testing so usually a few commonly used URLs) and you left it in offline mode when you exit, offline mode stays set which results in the above head scratcher. Ack. This isn’t new behavior in IE 9 BTW – this behavior has always been there, but I think what’s different is that IE now automatically switches between online and offline modes without notifying you at all, so it’s hard to tell when you are offline. Fixing the Issue in your Code If you have an application that is using WinInet, there’s a WinInet option called INTERNET_OPTION_IGNORE_OFFLINE. I just checked this out in my own applications and Internet Explorer 9 and it works, but apparently it’s been broken for some older releases (I can’t confirm how far back though) – lots of posts seem to suggest the flag doesn’t work. However, in IE 9 at least it does seem to work if you call InternetSetOption before you call HttpOpenRequest with the Http Session handle. In FoxPro code I use: DECLARE INTEGER InternetSetOption ;    IN WININET.DLL ;    INTEGER HINTERNET,;    INTEGER dwFlags,;    INTEGER @dwValue,;    INTEGER cbSize lnOptionValue = 1   && BOOL TRUE pass by reference   *** Set needed SSL flags lnResult=InternetSetOption(this.hHttpSession,;    INTERNET_OPTION_IGNORE_OFFLINE ,;  && 77    @lnOptionValue ,4)   DECLARE INTEGER HttpOpenRequest ;    IN WININET.DLL ;    INTEGER hHTTPHandle,;    STRING lpzReqMethod,;    STRING lpzPage,;    STRING lpzVersion,;    STRING lpzReferer,;    STRING lpzAcceptTypes,;    INTEGER dwFlags,;    INTEGER dwContextw     hHTTPResult=HttpOpenRequest(THIS.hHttpsession,;    lcVerb,;    tcPage,;    NULL,NULL,NULL,;    INTERNET_FLAG_RELOAD + ;    IIF(THIS.lsecurelink,INTERNET_FLAG_SECURE,0) + ;    this.nHTTPServiceFlags,0) …  And this fixes the issue at least for IE 9… In my FoxPro wwHttp class I now call this by default to never get bitten by this again… This solves the problem permanently for my HTTP client. I never want to see offline operation in an HTTP client API – it’s just too unpredictable in handling errors and the last thing you want is getting unpredictably stale data. Problem solved but this behavior is – well ugly. But then that’s to be expected from an API that’s based on Internet Explorer, eh?© Rick Strahl, West Wind Technologies, 2005-2011Posted in HTTP  Windows  
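For anyone hitting the same issue from .NET rather than FoxPro, here is a rough C# sketch of the equivalent WinInet call; the option constant is the one used above, the session handle is assumed to come from an earlier InternetOpen() call, and this is my own translation rather than code from the post:

using System;
using System.Runtime.InteropServices;

static class WinInetOfflineFix
{
    // Value used in the article for INTERNET_OPTION_IGNORE_OFFLINE.
    const int INTERNET_OPTION_IGNORE_OFFLINE = 77;

    [DllImport("wininet.dll", SetLastError = true)]
    static extern bool InternetSetOption(IntPtr hInternet, int dwOption,
                                         ref int lpBuffer, int dwBufferLength);

    // Call with the session handle returned by InternetOpen(), before HttpOpenRequest(),
    // so the session ignores Internet Explorer's Work Offline state.
    public static bool IgnoreOfflineMode(IntPtr hInternetSession)
    {
        int optionValue = 1; // BOOL TRUE, passed by reference as a 4 byte buffer
        return InternetSetOption(hInternetSession, INTERNET_OPTION_IGNORE_OFFLINE,
                                 ref optionValue, sizeof(int));
    }
}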

    Read the article

  • TFS 2010 Basic Concepts

    - by jehan
    Here, I’m going to discuss some key architectural changes and concepts that have taken place in TFS 2010 when compared to TFS 2008. In TFS 2010, you first need to do the installation and then you have to configure the installation features from the available features. This is a bit similar to a SharePoint installation, where you will first do the installation and then configure the SharePoint farms. 1) Installation Features available in TFS 2010: a) Basic: It is the most compact TFS installation possible. It will install and configure Source Control, Work Item tracking and Build Services only. (SharePoint and Reporting Integration will not be possible). b) Standard Single Server: This is suitable for a single server deployment of TFS. It will install and configure Windows SharePoint Services for you and will use the default instance of SQL Server. c) Advanced: It is suitable if you want to use remote servers for the SQL Server databases, SharePoint Products and Technologies, and SQL Server Reporting Services. d) Application Tier Only: Use this if you want to configure high availability for Team Foundation Server in a load balanced environment (NLB), if you want to move Team Foundation Server from one server to another, or if you want to restore TFS. e) Upgrade: Use this if you want to upgrade from a prior version of TFS. Note: One more important thing to know here about TFS 2010 Basic is that it can be installed on client operating systems (Windows 7 and Windows Vista SP3), whereas you could not install previous versions of TFS (2008 and 2005) on a client OS. 2) Team Project Collections: Connect to TFS dialog box in TFS 2008: In TFS 2008, the TFS server contains a set of Team Projects. Each project may or may not be independent of other projects, every checkin gets an ever-increasing changeset ID irrespective of the team project in which it is checked in, and the same applies to work items, which also get unique Work Item IDs. The main problem with this approach was that there are certain things which were impossible to do, even though they were required as per the application development process: a) If something has gone wrong in one team project and you want to restore it back to an earlier state where it was working properly, you have to restore the Team Foundation Server database from the backup you have taken as per your maintenance plans, and because of this the other team projects may lose the work which is not backed up. b) Your company had a merge with some other company and now you have two TFS servers.
One TFS server is the one you were working on and the other is the one the other company was working on, and now after the merge you want to integrate the team projects from the two TFS servers into one, which is almost impossible to achieve in TFS 2008. Though you can manually create the team projects you want to integrate from the other TFS server in one server (in Source Control), you will lose out on the history of changesets, work items and other things which are very important. There were a few more issues of this sort which were difficult to resolve in TFS 2008. To resolve issues related to the above kinds of scenarios, which were mainly related to TFS maintenance, integration, migration and security, Microsoft has come up with the Team Project Collections concept in TFS 2010. This concept is similar to SharePoint Site Collections, and if you are familiar with the SharePoint architecture, it will help you to understand the TFS 2010 architecture easily. Connect to TFS dialog box in TFS 2010: In the above dialog box, as you can see, there are two Team Project Collections. Each team project collection can contain any number of team projects; on the right side it shows the two team projects in the Team Project Collection (Default Collection) which I have chosen. Note: You can connect to only one Team Project Collection at a time using an instance of TFS Team Explorer. How does it work? To introduce Team Project Collections, the TFS databases have been reorganized. TFS 2008 was composed of 5-7 databases partitioned by subsystem (one each for Version Control, Work Item Tracking, Build, Integration, Project Management...). The new TFS 2010 database architecture is: TFS_Config: It’s the root database and it contains centralized TFS configuration data, including the list of all team projects that exist on the TFS server. TFS_Warehouse: The data warehouse contains all the reporting data served by this server (farm). TFS_*: These contain the individual team project collection data. Each such database contains all the operational data of a team project collection regardless of subsystem. In addition to this, you will have databases for SharePoint and Report Server. 3) TFS Farms: As TFS 2010 is more flexible and can be configured with multiple application tiers and multiple database tiers, it is more appropriate to call it a TFS Farm if you are going for a multi server installation of TFS. NLB support for TFS application tiers – with TFS 2010 you can configure multiple TFS application tier machines to serve the same set of Team Project Collections. The primary purpose of NLB support is to enable cleaner and more complete high availability than in TFS 2008. Even if an application tier in the farm fails, the farm will automatically continue to work with hardly any indication to end users of a problem. SQL data tiers: With 2010 you can configure many SQL Servers. Each database can be configured to be on any SQL Server, because each Team Project Collection is an independent database. This feature can also be used to load balance databases across SQL Servers. These new capabilities will significantly change the way enterprises manage their TFS installations in the future. With Team Project Collections and TFS farms, you can create a single, arbitrarily large TFS installation. You can grow it incrementally by adding ATs and SQL Servers as needed.
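To make the Team Project Collection idea concrete, here is a hedged C# sketch using the TFS 2010 client object model (Microsoft.TeamFoundation.Client); the server URL and collection name are placeholders, not values from the article:

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class TfsCollectionDemo
{
    static void Main()
    {
        // Each Team Project Collection has its own URL segment under the application tier.
        var collectionUri = new Uri("http://tfsserver:8080/tfs/DefaultCollection");

        using (var collection = new TfsTeamProjectCollection(collectionUri))
        {
            collection.EnsureAuthenticated();

            // Services such as version control are scoped to the collection you connected to.
            var versionControl = collection.GetService<VersionControlServer>();
            foreach (TeamProject project in versionControl.GetAllTeamProjects(false))
            {
                Console.WriteLine(project.Name);
            }
        }
    }
}

Connecting to a second collection means creating a second TfsTeamProjectCollection instance against its own URL, which mirrors the “one collection at a time per Team Explorer instance” note above.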

    Read the article

  • Parallelism in .NET – Part 14, The Different Forms of Task

    - by Reed
    Before discussing Task creation and actual usage in concurrent environments, I will briefly expand upon my introduction of the Task class and provide a short explanation of the distinct forms of Task.  The Task Parallel Library includes four distinct, though related, variations on the Task class. In my introduction to the Task class, I focused on the most basic version of Task.  This version of Task, the standard Task class, is most often used with an Action delegate.  This allows you to implement each task within the task decomposition as a single delegate. Typically, when using the new threading constructs in .NET 4 and the Task Parallel Library, we use lambda expressions to define anonymous methods.  The advantage of using a lambda expression is that it allows the Action delegate to directly use variables in the calling scope.  This eliminates the need to make separate Task classes for Action<T>, Action<T1,T2>, and all of the other Action<…> delegate types.  As an example, suppose we wanted to make a Task to handle the ”Show Splash” task from our earlier decomposition.  Even if this task required parameters, such as a message to display, we could still use an Action delegate specified via a lambda: // Store this as a local variable string messageForSplashScreen = GetSplashScreenMessage(); // Create our task Task showSplashTask = new Task( () => { // We can use variables in our outer scope, // as well as methods scoped to our class! this.DisplaySplashScreen(messageForSplashScreen); }); This provides a huge amount of flexibility.  We can use this single form of task for any task which performs an operation, provided the only information we need to track is whether the task has completed successfully or not.  This leads to my first observation: Use a Task with a System.Action delegate for any task for which no result is generated. This observation leads to an obvious corollary: we also need a way to define a task which generates a result.  The Task Parallel Library provides this via the Task<TResult> class. Task<TResult> subclasses the standard Task class, providing one additional feature – the ability to return a value back to the user of the task.  This is done by switching from providing an Action delegate to providing a Func<TResult> delegate.  If we decompose our problem, and we realize we have one task where its result is required by a future operation, this can be handled via Task<TResult>.  For example, suppose we want to make a task for our “Check for Update” task, we could do: Task<bool> checkForUpdateTask = new Task<bool>( () => { return this.CheckWebsiteForUpdate(); }); Later, we would start this task, and perform some other work.
At any point in the future, we could get the value from the Task<TResult>.Result property, which will cause our thread to block until the task has finished processing: // This uses Task<bool> checkForUpdateTask generated above... // Start the task, typically on a background thread checkForUpdateTask.Start(); // Do some other work on our current thread this.DoSomeWork(); // Discover, from our background task, whether an update is available // This will block until our task completes bool updateAvailable = checkForUpdateTask.Result; This leads me to my second observation: Use a Task<TResult> with a System.Func<TResult> delegate for any task which generates a result. Task and Task<TResult> provide a much cleaner alternative to the previous Asynchronous Programming design patterns in the .NET framework.  Instead of trying to implement IAsyncResult, and providing BeginXXX() and EndXXX() methods, implementing an asynchronous programming API can be as simple as creating a method that returns a Task or Task<TResult>.  The client side of the pattern also is dramatically simplified – the client can call a method, then either choose to call task.Wait() or use task.Result when it needs to wait for the operation’s completion. While this provides a much cleaner model for future APIs, there is quite a bit of infrastructure built around the current Asynchronous Programming design patterns.  In order to provide a model to work with existing APIs, two other forms of Task exist.  There is a constructor for Task which takes an Action<Object> and a state parameter.  In addition, there is a constructor for creating a Task<TResult> which takes a Func<Object, TResult> as well as a state parameter.  When using these constructors, the state parameter is stored in the Task.AsyncState property. While these two overloads exist, and are usable directly, I strongly recommend avoiding this for new development.  The two forms of Task which take an object state parameter exist primarily for interoperability with traditional .NET Asynchronous Programming methodologies.  Using lambda expressions to capture variables from the scope of the creator is a much cleaner approach than using the untyped state parameters, since lambda expressions provide full type safety without introducing new variables.
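For completeness, here is a short sketch (my own example, not from the original post) of the two state-based overloads mentioned above, showing how the state object surfaces through Task.AsyncState:

using System;
using System.Threading.Tasks;

class StateTaskExample
{
    static void Main()
    {
        // Task(Action<object>, object) - the interop-oriented form.
        var logTask = new Task(state => Console.WriteLine("Processing " + state), "order-42");
        logTask.Start();
        logTask.Wait();

        // Task<TResult>(Func<object, TResult>, object) - same idea, but producing a result.
        var lengthTask = new Task<int>(state => ((string)state).Length, "order-42");
        lengthTask.Start();

        Console.WriteLine(logTask.AsyncState);  // prints "order-42"
        Console.WriteLine(lengthTask.Result);   // prints 8, blocking until the task completes
    }
}

As noted above, lambda capture is the cleaner option for new code; these overloads exist mainly for interoperability with the older asynchronous programming patterns.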

    Read the article

  • GZip/Deflate Compression in ASP.NET MVC

    - by Rick Strahl
    A long while back I wrote about GZip compression in ASP.NET. In that article I describe two generic helper methods that I've used in all sorts of ASP.NET application from WebForms apps to HttpModules and HttpHandlers that require gzip or deflate compression. The same static methods also work in ASP.NET MVC. Here are the two routines:/// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("gzip")) { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } else { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); } The first method checks whether the client sending the request includes the accept-encoding for either gzip or deflate, and if if it does it returns true. The second function uses IsGzipSupported() to decide whether it should encode content and uses an Response Filter to do its job. Basically response filters look at the Response output stream as it's written and convert the data flowing through it. Filters are a bit tricky to work with but the two .NET filter streams for GZip and Deflate Compression make this a snap to implement. In my old code and even now in MVC I can always do:public ActionResult List(string keyword=null, int category=0) { WebUtils.GZipEncodePage(); …} to encode my content. And that works just fine. The proper way: Create an ActionFilterAttribute However in MVC this sort of thing is typically better handled by an ActionFilter which can be applied with an attribute. So to be all prim and proper I created an CompressContentAttribute ActionFilter that incorporates those two helper methods and which looks like this:/// <summary> /// Attribute that can be added to controller methods to force content /// to be GZip encoded if the client supports it /// </summary> public class CompressContentAttribute : ActionFilterAttribute { /// <summary> /// Override to compress the content that is generated by /// an action method. 
/// </summary> /// <param name="filterContext"></param> public override void OnActionExecuting(ActionExecutingContext filterContext) { GZipEncodePage(); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("gzip")) { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } else { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); } } It's basically the same code wrapped into an ActionFilter attribute, which intercepts requests MVC requests to Controller methods and lets you hook up logic before and after the methods have executed. Here I want to override OnActionExecuting() which fires before the Controller action is fired. With the CompressContentAttribute created, it can now be applied to either the controller as a whole:[CompressContent] public class ClassifiedsController : ClassifiedsBaseController { … } or to one of the Action methods:[CompressContent] public ActionResult List(string keyword=null, int category=0) { … } The former applies compression to every action method, while the latter is selective and only applies it to the individual action method. Is the attribute better than the static utility function? Not really, but it is the standard MVC way to hook up 'filter' content and that's where others are likely to expect to set options like this. In fact,  you have a bit more control with the utility function because you can conditionally apply it in code, but this is actually much less likely in MVC applications than old WebForms apps since controller methods tend to be more focused. Compression Caveats Http compression is very cool and pretty easy to implement in ASP.NET but you have to be careful with it - especially if your content might get transformed or redirected inside of ASP.NET. A good example, is if an error occurs and a compression filter is applied. ASP.NET errors don't clear the filter, but clear the Response headers which results in some nasty garbage because the compressed content now no longer matches the headers. Another issue is Caching, which has to account for all possible ways of compression and non-compression that the content is served. Basically compressed content and caching don't mix well. I wrote about several of these issues in an old blog post and I recommend you take a quick peek before diving into making every bit of output Gzip encoded. 
None of these are show stoppers, but you have to be aware of the issues. Related Posts: GZip Compression with ASP.NET Content; ASP.NET GZip Encoding Caveats. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, MVC
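One more option worth noting (not covered in the post, and subject to the same caching and error caveats): in MVC 3 and later the attribute can be registered as a global filter so every action is compressed. A minimal sketch:

// Global.asax.cs - applies CompressContentAttribute to all actions.
using System.Web.Mvc;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Register the compression filter globally instead of per controller/action.
        GlobalFilters.Filters.Add(new CompressContentAttribute());

        // ... existing area, filter, route and bundle registration goes here ...
    }
}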

    Read the article

  • Security Trimmed Cross Site Collection Navigation

    - by Sahil Malik
    This article will serve as documentation of a fully functional codeplex project that I just created. This project will give you a WebPart that will give you security trimmed navigation across site collections. The first question is, why create such a project? In every single SharePoint project you will do, one question you will always be faced with is, what should the boundaries of sites be, and what should the boundaries of site collections be? There is no good or bad answer to this, because it really really depends on your needs. There are some factors in play here.
Site Collections will allow you to scale, as a Site collection is the smallest entity you can put inside a content database.
Site collections will allow you to offer different levels of SLAs, because you put a site collection on a separate content database, and put that database on a separate server.
Site collections are a security boundary – and they can be moved around at will without affecting other site collections.
Site collections are also a branding boundary. They are also a feature deployment boundary, so you can have two site collections on the same web application with completely different nature of services.
But site collections break navigation, i.e. a site collection at "/", and a site collection at "/sites/mySiteCollection", are completely independent of each other. If you have access to both, the navigation of / won't show you a link to /sites/mySiteCollection. Some people refer to this as a huge issue in SharePoint. Luckily, some workarounds exist. A long time ago, I had blogged about "Implementing Consistent Navigation across Site Collections". That approach was a no-code solution, it worked – it gave you a consistent navigation across site collections. But, it didn't work in a security trimmed fashion! i.e., if I don't have access to Site Collection 'X', it would still show me a link to 'X'. Well this project gets around that issue. Simply deploy this project, and it'll give you a WebPart. You can use that WebPart as either a webpart or as a server control dropped via SharePoint designer, and it will give you Security Trimmed Cross Site Collection Navigation. The code has been written for SP2010, but it will work in SP2007 with the help of http://spwcfsupport.codeplex.com . What do I need to do to make it work? I'm glad you asked! Simple! Deploy the .wsp (which you can download here). This will give you a site collection feature called "Winsmarts Cross Site Collection Navigation" as shown below. Go ahead and activate it, and this will give you a WebPart called "Winsmarts Navigation Web Part" as shown below: Just drop this WebPart on your page, and it will show you all site collections that the currently logged in user has access to. Really it's that easy! This is shown as below - In the above example, I have two site collections that I created at /sites/SiteCollection1 and /sites/SiteCollection2. The navigation shows the titles. You see some extraneous crap as well, you might want to clean that – I'll talk about that in a minute. What? You're running into problems? If the problem you're running into is that you are prompted to login three times, and then it shows a blank webpart that says "Loading your applications .." and then craps out!, then most probably you're using a different authentication scheme. Behind the scenes I use a custom WCF service to perform this job.
OOTB, I’ve set it to work with NTLM, but if you need to make it work with alternate authentication schemes such as forms based auth or client side certs, you will need to edit the %14%\ISAPI\Winsmarts.CrossSCNav\web.config file, specifically this section -

<bindings>
  <webHttpBinding>
    <binding name="customWebHttpBinding">
      <security mode="TransportCredentialOnly">
        <transport clientCredentialType="Ntlm"/>
      </security>
    </binding>
  </webHttpBinding>
</bindings>

For Kerberos, change the “clientCredentialType” to “Windows”. For Forms auth, remove that transport line. For client certs – well that’s a bit more involved, but it’s just web.config changes – hit a good book on WCF or hire me for a billion trillion $. But fair warning, I might be too busy to help immediately. If you’re running into a different problem, please leave a comment below, but the code is pretty rock solid, so .. hmm .. check what you’re doing! BTW, I don’t make any guarantee/warranty on this – if this code makes you sterile, unpopular, bad hairstyle, anything else, that is your problem! But, there are some known issues - I wrote this as a concept – you can easily extend it to be more flexible. For example, hierarchical nav, horizontal nav, or jazzy effects with jQuery or Silverlight – all those are possible very very easily. This webpart is not smart enough to co-exist with another instance of itself on the same page. I can easily extend it to do so, which I will do in my spare(!?) time! Okay good! But that’s not all! As you can see, just dropping the WebPart may show you many extraneous site collections, or maybe you want to restrict which site collections are shown, or exclude a certain site collection from the navigation. To support that, I created a property on the WebPart called “UrlMatchPattern”, which is a regex expression you specify to trim the results :). So, just edit the WebPart, and specify a string property of “http://sp2010/sites/” as shown below. Note that you can put in whatever regex expression you want! So go crazy, I don’t care! And this gives you a cleaner look. w00t! Enjoy!
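As a final aside on how this kind of trimming works under the hood: the project does its checks inside the WCF service, but purely as an illustration (a hypothetical sketch, not the project's actual implementation), a security trimmed enumeration with the SharePoint server object model could look roughly like this:

using System;
using System.Collections.Generic;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

static class TrimmedNavigationSketch
{
    // Returns title/URL pairs for site collections the current user can actually view.
    public static List<KeyValuePair<string, string>> GetAccessibleSiteCollections(SPWebApplication webApp)
    {
        var results = new List<KeyValuePair<string, string>>();

        foreach (SPSite site in webApp.Sites)
        {
            using (site)
            {
                try
                {
                    SPWeb rootWeb = site.RootWeb;
                    if (rootWeb.DoesUserHavePermissions(SPBasePermissions.ViewPages))
                    {
                        results.Add(new KeyValuePair<string, string>(rootWeb.Title, rootWeb.Url));
                    }
                }
                catch (UnauthorizedAccessException)
                {
                    // The current user cannot open this site collection - skip it.
                    // This skipping is what "security trimmed" means here.
                }
            }
        }

        return results;
    }
}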

    Read the article

  • Creating an ASP.NET report using Visual Studio 2010 - Part 1

    - by rajbk
    This tutorial walks you through creating an report based on the Northwind sample database. You will add a client report definition file (RDLC), create a dataset for the RDLC, define queries using LINQ to Entities, design the report and add a ReportViewer web control to render the report in a ASP.NET web page. The report will have a chart control. Different results will be generated by changing filter criteria. At the end of the walkthrough, you should have a UI like the following.  From the UI below, a user is able to view the product list and can see a chart with the sum of Unit price for a given category. They can filter by Category and Supplier. The drop downs will auto post back when the selection is changed.  This demo uses Visual Studio 2010 RTM. This post is split into three parts. The last part has the sample code attached. Creating an ASP.NET report using Visual Studio 2010 - Part 2 Creating an ASP.NET report using Visual Studio 2010 - Part 3   Lets start by creating a new ASP.NET empty web application called “NorthwindReports” Creating the Data Access Layer (DAL) Add a web form called index.aspx to the root directory. You do this by right clicking on the NorthwindReports web project and selecting “Add item..” . Create a folder called “DAL”. We will store all our data access methods and any data transfer objects in here.   Right click on the DAL folder and add a ADO.NET Entity data model called Northwind. Select “Generate from database” and click Next. Create a connection to your database containing the Northwind sample database and click Next.   From the table list, select Categories, Products and Suppliers and click next. Our Entity data model gets created and looks like this:    Adding data transfer objects Right click on the DAL folder and add a ProductViewModel. Add the following code. This class contains properties we need to render our report. public class ProductViewModel { public int? ProductID { get; set; } public string ProductName { get; set; } public System.Nullable<decimal> UnitPrice { get; set; } public string CategoryName { get; set; } public int? CategoryID { get; set; } public int? SupplierID { get; set; } public bool Discontinued { get; set; } } Add a SupplierViewModel class. This will be used to render the supplier DropDownlist. public class SupplierViewModel { public string CompanyName { get; set; } public int SupplierID { get; set; } } Add a CategoryViewModel class. public class CategoryViewModel { public string CategoryName { get; set; } public int CategoryID { get; set; } } Create an IProductRepository interface. This will contain the signatures of all the methods we need when accessing the entity model.  This step is not needed but follows the repository pattern. interface IProductRepository { IQueryable<Product> GetProducts(); IQueryable<ProductViewModel> GetProductsProjected(int? supplierID, int? categoryID); IQueryable<SupplierViewModel> GetSuppliers(); IQueryable<CategoryViewModel> GetCategories(); } Create a ProductRepository class that implements the IProductReposity above. The methods available in this class are as follows: GetProducts – returns an IQueryable of all products. GetProductsProjected – returns an IQueryable of ProductViewModel. The method filters all the products based on SupplierId and CategoryId if any. It then projects the result into the ProductViewModel. 
GetSuppliers() – returns an IQueryable of all suppliers projected into a SupplierViewModel. GetCategories() – returns an IQueryable of all categories projected into a CategoryViewModel.

    public class ProductRepository : IProductRepository
    {
        /// <summary>
        /// IQueryable of all Products
        /// </summary>
        /// <returns></returns>
        public IQueryable<Product> GetProducts()
        {
            var dataContext = new NorthwindEntities();
            var products = from p in dataContext.Products
                           select p;
            return products;
        }

        /// <summary>
        /// IQueryable of Products projected
        /// into the ProductViewModel class
        /// </summary>
        /// <returns></returns>
        public IQueryable<ProductViewModel> GetProductsProjected(int? supplierID, int? categoryID)
        {
            var projectedProducts = from p in GetProducts()
                                    select new ProductViewModel
                                    {
                                        ProductID = p.ProductID,
                                        ProductName = p.ProductName,
                                        UnitPrice = p.UnitPrice,
                                        CategoryName = p.Category.CategoryName,
                                        CategoryID = p.CategoryID,
                                        SupplierID = p.SupplierID,
                                        Discontinued = p.Discontinued
                                    };

            // Filter on SupplierID
            if (supplierID.HasValue)
            {
                projectedProducts = projectedProducts.Where(a => a.SupplierID == supplierID);
            }

            // Filter on CategoryID
            if (categoryID.HasValue)
            {
                projectedProducts = projectedProducts.Where(a => a.CategoryID == categoryID);
            }

            return projectedProducts;
        }

        public IQueryable<SupplierViewModel> GetSuppliers()
        {
            var dataContext = new NorthwindEntities();
            var suppliers = from s in dataContext.Suppliers
                            select new SupplierViewModel
                            {
                                SupplierID = s.SupplierID,
                                CompanyName = s.CompanyName
                            };
            return suppliers;
        }

        public IQueryable<CategoryViewModel> GetCategories()
        {
            var dataContext = new NorthwindEntities();
            var categories = from c in dataContext.Categories
                             select new CategoryViewModel
                             {
                                 CategoryID = c.CategoryID,
                                 CategoryName = c.CategoryName
                             };
            return categories;
        }
    }

Your Solution Explorer should look like the following. Build your project and make sure you don’t get any errors. In the next part, we will see how to create the client report definition file using the Report Wizard. Creating an ASP.NET report using Visual Studio 2010 - Part 2
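Before jumping to Part 2, here is a rough idea of how the page code-behind could consume these repository methods to populate the filter drop downs and feed the ReportViewer. Treat it as a hedged sketch only – the control IDs (ddlSuppliers, ddlCategories, ReportViewer1) and the page class name are placeholders of mine, not code from the article:

    using System;
    using System.Linq;

    // Assumes the view models and ProductRepository created in the DAL folder above.
    public partial class Index : System.Web.UI.Page
    {
        private readonly IProductRepository productRepository = new ProductRepository();

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                // Bind the filter drop downs once, on first load.
                ddlSuppliers.DataSource = productRepository.GetSuppliers().ToList();
                ddlSuppliers.DataTextField = "CompanyName";
                ddlSuppliers.DataValueField = "SupplierID";
                ddlSuppliers.DataBind();

                ddlCategories.DataSource = productRepository.GetCategories().ToList();
                ddlCategories.DataTextField = "CategoryName";
                ddlCategories.DataValueField = "CategoryID";
                ddlCategories.DataBind();
            }
        }

        private void BindReport()
        {
            int? supplierID = string.IsNullOrEmpty(ddlSuppliers.SelectedValue)
                ? (int?)null : int.Parse(ddlSuppliers.SelectedValue);
            int? categoryID = string.IsNullOrEmpty(ddlCategories.SelectedValue)
                ? (int?)null : int.Parse(ddlCategories.SelectedValue);

            // The projected list becomes the data source handed to the RDLC in
            // Parts 2 and 3, e.g. via ReportViewer1.LocalReport.DataSources.Add(...).
            var products = productRepository
                .GetProductsProjected(supplierID, categoryID)
                .ToList();
        }
    }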

    Read the article

  • Interesting things – Twitter annotations and your phone as a web server

    - by jamiet
    I overheard/read a couple of things today that really made me, data junkie that I am, take a step back and think, “Hmmm, yeah, that could be really interesting” and I wanted to make a note of them here so that (a) I could bring them to the attention of anyone that happens to read this and (b) I can maybe come back here in a few years and see if either of these have come to fruition. Your phone as a web server While listening to Jon Udell’s (twitter) “Interviews with Innovators Podcast” today in which he interviewed Herbert Van de Sompel (twitter) about his Momento project. During the interview Jon and Herbert made the following remarks: Jon: [some people] really had this vision of a web of servers, the notion that every node on the internet, every connected entity, is potentially a server and a client…we can see where we’re getting to a point where these endpoint devices we have in our pockets are going to be massively capable and it may be in the not too distant future that significant chunks of the web archive will be cached all over the place including on your own machine… Herbert: wasn’t it Opera who at one point turned your browser into a server? That really got my brain ticking. We all carry a mobile phone with us and therefore we all potentially carry a mobile web server with us as well and to my mind the only thing really stopping that from happening is the capabilities of the phone hardware, the capabilities of the network infrastructure and the will to just bloody do it. Certainly all the standards required for addressing a web server on a phone already exist (to this uninitiated observer DNS and IPv6 seem to solve that problem) so why not? I tweeted about the idea and Rory Street answered back with “why would you want a phone to be a web server?”: Its a fair question and one that I would like to try and answer. Mobile phones are increasingly becoming our window onto the world as we use them to upload messages to Twitter, record our location on FourSquare or interact with our friends on Facebook but in each of these cases some other service is acting as our intermediary; to see what I’m thinking you have to go via Twitter, to see where I am you have to go to FourSquare (I’m using ‘I’ liberally, I don’t actually use FourSquare before you ask). Why should this have to be the case? Why can’t that data be decentralised? Why can’t we be masters of our own data universe? If my phone acted as a web server then I could expose all of that information without needing those intermediary services. I see a time when we can pass around URLs such as the following: http://jamiesphone.net/location/current - Where is Jamie right now? http://jamiesphone.net/location/2010-04-21 – Where was Jamie on 21st April 2010? http://jamiesphone.net/thoughts/current – What’s on Jamie’s mind right now? http://jamiesphone.net/blog – What documents is Jamie sharing with me? http://jamiesphone.net/calendar/next7days – Where is Jamie planning to be over the next 7 days? and those URLs get served off of the phone in our pockets. If we govern that data then we can control who has access to it and (crucially) how long its available for. Want to wipe yourself off the face of the web? its pretty easy if you’re in control of all the data – just turn your phone off. None of this exists today but I look forward to a time when it does. 
Opera really were onto something last June when they announced Opera Unite (admittedly Unite only works because Opera provide an intermediary DNS-alike system – it isn’t totally decentralised).

Opening up Twitter annotations

Last week Twitter held their first developer conference, called Chirp, where they announced an upcoming new feature called ‘Twitter Annotations’; in short, this will allow us to attach metadata to a tweet, thus enhancing the tweet itself. Think of it as a richer version of hashtags. To think of it another way, Twitter are turning their data into a humongous Entity-Attribute-Value or triple-tuple store. That alone has huge implications both for the web and for Twitter as a whole – the ability to enrich those 140 characters of data and thus make them more useful is indeed compelling. However, today I stumbled upon a blog post from Eugene Mandel entitled Tweet Annotations – a Way to a Metadata Marketplace? where he proposed the idea of allowing tweets to have metadata added by people other than the person who tweeted the original tweet. This idea really fascinated me, especially when I read some of the potential uses that Eugene and his commenters suggested. They included: Amazon could attach an ISBN to a tweet that mentions a book. Specialist client apps for book lovers could be built up around this metadata. Advertisers could pay to place adverts in metadata. The revenue generated from those adverts could be shared with the tweeter or with the people who add the metadata.

Granted, allowing anyone to add metadata to a tweet has the potential to create a spam problem the like of which we haven’t even envisaged, but spam hasn’t halted the growth of the web and neither should it halt the growth of data annotations. The original tweeter should of course be able to determine who can add metadata and whether it should be moderated. As Eugene says himself: Opening publishing tweet annotations to anyone will open the way to a marketplace of metadata where client developers, data mining companies and advertisers can add new meaning to Twitter and build innovative businesses.

What Eugene and his followers did not mention is what I think is potentially the most fascinating use of opening up annotations. Google’s success today is built on their page rank algorithm, which measures the validity of a web page by the number of incoming links to it and the page rank of the sites containing those links – it’s a system built on reputation. Twitter annotations could open up a new paradigm however – let’s call it People rank – where reputation can be measured by the metadata that people choose to apply to links and the websites containing those links. It’s not hard to see why Google and Microsoft have paid big bucks to get access to the Twitter firehose!

Neither of these features, phones as a web server or the ability to add annotations to other people’s tweets, exist today, but I strongly believe that they could dramatically enhance the web as we know it. I hope to look back on this blog post in a few years in the knowledge that these ideas have been put into place. @Jamiet

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

    - by rajbk
The Open Data Protocol, referred to as OData, is a new data-sharing standard that breaks down silos and fosters an interoperable ecosystem for data consumers (clients) and producers (services) that is far more powerful than currently possible. It enables more applications to make sense of a broader set of data, and helps every data service and client add value to the whole ecosystem. WCF Data Services (previously known as ADO.NET Data Services) was the first Microsoft technology to support the Open Data Protocol, in Visual Studio 2008 SP1. It provides developers with client libraries for .NET, Silverlight, AJAX, PHP and Java. Microsoft now also supports OData in SQL Server 2008 R2, Windows Azure Storage, Excel 2010 (through PowerPivot), and SharePoint 2010. Many other applications are in the works.*

This post walks you through how to create an OData feed, define a shape for the data and pre-filter the data using Visual Studio 2010, WCF Data Services and the Entity Framework. A sample project is attached at the bottom of Part 2 of this post. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

Create the Web Application

File –› New –› Project, then select “ASP.NET Empty Web Application”.

Add the Entity Data Model

Right click on the Web Application in the Solution Explorer and select “Add New Item..”. Select “ADO.NET Entity Data Model” under “Data”. Name the Model “Northwind” and click “Add”. In the “Choose Model Contents” screen, select “Generate Model From Database” and click “Next”. Define a connection to your database containing the Northwind database in the next screen. We are going to expose the Products table through our OData feed, so select “Products” in the “Choose your Database Object” screen. Click “Finish”. We are done creating our Entity Data Model. Save the Northwind.edmx file created.

Add the WCF Data Service

Right click on the Web Application in the Solution Explorer and select “Add New Item..”. Select “WCF Data Service” from the list and call the service “DataService” (creative, huh?). Click “Add”.

Enable Access to the Data Service

Open the DataService.svc.cs class. The class is well commented and instructs us on the next steps.

    public class DataService : DataService< /* TODO: put your data source class name here */ >
    {
        // This method is called only once to initialize service-wide policies.
        public static void InitializeService(DataServiceConfiguration config)
        {
            // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
            // Examples:
            // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead);
            // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }

Replace the comment that starts with “/* TODO:” with “NorthwindEntities” (the entity container name of the Model we created earlier). WCF Data Services is locked down by default, FTW! No data is exposed without you explicitly allowing it. You have to explicitly specify which entity sets you wish to expose, and what rights are allowed, by using SetEntitySetAccessRule. SetServiceOperationAccessRule, on the other hand, sets rules for a specified operation. Let us define an access rule to expose the Products entity set we created earlier. We use EntitySetRights.AllRead since we want to give read-only access. Our modified code is shown below.
    public class DataService : DataService<NorthwindEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }

We are done setting up our OData feed! Compile your project. Right click on DataService.svc and select “View in Browser” to see the OData feed. To view the feed in IE, you must make sure that “Feed Reading View” is turned off. You set this under Tools –› Internet Options –› Content tab. If you navigate to “Products”, you should see the Products feed. Note also that URIs are case sensitive, i.e. Products works but products doesn’t.

Filtering our data

OData has a set of system query options you can use to perform common operations against data exposed by the model. For example, to see only Products in CategoryID 2, we can use the following request:

    /DataService.svc/Products?$filter=CategoryID eq 2

At the time of this writing, the supported options are $orderby, $top, $skip, $filter, $expand, $format†, $select and $inlinecount.

Pre-filtering our data using Query Interceptors

The Products feed currently returns all Products. We want to change that so that it contains only Products that have not been discontinued. WCF Data Services introduces the concept of interceptors, which allow us to inject custom validation/policy logic into the request/response pipeline of a data service. We will use a QueryInterceptor to pre-filter the data so that it returns only Products that are not discontinued. To create a QueryInterceptor, write a method that returns an Expression<Func<T, bool>> and mark it with the QueryInterceptor attribute as shown below.

    [QueryInterceptor("Products")]
    public Expression<Func<Product, bool>> OnReadProducts()
    {
        return o => o.Discontinued == false;
    }

Viewing the feed after compilation will only show products that have not been discontinued. We can also confirm this by looking at the WHERE clause in the SQL generated by the Entity Framework.

    SELECT
        [Extent1].[ProductID] AS [ProductID],
        ...
        [Extent1].[Discontinued] AS [Discontinued]
    FROM [dbo].[Products] AS [Extent1]
    WHERE 0 = [Extent1].[Discontinued]

Other examples of Query/Change interceptors can be seen here, including an example that filters data based on the identity of the authenticated user. We are done pre-filtering our data. In the next part of this post, we will see how to shape our data. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

Foot Notes

* http://msdn.microsoft.com/en-us/data/aa937697.aspx
† $format did not work for me. The way to get a JSON response is to include the following in the request header when making the request: “Accept: application/json, text/javascript, */*”. This is easily done with most JavaScript libraries.
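As a quick way to exercise both the feed and the query interceptor from .NET, a throwaway console client along the following lines can be handy. This is only a sketch under assumptions – the host/port in the URL and the namespace of the generated service reference are mine; the NorthwindEntities context and Products property come from adding a service reference to DataService.svc:

    using System;
    using System.Linq;
    // Namespace of the generated service reference is an assumption.
    using NorthwindClient.NorthwindService;

    class FeedSmokeTest
    {
        static void Main()
        {
            // Adjust the address to wherever DataService.svc is hosted.
            var context = new NorthwindEntities(
                new Uri("http://localhost:1234/DataService.svc"));

            // Translates to /Products?$filter=CategoryID eq 2&$orderby=ProductName&$top=5
            var products = context.Products
                .Where(p => p.CategoryID == 2)
                .OrderBy(p => p.ProductName)
                .Take(5);

            foreach (var p in products)
            {
                // Discontinued products never appear, thanks to the query interceptor.
                Console.WriteLine("{0} - {1}", p.ProductName, p.UnitPrice);
            }
        }
    }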

    Read the article

  • Michael Crump’s notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5

    - by mbcrump
TIME TO GO PRO! These are my notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5. I created them using several resources (various certification web sites, MSDN, and the official MS 70-548 book). The reasons I created this review are: a) I am taking the exam; b) MS did not create a book for this exam – use the (MS 70-548) book; c) to make sure I am familiar with each topic before the exam. I hope it provides a good start for your own notes and that someone finds it useful. At the very least, it will give you a starting point of what to expect to know on the PRO exam. Also, for those wondering, the PRO exam contains very little code – it is basically all theory.

1. Validation Controls – how to prevent users from entering invalid data on forms (MaskedTextBox control and RegEx).
2. ServiceController – used to start and control the behavior of existing services.
3. User feedback (know WinForms StatusBar, ToolTips, Color, ErrorProvider, context-sensitive help and accessibility).
4. Specific (derived) exceptions must be handled before general (base class) exceptions. By moving the exception handling for the base type Exception to after the exception handling of ArgumentNullException, all ArgumentNullExceptions thrown by the Helper method will be caught and logged correctly.
5. A heartbeat method is a method exposed by a web service that allows external applications to check on the status of the service.
6. New users must master key tasks quickly. Giving these tasks context and appropriate detail will help. However, advanced users will demand quicker paths. Shortcuts, accelerators, or toolbar buttons will speed things along for the advanced user.
7. MSBuild uses project files to instruct the build engine what to build and how to build it. MSBuild project files are XML files that adhere to the MSBuild XML schema. The MSBuild project files contain complete file, build action, and dependency information for each individual project.
8. Evaluating whether or not to fix a bug involves a triage process. You must identify the bug's impact, set the priority, categorize it, and assign a developer. Many times the person doing the triage work will assign the bug to a developer for further investigation. In fact, the workflow for the bug work item inside of Team System supports this step. Developers are often asked to assess the impact of a given bug. This assessment helps the person doing the triage make a decision on how to proceed. When assessing the impact of a bug, you should consider time and resources to fix it, bug risk, and impacts of the bug.
9. In large projects it is generally impossible and unfeasible to fix all bugs because of the impact on schedule and budget.
10. Code reviews should be conducted by a technical lead or a technical peer.
11. Testing applications.
12. WCF services – application state.
13. SQL Server 2005 / 2008 Express Edition – reliable storage of data / Microsoft SQL Server 3.5 Compact Database – used for client computers to retrieve and save data from a shared location.
14. SQL Server 2008 Compact Edition – used for the minimum possible memory footprint and can synchronize data with a corporate SQL Server 2008 database. Supports offline use and minimal dependency on external components.
15. MDI and SDI forms (specifically IsMDIContainer).
16. GUID – in the case of data warehousing, it is important to define unique keys.
17. Encrypting / securing data.
18. Understanding of Isolated Storage / the proper location to store items.
19. LINQ to SQL.
20. Multithreaded access.
21. ADO.NET Entity Framework model.
22. Marshal.ReleaseComObject.
23. Common user interface layout (ComboBox, ListBox, ListView, MaskedTextBox, TextBox, RichTextBox, SplitContainer, TableLayoutPanel, TabControl).
24. DataSet class – http://msdn.microsoft.com/en-us/library/system.data.dataset%28VS.71%29.aspx
25. SQL Server 2008 Reporting Services (SSRS).
26. SystemIcons.Shield (Vista UAC).
27. Leveraging stored procedures to perform data manipulation for a database schema that can change.
28. DataContext.
29. Microsoft Windows Installer packages, ClickOnce (bootstrapping features), XCopy.
30. Client Application Services – will authenticate users by using the same data source as an ASP.NET web application.
31. SQL Server 2008 caching.
32. StringBuilder.
33. Accessibility guidelines for Windows applications: http://msdn.microsoft.com/en-us/library/ms228004.aspx
34. Logging errors.
35. Testing performance-related issues.
36. Role-based security, GenericIdentity and GenericPrincipal.
37. System.Net.CookieContainer will store session data for web apps (see Isolated Storage for WinForms).
38. The .NET CLR Profiler tool will identify objects that cause performance issues.
39. ADO.NET synchronization (SyncGroup).
40. Globalization – CultureInfo.
41. IDisposable interface – expect several questions relating to this.
42. Adding timestamps to determine whether data has changed or not.
43. Converting applications to .NET Framework 3.5.
44. MicrosoftReportViewer.
45. Composite controls.
46. Windows Vista Known Folders.
47. Microsoft Sync Framework.
48. TypeConverter – provides a unified way of converting types of values to other types, as well as for accessing standard values and subproperties. http://msdn.microsoft.com/en-us/library/system.componentmodel.typeconverter.aspx
49. Concurrency control mechanisms. The main categories are:
   - Optimistic – delay the checking of whether a transaction meets the isolation rules (e.g., serializability and recoverability) until its end, without blocking any of its (read, write) operations, and then abort the transaction if the desired rules are violated.
   - Pessimistic – block operations of a transaction if they may cause violation of the rules.
   - Semi-optimistic – block operations in some situations and do not block in others, while delaying rules checking to the transaction's end, as done with optimistic.
50. AutoResetEvent.
51. Microsoft Message Queuing (MSMQ) 4.0.
52. Bulk imports.
53. KeyDown event of controls.
54. WPF UI components.
55. UI process layer.
56. GAC (installing, removing and queuing).
57. Use a local database cache to reduce the network bandwidth used by applications.
58. Sound can easily be annoying and distracting to users, so use it judiciously. Always give users the option to turn sound off. Because a user might have sound off, never convey important information through sound alone.
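To make one of those bullets a little more concrete – item 36, role-based security with GenericIdentity and GenericPrincipal – here is a minimal sketch. The user name and role names are made up for illustration; in a real application they would come from your authentication store:

    using System;
    using System.Security.Principal;
    using System.Threading;

    class RoleBasedSecurityDemo
    {
        static void Main()
        {
            // Hypothetical user and roles, purely for illustration.
            var identity = new GenericIdentity("mbcrump");
            var principal = new GenericPrincipal(identity, new[] { "Admin", "ReportViewer" });

            // Attach the principal to the current thread so IsInRole checks
            // work anywhere downstream in the application.
            Thread.CurrentPrincipal = principal;

            if (Thread.CurrentPrincipal.IsInRole("Admin"))
            {
                Console.WriteLine("{0} can open the admin screens.", identity.Name);
            }
        }
    }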

    Read the article

< Previous Page | 498 499 500 501 502 503 504 505 506 507 508 509  | Next Page >