Search Results

Search found 13935 results on 558 pages for 'notify send'.


  • Setting up a DNS name server for a mass virtual host with Bind9

    - by Dez
    I am trying to set up a chrooted DNS name server on a local LAN so that everyone connected to the LAN can access the mass virtual hosts defined for a development environment without having to manually edit their local /etc/hosts files one by one. The mass virtual host is named example.user.dev (VirtualDocumentRoot /home/user/example) and example.test (DocumentRoot /var/www/example). I set up everything and /var/log/syslog doesn't show any error, but when checking the DNS with: host -v example.test it doesn't find the host. I don't receive an answer with the dig command either. dig -x example.test ; <<>> DiG 9.5.1-P3 <<>> -x imprimere ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 47844 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;imprimere.in-addr.arpa. IN PTR ;; AUTHORITY SECTION: in-addr.arpa. 600 IN SOA a.root-servers.net. dns-ops.arin.net. 2010042604 1800 900 691200 10800 ;; Query time: 108 msec ;; SERVER: 80.58.0.33#53(80.58.0.33) ;; WHEN: Mon Apr 26 11:15:53 2010 ;; MSG SIZE rcvd: 107 My configuration is the following: /etc/bind/named.conf.local zone "example.test" { type master; allow-query { any; }; file "/etc/bind/zones/master_example.test"; notify yes; }; zone "1.168.192.in-addr.arpa" { type master; allow-query { any; }; file "/etc/bind/zones/master_1.168.192.in-addr.arpa"; notify yes; }; /etc/bind/named.conf.options Note: We have a static IP address, so I forward queries to the DNS server at that IP address. options{ directory "/var/cache/bind"; forwarders { 80.34.100.160; }; auth-nxdomain no; listen-on-v6 { any; }; }; /etc/bind/zones/master_example.test $ORIGIN example.test. $TTL 86400 @ IN SOA example.test. root.example.test. ( 201004227 ; serial 28800 ; refresh 14400 ; retry 3600000 ; expire 86400 ) ; min ; TXT "example.test, DNS service" @ IN NS example.test. localhost A 127.0.0.1 example.test. A 192.168.1.52 example A 192.168.1.52 www CNAME example.test. /etc/hosts 127.0.0.1 localhost example 192.168.1.52 localhost example example.test /etc/resolv.conf Note: For Bind I just added the last 3 lines. nameserver 80.58.0.33 nameserver 80.58.61.250 nameserver 80.58.61.254 search example.test search example nameserver 192.168.1.52
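    A couple of details worth double-checking (a hedged sketch, not a confirmed fix): dig -x performs a reverse lookup and expects an IP address, so to test the forward zone it is better to ask the local server directly, e.g. dig example.test A @192.168.1.52. Also, resolv.conf only honours a single search line, and a mass virtual host setup usually relies on a wildcard record so every name under the zone resolves without listing each one:

        # /etc/resolv.conf -- one search line, local server first (sketch)
        nameserver 192.168.1.52
        nameserver 80.58.0.33
        search example.test example

        # /etc/bind/zones/master_example.test -- hypothetical wildcard for mass vhosts
        *       IN A    192.168.1.52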

    Read the article

  • outgoing DNS flood targeted to non-ISP hosts

    - by radudani
    Below is the specific traffic monitored at the network perimeter and originating from a user PC on Vista platform; my question is not about the effects of the flood, but about the nature of the source of it; is this a kind of known infection, or just an application went out of control? a standard NOD32 scan didn't find anything, as the user told me; Thank you for any hint, Danny 14:40:10.115876 IP 192.168.7.42.4122 67.228.0.181.53: S 2742536765:2742536765(0) win 16384 14:40:10.115943 IP 192.168.7.42.4124 67.228.181.207.53: S 3071079888:3071079888(0) win 16384 14:40:10.116015 IP 192.168.7.42.4126 67.228.0.181.53: S 3445199428:3445199428(0) win 16384 14:40:10.116086 IP 192.168.7.42.4128 67.228.181.207.53: S 2053198691:2053198691(0) win 16384 14:40:10.116154 IP 192.168.7.42.4130 67.228.0.181.53: S 2841660872:2841660872(0) win 16384 14:40:10.116222 IP 192.168.7.42.4132 67.228.181.207.53: S 3150822465:3150822465(0) win 16384 14:40:10.116290 IP 192.168.7.42.4134 67.228.0.181.53: S 1692515021:1692515021(0) win 16384 14:40:10.116358 IP 192.168.7.42.4136 67.228.181.207.53: S 3358275919:3358275919(0) win 16384 14:40:10.116430 IP 192.168.7.42.4138 67.228.0.181.53: S 930184999:930184999(0) win 16384 14:40:10.116498 IP 192.168.7.42.4140 67.228.181.207.53: S 1504984630:1504984630(0) win 16384 14:40:10.116566 IP 192.168.7.42.4142 67.228.0.181.53: S 546074424:546074424(0) win 16384 14:40:10.116634 IP 192.168.7.42.4144 67.228.181.207.53: S 4241828590:4241828590(0) win 16384 14:40:10.116702 IP 192.168.7.42.4146 67.228.0.181.53: S 668634627:668634627(0) win 16384 14:40:10.116769 IP 192.168.7.42.4148 67.228.181.207.53: S 3768119461:3768119461(0) win 16384 14:40:10.117360 IP 192.168.7.42.4111 67.228.0.181.53: 12676 op8 Resp12*- [2128q][|domain] 14:40:10.117932 IP 192.168.7.42.4112 67.228.181.207.53: 44190 op7 NotAuth*|$ [29103q],[|domain] 14:40:10.118726 IP 192.168.7.42.4113 67.228.0.181.53: 49196 inv_q [b2&3=0xeea] [64081q] [28317a] [43054n] [23433au] Type63482 (Class 5889)? M-_^OSM-JM-m^_M-i.[|domain] 14:40:10.119934 IP 192.168.7.42.4114 67.228.181.207.53: 48131 updateMA Resp12$ [43850q],[|domain] 14:40:10.121164 IP 192.168.7.42.4115 67.228.0.181.53: 46330 updateM% [b2&3=0x665b] [23691a] [998q] [32406n] [11452au][|domain] 14:40:10.121866 IP 192.168.7.42.4116 67.228.181.207.53: 34425 op7 YXRRSet* [39927q][|domain] 14:40:10.123107 IP 192.168.7.42.4117 67.228.0.181.53: 56536 notify+ [b2&3=0x27e6] [59761a] [23005q] [33341n] [29705au][|domain] 14:40:10.123961 IP 192.168.7.42.4118 67.228.181.207.53: 19323 stat% [b2&3=0x14bb] [32491a] [41925q] [2038n] [5857au][|domain] 14:40:10.132499 IP 192.168.7.42.4119 67.228.0.181.53: 50432 updateMA+ [b2&3=0x6bc2] [10733a] [9775q] [46984n] [15261au][|domain] 14:40:10.133394 IP 192.168.7.42.4120 67.228.181.207.53: 2171 notify Refused$ [26027q][|domain] 14:40:10.134421 IP 192.168.7.42.4121 67.228.0.181.53: 25802 updateM NXDomain*-$ [28641q][|domain] 14:40:10.135392 IP 192.168.7.42.4122 67.228.181.207.53: 2073 updateMA+ [b2&3=0x6d0b] [43177a] [54332q] [17736n] [43636au][|domain] 14:40:10.136638 IP 192.168.7.42.4123 67.228.0.181.53: 15346 updateD+% [b2&3=0x577a] [61686a] [19106q] [15824n] [37833au] Type28590 (Class 64856)? [|domain] 14:40:10.137265 IP 192.168.7.42.4124 67.228.181.207.53: 60761 update+ [b2&3=0x2b66] [43293a] [53922q] [23115n] [11349au][|domain] 14:40:10.148122 IP 192.168.7.42.4125 67.228.0.181.53: 3418 op3% [b2&3=0x1a92] [51107a] [60368q] [47777n] [56081au][|domain]
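    While the source of the traffic is being identified, one common containment step is to let only the site's own resolver talk DNS to the outside and log/drop everything else at the perimeter. Below is a hedged sketch for a Linux gateway; 192.168.7.1 stands in for whatever the legitimate internal resolver actually is:

        # allow only the internal resolver out on port 53, log and drop the rest
        iptables -A FORWARD -p udp --dport 53 -s 192.168.7.1 -j ACCEPT
        iptables -A FORWARD -p tcp --dport 53 -s 192.168.7.1 -j ACCEPT
        iptables -A FORWARD -p udp --dport 53 -j LOG --log-prefix "DNS-FLOOD: "
        iptables -A FORWARD -p udp --dport 53 -j DROP
        iptables -A FORWARD -p tcp --dport 53 -j DROP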

    Read the article

  • bind9 DNS Ubuntu names pingable on server, but not on Windows Machines?

    - by leeand00
    I setup a DNS server today on Ubuntu, following this tutorial. My intent was to setup my network for dns-name resolving on the private LAN within a single zone (nothing fancy I just want name resolution). I've tested the setup on the DNS server machine itself, and I can ping all the machines listed in the configuration file. I've also configured the Windows Machines on my network, and for some reason they are incapable of pinging by names as was possible on the DNS Server itself. I've tried running nslookup on the Windows DNS clients and I receive and error mentioning the address of the DNS server. DNS forwarding works fine, I'm not having any trouble accessing the internet, the problem only lies within accessing names within the private LAN. Here are my configuration files: options { directory "/var/cache/bind"; // If there is a firewall between you and nameservers you want // to talk to, you may need to fix the firewall to allow multiple // ports to talk. See http://www.kb.cert.org/vuls/id/800113 // If your ISP provided one or more IP addresses for stable // nameservers, you probably want to use them as forwarders. // Uncomment the following block, and insert the addresses replacing // the all-0's placeholder. // forwarders { // 0.0.0.0; // }; forwarders { 8.8.8.8; 8.8.8.4; 74.242.0.12; //68.87.76.178; }; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; /etc/bind/named.conf.options zone "leerdomain.local" { type master; file "/etc/bind/zones/leerdomain.local.db"; notify no; }; zone "2.168.192.in-addr.arpa" { type master; file "/etc/bind/zones/rev.2.168.192.in-addr.arpa"; notify no; }; /etc/bind/named.conf.local Lookup: $TTL 3D @ IN SOA ns.leerdomain.local. admin.leerdomain.local. ( 2010011001 28800 3600 604800 38400 ); leerdomain.local. IN NS ns.leerdomain.local. ns IN A 192.168.2.9 asus IN A 192.168.2.254 www IN CNAME asus vaio IN A 192.168.2.253 iptouch IN A 192.168.2.252 toshiba IN A 192.168.2.251 gw IN A 192.168.2.1 TXT "Network Gateway" /etc/bind/zones/leerdomain.local.db (Validates fine with named-checkzone when validating zone leerdomain.local) Reverse Lookup: $TTL 3D @ IN SOA ns.leerdomain.local. admin.leerdomain.local. ( 201001101 28800 604800 604800 86400 ) IN NS ns.leerdomain.local. 1 IN PTR gw.leerdomain.local. 254 IN PTR asus.leerdomain.local. 253 IN PTR vaio.leerdomain.local. 252 IN PTR iptouch.leerdomain.local. 251 IN PTR toshiba.leerdomain.local. /etc/bind/zones/rev.2.168.192.in-addr.arpa *(Does not validate with named-checkzone when validating zone leerdomain.local gives an error of: zone leerdomain.local/IN: NS 'ns.leerdomain.local' has no address records (A or AAAA) zone leerdomain.local/IN: not loaded due to errors. * Despite not validating bind9 starts without errors in /var/log/syslog I've also configured a few of the windows machines on my network to have the static ip as specified in the lookup and reverse lookup config files. i.e. Using nslookup yields the following results: C:\Users\leeand00>nslookup ns Server: UnKnown Address: 192.168.2.9 *** UnKnown can't find ns: Non-existent domain C:\Users\leeand00>nslookup gw Server: UnKnown Address: 192.168.2.9 Name: gw. Additionally trying to ping by name also fails on machines that are not the DNS Server. Is there something wrong with my configuration of either the nameserver or the Windows Boxes that is keeping me from accessing other machines using names?
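    Two things worth trying from the Windows side (a sketch, given that resolution works on the server itself): query with the fully qualified name against the server explicitly, and give the clients leerdomain.local as their DNS suffix (in the adapter's advanced DNS settings) so that short names like "ns" get qualified. The "Server: UnKnown" line is usually just nslookup failing a reverse lookup on the server's own address, which a PTR record for .9 would fix:

        C:\> nslookup ns.leerdomain.local. 192.168.2.9

        ; added to /etc/bind/zones/rev.2.168.192.in-addr.arpa (hypothetical)
        9       IN PTR  ns.leerdomain.local.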

    Read the article

  • Does this prove a network bandwidth bottleneck?

    - by Yuji Tomita
    I've incorrectly assumed that my internal AB testing means my server can handle 1k concurrency @3k hits per second. My theory at at the moment is that the network is the bottleneck. The server can't send enough data fast enough. External testing from blitz.io at 1k concurrency shows my hits/s capping off at 180, with pages taking longer and longer to respond as the server is only able to return 180 per second. I've served a blank file from nginx and benched it: it scales 1:1 with concurrency. Now to rule out IO / memcached bottlenecks (nginx normally pulls from memcached), I serve up a static version of the cached page from the filesystem. The results are very similar to my original test; I'm capped at around 180 RPS. Splitting the HTML page in half gives me double the RPS, so it's definitely limited by the size of the page. If I internally ApacheBench from the local server, I get consistent results of around 4k RPS on both the Full Page and the Half Page, at high transfer rates. Transfer rate: 62586.14 [Kbytes/sec] received If I AB from an external server, I get around 180RPS - same as the blitz.io results. How do I know it's not intentional throttling? If I benchmark from multiple external servers, all results become poor which leads me to believe the problem is in MY servers outbound traffic, not a download speed issue with my benchmarking servers / blitz.io. So I'm back to my conclusion that my server can't send data fast enough. Am I right? Are there other ways to interpret this data? Is the solution/optimization to set up multiple servers + load balancing that can each serve 180 hits per second? I'm quite new to server optimization, so I'd appreciate any confirmation interpreting this data. Outbound traffic Here's more information about the outbound bandwidth: The network graph shows a maximum output of 16 Mb/s: 16 megabits per second. Doesn't sound like much at all. Due to a suggestion about throttling, I looked into this and found that linode has a 50mbps cap (which I'm not even close to hitting, apparently). I had it raised to 100mbps. Since linode caps my traffic, and I'm not even hitting it, does this mean that my server should indeed be capable of outputting up to 100mbps but is limited by some other internal bottleneck? I just don't understand how networks at this large of a scale work; can they literally send data as fast as they can read from the HDD? Is the network pipe that big? In conclusion 1: Based on the above, I'm thinking I can definitely raise my 180RPS by adding an nginx load balancer on top of a multi nginx server setup at exactly 180RPS per server behind the LB. 2: If linode has a 50/100mbit limit that I'm not hitting at all, there must be something I can do to hit that limit with my single server setup. If I can read / transmit data fast enough locally, and linode even bothers to have a 50mbit/100mbit cap, there must be an internal bottleneck that's not allowing me to hit those caps that I'm not sure how to detect. Correct? I realize the question is huge and vague now, but I'm not sure how to condense it. Any input is appreciated on any conclusion I've made.
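    A rough back-of-envelope using only the numbers above (rounded, so treat it as a sanity check rather than proof):

        62586 KB/s / 4000 req/s          = approx. 15.6 KB per response (internal AB run)
        180 req/s x 15.6 KB x 8 bits     = approx. 22 Mbit/s of external throughput
        50 Mbit/s / (15.6 KB x 8 bits)   = approx. 400 req/s ceiling at the old cap

    In other words, 180 RPS of this page is on the order of 20 Mbit/s, roughly matching the 16 Mb/s graph and well below the 50/100 Mbit cap, which is consistent with some limit other than the nominal port speed being in play.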

    Read the article

  • Connection Timed Out - Simple outbound Postfix for PHP Contact form

    - by BLaZuRE
    Alright, so I only got Postfix for a PHP contact form that will send email to a single . I only want it to send out mail to a single external address ([email protected]). I have domain sub1.sub2.domain.com. I installed Postfix out of the Ubuntu repo, with minimal config changes. I cannot get Postfix to send mail externally (though it succeeds for internal accounts, which is unnecessary). The email simply defers if I generate an email using PHP mail(). If I try to form my own in telnet, right after rcpt to: [email][email protected][/email], I get a postfix/smtpd[31606]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 550 5.1.1 <[email protected]>: Recipient address rejected: example.com; from=<root@localhost> to=<[email protected]> proto=ESMTP helo=<localhost> when commenting out default_transport = error and relay_transport = error lines, I get the following: Jun 26 14:33:00 sub1 postfix/smtp[12191]: 2DA06F88206A: to=<[email protected]>, relay=none, delay=514, delays=409/0.01/105/0, dsn=4.4.1, status=deferred (connect to aspmx3.googlemail.com[74.125.127.27]:25: Connection timed out) Jun 26 14:36:36 sub1 postfix/smtp[12225]: connect to mta7.am0.yahoodns.net[98.139.175.224]:25: Connection timed out Jun 26 14:38:00 sub1 postfix/smtp[12225]: 22952F88208E: to=<[email protected]>, relay=none, delay=655, delays=550/0.01/105/0, dsn=4.4.1, status=deferred (connect to mta5.am0.yahoodns.net[67.195.168.230]:25: Connection timed out) My main.cf # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = sub1.sub2.domain.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = sub1.sub2.domain.com, localhost relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all default_transport = error relay_transport = error Also, a dig sub1.sub2.domain.com MX returns: ; <<>> DiG 9.7.0-P1 <<>> sub1.sub2.domain.com MX ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4853 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;sub1.sub2.domain.com. IN MX ;; AUTHORITY SECTION: sub2.domain.com. 600 IN SOA sub2.domain.com. sub5.domain.com. 
2012062915 7200 600 1209600 600 ;; Query time: 0 msec ;; SERVER: x.x.x.x#53(x.x.x.x) ;; WHEN: Fri Jun 29 16:35:00 2012 ;; MSG SIZE rcvd: 84 lsof -i returns empty netstat -t -a | grep LISTEN returns tcp 0 0 localhost:mysql *:* LISTEN tcp 0 0 *:ftp *:* LISTEN tcp 0 0 *:ssh *:* LISTEN tcp 0 0 localhost:ipp *:* LISTEN tcp 0 0 *:smtp *:* LISTEN tcp6 0 0 [::]:netbios-ssn [::]:* LISTEN tcp6 0 0 [::]:www [::]:* LISTEN tcp6 0 0 [::]:ssh [::]:* LISTEN tcp6 0 0 localhost:ipp [::]:* LISTEN tcp6 0 0 [::]:microsoft-ds [::]:* LISTEN
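    Two things stand out in that configuration (a hedged reading, not a definitive diagnosis): default_transport = error and relay_transport = error by themselves stop all non-local delivery, and once they are removed the timeouts to Google's and Yahoo's MX hosts look like outbound port 25 being filtered somewhere upstream. A quick test plus a submission-port smarthost is one common workaround; the relay hostname and credentials below are placeholders:

        # does anything answer on port 25 from this box at all?
        telnet aspmx.l.google.com 25

        # /etc/postfix/main.cf -- relay via a provider smarthost on 587 instead
        #default_transport = error
        #relay_transport = error
        relayhost = [smtp.yourprovider.example]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous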

    Read the article

  • Linux Kernel not passing through multicast UDP packets

    - by buecking
    Recently I've set up a new Ubuntu Server 10.04 and noticed my UDP server is no longer able to see any multicast data sent to the interface, even after joining the multicast group. I've got the exact same set up on two other Ubuntu 8.04.4 LTS machines and there is no problem receiving data after joining the same multicast group. The ethernet card is a Broadcom netXtreme II BCM5709 and the driver used is: b $ ethtool -i eth1 driver: bnx2 version: 2.0.2 firmware-version: 5.0.11 NCSI 2.0.5 bus-info: 0000:01:00.1 I'm using smcroute to manage my multicast registrations. b$ smcroute -d b$ smcroute -j eth1 233.37.54.71 After joining the group ip maddr shows the newly added registration. b$ ip maddr 1: lo inet 224.0.0.1 inet6 ff02::1 2: eth0 link 33:33:ff:40:c6:ad link 01:00:5e:00:00:01 link 33:33:00:00:00:01 inet 224.0.0.1 inet6 ff02::1:ff40:c6ad inet6 ff02::1 3: eth1 link 01:00:5e:25:36:47 link 01:00:5e:25:36:3e link 01:00:5e:25:36:3d link 33:33:ff:40:c6:af link 01:00:5e:00:00:01 link 33:33:00:00:00:01 inet 233.37.54.71 <------- McastGroup. inet 224.0.0.1 inet6 ff02::1:ff40:c6af inet6 ff02::1 So far so good, I can see that I'm receiving data for this multicast group. b$ sudo tcpdump -i eth1 -s 65534 host 233.37.54.71 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65534 bytes 09:30:09.924337 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212 09:30:09.947547 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212 09:30:10.108378 IP 192.164.1.120.58866 > 233.37.54.71.15574: UDP, length 268 09:30:10.196841 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212 ... I can also confirm that the interface is receiving mcast packets. b $ ethtool -S eth1 | grep mcast_pack rx_mcast_packets: 103998 tx_mcast_packets: 33 Now here's the problem. When I try to capture the traffic using a simple ruby UDP server I receive zero data! Here's a simple server that reads data send on port 15572 and prints the first two characters. This works on the two 8.04.4 Ubuntu Servers, but not the 10.04 server. require 'socket' s = UDPSocket.new s.bind("", 15572) 5.times do text, sender = s.recvfrom(2) puts text end If I send a UDP packet crafted in ruby to localhost, the server receives it and prints out the first two characters. So I know that the server above is working correctly. irb(main):001:0> require 'socket' => true irb(main):002:0> s = UDPSocket.new => #<UDPSocket:0x7f3ccd6615f0> irb(main):003:0> s.send("I2 XXX", 0, 'localhost', 15572) When I check the protocol statistics I see that InMcastPkts is not increasing. While on the other 8.04 servers, on the same network, received a few thousands packets in 10 seconds. b $ netstat -sgu ; sleep 10 ; netstat -sgu IcmpMsg: InType3: 11 OutType3: 11 Udp: 446 packets received 4 packets to unknown port received. 0 packet receive errors 461 packets sent UdpLite: IpExt: InMcastPkts: 4654 <--------- Same as below OutMcastPkts: 3426 InBcastPkts: 9854 InOctets: -1691733021 OutOctets: 51187936 InMcastOctets: 145207 OutMcastOctets: 109680 InBcastOctets: 1246341 IcmpMsg: InType3: 11 OutType3: 11 Udp: 446 packets received 4 packets to unknown port received. 0 packet receive errors 461 packets sent UdpLite: IpExt: InMcastPkts: 4656 <-------------- Same as above OutMcastPkts: 3427 InBcastPkts: 9854 InOctets: -1690886265 OutOctets: 51188788 InMcastOctets: 145267 OutMcastOctets: 109712 InBcastOctets: 1246341 If I try forcing the interface into promisc mode nothing changes. 
At this point I'm stuck. I've confirmed the kernel config has multicast enabled. Perhaps there are other config options I should be checking? b $ grep CONFIG_IP_MULTICAST /boot/config-2.6.32-23-server CONFIG_IP_MULTICAST=y Any thoughts on where to go from here?
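    Two avenues that may be worth checking (hedged; 10.04 ships with stricter networking defaults than 8.04 in a few places): reverse-path filtering on the receiving interface can silently discard multicast whose source is not routable back out that interface, and the Ruby receiver never joins the group itself (it relies on smcroute), so joining with IP_ADD_MEMBERSHIP in-process is worth trying:

        # relax reverse-path filtering (sketch; persist in /etc/sysctl.conf if it helps)
        sysctl -w net.ipv4.conf.all.rp_filter=0
        sysctl -w net.ipv4.conf.eth1.rp_filter=0

        # Ruby receiver that joins 233.37.54.71 itself
        require 'socket'
        require 'ipaddr'

        group = IPAddr.new("233.37.54.71").hton + IPAddr.new("0.0.0.0").hton
        s = UDPSocket.new
        s.setsockopt(Socket::IPPROTO_IP, Socket::IP_ADD_MEMBERSHIP, group)
        s.bind("0.0.0.0", 15572)
        5.times { text, _sender = s.recvfrom(2); puts text }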

    Read the article

  • Choice of an OS for a home ZFS NAS

    - by OlafM
    I am preparing a home NAS with an old Athlon 64 X2 3800+, 4 GB ECC RAM, Asus M2V MX motherboard, and a single 3 TB WDC Green (another one as mirror may be installed in the future). It's the cheapest solution I found that includes ECC memory and the higher energy consumption is offset by the lower (zero) cost of acquisition. The system will be used for: music storage and stream to other desktop computers; storage of the scanned dia slides (3-4k slides, 180 MB TIFF each one plus reduced quality JPEG version); stream of these photos to a local iPad 2 (maybe Plex App? not yet sure); (one additional) remote backup via rsync/ssh or ZFS send/receive. It will be controlled via remote ssh, maybe VNC, no monitor attached. Absolute requirement is a reliable ZFS solution, plus the ability to easily install packets/software/virtual machines and to update remotely (I will be the admin and I don't live near the NAS). I have mainly three options: NAS4free/FreeNAS OpenIndiana Solaris Express 11 (yeah yeah I know the license requirements, I will write a perl script on it to count it as development machine). Problems: NAS4free/FreeNAS (I tested only NAS4free) required embedded installation for remote upgrading, but full install for easy addition of software packets. Since I need at least AirVideo Server (linux/win) and Plex App (win/linux) to stream the photos and some videos to iPad (they both require virtualbox), but I cannot be there to install updates, NAS4free/FreeNAS are excluded. http://www.nas4free.org/general_information.html explains the issue: embedded can be remotely updated, full cannot. Solaris has also another advantage: Crashplan client supports Solaris and I'm already using it for other backups. I would like to leave the option open, even if I will be doing backups probably through zfs send/receive. NexentaStor was left out because zfs send/receive are not included in the free version. The question is now Solaris 11 Express over OpenIndiana. To ease the management, I will be using http://www.napp-it.org Which one would you suggest and why? I found lots of informations and it's difficult for me to decide. I think (from the napp-it manual) that Solaris has some additional options for SMB shares, but are they really needed at home? I think I won't even use ACLs, since normal unix-style permissions are enough. OpenIndiana has maybe more frequent updates (Solaris offers only security updates between releases), but again, do I need them? I don't think so. Moreover, this is a NAS that has to work and nothing else, I cannot risk having problems that require me to access the server. Isn't OpenIndiana a bit more... cutting edge (in the Solaris world)? I'm just asking, no need to focus on this for the answer :-) I would limit myself to these two options (SE11.1/OI) also because I will be making a NAS for me in the future (where high performances with Mac shares are also required) and Solaris has kernel support for AFP. I will use this server to gather experience as well. After this long question, thanks in advance! If you need additional info, let me know and I will update this post. UPDATES Given the first answers, I will strongly suggest the person paying the hardware to insert a second HD. Better 2x2TB than 1x3TB (3 TB is oversized anyway). I was trying to keep the initial costs down to spread them over a longer period, but better having something good from the beginning.
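    Since zfs send/receive over ssh is the intended backup path, a minimal sketch of that workflow (pool, dataset and host names are placeholders):

        # initial full replication
        zfs snapshot tank/data@weekly-1
        zfs send tank/data@weekly-1 | ssh backuphost zfs receive backup/data

        # later runs: send only the delta between two snapshots
        zfs snapshot tank/data@weekly-2
        zfs send -i tank/data@weekly-1 tank/data@weekly-2 | ssh backuphost zfs receive backup/data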

    Read the article

  • Log transport and aggregation at scale

    - by markdrayton
    How're you analysing log files from UNIX/Linux machines? We run several hundred servers which all generate their own log files, either directly or through syslog. I'm looking for a decent solution to aggregate these and pick out important events. This problem breaks down into 3 components: 1) Message transport The classic way is to use syslog to log messages to a remote host. This works fine for applications that log into syslog but less useful for apps that write to a local file. Solutions for this might include having the application log into a FIFO connected to a program to send the message using syslog, or by writing something that will grep the local files and send the output to the central syslog host. However, if we go to the trouble of writing tools to get messages into syslog would we be better replacing the whole lot with something like Facebook's Scribe which offers more flexibility and reliability than syslog? 2) Message aggregation Log entries seem to fall into one of two types: per-host and per-service. Per-host messages are those which occur on one machine; think disk failures or suspicious logins. Per-service messages occur on most or all of the hosts running a service. For instance, we want to know when Apache finds an SSI error but we don't want the same error from 100 machines. In all cases we only want to see one of each type of message: we don't want 10 messages saying the same disk has failed, and we don't want a message each time a broken SSI is hit. One approach to solving this is to aggregate multiple messages of the same type into one on each host, send the messages to a central server and then aggregate messages of the same kind into one overall event. SER can do this but it's awkward to use. Even after a couple of days of fiddling I had only rudimentary aggregations working and had to constantly look up the logic SER uses to correlate events. It's powerful but tricky stuff: I need something which my colleagues can pick up and use in the shortest possible time. SER rules don't meet that requirement. 3) Generating alerts How do we tell our admins when something interesting happens? Mail the group inbox? Inject into Nagios? So, how're you solving this problem? I don't expect an answer on a plate; I can work out the details myself but some high-level discussion on what is surely a common problem would be great. At the moment we're using a mishmash of cron jobs, syslog and who knows what else to find events. This isn't extensible, maintainable or flexible and as such we miss a lot of stuff we shouldn't. Updated: we're already using Nagios for monitoring which is great for detected down hosts/testing services/etc but less useful for scraping log files. I know there are log plugins for Nagios but I'm interested in something more scalable and hierarchical than per-host alerts.
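    For the "apps that write to a local file" gap mentioned above, the low-tech route is often enough while evaluating heavier options like Scribe (a sketch; the tag and facility are arbitrary):

        # follow an application log and feed new lines into syslog,
        # which then forwards to the central host like any other message
        tail -F /var/log/myapp/app.log | logger -t myapp -p local3.info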

    Read the article

  • SQLAuthority News – SafePeak’s SQL Server Performance Contest – Winners

    - by pinaldave
    SafePeak, the unique automated SQL performance acceleration and performance tuning software vendor, announced the winners of their SQL Performance Contest 2011. The contest quite unique: the writer of the best / most interesting and most community liked “performance story” would win an expensive gadget. The judges were the community DBAs that could participating and Like’ing stories and could also win expensive prizes. Robert Pearl SQL MVP, was the contest supervisor. I liked most of the stories and decided then to contact SafePeak and suggested to participate in the give-away and they have gladly accepted the same. The winner of best story is: Jason Brimhall (USA) with a story about a proc with a fair amount of business logic. Congratulations Jason! The 3 participants won the second prize of $100 gift card on amazon.com are: Michael Corey (USA), Hakim Ali (USA) and Alex Bernal (USA). And 5 participants won a printed copy of a book of mine (Book Reviews of SQL Wait Stats Joes 2 Pros: SQL Performance Tuning Techniques Using Wait Statistics, Types & Queues) are: Patrick Kansa (USA), Wagner Bianchi (USA), Riyas.V.K (India), Farzana Patwa (USA) and Wagner Crivelini (Brazil). The winners are welcome to send safepeak their mail address to receive the prizes (to “info ‘at’ safepeak.com”). Also SafePeak team asked me to welcome you all to continue sending stories, simply because they (and we all) like to read interesting stuff) as well as to send them ideas for future contests. You can do it from here: www.safepeak.com/SQL-Performance-Contest-2011/Submit-Story Congratulations to everybody! I found this very funny video about SafePeak: It looks like someone (maybe the vendor) played with video’s once and created this non-commercial like video: SafePeak dynamic caching is an immediate plug-n-play performance acceleration and scalability solution for cloud, hosted and business SQL server applications. By caching in memory result sets of queries and stored procedures, while keeping all those cache correct and up to date using unique patent pending technology, SafePeak can fix SQL performance problems and bottlenecks of most applications – most importantly: without actual code changes. By the way, I checked their website prior this contest announcement and noticed that they are running these days a special end year promotion giving between 30% to 45% discounts. Since the installation is quick and full testing can be done within couple of days – those have the need (performance problems) and have budget leftovers: I suggest you hurry. A free fully functional trial is here: www.safepeak.com/download, while those that want to start with a quote should ping here www.safepeak.com/quote. Good luck! Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sort of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or are extravagantly expensive. In  the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running I shouldn’t have to have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that the testing can be set up quickly so that it can be done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the why’s and how’s and some background continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around what’s available for Web load testing solutions that would work for our whole team which consisted of a few developers and a couple of IT guys both of which needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because of the bandwidth required to test over the open Web can be enormous. 
When I asked around on Twitter what people were using– I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end skus of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio Skus, and those are mucho Dinero ($$$) – just for the load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target serves which is useful, but in most cases – just plain overkill and only distracts from what I tend to be ultimately interested in: Reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember early days of Web testing when Microsoft had the WAST (Web Application Stress Tool) tool, which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL),  but the idea behind it was excellent: Create tests quickly and easily and provide a decent engine to run it locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death as so many of Microsoft’s tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tools was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind Web Surge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results along with the ability to easily save one or more of those URLs. A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements. 
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site has a lot of info of basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of Urls that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web Browsers, Windows Desktop applications that call services, your own applications using the built in Capture tool. With this tool you can capture anything HTTP -SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore so basically anything you can capture with Fiddler you can also capture with Web Surge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture session without having to actually install WebSurge or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem perhaps. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide url filters in the configuration file – filters allow to provide filter strings that if contained in a url will cause requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, css and script files etc. Often you’re not interested in the load characteristics of these static and usually cached resources as they just add noise to tests and often skew the overall url performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note, that in order to capture SSL requests you’ll have to install the Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured make up a Session. Sessions can be saved and restored easily as they use a very simple text format that simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line. 
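    To make that concrete, a hand-written session might look roughly like the following; the URLs are invented and the separator line is only a placeholder, so check a file WebSurge has captured itself for the exact string it expects:

        GET http://www.west-wind.com/websurge/testpage.aspx HTTP/1.1
        Accept: text/html
        User-Agent: WebSurge

        ----------------------------------------------------------------

        POST http://www.west-wind.com/websurge/api/login HTTP/1.1
        Content-Type: application/x-www-form-urlencoded

        username=demo&password=demo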
The headers are standard HTTP headers except that the full URL instead of just the domain relative path is stored as part of the 1st HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand manually or under program control writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that 1st line contains the full URL instead of the domain relative path and Host: header. The rest of each header are just plain standard HTTP headers with each individual URL isolated by a separator line. The format used here also uses what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. Urls can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I’ve also found this form as an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually an easy way for REST presentations and I find the simple UI flow actually easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also overwrite cookies used as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point which would invalidate a test. Using the Options dialog you can actually override the cookie: which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste into the cookie field, to replace the existing Cookie with the new one that is now valid. Likewise you can easily replace the domain so if you captured urls on west-wind.com and now you want to test on localhost you can do that easily easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start as all threads run the same requests simultaneously which can sometimes skew the results of the first few minutes of a test. While sessions run some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count is currently for all requests. 
Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you slight idea what’s happening with actual requests, once a lot of requests are processed, this UI updating actually adds a lot of CPU overhead to the application which may cause the actual load generated to be reduced. If you are running a 1000 requests a second there’s not much to see anyway as requests roll by way too fast to see individual lines anyway. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as breakdown by each URL in the session. Both success and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed or you can also open the HTML document in your default Web Browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This particularly useful for errors so you can quickly see and copy what request data was used and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Request per Second chart which can be accessed from the Charts menu or shortcut. Here’s what it looks like:   Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file. By default when it runs WebSurgeCli shows progress every second showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration which allows checking for failures perhaps or hitting a specific requests per second count etc. It’s also nice to use this as quick and dirty URL test facility similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI. 
    Current Status Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price which is $99 at the moment which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it already has provided a ton of value for me and the work I do that made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section. Microsoft MVPs and Insiders get a free License If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to [email protected]. Resources For more info on WebSurge and to download it to try it out, use the following links. West Wind WebSurge Home Download West Wind WebSurge Getting Started with West Wind WebSurge Video © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • Tools to Help Post Content On Your WordPress Blog

    - by Matthew Guay
    Now that you’ve got a nice blog, you want to do more with it and start posting content.  Here we look at some tools that will allow you to post directly to your WordPress blog. Writing a new blog post is easy with WordPress as we saw in our previous post about Starting your own WordPress blog.  The web editor gives you a lot of features and even lets you edit your post’s source code if you enjoy hacking HTML.  There are other tools that will allow you to post content, here we look at how you can post with dedicated apps, browser plugins, and even by email. Windows Live Writer Windows Live Writer (part of the Windows Live Essentials Suite) is a great app for posting content to your blog.  This free program for Microsoft lets you post content to a variety of blogging services, including Blogger, Typepad, LiveJournal, and of course WordPress.  You can write blog posts directly from its Word-like editor, complete with pictures and advanced formatting.  Even if you’re offline, you can still write posts and save them for when you’re online again. For more information about installing Live writer, check out our article on how to Install Windows Live Essentials In Windows 7. Once Live Writer is installed, open it to add your blog.  If you already had Live Writer installed and configured for a blog, you can add your new blog, too.  Just click your blog’s name in the top right corner, and select “Add blog account”. Select “Other blog service” to add your WordPress blog to Writer, and click Next.   Enter your blog’s web address, and your username and password.  Check Remember my password so you don’t have to enter it every time you write something. Writer will analyze your blog and setup your account. During the setup process it may ask to post a temporary post.  This will let you preview blog posts using your blog’s real theme, which is helpful, so click Yes. Finally, add your Blog’s name, and click Finish. You can now use the rich editor to write and add content to a new blog post.   Select the Preview tab to see how your post will look on your blog… Or, if you’re a HTML geek, select the Source tab to edit the code of your blog post. From the bottom of the window, you can choose categories, insert tags, and even schedule the post to publish on a different day.  Live Writer is fully integrated with WordPress; you’re not missing anything by using the desktop editor. If you want to edit a post you’ve already published, click the Open button and select the post.  You can chose and edit any post, including ones you published via the web interface or other editors. Add Multimedia Content to your Posts with Live Writer Back in the Edit tab, you can add pictures, videos and more from the sidebar.  Select what you want to insert. Pictures If you insert a picture, you can add many nice borders and designs to it. Or, you can even add artistic effects from the Effects tab in the sidebar. Photo Gallery If you want to post several pictures, say some of your vacation shots, then inserting a picture gallery may be the best option.  Select Insert Photo Gallery in the sidebar, and then choose the pictures you want in the gallery. Once the gallery is inserted, you can choose from several styles to showcase your pictures. When you post the blog, you will be asked to sign in with your Windows Live ID as the gallery pictures will be stored in the free Skydrive storage service. 
Your blog readers can see the preview of your pictures directly on your blog, and then can view each individual picture, download them, or see a slideshow online via the link. Video If you want to add a video to your blog post, select Video from the sidebar as above.  You can select a video that’s already online, or you can choose a new video from file and upload it via YouTube directly from Windows Live Writer.   Note that you will have to sign in with your YouTube account to upload videos to YouTube, so if you’re not logged in you’ll be prompted to do so when you click Insert. Geek Tip:  If you ever want to copy your Live Writer settings to another computer, check out our article on how to Backup Your Windows Live Writer Settings. Microsoft Office Word Word 2007 and 2010 also let you post content directly to your blog.  This is especially nice if you’ve already typed up a document and think it would be good on your Blog as well.  Check out our in-depth tutorial on posting blog posts via Word 2007 using Word 2007 as a blogging tool. This works in Word 2010 too, except the Office Orb has been replaced by the new Backstage view.  So, in Word 2010, to start a new blog post, click File \ New then select Blog post.  Proceed as you would in Word 2007 to add your blog settings and post the content you want. Or, if you’ve already written a document and want to post it, select File \ Share (or Save and Send in the final version of Word 2010), and then click Publish as Blog Post.  If you haven’t setup your blog account yet, set it up as shown in the Word 2007 article. Post Via Email Most of us use email daily, and already have our favorite email app or service.  Whether on your desktop or mobile phone, it’s easy to create rich emails and add content.  WordPress lets you generate a unique email address that you can use to easily post content and email to your blog.  Just compose your email with the subject as the title of your post, and send it to this unique address.  Your new post will be up in minutes. To active this feature, click the My Account button in the top menu bar in your WordPress.com account, and select My Blogs. Click the Enable button under Post by Email beside your blog’s name.   Now you’ll have a private email you can use to post to your blog.  Anything you send to this email will be posted as a new post.  If you think your email may be compromised, click Regenerate to get a new publishing email address. Any email program or webapp now is a blog post editor.  Feel free to use rich formatting or insert pictures; it all comes through great.  This is also a great way to post to your blog from your mobile device.  Whether you’re using webmail or a dedicated email client on your phone, you can now blog from anywhere.   Mobile Applications WordPress also offer dedicated applications for blogging directly from your mobile device.  You can write new posts, edit existing ones, and manage comments all from your Smartphone.  Currently they offer apps for iPhone, Android, and Blackberry.  Check them out at the link below. Conclusion Whether you want to write from your browser or email a post to your blog, WordPress is flexible enough to work right along with your preferences.  However you post, you can be sure that it will look professional and be easily accessible with your WordPress blog. Download Windows Live Writer Download WordPress apps for your mobile device Similar Articles Productive Geek Tips Quick Tip: Set a Future Date for a Post in WordPressAdd Social Bookmarking (Digg This!) 

    Read the article

  • How to throttle email server wide on a shared server?

    - by fdsa
    I have multiple programs that send email on a shared server with a relatively low email limit. These programs are completely separate and can each individually throttle mail but cannot do so in relation to the others. Currently, whenever the hourly limit is reached, our host just starts dropping the emails. They say they have no way to change this behavior and basically suggested that I ask around. Does anyone know of any programs that will throttle email server wide on a shared server?
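    One common approach, sketched below purely for illustration, is to route all outgoing mail through a single relay queue that drains at a fixed rate, so the hourly cap is enforced in one place rather than per application. The SMTP host and the limit are placeholders, and this is a shape of a solution rather than a specific product recommendation.

    using System.Collections.Concurrent;
    using System.Net.Mail;
    using System.Threading;

    // Illustrative only: every local program hands its mail to this relay instead of
    // talking to the SMTP server directly, so one queue enforces the shared hourly cap.
    class ThrottledMailRelay
    {
        private readonly ConcurrentQueue<MailMessage> _queue = new ConcurrentQueue<MailMessage>();
        private readonly SmtpClient _smtp;
        private readonly Timer _timer;   // held as a field so the timer is not garbage collected

        public ThrottledMailRelay(string smtpHost, int messagesPerHour)
        {
            _smtp = new SmtpClient(smtpHost);
            // Space the sends evenly across the hour instead of bursting up to the limit.
            int intervalMs = 3600000 / messagesPerHour;
            _timer = new Timer(SendOne, null, 0, intervalMs);
        }

        public void Enqueue(MailMessage message)
        {
            _queue.Enqueue(message);
        }

        private void SendOne(object state)
        {
            MailMessage message;
            if (_queue.TryDequeue(out message))
                _smtp.Send(message);
        }
    }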

    Read the article

  • Building services with the .NET framework Cont’d

    - by Allan Rwakatungu
    In my previous blog I wrote an introductory post on services and how you can build services using the .NET Framework's Windows Communication Foundation (WCF). In this post I will show how to develop a real-world application using WCF. The problem During the last meeting we realized developers in Uganda are not so cool – they don't use Twitter, so they may not get the latest news and updates from the technology world. We also noticed they mostly use kabiriti phones (jokes). With their kabiriti phones they are unable to access the Twitter web client or alternative Twitter mobile clients like tweetdeck, twirl or tweetie. However, the kabiriti phones support SMS (Yeeeeeeei). So what we are going to do to make these developers cool and keep them updated is enable them to receive tweets via SMS. We shall also enable them to develop their own applications that can extend this functionality. Analysis Thanks to services and open APIs, solving our problem is going to be easy.  1. To get tweets we can use the Twitter service for FREE. 2. To send SMS we shall use www.clickatell.com/ as they can send SMS to any country in the world. Besides, we could not find any local service that offers APIs for sending SMS :(. 3. To enable developers to integrate with our application so that they can extend it and build even cooler applications, we use WCF. In addition, because connectivity might be an issue, we decided to use WCF because it has built-in queuing features. We also chose WCF because this is a post about .NET and WCF :). The Code Accessing the tweets To consume Twitter's REST API we shall use the WCF REST starter kit. Like its name indicates, the REST starter kit is a set of .NET Framework classes that enable developers to create and access REST-style services (like the Twitter service). 
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.Http;
    using System.Net;
    using System.Xml.Linq;

    namespace UG.Demo
    {
        public class TwitterService
        {
            public IList<TwitterStatus> SomeMethodName()
            {
                //Connect to the twitter service (HttpClient is part of the REST starter kit classes)
                HttpClient cl = new HttpClient("http://api.twitter.com/1/statuses/friends_timeline.xml");
                //Supply your basic authentication credentials
                cl.TransportSettings.Credentials = new NetworkCredential("ourusername", "ourpassword");
                //issue an http GET
                HttpResponseMessage resp = cl.Get();
                //ensure we got response 200
                resp.EnsureStatusIsSuccessful();
                //use XLinq to parse the REST XML
                var statuses = from r in resp.Content.ReadAsXElement().Descendants("status")
                               select new TwitterStatus
                               {
                                   User = r.Element("user").Element("screen_name").Value,
                                   Status = r.Element("text").Value
                               };
                return statuses.ToList();
            }
        }

        public class TwitterStatus
        {
            public string User { get; set; }
            public string Status { get; set; }
        }
    }

    Sending SMS

    public class SMSService
    {
        public void Send(string phone, string message)
        {
            HttpClient cl1 = new HttpClient();
            //the clickatell XML format for sending SMS
            string xml = String.Format("<clickAPI><sendMsg><api_id>3239621</api_id><user>ourusername</user><password>ourpassword</password><to>{0}</to><text>{1}</text></sendMsg></clickAPI>", phone, message);
            //Post form data
            HttpUrlEncodedForm form = new HttpUrlEncodedForm();
            form.Add("data", xml);
            System.Net.ServicePointManager.Expect100Continue = false;
            string uri = @"http://api.clickatell.com/xml/xml";
            HttpResponseMessage resp = cl1.Post(uri, form.CreateHttpContent());
            resp.EnsureStatusIsSuccessful();
        }
    }
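    The post does not show the WCF contract that exposes this functionality to other developers, so the following is only a guess at its shape; the service, operation, and member names are invented for illustration, and it assumes the TwitterService and SMSService classes above live in the UG.Demo namespace.

    using System.Collections.Generic;
    using System.ServiceModel;

    // Hypothetical contract: one way the tweet-to-SMS feature might be exposed over WCF
    // so that other developers can build on it. Not taken from the original post.
    [ServiceContract]
    public interface ITweetSmsService
    {
        // Register a phone number that should receive tweets as text messages.
        [OperationContract]
        void Subscribe(string phoneNumber);

        // Push the latest timeline out to every subscriber.
        [OperationContract(IsOneWay = true)]
        void BroadcastLatestTweets();
    }

    public class TweetSmsService : ITweetSmsService
    {
        private static readonly List<string> Subscribers = new List<string>();

        public void Subscribe(string phoneNumber)
        {
            Subscribers.Add(phoneNumber);
        }

        public void BroadcastLatestTweets()
        {
            // Reuses the classes shown above (assumed to be in UG.Demo).
            var twitter = new UG.Demo.TwitterService();
            var sms = new UG.Demo.SMSService();
            foreach (var status in twitter.SomeMethodName())
                foreach (var subscriber in Subscribers)
                    sms.Send(subscriber, status.User + ": " + status.Status);
        }
    }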

    Read the article

  • ESB Toolkit 2.0 EndPointConfig (HTTPS with WCF-BasicHttp and the ESB Toolkit 2.0)

    - by Andy Morrison
    Earlier this week I had an ESB endpoint (Off-Ramp in ESB parlance) that I was sending to over http using WCF-BasicHttp.  I needed to switch the protocol to https: which I did by changing my UDDI Binding over to https:  No problem from a management perspective; however, when I tried to run the process I saw this exception: Event Type:                     Error Event Source:                BizTalk Server 2009 Event Category:            BizTalk Server 2009 Event ID:   5754 Date:                                    3/10/2010 Time:                                   2:58:23 PM User:                                    N/A Computer:                       XXXXXXXXX Description: A message sent to adapter "WCF-BasicHttp" on send port "SPDynamic.XXX.SR" with URI "https://XXXXXXXXX.com/XXXXXXX/whatever.asmx" is suspended.  Error details: System.ArgumentException: The provided URI scheme 'https' is invalid; expected 'http'. Parameter name: via    at System.ServiceModel.Channels.TransportChannelFactory`1.ValidateScheme(Uri via)    at System.ServiceModel.Channels.HttpChannelFactory.ValidateCreateChannelParameters(EndpointAddress remoteAddress, Uri via)    at System.ServiceModel.Channels.HttpChannelFactory.OnCreateChannel(EndpointAddress remoteAddress, Uri via)    at System.ServiceModel.Channels.ChannelFactoryBase`1.InternalCreateChannel(EndpointAddress address, Uri via)    at System.ServiceModel.Channels.ChannelFactoryBase`1.CreateChannel(EndpointAddress address, Uri via)    at System.ServiceModel.Channels.ServiceChannelFactory.ServiceChannelFactoryOverRequest.CreateInnerChannelBinder(EndpointAddress to, Uri via)    at System.ServiceModel.Channels.ServiceChannelFactory.CreateServiceChannel(EndpointAddress address, Uri via)    at System.ServiceModel.Channels.ServiceChannelFactory.CreateChannel(Type channelType, EndpointAddress address, Uri via)    at System.ServiceModel.ChannelFactory`1.CreateChannel(EndpointAddress address, Uri via)    at System.ServiceModel.ChannelFactory`1.CreateChannel()    at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.GetChannel[TChannel](IBaseMessage bizTalkMessage, ChannelFactory`1& cachedFactory)    at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.SendMessage(IBaseMessage bizTalkMessage)  MessageId:  {1170F4ED-550F-4F7E-B0E0-1EE92A25AB10}  InstanceID: {1640C6C6-CA9C-4746-AEB0-584FDF7BB61E} I knew from a previous experience that I likely needed to set the SecurityMode setting for my Send Port.  But how do you do this for a Dynamic port (which I was using since this is an ESB solution)? Within the UDDI portal you have to add an additional Instance Info to your Binding named: EndPointConfig  Then you have to set its value to:  SecurityMode=Transport Like this:    The EndPointConfig is how the ESB Toolkit 2.0 provides extensibility for the various transports.  To see what the key-value pair options are for a given transport, open up an itinerary and change one of your resolvers to a “static” resolver by setting the “Resolver Implementation” to Static.  Then select a “Transport Name” ”, for instance to WCF-BasicHttp.  At this point you can then click on the “EndPoint Configuration” property for to see an adapter/ramp specific properties dialog (key-value pairs.)    Here’s the dialog that popped up for WCF-BasicHttp:   I simply set the SecurityMode to Transport.  Please note that you will get different properties within the window depending on the Transport Name you select for the resolver. 
When you are done with your settings, export the itinerary to disk and find that XML; then find that resolver's XML within the file.  It will look like endpointConfig=SecurityMode=Transport in this case.  Note that if you set additional properties you will have additional key-value pairs after endpointConfig=. Copy that string and paste it into the UDDI portal for your Binding's EndPointConfig Instance Info value.
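    For readers more used to plain WCF than to the itinerary designer, SecurityMode=Transport in the EndPointConfig maps onto the binding's security mode. The sketch below (the contract and endpoint address are placeholders) shows the code-level equivalent, and why the default mode rejects an https: URI with the exception quoted above.

    using System.ServiceModel;

    // Placeholder contract standing in for the real off-ramp service.
    [ServiceContract]
    public interface IRemoteService
    {
        [OperationContract]
        string Ping();
    }

    class TransportSecurityDemo
    {
        static void Main()
        {
            // With the default BasicHttpSecurityMode.None, pointing the binding at an
            // https: address throws "The provided URI scheme 'https' is invalid".
            var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);

            // Hypothetical endpoint -- stands in for the real https: off-ramp URI.
            var address = new EndpointAddress("https://server.example.com/Service/whatever.asmx");

            var factory = new ChannelFactory<IRemoteService>(binding, address);
            IRemoteService channel = factory.CreateChannel();
            // ... call the service through 'channel' ...
        }
    }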

    Read the article

  • Monitor your Hard Drive’s Health with Acronis Drive Monitor

    - by Matthew Guay
    Are you worried that your computer’s hard drive could die without any warning?  Here’s how you can keep tabs on it and get the first warning signs of potential problems before you actually lose your critical data. Hard drive failures are one of the most common ways people lose important data from their computers.  As more of our memories and important documents are stored digitally, a hard drive failure can mean the loss of years of work.  Acronis Drive Monitor helps you avert these disasters by warning you at the first signs your hard drive may be having trouble.  It monitors many indicators, including heat, read/write errors, total lifespan, and more. It then notifies you via a taskbar popup or email that problems have been detected.  This early warning lets you know ahead of time that you may need to purchase a new hard drive and migrate your data before it’s too late. Getting Started Head over to the Acronis site to download Drive Monitor (link below).  You’ll need to enter your name and email, and then you can download this free tool. Also, note that the download page may ask if you want to include a trial of their for-pay backup program.  If you wish to simply install the Drive Monitor utility, click Continue without adding. Run the installer when the download is finished.  Follow the prompts and install as normal. Once it’s installed, you can quickly get an overview of your hard drives’ health.  Note that it shows 3 categories: Disk problems, Acronis backup, and Critical Events.  On our computer, we had Seagate DiskWizard, an image backup utility based on Acronis Backup, installed, and Acronis detected it. Drive Monitor stays running in your tray even when the application window is closed.  It will keep monitoring your hard drives, and will alert you if there’s a problem. Find Detailed Information About Your Hard Drives Acronis’ simple interface lets you quickly see an overview of how the drives on your computer are performing.  If you’d like more information, click the link under the description.  Here we see that one of our drives have overheated, so click Show disks to get more information. Now you can select each of your drives and see more information about them.  From the Disk overview tab that opens by default, we see that our drive is being monitored, has been running for a total of 368 days, and that it’s health is good.  However, it is running at 113F, which is over the recommended max of 107F.   The S.M.A.R.T. parameters tab gives us more detailed information about our drive.  Most users wouldn’t know what an accepted value would be, so it also shows the status.  If the value is within the accepted parameters, it will report OK; otherwise, it will show that has a problem in this area. One very interesting piece of information we can see is the total number of Power-On Hours, Start/Stop Count, and Power Cycle Count.  These could be useful indicators to check if you’re considering purchasing a second hand computer.  Simply load this program, and you’ll get a better view of how long it’s been in use. Finally, the Events tab shows each time the program gave a warning.  We can see that our drive, which had been acting flaky already, is routinely overheating even when our other hard drive was running in normal temperature ranges. Monitor Acronis Backups And Critical Errors In addition to monitoring critical stats of your hard drives, Acronis Drive Monitor also keeps up with the status of your backup software and critical events reported by Windows.  
You can access these from the front page, or via the links on the left-hand sidebar.  If you have any edition of any Acronis Backup product installed, it will show that it was detected.  Note that it can only monitor the backup status of the newest versions of Acronis Backup and True Image. If no Acronis backup software is installed, it will show a warning that the drive may be unprotected and will give you a link to download Acronis backup software.   If you have another backup utility installed that you wish to monitor yourself, click Configure backup monitoring, and then disable monitoring on the drives you're monitoring yourself. Finally, you can view any detected critical events from the Critical events tab on the left. Get Emailed When There's a Problem One of Drive Monitor's best features is the ability to send you an email whenever there's a problem.  Since this program can run on any version of Windows, including the Server and Home Server editions, you can use this feature to stay on top of your hard drives' health even when you're not nearby.  To set this up, click Options in the top left corner. Select Alerts on the left, and then click the Change settings link to set up your email account. Enter the email address at which you wish to receive alerts, and a name for the program.  Then, enter the outgoing mail server settings for your email.  If you have a Gmail account, enter the following information: Outgoing mail server (SMTP): smtp.gmail.com Port: 587 Username and Password: your Gmail address and password Check the Use encryption box, and then select TLS from the encryption options.   It will now send a test message to your email account, so check and make sure it arrived OK. Now you can choose to have the program automatically email you when warnings and critical alerts appear, and also to have it send regular disk status reports.   Conclusion Whether you've got a brand new hard drive or one that's seen better days, knowing the real health of your drive is one of the best ways to be prepared before disaster strikes.  It's no substitute for regular backups, but it can help you avert problems.  Acronis Drive Monitor is a nice tool for this, and although we wish it weren't so centered around Acronis' backup offerings, we still found it well worth having. Link Download Acronis Drive Monitor (registration required)
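    The Gmail settings above (smtp.gmail.com, port 587, TLS) can be sanity-checked outside Drive Monitor with a few lines of C#; a minimal sketch with placeholder credentials:

    using System.Net;
    using System.Net.Mail;

    class SmtpSettingsTest
    {
        static void Main()
        {
            // Same values the article suggests for a Gmail account:
            // outgoing server smtp.gmail.com, port 587, TLS encryption.
            var smtp = new SmtpClient("smtp.gmail.com", 587);
            smtp.EnableSsl = true;   // TLS on port 587
            smtp.Credentials = new NetworkCredential("you@gmail.com", "your-password");

            // Send yourself a test message, much like Drive Monitor's own test email.
            smtp.Send("you@gmail.com", "you@gmail.com",
                      "Drive Monitor SMTP test", "If you can read this, the alert settings should work.");
        }
    }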

    Read the article

  • Visual Studio 2010 Launch April 12 Las Vegas

    - by Dave Campbell
    I'm going to be in 'Vegas for the Launch on Monday, I'm not sure what time I'm getting in on Sunday, but I'm staying over Monday night as well, so if you're going to be around in that time-frame, send me an email! Bummer to not be there for Silverlight on Tuesday, but hey... watching Scott Guthrie is always worth the drive :)

    Read the article

  • Automate delivery of Crystal Reports With a Windows Service

    In this article, Vince demonstrates the creation of a Windows Service to automatically run and send a Crystal Report as an email attachment. After a basic introduction, he examines the creation of the database and windows service with the help of relevant source code and explanations. Towards the end of the article, Vince discusses the steps to be followed in order to install the windows service.
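    The article itself walks through the full implementation; the sketch below only shows the overall shape such a service takes. The Crystal Reports export call is stubbed out because the exact API depends on the runtime version in use, and every address and path is a placeholder.

    using System;
    using System.Net.Mail;
    using System.ServiceProcess;
    using System.Timers;

    // Rough skeleton: a timer fires, the report is exported to disk, and the file goes
    // out as an email attachment.
    public class ReportMailerService : ServiceBase
    {
        private Timer _timer;

        protected override void OnStart(string[] args)
        {
            _timer = new Timer(24 * 60 * 60 * 1000);   // run once a day (illustrative interval)
            _timer.Elapsed += delegate { RunAndMailReport(); };
            _timer.Start();
        }

        protected override void OnStop()
        {
            _timer.Stop();
        }

        private void RunAndMailReport()
        {
            string pdfPath = ExportReportToPdf();      // the Crystal Reports work happens here

            var message = new MailMessage("reports@example.com", "manager@example.com",
                                          "Daily report", "The latest report is attached.");
            message.Attachments.Add(new Attachment(pdfPath));
            new SmtpClient("mail.example.com").Send(message);
        }

        private string ExportReportToPdf()
        {
            // Placeholder: in the real article this is where the report document is loaded
            // and exported to disk, which depends on the Crystal Reports runtime.
            return @"C:\Reports\daily.pdf";
        }
    }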

    Read the article

  • Introducing AutoVue Document Print Service

    - by celine.beck
    We recently announced the availability of our new AutoVue Document Print Service products. For more information, please read the article entitled Print Any Document Type with AutoVue Document Print Services that was posted on our blog. The AutoVue Document Print Service products help address a trivial, yet very common challenge: printing and batch printing documents. The AutoVue Document Print Service is a Web-Services based interface, which allows developers to complement their print server solutions by leveraging AutoVue's printing capabilities within broader enterprise applications like Asset Lifecycle Management, Product Lifecycle Management, Enterprise Content Management solutions, etc. This means that you can leverage the AutoVue Document Print Service products as part of your printing solution to automate the printing of virtually any document type required in any business process. Clients that consume AutoVue's Document Print Service can be written in any language (for example Java or .NET) as long as they understand Web Services Description Language (WSDL) and communicate using Simple Object Access Protocol (SOAP). The print solution consists of three main components, as described in the diagram below: a print server (not included in the AutoVue Document Print Service offering) that will interact with your application to identify the files that need to be printed, the printer to send each file, as well as the print options needed for each file (paper size, page orientation, etc), and collate the print job requests. The print server will also take care of calling the AutoVue Document Print Service to perform the actual printing. The AutoVue Document Print Services send files to a printer for printing. The AutoVue Document Print Service products leverage AutoVue's format- and platform agnostic technology to let you print/batch virtually any type of files, without requiring the authoring application installed on your machine. and Printers As shown above, you can trigger printing from your application either programmatically through automated business processes or manually through human interaction. If documents that need to be printed from your application are stored inside a content repository/Document Management System (DMS) such as Oracle Universal Content Management System (UCM), then the Print Server will need to identify the list of documents and pass the ID of each document to the AutoVue DPS to print. In this case, AutoVue DPS leverages the AutoVue VueLink integration (note: AutoVue VueLink integrations are pre-packaged AutoVue integrations with most common enterprise systems. Check our Website for more information on the subject) to fetch documents out of the document management system for printing. In lieu of the AutoVue VueLink integration, you can also leverage the AutoVue Integration Software Development Kit (iSDK) to build your own connector. If the documents you need to print from your application are not stored in a content management system, the Print Server will need to ensure that files are made available to the AutoVue Document Print Service. The Print Server could for example fetch the files out of your application or an extension to the application could be developed to fetch the files and make them available to the AutoVue DPS. More information on methods to pass on file information to the AutoVue Document Print Service products can be found in the AutoVue Document Print Service Overview documentation available on the Oracle Technology Network. 
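    Since the AutoVue Document Print Service is described by a WSDL and spoken to over SOAP, any language with a SOAP stack can drive it. The sketch below is purely illustrative: the proxy class, request shape, and operation name are invented stand-ins for whatever the real DPS WSDL would generate (for example via svcutil), and are not taken from the product documentation.

    using System;

    // Invented request shape -- the real one comes from the generated proxy.
    class PrintRequest
    {
        public string DocumentId;
        public string PrinterName;
        public string PaperSize;
        public bool Landscape;
    }

    // Invented proxy -- stands in for the client class generated from the WSDL.
    class DocumentPrintServiceClient
    {
        public void SubmitPrintJob(PrintRequest request)
        {
            // Stand-in for the generated SOAP call.
            Console.WriteLine("Submitting {0} to {1}", request.DocumentId, request.PrinterName);
        }
    }

    class PrintClientDemo
    {
        static void Main()
        {
            var client = new DocumentPrintServiceClient();
            client.SubmitPrintJob(new PrintRequest
            {
                DocumentId = "DOC-12345",                  // e.g. an ID resolved from a DMS via VueLink
                PrinterName = @"\\printsrv\eng-plotter",
                PaperSize = "A0",
                Landscape = true
            });
        }
    }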
Related article: Any Document Type with AutoVue Document Print Services

    Read the article

  • Save the Date for Oracle’s 2015 JD Edwards Summit

    - by Catalin Teodor
    If you're part of the JD Edwards community, you won't want to miss our 6th annual JD Edwards Summit. Please save the date for this event in Broomfield, Colorado on Monday, February 2 through Thursday, February 5. Formal invitation and registration to come soon. We welcome the opportunity to hear from you regarding any specific topics you would like to see covered during this valuable exchange. Please send your ideas to Sheila Ebbitt at sheila.ebbitt-AT-oracle-DOT-com. Partners interested in sponsorship should contact Rene Chapman at rene.chapman-AT-oracle-DOT-com.

    Read the article

  • ASP.NET Asynchronous Pages and when to use them

    - by rajbk
    There have been several articles posted about using  asynchronous pages in ASP.NET but none of them go into detail as to when you should use them. I finally found a great post by Thomas Marquardt that explains the process in depth. He addresses a key misconception also: So, in your ASP.NET application, when should you perform work asynchronously instead of synchronously? Well, only 1 thread per CPU can execute at a time.  Did you catch that?  A lot of people seem to miss this point...only one thread executes at a time on a CPU. When you have more than this, you pay an expensive penalty--a context switch. However, if a thread is blocked waiting on work...then it makes sense to switch to another thread, one that can execute now.  It also makes sense to switch threads if you want work to be done in parallel as opposed to in series, but up until a certain point it actually makes much more sense to execute work in series, again, because of the expensive context switch. Pop quiz: If you have a thread that is doing a lot of computational work and using the CPU heavily, and this takes a while, should you switch to another thread? No! The current thread is efficiently using the CPU, so switching will only incur the cost of a context switch. Ok, well, what if you have a thread that makes an HTTP or SOAP request to another server and takes a long time, should you switch threads? Yes! You can perform the HTTP or SOAP request asynchronously, so that once the "send" has occurred, you can unwind the current thread and not use any threads until there is an I/O completion for the "receive". Between the "send" and the "receive", the remote server is busy, so locally you don't need to be blocking on a thread, but instead make use of the asynchronous APIs provided in .NET Framework so that you can unwind and be notified upon completion. Again, it only makes sense to switch threads if the benefit from doing so out weights the cost of the switch. Read more about it in these posts: Performing Asynchronous Work, or Tasks, in ASP.NET Applications http://blogs.msdn.com/tmarq/archive/2010/04/14/performing-asynchronous-work-or-tasks-in-asp-net-applications.aspx ASP.NET Thread Usage on IIS 7.0 and 6.0 http://blogs.msdn.com/tmarq/archive/2007/07/21/asp-net-thread-usage-on-iis-7-0-and-6-0.aspx   PS: I generally do not write posts that simply link to other posts but think it is warranted in this case.
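    As a concrete illustration of the "unwind between the send and the receive" advice, here is a minimal sketch of an asynchronous WebForms page. It assumes Async="true" in the @ Page directive and uses a placeholder remote URL; the worker thread is released while the remote call is in flight and only picked up again on the I/O completion.

    using System;
    using System.Net;
    using System.Web.UI;

    public partial class SlowRemoteCall : Page
    {
        private HttpWebRequest _request;

        protected void Page_Load(object sender, EventArgs e)
        {
            _request = (HttpWebRequest)WebRequest.Create("http://remote.example.com/slow-service");
            // Register Begin/End handlers; no thread is held between the send and the receive.
            RegisterAsyncTask(new PageAsyncTask(BeginCall, EndCall, TimeoutCall, null));
        }

        private IAsyncResult BeginCall(object sender, EventArgs e, AsyncCallback cb, object state)
        {
            return _request.BeginGetResponse(cb, state);
        }

        private void EndCall(IAsyncResult ar)
        {
            using (var response = (HttpWebResponse)_request.EndGetResponse(ar))
            {
                Response.Write("Remote call returned " + (int)response.StatusCode);
            }
        }

        private void TimeoutCall(IAsyncResult ar)
        {
            Response.Write("Remote call timed out.");
        }
    }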

    Read the article

  • My Ubuntu Touch seems to be broken no matter how many different files I try

    - by zeokila
    So I'm planning on testing out Ubuntu Touch, and developing some applications for it so I thought I would flash it to my Nexus 4 that was already unlocked, and running Paranoid Android and the kernel associated. I headed to Ubuntu's website, browsed around and came across this page: Touch/Install - Ubuntu Wiki I followed Step 1 perfectly, word by word. Its seems to me that everything on that part is fine. I skipped Step 2 having already done that for Paranoid Android, and then I follow 3 and 4 word to word also. Using the command phablet-flash -b everything seemed to be fine. So it booted up and all seemed normal, but it wasn't. Here are some major bugs that only seem to happen to me. So I was greeted with a normal lock screen: But one of the first noticable things on the home screen is that I only have 4 tabs, not 5: Some of the applications that are supposed to work do not (I know some are dummies) like here with the calculator, this is what I get: On the homescreen it is a black box, it will crash a couple of seconds later: Another annoying problem is that when I want to close apps, I have from right to left, if I close an app on the left first it will close it, but it will open the app to it's right, weird: Yet another bug, this one is in the pull down drawer, when you just click on it, you can see that not all the icons are there: Pretty much everything else works as it should, but my main problem is that there is no telephony, I'm not sure how it works exactly, but I'm never asked a SIM code (I'm guessing you need that?), I can't compose SMS's and can't dial numbers, it won't let me select the 'Send' or 'Call' button. I've after tried manual installs all over the place with these files: http://cdimage.ubuntu.com/ubuntu-touch-preview/daily-preinstalled/current/saucy-preinstalled-armel+mako.zip (Says 44MB on site - 46.6MB on my laptop) http://cdimage.ubuntu.com/ubuntu-touch-preview/daily-preinstalled/current/saucy-preinstalled-phablet-armhf.zip (Says 366MB on site - 383.2MB on my laptop) There are some weird size differences between what the site told me and what I downloaded, but I've tried re-downloading just to end up with the same file. And it just alway ends up with the same problems. No telephony and those weird bugs. So my question is, how the hell can I get the same version as everyone else, with the ability to send texts and call and open the calculator and more? Also, definitely running saucy: And maybe useful? This is what's in the device info:

    Read the article

  • Silverlight Cream for March 29, 2010 -- #824

    - by Dave Campbell
    In this Issue: smartyP(-2-), Al Pascual, Mike Taulty, Shawn Burke(-2-), Vikram Pendse, Tomasz Janczuk, Lee, and Alexey Zakharov. Shoutouts: Jeff Weber announced New Silverlight Game “Snow Spill” by Nick Avery of Liserd Arts Games John Papa summarized links to all the Silverlight and Windows Phone 7 Sessions from MIX 10 Tim Heuer has a post up about OData and the MIX10 feed: MIX10: Yet another way to view video content sessions using their OData feed From SilverlightCream.com: Creating a Windows Phone 7 Metro Style Pivot Application [Part 1] smartyP has a two-part video tutorial up on creating a WP7 pivot navigation app using Expression Blend. He's also looking for feedback. Creating a Windows Phone 7 Metro Style Pivot Application [Part 2] In part 2, smartyP adds gestures to his navigation. He also has some good external links listed. Al Pascual: My First Windows Phone 7 Application Al Pascual extends the MIX10 keynote WP7 sample by adding the ability to send tweets ... with all the code. Silverlight 4 RC and the “silent installation” Mike Taulty discusses and demonstrates installing an OOB app without having to visit a webpage to get it. In other words, pass it around on a USB drive, send it in email, etc. iPhone SDK vs Windows Phone 7 Series SDK Challenge, Part 1: Hello World! Shawn Burke has a 2-part series up comparing iPhone and WP7 development looking at how easy it is to code and lines of code produced by the tools. This first post is the classic Hello World. Check out the comments as well. iPhone SDK vs. Windows Phone 7 Series SDK Challenge, Part 2: MoveMe Shawn Burke's part 2 is comparing the classic iPhone 'MoveMe' app... again, check out all the comments. Silverlight 4 : Indic Support in Silverlight Vikram Pendse demonstrates using the Microsoft Indic Language Input tool. He has some screen shots and discussion about fonts in Silverlight. Comparison of HTTP polling duplex and net.tcp performance in Silverlight 4 RC Tomasz Janczuk is checking out Silverlight4 RC and has a comparison up of the performance of the three mechanisms for asynch data push for the server to the client/. Summary rows in Datagrid with multiple groups Lee revisted a post that displayed Summary/Totals in the group header to also support multiple groups now. Silverlight Commands Hacks: Passing EventArgs as CommandParameter to DelegateCommand triggered by EventTrigger Alexey Zakharov suggests a workaround 'InvokeDelegateCommandAction' to keep Blend from ignoring event args. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • 10 PowerShell One Liners

    - by BizTalk Visionary
    Here are a few one-liners that use NetCmdlets. Some of these I've blogged about before, some are new. Let me know if you have questions, which ones you find useful, or how you altered these to suit your own needs. Send email to a list of recipient addresses: import-csv users.csv | % { send-email -to $_.email -from [email protected] -subject "Important Email" –message "Hello World!" -server 10.0.1.1 } Show the access control list for a specific Exchange folder: get-imap -server $mymailserver -cred $mycred -folder INBOX.RESUMES –acl Add look and read permissions on an Exchange folder, for a list of accounts pulled from a CSV file: import-csv users.csv | % { set-imap -server -acluser $_.username $mymailserver -cred $mycred -folder INBOX.RESUMES –acl “lr”  } Sync system time with an Internet time server: get-time -server clock.psu.edu –set To remotely sync the time on a set of computers: import-csv computers.csv | % { Invoke-Command -computerName $_.computer -cred $mycred -scriptblock { get-time -server clock.psu.edu –set } } Delete all emails from an Exchange folder that match a certain criteria.  For example, delete all emails from [email protected]: get-imap -server $mailserver –cred $mycred | ? {$_.FromEmail -eq [email protected]} | %{ set-imap -server $mailserver –cred $mycred-message $_.Id -delete } Update Twitter status from PowerShell: get-http –url "http://twitter.com/statuses/update.xml" –cred $mycred -variablename status -variablevalue "Tweeting with NetCmdlets!" A test-path that works over FTP, FTPS (SSL), and SFTP (SSH) connections: get-ftp -server $remoteserver –cred $mycred -path /remote/path/to/checkfor* Don't forget the *.  Also, to use SSL or SSH just add an –ssl or –ssh parameter. List disabled user accounts in Active Directory (or any other LDAP server): get-ldap -server $ad -cred $mycred -dn dc=yourdc -searchscope wholesubtree     -search "(&(objectclass=user)(objectclass=person)(company=*)(userAccountControl:1.2.840.113556.1.4.803:=2))" List Active Directory groups and their members: get-ldap -server testman -cred $mycred -dn dc=NS2 -searchscope wholesubtree -search "(&(objectclass=group)(cn=*admin*))" | select ResultDN, member Display the last initialization time (e.g. last reboot time) of all discoverable SNMP agents on a network: import-csv computers.csv | % { get-snmp -agent $_.computer -oid sysUpTime.0 | %{([datetime]::Now).AddSeconds(-($_.OIDValue/100))} } Not mentioned here:  data conversion (Yenc, QP, UUencoding, MD5, SHA1, base64, etc), DNS, News Groups (NNTP/UseNet), POP mail, RSS feeds, Amazon S3, Syslog, TFTP, TraceRoute, SNMP Traps, UDP, WebDAV, whois, Rexec/Rshell/Telnet, Zip files, sending IMs (Jabber/GoogleTalk/XMPP), sending text messages and pages, ping, and more. Original Source: Lance's Textbox

    Read the article

  • Handling HTTP 404 Error in ASP.NET Web API

    - by imran_ku07
            Introduction: Building modern HTTP/RESTful/RPC services has become very easy with the new ASP.NET Web API framework. Using the ASP.NET Web API framework, you can create HTTP services which can be accessed from browsers, machines, mobile devices and other clients. Developing HTTP services has also become easier for ASP.NET MVC developers because ASP.NET Web API is now included in ASP.NET MVC. In addition to developing HTTP services, it is important to return a meaningful response to the client if a resource (URI) is not found (HTTP 404) for some reason (for example, a mistyped resource URI). It is also important to centralize this response so you can configure all 'HTTP 404 Not Found' handling in one place. In this article, I will show you how to handle 'HTTP 404 Not Found' in one place.         Description: Let's say that you are developing an HTTP RESTful application using the ASP.NET Web API framework. In this application you need to handle HTTP 404 errors in a centralized location. From the ASP.NET Web API point of view, you need to handle these situations: no route matched; a route is matched but no {controller} is present on the route; no type with the {controller} name has been found; no matching action method is found in the selected controller, because no action method starts with the request's HTTP method verb, no action method carries an attribute implementing IActionHttpMethodProvider, or no method with the matching {action} name is found. Now, let's create an ErrorController with a Handle404 action method. This action method will be used in all of the above cases to send an HTTP 404 response message to the client.  public class ErrorController : ApiController { [HttpGet, HttpPost, HttpPut, HttpDelete, HttpHead, HttpOptions, AcceptVerbs("PATCH")] public HttpResponseMessage Handle404() { var responseMessage = new HttpResponseMessage(HttpStatusCode.NotFound); responseMessage.ReasonPhrase = "The requested resource is not found"; return responseMessage; } }  You can easily change the above action method to send some other specific HTTP 404 error response. If a client of your HTTP service sends a request to a resource (URI) and no route matches this URI on the server, then you can route the request to the above Handle404 method using a custom route. Put this route at the very bottom of the route configuration:  routes.MapHttpRoute( name: "Error404", routeTemplate: "{*url}", defaults: new { controller = "Error", action = "Handle404" } );  Now you need to handle the case when there is no {controller} in the matching route or when no type with the {controller} name is found. You can easily handle this case and route the request to the above Handle404 method using a custom IHttpControllerSelector. 
Here is the definition of a custom IHttpControllerSelector, public class HttpNotFoundAwareDefaultHttpControllerSelector : DefaultHttpControllerSelector { public HttpNotFoundAwareDefaultHttpControllerSelector(HttpConfiguration configuration) : base(configuration) { } public override HttpControllerDescriptor SelectController(HttpRequestMessage request) { HttpControllerDescriptor decriptor = null; try { decriptor = base.SelectController(request); } catch (HttpResponseException ex) { var code = ex.Response.StatusCode; if (code != HttpStatusCode.NotFound) throw; var routeValues = request.GetRouteData().Values; routeValues["controller"] = "Error"; routeValues["action"] = "Handle404"; decriptor = base.SelectController(request); } return decriptor; } }                     Next, it is also required to pass the request to the above Handle404 method if no matching action method found in the selected controller due to the reason discussed above. This situation can also be easily handled through a custom IHttpActionSelector. Here is the source of custom IHttpActionSelector,  public class HttpNotFoundAwareControllerActionSelector : ApiControllerActionSelector { public HttpNotFoundAwareControllerActionSelector() { } public override HttpActionDescriptor SelectAction(HttpControllerContext controllerContext) { HttpActionDescriptor decriptor = null; try { decriptor = base.SelectAction(controllerContext); } catch (HttpResponseException ex) { var code = ex.Response.StatusCode; if (code != HttpStatusCode.NotFound && code != HttpStatusCode.MethodNotAllowed) throw; var routeData = controllerContext.RouteData; routeData.Values["action"] = "Handle404"; IHttpController httpController = new ErrorController(); controllerContext.Controller = httpController; controllerContext.ControllerDescriptor = new HttpControllerDescriptor(controllerContext.Configuration, "Error", httpController.GetType()); decriptor = base.SelectAction(controllerContext); } return decriptor; } }                     Finally, we need to register the custom IHttpControllerSelector and IHttpActionSelector. Open global.asax.cs file and add these lines,  configuration.Services.Replace(typeof(IHttpControllerSelector), new HttpNotFoundAwareDefaultHttpControllerSelector(configuration)); configuration.Services.Replace(typeof(IHttpActionSelector), new HttpNotFoundAwareControllerActionSelector());         Summary:                       In addition to building an application for HTTP services, it is also important to send meaningful centralized information in response when something goes wrong, for example 'HTTP 404 Not Found' error.  In this article, I showed you how to handle 'HTTP 404 Not Found' error in a centralized location. Hopefully you will enjoy this article too.
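    A quick way to confirm the wiring is a tiny console client that requests URIs which should fall through to Handle404 — one that only matches the catch-all route and one whose {controller} type does not exist — and prints the status line for each. This is only a smoke-test sketch; the base address and paths below are placeholders for your own host and routes.

    using System;
    using System.Net.Http;

    class NotFoundSmokeTest
    {
        static void Main()
        {
            var client = new HttpClient();
            client.BaseAddress = new Uri("http://localhost:12345/");   // placeholder host/port

            // First path falls through to the {*url} catch-all route; second matches the
            // default API route but names a controller that does not exist. Both should
            // come back as 404 with the custom reason phrase.
            foreach (var path in new[] { "no/such/route", "api/doesnotexist" })
            {
                HttpResponseMessage response = client.GetAsync(path).Result;
                Console.WriteLine("{0} -> {1} ({2})", path, (int)response.StatusCode, response.ReasonPhrase);
            }
        }
    }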

    Read the article

< Previous Page | 156 157 158 159 160 161 162 163 164 165 166 167  | Next Page >