Search Results

  • Log - Server kernel: INFO: task httpd:000000 blocked for more than 120 seconds

    - by valter
    Almost every day my server crashes due to high server load, and even restarting Apache or MySQL can't solve the problem. I need to reboot the server, or it crashes again under the high load. The system log records something like this when it crashes:

        Aug 11 18:33:53 server kernel: INFO: task httpd:20008 blocked for more than 120 seconds.
        Aug 11 18:33:53 server kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Aug 11 18:33:53 server kernel: httpd D ffffffff801538ac 0 20008 5816 20066 19809 (NOTLB)
        Aug 11 18:33:53 server kernel: ffff81025a299dc8 0000000000000082 ffff81033b4c0740 ffffffff80009a14
        Aug 11 18:33:53 server kernel: ffff8101063f8d80 0000000000000009 ffff8100b758f7e0 ffff8101c57187e0
        Aug 11 18:33:53 server kernel: 00009436d4100b6c 000000000001d50f ffff8100b758f9c8 000000083b531588
        Aug 11 18:33:53 server kernel: Call Trace:
        Aug 11 18:33:53 server kernel: [<ffffffff80009a14>] __link_path_walk+0x173/0xfb9
        Aug 11 18:33:53 server kernel: [<ffffffff8002cc16>] mntput_no_expire+0x19/0x89
        Aug 11 18:33:53 server kernel: [<ffffffff80063c4f>] __mutex_lock_slowpath+0x60/0x9b
        Aug 11 18:33:53 server kernel: [<ffffffff80023908>] __path_lookup_intent_open+0x56/0x97
        Aug 11 18:33:53 server kernel: [<ffffffff80063c99>] .text.lock.mutex+0xf/0x14
        Aug 11 18:33:53 server kernel: [<ffffffff8001b21f>] open_namei+0xea/0x712
        Aug 11 18:33:54 server kernel: [<ffffffff8002768a>] do_filp_open+0x1c/0x38
        Aug 11 18:33:54 server kernel: Firewall: *UDP_IN Blocked* IN=eth1 OUT= MAC=ff:ff:ff:ff:ff:ff:00:30:48:9e:6e:99:08:00 SRC=208.43.135.158 DST=255.255.255.255 LEN=151 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=38354 DPT=6112 LEN=131
        Aug 11 18:33:54 server kernel: [<ffffffff8001a061>] do_sys_open+0x44/0xbe
        Aug 11 18:33:54 server kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0

    I googled a lot trying to find a solution, but it looks like the solution is to update the kernel or the disk driver, things I don't know how to do. At http://bugs.centos.org/view.php?id=4515 a lot of people report similar problems, except that theirs are not related to httpd like mine. According to one member, one solution would be to add "elevator=noop" to /etc/grub.conf, as in this example:

        title CentOS (2.6.18-238.12.1.el5xen)
            root (hd0,0)
            kernel /vmlinuz-2.6.18-238.12.1.el5xen ro root=/dev/VolGroup00/LogVol00 elevator=noop
            initrd /initrd-2.6.18-238.12.1.el5xen.img

    Would this really solve the problem? My disks are working in RAID. Can this cause a problem for my server? Is there any other solution?
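
    Before committing the change to grub, note that on these kernels the I/O scheduler can be switched per disk at runtime, so noop can be tried without a reboot. A minimal sketch, assuming the disk shows up as sda (a hardware RAID controller may expose a different block device):

        # show the available elevators; the one in brackets is active
        cat /sys/block/sda/queue/scheduler
        # switch to noop immediately; the setting lasts until the next reboot
        echo noop > /sys/block/sda/queue/scheduler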

    Read the article

  • Link aggregation with FreeBSD 8 and a Cisco 3550, what am I doing wrong?

    - by Flamewires
    Hey, I am trying to set up link aggregation with LACP (well, anything that provides increased bandwidth and failover using my setup will work). I'm running FreeBSD 8.0 on 3 machines. M1 is running 2 10/100 Ethernet cards set up for link aggregation using lagg. For reference:

        ifconfig em0 up
        ifconfig tx0 up
        ifconfig lagg0 create
        ifconfig lagg0 laggproto lacp laggport tx0 laggport em0 192.168.1.16 netmask 255.255.255.0

    I plugged them into ports 1 and 2 of a Cisco 3550, then ran:

        configure terminal
        interface range Fa0/1 - 2
        switchport mode access
        switchport access vlan 1
        channel-group 1 mode active

    (Everything is in VLAN 1.) Now I'm able to connect the other computers to other ports on the switch, and failover works great: I can unplug cables in the middle of a transfer and the traffic gets rerouted. However, I'm not noticing any speed increase. My test setup: for load balancing I tried dst and src on the switch; neither seemed to give me a speed increase. I am SCPing two 500 MB files from the lagg computer to two other computers (one file each), which are also running 10/100 full-duplex cards. I get transfer speeds of about 11.2-11.4 MB/s to a single host, and about half that (5.9-6.2 MB/s) when transferring to both at the same time. From what I understood, with destination load balancing the switch was supposed to balance traffic headed for one computer over one port and traffic headed for the other computer over the other port:

        With destination-MAC address forwarding, when packets are forwarded to an
        EtherChannel, the packets are distributed across the ports in the channel
        based on the destination host MAC address of the incoming packet.
        Therefore, packets to the same destination are forwarded over the same
        port, and packets to a different destination are sent on a different port
        in the channel. For the 3550 series switch, when source-MAC address
        forwarding is used, load distribution based on the source and destination
        IP address is also enabled for routed IP traffic. All routed IP traffic
        chooses a port based on the source and destination IP address. Packets
        between two IP hosts always use the same port in the channel, and traffic
        between any other pair of hosts can use a different port in the channel.
        (Link)

    What am I doing wrong, and what would I need to do to see a speed increase beyond what I could do with just a single card?
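
    One thing that may matter when measuring: both LACP and the switch's EtherChannel hashing are per-flow, so a single TCP stream only ever rides one physical link; gains show up only across multiple flows that hash onto different ports. A hedged way to test aggregate throughput with two concurrent streams (host addresses are placeholders, and iperf is assumed to be installed):

        # on each destination host
        iperf -s

        # on the lagg machine: two streams to two different hosts at once
        iperf -c 192.168.1.20 -t 30 &
        iperf -c 192.168.1.21 -t 30 &
        wait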

    Read the article

  • LACP, Cisco 3550, 3560, help with configuration

    - by Flamewires
    Hey all, this is a repost of a question I asked on the Cisco forums but never got a useful reply to. I'm trying to convert the FreeBSD servers at work to dual-gig lagg links from regular gigabit links. Our production servers are on a 3560; I have a small test environment on a 3550. I have achieved failover, but am having trouble achieving the speed increase. All servers are running gig Intel (em) cards. The config for the servers is:

        #!/bin/sh
        # bring up both interfaces
        ifconfig em0 up media 1000baseTX mediaopt full-duplex
        ifconfig em1 up media 1000baseTX mediaopt full-duplex
        # create the lagg interface
        ifconfig lagg0 create
        # set lagg0's protocol to lacp, add both cards to the interface,
        # and assign it em1's ip/netmask
        ifconfig lagg0 laggproto lacp laggport em0 laggport em1 ***.***.***.*** netmask 255.255.255.0

    The switches are configured as follows:

        # clear out old junk
        no int Po1
        default int range GigabitEthernet 0/15 - 16
        # config ports
        interface range GigabitEthernet 0/15 - 16
        description lagg-test
        switchport
        duplex full
        speed 1000
        switchport access vlan 192
        spanning-tree portfast
        channel-group 1 mode active
        channel-protocol lacp
        **** switchport trunk encapsulation dot1q ****
        no shutdown
        exit
        interface Port-channel 1
        description lagginterface
        switchport access vlan 192
        exit
        port-channel load-balance src-mac
        end

    (Obviously change the 1000s to 100s and GigabitEthernet to FastEthernet for the 3550's config, as that switch has 100 Mbit ports.) With this config on the 3550, I get failover and 92 Mbit/s on both links simultaneously, connecting to 2 hosts (tested with iperf). Success. However, this is only with the "switchport trunk encapsulation dot1q" line. First, I do not understand why I need this; I thought it was only for connecting switches. Is there some other setting that this turns on that is actually responsible for the speed increase? Second, this config does not work on the 3560. I get failover, but not the speed increase: speeds drop from a gigabit to 500 Mbit/s when I make 2 simultaneous connections to the server, with or without the encapsulation line. I should mention that both switches are using source-MAC load balancing. In my tests I am using iperf: the lagg box is set up as the server (iperf -s) and the client computers are clients (iperf -c server-ip-address), so the source MAC (and IP) are different for both connections. Any ideas/corrections/questions would be helpful, as the gig switches are what I actually need the lagg links on. Ask if you need more information.
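
    A few read-only IOS checks that may help compare what the 3550 and 3560 are actually doing with the bundle; these are standard show commands on both platforms:

        show etherchannel 1 summary        ! are both ports bundled (P flag)?
        show etherchannel load-balance     ! confirm src-mac is really in effect
        show lacp neighbor                 ! did LACP negotiate with the lagg box?
        show interfaces port-channel 1     ! counters for the bundle as a whole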

    Read the article

  • Ruby on Rails Gitorious setup on Ubuntu

    - by dogmatic69
    I've been trying to install Gitorious for a while now, which requires Ruby and Rails etc. I've finally got Rails pages serving, but I can't finish the installation of Gitorious because the gem version is too new. The error logs say to run 'rake ultrasphinx:configure', and that gives:

        rake ultrasphinx:configure
        (in /var/www/apps/gitorious)
        rake aborted!
        uninitialized constant ActiveSupport::Dependencies::Mutex
        /var/www/apps/gitorious/Rakefile:10:in `require'
        (See full trace by running task with --trace)

    From searching Google, this is because of the wrong gem version, but I can't find a way to downgrade it. Apparently sudo gem update --system 1.4.2 should do the trick, but Ubuntu 10.10 does not like this:

        ERROR:  While executing gem ... (RuntimeError)
            gem update --system is disabled on Debian, because it will overwrite the content
            of the rubygems Debian package, and might break your Debian system in subtle ways.
            The Debian-supported way to update rubygems is through apt-get, using Debian
            official repositories. If you really know what you are doing, you can still
            update rubygems by setting the REALLY_GEM_UPDATE_SYSTEM environment variable,
            but please remember that this is completely unsupported by Debian.

    So I added export REALLY_GEM_UPDATE_SYSTEM=1 to .bashrc and reloaded it with . ~/.bashrc, and still the same. I've tried various forms of setting this environment variable with no luck. I've also been told on the #gitorious IRC channel to add the file config/initializers/rubygems.rb with the line require "thread" in it. This has done nothing.

    EDIT: Just found another way, which was rvm install rubygems 1.4.2, and it gave:

        Removing old Rubygems files...
        Installing rubygems dedicated to ruby-1.8.7-p334...
        Retrieving rubygems-1.4.2
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
        100  288k  100  288k    0     0   282k      0  0:00:01  0:00:01 --:--:--  414k
        Extracting rubygems-1.4.2 ...
        Installing rubygems for /home/ubuntu/.rvm/rubies/ruby-1.8.7-p334/bin/ruby
        ERROR: Error running 'GEM_PATH="/home/ubuntu/.rvm/gems/ruby-1.8.7-p334:/home/ubuntu/.rvm/gems/ruby-1.8.7-p334@global:/home/ubuntu/.rvm/gems/ruby-1.8.7-p334@global" GEM_HOME="/home/ubuntu/.rvm/gems/ruby-1.8.7-p334" "/home/ubuntu/.rvm/rubies/ruby-1.8.7-p334/bin/ruby" "/home/ubuntu/.rvm/src/rubygems-1.4.2/setup.rb"', please read /home/ubuntu/.rvm/log/ruby-1.8.7-p334/rubygems.install.log
        WARN: Installation of rubygems did not complete successfully.

    TL;DR: please tell me how to downgrade rubygems on Ubuntu 10.10, or how to upgrade Gitorious to work with 1.6.2 gems.
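
    One detail that may explain why the .bashrc export changed nothing: sudo strips most of the caller's environment, so a variable exported in the shell never reaches the gem process it runs. A hedged sketch that passes the override through sudo explicitly (the 1.4.2 target is the version from the question):

        # hand the variable to the gem process directly instead of exporting it
        sudo env REALLY_GEM_UPDATE_SYSTEM=1 gem update --system 1.4.2
        gem -v   # should now report 1.4.2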

    Read the article

  • Encoding multiple video streams with a single avconv invocation

    - by automatthias
    I played with avconv on Ubuntu and I'm now able to, for example, record the desktop with sound from a sound card. One thing I wanted to do was record two video inputs at the same time, for instance the desktop and the webcam. I thought about doing something like this:

        avconv \
            -f alsa \
            -i default \
            -acodec flac \
            -f video4linux2 \
            -r 6 \
            -i /dev/video0 \
            -f x11grab \
            -i :0.0 \
            out.mkv

    My thinking was that if you define multiple video inputs, and the .mkv format can handle multiple video streams, avconv will encode 2 video streams and 1 audio stream into one file. But this isn't what happens:

        avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
          built on Nov  6 2012 16:51:11 with gcc 4.7.2
        [alsa @ 0x1091bc0] capture with some ALSA plugins, especially dsnoop, may hang.
        [alsa @ 0x1091bc0] Estimating duration from bitrate, this may be inaccurate
        Input #0, alsa, from 'default':
          Duration: N/A, start: 1354364317.020350, bitrate: N/A
            Stream #0.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
        [video4linux2 @ 0x10923e0] Estimating duration from bitrate, this may be inaccurate
        Input #1, video4linux2, from '/dev/video0':
          Duration: N/A, start: 100607.724745, bitrate: 29491 kb/s
            Stream #1.0: Video: rawvideo, yuyv422, 640x480, 29491 kb/s, 6 tbr, 1000k tbn, 6 tbc
        [x11grab @ 0x107b2a0] device: :0.0+83,87 -> display: :0.0 x: 83 y: 87 width: 854 height: 480
        [x11grab @ 0x107b2a0] shared memory extension found
        [x11grab @ 0x107b2a0] Estimating duration from bitrate, this may be inaccurate
        Input #2, x11grab, from ':0.0+83,87':
          Duration: N/A, start: 1354364318.488382, bitrate: 196761 kb/s
            Stream #2.0: Video: rawvideo, bgra, 854x480, 196761 kb/s, 15 tbr, 1000k tbn, 15 tbc
        Incompatible pixel format 'bgra' for codec 'mpeg4', auto-selecting format 'yuv420p'
        [buffer @ 0x107fcc0] w:854 h:480 pixfmt:bgra
        [avsink @ 0x10bdf00] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'
        [scale @ 0x10dc680] w:854 h:480 fmt:bgra -> w:854 h:480 fmt:yuv420p flags:0x4
        Output #0, matroska, to '.../out.mkv':
          Metadata:
            encoder         : Lavf53.21.0
            Stream #0.0: Video: mpeg4, yuv420p, 854x480, q=2-31, 4000 kb/s, 1k tbn, 15 tbc
            Stream #0.1: Audio: libvorbis, 48000 Hz, 2 channels, s16
        Stream mapping:
          Stream #2:0 -> #0:0 (rawvideo -> mpeg4)
          Stream #0:0 -> #0:1 (pcm_s16le -> libvorbis)
        Press ctrl-c to stop encoding
        [mpeg4 @ 0x10bd800] rc buffer underflow
        ^Cframe=  160 fps= 15 q=2.0 Lsize=    3414kB time=10.66 bitrate=2623.0kbits/s
        video:3273kB audio:131kB global headers:4kB muxing overhead 0.165600%
        Received signal 2: terminating.

    I'm not sure if it's a question of mapping (some -map options to add?) or whether avconv just can't encode more than one video stream at a time. So is it an actual avconv limitation, a limitation of the available containers, or me simply not finding the right combination of command line options?
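
    The "Stream mapping" section in that output shows avconv silently picking one video and one audio stream on its own. Explicit -map options are the usual way to force all three streams into the output; a hedged sketch, since this particular Libav 0.8 build may still refuse:

        avconv \
            -f alsa -i default \
            -f video4linux2 -r 6 -i /dev/video0 \
            -f x11grab -r 15 -i :0.0 \
            -map 0:0 -map 1:0 -map 2:0 \
            out.mkv

    Here -map 0:0 is the ALSA audio, -map 1:0 the webcam, and -map 2:0 the X11 grab.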

    Read the article

  • SSL connection error during handshake on Windows Server 2008 R2

    - by Thomas
    I have a Windows 2008 R2 server that runs an HTTPS tunneling service. The software uses a certificate that is provided via the Windows certificate store. The certificate is located in the local computer's private certificates. It supports server and client authentication with signing and key encipherment.

    Cert chain

    The certificate chain looks fine. It's a Thawte SSL123 certificate.

        Thawte Premium Server CA (SHA1)
        [e0 ab 05 94 20 72 54 93 05 60 62 02 36 70 f7 cd 2e fc 66 66]
        thawte Primary Root CA
        [1f a4 90 d1 d4 95 79 42 cd 23 54 5f 6e 82 3d 00 00 79 6e a2]
        Thawte DV SSL CA
        [3c a9 58 f3 e7 d6 83 7e 1c 1a cf 8b 0f 6a 2e 6d 48 7d 67 62]
        Server certificate

    Issues

    Most browsers accept the certificate without any warning, but IE 7 on Windows XP SP3 and Opera 12 on OS X just report a connection error. Opera complains:

        Secure connection: fatal error (552)
        https://www.example.com/
        Opera was not able to connect to the server, because the server does not
        communicate via any secure protocol known to Opera.

    A connection test using openssl s_client -connect www.example.com:443 -state says:

        CONNECTED(00000003)
        SSL_connect:before/connect initialization
        SSL_connect:SSLv2/v3 write client hello A
        52471:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/SourceCache/OpenSSL098/OpenSSL098-35.1/src/ssl/s23_lib.c:182:

    Running ssldump -aAHd host www.example.com during curl https://www.example.com/ reports:

        New TCP connection #1: localhost(53302) <-> www.example.com(443)
        1 1  0.0235 (0.0235)  C>SV3.1(117)  Handshake
              ClientHello
                Version 3.1
                random[32]=
                  50 77 56 29 e8 23 82 3b 7f e0 ae 2d c1 31 cb ac
                  38 01 31 85 4f 91 39 c1 04 32 a6 68 25 cd a0 c1
                cipher suites
                  Unknown value 0x39
                  Unknown value 0x38
                  Unknown value 0x35
                  TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
                  TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
                  TLS_RSA_WITH_3DES_EDE_CBC_SHA
                  Unknown value 0x33
                  Unknown value 0x32
                  Unknown value 0x2f
                  Unknown value 0x9a
                  Unknown value 0x99
                  Unknown value 0x96
                  TLS_RSA_WITH_RC4_128_SHA
                  TLS_RSA_WITH_RC4_128_MD5
                  TLS_DHE_RSA_WITH_DES_CBC_SHA
                  TLS_DHE_DSS_WITH_DES_CBC_SHA
                  TLS_RSA_WITH_DES_CBC_SHA
                  TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
                  TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA
                  TLS_RSA_EXPORT_WITH_DES40_CBC_SHA
                  TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
                  TLS_RSA_EXPORT_WITH_RC4_40_MD5
                  Unknown value 0xff
                compression methods
                  unknown value
                  NULL
        1    0.0479 (0.0243)  S>C  TCP FIN
        1    0.0481 (0.0002)  C>S  TCP FIN

    Thawte provides two Java-based SSL checkers, the legacy Thawte SSL Certificate Installation Checker and the sslToolBox. Both validate the certificate under Windows XP but report connection errors under OS X and Windows 2008 R2.
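
    Since the ssldump trace shows the server answering the ClientHello with a bare TCP FIN, narrowing down which protocol versions the server accepts may help; these are standard openssl s_client flags:

        openssl s_client -connect www.example.com:443 -tls1
        openssl s_client -connect www.example.com:443 -ssl3
        openssl s_client -connect www.example.com:443 -ssl2

    If only one of these completes a handshake, the failing clients are probably offering a protocol or cipher set that the server's Schannel configuration rejects.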

    Read the article

  • NIC is receiving, but not transmitting at all?

    - by Shtééf
    I'm trying to fix a very strange problem remotely on a machine at a customer site. The machine is a Dell PowerEdge, I believe a 1950 (haven't verified, but the lspci output matches specs I found). The machine has two similar NICs, identified as Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12) by lspci and using the bnx2 driver. (I suspect these are on-board and on the same controller, which is what I'm accustomed to for this type of machine.)

    The primary interface eth0 works perfectly, and is in fact how I am ssh'd in. However, the secondary interface eth1 is not transmitting. I can see this in ifconfig output, for example, where the TX field is always 0. However, it is receiving, and tcpdump shows ARP requests coming from the ISP's gateway on the other side. The interface is physically connected to a Siemens BSTU4 modem, configured by the ISP. The link is properly set to 10 Mbit/s and full duplex, without negotiation, as the ISP requested. A small /30 subnet is configured; for the sake of anonymity, let's say the machine is 3.3.3.2/30 and the ISP's gateway is .1. The machine has no firewall settings whatsoever. Even running something like arping -I eth1 3.3.3.1, with tcpdump running alongside, shows no traffic whatsoever being transmitted on the interface. (But the other side keeps steadily sending ARP requests, and that is all that can be seen.) What could be causing this?

    Here's some output, anonymized, which may hopefully help:

        $ ethtool eth1
        Settings for eth1:
                Supported ports: [ TP ]
                Supported link modes:   10baseT/Half 10baseT/Full
                                        100baseT/Half 100baseT/Full
                                        1000baseT/Full
                Supports auto-negotiation: Yes
                Advertised link modes:  Not reported
                Advertised auto-negotiation: No
                Speed: 10Mb/s
                Duplex: Full
                Port: Twisted Pair
                PHYAD: 1
                Transceiver: internal
                Auto-negotiation: off
                Supports Wake-on: d
                Wake-on: d
                Link detected: yes

        $ ip link show eth1
        3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
            link/ether 00:15:c5:xx:xx:xx brd ff:ff:ff:ff:ff:ff

        $ ip -4 addr show eth1
        3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
            inet 3.3.3.2/30 brd 3.3.3.3 scope global eth1

        $ ip -4 route show match 3.3.3.0/30
        3.3.3.0/30 dev eth1  proto kernel  scope link  src 3.3.3.2
        default via 10.0.0.5 dev eth0
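
    A few more read-only probes that may help tell "packets queued but dropped by the driver" apart from "packets never queued at all"; all of these are standard iproute2/ethtool commands:

        ip -s link show eth1    # kernel-level TX packet, error and drop counters
        ethtool -S eth1         # bnx2 driver statistics; look at the tx_* counters
        dmesg | grep -i bnx2    # driver messages logged at link-up time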

    Read the article

  • Got Hacked. Want to understand how.

    - by gaoshan88
    Someone has, for the second time, appended a chunk of JavaScript to a site I help run. This JavaScript hijacks Google AdSense, inserting their own account number and sticking ads all over. The code is always appended, always in one specific directory (one used by a third-party ad program), affects a number of files in a number of directories inside this one ad dir (20 or so), and is inserted at roughly the same overnight time. The AdSense account belongs to a Chinese website (located in a town not an hour from where I will be in China next month; maybe I should go bust heads... kidding, sort of). By the way, here is the info on the site: http://serversiders.com/fhr.com.cn

    So, how could they append text to these files? Is it related to the permissions set on the files (ranging from 755 to 644)? To the webserver user (it's on MediaTemple, so it should be secure, yes?)? I mean, even if you have a file with permissions set to 777 I still can't just add code to it at will, so how might they be doing this? Here is a sample of the actual code for your viewing pleasure (and as you can see, not much to it; the real trick is how they got it in there):

        <script type="text/javascript"><!--
        google_ad_client = "pub-5465156513898836";
        /* 728x90_as */
        google_ad_slot = "4840387765";
        google_ad_width = 728;
        google_ad_height = 90;
        //-->
        </script>
        <script type="text/javascript"
          src="http://pagead2.googlesyndication.com/pagead/show_ads.js">
        </script>

    Since a number of folks have mentioned it, here is what I have checked (and by checked I mean I looked around the time the files were modified for any weirdness, and I grepped the files for POST statements and directory traversals):

        access_log   (nothing around the time except normal (i.e. excessive) msn bot traffic)
        error_log    (nothing but the usual "file does not exist" errors for innocuous-looking files)
        ssl_log      (nothing but the usual)
        messages_log (no FTP access in here except for me)
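
    Given that the writes land in one directory tree at roughly the same time each night, a filesystem timestamp sweep may narrow down the entry point. A minimal sketch; both paths are placeholders for the real ad directory and web root:

        # files changed in the last 2 days under the ad directory
        find /var/www/ads -type f -mtime -2 -ls
        # anything world-writable under the web root
        find /var/www -type f -perm -0002 -ls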

    Read the article

  • PHP 5.2 to 5.3 not upgrading, no errors

    - by Webnet
    I'm following this guide: http://atik97.wordpress.com/2010/06/12/how-to-upgrade-to-php-5-3-in-ubuntu-9-10/

    I've done all the steps, but it's still showing PHP 5.2.6. Any ideas? I have also tried -cgi instead of -cli; neither has any effect.

    Update: I've tried rebooting the server to see if that would have any effect, and unfortunately it didn't.

    Update: Output of dpkg -l *php*:

        Desired=Unknown/Install/Remove/Purge/Hold
        | Status=Not/Inst/Cfg-files/Unpacked/Failed-cfg/Half-inst/trig-aWait/Trig-pend
        |/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad)
        ||/ Name                        Version                  Description
        +++-===========================-========================-================================================================
        un  libapache2-mod-php4         <none>                   (no description available)
        ii  libapache2-mod-php5         5.2.6.dfsg.1-3ubuntu4.6  server-side, HTML-embedded scripting language (Apache 2 module)
        un  libapache2-mod-php5filter   <none>                   (no description available)
        ii  php-pear                    5.2.6.dfsg.1-3ubuntu4.6  PEAR - PHP Extension and Application Repository
        un  php4-cli                    <none>                   (no description available)
        un  php4-dev                    <none>                   (no description available)
        un  php4-mysql                  <none>                   (no description available)
        un  php4-pear                   <none>                   (no description available)
        ii  php5                        5.2.6.dfsg.1-3ubuntu4.6  server-side, HTML-embedded scripting language (metapackage)
        ii  php5-cgi                    5.2.6.dfsg.1-3ubuntu4.6  server-side, HTML-embedded scripting language (CGI binary)
        ii  php5-cli                    5.2.6.dfsg.1-3ubuntu4.6  command-line interpreter for the php5 scripting language
        ii  php5-common                 5.2.6.dfsg.1-3ubuntu4.6  Common files for packages built from the php5 source
        ii  php5-curl                   5.2.6.dfsg.1-3ubuntu4.6  CURL module for php5
        un  php5-dev                    <none>                   (no description available)
        ii  php5-gd                     5.2.6.dfsg.1-3ubuntu4.6  GD module for php5
        ii  php5-imap                   5.2.6-0ubuntu5.1         IMAP module for php5
        un  php5-json                   <none>                   (no description available)
        ii  php5-mcrypt                 5.2.6-0ubuntu2           MCrypt module for php5
        ii  php5-mysql                  5.2.6.dfsg.1-3ubuntu4.6  MySQL module for php5
        un  php5-mysqli                 <none>                   (no description available)
        ii  php5-xsl                    5.2.6.dfsg.1-3ubuntu4.6  XSL module for php5
        un  phpapi-20060613+lfs         <none>                   (no description available)
        ii  phpmyadmin                  4:3.1.2-1ubuntu0.2       MySQL web administration tool

    Update: The following commands and their outputs:

        grep php53 /etc/apt/sources.list
        deb http://php53.dotdeb.org stable all
        deb-src http://php53.dotdeb.org stable all

        apt-cache search -f "libapache2-mod-php5"
        http://pastebin.com/XNXdsXYC

    Update: I've updated the question with more details on installed packages.
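
    Since the dotdeb lines are present in sources.list, it may be worth confirming that apt actually offers a 5.3 candidate and that the php binary on the PATH is the one apt manages. A quick hedged check:

        apt-cache policy php5 php5-cli   # does a 5.3.x dotdeb candidate appear?
        which -a php                     # which binaries the shell can resolve
        php -v                           # the version the CLI actually runs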

    Read the article

  • Squid on Windows load balancing only to one server

    - by Martin L.
    After thousands of Google searches and days of trying, I can't get the load balancer/failover in Squid on Windows to work. I am using Squid 2.7. My webservers are two single-NIC lighttpd boxes and one dual-NIC lighttpd box. server1 in this example is running Squid on port 80 and lighttpd on port 8080 (just to test).

    Requirements:

    All 3 webservers running lighttpd should be balanced. Two options for load balancing: best would be if server1 is busy, server2 takes over; if server2 is busy, server3 takes over, etc. Alternatively, round-robin style, evenly distributed load, e.g. server1 takes the first call, server2 the second, etc. All requests should be treated the same way (no URL rewriting or so on). Sent host headers have to be passed to every server as the HTTP host header, speaking of "server1", "server1.company.internal" and "10.211.1.1".

    My approach:

        acl all src all
        acl manager proto cache_object
        http_port 80 accel defaultsite=server1.company.internal vhost

        # reverse proxy entries
        cache_peer 10.211.2.1 parent 8080 0 no-query originserver round-robin login=PASS name=server1_nic1
        cache_peer 10.211.1.2 parent 80   0 no-query originserver round-robin login=PASS name=server2_nic1
        cache_peer 10.211.2.3 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic1
        cache_peer 10.211.2.4 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic2

        # decl of names of squid host
        acl registered_name_hostdomain dstdomain server1.company.internal
        acl registered_name_host       dstdomain server1
        # ip of squid host
        acl registered_name_ip         dstdomain 10.211.2.1

        # access: redirects the correct squid hostname
        http_access allow registered_name_hostdomain
        http_access allow registered_name_host
        http_access allow registered_name_ip
        http_access deny all

        cache_peer_access server1_nic1 allow registered_name_hostdomain
        cache_peer_access server1_nic1 allow registered_name_host
        cache_peer_access server1_nic1 allow registered_name_ip
        cache_peer_access server2_nic1 allow registered_name_hostdomain
        cache_peer_access server2_nic1 allow registered_name_host
        cache_peer_access server2_nic1 allow registered_name_ip
        cache_peer_access server3_nic1 allow registered_name_hostdomain
        cache_peer_access server3_nic1 allow registered_name_host
        cache_peer_access server3_nic1 allow registered_name_ip
        cache_peer_access server3_nic2 allow registered_name_hostdomain
        cache_peer_access server3_nic2 allow registered_name_host
        cache_peer_access server3_nic2 allow registered_name_ip
        cache_peer_access server1_nic1 deny all
        cache_peer_access server2_nic1 deny all
        cache_peer_access server3_nic1 deny all
        cache_peer_access server3_nic2 deny all

        never_direct allow all

    Problems:

    The load balancer does not balance to anything other than the first server; only if the first server is killed in some way will the second take over. I have seen the others working at some point, but definitely not with the intended load balancing described above. If cache_peer_access is not defined, sometimes the wrong hostname is sent to the backend webserver, and this always depends on the defaultsite= parameter, probably because the host header on the request to Squid is not set and is replaced by defaultsite. Leaving out defaultsite didn't solve the problem. The only workaround I found for this is the current approach with cache_peer_access.

    Questions:

    Does cache_peer_access influence the round-robin? Is there a better workaround to pass the host header to the backend webservers? Which parameters increase the speed of load balancing, or does anyone have a better approach? -Martin
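
    One way to see where Squid is really sending requests is the cache manager's per-peer report; squidclient ships with Squid 2.7, assuming the Windows build includes it and the manager ACL permits local access:

        # fetch counters per parent: utilization should rotate if round-robin works
        squidclient mgr:server_list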

    Read the article

  • Google Analytics and multiple independent subdomains

    - by MTilsted
    I need some help setting up Google Analytics correctly. Here is my setup: we host sites for multiple customers, and each customer has their own subdomain on our site. So we have customerA.oursite.com and customerB.oursite.com, and as we add more customers we get more subdomains. We do want to track all data for each customer independently, but I don't want to create a new Google tracking code for each new customer. So my plan is to track all visits with "oursite.com", and then I will create a filter in Google Analytics to get data for each specific customer (all visits for a specific subdomain).

    Is this (one tracking code and a subdomain filter) the right way to do it?

    To create a subdomain filter I add a new profile for each customer, and then add a custom filter saying include "Request URI" and fill in "CustomerDomain.oursite.com". Is this the correct way to do it?

    And a general question about filters: is it really impossible to create a new filter by applying it to data in an existing profile? I would really like to just collect all the data in one "main" profile and then create subdomain filters as we need them. But it seems that Google only applies filters to new incoming data, not existing data. Is this really true?

    The following is my tracking code. Is '_setDomainName', 'none' the right thing to do?

        <script type="text/javascript">
        /* Tracking code for qrtown.com */
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-11584298-10']);
        _gaq.push(['_setDomainName', 'none']);
        _gaq.push(['_trackPageview']);
        (function() {
          var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
        })();
        </script>
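
    Two hedged notes on this setup: the profile filter field that matches a subdomain is "Hostname" rather than "Request URI" (the Request URI carries only the path), and for one property spanning many subdomains the usual ga.js pattern is to share the cookie on the parent domain instead of 'none'. A sketch of the relevant lines, with oursite.com standing in for the real domain:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-11584298-10']);
        // share the GA cookie across customerA.oursite.com, customerB.oursite.com, ...
        _gaq.push(['_setDomainName', '.oursite.com']);
        _gaq.push(['_trackPageview']);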

    Read the article

  • Lots of dropped packets when tcpdumping on a busy interface

    - by Frands Hansen
    My challenge: I need to tcpdump a lot of data, from 2 interfaces left in promiscuous mode that are able to see a lot of traffic. To sum it up:

    Log all traffic in promiscuous mode from 2 interfaces. Those interfaces are not assigned an IP address. pcap files must be rotated per ~1 GB. When 10 TB of files are stored, start truncating the oldest.

    What I currently do: right now I use tcpdump like this:

        tcpdump -n -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER

    $FILTER contains src/dst filters so that I can use -i any. The reason for this is that I have two interfaces and I would like to run the dump in a single thread rather than two. compress.sh takes care of assigning tar to another CPU core, compressing the data, giving it a reasonable filename, and moving it to an archive location. I cannot specify two interfaces, thus I have chosen to use filters and dump from any interface. Right now I do not do any housekeeping, but I plan on monitoring disk usage, and when I have 100 GB left I will start wiping the oldest files; this should be fine.

    And now, my problem: I see dropped packets. This is from a dump that has been running for a few hours and collected roughly 250 GB of pcap files:

        430083369 packets captured
        430115470 packets received by filter
        32057 packets dropped by kernel   <-- This is my concern

    How can I avoid so many packets being dropped?

    Things I already tried or looked at: I changed the values of /proc/sys/net/core/rmem_max and /proc/sys/net/core/rmem_default, which did indeed help; it took care of around half of the dropped packets. I have also looked at gulp. The problem with gulp is that it does not support multiple interfaces in one process, and it gets angry if the interface does not have an IP address. Unfortunately, that is a deal breaker in my case. The next problem is that when the traffic flows through a pipe, I cannot get the automatic rotation going. Getting one huge 10 TB file is not very efficient, and I don't have a machine with 10 TB+ of RAM that I could run Wireshark on, so that's out.

    Do you have any suggestions? Maybe even a better way of doing my traffic dump altogether.
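
    Besides the rmem sysctls, tcpdump built against libpcap 1.0 or newer can be given a larger capture buffer directly with -B (in KiB); the 8 MiB here is an arbitrary starting point to grow from:

        tcpdump -n -B 8192 -C 1000 -z /data/compress.sh -i any \
                -w /data/livedump/capture.pcap $FILTER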

    Read the article

  • How to configure Apache's mod_proxy_html to work as an AJAX proxy?

    - by dcerecedo
    I'm trying to build a web site that lets you view and manipulate data from any page on any other website. To do that, I have to bypass 'Allow Origin' problems: I'm loading the other domain's content in an iframe, and I have to manipulate its content with JavaScript downloaded from my domain.

    My first attempt was to write a simple proxy myself, requesting the other domain's page through a server proxy coded in Java that not only serves the content but also rebuilds links (src's and href's) in the content, so that the content referenced by these links also gets downloaded through my handmade proxy. The result is not bad, but it has problems with URLs in CSS and scripts. It was then that I realized that mod_proxy_html is supposed to do exactly this job. The problem is that I cannot figure out how to make it work as expected.

    Let's suppose my server runs at my-domain.com, and to proxy and transform content from another domain I'd make a request like this:

        my-domain.com/proxy?url=http://another-domain.com/some/content

    I'd want mod_proxy_html to serve the content and rewrite URLs in http://another-domain.com/some/content in the following ways:

    Absolute URLs not from another-domain.com: no rewriting. Relative-from-root URLs: /other/content - /proxy?url=http://another-domain.com/other/content. Relative URLs: other/content - /proxy?url=http://another-domain.com/some/content/other/content. Relative-to-parent URLs: ../other/content - /proxy?url=http://another-domain.com/some/other/content.

    The URL should be specified at runtime, not configuration time. Can this be achieved with mod_proxy_html? Could anyone provide a simple working configuration to start with?

    EDIT 1 - First approach: the following site config will work fine with sites that use absolute URLs everywhere, like http://www.huffingtonpost.es/. You could try this config on localhost: http://localhost/asset/http://www.huffingtonpost.es/

        <VirtualHost *:80>
            ServerName localhost
            LogLevel debug
            ProxyRequests off
            RewriteEngine On
            RewriteRule ^/asset/(.*) $1 [P]
            ProxyHTMLURLMap $1 /asset/
            <Location /asset/>
                ProxyPassReverse /
                ProxyHTMLURLMap / /asset/
            </Location>
        </VirtualHost>

    But as explained in the documentation, if I hit a site that uses relative URLs, I'd like to have these rewritten in the HTML via mod_proxy_html. So I should change the Location block as follows:

        <Location /asset/>
            ProxyPassReverse /
            # Depending on your system use one line or the other
            # Ubuntu:
            # SetOutputFilter proxy-html
            # any other system:
            ProxyHTMLEnable On
            ProxyHTMLURLMap / /asset/
        </Location>

    ...which doesn't seem to work. Comments, hints and ideas welcome!
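
    On the CSS and script URLs specifically: by default mod_proxy_html rewrites only HTML link attributes, but version 3.x has a directive that extends rewriting into embedded scripts and stylesheets. A hedged variant of the Location block (URLs built dynamically in JavaScript at runtime still escape it):

        <Location /asset/>
            ProxyPassReverse /
            ProxyHTMLEnable On
            # also rewrite URLs found inside <script> and <style> content
            ProxyHTMLExtended On
            ProxyHTMLURLMap / /asset/
        </Location>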

    Read the article

  • TFS 2010 SDK: Connecting to TFS 2010 Programmatically - Part 1

    - by Tarun Arora
    Technorati Tags: Team Foundation Server 2010, TFS 2010 SDK, TFS API, TFS Programming, TFS ALM

    Download Working Demo

    Great! You have reached that point where you would like to extend TFS 2010. The first step is to connect to TFS programmatically.

    1. Download the TFS 2010 SDK: http://visualstudiogallery.msdn.microsoft.com/25622469-19d8-4959-8e5c-4025d1c9183d?SRC=VSIDE

    2. Alternatively, you can also download this from the Visual Studio Extension Manager.

    3. Create a new Windows Forms Application project and add references to the TFS common and client DLLs. Note: if Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.Common do not appear on the .NET tab of the References dialog box, use the Browse tab to add the assemblies. You can find them at %ProgramFiles%\Microsoft Visual Studio 10.0\Common7\IDE\ReferenceAssemblies\v2.0.

        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.Framework.Client;
        using Microsoft.TeamFoundation.Framework.Common;

    4. There are several ways to connect to TFS; the two classes of interest are:

    Option 1 - Class TfsTeamProjectCollection

        namespace Microsoft.TeamFoundation.Client
        {
            public class TfsTeamProjectCollection : TfsConnection
            {
                public TfsTeamProjectCollection(RegisteredProjectCollection projectCollection);
                public TfsTeamProjectCollection(Uri uri);
                public TfsTeamProjectCollection(RegisteredProjectCollection projectCollection, IdentityDescriptor identityToImpersonate);
                public TfsTeamProjectCollection(Uri uri, ICredentials credentials);
                public TfsTeamProjectCollection(Uri uri, ICredentialsProvider credentialsProvider);
                public TfsTeamProjectCollection(Uri uri, IdentityDescriptor identityToImpersonate);
                public TfsTeamProjectCollection(RegisteredProjectCollection projectCollection, ICredentials credentials, ICredentialsProvider credentialsProvider);
                public TfsTeamProjectCollection(Uri uri, ICredentials credentials, ICredentialsProvider credentialsProvider);
                public TfsTeamProjectCollection(RegisteredProjectCollection projectCollection, ICredentials credentials, ICredentialsProvider credentialsProvider, IdentityDescriptor identityToImpersonate);
                public TfsTeamProjectCollection(Uri uri, ICredentials credentials, ICredentialsProvider credentialsProvider, IdentityDescriptor identityToImpersonate);

                public override CatalogNode CatalogNode { get; }
                public TfsConfigurationServer ConfigurationServer { get; internal set; }
                public override string Name { get; }

                public static Uri GetFullyQualifiedUriForName(string name);
                protected override object GetServiceInstance(Type serviceType, object serviceInstance);
                protected override object InitializeTeamFoundationObject(string fullName, object instance);
            }
        }

    Option 2 - Class TfsConfigurationServer

        namespace Microsoft.TeamFoundation.Client
        {
            public class TfsConfigurationServer : TfsConnection
            {
                public TfsConfigurationServer(RegisteredConfigurationServer application);
                public TfsConfigurationServer(Uri uri);
                public TfsConfigurationServer(RegisteredConfigurationServer application, IdentityDescriptor identityToImpersonate);
                public TfsConfigurationServer(Uri uri, ICredentials credentials);
                public TfsConfigurationServer(Uri uri, ICredentialsProvider credentialsProvider);
                public TfsConfigurationServer(Uri uri, IdentityDescriptor identityToImpersonate);
                public TfsConfigurationServer(RegisteredConfigurationServer application, ICredentials credentials, ICredentialsProvider credentialsProvider);
                public TfsConfigurationServer(Uri uri, ICredentials credentials, ICredentialsProvider credentialsProvider);
                public TfsConfigurationServer(RegisteredConfigurationServer application, ICredentials credentials, ICredentialsProvider credentialsProvider, IdentityDescriptor identityToImpersonate);
                public TfsConfigurationServer(Uri uri, ICredentials credentials, ICredentialsProvider credentialsProvider, IdentityDescriptor identityToImpersonate);

                public override CatalogNode CatalogNode { get; }
                public override string Name { get; }

                protected override object GetServiceInstance(Type serviceType, object serviceInstance);
                public TfsTeamProjectCollection GetTeamProjectCollection(Guid collectionId);
                protected override object InitializeTeamFoundationObject(string fullName, object instance);
            }
        }

    Note: the TeamFoundationServer class is obsolete. Use the TfsTeamProjectCollection or TfsConfigurationServer classes to talk to a 2010 Team Foundation Server. In order to talk to a 2005 or 2008 Team Foundation Server, use the TfsTeamProjectCollection class.

    5. Sample code for programmatically connecting to TFS 2010 using the TFS 2010 API. How do I know what the URI of my TFS server is? Note: you need to have the Team Project Collection "view details" permission in order to connect; expect to receive an authorization failure message if you do not have sufficient permissions.

    Case 1: Connect by Uri

        string _myUri = @"https://tfs.codeplex.com:443/tfs/tfs30";
        TfsConfigurationServer configurationServer =
            TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri));

    Case 2: Connect by Uri, prompt for credentials

        string _myUri = @"https://tfs.codeplex.com:443/tfs/tfs30";
        TfsConfigurationServer configurationServer =
            TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri), new UICredentialsProvider());
        configurationServer.EnsureAuthenticated();

    Case 3: Connect by Uri, custom credentials. In order to use this method of connectivity you need to implement the interface ICredentialsProvider:

        public class ConnectByImplementingCredentialsProvider : ICredentialsProvider
        {
            public ICredentials GetCredentials(Uri uri, ICredentials iCredentials)
            {
                return new NetworkCredential("UserName", "Password", "Domain");
            }

            public void NotifyCredentialsAuthenticated(Uri uri)
            {
                throw new ApplicationException("Unable to authenticate");
            }
        }

    And now consume the implementation of the interface:

        string _myUri = @"https://tfs.codeplex.com:443/tfs/tfs30";
        ConnectByImplementingCredentialsProvider connect = new ConnectByImplementingCredentialsProvider();
        ICredentials iCred = new NetworkCredential("UserName", "Password", "Domain");
        connect.GetCredentials(new Uri(_myUri), iCred);
        TfsConfigurationServer configurationServer =
            TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri), connect);
        configurationServer.EnsureAuthenticated();

    6. Programmatically query TFS 2010 using the TFS SDK for all team project collections, retrieve all team projects, and output the display name and description of each team project:

        CatalogNode catalogNode = configurationServer.CatalogNode;
        ReadOnlyCollection<CatalogNode> tpcNodes = catalogNode.QueryChildren(
            new Guid[] { CatalogResourceTypes.ProjectCollection },
            false, CatalogQueryOptions.None);

        // tpc = Team Project Collection
        foreach (CatalogNode tpcNode in tpcNodes)
        {
            Guid tpcId = new Guid(tpcNode.Resource.Properties["InstanceId"]);
            TfsTeamProjectCollection tpc = configurationServer.GetTeamProjectCollection(tpcId);

            // Get catalog of tp = 'Team Projects' for the tpc = 'Team Project Collection'
            var tpNodes = tpcNode.QueryChildren(
                new Guid[] { CatalogResourceTypes.TeamProject },
                false, CatalogQueryOptions.None);

            foreach (var p in tpNodes)
            {
                Debug.Write(Environment.NewLine + " Team Project : " + p.Resource.DisplayName +
                            " - " + p.Resource.Description + Environment.NewLine);
            }
        }

    Output

    You can download a working demo that uses the TFS 2010 SDK to programmatically connect to TFS 2010.

    Read the article

  • Figuring out the IIS Version for a given OS in .NET Code

    - by Rick Strahl
    Here's an odd requirement: I need to figure out what version of IIS is available on a given machine in order to take specific configuration actions when installing an IIS-based application. I build several configuration tools for application configuration and installation, and depending on which version of IIS is available, different configuration paths are taken. For example, when dealing with an XP machine you can't set up an application pool for an application, because XP (IIS 5.1) didn't support application pools. Configuring 32- and 64-bit settings is easy in IIS 7, but this didn't work in prior versions, and so on. Along the same lines, I saw a question on the AspInsiders list today regarding a similar issue, where somebody needed to know the IIS version as part of an ASP.NET application before the Request object is available. So it's useful to know which version of IIS you can expect.

    This should be easy, right? But it turns out there's no real easy way to detect IIS on a machine. There's no registry key that gives you the full version number; you can detect installation, but not which version is installed.

    The easiest way: Request.ServerVariables["SERVER_SOFTWARE"]

    The easiest way to determine the IIS version number is if you are already running inside of ASP.NET and you are inside of an ASP.NET request. You can look at Request.ServerVariables["SERVER_SOFTWARE"] to get a string like Microsoft-IIS/7.5 returned to you. It's a cinch to parse this to retrieve the version number. This works in the limited scenario where you need to know the version number inside of a running ASP.NET application. Unfortunately this is not a likely use case, since most times you need to know the specific version of IIS when you are configuring or installing your application.

    The messy way: match Windows OS versions to IIS versions

    Since version 5.x, IIS versions have always been tied very closely to the operating system, meaning the only way to get a specific version of IIS was through the OS; you couldn't install another version of IIS on a given OS. Microsoft has a page that describes the OS-to-IIS version relationship here: http://support.microsoft.com/kb/224609

    In .NET you can then sniff the OS version and, based on that, return the IIS version. The following is a small utility function that accomplishes the task of returning an IIS version number for a given OS:

        /// <summary>
        /// Returns the IIS version for the given Operating System.
        /// Note this routine doesn't check to see if IIS is installed,
        /// it just returns the version of IIS that should run on the OS.
        ///
        /// Returns the value from Request.ServerVariables["SERVER_SOFTWARE"]
        /// if available. Otherwise uses OS sniffing to determine the OS version
        /// and returns the IIS version instead.
        /// </summary>
        /// <returns>version number or -1</returns>
        public static decimal GetIisVersion()
        {
            // if running inside of IIS parse the SERVER_SOFTWARE key
            // This would be most reliable
            if (HttpContext.Current != null && HttpContext.Current.Request != null)
            {
                string os = HttpContext.Current.Request.ServerVariables["SERVER_SOFTWARE"];
                if (!string.IsNullOrEmpty(os))
                {
                    // Microsoft-IIS/7.5
                    int dash = os.LastIndexOf("/");
                    if (dash > 0)
                    {
                        decimal iisVer = 0M;
                        if (Decimal.TryParse(os.Substring(dash + 1), out iisVer))
                            return iisVer;
                    }
                }
            }

            // build a Major.Minor number like 6.1 for Windows 7 / 2008 R2
            decimal osVer = (decimal)Environment.OSVersion.Version.Major +
                            ((decimal)Environment.OSVersion.Version.Minor / 10);

            // Windows 7 and Win2008 R2
            if (osVer == 6.1M)
                return 7.5M;
            // Windows Vista and Windows 2008
            else if (osVer == 6.0M)
                return 7.0M;
            // Windows 2003 and XP 64 bit
            else if (osVer == 5.2M)
                return 6.0M;
            // Windows XP
            else if (osVer == 5.1M)
                return 5.1M;
            // Windows 2000
            else if (osVer == 5.0M)
                return 5.0M;

            // error result
            return -1M;
        }

    Talk about a brute-force approach, but it works. This code goes back only to IIS 5; anything before that is not something you would want to have running. :-) Note that this is updated through Windows 7/Windows Server 2008 R2. Later versions will need to be added as needed. Anybody know what the Windows version number of Windows 8 is?

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS.
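
    A minimal usage sketch for the function above; ConfigureIis7AppPool is a hypothetical stand-in for whatever installer step depends on the version:

        decimal iisVersion = GetIisVersion();
        if (iisVersion >= 7.0M)
        {
            // IIS 7 and up: application pools and integrated pipeline available
            ConfigureIis7AppPool();   // hypothetical setup routine
        }
        else if (iisVersion == 5.1M)
        {
            // IIS 5.1 on XP: no application pool support, fall back
        }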

    Read the article

  • SnagIt Live Writer Plug-in Updated

    - by Rick Strahl
    Ah, I love SnagIt from TechSmith, and I use the heck out of it almost every day. So it's no surprise that I decided some time ago to integrate SnagIt into a few applications that require screen shots extensively. It's been a while since I posted an update to my small SnagIt Windows Live Writer plug-in. A few nagging issues have crept up with changes in the way recent versions of SnagIt handle captures, and they have been addressed in this update.

    Personally I love SnagIt and use it extensively, mostly for blogging, but also for writing documentation and articles. While there are many other (and also free) tools out there to do basic screen captures, SnagIt continues to be the most convenient tool for me, with its nice built-in capture and effects editor that makes creating professional-looking captures childishly simple. And maybe even more importantly: SnagIt has a COM interface that can be automated, which makes it super easy to embed into other applications. I've built plug-ins for SnagIt as well as for one of my company's own tools, Html Help Builder.

    If you use the Windows Live Writer offline weblog editor to write blog posts and have a copy of SnagIt, it's probably worth your while to check this out if you haven't already. In case you haven't: this plug-in integrates SnagIt with Live Writer so you can easily capture and edit content and embed it into a post. Captures are shown in the SnagIt preview editor, where you can edit the image and apply image markup or effects before selecting Finish (or Cancel). The final image can then be pasted directly into your Live Writer post.

    When installed, the SnagIt plug-in shows up in the plug-in list or in the Plug-Ins toolbar shortcut. Once you select the plug-in, you get the capture window that allows you to customize the capture process, including most of the useful SnagIt capture options. Once you're done capturing, the image shows up in the SnagIt image editor and you can crop, mark up and apply effects. When done, you click the Finish button and the image is embedded right into your blog post. Easy; how do you think the images in this blog entry got in here? The beauty of SnagIt is that it's all easily integrated: capturing, editing and embedding only take a few seconds, especially if you save image effect presets in SnagIt.

    What's updated

    The main issue addressed in this update has to do with how the plug-in updates the Live Writer window. When a capture starts, Live Writer gets minimized to get out of the way and let you pick your capture source. When the capture is complete and the image has been embedded, Live Writer is activated once again. Recent versions of SnagIt, however, changed the window positioning so that Live Writer ended up popping back up behind the SnagIt window, which was pretty annoying. This update pushes Live Writer back to the top of the window stack using some delaying tactics in the code. There have also been a few small changes to the way the code interacts with the COM object, which are more reliable if a capture fails or SnagIt blows up or is locked because it's already in a capture outside of the automation interface.

    Source Code

    SnagIt automation is something I actually use a lot. As mentioned, I've integrated this automation into Live Writer as well as my documentation tool Html Help Builder, which I use just about daily. The SnagIt integration has a similar interface in that application and provides similar functionality. Because it's quite useful to embed SnagIt into other apps, there's source code that you can download and embed into your own applications. The code includes both the dialog class that is automated from Live Writer and the basic capture component that captures images to a disk file.

    Resources

    Download the SnagIt Capture Plug-in Installer: an MSI installer that installs the plug-in into Live Writer's PlugIns directory.

    Source Code to the SnagIt Capture Plug-in: contains the plug-in assembly, as well as the source code to the plug-in and the setup project.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in Live Writer, WebLog.

    Read the article

  • Posting from ASP.NET WebForms page to another URL

    - by hajan
    A few days ago I had a case where I needed to make a form POST from my ASP.NET WebForms page to an external site URL. More specifically, I was working on implementing a simple payment system (like Amazon, PayPal, MoneyBookers). The operator asks you to make a form POST request to a given URL on their website, sending parameters together with the post which are computed at my application level (access keys, secret keys, signature, return URL, etc.).

    Since we are not allowed to nest another form inside the <form runat="server"> ... </form>, which is required because other controls in my ASPX code work on the server side, I thought to inject the HTML and create a form with method="POST". After making a proof of concept and testing some scenarios, I concluded that I can do this quickly in two ways: using jQuery to create the form on the fly with the needed parameters and call submit(), or using HttpContext.Current.Response.Write to write the form on the server side (code-behind) and embed JavaScript code that will do the post. Both ways seemed fine.

    1. Using jQuery to create the FORM html code and submit it

    Let's say we have a 'PAY NOW' button in our ASPX code:

        <asp:Button ID="btnPayNow" runat="server" Text="Pay Now" />

    Now, if we want this button to submit a form using the POST method to another website, the jQuery way would be as follows:

        <script src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.5.1.js" type="text/javascript"></script>
        <script type="text/javascript">
            $(function () {
                $("#btnPayNow").click(function (event) {
                    event.preventDefault();
                    // construct the form markup
                    var htmlForm = "<form id='myform' method='POST' action='http://www.microsoft.com'>" +
                        "<input type='hidden' id='name' value='hajan' />" +
                        "</form>";
                    // append the form to the body and submit it
                    $(htmlForm).appendTo("body").submit();
                });
            });
        </script>

    Yes, as you see, the code fires on the btnPayNow click. It removes the default button behavior, then creates the htmlForm string. After that, using jQuery, we append the form to the body and submit it. Inside the form, you can see I have set the http://www.microsoft.com URL, so after clicking the button you should be automatically redirected to the Microsoft website (just for a test; for payment, of course, I'm using the operator's URL).

    2. Using HttpContext.Current.Response.Write to write the form on the server side (code-behind) and embed JavaScript code that will do the post

    The C# code-behind should be something like this:

        public void btnPayNow_Click(object sender, EventArgs e)
        {
            string Url = "http://www.microsoft.com";
            string formId = "myForm1";

            StringBuilder htmlForm = new StringBuilder();
            htmlForm.AppendLine("<html>");
            htmlForm.AppendLine(String.Format("<body onload='document.forms[\"{0}\"].submit()'>", formId));
            htmlForm.AppendLine(String.Format("<form id='{0}' method='POST' action='{1}'>", formId, Url));
            htmlForm.AppendLine("<input type='hidden' id='name' value='hajan' />");
            htmlForm.AppendLine("</form>");
            htmlForm.AppendLine("</body>");
            htmlForm.AppendLine("</html>");

            HttpContext.Current.Response.Clear();
            HttpContext.Current.Response.Write(htmlForm.ToString());
            HttpContext.Current.Response.End();
        }

    So, with this code we create the htmlForm string using the StringBuilder class and then just write the HTML to the page using HttpContext.Current.Response.Write.
    The interesting part here is that we submit the form using JavaScript code:

        document.forms["myForm1"].submit()

    This code runs on the body load event, which means that once the body is loaded the form is automatically submitted.

    Note: in order to test both solutions, create two applications on your web server and post the form from the first to the second website, then read the values in the second website using Request.Form.

    I hope this was a useful post for you. Regards, Hajan
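
    One detail worth flagging in both snippets: browsers submit form fields by their name attribute, not their id, so the hidden input needs a name for the value to actually arrive at the receiving site. A hedged sketch of the receiving side on the second test site:

        // the hidden field on the posting side should carry a name attribute:
        // <input type='hidden' name='name' id='name' value='hajan' />
        protected void Page_Load(object sender, EventArgs e)
        {
            // Request.Form is keyed by the name attribute of the posted field
            string name = Request.Form["name"];
        }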

    Read the article

  • Alien deletes .deb when converting from .rpm

    - by Andre
    I'm trying to convert an .rpm to a .deb using alien:

        sudo alien -k libtetra-1.0.0-2.i386.rpm

    Alien says that:

        libtetra-1.0.0-2.i386.deb generated

    But when I check the folder, there is just the original .rpm and no .deb. Also, I can see that for a split second there is a .deb file in the folder, so it looks like alien creates the .deb and deletes it right away. I suspect it's maybe because I run a 64-bit OS and the package is 32-bit. Can somebody explain why alien deletes the .deb automatically?

    Verbose output:

        LANG=C rpm -qp --queryformat %{NAME} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{VERSION} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{RELEASE} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{ARCH} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{CHANGELOGTEXT} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{SUMMARY} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{DESCRIPTION} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{PREFIXES} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{POSTIN} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{POSTUN} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{PREUN} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{LICENSE} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{PREIN} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qcp libtetra-1.0.0-2.i386.rpm
        rpm -qpi libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qpl libtetra-1.0.0-2.i386.rpm
        mkdir libtetra-1.0.0
        chmod 755 libtetra-1.0.0
        rpm2cpio libtetra-1.0.0-2.i386.rpm | lzma -t -q > /dev/null 2>&1
        rpm2cpio libtetra-1.0.0-2.i386.rpm | (cd libtetra-1.0.0; cpio --extract --make-directories --no-absolute-filenames --preserve-modification-time) 2>&1
        chmod 755 libtetra-1.0.0/./
        chmod 755 libtetra-1.0.0/./usr
        chmod 755 libtetra-1.0.0/./usr/lib
        chown 0:0 libtetra-1.0.0//usr/lib/libtetra.so.1.0.0
        chmod 755 libtetra-1.0.0//usr/lib/libtetra.so.1.0.0
        mkdir libtetra-1.0.0/debian
        date -R
        date -R
        chmod 755 libtetra-1.0.0/debian/rules
        debian/rules binary 2>&1
        libtetra_1.0.0-3_i386.deb generated
        find libtetra-1.0.0 -type d -exec chmod 755 {} ;
        rm -rf libtetra-1.0.0

    Very verbose output:

        LANG=C rpm -qp --queryformat %{NAME} libtetra-1.0.0-2.i386.rpm
        libtetra
        LANG=C rpm -qp --queryformat %{VERSION} libtetra-1.0.0-2.i386.rpm
        1.0.0
        LANG=C rpm -qp --queryformat %{RELEASE} libtetra-1.0.0-2.i386.rpm
        2
        LANG=C rpm -qp --queryformat %{ARCH} libtetra-1.0.0-2.i386.rpm
        i386
        LANG=C rpm -qp --queryformat %{CHANGELOGTEXT} libtetra-1.0.0-2.i386.rpm
        - First RPM Package
        LANG=C rpm -qp --queryformat %{SUMMARY} libtetra-1.0.0-2.i386.rpm
        Panasonic KX-MC6000 series Printer Driver for Linux.
        LANG=C rpm -qp --queryformat %{DESCRIPTION} libtetra-1.0.0-2.i386.rpm
        This software is Panasonic KX-MC6000 series Printer Driver for Linux. You can print from applications by using CUPS(Common Unix Printing System) which is the printing system for Linux. Other functions for KX-MC6000 series are not supported by this software.
        LANG=C rpm -qp --queryformat %{PREFIXES} libtetra-1.0.0-2.i386.rpm
        (none)
        LANG=C rpm -qp --queryformat %{POSTIN} libtetra-1.0.0-2.i386.rpm
        (none)
        LANG=C rpm -qp --queryformat %{POSTUN} libtetra-1.0.0-2.i386.rpm
        (none)
        LANG=C rpm -qp --queryformat %{PREUN} libtetra-1.0.0-2.i386.rpm
        (none)
        LANG=C rpm -qp --queryformat %{LICENSE} libtetra-1.0.0-2.i386.rpm
        GPL and LGPL (Version2)
        LANG=C rpm -qp --queryformat %{PREIN} libtetra-1.0.0-2.i386.rpm
        (none)
        LANG=C rpm -qcp libtetra-1.0.0-2.i386.rpm
        rpm -qpi libtetra-1.0.0-2.i386.rpm
        Name        : libtetra                 Relocations: (not relocatable)
        Version     : 1.0.0                    Vendor: Panasonic Communications Co., Ltd.
        Release     : 2                        Build Date: Tue 27 Apr 2010 05:16:40 AM EDT
        Install Date: (not installed)          Build Host: localhost.localdomain
        Group       : System Environment/Daemons   Source RPM: libtetra-1.0.0-2.src.rpm
        Size        : 31808                    License: GPL and LGPL (Version2)
        Signature   : (none)
        URL         : http://panasonic.net/pcc/support/fax/world.htm
        Summary     : Panasonic KX-MC6000 series Printer Driver for Linux.
        Description :
        This software is Panasonic KX-MC6000 series Printer Driver for Linux. You can print from applications by using CUPS(Common Unix Printing System) which is the printing system for Linux. Other functions for KX-MC6000 series are not supported by this software.
        LANG=C rpm -qpl libtetra-1.0.0-2.i386.rpm
        /usr/lib/libtetra.so
        /usr/lib/libtetra.so.1.0.0
        mkdir libtetra-1.0.0
        chmod 755 libtetra-1.0.0
        rpm2cpio libtetra-1.0.0-2.i386.rpm | lzma -t -q > /dev/null 2>&1
        rpm2cpio libtetra-1.0.0-2.i386.rpm | (cd libtetra-1.0.0; cpio --extract --make-directories --no-absolute-filenames --preserve-modification-time) 2>&1
        63 blocks
        chmod 755 libtetra-1.0.0/./
        chmod 755 libtetra-1.0.0/./usr
        chmod 755 libtetra-1.0.0/./usr/lib
        chown 0:0 libtetra-1.0.0//usr/lib/libtetra.so.1.0.0
        chmod 755 libtetra-1.0.0//usr/lib/libtetra.so.1.0.0
        mkdir libtetra-1.0.0/debian
        date -R
        Mon, 07 Feb 2011 11:03:58 -0500
        date -R
        Mon, 07 Feb 2011 11:03:58 -0500
        chmod 755 libtetra-1.0.0/debian/rules
        debian/rules binary 2>&1
        dh_testdir
        dh_testdir
        dh_testroot
        dh_clean -k -d
        dh_clean: No packages to build.
        dh_installdirs
        dh_installdocs
        dh_installchangelogs
        find . -maxdepth 1 -mindepth 1 -not -name debian -print0 | \
        xargs -0 -r -i cp -a {} debian/
        dh_compress
        dh_makeshlibs
        dh_installdeb
        dh_shlibdeps
        dh_gencontrol
        dh_md5sums
        dh_builddeb
        libtetra_1.0.0-2_i386.deb generated
        find libtetra-1.0.0 -type d -exec chmod 755 {} ;
        rm -rf libtetra-1.0.0

    Read the article

  • Using real fonts in HTML 5 & CSS 3 pages

    - by nikolaosk
This is going to be the fifth post in a series of posts regarding HTML 5. You can find the other posts here, here, here and here. In this post I will provide a hands-on example of how to use real fonts in HTML 5 pages with the help of CSS 3. Fonts have caused all sorts of problems for web designers. The real problem for web developers until now was that they were forced to use only a handful of web-safe fonts - fonts that are in wide use in most users' operating systems. CSS 3 frees designers from that restriction. Some designers, when they wanted to make their site stand out, resorted to techniques like using images instead of text; that solution is not very accessibility-friendly and is definitely less SEO-friendly. CSS 3 (through its Fonts module) allows web developers to embed fonts directly in a web page: first we define the font, then we attach it to elements. There are various formats for fonts; some are supported by all modern browsers and some are not. The most common formats are Embedded OpenType (EOT), TrueType (TTF) and OpenType (OTF). I will use the @font-face declaration to define the font used in this page. Before you download fonts (in any format) make sure you have understood all the licensing issues, and note that these real fonts will be downloaded to the client's computer. A great resource on the web (maybe the best) is http://www.typekit.com/. They have an abundance of web fonts for use; please note that they sell those fonts. Another free resource (the best things in life are free, aren't they?) is the http://www.google.com/webfonts website. I visited that website and downloaded the Aladin web font. When you download any font you like, make sure you read the license first; the Aladin web font is released under the Open Font License (OFL). Before I go on with the actual demo I will use http://caniuse.com to check the support for web fonts in the latest versions of the modern browsers. Please have a look at the picture below: all the latest versions of the modern browsers support this feature. To be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5; there are other great resources for that. Navigate to the excellent interactive tutorials of W3Schools, or to the HTML 5 Doctor site. Two very nice sites that show which features and specifications are implemented by the various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specifications. Another excellent way to find out if a browser supports HTML 5 and CSS 3 features is to use the lightweight JavaScript library Modernizr. In this hands-on example I am using Expression Web 4.0. This application is not free, but you can use any HTML editor you like, for example the free Visual Studio 2012 Express edition, which you can download here. I create a simple HTML 5 page. The markup follows and it is very easy to understand:
<!DOCTYPE html>
<html lang="en">
  <head>
    <title>HTML 5, CSS3 and JQuery</title>
    <meta http-equiv="Content-Type" content="text/html;charset=utf-8">
    <link rel="stylesheet" type="text/css" href="style.css">
  </head>
  <body>
    <div id="header">
      <h1>Learn cutting edge technologies</h1>
      <p>HTML 5, JQuery, CSS3</p>
    </div>
    <div id="main">
      <h2>HTML 5</h2>
      <p>
        HTML5 is the latest version of HTML and XHTML. The HTML standard defines a single language that can be written in HTML and XML. It attempts to solve issues found in previous iterations of HTML and addresses the needs of Web Applications, an area previously not adequately covered by HTML.
      </p>
    </div>
  </body>
</html>
Then I create the style.css file. Note that an external stylesheet must not be wrapped in <style> tags, so the file contains only the rules themselves:
@font-face {
  font-family: Aladin;
  src: url('Aladin-Regular.ttf');
}
h1 {
  font-family: Aladin, Georgia, serif;
}
As you can see, we want to style the h1 tag in our HTML 5 markup. I use the @font-face rule, specifying the font-family name and the source of the web font, and then use that name in the font-family property to style the h1 tag. Have a look below to see my page in IE10. Make sure you open this page in all the browsers installed on your machine, and that you have downloaded their latest versions. Now we can make our site stand out with web fonts and give it a really unique look and feel. Hope it helps!!!

    Read the article

  • FireFox 6 Super Slow? Cache Settings Corruption

    - by Rick Strahl
For those of you who follow me on Twitter, you've probably seen some of my tweets regarding major performance problems I've seen after installing FireFox 6.0. FireFox 6.0 was released a couple of weeks ago and is treated as a 'force feed' update for FireFox 5.0. I'm not sure what the deal is with this braindead versioning that Mozilla is doing, with major version releases now coming out what, every other month? Seriously, that's ridiculous, especially given the limited number of new features these releases bring and the upgrade pain for plug-ins that each major version release causes. Anyway, after the FireFox updater bugged me long enough I finally gave in last week and updated to FireFox 6. Immediately after the install I noticed terrible performance. Everything was running at a snail's pace, with Web pages loading slowly and most content actually 'painting' the page slowly - a typical sign of content downloading slowly. However, these are pages that should be mostly cached on my system, and even repeated accesses ran just as slowly. Just as a reality check I ran the same sites in Chrome (blazing fast) and IE (fast enough :-)) but FireFox - dog on a stick. Why so slow, Boss? While I was complaining, lots of people recommended ditching FireFox - use Chrome, yada yada yada. Yeah, Chrome is fast and getting better, but I have a number of plug-ins that I use in FF that I can't easily give up. So I suffered and started looking more closely at what was happening. The first thing I noticed when accessing pages was that I continually saw requests to the Google CDN downloading jQuery and jQuery UI. UI especially is pretty heavy in size, and currently I'm in a location with a fairly slow IP connection where large files are a bit of an issue. However, seeing the CDN URLs pop up repeatedly raised a flag with me. That stuff should be caching, and it looked like each and every hit was reloading these scripts and various images over and over again. I fired up FireBug and sure enough saw something like this on a repeated hit to my blog: The two highlights are jQuery and the main CSS file for the site, and both are being loaded fully and taking a while to load. However, since this page had been loaded before, these items should be cached and show 304 responses instead of full HTTP requests returning 200 result codes. In short, it looked like FireFox was not caching ANY content at all and was constantly reloading all page resources. No wonder things were running dog slow. Once I realized what the problem was, I took a look at the about:config settings and, lo and behold, a bunch of the cache settings were set to not cache: In my case ALL the main cache flags were set to false, for some reason that I can't figure out. It appears that after the FireFox 6 update these flags somehow mysteriously changed and performance took a nose dive. Switching the .enable flags back to true and resetting all the cache settings to their defaults reverted performance back to the way it's supposed to be: reasonably fast and snappy as soon as content is cached and accessed again from cache. I try not to muck with the about:config settings much (other than turning off the IPV6 option), but when there are problems, access to these settings can be really useful. However, I treat this as a last resort, so it took me quite some time before I started looking through ALL the settings - which takes a while when you don't know exactly what you are looking for. If Web load performance is slow, it's a good idea to check the cache settings.
I have no idea what hosed these settings for me - I certainly didn't explicitly set them in about:config, and in FireFox's Options dialog I didn't see any option that would affect global caching like this, so it remains a mystery to me. Anyway, I hope this is helpful to some of you, in case you end up running into a similar issue. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in FireFox
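If you run into the same symptoms, you don't have to hunt through about:config by hand. FireFox writes modified (non-default) preferences to prefs.js in the profile folder, so a quick grep shows whether any cache flags have been flipped. A sketch, assuming a default profile location on Linux-style paths (on Windows the profile lives under %APPDATA%\Mozilla\Firefox\Profiles):
# list any non-default cache preferences; browser.cache.disk.enable or
# browser.cache.memory.enable showing up as false would explain the behavior
grep "browser.cache" ~/.mozilla/firefox/*.default/prefs.js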

    Read the article

  • Alien deletes .deb when converting from .rpm

    - by Stann
I'm trying to convert .rpm to .deb using alien: sudo alien -k libtetra-1.0.0-2.i386.rpm Alien says: libtetra-1.0.0-2.i386.deb generated But when I check the folder, there is just the original .rpm and no .deb. Also, I can see that for a split second there is a .deb file in the folder, so it looks like alien creates the .deb and then deletes it right away. I suspect it may be because I run a 64-bit OS and the package is 32-bit? Can somebody explain why alien deletes the .deb automatically? Verbose output:
LANG=C rpm -qp --queryformat %{NAME} libtetra-1.0.0-2.i386.rpm
LANG=C rpm -qp --queryformat %{VERSION} libtetra-1.0.0-2.i386.rpm
LANG=C rpm -qp --queryformat %{RELEASE} libtetra-1.0.0-2.i386.rpm
LANG=C rpm -qp --queryformat %{ARCH} libtetra-1.0.0-2.i386.rpm
LANG=C rpm -qp --queryformat %{CHANGELOGTEXT} libtetra-1.0.0-2.i386.rpm
LANG=C rpm -qp --queryformat %{SUMMARY} libtetra-1.0.0-2.i386.rpm
LANG=C rpm -qp --queryformat %{DESCRIPTION} libtetra-1.0.0-2.i386.rpm
LANG=C rpm -qp --queryformat %{PREFIXES} libtetra-1.0.0-2.i386.rpm
LANG=C rpm -qp --queryformat %{POSTIN} libtetra-1.0.0-2.i386.rpm
LANG=C rpm -qp --queryformat %{POSTUN} libtetra-1.0.0-2.i386.rpm
LANG=C rpm -qp --queryformat %{PREUN} libtetra-1.0.0-2.i386.rpm
LANG=C rpm -qp --queryformat %{LICENSE} libtetra-1.0.0-2.i386.rpm
LANG=C rpm -qp --queryformat %{PREIN} libtetra-1.0.0-2.i386.rpm
LANG=C rpm -qcp libtetra-1.0.0-2.i386.rpm
rpm -qpi libtetra-1.0.0-2.i386.rpm
LANG=C rpm -qpl libtetra-1.0.0-2.i386.rpm
mkdir libtetra-1.0.0
chmod 755 libtetra-1.0.0
rpm2cpio libtetra-1.0.0-2.i386.rpm | lzma -t -q > /dev/null 2>&1
rpm2cpio libtetra-1.0.0-2.i386.rpm | (cd libtetra-1.0.0; cpio --extract --make-directories --no-absolute-filenames --preserve-modification-time) 2>&1
chmod 755 libtetra-1.0.0/./
chmod 755 libtetra-1.0.0/./usr
chmod 755 libtetra-1.0.0/./usr/lib
chown 0:0 libtetra-1.0.0//usr/lib/libtetra.so.1.0.0
chmod 755 libtetra-1.0.0//usr/lib/libtetra.so.1.0.0
mkdir libtetra-1.0.0/debian
date -R
date -R
chmod 755 libtetra-1.0.0/debian/rules
debian/rules binary 2>&1
libtetra_1.0.0-3_i386.deb generated
find libtetra-1.0.0 -type d -exec chmod 755 {} ;
rm -rf libtetra-1.0.0
Very verbose output:
LANG=C rpm -qp --queryformat %{NAME} libtetra-1.0.0-2.i386.rpm
libtetra
LANG=C rpm -qp --queryformat %{VERSION} libtetra-1.0.0-2.i386.rpm
1.0.0
LANG=C rpm -qp --queryformat %{RELEASE} libtetra-1.0.0-2.i386.rpm
2
LANG=C rpm -qp --queryformat %{ARCH} libtetra-1.0.0-2.i386.rpm
i386
LANG=C rpm -qp --queryformat %{CHANGELOGTEXT} libtetra-1.0.0-2.i386.rpm
- First RPM Package
LANG=C rpm -qp --queryformat %{SUMMARY} libtetra-1.0.0-2.i386.rpm
Panasonic KX-MC6000 series Printer Driver for Linux.
LANG=C rpm -qp --queryformat %{DESCRIPTION} libtetra-1.0.0-2.i386.rpm
This software is Panasonic KX-MC6000 series Printer Driver for Linux. You can print from applications by using CUPS(Common Unix Printing System) which is the printing system for Linux. Other functions for KX-MC6000 series are not supported by this software.
LANG=C rpm -qp --queryformat %{PREFIXES} libtetra-1.0.0-2.i386.rpm
(none)
LANG=C rpm -qp --queryformat %{POSTIN} libtetra-1.0.0-2.i386.rpm
(none)
LANG=C rpm -qp --queryformat %{POSTUN} libtetra-1.0.0-2.i386.rpm
(none)
LANG=C rpm -qp --queryformat %{PREUN} libtetra-1.0.0-2.i386.rpm
(none)
LANG=C rpm -qp --queryformat %{LICENSE} libtetra-1.0.0-2.i386.rpm
GPL and LGPL (Version2)
LANG=C rpm -qp --queryformat %{PREIN} libtetra-1.0.0-2.i386.rpm
(none)
LANG=C rpm -qcp libtetra-1.0.0-2.i386.rpm
rpm -qpi libtetra-1.0.0-2.i386.rpm
Name : libtetra Relocations: (not relocatable)
Version : 1.0.0 Vendor: Panasonic Communications Co., Ltd.
Release : 2 Build Date: Tue 27 Apr 2010 05:16:40 AM EDT
Install Date: (not installed) Build Host: localhost.localdomain
Group : System Environment/Daemons Source RPM: libtetra-1.0.0-2.src.rpm
Size : 31808 License: GPL and LGPL (Version2)
Signature : (none)
URL : http://panasonic.net/pcc/support/fax/world.htm
Summary : Panasonic KX-MC6000 series Printer Driver for Linux.
Description : This software is Panasonic KX-MC6000 series Printer Driver for Linux. You can print from applications by using CUPS(Common Unix Printing System) which is the printing system for Linux. Other functions for KX-MC6000 series are not supported by this software.
LANG=C rpm -qpl libtetra-1.0.0-2.i386.rpm
/usr/lib/libtetra.so
/usr/lib/libtetra.so.1.0.0
mkdir libtetra-1.0.0
chmod 755 libtetra-1.0.0
rpm2cpio libtetra-1.0.0-2.i386.rpm | lzma -t -q > /dev/null 2>&1
rpm2cpio libtetra-1.0.0-2.i386.rpm | (cd libtetra-1.0.0; cpio --extract --make-directories --no-absolute-filenames --preserve-modification-time) 2>&1
63 blocks
chmod 755 libtetra-1.0.0/./
chmod 755 libtetra-1.0.0/./usr
chmod 755 libtetra-1.0.0/./usr/lib
chown 0:0 libtetra-1.0.0//usr/lib/libtetra.so.1.0.0
chmod 755 libtetra-1.0.0//usr/lib/libtetra.so.1.0.0
mkdir libtetra-1.0.0/debian
date -R
Mon, 07 Feb 2011 11:03:58 -0500
date -R
Mon, 07 Feb 2011 11:03:58 -0500
chmod 755 libtetra-1.0.0/debian/rules
debian/rules binary 2>&1
dh_testdir
dh_testdir
dh_testroot
dh_clean -k -d
dh_clean: No packages to build.
dh_installdirs
dh_installdocs
dh_installchangelogs
find . -maxdepth 1 -mindepth 1 -not -name debian -print0 | \
xargs -0 -r -i cp -a {} debian/
dh_compress
dh_makeshlibs
dh_installdeb
dh_shlibdeps
dh_gencontrol
dh_md5sums
dh_builddeb
libtetra_1.0.0-2_i386.deb generated
find libtetra-1.0.0 -type d -exec chmod 755 {} ;
rm -rf libtetra-1.0.0
Resolution: Oh well. It looks like it's perhaps a bug, or I don't know. I simply installed the 32-bit version of Ubuntu in VirtualBox and converted the package there. For some reason I couldn't convert a 32-bit package on a 64-bit OS, and that is that. If someone ever finds the reason for this behavior, please post it somewhere in the comments. Thanks
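Until the underlying cause is found, a couple of sanity checks can at least confirm the architecture mismatch and whether the .deb really disappears. A sketch (the package file name is taken from the question above; adjust for your own package):
# compare the host architecture with the package architecture
dpkg --print-architecture                                    # e.g. amd64
rpm -qp --queryformat '%{ARCH}\n' libtetra-1.0.0-2.i386.rpm  # i386
# re-run the conversion with verbose output and check the directory immediately
sudo alien -k --verbose libtetra-1.0.0-2.i386.rpm
ls -l *.deb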

    Read the article

  • Problem with apt-get update: failed to fetch error

    - by user171447
I run an Ubuntu Server 12.04.3 LTS. Today I wanted to update it, but I did not manage to (yes...), although upgrading worked well. I don't expect you to solve my problem, but it would be great if you could give me some hints. I googled for hours and found a lot of errors of this kind, but not exactly this one. Here is the output of apt-get update:
Hit http://filepile.fastit.net precise Release.gpg
Hit http://filepile.fastit.net precise Release
Hit http://filepile.fastit.net precise/main amd64 Packages
Hit http://filepile.fastit.net precise/restricted amd64 Packages
Hit http://filepile.fastit.net precise/universe amd64 Packages
Hit http://filepile.fastit.net precise/multiverse amd64 Packages
Hit http://filepile.fastit.net precise/main i386 Packages
Hit http://filepile.fastit.net precise/restricted i386 Packages
Hit http://filepile.fastit.net precise/universe i386 Packages
Hit http://filepile.fastit.net precise/multiverse i386 Packages
Ign http://filepile.fastit.net precise/main TranslationIndex
Ign http://filepile.fastit.net precise/multiverse TranslationIndex
Ign http://filepile.fastit.net precise/restricted TranslationIndex
Ign http://filepile.fastit.net precise/universe TranslationIndex
Ign http://filepile.fastit.net precise/main Translation-en_GB
Ign http://filepile.fastit.net precise/main Translation-en
Ign http://filepile.fastit.net precise/main Translation-en_GB.UTF-8
Hit http://archive.canonical.com precise Release.gpg
Ign http://filepile.fastit.net precise/multiverse Translation-en_GB
Ign http://filepile.fastit.net precise/multiverse Translation-en
Ign http://filepile.fastit.net precise/multiverse Translation-en_GB.UTF-8
Ign http://filepile.fastit.net precise/restricted Translation-en_GB
Ign http://filepile.fastit.net precise/restricted Translation-en
Ign http://filepile.fastit.net precise/restricted Translation-en_GB.UTF-8
Ign http://filepile.fastit.net precise/universe Translation-en_GB
Ign http://filepile.fastit.net precise/universe Translation-en
Ign http://filepile.fastit.net precise/universe Translation-en_GB.UTF-8
Hit http://archive.ubuntu.com precise Release.gpg
Hit http://archive.canonical.com precise Release
Hit http://archive.ubuntu.com precise Release
Hit http://archive.ubuntu.com precise/main Sources
Hit http://archive.ubuntu.com precise/restricted Sources
Hit http://archive.ubuntu.com precise/main i386 Packages
Hit http://archive.ubuntu.com precise/restricted i386 Packages
Hit http://archive.ubuntu.com precise/multiverse i386 Packages
Hit http://archive.ubuntu.com precise/main TranslationIndex
Hit http://archive.ubuntu.com precise/multiverse TranslationIndex
Hit http://archive.ubuntu.com precise/restricted TranslationIndex
Hit http://archive.ubuntu.com precise/main Translation-en_GB
Hit http://archive.ubuntu.com precise/main Translation-en
Hit http://archive.ubuntu.com precise/multiverse Translation-en_GB
Hit http://archive.ubuntu.com precise/multiverse Translation-en
Hit http://archive.ubuntu.com precise/restricted Translation-en_GB
Hit http://archive.ubuntu.com precise/restricted Translation-en
Hit http://fr.archive.ubuntu.com precise-security Release.gpg
Hit http://fr.archive.ubuntu.com precise-updates Release.gpg
Hit http://fr.archive.ubuntu.com precise-security Release
Hit http://fr.archive.ubuntu.com precise-updates Release
Hit http://fr.archive.ubuntu.com precise-security/main amd64 Packages
Hit http://fr.archive.ubuntu.com precise-security/restricted amd64 Packages
Hit http://fr.archive.ubuntu.com precise-security/universe amd64 Packages
Hit http://fr.archive.ubuntu.com precise-security/multiverse amd64 Packages
W: Failed to fetch http://archive.canonical.com/dists/precise/Release Unable to find expected entry 'main/binary-amd64/Packages' in Release file (Wrong sources.list entry or malformed file)
E: Some index files failed to download. They have been ignored, or old ones used instead.
Hit http://fr.archive.ubuntu.com precise-security/main i386 Packages
Hit http://fr.archive.ubuntu.com precise-security/restricted i386 Packages
Hit http://fr.archive.ubuntu.com precise-security/universe i386 Packages
Hit http://fr.archive.ubuntu.com precise-security/multiverse i386 Packages
Hit http://fr.archive.ubuntu.com precise-security/main TranslationIndex
Hit http://fr.archive.ubuntu.com precise-security/multiverse TranslationIndex
Hit http://fr.archive.ubuntu.com precise-security/restricted TranslationIndex
Hit http://fr.archive.ubuntu.com precise-security/universe TranslationIndex
Hit http://fr.archive.ubuntu.com precise-updates/main amd64 Packages
Hit http://fr.archive.ubuntu.com precise-updates/restricted amd64 Packages
Hit http://fr.archive.ubuntu.com precise-updates/universe amd64 Packages
Hit http://fr.archive.ubuntu.com precise-updates/multiverse amd64 Packages
Hit http://fr.archive.ubuntu.com precise-updates/main i386 Packages
Hit http://fr.archive.ubuntu.com precise-updates/restricted i386 Packages
Hit http://fr.archive.ubuntu.com precise-updates/universe i386 Packages
Hit http://fr.archive.ubuntu.com precise-updates/multiverse i386 Packages
Hit http://fr.archive.ubuntu.com precise-updates/main TranslationIndex
Hit http://fr.archive.ubuntu.com precise-updates/multiverse TranslationIndex
Hit http://fr.archive.ubuntu.com precise-updates/restricted TranslationIndex
Hit http://fr.archive.ubuntu.com precise-updates/universe TranslationIndex
Hit http://fr.archive.ubuntu.com precise-security/main Translation-en
Hit http://fr.archive.ubuntu.com precise-security/multiverse Translation-en
Hit http://fr.archive.ubuntu.com precise-security/restricted Translation-en
Hit http://fr.archive.ubuntu.com precise-security/universe Translation-en
Hit http://fr.archive.ubuntu.com precise-updates/main Translation-en_GB
Hit http://fr.archive.ubuntu.com precise-updates/main Translation-en
Hit http://fr.archive.ubuntu.com precise-updates/multiverse Translation-en_GB
Hit http://fr.archive.ubuntu.com precise-updates/multiverse Translation-en
Hit http://fr.archive.ubuntu.com precise-updates/restricted Translation-en_GB
Hit http://fr.archive.ubuntu.com precise-updates/restricted Translation-en
Hit http://fr.archive.ubuntu.com precise-updates/universe Translation-en_GB
Hit http://fr.archive.ubuntu.com precise-updates/universe Translation-en
And here is my /etc/apt/sources.list:
###### Ubuntu Main Repos
deb http://filepile.fastit.net/ubuntu/ precise main restricted universe multiverse
# deb http://de.archive.ubuntu.com/ubuntu/ precise main restricted universe multiverse
deb http://archive.canonical.com/ precise main restricted universe multiverse
###### Ubuntu Update Repos
deb http://fr.archive.ubuntu.com/ubuntu/ precise-security main restricted universe multiverse
deb http://fr.archive.ubuntu.com/ubuntu/ precise-updates main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu/ precise main restricted multiverse
deb-src http://archive.ubuntu.com/ubuntu precise main restricted multiverse
Thanks for your help!
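The failing line appears to be the Canonical one: archive.canonical.com only publishes a partner component, so asking it for main/restricted/universe/multiverse produces exactly the "Unable to find expected entry 'main/binary-amd64/Packages'" error above. A sketch of a likely fix (verify the line against your own sources.list before applying):
# replace the malformed Canonical entry with the standard partner line
sudo sed -i 's|^deb http://archive.canonical.com/ precise.*|deb http://archive.canonical.com/ubuntu precise partner|' /etc/apt/sources.list
sudo apt-get update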

    Read the article

  • How does one find out which application is associated with an indicator icon?

    - by Amos Annoy
It is trivial to do this in Ubuntu 10.04; the question is specific to Ubuntu 12.04. Some pertinent references (source: an answer to "What is the difference between indicators and a system tray?"): the documentation for indicators - Application indicators | Ubuntu App Developer, the libindicate Reference Manual, the libappindicator Reference Manual, and also DesktopExperienceTeam/ApplicationIndicators - Ubuntu Wiki. Related: "How can the application that makes an indicator icon be identified?" Bookmark: "How does one find out which application is associated with an indicator icon in Ubuntu 12.04?" This is a serious question, for the reasons and problems outlined below, into which a significant investment has been made and which needs an answer for remedial purposes. I am reviewing the references to find an orchestrated resolution (an indicator-application indicator may be needed). This has nothing to do (does it?) with right-click. How can an indicator's icon in Ubuntu 12.04 be matched with the program responsible for its manifestation on the top panel? A list of running applications, including all processes, can be had with System Monitor - but how is the correct matching process found for an indicator? And how are the sub-indicator applications identified? These are the apps associated with the components of an indicator's drop-down menu. (This was to be a separate question and quite naturally follows in the progression; it is included here because there is obviously no easy provision for tracking down an offending indicator or sub-indicator app.) (The examination of System Monitor points out a rather poignant factor in the faster battery depletion and shortened run time: the ambient quiescent CPU rate in 12.04 is now well over 20%, where previously, in 10.04, it was well under 10% - between 5% and 7%! The huge, inordinate CPU overhead originates from Xorg and compiz. After booting the system, only System Monitor is run and All Processes are selected, sorting on %CPU; switching between the Resources and Processes views profiles the execution overhead problem. Running another app like gedit "Text Editor" briefly gives it CPU priority, but back in System Monitor several apps sit at the top of the list, in order: gnome-system-monitor, as expected, then Xorg, compiz, unity-panel-service and hud-service, with dbus-daemon and kworker/x:y's mixed in with some expected daemons and background tasks like nm-applet. Not only do Xorg and compiz require excessive CPU time, but their entourage has to come along too, further exacerbating the problem. Our compute-bound tasks no longer work effectively in the field - reduced battery life, reduced CPU time for custom apps, etc. - and all this was precipitated by an examination of what is going on with the battery app indicator. This was and is not a flippant, rhetorical or idle musing, but has consequences for the credible deployment of 12.04 and for reducing the negative impact of its overhead in a production environment.) (I have a problem with the battery indicator - it sometimes shows % and at other times hh:mm - and it is necessary to know the app and version to get more information on controlling it. Ditto for issues with the other indicator apps: NM vs. the iwlist/iwconfig conflict, the BT app vs. the RF switch, the battery app with no suspend/sleep giving poor battery runtime... the list goes on.) Details from "How can I find Application Indicator ID's?" suggest looking at file:///usr/share/indicator-application/ordering-override.keyfile:
[Ordering Index Overrides]
nm-applet=1
gnome-power-manager=2
ibus=3
gst-keyboard-xkb=4
gsd-keyboard-xkb=5
which solves the battery app identification, and presumably nm is NetworkManager for the RF icon, but the envelope, Bluetooth and speaker indicator apps are still a mystery. (Also, the ordering is not correlated.) Mind you, in the past it was simple to right-click and use the About option to find the app and version. Browsing around file:///usr/share/indicator-application/ordering-override.keyfile, I examined file:///usr/share/indicators and file:///usr/share/indicators/messages/applications/ ... perhaps (presumably?) the information sought is buried in file:///usr/share/indicators. A reference in the comments was given to "What is the difference between indicators and a system tray?"; quoting from that source: "Unfortunately desktop indicators are not well documented yet: I couldn't find any specification doc ..." Well, the actual document, https://wiki.ubuntu.com/DesktopExperienceTeam/ApplicationIndicators#Summary, does not help much, but its existence alone provides considerable insight ...
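Short of proper documentation, one practical starting point is simply to look at which indicator services are running and where their binaries live; matching a service name to an icon is then a process of elimination. A rough sketch (service names and paths are typical for 12.04, not guaranteed):
# show the indicator service processes currently running
ps ax | grep -i '[i]ndicator'
# the per-indicator service binaries are usually installed under /usr/lib/indicator-*
ls -d /usr/lib/indicator-*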

    Read the article

  • Creating Rich View Components in ASP.NET MVC

    - by kazimanzurrashid
One of the nice things about our Telerik Extensions for ASP.NET MVC is that they give you an excellent, extensible platform for creating rich view components. In this post, I will show you a tiny but very powerful ListView component. Those who are familiar with the WebForms ListView component already know that it supports defining the different parts of the component; we will have the same kind of support in our view component. Before showing you the markup, let me show you the screenshots first. Let's say you want to show the customers of the Northwind database in a pagable business-card style (yes, the example is inspired by our RadControls suite). And here is the markup of the above view component:
<h2>Customers</h2>
<% Html.Telerik()
       .ListView(Model)
       .Name("customers")
       .PrefixUrlParameters(false)
       .BeginLayout(pager => {%>
           <table border="0" cellpadding="3" cellspacing="1">
               <tfoot>
                   <tr>
                       <td colspan="3" class="t-footer">
                           <% pager.Render(); %>
                       </td>
                   </tr>
               </tfoot>
               <tbody>
                   <tr>
       <%})
       .BeginGroup(() => {%>
                       <td>
       <%})
       .Item(item => {%>
                           <fieldset style="border:1px solid #e0e0e0">
                               <legend><strong>Company Name</strong>:<%= Html.Encode(item.DataItem.CompanyName) %></legend>
                               <div>
                                   <div style="float:left;width:120px">
                                       <img alt="<%= item.DataItem.CustomerID %>" src="<%= Url.Content("~/Content/Images/Customers/" + item.DataItem.CustomerID + ".jpg") %>"/>
                                   </div>
                                   <div style="float:right">
                                       <ul style="list-style:none none;padding:10px;margin:0">
                                           <li><strong>Contact Name:</strong> <%= Html.Encode(item.DataItem.ContactName) %></li>
                                           <li><strong>Title:</strong> <%= Html.Encode(item.DataItem.ContactTitle) %></li>
                                           <li><strong>City:</strong> <%= Html.Encode(item.DataItem.City)%></li>
                                           <li><strong>Country:</strong> <%= Html.Encode(item.DataItem.Country)%></li>
                                           <li><strong>Phone:</strong> <%= Html.Encode(item.DataItem.Phone)%></li>
                                           <li>
                                               <div style="float:right">
                                                   <%= Html.ActionLink("Edit", "Edit", new { id = item.DataItem.CustomerID }) %>
                                                   <%= Html.ActionLink("Delete", "Delete", new { id = item.DataItem.CustomerID })%>
                                               </div>
                                           </li>
                                       </ul>
                                   </div>
                               </div>
                           </fieldset>
       <%})
       .EmptyItem(() => {%>
                           <fieldset style="border:1px solid #e0e0e0">
                               <legend>Empty</legend>
                           </fieldset>
       <%})
       .EndGroup(() => {%>
                       </td>
       <%})
       .EndLayout(pager => {%>
                   </tr>
               </tbody>
           </table>
       <%})
       .GroupItemCount(3)
       .PageSize(6)
       .Pager<NumericPager>(pager => pager.ShowFirstLast())
       .Render(); %>
As you can see, you have complete control over the final angle brackets, and, like the WebForms version, you can also define the templates. You can use this component to show master/detail data as well, for example the customers and their orders, like the following: I am attaching the complete source code for the above examples for your review. What do you think? How about creating some components with our extensions? Download: MvcListView.zip

    Read the article

  • Man pages not finding entry

    - by Mike
So, I'm not sure what is going on with my system (ubuntu 12.04), but my man pages do not seem to be working. I try man gcc and get the following response:
No manual entry for gcc
See 'man 7 undocumented' for help when manual pages are not available.
However I see the man entry in /usr/share/man/man1/gcc.1.gz. Here is what my /etc/manpath.config file looks like:
# manpath.config
#
# This file is used by the man-db package to configure the man and cat paths.
# It is also used to provide a manpath for those without one by examining
# their PATH environment variable. For details see the manpath(5) man page.
#
# Lines beginning with `#' are comments and are ignored. Any combination of
# tabs or spaces may be used as `whitespace' separators.
#
# There are three mappings allowed in this file:
# --------------------------------------------------------
# MANDATORY_MANPATH manpath_element
# MANPATH_MAP path_element manpath_element
# MANDB_MAP global_manpath [relative_catpath]
#---------------------------------------------------------
# every automatically generated MANPATH includes these fields
#
#MANDATORY_MANPATH /usr/src/pvm3/man
#
MANDATORY_MANPATH /usr/man
MANDATORY_MANPATH /usr/share/man
MANDATORY_MANPATH /usr/local/share/man
#---------------------------------------------------------
# set up PATH to MANPATH mapping
# ie. what man tree holds man pages for what binary directory.
#
# *PATH* -> *MANPATH*
#
MANPATH_MAP /bin /usr/share/man
MANPATH_MAP /usr/bin /usr/share/man
MANPATH_MAP /sbin /usr/share/man
MANPATH_MAP /usr/sbin /usr/share/man
MANPATH_MAP /usr/local/bin /usr/local/man
MANPATH_MAP /usr/local/bin /usr/local/share/man
MANPATH_MAP /usr/local/sbin /usr/local/man
MANPATH_MAP /usr/local/sbin /usr/local/share/man
MANPATH_MAP /usr/X11R6/bin /usr/X11R6/man
MANPATH_MAP /usr/bin/X11 /usr/X11R6/man
MANPATH_MAP /usr/games /usr/share/man
MANPATH_MAP /opt/bin /opt/man
MANPATH_MAP /opt/sbin /opt/man
#---------------------------------------------------------
# For a manpath element to be treated as a system manpath (as most of those
# above should normally be), it must be mentioned below. Each line may have
# an optional extra string indicating the catpath associated with the
# manpath. If no catpath string is used, the catpath will default to the
# given manpath.
#
# You *must* provide all system manpaths, including manpaths for alternate
# operating systems, locale specific manpaths, and combinations of both, if
# they exist, otherwise the permissions of the user running man/mandb will
# be used to manipulate the manual pages. Also, mandb will not initialise
# the database cache for any manpaths not mentioned below unless explicitly
# requested to do so.
#
# In a per-user configuration file, this directive only controls the
# location of catpaths and the creation of database caches; it has no effect
# on privileges.
#
# Any manpaths that are subdirectories of other manpaths must be mentioned
# *before* the containing manpath. E.g. /usr/man/preformat must be listed
# before /usr/man.
#
# *MANPATH* -> *CATPATH*
#
MANDB_MAP /usr/man /var/cache/man/fsstnd
MANDB_MAP /usr/share/man /var/cache/man
MANDB_MAP /usr/local/man /var/cache/man/oldlocal
MANDB_MAP /usr/local/share/man /var/cache/man/local
MANDB_MAP /usr/X11R6/man /var/cache/man/X11R6
MANDB_MAP /opt/man /var/cache/man/opt
#
#---------------------------------------------------------
# Program definitions. These are commented out by default as the value
# of the definition is already the default. To change: uncomment a
# definition and modify it.
#
#DEFINE pager pager -s
#DEFINE cat cat
#DEFINE tr tr '\255\267\264\327' '\055\157\047\170'
#DEFINE grep grep
#DEFINE troff groff -mandoc
#DEFINE nroff nroff -mandoc
#DEFINE eqn eqn
#DEFINE neqn neqn
#DEFINE tbl tbl
#DEFINE col col
#DEFINE vgrind vgrind
#DEFINE refer refer
#DEFINE grap grap
#DEFINE pic pic -S
#
#DEFINE compressor gzip -c7
#---------------------------------------------------------
# Misc definitions: same as program definitions above.
#
#DEFINE whatis_grep_flags -i
#DEFINE apropos_grep_flags -iEw
#DEFINE apropos_regex_grep_flags -iE
#---------------------------------------------------------
# Section names. Manual sections will be searched in the order listed here;
# the default is 1, n, l, 8, 3, 0, 2, 5, 4, 9, 6, 7. Multiple SECTION
# directives may be given for clarity, and will be concatenated together in
# the expected way.
# If a particular extension is not in this list (say, 1mh), it will be
# displayed with the rest of the section it belongs to. The effect of this
# is that you only need to explicitly list extensions if you want to force a
# particular order. Sections with extensions should usually be adjacent to
# their main section (e.g. "1 1mh 8 ...").
#
SECTION 1 n l 8 3 2 3posix 3pm 3perl 5 4 9 6 7
#
#---------------------------------------------------------
# Range of terminal widths permitted when displaying cat pages. If the
# terminal falls outside this range, cat pages will not be created (if
# missing) or displayed.
#
#MINCATWIDTH 80
#MAXCATWIDTH 80
#
# If CATWIDTH is set to a non-zero number, cat pages will always be
# formatted for a terminal of the given width, regardless of the width of
# the terminal actually being used. This should generally be within the
# range set by MINCATWIDTH and MAXCATWIDTH.
#
#CATWIDTH 0
#
#---------------------------------------------------------
# Flags.
# NOCACHE keeps man from creating cat pages.
#NOCACHE
Thanks for any help (p.s. even 'man man' fails)
Edit: When I run ls -l /usr/share/man/man1/gcc* I get the following output:
lrwxrwxrwx 1 root root 12 May 27 15:41 /usr/share/man/man1/gcc.1.gz -> gcc-4.6.1.gz
-rw-r--r-- 1 root root 217776 Apr 15 17:34 /usr/share/man/man1/gcc-4.6.1.gz
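Before digging further into manpath.config, a couple of standard checks are worth running. A sketch using the stock man-db tools that ship with 12.04:
# rebuild the manual page database
sudo mandb
# show the search path man actually computes from the config above
manpath
# run man in debug mode to see where it looks for gcc and why the lookup fails
man -d gcc 2>&1 | less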

    Read the article

< Previous Page | 236 237 238 239 240 241 242 243 244 245 246 247  | Next Page >