Search Results

Search found 15906 results on 637 pages for 'scott and the dev team'.


  • open source knowledge base CMS system

    - by Thomi
    I'm looking for an open source knowledge base system that uses tags, rather than free-text search, to identify articles (a lot like Server Fault does). I've looked at TWiki, which many people suggested, but haven't found what I'm looking for. Basically I want to be able to create and tag articles, and provide an easy way for anonymous users to search based on tags.

    Edit: OK, here's some more detail regarding what I want. Basically, all the knowledge base systems I have seen so far are a collection of articles, each article with a title. Most of them allow you to categorise articles into groups and sub-groups. Users of the system can search for information using a title search, for example "How do I print from AwesomeProduct?" - which then shows a list of any articles that match that search text. This is fine and dandy when your KB is for one version of the software product (the mythical AwesomeProduct ver 1.0). However, the development team then go ahead and create a new version (ver 2.0) that adds many new features and changes some existing features. Now, how do we support both products in the same KB?

    The naive method is to copy all articles from 1.0 and update them for 2.0, adding and removing articles in 2.0 as required. We can then add text at the top of every 1.0 article that says: "this article applies to 1.0 only; to see the 2.0 version, click here" (or something similar). The problem with articles being indexed in the system by title is that it's very hard to filter based on metadata like version. What happens when we create version 3.0 or 4.0? The end situation here is that you have a mess of articles. They're hard to search, hard to filter, and even harder to manage.

    The solution (it seems to me) is to use tags, rather than text, as the article index mechanism. Articles can then be tagged with a tag representing the software version, topic area, etc. Users can then filter based on tags - an example search might be "version_1 printing" - which straight away gives a list of articles carrying all these tags. So that's what I'm looking for - a KB system that uses tags, rather than text, to index many articles. I'm sure I could build something with Drupal, but I was hoping for something that worked out of the box.


  • Routing all data through a VPN tunnel with ppp

    - by Oliver
    I'm trying to create a VPN tunnel that forwards all data from the local machine to the VPN server. I'm using ppp-2.4.5 for this with the following configuration:

        pty "pptp <VPNServer> --nolaunchpppd"
        name <my login name>
        remotename PPTP
        usepeerdns
        require-mppe-128
        file /etc/ppp/options.pptp
        persist
        maxfail 0
        holdoff 5

    I have a script in if-up.d with the following content:

        route del default eth0
        route add default dev ppp0

    Before starting the VPN tunnel my routing looks like:

        Kernel IP routing table
        Destination   Gateway      Genmask          Flags Metric Ref Use Iface
        0.0.0.0       192.168.0.1  0.0.0.0          UG    2      0   0   eth0
        127.0.0.0     127.0.0.1    255.0.0.0        UG    0      0   0   lo
        192.168.0.0   0.0.0.0      255.255.0.0      U     0      0   0   eth0

    After starting the tunnel (via pon) it looks like:

        Kernel IP routing table
        Destination   Gateway      Genmask          Flags Metric Ref Use Iface
        0.0.0.0       0.0.0.0      0.0.0.0          U     0      0   0   ppp0
        12.34.56.1    0.0.0.0      255.255.255.255  UH    0      0   0   ppp0
        127.0.0.0     127.0.0.1    255.0.0.0        UG    0      0   0   lo
        192.168.0.0   0.0.0.0      255.255.0.0      U     0      0   0   eth0

    Now the problem is that the VPN tunnel seems to be looped into itself. If I run ifconfig after a few seconds without any traffic:

        eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
              inet 192.168.0.10 netmask 255.255.0.0 broadcast 192.168.255.255
              ether 00:01:2e:2f:ff:35 txqueuelen 1000 (Ethernet)
              RX packets 39931 bytes 6784614 (6.4 MiB)
              RX errors 0 dropped 90 overruns 0 frame 0
              TX packets 34980 bytes 7633181 (7.2 MiB)
              TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
              device interrupt 20 memory 0xfbdc0000-fbde0000

        ppp0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1496
              inet 12.34.56.78 netmask 255.255.255.255 destination 12.34.56.1
              ppp txqueuelen 3 (Point-to-Point Protocol)
              RX packets 7 bytes 94 (94.0 B)
              RX errors 0 dropped 0 overruns 0 frame 0
              TX packets 782863 bytes 349257986 (333.0 MiB)
              TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    It claims that over 300 MiB have already been sent, although ppp0 has only been up for a few seconds and the connection isn't working anyway. Can someone please help me fix the routing table, so that traffic from ppp0 is not sent through ppp0 again but instead goes to the remote server?
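
    A pattern worth noting in the tables above: after pon, the only route to the VPN server's public endpoint goes through ppp0 itself, so the encapsulated PPTP packets are fed back into the tunnel. The common fix is to pin a host route to the VPN server via the physical gateway before swapping the default route; a minimal if-up.d sketch, using the pre-VPN gateway 192.168.0.1 from the first table and a placeholder <VPNServerIP> for the server's public address:

        #!/bin/sh
        # keep the tunnel's own (outer) PPTP packets on the physical link
        route add -host <VPNServerIP> gw 192.168.0.1 dev eth0
        # only then point the default route at the tunnel
        route del default eth0
        route add default dev ppp0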


  • Nginx fastcgi problems with django

    - by wizard
    I'm deploying my first django app. I'm familiar with nginx and fastcgi from deploying php-fpm. I can't get python to recognize the urls. I'm also at a loss on how to debug this further. I'd welcome solutions to this problem and tips on debugging fastcgi problems. Currently I get a 404 page regardless of the url and for some reason a double slash For http://www.site.com/admin/ Page not found (404) Request Method: GET Request URL: http://www.site.com/admin// My urls.py from the debug output - which work in the dev server. Using the URLconf defined in ahrlty.urls, Django tried these URL patterns, in this order: ^listings/ ^admin/ ^accounts/login/$ ^accounts/logout/$ my nginx config server { listen 80; server_name beta.ahrlty.com; access_log /home/ahrlty/ahrlty/logs/access.log; error_log /home/ahrlty/ahrlty/logs/error.log; location /static/ { alias /home/ahrlty/ahrlty/ahrlty/static/; break; } location /media/ { alias /usr/lib/python2.6/dist-packages/django/contrib/admin/media/; break; } location / { include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:8001; break; } } and my fastcgi_params fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param PATH_INFO $fastcgi_script_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; And lastly I'm running fastcgi from the commandline with django's manage.py. python manage.py runfcgi method=threaded host=127.0.0.1 port=8080 pidfile=mysite.pid minspare=4 maxspare=30 daemonize=false I'm having a hard time debugging this one. Does anything jump out at anybody?
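
    One detail worth checking in the excerpt above: nginx passes requests to 127.0.0.1:8001, while the runfcgi command at the end binds port 8080. A minimal sketch of the two ends agreeing (8001 is simply the value already present in the nginx config):

        # nginx is pointed at 127.0.0.1:8001 (fastcgi_pass above), so the
        # FastCGI server must bind the same host and port
        python manage.py runfcgi method=threaded host=127.0.0.1 port=8001 \
            pidfile=mysite.pid minspare=4 maxspare=30 daemonize=false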


  • Two VHosts Use Same DocumentRoot, PHP Not Working on Second VHost

    - by thegrip
    I'm helping maintain an e-commerce site that is run on Magento. This site is an outlet for our wholesale customers. We have recently decided to open a second store to reach out to our Retail customers. We decided to set up another website inside of our magento store so that we can share the products across both stores. I'm in the process of setting up this new site on the server, but have run into an issue. I've set up the second vhost for the new retail site, and I've made the DocumentRoot for this vhost the same as for the wholesale site, so we can use one magento application for both sites. This is where the error occurs. When I browse to the new store it triggers a download of the index.php file. So I know the DocumentRoot directive is working, but it seems like PHP is being broken in the process. I'm using plesk to manage the server. I've made sure that PHP is turned on in both vhosts and still get the same issue. Does this sound like a problem of PHP breaking, or is it possible my vhost.conf file is set up incorrectly? (Although the vhost is managed by plesk and appears correct) Any help will be much appreciated. EDIT: Here's the vhost configs (generated by plesk): <VirtualHost IPADDRESS:80> ServerName domain.com:80 ServerAdmin "[email protected]" DocumentRoot /var/www/vhosts/domain.com/subdomains/tk/httpdocs CustomLog /var/www/vhosts/domain.com/statistics/logs/access_log plesklog ErrorLog /var/www/vhosts/domain.com/statistics/logs/error_log <IfModule mod_ssl.c> SSLEngine off </IfModule> <Directory /var/www/vhosts/domain.com/subdomains/tk/httpdocs> <IfModule mod_php4.c> php_admin_flag engine on php_admin_flag safe_mode off php_admin_value open_basedir "/var/www/vhosts/domain.com/subdomains/tk/httpdocs:/tmp" </IfModule> <IfModule mod_php5.c> php_admin_flag engine on php_admin_flag safe_mode off php_admin_value open_basedir "/var/www/vhosts/mkdesigngroup.com/subdomains/tk/httpdocs:/tmp" </IfModule> Options -Includes -ExecCGI </Directory> Include /var/www/vhosts/domain.com/subdomains/tk/conf/vhost.conf And here's what I've added in vhost.conf (which is included by plesk): DocumentRoot /var/www/vhosts/domain.com/subdomains/dev/httpdocs -grip
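
    One mismatch visible in the configs above, whether or not it fully explains the download behaviour: the vhost.conf override points DocumentRoot at .../subdomains/dev/httpdocs, while both open_basedir directives only allow .../subdomains/tk/httpdocs. A sketch of an override (kept in vhost.conf, since Plesk regenerates the outer file) that keeps the two in step:

        DocumentRoot /var/www/vhosts/domain.com/subdomains/dev/httpdocs
        <Directory /var/www/vhosts/domain.com/subdomains/dev/httpdocs>
            <IfModule mod_php5.c>
                php_admin_flag engine on
                php_admin_value open_basedir "/var/www/vhosts/domain.com/subdomains/dev/httpdocs:/tmp"
            </IfModule>
        </Directory>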


  • Thinkpad speaker turns mute - Linux Codec issue?

    - by Curlew
    At some point a few days ago the speakers on my Lenovo ThinkPad T410 (model number: 2537A11) suddenly stopped working. This error happens every time I watch a video or listen to a music file: the sound just abruptly stops. At the moment, I can't produce a single sound no matter what I do. I am using Debian GNU/Linux on this laptop and there doesn't appear to be anything else wrong (the fan is working, no abnormal heat (staying around ~40°C), no other obvious errors or problems). Here is the output of a nice program someone pointed me to:

        martin@martin:~/Downloads$ sudo python run.py --monitor
        Using temporary directory: /dev/shm/hda-analyzer
        You may remove this directory when finished or if you like to
        download the most recent copy of hda-analyzer tool.
        Downloading file hda_analyzer.py
        Downloading file hda_guilib.py
        Downloading file hda_codec.py
        Downloading file hda_proc.py
        Downloading file hda_graph.py
        Downloading file hda_mixer.py
        Downloaded all files, executing hda_analyzer.py
        Watching 1 cards
        ======================================

    Sound is working normally, then it stops and the following lines appear:

        Diff for codec 0/0 (0x14f15069):
        ---
        +++
        @@ -164,17 +164,17 @@
             Power: setting=D0, actual=D0
           Node 0x1f [Pin Complex] wcaps 0x400501: Stereo
             Pincap 0x00000010: OUT
             Pin Default 0x901701f0: [Fixed] Speaker at Int N/A
               Conn = Analog, Color = Unknown
               DefAssociation = 0xf, Sequence = 0x0
               Misc = NO_PRESENCE
             Pin-ctls: 0x40: OUT
        -    Power: setting=D0, actual=D0
        +    Power: setting=D3, actual=D3
             Connection: 2
               0x10* 0x11
           Node 0x20 [Pin Complex] wcaps 0x400781: Stereo Digital
             Pincap 0x00000010: OUT
             Pin Default 0x40f001f0: [N/A] Other at Ext N/A
               Conn = Unknown, Color = Unknown
               DefAssociation = 0xf, Sequence = 0x0
               Misc = NO_PRESENCE

    And now there is also an error in the dmesg output:

        hda-intel: IRQ timing workaround is activated for card #0. Suggest a bigger bdl_pos_adj.

    I changed bdl_pos_adj to various numbers (-1, 0, 64, 1024) and either there is no change at all, or dmesg reports that the adjustment is too big. I wonder if this bdl_pos_adj is the real reason for the error. Here is my hardware information provided by the alsa-info.sh website.

    Okay, I did some serious testing and even installed Windows, and now I officially conclude that this is a hardware-related issue with my laptop speakers. Reasons: The error occurs in my installed Debian Linux, an Ubuntu live distribution and Windows XP. No error message appears in any of the OSes; the sound just keeps "running" and I can't hear a thing. I tested different setups, including OSS, ALSA and the PulseAudio server on top. If I use my new USB headphones, I can hear sound all the time without any sudden silences. So obviously, although hard to believe, my laptop speakers are not okay (I have never heard of similar cases). I'll award the bounty to anyone who can point me to good tutorials on, or the procedure for, exchanging my T410 speakers (I still have warranty; the laptop was bought in Germany, but now I am in Denmark). Or to someone who can explain the output from hda-analyzer (big log above).
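
    For reference, since the post only says bdl_pos_adj was "changed": it is a parameter of the snd-hda-intel module, so a persistent way to test values is a modprobe option. A sketch only; 32 is an arbitrary test value, and as the author concluded, no value will help if the speakers themselves have failed:

        # /etc/modprobe.d/alsa-base.conf
        options snd-hda-intel bdl_pos_adj=32
        # then reload the driver (or reboot):
        #   rmmod snd-hda-intel && modprobe snd-hda-intel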


  • How to Configure Source NAT (Private IP => Public IP Outbound)

    - by DavidScherer
    I'm running the free VMware ESXi and have Zentyal SBS 3.2 running as a gateway. I have 5 public IPs (a /29; let's call them 69.1.1.1 - 69.1.1.5) and currently Zentyal is bound to 69.1.1.1 as the gateway, with the other 4 public IPs set as virtual interfaces in Zentyal (wan2-wan5). I have machines sitting on the private network (10.34.251.x) that, when going outbound (to Google, for instance), should be seen by the Internet as an IP other than the gateway (69.1.1.1); this is because our machines need to be able to communicate with 3rd-party APIs that expect these requests to come from a specific IP.

    From what I could find, SNAT (Source NAT) in Zentyal is used to achieve this, but I'm not sure how to configure it and cannot find any specific documentation for it at Zentyal. I've tried setting this up a couple of different ways, with no results, and at this point I have no idea whether I'm going about this completely wrong, or whether my lack of experience with networking and the associated terminology is preventing me from placing the correct values in the correct fields. I get a form with the following fields to set up "SNAT" rules in Zentyal; perhaps someone can offer some guidance and definitions for them?

    SNAT Address - Is this the public IP I want to masquerade as?
    Outgoing Interface - Should this be my external NIC (the one connected to the public 'net), or is it the "private" interface? It sounds as though this should be the external interface, as I want the traffic from the internal network sent out over this interface (using a different IP than normal, anyway).
    Source - Is this the source on the internal network (one of the private IPs?), a public IP I want to masquerade as, or something else entirely?
    Destination - Is this a place on the Internet (e.g. "only do this for the site Google.com"/an IP), or am I allowing myself to become confused again?
    Service - I'm assuming this allows me to restrict which services this rule will apply to, but is it for a service on the internal network or a service being accessed on the external network?

    If I can offer any further details or information to make what I'm trying to do more clear, I will happily do so. Honestly, any kind of help here would be very appreciated. I'm not a NetOps guy or anything even close; I spend most of my day writing code, and my entire "team" at this company consists of "me, myself, and I", so while I try to broaden my KB at every possible opportunity, I can only learn so much, so fast. With networking especially there's just so much, coupled with a learning curve for each solution, which (from my limited perspective) likes to use slightly different terminology than what I'm used to (and I don't exactly have the necessary experience to cross-reference this stuff with the stuff I already know in context).
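
    For orientation, whatever Zentyal's form generates eventually lands in netfilter as a rule of roughly this shape. A sketch only, using the addresses from the question; 69.1.1.2 stands in for whichever public IP the third-party API expects:

        # rewrite the source address of traffic leaving the WAN interface
        # from the private LAN, so the Internet sees 69.1.1.2
        iptables -t nat -A POSTROUTING -s 10.34.251.0/24 -o eth0 \
            -j SNAT --to-source 69.1.1.2

    Mapped onto the form: SNAT Address is the public IP to masquerade as (--to-source), Outgoing Interface is the external NIC (-o), Source is the internal network or host, and Destination/Service merely narrow which traffic the rule matches.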


  • Routing table with two NIC adapters in libvirt/KVM

    - by lzap
    I created a virtual NAT network (192.168.100.0/24) in my libvirt and a new guest with two interfaces - one in this network, and one bridged (10.34.1.0/24) to the local LAN. The reason for that is I need my own virtual network for my DHCP/TFTP/DNS testing and still want to access my guest externally from my LAN. On both networks I have working DHCP, both giving them IP addresses. When I set up NAT port forwarding (e.g. for ssh), I can connect to eth0 (the virtual network); everything is fine. But when I try to access eth1 via the bridged interface, I get no response. I guess I have a problem with my routing table - outgoing packets are routed to the virtual NAT network (which has access to the machine I am connecting from - I can ping it). But I am not sure if this setup is correct. I think I need to add something to my routing table.

        # ifconfig
        eth0  Link encap:Ethernet  HWaddr 52:54:00:B4:A7:5F
              inet addr:192.168.100.14  Bcast:192.168.100.255  Mask:255.255.255.0
              inet6 addr: fe80::5054:ff:feb4:a75f/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:16468 errors:0 dropped:27 overruns:0 frame:0
              TX packets:6081 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:22066140 (21.0 MiB)  TX bytes:483249 (471.9 KiB)
              Interrupt:11 Base address:0x2000

        eth1  Link encap:Ethernet  HWaddr 52:54:00:DE:16:21
              inet addr:10.34.1.111  Bcast:10.34.1.255  Mask:255.255.255.0
              inet6 addr: fe80::5054:ff:fede:1621/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:34 errors:0 dropped:0 overruns:0 frame:0
              TX packets:189 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:4911 (4.7 KiB)  TX bytes:9

        # route -n
        Kernel IP routing table
        Destination    Gateway        Genmask        Flags Metric Ref Use Iface
        192.168.100.0  0.0.0.0        255.255.255.0  U     0      0   0   eth0
        10.34.1.0      0.0.0.0        255.255.255.0  U     0      0   0   eth1
        169.254.0.0    0.0.0.0        255.255.0.0    U     1002   0   0   eth0
        169.254.0.0    0.0.0.0        255.255.0.0    U     1003   0   0   eth1
        0.0.0.0        192.168.100.1  0.0.0.0        UG    0      0   0   eth0

    The network I am trying to connect from (10.36.0.0) is different from the network the hypervisor is connected to, but it is accessible from that network. So I tried to add a new route rule:

        route add -net 10.36.0.0 netmask 255.255.0.0 dev eth1

    And it is not working. I thought setting the correct interface would be sufficient. What is needed to get my packets coming through?
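
    The likely missing piece in that last route: it sends packets out eth1 but names no gateway, and a "dev"-only route works solely for destinations on the local link. 10.36.0.0/16 is not on the 10.34.1.0/24 segment, so the route needs a next hop. A sketch, assuming the bridged LAN's router sits at 10.34.1.1 (an address not shown in the question):

        # reach the remote 10.36.0.0/16 network via the LAN router on eth1
        route add -net 10.36.0.0 netmask 255.255.0.0 gw 10.34.1.1 dev eth1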


  • New CentOS/cPanel servers showing high load averages at idle

    - by Jax
    I have taken delivery of two identically specced CentOS/cPanel servers, showing the same behaviour of a resting load average of 1.30, 1.21, 1.16 and yet the CPU is sitting 100% idle. Hardware: Xeon(R) CPU E3-1270 4GB RAM Behavior:- top shows CPU 99.9% idle virtually no disk IO Some command output :- uname -a Linux server.myserver.com 2.6.18-308.4.1.el5PAE #1 SMP Tue Apr 17 17:47:38 EDT 2012 i686 i686 i386 GNU/Linux top top - 10:37:50 up 1:47, 1 user, load average: 1.28, 1.20, 1.17 Tasks: 199 total, 1 running, 198 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.9%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4125104k total, 438764k used, 3686340k free, 25788k buffers Swap: 2096440k total, 0k used, 2096440k free, 291080k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1 root 15 0 2160 640 552 S 0.0 0.0 0:00.89 init 2 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/1 7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 8 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/2 9 root 35 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/2 10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2 11 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/3 12 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/3 13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3 14 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/4 15 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/4 16 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/4 17 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/5 18 root 38 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/5 19 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/5 20 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/6 21 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/6 22 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/6 23 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/7 24 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/7 25 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/7 26 root 10 -5 0 0 0 S 0.0 0.0 0:06.42 events/0 27 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/1 28 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/2 29 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/3 30 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/4 31 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/5 32 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/6 33 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/7 34 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 khelper 35 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kthread 45 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/0 46 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/1 47 root 14 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/2 48 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/3 49 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/4 50 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/5 51 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/6 52 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/7 53 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kacpid 189 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/0 190 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/1 191 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/2 192 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/3 193 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/4 194 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/5 195 root 14 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/6 196 root 14 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/7 199 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 khubd ps axf PID TTY STAT TIME COMMAND 1 ? Ss 0:00 init [3] 2 ? S< 0:00 [migration/0] 3 ? SN 0:00 [ksoftirqd/0] 4 ? S< 0:00 [watchdog/0] 5 ? S< 0:00 [migration/1] 6 ? SN 0:00 [ksoftirqd/1] 7 ? 
S< 0:00 [watchdog/1] 8 ? S< 0:00 [migration/2] 9 ? SN 0:00 [ksoftirqd/2] 10 ? S< 0:00 [watchdog/2] 11 ? S< 0:00 [migration/3] 12 ? SN 0:00 [ksoftirqd/3] 13 ? S< 0:00 [watchdog/3] 14 ? S< 0:00 [migration/4] 15 ? SN 0:00 [ksoftirqd/4] 16 ? S< 0:00 [watchdog/4] 17 ? S< 0:00 [migration/5] 18 ? SN 0:00 [ksoftirqd/5] 19 ? S< 0:00 [watchdog/5] 20 ? S< 0:00 [migration/6] 21 ? SN 0:00 [ksoftirqd/6] 22 ? S< 0:00 [watchdog/6] 23 ? S< 0:00 [migration/7] 24 ? SN 0:00 [ksoftirqd/7] 25 ? S< 0:00 [watchdog/7] 26 ? S< 0:06 [events/0] 27 ? S< 0:00 [events/1] 28 ? S< 0:00 [events/2] 29 ? S< 0:00 [events/3] 30 ? S< 0:00 [events/4] 31 ? S< 0:00 [events/5] 32 ? S< 0:00 [events/6] 33 ? S< 0:00 [events/7] 34 ? S< 0:00 [khelper] 35 ? S< 0:00 [kthread] 45 ? S< 0:00 \_ [kblockd/0] 46 ? S< 0:00 \_ [kblockd/1] 47 ? S< 0:00 \_ [kblockd/2] 48 ? S< 0:00 \_ [kblockd/3] 49 ? S< 0:00 \_ [kblockd/4] 50 ? S< 0:00 \_ [kblockd/5] 51 ? S< 0:00 \_ [kblockd/6] 52 ? S< 0:00 \_ [kblockd/7] 53 ? S< 0:00 \_ [kacpid] 189 ? S< 0:00 \_ [cqueue/0] 190 ? S< 0:00 \_ [cqueue/1] 191 ? S< 0:00 \_ [cqueue/2] 192 ? S< 0:00 \_ [cqueue/3] 193 ? S< 0:00 \_ [cqueue/4] 194 ? S< 0:00 \_ [cqueue/5] 195 ? S< 0:00 \_ [cqueue/6] 196 ? S< 0:00 \_ [cqueue/7] 199 ? S< 0:00 \_ [khubd] 201 ? S< 0:00 \_ [kseriod] 301 ? S 0:00 \_ [khungtaskd] 302 ? S 0:00 \_ [pdflush] 303 ? S 0:00 \_ [pdflush] 304 ? S< 0:00 \_ [kswapd0] 305 ? S< 0:00 \_ [aio/0] 306 ? S< 0:00 \_ [aio/1] 307 ? S< 0:00 \_ [aio/2] 308 ? S< 0:00 \_ [aio/3] 309 ? S< 0:00 \_ [aio/4] 310 ? S< 0:00 \_ [aio/5] 311 ? S< 0:00 \_ [aio/6] 312 ? S< 0:00 \_ [aio/7] 472 ? S< 0:00 \_ [kpsmoused] 551 ? S< 0:00 \_ [ata/0] 552 ? S< 0:00 \_ [ata/1] 553 ? S< 0:00 \_ [ata/2] 554 ? S< 0:00 \_ [ata/3] 555 ? S< 0:00 \_ [ata/4] 556 ? S< 0:00 \_ [ata/5] 557 ? S< 0:00 \_ [ata/6] 558 ? S< 0:00 \_ [ata/7] 559 ? S< 0:00 \_ [ata_aux] 569 ? S< 0:00 \_ [scsi_eh_0] 570 ? S< 0:00 \_ [scsi_eh_1] 571 ? S< 0:00 \_ [scsi_eh_2] 572 ? S< 0:00 \_ [scsi_eh_3] 573 ? S< 0:00 \_ [scsi_eh_4] 574 ? S< 0:00 \_ [scsi_eh_5] 593 ? S< 0:00 \_ [kstriped] 630 ? S< 0:00 \_ [kjournald] 655 ? S< 0:00 \_ [kauditd] 1860 ? S< 0:00 \_ [kmpathd/0] 1861 ? S< 0:00 \_ [kmpathd/1] 1862 ? S< 0:00 \_ [kmpathd/2] 1863 ? S< 0:00 \_ [kmpathd/3] 1864 ? S< 0:00 \_ [kmpathd/4] 1865 ? S< 0:00 \_ [kmpathd/5] 1866 ? S< 0:00 \_ [kmpathd/6] 1867 ? S< 0:00 \_ [kmpathd/7] 1868 ? S< 0:00 \_ [kmpath_handlerd] 1902 ? S< 0:00 \_ [kjournald] 1904 ? S< 0:00 \_ [kjournald] 1906 ? S< 0:00 \_ [kjournald] 1908 ? S< 0:00 \_ [kjournald] 1910 ? S< 0:00 \_ [kjournald] 2184 ? S< 0:00 \_ [iscsi_eh] 2288 ? S< 0:00 \_ [cnic_wq] 2298 ? S< 0:00 \_ [bnx2i_thread/0] 2299 ? S< 0:00 \_ [bnx2i_thread/1] 2300 ? S< 0:00 \_ [bnx2i_thread/2] 2301 ? S< 0:00 \_ [bnx2i_thread/3] 2302 ? S< 0:00 \_ [bnx2i_thread/4] 2303 ? S< 0:00 \_ [bnx2i_thread/5] 2304 ? S< 0:00 \_ [bnx2i_thread/6] 2305 ? S< 0:00 \_ [bnx2i_thread/7] 2330 ? S< 0:00 \_ [ib_addr] 2359 ? S< 0:00 \_ [ib_mcast] 2360 ? S< 0:00 \_ [ib_inform] 2361 ? S< 0:00 \_ [local_sa] 2371 ? S< 0:00 \_ [iw_cm_wq] 2381 ? S< 0:00 \_ [ib_cm/0] 2382 ? S< 0:00 \_ [ib_cm/1] 2383 ? S< 0:00 \_ [ib_cm/2] 2384 ? S< 0:00 \_ [ib_cm/3] 2385 ? S< 0:00 \_ [ib_cm/4] 2386 ? S< 0:00 \_ [ib_cm/5] 2387 ? S< 0:00 \_ [ib_cm/6] 2388 ? S< 0:00 \_ [ib_cm/7] 2398 ? S< 0:00 \_ [rdma_cm] 2684 ? S< 0:00 \_ [bond0] 2882 ? S< 0:00 \_ [bond1] 3195 ? S< 0:00 \_ [kondemand/0] 3197 ? S< 0:00 \_ [kondemand/1] 3198 ? S< 0:00 \_ [kondemand/2] 3199 ? S< 0:00 \_ [kondemand/3] 3200 ? S< 0:00 \_ [kondemand/4] 3201 ? S< 0:00 \_ [kondemand/5] 3202 ? S< 0:00 \_ [kondemand/6] 3203 ? 
S< 0:00 \_ [kondemand/7] 688 ? S<s 0:00 /sbin/udevd -d 2425 ? S<Lsl 0:00 iscsiuio 2432 ? Ss 0:00 iscsid 2434 ? S<Ls 0:00 iscsid 3061 ? S<sl 0:00 auditd 3063 ? S<sl 0:00 \_ /sbin/audispd 3121 ? Ss 0:00 syslogd -m 0 3124 ? Ss 0:00 klogd -x 3220 ? Ss 0:00 irqbalance 3278 ? Ss 0:00 dbus-daemon --system 3324 ? Ss 0:00 /usr/sbin/acpid 3337 ? Ss 0:00 hald 3338 ? S 0:00 \_ hald-runner 3345 ? S 0:00 \_ hald-addon-acpi: listening on acpid socket /var/run/acpid.socket 3349 ? S 0:00 \_ hald-addon-keyboard: listening on /dev/input/event1 3360 ? S 0:00 \_ hald-addon-storage: polling /dev/sr0 3413 ? Ssl 0:00 automount 3435 ? Ssl 0:00 /usr/sbin/named -u named 3466 ? Ss 0:00 /usr/sbin/sshd 4072 ? Ss 0:00 \_ sshd: root@pts/0 4078 pts/0 Ss 0:00 \_ -bash 5436 pts/0 R+ 0:00 \_ ps axf 3484 ? Ss 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid 3500 ? SLs 0:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g 3514 ? S 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/server.myserver.com.pid 3575 ? Sl 0:00 \_ /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --log-error=/var/lib/mysql/server.myserver.com.err --pid-fil 3687 ? Ss 0:00 /usr/sbin/exim -bd -q1h 3709 ? Ss 0:00 /usr/sbin/dovecot 3710 ? S 0:00 \_ dovecot-auth 3725 ? S 0:00 \_ pop3-login 3726 ? S 0:00 \_ pop3-login 3727 ? S 0:00 \_ imap-login 3728 ? S 0:00 \_ imap-login 3729 ? Ss 0:00 /usr/local/apache/bin/httpd -k start -DSSL 4326 ? S 0:00 \_ /usr/bin/perl /usr/local/cpanel/bin/leechprotect 4332 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 4333 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 4334 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 4335 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 4336 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 4337 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 4382 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 4383 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 4384 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 5389 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 5390 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 3741 ? Ss 0:00 pure-ftpd (SERVER) 3746 ? S 0:00 /usr/sbin/pure-authd -s /var/run/ftpd.sock -r /usr/sbin/pureauth 3759 ? Ss 0:00 crond 3772 ? Ss 0:00 /usr/sbin/atd 3909 ? S 0:00 cpsrvd (SSL) - waiting for connections 5435 ? Z 0:00 \_ [cpsrvd-ssl] <defunct> 3931 ? S 0:00 queueprocd - wait to process a task 3948 ? S 0:00 tailwatchd 3954 ? SN 0:00 cpanellogd - sleeping for logs 4003 ? Ss 0:00 ./nimbus /opt/nimsoft 4016 ? S 0:00 \_ nimbus(controller) 4053 ? Sl 0:00 \_ nimbus(spooler) 4066 ? S 0:00 \_ nimbus(hdb) 4069 ? S 0:00 \_ nimbus(cdm) 4070 ? S 0:00 \_ nimbus(processes) 4023 ? 
S 0:00 /usr/sbin/smartd -q never 4027 tty1 Ss+ 0:00 /sbin/mingetty tty1 4028 tty2 Ss+ 0:00 /sbin/mingetty tty2 4029 tty3 Ss+ 0:00 /sbin/mingetty tty3 4030 tty4 Ss+ 0:00 /sbin/mingetty tty4 4031 tty5 Ss+ 0:00 /sbin/mingetty tty5 4033 tty6 Ss+ 0:00 /sbin/mingetty tty6 4035 ttyS1 Ss+ 0:00 /sbin/agetty -h -L ttyS1 19200 vt100 vmstat 10 6 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 0 0 0 3718136 25684 257424 0 0 8 3 127 189 0 0 100 0 0 0 0 0 3718136 25700 257420 0 0 0 7 1013 1500 0 0 100 0 0 0 0 0 3718136 25700 257424 0 0 0 1 1013 1551 0 0 100 0 0 0 0 0 3718136 25700 257424 0 0 0 0 1012 1469 0 0 100 0 0 1 0 0 3712680 25716 257424 0 0 0 2 1013 1542 0 0 100 0 0 0 0 0 3718376 25740 257424 0 0 0 46 1017 1534 0 0 100 0 0 Can anyone advise me as to what is the cause of and how I may resolve this behaviour? A kernel/driver conflict perhaps? I don't see any processes in R or D state that might inflate the load averages artificially, I realise it may be considered low in an 8 thread system but its higher at idle than any normal behaviour I've previously come across. Thanks in advance for your time. Edit: iotop Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND 26 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.29 % [events/0] 3205 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.10 % [kondemand/2] 3208 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kondemand/5] 3209 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kondemand/6] 3207 be/3 root 0.00 B/s 0.00 B/s 0.10 % 0.00 % [kondemand/4] 3210 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kondemand/7] 3227 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % irqbalance 3288 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rpciod/1] 3287 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rpciod/0] 3206 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kondemand/3] 3069 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % auditd 3070 be/2 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % audispd 655 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kauditd] 3619 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % automount 3 be/7 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/0] 3068 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % auditd 29 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [events/3] 4 rt/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/0] 7 rt/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/1] 10 rt/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/2] 13 rt/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/3] 16 rt/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/4] 19 rt/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/5] 22 rt/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/6] 25 rt/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/7] 27 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [events/1] 28 be/3 root 0.00 B/s 0.00 B/s 0.29 % 0.00 % [events/2] 30 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [events/4] 31 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [events/5] 32 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [events/6] 33 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [events/7] 34 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [khelper] 35 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthread] 45 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kblockd/0]
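
    Since the load average counts tasks in the R (runnable) and D (uninterruptible sleep) states, and top/ps show only one instant, short-lived contributors are easy to miss in snapshots like the ones above. A sampling sketch that records any task seen in either state over a minute:

        # sample once a second for a minute; anything that ever shows up here
        # is contributing to the load average (the ps command itself is filtered)
        for i in $(seq 1 60); do
            ps -eo state,pid,cmd | awk '$1 ~ /^[RD]/ && $3 != "ps"'
            sleep 1
        done | sort | uniq -c | sort -rn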


  • How to disable or tune filesystem cache sharing for OpenVZ?

    - by gertvdijk
    For OpenVZ, an example of container-based virtualization, it seems that the host and all guests share the filesystem cache. This sounds paradoxical when talking about virtualization, but it is actually a feature of OpenVZ, and it makes sense too: because only one kernel is running, it's possible to benefit from sharing the same pages of filesystem cache in memory. And while it sounds beneficial, I think a setup here actually suffers in performance from it. Here's why I think so: my machines aren't actually sharing any files on disk, so I can't benefit from this feature in OpenVZ. Several OpenVZ machines are running MySQL with MyISAM tables. MyISAM relies on the system's filesystem cache for caching of data files, unlike InnoDB's buffer pool. Also, some virtual machines are known to do heavy and large I/O operations on the same filesystem in the host. For example, when running cat *.MYD > /dev/null on some large database in one machine, I saw the filesystem cache shrinking in another, monitored with htop. This essentially flushes all the useful filesystem cache in the guests (FIFO), and so it flushes the MySQL caches in the guests. Now users are complaining that MySQL is very slow, and it is: some simple SELECT queries take several seconds at times when disk I/O is heavily used by other machines.

    So, simply put: is there a way to avoid the filesystem cache being wiped out by other virtual machines in container-based virtualization?

    Some thoughts:
    - Choosing the algorithm for flushing the filesystem cache in the kernel (possible? how?).
    - Reserving a certain number of pages for a single VM (there seems to be no option for filesystem-cache pages, reading man vzctl).
    - Will running MySQL on another filesystem get me anywhere?

    If not, I think my alternatives are:
    - Use KVM for the MySQL-MyISAM VMs. KVM actually assigns memory to the VM and does not allow swapping out caches unless using a balloon driver.
    - Move to InnoDB and tune the buffer pools, dirty pages, etc. This is currently considered 'nice to have' for the long term, as not everyone responsible for administration of the system understands InnoDB.

    More suggestions welcome. System software: Proxmox (now 1.9, could be upgraded to 2.x). One big LV assigned for the VMs.
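
    One mitigation sketch for the specific pattern described (a bulk sequential read evicting everyone's cache): perform such ad-hoc reads with O_DIRECT, so the pages never pass through the shared page cache at all. The path below is hypothetical:

        # instead of: cat big_table.MYD > /dev/null
        # read via O_DIRECT so the scan neither fills nor evicts the shared cache
        dd if=/var/lib/mysql/somedb/big_table.MYD of=/dev/null bs=1M iflag=direct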


  • Why won't my Windows 8 Command line update its path

    - by mawcsco
    I needed to add a new entry to my PATH variable. This is a common activity for me in my job, but I've recently started using Windows 8. I assumed the process would be similar to Windows 7, Vista, XP... Here's my sequence of events:

    1. Open System properties (Start - [type "Control Panel"] - Control Panel\System and Security\System - Advanced system settings - Environment Variables).
    2. Add the new path to the beginning of my USER PATH variable (C:\dev\Java\apache-ant-1.8.4\bin;).
    3. Open a command prompt (Start - [type "command prompt", Enter] - [type "path", Enter]).

    My new path entry is not available (see attached image and video). I duplicated the exact same process on a Windows 7 machine and it worked.

    EDIT: Windows 8 Environment Variables and Command Prompt video

    EDIT: This is definitely not the behavior of Windows 7. Watch this video to see the behavior I expect, working in Windows 7: http://youtu.be/95JXY5X0fII

    EDIT 5/31/2013: So, after much frustration, I wrote a small C# app to test the WM_SETTINGCHANGE event (OnUserPreferenceChanging is the managed equivalent). This code receives the event in both Windows 7 and Windows 8. However, in Windows 8 on my system I do not get the correct path, while in Windows 7 I do. This could not be reproduced on other Windows 8 systems. Here is the C# code:

        using System;
        using Microsoft.Win32;

        public sealed class App
        {
            static void Main()
            {
                SystemEvents.UserPreferenceChanging +=
                    new UserPreferenceChangingEventHandler(OnUserPreferenceChanging);
                Console.WriteLine("Waiting for system events.");
                Console.WriteLine("Press <Enter> to exit.");
                Console.ReadLine();
            }

            static void OnUserPreferenceChanging(object sender, UserPreferenceChangingEventArgs e)
            {
                Console.WriteLine("The user preference is changing. Category={0}", e.Category);
                Console.WriteLine("path={0}", System.Environment.GetEnvironmentVariable("PATH"));
            }
        }

    C# program running in Windows 7 (you can see the event come through and it picks up the correct path).
    C# program running in Windows 8 (you can see the event come through, but the wrong path).

    There is something about my environment that is precipitating this problem. However, is this a Windows 8 bug?


  • git, egit, submodules, and symlinks -- how should shared sub-projects be handled in eclipse?

    - by Autophil
    Here's the situation. I have a few git projects with a directory structure laid out more or less like this:

        simpleproj                 storeproj
          app                        app
            www                        www
            admin                      about
            demo                       mobile
          lib                          fbapp
            model                    lib
            orm                        model
            view                       orm
          model                        view
            user                     model
            blah                       user
            ...                        message
                                       cart
                                       product
                                       merchant

    Each directory in "lib" contains a separate project, either created in-house or forked, all of which use git for source control. So I figured I should make them submodules of my projects, right? Well, we've been moving toward eclipse + egit, because some of our Windows guys who are not used to a CLI need something they can use without being scared of screwing things up. Anyway, the problem is, egit doesn't support submodules. So, my solution has been a rather crude one involving symlinks... let's say my directory structure on my dev box is generally laid out like this:

        ~/projects/
          bigproj
            .git
            app
            lib
              model (-> ~/lib/model/src/)
              orm (-> ~/lib/orm/src/)
          neatproj
            .git
            app
            lib
              view (-> ~/lib/view/src/)
          oldproj
            .git
            app
            lib
              orm (-> ~/lib/orm/src/)
        ~/lib/
          model
            .git
            src
            README.md
          orm
            .git
            src
            COPYING
          view
            .git
            src

    The symlinks point inside the directory holding the git repo, so eclipse doesn't get confused, and everything sort of works. On my machine, I can update the libs from anywhere and all projects will be updated (needing to be committed again, of course). Each project stores a separate copy of the contents of the symlinked directories within "lib" - but only when staged from within eclipse. After committing from eclipse and moving back to the CLI, git sees that a bunch of files have been removed and a few symlinks have been created. Of course this is acceptable too, probably more so than keeping a separate history of the libs for each project... but eclipse and CLI git obviously need to be on the same page so tons of files aren't vanishing and reappearing.

    So this brings me to my question. I'd like to know how to either: get eclipse+egit to see the symlinks as symlinks, if git will somehow handle them properly*, or get the CLI git to treat them as non-symlinks. Or, if there's a better way to do this, I'm all ears. Hope this all made sense! :D

    Note: tried to tag this as git-submodules, but was not allowed :(

    * Should I make them relative or absolute? Either way it's a mess. Also, will symlinks work on Windows? I know there's something similar, but you need a 3rd-party tool to manage them AFAIK; I doubt these would translate well.
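
    For comparison, the submodule layout that egit's limitation rules out would be built from the CLI like this (a sketch; the repository paths are hypothetical):

        # inside bigproj: record the model lib as a submodule at lib/model
        git submodule add ~/lib/model lib/model    # or a remote URL
        git commit -m "add model lib as a submodule"
        # after cloning bigproj elsewhere:
        git submodule update --init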


  • Nginx fastcgi problems with django (double slashes in url?)

    - by wizard
    I'm deploying my first django app. I'm familiar with nginx and fastcgi from deploying php-fpm. I can't get python to recognize the urls. I'm also at a loss on how to debug this further. I'd welcome solutions to this problem and tips on debugging fastcgi problems. Currently I get a 404 page regardless of the url and for some reason a double slash For http://www.site.com/admin/ Page not found (404) Request Method: GET Request URL: http://www.site.com/admin// My urls.py from the debug output - which work in the dev server. Using the URLconf defined in ahrlty.urls, Django tried these URL patterns, in this order: ^listings/ ^admin/ ^accounts/login/$ ^accounts/logout/$ my nginx config server { listen 80; server_name beta.ahrlty.com; access_log /home/ahrlty/ahrlty/logs/access.log; error_log /home/ahrlty/ahrlty/logs/error.log; location /static/ { alias /home/ahrlty/ahrlty/ahrlty/static/; break; } location /media/ { alias /usr/lib/python2.6/dist-packages/django/contrib/admin/media/; break; } location / { include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:8001; break; } } and my fastcgi_params fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param PATH_INFO $fastcgi_script_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; And lastly I'm running fastcgi from the commandline with django's manage.py. python manage.py runfcgi method=threaded host=127.0.0.1 port=8080 pidfile=mysite.pid minspare=4 maxspare=30 daemonize=false I'm having a hard time debugging this one. Does anything jump out at anybody? Notes nginx version: nginx/0.7.62 Django svn trunk rev 13013


  • Two DHCP Servers, Block Clients for one of them?

    - by Rilindo
    I am building out a kickstart network that resides on a different VLAN and uses its own DHCP server. For some reason, my kickstart clients keep getting assigned IPs from my primary DHCP server.

    The way I have it set up is that I have a primary DHCP server on the router at 192.168.15.1. Connected to that DHCP server is a switch with the IP of 192.168.15.2. My kickstart (Scientific Linux) server is connected to that switch on two ports:

    Port 2 - where the kickstart server communicates with the rest of the production network via eth0. The details are:

        Interface: eth0
        IP: 192.168.15.100
        Netmask: 255.255.255.0
        Gateway: 192.168.15.1

    Port 7 - has its own VLAN ID (along with port 8). The kickstart server is connected to that port via eth1. The details are:

        Interface: eth1
        IP: 172.16.15.100
        Netmask: 255.255.255.0
        Gateway: none

    The kickstart server runs its own DHCP server and hands out addresses over eth1. Most of the kickstarts are built over the kickstart VLAN through port 8. To prevent the kickstart DHCP server from assigning addresses over the production network, I have the route set up like so:

        route add -host 255.255.255.255 dev eth1

    At this point, the clients keep getting assigned IPs from the 192.168.15.1 DHCP server. I need to figure out a way to block client requests from reaching that DHCP server. It should be noted that I also build KVM hosts on the kickstart server, so I need those KVMs to be able to get leases from the 192.168.15.1 DHCP server via the bridge network once I have resolved this particular problem. (Currently, they communicate via NAT.)

    So what can be done to resolve this? Through iptables or some sort of routing I need to put in? I tried to limit requests via iptables on that interface, allowing DHCP requests for the 172.16.15.x network:

        -A INPUT -i eth1 -s 172.16.15.0/24 -p udp -m udp --dport 69 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p tcp -m tcp --dport 69 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p udp -m udp --dport 68 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p tcp -m tcp --dport 68 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p udp -m udp --dport 67 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p tcp -m tcp --dport 67 -j ACCEPT

    and rejecting assignments on eth1 from the 192.168.15.x network:

        -A FORWARD -o eth1 -s 192.168.15.0/24 -p udp -m udp --dport 69 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p tcp -m tcp --dport 69 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p udp -m udp --dport 68 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p tcp -m tcp --dport 68 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p udp -m udp --dport 67 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p tcp -m tcp --dport 67 -j REJECT

    Nope. :(
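
    On the kickstart server's own side, a cleaner way to keep its DHCP daemon off the production network than the 255.255.255.255 host route is to bind it to eth1 only; a sketch assuming ISC dhcpd on Scientific Linux (the range values are placeholders):

        # /etc/sysconfig/dhcpd -- serve requests arriving on eth1 only
        DHCPDARGS=eth1

        # /etc/dhcp/dhcpd.conf -- declare only the kickstart subnet; dhcpd
        # does not answer on networks it has no declaration for
        subnet 172.16.15.0 netmask 255.255.255.0 {
            range 172.16.15.50 172.16.15.200;
        }

    Blocking the offers coming from 192.168.15.1, though, cannot be done on the kickstart server at all - those packets travel straight from the router to the clients - so that half of the isolation has to happen in the switch's VLAN configuration.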


  • iotop for Linux kernel 2.6.18

    - by Lightsauce
    So it has come to my attention that iotop isn't available for 2.6.18, since that is less than 2.6.20 and iotop requires Python 2.6+. I've done some research and came across this article: http://lserinol.blogspot.com/2009/09/io-usage-per-process-on-linux.html

    According to this, if these processes have I/O stats in /proc/pid#/io (where pid# is the process number), it's doable regardless of the kernel version. So, in reality, I could upgrade Python to 2.6 and test out iotop. However, my flavor of Linux, CentOS release 5.5 (Final), currently only supports Python 2.4.3-44.el5. If I were to uninstall it with yum, it wouldn't look pretty: it ends up wanting to uninstall 235 packages, most of which are very important! I read in one place online (I forget the URL from yesterday) that you can install Python 2.6+ in parallel with this one, and have the rpm install for iotop use that. Well, I didn't choose that route.

    I figured, what the heck, let's write iotop (not copying it, but reverse-engineering it without actually looking at its code or at it in use) in bash. I thought it would just grab the /proc/pid#/io files and parse the stats. So I wrote a script to grab the top 10 rchar, wchar, read_bytes and write_bytes values by collecting all these stats from all the /proc/pid#/io files, sorting them by each metric, then grabbing the top 10 highest values. The conclusion: the data seems completely useless.

    Does anybody know any resources for advanced Linux where I can figure out how to take these /proc/pid#/ directories and work out what the heck they are doing with I/O on the disk? My main goal is to figure out what exactly is causing high load on my disk. I just know it's on the / partition (/dev/sda2 in this case), and I'm not really sure how to narrow it down without the help of iotop. If I run iostat to grab metrics for 1 minute, every second, the first result it gives me shows a high 'kB_read/s', so that makes me think it's mostly reading. However, if I watch the updates it gives me every second, it's actually just showing values for kB_wrtn/s. This makes me think the initial value iostat gives me is misleading.
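
    Both puzzles at the end share one explanation: the counters in /proc/pid#/io, like iostat's first report, are cumulative totals since the process (or system) started, so a single snapshot mostly reflects history, not current activity - iotop works by diffing two snapshots. A bash sketch of that diffing approach, assuming /proc/pid#/io exists as the linked article describes:

        #!/bin/bash
        # snapshot read_bytes per pid, wait, snapshot again, rank the deltas
        snap() {
            for f in /proc/[0-9]*/io; do
                pid=${f#/proc/}; pid=${pid%/io}
                rb=$(awk '/^read_bytes:/ {print $2}' "$f" 2>/dev/null) \
                    && echo "$pid $rb"
            done
        }
        snap | sort > /tmp/io.before
        sleep 5
        snap | sort > /tmp/io.after
        # join the snapshots on pid; print bytes read in the window, then pid
        join /tmp/io.before /tmp/io.after \
            | awk '{ d = $3 - $2; if (d > 0) print d, $1 }' | sort -rn | head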


  • How to setup Proxy Cache with Nginx and Passenger

    - by tiny
    I use Nginx and Passenger for my Rails application. I want to use the proxy cache to cache my pages. However, every request goes directly to my Rails application, and I don't know what's wrong with my configuration. Below is my configuration:

        user www-data;
        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib/ruby/gems/1.8/gems/passenger-2.2.15;
            passenger_ruby /usr/bin/ruby1.8;
            passenger_max_pool_size 6;
            passenger_max_instances_per_app 1;
            passenger_pool_idle_time 0;
            rails_spawn_method conservative;

            include mime.types;
            default_type application/octet-stream;
            server_names_hash_bucket_size 512;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            tcp_nodelay on;

            gzip on;
            gzip_http_version 1.0;
            gzip_vary on;
            gzip_comp_level 6;
            gzip_proxied any;
            gzip_types text/plain text/css text/javascript application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss;

            proxy_cache_path /var/www/cache/webapp levels=1:2 keys_zone=webapp:8m max_size=1000m inactive=600m;

            include vhosts/*.conf;
            include /opt/nginx/conf/sites-enabled/*;
            root /var/www;
        }

        server {
            listen 127.0.0.1:3008;
            server_name localhost;
            root /var/www/yoolk_web_app/public;  # <--- be sure to point to 'public'!
            passenger_enabled on;
            rails_env development;
            passenger_use_global_queue on;
        }

        server {
            listen 80;
            server_name webpage.dev;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            error_page 503 http://$host/maintenance.html;

            location ~* (css|js|png|jpe?g|gif|ico)$ {
                root /var/www/web_app/public;
                expires max;
            }

            location / {
                proxy_pass http://127.0.0.1:3008/;
                proxy_cache webapp;
                proxy_cache_valid 200 10m;
            }

            #More Location
        }
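
    One likely culprit, given the config above: nginx's proxy cache refuses to store responses carrying Set-Cookie or cache-forbidding Cache-Control/Expires headers, and Rails sends those by default. A sketch of the location block with such headers ignored (verify first, e.g. with curl -i, that these headers are actually present; Set-Cookie support in proxy_ignore_headers also needs a reasonably modern nginx):

        location / {
            proxy_pass http://127.0.0.1:3008/;
            proxy_cache webapp;
            proxy_cache_valid 200 10m;
            # allow caching despite Rails' default response headers
            proxy_ignore_headers Set-Cookie Cache-Control Expires;
            proxy_hide_header Set-Cookie;
        }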


  • Pushing Large Files to 500+ Computers [closed]

    - by WMIF
    I work with a team to manage 500-600 rented Windows 7 computers for an annual conference. We have a large amount of data that needs to be synced to these computers, up to 1 TiB. The computers are divided into rooms and connected through unmanaged gigabit switches. We prepare these computers ahead of time with the Windows installation and configuration, plus any files that we have available to us before we send the base image in for replication by the rental company.

    Every year, presenters approach us on site with gigabytes of data that need to be pushed to the room they will be presenting in. Sometimes they have only a few small files, such as a slide PDF, but the data can also be much larger, around 5 GiB. Our current strategy for pushing these files uses batch scripts and RoboCopy. For the large pushes, we actually use a BitTorrent client to generate a torrent file, and then we use the batch/RoboCopy combination to push the torrent into a folder on the remote machines that is being monitored by an installed BT client. Often this data needs to be pushed immediately, within a small time window. We have several machines in a control room, identical to the machines on the floor, that we use for these pushes.

    We occasionally need to execute a program on the remote machines, and we currently use batch and PsExec to handle this task. We would love to be able to respond to these last-minute pushes with "sorry, your own fault", but it won't happen. The BT method has given us a much faster response time, but the whole batch process can get messy when there are multiple jobs being pushed. We use Enterprise Ghost for other processes, and it doesn't work well at this large a scale; plus it is really quite expensive for a once-a-year task like this.

    EDIT: There is a hard requirement that the remote machines on the floor run Windows. The control machines do not have a hard OS requirement. I would really like to stay away from multicast because of complications with upstream routers. Is multicast or BitTorrent the better way to go on this? Is there another protocol that might work better?
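
    For concreteness, a sketch of the kind of loop the batch/RoboCopy step amounts to (the hostname list, paths and watched-folder name are all hypothetical):

        @echo off
        rem push a presenter's torrent into the BT client's watched folder on
        rem every machine in a room; room42.txt lists hostnames, one per line
        for /f %%H in (room42.txt) do (
            robocopy "D:\staging\talk17" "\\%%H\c$\BTWatch" talk17.torrent /Z /R:2 /W:5
        )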


  • Nginx Rewrite Rule For File Within Folder Not Working

    - by user3620111
    Good evening everyone, or possibly early morning if you are in my neck of the woods. My problem seems trivial, but after several hours of testing, researching and fiddling I can't seem to get this simple nginx rewrite to work. There are several rewrites we need, some with multiple parameters, but I can't even get this simple one-parameter URL to change into the desired form at all.

    Current: website.com/public/viewpost.php?id=post-title
    Desired: website.com/public/post/post-title

    Can someone kindly point out what I have done wrong? I am baffled / very tired... For testing purposes before we launch, we are just using a separate port on the server. Here is that section:

        # Listen on port 7774 for dev test
        server {
            listen 7774;
            server_name localhost;
            root /usr/share/nginx/html/paa;
            index index.php home.php index.html index.htm /public/index.php;

            location ~* /uploads/.*\.php$ {
                if ($request_uri ~* (^\/|\.jpg|\.png|\.gif)$ ) { break; }
                return 444;
            }

            location ~ \.php$ {
                try_files $uri @rewrite =404;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_pass php5-fpm-sock;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_intercept_errors on;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }

            location @rewrite {
                rewrite ^/viewpost.php$ /post/$arg_id? permanent;
            }
        }

    I have tried countless attempts, such as the @rewrite above and simpler ones:

        location / {
            rewrite ^/post/(.*)$ /viewpost.php?id=$1 last;
        }

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_pass php5-fpm-sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_intercept_errors on;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    I cannot seem to get anything to work at all; I have tried changing the location and tried multiple rules... Please tell me what I have done wrong. Pause for facepalm.

    [relocated from stack overflow as per mod suggestion]
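
    One thing visible in the excerpt: the desired URL lives under /public/, yet neither attempted pattern (^/viewpost.php$, ^/post/...) includes that prefix, so neither can match. The internal rewrite also has to run in the pretty-URL-to-script direction; a sketch:

        # map the pretty URL back onto the real script; "last" re-enters
        # location matching, so the existing "location ~ \.php$" block runs it
        location /public/post/ {
            rewrite ^/public/post/(.+)$ /public/viewpost.php?id=$1 last;
        }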


  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500,000 entries, 50 GB of disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem!

    The cache partition is an ext3 filesystem (100 GB, "-t news") on iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header".

    When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean -t -p/htcache -l15G"), IOwait goes through the roof for several hours, without any visible action. Only after hours does htcacheclean start to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up on the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html)

    The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls).

    I came up with my own prune script, which removes files and directories from random subdirectories of the cache - only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space. In those 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously.

    Any ideas? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different filesystem for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect.)
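
    Two htcacheclean knobs worth trying before swapping filesystems, sketched below: daemon mode, which spreads the scanning continuously instead of batching hours of work into one run, and -n, which makes it yield to other disk users:

        # run continuously (re-check every 30 minutes), niced against other I/O
        htcacheclean -d30 -n -t -p/htcache -l15G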


  • how can I give openvpn clients access to a dns server (bind9) that is located on the same machine as the openvpn server

    - by lacrosse1991
    I currently have a Debian server that is running an OpenVPN server. I also have a DNS server (bind9) on the same machine that I would like to give the connected OpenVPN clients access to, but I am unsure how to do this. I already know how to send DNS options to the clients using push "dhcp-option DNS x.x.x.x"; I am just unsure how to give the clients access to the DNS server that is located on the same machine as the VPN server, so if anyone could point me in the right direction I would really appreciate it. Also, in case it has anything to do with adding rules to iptables, this is my current iptables configuration:

        # Generated by iptables-save v1.4.14 on Thu Oct 18 22:05:33 2012
        *nat
        :PREROUTING ACCEPT [3831842:462225238]
        :INPUT ACCEPT [3820049:461550908]
        :OUTPUT ACCEPT [1885011:139487044]
        :POSTROUTING ACCEPT [1883834:139415168]
        -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        COMMIT
        # Completed on Thu Oct 18 22:05:33 2012
        # Generated by iptables-save v1.4.14 on Thu Oct 18 22:05:33 2012
        *filter
        :INPUT ACCEPT [45799:10669929]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [45747:10335026]
        :fail2ban-apache - [0:0]
        :fail2ban-apache-myadmin - [0:0]
        :fail2ban-apache-noscript - [0:0]
        :fail2ban-ssh - [0:0]
        :fail2ban-ssh-ddos - [0:0]
        :fail2ban-webserver-w00tw00t - [0:0]
        -A INPUT -p tcp -m multiport --dports 80,443 -j fail2ban-apache-myadmin
        -A INPUT -p tcp -m multiport --dports 80,443 -j fail2ban-webserver-w00tw00t
        -A INPUT -p tcp -m multiport --dports 80,443 -j fail2ban-apache-noscript
        -A INPUT -p tcp -m multiport --dports 80,443 -j fail2ban-apache
        -A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh-ddos
        -A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh
        -A INPUT -i tun+ -j ACCEPT
        -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT
        -A FORWARD -i tun+ -j ACCEPT
        -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A fail2ban-apache -j RETURN
        -A fail2ban-apache-myadmin -s 211.154.213.122/32 -j DROP
        -A fail2ban-apache-myadmin -s 201.170.229.96/32 -j DROP
        -A fail2ban-apache-myadmin -j RETURN
        -A fail2ban-apache-noscript -j RETURN
        -A fail2ban-ssh -s 76.9.59.66/32 -j DROP
        -A fail2ban-ssh -s 64.13.220.73/32 -j DROP
        -A fail2ban-ssh -s 203.69.139.179/32 -j DROP
        -A fail2ban-ssh -s 173.10.11.146/32 -j DROP
        -A fail2ban-ssh -j RETURN
        -A fail2ban-ssh-ddos -j RETURN
        -A fail2ban-webserver-w00tw00t -s 217.70.51.154/32 -j DROP
        -A fail2ban-webserver-w00tw00t -s 86.35.242.58/32 -j DROP
        -A fail2ban-webserver-w00tw00t -j RETURN
        COMMIT
        # Completed on Thu Oct 18 22:05:33 2012

    Also, here is my OpenVPN server configuration:

        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert server.crt
        key server.key
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        keepalive 10 120
        comp-lzo
        user nobody
        group users
        persist-key
        persist-tun
        status /var/log/openvpn/openvpn-status.log
        verb 3
        push "redirect-gateway def1"
        push "dhcp-option DNS 213.133.98.98"
        push "dhcp-option DNS 213.133.99.99"
        push "dhcp-option DNS 213.133.100.100"
        client-to-client
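
    A sketch of the usual recipe for this setup: push the server's own tunnel-side address as DNS (10.8.0.1, given the server 10.8.0.0 255.255.255.0 line above), and make bind listen on and accept queries from the tunnel. The existing -A INPUT -i tun+ -j ACCEPT rule already admits the port 53 traffic.

        # server.conf -- point clients at the tunnel-side address of this box
        push "dhcp-option DNS 10.8.0.1"

        // named.conf.options -- let bind serve the VPN subnet
        options {
            listen-on { 127.0.0.1; 10.8.0.1; };
            allow-query { localhost; 10.8.0.0/24; };
            allow-recursion { localhost; 10.8.0.0/24; };
        };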

    Read the article

  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story: we need a rock-solid, reliable file-mover process. We have source directories, often being written to, that we need to move files from. The files come in pairs - a big binary, and a small XML index. We get a CTL file that defines these file bundles. There is a process that operates on the files once they are in the destination directory; it gets rid of them when it's done. Would rsync do the best job, or do we need something more complex?
    Long story as follows: we have multiple sources to pull from. One set of directories is on a Windows machine (which does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (most of these are also Windows). Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source. However, we're working on getting rid of that machine, and there are other sources now - the SFTP servers - that we cannot presently run our own scripts on. For security reasons, we can't run the copy jobs on our AIX servers - they have no access to the source servers.
    We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, then copies that to the AIX machines, and then deletes the files from the source. However, we're finding any number of bugs and cases of poorly handled error checking. None of us are Java experts, so fixing/improving this may be difficult.
    Concerns for us: with a remote source (SFTP), will rsync leave alone any file still being written? Some of these files are large. From reading the docs, it seems like rsync will be very good about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this?
    Additional info: we will be concerned about the ingestion process that operates on the files once they are in the destination directory. We don't want it operating on files while we are in the process of copying them; it waits until the small XML index file is present, and our current copy jobs are supposed to copy the XML file last. Sometimes the network has problems, sometimes the SFTP source servers crap out on us, and sometimes we typo the config files so a destination directory doesn't exist. We never want to lose a file due to this sort of error, and we need good logs.
    If you were presented with this, would you just script up some rsync (see the sketch below)? Or would you build or buy a tool - and if so, what would it be, or what technologies would it use? I (and others on my team) are decent with Perl.
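    If rsync ends up being the tool, here is a minimal sketch of the shape such a job might take (host names and paths are invented for illustration; it assumes the source speaks ssh+rsync - a pure-SFTP Windows server would need lftp or sshfs in front of it). It honors the index-last rule and relies on the fact that rsync writes each file under a temporary name and only renames it into place when the transfer completes:
    #!/bin/sh
    set -e
    SRC='user@sourcehost:/outgoing/'   # assumed
    TMP='/data/staging'                # assumed local staging area
    DEST='user@aixhost:/ingest/'       # assumed
    # Binaries first; --partial keeps interrupted transfers resumable.
    rsync -av --partial --exclude='*.xml' "$SRC" "$TMP/"
    # XML indexes only once the binaries are complete.
    # (flat source directory assumed; add --include='*/' for subdirectories)
    rsync -av --include='*.xml' --exclude='*' "$SRC" "$TMP/"
    # Same binary-then-index order toward the destination.
    rsync -av --partial --exclude='*.xml' "$TMP/" "$DEST"
    rsync -av --include='*.xml' --exclude='*' "$TMP/" "$DEST"
    One caveat: rsync cannot tell whether the far end is still writing a source file; a common mitigation is to sync only files untouched for a few minutes (e.g. a find ... -mmin +5 listing fed to --files-from).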

    Read the article

  • Apache will not stop/start gracefully

    - by ddjammin
    CentOS 6 64-bit running Apache 2.2.15-29.el6.centos. When I try to stop/start or restart httpd, I get an error that says it has failed; a tail of the error log is below. I also noticed that an httpd.pid file is not created even though it is configured in the main conf file. If I set SELinux to permissive, it works just fine, but I do not want to run with SELinux disabled. If I delete the SSL_Mutex file, it will start.
    HTTPD was running fine until I tried to add the SSL configuration. I copied over the ssl.conf file from a working server into the conf.d folder. I also copied an sslcert folder into the conf folder; it contains the certs, key, CSR and password file. I think the problem has to do with the SELinux context of the copied sslcert folder, but I am not certain and not sure how to fix it. Below is the security context for the sslcert folder after executing restorecon -R sslcert:
    ls -Z
    -rw-r--r--. root root system_u:object_r:httpd_config_t:s0 httpd.conf
    -rw-r--r--. root root system_u:object_r:httpd_config_t:s0 magic
    drwxr-xr-x. root root system_u:object_r:httpd_config_t:s0 sslcert
    tail -f /var/log/httpd/error_log
    [Thu Oct 17 13:33:19 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
    [Thu Oct 17 13:33:20 2013] [notice] Digest: generating secret for digest authentication ...
    [Thu Oct 17 13:33:20 2013] [notice] Digest: done
    [Thu Oct 17 13:33:20 2013] [warn] pid file /etc/httpd/logs/ssl.pid overwritten -- Unclean shutdown of previous Apache run?
    [Thu Oct 17 13:33:20 2013] [notice] Apache/2.2.15 (Unix) DAV/2 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations
    [Thu Oct 17 21:04:48 2013] [notice] caught SIGTERM, shutting down
    [Thu Oct 17 21:06:42 2013] [notice] SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0
    [Thu Oct 17 21:06:42 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
    [Thu Oct 17 21:06:42 2013] [error] (17)File exists: Cannot create SSLMutex with file `/etc/httpd/logs/ssl_mutex'
    I also saw mention of possible issues with semaphores. Below is the output of the current semaphores while Apache is not running:
    ipcs -s
    ------ Semaphore Arrays --------
    key semid owner perms nsems
    0x00000000 0 root 600 1
    0x00000000 65537 root 600 1
    Finally, SELinux reports the following:
    sealert -a /var/log/audit/audit.log
    type=AVC msg=audit(1382034755.118:420400): avc: denied { write } for pid=3393 comm="httpd" name="ssl_mutex" dev=dm-0 ino=9513484 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:httpd_log_t:s0 tclass=file
    **** Invalid AVC allowed in current policy ***
    ERROR: failed to read complete file, 1044649 bytes read out of total 1043317 bytes (/var/log/audit/audit.log)
    found 1 alerts in /var/log/audit/audit.log
    SELinux is preventing /usr/sbin/httpd from remove_name access on the directory ssl_mutex.
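    A sketch of the usual remedies, in the order one would try them (paths are taken from the question above; the audit2allow module name is invented):
    # 1. Remove the stale mutex left behind by the unclean shutdown.
    rm /etc/httpd/logs/ssl_mutex
    # 2. Re-label the copied certificate folder (and config tree) to match policy.
    restorecon -Rv /etc/httpd/conf/sslcert
    # 3. If AVC denials persist, turn them into a local policy module.
    grep httpd /var/log/audit/audit.log | audit2allow -M httpdlocal
    semodule -i httpdlocal.pp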

    Read the article

  • [CentOS 4.8] nslookup resolves domains to IPs, but I can't get a response to pings to external servers

    - by Beco
    I have a fresh install of CentOS 4.8 running on an internal development server. I haven't done anything to it besides setting up sudoers and SSH. I can SSH into the server and from there resolve domains to IPs and ping internal servers, but for some reason I don't get any response from pinging external servers. The software firewall is disabled, and the problem is present with both static and DHCP-assigned network configurations. The network domain controller is a Windows Server 2003 box.
    $ nslookup google.com
    Server: 10.254.2.5
    Address: 10.254.2.5#53
    Non-authoritative answer:
    Name: google.com
    Address: 74.125.47.147
    Name: google.com
    Address: 74.125.47.99
    <etc...>
    10.254.2.5 is the Win2K3 server.
    $ ping google.com
    PING google.com (74.125.47.106) 56(84) bytes of data.
    It just hangs here indefinitely.
    $ cat /etc/resolv.conf
    ; generated by /sbin/dhclient-script
    search <...snip...>.local
    nameserver 10.254.2.5
    nameserver 10.254.2.124
    10.254.2.124 is the backup DC server, which is currently off and tombstoned by this point. The snipped section is our company name.
    # ifconfig
    eth0 Link encap:Ethernet HWaddr <snip>
    inet addr:10.254.2.101 Bcast:10.254.2.255 Mask:255.255.255.0
    inet6 addr: <snip>/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:80066 errors:0 dropped:0 overruns:0 frame:0
    TX packets:4421 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:7810133 (7.4 MiB) TX bytes:590550 (576.7 KiB)
    Interrupt:225 Base address:0xc000
    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:16436 Metric:1
    RX packets:32 errors:0 dropped:0 overruns:0 frame:0
    TX packets:32 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:8104 (7.9 KiB) TX bytes:8104 (7.9 KiB)
    # route -n
    Kernel IP routing table
    Destination Gateway Genmask Flags Metric Ref Use Iface
    10.254.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
    169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
    0.0.0.0 10.254.2.5 0.0.0.0 UG 0 0 0 eth0
    And, for good measure, a snapshot of the current ethernet config via the system-config-network GUI. Edit: I don't yet have enough rep to post images, so here's a link. Sorry! system-config-network snapshot
    I'm pretty green when it comes to setting up *nix dev servers and network configuration in general, so please let me know if I've left out critical information, or posted information I shouldn't have posted. Thanks!
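    A few next-step diagnostics an answer would likely ask for (commands assumed, not from the question):
    ping 10.254.2.5            # the default gateway per route -n; DNS already works, so this should reply
    ping 74.125.47.99          # bypass DNS entirely: does raw ICMP to the outside also fail?
    traceroute -n 74.125.47.99 # shows whether packets die at the gateway or beyond
    If the gateway replies but external pings vanish, the Windows box at 10.254.2.5 is the place to look - for example a missing NAT/routing role, or a firewall that forwards DNS queries but drops outbound ICMP.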

    Read the article

  • Prefer extension methods for encapsulation and reusability?

    - by tzaman
    edit4: wikified, since this seems to have morphed more into a discussion than a specific question.
    In C++ programming, it's generally considered good practice to "prefer non-member non-friend functions" to instance methods. This has been recommended by Scott Meyers in this classic Dr. Dobbs article, and repeated by Herb Sutter and Andrei Alexandrescu in C++ Coding Standards (item 44); the general argument being that if a function can do its job solely by relying on the public interface exposed by the class, it actually increases encapsulation to have it be external. While this confuses the "packaging" of the class to some extent, the benefits are generally considered worth it. Now, ever since I started programming in C#, I've had a feeling that here is the ultimate expression of the concept they're trying to achieve with "non-member, non-friend functions that are part of a class interface". C# adds two crucial components to the mix - the first being interfaces, and the second extension methods:
    - Interfaces allow a class to formally specify its public contract - the methods and properties it exposes to the world. Any other class can choose to implement the same interface and fulfill that same contract.
    - Extension methods can be defined on an interface, providing any functionality that can be implemented via the interface to all implementers automatically. And best of all, because of the "instance syntax" sugar and IDE support, they can be called the same way as any other instance method, eliminating the cognitive overhead!
    So you get the encapsulation benefits of "non-member, non-friend" functions with the convenience of members. Seems like the best of both worlds to me; the .NET library itself provides a shining example in LINQ. However, everywhere I look I see people warning against extension-method overuse; even the MSDN page itself states: "In general, we recommend that you implement extension methods sparingly and only when you have to."
    (edit: Even in the current .NET library, I can see places where it would've been useful to have extensions instead of instance methods - for example, all of the utility functions of List<T> (Sort, BinarySearch, FindIndex, etc.) would be incredibly useful if they were lifted up to IList<T> - getting free bonus functionality like that adds a lot more benefit to implementing the interface.)
    So what's the verdict? Are extension methods the acme of encapsulation and code reuse, or am I just deluding myself?
    (edit2: In response to Tomas - while C# did start out with Java's (overly, imo) OO mentality, it seems to be embracing more multi-paradigm programming with every new release; the main thrust of this question is whether using extension methods to drive a style change (towards more generic / functional C#) is useful or worthwhile.)
    edit3: overridable extension methods
    The only real problem identified so far with this approach is that you can't specialize extension methods if you need to. I've been thinking about the issue, and I think I've come up with a solution. Suppose I have an interface MyInterface, which I want to extend - I define my extension methods in a MyExtension static class, and pair it with another interface, call it MyExtensionOverrider.
    MyExtension methods are defined according to this pattern:
    public static int MyMethod(this MyInterface obj, int arg, bool attemptCast = true)
    {
        if (attemptCast && obj is MyExtensionOverrider)
        {
            return ((MyExtensionOverrider)obj).MyMethod(arg);
        }
        // regular implementation here
    }
    The override interface mirrors all of the methods defined in MyExtension, except without the this or attemptCast parameters:
    public interface MyExtensionOverrider
    {
        int MyMethod(int arg);
        string MyOtherMethod();
    }
    Now, any class can implement the interface and get the default extension functionality:
    public class MyClass : MyInterface { ... }
    Anyone who wants to override it with specific implementations can additionally implement the override interface:
    public class MySpecializedClass : MyInterface, MyExtensionOverrider
    {
        public int MyMethod(int arg)
        {
            // specialized implementation for one method
        }
        public string MyOtherMethod()
        {
            // fall back to the default implementation for the others
            return MyExtension.MyOtherMethod(this, attemptCast: false);
        }
    }
    And there we go: extension methods provided on an interface, with the option of complete extensibility if needed. Fully general, too - the interface itself doesn't need to know about the extension / override, and multiple extension / override pairs can be implemented without interfering with each other. I can see three problems with this approach:
    - It's a little bit fragile - the extension methods and the override interface have to be kept synchronized manually.
    - It's a little bit ugly - implementing the override interface involves boilerplate for every function you don't want to specialize.
    - It's a little bit slow - there's an extra bool comparison and cast attempt added to the mainline of every method.
    Still, all those notwithstanding, I think this is the best we can get until there's language support for interface functions. Thoughts?

    Read the article

  • SharePoint 2010 – Central Admin tooling to create host header site collections

    - by eJugnoo
    Just like SharePoint 2007, SharePoint 2010 lets you create host-header based site collections. This means you do not necessarily need to create a site collection under a managed path like /sites/; you can create multiple root-level site collections on the same web application/port by using host-header site collections. All you need to do is point your domain or sub-domain to your web application and create a matching site collection. But, just like in 2007, it is something you do with STSADM - it is not available in the Central Admin UI in 2010 either. You can, however, now also use PowerShell to create one:
    C:\PS> $w = Get-SPWebApplication http://sitename
    C:\PS> New-SPSite http://www.contoso.com -OwnerAlias "DOMAIN\jdoe" -HostHeaderWebApplication $w -Title "Contoso" -Template "STS#0"
    This example creates a host-header site collection. Because the template is provided, the root web of this site collection will be created.
    I’ve been playing with WCM in SharePoint 2010 more and more, and for that I preferred creating hosts-file entries for the desired domains and creating site collections by those headers - in my dev environment. I used PowerShell initially, but then got interested in building my own UI on Central Admin instead.
    Developed with Visual Studio 2010
    So I used the new Visual Studio 2010 tooling to create an empty SharePoint 2010 project. I added an application page (there is no option to add an _Admin page item in VS 2010 RC), which got created in the Layouts “mapped” folder. I then created a new Admin mapped folder for the 14-“hive” and moved my new page there instead. Yes, I didn’t change the base class for the page; it’s just that it runs under _admin, but it is indeed a LayoutsPageBase-inherited page.
    To introduce an action link in the Central Admin console, I created the following element:
    <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
      <CustomAction
        Id="CreateSiteByHeader"
        Location="Microsoft.SharePoint.Administration.Applications"
        Title="Create site collections by host header"
        GroupId="SiteCollections"
        Sequence="15"
        RequiredAdmin="Delegated"
        Description="Create a new top-level web site, by host header" >
        <UrlAction Url="/_admin/OfficeToolbox/CreateSiteByHeader.aspx" />
      </CustomAction>
    </Elements>
    I used Reflector to understand any special code behind createpage.aspx, and created a new page for our purpose - CreateSiteByHeader.aspx. From there I quickly created similar code-behind, without all the fancy Farm Config Wizard handling, and dealt with alternate implementations of sealed classes! The goal was to create a professional-looking, OOB-type experience. I also added regex validation to ensure the user types a valid domain name as the header value. Below is the result…
    Release @ Codeplex
    I’ve released the WSP on OfficeToolbox @ Codeplex, and you can download it from here. Hope you find it useful…
    -- Sharad

    Read the article

  • How To Configure Remote Desktop To Hyper-V Guest Virtual Machines

    - by Brian Jackett
    Configuring Remote Desktop (RDP) from a host Hyper-V machine to a guest virtual machine can be tricky, so this post is dedicated to the issues and resolution steps I went through to allow RDP. Cutting to the point, below are the things to look for, followed by some explanation about my scenario if you care to read. This is not an exhaustive list of what is required, just the items that were causing problems for my particular scenario.
    Requirements
    - Allow Remote Desktop Connections in the guest OS.
    - The network adapter type must allow communication with the host machine (e.g. use an “Internal” virtual adapter).
    - If running Server 2008 R2 on the guest, network discovery mode must be turned on.
    - If running Server 2008 R2 on the guest, the services supporting network discovery mode must be running: DNS Client, Function Discovery Resource Publication, SSDP Discovery, UPnP Device Host.
    My Environment
    A quick word about my environment. I am running Windows Server 2008 R2 with Hyper-V on my laptop, and numerous guest VMs running Windows Server 2003 R2 or Windows Server 2008 R2. I run a domain controller VM and then 1 or 2 SharePoint servers depending on my work needs. I’ve found this setup to work well except when it comes to the display window for my VMs.
    The Issue
    Ever since I began running Hyper-V I haven’t been able to RDP to my guest VMs, which means the resolution of my connection windows has been limited to what the native Hyper-V connections allow. During personal use I can put the resolution up to 1152 x 864, but during presentations I am usually limited to a measly 800 x 600. That is, until today, when I decided to fully investigate why I couldn’t connect via RDP.
    First, a thank you to John Ross (@johnrossjr), Christina Wheeler (@cwheeler76) and Clayton Cobb (@warrtalon) for various suggestions while I was researching tonight. As it turns out I had not 1, not 2, but 3 items preventing me from using RDP. Let’s dig into the requirements above.
    Allow RDP Connection
    This item I had previously taken care of, but it bears repeating because by default Windows Server 2008 R2 does not allow RDP connections. Change the setting from “Don’t allow…” to whichever “Allow connections…” setting suits your needs. I chose the less secure option, as this is just my dev laptop.
    Network Adapter Type
    When I originally configured my VMs, I gave each 2 network adapters: one using the physical ethernet adapter for internet use, and a virtual private adapter for communication between the VMs. The ethernet adapter’s connection is an “External” adapter and thus doesn’t connect between the host and guest. The virtual private adapter allowed communication ONLY between the VMs and not to my host. There is a third option, “Internal”, which allows communication between VMs as well as to the host. After finding out this distinction I promptly created an Internal network adapter and assigned it to my VMs.
    Turn On Network Discovery
    Seems like a pretty common-sense thing, but in order to allow remote desktop connections the target computer must be able to be found by the source computer (explained here). One of the settings that controls whether a computer can be found on the network is aptly named Network Discovery. By default Windows Server 2008 R2 turns Network Discovery off for security purposes. To enable it, open up the Network and Sharing Center and click “Change Advanced Sharing Settings” on the left.
    On the following screen select “Turn on network discovery” for the currently used profile and click Save Settings. You may notice, though, that your selection to turn on network discovery doesn’t save. If this is the case, then you most likely don’t have the supporting services running (as was my case).
    Network Discovery Supporting Services
    There are a total of 4 services (listed again below) that need to be running before you can turn on network discovery (explained here); a command-line sketch of the same changes follows after the links. In my guest VM I found that I had DNS Client already running while the other 3 were disabled. I set them all to enabled and started the ones that were stopped. After this change I returned to the Sharing settings screen and found that Network Discovery was turned on. I’m not sure whether this was picking up my earlier attempt to turn it on or whether starting those services turned it on. Either way, the end result was a success.
    - DNS Client
    - Function Discovery Resource Publication
    - SSDP Discovery
    - UPnP Device Host
    Before and After Results
    The first image is the smaller, square-shaped viewing window used by the native Hyper-V connection. The second is the full-screen RDP connection in all its widescreen glory.
    Conclusion
    Over the past few months I’ve found Hyper-V to be very useful for virtualizing my development environments, but I’ve also had a steep learning curve to get various items configured just right. Allowing RDP connections to guest VMs was one area that I hadn’t been able to get right for the longest time. Now that I’ve resolved these issues I hope that others can avoid the pitfalls that I ran into. If you know of any other items I left off, feel free to let me know.
    -Frog Out
    Links
    Turning on Network Discovery: http://sqlblog.com/blogs/john_paul_cook/archive/2009/08/15/remote-desktop-connection-on-windows-server-2008-r2.aspx
    Services required for Network Discovery: http://social.technet.microsoft.com/Forums/en-US/winservergen/thread/2e1fea01-3f2b-4c46-a631-a8db34ed4f84
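    For the command-line inclined, here is a hedged sketch of those same service and firewall changes (the standard service short names are assumed; run from an elevated prompt inside the guest):
    rem enable and start the four network discovery services
    sc config Dnscache start= auto
    sc config FDResPub start= auto
    sc config SSDPSRV start= auto
    sc config upnphost start= auto
    net start FDResPub
    net start SSDPSRV
    net start upnphost
    rem allow the firewall rule group that network discovery relies on
    netsh advfirewall firewall set rule group="Network Discovery" new enable=Yes
    "net start Dnscache" is omitted because, as noted above, DNS Client was already running; the netsh line assumes the built-in Windows Firewall is in use.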

    Read the article

< Previous Page | 590 591 592 593 594 595 596 597 598 599 600 601  | Next Page >