Search Results

Search found 3118 results on 125 pages for 'fragment caching'.


  • nokia cell phone not accepting IP from dnsmasq dhcp server

    - by samix
    Hello, I'm having a problem connecting a Nokia cell phone to my home wifi network. The wifi network is provided by a wireless card in a machine running Debian Testing with a 2.6.26-2-686 kernel. The card is a D-Link DWL-G520 working in AP mode with WPA encryption enabled. The wireless network is provided by hostapd using the madwifi driver. Windows and Mac machines work properly with this wifi network. When I try to get the Nokia phone to connect to the wifi network, I get these lines in my dnsmasq log (to see the lines without wrapping, here is the pastebin link for convenience - http://pastebin.com/m466c8fd2):
    Oct 27 13:25:21 red hostapd: ath0: STA 11:22:33:44:55:66 IEEE 802.11: disassociated
    Oct 27 13:25:21 red hostapd: ath0: STA 11:22:33:44:55:66 IEEE 802.11: associated
    Oct 27 13:25:21 red hostapd: ath0: STA 11:22:33:44:55:66 RADIUS: starting accounting session 4AE664FA-00000036
    Oct 27 13:25:21 red hostapd: ath0: STA 11:22:33:44:55:66 WPA: pairwise key handshake completed (WPA)
    Oct 27 13:25:21 red hostapd: ath0: STA 11:22:33:44:55:66 WPA: group key handshake completed (WPA)
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 Available DHCP range: 192.168.5.150 -- 192.168.5.199
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 DHCPDISCOVER(ath0) 0.0.0.0 11:22:33:44:55:66
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 DHCPOFFER(ath0) 192.168.5.21 11:22:33:44:55:66
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 requested options: 12:hostname, 6:dns-server, 15:domain-name,
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 requested options: 1:netmask, 3:router, 28:broadcast, 120:sip-server
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 tags: known, ath0
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 next server: 192.168.5.1
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 1 option: 53:message-type 02
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 54:server-identifier 192.168.5.1
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 51:lease-time 00:00:46:50
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 58:T1 00:00:23:28
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 59:T2 00:00:3d:86
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 1:netmask 255.255.255.0
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 28:broadcast 192.168.5.255
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 3:router 192.168.5.1
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 6:dns-server 192.168.5.1
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 8 option: 15:domain-name home.pvt
    Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 3 option: 12:hostname NokiaCellPhone
    Does anybody know what the problem might be? If I switch off dnsmasq DHCP query logging, i.e. if I decrease the verbosity of the log, all I see are two lines of DHCPDISCOVER(ath0) and DHCPOFFER(ath0) repeated in the log, with no acceptance by the cell phone. It appears as though the phone is not accepting the DHCP offer. However, if I give the phone a static IP address in its configuration, it works properly on the wifi network. So it appears the problem is DHCP-related. Hints? Suggestions?
Installed stuff: $ dpkg -l dnsmasq hostap* | grep ^i ii dnsmasq 2.50-1 A small caching DNS proxy and DHCP/TFTP server ii dnsmasq-base 2.50-1 A small caching DNS proxy and DHCP/TFTP server ii hostapd 1:0.6.9-3 user space IEEE 802.11 AP and IEEE 802.1X/WPA/ Thanks. PS: Here is the DHCP tcp dump for more information (with mac addresses changed): $ sudo dhcpdump -i ath0 -h ^11:22:33:44:55:66 TIME: 2009-10-30 12:15:32.916 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 0 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:32.918 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 0 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:32.918 IP: 192.168.5.1 (a:bb:cc:dd:ee:ff) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 2 (BOOTPREPLY) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 0 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 192.168.5.21 SIADDR: 192.168.5.1 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 2 (DHCPOFFER) OPTION: 54 ( 4) Server identifier 192.168.5.1 OPTION: 51 ( 4) IP address leasetime 18000 (5h) OPTION: 58 ( 4) T1 9000 (2h30m) OPTION: 59 ( 4) T2 15750 (4h22m30s) OPTION: 1 ( 4) Subnet mask 255.255.255.0 OPTION: 28 ( 4) Broadcast address 192.168.5.255 OPTION: 3 ( 4) Routers 192.168.5.1 OPTION: 6 ( 4) DNS server 192.168.5.1 OPTION: 15 ( 8) Domainname home.pvt OPTION: 12 ( 3) Host name Nokia_E63 TIME: 2009-10-30 12:15:34.922 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 2 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:34.922 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 2 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . 
FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:34.923 IP: 192.168.5.1 (a:bb:cc:dd:ee:ff) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 2 (BOOTPREPLY) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 2 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 192.168.5.21 SIADDR: 192.168.5.1 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 2 (DHCPOFFER) OPTION: 54 ( 4) Server identifier 192.168.5.1 OPTION: 51 ( 4) IP address leasetime 18000 (5h) OPTION: 58 ( 4) T1 9000 (2h30m) OPTION: 59 ( 4) T2 15750 (4h22m30s) OPTION: 1 ( 4) Subnet mask 255.255.255.0 OPTION: 28 ( 4) Broadcast address 192.168.5.255 OPTION: 3 ( 4) Routers 192.168.5.1 OPTION: 6 ( 4) DNS server 192.168.5.1 OPTION: 15 ( 8) Domainname home.pvt OPTION: 12 ( 3) Host name Nokia_E63 TIME: 2009-10-30 12:15:38.919 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 6 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:38.920 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 6 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:38.921 IP: 192.168.5.1 (a:bb:cc:dd:ee:ff) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 2 (BOOTPREPLY) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 6 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 192.168.5.21 SIADDR: 192.168.5.1 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . 
OPTION: 53 ( 1) DHCP message type 2 (DHCPOFFER) OPTION: 54 ( 4) Server identifier 192.168.5.1 OPTION: 51 ( 4) IP address leasetime 18000 (5h) OPTION: 58 ( 4) T1 9000 (2h30m) OPTION: 59 ( 4) T2 15750 (4h22m30s) OPTION: 1 ( 4) Subnet mask 255.255.255.0 OPTION: 28 ( 4) Broadcast address 192.168.5.255 OPTION: 3 ( 4) Routers 192.168.5.1 OPTION: 6 ( 4) DNS server 192.168.5.1 OPTION: 15 ( 8) Domainname home.pvt OPTION: 12 ( 3) Host name Nokia_E63 TIME: 2009-10-30 12:15:46.944 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: ccafe769 SECS: 14 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:46.944 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: ccafe769 SECS: 14 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:46.945 IP: 192.168.5.1 (a:bb:cc:dd:ee:ff) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 2 (BOOTPREPLY) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: ccafe769 SECS: 14 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 192.168.5.21 SIADDR: 192.168.5.1 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 2 (DHCPOFFER) OPTION: 54 ( 4) Server identifier 192.168.5.1 OPTION: 51 ( 4) IP address leasetime 18000 (5h) OPTION: 58 ( 4) T1 9000 (2h30m) OPTION: 59 ( 4) T2 15750 (4h22m30s) OPTION: 1 ( 4) Subnet mask 255.255.255.0 OPTION: 28 ( 4) Broadcast address 192.168.5.255 OPTION: 3 ( 4) Routers 192.168.5.1 OPTION: 6 ( 4) DNS server 192.168.5.1 OPTION: 15 ( 8) Domainname home.pvt OPTION: 12 ( 3) Host name Nokia_E63 TIME: 2009-10-30 12:15:48.952 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 ... and so on ...
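    One avenue worth testing, on the assumption that the phone ignores unicast DHCP replies: newer dnsmasq releases (2.58 and later, so newer than the 2.50 shown above) can be told to broadcast their replies to specific clients. A minimal sketch, with the tag name made up for illustration:

        # /etc/dnsmasq.conf -- hypothetical workaround, requires dnsmasq >= 2.58
        # Tag the phone by MAC address, then force broadcast DHCP replies for that tag
        dhcp-host=11:22:33:44:55:66,set:bcast-client
        dhcp-broadcast=tag:bcast-client

    If upgrading dnsmasq isn't an option, comparing a tcpdump of a working client (the Mac or Windows box) against the Nokia's exchange should at least show whether a unicast/broadcast difference is the culprit.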


  • Windows Phone 7 development: Using isolated storage

    - by DigiMortal
    In my previous posting about Windows Phone 7 development I showed how to use the WebBrowser control in Windows Phone 7. In this posting I make some other improvements to my blog reader application and show you how to use isolated storage to store information on the phone.

    Why isolated storage? Isolated storage is the place where your application can save its data and settings. The image on the right (that I stole from the MSDN library) shows you how the application data store is organized. You have no other option for keeping your files besides isolated storage, because Windows Phone 7 does not allow you to save data directly to other file system locations. From MSDN: “Isolated storage enables managed applications to create and maintain local storage. The mobile architecture is similar to the Silverlight-based applications on Windows. All I/O operations are restricted to isolated storage and do not have direct access to the underlying operating system file system. Ultimately, this helps to provide security and prevents unauthorized access and data corruption.”

    Saving files from the web to isolated storage. I updated my RSS reader so that it reads RSS from the web only if there is no local file with the RSS. The user can update the RSS file by clicking a button. The file is also created when the application starts and there is no RSS file yet. Why am I doing this? I want my application to be able to work offline too. As my code needs some more refactoring, I will provide it in one of my next postings about Windows Phone 7. If you want it sooner, please leave me a comment here. Here is the code for my RSS downloader that downloads the RSS feed and saves it to an isolated storage file called rss.xml.

    public class RssDownloader
    {
        private string _url;
        private string _fileName;

        public delegate void DownloadCompleteDelegate();
        public event DownloadCompleteDelegate DownloadComplete;

        public RssDownloader(string url, string fileName)
        {
            _url = url;
            _fileName = fileName;
        }

        public void Download()
        {
            var request = (HttpWebRequest)WebRequest.Create(_url);
            var result = (IAsyncResult)request.BeginGetResponse(ResponseCallback, request);
        }

        private void ResponseCallback(IAsyncResult result)
        {
            var request = (HttpWebRequest)result.AsyncState;
            var response = request.EndGetResponse(result);

            // Copy the response stream into an isolated storage file
            using(var stream = response.GetResponseStream())
            using(var reader = new StreamReader(stream))
            using(var appStorage = IsolatedStorageFile.GetUserStoreForApplication())
            using(var file = appStorage.OpenFile("rss.xml", FileMode.OpenOrCreate))
            using(var writer = new StreamWriter(file))
            {
                writer.Write(reader.ReadToEnd());
            }

            if (DownloadComplete != null)
                DownloadComplete();
        }
    }

    Of course, I modified the RSS source for my application to use the rss.xml file from isolated storage. As isolated storage files are also based on streams, we can use them everywhere streams are expected.

    Reading isolated storage files. As isolated storage files are opened as streams, you can read them like the usual files in your usual applications. The next code fragment shows you how to open a file from isolated storage and read it using XmlReader. Previously I used the response stream in the same place.
    using(var appStorage = IsolatedStorageFile.GetUserStoreForApplication())
    using(var file = appStorage.OpenFile("rss.xml", FileMode.Open))
    {
        var reader = XmlReader.Create(file);

        // more code
    }

    As you can see, there is nothing complex here. If you have worked with System.IO namespace objects then you will find the isolated storage classes and methods very similar. Also note that application storage and isolated storage files must be disposed of once you are no longer using them.
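    As a side note, the "download only when no local file exists" behaviour described above can be expressed with IsolatedStorageFile.FileExists(). A minimal sketch (the feed URL is made up; RssDownloader is the class from this posting):

    using (var appStorage = IsolatedStorageFile.GetUserStoreForApplication())
    {
        // Only hit the network when there is no cached copy yet
        if (!appStorage.FileExists("rss.xml"))
        {
            var downloader = new RssDownloader("http://example.com/feed.xml", "rss.xml");
            downloader.DownloadComplete += () => { /* refresh the UI here */ };
            downloader.Download();
        }
    }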


  • It’s nice to be important, but it’s more important to be nice

    - by BuckWoody
    I’ve been a little “preachy” lately, telling you that you should let people finish their sentences, and always check a problem out before you tell a user that their issue is “impossible”. Well, I’ll round that out with one more tip today. Keep in mind that all of these things are actions I’ve been guilty of, hopefully in the past. I’m kind of a “work in progress”. And yes, I know these tips are coming from someone who picks on people in presentations, but that is of course done in fun, and (hopefully) with the audience’s knowledge.   (No, this isn’t aimed at any one person or event in particular – I just see it happen a lot)   I’ve seen, unfortunately over and over, someone in authority react badly to someone who is incorrect, or at least perceived to be incorrect. This might manifest itself in a comment, post, question or whatever, but the point is that I’ve seen really intelligent people literally attack someone they view as getting something wrong. Don’t misunderstand me; if someone posts that you should always drop a production database in the middle of the day I think you should certainly speak up and mention that this might be a bad idea!  No, I’m talking about generalizations or even incorrect statements done in good faith. Let me explain with an example.   Suppose someone makes the statement: “If you don’t have enough space on your system, you can just use a DBCC command to shrink the database”. Let’s take two responses to this statement.   Response One: “That’s insane. Everyone knows that shrinking a database is a stupid idea, you’re just going to fragment your indexes all over the place.” Response Two: “That’s an interesting take – in my experience and from what I’ve read here (someurl.com) I think this might not be a universal best practice.”   Of course, both responses let the person making the statement and those reading it know that you don’t agree, and that it’s probably wrong. But the person you responded to and the general audience hearing you (or reading your response) might form two different opinions of you.   The first response says to me “this person really needs to be right, and takes arguments personally. They aren’t thinking of the other person at all, or the folks reading or hearing the exchange. They turned an incorrect technical statement into a personal attack. They haven’t left the other party any room to ‘save face’, and they have potentially turned what could be a positive learning experience for everyone into a negative. Also, they sound more than just a little arrogant.”   The second response says to me “this person has left room for everyone to save face, has presented evidence to the contrary and is thinking about moving the ball forward and getting it right rather than attacking someone for getting it wrong.” It’s the idea of questioning a statement rather than attacking a person.   Perhaps you have a different take. Maybe you think the “direct” approach is best – and maybe that’s worked for you. Something to consider is what you’ve really accomplished while using that first method. Sure, the info you provide is correct, and perhaps someone out there won’t shrink a database because of your response – but perhaps you’ve turned a lot more people off, and now they won’t listen to your other valuable information. You’ll be an expert, but another one of the nameless, arrogant jerks in technology. And I don’t think anyone likes to be thought of that way.   OK, I’ll get down off of the high-horse now. 
And I’ll keep the title of this entry (said to me by my grandmother when I was a little kid) in mind when I dismount.


  • PHP-APC Installation

    - by Leo
    Trying to get my head around the right way to install the APC cache on PHP 5.3.13. It's a VPS with Apache, configured preferably through WHM/cPanel (although not only). I read a bunch of articles where it was suggested to use FastCGI with APC, as suPHP doesn't do well with opcode caching, and fcgid_module doesn't do it right for APC either. I noted that fcgid_module is a newer package than FastCGI and that's what WHM/cPanel installs for you, but OK, that can be solved I guess. Then I read that php-fpm is a much better alternative for managing the PHP processes, especially for APC. OK. Then I realised that php-fpm has been included in PHP core since 5.3 and got confused. Does that mean I don't have to use FastCGI/fcgid_module (and what should I use instead of them - mod_php or cgi?)? Or does that mean that I still need to get the older FastCGI module and configure it to use one process per user (or just one process?)? Or would fcgid_module work as well? And how bad would it be just to go with mod_php/APC to avoid the trouble of installing php-fpm and FastCGI (WHM/cPanel supports neither), given that Varnish would serve most of the static content anyway - no PHP process needs to be created for static content. Any examples of your FastCGI/fcgid_module/php-fpm/APC configurations would be greatly appreciated as well.
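    For reference, whichever process manager ends up in front of PHP, APC itself is configured through php.ini directives. A minimal sketch, with sizes that are guesses rather than recommendations:

        ; /etc/php.d/apc.ini -- illustrative starting point only
        extension = apc.so
        apc.enabled = 1
        apc.shm_size = 128M   ; one shared segment; size it to fit your compiled scripts
        apc.stat = 1          ; re-check file mtimes; only set 0 if deploys are atomic

    Note that under suPHP or plain CGI each PHP process gets its own cache segment that dies with the process, which is exactly why a persistent process model (mod_php or php-fpm) is usually paired with APC.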



  • NFS "Permission Denied" getting cached on NetApp Filer

    - by Christopher Karel
    We have a bunch of Linux boxes mounting NFS shares off a NetApp filer. From time to time, I will flub some part of the export configuration. Typo on one of the allowed hosts, incorrect IP address, etc, etc. No worries, this is usually done on a test system, or with brand new exports that aren't yet in production. However, I've found that once I've been denied permission to mount something from a Linux machine, the failure gets cached for as long as a day. I will correct the problem that was blocking the mount, re-export on the NetApp, and still not be able to mount the share. I'm pretty sure this caching is done at the NetApp side. It normally ages out after a day or so, but it really sucks having to wait until tomorrow to mount a share. I've tried exportfs -f on the NetApp, as well as dns flush. (I found both suggestions via Google) However, neither one works. I would sell my soul if someone could help out with a command/pagan ritual that would clear up this cache issue. --Christopher Karel
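    For what it's worth, on 7-mode filers the access cache can be checked and flushed per-path; this syntax is from memory, so verify it against your ONTAP release (the volume path here is hypothetical):

        filer> exportfs -c 10.0.0.5 /vol/vol1/share rw   # ask the filer what access it has cached for this client
        filer> exportfs -f /vol/vol1/share               # flush the access cache for this one export

    If a plain exportfs -f doesn't help, flushing the specific path right after re-exporting, then retrying from a freshly unmounted client, may behave differently than the global flush.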


  • Setting up SQL Server 2005 to use all available memory in 32bit Windows Server 2003 - and verifying

    - by Rizwan Kassim
    There are a number of questions along this line, but they either contradict each other or don't show how to properly verify that everything is actually working - hopefully this can be comprehensive... I'm running SQL Server 2005 SP3 Standard on Windows Server 2003 R2 Standard. My server has 8 GB of memory installed, and the system is almost entirely used as a database server; there are some services running on it, but the OS + services can run within 1 GB of RAM. What I've done (please tell me if I'm doing something wrong):

    /3GB in the boot.ini (to increase the amount of user-space memory available - info)
    /PAE in the boot.ini (Windows claimed to be doing PAE even without this switch, somehow.)
    Enabled AWE in SQL Server.
    Enabled the Lock Pages in Memory option for the users SYSTEM and Local Service (info). SQL Server Standard doesn't seem to use this until Cumulative Update 4, which isn't installed on my server (info).
    Set Min/Max Memory to: 1024MB/5112MB

    After doing all the above, we definitely saw a level of improvement, but I'd now like to verify my settings and make sure that I'm making full use of the memory available. (There appeared to be a slowdown when max = 7GB, so I edged off from that value, but it might have been just perceptual.) To verify, I checked the following levels in PerfMon:

    Process(sqlserv): Working Set: 76386304
    SQL Server (Memory Manager): Total Server Memory: 3538944 (I saw a doc noting that this isn't the full memory used by SQL Server, so I'm not sure whether to trust it)

    So -- my questions... Should my max be around 7 GB? If not, what should it be? Why is total server memory at 3.5 GB when it's been allocated 5 GB? What is the proper metric for the amount of memory allocated to SQL Server? The Working Set seems a bit large... Am I possibly missing any steps in the setup? Any recommended resources on starting to tune the caching system now? Thanks
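    On the verification part, a hedged sketch of the checks I'd run (DMV and counter names as they exist in SQL Server 2005; the numbers you get are what matter):

        -- Committed vs. desired memory, from SQL Server's own counters
        SELECT counter_name, cntr_value / 1024 AS value_mb
        FROM sys.dm_os_performance_counters
        WHERE object_name LIKE '%Memory Manager%'
          AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');

        -- Confirm AWE and the memory limits actually in effect
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'awe enabled';
        EXEC sp_configure 'max server memory (MB)';

    One detail that may explain the numbers: with AWE active on 32-bit Windows, buffer pool pages are allocated outside the normal process working set, so Working Set and Total Server Memory can legitimately disagree.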


  • Training on Demand Certification Packages for DBAs

    - by Antoinette O'Sullivan
    The demand for Database Administrators continues to grow.* Almost two-thirds of IT hiring managers indicate that they highly value certifications in validating IT skills and expertise.**

    * Job satisfaction and DBA work growth rate: CNN Money's 2011 Best Jobs in America survey.
    ** Survey among nearly 1,700 respondents by CompTIA, the nonprofit trade association for the IT industry, cited in Certification Magazine, Feb. 14th, 2012.

    Get Certified with Training on Demand. Are you an experienced database professional eager to achieve certification? Is time your most precious resource? Then try our new Training On Demand Certification Value Package with a 20% discount. These all-in-one packages give you everything you need to get certified with success.

    Why Training On Demand: expert training from Oracle’s top instructors; sophisticated streaming video recording; available for 90 days, 24 hours a day, 7 days a week; white boarding and training labs for hands-on experience; start, stop, pause, jump or rewind sections of the course as needed; Oracle University instructor Q&A. A full-text search leads to the right video fragment in a matter of seconds. Watch this demo to see how it works.

    Additional certification resources: Benefits of Oracle Certification; Database Certification Paths; Available Database Certification Exams.

    Getting certified has never been easier! For assistance contact your local Oracle University Service Desk. Many organizations deploy both Oracle Database and MySQL side by side to serve different needs, and as a database professional you can find training courses on both topics at Oracle University! Check out the upcoming Oracle Database 11g training courses and MySQL training courses. Even if you're only managing Oracle Databases at this point in time, getting familiar with MySQL Database will broaden your career path with growing job demand.
    These Value Packages are also available in the following training formats, each with a FREE exam retake: In-Class Edition (save 5%), Live Virtual Class Edition, Self-Study Edition and Training On Demand (each save 20%).

    MySQL Database Administration: MySQL Database Administrator Certification Value Package - In-Class, Live Virtual Class, Self-Study and Training On Demand editions.
    MySQL Developer: MySQL Developer Certification Value Package - In-Class and Live Virtual Class editions.
    Oracle Database 10g: Administrator Certified Associate and Administrator Certified Professional Certification Value Packages - In-Class, Live Virtual Class and Self-Study editions.
    Oracle Database 11g: Administrator Certified Associate and Administrator Certified Professional Certification Value Packages - all four editions. Training On Demand only: Exam Prep Seminar Value Package: Oracle Database Admin 1; Oracle Database 11g Administrator Certified Professional UPGRADE Certification Value Package; Oracle Real Application Clusters 11g and Grid Infrastructure Administration Certified Expert Certification Value Package; Exam Prep Seminar Value Package: Oracle Database Admin 2; Exam Prep Seminar Value Package: Oracle RAC 11g and Grid Infrastructure Administration; Exam Prep Seminar Value Package: Upgrade Oracle Certified Professional (OCP) to Oracle Database 11g.
    SQL and PL/SQL: Oracle Database SQL Expert Certification Value Package - all four editions; Exam Prep Seminar Value Package: Oracle Database SQL - Training On Demand only.

    View our Certification Value Packages. Mention this code at the time of booking: E1245. For a full list of MySQL Training courses and events, go to http://oracle.com/education/mysql.


  • ASP.NET MVC: Converting business objects to select list items

    - by DigiMortal
    Some of our business classes are used to fill dropdown boxes or select lists. And often you have some base class for all your business classes. In this posting I will show you how to use a base business class to write an extension method that converts a collection of business objects to ASP.NET MVC select list items without writing a lot of code.

    BusinessBase, BaseEntity and other base classes. I prefer to have some base class for all my business classes so I can easily use them, regardless of their type, in the contexts I need. NB! Some guys say that it is a good idea to have a base class for all your business classes, and they also suggest having the mappings done the same way in the database. Other guys say it is good to have a base class, but you don't have to have one master table in the database that contains the identities of all your business objects. It is up to you how and what you prefer to do, but whatever you do - think and analyze first, please. :) To keep things maximally simple I will use a very primitive base class in this example. This class has only an Id property and that's it.

    public class BaseEntity
    {
        public virtual long Id { get; set; }
    }

    Now we have Id in the base class, and we have one more question to solve - how to better visualize our business objects? To users an ID is not enough; they want something more informative. We could define some abstract property that all classes must implement. But there is also another option we can use - overriding the ToString() method in our business classes.

    public class Product : BaseEntity
    {
        public virtual string SKU { get; set; }
        public virtual string Name { get; set; }

        public override string ToString()
        {
            if (string.IsNullOrEmpty(Name))
                return base.ToString();

            return Name;
        }
    }

    Although you can add more functionality and properties to your base class, we are now at the point where we have what we needed: an identity and a human-readable presentation of business objects.

    Writing the list item converter. Now we can write the method that creates list items for us.

    public static class BaseEntityExtensions
    {
        public static IEnumerable<SelectListItem> ToSelectListItems<T>
            (this IList<T> baseEntities) where T : BaseEntity
        {
            return ToSelectListItems((IEnumerator<BaseEntity>)
                       baseEntities.GetEnumerator());
        }

        public static IEnumerable<SelectListItem> ToSelectListItems
            (this IEnumerator<BaseEntity> baseEntities)
        {
            var items = new HashSet<SelectListItem>();

            while (baseEntities.MoveNext())
            {
                var item = new SelectListItem();
                var entity = baseEntities.Current;

                item.Value = entity.Id.ToString();
                item.Text = entity.ToString();

                items.Add(item);
            }

            return items;
        }
    }

    You can see two overloads of the same method here. One works with IList<T> and the other with IEnumerator<BaseEntity>. Although my repositories mostly return IList<T> when querying data, there are always situations where I can use more abstract types and interfaces.

    Using the extension methods in code. In your code you can use the ToSelectListItems() extension methods as shown in the following code fragment.

    ...
    var model = new MyFormModel();
    model.Statuses = _myRepository.ListStatuses().ToSelectListItems();
    ...

    You can call this method on all your business classes that extend your base entity. Wanna have some fun with this code? Write an overload for the extension method that accepts the selected item's ID.
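    A minimal sketch of one possible answer to that closing exercise - an overload taking the selected item's ID (naming is mine, not from the original posting):

    public static IEnumerable<SelectListItem> ToSelectListItems<T>
        (this IList<T> baseEntities, long selectedId) where T : BaseEntity
    {
        var items = baseEntities.ToSelectListItems();

        // Mark the item whose Value matches the given entity ID as selected
        foreach (var item in items)
            item.Selected = (item.Value == selectedId.ToString());

        return items;
    }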


  • puppet cert mismatch in ec2

    - by Stick
    I'm setting up a puppetmaster (2.7.6) in EC2 via gems (on RHEL 6) and I'm running into problems with the cert names and getting the master able to talk to itself. My puppet.conf looks like this:

    [main]
    logdir = /var/log/puppet
    rundir = /var/run/puppet
    vardir = /var/lib/puppet
    ssldir = $vardir/ssl
    pluginsync = true
    environment = production
    report = true
    certname = master

    When I start the puppetmaster process the ssl directory looks like:

    ssl/private_keys/master.pem
    ssl/crl.pem
    ssl/public_keys/master.pem
    ssl/ca/ca_crl.pem
    ssl/ca/signed/master.pem
    ssl/ca/ca_crt.pem
    ssl/ca/ca_pub.pem
    ssl/ca/ca_key.pem
    ssl/certs/ca.pem
    ssl/certs/master.pem

    I have an /etc/hosts entry on the box to point the 'puppet' hostname to localhost so that I don't have to change the 'server' option. When I run the agent I get the following:

    # puppet agent --test
    info: Retrieving plugin
    err: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Server hostname 'puppet' did not match server certificate; expected master
    err: /File[/var/lib/puppet/lib]: Could not evaluate: Server hostname 'puppet' did not match server certificate; expected master Could not retrieve file metadata for puppet://puppet/plugins: Server hostname 'puppet' did not match server certificate; expected master
    err: Could not retrieve catalog from remote server: Server hostname 'puppet' did not match server certificate; expected master
    warning: Not using cache on failed catalog
    err: Could not retrieve catalog; skipping run
    err: Could not send report: Server hostname 'puppet' did not match server certificate; expected master

    If I specify the certname as the server (with a corresponding hosts entry) I get:

    # puppet agent --test --server master
    info: Retrieving plugin
    err: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://master/plugins
    info: Caching catalog for master
    info: Applying configuration version '1321805956'
    notice: Finished catalog run in 0.05 seconds

    Which is success of a sort; that source error will bite me later when I'm applying manifests. I've tried a couple of other variations using the EC2 private hostname and gotten mixed results. I'd like to avoid setting server = 'x' and instead use dns/hosts to control what 'puppet' resolves to in order to decide which server to use (plays easier with availability zones, etc).
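    One hedged suggestion, since the agent is validating the hostname 'puppet' against a cert issued only for 'master': Puppet around this version can issue the master's cert with Subject Alternative Names, so one cert satisfies both names. Roughly:

        # puppet.conf on the master, set before the cert is first generated
        [master]
        dns_alt_names = master,puppet

        # If a cert already exists it has to be cleaned and regenerated, e.g.:
        puppet cert clean master
        puppet cert generate master --dns_alt_names=master,puppet

    Treat the exact setting and flags as assumptions to verify against your Puppet release; the underlying idea is simply that the cert must list every name agents might use to reach the master.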


  • Cachefilesd (cachefiles) everything seems to be set up, still not working

    - by Evgenius
    I'm trying to set up cachefilesd to work with my network folder shared using NFS. I have seemingly everything set up; cachefilesd starts normally, yet caching isn't functioning. Here is the output of the commands, which I ran in the order shown:

    1. sudo mount
    ...
    cache-1:/mnt/datashared on /mnt/nfsshare type nfs (rw,sync,ac,acregmin=3,acregmax=60,acdirmin=30,acdirmax=300,lookupcache=pos,vers=3,fsc)
    ...

    2. lsmod | grep cachefiles
    cachefiles 40555 1
    fscache 57430 4 nfs,cifs,cachefiles,nfsv4

    3. [edited - deleted]

    4. uname -r
    3.8.0-34-generic

    5. grep CONFIG_NFS_FSCACHE /boot/config-3.8.0-34-generic
    CONFIG_NFS_FSCACHE=y

    6. lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description: Ubuntu 13.04
    Release: 13.04
    Codename: raring

    7. sudo service cachefilesd restart
    * Restarting FilesCache daemon cachefilesd [ OK ]

    8. dmesg
    [6211206.141781] FS-Cache: Withdrawing cache "mycache"
    [6211210.135236] FS-Cache: Cache "mycache" added (type cachefiles)
    [6211210.135242] CacheFiles: File cache on sdb1 registered
    [6214644.348929] CacheFiles: File cache on sdb1 unregistering
    [6214644.348935] FS-Cache: Withdrawing cache "mycache"
    [6214654.575909] FS-Cache: Cache "mycache" added (type cachefiles)
    [6214654.575915] CacheFiles: File cache on sdb1 registered

    9. ps aux | grep cachefilesd
    root 65399 0.0 0.0 4460 540 ? Ss 23:14 0:00 /sbin/cachefilesd
    1000 65464 0.0 0.0 8160 916 pts/0 S+ 23:16 0:00 grep --color=auto cachefilesd

    Finally, the biggest problem is:

    10. cat /proc/fs/nfsfs/volumes
    NV SERVER PORT DEV FSID FSC
    v3 64476645 801 0:24 233e020f0da07a93 no

    tl;dr I think I configured everything properly but the fsc mount option + cachefilesd don't seem to work.
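    A couple of hedged checks that have mattered in similar setups: the NFS share generally has to be (re)mounted after cachefilesd is already running for fsc to bind, and the kernel exposes counters showing whether FS-Cache is being exercised at all:

        sudo umount /mnt/nfsshare && sudo mount /mnt/nfsshare   # remount while cachefilesd is up
        cat /proc/fs/fscache/stats                              # non-zero Lookups/Acquire counters => caching active
        cat /proc/fs/nfsfs/volumes                              # the FSC column should flip to 'yes'

    (/proc/fs/fscache/stats is only present when the kernel was built with FS-Cache statistics enabled; if it is missing, that alone doesn't mean caching is off.)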


  • How to use most of memory available on MySQL

    - by Zilvinas
    I've got a MySQL server which has both InnoDB and MyISAM tables. The InnoDB tablespace is quite small, under 4 GB. MyISAM is big, ~250 GB in total, of which 50 GB is for indexes. Our server has 32 GB of RAM but it usually uses only ~8 GB. Our key_buffer_size is only 2 GB, yet our key cache hit ratio is ~95%. I find that hard to believe... Here are our key statistics:

    | Key_blocks_not_flushed | 1868        |
    | Key_blocks_unused      | 109806      |
    | Key_blocks_used        | 1714736     |
    | Key_read_requests      | 19224818713 |
    | Key_reads              | 60742294    |
    | Key_write_requests     | 1607946768  |
    | Key_writes             | 64788819    |

    key_cache_block_size is at the default of 1024. We have 52 GB of index data, and a 2 GB key cache is enough to get a 95% hit ratio. Is that possible? On the other side, the data set is 200 GB, and since MyISAM uses OS (CentOS) caching I would expect it to use a lot more memory to cache accessed MyISAM data. But at this stage I see that the key_buffer is completely used, and our buffer pool for InnoDB is 4 GB and is also completely used; that adds up to 6 GB. Which means data is cached using just 1 GB? My question is: how can I check where all the free memory could be used? And how can I check whether MyISAM hits the OS cache for data reads instead of disk?
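    On the hit ratio: it follows from the two counters above as 1 - Key_reads/Key_read_requests, and 60,742,294 / 19,224,818,713 is roughly 0.3%, so the real hit ratio is closer to 99.7% - plausible, since only the hot fraction of the 50 GB of indexes needs to fit in 2 GB. As for the data pages, a rough way to see what the OS is caching (illustrative commands, and the my.cnf sizes below are guesses to adapt, not recommendations):

        free -m          # the 'cached' column is the OS page cache that MyISAM data reads hit
        iostat -x 5      # sustained high %util on the data disks suggests reads are missing that cache

        # my.cnf sketch for a mostly-MyISAM box with 32 GB RAM
        key_buffer_size = 8G           # index blocks only; leave the bulk of RAM free for the page cache
        innodb_buffer_pool_size = 4G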


  • I need advices: small memory footprint linux mail server with spam filtering

    - by petermolnar
    I have a VPS which was originally destined to be a webserver, but some minimal mail capabilities need to be deployed as well, including sending and receiving as a standalone server. The current setup is the following:

    - Postfix receives the mail; the users are in virtual tables, stored in MySQL
    - on connection all servers are tested with the policyd-weight service against some DNSBLs
    - all mail runs through SpamAssassin spamd with the help of the spamc client
    - the mail is then delivered with Dovecot 2's LDA (local delivery agent), virtual users as well

    As you saw... there's no virus scanner running, and that's for a reason: clamav eats all the memory possible and also, virus mails are all filtered out with this setup (I've tested the same with ClamAV enabled for 1.5 years; no virus mail ever even got to ClamAV). I don't use amavisd and I really don't want to. You only need that monster if you have plenty of memory and lots of simultaneous scanners. It's also a nightmare to fine-tune by hand. I run policyd-weight instead of policyd and native DNSBLs in postfix. I don't like to send someone away because a single service listed them. Important statement: everything works fine. I receive a very small amount of spam, nearly never get a false positive, and most of the bad mail is stopped by policyd-weight. The only "problem" is that I feel the services in total use a bit too much memory altogether. I've already cut the modules of SpamAssassin (see below), but I'd really like to hear some advice on how to cut the memory footprint as low as possible, mostly: which plugins does SpamAssassin really need, and which are more or less useless, with regard to my current postfix & policyd-weight setup? SpamAssassin rules are also compiled with sa-compile (sa-update runs once a week from cron; compile runs right after that). These are some of the current configurations that may matter; please tell me if you need anything more.

    postfix/master.cf (parts only)

    dovecot unix - n n - - pipe
      flags=DRhu user=vmail:vmail argv=/usr/bin/spamc -e /usr/lib/dovecot/deliver -d ${recipient} -f {sender}

    postfix/main.cf (parts only)

    smtpd_helo_required = yes
    smtpd_helo_restrictions = permit_mynetworks, reject_invalid_hostname, permit
    smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_invalid_hostname, reject_non_fqdn_hostname, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_pipelining, reject_unauth_destination, check_policy_service inet:127.0.0.1:12525, permit

    policyd-weight.conf (parts only)

    $REJECTMSG = "550 Mail appeared to be SPAM or forged. Ask your Mail/DNS-Administrator to correct HELO and DNS MX settings or to get removed from DNSBLs";
    $REJECTLEVEL = 4;
    $DEFER_STRING = 'IN_SPAMCOP= BOGUS_MX=';
    $DEFER_ACTION = '450';
    $DEFER_LEVEL = 5;
    $DNSERRMSG = '450 No DNS entries for your MTA, HELO and Domain.
Contact YOUR administrator'; # 1: ON, 0: OFF (default) # If ON request that ALL clients are only checked against RBLs $dnsbl_checks_only = 0; # 1: ON (default), 0: OFF # When set to ON it logs only RBLs which affect scoring (positive or negative) $LOG_BAD_RBL_ONLY = 1; ## DNSBL settings @dnsbl_score = ( # host, hit, miss, log name 'dnsbl.ahbl.org', 3, -1, 'dnsbl.ahbl.org', 'dnsbl.njabl.org', 3, -1, 'dnsbl.njabl.org', 'dnsbl.sorbs.net', 3, -1, 'dnsbl.sorbs.net', 'bl.spamcop.net', 3, -1, 'bl.spamcop.net', 'zen.spamhaus.org', 3, -1, 'zen.spamhaus.org', 'pbl.spamhaus.org', 3, -1, 'pbl.spamhaus.org', 'cbl.abuseat.org', 3, -1, 'cbl.abuseat.org', 'list.dsbl.org', 3, -1, 'list.dsbl.org', ); # If Client IP is listed in MORE DNSBLS than this var, it gets REJECTed immediately $MAXDNSBLHITS = 3; # alternatively, if the score of DNSBLs is ABOVE this level, reject immediately $MAXDNSBLSCORE = 9; $MAXDNSBLMSG = '550 Az levelezoszerveruk IP cime tul sok spamlistan talahato, kerjuk ellenorizze! / Your MTA is listed in too many DNSBLs; please check.'; ## RHSBL settings @rhsbl_score = ( 'multi.surbl.org', 4, 0, 'multi.surbl.org', 'rhsbl.ahbl.org', 4, 0, 'rhsbl.ahbl.org', 'dsn.rfc-ignorant.org', 4, 0, 'dsn.rfc-ignorant.org', # 'postmaster.rfc-ignorant.org', 0.1, 0, 'postmaster.rfc-ignorant.org', # 'abuse.rfc-ignorant.org', 0.1, 0, 'abuse.rfc-ignorant.org' ); # skip a RBL if this RBL had this many continuous errors $BL_ERROR_SKIP = 2; # skip a RBL for that many times $BL_SKIP_RELEASE = 10; ## cache stuff # must be a directory (add trailing slash) $LOCKPATH = '/var/run/policyd-weight/'; # socket path for the cache daemon. $SPATH = $LOCKPATH.'/polw.sock'; # how many seconds the cache may be idle before starting maintenance routines #NOTE: standard maintenance jobs happen regardless of this setting. $MAXIDLECACHE = 60; # after this number of requests do following maintenance jobs: checking for config changes $MAINTENANCE_LEVEL = 5; # negative (i.e. SPAM) result cache settings ################################## # set to 0 to disable caching for spam results. To this level the cache will be cleaned. $CACHESIZE = 2000; # at this number of entries cleanup takes place $CACHEMAXSIZE = 4000; $CACHEREJECTMSG = '550 temporarily blocked because of previous errors'; # after NTTL retries the cache entry is deleted $NTTL = 1; # client MUST NOT retry within this seconds in order to decrease TTL counter $NTIME = 30; # positve (i.,e. HAM) result cache settings ################################### # set to 0 to disable caching of HAM. To this number of entries the cache will be cleaned $POSCACHESIZE = 1000; # at this number of entries cleanup takes place $POSCACHEMAXSIZE = 2000; $POSCACHEMSG = 'using cached result'; #after PTTL requests the HAM entry must succeed one time the RBL checks again $PTTL = 60; # after $PTIME in HAM Cache the client must pass one time the RBL checks again. #Values must be nonfractal. Accepted time-units: s, m, h, d $PTIME = '3h'; # The client must pass this time the RBL checks in order to be listed as hard-HAM # After this time the client will pass immediately for PTTL within PTIME $TEMP_PTIME = '1d'; ## DNS settings # Retries for ONE DNS-Lookup $DNS_RETRIES = 1; # Retry-interval for ONE DNS-Lookup $DNS_RETRY_IVAL = 5; # max error count for unresponded queries in a complete policy query $MAXDNSERR = 3; $MAXDNSERRMSG = 'passed - too many local DNS-errors'; # persistent udp connection for DNS queries. #broken in Net::DNS version 0.51. 
Works with Net::DNS 0.53; DEFAULT: off $PUDP= 0; # Force the usage of Net::DNS for RBL lookups. # Normally policyd-weight tries to use a faster RBL lookup routine instead of Net::DNS $USE_NET_DNS = 0; # A list of space separated NS IPs # This overrides resolv.conf settings # Example: $NS = '1.2.3.4 1.2.3.5'; # DEFAULT: empty $NS = ''; # timeout for receiving from cache instance $IPC_TIMEOUT = 2; # If set to 1 policyd-weight closes connections to smtpd clients in order to avoid too many #established connections to one policyd-weight child $TRY_BALANCE = 0; # scores for checks, WARNING: they may manipulate eachother # or be factors for other scores. # HIT score, MISS Score @client_ip_eq_helo_score = (1.5, -1.25 ); @helo_score = (1.5, -2 ); @helo_score = (0, -2 ); @helo_from_mx_eq_ip_score= (1.5, -3.1 ); @helo_numeric_score= (2.5, 0 ); @from_match_regex_verified_helo= (1,-2 ); @from_match_regex_unverified_helo = (1.6, -1.5 ); @from_match_regex_failed_helo = (2.5, 0 ); @helo_seems_dialup = (1.5, 0 ); @failed_helo_seems_dialup= (2, 0 ); @helo_ip_in_client_subnet= (0,-1.2 ); @helo_ip_in_cl16_subnet = (0,-0.41 ); #@client_seems_dialup_score = (3.75, 0 ); @client_seems_dialup_score = (0, 0 ); @from_multiparted = (1.09, 0 ); @from_anon= (1.17, 0 ); @bogus_mx_score = (2.1, 0 ); @random_sender_score = (0.25, 0 ); @rhsbl_penalty_score = (3.1, 0 ); @enforce_dyndns_score = (3, 0 ); spamassassin/init.pre (I've put the .pre files together) loadplugin Mail::SpamAssassin::Plugin::Hashcash loadplugin Mail::SpamAssassin::Plugin::SPF loadplugin Mail::SpamAssassin::Plugin::Pyzor loadplugin Mail::SpamAssassin::Plugin::Razor2 loadplugin Mail::SpamAssassin::Plugin::AutoLearnThreshold loadplugin Mail::SpamAssassin::Plugin::MIMEHeader loadplugin Mail::SpamAssassin::Plugin::ReplaceTags loadplugin Mail::SpamAssassin::Plugin::Check loadplugin Mail::SpamAssassin::Plugin::HTTPSMismatch loadplugin Mail::SpamAssassin::Plugin::URIDetail loadplugin Mail::SpamAssassin::Plugin::Bayes loadplugin Mail::SpamAssassin::Plugin::BodyEval loadplugin Mail::SpamAssassin::Plugin::DNSEval loadplugin Mail::SpamAssassin::Plugin::HTMLEval loadplugin Mail::SpamAssassin::Plugin::HeaderEval loadplugin Mail::SpamAssassin::Plugin::MIMEEval loadplugin Mail::SpamAssassin::Plugin::RelayEval loadplugin Mail::SpamAssassin::Plugin::URIEval loadplugin Mail::SpamAssassin::Plugin::WLBLEval loadplugin Mail::SpamAssassin::Plugin::VBounce loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody spamassassin/local.cf (parts) use_bayes 1 bayes_auto_learn 1 bayes_store_module Mail::SpamAssassin::BayesStore::MySQL bayes_sql_dsn DBI:mysql:db:127.0.0.1:3306 bayes_sql_username user bayes_sql_password pass bayes_ignore_header X-Bogosity bayes_ignore_header X-Spam-Flag bayes_ignore_header X-Spam-Status ### User settings user_scores_dsn DBI:mysql:db:127.0.0.1:3306 user_scores_sql_password user user_scores_sql_username pass user_scores_sql_custom_query SELECT preference, value FROM _TABLE_ WHERE username = _USERNAME_ OR username = '$GLOBAL' OR username = CONCAT('%',_DOMAIN_) ORDER BY username ASC # for better speed score DNS_FROM_AHBL_RHSBL 0 score __RFC_IGNORANT_ENVFROM 0 score DNS_FROM_RFC_DSN 0 score DNS_FROM_RFC_BOGUSMX 0 score __DNS_FROM_RFC_POST 0 score __DNS_FROM_RFC_ABUSE 0 score __DNS_FROM_RFC_WHOIS 0 UPDATE 01 As adaptr advised I remove policyd-weight and configured postfix postscreen, this resulted approximately -15-20 MB from RAM usage and a lot faster work. I'm not sure it's working at full capacity but it seems promising.
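    Since the UPDATE mentions moving to postscreen: a minimal main.cf sketch of the kind of postscreen setup that typically replaces policyd-weight's DNSBL scoring (requires Postfix 2.8+; the site list and weights here are examples, not a recommendation):

        postscreen_greet_action = enforce
        postscreen_dnsbl_sites = zen.spamhaus.org*2, bl.spamcop.net*1
        postscreen_dnsbl_threshold = 2
        postscreen_dnsbl_action = enforce

    postscreen runs as a single process and keeps its verdicts in a shared cache map, which is plausibly where most of the observed 15-20 MB saving over a per-connection Perl policy daemon comes from.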


  • IIS7 Session ID rotating with Classic ASP

    - by ManiacZX
    I am trying to migrate a Classic ASP app onto a Windows 2008 R2 server. The application features run fine, but I am having issues with session state. The application keeps the logged-in user information in session, and I am constantly getting knocked out as if the session had expired. While debugging I have discovered the sessions are not expiring; instead, 2-3 different Session IDs are in use by one browser. I am outputting Response.Write(Session.SessionID) on various pages in the application, and I can sit there hitting refresh over and over and watch the number change randomly between these 2-3 Session IDs. The sessions are still valid, because when I refresh and get the Session ID that I logged in under, the page is displayed (because the security check was successful), and when I get one of the other Session IDs I get the "you aren't logged in, you need to log in" message. If I close and re-open the browser, same story, just with a new set of IDs. This happens with IE8, Firefox and Chrome from multiple computers. Things I've tried: AppPool set to No Managed Code and Classic; Output Caching set .asp to never cache; ASP Session Properties - enabled and disabled ASP session state and confirmed it affected the page (error trying to read Session.SessionID when disabled). Things I've tried just in case, though they shouldn't have anything to do with ASP session: disabled compression; changed ASP.NET Session State properties (InProc, StateServer, SQLServer, Cookies, URI, etc).
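    One cause that produces exactly this symptom, offered as an educated guess: Classic ASP keeps session state in the worker process, so an application pool running as a web garden (maxProcesses > 1) hands each request to whichever w3wp.exe picks it up, and each process has its own set of Session IDs. Checking and pinning the pool to a single worker would rule this out (the pool name here is hypothetical):

        %windir%\system32\inetsrv\appcmd.exe list apppool "ClassicAspPool" /text:processModel.maxProcesses
        %windir%\system32\inetsrv\appcmd.exe set apppool "ClassicAspPool" /processModel.maxProcesses:1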


  • Increasing SQL Server / Sage performance with SSD? (Dell PE T410)

    - by Anthony
    I have a client wanting better performance of their Sage (Accpac & CRM) server (v5.5, soon to be v7). It's running on 1 of 2 Hyper-V VMs (Svr2008) on a Dell PE T410 server with 24GB of RAM (1333MHz) & dual quad-core, and both VMs (only their C: drives) are on a single RAID5 array. All clients connect via 1Gb ethernet. The 2nd VM is SBS2008 with 9GB RAM (& all SBS dbs & company data are on a separate RAID5 array), & 3GB RAM for the Svr2008 hypervisor. I've given the Sage/SQL Server VM all the RAM I can (12GB) & SQL Server RAM caching (~8GB, never exceeds ~7.5GB, eg. entire db can now be cached in RAM) and that's helped significantly. Upgrading the Hypervisor to Svr2012 is an obvious step, but probably not a dramatic improvement? What about an SSD for this Sage/SQL Server VM (VM = 100GB, <10GB for the actual live DB) ? Can SSDs be put into the SAS hot-swap bays? Or will I have to use the mobo SATA(3Gbps?) ports, or PCI-E SSD card? Should SSDs be RAIDed for this situation? Or is SSD's higher reliability offsetting the need for RAID1/5/10? (I have nightly full disk backups) New territory for me, would appreciate some feedback. Thanks, Anthony.


  • How can I cache a Subversion password on a server, without storing it in unencrypted form?

    - by Zilk
    My Subversion server only provides access via HTTPS; support for svn+ssh has been dropped because we wanted to avoid creating system users on that machine just for SVN access. Now I'm trying to provide a way for users to cache their passwords for a while, without leaving them stored on the filesystem in unencrypted form. This is no problem for Gnome or KDE users, because they can use gnome-keyring and kwallet, respectively. IIRC, TortoiseSVN has a similar caching mechanism, too. But what about users on a non-GUI system? Some context: in this case, we have a development/testing server where one project has been checked out into the Apache htdocs directory. Development for this project is almost complete, and only minor text/layout changes are performed directly on this server. Nevertheless, the changes should be checked into the repository. There's no kwallet and no gnome-keyring on this system, and the ssh-agent can't help because the repository is accessed via https instead of svn+ssh. As far as I know, that leaves them the choice of entering the password every time they talk to the SVN server, or storing it in an insecure way. Is there any way to get something like what gnome-keyring and kwallet provide in a non-GUI environment?
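    For completeness, one hedged option if a client upgrade is possible: Subversion 1.8+ can delegate password caching to gpg-agent, which works on a headless box (the agent holds the decrypted secret in memory, and nothing unencrypted touches disk):

        # ~/.subversion/config
        [auth]
        password-stores = gpg-agent

    On older clients the usual compromise is setting store-plaintext-passwords = no and entering the password per session, which matches the behaviour you described.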

    Read the article

  • Atmospheric Scattering

    - by Lawrence Kok
    I'm trying to implement atmospheric scattering based on Sean O'Neil's algorithm that was published in GPU Gems 2, but I'm having trouble getting the shader to work. My latest attempts resulted in: http://img253.imageshack.us/g/scattering01.png/ I've downloaded O'Neil's sample code from http://http.download.nvidia.com/developer/GPU_Gems_2/CD/Index.html and made minor adjustments to the 'SkyFromAtmosphere' shader so it would run in AMD RenderMonkey. As the images show, a form of banding occurs with a bluish tone, but it is only applied to one half of the sphere; the other half is completely black. The banding also appears at the zenith instead of the horizon, and somehow I ended up with a pac-man shape. I would appreciate it if somebody could show me what I'm doing wrong.

    Vertex Shader:

        uniform mat4 matView;
        uniform vec4 view_position;
        uniform vec3 v3LightPos;

        const int nSamples = 3;
        const float fSamples = 3.0;

        const vec3 Wavelength = vec3(0.650, 0.570, 0.475);
        const vec3 v3InvWavelength = 1.0f / vec3(
            Wavelength.x * Wavelength.x * Wavelength.x * Wavelength.x,
            Wavelength.y * Wavelength.y * Wavelength.y * Wavelength.y,
            Wavelength.z * Wavelength.z * Wavelength.z * Wavelength.z);

        const float fInnerRadius = 10;
        const float fOuterRadius = fInnerRadius * 1.025;
        const float fInnerRadius2 = fInnerRadius * fInnerRadius;
        const float fOuterRadius2 = fOuterRadius * fOuterRadius;

        const float fScale = 1.0 / (fOuterRadius - fInnerRadius);
        const float fScaleDepth = 0.25;
        const float fScaleOverScaleDepth = fScale / fScaleDepth;

        const vec3 v3CameraPos = vec3(0.0, fInnerRadius * 1.015, 0.0);
        const float fCameraHeight = length(v3CameraPos);
        const float fCameraHeight2 = fCameraHeight * fCameraHeight;

        const float fm_ESun = 150.0;
        const float fm_Kr = 0.0025;
        const float fm_Km = 0.0010;
        const float fKrESun = fm_Kr * fm_ESun;
        const float fKmESun = fm_Km * fm_ESun;
        const float fKr4PI = fm_Kr * 4 * 3.141592653;
        const float fKm4PI = fm_Km * 4 * 3.141592653;

        varying vec3 v3Direction;
        varying vec4 c0, c1;

        float scale(float fCos)
        {
            float x = 1.0 - fCos;
            return fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
        }

        void main(void)
        {
            // Get the ray from the camera to the vertex, and its length (which is the
            // far point of the ray passing through the atmosphere)
            vec3 v3FrontColor = vec3(0.0, 0.0, 0.0);
            vec3 v3Pos = normalize(gl_Vertex.xyz) * fOuterRadius;
            vec3 v3Ray = v3CameraPos - v3Pos;
            float fFar = length(v3Ray);
            v3Ray = normalize(v3Ray);

            // Calculate the ray's starting position, then calculate its scattering offset
            vec3 v3Start = v3CameraPos;
            float fHeight = length(v3Start);
            float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fCameraHeight));
            float fStartAngle = dot(v3Ray, v3Start) / fHeight;
            float fStartOffset = fDepth * scale(fStartAngle);

            // Initialize the scattering loop variables
            float fSampleLength = fFar / fSamples;
            float fScaledLength = fSampleLength * fScale;
            vec3 v3SampleRay = v3Ray * fSampleLength;
            vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5;

            // Now loop through the sample rays
            for(int i = 0; i < nSamples; i++)
            {
                float fHeight = length(v3SamplePoint);
                float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fHeight));
                float fLightAngle = dot(normalize(v3LightPos), v3SamplePoint) / fHeight;
                float fCameraAngle = dot(normalize(v3Ray), v3SamplePoint) / fHeight;
                float fScatter = (-fStartOffset + fDepth*( scale(fLightAngle) - scale(fCameraAngle)))/* 0.25f*/;
                vec3 v3Attenuate = exp(-fScatter * (v3InvWavelength * fKr4PI + fKm4PI));
                v3FrontColor += v3Attenuate * (fDepth * fScaledLength);
                v3SamplePoint += v3SampleRay;
            }

            // Finally, scale the Mie and Rayleigh colors and set up the varying
            // variables for the pixel shader
            vec4 newPos = vec4((gl_Vertex.xyz + view_position.xyz), 1.0);
            gl_Position = gl_ModelViewProjectionMatrix * vec4(newPos.xyz, 1.0);
            gl_Position.z = gl_Position.w * 0.99999;
            c1 = vec4(v3FrontColor * fKmESun, 1.0);
            c0 = vec4(v3FrontColor * (v3InvWavelength * fKrESun), 1.0);
            v3Direction = v3CameraPos - v3Pos;
        }

    Fragment Shader:

        uniform vec3 v3LightPos;

        varying vec3 v3Direction;
        varying vec4 c0;
        varying vec4 c1;

        const float g = -0.90f;
        const float g2 = g * g;
        const float Exposure = 2;

        void main(void)
        {
            float fCos = dot(normalize(v3LightPos), v3Direction) / length(v3Direction);
            float fMiePhase = 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos*fCos) / pow(1.0 + g2 - 2.0*g*fCos, 1.5);
            gl_FragColor = c0 + fMiePhase * c1;
            gl_FragColor.a = 1.0;
        }

    Read the article

  • Setting up Squid -> VPN connection

    - by Nedlinin
    I recently purchased a VPS and want to use it as a VPN server. However, it has bandwidth limitations. So, since I already have a local Squid proxy caching things for me, I figured I could have users connect to the proxy and have the proxy connect to the VPN. Then when someone hits the web, Squid will serve it from cache if available and, if not, it will use the VPN to download it. My issue is, I have no idea how to set this up. Essentially I want machine -> Squid -> VPN. My VPN is running on Ubuntu Server with pptpd. Squid is running on a local Arch Linux box. Squid and the VPN are both working perfectly independently. Any help on how to have Squid push traffic through the VPN would be greatly appreciated! Also: I don't actually want to use the VPN for all traffic, otherwise I'd just connect my router to the VPN and be happy. I only want to use it for web traffic from specific machines on the network. (One possible approach is sketched below.)
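
    One way to wire this up (a sketch - it assumes the pptp client runs on the Squid box itself, the tunnel comes up as ppp0, and its local address is 10.0.8.2; all of those are placeholders): let Squid source its outbound fetches from the tunnel address, and use policy routing so only traffic from that address takes the tunnel:

        # /etc/squid/squid.conf - cache misses leave via the tunnel address
        tcp_outgoing_address 10.0.8.2

        # on the Arch box: traffic sourced from the tunnel IP routes via the tunnel
        ip rule add from 10.0.8.2 table 100
        ip route add default dev ppp0 table 100

    The rest of the machine keeps its normal default route, so only Squid's cache misses cross the VPN, which also keeps the VPS bandwidth usage down.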

    Read the article

  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr K.
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users - I am aiming at 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well. The application is not very typical, since it does a lot of inserts/updates on the database, so caching techniques don't help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load. We currently use two HP DL380s with two 4-core processors, which can handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster from them, or is it better to go with more high-end hardware? I am particularly curious about:
    - how many and how powerful servers are needed (number of processors/cores, size of RAM)
    - what network equipment should be used (what kind of switches, network cards)
    - any other hardware, like particular disc storage solutions, etc., that is needed
    Another question is how to put everything together, i.e. what the optimal architecture is. Clustering with MySQL is rather hard (people complain about MySQL Cluster, even here on Stack Overflow).
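
    For a rough sense of scale, a back-of-envelope memory check using the question's own numbers (the 10:1 think-time ratio is an assumption, not a measurement):

        5000 users x 40 MB per Apache process = ~200 GB of RAM if every user held a process
        assume ~10:1 think time to active requests -> ~500 truly concurrent requests
        500 processes x 40 MB = ~20 GB of RAM for Apache across the farm

    Either way, the per-process footprint rather than raw CPU is likely the first wall, which is why an opcode cache and a slimmer front end usually move these numbers before any hardware purchase does.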

    Read the article

  • apache2 defaultsite redirect but not virtual host

    - by MMM
    I'm trying to set up a new server with several virtual hosts, but also such that if the requested FQDN doesn't match a virtual host, the request is redirected to http://example.com/log.php?url=fqdn I have got the default host redirecting as desired; however, the virtual host that I have defined doesn't work. I'm testing by running curl -I http://hostname.example.com:8080/ from a different host to read the HTTP headers and check for the redirect header directly, rather than following it with a browser (to avoid any caching issues). I have defined a virtual host as the FQDN of the server, but when I use curl to request that virtual host, I get redirected. If I request the server by any other name which doesn't have a virtual host defined, I also get redirected. The Apache version is 2.2.16 on Ubuntu. The config (concatenated together in order from a couple of different files) is as follows:

        Listen 8080
        NameVirtualHost *

        <VirtualHost _default_>
            ServerAdmin [email protected]
            RewriteEngine On
            RewriteRule ^(.*)$ http://example.com/log.php?url=%{HTTP_HOST}$1 [R=302,L]
        </VirtualHost>

        <VirtualHost *>
            <Directory "/var/www">
                allow from all
                Options Indexes
            </Directory>
            DocumentRoot /var/www
            ServerName hostname.example.com
        </VirtualHost>

    I've also tried ServerName values of hostname.example.com:* and hostname.example.com:8080. In case I wasn't clear enough:
    - anything.anything.any/something requested from my server should redirect to example.com/log.php?url=anything.anything.any/something
    - foo.example.com (not defined as a VirtualHost) requested from my server should redirect to example.com/log.php?url=foo.example.com
    - hostname.example.com (defined as a VirtualHost) requested from my server should return an HTML document
    - anothername.example.com (also defined as a VirtualHost) requested from my server should return an HTML document
    Update: it turns out that because the server's own FQDN is hostname.example.com, it gets redirected to the default VirtualHost even when there is a named VirtualHost for it. Other FQDNs that differ from the server's own FQDN work as I intended.
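
    For comparison, a conventional Apache 2.2 name-based layout (a sketch, not a tested drop-in) matches the NameVirtualHost argument and every VirtualHost address to the listening port, and relies on the first-listed vhost acting as the catch-all; _default_ vhosts only match addresses that no name-based vhost claims, so mixing them with NameVirtualHost * invites surprises:

        Listen 8080
        NameVirtualHost *:8080

        # first vhost listed = catch-all for any unmatched Host header
        <VirtualHost *:8080>
            RewriteEngine On
            RewriteRule ^(.*)$ http://example.com/log.php?url=%{HTTP_HOST}$1 [R=302,L]
        </VirtualHost>

        <VirtualHost *:8080>
            ServerName hostname.example.com
            DocumentRoot /var/www
        </VirtualHost>

    With pure Host-header matching like this, the server's own FQDN gets no special treatment, which sidesteps the behaviour described in the update above.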

    Read the article

  • How can I download a file directly to a web server?

    - by matt
    I'd like to download files directly to a hosted server, whether it's one I set up myself or a hosted service like Dropbox. For example, when I download a podcast, instead of downloading it to my computer and then uploading it to the server, how can I have it download directly to the cloud? My interest here is reducing the traffic I'm using over a metered data plan on my laptop, so I don't want my computer acting as a physical intermediary caching the file. Ideally, there would be some way for me to take a download link and tell it to go directly to my server. How can I accomplish this? I realize this question potentially involves a "webapp" and potentially involves "server administration", and since my goal is to cut my computer out of the loop, I can see people saying this is off-topic and should be on another site. My issue is this: I don't know whether the solution will be a webapp or a server, but I do know that regardless I'm going to be using a computer to get it done, and I am replacing a function that's currently done on my computer, so I figured I'd ask it here. If I was wrong and this definitely belongs at Web Apps, feel free to let me know or just migrate it.
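
    If the server allows shell access, the simplest route (a sketch - host, user and paths here are placeholders) is to run the download on the server itself, so the file never crosses the metered connection:

        ssh user@myserver 'wget -P /var/www/files "http://example.com/podcast/episode42.mp3"'

        # or, if the server has curl instead of wget:
        ssh user@myserver 'curl -o /var/www/files/episode42.mp3 "http://example.com/podcast/episode42.mp3"'

    For a hosted service like Dropbox with no shell access, the equivalent is whatever remote-fetch feature the service exposes, where one exists.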

    Read the article

  • Strange performance differences in read/write from/to USB flash drive

    - by Mario De Schaepmeester
    When copying files from my 8GB USB 2.0 flash drive to a traditional hard drive under Windows 7, the average speed is between 25 and 30 MB/s. When doing the reverse - copying to the USB drive - the speed averages 5 MB/s. I have tested this with about 4.5GB of files, a mixture of smaller and larger ones. The observations were the same with both FAT32 and exFAT file systems on the USB drive, and NTFS on the internal hard disk. I don't think I can be mistaken in saying that flash memory has much higher performance than a spinning hard drive, in terms of both reading and writing, and that for both memory types reading should be faster than writing. So I wonder: how can copying files from fast-read memory to supposedly faster-write memory actually be slower than copying from fast-read memory to slow-write memory? I think the files are staged in RAM before being copied, and there's caching as well, but I don't see how even that could tip the balance - if anything it should favour writing to the USB drive, since the internal SATA HDD is "closer" to the SATA system than the USB port and can feed data to it quickly. Perhaps my way of thinking is all wrong, or it just depends on the manufacturer of the USB pen. But I am curious.
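
    One way to take the copy path out of the equation is to benchmark the stick directly; Windows 7's built-in assessment tool can target a single drive (a sketch, assuming the stick is mounted as F: - run from an elevated prompt):

        winsat disk -seq -write -drive f
        winsat disk -ran -write -drive f
        winsat disk -seq -read -drive f

    Inexpensive flash drives commonly post sequential writes far below their reads, and random writes far below that again, so a 25-30 MB/s read vs 5 MB/s write gap is plausible for the hardware itself rather than for Windows, the file system, or caching.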

    Read the article

  • Is 30 calls / second a lot for one IIS server?

    - by Lieven Cardoen
    We have a RIA application that 300 clients use concurrently in an intranet environment. Together they make 30 calls/second to IIS (ASP.NET) - actually it's 60, but the calls are load-balanced over two IIS servers. Half of the calls fetch an asset (a caching profile is used, so most of the time the cache is hit); the other half save data to SQL Server. Retrieving an asset is done with an .aspx page. Saving the data happens via WebORB, ASP.NET and SQL Server, so some processing is needed by WebORB (AMF decoding, GZIP, ...). We also use Spring.NET, and some of the container objects have request scope (not a lot). The IIS servers are virtual machines with 4 CPUs and 2 GB RAM each, based on Windows 2008 x64 SP2 Enterprise Edition. SQL Server 2008 is used. Apparently the CPU of both IIS servers sits constantly around 60-70%. Now, my question: is a load of 60-70% acceptable, and how could we possibly bring it down (maybe to the point of using only one IIS server)? Also, is 2 GB RAM enough? Assets can be up to 20 MB, but on average they are about 30 kB (the 60-70% load is achieved with assets around 30 kB). The data that gets saved with WebORB is very small (2 kB), just one object.
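
    As a sanity check on those figures, simple arithmetic on the question's own numbers (no assumptions beyond them):

        per server: 60 calls/s / 2 servers = 30 calls/s
        4 cores x ~65% utilisation = ~2.6 core-seconds of CPU consumed per second
        2.6 core-seconds / 30 calls = ~87 ms of CPU per call

    Roughly 87 ms of CPU per call is a lot for a 30 kB cached asset or a 2 kB save, so the cost looks like per-call processing (AMF decoding, GZIP, request-scoped object creation) rather than sheer volume; profiling one WebORB save would show where it goes. It also suggests a single server at ~60 calls/s would run near saturation, so consolidating without first cutting per-call cost looks risky.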

    Read the article

  • suggestions for firewall/router project using *BSD or Linux

    - by Adeodatus
    Hi all, I have a project in mind and I'd love to hear some ideas on open-source solutions using COTS hardware. I have a few 24- and/or 48-port managed layer-2 switches with customers potentially on each port (though it's usually about 20-30). Right now each switch carries a bridged network, and we backhaul the traffic to a centralized DHCP server at our core. I need to move them to a NAT solution and, while doing this, I'd like to protect the customers on each port from the customer traffic on the other ports. I also need to be able to port-forward from the public side of the firewall/NAT box to specific hardware on the inside (easy enough, I know). My first thought is to build an appliance-like box (the fewer moving parts the better) that can do filtering and NAT, with an RFC 1918 address range handed out by a DHCP server on the appliance. A caching DNS server on the appliance would be a plus, since we backhaul everything to the core. I'd like to run FreeBSD, but I'm open. Now, to limit the visible broadcast traffic, I was thinking of making each switch port a different VLAN and having the switch trunk them to the private NIC on the FreeBSD appliance. I'd probably need to do some magic on the FreeBSD NIC to get this working, but it should be possible (see the sketch below). We have the parts to build these systems. So, does this make sense? Are there other solutions where we don't have to spend money but can use our parts to create something? Are there any good distros that could already do this (m0n0wall)? I may or may not admin this solution, so a secure web configuration and management tool would be a plus in the other admins' minds. Thoughts?
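
    On the FreeBSD side, the per-port-VLAN idea maps to a single tagged trunk into the appliance (a sketch - the inside NIC em1, VLAN 101, and all addressing are made up):

        # one VLAN subinterface per switch port / customer
        ifconfig vlan101 create vlan 101 vlandev em1
        ifconfig vlan101 inet 192.168.101.1/24

        # /etc/pf.conf - NAT every customer subnet out the public NIC em0
        # nat on em0 from 192.168.0.0/16 to any -> (em0)

    Isolation then falls out of the topology: hosts in different VLANs can only reach each other through the appliance, where pf can simply block inter-VLAN traffic; a DHCP/DNS daemon such as dnsmasq on the same box would cover the per-VLAN address pools and the caching DNS requirement.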

    Read the article
