Search Results

Search found 22633 results on 906 pages for 'service accounts'.


  • Windows Server 2008 R2 SP1: missing drivers will not install?

    - by bleepzter
    I am building my dev machine with WS2008R2SP1. I consider Windows Server the best development environment for C#, since I tend to focus on server-centric (integration) applications; the desktop flavor of Windows (7) doesn't play nicely with things like Commerce Server or BizTalk, so I like to stick to the environment I develop for. Previously I developed inside VMs, but I found that to be very inefficient and hard on the laptops (I've gone through two of them in 6 months). The problem is that I have multiple devices that Windows refuses to recognize. My machine is a Dell Precision M4500:

        Intel Core i7-Q740, 1TB HDD, 8GB RAM
        Dell re-branded Broadcom DW1501 802.11n Half-Mini Card
        Dell re-branded Broadcom DW375 Bluetooth Module
        Intel 82577LM Gigabit Network Connection
        NVidia Quadro FX1800 Graphics

    The devices in question are the Dell re-branded Broadcom network and Bluetooth adapters:

        Broadcom USH: USB\VID_0A5C&PID_5800&REV_0101&MI_00, USB\VID_0A5C&PID_5800&MI_00
        DW375 Bluetooth Module: USB\VID_413C&PID_8187&REV_0517, USB\VID_413C&PID_8187

    When I run the Broadcom installers I get "Operating System not supported", which I think is a big oversight on Broadcom's part - why check the OS version string at all? Moreover, if I try to force the driver manually in Windows, I get an error:

        Driver Management concluded the process to install driver FileRepository\btwampsecfl.inf_amd64_neutral_d8fc2b85d035ed47\btwampsecfl.inf for Device Instance ID USB\VID_0A5C&PID_5800&MI_00\7&66DE6C9&0&0000 with the following status: 0xe0000217.
        - or -
        Driver Management concluded the process to install driver FileRepository\btwampfl.inf_amd64_neutral_d4c4acf036c61299\btwampfl.inf for Device Instance ID USB\VID_413C&PID_8187\90004EEEF5A6 with the following status: 0xe0000217.

    I googled the 0xe0000217 error code and it means "Bad service install section in the driver inf file". Any ideas on how to fix this? I also tried the post by BetaIQ on the MSDN forum, but unfortunately the links to the driver package included in the post were dead. PS: On a side note, I also do mobile development for Android, iOS, Windows Phone, and BlackBerry, so having Bluetooth is quite useful with the mobile devices.

    Read the article

  • Hosting a JavaScript API file for third-party sites the way ShareThis, UserVoice, and Analytics do it

    - by Dayson
    I'm preparing to launch a service soon which will provide third-party websites with a widget. The widget requires my JavaScript file in the website's code, exactly the same way services like Analytics, UserVoice, ShareThis, GetClicky, etc. provide you with a JavaScript snippet to add to your page. My JavaScript file is therefore going to be hotlinked by tons of websites, which possibly receive a lot of requests too. I need advice/opinions on the following aspects:

        1. What's the right location for hosting this file? Should I use a sub-domain for it? I was thinking of something like http://api.myservice.com/js/foo.js . Remember, once websites start embedding this file, its location CANNOT change under any circumstances.
        2. Right now we can afford just one dedicated server, so I have minified the file, enabled gzip, and plan to use good cache-control headers through Apache. In the near future, when the requests pick up, I will put an HTTP proxy like Varnish in front. Is this a good plan for the near future?
        3. Should I be considering a CDN later (since we can't afford one now)? If so, how do I make sure we're prepared to migrate to it without breaking the service? What are the pros/cons of moving just this one file to a CDN? Also, since it's just one JavaScript file (50 KB), is there an affordable CDN we could consider from the beginning?
        4. Any other words of advice? Anything I shouldn't overlook at this stage that I would regret later (both in terms of the server and JavaScript/AJAX limitations)?

    Thanks in advance.
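
    A minimal Apache sketch of the caching and compression just described (assuming mod_headers and mod_deflate are enabled; the api.myservice.com name and /js/foo.js path are taken from the question, and the one-week max-age is only an example value):

        <VirtualHost *:80>
            ServerName api.myservice.com
            DocumentRoot /var/www/api

            # Compress the JavaScript payload on the wire
            AddOutputFilterByType DEFLATE application/javascript text/javascript

            # Cacheable, but revalidated often enough that a widget update still propagates
            <Files "foo.js">
                Header set Cache-Control "public, max-age=604800"
            </Files>
        </VirtualHost>

    A far-future max-age only makes sense if the file name is versioned; with a fixed URL, a shorter lifetime (or ETag-based revalidation) keeps embedded widgets from serving stale code for weeks.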

    Read the article

  • Apache, Tomcat 5 and problem with HTTP basic auth

    - by Juha Syrjälä
    I have set up Tomcat with a webapp that uses HTTP basic auth in some of its URLs. There is an Apache server in front of the Tomcat. I have set up Apache as a proxy like this (all traffic should go directly to Tomcat):

        /etc/httpd/conf.d/proxy_ajp.conf:
        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
        ProxyPass / ajp://localhost:8009/
        ProxyPassReverse / ajp://localhost:8009/

    There is a webapp installed at the root of Tomcat (ROOT.war), so I should be able to use http://localhost/ to access my webapp. But it is not working with HTTP basic auth: everything works until I try to access a URL that is protected by basic auth. URLs without authentication work just fine. When I access a protected URL via Apache I get an error message from Apache; if I access the same URL directly on Tomcat, everything works fine. This appears in the Apache error log:

        [Wed Sep 01 21:34:01 2010] [error] proxy: dialog to [::1]:8009 (localhost) failed

    The access log looks like this:

        ::1 - - [01/Sep/2010:21:34:01 +0300] "GET /protected_path/ HTTP/1.0" 503 360 "-" "w3m/0.5.2"

    I am using Fedora release 13 (Goddard), httpd-2.2.16-1.fc13.x86_64, and tomcat5-5.5.27-7.4.fc12.noarch. The basic auth is implemented in the webapp (not in Apache or Tomcat). The webapp is actually implemented in Scala/Lift, but that shouldn't matter; the auth works if I access Tomcat directly. This is the error page I get from Apache - it is curious that the title is Unauthorized and not Internal Error:

        Unauthorized
        The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
        Apache/2.2.16 (Fedora) Server at my.server.name.com Port 80

    It could be that Apache sees something other than a 200 OK response, treats it as an error, when it should actually pass the received 401 Unauthorized response straight to the browser. If this is the problem, how do I fix it?
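
    One detail worth noting in the error log above: Apache is dialling [::1]:8009, i.e. IPv6 localhost, while Tomcat's AJP connector may only be listening on IPv4. A quick sketch that rules out an address-family mismatch by pinning the backend to 127.0.0.1 (port and paths taken from the question; this is a diagnostic step, not a guaranteed fix):

        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
        ProxyPass        / ajp://127.0.0.1:8009/
        ProxyPassReverse / ajp://127.0.0.1:8009/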

    Read the article

  • Citrix Access Gateway with Citrix Receiver

    - by vm370
    I'm currently using a Citrix Access Gateway with firmware 5.0.4 to provide SSL-VPN access to an isolated environment. The connection is made with the Citrix Access Gateway Client, which the device delivers by default. However, we have run into several problems: it is somehow not possible to remove it from autostart (it is not registered as a service or in the usual autostart locations?!), and it breaks the Cisco VPN Client, which is used throughout the company and unfortunately cannot be replaced. The Cisco client only works again after cleaning every remnant of the CAG client out of the registry, which takes a lot of effort. Because of that, I'd like to know whether there is an alternative to this client, since it is a real pain... Unfortunately I haven't yet found a way to use the Receiver with the CAG, but if you have any resources on how to build this workaround, I'd be very happy. Thanks a lot in advance. UPDATE: If there are other alternatives I'd be even happier, since using the Receiver would also raise an issue with the ICA Client, which is also used in our environments; from my experience, the Receiver and the ICA Client are no good friends either...

    Read the article

  • How to make the devices behind a DD-WRT router (configured as a repeater) accessible on the LAN? (i.e. integrate DHCP for both routers)

    - by Annonomus Penguin
    I have a D-Link DIR-600-A1 router running DD-WRT (using the 601's firmware: except for the model number, they are near identical). It has an Atheros chip, so there is no "repeater" option. You can work around this by setting the main radio as a client of the main router and adding a virtual radio configured as an AP; you then set up the credentials for connecting to the main router and for the devices that connect to the repeater. I have a few devices on my network:

        Ethernet computers
        Server with Samba running
        WiFi devices connected to the main router

    I then wanted to add a repeater, which has a couple of other things on it:

        WiFi computer
        Other WiFi devices

    I wanted to connect my WiFi computer to the share on my server via Samba. However, for some reason the repeater treats the main router as WAN, not as another device. I've tried disabling the SPI firewall, but that doesn't help. Pinging my WiFi computer from my server fails; however, I can ping my server from my WiFi computer. AFAIK they are on the same subnet, just using different IPs: the main router uses 192.168.0.x and the repeater uses 192.168.1.x (starting at 100 for some reason). It seems as if I need to configure my routers to work together for DHCP. I noticed there was a "DHCP forwarder" option, but I have no idea what that would do. A quick note: for some reason (that's beyond me) my ISP disabled the capability to bridge a WiFi-to-Ethernet connection on the router they provide (something about PPPoE or similar). The service rep I talked to when I was having issues after changing ISPs confirmed that, but couldn't explain exactly what they were "blocking". How can I get DD-WRT to stop treating the client connection as WAN, and get the router to recognize the devices connected to the repeater?

    Read the article

  • Secure iptables config for Samba

    - by Eric
    I'm trying to set up an iptables config such that outbound connections from my CentOS 6.2 server are allowed ONLY if they are in state ESTABLISHED. Currently the following setup works great for sshd, but all the Samba rules get completely ignored for a reason I cannot figure out. The Bash script that sets up ALL of the rules:

        # Remove all existing rules
        iptables -F

        # Set default chain policies
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT DROP

        # Allow incoming SSH
        iptables -A INPUT -i eth0 -p tcp --dport 22222 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22222 -m state --state ESTABLISHED -j ACCEPT

        # Allow incoming Samba
        iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p udp --dport 137:138 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p udp --sport 137:138 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p tcp --dport 139 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p tcp --sport 139 -m state --state ESTABLISHED -j ACCEPT

        # Enable these rules
        service iptables restart

    The iptables rule list after running the above script:

        [root@repoman ~]# iptables -L
        Chain INPUT (policy DROP)
        target     prot opt source       destination
        ACCEPT     tcp  --  anywhere     anywhere      tcp dpt:22222 state NEW,ESTABLISHED

        Chain FORWARD (policy DROP)
        target     prot opt source       destination

        Chain OUTPUT (policy DROP)
        target     prot opt source       destination
        ACCEPT     tcp  --  anywhere     anywhere      tcp spt:22222 state ESTABLISHED

    Ultimately, I'm trying to restrict Samba the same way I have done for sshd. In addition, I'm trying to restrict connections to the IP address range 10.1.1.12 - 10.1.1.19. Can you offer some pointers, or possibly even a full-blown solution? I've read man iptables quite extensively, so I'm not sure why the Samba rules are being thrown out. Additionally, removing the -s 10.1.1.0/24 flags doesn't change the fact that the rules get ignored.
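
    One thing worth checking: on CentOS, "service iptables restart" at the end of the script reloads the ruleset saved in /etc/sysconfig/iptables, which can silently replace the rules the script just added (and would explain why only the previously saved SSH rules show up in iptables -L). A sketch that saves the running rules instead, and narrows Samba to the 10.1.1.12-10.1.1.19 range with the iprange match (interface, ports and addresses taken from the question):

        # Samba, allowed range only (NetBIOS name/datagram over UDP, session over TCP)
        iptables -A INPUT  -i eth0 -p udp --dport 137:138 -m iprange --src-range 10.1.1.12-10.1.1.19 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp --sport 137:138 -m iprange --dst-range 10.1.1.12-10.1.1.19 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT  -i eth0 -p tcp --dport 139 -m iprange --src-range 10.1.1.12-10.1.1.19 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 139 -m iprange --dst-range 10.1.1.12-10.1.1.19 -m state --state ESTABLISHED -j ACCEPT

        # Persist the running rules rather than restoring the previously saved set
        service iptables save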

    Read the article

  • In Windows 7, why can't I use perfmon against a remote server?

    - by SomeGuy
    I am on Windows 7 and trying to run perfmon against Windows 2003 and Windows 2008 servers. I am running into the same issue with all remote machines. When creating a data collector set, I specify a domain account that is in the administrators group on the remote machines (and "Performance Log Users" and "Performance Monitor Users" to be safe). On the "Available Counters" screen, When I type in a remote computer name, PerfMon locks up for a good 2-3 minutes before I can add any counters. I can then save the collector set. However, when I save it, the go/stop buttons are disabled if I click the set in the left panel, and missing if I click the Data collector set itself in the right panel. See the screens below. I can run data collector sets against my local machine with no problem. I am opening perfmon with my local account in both scenarios. I also have Remote Registry Service started on each remote machine. What is going on?

    Read the article

  • KVM bridged networking problems

    - by isware
    I am trying to configure bridged networking for KVM (see http://www.linux-kvm.org/page/Networking). It works for the guest OS, but I have two problems with my Fedora host OS: 1) I cannot access the internet from the host; 2) the bridge configuration is lost after a reboot, and I need to run "service network restart" again to bring it up. I checked here (http://serverfault.com/questions/168119/kvm-network-bridge-with-public-static-ip-for-both-host-and-guests) for the first problem, but it does not seem to work for me. Any advice is appreciated! Output of ifconfig -a:

        eth0   Link encap:Ethernet  HWaddr 48:5B:39:ED:EB:5A
               inet6 addr: fe80::4a5b:39ff:feed:eb5a/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:231340 errors:0 dropped:0 overruns:0 frame:0
               TX packets:413424 errors:0 dropped:0 overruns:0 carrier:1
               collisions:0 txqueuelen:1000
               RX bytes:15335606 (14.6 MiB)  TX bytes:114755796 (109.4 MiB)
               Interrupt:44

        lo     Link encap:Local Loopback
               inet addr:127.0.0.1  Mask:255.0.0.0
               inet6 addr: ::1/128 Scope:Host
               UP LOOPBACK RUNNING  MTU:16436  Metric:1
               RX packets:119307 errors:0 dropped:0 overruns:0 frame:0
               TX packets:119307 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:57151264 (54.5 MiB)  TX bytes:57151264 (54.5 MiB)

        sit0   Link encap:IPv6-in-IPv4
               NOARP  MTU:1480  Metric:1
               RX packets:0 errors:0 dropped:0 overruns:0 frame:0
               TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

        sw0    Link encap:Ethernet  HWaddr 48:5B:39:ED:EB:5A
               inet addr:192.168.1.133  Bcast:255.255.255.255  Mask:255.255.255.0
               inet6 addr: fe80::4a5b:39ff:feed:eb5a/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:229584 errors:0 dropped:0 overruns:0 frame:0
               TX packets:401232 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:11047463 (10.5 MiB)  TX bytes:113891533 (108.6 MiB)

        tap0   Link encap:Ethernet  HWaddr F2:86:1A:48:E2:55
               inet6 addr: fe80::f086:1aff:fe48:e255/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:232 errors:0 dropped:0 overruns:0 frame:0
               TX packets:2744 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:500
               RX bytes:24842 (24.2 KiB)  TX bytes:243899 (238.1 KiB)

        virbr0 Link encap:Ethernet  HWaddr 9A:7C:09:6B:85:65
               inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:0 errors:0 dropped:0 overruns:0 frame:0
               TX packets:46 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:0 (0.0 b)  TX bytes:5513 (5.3 KiB)
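
    For the persistence side (and often for host connectivity, since the host's IP has to live on the bridge rather than on eth0), a sketch of Red Hat style config files that create the sw0 bridge at boot; the address is the one shown in the ifconfig output above, and the gateway value is only a placeholder:

        # /etc/sysconfig/network-scripts/ifcfg-eth0
        DEVICE=eth0
        ONBOOT=yes
        BRIDGE=sw0

        # /etc/sysconfig/network-scripts/ifcfg-sw0
        DEVICE=sw0
        TYPE=Bridge
        ONBOOT=yes
        BOOTPROTO=static
        IPADDR=192.168.1.133
        NETMASK=255.255.255.0
        GATEWAY=192.168.1.1    # placeholder - use the real default gateway
        DELAY=0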

    Read the article

  • Passing OpenVPN road-warrior traffic through tunnel pfsense

    - by Chris
    I have a local LAN (10.100.100.0/24) and OpenVPN road warriors (10.99.99.0/24). pfSense regulates all of this as follows:

        LAN: 10.100.100.105
        WAN: 10.100.99.1 (connected to a DSL router which connects to the internet)
        OPT1: 10.99.99.0 (OpenVPN tun0)

    There is an IPsec connection between my office and another office, through which my LAN can reach one specific IP address (an SQL server, to be exact) at 192.168.30.41. My problem is that I want my OpenVPN road-warrior clients to be able to use the IPsec service on 192.168.30.41 as well, which at present they cannot, even though I am pushing the route 192.168.30.0 255.255.255.0. The other site's administrator cannot add the extra route for my OpenVPN clients, for reasons I am not going to go into at this stage. Is there a way to NAT all of my OpenVPN road warriors' requests through a local LAN IP address (something like 10.100.100.250, which is not used by anything on my LAN)? The problem is that I am a newbie with pfSense, so as much step-by-step help as possible would be very much appreciated! Thank you. C.

    Read the article

  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster which is on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain copies on tape. We have used NDMP in other places across the enterprise, but we do not want to use it in this specific instance. Basically, what I need to do is get the most recent snapshot copy of the databases onto tape via Tivoli Storage Manager (TSM). What I have done is obtain a basic Windows Server 2003 VM with SnapDrive installed, SAN-attached and zoned to the NetApp, and write a batch file that does the following:

        1. Mount the latest __RECENT snapshot LUN on the host, using a specific drive letter
        2. Perform a TSM-based incremental backup
        3. Dismount the LUN

    This seems to work fine, except that sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups, since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?
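
    On the return-code point specifically, cmd.exe can hand the backup client's exit status back to the TSM scheduler with exit /b. A rough sketch (the mount/dismount lines are placeholders for the existing SnapDrive steps, and Q: is an assumed drive letter):

        @echo off
        rem 1. Mount the latest __RECENT snapshot LUN (existing SnapDrive command goes here)
        call mount_recent_snapshot.cmd Q:
        if errorlevel 1 exit /b %ERRORLEVEL%

        rem 2. Incremental backup of the mounted copy with the TSM client
        dsmc incremental Q:\ -subdir=yes
        set RC=%ERRORLEVEL%

        rem 3. Dismount regardless of the backup result (existing SnapDrive command goes here)
        call dismount_snapshot.cmd Q:

        rem 4. Propagate the backup client's return code to the TSM scheduler
        exit /b %RC%

    The scheduler then records the script's exit code, so a failed mount or a failing dsmc run shows up as a failed scheduled event instead of silently succeeding.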

    Read the article

  • MS SQL Server slows down over time?

    - by Dave Holland
    Have any of you experienced the following, and have you found a solution? A large part of our website's back end is MS SQL Server 2005. Every week or two the site begins running slower, and I see queries taking longer and longer to complete in SQL. I have a query that I like to use:

        USE master
        select text, wait_time, blocking_session_id AS "Block", percent_complete, *
        from sys.dm_exec_requests CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
        order by start_time asc

    It is fairly useful: it gives a snapshot of everything that's running right at that moment against your SQL Server. What's nice is that even if your CPU is pegged at 100% for some reason and Activity Monitor refuses to load (I'm sure some of you have been there), this query still returns and you can see which query is killing your DB. When I run this, or Activity Monitor, during the times that SQL has begun to slow down, I don't see any specific queries causing the issue - they are ALL running slower across the board. If I restart the MS SQL service then everything is fine; it speeds right up, for a week or two until it happens again. Nothing that I can think of has changed, but this just started a few months ago... Ideas?

    Added: Please note that when this database slowdown happens it doesn't matter whether we are getting 100K page views an hour (busier time of day) or 10K page views an hour (slow time); the queries all take longer to complete than normal. The server isn't really under stress - the CPU isn't high, the disk usage doesn't seem to be out of control... it feels like index fragmentation or something of the sort, but that doesn't seem to be the case. As for pasting results of the query above, I really can't do that: it lists the login of the user performing the task, the entire query, etc., and I'd rather not hand out the names of my databases, tables, columns and logins online :). I can tell you that the queries running at that time are our normal, standard site queries that run all the time, nothing out of the norm.
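
    Since index fragmentation is one of the suspects mentioned above, a quick way to confirm or rule it out is the physical-stats DMV (available on SQL Server 2005); a sketch, with the database name left as a placeholder:

        USE YourDatabase;
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name AS index_name,
               ips.avg_fragmentation_in_percent,
               ips.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE ips.page_count > 1000
        ORDER BY ips.avg_fragmentation_in_percent DESC;

    If fragmentation turns out to be low across the board, the fact that a service restart clears the slowdown points more towards plan-cache or statistics issues than towards the indexes themselves.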

    Read the article

  • Internal but no external Citrix Access?

    - by leeand00
    We recently had to reload our configuration of Citrix on our server Server1, and since we have, we can access Citrix internally, but not externally. Normally we access Citrix from http://remote.xyz.org/Citrix/XenApp but since the configuration was reloaded we are met with a Service Unavailable message. Internally accessing the Citrix web application from http://localhost/Citrix/XenApp/ on Server1 we are able to access the web application. And also from machines on our local network using http://Server1/Citrix/XenApp/. I have gone into the Citrix Access Management Console and from the tree pane on the left clicked on Citrix Access Management Console->Citrix Resources->Configuration Tools->Web Interface->http://remote.xyz.org/Citrix/PNAgent Citrix Access Management Console->Citrix Resources->Configuration Tools->Web Interface->http://remote.xyz.org/Citrix/XenApp, which in both cases displays a screen that reads Secure client access. Here it offers me several options: Direct, Alternate, Translated, Gateway Direct, Gateway Alternate, Gateway Translated. I know that I can change the method of use by clicking Manage secure client access->Edit secure client access settings which opens a window that reads "Specify Access Methods", and below that reads "Specify details of the DMZ settings, including IP address, mask, and associated access method", I don't know what the original settings were, and I also don't know how our DMZ is configured so that I can specify the correct settings, to give access to our external users on the http://remote.xyz.org/Citrix/XenApp site. We have a vendor who setup our DMZ and does not allow us access to the gateway to see these settings. What sorts of questions should I ask them to restore remote access?

    Read the article

  • JAVASCRIPT ENABLED

    - by kirchoffs415
    Hi, I hope somebody can help. I keep getting the following message when I log on: "Your Javascript is disabled. Limited functionality is available." It will stay for maybe a day, sometimes two. I have uninstalled Javascript and reinstalled it, but it is still the same. I am using Chrome. Any help would be gratefully received. Many thanks, Dominic. P.S. My system spec is as follows:

        OS Name: Microsoft® Windows Vista™ Home Premium
        Version: 6.0.6002 Service Pack 2 Build 6002
        Other OS Description: Not Available
        OS Manufacturer: Microsoft Corporation
        System Name: DOM-PC
        System Manufacturer: Dell Inc.
        System Model: Inspiron 1545
        System Type: X86-based PC
        Processor: Pentium(R) Dual-Core CPU T4200 @ 2.00GHz, 2000 MHz, 2 Core(s), 2 Logical Processor(s)
        BIOS Version/Date: Dell Inc. A05, 25/02/2009
        SMBIOS Version: 2.4
        Windows Directory: C:\Windows
        System Directory: C:\Windows\system32
        Boot Device: \Device\HarddiskVolume3
        Locale: United Kingdom
        Hardware Abstraction Layer: Version = "6.0.6002.18005"
        User Name: DOM-PC\DOM
        Time Zone: GMT Standard Time
        Installed Physical Memory (RAM): 3.00 GB
        Total Physical Memory: 2.96 GB
        Available Physical Memory: 1.38 GB
        Total Virtual Memory: 5.89 GB
        Available Virtual Memory: 4.25 GB
        Page File Space: 3.00 GB
        Page File: C:\pagefile.sys

    Read the article

  • Apache Derby running in Tomcat shutdown issues

    - by Luke
    I have set up Derby Network Server to be hosted within a Tomcat environment. This works great; however, when I shut down Tomcat I get the following errors:

        04/01/2011 10:41:41 AM org.apache.catalina.core.StandardService stop
        INFO: Stopping service Catalina
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc
        SEVERE: The web application [/derby] registered the JDBC driver [org.apache.derby.jdbc.ClientDriver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc
        SEVERE: The web application [/derby] registered the JDBC driver [org.apache.derby.jdbc.AutoloadedDriver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/derby] appears to have started a thread named [derby.NetworkServerStarter] but has failed to stop it. This is very likely to create a memory leak.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/derby] appears to have started a thread named [NetworkServerThread_4] but has failed to stop it. This is very likely to create a memory leak.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/derby] appears to have started a thread named [DRDAConnThread_5] but has failed to stop it. This is very likely to create a memory leak.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/derby] appears to have started a thread named [DRDAConnThread_13] but has failed to stop it. This is very likely to create a memory leak.
        04/01/2011 10:41:41 AM org.apache.coyote.http11.Http11Protocol destroy
        INFO: Stopping Coyote HTTP/1.1 on http-8080

    I'm currently starting and stopping Tomcat with the following commands:

        ./catalina run
        ./catalina stop

    Is there a better way to shut down Tomcat with Derby, or can this be solved by a configuration change?
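
    The warnings come from Tomcat's leak detection: the Derby network server threads (and the loaded JDBC drivers) are still alive when the webapp stops. One approach is simply to shut the network server down before stopping Tomcat; a sketch using Derby's derbyrun launcher (the path, host and port are assumptions - 1527 is only Derby's default):

        # Stop the Derby network server first, then Tomcat
        java -jar /path/to/db-derby/lib/derbyrun.jar server shutdown -h localhost -p 1527
        ./catalina stop

    The same thing can be done from inside the webapp by calling NetworkServerControl's shutdown from a ServletContextListener's contextDestroyed method, which keeps the stop order automatic.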

    Read the article

  • netlogon errors

    - by rorr
    I have two instances of mssql 2005 and am using CA XOSoft replication. The master is a failover cluster and the replica is a standalone server. They are all running Server 2003 sp2 x64. Same patch levels on all servers. This setup has worked great for several months until we recently restricted the RPC ports on both nodes of the master(5000 - 6000 using rpccfg.exe). We have to implement egress filtering, thus the limiting of the ports. We began receiving login errors for sql windows authentication and NETLOGON Event ID: 5719: This computer was not able to set up a secure session with a domain controller in domain due to the following: Not enough storage is available to process this command. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator. We also see group policies failing to update and cluster file shares go offline at the same time. The RPC ports were set back to default when we started seeing these problems and the servers rebooted, but the problems persist. The domain controllers are not showing any errors. Running dcdiag and netdiag shows everything is fine. We have noticed that the XOSoft service ws_rep.exe is using a lot of handles(8 - 9k), about the same number that sqlserver is using. As soon as xosoft replication is stopped the login errors cease and everything functions correctly. I have opened a ticket with CA for XOSoft, but I'm not sure that the problem is actually xosoft, but that it is the one bringing the problem to light. I'm looking for tips on debugging RPC problems. Specifically on limiting the ports and then reverting the changes.

    Read the article

  • Installer not being updated ( probably because of Windows 7 file cache )

    - by Sithu Kyaw
    I'm creating an installer for my Visual FoxPro application using ISTool and Inno Setup. The first build was fine. But then I updated my code, rebuilt the EXE file, and compiled the installer again, and found that my update was not compiled into the installer; I did not see the update in the running application. I noticed that the EXE file built by VFP was updated properly, so it seems the installation script did not output the updated file. However, when I changed the folder names, it did work. I don't want to change folder names every time I run the installation script; that is not a good idea. I think it is because of the Windows 7 file cache. I am on Windows 7 Home Premium Service Pack 1. For example, my previous output file is located at C:\path\to\myinstaller.exe. When I compile the installation script, the output file there should be overwritten, but it is not. Even deleting the file did not help. When I changed the output file path to C:\newpath\to\myinstaller.exe it worked, but that is not the solution I'm looking for. Does anyone know how to fix this? [Edit] I also found that the installed directory is not updated properly. For example, I installed the program to C:\Program Files\MyInstalledApp. When I run the installer again, that installation directory should be overwritten, but it is not, so I have to uninstall the app before I reinstall it. Is there any fix for this?

    Read the article

  • Retail Windows XP Professional SP2: lost the key but have the genuine CD, what to do?

    - by AdityaGameProgrammer
    I recently formatted my system, only to find out that I have lost the CD key to my original CD. I had used the option to enter the product key later. Yes, I know it's a stupid thing to do, but I bought the CD in 2008 from a retail store and I lost the original packaging. The actual label on the CD reads "Includes Service Pack, Version 2002, © 2004 Microsoft Corporation, all rights reserved." There are some numbers on the back side of the CD in the inner ring. I can't for the life of me figure out what the use of the genuine CD is when I can't seem to activate it. What exactly is the advantage of having the original CD in your possession in situations like this? I have tried the unattend.txt and it doesn't contain the correct key, and there is no winnt.sif file on the CD. Where on the CD, or in it, can I find the product ID information? I live in India, and my attempts at the Microsoft support site keep directing me to a page which says they stopped support for Windows XP in 2011. Let's say that by some miracle I do contact Microsoft: what information would I have to provide them? And would they give me the product key for my CD from their database, or a new key?

    Read the article

  • Windows 7 Apache Crashes on ANY request

    - by Dan
    I have XAMPP installed. I am running Windows 7. I have WordPress installed so that I may tweak it and test things locally before putting them 'live' on a remote server. I just installed BuddyPress. The installation was rather seamless. I activated the plugin and almost immediately, Apache crashed. I have Apache running as a service, so it immediately restarted itself and was running, BUT if I even so much as refresh the page (or make any other request), down it goes. Here is the error report as generated by Windows 7:

        Problem signature:
        Problem Event Name:       APPCRASH
        Application Name:         apache.exe
        Application Version:      2.2.4.0
        Application Timestamp:    45ebef86
        Fault Module Name:        ZendOptimizer.dll
        Fault Module Version:     0.0.0.0
        Fault Module Timestamp:   45ea8fee
        Exception Code:           c0000005
        Exception Offset:         0004dc22
        OS Version:               6.1.7600.2.0.0.256.1
        Locale ID:                1033
        Additional Information 1: 1ec0
        Additional Information 2: 1ec0fd70d07d060e5bfcf53c69ad1739
        Additional Information 3: 2c48
        Additional Information 4: 2c48940de5e7d1cb2e131ad6a0ca2feb

        Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409
        If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-US\erofflps.txt

    Help?
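
    The fault module in the report is ZendOptimizer.dll, so a reasonable first experiment is to disable the Zend Optimizer extension in php.ini and see whether Apache still crashes on a request. A sketch only - the exact directive and path vary with the XAMPP version, so treat the line below as an illustration of what to look for rather than the literal text:

        ; in xampp\php\php.ini (or the php.ini that phpinfo() reports as loaded)
        ; comment out the Zend Optimizer / extension manager line and restart Apache
        ;zend_extension_ts = "C:\xampp\php\zendOptimizer\lib\ZendExtensionManager.dll"

    If the crashes stop, the problem is narrowed down to the optimizer build shipped with that XAMPP release rather than WordPress or BuddyPress.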

    Read the article

  • Recommendations for secure business collaboration tools

    - by Michael Prescott
    I'm searching for a secure and easy way for business partners to collaboratively edit and exchange documents, share calendars, create schedules, and assign tasks. I suspect the ideal collaboration environment or workflow would actually involve several technologies and services. My co-workers and I have tried a variety of things, from Google Apps to wikis, but nothing feels very fluid or complete. I suppose defining what we need and our constraints is in order:

        - collaboratively edit basic text documents and spreadsheets
        - exchange documents like flow charts, graphs, and files generated by our other desktop applications, but not source code
        - assign tasks to each other and ourselves, and track the history of those tasks
        - easily see when relevant documents have been modified since last viewing, and easily push notifications to relevant workers (a clean front page that shows updates would probably suffice)
        - provide limited access to contract workers and guest users
        - if a remote user's system is compromised (keystroke logger or other spyware), we don't want the criminal to gain access to all business documents (processes, trade secrets, customer lists, etc.) simply because they gained access to a single Google account (or whatever web service)
        - cannot be a difficult-to-administer VPN infrastructure
        - cannot cost more than $100 per month (yeah, money is tight)
        - needs to support up to 25 users
        - we can host our own web applications, but it must be a low-maintenance solution

    Read the article

  • Trying to attach a database file but can't browse the folder which contains the file

    - by Chadworthington
    I am trying to attach a database file (*.mdf, *.ldf) that I placed in the same folder as all my other SQL Server databases. I begin the attach by attempting to browse to the folder which contains the db files as well as all of my active database files: I select "Attach Database" and click the "Add" button to add a database to the list of databases to attach. When I do so, I get this error:

        TITLE: Locate Database Files - BESI-CHAD
        ------------------------------
        D:\SQLdata\MSSQL10_50.SQLBESI\MSSQL\DATA
        Cannot access the specified path or file on the server. Verify that you have the necessary security privileges and that the path or file exists.
        If you know that the service account can access a specific file, type in the full path for the file in the File Name control in the Locate dialog box.
        ------------------------------
        BUTTONS: OK
        ------------------------------

    The path is correct and, as I mentioned, it contains all of my other database files, so I wouldn't think permissions should be an issue, but here is what I see for that folder. Any idea why I cannot browse to that folder and attach to the db files that I have placed there?
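
    As the dialog's own wording suggests, the folder is enumerated using the SQL Server service account rather than the logged-in user, so it is worth confirming that account has rights on the DATA directory. A quick sketch from an elevated command prompt - the account name is an assumption based on the instance name visible in the path, so substitute whatever account the SQL Server service actually runs as:

        icacls "D:\SQLdata\MSSQL10_50.SQLBESI\MSSQL\DATA" /grant "NT SERVICE\MSSQL$SQLBESI":(OI)(CI)F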

    Read the article

  • Java exception when the traffic grows

    - by sahid
    I have an error with Java/Solr when the traffic grows. It seems Solr tries to cast a java.lang.Object to an org.apache.solr.common.util.ConcurrentLRUCache$CacheEntry:

        SEVERE: java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to [Lorg.apache.solr.common.util.ConcurrentLRUCache$CacheEntry;
            at org.apache.solr.common.util.ConcurrentLRUCache$PQueue.myInsertWithOverflow(ConcurrentLRUCache.java:377)
            at org.apache.solr.common.util.ConcurrentLRUCache.markAndSweep(ConcurrentLRUCache.java:329)
            at org.apache.solr.common.util.ConcurrentLRUCache.put(ConcurrentLRUCache.java:144)
            at org.apache.solr.search.FastLRUCache.put(FastLRUCache.java:131)
            at org.apache.solr.search.SolrIndexSearcher.getDocSet(SolrIndexSearcher.java:573)
            at org.apache.solr.search.SolrIndexSearcher.getDocSet(SolrIndexSearcher.java:641)
            at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1109)
            at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1090)
            at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:337)
            at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:431)
            at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:231)
            at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
            at org.apache.solr.core.SolrCore.execute(SolrCore.java:1298)
            at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:340)
            at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:240)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:470)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            at java.lang.Thread.run(Thread.java:636)

    This is my Java configuration for Tomcat:

        JAVA_OPTS="-Djava.awt.headless=true -D64 -server -Xms4096M -Xmx4096M -XX:+UseConcMarkSweepGC -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=5400 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"

    Does anybody know why these errors happen?

    Read the article

  • Visual Studio 2005 won't install on Windows 7

    - by Peanut
    Hi, my question relates very closely to this one: http://superuser.com/questions/34190/visual-studio-2005-sp1-refuses-to-install-in-windows-7 - however, that question hasn't provided the answer I'm looking for. I'm trying to install Visual Studio 2005 onto a clean Windows 7 (64-bit) box, but I keep getting the following error when the 'Microsoft Visual Studio 2005' component finishes installing:

        Error 1935. An error occurred during the installation of assembly 'policy.8.0.Microsoft.VC80.OpenMP,type="win32-policy",version="8.0.50727.42",publicKeyToken="1fc8b3b9a1e18e3b",processorArchitecture="x86"'. Please refer to Help and Support for more information. HRESULT: 0x80073712.

    On my first attempt to install VS 2005 I got a warning about compatibility issues. I stopped at that point, downloaded the necessary service packs, and restarted the installation from the beginning. Ever since then I just get the error message above. I keep rolling back the installation and trying again, but it's always the same error. Any help would be very much appreciated. Thanks.

    Read the article

  • Apache reverse proxy: no protocol handler

    - by gonvaled
    I am trying to configure a reverse proxy with Apache, but I am getting a "No protocol handler was valid for the URL" error, which I do not understand. This is the relevant Apache configuration:

        ProxyRequests Off
        ProxyPreserveHost On
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /gonvaled/examples/jsonrpc/output/services/ http://localhost:8000/services/
        ProxyPassReverse /gonvaled/examples/jsonrpc/output/services/ http://localhost:8000/services/

    The request reaches Apache as:

        POST /gonvaled/examples/jsonrpc/output/services/EchoService.py HTTP/1.1

    and it should be forwarded to my internal service, located at 0.0.0.0:8000/services/EchoService.py. These are the logs:

        ==> /var/log/apache2/error.log <==
        [Wed Jun 20 02:05:20 2012] [debug] proxy_util.c(1506): [client 127.0.0.1] proxy: http: found worker http://localhost:8000/services/ for http://localhost:8000/services/EchoService.py, referer: http://localhost/gonvaled/examples/jsonrpc/output/JSONRPCExample.safari.cache.html
        [Wed Jun 20 02:05:20 2012] [debug] mod_proxy.c(998): Running scheme http handler (attempt 0)
        [Wed Jun 20 02:05:20 2012] [warn] proxy: No protocol handler was valid for the URL /gonvaled/examples/jsonrpc/output/services/EchoService.py. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.
        [Wed Jun 20 02:05:20 2012] [debug] mod_deflate.c(615): [client 127.0.0.1] Zlib: Compressed 614 to 373 : URL /gonvaled/examples/jsonrpc/output/services/EchoService.py, referer: http://localhost/gonvaled/examples/jsonrpc/output/JSONRPCExample.safari.cache.html

        ==> /var/log/apache2/access.log <==
        127.0.0.1 - - [20/Jun/2012:02:05:20 +0200] "POST /gonvaled/examples/jsonrpc/output/services/EchoService.py HTTP/1.1" 500 598 "http://localhost/gonvaled/examples/jsonrpc/output/JSONRPCExample.safari.cache.html" "Mozilla/5.0 (X11; Linux i686) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.162 Safari/535.19"
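
    As the warning in the error log hints, this message usually means no handler for the http scheme is loaded: mod_proxy alone provides the framework, and the mod_proxy_http submodule has to be loaded as well for http:// backends. A sketch of the relevant lines (module paths vary by distribution; on Debian/Ubuntu "a2enmod proxy proxy_http" achieves the same thing):

        LoadModule proxy_module       modules/mod_proxy.so
        LoadModule proxy_http_module  modules/mod_proxy_http.so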

    Read the article

  • Postfix configuration - Using Virtualmin, but the server is bouncing back my mail

    - by brodiebrodie
    I have no experience in setting up postfix, and thought virtualmin minght do the legwork for me. Appears not. When I try to send mail to the domain (either [email protected] [email protected] or [email protected]) I get the following message returned This is the mail system at host dedq239.localdomain. I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below. For further assistance, please send mail to <postmaster> If you do so, please include this problem report. You can delete your own text from the attached returned message. The mail system <[email protected]> (expanded from <[email protected]>): User unknown in virtual alias table Final-Recipient: rfc822; [email protected] Original-Recipient: rfc822;[email protected] Action: failed Status: 5.0.0 Diagnostic-Code: X-Postfix; User unknown in virtual alias table How can I diagnose the problem here? It seems that the mail gets to my server but the server fails to locally deliver the message to the correct user. (This is a guess, truthfully I have no idea what is happening). I have checked my virtual alias table and it seems to be set up correctly (I can post if this would be helpful). Can anyone give me a clue as to the next step? Thanks alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix debug_peer_level = 2 html_directory = no local_recipient_maps = $virtual_mailbox_maps mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain myorigin = $mydomain newaliases_path = /usr/bin/newaliases.postfix readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES sample_directory = /usr/share/doc/postfix-2.3.3/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtpd_recipient_restrictions = permit_mynetworks reject_unauth_destination smtpd_sasl_auth_enable = yes soft_bounce = no unknown_local_recipient_reject_code = 550 virtual_alias_maps = hash:/etc/postfix/virtual My mail log file (the last entry) Sep 30 15:13:47 dedq239 postfix/cleanup[7237]: 207C6B18158: message-id=<[email protected]> Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 207C6B18158: from=<[email protected]>, size=1805, nrcpt=1 (queue active) Sep 30 15:13:47 dedq239 postfix/error[7238]: 207C6B18158: to=<[email protected]>, orig_to=<[email protected]>, relay=none, delay=0.64, delays=0.61/0.01/0/0.02, dsn=5.0.0, status=bounced (User unknown in virtual alias table) Sep 30 15:13:47 dedq239 postfix/cleanup[7237]: 8DC13B18169: message-id=<[email protected]> Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 8DC13B18169: from=<>, size=3691, nrcpt=1 (queue active) Sep 30 15:13:47 dedq239 postfix/bounce[7239]: 207C6B18158: sender non-delivery notification: 8DC13B18169 Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 207C6B18158: removed Sep 30 15:13:48 dedq239 postfix/smtp[7240]: 8DC13B18169: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[209.85.216.55]:25, delay=1.3, delays=0.02/0.01/0.58/0.75, dsn=2.0.0, status=sent (250 2.0.0 OK 1254348828 36si15082901pxi.91) Sep 30 15:13:48 dedq239 postfix/qmgr[7177]: 8DC13B18169: removed Sep 30 15:14:17 dedq239 postfix/smtpd[7233]: disconnect from mail-bw0-f228.google.com[209.85.218.228] etc.aliases file below I have not touched this file - myvirtualdomain is a replacement for my real domain name # Aliases in this file will NOT be expanded in 
the header from # Mail, but WILL be visible over networks or from /bin/mail. # # >>>>>>>>>> The program "newaliases" must be run after # >> NOTE >> this file is updated for any changes to # >>>>>>>>>> show through to sendmail. # # Basic system aliases -- these MUST be present. mailer-daemon: postmaster postmaster: root # General redirections for pseudo accounts. bin: root daemon: root adm: root lp: root sync: root shutdown: root halt: root mail: root news: root uucp: root operator: root games: root gopher: root ftp: root nobody: root radiusd: root nut: root dbus: root vcsa: root canna: root wnn: root rpm: root nscd: root pcap: root apache: root webalizer: root dovecot: root fax: root quagga: root radvd: root pvm: root amanda: root privoxy: root ident: root named: root xfs: root gdm: root mailnull: root postgres: root sshd: root smmsp: root postfix: root netdump: root ldap: root squid: root ntp: root mysql: root desktop: root rpcuser: root rpc: root nfsnobody: root ingres: root system: root toor: root manager: root dumper: root abuse: root newsadm: news newsadmin: news usenet: news ftpadm: ftp ftpadmin: ftp ftp-adm: ftp ftp-admin: ftp www: webmaster webmaster: root noc: root security: root hostmaster: root info: postmaster marketing: postmaster sales: postmaster support: postmaster # trap decode to catch security attacks decode: root # Person who should get root's mail #root: marc abuse-myvirtualdomain.com: [email protected] My etc/postfix/virtual file is below - again myvirtualdomain is a replacement. I think this file was generated by Virtualmin and I have tried messing around with is with no success... This is the version without my changes. myunixusername@myvirtualdomain .com myunixusername myvirtualdomain .com myvirtualdomain.com [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]
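
    For reference, this is the general shape of virtual_alias_maps entry that Postfix expects, followed by the rebuild step; the addresses below are placeholders in the question's pattern, and the right-hand side has to resolve to something Postfix can actually deliver to (a local user, another alias, or a virtual mailbox address):

        # /etc/postfix/virtual
        info@myvirtualdomain.com        myunixusername
        postmaster@myvirtualdomain.com  myunixusername

        # rebuild the hash map and pick up the change
        postmap /etc/postfix/virtual
        postfix reload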

    Read the article

  • I need advice: small-memory-footprint Linux mail server with spam filtering

    - by petermolnar
    I have a VPS which is originally destined to be a webserver but some minimal mail capabilities are needed to be deployed as well, including sending and receiving as standalone server. The current setup is the following: Postfix reveices the mail, the users are in virtual tables, stored in MySQL on connection all servers are tested with policyd-weight service against some DNSBLs all mail is runs through SpamAssassin spamd with the help of spamc client the mail is then delivered with Dovecot 2' LDA (local delivery agent), virtual users as well As you saw... there's no virus scanner running, and that's for a reason: clamav eats all the memory possible and also, virus mails are all filtered out with this setup (I've tested the same with ClamAV enabled for 1,5 years, no virus mail ever got even to ClamAV) I don't use amavisd and I really don't want to. You only need that monster if you have plenty of memory and lots of simultaneous scanners. It's also a nightmare to fine tune by hand. I run policyd-weight instead of policyd and native DNSBLs in postfix. I don't like to send someone away because a single service listed them. Important statement: everything works fine. I receive very small amount of spam, nearly never get a false positive and most of the bad mail is stopped by policyd-weight. The only "problem" that I feel the services at total uses a bit much memory alltogether. I've already cut the modules of spamassassin (see below), but I'd really like to hear some advices how to cut the memory footprint as low as possible, mostly: what plugins SpamAssassin really needs and what are more or less useless, regarding to my current postfix & policyd-weight setup? SpamAssassin rules are also compiled with sa-compile (sa-update runs once a week from cron, compile runs right after that) These are some of the current configurations that may matter, please tell me if you need anything more. postfix/master.cf (parts only) dovecot unix - n n - - pipe flags=DRhu user=vmail:vmail argv=/usr/bin/spamc -e /usr/lib/dovecot/deliver -d ${recipient} -f {sender} postfix/main.cf (parts only) smtpd_helo_required = yes smtpd_helo_restrictions = permit_mynetworks, reject_invalid_hostname, permit smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_invalid_hostname, reject_non_fqdn_hostname, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_pipelining, reject_unauth_destination, check_policy_service inet:127.0.0.1:12525, permit policyd-weight.conf (parts only) $REJECTMSG = "550 Mail appeared to be SPAM or forged. Ask your Mail/DNS-Administrator to correct HELO and DNS MX settings or to get removed from DNSBLs"; $REJECTLEVEL = 4; $DEFER_STRING = 'IN_SPAMCOP= BOGUS_MX='; $DEFER_ACTION = '450'; $DEFER_LEVEL = 5; $DNSERRMSG = '450 No DNS entries for your MTA, HELO and Domain. 
Contact YOUR administrator'; # 1: ON, 0: OFF (default) # If ON request that ALL clients are only checked against RBLs $dnsbl_checks_only = 0; # 1: ON (default), 0: OFF # When set to ON it logs only RBLs which affect scoring (positive or negative) $LOG_BAD_RBL_ONLY = 1; ## DNSBL settings @dnsbl_score = ( # host, hit, miss, log name 'dnsbl.ahbl.org', 3, -1, 'dnsbl.ahbl.org', 'dnsbl.njabl.org', 3, -1, 'dnsbl.njabl.org', 'dnsbl.sorbs.net', 3, -1, 'dnsbl.sorbs.net', 'bl.spamcop.net', 3, -1, 'bl.spamcop.net', 'zen.spamhaus.org', 3, -1, 'zen.spamhaus.org', 'pbl.spamhaus.org', 3, -1, 'pbl.spamhaus.org', 'cbl.abuseat.org', 3, -1, 'cbl.abuseat.org', 'list.dsbl.org', 3, -1, 'list.dsbl.org', ); # If Client IP is listed in MORE DNSBLS than this var, it gets REJECTed immediately $MAXDNSBLHITS = 3; # alternatively, if the score of DNSBLs is ABOVE this level, reject immediately $MAXDNSBLSCORE = 9; $MAXDNSBLMSG = '550 Az levelezoszerveruk IP cime tul sok spamlistan talahato, kerjuk ellenorizze! / Your MTA is listed in too many DNSBLs; please check.'; ## RHSBL settings @rhsbl_score = ( 'multi.surbl.org', 4, 0, 'multi.surbl.org', 'rhsbl.ahbl.org', 4, 0, 'rhsbl.ahbl.org', 'dsn.rfc-ignorant.org', 4, 0, 'dsn.rfc-ignorant.org', # 'postmaster.rfc-ignorant.org', 0.1, 0, 'postmaster.rfc-ignorant.org', # 'abuse.rfc-ignorant.org', 0.1, 0, 'abuse.rfc-ignorant.org' ); # skip a RBL if this RBL had this many continuous errors $BL_ERROR_SKIP = 2; # skip a RBL for that many times $BL_SKIP_RELEASE = 10; ## cache stuff # must be a directory (add trailing slash) $LOCKPATH = '/var/run/policyd-weight/'; # socket path for the cache daemon. $SPATH = $LOCKPATH.'/polw.sock'; # how many seconds the cache may be idle before starting maintenance routines #NOTE: standard maintenance jobs happen regardless of this setting. $MAXIDLECACHE = 60; # after this number of requests do following maintenance jobs: checking for config changes $MAINTENANCE_LEVEL = 5; # negative (i.e. SPAM) result cache settings ################################## # set to 0 to disable caching for spam results. To this level the cache will be cleaned. $CACHESIZE = 2000; # at this number of entries cleanup takes place $CACHEMAXSIZE = 4000; $CACHEREJECTMSG = '550 temporarily blocked because of previous errors'; # after NTTL retries the cache entry is deleted $NTTL = 1; # client MUST NOT retry within this seconds in order to decrease TTL counter $NTIME = 30; # positve (i.,e. HAM) result cache settings ################################### # set to 0 to disable caching of HAM. To this number of entries the cache will be cleaned $POSCACHESIZE = 1000; # at this number of entries cleanup takes place $POSCACHEMAXSIZE = 2000; $POSCACHEMSG = 'using cached result'; #after PTTL requests the HAM entry must succeed one time the RBL checks again $PTTL = 60; # after $PTIME in HAM Cache the client must pass one time the RBL checks again. #Values must be nonfractal. Accepted time-units: s, m, h, d $PTIME = '3h'; # The client must pass this time the RBL checks in order to be listed as hard-HAM # After this time the client will pass immediately for PTTL within PTIME $TEMP_PTIME = '1d'; ## DNS settings # Retries for ONE DNS-Lookup $DNS_RETRIES = 1; # Retry-interval for ONE DNS-Lookup $DNS_RETRY_IVAL = 5; # max error count for unresponded queries in a complete policy query $MAXDNSERR = 3; $MAXDNSERRMSG = 'passed - too many local DNS-errors'; # persistent udp connection for DNS queries. #broken in Net::DNS version 0.51. 
Works with Net::DNS 0.53; DEFAULT: off $PUDP= 0; # Force the usage of Net::DNS for RBL lookups. # Normally policyd-weight tries to use a faster RBL lookup routine instead of Net::DNS $USE_NET_DNS = 0; # A list of space separated NS IPs # This overrides resolv.conf settings # Example: $NS = '1.2.3.4 1.2.3.5'; # DEFAULT: empty $NS = ''; # timeout for receiving from cache instance $IPC_TIMEOUT = 2; # If set to 1 policyd-weight closes connections to smtpd clients in order to avoid too many #established connections to one policyd-weight child $TRY_BALANCE = 0; # scores for checks, WARNING: they may manipulate eachother # or be factors for other scores. # HIT score, MISS Score @client_ip_eq_helo_score = (1.5, -1.25 ); @helo_score = (1.5, -2 ); @helo_score = (0, -2 ); @helo_from_mx_eq_ip_score= (1.5, -3.1 ); @helo_numeric_score= (2.5, 0 ); @from_match_regex_verified_helo= (1,-2 ); @from_match_regex_unverified_helo = (1.6, -1.5 ); @from_match_regex_failed_helo = (2.5, 0 ); @helo_seems_dialup = (1.5, 0 ); @failed_helo_seems_dialup= (2, 0 ); @helo_ip_in_client_subnet= (0,-1.2 ); @helo_ip_in_cl16_subnet = (0,-0.41 ); #@client_seems_dialup_score = (3.75, 0 ); @client_seems_dialup_score = (0, 0 ); @from_multiparted = (1.09, 0 ); @from_anon= (1.17, 0 ); @bogus_mx_score = (2.1, 0 ); @random_sender_score = (0.25, 0 ); @rhsbl_penalty_score = (3.1, 0 ); @enforce_dyndns_score = (3, 0 ); spamassassin/init.pre (I've put the .pre files together) loadplugin Mail::SpamAssassin::Plugin::Hashcash loadplugin Mail::SpamAssassin::Plugin::SPF loadplugin Mail::SpamAssassin::Plugin::Pyzor loadplugin Mail::SpamAssassin::Plugin::Razor2 loadplugin Mail::SpamAssassin::Plugin::AutoLearnThreshold loadplugin Mail::SpamAssassin::Plugin::MIMEHeader loadplugin Mail::SpamAssassin::Plugin::ReplaceTags loadplugin Mail::SpamAssassin::Plugin::Check loadplugin Mail::SpamAssassin::Plugin::HTTPSMismatch loadplugin Mail::SpamAssassin::Plugin::URIDetail loadplugin Mail::SpamAssassin::Plugin::Bayes loadplugin Mail::SpamAssassin::Plugin::BodyEval loadplugin Mail::SpamAssassin::Plugin::DNSEval loadplugin Mail::SpamAssassin::Plugin::HTMLEval loadplugin Mail::SpamAssassin::Plugin::HeaderEval loadplugin Mail::SpamAssassin::Plugin::MIMEEval loadplugin Mail::SpamAssassin::Plugin::RelayEval loadplugin Mail::SpamAssassin::Plugin::URIEval loadplugin Mail::SpamAssassin::Plugin::WLBLEval loadplugin Mail::SpamAssassin::Plugin::VBounce loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody spamassassin/local.cf (parts) use_bayes 1 bayes_auto_learn 1 bayes_store_module Mail::SpamAssassin::BayesStore::MySQL bayes_sql_dsn DBI:mysql:db:127.0.0.1:3306 bayes_sql_username user bayes_sql_password pass bayes_ignore_header X-Bogosity bayes_ignore_header X-Spam-Flag bayes_ignore_header X-Spam-Status ### User settings user_scores_dsn DBI:mysql:db:127.0.0.1:3306 user_scores_sql_password user user_scores_sql_username pass user_scores_sql_custom_query SELECT preference, value FROM _TABLE_ WHERE username = _USERNAME_ OR username = '$GLOBAL' OR username = CONCAT('%',_DOMAIN_) ORDER BY username ASC # for better speed score DNS_FROM_AHBL_RHSBL 0 score __RFC_IGNORANT_ENVFROM 0 score DNS_FROM_RFC_DSN 0 score DNS_FROM_RFC_BOGUSMX 0 score __DNS_FROM_RFC_POST 0 score __DNS_FROM_RFC_ABUSE 0 score __DNS_FROM_RFC_WHOIS 0 UPDATE 01 As adaptr advised I remove policyd-weight and configured postfix postscreen, this resulted approximately -15-20 MB from RAM usage and a lot faster work. I'm not sure it's working at full capacity but it seems promising.
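
    On the footprint question itself, one lever that is easy to overlook is the number of resident spamd children, since each child carries a full copy of the compiled rules. A sketch of a tighter spamd invocation (the flag values are examples to adapt, not recommendations for this exact box; -x/-q match the SQL-based user preferences already configured in local.cf):

        spamd --daemonize \
              --max-children 2 \
              --max-conn-per-child 100 \
              --min-spare 1 --max-spare 1 \
              -u spamd -x -q

    Beyond that, the plugin list already loaded in init.pre is close to minimal for this setup; Hashcash, Pyzor and Razor2 mostly add network lookups rather than resident memory, so dropping them tends to buy latency more than RAM.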

    Read the article
