Search Results

Search found 22139 results on 886 pages for 'security testing'.

Page 704/886 | < Previous Page | 700 701 702 703 704 705 706 707 708 709 710 711  | Next Page >

  • Running Jackd on Ubuntu for my External Firewire Sound card

    - by Asaf
    Hello, I'm running Ubuntu 10.04 and I have an external sound card: a Phonic Firefly 302. I've connected the device, installed Jackd, and added the following lines to /etc/security/limits.conf:

        @audio - rtprio 99
        @audio - memlock 500000
        @audio - nice -10

    I logged out, logged back in, ran qjackctl (sudo qjackctl to be exact), opened the settings, chose "firewire" as the driver option, and pressed "Start". That was the output:

        20:10:19.450 Patchbay deactivated.
        20:10:19.578 Statistics reset.
        20:10:19.601 ALSA connection graph change.
        20:10:19.828 ALSA connection change.
        20:10:21.293 Startup script...
        20:10:21.293 artsshell -q terminate
        sh: artsshell: not found
        20:10:21.695 Startup script terminated with exit status=32512.
        20:10:21.695 JACK is starting...
        20:10:21.695 /usr/bin/jackd -dfirewire -r44100 -p1024 -n3
        jackd 0.118.0
        Copyright 2001-2009 Paul Davis, Stephane Letz, Jack O'Quinn, Torben Hohn and others.
        jackd comes with ABSOLUTELY NO WARRANTY
        This is free software, and you are welcome to redistribute it
        under certain conditions; see the file COPYING for details
        20:10:21.704 JACK was started with PID=22176.
        no message buffer overruns
        JACK compiled with System V SHM support.
        loading driver ..
        libffado 2.0.0 built Mar 31 2010 14:47:42
        firewire ERR: Error creating FFADO streaming device
        cannot load driver module firewire
        no message buffer overruns
        20:10:21.819 JACK was stopped successfully.
        20:10:21.819 Post-shutdown script...
        20:10:21.822 killall jackd
        jackd: no process found
        20:10:22.230 Post-shutdown script terminated with exit status=256.
        20:10:23.865 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info.
        Error: "/tmp/kde-asaf" is owned by uid 1000 instead of uid 0.
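
    A minimal sanity check, sketched here as an assumption rather than taken from the original post: confirm the limits.conf entries are actually in force for the current session (they only apply after a fresh login) and that the user can reach the FireWire device nodes FFADO needs.

        ulimit -r    # should report 99, the max realtime priority granted to @audio
        ulimit -l    # should report 500000, the max locked memory in KB
        groups       # the user must be a member of the "audio" group
        ls -l /dev/raw1394 /dev/fw*   # FFADO needs read/write access to whichever node exists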

    Read the article

  • nginx rewrite or internal redirection cycle

    - by gyre
    I'm banging my head against a table trying to figure out what is causing a redirection cycle in my nginx configuration when I try to access a URL which does not exist. The configuration goes as follows:

        server {
            listen 127.0.0.1:8080;
            server_name .somedomain.com;
            root /var/www/somedomain.com;

            access_log /var/log/nginx/somedomain.com-access.nginx.log;
            error_log /var/log/nginx/somedomain.com-error.nginx.log debug;

            location ~* \.php.$ {
                # Proxy all requests with an URI ending with .php*
                # (includes PHP, PHP3, PHP4, PHP5...)
                include /etc/nginx/fastcgi.conf;
            }

            # all other files
            location / {
                root /var/www/somedomain.com;
                try_files $uri $uri/ ;
            }

            error_page 404 /errors/404.html;
            location /errors/ {
                alias /var/www/errors/;
            }

            # this loads a custom logging configuration which disables favicon error logging
            include /etc/nginx/drop.conf;
        }

    This domain is a simple static HTML site used for testing purposes. I'd expect the error_page directive to kick in when PHP-FPM cannot find the given files, since I have fastcgi_intercept_errors on; in the http block and error_page set up, but I'm guessing the request fails even before that, somewhere in the internal redirects. Any help would be much appreciated.
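
    One adjustment worth testing, offered as a hedged sketch rather than a confirmed fix: make the final try_files fallback explicit, so a request that matches neither a file nor a directory is answered with a 404 directly instead of being re-run as another internal redirect.

        location / {
            root /var/www/somedomain.com;
            # "=404" ends the lookup with a 404 status, which then hands off to error_page
            try_files $uri $uri/ =404;
        }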

    Read the article

  • Tuning Windows 7 for use in a VM

    - by intuited
    I'm running Windows 7 in a VirtualBox Virtual Machine, and would like to make it run in a more streamlined fashion. I'll be using the install primarily for testing web apps, and have no need for it to run quickly. I would like it to run with minimal memory requirements, and with minimal changes to its virtual hard drive's contents. Changes to the hard drive contents, for example the paging file, result in larger snapshot sizes. Another recent post of mine seems to be related to this issue, but does not directly address issues with Windows. One concern that I have is that Windows seems to be using 17% of its paging file even with over 900MB of memory marked "Standby" or "Free". My uneducated guess is that this is being used to store indexes or some other data that helps to speed up the system but is not really necessary. I'm also wondering if it's normal for Windows to use over 500 MB of "In Use" memory with no apps running. Will this amount decrease if I reduce the amount of "installed" memory in the VM? What steps can I take to reduce the system's memory footprint without incurring an increase in paging file usage?
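
    A few knobs that typically shrink an idle Windows 7 guest's footprint, sketched with assumed sizes and service names (none of this comes from the original post, so try it on a snapshot first):

        :: Stop Windows from managing the page file, then pin it to a small fixed size (512 MB is an assumption)
        wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
        wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=512,MaximumSize=512

        :: Disable SuperFetch (service name SysMain) and Windows Search indexing, which keep caches resident in RAM
        sc config SysMain start= disabled
        sc config WSearch start= disabled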

    Read the article

  • Exchange 2003 ActiveSync problem

    - by colemanm
    We're having problems getting iPhones to sync properly with SBS 2003 Exchange. When you add a new Exchange ActiveSync account on an iPhone and enter all the pertinent information, it shows a "Verifying Exchange account info" message for a minute or so, then says everything's verified and asks what you want to sync, Mail, Contacts, Calendars... so it looks like it's working. However, when you go to the Mail app and select the Exchange email account, it just shows an "Inbox" folder with nothing in it. When you try refreshing, it attempts for a second, then says "Last Updated" with a timestamp, as if it worked, but there's no mail and no error message/feedback at all. I think I've narrowed it down to some sort of certificate issue, but I'm having trouble finding out where to go from here... I ran MS's Exchange connectivity testing tool with these results: Our cert was purchased from Network Solutions, and I'd already added it to the IIS Default Website for OWA purposes. But this report makes it look like the cert is somehow problematic. I don't know what to do now... Here's a shot of the cert details, just in case:
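
    Since the connectivity report points at the certificate, one hedged diagnostic (mail.example.com stands in for the real ActiveSync hostname) is to look at the chain the server actually presents; a missing intermediate certificate is a common reason ActiveSync fails while desktop browsers, which fetch intermediates themselves, still work.

        openssl s_client -connect mail.example.com:443 -showcerts
        # In the "Certificate chain" section, every intermediate between the server
        # certificate and the Network Solutions root should be listed.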

    Read the article

  • Disabling Skype automatic update

    - by user13267
    How do I stop Skype from searching for, or at least downloading, an update without my consent? I want to stop that annoying "Update Skype now" dialog box, which keeps popping up before I log in to Skype and after I log in, from appearing at all. A few months ago this used to work:

    1. Go to the C:\Users\"YourName"\AppData\Local\Temp folder.
    2. Find the file called SkypeSetup.exe and delete it.
    3. Create a text file in the folder and rename it to SkypeSetup.exe.
    4. Right-click on the new file you just created and open its properties.
    5. Left-click the Security tab, then left-click the Advanced button.
    6. Left-click "Change Permissions" and then "Add". Enter "Everyone" (without the quotes) where it says "Enter the object name to select (examples):" and click "OK".
    7. Check the "Deny" box for "Full control" and click "OK".

    (obtained from HERE), but now it seems this has stopped working. The worst part is that Skype seems to download ~30MB of executable setup file without my knowledge before bugging me with the dialog box to update, and there seems to be no direct way to disable this download. Disabling the Skype Updater service does not seem to work either. Is there any kind of patch or registry hack I can use to stop Skype from auto-updating? Or should I start looking for an alternative to Skype altogether?
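
    The same deny-permissions trick from the numbered steps can be expressed as commands; a hedged batch sketch, assuming the default per-user temp folder and the built-in Everyone group.

        del "%LOCALAPPDATA%\Temp\SkypeSetup.exe"
        type nul > "%LOCALAPPDATA%\Temp\SkypeSetup.exe"
        :: Deny Everyone full control so the updater cannot overwrite the placeholder file
        icacls "%LOCALAPPDATA%\Temp\SkypeSetup.exe" /deny Everyone:(F)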

    Read the article

  • When running a PowerShell script as a scheduled task, some Exchange 2010 database properties are null

    - by barophobia
    Hello, I've written a script that retrieves the DatabaseSize of a database from Exchange 2010. I created a new AD user for this script to run under as a scheduled task. I gave this user admin rights to the Exchange organization (as a last resort during my testing) and local admin rights on the Exchange machine. When I run this script manually by starting PowerShell (with runas /noprofile /user:domain\user powershell) everything works fine: all the database properties are available. When I run the script as a scheduled task, a lot of the properties are null, including the one I want: DatabaseSize. I've also tried running the script as the domain admin account, with the same results. There must be something different between the two contexts, but I can't figure out what it is. My script:

        Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010
        Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Starting script"
        $databases = get-mailboxdatabase -status
        if($databases -ne $null)
        {
            Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Object created"
            $databasesize_text = $databases.databasesize.tomb().tostring()
            if($databasesize_text -ne $null)
            {
                $output = "echo "+$databasesize_text+":ok"
                Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Path check"
                if(test-path "\\mon-01\prtgsensors\EXE\")
                {
                    Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Path valid"
                    Set-Content \\mon-01\prtgsensors\EXE\ex-05_db_size.bat -value $output
                }
                Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Exiting program"
            }
            else
            {
                Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "databasesize_text is empty. nothing to do"
            }
        }
        else
        {
            Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "object not created. nothing to do"
        }
        exit 0
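
    One hedged way to pin down the difference is to dump everything the cmdlet returns from inside the scheduled task and again from the interactive shell, then diff the two files; the output path below is an assumption.

        Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010
        # Dump every property (including DatabaseSize) so the task-context and interactive runs can be compared
        Get-MailboxDatabase -Status | Format-List * | Out-File C:\Temp\dbprops_task.txt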

    Read the article

  • Improving abysmal 802.11n wireless network

    - by concept
    I am in desperate need of help to improve the abysmal performance of my 802.11n wireless network. At best I get 30Mb/s (this is an internet download) from a technology that boasts 300Mb/s; even worse is the LAN, where the best I have ever gotten to date is 1Mb/s. It is literally quicker to copy the file to a USB stick and walk it to the other computer.

    The infrastructure is this:

    - AP is 802.11n only, broadcasting at both 2.4GHz and 5GHz
    - Mac with an 802.11a/b/g/n card is connected to the AP via 5GHz
    - Linux with an 802.11a/b/g/n card is connected to the AP via 2.4GHz

    I have conducted the following tests (results at the end of this post):

    - Internet-based speed test, wired and wireless
    - LAN file copy, wired and wireless

    I have read:

    - http://nutsaboutnets.com/troubleshooting-wi-fi-problems/
    - http://www.smallnetbuilder.com/wireless/wireless-basics/30664-5-ways-to-fix-slow-80211n-speed
    - http://www.wi-fiplanet.com/tutorials/7-tips-to-increase-wi-fi-performance.html
    - Slow file transfer on network between two 802.11n laptops (connected directly together via access point)
    - Wireless Network Performance Issues
    - Slower than expected 802.11n wireless network speeds

    I have made the following optimizations:

    - AP broadcasts only 802.11n on both 2.4GHz and 5GHz frequencies
    - 2.4GHz is on the channel with the least interference (I live in an apartment with lots of APs); this did make a 10Mb/s improvement
    - Our AP is the only one transmitting on the 5GHz frequency
    - Security: WPA Personal, WPA2 AES encryption
    - Bandwidth: 20MHz / 40MHz (I assume this to be channel bonding)

    I have tried the following with zero improvement:

    - Dropped the Fragment Threshold to 512
    - Dropped the Request To Send (RTS) Threshold to 512 and 1
    - Even thought of buying a frequency spectrum analyzer, until I saw the cost of them!

    Speed test results:

    - Linux wired: DOWNLOAD 128.40Mb/s, UPLOAD 10.62Mb/s (www.speedtest.net/my-result/2948381853)
    - Mac wired: DOWNLOAD 118.02Mb/s, UPLOAD 10.56Mb/s (www.speedtest.net/my-result/2948384406)
    - Linux wireless: DOWNLOAD 23.99Mb/s, UPLOAD 10.31Mb/s (www.speedtest.net/my-result/2948394990)
    - Mac wireless: DOWNLOAD 22.55Mb/s, UPLOAD 10.36Mb/s (www.speedtest.net/my-result/2948396489)

    LAN NFS copy of a 53,345,087 byte (51MB) file:

    - Linux/Mac NFS wired: 65.6959 Mb/sec
    - Linux/Mac NFS wireless: 0.9443 Mb/sec

    All help is appreciated; even testing methods will be accepted.
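
    For the testing-methods part, one hedged approach is to measure raw TCP throughput with iperf (assuming it is installed on both machines), which takes NFS and disk speed out of the picture and shows what the radio link itself can do.

        # On the Mac (or either endpoint), start the server side
        iperf -s

        # On the Linux box, run the client against the other machine's address
        # (192.168.1.10 is a placeholder IP; -t 30 runs the test for 30 seconds)
        iperf -c 192.168.1.10 -t 30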

    Read the article

  • Setting up a localhost mail server on Mac OSX

    - by Thom
    I asked this over on Stack Overflow. I would love to be able to test PHP web apps that require emailing registration info etc. on my Mac. I downloaded a version of CommuniGate Pro. I need to mail either to an account inside or outside (whichever is best) of the localhost. Again, this would be used for testing purposes, to verify and debug my code prior to uploading to a hosting service. Any ideas, help and/or examples would be very much appreciated. If it would be easier I could go over to Windows XP; that would just mean setting up WAMP and transferring my files over from the Mac side via Dropbox. I got the local mail server to work, so I can send emails between accounts. However, I cannot seem to get the PHP code to work. I know that I am missing something.

        <?php
        function email($to, $subject, $body, $headers) {
            $headers = 'MIME-Version: 1.0' . "\r\n";
            $headers .= 'From: <[email protected]>' . "\r\n";
            mail($to, $subject, $body, $headers);
        }
        ?>

    Read the article

  • Setting up a localhost mail server on Mac OSX

    - by Thom
    I asked this over on Stack Overflow; they pointed me here. I would love to be able to test PHP web apps that require emailing registration info etc. on my Mac. I downloaded a version of CommuniGate Pro. I need to mail either to an account inside or outside (whichever is best) of the localhost. Again, this would be used for testing purposes, to verify and debug my code prior to uploading to a hosting service. Any ideas, help and/or examples would be very much appreciated. If it would be easier I could go over to Windows XP; that would just mean setting up WAMP and transferring my files over from the Mac side via Dropbox. I got the local mail server to work, so I can send emails between accounts. However, I cannot seem to get the PHP code to work. I know that I am missing something. I see where this has been asked before. I want to add that I am using XAMPP on Mac OS 10.6.8. I tried changing the php.ini SMTP setting to macintosh-3.local.

        <?php
        function email($to, $subject, $body, $headers) {
            $headers = 'MIME-Version: 1.0' . "\r\n";
            $headers .= 'From: <[email protected]>' . "\r\n";
            mail($to, $subject, $body, $headers);
        }
        ?>
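
    A hedged way to split the problem in two: speak SMTP to the local server by hand. If this session delivers a message, the server side is fine and the remaining question is how PHP's mail() hands messages off (the addresses below are placeholders).

        telnet localhost 25
        HELO localhost
        MAIL FROM:<test@localhost>
        RCPT TO:<someuser@localhost>
        DATA
        Subject: telnet test

        test body
        .
        QUIT

    Worth noting as a general PHP detail rather than anything from the original question: the SMTP and smtp_port settings in php.ini only apply on Windows. On OS X, mail() pipes the message to whatever sendmail_path points at, so under XAMPP that is the setting that decides whether the local server ever sees the message.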

    Read the article

  • How can I recover from SharePoint configuration errors after promoting the server to a Domain Controller?

    - by jjr2527
    I have a SharePoint 2010 VM set up in VirtualBox, and I was using local machine accounts to handle security on the server. While preparing for a demo, it came time to have some meaningful users on my VM image, so I followed some docs on promoting my server to a Domain Controller in a new forest. So now I have [MachineName].SPDEMO.CONTOSO.com and I can add users as needed. However, when I try to connect to my SharePoint sites I get a white screen with the error: "Cannot connect to the configuration database". I changed the pool identity account of each of my IIS app pools to the new Administrator account and started the services successfully, but I can't get the SQL services to start. When I try to start them I get the following error:

        Windows could not start the SQL Server (MSSQLSERVER) on Local Computer. For more information,
        review the System Event Log. If this is a non-Microsoft service, contact the service vendor,
        and refer to service-specific error code 17058.

    In the event log I see the following error:

        The SQL Server (MSSQLSERVER) service terminated with service-specific error %%17058.

    Can I recover from this, or should I roll back or just uninstall the Domain Controller role? I'd like to keep the server as a standalone DC so I can do some user profile creation/management, but I need the SharePoint bits to work as well.
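
    Error 17058 usually means the SQL Server service account can no longer start up cleanly (classically, it cannot write its error log), and a promotion to a domain controller changes which local accounts and local group memberships still exist. A hedged first check of what the service is actually running as:

        sc qc MSSQLSERVER
        :: SERVICE_START_NAME in the output is the account that needs its rights re-granted,
        :: or swapping for a domain account, now that the machine is a DC.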

    Read the article

  • Cisco Multi-DMZ firewall

    - by BParker
    I need to find a firewall that will give me 1 LAN port and 5-7 DMZ ports. I have a requirement to replace some FreeBSD systems that are used to run some testing equipment. It is essential that the DMZ ports cannot communicate with each other, but the LAN port can communicate with everyone. That way a user on the LAN can connect to the test systems, but the test systems are isolated entirely and cannot interfere with each other. One of the DMZs will be connected to a VMware ESXi server, one to a standard server, and the rest to various types of equipment. The LAN port will be connected to the corporate LAN switch. Sorry if I am a little vague, I am just trying to work all this out myself! Currently we have a FreeBSD box configured, but the quad-port NICs are pretty expensive, and the PC itself is old, so I would prefer to replace it with a dedicated piece of kit which can do the same job, but more reliably. These test rigs are used all over the place and get moved quite often, so I am aiming for Cisco kit for ease of configuration and reliability of the hardware itself. Thanks

    Read the article

  • How can I debug user mode driver failures in Windows 8

    - by Tom
    I have a 32 GB SD card. Whenever I insert this card into my newly upgraded Windows 8 laptop, the OS stops responding normally: Metro apps won't work, the system may or may not log in, and desktop apps may or may not be able to do things. When I remove the card and restart, all is fine. As soon as I put the card back in, the system starts misbehaving again. I've run Windows Update, so I have the latest drivers from Microsoft. This does not occur with the 8 GB cards I have; unfortunately I only have one 32 GB card, so I can't test with others.

    From examining the system event log I've determined this is happening due to a user mode driver failure. How can I best debug this issue from here? How can I figure out which driver this is related to? Will there be a Dr. Watson crash dump somewhere? Details:

        - System
          - Provider
            [ Name]  Microsoft-Windows-DriverFrameworks-UserMode
            [ Guid]  {2E35AAEB-857F-4BEB-A418-2E6C0E54D988}
          EventID 10110
          Version 1
          Level 1
          Task 64
          Opcode 0
          Keywords 0x2000000000000000
          - TimeCreated
            [ SystemTime]  2012-10-29T00:51:57.532718300Z
          EventRecordID 40417
          Correlation
          - Execution
            [ ProcessID]  1056
            [ ThreadID]  3796
          Channel System
          Computer thebrain
          - Security
            [ UserID]  S-1-5-18
        - UserData
          - UMDFHostProblem
            [ lifetime]  {811E3DC4-FBC6-420B-ABCC-AD7505A36F3B}
            - Problem
              [ code]  3
              [ detectedBy]  2
            ExitCode 3
            - Operation
              [ code]  259
            Message 72448
            Status 4294967295

    Edit 1: So I tried using DebugView from Sysinternals (you can get it here: http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx). That gave me this information: which is not especially helpful. Then I tried connecting WinDbg to WUDFHost.exe (the process that seems to host user mode drivers) to see if it could catch the error. Get it here: http://msdn.microsoft.com/en-US/windows/hardware/hh852363 Instructions: http://msdn.microsoft.com/en-US/library/windows/hardware/ff554716(v=vs.85).aspx That didn't help much. It didn't catch any exceptions as I'd hoped (which would point me to the cause of the crash at least). Here's the stack of one of the threads:
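
    To pull every UMDF host problem event for closer inspection, a hedged PowerShell sketch using the provider name from the event above (run from PowerShell; elevate if access is denied):

        Get-WinEvent -FilterHashtable @{
            LogName      = 'System'
            ProviderName = 'Microsoft-Windows-DriverFrameworks-UserMode'
        } | Format-List TimeCreated, Id, Message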

    Read the article

  • Nginx - Enable PHP for all hosts

    - by F21
    I am currently testing out nginx and have set up some virtual hosts by putting the configuration for each virtual host in its own file in a folder called sites-enabled. I then ask nginx to load all those config files using:

        include C:/nginx/sites-enabled/*.conf;

    This is my current config:

        http {
            server_names_hash_bucket_size 64;
            include mime.types;
            include C:/nginx/sites-enabled/*.conf;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;

            server {
                listen 80;
                root C:/www-root;
                #charset koi8-r;
                #access_log logs/host.access.log main;

                location / {
                    index index.html index.htm index.php;
                }

                # redirect server error pages to the static page /50x.html
                #
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root html;
                }

                location ~ \.php$ {
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }
            }

            server {
                server_name localhost;
            }
        }

    And this is one of the configs for a virtual host:

        server {
            server_name testsubdomain.testdomain.com
            root C:/www-root/testsubdomain.testdomain.com;
        }

    The problem is that for testsubdomain.testdomain.com, I cannot get PHP scripts to run unless I have defined a location block with fastcgi parameters for it. What I would like to do is enable PHP for all sites hosted on this server (without having to add a PHP location block with fastcgi parameters to each one), for maintainability: if I need to change any fastcgi values for PHP, I can then change them in one place. Is this something that's possible with nginx? If so, how can it be done?
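
    nginx does not accept a location block directly at the http level, so the usual pattern, sketched here with assumed file names, is to move the PHP handler into its own file and include it from every server block; the fastcgi settings then live in one place.

        # C:/nginx/php.conf (assumed file name)
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        # in each file under sites-enabled/:
        server {
            server_name testsubdomain.testdomain.com;
            root C:/www-root/testsubdomain.testdomain.com;
            include C:/nginx/php.conf;
        }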

    Read the article

  • Rsync: General file/folder synchronization

    - by Rey Leonard Amorato
    I have a file server which is in charge of pulling a folder tree from multiple workstations on a daily basis. My current method for this is rsync, which works pretty well provided directory names and/or files remain the same. However, when files are renamed or moved about within subdir1, rsync will copy them over to the server again, creating duplicates. I then have to manually find and delete the extraneous files/folders that had been left on the server during previous syncs. Note that I cannot use rsync's --delete flag, because any sync from a workstation would then mirror that particular folder tree instead of merging it into the server. Visual diagram:

        Server:            Workstation1:      Workstation2:      Workstation(n):
        Folder*            Folder*            Folder*            Folder*
         -subdir1           -subdir1           -subdir1           -subdir(n)
          -file1             -file1             -file2             -file(n)
          -file2
          -file(n)

    Is there a simple script (preferably in bash, nothing fancy) that can accomplish the deletion of the extraneous files/folders in the event a file is renamed or moved to a different subdir? Is there a different program, much like rsync, that can accomplish this task autonomously and in a much simpler manner? I have looked at unison, but I did not like the fact that it keeps a local database for the syncing info. Any tips at all as to how I am supposed to tackle this? Thank you in advance for your help.

    EDIT: I have tried unison just recently and I can safely say it is out of the question now. unison is a bi-directional synchronization tool, and from my testing it mirrors the files existing on the server to all workstations. That is unwanted; preferably, I would want files/folders to stay within their respective workstations and just merge into the server. In other words, uni-directional sync, but with renames/moves propagated to the server. I might have to look into Git/Mercurial/Bazaar as mentioned by kyle, but I am still unsure if they are fit for the job.

    Read the article

  • Windows Task Scheduler fails on EventData instruction

    - by Pete
    The scheduled task fails on the EventData instruction in this XML:

        <ValueQueries>
          <Value name="eventChannel">Event/System/Channel</Value>
          <Value name="eventRecordID">Event/System/EventRecordID</Value>
          <Value name="eventData">Event/EventData/Data</Value>
        </ValueQueries>

    The other 2 fields can be passed as arguments, and the EventData syntax matches other websites, so I don't know why it's failing. This is the Event Viewer XML:

        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Aptify.ExceptionManagerPublishedException" />
            <EventID Qualifiers="0">0</EventID>
            <Level>2</Level>
            <Task>0</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2013-11-07T19:39:14.000000000Z" />
            <EventRecordID>97555</EventRecordID>
            <Channel>Application</Channel>
            <Computer>[Computer Name]</Computer>
            <Security />
          </System>
          <EventData>
            <Data>General Information
            *********************************************
            Additional Info:
            ExceptionManager.MachineName: [Computer Name]
            ExceptionManager.TimeStamp: 11/7/2013 12:39:14 PM
            ExceptionManager.FullName: AptifyExceptionManagement, Version=4.0.0.0, Culture=neutral, PublicKeyToken=[key]
            ExceptionManager.AppDomainName: Aptify Shell.exe
            ExceptionManager.ThreadIdentity:
            ExceptionManager.WindowsIdentity: ACA_DOMAIN\pbassett

            1) Exception Information
            *********************************************
            Exception Type: Aptify.Framework.BusinessLogic.GenericEntity.AptifyGenericEntityValidationException
            Entity: Tasks
            ErrorString: Task Type "Make Contact" is not active.
            MachineName: [machine]
            CreatedDateTime: 11/7/2013 12:39:14 PM
            AppDomainName: Aptify Shell.exe
            ThreadIdentityName:
            WindowsIdentityName: [identity]
            Severity: 0
            ErrorNumber: 0
            Message: Task Type "Make Contact" is not active.
            Data: System.Collections.ListDictionaryInternal
            TargetSite: Boolean Save(Boolean, System.String ByRef, System.String)
            HelpLink: NULL
            Source: AptifyGenericEntity

            StackTrace Information
            *********************************************
            at Aptify.Framework.BusinessLogic.GenericEntity.AptifyGenericEntity.Save(Boolean AllowGUI, String& ErrorString, String TransactionID)</Data>
          </EventData>
        </Event>

    Read the article

  • Subversion all or nothing access to repo tree

    - by Glader
    I'm having some problems setting up access to my Subversion repositories on a Linux server. The problem is that I can only seem to get an all-or-nothing structure going: either everyone gets read access to everything, or no one gets read or write access to anything. The setup: SVN repositories are located in /www/svn/repoA, repoB, repoC, and so on. The repositories are served by Apache, with Locations defined in /etc/httpd/conf.d/subversion.conf as:

        <Location /svn/repoA>
            DAV svn
            SVNPath /var/www/svn/repoA
            AuthType Basic
            AuthName "svn repo"
            AuthUserFile /var/www/svn/svn-auth.conf
            AuthzSVNAccessFile /var/www/svn/svn-access.conf
            Require valid-user
        </Location>

        <Location /svn/repoB>
            DAV svn
            SVNPath /var/www/svn/repoB
            AuthType Basic
            AuthName "svn repo"
            AuthUserFile /var/www/svn/svn-auth.conf
            AuthzSVNAccessFile /var/www/svn/svn-access.conf
            Require valid-user
        </Location>
        ...

    svn-access.conf is set up as:

        [/]
        * =

        [/repoA]
        * =
        userA = rw

        [/repoB]
        * =
        userB = rw

    But checking out URL/svn/repoA as userA results in Access Forbidden. Changing it to

        [/]
        * =
        userA = r

        [/repoA]
        * =
        userA = rw

        [/repoB]
        * =
        userB = rw

    gives userA read access to ALL repositories (including repoB) but only read access to repoA! So in order for userA to get read-write access to repoB I need to add

        [/]
        userA = rw

    which is mental. I also tried changing Require valid-user to Require user userA for repoA in subversion.conf, but that only gave me read access to it. I need a way to default-deny everyone access to every repository, granting read/write access only when explicitly defined. Can anyone tell me what I'm doing wrong here? I have spent a couple of hours testing and googling but come up empty, so now I'm doing the post of shame.
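
    One hedged adjustment, based on how mod_authz_svn reads section names rather than anything stated in the original post: a section written as [/repoA] means "the path /repoA inside whichever repository is being accessed", not "repository repoA". Qualifying each section with the repository name keeps the default deny in [/] while opening individual repositories:

        [/]
        * =

        [repoA:/]
        userA = rw

        [repoB:/]
        userB = rw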

    Read the article

  • VPN being blocked somewhere between my BT2700HGV and DLink DFL-210

    - by Dom
    Hi, For some time I have been unable to get VPN working through my set up. I have a BT2700HGV Router (2 Wire Model) and as a firewall I have the Dlink DFL-210. You can't turn the firewall completely off on the BT2700HGV so I have it set to DMZplus mode for the Firewall. In theory this should then allow the VPN ports through. On the Firewall I have a series of rules set up and one is the pptp-allow rule which should allow access on the correct ports also. When I try to connect via VPN however the client machine gets an error 809. If I check the log on the Dlink firewall, I see this record: http://dl.dropbox.com/u/1041315/packetdrop.PNG The laptop I am testing the vpn with is connected directly to the BT2700HGV router and I am trying to VPN from it onto 81.138.86.217. I can't work out whether I have some sort of problem in the set up of the rules on my firewall or if the BT router (even though it's in DMZplus mode) is still blocking port 1723. I read somewhere that there where problems because BTs Openzone held onto this port for some reason. Any help would be greatly appreciated. If you need further screen shots or information then please let me know. I wasn't able to create new tags for the router and firewall name or insert the picture in as I am new to the forum. Dom :-)

    Read the article

  • SpamAssassin bayesian score discrepancies

    - by CaptSaltyJack
    This makes my brain hurt. For some reason, SpamAssassin is giving high scores to certain emails, but when I test them on the command line, they get a low score. This one particular email has this in the header:

        X-Spam-Flag: YES
        X-Spam-Score: 8.521
        X-Spam-Level: ********
        X-Spam-Status: Yes, score=8.521 tagged_above=-9999 required=5
            tests=[BAYES_99=3.5, BAYES_999=0.2, HTML_MESSAGE=0.001, NO_RECEIVED=-0.001,
            NO_RELAYS=-0.001, RAZOR2_CF_RANGE_51_100=0.5, RAZOR2_CF_RANGE_E8_51_100=1.886,
            RAZOR2_CHECK=0.922, URIBL_RHS_DOB=1.514] autolearn=no

    Yet when I dump the raw email into a file msg and run sudo su amavis -c 'spamassassin -t msg', I get this output:

        Content analysis details:   (3.8 points, 5.0 required)

         pts rule name              description
        ---- ---------------------- --------------------------------------------------
         1.5 URIBL_RHS_DOB          Contains an URI of a new domain (Day Old Bread)
                                    [URIs: cliobeads.com]
        -1.0 ALL_TRUSTED            Passed through trusted hosts only via SMTP
         0.0 HTML_MESSAGE           BODY: HTML included in message
        -0.0 BAYES_20               BODY: Bayes spam probability is 5 to 20%
                                    [score: 0.1855]
         1.9 RAZOR2_CF_RANGE_E8_51_100 Razor2 gives engine 8 confidence level above 50%
                                    [cf: 100]
         0.5 RAZOR2_CF_RANGE_51_100 Razor2 gives confidence level above 50%
                                    [cf: 100]
         0.9 RAZOR2_CHECK           Listed in Razor2 (http://razor.sf.net/)

    I'm really confused as to why an email gets a completely different score when it comes in than when I run spamassassin -t. Is there some other way I should be testing emails? Also, my users have the ability to drag false positives into a folder called "False Positives", and every day a cron job fires off that runs this on every message in every user's folder:

        sa-learn --dbpath=/var/lib/amavis/.spamassassin --ham /tmp/*-*.eml >/dev/null

    I ran sudo locate bayes_toks and there's definitely only one Bayes DB on the system, in /var/lib/amavis/.spamassassin. I'm clueless; any help would be great and may help restore my sanity!
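
    A hedged way to confirm both paths really read the same Bayes database is to run the command-line check with Bayes debugging on and to inspect the database the nightly sa-learn job writes to (the dbpath matches the cron job above).

        # Show which Bayes database and token counts the CLI run actually uses
        sudo su amavis -c 'spamassassin -D bayes -t msg 2>&1 | grep -i bayes'

        # Inspect the database the nightly sa-learn job updates
        sudo su amavis -c 'sa-learn --dbpath=/var/lib/amavis/.spamassassin --dump magic'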

    Read the article

  • SQL Server 2005 SP3 on Windows 7 - No Management Studio

    - by Mike Thomas
    I've been trying for a day and a half now to get SQL Server 2005, Developer Edition, to work on Windows 7, 64-bit Professional. I install from the disk, then run SP3. I get a failure on the Client Components section of the Installation Progress, along with this vague message:

        Product                   : Client Components
        Product Version (Previous): 1399
        Product Version (Final)   :
        Status                    : Failure
        Log File                  : C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\LOG\Hotfix\SQLTools9_Hotfix_KB955706_sqlrun_tools.msp.log
        Error Number              : 1712
        Error Description         : MSP Error: 1712  One or more of the files required to restore your computer to
                                    its previous state could not be found. Restoration will not be possible.

    I've uninstalled all Visual Studio versions and tried to make this as clean as possible, and have read a lot of the blog posts, but am really at my wits' end about this. I am not a DBA, but I use SQL Server all the time when coding and testing apps. Does anyone have any ideas as to where I can get this sorted out? I've been at this for a long time and have never encountered an installation as bad as this one. Thanks, Mike Thomas

    Read the article

  • Why does "commit" appear in the mysql slow query log?

    - by Tom
    In our MySQL slow query logs I often see lines that just say "COMMIT". What causes a commit to take time? Another way to ask this question is: "How can I reproduce getting a slow commit; statement with some test queries?"

    From my investigation so far I have found that if there is a slow query within a transaction, then it is the slow query that gets output into the slow log, not the commit itself.

    Testing in the mysql command line client:

        mysql> begin;
        Query OK, 0 rows affected (0.00 sec)

        mysql> UPDATE members SET myfield=benchmark(9999999, md5('This is to slow down the update')) WHERE id = 21560;
        Query OK, 0 rows affected (2.32 sec)
        Rows matched: 1  Changed: 0  Warnings: 0

    At this point (before the commit) the UPDATE is already in the slow log.

        mysql> commit;
        Query OK, 0 rows affected (0.01 sec)

    The commit happens fast; it never appeared in the slow log. I also tried an UPDATE which changes a large amount of data, but again it was the UPDATE that was slow, not the COMMIT. However, I can reproduce a slow ROLLBACK that takes 46s and gets output to the slow log:

        mysql> begin;
        Query OK, 0 rows affected (0.00 sec)

        mysql> UPDATE members SET myfield=CONCAT(myfield,'TEST');
        Query OK, 481446 rows affected (53.31 sec)
        Rows matched: 481446  Changed: 481446  Warnings: 0

        mysql> rollback;
        Query OK, 0 rows affected (46.09 sec)

    I understand why rollback has a lot of work to do and therefore takes some time. But I'm still struggling to understand the COMMIT situation, i.e. why it might take a while.
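
    Commit time is mostly spent flushing the InnoDB log and the binary log to disk, so one hedged avenue is to check how aggressively each flush is configured and whether fsyncs queue up when the slow commits appear; these are standard server variables, not settings taken from the original question.

        -- How InnoDB flushes its log at commit (1 = fsync on every commit)
        SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

        -- Whether the binary log is fsynced per commit (sync_binlog=1) or left to the OS
        SHOW VARIABLES LIKE 'sync_binlog';

        -- Pending fsyncs hint that commits are queueing behind slow I/O
        SHOW GLOBAL STATUS LIKE 'Innodb_os_log_pending_fsyncs';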

    Read the article

  • Are there any wireless webcams/cameras that Windows will recognize as a capture device?

    - by Keithius
    I'd like to have a webcam in a different room from my computer, and the distance means USB is out of the question. I know there are many wireless cameras, but what I can't seem to find out is if any of them would be recognized by Windows as a capture device (just like a locally connected USB webcam). Most of the wireless cameras I can find (e.g., D-Link DCS920; Cisco-Linksys WVC54GCA, etc.) can all stream video directly from the camera itself, which is fine if you're using the camera as a "security" camera (for private use only), but not for other uses (say, sending the video to an online video streaming service, e.g., Ustream). It seems like this should be possible; after all, wireless (WiFi) printers with scanners are recognized by Windows. Are there any wireless (WiFi) cameras out there that would be recognized by Windows as a capture device in the same way as a USB webcam would? Alternatively, a camera that's not wireless (e.g., connects via Ethernet) would do the trick too - but I imagine if anyone is going to make a remote camera like this, they'd go the extra step and make it wireless, too.

    Read the article

  • Exchange 2003 IMAP not working for some users

    - by John Gardeniers
    We normally don't have a need for IMAP connections from outside the company network but in order to allow a one user to use IMAP on a portable device I've turned it on and opened port 993 on the firewall. When the user in question was unable to get connected I tested this using Outlook remotely. Start by creating a new IMAP account in Outlook using a test account. No problems, it worked perfectly. Now try the same thing using the account of the user who actually needs to connect and it's a no-go. Outlook simply keeps prompting for logon credentials. Next I tried using my own account and that too failed. Testing with a couple of other accounts worked perfectly. Interestingly enough, with my own account I've used IMAP on a MAC before (internally) without a problem and I'm not aware of anything that has changed which could affect IMAP on my account. Checking the user settings in ADUC showed that all accounts have the same Exchange protocol settings. Specifically, IMAP is enabled. A check of the event logs on the server reveals no entries for the connection attempts, making this kind of difficult to debug. Has anyone here encountered such a situation and, even more importantly, what caused it?
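
    Since the server logs show nothing for the failed logons, a hedged way to see the actual IMAP conversation is to log in by hand over the SSL port from outside (hostname and credentials below are placeholders); the tagged response to LOGIN usually says more than Outlook's password prompt does.

        openssl s_client -connect mail.example.com:993 -crlf
        a1 LOGIN username password
        a2 LIST "" *
        a3 LOGOUT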

    Read the article

  • error creating MS Exchange distribution list: Active directory response: 00000005: SecErr: DSID-031521D0

    - by BabakBani
    We've migrated a client from Google Apps to an MS Exchange 2010 SP2 on-premise setup. The setup /prepareAD went well, and the software was installed with the Administrator account. We've used the Exchange Management Console to set up mailboxes and had to google up the appropriate workarounds, such as going into each user's Advanced Security Settings and selecting "include inheritable permissions from this object's parents", and changing their logon-to from specific machines to "all computers" so that they can connect to Outlook Web Access, and in turn so their Outlook 2007-2010 clients can connect to Exchange. Sending and receiving emails are working well. Now that all this is in place, we can create Dynamic Distribution Lists with no problem, but as soon as we try to create a DISTRIBUTION LIST, either in the EMC or the Exchange PowerShell, we get an error. As the error message in the PowerShell is more verbose, I include it here in case anyone can suggest how we remedy this:

        [PS] C:\Windows\system32>new-DistributionGroup -Name 'projects' -SamAccountName 'projects' -Alias 'projects'

        Active Directory operation failed on DC.cppe.local. This error is not retriable. Additional information: Access is denied.
        Active directory response: 00000005: SecErr: DSID-031521D0, problem 4003 (INSUFF_ACCESS_RIGHTS), data 0
            + CategoryInfo          : NotSpecified: (0:Int32) [New-DistributionGroup], ADOperationException
            + FullyQualifiedErrorId : 1EA5CD3E,Microsoft.Exchange.Management.RecipientTasks.NewDistributionGroup

    Read the article

  • JBoss7 load balancing with mod_proxy_balancer - session not working

    - by Phil P.
    I am trying to set up mod_proxy_balancer to route requests to two JBoss 7 servers. For the time being I am testing this setup on my local machine, using the following config in httpd.conf:

        ProxyRequests Off

        <Proxy *>
            Order deny,allow
            Deny from all
        </Proxy>

        ProxyPass / balancer://mycluster/ stickysession=JSESSIONID|jsessionid scolonpathdelim=On

        <Proxy balancer://mycluster>
            BalancerMember http://localhost:8080 route=node1
            BalancerMember http://localhost:8081 route=node2
            Order allow,deny
            Allow from all
        </Proxy>

    and in the standalone.xml file of each JBoss instance I have defined the jvmRoute system property:

        <system-properties>
            <property name="jvmRoute" value="node1"/>
        </system-properties>

    At http://localhost/myapp the application is accessible, but the Java session is not built up correctly; consequently the authentication is not working. The funny thing is that everything works if I turn off one JBoss instance. As I have tried a couple of settings already, I am thankful for any further suggestions.
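
    One hedged suggestion, flagged as an assumption to verify against the AS7 documentation rather than a confirmed fix: in JBoss AS7 the route suffix that mod_proxy_balancer matches against comes from the web subsystem's instance-id attribute, not from a jvmRoute system property. A sketch of the standalone.xml change on node1 (the subsystem namespace version depends on the exact AS7 release):

        <subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host" instance-id="node1">
            <!-- existing connector and virtual-server definitions stay as they are -->
        </subsystem>

    With that in place, session cookies should end in ".node1" or ".node2", which is what the route= values in the balancer definition are compared against.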

    Read the article

  • 500 error when deploying a Rails application via Apache2 + Passenger

    - by user1633983
    I finally completed my own app, so the only work left is deploying it. I'm using Ubuntu 10.04 and Apache 2 (installed by apt-get), so I'm trying to deploy through Passenger. I installed the passenger gem like this:

        sudo gem install passenger
        rvmsudo passenger-install-apache2-module

    and I configured the Apache settings as the installation message says. I added the lines below in the middle of /etc/apache2/apache2.conf:

        LoadModule passenger_module /home/admin/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.17/ext/apache2/mod_passenger.so
        PassengerRoot /home/admin/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.17
        PassengerRuby /home/admin/.rvm/wrappers/ruby-1.9.3-p194/ruby

    and I appended the lines below to /etc/apache2/sites-available/default:

        <VirtualHost *:80>
            ServerName localhost
            # !!! Be sure to point DocumentRoot to 'public'!
            DocumentRoot /home/admin/homepage/public
            <Directory /home/admin/homepage/public>
                # This relaxes Apache security settings.
                AllowOverride all
                # MultiViews must be turned off.
                Options -MultiViews
            </Directory>
        </VirtualHost>

    But when I restart the Apache service and hit the address, a 500 error occurs. At first the 500 error page was Apache's, but when I reinstalled libapache2-module-passenger, the 500 error page changed to the one from Rails. Because of the Rails 500 error page (which is located at public/500.html), I think the Passenger module is properly connected with Apache. What should I do to fix this problem? Do I need to configure something inside my app before deployment?
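
    Since the Rails-styled 500 page means Passenger is spawning the app and the app itself is failing, a hedged first step is to read what the app and Apache logged; the paths below assume the DocumentRoot above and a production environment.

        tail -n 50 /home/admin/homepage/log/production.log
        tail -n 50 /var/log/apache2/error.log
        # Missing gems, a bad config/database.yml, or an unset secret token usually show up here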

    Read the article
