Search Results

Search found 9087 results on 364 pages for 'strong parameters'.

Page 281 of 364

  • Solaris 10 invalid ARP requests from 0.0.0.0? Link up/down every hour or 2

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 are telling me that my server is making a lot of invalid ARP requests. This is an example of a portion of what was sent to me from the logs (with MAC addresses and IP addresses changed):

        [mymacaddress]/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp]

    It's being logged every hour. I don't see anything in the ARP tables (arp -a) or routing tables (netstat -r), and I don't see anything relating to 0.0.0.0 when snooping the ARP requests. The only place I see any reference to 0.0.0.0 is in the SCTP section of netstat -a:

        SCTP: Local Address   Remote Address   Swind  Send-Q  Rwind   Recv-Q  StrsI/O  State
              0.0.0.0         0.0.0.0          0      0       102400  0       32/32    CLOSED

    But I'm not really sure what that means, and it doesn't seem like I can disable SCTP. There are some tunable SCTP parameters, but that's not something I'm familiar with. Do I have to add changes to /etc/system? It looks like sctp_heartbeat_interval might be what I need to change?

    If it makes any difference, I have a few Solaris zones running on this server, each with its own IP address on a virtual interface (eth0:0, eth0:1, etc.).

    Does anyone have any idea what might be causing this and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there any way to at least block those requests using ipfilter or something else?

    Update: This was happening more frequently, but now it seems to happen roughly every hour or every two hours; it's not consistent. I tried setting the link speed and duplex to match the switch port, and that seemed to stop it for a few hours, but then it started again.


  • trouble shooting ntfs-loop-xen combination in wubi based grub of Ubuntu

    - by Registered User
    Here is the situation: I installed Ubuntu on a laptop using Wubi, inside the Windows 7 drive (the laptop is not mine). The install worked and everything has run perfectly without any problem so far. We are now trying to set up a Xen (virtualization) environment on this laptop.

    After setting everything up cleanly, I need to boot with the following grub entry:

        menuentry "Xen Linux 2.6.32.27" {
            insmod ntfs
            set root='(hd0,2)'
            loopback loop0 /ubuntu/disks/root.disk
            set root=(loop0)
            multiboot /boot/xen.gz
            module /boot/vmlinuz-2.6.32.27 dummy=dummy root=/dev/sda2 loop=/ubuntu/disks/root.disk ro console=tty0
            module /boot/initrd.img-2.6.32.27
        }

    But I got these errors:

        error: file not found
        error: unknown command 'multiboot'
        error: unknown command 'module'
        error: unknown command 'module'

    To dig into this further I rebooted the machine, went to the grub command prompt, and manually entered each of the commands you see in the grub entry. When I reached grub> insmod multiboot I got the following message on screen:

        error: file not found

    It looks like this Wubi + grub setup has just enough modules to use a loopback file on NTFS, but the ACTUAL /boot directory is on the loopback, NOT on ntfs (hd0,2). Therefore any attempt to read those files from (hd0,2) simply won't work, because there are no such files there. I need the insmod multiboot command, and the multiboot and module commands, which are available in grub on a normal install without Wubi. But since the laptop is not mine, I am not allowed to repartition it and have to make this work in the current situation.

    A normal kernel still boots fine. How can I get the multiboot module in this Wubi-based install?


  • Scripted forwarding for Outlook 2003

    - by John Gardeniers
    We have a staff member in sales who has gone onto a 4-day week (getting ready for retirement), so each Thursday afternoon her email needs to be forwarded to another user, and each Friday afternoon it needs to be set back. I'm using the VBS script below to do this, run via the Task Scheduler.

    Although the script appears to do its job, based on what I see when I view the user's Exchange settings, Exchange doesn't always recognise that the setting has changed. E.g. last Thursday the forwarding was enabled and worked correctly. On Friday the script did its thing to clear the forwarding, but Exchange continued to forward messages all weekend. I found that I can force Exchange to honour the changed setting by merely opening and closing the user's properties in ADUC. Of course I don't want to have to do that. Is there a non-manual way I can have Exchange read and honour the setting?

    The script (VBS):

        ' Call this script with the following parameters:
        '
        '   SrcUser - The logon ID of the user whose account is to be modified
        '   DstUser - The logon account of the person to whom mail is to be forwarded
        '             Use "reset" to clear the email forwarding

        SrcUser = WScript.Arguments.Item(0)
        DstUser = WScript.Arguments.Item(1)

        SourceUser = SearchDistinguishedName(SrcUser) ' The user login name
        Set objUser = GetObject("LDAP://" & SourceUser)

        If DstUser = "reset" Then
            objUser.PutEx 1, "altRecipient", ""
        Else
            ForwardTo = SearchDistinguishedName(DstUser) ' The contact common name
            objUser.Put "AltRecipient", ForwardTo
        End If
        objUser.SetInfo

        Public Function SearchDistinguishedName(ByVal vSAN)
            Dim oRootDSE, oConnection, oCommand, oRecordSet
            Set oRootDSE = GetObject("LDAP://rootDSE")
            Set oConnection = CreateObject("ADODB.Connection")
            oConnection.Open "Provider=ADsDSOObject;"
            Set oCommand = CreateObject("ADODB.Command")
            oCommand.ActiveConnection = oConnection
            oCommand.CommandText = "<LDAP://" & oRootDSE.Get("defaultNamingContext") & ">;(&(objectCategory=User)(samAccountName=" & vSAN & "));distinguishedName;subtree"
            Set oRecordSet = oCommand.Execute
            On Error Resume Next
            SearchDistinguishedName = oRecordSet.Fields("DistinguishedName")
            On Error GoTo 0
            oConnection.Close
            Set oRecordSet = Nothing
            Set oCommand = Nothing
            Set oConnection = Nothing
            Set oRootDSE = Nothing
        End Function


  • Software/hardware to build video streaming server?

    - by Sasha Yanovets
    I am looking for a video streaming server solution, something like an online TV server, with the ability to make live broadcasts on the internet. What software could you recommend for that? What kind of hardware should it run on, and is there anything special to look for? I am looking for a solution that can scale up to at least 1000 simultaneous users online with good video resolution.

    I think a general answer on what direction to choose is useful, but here are more details on my specific case. I am looking for a solution almost from scratch: we have some video content that we've produced, but it is not delivered over the internet yet, and we are not tied to any particular vendor for now. We want to stream 24 hours a day as three 8-hour blocks, with the content changing every day, and we want the ability to make regular live broadcasts. I guess we will need several streaming quality options (low ~56 kb/s, mid ~273 kb/s). Some terms are just foreign to me (like play-truncation rate), so if you could point out which parameters we should be aware of, that would be great. The uplink to the internet is still to be determined; we plan to start with something and scale up along the way.

    If you already have some kind of media streaming server, just describe its configuration here (hardware, OS, software) and the peak number of concurrent users it serves. I think that could help people approaching this task.


  • TFTP Timing Out on Ubuntu VM

    - by valsidalv
    I'm running a Windows 7 PC with VMware installed, which hosts my Ubuntu VM (10.04 Lucid Lynx). I recently installed a DHCP server and TFTP (Xinet tftpd) using these instructions. I've mapped a network drive so that Windows has access to all the files in my VM through a 192.x.x.x IP address.

    I'm trying to put some custom firmware onto a router. The router has its own built-in TFTP utility that will download the image. It successfully manages to do everything, but it is slow because it writes the image to flash memory. There is another method that is much quicker because it writes to RAM directly, but it must use the TFTP server in Ubuntu. The issue I'm facing is that the Ubuntu TFTP transfer seems to be timing out: the transfer starts but never gets past ~60%.

    Here's my /etc/xinetd.d/tftp file (similar to a known working config):

        service tftp
        {
            protocol    = udp
            port        = 69
            socket_type = dgram
            wait        = yes
            user        = nobody
            server      = /usr/sbin/in.tftpd
            server_args = -s /home/user/tftp/
            disable     = no
            cps         = 300 2
            per_source  = 60
        }

    I've done some searching but can't find any parameters for this file to control the timeout or the number of retries. The last two arguments (cps, per_source) are completely alien to me (can anyone explain them?). I have a few possible solutions, but the easiest would be to get this TFTP server working. Can anyone help, either with a timeout configuration or maybe even a recommendation for a different TFTP server? Thanks!
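
    To check whether the timeouts come from the server side at all, it can help to pull the same file with a different client from another machine on the LAN. Below is a minimal TFTP read-request sketch in Python (the server address and filename are placeholders; it implements just enough of RFC 1350 to download one file in octet mode):

        import socket
        import struct

        SERVER = ("192.168.1.10", 69)        # placeholder: the Ubuntu VM's address
        FILENAME = b"firmware.bin"            # placeholder: a file under /home/user/tftp/

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(5)

        # RRQ packet: opcode 1, filename, NUL, mode, NUL
        sock.sendto(struct.pack("!H", 1) + FILENAME + b"\x00octet\x00", SERVER)

        received = 0
        while True:
            data, addr = sock.recvfrom(4 + 512)
            opcode, block = struct.unpack("!HH", data[:4])
            if opcode == 5:                   # ERROR packet
                print("server error:", data[4:].rstrip(b"\x00"))
                break
            received += len(data) - 4
            # ACK each DATA block so the server keeps sending.
            sock.sendto(struct.pack("!HH", 4, block), addr)
            if len(data) - 4 < 512:           # a short block ends the transfer
                break

        print("received %d bytes" % received)

    If this client completes the transfer reliably while the router stalls at ~60%, the problem is more likely in the router's TFTP implementation or in the network path than in xinetd/in.tftpd.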


  • How to move or delete files from a folder containing 2 million files on an NTFS drive?

    - by Beau
    The issue is that any modification to the directory locks up Explorer indefinitely, though Samba access to other directories still works. I've tried moving files locally and over Samba. Even enumerating the directory to get the list of files locks up the computer indefinitely. I tried using Python's win32file.FindFilesIterator to iterate the files, but that also hangs.

    My idea was to move each file to a different directory (in a directory above the directory we're dealing with) based on its timestamp, so that we'd have at most a thousand or so files in each directory... But since I can't even enumerate the files, that's been a non-starter. If I have to give up and just nuke the directory I'm willing to do that, but a standard delete also hangs indefinitely.

    I have set these two parameters to increase speed and they also did not help the issue:

        R:\>fsutil behavior query disablelastaccess
        disablelastaccess = 1

        R:\>fsutil behavior query disable8dot3
        disable8dot3 = 1

    These are all sequential images that would have run into the 'bug' with 8.3 filenames, whereby many similarly named files in one directory can take a long time to compute 8.3 filenames. From what I understand this data is stored in the file system even after disable8dot3 is enabled, so it may still be contributing to the problem. Any ideas?
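
    One idea worth trying is a lazy enumeration that starts moving files before the full listing is built. Here is a minimal Python 3 sketch of that approach (the drive paths and the per-day bucketing are illustrative assumptions, not taken from the question):

        import os
        import shutil
        from datetime import datetime

        SRC = r"R:\huge_folder"   # hypothetical path to the 2-million-file directory
        DST = r"R:\buckets"       # hypothetical parent directory for the per-day buckets

        # os.scandir() streams directory entries instead of building the whole
        # list up front, so the first files can be moved long before a full
        # enumeration (which may never finish here) would complete.
        with os.scandir(SRC) as entries:
            for entry in entries:
                if not entry.is_file(follow_symlinks=False):
                    continue
                # Bucket by modification date, e.g. R:\buckets\2012-05-17
                day = datetime.fromtimestamp(entry.stat().st_mtime).strftime("%Y-%m-%d")
                bucket = os.path.join(DST, day)
                os.makedirs(bucket, exist_ok=True)
                # If moving entries out of the directory being iterated causes
                # trouble, collect a batch of paths first and move them afterwards.
                shutil.move(entry.path, os.path.join(bucket, entry.name))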


  • hosts.deny ignored by MacOSX 10.8

    - by David Holm
    I have been trying to set up my Mac OS X Server, which I recently upgraded to Mountain Lion, to use denyhosts, as I need to open port 22 to it. denyhosts is set up and adds entries to /etc/hosts.deny, so I decided to add my laptop's IP to it in order to verify that it actually works. But I can still log in, and my IP shows up in /private/var/log/system.log. I even rebooted the server once just to be sure there wasn't some service that had to be restarted.

    I tried the following entries:

        ALL: <my laptop's IP>
        sshd: <my laptop's IP>
        sshd: 127.0.0.1

    My /etc/sshd_config has the following parameters set:

        UsePAM yes
        UseDNS no

    I Googled whether hosts.deny has been deprecated in OS X 10.7 or 10.8, but I couldn't find any indication that it has. Any ideas of what is going wrong, or of an alternative way to achieve the same result? Yes, a private key would solve this problem, but for the time being I would like to stick with password authentication. I also like the idea of denyhosts blocking access to all services running on the server, not just ssh.


  • How to test nginx proxy timeouts

    - by mkorszun
    Target: I would like to test all Nginx proxy timeout parameters in a very simple scenario. My first approach was to create a really simple HTTP server and add sleeps at three points:

    - between listen and accept, to test proxy_connect_timeout
    - between accept and read, to test proxy_send_timeout
    - between read and send, to test proxy_read_timeout

    Test:

    1) Server code (Python):

        import socket
        import os
        import time
        import threading

        def http_resp(conn):
            conn.send("HTTP/1.1 200 OK\r\n")
            conn.send("Content-Length: 0\r\n")
            conn.send("Content-Type: text/xml\r\n\r\n\r\n")

        def do(conn, addr):
            print 'Connected by', addr
            print 'Sleeping before reading data...'
            time.sleep(0)  # Set to test proxy_send_timeout
            data = conn.recv(1024)
            print 'Sleeping before sending data...'
            time.sleep(0)  # Set to test proxy_read_timeout
            http_resp(conn)
            print 'End of data stream, closing connection'
            conn.close()

        def main():
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(('', int(os.environ['PORT'])))
            s.listen(1)
            print 'Sleeping before accept...'
            time.sleep(130)  # Set to test proxy_connect_timeout
            while 1:
                conn, addr = s.accept()
                t = threading.Thread(target=do, args=(conn, addr))
                t.start()

        if __name__ == "__main__":
            main()

    2) Nginx configuration: I have extended the Nginx default configuration by setting proxy_connect_timeout explicitly and adding a proxy_pass pointing to my local HTTP server:

        location / {
            proxy_pass http://localhost:8888;
            proxy_connect_timeout 200;
        }

    3) Observations:

    - proxy_connect_timeout: even though I set it to 200s and sleep only 130s between listen and accept, Nginx returns 504 after ~60s, which might be because of the default proxy_read_timeout value. I do not understand how proxy_read_timeout could affect the connection at such an early stage (before accept). I would expect 200 here. Please explain!
    - proxy_send_timeout: I am not sure if my approach to testing proxy_send_timeout is correct; I think I still do not understand this parameter. After all, a delay between accept and read does not trigger proxy_send_timeout.
    - proxy_read_timeout: this seems to be pretty straightforward; setting a delay between read and send does the job.

    So I guess my assumptions are wrong and I probably do not understand the proxy_connect and proxy_send timeouts properly. Can someone explain them to me using the above test if possible (or modify it if required)?
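
    A small client that times how long nginx takes to answer can make it easier to see which timeout actually fired in each experiment. A minimal sketch in Python (it assumes nginx is listening on 127.0.0.1:80 and proxying /):

        import socket
        import time

        HOST, PORT = "127.0.0.1", 80     # assumed nginx address and port

        start = time.time()
        s = socket.create_connection((HOST, PORT))
        s.sendall(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")

        # Read until nginx closes the connection (either with the proxied
        # response or with its 504 error page).
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        s.close()

        elapsed = time.time() - start
        status_line = b"".join(chunks).split(b"\r\n", 1)[0]
        print("%s after %.1f seconds" % (status_line.decode("latin-1"), elapsed))

    Comparing the elapsed time against the configured proxy_connect_timeout, proxy_send_timeout and proxy_read_timeout values shows which one cut the request off.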


  • Incorrect durations mp4 file created by ffmpeg (avconv)

    - by Ruslan Sharipov
    Example usage:

        avconv -i rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af -vcodec copy -an -sn -map 0 -f segment -segment_format mp4 -segment_time 60 -y %05d.mp4

    Output:

        avconv version 0.8.3-6:0.8.3-1+b1, Copyright (c) 2000-2012 the Libav developers
          built on Jun 15 2012 13:54:35 with gcc 4.7.0
        HandShake: client signature does not match!
        Metadata:
          height                480.00
          remote_addr:          sdp_session {sdp_session,0, {sdp_o,"-","1289703354974145","1289703354974145",inet4, "10.1.12.99"}, "Media Presentation", {inet4,"0.0.0.0"}, {0,0}, [{"control","*"},{"range","npt=0.0
          start                 30400239.52
          timeshift_duration    319250.58
          timeshift_size        120000.00
          width                 640.00
        [flv @ 0x1d36a40] Estimating duration from bitrate, this may be inaccurate
        Input #0, flv, from 'rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af':
          Duration: N/A, start: 0.000000, bitrate: N/A
            Stream #0.0: Video: h264 (Baseline), yuvj420p, 640x480 [PAR 1:1 DAR 4:3], 1k tbr, 1k tbn, 2k tbc
        Output #0, segment, to '%05d.mp4':
          Metadata:
            encoder         : Lavf53.21.0
            Stream #0.0: Video: libx264, yuvj420p, 640x480 [PAR 1:1 DAR 4:3], q=2-31, 1k tbn, 1k tbc
        Stream mapping:
          Stream #0:0 -> #0:0 (copy)
        Press ctrl-c to stop encoding
        ^Cframe= 9566 fps= 36 q=-1.0 Lsize= -0kB time=318.25 bitrate= -0.0kbits/s
        video:30348kB audio:0kB global headers:0kB muxing overhead -100.000071%
        Received signal 2: terminating.

    Result:

        serafim@yard:~/video2$ ls
        00000.mp4  00001.mp4  00002.mp4  00003.mp4  00004.mp4  00005.mp4

    Now try to play the files in a player such as VLC, and this is what we get: the first fragment (00000.mp4) plays well with no problems, but from the second one (00001.mp4 and beyond) the bug manifests itself: the first 60 seconds of 00001.mp4 are a black screen, and the video only starts playing at second 61.

    Attachments: https://dl.dropbox.com/u/760901/rtmp_and_mp4.zip

    How do I get rid of the delay with the black screen at the beginning of the segments? Maybe there are parameters I can pass to ffmpeg, or third-party software that can fix the resulting MP4 segments?


  • Windows is very slow with my new SSD

    - by Maksym H.
    I have an HP ProBook 4520s laptop with a Core i5 M480 @ 2.67 GHz, 4 GB RAM, a 640 GB HDD and a Radeon HD 6370M 1 GB video card. It would seem like a good stack for work, right? But my HDD crashed after about a year of carrying the laptop around every day.

    After buying a new SSD (Patriot Memory Torqx II 128 GB SATA II) and installing Windows 8 from scratch, it was amazingly fast. But by the time I had only installed Windows updates, the speed already felt the same as my old HDD, and after installing the other software I need for work it became so slow that my old PC with a much lower configuration really works better than my awesome laptop... I checked that TRIM and AHCI mode are turned on. So why is that?

    I asked Patriot Memory support for help; they suggested sending them ATTO test results, which I did. Here is the response: "Thank you very much for the attached results. Looking at the results, I can see that your SSD speed is a lot lower than it should be. Can you tell me your system specs?" While waiting for them to check my email, I replaced Windows 8 with Windows 7 and it was again perfect at first, but the story repeats itself: it becomes slower and slower with every new piece of software I install. Check out some screenshots (sorry for the screenshot with the Russian Task Manager; I hope you will recognize the parameters from your English or other-language Task Manager).

    The main issue is that something keeps loading the disk at 100% and the response time jumps to around 1000-3000 ms. Why am I asking about Windows? Because I tried installing Linux Mint (x86) and it just flies: great performance regardless of how many programs I have installed. Only Windows (7 or 8) has this problem. So guys, I'd appreciate any ideas about how to fix this, and maybe an answer to the main question: why is it so? Thanks!


  • ubuntu 10.04 + php + postfix

    - by mononym
    I have a server running:

    - Ubuntu 10.04
    - PHP 5.3.5 (fpm)
    - Nginx

    I have installed Postfix and set it to loopback-only (I only need to send). The problem is that it is not sending. If I issue at the command line:

        echo "testing local delivery" | mail -s "test email to localhost" [email protected]

    I get the email no problem, but mail sent through PHP does not arrive. When I send it via PHP, mail.log shows:

        Mar 28 10:15:04 host postfix/pickup[32102]: 435EF580D7: uid=0 from=<root>
        Mar 28 10:15:04 host postfix/cleanup[32229]: 435EF580D7: message-id=<20120328091504.435EF580D7@FQDN>
        Mar 28 10:15:04 host postfix/qmgr[32103]: 435EF580D7: from=<root@FQDN>, size=1127, nrcpt=1 (queue active)
        Mar 28 10:15:04 host postfix/local[32230]: 435EF580D7: to=<root@FQDN>, orig_to=<root>, relay=local, delay=3.1, delays=3/0.01/0/0.09, dsn=2.0.0, status=sent (delivered to maildir)
        Mar 28 10:15:04 host postfix/qmgr[32103]: 435EF580D7: removed

    Any help appreciated. My main.cf file:

        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.

        myhostname = FQDN
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        #myorigin = $mydomain
        mydestination = FQDN, localhost.FQDN, , localhost
        relayhost = $mydomain
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        virtual_alias_maps = hash:/etc/postfix/virtual
        home_mailbox = mail/
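
    To help narrow down whether the problem is on the PHP side or the Postfix side, one option is to hand a test message straight to the local MTA over SMTP, bypassing PHP's mail() (which normally shells out to sendmail). A minimal Python sketch, with placeholder addresses:

        import smtplib
        from email.mime.text import MIMEText

        msg = MIMEText("testing local delivery via SMTP")
        msg["Subject"] = "smtplib test"
        msg["From"] = "root@localhost"        # placeholder sender
        msg["To"] = "someone@example.com"      # placeholder recipient

        # Talks to the loopback-only Postfix directly, so the result depends
        # only on the MTA configuration, not on PHP's sendmail wrapper.
        s = smtplib.SMTP("127.0.0.1", 25)
        s.sendmail(msg["From"], [msg["To"]], msg.as_string())
        s.quit()

    If this message is delivered correctly, the remaining suspect is how PHP hands mail to Postfix (sender address, envelope recipient, or the sendmail_path setting).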


  • batch copy files with error log on missing permissions

    - by sc911
    Hi *,

    I'm searching for a tool to batch-copy files. It should support the following points:

    - copy files from a net share
    - report any errors
    - show only errors, or filter the log on errors
    - don't stop on an error
    - also report if a file or a folder could not be copied due to missing permissions
    - if possible, a queue where new jobs can be added while copying

    I tried the following tools:

    - TerraCopy: takes a lot of time just to calculate the time and size of the job, and does not report errors due to missing permissions (it doesn't even add those files to the copy queue)
    - Karen's Replicator: does not report errors due to missing permissions
    - xcopy: does a great job when using the right parameters and piping the output to a file (in the German localization, xcopy /k /r /e /i /s /c /h SOURCE TARGET>LOGFILE 2>&1 will do the job; opening the logfile in IE gives you a great monitor), but queuing jobs is not possible (OK, you can join them all in a batch file, but you cannot queue jobs while another one is running; hm, thinking of a batch script that loops through a file with the source-target config...)

    To be continued. Which tools do you use? Tell me!

    Thx
    sc911
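
    As a rough illustration of the "keep copying and log permission failures" requirement, here is a minimal Python 3 sketch (the share, target, and log paths are placeholders):

        import os
        import shutil

        SRC = r"\\server\share\data"   # placeholder network share
        DST = r"D:\copy_target"        # placeholder local target
        LOG = r"D:\copy_errors.log"    # placeholder error log

        with open(LOG, "a", encoding="utf-8") as log:

            def note_listing_error(err):
                # Directories that cannot even be listed (e.g. missing permissions)
                # are reported here instead of being silently skipped by os.walk().
                log.write("LIST FAILED %s: %s\n" % (err.filename, err))

            for root, dirs, files in os.walk(SRC, onerror=note_listing_error):
                target_dir = os.path.join(DST, os.path.relpath(root, SRC))
                os.makedirs(target_dir, exist_ok=True)
                for name in files:
                    src_file = os.path.join(root, name)
                    try:
                        shutil.copy2(src_file, os.path.join(target_dir, name))
                    except OSError as exc:
                        # Permission problems and other per-file failures are
                        # logged and the run continues with the next file.
                        log.write("COPY FAILED %s: %s\n" % (src_file, exc))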


  • Changing open-files-limit in mysql 5.5

    - by davidv
    I'm having an issue with the open-files-limit parameter in MySQL 5.5 running on Ubuntu 12.04. I recently noticed some problems due to the 1024 limit, and indeed the main system limit was set to 1024, so I modified /etc/security/limits.conf with the following:

        *    soft nofile 32000
        *    hard nofile 32000
        root soft nofile 32000
        root hard nofile 32000

    After that I checked the ulimit value for root and even for the mysql user; both returned the new value of 32000, so I assume the change has been applied. I also changed the value in the my.cnf file, setting open-files-limit to 24000, like this:

        open-files-limit = 24000

    Now comes the odd part: when I restart the mysql service and check the open_files_limit variable, it reports that it's still set to 1024, so I'm having the same problems as before (obviously). I tried using open-files-limit instead of open_files_limit in the my.cnf config file, with the same result. BUT if I bypass the service command and start the server using only mysqld (no additional parameters), it starts, and when I check the parameter it returns 32000... I don't know where it's taking that value from, as it's not set in my.cnf and it's not being given on the command line, at least not by me.

    Any ideas why the change isn't taking effect, and how to solve it the normal way (launching it through service...)?
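
    A quick way to confirm which limit the running mysqld process actually received is to read its /proc/<pid>/limits entry. A minimal sketch in Python (it assumes a Linux /proc layout and a single mysqld process found via pidof):

        import subprocess

        # pidof returns the PID(s) of the running mysqld; take the first one.
        pid = subprocess.check_output(["pidof", "mysqld"]).split()[0].decode()

        with open("/proc/%s/limits" % pid) as limits:
            for line in limits:
                if line.lower().startswith("max open files"):
                    print(line.rstrip())

    If this prints 1024 while ulimit -n in a root shell says 32000, the limit is being reset by whatever starts the service (the init/upstart environment) rather than by MySQL itself.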


  • How to migrate Fedora DS (389 DS) to a new machine?

    - by zengr
    Hello, I am trying to migrate a Fedora DS (1.2.2) to a new server (1.2.7.5). The process has been painful, to say the least. The old server (1.2.2) was itself an upgrade from an old Fedora DS setup, so it does not contain migrate-ds-admin.pl. I found this question, but the URL does not open.

    I am aware that I need to use migrate-ds-admin.pl, but I am clueless. How do I use it? I assume it works like this:

    1. Copy migrate-ds-admin.pl from the server which has 1.2.7 to the 1.2.2 server.
    2. Run migrate-ds-admin.pl to export the schema + LDIF from 1.2.2.
    3. Import the schema + LDIF into 1.2.7 using migrate-ds-admin.pl.

    If the above is true, then what parameters are needed for export and import?

    Note:

        ./ldif2db -n NetscapeRoot -i /root/NetscapeRoot.ldif
        ./ldif2db -n userRoot -i /root/userRoot.ldif

    The above two commands work like a charm, but since the schema (custom schema) is not migrated, I see a lot of errors during import.


  • On Windows 2008 R2, how do I back up DHCP if the DHCP .mdb database is always busy?

    - by johnny
    I get this from my backup software:

        C:\WINDOWS\system32\dhcp\dhcp.mdb : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\j50.log : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\j50tmp.log : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\tmp.edb : The process cannot access the file because it is being used by another process.

    My questions:

    - Should I be doing a manual backup of DHCP via command-line tools, or maybe with MMC (Action, Backup), before I run my backup?
    - Is the %SystemRoot%\System32\DHCP\Backup directory always kept up to date? (That directory does get backed up by the backup software.)

    Partially answering my own question: the backup interval registry value is set to 3c (60 minutes), I believe:

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters\BackupInterva

    This is not the backup software included with Windows. It is another product, but I have seen this with every backup software I've ever used.
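
    One common workaround is to dump the DHCP configuration to an ordinary file right before the backup job runs, so the job never has to read the live .mdb. A minimal pre-backup sketch in Python (the export path is a placeholder, and the exact netsh dhcp export syntax should be verified on your server):

        import subprocess

        EXPORT_FILE = r"C:\Backups\dhcp-export.txt"   # placeholder path that the backup job picks up

        # "netsh dhcp server export <file> all" writes the server's scopes and
        # settings to a file that can be backed up like any other file.
        subprocess.check_call(["netsh", "dhcp", "server", "export", EXPORT_FILE, "all"])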


  • Best Practice - SQL 2012 & IIS in VMWare

    - by Dan Ribar
    We are pretty new to VMware and looking for some thoughts on our environment. We have a VMware cluster with, on one host:

    - VM#1: MS Windows 2008 R2 Enterprise & SQL Server 2012
    - VM#2: MS Windows 2008 R2 Standard & IIS

    The IIS ASP.NET app talks directly to the SQL Server. We had a similar environment on physical servers a few months ago and just recently moved to the virtualized environment. Regarding the setup, we have not tweaked any of the VM resource parameters; everything is set as standard and everything is working. What we observe is that the VMs seem to spool down and we get lags in response. Of course this isn't as fast as the old physical environment, but I am wondering:

    - Is it a good idea to run the SQL Server and the IIS server on the same host? They are the only two VMs on it. The host is a new Dell R620 with 192 GB of memory.
    - Does it make sense to change any CPU or memory reservations when it doesn't seem like there is any contention?
    - Is there a way to keep the VMs spooled up to eliminate delays?

    This is a brand-new, squeaky-clean vanilla install. What are your thoughts?


  • Client unable to access OWA website after temporarily changing SSL certificate on the server

    - by Lorenz Meyer
    I have the following issue: one client computer (Windows XP) cannot access the OWA website. All other client computers can, except another one in the same remote office.

    How this happened: I temporarily changed the SSL certificate on the Exchange server yesterday. After a few minutes I reverted back, and now the same certificate that has been installed for years is back again. During those few minutes, they were in OWA on this computer and got a certificate error.

    What exactly happens: Internet Explorer displays the error "Internet Explorer cannot display the webpage", Firefox displays "The connection was reset", and Chrome shows "This webpage is not available. The connection to ... was interrupted."

    What I have already done to try to get this working:

    - Restarted the client computer
    - Restarted the Exchange server
    - Deleted the Internet Explorer browsing history
    - In IE, Internet Options, Content tab, under Certificates, cleared the SSL cache
    - Restored Internet Explorer to the default parameters
    - Looked into certmgr.msc, but did not find a certificate related to the issue

    What else could I do to narrow down the origin of this problem (or better: resolve it)? Can you give any advice?


  • domain user disabling screensaver

    - by RASG
    I have the following situation: due to security reasons the screensaver is activated after 10 minutes and immediately locks the screen. There are GPOs preventing the user from changing the screensaver parameters and the background image. In order to bypass the background policy, some users are using bginfo. The problem is that for some reason the screensaver no longer works for them. The settings are still the same (10 minutes; locked to the user), and comparing snapshots of the registry before and after executing bginfo doesn't show any significant modification. Any hints?

    EDIT 1: OK, I figured out what's going on, but now I have another question. bginfo refreshes the user settings by reading HKEY_CURRENT_USER\Control Panel\Desktop, which has ScreenSaveActive. If the user sets it to 0, it disables the screensaver. Why isn't HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\Control Panel\Desktop, which sets ScreenSaveActive to 1, being enforced? Or if it is being enforced, where is bginfo storing the value 0, and how can it bypass the policy?

    EDIT 2: I also discovered that after setting any value on HKEY_CURRENT_USER\Control Panel\Desktop\ScreenSaveActive, the value can be deleted and the last value will remain active. For some reason the HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\Control Panel\Desktop\ScreenSaveActive value is not being enforced for the user.
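
    For comparing what the user hive and the policy hive actually contain on an affected machine, here is a minimal Python 3 sketch using the standard winreg module (run it as the affected user; the paths and value name are the ones quoted above):

        import winreg

        def read_value(root, path, name):
            # Returns the value, or None if the key or value does not exist.
            try:
                with winreg.OpenKey(root, path) as key:
                    value, _ = winreg.QueryValueEx(key, name)
                    return value
            except OSError:
                return None

        user_setting = read_value(
            winreg.HKEY_CURRENT_USER,
            r"Control Panel\Desktop",
            "ScreenSaveActive",
        )
        policy_setting = read_value(
            winreg.HKEY_CURRENT_USER,
            r"Software\Policies\Microsoft\Windows\Control Panel\Desktop",
            "ScreenSaveActive",
        )
        print("user hive   :", user_setting)
        print("policy hive :", policy_setting)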


  • Finding Bluetooth link key in Windows 7, to double pair a device on dualboot computer

    - by Ilari Kajaste
    How can I dig up the Bluetooth link key for a paired device in Windows 7? Is this something that is dependent on the Bluetooth stack I'm using (Toshiba), or is there a generic place where these are stored in Windows 7?

    Note: I'm not talking about the six-digit code usually typed by the user during pairing; that is worthless since it's discarded after the pairing process. What I mean is the 128-bit link key that the devices exchange during pairing and use thereafter to encrypt all their Bluetooth traffic.

    Background: I dual-boot Windows 7 / Ubuntu on my laptop, and I would like to have my phone paired to both OSes. Since the dual-booting computer has only one Bluetooth adapter and thus only one Bluetooth address, I cannot do two pairings to the phone: on the second pairing (Windows) the phone just replaces the previous pairing (Linux) to the same Bluetooth address. A thread on the Ubuntu forums pointed me to what I have to do: pair first on Linux, then on Windows, and then replace the link key on the Linux side with the one Windows negotiated.

    I can find the Linux-side pairing key in /var/lib/Bluetooth/[BD_ADDR]/linkkeys with no problems. However, on the Windows side I can't find the key. According to the forum post, on the Windows side the key should be in

        SYSTEM\ControlSet002\services\BTHPORT\Parameters\Keys\[BD_ADDR]

    but while that registry key does exist, it has no subkeys. (And a similar registry path in ControlSet001 didn't have any subkeys either.)

    One thing I've been instructed to do is to capture all events during pairing with Sysinternals Process Monitor. I did this, but I haven't been able to find any useful information in the captured events, not even by exporting the data to a huge XML and grepping that for the BD_ADDRs (with or without colons).

    So how could I find the link key for a paired device in Windows 7?

    Some reference information: Wikipedia: Bluetooth, Security Now: Bluetooth security


  • certutil -ping fails with 30 seconds timeout - what to do?

    - by mark
    Dear ladies and sirs,

    The certificate store on my Win7 box is constantly hanging. Observe:

        C:\>1.cmd

        C:\>certutil -? | findstr /i ping
          -ping             -- Ping Active Directory Certificate Services Request interface
          -pingadmin        -- Ping Active Directory Certificate Services Admin interface

        C:\>set PROMPT=$P($t)$G

        C:\(13:04:28.57)>certutil -ping
        CertUtil: -ping command FAILED: 0x80070002 (WIN32: 2)
        CertUtil: The system cannot find the file specified.

        C:\(13:04:58.68)>certutil -pingadmin
        CertUtil: -pingadmin command FAILED: 0x80070002 (WIN32: 2)
        CertUtil: The system cannot find the file specified.

        C:\(13:05:28.79)>set PROMPT=$P$G

        C:\>

    Explanations:

    - The first command shows that certutil has -ping and -pingadmin parameters.
    - Trying either ping parameter fails after a 30-second timeout (the current time is shown in the prompt).

    This is a serious problem: it breaks all the secure communication in my app. If anyone knows how this can be fixed, please share. Thanks.

    P.S. 1.cmd is simply a batch of these commands:

        certutil -? | findstr /i ping
        set PROMPT=$P($t)$G
        certutil -ping
        certutil -pingadmin
        set PROMPT=$P$G


  • ldapsearch against Active Directory fails

    - by Guacamole
    I am using ldapsearch from the OpenLDAP tools to search our corporate Active Directory for my email address and phone number. This query is a test to ensure that I can authenticate against the domain so I can set up a Linux wiki with NTLM authentication. My theory is that if I can successfully query AD for information, then I am a step closer to getting my wiki to authenticate against AD (I have instructions for setting up the Moin wiki under Active Directory).

    The problem is that I can't seem to get the ldapsearch query right. Many tutorials on the net indicate that -D should be something like -D "Americas\John_Marshall"; however, I keep getting ldap_bind: Invalid credentials (49) errors when I use Americas\John_Marshall. The only time I get sensible results is when I query with the parameters below. However, even then, I can't figure out how to get the email address and phone number.

        [John_Marshall@WN7-BG3YSM1 ~]$ ldapsearch -x -h 10.1.1.1 \
            -b "cn=Users,dc=Americas" mail telephonenumber -D "cn=John_Marshall,dc=Americas"
        # extended LDIF
        #
        # LDAPv3
        # base <cn=Users,dc=Americas> with scope subtree
        # filter: (objectclass=*)
        # requesting: mail telephonenumber -D cn=John_Marshall,dc=Americas
        #

        # search result
        search: 2
        result: 32 No such object

        # numResponses: 1
        [John_Marshall@WN7-BG3YSM1 ~]$

    Can someone give me pointers on what I'm doing wrong with the ldapsearch query above? Our AD LDAP server is 10.1.1.1 and the AD domain is "Americas".
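
    As a cross-check with a different client, here is a minimal sketch using the third-party Python ldap3 library (not one of the OpenLDAP tools; the server address, credentials, and base DN are taken from the question and may need adjusting for the real directory):

        from ldap3 import Server, Connection, NTLM

        server = Server("10.1.1.1")
        # NTLM binds take the DOMAIN\user form, which avoids guessing the
        # full "cn=...,dc=Americas" bind DN.
        conn = Connection(
            server,
            user="AMERICAS\\John_Marshall",
            password="secret",               # placeholder
            authentication=NTLM,
        )
        if not conn.bind():
            print("bind failed:", conn.result)
        else:
            conn.search(
                "dc=Americas",                       # base DN from the question
                "(sAMAccountName=John_Marshall)",
                attributes=["mail", "telephoneNumber"],
            )
            for entry in conn.entries:
                print(entry.mail, entry.telephoneNumber)

    If the NTLM bind succeeds here but the simple bind in ldapsearch does not, the problem is likely the bind DN format rather than the credentials themselves.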


  • How to diagnose storage system scaling problems?

    - by Unknown
    We are currently testing the maximum sequential read throughput of a storage system (48 disks in total behind two HP P2000 arrays) connected to an HP DL580 G7 running RHEL 5 with 128 GB of memory. Initial testing has mainly been done by running dd commands like this, in parallel for each disk:

        dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000

    However, we have been unable to scale the results from one array (maximum throughput of 1.3 GB/s) to two (almost the same throughput). Each array is connected to a dedicated host bus adapter, so they should not be the bottleneck. The disks are currently in a JBOD configuration, so each disk can be addressed directly.

    I have two questions:

    1. Is running multiple dd commands in parallel really a good way to test maximum read throughput? We have noticed very high SWAPIN-% numbers in iotop, which I find hard to explain because the target is /dev/null.
    2. How should we proceed in trying to find the reason for the scaling problem? Do you think the server itself is the bottleneck here, or could there be some Linux parameters that we have overlooked?
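
    For repeatable comparisons between the one-array and two-array runs, a small driver that starts all the dd readers together and reports the aggregate rate can help. A minimal Python 3 sketch (the device list is a placeholder, and iflag=direct is an optional addition that bypasses the page cache):

        import subprocess
        import threading
        import time

        DEVICES = ["/dev/mapper/mpath1", "/dev/mapper/mpath2"]   # placeholder multipath devices
        COUNT = 3000                                             # 3000 x 1 MiB per device

        def read_device(dev):
            # Same dd invocation as in the manual tests, with dd's own output suppressed.
            subprocess.check_call(
                ["dd", "if=" + dev, "of=/dev/null",
                 "bs=1M", "count=" + str(COUNT), "iflag=direct"],
                stderr=subprocess.DEVNULL,
            )

        threads = [threading.Thread(target=read_device, args=(d,)) for d in DEVICES]
        start = time.time()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        elapsed = time.time() - start

        total_mib = len(DEVICES) * COUNT     # 1 MiB blocks per device
        print("aggregate: %.0f MiB/s" % (total_mib / elapsed))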


  • Bad performance with Linux software RAID5 and LUKS encryption

    - by Philipp Wendler
    I have set up a Linux software RAID5 on three hard drives and want to encrypt it with cryptsetup/LUKS. My tests showed that the encryption leads to a massive performance decrease that I cannot explain.

    The RAID5 is able to write 187 MB/s [1] without encryption. With encryption on top of it, write speed is down to about 40 MB/s. The RAID has a chunk size of 512K and a write-intent bitmap. I used -c aes-xts-plain -s 512 --align-payload=2048 as the parameters for cryptsetup luksFormat, so the payload should be aligned to 2048 blocks of 512 bytes (i.e., 1 MB). cryptsetup luksDump shows a payload offset of 4096. So I think the alignment is correct and fits the RAID chunk size.

    The CPU is not the bottleneck, as it has hardware support for AES (aesni_intel). If I write to another drive (an SSD with LVM) that is also encrypted, I do get a write speed of 150 MB/s. top shows that the CPU usage is indeed very low; only the RAID5 xor takes 14%.

    I also tried putting a filesystem (ext4) directly on the unencrypted RAID to see if the layering is the problem. The filesystem decreases the performance a little bit as expected, but by far not that much (write speed varying, but around 100 MB/s).

    Summary:

    - Disks + RAID5: good
    - Disks + RAID5 + ext4: good
    - Disks + RAID5 + encryption: bad
    - SSD + encryption + LVM + ext4: good

    The read performance is not affected by the encryption: it is 207 MB/s without and 205 MB/s with encryption (also showing that CPU power is not the problem). What can I do to improve the write performance of the encrypted RAID?

    [1] All speed measurements were done with several runs of dd if=/dev/zero of=DEV bs=100M count=100 (i.e., writing 10G in blocks of 100M).

    Edit: If this helps: I'm using Ubuntu 11.04 64-bit with Linux 2.6.38.

    Edit 2: The performance stays approximately the same if I pass a block size of 4KB, 1MB or 10MB to dd.


  • How to install/configure ffmpeg to compress mp4 videos for flash player delivery?

    - by Andrew Fulton
    We have a Flash web app that creates interactive video, and we are using ffmpeg to do some compression/resizing when a user "publishes" their project. The user can upload FLV files and MP4 files, both of which play fine in the Flash UI before publishing. After publishing, the FLV files work fine, but the MP4 files will not play in the Flash player: audio plays but video won't.

    The MP4 files play fine if I download them and open them in the QuickTime player, but if I open them in the Adobe Media Player it reports "The media file does not contain a supported video track". If I open the Movie Inspector in QuickTime, it tells me that the original file is an "h264" video and the ffmpeg-processed ones are "mpeg-4".

    I have tried forcing it to h264 by adding flags like -f h264 and -vcodec h264, but I get a screenful of errors (no frame, illegal POC type, sps_id out of range) ending with:

        Could not find codec parameters (Video: h264)

    h264 does show up if I run ffmpeg -formats and ffmpeg -codecs, and as I said the files play fine in QuickTime. Is there anything else I need to do to convince the Flash player to play them? Is there anything else I need to tell you about the server that would help?


  • How can I "filter" postfix-generated bounce messages?

    - by Flimzy
    We are using Postfix 2.7 and a custom SMTPD (based on qpsmtpd) in a highly customized configuration for spam filtering. We have a new requirement to filter Postfix-generated bounces through our custom qpsmtpd process (not so much for content filtering, but to process these bounces accordingly). Our current configuration looks (in part) like this.

    master.cf (only customizations shown):

        2526      inet  n       -       -       -       0       cleanup
        pickup    fifo  n       -       -       60      1       pickup
            -o content_filter=smtp:127.0.0.2

    Our smtpd injects messages into Postfix on port 2526 by speaking directly to the cleanup daemon, and the custom pickup service instructs Postfix to hand off all locally generated mail (from cron, nagios, or other custom scripts) to our custom smtpd.

    The problem is that this configuration does not affect Postfix-generated bounce messages, since they do not go through the pickup daemon. I have tried adding the same content_filter option to the bounce daemon commands, but it does not seem to have any effect:

        bounce    unix  -       -       -       -       0       bounce
            -o content_filter=smtp:127.0.0.2
        defer     unix  -       -       -       -       0       bounce
            -o content_filter=smtp:127.0.0.2
        trace     unix  -       -       -       -       0       bounce
            -o content_filter=smtp:127.0.0.2

    For reference, here is my main.cf file as well:

        biff = no

        # TLS parameters
        smtpd_tls_loglevel = 0
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${queue_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${queue_directory}/smtp_scache
        smtp_tls_security_level = may

        mydestination = $myhostname
        alias_maps = proxy:pgsql:/etc/postfix/dc-aliases.cf
        transport_maps = proxy:pgsql:/etc/postfix/dc-transport.cf

        # This is enforced on incoming mail by QPSMTPD, so this is simply
        # the upper possible bound (also enforced in defaults.pl)
        message_size_limit = 262144000
        mailbox_size_limit = 0

        # We do our own message expiration, but if we set this to 0, then postfix
        # will try each mail delivery only once, so instead we set it to 100 days
        # (which is the max postfix seems to support)
        maximal_queue_lifetime = 100d

        hash_queue_depth = 1
        hash_queue_names = deferred, defer, hold

    I also tried adding the internal_mail_filter_classes option to main.cf, but it also had no effect:

        internal_mail_filter_classes = bounce,notify

    I am open to any suggestions, including handling our current content-filtering loop in a different way. If it's not clear what I'm asking, please let me know and I can try to clarify.

