Search Results

Search found 21778 results on 872 pages for 'stewart may'.

  • Windows 2003 Domain Controller Very Upset about NIC Teaming

    - by Kyle Brandt
    I set up BACS (Broadcom teaming) to team two NICs on a Windows 2003 Active Directory domain controller. Networking still works okay, I can ping the gateway etc., but both DNS and Active Directory fail to start with various 40xx errors. The team I created is Smart Load Balancing with Failover, with one backup NIC and only one NIC active in load balancing (so really it is just failover). I gave the team the same IP address that the single active NIC had before. Has anyone seen this before, or have any ideas what the problem might be?

        Event Type: Error
        Event Source: DNS
        Event Category: None
        Event ID: 4015
        Date: 3/7/2010
        Time: 10:33:03 AM
        User: N/A
        Computer: ADC
        Description: The DNS server has encountered a critical error from the Active Directory. Check that the Active Directory is functioning properly. The extended error debug information (which may be empty) is "". The event data contains the error.

        Event Type: Error
        Event Source: DNS
        Event Category: None
        Event ID: 4004
        Date: 3/7/2010
        Time: 10:33:03 AM
        User: N/A
        Computer: ADC
        Description: The DNS server was unable to complete directory service enumeration of zone .. This DNS server is configured to use information obtained from Active Directory for this zone and is unable to load the zone without it. Check that the Active Directory is functioning properly and repeat enumeration of the zone. The extended error debug information (which may be empty) is "". The event data contains the error.

        Event Type: Error
        Event Source: NTDS Replication
        Event Category: DS RPC Client
        Event ID: 2087
        Date: 3/7/2010
        Time: 10:40:28 AM
        User: NT AUTHORITY\ANONYMOUS LOGON
        Computer: ADC
        Description: Active Directory could not resolve the following DNS host name of the source domain controller to an IP address. This error prevents additions, deletions and changes in Active Directory from replicating between one or more domain controllers in the forest. Security groups, group policy, users and computers and their passwords will be inconsistent between domain controllers until this error is resolved, potentially affecting logon authentication and access to network resources.

  • Exclude specific domains from Apache2 serverAlias while using a catch all *(wildcard) alias

    - by Victor S
    I have a web application that needs to support custom domains, so I have set up the following name-based virtual host:

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias * *.example.com www.example.com example.com
            RailsEnv production
            RackEnv production
            DocumentRoot /srv/www/example/current/public
            <Directory /srv/www/example/current/public>
                AllowOverride all
                Options -MultiViews FollowSymLinks
            </Directory>
            ErrorLog /srv/www/example/log/error.log
            TransferLog /srv/www/example/log/access.log
        </VirtualHost>

    Notice the * as the server alias? That catches every domain pointed at this server. However, I have other sites on this server which I want excluded from this catch-all. It is more economical for me to keep a list of excluded domains than to manually add every domain a user may register with this service as a ServerAlias. Perhaps this is not the best way to go, but I'm looking for help with the best (relatively simple) way to set up a web app that may catch any domain, while allowing other specific domains to be routed to different apps. Thanks!
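
    A sketch of one common approach: Apache evaluates name-based virtual hosts in configuration order and uses the first ServerName/ServerAlias match, so a dedicated vhost declared before the catch-all effectively excludes that domain from it. The otherapp names and paths below are hypothetical placeholders:

        # Listed first, so requests for this host never reach the catch-all below
        <VirtualHost *:80>
            ServerName otherapp.example.org
            ServerAlias www.otherapp.example.org
            DocumentRoot /srv/www/otherapp/current/public
        </VirtualHost>

        # Catch-all for every remaining domain (the vhost shown above)
        <VirtualHost *:80>
            ServerName example.com
            ServerAlias *
            ...
        </VirtualHost>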

  • VMRC equivalent for Hyper-V?

    - by Ian Boyd
    VMRC was the client tool used to connect to virtual machines running on Virtual Server. Having upgraded to Windows Server 2008 R2 with the Hyper-V role, I need a way for people to be able to use the virtual machines. Note:

    - not all virtual machines will have network connectivity
    - not all virtual machines will be running Windows
    - some people needing to connect to a virtual machine will be running Windows XP
    - Hyper-V Manager, which allows management of the Hyper-V server, is less desirable (since it allows management of the Hyper-V server, and doesn't work on all operating systems)

    What is the Windows Server 2008 R2 equivalent of VMRC; a way to "VNC" to a virtual machine?

    Update: I think Tatas was suggesting Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0, which requires SQL Server and IIS. Installing those would unfortunately violate our Windows Server 2008 R2 license. I might be looking at the wrong product link, since a commenter said there is a version that doesn't require System Center.

    Update 2: The Windows Server 2008 R2 machine running Hyper-V is licensed with the understanding that it only be used to host Hyper-V. From the Windows Server 2008 R2 Licensing FAQ:

        Q. If I have one license for Windows Server 2008 R2 Standard and want to run it in a virtual operating system environment, can I continue running it in the physical operating system environment?
        A. Yes, with Windows Server 2008 R2 Standard, you may run one instance in the physical operating system environment and one instance in the virtual operating system environment; however, the instance running in the physical operating system environment may be used only to run hardware virtualization software, provide hardware virtualization services, or to run software to manage and service operating system environments on the licensed server.

    This is why I'm wary about installing IIS or SQL Server.
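
    For the Windows clients, the closest built-in equivalent may be vmconnect.exe, the console viewer that ships with the Hyper-V management tools (a sketch; the host and VM names are placeholders, and this does not cover the non-Windows clients, since the tools require Vista or later):

        vmconnect.exe HyperVHostName "Guest VM Name"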

  • Error 2013: Lost connection to MySQL server during query when executing CHECK TABLE FOR UPGRADE

    - by Dean Richardson
    I just upgraded Ubuntu from 11.10 to 12.04. My Rails app now returns the (Passenger) error "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) (Mysql2::Error)". I get a similar error when I try to access MySQL at the command line on my Ubuntu server using mysql -u root -p. I have mysql-server 5.5 installed. I've checked, and mysql is not running; when I try to restart it, it fails. Here are some key lines from the tail of /var/log/syslog after an attempted restart:

        dean@dgwjasonfried:/etc/mysql$ tail -f /var/log/syslog
        Mar  7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
        Mar  7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock'
        Mar  7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock'
        Mar  7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: /usr/bin/mysqlcheck: Got error: 2013: Lost connection to MySQL server during query when executing 'CHECK TABLE ... FOR UPGRADE'
        Mar  7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: FATAL ERROR: Upgrade failed
        Mar  7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.assets   OK
        Mar  7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.ecd_types   OK
        Mar  7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5124]: Checking for insecure root accounts.
        Mar  7 08:55:27 dgwjasonfried kernel: [ 7551.769657] init: mysql main process (5064) terminated with status 1
        Mar  7 08:55:27 dgwjasonfried kernel: [ 7551.769697] init: mysql respawning too fast, stopped

    Here is most of /etc/mysql/my.cnf:

        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port   = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram
        # This was formerly known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice   = 0

        [mysqld]
        # Basic Settings
        user            = mysql
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        lc-messages-dir = /usr/share/mysql
        skip-external-locking

        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address    = 127.0.0.1

    And here are the permissions for /var/run/mysqld/mysqld.sock:

        srwxrwxrwx 1 mysql mysql 0 Mar  7 09:18 mysqld.sock

    I'd be grateful for any suggestions the community might have. I reviewed the related questions here and attempted some of the fixes offered, but to no avail. Thanks! Dean Richardson

    Update: Thanks to quanta's suggestion, I looked at the /var/log/mysql/error.log file. I found error messages relating to pointers, fatal signals, and more stuff that I really couldn't make much sense of. I also found MySQL manual page references, however. One suggested that I try starting mysqld with the --innodb_force_recovery=# option, then attempt to dump (or drop) the offending/corrupted database or table. I worked through the escalating option levels one by one (innodb_force_recovery=1, innodb_force_recovery=2, etc.). This allowed me to successfully run mysql -u root -p from the command line and execute several commands. I was able to run queries on my production database, but any attempt to query, dump, or even drop my development database raised an error and led to me losing the connection to MySQL. So I've made progress, but until I'm somehow able to drop or repair my development db, I'm still unable to get my app to load. Any further advice or suggestions? Thanks! Dean

    Update: Right after retrying sudo mysqld --innodb_force_recovery=1, the error.log file shows this:

        130308  4:55:39 [Note] Plugin 'FEDERATED' is disabled.
        130308  4:55:39 InnoDB: The InnoDB memory heap is disabled
        130308  4:55:39 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        130308  4:55:39 InnoDB: Compressed tables use zlib 1.2.3.4
        130308  4:55:39 InnoDB: Initializing buffer pool, size = 128.0M
        130308  4:55:39 InnoDB: Completed initialization of buffer pool
        130308  4:55:39 InnoDB: highest supported file format is Barracuda.
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        130308  4:55:39 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        130308  4:55:40 InnoDB: Waiting for the background threads to start
        130308  4:55:41 InnoDB: 1.1.8 started; log sequence number 10259220
        130308  4:55:41 InnoDB: !!! innodb_force_recovery is set to 1 !!!
        130308  4:55:41 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
        130308  4:55:41 [Note]   - '127.0.0.1' resolves to '127.0.0.1';
        130308  4:55:41 [Note] Server socket created on IP: '127.0.0.1'.
        130308  4:55:41 [Note] Event Scheduler: Loaded 0 events
        130308  4:55:41 [Note] mysqld: ready for connections.
        Version: '5.5.29-0ubuntu0.12.04.2'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)

    Then after mysql -u root -p and

        mysql> drop database molex_app_development;
        ERROR 2013 (HY000): Lost connection to MySQL server during query

    the error.log contains:

        dean@dgwjasonfried:/var/log/mysql$ tail -f error.log
        /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f6a3ff9ecbd]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort.
        Query (7f6a1c004bd8): is an invalid pointer
        Connection ID (thread ID): 1
        Status: NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
        [the startup messages shown above repeat, then:]
        130308  4:58:23 [ERROR] Incorrect definition of table mysql.proc: expected column 'comment' at position 15 to have type text, found type char(64).
        130308  4:58:23  InnoDB: Assertion failure in thread 140168992810752 in file fsp0fsp.c line 3639
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
        InnoDB: about forcing recovery.
        10:58:23 UTC - mysqld got signal 6 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help
        diagnose the problem, but since we have already crashed, something is
        definitely wrong and this may fail.

        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=1
        max_threads=151
        thread_count=1
        connection_count=1
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346681 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        Thread pointer: 0x7f7ba4f6c2f0
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 7f7ba3065e60 thread_stack 0x30000
        mysqld(my_print_stacktrace+0x29)[0x7f7ba3609039]
        mysqld(handle_fatal_signal+0x483)[0x7f7ba34cf9c3]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f7ba2220cb0]
        /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f7ba188c425]
        /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f7ba188fb8b]
        mysqld(+0x65e0fc)[0x7f7ba37160fc]
        mysqld(+0x602be6)[0x7f7ba36babe6]
        mysqld(+0x635006)[0x7f7ba36ed006]
        mysqld(+0x5d7072)[0x7f7ba368f072]
        mysqld(+0x5d7b9c)[0x7f7ba368fb9c]
        mysqld(+0x6a3348)[0x7f7ba375b348]
        mysqld(+0x6a3887)[0x7f7ba375b887]
        mysqld(+0x5c6a86)[0x7f7ba367ea86]
        mysqld(+0x5ae3a7)[0x7f7ba36663a7]
        mysqld(_Z15ha_delete_tableP3THDP10handlertonPKcS4_S4_b+0x16d)[0x7f7ba34d3ffd]
        mysqld(_Z23mysql_rm_table_no_locksP3THDP10TABLE_LISTbbbb+0x568)[0x7f7ba3417f78]
        mysqld(_Z11mysql_rm_dbP3THDPcbb+0x8aa)[0x7f7ba339780a]
        mysqld(_Z21mysql_execute_commandP3THD+0x394c)[0x7f7ba33b886c]
        mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f7ba33bb28f]
        mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1380)[0x7f7ba33bc6e0]
        mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f7ba346119d]
        mysqld(handle_one_connection+0x50)[0x7f7ba3461200]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f7ba2218e9a]
        /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f7ba1949cbd]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort.
        Query (7f7b7c004b60): is an invalid pointer
        Connection ID (thread ID): 1
        Status: NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.

    --Dean
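
    The 4:58:23 [ERROR] line about mysql.proc suggests the system tables were never upgraded after the move to 12.04, so once a force-recovery level keeps mysqld up, the usual sequence is to salvage dumps first and then repair the system tables (a sketch, not from the original thread; run with the lowest innodb_force_recovery level that stays up, and remove that setting afterwards):

        mysqldump -u root -p --all-databases > salvage.sql   # grab whatever is still readable first
        mysql_upgrade -u root -p                             # fixes system-table definitions such as mysql.proc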

  • Hardware reserved memory issue

    - by Robert Koritnik
    I've seen lots of folks having problems with the hardware reserved memory issue in Windows 7 / Server 2008 R2. I have it myself, though not as badly as others.

    Problem description: When you install Windows 7 (or its bigger brother, Windows Server 2008 R2), your memory may not be fully utilised. If you look at Task Manager > Performance tab > Resource Monitor > Memory tab and scroll to the bottom of the list, you will see a graphical representation of your memory. Some of it may be hardware reserved. Previous Windows versions didn't have this problem; the system was able to utilise all available memory.

    Question: Is there any solution to lower/remove hardware reserved memory?

    Sidenote: I tried installing both 32- and 64-bit versions, to no avail. I also tried both Windows 7 and Server 2008 R2, but I always get the same amount reserved by hardware. On previous Windows versions I had more memory available, and since I'm simultaneously running 2 VMs on the host (so three machines altogether), my memory peaks much higher now than it did on older versions.

  • Citrix client slow to launch

    - by user706837
    I was wondering if anyone else has experienced the Citrix client being very slow to launch. While I'm a Windows SA by trade, I consider myself Novice+ on Linux, but I doubt that's the problem. This is the simple scenario:

    1. Log in to the Citrix server to work from home.
    2. Click on the published application; this typically starts the local Citrix client.
    3. The Citrix client should start and you're off.

    The problem is between #2 and #3: I click on the application, and 8 out of 9 times there is a 60-second delay and then I get an SSL connection error. I suspect this error is misleading, since the connection took too long to open, but I don't know how to prove it (or fix it). I'm able to launch wfcmgr manually without errors, which leads me to believe the Citrix client is installed correctly. I even leave it running, thinking this may help, but I don't see a difference with or without it running first. The only times I'm able to connect successfully are when the Citrix client starts up a few seconds after clicking on the application. I've searched online for articles that might help and tried a number of fixes without much difference. I even tried "ln -sf /dev/urandom /dev/random" as suggested by this article, but no dice: http://forums.citrix.com/message.jspa?messageID=1381276

    My system (specs that may be relevant): Sony VAIO laptop VGN-NW270F, Linux Mint 11.04. The problem occurs using both Firefox and Chrome. Any help would be appreciated; I'm just trying to either find an answer or get guidance on how to determine why the Citrix client takes so long to launch. Thanks
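
    Since the /dev/random symlink workaround was already suggested upstream, one quick differential check is whether SSL setup is blocking on entropy or on the network (a sketch; the server name is a placeholder):

        cat /proc/sys/kernel/random/entropy_avail                        # values near zero suggest blocking on /dev/random
        time openssl s_client -connect citrix.example.com:443 </dev/null  # how long does the TLS handshake alone take?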

  • Unable to connect Xend with virt-manager

    - by Majid Azimi
    I have installed Debian 6.0.1a and all the Xen stuff, including the Xen kernel, libvirtd, etc. But when I want to connect to xend, virt-manager shows me this:

        Verify that:
        A Xen host kernel was booted
        The Xen service has been started

    details:

        Unable to open connection to hypervisor URI 'xen:///': unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied
        Traceback (most recent call last):
          File "/usr/share/virt-manager/virtManager/connection.py", line 971, in _try_open
            None], flags)
          File "/usr/lib/python2.6/dist-packages/libvirt.py", line 111, in openAuth
            if ret is None:raise libvirtError('virConnectOpenAuth() failed')
        libvirtError: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied

    Here is the uname output:

        Linux debian 2.6.32-5-xen-amd64 #1 SMP Tue Mar 8 00:01:30 UTC 2011 x86_64 GNU/Linux

    Both xend and libvirtd are running:

        root@debian:/home/mazimi# /etc/init.d/libvirt-bin status
        Checking status of libvirt management daemon: libvirtd running.
        root@debian:/home/mazimi# /etc/init.d/xend start
        Starting Xen daemons: xenstored xenconsoled xend.

    Permissions for libvirt-sock:

        root@debian:/home/mazimi# ls -alih /var/run/libvirt/
        total 12K
        671017 drwxr-xr-x  3 root root    4.0K Apr 15 13:54 .
        654083 drwxr-xr-x 18 root root    4.0K Apr 15 13:54 ..
        670901 srwxrwx---  1 root libvirt    0 Apr 15 13:54 libvirt-sock
        670928 srwxrwxrwx  1 root libvirt    0 Apr 15 13:54 libvirt-sock-ro
        670870 drwxr-xr-x  2 root root    4.0K Apr 15 02:34 qemu

    We also have a group named libvirt in /etc/group. When running libvirtd in verbose mode it behaves kind of strangely:

        root@debian:/var/log/libvirt# /usr/sbin/libvirtd --verbose
        17:26:55.841: warning : qemudStartup:1832 : Unable to create cgroup for driver: No such device or address
        17:26:56.128: warning : lxcStartup:1900 : Unable to create cgroup for driver: No such device or address

    and waits indefinitely.
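
    Given that libvirt-sock is root:libvirt with mode srwxrwx---, "Permission denied" usually means the account running virt-manager is not in the libvirt group. A quick check and a possible fix (a sketch; substitute the actual login name running virt-manager):

        groups mazimi                   # is 'libvirt' listed?
        usermod -aG libvirt mazimi      # as root; takes effect at the next login
        virsh -c xen:/// list           # retry the connection from that account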

  • ZFS + FreeBSD + virtualbox

    - by John
    Hi, I'm configuring a FreeBSD server hosting VirtualBox, serving half a dozen mission-critical, busy mail servers. I just learned about ZFS and I'm quite attracted to it, but I have a few questions:

    1. What is the CPU overhead of ZFS? I googled and found little (or no) benchmarking of this.
    2. From what I learned, when ZFS updates files it keeps the old file as a snapshot and writes the updated part for the new version. That would seem to mean each snapshot it keeps requires significant storage overhead. How much is this storage overhead? For example, suppose I have 2TB of usable space; how much space can actually be used for the latest version of the files one year later?
    3. Is FreeBSD with ZFS hosting VirtualBox, serving half a dozen busy, mission-critical guest mail servers, a reasonable combination? Anything in particular to be careful with? And can I still choose ZFS for the guest OSes? I ask because I may build another identical box for redundancy, and will need to do some mirroring between each pair of the guest systems across the boxes.
    4. I'm trying to configure a Dell R710 for this. From what I learned, I shouldn't choose any RAID at all; is that true? In that case, do the drives still arrive hot-swappable?
    5. This may sound a bit pathetic, but since I have no experience with ZFS at all, and this is a mission-critical server, I'll ask just in case: I'm choosing twin Intel L5630 processors and 6 x 600GB 15K RPM serial-attached SCSI drives. If I need more space in the future, I would just hot-swap some drives with larger-capacity ones to expand the storage. There is no problem with that, right?
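
    On question 2, one clarification worth making: ZFS's copy-on-write only pins old blocks while a snapshot references them; with no snapshots, overwritten blocks are freed, so there is no per-update accumulation over the year. A quick way to see what snapshots alone are holding (a sketch; the pool and dataset names are hypothetical):

        zfs snapshot tank/mail@pre-upgrade
        zfs list -o space tank/mail          # USEDSNAP = space consumed only by snapshots
        zfs destroy tank/mail@pre-upgrade    # reclaims that space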

  • How to know if your computer is hit by a dnschanger virus?

    - by kira
    The Federal Bureau of Investigation (FBI) is in the final stage of its Operation Ghost Click, which strikes against the menace of the DNSChanger virus and trojan. PCs unknowingly running the DNSChanger malware are in danger of going offline this coming Monday (July 9), when the FBI plans to pull down the online servers that communicate with the virus on host computers.

    After gaining access to a host PC, the DNSChanger virus tries to modify the DNS (Domain Name Server) settings, which are essential for Internet access, to send traffic to malicious servers. These poisoned web addresses in turn point traffic generated through infected PCs to fake or unsafe websites, most of them running online scams. There are also reports that the DNSChanger virus acts as a trojan, allowing the perpetrators of the attack to gain access to infected PCs. Google issued a general advisory for netizens in May earlier this year on detecting and removing DNSChanger from infected PCs. According to our report, some 5 lakh (500,000) PCs were still infected by the DNSChanger virus in May 2012.

    The first report of the DNSChanger virus and its affiliation with an international group of hackers came to light towards the end of last year, and the FBI has been chasing them down ever since. The group behind the DNSChanger virus is estimated to have infected close to 4 million PCs around the world in 2011, until the FBI shut them down in November. In the last stage of Operation Ghost Click, the FBI plans to pull the plug and bring down the temporary rogue DNS servers on Monday, July 9, according to an official announcement. As a result, PCs still infected by the DNSChanger virus will be unable to access the Internet.

    How do you know if your PC has the DNSChanger virus? Don't worry: Google has explained the attack and the tools to remove the malware on its official blog, and Trend Micro has extensive step-by-step instructions to check whether your Windows PC or Mac is infected. The article is found at http://www.thinkdigit.com/Internet/Google-warns-users-about-DNSChanger-malware_9665.html. How do I check if my computer is one of those affected?
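
    The first step in any of those check-up guides is simply seeing which DNS servers the machine is actually using, so the results can be compared against the rogue ranges published for this malware (a sketch):

        # Windows
        ipconfig /all | findstr /i "DNS"
        # Linux / OS X
        cat /etc/resolv.conf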

  • get-eventlog issue

    - by Jim B
    I wanted to get a quick report of some log entries I saw on a server, so I ran:

        Get-EventLog -LogName System -Newest 10 -ComputerName fs1 | fl

    I got events back, but the descriptions were all wrong. Here's an example:

        Index              : 1260055
        EntryType          : Warning
        InstanceId         : 2186936367
        Message            : The description for Event ID '-2108030929' in Source 'W32Time' cannot be found. The local computer may not have the necessary registry information or message DLL files to display the message, or you may not have permission to access them. The following information is part of the event:'time.windows.com,0x1'
        Category           : (0)
        CategoryNumber     : 0
        ReplacementStrings : {time.windows.com,0x1}
        Source             : W32Time
        TimeGenerated      : 1/25/2010 10:43:31 AM
        TimeWritten        : 1/25/2010 10:43:31 AM
        UserName           :

    Note that if I pull the EventID property it's correct (in this case 38). Is this a known issue, or is something wrong? The messages resolve fine via Event Viewer, both locally and remotely. Here is the PowerShell version info:

        Name             : ConsoleHost
        Version          : 2.0
        InstanceId       : bc58fcf8-bba3-4ca8-8972-17dbd5d9ff08
        UI               : System.Management.Automation.Internal.Host.InternalHostUserInterface
        CurrentCulture   : en-US
        CurrentUICulture : en-US
        PrivateData      : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
        IsRunspacePushed : False
        Runspace         : System.Management.Automation.Runspaces.LocalRunspace

    Here is the revised version info:

        Name                      Value
        ----                      -----
        CLRVersion                2.0.50727.3603
        BuildVersion              6.0.6002.18111
        PSVersion                 2.0
        WSManStackVersion         2.0
        PSCompatibleVersions      {1.0, 2.0}
        SerializationVersion      1.1.0.1
        PSRemotingProtocolVersion 2.1
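
    Since Get-EventLog resolves description strings against the local machine's message DLLs, a useful differential test (a sketch; it may or may not render these particular events correctly) is the newer event API via Get-WinEvent:

        Get-WinEvent -ComputerName fs1 -FilterHashtable @{ LogName = 'System' } -MaxEvents 10 |
            Format-List TimeCreated, Id, ProviderName, Message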

  • Getting at fsid under Linux? Or an alternate way of identifying filesystems?

    - by larsks
    In an environment with automounted home directories, where the same filesystem exported by a fileserver may be mounted multiple times on the client, I would like to be able to authoritatively identify whether two mountpoints are in fact the same filesystem. That is, if the remote server exports /home and the local client has:

        # mount
        fileserver:/home/l/lars on /home/lars type nfs (rw...)
        fileserver:/home/b/bob on /home/bob type nfs (rw...)

    I am looking for a way to identify that both /home/lars and /home/bob are in fact the same filesystem. In theory this is what the fsid member of the statvfs structure is for, but in all cases, for both local and remote filesystems, I am finding that the value of this structure member is 0. Is this some sort of client-side issue? Or do most modern NFS servers simply decline to provide a useful fsid?

    The end goal of all of this is to robustly interpret the output from the quota command for NFS filesystems. For example, given the example above, running quota as myself may return something like:

        Disk quotas for user lars (uid 6580):
             Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
        otherserver:/vol/home0/a/alice
                             12  52428800  52428800             4  4294967295  4294967295
        fileserver:/home/l/lars
                        9353032   9728000  10240000        124018        0        0

    The problem here is that there exists a quota for me on otherserver which is visible in the results of the quota command, even though my home directory is actually on a different device. My plan was to look up the fsid for each mountpoint listed in the quota output and check whether it matched the fsid associated with my home directory. It looks like this won't work, so... any suggestions?
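
    For quick inspection without writing C against statvfs, GNU stat exposes the same field (a sketch; on NFS mounts it may well print the same zero fsid described above):

        stat -f -c '%i  %n' /home/lars /home/bob    # %i is the filesystem ID in hex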

  • Problems Installing Trac using apt-get on Ubuntu Jaunty

    - by Ben Waine
    Hi, I'm having some issues getting apt to install Trac correctly on my Ubuntu Jaunty box. Using the command 'apt-get install trac' I get the following output:

        root@myserver:~# apt-get install trac
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        Since you only requested a single operation it is extremely likely that
        the package is simply not installable and a bug report against that
        package should be filed.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          trac: Depends: python-setuptools (> 0.5) but it is not installable
                Depends: python-pysqlite2 (>= 2.3.2) but it is not going to be installed
                Depends: python-subversion but it is not installable
                Depends: libjs-jquery but it is not installable
                Recommends: python-pygments (= 0.6) but it is not installable or
                            enscript but it is not installable
                Recommends: python-tz but it is not installable
        E: Broken packages

    I have successfully used the command on my Karmic Koala desktop machine and am able to create new projects etc. I thought I might be able to solve the problem by installing all Python-related extensions, but that produced a very similar output. I have main, universe and multiverse repositories enabled. It's a remote machine and I have no access to the GUI. Hope someone can help; googling failed to solve the issue or find a solution! Thanks, Ben
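
    "Not installable" (as opposed to "not going to be installed") usually means apt sees no repository offering the package at all, so it's worth confirming the package indexes really cover universe (a sketch):

        sudo apt-get update
        apt-cache policy python-setuptools     # lists which repositories, if any, provide it
        grep -n universe /etc/apt/sources.list # are the universe lines actually uncommented?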

  • Fedora 16 can connect to samba share using smbclient but not in nautilus 3.2.1

    - by Nathan Jones
    I have a machine running Ubuntu 11.10 Server acting as a Samba server to share my home directory. Everything works fine on my Windows 7 machine, but on my Fedora 16 laptop, if I use Nautilus to try to access the share using smb://192.168.0.8/nathan in the location bar, it just shows the loading cursor and does nothing. It never shows any errors, nothing. Using smbclient works just fine, but I'd like to get it working in Nautilus. I know that there can be problems with SELinux and Samba, so I created a file called booleans.local that contains samba_enable_home_dirs=1. My smb.conf file looks like this:

        # For Unix password sync to work on a Debian GNU/Linux system, the following
        # parameters must be set (thanks to Ian Kahan <<[email protected]> for
        # sending the correct chat script for the passwd program in Debian Sarge).
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .

        # This boolean controls whether PAM will be used for password changes
        # when requested by an SMB client instead of the program listed in
        # 'passwd program'. The default is 'no'.
        pam password change = yes

        # This option controls how unsuccessful authentication attempts are mapped
        # to anonymous connections
        map to guest = bad user

        ########## Domains ###########

        # Is this machine able to authenticate users. Both PDC and BDC
        # must have this setting enabled. If you are the BDC you must
        # change the 'domain master' setting to no
        #
        ;   domain logons = yes
        #
        # The following setting only takes effect if 'domain logons' is set
        # It specifies the location of the user's profile directory
        # from the client point of view)
        # The following required a [profiles] share to be setup on the
        # samba server (see below)
        ;   logon path = \\%N\profiles\%U
        # Another common choice is storing the profile in the user's home directory
        # (this is Samba's default)
        #   logon path = \\%N\%U\profile

        # The following setting only takes effect if 'domain logons' is set
        # It specifies the location of a user's home directory (from the client
        # point of view)
        ;   logon drive = H:
        #   logon home = \\%N\%U

        # The following setting only takes effect if 'domain logons' is set
        # It specifies the script to run during logon. The script must be stored
        # in the [netlogon] share
        # NOTE: Must be store in 'DOS' file format convention
        ;   logon script = logon.cmd

        # This allows Unix users to be created on the domain controller via the SAMR
        # RPC pipe. The example command creates a user account with a disabled Unix
        # password; please adapt to your needs
        ; add user script = /usr/sbin/adduser --quiet --disabled-password --gecos "" %u

        # This allows machine accounts to be created on the domain controller via the
        # SAMR RPC pipe.
        # The following assumes a "machines" group exists on the system
        ; add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u

        # This allows Unix groups to be created on the domain controller via the SAMR
        # RPC pipe.
        ; add group script = /usr/sbin/addgroup --force-badname %g

        ########## Printing ##########

        # If you want to automatically load your printer list rather
        # than setting them up individually then you'll need this
        #   load printers = yes

        # lpr(ng) printing. You may wish to override the location of the
        # printcap file
        ;   printing = bsd
        ;   printcap name = /etc/printcap

        # CUPS printing. See also the cupsaddsmb(8) manpage in the
        # cupsys-client package.
        ;   printing = cups
        ;   printcap name = cups

        ############ Misc ############

        # Using the following line enables you to customise your configuration
        # on a per machine basis. The %m gets replaced with the netbios name
        # of the machine that is connecting
        ;   include = /home/samba/etc/smb.conf.%m

        # Most people will find that this option gives better performance.
        # See smb.conf(5) and /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/speed.html
        # for details
        # You may want to add the following on a Linux system:
        #         SO_RCVBUF=8192 SO_SNDBUF=8192
        #   socket options = TCP_NODELAY

        # The following parameter is useful only if you have the linpopup package
        # installed. The samba maintainer and the linpopup maintainer are
        # working to ease installation and configuration of linpopup and samba.
        ;   message command = /bin/sh -c '/usr/bin/linpopup "%f" "%m" %s; rm %s' &

        # Domain Master specifies Samba to be the Domain Master Browser. If this
        # machine will be configured as a BDC (a secondary logon server), you
        # must set this to 'no'; otherwise, the default behavior is recommended.
        #   domain master = auto

        # Some defaults for winbind (make sure you're not using the ranges
        # for something else.)
        ;   idmap uid = 10000-20000
        ;   idmap gid = 10000-20000
        ;   template shell = /bin/bash

        # The following was the default behaviour in sarge,
        # but samba upstream reverted the default because it might induce
        # performance issues in large organizations.
        # See Debian bug #368251 for some of the consequences of *not*
        # having this setting and smb.conf(5) for details.
        ;   winbind enum groups = yes
        ;   winbind enum users = yes

        # Setup usershare options to enable non-root users to share folders
        # with the net usershare command.

        # Maximum number of usershare. 0 (default) means that usershare is disabled.
        ;   usershare max shares = 100

        # Allow users who've been granted usershare privileges to create
        # public shares, not just authenticated ones
        usershare allow guests = yes

        #======================= Share Definitions =======================

        # Un-comment the following (and tweak the other settings below to suit)
        # to enable the default home directory shares. This will share each
        # user's home director as \\server\username
        [homes]
        comment = Home Directories
        browseable = yes

        # By default, the home directories are exported read-only. Change the
        # next parameter to 'no' if you want to be able to write to them.
        read only = no

        # File creation mask is set to 0700 for security reasons. If you want to
        # create files with group=rw permissions, set next parameter to 0775.
        ;   create mask = 0775

        # Directory creation mask is set to 0700 for security reasons. If you want to
        # create dirs. with group=rw permissions, set next parameter to 0775.
        ;   directory mask = 0775

        # By default, \\server\username shares can be connected to by anyone
        # with access to the samba server. Un-comment the following parameter
        # to make sure that only "username" can connect to \\server\username
        # The following parameter makes sure that only "username" can connect
        #
        # This might need tweaking when using external authentication schemes
        valid users = %S

        # Un-comment the following and create the netlogon directory for Domain Logons
        # (you need to configure Samba to act as a domain controller too.)
        ;[netlogon]
        ;   comment = Network Logon Service
        ;   path = /home/samba/netlogon
        ;   guest ok = yes
        ;   read only = yes

        # Un-comment the following and create the profiles directory to store
        # users profiles (see the "logon path" option above)
        # (you need to configure Samba to act as a domain controller too.)
        # The path below should be writable by all users so that their
        # profile directory may be created the first time they log on
        ;[profiles]
        ;   comment = Users profiles
        ;   path = /home/samba/profiles
        ;   guest ok = no
        ;   browseable = no
        ;   create mask = 0600
        ;   directory mask = 0700

        [printers]
        comment = All Printers
        browseable = no
        path = /var/spool/samba
        printable = yes
        guest ok = no
        read only = no
        create mask = 0700

        # Windows clients look for this share name as a source of downloadable
        # printer drivers
        [print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers
        browseable = yes
        read only = yes
        guest ok = no

        # Uncomment to allow remote administration of Windows print drivers.
        # You may need to replace 'lpadmin' with the name of the group your
        # admin users are members of.
        # Please note that you also need to set appropriate Unix permissions
        # to the drivers directory for these users to have write rights in it
        ;   write list = root, @lpadmin

        # A sample share for sharing your CD-ROM with others.
        ;[cdrom]
        ;   comment = Samba server's CD-ROM
        ;   read only = yes
        ;   locking = no
        ;   path = /cdrom
        ;   guest ok = yes

        # The next two parameters show how to auto-mount a CD-ROM when the
        # cdrom share is accesed. For this to work /etc/fstab must contain
        # an entry like this:
        #
        #       /dev/scd0   /cdrom   iso9660   defaults,noauto,ro,user   0 0
        #
        # The CD-ROM gets unmounted automatically after the connection to the
        #
        # If you don't want to use auto-mounting/unmounting make sure the CD
        # is mounted on /cdrom
        #
        ;   preexec = /bin/mount /cdrom
        ;   postexec = /bin/umount /cdrom

    smbusers:

        <nathan> = <"nathan">

    Any help would be very much appreciated! Thanks!
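
    One thing worth noting: samba_enable_home_dirs is a server-side boolean, and the machine exporting the share here is Ubuntu, which does not run SELinux by default. To rule SELinux in or out on the Fedora laptop itself, a quick test (a sketch) is:

        getsebool -a | grep samba    # the samba-related booleans currently in effect
        sudo setenforce 0            # temporarily permissive; retry Nautilus, then setenforce 1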

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume?

        mgorven@moab:~% sudo lvdisplay /dev/moab/backup
          --- Logical volume ---
          LV Name                /dev/moab/backup
          VG Name                moab
          LV UUID                nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5
          LV Write Access        read/write
          LV Status              available
          # open                 1
          LV Size                500.00 GiB
          Current LE             128000
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     2048
          Block device           252:3

        mgorven@moab:~% sudo cryptsetup status backup
        /dev/mapper/backup is active and is in use.
          type:    LUKS1
          cipher:  aes-cbc-essiv:sha256
          keysize: 256 bits
          device:  /dev/mapper/moab-backup
          offset:  3072 sectors
          size:    1048572928 sectors
          mode:    read/write

        mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup
        tune2fs 1.42 (29-Nov-2011)
        Filesystem volume name:   backup
        Last mounted on:          /srv/backup
        Filesystem UUID:          63877e0e-0549-4c73-8535-b7a81eb363ed
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32768000
        Block count:              131071616
        Reserved block count:     0
        Free blocks:              112894078
        Free inodes:              32044830
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        RAID stride:              128
        RAID stripe width:        128
        Flex block group size:    16
        Filesystem created:       Sun Mar 11 19:24:53 2012
        Last mount time:          Sat May 19 13:29:27 2012
        Last write time:          Fri Jun  1 11:07:22 2012
        Mount count:              0
        Maximum mount count:      100
        Last checked:             Fri Jun  1 11:03:50 2012
        Check interval:           31104000 (12 months)
        Next check after:         Mon May 27 11:03:50 2013
        Lifetime writes:          118 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      383bcbc5-fde9-4720-b98e-2d6224713ecf
        Journal backup:           inode blocks
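
    A sketch of how the shrink is commonly sequenced for LUKS1, whose header stores no payload size, so the dm-crypt mapping is re-derived from the LV size at the next luksOpen. Shrinking is destructive if the numbers are wrong: back up first and confirm the used data fits comfortably in the new size.

        umount /srv/backup
        e2fsck -f /dev/mapper/backup
        resize2fs /dev/mapper/backup 90G              # shrink ext4 safely below the target LV size
        cryptsetup luksClose backup
        lvreduce -L 100G /dev/moab/backup
        cryptsetup luksOpen /dev/moab/backup backup   # mapping is now sized to the 100GiB LV
        resize2fs /dev/mapper/backup                  # grow ext4 back out to fill the new size
        e2fsck -f /dev/mapper/backup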

  • Authentication required by wireless network.

    - by Roman
    I would like to use a wireless network from Ubuntu. In the network drop-down menu I select a network (this is a university network and I have an account there). Then I get a window with the following fields:

        Wireless Security:    [WPA & WPA2 Enterprise]
        Authentication:       [Tunneled TLS]
        Anonymous Identity:   []
        CA Certificate:       [(None)]
        Inner Authentication: [some letters]
        User Name:            []
        Password:             []

    I put in my user name and password, leave the other fields at their default values, and leave "Anonymous Identity" blank. As a result I get "Authentication required by wireless network". How can I solve this problem? I think it is important to mention that our system administrator tried to find the file which probably needs to be used as the "CA Certificate". He said that he does not know where this file is located on Ubuntu (he supports only Windows). So probably this is the direction I need to go: I need to find this file. But maybe I am wrong; maybe something else needs to be done. Could you please help me with that?

  • Solaris 10 zlogin logs in, logs out immediately

    - by Spelevink
    On a SPARC V445 running Solaris 10 9/10, I had to rebuild rpool and reattach the three existing mirrored zpools on the other existing disks, with their ZFS filesystems and non-global zones intact. The zones have been configured with zonecfg -z ZONENAME create etc., and are now online after zoneadm -z ZONENAME attach -U followed by simply booting from the installed state, but I cannot zlogin to any of the zones except one. It shows that I am logged in, then a blank line, then immediately logged out again. When I try to log in using zlogin -C ZONENAME I cannot; the error message is:

        May 15 15:43:46 <hostname> login: open_module: stat(/usr/lib/security/pam_mkhomedir.so.1) failed: no such file or directory.
        May 15 15:43:46 <hostname> login: load_modules: cannot open module /usr/lib/security/pam_mkhomedir.so.1

    But /usr/lib/security/pam_mkhomedir.so.1 does not exist, and it does not exist on my other servers either, yet those zones are accessible using zlogin. I can only zlogin to the zones with zlogin -S ZONENAME. What to do next? Thank you.
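
    Since zlogin -S (safe mode) bypasses the login processing that a full zlogin goes through, the failing zones' PAM configuration is presumably what references the missing module. A way to confirm and work around it (a sketch; edit carefully and keep a safe-mode session open while testing):

        zlogin -S ZONENAME
        grep -n mkhomedir /etc/pam.conf
        # comment out any line loading pam_mkhomedir.so.1, then retry a normal zlogin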

  • PHP/Linux File Permissions

    - by user1733435
    May I ask a question about file permissions? I set up an Ubuntu server where Apache is running. I have a simple PHP upload form and am able to upload files to /var/www/site/uploads, as follows:

        sandbox@sandbox-virtual-machine:/var/www/site/uploads$ ll
        total 1736
        drwxrwxrwx 2 www-data www-data    4096 Oct 18 02:53 ./
        drwxrwxrwx 3 sandbox  sandbox     4096 Oct 18 00:42 ../
        -rw-r--r-- 1 www-data www-data  145998 Oct 18 02:53 3d wallpaper pic.jpg
        -rw-r--r-- 1 www-data www-data  166947 Oct 18 02:53 3D Wallpapers 9.jpg
        -rw-r--r-- 1 www-data www-data 1451489 Oct 18 02:53 6453_3d_landscape_hd_wallpapers_green.jpg

    Is there any way to upload files so that they show up as

        -rw-r--r-- 1 sandbox sandbox  145998 Oct 18 02:53 3d wallpaper pic.jpg
        -rw-r--r-- 1 sandbox sandbox  166947 Oct 18 02:53 3D Wallpapers 9.jpg
        -rw-r--r-- 1 sandbox sandbox 1451489 Oct 18 02:53 6453_3d_landscape_hd_wallpapers_green.jpg

    so that I can feed them straight to a waiting/running shell script? Right now the waiting script (move, checksums, rename, resize, etc.) is unable to do anything with uploaded files carrying www-data attributes. If I just create a file as the local account, such as

        sandbox@sandbox-virtual-machine:/var/www/site/uploads$ touch testfile

    then the script is able to run as I would like it to. Any suggestion would be gratefully received; thanks in advance as well.

    Update: Thanks to everyone giving help, I was able to make progress. Now I am close to getting it solved; appending the output:

        sandbox@sandbox-virtual-machine:/var/www/site/uploads$ ll
        total 388
        drwxrwxrwx 2 www-data www-data   4096 Oct 18 04:22 ./
        drwxrwxrwx 3 sandbox  sandbox    4096 Oct 18 04:17 ../
        -rw-r--r-- 1 sandbox  sandbox  166947 Oct 18 04:21 3D Wallpapers 9.jpg
        -rw-r--r-- 1 sandbox  sandbox  219808 Oct 18 04:20 adafruit_pi.png
        -rw-rw-r-- 1 sandbox  sandbox       0 Oct 18 04:22 test

    How may I set permissions on uploaded files to be like 'test', with the only difference in the middle (group) bits, i.e. adafruit_pi.png vs. test? Which statement shall I insert into the PHP code, please?
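
    For the follow-up question specifically: uploaded files keep whatever mode move_uploaded_file leaves them with, so the usual fix is an explicit chmod right after the move (a sketch; $target stands for the destination path the upload script already uses):

        move_uploaded_file($_FILES['file']['tmp_name'], $target);
        chmod($target, 0664);   // -rw-rw-r--, matching the 'test' file above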

  • Force database read to master if slave data is stale

    - by Jeff Storey
    I previously asked a specific question about database replication for new user signup, to which I got an answer, but I want to ask this in a more general sense. I have a database setup in which I am using a master/slave combination. I am using the slaves for load balancing (the data itself is partitioned/sharded across multiple databases, but each database has X slaves for load balancing). Let's say I write some data to the master. Now I do a subsequent read which hits a slave, but the slave has not yet caught up to the master. Is there a way (which can be done quickly, since it will happen frequently) to determine whether the data is stale on the slave, so I can then route the read to the master? In my previous question, it was suggested to do simultaneous writes to the cache and the database. This solution seems practical, but there is still a chance that the data may have been removed from the cache but not yet updated in the slave. A possible solution is to ensure the cache is big enough (based on the typical application load) that the data will not be evicted within the time frame it takes to replicate the data. This seems like it may be feasible. Can anyone provide additional insight into this question? Thanks!
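
    Assuming MySQL replication (the question doesn't name the engine), the quick staleness checks are the slave's reported lag, or blocking a read until the slave has applied a known master binlog position (a sketch; hostnames and positions are placeholders):

        mysql -h slave1 -e "SHOW SLAVE STATUS\G" | grep -E 'Seconds_Behind_Master|Master_Log_File|Exec_Master_Log_Pos'
        # per-read variant: SELECT MASTER_POS_WAIT('mysql-bin.000123', 456789, 1);
        # returns >= 0 once the slave reaches that position, -1 on timeout, NULL if replication is stopped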

  • Problem with wireless networking

    - by Rodnower
    Hello, I have Atheros wifi hardware, an Intel chipset, a Gigabyte laptop and CentOS 5 installed. Now I am trying to use the wireless network and running into problems. First of all, I have 2 OSes on my laptop, and when I boot Windows XP I can still access the wireless network, so the hardware works. My first attempt was to activate the wlan0 interface in System - Administration - Network, but I get:

        Determining IP information for wlan0... failed.

    My second try was also unsuccessful:

        [root 1 network-scripts]# ifup-wireless
        Error : unrecognised wireless request "off"

    The relevant output of iwconfig is:

        Warning: Driver for device wlan0 recommend version 21 of Wireless Extension,
        but has been compiled with version 20, therefore some driver features
        may not be available...

        wlan0     IEEE 802.11  ESSID:""
                  Mode:Managed  Frequency:2.462 GHz  Access Point: Not-Associated
                  Tx-Power=27 dBm
                  Retry min limit:7  RTS thr:off  Fragment thr=2352 B
                  Encryption key:off
                  Link Quality:0  Signal level:0  Noise level:0
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0  Missed beacon:0

    The same things happen even if I do modprobe wlan0 (this does not give an error). It is important to say that modprobe did not succeed in finding ath_pci, therefore I decided to download the latest version of the madwifi driver from http://madwifi-project.org. I extracted it, but when I run make, this is what I get:

        [root 1 madwifi-0.9.4]# make
        /bin/sh: line 0: cd: /lib/modules/2.6.18-164.el5/build: No such file or directory
        Makefile.inc:66: *** /lib/modules/2.6.18-164.el5/build is missing, please set KERNELPATH.  Stop.

    I tried to set KERNELPATH, but I think it was incorrect:

        [root 1 madwifi-0.9.4]# make KERNELPATH=/lib/modules/2.6.18-164.el5/kernel/
        /bin/sh: cc: command not found
        Makefile.inc:81: *** Cannot detect kernel version - please check compiler and KERNELPATH.  Stop.

    Does anyone have any ideas? Thank you very much in advance.
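
    Both build failures point at missing prerequisites rather than the driver itself: "cc: command not found" means no compiler, and the absent /lib/modules/.../build symlink means no kernel headers. A sketch of the usual fix on CentOS 5:

        yum install gcc kernel-devel               # kernel-devel must match the running kernel (uname -r)
        make KERNELPATH=/lib/modules/$(uname -r)/build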

  • Can't Connect To Local Mysql Using IP Address, but CAN connect from remote server

    - by user1782041
    Here's an interesting one that does not seem to fall into any of the MySQL connection issues I've read about or searched for. On an Ubuntu 12.04 box I had some system updates waiting to install, and I took care of that this evening. After the install, I started seeing errors in my syslog complaining about a particular PHP script that could no longer connect to the MySQL instance on the box. Here is the specific error:

        PHP Warning:  mysql_connect(): Can't connect to MySQL server on '192.168.0.40' (4)

    Now, the server's IP address is 192.168.0.40, and I've checked to make sure that MySQL is listening on 0.0.0.0 so that I can connect using either "localhost" or "192.168.0.40". Here's where things get odd. From the local machine, if I try

        mysql -uroot -p -h192.168.0.40

    I get this error:

        ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.0.40' (110)

    I've checked, and error 110 indicates an OS timeout, and error 2003 is the MySQL generic "can't connect" error. This indicates that it is not a permissions issue with the user. However, if I do the same thing from a remote machine (say, from 192.168.0.30), I log right in with no problems. Further, other scripts on the local machine that connect to MySQL using "localhost" for the host rather than "192.168.0.40" connect with no problems, and I can connect via the MySQL socket with no problems, both from the command line and from PHP scripts.

    So this feels like a networking issue of some kind on the local box, but there are no iptables rules on this box (it is firewalled externally) and I can't figure out what else may be causing this. The problematic script worked perfectly prior to the latest system update. For now, I'll simply change the script to connect via localhost, but I'd really like to know why it broke, for two reasons:

    1. There may be other scripts that connect using 192.168.0.40 that don't run very often and which are now broken. Auditing them all will take more time than I feel like devoting at the moment.
    2. I'm curious, and want to know why it broke so I can fix it correctly.

    Any help?
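
    A timeout on a local address while remote clients connect fine is worth checking at the routing and filtering layers; a few quick diagnostics (a sketch):

        ip route get 192.168.0.40          # should report 'local' and go via lo
        sudo iptables -L -n -v             # confirm the INPUT/OUTPUT chains really are empty
        sudo netstat -ltnp | grep 3306     # confirm mysqld is actually bound to 0.0.0.0:3306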

  • Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host

    - by Paul J. Warner
    I am having an issue with a program where, after 6 minutes plus or minus 5 seconds, we get the above exception. Some more info from the exception stack traces is below. This all happens pretty religiously: 6 minutes go by, and bam, the following 3 exceptions. We have the application installed in 2 other environments and it is working fine there. I am hoping to find some server settings, either in IIS 6 or Server 2003, that may be causing this issue to occur. I have reviewed some of the similar questions and don't see very many answers. I am hoping that the information I have provided may help a little bit.

        208741,Exception,,,,2011-06-21 00:30:14.193,SERVERNAME,2624,1,CLIENTNAME,The underlying connection was closed: An unexpected error occurred on a receive.
           at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
           at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
           at Microsoft.Web.Services3.WebServicesClientProtocol.GetResponse(WebRequest request, IAsyncResult result)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
           at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)
           at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead),2004437127,114,1

        208742,Exception,,,,2011-06-21 00:30:14.227,SERVERNAME,2624,1,CLIENTNAME,Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
           at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)
           at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead),2004437127,114,1

        208743,Exception,,,,2011-06-21 00:30:14.287,SERVERNAME,2624,1,CLIENTNAME,An existing connection was forcibly closed by the remote host
           at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size),-691097507,62,1

  • Requiring SSH-key Login From Specific IP Ranges

    - by Sean M
    I need to be able to access my server (Ubuntu 8.04 LTS) from remote sites, but I'd like to worry a bit less about password complexity. Thus, I'd like to require that SSH keys be used for login instead of name/password. However, I still have a lot to learn about security, and having already badly broken a test box while trying to set this up, I'm acutely aware of the chance of locking myself out while trying to accomplish this. So I have a second goal: I'd like to require that certain IP ranges (e.g. 10.0.0.0/8) may log in with name/password, but everyone else must use an SSH key to log in. How can I satisfy both of these goals? There already exists a very similar question here, but I can't quite figure out how to get to what I want from that information. Current tactic: reading through the PAM documentation (pam_access looks promising) and looking at /etc/ssh/sshd_config.

    Edit: Alternatively, is there a way to specify that certain users must authenticate with SSH keys, while others may authenticate with name/password?

    Solution that's currently working:

        # Globally deny logon via password, only allow SSH-key login.
        PasswordAuthentication no

        # But allow connections from the LAN to use passwords.
        Match Address 192.168.*.*
            PasswordAuthentication yes

    The Match Address block can also usefully be a Match User block, answering my secondary question. For now I'm just chalking the failure to parse CIDR addresses up to a quirk of my install, and resolving to try again when I go to Ubuntu 10.04 not too long from now. PAM turns out not to be necessary.
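
    For reference, current OpenSSH releases accept CIDR notation and comma-separated lists in Match blocks, which reads a little tighter than the wildcards above (a sketch; the user names are placeholders):

        PasswordAuthentication no

        Match Address 192.168.0.0/16,10.0.0.0/8
            PasswordAuthentication yes

        # per-user variant of the same idea
        Match User alice,bob
            PasswordAuthentication yes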

  • Wiping Deleted Directory Entries and Defragmenting Directories

    - by Synetech inc.
    Hi, I have seen plenty of apps that wipe the free space on a disk (usually by creating a file as big as the remaining space) or defragment a file (usually by using the MoveFile API to copy it to a new contiguous area). What I have not seen, however, is a program that wipes deleted directory entries. That is, when a file is deleted, its information (name, dates, etc.) remains in the directory, merely marked as empty. That leaves all kinds of information behind in the directory entry, and also wastes space, since (at least on FAT drives) the directory may be using several clusters. For example, if a directory once held a lot of files, it will have been expanded to use another cluster, which could be anywhere on the disk. This means the directory is fragmented and may be using more clusters than needed, possibly with hundreds of unused (i.e., "deleted file") entries between active files. Does anyone know of a program that can defragment/consolidate directories (i.e., wipe unused entries and move active entries together)? (I would really rather not have to resort to writing my own yet again.) Thanks a lot.

    EDIT: Sorry, I should have said: Windows and/or DOS, for FAT*/NTFS.

  • Are web service handler chains possible under IIS / ASP.NET

    - by Mike
    I'm working with a client who wants me to implement a particular design in an IIS/ASP.NET environment. This design was already implemented in Java, but I am not sure it is possible using Microsoft technologies. In a Tomcat/Java environment one can create so-called handler chains. In essence, a handler runs on the server on which the web service is running, and it intercepts the SOAP message coming to the web service. The handler can perform a number of tasks before passing control to the web service; some of these tasks may relate to authentication and authorization. Moreover, one can create handler chains, such that the handlers run in a particular sequence before passing control to the web service. This is a very elegant solution, as certain aspects of authentication and authorization can be performed automatically, without the developers of the client application and of the web service having to invest anything in them; the code for the client application and the web service is not affected. You can find a number of articles on the internet on this subject by searching Google for "web service handler chain". I performed searches for web service handlers in IIS or ASP.NET. I get some hits, but apparently handlers in IIS have a different meaning than the one described above. My question therefore is: can handler chains (as available in Java and Tomcat) be created in IIS? If so, how (any article, book, forum...)? Either a negative or a positive answer will be greatly appreciated. Mike

  • how to cause linux system datetime to run faster than real world datetime?

    - by JamesThomasMoon1979
    Background: I want to monitor a running Linux system over several days. It's a custom Gentoo build with much custom software on board. This software has ongoing maintenance timers, cron scripts and other clock-driven events. I need to verify that these scheduled events are working.

    Problem: Waiting for the system to step through its daily and weekly activity is a long wait, and modifying all clock-based timers on the system would be time consuming. Yet I often want to test a system's end-to-end scheduled activities without waiting a week.

    Potential solution: Have the Linux system under test appear to run through its daily cycle of activity within just a few hours.

    My question for Server Fault: Is there a way to cause the system's time to run faster than real-world time? My first thought is manipulating the NTP daemon to repeatedly and smoothly increment the clock. Any other ideas? And yes, I know this may have strange side effects; however, the system has no important or time-critical interactions with systems outside of itself, and this may be a valuable testing technique.
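
    A crude but effective whole-system version of this idea, as a sketch (run as root, with ntpd stopped so it doesn't fight the manual steps), is to step the clock forward in a loop; this example runs at roughly 300x real time:

        /etc/init.d/ntpd stop
        while true; do
            date -s "@$(( $(date +%s) + 300 ))"   # jump 5 minutes every real second
            sleep 1
        done

    For accelerating individual processes rather than the whole system, libfaketime can present a faster-running clock to selected programs without touching the real system time.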
