Search Results

Search found 9982 results on 400 pages for 'state monad'.


  • CentOS 7: FreeRADIUS fails to start on boot

    - by Alex
    I was messing around with FreeRADIUS and MySQL (MariaDB) and it seems FreeRADIUS service can't start properly on startup. But it starts fine using root user or in debug mode (radiusd -X) and works just fine! Debug mode shows no errors. systemctl command shows that radiusd.service has failed to start. /var/log/messages output: Aug 21 15:52:29 nexus-test systemd: Starting The Apache HTTP Server... Aug 21 15:52:29 nexus-test systemd: Starting MariaDB database server... Aug 21 15:52:29 nexus-test systemd: Starting FreeRADIUS high performance RADIUS server.... Aug 21 15:52:29 nexus-test systemd: Started OpenSSH server daemon. Aug 21 15:52:29 nexus-test mysqld_safe: 140821 15:52:29 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'. Aug 21 15:52:29 nexus-test mysqld_safe: 140821 15:52:29 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql Aug 21 15:52:30 nexus-test systemd: Started Postfix Mail Transport Agent. Aug 21 15:52:30 nexus-test avahi-daemon[604]: Registering new address record for fe80::250:56ff:fe85:e4af on eth0.*. Aug 21 15:52:30 nexus-test systemd: radiusd.service: control process exited, code=exited status=1 Aug 21 15:52:30 nexus-test systemd: Failed to start FreeRADIUS high performance RADIUS server.. Aug 21 15:52:30 nexus-test systemd: Unit radiusd.service entered failed state. Aug 21 15:52:31 nexus-test kdumpctl: kexec: loaded kdump kernel Aug 21 15:52:31 nexus-test kdumpctl: Starting kdump: [OK] Aug 21 15:52:31 nexus-test systemd: Started Crash recovery kernel arming. Aug 21 15:52:31 nexus-test systemd: Started The Apache HTTP Server. Aug 21 15:52:31 nexus-test systemd: Started MariaDB database server. /var/log/radius/radius.log output: Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Driver rlm_sql_mysql (module rlm_sql_mysql) loaded and linked Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Attempting to connect to database "radius" Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Opening additional connection (0) Thu Aug 21 15:24:16 2014 : Error: rlm_sql_mysql: Couldn't connect socket to MySQL server radius@localhost:radius Thu Aug 21 15:24:16 2014 : Error: rlm_sql_mysql: Mysql error 'Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)' Thu Aug 21 15:24:16 2014 : Error: rlm_sql (sql): Opening connection failed (0) Thu Aug 21 15:24:16 2014 : Error: /etc/raddb/mods-enabled/sql[47]: Instantiation failed for module "sql" After seeing this I tried to replicate the problem, killed mariadb.service and started to run debug mode again. And it spits out the same problem as in the radius.log. I tried disabling iptables and firewalld and rebooting, but no luck: systemctl disable iptables systemctl disable firewalld So maybe the problem is in the process startup order or delay of some kind. Maybe FreeRADIUS's SQL module can't connect to not yet started MariaDB? If it, how can I fix this? In earlier versions of RHEL/CENTOS I know you easily see service start order in like rc.d or stuff, now IDK. I am new to this fancy "systemd", "systemctl", "firewalld" stuff Centos 7 introduced so sorry I'm a little bit confused. Also this new FreeRADIUS 3 structure... PS. MariaDB is enabled on startup, credentials in FR DB configuration are correct A little update: cat /etc/systemd/system/multi-user.target.wants/radiusd.service output: [Unit] Description=FreeRADIUS high performance RADIUS server. 
After=syslog.target network.target [Service] Type=forking PIDFile=/var/run/radiusd/radiusd.pid ExecStartPre=-/bin/chown -R radiusd.radiusd /var/run/radiusd ExecStartPre=/usr/sbin/radiusd -C ExecStart=/usr/sbin/radiusd -d /etc/raddb ExecReload=/usr/sbin/radiusd -C ExecReload=/bin/kill -HUP $MAINPID [Install] WantedBy=multi-user.target
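
    One low-risk way to test the start-order theory is a systemd drop-in that delays radiusd until MariaDB is up. This is only a sketch: it assumes the MariaDB unit on this CentOS 7 box is named mariadb.service (which the boot log above suggests) and leaves the packaged radiusd.service untouched.

        # /etc/systemd/system/radiusd.service.d/wait-for-mariadb.conf
        [Unit]
        # start radiusd only after the MariaDB unit has been started
        After=mariadb.service
        Wants=mariadb.service

        systemctl daemon-reload
        systemctl restart radiusd

    A drop-in avoids editing the packaged unit file directly; if the SQL module still races MariaDB's socket creation, Requires=mariadb.service and Restart=on-failure would be the next things to try.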


  • SELF-SOLVED: AutoHotkey function GetMouseTaskbutton needs adapting for 64-bit OS

    - by auntyEEK
    SOLVED VIA SELF-HELP, HAIR-PULLING, AND TEETH-GRINDING. THANKS ANYWAY....... I'm using the GetMouseTaskbutton function from this thread on AHK forum. [http://www.autohotkey.com/forum/topic22763.html&highlight=getmousetaskbutton][1] ; Gets the index+1 of the taskbar button which the mouse is hovering over. ; Returns an empty string if the mouse is not over the taskbar's task toolbar. ; ; Some code and inspiration from Sean's TaskButton.ahk GetMouseTaskButton(ByRef hwnd) { MouseGetPos, x, y, win, ctl, 2 ; Check if hovering over taskbar. WinGetClass, cl, ahk_id %win% if (cl != "Shell_TrayWnd") return ; Check if hovering over a Toolbar. WinGetClass, cl, ahk_id %ctl% if (cl != "ToolbarWindow32") return ; Check if hovering over task-switching buttons (specific toolbar). hParent := DllCall("GetParent", "Uint", ctl) WinGetClass, cl, ahk_id %hParent% if (cl != "MSTaskSwWClass") return WinGet, pidTaskbar, PID, ahk_class Shell_TrayWnd hProc := DllCall("OpenProcess", "Uint", 0x38, "int", 0, "Uint", pidTaskbar) pRB := DllCall("VirtualAllocEx", "Uint", hProc , "Uint", 0, "Uint", 20, "Uint", 0x1000, "Uint", 0x4) VarSetCapacity(pt, 8, 0) NumPut(x, pt, 0, "int") NumPut(y, pt, 4, "int") ; Convert screen coords to toolbar-client-area coords. DllCall("ScreenToClient", "uint", ctl, "uint", &pt) ; Write POINT into explorer.exe. DllCall("WriteProcessMemory", "uint", hProc, "uint", pRB+0, "uint", &pt, "uint", 8, "uint", 0) ; SendMessage, 0x447,,,, ahk_id %ctl% ; TB_GETHOTITEM SendMessage, 0x445, 0, pRB,, ahk_id %ctl% ; TB_HITTEST btn_index := ErrorLevel ; Convert btn_index to a signed int, since result may be -1 if no 'hot' item. if btn_index 0x7FFFFFFF btn_index := -(~btn_index) - 1 if (btn_index > -1) { ; Get button info. SendMessage, 0x417, btn_index, pRB,, ahk_id %ctl% ; TB_GETBUTTON VarSetCapacity(btn, 20) DllCall("ReadProcessMemory", "Uint", hProc , "Uint", pRB, "Uint", &btn, "Uint", 20, "Uint", 0) state := NumGet(btn, 8, "UChar") ; fsState pdata := NumGet(btn, 12, "UInt") ; dwData ret := DllCall("ReadProcessMemory", "Uint", hProc , "Uint", pdata, "UintP", hwnd, "Uint", 4, "Uint", 0) } else hwnd = 0 DllCall("VirtualFreeEx", "Uint", hProc, "Uint", pRB, "Uint", 0, "Uint", 0x8000) DllCall("CloseHandle", "Uint", hProc) ; Negative values indicate seperator items. (abs(btn_index) is the index) return btn_index > -1 ? btn_index+1 : 0 } It identifies the owner of the hovered taskbar button. I'm using it in a routine to auto-activate window by hovering its taskbar button, and also a routine to close inactive window by middle-click on its taskbar button. Works great on my XP machine. The author had stated that the function does work in Vista, but it refuses to work for me in Vista 64-bit, so apparently it is only valid in 32-bit. And I am very new to AHK, and don't know how to adapt it. Unfortunately, my queries at the site sank without a trace. Does anyone have advice for me? I will be most grateful. Thanks.


  • SmartOS reboots spontaneously

    - by Alex
    I run a SmartOS system on a Hetzner EX4S (Intel Core i7-2600, 32G RAM, 2x3Tb SATA HDD). There are six virtual machines on the host: [root@10-bf-48-7f-e7-03 ~]# vmadm list UUID TYPE RAM STATE ALIAS d2223467-bbe5-4b81-a9d1-439e9a66d43f KVM 512 running xxxx1 5f36358f-68fa-4351-b66f-830484b9a6ee KVM 1024 running xxxx2 d570e9ac-9eac-4e4f-8fda-2b1d721c8358 OS 1024 running xxxx3 ef88979e-fb7f-460c-bf56-905755e0a399 KVM 1024 running xxxx4 d8e06def-c9c9-4d17-b975-47dd4836f962 KVM 4096 running xxxx5 4b06fe88-db6e-4cf3-aadd-e1006ada7188 KVM 9216 running xxxx5 [root@10-bf-48-7f-e7-03 ~]# The host reboots several times a week with no crash dump in /var/crash and no messages in the /var/adm/messages log. Basically /var/adm/messages looks like there was a hard reset: 2012-11-23T08:54:43.210625+00:00 10-bf-48-7f-e7-03 rsyslogd: -- MARK -- 2012-11-23T09:14:43.187589+00:00 10-bf-48-7f-e7-03 rsyslogd: -- MARK -- 2012-11-23T09:34:43.165100+00:00 10-bf-48-7f-e7-03 rsyslogd: -- MARK -- 2012-11-23T09:54:43.142065+00:00 10-bf-48-7f-e7-03 rsyslogd: -- MARK -- 2012-11-23T10:14:43.119365+00:00 10-bf-48-7f-e7-03 rsyslogd: -- MARK -- 2012-11-23T10:34:43.096351+00:00 10-bf-48-7f-e7-03 rsyslogd: -- MARK -- 2012-11-23T10:54:43.073821+00:00 10-bf-48-7f-e7-03 rsyslogd: -- MARK -- 2012-11-23T10:57:55.610954+00:00 10-bf-48-7f-e7-03 genunix: [ID 540533 kern.notice] #015SunOS Release 5.11 Version joyent_20121018T224723Z 64-bit 2012-11-23T10:57:55.610962+00:00 10-bf-48-7f-e7-03 genunix: [ID 299592 kern.notice] Copyright (c) 2010-2012, Joyent Inc. All rights reserved. 2012-11-23T10:57:55.610967+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: lgpg 2012-11-23T10:57:55.610971+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: tsc 2012-11-23T10:57:55.610974+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: msr 2012-11-23T10:57:55.610978+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: mtrr 2012-11-23T10:57:55.610981+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: pge 2012-11-23T10:57:55.610984+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: de 2012-11-23T10:57:55.610987+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: cmov 2012-11-23T10:57:55.610995+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: mmx 2012-11-23T10:57:55.611000+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: mca 2012-11-23T10:57:55.611004+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: pae 2012-11-23T10:57:55.611008+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: cv8 The problem is that sometimes the host loses the network interface on reboot so we need to perform a manual hardware reset to bring it back. We do not have physical or virtual access to the server console - no KVM, no iLO or anything like this. So, the only way to debug is to analyze crash dumps/log files. I am not a SmartOS/Solaris expert so I am not sure how to proceed. Is there any equivalent of Linux netconsole for SmartOS? Can I just redirect the console output to the network port somehow? Maybe I am missing something obvious and crash information is located somewhere else.
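
    There is no direct netconsole equivalent, but a first step is to confirm that a dump device and savecore are actually configured, otherwise a panic will never leave anything in /var/crash. A minimal check, assuming a stock SmartOS/illumos global zone:

        dumpadm                 # show the current crash dump configuration
        dumpadm -y              # make savecore run automatically on reboot
        savecore -v             # after the next unexpected reboot, try to extract a dump

    If dumps are configured and still nothing is captured, the reboots look more like a hard reset (power, watchdog, failing hardware) than a kernel panic, which points back at the Hetzner box itself.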


  • Cisco 678 Will Not Work using PPPoE - Possibly Because I Configured it Incorrectly..?

    - by Brian Stinar
    I am attempting to configure a Cisco 678 because I am totally sick on my Actiontec. However, I am running into some problems. It seems as though the Cisco is able to train the line, but I am unable to ping out. I am all right at programming, but still learning a lot when it comes to being a system administrator. I apologize in advance if I did something ridiculous, or am attempting to configure this device to do something it was not designed to do. It is almost like I am not correctly configuring the device to grab it's IP using PPPoA (like my Actiontec.) The output from "show running" (below) makes me think this too. Below are the commands I ran in order to configure this: # en # set nvram erase # write # reboot # en # set nat enable # set dhcp server enable # set PPP wan0-0 ipcp 0.0.0.0 # set ppp wan0-0 dns 0.0.0.0 # set PPP wan0-0 login xxxxx // My actual login # set PPP wan0-0 password yyyyy // My actual password # set PPP restart enabled # set int wan0-0 close # set int wan0-0 vpi 0 # set int wan0-0 vci 32 # set int wan0-0 open # write # reboot Here is the output from a few commands I thought could provide some useful information: cbos#ping 74.125.224.113 Sending 1 8 byte ping(s) to 74.125.224.113 every 2 second(s) Request timed out cbos#show version Cisco Broadband Operating System CBOS (tm) 678 Software (C678-I-M), Version v2.4.9 - Release Software Copyright (c) 1986-2001 by cisco Systems, Inc. Compiled Nov 17 2004 15:26:29 DMT FULL firmware version G96 NVRAM image at 0x1030f000 cbos#show errors - Current Error Messages - ## Ticks Module Level Message 0 000:00:00:00 PPP Info IPCP Open Event on wan0-0 1 000:00:00:14 ATM Info Wan0 Up 2 000:00:00:14 PPP Info PPP Up Event on wan0-0 3 000:00:01:54 PPP Info PPP Down Event on wan0-0 Total Number of Error Messages: 4 cbos#show interface wan0 wan0 ADSL Physical Port Line Trained Actual Configuration: Overhead Framing: 3 Trellis Coding: Enabled Standard Compliance: T1.413 Downstream Data Rate: 1184 Kbps Upstream Data Rate: 928 Kbps Interleave S Downstream: 4 Interleave D Downstream: 16 Interleave R Downstream: 16 Interleave S Upstream: 4 Interleave D Upstream: 8 Interleave R Upstream: 16 Modem Microcode: G96 DSP version: 0 Operating State: Showtime/Data Mode Configured: Echo Cancellation: Disabled Overhead Framing: 3 Coding Gain: Auto TX Power Attenuation: 0dB Trellis Coding: Enabled Bit Swapping: Disabled Standard Compliance: T1.413 Remote Standard Compliance: T1.413 Tx Start Bin: 0x6 Tx End Bin: 0x1f Data Interface: Utopia L1 Status: Local SNR Margin: 19.0dB Local Coding Gain: 7.5dB Local Transmit Power: 12.5dB Local Attenuation: 46.0dB Remote Attenuation: 31.0dB Local Counters: Interleaved RS Corrected Bytes: 0 Interleaved Symbols with CRC Errors: 2 No Cell Delineation Interleaved: 0 Out of Cell Delineation Interleaved: 0 Header Error Check Counter Interleaved: 0 Count of Severely Errored Frames: 0 Count of Loss of Signal Frames: 0 Remote Counters: Interleaved RS Corrected Bytes: 0 Interleaved Symbols with CRC Errors: 1 No Cell Delineation Interleaved: 0 Header Error Check Counter Interleaved: 0 Count of Severely Errored Frames: 0 Count of Loss of Signal Frames: 0 cbos#show int wan0-0 WAN0-0 ATM Logical Port PVC (VPI 0, VCI 32) is configured. 
ScalaRate set to Auto AAL 5 UBR Traffic IP Port Enabled cbos#show running Warning: traffic may pause while NVRAM is being accessed [[ CBOS = Section Start ]] NSOS MD5 Enable Password = XXXX NSOS MD5 Root Password = XXXX NSOS MD5 Commander Password = XXXX [[ PPP Device Driver = Section Start ]] PPP Port User Name = 00, "XXXX" PPP Port User Password = 00, XXXX PPP Port Option = 00, IPCP,IP Address,3,Auto,Negotiation Not Required,Negotiable ,IP,0.0.0.0 PPP Port Option = 00, IPCP,Primary DNS Server,129,Auto,Negotiation Not Required, Negotiable,IP,0.0.0.0 PPP Port Option = 00, IPCP,Secondary DNS Server,131,Auto,Negotiation Not Require d,Negotiable,IP,0.0.0.0 [[ ATM WAN Device Driver = Section Start ]] ATM WAN Virtual Connection Parms = 00, 0, 32, 0 [[ DHCP = Section Start ]] DHCP Server = enabled [[ IP Routing = Section Start ]] IP NAT = enabled [[ WEB = Section Start ]] WEB = enabled cbos# wtf...? Thank you all very much for taking the time to read this, and the help.


  • Bridging a non-persistent PPP connection to wireless (or wired) in Windows XP

    - by phooze
    I have a 3G modem-like device (eMobile's D01NX, PC card style, for any Japan nerds out there) that I use to connect my PC to the Internet. I'd like to bridge this connection with another computer either via an ad-hoc wireless network, or a simple cross-over cable (either are options). However, when I open "Network Connections", I do not see the PPP connection (otherwise I could click both and bridge). I believe this is because there is software (provided by the vendor) that is handling the card directly and registering a PPP connection dynamically. When connected, an ipconfig at the command line yields: Ethernet adapter wireless: Connection-specific DNS Suffix . : Autoconfiguration IP Address. . . : 169.254.5.169 Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : Ethernet adapter lan: Media State . . . . . . . . . . . : Media disconnected PPP adapter {B59EEDDE-A22B-48DF-93E5-04842B641257}: Connection-specific DNS Suffix . : IP Address. . . . . . . . . . . . : 114.xx.xxx.xx Subnet Mask . . . . . . . . . . . : 255.255.255.255 Default Gateway . . . . . . . . . : 114.xx.xxx.xx (I've commented out my IP address for privacy reasons, but what does appear there is a functional Internet IP address.) When I disconnect the adapter with the vendor software, the PPP connection disappears completely from the ipconfig list. Any ideas on how to do this?


  • Amazon EC2 Instance - m1.medium Ubuntu 12.04 - Started to crash three days ago

    - by Joy
    The environment: Amazon EC2 Instance - m1.medium Ubuntu 12.04 Apache 2.2.22 - Running a Drupal Site Using MySQL DB Server RAM info: ~$ free -gt total used free shared buffers cached Mem: 3 1 2 0 0 0 -/+ buffers/cache: 0 2 Swap: 0 0 0 Total: 3 1 2 Hard drive info: Filesystem Size Used Avail Use% Mounted on /dev/xvda1 7.9G 4.7G 2.9G 62% / udev 1.9G 8.0K 1.9G 1% /dev tmpfs 751M 180K 750M 1% /run none 5.0M 0 5.0M 0% /run/lock none 1.9G 0 1.9G 0% /run/shm /dev/xvdb 394G 199M 374G 1% /mnt The problem About two days ago the site started failing becaue the MySQL server was shut down by Apache with the following message: kernel: [2963685.664359] [31716] 106 31716 226946 22748 0 0 0 mysqld kernel: [2963685.664730] Out of memory: Kill process 31716 (mysqld) score 23 or sacrifice child kernel: [2963685.664764] Killed process 31716 (mysqld) total-vm:907784kB, anon-rss:90992kB, file-rss:0kB kernel: [2963686.153608] init: mysql main process (31716) killed by KILL signal kernel: [2963686.169294] init: mysql main process ended, respawning That states that the VM was occupying 0.9GB, but my Ram has 2GB free, so 1GB was still left free. I understand that in Linux applications can allocate more memory than physically available. I don't know if this is the problme, it's the first time that it has started to happen. Obviously, the MySQL server tries to restart, but there's no memory for it apparently and it won't restart. Here is its error log: Plugin 'FEDERATED' is disabled. The InnoDB memory heap is disabled Mutexes and rw_locks use GCC atomic builtins Compressed tables use zlib 1.2.3.4 Initializing buffer pool, size = 128.0M InnoDB: mmap(137363456 bytes) failed; errno 12 Completed initialization of buffer pool Fatal error: cannot allocate memory for the buffer pool Plugin 'InnoDB' init function returned error. Plugin 'InnoDB' registration as a STORAGE ENGINE failed. Unknown/unsupported storage engine: InnoDB [ERROR] Aborting [Note] /usr/sbin/mysqld: Shutdown complete I simply restarted the Mysql service. About two hours later it happened again. I restarted it. Then it happened again 9 hours later. So then I thought of the MaxClients parameter of apache.conf, so I went to check it out. It was set at 150. I decided to drop it down to 60. As so: <IfModule mpm_prefork_module> ... MaxClients 60 </IfModule> <IfModule mpm_worker_module> ... MaxClients 60 </IfModule> <IfModule mpm_event_module> ... MaxClients 60 </IfModule> Once I did that, I had the apache2 service restart and it all went smoothly for 3/4 of a day. Since at night the MySQL service shut down once again, but this time it wasn't killed by the Apache2 service. Instead it called the OOM-Killer with the following message: kernel: [3104680.005312] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0 kernel: [3104680.005351] [<ffffffff81119795>] oom_kill_process+0x85/0xb0 kernel: [3104680.548860] init: mysql main process (30821) killed by KILL signal Now I'm out of ideas. Some articles state that the ideal thing to do is change the kernel behaviour with the following (include it to the file /etc/sysctl.conf ) vm.overcommit_memory = 2 vm.overcommit_ratio = 80 So no overcommits will take place. I'm wondering if this is the way to go? Keep in mind I'm no server administrator, I have basic knowldege. Thanks a bunch in advance.
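
    Before changing the overcommit sysctls, a common stopgap on small EC2 instances with no swap (free shows Swap: 0 above) is to add a swap file so the kernel has somewhere to push pages instead of invoking the OOM killer. A sketch, with an arbitrary 1 GB size and path:

        sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile
        echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # persist across reboots

    Lowering innodb_buffer_pool_size in my.cnf (the error log shows the default 128M pool failing to allocate) and keeping Apache's MaxClients modest attack the same problem from the other side.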


  • Reinstall Postfix

    - by Kevin
    I tried to reinstall Postfix, but I get this bunch of errors: root@***:/etc/init.d# sudo apt-get install -f postfix Reading package lists... Done Building dependency tree Reading state information... Done Suggested packages: procmail postfix-mysql postfix-pgsql postfix-ldap postfix-pcre resolvconf postfix-cdb mail-reader The following NEW packages will be installed: postfix 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/1,389kB of archives. After this operation, 3,531kB of additional disk space will be used. Preconfiguring packages ... Selecting previously deselected package postfix. (Reading database ... 56122 files and directories currently installed.) Unpacking postfix (from .../postfix_2.7.1-1ubuntu0.1_amd64.deb) ... Processing triggers for ureadahead ... Processing triggers for ufw ... Processing triggers for man-db ... Setting up postfix (2.7.1-1ubuntu0.1) ... Configuration file `/etc/init.d/postfix' ==> File on system created by you or by a script. ==> File also in package provided by package maintainer. What would you like to do about it ? Your options are: Y or I : install the package maintainer's version N or O : keep your currently-installed version D : show the differences between the versions Z : start a shell to examine the situation The default action is to keep your current version. *** postfix (Y/I/N/O/D/Z) [default=N] ? Y Installing new version of config file /etc/init.d/postfix ... Adding group `postfix' (GID 109) ... Done. Adding system user `postfix' (UID 106) ... Adding new user `postfix' (UID 106) with group `postfix' ... Not creating home directory `/var/spool/postfix'. Creating /etc/postfix/dynamicmaps.cf Adding tcp map entry to /etc/postfix/dynamicmaps.cf Adding group `postdrop' (GID 115) ... Done. setting myhostname: ***.net setting alias maps setting alias database setting myorigin setting destinations: ***.net, localhost.***.net, , localhost setting relayhost: setting mynetworks: 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 setting mailbox_size_limit: 0 setting recipient_delimiter: + setting inet_interfaces: all Postfix is now set up with a default configuration. If you need to make changes, edit /etc/postfix/main.cf (and others) as needed. To view Postfix configuration values, see postconf(1). After modifying main.cf, be sure to run '/etc/init.d/postfix reload'. Running newaliases postalias: fatal: /etc/mailname: cannot open file: Permission denied dpkg: error processing postfix (--configure): subprocess installed post-installation script returned error exit status 1 Processing triggers for libc-bin ... ldconfig deferred processing now taking place Errors were encountered while processing: postfix E: Sub-process /usr/bin/dpkg returned an error code (1) I tried aptitude purge, remove, autoclean and all of dpkg options (configure, remove, purge) but nothing did the trick. /etc/mailname exists (0644 root:root) with as content *.net (fetched from hostname). What am I doing wrong?
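
    Since the failure is postalias reading /etc/mailname during the postinst, a few things are worth checking before reinstalling again. The hostname below is a placeholder, and this assumes the file's content or attributes (rather than dpkg itself) are what postalias is choking on:

        ls -l /etc/mailname                        # confirm ownership and mode
        lsattr /etc/mailname                       # look for an immutable or otherwise odd attribute
        echo "host.example.net" > /etc/mailname    # plain FQDN on a single line
        dpkg --configure -a                        # let the interrupted postfix postinst finish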


  • Windows 7, HTTPS WebDav: Asks for password twice and fails. Any workarounds?

    - by AutoDMC
    Howdy. I have a Dav server running with PHP SabreDav (code.google.com/p/sabredav/wiki/Windows) on Cherokee at an HTTPS secured URL. It's set to use https, and uses Digest Authentication. I can log in with multiple browsers and a few third party clients (BitKinex and Java AnyClient can connect and browse as well, caveats below). However, when attempting to log in with Windows 7 (surprise, surprise), it asks for my password twice, then tells me that my folder is invalid. I have verified that the server is using Digest authentication. I've verified multiple times that third party software can connect. I even went out and bought a GoDaddy SSL certificate so my SSL wouldn't be self signed anymore. I've applied the registry hacks here: support.microsoft.com/kb/943280 (Note that the article says the "fix" already exists for Windows 7, I just need magical registry hax to get it to work) I've applied the registry hacks here: support.microsoft.com/kb/941050 I've applied the registry hacks here: support.microsoft.com/kb/841215 (Supposedly allows Basic Auth, which shouldn't apply, but why not?) All to no avail; Windows continues to ask for my password twice, then state that "The folder you entered does not appear to be valid. Please choose another." Try the command line? Sure: I've attempted to access with NET USE "https://dav.example.com/" password /USER:me (System error 59) I've attempted to access with NET USE "https://dav.example.com/" (System error 1790) I've attempted to access with NET USE "https://dav.example.com/subdir/" password /USER:me (System error 59) I've attempted to access with NET USE "https://dav.example.com/subdir/" (System error 1790) For good luck: ping dav.example.com ... works. And again, web browsers can access the share just fine, so can third party tools. Best I can tell at this point is "HAHA, NO WEBDAV FOR YOU ON WINDOWS 7" which would be fine except everyone who will be using this application... uses Windows 7. And most are not as persistent or pugnacious as I am. I feel like I've burned through every random suggestion I've found anywhere in the first 10 pages of Google on every search term I can think of. Any ideas? I need it to be Webdav, I need it to be over HTTPS, and I really do need a method to access it from Windows 7. EXTRA DETAIL: However, the "third party" programs I've tried have either been buggy, incomplete, or have silly ... "glitches." For example, BitKinex seems to fixate on any http error codes sent, so if there's a glitch reading a directory, BAM, that directory is always listed empty. Long directory listings also show up as blank, even though the transfer panel shows that directory listing is still taking place. In any case, BitKinex is useless for development purposes for the reasons above. And besides, I'm building this for people other than myself, people who will want to get this dav share working "the regular way."
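
    One more syntax worth trying before giving up on the built-in client: the Windows WebClient service accepts a UNC form with an @SSL suffix, which sometimes succeeds where the https:// form in net use fails. A sketch, with placeholder host, path and user:

        net start WebClient
        net use Z: \\dav.example.com@SSL\subdir * /user:me

    The trailing * makes Windows prompt for the password instead of taking it on the command line.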


  • File server share access intermittent/slow/machine unstable: win2k8r2

    - by Jack B.
    I have a file server running Win2k8R2 on an older HP DL380G4. It has nothing set up on it other than file sharing. All drivers/firmware/updates installed. The file server is used as a dump for a bunch of test machines - so essentially a lot of small files are being written to it. It was working fine until it started showing the following symptoms: Shares became either very slow/intermittent or could not access them at all. Logging in the the server, you could use it like normal but windows would start freezing and eventually you had to hard reboot it because nothing was responsive. After rebooting, it would work fine for 20min-2hours and then degrade into this broken state again. Some info after investigation: HP Raid Config utility shows the Raid array as functioning properly (RAID5 btw). Event log shows a bunch of DoS attacks from the test machines, saying it has disconnected the connection a. AFAIK (not part of my job) the test machines haven't changed the way they log information to this server or the amount of them hasn't increased. b. Nothing is infected, this server was scanned fully, and the test machines are re-imaged almost daily. Nothing in performance monitor shows as anything being pegged at maximum (CPU/HD/Network/RAM) I installed MS Network Monitor and it is showing a lot of traffic The server was using one gigabit Ethernet connection, I connected the second one as well with the same results. Forgot to add - one of the commonly written to dirs on the share has over 16k subdirs in it, with a crapton of small files within those dirs. Some of the OS instability was slow access to the drive which has this directory - perfmon doesn't show much activity on the HD though so I'm not sure if this crowded dir is the cause. Here is one important fact: I ran into this issue 2-3 months ago, couldn't figure it out, but I had a spare identical machine so I swapped them out (thought it was related to the machine), and now I have the same issue. Also, the computer will be stable if I turn off file sharing. So is the server just getting DoS'd by the test machines? I've never dealt with such an issue. Is instability in the server's OS common when getting DoS'd? Is there anything I can do to confirm this before telling the owners of the test machines to optimize their traffic? (I'm not sure what they'll be able to do). Is there something within Win2k8R2 that can balance the traffic across the two NICs? Any help would be appreciated. Update: Another thought - the drive with the share is RAID5 across 6 SCSI320 300GB HDs. They are near full capacity about 100GB from 1TB left. Could the amount of tiny files could be causing some weirdness with the parity in this array? I think I've read something about this in the past but I'm no expert on RAID.
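
    On the many-small-files angle, one cheap thing to rule out on Server 2008 R2 is 8.3 short-name generation, which gets expensive in directories holding tens of thousands of entries. A sketch, using D: as a stand-in for the data volume:

        fsutil 8dot3name query D:
        rem 1 = stop generating new short names on this volume; existing names keep working
        fsutil 8dot3name set D: 1

    As for the two NICs, Windows Server 2008 R2 has no native teaming; balancing traffic across both ports would have to come from the vendor's teaming software (typically the HP Network Configuration Utility on this class of hardware).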


  • Wi-Fi Stick with ZD1211 chip refuses to work on Ubuntu >8.10. No clue.

    - by Benjamin Maus
    I have a machine running Ubuntu 9.10 (Karmic *x86_64*). Everything is running smooth so far, except for the Wi-Fi USB Stick. The same device worked perfectly in 8.10. The wireless device is a GW-US54GXS using the Zydas Zd1211 chipset. Dmesg output after plugging in: [ 196.303436] phy0: Selected rate control algorithm 'minstrel' [ 196.304209] zd1211rw 2-1:1.0: phy0 [ 196.304227] usbcore: registered new interface driver zd1211rw [ 196.334137] usb 2-1: firmware: requesting zd1211/zd1211b_ub [ 196.357463] usb 2-1: firmware: requesting zd1211/zd1211b_uphr [ 196.402643] zd1211rw 2-1:1.0: firmware version 4725 [ 196.442611] zd1211rw 2-1:1.0: zd1211b chip 2019:5303 v4810 high 00-90-cc AL2230_RF pa0 ---N- [ 196.463814] usb 2-1: firmware: requesting zd1211/zd1211b_ub [ 196.466823] usb 2-1: firmware: requesting zd1211/zd1211b_uphr Syslog output: Nov 5 11:20:24 somesystem kernel: [ 196.303436] phy0: Selected rate control algorithm 'minstrel' Nov 5 11:20:24 kierkegaard NetworkManager: <info> Found radio killswitch rfkill0 (at /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/ieee80211/phy0/rfkill0) (driver <unknown>) Nov 5 11:20:24 somesystem kernel: [ 196.304209] zd1211rw 2-1:1.0: phy0 Nov 5 11:20:24 somesystem kernel: [ 196.304227] usbcore: registered new interface driver zd1211rw Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wmaster0, iface: wmaster0) Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wmaster0, iface: wmaster0): no ifupdown configuration found. Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wlan0, iface: wlan0) Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wlan0, iface: wlan0): no ifupdown configuration found. Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): driver supports SSID scans (scan_capa 0x01). Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): new 802.11 WiFi device (driver: 'zd1211rw') Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/2 Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): now managed Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): device state change: 1 -> 2 (reason 2) Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): bringing up device. Nov 5 11:20:24 somesystem kernel: [ 196.334137] usb 2-1: firmware: requesting zd1211/zd1211b_ub Nov 5 11:20:24 somesystem kernel: [ 196.357463] usb 2-1: firmware: requesting zd1211/zd1211b_uphr Nov 5 11:20:24 somesystem kernel: [ 196.402643] zd1211rw 2-1:1.0: firmware version 4725 Nov 5 11:20:24 somesystem kernel: [ 196.442611] zd1211rw 2-1:1.0: zd1211b chip 2019:5303 v4810 high 00-90-cc AL2230_RF pa0 ---N- Nov 5 11:20:24 somesystem NetworkManager: <WARN> nm_device_hw_bring_up(): (wlan0): device not up after timeout! Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): deactivating device (reason: 2). 
Nov 5 11:20:24 somesystem kernel: [ 196.463814] usb 2-1: firmware: requesting zd1211/zd1211b_ub Nov 5 11:20:24 somesystem kernel: [ 196.466823] usb 2-1: firmware: requesting zd1211/zd1211b_uphr Nov 5 11:20:29 somesystem wpa_supplicant[978]: Could not set interface 'wlan0' UP Nov 5 11:20:29 somesystem wpa_supplicant[978]: Failed to initialize driver interface Nov 5 11:20:29 somesystem NetworkManager: <WARN> nm_supplicant_interface_add_cb(): Unexpected supplicant error getting interface: wpa_supplicant couldn't grab this interface. Gnome tells me in the network menu that the device was "not ready". It appears in iwconfig but not in ifconfig. The same symptoms appear when I boot from the live CD. How can I solve this dilemma?
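
    The "device not up after timeout" line suggests the interface never makes it out of the down/blocked state, so a few manual checks from a terminal may narrow it down; this assumes the rfkill utility is installed (it is packaged for Karmic):

        rfkill list                     # is phy0 soft- or hard-blocked?
        sudo rfkill unblock all
        sudo ifconfig wlan0 up          # try to raise the interface by hand
        dmesg | tail -n 20              # any new zd1211rw or firmware errors?
        sudo modprobe -r zd1211rw && sudo modprobe zd1211rw    # reload the driver

    If ifconfig wlan0 up fails even with nothing blocked, the regression is in the driver or firmware rather than in NetworkManager.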


  • cannot send mail to postfix /w iptables linux proxy

    - by Juzzam
    I have two separate servers, both running Ubuntu 8.04. Server 1 has the real domain name of our site, let's refer to it as example.com. Server 2 is a mail server I have setup with postfix/courier. The hostname for this server is mail.example.com. I've setup iptables on Server 1 to forward all traffic on port 25 to Server 2. I used this script (except I changed the target ip address and the port from 80 to 25). When I send an email to [email protected] it works. However, when I try to send an email to [email protected] from gmail, I get this error: 550 550 #5.1.0 Address rejected [email protected] (state 14) /var/log/mail.log shows no new lines when this happens. What is strange is that it works with telnet from my local machine. For example: $ telnet example.com 25 220 VO13421.localdomain SMTP Postfix EHLO example.com 250-VO13421.localdomain 250-PIPELINING 250-SIZE 10240000 250-ETRN 250-STARTTLS 250-ENHANCEDSTATUSCODES 250-8BITMIME 250 DSN MAIL FROM: [email protected] 250 2.1.0 Ok RCPT TO: [email protected] 250 2.1.5 Ok data 354 Please start mail input. hello user... how have you been? . 250 Mail queued for delivery. quit 221 Closing connection. Good bye. /var/log/mail.log shows success (and the email goes to the maildr): Feb 24 09:47:36 VO13421 postfix/smtpd[2212]: connect from 81.208.68.208.static.dnsptr.net[208.68.xxx.xxx] Feb 24 09:48:01 VO13421 postfix/smtpd[2212]: warning: restriction `smtpd_data_restrictions' after `permit' is ignored Feb 24 09:48:01 VO13421 postfix/smtpd[2212]: 65C68120321: client=81.208.68.208.static.dnsptr.net[208.68.xxx.xxx] Feb 24 09:48:29 VO13421 postfix/smtpd[2212]: warning: restriction `smtpd_data_restrictions' after `permit' is ignored Feb 24 09:48:29 VO13421 postfix/smtpd[2212]: 6BDFA120321: client=81.208.68.208.static.dnsptr.net[208.68.xxx.xxx] Feb 24 09:48:29 VO13421 postfix/cleanup[2216]: 6BDFA120321: message-id= Feb 24 09:48:29 VO13421 postfix/qmgr[2042]: 6BDFA120321: from=, size=395, nrcpt=1 (queue active) Feb 24 09:48:29 VO13421 postfix/virtual[2217]: 6BDFA120321: to=, relay=virtual, delay=0.28, delays=0.25/0.02/0/0.01, dsn=2.0.0, status=sent (delivered to maildir) Feb 24 09:48:29 VO13421 postfix/qmgr[2042]: 6BDFA120321: removed Feb 24 09:48:30 VO13421 postfix/smtpd[2212]: disconnect from 81.208.68.208.static.dnsptr.net[208.68.xxx.xxx] iptables -L -n -v --line on example.com yields the following. Anyone know an iptables command to see the port forwarding? Also, it seems to accept all traffic, that's probably bad right? 
;] num pkts bytes target prot opt in out source destination 1 14041 1023K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 338 20722 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 419K packets, 425M bytes) num pkts bytes target prot opt in out source destination 1 13711 2824K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 postconf -n results in: alias_database = hash:/etc/postfix/aliases alias_maps = hash:/etc/postfix/aliases append_dot_mydomain = no biff = no config_directory = /etc/postfix delay_warning_time = 4h disable_vrfy_command = yes inet_interfaces = all local_recipient_maps = mailbox_size_limit = 0 masquerade_domains = mail.example.com mail1.example.com masquerade_exceptions = root maximal_backoff_time = 8000s maximal_queue_lifetime = 7d minimal_backoff_time = 1000s mydestination = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mynetworks_style = host myorigin = example.com readme_directory = no recipient_delimiter = + relayhost = smtp_helo_timeout = 60s smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_banner = $myhostname SMTP $mail_name smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org smtpd_delay_reject = yes smtpd_hard_error_limit = 12 smtpd_helo_required = yes smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit smtpd_recipient_limit = 16 smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit smtpd_data_restrictions = reject_unauth_pipelining smtpd_sender_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit smtpd_soft_error_limit = 3 smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes unknown_local_recipient_reject_code = 450 virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf virtual_gid_maps = mysql:/etc/postfix/mysql_gid.cf virtual_mailbox_base = /var/spool/mail/virtual virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf virtual_uid_maps = mysql:/etc/postfix/mysql_uid.cf
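
    To answer the side question: port-forward rules live in the nat table, so plain iptables -L (the filter table) never shows them. The inspection command, plus the SNAT/MASQUERADE counterpart that a DNAT-style forward usually needs when the proxied server does not route its replies back through Server 1, are sketched below with a placeholder address:

        iptables -t nat -L -n -v --line-numbers      # shows DNAT/REDIRECT rules and their hit counters
        # typical pair for forwarding port 25 from Server 1 to Server 2 (192.0.2.2 is a placeholder)
        iptables -t nat -A PREROUTING  -p tcp --dport 25 -j DNAT --to-destination 192.0.2.2:25
        iptables -t nat -A POSTROUTING -p tcp -d 192.0.2.2 --dport 25 -j MASQUERADE

    The packet counters from the -v listing will show whether connections from outside are actually hitting the forward rule, which separates a NAT problem from a Postfix recipient-table problem.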


  • iptables rule(s) to send openvpn traffic from clients over an sshuttle tunnel?

    - by Sam Martin
    I have an Ubuntu 12.04 box with OpenVPN. The VPN is working as expected -- clients can connect, browse the Web, etc. The OpenVPN server IP is 10.8.0.1 on tun0. On that same box, I can use sshuttle to tunnel into another network to access a Web server on 10.10.0.9. sshuttle does its magic using the following iptables commands: iptables -t nat -N sshuttle-12300 iptables -t nat -F sshuttle-12300 iptables -t nat -I OUTPUT 1 -j sshuttle-12300 iptables -t nat -I PREROUTING 1 -j sshuttle-12300 iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.10.0.0/24 -p tcp --to-ports 12300 -m ttl ! --ttl 42 iptables -t nat -A sshuttle-12300 -j RETURN --dest 127.0.0.0/8 -p tcp Is it possible to forward traffic from OpenVPN clients over the sshuttle tunnel to the remote Web server? I'd ultimately like to be able to set up any complicated tunneling on the server, and have relatively "dumb" clients (iPad, etc.) be able to access the remote servers via OpenVPN. Below is a basic diagram of the scenario: [Edit: added output from the OpenVPN box] $ sudo iptables -nL -v -t nat Chain PREROUTING (policy ACCEPT 1498 packets, 252K bytes) pkts bytes target prot opt in out source destination 1512 253K sshuttle-12300 all -- * * 0.0.0.0/0 0.0.0.0/0 Chain INPUT (policy ACCEPT 322 packets, 58984 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 584 packets, 43241 bytes) pkts bytes target prot opt in out source destination 587 43421 sshuttle-12300 all -- * * 0.0.0.0/0 0.0.0.0/0 Chain POSTROUTING (policy ACCEPT 589 packets, 43595 bytes) pkts bytes target prot opt in out source destination 1175 76298 MASQUERADE all -- * eth0 10.8.0.0/24 0.0.0.0/0 Chain sshuttle-12300 (2 references) pkts bytes target prot opt in out source destination 17 1076 REDIRECT tcp -- * * 0.0.0.0/0 10.10.0.0/24 TTL match TTL != 42 redir ports 12300 0 0 RETURN tcp -- * * 0.0.0.0/0 127.0.0.0/8 $ sudo iptables -nL -v -t filter Chain INPUT (policy ACCEPT 97493 packets, 30M bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 131K 109M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 1370 89160 ACCEPT all -- * * 10.8.0.0/24 0.0.0.0/0 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable [Edit 2: more OpenVPN server output] $ netstat -r Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface default 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 10.8.0.2 * 255.255.255.255 UH 0 0 0 tun0 192.168.1.0 * 255.255.255.0 U 0 0 0 eth0 [Edit 3: still more debug output] IP forwarding appears to be enabled correctly on the OpenVPN server: # find /proc/sys/net/ipv4/conf/ -name forwarding -ls -execdir cat {} \; 18926 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/all/forwarding 1 18954 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/default/forwarding 1 18978 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/eth0/forwarding 1 19003 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/lo/forwarding 1 19028 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/tun0/forwarding 1 Client routing table: $ netstat -r Routing tables Internet: Destination Gateway Flags Refs Use Netif Expire 0/1 10.8.0.5 UGSc 8 48 tun0 default 192.168.1.1 UGSc 2 1652 en1 10.8.0.1/32 10.8.0.5 UGSc 1 0 tun0 10.8.0.5 10.8.0.6 UHr 13 0 tun0 10.10.0/24 10.8.0.5 UGSc 0 0 tun0 <snip> Traceroute from 
client: $ traceroute 10.10.0.9 traceroute to 10.10.0.9 (10.10.0.9), 64 hops max, 52 byte packets 1 10.8.0.1 (10.8.0.1) 5.403 ms 1.173 ms 1.086 ms 2 192.168.1.1 (192.168.1.1) 4.693 ms 2.110 ms 1.990 ms 3 l100.my-verizon-garbage (client-ext-ip) 7.453 ms 7.089 ms 6.248 ms 4 * * * 5 10.10.0.9 (10.10.0.9) 14.915 ms !N * 6.620 ms !N
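
    One thing to keep in mind when reading that traceroute: the default traceroute uses UDP probes, and both sshuttle's rules and the sketch below redirect TCP only, so the trace escaping to the LAN gateway does not by itself prove that TCP from the VPN clients is unaffected. A more direct experiment is an explicit redirect for TCP arriving on tun0, inserted ahead of sshuttle's chain (port and subnet taken from the rules above; a sketch, not sshuttle's supported configuration):

        iptables -t nat -I PREROUTING 1 -i tun0 -p tcp -d 10.10.0.0/24 \
                 -j REDIRECT --to-ports 12300

    Testing with curl or telnet to 10.10.0.9 from a VPN client, while watching the rule's packet counter via iptables -t nat -L -n -v, shows whether forwarded TCP is reaching the redirect at all.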


  • dpkg: error processing /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--unpack)

    - by udo
    I had an issue (Question 199582) which was resolved. Unfortunately I am stuck at this point now. Running root@X100e:/var/cache/apt/archives# apt-get dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done The following NEW packages will be installed: file libexpat1 libmagic1 libreadline6 libsqlite3-0 mime-support python python-minimal python2.6 python2.6-minimal readline-common 0 upgraded, 11 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/5,204kB of archives. After this operation, 19.7MB of additional disk space will be used. Do you want to continue [Y/n]? Y (Reading database ... 6108 files and directories currently installed.) Unpacking python2.6-minimal (from .../python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ... new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages. please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead aborting installation of python2.6-minimal dpkg: error processing /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--unpack): subprocess new pre-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) results in above error. Running root@X100e:/var/cache/apt/archives# dpkg -i python2.6-minimal_2.6.6-5ubuntu1_i386.deb (Reading database ... 6108 files and directories currently installed.) Unpacking python2.6-minimal (from python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ... new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages. please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead aborting installation of python2.6-minimal dpkg: error processing python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--install): subprocess new pre-installation script returned error exit status 1 Errors were encountered while processing: python2.6-minimal_2.6.6-5ubuntu1_i386.deb results in above error. Running root@X100e:/var/cache/apt/archives# dpkg -i --force-depends python2.6-minimal_2.6.6-5ubuntu1_i386.deb (Reading database ... 6108 files and directories currently installed.) Unpacking python2.6-minimal (from python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ... new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages. please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead aborting installation of python2.6-minimal dpkg: error processing python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--install): subprocess new pre-installation script returned error exit status 1 Errors were encountered while processing: python2.6-minimal_2.6.6-5ubuntu1_i386.deb is not able to fix this. Any clues how to fix this?
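
    The pre-installation script is failing on exactly what it prints: /usr/lib/python2.6/site-packages exists as a real directory where the Ubuntu packaging expects a symlink into /usr/local. A possible manual repair, keeping the old directory around in case something in it still matters (a sketch, not an officially documented recovery path):

        mv /usr/lib/python2.6/site-packages /usr/lib/python2.6/site-packages.bak
        mkdir -p /usr/local/lib/python2.6/dist-packages
        ln -s /usr/local/lib/python2.6/dist-packages /usr/lib/python2.6/site-packages
        apt-get -f install        # or retry: dpkg -i python2.6-minimal_2.6.6-5ubuntu1_i386.deb

    Anything that was installed into the old site-packages directory can be copied into the new location, or reinstalled, afterwards.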


  • Debian dependency problems / partially installed

    - by Michael
    I tried to install curl support for php 5 on my debian squeeze machine and since I'm having problems. After trying to install curl I got dependency issues which I tried to solve by removing what started the issues. From one thing came another and I'm currently looking at ~29 issues when I try to do an apt-get upgrade. These issues vary from unable to config, dependency and unable to remove errors. I tried apt-get upgrade -f and installing packages using dpkg command. I tried removing using purge and force. I manually removed stuff to try and fix it. I tried running dpkg --configure -a. I've to say I'm still pretty new to linux so I'm out of idea's and cant seem to find an answer online that matches my problems. Here's a part of the apt-get upgrade command output: Reading package lists... Building dependency tree... Reading state information... 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 29 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up libgeoip1 (1.4.7~beta6+dfsg-1) ... Bus error dpkg: error processing libgeoip1 (--configure): subprocess installed post-installation script returned error exit status 135 Setting up libisc62 (1:9.7.3.dfsg-1~squeeze3) ... Bus error dpkg: error processing libisc62 (--configure): subprocess installed post-installation script returned error exit status 135 dpkg: dependency problems prevent configuration of libdns69: libdns69 depends on libgeoip1 (>= 1.4.7~beta6+dfsg); however: Package libgeoip1 is not configured yet. libdns69 depends on libisc62; however: Package libisc62 is not configured yet. dpkg: error processing libdns69 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libisccc60: libisccc60 depends on libisc62; however: Package libisc62 is not configured yet. dpkg: error processing libisccc60 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libisccfg62: libisccfg62 depends on libdns69; however: Package libdns69 is not configured yet. .. continues Errors were encountered while processing: libgeoip1 libisc62 libdns69 libisccc60 libisccfg62 libbind9-60 liblwres60 bind9-host libavahi-core7 libdaemon0 avahi-daemon libexif12 libffi5 libgomp1 libgphoto2-port0 libgphoto2-2 libperl5.10 libsensors4 libsnmp15 libhpmud0 libieee1284-3 libnss-mdns libossp-uuid16 libpq5 libv4l-0 libsane libsane-hpaio libssh2-1 python-gobject dpkg --configure -a Setting up libpq5 (8.4.8-0squeeze2) ... Bus error dpkg: error processing libpq5 (--configure): subprocess installed post-installation script returned error exit status 135 Setting up libperl5.10 (5.10.1-17squeeze2) ... Bus error dpkg: error processing libperl5.10 (--configure): subprocess installed post-installation script returned error exit status 135 Setting up libffi5 (3.0.9-3) ... Bus error dpkg: error processing libffi5 (--configure): subprocess installed post-installation script returned error exit status 135 Setting up libexif12 (0.6.19-1) ... .. continues Suggestions are really welcome I really don't know what to do. Michael.
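
    Repeated "Bus error" crashes from unrelated maintainer scripts usually point below dpkg, at failing RAM, a dying disk, or corrupted binaries, rather than at any single package. Some quick checks before fighting the dependency list further (debsums and smartmontools may need to be installed first, if apt is still usable):

        dmesg | tail -n 50            # kernel-logged I/O or memory errors?
        smartctl -a /dev/sda          # disk health, assuming /dev/sda is the system disk
        debsums -s libc6 coreutils    # verify a few core packages against their checksums
        dpkg --configure -a           # retry once the underlying cause is addressed

    If the hardware checks out, reinstalling the affected libraries (apt-get install --reinstall libgeoip1 libisc62 ...) would replace any corrupted postinst scripts and shared objects.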


  • "Service Unavailable" when browsing to static HTML page in non-application IIS website on Windows 2003 (possibly SharePoint WSS 2.0 related?)

    - by Jordan Rieger
    Background: My client has an old Pentium III Windows 2003 server whose 16/36 GB disks are dying. On it he has a database-driven web site and email application that needs further customization by a developer (me). First we need to get it working on the new server. The original developer is no longer available to provide a system setup guide. So my client got a tech who imaged the old drives over to the new server and managed to get it booting. But the IIS-driven site no longer works. In fact it seems that IIS itself does not work. Problem: Service Unavailable when attempting to browse from the server itself to the URL for a local Web Site called test which I setup in IIS to serve a single static index.htm file. This I did to isolate the problem, and eliminate the client's application from the equation. The site is setup on port 80 with the host header "test.myclientsdomain.com", and I used the etc\hosts file to point that host at the local IP. I know the host entry took effect because I can ping it. When doing an iisreset, I get: Attempting start... Restart attempt failed. IIS Admin Service or a service dependent on IIS Admin is not active. It most likely failed to start, which may mean that it's disabled. Despite this message, the services all stay in the Started state. The only relevant System event logs I found are: Event Type: Error Event Source: W3SVC Event Category: None Event ID: 1002 Date: 11/4/2012 Time: 11:04:47 PM User: N/A Computer: ALPHA1 Description: Application pool 'DefaultAppPool' is being automatically disabled due to a series of failures in the process(es) serving that application pool. Event Type: Error Event Source: W3SVC Event Category: None Event ID: 1039 Date: 11/4/2012 Time: 11:13:12 PM User: N/A Computer: ALPHA1 Description: A process serving application pool 'DefaultAppPool' reported a failure. The process id was '5636'. The data field contains the error number. Data: 0000: 7e 00 07 80 ~.. And one Application event log: Event Type: Error Event Source: Windows SharePoint Services 2.0 Event Category: None Event ID: 1000 Date: 11/4/2012 Time: 11:34:04 PM User: N/A Computer: ALPHA1 Description: #50070: Unable to connect to the database STS_Config on ALPHA2\SharePoint. Check the database connection information and make sure that the database server is running. That last log tells me that the tech may have initially tried to have both the old and the new server running, by renaming the new server from ALPHA1 to ALPHA2. And perhaps SharePoint grabbed onto that change, and now can't tell that the machine name has been switched back to the old ALPHA1. But why would SharePoint interfere with a static IIS web site serving a single HTML file? The test site is not even within an Application pool (I clicked the Remove button.) What I have tried/eliminated: No relevant services seem to be disabled: IIS Admin, WWW Publishing, Sharepoint Timer Giving Full Control to All Users/Everyone on the c:\inetpub\test folder serving my test site. I can connect to and query the local SharePoint config database (ALPHA1\SHAREPOINT\STS_CONFIG) from SSMS. But when I try to do stsadm -o setconfigdb -connect -databaseserver ALPHA1\SHAREPOINT it tells me The SharePoint admininstration port does not exist. Please use stsadm.exe to create it. And when I do that, using the port 9487 specified in the IIS SharePoint Admin site config, it tells me the port is already in use. Needless to say, simply browsing to the admin site gives me a similar error about being unable to reach the config database. 
I didn't want to go further down the SharePoint path as it may be completely unrelated to my IIS issue, and I don't even know yet if SharePoint is required for this application to work. The app itself is ASP.NET/C#/Silverlight with a little MS Word integration (maybe that's where the SharePoint stuff comes in).
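For what it's worth, when stsadm reports the administration port as already in use, it can help to find out which process actually owns that port before going further. A minimal sketch using standard Windows tools (9487 is the port from the IIS SharePoint Admin site config mentioned above, and the PID passed to tasklist is a placeholder for whatever netstat reports):

    REM Find the PID listening on the SharePoint admin port, then map it to a process name
    netstat -ano | findstr :9487
    tasklist /FI "PID eq 1234"

    REM After resolving the conflict, restart IIS and watch whether the same failure recurs
    iisreset /restart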

    Read the article

  • File server share access intermittent/slow/machine unstable: win2k8r2

    - by Jack B.
    I have a file server running Win2k8R2 on an older HP DL380G4. It has nothing set up on it other than file sharing. All drivers/firmware/updates installed. The file server is used as a dump for a bunch of test machines - so essentially a lot of small files are being written to it. It was working fine until it started showing the following symptoms: Shares became either very slow/intermittent or could not be accessed at all. Logging in to the server, you could use it like normal but Windows would start freezing and eventually you had to hard reboot it because nothing was responsive. After rebooting, it would work fine for 20min-2hours and then degrade into this broken state again. Some info after investigation: HP Raid Config utility shows the Raid array as functioning properly (RAID5 btw). Event log shows a bunch of DoS attacks from the test machines, saying it has disconnected the connection. a. AFAIK (not part of my job) the test machines haven't changed the way they log information to this server, and the number of them hasn't increased. b. Nothing is infected, this server was scanned fully, and the test machines are re-imaged almost daily. Nothing in performance monitor shows as being pegged at maximum (CPU/HD/Network/RAM). I installed MS Network Monitor and it is showing a lot of traffic. The server was using one gigabit Ethernet connection; I connected the second one as well with the same results. Forgot to add - one of the commonly written to dirs on the share has over 16k subdirs in it, with a crapton of small files within those dirs. Some of the OS instability was slow access to the drive which has this directory - perfmon doesn't show much activity on the HD though so I'm not sure if this crowded dir is the cause. Here is one important fact: I ran into this issue 2-3 months ago, couldn't figure it out, but I had a spare identical machine so I swapped them out (thought it was related to the machine), and now I have the same issue. Also, the computer will be stable if I turn off file sharing. So is the server just getting DoS'd by the test machines? I've never dealt with such an issue. Is instability in the server's OS common when getting DoS'd? Is there anything I can do to confirm this before telling the owners of the test machines to optimize their traffic? (I'm not sure what they'll be able to do). Is there something within Win2k8R2 that can balance the traffic across the two NICs? Any help would be appreciated. Update: Another thought - the drive with the share is RAID5 across 6 SCSI320 300GB HDs. They are near full capacity, with about 100GB of 1TB left. Could the amount of tiny files be causing some weirdness with the parity in this array? I think I've read something about this in the past but I'm no expert on RAID.
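    One cheap way to confirm whether the test machines are simply swamping the file service is to snapshot the SMB session and open-file counts the moment the shares start degrading and compare them against a healthy period. A rough sketch using only built-in tools (nothing here is specific to this particular server):

        REM Count active SMB sessions and open files on the server
        net session
        net file

        REM Dump the Server service tuning parameters so they can be compared against defaults
        reg query HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters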

    Read the article

  • How to install Delorme StreetAtlas (any version) + GPS inside VirtualBox VM?

    - by hotei
    When I try to run the install program I get a popup message that says the installer program is not a valid executable. Background: I want a GPS with maps on my laptop running Ubuntu 10.04 LTS. Unfortunately I can't find a decent native Linux GPS solution with 50 state US street level coverage. I have VirtualBox VMs available for WinXP and Win7 (among others). The VMs work fine with Microsoft Streets and Trips (2010) and MapNGo 5 (a very! old Delorme product), but while both these products support GPS, they don't support the Earthmate LT-40 USB GPS I already have. I've got pretty much every Delorme Street Atlas they've released in the last decade and none of them will install in a VM. Any help would be much appreciated. Clarification: I've installed the Delorme products from these CDs before and the disks are fine - as long as installation is done on a "physical" machine. Added: I've tried installing from an ISO as well as the real CD. No difference in result (setup.exe is not a valid executable). The WinXP is SP-2 (held back on purpose at this point - I'll snapshot and fork a later SP to test). The Win2K is SP-6a. Win7(32) VM is whatever updates came out last week. The USB setup is working at least to the point where the GPS device is active in the device list (has an x in the box). At this point it's not relevant because the program that needs to read it can't even be installed. Added 9-19: Added wine as harrymc suggested. Initial result was no change. Here's wine's error message. The file '/media/Disk1/setup.exe' is not marked as executable. If this was downloaded or copied from an untrusted source, it may be dangerous to run. For more details, read about the executable bit. At first I thought the execute bit was the problem, but looking at several other Windows CDs I see that the execute bit is not set on their exe files (which install to VM without error). Still it was worth a shot so I copied the StreetAtlas 9 DVD to my hard disk, changed the on-disk exe files to have the execute bit set and tried to install again. This time the install via wine got me through the installation process. When I start the program it bombs immediately, so we haven't made much real progress so far. I very much prefer the VM solution to wine, so I'm going back to that for now. To recap the VM situation, using an updated XP with SP3 and all recommended hotfixes: StreetAtlas 2009 USA fails with "not marked as executable". StreetAtlas 2007 USA fails with "not marked as executable". StreetAtlas 9 (copyright 2001) fails with "not marked as executable". StreetAtlas (copyright 1991) fails with "not marked as executable". Delorme Topo 4 (copyright 2002) fails with "not marked as executable". Just about ready to give up. So I switched from XP VM to Win7 VM and tried StreetAtlas 2009 again. This time it installs. Earthmate USB GPS works. WTH? I feel like the monkey who just wrote a line of Shakespeare. I'm smiling because it worked, but I have no clue why. I'm awarding the bounty to harrymc because wine did give some useful insight into the problem and a +1 to goyiux as thanks for helping.
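    In case it saves someone else the same trial and error with the "not marked as executable" message under wine, the workaround described above amounts to copying the disc somewhere writable, setting the execute bit on the installer, and running it from there. A minimal sketch (the paths are illustrative):

        # Copy the CD contents to writable storage, mark the .exe files executable, then install under wine
        cp -r /media/Disk1 ~/delorme-install
        find ~/delorme-install -iname "*.exe" -exec chmod +x {} +
        wine ~/delorme-install/setup.exe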

    Read the article

  • Lockdown users on Windows Server 2012

    - by el.severo
    I set up an Active Directory on a server machine with Windows Server 2012 and I'd like to create some users with limitations like Windows SteadyState does in Windows XP (locally). I've already seen the Windows SteadyState Handbook (with Windows Server 2008), but I'd like to know if anyone has tried this before. The limitations are the following: 1. Prevent locked or roaming user profiles that cannot be found on the computer from logging on 2. Do not cache copies of locked or roaming user profiles for users who have previously logged on to this computer 3. Do not allow Windows to compute and store passwords using LAN Manager Hash values 4. Do not store usernames or passwords used to log on to the Windows Live ID or the domain 5. Prevent users from creating folders and files on drive C:\ 6. Lock profile to prevent the user from making permanent changes 7. Remove the Control Panel, Printer and Network Settings from the Classic Start menu 8. Remove the Favorites icon 9. Remove the My Network Places icon 10. Remove the Frequently Used Program list 11. Remove the Shared documents folder from My Computer 12. Remove Control Panel icon 13. Remove the Set Program Access and Defaults icon 14. Remove the Network Connection (Connect To) icon 15. Remove the Printers and Faxes icon 16. Remove the Run icon 17. Prevent access to Windows Explorer features: Folder Options, Customize Toolbar, and the Notification Area 18. Prevent access to the taskbar 19. Prevent access to the command prompt 20. Prevent access to the registry editor 21. Prevent access to the Task Manager 22. Prevent access to Microsoft Management Console utilities 23. Prevent users from adding or removing printers 24. Prevent users from locking the computer 25. Prevent password changes (also requires the Control Panel icon to be removed) 26. Disable System Tools and other management programs 27. Prevent users from saving files to the desktop 28. Hide A Drive 29. Hide B Drive 30. Hide C Drive 31. Prevent changes to Internet Explorer registry settings 32. Empty the Temporary Internet Files folder when Internet Explorer is closed 33. Remove Internet Options 34. Remove General tab in Internet Options 35. Remove Security tab in Internet Options 36. Remove Privacy tab in Internet Options 37. Remove Content tab in Internet Options 38. Remove Connections tab in Internet Options 39. Remove Programs tab in Internet Options 40. Remove Advanced tab in Internet Options 41. Set a home page (Internet Explorer) 42. Restrict the possibility to change desktop image 43. Restrict the possibility to change wallpaper 44. Restrict USB flash drives Any suggestions for this? UPDATE: As @Dan suggested, I'd like to specify that this would be applied to an educational scenario where students log in from a computer and I want to add some restrictions to them.
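    For the drive-hiding items (28-30) specifically, the usual mechanism outside SteadyState is the Explorer policy values NoDrives and NoViewOnDrive, pushed per user through Group Policy Preferences or a logon script. A sketch as a .reg fragment (the bitmask 7 is an assumption covering only A:, B: and C:, since bit 0 = A, bit 1 = B and bit 2 = C; adjust it as needed and test on a non-production OU first):

        Windows Registry Editor Version 5.00

        ; Hide the A:, B: and C: drive icons in Explorer
        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
        "NoDrives"=dword:00000007
        ; Also block access to the hidden drives rather than just hiding the icons
        "NoViewOnDrive"=dword:00000007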

    Read the article

  • Can't configure frame relay T1 on Cisco 1760

    - by sonar
    For the past few days I've been trying to configure a data T1 via a Frame Relay. Now I've been pretty unsuccessful at it, and it's been a while, since I've done this so please bare with me. The ISP provided me the following information: 1. IP address 2. Gateway address 3. Encapsulation Frame Relay 4. DLCI 100 5. BZ8 ESF (I think the bz8 was supposed to be b8zs) 6. Time Slot (1 al 24). And what I have configured up until now is the following: interface Serial0/0 ip address <ip address> 255.255.255.252 encapsulation frame-relay service-module t1 timeslots 1-24 frame-relay interface-dlci 100 sh service-module s0/0 (outputs): Module type is T1/fractional Hardware revision is 0.128, Software revision is 0.2, Image checksum is 0x73D70058, Protocol revision is 0.1 Receiver has no alarms. Framing is **ESF**, Line Code is **B8ZS**, Current clock source is line, Fraction has **24 timeslots** (64 Kbits/sec each), Net bandwidth is 1536 Kbits/sec. Last module self-test (done at startup): Passed Last clearing of alarm counters 00:17:17 loss of signal : 0, loss of frame : 0, AIS alarm : 0, Remote alarm : 2, last occurred 00:10:10 Module access errors : 0, Total Data (last 1 15 minute intervals): 0 Line Code Violations, 0 Path Code Violations 0 Slip Secs, 0 Fr Loss Secs, 0 Line Err Secs, 0 Degraded Mins 0 Errored Secs, 0 Bursty Err Secs, 0 Severely Err Secs, 0 Unavail Secs Data in current interval (138 seconds elapsed): 0 Line Code Violations, 0 Path Code Violations 0 Slip Secs, 0 Fr Loss Secs, 0 Line Err Secs, 0 Degraded Mins 0 Errored Secs, 0 Bursty Err Secs, 0 Severely Err Secs, 0 Unavail Secs sh int: FastEthernet0/0 is up, line protocol is up Hardware is PQUICC_FEC, address is 000d.6516.e5aa (bia 000d.6516.e5aa) Internet address is 10.0.0.1/24 MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation ARPA, loopback not set Keepalive set (10 sec) Full-duplex, 100Mb/s, 100BaseTX/FX ARP type: ARPA, ARP Timeout 04:00:00 Last input 00:20:00, output 00:00:00, output hang never Last clearing of "show interface" counters never Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0 Queueing strategy: fifo Output queue: 0/40 (size/max) 5 minute input rate 0 bits/sec, 0 packets/sec 5 minute output rate 0 bits/sec, 0 packets/sec 0 packets input, 0 bytes Received 0 broadcasts, 0 runts, 0 giants, 0 throttles 0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored 0 watchdog 0 input packets with dribble condition detected 191 packets output, 20676 bytes, 0 underruns 0 output errors, 0 collisions, 1 interface resets 0 babbles, 0 late collision, 0 deferred 0 lost carrier, 0 no carrier 0 output buffer failures, 0 output buffers swapped out Serial0/0 is up, line protocol is down Hardware is PQUICC with Fractional T1 CSU/DSU MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation FRAME-RELAY, loopback not set Keepalive set (10 sec) LMI enq sent 157, LMI stat recvd 0, LMI upd recvd 0, DTE LMI down LMI enq recvd 23, LMI stat sent 0, LMI upd sent 0 LMI DLCI 1023 LMI type is CISCO frame relay DTE FR SVC disabled, LAPF state down Broadcast queue 0/64, broadcasts sent/dropped 2/0, interface broadcasts 0 Last input 00:24:51, output 00:00:05, output hang never Last clearing of "show interface" counters 00:27:20 Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0 Queueing strategy: weighted fair Output queue: 0/1000/64/0 (size/max total/threshold/drops) Conversations 0/1/256 (active/max 
active/max total) Reserved Conversations 0/0 (allocated/max allocated) Available Bandwidth 1152 kilobits/sec 5 minute input rate 0 bits/sec, 0 packets/sec 5 minute output rate 0 bits/sec, 0 packets/sec 23 packets input, 302 bytes, 0 no buffer Received 0 broadcasts, 0 runts, 0 giants, 0 throttles 1725 input errors, 595 CRC, 1099 frame, 0 overrun, 0 ignored, 30 abort 246 packets output, 3974 bytes, 0 underruns 0 output errors, 0 collisions, 48 interface resets 0 output buffer failures, 0 output buffers swapped out 4 carrier transitions DCD=up DSR=up DTR=up RTS=up CTS=up Serial0/0.1 is down, line protocol is down Hardware is PQUICC with Fractional T1 CSU/DSU MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation FRAME-RELAY Last clearing of "show interface" counters never Serial0/0.100 is down, line protocol is down Hardware is PQUICC with Fractional T1 CSU/DSU Internet address is <ip address>/30 MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation FRAME-RELAY Last clearing of "show interface" counters never And everything seems to be accounted for to me, but apparently I'm missing something. My issue is that I'm stuck on interface up, line protocol down, so the T1 doesn't go up. Any ideas? Thank you,
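For reference, "line protocol is down" together with "LMI enq sent 157, LMI stat recvd 0" usually means the router is sending LMI enquiries but never getting a status reply, so the LMI type is worth confirming with the carrier, and the DLCI belongs on the subinterface that carries the IP address rather than on the physical interface. A hedged sketch of what that could look like (the LMI type here is only a guess to verify with the ISP; keep your own addressing):

    interface Serial0/0
     no ip address
     encapsulation frame-relay
     ! ansi is an example - cisco and q933a are the other options; ask the carrier which one they run
     frame-relay lmi-type ansi
     service-module t1 timeslots 1-24
    !
    interface Serial0/0.100 point-to-point
     ip address <ip address> 255.255.255.252
     frame-relay interface-dlci 100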

    Read the article

  • Monitor randomly shutting down, computer accepting no input, need to restart to get working

    - by Sebastian Lamerichs
    First off, spec list: OS: Windows 7 Ultimate 64-bit SP1 CPU: i7-4820k @ 3.7GHz (stock) GPU: Two 3GB Radeon HD 7970s @ 1.05GHz Mobo: AsRock X79 Extreme6 HDD: 2TB Seagate Barracuda 7200rpm RAM: 16GB quad-channel Kingston 1600MHz PSU: Antec HCG 900W Monitors: Acer S220HQL 1920x1080 + ViewSonic VA2251 1920x1080. Plugged into different GPUs. My problem is that, on a daily-ish basis, my monitors will turn off and not turn back on. My computer will still be running, GPU/CPU/case fans all still going, but the monitors will not turn back on. Additionally, it seems to cease all network activity. It doesn't seem to log any errors at all. I've verified that this is not a monitor issue, as when I press the num/caps/scroll lock buttons on my keyboard, the lights don't change, so the computer is clearly not accepting input. I have noticed a few other people on the internet with this problem, and some have claimed that it was solved by disabling PCI-Express Link State Power Management, but the issue still occurs for me after this. Whilst my CPU and GPUs both run at 100% 24/7, the temperatures are certainly not at dangerous levels, with the CPU averaging 65°C and the GPUs at 70°C and 78°C average. All components are brand new. I have tried forcing MSI Afterburner to start when Windows starts and to force a constant voltage, as this fixed the issue for a few days for another user, but he reported back saying that it had stopped working properly again, so I'm not putting too much faith in this working. Many people have said to adjust display sleep mode settings, but this will clearly not work, as the keyboard lights would still work if the monitors were the issue. The closest I can get to a log file for this issue is the following Folding@Home logs: 14:45:21:WU01:FS00:0x17:Completed 1120000 out of 2000000 steps (56%) 14:46:43:WU00:FS01:0x17:Completed 480000 out of 2000000 steps (24%) 14:46:49:WU01:FS00:0x17:Completed 1140000 out of 2000000 steps (57%) 14:48:30:WU01:FS00:0x17:Completed 1160000 out of 2000000 steps (58%) 14:49:55:WU01:FS00:0x17:Completed 1180000 out of 2000000 steps (59%) As you can see, the second GPU (FS01) stops computation approximately three and a half minutes before the issue occurs (it should be completing 1% every 80-120 seconds), and the first GPU (FS00) continues for a few minutes more before the logs just end. As far as I can tell, the computer has a network failure at the time the first GPU stops working, the latest IRC message I received from this time was at 14:47:58. That being said, there could have just not been any messages between then and 14:50:00, so I'm going to be connecting a laptop to the same bouncer to double-check if it happens again. The GPUs functioned perfectly well in another computer for a significant period of time, so I'm fairly confident that they aren't the issue, which means that this is being caused by either software or the motherboard, or possibly RAM. I really hope it's software. I heard from a forum board that there was a patch from Microsoft that fixed this problem, but "I've forgot which KB it was or the google search terms I used to find the patch, LOL.", so that's not much help. Haven't seen it mentioned by anyone else on about a dozen threads about this issue either. The computer is plugged in via a surge-protected power board, and I've run several other computers and pieces of hardware through it with no issues, so that is not the cause. I have just set the hard disk to never turn off, although I don't believe that that will solve the issue. 
Strangely, this has only happened when I'm not at the computer (which is actually a minority of the time). Until today it had only happened when I had not been actively using the computer for 6 hours, but today it happened within 10-30 minutes of me last using the computer actively. I have enabled file logging from MSI Afterburner, so hopefully this will shed some light on the issue, but I'm not too optimistic. I've heard that it could be a motherboard problem, but I figured I should ask around before RMAing it. Any help?
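As an aside, if you want to confirm that the PCI Express Link State Power Management change actually took, or reapply it from a script after driver installs, powercfg can read and set it directly. A sketch assuming the standard power-setting aliases are present (powercfg /aliases lists them on Windows 7):

    REM Show the current PCI Express settings for the active plan
    powercfg -query SCHEME_CURRENT SUB_PCIEXPRESS

    REM 0 = Off; set it for both AC and DC, then re-activate the scheme
    powercfg -setacvalueindex SCHEME_CURRENT SUB_PCIEXPRESS ASPM 0
    powercfg -setdcvalueindex SCHEME_CURRENT SUB_PCIEXPRESS ASPM 0
    powercfg -setactive SCHEME_CURRENT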

    Read the article

  • Error on 64 Bit Install of IIS – LoadLibraryEx failed on aspnet_filter.dll

    - by Rick Strahl
    I’ve been having a few problems with my Windows 7 install and trying to get IIS applications to run properly in 64 bit. After installing IIS and creating virtual directories for several of my applications and firing them up I was left with the following error message from IIS: Calling LoadLibraryEx on ISAPI filter “c:\windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll” failed This is on Windows 7 64 bit and running on an ASP.NET 4.0 Application configured for running 64 bit (32 bit disabled). It’s also on what is essentially a brand new installation of IIS and Windows 7. So it failed right out of the box. The problem here is that IIS is trying to load this ISAPI filter from the 32 bit folder – it should be loading it from the Framework64 folder, not the Framework folder. The aspnet_filter.dll component is a small Win32 ISAPI filter used to back up the cookieless session state for ASP.NET on IIS 7 applications. It’s not terribly important because of this narrow focus, but it’s a component that gets loaded by default. After a lot of fiddling I ended up with two solutions (with the help and support of some Twitter folks): Switch IIS to run in 32 bit mode Fix the filter listing in ApplicationHost.config Switching IIS to allow 32 Bit Code This is a quick fix for the problem above which enables 32 bit code in the Application Pool. The problem above is that IIS is trying to load a 32 bit ISAPI filter and enabling 32 bit code gets you around this problem. To configure your Application Pool, open the Application Pool in IIS Manager, bring up Advanced Options and turn on Enable 32 Bit Applications: And voila the error message above goes away. Fix Filters Enabling 32 bit code is a quick fix solution to this problem, but not an ideal one. If you’re running a pure .NET application that doesn’t need to do COM or pInvoke Interop with 32 bit apps there’s usually no need for enabling 32 bit code in an Application Pool as you can run in native 64 bit code. So trying to get 64 bit working natively is a pretty key feature in my opinion :-) So what’s the problem – why is IIS trying to load a 32 bit DLL in a 64 bit install, especially if the application pool is configured to not allow 32 bit code at all? The problem lies in the server configuration and the fact that 32 bit and 64 bit configuration settings exist side by side in IIS. If I open my Default Web Site (or any other root Web Site) and go to the ISAPI filter list here’s what I see: Notice that there are 3 entries for ASP.NET 4.0 in this list. Only two of them, however, are specifically scoped to 32 bit or 64 bit. As you can see the 64 bit filter correctly points at the Framework64 folder to load the dll, while both the 32 bit and the ‘generic’ entry point at the plain Framework 32 bit folder. Aha! Therein lies our problem. You can edit ApplicationHost.config manually, but I ran into the nasty issue of not being able to easily edit that file with the 32 bit editor (who ever thought that was a good idea???? WTF). You have to open ApplicationHost.Config in a 64 bit native text editor – which Visual Studio is not. Or my favorite editor: EditPad Pro. Since I don’t have a native 64 bit editor handy, Notepad was my only choice. Or, as an alternative, you can use the IIS 7.5 Configuration Editor, which lets you interactively browse and edit most ApplicationHost settings. You can drill into the configuration hierarchy visually to find your keys and edit attributes and sub values in a property editor type interface. 
I had no idea this tool existed prior to today and it’s pretty cool as it gives you some visual clues to the options available – especially in the absence of an Intellisense scheme you’d get in Visual Studio (which doesn’t work). To use the Configuration Editor go to the Web Site root and use the Configuration Editor option in the Management Group. Drill into System.webServer/isapiFilters and then click on the Collection’s … button on the right. You should now see a display like this: which shows all the same attributes you’d see in ApplicationHost.config (cool!). These entries correspond to these raw ApplicationHost.config entries: <filter name="ASP.Net_4.0" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0" /> <filter name="ASP.Net_4.0_64bit" path="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness64" /> <filter name="ASP.Net_4.0_32bit" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness32" /> The key attribute we’re concerned with here is the preCondition and the bitness subvalue. Notice that the ‘generic’ version – which comes first in the filter list – has no bitness assigned to it, so it defaults to 32 bit and the 32 bit dll path. And this is where our problem comes from. The simple solution to fix the startup problem is to remove the generic entry from this list here or in the filters list shown earlier and leave only the bitness specific versions active. The preCondition attribute acts as a filter and as you can see here it filters the list by runtime version and bitness value. This is something to keep an eye on in general – if bitness values are missing it’s easy to run into conflicts like this with any settings that are global, especially those that load modules and handlers and other executable code. On 64 bit systems it’s a good idea to explicitly set the bitness of all entries or remove the non-specific versions and add bit specific entries. So how did this get misconfigured? I installed IIS before everything else was installed on this machine and then went ahead and installed Visual Studio. I suspect the Visual Studio install munged this up as I never saw a similar problem on my live server where everything just worked right out of the box. In searching about this problem a lot of solutions pointed at using aspnet_regiis –r from the Framework64 directory, but that did not fix this extra entry in the filters list – it adds the required 32 bit and 64 bit entries, but it doesn’t remove the errant entry that has no bitness set. Hopefully this post will help out anybody who runs into a similar situation without having to troubleshoot all the way down into the configuration settings to notice the bitness settings. It’s a good lesson learned for me – this is my first desktop install of a 64 bit OS and things like this are exactly what I was afraid I’d find. Now that I ran into this I have a good idea what to look for with 32/64 bit misconfigurations in IIS at least. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, ASP.NET
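If hand-editing applicationHost.config (or clicking through the Configuration Editor) feels fragile, the same cleanup can likely be scripted with appcmd, which always runs as a native process so the 32/64 bit editor problem never comes up. A sketch, assuming the filter names match the ones shown above:

    REM List the ISAPI filters together with their preCondition attributes
    %windir%\system32\inetsrv\appcmd.exe list config -section:system.webServer/isapiFilters

    REM Remove the entry that has no bitness precondition, leaving the _32bit/_64bit pair in place
    %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/isapiFilters /-"[name='ASP.Net_4.0']" /commit:apphost

    REM The quick-fix route from earlier in the post can be scripted the same way
    %windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /enable32BitAppOnWin64:true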

    Read the article

  • This task is currently locked by a running workflow and cannot be edited. Limitation to both Nintex and SPD workflow

    - by ybbest
    Note, this post is from Nintex Forum here. These limitations apply to both SharePoint Designer workflows and Nintex Workflow, as Nintex uses the SharePoint workflow engine. The common cause that I experience is that the ‘parent’ workflow generates more than one task at once. This is common as you can have multiple approvers for a given approval process. You could also have a workflow running when the task is created; one common scenario is setting a custom column value in your approval task. For me this is a huge limitation; as a Nintex lover I really hope Nintex can solve this problem with Microsoft going forward. Introduction “This task is currently locked by a running workflow and cannot be edited” is a common message that is seen when an error occurs while the SharePoint workflow engine is processing a task item associated with a workflow. When a workflow processes a task normally, the following sequence of events is expected to occur: 1. The process begins. 2. The workflow places a ‘lock’ on the task so nothing else can change the values while the workflow is processing. 3. The workflow processes the task. 4. The lock is released when the task processing is finished. When the message is encountered, it usually indicates that an error occurred between step 2 and 4. As a result, the lock is never released. Therefore, the ‘task locked’ message is not an error itself, rather a symptom of another error – the ‘task locked’ message does not indicate what went wrong. In most cases, once this message is encountered, the workflow cannot be made to continue and must be terminated and started again. The following is a guide that can help troubleshoot the cause of these messages.  Some initial observations to narrow down the potential causes are: Is the error consistent or intermittent? When the error is consistent, it will happen every time the workflow is run. When it is intermittent, it may happen regularly, but not every time. Does the error occur the first time the user tries to respond to a task, or do they respond and notice the workflow does not continue, and when they respond again the error occurs? If the message is present when the user first responds to the task, the issue would have occurred when the task was created. Otherwise, it would have occurred when the user attempted to respond to the task. Causes Modifying the task list A cause of this error appearing consistently the first time a user tries to respond to a task is a modification to the default task list schema. For example, changing the ‘Assigned to’ field in a task list to be a multiple selection will cause the behaviour. Deleting the workflow task then restoring it from the Recycle bin If you start a workflow, delete the workflow task then restore it from the Recycle Bin in SharePoint, the workflow will fail with the ‘task locked’ error.  This is confirmed behaviour whether using a SharePoint Designer or a Nintex workflow.  You will need to terminate the workflow and start it again. Parallel simultaneous responses A cause of this error appearing inconsistently is multiple users responding to tasks in parallel at the same time. In this scenario, one task will complete correctly and the other will not process. When the user tries again, the ‘task locked’ message will display. Nintex included a workaround for this issue in build 11000. 
In build 11000 and later, one of the users will receive a message on the task form when they attempt to respond, stating that they need to try again in a few moments. Additional processing on the task A cause of this error appearing consistently and inconsistently is having an additional system running on the items in the task list. Some examples include: a workflow running on the task list, an event receiver running on the task list or another automated process querying and updating workflow tasks. Note: This Microsoft help article (http://office.microsoft.com/en-us/sharepointdesigner/HA102376561033.aspx#5) explains creating a workflow that runs on the task list to update a field on the task. Our experience shows that this causes the ‘Task Locked’ issues when the ‘parent’ workflow is generating more than one task at once. Isolated system error If the error is a rare event, or a ‘one off’ event, then an isolated system error may have occurred. For example, if there is a database connectivity issue while the workflow is processing the task response, the task will lock. In this case, the user will respond to a task but the workflow will not continue. When they respond again, the ‘task locked’ message will display. In this case, there will be an error in the SharePoint ULS Logs at the time that the user originally responded. Temporary delay while workflow processes If the workflow is taking a long time to process after a user submits a task, they may notice and try to respond to the task again. They will see the task locked error, but after a number of attempts (or after waiting some time) the task response page eventually indicates the task has been responded to. In this case, nothing actually went wrong, and the error message gives an accurate indication of what is happening – the workflow temporarily locked the task while it was processing. This scenario may occur in a very large workflow, or after the SharePoint application pool has just started. Modifying the task via a web service with an invalid url If the Nintex Workflow web service is used to respond to or delegate a task, the site context part of the url must be a valid alternative access mapping url. For example, if you access the web service via the IP address of the SharePoint server, and the IP address is not a valid AAM, the task can become locked. The workflow has become stuck without any apparent errors This behaviour can occur as a result of a bug in the SharePoint 2010 workflow engine.  If you do not have the August 2010 Cumulative Update (or later) for SharePoint, and your workflow uses delays, “Flexi-task”, State machine”, “Task Reminder” actions or variables, you could be affected. Check the SharePoint 2010 Updates site here: http://technet.microsoft.com/en-us/sharepoint/ff800847.  The October CU is recommended http://support.microsoft.com/kb/2553031.   The fix is described as “Consider the following scenario. You add a Delay activity to a workflow. Then, you set the duration for the Delay activity. You deploy the workflow in SharePoint Foundation 2010. In this scenario, the workflow is not resumed after the duration of the Delay activity”. If you find this is occurring in your environment, install the October CU, terminate all the running workflows affected and run them afresh. Investigative steps The first step to isolate the issue is to create a new task list on the site and configure the workflow to use it.  Any customizations that were made to the original task list should not be made to the new task list. 
If the new task list eliminates the issue, then the cause can be attributed to the original task list or a change that was made to it. To change the task list that the workflow uses: In Workflow Designer select Settings -> Startup Options Then configure the task list as required If any of the scenarios above do not help, check the SharePoint logs for any messages with a category of ‘Workflow Infrastructure’. Conclusion The information in this article has been gathered from observations and investigations by Nintex. The sources of these issues are the underlying SharePoint workflow engine. This article will be updated if further causes are discovered. From <http://connect.nintex.com/forums/thread/6503.aspx>
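When a task really is stuck in the locked state, terminating the offending workflow instance programmatically is sometimes easier than hunting it down through the UI before starting it afresh. A rough sketch against the SharePoint 2010 server object model (the site URL, list name and item ID are placeholders; it has to run on a farm server, and as always, try it somewhere non-production first):

    // Requires a reference to Microsoft.SharePoint.dll; run on a server in the farm.
    using System;
    using Microsoft.SharePoint;
    using Microsoft.SharePoint.Workflow;

    class CancelLockedWorkflows
    {
        static void Main()
        {
            using (SPSite site = new SPSite("http://sharepoint/sites/demo"))   // placeholder URL
            using (SPWeb web = site.OpenWeb())
            {
                // Placeholder list name and item ID - point these at the item whose task is locked
                SPListItem item = web.Lists["Tasks"].GetItemById(42);

                foreach (SPWorkflow workflow in item.Workflows)
                {
                    // CancelWorkflow terminates the instance and releases its lock on the task
                    SPWorkflowManager.CancelWorkflow(workflow);
                    Console.WriteLine("Cancelled workflow instance {0}", workflow.InstanceId);
                }
            }
        }
    }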

    Read the article

  • Parallelism in .NET – Part 10, Cancellation in PLINQ and the Parallel class

    - by Reed
    Many routines are parallelized because they are long running processes.  When writing an algorithm that will run for a long period of time, it’s typically a good practice to allow that routine to be cancelled.  I previously discussed terminating a parallel loop from within, but have not demonstrated how a routine can be cancelled from the caller’s perspective.  Cancellation in PLINQ and the Task Parallel Library is handled through a new, unified cooperative cancellation model introduced with .NET 4.0. Cancellation in .NET 4 is based around a new, lightweight struct called CancellationToken.  A CancellationToken is a small, thread-safe value type which is generated via a CancellationTokenSource.  There are many goals which led to this design.  For our purposes, we will focus on a couple of specific design decisions: Cancellation is cooperative.  A calling method can request a cancellation, but it’s up to the processing routine to terminate – it is not forced. Cancellation is consistent.  A single method call requests a cancellation on every copied CancellationToken in the routine. Let’s begin by looking at how we can cancel a PLINQ query.  Suppose we wanted to provide the option to cancel our query from Part 6: double min = collection .AsParallel() .Min(item => item.PerformComputation()); We would rewrite this to allow for cancellation by adding a call to ParallelEnumerable.WithCancellation as follows: var cts = new CancellationTokenSource(); // Pass cts here to a routine that could, // in parallel, request a cancellation try { double min = collection .AsParallel() .WithCancellation(cts.Token) .Min(item => item.PerformComputation()); } catch (OperationCanceledException e) { // Query was cancelled before it finished } Here, if the user calls cts.Cancel() before the PLINQ query completes, the query will stop processing, and an OperationCanceledException will be raised.  Be aware, however, that cancellation will not be instantaneous.  When cts.Cancel() is called, the query will only stop after the current item.PerformComputation() elements all finish processing.  cts.Cancel() will prevent PLINQ from scheduling a new task for a new element, but will not stop items which are currently being processed.  
This goes back to the first goal I mentioned – Cancellation is cooperative.  Here, we’re requesting the cancellation, but it’s up to PLINQ to terminate. If we wanted to allow cancellation to occur within our routine, we would need to change our routine to accept a CancellationToken, and modify it to handle this specific case: public void PerformComputation(CancellationToken token) { for (int i=0; i<this.iterations; ++i) { // Add a check to see if we've been canceled // If a cancel was requested, we'll throw here token.ThrowIfCancellationRequested(); // Do our processing now this.RunIteration(i); } } With this overload of PerformComputation, each internal iteration checks to see if a cancellation request was made, and will throw an OperationCanceledException at that point, instead of waiting until the method returns.  This is good, since it allows us, as developers, to plan for cancellation, and terminate our routine in a clean, safe state. This is handled by changing our PLINQ query to: try { double min = collection .AsParallel() .WithCancellation(cts.Token) .Min(item => item.PerformComputation(cts.Token)); } catch (OperationCanceledException e) { // Query was cancelled before it finished } PLINQ is very good about handling this exception, as well.  There is a very good chance that multiple items will raise this exception, since the entire purpose of PLINQ is to have multiple items be processed concurrently.  PLINQ will take all of the OperationCanceledException instances raised within these methods, and merge them into a single OperationCanceledException in the call stack.  This is done internally because we added the call to ParallelEnumerable.WithCancellation. If, however, a different exception is raised by any of the elements, the OperationCanceledException as well as the other Exception will be merged into a single AggregateException. The Task Parallel Library uses the same cancellation model, as well.  Here, we supply our CancellationToken as part of the configuration.  The ParallelOptions class contains a property for the CancellationToken.  This allows us to cancel a Parallel.For or Parallel.ForEach routine in a very similar manner to our PLINQ query.  As an example, we could rewrite our Parallel.ForEach loop from Part 2 to support cancellation by changing it to: try { var cts = new CancellationTokenSource(); var options = new ParallelOptions() { CancellationToken = cts.Token }; Parallel.ForEach(customers, options, customer => { // Run some process that takes some time... DateTime lastContact = theStore.GetLastContact(customer); TimeSpan timeSinceContact = DateTime.Now - lastContact; // Check for cancellation here options.CancellationToken.ThrowIfCancellationRequested(); // If it's been more than two weeks, send an email, and update... if (timeSinceContact.Days > 14) { theStore.EmailCustomer(customer); customer.LastEmailContact = DateTime.Now; } }); } catch (OperationCanceledException e) { // The loop was cancelled } Notice that here we use the same approach taken in PLINQ.  The Task Parallel Library will automatically handle our cancellation in the same manner as PLINQ, providing a clean, unified model for cancellation of any parallel routine.  The TPL performs the same aggregation of the cancellation exceptions as PLINQ, as well, which is why a single exception handler for OperationCanceledException will cleanly handle this scenario.  This works because we’re using the same CancellationToken provided in the ParallelOptions.  
If a different exception was thrown by one thread, or a CancellationToken from a different CancellationTokenSource was used to raise our exception, we would instead receive all of our individual exceptions merged into one AggregateException.
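To round out the caller’s side of the story, the snippet below (a sketch, not from the original post) requests cancellation from a second task a couple of seconds into the query, which is roughly what a Cancel button handler would do:

    var cts = new CancellationTokenSource();

    // Simulate a user pressing Cancel two seconds after the query starts
    Task.Factory.StartNew(() =>
    {
        Thread.Sleep(2000);
        cts.Cancel();
    });

    try
    {
        double min = collection
            .AsParallel()
            .WithCancellation(cts.Token)
            .Min(item => item.PerformComputation(cts.Token));
    }
    catch (OperationCanceledException)
    {
        // Either PLINQ or PerformComputation observed the cancellation request
        Console.WriteLine("Query was cancelled before it finished.");
    }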

    Read the article

< Previous Page | 369 370 371 372 373 374 375 376 377 378 379 380  | Next Page >