Search Results

Search found 15439 results on 618 pages for 'wls configuration'.


  • IPv6 host route is deleted after PMTU expires

    - by SAPikachu
    I am experimenting my new IPv6 tunnel setup between my local Ubuntu box and a scratch Linode. I set up some docker containers, configured 6in4 tunnel server and IPv6 forwarding on the Linode: # uname -a Linux argo 3.15.4-x86_64-linode45 #1 SMP Mon Jul 7 08:42:36 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux # ip addr .. snipped .. 48: sit-sapikachu: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1472 qdisc noqueue state UNKNOWN group default link/sit 106.185.41.115 peer 1.2.3.4 inet6 fd00::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::6ab9:2973/64 scope link valid_lft forever preferred_lft forever 13: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff inet 172.17.42.1/16 scope global docker0 valid_lft forever preferred_lft forever inet6 fc00::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::5484:7aff:fefe:9799/64 scope link valid_lft forever preferred_lft forever // Docker containers are bridged to docker0 On my local box, I configured a 6in4 tunnel interface to connect to the Linode box, and added a host route to one of the docker container: # uname -a Linux sapikachu-netbox 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux # ip addr .. snipped .. 16: sit-argo: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default link/sit 0.0.0.0 peer 106.185.41.115 inet6 fd00::2/64 scope global valid_lft forever preferred_lft forever inet6 fe80::a97:302/64 scope link valid_lft forever preferred_lft forever inet6 fe80::ac19:1/64 scope link valid_lft forever preferred_lft forever inet6 fe80::c0a8:1f0/64 scope link valid_lft forever preferred_lft forever inet6 fe80::c0a8:1fa/64 scope link valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether *** brd ff:ff:ff:ff:ff:ff .. snipped .. inet6 fd00:0:1::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::2e0:6fff:fe0e:365e/64 scope link valid_lft forever preferred_lft forever # ip route replace fc00::1875:8606:d8c1:8a9d via fd00::1 # Add route to docker container # ip -6 route .. snipped unrelated routes fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires 590sec mtu 1472 fd00::/64 dev sit-argo proto kernel metric 256 fd00:0:1::/64 dev eth0 proto kernel metric 256 fe80::/64 dev sit-argo proto kernel metric 256 (Note that tunnel MTU on my local box is different from the server, this is intentional for testing) After adding the host route to the docker container (fc00::1875:8606:d8c1:8a9d), I can ping the container without problem until the route expires. After that I couldn't get reply any more. If I run ip -6 route in a few seconds after expiration, expiration time of the host route will be a negative number: fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires -1sec And output of ip route get fc00::1875:8606:d8c1:8a9d shows that it is routed to my default IPv6 gateway (which fails to route it correctly of course, since the address is not globally routable). After some time, the host route disappears without a trace. This problem won't happen if I do either one of the following things: Set MTU of tunnel on my local box to be the same as the server (1472). The route won't have expiration time in both ip -6 route and ip route get in this case. Instead of adding a host route, add a route with network mask (even /127 works). 
In this case ip -6 route shows the route without an expiration time; ip route get shows an expiration time, but it is correctly refreshed after expiration. Although this problem can easily be worked around, I am curious to know why it happens. Is there an error in my configuration, or is this a kernel bug?
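
    Since both workarounds are already identified in the question, here is a minimal sketch of each (the interface name and addresses are taken from the output above; adjust for your own setup):

      # Option 1: match the local tunnel MTU to the server's (1472 here),
      # so the kernel never attaches a PMTU expiry to the route:
      ip link set dev sit-argo mtu 1472

      # Option 2: use a prefix route instead of a pure host route; even a
      # /127 avoids the expiring-route code path:
      ip -6 route replace fc00::1875:8606:d8c1:8a9d/127 via fd00::1 dev sit-argo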


  • amplified reflected attack on dns

    - by Mike Janson
    The term is new to me. So I have a few questions about it. I've heard it mostly happens with DNS servers? How do you protect against it? How do you know if your servers can be used as a victim? This is a configuration issue right? my named conf file include "/etc/rndc.key"; controls { inet 127.0.0.1 allow { localhost; } keys { "rndc-key"; }; }; options { /* make named use port 53 for the source of all queries, to allow * firewalls to block all ports except 53: */ // query-source port 53; /* We no longer enable this by default as the dns posion exploit has forced many providers to open up their firewalls a bit */ // Put files that named is allowed to write in the data/ directory: directory "/var/named"; // the default pid-file "/var/run/named/named.pid"; dump-file "data/cache_dump.db"; statistics-file "data/named_stats.txt"; /* memstatistics-file "data/named_mem_stats.txt"; */ allow-transfer {"none";}; }; logging { /* If you want to enable debugging, eg. using the 'rndc trace' command, * named will try to write the 'named.run' file in the $directory (/var/named"). * By default, SELinux policy does not allow named to modify the /var/named" directory, * so put the default debug log file in data/ : */ channel default_debug { file "data/named.run"; severity dynamic; }; }; view "localhost_resolver" { /* This view sets up named to be a localhost resolver ( caching only nameserver ). * If all you want is a caching-only nameserver, then you need only define this view: */ match-clients { 127.0.0.0/24; }; match-destinations { localhost; }; recursion yes; zone "." IN { type hint; file "/var/named/named.ca"; }; /* these are zones that contain definitions for all the localhost * names and addresses, as recommended in RFC1912 - these names should * ONLY be served to localhost clients: */ include "/var/named/named.rfc1912.zones"; }; view "internal" { /* This view will contain zones you want to serve only to "internal" clients that connect via your directly attached LAN interfaces - "localnets" . */ match-clients { localnets; }; match-destinations { localnets; }; recursion yes; zone "." IN { type hint; file "/var/named/named.ca"; }; // include "/var/named/named.rfc1912.zones"; // you should not serve your rfc1912 names to non-localhost clients. // These are your "authoritative" internal zones, and would probably // also be included in the "localhost_resolver" view above :
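
    For a view-based config like the one above, the key protection is that whatever "external" view answers the internet must not recurse, so the server cannot be used as an open reflector. A rough sketch, not a drop-in (the external view is truncated above, so its exact contents are an assumption):

      view "external" {
          match-clients { any; };
          recursion no;                 // authoritative answers only
          additional-from-cache no;     // don't leak cached data to outsiders
          // only your authoritative zones belong here
      };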


  • DNS server BIND is not working [closed]

    - by user1742080
    I just installed BIND on RHEL 6 and pointed a domain to that server, but when I ping the domain it returns error 1214. Here is my named.conf:

      //
      // named.conf
      //
      // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
      // server as a caching only nameserver (as a localhost DNS resolver only).
      //
      // See /usr/share/doc/bind*/sample/ for example named configuration files.
      //
      options {
          listen-on port 53 { any; };
          listen-on-v6 port 53 { ::1; };
          directory "/var/named";
          dump-file "/var/named/data/cache_dump.db";
          statistics-file "/var/named/data/named_stats.txt";
          memstatistics-file "/var/named/data/named_mem_stats.txt";
          allow-query { any; };
          recursion yes;
          dnssec-enable yes;
          dnssec-validation yes;
          dnssec-lookaside auto;
          /* Path to ISC DLV key */
          bindkeys-file "/etc/named.iscdlv.key";
          managed-keys-directory "/var/named/dynamic";
      };
      logging {
          channel default_debug {
              file "data/named.run";
              severity dynamic;
          };
      };
      zone "." IN {
          type hint;
          file "named.ca";
      };
      include "/etc/named.rfc1912.zones";
      include "/etc/named.root.key";
      zone "mydomain.com" {
          type master;
          file "/var/named/data/named.mydomain.com";
          allow-update { none; };
      };

    AND the content of "/var/named/data/named.mydomain.com":

      $TTL 38400

      mydomain.com.      IN SOA  ns1.mydomain.com. milad.yahoo.com. (
                                  2012101201 ; serial number YYMMDDNN
                                  28800      ; Refresh
                                  7200       ; Retry
                                  864000     ; Expire
                                  38400      ; Min TTL
                                  )

      mydomain.com.      IN A    1.2.3.4
      www                IN A    1.2.3.4
      ns1.mydomain.com.  IN A    1.2.3.4
      ns2.mydomain.com.  IN A    1.2.3.4
      mydomain.com.      IN NS   ns1.mydomain.com.
      mydomain.com.      IN NS   ns2.mydomain.com.

    AND I'm sure the named service is running:

      [root@server ~]# service named status
      version: 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3
      CPUs found: 8
      worker threads: 8
      number of zones: 20
      debug level: 0
      xfers running: 0
      xfers deferred: 0
      soa queries in progress: 0
      query logging is OFF
      recursive clients: 0/0/1000
      tcp clients: 0/100
      server is up and running
      named (pid 26299) is running...
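
    Before digging further, it is worth validating the config and the zone on the server itself; these checks (stock BIND tools, with the paths from the config above) usually surface the actual failure:

      named-checkconf /etc/named.conf
      named-checkzone mydomain.com /var/named/data/named.mydomain.com
      dig @127.0.0.1 www.mydomain.com A +short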


  • Identifying the cause of my DNS failure (domain not propagating)

    - by thejartender
    I have set up a DNS server with the help of two tutorials:

      http://linuxconfig.org/linux-dns-server-bind-configuration
      http://ulyssesonline.com/2007/11/07/how-to-setup-a-dns-server-in-ubuntu/

    I am using Ubuntu and Bind9, and had issues I tried fixing on my own thanks to a question I posted here earlier, which pointed out my mistake of using RFC 1918 addresses in my previous SOA record:

      $TTL 3D
      @                     IN SOA ns.thejarbar.org. email. (
                            13112012
                            28800
                            3600
                            604800
                            38400 );
      thejarbar.org.        IN A     10.0.0.42
      @                     IN NS    ns.thejarbar,org.
      yuccalaptop           IN A     10.0.0.19
      ns                    IN A     10.0.0.42
      gw                    IN A     10.0.0.138
      www                   IN CNAME thejarbar.org.

      $TTL 600
      0.0.10.in-addr.arpa.  IN SOA ns.thejarbar.org. email. (
                            13112012
                            28800
                            3600
                            604800
                            38400 );
      0.0.10.in-addr.arpa.  IN NS  ns.thejarbar.org.
      42                    IN PTR thejarbar.org.
      19                    IN PTR yuccalaptop.thejarbar.org.
      138                   IN PTR gw.thejarbar.org.

    I read the ranges that are used under RFC 1918, modified my router's resource pool to assign LAN devices IPs within the 30.0.0.0 range, and modified my SOA to:

      $TTL 600
      @                     IN SOA ns.thejarbar.org. email. (
                            13112012
                            28800
                            3600
                            604800
                            38400 );
      thejarbar.org.        IN A     30.0.0.42
      @                     IN NS    ns.thejarbar,org.
      yuccalaptop           IN A     10.0.0.19
      ns                    IN A     30.0.0.42
      gw                    IN A     30.0.0.138
      www                   IN CNAME thejarbar.org.

      $TTL600
      0.0.10.in-addr.arpa.  IN SOA ns.thejarbar.org. email. (
                            13112012
                            28800
                            3600
                            604800
                            38400 );
      0.0.30.in-addr.arpa.  IN NS  ns.thejarbar.org.
      42                    IN PTR thejarbar.org.
      19                    IN PTR yuccalaptop.thejarbar.org.
      138                   IN PTR gw.thejarbar.org.

    I can ping my nameserver ns.thejarbar.org and it gives me the correct ISP IP address, but my domain never seems to propagate to my nameserver. I have searched for a concise tutorial that covers setting up a DNS server with a nameserver that hosts my site. I am fully aware that this is not recommended and am using this for my learning purposes. Getting to the question: given the lack of information in the tutorials I looked at (nothing about RFC 1918 and no example of swapping these addresses with the ISP IP), is my router modification going to help me? It does not seem to be. I have also tried, as recommended, using my ISP IP instead of the values I posted; my site never propagated to my nameserver. What could be causing this? I have run dig thejarbar.org @88.89.190.171 and get an authoritative response. Can anyone assist me with the final steps I may be missing here?
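
    A hedged pair of checks that separates "my server answers" from "the world is delegated to my server" (domain and IP taken from the question):

      # Walk the delegation from the root; if this never reaches
      # ns.thejarbar.org, the problem is registrar-side delegation/glue,
      # not the BIND config:
      dig +trace thejarbar.org A

      # Ask the server directly with recursion off; this only proves the
      # zone is loaded, not that anyone is delegated to it:
      dig +norec @88.89.190.171 thejarbar.org A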


  • "ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver" wheel-click is wrong

    - by sputnick
    I use this mouse under archlinux x86_64 with 3.2.8-1-ARCH kernel. I have some problems to select and then paste with the wheel-click in some applications like konversation, not in a terminal nor an editor. I don't know if it's a hardware problem or a software one. $ lsusb -v Bus 002 Device 110: ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 8 idVendor 0x046d Logitech, Inc. idProduct 0xc50e Cordless Mouse Receiver bcdDevice 25.10 iManufacturer 1 Logitech iProduct 2 USB RECEIVER iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 34 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xa0 (Bus Powered) Remote Wakeup MaxPower 70mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 1 Boot Interface Subclass bInterfaceProtocol 2 Mouse iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.11 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 95 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10 Device Status: 0x0000 (Bus Powered) When I see what's happens in xev, the output is different compared to another mouse My buggy Logitech mouse : ButtonPress event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170350700, (48,52), root:(1491,75), state 0x10, button 11, same_screen YES EnterNotify event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x0, time 170350700, (48,52), root:(1491,75), mode NotifyGrab, detail NotifyInferior, same_screen YES, focus YES, state 16 KeymapNotify event, serial 40, synthetic NO, window 0x0, keys: 90 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ButtonPress event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170350716, (48,52), root:(1491,75), state 0x10, button 6, same_screen YES ButtonRelease event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170350716, (48,52), root:(1491,75), state 0x10, button 6, same_screen YES ButtonRelease event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170350988, (48,52), root:(1491,75), state 0x10, button 11, same_screen YES LeaveNotify event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x0, time 170350988, (48,52), root:(1491,75), mode NotifyUngrab, detail NotifyInferior, same_screen YES, focus YES, state 16 a working mouse (dell) : ButtonPress event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170245131, (46,32), root:(1489,55), state 0x10, button 2, same_screen YES EnterNotify event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x0, time 170245131, (46,32), root:(1489,55), mode NotifyGrab, detail NotifyInferior, same_screen YES, focus YES, state 528 KeymapNotify event, serial 40, synthetic NO, window 0x0, keys: 90 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ButtonRelease event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170245411, (46,32), root:(1489,55), state 0x210, button 2, same_screen YES 
LeaveNotify event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x0, time 170245411, (46,32), root:(1489,55), mode NotifyUngrab, detail NotifyInferior, same_screen YES, focus YES, state 16 A demo of the problem when I use konversation (IRC) : http://www.youtube.com/watch?v=lhmr92M7NCc I tried to modify the button map with xmodmap like this with no success (one at a time) : xmodmap -e "pointer = 1 0 3" xmodmap -e "pointer = 1 1 3" xmodmap -e "pointer = 1 2 3" xmodmap -e "pointer = 1 3 3" xmodmap -e "pointer = 1 4 3" xmodmap -e "pointer = 1 5 3" xmodmap -e "pointer = 1 6 3" xmodmap -e "pointer = 1 7 3" xmodmap -e "pointer = 1 8 3" xmodmap -e "pointer = 1 9 3" xmodmap -e "pointer = 1 10 3" xmodmap -e "pointer = 1 11 3" xmodmap -e "pointer = 1 12 3" xmodmap -e "pointer = 1 13 3" xmodmap -e "pointer = 1 14 3" xmodmap -e "pointer = 1 15 3" xmodmap -e "pointer = 1 16 3" xmodmap -e "pointer = 1 17 3" xmodmap -e "pointer = 1 18 3" xmodmap -e "pointer = 1 19 3" xmodmap -e "pointer = 1 20 3" xmodmap -e "pointer = 1 21 3" xmodmap -e "pointer = 1 22 3" xmodmap -e "pointer = 1 23 3" xmodmap -e "pointer = 1 24 3" xmodmap -e "pointer = 1 25 3" Any clue ? I would like to avoid buying a new mouse just for a paste problem.
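
    One avenue the xmodmap attempts above do not cover: a pointer map has to list every button the device reports, while xinput can remap a single button without truncating the rest. A sketch, assuming the device advertises at least 12 buttons (the device name is taken from the lsusb output and will likely differ in xinput list):

      # Find the device and its current button map:
      xinput list
      xinput get-button-map "Logitech USB RECEIVER"

      # Make physical button 11 (seen in the xev output above) act as
      # button 2 (middle-click paste), leaving the others identity-mapped:
      xinput set-button-map "Logitech USB RECEIVER" 1 2 3 4 5 6 7 8 9 10 2 12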


  • configuring uppercut for automated build

    - by deepasun
    This is my cc.net's config file. http://confluence.public.thoughtworks.org/display/CCNET/Configuration+Preprocessor -- -- -- <!-- PROJECT STRUCTURE --> <cb:define name="WindowsFormsApplication1"> <project name="$(projectName)"> <workingDirectory>$(working_directory)\$(projectName)</workingDirectory> <artifactDirectory>$(drop_directory)\$(projectName)</artifactDirectory> <category>$(projectName)</category> <queuePriority>$(queuePriority)</queuePriority> <triggers> <intervalTrigger name="continuous" seconds="60" buildCondition="IfModificationExists" /> </triggers> <sourcecontrol type="svn"> <executable>c:\program files\subversion\bin\svn.exe</executable> <!--<trunkUrl>http://192.168.1.8/trainingrepos/deepasundari/WindowsFormsApplication1</trunkUrl>--> <trunkUrl>$(svnPath)</trunkUrl> <workingDirectory>$(working_directory)\$(projectName)</workingDirectory> </sourcecontrol> <tasks> <exec> <executable>$(working_directory)\$(projectName)\build.bat</executable> </exec> </tasks> <publishers> <merge> <files> <file>$(working_directory)\$(projectName)\build_output\build_artifacts\*.xml</file> <file>$(working_directory)\$(projectName)\build_output\build_artifacts\mbunit\*-results.xml</file> <file>$(working_directory)\$(projectName)\build_output\build_artifacts\nunit\*-results.xml</file> <file>$(working_directory)\$(projectName)\build_output\build_artifacts\ncover\*-results.xml</file> <file>$(working_directory)\$(projectName)\build_output\build_artifacts\ndepend\*.xml</file> </files> </merge> <!--<email from="[email protected]" mailhost="smtp.somewhere.com" includeDetails="TRUE"> <users> <user name="YOUR NAME" group="BuildNotice" address="[email protected]" /> </users> <groups> <group name="BuildNotice" notification="change" /> </groups> </email>--> <xmllogger/> <statistics> <statisticList> <firstMatch name="Svn Revision" xpath="//modifications/modification/changeNumber" /> <firstMatch name="ILInstructions" xpath="//ApplicationMetrics/@NILInstruction" /> <firstMatch name="LinesOfCode" xpath="//ApplicationMetrics/@NbLinesOfCode" /> <firstMatch name="LinesOfComment" xpath="//ApplicationMetrics/@NbLinesOfComment" /> </statisticList> </statistics> <modificationHistory onlyLogWhenChangesFound="true" /> <rss/> </publishers> </project> </cb:define> <cb:WindowsFormsApplication1 projectname="WindowsFormsApplication1" queuepriority="80" svnpath="http://192.168.1.8/trainingrepos/deepasundari/WindowsFormsApplication1" /> It is not producing the build directory in code_drop, but updating reports.xml with updated build.. wht is the problem?
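
    One thing worth ruling out: the <exec> task above runs build.bat with no explicit base directory or arguments, so where the script writes its output depends entirely on what build.bat assumes. A hedged variant that pins the working directory and hands the drop location to the script (baseDirectory, buildArgs and buildTimeoutSeconds are standard exec-task elements; whether build.bat honors an argument is an assumption):

      <tasks>
        <exec>
          <executable>$(working_directory)\$(projectName)\build.bat</executable>
          <baseDirectory>$(working_directory)\$(projectName)</baseDirectory>
          <buildArgs>$(drop_directory)\$(projectName)</buildArgs>
          <buildTimeoutSeconds>900</buildTimeoutSeconds>
        </exec>
      </tasks>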


  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data Daily and/or Weekly, which we process & merge into their existing database. During business hours, load in the servers are pretty minimal as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube. I manage the IT Operations Team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers such that we're not trying to process daily clients back-to-back over night. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support over-head to basically "schedule" the ETL such that they don't over-lap, and move clients/schedules around if they out-grow the resources on a particular server or allocated time-slot. As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3,4,5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling, and sequential processing. One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/Disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows. Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if instead of running DTExec, SQL, and SSAS same machine (with every machine running that configuration), we run in pairs of three machines with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETL we were able to process on the machine independently. Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently chose a server based off how "busy" it already is in terms of CPU/Memory/Disk utilization, but that would be a herculean effort, and seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it). So in summary, are we missing an obvious solution for this, and does anyone know if any tools (for free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term). 
Ultimately VMWare's Distributed Resource Scheduler addresses this as you simply run a consistent number of clients per VM that you know will never conflict scheduleing-wise, then leave it up to VMWare to move the VMs around to balance out hardware usage. I'm definitely not against using VMWare to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer by checking on resource utilization at the OS, SQL, SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMWare is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great. Thanks guys, Jeff
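
    For the memory-contention half of the problem, two knobs can be set without re-architecting anything; a sketch (the values and the package path are illustrative, not recommendations):

      REM Cap the SQL Server instance so DTExec and SSAS keep headroom:
      sqlcmd -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 16384; RECONFIGURE;"

      REM Limit how many executables a single package runs concurrently:
      dtexec /F "C:\etl\ClientLoad.dtsx" /MaxConcurrent 2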


  • Windows 7 is stuck at "Starting Windows" when I attempt to boot computer

    - by Eli
    Basically, whenever I turn on my computer, it gets to the Starting Windows phase and just stays there. The startup animation still plays, yet it gets nowhere. I have tried booting into safe mode, however it gets stuck at loading CLASSPNP.SYS. It then freezes there and doesn't continue booting. I have tried booting into recovery mode from the hard drive, and it freezes after displaying the background image. I have tried booting from a recovery CD, which works, and I was able to use system restore. However, using system restore did not fix it, and it still is stuck at the Starting Windows screen. I have tried booting a Windows CD (Windows 8 Retail Installer) to see if I could upgrade it to fix this issue, however that froze at a blank screen after it got past the boot logo. I have tried changing around the BIOS settings (including resetting), to no avail. I have tried re-plugging the internal PSU cables (this is a custom-built desktop), yet this has changed nothing. I can boot into a loopback Ubuntu install on the same drive, which works fine, other than the fact that it has issues with some of the USB ports and the network card. This system has worked fine for the past few months, completely stable, and nothing in the configuration has changed before this error started happening. Startup Repair on the Windows recovery CD doesn't find any issues. Unplugging my secondary hard drive or swapping around memory doesn't change anything. The hard drive itself is fine, it hasn't shown any signs of failure and once again, boots my other OS fine. If anyone could help with this, that would be great. I can't seem to find any possible solution to this. If it makes any difference, my system specs are as follows: AMD FX-8320 Gigabyte GA-970A-D3 4GB of DDR3 Radeon HD 6870 550w PSU I'd like to not have to reinstall Windows, for I have more than a terabyte of data that I would have to back up if that becomes the only option. EDIT: I have since tried the following: Tried the solution involving restoring files from RegBackup, which changed nothing. Tried testing everything with Hiren's boot CD, everything comes back as fine. Tried disabling everything unnecessary in the BIOS and unplugging everything unneeded, it still hangs. Tried swapping out every possible combination of RAM, it still has the same result. The RAM is not at fault it seems Tried every GPU I own (which is many!) and it still hangs at the exact same place. Tried minimizing the power consumption as much as possible, even using an old PCI graphics card. It still hangs at the same place in the same way, signifying that it's not the PSU at fault. Tried resetting the BIOS again, still nothing. Tried every possible combination of BIOS options, even downclocking everything, it still hangs in the same spot. Tried upgrading the BIOS from version FB to FD, which changed nothing. Based on this, I would conclude the motherboard to be at fault. Are there any other possibilities? I don't want to spend $150 for a new motherboard. EDIT 2: This is what it gets stuck at when I try to boot into safe mode: Note the slight graphical corruption at the top of the screen. No matter how I set up the system, this seems to be there. In addition, either it has stopped booting into safe mode now, or it takes upwards of 2+ hours, and I haven't left it running for that long.
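
    If it is worth one more software pass before replacing the board, these are the standard offline checks from the recovery CD's command prompt (drive letters under the recovery environment may differ; confirm with dir C:\ first):

      chkdsk C: /f /r
      sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows
      bootrec /fixmbr
      bootrec /fixboot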


  • USB connection is unstable with Nexus S 2.3.4 on AMD 64 running 64-bit Windows 7, but works with 32-bit Windows Vista

    - by Mike
    The USB connection is unstable with Nexus S (Android 2.3.4) on AMD 64 running 64-bit Windows 7, but it works with 32-bit Windows Vista. Problem Description: On the 64-bit Windows 7 machine my Nexus S appears to connect, but then it disconnects moments later. Neither accessing USB storage or loading an Android application package file (APK) using the Android Debug Bridge (ADB) work. On 32-bit Windows Vista using the same USB cable, USB storage works. I haven't tried the ADB on 32-bit Windows Vista. Reproduction steps for USB storage: (I have provided the reproduction steps for USB storage and not ADB, because if one isn't working, then the other isn't working either and the USB storage reproduction steps are shorter to document.) Connect the USB cable to the Nexus S and my Windows 7 machine. Effect: The "USB Mass Storage, USB Connected" dialog appears with the button "Turn on USB storage." Click "Turn on USB Storage" Effect: The "working circle" appears. A dialog briefly appears saying "USB storage in use," then it either returns me to Step 1 (now that I am running 2.3.4) or is replaced with the Nexus S's application homepage (while I was running 2.3.3). I'm not sure if the version matters, but I mention it for completeness. On the 32-bit Windows Vista machine the connection is stable. I am able to navigate through the Nexus S file system create, read, update, and delete files, etc. I haven't tried connecting with the ADB. Troubleshooting summary: Tried and failed: Uninstalling and reinstalling the Android USB drivers including removing the files. Uninstalling my custom software Pulling the Nexus S's battery Restarting the Nexus S Restarting 64-bit Windows 7 Changing USB ports on the 64-bit Windows 7 box Compared the dates and file size on the DLLs in my google-usb_driver\amd64 directory and the windows\System32 directory. They match. The sizes for the google-usb_driver\i386 directory do not match (expected). Turning off Debugging mode on the Nexus S does not resolve the problem. Searching Google. Tried and succeeded: Connecting to another machine (Windows Vista) using the same USB cable and Nexus S phone. Troubleshooting observations: I notice that uninstalling the device drivers and deleting the files, then reinstalling the drivers, then rebooting 64-bit Windows 7 then unplugging the Nexus S, then plugging it back in occasionally helps for a short amount of time (minutes to hours, not days). When it is working, I can both access the Nexus S's drive and load/test applications using the ADB. I have observed some wonky behavior in the Device Manager that I haven't tracked down. Sometimes the black Nexus S image appears in the list of devices. Sometimes the image displays as a computer with a green ISA card. Sometimes it neither appears on the top level of devices nor under “other devices,” but it does appear under "disk drives" as "Android UMS Composite USB Device." System configuration: The Nexus S is running Android OS 2.3.4's "Settings\about phone\System updates" indicates that it is up to date as of May 21st 2011. Both 32-bit Windows Vista and 64-bit Windows 7 are up to date. The Windows Vista system is running on an Intel 32-bit processor. Windows 7 is running on an AMD 64-bit processor. I have done Android development on both systems, but I usually develop on the 64-bit Windows 7 machine.
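
    One 64-bit Windows 7 variable that matches the "works for minutes to hours after a driver reinstall" pattern is USB selective suspend. A hedged way to rule it out from an elevated prompt (the GUIDs are the stock USB-settings subgroup and the selective-suspend setting):

      powercfg -setacvalueindex SCHEME_CURRENT 2a737441-1930-4402-8d77-b2bebba308a3 48e6b7a6-50f5-4782-a5d4-53bb8f07e226 0
      powercfg -setactive SCHEME_CURRENT

      REM Then restart ADB and watch whether the device stays attached:
      adb kill-server
      adb devices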


  • What networking hardware do I need in this situation (Fairpoint [ISP] "E-DIA" connection)?

    - by Tegeril
    Right away you'd probably want to say, "Well just ask Fairpoint." I've done that, a number of times in as many different ways I can phrase it and just keep hitting a brick wall where they will not commit to giving any useful information and instead recommend contracting an outside firm and spending a pile of money. Anyway... I'm trying to help a family member out with an office connection that is being setup. I've managed to scrape tiny details here and there from our discussions with the ISP (Fairpoint in Maine) about what is going to be done and what is going to be needed. This is the connection that is being setup: http://www.fairpoint.com/enterprise/vantagepoint/e-dia/index.jsp Information I have been given: Via this connection I can get IPs across different C blocks if that were necessary (it is not) Fairpoint is bringing hardware with them that they claim simply does the conversion from whatever line is coming in the building to ethernet, they have referred to this as the "Fairpoint Netvanta" which I know suggests a line of products that I have looked up, but some (most? all?) of those seems to handle all the routing that I saw. Fairpoint says that I need to bring my own router to sit behind their device. They have literally declined to even suggest products that have worked for other clients in the past and fall back on "any business router works, not a home router." That alone makes my head spin. Detail and clarity hit a brick wall from there. At one moment I got them to cough up that the router I provide needs to be able to do VPN tunneling but they typically fall back to "not a home router" and I was even given "just a business router, Cisco or something, it'll be $500-$1000". Now I know that VPN tunneling routers exist well below that price point and since this connection is going to one machine, possibly two only via ethernet, my desire to purchase networking hardware that over-delivers what I need is not very high. They are literally setting all this up, have provided no configuration details for after they finish, and expect me to just plunk a $500+ router behind it and cross my fingers or contract out to a third party company. If there were other options available for the location, I would have dropped them in a second, but there aren't. The device that is connected requires a static IP and I'm honestly a bit hazy on the necessity of an additional router behind their device and generally a bit over my head. I presume that the router needs to be able to serve external static IPs to its clients, but I really don't know what is going to show up when they come to do the install. This was originally going to be run via an ADSL bridge modem with a range of static IPs (which is easy and is currently setup properly) but the location is too far from the telco to get speeds that we really want for upload and this is also a connection that needs high availability. Any suggestions would be greatly appreciated (I see a number of options in the Cisco Small Business line and other competitors that aren't going to break the bank…), especially if you've worked with Fairpoint before! Thanks for reading my wall of text.


  • Exchange 2003 mail non-delivery (NDR), spam activity? events 7002 & 7004

    - by HighTechGeek
    Windows Server 2003 Small Business Server SP2 Exchange Version 6.5 (Build 7638.2: Service Pack 2) This network has been neglected and has been having email problems for years and was on many blacklists. I was called in after the server eventually crashed... I got the server back up and running, but email problems persist. Outgoing mail delivery is sporadic. Sometimes the mail goes through, sometimes a delayed delivery report is generated after a day or more, and sometimes it seems to go through, but the recipient never receives it. Not sure if spammers are successfully using the server as a relay (see event entries below after turning on maximum SMTP logging)... User PCs infected with viruses and server was blacklisted on many sites (I used mxtoolbox.com) I have cleaned all the PCs and changed all passwords (including administrator) I have requested removal from all of the blacklists - most have removed the listing, some take more time. I have setup rDNS pointer records with the ISP (Comcast) - that was one reason for some of the blacklistings. I have tested that it's not an open relay using telnet as described here: www.amset.info/exchange/smtp-openrelay.asp I followed the advise of a Spamhaus & Microsoft article to enable maximum SMTP logging. http://www.spamhaus.org/faq/answers.lasso?section=isp%20spam%20issues#320 which directed me to Microsoft KB article 895853, specifically, the part 2/3 down titled: "If mail relay occurs from an account on an Exchange computer that is not configured as an open relay" . The Application Event Log is filling with this type of activity (Event ID 7002, 7002 & 3018 errors): Event Type: Error Event Source: MSExchangeTransport Event Category: SMTP Protocol Event ID: 7004 Date: 1/18/2011 Time: 7:33:29 AM User: N/A Computer: SERVER Description: This is an SMTP protocol error log for virtual server ID 1, connection #621. The remote host "212.52.84.180", responded to the SMTP command "rcpt" with "550 #5.1.0 Address rejected [email protected] ". The full command sent was "RCPT TO: ". This will probably cause the connection to fail. and this: Event Type: Warning Event Source: MSExchangeTransport Event Category: SMTP Protocol Event ID: 7002 Date: 1/18/2011 Time: 7:33:29 AM User: N/A Computer: SERVER Description: This is an SMTP protocol warning log for virtual server ID 1, connection #620. The remote host "212.52.84.170", responded to the SMTP command "rcpt" with "452 Too many recipients received this hour ". The full command sent was "RCPT TO: ". This may cause the connection to fail. or a variant of: Event Type: Warning Event Source: MSExchangeTransport Event Category: SMTP Protocol Event ID: 7002 Date: 1/18/2011 Time: 8:39:21 AM User: N/A Computer: SERVER Description: This is an SMTP protocol warning log for virtual server ID 1, connection #661. The remote host "82.57.200.133", responded to the SMTP command "rcpt" with "421 Service not available - too busy ". The full command sent was "RCPT TO: ". This may cause the connection to fail. also Event Type: Error Event Source: MSExchangeTransport Event Category: NDR Event ID: 3018 Date: 1/18/2011 Time: 9:49:37 AM User: N/A Computer: SERVER Description: A non-delivery report with a status code of 5.4.0 was generated for recipient rfc822;[email protected] (Message-ID ). Causes: This message indicates a DNS problem or an IP address configuration problem Solution: Check the DNS using nslookup or dnsq. Verify the IP address is in IPv4 literal format. 
Data: 0000: ef 02 04 c0 ï..À
Any guidance, suggestions, or tests to perform would be greatly appreciated.
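
    The open-relay test linked above is worth repeating after every change; spelled out, the session looks like this (the addresses are placeholders, run against your own server):

      telnet mail.yourdomain.com 25
      HELO test.local
      MAIL FROM:<spammer@example.com>
      RCPT TO:<victim@example.org>
      QUIT

    A locked-down Exchange server should answer the RCPT line with "550 5.7.1 Unable to relay"; a 250 response there means the server is still relaying for anonymous senders.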


  • how to remove obsolete device and network entries? Device manager "uninstall" option has no effect

    - by Gizmo
    I am trying to remove a few "obsolete" things which annoy me (because I like to have everything clean, working, fresh, and not interfering with each other). I tried looking for solutions without any luck, so here I am to ask. The first part is about removing obsolete networks; let me explain by showing the ipconfig output:

      C:\windows\system32>ipconfig

      Windows IP Configuration

      Wireless LAN adapter Local Area Connection* 11:
         Media State . . . . . . . . . . . : Media disconnected
         Connection-specific DNS Suffix  . :

      Wireless LAN adapter Local Area Connection* 9:
         Media State . . . . . . . . . . . : Media disconnected
         Connection-specific DNS Suffix  . :

      Ethernet adapter LAN:
         Media State . . . . . . . . . . . : Media disconnected
         Connection-specific DNS Suffix  . :

      Wireless LAN adapter Wi-Fi:
         Connection-specific DNS Suffix  . : home
         Link-local IPv6 Address . . . . . : fe80::c129:8d57:bbd1:3564%10
         IPv4 Address. . . . . . . . . . . : 192.168.2.1
         Subnet Mask . . . . . . . . . . . : 255.255.255.0
         Default Gateway . . . . . . . . . : 192.168.2.254

      Tunnel adapter isatap.home:
         Media State . . . . . . . . . . . : Media disconnected
         Connection-specific DNS Suffix  . : home

      C:\windows\system32>

    Specifically, the first two adapter entries annoy me because the adapters are not visible in the network connections menu (hidden folder/file visibility is set to "show"). And here is the second problem, together with the first one: no matter what I click or do, the Uninstall option has no effect on the multiplexor driver (bridging stuff, right?). I really want to remove the Wireless LAN adapter Local Area Connection entries and the adapter multiplexor stuff, but it seems impossible. Why is this? How can I remove them anyway?
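
    For device entries whose Uninstall option misbehaves, the classic approach is to expose non-present devices to Device Manager first; a sketch using only stock Windows tooling, from an elevated command prompt:

      set devmgr_show_nonpresent_devices=1
      start devmgmt.msc

    Then enable View > Show hidden devices and uninstall the greyed-out adapters from there. (The environment variable only affects the Device Manager instance launched from that same prompt.)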


  • How do I connect my Windows XP laptop to the internet?

    - by rubysiddhi
    Hello fellow super users, The Past I have a Acer Travelmate 2300 laptop running Windows XP. 6 months ago I moved into a new apartment and got a new internet connection set up. After getting an internet connection installed in my apartment I reinstalled Windows XP and at the same time wiped my drive clean losing all the original Acer software and drivers. Once XP was reinstalled I had to find all the drivers again to get the Travelmate laptop connected to the internet. So, using my Vista laptop which was connected fine, I went to the Acer Travelmate Series drivers download page to download the necessary drivers. I transferred them to my Acer XP machine and installed them the best I could (there were no easy instructions so I just had to find all the executables and run them). I eventually got connected to the internet but not exactly in the way I had hoped for. The Present To be connected to the internet I need to have an Ethernet cord connecting my computer (via the Ethernet port) to my router. This is a problem since it defeats the purpose of having a Wireless LAN card in my Acer laptop. One of the programs I downloaded from the Acer Travelmate Series page was the Acer Wireless LAN Configuration Utility. This program allows me to see the current network I am connected to and all the available networks I could potentially connect to. It reminds me of XP's Wireless Network Connection window/utility where you can see all available wireless networks, refresh the network list and connect to one of the networks. I should mention that my ISP set up a security enabled wireless network with WPA. This network requires a network key if you want to connect to it. I guess my Vista computer has the network key entered into it already. The problem is that I do not know what the network key is. Now obviously you would say just contact my ISP to get the key. And I will but there is just one extra weird issue. I am able to connect to another unsecured wireless network in the Wireless Network Connection window/utility. I can be on it as long as my Ethernet cable is plugged in. So this is not really wireless is it? And this indicates that even if I do get that network key password from my ISP, I will only solve one of the two problems I have. I will only solve being able to get online as long as I am connected to my router via the Ethernet cable. The Main Questions So how do I enable my acer IPN2220 Wireless LAN Card so that I can use my Acer laptop from anywhere with in my apartment? Or should I first get the network key from my ISP to access my security enabled wireless network? And then deal with getting the acer IPN2220 Wireless LAN Card working? Hard & Learned VS Easy & Stupid Of course contacting the ISP would be easier. Have em just come in here and do there thing. The problem with that is that they do not speak English (yeah, im in Poland) and it'd be a hell of a time trying to understand what they are doing (uncomfortable looking over their shoulder). Also, I want to learn how to do this task myself so that I can fix the problem if it ever happens again. You know, be more self sufficient. I look forward to helpful replies. Thanks, Xaviour
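
    On the XP side, one concrete first step is making sure Windows' own Wireless Zero Configuration service, rather than the Acer utility, is managing the card; a sketch (wzcsvc is the stock XP service name):

      sc config wzcsvc start= auto
      net start wzcsvc

      REM Confirm the IPN2220 is bound and visible:
      ipconfig /all

    With WZC running, the security-enabled network should prompt for the WPA key in the Wireless Network Connection window once you have obtained it from your ISP.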


  • How do bots access directories on a server that are not DocumentRoot of public IP address? How do I stop them?

    - by tmsimont
    I have a local network set up with apache2 and "named" running on OpenSuse 13.1 Linux. I used the "named" service to use my computer as a domain server. I set up my router to point to ask my computer for domain lookups, so I have a chance to have it rewrite a bunch of domains on my network to its own local IP, 192.168.0.111 This works great. I use virtual host configuration to allow various domains and subdomains (re-routed to the same IP via named) to pull up different directories in my computer. For example: <VirtualHost *:80> ServerName 192.168.0.111 ServerAlias fmb.wa.net DocumentRoot /home/work/wa.net/fmb </VirtualHost> <VirtualHost *:80> ServerName 192.168.0.111 ServerAlias postrecord.wa.net DocumentRoot /home/work/wa.net/postrecord </VirtualHost> <VirtualHost *:80> ServerName 192.168.0.111 ServerAlias cvalley.wa.net DocumentRoot /home/work/wa.net/cvalley_local </VirtualHost> This makes it possible for me to hit cvalley.wa.net from any device in my network and get the site that lives in /home/work/wa.net/cvalley_local I decided to forward port 80 to this computer, so I could share a few development sites with coworkers. I can't control which site they see with the same named service, because they'd have to use my computer as their domain name server... So I added a line like this: <VirtualHost *:80> ServerName 192.168.0.111 ServerAlias MY.IP.XXX.XX DocumentRoot /home/work/wa.net/cvalley </VirtualHost> Where "MY.IP.XXX.XX" is my public IP address. This works as expected, when you hit my IP address from a public network you see the site that lives in /home/work/wa.net/cvalley. The point of confusion that I have is that there are public IP addresses in my logs in other sites. I would have expected it to be impossible to access other sites in my network, unless the public user somehow figured out what I'm calling my ServerAliases, and is mimicing my domain set up... How can public traffic be hitting my other local sites? How can I recreate this kind of access? 
Here are some examples of public IP's hitting my VirtualHost sites: 162.253.66.76 - - [15/Aug/2014:19:20:47 -0600] "GET /xmlrpc.php HTTP/1.0" 404 1004 "-" "-" 162.253.66.74 - - [16/Aug/2014:10:50:28 -0600] "GET / HTTP/1.0" 200 262 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)" 185.4.227.194 - - [16/Aug/2014:11:16:45 -0600] "GET http://24x7-allrequestsallowed.com/?PHPSESSID=1rysxtj500143WQMVT%5E_NAZ%5BQ HTTP/1.1" 200 262 "-" "-" 101.226.254.138 - - [16/Aug/2014:13:32:14 -0600] "HEAD / HTTP/1.0" 200 - "-" "-" 162.253.66.74 - - [16/Aug/2014:14:26:19 -0600] "GET / HTTP/1.0" 200 262 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)" 212.129.2.119 - - [16/Aug/2014:16:00:51 -0600] "HEAD / HTTP/1.0" 200 - "-" "-" 91.240.163.111 - - [16/Aug/2014:18:34:32 -0600] "GET / HTTP/1.0" 200 262 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)" 162.253.66.74 - - [16/Aug/2014:19:02:53 -0600] "GET / HTTP/1.0" 200 262 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)" 122.226.223.69 - - [17/Aug/2014:05:53:09 -0600] "GET http://www.k2proxy.com//hello.html HTTP/1.1" 404 1006 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)" ::1 - - [17/Aug/2014:10:19:26 -0600] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.4.6 (Linux/SUSE) OpenSSL/1.0.1e PHP/5.4.20 (internal dummy connection)" 162.209.65.196 - - [17/Aug/2014:15:31:53 -0600] "HEAD / HTTP/1.0" 200 - "-" "-" 111.206.199.163 - - [18/Aug/2014:11:12:56 -0600] "HEAD / HTTP/1.0" 200 - "-" "-" 37.187.180.168 - - [18/Aug/2014:15:40:00 -0600] "HEAD / HTTP/1.0" 200 - "-" "-" 62.210.38.226 - - [18/Aug/2014:18:35:16 -0600] "HEAD / HTTP/1.0" 200 - "-" "-" Is there anything that I can do to reliably deny public access by default, but allow it only in one VirtualHost?
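
    The behaviour itself is standard Apache: when a request's Host header (or a bare-IP request) matches no ServerName or ServerAlias, Apache serves the first VirtualHost defined for that address and port, which is why scanners hitting the raw IP land on whichever local site happens to be first. A hedged sketch of a deny-by-default catch-all, placed before the real vhosts (Apache 2.4 syntax to match the install above; the ServerName is a deliberate placeholder):

      <VirtualHost *:80>
          ServerName catchall.invalid
          <Location "/">
              Require all denied
          </Location>
      </VirtualHost>

    With this first, only the vhost that carries your public IP (or a real hostname) in ServerName/ServerAlias answers outside traffic; everything else gets a 403.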


  • BIND no longer responds to AXFR Requests

    - by djsumdog
    Recently we moved our primary external DNS server. It has three caching DNS slaves in front of it provided by our ISP. They've told us they've started getting access denied requests when doing zone transfers (AXFR). If I add in my own IPs to the allow-transfer list, I also get a transfer failed when using dig with the AXFR argument. Here is what my bind configuration looks like: options { directory "/var/lib/named"; dump-file "/var/log/named_dump.db"; zone-statistics yes; statistics-file "/var/log/named.stats"; listen-on-v6 { any; }; notify-source 10.19.0.68 port 53; querylog yes; notify yes; allow-transfer { 127.0.0.1; //localhost 1.1.1.1; //public dns slave 1 2.2.2.2; //public dns slave 2 3.3.3.3; //public dns slave 3 }; also-notify { 1.1.1.1; //public dns slave 1 2.2.2.2; //public dns slave 2 3.3.3.3; //public dns slave 3 }; include "/etc/named.d/forwarders.conf"; }; logging { channel simple_log { file "/var/log/bind.log" versions 10 size 3m; severity info; print-time yes; print-severity yes; print-category yes; }; category default{ simple_log; }; channel log_zone_transfers { file "/var/log/axfr.log" versions 10 size 3m; print-time yes; print-category yes; print-severity yes; }; category xfer-out { log_zone_transfers; }; channel log_notify { file "/var/log/notify.log" versions 10 size 3m; print-time yes; print-category yes; print-severity yes; }; category notify { log_notify; }; channel queries { file "/var/log/queries.log" versions 10 size 30m; print-time yes; severity info; print-category yes; print-severity yes; }; category queries { queries; }; }; zone "." in { type hint; file "root.hint"; }; zone "localhost" in { type master; file "localhost.zone"; }; zone "0.0.127.in-addr.arpa" in { type master; file "127.0.0.zone"; }; include "/etc/named.conf.include"; zone "example.net " { type master; file "/var/lib/named/master/example.net.hosts"; }; zone "example.com " { type master; file "/var/lib/named/master/example.com.hosts"; }; ## -- other master files -- And the errors in the xfer log look like the following: 29-Oct-2012 14:20:02.806 xfer-out: info: client 1.1.1.1#59069: bad zone transfer request: 'example.com./IN': non-authoritative zone (NOTAUTH) I've tried adding allow-transfer parameters directly on the zone files and still get failed transfers. Any idea what I'm doing wrong?
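
    One detail worth a second look before touching the ACLs: the stanzas above declare zone "example.net " and zone "example.com " with a trailing space inside the quotes, and a zone-name mismatch like that would produce exactly the NOTAUTH / "non-authoritative zone" answer seen in the transfer log. A sketch of the corrected stanza plus a local verification:

      zone "example.com" {
          type master;
          file "/var/lib/named/master/example.com.hosts";
      };

      # then, after an rndc reload:
      dig @127.0.0.1 example.com AXFR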


  • suddenly can't connect to router

    - by Khoi
    I was just downloading some stuff in Ubuntu when, snap, the connection cut out, and now I can't even connect to my router. The router itself still works fine; my laptop can connect wirelessly to it as usual. But my main computer (which connects to it directly through a cable) can't even ping it. Here is my ipconfig:

      Windows IP Configuration

         Host Name . . . . . . . . . . . . : vento
         Primary Dns Suffix  . . . . . . . :
         Node Type . . . . . . . . . . . . : Unknown
         IP Routing Enabled. . . . . . . . : No
         WINS Proxy Enabled. . . . . . . . : No

      Ethernet adapter Local Area Connection:
         Media State . . . . . . . . . . . : Media disconnected
         Description . . . . . . . . . . . : Realtek RTL8169/8110 Family Gigabit Ethernet NIC
         Physical Address. . . . . . . . . : 00-19-DB-4E-6C-56

      Ethernet adapter {15B1F740-2F35-4FE4-9FEE-4052AFBAD096}:
         Media State . . . . . . . . . . . : Media disconnected
         Description . . . . . . . . . . . : Anchorfree HSS Adapter - Packet Scheduler Miniport
         Physical Address. . . . . . . . . : 00-FF-15-B1-F7-40
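
    Given that the Realtek NIC reports "Media disconnected", a cheap sequence to try after reseating or swapping the cable (stock XP-era commands, run from an administrator prompt):

      netsh int ip reset reset.log
      netsh winsock reset
      ipconfig /release
      ipconfig /renew

    If the media state stays "disconnected" regardless of cable and router port, the failure is below the IP layer (driver, NIC, or cable) and no TCP/IP reset will change it.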


  • how to install 'version.h' in Ubuntu?

    - by user252098
    Just now , I try to install the Jungo WinDriver in the Ubuntu 13.10 . But I am puzzled by the its manual of how to Install version.h : Install version.h: The file version.h is created when you first compile the Linux kernel source code. Some distributions provide a compiled kernel without the file version.h. Look under /usr/src/linux/include/linux to see whether you have this file. If you do not, follow these steps: Become super user: $ su Change directory to the Linux source directory: # cd /usr/src/linux Type: # make xconfig Save the configuration by choosing Save and Exit. Type: # make dep Exit super user mode: # exit But the shell says: warning: make dep is unnecessary now. Then, I found out there is a version.h in /usr/src/linux-headers-3.11.0.12-generic, so I type: /usr/src/windriver/redist# ./configure --with-kernel-source=/usr/src/linux-headers-3.11.0.12-generic But, the windriver run fails: USE_KBUILD = yes checking for cpu architecture... x86_64 checking for WinDriver root directory... /usr/src/WinDriver checking for linux kernel source... found at /usr/src/linux checking for lib directory... ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT)_32.so /usr/lib/$(SHARED_OBJECT).so; ln -s /usr/lib /usr/lib64; ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT).so /usr/lib64/$(SHARED_OBJECT).so checking which directories to include... -I/usr/src/linux/include checking linux kernel version... 3.11.10.6 checking for modules installation directory... /lib/modules/3.11.0-12-generic/kernel/drivers/misc checking output directory... LINUX.3.11.0-12-generic.x86_64 checking target... LINUX.3.11.0-12-generic.x86_64/windrvr6_usb.ko checking for regparm kernel option... find: `/usr/src/WinDriver/redist/.tmp_driver/.tmp_versions': No such file or directory 0 checking for modpost location... /usr/src/linux/scripts/mod/modpost configure.usb: creating ./config.status config.status: creating makefile.usb.kbuild checking for cpu architecture... x86_64 checking for WinDriver root directory... /usr/src/WinDriver checking for linux kernel source... found at /usr/src/linux checking for lib directory... ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT)_32.so /usr/lib/$(SHARED_OBJECT).so; ln -s /usr/lib /usr/lib64; ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT).so /usr/lib64/$(SHARED_OBJECT).so checking which directories to include... -I/usr/src/linux/include checking linux kernel version... 3.11.10.6 checking for modules installation directory... /lib/modules/3.11.0-12-generic/kernel/drivers/misc checking output directory... LINUX.3.11.0-12-generic.x86_64 checking target... LINUX.3.11.0-12-generic.x86_64/windrvr6.ko checking for regparm kernel option... find: `/usr/src/WinDriver/redist/.tmp_driver/.tmp_versions': No such file or directory 0 checking for right linked object... windrvr_gcc_v3.a checking for modpost location... /usr/src/linux/scripts/mod/modpost configure.wd: creating ./config.status config.status: creating makefile.wd.kbuild What is the problem?
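
    Two details in the output above suggest a sketchable workaround: configure still hard-codes /usr/src/linux despite the --with-kernel-source flag, and on 3.7+ kernels version.h lives under include/generated rather than include/linux. Assuming the header-package path from the question:

      # Point the path configure actually uses at the installed headers:
      sudo ln -sfn /usr/src/linux-headers-3.11.0-12-generic /usr/src/linux

      # version.h moved in kernel 3.7; check the generated location:
      ls /usr/src/linux/include/generated/uapi/linux/version.h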


  • mod_rewrite settings causes server to throw HTTP 500 errors instead of 404

    - by FractalizeR
    Hello. I have a server with VBulletin forum (working under Apache 2.2, CentOS). The default settings for it in .htaccess are as follows: RewriteEngine on RewriteCond %{HTTP_HOST} ^gsmforum\.ru RewriteRule (.*) http://www.gsmforum.ru/$1 [R=301,L] # If you are having problems or are using VirtualDocumentRoot, uncomment this line and set it to your vBulletin directory. RewriteBase / RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] # Forum RewriteRule ^threads/.* showthread.php [QSA] RewriteRule ^forums/.* forumdisplay.php [QSA] RewriteRule ^members/.* member.php [QSA] RewriteRule ^blogs/.* blog.php [QSA] ReWriteRule ^entries/.* entry.php [QSA] RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] # MVC RewriteRule ^(?:(.*?)(?:/|$))(.*|$)$ $1.php?r=$2 [QSA] If I try to access any non-existent URL on forum like www.example.com/ajdsjaskasajs, server throws HTTP 500 error. Apache log says: [Sun Apr 25 17:24:32 2010] [error] [client 82.211.152.12] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://www.gsmforum.ru/forumdisplay.php?424-%CD%EE%E2%EE%F1%F2%E8-%EF%F0%EE%E3%F0%E0%EC%EC%E0%F2%EE%F0%EE%E2 If I switch LogLevel to Debug I get something like this: [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php.php.php.php.php.php.php.php.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php.php.php.php.php.php.php.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php.php.php.php.php.php.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php.php.php.php.php.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php.php.php.php.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php.php.php.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php.php.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt [root@server2 logs]# tail httpd_error.log [Sun Apr 25 17:31:27 2010] [debug] core.c(3059): [client 217.118.79.27] redirected from r->uri = /clientscript.php.php.php.php.php.php.php, referer: http://74.125.77.132/search?q=cache:bGPJ8XkSvlMJ:www.gsmforum.ru/showthread.php%3Ft%3D62479+%D0%A3%D0%BC%D0%B5%D0%BD%D1%8C%D1%88%D0%B5%D0%BD%D0%B8%D0%B5+%D0%BF%D0%B8%D0%BD%D0%B3%D0%B0+3G+%D0%BC%D0%BE%D0%B4%D0%B5%D0%BC&cd=3&hl=ru&ct=clnk&gl=ru [Sun Apr 25 17:31:27 2010] [debug] core.c(3059): [client 217.118.79.27] redirected from r->uri = /clientscript.php.php.php.php.php.php, referer: 
http://74.125.77.132/search?q=cache:bGPJ8XkSvlMJ:www.gsmforum.ru/showthread.php%3Ft%3D62479+%D0%A3%D0%BC%D0%B5%D0%BD%D1%8C%D1%88%D0%B5%D0%BD%D0%B8%D0%B5+%D0%BF%D0%B8%D0%BD%D0%B3%D0%B0+3G+%D0%BC%D0%BE%D0%B4%D0%B5%D0%BC&cd=3&hl=ru&ct=clnk&gl=ru [Sun Apr 25 17:31:27 2010] [debug] core.c(3059): [client 217.118.79.27] redirected from r->uri = /clientscript.php.php.php.php.php, referer: http://74.125.77.132/search?q=cache:bGPJ8XkSvlMJ:www.gsmforum.ru/showthread.php%3Ft%3D62479+%D0%A3%D0%BC%D0%B5%D0%BD%D1%8C%D1%88%D0%B5%D0%BD%D0%B8%D0%B5+%D0%BF%D0%B8%D0%BD%D0%B3%D0%B0+3G+%D0%BC%D0%BE%D0%B4%D0%B5%D0%BC&cd=3&hl=ru&ct=clnk&gl=ru [Sun Apr 25 17:31:27 2010] [debug] core.c(3059): [client 217.118.79.27] redirected from r->uri = /clientscript.php.php.php.php, referer: http://74.125.77.132/search?q=cache:bGPJ8XkSvlMJ:www.gsmforum.ru/showthread.php%3Ft%3D62479+%D0%A3%D0%BC%D0%B5%D0%BD%D1%8C%D1%88%D0%B5%D0%BD%D0%B8%D0%B5+%D0%BF%D0%B8%D0%BD%D0%B3%D0%B0+3G+%D0%BC%D0%BE%D0%B4%D0%B5%D0%BC&cd=3&hl=ru&ct=clnk&gl=ru [Sun Apr 25 17:31:27 2010] [debug] core.c(3059): [client 217.118.79.27] redirected from r->uri = /clientscript.php.php.php, referer: http://74.125.77.132/search?q=cache:bGPJ8XkSvlMJ:www.gsmforum.ru/showthread.php%3Ft%3D62479+%D0%A3%D0%BC%D0%B5%D0%BD%D1%8C%D1%88%D0%B5%D0%BD%D0%B8%D0%B5+%D0%BF%D0%B8%D0%BD%D0%B3%D0%B0+3G+%D0%BC%D0%BE%D0%B4%D0%B5%D0%BC&cd=3&hl=ru&ct=clnk&gl=ru [Sun Apr 25 17:31:27 2010] [debug] core.c(3059): [client 217.118.79.27] redirected from r->uri = /clientscript.php.php, referer: http://74.125.77.132/search?q=cache:bGPJ8XkSvlMJ:www.gsmforum.ru/showthread.php%3Ft%3D62479+%D0%A3%D0%BC%D0%B5%D0%BD%D1%8C%D1%88%D0%B5%D0%BD%D0%B8%D0%B5+%D0%BF%D0%B8%D0%BD%D0%B3%D0%B0+3G+%D0%BC%D0%BE%D0%B4%D0%B5%D0%BC&cd=3&hl=ru&ct=clnk&gl=ru [Sun Apr 25 17:31:27 2010] [debug] core.c(3059): [client 217.118.79.27] redirected from r->uri = /clientscript.php, referer: http://74.125.77.132/search?q=cache:bGPJ8XkSvlMJ:www.gsmforum.ru/showthread.php%3Ft%3D62479+%D0%A3%D0%BC%D0%B5%D0%BD%D1%8C%D1%88%D0%B5%D0%BD%D0%B8%D0%B5+%D0%BF%D0%B8%D0%BD%D0%B3%D0%B0+3G+%D0%BC%D0%BE%D0%B4%D0%B5%D0%BC&cd=3&hl=ru&ct=clnk&gl=ru [Sun Apr 25 17:31:27 2010] [debug] core.c(3059): [client 217.118.79.27] redirected from r->uri = /clientscript/vbulletin_css/style-d95b06dc-00001.css, referer: http://74.125.77.132/search?q=cache:bGPJ8XkSvlMJ:www.gsmforum.ru/showthread.php%3Ft%3D62479+%D0%A3%D0%BC%D0%B5%D0%BD%D1%8C%D1%88%D0%B5%D0%BD%D0%B8%D0%B5+%D0%BF%D0%B8%D0%BD%D0%B3%D0%B0+3G+%D0%BC%D0%BE%D0%B4%D0%B5%D0%BC&cd=3&hl=ru&ct=clnk&gl=ru If I remove or comment the last (#MVC) line from .htaccess all is fine. Can you advise me what is the problem with mod_rewrite settings? Why does the last line cause infinite recursion?
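
    The loop is visible in the debug log: nothing checks that the rewritten target exists, so robots.txt becomes robots.txt.php, the per-directory rewrite restarts, and .php is appended again until the internal-redirect limit turns what should be a 404 into a 500. A sketch of the MVC rule with a guard on the target (mod_rewrite allows rule backreferences like $1 in a RewriteCond test string):

      RewriteCond %{DOCUMENT_ROOT}/$1.php -f
      RewriteRule ^(?:(.*?)(?:/|$))(.*|$)$ $1.php?r=$2 [QSA,L]

    With the guard, URLs that map to no controller fall through to an ordinary 404 instead of recursing.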

    Read the article

  • Problem with setup VPN in Ubuntu Server 12.04

    - by Yozone W.
    I have a problem setting up a VPN server on my Ubuntu VPS. Here is my server environment:

Ubuntu Server 12.04 x86_64
xl2tpd 1.3.1+dfsg-1
pppd 2.4.5-5ubuntu1
openswan 1:2.6.38-1~precise1

After installing and configuring the software, ipsec verify reports:

Checking your system to see if IPsec got installed and started correctly:
Version check and ipsec on-path [OK]
Linux Openswan U2.6.38/K3.2.0-24-virtual (netkey)
Checking for IPsec support in kernel [OK]
SAref kernel support [N/A]
NETKEY: Testing XFRM related proc values [OK] [OK] [OK]
Checking that pluto is running [OK]
Pluto listening for IKE on udp 500 [OK]
Pluto listening for NAT-T on udp 4500 [OK]
Checking for 'ip' command [OK]
Checking /bin/sh is not /bin/dash [WARNING]
Checking for 'iptables' command [OK]
Opportunistic Encryption Support [DISABLED]

/var/log/auth.log message:

Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [RFC 3947] method set to=115
Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] meth=114, but already using method 115
(analogous lines follow for draft-ietf-ipsec-nat-t-ike-08 through -02_n, meth=113 down to meth=106, each "but already using method 115")
Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: ignoring Vendor ID payload [FRAGMENTATION 80000000]
Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [Dead Peer Detection]
Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: responding to Main Mode from unknown peer [My IP Address]
Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1
Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R1: sent MR1, expecting MI2
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): peer is NATed
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R2: sent MR2, expecting MI3
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: ignoring informational payload, type IPSEC_INITIAL_CONTACT msgid=00000000
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: Main mode peer ID is ID_IPV4_ADDR: '192.168.12.52'
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: switched from "L2TP-PSK-NAT" to "L2TP-PSK-NAT"
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0}
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: new NAT mapping for #5, was [My IP Address]:2251, now [My IP Address]:2847
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_256 prf=oakley_sha group=modp1024}
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: Dead Peer Detection (RFC 3706): enabled
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: the peer proposed: [My Server IP Address]/32:17/1701 -> 192.168.12.52/32:17/0
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: NAT-Traversal: received 2 NAT-OA. using first, ignoring others
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: responding to Quick Mode proposal {msgid:8579b1fb}
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: us: [My Server IP Address]<[My Server IP Address]>:17/1701
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: them: [My IP Address][192.168.12.52]:17/65280===192.168.12.52/32
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: Dead Peer Detection (RFC 3706): enabled
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x08bda158 <0x4920a374 xfrm=AES_256-HMAC_SHA1 NATOA=192.168.12.52 NATD=[My IP Address]:2847 DPD=enabled}
Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA(0x08bda158) payload: deleting IPSEC State #6
Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: ERROR: netlink XFRM_MSG_DELPOLICY response for flow eroute_connection delete included errno 2: No such file or directory
Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received and ignored informational message
Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA payload: deleting ISAKMP State #5
Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address]: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0}
Oct 16 06:51:16 vpn pluto[3963]: packet from [My IP Address]:2847: received and ignored informational message

xl2tpd -D message:

xl2tpd[4289]: Enabling IPsec SAref processing for L2TP transport mode SAs
xl2tpd[4289]: IPsec SAref does not work with L2TP kernel mode yet, enabling forceuserspace=yes
xl2tpd[4289]: setsockopt recvref[30]: Protocol not available
xl2tpd[4289]: This binary does not support kernel L2TP.
xl2tpd[4289]: xl2tpd version xl2tpd-1.3.1 started on vpn.netools.me PID:4289
xl2tpd[4289]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc.
xl2tpd[4289]: Forked by Scott Balmos and David Stipp, (C) 2001
xl2tpd[4289]: Inherited by Jeff McAdams, (C) 2002
xl2tpd[4289]: Forked again by Xelerance (www.xelerance.com) (C) 2006
xl2tpd[4289]: Listening on IP address [My Server IP Address], port 1701

Then it just stops there, with no further response, and I cannot connect to the VPN from my Mac client. The /var/log/system.log message on the client:

Oct 16 15:17:36 azone-iMac.local configd[17]: SCNC: start, triggered by SystemUIServer, type L2TP, status 0
Oct 16 15:17:36 azone-iMac.local pppd[3799]: pppd 2.4.2 (Apple version 596.13) started by azone, uid 501
Oct 16 15:17:38 azone-iMac.local pppd[3799]: L2TP connecting to server 'vpn.netools.me' ([My Server IP Address])...
Oct 16 15:17:38 azone-iMac.local pppd[3799]: IPSec connection started
Oct 16 15:17:38 azone-iMac.local racoon[359]: Connecting.
Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 started (Initiated by me).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 1).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 2).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 3).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 4).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 5).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 AUTH: success. (Initiator, Main-Mode Message 6).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 6).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 Initiator: success. (Initiator, Main-Mode).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 established (Initiated by me).
Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 started (Initiated by me).
Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1).
Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Quick-Mode message 2).
Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3).
Oct 16 15:17:39 azone-iMac.local racoon[359]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode).
Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 established (Initiated by me).
Oct 16 15:17:39 azone-iMac.local pppd[3799]: IPSec connection established
Oct 16 15:17:59 azone-iMac.local pppd[3799]: L2TP cannot connect to the server
Oct 16 15:17:59 azone-iMac.local racoon[359]: IPSec disconnecting from server [My Server IP Address]
Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message).
Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA).
Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message).
Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA).

Can anyone help? Thanks a million!
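Both IPsec phases establish and the client only gives up at the L2TP layer, so the L2TP daemon configuration and the path for UDP 1701 are the usual suspects. Purely as a hedged sketch (the address pool, local ip and option-file path below are assumptions, not values taken from this question), a minimal /etc/xl2tpd/xl2tpd.conf for an L2TP/IPsec server often looks like this:

[global]
; listen only on the public address the IPsec SA terminates on
listen-addr = [My Server IP Address]

[lns default]
ip range = 10.99.0.2-10.99.0.250   ; assumed client address pool
local ip = 10.99.0.1               ; assumed server-side PPP address
require chap = yes
refuse pap = yes
require authentication = yes
name = l2tpd
pppoptfile = /etc/ppp/options.xl2tpd
length bit = yes

It may also be worth confirming with tcpdump -ni eth0 udp port 1701 that decrypted L2TP packets reach the daemon at all; if they do not, the iptables rules or the leftprotoport=17/1701 setting on the openswan connection are the next things to check.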

    Read the article

  • Sonicwall VPN, Domain Controller Issues

    - by durilai
    I am trying to get the domain logon script to execute when I connect to the VPN. I have a SonicWall 4060PRO running SonicOS Enhanced 4.2.0.0-10e. The VPN connects successfully, but the script does not execute. I am posting the log below, but I see two issues. The first is the inability to contact the domain:

2009/12/18 19:49:53:457 Information XXX.XXX.XXX.XXX NetGetDCName failed: Could not find domain controller for this domain.

The second is the failure of the script:

2009/12/18 19:49:53:466 Warning XXX.XXX.XXX.XXX Failed to execute script file \\DT-WIN7\netlogon\domain.bat, Last Error: The network name cannot be found.

I assume the second issue is caused by the first; also, the second issue suggests the client is trying to fetch the logon script from my local PC (DT-WIN7), not from the server. Finally, the DC can be pinged and reached by its computer name once the VPN is connected, and the shares that the script is trying to map can be mapped manually. Any help is appreciated.

2009/12/18 19:49:31:063 Information The connection "GroupVPN_0006B1030980" has been enabled.
2009/12/18 19:49:32:223 Information XXX.XXX.XXX.XXX Starting ISAKMP phase 1 negotiation.
2009/12/18 19:49:32:289 Information XXX.XXX.XXX.XXX Starting aggressive mode phase 1 exchange.
2009/12/18 19:49:32:289 Information XXX.XXX.XXX.XXX NAT Detected: Local host is behind a NAT device.
2009/12/18 19:49:32:289 Information XXX.XXX.XXX.XXX The SA lifetime for phase 1 is 28800 seconds.
2009/12/18 19:49:32:289 Information XXX.XXX.XXX.XXX Phase 1 has completed.
2009/12/18 19:49:32:336 Information XXX.XXX.XXX.XXX Received XAuth request.
2009/12/18 19:49:32:336 Information XXX.XXX.XXX.XXX XAuth has requested a username but one has not yet been specified.
2009/12/18 19:49:32:336 Information XXX.XXX.XXX.XXX Sending phase 1 delete.
2009/12/18 19:49:32:336 Information XXX.XXX.XXX.XXX User authentication information is needed to complete the connection.
2009/12/18 19:49:32:393 Information An incoming ISAKMP packet from XXX.XXX.XXX.XXX was ignored.
2009/12/18 19:49:36:962 Information XXX.XXX.XXX.XXX Starting ISAKMP phase 1 negotiation.
2009/12/18 19:49:37:036 Information XXX.XXX.XXX.XXX Starting aggressive mode phase 1 exchange.
2009/12/18 19:49:37:036 Information XXX.XXX.XXX.XXX NAT Detected: Local host is behind a NAT device.
2009/12/18 19:49:37:036 Information XXX.XXX.XXX.XXX The SA lifetime for phase 1 is 28800 seconds.
2009/12/18 19:49:37:036 Information XXX.XXX.XXX.XXX Phase 1 has completed.
2009/12/18 19:49:37:094 Information XXX.XXX.XXX.XXX Received XAuth request.
2009/12/18 19:49:37:100 Information XXX.XXX.XXX.XXX Sending XAuth reply.
2009/12/18 19:49:37:110 Information XXX.XXX.XXX.XXX Received initial contact notify.
2009/12/18 19:49:37:153 Information XXX.XXX.XXX.XXX Received XAuth status.
2009/12/18 19:49:37:154 Information XXX.XXX.XXX.XXX Sending XAuth acknowledgement.
2009/12/18 19:49:37:154 Information XXX.XXX.XXX.XXX User authentication has succeeded.
2009/12/18 19:49:37:247 Information XXX.XXX.XXX.XXX Received request for policy version.
2009/12/18 19:49:37:253 Information XXX.XXX.XXX.XXX Sending policy version reply.
2009/12/18 19:49:37:303 Information XXX.XXX.XXX.XXX Received policy change is not required.
2009/12/18 19:49:37:303 Information XXX.XXX.XXX.XXX Sending policy acknowledgement.
2009/12/18 19:49:37:303 Information XXX.XXX.XXX.XXX The configuration for the connection is up to date.
2009/12/18 19:49:37:377 Information XXX.XXX.XXX.XXX Starting ISAKMP phase 2 negotiation with 10.10.10.0/255.255.255.0:BOOTPC:BOOTPS:UDP.
2009/12/18 19:49:37:377 Information XXX.XXX.XXX.XXX Starting quick mode phase 2 exchange.
2009/12/18 19:49:37:472 Information XXX.XXX.XXX.XXX The SA lifetime for phase 2 is 28800 seconds.
2009/12/18 19:49:37:472 Information XXX.XXX.XXX.XXX Phase 2 with 10.10.10.0/255.255.255.0:BOOTPC:BOOTPS:UDP has completed.
2009/12/18 19:49:37:896 Information Renewing IP address for the virtual interface (00-60-73-4C-3F-45).
2009/12/18 19:49:40:189 Information The virtual interface has been added to the system with IP address 10.10.10.112.
2009/12/18 19:49:40:319 Information The system ARP cache has been flushed.
2009/12/18 19:49:40:576 Information XXX.XXX.XXX.XXX NetWkstaUserGetInfo returned: user: Dustin, logon domain: DT-WIN7, logon server: DT-WIN7
2009/12/18 19:49:53:457 Information XXX.XXX.XXX.XXX NetGetDCName failed: Could not find domain controller for this domain.
2009/12/18 19:49:53:457 Information XXX.XXX.XXX.XXX calling NetUserGetInfo: Server: , User: Dustin, level: 3
2009/12/18 19:49:53:460 Information XXX.XXX.XXX.XXX NetUserGetInfo returned: home dir: , remote dir: , logon script:
2009/12/18 19:49:53:466 Warning XXX.XXX.XXX.XXX Failed to execute script file \\DT-WIN7\netlogon\domain.bat, Last Error: The network name cannot be found.
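The NetWkstaUserGetInfo line (logon domain: DT-WIN7, logon server: DT-WIN7) suggests the client resolved itself rather than a domain controller, which is typically a DNS problem over the tunnel. A few standard diagnostics worth running once the VPN is up (yourdomain.local below is a placeholder for the real AD domain name):

ipconfig /all
    (check that the VPN adapter's DNS server is the domain controller)
nslookup -type=SRV _ldap._tcp.dc._msdcs.yourdomain.local
nltest /dsgetdc:yourdomain.local

If the SRV lookup fails, configuring the SonicWall's GroupVPN to hand out the internal DNS server (and DNS suffix) usually lets NetGetDCName succeed, after which the logon script should be fetched from the DC's netlogon share rather than the local machine.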

    Read the article

  • Consume WCF Service InProcess using Agatha and WCF

    - by REA_ANDREW
    I have been looking into this lately for a specific reason. For some integration tests I want to write, I want to control the types of instances used inside the service layer, but I want that control from the test class instance. One of the problems with just referencing the service is that, a lot of the time, it will by default run inside a different process. I am using StructureMap as my DI container of choice, and one of the tools I use alongside RhinoMocks is StructureMap.AutoMocking. With StructureMap the main entry point is the ObjectFactory. This is process-specific, so if I decide that I want a certain instance of a type to be used inside the service layer, I cannot configure the ObjectFactory from my test class, as that only applies to the process it belongs to. This is where I started thinking about two things: running a WCF service in process, and being able to share mocked instances across processes. A colleague at work pointed me to a project for the latter, but I thought it would be a better solution if I could run the WCF service in process. One of the projects I use when I think about WCF services is AGATHA, and it is the one I have used to try to get my head around doing this. Another asset I have is a book called Programming WCF Services by Juval Lowy; if you have not heard of it or read it, I would definitely recommend it. One of the many topics inside this book is the type of configuration you need to communicate with a service in the same process, and it turns out to be quite simple from a config point of view.

<system.serviceModel>
  <services>
    <service name="Agatha.ServiceLayer.WCF.WcfRequestProcessor">
      <endpoint address="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/>
    </service>
  </services>
  <client>
    <endpoint name="MyEndpoint" address="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/>
  </client>
</system.serviceModel>

You can see here that I am referencing the Agatha service and contract, but also that my binding and address use something called named pipes. This is sort of the "magic" which makes it happen in the same process. Next I need to open the service prior to calling the methods on a proxy, which I also need. My initial attempt at the proxy did not use any Agatha-specific coding, and one of the pains I found was that you obviously need to give your proxy the known types the serializer must be aware of, so we need to add the known types to the proxy programmatically. I came across the following blog post which showed me how easy that is: http://bloggingabout.net/blogs/vagif/archive/2009/05/18/how-to-programmatically-define-known-types-in-wcf.aspx.

First Pass

So with this in mind, inside a console app, this was my first pass at consuming a service in process. First, here is the proxy I made, making use of the Agatha IWcfRequestProcessor contract:

public class InProcProxy : ClientBase<Agatha.Common.WCF.IWcfRequestProcessor>, Agatha.Common.WCF.IWcfRequestProcessor
{
    public InProcProxy() { }

    public InProcProxy(string configurationName) : base(configurationName) { }

    public Agatha.Common.Response[] Process(params Agatha.Common.Request[] requests)
    {
        return Channel.Process(requests);
    }

    public void ProcessOneWayRequests(params Agatha.Common.OneWayRequest[] requests)
    {
        Channel.ProcessOneWayRequests(requests);
    }
}

So with the proxy in place, I could then use it after opening the service. Here is the code I use inside the console app to make the request:

static void Main(string[] args)
{
    ComponentRegistration.Register();
    ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
    serviceHost.Open();
    Console.WriteLine("Service is running....");

    using (var proxy = new InProcProxy())
    {
        foreach (var operation in proxy.Endpoint.Contract.Operations)
            foreach (var t in KnownTypeProvider.GetKnownTypes(null))
                operation.KnownTypes.Add(t);

        var request = new GetProductsRequest();
        var responses = proxy.Process(new[] { request });
        var response = (GetProductsResponse)responses[0];
        Console.WriteLine("{0} Products have been retrieved", response.Products.Count);
    }

    serviceHost.Close();
    Console.WriteLine("Finished");
    Console.ReadLine();
}

So what I used here is the KnownTypeProvider of Agatha to easily get all the types I need for the service/proxy and add them to the proxy. My request handler for this was just a test one which always returns 2 products:

public class GetProductsHandler : RequestHandler<GetProductsRequest, GetProductsResponse>
{
    public override Agatha.Common.Response Handle(GetProductsRequest request)
    {
        return new GetProductsResponse
        {
            Products = new List<ProductDto>
            {
                new ProductDto {},
                new ProductDto {}
            }
        };
    }
}

Second Pass

After I did this, I started reading up on some more resources, including more by Davy Brion and others on Agatha. It turns out that the work I did above to create a derived class of ClientBase implementing Agatha.Common.WCF.IWcfRequestProcessor was not necessary, thanks to a nice class which is present inside the Agatha code base, RequestProcessorProxy, which takes care of this for you! :-) So, disregarding the proxy class I made and changing my code to use RequestProcessorProxy, I am now left with the following:

static void Main(string[] args)
{
    ComponentRegistration.Register();
    ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
    serviceHost.Open();
    Console.WriteLine("Service is running....");

    using (var proxy = new RequestProcessorProxy())
    {
        var request = new GetProductsRequest();
        var responses = proxy.Process(new[] { request });
        var response = (GetProductsResponse)responses[0];
        Console.WriteLine("{0} Products have been retrieved", response.Products.Count);
    }

    serviceHost.Close();
    Console.WriteLine("Finished");
    Console.ReadLine();
}

Cheers for now,

Andy

References:
Agatha
WCF InProcess Without WCF
StructureMap.AutoMocking
Cross Process Mocking
Agatha
Programming WCF Services by Juval Lowy
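Tying this back to the original motivation (controlling instances from the test class), the in-process host can be wrapped in a test fixture so each test opens and closes it. A hedged sketch, assuming NUnit; everything else reuses types from the post itself:

[TestFixture]
public class InProcServiceTests
{
    private ServiceHost serviceHost;

    [SetUp]
    public void OpenHost()
    {
        // Same bootstrap as in the console app above
        ComponentRegistration.Register();
        serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
        serviceHost.Open();
    }

    [TearDown]
    public void CloseHost()
    {
        serviceHost.Close();
    }

    [Test]
    public void Returns_products_from_in_process_service()
    {
        using (var proxy = new RequestProcessorProxy())
        {
            var responses = proxy.Process(new[] { new GetProductsRequest() });
            var response = (GetProductsResponse)responses[0];
            Assert.AreEqual(2, response.Products.Count);
        }
    }
}

Because the named-pipe endpoint never leaves the process, the StructureMap ObjectFactory configured in the fixture is the same one the service layer resolves from, which is exactly the control the post set out to get.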

    Read the article

  • Update information outdated

    - by Achim Krause
    I have a warning triangle telling me that my update information is outdated; the last update was 12 days ago. I use Ubuntu 11.10. A run of sudo apt-get update produces the following output:

Ign http://ppa.launchpad.net oneiric InRelease
Ign http://de.archive.ubuntu.com oneiric InRelease
Ign http://de.archive.ubuntu.com oneiric-updates InRelease
Ign http://de.archive.ubuntu.com oneiric-backports InRelease
Ign http://security.ubuntu.com oneiric-security InRelease
Ign http://extras.ubuntu.com oneiric InRelease
Hit http://ppa.launchpad.net oneiric Release.gpg
Get:1 http://de.archive.ubuntu.com oneiric Release.gpg [198 B]
Hit http://security.ubuntu.com oneiric-security Release.gpg
Get:2 http://extras.ubuntu.com oneiric Release.gpg [72 B]
Hit http://ppa.launchpad.net oneiric Release
Hit http://de.archive.ubuntu.com oneiric-updates Release.gpg
Hit http://security.ubuntu.com oneiric-security Release
Hit http://extras.ubuntu.com oneiric Release
Err http://extras.ubuntu.com oneiric Release
Hit http://ppa.launchpad.net oneiric/main Sources
Hit http://de.archive.ubuntu.com oneiric-backports Release.gpg
Hit http://security.ubuntu.com oneiric-security/main Sources
Hit http://ppa.launchpad.net oneiric/main i386 Packages
Ign http://ppa.launchpad.net oneiric/main TranslationIndex
Ign http://linux.dropbox.com oneiric InRelease
Hit http://security.ubuntu.com oneiric-security/restricted Sources
Hit http://security.ubuntu.com oneiric-security/universe Sources
Hit http://security.ubuntu.com oneiric-security/multiverse Sources
Hit http://security.ubuntu.com oneiric-security/main i386 Packages
Hit http://security.ubuntu.com oneiric-security/restricted i386 Packages
Hit http://security.ubuntu.com oneiric-security/universe i386 Packages
Hit http://security.ubuntu.com oneiric-security/multiverse i386 Packages
Hit http://security.ubuntu.com oneiric-security/main TranslationIndex
Hit http://security.ubuntu.com oneiric-security/multiverse TranslationIndex
Hit http://security.ubuntu.com oneiric-security/restricted TranslationIndex
Hit http://security.ubuntu.com oneiric-security/universe TranslationIndex
Hit http://de.archive.ubuntu.com oneiric Release
Hit http://de.archive.ubuntu.com oneiric-updates Release
Ign http://de.archive.ubuntu.com oneiric Release
Hit http://security.ubuntu.com oneiric-security/main Translation-en
Hit http://security.ubuntu.com oneiric-security/multiverse Translation-en
Hit http://linux.dropbox.com oneiric Release.gpg
Hit http://security.ubuntu.com oneiric-security/restricted Translation-en
Hit http://de.archive.ubuntu.com oneiric-backports Release
Hit http://security.ubuntu.com oneiric-security/universe Translation-en
Ign http://de.archive.ubuntu.com oneiric/main Sources/DiffIndex
Ign http://de.archive.ubuntu.com oneiric/restricted Sources/DiffIndex
Ign http://de.archive.ubuntu.com oneiric/universe Sources/DiffIndex
Ign http://de.archive.ubuntu.com oneiric/multiverse Sources/DiffIndex
Ign http://de.archive.ubuntu.com oneiric/main i386 Packages/DiffIndex
Ign http://de.archive.ubuntu.com oneiric/restricted i386 Packages/DiffIndex
Ign http://de.archive.ubuntu.com oneiric/universe i386 Packages/DiffIndex
Ign http://de.archive.ubuntu.com oneiric/multiverse i386 Packages/DiffIndex
Hit http://linux.dropbox.com oneiric Release
Get:3 http://de.archive.ubuntu.com oneiric/main TranslationIndex [3,289 B]
Get:4 http://de.archive.ubuntu.com oneiric/multiverse TranslationIndex [2,265 B]
Hit http://de.archive.ubuntu.com oneiric/restricted TranslationIndex
Get:5 http://de.archive.ubuntu.com oneiric/universe TranslationIndex [2,640 B]
Hit http://de.archive.ubuntu.com oneiric-updates/restricted i386 Packages
Hit http://de.archive.ubuntu.com oneiric-updates/universe i386 Packages
Hit http://de.archive.ubuntu.com oneiric-updates/multiverse i386 Packages
Hit http://de.archive.ubuntu.com oneiric-updates/main TranslationIndex
Hit http://de.archive.ubuntu.com oneiric-updates/multiverse TranslationIndex
Hit http://de.archive.ubuntu.com oneiric-updates/restricted TranslationIndex
Hit http://de.archive.ubuntu.com oneiric-updates/universe TranslationIndex
Hit http://de.archive.ubuntu.com oneiric-backports/main Sources
Hit http://linux.dropbox.com oneiric/main i386 Packages
Ign http://ppa.launchpad.net oneiric/main Translation-en_US
Ign http://ppa.launchpad.net oneiric/main Translation-en
Ign http://linux.dropbox.com oneiric/main TranslationIndex
Hit http://de.archive.ubuntu.com oneiric-backports/restricted Sources
Hit http://de.archive.ubuntu.com oneiric-backports/universe Sources
Hit http://de.archive.ubuntu.com oneiric-backports/multiverse Sources
Hit http://de.archive.ubuntu.com oneiric-backports/main i386 Packages
Hit http://de.archive.ubuntu.com oneiric-backports/restricted i386 Packages
Hit http://de.archive.ubuntu.com oneiric-backports/universe i386 Packages
Hit http://de.archive.ubuntu.com oneiric-backports/multiverse i386 Packages
Hit http://de.archive.ubuntu.com oneiric-backports/main TranslationIndex
Hit http://de.archive.ubuntu.com oneiric-backports/multiverse TranslationIndex
Hit http://de.archive.ubuntu.com oneiric-backports/restricted TranslationIndex
Hit http://de.archive.ubuntu.com oneiric-backports/universe TranslationIndex
Hit http://de.archive.ubuntu.com oneiric/main Sources
Hit http://de.archive.ubuntu.com oneiric/restricted Sources
Hit http://de.archive.ubuntu.com oneiric/universe Sources
Hit http://de.archive.ubuntu.com oneiric/multiverse Sources
Hit http://de.archive.ubuntu.com oneiric/main i386 Packages
Hit http://de.archive.ubuntu.com oneiric/restricted i386 Packages
Hit http://de.archive.ubuntu.com oneiric/universe i386 Packages
Hit http://de.archive.ubuntu.com oneiric/multiverse i386 Packages
Hit http://de.archive.ubuntu.com oneiric/restricted Translation-en
Hit http://de.archive.ubuntu.com oneiric-updates/main Translation-en
Hit http://de.archive.ubuntu.com oneiric-updates/multiverse Translation-en
Hit http://de.archive.ubuntu.com oneiric-updates/restricted Translation-en
Hit http://de.archive.ubuntu.com oneiric-updates/universe Translation-en
Hit http://de.archive.ubuntu.com oneiric-backports/main Translation-en
Hit http://de.archive.ubuntu.com oneiric-backports/multiverse Translation-en
Hit http://de.archive.ubuntu.com oneiric-backports/restricted Translation-en
Hit http://de.archive.ubuntu.com oneiric-backports/universe Translation-en
Err http://de.archive.ubuntu.com oneiric-updates/main Sources
  416 Requested Range Not Satisfiable [IP: 141.30.13.30 80]
Err http://de.archive.ubuntu.com oneiric-updates/restricted Sources
  416 Requested Range Not Satisfiable [IP: 141.30.13.30 80]
Err http://de.archive.ubuntu.com oneiric-updates/universe Sources
  416 Requested Range Not Satisfiable [IP: 141.30.13.30 80]
Err http://de.archive.ubuntu.com oneiric-updates/multiverse Sources
  416 Requested Range Not Satisfiable [IP: 141.30.13.30 80]
Err http://de.archive.ubuntu.com oneiric-updates/main i386 Packages
  416 Requested Range Not Satisfiable [IP: 141.30.13.30 80]
Ign http://linux.dropbox.com oneiric/main Translation-en_US
Ign http://linux.dropbox.com oneiric/main Translation-en
Fetched 273 B in 2s (91 B/s)
Reading package lists... Done
W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://extras.ubuntu.com oneiric Release: The following signatures were invalid: BADSIG 16126D3A3E5C1192 Ubuntu Extras Archive Automatic Signing Key <[email protected]>
W: GPG error: http://de.archive.ubuntu.com oneiric Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]>
W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/oneiric/Release
W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/main/i18n/Index  No Hash entry in Release file /var/lib/apt/lists/partial/de.archive.ubuntu.com_ubuntu_dists_oneiric_main_i18n_Index
W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/multiverse/i18n/Index  No Hash entry in Release file /var/lib/apt/lists/partial/de.archive.ubuntu.com_ubuntu_dists_oneiric_multiverse_i18n_Index
W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/universe/i18n/Index  No Hash entry in Release file /var/lib/apt/lists/partial/de.archive.ubuntu.com_ubuntu_dists_oneiric_universe_i18n_Index
W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric-updates/main/source/Sources  416 Requested Range Not Satisfiable [IP: 141.30.13.30 80]
W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric-updates/restricted/source/Sources  416 Requested Range Not Satisfiable [IP: 141.30.13.30 80]
W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric-updates/universe/source/Sources  416 Requested Range Not Satisfiable [IP: 141.30.13.30 80]
W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric-updates/multiverse/source/Sources  416 Requested Range Not Satisfiable [IP: 141.30.13.30 80]
W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric-updates/main/binary-i386/Packages  416 Requested Range Not Satisfiable [IP: 141.30.13.30 80]
W: Some index files failed to download. They have been ignored, or old ones used instead.

There are some questions with similar problems, but no one seems to get these "Range Not Satisfiable" errors. I do not use a proxy, and the network configuration should not have changed since it worked the last time.
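The BADSIG warnings usually mean the locally cached index files are corrupt (often a flaky mirror, a transparent proxy, or an interrupted download) rather than that the archive keys changed. A commonly suggested cleanup, offered here as a hedged suggestion since it simply forces apt to re-download its lists, is:

sudo apt-get clean
cd /var/lib/apt
sudo mv lists lists.old
sudo mkdir -p lists/partial
sudo apt-get update

If the 416 errors persist afterwards, temporarily pointing sources.list at a different mirror can rule out the mirror (141.30.13.30) itself.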

    Read the article

  • Parallelism in .NET – Part 10, Cancellation in PLINQ and the Parallel class

    - by Reed
    Many routines are parallelized because they are long-running processes. When writing an algorithm that will run for a long period of time, it's typically good practice to allow that routine to be cancelled. I previously discussed terminating a parallel loop from within, but have not demonstrated how a routine can be cancelled from the caller's perspective. Cancellation in PLINQ and the Task Parallel Library is handled through a new, unified cooperative cancellation model introduced with .NET 4.0.

Cancellation in .NET 4 is based around a new, lightweight struct called CancellationToken. A CancellationToken is a small, thread-safe value type which is generated via a CancellationTokenSource. There are many goals which led to this design. For our purposes, we will focus on a couple of specific design decisions:

Cancellation is cooperative. A calling method can request a cancellation, but it's up to the processing routine to terminate – it is not forced.
Cancellation is consistent. A single method call requests a cancellation on every copied CancellationToken in the routine.

Let's begin by looking at how we can cancel a PLINQ query. Suppose we wanted to provide the option to cancel our query from Part 6:

double min = collection
    .AsParallel()
    .Min(item => item.PerformComputation());

We would rewrite this to allow for cancellation by adding a call to ParallelEnumerable.WithCancellation as follows:

var cts = new CancellationTokenSource();

// Pass cts here to a routine that could,
// in parallel, request a cancellation
try
{
    double min = collection
        .AsParallel()
        .WithCancellation(cts.Token)
        .Min(item => item.PerformComputation());
}
catch (OperationCanceledException e)
{
    // Query was cancelled before it finished
}

Here, if the user calls cts.Cancel() before the PLINQ query completes, the query will stop processing, and an OperationCanceledException will be raised. Be aware, however, that cancellation will not be instantaneous. When cts.Cancel() is called, the query will only stop after the current item.PerformComputation() elements all finish processing. cts.Cancel() will prevent PLINQ from scheduling a new task for a new element, but will not stop items which are currently being processed.

This goes back to the first goal I mentioned – cancellation is cooperative. Here, we're requesting the cancellation, but it's up to PLINQ to terminate.

If we wanted to allow cancellation to occur within our routine, we would need to change our routine to accept a CancellationToken, and modify it to handle this specific case:

public void PerformComputation(CancellationToken token)
{
    for (int i = 0; i < this.iterations; ++i)
    {
        // Add a check to see if we've been canceled
        // If a cancel was requested, we'll throw here
        token.ThrowIfCancellationRequested();

        // Do our processing now
        this.RunIteration(i);
    }
}

With this overload of PerformComputation, each internal iteration checks to see if a cancellation request was made, and will throw an OperationCanceledException at that point, instead of waiting until the method returns. This is good, since it allows us, as developers, to plan for cancellation, and terminate our routine in a clean, safe state. This is handled by changing our PLINQ query to:

try
{
    double min = collection
        .AsParallel()
        .WithCancellation(cts.Token)
        .Min(item => item.PerformComputation(cts.Token));
}
catch (OperationCanceledException e)
{
    // Query was cancelled before it finished
}

PLINQ is very good about handling this exception, as well. There is a very good chance that multiple items will raise this exception, since the entire purpose of PLINQ is to have multiple items be processed concurrently. PLINQ will take all of the OperationCanceledException instances raised within these methods, and merge them into a single OperationCanceledException in the call stack. This is done internally because we added the call to ParallelEnumerable.WithCancellation. If, however, a different exception is raised by any of the elements, the OperationCanceledException as well as the other Exception will be merged into a single AggregateException.

The Task Parallel Library uses the same cancellation model, as well. Here, we supply our CancellationToken as part of the configuration. The ParallelOptions class contains a property for the CancellationToken. This allows us to cancel a Parallel.For or Parallel.ForEach routine in a very similar manner to our PLINQ query. As an example, we could rewrite our Parallel.ForEach loop from Part 2 to support cancellation by changing it to:

try
{
    var cts = new CancellationTokenSource();
    var options = new ParallelOptions() { CancellationToken = cts.Token };

    Parallel.ForEach(customers, options, customer =>
    {
        // Run some process that takes some time...
        DateTime lastContact = theStore.GetLastContact(customer);
        TimeSpan timeSinceContact = DateTime.Now - lastContact;

        // Check for cancellation here
        options.CancellationToken.ThrowIfCancellationRequested();

        // If it's been more than two weeks, send an email, and update...
        if (timeSinceContact.Days > 14)
        {
            theStore.EmailCustomer(customer);
            customer.LastEmailContact = DateTime.Now;
        }
    });
}
catch (OperationCanceledException e)
{
    // The loop was cancelled
}

Notice that here we use the same approach taken in PLINQ. The Task Parallel Library will automatically handle our cancellation in the same manner as PLINQ, providing a clean, unified model for cancellation of any parallel routine. The TPL performs the same aggregation of the cancellation exceptions as PLINQ, as well, which is why a single exception handler for OperationCanceledException will cleanly handle this scenario. This works because we're using the same CancellationToken provided in the ParallelOptions.
If a different exception was thrown by one thread, or a CancellationToken from a different CancellationTokenSource was used to raise our exception, we would instead receive all of our individual exceptions merged into one AggregateException.
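The post shows cancellation from the routine's side; for completeness, here is a hedged sketch of the caller's side (the five-second trigger and the ThreadPool usage are illustrative assumptions, not from the original article):

var cts = new CancellationTokenSource();

// Request cancellation from another thread after ~5 seconds
// (illustrative only; any event, such as a Cancel button, could call Cancel())
ThreadPool.QueueUserWorkItem(_ =>
{
    Thread.Sleep(TimeSpan.FromSeconds(5));
    cts.Cancel();
});

try
{
    double min = collection
        .AsParallel()
        .WithCancellation(cts.Token)
        .Min(item => item.PerformComputation(cts.Token));
}
catch (OperationCanceledException)
{
    // Reached if Cancel() fired before the query finished
}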

    Read the article

  • Enabling Kerberos Authentication for Reporting Services

    - by robcarrol
    Recently, I’ve helped several customers with Kerberos authentication problems with Reporting Services and Analysis Services, so I’ve decided to write this blog post and pull together some useful resources in one place (there are 2 whitepapers in particular that I found invaluable when configuring Kerberos authentication, and these can be found in the references section at the bottom of this post). In most of these cases, the problem has manifested itself with the Login failed for User ‘NT Authority\Anonymous’ (“double-hop”) error.

By default, Reporting Services uses Windows Integrated Authentication, which includes the Kerberos and NTLM protocols for network authentication. Additionally, Windows Integrated Authentication includes the negotiate security header, which prompts the client to select Kerberos or NTLM for authentication. The client can access reports which have the appropriate permissions by using Kerberos for authentication. Servers that use Kerberos authentication can impersonate those clients and use their security context to access network resources. You can configure Reporting Services to use both Kerberos and NTLM authentication; however, this may lead to a failure to authenticate. With negotiate, if Kerberos cannot be used, the authentication method will default to NTLM. When negotiate is enabled, the Kerberos protocol is always used except when:

- Clients/servers that are involved in the authentication process cannot use Kerberos.
- The client does not provide the information necessary to use Kerberos.

An in-depth discussion of Kerberos authentication is beyond the scope of this post; however, when users execute reports that are configured to use Windows Integrated Authentication, their logon credentials are passed from the report server to the server hosting the data source. Delegation needs to be set on the report server and Service Principal Names (SPNs) set for the relevant services. When a user processes a report, the request must go through a Web server on its way to a database server for processing. Kerberos authentication enables the Web server to request a service ticket from the domain controller, impersonate the client when passing the request to the database server, and then restrict the request based on the user’s permissions. Each time a server is required to pass the request to another server, the same process must be used. Kerberos authentication is supported in both native and SharePoint integrated mode, but I’ll focus on native mode for the purpose of this post (I’ll explain configuring SharePoint integrated mode with Kerberos authentication in a future post).

Configuring Kerberos avoids the authentication failures due to double-hop issues. These double-hop errors occur when a user's Windows domain credentials can’t be passed to another server to complete the user’s request. In the case of my customers, users were executing Reporting Services reports that were configured to query Analysis Services cubes on a separate machine using Windows Integrated security. The double-hop issue occurs because NTLM credentials are valid for only one network hop; subsequent hops result in anonymous authentication. The client attempts to connect to the report server by making a request from a browser (or some other application), and the connection process begins with authentication. With NTLM authentication, client credentials are presented to Computer 2. However, Computer 2 can’t use the same credentials to access Computer 3 (so we get the Anonymous login error).

To access Computer 3 it is necessary to configure the connection string with stored credentials, which is what a number of customers I have worked with have done to work around the double-hop authentication error. However, to get the benefits of Windows Integrated security, a better solution is to enable Kerberos authentication. Again, the connection process begins with authentication. With Kerberos authentication, the client and the server must demonstrate to one another that they are genuine, at which point authentication is successful and a secure client/server session is established. In a typical deployment, the tiers are:

Client tier (computer 1): the client computer from which an application makes a request.
Middle tier (computer 2): the Web server or farm where the client’s request is directed. Both the SharePoint and Reporting Services server(s) comprise the middle tier (but we’re only concentrating on native deployments just now).
Back-end tier (computer 3): the database/Analysis Services server/cluster where the requested data is stored.

In order to enable Kerberos authentication for Reporting Services it’s necessary to configure the relevant SPNs, configure trust for delegation for the server accounts, configure Kerberos with full delegation, and configure the authentication types for Reporting Services.

Service Principal Names (SPNs) are unique identifiers for services and identify the account’s type of service. If an SPN is not configured for a service, a client account will be unable to authenticate to the servers using Kerberos. You need to be a domain administrator to add an SPN, which can be added using the SetSPN utility. For Reporting Services in native mode, the following SPNs need to be registered:

-- SQL Server service
SETSPN -S mssqlsvc/servername:1433 Domain\SQL

For named instances, or if the default instance is running under a different port, the specific port number should be used.

-- Reporting Services service
SETSPN -S http/servername Domain\SSRS
SETSPN -S http/servername.domain.com Domain\SSRS

The SPN should be set for both the NETBIOS name of the server and the FQDN. If you access the reports using a host header or DNS alias, then that should also be registered:

SETSPN -S http/www.reports.com Domain\SSRS

-- Analysis Services service
SETSPN -S msolapsvc.3/servername Domain\SSAS

Next, you need to configure trust for delegation, which refers to enabling a computer to impersonate an authenticated user to services on another computer:

Client:
1. The requesting application must support the Kerberos authentication protocol.
2. The user account making the request must be configured on the domain controller. Confirm that the following option is not selected: Account is sensitive and cannot be delegated.

Servers:
1. The service accounts must be trusted for delegation on the domain controller.
2. The service accounts must have SPNs registered on the domain controller. If the service account is a domain user account, the domain administrator must register the SPNs.

In Active Directory Users and Computers, verify that the domain user accounts used to access reports have been configured for delegation (the ‘Account is sensitive and cannot be delegated’ option should not be selected). We then need to configure the Reporting Services service account and computer to use Kerberos with full delegation, and do the same for the SQL Server or Analysis Services service accounts and computers (depending on what type of data source you are connecting to in your reports).

Finally, and this is the part that sometimes gets overlooked, we need to configure the authentication type correctly for Reporting Services to use Kerberos authentication. This is configured in the Authentication section of the RSReportServer.config file on the report server:

<Authentication>
    <AuthenticationTypes>
        <RSWindowsNegotiate/>
    </AuthenticationTypes>
    <EnableAuthPersistence>true</EnableAuthPersistence>
</Authentication>

This will enable Kerberos authentication for Internet Explorer. For other browsers, see the link below. The report server instance must be restarted for these changes to take effect. Once these changes have been made, all that’s left to do is test that Kerberos authentication is working properly by running a report from Report Manager that is configured to use Windows Integrated authentication (connecting to either an Analysis Services or SQL Server back end).

Resources:
Manage Kerberos Authentication Issues in a Reporting Services Environment
http://download.microsoft.com/download/B/E/1/BE1AABB3-6ED8-4C3C-AF91-448AB733B1AF/SSRSKerberos.docx
Configuring Kerberos Authentication for Microsoft SharePoint 2010 Products
http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23176
How to: Configure Windows Authentication in Reporting Services
http://msdn.microsoft.com/en-us/library/cc281253.aspx
RSReportServer Configuration File
http://msdn.microsoft.com/en-us/library/ms157273.aspx#Authentication
Planning for Browser Support
http://msdn.microsoft.com/en-us/library/ms156511.aspx
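As a quick way to verify the SPN registrations and the ticket actually issued, the following commands can help (hedged: Domain\SSRS and the server names stand in for real values, and klist is built in only on recent Windows versions):

-- list the SPNs registered on the Reporting Services service account
SETSPN -L Domain\SSRS

-- on the client, after browsing to Report Manager, check the issued Kerberos tickets
klist

If klist shows no http/servername service ticket, the browser fell back to NTLM, and the double-hop error will reappear on the second hop.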

    Read the article

  • TFS 2010 Basic Concepts

    - by jehan
    Here, I’m going to discuss some key architectural changes and concepts introduced in TFS 2010 compared to TFS 2008. With TFS 2010, you first run the installation and then configure the installation from the available features. This is a bit similar to a SharePoint installation, where you first do the installation and then configure the SharePoint farms.

1) Installation features available in TFS 2010:

a) Basic: the most compact TFS installation possible. It installs and configures Source Control, Work Item Tracking and Build Services only (SharePoint and Reporting integration will not be possible).
b) Standard Single Server: suitable for a single-server deployment of TFS. It installs and configures Windows SharePoint Services for you and uses the default instance of SQL Server.
c) Advanced: suitable if you want to use remote servers for the SQL Server databases, SharePoint Products and Technologies, and SQL Server Reporting Services.
d) Application Tier Only: if you want to configure high availability for Team Foundation Server in a load-balanced (NLB) environment, move Team Foundation Server from one server to another, or restore TFS.
e) Upgrade: if you want to upgrade from a prior version of TFS.

Note: One more important thing to know about TFS 2010 Basic is that it can be installed on client operating systems (Windows 7 and Windows Vista SP3), whereas the previous versions of TFS (2008 and 2005) could not be installed on a client OS.

2) Team Project Collections:

Connect to TFS dialog box in TFS 2008: in TFS 2008, the TFS server contains a set of team projects. Each project may or may not be independent of the other projects, every check-in gets an ever-increasing changeset ID irrespective of the team project it is checked into, and the same applies to work items, which also get unique work item IDs. The main problem with this approach was that certain things which the application development process required were impossible to do:

a) If something has gone wrong in one team project and you want to restore it to an earlier state where it was working properly, you have to restore the Team Foundation Server database from the backup taken as per your maintenance plans, and because of this the other team projects may lose any work which was not backed up.

b) Your company has merged with another company and you now have two TFS servers: the TFS server you have been working on, and the TFS server the other company was working on. After the merge, you want to integrate the team projects from the two TFS servers into one, which is almost impossible to achieve in TFS 2008. You can manually create the team projects you want to integrate from the other TFS server in one server (in Source Control), but you will lose the history of changesets, work items and the rest, which is very important.

There were a few more issues of this sort which were difficult to resolve in TFS 2008. To resolve issues in these kinds of scenarios, which were mainly related to TFS maintenance, integration, migration and security, Microsoft came up with the Team Project Collections concept in TFS 2010. This concept is similar to SharePoint site collections, and if you are familiar with the SharePoint architecture, it will help you understand the TFS 2010 architecture easily.

Connect to TFS dialog box in TFS 2010: in this dialog box there are two team project collections; each team project collection can contain any number of team projects, and on the right side it shows the two team projects in the team project collection (Default Collection) which I have chosen.

Note: You can connect to only one team project collection at a time using an instance of TFS Team Explorer.

How does it work? To introduce team project collections, the TFS databases have been reorganized. TFS 2008 was composed of 5-7 databases partitioned by subsystem (one each for Version Control, Work Item Tracking, Build, Integration, Project Management...). The new TFS 2010 database architecture:

TFS_Config: the root database. It contains centralized TFS configuration data, including the list of all team projects that exist on the TFS server.
TFS_Warehouse: the data warehouse. It contains all the reporting data served by this server (farm).
TFS_*: one database per team project collection. Each contains all the operational data of that team project collection, regardless of subsystem.

In addition to these, you will have databases for SharePoint and Report Server.

3) TFS Farms: as TFS 2010 can be configured with multiple application tiers and multiple database tiers, it is more appropriate to speak of a TFS farm when going for a multi-server installation of TFS.

NLB support for TFS application tiers – with TFS 2010 you can configure multiple TFS application-tier machines to serve the same set of team project collections. The primary purpose of NLB support is to enable cleaner and more complete high availability than in TFS 2008. Even if an application tier in the farm fails, the farm automatically continues to work with hardly any indication to end users of a problem.

SQL data tiers: with 2010 you can configure many SQL Servers. Each database can be placed on any SQL Server, because each team project collection is an independent database. This feature can also be used to load-balance databases across SQL Servers.

These new capabilities will significantly change the way enterprises manage their TFS installations in the future. With team project collections and TFS farms, you can create a single, arbitrarily large TFS installation and grow it incrementally by adding ATs and SQL Servers as needed.
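Since Team Explorer binds to exactly one collection at a time, code written against the TFS 2010 client object model also connects per collection. A hedged sketch (the server URL and collection name below are placeholders, not values from this post):

// Reference: Microsoft.TeamFoundation.Client (TFS 2010 client object model)
using System;
using Microsoft.TeamFoundation.Client;

class ConnectToCollection
{
    static void Main()
    {
        // One TfsTeamProjectCollection per collection; the URL includes the collection name
        var collectionUri = new Uri("http://tfsserver:8080/tfs/DefaultCollection"); // placeholder
        using (var collection = new TfsTeamProjectCollection(collectionUri))
        {
            collection.EnsureAuthenticated();
            Console.WriteLine("Connected to collection: {0}", collection.Name);
        }
    }
}

To work with a team project in a different collection, you would construct a second TfsTeamProjectCollection with that collection's URL, mirroring the one-collection-at-a-time rule in Team Explorer.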

    Read the article

< Previous Page | 582 583 584 585 586 587 588 589 590 591 592 593  | Next Page >