Search Results

Search found 5643 results on 226 pages for 'machines'.

  • Hanging of host network connections when starting KVM guest on bridge

    - by Chris Phillips
    Hi, I've a KVM system upon which I'm running a network bridge directly between all VMs and a bond0 (eth0, eth1) on the host OS. As such, all machines are presented on the same subnet, available outside of the box. The bond is doing mode 1 active/passive, with an arp_ip_target set to the default gateway, which has caused some issues in itself, but I can't see the bond configs mattering here myself.

    I'm seeing odd things most times when I stop and start a guest on the platform, in that on the host I lose network connectivity (icmp, ssh) for about 30 seconds. I don't lose connectivity on the other already-running VMs though... they can always ping the default GW, but the host can't. I say "about 30 seconds" but from some tests it actually seems to be 28 seconds usually (or at least, I lose 28 pings...) and I'm wondering if this somehow relates to the bridge config.

    I'm not running STP on the bridge at all, and the forwarding delay is set to 1 second, path cost on the bond0 lowered to 10 and port priority of bond0 also lowered to 1. As such I don't think that the bridge should ever be able to think that bond0 is not connected just fine (as continued guest connectivity implies), yet the IP of the host, which is on the bridge device (... could that matter??), becomes unreachable. I'm fairly sure it's about the bridged networking, but at the same time as this happens when a VM is started there are clearly loads of other things also happening, so maybe I'm way off the mark.

    Lack of connectivity:

        # ping 10.20.11.254
        PING 10.20.11.254 (10.20.11.254) 56(84) bytes of data.
        64 bytes from 10.20.11.254: icmp_seq=1 ttl=255 time=0.921 ms
        64 bytes from 10.20.11.254: icmp_seq=2 ttl=255 time=0.541 ms

        type=1700 audit(1293462808.589:325): dev=vnet6 prom=256 old_prom=0 auid=4294967295 ses=4294967295
        type=1700 audit(1293462808.604:326): dev=vnet7 prom=256 old_prom=0 auid=4294967295 ses=4294967295
        type=1700 audit(1293462808.618:327): dev=vnet8 prom=256 old_prom=0 auid=4294967295 ses=4294967295
        kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0x186 data 0x130079
        kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xffdd694a
        kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0x186 data 0x530079

        64 bytes from 10.20.11.254: icmp_seq=30 ttl=255 time=0.514 ms
        64 bytes from 10.20.11.254: icmp_seq=31 ttl=255 time=0.551 ms
        64 bytes from 10.20.11.254: icmp_seq=32 ttl=255 time=0.437 ms
        64 bytes from 10.20.11.254: icmp_seq=33 ttl=255 time=0.392 ms

    brctl output of relevant bridge:

        # brctl showstp brdev
        brdev
          bridge id             8000.b2e1378d1396
          designated root       8000.b2e1378d1396
          root port                0     path cost                 0
          max age              19.99     bridge max age        19.99
          hello time            1.99     bridge hello time      1.99
          forward delay         0.99     bridge forward delay   0.99
          ageing time         299.95
          hello timer           0.50     tcn timer              0.00
          topology change timer 0.00     gc timer               0.04
          flags

        vnet5 (3)
          port id              8003      state           forwarding
          designated root      8000.b2e1378d1396   path cost           100
          designated bridge    8000.b2e1378d1396   message age timer   0.00
          designated port      8003                forward delay timer 0.00
          designated cost      0                   hold timer          0.00
          flags

        vnet0 (2)
          port id              8002      state           forwarding
          designated root      8000.b2e1378d1396   path cost           100
          designated bridge    8000.b2e1378d1396   message age timer   0.00
          designated port      8002                forward delay timer 0.00
          designated cost      0                   hold timer          0.00
          flags

        bond0 (1)
          port id              0001      state           forwarding
          designated root      8000.b2e1378d1396   path cost           10
          designated bridge    8000.b2e1378d1396   message age timer   0.00
          designated port      0001                forward delay timer 0.00
          designated cost      0                   hold timer          0.00
          flags

    I do see the new port listed as learning, but in line with the forward delay, only for 1 or 2 seconds when polling the brctl output on a loop. All pointers, tips or stabs in the dark appreciated.
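
    A quick way to test whether the outage actually lines up with the bridge re-learning the bond0 port is to watch the port's STP state from sysfs while pinging the gateway as a guest starts. A minimal sketch (assuming the bridge is named brdev and the uplink port is bond0, as in the output above):

        #!/usr/bin/env python3
        # Poll a bridge port's STP state via sysfs while pinging the gateway.
        # Port states: 0=disabled, 1=listening, 2=learning, 3=forwarding.
        import subprocess
        import time

        BRIDGE, PORT, GATEWAY = "brdev", "bond0", "10.20.11.254"
        STATES = {0: "disabled", 1: "listening", 2: "learning", 3: "forwarding"}

        while True:
            with open("/sys/class/net/%s/brif/%s/state" % (BRIDGE, PORT)) as f:
                state = STATES.get(int(f.read().strip()), "unknown")
            # One ping with a 1-second timeout; exit code 0 means a reply arrived.
            alive = subprocess.call(
                ["ping", "-c", "1", "-W", "1", GATEWAY],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0
            print("%.1f %s=%s gateway=%s" % (time.time(), PORT, state,
                                             "up" if alive else "DOWN"))
            time.sleep(0.5)

    If bond0 stays in forwarding for the whole 28 seconds while the host's pings die, the bridge's STP logic is probably not the culprit, and attention shifts to what the bond and the host's MAC/ARP state do when the new vnet port appears.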

  • What NAS setup for two-way syncing over the internet?

    - by Jamse
    I have family living a few hours away and have a lot of files that I would like to share - especially lots of folders of digital photos, but also documents etc. - partially so they can see them, partially so I can have access when I visit them and partially for backup / redundancy purposes. My current hard drives on my main machine are getting pretty full anyway, and I have a MythTV box where my music is currently stored, so I was thinking of getting a NAS anyway. And at the other end my family have a few computers, so they would probably benefit from a NAS too.

    My general idea (though I'm willing to shift on this if there are any bright ideas about other ways of achieving my objectives) is to get a matching pair of NASs and have them sync over the internet. (To cut down on bandwidth use I would get them in sync locally to start with.) Having read around as best I can, it seems that syncing over the internet is generally only a feature on quite high-end units. However, I have seen that QNAP seem to feature this on their TS-110 and TS-210 units, which might work (they call it "remote replication"). They seem pretty reasonably priced for what they are, but of course with buying 2 of them and then adding the drives (say 1TB or 2TB each) I'd be looking at about £400 total.

    So, I'm looking for recommendations really. I don't want to spend more than the QNAPs would cost me, but any other ideas would be most appreciated. I am comfortable with technology and tinkering around, but I don't have as much time for that as I would like, so I guess I would favour solutions that require less tinkering rather than more (even though that's less fun!). Any thoughts would be welcome, as would any comments from people who have used the QNAP boxes for this. Thanks in advance.

    Some specifications:

    - Two-way syncing. Changes made at either end should be synced to the other. There shouldn't be one unit that is effectively a read-only mirror of the other.
    - Not real time. The syncing doesn't need to be real time - if it updated, say, daily overnight that would be fine.
    - Set and forget. I would prefer minimal user interaction once set up - it would be great if syncs were scheduled and automatic.
    - OS independence. I am running Windows XP plus an Ubuntu-based MythTV box. At the other end there are Windows 7 and Windows XP machines, plus a networked TV set-top box which I think can play files off the network.
    - Machine independence. I would favour a system that is self-contained, i.e. not reliant on any particular PC being switched on. If the system had enough else going for it I could perhaps work around it at this end, where I only have one PC that's used as such, but it would be harder at the other end, where there are at least two PCs that might be accessing the files.
    - Notifications. I guess things like getting an email notification if the syncing fell over for any reason would be useful, though it's not a deal breaker.

    Update: I've been digging some more and it looks like QNAP's Remote Replication function is actually just rsync, so only really suitable for one-way syncing. I've posted on their forum to double check, but I think that's the case. In which case, I think the focus of my question is now either: do any reasonably-priced NASs support bidirectional syncing over the internet?, or has anyone had any luck installing sync software onto NASs for this purpose? (Also, updated question to clarify that I'm after two-way syncing.)
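
    On the two-way requirement: one option that stays close to the "set and forget" spec is Unison, which does true bidirectional reconciliation over SSH (unlike rsync). A minimal nightly-cron sketch, assuming Unison can be installed at both ends - the hostnames, paths and addresses below are placeholders:

        #!/usr/bin/env python3
        # Nightly two-way sync with Unison, emailing a report if it fails.
        import smtplib
        import subprocess
        from email.mime.text import MIMEText

        LOCAL_ROOT = "/mnt/nas/photos"                                # placeholder
        REMOTE_ROOT = "ssh://family-nas.example.com//mnt/nas/photos"  # placeholder

        result = subprocess.run(
            ["unison", "-batch", "-auto", LOCAL_ROOT, REMOTE_ROOT],
            capture_output=True, text=True)

        if result.returncode != 0:
            # Unison exits non-zero when the sync failed or left conflicts behind.
            msg = MIMEText(result.stdout + "\n" + result.stderr)
            msg["Subject"] = "NAS sync failed (exit %d)" % result.returncode
            msg["From"] = "nas@example.com"
            msg["To"] = "me@example.com"
            with smtplib.SMTP("localhost") as smtp:
                smtp.send_message(msg)

    Cron (or the NAS's own scheduler) would cover the "daily overnight" and "notifications" points; whether a given off-the-shelf NAS will let you install Unison is, of course, exactly the question.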

  • Installing .NET Framework 4 Client Profile breaks Windows Update

    - by Richard
    I have a Samsung NC-10 netbook with a fresh install of Windows 7 Home Premium 32-bit (it only had 2GB). If Microsoft .NET Framework 4 Client Profile is installed on it, Windows Update will always return error code 8024402F ("Windows Update encountered an unknown error"). As soon as I uninstall it, Windows Update works just fine again. Out of the four computers in this house, only this netbook has the problem.

    My question is: How can I get the .NET Framework 4 Client Profile installed on my netbook and continue to have a functioning Windows Update?

    ----- More information -----

    The hard drive recently died on my netbook, so I replaced it with a nice new SSD and did a fresh installation of Windows 7 Home Premium (SP1) - along with all the updates. At some point I found that, when I ran Windows Update, I was greeted with error code 8024402F ("Windows Update encountered an unknown error"). Looking in C:\Windows\WindowsUpdate.log, I saw the following issue:

        WARNING: ECP: Failed to validate cab file digest downloaded from
        http://download.windowsupdate.com/msdownload/update/software/dflt/2012/02/4913552_4a5c9563d1f58c77f30d0d5c9999e4b8bff3ab21.cab
        with error 0x80091007
        WARNING: ECP: This roundtrip contained some optimized updates which failed. New Update count = 0, Old Count = 3
        FATAL: ProcessCoreMetadata did not return any update to be committed
        WARNING: Sync of Updates: 0x8024402f
        WARNING: SyncServerUpdatesInternal failed: 0x8024402f

    When I downloaded the CAB from the URL listed and opened it, it contained a file called 4913552.txt. A search on Google suggested that it's related to Microsoft .NET Framework 4 Client Profile. Other people had reported problems with it breaking Windows Update, but they were running Windows XP.

    I tried the steps outlined on the Microsoft site for this error code, but it reported that there was nothing wrong. I also found this superuser question; I tried all the answers listed but none of them made any difference. My router doesn't block ActiveX, changing my internet settings in IE made no difference, assuming it was a corrupted update repository didn't do anything (except wipe my update history), my date and time were correct, switching to Google's DNS didn't work and neither did disabling IPv6.

    Figuring that this update was corrupted, I repaired it and nothing changed. In desperation I un-installed it and Windows Update started working again! Brilliant! I then downloaded the full version from the Microsoft website, installed it and, thankfully, Windows Update continued to work just fine. A week later I turn on my netbook and Windows Update is broken again, with exactly the same error message and log entries as before. Repairing .NET Framework 4 Client Profile did nothing; removing it entirely solved the problem again.

    Thinking this might be some odd Windows installation issue, I formatted the hard drive and re-installed Windows. Same problem as before - as soon as .NET Framework 4 Client Profile ended up on the netbook, Windows Update stopped working and reported error 8024402F. As soon as it was un-installed, everything worked just fine again. There are three other machines in this house and all of them have working Windows Update and this Client Profile.

    Does anyone know why it causes this netbook to break and, more importantly, how I can fix it?
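
    Error 0x80091007 is CRYPT_E_HASH_VALUE ("the hash value is not correct"), i.e. the downloaded cab doesn't match the digest Windows Update expects. The 40-character hex string in the file name looks like a SHA-1 digest; assuming it is, a quick check of what the netbook actually receives is possible from any machine on the same connection:

        #!/usr/bin/env python3
        # Download the rejected cab and compare its SHA-1 against the digest
        # that appears to be embedded in the file name.
        import hashlib
        import urllib.request

        URL = ("http://download.windowsupdate.com/msdownload/update/software/"
               "dflt/2012/02/4913552_4a5c9563d1f58c77f30d0d5c9999e4b8bff3ab21.cab")

        expected = URL.rsplit("_", 1)[1].split(".")[0]  # hex digest from the name
        actual = hashlib.sha1(urllib.request.urlopen(URL).read()).hexdigest()

        print("expected:", expected)
        print("actual:  ", actual)
        print("match" if actual == expected else "MISMATCH - download corrupted")

    If the digest only mismatches on the netbook, something on that machine's network path (a proxy, AV scanner or flaky driver) is corrupting the download, which would square with the other three computers being fine.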

  • Can't connect 2 subnets through RRAS 2008 R2

    - by mcdwight6
    I'm working on a project for a networking class. In VMware Workstation, I have to set up a 2008 R2 server with DHCP reservations for 2 clients on separate subnets and have them ping each other. Here is the output of the route print command:

        ===========================================================================
        Interface List
         13 ...00 50 56 2a e7 11 ...... Intel(R) PRO/1000 MT Network Connection #3
         10 ...00 0c 29 66 88 dd ...... Intel(R) PRO/1000 MT Network Connection
          1 ........................... Software Loopback Interface 1
         24 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter
         11 ...02 00 54 55 4e 01 ...... Teredo Tunneling Pseudo-Interface
         14 ...00 00 00 00 00 00 00 e0  6TO4 Adapter
         16 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #2
         17 ...00 00 00 00 00 00 00 e0  isatap.{5B8FB196-616F-4168-A020-03E63A309CEC}
        ===========================================================================

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask        Gateway      Interface  Metric
                  0.0.0.0          0.0.0.0        On-link       10.0.0.2     266
                  0.0.0.0          0.0.0.0        On-link      223.6.6.2     266
                 10.0.0.0        255.0.0.0        On-link       10.0.0.2     266
                 10.0.0.2  255.255.255.255        On-link       10.0.0.2     266
           10.255.255.255  255.255.255.255        On-link       10.0.0.2     266
                127.0.0.0        255.0.0.0        On-link      127.0.0.1     306
                127.0.0.1  255.255.255.255        On-link      127.0.0.1     306
          127.255.255.255  255.255.255.255        On-link      127.0.0.1     306
                128.6.0.0      255.255.0.0        On-link       10.0.0.2      11
            128.6.255.255  255.255.255.255        On-link       10.0.0.2     266
                223.6.6.0    255.255.255.0        On-link       10.0.0.2      11
                223.6.6.0    255.255.255.0        On-link      223.6.6.2     266
                223.6.6.2  255.255.255.255        On-link      223.6.6.2     266
              223.6.6.255  255.255.255.255        On-link       10.0.0.2     266
              223.6.6.255  255.255.255.255        On-link      223.6.6.2     266
                224.0.0.0        240.0.0.0        On-link      127.0.0.1     306
                224.0.0.0        240.0.0.0        On-link       10.0.0.2     266
                224.0.0.0        240.0.0.0        On-link      223.6.6.2     266
          255.255.255.255  255.255.255.255        On-link      127.0.0.1     306
          255.255.255.255  255.255.255.255        On-link       10.0.0.2     266
          255.255.255.255  255.255.255.255        On-link      223.6.6.2     266
        ===========================================================================
        Persistent Routes:
          Network Address          Netmask  Gateway Address  Metric
                  0.0.0.0          0.0.0.0         10.0.0.2  Default
                  0.0.0.0          0.0.0.0        128.6.0.2  Default
                  0.0.0.0          0.0.0.0        223.6.6.2  Default
                128.6.0.0      255.255.0.0         10.0.0.2       1
                223.6.6.0    255.255.255.0         10.0.0.2       1
        ===========================================================================

        IPv6 Route Table
        ===========================================================================
        Active Routes:
         If Metric Network Destination      Gateway
          1    306 ::1/128                  On-link
         14   1010 2002::/16                On-link
         14    266 2002:8006:2::8006:2/128  On-link
          1    306 ff00::/8                 On-link
        ===========================================================================
        Persistent Routes:
          None

    My problem is that although I have set up both dynamic and persistent static routes in my R2 server, neither of the clients can ping even the NIC outside its own subnet. For example, Client A can ping the NIC at 10.0.0.2 and vice versa, but it gets a general transmit failure when it tries to ping the card at 223.6.6.2, let alone trying to ping the other client. I have completely disabled the firewalls on all machines and anything else I could think of, without success. What am I missing?

    Edit: Since posting this, I also noticed that the default gateways on my 2 NICs keep getting zeroed out. Does anyone know a fix for this?
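
    One thing that stands out in the dump above is three persistent default routes (0.0.0.0/0) pointing at three different gateways; a multihomed RRAS box would normally carry at most one. A throwaway check along these lines (a sketch that just re-parses route print) will flag that on each machine:

        #!/usr/bin/env python3
        # Count persistent default routes in "route print" output.
        import subprocess

        out = subprocess.check_output(["route", "print"], text=True)
        lines = out.splitlines()

        # Slice from the (IPv4) persistent-routes section onward.
        start = lines.index("Persistent Routes:")
        defaults = [l for l in lines[start:] if l.strip().startswith("0.0.0.0")]

        print("\n".join(defaults))
        if len(defaults) > 1:
            print("WARNING: %d persistent default routes - expect flaky routing"
                  % len(defaults))

    Competing default routes could also explain erratic gateway behaviour like the zeroed-out defaults mentioned in the edit above, though that part is speculation.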

  • libvirt upgrade caused VMs to not see drives (boot media not found)

    - by bias
    I upgraded to Ubuntu 12.04.1 and now libvirt (via OpenNebula) successfully runs VMs, but they aren't finding the 2 drives (specifically, the boot drive). One is "hd", the other is "cdrom". The machine boots but fails and displays something like "boot media not found hd" (this was in a VNC terminal and I didn't copy the output anywhere, so that's not the verbatim message). I tried constructing a new disk using the new version of qemu (via vmbuilder) and this new machine has the same problem as the old machine. In case it matters (I can't see why it would) I'm using OpenNebula to manage the machines. There's nothing relevant in any of the logs: syslog, libvirtd, oned. Which is to say nothing interesting/anomalous is reported when the machine is brought up.

    Versions:

        libvirt 0.9.8-2ubuntu17.4
        qemu-kvm 1.0+noroms-0ubuntu14.3

    The relevant portions of the libvirt XML config:

        <os>
          <type arch='x86_64' machine='pc-1.0'>hvm</type>
          <boot dev='hd'/>
        </os>
        ...
        <devices>
          <emulator>/usr/bin/kvm</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2'/>
            <source file='/var/lib/one//203/images/disk.0'/>
            <target dev='sda' bus='scsi'/>
            <alias name='scsi0-0-0'/>
            <address type='drive' controller='0' bus='0' unit='0'/>
          </disk>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/var/lib/one//203/images/disk.1'/>
            <target dev='sdc' bus='scsi'/>
            <readonly/>
            <alias name='scsi0-0-2'/>
            <address type='drive' controller='0' bus='0' unit='2'/>
          </disk>
          <controller type='scsi' index='0'>
            <alias name='scsi0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
          </controller>
          <memballoon model='virtio'>
            <alias name='balloon0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
          </memballoon>
          ...
        </devices>

    The libvirt/qemu log contains:

        2012-11-25 22:19:24.328+0000: starting up
        LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none
        /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 256 -smp 1,sockets=1,cores=1,threads=1 -name one-204
          -uuid 4be6c276-19e8-bdc2-e9c9-9ca5352f2be3 -nodefconfig -nodefaults
          -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-204.monitor,server,nowait
          -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
          -device lsi,id=scsi0,bus=pci.0,addr=0x5
          -drive file=/var/lib/one//204/images/disk.0,if=none,id=drive-scsi0-0-0,format=qcow2
          -device scsi-disk,bus=scsi0.0,scsi-id=0,drive=drive-scsi0-0-0,id=scsi0-0-0,bootindex=1
          -drive file=/var/lib/one//204/images/disk.1,if=none,media=cdrom,id=drive-scsi0-0-2,readonly=on,format=raw
          -device scsi-disk,bus=scsi0.0,scsi-id=2,drive=drive-scsi0-0-2,id=scsi0-0-2
          -netdev tap,fd=18,id=hostnet0
          -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:00:68,bus=pci.0,addr=0x3
          -netdev tap,fd=19,id=hostnet1
          -device rtl8139,netdev=hostnet1,id=net1,mac=02:00:ad:f0:1b:94,bus=pci.0,addr=0x4
          -usb -vnc 0.0.0.0:204 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
        kvm: -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:00:68,bus=pci.0,addr=0x3: pci_add_option_rom: failed to find romfile "pxe-rtl8139.rom"
        kvm: -device rtl8139,netdev=hostnet1,id=net1,mac=02:00:ad:f0:1b:94,bus=pci.0,addr=0x4: pci_add_option_rom: failed to find romfile "pxe-rtl8139.rom"
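
    When chasing this kind of post-upgrade regression, it can help to diff the domain XML the new libvirt actually runs against a known-good machine, since libvirt may rewrite device addresses and controller stanzas on the way through. A short sketch with the libvirt Python bindings (the domain name is a placeholder):

        #!/usr/bin/env python3
        # Dump the live domain XML so it can be diffed against a working guest.
        import libvirt

        conn = libvirt.open("qemu:///system")
        dom = conn.lookupByName("one-204")  # placeholder domain name
        print(dom.XMLDesc(0))

    Diffing that output from before and after the 12.04.1 upgrade narrows the problem to either the XML libvirt generates (the scsi controller/disk stanzas) or to qemu 1.0's interpretation of the same configuration.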

  • Choice of an OS for a home ZFS NAS

    - by OlafM
    I am preparing a home NAS with an old Athlon 64 X2 3800+, 4 GB ECC RAM, an Asus M2V MX motherboard, and a single 3 TB WDC Green (another one as a mirror may be installed in the future). It's the cheapest solution I found that includes ECC memory, and the higher energy consumption is offset by the lower (zero) cost of acquisition. The system will be used for:

    - music storage and streaming to other desktop computers;
    - storage of scanned dia slides (3-4k slides, 180 MB TIFF each plus a reduced-quality JPEG version);
    - streaming of these photos to a local iPad 2 (maybe Plex App? not yet sure);
    - (one additional) remote backup via rsync/ssh or ZFS send/receive.

    It will be controlled via remote ssh, maybe VNC, with no monitor attached. The absolute requirement is a reliable ZFS solution, plus the ability to easily install packages/software/virtual machines and to update remotely (I will be the admin and I don't live near the NAS). I have mainly three options:

    - NAS4Free/FreeNAS
    - OpenIndiana
    - Solaris Express 11 (yeah yeah I know the license requirements, I will write a perl script on it to count it as a development machine)

    Problems: NAS4Free/FreeNAS (I tested only NAS4Free) requires an embedded installation for remote upgrading, but a full install for easy addition of software packages. Since I need at least AirVideo Server (Linux/Win) and Plex App (Win/Linux) to stream the photos and some videos to the iPad (they both require VirtualBox), but I cannot be there to install updates, NAS4Free/FreeNAS are excluded. http://www.nas4free.org/general_information.html explains the issue: embedded can be remotely updated, full cannot.

    Solaris also has another advantage: the CrashPlan client supports Solaris and I'm already using it for other backups. I would like to leave that option open, even if I will probably be doing backups through zfs send/receive. NexentaStor was left out because zfs send/receive are not included in the free version.

    The question is now Solaris 11 Express versus OpenIndiana. To ease the management, I will be using http://www.napp-it.org. Which one would you suggest and why? I found lots of information and it's difficult for me to decide. I think (from the napp-it manual) that Solaris has some additional options for SMB shares, but are they really needed at home? I think I won't even use ACLs, since normal unix-style permissions are enough. OpenIndiana maybe has more frequent updates (Solaris offers only security updates between releases), but again, do I need them? I don't think so. Moreover, this is a NAS that has to work and nothing else; I cannot risk having problems that require me to access the server. Isn't OpenIndiana a bit more... cutting edge (in the Solaris world)? I'm just asking, no need to focus on this for the answer :-)

    I would limit myself to these two options (SE11.1/OI) also because I will be making a NAS for myself in the future (where high performance with Mac shares is also required) and Solaris has kernel support for AFP. I will use this server to gather experience as well. After this long question, thanks in advance! If you need additional info, let me know and I will update this post.

    UPDATE: Given the first answers, I will strongly suggest the person paying for the hardware to insert a second HD. Better 2x2TB than 1x3TB (3 TB is oversized anyway). I was trying to keep the initial costs down to spread them over a longer period, but it's better to have something good from the beginning.
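
    For the "remote backup via rsync/ssh or ZFS send/receive" requirement, all three candidates can drive it the same way; a minimal incremental-replication sketch (pool, dataset and host names are placeholders):

        #!/usr/bin/env python3
        # Incremental ZFS replication: snapshot, then send the delta between
        # yesterday's and today's snapshots to a remote pool over ssh.
        import datetime
        import subprocess

        DATASET = "tank/photos"                  # placeholder dataset
        REMOTE = "backup@offsite.example.com"    # placeholder host
        today = datetime.date.today().isoformat()
        prev = (datetime.date.today() - datetime.timedelta(days=1)).isoformat()

        subprocess.check_call(["zfs", "snapshot", "%s@%s" % (DATASET, today)])

        # zfs send -i <old> <new> | ssh <remote> zfs receive -F <dataset>
        send = subprocess.Popen(
            ["zfs", "send", "-i", "%s@%s" % (DATASET, prev),
             "%s@%s" % (DATASET, today)], stdout=subprocess.PIPE)
        subprocess.check_call(["ssh", REMOTE, "zfs", "receive", "-F", DATASET],
                              stdin=send.stdout)
        send.stdout.close()
        if send.wait() != 0:
            raise RuntimeError("zfs send failed")

    Since this is plain zfs plus ssh, it also keeps the OS decision independent of the backup design: any of the candidates can run it from cron.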

  • Setting up home DNS with Ubuntu Server

    - by Zeophlite
    I have a webserver (with static IP 192.168.1.5), and I want the machines on my local network to be able to access it without modifying /etc/hosts (or the equivalent for Windows/OSX). My router has:

        Primary DNS server    192.168.1.5
        Secondary DNS server  8.8.8.8  (Google's public DNS)

    Nginx is set up to serve websites externally as *.example.com. Internally, I want *.example.local to point to the server. My webserver has BIND9 installed, but I'm unsure of the settings. I've been through various contradicting tutorials, and so most of my settings have been clobbered. I've stripped out the lines which I'm confused about. The tutorials I looked at are http://tech.surveypoint.com/blog/installing-a-local-dns-server-behind-a-hardware-router/ and http://ubuntuforums.org/showthread.php?t=236093. They mostly differ on what should be put in /etc/bind/zones/db.example.local and /etc/bind/zones/db.192, so I've left the conflicting lines out below. Can someone suggest what the correct lines are to give my above behaviour (namely *.example.local pointing to 192.168.1.5)?

    /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.1.5
            netmask 255.255.255.0
            broadcast 192.168.1.255
            gateway 192.168.1.254

    /etc/hostname:

        avalon

    /etc/resolv.conf:

        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN

    /etc/bind/named.conf.options:

        options {
            directory "/var/cache/bind";
            forwarders {
                8.8.8.8;
                8.8.4.4;
            };
            dnssec-validation auto;
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    /etc/bind/named.conf.local:

        zone "example.local" {
            type master;
            file "/etc/bind/zones/db.example.local";
        };

        zone "1.168.192.in-addr.arpa" {
            type master;
            file "/etc/bind/zones/db.192";
        };

    /etc/bind/zones/db.example.local:

        $TTL 604800
        @ IN SOA avalon.example.local. webadmin.example.local. (
                      5 ; Serial, increment each edit
                 604800 ; Refresh
                  86400 ; Retry
                2419200 ; Expire
                 604800 ) ; Negative Cache TTL

    /etc/bind/zones/db.192:

        $TTL 604800
        @ IN SOA avalon.example.local. webadmin.example.local. (
                      4 ; Serial, increment each edit
                 604800 ; Refresh
                  86400 ; Retry
                2419200 ; Expire
                 604800 ) ; Negative Cache TTL
        ;

    What do I need to add to the above files so that on a laptop on the internal network, I can type in webapp.example.local and be served by my webserver?

    EDIT: I made several changes to the above files on the webserver.

    /etc/network/interfaces (end of file):

        dns-nameservers 127.0.0.1
        dns-search example.local

    /etc/bind/zones/db.example.local (end of file):

        @      IN NS    avalon.example.local.
        @      IN A     192.168.1.5
        avalon IN A     192.168.1.5
        webapp IN A     192.168.1.5
        www    IN CNAME 192.168.1.5

    /etc/bind/zones/db.192 (end of file):

           IN NS  avalon.example.local.
        73 IN PTR avalon.example.local.

    As a side note, my spare Win7 machine was able to connect directly to webapp.example.local, but for a Ubuntu 13.10 machine I had to make the following changes as well (not on the webserver, but on a separate machine):

    /etc/nsswitch.conf, before:

        hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4

    after:

        hosts: files dns

    /etc/NetworkManager/NetworkManager.conf, before:

        dns=dnsmasq

    after:

        #dns=dnsmasq

    The issue remains that it's not wildcard DNS, and so I have to add entries to /etc/bind/zones/db.example.local for webapp1, webapp2, ...
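
    On that final point: BIND zone files accept wildcard records, so a single additional line in db.example.local along the lines of "*  IN  A  192.168.1.5" should make every otherwise-undefined name under example.local resolve to the server (remember to bump the zone serial and reload BIND). A quick check from any client, looping over a few made-up names (a sketch; the names are arbitrary):

        #!/usr/bin/env python3
        # Verify that arbitrary names under example.local resolve to the
        # webserver once a wildcard A record is in the zone.
        import socket

        for name in ("webapp1", "webapp2", "anything-at-all"):
            fqdn = "%s.example.local" % name
            try:
                print(fqdn, "->", socket.gethostbyname(fqdn))
            except socket.gaierror as e:
                print(fqdn, "-> FAILED:", e)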

  • WNDR3700 Router + Cisco SG200-08 + LACP + Dual Uplink

    - by kobaltz
    Background: I have a storage server that has several virtual machine images stored on it. I would store them locally, but I have limited space on my desktop (using SSD storage). I would like to increase the bandwidth between the desktop and the storage server by using two NICs on each computer. My original configuration allowed about 55 MBps between the desktop and storage server. This storage server also has several TBs of documents, pictures, movies, VMs, and ISOs/programs. The storage server has 8 1.5TB hard drives in a RAID 10 configuration with a hardware RAID controller. The benchmarks on the RAID 10 are about 300 MBps.

    Configuration: In short, I am trying to bridge my switch and router. The switch is a small 8-port Cisco smart switch that supports 802.3ad LACP. I have two computers plugged into the switch, each with 2 Intel Gigabit NICs. The first computer is a Windows 7 machine that has the Intel ANS software installed. I have LACP configured on the computer and now show 3 NICs (2 physical + 1 TEAM virtual @ 2Gbps). It looks like this computer is configured correctly. I trunked the two ports that this computer is plugged into with the switch's web interface. The second computer is a homebrew storage box running Debian. I also have bonding enabled on this machine and the switch configured with LACP. Without having the WNDR3700 router in the picture yet, I am able to communicate between the Windows 7 machine and the Debian box since they both have static IP addresses. With LACP enabled on both machines I am getting about 106-108 MBps speeds.

    Issue: I plug in a network cable from the switch into the router and enable DHCP on the desktop. I saw no need to have a static address on the desktop. My transfer rates are still 106-108 MBps. While this is still a boost, I am trying to figure out how to get about 140-180 MBps. I am thinking that I need to increase the bandwidth from the router to the switch. My switch allows 4 groups for port trunking. I plugged in a second network cable from the router to the switch. My question is, what is the proper way to fix this issue? Should I port-trunk the two ports that are going from the switch to the router? Keep in mind that the router is a WNDR3700 and I am unsure whether or not it supports LACP. I do have OpenWRT installed on the router, but it still wasn't clear in any documentation that I found if it supported 802.3ad LACP standards. I am also wondering if there needs to be anything changed within the Cisco settings.

    [Edit] Corrected some numbers, wasn't really paying attention. It looks like the speed, even though at least two NICs are bonded with LACP, is still reaching the max bandwidth of only one port. Is there a way to configure the switch so that I can increase this bandwidth? Also, on the storage server, I had a couple of extra NICs laying around and threw them on there as well.

    Another EDIT and More Findings: I happened to look at the traffic of each individual NIC and think that I see the problem. I tested with a simple transfer of a 4GB file. I noticed that only one of the NICs was taking the load of the traffic. I then copied the file back to the storage server and noticed that the other NIC was sending out the traffic. I have 802.3ad LACP enabled on the two NICs and I see that it gets enabled dynamically on the switch's interface. Should I be using static link aggregation?
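
    One detail worth knowing when interpreting those numbers: 802.3ad never stripes a single flow across links. Each frame is assigned to exactly one slave by a hash of its addresses, so a single file copy between two hosts rides one 1 Gbps link no matter how many links are in the LAG. A toy illustration of a layer-2 style policy (Linux's xmit_hash_policy=layer2 hashes the MACs; the MAC values here are made up):

        #!/usr/bin/env python3
        # Illustrate layer-2 LAG hashing: a hash of (src MAC, dst MAC) modulo
        # the slave count picks the link, so a given host pair always lands
        # on the same slave in a given direction.
        def pick_slave(src_mac, dst_mac, n_slaves):
            src = int(src_mac.replace(":", ""), 16)
            dst = int(dst_mac.replace(":", ""), 16)
            return (src ^ dst) % n_slaves

        desktop = "00:1b:21:aa:bb:01"  # made-up MACs
        storage = "00:1b:21:cc:dd:02"

        print(pick_slave(desktop, storage, 2))  # same answer every run
        print(pick_slave(storage, desktop, 2))  # and for the reverse direction

    That squares with the later observation that each direction of the transfer uses a different NIC: two independent flows can spread over two links, but each individual flow tops out at one port's bandwidth, with LACP or static aggregation alike.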

  • Network update solutions for a company of ~20 (5 local, 15 remote)?

    - by Margaret
    Hi all. This is probably going to be a bit up in the air, because we're still in the "reaching towards solutions" phase, but I figured I'd see what you guys had to say. Plus I honestly know very little about systems and what is good and bad practice.

    My organisation has always more or less worked on the concept of local machines; since it primarily employed contractors who were working from home, each of those people was largely responsible for their own machine and backup procedures and the like. We're now expanding, though we're still reasonably small (we're up to about 20 staff members). Most people still work remotely, but we have a central office where about five people are working. But we're getting large enough that we're starting to think it would be a good idea to have a central file server, and things like that - if someone gets hit by a bus, we want someone else to know where to look for the files to continue their work.

    A lot of the people who work for us remotely work on projects for other companies as well, so I don't want to force them to log in to our server whenever they're on a network. But I do want to make connection as painless as possible, to improve utilisation.

    The other thing is that we're getting more people who would like to remote into the office server and do their work there. Our current remote connection application is an SSH install that allows people access to the network; the problem is, it's a black box to me, and I've never understood how to even connect to it (despite supposedly being de facto sysadmin). Thus far I've been able to bounce questions about how to get it working to the guy who does know it well, but he's leaving the company soon. So we probably need a solution for this that I actually understand. We were knocking around the idea of implementing a VPN with some form of remote desktop, and someone mentioned that this was largely a matter of purchasing a router capable of it; I'm not sure of the truth of that statement.

    This is what we have in the office:

    - Two shiny new i7 servers, each running Windows Server 2008. The precise eventual layout is still being debated a little, but the current suggestion is that one does the primary database crunching, while the other is a warm backup of the databases, along with running Reporting Services. They currently have SQL Server 2008 installed on them, which is being connected to via the 'sa' account. We're hoping to make each person use their own account (preferably one tied to the 'central' password we set up, so we can use Windows Authentication).
    - An older server, running XP Pro, that we are currently using as a test bed for a project that requires access to older versions of software. This machine is also being used to take backups, but I'm thinking of moving that functionality elsewhere.
    - A spare desktop from a guy who left the company (XP Pro). We're thinking of bumping up the hard disk space and using it as the magical file server that's going to solve one particular everything.
    - Assorted desktops, laptops, etc., at least one for each person in the office (mix of Win XP and Win 7; occasionally a person who normally works remotely might drop in to the office and bring a laptop bearing Vista, but it's pretty rare). All are set up as local user accounts at the moment; I don't know if it's the best arrangement.

    Purchasing more hardware is not a big problem, but we figure we might as well make use of what we've got first. Is Active Directory a big magic wand that's going to solve all the world's problems? Is there some other arrangement we should be looking to instead?

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data daily and/or weekly, which we process and merge into their existing database. During business hours, load on the servers is pretty minimal, as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube.

    I manage the IT Operations Team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers, such that we're not trying to process daily clients back-to-back overnight. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support overhead to basically "schedule" the ETL such that the jobs don't overlap, and to move clients/schedules around if they outgrow the resources on a particular server or allocated time-slot.

    As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3,4,5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling and sequential processing.

    One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/Disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows.

    Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if, instead of running DTExec, SQL, and SSAS on the same machine (with every machine running that configuration), we ran in sets of three machines, with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETLs we were able to process on the machines independently.

    Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently choose a server based on how "busy" it already is in terms of CPU/Memory/Disk utilization, but that would be a herculean effort, and seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it).

    So in summary, are we missing an obvious solution for this, and does anyone know of any tools (for free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term.)

    Ultimately VMWare's Distributed Resource Scheduler addresses this, as you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMWare to move the VMs around to balance out hardware usage. I'm definitely not against using VMWare to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer, by checking on resource utilization at the OS, SQL, and SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMWare is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great.

    Thanks guys,
    Jeff
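
    For what it's worth, the "queue & node" idea can be prototyped in very little code before committing to a product: keep a queue of ETL jobs, probe each node's load, and dispatch the next package to the least-busy node. A toy sketch of the dispatch loop (server names are placeholders, and the two stubs stand in for whatever load probe and remote DTExec invocation fit the environment):

        #!/usr/bin/env python3
        # Toy "queue & node" dispatcher: run each queued SSIS package on the
        # least-busy node. The probe and runner are deliberately left as stubs.
        import time

        NODES = ["etl01", "etl02", "etl03"]       # placeholder server names
        QUEUE = ["ClientA.dtsx", "ClientB.dtsx"]  # packages waiting to run

        def get_cpu_load(node):
            raise NotImplementedError  # e.g. query perfmon/WMI on the node

        def run_package(node, package):
            raise NotImplementedError  # e.g. invoke DTExec via an agent/WinRM

        while QUEUE:
            node = min(NODES, key=get_cpu_load)   # least-busy node wins
            run_package(node, QUEUE.pop(0))
            time.sleep(5)                         # let load numbers settle

    The hard part, as described above, isn't the loop - it's a load probe that anticipates how much memory DTExec, SQL, and SSAS will each claim once a package actually starts.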

  • How to diagnose cause, fix, or work around Adobe ActiveX / COM related error 0x80004005 programmatically

    - by Streamline
    I've built a C# .NET app that uses the Adobe ActiveX control to display a PDF. It relies on a couple of DLLs that get shipped with the application. These DLLs interact with the locally installed Adobe Acrobat or Adobe Acrobat Reader on the machine. This app is already being used by some customers and works great for nearly all users (I check that the local machine is running at least version 9 of either Acrobat or Reader already).

    I've found 3 cases where the app returns the error message "Error HRESULT E_FAIL has been returned from a call to a COM component" when trying to load (when the ActiveX control is loading). I've checked one of these users' machines and he has Acrobat 9 installed and is using it frequently with no problems. It does appear that Acrobat 7 and 8 were installed at one time, since there are entries for them in the registry along with Acrobat 9.

    I can't reproduce this problem locally, so I am not sure exactly which direction to go. The error at the top of the stacktrace is:

        System.Runtime.InteropServices.COMException (0x80004005): Error HRESULT E_FAIL has been returned from a call to a COM component.

    Some research into this error indicates it is a registry problem. Does anyone have a clue as to how to fix or work around this problem, or determine how to get to the core root of the problem? The full content of the error message is this:

        System.Runtime.InteropServices.COMException (0x80004005): Error HRESULT E_FAIL has been returned from a call to a COM component.
           at System.Windows.Forms.UnsafeNativeMethods.CoCreateInstance(Guid& clsid, Object punkOuter, Int32 context, Guid& iid)
           at System.Windows.Forms.AxHost.CreateWithoutLicense(Guid clsid)
           at System.Windows.Forms.AxHost.CreateWithLicense(String license, Guid clsid)
           at System.Windows.Forms.AxHost.CreateInstanceCore(Guid clsid)
           at System.Windows.Forms.AxHost.CreateInstance()
           at System.Windows.Forms.AxHost.GetOcxCreate()
           at System.Windows.Forms.AxHost.TransitionUpTo(Int32 state)
           at System.Windows.Forms.AxHost.CreateHandle()
           at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
           at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
           at System.Windows.Forms.AxHost.EndInit()
           at AcrobatChecker.Viewer.InitializeComponent()
           at AcrobatChecker.Viewer..ctor()
           at AcrobatChecker.Form1.btnViewer_Click(Object sender, EventArgs e)
           at System.Windows.Forms.Control.OnClick(EventArgs e)
           at System.Windows.Forms.Button.OnClick(EventArgs e)
           at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
           at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
           at System.Windows.Forms.Control.WndProc(Message& m)
           at System.Windows.Forms.ButtonBase.WndProc(Message& m)
           at System.Windows.Forms.Button.WndProc(Message& m)
           at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
           at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
           at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
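
    Since the failure is inside CoCreateInstance, one programmatic way to test the "registry problem" theory on an affected machine is to walk the control's registration chain - ProgID to CLSID to InprocServer32 - and confirm the registered server DLL exists. A sketch (assuming the control registers the AcroPDF.PDF ProgID, which may differ across Acrobat versions):

        #!/usr/bin/env python3
        # Walk ProgID -> CLSID -> InprocServer32 for the Adobe PDF ActiveX
        # control and check that the registered server DLL exists on disk.
        import os
        import winreg

        PROGID = "AcroPDF.PDF"  # assumed ProgID for the Adobe Reader control

        try:
            with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, PROGID + r"\CLSID") as k:
                clsid = winreg.QueryValue(k, None)
            print("CLSID:", clsid)
            with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT,
                                r"CLSID\%s\InprocServer32" % clsid) as k:
                dll = winreg.QueryValue(k, None)
            print("Server:", dll,
                  "(exists)" if os.path.exists(dll) else "(MISSING)")
        except OSError as e:
            print("Registration broken:", e)

    Leftover Acrobat 7/8 entries pointing the CLSID at a DLL that is no longer on disk could produce exactly this E_FAIL; if every hop checks out, the failure is coming from inside the control's own initialization instead.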

  • Very slow compile times on Visual Studio

    - by johnc
    We are getting very slow compile times, which can take upwards of 20+ minutes on dual-core 2GHz, 2GB RAM machines. A lot of this is due to the size of our solution, which has grown to 70+ projects, as well as VSS, which is a bottleneck in itself when you have a lot of files. (Swapping out VSS is not an option unfortunately, so I don't want this to descend into a VSS bash.)

    We are looking at combining projects (not nice, as we like the separation of concerns, but it is a good opportunity to refactor away some dead wood). We are also looking at having multiple solutions to achieve greater separation of concerns and quicker compile times for each element of the application. This, I can see, will become a DLL hell as we try to keep things in sync.

    I am interested to know how other teams have dealt with this scaling issue. What do you do when your code base reaches a critical mass and you are wasting half the day watching the status bar deliver compile messages?

    UPDATE: Apologies, I neglected to mention this is a C# solution. Thanks for all the C++ suggestions, but it's been a few years since I've had to worry about headers. At a distance I'd say I miss C++, but I'm not sure I want to go back.

    EDIT: Nice suggestions that have helped so far (not saying there aren't other nice suggestions below, just what has helped):

    - New 3GHz laptop - the power of lost utilization works wonders when whinging to management
    - Disable Anti-Virus during compile
    - 'Disconnecting' from VSS (actually the network) during compile - I may get us to remove VS-VSS integration altogether and stick to using the VSS UI

    Still not rip-snorting through a compile, but every bit helps. Orion did mention in a comment that generics may have a play also. From my tests there does appear to be a minimal performance hit, but not high enough to be sure - compile times can be inconsistent due to disc activity. Due to time limitations, my tests didn't include as many generics, or as much code, as would appear in the live system, so that may accumulate. I wouldn't avoid using generics where they are supposed to be used just for compile-time performance.

    WORKAROUND: We are testing the practice of building new areas of the application in new solutions, importing the latest DLLs as required, then integrating them into the larger solution when we are happy with them. We may also do the same to existing code by creating temporary solutions that just encapsulate the areas we need to work on, and throwing them away after reintegrating the code. We need to weigh up the time it will take to reintegrate this code against the time we gain by not having Rip Van Winkle-like experiences with rapid recompiling during development.

  • Can't get gitosis and ssh to play nice on cygwin

    - by Noel Kennedy
    I have followed this guide to setting up gitosis on a Windows 2003 server via cygwin. I have now got to a point where it largely works: I can clone, pull and push. The problem I am having is that I think I have not got the ssh bit right at all.

    When I connect via msysgit from machines and accounts where I have not created or uploaded ssh keys, it works. Every time I clone, pull or push I get a password challenge for the 'git' user running on the server, but basically I can execute git commands. When I connect with users with an ssh key in the ~/.ssh folder, I don't get the password challenge and instead I get a permissions failure:

        DEBUG:gitosis.serve.main:Got command "git-upload-pack '/cris.git'"
        DEBUG:gitosis.access.haveAccess:Access check for 'teamcity@hhit24808' as 'writable' on 'cris.git'...
        DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'cris.git', new value 'cris'
        DEBUG:gitosis.access.haveAccess:Access check for 'teamcity@hhit24808' as 'writeable' on 'cris.git'...
        DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'cris.git', new value 'cris'
        DEBUG:gitosis.access.haveAccess:Access check for 'teamcity@hhit24808' as 'readonly' on 'cris.git'...
        DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'cris.git', new value 'cris'
        ERROR:gitosis.serve.main:Repository read access denied
        fatal: The remote end hung up unexpectedly

    I have uploaded the public rsa key into the key_dir folder. Here is my conf file:

        [gitosis]
        loglevel = DEBUG

        [group gitosis-admin]
        writable = gitosis-admin
        members = myemail@mydomain

        [group cris-developers]
        members = myemail@mydomain TeamCity@HHIT24808
        writable = cris

    If it matters, I have generated a key without a passphrase, as I believe this is necessary to enable ssh for automated scripts. When I use keys with a passphrase, I get challenged for the phrase but then get the same permissions problem. I have tried 'writable' and 'writeable' for permissions. Help!!

    Update 1: When I try to clone a non-existent repo, I get the same error message - coincidence?

    Update 2: Weird, I've got one machine and one login working. It seems to be something to do with the syntax for addressing git over ssh. This now works on one machine for one login:

        git clone git@servername:cris.git

    The same command fails for a user on another machine without an uploaded ssh key. But this command works (after being challenged for git@servername's password):

        git clone git@servername:/home/git/repositories/cris.git

    Neither command works on a 2nd login whose ssh key has been uploaded.

  • WPF Animation FPS vs. CPU usage - Am I expecting too much?

    - by Cory Charlton
    Working on a screen saver for my wife, http://cchearts.codeplex.com/, and while I've been able to improve FPS on lower-end machines (switch from Path to StreamGeometry, use DrawingVisual instead of UserControl, etc.), the CPU usage still seems very high. Here are some numbers from a few 5-minute sampling periods:

    - ~60FPS, 35% average CPU on Core 2 Duo T7500 @ 2.2GHz, 3GB RAM, NVIDIA Quadro NVS 140M (128MB), Vista [my dev laptop]
    - ~40FPS, 50% average CPU on Pentium D @ 3.4GHz, 1.5GB RAM, Standard VGA Graphics Adapter (unknown), 2003 Server [a crappy desktop]

    I can understand the lower frame rate and higher CPU usage on the crappy desktop, but it still seems pretty high, and 35% on my dev laptop seems high as well. I'd really like to analyze the application to get more details, but I'm having issues there as well, so I'm wondering if I'm doing something wrong (never profiled WPF before).

    WPF Performance Suite:

        Process Launch Error
        Unable to attach to process: CCHearts.exe
        Do you want to kill it?

    This error message occurs when I click cancel after attempting launch. If I don't click cancel, it sits there idle, I guess waiting to attach.

    Performance Explorer:

        Could not launch C:\Projects2\CC.Hearts\CC.Hearts\bin\Debug (USEVISUAL)\CCHearts.exe.
        Previous attempt to profile the application finished unsuccessfully. Please restart the application.

    Output Window from Performance:

        Profiling started.
        Profiling process ID 5360 (CCHearts).
        Process ID 5360 has exited.
        Data written to C:\Projects2\CC.Hearts\CCHearts100608.vsp.
        Profiling finished.
        PRF0025: No data was collected.
        Profiling complete.

    So I'm stuck wanting to improve performance but have no concrete way to determine where the bottleneck is. I have been relatively successful throwing darts at this point, but I'm beyond that now :)

    PS: The screensaver is hosted at CodePlex if you want to look at the source and missed the link above.

    Edit: My RenderOptions darts...

        // NOTE: Grasping at straws here ;-)
        RenderOptions.SetBitmapScalingMode(newHeart, BitmapScalingMode.LowQuality);
        RenderOptions.SetCachingHint(newHeart, CachingHint.Cache);
        RenderOptions.SetEdgeMode(newHeart, EdgeMode.Aliased);

    I threw those in a little while back and didn't see much difference (not sure if the bitmap scaling even comes into play). Really wish I could get profiling working to know where I should try to optimize. For now I assume there is some overhead in creating a new HeartVisual and the DrawingVisual contained inside. Maybe if I reset and reused the hearts (tossed them in a queue once they completed or something) I'd see an improvement. Shrug. Throwing darts while blindfolded is always fun.

  • Word Spell Check pops up hidden and "freezes" my App

    - by Refracted Paladin
    I am using Word's spell check in my in-house WinForms app. My clients are all XP machines with Office 2007, and randomly the spell-check suggestion box pops up behind the app and makes everything "appear" frozen, as you cannot get at it.

    Suggestions? What do other people do to work around this or stop it altogether? Thanks.

    Below is my code, for reference, though I am doubtful that this has anything to do with my code, but I'll take anything.

        public class SpellCheckers
        {
            public string CheckSpelling(string text)
            {
                Word.Application app = new Word.Application();
                object nullobj = Missing.Value;
                object template = Missing.Value;
                object newTemplate = Missing.Value;
                object documentType = Missing.Value;
                object visible = false;
                object optional = Missing.Value;
                object savechanges = false;

                app.ShowMe();
                Word._Document doc = app.Documents.Add(ref template, ref newTemplate, ref documentType, ref visible);
                doc.Words.First.InsertBefore(text);
                Word.ProofreadingErrors errors = doc.SpellingErrors;
                var ecount = errors.Count;
                doc.CheckSpelling(ref optional, ref optional, ref optional, ref optional, ref optional, ref optional,
                                  ref optional, ref optional, ref optional, ref optional, ref optional, ref optional);
                object first = 0;
                object last = doc.Characters.Count - 1;
                var results = doc.Range(ref first, ref last).Text;

                doc.Close(ref savechanges, ref nullobj, ref nullobj);
                app.Quit(ref savechanges, ref nullobj, ref nullobj);
                Marshal.ReleaseComObject(doc);
                Marshal.ReleaseComObject(app);
                Marshal.ReleaseComObject(errors);

                return results;
            }
        }

    And I call it from my WinForms app like so --

        public static void SpellCheckControl(Control control)
        {
            if (IsWord2007Available())
            {
                if (control.HasChildren)
                {
                    foreach (Control ctrl in control.Controls)
                    {
                        SpellCheckControl(ctrl);
                    }
                }

                if (IsValidSpellCheckControl(control))
                {
                    if (control.Text != String.Empty)
                    {
                        control.BackColor = Color.FromArgb(180, 215, 195);
                        control.Text = Spelling.CheckSpelling(control.Text);
                        control.Text = control.Text.Replace("\r", "\r\n");
                        control.ResetBackColor();
                    }
                }
            }
        }

  • How to troubleshoot a 'System.Management.Automation.CmdletInvocationException'

    - by JamesD
    Does anyone know how best to determine the specific underlying cause of this exception?

    Consider a WCF service that is supposed to use PowerShell 2.0 remoting to execute MSBuild on remote machines. In both cases the scripting environments are being called in-process (via C# for PowerShell and via PowerShell for MSBuild), rather than 'shelling out' - this was a specific design decision to avoid command-line hell, as well as to enable passing actual objects into the PowerShell script.

    The PowerShell script that calls MSBuild is shown below:

        function Run-MSBuild
        {
            [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Build.Engine")

            $engine = New-Object Microsoft.Build.BuildEngine.Engine
            $engine.BinPath = "C:\Windows\Microsoft.NET\Framework\v3.5"

            $project = New-Object Microsoft.Build.BuildEngine.Project($engine, "3.5")
            $project.Load("deploy.targets")
            $project.InitialTargets = "DoStuff"

            #
            # Set some initial Properties & Items
            #

            # Optionally setup some loggers (have also tried it without any loggers)
            $consoleLogger = New-Object Microsoft.Build.BuildEngine.ConsoleLogger
            $engine.RegisterLogger($consoleLogger)

            $fileLogger = New-Object Microsoft.Build.BuildEngine.FileLogger
            $fileLogger.Parameters = "verbosity=diagnostic"
            $engine.RegisterLogger($fileLogger)

            # Run the build - this is the line that throws a CmdletInvocationException
            $result = $project.Build()

            $engine.Shutdown()
        }

    When running the above script from a PS command prompt it all works fine. However, as soon as the script is executed from C# it fails with the above exception. The C# code being used to call PowerShell is shown below (remoting functionality removed for simplicity's sake):

        // Build the DTO object that will be passed to Powershell
        var dto = SetupDTO();

        RunspaceConfiguration runspaceConfig = RunspaceConfiguration.Create();
        using (Runspace runspace = RunspaceFactory.CreateRunspace(runspaceConfig))
        {
            runspace.Open();

            IList errors;
            using (var scriptInvoker = new RunspaceInvoke(runspace))
            {
                // The Powershell script lives in a file that gets compiled as an embedded resource
                TextReader tr = new StreamReader(Assembly.GetExecutingAssembly().GetManifestResourceStream("MyScriptResource"));
                string script = tr.ReadToEnd();

                // Load the script into the Runspace
                scriptInvoker.Invoke(script);

                // Call the function defined in the script, passing the DTO as an input object
                var psResults = scriptInvoker.Invoke("$input | Run-MSBuild", dto, out errors);
            }
        }

    Assuming that the issue was related to MSBuild outputting something that the PowerShell runspace can't cope with, I have also tried the following variations to the second .Invoke() call:

        var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-String", dto, out errors);
        var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-Null", dto, out errors);
        var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-String");
        var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-String");
        var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-Null");

    I've also looked at using a custom PSHost (based on this sample: http://blogs.msdn.com/daiken/archive/2007/06/22/hosting-windows-powershell-sample-code.aspx), but during debugging I was unable to see any 'interesting' calls to it being made. Do the great and the good of Stack Overflow have any insight that might save my sanity?

  • Windows Azure: Server Error, 404 - File or directory not found

    - by veda
    I want to upload some files of size 35MB onto the blob container. I have coded for splitting the data into blocks, uploading them onto the blob container, and forming a blob using PUT. I tested the code for some files of size 2MB or so... It worked well. But when I tried it for a large MB file, it's giving me this error:

        Server Error
        404 - File or directory not found.
        The resource you are looking for might have been removed, had its name changed,
        or is temporarily unavailable.

    When I tried it for files of size 6MB, it gives me this error:

        Server Error in '/' Application.

        Runtime Error

        Description: An application error occurred on the server. The current custom error
        settings for this application prevent the details of the application error from being
        viewed remotely (for security reasons). It could, however, be viewed by browsers
        running on the local server machine.

        Details: To enable the details of this specific error message to be viewable on
        remote machines, please create a <customErrors> tag within a "web.config"
        configuration file located in the root directory of the current web application.
        This <customErrors> tag should then have its "mode" attribute set to "Off".

            <!-- Web.Config Configuration File -->
            <configuration>
              <system.web>
                <customErrors mode="Off"/>
              </system.web>
            </configuration>

        Notes: The current error page you are seeing can be replaced by a custom error page
        by modifying the "defaultRedirect" attribute of the application's <customErrors>
        configuration tag to point to a custom error page URL.

            <!-- Web.Config Configuration File -->
            <configuration>
              <system.web>
                <customErrors mode="RemoteOnly" defaultRedirect="mycustompage.htm"/>
              </system.web>
            </configuration>

    Can anyone tell me how to solve this?
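
    The size thresholds are suggestive: ASP.NET's default maxRequestLength is 4096 KB, which would explain 2MB uploads succeeding while 6MB ones throw a runtime error, and IIS 7 request filtering rejects request bodies over maxAllowedContentLength (default 30000000 bytes, about 28.6MB) with a 404 - matching the 35MB case. If the web role is posting whole files through ASP.NET rather than streaming 4MB blocks straight to blob storage, raising both limits in web.config is worth a try (the values below are illustrative):

        <configuration>
          <system.web>
            <!-- ASP.NET request limit, in KB (default 4096 = 4MB) -->
            <httpRuntime maxRequestLength="102400" />
          </system.web>
          <system.webServer>
            <security>
              <requestFiltering>
                <!-- IIS7 limit, in bytes (default 30000000) -->
                <requestLimits maxAllowedContentLength="104857600" />
              </requestFiltering>
            </security>
          </system.webServer>
        </configuration>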

    Read the article

  • javax.servlet.ServletException - how could I get to the cause?

    - by Michal
    Hi, I'm getting a very strange error while opening one of the pages in my web app. The application is built on Seam 2.2 and uses JSF (RichFaces) as the view technology. I run it on Tomcat 6. The error I'm describing doesn't occur on my machine (Mac OS X), but it does on my client's dev machines (Windows) and on the server (Linux Debian). I'm sure I'm running the same version on each system, and I have tried connecting to the same database. In the logs everything looks fine - each JSF phase executes normally, and after the last one the request starts processing for the Seam debug page. This is the stack trace I see on the debug page (nothing is logged):

Exception during request processing:
Caused by javax.servlet.ServletException with message: "Servlet execution threw an exception"

org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:313)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:83)
org.jboss.seam.web.IdentityFilter.doFilter(IdentityFilter.java:40)
org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
org.jboss.seam.web.MultipartFilter.doFilter(MultipartFilter.java:90)
org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
org.jboss.seam.web.ExceptionFilter.doFilter(ExceptionFilter.java:64)
org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
org.jboss.seam.web.RedirectFilter.doFilter(RedirectFilter.java:45)
org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
org.ajax4jsf.webapp.BaseXMLFilter.doXmlFilter(BaseXMLFilter.java:178)
org.ajax4jsf.webapp.BaseFilter.handleRequest(BaseFilter.java:290)
org.ajax4jsf.webapp.BaseFilter.processUploadsAndHandleRequest(BaseFilter.java:388)
org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:515)
org.jboss.seam.web.Ajax4jsfFilter.doFilter(Ajax4jsfFilter.java:56)
org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
org.jboss.seam.web.LoggingFilter.doFilter(LoggingFilter.java:60)
org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
org.jboss.seam.web.HotDeployFilter.doFilter(HotDeployFilter.java:53)
org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
org.jboss.seam.servlet.SeamFilter.doFilter(SeamFilter.java:158)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
pl.mgibowski.alterium.util.LoggingFilter.doFilter(LoggingFilter.java:18)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:465)
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
java.lang.Thread.run(Thread.java:619)

An exception without any cause... I was trying to catch the exception with my custom filter (the LoggingFilter.java you can see in the stack trace), using this code:

try {
    filterChain.doFilter(servletRequest, servletResponse);
} catch (Throwable e) {
    e.printStackTrace();
    System.out.println("Stack trace:");
    // note: printing the array directly only prints its object reference;
    // Arrays.toString shows the actual frames
    System.out.println(java.util.Arrays.toString(e.getStackTrace()));
    System.out.println("Cause:");
    // a ServletException (Servlet 2.x) stores the wrapped throwable in
    // rootCause, so getCause() alone can come back null
    if (e instanceof javax.servlet.ServletException)
        System.out.println(((javax.servlet.ServletException) e).getRootCause());
    System.out.println(e.getCause());
}

But it doesn't catch anything. Line 18 from the stack trace is this one:

filterChain.doFilter(servletRequest, servletResponse);

Nothing gets caught by the try block. Does anybody have any ideas how I could get closer to the real cause?

    Read the article

  • "Bad binary signature" in ASP.NET MVC application

    - by David M
    We are getting the error above on some pages of an ASP.NET MVC application when it is deployed to a 64-bit Windows 2008 server box. It works fine on our development machines, though these are 32-bit XP. Just wondered if anyone had encountered this before and has any suggestions? Details as follows:

Bad binary signature. (Exception from HRESULT: 0x80131192)
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Runtime.InteropServices.COMException: Bad binary signature. (Exception from HRESULT: 0x80131192)

All projects are set to compile for Any CPU and are compiled in Release mode. The ASP.NET site is precompiled, and the precompilation is done on a 64-bit Windows 2008 TeamCity build agent. Thanks in advance.

EDIT: We're still plagued by this. I have looked at all the binaries in the website's bin directory using corflags.exe. None has the 32BIT flag set, and all have a CorFlags value of 9 except for Antlr3.Runtime.dll, which has a value of 1. The problem only affects certain pages, and it seems to be those which use FluentValidation (including the FluentValidation.Mvc and FluentValidation.xValIntegration assemblies). None of these shows anything out of the ordinary when inspected with corflags.exe, and there are no odd-looking dependencies revealed by ildasm. When built locally (32-bit Windows XP) the site deploys and runs fine. When built on the build agents (64-bit Windows 2008 Server) the site displays these errors. The site runs in Integrated Pipeline mode and is not set to 32-bit. The stack trace is:

[COMException (0x80131192): Bad binary signature. (Exception from HRESULT: 0x80131192)]
ASP.views_user_newinternal_aspx.__RenderContent2(HtmlTextWriter __w, Control parameterContainer) in e:\TeamCity\buildAgent\work\605ee6b4a5d1dd36\...Admin.Mvc\Views\User\NewInternal.aspx:53
System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +115
ASP.views_shared_site_master.__Render__control1(HtmlTextWriter __w, Control parameterContainer) in e:\TeamCity\buildAgent\work\605ee6b4a5d1dd36\...Admin.Mvc\Views\Shared\Site.Master:26
System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +115
System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +240
System.Web.UI.Page.Render(HtmlTextWriter writer) +38
System.Web.Mvc.ViewPage.Render(HtmlTextWriter writer) +94
System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +4240
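    For anyone reproducing the corflags.exe inspection described above, a typical check looks like this (illustrative output, not taken from the original post; a CorFlags value of 9 decodes to ILONLY + Signed, and 1 to ILONLY alone, so neither implies a 32-bit-only image):

corflags bin\Antlr3.Runtime.dll

Version   : v2.0.50727
CLR Header: 2.5
PE        : PE32
CorFlags  : 1
ILONLY    : 1
32BIT     : 0
Signed    : 0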

    Read the article

  • Installing Fabric On Windows (Error No Module Called Readline)

    - by Jon
    I'm trying to use the Fabric 0.1.1 deploy tool (http://docs.fabfile.org/) on Windows and we're running into an issue with the readline module. I've been through various threads but can't seem to solve the issue. It's important because we can't deploy applications from Windows-based machines.

C:\Documents and Settings\dev\Desktop\deploy>fab
Traceback (most recent call last):
  File "C:\python\Scripts\fab-script.py", line 8, in <module>
    load_entry_point('fabric==0.1.1', 'console_scripts', 'fab')()
  File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\pkg_resources.py", line 277, in load_entry_point
  File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\pkg_resources.py", line 2180, in load_entry_point
  File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\pkg_resources.py", line 1913, in load
  File "build\bdist.win32\egg\fabric.py", line 25, in <module>
ImportError: No module named readline

Trying to install the module results in:

easy_install readline
Searching for readline
Reading http://pypi.python.org/simple/readline/
Reading http://www.python.org/
Best match: readline 2.6.4
Downloading http://pypi.python.org/packages/source/r/readline/readline-2.6.4.tar.gz#md5=7568e8b78f383443ba57c9afec6f4285
Processing readline-2.6.4.tar.gz
Running readline-2.6.4\setup.py -q bdist_egg --dist-dir c:\docume~1\ji81b9~1.che\locals~1\temp\easy_install-pzkz1a\readline-2.6.4\egg-dist-tmp-szs2ps
Traceback (most recent call last):
  File "C:\python\Scripts\easy_install-script.py", line 8, in <module>
    load_entry_point('setuptools==0.6c9', 'console_scripts', 'easy_install')()
  File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 1671, in main
  File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 1659, in with_ei_usage
  File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 1675, in <lambda>
  File "c:\python\lib\distutils\core.py", line 152, in setup
    dist.run_commands()
  File "c:\python\lib\distutils\dist.py", line 975, in run_commands
    self.run_command(cmd)
  File "c:\python\lib\distutils\dist.py", line 995, in run_command
    cmd_obj.run()
  File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 211, in run
  File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 446, in easy_install
  File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 476, in install_item
  File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 655, in install_eggs
  File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 930, in build_and_install
  File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 919, in run_setup
  File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\sandbox.py", line 27, in run_setup
  File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\sandbox.py", line 63, in run
  File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\sandbox.py", line 29, in <lambda>
  File "setup.py", line 93, in <module>
AttributeError: 'module' object has no attribute 'symlink'

Has anybody solved this issue, or can anybody suggest a workaround?
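    For context (a suggested explanation, not from the original post): the readline package on PyPI wraps GNU readline and appears to assume a Unix platform - its setup.py calls os.symlink, which doesn't exist on Python under Windows, hence the AttributeError above. The usual substitute on Windows is the pyreadline package, which provides a compatible readline module:

easy_install pyreadline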

    Read the article

  • How can I reliably check client identity whilst making DCOM calls to a C# .Net 3.5 Server?

    - by pionium
    Hi, I have an old Win32 C++ DCOM server that I am rewriting in C# on .NET 3.5. The client applications sit on remote XP machines and are also written in C++. These clients must remain unchanged, hence I must implement the interfaces on new .NET objects. This has been done and is working successfully as far as the interfaces are concerned: all of the calls are correctly being made from the old clients to the new .NET objects. However, I'm having problems obtaining the identity of the calling user from the DCOM client. In order to try to identify the user who instigated the DCOM call, I have the following code on the server:

[DllImport("ole32.dll")]
static extern int CoImpersonateClient();

[DllImport("ole32.dll")]
static extern int CoRevertToSelf();

private string CallingUser
{
    get
    {
        string sCallingUser = null;
        if (CoImpersonateClient() == 0)
        {
            WindowsPrincipal wp = System.Threading.Thread.CurrentPrincipal as WindowsPrincipal;
            if (wp != null)
            {
                WindowsIdentity wi = wp.Identity as WindowsIdentity;
                if (wi != null && !string.IsNullOrEmpty(wi.Name))
                    sCallingUser = wi.Name;
            }
            if (CoRevertToSelf() != 0)
                ReportWin32Error("CoRevertToSelf");
        }
        else
            ReportWin32Error("CoImpersonateClient");
        return sCallingUser;
    }
}

private static void ReportWin32Error(string sFailingCall)
{
    Win32Exception ex = new Win32Exception();
    Logger.Write("Call to " + sFailingCall + " FAILED: " + ex.Message);
}

When I read the CallingUser property, the value returned the first few times is correct and the correct user name is identified. However, after three or four different users have successfully made calls (it varies, so I can't be more specific), further users are identified as users who made earlier calls. What I have noticed is that the first few users each have their DCOM calls handled on their own thread (i.e. all calls from a particular client are handled by a single unique thread), then subsequent users are handled by the same threads as the earlier users, and after the call to CoImpersonateClient() the CurrentPrincipal matches that of the initial user of that thread. To illustrate:

User Tom makes DCOM calls which are handled by thread 1 (CurrentPrincipal correctly identifies Tom)
User Dick makes DCOM calls which are handled by thread 2 (CurrentPrincipal correctly identifies Dick)
User Harry makes DCOM calls which are handled by thread 3 (CurrentPrincipal correctly identifies Harry)
User Bob makes DCOM calls which are handled by thread 3 (CurrentPrincipal incorrectly identifies him as Harry)

As you can see in this illustration, calls from clients Harry and Bob are being handled on thread 3, and the server is identifying the calling client as Harry. Is there something I am doing wrong? Are there any caveats or restrictions on using impersonation in this way? Is there a better or different way that I can reliably achieve what I am trying to do? All help would be greatly appreciated. Regards, Andrew
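    One avenue worth trying (a hedged sketch, not a confirmed fix): Thread.CurrentPrincipal is computed once per managed thread and then cached, so on a reused RPC thread it can retain the principal from that thread's first caller - which matches the symptoms above. WindowsIdentity.GetCurrent() instead reads the access token actually on the thread at the moment of the call, which is exactly what CoImpersonateClient changes:

if (CoImpersonateClient() == 0)
{
    // Read the impersonated token directly rather than the cached principal
    using (WindowsIdentity wi = WindowsIdentity.GetCurrent())
    {
        if (!string.IsNullOrEmpty(wi.Name))
            sCallingUser = wi.Name;
    }
    if (CoRevertToSelf() != 0)
        ReportWin32Error("CoRevertToSelf");
}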

    Read the article

  • Managed (.net) library with html-tidy like functionality?

    - by Eamon Nerbonne
    Does anybody know of an HTML cleaner for .NET that can parse HTML and (for instance) convert it to a more machine-friendly format such as XHTML? I've tried the HTML Agility Pack, but that fails to correctly parse even fairly simple examples. To give an example of HTML that should be parsed correctly:

<html><body>
<ul><li>TestElem1
<li>TestElem2
<li>TestElem3 List:
<ul><li>Nested1
<li>Nested2</li>
<li>Nested3
</ul>
<li>TestElem4
</ul>
<p>paragraph 1
<p>paragraph 2
<p>paragraph 3
</body></html>

li tags don't need to be closed (see the spec), and neither do p tags. In other words, the above sample should be parsed as:

<html><body>
<ul><li>TestElem1</li>
<li>TestElem2</li>
<li>TestElem3 List:
<ul><li>Nested1</li>
<li>Nested2</li>
<li>Nested3</li>
</ul></li>
<li>TestElem4</li>
</ul>
<p>paragraph 1</p>
<p>paragraph 2</p>
<p>paragraph 3</p>
</body></html>

Since the aim is to use the library on various machines, it's a big disadvantage to need to fall back to native code (such as a wrapper around HTML Tidy), which would require extra deployment hassle and sacrifice platform independence. Any suggestions? To recap, I'm looking for:

An HTML cleaner a la HTML Tidy
Must be able to deal with real-world HTML, not just XHTML - at the very least, correctly reading valid HTML 4
Must be able to convert to a more easily processable XML format
Should be a purely managed app

    Read the article

  • get local groups and not the primary groups for a domain user

    - by user175084
    I have some code to get the groups a user belongs to:

try
{
    DirectoryEntry adRoot = new DirectoryEntry(string.Format("WinNT://{0}", Environment.UserDomainName));
    DirectoryEntry user = adRoot.Children.Find(completeUserName, "User");
    object obGroups = user.Invoke("Groups");
    foreach (object ob in (IEnumerable)obGroups)
    {
        // Create an object for each group.
        DirectoryEntry obGpEntry = new DirectoryEntry(ob);
        listOfMyWindowsGroups.Add(obGpEntry.Name);
    }
    return true;
}
catch (Exception ex)
{
    new GUIUtility().LogMessageToFile("Error in getting User MachineGroups = " + ex);
    return false;
}

The above code works fine when I have to find the groups of a local user, but for a domain user it returns the value "Domain Users", which is kind of weird, as the user is a part of two local groups. Please can someone help in solving this mystery? Thanks.

Research: I did some digging and found that I am being returned the primary group of the domain user, called the "Domain Users" group, but what I actually want is the groups on the local machine that the domain user is a part of. I cannot get that. Any suggestions?

Another attempt, using LDAP:

string domain = Environment.UserDomainName;
DirectoryEntry DE = new DirectoryEntry("LDAP://" + domain, null, null, AuthenticationTypes.Secure);
DirectorySearcher search = new DirectorySearcher();
search.SearchRoot = DE;
search.Filter = "(SAMAccountName=" + completeUserName + ")"; // Searches Active Directory for the login name
search.PropertiesToLoad.Add("displayName");

// Once found, get a list of groups
try
{
    SearchResult result = search.FindOne(); // Grab the record and assign it to result
    if (result != null)
    {
        DirectoryEntry theUser = result.GetDirectoryEntry();
        theUser.RefreshCache(new string[] { "tokenGroups" });
        foreach (byte[] resultBytes in theUser.Properties["tokenGroups"])
        {
            System.Security.Principal.SecurityIdentifier mySID = new System.Security.Principal.SecurityIdentifier(resultBytes, 0);
            DirectorySearcher sidSearcher = new DirectorySearcher();
            sidSearcher.SearchRoot = DE;
            sidSearcher.Filter = "(objectSid=" + mySID.Value + ")";
            sidSearcher.PropertiesToLoad.Add("distinguishedName");
            SearchResult sidResult = sidSearcher.FindOne();
            if (sidResult != null)
            {
                listOfMyWindowsGroups.Add((string)sidResult.Properties["distinguishedName"][0]);
            }
        }
    }
    else
    {
        new GUIUtility().LogMessageToFile("no user found");
    }
    return true;
}
catch (Exception ex)
{
    new GUIUtility().LogMessageToFile("Error obtaining group names: " + ex.Message + " Please contact your administrator."); // If an error occurs, report it to the user.
    return false;
}

This works too, but I get the same result, "Domain Users". Please can someone tell me how to get the local machine groups?
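    A minimal sketch of one possible approach (an illustration, not from the original question): enumerate the local machine's groups through the WinNT provider and check each group's member list for the domain user. The completeUserName and listOfMyWindowsGroups variables are carried over from the snippets above.

DirectoryEntry machine = new DirectoryEntry("WinNT://" + Environment.MachineName + ",computer");
foreach (DirectoryEntry group in machine.Children)
{
    if (group.SchemaClassName != "Group")
        continue;
    // IADsGroup.Members lists both local and domain accounts in the group
    foreach (object member in (IEnumerable)group.Invoke("Members"))
    {
        using (DirectoryEntry memberEntry = new DirectoryEntry(member))
        {
            // memberEntry.Path looks like WinNT://DOMAIN/username for domain accounts
            if (string.Equals(memberEntry.Name, completeUserName, StringComparison.OrdinalIgnoreCase))
                listOfMyWindowsGroups.Add(group.Name);
        }
    }
}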

    Read the article

  • iPhone SDK 3/4 App will not run on an iPhone 2.x device even with deployment target set to 2.0!

    - by MarqueIV
    Ok... I know about the difference between the base/active SDKs and the deployment target. I have my base SDK set to 4.0 and the deployment target set to 2.0. I am not using any APIs post 2.x, conditional or otherwise. Since I can't debug on a 2.x device, after building the app I use the iPhone Configuration Utility to install it on the device, which it does just fine. Problem is, it doesn't run! I just get a blank screen. The main window never comes up! Now before you ask... I had this same problem with the iPhone SDK 3.x. I upgraded to 4.x hoping it would be solved. It wasn't. Yes, the provisioning profile is installed. (I couldn't install the app if it wasn't.) This same compiled app works fine on 3.x devices. Same with 4.x devices. Just not 2.x devices. Again, I am not using any post-2.x SDKs. To prove this, I created a brand-new, window-based app from the 'New Project' dialog, and the only changes I made were the background color of the window (to prove the XIB loaded) and setting the deployment target to 2.0. (It's still compiled against the 4.x SDK though.) Again, it runs fine on 3.x or 4.x devices, but shows only a black, blank screen on 2.x devices. I've tried this on three separate 2.x devices, including one freshly restored. I've used three separate dev machines (a MacBook Pro with the 3.x SDK, a MacBook Pro with the 4.x SDK and a Mac Pro with the 3.x SDK). Same result every time. I am stumped. The fact that even an unmodified project doesn't run really has me confused. Could it be the XIB file? Did they change the format from 2.x to something newer in the 3.x SDK? If so, how do I set it back to 2.x? (Again, this is just a complete guess.) But I'm really stumped! Mark

    Read the article

< Previous Page | 210 211 212 213 214 215 216 217 218 219 220 221  | Next Page >