Search Results



  • OpenSwan IPSec phase #2 complications

    - by XXL
    Phase #1 (IKE) succeeds without any problems (verified at the target host). Phase #2 (IPsec), however, fails at some point, apparently due to a misconfiguration on localhost. This should be an IPsec-only connection. I am using OpenSwan on Debian. The error log reads the following (the actual IP address of the remote endpoint has been modified):

        pluto[30868]: "x" #2: initiating Quick Mode PSK+ENCRYPT+PFS+UP+IKEv2ALLOW+SAREFTRACK {using isakmp#1 msgid:5ece82ee proposal=AES(12)_256-SHA1(2)_160 pfsgroup=OAKLEY_GROUP_DH22}
        pluto[30868]: "x" #1: ignoring informational payload, type NO_PROPOSAL_CHOSEN msgid=00000000
        pluto[30868]: "x" #1: received and ignored informational message
        pluto[30868]: "x" #1: the peer proposed: 0.0.0.0/0:0/0 - 0.0.0.0/0:0/0
        pluto[30868]: "x" #3: responding to Quick Mode proposal {msgid:a4f5a81c}
        pluto[30868]: "x" #3: us: 192.168.1.76<192.168.1.76>[+S=C]
        pluto[30868]: "x" #3: them: 222.222.222.222<222.222.222.222>[+S=C]===10.196.0.0/17
        pluto[30868]: "x" #3: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1
        pluto[30868]: "x" #3: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2
        pluto[30868]: "x" #1: ignoring informational payload, type NO_PROPOSAL_CHOSEN msgid=00000000
        pluto[30868]: "x" #1: received and ignored informational message
        pluto[30868]: "x" #3: next payload type of ISAKMP Hash Payload has an unknown value: 97
        pluto[30868]: "x" #3: malformed payload in packet
        pluto[30868]: | payload malformed after IV

    I am behind NAT and this is all coming from wlan2. Here are the routing details:

        default via 192.168.1.254 dev wlan2 proto static
        169.254.0.0/16 dev wlan2 scope link metric 1000
        192.168.1.0/24 dev wlan2 proto kernel scope link src 192.168.1.76 metric 2

    Output of ipsec verify:

        Checking your system to see if IPsec got installed and started correctly:
        Version check and ipsec on-path                       [OK]
        Linux Openswan U2.6.37/K3.2.0-24-generic (netkey)
        Checking for IPsec support in kernel                  [OK]
        SAref kernel support                                  [N/A]
        NETKEY: Testing XFRM related proc values              [OK] [OK] [OK]
        Checking that pluto is running                        [OK]
        Pluto listening for IKE on udp 500                    [OK]
        Pluto listening for NAT-T on udp 4500                 [OK]
        Two or more interfaces found, checking IP forwarding  [OK]
        Checking NAT and MASQUERADEing                        [OK]
        Checking for 'ip' command                             [OK]
        Checking /bin/sh is not /bin/dash                     [WARNING]
        Checking for 'iptables' command                       [OK]
        Opportunistic Encryption Support                      [DISABLED]

    This is what happens when I run ipsec auto --up x:

        104 "x" #1: STATE_MAIN_I1: initiate
        003 "x" #1: received Vendor ID payload [RFC 3947] method set to=109
        106 "x" #1: STATE_MAIN_I2: sent MI2, expecting MR2
        003 "x" #1: received Vendor ID payload [Cisco-Unity]
        003 "x" #1: received Vendor ID payload [Dead Peer Detection]
        003 "x" #1: ignoring unknown Vendor ID payload [502099ff84bd4373039074cf56649aad]
        003 "x" #1: received Vendor ID payload [XAUTH]
        003 "x" #1: NAT-Traversal: Result using RFC 3947 (NAT-Traversal): i am NATed
        108 "x" #1: STATE_MAIN_I3: sent MI3, expecting MR3
        004 "x" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_128 prf=oakley_sha group=modp1024}
        117 "x" #2: STATE_QUICK_I1: initiate
        010 "x" #2: STATE_QUICK_I1: retransmission; will wait 20s for response
        010 "x" #2: STATE_QUICK_I1: retransmission; will wait 40s for response
        031 "x" #2: max number of retransmissions (2) reached STATE_QUICK_I1. No acceptable response to our first Quick Mode message: perhaps peer likes no proposal
        000 "x" #2: starting keying attempt 2 of at most 3, but releasing whack

    I have enabled NAT traversal in ipsec.conf accordingly. Here are the settings relative to the connection in question:

        version 2.0
        config setup
            plutoopts="--perpeerlog"
            plutoopts="--interface=wlan2"
            dumpdir=/var/run/pluto/
            nat_traversal=yes
            virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
            oe=off
            protostack=netkey
        conn x
            authby=secret
            pfs=yes
            auto=add
            phase2alg=aes256-sha1;dh22
            keyingtries=3
            ikelifetime=8h
            type=transport
            left=192.168.1.76
            leftsubnet=192.168.1.0/24
            leftprotoport=0/0
            right=222.222.222.222
            rightsubnet=10.196.0.0/17
            rightprotoport=0/0

    Here are the specs provided by the other end that must be met for Phase #2:

        encryption algorithm: AES (128 or 256 bit)
        hash algorithm: SHA
        local ident1 (addr/mask/prot/port): (10.196.0.0/255.255.128.0/0/0)
        local ident2 (addr/mask/prot/port): (10.241.0.0/255.255.0.0/0/0)
        remote ident (addr/mask/prot/port): (x.x.x.x/x.x.x.x/0/0) (internal network or localhost)
        Security association lifetime: 4608000 kilobytes/3600 seconds
        PFS: DH group2

    So, finally, what might be the cause of the issue that I am experiencing? Thank you.
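
    One mismatch stands out in the log: the initiator proposes pfsgroup=OAKLEY_GROUP_DH22, while the peer's stated policy is PFS DH group 2 (modp1024), and every proposal is answered with NO_PROPOSAL_CHOSEN. A minimal sketch of phase-2 lines matching the peer's published policy, assuming their "group2" really means modp1024 and that this subnet-to-subnet traffic needs tunnel rather than transport mode:

        conn x
            # ... other settings unchanged ...
            phase2alg=aes256-sha1;modp1024   # peer wants PFS DH group 2, not dh22
            type=tunnel                      # subnet-to-subnet SAs normally use tunnel mode
            keylife=1h                       # match the peer's 3600-second SA lifetime

    This is a sketch against the stated peer policy, not a verified fix; the tunnel/transport change in particular is a guess based on the rightsubnet definition.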


  • Increase samba space on open suse 12.1

    - by Kapil Sharma
    I know Linux basics but I'm not an expert. The IT guy left the job here and there is some time before a new hire, so sorry if the question is very basic. We have a local testing server based on openSUSE 12.1, which also acts as a shared drive between the dev/mgmt team here, using Samba for that. Now we are running out of space on Samba, even though the server's 2 x 1 TB hard disks are nearly 90% free. My question is: what is limiting Samba and how can I increase its limit? We need at least around 500 GB as a shared drive, but currently it's just 25 GB. I don't need a step-by-step answer; just a link to any helpful article would be sufficient. Probably I'm putting the wrong keywords into Google, so I'm not getting any helpful link.

    EDIT: Output of the commands in the first comment. All commands were run as the root user.

    df -h (getting an error with df -ht):

        Filesystem  Size  Used  Avail  Use%  Mounted on
        rootfs       30G  5.1G    23G   19%  /
        devtmpfs    2.0G   36K   2.0G    1%  /dev
        tmpfs       2.0G  1.1M   2.0G    1%  /dev/shm
        tmpfs       2.0G  676K   2.0G    1%  /run
        /dev/sda2    30G  5.1G    23G   19%  /
        tmpfs       2.0G     0   2.0G    0%  /sys/fs/cgroup
        tmpfs       2.0G  676K   2.0G    1%  /var/run
        tmpfs       2.0G     0   2.0G    0%  /media
        tmpfs       2.0G  676K   2.0G    1%  /var/lock
        /dev/sda3    36G   31G   3.3G   91%  /home

    fdisk -l /dev/[hmsv]d*:

        Disk /dev/sda: 80.0 GB, 80026361856 bytes
        255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x2d4a2d49

        Device Boot      Start        End    Blocks  Id  System
        /dev/sda1         2048   16771071   8384512  82  Linux swap / Solaris
        /dev/sda2 *   16771072   79681535  31455232  83  Linux
        /dev/sda3     79681536  156301311  38309888  83  Linux

        Disk /dev/sda1: 8585 MB, 8585740288 bytes
        255 heads, 63 sectors/track, 1043 cylinders, total 16769024 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/sda1 doesn't contain a valid partition table

        Disk /dev/sda2: 32.2 GB, 32210157568 bytes
        255 heads, 63 sectors/track, 3915 cylinders, total 62910464 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Device Boot Start End Blocks Id System

        Disk /dev/sda3: 39.2 GB, 39229325312 bytes
        255 heads, 63 sectors/track, 4769 cylinders, total 76619776 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/sda3 doesn't contain a valid partition table

        # vgs
        No volume groups found
        # lvs
        No volume groups found

    Output of /etc/samba/smb.conf:

        # smb.conf is the main Samba configuration file. You find a full commented
        # version at /usr/share/doc/packages/samba/examples/smb.conf.SUSE if the
        # samba-doc package is installed.
        # Date: 2011-11-02
        [global]
            workgroup = WORKGROUP
            passdb backend = tdbsam
            printing = cups
            printcap name = cups
            printcap cache time = 750
            cups options = raw
            map to guest = Bad User
            include = /etc/samba/dhcp.conf
            logon path = \\%L\profiles\.msprofile
            logon home = \\%L\%U\.9xprofile
            logon drive = P:
            usershare allow guests = Yes
        [homes]
            comment = Home Directories
            valid users = %S, %D%w%S
            browseable = No
            read only = No
            inherit acls = Yes
        [profiles]
            comment = Network Profiles Service
            path = %H
            read only = No
            store dos attributes = Yes
            create mask = 0600
            directory mask = 0700
        [users]
            comment = All users
            path = /home
            read only = No
            inherit acls = Yes
            veto files = /aquota.user/groups/shares/
        [groups]
            comment = All groups
            path = /home/groups
            read only = No
            inherit acls = Yes
        [printers]
            comment = All Printers
            path = /var/tmp
            printable = Yes
            create mask = 0600
            browseable = No
        [print$]
            comment = Printer Drivers
            path = /var/lib/samba/drivers
            write list = @ntadmin root
            force group = ntadmin
            create mask = 0664
            directory mask = 0775
        [allusers]
            comment = All Users
            path = /home/shares/allusers
            valid users = @users
            force group = users
            create mask = 0660
            directory mask = 0771
            writable = yes
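
    Samba itself imposes no size limit here; the shares ([users], [groups], [allusers]) all live under /home, which df shows as a 36 GB partition (/dev/sda3) already 91% full. Note also that fdisk sees only one 80 GB disk, so the 2 x 1 TB drives may not be partitioned or mounted at all. A sketch of bringing the extra space online, assuming an unused disk shows up as /dev/sdb (that device name is a guess; check fdisk -l first):

        fdisk /dev/sdb                        # create one partition, /dev/sdb1
        mkfs.ext4 /dev/sdb1                   # format it
        mkdir -p /srv/samba/shares
        mount /dev/sdb1 /srv/samba/shares     # mount the new space
        echo '/dev/sdb1 /srv/samba/shares ext4 defaults 0 2' >> /etc/fstab
        # then point the share at it in smb.conf, e.g.:
        #   [allusers]
        #       path = /srv/samba/shares/allusers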


  • Cannot connect to MySQL over TCP locally - Connection Timeout - Ubuntu 9.04

    - by gav
    I am running Ubuntu and am ultimately trying to connect Tomcat to my MySQL database using JDBC. It has worked previously, but after a reboot the instance now fails to connect.

    - Both Tomcat 6 and MySQL 5.0.75 are on the same machine
    - Connection string: jdbc:mysql:///localhost:3306
    - I can connect to MySQL on the command line using the mysql command
    - The my.cnf file is pretty standard (available on request) and has bind address: 127.0.0.1
    - I cannot telnet to the MySQL port despite netstat saying MySQL is listening
    - I have one iptables rule to forward 80 -> 8080 and no firewall I'm aware of

    I'm pretty new to this and I'm not sure what else to test. I don't know whether I should be looking in /etc/interfaces, and if I did, what to look for. It's weird because it used to work, but after a reboot it's down, so I must have changed something... :). I realise a timeout indicates the server is not responding, and I assume it's because the request isn't actually getting through. I installed MySQL via apt-get and Tomcat manually.

    mysqld processes:

        root@88:/var/log/mysql# ps -ef | grep mysqld
        root  21753     1  0 May27 ?     00:00:00 /bin/sh /usr/bin/mysqld_safe
        mysql 21792 21753  0 May27 ?     00:00:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --port=3306 --socket=/var/run/mysqld/mysqld.sock
        root  21793 21753  0 May27 ?     00:00:00 logger -p daemon.err -t mysqld_safe -i -t mysqld
        root  21888 13676  0 11:23 pts/1 00:00:00 grep mysqld

    Netstat:

        root@88:/var/log/mysql# netstat -lnp | grep mysql
        tcp  0  0 0.0.0.0:3306  0.0.0.0:*  LISTEN  21792/mysqld
        unix 2  [ ACC ] STREAM LISTENING 1926205077 21792/mysqld /var/run/mysqld/mysqld.sock

    Toy connection class:

        root@88:~# cat TestConnect/TestConnection.java
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        public class TestConnection {
            public static void main(String args[]) throws Exception {
                Connection con = null;
                try {
                    Class.forName("com.mysql.jdbc.Driver").newInstance();
                    System.out.println("Got driver");
                    con = DriverManager.getConnection(
                        "jdbc:mysql:///localhost:3306", "uname", "pass");
                    System.out.println("Got connection");
                    if (!con.isClosed())
                        System.out.println("Successfully connected to " +
                            "MySQL server using TCP/IP...");
                } finally {
                    if (con != null) con.close();
                }
            }
        }

    Toy connection class output (note: this is the same error I get from Tomcat):

        root@88:~/TestConnect# java -cp mysql-connector-java-5.1.12-bin.jar:. TestConnection
        Got driver
        Exception in thread "main" com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
        The last packet sent successfully to the server was 1 milliseconds ago. The driver has not received any packets from the server.
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
            at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
            at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1122)
            at TestConnection.main(TestConnection.java:14)
        Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
        The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
            at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
            at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1122)
            at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:344)
            at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2181)
            ... 12 more
        Caused by: java.net.ConnectException: Connection timed out
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            ... 13 more

    Telnet output:

        root@88:~/TestConnect# telnet localhost 3306
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection timed out
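
    Two things are worth noting, hedged as guesses. First, the JDBC URL looks malformed: jdbc:mysql:///localhost:3306 places "localhost:3306" in the database-name position. The usual form puts host and port before the slash and the database name after it:

        // "mydb" is a placeholder for the actual schema name
        con = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/mydb", "uname", "pass");

    Second, telnet localhost 3306 timing out even though mysqld is listening points at something below JDBC entirely; compare the next question from the same author, where the loopback interface turned out to be down.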


  • Cannot ping Localhost so I can't shutdown Tomcat

    - by gav
    Hi, I installed Tomcat 6 using the tarball via wget. Startup of the server is fine, but on shutdown I get a timeout exception.

        root@88:/usr/local/tomcat/logs# /usr/local/tomcat/bin/shutdown.sh
        Using CATALINA_BASE:   /usr/local/tomcat
        Using CATALINA_HOME:   /usr/local/tomcat
        Using CATALINA_TMPDIR: /usr/local/tomcat/temp
        Using JRE_HOME:        /usr
        Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar
        30-Mar-2010 17:33:41 org.apache.catalina.startup.Catalina stopServer
        SEVERE: Catalina.stop:
        java.net.ConnectException: Connection timed out
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
            ...

    I read that this might be because I have a firewall blocking incoming connections on the shutdown port (8005). I have a default Ubuntu 9.04 installation running on a VPS with no rules in my iptables. How can I tell if that port is blocked? How can I check that the server is listening for connections on 8005? Bizarrely, pinging localhost or the IP of my server fails from the server itself, whereas pinging the IP of my server from another machine succeeds.

    EDIT (in reply to Davey): Thanks for all the tips and suggestions!

        netstat -nlp
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
        tcp   0      0      127.0.0.1:8005   0.0.0.0:*        LISTEN  9611/java
        tcp   0      0      127.0.0.1:3306   0.0.0.0:*        LISTEN  28505/mysqld
        tcp   0      0      0.0.0.0:8080     0.0.0.0:*        LISTEN  9611/java
        tcp   0      0      0.0.0.0:22       0.0.0.0:*        LISTEN  ...

    So we can see that Tomcat is listening; I just don't seem to be able to reach it. Trying to telnet to the port hangs indefinitely:

        root@88:/usr/local/tomcat# telnet localhost 8005
        Trying 127.0.0.1...

    I have no rules in my iptables, so I don't think it's a firewall thing:

        root@88:/usr/local/tomcat# iptables --list
        Chain INPUT (policy ACCEPT)
        target prot opt source destination
        Chain FORWARD (policy ACCEPT)
        target prot opt source destination
        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

    This is the contents of /etc/hosts:

        127.0.0.1 localhost.localdomain localhost
        # Auto-generated hostname. Please do not remove this comment.
        88.198.31.14 88.198.31.14 88 88

    But I still can't ping localhost... do I need to check that a loopback device is enabled properly or something? (I'm unsure how to do that, if you do say yes :)).

        root@88:/usr/local/tomcat# ping localhost
        PING localhost (127.0.0.1) 56(84) bytes of data.
        --- localhost ping statistics ---
        7 packets transmitted, 0 received, 100% packet loss, time 5999ms

    Trying to find out what the loopback is configured as:

        root@88:~# ifconfig lo
        lo  Link encap:Local Loopback
            LOOPBACK MTU:16436 Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

    SOLUTION (thanks to Davey): I needed to bring up the interface (not sure why it wasn't running). ifconfig lo up did the trick.

        root@88:~# ifconfig lo up
        root@88:~# ifconfig lo
        lo  Link encap:Local Loopback
            inet addr:127.0.0.1 Mask:255.0.0.0
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING MTU:16436 Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
        root@88:~# ping localhost
        PING localhost.localdomain (127.0.0.1) 56(84) bytes of data.
        64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.025 ms

    Thanks again, Gav
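
    To keep that fix across reboots, the loopback stanza belongs in the interface configuration. A minimal sketch for Debian/Ubuntu-style networking, assuming nothing else manages the interfaces on this VPS:

        # /etc/network/interfaces -- ensure these two lines are present so
        # lo is brought up automatically at boot
        auto lo
        iface lo inet loopback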


  • Disk operations freeze Debian

    - by Grzenio
    Hi, I have just installed Debian testing on my new desktop and I am not very happy with the performance - when I perform a disk-intensive operation, e.g. upgrading packages in the system, everything seems to freeze; for example, changing tabs in Iceweasel takes 3 seconds. I run Debian on my 3-year-old Thinkpad X60 ultra-portable and I don't have these issues (every single parameter of the laptop is much worse than the desktop). I am using the default packaged kernel and scripts. I ran

        hdparm -t /dev/sda1

    and got around 96 MB/s, which is expected. What else can I try to make it work better?

    EDIT:

        grzes:/home/ga# hdparm -i /dev/sda
        /dev/sda:
        Model=WDC WD15EARS-00Z5B1, FwRev=80.00A80, SerialNo=WD-WMAVU1362357
        Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
        RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=50
        BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16
        CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=2930277168
        IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
        PIO modes: pio0 pio3 pio4
        DMA modes: mdma0 mdma1 mdma2
        UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
        AdvancedPM=no WriteCache=enabled
        Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7
        * signifies the current active mode

    EDIT2: Even my wife said "on this new computer I can't do anything when I copy the photos from the camera, and it's much worse than on the old one". So it must be serious.

    EDIT3: Updated to 2.6.32, but still no improvement.

    EDIT4: I forgot to mention that the new disk is ext4; the old one was ext3.

    EDIT5: Still not solved. I have a P43 ASUS P5QL-E board. Lines from dmesg that seem relevant:

        [ 0.370850] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
        [ 0.370852] io scheduler noop registered
        [ 0.370853] io scheduler anticipatory registered
        [ 0.370854] io scheduler deadline registered
        [ 0.370876] io scheduler cfq registered (default)
        ...
        [ 0.908233] ata_piix 0000:00:1f.2: version 2.13
        [ 0.908243] ata_piix 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19
        [ 0.908246] ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ]
        [ 0.908275] ata_piix 0000:00:1f.2: setting latency timer to 64
        [ 0.908316] scsi0 : ata_piix
        [ 0.908374] scsi1 : ata_piix
        [ 0.909180] ata1: SATA max UDMA/133 cmd 0xa000 ctl 0x9c00 bmdma 0x9480 irq 19
        [ 0.909183] ata2: SATA max UDMA/133 cmd 0x9880 ctl 0x9800 bmdma 0x9488 irq 19
        [ 0.909199] ata_piix 0000:00:1f.5: PCI INT B -> GSI 19 (level, low) -> IRQ 19
        [ 0.909202] ata_piix 0000:00:1f.5: MAP [ P0 -- P1 -- ]
        [ 0.909228] ata_piix 0000:00:1f.5: setting latency timer to 64
        [ 0.909279] scsi2 : ata_piix
        [ 0.909326] scsi3 : ata_piix
        [ 0.910021] ata3: SATA max UDMA/133 cmd 0xb000 ctl 0xac00 bmdma 0xa480 irq 19
        [ 0.910024] ata4: SATA max UDMA/133 cmd 0xa880 ctl 0xa800 bmdma 0xa488 irq 19
        [ 0.915575] FDC 0 is a post-1991 82077
        ...
        [ 1.716062] ata1.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [ 1.716074] ata1.01: SATA link down (SStatus 0 SControl 300)
        [ 1.724318] ata1.00: ATA-8: WDC WD15EARS-00Z5B1, 80.00A80, max UDMA/133
        [ 1.724322] ata1.00: 2930277168 sectors, multi 16: LBA48 NCQ (depth 0/32)
        [ 1.740339] ata1.00: configured for UDMA/133
        [ 1.740428] scsi 0:0:0:0: Direct-Access ATA WDC WD15EARS-00Z 80.0 PQ: 0 ANSI: 5
        [ 1.746788] scsi 6:0:0:0: CD-ROM ASUS DRW-1608P 1.17 PQ: 0 ANSI: 5
        ...
        [ 1.925981] sd 0:0:0:0: [sda] 2930277168 512-byte logical blocks: (1.50 TB/1.36 TiB)
        [ 1.926005] sd 0:0:0:0: [sda] Write Protect is off
        [ 1.926007] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
        [ 1.926020] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [ 1.926092] sda:sr0: scsi3-mmc drive: 40x/40x writer cd/rw xa/form2 cdda tray
        [ 1.931106] Uniform CD-ROM driver Revision: 3.20
        [ 1.931191] sr 6:0:0:0: Attached scsi CD-ROM sr0
        ...
        [ 1.941936] sda1 sda2 sda3 sda4 < sda5 sda6 >
        [ 1.967691] sd 0:0:0:0: [sda] Attached SCSI disk
        [ 1.970938] sd 0:0:0:0: Attached scsi generic sg0 type 0
        [ 1.970959] sr 6:0:0:0: Attached scsi generic sg1 type 5
        ...
        [ 2.500086] EXT4-fs (sda3): mounted filesystem with ordered data mode
        ...
        [ 7.150468] EXT4-fs (sda6): mounted filesystem with ordered data mode
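
    One lead worth checking, offered as a guess rather than a diagnosis: the WD15EARS is a 4 KiB-sector "Advanced Format" drive, and a partition that does not start on a 4096-byte boundary turns every small write into a read-modify-write, which matches the freeze-under-write symptom. A diagnostic sketch:

        fdisk -lu /dev/sda                    # each partition's start sector should be divisible by 8
        cat /sys/block/sda/queue/scheduler    # see which I/O scheduler is active
        # switching from cfq to deadline sometimes helps desktop latency:
        echo deadline > /sys/block/sda/queue/scheduler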


  • Dhcpd Daemon is trying to lease itself?

    - by tommieb75
    I have a Slackware Linux 13.0 box with two interfaces, eth0 and eth1. I have set this box up to be on the 192.168.1.0/24 network, with a subnet mask of 255.255.255.0. I am trying to run a dhcpd server on this box to service the two interfaces above, so I subnetted the 192.168.1.0/24 network into two subnets:

    - For eth0: 192.168.1.1, subnet mask 255.255.255.128, broadcast address 192.168.1.127.
    - For eth1: 192.168.1.129, subnet mask 255.255.255.128, broadcast address 192.168.1.255.

    Both interfaces are assigned manually.

        eth0  Link encap:Ethernet HWaddr 00:00:00:00:00:00
              inet addr:192.168.1.1 Bcast:192.168.1.127 Mask:255.255.255.128
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:39 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B) TX bytes:1404 (1.3 KiB)
              Interrupt:11 Base address:0x8000 Memory:faffc000-faffcfff

        eth1  Link encap:Ethernet HWaddr 00:00:00:00:00:00
              inet addr:192.168.1.128 Bcast:192.168.1.255 Mask:255.255.255.128
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
              RX packets:10003 errors:0 dropped:0 overruns:0 frame:0
              TX packets:13286 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1589229 (1.5 MiB) TX bytes:9900005 (9.4 MiB)
              Interrupt:11

    Here is the dhcpd.conf setup:

        authoritative;
        ddns-update-style interim;
        ignore client-updates;

        subnet 192.168.1.0 netmask 255.255.255.128 {
            range 192.168.1.2 192.168.1.126;
            default-lease-time 86400;
            max-lease-time 86400;
            option routers 192.168.1.1;
            option ip-forwarding off;
            option domain-name-servers 208.67.222.222, 208.67.220.220;
            option broadcast-address 192.168.1.127;
            option subnet-mask 255.255.255.128;
        }

        subnet 192.168.1.128 netmask 255.255.255.128 {
            range 192.168.1.129 192.168.1.254;
            default-lease-time 86400;
            max-lease-time 86400;
            option routers 192.168.1.1;
            option ip-forwarding off;
            option domain-name-servers 208.67.222.222, 208.67.220.220;
            option broadcast-address 192.168.1.255;
            option subnet-mask 255.255.255.128;
        }

    This is what shows up in the log:

        Apr 10 18:09:58 inspiron8600 dhcpd: DHCPDISCOVER from 00:00:00:00:00:00 (inspiron8600) via eth1
        Apr 10 18:09:58 inspiron8600 dhcpd: DHCPOFFER on 192.168.1.131 to 00:00:00:00:00:00 (inspiron8600) via eth1
        Apr 10 18:10:01 inspiron8600 dhcpcd[3832]: eth1: adding IP address 169.254.153.6/16

    This is happening spuriously, and the log gets filled up with nonsense. So my question is this: how do I stop this from happening? And why would it be trying to give itself a lease? I am sure I have missed something but cannot see it, and would appreciate a pair of eyes from the community to spot the obvious flaw!
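
    The third log line looks like the giveaway: dhcpcd (the DHCP client) is running on eth1 alongside the dhcpd server, so the box is soliciting a lease from itself and then falling back to a 169.254.x.x link-local address. On Slackware that usually means rc.inet1.conf still has DHCP enabled for the interface. A sketch, assuming eth1 is meant to be static (interface index and values taken from the setup above):

        # /etc/rc.d/rc.inet1.conf -- with USE_DHCP empty, rc.inet1 will not
        # start the dhcpcd client that appears in the log
        IPADDR[1]="192.168.1.129"
        NETMASK[1]="255.255.255.128"
        USE_DHCP[1]=""
        DHCP_HOSTNAME[1]=""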


  • how to make bridge networking with KVM work in Fedora19

    - by netllama
    I'm attempting to get several virtual machines set up on a Fedora 19 host system, with the traditional bridge network devices (br0, br1, etc). I've done this many times before with older versions of Fedora (16, 14, etc), and it just works. However, for reasons that I cannot figure out, the bridge doesn't seem to be working in Fedora 19. While I can successfully connect to the outside world (local network + internet) from inside a VM, nothing can communicate with the VM from outside (local network). I'm referring to something as trivial as pinging. From inside the VM, I can ping anything successfully (0% packet loss). However, from outside the VM (on the host, or any other system on the same network), I see 100% packet loss when pinging the IP address of the VM. My first question is simply: does anyone else have this working successfully in F19? And if so, what steps did you need to follow? I'm not using NetworkManager at all; it's all the network service. There are no firewalls involved anywhere (the iptables and firewalld services are currently disabled). Here's the current host configuration:

        # brctl show
        bridge name  bridge id          STP enabled  interfaces
        br0          8000.38eaa792efe5  no           em2
                                                     vnet1
        br1          8000.38eaa792efe6  no           em3
        br2          8000.38eaa792efe7  no           em4
                                                     vnet0
        virbr0       8000.525400db3ebf  yes          virbr0-nic

        # more /etc/sysconfig/network-scripts/ifcfg-em2
        TYPE=Ethernet
        BRIDGE="br0"
        NAME=em2
        DEVICE="em2"
        UUID=aeaa839e-c89c-4d6e-9daa-79b6a1b919bd
        ONBOOT=yes
        HWADDR=38:EA:A7:92:EF:E5
        NM_CONTROLLED="no"

        # more /etc/sysconfig/network-scripts/ifcfg-br0
        TYPE=Bridge
        NM_CONTROLLED="no"
        BOOTPROTO=dhcp
        NAME=br0
        DEVICE="br0"
        ONBOOT=yes

        # ifconfig em2; ifconfig br0
        em2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
             inet6 fe80::3aea:a7ff:fe92:efe5 prefixlen 64 scopeid 0x20<link>
             ether 38:ea:a7:92:ef:e5 txqueuelen 1000 (Ethernet)
             RX packets 100093 bytes 52354831 (49.9 MiB)
             RX errors 0 dropped 0 overruns 0 frame 0
             TX packets 25321 bytes 15791341 (15.0 MiB)
             TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
             device memory 0xf7d00000-f7e00000

        br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
             inet 10.31.99.226 netmask 255.255.252.0 broadcast 10.31.99.255
             inet6 fe80::3aea:a7ff:fe92:efe5 prefixlen 64 scopeid 0x20<link>
             ether 38:ea:a7:92:ef:e5 txqueuelen 0 (Ethernet)
             RX packets 19619 bytes 1963328 (1.8 MiB)
             RX errors 0 dropped 0 overruns 0 frame 0
             TX packets 11 bytes 1074 (1.0 KiB)
             TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    Relevant section from /etc/libvirt/qemu/foo.xml (one of the VMs with this problem):

        <interface type='bridge'>
          <mac address='52:54:00:26:22:9d'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </interface>

        # ps -ef | grep qemu
        qemu 1491 1 82 13:25 ? 00:42:09 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name cuda-linux64-build5 -S -machine pc-0.13,accel=kvm,usb=off -cpu SandyBridge,+pdpe1gb,+osxsave,+dca,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 16384 -smp 6,sockets=6,cores=1,threads=1 -uuid 6e930234-bdfd-044d-2787-22d4bbbe30b1 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/cuda-linux64-build5.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/libvirt/images/cuda-linux64-build5.img,if=none,id=drive-virtio-disk0,format=raw,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:26:22:9d,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:1 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

    I can provide additional information, if requested. Thanks!
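
    A couple of hedged things to rule out on an F19 host, not a confirmed fix:

        # if the br_netfilter hooks are active, bridged frames traverse
        # iptables even with the firewall services disabled
        sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
        sysctl -w net.bridge.bridge-nf-call-iptables=0
        # also confirm which tap device belongs to which guest: brctl above
        # shows vnet0 on br2 and vnet1 on br0, which may not be the intended
        # pairing for this VM
        virsh domiflist cuda-linux64-build5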


  • How should I ask for help in getting my emails to stop bouncing?

    - by Gregg Williams
    For several months, people have been telling me that emails they sent to me have been bouncing back, marked as undeliverable. The bounce message would contain portions like this:

        Final-Recipient: rfc822;[email protected]
        Action: failed
        Status: 5.7.1
        Diagnostic-Code: smtp;550 5.7.1 <[email protected]>... Recipient declines email from 69.64.159.2, <spamhaus-xbl>, Ref: http://www.spamhaus.org/query/bl?ip=69.64.159.2

    Clicking the link on the last line, the destination page told me that "this IP address is infected with/emitting spamware/spamtrojan traffic and needs to be fixed." I could temporarily de-list this node by clicking a link on that page, but it would get back on the list and more emails to me would bounce. I own a domain, innerpaths.net, and I normally use [email protected] for my email. I have my domain registrar, namecheap.com, forward all email from innerpaths.net to the email account [email protected]. (BTW, I had this same problem at a former registrar. I changed registrars, hoping that would fix the problem. It didn't.) Trying to isolate the problem, I asked namecheap.com what I should do. Their answer, though substantial, left me scratching my head:

        We have received feedback from our upstream provider which informed us that the mail server that you are trying to email subscribes to a 3rd party blacklist service which they appear to be listed on at the present time and is causing the destination mail server to reject the messages. Being blocked with one of these services can happen to anyone for many reasons and is something that is beyond our control. 3rd party blacklist services require companies whose mail servers they have blacklisted pay fees in order to be removed from their lists. As we cannot pay fees to blacklist services which require them for removal, you should contact your email provider and have them whitelist our mail server IP address: 69.64.157.73.

    My best guess is that I should email my ISP, sonic.net, tell them what is going on, and ask them to whitelist the IP address 69.64.157.73. (If not, please let me know.) But I want to know what is going on and how email works. I understand that there's a device at address 69.64.159.2 that is doing something bad that causes the destination mail server (sonic.net's, I assume --gw) to reject the messages. I know that email is sent through multiple devices in a way that eventually gets it to its destination. Beyond that, here are my questions:

    1) I thought the Internet "routed around damage." Why does email starting at namecheap.com always (or is it 'sometimes'?) go through 69.64.159.2?
    2) Who is the "upstream provider" that the namecheap.com representative mentions, and what is their role?
    3) How does sonic.net's whitelisting namecheap.com's mail server prevent my email being bounced because of 69.64.159.2?

    I've searched the Internet for answers but have found nothing useful. Thanks for whatever answers you can provide.


  • Vagrant (Virtualbox) host-only multiple node networking issue

    - by Lorin Hochstein
    I'm trying to use a multi-VM Vagrant environment as a testbed for deploying OpenStack, and I've run into a networking problem when trying to communicate from one VM to a VM-inside-of-a-VM. I have two Vagrant nodes, a cloud controller node and a compute node. I'm using host-only networking. My Vagrantfile looks like this:

        Vagrant::Config.run do |config|
          config.vm.box = "precise64"

          config.vm.define :controller do |controller_config|
            controller_config.vm.network :hostonly, "192.168.206.130" # eth1
            controller_config.vm.network :hostonly, "192.168.100.130" # eth2
            controller_config.vm.host_name = "controller"
          end

          config.vm.define :compute1 do |compute1_config|
            compute1_config.vm.network :hostonly, "192.168.206.131" # eth1
            compute1_config.vm.network :hostonly, "192.168.100.131" # eth2
            compute1_config.vm.host_name = "compute1"
            compute1_config.vm.customize ["modifyvm", :id, "--memory", 1024]
          end
        end

    When I try to start up a (QEMU-based) VM, it boots successfully on compute1, and its virtual NIC (vnet0) is connected via a bridge, br100:

        root@compute1:~# brctl show 100
        bridge name  bridge id          STP enabled  interfaces
        br100        8000.08002798c6ef  no           eth2
                                                     vnet0

    When the QEMU VM makes a request to the DHCP server (dnsmasq) running on controller, I can see the request reach the controller because of the output in the syslog on the controller:

        Aug 6 02:34:56 precise64 dnsmasq-dhcp[12042]: DHCPDISCOVER(br100) fa:16:3e:07:98:11
        Aug 6 02:34:56 precise64 dnsmasq-dhcp[12042]: DHCPOFFER(br100) 192.168.100.2 fa:16:3e:07:98:11

    However, the DHCPOFFER never makes it back to the VM running on compute1. If I watch the requests using tcpdump on the vboxnet3 interface on my host machine that runs Vagrant (Mac OS X), I can see both the requests and the replies:

        $ sudo tcpdump -i vboxnet3 -n port 67 or port 68
        tcpdump: WARNING: vboxnet3: That device doesn't support promiscuous mode
        (BIOCPROMISC: Operation not supported on socket)
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on vboxnet3, link-type EN10MB (Ethernet), capture size 65535 bytes
        22:51:20.694040 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
        22:51:20.694057 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
        22:51:20.696047 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311
        22:51:23.700845 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
        22:51:23.700876 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
        22:51:23.701591 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311
        22:51:26.705978 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
        22:51:26.705995 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
        22:51:26.706527 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311

    But if I tcpdump on eth2 on compute1, I only see the requests, not the replies:

        root@compute1:~# tcpdump -i eth2 -n port 67 or port 68
        tcpdump: WARNING: eth2: no IPv4 address assigned
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
        02:51:20.240672 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
        02:51:23.249758 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
        02:51:26.258281 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280

    At this point, I'm stuck. I'm not sure why the DHCP replies aren't making it to the compute node. Perhaps it has something to do with the configuration of the VirtualBox virtual switch/router? Note that the eth2 interfaces on both nodes have been set to promiscuous mode.
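
    A hedged suspicion: the tcpdump warning that vboxnet3 "doesn't support promiscuous mode" points the same way as the symptom. VirtualBox adapters default to a promiscuous-mode policy of "deny", so frames addressed to a MAC VirtualBox did not assign (here the nested VM's fa:16:3e:...) are dropped on the way back. A sketch of the usual workaround in this Vagrantfile's own syntax; the NIC number is an assumption (NIC 1 is Vagrant's NAT adapter, so eth2 would map to NIC 3):

        config.vm.define :compute1 do |compute1_config|
          # ... existing settings ...
          # "allow-all" lets frames for unknown MACs through to the guest
          compute1_config.vm.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
        end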


  • linux routing bug?

    - by Balázs Pozsár
    I have been struggling with this not-easily-reproducible issue for a while. I am using Linux kernel v3.1.0, and sometimes routing to a few IP addresses does not work. What seems to happen is that instead of sending the packet to the gateway, the kernel treats the destination address as local and tries to get its MAC address via ARP. For example, my current IP address is 172.16.1.104/24 and the gateway is 172.16.1.254:

        # ifconfig eth0
        eth0  Link encap:Ethernet HWaddr 00:1B:63:97:FC:DC
              inet addr:172.16.1.104 Bcast:172.16.1.255 Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
              RX packets:230772 errors:0 dropped:0 overruns:0 frame:0
              TX packets:171013 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:191879370 (182.9 Mb) TX bytes:47173253 (44.9 Mb)
              Interrupt:17

        # route -n
        Kernel IP routing table
        Destination  Gateway       Genmask        Flags Metric Ref Use Iface
        0.0.0.0      172.16.1.254  0.0.0.0        UG    0      0   0   eth0
        172.16.1.0   0.0.0.0       255.255.255.0  U     1      0   0   eth0

    I can ping a few addresses, but not 172.16.0.59:

        # ping -c1 172.16.1.254
        PING 172.16.1.254 (172.16.1.254) 56(84) bytes of data.
        64 bytes from 172.16.1.254: icmp_seq=1 ttl=64 time=0.383 ms
        --- 172.16.1.254 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms

        root@pozsybook:~# ping -c1 172.16.0.1
        PING 172.16.0.1 (172.16.0.1) 56(84) bytes of data.
        64 bytes from 172.16.0.1: icmp_seq=1 ttl=63 time=5.54 ms
        --- 172.16.0.1 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 5.545/5.545/5.545/0.000 ms

        root@pozsybook:~# ping -c1 172.16.0.2
        PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
        64 bytes from 172.16.0.2: icmp_seq=1 ttl=62 time=7.92 ms
        --- 172.16.0.2 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 7.925/7.925/7.925/0.000 ms

        root@pozsybook:~# ping -c1 172.16.0.59
        PING 172.16.0.59 (172.16.0.59) 56(84) bytes of data.
        From 172.16.1.104 icmp_seq=1 Destination Host Unreachable
        --- 172.16.0.59 ping statistics ---
        1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

    When trying to ping 172.16.0.59, I can see in tcpdump that an ARP request was sent:

        # tcpdump -n -i eth0 | grep ARP
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
        15:25:16.671217 ARP, Request who-has 172.16.0.59 tell 172.16.1.104, length 28

    and /proc/net/arp has an incomplete entry for 172.16.0.59:

        # grep 172.16.0.59 /proc/net/arp
        172.16.0.59  0x1  0x0  00:00:00:00:00:00  *  eth0

    Please note that 172.16.0.59 is accessible on this LAN from other computers. Does anyone have any idea what's going on? Thanks.

    Update - replies to the comments below:

    - There are no interfaces besides eth0 and lo.
    - The ARP request cannot be seen on the other end, but that's how it should work; the main problem is that an ARP request should not even be sent in the first place.
    - The problem persists even if I add an explicit route with the command "route add -host 172.16.0.59 gw 172.16.1.254 dev eth0".
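
    Some diagnostics that would narrow down whether this is a routing decision or a neighbor-table problem, offered as a sketch rather than an answer:

        # ask the kernel which route it actually picks for the failing address
        ip route get 172.16.0.59      # healthy output: "via 172.16.1.254 dev eth0 ..."
        ip rule show                  # any non-default policy-routing rules?
        ip route show table all | grep 172.16.0.59
        ip route flush cache          # kernels of this era still keep a route cache
        ip neigh del 172.16.0.59 dev eth0   # drop the incomplete ARP entry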


  • Centos/OVH: public IP on KVM virtual machine

    - by Sébastien
    For a few days, I've been trying to configure my KVM VM to have a public IP address, without any success. First, I'm on OVH, and you need to know they don't allow networking from different MAC addresses, so I have registered a virtual MAC address associated with my failover IP. Here's my configuration:

    - Guest wanted IP: 46.105.40.x
    - Host IP: 176.31.240.x

    Host configuration, dummy0 interface (ifcfg-dummy0):

        BOOTPROTO=static
        IPADDR=10.0.0.1
        NETMASK=255.0.0.0
        ONBOOT=yes
        NM_CONTROLLED=no
        ARP=yes
        BRIDGE=br0

    br0 bridge (ifcfg-br0):

        DEVICE=br0
        TYPE=Bridge
        DELAY=0
        ONBOOT=yes
        BOOTPROTO=static
        IPADDR=192.168.1.1
        NETMASK=255.255.255.0
        PEERDNS=yes
        NM_CONTROLLED=no
        ARP=yes

    The failover IP is redirected to the br0 bridge with ip route add 46.105.40.xxx dev br0.

        > cat /proc/sys/net/ipv4/ip_forward
        1
        > cat /proc/sys/net/ipv4/conf/vnet0/proxy_arp
        1
        > route -n
        Destination   Gateway         Genmask          Flags Metric Ref Use Iface
        0.0.0.0       176.31.240.254  0.0.0.0          UG    0      0   0   eth0
        46.105.40.x   0.0.0.0         255.255.255.255  UH    0      0   0   br0
        176.31.240.0  0.0.0.0         255.255.255.0    U     0      0   0   eth0
        192.168.1.0   0.0.0.0         255.255.255.0    U     0      0   0   br0

    Guest configuration, KVM:

        <interface type='bridge'>
          <mac address='02:00:00:30:22:05'/>
          <source bridge='br0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </interface>

    I've borrowed most of the OVH configuration from here (in French: http://guides.ovh.com/BridgeClient) for the guest configuration. eth0 interface (ifcfg-eth0):

        DEVICE="eth0"
        BOOTPROTO=none
        HWADDR="02:00:00:30:22:05"
        NM_CONTROLLED="yes"
        ONBOOT="yes"
        TYPE="Ethernet"
        UUID="e9138469-0d81-4ee6-b5ab-de0d7d17d1c8"
        USERCTL=no
        PEERDNS=yes
        IPADDR=46.105.40.xxx
        NETMASK=255.255.255.255
        GATEWAY=176.31.240.254
        ARP=yes

    For the routes, I have in route-eth0:

        176.31.240.254 dev eth0
        default via 176.31.240.254 dev eth0

    With this configuration, I don't have any access to the internet. The only thing I can do is ping the public IP of the host, nothing more. My conclusion is that the routing does not work, because when I run ping 8.8.8.8 on the guest, I see this on the host:

        > tcpdump -i vnet0 icmp
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on br0, link-type EN10MB (Ethernet), capture size 65535 bytes
        13:38:09.009324 IP 46-105-40-xxx.kimsufi.com > google-public-dns-a.google.com: ICMP echo request, id 50183, seq 1, length 64
        13:38:09.815344 IP 46-105-40-xxx.kimsufi.com > google-public-dns-a.google.com: ICMP echo request, id 50183, seq 2, length 64

    I never get the ping reply, only the request. It seems guest-to-host communication is fine. On eth0:

        > tcpdump -i eth0 icmp
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
        13:39:40.240561 IP 46-105-40-xxx.kimsufi.com > google-public-dns-a.google.com: ICMP echo request, id 50439, seq 1, length 64
        13:39:40.250161 IP google-public-dns-a.google.com > 46-105-40-xxx.kimsufi.com: ICMP echo reply, id 50439, seq 1, length 64

    I have the request and the reply on eth0, but the reply is not forwarded to the bridge. I really don't understand why; I thought that was exactly what the route is there to do! iptables is disabled on both host and guest. I really hope some of you will be able to help me! Many thanks in advance, Sébastien
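
    A hedged suspicion: the reply arrives on eth0, but the host's route back to 46.105.40.xxx points at br0, and strict reverse-path filtering treats exactly this asymmetric pattern as a martian and silently drops it. Worth checking (a sketch, not a confirmed fix):

        sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth0.rp_filter
        # relaxing it (2 = loose mode) is worth testing:
        sysctl -w net.ipv4.conf.all.rp_filter=2
        sysctl -w net.ipv4.conf.eth0.rp_filter=2
        # proxy_arp is already on for vnet0; eth0 usually needs it too, so the
        # host answers ARP for the failover IP on the OVH side:
        sysctl -w net.ipv4.conf.eth0.proxy_arp=1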


  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question.

    Context: We have a bunch of computers across around 1000 users. We have a centralized office where 900 of the users work most of the time. Most of the computers are laptops. They are very frequently coming on and off the network for hours at a time. Users often take their computers home and do lots of work from home. In addition, there are a handful of users who work elsewhere in the country, who are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines are Windows 7/XP.

    Problem: People are always losing data. One day someone accidentally deletes a bunch of files. The next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup and reinstall of Windows. Because so many of our business operations are done without an internet connection, and because computers come on- and offline so frequently, it's unfeasible to make users use network storage for all of their data. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get back files they accidentally deleted while they were offline and hadn't taken a backup in months. We tried teaching them backup best practices and using scheduled sync tools to upload things to the network drives, and they turned them off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amicable to any sort of "you need to do something regularly because we say so" solution.

    Question: Other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following? (A partial sketch follows the list.)

    - Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule.
    - Supports interrupted/resumed backups (and hopefully file-delta-only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so.
    - Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC.
    - Supports local-to-local file-delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the locally stored backups would be pushed up to the server whenever a network link is available.
    - Isn't configurable on the clients without certain credentials, because the CFOs (who won't give up their admin rights on the domain) will disable it if they can.
    - Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know).

    I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, and Volume Shadow Copy is awesome for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a 2-hour window of network access. The company is fine with spending decent money on this: thousands (USD) on a server, and hundreds on clients, if necessary.

    I want to emphasize that this isn't a shopping-list request. While I wish there was a program out there that did what I want, I've looked pretty hard and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!
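
    Not an answer to the whole wish list, but as a starting point for the "resumable, schedule-driven sync" bullets, here is a sketch using only stock Windows tooling; the server name and paths are placeholders, and it covers neither the whole-drive nor the tamper-proofing requirements:

        :: sync.cmd -- /Z makes copies restartable over a flaky WAN link,
        :: /MIR mirrors the tree, /R and /W keep retries short, /XJ skips junctions
        robocopy C:\Users \\backupsrv\backups\%COMPUTERNAME% /MIR /Z /R:2 /W:5 /XJ /LOG+:C:\backuplog.txt

        :: register it to run hourly whenever the machine happens to be up:
        schtasks /Create /TN HourlySync /TR "cmd /c C:\scripts\sync.cmd" /SC HOURLY /RU SYSTEM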


  • PXE Boot not working

    - by Nishant
    Please explain the error in this screenshot. DHCP setting: this screenshot was taken after powering off the old computer, hence the server interface is shown as the wireless card - it becomes 192.168.0.1 when I connect the wires and power up the old laptop to boot via PXE. My scenario is simple: an old laptop and a new laptop, connected by a crossover cable (which I made myself from CAT 6 cable by cutting it and connecting 4 wires as mentioned in some doc). The new laptop (the TFTP server) has a wireless card, with which I am browsing and writing this, and the cable is connected between the laptops.

    TFTP server (new laptop) details:

        Windows IP Configuration

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix . :
           Link-local IPv6 Address . . . . . : fe80::f511:3d4a:ca01:122e%16
           IPv4 Address. . . . . . . . . . . : 192.168.0.1
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . : 192.168.0.2

        Wireless LAN adapter Wireless Network Connection:
           Connection-specific DNS Suffix . : Achilles
           Link-local IPv6 Address . . . . . : fe80::99b1:8ae0:9e6c:f300%11
           IPv4 Address. . . . . . . . . . . : 192.168.2.3
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . : 192.168.2.1


  • Moving users folder on Windows-7 to another partition - bad idea?

    - by Donat
    Hi, I'd like to re-submit here a question posted by Benjol on Aug 17 at 5:57, "Moving users folder on Windows Vista to another partition - bad idea?" (I can't post more than one link until I earn "10 reputation", and I removed my "answer" there to post my follow-up questions here.)

    I am anxiously getting ready, at long last, to carry out a clean install (using the custom install option) from Vista to Windows 7 Home Premium 64-bit with the free upgrade I received late October. For my Vista system I successfully set up last summer a multi-partition scheme with Users and Program Data on a different partition than the operating system (see the link below, and its subsequent links in my comment, for details).

        http://tuts4tech.net/2009/08/05/windows-7-move-the-users-and-program-files-directories-to-a-different-partition/comment-page-1/#comment-562

    I was planning a similar setup for Windows 7, a little more streamlined, with the OS and Program Files on C:, Users and Program Data on D:, and TV media recording on a separate partition. Reading the question submitted by Benjol, I am second-guessing too. Is moving Users and Program Data to a different partition than the default primary partition holding the OS and Program Files such a good idea?

    The couple of people I talked to at the official Microsoft Windows 7 booth at CES 2010 gave the same answer about moving the Users profile folder to another partition. In a nutshell, they all told me that they used to do this in XP, less so in Vista, but not anymore with Windows 7... "It is stable; after two months, still no problem." I had the feeling it was a scripted answer to emphasize how stable and efficient Windows 7 is. (Will a Windows 7 system not become bogged down over the course of several months to a year or two? Only time will tell.)

    Long story short, I share the view Benjol expressed with respect to being "able to backup and restore system and user data independently." I just received a 2 TB USB 2.0/eSATA external hard drive as a backup drive, which includes NTI Shadow 4 (4.1.0.150) as a backup solution. I took note of the issue with NTUSER.DAT and I will read more about the Volume Shadow Copy Service (VSS) for Windows 7. I am willing to put in the effort if placing Users and Program Data on a different partition would allow me to restore a fresher OS + programs image when the system gets bogged down.

    Questions:
    - Is it such a bad idea?
    - What is the "easy route" referred to by Benjol in his post? Is it to just relocate folders to another partition using the folder Properties tool? (That is not practical for several users and might not provide a straightforward restore process of just the OS and Program Files when needed.)
    - I am starting to learn about Windows 7 libraries. Would Windows 7 libraries be another alternative to achieve this?

    All this reading to decide how to organize the partition scheme for my custom system is starting to be confusing. I apologize for this lengthy question. It is my first day here on SuperUser and I am just learning how different from a discussion thread it is. Thank you in advance for all your suggestions and comments. Donat


  • Exchange Mail Flow

    - by Tuck918
    Hello. I have a question. We have one Exchange 2003 server and two Exchange 2007 servers. Most of our mailboxes are on 2007, but we still have one shared mailbox, a Unity mailbox, and a journaling mailbox on 2003. Public folders have been set to replicate to 2007. I have set up a send connector on 2007 with a cost of 1. Receive connectors have Anonymous Users checked on 2007. On 2003 there are two connectors: the Internet Email connector and the connector that connects 2003 to 2007. We have a SPAM-filtering device that email goes through before it is handed off to Exchange. The SPAM-filtering device is set to send email to one of our Exchange 2007 servers.

    Here is my question/problem: even though the SPAM-filtering device is set to forward email to Exchange 2007, somehow all of our email is still going through the Exchange 2003 server before it finally hits the users' mailboxes on the Exchange 2007 server. How can I change it so that all email goes directly to Exchange 2007 and never routes through Exchange 2003, both inbound and outbound?

    I would also like to add: in the EMC under Org > Hub > Send Connector there are two connectors. One is the "Internet Connector" from the 2003 box, and the other is the new one I created. The address space on the 2003 one is set to a cost of 2, with no smart hosts, and the 2003 box is listed as the source server. The other send connector has an address space cost of 1, no smart host, and the two Exchange 2007 servers listed as the source servers. In the EMC under Server > Hub, my two Exchange 2007 servers are listed. Each one has two receive connectors, both set up the same way. The default receive connector has Anonymous Users checked. The other receive connector is labeled "Client" and I am not sure what it does or why it's there; Anonymous Users are not checked on it. No smart hosts are configured on 2003.

    Additional details: currently we have three Exchange servers - one Exchange 2003 server and two Exchange 2007 servers. The Exchange 2003 server is the acting "bridgehead" server, and all email is routing through it, inbound and outbound. We want to decommission this server and use our two Exchange 2007 servers as our mailbox servers. All of our user mailboxes are already on one of the Exchange 2007 boxes, and we want to put what's left on the Exchange 2003 box onto our other Exchange 2007 box. Both Exchange 2007 servers are currently CAS, HT, and MB servers. We have a SPAM-filtering device that sits between our Exchange servers and the firewall, and we have it configured to send messages to one of the Exchange 2007 servers, but when we look at the message headers we can see that messages are still being routed to the Exchange 2003 box. We want to bypass Exchange 2003 in the routing process, as it is dying and starting to have major issues, so every time it goes down our email is down. Is there possibly some sort of AD routing link/site link stuff going on?
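
    To see which connector is actually winning for outbound routing (the lowest-cost matching address space wins), a diagnostic sketch in the Exchange 2007 Management Shell; the cmdlets are standard, and inspecting the output is the point rather than changing anything yet:

        Get-SendConnector | Format-List Name,AddressSpaces,Enabled,SourceTransportServers
        Get-ReceiveConnector | Format-List Name,Bindings,PermissionGroups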


  • Launchd item no longer firing in Snow Leopard

    - by ridogi
    A launchd item that was working in 10.5 is no longer working after my upgrade to 10.6. I am running 10.6.2, and I have recreated the launchd item and given it a new name, and that one doesn't run either. I have found a link to people with the same problem on Google Groups, but none of the advice in that link helps. My launchd item is not listed in /private/var/db/launchd.db/com.apple.launchd/overrides.plist or in any of the overrides.plist files in the subdirectories of /private/var/db/launchd.db/. I have also tried to set this up as both a user agent and a user daemon. My launchd item simply runs a shell script, which I have no problem launching manually.

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>com.eric.tmnotify.launchd</string>
            <key>ProgramArguments</key>
            <array>
                <string>/<path_to>/tmnotify.sh</string>
            </array>
            <key>StartInterval</key>
            <integer>3600</integer>
        </dict>
        </plist>

    I have tried to load it by overriding the disabled key (even though it is not disabled in any of the overrides.plist files) with both:

        sudo launchctl load -F /Users/eric/Library/LaunchAgents/com.eric.tmnotify.launchd.plist
        sudo launchctl load -w /Users/eric/Library/LaunchAgents/com.eric.tmnotify.launchd.plist

    and after running either of them I can see that it is running by using sudo launchctl list, but the shell script never fires.

    Edit: I have also put this in the formerly blank file at /private/var/db/launchd.db/com.apple.launchd.peruser.501/overrides.plist:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>com.eric.tmnotify.launchd</key>
            <dict>
                <key>Disabled</key>
                <false/>
            </dict>
        </dict>
        </plist>

    I also tried inserting this alphabetically:

        <key>com.eric.tmnotify.launchd</key>
        <dict>
            <key>Disabled</key>
            <false/>
        </dict>

    into the file /private/var/db/launchd.db/com.apple.launchd/overrides.plist, but still no dice.
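
    A few hedged checks, using the label from the plist above. In particular, a per-user LaunchAgent loaded with sudo runs in root's launchd context on 10.6, which is a common reason a job shows as loaded but never fires:

        plutil -lint ~/Library/LaunchAgents/com.eric.tmnotify.launchd.plist  # validate the XML
        launchctl list | grep com.eric.tmnotify                              # is the job loaded at all?
        # load it as the logged-in user, without sudo:
        launchctl unload ~/Library/LaunchAgents/com.eric.tmnotify.launchd.plist
        launchctl load -w ~/Library/LaunchAgents/com.eric.tmnotify.launchd.plist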


  • How to use a different Ethernet connection

    - by SteveC
    I'm running a virtual machine at home which has a VPN connection to our main office, but I also want to connect to a share on another machine at home. When I check with ipconfig, I can see two Ethernet connections.

    My work VPN:

        Ethernet adapter Local Area Connection* 11:
           Connection-specific DNS Suffix . :
           Link-local IPv6 Address . . . . . : xxxx::xxxx:xxxx:xxxx:xxxxxxx
           IPv4 Address. . . . . . . . . . . : XXX.XXX.XXX.XXX
           Subnet Mask . . . . . . . . . . . : 255.255.254.0
           Default Gateway . . . . . . . . . : XXX.XXX.XXX.XXX

    And home local:

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix . : lan
           Link-local IPv6 Address . . . . . : xxxx::xxxx:xxxx:xxxx:xxxxxxx
           IPv4 Address. . . . . . . . . . . : 192.168.1.70
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . :

    What's weird is that when I've worked before with a plugged-in Ethernet cable, I've had no problem getting to the share. I can ping the other machine, but I can't access the share at \\othermachine\c$. I tried tracert, but that disappears off to the work network and eventually gets back to the local other machine after a few timeouts. Is there any way to "force" the connection to stay local?

    UPDATE: the VPN is AEP SSL Tunnel.
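
    One hedged way to pin the home subnet to the local NIC is a static route bound to that interface; whether it helps depends on how the AEP SSL tunnel installs its routes. The interface index 16 is taken from the %16 in the ipconfig output above; adjust to your own (run in an elevated prompt):

        route add 192.168.1.0 mask 255.255.255.0 192.168.1.70 metric 1 if 16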

    Read the article

  • I am starting to think that Prevx.com isn't a legit site... but here's my long-winded question

    - by cop1152
    I apologize in advance for the long-winded post. I posted it all because I believe it's informative and may be useful. Also, I posted my question at the end. Moments ago I was RDC'd into a file server in my home (from inside my home). I had opened Firefox and Googled for a manufacturer's website. Immediately after clicking the link, Firefox abruptly closed. This seemed odd to me, so I checked the running processes and discovered d.exe, e.exe, and f.exe running. I Googled these processes on a different machine and found them belonging to a key-logger/screen-capturer/trojan called defender.exe, which according to Prevx lives in c:\documents and settings\user\local settings\temp. (Prevx link http://www.prevx.com/filenames/147352809685142526-X1/DEFENDER32.EXE.html) Simultaneously, an obviously spoofed Windows Firewall popup appeared on the server asking me to click 'yes' to update Windows Firewall. At this time I ended all rogue processes, emptied the temp folder, removed defender.exe from startup, and checked my registry and a few other locations. Before deleting defender.exe I noted that it had been created moments earlier, just before Firefox crashed. I believe that I was 'almost' infected with this malware. I believe that it needed me to click the phony popup in order to complete the infection, because it wasn't allowed to execute processes from the temp folder. After cleaning the machine, I restarted it and have been monitoring it for over an hour. I am debating whether to restore the Windows partition (a separate physical drive from the data) or to just watch it for a while. I should mention that, because of the specs on this machine, I do not run antivirus software, but I know it well and inspect it regularly. It is a very old Compaq with a 400 MHz processor and 512 MB of RAM. I have a static IP and the server is in the DMZ running an FTP client and some HTTP server software. All files transferred to and stored on this machine are scanned for malware before transferring. Usually the machine only runs 19 processes and performs pretty well for its intended purpose. I posted the story so that you could be aware of a possible new piece of malware and how it acts, but I also have a question or two. First, over the last few months I have noticed that Prevx is listed at the top of most of my Google searches when researching malware, especially for new or obscure malware, and they always want you to purchase something. I don't think they are one of the top AV companies, so it seems odd that they are always the top Google result. Does anyone have any experience with any of their products? Also, what sites do you rely on for malware research? Recently, I have found it difficult to find good info because of HijackThis logs and other dead-end info cluttering up my searches. And lastly, besides antivirus, a third-party firewall, etc., what settings would you use to lock down a machine to make it more secure in instances where a stubborn admin like myself refuses to run AV? Thanks.

    Read the article

  • Using jQuery and SPServices to Display List Items

    - by Bil Simser
    I had an interesting challenge recently that I turned to Marc Anderson's wonderful SPServices project for. If you haven't already seen or used SPServices, please do. It's a jQuery library that does primarily two things. First, it wraps up all of the SharePoint web services in a nice little AJAX wrapper for use in JavaScript. Second, it enhances the form editing of items in SharePoint so you're not hacking up your List Form pages. My challenge was simple but interesting. The user wanted to display a SharePoint item page (DispForm.aspx, which already had some customization on it to display related items via this blog post from Codeless Solutions for SharePoint) but launch it from an external application using the value of one of the fields in the SharePoint list. For simplicity let's say my list is a list of customers and the related list is a list of orders for that customer. It would look something like this: Your first thought might be, that's easy! Display the customer information using a DataView Web Part and filter the item using a query string to match the customer number. However there are a few problems with this idea: You'll need to build a custom page and then attach that related orders view to it. This is a bit of a problem because the solution from Codeless Solutions relies on the Title field on the page being displayed; on a custom page you would have to recreate all of the elements found on the DispForm.aspx page for the related view to work. The DataView Web Part doesn't look *exactly* like the out-of-the-box display form page does. Not a huge problem, and it can be overcome with some CSS style overrides, but still, more work. A DVWP showing a single record doesn't have the same toolbar that you would get using DispForm.aspx. Not a show-stopper, and you can rebuild the toolbar, but it's going to potentially require code, and then there's the security trimming, etc. that you have to get right. DVWPs are not automatically updated if you add a column to the list like DispForm.aspx is. Work, work, work. For these reasons I thought it would be easier to take the already existing (modified) DispForm.aspx page and just add some jQuery magic to the page to find the item. Why do we need to find it? DispForm.aspx relies on a querystring parameter called "ID" and then displays whatever item has that ID number in the list. Trouble is, when you're coming in from an external app via a link, you don't know what that internal ID is (and frankly shouldn't). I don't like exposing internal SharePoint IDs to the outside world for the same reason I don't do it with database IDs. They're internal, and while it's fine to use them on the site itself, you don't want external links using them. They're volatile and can change (delete one item then re-add it back with the same data and watch any ID references break). The next thought might be to call a SharePoint web service with a CAML query to get the item ID number using some criteria (in this case, the customer number). That's great if you have that ability, but again, we had an existing application we were just adding a link to. The last thing I wanted to do was to crack open the code on that sucker and start calling web services (primarily because it's Java, but really I'm a lazy geek). However, if you're doing this and have access to call a web service, that would be an option.
Back to this problem, how do I a) find a SharePoint List Item based on some field value other than ID and b) make it low impact so I can just construct a URL to it? That's where jQuery and SPServices came to the rescue. After spending a few hours of emails back and forth with Marc and a couple of phone calls (and updating jQuery to the latest version, duh!) it was a simple answer. First we need a reference to a) jQuery, b) SPServices and c) our script. I just dropped a Content Editor Web Part, the Swiss Army Knife of Web Parts, onto the DispForm.aspx page and added these lines:
<script type="text/javascript" src="http://intranet/JavaScript/jquery-1.4.2.min.js"></script>
<script type="text/javascript" src="http://intranet/JavaScript/jquery.SPServices-0.5.3.min.js"></script>
<script type="text/javascript" src="http://intranet/JavaScript/RedirectToID.js"></script>
Update it to point to where you keep your scripts. I prefer to keep them all in Document Libraries, as I can make changes to them without having to remote into the server (and on a multiple web front end, that's just a PITA), it provides me with version control of sorts, and it's quick to add new plugins and scripts. Now we can look at our RedirectToID.js script. This invokes the SPServices library to call the GetListItems method of the Lists web service and then rewrites the URL to DispForm.aspx to use the correct SharePoint ID (the internal one).
$(document).ready(function(){

    var queryStringValues = $().SPServices.SPGetQueryString();
    var id = queryStringValues["ID"];

    if(id == "0")
    {
        var customer = queryStringValues["CustomerNumber"];
        var query = "<Query><Where><Eq><FieldRef Name='CustomerNumber'/><Value Type='Text'>" + customer + "</Value></Eq></Where></Query>";
        var url = window.location;
        $().SPServices({
            operation: "GetListItems",
            listName: "Customers",
            async: false,
            CAMLQuery: query,
            completefunc: function (xData, Status) {
                $(xData.responseXML).find("[nodeName=z:row]").each(function(){
                    id = $(this).attr("ows_ID");
                    url = $().SPServices.SPGetCurrentSite() + "/Lists/Customers/DispForm.aspx?ID=" + id;
                    window.location = url;
                });
            }
        });
    }
});
What's happening here? Line 3: We call SPServices.SPGetQueryString to get an array of query string values (a handy function in the library, as I had 15 lines of code to do this which are now gone). Line 4: Extract the ID value from the query string. Line 6: If we pass in "0" it means we're looking up a field value. This allows DispForm.aspx to work like normal with SharePoint lists but look up our values when invoked. Why ID at all? DispForm.aspx doesn't work unless you pass in something, and "0" is a *magic* number that will invoke the page but not look up a value in the database. Lines 8-15: Extract the CustomerNumber query string value, build a CAML query to find it, then call the GetListItems method using SPServices. Line 16: Process the results in our completefunc to iterate over all the rows (there should only be one) and extract the real ID of the item. Lines 17-20: Build a new URL based on the site (using a call to SPGetCurrentSite) and append our real ID to redirect to the DispForm.aspx page. As you can see, it dynamically creates a CAML query for the call to the web service using the passed-in value. You could even make this generic to take in different query strings, one for the field name to search for and the other for the value to find. That way it could be used for any field you want.
For example, you could bring up the correct item on the DispForm.aspx page based on customer name with something like this: http://myserver/Lists/Customers/DispForm.aspx?ID=0&FilterId=CustomerName&FilterValue=Sony Use your imagination. Some people would opt for building a custom page with a DVWP, but if you want to leverage all the functionality of DispForm.aspx without relying on internal SharePoint IDs, this might come in handy.
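For what it's worth, here is a minimal sketch of what that generic version might look like, reusing the same SPServices calls as RedirectToID.js above. The FilterId and FilterValue parameter names come from the example URL; the hard-coded list name and the assumption that the field is of type Text are simplifications you would adjust for your own list:

$(document).ready(function () {
    var queryStringValues = $().SPServices.SPGetQueryString();
    var id = queryStringValues["ID"];
    if (id == "0") {
        // The field to search and the value to find now both come from the query string.
        var filterField = queryStringValues["FilterId"];    // e.g. CustomerName
        var filterValue = queryStringValues["FilterValue"]; // e.g. Sony
        var query = "<Query><Where><Eq><FieldRef Name='" + filterField + "'/>" +
                    "<Value Type='Text'>" + filterValue + "</Value></Eq></Where></Query>";
        $().SPServices({
            operation: "GetListItems",
            listName: "Customers", // still hard-coded; this could be a parameter too
            async: false,
            CAMLQuery: query,
            completefunc: function (xData, Status) {
                $(xData.responseXML).find("[nodeName=z:row]").each(function () {
                    // Redirect to DispForm.aspx with the real internal ID.
                    window.location = $().SPServices.SPGetCurrentSite() +
                        "/Lists/Customers/DispForm.aspx?ID=" + $(this).attr("ows_ID");
                });
            }
        });
    }
});

A production version would probably also want to handle the case where no item (or more than one item) matches the filter.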

    Read the article

  • Scan a Windows PC for Viruses from a Ubuntu Live CD

    - by Trevor Bekolay
    Getting a virus is bad. Getting a virus that causes your computer to crash when you reboot is even worse. We’ll show you how to clean viruses from your computer even if you can’t boot into Windows by using a virus scanner in a Ubuntu Live CD. There are a number of virus scanners available for Ubuntu, but we’ve found that avast! is the best choice, with great detection rates and usability. Unfortunately, avast! does not have a proper 64-bit version, and forcing the install does not work properly. If you want to use avast! to scan for viruses, then ensure that you have a 32-bit Ubuntu Live CD. If you currently have a 64-bit Ubuntu Live CD on a bootable flash drive, it does not take long to wipe your flash drive and go through our guide again and select normal (32-bit) Ubuntu 9.10 instead of the x64 edition. For the purposes of fixing your Windows installation, the 64-bit Live CD will not provide any benefits. Once Ubuntu 9.10 boots up, open up Firefox by clicking on its icon in the top panel. Navigate to http://www.avast.com/linux-home-edition. Click on the Download tab, and then click on the link to download the DEB package. Save it to the default location. While avast! is downloading, click on the link to the registration form on the download page. Fill in the registration form if you do not already have a trial license for avast!. By the time you’ve filled out the registration form, avast! will hopefully be finished downloading. Open a terminal window by clicking on Applications in the top-left corner of the screen, then expanding the Accessories menu and clicking on Terminal. In the terminal window, type in the following commands, pressing enter after each line.
cd Downloads
sudo dpkg -i avast*
This will install avast! on the live Ubuntu environment. To ensure that you can use the latest virus database, while still in the terminal window, type in the following command:
sudo sysctl -w kernel.shmmax=128000000
Now we’re ready to open avast!. Click on Applications on the top-left corner of the screen, expand the Accessories folder, and click on the new avast! Antivirus item. You will first be greeted with a window that asks for your license key. Hopefully you’ve received it in your email by now; open the email that avast! sends you, copy the license key, and paste it in the Registration window. avast! Antivirus will open. You’ll notice that the virus database is outdated. Click on the Update database button and avast! will start downloading the latest virus database. To scan your Windows hard drive, you will need to “mount” it. While the virus database is downloading, click on Places on the top-left of your screen, and click on your Windows hard drive, if you can tell which one it is by its size. If you can’t tell which is the correct hard drive, then click on Computer and check out each hard drive until you find the right one. When you find it, make a note of the drive’s label, which appears in the menu bar of the file browser. Also note that your hard drive will now appear on your desktop. By now, your virus database should be updated. At the time this article was written, the most recent version was 100404-0. In the main avast! window, click on the radio button next to Selected folders and then click on the “+” button to the right of the list box. It will open up a dialog box to browse to a location. To find your Windows hard drive, click on the “>” next to the computer icon. In the expanded list, find the folder labelled “media” and click on the “>” next to it to expand it.
In this list, you should be able to find the label that corresponds to your Windows hard drive. If you want to scan a certain folder, then you can go further into this hierarchy and select that folder. However, we will scan the entire hard drive, so we’ll just press OK. Click on Start scan and avast! will start scanning your hard drive. If a virus is found, you’ll be prompted to select an action. If you know that the file is a virus, then you can Delete it, but there is the possibility of false positives, so you can also choose Move to chest to quarantine it. When avast! is done scanning, it will summarize what it found on your hard drive. You can take different actions on those files at this time by right-clicking on them and selecting the appropriate action. When you’re done, click Close. Your Windows PC is now free of viruses, in the eyes of avast!. Reboot your computer and with any luck it will now boot up! Alternatives to avast! If avast! and a liberal amount of Googling don’t fix your problem, it’s possible that a different virus scanner will fix your obscure issue. Here is a list of other virus scanners available for Ubuntu that are either free or offer free trials. See their support forums for help on installing these virus scanners. Avira AntiVir Personal for Linux / Solaris Panda Antivirus for Linux Installation and usage guide from Ubuntu F-PROT Antivirus for Linux ClamAV installation and usage guide from Ubuntu NOD32 Antivirus for Linux Kaspersky Anti-Virus 2010 Bitdefender Antivirus for Unices Conclusion Running avast! from a Ubuntu Live CD can clean the vast majority of viruses from your Windows PC. This is another reason to always have a Ubuntu Live CD ready, just in case something happens to your Windows installation!

    Read the article

  • Week in Geek: LastPass Rescues Xmarks Edition

    - by Asian Angel
    This week we learned how to breathe new life into an aging Windows Mobile 6.x device, use filters in Photoshop, back up and move VirtualBox machines, use the BitDefender Rescue CD to clean an infected PC, and had fun setting up a pirates theme on our computers. Photo by _nash. Weekly Feature Do you love using the Faenza icon set on your Ubuntu system but feel that there are a few much-needed icons missing (or you desire a different version of a particular icon)? Then you may want to take a look at the Faenza Variants icon pack. The icons are available in the following sizes: 16px, 22px, 32px, 48px and scalable sizes. Photo by Asian Angel. Faenza Variants Random Geek Links Another week with extra link goodness to help keep you on top of the news. Photo by Asian Angel. LastPass acquires Xmarks, premium service announced Xmarks announced that it has been acquired by LastPass, a cross-platform password management service. This also means that Xmarks is now in transition from a “free” to a “freemium” business model. WikiLeaks reappears on European Net domains WikiLeaks has re-emerged on a Swiss Internet domain, followed by domains in Germany, Finland, and the Netherlands, sidestepping a move that had in effect taken the controversial site off the Internet. Iran: Yes, Stuxnet hurt our nuclear program The Stuxnet worm got some big play from Iranian President Mahmoud Ahmadinejad, who acknowledged that the malware dinged his nuclear program. More Windows Rogues than Just AV – Fake Defragmenter Check Disk Don’t think for a second that rogues are limited to scareware, because as so-called products such as “System Defragmenter”, “Scan Disk” and “Check Disk” prove, they’re not. Internet Explorer’s Protected Mode can be bypassed Researchers from Verizon Business have now described a way of bypassing Protected Mode in IE 7 and 8 in order to gain access to user accounts. Can you really see who viewed your Facebook profile? Rogue application spreads virally Once again, a rogue application is spreading virally between Facebook users, pretending to offer you a way of seeing who has viewed your profile. More holes in Palm’s WebOS Researchers Orlando Barrera and Daniel Herrera, who both work for security firm SecTheory, have discovered a gaping security hole in Palm’s WebOS smartphone operating system. Next-gen banking Trojans hit APAC With the proliferation of banking Trojans, Web and smartphone users of online banking services have to be on constant alert to avoid falling prey to fraud schemes, warned Etay Maor, project manager for RSA Fraud Action. AVG update cripples 64-bit computers A signature update automatically deployed by the AVG virus scanner Thursday crippled numerous computers. The article includes a link to forums with fixes for computers affected after a restart. Congress moves to outlaw ‘mystery charges’ for Web shoppers Legislation that makes it illegal for Web merchants and so-called post-transaction marketers to charge credit cards without the card owners’ say-so came closer to becoming law this week. Ballmer Set to “Look Into” Windows Home Server Drive Extender Fiasco Tuesday’s announcement from Microsoft regarding the removal of Drive Extender from Windows Home Server has sent shock waves across the web. Google tweaks search recipe to ding scam artists Google has changed its search algorithm to penalize sites deemed to provide an “extremely poor user experience” following a New York Times story on a merchant who justified abusive behavior towards customers as a search-engine optimization tactic.
Geek Video of the Week Watch as our two friends debate back and forth about the early adoption of new technology through multiple time periods (Stone Age to the far future). Will our reluctant friend finally succumb to the temptation? Photo by CollegeHumor. Early Adopters Through History Random TinyHacker Links Fix Issues in Windows 7 Using Reliability Monitor Learn how to analyze Windows 7 errors and then fix them using the built-in reliability monitor. Learn About IE Tab Groups Tab groups is a useful feature in IE 8. Here’s a detailed guide to what it is all about. Google’s Book Helps You Learn About Browsers and Web A cool new online book by the Google Chrome team on browsers and the web. TrustPort Internet Security 2011 – Good Security from a Less Known Provider TrustPort is not exactly a well-known provider of security solutions. At least not in the consumer space. This review tests in detail their latest offering. How the World is Using Cell phones An infographic showing the shocking demographics of cell phone use. Super User Questions See the great answers to these questions from Super User. I am unable to access my C drive. It says it is unable to display current owner. List of Windows special directories/shortcuts like ‘%TEMP%’ Is using multiple passes for wiping a disk really necessary? How can I view two files side by side in Notepad++ Is there any tool that automatically puts screenshots to my Dropbox? How-To Geek Weekly Article Recap Look through our hottest articles from this past week at How-To Geek. How to Create a Software RAID Array in Windows 7 9 Alternatives for Windows Home Server’s Drive Extender Why Doesn’t Disk Cleanup Delete Everything from the Temp Folder? Ask the Readers: How Much Do You Customize Your Operating System? How to Upload Really Large Files to SkyDrive, Dropbox, or Email One Year Ago on How-To Geek Enjoy reading through these awesome articles from one year ago. How To Upgrade from Vista to Windows 7 Home Premium Edition How To Fix No Aero Transparency in Windows 7 Troubleshoot Startup Problems with Startup Repair Tool in Windows 7 & Vista Rename the Guest Account in Windows 7 for Enhanced Security Disable Error Reporting in XP, Vista, and Windows 7 The Geek Note That wraps things up here for this week. Regardless of the weather wherever you may be, we hope that you have an opportunity to get outside and have some fun! Remember to keep sending those great tips in to us at [email protected]. Photo by Tony the Misfit.

    Read the article

  • TFS 2010 Build Custom Activity for Merging Assemblies

    - by Jakob Ehn
    *** The sample build process template discussed in this post is available for download from here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/ILMerge.xaml *** In my previous post I talked about library builds that we use to build and replicate dependencies between applications in TFS. This is typically used for common libraries and tools that several other applications need to reference. When the libraries grow in size over time, so does the number of assemblies. All solutions that use the common library must reference all the assemblies they need, and if we, for example, do a refactoring and extract some code into a new assembly, all the clients must update their references to reflect these changes, otherwise they won't compile. To improve on this, we use a tool from Microsoft Research called ILMerge (download from here). It can be used to merge several assemblies into one assembly that contains all the types. If you haven't used this tool before, you should check it out. Previously I have implemented this in builds using a simple batch file that contains the full command, something like this:
"%ProgramFiles(x86)%\microsoft\ilmerge\ilmerge.exe" /target:library /attr:ClassLibrary1.bl.dll /out:MyNewLibrary.dll ClassLibrary1.dll ClassLibrary2.dll ClassLibrary3.dll
This merges 3 assemblies (ClassLibrary1, 2 and 3) into a new assembly called MyNewLibrary.dll. It will copy the attributes (file version, product version etc.) from ClassLibrary1.dll, using the /attr switch. For more info on the ILMerge command line tool, see the above link. This approach works, but requires a little too much knowledge from the developers creating builds; therefore I have implemented a custom activity that wraps the use of ILMerge. This makes it much simpler to set up a new build definition and have the build automatically do the merging. The usage of the activity is then implemented as part of the Library Build process template mentioned in the previous post. For this article I have just created a simple build process template that only performs the ILMerge operation. Below is the code for the custom activity. To make it compile, you need to reference the ILMerge.exe assembly.
/// <summary>
/// Activity for merging a list of assemblies into one, using ILMerge
/// </summary>
public sealed class ILMergeActivity : BaseCodeActivity
{
    /// <summary>
    /// A list of file paths to the assemblies that should be merged
    /// </summary>
    [RequiredArgument]
    public InArgument<IEnumerable<string>> InputAssemblies { get; set; }

    /// <summary>
    /// Full path to the generated assembly
    /// </summary>
    [RequiredArgument]
    public InArgument<string> OutputFile { get; set; }

    /// <summary>
    /// Which input assembly the attributes for the generated assembly should be copied from.
    /// Optional. If not specified, the first input assembly will be used
    /// </summary>
    public InArgument<string> AttributeFile { get; set; }

    /// <summary>
    /// Kind of assembly to generate, dll or exe
    /// </summary>
    public InArgument<TargetKindEnum> TargetKind { get; set; }

    // If your activity returns a value, derive from CodeActivity<TResult>
    // and return the value from the Execute method.
    protected override void Execute(CodeActivityContext context)
    {
        string message = InputAssemblies.Get(context).Aggregate("", (current, assembly) => current + (assembly + " "));
        TrackMessage(context, "Merging " + message + " into " + OutputFile.Get(context));

        ILMerge m = new ILMerge();
        m.SetInputAssemblies(InputAssemblies.Get(context).ToArray());
        m.TargetKind = TargetKind.Get(context) == TargetKindEnum.Dll ? ILMerge.Kind.Dll : ILMerge.Kind.Exe;
        m.OutputFile = OutputFile.Get(context);
        m.AttributeFile = !String.IsNullOrEmpty(AttributeFile.Get(context)) ? AttributeFile.Get(context) : InputAssemblies.Get(context).First();
        m.SetTargetPlatform(RuntimeEnvironment.GetSystemVersion().Substring(0,2), RuntimeEnvironment.GetRuntimeDirectory());
        m.Merge();

        TrackMessage(context, "Generated " + m.OutputFile);
    }
}

[Browsable(true)]
public enum TargetKindEnum
{
    Dll,
    Exe
}
NB: The activity inherits from a BaseCodeActivity class, which is an internal helper class containing some methods and properties useful for most custom activities. In this case, it uses the TrackMessage method for writing to the build log. You either need to remove the TrackMessage method calls or implement the method yourself (which is not very hard…). The custom activity has the following input arguments: InputAssemblies A list of the (full) paths of the assemblies to merge OutputFile The name of the resulting merged assembly AttributeFile Which assembly to use as the template for the attributes of the merged assembly. This argument is optional, and if left blank, the first assembly in the input list is used TargetKind Decides what type of assembly to create, either a dll or an exe Of course, there are more switches for ILMerge.exe, and these can be exposed as input arguments as well if you need them. To show how the custom activity can be used, I have attached a build process template (see the link at the top of this post) that merges the output of the projects being built (CommonLibrary.dll and CommonLibrary2.dll) into a merged assembly (NewLibrary.dll). The build process template has the following custom process parameters: The Assemblies To Merge argument is passed into a FindMatchingFiles activity to locate all matching assemblies in the BinariesDirectory folder after the compilation has been performed by Team Build. Here is the complete sequence of activities that performs the merge operation. It is located at the end of the Try, Compile, Test and Associate… sequence: It splits the AssembliesToMerge parameter, appends the full path (using the BinariesDirectory variable) and then enumerates the matching files using the FindMatchingFiles activity. When running the build, you can see that it merges two assemblies into a new one: And the merged assembly (and associated pdb file) is copied to the drop location together with the rest of the assemblies:

    Read the article

  • Metro: Introduction to the WinJS ListView Control

    - by Stephen.Walther
    The goal of this blog entry is to provide a quick introduction to the ListView control – just the bare minimum that you need to know to start using the control. When building Metro style applications using JavaScript, the ListView control is the primary control that you use for displaying lists of items. For example, if you are building a product catalog app, then you can use the ListView control to display the list of products. The ListView control supports several advanced features that I plan to discuss in future blog entries. For example, you can group the items in a ListView, you can create master/details views with a ListView, and you can efficiently work with large sets of items with a ListView. In this blog entry, we’ll keep things simple and focus on displaying a list of products. There are three things that you need to do in order to display a list of items with a ListView: create a data source, create an Item Template, and declare the ListView. Creating the ListView Data Source The first step is to create (or retrieve) the data that you want to display with the ListView. In most scenarios, you will want to bind a ListView to a WinJS.Binding.List object. The nice thing about the WinJS.Binding.List object is that it enables you to take a standard JavaScript array and convert the array into something that can be bound to the ListView. It doesn’t matter where the JavaScript array comes from. It could be a static array that you declare or you could retrieve the array as the result of an Ajax call to a remote server. The following JavaScript file – named products.js – contains a list of products which can be bound to a ListView.
(function () {
    "use strict";

    var products = new WinJS.Binding.List([
        { name: "Milk", price: 2.44 },
        { name: "Oranges", price: 1.99 },
        { name: "Wine", price: 8.55 },
        { name: "Apples", price: 2.44 },
        { name: "Steak", price: 1.99 },
        { name: "Eggs", price: 2.44 },
        { name: "Mushrooms", price: 1.99 },
        { name: "Yogurt", price: 2.44 },
        { name: "Soup", price: 1.99 },
        { name: "Cereal", price: 2.44 },
        { name: "Pepsi", price: 1.99 }
    ]);

    WinJS.Namespace.define("ListViewDemos", {
        products: products
    });
})();
The products variable represents a WinJS.Binding.List object. This object is initialized with a plain-old JavaScript array which represents an array of products. To avoid polluting the global namespace, the code above uses the module pattern and exposes the products using a namespace. The list of products is exposed to the world as ListViewDemos.products. To learn more about the module pattern and namespaces in WinJS, see my earlier blog entry: http://stephenwalther.com/blog/archive/2012/02/22/metro-namespaces-and-modules.aspx Creating the ListView Item Template The ListView control does not know how to render anything. It doesn’t know how you want each list item to appear. To get the ListView control to render something useful, you must create an Item Template. Here’s what our template for rendering an individual product looks like:
<div id="productTemplate" data-win-control="WinJS.Binding.Template">
    <div class="product">
        <span data-win-bind="innerText:name"></span>
        <span data-win-bind="innerText:price"></span>
    </div>
</div>
This template displays the product name and price from the data source. Normally, you will declare your template in the same file as you declare the ListView control. In our case, both the template and ListView are declared in the default.html file.
To learn more about templates, see my earlier blog entry: http://stephenwalther.com/blog/archive/2012/02/27/metro-using-templates.aspx Declaring the ListView The final step is to declare the ListView control in a page. Here’s the markup for declaring a ListView:
<div data-win-control="WinJS.UI.ListView"
     data-win-options="{ itemDataSource:ListViewDemos.products.dataSource, itemTemplate:select('#productTemplate') }">
</div>
You declare a ListView by adding the data-win-control attribute to an HTML DIV tag. The data-win-options attribute is used to set two properties of the ListView. The ListView is associated with its data source with the itemDataSource property. Notice that the data source is ListViewDemos.products.dataSource and not just ListViewDemos.products. You need to associate the ListView with the dataSource property. The ListView is associated with its item template with the help of the itemTemplate property. The ID of the item template – #productTemplate – is used to select the template from the page. Here’s what the complete version of the default.html page looks like:
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>ListViewDemos</title>

    <!-- WinJS references -->
    <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
    <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
    <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>

    <!-- ListViewDemos references -->
    <link href="/css/default.css" rel="stylesheet">
    <script src="/js/default.js"></script>
    <script src="/js/products.js" type="text/javascript"></script>

    <style type="text/css">
        .product {
            width: 200px;
            height: 100px;
            border: white solid 1px;
        }
    </style>
</head>
<body>
    <div id="productTemplate" data-win-control="WinJS.Binding.Template">
        <div class="product">
            <span data-win-bind="innerText:name"></span>
            <span data-win-bind="innerText:price"></span>
        </div>
    </div>

    <div data-win-control="WinJS.UI.ListView"
         data-win-options="{ itemDataSource:ListViewDemos.products.dataSource, itemTemplate:select('#productTemplate') }">
    </div>
</body>
</html>
Notice that the page above includes a reference to the products.js file: <script src="/js/products.js" type="text/javascript"></script> The page above also contains a Template control which contains the ListView item template. Finally, the page includes the declaration of the ListView control. Summary The goal of this blog entry was to describe the minimal set of steps which you must complete to use the WinJS ListView control to display a simple list of items. You learned how to create a data source, declare an item template, and declare a ListView control.
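One nice property of this setup, and part of why a WinJS.Binding.List is preferable to a plain array, is that the list is observable: changes to it are picked up by the bound ListView. The following is a minimal sketch (not from the original article) illustrating the idea; it assumes the default.html page above has loaded and its controls have been activated:

// Once the controls on the page have been processed, mutating the
// WinJS.Binding.List is reflected in the bound ListView automatically.
WinJS.UI.processAll().then(function () {
    // Adds a new item, rendered with productTemplate, to the ListView.
    ListViewDemos.products.push({ name: "Bread", price: 1.50 });

    // Removes the first item ("Milk"); the ListView updates to match.
    ListViewDemos.products.shift();
});

No manual refresh of the ListView should be needed for either change.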

    Read the article

  • Tips on installing Visual Studio 2010 SP1

    - by Jon Galloway
    Visual Studio SP1 went up on MSDN downloads (here) on March 8, and will be released publicly on March 10 here. Release announcements: Soma: Visual Studio 2010 enhancements Jason Zander: Announcing Visual Studio 2010 Service Pack 1 I started on this post with tips on installing VS2010 SP1 when I realized I’ve been writing these up for Visual Studio and .NET Framework SP releases for a while (e.g. VS2008 / .NET 3.5 SP1 post, VS2005 SP1 post). Looking back over the years of Visual Studio SP installs (and remembering when we’d get up to SP6 for a Visual Studio release), I’m happy to see that it just keeps getting easier. Service packs are a lot less finicky about requiring beta software to be uninstalled, install more quickly, and are just generally a lot less scary. If I can’t have a jetpack, at least my future provided me faster, easier service packs. Disclaimer: These tips are just general things I've picked up over the years. I don't have any inside knowledge here. If you see anything wrong, be sure to let me know in the comments. You may want to check the readme file before installing - it's short, and it's in that new-fangled HTML format. On with the tips! Before starting, uninstall Visual Studio features you don't use Visual Studio service packs (and other Microsoft service packs as well) install patches for the specific features you’ve got installed. This is a big reason to always do a custom install when you first install Visual Studio, but it’s not difficult to update your existing installation. Here’s the quick way to do that: Tap the Windows key and type “add or remove programs” and press Enter (or click on the “Add or remove programs” link if you must). Type “Visual Studio 2010” in the search box in the upper right corner, click on the Visual Studio program (the one with the VS infinity-looking logo) and click on Uninstall/Change. Click on Add or Remove Features. The next part’s up to you – what features do you actually use? I’ve been doing primarily ASP.NET MVC development in C# lately, so I selected Visual C# and Visual Web Developer. Remember that you can install features later if needed, and can also install the Express versions if you want. Selecting everything just because it’s there - or because you paid for it - means that you install updates for everything, every time. When you’ve made your changes, click on the Update button to uninstall unused features. Shut down all instances of Visual Studio It probably goes without saying that you should close a program down before installing it, partly to avoid the file-in-use-reboot-after-install horror. An additional "hunch / works on my machine" quality tip: on one computer I saw a note in the setup log about a prompt for user input to close Visual Studio, although I never saw the prompt. Just to be sure, I'd personally open up Task Manager and kill any devenv.exe processes I saw running, as it couldn't hurt. Use the web installer I use the web installers whenever possible. There’s no point in downloading the DVD unless you’re doing multiple installs or won’t have internet access. The DVD ISO is 1.5GB, since it needs to be able to service every possible supported installation option on both x86 and x64. The web installer is 776 KB (smaller than calc.exe), so you can start the installation right away. Like other web installers, the real benefit is that it only installs the updates you need (hence the reason for step 1 – uninstalling unused components). Instead of 1.5GB, my download was roughly 530MB.
If you’re installing from MSDN (this link takes you right to the Visual Studio installs), select the first one on the list: The first step in the installation process is to analyze the machine configuration and tell you what needs to be installed. Since I've trimmed down my features, that's a pretty short list. The time's not far off where I may not install SQL Server on my dev machines at all, just using SQL Server Compact - that would shorten the list further. When I hit next, you can see that the download size has shrunk considerably. When I start the install, note that the installation begins while other components are downloading - another benefit of the web install. On my mid-range desktop machine, the install took 25 minutes. What if it takes longer? According to Heath Stewart (Visual Studio installer guru), average SP1 installs take roughly 45 minutes. An installation which takes hours to complete may be a sign of a problem: see his post Visual Studio 2010 Service Pack 1 installing for over 2 hours could be a sign of a problem. Why so long? Yes, even 25 minutes is a while. Heath has another blog post (see: A patch may take as long or longer to install than the target product) explaining why the update can take as long as, or longer than, the initial install: it covers all the additional steps and complexities a patch needs to deal with, as well as some steps that deployment authors can take to mitigate the impact. Other things to know about Visual Studio 2010 SP1 Installs over Visual Studio 2010 SP1 Beta That's nice. Previous Visual Studio versions did a number of annoying things when you installed SPs over betas - failing with weird errors, getting part way through and telling you to cancel and uninstall first, etc. I've installed this on two machines that had random beta stuff installed, without tears. That readme file you didn't read I mentioned the readme file earlier (http://go.microsoft.com/fwlink/?LinkId=210711). Some interesting things I picked up in there: 2.1.3. Visual Studio 2010 Service Pack 1 installation may fail when a USB drive or other removable drive is connected 2.1.4. Visual Studio must be restarted after Visual Studio 2010 SP1 tooling for SQL Server Compact (Compact) 4.0 is installed 2.2.1. If Visual Studio 2010 Service Pack 1 is uninstalled, Visual Studio 2010 must be reinstalled to restore certain components 2.2.2. If Visual Studio 2010 Service Pack 1 is uninstalled, Visual Studio 2010 must be reinstalled before SP1 can be installed again 2.4.3.1. Async CTP If you installed the pre-SP1 version of the Async CTP but did not uninstall it before you installed Visual Studio 2010 SP1, then your computer will be in a state in which the version of the C# compiler in the .NET Framework does not match the C# compiler in Visual Studio. To resolve this issue: after you install Visual Studio 2010 SP1, reinstall the SP1 version of the Async CTP from here. Hardware acceleration for Visual Studio is disabled on Windows XP Visual Studio 2010 SP1 disables hardware acceleration when running on Windows XP (only on XP). You can turn it back on in the Visual Studio options, under Environment / General, as shown below. See Jason Zander's post titled Performance Troubleshooting Article and VS2010 SP1 Change.

    Read the article

  • Security Trimmed Cross Site Collection Navigation

    - by Sahil Malik
    This article will serve as documentation of a fully functional codeplex project that I just created. This project will give you a WebPart that provides security-trimmed navigation across site collections. The first question is, why create such a project? In every single SharePoint project you will do, one question you will always be faced with is: what should the boundaries of sites be, and what should the boundaries of site collections be? There is no good or bad answer to this, because it really, really depends on your needs. There are some factors in play here. Site collections will allow you to scale, as a site collection is the smallest entity you can put inside a content database. Site collections will allow you to offer different levels of SLAs, because you can put a site collection on a separate content database, and put that database on a separate server. Site collections are a security boundary – and they can be moved around at will without affecting other site collections. Site collections are also a branding boundary. They are also a feature deployment boundary, so you can have two site collections on the same web application with completely different sets of services. But site collections break navigation, i.e. a site collection at “/” and a site collection at “/sites/mySiteCollection” are completely independent of each other. If you have access to both, the navigation of / won’t show you a link to /sites/mySiteCollection. Some people refer to this as a huge issue in SharePoint. Luckily, some workarounds exist. A long time ago, I blogged about “Implementing Consistent Navigation across Site Collections”. That approach was a no-code solution, and it worked – it gave you a consistent navigation across site collections. But it didn’t work in a security-trimmed fashion! I.e., if I don’t have access to Site Collection ‘X’, it would still show me a link to ‘X’. Well, this project gets around that issue. Simply deploy this project, and it’ll give you a WebPart. You can use it either as a WebPart or as a server control dropped via SharePoint Designer, and it will give you security-trimmed cross-site-collection navigation. The code has been written for SP2010, but it will work in SP2007 with the help of http://spwcfsupport.codeplex.com . What do I need to do to make it work? I’m glad you asked! Simple! Deploy the .wsp (which you can download here). This will give you a site collection feature called “Winsmarts Cross Site Collection Navigation” as shown below. Go ahead and activate it, and this will give you a WebPart called “Winsmarts Navigation Web Part” as shown below: Just drop this WebPart on your page, and it will show you all site collections that the currently logged-in user has access to. Really, it’s that easy! This is shown below - In the above example, I have two site collections that I created at /sites/SiteCollection1 and /sites/SiteCollection2. The navigation shows the titles. You see some extraneous crap as well; you might want to clean that up – I’ll talk about that in a minute. What? You’re running into problems? If the problem you’re running into is that you are prompted to log in three times, and then it shows a blank WebPart that says “Loading your applications ..” and then craps out, then most probably you’re using a different authentication scheme. Behind the scenes I use a custom WCF service to perform this job.
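If you are curious what that client side looks like conceptually, the sketch below illustrates the pattern: jQuery on the page calls a REST-style (webHttpBinding) WCF endpoint and renders one link per site collection. Note that the endpoint URL, method name and JSON shape here are hypothetical placeholders, not the actual contract from the codeplex project; see the real source for the specifics.

// Hypothetical sketch only -- the URL and response shape are assumptions.
$.ajax({
    url: "/_layouts/Winsmarts.CrossSCNav/NavService.svc/GetSiteCollections",
    dataType: "json",
    success: function (siteCollections) {
        var nav = $("#crossSCNav").empty();
        $.each(siteCollections, function (i, sc) {
            // The service only returns site collections the current user can
            // read, which is what makes the navigation security trimmed.
            nav.append($("<a>").attr("href", sc.Url).text(sc.Title)).append(" ");
        });
    },
    error: function () {
        // Roughly the failure mode described above.
        $("#crossSCNav").text("Loading your applications .. failed");
    }
});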
OOTB, I’ve set it to work with NTLM, but if you need to make it work with alternate authentication schemes such as forms-based auth or client-side certs, you will need to edit the %14%\ISAPI\Winsmarts.CrossSCNav\web.config file, specifically this section -
<bindings>
  <webHttpBinding>
    <binding name="customWebHttpBinding">
      <security mode="TransportCredentialOnly">
        <transport clientCredentialType="Ntlm"/>
      </security>
    </binding>
  </webHttpBinding>
</bindings>
For Kerberos, change the “clientCredentialType” to “Windows”. For forms auth, remove that transport line. For client certs – well, that’s a bit more involved, but it’s just web.config changes – hit a good book on WCF or hire me for a billion trillion $. But fair warning, I might be too busy to help immediately. If you’re running into a different problem, please leave a comment below, but the code is pretty rock solid, so .. hmm .. check what you’re doing! BTW, I don’t make any guarantee/warranty on this – if this code makes you sterile, unpopular, gives you a bad hairstyle, or anything else, that is your problem! But, there are some known issues - I wrote this as a concept; you can easily extend it to be more flexible. For example, hierarchical nav, horizontal nav, or jazzy effects with jQuery or Silverlight – all of those are possible very easily. This WebPart is not smart enough to co-exist with another instance of itself on the same page. I can easily extend it to do so, which I will do in my spare(!?) time! Okay good! But that’s not all! As you can see, just dropping the WebPart may show you many extraneous site collections, or maybe you want to restrict which site collections are shown, or exclude a certain site collection from the navigation. To support that, I created a property on the WebPart called “UrlMatchPattern”, which is a regex expression you specify to trim the results :). So, just edit the WebPart, and specify a string property of “http://sp2010/sites/” as shown below. Note that you can put in whatever regex expression you want! So go crazy, I don’t care! And this gives you a cleaner look. w00t! Enjoy!
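To make the UrlMatchPattern trimming concrete, here is a small illustration of the matching semantics. The actual match happens server side inside the WebPart, so this snippet is purely illustrative; the URLs are the ones from the example above.

// Illustration only: filter site collection URLs with a regex pattern.
var urlMatchPattern = "http://sp2010/sites/";
var regex = new RegExp(urlMatchPattern);

var allSiteCollections = [
    "http://sp2010",                       // root site collection
    "http://sp2010/sites/SiteCollection1",
    "http://sp2010/sites/SiteCollection2"
];

// Keep only the site collections whose URL matches the pattern; the root
// site collection (the "extraneous" entry) is dropped from the navigation.
var trimmed = allSiteCollections.filter(function (url) {
    return regex.test(url);
});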

    Read the article
