Search Results

Search found 18976 results on 760 pages for 'ash machine'.


  • FreeBSD with Vagrant - don't know how to check guest additions version

    - by joelmaranhao
    On Mac OS X 10.9.3 Picked a box from the VagrantCloud Init the vagrant box $ vagrant init chef/freebsd-9.2-i386 A `Vagrantfile` has been placed in this directory. You are now ready to `vagrant up` your first virtual environment! Please read the comments in the Vagrantfile as well as documentation on `vagrantup.com` for more information on using Vagrant. List the files $ ls -al -rw-r--r-- 1 joel staff 4831 Jun 5 17:17 Vagrantfile Vagrantfile content VAGRANTFILE_API_VERSION = "2" Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| config.vm.box = "chef/freebsd-9.2-i386" end Starting my virtual box leads to Errors $ vagrant up Bringing machine 'default' up with 'virtualbox' provider... ==> default: Box 'chef/freebsd-9.2-i386' could not be found. Attempting to find and install... default: Box Provider: virtualbox default: Box Version: >= 0 ==> default: Loading metadata for box 'chef/freebsd-9.2-i386' default: URL: https://vagrantcloud.com/chef/freebsd-9.2-i386 ==> default: Adding box 'chef/freebsd-9.2-i386' (v1.0.0) for provider: virtualbox default: Downloading: https://vagrantcloud.com/chef/freebsd-9.2-i386/version/1/provider/virtualbox.box ==> default: Successfully added box 'chef/freebsd-9.2-i386' (v1.0.0) for 'virtualbox'! ==> default: Importing base box 'chef/freebsd-9.2-i386'... ==> default: Matching MAC address for NAT networking... ==> default: Checking if box 'chef/freebsd-9.2-i386' is up to date... ==> default: Setting the name of the VM: freebsd92-i386_default_1401982167145_49633 ==> default: Fixed port collision for 22 => 2222. Now on port 2201. ==> default: Clearing any previously set network interfaces... ==> default: Preparing network interfaces based on configuration... default: Adapter 1: nat ==> default: Forwarding ports... default: 22 => 2201 (adapter 1) ==> default: Booting VM... ==> default: Waiting for machine to boot. This may take a few minutes... default: SSH address: 127.0.0.1:2201 default: SSH username: vagrant default: SSH auth method: private key default: Warning: Connection timeout. Retrying... default: Warning: Connection timeout. Retrying... ==> default: Machine booted and ready! Sorry, don't know how to check guest version of Virtualbox Guest Additions on this platform. Stopping installation. ==> default: Checking for guest additions in VM... default: The guest additions on this VM do not match the installed version of default: VirtualBox! In most cases this is fine, but in rare cases it can default: prevent things such as shared folders from working properly. If you see default: shared folder errors, please make sure the guest additions within the default: virtual machine match the version of VirtualBox you have installed on default: your host and reload your VM. default: default: Guest Additions Version: 4.2.16 default: VirtualBox Version: 4.3 ==> default: Mounting shared folders... default: /vagrant => /Users/joel/Code/anybots/operations/robot/freebsd92-i386 Vagrant attempted to execute the capability 'mount_virtualbox_shared_folder' on the detect guest OS 'freebsd', but the guest doesn't support that capability. This capability is required for your configuration of Vagrant. Please either reconfigure Vagrant to avoid this capability or fix the issue by creating the capability. Note that I have recently installed the latest version of VirtualBox, but somehow I can't find the Guest Additions.
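
    Since VirtualBox shared folders depend on matching guest additions that this FreeBSD box does not provide, one commonly suggested workaround is to stop Vagrant from mounting the default /vagrant share, or to switch it to rsync, which needs no guest additions at all. A minimal sketch of the Vagrantfile, assuming nothing beyond the box name already used above:

    ```ruby
    # Vagrantfile - sketch of a possible workaround
    VAGRANTFILE_API_VERSION = "2"

    Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
      config.vm.box = "chef/freebsd-9.2-i386"

      # VirtualBox shared folders require matching guest additions, which this
      # FreeBSD guest cannot supply, so disable the default /vagrant mount...
      config.vm.synced_folder ".", "/vagrant", disabled: true

      # ...or sync the folder with rsync instead, which avoids guest additions entirely.
      # config.vm.synced_folder ".", "/vagrant", type: "rsync"
    end
    ```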

    Read the article

  • PC runs very slowly for no apparent reason

    - by GalacticCowboy
    I have a Dell Latitude D820 that I've owned for about 2.5 years. It is a Core 2 Duo T7200 2.0GHz, with 2 GB of RAM, an 80 GB hard drive and an NVidia Quadro 120M video card. The computer was purchased in late November of 2006 with XP Pro, and included a free upgrade to Vista Business. (Vista was available on MSDN but not yet via retail, so the Vista Business upgrades weren't shipped until March of '07.) Since we had an MSDN subscription at the time, I installed Vista Ultimate on it pretty much as soon as I got it. It ran happily until sometime in the spring of 2007 when Media Center (which I had never used except to watch DVDs) started throwing some kind of bizarre SQL (CE?) error. This error would pop up at random times just while using the computer. Furthermore, Media Center would no longer start. I never identified the cause of this error. I had the Vista Business upgrade by this time, so I nuked the machine, installed XP and all the drivers, and then the Vista Business upgrade. Again, it ran happily for a few months and then started behaving badly once again. Vista Business doesn't have Media Center, so this exhibited completely unrelated symptoms. For no apparent reason and at fairly random times, the machine would suddenly appear to freeze up or run very slowly. For example, launching a new application window (any app) might take 30-45 seconds to paint fully. However, Task Manager showed very low CPU load, memory, etc. I tried all the normal stuff (chkdsk, defrag, etc.) and ran several diagnostic programs to try to identify any problems, but none found anything. It eventually reached the point that the computer was all but unusable, so I nuked it again and installed XP. This time I decided to stick with XP instead of going to Vista. However, within the past couple of months it has started to exhibit the same symptoms in XP that I used to see in Vista. The computer is still under Dell warranty until December, but so far they aren't any help unless I can identify a specific problem. A friend (partner in a now-dead business) has an identical machine that was purchased at the same time. His machine exhibits none of these symptoms, which leads me to believe it is a hardware issue, but I can't figure out how to identify it. Any ideas? Utilities? Seen something similar? At this point I can't even identify any pattern to the behavior, but would be willing to run a "stress test"-style app for as long as a couple days if I had any hope that it would find something. EDIT July 17 I'm testing jerryjvl's answer regarding the video card, though I'm not sure it fully explains the symptoms yet. This morning I ran a video stress test. The test itself ran fine, but immediately afterward the PC started acting up again. I left ProcExp open and various system processes were consuming 50-60% of the CPU but with no apparent reason. For example, "services.exe" was eating about 40%, but the sum of its child processes wasn't higher than about 5%. I left it alone for several minutes to settle down, and then it was fine again. I used the "video card stability test" from firestone-group.com. Its output isn't very detailed, but it at least exercises the hardware pretty hard. EDIT July 22 Thanks for your excellent suggestions. Here is an update on what I have tried so far. Ran memtest86, SeaTools (Seagate), Hitachi drive fitness test, video card stability test (mentioned above). The video card test was the only one that seemed to produce any results, though it didn't occur during the actual test. 
I defragged the drive (again...) with JkDefrag. I dropped the video card
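
    Since Task Manager snapshots show nothing during the stalls, one lightweight option is to leave a performance-counter log running and look at what spikes when the machine freezes. A sketch using typeperf, which ships with XP Professional; the counter paths shown are the standard English ones and the output file name is arbitrary:

    ```
    rem log per-process CPU and overall disk queue length every 5 seconds to a CSV file
    typeperf "\Process(*)\% Processor Time" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 5 -o perf_log.csv
    ```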

    Read the article

  • Windows XP Video Configuration Issues

    - by Matt
    Recently I had my motherboard burn out on me. Needing the machine for work, I purchased a different motherboard and installed that. Generally a reinstall of windows is good at that point but I am not in a position to do that so I just decided I would live with it for now. When I can log-in, everything works fine, what doesn't is getting to the log-in prompt to begin with. Basically when I first installed the new mobo, every time I rebooted the machine, I would not get the windows login prompt. One of the monitors would receive a signal but the screen would be black. Moving the mouse would not show the cursor and hitting the up arrow key and typing my password and hitting enter (which will normally log you in without mouse) wouldn't change anything. I would then change the monitor configuration around (2 lcd's and a crt) and reboot and at least one of the monitors would work and display the login prompt. I could then go into display properties and turn on the other monitors. However if I rebooted again, I would get the black screen on one monitor again. I would then have to change the configuration again to one not used before and I could re-do the manual setup at that point. I think windows saves the configurations so I had to keep giving it new ones. Needless to say I've been trying to not turn off my machine. Early this week I actually got the prompt to come up without playing musical monitors. Thinking everything was getting better, I found no harm in rebooting to install the latest windows updates. Boy was I wrong. Now no matter what I do I can't get a windows log-in prompt to display. I've tried almost every conceivable combination. The new mobo has onboard video so I set that in the bios (yea bios screen always displays fine, its not until windows boots that there is a problem) to be the primary video. Still no luck. I have two other graphics cards in the machine which I'm using. Tried all kinds of configurations between those and on-board but still get this black screen of death. I read somewhere that deleting the video drivers would reset the configurations. I logged into safe mode (which works on one monitor), and uninstalled the display drivers. Still no luck and when I booted back into safe mode, it wanted to install new hardware and the display adapters weren't there as expected. Anyone have any ideas? A fresh install would be a pain and I might be getting my old board back from RMA soon so not sure I want to go through with that just yet. Only thing I can think of is to continue to try other combinations like physically removing the graphics cards. They are both EVGA 8600 cards and the windows boot screen does display fwiw.
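
    One hedged way to at least get a visible login prompt while sorting out the adapters is to force the standard VGA driver at boot, either by pressing F8 and choosing "Enable VGA Mode" or by adding a boot.ini entry with the /basevideo switch. A sketch of such an entry; the ARC path is an example and must match the existing line in your boot.ini:

    ```
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP Professional (VGA mode)" /fastdetect /basevideo
    ```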

    Read the article

  • Dynamic DNS registration for VPN clients

    - by Eric Falsken
    I've got a VPN server set up in my Active Directory on a remote network. (The VPN server is a separate box from DNS/AD.) When I dial into the network (the client machine is not a member of the AD), the machine does not register its IP or hostname in the DNS. I've played with all possible combinations of DHCP and RRAS-allocated IP pools, and none of them seem to cause my client to register. Is it because my client has to be a member of the domain? Are there some security settings I can tweak so that it can register its hostname/IP? I've looked in the event logs (System and Security) for the AD, DNS, DHCP, RRAS, and the client machine, and don't see anything relating to DNS registration. Here's the IPConfig on the client machine (once connected):

       PPP adapter My VPN Name:
          Connection-specific DNS Suffix . : mydomain.local
          Description . . . . . . . . . . . : My VPN Name
          Physical Address. . . . . . . . . :
          DHCP Enabled. . . . . . . . . . . : No
          Autoconfiguration Enabled . . . . : Yes
          IPv4 Address. . . . . . . . . . . : 192.168.1.22(Preferred)
          Subnet Mask . . . . . . . . . . . : 255.255.255.255
          Default Gateway . . . . . . . . . :
          DNS Servers . . . . . . . . . . . : 192.168.1.52 <- DC1
                                              192.168.1.53 <- DC2
          NetBIOS over Tcpip. . . . . . . . : Enabled
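
    AD-integrated zones normally accept only secure dynamic updates, so a client that is not a domain member usually cannot register itself; that is very likely why nothing shows up. As a quick test (not a fix), you can ask the client to attempt registration and then query one of the DCs directly; the client hostname below is a placeholder:

    ```
    rem force the client to attempt dynamic DNS registration on all adapters
    ipconfig /registerdns

    rem then check whether the record appeared on DC1
    nslookup myclient.mydomain.local 192.168.1.52
    ```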

    Read the article

  • Putting indexes in separate filegroup kills our queries

    - by womp
    Can anyone shed some light on this? On our dev boxes, our database resides entirely in the PRIMARY filegroup, and everything works fine. On one of our production servers, recently upgraded from 2005 to 2008, we noticed it was performing slower than it should. On this machine, there are two filegroups - PRIMARY and INDEXES. Both filegroups contain 1 file per logical volume, one logical volume per CPU, (and each logical volume is a RAID 10 of 4 physical disks). We isolated a few queries that were performing fast on the dev boxes and slow (up to 40x slower) on the production machine. Turned out these queries were using the non-clustered indexes that resided in the INDEXES filegroup. Tweaking some of the queries to only use clustered indexes that were in the PRIMARY filegroup dropped their times back to normal. As a final confirmation, we redeployed the same database on the same machine to have everything in PRIMARY, and things went back to normal! Here's the statistics output of one of the queries, run identically on the machine with different filegroup configurations (table names changed to protect the innocent): FAST (everything in PRIMARY filegroup): (3 row(s) affected) Table '0'. Scan count 2, logical reads 14, ... Table '1'. Scan count 0, logical reads 0, ... Table '1'. Scan count 0, logical reads 0, ... Table '2'. Scan count 2, logical reads 7, ... Table '3'. Scan count 2, logical reads 1012, ... Table '4'. Scan count 1, logical reads 3, ... SQL Server Execution Times: CPU time = 437 ms, elapsed time = 445 ms. SLOW (indexes split into their own filegroup): (3 row(s) affected) Table '0'. Scan count 209, logical reads 428, ... Table '1'. Scan count 0, logical reads 0,... Table '2'. Scan count 1021, logical reads 9043,.... Table '3'. Scan count 209, logical reads 105754, .... Table '4'. Scan count 0, logical reads 0, .... Table '5'. Scan count 1, logical reads 695, ... **Table '#46DA8CA9'. Scan count 205, logical reads 205, ...** Table '6'. Scan count 6, logical reads 436, ... Table '7'. Scan count 1, logical reads 12,.... SQL Server Execution Times: CPU time = 17581 ms, elapsed time = 17595 ms. Notice the weird temp table and extra tables involved in the slow query. It seems clear that having a second file group is making SQL Server batty with choosing an execution plan. What the heck is going on?
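
    A quick way to confirm which filegroup each index actually lives in (and therefore which plans cross over into the INDEXES volume) is to query the catalog views; 'dbo.YourTable' below is a placeholder:

    ```sql
    -- list each index of a table together with the filegroup that stores it
    SELECT  i.name      AS index_name,
            i.type_desc AS index_type,
            ds.name     AS filegroup_name
    FROM    sys.indexes     AS i
    JOIN    sys.data_spaces AS ds ON ds.data_space_id = i.data_space_id
    WHERE   i.object_id = OBJECT_ID('dbo.YourTable');
    ```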

    Read the article

  • How to use more than 3 virtual disks in Linux using CentOS and XenServer

    - by 010110110101
    I've attached 5 virtual disks to a Virtual Machine in Citrix XenServer. The VM has the xs-tools installed. Initially it said that it couldn't add so many disks. After I installed the xs-tools, it let me add all the disks. But /dev doesn't show all the disks. It shows these: /dev/xvda /dev/xvdb /dev/xvdc /dev/cdrom Perhaps it is bound by the limits of an IDE bus? (3 disks + CD-ROM) If so, how does one change the VM to use SCSI? Edit: According to the documentation: 2.6.3. VM Block Devices In the PV Linux case, block devices are passed through as PV devices. XenServer does not attempt to emulate SCSI or IDE, but instead provides a more suitable interface in the virtual environment in the form of xvd* devices. It is also possible to get an sd* device using the same mechanism, where the PV driver inside the VM takes over the SCSI device namespace. This is not desirable so it is best to use xvd* where possible for PV guests (this is the default for Debian and RHEL). For Windows or other fully virtualized guests, XenServer emulates an IDE bus in the form of an hd* device. When using Windows, installing the Citrix Tools for Virtual Machines installs a special PV driver that works in a similar way to Linux, except in the fully virtualized environment. Still, with 5 virtual disks attached, I don't see the other xvd devices. Edit #2: (attached requested info) Host Machine: XenServer 6.1 Linux version 2.6.32.43-0.4.1.xs1.6.10.777.170770xen (geeko@buildhost) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-51)) #1 SMP Wed Apr 17 05:52:03 EDT 2013 Guest Machine: CentOS release 6.4 (Final) Linux version 2.6.32-358.6.2.el6.x86_64 ([email protected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 20:59:36 UTC 2013 Output of 'fdisk -l' on Guest Machine: Note, the disk beyond the first 3 attached are not displaying -- there should be 4 100GB disks. (There are a total of 5 disks displayed in XenCenter -- 16GB, 100GB, 100GB, 100GB, 100GB) Disk /dev/xvdb: 107.4 GB, 107374182400 bytes 255 heads, 63 sectors/track, 13054 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xfb6c95b9 Device Boot Start End Blocks Id System /dev/xvdb1 1 13054 104856223+ 83 Linux Disk /dev/xvda: 17.2 GB, 17179869184 bytes 255 heads, 63 sectors/track, 2088 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000e5f41 Device Boot Start End Blocks Id System /dev/xvda1 * 1 64 512000 83 Linux Partition 1 does not end on cylinder boundary. 
/dev/xvda2 64 2089 16264192 8e Linux LVM Disk /dev/xvdc: 107.4 GB, 107374182400 bytes 255 heads, 63 sectors/track, 13054 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xed249ced Device Boot Start End Blocks Id System /dev/xvdc1 1 13054 104856223+ 83 Linux Disk /dev/mapper/vg_blue-lv_root: 14.6 GB, 14571012096 bytes 255 heads, 63 sectors/track, 1771 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_blue-lv_swap: 2080 MB, 2080374784 bytes 255 heads, 63 sectors/track, 252 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 I see that the Linux versions say SMP. The Guest VM doesn't say "xen" in the name. However, I have already run yum install kernel-xen. Could be a clue?
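
    It may be worth confirming from dom0 that all five VBDs are actually plugged into the running VM, since an unplugged VBD produces exactly this symptom (no xvd* node appears in the guest). A sketch using the standard xe CLI; the VM name is a placeholder:

    ```
    # on the XenServer host: list the VM's virtual block devices and their state
    xe vbd-list vm-name-label=my-centos-vm params=uuid,userdevice,currently-attached

    # hot-plug any device that reports currently-attached: false
    xe vbd-plug uuid=<vbd-uuid>
    ```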

    Read the article

  • Just a few questions about Hyper-V virtual machines and clustering

    - by René Kåbis
    I have been using Microsoft’s Hyper-V technology for a little while now, but I am just now dipping my toe into clustering. In particular, I am trying to implement a fault-tolerant SQL DB. This involves setting up two VMs, clustering them via Failover Cluster, and then installing SQL Server in some fashion. I have two physical machines - one high-end and rather beefy “heavy lifter” to contain the majority of the VMs, and another “backup” (a repurposed desktop) to hold the essential “secondary” (or failover) AD-DC, SQL and FS VMs. The main reason why I find the failover cluster at the VM level so attractive is that it presents a single IP and DNS entry to the network as a whole - if one machine (physical or virtual) goes down, you might loose some ping and the connections get reset, but the network applications (Microsoft RMS connection to backend SQL) can still connect to a viable DB without having to mess around with the settings at all. My first question is in terms of SQL Server itself. If I have a cluster between two VMs, does it make more sense to install the SQL Server in Failover Cluster configuration or should I simply install it in a stand-alone config and mirror the DBs? For example, this post suggests just mirroring the DBs, but do I just mirror standalone DBs on standalone VMs, or can I get the network and failover benefits of clustered VMs while still utilizing (on each clustered VM) standalone DBs that have been mirrored between each other? As well, I have come across a lot of documentation about SQL clustering, but most assume a number (#2) of physical machines to hold not only the actual SQL VMs but also the Quorum and Witness stores. I will not be able to muster more than two physical machines. As such, I will have to be satisfied with a VM cluster that does not exceed two VMs (one for each physical machine). Another issue involves MSDTC - the Distributed Transaction Coordinator. When attempting to install the SQL Failover Cluster (I never completed it for this reason) it threw a hissy fit because MSDTC had not been clustered. Search as I might, I have not yet found a way to do so under Windows Server 2012 R2. I have found plenty of docs for Windows 2008 and 2008 R2, but these instructions don’t align with 2012 R2 (at least, not in a way that allows me to successfully cluster MSDTC). Plus, some of the instructions that I have found for SQL Server Failover Cluster installation suggest that a third “network device” - shared network storage (a SAN) - is required for the DB itself (and other functionality). I do not have this, and won’t be getting this. Most of my storage exists on the “heavy lifter” that was designed for all of the “primary” VMs. If that physical machine goes down, so does the storage. The secondary server does have enough resources for an AD-DC Server, an SQL server and a File Server, so it will handle the “secondary” failover versions of those VMs (clustered or not). My final question involves file servers. If I cluster file servers between two VMs (one on my “heavy lifter” and another on my “backup”, how do I mirror the data between them? Clustering VMs only provides a single point of access on the network for a resource, it doesn’t exactly replicate data between the two - that is left to the services that serve up that data. I am unsure how I can ensure that file server data between two clustered file server VMs can be properly mirrored. Remember, I only have two devices to be used here - my primary machine and a backup secondary. 
There is no chance of me obtaining a SAN or any other type of network attached storage. What exists on the machines must act as the storage. Thanks in advance for any suggestions.
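
    On the mirroring part specifically: classic database mirroring runs between two stand-alone instances, needs no shared storage and no clustered MSDTC, which fits a two-host setup like this. A minimal T-SQL sketch, assuming the database has already been restored on the mirror WITH NORECOVERY; the server names and port are placeholders:

    ```sql
    -- on each instance: create a mirroring endpoint
    CREATE ENDPOINT Mirroring
        STATE = STARTED
        AS TCP (LISTENER_PORT = 5022)
        FOR DATABASE_MIRRORING (ROLE = PARTNER);

    -- on the mirror instance: point it at the principal
    ALTER DATABASE MyAppDB SET PARTNER = 'TCP://sql-primary.mydomain.local:5022';

    -- on the principal instance: point it at the mirror
    ALTER DATABASE MyAppDB SET PARTNER = 'TCP://sql-mirror.mydomain.local:5022';
    ```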

    Read the article

  • vagrant fails to bring up additional adapter for centos vm using virtual box provider

    - by Anadi Misra
    this is in continuation of the question asked here about host only adapter on dhcp I upgraded to vagrant 1.6.3 and the updated Vagrantfile to following setting for multiple adapters # add additional adapter for inter machine networking dev.vm.network :private_network, :type => "dhcp", :adapter => "2", :netmask => "255.255.255.0" it goes through creating adapters but then fails bringing up the mic on vm Anadis-MacBook-Pro:full-stack-env anadi$ vagrant up Bringing machine 'full-stack-env' up with 'virtualbox' provider... ==> full-stack-env: Clearing any previously set forwarded ports... ==> full-stack-env: Clearing any previously set network interfaces... ==> full-stack-env: Preparing network interfaces based on configuration... full-stack-env: Adapter 1: nat full-stack-env: Adapter 2: hostonly ==> full-stack-env: Forwarding ports... full-stack-env: 22 => 4223 (adapter 1) full-stack-env: 8080 => 8090 (adapter 1) ==> full-stack-env: Running 'pre-boot' VM customizations... ==> full-stack-env: Booting VM... ==> full-stack-env: Waiting for machine to boot. This may take a few minutes... full-stack-env: SSH address: 127.0.0.1:4223 full-stack-env: SSH username: vagrant full-stack-env: SSH auth method: private key full-stack-env: Warning: Connection timeout. Retrying... full-stack-env: Warning: Connection timeout. Retrying... full-stack-env: Warning: Remote connection disconnect. Retrying... ==> full-stack-env: Machine booted and ready! ==> full-stack-env: Checking for guest additions in VM... ==> full-stack-env: Setting hostname... ==> full-stack-env: Configuring and enabling network interfaces... The following SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed! ARPCHECK=no /sbin/ifup eth 2> /dev/null Stdout from the command: Device eth does not seem to be present, delaying initialization. Stderr from the command: how ever when I log in to the environment I see two network interfaces as expected Anadis-MacBook-Pro:full-stack-env anadi$ vagrant ssh Last login: Wed Jun 4 12:54:47 2014 from 10.0.2.2 [vagrant@full-stack-env ~]$ ifconfig eth0 Link encap:Ethernet HWaddr 08:00:27:BD:39:57 inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:febd:3957/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:511 errors:0 dropped:0 overruns:0 frame:0 TX packets:360 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:54574 (53.2 KiB) TX bytes:46675 (45.5 KiB) eth1 Link encap:Ethernet HWaddr 08:00:27:A3:86:C9 inet addr:172.28.128.3 Bcast:172.28.128.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fea3:86c9/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5 errors:0 dropped:0 overruns:0 frame:0 TX packets:9 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1360 (1.3 KiB) TX bytes:894 (894.0 b) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) I am bit confused here on why it is trying to add another mic (eth2)? In the VM I used for creating this vagrant box, I had added two NICs already.
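
    The failed command is `ifup eth` with no interface number, which suggests the adapter option is not being turned into a device name; passing the adapter as an integer (or leaving it out so Vagrant picks the next free slot) may be worth trying. A sketch of that block, assuming the rest of the Vagrantfile stays as in the question:

    ```ruby
    # additional adapter for inter-machine networking
    dev.vm.network :private_network,
      :type    => "dhcp",
      :adapter => 2,                  # integer, not the string "2"
      :netmask => "255.255.255.0"
    ```

    Separately, if the base box still carries /etc/udev/rules.d/70-persistent-net.rules from the NICs it was built with, CentOS may number a new adapter eth2 or eth3; clearing that file in the box before packaging is a common cleanup step.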

    Read the article

  • How to use a different Ethernet connection

    - by SteveC
    I'm running a virtual machine at home which has a VPN connection to our main office, but I also want to connect to a share on another machine at home. When I check with IPCONFIG I can see two Ethernet connections.

    My work VPN:

       Ethernet adapter Local Area Connection* 11:
          Connection-specific DNS Suffix . :
          Link-local IPv6 Address . . . . . : xxxx::xxxx:xxxx:xxxx:xxxxxxx
          IPv4 Address. . . . . . . . . . . : XXX.XXX.XXX.XXX
          Subnet Mask . . . . . . . . . . . : 255.255.254.0
          Default Gateway . . . . . . . . . : XXX.XXX.XXX.XXX

    and home local:

       Ethernet adapter Local Area Connection:
          Connection-specific DNS Suffix . : lan
          Link-local IPv6 Address . . . . . : xxxx::xxxx:xxxx:xxxx:xxxxxxx
          IPv4 Address. . . . . . . . . . . : 192.168.1.70
          Subnet Mask . . . . . . . . . . . : 255.255.255.0
          Default Gateway . . . . . . . . . :

    What's weird is that when I worked with a plugged-in Ethernet cable before, I had no problem getting to the share. I can PING the other machine, but I can't access the share at \\othermachine\c$. I tried TRACERT but that disappears off to the work network and eventually gets back to the local other machine after a few time-outs. Is there any way to "force" the connection to stay local? UPDATE: the VPN is AEP SSL Tunnel
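
    Since TRACERT shows the name being resolved and routed out over the VPN, one way to keep the traffic on the LAN is to take name resolution out of the picture: connect by IP, or pin the name to its LAN address in the hosts file. The address below is a placeholder for the other machine's real 192.168.1.x IP:

    ```
    rem map a drive straight to the LAN IP, bypassing VPN-supplied DNS/WINS
    net use X: \\192.168.1.71\c$

    rem or add a hosts entry (C:\Windows\System32\drivers\etc\hosts):
    rem 192.168.1.71    othermachine
    ```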

    Read the article

  • stunnel crashing

    - by Jay
    I'm trying to use stunnel to secure a legacy application's communications. I can't seem to get it setup and working. Can anyone provide any hints where I'm going wrong? Here's what I'm trying to accomplish: A windows service on a client machine connects to a server on port 7000 using TCP. I'd like to encrypt the communication between client and server. Here's what I've tried: Created a new server that accepts ssl connections on port 7443. Got a certificate for the server and installed it. That seems to work with my test setup. Installed stunnel on my windows machine (version 7.43 from the distribution archive file). Installed libssl32.dll and libeay32.dll in the same directory as stunnel.exe ( from the openssl-0.9.8h-1 binary distribution). Installed it as a service using "stunnel -install" Configured stunnel as follows: debug=7 output=C:\p4\internal\Utility\Proxy\proxy.log service=Proxy taskbar=no [exchange] accept=7000 client=yes connect=proxy.blah.com:7443 I changed my hosts file to trick the old application into connecting through stunnel: server.blah.com 127.0.0.1 # when client looks up server it goes to stunnel proxy.blah.com IP-address-of-server.blah.com # stunnel connects to new server "server.blah.com" now resolves to the machine it's running on (i.e. stunnel). "proxy.blah.com" goes to the real server. stunnel should connect to the server. I start the stunnel service and try to connect. It looks like it's working but the stunnel service just shuts down with no message. 2010.04.19 13:16:21 LOG5[4924:3716]: stunnel 4.33 on x86-pc-mingw32-gnu with OpenSSL 0.9.8h 28 May 2008 2010.04.19 13:16:21 LOG5[4924:3716]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6 2010.04.19 13:16:49 LOG5[4924:3748]: Service exchange accepted connection from 127.0.0.1:4134 2010.04.19 13:16:49 LOG6[4924:3748]: connect_blocking: connecting x.80.60.32:7443 2010.04.19 13:16:49 LOG5[4924:3748]: connect_blocking: connected x.80.60.32:7443 2010.04.19 13:16:49 LOG5[4924:3748]: Service exchange connected remote server from x.253.120.19:4135 2010.04.19 13:20:24 LOG5[3668:3856]: Reading configuration from file stunnel.conf 2010.04.19 13:20:24 LOG7[3668:3856]: Snagged 64 random bytes from C:/.rnd 2010.04.19 13:20:24 LOG7[3668:3856]: Wrote 1024 new random bytes to C:/.rnd 2010.04.19 13:20:24 LOG7[3668:3856]: RAND_status claims sufficient entropy for the PRNG 2010.04.19 13:20:24 LOG7[3668:3856]: PRNG seeded successfully 2010.04.19 13:20:24 LOG7[3668:3856]: SSL context initialized for service exchange 2010.04.19 13:20:24 LOG5[3668:3856]: Configuration successful 2010.04.19 13:20:24 LOG5[3668:3856]: No limit detected for the number of clients 2010.04.19 13:20:24 LOG7[3668:3856]: FD=312 in non-blocking mode 2010.04.19 13:20:24 LOG7[3668:3856]: Option SO_REUSEADDR set on accept socket 2010.04.19 13:20:24 LOG7[3668:3856]: Service exchange bound to 0.0.0.0:7000 2010.04.19 13:20:24 LOG7[3668:3856]: Service exchange opened FD=312 2010.04.19 13:20:24 LOG5[3668:3856]: stunnel 4.33 on x86-pc-mingw32-gnu with OpenSSL 0.9.8h 28 May 2008 2010.04.19 13:20:24 LOG5[3668:3856]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6 2010.04.19 13:21:02 LOG7[3668:4556]: Service exchange accepted FD=372 from 127.0.0.1:4156 2010.04.19 13:21:02 LOG7[3668:4556]: Creating a new thread 2010.04.19 13:21:02 LOG7[3668:4556]: New thread created 2010.04.19 13:21:02 LOG7[3668:3756]: Service exchange started 2010.04.19 13:21:02 LOG7[3668:3756]: FD=372 in non-blocking mode 2010.04.19 13:21:02 LOG5[3668:3756]: Service exchange accepted connection from 
127.0.0.1:4156 2010.04.19 13:21:02 LOG7[3668:3756]: FD=396 in non-blocking mode 2010.04.19 13:21:02 LOG6[3668:3756]: connect_blocking: connecting x.80.60.32:7443 2010.04.19 13:21:02 LOG7[3668:3756]: connect_blocking: s_poll_wait x.80.60.32:7443: waiting 10 seconds 2010.04.19 13:21:02 LOG5[3668:3756]: connect_blocking: connected x.80.60.32:7443 2010.04.19 13:21:02 LOG5[3668:3756]: Service exchange connected remote server from x.253.120.19:4157 2010.04.19 13:21:02 LOG7[3668:3756]: Remote FD=396 initialized 2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): before/connect initialization 2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write client hello A 2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server hello A 2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server certificate A 2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server done A 2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write client key exchange A 2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write change cipher spec A 2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write finished A 2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 flush data 2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read finished A The client thinks the connection is closed: No connection could be made because the target machine actively refused it 127.0.0.1:7000 at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress) at System.Net.Sockets.Socket.Connect(EndPoint remoteEP) at Service.ConnUtility.Connect() Any suggestions?
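
    One thing worth double-checking, independent of the stunnel log: the hosts file format is IP address first, then hostname, so entries written as "server.blah.com 127.0.0.1" are silently ignored, and the client may never reach stunnel on port 7000 at all. A sketch of the intended entries; the second address is a placeholder for the real server's IP:

    ```
    # %SystemRoot%\System32\drivers\etc\hosts  -  IP first, then name
    127.0.0.1       server.blah.com
    203.0.113.10    proxy.blah.com
    ```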

    Read the article

  • WNDR3700 Router + Cisco SG200-08 + LACP + Dual Uplink

    - by kobaltz
    Background I have a storage server that has several virtual machine images stored on them. I would store them locally, but I have limited space on my desktop (using SSD storage). I would like to increase the bandwidth between the desktop and the storage server by using two NICs on each computer. My original configuration allowed about 55MBps between the desktop and storage server. This storage server also has several TBs of documents, pictures, movies, vms, and ISO/programs. The storage server has 8 1.5TB hard drives in a RAID 10 configuration with a hardware RAID controller. The benchmarks on the RAID 10 are about 300MBps. Configuration In short, I am trying to bridge my switch and router. The switch is a small 8 port Cisco smart switch that supports 802.3ad LACP. I have two computers plugged into the switch, each with 2 Intel Gigabit NICs. The first computer is a Windows 7 machine that has the Intel ANS software installed. I have LACP configured with the computer and now show 3 NICs (2 Physical + 1 TEAM Virtual @ 2Gbps). It looks like this computer is configured correctly. I trunked the two ports that this computer is plugged into with the switch's web interface. The second computer is a homebrew storage box running debian. I also have the bonding enabled on this machine and the switch configured with LACP. Without having the WNDR3700 router in the picture yet, I am able to communicate between the Windows 7 machine and the debian box since they both have static IP addresses. With LACP enabled on both machines I am getting about 106-108MBps speeds. Issue I plug in a network cable from the switch into the router and enable DHCP on the desktop. I saw no need to have a static address on the desktop. My transfer rates are still from 106MBps-108MBps. While this is still a boost, I am trying to figure out how to get about 140-180MBps. I am thinking that I need to increase the bandwidth from the router to the switch. My switch allows 4 groups for port trunking. I plugged in a second network cable from the router to the switch. My question is, what is the proper way to fix this issue. Should I port trunk the two ports that are going from the switch to the router? Keep in mind that the router is a WNDR3700 and is unsure whether or not it supports LACP. I do have OpenWRT installed on the router, but it still wasn't clear in any documentation that I found if it supported 802.3ad LACP standards. I am also wondering if there needs to be anything changed within the Cisco settings. [Edit] - Corrected some numbers, wasn't really paying attention. It looks like the speeds though at least two NICs are bonded with LACP is still reaching the max bandwidth of one port. Is there a way to configure the switch so that I can increase this bandwidth? Also, on the storage server, I had a couple of extra NICs laying around and threw them on there as well. Another EDIT and More Findings I happened to look at the traffic of each individual NIC and think that I see the problem. I tested with a simple transfer for a 4GB file. I noticed that only one of the NICs was taking the load of the traffic. I then copied the file back to the Storage Server and noticed that the other NIC was sending out the traffic. I have 802.3ad LACP enabled on the two NICs and I see that it gets enabled dynamically on the switch's interface. Should I be using Static Link Aggregation?
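
    The behaviour seen in the last edit is what 802.3ad is specified to do: each flow is hashed (by MAC/IP/port) onto one member link, so a single file copy tops out at one NIC's bandwidth (~110 MBps) no matter how many links are in the LAG, and static link aggregation does not change that; the aggregate only helps when several flows run in parallel. For completeness, a hedged sketch of the Debian side of the bond with a layer3+4 hash policy, which at least spreads distinct connections more evenly; interface names and the address are examples:

    ```
    # /etc/network/interfaces on the Debian storage box (ifenslave installed)
    auto bond0
    iface bond0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate 1
        bond-xmit-hash-policy layer3+4
    ```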

    Read the article

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data Daily and/or Weekly, which we process & merge into their existing database. During business hours, load in the servers are pretty minimal as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube. I manage the IT Operations Team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers such that we're not trying to process daily clients back-to-back over night. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support over-head to basically "schedule" the ETL such that they don't over-lap, and move clients/schedules around if they out-grow the resources on a particular server or allocated time-slot. As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3,4,5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling, and sequential processing. One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/Disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows. Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if instead of running DTExec, SQL, and SSAS same machine (with every machine running that configuration), we run in pairs of three machines with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETL we were able to process on the machine independently. Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently chose a server based off how "busy" it already is in terms of CPU/Memory/Disk utilization, but that would be a herculean effort, and seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it). So in summary, are we missing an obvious solution for this, and does anyone know if any tools (for free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term). 
Ultimately VMWare's Distributed Resource Scheduler addresses this as you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMWare to move the VMs around to balance out hardware usage. I'm definitely not against using VMWare to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer by checking on resource utilization at the OS, SQL, SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMWare is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great. Thanks guys, Jeff
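
    For the "queue & node based" idea, the dispatch half does not have to be exotic: a single queue table that every ETL host polls, dequeuing with READPAST so two hosts never claim the same client, covers the scheduling problem while the packages themselves stay unchanged. A minimal T-SQL sketch with illustrative table and column names:

    ```sql
    -- each ETL host runs this to claim the next pending client load atomically
    ;WITH next_job AS (
        SELECT TOP (1) *
        FROM   dbo.EtlQueue WITH (ROWLOCK, UPDLOCK, READPAST)
        WHERE  status = 'pending'
        ORDER  BY queued_at
    )
    UPDATE next_job
    SET    status     = 'running',
           worker     = HOST_NAME(),
           started_at = SYSUTCDATETIME();
    ```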

    Read the article

  • Why doesn't pppd over ssh work here? Why can't I kill pppd?

    - by Peter V. Mørch
    I'm trying to setup a simple ppp tunnel over ssh. It works on several machines just fine. But on one machine, pppd gets "stuck": > pgrep pppd | xargs ps up USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 4178 0.0 0.1 3020 1088 pts/1 Ds+ 05:28 0:00 /usr/sbin/pppd Any attempt to kill it (even sudo kill -9 4178) has no effect that I can see. strace -p 4178 also hangs similarly. After it has been started for a while, I start getting messages in dmesg like shown below. It is started like so from another machine: ssh -t root@server /usr/sbin/pppd passive noauth When I do this to one of the machines that work, the remote end's pppd spits out garbage/binary data to the console (as expected). When I do it to the one that fails, I get no output from pppd, but the ssh session eventually times out. If I instead ssh to the machine, and then run /usr/sbin/pppd passive noauth in a separate step I also get the expected binary output. I now have a couple of questions: What could be up with the one machine where pppd fails? I don't even know where to start looking... What could be the difference between ssh -t root@server /usr/sbin/pppd passive noauth in a single step and ssh root@server and /usr/sbin/pppd passive noauth in two steps? How can it be that I can't kill the process even with sudo kill -9? The only way I know is to reboot. (I've tried searching for something similar but didn't get anywhere so I'm sorry I don't have any more leads) Any ideas? The problem machine runs in debian on VMware "hardware" (as do the ones that work) and it exhibits the problem when cloned and on both debian lenny (original) and squeeze (after upgrade) dmesg entries: [ 1198.727248] INFO: task pppd:4178 blocked for more than 120 seconds. [ 1198.727507] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 1198.727904] pppd D ece2dc9c 0 4178 4174 0x00000004 [ 1198.727908] 00000098 00000082 f2503520 ece2dc9c 0000b1e7 00000000 c148d1c0 c148d1c0 [ 1198.727913] f2a06100 f6e071c0 00000000 ece2dc18 f5cd07e0 00000000 ece2d400 ece2dc9d [ 1198.727918] 00c52300 ece2dcbc f67bfef8 ec98e480 f291cec0 00000000 c10cf5b0 c10dfd21 [ 1198.727923] Call Trace: [ 1198.727926] [<c10cf5b0>] ? nameidata_to_filp+0x37/0x41 [ 1198.727929] [<c10dfd21>] ? dput+0x21/0xb7 [ 1198.727932] [<c11cfecc>] ? tty_ldisc_ref_wait+0x5f/0x76 [ 1198.727935] [<c104de7a>] ? wake_up_bit+0x5c/0x5c [ 1198.727938] [<c11cb91b>] ? tty_ioctl+0x85f/0x8ba [ 1198.727941] [<c10fec18>] ? do_lock_file_wait+0x3d/0xd9 [ 1198.727944] [<c1162c97>] ? _copy_from_user+0x2b/0x102 [ 1198.727946] [<c11cb0bc>] ? tty_check_change+0xb9/0xb9 [ 1198.727949] [<c10dbeb7>] ? do_vfs_ioctl+0x485/0x4c7 [ 1198.727952] [<c10db59a>] ? do_fcntl+0x24f/0x3a2 [ 1198.727954] [<c10dbf3a>] ? sys_ioctl+0x41/0x58 [ 1198.727957] [<c12c6a1f>] ? sysenter_do_call+0x12/0x28 [ 1318.457225] INFO: task sshd:4174 blocked for more than 120 seconds. [ 1318.457500] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 1318.457896] sshd D f25024cc 0 4174 2393 0x00000000 [ 1318.457901] 00000098 00000086 f2a06940 f25024cc 0000b246 00000000 c148d1c0 c148d1c0 [ 1318.457906] f2503520 f6e071c0 00000000 3f056585 0000000f ece2d4bc 3f056585 f2503520 [ 1318.457911] ec98bb38 ec98bbdc 00000000 00000000 00000000 c12c09b5 f2503520 c10327cb [ 1318.457916] Call Trace: [ 1318.457926] [<c12c09b5>] ? schedule_hrtimeout_range_clock+0x3c/0xd9 [ 1318.457931] [<c10327cb>] ? try_to_wake_up+0x13f/0x13f [ 1318.457935] [<c11cfecc>] ? tty_ldisc_ref_wait+0x5f/0x76 [ 1318.457940] [<c104de7a>] ? 
wake_up_bit+0x5c/0x5c [ 1318.457943] [<c11c9ad3>] ? tty_poll+0x32/0x5e [ 1318.457947] [<c10dd4d5>] ? do_select+0x2a1/0x42e [ 1318.457950] [<c10dcb83>] ? poll_freewait+0x69/0x69 [ 1318.457953] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457955] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457958] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457960] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457963] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457965] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457968] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457971] [<c10429c2>] ? lock_timer_base+0x19/0x35 [ 1318.457974] [<c1042eb5>] ? __mod_timer+0x10c/0x116 [ 1318.457977] [<c1042f89>] ? mod_timer+0x69/0x6e [ 1318.457981] [<c121325d>] ? sk_reset_timer+0xc/0x16 [ 1318.457984] [<c1252f57>] ? tcp_event_new_data_sent+0x66/0x6b [ 1318.457987] [<c1255b85>] ? tcp_write_xmit+0x7a7/0x86a [ 1318.457990] [<c121760d>] ? __alloc_skb+0x50/0xfd [ 1318.457994] [<c12c12bc>] ? _raw_spin_lock_bh+0x8/0x1e [ 1318.457996] [<c1212e98>] ? release_sock+0x10/0xc4 [ 1318.457999] [<c124b543>] ? tcp_sendmsg+0x6dd/0x7b7 [ 1318.458003] [<c1162c97>] ? _copy_from_user+0x2b/0x102 [ 1318.458006] [<c10dd7a0>] ? core_sys_select+0x13e/0x1c3 [ 1318.458009] [<c12102a3>] ? sock_aio_write+0xc0/0xd4 [ 1318.458012] [<c10d0655>] ? do_sync_write+0xa0/0xe4 [ 1318.458016] [<c10b141c>] ? handle_mm_fault+0x222/0x238 [ 1318.458019] [<c10f6096>] ? fsnotify+0x1de/0x1f9 [ 1318.458022] [<c10dd9e8>] ? sys_select+0x6e/0x8f [ 1318.458024] [<c10d105e>] ? sys_write+0x3c/0x63 [ 1318.458028] [<c12c6a1f>] ? sysenter_do_call+0x12/0x28
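
    On the third question: the D in the ps output means uninterruptible sleep inside the kernel (here stuck in tty_ldisc_ref_wait), which is why even kill -9 has no effect until whatever it is blocked on is released or the machine reboots. For the tunnel itself, a commonly used pattern is to let the local pppd own a pty that runs ssh, with the remote pppd started with notty rather than relying on ssh -t; the options below are standard pppd ones and the addresses are examples:

    ```
    # local side: pppd drives ssh through a pty; remote pppd runs without a tty
    pppd updetach noauth passive pty \
        "ssh root@server pppd nodetach notty noauth passive" \
        10.0.0.1:10.0.0.2
    ```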

    Read the article

  • Java update/install via group policy

    - by Maximus
    I trying to deploy the latest Java RE version via GP, Java 7 update 9. I want to update computers that are currently running an older version of Java, a mixture of 7.6 and 7.7, some computers are running versions as old as 6.31. Some are running a mixture of both. I would also like this GP to install Java if it's not installed. Previously I used push out Java updates to users machines as Java didn't remove the old version. So when it was done the user would restart their browser or pc to start using the latest version. Not the best way to manage it as it leaves the old version installed but it worked. I've created group policies before for printer deployment, log on drive mapping scripts, but never software deployment. I've extracted the Java MSI and created a transform file to suppress reboot etc using orca. As described on this site http://ivan.dretvic.com/2011/06/how-to-package-and-deploy-java-jre-1-6-0_26-via-group-policy/. I have also tried saving the edited MSI directly and that didn't work either. But it just won't deploy. I have tried to enable logging as suggested on this site http://openofficetechnology.com/node/32, GPO logging via UserEnvDebugLevel, Software deployment logging via AppmgmtDebugLevel and MSI logging, but there is no log C:\Windows\Debug\UserMode\userenv.log being created. The windows event viewer has the following errors: Error 24/10/2012 11:44:04 AM - "Failed to apply changes to software installation settings. Software changes could not be applied. A previous log entry with details should exist. The error was : %%1612" Information 24/10/2012 11:44:04 AM - "The removal of the assignment of application Java 7 Update 9 - FB Java Transform from policy JavaDeploy succeeded." Error 24/10/2012 11:44:04 AM - "The install of application Java 7 Update 9 - FB Java Transform from policy JavaDeploy failed. The error was : %%1612" There is a log created for MSI logging and it's as below. It says the source is invalid but it exists on the share and the PC that I'm testing has permissions and I've included the recommendation here Group Policy installation failed error 1274 to enable "Always wait for the network at computer startup and logon" === Verbose logging started: 24/10/2012 11:43:59 Build type: SHIP UNICODE 5.00.7601.00 Calling process: C:\Windows\system32\svchost.exe === MSI (c) (9C:EC) [11:43:59:898]: Resetting cached policy values MSI (c) (9C:EC) [11:43:59:898]: Machine policy value 'Debug' is 3 MSI (c) (9C:EC) [11:43:59:898]: ******* RunEngine: ******* Product: {26a24ae4-039d-4ca4-87b4-2f83217009ff} ******* Action: ******* CommandLine: ********** MSI (c) (9C:EC) [11:43:59:898]: Client-side and UI is none or basic: Running entire install on the server. MSI (c) (9C:EC) [11:43:59:898]: Grabbed execution mutex. MSI (c) (9C:EC) [11:44:03:431]: Cloaking enabled. MSI (c) (9C:EC) [11:44:03:431]: Attempting to enable all disabled privileges before calling Install on Server MSI (c) (9C:EC) [11:44:03:439]: Incrementing counter to disable shutdown. Counter after increment: 0 MSI (s) (2C:70) [11:44:03:574]: Running installation inside multi-package transaction {26a24ae4-039d-4ca4-87b4-2f83217009ff} MSI (s) (2C:70) [11:44:03:574]: Grabbed execution mutex. 
MSI (s) (2C:7C) [11:44:03:607]: Resetting cached policy values MSI (s) (2C:7C) [11:44:03:607]: Machine policy value 'Debug' is 3 MSI (s) (2C:7C) [11:44:03:607]: ******* RunEngine: ******* Product: {26a24ae4-039d-4ca4-87b4-2f83217009ff} ******* Action: ******* CommandLine: ********** MSI (s) (2C:7C) [11:44:03:607]: Machine policy value 'DisableUserInstalls' is 0 MSI (s) (2C:7C) [11:44:03:623]: User policy value 'SearchOrder' is 'nmu' MSI (s) (2C:7C) [11:44:03:624]: User policy value 'DisableMedia' is 0 MSI (s) (2C:7C) [11:44:03:624]: Machine policy value 'AllowLockdownMedia' is 0 MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Media enabled only if package is safe. MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Looking for sourcelist for product {26a24ae4-039d-4ca4-87b4-2f83217009ff} MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Adding {26a24ae4-039d-4ca4-87b4-2f83217009ff}; to potential sourcelist list (pcode;disk;relpath). MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Now checking product {26a24ae4-039d-4ca4-87b4-2f83217009ff} MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Media is enabled for product. MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Attempting to use LastUsedSource from source list. MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Processing net source list. MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Trying source \\server\share\deployment\Java\stable\x32\. MSI (s) (2C:7C) [11:44:03:650]: Note: 1: 2303 2: 5 3: \\server\share\ MSI (s) (2C:7C) [11:44:03:650]: Note: 1: 1325 2: deployment MSI (s) (2C:7C) [11:44:03:650]: ConnectToSource: CreatePath/CreateFilePath failed with: -2147483648 1325 -2147483648 MSI (s) (2C:7C) [11:44:03:650]: ConnectToSource (con't): CreatePath/CreateFilePath failed with: -2147483648 -2147483648 MSI (s) (2C:7C) [11:44:03:650]: SOURCEMGMT: net source '\\server\share\deployment\Java\stable\x32\' is invalid. MSI (s) (2C:7C) [11:44:03:650]: Note: 1: 1706 2: -2147483647 3: jre1.7.0_09.msi MSI (s) (2C:7C) [11:44:03:650]: SOURCEMGMT: Processing media source list. MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 2203 2: 3: -2147287037 MSI (s) (2C:7C) [11:44:04:668]: SOURCEMGMT: Source is invalid due to missing/inaccessible package. MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 1706 2: -2147483647 3: jre1.7.0_09.msi MSI (s) (2C:7C) [11:44:04:668]: SOURCEMGMT: Processing URL source list. MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 1402 2: UNKNOWN\URL 3: 2 MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 1706 2: -2147483647 3: jre1.7.0_09.msi MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 1706 2: 3: jre1.7.0_09.msi MSI (s) (2C:7C) [11:44:04:668]: SOURCEMGMT: Failed to resolve source MSI (s) (2C:7C) [11:44:04:668]: MainEngineThread is returning 1612 MSI (s) (2C:70) [11:44:04:670]: User policy value 'DisableRollback' is 0 MSI (s) (2C:70) [11:44:04:670]: Machine policy value 'DisableRollback' is 0 MSI (s) (2C:70) [11:44:04:670]: Incrementing counter to disable shutdown. Counter after increment: 0 MSI (s) (2C:70) [11:44:04:670]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2 MSI (s) (2C:70) [11:44:04:671]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2 MSI (s) (2C:70) [11:44:04:671]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\InProgress 3: 2 MSI (s) (2C:70) [11:44:04:671]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\InProgress 3: 2 MSI (s) (2C:70) [11:44:04:671]: Decrementing counter to disable shutdown. 
If counter >= 0, shutdown will be denied. Counter after decrement: -1 MSI (s) (2C:70) [11:44:04:671]: Restoring environment variables MSI (c) (9C:EC) [11:44:04:675]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1 MSI (c) (9C:EC) [11:44:04:675]: MainEngineThread is returning 1612 === Verbose logging stopped: 24/10/2012 11:44:04 === I'm not sure what my next approach should be. Any help would be much appreciated. Thanks.
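
    The "net source ... is invalid" / error 1325 lines are produced while the machine account (not your user) reads the share, so the share and NTFS permissions need to grant read access to the computer accounts (for example Domain Computers or Authenticated Users). One hedged way to verify is to open a SYSTEM shell with Sysinternals PsExec and try the package by hand; the transform file name below is an example:

    ```
    rem open a command prompt running as SYSTEM (network access = computer account)
    psexec -i -s cmd.exe

    rem from that prompt, confirm the share is reachable and test the package
    dir \\server\share\deployment\Java\stable\x32
    msiexec /i \\server\share\deployment\Java\stable\x32\jre1.7.0_09.msi TRANSFORMS=jre1.7.0_09.mst /qn /l*v %TEMP%\jre_gpo_test.log
    ```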

    Read the article

  • cannot access a site from Mac OSX Lion but can from other machines on network?

    - by house9
    SOLVED: The issue is with the hamachi client, hamachi is hi-jacking all of the 5.0.0.0/8 address block http://en.wikipedia.org/wiki/Hamachi_(software)#Criticism http://b.logme.in/2012/11/07/changes-to-hamachi-on-november-19th/ The fix on Mac LogMeIn Hamachi Preferences Settings Advanced Peer Connections IP protocol mode IPv6 only (default is both) If you can only connect to some of your network over IPv4 this 'fix' will NOT work for you ----- A few weeks ago I started using a service - https://semaphoreapp.com I think they made DNS changes a week ago and ever since I cannot access the site from my Mac OSX Lion (10.7.4) machine (my main development machine) but I can access the site from other machines on my network ipad windows machine MacMini (10.6.8) After some google searching I tried both of these dscacheutil -flushcache sudo killall -HUP mDNSResponder but no go, I've contacted semaphoreapp as well, but nothing so far - also of interest, one of my colleagues has the exact same problem, cannot access via Mac OSX Lion but can via windows machine, we work remotely and are not on the same ISP some additional info Lion (10.7.4) cannot access site host semaphoreapp.com semaphoreapp.com has address 5.9.53.16 ping semaphoreapp.com PING semaphoreapp.com (5.9.53.16): 56 data bytes Request timeout for icmp_seq 0 Request timeout for icmp_seq 1 Request timeout for icmp_seq 2 Request timeout for icmp_seq 3 ping: sendto: No route to host Request timeout for icmp_seq 4 ping: sendto: Host is down Request timeout for icmp_seq 5 ping: sendto: Host is down Request timeout for icmp_seq 6 ping: sendto: Host is down Request timeout for icmp_seq 7 .... traceroute semaphoreapp.com traceroute to semaphoreapp.com (5.9.53.16), 64 hops max, 52 byte packets 1 * * * 2 * * * traceroute: sendto: No route to host 3 traceroute: wrote semaphoreapp.com 52 chars, ret=-1 *traceroute: sendto: Host is down traceroute: wrote semaphoreapp.com 52 chars, ret=-1 .... and MacMini (10.6.8) can access it host semaphoreapp.com semaphoreapp.com has address 5.9.53.16 ping semaphoreapp.com PING semaphoreapp.com (5.9.53.16): 56 data bytes 64 bytes from 5.9.53.16: icmp_seq=0 ttl=44 time=191.458 ms 64 bytes from 5.9.53.16: icmp_seq=1 ttl=44 time=202.923 ms 64 bytes from 5.9.53.16: icmp_seq=2 ttl=44 time=180.746 ms 64 bytes from 5.9.53.16: icmp_seq=3 ttl=44 time=200.616 ms 64 bytes from 5.9.53.16: icmp_seq=4 ttl=44 time=178.818 ms .... traceroute semaphoreapp.com traceroute to semaphoreapp.com (5.9.53.16), 64 hops max, 52 byte packets 1 192.168.0.1 (192.168.0.1) 1.677 ms 1.446 ms 1.445 ms 2 * LOCAL ISP 11.957 ms * 3 etc... 10.704 ms 14.183 ms 9.341 ms 4 etc... 32.641 ms 12.147 ms 10.850 ms 5 etc.... 
44.205 ms 54.563 ms 36.243 ms 6 vlan139.car1.seattle1.level3.net (4.53.145.165) 50.136 ms 45.873 ms 30.396 ms 7 ae-32-52.ebr2.seattle1.level3.net (4.69.147.182) 31.926 ms 40.507 ms 49.993 ms 8 ae-2-2.ebr2.denver1.level3.net (4.69.132.54) 78.129 ms 59.674 ms 49.905 ms 9 ae-3-3.ebr1.chicago2.level3.net (4.69.132.62) 99.019 ms 82.008 ms 76.074 ms 10 ae-1-100.ebr2.chicago2.level3.net (4.69.132.114) 96.185 ms 75.658 ms 75.662 ms 11 ae-6-6.ebr2.washington12.level3.net (4.69.148.145) 104.322 ms 105.563 ms 118.480 ms 12 ae-5-5.ebr2.washington1.level3.net (4.69.143.221) 93.646 ms 99.423 ms 96.067 ms 13 ae-41-41.ebr2.paris1.level3.net (4.69.137.49) 177.744 ms ae-44-44.ebr2.paris1.level3.net (4.69.137.61) 199.363 ms 198.405 ms 14 ae-47-47.ebr1.frankfurt1.level3.net (4.69.143.141) 176.876 ms ae-45-45.ebr1.frankfurt1.level3.net (4.69.143.133) 170.994 ms ae-46-46.ebr1.frankfurt1.level3.net (4.69.143.137) 177.308 ms 15 ae-61-61.csw1.frankfurt1.level3.net (4.69.140.2) 176.769 ms ae-91-91.csw4.frankfurt1.level3.net (4.69.140.14) 178.676 ms 173.644 ms 16 ae-2-70.edge7.frankfurt1.level3.net (4.69.154.75) 180.407 ms ae-3-80.edge7.frankfurt1.level3.net (4.69.154.139) 174.861 ms 176.578 ms 17 as33891-net.edge7.frankfurt1.level3.net (195.16.162.94) 175.448 ms 185.658 ms 177.081 ms 18 hos-bb1.juniper4.rz16.hetzner.de (213.239.240.202) 188.700 ms 190.332 ms 188.196 ms 19 hos-tr4.ex3k14.rz16.hetzner.de (213.239.233.98) 199.632 ms hos-tr3.ex3k14.rz16.hetzner.de (213.239.233.66) 185.938 ms hos-tr2.ex3k14.rz16.hetzner.de (213.239.230.34) 182.378 ms 20 * * * 21 * * * 22 * * * any ideas? EDIT: adding tcpdump MacMini (which can connect) while running - ping semaphoreapp.com sudo tcpdump -v -i en0 dst semaphoreapp.com Password: tcpdump: listening on en0, link-type EN10MB (Ethernet), capture size 65535 bytes 17:33:03.337165 IP (tos 0x0, ttl 64, id 20153, offset 0, flags [none], proto ICMP (1), length 84, bad cksum 0 (->3129)!) 192.168.0.6 > static.16.53.9.5.clients.your-server.de: ICMP echo request, id 61918, seq 0, length 64 17:33:04.337279 IP (tos 0x0, ttl 64, id 26049, offset 0, flags [none], proto ICMP (1), length 84, bad cksum 0 (->1a21)!) 192.168.0.6 > static.16.53.9.5.clients.your-server.de: ICMP echo request, id 61918, seq 1, length 64 17:33:05.337425 IP (tos 0x0, ttl 64, id 47854, offset 0, flags [none], proto ICMP (1), length 84, bad cksum 0 (->c4f3)!) 192.168.0.6 > static.16.53.9.5.clients.your-server.de: ICMP echo request, id 61918, seq 2, length 64 17:33:06.337548 IP (tos 0x0, ttl 64, id 24772, offset 0, flags [none], proto ICMP (1), length 84, bad cksum 0 (->1f1e)!) 192.168.0.6 > static.16.53.9.5.clients.your-server.de: ICMP echo request, id 61918, seq 3, length 64 17:33:07.337670 IP (tos 0x0, ttl 64, id 8171, offset 0, flags [none], proto ICMP (1), length 84, bad cksum 0 (->5ff7)!) 192.168.0.6 > static.16.53.9.5.clients.your-server.de: ICMP echo request, id 61918, seq 4, length 64 17:33:08.337816 IP (tos 0x0, ttl 64, id 35810, offset 0, flags [none], proto ICMP (1), length 84, bad cksum 0 (->f3ff)!) 192.168.0.6 > static.16.53.9.5.clients.your-server.de: ICMP echo request, id 61918, seq 5, length 64 17:33:09.337948 IP (tos 0x0, ttl 64, id 31120, offset 0, flags [none], proto ICMP (1), length 84, bad cksum 0 (->652)!) 
192.168.0.6 > static.16.53.9.5.clients.your-server.de: ICMP echo request, id 61918, seq 6, length 64 ^C 7 packets captured 1047 packets received by filter 0 packets dropped by kernel OSX Lion (cannot connect) while running - ping semaphoreapp.com # wireless ~ $ sudo tcpdump -v -i en1 dst semaphoreapp.com Password: tcpdump: listening on en1, link-type EN10MB (Ethernet), capture size 65535 bytes ^C 0 packets captured 262 packets received by filter 0 packets dropped by kernel and # wired ~ $ sudo tcpdump -v -i en0 dst semaphoreapp.com tcpdump: listening on en0, link-type EN10MB (Ethernet), capture size 65535 bytes ^C 0 packets captured 219 packets received by filter 0 packets dropped by kernel above output after Request timeout for icmp_seq 25 or 30 times from ping. I don't know much about tcpdump, but to me it doesn't seem like the ping requests are leaving my machine?
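A quick way to confirm this kind of hijack on your own machine is to look at the routing table before and after changing (or quitting) the VPN client. Below is a minimal sketch for OS X using the Hetzner address from the output above; the commands are standard BSD networking tools, and nothing Hamachi-specific is assumed:

    # Which interface and gateway would be used to reach the server?
    route get 5.9.53.16

    # Is there a route covering the 5.0.0.0/8 block? With the hijack in place this
    # typically points at the VPN's virtual interface rather than your router.
    netstat -rn | grep "^5"

    # Re-run both commands after setting Hamachi to "IPv6 only" (or quitting it);
    # the route should fall back to your default gateway and ping should work again.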

    Read the article

  • Issue 15: The Benefits of Oracle Exastack

    - by rituchhibber
         SOLUTIONS FOCUS The Benefits of Oracle Exastack Paul ThompsonDirector, Alliances and Solutions Partner ProgramsOracle EMEA Alliances & Channels RESOURCES -- Oracle PartnerNetwork (OPN) Oracle Exastack Program Oracle Exastack Ready Oracle Exastack Optimized Oracle Exastack Labs and Enablement Resources Oracle Exastack Labs Video Tour SUBSCRIBE FEEDBACK PREVIOUS ISSUES Exastack is a revolutionary programme supporting Oracle independent software vendor partners across the entire Oracle technology stack. Oracle's core strategy is to engineer software and hardware together, and our ISV strategy is the same. At Oracle we design engineered systems that are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. Oracle innovates and optimises performance at every layer of the stack to simplify business operations, drive down costs and accelerate business innovation. Our engineered systems are optimised to achieve enterprise performance levels that are unmatched in the industry. Faster time to production is achieved by implementing pre-engineered and pre-assembled hardware and software bundles. Our strategy of delivering a single-vendor stack simplifies and reduces costs associated with purchasing, deploying, and supporting IT environments for our customers and partners. In parallel to this core engineered systems strategy, the Oracle Exastack Program enables our Oracle ISV partners to leverage a scalable, integrated infrastructure that delivers their applications tuned, tested and optimised for high-performance. Specifically, the Oracle Exastack Program helps ISVs run their solutions on the Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4 - integrated systems products in which the software and hardware are engineered to work together. These products provide OPN members with a lower cost and high performance infrastructure for database and application workloads across on-premise and cloud based environments. Ready and Optimized Oracle Partners can now leverage our new Oracle Exastack Program to become Oracle Exastack Ready and Oracle Exastack Optimized. Partners can achieve Oracle Exastack Ready status through their support for Oracle Solaris, Oracle Linux, Oracle VM, Oracle Database, Oracle WebLogic Server, Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. By doing this, partners can demonstrate to their customers that their applications are available on the latest major releases of these products. The Oracle Exastack Ready programme helps customers readily differentiate Oracle partners from lesser software developers, and identify applications that support Oracle engineered systems. Achieving Oracle Exastack Optimized status demonstrates that an OPN member has proven itself against goals for performance and scalability on Oracle integrated systems. This status enables end customers to readily identify Oracle partners that have tested and tuned their solutions for optimum performance on an Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. These ISVs can display the Oracle Exadata Optimized, Oracle Exalogic Optimized or Oracle SPARC SuperCluster Optimized logos on websites and on all their collateral to show that they have tested and tuned their application for optimum performance. 
Deliver higher value to customers Oracle's investment in engineered systems enables ISV partners to deliver higher value to customer business processes. New innovations are enabled through extreme performance unachievable through traditional best-of-breed multi-vendor server/software approaches. Core product requirements can be launched faster, enabling ISVs to focus research and development investment on core competencies in order to bring value to market as quickly as possible. Through Exastack, partners no longer have to worry about the underlying product stack, which allows greater focus on the development of intellectual property above the stack. Partners are not burdened by platform issues and can concentrate simply on furthering their applications. The advantage to end customers is that partners can focus all efforts on business functionality, rather than bullet-proofing underlying technologies, and so will inevitably deliver application updates faster. Exastack provides ISVs with a number of flexible deployment options, such as on-premise or Cloud, while maintaining one single code base for applications regardless of customer deployment preference. Customers buying their solutions from Exastack ISVs can therefore be confident in deploying on their own networks, on private clouds or into a public cloud. The underlying platform will support all conceivable deployments, enabling a focus on the ISV's application itself that wouldn't be possible with other vendor partners. It stands to reason that Exastack accelerates time to value as well as lowering implementation costs all round. There is a big competitive advantage in partners being able to offer customers an optimised, pre-configured solution rather than an assortment of components and a suggested fit. Once a customer has decided to buy an Oracle Exastack Ready or Optimized partner solution, it will be up and running without any need for the customer to conduct testing of its own. Operational costs and complexity are also reduced, thanks to streamlined customer support through standardised configurations and pro-active monitoring. 'Engineered to Work Together' is a significant statement of Oracle strategy. It guarantees smoother deployment of a single vendor solution, clear ownership with no finger-pointing and the peace of mind of the Oracle Support Centre underpinning the entire product stack. Next steps Every OPN member with packaged applications must seriously consider taking steps to become Exastack Ready, or Exastack Optimized at the first opportunity. That first step down the track is to talk to an expert on the OPN Portal, at the Oracle Partner Business Center or to discuss the next steps with the closest Oracle account manager. Oracle Exastack lab environments and other technical enablement resources are available for OPN members wishing to further their knowledge of Oracle Exastack and qualify their applications for Oracle Exastack Optimized. New Boot Camps and Guided Learning Paths (GLPs), tailored specifically for ISVs, are available for Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle Linux, Oracle Solaris, Oracle Database, and Oracle WebLogic Server. More information about these GLPs and Boot Camps (including delivery dates and locations) are posted on the OPN Competency Center and corresponding OPN Knowledge Zones. Learn more about Oracle Exastack labs and ISV specific enablement resources. 
"Oracle Specialized partners are of course front-and-centre, with potential customers clearly directed to those partners and to Exadata Ready partners as a matter of priority." --More OpenWorld 2011 highlights for Oracle partners and customers Oracle Application Testing Suite 9.3 application testing solution for Web, SOA and Oracle Applications Oracle Application Express Release 4.1 improving the development of database-centric Web 2.0 applications and reports Oracle Unified Directory 11g helping customers manage the critical identity information that drives their business applications Oracle SOA Suite for healthcare integration Oracle Enterprise Pack for Eclipse 11g demonstrating continued commitment to the developer and open source communities Oracle Coherence 3.7.1, the latest release of the industry's leading distributed in-memory data grid Oracle Process Accelerators helping to simplify and accelerate time-to-value for customers' business process management initiatives Oracle's JD Edwards EnterpriseOne on the iPad meeting the increasingly mobile demands of today's workforces Oracle CRM On Demand Release 19 Innovation Pack introducing industry-leading hosted call centre and enterprise-marketing capabilities designed to drive further revenue and productivity while reducing costs and improving the customer experience Oracle's Primavera Portfolio Management 9 for businesses delivering on project portfolio goals with increased versatility, transparency and accuracy Oracle's PeopleSoft Human Capital Management (HCM) 9.1 On Demand Standard Edition helping customers manage their long-term investment in enterprise-wide business applications New versions of Oracle FLEXCUBE Universal Banking and Oracle FLEXCUBE Investor Servicing for Financial Institutions, as well as Oracle Financial Services Enterprise Case Management, Oracle Financial Services Pricing Management, Oracle Financial Management Analytics and Oracle Tax Analytics Oracle Utilities Network Management System 1.11 offering new modelling and analysis features to improve distribution-grid management for electric utilities Oracle Communications Network Charging and Control 4.4 helping communications service providers (CSPs) offer their customers more flexible charging options Plus many, many more technology announcements, enhancements, momentum news and community updates -- Oracle OpenWorld 2012 A date has already been set for Oracle OpenWorld 2012. Held once again in San Francisco, exhibitors, partners, customers and Oracle people will gather from 30 September until 4 November to meet, network and learn together with the rest of the global Oracle community. Register now for Oracle OpenWorld 2012 and save $$$! We'll reward your early planning for Oracle OpenWorld 2012 with reduced rates. Super Saver deals are now available! -- Back to the welcome page

    Read the article

  • Fixing a SkyDrive Sync Disaster

    - by Rick Strahl
    For a few months I've been using SkyDrive to handle some basic synching tasks for a number of folders of mine. Specifically I've been dumping a few of my development folders into sky drive so I have a live running backup. It had been working just fine until about a week ago when something went awry. Badly! The idea is that the SkyDrive should sync files, but somewhere in its sync relationship it appears that SkyDrive got confused and assumed it needed to sync back older files to my local machine from the SkyDrive server. So rather than syncing my newer files to the server SkyDrive was pushing older files back to me. Because SkyDrive is so slow actually updating data it's not unusual for SkyDrive to be far behind in syncing and apparently some files were out of date by several months. Of course this is insidious because I didn't notice it for quite some time. I'd been happily working away on my files when a few days ago I noted a bunch of files with -RasXps (my machine name) popping up in various folders. At first I thought my Git repository was giving me a fit, but eventually realized that SkyDrive was actually pushing old files into my monitored folders. To be fair SkyDrive did make backups of the existing files, but by the time I caught it there were literally a few thousand files scattered on my machine that were now updated with old files from online. Here's what some of this looks like: If you look at the directory list you see a bunch of files with a -RasXps postfix appended to them. Those are the files that SkyDrive replaced and backed up on my machine. As you can see the backed up files are actually newer than the ones it pulled from the online SkyDrive. Unless I modified the files after they were updated they all were older than the existing local files. Not exactly how I imagined my synching would work. At first I started cleaning up this mess manually. In most cases the obvious solution was to simply delete the original file and replace with the -RasXps file, but not in all files. Some scrutiny was required and besides being a pain in the ass to rename files, quite frequently I had to dig out Beyond Compare to compare a few files where it wasn't quite clear what's wrong. I quickly realized that doing this by hand would be too hard for the large number of files that got hosed. Hacking together a small .NET Utility So, I figured the easiest way to tackle this is to write a small utility app that shows me all the mangled files that have backups, allows me to compare them and then quickly select and update them, removing the -RasXps file after choosing one of the two files. What I ended up with was a quick and dirty WinForms app that allows me to pick a root folder, and then shows all the -MachineName files: I start by picking a base folder and a template to search for - typically the -MachineName. Clicking Go brings up a list of all files in that folder and its subdirectories.  The list also displays the dates for the saved (-MachineName) file and the current file on disk, along with highlighting for the newer of the two. I can right click on any file and get a context menu pop up to open the folder in Explorer, or open Beyond Compare and view the two files to compare differences which I found very helpful for a number of files where I had modified the files after SkyDrive had updated to an old one. Typically these would be the green files (of which there were thankfully few). 
To 'fix' files I can select any number of files in the list, then use one of the three buttons on the right to apply an operation. I can use the Saved files - that is the backup file that SkyDrive created with the -MachineName extension (-RasXps above). Or I can use the current file, which is the file with the right name on disk right now, and delete the -MachineName file. Or on some occasions I can just opt to delete both of them. For some files like binaries it's often easier to just delete them and let them be rebuilt than to choose. For the most part the process involves accepting the pink files, and checking the few green files to see if any modifications were made since the file was updated incorrectly by SkyDrive. For me, luckily, those are few in number. Anyways, I thought I'd share this utility in case anybody else runs into this issue. I've included the VS2012 solution and all the source code so you can see how it works and you can tweak it as needed. The .NET 4.5 binaries are also included if you can't compile. Be warned though! This rough code is provided as is and makes no guarantees or claims about file safety. All three of the action buttons on the form will delete data. It's a very rough utility and there are no safeguards that ask nicely before deleting files. I highly recommend you make a backup before you have at it. This tool is very narrow in focus, but it might also work with other sync issues from other vendors. I seem to remember that I had similar issues with SugarSync at some point and it too created the -MachineName style files on sync conflicts. Hope this helps somebody out so you can avoid wasting the better part of a full work day on this… Resources Download the Source Code and Binaries for SkyDrive Rescue © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Windows, .NET
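For readers without Visual Studio handy, the same triage can be approximated from a command line. This is only a rough sketch, not the author's utility: it assumes a Unix-like shell (for example Git Bash or Cygwin on Windows) and uses the -RasXps suffix from above as a stand-in for your own machine name. It only lists each conflicting pair so the newer file is easy to spot; it deletes nothing.

    # List every SkyDrive backup copy next to the file that replaced it,
    # so dates and sizes can be compared before deciding which one to keep.
    find . -type f -name "*-RasXps*" | while read -r backup; do
        original="${backup/-RasXps/}"     # same path without the machine-name suffix
        if [ -f "$original" ]; then
            ls -l "$original" "$backup"   # compare modification dates and sizes
        fi
    done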

    Read the article

  • Self-signed certificates for a known community

    - by costlow
    Recently announced changes scheduled for Java 7 update 51 (January 2014) have established that the default security slider will require code signatures and the Permissions Manifest attribute. Code signatures are a common practice recommended in the industry because they help determine that the code your computer will run is the same code that the publisher created. This post is written to help users that need to use self-signed certificates without involving a public Certificate Authority. The role of self-signed certificates within a known community You may still use self-signed certificates within a known community. The difference between self-signed and purchased-from-CA is that your users must import your self-signed certificate to indicate that it is valid, whereas Certificate Authorities are already trusted by default. This works for known communities where people will trust that my certificate is mine, but does not scale widely where I cannot actually contact or know the systems that will need to trust my certificate. Public Certificate Authorities are widely trusted already because they abide by many different requirements and frequent checks. An example would be students in a university class sharing their public certificates on a mailing list or web page, employees publishing on the intranet, or a system administrator rolling certificates out to end-users. Managed machines help this because you can automate the rollout, but they are not required -- the major point simply that people will trust and import your certificate. How to distribute self-signed certificates for a known community There are several steps required to distribute a self-signed certificate to users so that they will properly trust it. These steps are: Creating a public/private key pair for signing. Exporting your public certificate for others Importing your certificate onto machines that should trust you Verify work on a different machine Creating a public/private key pair for signing Having a public/private key pair will give you the ability both to sign items yourself and issue a Certificate Signing Request (CSR) to a certificate authority. Create your public/private key pair by following the instructions for creating key pairs.Every Certificate Authority that I looked at provided similar instructions, but for the sake of cohesiveness I will include the commands that I used here: Generate the key pair.keytool -genkeypair -alias erikcostlow -keyalg EC -keysize 571 -validity 730 -keystore javakeystore_keepsecret.jks Provide a good password for this file. The alias "erikcostlow" is my name and therefore easy to remember. Substitute your name of something like "mykey." The sigalg of EC (Elliptical Curve) and keysize of 571 will give your key a good strong lifetime. All keys are set to expire. Two years or 730 days is a reasonable compromise between not-long-enough and too-long. Most public Certificate Authorities will sign something for one to five years. You will be placing your keys in javakeystore_keepsecret.jks -- this file will contain private keys and therefore should not be shared. If someone else gets these private keys, they can impersonate your signature. Please be cautious about automated cloud backup systems and private key stores. Answer all the questions. It is important to provide good answers because you will stick with them for the "-validity" days that you specified above.What is your first and last name?  [Unknown]:  First LastWhat is the name of your organizational unit?  
[Unknown]:  Line of BusinessWhat is the name of your organization?  [Unknown]:  MyCompanyWhat is the name of your City or Locality?  [Unknown]:  City NameWhat is the name of your State or Province?  [Unknown]:  CAWhat is the two-letter country code for this unit?  [Unknown]:  USIs CN=First Last, OU=Line of Business, O=MyCompany, L=City, ST=CA, C=US correct?  [no]:  yesEnter key password for <erikcostlow>        (RETURN if same as keystore password): Verify your work:keytool -list -keystore javakeystore_keepsecret.jksYou should see your new key pair. Exporting your public certificate for others Public Key Infrastructure relies on two simple concepts: the public key may be made public and the private key must be private. By exporting your public certificate, you are able to share it with others who can then import the certificate to trust you. keytool -exportcert -keystore javakeystore_keepsecret.jks -alias erikcostlow -file erikcostlow.cer To verify this, you can open the .cer file by double-clicking it on most operating systems. It should show the information that you entered during the creation prompts. This is the file that you will share with others. They will use this certificate to prove that artifacts signed by this certificate came from you. If you do not manage machines directly, place the certificate file on an area that people within the known community should trust, such as an intranet page. Import the certificate onto machines that should trust you In order to trust the certificate, people within your known network must import your certificate into their keystores. The first step is to verify that the certificate is actually yours, which can be done through any band: email, phone, in-person, etc. Known networks can usually do this Determine the right keystore: For an individual user looking to trust another, the correct file is within that user’s directory.e.g. USER_HOME\AppData\LocalLow\Sun\Java\Deployment\security\trusted.certs For system-wide installations, Java’s Certificate Authorities are in JAVA_HOMEe.g. C:\Program Files\Java\jre8\lib\security\cacerts File paths for Mac and Linux are included in the link above. Follow the instructions to import the certificate into the keystore. keytool -importcert -keystore THEKEYSTOREFROMABOVE -alias erikcostlow -file erikcostlow.cer In this case, I am still using my name for the alias because it’s easy for me to remember. You may also use an alias of your company name. Scaling distribution of the import The easiest way to apply your certificate across many machines is to just push the .certs or cacerts file onto them. When doing this, watch out for any changes that people would have made to this file on their machines. Trusted.certs: When publishing into user directories, your file will overwrite any keys that the user has added since last update. CACerts: It is best to re-run the import command with each installation rather than just overwriting the file. If you just keep the same cacerts file between upgrades, you will overwrite any CAs that have been added or removed. By re-importing, you stay up to date with changes. Verify work on a different machine Verification is a way of checking on the client machine to ensure that it properly trusts signed artifacts after you have added your signing certificate. Many people have started using deployment rule sets. You can validate the deployment rule set by: Create and sign the deployment rule set on the computer that holds the private key. 
Copy the deployment rule set on to the different machine where you have imported the signing certificate. Verify that the Java Control Panel’s security tab shows your deployment rule set. Verifying an individual JAR file or multiple JAR files You can test a certificate chain by using the jarsigner command. jarsigner -verify filename.jar If the output does not say "jar verified" then run the following command to see why: jarsigner -verify -verbose -certs filename.jar Check the output for the term “CertPath not validated.”
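Pulled together, the end-to-end flow described above fits in a few commands. This is just a condensed recap of the steps from this post; the alias, file names and keystore path are the example values used above, so substitute your own:

    # 1. Create the public/private key pair (keep this keystore secret)
    keytool -genkeypair -alias erikcostlow -keyalg EC -keysize 571 -validity 730 -keystore javakeystore_keepsecret.jks

    # 2. Export the public certificate to share with your known community
    keytool -exportcert -keystore javakeystore_keepsecret.jks -alias erikcostlow -file erikcostlow.cer

    # 3. On each machine that should trust you, import the certificate into the
    #    right keystore (the user's trusted.certs or JAVA_HOME\lib\security\cacerts)
    keytool -importcert -keystore THEKEYSTOREFROMABOVE -alias erikcostlow -file erikcostlow.cer

    # 4. Verify that a signed JAR validates against the imported certificate
    jarsigner -verify -verbose -certs filename.jar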

    Read the article

  • Windows 8 Launch – Why OEM and Retailers Should STFU

    - by D'Arcy Lussier
    Microsoft has gotten a lot of flack for the Surface from OEM/hardware partners who create Windows-based devices and I’m sure, to an extent, retailers who normally stock and sell Windows-based devices. I mean we all know how this is supposed to work – Microsoft makes the OS, partners make the hardware, retailers sell the hardware. Now Microsoft is breaking the rules by not only offering their own hardware but selling them via online and through their Microsoft branded stores! The thought has been that Microsoft is trying to set a standard for the other hardware companies to reach for. Maybe. I hope, at some level, Microsoft may be covertly responding to frustrations associated with trusting the OEMs and Retailers to deliver on their part of the supply chain. I know as a consumer, I’m very frustrated with the Windows 8 launch. Aside from the Surface sales, there’s nothing happening at the retail level. Let me back up and explain. Over the weekend I visited a number of stores in hopes of trying out various Windows 8 devices. Out of three retailers (Staples, Best Buy, and Future Shop), not *one* met my expectations. Let me be honest with you Staples, I never really have high expectations from your computer department. If I need paper or pens, whatever, but computers – you’re not the top of my list for price or selection. Still, considering you flaunted Win 8 devices in your flyer I expected *something* – some sign of effort that you took the Windows 8 launch seriously. As I entered the 1910 Pembina Highway location in Winnipeg, there was nothing – no signage, no banners – nothing that would suggest Windows 8 had even launched. I made my way to the laptops. I had to play with each machine to determine which ones were running Windows 8. There wasn’t anything on the placards that made it obvious which were Windows 8 machines and which ones were Windows 7. Likewise, there was no easy way to identify the touch screen laptop (the HP model) from the others without physically touching the screen to verify. Horrible experience. In the same mall as the Staples I mentioned above, there’s a Future Shop. Surely they would be more on the ball. I walked in to the 1910 Pembina Highway location and immediately realized I would not get a better experience. Except for the sign by the front door mentioning Windows 8, there was *nothing* in the computer department pointing you to the Windows 8 devices. Like in Staples, the Win 8 laptops were mixed in with the Win 7 ones and there was nothing notable calling out which ones were running Win 8. I happened to hit up the St. James Street location today, thinking since its a busier store they must have more options. To their credit, they did have two staff members decked out in Windows 8 shirts and who were helping a customer understand Windows 8. But otherwise, there was nothing highlighting the Windows 8 devices and they were again mixed in with the rest of the Win 7 machines. Finally, we have the St. James Street Best Buy location here in Winnipeg. I’m sure Best Buy will have their act together. Nope, not even close. Same story as the others: minimal signage (there was a sign as you walked in with a link to this schedule of demo days), Windows 8 hardware mixed with the rest of the PC offerings, and no visible call-outs identifying which were Win 8 based. This meant that, like Future Shop and Staples, if you wanted to know which machine had Windows 8 you had to go and scrutinize each machine. 
Also, there was nothing identifying which ones were touch based and which were not. Just Another Day… To these retailers, it seemed that the Windows 8 launch was just another day, with another product to add to the showroom floor. Meanwhile, Apple has their dedicated areas *in all three stores*. It was dead simple to find where the Apple products were compared to the Windows 8 products. No wonder Microsoft is starting to push their own retail stores. No wonder Microsoft is trying to funnel orders through them instead of relying on these bloated retail big box stores who obviously can’t manage a product launch. It’s Not Just The Retailers… Remember when the Acer CEO, Founder, and President of Computer Global Operations all weighed in on how Microsoft releasing the Surface would have a “huge negative impact for the ecosystem and other brands may take a negative reaction”? Also remember the CEO stating “[making hardware] is not something you are good at so please think twice”? Well the launch day has come and gone, and so far Microsoft is the only one that delivered on having hardware available on the October 26th date. Oh sure, there are laptops running Windows 8 – but all in one desktop PCs? I’ve only seen one or two! And tablets are *non existent*, with some showing an early to late November availability on Best Buy’s website! So while the retailers could be doing more to make it easier to find Windows 8 devices, the manufacturers could help by *getting devices into stores*! That’s supposedly something that these companies are good at, according to the Acer CEO. So Here’s What the Retailers and Manufacturers Need To Do… Get Product Out The pivotal timeframe will be now to the end of November. We need to start seeing all these fantastic pieces of hardware ship – including the Samsung ATIV Smart PC Pro, the Acer Iconia, the Asus TAICHI 21, and the sexy Samsung Series 7 27” desktop. It’s not enough to see product announcements, we need to see actual devices. Make It Easy For Customers To Find Win8 Devices You want to make it easy to sell these things? Make it easy for people to find them! Have staff on hand that really know how these devices run and what can be done with them. Don’t just have a single demo day, have people who can demo it every day! Make It Easy to See the Features There’s touch screen desktops, touch screen laptops, tablets, non-touch laptops, etc. People need to easily find the features for each machine. If I’m looking for a touch-laptop, I shouldn’t need to sift through all the non-touch laptops to find them – at the least, I need to quickly be able to see which ones are touch. I feel silly even typing this because this should be retail 101 and I have no retail background (but I do have an extensive background as a customer). In Summary… Microsoft launching the Surface and selling them through their own channels isn’t slapping its OEM and retail partners in the face; its slapping them to wake the hell up and stop coasting through Windows launch events like they don’t matter. Unless I see some improvements from vendors and retailers in November, I may just hold onto my money for a Surface Pro even if I have to wait until early 2013. Your move OEM/Retailers. *Update – While my experience has been in Winnipeg, similar experiences have been voiced from colleagues in Calgary and Edmonton.

    Read the article

  • Announcing: Improvements to the Windows Azure Portal

    - by ScottGu
    Earlier today we released a number of enhancements to the new Windows Azure Management Portal.  These new capabilities include: Service Bus Management and Monitoring Support for Managing Co-administrators Import/Export support for SQL Databases Virtual Machine Experience Enhancements Improved Cloud Service Status Notifications Media Services Monitoring Support Storage Container Creation and Access Control Support All of these improvements are now live in production and available to start using immediately.  Below are more details on them: Service Bus Management and Monitoring The new Windows Azure Management Portal now supports Service Bus management and monitoring. Service Bus provides rich messaging infrastructure that can sit between applications (or between cloud and on-premise environments) and allow them to communicate in a loosely coupled way for improved scale and resiliency. With the new Service Bus experience, you can now create and manage Service Bus Namespaces, Queues, Topics, Relays and Subscriptions. You can also get rich monitoring for Service Bus Queues, Topics and Subscriptions. To create a Service Bus namespace, you can now select the “Service Bus” tab in the Windows Azure portal and then simply select the CREATE command: Doing so will bring up a new “Create a Namespace” dialog that allows you to name and create a new Service Bus Namespace: Once created, you can obtain security credentials associated with the Namespace via the ACCESS KEY command. This gives you the ability to obtain the connection string associated with the service namespace. You can copy and paste these values into any application that requires these credentials: It is also now easy to create Service Bus Queues and Topics via the NEW experience in the portal drawer.  Simply click the NEW command and navigate to the “App Services” category to create a new Service Bus entity: Once you provision a new Queue or Topic it can be managed in the portal.  Clicking on a namespace will display all queues and topics within it: Clicking on an item in the list will allow you to drill down into a dashboard view that allows you to monitor the activity and traffic within it, as well as perform operations on it. For example, below is a view of an “orders” queue – note how we now surface both the incoming and outgoing message flow rate, as well as the total queue length and queue size: To monitor pub/sub subscriptions you can use the ADD METRICS command within a topic and select a specific subscription to monitor. Support for Managing Co-Administrators You can now add co-administrators for your Windows Azure subscription using the new Windows Azure Portal. This allows you to share management of your Windows Azure services with other users. Subscription co-administrators share the same administrative rights and permissions that service administrator have - except a co-administrator cannot change or view billing details about the account, nor remove the service administrator from a subscription. In the SETTINGS section, click on the ADMINISTRATORS tab, and select the ADD button to add a co-administrator to your subscription: To add a co-administrator, you specify the email address for a Microsoft account (formerly Windows Live ID) or an organizational account, and choose the subscription you want to add them to: You can later update the subscriptions that the co-administrator has access to by clicking on the EDIT button, and then select or deselect the subscriptions to which they belong. 
Import/Export Support for SQL Databases The Windows Azure administration portal now supports importing and exporting SQL Databases to/from Blob Storage.  Databases can be imported/exported to blob storage using the same BACPAC file format that is supported with SQL Server 2012.  Among other benefits, this makes it easy to copy and migrate databases between on-premise and cloud environments. SQL Databases now have an EXPORT command in the bottom drawer that when pressed will prompt you to save your database to a Windows Azure storage container: The UI allows you to choose an existing storage account or create a new one, as well as the name of the BACPAC file to persist in blob storage: You can also now import and create a new SQL Database by using the NEW command.  This will prompt you to select the storage container and file to import the database from: The Windows Azure Portal enables you to monitor the progress of import and export operations. If you choose to log out of the portal, you can come back later and check on the status of all of the operations in the new history tab of the SQL Database server – this shows your entire import and export history and the status (success/fail) of each: Enhancements to the Virtual Machine Experience One of the common pain-points we have heard from customers using the preview of our new Virtual Machine support has been the inability to delete the associated VHDs when a VM instance (or VM drive) gets deleted. Prior to today’s release the VHDs would continue to be in your storage account and accumulate storage charges. You can now navigate to the Disks tab within the Virtual Machine extension, select a VM disk to delete, and click the DELETE DISK command: When you click the DELETE DISK button you have the option to delete the disk + associated .VHD file (completely clearing it from storage).  Alternatively you can delete the disk but still retain a .VHD copy of it in storage. Improved Cloud Service Status Notifications The Windows Azure portal now exposes more information of the health status of role instances.  If any of the instances are in a non-running state, the status at the top of the dashboard will summarize the status (and update automatically as the role health changes): Clicking the instance hyperlink within this status summary view will navigate you to a detailed role instance view, and allow you to get more detailed health status of each of the instances.  The portal has been updated to provide more specific status information within this detailed view – giving you better visibility into the health of your app: Monitoring Support for Media Services Windows Azure Media Services allows you to create media processing jobs (for example: encoding media files) in your Windows Azure Media Services account. In the Windows Azure Portal, you can now monitor the number of encoding jobs that are queued up for processing as well as active, failed and queued tasks for encoding jobs. On your media services account dashboard, you can visualize the monitoring data for last 6 hours, 24 hours or 7 days. Storage Container Creation and Access Control Support You can now create Windows Azure Storage storage containers from within the Windows Azure Portal.  
After selecting a storage account, you can navigate to the CONTAINERS tab and click the ADD CONTAINER command: This will display a dialog that lets you name the new container and control access to it: You can also update the access setting as well as container metadata of existing containers by selecting one and then using the new EDIT CONTAINER command: This will then bring up the edit container dialog that allows you to change and save its settings: In addition to creating and editing containers, you can click on them within the portal to drill-in and view blobs within them.  Summary The above features are all now live in production and available to use immediately.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it. We’ll have even more new features and enhancements coming later this month – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5, and support .NET 4.5 with Websites).  Keep an eye out on my blog for details as these new features become available. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data Have you been hearing about “big data”, “map reduce” and other large scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a “test” Hadoop environment on your machine? More recently, I have read about some of Microsoft’s work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation when on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather more experienced. Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach and it occurred to me that PowerShell's support for Windows Remote Management (WinRM), and some other inherent features of PowerShell might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment. Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine. C:> winrm quickconfig WinRM is not set up to receive requests on this machine. The following changes must be made: Set the WinRM service type to auto start. Start the WinRM service. Make these changes [y/n]? y Alternatively, you could take the approach described in the Remotely enable PSRemoting post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node. Invoke-MapRedux Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet: $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell Module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined. Map Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows. 
A simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function. $aMap = { Param ( [PsObject] $dataset ) # Indicate the job is running on the remote node. Write-Host ($env:computername + "::Map"); # The hashtable to return $list = @{}; # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject... # ... The $dataset has a single 'Data' property which contains an array of data rows # which is a subset of the originally submitted data set. # Return the hashtable (Key, PSObject) Write-Output $list; } Reduce Likewise, with the Reduce function a simple prototype must be followed which takes a $key and a result $dataset from the MapRedux's partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section. $aReduce = { Param ( [object] $key, [PSObject] $dataset ) Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count) # The hashtable to return $redux = @{}; # Return Write-Output $redux; } All Together Now When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem and call Invoke-MapRedux to get the process started: # Import the MapRedux and SQL Server providers Import-Module "MapRedux" Import-Module “sqlps” -DisableNameChecking # Query the database for a dataset Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey"; Write-Host "Query: $query" $dataset = Invoke-SqlCmd -query $query # Build the Map function $MyMap = { Param ( [PsObject] $dataset ) Write-Host ($env:computername + "::Map"); $list = @{}; foreach($row in $dataset.Data) { # Write-Host ("Key: " + $row.MyKey.ToString()); if($list.ContainsKey($row.MyKey) -eq $true) { $s = $list.Item($row.MyKey); $s.Sum += $row.Value1; $s.Count++; } else { $s = New-Object PSObject; $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey; $s | Add-Member -type NoteProperty -Name Sum -Value $row.Value1; $list.Add($row.MyKey, $s); } } Write-Output $list; } $MyReduce = { Param ( [object] $key, [PSObject] $dataset ) Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count) $redux = @{}; $count = 0; foreach($s in $dataset.Data) { $sum += $s.Sum; $count += 1; } # Reduce $redux.Add($s.MyKey, $sum / $count); # Return Write-Output $redux; } # Create the item data $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce # Array of processing nodes... $MyNodes = ("node1", "node2", "node3", "node4", "localhost") # Run the Map Reduce routine... $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose # Show the results Set-Location C:\ $MyMrResults | Out-GridView Conclusion I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done and PowerShell could prove to be the the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic excerise at home or school. 
Follow me on Twitter to stay up to date on the continuing progress of my Powershell MapRedux module, and thanks for reading! Daniel

    Read the article

  • Can Google Employees See My Saved Google Chrome Passwords?

    - by Jason Fitzpatrick
    Storing your passwords in your web browser seems like a great time saver, but are the passwords secure and inaccessible to others (even employees of the browser company) when squirreled away? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader MMA is curious if Google employees have (or could have) access to the passwords he stores in Google Chrome: I understand that we are really tempted to save our passwords in Google Chrome. The likely benefit is two fold, You don’t need to (memorize and) input those long and cryptic passwords. These are available wherever you are once you log in to your Google account. The last point sparked my doubt. Since the password is available anywhere, the storage must in some central location, and this should be at Google. Now, my simple question is, can a Google employee see my passwords? Searching over the Internet revealed several articles/messages. Do you save passwords in Chrome? Maybe you should reconsider: Talks about your passwords being stolen by someone who has access to your computer account. Nothing mentioned about the central storage security and vulnerability. There is even a response from Chrome browser security tech lead about the first issue. Chrome’s insane password security strategy: Mostly along the same line. You can steal password from somebody if you have access to the computer account. How to Steal Passwords Saved in Google Chrome in 5 Simple Steps: Teaches you how to actually perform the act mentioned in the previous two when you have access to somebody else’s account. There are many more (including this one at this site), mostly along the same line, points, counter-points, huge debates. I refrain from mentioning them here, simply carry a search if you want to find them. Coming back to my original query, can a Google employee see my password? Since I can view the password using a simple button, definitely they can be unhashed (decrypted) even if encrypted. This is very different from the passwords saved in Unix-like OS’s where the saved password can never be seen in plain text. They use a one-way encryption algorithm to encrypt your passwords. This encrypted password is then stored in the passwd or shadow file. When you attempt to login, the password you type in is encrypted again and compared with the entry in the file that stores your passwords. If they match, it must be the same password, and you are allowed access. Thus, a superuser can change my password, can block my account, but he can never see my password. So are his concerns well founded or will a little insight dispel his worry? The Answer SuperUser contributor Zeel helps put his mind at ease: Short answer: No* Passwords stored on your local machine can be decrypted by Chrome, as long as your OS user account is logged in. And then you can view those in plain text. At first this seems horrible, but how did you think auto-fill worked? When that password field gets filled in, Chrome must insert the real password into the HTML form element – or else the page wouldn’t work right, and you could not submit the form. And if the connection to the website is not over HTTPS, the plain text is then sent over the internet. In other words, if chrome can’t get the plain text passwords, then they are totally useless. A one way hash is no good, because we need to use them. 
Now the passwords are in fact encrypted, the only way to get them back to plain text is to have the decryption key. That key is your Google password, or a secondary key you can set up. When you sign into Chrome and sync, the Google servers will transmit the encrypted passwords, settings, bookmarks, auto-fill, etc, to your local machine. Here Chrome will decrypt the information and be able to use it. On Google’s end all that info is stored in its encrypted state, and they do not have the key to decrypt it. Your account password is checked against a hash to log in to Google, and even if you let Chrome remember it, that encrypted version is hidden in the same bundle as the other passwords, impossible to access. So an employee could probably grab a dump of the encrypted data, but it wouldn’t do them any good, since they would have no way to use it.* So no, Google employees can not** access your passwords, since they are encrypted on their servers. * However, do not forget that any system that can be accessed by an authorized user can be accessed by an unauthorized user. Some systems are easier to break than others, but none are fail-proof... That being said, I think I will trust Google and the millions they spend on security systems, over any other password storage solution. And heck, I’m a wimpy nerd, it would be easier to beat the passwords out of me than break Google’s encryption. ** I am also assuming that there isn’t a person who just happens to work for Google gaining access to your local machine. In that case you are screwed, but employment at Google isn’t actually a factor any more. Moral: Hit Win + L before leaving your machine. While we agree with zeel that it’s a pretty safe bet (as long as your computer is not compromised) that your passwords are in fact safe while stored in Chrome, we prefer to encrypt all our logins and passwords in a LastPass vault. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
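To make the contrast concrete: a shell one-liner can show what a one-way crypt-style hash (as stored in /etc/shadow) looks like, and why a browser cannot use one for saved passwords. A small sketch with OpenSSL; the password and salt here are made-up examples:

    # One-way MD5-crypt hash of a password, with a random salt on each run
    openssl passwd -1 'MySecretPassword'            # prints $1$<salt>$<hash>

    # Login-style verification: hash the typed password with the stored salt
    # and compare the two strings; the plain text is never recovered.
    openssl passwd -1 -salt someSalt 'MySecretPassword'

    # A browser's password store cannot work this way: the real password has to
    # be written back into the login form, so the store must be reversible.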

    Read the article

  • Cobol: science and fiction

    - by user847
    There are a few threads about the relevance of the Cobol programming language on this forum, e.g. this thread links to a collection of them. What I am interested in here is a frequently repeated claim based on a study by Gartner from 1997: that there were around 200 billion lines of code in active use at that time! I would like to ask some questions to verify or falsify a couple of related points. My goal is to understand if this statement has any truth to it or if it is totally unrealistic. I apologize in advance for being a little verbose in presenting my line of thought and my own opinion on the things I am not sure about, but I think it might help to put things in context and thus highlight any wrong assumptions and conclusions I have made. Sometimes, the "200 billion lines" number is accompanied by the added claim that this corresponded to 80% of all programming code in any language in active use. Other times, the 80% merely refer to so-called "business code" (or some other vague phrase hinting that the reader is not to count mainstream software, embedded systems or anything else where Cobol is practically non-existent). In the following I assume that the code does not include double-counting of multiple installations of the same software (since that is cheating!). In particular in the time prior to the y2k problem, it has been noted that a lot of Cobol code is already 20 to 30 years old. That would mean it was written in the late 60ies and 70ies. At that time, the market leader was IBM with the IBM/370 mainframe. IBM has put up a historical announcement on his website quoting prices and availability. According to the sheet, prices are about one million dollars for machines with up to half a megabyte of memory. Question 1: How many mainframes have actually been sold? I have not found any numbers for those times; the latest numbers are for the year 2000, again by Gartner. :^( I would guess that the actual number is in the hundreds or the low thousands; if the market size was 50 billion in 2000 and the market has grown exponentially like any other technology, it might have been merely a few billions back in 1970. Since the IBM/370 was sold for twenty years, twenty times a few thousand will result in a couple of ten-thousands of machines (and that is pretty optimistic)! Question 2: How large were the programs in lines of code? I don't know how many bytes of machine code result from one line of source code on that architecture. But since the IBM/370 was a 32-bit machine, any address access must have used 4 bytes plus instruction (2, maybe 3 bytes for that?). If you count in operating system and data for the program, how many lines of code would have fit into the main memory of half a megabyte? Question 3: Was there no standard software? Did every single machine sold run a unique hand-coded system without any standard software? Seriously, even if every machine was programmed from scratch without any reuse of legacy code (wait ... didn't that violate one of the claims we started from to begin with???) we might have O(50,000 l.o.c./machine) * O(20,000 machines) = O(1,000,000,000 l.o.c.). That is still far, far, far away from 200 billion! Am I missing something obvious here? Question 4: How many programmers did we need to write 200 billion lines of code? I am really not sure about this one, but if we take an average of 10 l.o.c. per day, we would need 55 million man-years to achieve this! 
In the time-frame of 20 to 30 years this would mean that there must have existed two to three million programmers constantly writing, testing, debugging and documenting code. That would be about as many programmers as we have in China today, wouldn't it? Question 5: What about the competition? So far, I have come up with two things here: 1) IBM had their own programming language, PL/I. Above I have assumed that the majority of code has been written exclusively using Cobol. However, all other things being equal I wonder if IBM marketing had really pushed their own development off the market in favor of Cobol on their machines. Was there really no relevant code base of PL/I? 2) Sometimes (also on this board in the thread quoted above) I come across the claim that the "200 billion lines of code" are simply invisible to anybody outside of "governments, banks ..." (and whatnot). Actually, the DoD had funded their own language in order to increase cost effectiveness and reduce the proliferation of programming language. This lead to their use of Ada. Would they really worry about having so many different programming languages if they had predominantly used Cobol? If there was any language running on "government and military" systems outside the perception of mainstream computing, wouldn't that language be Ada? I hope someone can point out any flaws in my assumptions and/or conclusions and shed some light on whether the above claim has any truth to it or not.
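For what it is worth, the arithmetic behind Question 4 does check out under its own assumptions. A quick sanity check, assuming 10 lines of code per day and coding every day of the year:

    # 200 billion lines at 10 lines of code per day, expressed in millions of man-years
    echo "scale=2; 200000000000 / 10 / 365 / 1000000" | bc
    # => 54.79, i.e. roughly the 55 million man-years quoted above
    # (using ~250 working days per year instead pushes it to about 80 million)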

    Read the article

  • Oracle Certification and Virtualization Solutions

    - by scoter
    As stated in the official MOS ( My Oracle Support ) document 249212.1, support for Oracle products on non-Oracle x86 virtualization platforms follows exactly the same stance as support for VMware, so the only x86 virtualization software solution certified for any Oracle product is "Oracle VM". Based on the facts that:
    - Oracle VM is totally free ( you have the option to buy Oracle support )
    - certified is quite different from supported ( Oracle VM is certified, others may only be supported )
    - with Oracle VM you may not be required to reproduce your issue(s) on a physical server
    - Oracle VM is the only x86 software solution that allows hard-partitioning ***
    *** see details in these Oracle public links:
    http://www.oracle.com/technetwork/server-storage/vm/ovm-hardpart-168217.pdf
    http://www.oracle.com/us/corporate/pricing/partitioning-070609.pdf
    people started asking to migrate from third-party virtualization software ( e.g. RH KVM, VMware ) to Oracle VM.

    Migrating a RH KVM guest to Oracle VM. Oracle VM has a built-in P2V utility ( see the official documentation ), but in some cases we can't use it, due to:
    - network inaccessibility between the hypervisors ( KVM and OVM )
    - network slowness between the hypervisors ( KVM and OVM )
    - the size of the guest virtual-disks
    Here you'll find a step-by-step guide to "manually" migrate a guest machine from KVM to OVM.

    1. Verify the source guest characteristics.
    Using the KVM web console you can verify the characteristics of the guest you need to migrate, such as:
    - CPU cores details
    - defined memory ( RAM )
    - name of your guest
    - guest operating system
    - disk details ( number and size )
    - network details ( number of NICs and network configuration )

    2. Export your guest in OVF / OVA format.
    The export from Red Hat KVM ( Kernel-based Virtual Machine ) will create a structured export of your guest:
    [root@ovmserver1 mnt]# ll
    total 12
    drwxrwx--- 5 36 36 4096 Oct 19 2012 b8296fca-13c4-4841-a50f-773b5139fcee
    b8296fca-13c4-4841-a50f-773b5139fcee is the ID of the guest exported from RH KVM.
    [root@ovmserver1 mnt]# cd b8296fca-13c4-4841-a50f-773b5139fcee/
    [root@ovmserver1 b8296fca-13c4-4841-a50f-773b5139fcee]# ls -ltr
    total 12
    drwxr-x--- 4 36 36 4096 Oct 19  2012 master
    drwxrwx--- 2 36 36 4096 Oct 29  2012 dom_md
    drwxrwx--- 4 36 36 4096 Oct 31  2012 images
    images contains the exported virtual-disks.
    [root@ovmserver1 b8296fca-13c4-4841-a50f-773b5139fcee]# cd images/
    [root@ovmserver1 images]# ls -ltra
    total 16
    drwxrwx--- 5 36 36 4096 Oct 19  2012 ..
    drwxrwx--- 2 36 36 4096 Oct 31  2012 d4ef928d-6dc6-4743-b20d-568b424728a5
    drwxrwx--- 2 36 36 4096 Oct 31  2012 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1
    drwxrwx--- 4 36 36 4096 Oct 31  2012 .
    [root@ovmserver1 images]# cd d4ef928d-6dc6-4743-b20d-568b424728a5/
    [root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# ls -l
    total 5169092
    -rwxr----- 1 36 36 187904819200 Oct 31  2012 4c03b1cf-67cc-4af0-ad1e-529fd665dac1
    -rw-rw---- 1 36 36          341 Oct 31  2012 4c03b1cf-67cc-4af0-ad1e-529fd665dac1.meta
    [root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# file 4c03b1cf-67cc-4af0-ad1e-529fd665dac1
    4c03b1cf-67cc-4af0-ad1e-529fd665dac1: LVM2 (Linux Logical Volume Manager), UUID: sZL1Ttpy0vNqykaPahEo3hK3lGhwspv
    4c03b1cf-67cc-4af0-ad1e-529fd665dac1 is the first exported disk ( a physical volume ).
    [root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# cd ../4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/
    [root@ovmserver1 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1]# ls -l
    total 5568076
    -rwxr----- 1 36 36 107374182400 Oct 31  2012 9020f2e1-7b8a-4641-8f80-749768cc237a
    -rw-rw---- 1 36 36          341 Oct 31  2012 9020f2e1-7b8a-4641-8f80-749768cc237a.meta
    [root@ovmserver1 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1]# file 9020f2e1-7b8a-4641-8f80-749768cc237a
    9020f2e1-7b8a-4641-8f80-749768cc237a: x86 boot sector; partition 1: ID=0x83, active, starthead 1, startsector 63, 401562 sectors; partition 2: ID=0x82, starthead 0, startsector 401625, 65529135 sectors; partition 3: ID=0x83, starthead 254, startsector 65930760, 8385930 sectors; partition 4: ID=0x5, starthead 254, startsector 74316690, 135395820 sectors, code offset 0x48
    9020f2e1-7b8a-4641-8f80-749768cc237a is the second exported disk, with partition 1 bootable.

    3. Prepare the new guest on Oracle VM.
    With OVM Manager we can prepare the guest that will receive the exported virtual-disks; under the "Servers and VMs" tab, click the create icon and create your guest with the parameters collected before (point 1):
    - add NICs on the different networks
    - add the virtual-disks; in this case we add two disks of 1.0 GB each; the disks will be extended when the source KVM virtual-disks are copied over ( see the next steps )
    - verify the virtual-disks created ( under the Repositories tab )

    4. Verify the OVM virtual-disk names.
    [root@ovmserver1 VirtualMachines]# grep -r hyptest_rdbms *
    0004fb0000060000a906b423f44da98e/vm.cfg:OVM_simple_name = 'hyptest_rdbms'
    [root@ovmserver1 VirtualMachines]# cd 0004fb0000060000a906b423f44da98e
    [root@ovmserver1 0004fb0000060000a906b423f44da98e]# more vm.cfg
    vif = ['mac=00:21:f6:0f:3f:85,bridge=0004fb001089128', 'mac=00:21:f6:0f:3f:8e,bridge=0004fb00101971d']
    OVM_simple_name = 'hyptest_rdbms'
    vnclisten = '127.0.0.1'
    disk = ['file:/OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img,xvda,w', 'file:/OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img,xvdb,w']
    vncunused = '1'
    uuid = '0004fb00-0006-0000-a906-b423f44da98e'
    on_reboot = 'restart'
    cpu_weight = 27500
    memory = 32768
    cpu_cap = 0
    maxvcpus = 8
    OVM_high_availability = True
    maxmem = 32768
    vnc = '1'
    OVM_description = ''
    on_poweroff = 'destroy'
    on_crash = 'restart'
    name = '0004fb0000060000a906b423f44da98e'
    guest_os_type = 'linux'
    builder = 'hvm'
    vcpus = 8
    keymap = 'en-us'
    OVM_os_type = 'Oracle Linux 5'
    OVM_cpu_compat_group = ''
    OVM_domain_type = 'xen_hvm'
    disk1 on OVM ( xvda ) ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img
    disk2 on OVM ( xvdb ) ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img
    Summarizing:
    disk1 --source ==> /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/9020f2e1-7b8a-4641-8f80-749768cc237a
    disk1 --dest   ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img
    disk2 --source ==> /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/d4ef928d-6dc6-4743-b20d-568b424728a5/4c03b1cf-67cc-4af0-ad1e-529fd665dac1
    disk2 --dest   ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img

    5. Copy the KVM exported virtual-disks onto the OVM virtual-disks.
    With the Oracle VM guest still stopped, copy the KVM exported virtual-disks over the OVM virtual-disks; in this case I simply mounted the filesystem containing the exported virtual-disks locally on my OVS ( via a USB device ). The copy automatically resizes the OVM virtual-disks ( previously created with a size of 1 GB ).
    nohup cp /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/9020f2e1-7b8a-4641-8f80-749768cc237a /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img &
    nohup cp /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/d4ef928d-6dc6-4743-b20d-568b424728a5/4c03b1cf-67cc-4af0-ad1e-529fd665dac1 /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img &

    6. When the copy has completed, refresh the repository to acknowledge the new disk sizes.

    7. After the repository refresh is completed, start the guest machine from Oracle VM Manager.
    After the first start of your guest:
    - verify that you can see all disks and partitions
    - verify that your guest is reachable over the network ( the MAC addresses of your NICs have changed )
    You can also evaluate converting your guest to PVM ( a paravirtualized virtual machine ) following the official Oracle documentation.
    Ciao
    Simon COTER
    ps: next time I'd like to post an article on how to manually migrate Virtual Iron guests to Oracle VM. Comments and corrections are welcome.
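    As an optional extra check ( not part of the original procedure ), the small Python sketch below compares the size and SHA-256 checksum of each destination virtual-disk against its KVM-exported source before the guest is started. The paths are the ones used in the steps above and must be adapted to your environment; expect the checksums to take a while on disks of this size.

    # optional_copy_check.py - sanity check after steps 5 and 6:
    # compare size and SHA-256 digest of each copied OVM virtual-disk
    # against its KVM-exported source.
    import hashlib
    import os

    DISKS = [
        ("/mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/9020f2e1-7b8a-4641-8f80-749768cc237a",
         "/OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img"),
        ("/mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/d4ef928d-6dc6-4743-b20d-568b424728a5/4c03b1cf-67cc-4af0-ad1e-529fd665dac1",
         "/OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img"),
    ]

    def sha256(path, chunk=16 * 1024 * 1024):
        # stream the image in 16 MB chunks so huge files don't exhaust memory
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                digest.update(block)
        return digest.hexdigest()

    for src, dst in DISKS:
        same_size = os.path.getsize(src) == os.path.getsize(dst)
        same_hash = sha256(src) == sha256(dst)
        print("%s: %s" % ("OK" if same_size and same_hash else "MISMATCH", dst))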

    Read the article

  • Create a Customized Tab on the Office 2010 Ribbon

    - by Mysticgeek
    Some MS Office users were put off a bit by the Ribbon feature in 2007, finding it cumbersome and confusing. Today we look at a cool new feature in Office 2010 that allows you to create your own custom tabs with specific commands for easier document creation.

    Create a Customized Tab
    In our example we're using Word, but you can create a custom tab in the other Office apps as well. To do so, right-click on the Ribbon and select Customize the Ribbon. The Word Options screen opens up, and from here you can manage a lot of customization options. We want to create a new customized tab, so click on the New Tab button. Now give it a name. Then just drag the commands you want to add from the left column over to your new custom group. You have every command available to choose from; you can select specific groups or all commands from the dropdown menu on the left. That is all there is to it: now you have your own customized tab with the commands you use most often to help you work more efficiently. In this example we didn't add a whole lot of commands, but you can customize it with as many as you need. You can also create other tabs with different sets of commands.

    When you create a customized tab in one application, it's only going to be in that app. For example, if you create one in Word, it's not going to show in Excel, as commands differ between apps. If you want a custom tab in another Office app, you'll need to create one for it.

    Another very cool thing you can do is export the customizations to use on another machine or pass them to a coworker. To export the customizations, go to the Customize Ribbon section, and at the bottom of the right field click Import/Export, then Export all customizations, and save the file to a location on your hard drive. To import the settings on another machine, go into the Ribbon customizations, select Import customizations file… and browse to the file you exported. You'll be prompted to confirm that you want to import the customizations. After confirming the choice, you'll see the customization show up on the other machine. This is very handy if you work on several machines throughout the day and want to easily bring your customized tabs with you.

    If you find yourself using a lot of specific commands throughout the day, creating your own customized tab will help you access them more quickly. If you want to test out Office 2010, it's currently in Public Beta and can be downloaded for free. Download Office 2010 Beta

    Read the article

< Previous Page | 183 184 185 186 187 188 189 190 191 192 193 194  | Next Page >