Search Results

Search found 12844 results on 514 pages for 'manual testing'.


  • Choice of an OS for a home ZFS NAS

    - by OlafM
    I am preparing a home NAS with an old Athlon 64 X2 3800+, 4 GB ECC RAM, an Asus M2V MX motherboard, and a single 3 TB WDC Green (another one may be installed as a mirror in the future). It's the cheapest solution I found that includes ECC memory, and the higher energy consumption is offset by the lower (zero) cost of acquisition. The system will be used for: music storage and streaming to other desktop computers; storage of scanned slides (3-4k slides, 180 MB TIFF each plus a reduced-quality JPEG version); streaming of these photos to a local iPad 2 (maybe the Plex app? not yet sure); (one additional) remote backup via rsync/ssh or ZFS send/receive. It will be controlled via remote ssh, maybe VNC, with no monitor attached. The absolute requirement is a reliable ZFS solution, plus the ability to easily install packages/software/virtual machines and to update remotely (I will be the admin and I don't live near the NAS).

    I have mainly three options: NAS4free/FreeNAS, OpenIndiana, or Solaris 11 Express (yeah yeah I know the license requirements, I will write a perl script on it to count it as a development machine). Problems: NAS4free/FreeNAS (I tested only NAS4free) require an embedded installation for remote upgrading, but a full install for easy addition of software packages. Since I need at least AirVideo Server (linux/win) and the Plex app (win/linux) to stream the photos and some videos to the iPad (they both require VirtualBox), but I cannot be there to install updates, NAS4free/FreeNAS are excluded. http://www.nas4free.org/general_information.html explains the issue: embedded can be remotely updated, full cannot. Solaris also has another advantage: the CrashPlan client supports Solaris and I'm already using it for other backups. I would like to leave that option open, even though I will probably be doing backups through zfs send/receive. NexentaStor was left out because zfs send/receive are not included in the free version.

    The question is now Solaris 11 Express versus OpenIndiana. To ease the management, I will be using http://www.napp-it.org. Which one would you suggest and why? I found lots of information and it's difficult for me to decide. I think (from the napp-it manual) that Solaris has some additional options for SMB shares, but are they really needed at home? I think I won't even use ACLs, since normal unix-style permissions are enough. OpenIndiana maybe has more frequent updates (Solaris offers only security updates between releases), but again, do I need them? I don't think so. Moreover, this is a NAS that has to work and nothing else; I cannot risk having problems that require me to access the server. Isn't OpenIndiana a bit more... cutting edge (in the Solaris world)? I'm just asking, no need to focus on this for the answer :-) I would limit myself to these two options (SE11.1/OI) also because I will be building a NAS for myself in the future (where high performance with Mac shares is also required) and Solaris has kernel support for AFP. I will use this server to gather experience as well. After this long question, thanks in advance! If you need additional info, let me know and I will update this post.

    UPDATES: Given the first answers, I will strongly suggest that the person paying for the hardware add a second HD. Better 2x2TB than 1x3TB (3 TB is oversized anyway). I was trying to keep the initial costs down to spread them over a longer period, but it's better to have something good from the beginning.
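    For the remote-backup part mentioned above, a minimal sketch of what incremental replication with zfs send/receive over ssh looks like (pool, dataset and host names are placeholders, not taken from the original post):

        # snapshot the dataset to be backed up
        zfs snapshot tank/photos@2012-06-01
        # initial full replication to the remote machine
        zfs send tank/photos@2012-06-01 | ssh backuphost zfs receive -F backup/photos
        # later runs send only the delta since the previous snapshot
        zfs snapshot tank/photos@2012-06-08
        zfs send -i tank/photos@2012-06-01 tank/photos@2012-06-08 | ssh backuphost zfs receive backup/photos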

    Read the article

  • Mysqld not starting due to apparent db corruption

    - by pitosalas
    I am very new at admining mysql, and bad for me, something caused the db to get clobbered. There are many error messages in the log that I am not sure how to safely proceed. Can you give some tips? Here's the log: 110107 15:07:15 mysqld started 110107 15:07:15 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 110107 15:07:15 InnoDB: Starting log scan based on checkpoint at InnoDB: log sequence number 35 515914826. InnoDB: Doing recovery: scanned up to log sequence number 35 515915839 InnoDB: 1 transaction(s) which must be rolled back or cleaned up InnoDB: in total 1 row operations to undo InnoDB: Trx id counter is 0 1697553664 110107 15:07:15 InnoDB: Starting an apply batch of log records to the database... InnoDB: Progress in percents: 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 InnoDB: Apply batch completed InnoDB: Starting rollback of uncommitted transactions InnoDB: Rolling back trx with id 0 1697553198, 1 rows to undoInnoDB: Error: trying to access page number 3522914176 in space 0, InnoDB: space name ./ibdata1, InnoDB: which is outside the tablespace bounds. InnoDB: Byte offset 0, len 16384, i/o type 10 110107 15:07:15InnoDB: Assertion failure in thread 3086403264 in file fil0fil.c line 3922 InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/mysql/en/Forcing_recovery.html InnoDB: about forcing recovery. mysqld got signal 11; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=0 read_buffer_size=131072 max_used_connections=0 max_connections=100 threads_connected=0 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 217599 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd=(nil) Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... Cannot determine thread, fp=0xbffc55ac, backtrace may not be correct. Stack range sanity check OK, backtrace follows: 0x8139eec 0x83721d5 0x833d897 0x833db71 0x832aa38 0x835f025 0x835f7a3 0x830a77e 0x8326b57 0x831c825 0x8317b8d 0x82a9e66 0x8315732 0x834fc9a 0x828d7c3 0x81c29dd 0x81b5620 0x813d9fe 0x40fdf3 0x80d5ff1 New value of fp=(nil) failed sanity check, terminating stack trace! Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html and follow instructions on how to resolve the stack trace. 
Resolved stack trace is much more helpful in diagnosing the problem, so please do resolve it The manual page at http://www.mysql.com/doc/en/Crashing.html contains information that should help you find out what is causing the crash. 110107 15:07:15 mysqld ended
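    The forced-recovery procedure that the log points to generally follows the sketch below; the recovery level and file names are illustrative only (start at 1 and raise it only as far as needed, and treat the InnoDB files as suspect until everything has been dumped and reloaded):

        # 1. add to /etc/my.cnf under [mysqld], temporarily:
        #        innodb_force_recovery = 4    (try 1 first; 6 is the maximum)
        # 2. start mysqld and dump everything while it stays up:
        mysqldump --all-databases > all-databases.sql
        # 3. stop mysqld, move the old ibdata1/ib_logfile* files aside,
        #    remove innodb_force_recovery from my.cnf, restart, then reload:
        mysql < all-databases.sql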

    Read the article

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data Daily and/or Weekly, which we process & merge into their existing database. During business hours, load in the servers are pretty minimal as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube. I manage the IT Operations Team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers such that we're not trying to process daily clients back-to-back over night. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support over-head to basically "schedule" the ETL such that they don't over-lap, and move clients/schedules around if they out-grow the resources on a particular server or allocated time-slot. As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3,4,5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling, and sequential processing. One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/Disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows. Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if instead of running DTExec, SQL, and SSAS same machine (with every machine running that configuration), we run in pairs of three machines with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETL we were able to process on the machine independently. Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently chose a server based off how "busy" it already is in terms of CPU/Memory/Disk utilization, but that would be a herculean effort, and seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it). So in summary, are we missing an obvious solution for this, and does anyone know if any tools (for free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term). 
Ultimately VMWare's Distributed Resource Scheduler addresses this as you simply run a consistent number of clients per VM that you know will never conflict scheduleing-wise, then leave it up to VMWare to move the VMs around to balance out hardware usage. I'm definitely not against using VMWare to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer by checking on resource utilization at the OS, SQL, SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMWare is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great. Thanks guys, Jeff
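    One partial mitigation for the memory contention described above (offered only as a hedged sketch, not something from the original post) is to cap how much memory the relational engine may claim, so that DTExec and SSAS keep some headroom on a shared box; the 16 GB figure is purely illustrative:

        :: cap the SQL Server relational engine so DTExec/SSAS packages are not starved
        sqlcmd -S . -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 16384; RECONFIGURE;"

    SSAS has an analogous Memory\TotalMemoryLimit setting in msmdsrv.ini that can be lowered in the same spirit.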

    Read the article

  • KVM with one host IP and a different subnet for machines

    - by Jguy
    I've already set up a KVM host with proper IP configurations, but my host had me create DHCP and use that to assign the IPs to the machines. I want to see if there's an easier (or better) way to do it. Upon my first setting out on this, I didn't find anything that pointed me in the right direction. I'm coming off a fresh install of Debian 6.0 x64, so I have nothing installed. I've logged in, queried for the below information and changed the password from the one my host set.

    I have a Debian 6.0 x64 system with the following initial network configuration (substituted 255 in place of my real first octet): # tail /etc/network/interfaces auto eth0 iface eth0 inet static address 255.9.24.80 broadcast 255.9.24.95 netmask 255.255.255.224 gateway 255.9.24.65 # default route to access subnet up route add -net 255.9.24.64 netmask 255.255.255.224 gw 255.9.24.65 eth0

    I have a /29 subnet that I want the virtual machines to use from my host: IP: 255.46.187.152 /29 Mask: 255.255.255.248 Broadcast: 255.46.187.159 Usable IP addresses: 255.46.187.153 to 255.46.187.158 I like the interface of Cloudmin, so I want to try and use that if I can to administer my guests.

    So, my questions: How do I best set this up on the host system so that I can use the additional subnet IPs on the guests and have them accessible from the internet? I also need to host a DNS server, which means one of these VMs has to have two IPs assigned to it and be accessible from the outside world. How can I do that using Cloudmin? I had a question about this here: Multiple IP addresses assigned to one KVM VM But I just reformatted the entire server and am trying to figure out a better way of doing this.

    Machine information: # ip route show 255.9.24.64/27 via 255.9.24.65 dev eth0 255.9.24.64/27 dev eth0 proto kernel scope link src 255.9.24.80 default via 255.9.24.65 dev eth0 brctl is empty # ip addr list 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether c8:60:00:54:b5:d8 brd ff:ff:ff:ff:ff:ff inet 255.9.24.80/27 brd 255.9.24.95 scope global eth0 inet6 fe80::ca60:ff:fe54:b5d8/64 scope link valid_lft forever preferred_lft forever Thank you for any help you can provide me.

    EDIT: I've installed kvm and cloudmin: aptitude install qemu-kvm libvirt-bin wget http://cloudmin.virtualmin.com/gpl/scripts/cloudmin-kvm-debian-install.sh ./cloudmin-kvm-debian-install.sh Rebooted and now my network configuration looks like this: # device: eth0 iface eth0 inet manual # default route to access subnet iface br0 inet static address 255.9.24.80 netmask 255.255.255.224 broadcast 255.9.24.95 network 255.9.24.64 bridge_ports eth0 gateway 255.9.24.65

    In Cloudmin I set the Start IP as 255.46.187.153 and the End IP as 255.46.187.158. The CIDR is 29 and the gateway is 255.46.187.152. I've installed a guest with ubuntuserver 12.04 x64, which was able to get and retrieve internet resources during installation, but now cannot reach anything nor can it be reached from anything. Its network configuration is: iface eth0 inet static address 255.46.187.153 netmask 255.255.255.224 broadcast 255.46.187.159 gateway 255.46.187.152 dns-nameservers <host provided nameservers> It is not able to ping google.com through DNS or direct IP, and I can't ping the VM from the outside or from the host. Any ideas now?
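    A few generic checks that usually help narrow down this kind of bridged-KVM problem (commands only; the guest name is a placeholder):

        # confirm eth0 is enslaved to br0 and that the guest's vnet interface joined the bridge
        brctl show br0
        # list the interfaces libvirt attached to the guest
        virsh domiflist <guest-name>
        # watch whether the guest's traffic actually reaches the bridge
        tcpdump -ni br0 host 255.46.187.153

    Note also that the guest excerpt above uses netmask 255.255.255.224, while the stated block 255.46.187.152/29 corresponds to 255.255.255.248, so the guest's mask does not match the subnet it was assigned.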

    Read the article

  • MySQL crash. Unknown cause. Signal 11

    - by fortmac
    This is a database that I installed ~6 months ago and had been running fine. This is currently running in Ubuntu 12.04. Attempting to connect to MySQL causes this error: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) Then theres: $ sudo mysqld which returns: 130702 15:38:54 [Note] Plugin 'FEDERATED' is disabled. 130702 15:38:54 InnoDB: The InnoDB memory heap is disabled 130702 15:38:54 InnoDB: Mutexes and rw_locks use GCC atomic builtins 130702 15:38:54 InnoDB: Compressed tables use zlib 1.2.3.4 130702 15:38:54 InnoDB: Initializing buffer pool, size = 128.0M 130702 15:38:54 InnoDB: Completed initialization of buffer pool 130702 15:38:54 InnoDB: highest supported file format is Barracuda. InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 130702 15:38:54 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 130702 15:38:55 InnoDB: Waiting for the background threads to start 130702 15:38:56 InnoDB: 1.1.8 started; log sequence number 5201901917 130702 15:38:56 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 130702 15:38:56 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 130702 15:38:56 [Note] Server socket created on IP: '127.0.0.1'. 130702 15:38:56 [Note] Event Scheduler: Loaded 0 events 130702 15:38:56 [Note] mysqld: ready for connections. Version: '5.5.28-0ubuntu0.12.04.3' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 19:39:02 UTC - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=151 thread_count=1 connection_count=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346681 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x7f9509e51530 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7f94f1d3de60 thread_stack 0x30000 mysqld(my_print_stacktrace+0x29)[0x7f95083427b9] mysqld(handle_fatal_signal+0x483)[0x7f9508209b43] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f9506f5bcb0] mysqld(+0x320e1c)[0x7f9508113e1c] mysqld(_ZN4JOIN15alloc_func_listEv+0x9c)[0x7f950812391c] mysqld(_ZN4JOIN7prepareEPPP4ItemP10TABLE_LISTjS1_jP8st_orderS7_S1_S7_P13st_select_lexP18st_select_lex_unit+0x918)[0x7f9508124658] mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x130)[0x7f950812d060] mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x17c)[0x7f9508132fbc] mysqld(+0x2f6714)[0x7f95080e9714] mysqld(_Z21mysql_execute_commandP3THD+0x16d8)[0x7f95080f1178] mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f95080f5e0f] mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1380)[0x7f95080f7260] mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f950819b80d] mysqld(handle_one_connection+0x50)[0x7f950819b870] /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f9506f53e9a] /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f9506684cbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f94e0004b80): is an invalid pointer Connection ID (thread ID): 1 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. I'm at a loss. What other reports would be useful in diagnosing this? /var/log/mysql.err & /var/log/mysql.log are empty.
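    Since both MySQL log files are empty, a few generic places that often still hold evidence (nothing below is taken from the original server):

        # on Ubuntu, mysqld's output frequently ends up in syslog rather than /var/log/mysql.err
        grep mysqld /var/log/syslog | tail -n 100
        # rule out the kernel OOM killer or hardware complaints around the crash time
        dmesg | egrep -i 'oom|mysqld|hardware error'
        # once the server can be kept up (e.g. under innodb_force_recovery), check the tables themselves
        mysqlcheck --all-databases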

    Read the article

  • How to install 'version.h' in Ubuntu?

    - by user252098
    Just now , I try to install the Jungo WinDriver in the Ubuntu 13.10 . But I am puzzled by the its manual of how to Install version.h : Install version.h: The file version.h is created when you first compile the Linux kernel source code. Some distributions provide a compiled kernel without the file version.h. Look under /usr/src/linux/include/linux to see whether you have this file. If you do not, follow these steps: Become super user: $ su Change directory to the Linux source directory: # cd /usr/src/linux Type: # make xconfig Save the configuration by choosing Save and Exit. Type: # make dep Exit super user mode: # exit But the shell says: warning: make dep is unnecessary now. Then, I found out there is a version.h in /usr/src/linux-headers-3.11.0.12-generic, so I type: /usr/src/windriver/redist# ./configure --with-kernel-source=/usr/src/linux-headers-3.11.0.12-generic But, the windriver run fails: USE_KBUILD = yes checking for cpu architecture... x86_64 checking for WinDriver root directory... /usr/src/WinDriver checking for linux kernel source... found at /usr/src/linux checking for lib directory... ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT)_32.so /usr/lib/$(SHARED_OBJECT).so; ln -s /usr/lib /usr/lib64; ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT).so /usr/lib64/$(SHARED_OBJECT).so checking which directories to include... -I/usr/src/linux/include checking linux kernel version... 3.11.10.6 checking for modules installation directory... /lib/modules/3.11.0-12-generic/kernel/drivers/misc checking output directory... LINUX.3.11.0-12-generic.x86_64 checking target... LINUX.3.11.0-12-generic.x86_64/windrvr6_usb.ko checking for regparm kernel option... find: `/usr/src/WinDriver/redist/.tmp_driver/.tmp_versions': No such file or directory 0 checking for modpost location... /usr/src/linux/scripts/mod/modpost configure.usb: creating ./config.status config.status: creating makefile.usb.kbuild checking for cpu architecture... x86_64 checking for WinDriver root directory... /usr/src/WinDriver checking for linux kernel source... found at /usr/src/linux checking for lib directory... ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT)_32.so /usr/lib/$(SHARED_OBJECT).so; ln -s /usr/lib /usr/lib64; ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT).so /usr/lib64/$(SHARED_OBJECT).so checking which directories to include... -I/usr/src/linux/include checking linux kernel version... 3.11.10.6 checking for modules installation directory... /lib/modules/3.11.0-12-generic/kernel/drivers/misc checking output directory... LINUX.3.11.0-12-generic.x86_64 checking target... LINUX.3.11.0-12-generic.x86_64/windrvr6.ko checking for regparm kernel option... find: `/usr/src/WinDriver/redist/.tmp_driver/.tmp_versions': No such file or directory 0 checking for right linked object... windrvr_gcc_v3.a checking for modpost location... /usr/src/linux/scripts/mod/modpost configure.wd: creating ./config.status config.status: creating makefile.wd.kbuild What is the problem?
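    A stock Ubuntu 13.10 system has no /usr/src/linux tree at all, only the packaged headers, so one hedged way to satisfy a build that hard-codes /usr/src/linux and expects version.h is roughly the following (paths follow the usual Ubuntu layout; adjust to the running kernel):

        # make sure the headers for the running kernel are installed
        sudo apt-get install linux-headers-$(uname -r)
        # on 3.x kernels version.h lives under include/generated, not include/linux
        find /usr/src/linux-headers-$(uname -r) -name version.h
        # point /usr/src/linux at the headers so configure scripts that hard-code that path find them
        sudo ln -sfn /usr/src/linux-headers-$(uname -r) /usr/src/linux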

    Read the article

  • WiX 3 Tutorial: Generating file/directory fragments with Heat.exe

    - by Mladen Prajdic
    In previous posts I’ve shown you our SuperForm test application solution structure and how the main wxs and wxi include file look like. In this post I’ll show you how to automate inclusion of files to install into your build process. For our SuperForm application we have a single exe to install. But in the real world we have 10s or 100s of different files from dll’s to resource files like pictures. It all depends on what kind of application you’re building. Writing a directory structure for so many files by hand is out of the question. What we need is an automated way to create this structure. Enter Heat.exe. Heat is a command line utility to harvest a file, directory, Visual Studio project, IIS website or performance counters. You might ask what harvesting means? Harvesting is converting a source (file, directory, …) into a component structure saved in a WiX fragment (a wxs) file. There are 2 options you can use: Create a static wxs fragment with Heat and include it in your project. The pro of this is that you can add or remove components by hand. The con is that you have to do the pro part by hand. Automation always beats manual labor. Run heat command line utility in a pre-build event of your WiX project. I prefer this way. By always recreating the whole fragment you don’t have to worry about missing any new files you add. The con of this is that you’ll include files that you otherwise might not want to. There is no perfect solution so pick one and deal with it. I prefer using the second way. A neat way of overcoming the con of the second option is to have a post-build event on your main application project (SuperForm.MainApp in our case) to copy the files needed to be installed in a special location and have the Heat.exe read them from there. I haven’t set this up for this tutorial and I’m simply including all files from the default SuperForm.MainApp \bin directory. Remember how we created a System Environment variable called SuperFormFilesDir? This is where we’ll use it for the first time. The command line text that you have to put into the pre-build event of your WiX project looks like this: "$(WIX)bin\heat.exe" dir "$(SuperFormFilesDir)" -cg SuperFormFiles -gg -scom -sreg -sfrag -srd -dr INSTALLLOCATION -var env.SuperFormFilesDir -out "$(ProjectDir)Fragments\FilesFragment.wxs" After you install WiX you’ll get the WIX environment variable. In the pre/post-build events environment variables are referenced like this: $(WIX). By using this you don’t have to think about the installation path of the WiX. Remember: for 32 bit applications Program files folder is named differently between 32 and 64 bit systems. $(ProjectDir) is obviously the path to your project and is a Visual Studio built in variable. You can view all Heat.exe options by running it without parameters but I’ll explain some that stick out the most. dir "$(SuperFormFilesDir)": tell Heat to harvest the whole directory at the set location. That is the location we’ve set in our System Environment variable. –cg SuperFormFiles: the name of the Component group that will be created. This name is included in out Feature tag as is seen in the previous post. -dr INSTALLLOCATION: the directory reference this fragment will fall under. You can see the top level directory structure in the previous post. -var env.SuperFormFilesDir: the name of the variable that will replace the SourceDir text that would otherwise appear in the fragment file. 
-out "$(ProjectDir)Fragments\FilesFragment.wxs": the full path and name under which the fragment file will be saved. If you have source control you have to include the FilesFragment.wxs into your project but remove its source control binding. The auto generated FilesFragment.wxs for our test app looks like this: <?xml version="1.0" encoding="utf-8"?><Wix xmlns="http://schemas.microsoft.com/wix/2006/wi"> <Fragment> <ComponentGroup Id="SuperFormFiles"> <ComponentRef Id="cmp5BB40DB822CAA7C5295227894A07502E" /> <ComponentRef Id="cmpCFD331F5E0E471FC42A1334A1098E144" /> <ComponentRef Id="cmp4614DD03D8974B7C1FC39E7B82F19574" /> <ComponentRef Id="cmpDF166522884E2454382277128BD866EC" /> </ComponentGroup> </Fragment> <Fragment> <DirectoryRef Id="INSTALLLOCATION"> <Component Id="cmp5BB40DB822CAA7C5295227894A07502E" Guid="{117E3352-2F0C-4E19-AD96-03D354751B8D}"> <File Id="filDCA561ABF8964292B6BC0D0726E8EFAD" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.exe" /> </Component> <Component Id="cmpCFD331F5E0E471FC42A1334A1098E144" Guid="{369A2347-97DD-45CA-A4D1-62BB706EA329}"> <File Id="filA9BE65B2AB60F3CE41105364EDE33D27" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.pdb" /> </Component> <Component Id="cmp4614DD03D8974B7C1FC39E7B82F19574" Guid="{3443EBE2-168F-4380-BC41-26D71A0DB1C7}"> <File Id="fil5102E75B91F3DAFA6F70DA57F4C126ED" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe" /> </Component> <Component Id="cmpDF166522884E2454382277128BD866EC" Guid="{0C0F3D18-56EB-41FE-B0BD-FD2C131572DB}"> <File Id="filF7CA5083B4997E1DEC435554423E675C" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe.manifest" /> </Component> </DirectoryRef> </Fragment></Wix> The $(env.SuperFormFilesDir) will be replaced at build time with the directory where the files to be installed are located. There is nothing too complicated about this. In the end it turns out that this sort of automation is great! There are a few other ways that Heat.exe can compose the wxs file but this is the one I prefer. It just seems the clearest. Play with its options to see what can it do. It’s one awesome little tool.   WiX 3 tutorial by Mladen Prajdic navigation WiX 3 Tutorial: Solution/Project structure and Dev resources WiX 3 Tutorial: Understanding main wxs and wxi file WiX 3 Tutorial: Generating file/directory fragments with Heat.exe

    Read the article

  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is now fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements. General Enhancements This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities: Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control. Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that will validate if the appropriate patches, privileges, setups, and profile options have been configured. This feature improves the setup and configuration time to be up and operational. Lifecycle Automation Enhancements Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.Change Management: Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that will validate if the appropriate patches, privileges, setups, and profile options have been configured. Enhancing the setup time and configuration time to be up and operational. Customization Manager: Multi-Node Custom Application Registration: This feature automates the process of registering and validating custom products/applications on every node in a multi-node EBS system. Public/Private File Source Mappings and E-Business Suite Mappings: File Source Mappings & E-Business Suite Mappings can be created and marked as public or private. Only the creator/owner can define/edit his/her own mappings. Users can use public mappings, but cannot edit or change settings. Test Checkout Command for Versions: This feature allows you to test/verify checkout commands at the version level within the File Source Mapping page. Prerequisite Patch Validation: You can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages. Destination Path Population: You can now automatically populate the Destination Path for common file types during package construction. 
OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances. Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files). Providing better granularity when managing PLL objects. Enhanced Standard Checker: Provides greater and more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.) HTML Package Readme: The package Readme is in HTML format and includes the file listing. Advanced Package Search Capabilities: The ability to utilize more criteria within the advanced search package (that is, Public, Last Updated by, Files Source Mapping, and E-Business Suite Mapping). Enhanced Package Build Notifications: More detailed information on the results of a package build process. Better, more detailed troubleshooting guidance in the event of build failures. Patch Manager:Staged Patches: Ability to run Patch Manager with no external internet access. Customer can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply. Supports highly secured production environments that prohibit external internet connections. Support for Superseded Patches: Automatic check for superseded patches. Allows users to easily add superseded patches into the Patch Run. More comprehensive and correct Patch Runs. Removes many manual and laborious tasks, frees up Apps DBAs for higher value-added tasks. Automatic Primary Node Identification: Users can now specify which is the "primary node" (that is, which node hosts the Shared APPL_TOP) during the Patch Run interview process, available for Release 12 only. Setup Manager:Preview Extract Results: Ability to execute an extract in "proof mode", and examine the query results, to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering. This feature can provide better and more accurate fine tuning of extracts. Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access extracts that have been uploaded. Allows customer to reuse uploaded extracts to provision new instances. Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data. Allows customer to reuse existing extracts to provision new instances. Allows comparative historical reporting of identical APIs, executed at different times. Support for BR100 formats: Setup Manager can now automatically produce reports in the BR100 format. Native support for industry standard formats. Concurrent Manager API Support: General Foundation now provides an API for management of "Concurrent Manager" configuration data. Ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; no need to redefine the Concurrent Managers. User Experience Management Enhancements Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques. 
This latest release of the management suite include numerous improvements in real user monitoring support. KPI Reporting: Configurable decimal precision for reporting of KPI and SLA values. By default, this is two decimal places. KPI numerator and denominator information. It is now possible to view KPI numerator and denominator information, and to have it available for export. Content Messages Processing: The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note this is only available for XPath-based (not literal) message contents. Data Export: The Enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files, but is now stored within the Reporter database. However, it is possible to configure an alternative database for its storage. Access to the export data is through SQL. With this enhancement, it is now more easy than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources. SNMP Traps for System Events: Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, to provide external health monitoring of the RUEI system processes. Performance Improvements: Enhanced dashboard performance. The dashboard facility has been enhanced to support the parallel loading of items. In the case of dashboards containing large numbers of items, this can result in a significant performance improvement. Initial period selection within Data Browser and reports. The User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility. The default is the last hour. Performance improvement when querying the all sessions group. Technical Prerequisites, Download and Installation Instructions The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress: * Oracle Solaris on SPARC (64-bit) (9, 10) * HP-UX Itanium (11.23, 11.31) * HP-UX PA-RISC (64-bit) (11.23, 11.31) * IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article

  • Displaying an image on a LED matrix with a Netduino

    - by Bertrand Le Roy
    In the previous post, we’ve been flipping bits manually on three ports of the Netduino to simulate the data, clock and latch pins that a shift register expected. We did all that in order to control one line of a LED matrix and create a simple Knight Rider effect. It was rightly pointed out in the comments that the Netduino has built-in knowledge of the sort of serial protocol that this shift register understands through a feature called SPI. That will of course make our code a whole lot simpler, but it will also make it a whole lot faster: writing to the Netduino ports is actually not that fast, whereas SPI is very, very fast. Unfortunately, the Netduino documentation for SPI is severely lacking. Instead, we’ve been reliably using the documentation for the Fez, another .NET microcontroller. To send data through SPI, we’ll just need  to move a few wires around and update the code. SPI uses pin D11 for writing, pin D12 for reading (which we won’t do) and pin D13 for the clock. The latch pin is a parameter that can be set by the user. This is very close to the wiring we had before (data on D11, clock on D12 and latch on D13). We just have to move the latch from D13 to D10, and the clock from D12 to D13. The code that controls the shift register has slimmed down considerably with that change. Here is the new version, which I invite you to compare with what we had before: public class ShiftRegister74HC595 { protected SPI Spi; public ShiftRegister74HC595(Cpu.Pin latchPin) : this(latchPin, SPI.SPI_module.SPI1) { } public ShiftRegister74HC595(Cpu.Pin latchPin, SPI.SPI_module spiModule) { var spiConfig = new SPI.Configuration( SPI_mod: spiModule, ChipSelect_Port: latchPin, ChipSelect_ActiveState: false, ChipSelect_SetupTime: 0, ChipSelect_HoldTime: 0, Clock_IdleState: false, Clock_Edge: true, Clock_RateKHz: 1000 ); Spi = new SPI(spiConfig); } public void Write(byte buffer) { Spi.Write(new[] {buffer}); } } All we have to do here is configure SPI. The write method couldn’t be any simpler. Everything is now handled in hardware by the Netduino. We set the frequency to 1MHz, which is largely sufficient for what we’ll be doing, but it could potentially go much higher. The shift register addresses the columns of the matrix. The rows are directly wired to ports D0 to D7 of the Netduino. The code writes to only one of those eight lines at a time, which will make it fast enough. The way an image is displayed is that we light the lines one after the other so fast that persistence of vision will give the illusion of a stable image: foreach (var bitmap in matrix.MatrixBitmap) { matrix.OnRow(row, bitmap, true); matrix.OnRow(row, bitmap, false); row++; } Now there is a twist here: we need to run this code as fast as possible in order to display the image with as little flicker as possible, but we’ll eventually have other things to do. In other words, we need the code driving the display to run in the background, except when we want to change what’s being displayed. Fortunately, the .NET Micro Framework supports multithreading. In our implementation, we’ve added an Initialize method that spins a new thread that is tied to the specific instance of the matrix it’s being called on. public LedMatrix Initialize() { DisplayThread = new Thread(() => DoDisplay(this)); DisplayThread.Start(); return this; } I quite like this way to spin a thread. As you may know, there is another, built-in way to contextualize a thread by passing an object into the Start method. 
For the method to work, the thread must have been constructed with a ParameterizedThreadStart delegate, which takes one parameter of type object. I like to use object as little as possible, so instead I’m constructing a closure with a Lambda, currying it with the current instance. This way, everything remains strongly-typed and there’s no casting to do. Note that this method would extend perfectly to several parameters. Of note as well is the return value of Initialize, a common technique to add some fluency to the API and enabling the matrix to be instantiated and initialized in a single line: using (var matrix = new LedMS88SR74HC595().Initialize()) The “using” in the previous line is because we have implemented IDisposable so that the matrix kills the thread and clears the display when the user code is done with it: public void Dispose() { Clear(); DisplayThread.Abort(); } Thanks to the multi-threaded version of the matrix driver class, we can treat the display as a simple bitmap with a very synchronous programming model: matrix.Set(someimage); while (button.Read()) { Thread.Sleep(10); } Here, the call into Set returns immediately and from the moment the bitmap is set, the background display thread will constantly continue refreshing no matter what happens in the main thread. That enables us to wait or read a button’s port on the main thread knowing that the current image will continue displaying unperturbed and without requiring manual refreshing. We’ve effectively hidden the implementation of the display behind a convenient, synchronous-looking API. Pretty neat, eh? Before I wrap up this post, I want to talk about one small caveat of using SPI rather than driving the shift register directly: when we got to the point where we could actually display images, we noticed that they were a mirror image of what we were sending in. Oh noes! Well, the reason for it is that SPI is sending the bits in a big-endian fashion, in other words backwards. Now sure you could fix that in software by writing some bit-level code to reverse the bits we’re sending in, but there is a far more efficient solution than that. We are doing hardware here, so we can simply reverse the order in which the outputs of the shift register are connected to the columns of the matrix. That’s switching 8 wires around once, as compared to doing bit operations every time we send a line to display. All right, so bringing it all together, here is the code we need to write to display two images in succession, separated by a press on the board’s button: var button = new InputPort(Pins.ONBOARD_SW1, false, Port.ResistorMode.Disabled); using (var matrix = new LedMS88SR74HC595().Initialize()) { // Oh, prototype is so sad! var sad = new byte[] { 0x66, 0x24, 0x00, 0x18, 0x00, 0x3C, 0x42, 0x81 }; DisplayAndWait(sad, matrix, button); // Let's make it smile! var smile = new byte[] { 0x42, 0x18, 0x18, 0x81, 0x7E, 0x3C, 0x18, 0x00 }; DisplayAndWait(smile, matrix, button); } And here is a video of the prototype running: The prototype in action I’ve added an artificial delay between the display of each row of the matrix to clearly show what’s otherwise happening very fast. This way, you can clearly see each of the two images being displayed line by line. Next time, we’ll do no hardware changes, focusing instead on building a nice programming model for the matrix, with sprites, text and hardware scrolling. Fun stuff. By the way, can any of my reader guess where we’re going with all that? 
The code for this prototype can be downloaded here: http://weblogs.asp.net/blogs/bleroy/Samples/NetduinoLedMatrixDriver.zip

    Read the article

  • Solaris 10 branded zone VM Templates for Solaris 11 on OTN

    - by jsavit
    Early this year I wrote the article Ours Goes To 11 which describes the ability to import Solaris 10 systems into a "Solaris 10 branded zone" under Oracle Solaris 11. I did this using Solaris 11 Express, and the capability remains in Solaris 11 with only slight changes. This important tool lets you painlessly inhaling a Solaris Container from Solaris 10 or entire Solaris 10 systems ("the global zone") into virtualized environments on a Solaris 11 OS. Just recently, Oracle provided Oracle VM Templates for Oracle Solaris 10 Zones to let you create Solaris 10 branded zones for Solaris 11 even if you don't currently have access to install media or a running Solaris 10 system. To use this, just download the Oracle VM Template for Oracle Solaris Zone 10 from OTN at http://www.oracle.com/technetwork/server-storage/solaris11/downloads/virtual-machines-1355605.html. This page contains images of Oracle Solaris 10 8/11 (the recent update to Solaris 10) in SPARC and x86 formats suitable for creating branded zones. The same page also has a VirtualBox image you can download for a complete Solaris 10 install in a guest virtual machine you can run on any host OS that supports VirtualBox. Both sets of downloads provide a quick - and extremely easy - way to set up a virtual Solaris 10 environment. In the case of the Oracle VM Templates, they illustrate several advanced features of Solaris 11. To start, just go to the above link, download the template for the hardware platform (SPARC or x86) you want, and download the README file also linked from that page. Install prerequisites The README file tells you to install the prerequisite Solaris 11 package that implements the Solaris 10 brand. Then you can install instances of zones with that brand. # pkg install pkg:/system/zones/brand/brand-solaris10 Packages to install: 1 Create boot environment: No Create backup boot environment: Yes DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 44/44 0.4/0.4 PHASE ACTIONS Install Phase 74/74 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 That took only a few minutes, and didn't require a reboot. Install the Solaris 10 zone Now it's time to run the downloaded template file. First make it executable via the chmod command, of course. I found that (unlike stated in the README) there was no need to rename the downloaded file to remove the .bin. When you run it you provide several parameters to describe the zone configuration: -a IP address - the IP address and optional netmask for the zone. This is the only mandatory parameter. -z zonename - the name of the zone you would like to create. -i interface - the package will create an exclusive-IP zone using a virtual NIC (vnic) based on this physical interface. In my case, I have a NIC called rge0. -p PATH - specifies the path in which you want the zoneroot to be placed. In my case, I have a ZFS dataset mounted at /zones, and this will create a zoneroot at /zones/s10u10. Kicking it off, you will see a copyright message, and then messages showing progress building the zone, which only takes a few minutes. # ./solaris-10u10-x86.bin -p /zones -a 192.168.1.100 -i rge0 -z s10u10 ... ... Checking disk-space for extraction Ok Extracting in /export/home/CDimages/s10zone/bootimage.ihaqvh ... 100% [===============================] Checking data integrity Ok Checking platform compatibility The host and the image do not have the same Solaris release: host Solaris release: 5.11 image Solaris release: 5.10 Will create a Solaris 10 branded zone. 
Warning: could not find a defaultrouter Zone won't have any defaultrouter configured IMAGE: ./solaris-10u10-x86.bin ZONE: s10u10 ZONEPATH: /zones/s10u10 INTERFACE: rge0 VNIC: vnicZBI13379 MAC ADDR: 2:8:20:5c:1a:cc IP ADDR: 192.168.1.100 NETMASK: 255.255.255.0 DEFROUTER: NONE TIMEZONE: US/Arizona Checking disk-space for installation Ok Installing in /zones/s10u10 ... 100% [===============================] Using a static exclusive-IP Attaching s10u10 Booting s10u10 Waiting for boot to complete booting... booting... booting... Zone s10u10 booted The zone's root password has been set using the root password of the local host. You can change the zone's root password to further harden the security of the zone: being root, log into the zone from the local host with the command 'zlogin s10u10'. Once logged in, change the root password with the command 'passwd'. The nifty part in my opinion (besides being so easy), is that the zone was created as an exclusive-IP zone on a virtual NIC. This network configuration lets you enforce traffic isolation from other zones, enforce network Quality of Service, and even let the zone set its own characteristics like IP address and packet size. Independence of the zone's network characteristics from the global zone is one of the enhancements in Solaris 10 that make it easier to consolidate zones while preserving their autonomy, yet provide control in a consolidated environment. Let's see what the virtual network environment looks like by issuing commands from the Solaris 11 global zone. First I'll use Old School ifconfig, and then I'll use the new ipadm and dladm commands. # ifconfig -a4 lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 rge0: flags=1004943<UP,BROADCAST,RUNNING,PROMISC,MULTICAST,DHCP,IPv4> mtu 1500 index 2 inet 192.168.1.3 netmask ffffff00 broadcast 192.168.1.255 ether 0:14:d1:18:ac:bc vboxnet0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3 inet 192.168.56.1 netmask ffffff00 broadcast 192.168.56.255 ether 8:0:27:f8:62:1c # dladm show-phys LINK MEDIA STATE SPEED DUPLEX DEVICE yge0 Ethernet unknown 0 unknown yge0 yge1 Ethernet unknown 0 unknown yge1 rge0 Ethernet up 1000 full rge0 vboxnet0 Ethernet up 1000 full vboxnet0 # dladm show-link LINK CLASS MTU STATE OVER yge0 phys 1500 unknown -- yge1 phys 1500 unknown -- rge0 phys 1500 up -- vboxnet0 phys 1500 up -- vnicZBI13379 vnic 1500 up rge0 s10u10/vnicZBI13379 vnic 1500 up rge0 s10u10/net0 vnic 1500 up rge0 # dladm show-vnic LINK OVER SPEED MACADDRESS MACADDRTYPE VID vnicZBI13379 rge0 1000 2:8:20:5c:1a:cc random 0 s10u10/vnicZBI13379 rge0 1000 2:8:20:5c:1a:cc random 0 s10u10/net0 rge0 1000 2:8:20:9d:d0:79 random 0 # ipadm show-addr ADDROBJ TYPE STATE ADDR lo0/v4 static ok 127.0.0.1/8 rge0/_a dhcp ok 192.168.1.3/24 vboxnet0/_a static ok 192.168.56.1/24 lo0/v6 static ok ::1/128 Log into the zone The install step already booted the zone, so lets log into it. Notice how you have to be appropriately privileged to log into a zone. This is my home system so I'm being a bit cavalier, but in a production environment you can give granular control of who can login to which zones. Voila! a Solaris 10 environment under a Solaris 11 kernel. Notice the output from the uname -a and ifconfig commands, and output from a ping to a nearby host. 
$ zlogin s10u10 zlogin: You lack sufficient privilege to run this command (all privs required) savit@home:~$ sudo zlogin s10u10 Password: [Connected to zone 's10u10' pts/5] Oracle Corporation SunOS 5.10 Generic Patch January 2005 # uname -a SunOS s10u10 5.10 Generic_Virtual i86pc i386 i86pc # ifconfig -a4 lo0: flags=2001000849 mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 vnicZBI13379: flags=1000843 mtu 1500 index 2 inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255 ether 2:8:20:5c:1a:cc # bash bash-3.2# ifconfig -a lo0: flags=2001000849 mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 vnicZBI13379: flags=1000843 mtu 1500 index 2 inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255 ether 2:8:20:5c:1a:cc bash-3.2# ping 192.168.1.2 192.168.1.2 is alive For fun, I configured Apache (setting its configuration file in /etc/apache2) and brought it up. Easy - took just a few minutes. bash-3.2# svcs apache2 STATE STIME FMRI disabled 12:38:46 svc:/network/http:apache2 bash-3.2# svcadm enable apache2 Summary In just a few minutes, I built a functioning virtual Solaris 10 environment under by Solaris 11 system. It was... easy! While I can still do it the manual way (creating and using a system archive), this is a low-effort way to create a Solaris 10 zone on Solaris 11.
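    Once the template has built the zone, the normal Solaris 11 zone tooling applies to it; a few commands worth knowing (standard commands, nothing specific to this machine):

        # verify the zone and its brand
        zoneadm list -cv
        zonecfg -z s10u10 info brand
        # stop and start it like any other zone
        zoneadm -z s10u10 halt
        zoneadm -z s10u10 boot
        # remove it entirely when the experiment is over
        zoneadm -z s10u10 uninstall -F
        zonecfg -z s10u10 delete -F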

    Read the article

  • BNF – how to read syntax?

    - by Piotr Rodak
    A few days ago I read post of Jen McCown (blog) about her idea of blogging about random articles from Books Online. I think this is a great idea, even if Jen says that it’s not exciting or sexy. I noticed that many of the questions that appear on forums and other media arise from pure fact that people asking questions didn’t bother to read and understand the manual – Books Online. Jen came up with a brilliant, concise acronym that describes very well the category of posts about Books Online – RTFM365. I take liberty of tagging this post with the same acronym. I often come across questions of type – ‘Hey, i am trying to create a table, but I am getting an error’. The error often says that the syntax is invalid. 1 CREATE TABLE dbo.Employees 2 (guid uniqueidentifier CONSTRAINT DEFAULT Guid_Default NEWSEQUENTIALID() ROWGUIDCOL, 3 Employee_Name varchar(60) 4 CONSTRAINT Guid_PK PRIMARY KEY (guid) ); 5 The answer is usually(1), ‘Ok, let me check it out.. Ah yes – you have to put name of the DEFAULT constraint before the type of constraint: 1 CREATE TABLE dbo.Employees 2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL, 3 Employee_Name varchar(60) 4 CONSTRAINT Guid_PK PRIMARY KEY (guid) ); Why many people stumble on syntax errors? Is the syntax poorly documented? No, the issue is, that correct syntax of the CREATE TABLE statement is documented very well in Books Online and is.. intimidating. Many people can be taken aback by the rather complex block of code that describes all intricacies of the statement. However, I don’t know better way of defining syntax of the statement or command. The notation that is used to describe syntax in Books Online is a form of Backus-Naur notatiion, called BNF for short sometimes. This is a notation that was invented around 50 years ago, and some say that even earlier, around 400 BC – would you believe? Originally it was used to define syntax of, rather ancient now, ALGOL programming language (in 1950’s, not in ancient India). If you look closer at the definition of the BNF, it turns out that the principles of this syntax are pretty simple. Here are a few bullet points: italic_text is a placeholder for your identifier <italic_text_in_angle_brackets> is a definition which is described further. [everything in square brackets] is optional {everything in curly brackets} is obligatory everything | separated | by | operator is an alternative ::= “assigns” definition to an identifier Yes, it looks like these six simple points give you the key to understand even the most complicated syntax definitions in Books Online. Books Online contain an article about syntax conventions – have you ever read it? Let’s have a look at fragment of the CREATE TABLE statement: 1 CREATE TABLE 2 [ database_name . [ schema_name ] . | schema_name . ] table_name 3 ( { <column_definition> | <computed_column_definition> 4 | <column_set_definition> } 5 [ <table_constraint> ] [ ,...n ] ) 6 [ ON { partition_scheme_name ( partition_column_name ) | filegroup 7 | "default" } ] 8 [ { TEXTIMAGE_ON { filegroup | "default" } ] 9 [ FILESTREAM_ON { partition_scheme_name | filegroup 10 | "default" } ] 11 [ WITH ( <table_option> [ ,...n ] ) ] 12 [ ; ] Let’s look at line 2 of the above snippet: This line uses rules 3 and 5 from the list. So you know that you can create table which has specified one of the following. 
just the table name – the table will be created in the user's default schema
schema name and table name – the table will be created in the specified schema
database name, schema name and table name – the table will be created in the specified database, in the specified schema
database name, .., table name – the table will be created in the specified database, in the default schema of the user
Note that this single line of the notation describes each of the naming schemes in a deterministic way. The ‘optionality’ of the schema_name element is nested within the database_name.. section. You can use either database_name and an optional schema name, or just the schema name – this is specified by the pipe character ‘|’. The error that the user gets when executing the first script fragment in this post is as follows:

Msg 156, Level 15, State 1, Line 2
Incorrect syntax near the keyword 'DEFAULT'.

Ok, let’s have a look at how to find out the correct syntax. Line number 3 of the BNF fragment above contains a reference to <column_definition>. Since column_definition is in angle brackets, we know that this is a reference to a notion described further in the definition. And indeed, the very next fragment of BNF contains the syntax of the column definition.

1 <column_definition> ::=
2 column_name <data_type>
3     [ FILESTREAM ]
4     [ COLLATE collation_name ]
5     [ NULL | NOT NULL ]
6     [
7         [ CONSTRAINT constraint_name ] DEFAULT constant_expression ]
8         | [ IDENTITY [ ( seed ,increment ) ] [ NOT FOR REPLICATION ]
9     ]
10    [ ROWGUIDCOL ] [ <column_constraint> [ ...n ] ]
11    [ SPARSE ]

Look at line 7 in the above fragment. It says that the column can have a DEFAULT constraint which, if you want to name it, has to be prepended with the [CONSTRAINT constraint_name] sequence. The name of the constraint is optional, but I strongly recommend that you make the effort of coming up with a meaningful name yourself. So the correct syntax of the CREATE TABLE statement from the beginning of the article is like this:

1 CREATE TABLE dbo.Employees
2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
3 Employee_Name varchar(60)
4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

That is practically everything you should know about BNF. I encourage you to study the syntax definitions for various statements and commands in Books Online; you can find really interesting things hidden there. Technorati Tags: SQL Server, t-sql, BNF, syntax

(1) No, my answer usually is a question – ‘What error message? What does it say?’. You’d be surprised to know how many people think I can travel through time and space and look at their screen at the moment they received the error.
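As one more illustration (this example is mine, not from the original article, and the table, column and constraint names are invented), several of the optional blocks from the fragments above can be combined in a single statement:

-- Invented example: a named DEFAULT, an IDENTITY column and the optional ON clause
-- each correspond to one of the [ ... ] blocks in the BNF fragments above
CREATE TABLE dbo.Orders
(
    OrderId   int IDENTITY (1, 1) NOT NULL,          -- [ IDENTITY [ ( seed , increment ) ] ]
    OrderGuid uniqueidentifier
        CONSTRAINT Orders_Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,  -- [ CONSTRAINT constraint_name ] DEFAULT ...
    OrderDate datetime
        CONSTRAINT Orders_Date_Default DEFAULT GETDATE(),
    CONSTRAINT Orders_PK PRIMARY KEY (OrderId)
)
ON [PRIMARY];                                        -- [ ON { ... | filegroup | "default" } ]

Every bracketed piece you add or drop here maps to one optional block in the notation, which is exactly how the BNF is meant to be read.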

    Read the article

  • Solaris 11 Launch Blog Carnival Roundup

    - by constant
    Solaris 11 is here! And together with the official launch activities, a lot of Oracle and non-Oracle bloggers contributed helpful and informative blog articles to help your datacenter go to eleven. Here are some notable blog postings, sorted by category for your Solaris 11 blog-reading pleasure: Getting Started/Overview A lot of people speculated that the official launch of Solaris 11 would be on 11/11 (whatever way you want to turn it), but it actually happened two days earlier. Larry Wake himself offers 11 Reasons Why Oracle Solaris 11 11/11 Isn't Being Released on 11/11/11. Then, Larry goes on with a summary: Oracle Solaris 11: The First Cloud OS gives you a short and sweet rundown of what the major new features of Solaris 11 are. Jeff Victor has his own list of What's New in Oracle Solaris 11. A popular Solaris 11 meme is to write a blog post about 11 favourite features: Jim Laurent's 11 Reasons to Love Solaris 11, Darren Moffat's 11 Favourite Solaris 11 Features and Mike Gerdt's 11 of My Favourite Things! are just three examples of "11 Favourite Things..." type blog posts; I'm sure many more will follow... More official overview content for Solaris 11 is available from the Oracle Tech Network Solaris 11 Portal. Also, check out Rick Ramsey's blog post Solaris 11 Resources for System Administrators on the OTN Blog and his secret 5 Commands That Make Solaris Administration Easier post from the OTN Garage. (Automatic) Installation and the Image Packaging System (IPS) The brand new Image Packaging System (IPS) and the Automatic Installer (AI), together with numerous other install/packaging/boot/patching features, are among the most significant improvements in Solaris 11. But before installing, you may wonder whether Solaris 11 will support your particular set of hardware devices. Again, the OTN Garage comes to the rescue with Rick Ramsey's post How to Find Out Which Devices Are Supported By Solaris 11. Included is a useful guide to all the first steps to get your Solaris 11 system up and running. Tim Foster had a whole handful of blog posts lined up for the launch, teaching you everything you ever wanted to know about IPS but didn't dare to ask: The IPS System Repository, IPS Self-assembly - Part 1: Overlays and Part 2: Multiple Packages Delivering Configuration. Watch out for more IPS posts from Tim! If installing packages or upgrading your system from the net makes you uneasy, then you're not alone: Jim Laurent will teach you how Building a Solaris 11 Repository Without Network Connection will make your life easier. Many of you have already peeked into the future by installing Solaris 11 Express. If you're now wondering whether you can upgrade or whether a fresh install is necessary, then check out Alan Hargreaves's post Upgrading Solaris 11 Express b151a with support to Solaris 11. The trick is in upgrading your pkg(1M) first. Networking One of the first things to do after installing Solaris 11 (or any operating system, for that matter) is to set it up for networking. Solaris 11 comes with the brand new "Network Auto-Magic" feature which can figure out everything by itself. For those cases where you want to exercise a little more control, Solaris 11 left a few people scratching their heads. Fortunately, Tschokko wrote up this cool blog post: Solaris 11 manual IPv4 & IPv6 configuration right after the launch ceremony. Thanks, Tschokko! 
And Milek points out a long awaited networking feature in Solaris 11 called Solaris 11 - hostmodel, which I know for a fact many customers have looked forward to: how to "bind" a Solaris 11 system to a specific gateway for a specific IP address it is using. Steffen Weiberle teaches us how to tune the Solaris 11 networking stack the proper way: ipadm(1M). No more fiddling with ndd(1M)! Check out his tutorial on Solaris 11 Network Tunables. And if you want to get even deeper into the networking stack, there's nothing better than DTrace. Alan Maguire teaches you in DTracing TCP Congestion Control how to probe deeply into the Solaris 11 TCP/IP stack, the TCP congestion control part in particular. Don't miss his other DTrace and TCP related blog posts! DTrace And there we are: DTrace, the king of all observability tools. Long-time DTrace veteran and co-author of The DTrace book*, Brendan Gregg blogged about Solaris 11 DTrace syscall provider changes. BTW, after you install Solaris 11, check out the DTrace toolkit which is installed by default in /usr/dtrace/DTT. It is chock full of handy DTrace scripts, many of which were contributed by Brendan himself! Security Another big theme in Solaris 11, and one that is crucial for the success of any operating system in the Cloud, is Security. Here are some notable posts in this category: Darren Moffat starts by showing us how to completely get rid of root: Completely Disabling Root Logins on Solaris 11. With no root user, there's one less major entry point to worry about. But that's only the start. In Immutable Zones on Encrypted ZFS, Darren shows us how to double the security of your services: first by locking them into the new Immutable Zones feature, then by encrypting their data using the new ZFS encryption feature. And if you're still missing sudo from your Linux days, Darren again has a solution: Password (PAM) caching for Solaris su - "a la sudo". If you're wondering how much compute power all this encryption will cost you, you're in luck: the Solaris X86 AESNI OpenSSL Engine will make sure you use your Intel CPU's embedded crypto support to its fullest. And if you own a brand new SPARC T4 machine you're even luckier: it comes with its own SPARC T4 OpenSSL Engine. Dan Anderson's posts show how there really is no excuse not to encrypt any more... Developers Solaris 11 has a lot to offer to developers as well. Ali Bahrami has a series of blog posts that cover diverse developer topics: elffile: ELF Specific File Identification Utility, Using Stub Objects and The Stub Proto: Not Just For Stub Objects Anymore, to name a few. BTW, if you're a developer and want to shape the future of Solaris 11, then Vijay Tatkar has a hint for you: Oracle (Sun Systems Group) is hiring! Desktop and Graphics Yes, Solaris 11 is a 100% server OS, but it can also offer a decent desktop environment, especially if you are a developer. Alan Coopersmith starts by discussing S11 X11: ye olde window system in today's new operating system, then Calum Benson shows us around What's new on the Solaris 11 Desktop. Even accessibility is a first-class citizen in the Solaris 11 user interface. Peter Korn celebrates: Accessible Oracle Solaris 11 - released! Performance Gone are the days of "Slowaris", when Solaris was among the few OSes that "did the right thing" while others cut corners just to win benchmarks. Today, Solaris continues doing the right thing, and it delivers the right performance at the same time. Need proof? 
Check out Brian's BestPerf blog with continuous updates from the benchmarking lab, including Recent Benchmarks Using Oracle Solaris 11! Send Me More Solaris 11 Launch Articles! These are just a few of the more interesting blog articles that came out around the Solaris 11 launch, I'm sure there are many more! Feel free to post a comment below if you find a particularly interesting blog post that hasn't been listed so far and share your enthusiasm for Solaris 11! *Affiliate link: Buy cool stuff and support this blog at no extra cost. We both win!

    Read the article

  • CodePlex Daily Summary for Sunday, September 30, 2012

    CodePlex Daily Summary for Sunday, September 30, 2012Popular ReleasesCAPTCHA Solver: Initial Release: This is the initial Release :) Still very much a WIP.MCEBuddy 2.x: MCEBuddy 2.2.17: Reccomended update to 2.2.16 Changelog for 2.2.17 (32bit and 64bit) 1. Fixed bugs around thread synchronization with new remote model (fixes cause the app to crash or hang) 2. Updated UPnP code base, faster and more reliable now 3. Now you can get audio/video properties for multiple files on main page. Selected multiple files and right click, all selected files properties will be shown. 4. Fix a bug, not able to enter a conversion task name in the GUIAggravation: Version 1.0: This version 1.0 release is pretty stable. You need the Silverlight 4 runtime, developer tools, and Experssion Blend 4 installed.Readable Passphrase Generator: KeePass Plugin 0.7.1: See the KeePass Plugin Step By Step Guide for instructions on how to install the plugin. Changes Built against KeePass 2.20Windows 8 Toolkit - Charts and More: Beta 1.0: The First Compiled Version of my LibraryPDF.NET: PDF.NET.Ver4.5-OpenSourceCode: PDF.NET Ver4.5 ????,????Web??????。 PDF.NET Ver4.5 Open Source Code,include a sample Web application project.D3 Loot Tracker: 1.4: Session name is displayed in the UI. Changes data directory for clickonce deployment so that sessions files are persisted between versions. Added a delete button in the sessions list window. Allow opening of the sessions local folder from the session list widow. Display the session name in the main window Ability to select which diablo process to hook up to when pressing new () function BUT only if multi-process support is selected in the generals settings tab menu. Session picker...CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor 1.1 Beta: Visual Ribbon Editor 1.1 Beta What's New: Fixed scrolling issue in UnHide dialog Added support for connecting via ADFS / IFD Added support for more than one action for a button Added support for empty StringParameter for Javascript functions Fixed bug in rule CrmClientTypeRule when selecting Outlook option Extended Prefix field in New Button dialogVisual Studio Icon Patcher: Version 1.5.2: This version contains no new images from v1.5.1 Contains the following improvements: Better support for detecting the installed languages The extract & inject commands won’t run if Visual Studio is running You may now run in extract or inject mode The p/invoke code was cleaned up based on Code Analysis recommendations When a p/invoke method fails the Win32 error message is now displayed Error messages use red text Status messages use green textZXing.Net: ZXing.Net 0.9.0.0: On the way to a release 1.0 the API should be stable now with this version. sync with rev. 2393 of the java version improved api better Unity support Windows RT binaries Windows CE binaries new Windows Service demo new WPF demo WindowsCE Hotfix: Fixes an error with ISO8859-1 encoding and scannning of QR-Codes. The hotfix is only needed for the WindowsCE platform.C.B.R. : Comic Book Reader: CBR 0.7: Synthesis since 0.6 : ePUB : Complete refactoring Add a new dedicated feed viewer for opds stream PDF conversion : improved with image merge Make all backstage panel scrollable Integrate the new AvalonDock 2 library. Support multi-document. 
Library explorer and Table of content are now toolboxes Designer for dynamic books is now mvvm and much better New BrowserForControl Customized xps viewer to suppress toolbars and bind it to cbr commands Add quick start manual and button ...menu4web: menu4web 1.0 - free javascript menu for web sites: menu4web 1.0 has been tested with all major browsers: Firefox, Chrome, IE, Opera and Safari. Minified m4w.js library is less than 9K. Includes 21 menu examples of different styles. Can be freely distributed under The MIT License (MIT).Rawr: Rawr 5.0.0: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...Coevery - Free CRM: Coevery 1.0.0.26: The zh-CN issue has been solved. We also add a project management module.VidCoder: 1.4.1 Beta: Updated to HandBrake 4971. This should fix some issues with stuck PGS subtitles. Fixed build break which prevented pre-compiled XML serializers from showing up. Fixed problem where a preset would get errantly marked as modified when re-opening the encode settings window or importing a new preset.Snake!: Snake 1.0: Version 1 StablePaging SharePoint ListItems using listitems position: Paginglistitems V1.0: This is a console application which has two methods both on CSOM and SOM to display the listitems in a paged manner.SharePoint Move Discussion Threads: SharePoint Move Discussion Threads ver 0.1: ver 0.1NTCPMSG: V1.1.1.0: increase the performance. Support .net framework 4.0.BlackJumboDog: Ver5.7.2: 2012.09.23 Ver5.7.2 (1)InetTest?? (2)HTTP?????????????????100???????????New Projects2D Sprite Editor: This is a 2d sprite editor. Import your sprite sheet, trace your animations frame and export the coordinates points in a simple txt file, ready to import.caifenweb1: test project.CatchThatException: This is a small logging library We created at developerpath.com to help us log exceptions. It write it to a text file and you can easilay open that txt.FsxWs - WebServices for Microsoft FSX: WebServices for MS Flight Simulator. Get flights data as JSON, KML. !! Still in SetUp phase - be patient !!GetTPB: Some training in downloading and parsing web pages, with multithreading too.JSON-RPC Client Generator (for XBMC): The goal of this project is to provide a .Net client for the XBMC JSONRPC API. The main part is not XBMC dependent and may be used for any JSON-RPC client.matlab-silhouette-pose-wtf: Whatevermfp: this is random codeMVC Grid: MVC Grid ExampleMyWebSocketTry: sssssssssssssssssssssssssssssssssssssssNetduino Console: Netduino Console is an interface with built in messaging layers that allows you as a developer to dynamically create plugins following a provided interface to iSharePoint ASP.NET Verifier: Project will allow to verify SharePoint 2010 components using ASP.NET web applicationSharepoint Custom Upload: This is a SharePoint solution that allows an administrator to customize the upload page individually for each document library in a site.. It allows you to makeWinWeb Browser Deluxe: WinWeb Browser Deluxe es un navegador web de código abierto basado en Internet Explorer hecho en Visual Basic .NET. 
Download it now! writethatoutput: This is the official release page for WriteThatOutPut from developerpath.com

    Read the article

  • The new workflow management of Oracle's Hyperion Planning: Define more details with Planning Unit Hierarchies and Promotional Paths

    - by Alexandra Georgescu
    After having been almost unchanged for several years, starting with the 11.1.2 release of Oracle´s Hyperion Planning the Process Management has not only got a new name: “Approvals” now is offering the possibility to further split Planning Units (comprised of a unique Scenario-Version-Entity combination) into more detailed combinations along additional secondary dimensions, a so called Planning Unit Hierarchy, and also to pre-define a path of planners, reviewers and approvers, called Promotional Path. I´d like to introduce you to changes and enhancements in this new process management and arouse your curiosity for checking out more details on it. One reason of using the former process management in Planning was to limit data entry rights to one person at a time based on the assignment of a planning unit. So the lowest level of granularity for this assignment was, for a given Scenario-Version combination, the individual entity. Even if in many cases one person wasn´t responsible for all data being entered into that entity, but for only part of it, it was not possible to split the ownership along another additional dimension, for example by assigning ownership to different accounts at the same time. By defining a so called Planning Unit Hierarchy (PUH) in Approvals this gap is now closed. Complementing new Shared Services roles for Planning have been created in order to manage set up and use of Approvals: The Approvals Administrator consisting of the following roles: Approvals Ownership Assigner, who assigns owners and reviewers to planning units for which Write access is assigned (including Planner responsibilities). Approvals Supervisor, who stops and starts planning units and takes any action on planning units for which Write access is assigned. Approvals Process Designer, who can modify planning unit hierarchy secondary dimensions and entity members for which Write access is assigned, can also modify scenarios and versions that are assigned to planning unit hierarchies and can edit validation rules on data forms for which access is assigned. (this includes as well Planner and Ownership Assigner responsibilities) Set up of a Planning Unit Hierarchy is done under the Administration menu, by selecting Approvals, then Planning Unit Hierarchy. Here you create new PUH´s or edit existing ones. The following window displays: After providing a name and an optional description, a pre-selection of entities can be made for which the PUH will be defined. Available options are: All, which pre-selects all entities to be included for the definitions on the subsequent tabs None, manual entity selections will be made subsequently Custom, which offers the selection for an ancestor and the relative generations, that should be included for further definitions. Finally a pattern needs to be selected, which will determine the general flow of ownership: Free-form, uses the flow/assignment of ownerships according to Planning releases prior to 11.1.2 In Bottom-up, data input is done at the leaf member level. Ownership follows the hierarchy of approval along the entity dimension, including refinements using a secondary dimension in the PUH, amended by defined additional reviewers in the promotional path. Distributed, uses data input at the leaf level, while ownership starts at the top level and then is distributed down the organizational hierarchy (entities). After ownership reaches the lower levels, budgets are submitted back to the top through the approval process. 
Proceeding to the next step, now a secondary dimension and the respective members from that dimension might be selected, in order to create more detailed combinations underneath each entity. After selecting the Dimension and a Parent Member, the definition of a Relative Generation below this member assists in populating the field for Selected Members, while the Count column shows the number of selected members. For refining this list, you might click on the icon right beside the selected member field and use the check-boxes in the appearing list for deselecting members. -------------------------------------------------------------------------------------------------------- TIP: In order to reduce maintenance of the PUH due to changes in the dimensions included (members added, moved or removed) you should consider to dynamically link those dimensions in the PUH with the dimension hierarchies in the planning application. For secondary dimensions this is done using the check-boxes in the Auto Include column. For the primary dimension, the respective selection criteria is applied by right-clicking the name of an entity activated as planning unit, then selecting an item of the shown list of include or exclude options (children, descendants, etc.). Anyway in order to apply dimension changes impacting the PUH a synchronization must be run. If this is really necessary or not is shown on the first screen after selecting from the menu Administration, then Approvals, then Planning Unit Hierarchy: under Synchronized you find the statuses Yes, No or Locked, where the last one indicates, that another user is just changing or synchronizing the PUH. Select one of the not synchronized PUH´s (status No) and click the Synchronize option in order to execute. -------------------------------------------------------------------------------------------------------- In the next step owners and reviewers are assigned to the PUH. Using the icons with the magnifying glass right besides the columns for Owner and Reviewer the respective assignments can be made in the ordermthat you want them to review the planning unit. While it is possible to assign only one owner per entity or combination of entity+ member of the secondary dimension, the selection for reviewers might consist of more than one person. The complete Promotional Path, including the defined owners and reviewers for the entity parents, can be shown by clicking the icon. In addition optional users might be defined for being notified about promotions for a planning unit. -------------------------------------------------------------------------------------------------------- TIP: Reviewers cannot change data, but can only review data according to their data access permissions and reject or promote planning units. -------------------------------------------------------------------------------------------------------- In order to complete your PUH definitions click Finish - this saves the PUH and closes the window. As a final step, before starting the approvals process, you need to assign the PUH to the Scenario-Version combination for which it should be used. From the Administration menu select Approvals, then Scenario and Version Assignment. Expand the PUH in order to see already existing assignments. Under Actions click the add icon and select scenarios and versions to be assigned. If needed, click the remove icon in order to delete entries. After these steps, set up is completed for starting the approvals process. 
Start, stop and control of the approvals process is now done under the Tools menu, and then Manage Approvals. The new PUH feature is complemented by various additional settings and features; some of them at least should be mentioned here: Export/Import of PUHs: Out of Office agent: Validation Rules changing promotional/approval path if violated (including the use of User-defined Attributes (UDAs)): And various new and helpful reviewer actions with corresponding approval states. About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Recover Deleted Files on an NTFS Hard Drive from a Ubuntu Live CD

    - by Trevor Bekolay
    Accidentally deleting a file is a terrible feeling. Not being able to boot into Windows and undelete that file makes that even worse. Fortunately, you can recover deleted files on NTFS hard drives from an Ubuntu Live CD. To show this process, we created four files on the desktop of a Windows XP machine, and then deleted them. We then booted up the same machine with the bootable Ubuntu 9.10 USB Flash Drive that we created last week. Once Ubuntu 9.10 boots up, open a terminal by clicking Applications in the top left of the screen, and then selecting Accessories > Terminal. To undelete our files, we first need to identify the hard drive that we want to undelete from. In the terminal window, type in: sudo fdisk -l and press enter. What you’re looking for is a line that ends with HPFS/NTFS (under the heading System). In our case, the device is “/dev/sda1”. This may be slightly different for you, but it will still begin with /dev/. Note this device name. If you have more than one hard drive partition formatted as NTFS, then you may be able to identify the correct partition by its size. If you look at the second line of text in the screenshot above, it reads “Disk /dev/sda: 136.4 GB, …” This means that the hard drive that Ubuntu has named /dev/sda is 136.4 GB large. If your hard drives are of different sizes, then this information can help you track down the right device name to use. Alternatively, you can just try them all, though this can be time consuming for large hard drives. Now that you know the name Ubuntu has assigned to your hard drive, we’ll scan it to see what files we can uncover. In the terminal window, type: sudo ntfsundelete <HD name> and hit enter. In our case, the command is: sudo ntfsundelete /dev/sda1 The names of files that can be recovered show up in the far right column. The percentage in the third column tells us how much of that file can be recovered. Three of the four files that we originally deleted are showing up in this list, even though we shut down the computer right after deleting the four files – so even in ideal cases, your files may not be recoverable. Nevertheless, we have three files that we can recover – two JPGs and an MPG. Note: ntfsundelete is immediately available in the Ubuntu 9.10 Live CD. If you are in a different version of Ubuntu, or for some other reason get an error when trying to use ntfsundelete, you can install it by entering “sudo apt-get install ntfsprogs” in a terminal window. To quickly recover the two JPGs, we will use the * wildcard to recover all of the files that end with .jpg. In the terminal window, enter: sudo ntfsundelete <HD name> -u -m *.jpg which is, in our case, sudo ntfsundelete /dev/sda1 -u -m *.jpg The two files are recovered from the NTFS hard drive and saved in the current working directory of the terminal. By default, this is the home directory of the current user, though we are working in the Desktop folder. Note that the ntfsundelete program does not make any changes to the original NTFS hard drive. If you want to take those files and put them back on the NTFS hard drive, you will have to move them there after they are undeleted with ntfsundelete. Of course, you can also put them on your flash drive or open Firefox and email them to yourself – the sky’s the limit! We have one more file to undelete – our MPG. Note the first column on the far left. It contains a number, its Inode. Think of this as the file’s unique identifier. Note this number. 
To undelete a file by its Inode, enter the following in the terminal: sudo ntfsundelete <HD name> -u -i <Inode> In our case, this is: sudo ntfsundelete /dev/sda1 -u -i 14159 This recovers the file, along with an identifier that we don’t really care about. All three of our recoverable files are now recovered. However, Ubuntu lets us know visually that we can’t use these files yet. That’s because the ntfsundelete program saves the files as the “root” user, not the “ubuntu” user. We can verify this by typing the following in our terminal window: ls -l We want these three files to be owned by ubuntu, not root. To do this, enter the following in the terminal window: sudo chown ubuntu <Files> If the current folder has other files in it, you may not want to change their owner to ubuntu. However, in our case we only have these three files in this folder, so we will use the * wildcard to change the owner of all three files: sudo chown ubuntu * The files now look normal, and we can do whatever we want with them. Hopefully you won’t need to use this tip, but if you do, ntfsundelete is a nice command-line utility. It doesn’t have a fancy GUI like many of the similar Windows programs, but it is a powerful tool that can recover your files quickly. See ntfsundelete’s manual page for more detailed usage information.
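For quick reference, here is a consolidated sketch of the commands used in this walkthrough. The device name (/dev/sda1), the Inode (14159) and the file pattern are the example values from above and will differ on your system; run the commands from the folder where you want the recovered files to end up.

  sudo fdisk -l                                # identify the NTFS partition (look for HPFS/NTFS in the System column)
  sudo ntfsundelete /dev/sda1                  # list the files that can be recovered from that partition
  sudo ntfsundelete /dev/sda1 -u -m '*.jpg'    # recover all deleted .jpg files into the current directory
  sudo ntfsundelete /dev/sda1 -u -i 14159      # recover a single file by its Inode
  sudo chown ubuntu *                          # take ownership so the live-session user can open the recovered files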

    Read the article

  • Flashing your Windows Phone Dummies

    - by Martin Hinshelwood
    The rate at which vendors release new updates for the HD2 is ridiculously slow. You have to wait for Microsoft to release the new OS, then you wait for HTC to build it into a ROM, and then you have to wait up to 6 months for your operator to badly customise it for their network. Once Windows Phone 7 is released this problem should go away, as Microsoft is likely to be able to update the phone over the air, but what do we do until then? I want Windows Mobile 6.5.5 now!   I’m an early adopter. If there is a new version of something then that’s the version I want. As long as you accept that you are using something on a “let the early adopter beware” basis, and accept that there may be bugs, sometimes serious crippling bugs, then go for it. Note that I won’t be responsible if you end up bricking your phone; unlocking or flashing your radio or ROM can be risky. If you follow the instructions then you should be fine; I’ve flashed my phones (SPV, M300, M1000, M2000, M3100, TyTN, TyTN 2, HD2) hundreds of times without any problems! I have been using Windows Mobile 6.5.5 since before it was called 6.5.5, and for long enough that I don’t even remember when I first started using it. I was using it on my HTC TyTN 2 before I got an HD2 a couple of months before Christmas, and the first custom ROMs for the HD2 arrived a couple of months after that. I always update to the latest ROM that I like, and occasionally I go back to the stock ROMs to have a look-see, but I am always disappointed. Terms: Soft Reset: Same as pulling out the battery; it is like a reboot for your phone. Hard Reset: Reinstalls the operating system from the image that is stored on the phone. ROM: This is the image that is loaded onto your phone, and it is used to reinstall your phone whenever you do a “hard reset”. Stock ROM: A ROM from the original vendor… so HTC. Cook a ROM: “Cooking” a ROM is the process a ROM developer goes through to take all of the parts (OS, drivers and applications) that make up a running phone and compile them into a ROM. ROM Kitchen: A place where you get an SDK and all the component parts of the phone: OS, drivers and applications. There are usually lots of tools for making it easier to compile and build the image. Flashing: The process of updating one of the layers of your phone with a new layer. Bricked: This is what happens when flashing goes wrong. Your phone is now good for only one thing… stopping paper blowing away in a windy place. You can “cook” your own ROM using one of the many good “ROM Kitchens”, or you can use a ROM built and tested by someone else. I have cooked my own ROM before, and while the tutorials are good, it is a lot of hassle. You can only flash ROMs that are made specifically for your phone, so find a ROM for your phone; XDA Developers is the best place to look. It has a forum-based structure and you can find your phone quite easily. XDA Developer Forum Installing a new ROM does have its risks. In the past there have been stories about phones being “bricked”, but I have not heard of a bricked phone for quite some years. If you follow the instructions carefully you should not have any problems. Note: Most of the tools are written by people for whom English is not their first language, so you will need to concentrate hard to understand some of the instructions. Have you ever read a manual that was literally translated from another language?
Enough said… There are a number of layers on your phone that you will need to know about: SPL: This is the lowest level, like a BIOS on a PC, and is the operating system’s gateway to the hardware. Radio: I think of this as the hardware drivers; you will need a different Radio for CDMA than for GSM networks. ROM: This is like your Windows CD, but it is stored internally on the phone. Flashing your phone consists of replacing one image with another, then wiping your phone and automatically reinstalling from the new image. Sometimes when you download an image, whether it is for a Radio or for a ROM, you only get a file called *.nbh. What do you do with this? Well, you need an RUU application to push that image to your phone. The RUUs are different per phone, but there is a CustomRUU for the HD2 that will update your phone with any *.nbh placed in the same directory. Download and Instructions for CustomRUU #1 Flash HardSPL An SPL is kind of like a BIOS, and the default one has checks to make sure that you are only installing a signed ROM. This would prevent you from installing one that comes from any source other than the vendor. NOTE: Installing a HardSPL invalidates your warranty, so remember to flash your phone with a “stock” vendor ROM before trying to send your phone in for repairs. Is the warranty reinstated when you go back to a stock ROM? I don’t know… Updating your SPL to a HardSPL effectively unlocks your phone so you can install anything you like. I would recommend HardSPL2. Download and Instructions for HardSPL2 #2 Task29 One of the problems that has been seen on the HD2 when flashing new ROMs is that things are left over from the old ROM. For a while the recommendation was to flash a stock ROM first, but some clever cookies have come up with “Task29”, which formats your phone first. After running this, your phone will be blank and will only boot to the white HTC logo and no further. You should follow the instructions, reboot (remove the battery) and hold down the “volume down” button while turning your HD2 on to enter the bootloader. From here you can run CustomRUU once the USB message appears. Download and Instructions for Task29 #3 Flash Radio You may need to play around with this one; there is no universally good or bad version, and the latest is not always the best. You know that annoying thing when you hit “end call” on your phone and nothing happens? Well, that’s down to the Radio. Get this version right for you and you may even be able to make calls from a Windows Mobile phone. Download There are no instructions here, but they are the same as for the ROM, except that you use the Radio’s *.nbh file. #4 Flash ROM If you have gotten this far then you are probably a pro by now. Just download the latest ROM below and flash it to your phone. I have been really impressed by the Artemis line of ROMs, but it is in no way the only choice. I like this one as the developer builds them as close to the stock ROM as possible while updating to the latest of everything. Download and Instructions for Artemis HD2 vXX Conclusion While updating your ROM is not for the faint-hearted, it provides more options than the stock ROMs and quicker feature updates than waiting… Technorati Tags: WM6

    Read the article

  • SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS

    - by pinaldave
    Data Quality Services is a very important feature of SQL Server. I have recently started to explore it and I am really learning some good concepts. Here are two very important blog posts which one should go over before continuing with this blog post: Installing Data Quality Services (DQS) on SQL Server 2012 Connecting Error to Data Quality Services (DQS) on SQL Server 2012 This article is an introduction to Data Quality Services for beginners. We will be using an Excel file. In the first article we learned to install DQS. In this article we will see how we can build a Knowledge Base and use it to help us identify the quality of the data, as well as help correct bad-quality data. Here are the two very important steps we will be learning in this tutorial: Building a New Knowledge Base and Creating a New Data Quality Project. Let us start by building the Knowledge Base. Click on New Knowledge Base. In our project we will be using Excel as the knowledge base. Here is the Excel file which we will be using. There are two columns: one is Colors and the other is Shade. They are independent columns, not related to each other. The point which I am trying to show is that Column A contains unique data and Column B contains duplicate records. Clicking on New Knowledge Base will bring up the following screen. Enter the name of the new knowledge base. Clicking NEXT will bring up the following screen, where it will allow you to select the Excel file and also let you select the source columns. I have selected both Colors and Shade as source columns. Creating a domain is very important. Here you can create a unique domain, or a domain which is composed from both Colors and Shade. As this is the first example, I will create unique domains – for Colors I will create the domain Colors and for Shade I will create the domain Shade. Here is how the screen looks after creating the domains. Clicking NEXT will bring you to the following screen where you can do the data discovery. Clicking on START will start the processing of the source data provided. The pre-processed data will show various information related to the source data. In our case it shows that the Colors column has unique data whereas Shade has non-unique data, with only two unique data rows. In the next screen you can add more rows as well as see the frequency of the data, as the unique values are listed. Clicking Next will publish the knowledge base which was just created. Now the knowledge base is created. We will take some random data and attempt a DQS implementation over it. I am using another Excel sheet here for simplicity; in reality you can easily use a SQL Server table for the same purpose. Click on New Data Quality Project to start a DQS project. In the next screen it will ask which knowledge base to use. We will be using our Colors knowledge base which we have just created. In the Colors knowledge base we had two columns – 1) Colors and 2) Shade – and we will be using both of the mappings here. Users can select one or multiple column mappings here. Now comes the most important phase of the complete project. Click on Start and it will run the cleansing process and show various results. In our case there were two columns to be processed, and it completed the task with the necessary information. It showed that it did not correct any values in the Colors column by itself, but for the Shade column it does have a suggestion.
We can train DQS to correct values, but let us keep that subject for future blog posts. Now click Next and keep the domain Colors selected on the left side. It will show that there are two incorrect values which need to be corrected. This is also where, once a value has been corrected, it will be auto-corrected in the future. I manually corrected the values here and clicked on the Approve radio buttons. As soon as I click the Approve buttons, the rows disappear from this tab and move to the Corrected tab. If I had clicked Reject, the rows would have moved to the Invalid tab instead. In this screen you can see how the 2 corrected rows are shown. You can click on the Corrected tab and see the 6 previously validated rows which passed the DQS process. Now let us click on the Shade domain on the left side of the screen. This domain shows very interesting details, as the DQS system guessed the correct answer, Dark, with a confidence level of 77%. That is quite a high confidence level, and manual observation also confirms that Dark is the correct answer. I clicked on Approve and the row moved to the Corrected tab. On the next screen DQS shows a summary of all the activities. It also shows how the correction of the data quality was performed. The user can export their data to a SQL Server table, a CSV file or Excel. The user also has the option to export either the data along with all the associated cleansing info, or the data only. I will select Data only for demonstration purposes. Clicking Export will generate the files. Let us open the generated file. It looks as follows, and it appears complete and corrected. Well, we have successfully completed the DQS process. The process is indeed very easy. I suggest you try this out yourself and you will find it very easy to learn. In the future we will go over advanced concepts. Are you using this feature on your production server? If yes, please leave a comment with your environment and business need. It will be indeed interesting to see where it is implemented. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • CodePlex Daily Summary for Friday, September 07, 2012

    CodePlex Daily Summary for Friday, September 07, 2012Popular ReleasesUmbraco CMS: Umbraco 4.9.0: Whats newThe media section has been overhauled to support HTML5 uploads, just drag and drop files in, even multiple files are supported on any HTML5 capable browser. The folder content overview is also much improved allowing you to filter it and perform common actions on your media items. The Rich Text Editor’s “Media” button now uses an embedder based on the open oEmbed standard (if you’re upgrading, enable the media button in the Rich Text Editor datatype settings and set TidyEditorConten...menu4web: menu4web 0.4.1 - javascript menu for web sites: This release is for those who believe that global variables are evil. menu4web has been wrapped into m4w singleton object. Added "Vertical Tabs" example which illustrates object notation.WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.1: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...iPDC - Free Phasor Data Concentrator: iPDC-v1.3.1: iPDC suite version-1.3.1, Modifications and Bug Fixed (from v 1.3.0) New User Manual for iPDC-v1.3.1 available on websites. Bug resolved : PMU Simulator TCP connection error and hang connection for client (PDC). Now PMU Simulator (server) can communicate more than one PDCs (clients) over TCP and UDP parallely. PMU Simulator is now sending the exact data frames as mentioned in data rate by user. PMU Simulator data rate has been verified by iPDC database entries and PMU Connection Tes...Microsoft SQL Server Product Samples: Database: AdventureWorks OData Feed: The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales Documents ManufacturingInstructions ProductCatalog TerritorySalesDrilldown WorkOrderRouting How to install the sample You can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...Desktop Google Reader: 1.4.6: Sorting feeds alphabetical is now optional (see preferences window)DotNetNuke® Community Edition CMS: 06.02.03: Major Highlights Fixed issue where mailto: links were not working when sending bulk email Fixed issue where uses did not see friendship relationships Problem is in 6.2, which does not show in the Versions Affected list above. 
Fixed the issue with cascade deletes in comments in CoreMessaging_Notification Fixed UI issue when using a date fields as a required profile property during user registration Fixed error when running the product in debug mode Fixed visibility issue when...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.B INI Sharp Library: B INI Sharp Library v1.0.0.0 Final Realsed: The frist realsedActive Social Migrator: ActiveSocialMigrator 1.0.0 Beta: Beta release for the Active Social Migration tool.EntLib.com????????: ??????demo??-For SQL 2005-2008: EntLibShopping ???v3.0 - ??????demo??,?????SQL SERVER 2005/2008/2008 R2/2012 ??????。 ??(??)??????。 THANKS.Sistem LPK Pemkot Semarang: Panduan Penggunaan Sistem LPK: Panduan cara menggunakan Aplikasi Sistem LPK Bagian Pembangunan Kota SemarangActive Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release Added fallback icon if unable to get the image/icon from the Cloud Service Removed some stale plugins that were either out dated or incomplete. Added handler for *.ab files for restoring backups Added plugin to create device backups Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\ Added custom folder icon for the android backups directory better error handling for installing an apk bug fixes for the Runn...The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: Built-in search engine using Lucene.NET Flood control improvements Notifications improvements: sync option and mail body View Roadmap for more details webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. Check Documentation section for detail.New ProjectsAdding 2013 Jewish Holidays for Outlook2003: Instruction: Copy the outlook.hol file to your compuer where Outlook2003 is installed. Double click the file, choose "Israel" and continue. 
That's it Agilcont System: Sistema de contabilidad para empresas privadas de preferencia para cajas que trabajan con efectivo en soles, dolares y con el material oroARB (A Request Broker): The idea is something like a Request Broker, but with some additional functionality.BATTLE.NET - SDK: This SDK provides the ability to use the Battle.net (Blizzard) Services for all supported Games such Diablo 3, World of Warcraft. Container Terminal System: SummaryDeclarative UX Streaming Data Language for the Cloud: Bringing a better communication paradigm for media and data..Get User Profile Information from SharePoint UserProfile Service: Used SharePoint object model to get the user profile information from the User Profile Service.Guess The City & State Windows 8 Source Code: Source code for Guess The City & State in Malaysia Windows 8 AppJquery Tree: This project is to demonstrate tree basic functionality.MCEBuddy 2.x: Convert and Remove Commercials for your Windows Media CenterMvcDesign: MvcDesign engine implementation projectMy Task Manager: This is a task manager module for DotNetNuke. I am using it to get started developing modules.MyAppwithbranches: MyAppwithbranchesProjecte prova: rpyGEO: pyGEO is a python package capable of parsing microarray data files. It also has a primitive plotting function.Scarlet Road: Scarlet Road is a top-down shooter. It's you against an unending horde of monsters.simplecounter: A simple counter, cick and counter.SiteEmpires: ????????Soundcloud Loader: Simple Tool for downloading Tracks from Soundcloud.Windows Phone Samples: Windows Phone code samples.

    Read the article

  • New Validation Attributes in ASP.NET MVC 3 Future

    - by imran_ku07
         Introduction:             Validating user inputs is a very important step in collecting information from users because it helps you to prevent errors while processing data. Incomplete or improperly formatted user inputs will create a lot of problems for your application. Fortunately, ASP.NET MVC 3 makes it very easy to handle the most common input validations. ASP.NET MVC 3 includes Required, StringLength, Range, RegularExpression, Compare and Remote validation attributes for common input validation scenarios. These validation attributes validate most of your user inputs, but validation for Email, File Extension, Credit Card, URL, etc. is still missing. Fortunately, some of these validation attributes are available in ASP.NET MVC 3 Future. In this article, I will show you how to leverage the Email, Url, CreditCard and FileExtensions validation attributes (which are available in ASP.NET MVC 3 Future) in an ASP.NET MVC 3 application.       Description:             First of all, you need to download the ASP.NET MVC 3 RTM source code from here. Then extract all files into a folder. Then open the MvcFutures project from the mvc3-rtm-sources\mvc3\src\MvcFutures folder and build the project. If you get compile-time error(s), simply remove the references to the System.Web.WebPages and System.Web.Mvc assemblies, add the references to the System.Web.WebPages and System.Web.Mvc 3 assemblies again from the .NET tab, and then build the project again. It will create a Microsoft.Web.Mvc assembly inside the mvc3-rtm-sources\mvc3\src\MvcFutures\obj\Debug folder. Now we can use the Microsoft.Web.Mvc assembly inside our application.             Create a new ASP.NET MVC 3 application. For demonstration purposes, I will create a dummy model UserInformation. So create a new class file UserInformation.cs inside the Model folder and add the following code,   public class UserInformation { [Required] public string Name { get; set; } [Required] [EmailAddress] public string Email { get; set; } [Required] [Url] public string Website { get; set; } [Required] [CreditCard] public string CreditCard { get; set; } [Required] [FileExtensions(Extensions = "jpg,jpeg")] public string Image { get; set; } }             Inside the UserInformation class, I am using the Email, Url, CreditCard and FileExtensions validation attributes, which are defined in the Microsoft.Web.Mvc assembly. By default, FileExtensionsAttribute allows png, jpg, jpeg and gif extensions. You can override this by using the Extensions property of the FileExtensionsAttribute class.
Then just open (or create) the HomeController.cs file and add the following code,   public class HomeController : Controller { public ActionResult Index() { return View(); } [HttpPost] public ActionResult Index(UserInformation u) { return View(); } }             Next just open (or create) the Index view for the Home controller and add the following code,  @model NewValidationAttributesinASPNETMVC3Future.Model.UserInformation @{ ViewBag.Title = "Index"; Layout = "~/Views/Shared/_Layout.cshtml"; } <h2>Index</h2> <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script> @using (Html.BeginForm()) { @Html.ValidationSummary(true) <fieldset> <legend>UserInformation</legend> <div class="editor-label"> @Html.LabelFor(model => model.Name) </div> <div class="editor-field"> @Html.EditorFor(model => model.Name) @Html.ValidationMessageFor(model => model.Name) </div> <div class="editor-label"> @Html.LabelFor(model => model.Email) </div> <div class="editor-field"> @Html.EditorFor(model => model.Email) @Html.ValidationMessageFor(model => model.Email) </div> <div class="editor-label"> @Html.LabelFor(model => model.Website) </div> <div class="editor-field"> @Html.EditorFor(model => model.Website) @Html.ValidationMessageFor(model => model.Website) </div> <div class="editor-label"> @Html.LabelFor(model => model.CreditCard) </div> <div class="editor-field"> @Html.EditorFor(model => model.CreditCard) @Html.ValidationMessageFor(model => model.CreditCard) </div> <div class="editor-label"> @Html.LabelFor(model => model.Image) </div> <div class="editor-field"> @Html.EditorFor(model => model.Image) @Html.ValidationMessageFor(model => model.Image) </div> <p> <input type="submit" value="Save" /> </p> </fieldset> } <div> @Html.ActionLink("Back to List", "Index") </div>             Now just run your application. You will find that both client-side and server-side validation for the above validation attributes work smoothly.                      Summary:             Email, URL, Credit Card and File Extension input validations are very common. In this article, I showed you how you can add these input validations to your application and explained it with an example. I am also attaching a sample application which includes Microsoft.Web.Mvc.dll, so you can add a reference to the Microsoft.Web.Mvc assembly directly instead of doing any manual work. Hope you will enjoy this article too.

    Read the article

  • SQL – Migrate Database from SQL Server to NuoDB – A Quick Tutorial

    - by Pinal Dave
    Data is growing exponentially and every organization with growing data is thinking about the next big innovation in the world of Big Data. Big data is indeed the future for every organization at some point in time. Just like every other next big thing, big data has its own challenges and issues. The biggest challenge associated with big data is finding the ideal platform which supports the scalability and growth of the data. If you are a regular reader of this blog, you must be familiar with NuoDB. I have been working with NuoDB for a while and their recent release is the best thus far. NuoDB is an elastically scalable SQL database that can run on local host, datacenter and cloud-based resources. A key feature of the product is that it does not require sharding (read more here). Last week, I was able to install NuoDB in less than 90 seconds and have explored their Explorer and Admin sections. You can read about my experiences in these posts: SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database Many SQL Authority readers have been following me in my journey to evaluate NuoDB. One of the frequently asked questions I’ve received from you is whether there is any way to migrate data from SQL Server to NuoDB. The fact is that there is indeed a way to do so, and NuoDB provides a fantastic tool which can help users do it. NuoDB Migrator is a command line utility that supports the migration of Microsoft SQL Server, MySQL, Oracle, and PostgreSQL schemas and data to NuoDB. The migration to NuoDB is a three-step process: NuoDB Migrator generates a schema for a target NuoDB database, it dumps the data from the source database, and it loads that data into the target NuoDB database. Let’s see how we can migrate our data from SQL Server to NuoDB using this simple three-step approach. But before we do that, we will create a sample database in MSSQL which we will later migrate to NuoDB: Setup Step 1: Build sample data CREATE DATABASE [Test]; CREATE TABLE [Department]( [DepartmentID] [smallint] NOT NULL, [Name] VARCHAR(100) NOT NULL, [GroupName] VARCHAR(100) NOT NULL, [ModifiedDate] [datetime] NOT NULL, CONSTRAINT [PK_Department_DepartmentID] PRIMARY KEY CLUSTERED ( [DepartmentID] ASC ) ) ON [PRIMARY]; INSERT INTO Department SELECT * FROM AdventureWorks2012.HumanResources.Department; Note that I am using the SQL Server AdventureWorks database to build this sample table, but you can build it any way you prefer. Setup Step 2: Install 64-bit Java Before you can begin the migration process to NuoDB, make sure you have 64-bit Java installed on your computer, because the NuoDB Migrator tool is built in Java. You can download 64-bit Java for Windows, Mac OSX, or Linux from the following link: http://java.com/en/download/manual.jsp. One more thing to remember: make sure that the JAVA_HOME variable in your environment settings points to your Java installation directory, or the tool will not work. Here is how you can do it: Go to My Computer >> Right Click >> Select Properties >> Click on Advanced System Settings >> Click on Environment Variables >> Click on New and enter the following values. Variable Name: JAVA_HOME Variable Value: C:\Program Files\Java\jre7 Make sure you enter your own Java installation directory in the Variable Value field. Setup Step 3: Install a JDBC driver for SQL Server.
There are two JDBC drivers available for SQL Server. Select the one you prefer to use by following one of the two links below: Microsoft JDBC Driver jTDS JDBC Driver In this example we will be using the jTDS JDBC driver. Once you download the driver, move it to your NuoDB installation folder. In my case, I have moved the JAR file of the driver into the C:\Program Files\NuoDB\tools\migrator\jar folder, as this is my NuoDB installation directory. Now we are all set to start the three-step migration process from SQL Server to NuoDB: Migration Step 1: NuoDB Schema Generation Here is the command I use to generate a schema of my SQL Server database in NuoDB. First I go to the folder C:\Program Files\NuoDB\tools\migrator\bin and execute the nuodb-migrator.bat file. Note that my database name is ‘test’. Additionally, my username and password are also ‘test’. You can see that my SQL Server database is running on my localhost on port 1433. Additionally, the schema of the table is ‘dbo’. nuodb-migrator schema --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.path=/tmp/schema.sql The above command will generate a schema for all my SQL Server tables and put it in the file C:\tmp\schema.sql. You can open the schema.sql file and execute it directly in your NuoDB instance. You can follow the link here to see how you can execute a SQL script in NuoDB. Please note that if you have not yet created the schema in the NuoDB database, you should create it before executing this step. Migration Step 2: Generate the Dump File of the Data Once you have recreated your schema in NuoDB from SQL Server, the next step is very easy. Here we create a CSV-format dump file, which will contain all the data from all the tables in the SQL Server database. The command to do so is very similar to the above command. Be aware that this step may take a bit of time depending on your database size. nuodb-migrator dump --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.type=csv --output.path=/tmp/dump.cat Once the above command has successfully executed, you can find your CSV file in the C:\tmp\ folder. However, you do not have to do anything with it manually. The third and final step will take care of completing the migration process. Migration Step 3: Load the Data into NuoDB After building the schema and taking a dump of the data, the next step is essential and crucial. It will take the CSV file and load it into the NuoDB database. nuodb-migrator load --target.url=jdbc:com.nuodb://localhost:48004/mytest --target.schema=dbo --target.username=test --target.password=test --input.path=/tmp/dump.cat Please note that in the above command we are now targeting the NuoDB database, which we have already created with the name “MyTest”. If the database does not exist, create it manually before executing the above command. I have kept the username and password as “test”, but please make sure that you create a more secure password for your database for security reasons. Voila! You’re Done That’s it. You are done. It took 3 setup and 3 migration steps to migrate your SQL Server database to NuoDB. You can now start exploring the database and building excellent, scale-out applications.
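For copy-and-paste convenience, here is the same three-command sequence as a sketch of a single shell session. The connection details, credentials, schema and file paths are the example values used above, and the backslash line continuations are Unix-shell style; on Windows, run nuodb-migrator.bat with each command's options on one line.

  # 1. Generate a NuoDB schema from the SQL Server source
  nuodb-migrator schema \
    --source.driver=net.sourceforge.jtds.jdbc.Driver \
    --source.url=jdbc:jtds:sqlserver://localhost:1433/ \
    --source.username=test --source.password=test \
    --source.catalog=test --source.schema=dbo \
    --output.path=/tmp/schema.sql

  # 2. Dump all source data to a CSV-based dump file
  nuodb-migrator dump \
    --source.driver=net.sourceforge.jtds.jdbc.Driver \
    --source.url=jdbc:jtds:sqlserver://localhost:1433/ \
    --source.username=test --source.password=test \
    --source.catalog=test --source.schema=dbo \
    --output.type=csv --output.path=/tmp/dump.cat

  # 3. Load the dump into the target NuoDB database (create the "mytest" database first)
  nuodb-migrator load \
    --target.url=jdbc:com.nuodb://localhost:48004/mytest \
    --target.schema=dbo --target.username=test --target.password=test \
    --input.path=/tmp/dump.cat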
In this blog post, I have done my best to come up with a simple and easy process which you can follow to migrate your app from SQL Server to NuoDB. Download NuoDB I strongly encourage you to download NuoDB and go through my 3-step migration tutorial from SQL Server to NuoDB. Additionally, here are two very important blog posts from NuoDB CTO Seth Proctor. He has written excellent posts on the concept of Administrative Domains. NuoDB has this concept of an Administrative Domain, which is a collection of hosts that can run one or multiple databases. Each database has its own TEs and SMs, but all are managed within the Admin Console for that particular domain. http://www.nuodb.com/techblog/2013/03/11/getting-started-provisioning-a-domain/ http://www.nuodb.com/techblog/2013/03/14/getting-started-running-a-database/ Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Observations in Migrating from JavaFX Script to JavaFX 2.0

    - by user12608080
    Observations in Migrating from JavaFX Script to JavaFX 2.0 Introduction Having been available for a few years now, there is a decent body of work written for JavaFX using the JavaFX Script language. With the general availability announcement of JavaFX 2.0 Beta, the natural question arises about converting the legacy code over to the new JavaFX 2.0 platform. This article reflects on some of the observations encountered while porting source code over from JavaFX Script to the new JavaFX API paradigm. The Application The program chosen for migration is an implementation of the Sudoku game and serves as a reference application for the book JavaFX – Developing Rich Internet Applications. The design of the program can be divided into two major components: (1) A user interface (ideally suited for JavaFX design) and (2) the puzzle generator. For the context of this article, our primary interest lies in the user interface. The puzzle generator code was lifted from a sourceforge.net project and is written entirely in Java. Regardless which version of the UI we choose (JavaFX Script vs. JavaFX 2.0), no code changes were required for the puzzle generator code. The original user interface for the JavaFX Sudoku application was written exclusively in JavaFX Script, and as such is a suitable candidate to convert over to the new JavaFX 2.0 model. However, a few notable points are worth mentioning about this program. First off, it was written in the JavaFX 1.1 timeframe, where certain capabilities of the JavaFX framework were as of yet unavailable. Citing two examples, this program creates many of its own UI controls from scratch because the built-in controls were yet to be introduced. In addition, layout of graphical nodes is done in a very manual manner, again because much of the automatic layout capabilities were in flux at the time. It is worth considering that this program was written at a time when most of us were just coming up to speed on this technology. One would think that having the opportunity to recreate this application anew, it would look a lot different from the current version. Comparing the Size of the Source Code An attempt was made to convert each of the original UI JavaFX Script source files (suffixed with .fx) over to a Java counterpart. Due to language feature differences, there are a small number of source files which only exist in one version or the other. The table below summarizes the size of each of the source files. JavaFX Script source file Number of Lines Number of Character JavaFX 2.0 Java source file Number of Lines Number of Characters ArrowKey.java 6 72 Board.fx 221 6831 Board.java 205 6508 BoardNode.fx 446 16054 BoardNode.java 723 29356 ChooseNumberNode.fx 168 5267 ChooseNumberNode.java 302 10235 CloseButtonNode.fx 115 3408 CloseButton.java 99 2883 ParentWithKeyTraversal.java 111 3276 FunctionPtr.java 6 80 Globals.java 20 554 Grouping.fx 8 140 HowToPlayNode.fx 121 3632 HowToPlayNode.java 136 4849 IconButtonNode.fx 196 5748 IconButtonNode.java 183 5865 Main.fx 98 3466 Main.java 64 2118 SliderNode.fx 288 10349 SliderNode.java 350 13048 Space.fx 78 1696 Space.java 106 2095 SpaceNode.fx 227 6703 SpaceNode.java 220 6861 TraversalHelper.fx 111 3095 Total 2,077 79,127 2531 87,800 A few notes about this table are in order: The number of lines in each file was determined by running the Unix ‘wc –l’ command over each file. The number of characters in each file was determined by running the Unix ‘ls –l’ command over each file. 
The examination of the code could certainly be much more rigorous. No standard formatting was performed on these files.  All comments however were deleted. There was a certain expectation that the new Java version would require more lines of code than the original JavaFX script version. As evidenced by a count of the total number of lines, the Java version has about 22% more lines than its FX Script counterpart. Furthermore, there was an additional expectation that the Java version would be more verbose in terms of the total number of characters.  In fact the preceding data shows that on average the Java source files contain fewer characters per line than the FX files.  But that's not the whole story.  Upon further examination, the FX Script source files had a disproportionate number of blank characters.  Why?  Because of the nature of how one develops JavaFX Script code.  The object literal dominates FX Script code.  Its not uncommon to see object literals indented halfway across the page, consuming lots of meaningless space characters. RAM consumption Not the most scientific analysis, memory usage for the application was examined on a Windows Vista system by running the Windows Task Manager and viewing how much memory was being consumed by the Sudoku version in question. Roughly speaking, the FX script version, after startup, had a RAM footprint of about 90MB and remained pretty much the same size. The Java version started out at about 55MB and maintained that size throughout its execution. What About Binding? Arguably, the most striking observation about the conversion from JavaFX Script to JavaFX 2.0 concerned the need for data synchronization, or lack thereof. In JavaFX Script, the primary means to synchronize data is via the bind expression (using the “bind” keyword), and perhaps to a lesser extent it’s “on replace” cousin. The bind keyword does not exist in Java, so for JavaFX 2.0 a Data Binding API has been introduced as a replacement. To give a feel for the difference between the two versions of the Sudoku program, the table that follows indicates how many binds were required for each source file. For JavaFX Script files, this was ascertained by simply counting the number of occurrences of the bind keyword. As can be seen, binding had been used frequently in the JavaFX Script version (and does not take into consideration an additional half dozen or so “on replace” triggers). The JavaFX 2.0 program achieves the same functionality as the original JavaFX Script version, yet the equivalent of binding was only needed twice throughout the Java version of the source code. JavaFX Script source file Number of Binds JavaFX Next Java source file Number of “Binds” ArrowKey.java 0 Board.fx 1 Board.java 0 BoardNode.fx 7 BoardNode.java 0 ChooseNumberNode.fx 11 ChooseNumberNode.java 0 CloseButtonNode.fx 6 CloseButton.java 0 CustomNodeWithKeyTraversal.java 0 FunctionPtr.java 0 Globals.java 0 Grouping.fx 0 HowToPlayNode.fx 7 HowToPlayNode.java 0 IconButtonNode.fx 9 IconButtonNode.java 0 Main.fx 1 Main.java 0 Main_Mobile.fx 1 SliderNode.fx 6 SliderNode.java 1 Space.fx 0 Space.java 0 SpaceNode.fx 9 SpaceNode.java 1 TraversalHelper.fx 0 Total 58 2 Conclusions As the JavaFX 2.0 technology is so new, and experience with the platform is the same, it is possible and indeed probable that some of the observations noted in the preceding article may not apply across other attempts at migrating applications. 
That being said, this first experience indicates that the migrated Java code will likely be larger, though not extensively so, than the original JavaFX Script source. Furthermore, although very important, it appears that the requirement for data synchronization via binding may be significantly less with the new platform.

    Read the article

  • 24+ Coda Alternatives for Windows and Linux

    - by Matt
    Coda plays an important role in designing layouts on the Mac. There are numerous Coda alternatives for Windows and Linux too. It is not possible to describe each and every one, so some of the Coda alternatives which work on both Windows and Linux platforms are discussed below. EditPlus $35.00 A good thing about EditPlus is that it highlights URLs and email addresses, activating them when you ‘ctrl + double-click’. It also has a built-in browser for previewing HTML, and FTP and SFTP support. It also supports macros and RegEx find and replace. UltraEdit $49.99 It is another good Coda alternative for Windows and Linux. It is well suited as an editor for text, HTML and hex, and it also serves as an advanced PHP, Perl, Java and JavaScript editor for programmers. It supports disk-based 64-bit or standard file handling on 32-bit Windows platforms, on Windows 2000 and later versions. HippoEdit $39.95 HippoEdit has excellent autocomplete: it pops up a ‘tooltip’ above your cursor as you type, suggesting words you’ve already typed. It does syntax highlighting for over two dozen languages. Sublime Text $59.00 Sublime Text’s awesome ‘zoomed out’ view of the file lets you focus on the area you want. It lets you open a local file when you right-click on its link, and there are a few automation features, so this would make a solid choice of text editor. Textpad $24.70 TextPad is a simple editor with nifty features such as column select, drag-and-drop text between files, and hyperlink support. It also supports large files. Aptana Free Aptana Studio is one of the best editors working on both Windows and Linux. It is a complete web development environment that blends powerful authoring tools with a collection of online hosting and collaboration services. It is quite helpful as it supports PHP, CSS, FTP, and more. SciTE Free It is a Scintilla-based text editor. It has gradually developed into a generally useful editor. It provides for building and running programs, and it is best used for jobs with simple configurations. SciTE is currently available for Intel Win32 and Linux-compatible operating systems with GTK+. It has been run on Windows XP and on Fedora 8 and Ubuntu 7.10 with GTK+ 2.12. E Text Editor $34.96 E Text Editor is a new text editor for Windows which also works on Linux. It has powerful editing features and also some unique abilities. It makes text manipulation quite fast and easy, and lets the user focus on his writing as it automatically does all the manual work. It can be extended in any language. It supports TextMate bundles, thus allowing the user to tap into a huge and active community. Editra Free Editra is an upcoming editor with some fantastic features such as user profiles, auto-completion, session saving, and syntax highlighting for 60+ languages. Plugins can extend the feature set, offering an integrated Python console, FTP client, file browser, and calculator, among others. PSPad Free PSPad is good for writing CSS, as it brings an internal web browser and a macro recorder to the table. It also supports hex editing, and some degree of code compiling. JEdit Free It is a mature programmer’s text editor and has taken a good deal of time to develop into what it is today. It is better than many costlier development tools due to its features and simplicity of use. It has been released as free software with full source code, provided under the terms of the GPL 2.0, which also adds to its attractiveness.
NEdit Free It is a multi-purpose text editor for the X Window System, which also works on Linux. It combines a standard, easy to use, graphical user interface with the full functionality and stability required by users who edit text for long period a day. It also provides for thorough support for development in various languages. It also facilitates the use of text processors, and other tools at the same time. It can be used productively by anyone who needs to edit text. It is quite a user-friendly tool. Its salient features include syntax highlighting with built in pattern, auto indent, tab emulation, block indentation adjustment etc. As of version 5.1, NEdit may be freely distributed under the terms of the GNU General Public License. MadEdit Free Mad Edit is an Open-Source and Cross-Platform Text/Hex Editor. It is written in C++ and wxWidgets. MadEdit can edit files in Text/Column/Hex modes. It also supports many useful functions, such as Syntax Highlighting, Word Wrap, Encoding for UTF8/16/32,and others. It also supports word count, which makes it quite a useful text editor for both windows and Linux. It has been recently modified on 10/09/2010. KompoZer Free Kompozer is a complete web authoring system that has a combination of web file management and easy-to-use WYSIWYG web page editing. KompoZer has been designed to be completely and extensively easy to use. It is thus an ideal tool for non-technical computer users who want to create an attractive, professional-looking web site without knowing HTML or web coding. It is based on the NVU source code. Vim Free Vim or “Vi IMproved” is an advanced text editor. Its salient features are syntax highlighting, word completion and it also has a huge amount of contributed content. Vim has several “modes” on offer for editing, which adds to the efficiency in editing. Thus it becomes a non-user-friendly application but it is also strength for its users. The normal mode binds alphanumeric keys to task-oriented commands. The visual mode highlights text. More tools for search & replace, defining functions, etc. are offered through command line mode. Vim comes with complete help. NotePad ++ Free One of the the best free text editor for Windows out there; with support for simple things—like syntax highlighting and folding—all the way up to FTP, Notepad++ should tick most of the boxes Notepad2 Free Notepad2 is also based on the Scintilla editing engine, but it’s much simpler than Notepad++. It bills itself as being fast, light-weight, and Notepad-like. Crimson Editor Free Crimson Editor has the ability to edit remote files, using a built-in FTP client; there’s also a spell checker. TotalEdit Free TotalEdit allows file comparison, RegEx search and replace, and has multiple options for file backup / versioning. For cleanup, it offers (X)HTML and XML customizable formatting, and a spell checker. In-Type Free ConTEXT Free SourceEdit Free SourceEdit includes features such as clipboard history, syntax highlighting and autocompletion for a decent set of languages. A hex editor and FTP client. RJ TextED Free RJ TextED supports integration with TopStyle Lite. Provides HTML validation and formatting. It includes an FTP client, a file browser, and a code browser, as well as a character map and support for email. GEDIT Free It is one of the best coda alternatives for windows and Linux. It has syntax highlighting and is best suitable for programming. 
It has many attractive features such as full support for UTF-8, undo/redo, clipboard support, search and replace, configurable syntax highlighting for various languages, and many more supportive features. It is extensible with plug-ins. Other important Coda alternatives for Windows and Linux are Redcar, Bluefish Editor, NVU, RubyMine, SlickEdit, Geany, Editra, txt2html and CSSED. There are many more; it’s up to the user to decide which one best suits his requirements. Related posts: 10 Useful Text Editor For Developer Applications to Install & Run Windows on Linux Open Source WYSIWYG Text Editors

    Read the article

  • Brighton Rocks: UA Europe 2011

    - by ultan o'broin
    User Assistance Europe 2011 was held in Brighton, UK. Having seen Quadrophenia a dozen times, I just had to go along (OK, I wanted to talk about messages in enterprise applications). Sadly, it rained a lot, though that was still eminently more tolerable than being stuck home in Dublin during Bloomsday. So, here are my somewhat selective highlights and observations from the conference, massively skewed towards my own interests, as usual. Enjoyed Leah Guren's (Cow TC) great start ‘keynote’ on the Cultural Dimensions of Software Help Usage. Starting out by revisiting Hofstede's and Hall's work on culture (how many times I have done this for Multilingual magazine?) and then Neilsen’s findings on age as an indicator of performance, Leah showed how it is the expertise of the user that user assistance (UA) needs to be designed for (especially for high-end users), with some considerations made for age, while the gender and culture of users are not major factors. Help also needs to be contextual and concise, embedded close to the action. That users are saying things like “If I want help on Office, I go to Google ” isn't all that profound at this stage, but it is always worth reiterating how search can be optimized to return better results for users. Interestingly, regardless of user education level, the issue of information quality--hinging on the lynchpin of terminology reflecting that of the user--is critical. Major takeaway for me there. Matthew Ellison’s sessions on embedded help and demos were also impressive. Embedded help that is concise and contextual is definitely a powerful UX enabler, and I’m pleased to say that in Oracle Fusion Applications we have embraced the concept fully. Matthew also mentioned in his session about successful software demos that the principle of modality with demos is a must. Look no further than Oracle User Productivity Kit demos See It!, Try It!, Know It, and Do It! modes, for example. I also found some key takeaways in the presentation by Marie-Louise Flacke on notes and warnings. Here, legal considerations seemed to take precedence over providing any real information to users. I was delighted when Marie-Louise called out the Oracle JDeveloper documentation as an exemplar of how to use notes and instructions instead of trying to scare the bejaysus out of people and not providing them with any real information they’d find useful instead. My own session on designing messages for enterprise applications was well attended. Knowing your user profiles (remember user expertise is the king maker for UA so write for each audience involved), how users really work, the required application business and UI rules, what your application technology supports, and how messages integrate with the enterprise help desk and support policies and you will go much further than relying solely on the guideline of "writing messages in plain language". And, remember the value in warnings and confirmation messages too, and how you can use them smartly. I hope y’all got something from my presentation and from my answers to questions afterwards. Ellis Pratt stole the show with his presentation on applying game theory to software UA, using plenty of colorful, relevant examples (check out the Atlassian and DropBox approaches, for example), and striking just the right balance between theory and practice. 
Completely agree that the approach to take here is not to make UA itself a game, but to invoke UA as part of a bigger game dynamic (time-to-task completion, personal and communal goals, personal achievement and status, and so on). Sure there are gotchas and limitations to gamification, and we need to do more research. However, we'll hear a lot more about this subject in coming years, particularly in the enterprise space. I hope. I also heard good things about the different sessions about DITA usage (including one by Sonja Fuga that clearly opens the door for major innovation in the community content space using WordPress), the progressive disclosure of information (Cerys Willoughby), an overview of controlled language (or "information quality", as I like to position it) solutions and rationale by Dave Gash, and others. I also spent time chatting with Mike Hamilton of MadCap Software, who showed me a cool demo of their Flare product, and the Lingo translation solution. I liked the idea of their licensing model for workers-on-the-go; that’s smart UX-awareness in itself. Also chatted with Julian Murfitt of Mekon about uptake of DITA in the enterprise space. In all, it's worth attending UA Europe. I was surprised, however, not to see conference topics about mobile UA, community conversation and content, and search in its own right. These are unstoppable forces now, and the latter is pretty central to providing assistance now to all but the most irredentist of hard-copy fetishists or advanced technical or functional users working away on the back end of applications and systems. Only saw one iPad too (says the guy who carries three laptops). Tweeting during the conference was pretty much nonexistent during the event, so no community energy there. Perhaps all this can be addressed next year. I would love to see the next UA Europe event come to Dublin (despite Bloomsday, it's not a bad place place, really) now that hotels are so cheap and all. So, what is my overall impression of the state of user assistance in Europe? Clearly, there are still many people in the industry who feel there is something broken with the traditional forms of user assistance (particularly printed doc) and something needs to be done about it. I would suggest they move on and try and embrace change, instead. Many others see new possibilities, offered by UX and technology, as well as the reality of online user behavior in an increasingly connected world and that is encouraging. Such thought leaders need to be listened to. As Ellis Pratt says in his great book, Trends in Technical Communication - Rethinking Help: “To stay relevant means taking a new perspective on the role (of technical writer), and delivering “products” over and above the traditional manual and online Help file... there are a number of new trends in this field - some complementary, some conflicting. Whatever trends emerge as the norm, it’s likely the status quo will change.” It already has, IMO. I hear similar debates in the professional translation world about the onset of translation crowd sourcing (the Facebook model) and machine translation (trust me, that battle is over). Neither of these initiatives has put anyone out of a job and probably won't, though the nature of the work might change. If anything, such innovations have increased the overall need for professional translators as user expectations rise, new audiences emerge, and organizations need to collate and curate user-generated content, combining it with their own. 
Perhaps user assistance professionals can learn from other professions and grow accordingly.

    Read the article

  • Networking in VirtualBox

    - by Fat Bloke
    Networking in VirtualBox is extremely powerful, but it can also be a bit daunting, so here's a quick overview of the different ways you can set up networking in VirtualBox, with a few pointers as to which configurations should be used and when. VirtualBox allows you to configure up to 8 virtual NICs (Network Interface Controllers) for each guest vm (although only 4 are exposed in the GUI), and for each of these NICs you can configure:
Which virtualized NIC-type is exposed to the Guest. Examples include: Intel PRO/1000 MT Server (82545EM), AMD PCNet FAST III (Am79C973, the default), or a Paravirtualized network adapter (virtio-net).
How the NIC operates with respect to your Host's physical networking. The main modes are: Network Address Translation (NAT), Bridged networking, Internal networking, Host-only networking, and NAT with Port-forwarding.
The choice of NIC-type comes down to whether the guest has drivers for that NIC. VirtualBox suggests a NIC based on the guest OS-type that you specify during creation of the vm, and you rarely need to modify this. But the choice of networking mode depends on how you want to use your vm (client or server) and whether you want other machines on your network to see it. So let's look at each mode in a bit more detail...
Network Address Translation (NAT)
This is the default mode for new vm's and works great in most situations when the Guest is a "client" type of vm (i.e. most network connections are outbound). Here's how it works: when the guest OS boots, it typically uses DHCP to get an IP address. VirtualBox will field this DHCP request and tell the guest OS its assigned IP address and the gateway address for routing outbound connections. In this mode, every vm is assigned the same IP address (10.0.2.15) because each vm thinks it is on its own isolated network. And when a vm sends its traffic via the gateway (10.0.2.2), VirtualBox rewrites the packets to make them appear as though they originated from the Host, rather than the Guest (running inside the Host). This means that the Guest will work even as the Host moves from network to network (e.g. a laptop moving between locations), and from wireless to wired connections too. However, how does another computer initiate a connection into a Guest, e.g. connect to a web server running in the Guest? This is not (normally) possible using NAT mode as there is no route into the Guest OS. So for vm's running servers we need a different networking mode...
Bridged Networking
Bridged Networking is used when you want your vm to be a full network citizen, i.e. to be an equal to your host machine on the network. In this mode, a virtual NIC is "bridged" to a physical NIC on your host. The effect of this is that each VM has access to the physical network in the same way as your host. It can access any service on the network such as external DHCP services, name lookup services, and routing information, just as the host does. The downside of this mode is that if you run many vm's you can quickly run out of IP addresses, or your network administrator gets fed up with you asking for statically assigned IP addresses. Secondly, if your host has multiple physical NICs (e.g. Wireless and Wired) you must reconfigure the bridge when your host jumps networks. Hmm, so what if you want to run servers in vm's but don't want to involve your network administrator? Maybe one of the next two modes is for you...
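    Before moving on, here is a rough command-line sketch of the two modes just covered, using VBoxManage. The vm name "myvm" and the host interface eth0 are placeholders, the commands are run while the vm is powered off, and the exact option spellings can vary a little between VirtualBox versions:
        VBoxManage modifyvm "myvm" --nictype1 virtio                      # choose the virtualized NIC type (the paravirtualized virtio-net adapter here)
        VBoxManage modifyvm "myvm" --nic1 nat                             # attach NIC 1 in the default NAT mode
        VBoxManage modifyvm "myvm" --nic1 bridged --bridgeadapter1 eth0   # or bridge NIC 1 to the host's eth0 instead
    The last two lines are alternatives: NIC 1 is either NAT'd or bridged, not both.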
Internal Networking
When you configure one or more vm's to sit on an Internal network, VirtualBox ensures that all traffic on that network stays within the host and is only visible to vm's on that virtual network. The internal network (in this example, "intnet") is a totally isolated network and so is very "quiet". This is good for testing when you need a separate, clean network, and you can create sophisticated internal networks with vm's that provide their own services to the internal network (e.g. Active Directory, DHCP, etc.). Note that not even the Host is a member of the internal network, but this mode allows vm's to function even when the Host is not connected to a network (e.g. on a plane). Also note that in this mode VirtualBox provides no "convenience" services such as DHCP, so your machines must be statically configured or one of the vm's needs to provide a DHCP/Name service. Multiple internal networks are possible, and you can configure vm's to have multiple NICs that sit across internal and other network modes and thereby provide routes if needed. But all this sounds tricky. What if you want an Internal Network that the host participates on, with VirtualBox providing IP addresses to the Guests? Ah, then for this, you might want to consider Host-only Networking...
Host-only Networking
Host-only Networking is like Internal Networking in that you indicate which network the Guest sits on, in this case, "vboxnet0". All vm's sitting on this "vboxnet0" network will see each other, and additionally, the host can see these vm's too. However, other external machines cannot see Guests on this network, hence the name "Host-only". Logically, this looks very similar to Internal Networking, but the host is now on "vboxnet0" and can provide DHCP services. To configure how a Host-only network behaves, look in the VirtualBox Manager...Preferences...Network dialog.
Port-Forwarding with NAT Networking
Now you may think that we've provided enough modes here to handle every eventuality, but here's just one more... What if you cart around a mobile demo or dev environment on, say, a laptop, and you have one or more vm's that you need other machines to connect into? And you are continually hopping onto different (customer?) networks. In this scenario:
NAT - won't work because external machines need to connect in.
Bridged - possibly an option, but does your customer want you eating IP addresses, and can your software cope with changing networks?
Internal - we need the vm(s) to be visible on the network, so this is no good.
Host-only - same problem as above: we want external machines to connect in to the vm's.
Enter Port-forwarding to save the day! Configure your vm's to use NAT networking, add Port Forwarding rules, and external machines can then connect to "host":"port number"; the connections are forwarded by VirtualBox to the guest:port number specified. For example, if your vm runs a web server on port 80, you could set up a rule which reads: "any connections on port 8080 on the Host will be forwarded onto this vm's port 80". This provides a mobile demo system which won't need re-configuring every time you open your laptop lid.
Summary
VirtualBox has a very powerful set of options allowing you to set up almost any configuration your heart desires. For more information, check out the VirtualBox User Manual on Virtual Networking. -FB
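    As a rough command-line companion to the remaining modes above, here is a VBoxManage sketch. It reuses the hypothetical vm name "myvm", the "intnet" and "vboxnet0" network names, and the 8080-to-80 web-server example from the text; the DHCP address range is only illustrative, and option spellings may differ slightly between VirtualBox versions:
        VBoxManage modifyvm "myvm" --nic1 intnet --intnet1 intnet                 # attach NIC 1 to the isolated internal network "intnet"
        VBoxManage hostonlyif create                                              # create a host-only interface (typically vboxnet0)
        VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1                 # give the host an address on vboxnet0
        VBoxManage dhcpserver add --ifname vboxnet0 --ip 192.168.56.100 --netmask 255.255.255.0 --lowerip 192.168.56.101 --upperip 192.168.56.254 --enable   # let VirtualBox hand out guest addresses on vboxnet0
        VBoxManage modifyvm "myvm" --nic1 hostonly --hostonlyadapter1 vboxnet0    # or put the vm on the host-only network
        VBoxManage modifyvm "myvm" --nic1 nat --natpf1 "guestweb,tcp,,8080,,80"   # or NAT with host port 8080 forwarded to guest port 80
    The three --nic1 lines are alternatives, one per networking mode, so pick the one that matches the setup you want.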

    Read the article
