Search Results

Search found 7777 results on 312 pages for 'my resources'.


  • DisplayMemberPath is not working in WPF

    - by WpfBee
    I want to display CustomerList\CustomerName property items to the listBox using ItemsSource DisplayMemberPath property only. But it is not working. I do not want to use DataContext or any other binding in my problem. Please help. My code is given below: MainWindow.xaml.cs namespace BindingAnItemControlToAList { /// <summary> /// Interaction logic for MainWindow.xaml /// </summary> public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); } } public class Customer { public string Name {get;set;} public string LastName { get; set; } } public class CustomerList { public List<Customer> Customers { get; set; } public List<string> CustomerName { get; set; } public List<string> CustomerLastName { get; set; } public CustomerList() { Customers = new List<Customer>(); CustomerName = new List<string>(); CustomerLastName = new List<string>(); CustomerName.Add("Name1"); CustomerLastName.Add("LastName1"); CustomerName.Add("Name2"); CustomerLastName.Add("LastName2"); Customers.Add(new Customer() { Name = CustomerName[0], LastName = CustomerLastName[0] }); Customers.Add(new Customer() { Name = CustomerName[1], LastName = CustomerLastName[1] }); } } } **MainWindow.Xaml** <Window x:Class="BindingAnItemControlToAList.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:BindingAnItemControlToAList" Title="MainWindow" Height="350" Width="525" Loaded="Window_Loaded" > <Window.Resources> <local:CustomerList x:Key="Cust"/> </Window.Resources> <Grid Name="Grid1"> <ListBox ItemsSource="{Binding Source={StaticResource Cust}}" DisplayMemberPath="CustomerName" Height="172" HorizontalAlignment="Left" Margin="27,23,0,0" Name="lstStates" VerticalAlignment="Top" Width="245" /> </Grid> </Window>
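    One likely fix, offered here as a hedged sketch rather than anything from the original post: the ItemsSource above binds to the CustomerList resource object itself, which is not a collection, so the ListBox has nothing to enumerate. Pointing the binding at one of the collection properties should work. Expressed as equivalent C# code-behind (placed after InitializeComponent in the MainWindow constructor), it would look roughly like this:

        // Hedged sketch: bind the ListBox to a collection exposed by the resource.
        var cust = (CustomerList)FindResource("Cust");
        lstStates.ItemsSource = cust.Customers;    // a collection the ListBox can enumerate
        lstStates.DisplayMemberPath = "Name";      // display each Customer's Name
        // Or, to show the raw string list instead: lstStates.ItemsSource = cust.CustomerName;

    The same idea in XAML would be adding Path=Customers (or Path=CustomerName) to the existing binding expression.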


  • Android programming help

    - by Tuffy G
    I want to start making a program for a local charity on the Android platform. Where can I go for resources and tutorials to learn? I'm very new at this, so I would like something simple that can be followed by someone with minimal technical knowledge.


  • Dealing with HTTP w00tw00t attacks

    - by Saif Bechan
    I have a server with Apache and I recently installed mod_security2 because I get attacked a lot by this. My Apache version is v2.2.3 and I use mod_security2.c. These were the entries from the error log: [Wed Mar 24 02:35:41 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) [Wed Mar 24 02:47:31 2010] [error] [client 202.75.211.90] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) [Wed Mar 24 02:47:49 2010] [error] [client 95.228.153.177] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) [Wed Mar 24 02:48:03 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) Here are the errors from the access_log: 202.75.211.90 - - [29/Mar/2010:10:43:15 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-" 211.155.228.169 - - [29/Mar/2010:11:40:41 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-" 211.155.228.169 - - [29/Mar/2010:12:37:19 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-" I tried configuring mod_security2 like this: SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind" SecFilterSelective REQUEST_URI "\w00tw00t\.at\.ISC\.SANS" SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS" SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:" SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)" The thing in mod_security2 is that SecFilterSelective cannot be used; it gives me errors. Instead I use a rule like this: SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind" SecRule REQUEST_URI "\w00tw00t\.at\.ISC\.SANS" SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS" SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:" SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)" Even this does not work. I don't know what to do anymore. Does anyone have any advice? Update 1 I see that nobody can solve this problem using mod_security. So far, using iptables seems like the best option, but I think the file will become extremely large because the IP changes several times a day. I came up with 2 other solutions; can someone comment on whether they are good or not? The first solution that comes to my mind is excluding these attacks from my Apache error logs. This will make it easier for me to spot other urgent errors as they occur, without having to sift through a long log. The second option is better, I think, and that is blocking hosts that do not send the request in the correct way. In this example the w00tw00t attack is sent without a hostname, so I think I can block the hosts whose requests are not in the correct form. Update 2 After going through the answers I came to the following conclusions. Custom logging for Apache will consume some unnecessary resources, and if there really is a problem you will probably want to look at the full log without anything missing. It is better to just ignore the hits and concentrate on a better way of analyzing your error logs. Using filters for your logs is a good approach for this. Final thoughts on the subject The attack mentioned above will not reach your machine if you at least have an up-to-date system, so there are basically no worries. It can be hard to filter out all the bogus attacks from the real ones after a while, because both the error logs and access logs get extremely large.
Preventing this from happening in any way will cost you resources, and it is good practice not to waste your resources on unimportant stuff. The solution I use now is Linux logwatch. It sends me summaries of the logs, and they are filtered and grouped. This way you can easily separate the important from the unimportant. Thank you all for the help, and I hope this post can be helpful to someone else too.


  • broadcom 5722 NIC not installed on Ubuntu Server, although driver present

    - by Bastien
    Hello, I just installed Ubuntu Server 10.04 LTS, running kernel 2.6.32-24-server, on a brand new Dell T110 server, supposedly fully compatible with Ubuntu Server. I have two NICs: one ONBOARD, the other an additional card on PCI. Both of them are Broadcom NetXtreme BCM5722. On the first boot of the system, I could see both cards as eth0 and eth1 (with ifconfig). I configured eth0 with a static IP (as planned), and did not configure eth1. After rebooting, one of the two NICs "disappeared": it does not appear in ifconfig at all. The one that disappeared is the ONBOARD one. I investigated a bit and found the following things: the card is SEEN, but not "installed"; it appears as "UNCLAIMED" in lshw: *-network UNCLAIMED description: Ethernet controller product: NetXtreme BCM5722 Gigabit Ethernet PCI Express vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:04:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress cap_list configuration: latency=0 resources: memory:df9f0000-df9fffff *-network description: Ethernet interface product: NetXtreme BCM5722 Gigabit Ethernet PCI Express vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:05:00.0 logical name: eth0 version: 00 serial: 00:10:18:60:23:64 size: 100MB/s capacity: 1GB/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.102 duplex=full firmware=5722-v3.09 ip=10.129.167.25 latency=0 link=yes multicast=yes port=twisted pair speed=100MB/s resources: irq:35 memory:dfaf0000-dfafffff So I checked my dmesg and found a few strange lines showing there actually is a problem bringing up this card: [ 3.737506] tg3: Could not obtain valid ethernet address, aborting. [ 3.737527] tg3 0000:04:00.0: PCI INT A disabled [ 3.737535] tg3: probe of 0000:04:00.0 failed with error -22 [ 3.737553] alloc irq_desc for 17 on node -1 [ 3.737555] alloc kstat_irqs on node -1 [ 3.737560] tg3 0000:05:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [ 3.737566] tg3 0000:05:00.0: setting latency timer to 64 [ 3.793529] eth0: Tigon3 [partno(BCM95722A2202G) rev a200] (PCI Express) MAC address 00:10:18:60:23:64 [ 3.793532] eth0: attached PHY is 5722/5756 (10/100/1000Base-T Ethernet) (WireSpeed[1]) [ 3.793534] eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] TSOcap[1] [ 3.793536] eth0: dma_rwctrl[76180000] dma_mask[64-bit] That actually shows that one NIC is recognized and the other is not.
I researched a bit more, with lspci -v: 04:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express Subsystem: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express Flags: fast devsel, IRQ 16 Memory at df9f0000 (64-bit, non-prefetchable) [size=64K] Capabilities: [48] Power Management version 3 Capabilities: [50] Vital Product Data <?> Capabilities: [58] Vendor Specific Information <?> Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable- Capabilities: [d0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting <?> Capabilities: [13c] Virtual Channel <?> Capabilities: [160] Device Serial Number 00-00-00-fe-ff-00-00-00 Kernel modules: tg3 05:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express Subsystem: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express Flags: bus master, fast devsel, latency 0, IRQ 35 Memory at dfaf0000 (64-bit, non-prefetchable) [size=64K] Expansion ROM at <ignored> [disabled] Capabilities: [48] Power Management version 3 Capabilities: [50] Vital Product Data <?> Capabilities: [58] Vendor Specific Information <?> Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable+ Capabilities: [d0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting <?> Capabilities: [13c] Virtual Channel <?> Capabilities: [160] Device Serial Number 64-23-60-fe-ff-18-10-00 Capabilities: [16c] Power Budgeting <?> Kernel driver in use: tg3 Kernel modules: tg3 here I could see that the MAC address is 00-00-00-FE-FF-00-00-00, which, according to some forum posts on several websites, could be an issue. I've researched everything I could on the net, and found out several people having slightly comparable issues, but they usually involve different HW, and do not provide a proper explanation / solution... I would appreciate if anyone around here has some info to share ! thanks


  • One email user keeps disconnecting from our exchange server

    - by Funky Si
    I have one user who keeps reporting that Outlook keeps disconnecting from our email server. All other users are fine. Our email server is running Exchange 2010 and the client is running Outlook 2003. The disconnection only lasts a moment. I have checked for logs on the client and Exchange server and can not see any reason for the disconnect, On the Client I get EventId 26 telling me Outlook has disconnected and reconnected but no reason why. Can anyone give me some suggestions of things to try and track down where the problem could be? --Update-- I have found the following log file C:\Program Files\Microsoft\Exchange Server\V14\Logging\RPC Client Access which suggests that it is a problem with RPC sessions. Excerpt is below 2013-01-31T15:21:24.015Z,6413,15,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,0,00:00:00,"BS=Conn:24,HangingConn:0,AD:$null/$null/0%,CAS:$null/$null/2%,AB:$null/$null/0%,RPC:$null/$null/1%,FC:$null/0,Policy:ClientThrottlingPolicy2,Norm[Resources:(Mdb)Mailbox Database 0765959540(Health:-1%,HistLoad:0),(Mdb)Public Folder Database 1945427388(Health:-1%,HistLoad:0),];GC:6/1/0;", 2013-01-31T15:21:24.015Z,6413,16,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,PublicLogoff,0,00:00:00,LogonId: 1, 2013-01-31T15:21:24.015Z,6413,16,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,,00:00:00,"BS=Conn:24,HangingConn:0,AD:$null/$null/0%,CAS:$null/$null/2%,AB:$null/$null/0%,RPC:$null/$null/1%,FC:$null/0,Policy:ClientThrottlingPolicy2,Norm[Resources:(Mdb)Mailbox Database 0765959540(Health:-1%,HistLoad:0),(Mdb)Public Folder Database 1945427388(Health:-1%,HistLoad:0),];GC:6/1/0;", 2013-01-31T15:21:24.015Z,6417,5,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,OwnerLogoff,0,00:00:00,LogonId: 0, 2013-01-31T15:21:24.015Z,6417,5,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,0x6BA (rpc::Exception),00:02:54.7668000,Budget Highs [AD = 0][CAS = 3][RPC = 1] Session Throttled Count = 0; SessionDropped,"RpcEndPoint: [ServerUnavailableException] Connection must be re-established - [SessionDeadException] Connection doesn't have any open logons, but has client activity. This may be masking synchronization stalls. Dropping a connection." 
2013-01-31T15:21:24.015Z,6420,7,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,DelegateLogoff,0,00:00:00,LogonId: 0, 2013-01-31T15:21:24.031Z,6420,7,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,,00:00:00.0156000,Budget Highs [AD = 0][CAS = 3][RPC = 1] Session Throttled Count = 0, 2013-01-31T15:21:24.031Z,6420,7,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,Disconnect,0,00:02:54.2364000,, 2013-01-31T15:21:24.031Z,6419,5,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,OwnerLogoff,0,00:00:00,LogonId: 0, 2013-01-31T15:21:24.031Z,6419,5,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,,00:00:00,Budget Highs [AD = 0][CAS = 3][RPC = 1] Session Throttled Count = 0, 2013-01-31T15:21:24.031Z,6419,5,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,Disconnect,0,00:02:54.4392000,, 2013-01-31T15:21:24.031Z,6416,7,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,DelegateLogoff,0,00:00:00,LogonId: 0, 2013-01-31T15:21:24.031Z,6416,7,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,,00:00:00,Budget Highs [AD = 0][CAS = 3][RPC = 1] Session Throttled Count = 0, Can anyone help point me in the right direction for a solution?


  • Active Directory Time Synchronisation - Time-Service Event ID 50

    - by George
    I have an Active Directory domain with two DCs. The first DC in the forest/domain is Server 2012, the second is 2008 R2. The first DC holds the PDC Emulator role. I sporadically receive a warning from the Time-Service source, event ID 50: The time service detected a time difference of greater than %1 milliseconds for %2 seconds. The time difference might be caused by synchronization with low-accuracy time sources or by suboptimal network conditions. The time service is no longer synchronized and cannot provide the time to other clients or update the system clock. When a valid time stamp is received from a time service provider, the time service will correct itself. Time sync in the domain is configured with the second DC to synchronise using the /syncfromflags:DOMHIER flag. The first DC is configured to sync time using a /syncfromflags:MANUAL /reliable:YES, from a peerlist consisting of a number of UK based stratum 2 servers, such as ntp2d.mcc.ac.uk. I'm confused why I receive this event warning. It implies that my PDC emulator cannot synchronise time with a supposedly reliable external time source, and it quotes a time difference of 5 seconds for 900 seconds. It's worth also mentioning that I used to use a UK pool from ntp.org but I would receive the warning much more often. Since updating to a number of UK based academic time servers, it seems to be more reliable. Can someone with more experience shed some light on this - perhaps it is purely transient? Should I disregard the warning? Is my configuration sound? EDIT: I should add that the DCs are virtual, and installed on two separate VMware ESXi/vSphere physical hosts. I can also confirm that as per MDMarra's comment and best practice, VMware timesync is disabled, since: c:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe timesync status returns Disabled. EDIT 2 Some strange new issue has cropped up. I've noticed a pattern. Originally, the event ID 50 warnings would occur at about 1230pm each day. This is interesting since our veeam backup happens at 12 midday. Since I made the changes discussed here, I now receive an event ID 51 instead of 50. The new warning says that: The time sample received from peer server.ac.uk differs from the local time by -40 seconds (Or approximately 40 seconds). This has happened two days in a row. Now I'm even more confused. Obviously the time never updates until I manually intervene. The issue seems to be related to virtualisation and veeam. Something may be occuring when veeam is backing up the PDCe. Any suggestions? UPDATE & SUMMARY msemack's excellent list of resources below (the accepted answer) provided enough information to correctly configure the time service in the domain. This should be the first port of call for any future people looking to verify their configuration. The final "40 second jump" issue I have resolved (there are no more warnings) through adjusting the VMware time sync settings as noted in the veeam knowledge base article here: http://www.veeam.com/kb1202 In any case, should any future reader use ESXi, veeam or not, the resources here are an excellent source of information on the time sync topic and msemack's answer is particularly invaluable.


  • Crash in audio resampler with some audio rates - FFMPEG PHP ( Solved! )

    - by Olaf Erlandsen
    i have a problem with this command( FFMPEG PHP ): Command: ffmpeg -i 62f76f050494f0ed6a5997967c00c0c0.wmv -ss 0 -t 99 -y -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 -f flv 62f76f050494f0ed6a5997967c00c0c0.flv Output: FFmpeg version 0.6.5, Copyright (c) 2000-2010 the FFmpeg developers built on Jan 29 2012 17:52:15 with gcc 4.4.5 20110214 (Red Hat 4.4.5-6) configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --enable-avfilter --enable-avfilter-lavf --enable-libdc1394 --enable-libdirac --enable-libfaac --enable-libfaad --enable-libfaadbin --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-version3 --enable-x11grab libavutil 50.15. 1 / 50.15. 1 libavcodec 52.72. 2 / 52.72. 2 libavformat 52.64. 2 / 52.64. 2 libavdevice 52. 2. 0 / 52. 2. 0 libavfilter 1.19. 0 / 1.19. 0 libswscale 0.11. 0 / 0.11. 0 libpostproc 51. 2. 0 / 51. 2. 0 [asf @ 0xe81670]max_analyze_duration reached Input #0, asf, from '/var/www/resources/tmp/62f76f050494f0ed6a5997967c00c0c0.wmv': Metadata: WMFSDKVersion : 12.0.7601.17514 WMFSDKNeeded : 0.0.0.0000 IsVBR : 0 Duration: 00:00:50.87, bitrate: 2467 kb/s Stream #0.0: Audio: wmapro, 44100 Hz, stereo, flt, 256 kb/s Stream #0.1: Video: vc1, yuv420p, 950x460 [PAR 1:1 DAR 95:46], 25 fps, 25 tbr, 1k tbn, 25 tbc Output #0, flv, to '/var/www/resources/media/62f76f050494f0ed6a5997967c00c0c0.flv': Metadata: encoder : Lavf52.64.2 Stream #0.0: Video: flv, yuv420p, 950x460 [PAR 1:1 DAR 95:46], q=2-31, 200 kb/s, 1k tbn, 29.97 tbc Stream #0.1: Audio: libmp3lame, 11025 Hz, stereo, s16, 64 kb/s Stream mapping: Stream #0.1 -> #0.0 Stream #0.0 -> #0.1 Press [q] to stop encoding frame= 72 fps= 0 q=5.0 size= 0kB time=10.91 bitrate= 0.0kbits/s Multiple frames in a packet from stream 0 Warning, using s16 intermediate sample format for resampling frame= 141 fps=139 q=5.0 size= 103kB time=8.15 bitrate= 103.2kbits/s frame= 220 fps=144 q=5.0 size= 875kB time=10.92 bitrate= 656.6kbits/s frame= 290 fps=143 q=5.0 size= 1525kB time=13.74 bitrate= 909.1kbits/s frame= 356 fps=141 q=5.0 size= 2153kB time=15.99 bitrate=1103.1kbits/s frame= 427 fps=141 q=5.0 size= 2847kB time=18.70 bitrate=1247.0kbits/s frame= 497 fps=141 q=5.0 size= 3771kB time=21.16 bitrate=1460.0kbits/s frame= 575 fps=142 q=5.0 size= 4695kB time=24.61 bitrate=1563.0kbits/s frame= 639 fps=141 q=5.0 size= 5301kB time=26.80 bitrate=1620.2kbits/s frame= 703 fps=139 q=5.0 size= 5829kB time=29.36 bitrate=1626.2kbits/s frame= 774 fps=139 q=5.0 size= 6659kB time=32.39 bitrate=1684.0kbits/s frame= 842 fps=139 q=5.0 size= 7915kB time=35.27 bitrate=1838.6kbits/s frame= 911 fps=139 q=5.0 size= 9011kB time=37.98 bitrate=1943.4kbits/s frame= 975 fps=138 q=5.0 size= 9788kB time=40.59 bitrate=1975.3kbits/s frame= 1041 fps=138 q=5.0 size= 10904kB time=43.83 bitrate=2037.9kbits/s frame= 1115 fps=138 q=5.0 size= 11795kB time=46.24 bitrate=2089.8kbits/s frame= 1183 fps=138 q=5.0 size= 12678kB time=48.74 bitrate=2130.7kbits/s frame= 1247 fps=137 q=5.0 size= 13964kB time=51.36 bitrate=2227.5kbits/s frame= 1271 fps=136 q=5.0 Lsize= 15865kB time=58.86 bitrate=2208.1kbits/s 
video:15366kB audio:462kB global headers:0kB muxing overhead 0.238956% Problem: "Warning, using s16 intermediate sample format for resampling". I've also tried changing the parameter from -ar 44100 to -ar 11025. Thanks! Solution: Read this link: http://en.wikipedia.org/wiki/MP3#Bit_rate


  • Rewriting Apache URLs to use only paths and set response headers

    - by jabley
    I have apache httpd in front of an application running in Tomcat. The application exposes URLs of the form: /path/to/images?id={an-image-id} The entities returned by such URLs are images (even though URIs are opaque, I find human-friendly ones are easier to work with!). The application does not set caching directives on the image response, so I've added that via Apache. # LocationMatch to set caching directives on image responses <LocationMatch "^/path/to/images$"> # Can't have Set-Cookie on response, otherwise the downstream caching proxy # won't cache! Header unset Set-Cookie # Mark the response as cacheable. Header append Cache-Control "max-age=8640000" </LocationMatch> Note that I can't use ExpiresByType since not all images served by the app have versioned URIs. I know that ones served by the /path/to/images resource handler are versioned URIs though, which don't perform any sort of content negotiation, and thus are ripe for Far Future Expires management. This is working well for us. Now a requirement has come up to put something else in front of the app (in this case, Amazon CloudFront) to further distribute and cache some of the content. Amazon CloudFront will not pass query string parameters through to my origin server. I thought I would be able to work around this, by changing my apache config appropriately: # Rewrite to map new Amazon CloudFront friendly URIs to the application resources RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT] # LocationMatch to set caching directives on image responses <LocationMatch "^/path/to/images$"> # Can't have Set-Cookie on response, otherwise the downstream caching proxy # won't cache! Header unset Set-Cookie # Mark the response as cacheable. Header append Cache-Control "max-age=8640000" </LocationMatch> This works fine in terms of serving the content, but there are no longer caching directives with the response. I've tried playing around with [PT], [P] for the RewriteRule, and adding a new LocationMatch directive: # Rewrite to map new Amazon CloudFront friendly URIs to the application resources # /new/path/to/images/12345 -> /path/to/images?id=12345 RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT] # LocationMatch to set caching directives on image responses <LocationMatch "^/path/to/images$"> # Can't have Set-Cookie on response, otherwise the downstream caching proxy # won't cache! Header unset Set-Cookie # Mark the response as cacheable. Header append Cache-Control "max-age=8640000" </LocationMatch> <LocationMatch "^/new/path/to/images/"> # Can't have Set-Cookie on response, otherwise the downstream caching proxy # won't cache! Header unset Set-Cookie # Mark the response as cacheable. Header append Cache-Control "max-age=8640000" </LocationMatch> Unfortunately, I'm still unable to get the Cache-Control header added to the response with the new URL format. Please point out what I'm missing to get /new/path/to/images/12345 returning a 200 response with a Cache-Control: max-age=8640000 header. Pointers as to how to debug apache like this would be appreciated as well!


  • Silverlight 4 Tools for VS 2010 and WCF RIA Services Released

    - by ScottGu
    The final release of the Silverlight 4 Tools for Visual Studio 2010 and WCF RIA Services is now available for download.  Download and Install If you already have Visual Studio 2010 installed (or the free Visual Web Developer 2010 Express), then you can install both the Silverlight 4 Tooling Support as well as WCF RIA Services support by downloading and running this setup package (note: please make sure to uninstall the preview release of the Silverlight 4 Tools for VS 2010 if you have previously installed that).  The Silverlight 4 Tools for VS 2010 package extends the Silverlight support built into Visual Studio 2010 and enables support for Silverlight 4 applications as well.  It also installs WCF RIA Services application templates and libraries: Today’s release includes the English edition of the Silverlight 4 Tooling – localized versions will be available next month for other Visual Studio languages as well. Silverlight Tooling Support Visual Studio 2010 includes rich tooling support for building Silverlight and WPF applications. It includes a WYSIWYG designer surface that enables you to easily use controls to construct UI – including the ability to take advantage of layout containers, and apply styles and resources: The VS 2010 designer enables you to leverage the rich data binding support within Silverlight and WPF, and easily wire-up bindings on controls.  The Data Sources window within Silverlight projects can be used to reference POCO objects (plain old CLR objects), WCF Services, WCF RIA Services client proxies or SharePoint Lists.  For example, let’s assume we add a “Person” class like below to our project: We could then add it to the Data Source window which will cause it to show up like below in the IDE: We can optionally customize the default UI control types that are associated for each property on the object.  For example, below we’ll default the BirthDate property to be represented by a “DatePicker” control: And then when we drag/drop the Person type from the Data Sources onto the design-surface it will automatically create UI controls that are bound to the properties of our Person class: VS 2010 allows you to optionally customize each UI binding further by selecting a control, and then right-click on any of its properties within the property-grid and pull up the “Apply Bindings” dialog: This will bring up a floating data-binding dialog that enables you to easily configure things like the binding path on the data source object, specify a format convertor, specify string-format settings, specify how validation errors should be handled, etc: In addition to providing WYSIWYG designer support for WPF and Silverlight applications, VS 2010 also provides rich XAML intellisense and code editing support – enabling a rich source editing environment. Silverlight 4 Tool Enhancements Today’s Silverlight 4 Tooling Release for VS 2010 includes a bunch of nice new features.  These include: Support for Silverlight Out of Browser Applications and Elevated Trust Applications You can open up a Silverlight application’s project properties window and click the “Enable Running Application Out of Browser” checkbox to enable you to install an offline, out of browser, version of your Silverlight 4 application.  
You can then customize a number of “out of browser” settings of your application within Visual Studio: Notice above how you can now indicate that you want to run with elevated trust, with hardware graphics acceleration, as well as customize things like the Window style of the application (allowing you to build a nice polished window style for consumer applications). Support for Implicit Styles and “Go to Value Definition” Support: Silverlight 4 now allows you to define “implicit styles” for your applications.  This allows you to style controls by type (for example: have a default look for all buttons) and avoid you having to explicitly reference styles from each control.  In addition to honoring implicit styles on the designer-surface, VS 2010 also now allows you to right click on any control (or on one of it properties) and choose the “Go to Value Definition…” context menu to jump to the XAML where the style is defined, and from there you can easily navigate onward to any referenced resources.  This makes it much easier to figure out questions like “why is my button red?”: Style Intellisense VS 2010 enables you to easily modify styles you already have in XAML, and now you get intellisense for properties and their values within a style based on the TargetType of the specified control.  For example, below we have a style being set for controls of type “Button” (this is indicated by the “TargetType” property).  Notice how intellisense now automatically shows us properties for the Button control (even within the <Setter> element): Great Video - Watch the Silverlight Designer Features in Action You can see all of the above Silverlight 4 Tools for Visual Studio 2010 features (and some more cool ones I haven’t mentioned) demonstrated in action within this 20 minute Silverlight.TV video on Channel 9: WCF RIA Services Today we also shipped the V1 release of WCF RIA Services.  It is included and automatically installed as part of the Silverlight 4 Tools for Visual Studio 2010 setup. WCF RIA Services makes it much easier to build business applications with Silverlight.  It simplifies the traditional n-tier application pattern by bringing together the ASP.NET and Silverlight platforms using the power of WCF for communication.  WCF RIA Services provides a pattern to write application logic that runs on the mid-tier and controls access to data for queries, changes and custom operations. It also provides end-to-end support for common tasks such as data validation, authentication and authorization based on roles by integrating with Silverlight components on the client and ASP.NET on the mid-tier. Put simply – it makes it much easier to query data stored on a server from a client machine, optionally manipulate/modify the data on the client, and then save it back to the server.  It supports a validation architecture that helps ensure that your data is kept secure and business rules are applied consistently on both the client and middle-tiers. WCF RIA Services uses WCF for communication between the client and the server  It supports both an optimized .NET to .NET binary serialization format, as well as a set of open extensions to the ATOM format known as ODATA and an optional JavaScript Object Notation (JSON) format that can be used by any client. You can hear Nikhil and Dinesh talk a little about WCF RIA Services in this 13 minutes Channel 9 video. 
Putting it all Together – the Silverlight 4 Training Kit Check out the Silverlight 4 Training Kit to learn more about how to build business applications with Silverlight 4, Visual Studio 2010 and WCF RIA Services. The training kit includes 8 modules, 25 videos, and several hands-on labs that explain Silverlight 4 and WCF RIA Services concepts and walks you through building an end-to-end application with them.    The training kit is available for free and is a great way to get started. Summary I’m really excited about today’s release – as they really complete the Silverlight development story and deliver a great end to end runtime + tooling story for building applications.  All of the above features are available for use both in VS 2010 as well as the free Visual Web Developer 2010 Express Edition – making it really easy to get started building great solutions. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Android - creating a custom preferences activity screen

    - by Bill Osuch
    Android applications can maintain their own internal preferences (and allow them to be modified by users) with very little coding. In fact, you don't even need to write an code to explicitly save these preferences, it's all handled automatically! Create a new Android project, with an intial activity title Main. Create two more activities: ShowPrefs, which extends Activity Set Prefs, which extends PreferenceActivity Add these two to your AndroidManifest.xml file: <activity android:name=".SetPrefs"></activity> <activity android:name=".ShowPrefs"></activity> Now we'll work on fleshing out each activity. First, open up the main.xml layout file and add a couple of buttons to it: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"    android:orientation="vertical"    android:layout_width="fill_parent"    android:layout_height="fill_parent"> <Button android:text="Edit Preferences"    android:id="@+id/prefButton"    android:layout_width="wrap_content"    android:layout_height="wrap_content"    android:layout_gravity="center_horizontal"/> <Button android:text="Show Preferences"    android:id="@+id/showButton"    android:layout_width="wrap_content"    android:layout_height="wrap_content"    android:layout_gravity="center_horizontal"/> </LinearLayout> Next, create a couple button listeners in Main.java to handle the clicks and start the other activities: Button editPrefs = (Button) findViewById(R.id.prefButton);       editPrefs.setOnClickListener(new View.OnClickListener() {              public void onClick(View view) {                  Intent myIntent = new Intent(view.getContext(), SetPrefs.class);                  startActivityForResult(myIntent, 0);              }      });           Button showPrefs = (Button) findViewById(R.id.showButton);      showPrefs.setOnClickListener(new View.OnClickListener() {              public void onClick(View view) {                  Intent myIntent = new Intent(view.getContext(), ShowPrefs.class);                  startActivityForResult(myIntent, 0);              }      }); Now, we'll create the actual preferences layout. You'll need to create a file called preferences.xml inside res/xml, and you'll likely have to create the xml directory as well. Add the following xml: <?xml version="1.0" encoding="utf-8"?> <PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android"> </PreferenceScreen> First we'll add a category, which is just a way to group similar preferences... sort of a horizontal bar. Add this inside the PreferenceScreen tags: <PreferenceCategory android:title="First Category"> </PreferenceCategory> Now add a Checkbox and an Edittext box (inside the PreferenceCategory tags): <CheckBoxPreference    android:key="checkboxPref"    android:title="Checkbox Preference"    android:summary="This preference can be true or false"    android:defaultValue="false"/> <EditTextPreference    android:key="editTextPref"    android:title="EditText Preference"    android:summary="This allows you to enter a string"    android:defaultValue="Nothing"/> The key is how you will refer to the preference in code, the title is the large text that will be displayed, and the summary is the smaller text (this will make sense when you see it). Let's say we've got a second group of preferences that apply to a different part of the app. 
Add a new category just below the first one: <PreferenceCategory android:title="Second Category"> </PreferenceCategory> In there we'll a list with radio buttons, so add: <ListPreference    android:key="listPref"    android:title="List Preference"    android:summary="This preference lets you select an item in a array"    android:entries="@array/listArray"    android:entryValues="@array/listValues" /> When complete, your full xml file should look like this: <?xml version="1.0" encoding="utf-8"?> <PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android">  <PreferenceCategory android:title="First Category"> <CheckBoxPreference    android:key="checkboxPref"    android:title="Checkbox Preference"    android:summary="This preference can be true or false"    android:defaultValue="false"/> <EditTextPreference    android:key="editTextPref"    android:title="EditText Preference"    android:summary="This allows you to enter a string"    android:defaultValue="Nothing"/>  </PreferenceCategory>  <PreferenceCategory android:title="Second Category">   <ListPreference    android:key="listPref"    android:title="List Preference"    android:summary="This preference lets you select an item in a array"    android:entries="@array/listArray"    android:entryValues="@array/listValues" />  </PreferenceCategory> </PreferenceScreen> However, when you try to save it, you'll get an error because you're missing your array definition. To fix this, add a file called arrays.xml in res/values, and paste in the following: <?xml version="1.0" encoding="utf-8"?> <resources>  <string-array name="listArray">      <item>Value 1</item>      <item>Value 2</item>      <item>Value 3</item>  </string-array>  <string-array name="listValues">      <item>1</item>      <item>2</item>      <item>3</item>  </string-array> </resources> Finally (for the preferences screen at least...) add the code that will display the preferences layout to the SetPrefs.java file:  @Override     public void onCreate(Bundle savedInstanceState) {      super.onCreate(savedInstanceState);      addPreferencesFromResource(R.xml.preferences);      } OK, so now we've got an activity that will set preferences, and save them without the need to write custom save code. Let's throw together an activity to work with the saved preferences. 
Create a new layout called showpreferences.xml and give it three Textviews: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"     android:orientation="vertical"     android:layout_width="fill_parent"     android:layout_height="fill_parent"> <TextView   android:id="@+id/textview1"     android:layout_width="fill_parent"     android:layout_height="wrap_content"     android:text="textview1"/> <TextView   android:id="@+id/textview2"     android:layout_width="fill_parent"     android:layout_height="wrap_content"     android:text="textview2"/> <TextView   android:id="@+id/textview3"     android:layout_width="fill_parent"     android:layout_height="wrap_content"     android:text="textview3"/> </LinearLayout> Open up the ShowPrefs.java file and have it use that layout: setContentView(R.layout.showpreferences); Then add the following code to load the DefaultSharedPreferences and display them: SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(this);    TextView text1 = (TextView)findViewById(R.id.textview1); TextView text2 = (TextView)findViewById(R.id.textview2); TextView text3 = (TextView)findViewById(R.id.textview3);    text1.setText(new Boolean(prefs.getBoolean("checkboxPref", false)).toString()); text2.setText(prefs.getString("editTextPref", "<unset>"));; text3.setText(prefs.getString("listPref", "<unset>")); Fire up the application in the emulator and click the Edit Preferences button. Set various things, click the back button, then the Edit Preferences button again. Notice that your choices have been saved.   Now click the Show Preferences button, and you should see the results of what you set:   There are two more preference types that I did not include here: RingtonePreference - shows a radioGroup that lists your ringtones PreferenceScreen - allows you to embed a second preference screen inside the first - it opens up a new set of preferences when clicked


  • HttpContext.Items and Server.Transfer/Execute

    - by Rick Strahl
    A few days ago my buddy Ben Jones pointed out that he ran into a bug in the ScriptContainer control in the West Wind Web and Ajax Toolkit. The problem was basically that when a Server.Transfer call was applied the script container (and also various ClientScriptProxy script embedding routines) would potentially fail to load up the specified scripts. It turns out the problem is due to the fact that the various components in the toolkit use request specific singletons via a Current property. I use a static Current property tied to a Context.Items[] entry to handle this type of operation which looks something like this: /// <summary> /// Current instance of this class which should always be used to /// access this object. There are no public constructors to /// ensure the reference is used as a Singleton to further /// ensure that all scripts are written to the same clientscript /// manager. /// </summary> public static ClientScriptProxy Current { get { if (HttpContext.Current == null) return new ClientScriptProxy(); ClientScriptProxy proxy = null; if (HttpContext.Current.Items.Contains(STR_CONTEXTID)) proxy = HttpContext.Current.Items[STR_CONTEXTID] as ClientScriptProxy; else { proxy = new ClientScriptProxy(); HttpContext.Current.Items[STR_CONTEXTID] = proxy; } return proxy; } } The proxy is attached to a Context.Items[] item which makes the instance Request specific. This works perfectly fine in most situations EXCEPT when you’re dealing with Server.Transfer/Execute requests. Server.Transfer doesn’t cause Context.Items to be cleared so both the current transferred request and the original request’s Context.Items collection apply. For the ClientScriptProxy this causes a problem because script references are tracked on a per request basis in Context.Items to check for script duplication. Once a script is rendered an ID is written into the Context collection and so considered ‘rendered’: // No dupes - ref script include only once if (HttpContext.Current.Items.Contains( STR_SCRIPTITEM_IDENTITIFIER + fileId ) ) return; HttpContext.Current.Items.Add(STR_SCRIPTITEM_IDENTITIFIER + fileId, string.Empty); where the fileId is the script name or unique identifier. The problem is on the Transferred page the item will already exist in Context and so fail to render because it thinks the script has already rendered based on the Context item. Bummer. The workaround for this is simple once you know what’s going on, but in this case it was a bitch to track down because the context items are used in many places throughout this class. The trick is to determine when a request is transferred and then removing the specific keys. The first issue is to determine if a script is in a Trransfer or Execute call: if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler) Context.Handler is the original handler and CurrentHandler is the actual currently executing handler that is running when a Transfer/Execute is active. You can also use Context.PreviousHandler to get the last handler and chain through the whole list of handlers applied if Transfer calls are nested (dog help us all for the person debugging that). For the ClientScriptProxy the full logic to check for a transfer and remove the code looks like this: /// <summary> /// Clears all the request specific context items which are script references /// and the script placement index. 
/// </summary> public void ClearContextItemsOnTransfer() { if (HttpContext.Current != null) { // Check for Server.Transfer/Execute calls - we need to clear out Context.Items if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler) { List<string> Keys = HttpContext.Current.Items.Keys.Cast<string>().Where(s => s.StartsWith(STR_SCRIPTITEM_IDENTITIFIER) || s == STR_ScriptResourceIndex).ToList(); foreach (string key in Keys) { HttpContext.Current.Items.Remove(key); } } } } along with a small update to the Current property getter that sets a global flag to indicate whether the request was transferred: if (!proxy.IsTransferred && HttpContext.Current.Handler != HttpContext.Current.CurrentHandler) { proxy.ClearContextItemsOnTransfer(); proxy.IsTransferred = true; } return proxy; I know this is pretty ugly, but it works and it’s actually minimal fuss without affecting the behavior of the rest of the class. Ben had a different solution that involved explicitly clearing out the Context items and replacing the collection with a manually maintained list of items which also works, but required changes through the code to make this work. In hindsight, it would have been better to use a single object that encapsulates all the ‘persisted’ values and store that object in Context instead of all these individual small morsels. Hindsight is always 20/20 though :-}. If possible use Page.Items ClientScriptProxy is a generic component that can be used from anywhere in ASP.NET, so there are various methods that are not Page specific on this component which is why I used Context.Items, rather than the Page.Items collection.Page.Items would be a better choice since it will sidestep the above Server.Transfer nightmares as the Page is reloaded completely and so any new Page gets a new Items collection. No fuss there. So for the ScriptContainer control, which has to live on the page the behavior is a little different. It is attached to Page.Items (since it’s a control): /// <summary> /// Returns a current instance of this control if an instance /// is already loaded on the page. Otherwise a new instance is /// created, added to the Form and returned. /// /// It's important this function is not called too early in the /// page cycle - it should not be called before Page.OnInit(). /// /// This property is the preferred way to get a reference to a /// ScriptContainer control that is either already on a page /// or needs to be created. Controls in particular should always /// use this property. /// </summary> public static ScriptContainer Current { get { // We need a context for this to work! if (HttpContext.Current == null) return null; Page page = HttpContext.Current.CurrentHandler as Page; if (page == null) throw new InvalidOperationException(Resources.ERROR_ScriptContainer_OnlyWorks_With_PageBasedHandlers); ScriptContainer ctl = null; // Retrieve the current instance ctl = page.Items[STR_CONTEXTID] as ScriptContainer; if (ctl != null) return ctl; ctl = new ScriptContainer(); page.Form.Controls.Add(ctl); return ctl; } } The biggest issue with this approach is that you have to explicitly retrieve the page in the static Current property. Notice again the use of CurrentHandler (rather than Handler which was my original implementation) to ensure you get the latest page including the one that Server.Transfer fired. Server.Transfer and Server.Execute are Evil All that said – this fix is probably for the 2 people who are crazy enough to rely on Server.Transfer/Execute. 
:-} There are so many weird behavior problems with these commands that I avoid them at all costs. I don’t think I have a single application that uses either of these commands… Related Resources Full source of ClientScriptProxy.cs (repository) Part of the West Wind Web Toolkit Static Singletons for ASP.NET Controls Post © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  
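To make the lifetime difference concrete, here is a minimal, hypothetical sketch (not taken from the article) contrasting the two collections across a Server.Transfer call; the page name and key are made up:

    // Hypothetical WebForms code-behind on the source page.
    // HttpContext.Items survives the transfer because the same HttpContext keeps executing;
    // Page.Items does not, because the target page is a brand new Page instance.
    protected void Page_Load(object sender, EventArgs e)
    {
        HttpContext.Current.Items["marker"] = "set before transfer"; // still present on Target.aspx
        Page.Items["marker"] = "set before transfer";                // gone on Target.aspx
        Server.Transfer("Target.aspx");
    }

This is exactly why the Context.Items-based singleton needs the explicit cleanup shown above, while the Page.Items-based ScriptContainer sidesteps the problem.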


  • The Windows Browser Ballot Screen Offers Web Browser Choice to European Users

    - by Matthew Guay
    Since March, our friends across the pond in Europe get to decide which browser they want to install with their Windows OS. Today we thought we would take a look at the ballot choices, some are well known, and others you may not have heard of. Windows users in European countries should start seeing the so called “Browser Ballot Screen” after installing the Windows Update KB976002 (link below). The browser ballot offers a dozen different browsers, including some you’ve likely never heard of.  They each have some unique features, and are all free, and here we take a quick look at each of them. Internet Explorer 8 Internet Explorer is the world’s most used web browser, as it’s bundled with Windows. It also includes several unique features, including Accelerators that make it easy to search or find a map of a location, and InPrivate filtering to directly control what sites can get personal information.  Additionally, it offers great integration with Windows Touch and the new taskbar in Windows 7. IE 8 runs on Windows XP and newer, and is bundled with Windows 7. Mozilla Firefox 3.6 Firefox is the most popular browser other than Internet Explorer.  It is the modern descendant of Netscape, and is loved by web developers for its adherence to web standards, openness, and expandability.  It offers thousands of Add-ons and themes to let you customize it to fit your preferences. The most recent version has added Personas, which are quick, lightweight themes to let you personalize the look your browser. It’s open source, and runs on all modern versions of Windows, Mac OS X, and Linux. Of course thanks to Asian Angel, our resident browser expert, you can check out several articles regarding this popular IE alternative. Google Chrome 4 Google Chrome has gained an impressive amount of market share during its short time in the market. It offers a minimalistic interface and fast speeds with intensive web applications. The address bar is also a search bar, so you can enter a search query or web address and quickly get the information you need. With version 4 you can add a growing number of extensions, personalize it with a variety of stylish themes, and automatically translate foreign websites into your own language. Opera 10.50 Although Opera has been around for over a decade, relatively few users have used it. With the new 10.50 release, Opera has many unique features packed in a sleek UI. It integrates great with Aero and the Windows 7 taskbar, and lets you preview the contents of your websites in the tab bar. It also includes Opera Unite, a small personal web server to make file sharing easy, Opera Turbo to speed up your internet when the connection is slow, and Opera Link to keep all your copies of Opera in sync. It’s a popular browser on many mobile devices, and version 10.50 has a lot of enhancements. Apple Safari 4 Safari is the default browser in Mac OS X, and starting with version 3 it has been available for Windows as well. It’s based on Webkit, the popular new rendering engine that provides great speed and standards compatibility.  Safari 4 lets you browse your browsing history in a unique Coverflow interface, and shows your Top Sites in a fancy, 3D interface.  It’s also great for viewing mobile websites for the iPhone and other mobile devices through Developer Tools. Flock 2.5 Based on the popular Firefox core, Flock brings a multitude of social features to your browsing experience. 
You can view the latest YouTube videos, Flickr pictures, update your favorite social network, and keep up with your webmail thanks to It’s integration with a wide variety of services. You can even post to your blog through the integrated blog editor. If your time online is mostly spent in social services, this may be a browser you want to check out. Maxthon 2.5 Maxthon is a unique browser that builds on Internet Explorer to bring more features with IE’s rendering. Formerly known as MyIE2, Maxthon was popular for bringing tabbed browsing with IE rendering during the days of IE 6.  Today Maxthon supports a wide range of plugins and skins, so you can customize it however you want. It includes mouse gestures, a web accelerator to speed up pokey internet connections, a content blocker to remove unwanted content from sites, an online account to backup your favorites, and a nice download manager. Avant Browser Another nice browser based on Internet Explorer, Avant brings a wide variety of features in a nice brushed-metal interface. It includes an integrated AutoFill for forms, mouse gestures, customizable skins, and privacy protection features. It also includes a Flash blocker that will only load flash in webpages when you select them. You can also integrate Avant with an online account to store your bookmarks, feeds, settings and passwords online. Sleipnir Sleipnir is a customizable browser meant for advance users that is quite popular in Japan. It’s built on the Trident engine and virtually every aspect of is customizable unlike Internet Explorer.   FlashPeak SlimBrowser SlimBrowser from FlashPeak incorporates a lot of features like Popup Killer, Auto Login, site filtering and more. It’s based on Internet Explorer but offers a lot more customizable options out of the box.   K-meleon This basic browser is light on system resources and based on the Gecko engine. It’s been in development for years on SourceForge, and if you like to tweak virtually any aspect of your browser, this might be a good choice for you.   GreenBrowser GreenBrowser is based on Internet Explorer and is available in several languages. It has a large amount of features out of the box and is light on system resources.   Conclusion The European Union asked for more choices in the web browser they could choose from when installing Windows, and with the Browser Ballot Screen, they certainly get a variety to choose from.  If you’ve tried out some of the lesser known browsers, or think some important ones have been left out, leave a comment and tell us about it. 
Learn More About the Browser Ballot Screen and Download Alternatives to IE: Windows Update KB976002


  • Michael Crump&rsquo;s notes for 70-563 PRO &ndash; Designing and Developing Windows Applications usi

    - by mbcrump
TIME TO GO PRO! These are my notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5. I created them using several resources (various certification web sites, MSDN, and the official MS 70-548 book). The reasons I created this review are: a) I am taking the exam. b) MS did not create a book for this exam, so use the (MS 70-548) book. c) To make sure I am familiar with each topic before the exam. I hope it provides a good start for your own notes, and that someone finds it useful. At the very least, it will give you a starting point for what to expect to know on the PRO exam. Also, for those wondering, the PRO exam contains very little code. It is basically all theory.
1. Validation controls – how to prevent users from entering invalid data on forms (MaskedTextBox control and RegEx).
2. ServiceController – used to start and control the behavior of existing services.
3. User feedback (know the WinForms StatusBar, ToolTips, Color, ErrorProvider, context-sensitive help, and accessibility).
4. Specific (derived) exceptions must be handled before general (base class) exceptions. By moving the exception handling for the base type Exception to after the exception handling of ArgumentNullException, all ArgumentNullExceptions thrown by the Helper method will be caught and logged correctly (see the sketch after this list).
5. A heartbeat method is a method exposed by a web service that allows external applications to check on the status of the service.
6. New users must master key tasks quickly. Giving these tasks context and appropriate detail will help. However, advanced users will demand quicker paths. Shortcuts, accelerators, or toolbar buttons will speed things along for the advanced user.
7. MSBuild uses project files to instruct the build engine what to build and how to build it. MSBuild project files are XML files that adhere to the MSBuild XML schema. The MSBuild project files contain complete file, build action, and dependency information for each individual project.
8. Evaluating whether or not to fix a bug involves a triage process. You must identify the bug's impact, set the priority, categorize it, and assign a developer. Many times the person doing the triage work will assign the bug to a developer for further investigation. In fact, the workflow for the bug work item inside of Team System supports this step. Developers are often asked to assess the impact of a given bug. This assessment helps the person doing the triage make a decision on how to proceed. When assessing the impact of a bug, you should consider the time and resources needed to fix it, the risk of the fix, and the impacts of the bug.
9. In large projects it is generally impossible and unfeasible to fix all bugs because of the impact on schedule and budget.
10. Code reviews should be conducted by a technical lead or a technical peer.
11. Testing applications.
12. WCF services – application state.
13. SQL Server 2005 / 2008 Express Edition – reliable storage of data / SQL Server Compact 3.5 database – used for client computers to retrieve and save data from a shared location.
14. SQL Server 2008 Compact Edition – used for the minimum possible memory footprint and can synchronize data with a corporate SQL Server 2008 database. Supports offline users and minimal dependency on external components.
15. MDI and SDI forms (specifically IsMdiContainer).
16. GUID – in the case of data warehousing, it is important to define unique keys.
17. Encrypting / securing data.
18. Understanding of isolated storage / the proper location to store items.
19. LINQ to SQL.
20. Multithreaded access.
21. ADO.NET Entity Framework model.
22. Marshal.ReleaseComObject.
23. Common user interface layout (ComboBox, ListBox, ListView, MaskedTextBox, TextBox, RichTextBox, SplitContainer, TableLayoutPanel, TabControl).
24. DataSet class - http://msdn.microsoft.com/en-us/library/system.data.dataset%28VS.71%29.aspx
25. SQL Server 2008 Reporting Services (SSRS).
26. SystemIcons.Shield (Vista UAC).
27. Leveraging stored procedures to perform data manipulation for a database schema that can change.
28. DataContext.
29. Microsoft Windows Installer packages, ClickOnce (bootstrapping features), XCopy.
30. Client Application Services – will authenticate users by using the same data source as an ASP.NET web application.
31. SQL Server 2008 caching.
32. StringBuilder.
33. Accessibility guidelines for Windows applications - http://msdn.microsoft.com/en-us/library/ms228004.aspx
34. Logging errors.
35. Testing performance-related issues.
36. Role-based security, GenericIdentity and GenericPrincipal.
37. System.Net.CookieContainer will store session data for web apps (see isolated storage for WinForms).
38. The .NET CLR Profiler tool will identify objects that cause performance issues.
39. ADO.NET synchronization (SyncGroup).
40. Globalization - CultureInfo.
41. IDisposable interface - expect several questions relating to this.
42. Adding timestamps to determine whether data has changed or not.
43. Converting applications to .NET Framework 3.5.
44. MicrosoftReportViewer.
45. Composite controls.
46. Windows Vista Known Folders.
47. Microsoft Sync Framework.
48. TypeConverter - provides a unified way of converting types of values to other types, as well as for accessing standard values and subproperties. http://msdn.microsoft.com/en-us/library/system.componentmodel.typeconverter.aspx
49. Concurrency control mechanisms. The main categories of concurrency control mechanisms are: Optimistic - delay the checking of whether a transaction meets the isolation rules (e.g., serializability and recoverability) until its end, without blocking any of its (read, write) operations, and then abort the transaction if the desired rules are violated. Pessimistic - block operations of a transaction if they may cause violation of the rules. Semi-optimistic - block operations in some situations and do not block in others, while delaying rules checking to the transaction's end, as done with optimistic.
50. AutoResetEvent.
51. Microsoft Message Queuing (MSMQ) 4.0.
52. Bulk imports.
53. KeyDown event of controls.
54. WPF UI components.
55. UI process layer.
56. GAC (installing, removing and queuing).
57. Use a local database cache to reduce the network bandwidth used by applications.
58. Sound can easily be annoying and distracting to users, so use it judiciously. Always give users the option to turn sound off. Because a user might have sound off, never convey important information through sound alone.
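For item 4 above, here is a minimal sketch (not from the exam materials; the Helper method and messages are hypothetical) showing why the derived exception handler must come before the base one:

using System;

class Program
{
    static void Main()
    {
        try
        {
            Helper(null);   // hypothetical method that throws ArgumentNullException
        }
        catch (ArgumentNullException ex)
        {
            // Derived exception first: null-argument errors are caught and logged here.
            Console.WriteLine("Argument error: " + ex.Message);
        }
        catch (Exception ex)
        {
            // Base exception last: everything else falls through to here.
            // If the Exception block were listed first, a later ArgumentNullException
            // handler would be unreachable and the compiler would reject it (error CS0160).
            Console.WriteLine("General error: " + ex.Message);
        }
    }

    static void Helper(string value)
    {
        if (value == null)
            throw new ArgumentNullException("value");
    }
}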

    Read the article

  • Adding Volcanos and Options - Earthquake Locator, part 2

    - by Bobby Diaz
    Since volcanos are often associated with earthquakes, and vice versa, I decided to show recent volcanic activity on the Earthquake Locator map.  I am pulling the data from a website created for a joint project between the Smithsonian's Global Volcanism Program and the US Geological Survey's Volcano Hazards Program, found here.  They provide a Weekly Volcanic Activity Report as an RSS feed.   I started implementing this new functionality by creating a new Volcano entity in the domain model and adding the following to the EarthquakeService class (I also factored out the common reading/parsing helper methods to a separate FeedReader class that can be used by multiple domain service classes):           private static readonly string VolcanoFeedUrl =             ConfigurationManager.AppSettings["VolcanoFeedUrl"];           /// <summary>         /// Gets the volcano data for the previous week.         /// </summary>         /// <returns>A queryable collection of <see cref="Volcano"/> objects.</returns>         public IQueryable<Volcano> GetVolcanos()         {             var feed = FeedReader.Load(VolcanoFeedUrl);             var list = new List<Volcano>();               if ( feed != null )             {                 foreach ( var item in feed.Items )                 {                     var quake = CreateVolcano(item);                     if ( quake != null )                     {                         list.Add(quake);                     }                 }             }               return list.AsQueryable();         }           /// <summary>         /// Creates a <see cref="Volcano"/> object for each item in the RSS feed.         /// </summary>         /// <param name="item">The RSS item.</param>         /// <returns></returns>         private Volcano CreateVolcano(SyndicationItem item)         {             Volcano volcano = null;             string title = item.Title.Text;             string desc = item.Summary.Text;             double? latitude = null;             double? longitude = null;               FeedReader.GetGeoRssPoint(item, out latitude, out longitude);               if ( !String.IsNullOrEmpty(title) )             {                 title = title.Substring(0, title.IndexOf('-'));             }             if ( !String.IsNullOrEmpty(desc) )             {                 desc = String.Join("\n\n", desc                         .Replace("<p>", "")                         .Split(                             new string[] { "</p>" },                             StringSplitOptions.RemoveEmptyEntries)                         .Select(s => s.Trim())                         .ToArray())                         .Trim();             }               if ( latitude != null && longitude != null )             {                 volcano = new Volcano()                 {                     Id = item.Id,                     Title = title,                     Description = desc,                     Url = item.Links.Select(l => l.Uri.OriginalString).FirstOrDefault(),                     Latitude = latitude.GetValueOrDefault(),                     Longitude = longitude.GetValueOrDefault()                 };             }               return volcano;         } I then added the corresponding LoadVolcanos() method and Volcanos collection to the EarthquakeViewModel class in much the same way I did with the Earthquakes in my previous article in this series. 
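The FeedReader helper class mentioned above is not shown in this excerpt. The following is only a minimal sketch of what such a class might look like; the GeoRSS namespace handling and the error behavior are my assumptions, not necessarily the author's implementation:

using System;
using System.Linq;
using System.ServiceModel.Syndication;
using System.Xml;

public static class FeedReader
{
    // Loads an RSS/Atom feed from the given URL; returns null if the feed cannot be read.
    public static SyndicationFeed Load(string url)
    {
        try
        {
            using (var reader = XmlReader.Create(url))
            {
                return SyndicationFeed.Load(reader);
            }
        }
        catch (Exception)
        {
            return null;    // swallow feed errors so the domain service simply returns an empty list
        }
    }

    // Reads a georss:point element ("lat lon") from the item, if one is present.
    public static void GetGeoRssPoint(SyndicationItem item, out double? latitude, out double? longitude)
    {
        latitude = null;
        longitude = null;

        var point = item.ElementExtensions
            .ReadElementExtensions<string>("point", "http://www.georss.org/georss")
            .FirstOrDefault();

        if ( !String.IsNullOrEmpty(point) )
        {
            var parts = point.Split(new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries);
            double lat, lon;
            if ( parts.Length == 2 &&
                 Double.TryParse(parts[0], out lat) &&
                 Double.TryParse(parts[1], out lon) )
            {
                latitude = lat;
                longitude = lon;
            }
        }
    }
}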
Now that I am starting to add more information to the map, I wanted to give the user some options as to what is displayed and allowing them to choose what gets turned off.  I have updated the MainPage.xaml to look like this:   <UserControl x:Class="EarthquakeLocator.MainPage"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:basic="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls"     xmlns:bing="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"     xmlns:vm="clr-namespace:EarthquakeLocator.ViewModel"     mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480" >     <UserControl.Resources>         <DataTemplate x:Key="EarthquakeTemplate">             <Ellipse Fill="Red" Stroke="Black" StrokeThickness="1"                      Width="{Binding Size}" Height="{Binding Size}"                      bing:MapLayer.Position="{Binding Location}"                      bing:MapLayer.PositionOrigin="Center">                 <ToolTipService.ToolTip>                     <StackPanel>                         <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />                         <TextBlock Text="{Binding UtcTime}" />                         <TextBlock Text="{Binding LocalTime}" />                         <TextBlock Text="{Binding DepthDesc}" />                     </StackPanel>                 </ToolTipService.ToolTip>             </Ellipse>         </DataTemplate>           <DataTemplate x:Key="VolcanoTemplate">             <Polygon Fill="Gold" Stroke="Black" StrokeThickness="1" Points="0,10 5,0 10,10"                      bing:MapLayer.Position="{Binding Location}"                      bing:MapLayer.PositionOrigin="Center"                      MouseLeftButtonUp="Volcano_MouseLeftButtonUp">                 <ToolTipService.ToolTip>                     <StackPanel>                         <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />                         <TextBlock Text="Click icon for more information..." 
/>                     </StackPanel>                 </ToolTipService.ToolTip>             </Polygon>         </DataTemplate>     </UserControl.Resources>       <UserControl.DataContext>         <vm:EarthquakeViewModel AutoLoadData="True" />     </UserControl.DataContext>       <Grid x:Name="LayoutRoot">           <bing:Map x:Name="map" CredentialsProvider="--Your-Bing-Maps-Key--"                   Center="{Binding MapCenter, Mode=TwoWay}"                   ZoomLevel="{Binding ZoomLevel, Mode=TwoWay}">               <bing:MapItemsControl ItemsSource="{Binding Earthquakes}"                                   ItemTemplate="{StaticResource EarthquakeTemplate}" />               <bing:MapItemsControl ItemsSource="{Binding Volcanos}"                                   ItemTemplate="{StaticResource VolcanoTemplate}" />         </bing:Map>           <basic:TabControl x:Name="tabs" VerticalAlignment="Bottom" MaxHeight="25" Opacity="0.7">             <basic:TabItem Margin="90,0,-90,0" MouseLeftButtonUp="TabItem_MouseLeftButtonUp">                 <basic:TabItem.Header>                     <TextBlock x:Name="txtHeader" Text="Options"                                FontSize="13" FontWeight="Bold" />                 </basic:TabItem.Header>                   <StackPanel Orientation="Horizontal">                     <TextBlock Text="Earthquakes:" FontWeight="Bold" Margin="3" />                     <StackPanel Margin="3">                         <CheckBox Content=" &lt; 4.0"                                   IsChecked="{Binding ShowLt4, Mode=TwoWay}" />                         <CheckBox Content="4.0 - 4.9"                                   IsChecked="{Binding Show4s, Mode=TwoWay}" />                         <CheckBox Content="5.0 - 5.9"                                   IsChecked="{Binding Show5s, Mode=TwoWay}" />                     </StackPanel>                       <StackPanel Margin="10,3,3,3">                         <CheckBox Content="6.0 - 6.9"                                   IsChecked="{Binding Show6s, Mode=TwoWay}" />                         <CheckBox Content="7.0 - 7.9"                                   IsChecked="{Binding Show7s, Mode=TwoWay}" />                         <CheckBox Content="8.0 +"                                   IsChecked="{Binding ShowGe8, Mode=TwoWay}" />                     </StackPanel>                       <TextBlock Text="Other:" FontWeight="Bold" Margin="50,3,3,3" />                     <StackPanel Margin="3">                         <CheckBox Content="Volcanos"                                   IsChecked="{Binding ShowVolcanos, Mode=TwoWay}" />                     </StackPanel>                 </StackPanel>               </basic:TabItem>         </basic:TabControl>       </Grid> </UserControl> Notice that I added a VolcanoTemplate that uses a triangle-shaped Polygon to represent the Volcano locations, and I also added a second <bing:MapItemsControl /> tag to the map to bind to the Volcanos collection.  The TabControl found below the map houses the options panel that will present the user with several checkboxes so they can filter the different points based on type and other properties (i.e. Magnitude).  Initially, the TabItem is collapsed to reduce it's footprint, but the screen shot below shows the options panel expanded to reveal the available settings:     I have updated the Source Code and Live Demo to include these new features.   Happy Mapping!
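One footnote on the options panel: the boolean properties the checkboxes bind to (ShowVolcanos, ShowLt4, and so on) live on the view model but are not shown in the post. A minimal sketch of how the Volcanos filtering might be wired up, assuming the view model implements INotifyPropertyChanged (member names beyond those visible in the XAML are my guesses, not the author's code), is:

using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Linq;

public class EarthquakeViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private readonly ObservableCollection<Volcano> allVolcanos = new ObservableCollection<Volcano>();
    private bool showVolcanos = true;

    // Bound to the "Volcanos" checkbox in the options panel.
    public bool ShowVolcanos
    {
        get { return showVolcanos; }
        set
        {
            if ( showVolcanos != value )
            {
                showVolcanos = value;
                OnPropertyChanged("ShowVolcanos");
                OnPropertyChanged("Volcanos");   // prompts the map to re-read the filtered list
            }
        }
    }

    // The second MapItemsControl binds to this; it yields nothing while the option is unchecked.
    public IEnumerable<Volcano> Volcanos
    {
        get { return showVolcanos ? allVolcanos.AsEnumerable() : Enumerable.Empty<Volcano>(); }
    }

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if ( handler != null ) handler(this, new PropertyChangedEventArgs(name));
    }
}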

    Read the article

  • Cloud Based Load Testing Using TF Service & VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is ‘Cloud Based Load Testing’. In this blog post I’ll walk you through, What is Cloud Based Load Testing? How have I been using this feature? – Success story! Where can you find more resources on this feature? What is Cloud Based Load Testing? It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress but also gives you the ability to test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry I have realized that the biggest barriers in Load Testing & Performance Testing adoption are, High infrastructure and administration cost that comes with this phase of testing Time taken to procure & set up the test infrastructure Finding use for this infrastructure investment after completion of testing Is cloud the answer? 100% Visual Studio Compatible Scalable and Realistic Start testing in < 2 minutes Intuitive Pay only for what you need Use existing on premise tests on cloud There are a lot of vendors out there offering Cloud Based Load Testing, to name a few, Load Storm Soasta Blaze Meter Blitz And others… The question you may want to ask is, why should you go with Microsoft’s Cloud based Load Test offering. If you are a Microsoft shop or already have investments in Microsoft technologies, you’ll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing Web tests authored in Visual Studio 2010 or Visual Studio 2012 will run on the cloud without requiring any modifications what so ever. Microsoft’s cloud test rig also supports API based testing, for example, if you are building a WPF application which consumes WCF services, you can write unit tests to invoke the WCF service, these tests can be run on the cloud test rig and loaded with ‘N’ concurrent users for performance testing. If you have your assets already hosted in the Azure and possibly in the same data centre as the Cloud test rig, your Azure app will not incur a usage cost because of the generated traffic since the traffic is coming from the same data centre. The licensing or pricing information on Microsoft’s cloud based Load test service is yet to be announced, but I would expect this to be priced attractively to match the market competition.   The only additional configuration required for running load tests on Microsoft Cloud based Load Tests service is to select the Test run location as Run tests using Visual Studio Team Foundation Service, How have I been using Microsoft’s Cloud based Load Test Service? I have been part of the Microsoft Cloud Based Load Test Service advisory council for the last 7 months. This gave the opportunity to see the product shape up from concept to working solution. I was also the first person outside of Microsoft to try this offering out. This gave me the opportunity to test real world application at various clients using the Microsoft Load Test Service and provide real world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test Service has been an insurance quote generation engine. 
This insurance quote generation engine is:
hosted in Windows Azure
expected to get quote requests from across the globe
expected to handle 5 million quote requests in a day (it is not clear how this load will be distributed across the day)
There was no way I could simulate that kind of load from on premise without standing up additional hardware. But Microsoft’s Cloud based Load Test service allowed me to test my key performance testing scenarios, i.e. simulating expected load, endurance testing, threshold testing and testing for latency.
Simulating expected load: approach to devising a load pattern
My approach to devising a load test pattern has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. So, for example, to invoke 1 quote from the quote engine takes 0.5 seconds. Now if you do the math:
  1 quote request by 1 user = 0.5 seconds
  quotes generated by 1 user in 24 hours = 1 * (((2 * 60) * 60) * 24) = 172,800
  quotes generated by 30 users in 24 hours = 172,800 * 30 = 5,184,000
This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, etc., then you can devise your own load pattern. Some examples of load test patterns can be found here.
Endurance Testing
To test for endurance, I loaded the quote generation engine with an expected fixed user load and ran the test for a very long duration, such as over 48 hours, and observed the effect of the long-running test on the Azure infrastructure. Currently the Microsoft Load Test service does not support metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test.
Threshold Testing
To figure out how much user load the application could cope with before falling on its belly, I opted to step-load the quote generation engine by incrementing user load, with different variations of incremental user load per minute, till the application crashed out and forced an IIS reset.
Testing for Latency
Currently the Microsoft Load Test service does not support generating geographically distributed load. I did, however, deploy the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (this feature is provided out of the box right from Visual Studio 2010), I could see the difference in response times.
More resources on Microsoft Cloud based Load Test Service
A few important links to get you started:
Download Visual Studio Ultimate 2013 Preview
Getting started guide for load testing using Team Foundation Service
Troubleshooting guide for FAQs and known issues
Team Foundation Service forum for questions and support
Detailed demo and presentation (link to Tech-Ed session recording)
Detailed demo and presentation (link to Build session recording)
There are a few limits on the usage of the Microsoft Cloud based Load Test service that you can read about here. If you have any feedback on the Microsoft Cloud based Load Test service, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
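As a footnote to the load-pattern arithmetic in the post above, here is a small sketch (not part of the original post) that works out how many concurrent virtual users a constant load pattern needs for a given daily volume and single-user response time:

using System;

class LoadPatternCalculator
{
    static void Main()
    {
        double secondsPerRequest = 0.5;          // measured with a single-user run
        long targetRequestsPerDay = 5000000;     // 5 million quotes per day

        // One user issuing requests back-to-back for 24 hours.
        double requestsPerUserPerDay = (24 * 60 * 60) / secondsPerRequest;   // 172,800

        // Users needed to hit the daily target with a constant load pattern.
        double usersNeeded = targetRequestsPerDay / requestsPerUserPerDay;   // ~28.9, so at least 29; the post rounds up to 30

        Console.WriteLine("Requests per user per day: {0:N0}", requestsPerUserPerDay);
        Console.WriteLine("Concurrent users needed:   {0}", Math.Ceiling(usersNeeded));
    }
}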

    Read the article

  • Creating a multi-column rollover image gallery with HTML 5

    - by nikolaosk
I know it has been a while since I blogged about HTML 5. I have two posts in this blog about HTML 5. You can find them here and here. I am creating a small content website (only text, images and a contact form) for a friend of mine. He wanted to create a rollover gallery. The whole concept is that we have some small thumbnails on a page; the user hovers over them and they appear enlarged in a designated container/placeholder on the page. I try not to use JavaScript for simple effects on a web page, and that is what I will be doing in this post. Well, some people will say that HTML 5 is not supported in all browsers. That is true, but most of the modern browsers support most of its recommendations. For people who still use IE6 some hacks must be devised. Well, to be totally honest, I cannot understand why anyone at this day and age is still using IE 6.0. That really is beyond me. The point of having a web browser is to be able to ENJOY the great experience that the web offers today. Two very nice sites that show you what features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specifications. Another excellent way to find out if the browser supports HTML 5 and CSS 3 features is to use the lightweight JavaScript library Modernizr. In this hands-on example I will be using Expression Web 4.0. This application is not a free application. You can use any HTML editor you like; for example, you can use Visual Studio 2012 Express edition, which you can download here. In order to be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that. Navigate to the excellent interactive tutorials of W3Schools. Another excellent resource is HTML5 Doctor. For the people who are not yet convinced that they should invest time and resources in becoming experts on HTML 5, I should point out that HTML 5 websites will be ranked higher than others: search engines will be able to better locate the content of our site and its relevance/importance since it is using semantic tags. Let's move now to the actual hands-on example. In this case (since I am a mad Liverpool supporter) I will create a rollover image gallery of Liverpool F.C. legends. I create a folder on my desktop. 
I name it Liverpool Gallery.Then I create two subfolders in it, large-images (I place the large images in there) and thumbs (I place the small images in there).Then I create an empty .html file called LiverpoolLegends.html and an empty .css file called style.css.Please have a look at the HTML Markup that I typed in my fancy editor package below<!doctype html><html lang="en"><head><title>Liverpool Legends Gallery</title><meta charset="utf-8"><link rel="stylesheet" type="text/css" href="style.css"></head><body><header><h1>A page dedicated to Liverpool Legends</h1><h2>Do hover over the images with the mouse to see the full picture</h2></header><ul id="column1"><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/john-barnes.jpg" alt=""><img class="large" src="large-images/john-barnes-large.jpg" alt=""></a></li><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/ian-rush.jpg" alt=""><img class="large" src="large-images/ian-rush-large.jpg" alt=""></a></li><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/graeme-souness.jpg" alt=""><img class="large" src="large-images/graeme-souness-large.jpg" alt=""></a></li></ul><ul id="column2"><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/steven-gerrard.jpg" alt=""><img class="large" src="large-images/steven-gerrard-large.jpg" alt=""></a></li><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/kenny-dalglish.jpg" alt=""><img class="large" src="large-images/kenny-dalglish-large.jpg" alt=""></a></li><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/robbie-fowler.jpg" alt=""><img class="large" src="large-images/robbie-fowler-large.jpg" alt=""></a></li></ul><ul id="column3"><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/alan-hansen.jpg" alt=""><img class="large" src="large-images/alan-hansen-large.jpg" alt=""></a></li><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/michael-owen.jpg" alt=""><img class="large" src="large-images/michael-owen-large.jpg" alt=""></a></li></ul></body></html> It is very easy to follow the markup. Please have a look at the new doctype and the new semantic tag <header>. 
I have 3 columns and I place my images in there. There is a class called "large". I will use this class in my CSS code to hide the large image when the mouse is not hovering over an image. Make sure you validate your HTML 5 page in the validator found here. Have a look at the CSS code below that makes it all happen.
img { border:none;}
#column1 { position: absolute; top: 30; left: 100; }
li { margin: 15px; list-style-type:none;}
#column1 a img.large {  position: absolute; top: 0; left:700px; visibility: hidden;}
#column1 a:hover { background: white;}
#column1 a:hover img.large { visibility:visible;}
#column2 { position: absolute; top: 30; left: 195px; }
li { margin: 5px; list-style-type:none;}
#column2 a img.large { position: absolute; top: 0; left:510px; margin-left:0; visibility: hidden;}
#column2 a:hover { background: white;}
#column2 a:hover img.large { visibility:visible;}
#column3 { position: absolute; top: 30; left: 400px; width:108px;}
li { margin: 5px; list-style-type:none;}
#column3 a img.large { width: 260px; height:260px; position: absolute; top: 0; left:315px; margin-left:0; visibility: hidden;}
#column3 a:hover { background: white;}
#column3 a:hover img.large { visibility:visible;}
In the first line of the CSS code I set the images to have no border. Then I place the first column on the page and remove the bullets from the list elements. Then I use the large CSS class to set a position for the large image and hide it. Finally, when the hover event takes place, I make the image visible. I repeat the process for the next two columns. I have tested the page with IE 10 and the latest versions of Opera, Chrome and Firefox. Feel free to style your HTML 5 gallery any way you want through the magic of CSS. I did not bother adding background colors and borders because that was beyond the scope of this post. Hope it helps!

    Read the article

  • So, how is the Oracle HCM Cloud User Experience? In a word, smokin’!

    - by Edith Mireles-Oracle
By Misha Vaughan, Oracle Applications User Experience
Oracle unveiled its game-changing cloud user experience strategy at Oracle OpenWorld 2013 (remember that?) with a new simplified user interface (UI) paradigm. The Oracle HCM cloud user experience is about light-weight interaction, tailored to the task you are trying to accomplish, on the device you are comfortable working with. A key theme for the Oracle user experience is being able to move from smartphone to tablet to desktop, with all of your data in the cloud. The Oracle HCM Cloud user experience provides designs for better productivity, no matter when and how your employees need to work.
Release 8
Oracle recently demonstrated how fast it is moving development forward for our cloud applications, with the availability of release 8. In release 8, users will see expanded simplicity in the HCM cloud user experience, such as filling out a time card and succession planning. Oracle has also expanded its mobile capabilities with task flows for payslips, managing absences, and advanced analytics. In addition, users will see expanded extensibility with the new structures editor for simplified pages, and with the user interface text editor, which allows you to update language throughout the UI from one place. If you don’t like calling people who work for you “employees,” you can use this tool to create a term that is suited to your business. Take a look yourself at what’s available now.
What are people saying?
Debra Lilley (@debralilley), an Oracle ACE Director who has a long history with Oracle Applications, recently gave her perspective on release 8: “Having had the privilege of seeing a preview of release 8, I am again impressed with the enhancements around simplified UI. Even more so, at a user group event in London this week, an existing Cloud HCM customer speaking publicly about his implementation said he was very excited about release 8 as the absence functionality was so superior and simple to use.” In an interview with Lilley for a blog post by Dennis Howlett (@dahowlett), we probably couldn’t have asked for a more even-handed look at the Oracle Applications Cloud and the impact of user experience. Take the time to watch all three videos and get the full picture. In closing, Howlett said: “There is always the caveat that getting from the past to Fusion [from the editor: Fusion is now called the Oracle Applications Cloud] is not quite as simple as may be painted, but the outcomes are much better than anticipated in large measure because the user experience is so much better than what went before.” Herman Slange, Technical Manager with Oracle Applications partner Profource, agrees with that comment. “We use on-premise Financials & HCM for internal use. Having a simple user interface that works on a desktop as well as a tablet for (very) non-technical users is a big relief. Coming from E-Business Suite, there is less training (none) required to access HCM content. From a technical point of view, having the abilities to tailor the simplified UI very easy makes it very efficient for us to adjust to specific customer needs. When we have a conversation about simplified UI, we just hand over a tablet and ask the customer to just use it. 
No training and no explanation required.” Finally, in a story by Computer Weekly  about Oracle customer BG Group, a natural gas exploration and production company based in the UK and with a presence in 20 countries, the author states: “The new HR platform has proved to be easier and more intuitive for HR staff to use than the previous SAP-based technology.” What’s Next for Oracle’s Applications Cloud User Experiences? This is the question that Steve Miranda, Oracle Executive Vice President, Applications Development, asks the Applications User Experience team, and we’ve been hard at work for some time now on “what’s next.”  I can’t say too much about it, but I can tell you that we’ve started talking to customers and partners, under non-disclosure agreements, about user experience concepts that we are working on in order to get their feedback. We recently had a chance to talk about possibilities for the Oracle HCM Cloud user experience at an Oracle HCM Southern California Customer Success Summit. This was a fantastic event, hosted by Shane Bliss and Vance Morossi of the Oracle Client Success Team. We got to use the uber-slick facilities of Allergan, our hosts (of Botox fame), headquartered in Irvine, Calif., with a presence in more than 100 countries. Photo by Misha Vaughan, Oracle Applications User Experience Vance Morossi, left, and Shane Bliss, of the Oracle Client Success Team, at an Oracle HCM Southern California Customer Success Summit.  We were treated to a few really excellent talks around human resources (HR). Alice White, VP Human Resources, discussed Allergan's process for global talent acquisition -- how Allergan has designed and deployed a global process, and global tools, along with Oracle and Cognizant, and are now at the end of a global implementation. She shared a couple of insights about the journey for Allergan: “One of the major areas for improvement was on role clarification within the company.” She said the company is “empowering managers and deputizing them as recruiters. Now it is a global process that is nimble and efficient."  Deepak Rammohan, VP Product Management, HCM Cloud, Oracle, also took the stage to talk about pioneering modern HR. He reflected modern HR problems of getting the right data about the workforce, the importance of getting the right talent as a key strategic initiative, and other workforce insights. "How do we design systems to deal with all of this?” he asked. “Make sure the systems are talent-centric. The next piece is collaborative, engaging, and mobile. A lot of this is influenced by what users see today. The last thing is around insight; insight at the point of decision-making." Rammohan showed off some killer HCM Cloud talent demos focused on simplicity and mobility that his team has been cooking up, and closed with a great line about the nature of modern recruiting: "Recruiting is a team sport." Deepak Rammohan, left, and Jake Kuramoto, both of Oracle, debate the merits of a Google Glass concept demo for recruiters on-the-go. Later, in an expo-style format, the Apps UX team showed several concepts for next-generation HCM Cloud user experiences, including demos shown by Jake Kuramoto (@jkuramoto) of The AppsLab, and Aylin Uysal (@aylinuysal), Director, HCM Cloud user experience. We even hauled out our eye-tracker, a research tool used to show where the eye is looking at a particular screen, thanks to teammate Michael LaDuke. 
Dionne Healy, HCM Client Executive, and Aylin Uysal, Director, HCM Cloud user experiences, Oracle, take a look at new HCM Cloud UX concepts. We closed the day with Jeremy Ashley (@jrwashley), VP, Applications User Experience, who brought it all back together by talking about the big picture for applications cloud user experiences. He covered the trends we are paying attention to now, what users will be expecting of their modern enterprise apps, and what Oracle’s design strategy is around these ideas. We closed with an excellent reception hosted by ADP Payroll services at Bistango.
Want to read more?
Want to see where our cloud user experience is going next? Read more on the UsableApps web site about our latest design initiative: “Glance, Scan, Commit.” Or catch up on the back story by looking over our Applications Cloud user experience content on the UsableApps web site. You can also find out where we’ll be next at the Events page on UsableApps.

    Read the article

  • Software Architecture: Quality Attributes

Quality is what all software engineers should strive for when building a new system or adding new functionality. Dictionary.com ambiguously defines quality as a grade of excellence. Unfortunately, quality must be defined within the context of a situation, in that each engineer must extract quality attributes from a project’s requirements. Because quality is defined by project requirements, the meaning of quality is constantly changing based on the project.
Software architecture factors that indicate relevance and effectiveness
The relevance and effectiveness of an architecture can vary based on the context in which it was conceived and the quality attributes that it is required to meet. Typically, when evaluating an architecture for a specific system regarding relevance and effectiveness, the following questions should be asked.
Architectural relevance and effectiveness questions:
Does the architectural concept meet the needs of the system for which it was designed?
Out of the competing architectures for a system, which one is the most suitable?
If we look at the first question, regarding meeting the needs of the system for which it was designed: a system that answers yes to this question must meet all of its quality goals. This means that it consistently meets or exceeds performance goals for the system. In addition, the system meets all the other required system attributes based on the system's requirements. The suitability of a system is based on several factors. In order for a project to be suitable, the necessary resources must be available to complete the task.
Standard project resources: money, trained staff, time.
Life cycle factors that affect the system and design
The development life cycle used on a project can drastically affect how a system’s architecture is created, as well as influence its design. In the case of using the waterfall software development life cycle (SDLC), each phase must be completed before the next can begin. This waterfall approach does not allow for changes in a system’s architecture after that phase is completed. This can lead to major system issues when the architecture for the system is not optimal because of missed quality attributes. This can occur when a project has poor requirements and makes misguided architectural decisions, to name a few examples. Once the architectural phase is complete, the concepts established in that phase move on to the design phase, which is bound to use the concepts and guidelines defined in the previous phase regardless of any missing quality attributes needed for the project. If any issues arise during this phase regarding the selected architectural concepts, they cannot be corrected during the current project. This directly affects the design of a system, because the proper qualities required for the project were not used when the architectural concepts were approved. When this is identified, nothing can be done to fix the architectural issues, and the system design must use the existing architectural concepts, regardless of their missing quality properties, because the architectural concepts for the project cannot be altered. The decisions made in the design phase then proceed down to the implementation phase, where the actual system is coded based on the approved architectural concepts established in the architecture phase, regardless of their architectural quality.
Conversely, projects using more of an iterative or agile methodology to implement a system have more flexibility to correct architectural decisions based on missing quality attributes. This is because each phase of the SDLC is executed more than once, so any issues identified in the architecture of a system can be corrected in the next architectural phase. Subsequently, the corresponding changes will then be adjusted in the following design phase, so that when the project is completed, the optimal architectural and design decisions are applied to the solution.
Architecture factors that indicate functional suitability
Systems that have functional shortcomings do not have the proper functionality based on the project’s driving quality attributes. What this means in plain English is that the system does not live up to what is required of it by the stakeholders, as identified by the missing quality attributes and requirements. One way to prevent functional shortcomings is to test the project’s architecture, design, and implementation against the project’s driving quality attributes to ensure that none of the attributes were missed in any of the phases. Another way to ensure a system has functional suitability is to certify that all its requirements are fully articulated so that there is no chance for misconceptions or misinterpretations by any stakeholders. This will help prevent any issues with interpreting the system requirements during the initial architectural concept phase, design phase, and implementation phase.
Consider the applicability of other architectural models
When considering an architectural model for a project, it is also important to consider alternative architectural models to ensure that the model that is selected will meet the system's required functionality and high quality attributes. Recently I can remember talking about a project that I was working on, and a coworker suggested a different architectural approach that I had never considered. This new model will allow for the same functionality that is offered by the existing model, but will allow for a higher quality project because it fulfills more quality attributes. It is always important to seek alternatives prior to committing to an architectural model.
Factors used to identify high-risk components
A high-risk component can be defined as a component that fulfills 2 or more quality attributes for a system. An example of this can be seen in a web application that utilizes a remote database. One high-risk component in this system is the TCP/IP component, because it allows HTTP connections to be handled by a web server and also allows the server to connect to a remote database server so that it can import data into the system. This component supports both the data assurance quality attribute and the accessibility quality attribute, because the system is available on the network. If for some reason the TCP/IP component were to fail, the web application would fail on two quality attributes, accessibility and data assurance, in that the web site is not accessible and data cannot be updated as needed.
Summary
As stated previously, quality is what all software engineers should strive for when building a new system or adding new functionality. The quality of a system can be directly determined by how closely its implementation matches its desired quality attributes. One way to ensure a higher quality system is to enforce that all project requirements are fully articulated so that no assumptions or misunderstandings can be made by any of the stakeholders. By doing this, a system has a better chance of becoming a high quality system based on its quality attributes.

    Read the article

  • CodePlex Daily Summary for Monday, May 03, 2010

    CodePlex Daily Summary for Monday, May 03, 2010New Projects.radiko: エアログラス採用のシンプルなradiko(http://radiko.jp/)クライアントです。タスクトレイのアイコンからラジオ局の切り替えができます。7Scale: EmptyB2C MVC Plattform: The B2C MVC Plattform aims to be pluggable site framework to help small busisness accomplish basic tasks between business and customers.ElValWeb: The goal of the project to create full featured implementation of ModelValidatorProvider for Enterprise Library Application Validation Block, wich ...esatis yazilimi: asp.net yazılımı ile satış magazasi websitesi kur.IEnumerable.It sample code: IEnumerable.It sample codejQuery MicroAjax for ASP.NET: MicroAjax is a set of jQuery plugins and .NET components designed to provide simple, powerful and efficient Ajax centric web application design pat...Karbon VOS: Karbon VOS is an advanced Virtual Operating System Template for Visual Basic Express. It's developed in Visual Basic. Karbon VOS hopes to one day b...LINQ Mapper: LINQ Mapper translates simple LINQ queries between different sources. It allows you to write queries against your domain model, but have them run ...Meccano Silverlight Framework: Meccano is a new generation of frameworks for creation of LOB Silverlight applications based on MEF, RX, WCF, ADO.NET Data Services etc. It is inte...Multiuse Model View (MMV) Library: This project is an open source library for the Multiuse Model View (MMV) pattern for building robust WPF and ASP.Net applications. Visit my blog ht...Process Affinity Control: Process Affinity Control allows to set the affinity masks of processes based on rules.SilverSpatial: This project helps bridge the gap between Silverlight and Geo-Spatial data type (such as SQL Spatial). It implements the Well-Known-Binary (WKB) fo...StageAssets: Application for storing data about "things" and people in theatre. For example equipment, actors and so on.Stratosphere: Mono compatible library with set of primitives to work with scalable table, queue and block containers with corresponding implementations for Amazo...TRX Web-Viewer: A simple web-based application to upload and view VSTS 2008 and VSTS 2010 test result files with some basic lookup and feature-wise management of r...WDT2: WDT 2 is the school project to begin learning .NET enviroment, The main focus is on learning the use of almost all the componenets.WPF Behavior Library: WPF Behavior Library is a set of additional actions for WPF that allow you to add extra behaviors to a control quickly and easily. Currently the on...YouTubeEmbeddedVideo WebControl for ASP.NET: A Control to embed YouTube videos in ASP.NET pages. Works in C# and VB.NETNew Releases.radiko: beta: 東京局のみ対応 あとは手抜きActiveWorlds Managed .NET SDK: AwManaged Technology Preview - WIN32 (Alpha): This WIN32 release contains the Server Console Application. The Setup executable should be run as administrator on O.S. using UAC (Vista/Win7)AJAX Control Framework: v1.0.1.0: v1.0.1.0 - Contains a Bing Maps sample project, a number of bug fixes and a few performance improvements. - AJAX enable ANY custom control that der...App_Code (and Usercontrol) Editor (ACE): v1.0.0 alpha: The first alpha release of the AppCode Editor for Umbraco 4.0.3 is now available to download! 
Tested to work with usercontrols - pre-compilation wi...ElValWeb: ElValWeb 0.0.1.0: Version 0.0.1.0 contains client validation support forAndCompositeValidator ContainsCharactersValidator DomainValidator NotNullValidator Or...esatis yazilimi: magaza: magazanın yazılımları ve veri tabanının yazılımlarıGrunty OS: Grunty OS USB: Download Grunty OS for USBGrunty OS: Grunty OS.ISO: Grunty OS ISOKarbon VOS: Milestone 1 (Kaptua): Milestone 1...Live Meeting API Wrapper: LiveMeetingAPIWrapperV1.2: Added get meeting and update meeting.Multiuse Model View (MMV) Library: v0.3: first alpha release. Medium amount of functionality and some use cases tested.MVC Foolproof Validation: Beta 0.9.3774: Adds resource provided error messages, regular expression operators and a new RegularExpressionIf attribute.Process Affinity Control: Version 1.0.0: This is the first release. Planned features for the next release: No administrative privileges needed to run the manager Select the active scena...SharePoint 2010 Service Manager: SharePoint 2010 Service Manager 1.1: Added support to run under UAC with automatic security elevationSharePoint Event Handler Manager: Event Handler Manager 2.0: Please download the application here: http://www.ackermantech.com/registerevents.aspxSkyDrive Synchronizer: SkyDrive Sync Beta 0.1: Beta release includes: Upload and download Synchronize updated files Delete files on web/locally if not in source Split larger files into sma...Stratosphere: Stratosphere 1.0.0.0: Initial beta releaseSuggested Resources for .NET Developers: 0.8.0.0 VS2010 - focus on displaying content: This is the first release of Suggested Resources that can be downloaded from the internet. While there is still a lot of work to be done this rele...TRX Web-Viewer: TRX Web-Viewer V1.0: First working versionVCC: Latest build, v2.1.30502.0: Automatic drop of latest buildWatchersNET.TagCloud: WatchersNET.TagCloud 01.04.00: !Whats New New Tag Mode: Search Referrers (Shows Search Tags From Google, Ask, Bing, Yahoo and the Dnn Site Search) Taxonomy Tags now contains L...Web/Cloud Applications Development Framework | Visual WebGui: 6.4 Beta 2e: Fully featured beta version of Visual WebGui Web/Cloud Applicaiton Development FrameworkWPF Behavior Library: WPF Behavior Library 0.1 Release: First alpha release of the WPF Behavior Library. It should be stable but doesn't have all of the features it will have in the future and the API ma...xvanneste: Sharepoint Social Network Client: Client permettant d'avoir accés au social network de sharepoint a l'exterieur du navigateur.Most Popular ProjectsRawrWBFS ManagerAJAX Control Toolkitpatterns & practices – Enterprise LibraryMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)iTuner - The iTunes CompanionASP.NETDotNetNuke® Community EditionMost Active ProjectsIonics Isapi Rewrite Filterpatterns & practices – Enterprise LibraryRawrHydroServer - CUAHSI Hydrologic Information System ServerAJAX Control Frameworkpatterns & practices: Azure Security GuidanceTinyProjectBlogEngine.NETNB_Store - Free DotNetNuke Ecommerce Catalog ModuleDambach Linear Algebra Framework

    Read the article

  • EM12c Release 4: New EMCLI Verbs

    - by SubinDaniVarughese
    Here are the new EM CLI verbs in Enterprise Manager 12c Release 4 (12.1.0.4). This helps you in writing new scripts or enhancing your existing scripts for further automation. Basic Administration Verbs invoke_ws - Invoke EM web service.ADM Verbs associate_target_to_adm - Associate a target to an application data model. export_adm - Export Application Data Model to a specified .xml file. import_adm - Import Application Data Model from a specified .xml file. list_adms - List the names, target names and application suites of existing Application Data Models verify_adm - Submit an application data model verify job for the target specified.Agent Update Verbs get_agent_update_status -  Show Agent Update Results get_not_updatable_agents - Shows Not Updatable Agents get_updatable_agents - Show Updatable Agents update_agents - Performs Agent Update Prereqs and submits Agent Update JobBI Publisher Reports Verbs grant_bipublisher_roles - Grants access to the BI Publisher catalog and features. revoke_bipublisher_roles - Revokes access to the BI Publisher catalog and features.Blackout Verbs create_rbk - Create a Retro-active blackout.CFW Verbs cancel_cloud_service_requests -  To cancel cloud service requests delete_cloud_service_instances -  To delete cloud service instances delete_cloud_user_objects - To delete cloud user objects. get_cloud_service_instances - To get information about cloud service instances get_cloud_service_requests - To get information about cloud requests get_cloud_user_objects - To get information about cloud user objects.Chargeback Verbs add_chargeback_entity - Adds the given entity to Chargeback. assign_charge_plan - Assign a plan to a chargeback entity. assign_cost_center - Assign a cost center to a chargeback entity. create_charge_entity_type - Create  charge entity type export_charge_plans - Exports charge plans metadata to file export_custom_charge_items -  Exports user defined charge items to a file import_charge_plans - Imports charge plans metadata from given file import_custom_charge_items -  Imports user defined charge items metadata from given file list_charge_plans - Gives a list of charge plans in Chargeback. list_chargeback_entities - Gives a list of all the entities in Chargeback list_chargeback_entity_types - Gives a list of all the entity types that are supported in Chargeback list_cost_centers - Lists the cost centers in Chargeback. remove_chargeback_entity - Removes the given entity from Chargeback. unassign_charge_plan - Un-assign the plan associated to a chargeback entity. unassign_cost_center - Un-assign the cost center associated to a chargeback entity.Configuration/Association History disable_config_history - Disable configuration history computation for a target type. enable_config_history - Enable configuration history computation for a target type. set_config_history_retention_period - Sets the amount of time for which Configuration History is retained.ConfigurationCompare config_compare - Submits the configuration comparison job get_config_templates - Gets all the comparison templates from the repositoryCompliance Verbs fix_compliance_state -  Fix compliance state by removing references in deleted targets.Credential Verbs update_credential_setData Subset Verbs export_subset_definition - Exports specified subset definition as XML file at specified directory path. generate_subset - Generate subset using specified subset definition and target database. import_subset_definition - Import a subset definition from specified XML file. 
import_subset_dump - Imports dump file into specified target database. list_subset_definitions - Get the list of subset definition, adm and target nameDelete pluggable Database Job Verbs delete_pluggable_database - Delete a pluggable databaseDeployment Procedure Verbs get_runtime_data - Get the runtime data of an executionDiscover and Push to Agents Verbs generate_discovery_input - Generate Discovery Input file for discovering Auto-Discovered Domains refresh_fa - Refresh Fusion Instance run_fa_diagnostics - Run Fusion Applications DiagnosticsFusion Middleware Provisioning Verbs create_fmw_domain_profile - Create a Fusion Middleware Provisioning Profile from a WebLogic Domain create_fmw_home_profile - Create a Fusion Middleware Provisioning Profile from an Oracle Home create_inst_media_profile - Create a Fusion Middleware Provisioning Profile from Installation MediaGold Agent Image Verbs create_gold_agent_image - Creates a gold agent image. decouple_gold_agent_image - Decouples the agent from gold agent image. delete_gold_agent_image - Deletes a gold agent image. get_gold_agent_image_activity_status -  Gets gold agent image activity status. get_gold_agent_image_details - Get the gold agent image details. list_agents_on_gold_image - Lists agents on a gold agent image. list_gold_agent_image_activities - Lists gold agent image activities. list_gold_agent_image_series - Lists gold agent image series. list_gold_agent_images - Lists the available gold agent images. promote_gold_agent_image - Promotes a gold agent image. stage_gold_agent_image - Stages a gold agent image.Incident Rules Verbs add_target_to_rule_set - Add a target to an enterprise rule set. delete_incident_record - Delete one or more open incidents remove_target_from_rule_set - Remove a target from an enterprise rule set. Job Verbs export_jobs - Export job details in to an xml file import_jobs - Import job definitions from an xml file job_input_file - Supply details for a job verb in a property file resume_job - Resume a job or set of jobs suspend_job - Suspend a job or set of jobs Oracle Database as Service Verbs config_db_service_target - Configure DB Service target for OPCPrivilege Delegation Settings Verbs clear_default_privilege_delegation_setting - Clears the default privilege delegation setting for a given list of platforms set_default_privilege_delegation_setting - Sets the default privilege delegation setting for a given list of platforms test_privilege_delegation_setting - Tests a Privilege Delegation Setting on a hostSSA Verbs cleanup_dbaas_requests - Submit cleanup request for failed request create_dbaas_quota - Create Database Quota for a SSA User Role create_service_template - Create a Service Template delete_dbaas_quota - Delete the Database Quota setup for a SSA User Role delete_service_template - Delete a given service template get_dbaas_quota - List the Database Quota setup for all SSA User Roles get_dbaas_request_settings - List the Database Request Settings get_service_template_detail - Get details of a given service template get_service_templates -  Get the list of available service templates rename_service_template -  Rename a given service template update_dbaas_quota - Update the Database Quota for a SSA User Role update_dbaas_request_settings - Update the Database Request Settings update_service_template -  Update a given service template. 
SavedConfigurations get_saved_configs  - Gets the saved configurations from the repository Server Generated Alert Metric Verbs validate_server_generated_alerts  - Server Generated Alert Metric VerbServices Verbs edit_sl_rule - Edit the service level rule for the specified serviceSiebel Verbs list_siebel_enterprises -  List Siebel enterprises currently monitored in EM list_siebel_servers -  List Siebel servers under a specified siebel enterprise update_siebel- Update a Siebel enterprise or its underlying serversSiteGuard Verbs add_siteguard_aux_hosts -  Associate new auxiliary hosts to the system configure_siteguard_lag -  Configure apply lag and transport lag limit for databases delete_siteguard_aux_host -  Delete auxiliary host associated with a site delete_siteguard_lag -  Erases apply lag or transport lag limit for databases get_siteguard_aux_hosts -  Get all auxiliary hosts associated with a site get_siteguard_health_checks -  Shows schedule of health checks get_siteguard_lag -  Shows apply lag or transport lag limit for databases schedule_siteguard_health_checks -  Schedule health checks for an operation plan stop_siteguard_health_checks -  Stops all future health check execution of an operation plan update_siteguard_lag -  Updates apply lag and transport lag limit for databasesSoftware Library Verbs stage_swlib_entity_files -  Stage files of an entity from Software Library to a host target.Target Data Verbs create_assoc - Creates target associations delete_assoc - Deletes target associations list_allowed_pairs - Lists allowed association types for specified source and destination list_assoc - Lists associations between source and destination targets manage_agent_partnership - Manages partnership between agents. Used for explicitly assigning agent partnershipsTrace Reports generate_ui_trace_report  -  Generate and download UI Page performance report (to identify slow rendering pages)VI EMCLI Verbs add_virtual_platform - Add Oracle Virtual PLatform(s). modify_virtual_platform - Modify Oracle Virtual Platform.To get more details about each verb, execute$ emcli help <verb_name>Example: $ emcli help list_assocNew resources in list verbThese are the new resources in EM CLI list verb :Certificates  WLSCertificateDetails Credential Resource Group  PreferredCredentialsDefaultSystemScope - Preferred credentials (System Scope)   PreferredCredentialsSystemScope - Target preferred credentialPrivilege Delegation Settings  TargetPrivilegeDelegationSettingDetails  - List privilege delegation setting details on a host  TargetPrivilegeDelegationSetting - List privilege delegation settings on a host   PrivilegeDelegationSettings  - Lists all Privilege Delegation Settings   PrivilegeDelegationSettingDetails - Lists details of  Privilege Delegation Settings To get more details about each resource, execute$ emcli list -resource="<resource_name>" -helpExample: $ emcli list -resource="PrivilegeDelegationSettings" -helpDeprecated Verbs:Agent Administration Verbs resecure_agent - Resecure an agentTo get the complete list of verbs, execute:$ emcli help Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager 12c Mobile app

    Read the article

  • Product Development Investment: A Measure of Vendor Performance

    - by Jim Mcglothlin
The relationship between a large, complex organization and its key suppliers of information technology is normally more than just "strategic". Expectations about the duration of the relationship typically exceed 20 years. Enterprise applications and technology infrastructure are not expected to be changed out like petunias. So how would you rate the due diligence processes as performed in Higher Education when selecting critical, transformational information technology? My observation: I see a lot of effort put into elaborate demonstrations of basic software functionality. I see a lot of attention paid to the cost element of technology acquisition, including the contracted cost of implementation consulting services. But the factor that receives only cursory analysis and due diligence is long-term performance: the ability of a vendor to grow, expand, and develop, and bring its customers along with it.

So what should you look for in a long-term IT supplier? Oracle has a public track record for product development. The annual investment has been running at a rate of almost $3 billion in organic product development. Oracle's well-publicized acquisitions and mergers have been supplemental to its R&D. This is important for Higher Education. Another meaningful way to evaluate a company is to look at the tangible track record of enhancement. Consider the Oracle-PeopleSoft enterprise business platform in the six years since it was acquired by Oracle; each product or technology enhancement below is paired with its customer or user impact:

- Service Oriented Architecture (SOA): 300+ new web services delivered in versions 9.0 & 9.1 provide flexibility, so that customers can integrate PeopleSoft with other applications. Campus Solutions has added Admissions and Constituent Web Services.
- Constituent Relationship Management: PeopleSoft CRM 9.1 for Higher Education introduced new process flows for student recruiting and retention to support "Student Success" initiatives. A 360-degree view of the constituent is now delivered, and the concept of a single-stop Student Services Center is now in CRM 9.1 with tight integration to PeopleSoft Campus Solutions.
- Human Capital Management: Contract Pay for Education, with flexibility for configuration and calculation, has been extended in HCM 9.1. New chartfield integration among Project Costing, Time & Labor, and Payroll serves the labor distribution requirements for Grants / Sponsored Research.
- Talent Management: PeopleSoft 9.0 and 9.1 feature an integrated talent management approach centered on definitions in "Profile Manager", with all-new usability improvements. Internal and external candidate pools, and the entire recruitment process, are driven by delivered configurable selection and on-boarding processes. Interview scheduling and online job offers are newly delivered processes.
- Performance Management: PeopleSoft HCM ePerformance 9.1 will include significant new functionality designed to help organizations more effectively align business objectives with employee goals. Using an Organization Chart view, your business goals can flow down to become tangible objectives per employee.
- Succession Planning / Workforce Development: New in HCM 9.0, enhanced in 9.1, is a planning capability for regular or unusual (major organizational change) succession of internal or external candidates. PeopleSoft supports employee-based career planning, which ultimately increases the integrity of the succession planning process (employees identify their career needs, plans, preferences, and interests).
- Dashboards / Oracle Business Intelligence Application Suite: Oracle Human Resources Analytics provides the workforce information foundation that integrates data from HR functional areas and Finance. Oracle Human Resources Analytics delivers 9 dashboards and over 200 reports, giving your HR professionals and front-line managers the tools to analyze workforce staffing, retention, and productivity, to better source high-quality applicants, and to reduce absence costs.
- Multi-year Planning and Commitment Control: External funding sources, especially Grants, require a multi-year encumbrance business process. PeopleSoft HCM 9.1 adds multi-year funding and commitment control, including budget checking. The newly designed Real Time Budget Checking will provide the customer with an updated snapshot of their budget and encumbrances at any given time.
- Position Budgeting with Hyperion: Hyperion Planning world-class products now include delivered integration to PeopleSoft HCM. Position Budgeting is available in the new Public Sector Planning module of Hyperion.
- Web 2.0 features for the latest in usability: PeopleSoft 9.1 features a contemporary internet user experience: partial-page refreshing, drag-and-drop pagelets, a new menu structure, navigation pagelets, modal popup message windows, favorites & recently used links, type-ahead, and drag-and-drop grid columns with pop-out grids.
- Portal Workspaces: Enterprise 2.0 for your collaborative web communities, using new content management, along with wikis, blogs, and discussion forums in PeopleSoft Portal 9.1.
- PeopleTools enhanced by Oracle Fusion Middleware: Standards-based tools have been added to the PeopleTools application infrastructure: BI (XML) Publisher and Java tools. Certified for use with PeopleSoft: Oracle Business Intelligence (OBIEE), Oracle Enterprise Manager, Oracle WebLogic Server, and Oracle SOA Suite.
- Hosting for PeopleSoft applications: A solid new deployment option: the Oracle On Demand remote hosting center for high scalability, security, and continuity of operations.
- Business Process Outsourcing (BPO) for HCM / Payroll functions: A partnership with AT&T provides hosting of the HR/Payroll application along with payroll business process operations and subscription-based service fees (SaaS). The AT&T BPO full service includes pay sheet processing, bank and 3rd-party file transfer, payroll tax handling, etc.
- Continuous Delivery Model: Feature Packs provide faster time-to-benefit; new features become available in PeopleSoft 9.1 (or Campus Solutions 9.0) without the need to perform an upgrade.
- Golden person data model across all campus applications: Oracle Higher Education Constituent Hub provides synchronization and data governance of person data across any application, e.g. HR/Payroll, Student Information System, Housing, Emergency Contact, LMS, CRM.

Oracle's aggressive enhancement plans within the "Applications Unlimited" program continue, as new functionality is under development for a new version of a PeopleSoft release planned for 2012. Meanwhile, new capabilities are planned on an annual basis in Feature Packs. PeopleSoft just delivered the HCM 2010 Feature Pack and another is planned for 2011. In February we plan to have over 100 customers from our Customer Advisory Boards at our PeopleSoft Development Center in California to review designs for all of these releases.

For those of you near New York City: the investment and progressive development story described above is the subject of an Oracle road show event on February 9, 2011. 
Charting Your Course with Oracle Applications is a global event series designed to help business and IT executives assess the impact of new inflection points on their business and applications roadmap: changing workforces, shifting customer and constituent bases, and increased volatility. Learn how innovations ranging from new deployment models like cloud computing to the introduction of social applications and smart devices are delivering results across all areas of business and industry. THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT.

    Read the article

  • Azure, don't give me multiple VMs, give me one elastic VM

    - by FransBouma
Yesterday, Microsoft revealed major new features for Windows Azure (see ScottGu's post). It all looks shiny and great, but after reading most of the material describing the new features, I still find the overall idea behind all of it flawed: why should I care how many VMs my web app runs on? Isn't that a problem for the Windows Azure engineers / software to solve? And if I need the file system, why can't I simply get a virtual filesystem?

To illustrate my point, let's use a real example: a product website with a customer system/database and, next to it, a support site with an accompanying database. Both are written in .NET using ASP.NET, and each uses its own SQL Server database. The product website offers files for customers to download; very simple. You have a couple of options to host these websites:

- Buy a server, place it in a rack at an ISP and run the sites on that server
- Use 'shared hosting' with an ISP, which means your sites' appdomains run on the same machine as other customers' sites, the files are stored there as well, and the databases are hosted in the same server as the other shared databases
- Hire a VM, install your OS of choice at an ISP, and host the sites on that VM; basically the same as the first option, except you don't have a physical server
- At some cloud vendor, either host the sites 'shared' or in a VM; see above

With all of those options, scalability is a problem, even the cloud-based ones, though not for the same reasons:

- The physical server solution has the obvious problem that if you need more power, you need to buy a bigger server or more servers, which requires you to add replication and other overhead
- Shared hosting solutions are almost always capped on memory usage / traffic and database size: if your sites get too big, you have to move out of the shared hosting environment and start over with one of the other solutions
- The VM solution, be it a VM at an ISP or 'in the cloud' at e.g. Windows Azure or Amazon, in theory allows scaling out by simply instantiating more VMs; however, that too introduces the same overhead problems as with the physical servers: suddenly more than one instance runs your sites

If a cloud vendor offers its services in the form of VMs, you won't gain much over having a VM at some ISP: the main problems you have to work around are still there. When you spin up more than one VM, your application must be completely stateless at any moment, including the DB subsystem, because what's in memory in instance 1 might not be in memory in instance 2. This might sound trivial, but it's not. A lot of the websites out there started rather small: they were perfectly runnable on a single machine with normal memory and CPU power. After all, you don't need a big machine to run a website with even thousands of users a day. Moving these sites to a multi-VM environment causes a problem: all the in-memory state they use, all the multi-page transitions that keep state across the transition, no longer work the way they did on a single machine. State is something of the past: you have to store every byte of state in either a DB, a viewstate or a cookie somewhere, so that with the next request all state information is available through the request, as nothing is kept in memory (a small sketch of what this means in code follows at the end of this post). Our example uses a bunch of files in a file system. Using multiple VMs will require that these files move to a cloud storage system which is mounted in each VM, so we don't have to store the files on each VM. 
This might require different file paths, but that change should be minor. What's perhaps less minor is the maintenance procedure for the new type of cloud storage used: instead of ftp-ing into a VM, you might have to update the files using different ways / tools. All in all, this makes moving an existing website which was written for an environment based around a VM (namely .NET with its CLR) overly cumbersome and problematic: it forces you to refactor your website system to be usable 'in the cloud', and that is caused by the limited way in which e.g. Windows Azure offers its cloud services: in blocks of VMs.

Offer a scalable, flexible VM which extends with my needs

Instead, cloud vendors should offer simply one VM to me. On that VM I run the websites and store my DB and my files. As it's a virtual machine, I don't care how this machine is actually run on physical hardware (e.g. partitioned); that's the problem for the cloud vendor to solve. If I need more resources, e.g. I have more traffic to my server, way more visitors per day, the VM stretches, as if I had bought a bigger box. This frees me from the problem which comes with multiple VMs: I don't have any refactoring to do at all. I can simply build my website as if it runs on my local hardware server, upload it to the VM offered by the cloud vendor, install it on the VM and I'm done.

"But that might require changes to Windows!" Yes, but Microsoft is Windows. Windows Azure is their service; they can make whatever change to what they offer to make it look like it's Windows. Yet they're stuck, like Amazon, in thinking in VMs, which forces developers to 'think ahead' and gamble whether they will need to migrate to a cloud with multiple VMs in the future or not. Which comes down to: gamble whether they should invest time in code / architecture which they might never need. (YAGNI anyone?) So the VM we're talking about: is that a low-level VM which runs a guest OS, or is that VM a different kind of VM?

The flexible VM: .NET's CLR?

My example websites are ASP.NET based, which means they run inside a .NET appdomain, on the .NET CLR, which is a VM. The only physical OS resource the sites need is the file system, but this too is accessed through .NET. In short: all the websites see is what .NET allows the websites to see; the world as the websites know it is what .NET shows them and lets them access. How the .NET appdomain is run physically is the concern of .NET, not mine. This begs the question why Windows Azure doesn't offer virtual appdomains, or better: .NET environments which look like one machine but could physically be multiple machines. In such an environment, no change has to be made to the websites to migrate them from a local machine or your own server to the cloud to get proper scaling: the .NET VM simply scales with the need: more memory needed, more CPU power needed, it stretches. What it offers to the application running inside the appdomain simply increases, but is not fragmented: all resources are available to the application. This means that the problem of how to scale is back where it should be: with the cloud vendor.

"Yeah, great, but what about the databases?" The .NET application communicates with the database server through an ADO.NET provider. Where the database is located is not a problem of the appdomain: the ADO.NET provider has to solve that. 
In other words: we can host the databases in an environment which offers itself as a single resource, is accessible through one connection string and shows no replication overhead on the outside, and use that environment inside the .NET VM as if it were a single DB. But what about memory replication and other problems? This environment isn't simple, at least not for the cloud vendor. But it is simple for the customer who wants to run his sites in that cloud: no work needed, no refactoring of existing code needed. Upload it, run it. Perhaps I'm dreaming and what I described above isn't possible. Yet I think that if cloud vendors don't move in that direction, what they're offering isn't interesting: it doesn't solve a problem at all, it simply offers a way to instantiate more VMs with the guest OS of choice, at the cost of me needing to refactor my website code so it can run in the straitjacket form factor dictated by the cloud vendor. Let's not kid ourselves here: most of us developers will never build a website which needs a truckload of VMs to run it; almost all websites created by developers can run on just a few VMs at most. Yet the most expensive change is right at the start: moving from one VM to two. As soon as you have refactored your website code to run across multiple VMs, adding another one is as easy as clicking a mouse button. But that first step is the problem here, and as it sits right at the beginning of scaling the website, it's particularly strange that cloud vendors refuse to solve that problem and leave it to the developers instead, which makes migrating 'to the cloud' particularly expensive.
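The statelessness problem described earlier in this post is not specific to ASP.NET. As a minimal, hypothetical sketch (shown here as a Java servlet purely for illustration, since that is the stack used elsewhere on this page), this is the kind of code that silently stops working the moment a second VM is added, and the kind of refactoring a single elastic VM would make unnecessary:

    import java.io.IOException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet("/cart")
    public class CartServlet extends HttpServlet {

        // Single-VM thinking: this map lives in one process's memory. It works fine
        // as long as exactly one VM serves every request for this site.
        private final Map<String, Integer> inMemoryCart =
                new ConcurrentHashMap<String, Integer>();

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String user = req.getParameter("user");

            Integer current = inMemoryCart.get(user);
            inMemoryCart.put(user, current == null ? 1 : current + 1);

            // With two or more VMs behind a load balancer, the next request may land
            // on an instance that has never seen this map. The refactoring the post
            // complains about is replacing the lines above with calls to some shared
            // store, e.g. a (hypothetical) cartStore.addItem(user, 1) backed by a
            // database or distributed cache, so that any instance can serve any request.

            resp.getWriter().println("items=" + inMemoryCart.get(user));
        }
    }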

    Read the article

  • Java EE 6 and NoSQL/MongoDB on GlassFish using JPA and EclipseLink 2.4 (TOTD #175)

    - by arungupta
TOTD #166 explained how to use MongoDB in your Java EE 6 applications. The code in that tip used the APIs exposed by the MongoDB Java driver and so requires you to learn a new API. However, if you are building Java EE 6 applications, then you are already familiar with the Java Persistence API (JPA). EclipseLink 2.4, scheduled to release as part of Eclipse Juno, provides support for NoSQL databases by mapping a JPA entity to a document. Their wiki provides a complete explanation of how the mapping is done. This Tip Of The Day (TOTD) will show how you can leverage that support in your Java EE 6 applications deployed on GlassFish 3.1.2.

Before we dig into the code, here are the key concepts:
- A POJO is mapped to a NoSQL data source using @NoSQL or the <no-sql> element in "persistence.xml".
- A subset of JPQL and Criteria queries is supported, based upon the underlying data store.
- Connection properties are defined in "persistence.xml".

Now, let's take a look at the code ...

Download the latest EclipseLink 2.4 Nightly Bundle. There is an Installer, Source, and Bundle - make sure to download the Bundle link (20120410) and unzip.

Download the GlassFish 3.1.2 zip and unzip.

Install the EclipseLink 2.4 JARs in GlassFish. Remove the following JARs from "glassfish/modules":
org.eclipse.persistence.antlr.jar
org.eclipse.persistence.asm.jar
org.eclipse.persistence.core.jar
org.eclipse.persistence.jpa.jar
org.eclipse.persistence.jpa.modelgen.jar
org.eclipse.persistence.moxy.jar
org.eclipse.persistence.oracle.jar

Add the following JARs from the EclipseLink 2.4 nightly build to "glassfish/modules":
org.eclipse.persistence.antlr_3.2.0.v201107111232.jar
org.eclipse.persistence.asm_3.3.1.v201107111215.jar
org.eclipse.persistence.core.jpql_2.4.0.v20120407-r11132.jar
org.eclipse.persistence.core_2.4.0.v20120407-r11132.jar
org.eclipse.persistence.jpa.jpql_2.0.0.v20120407-r11132.jar
org.eclipse.persistence.jpa.modelgen_2.4.0.v20120407-r11132.jar
org.eclipse.persistence.jpa_2.4.0.v20120407-r11132.jar
org.eclipse.persistence.moxy_2.4.0.v20120407-r11132.jar
org.eclipse.persistence.nosql_2.4.0.v20120407-r11132.jar
org.eclipse.persistence.oracle_2.4.0.v20120407-r11132.jar

Start MongoDB. Download the latest MongoDB from here (2.0.4 as of this writing). Create the default data directory for MongoDB as:
sudo mkdir -p /data/db/
sudo chown `id -u` /data/db
Refer to Quickstart for more details. Start MongoDB as:
arungup-mac:mongodb-osx-x86_64-2.0.4 <arungup> ->./bin/mongod
./bin/mongod --help for help and startup options
Mon Apr  9 12:56:02 [initandlisten] MongoDB starting : pid=3124 port=27017 dbpath=/data/db/ 64-bit host=arungup-mac.local
Mon Apr  9 12:56:02 [initandlisten] db version v2.0.4, pdfile version 4.5
Mon Apr  9 12:56:02 [initandlisten] git version: 329f3c47fe8136c03392c8f0e548506cb21f8ebf
Mon Apr  9 12:56:02 [initandlisten] build info: Darwin erh2.10gen.cc 9.8.0 Darwin Kernel Version 9.8.0: Wed Jul 15 16:55:01 PDT 2009; root:xnu-1228.15.4~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_40
Mon Apr  9 12:56:02 [initandlisten] options: {}
Mon Apr  9 12:56:02 [initandlisten] journal dir=/data/db/journal
Mon Apr  9 12:56:02 [initandlisten] recover : no journal files present, no recovery needed
Mon Apr  9 12:56:02 [websvr] admin web console waiting for connections on port 28017
Mon Apr  9 12:56:02 [initandlisten] waiting for connections on port 27017

Check out the JPA/NoSQL sample from the SVN repository. The complete source code built in this TOTD can be downloaded here. 
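Before creating the web app in the next step, it may help to see roughly what the EclipseLink NoSQL mapping and connection properties look like. The sketch below is illustrative only: it follows the conventions of the EclipseLink 2.4 MongoDB adapter, but the entity, field, unit, and database names are assumptions, so verify them against the checked-out sample.

    // Hypothetical JPA entity mapped to a MongoDB document via EclipseLink NoSQL
    package model;

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import org.eclipse.persistence.nosql.annotations.DataFormatType;
    import org.eclipse.persistence.nosql.annotations.Field;
    import org.eclipse.persistence.nosql.annotations.NoSql;

    @Entity
    @NoSql(dataFormat = DataFormatType.MAPPED) // persist as a mapped document, not a table row
    public class Customer {

        @Id
        @GeneratedValue
        @Field(name = "_id")  // maps to the MongoDB document id
        private String id;

        @Field(name = "NAME")
        private String name;

        public String getId() { return id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

The matching "persistence.xml" fragment points the persistence unit at the local mongod started above (unit and database names are placeholders):

    <persistence-unit name="mongo-example" transaction-type="RESOURCE_LOCAL">
      <class>model.Customer</class>
      <properties>
        <property name="eclipselink.target-database"
                  value="org.eclipse.persistence.nosql.adapters.mongo.MongoPlatform"/>
        <property name="eclipselink.nosql.connection-spec"
                  value="org.eclipse.persistence.nosql.adapters.mongo.MongoConnectionSpec"/>
        <property name="eclipselink.nosql.property.mongo.host" value="localhost"/>
        <property name="eclipselink.nosql.property.mongo.port" value="27017"/>
        <property name="eclipselink.nosql.property.mongo.db" value="mydb"/>
      </properties>
    </persistence-unit>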
Create Java EE 6 web app

Create a Java EE 6 Maven web app as:
mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee6 -DgroupId=model -DartifactId=javaee-nosql -DarchetypeVersion=1.5 -DinteractiveMode=false

Copy the model files from the checked-out workspace to the generated project as:
cd javaee-nosql
cp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/model src/main/java

Copy "persistence.xml":
mkdir src/main/resources
cp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/META-INF ./src/main/resources

Add the following dependencies:
<dependency>
    <groupId>org.eclipse.persistence</groupId>
    <artifactId>org.eclipse.persistence.jpa</artifactId>
    <version>2.4.0-SNAPSHOT</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.eclipse.persistence</groupId>
    <artifactId>org.eclipse.persistence.nosql</artifactId>
    <version>2.4.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongo-java-driver</artifactId>
    <version>2.7.3</version>
</dependency>
The first one is for the latest EclipseLink APIs, the second one is for the EclipseLink/NoSQL support, and the last one is the MongoDB Java driver.

And the following repository:
<repositories>
    <repository>
        <id>EclipseLink Repo</id>
        <url>http://www.eclipse.org/downloads/download.php?r=1&amp;nf=1&amp;file=/rt/eclipselink/maven.repo</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
</repositories>

Copy "Test.java" to the generated project:
mkdir src/main/java/example
cp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/example/Test.java ./src/main/java/example/
This file contains the source code to CRUD the JPA entity to MongoDB. This sample is explained in detail on the EclipseLink wiki.

Create a new Servlet in the "example" directory as:
package example;

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * @author Arun Gupta
 */
@WebServlet(name = "TestServlet", urlPatterns = {"/TestServlet"})
public class TestServlet extends HttpServlet {

    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            out.println("<html>");
            out.println("<head>");
            out.println("<title>Servlet TestServlet</title>");
            out.println("</head>");
            out.println("<body>");
            out.println("<h1>Servlet TestServlet at " + request.getContextPath() + "</h1>");
            try {
                Test.main(null);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
            out.println("</body>");
            out.println("</html>");
        } finally {
            out.close();
        }
    }

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }
}

Build the project and deploy it as:
mvn clean package
glassfish3/bin/asadmin deploy --force=true target/javaee-nosql-1.0-SNAPSHOT.war

Accessing http://localhost:8080/javaee-nosql/TestServlet shows the following messages in the server.log:
connecting(EISLogin(
    platform=> MongoPlatform
    user name=> ""
    MongoConnectionSpec()))
. . . 
Connected: User:
Database: 2.7  Version: 2.7
. . .
Executing MappedInteraction()
    spec => null
    properties => {mongo.collection=CUSTOMER, mongo.operation=INSERT}
    input => [DatabaseRecord(
        CUSTOMER._id => 4F848E2BDA0670307E2A8FA4
        CUSTOMER.NAME => AMCE)]
. . .
Data access result: [{TOTALCOST=757.0, ORDERLINES=[{DESCRIPTION=table, LINENUMBER=1, COST=300.0}, {DESCRIPTION=balls, LINENUMBER=2, COST=5.0}, {DESCRIPTION=rackets, LINENUMBER=3, COST=15.0}, {DESCRIPTION=net, LINENUMBER=4, COST=2.0}, {DESCRIPTION=shipping, LINENUMBER=5, COST=80.0}, {DESCRIPTION=handling, LINENUMBER=6, COST=55.0}, {DESCRIPTION=tax, LINENUMBER=7, COST=300.0}], SHIPPINGADDRESS=[{POSTALCODE=L5J1H7, PROVINCE=ON, COUNTRY=Canada, CITY=Ottawa, STREET=17 Jane St.}], VERSION=2, _id=4F848E2BDA0670307E2A8FA8, DESCRIPTION=Pingpong table, CUSTOMER__id=4F848E2BDA0670307E2A8FA7, BILLINGADDRESS=[{POSTALCODE=L5J1H8, PROVINCE=ON, COUNTRY=Canada, CITY=Ottawa, STREET=7 Bank St.}]}]

You'll not see any output in the browser, just the output in the console, but the code can easily be modified to do so. Once again, the complete Maven project can be downloaded here. Do you want to try accessing relational and non-relational (aka NoSQL) databases in the same PU?
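A hedged sketch of one way to experiment with that closing question: declare a relational persistence unit next to the MongoDB one and drive each with plain JPA code from the same application. The unit names and the Order entity are assumptions for illustration; a true single-unit mix is a different exercise (EclipseLink documents a composite persistence unit feature for that), so treat this side-by-side layout as a starting point only.

    <!-- persistence.xml with one relational and one document-oriented unit -->
    <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">

      <!-- Relational unit: uses the container's default data source -->
      <persistence-unit name="relational-example" transaction-type="JTA">
        <class>model.Order</class>
      </persistence-unit>

      <!-- Document unit: targets MongoDB through the EclipseLink NoSQL adapter -->
      <persistence-unit name="mongo-example" transaction-type="RESOURCE_LOCAL">
        <class>model.Customer</class>
        <properties>
          <property name="eclipselink.target-database"
                    value="org.eclipse.persistence.nosql.adapters.mongo.MongoPlatform"/>
          <property name="eclipselink.nosql.connection-spec"
                    value="org.eclipse.persistence.nosql.adapters.mongo.MongoConnectionSpec"/>
          <property name="eclipselink.nosql.property.mongo.db" value="mydb"/>
        </properties>
      </persistence-unit>
    </persistence>

The application would then obtain one EntityManager per unit, e.g. Persistence.createEntityManagerFactory("mongo-example") for the document side and an injected @PersistenceContext(unitName = "relational-example") EntityManager for the relational side, and run ordinary JPA operations against each.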

    Read the article
