Search Results

Search found 3040 results on 122 pages for 'detail'.


  • NHibernate AssertException: Interceptor.OnPrepareStatement(SqlString) returned null or empty SqlString.

    - by jwynveen
    I am trying to switch a table from being a many-to-one mapping to being many-to-many with an intermediate mapping table. However, when I switched it over and tried to query it with NHibernate, it gives me this error: "Interceptor.OnPrepareStatement(SqlString) returned null or empty SqlString." My query was originally something more complex, but I switched it to a basic fetch-all and I'm still having the problem:

        Session.QueryOver<T>().Future();

    It would seem to be a problem either in my model mapping files or in my database. Here are my model mappings:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="GBI.Core" namespace="GBI.Core.Models">
          <class name="Market" table="gbi_Market">
            <id name="Id" column="MarketId">
              <generator class="identity" />
            </id>
            <property name="Name" />
            <property name="Url" />
            <property name="Description" type="StringClob" />
            <property name="Rating" />
            <property name="RatingComment" />
            <property name="RatingCommentedOn" />
            <many-to-one name="RatingCommentedBy" column="RatingCommentedBy" lazy="proxy"></many-to-one>
            <property name="ImageFilename" />
            <property name="CreatedOn" />
            <property name="ModifiedOn" />
            <property name="IsDeleted" />
            <many-to-one name="CreatedBy" column="CreatedBy" lazy="proxy"></many-to-one>
            <many-to-one name="ModifiedBy" column="ModifiedBy" lazy="proxy"></many-to-one>
            <set name="Content" where="IsDeleted=0 and ParentContentId is NULL" order-by="Ordering asc, CreatedOn asc, Name asc" lazy="extra">
              <key column="MarketId" />
              <one-to-many class="MarketContent" />
            </set>
            <set name="FastFacts" where="IsDeleted=0" order-by="Ordering asc, CreatedOn asc, Name asc" lazy="extra">
              <key column="MarketId" />
              <one-to-many class="MarketFastFact" />
            </set>
            <set name="NewsItems" table="gbi_NewsItem_Market_Map" lazy="true">
              <key column="MarketId" />
              <many-to-many class="NewsItem" fetch="join" column="NewsItemId" where="IsDeleted=0"/>
            </set>
            <!--<set name="MarketUpdates" table="gbi_Market_MarketUpdate_Map" lazy="extra">
              <key column="MarketId" />
              <many-to-many class="MarketUpdate" fetch="join" column="MarketUpdateId" where="IsDeleted=0" order-by="CreatedOn desc" />
            </set>-->
            <set name="Documents" table="gbi_Market_Document_Map" lazy="true">
              <key column="MarketId" />
              <many-to-many class="Document" fetch="join" column="DocumentId" where="IsDeleted=0"/>
            </set>
          </class>

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="GBI.Core" namespace="GBI.Core.Models">
          <class name="MarketUpdate" table="gbi_MarketUpdate">
            <id name="Id" column="MarketUpdateId">
              <generator class="identity" />
            </id>
            <property name="Description" />
            <property name="CreatedOn" />
            <property name="ModifiedOn" />
            <property name="IsDeleted" />
            <!--<many-to-one name="Market" column="MarketId" lazy="proxy"></many-to-one>-->
            <set name="Comments" where="IsDeleted=0" order-by="CreatedOn desc" lazy="extra">
              <key column="MarketUpdateId" />
              <one-to-many class="MarketUpdateComment" />
            </set>
            <many-to-one name="CreatedBy" column="CreatedBy" lazy="proxy"></many-to-one>
            <many-to-one name="ModifiedBy" column="ModifiedBy" lazy="proxy"></many-to-one>
          </class>

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="GBI.Core" namespace="GBI.Core.Models">
          <class name="MarketUpdateMarketMap" table="gbi_Market_MarketUpdate_Map">
            <id name="Id" column="MarketUpdateMarketMapId">
              <generator class="identity" />
            </id>
            <property name="CreatedOn" />
            <property name="ModifiedOn" />
            <property name="IsDeleted" />
            <many-to-one name="CreatedBy" column="CreatedBy" lazy="proxy"></many-to-one>
            <many-to-one name="ModifiedBy" column="ModifiedBy" lazy="proxy"></many-to-one>
            <many-to-one name="MarketUpdate" column="MarketUpdateId" lazy="proxy"></many-to-one>
            <many-to-one name="Market" column="MarketId" lazy="proxy"></many-to-one>
          </class>

    As I mentioned, MarketUpdate was originally a many-to-one with Market (the MarketId column is still in there, but I'm ignoring it; could this be a problem?), but I've added the Market_MarketUpdate_Map table to make it a many-to-many. I'm running in circles trying to figure out what this could be. I couldn't find any reference to this error when searching, and it doesn't provide much detail. Using: NHibernate 2.2, .NET 4.0, SQL Server 2005.


  • Android ksoap: nested soap objects in request give error in response

    - by Smalesy
    I'm trying to do the following SOAP request on Android using ksoap. It contains a list of nested SOAP objects. However, I must be doing something wrong, as I get an error back. The request I am trying to generate is as follows:

        <?xml version="1.0" encoding="utf-8"?>
        <soap12:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap12="http://www.w3.org/2003/05/soap-envelope">
          <soap12:Body>
            <SetAttendanceMarks xmlns="http://hostname.net/">
              <strSessionToken>string</strSessionToken>
              <LessonMarks>
                <Count>int</Count>
                <LessonMarks>
                  <LessonMark>
                    <StudentId>int</StudentId>
                    <EventInstanceId>int</EventInstanceId>
                    <Mark>string</Mark>
                  </LessonMark>
                  <LessonMark>
                    <StudentId>int</StudentId>
                    <EventInstanceId>int</EventInstanceId>
                    <Mark>string</Mark>
                  </LessonMark>
                </LessonMarks>
              </LessonMarks>
            </SetAttendanceMarks>
          </soap12:Body>
        </soap12:Envelope>

    My code is as follows:

        public boolean setAttendanceMarks(List<Mark> list) throws Exception {
            boolean result = false;
            String methodName = "SetAttendanceMarks";
            String soapAction = getHost() + "SetAttendanceMarks";

            SoapObject lessMarksN = new SoapObject(getHost(), "LessonMarks");
            for (Mark m : list) {
                PropertyInfo smProp = new PropertyInfo();
                smProp.setName("LessonMark");
                smProp.setValue(m);
                smProp.setType(Mark.class);
                lessMarksN.addProperty(smProp);
            }

            PropertyInfo cProp = new PropertyInfo();
            cProp.setName("Count");
            cProp.setValue(list.size());
            cProp.setType(Integer.class);

            SoapObject lessMarks = new SoapObject(getHost(), "LessonMarks");
            lessMarks.addProperty(cProp);
            lessMarks.addSoapObject(lessMarksN);

            PropertyInfo sProp = new PropertyInfo();
            sProp.setName("strSessionToken");
            sProp.setValue(mSession);
            sProp.setType(String.class);

            SoapObject request = new SoapObject(getHost(), methodName);
            request.addProperty(sProp);
            request.addSoapObject(lessMarks);

            SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER12);
            envelope.dotNet = true;
            envelope.setOutputSoapObject(request);

            HttpTransportSE androidHttpTransport = new HttpTransportSE(getURL());
            androidHttpTransport.debug = true;
            androidHttpTransport.call(soapAction, envelope);
            String a = androidHttpTransport.requestDump;
            String b = androidHttpTransport.responseDump;

            SoapObject resultsRequestSOAP = (SoapObject) envelope.bodyIn;
            SoapObject res = (SoapObject) resultsRequestSOAP.getProperty(0);
            String resultStr = res.getPropertyAsString("Result");
            if (resultStr.contentEquals("OK")) {
                result = true;
            }
            return result;
        }

    The error I get is as follows:

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <soap:Body>
            <soap:Fault>
              <soap:Code>
                <soap:Value>soap:Sender</soap:Value>
              </soap:Code>
              <soap:Reason>
                <soap:Text xml:lang="en">Server was unable to read request. ---&gt; There is an error in XML document (1, 383). ---&gt; The specified type was not recognized: name='LessonMarks', namespace='http://gsdregapp.net/', at &lt;LessonMarks xmlns='http://gsdregapp.net/'&gt;.</soap:Text>
              </soap:Reason>
              <soap:Detail />
            </soap:Fault>
          </soap:Body>
        </soap:Envelope>

    Can anybody tell me what I am doing wrong? I will be most grateful for any assistance!
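
    One common cause of "The specified type was not recognized" with ksoap2 is that the nested types were never registered with the envelope, so their elements are emitted under a name/namespace the server doesn't expect. A hedged sketch of that fix (it assumes Mark implements org.ksoap2.serialization.KvmSerializable, which ksoap2 requires for nested object serialization):

        // Register the nested type so ksoap2 can write and read
        // <LessonMark> elements in the service's namespace.
        SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER12);
        envelope.dotNet = true;
        envelope.addMapping(getHost(), "LessonMark", Mark.class);
        envelope.setOutputSoapObject(request);

    It's also worth checking that the namespace passed to addMapping and to the SoapObject constructors matches the one in the fault (http://gsdregapp.net/) exactly, including the trailing slash.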


  • When splitting MP4s with ffmpeg, how do I include metadata?

    - by Josh
    I have a few MP4s that I want to upload to my Flickr account, but Flickr has a maximum size of 500 MB and mine are only about 550 MB, so I was planning to simply split them in half and then upload them. However, I want to make sure all the metadata is included, and it does not seem to be. I have tried each of the following with no luck (at the end of this post I include the original and the new ffprobe outputs):

        ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -map_metadata 0:0 SANY0069A.MP4

        ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -map_meta_data SANY0069.MP4:SANY0069A.MP4 SANY0069A.MP4

    With this one I manually reproduced the individual meta tags that I took from this command:

        ffmpeg -i SANY0069A.MP4 -f ffmetadata meta.txt

        ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -metadata major_brand="mp42" -metadata minor_version="1" -metadata compatible_brands="mp42avc1" -metadata creation_time="2012-09-29 09:05:50" -metadata comment="SANYO DIGITAL CAMERA CA9" -metadata comment-eng="SANYO DIGITAL CAMERA CA9" SANY0069A.MP4

    Using the output of the former command I also tried this:

        ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -f ffmetadata -i meta.txt SANY0069A.MP4

    Sample output from my first command (ffmpeg ... -map_metadata 0:0 SANY0069A.MP4):

        ffmpeg version 0.8.12, Copyright (c) 2000-2011 the FFmpeg developers
          built on Jun 13 2012 09:57:38 with gcc 4.6.3 20120306 (Red Hat 4.6.3-2)
          configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect
          libavutil    51.  9. 1 / 51.  9. 1
          libavcodec   53.  8. 0 / 53.  8. 0
          libavformat  53.  5. 0 / 53.  5. 0
          libavdevice  53.  1. 1 / 53.  1. 1
          libavfilter   2. 23. 0 /  2. 23. 0
          libswscale    2.  0. 0 /  2.  0. 0
          libpostproc  51.  2. 0 / 51.  2. 0
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069.MP4':
          Metadata:
            major_brand     : mp42
            minor_version   : 1
            compatible_brands: mp42avc1
            creation_time   : 2012-09-29 09:05:50
            comment         : SANYO DIGITAL CAMERA CA9
            comment-eng     : SANYO DIGITAL CAMERA CA9
          Duration: 00:08:38.71, start: 0.000000, bitrate: 9142 kb/s
            Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9007 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc
            Metadata:
              creation_time   : 2012-09-29 09:05:50
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s
            Metadata:
              creation_time   : 2012-09-29 09:05:50
        File 'SANY0069A.MP4' already exists. Overwrite ? [y/N] y
        Output #0, mp4, to 'SANY0069A.MP4':
          Metadata:
            major_brand     : mp42
            minor_version   : 1
            compatible_brands: mp42avc1
            creation_time   : 2012-09-29 09:05:50
            comment         : SANYO DIGITAL CAMERA CA9
            comment-eng     : SANYO DIGITAL CAMERA CA9
            encoder         : Lavf53.5.0
            Stream #0.0(eng): Video: libx264, yuv420p, 1280x720 [PAR 1:1 DAR 16:9], q=2-31, 9007 kb/s, 30k tbn, 29.97 tbc
            Metadata:
              creation_time   : 2012-09-29 09:05:50
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, 127 kb/s
            Metadata:
              creation_time   : 2012-09-29 09:05:50
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop, [?] for help
        frame= 7773 fps=4644 q=-1.0 Lsize= 289607kB time=00:04:19.35 bitrate=9147.4kbits/s
        video:285416kB audio:4033kB global headers:0kB muxing overhead 0.054571%

    And finally, when I compare the ffprobe output of the original and of the first split part, I get the two following outputs.

    Original:

        ffprobe version 0.8.12, Copyright (c) 2007-2011 the FFmpeg developers
          built on Jun 13 2012 09:57:38 with gcc 4.6.3 20120306 (Red Hat 4.6.3-2)
          configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect
          libavutil    51.  9. 1 / 51.  9. 1
          libavcodec   53.  8. 0 / 53.  8. 0
          libavformat  53.  5. 0 / 53.  5. 0
          libavdevice  53.  1. 1 / 53.  1. 1
          libavfilter   2. 23. 0 /  2. 23. 0
          libswscale    2.  0. 0 /  2.  0. 0
          libpostproc  51.  2. 0 / 51.  2. 0
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069.MP4':
          Metadata:
            major_brand     : mp42
            minor_version   : 1
            compatible_brands: mp42avc1
            creation_time   : 2012-09-29 09:05:50
            comment         : SANYO DIGITAL CAMERA CA9
            comment-eng     : SANYO DIGITAL CAMERA CA9
          Duration: 00:08:38.71, start: 0.000000, bitrate: 9142 kb/s
            Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9007 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc
            Metadata:
              creation_time   : 2012-09-29 09:05:50
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s
            Metadata:
              creation_time   : 2012-09-29 09:05:50

    Split:

        ffprobe version 0.8.12, Copyright (c) 2007-2011 the FFmpeg developers
          built on Jun 13 2012 09:57:38 with gcc 4.6.3 20120306 (Red Hat 4.6.3-2)
          configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect
          libavutil    51.  9. 1 / 51.  9. 1
          libavcodec   53.  8. 0 / 53.  8. 0
          libavformat  53.  5. 0 / 53.  5. 0
          libavdevice  53.  1. 1 / 53.  1. 1
          libavfilter   2. 23. 0 /  2. 23. 0
          libswscale    2.  0. 0 /  2.  0. 0
          libpostproc  51.  2. 0 / 51.  2. 0
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069A.MP4':
          Metadata:
            major_brand     : isom
            minor_version   : 512
            compatible_brands: isomiso2avc1mp41
            creation_time   : 1970-01-01 00:00:00
            encoder         : Lavf53.5.0
            comment         : SANYO DIGITAL CAMERA CA9
          Duration: 00:04:19.37, start: 0.000000, bitrate: 9146 kb/s
            Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9015 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc
            Metadata:
              creation_time   : 1970-01-01 00:00:00
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s
            Metadata:
              creation_time   : 1970-01-01 00:00:00

    I know this is incredibly long, but it's actually quite a simple question; I thought it would be best to provide as much detail as possible. Any advice here would be great. Thanks.
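
    For what it's worth, the -map_metadata syntax changed in later ffmpeg releases; on a current build the per-output form below is the usual way to carry global and per-stream tags across (a sketch, not verified against the 0.8 build used above). Note also that major_brand, minor_version and compatible_brands are container brands written fresh by the MP4 muxer rather than copyable tags, which is why they differ in the split file.

        # Hedged example for a newer ffmpeg: copy both streams, map global
        # metadata from input 0, and map stream metadata stream-by-stream.
        ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -c copy \
               -map_metadata 0 -map_metadata:s:v 0:s:v -map_metadata:s:a 0:s:a \
               SANY0069A.MP4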


  • Suspended Laptop Cannot Wake Up - Ubuntu

    - by Zack
    I've got an ASUS G73JH, and whenever I suspend or hibernate it, it will not wake up. The screen stays backlit but black. The fan keeps running, but the HDD does not; no disk activity is audible (it's not an SSD). I can't:

    - Wake it with the keyboard
    - Wake it with the mouse
    - Soft power-off by pressing the power button
    - Change virtual terminals by pressing Ctrl-Alt-#
    - Restart X by pressing Ctrl-Alt-Backspace

    I have to hold down the power button and shut it down that way, which seems a little unreasonable. Is there a place I could look for more detail as to what's causing this? Is there a known quick fix for this issue? Nothing is logged as happening while the system is in "suspend" mode. Here's what happened immediately before and after the suspend "happened"; note the time gap:

        May  4 17:46:13 tofu NetworkManager: <info> (eth0): carrier now OFF (device state 1)
        May  4 17:48:57 tofu kernel: imklog 4.2.0, log source = /proc/kmsg started.

    This one's kinda long; here's what happened immediately before the suspend. I'm not sure if it'll help, but maybe you can find a use for it:

        May  4 17:46:10 tofu anacron[3353]: Anacron 2.3 started on 2010-05-04
        May  4 17:46:10 tofu anacron[3353]: Normal exit (0 jobs run)
        May  4 17:46:10 tofu kernel: [ 2241.775927] CPU0 attaching NULL sched-domain.
        May  4 17:46:10 tofu kernel: [ 2241.775958] CPU1 attaching NULL sched-domain.
        May  4 17:46:10 tofu kernel: [ 2241.775987] CPU2 attaching NULL sched-domain.
        May  4 17:46:10 tofu kernel: [ 2241.776138] CPU3 attaching NULL sched-domain.
        May  4 17:46:10 tofu kernel: [ 2241.776168] CPU4 attaching NULL sched-domain.
        May  4 17:46:10 tofu kernel: [ 2241.776197] CPU5 attaching NULL sched-domain.
        May  4 17:46:10 tofu kernel: [ 2241.776200] CPU6 attaching NULL sched-domain.
        May  4 17:46:10 tofu kernel: [ 2241.776229] CPU7 attaching NULL sched-domain.
        May  4 17:46:10 tofu kernel: [ 2241.919611] CPU0 attaching sched-domain:
        May  4 17:46:10 tofu kernel: [ 2241.919668]  domain 0: span 0,4 level SIBLING
        May  4 17:46:10 tofu kernel: [ 2241.919699]   groups: 0 (cpu_power = 589) 4 (cpu_power = 589)
        May  4 17:46:10 tofu kernel: [ 2241.919733]   domain 1: span 0-7 level MC
        May  4 17:46:10 tofu kernel: [ 2241.919762]    groups: 0,4 (cpu_power = 1178) 1,5 (cpu_power = 1178) 2,6 (cpu_power = 1178) 3,7 (cpu_power = 1178)
        May  4 17:46:10 tofu kernel: [ 2241.919850] CPU1 attaching sched-domain:
        May  4 17:46:10 tofu kernel: [ 2241.919852]  domain 0: span 1,5 level SIBLING
        May  4 17:46:10 tofu kernel: [ 2241.919881]   groups: 1 (cpu_power = 589) 5 (cpu_power = 589)
        May  4 17:46:10 tofu kernel: [ 2241.919912]   domain 1: span 0-7 level MC
        May  4 17:46:10 tofu kernel: [ 2241.919915]    groups: 1,5 (cpu_power = 1178) 2,6 (cpu_power = 1178) 3,7 (cpu_power = 1178) 0,4 (cpu_power = 1178)
        May  4 17:46:10 tofu kernel: [ 2241.920003] CPU2 attaching sched-domain:
        May  4 17:46:10 tofu kernel: [ 2241.920005]  domain 0: span 2,6 level SIBLING
        May  4 17:46:10 tofu kernel: [ 2241.920033]   groups: 2 (cpu_power = 589) 6 (cpu_power = 589)
        May  4 17:46:10 tofu kernel: [ 2241.920065]   domain 1: span 0-7 level MC
        May  4 17:46:10 tofu kernel: [ 2241.920093]    groups: 2,6 (cpu_power = 1178) 3,7 (cpu_power = 1178) 0,4 (cpu_power = 1178) 1,5 (cpu_power = 1178)
        May  4 17:46:10 tofu kernel: [ 2241.920155] CPU3 attaching sched-domain:
        May  4 17:46:10 tofu kernel: [ 2241.920157]  domain 0: span 3,7 level SIBLING
        May  4 17:46:10 tofu kernel: [ 2241.920185]   groups: 3 (cpu_power = 589) 7 (cpu_power = 589)
        May  4 17:46:10 tofu kernel: [ 2241.920217]   domain 1: span 0-7 level MC
        May  4 17:46:10 tofu kernel: [ 2241.920245]    groups: 3,7 (cpu_power = 1178) 0,4 (cpu_power = 1178) 1,5 (cpu_power = 1178) 2,6 (cpu_power = 1178)
        May  4 17:46:10 tofu kernel: [ 2241.920307] CPU4 attaching sched-domain:
        May  4 17:46:10 tofu kernel: [ 2241.920335]  domain 0: span 0,4 level SIBLING
        May  4 17:46:10 tofu kernel: [ 2241.920337]   groups: 4 (cpu_power = 589) 0 (cpu_power = 589)
        May  4 17:46:10 tofu kernel: [ 2241.920368]   domain 1: span 0-7 level MC
        May  4 17:46:10 tofu kernel: [ 2241.920397]    groups: 0,4 (cpu_power = 1178) 1,5 (cpu_power = 1178) 2,6 (cpu_power = 1178) 3,7 (cpu_power = 1178)
        May  4 17:46:10 tofu kernel: [ 2241.920459] CPU5 attaching sched-domain:
        May  4 17:46:10 tofu kernel: [ 2241.920487]  domain 0: span 1,5 level SIBLING
        May  4 17:46:10 tofu kernel: [ 2241.920489]   groups: 5 (cpu_power = 589) 1 (cpu_power = 589)
        May  4 17:46:10 tofu kernel: [ 2241.920520]   domain 1: span 0-7 level MC
        May  4 17:46:10 tofu kernel: [ 2241.920549]    groups: 1,5 (cpu_power = 1178) 2,6 (cpu_power = 1178) 3,7 (cpu_power = 1178) 0,4 (cpu_power = 1178)
        May  4 17:46:10 tofu kernel: [ 2241.920611] CPU6 attaching sched-domain:
        May  4 17:46:10 tofu kernel: [ 2241.920639]  domain 0: span 2,6 level SIBLING
        May  4 17:46:10 tofu kernel: [ 2241.920641]   groups: 6 (cpu_power = 589) 2 (cpu_power = 589)
        May  4 17:46:10 tofu kernel: [ 2241.920699]   domain 1: span 0-7 level MC
        May  4 17:46:10 tofu kernel: [ 2241.920701]    groups: 2,6 (cpu_power = 1178) 3,7 (cpu_power = 1178) 0,4 (cpu_power = 1178) 1,5 (cpu_power = 1178)
        May  4 17:46:10 tofu kernel: [ 2241.920762] CPU7 attaching sched-domain:
        May  4 17:46:10 tofu kernel: [ 2241.920791]  domain 0: span 3,7 level SIBLING
        May  4 17:46:10 tofu kernel: [ 2241.920793]   groups: 7 (cpu_power = 589) 3 (cpu_power = 589)
        May  4 17:46:10 tofu kernel: [ 2241.920851]   domain 1: span 0-7 level MC
        May  4 17:46:10 tofu kernel: [ 2241.920853]    groups: 3,7 (cpu_power = 1178) 0,4 (cpu_power = 1178) 1,5 (cpu_power = 1178) 2,6 (cpu_power = 1178)
        May  4 17:46:12 tofu NetworkManager: <info> Sleeping...
        May  4 17:46:12 tofu NetworkManager: <info> (wlan0): now unmanaged
        May  4 17:46:12 tofu NetworkManager: <info> (wlan0): device state change: 8 -> 1 (reason 37)
        May  4 17:46:12 tofu NetworkManager: <info> (wlan0): deactivating device (reason: 37).
        May  4 17:46:12 tofu NetworkManager: <info> (wlan0): canceled DHCP transaction, dhcp client pid 1984
        May  4 17:46:12 tofu kernel: [ 2244.084515] wlan0: deauthenticating from 68:7f:74:23:02:ae by local choice (reason=3)
        May  4 17:46:12 tofu avahi-daemon[1176]: Withdrawing address record for 192.168.1.2 on wlan0.
        May  4 17:46:12 tofu avahi-daemon[1176]: Leaving mDNS multicast group on interface wlan0.IPv4 with address 192.168.1.2.
        May  4 17:46:12 tofu avahi-daemon[1176]: Interface wlan0.IPv4 no longer relevant for mDNS.
        May  4 17:46:12 tofu NetworkManager: <info> Policy set 'Auto eth0' (eth0) as default for routing and DNS.
        May  4 17:46:12 tofu NetworkManager: <info> (wlan0): cleaning up...
        May  4 17:46:12 tofu NetworkManager: <info> (wlan0): taking down device.
        May  4 17:46:12 tofu avahi-daemon[1176]: Withdrawing address record for 2002:4c6e:638a:0:1e4b:d6ff:fe78:951d on wlan0.
        May  4 17:46:12 tofu wpa_supplicant[1212]: CTRL-EVENT-DISCONNECTED - Disconnect event - remove keys
        May  4 17:46:13 tofu NetworkManager: <info> (eth0): now unmanaged
        May  4 17:46:13 tofu NetworkManager: <info> (eth0): device state change: 8 -> 1 (reason 37)
        May  4 17:46:13 tofu NetworkManager: <info> (eth0): deactivating device (reason: 37).
        May  4 17:46:13 tofu NetworkManager: <info> (eth0): canceled DHCP transaction, dhcp client pid 1559
        May  4 17:46:13 tofu NetworkManager: <WARN> check_one_route(): (eth0) error -34 returned from rtnl_route_del(): Sucess#012
        May  4 17:46:13 tofu avahi-daemon[1176]: Withdrawing address record for 192.168.1.3 on eth0.
        May  4 17:46:13 tofu avahi-daemon[1176]: Leaving mDNS multicast group on interface eth0.IPv4 with address 192.168.1.3.
        May  4 17:46:13 tofu avahi-daemon[1176]: Interface eth0.IPv4 no longer relevant for mDNS.
        May  4 17:46:13 tofu NetworkManager: <info> (eth0): cleaning up...
        May  4 17:46:13 tofu NetworkManager: <info> (eth0): taking down device.
        May  4 17:46:13 tofu avahi-daemon[1176]: Withdrawing address record for 2002:4c6e:638a:0:4a5b:39ff:fe0b:325d on eth0.
        May  4 17:46:13 tofu NetworkManager: <info> (eth0): carrier now OFF (device state 1)
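
    A first debugging step that often helps on Ubuntu releases of that era (a suggestion, assuming pm-utils is handling the suspend, which was the default then): trigger the suspend from a text console rather than from X, then read pm-utils' own trace, which records hook failures that never reach syslog.

        sudo pm-suspend                  # trigger the suspend from a console (Ctrl+Alt+F1)
        less /var/log/pm-suspend.log     # hook-by-hook record of the suspend/resume
        dmesg | tail -n 50               # kernel-side messages from the failed wake-up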


  • WCF Authentication on the Internet - HELP

    - by Eddie
    I have a WCF service using the basicHttpBinding. The service is targeted for production deployment in a DMZ on a Windows Server 2008 64-bit machine running IIS 7.0 that is not in an Active Directory domain. The service will be accessed by a business partner over the Internet with SSL protection. Originally I had built the service to use X.509 message authentication with wsHttpBinding, and after a lot of problems I punted and decided to back up and use basicHttp with UserName authentication. Result: the same exact, obscure error message as I received with certificate mode. The service works perfectly inside our domain with the exact same authentication, but as soon as I move it to the DMZ I get an error reading: "An unsecured or incorrectly secured fault was received from the other party. See the inner FaultException for the fault code and detail." The inner exception message is: "An error occurred when verifying security for the message." The service's web.config with binding configuration is as follows:

        <services>
          <service behaviorConfiguration="HSSanoviaFacade.Service1Behavior" name="HSSanoviaFacade.HSSanoviaFacade">
            <endpoint address="" binding="basicHttpBinding" contract="HSSanoviaFacade.IHSSanoviaFacade" bindingConfiguration="basicHttp">
              <identity>
                <dns value="localhost" />
              </identity>
            </endpoint>
            <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" />
            <host>
              <baseAddresses>
                <add baseAddress="https://FULLY QUALIFIED HOST NAME CHANGED TO PROTECT" />
              </baseAddresses>
            </host>
          </service>
        </services>
        <bindings>
          <basicHttpBinding>
            <binding name="basicHttp">
              <security mode="TransportWithMessageCredential">
                <message clientCredentialType="UserName" />
              </security>
            </binding>
          </basicHttpBinding>
        </bindings>
        <behaviors>
          <serviceBehaviors>
            <behavior name="HSSanoviaFacade.Service1Behavior">
              <serviceMetadata httpsGetEnabled="True" />
              <serviceDebug includeExceptionDetailInFaults="True" />
            </behavior>
          </serviceBehaviors>
        </behaviors>

    The test client's configuration that gets the error:

        <bindings>
          <basicHttpBinding>
            <binding name="BasicHttpBinding_IHSSanoviaFacade" closeTimeout="00:01:00" openTimeout="00:01:00"
                     receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false"
                     hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288"
                     maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                     useDefaultWebProxy="true">
              <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" />
              <security mode="TransportWithMessageCredential">
                <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
                <message clientCredentialType="UserName" algorithmSuite="Default" />
              </security>
            </binding>
          </basicHttpBinding>
        </bindings>
        <client>
          <endpoint address="https://HOST NAME CHANGED TO PROTECT" binding="basicHttpBinding"
                    bindingConfiguration="BasicHttpBinding_IHSSanoviaFacade" contract="MembersService.IHSSanoviaFacade"
                    name="BasicHttpBinding_IHSSanoviaFacade" />
        </client>

    As mentioned earlier, the service works perfectly on the domain, and the production IIS box is not on a domain. I have been tweaking and pulling my hair out for two weeks now and nothing seems to work. If anyone can help I would appreciate it, even a recommendation for a workaround for authentication. I'd rather not use a custom authentication scheme but use built-in SOAP capabilities. The credentials passed in through the proxy, i.e. proxy.ClientCredentials.UserName.UserName and proxy.ClientCredentials.UserName.Password, are valid accounts both on the internal domain in the test environment and as machine accounts on the DMZ IIS box.
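
    Off a domain, UserName message credentials are validated against Windows accounts by default, which is a plausible reason the same configuration fails only in the DMZ. A hedged sketch of the usual alternative (class and namespace names are hypothetical): plug in a custom validator and point serviceCredentials at it with userNamePasswordValidationMode="Custom".

        using System.IdentityModel.Selectors;
        using System.IdentityModel.Tokens;

        // Hypothetical validator: replaces the default Windows-account check,
        // so the DMZ box no longer needs matching domain or machine accounts.
        public class PartnerCredentialsValidator : UserNamePasswordValidator
        {
            public override void Validate(string userName, string password)
            {
                // Look the pair up in your own credential store here.
                if (userName != "partner" || password != "expected-secret")
                    throw new SecurityTokenException("Unknown username or password.");
            }
        }

        // Wire-up (web.config, inside the service's <behavior> element):
        //   <serviceCredentials>
        //     <userNameAuthentication userNamePasswordValidationMode="Custom"
        //       customUserNamePasswordValidatorType="MyNamespace.PartnerCredentialsValidator, MyAssembly" />
        //   </serviceCredentials>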


  • Pass a Delphi class to a C++ function/method that expects a class with __thiscall methods.

    - by Alan G.
    I have some MSVC++-compiled DLLs for which I have created COM-like ("lite") interfaces (abstract Delphi classes). Some of those classes have methods that take pointers to objects. These C++ methods are declared with the __thiscall calling convention (which I cannot change), which is just like __stdcall except that a this pointer is passed in the ECX register. I create the class instance in Delphi, then pass it to the C++ method. I can set breakpoints in Delphi and see the exposed __stdcall methods in my Delphi class being hit, but soon I get a STATUS_STACK_BUFFER_OVERRUN and the app has to exit. Is it possible to emulate/deal with __thiscall on the Delphi side of things? If I pass an object instantiated by the C++ system then all is good, and that object's methods are called (as would be expected), but this is useless: I need to pass Delphi objects.

    Edit 2010-04-19 18:12: This is what happens in more detail. The first method called (setLabel) exits with no error (though it's a stub method). The second method called (init) enters, then dies when it attempts to read the vol parameter.

    C++ side:

        #define SHAPES_EXPORT __declspec(dllexport) // just to show the value

        class SHAPES_EXPORT CBox {
        public:
            virtual ~CBox() {}
            virtual void init(double volume) = 0;
            virtual void grow(double amount) = 0;
            virtual void shrink(double amount) = 0;
            virtual void setID(int ID = 0) = 0;
            virtual void setLabel(const char* text) = 0;
        };

    Delphi side:

        IBox = class
        public
          procedure destroyBox; virtual; stdcall; abstract;
          procedure init(vol: Double); virtual; stdcall; abstract;
          procedure grow(amount: Double); virtual; stdcall; abstract;
          procedure shrink(amount: Double); virtual; stdcall; abstract;
          procedure setID(val: Integer); virtual; stdcall; abstract;
          procedure setLabel(text: PChar); virtual; stdcall; abstract;
        end;

        TMyBox = class(IBox)
        protected
          FVolume: Double;
          FID: Integer;
          FLabel: String;
        public
          constructor Create;
          destructor Destroy; override;
          // BEGIN virtual method implementation
          procedure destroyBox; override; stdcall;             // empty - don't need/want C++ to manage my Delphi objects, just call their methods
          procedure init(vol: Double); override; stdcall;      // FVolume := vol;
          procedure grow(amount: Double); override; stdcall;   // Inc(FVolume, amount);
          procedure shrink(amount: Double); override; stdcall; // Dec(FVolume, amount);
          procedure setID(val: Integer); override; stdcall;    // FID := val;
          procedure setLabel(text: PChar); override; stdcall;  // stub method; empty
          // END virtual method implementation
          property Volume: Double read FVolume;
          property ID: Integer read FID;
          property Label: String read FLabel;
        end;

    I would have half expected using stdcall alone to work, but something is messing up; I'm not sure what. Perhaps something to do with the ECX register being used? Help would be greatly appreciated.

    Edit 2010-04-19 17:42: Could it be that the ECX register needs to be preserved on entry and restored once the function exits? Is the this pointer required by C++? I'm probably just reaching at the moment, based on some intense Google searches. I found something related, but it seems to be dealing with the reverse of this issue.
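
    For what it's worth, one workaround sidesteps __thiscall entirely (a sketch, assuming a small C++ shim can be added to the DLL, which the post does not say is possible): implement the abstract class in C++ and forward each call to plain __stdcall callbacks supplied by Delphi, so the this-in-ECX convention never crosses the language boundary.

        // Hypothetical shim inside the C++ DLL: Delphi fills this table with
        // stdcall procedures; the adapter owns the __thiscall vtable and
        // forwards every virtual call to the Delphi side.
        struct BoxCallbacks {
            void (__stdcall *init)(void* self, double volume);
            void (__stdcall *grow)(void* self, double amount);
            void (__stdcall *shrink)(void* self, double amount);
            void (__stdcall *setID)(void* self, int id);
            void (__stdcall *setLabel)(void* self, const char* text);
        };

        class DelphiBoxAdapter : public CBox {
        public:
            DelphiBoxAdapter(void* self, const BoxCallbacks& cb) : self_(self), cb_(cb) {}
            virtual void init(double volume)        { cb_.init(self_, volume); }
            virtual void grow(double amount)        { cb_.grow(self_, amount); }
            virtual void shrink(double amount)      { cb_.shrink(self_, amount); }
            virtual void setID(int ID)              { cb_.setID(self_, ID); }
            virtual void setLabel(const char* text) { cb_.setLabel(self_, text); }
        private:
            void*        self_;  // opaque pointer to the Delphi instance
            BoxCallbacks cb_;
        };

    The C++ code then receives a DelphiBoxAdapter wherever it expects a CBox, while Delphi only ever implements ordinary stdcall routines.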


  • Yet another C# Deadlock Debugging Question

    - by Roo
    Hi all, I have a multi-threaded application built in C# using VS2010 Professional. It's quite a large application and we've experienced the classic GUI cross-threading and deadlock issues before, but in the past month we've noticed it appears to lock up when left idle for around 20-30 minutes. The application is unresponsive, and although it will repaint itself when other windows are dragged in front of it and over it, the GUI still appears to be locked. Interestingly (unlike when the GUI thread is kept busy for a considerable amount of time), the close, maximise and minimise buttons are also unresponsive, and when clicked the little "(Not Responding...)" text is not displayed in the title of the application, i.e. Windows still seems to think it's running fine. If I break/pause the application using the debugger and view the threads that are running, there are three threads of our managed code running, plus a few worker threads for which no source can be displayed. The three threads that run are:

    - The main/GUI thread
    - A thread that loops indefinitely
    - A thread that loops indefinitely

    If I step into threads 2 and 3, they appear to be looping correctly. They do not share locks (even with the main GUI thread) and they are not using the GUI thread at all. When stepping into the main/GUI thread, however, it's broken on Application.Run... This problem screams deadlock to me, but what I don't understand is: if it's deadlock, why can't I see the line of code the main/GUI thread is hanging on? Any help will be greatly appreciated! Let me know if you need more information... Cheers, Roo

    -----------------------------------------------------SOLUTION--------------------------------------------------

    Okay, so the problem is now solved. Thanks to everyone for their suggestions! Much appreciated! I've marked the answer that solved my initial problem of determining where on the main/UI thread the application hangs (I hadn't turned off the "Enable Just My Code" option). The overall issue I was experiencing was indeed deadlock, however. After obtaining the call stack and popping the top half of it into Google, I came across this, which explains exactly what I was experiencing: http://timl.net/ This references a lovely guide to debugging the issue: http://www.aaronlerch.com/blog/2008/12/15/debugging-ui/ This identified a control I was constructing off the GUI thread. I did know this, however, and was marshalling calls correctly, but what I didn't realise was that behind the scenes this control was subscribing to an event or set of events that are triggered when, e.g., a Windows session is unlocked or the screensaver exits. These calls are always made on the main/UI thread and were blocking because the control had been created on the incorrect thread. Kim explains in more detail here: http://krgreenlee.blogspot.com/2007/09/onuserpreferencechanged-hang.html In the end I found an alternative solution which did not require this control off the main/UI thread. That appears to have solved the problem and the application no longer hangs. I hope this helps anyone who's confronted by a similar problem. Thanks again to everyone on here who helped! (And indirectly, the delightful bloggers I've referenced above!) Roo

    -----------------------------------------------------SOLUTION II--------------------------------------------------

    Aren't threading issues delightful: you think you've solved it, and a month down the line it pops back up again. I still believe the solution above resolved an issue that would cause similar behaviour, but we encountered the problem again. As we spent a while debugging this, I thought I'd update this question with our (hopefully) final solution: the problem appears to have been a bug in the Infragistics components in the WinForms 2010.1 release (no hotfixes), which we had been running from around the time the freeze issue appeared (but we had also added a bunch of other stuff too). After upgrading to WinForms 2010.3, we've yet to reproduce the issue (deja vu). See my question here for a bit more information: http://stackoverflow.com/questions/4077822/net-4-0-and-the-dreaded-onuserpreferencechanged-hang Hans has given a nice summary of the general issue there. I hope this adds a little to the suggestions/information surrounding the notorious OnUserPreferenceChanged hang (or whatever you'd like to call it). Cheers, Roo
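
    For anyone landing here with the same hang: the trigger described above is a control handle created off the UI thread, which leaves a Microsoft.Win32.SystemEvents.UserPreferenceChanged subscription bound to the wrong thread. The usual preventive guard looks like this (a generic sketch; the method name is hypothetical):

        // Generic guard: route any control access back to the thread that
        // created the control, so no handle is ever created off the UI thread.
        private void SetStatusText(Control target, string text)
        {
            if (target.InvokeRequired)
            {
                target.BeginInvoke(new Action(() => SetStatusText(target, text)));
                return;
            }
            target.Text = text; // safe: we are on the control's owning thread
        }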


  • JavaScript snippet that populates the table

    - by kayn
    I would like to write a JavaScript snippet that populates the table based on the selection, rather than creating several detail panes and toggling their visibility. I tried to implement this using the following code, but it's not working as desired: firstly, it only works with Internet Explorer under certain conditions, and it just toggles the visibility of the detail panes. Below is my code:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <HTML>
        <HEAD>
        </HEAD>
        <BODY onLoad="tblTB_0.style.display=''; tblTB_1.style.display='none'; tblTB_2.style.display='none'; tblTB_3.style.display='none'">
        <center>
          <table>
            <tr>
              <td>
                <H1> <align="left"> Candi Colledge of Computing <br/>Course Page </H1>
              </td>
            </tr>
          </table>
        </center>
        <hr>
        </br>
        <H2><P STYLE="color: blue">Honours Courses.</H2>
        <left>
          <p><a href="" onclick="tblTB_1.style.display=''; tblTB_2.style.display='none'; tblTB_3.style.display='none'">Concurrent Programming</a> <br/>
          <a href="" onclick="tblTB_1.style.display='none'; tblTB_2.style.display=''; tblTB_3.style.display='none'">Simulation of Networks</a><br/>
          <a href="" onclick="tblTB_1.style.display='none'; tblTB_2.style.display='none'; tblTB_3.style.display=''">Advanced Computer Science Topics</a></p>
          <br>
          <table style="table-layout: fixed"; border=1>
            <colgroup>
              <col width="100px"><col width="150px"><col width="150px">
            </colgroup>
            <tbody id="tblTB_0">
              <tr>
                <td>Course Code</td>
                <td>Lecturer</td>
                <td>Hours/Week</td>
                <td>Credits</td>
              </tr>
            </tbody>
            <tbody id="tblTB_1">
              <tr>
                <td>RW 714</td>
                <td>Dr. kate</td>
                <td>2 hrs</td>
                <td>15</td>
              </tr>
            </tbody>
            <tbody id="tblTB_2">
              <tr>
                <td>RW 742</td>
                <td>Prof. Broz</td>
                <td>4 hrs</td>
                <td>10</td>
              </tr>
            </tbody>
            <tbody id="tblTB_3">
              <tr>
                <td>RW 716</td>
                <td>Consultant</td>
                <td>3 hrs</td>
                <td>12</td>
              </tr>
            </tbody>
          </table>
        </left>
        <br>
        </BODY>
        </HTML>
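
    A sketch of the data-driven version the question asks for (the ids and course data are taken from the markup above; this replaces the show/hide approach with a single detail tbody that is rewritten on each click):

        // Course data keyed by course code; one detail row is rebuilt
        // from this data on every selection.
        var courses = {
            "RW 714": ["RW 714", "Dr. kate",   "2 hrs", "15"],
            "RW 742": ["RW 742", "Prof. Broz", "4 hrs", "10"],
            "RW 716": ["RW 716", "Consultant", "3 hrs", "12"]
        };

        function showCourse(code) {
            var body = document.getElementById("courseDetail"); // a single <tbody>
            while (body.firstChild) body.removeChild(body.firstChild); // clear old row
            var row = document.createElement("tr");
            for (var i = 0; i < courses[code].length; i++) {
                var cell = document.createElement("td");
                cell.appendChild(document.createTextNode(courses[code][i]));
                row.appendChild(cell);
            }
            body.appendChild(row);
            return false; // stop the empty-href link from reloading the page
        }

        // Usage: <a href="" onclick="return showCourse('RW 714')">Concurrent Programming</a>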


  • C++ Sentinel/Count Controlled Loop beginning programming

    - by Bryan Hendricks
    Hello all, this is my first post. I'm working on a homework assignment with the following parameters.

    Piecework: workers are paid by the piece. Often, workers who produce a greater quantity of output are paid at a higher rate.

    - 1-199 pieces completed: $0.50 each
    - 200-399: $0.55 each (for all pieces)
    - 400-599: $0.60 each
    - 600 or more: $0.65 each

    Input: for each worker, input the name and number of pieces completed.

        Name              Pieces
        Johnny Begood     265
        Sally Great       650
        Sam Klutz         177
        Pete Precise      400
        Fannie Fantastic  399
        Morrie Mellow     200

    Output: print an appropriate title and column headings. There should be one detail line for each worker, which shows the name, number of pieces, and the amount earned. Compute and print totals of the number of pieces and the dollar amount earned.

    Processing: for each person, compute the pay earned by multiplying the number of pieces by the appropriate price. Accumulate the total number of pieces and the total dollar amount paid.

    Sample program output:

        Piecework Weekly Report
        Name              Pieces    Pay
        Johnny Begood     265       145.75
        Sally Great       650       422.50
        Sam Klutz         177       88.5
        Pete Precise      400       240.00
        Fannie Fantastic  399       219.45
        Morrie Mellow     200       110.00
        Totals            2091      1226.20

    You are required to code, compile, link, and run a sentinel-controlled loop program that transforms the input to the output specifications shown above. The input items should be entered into a text file named piecework1.dat and the output stored in piecework1.out. The program filename is piecework1.cpp. Copies of these three files should be e-mailed to me in their original form. Read the name using a single variable as opposed to two different variables. To accomplish this, you must use the getline(stream, variable) function as discussed in class, except that you will replace cin with your text-file stream variable name. Do not forget to code the compiler directive #include <string> at the top of your program to acknowledge the utilization of the string variable name. Your nested if-else statement, accumulators, and count-controlled loop should be properly designed to process the data correctly.

    The code below will run, but does not produce any output. I think it needs something around line 57, like a count control to stop the loop, something like this (and this is just an example, which is why it is not in the code):

        count = 1;
        while (count <= 4)

    Can someone review the code and tell me what kind of count I need to introduce, and whether there are any other changes that need to be made? Thanks. A sketch of one possible fix follows the listing.

        //COS 502-90
        //November 2, 2012
        //This program uses a sentinel-controlled loop that transforms input to output.

        #include <iostream>
        #include <fstream>
        #include <iomanip>   // output formatting
        #include <string>    // string variables

        using namespace std;

        int main()
        {
            double pieces;   // number of pieces made
            double rate;     // amount paid per amount produced
            double pay;      // amount earned
            string name;     // name of worker

            ifstream inFile;
            ofstream outFile;

            //*********** input statements ****************************
            inFile.open("Piecework1.txt");    // opens the input text file
            outFile.open("piecework1.out");   // opens the output text file

            outFile << setprecision(2) << showpoint;
            outFile << name << setw(6) << "Pieces" << setw(12) << "Pay" << endl;
            outFile << "_____" << setw(6) << "_____" << setw(12) << "_____" << endl;

            getline(inFile, name, '*');       // priming read
            inFile >> pieces >> pay >> rate;

            while (name != "End of File")     // while condition test
            {   // beginning of loop
                pay = pieces * rate;
                getline(inFile, name, '*');   // get next name
                inFile >> pieces;             // get next pieces
            }   // end of loop

            inFile.close();
            outFile.close();
            return 0;
        }
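
    A hedged sketch of the core fix (variable names follow the post; the data file is assumed to delimit names with '*', as the post's own getline calls imply): let the read itself end the loop instead of a count, derive rate from the nested if-else the assignment asks for, and write the detail line and totals.

        // Corrected core loop (sketch): read a name up to '*', then pieces;
        // stop when the stream runs out of records.
        double totalPieces = 0.0, totalPay = 0.0;
        while (getline(inFile, name, '*')) {
            if (!(inFile >> pieces)) break;        // sentinel: end of data
            if      (pieces < 200) rate = 0.50;    // nested if-else sets the rate
            else if (pieces < 400) rate = 0.55;
            else if (pieces < 600) rate = 0.60;
            else                   rate = 0.65;
            pay = pieces * rate;
            totalPieces += pieces;
            totalPay    += pay;
            outFile << name << setw(8) << pieces << setw(12) << pay << endl;
            inFile >> ws;                          // skip the newline before the next name
        }
        outFile << "Totals" << setw(8) << totalPieces << setw(12) << totalPay << endl;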


  • CoreData: Same predicate (IN) returns different fetched results after a Save operation

    - by Jason Lee
    I have the code below:

        NSArray *existedTasks = [[TaskBizDB sharedInstance] fetchTasksWatchedByMeOfProject:projectId];
        [context save:&error];
        existedTasks = [[TaskBizDB sharedInstance] fetchTasksWatchedByMeOfProject:projectId];
        NSArray *allTasks = [[TaskBizDB sharedInstance] fetchTasksOfProject:projectId];

    The first line returns two objects; the second line saves the context; the third line returns just one object, which is contained in the 'two objects' above; and the last line returns six objects, containing the 'two objects' returned at the first line. The fetch interface works like this:

        WXModel *model = [WXModel modelWithEntity:NSStringFromClass([WQPKTeamTask class])];
        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"(%@ IN personWatchers) AND (projectId == %d)", currentLoginUser, projectId];
        [model setPredicate:predicate];
        NSArray *fetchedTasks = [model fetch];
        if (fetchedTasks.count == 0)
            return nil;
        return fetchedTasks;

    What confuses me is: with the same fetch request, why do I get different results just after a save? Here comes more detail. The 'two objects' returned at the first line are:

        <WQPKTeamTask: 0x1b92fcc0> (entity: WQPKTeamTask; id: 0x1b9300f0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p9> ; data: {
            projectId = 372004;
            taskId = 338001;
            personWatchers = ( "0xf0bf440 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WWPerson/p1>" );
        })
        <WQPKTeamTask: 0xf3f6130> (entity: WQPKTeamTask; id: 0xf3cb8d0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p11> ; data: {
            projectId = 372004;
            taskId = 340006;
            personWatchers = ( "0xf0bf440 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WWPerson/p1>" );
        })

    And the only object returned at the third line is:

        <WQPKTeamTask: 0x1b92fcc0> (entity: WQPKTeamTask; id: 0x1b9300f0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p9> ; data: {
            projectId = 372004;
            taskId = 338001;
            personWatchers = ( "0xf0bf440 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WWPerson/p1>" );
        })

    Printing description of allTasks:

        <_PFArray 0xf30b9a0>(
        <WQPKTeamTask: 0xf3ab9d0> (entity: WQPKTeamTask; id: 0xf3cda40 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p6> ; data: <fault>),
        <WQPKTeamTask: 0xf315720> (entity: WQPKTeamTask; id: 0xf3c23a0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p7> ; data: <fault>),
        <WQPKTeamTask: 0xf3a1ed0> (entity: WQPKTeamTask; id: 0xf3cda30 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p8> ; data: <fault>),
        <WQPKTeamTask: 0x1b92fcc0> (entity: WQPKTeamTask; id: 0x1b9300f0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p9> ; data: {
            projectId = 372004;
            taskId = 338001;
            personWatchers = ( "0xf0bf440 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WWPerson/p1>" );
        }),
        <WQPKTeamTask: 0xf325e50> (entity: WQPKTeamTask; id: 0xf343820 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p10> ; data: <fault>),
        <WQPKTeamTask: 0xf3f6130> (entity: WQPKTeamTask; id: 0xf3cb8d0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p11> ; data: {
            projectId = 372004;
            taskId = 340006;
            personWatchers = ( "0xf0bf440 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WWPerson/p1>" );
        })
        )

    UPDATE 1: If I call the same interface, fetchTasksWatchedByMeOfProject:, inside

        #pragma mark - NSFetchedResultsController Delegate
        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {

    I will get 'two objects' as well.

    UPDATE 2: I've tried:

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"(ANY personWatchers == %@) AND (projectId == %d)", currentLoginUser, projectId];
        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"(ANY personWatchers.personId == %@) AND (projectId == %d)", currentLoginUserId, projectId];

    Still the same result.

    UPDATE 3: I've checked save:&error; error is nil.


  • Swap image with jquery and show zoom image

    - by Neil Bradley
    Hi there, on my site I have 4 thumbnail product images that, when clicked, swap the main image. This part is working okay. However, on the main image I'm also trying to use the jQZoom script. The zoom script works for the most part, except that the zoomed image always displays the zoom of the first image rather than the one selected. This can be seen in action here: http://www.wearecapital.com/productdetails-new.asp?id=6626 I was wondering if someone might be able to suggest a solution? My code for the page is here:

        <%
        if session("qstring") = "" then session("qstring") = "&amp;rf=latest"
        maxProducts = 6
        prodID = request("id")
        if prodID = "" or not isnumeric(prodid) then
            response.Redirect("listproducts.asp?err=1" & session("qstring"))
        else
            prodId = cint(prodId)
        end if
        SQL = "Select * from products,subcategories,labels where subcat_id = prod_subcategory and label_id = prod_label and prod_id = " & prodID
        set conn = server.CreateObject("ADODB.connection")
        conn.Open(Application("DATABASE"))
        set rs = conn.Execute(SQL)
        if rs.eof then ' product is not valid
            name = "Error - product id " & prodID & " is not available"
        else
            image1 = rs.fields("prod_image1")
            image1Desc = rs.fields("prod_image1Desc")
            icon = rs.fields("prod_icon")
            subcat = rs.fields("prod_subcategory")
            image2 = rs.fields("prod_image2")
            image2Desc = rs.fields("prod_image2Desc")
            image3 = rs.fields("prod_image3")
            image3Desc = rs.fields("prod_image3Desc")
            image4 = rs.fields("prod_image4")
            image4Desc = rs.fields("prod_image4Desc")
            zoomimg = rs.Fields("prod_zoomimg")
            zoomimg2 = rs.Fields("prod_zoomimg2")
            zoomimg3 = rs.Fields("prod_zoomimg3")
            zoomimg4 = rs.Fields("prod_zoomimg4")
            thumb1 = rs.fields("prod_preview1").value
            thumb2 = rs.fields("prod_preview2").value
            thumb3 = rs.fields("prod_preview3").value
            thumb4 = rs.fields("prod_preview4").value
        end if
        set rs = nothing
        conn.Close
        set conn = nothing
        %>
        <!-- #include virtual="/includes/head-product.asp" -->
        <body id="detail">
        <!-- #include virtual="/includes/header.asp" -->
        <script type="text/javascript" language="javascript">
        function switchImg(imgName) {
            var ImgX = document.getElementById("mainimg");
            ImgX.src = "/images/products/" + imgName;
        }
        </script>
        <script type="text/javascript">
        $(document).ready(function(){
            var options = {
                zoomWidth: 466,
                zoomHeight: 260,
                xOffset: 34,
                yOffset: 0,
                title: false,
                position: "right"
                //and MORE OPTIONS
            };
            $(".MYCLASS").jqzoom(options);
        });
        </script>
        <!-- #include virtual="/includes/nav.asp" -->
        <div id="column-left">
          <div id="main-image">
            <% if oldie = false then %><a href="/images/products/<%=zoomimg%>" class="MYCLASS" title="MYTITLE"><img src="/images/products/<%=image1%>" title="IMAGE TITLE" name="mainimg" id="mainimg" style="width:425px; height:638px;" ></a><% end if %>
          </div>
        </div>
        <div id="column-right">
          <div id="altviews">
            <h3 class="altviews">Alternative Views</h3>
            <ul>
              <%
              if oldie = false then
                  writeThumb thumb1,image1,zoomimg,image1desc
                  writeThumb thumb2,image2,zoomimg2,image2desc
                  writeThumb thumb3,image3,zoomimg3,image3desc
                  writeThumb thumb4,image4,zoomimg4,image4desc
              end if
              %>
            </ul>
          </div>
        </div>
        <!-- #include virtual="/includes/footer-test.asp" -->
        <%
        sub writeThumb(thumbfile, imgfile, zoomfile, thumbdesc)
            response.Write "<li>"
            if thumbfile <> "65/default_preview.jpg" and thumbfile <> "" and not isnull(thumbfile) then
                if imgFile <> "" and not isnull(imgfile) then rimgfile = replace(imgfile,"/","//") else rimgfile = ""
                if thumbdesc <> "" and not isnull(thumbdesc) then rDescription = replace(thumbdesc,"""","&quot;") else rDescription = ""
                response.write "<img src=""/images/products/"& thumbfile &""" style=""cursor: pointer"" border=""0"" style=""width:65px; height:98px;"" title="""& rDescription &""" onclick=""switchImg('" & rimgfile & "')"" />" & vbcrlf
            else
                response.write "<img src=""/images/products/65/default_preview.jpg"" alt="""" />" & vbCrLF
            end if
            response.write "</li>" & vbCrLF
        end sub
        %>


  • Many-to-one relation exception due to closed session after loading

    - by Nick Thissen
    Hi, I am using NHibernate (version 1.2.1) for the first time, so I wrote a simple test application (an ASP.NET project) that uses it. In my database I have two tables: Persons and Categories. Each person gets one category; seems easy enough.

        | Persons      |     | Categories   |
        |--------------|     |--------------|
        | Id (PK)      |     | Id (PK)      |
        | Firstname    |     | CategoryName |
        | Lastname     |     | CreatedTime  |
        | CategoryId   |     | UpdatedTime  |
        | CreatedTime  |     | Deleted      |
        | UpdatedTime  |
        | Deleted      |

    The Id, CreatedTime, UpdatedTime and Deleted attributes are a convention I use in all my tables, so I have tried to bring this fact into an additional abstraction layer. I have a project DatabaseFramework which has three important classes:

    - Entity: an abstract class that defines these four properties. All 'entity objects' (in this case Person and Category) must inherit Entity.
    - IEntityManager: a generic interface (type parameter as Entity) that defines methods like Load, Insert, Update, etc.
    - NHibernateEntityManager: an implementation of this interface using NHibernate to do the loading, saving, etc.

    Now, the Person and Category classes are straightforward; they just define the attributes of the tables (keeping in mind that four of them are in the base Entity class). Since the Persons table is related to the Categories table via the CategoryId attribute, the Person class has a Category property that holds the related category. However, in my web page I will also need the name of this category (CategoryName), for data-binding purposes for example. So I created an additional property CategoryName that returns the CategoryName property of the current Category property, or an empty string if the Category is null:

        Namespace Database
          Public Class Person
            Inherits DatabaseFramework.Entity

            Public Overridable Property Firstname As String
            Public Overridable Property Lastname As String
            Public Overridable Property Category As Category

            Public Overridable ReadOnly Property CategoryName As String
              Get
                Return If(Me.Category Is Nothing, _
                          String.Empty, _
                          Me.Category.CategoryName)
              End Get
            End Property
          End Class
        End Namespace

    I am mapping the Person class using this mapping file. The many-to-one relation was suggested by Yads in another thread:

        <id name="Id" column="Id" type="int" unsaved-value="0">
          <generator class="identity" />
        </id>
        <property name="CreatedTime" type="DateTime" not-null="true" />
        <property name="UpdatedTime" type="DateTime" not-null="true" />
        <property name="Deleted" type="Boolean" not-null="true" />
        <property name="Firstname" type="String" />
        <property name="Lastname" type="String" />
        <many-to-one name="Category" column="CategoryId" class="NHibernateWebTest.Database.Category, NHibernateWebTest" />

    (I can't get it to show the root node; this forum hides it, and I don't know how to escape the HTML-like tags.)

    The final important detail is the Load method of the NHibernateEntityManager implementation (this is in C# as it's in a different project, sorry about that). I simply open a new ISession (ISessionFactory.OpenSession) in the GetSession method and then use it to fill an EntityCollection(Of TEntity), which is just a collection inheriting System.Collections.ObjectModel.Collection(Of T):

        public virtual EntityCollection<TEntity> Load()
        {
            using (ISession session = this.GetSession())
            {
                var entities = session
                    .CreateCriteria(typeof(TEntity))
                    .Add(Expression.Eq("Deleted", false))
                    .List<TEntity>();

                return new EntityCollection<TEntity>(entities);
            }
        }

    Now, the idea of this Load method is that I get a fully functional collection of Persons, all their properties set to the correct values (including the Category property, and thus the CategoryName property should return the correct name). However, it seems that is not the case. When I try to data-bind the result of this Load method to a GridView in ASP.NET, it tells me this:

        Property accessor 'CategoryName' on object 'NHibernateWebTest.Database.Person' threw the following exception:
        'Could not initialize proxy - the owning Session was closed.'

    The exception occurs on the DataBind method call here:

        public virtual void LoadGrid()
        {
            if (this.Grid == null) return;
            this.Grid.DataSource = this.Manager.Load();
            this.Grid.DataBind();
        }

    Well, of course the session is closed; I closed it via the using block. Isn't that the correct approach? Should I keep the session open? And for how long? Can I close it after the DataBind method has been run? In any case, I'd really like my Load method to just return a functional collection of items. It seems to me that it is only fetching the Category when it is required (e.g. when the GridView wants to read the CategoryName, which wants to read the Category property), but by that time the session is closed. Is that reasoning correct? How do I stop this behavior? Or shouldn't I? And what should I do otherwise? Thanks!
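
    The standard way out, given the design above, is to resolve the association eagerly while the session is still open, so data binding never triggers a lazy load afterwards. A hedged variant of the Load method (the "Category" path obviously only applies to the Person manager; a generic manager would need the eager paths made configurable):

        public virtual EntityCollection<TEntity> Load()
        {
            using (ISession session = this.GetSession())
            {
                var entities = session
                    .CreateCriteria(typeof(TEntity))
                    .Add(Expression.Eq("Deleted", false))
                    // Join-fetch the association before the session closes.
                    .SetFetchMode("Category", FetchMode.Eager)
                    .List<TEntity>();

                return new EntityCollection<TEntity>(entities);
            }
        }

    Alternatives are mapping the association with lazy="false", or keeping a session open for the whole request (the session-per-request pattern) and disposing it after DataBind has run.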


  • Quick guide to Oracle IRM 11g: Configuring SSL

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    So far in this guide we have an IRM server up and running; however, I skipped over SSL configuration in the previous article because I wanted to cover it in more detail now. You can, if you wish, not bother with setting up SSL, but considering this is a security technology it is worthwhile doing.

    Contents:

    - Setting up a one-way, self-signed SSL certificate in WebLogic
    - Setting up an official SSL certificate in Apache 2.x
    - Configuring Apache to proxy traffic to the IRM server

    There are two common scenarios in which an Oracle IRM server is configured. For a development or evaluation system, people usually communicate directly with the WebLogic Server running the IRM service. However, in a production environment, and for some proof-of-concept evaluations that require a setup reflecting a production system, the traffic to the IRM server travels via a web server proxy, commonly Apache. In this guide we are building an Oracle Enterprise Linux based IRM service, and this article will go over the configuration of SSL in WebLogic and also in Apache. As in the past articles, we are going to use two host names in the configuration below:

    - irm.company.com will refer to the public Apache server
    - irm.company.internal will refer to the internal WebLogic IRM server

    Setting up a one-way, self-signed SSL certificate in WebLogic

    First let's look at creating a simple self-signed SSL certificate to be used in WebLogic. This is a quick and easy way to get SSL working in your environment; however, the downside is that no browsers are going to trust this certificate, and you'll need to manually install it onto any machine communicating with the server. This is fine for development or when you have only a few users evaluating the system, but for any significant use it's usually better to have a fully trusted certificate in use, and I explain that in the next section. For now, let's go through creating, installing and testing a self-signed certificate. We use a library in Java to create the certificates; open a console and run the following commands. Note that you should choose your own secure passwords wherever you see "password" below.

        [oracle@irm /] source /oracle/middleware/wlserver_10.3/server/bin/setWLSEnv.sh
        [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/
        [oracle@irm /] java utils.CertGen -selfsigned -certfile MyOwnSelfCA.cer -keyfile MyOwnSelfKey.key -keyfilepass password -cn "irm.oracle.demo"
        [oracle@irm /] java utils.ImportPrivateKey -keystore MyOwnIdentityStore.jks -storepass password -keypass password -alias trustself -certfile MyOwnSelfCA.cer.pem -keyfile MyOwnSelfKey.key.pem -keyfilepass password
        [oracle@irm /] keytool -import -trustcacerts -alias trustself -keystore TrustMyOwnSelf.jks -file MyOwnSelfCA.cer.der -keyalg RSA

    We now have two Java keystores, MyOwnIdentityStore.jks and TrustMyOwnSelf.jks. These contain the keys and certificates which we will use in WebLogic Server. Now we need to tell the IRM server to use these stores when setting up SSL connections for incoming requests. Make sure the Admin server is running, log into the WebLogic Console at http://irm.company.intranet:7001/console and do the following:

    - In the menu on the left, select the + next to Environment to expose the submenu, then click on Servers. You will see two servers in the list, AdminServer(admin) and IRM_server1.
If the IRM server is running, shut it down either by hitting CONTROL + C in the console window it was started from, or by switching to the CONTROL tab, selecting IRM_server1 and then selecting the Shutdown menu and Force Shutdown Now. In the Configuration tab select IRM_server1 and switch to the Keystores tab. By default WebLogic Server uses its own demo identity and trust; we are now going to switch to the self signed ones we've just created. Select the Change button, switch to Custom Identity and Custom Trust, and hit Save. Now we have to complete the resulting fields; the settings I've used in my evaluation server are below.

Identity
Custom Identity Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/MyOwnIdentityStore.jks
Custom Identity Keystore Type: JKS
Custom Identity Keystore Passphrase: password
Confirm Custom Identity Keystore Passphrase: password

Trust
Custom Trust Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/TrustMyOwnSelf.jks
Custom Trust Keystore Type: JKS
Custom Trust Keystore Passphrase: password
Confirm Custom Trust Keystore Passphrase: password

Now click on the SSL tab for IRM_server1 and enter the alias and passphrase; in my demo the details are:

Identity
Private Key Alias: trustself
Private Key Passphrase: password
Confirm Private Key Passphrase: password

And hit Save. Now let's test a connection to the IRM server over HTTPS using SSL. Go back to a console window and start the IRM server; a quick reminder on how to do this:

[oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/bin
[oracle@irm /] ./startManagedWeblogic IRM_server1

Once running, open a browser and head to the SSL port of the server. By default the IRM server will be listening on the URL https://irm.company.internal:16101/irm_rights. Note in the example image on the right the port is 7002, because it's a system that has the IRM services installed on the Admin server; this isn't typical (or advisable). Your system is going to have a separate managed server which will be listening on port 16101. Once you open this address you will notice that your browser complains that the server certificate is untrusted. The images on the right show how Firefox displays this error. You are going to be prompted every time you create a new SSL session with the server, both from the browser and, more annoyingly, from the IRM Desktop. If you plan on always using a self signed certificate, it is worth adding it to the Windows certificate store so that when you are accessing sealed content you do not keep being told this certificate is not trusted. Follow these instructions (which are for Internet Explorer 8; they may vary for your version of IE):

Start Internet Explorer and open the URL to your IRM server over SSL, e.g. https://irm.company.internal:16101/irm_rights. IE will complain about the certificate; click on Continue to this website (not recommended). From the IE Tools menu select Internet Options, and from the resulting dialog select Security, then click on Trusted Sites and then the Sites button. Add to the list of trusted sites a URL which matches the server you are accessing, e.g. https://irm.company.internal/, and select OK. Now refresh the page you were accessing; next to the URL you should see a red cross and the words Certificate Error. Click on this button and select View Certificates. You will now see a dialog with the details of the self signed certificate, and the Install Certificate... button should be enabled. Click on this to start the wizard. Click Next and you'll be asked where the certificate should be installed. Change the option to Place all certificates in the following store, select Browse, choose the Trusted Root Certification Authorities location and hit OK. You'll then be prompted to install the certificate; answer Yes. You also need to import the root signing certificate into the same location, so once again select the red Certificate Error option, and this time when viewing the certificate switch to the Certification Path tab, where you should see a CertGenCAB certificate. Select this, click on View Certificate, and go through the same process as above to import the certificate into the store. Finally, close all instances of the IE browser and re-access the IRM server URL; this time you should not receive any errors.

Setting up an official SSL certificate in Apache 2.x

At this point we now have an IRM server that you can communicate with over SSL. However, this certificate isn't trusted by any browser because its path of trust doesn't end in a recognized certificate authority (CA). Also, you are communicating directly with the WebLogic Server over a non-standard SSL port, 16101. In a production environment it is common to have another device handle the initial public internet traffic and then proxy it to the WebLogic server. The diagram below shows a very simplified view of this type of deployment. What I'm going to walk through next is configuring Apache to proxy traffic to a WebLogic server and also to use a real SSL certificate from an official CA.

The first step is to configure Apache to handle incoming requests over SSL. In this guide I am configuring the IRM service on Oracle Enterprise Linux 5 update 3 with Apache 2.2.3, which came with the OpenSSL and mod_ssl components. Before I purchase an SSL certificate, I need to generate a certificate request from the server. Oracle.com uses Verisign; for my own personal needs I use cheaper certificates from GoDaddy. The following instructions are specific to Apache, but there are many references out there for other web servers. For Apache I have OpenSSL and the commands are:

[oracle@irm /] cd /usr/bin
[oracle@irm bin] openssl genrsa -des3 -out irm-apache-server.key 2048
Generating RSA private key, 2048 bit long modulus
............................+++
.........+++
e is 65537 (0x10001)
Enter pass phrase for irm-apache-server.key:
Verifying - Enter pass phrase for irm-apache-server.key:

[oracle@irm bin] openssl req -new -key irm-apache-server.key -out irm-apache-server.csr
Enter pass phrase for irm-apache-server.key:
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank.
For some fields there will be a default value.
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Berkshire]:CA
Locality Name (eg, city) [Newbury]:San Francisco
Organization Name (eg, company) [My Company Ltd]:Oracle
Organizational Unit Name (eg, section) []:Security
Common Name (eg, your name or your server's hostname) []:irm.company.com
Email Address []:[email protected]

Please enter the following 'extra' attributes to be sent with your certificate request
A challenge password []:testing
An optional company name []:

You must make sure to remember the pass phrase you used in the initial key generation; you will need it when later configuring Apache. In the /usr/bin directory there are now two new files. irm-apache-server.csr contains our certificate request and is what you cut and paste, or upload, to your certificate authority when you purchase and validate your SSL certificate. In response you will typically get two files: your server certificate, and another certificate file that will likely contain a set of certificates from your CA which validate your certificate's trust. Next we need to configure Apache to use these files. Typically there is an ssl.conf file, which is where all the SSL configuration is done. On my Oracle Enterprise Linux server this file is located at /etc/httpd/conf.d/ssl.conf, and I've added the following lines:

<VirtualHost irm.company.com>
    # Setup SSL for irm.company.com
    ServerName irm.company.com
    SSLEngine On
    SSLCertificateFile /oracle/secure/irm.company.com.crt
    SSLCertificateKeyFile /oracle/secure/irm.company.com.key
    SSLCertificateChainFile /oracle/secure/gd_bundle.crt
</VirtualHost>

After restarting Apache (apachectl restart) I can now attempt to connect to the Apache server in a web browser, https://irm.company.com/. If all is configured correctly I should see an Apache test page delivered over HTTPS.

Configuring Apache to proxy traffic to the IRM server

The final piece in setting up SSL is to have Apache proxy requests for the IRM server, but do so securely. The requests to Apache will be over HTTPS using a legitimate certificate, but we can also configure Apache to proxy these requests internally across to the IRM server using SSL with the self signed certificate we generated at the start of this article. To do this proxying we use the WebLogic Web Server plugin for Apache, which you can download from Oracle. Download the zip file and extract it onto the server. The extraction reveals a set of zip files, each one specific to a supported web server. In my instance I am using Apache 2.2 32-bit on an Oracle Enterprise Linux, 64-bit server. If you are not sure what version your Apache server is, run the command /usr/sbin/httpd -V and you'll see the version and whether it is 32- or 64-bit. Mine is a 32-bit server, so I need to extract the file WLSPlugin1.1-Apache2.2-linux32-x86.zip. Then, from the resulting lib folder, copy the file mod_wl.so into /usr/lib/httpd/modules/. First we want to test that the plug-in will work for regular HTTP traffic. Edit the httpd.conf for Apache and add the following section at the bottom.
LoadModule weblogic_module modules/mod_wl.so

<IfModule mod_weblogic.c>
   WebLogicHost irm.company.internal
   WebLogicPort 16100
   WLLogFile /tmp/wl-proxy.log
</IfModule>

<Location /irm_rights>
   SetHandler weblogic-handler
</Location>

<Location /irm_desktop>
   SetHandler weblogic-handler
</Location>

<Location /irm_sealing>
   SetHandler weblogic-handler
</Location>

<Location /irm_services>
   SetHandler weblogic-handler
</Location>

Now restart Apache again (apachectl restart) and open a browser to http://irm.company.com/irm_rights. Apache will proxy the HTTP traffic from port 80 of your Apache server to the IRM service listening on port 16100 of the WebLogic managed server. Note above I have included all four of the locations you might wish to proxy: /irm_rights is the URL to the management website, /irm_desktop is the URL the IRM Desktop uses to communicate, /irm_sealing is for web services based document sealing, and /irm_services is for IRM server web services. The last two are typically only used when you have the IRM server integrated with another application, and it is unlikely you'd be accessing these resources from the public facing Apache server. However, just in case, I've mentioned them above.

Now let's enable SSL communication from Apache to WebLogic. In the ZIP file we extracted were some more modules we need to copy into the Apache folder. Looking back in the lib folder that we extracted, copy the following files into /usr/lib/httpd/modules/:

libwlssl.so
libnnz11.so
libclntsh.so.11.1

The documentation states that this should be all you need to do, but I found that I also needed to create an environment variable called LD_LIBRARY_PATH pointing to the folder /usr/lib/httpd/modules/. If I didn't do this, starting Apache with the WebLogic module configured for SSL would throw the error:

[crit] (20014)Internal error: WL SSL Init failed for server: (null) on 0

So I had to edit the file /etc/profile and add the following lines at the bottom. You may already have the LD_LIBRARY_PATH variable defined; in that case simply add this path to it.

LD_LIBRARY_PATH=/usr/lib/httpd/modules/
export LD_LIBRARY_PATH

The WebLogic plug-in uses an Oracle Wallet to store the required certificates. You'll need to copy the self signed certificate from the IRM server over to the Apache server. Copy MyOwnSelfCA.cer.der into the same folder where you are storing your public certificates, in my example /oracle/secure. It's worth mentioning these files should ONLY be readable by root (the user Apache runs as). Now let's create an Oracle Wallet and import the self signed certificate from the IRM server. The file orapki was included in the bin folder of the Apache 1.1 plugin zip you extracted.

orapki wallet create -wallet /oracle/secure/my-wallet -auto_login_only
orapki wallet add -wallet /oracle/secure/my-wallet -trusted_cert -cert MyOwnSelfCA.cer.der -auto_login_only

Finally, change the httpd.conf to reflect that we want the WebLogic Apache plug-in to use HTTPS/SSL and not just plain HTTP.

<IfModule mod_weblogic.c>
   WebLogicHost irm.company.internal
   WebLogicPort 16101
   SecureProxy ON
   WLSSLWallet /oracle/secure/my-wallet
   WLLogFile /tmp/wl-proxy.log
</IfModule>

Then restart Apache once more and you can go back to the browser to test the communication. Opening the URL https://irm.company.com/irm_rights will proxy your request to the WebLogic server at https://irm.company.internal:16101/irm_rights.
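Before moving on, a quick end-to-end check from the command line can save some head scratching (this is not in the original article; curl's -k flag skips certificate validation and -I requests only the response headers):

[oracle@irm /] curl -k -I https://irm.company.com/irm_rights

A successful response here means the certificate, the Apache virtual host and the SSL proxying to WebLogic are all working together.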
At this point you have a fully functional Oracle IRM service; the next step is to create a sealed document and test the entire system.

    Read the article

  • jQuery and Windows Azure

    - by Stephen Walther
The goal of this blog entry is to describe how you can host a simple Ajax application created with jQuery in the Windows Azure cloud. In this blog entry, I make no assumptions: I assume that you have never used Windows Azure, and I am going to walk through the steps required to host the application in the cloud in agonizing detail. Our application will consist of a single HTML page and a single service. The HTML page will contain jQuery code that invokes the service to retrieve and display a set of records. There are five steps that you must complete to host the jQuery application:

1. Sign up for Windows Azure
2. Create a Hosted Service
3. Install the Windows Azure Tools for Visual Studio
4. Create a Windows Azure Cloud Service
5. Deploy the Cloud Service

Sign Up for Windows Azure

Go to http://www.microsoft.com/windowsazure/ and click the Sign up Now button. Select one of the offers. I selected the Introductory Special offer because it is free and I just wanted to experiment with Windows Azure for the purposes of this blog entry. To sign up, you will need a Windows Live ID and you will need to enter a credit card number. After you finish the sign up process, you will receive an email that explains how to activate your account.

Accessing the Developer Portal

After you create your account and your account is activated, you can access the Windows Azure developer portal by visiting the following URL: http://windows.azure.com/ When you first visit the developer portal, you will see the one project that you created when you set up your Windows Azure account (in a fit of creativity, I named my project StephenWalther).

Creating a New Windows Azure Hosted Service

Before you can host an application in the cloud, you must first add a hosted service to your project. Click your project on the summary page and click the New Service link. You are presented with the option of creating either a new Storage Account or a new Hosted Service. Because we have code that we want to run in the cloud (the WCF service), we want to select the Hosted Services option. After you select this option, you must provide a name and description for your service. This information is used on the developer portal so you can distinguish your services. When you create a new hosted service, you must enter a unique name for your service (I selected jQueryApp) and you must select a region for this service (I selected Anywhere US). Click the Create button to create the new hosted service.

Install the Windows Azure Tools for Visual Studio

We'll use Visual Studio to create our jQuery project. Before you can use Visual Studio with Windows Azure, you must first install the Windows Azure Tools for Visual Studio. Go to http://www.microsoft.com/windowsazure/ and click the Get Tools and SDK button. The Windows Azure Tools for Visual Studio work with both Visual Studio 2008 and Visual Studio 2010. Installation of the Windows Azure Tools for Visual Studio is painless: you just need to check some agreement checkboxes and click the Next button a few times, and installation will begin.

Creating a Windows Azure Application

After you install the Windows Azure Tools for Visual Studio, you can create a Windows Azure Cloud Service by selecting the menu option File, New Project and selecting the Windows Azure Cloud Service project template. I named my new Cloud Service jQueryApp. Next, you need to select the type of Cloud Service project that you want to create from the New Cloud Service Project dialog.
I selected the C# ASP.NET Web Role option. Alternatively, I could have picked the ASP.NET MVC 2 Web Role option if I wanted to use jQuery with ASP.NET MVC, or even the CGI Web Role option if I wanted to use jQuery with PHP. After you complete these steps, you end up with two projects in your Visual Studio solution. The project named WebRole1 represents your ASP.NET application, and we will use this project to create our jQuery application.

Creating the jQuery Application in the Cloud

We are now ready to create the jQuery application. We'll create a super simple application that displays a list of records retrieved from a WCF service (hosted in the cloud). Create a new page in the WebRole1 project named Default.htm and add the following code:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Products</title>
    <style type="text/css">
        #productContainer div { border:solid 1px black; padding:5px; margin:5px; }
    </style>
</head>
<body>
    <h1>Product Catalog</h1>
    <div id="productContainer"></div>
    <script id="productTemplate" type="text/html">
        <div>
        Name: {{= name }} <br />
        Price: {{= price }}
        </div>
    </script>
    <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script>
    <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script>
    <script type="text/javascript">
        var products = [
            {name:"Milk", price:4.55},
            {name:"Yogurt", price:2.99},
            {name:"Steak", price:23.44}
        ];
        $("#productTemplate").render(products).appendTo("#productContainer");
    </script>
</body>
</html>

The jQuery code in this page simply displays a list of products by using a template. I am using a jQuery template to format each product. You can learn more about using jQuery templates by reading the following blog entry by Scott Guthrie: http://weblogs.asp.net/scottgu/archive/2010/05/07/jquery-templates-and-data-linking-and-microsoft-contributing-to-jquery.aspx You can test whether the Default.htm page is working correctly by running your application (hit the F5 key). The first time that you run your application, a database is set up on your local machine to simulate cloud storage. You will see the following dialog: If the Default.htm page works as expected, you should see the list of three products:

Adding an Ajax-Enabled WCF Service

In the previous section, we created a simple jQuery application that displays an array by using a template. The application is a little too simple because the data is static. In this section, we'll modify the page so that the data is retrieved from a WCF service instead of an array. First, we need to add a new Ajax-enabled WCF Service to the WebRole1 project. Select the menu option Project, Add New Item and select the Ajax-enabled WCF Service project item. Name the new service ProductService.svc. Modify the service so that it returns a static collection of products.
The final code for the ProductService.svc should look like this:

using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Activation;

namespace WebRole1
{
    public class Product
    {
        public string name { get; set; }
        public decimal price { get; set; }
    }

    [ServiceContract(Namespace = "")]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class ProductService
    {
        [OperationContract]
        public IList<Product> SelectProducts()
        {
            var products = new List<Product>();
            products.Add(new Product { name = "Milk", price = 4.55m });
            products.Add(new Product { name = "Yogurt", price = 2.99m });
            products.Add(new Product { name = "Steak", price = 23.44m });
            return products;
        }
    }
}

In real life, you would want to retrieve the list of products from storage instead of a static array. We are being lazy here. Next you need to modify the Default.htm page to use the ProductService.svc. The jQuery script in the following updated Default.htm page makes an Ajax call to the WCF service. The data retrieved from the ProductService.svc is displayed in the client template.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Products</title>
    <style type="text/css">
        #productContainer div { border:solid 1px black; padding:5px; margin:5px; }
    </style>
</head>
<body>
    <h1>Product Catalog</h1>
    <div id="productContainer"></div>
    <script id="productTemplate" type="text/html">
        <div>
        Name: {{= name }} <br />
        Price: {{= price }}
        </div>
    </script>
    <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script>
    <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script>
    <script type="text/javascript">
        $.post("ProductService.svc/SelectProducts", function (results) {
            var products = results["d"];
            $("#productTemplate").render(products).appendTo("#productContainer");
        });
    </script>
</body>
</html>

Deploying the jQuery Application to the Cloud

Now that we have created our jQuery application, we are ready to deploy it to the cloud so that the whole world can use it. Right-click your jQueryApp project in the Solution Explorer window and select the Publish menu option. When you select Publish, your application and your application configuration information are packaged up into two files named jQueryApp.cspkg and ServiceConfiguration.cscfg. Visual Studio opens the directory that contains the two files. In order to deploy these files to the Windows Azure cloud, you must upload them yourself. Return to the Windows Azure Developers Portal at the following address: http://windows.azure.com/ Select your project and select the jQueryApp service. You will see a mysterious cube. Click the Deploy button to upload your application. Next, you need to browse to the location on your hard drive where the jQueryApp project was published and select both the packaged application and the packaged application configuration file. Supply the deployment with a name and click the Deploy button. While your application is in the process of being deployed, you can view a progress bar.

Running the jQuery Application in the Cloud

Finally, you can run your jQuery application in the cloud by clicking the Run button. It might take several minutes for your application to initialize (go grab a coffee).
After WebRole1 finishes initializing, you can navigate to the following URL to view your live jQuery application in the cloud: http://jqueryapp.cloudapp.net/default.htm The page is hosted on the Windows Azure cloud, and the WCF service executes every time that you request the page to retrieve the list of products.

Summary

Because we started from scratch, we needed to complete several steps to create and deploy our jQuery application to the Windows Azure cloud. We needed to create a Windows Azure account, create a hosted service, install the Windows Azure Tools for Visual Studio, create the jQuery application, and deploy it to the cloud. Now that we have finished this process once, modifying our existing cloud application or creating a new cloud application is easy. jQuery and Windows Azure work nicely together. We can take advantage of jQuery to build applications that run in the browser, and we can take advantage of Windows Azure to host the backend services required by our jQuery application. The big benefit of Windows Azure is that it enables us to scale. If, all of a sudden, our jQuery application explodes in popularity, Windows Azure enables us to easily scale up to meet the demand. We can handle anything that the Internet might throw at us.
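As an aside (not part of the original walkthrough), the $.post call shown earlier fails silently if the service call errors while you are testing in the cloud. A slightly more defensive variant of the same script block, a sketch using the article's own template and service names, would be:

<script type="text/javascript">
    $.ajax({
        type: "POST",
        url: "ProductService.svc/SelectProducts",
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        success: function (results) {
            // ASP.NET Ajax-enabled WCF services wrap the payload in a "d" property
            var products = results["d"];
            $("#productTemplate").render(products).appendTo("#productContainer");
        },
        error: function (xhr) {
            // Surface failures instead of rendering nothing
            alert("Service call failed with status " + xhr.status);
        }
    });
</script>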

    Read the article

  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server, and has the full power of the SQL Full Text Indexing search platform, there are times, however, when you may require a richer or alternate search platform. One of these scenarios is when you want to implement a faceted (refinement) search on your site, which provides dynamic refinements based on the search results dataset. Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue. This is powerful for two reasons. Firstly, with a traditional search it is down to the user to think of a search term suitable for the product they are trying to find. This typically will not return similar products or help in any way to refine a larger dataset. Faceted searches, on the other hand, provide a comprehensive list of product properties, grouped together by similarity, to help the user narrow down the results returned; as the user progressively restricts the search criteria by selecting additional criteria to search against, these facets need to continually refresh. The whole experience allows users to explore alternate brands, price ranges, or find products they hadn't initially thought of or were looking for, in a bid to enhance cross-sell in the retail environment. The second advantage of this type of search, from a business perspective, is to harvest the search results to start to profile your user. Even though anonymous users may routinely visit your site, and will not necessarily register or complete a transaction to build up marketing profiling data, you can still achieve the same result by recording the search facets used within the search sequence. Below is a faceted search scenario generated from eBay using the search term "server". By creating a search profile of clicking through Computer & Networking -> Servers -> Dell -> New and recording this information against my user profile, you can start to predict with a lot more certainty what types of products I am interested in. This will allow you to apply shopping-cart analysis against your search data and provide great cross-sell or advertising opportunities, or personalise the user experience based on your prediction of what the user may be interested in. This type of search is extremely beneficial in e-commerce environments, but achieving it out of the box with Commerce Server and SQL Full Text Indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft's FAST, Apache Solr, or Endeca; however, you still want these products to integrate natively into Commerce Server to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistent API. To do so we make the most of the Commerce Server extensibility points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat; in this specific example I have used the SolrNet C# library to interface to the Java platform. I am not going to talk about Solr configuration or indexing, but in a production environment this would typically happen by using PowerShell to call the Commerce Server management web service to export your catalog as XML, apply an XSLT transform to the file to make it conform to Solr, and use a simple HTTP POST to send it to the search engine for indexing.
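For orientation, a raw SolrNet query (outside of any Commerce Server plumbing) looks roughly like the sketch below. SolrNet is the library named in this article; the sketch assumes Startup.Init<SearchProduct>("http://localhost:8983/solr") has been called at application start, and the facet field names are invented for illustration:

// Resolve the SolrNet operations instance registered by Startup.Init
var solr = ServiceLocator.Current.GetInstance<ISolrOperations<SearchProduct>>();

var results = solr.Query(new SolrQuery("server"), new QueryOptions
{
    Start = 0,   // index of the first record (paging)
    Rows = 20,   // page size
    Facet = new FacetParameters
    {
        Queries = new ISolrFacetQuery[]
        {
            new SolrFacetFieldQuery("brand"),       // hypothetical facet field
            new SolrFacetFieldQuery("price_range")  // hypothetical facet field
        }
    }
});
// results contains the matching SearchProduct documents; results.FacetFields
// holds the facet counts used to build the refinement UI.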
Essentially a sequence component is a step in a serial workflow used to call a data repository (which in most cases is the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules. So the first step in the process is to add a new class library to your existing Commerce Server site. You will need to use a new library, as sequence components need to be strongly named to be deployed. Once you are inside your new project, add a new class file and add references to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base class Microsoft.Commerce.Providers.Components.OperationSequenceComponent and override the ExecuteQuery method. Your screen will then look something similar to this:

As all we are doing in this component is conducting a search, we are only interested in the ExecuteQuery method. This method accepts three arguments: queryOperation, operationCache, and response. The queryOperation is the object in which we receive our search parameters; the cache allows access to the Commerce Server cache, letting us store regularly accessed information; and the response object is the object on which we return the result of our search. Inside this method is simply where we are going to inject our logic for our third party search platform. As I am not going to explain the inner workings of actually making a Solr call, I'll simply provide the sample code here. I would highly recommend, however, looking at the SolrNet wiki, as it has some great explanations of how the API works. What you will find is that there are some further extensions required when integrating a custom search provider. Firstly, out of the box, the CommerceQueryOperation you receive in this method when conducting a search against a catalog is specifically geared towards a SQL Full Text search, with properties such as a Where clause. To make the operation you receive more relevant, you will need to create another class, this time derived from Microsoft.Commerce.Contracts.Messages.CommerceSearchCriteria, and within this you detail the properties you require as parameters to the Solr search API. My example looks like this:

[DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")]
public class CommerceCatalogSolrSearch : CommerceSearchCriteria
{
    private Dictionary<string, string> _facetQueries;

    public CommerceCatalogSolrSearch()
    {
        _facetQueries = new Dictionary<String, String>();
    }

    public Dictionary<String, String> FacetQueries
    {
        get { return _facetQueries; }
        set { _facetQueries = value; }
    }

    public String SearchPhrase { get; set; }
    public int PageIndex { get; set; }
    public int PageSize { get; set; }
    public IEnumerable<String> Facets { get; set; }
    public string Sort { get; set; }

    public new int FirstItemIndex
    {
        get { return (PageIndex - 1) * PageSize; }
    }

    public int LastItemIndex
    {
        get { return FirstItemIndex + PageSize; }
    }
}

To allow you to construct a CommerceQueryOperation call within the API, you will also need another class, this time derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder, which is simply used to construct an instance of the search criteria class you have just created and expose the properties you want set.
My message builder looks like this:

public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder
{
    private CommerceCatalogSolrSearch _solrSearch;

    public CommerceCatalogSolrSearchBuilder()
    {
        _solrSearch = new CommerceCatalogSolrSearch();
    }

    public String SearchPhrase
    {
        get { return _solrSearch.SearchPhrase; }
        set { _solrSearch.SearchPhrase = value; }
    }

    public int PageIndex
    {
        get { return _solrSearch.PageIndex; }
        set { _solrSearch.PageIndex = value; }
    }

    public int PageSize
    {
        get { return _solrSearch.PageSize; }
        set { _solrSearch.PageSize = value; }
    }

    public Dictionary<String, String> FacetQueries
    {
        get { return _solrSearch.FacetQueries; }
        set { _solrSearch.FacetQueries = value; }
    }

    public String[] Facets
    {
        get { return _solrSearch.Facets.ToArray(); }
        set { _solrSearch.Facets = value; }
    }

    public override CommerceSearchCriteria ToSearchCriteria()
    {
        return _solrSearch;
    }
}

Once you have these two classes in place, you can safely cast the CommerceOperation you receive as an argument of the overridden ExecuteQuery method in the sequence component to the CommerceCatalogSolrSearch criteria you have just created, e.g.:

public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation)
{
    var searchCriteria = operation as CommerceQueryOperation;
    if (searchCriteria == null)
        throw new Exception("No search criteria present");

    var local = (CommerceCatalogSolrSearch) searchCriteria.SearchCriteria;
    if (local == null)
        throw new Exception("Unexpected Search Criteria in Operation");

    return local;
}

Now you have all of your search parameters present, and you can go off and call the external search platform API. You will, of course, get proprietary objects returned, so the next step in the process is to convert the results back into CommerceEntities. You do this via another extensibility point within the Commerce Server API called translators. A translator is another separate class, this time implementing the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator. As you can imagine, this interface is specific to the conversion of an object TO a CommerceEntity; you will need to implement a separate interface if you also need to go in the opposite direction. If you implement the required method of the interface, you get a single Translate method which takes a source object, a destination CommerceEntity, and a collection of properties as arguments. For simplicity's sake in this example I have hard-coded the mappings; however, best practice would dictate you map the objects using your MetadataDefinitions.xml file.
Once complete, your translator will look something like the following:

public class SolrEntityTranslator : IToCommerceEntityTranslator
{
    #region IToCommerceEntityTranslator Members

    public void Translate(object source, CommerceEntity destinationCommerceEntity, CommercePropertyCollection propertiesToReturn)
    {
        if (source.GetType().Equals(typeof (SearchProduct)))
        {
            var searchResult = (SearchProduct) source;

            destinationCommerceEntity.Id = searchResult.ProductId;
            destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title);
            destinationCommerceEntity.ModelName = "Product";
        }
    }

    #endregion
}

Once you have a translator in place, you can safely map the results of your search platform into Commerce Entities and attach them to the CommerceResponse object in a fashion similar to this:

foreach (SearchProduct result in matchingProducts)
{
    var destinationEntity = new CommerceEntity(_returnModelName);

    Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties);
    response.CommerceEntities.Add(destinationEntity);
}

In Solr I actually have two objects being returned, a product and a collection of facets, so I have an additional translator for facets (which maps to a custom facet CommerceEntity), and my facet response from Solr is passed into the translator helper class separately. When all of this is pieced together you have successfully completed the extensibility point coding: you will have created a new OperationSequenceComponent, a custom SearchCriteria object and message builder class, and translators to convert the objects into Commerce Entities. Now you simply need to configure them, and you can start calling them in your code. Make sure you sign your assembly, compile it and identify its signature. Next you need to put a reference to your new assembly into the Channel.Config configuration file, replacing that of the existing SQL Full Text component. You will also need to add your translators to the Translators node of your Channel.Config, and lastly add any custom CommerceEntities you have developed to your MetadataDefinitions.xml file. Your configuration is now complete, and you should be able to happily make a call to the Commerce Foundation API, which will act as a proxy to your third party search platform and return CommerceEntities of your search results. If you require data to be enriched, or logged, or any other logic applied, then simply add further sequence components into the OperationSequence node of your Channel.Config file (obviously keeping the search component first). Now to call your code you simply request it as per any other CommerceQuery operation, taking into account that you may receive multiple types of CommerceEntity:

public KeyValuePair<FacetCollection, List<Product>> DoFacetedProductQuerySearch(string searchPhrase, string orderKey, string sortOrder, int recordIndex, int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount)
{
    var products = new List<Product>();
    var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>();

    query.SearchCriteria.PageIndex = recordIndex;
    query.SearchCriteria.PageSize = recordsPerPage;
    query.SearchCriteria.SearchPhrase = searchPhrase;
    query.SearchCriteria.FacetQueries = facetQueries;

    totalItemCount = 0;
    CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest());
    var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;

    // No results: return the empty pair
    if (queryResponse != null && queryResponse.CommerceEntities.Count == 0)
        return new KeyValuePair<FacetCollection, List<Product>>();

    totalItemCount = (int)queryResponse.TotalItemCount;

    // Prepare a multi-operation to retrieve the product variants
    var multiOperation = new CommerceMultiOperation();

    // Add products to results
    foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product"))
    {
        var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition);
        productQuery.SearchCriteria.Model.Id = product.Id;
        productQuery.SearchCriteria.Model.CatalogId = product.CatalogId;

        var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants);
        productQuery.RelatedOperations.Add(variantQuery);

        multiOperation.Add(productQuery);
    }

    CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest());
    foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses)
    {
        if (queryOpResponse.CommerceEntities.Count() > 0)
            products.Add(queryOpResponse.CommerceEntities[0]);
    }

    // Get facet collection
    FacetCollection facetCollection = queryResponse.CommerceEntities.Where(x => x.ModelName == "FacetCollection").FirstOrDefault();

    return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products);
}

...And that is it: a few classes and some configuration allow you to extend the Commerce Server query operations to call a third party search platform, whilst still maintaining a unified API in the remainder of your code. This approach stands for any extensibility within Commerce Server which requires execution in a serial fashion, such as calls to LOB systems or web services to validate or enrich data. Feel free to use this example in other applications, and if you have any questions please feel free to e-mail and I'll help out where I can!

    Read the article

  • Setup Custom Portal & Content Enabled Domain

    - by Stefan Krantz
Looking back over the past year, we have seen a large increase in deployments where only some parts of the WebCenter Suite infrastructure are used. The most common, from my personal perspective, is a domain topology that includes WebCenter Custom Portal, WebCenter Content and Oracle HTTP Services. Today it is very common to see installations where the whole suite is installed when the use case only requires the custom portal and some subcomponent like WebCenter Content. This post will go into detail on how to minimize deployment time and effort by laying down only the necessary managed servers; by following this proposed method you will minimize the configuration steps, install only the required components and schemas, configure only the necessary components, and minimize the impact of architectural changes through reduced dependencies.

Assumptions:
- Oracle 11g Database installed
- SYS or equivalent access to the database to set up schemas via RCU
- Running operating system supporting JDK 7 Update 2 (check the support matrix)
- Good understanding of WebLogic architecture

Binaries:
- Oracle JDK 7 Update 2 (1.7.0_02)
- Oracle WebLogic 10.3.6
- Oracle WebCenter binaries (11.1.1.6)
- Oracle WebCenter Content binaries (11.1.1.6, two disks)
- Oracle HTTP Services (11.1.1.6)
- Oracle Repository Creation Utility (11.1.1.6, Linux or Windows)

Schemas:
- MDS - Meta Data Services (WebCenter and OWSM)
- WebCenter (WebCenter schema)
- OCS (Oracle WebCenter Content)
- Activities (WebCenter Activities)
- OPSS (policy store for WebCenter)

Installation structure:
- [Installation Home]/Middleware
    - Oracle_WC1 (WebCenter installation)
    - Oracle_WT1 (Oracle WebTier)
    - Oracle_ECM (WebCenter Content)
    - wlserver_10.3 (WebLogic installation)
- [Installation Home]/domains
    - webcenter (WebCenter domain)
    - instances (OHS/OPMN instance)
- [Installation Home]/applications
- [Installation Home]/JDK1.7.0_02

Installation and configuration steps:

Install Java and configure Java Home
- Extract the Java installable (jdk-7u2-linux-x64) to [Installation Home]/JDK1.7.0_02
- Add JAVA_HOME to the environment settings (JAVA_HOME=[Installation Home]/JDK1.7.0_02)
- Update PATH in the environment settings (PATH=$JAVA_HOME/bin:$PATH)

Install WebLogic Server (Middleware Home)
- Run the installer / execute the jar file (java -jar wls1036_generic.jar)
- Create the Middleware Home under [Installation Home]/Middleware

Install WebCenter Portal (extend Middleware Home)
- Extract the compressed file (ofm_wc_generic_11.1.1.6.0_disk1_1of1.zip) to a temp folder
- Execute runInstaller under the folder (DISK1/) with the following command (runInstaller -jreLoc $JAVA_HOME)
- Make sure to install in the following structure ([Installation Home]/Middleware/Oracle_WC1)

Install WebCenter Content (extend Middleware Home)
- Extract the compressed files (ofm_wcc_generic_11.1.1.6.0_disk1_1of2.zip & ofm_wcc_generic_11.1.1.6.0_disk1_2of2.zip) to the same temp folder
- Execute runInstaller under the folder (DISK1/) with the following command (runInstaller -jreLoc $JAVA_HOME)
- Make sure to install in the following structure ([Installation Home]/Middleware/Oracle_ECM)

Configure the initial domain (domain name webcenter)
- Execute the configuration tool - [Installation Home]/Middleware/wlserver_10.3/common/bin/config
- Select "Create a New Weblogic Domain"
- Select the following templates (Basic Weblogic Server Domain, Oracle Enterprise Manager, Oracle WSM Policy Manager, Oracle JRF)
- Create the new domain with name webcenter under [Installation Home]/domains, with applications under [Installation Home]/applications
- Select Production Mode
- Finish the configuration wizard
- Set up the username for startup scripts: add a new file called boot.properties to ([Installation Home]/domains/webcenter/servers/AdminServer/security) with the following lines:
  username=weblogic
  password=[password in clear text; it will be encrypted during first start]
- Start the AdminServer in the background ([Installation Home]/domains/webcenter/bin/startWeblogic)

Install and configure Oracle WebTier (OHS server)
- Extract the compressed file (ofm_webtier_linux_11.1.1.6.0_64_disk1_1of1.zip) to a temp folder
- Execute runInstaller under the folder (DISK1/) with the following command (runInstaller)
- Select the Install & Configure option
- Deselect Oracle WebCache
- Auto-configure ports

Configure schemas with RCU (Repository Creation Utility)
- Extract the compressed file (ofm_rcu_linux_11.1.1.6.0_disk1_1of1.zip) to a temp folder
- Execute rcu with the following command ([temp]/rcuHome/rcu)
- Make sure the database meets the RCU requirements, in particular that PROCESSES is 200 or more. Using SQLPLUS and the sys user you can update this configuration in the database with the following procedure:
  ALTER SYSTEM SET PROCESSES=200 SCOPE=SPFILE
  shutdown immediate
  startup
- Create and configure the following schemas: MDS - Meta Data Services (WebCenter and OWSM), WebCenter (WebCenter schema), OCS (Oracle WebCenter Content), Activities (WebCenter Activities), OPSS (policy store for WebCenter)
- Remember the selected schema prefix and password (they will be used later)

Configure the WebCenter Portal instance (WC_CustomPortal)
- Execute the following command to start the configuration wizard ([Installation Home]/Middleware/Oracle_WC1/common/bin/config)
- Select Extend an Existing WebLogic domain
- Select the existing webcenter domain ([Installation Home]/domains/webcenter)
- Select Extend my domain using existing extension template, browse to ([Installation Home]/Middleware/Oracle_WC1/common/templates/applications) and select oracle.wc_custom_portal_template_11.1.1.jar
- Select to configure (Managed Servers/Clusters/Machines)
- On the Managed Server screen you can now configure one or more WC_CustomPortal managed servers (name them WC_CustomPortal[n]; skip the numbering if not clustered)
- In the case of two WC_CustomPortal servers, create a cluster (any name) and make sure the managed servers join the new cluster
- Create a new machine with the same name as the current machine
- Make sure the AdminServer and the WC_CustomPortal[n] managed servers join the machine
- Finish the configuration wizard
- Stop the AdminServer ([Installation Home]/domains/webcenter/bin/stopWeblogic) and start it again in the background ([Installation Home]/domains/webcenter/bin/startWeblogic)
- Start WC_CustomPortal in the foreground ([Installation Home]/domains/webcenter/bin/startManagedServer WC_CustomPortal), repeating for each WC_CustomPortal instance on the host, and give the credentials for the weblogic user on start up
- Copy the security folder, including the file boot.properties, from ([Installation Home]/domains/webcenter/servers/AdminServer/) to ([Installation Home]/domains/webcenter/servers/WC_CustomPortal/); the result should be ([Installation Home]/domains/webcenter/servers/WC_CustomPortal/security/boot.properties)

Configure the WebCenter Content instance (UCM_server1)
- Execute the following command to start the configuration wizard ([Installation Home]/Middleware/Oracle_ECM/common/bin/config)
- Select Extend an Existing WebLogic domain
- Select Oracle Universal Content Management - Content Server
- Select to configure (Managed Servers/Clusters/Machines)
- On the Managed Server screen create only one managed server instance (UCM_server1 on port 16200; you can select any other available port)
- Make sure the UCM_server1 managed server joins the machine
- Finish the configuration wizard
- Stop the AdminServer, start it again in the background, then start UCM_server1 in the foreground ([Installation Home]/domains/webcenter/bin/startManagedServer UCM_server1) and give the credentials for the weblogic user on start up
- Copy the security folder, including boot.properties, from ([Installation Home]/domains/webcenter/servers/AdminServer/) to ([Installation Home]/domains/webcenter/servers/UCM_server1/); the result should be ([Installation Home]/domains/webcenter/servers/UCM_server1/security/boot.properties)

Post-configure the WebCenter Content instance for WebCenter Portal
- Open a browser with support for Java applets and navigate to http://host:port/cs
- WARNING: The page that you are presented with after authentication will only appear once for each instance
- WARNING: Make sure you set correct storage options; also remember to consider file sharing options if you would like to cluster your Content Server instance over multiple hosts
- Set an appropriate auto number prefix
- Update the Server Socket Port (commonly set to 4444), used for RIDC communication (a requirement for WebCenter Portal)
- Update the IP Address Filter to include the IPs that will access the server over RIDC; at the minimum add the IP address of the current host (this option can be updated later via EM)
- Stop UCM_server1 ([Installation Home]/domains/webcenter/bin/stopManagedServer UCM_server1) and start it again in the background ([Installation Home]/domains/webcenter/bin/startManagedServer UCM_server1)
- Open a browser, navigate to http://host:port/cs and go to Administration/Admin Server
- Go to General Configuration, check Enable Accounts, and in Additional Configuration Variables add the following two lines:
  AllowUpdateForGenwww=1
  CollectionUseCache=1
- Save the changes and go to Component Manager; click on the advanced component manager link
- Enable the following components: Folders_g, WebCenterConfigure, SiteStudio, SiteStudioExternalApplications, DBSearchContainsOpSupport
- WARNING: Make sure that the following component is disabled: FrameworkFolders
- Stop and restart UCM_server1 as above
- Open a browser, navigate to http://host:port/cs, then navigate to Administration/Site Studio Administration and update (do not forget to save and submit each page): Set Default Project, Set Default Web Assets

Post-configure Oracle WebTier (OHS) to include the Content Server and WebCenter Portal application contexts
- Update the following file - [Installation Home]/domains/instances/instance1/config/OHS/ohs1/mod_wl_ohs.conf
- For a single node, add lines from the referenced example; for a clustered environment, add lines from the referenced template (note the clustering in the example only applies to WC_CustomPortal); a sketch is included at the end of this post
- For more information on this: http://docs.oracle.com/cd/E23943_01/core.1111/e12037/contentsvr.htm#WCEDG318

Optional - Configure JOC
- Follow the instructions: http://docs.oracle.com/cd/E23943_01/core.1111/e12037/extend_wc.htm#WCEDG264

Optional (recommended) - Configure Node Manager
- Follow the instructions: http://docs.oracle.com/cd/E23943_01/core.1111/e12037/node_manager.htm#WCEDG277

Optional (mandatory for clustered environments) - Re-associate the policy store to database or OID
- Follow the instructions: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e12405/wcadm_security_credstore.htm#CFHDEDJH

Optional - Configure Coherence for Content Presenter
- Follow the instructions in this blog post (the post is for PS4): https://blogs.oracle.com/ATEAM_WEBCENTER/entry/enabling_coherence_for_content_presenter

Other recommended posts:
- Cloning WebCenter Custom Portal - https://blogs.oracle.com/ATEAM_WEBCENTER/entry/cloning_a_webcenter_portal_managed
- Improving WebCenter Performance through caching - https://blogs.oracle.com/ATEAM_WEBCENTER/entry/improving_webcenter_performance
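Since the example links in the WebTier step above do not survive in this copy, here is a minimal mod_wl_ohs.conf sketch for the single-node topology described in this post. It assumes the OHS instance already loads mod_wl_ohs (the default for an 11g WebTier instance); the host name wchost.company.internal, the portal context root /mycustomportal and the WC_CustomPortal port 8888 are placeholders to substitute, while port 16200 matches the UCM_server1 configuration above:

# WebCenter custom portal application (context root and port are placeholders)
<Location /mycustomportal>
    SetHandler weblogic-handler
    WebLogicHost wchost.company.internal
    WebLogicPort 8888
</Location>

# WebCenter Content (UCM_server1 as configured earlier in this post)
<Location /cs>
    SetHandler weblogic-handler
    WebLogicHost wchost.company.internal
    WebLogicPort 16200
</Location>

For a clustered WC_CustomPortal, replace WebLogicHost/WebLogicPort with a single WebLogicCluster directive listing the host:port pairs of the cluster members.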

    Read the article

  • CodePlex Daily Summary for Monday, December 06, 2010

    CodePlex Daily Summary for Monday, December 06, 2010

    Popular Releases

    Aura: Aura Preview 1: Rewritten from scratch. This release supports getting color only from the icon of the foreground window.
    myCollections: Version 1.2: New in version 1.2: Big performance improvement. New design (added Outlook style view, new detail view, new Group By...). Added sort by media. Added Manage Movie Studio. Zoom preference is now saved. Media names are now editable. Added Portuguese version. You can now hide the details panel. Added support for FLAC tags. You can now import books from a BibTex XML file. Bug fixing.
    mytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.49.0 beta: mytrip.mvc 1.0.49.0 beta web - web for install hosting. System requirements: .NET 4.0, MSSQL 2008 or MySql (auto creation of tables in the database; if .\SQLEXPRESS, auto creation of the database in the App_Data folder). mytrip.mvc 1.0.49.0 beta src - system requirements: Visual Studio 2010 or Web Developer 2010, MSSQL 2008 or MySql (auto creation of tables in the database; if .\SQLEXPRESS, auto creation of the database in the App_Data folder), Connector/Net 6.3.4, MVC3 RC. WARNING: to run and debug mytrip.mvc 1.0.49.0 beta src, download and ...
    Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.3 Beta: Added keyboard navigation support with access keys. Shortcuts like Ctrl-Alt-A are now supported (where the browser permits it). The PopupMenuSeparator is now completely based on the PopupMenuItem class. Moved item manipulation code to a partial class in PopupMenuItemsControl.cs. Moved menu management and keyboard navigation code to the new PopupMenuManager class. Simplified the layout by removing the RootGrid element (all content is now placed in OverlayCanvas and is accessed by the new ...
    SubtitleTools: SubtitleTools 1.0: First public release.
    MiniTwitter: 1.62
    Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (December 2010): The release is targeted for stable daily use. With improved performance and enhanced compatibility with several of the latest PHP open source applications, this release is a perfect replacement for your old PHP runtime. Changes made within this release include the following and much more: performance improvements based on real-world application experience (we determined the biggest bottlenecks and we found and removed overheads causing performance problems in many PHP applications). Reimplemented nat...
    Chronos WPF: Chronos v2.0 Beta 3: Release notes: Updated introduction document. Updated Visual Studio 2010 Extension (vsix) package. Added horizontal scrolling to the main window TaskBar. Added new styles for ListView, ListViewItem, GridViewColumnHeader, ... Added a new WindowViewModel class (allowing to fetch data). Added a new Navigate method (with several overloads) to the NavigationViewModel class (protected). Reimplemented Task usage for the WorkspaceViewModel.OnDelete method. Removed the reflection effect...
    MDownloader: MDownloader-0.15.26.7024: Fixed updater; fixed Megaupload.
    DJ - jQuery WebControls for ASP.NET: DJ 1.2: What is new? Update to support jQuery 1.4.2. Update to support jQuery UI 1.8.6. Update to Visual Studio 2010. New WebControls with samples added: Autocomplete WebControl, Button WebControl, ToggleButton WebControl. The example web site is included in the source code project.
    LateBindingApi.Excel: LateBindingApi.Excel Release 0.7g: Differences from the previous version: additional Interior properties; Group/Ungroup methods for Range; bugfix for COM reference handling of the Application object in some classes. Release+Samples V0.7g: contains the runtime DLL and sample projects. Sample projects: COMAddinExample - demonstrates a version-independent COM add-in; Example01 - background colors and borders for cells; Example02 - font attributes and alignment for cells; Example03 - number formats; Example04 - shapes, WordArts, P...
    ESRI ArcGIS Silverlight Toolkit: November 2010 - v2.1: ESRI ArcGIS Silverlight Toolkit v2.1. Added Windows Phone 7 build. New controls added: InfoWindow, ChildPage (Windows Phone 7 only). See what's new here with full details: http://help.arcgis.com/en/webapi/silverlight/help/#/What_s_new_in_2_1/016600000025000000/ Note: requires Visual Studio 2010, .NET 4.0 and Silverlight 4.0.
    ASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.1: Atomic CMS 2.1.1 release notes. Atomic CMS installation guide.
    Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.5 beta Released: Hi, today we are releasing Visifire 3.6.5 beta with the following new feature: a new property AutoFitToPlotArea has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in bubble charts. Also this release includes a few bug fixes: AxisXLabel labels were getting clipped if an angle was set for AxisLabels and ScrollingEnabled was not set in the chart. If the LabelStyle property was set to 'Inside', the size of the pie was not proper. Yo...
    EnhSim: EnhSim 2.1.1: 2.1.1. This release adds in the changes for 4.03a. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Switched Searing Flames bac...
    AI: Initial 0.0.1: It's simply just one code file; it simulates an AI and machine in a simulated world. The AI has a little understanding of its body machine and parts, and is able to use its feet to do actions, just starting and stopping walking. The world is all white, with nothing but the machine on a white planet. Colors, odors and position information make no sense. I'm a former C# programmer and I'm learning F# during this project; although I'm still not a good F# programmer, in this project I am learning to prog...
    NKinect: NKinect Preview: Build features: accelerometer reading; motor serial number property; realtime image update; realtime depth calculation; export to PLY (on demand); control motor LED; control Kinect tilt.
    Microsoft - Domain Oriented N-Layered .NET 4.0 App Sample (Microsoft Spain): V1.0 - N-Layer DDD Sample App .NET 4.0: Required software (Microsoft base software needed for the development environment): Visual Studio 2010 RTM & .NET 4.0 RTM (final versions), Expression Blend 4, SQL Server 2008 R2 Express/Standard/Enterprise, Unity Application Block 2.0 - published May 5th 2010 http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2D24F179-E0A6-49D7-89C4-5B67D939F91B&displaylang=en http://unity.codeplex.com/releases/view/31277 PEX & MOLES 0.94.51023.0, 29/Oct/2010 - Visual Studio 2010 Power Tools http://re...
    Sense/Net Enterprise Portal & ECMS: SenseNet 6.0.1 Community Edition: Sense/Net 6.0.1 Community Edition. This half year we have been working quite fiercely to bring you the long-awaited release of Sense/Net 6.0. Download this Community Edition to see what we have been up to. These months we have worked on getting the WebCMS capabilities of Sense/Net 6.0 up to par. New features include: new, powerful page and portlet editing experience; HTML and CSS cleanup; new, powerful site skinning system; upgraded, lightning-fast indexing and query via Lucene; limita...
    Minecraft GPS: Minecraft GPS 1.1.1: New features: Compass! New style. Set opacity on main window to allow overlay of Minecraft. Open world in any folder. Fixes: fixed style so the listbox won't grow the window size; fixed open file dialog issue on non-Vista kernel machines.

    New Projects

    AboutTime: The AboutTime WPF controls project is aimed at developing custom controls that relate to time.
    aReader: aReader is free software used as an XPS document reader. It's developed in C# using Windows Presentation Foundation technology with .NET Framework 3.5, mixed with the Ribbon Controls Library for the GUI (graphical user interface) to make the application user friendly.
    Battle Net Info: Battle Net Info provides information about a StarCraft 2 player from his profile page.
    Bencoder: Library for encoding/decoding bencode files or strings. It's being developed in C#.
    BiBongNet: BiBongNet project.
    Binhnt: Binhnt
    C++ Bloom Filter Library: C++ Bloom Filter Library
    Child Sponsorship Manager: Sponsorship Manager is developed for an NPO that provides child sponsorship in developing countries. It is possible to track sponsor-child relations, gifts and payments. It is developed in Visual Basic .NET.
    DocBlogger: This is a tool for automatically converting existing XML comments from your project into MSDN-style HTML for posting to the CodePlex site. It will use the MetaBlog API to post code, but can be used in a copy-paste fashion right away.
    Dynamic Rdlc WebControl: "Dynamic Rdlc WebControl" is an ASP.NET WebControl to generate dynamic reports in RDLC format without generating physical files. Supports groups and totalizers. It is developed with Microsoft Visual Studio 2010, ASP.NET and C# 4.
    Fake Call for Windows Phone 7: Coding4Fun Windows Phone 7 fake call application.
    Flow launcher: Flow is the world's fastest application launcher, using an onscreen keyboard and mnemonics to achieve lightning-fast shortcut launching.
    GaDotNet: GaDotNet is an open source library designed to make it easy to log page views, events and transactions through C# code, without using JavaScript or even needing to have a browser.
    HackerNews for WP7: HackerNews is a WP7 client for the HackerNews website.
    How much is this meeting costing us?: Coding4Fun Windows Phone 7 "How much is this meeting costing us?" application.
    KLAB: KLAB
    Map Navigator: Map Navigator - a Silverlight application intended to work with maps.
    MNRT: MNRT implements (demonstrates) several techniques to realize fast global illumination for dynamic scenes on Graphics Processing Units (GPUs) using CUDA. A GPU-based kd-tree was implemented to accelerate both ray tracing and photon mapping.
    MVC Helpers: MVC Helper makes developing views easier. It contains extended helper classes to render view content. It is developed in C#.NET. Extended helpers for Grid have been created so far.
    MVCPets: This is a project dedicated to providing a free platform to be used by animal rescue organizations. The hope is that this project can fill the void for those rescue groups that can't afford to pay a professional web designer/developer.
    MyGraphicProgram
    NAI: This project is a step-by-step illustration of some numerical analysis methods.
    Nemono: Nemono is an application that runs in the background and is activated by pressing a key combination like ALT+W. When activated, Nemono uses context awareness to present relevant shortcuts to the user, and mnemonics to execute shortcuts.
    opojo: opojo
    OxyPlot: OxyPlot is a .NET library for making XY line plots. The focus is on simplicity and performance. The library contains custom controls for WPF and Windows Forms. The plots can also be exported to SVG, PDF and PNG.
    PowerChumby: PowerChumby is a Perl CGI script and a PowerShell module that gives you a PowerShell way of controlling your Chumby.
    RHoK Berlin Visio Projekt: Random Hacks of Kindness - Berlin project for the Senatsverwaltung für Gesundheit, Umwelt und Verbraucherschutz (Berlin Senate Department for Health, Environment and Consumer Protection). Query, integrate and display external data in Microsoft Visio. It's developed in C#.
    sc2md: starcraft.md news portal.
    Slide Show: Coding4Fun Windows Phone 7 Slide Show application.
    smartcon: smart control center.
    TFS Fav source: Favourites for source locations in VS.
    Twitter Followers Monitor: Free and open source tool that will let you monitor any Twitter account for its new & lost followers, even if it's not yours and you don't have its credentials. It allows you to add several Twitter accounts and be updated right from your desktop.

    Read the article

  • top tweets WebLogic Partner Community – June 2012

    - by JuergenKress
    Send your tweets @wlscommunity #WebLogicCommunity and follow us at http://twitter.com/wlscommunity

    OTNArchBeat: Free Virtual Developer Day: Oracle ADF and Oracle Fusion Middleware Development http://bit.ly/MxuNAg
    AMIS, Oracle & Java: Veterinarian checklist now also on the iPad. @amis_services Mobile integration with Oracle Fusion Middleware http://dld.bz/buwsM #OSB #SOA
    Whitehorses: Whiteblog: Troubleshoot JVM crashes of Weblogic: CompilerThread (http://bit.ly/KcGzZK)
    Jon petter hjulstad: E-vita is now Apps Grid Specialized! ODTUG Fusion Middleware Sessions RT @OTNArchBeat: ODTUG Kscope12 - June 24-28 - San Antonio, TX http://bit.ly/LlWkNV
    OTNArchBeat: Free Event: Modern #Java Development, in/outside the Enterprise - May 30 - Redwood Shores, CA http://bit.ly/LfB79a
    ADF Community DE: Oracle Advanced ADF 11g Partner Workshop Düsseldorf/Germany (English) June 26-29, click here to see
    Nicolas Lorain: Best Practices for #JavaFX 2 Enterprise Applications (Part Two) http://buff.ly/Lk1DBn by Jim Weaver
    shay shmeltzer: #Oracle Developers in #Israel - don't miss the free #ADF workshop July 2nd - get hands-on with Oracle ADF - here
    OTNArchBeat: Java at JAXconf | Tori Wieldt http://bit.ly/LdoLS2
    Anand Akela: #Oracle Customers and Partners – Get your free pass to @CloudExpo in New York, June 11 to 14, http://goo.gl/RpYFT <- Stop by booth #511
    OracleSupport_WLS: Did you know that since 3/15/12 #WebLogic Server 12.1.1.0 is certified for production with JDK 7? http://bit.ly/IYJE0L
    Sharat: Highly useful #JavaFX best practices blog by @JavaFXpert More details here
    ADF EMG: How to set up a productive ADF Dev Env - discussion started by @baigsorcl. Click here to read and comment.
    OracleSupport_WLS: Upcoming #webcast: Diagnosing #weblogic performance issues through #java thread dumps http://bit.ly/M4O9qF
    My Oracle Support: New to Oracle Support? - Webcast on Support Basics, May 22, 10:30 Central Europe. Register @ http://bit.ly/J8o0WG
    Mohamad Afshar: Cloud Expo – Oracle Customers and Partners – get your free pass to Cloud Expo in New York, June 11 to 14, http://goo.gl/RpYFT
    OTNArchBeat: Oracle VM 3.1 is here | @Ronenkofman http://bit.ly/JriWTq
    Oracle Exalogic: RT @D0uglasPhillips: ExalogicTV New Video Introducing Oracle Secure Global Desktop for #Exalogic!! http://bit.ly/nwkrCu
    OracleBlogs: Java EE6 and WebLogic YouTube video channels http://ow.ly/1jVcYJ
    Oracle WebLogic: RT @aleftik: Excited to spend some time today playing around with the WebSockets SDK http://bit.ly/NoTtri
    WebLogic Community: Java EE6 and WebLogic YouTube video channels http://wp.me/p1LMIb-h0
    OracleSupport_WLS: New tutorial! How to use the #JMS #API to create a message producer with #GlassFish and #NetBeans http://bit.ly/Juqjn
    JDeveloper & ADF: Tip when installing JDeveloper 11.1.2.2.0 version http://dlvr.it/1b48s1
    WebLogic Community: Middleware Oracle Excellence Awards 2012 – HAPPY NEW YEAR! Click here to read #WebLogicCommunity #opn #oracle #Specialization #opnaward
    Steven Davelaar: Improve performance of your ADF app using lazy, on-demand querying of detail view objects: Click here
    OracleBlogs: Middleware Oracle Excellence Awards 2012 & HAPPY NEW YEAR! http://ow.ly/1kahzZ
    OracleSupport_WLS: Upgrading from #weblogic 9.2.x to 10.3.x? http://bit.ly/Kqzl9N
    AMIS, Oracle & Java: "@JDeveloper: Logout from an ADF application http://dlvr.it/1fQBnm"
    WebLogic Community: UK OUG call for papers – your middleware success! Click here #UKOUG #soacommunity #OPN
    Whitehorses: Whiteblog: Enterprise Manager: Manage your Fusion Middleware logfiles (http://bit.ly/KQlZkR)
    WebLogic Community: @Jphjulstad Hi Jon, should we send pizza when you go into production with your WebLogic 12c project? Wish you success! #WebLogicCommunity
    Sabine Leitner: ADF beginner workshops, 2 days each, in June in HAM, BLN, HANN #Oracle #WLS http://bit.ly/LcOIzB @OracleWebLogic @OracleAppGrid @soacommunity
    Andreas Koop: new post Java Heap Monitor in JDeveloper http://bit.ly/LgSk85
    Sabine Leitner: #Oracle customer day with talks by Sparkasse, Schufa, LBBW and Allianz about FMW & Exa solutions! 21.06. FRA http://bit.ly/JtwE3v @wlscommunity
    NetBeans Team: RT @chadlung: Installing and configuring #NetBeans 7.1.2 and the #Java JDK 1.7 on OS X: http://www.giantflyingsaucer.com/blog/p=3760 #osx
    WebLogic Community: Happy New Year #WeblogicCommunity thanks for the business! Time for a drink http://pic.twitter.com/K34KFbvH
    WebLogic Community: UK OUG call for papers – your middleware success! http://wp.me/p1LMIb-gU
    WebLogic Community: Middleware Oracle Excellence Awards 2012 - HAPPY NEW YEAR! http://wp.me/p1LMIb-h6
    Oracle WebLogic: RT @wlscommunity: WebLogic World Record Two Processor Result with SPECjEnterprise2010 Benchmark Click here to read #weblogic #sunfire #li
    Marc: Relocate wlst script for all the logfiles in your domain @wlscommunity, http://tinyurl.com/btbjcco
    WebLogic Community: WebLogic World Record Two Processor Result with SPECjEnterprise2010 Benchmark Click here #WebLogicCommunity #weblogic #sunfire
    Oracle WebLogic: Miss a WebLogic DevCast webinar? Catch any of the replays in the series on-demand! #WebLogic #JavaEE #coherence http://bit.ly/LNGa4p
    JDeveloper & ADF: Bean DataControl - Edit table records http://dlvr.it/1ZWqCx
    Justin Kestelyn: Contents of "Virtual Developer Day: Java SE 7 and JavaFX 2.0" are now avail on demand; no reg http://tinyurl.com/78nxnyo
    Frank Nimphius: Preparing 12c new features for DOAG 2012 Development - June 14th in Bonn (http://development.doag.org)
    WebLogic Community: Middleware Oracle Excellence Awards 2012 – HAPPY NEW YEAR! http://wp.me/p1LMIb-he
    JDeveloper & ADF: Placeholder Watermarks with ADF 11.1.2 http://dlvr.it/1ZWDc9
    Oracle ACE Program: May edition #ACE newsletter now available online. http://bit.ly/LKA2de
    chriscmuir: New blog post: Which JDeveloper is right for me? http://bit.ly/J8sj9e
    GlassFish: Transactional Interceptors in Java EE 7 - Request for feedback: Linda described how EJB's container-managed tr http://bit.ly/KKuGNJ
    OracleEnterpriseMgr: Oracle Application Testing Suite 12.1 Debuts at StarEast 2012 http://ow.ly/aXcv8 #em12c
    JAX London: First set of speaker sessions announced for #JAXLondon see: http://bit.ly/L0HSME
    OTNArchBeat: Oracle Cloud Conference: dates and locations worldwide http://bit.ly/JgNeID
    NetBeans Team: Video: Create and debug a TestNG test class in #NetBeans IDE: http://ow.ly/b7NEW
    NetBeans Team: #NetBeans tip: Code Template for #Kohana #PHP Framework: http://ow.ly/aWIvY
    Robin: Started to use the #Oracle #WebLogic Server #Maven Plugin. Really awesome to install a complete #WLS with "mvn wls:install"! @wlscommunity
    OTNArchBeat: Free Event: Modern #Java Development, in/outside the Enterprise - May 30 - Redwood Shores, CA http://bit.ly/JIN9tf
    OracleBlogs: WebLogic Partner Community Newsletter May 2012 http://ow.ly/1k5TeG
    Java Certification: Java SE 7 Fundamentals course now available On Demand. Watch a preview now: http://ow.ly/aWYgD
    Whitehorses: Whiteblog: Native IO in WebLogic on Solaris 11 X64 (http://bit.ly/KGM4mp)
    NetBeans Team: Quick video of FindBugs Integration in #NetBeans IDE 7.2: http://ow.ly/aNece
    NetBeans Team: #JavaFX Scene Builder Docs Updated for 2.2 and #NetBeans 7.2 dev builds: http://ow.ly/b7Nie
    Duncan Mills: New blog posting on implementing input field watermarks with ADF Faces 11.1.2 Click here #adf
    WebLogic Community: WebLogic Partner Community Newsletter May 2012 http://wp.me/p1LMIb-h4
    OracleBlogs: UK OUG call for papers – your middleware success! http://ow.ly/1jNs49
    Nicolas Lorain: Java tip: Deploying #JavaFX apps to multiple environments - JavaWorld http://buff.ly/KDADvu
    Adam Bien: Java EE and How to Specify The Unconventional With Convention Over Configuration [Free Article]: The free http://bit.ly/JEUkUf
    Owen Hughes and team: #Oracle #Exalogic #Performance: What? How? Why? Click here
    GlassFish: SecuritEE in the Cloud: Java EE 7 and the Cloud theme continue to move full steam ahead. In a PaaS environment http://bit.ly/K2RPte
    JDeveloper & ADF: How to Align Managed Bean Scope and Bean Data Control in Oracle ADF http://dlvr.it/1dngxQ
    Andrejus Baranovskis: Missing New Feature in JDev (11.1.2.2.0) - ADF Methods Security http://fb.me/1jQM1enls
    OracleSupport_WLS: Tutorial on managing #HTTP Sessions in a #Weblogic #Cluster http://bit.ly/JshESe
    Oracle WebLogic: ZeroTurnaround developer report: #Spring keeps getting heavier, and #Java EE keeps getting lighter http://bit.ly/JDmKy2
    JDeveloper & ADF: How to Search in Views - Part 4 || Oracle ADF http://dlvr.it/1dpDjZ
    WebLogic Community: Java Message Service with Java and Spring Framework on Oracle WebLogic; Webcast May 15th 2012 http://wp.me/p1LMIb-gS
    Andreas Koop: new post ADF Bug or Feature? Non-Breaking Space outside required icon style http://bit.ly/KDZnUo
    Oracle WebLogic: Don't miss this month's WebLogic DevCast: WebLogic JMS and Spring JMS http://bit.ly/J6g2ST Tuesday May 15th 10:00am PT
    JDeveloper & ADF: How To Disable SELECT COUNT Execution for ADF Table Rendering http://dlvr.it/1dqKH6
    OracleSupport_WLS: #SSL and security has its own Information Center, http://bit.ly/LP8Vil for troubleshooting, install, config and more
    NetBeans Team: Featured #NetBeans plugin is @Codename_One for creating native apps for major mobile platforms: http://plugins.netbeans.org/
    JDeveloper & ADF: Using JDeveloper HTTP Analyser to intercept/forward requests http://dlvr.it/1Yzl4J
    Nicolas Lorain: Create native looks for JavaFX applications: JavaFX-CSS-Themes · http://buff.ly/M0jel0 by Gregg Setzer
    Devoxx: Want to make the world a better place? Then get involved in Random Hacks of Kindness on June 2 - 3 in Belgium @ http://www.rhok.be #RHoK
    WebLogic Community: top tweets WebLogic Partner Community – May 2012 Click here #WebLogicCommunity
    Michel Schildmeijer: Oracle Traffic Director 11g http://lnkd.in/-mm3Vy
    Andrejus Baranovskis: Proactively Monitoring JDeveloper 11g IDE Heap Memory http://fb.me/16YZErPrx
    Arun Gupta: 80+ attendees building a #javaee6 application using NetBeans/WebLogic at Java Day, Istanbul fun times! http://pic.twitter.com/odY19daW
    A. Chatziantoniou: Just registered for the Oracle FMW Summer Camp in Lisbon. Looking forward to learning, meeting friends and trying to buy ice cream on the beach
    OTNArchBeat: Another Myth Debunked: 200 Continuous Redeployments with WebLogic | @munz http://bit.ly/JiPyM7
    Oracle WebLogic: Need to learn more on #WebLogic Server #JVM performance tuning? http://bit.ly/MNUxHx
    GlassFish: Dukes Choice Awards 2012 Nominations Are Open: 2012 Duke's Choice Awards are open for nominations. These awards http://bit.ly/Ksk4U3
    Justin Kestelyn: Major cloud-related announcements from Larry Ellison and Mark Hurd on June 6 http://bit.ly/KTJiII
    Nicolas Lorain: Transparent Windows (Stage) with #JavaFX 2 : Adam Bien's Weblog http://j.mp/INgq8K
    WebLogic Community: Web Services with JAX and Spring on WebLogic – Webcast May 30th 2012 #WebLogicCommunity #weblogic #opn
    JDeveloper & ADF: Oracle ADF - How to work with Dates http://dlvr.it/1Y70zw
    OracleBlogs: Web Services with JAX and Spring on WebLogic – Webcast May 30th 2012 http://ow.ly/1k2WtO
    Adam Bien: Summer Java EE Workshops: 23.05, Amsterdam Airport Java EE Hacking, Without Airport. The Dutch version of Airport http://bit.ly/JeP6hV
    JDeveloper & ADF: ADF 11g: BC4J or EJB3. http://bit.ly/JVVFZF
    ADF EMG: Great discussion with JSF guru Andy Schwartz on the forum - 38 posts! Check it out: here
    Devoxx: Oracle (http://www.oracle.com) joins Devoxx 2012 as the first Premium partner, welcome aboard!
    Nicolas Lorain: Developing a Simple Todo Application using #JavaFX, #Java and #MongoDB - Part 1 - JavaBeat http://j.mp/IDGxLA
    Nicolas Lorain: Preview of JavaFX 2.2 canvas feature > Harmonic Code: Death bitmaps could be beautiful... Part I http://buff.ly/KyAXg5 #JavaFX
    OTNArchBeat: New York Coherence Special Interest Group (NYCSIG) - May 24 - NYC http://bit.ly/JzJcbT
    WebLogic Community: iAS upgrade to WebLogic - watch the #C2B2 online seminar http://youtu.be/5m2CNUjBIGQ #WebLogicCommunity
    Ruth Collett: Join Oracle in #Joburg on May 21 for OTN Developer Day - sessions on #Java #JavaEE 6/7 and much more! http://bit.ly/IENwnD
    WebLogic Community: Sending out invitations to our advanced Fusion Middleware Summer Camps! Want to learn more? Register for the community
    Ruth Collett: Join @ArunGupta in Istanbul this Monday to hear the latest on #JavaEE 6/7 http://bit.ly/Je63cc
    GlassFish: NetBeans 7.2 Beta - Built for Speed, Deploy Apps to Oracle Cloud: NetBeans 7.2 Beta is now available. The http://bit.ly/LxMMTK
    Lucas Jellema: My latest SlideShare upload: Java ain't scary - introducing Java to PL/SQL. here via @slideshare
    JDeveloper & ADF: #Developer #free #ADF training in #Scotland - June 13. More information: http://bit.ly/LbPLlf
    AMIS, Oracle & Java: AMIS is the first in the Netherlands to achieve the Oracle ADF specialization - Channelworld news, Channelconnect: http://bit.ly/JzAcB4
    WebLogic Community: Web Services with JAX and Spring on WebLogic – Webcast May 30th 2012 http://wp.me/p1LMIb-gX
    Nicolas Lorain: JavaFX-based SimpleDateFormat Demonstrator http://j.mp/KFCVOi #JavaFX via Dustin Marx
    Oracle Exalogic: Are you an Oracle partner? There's news on the Oracle Partner Network about #Exalogic specializations - http://bit.ly/Mt3ANY
    JDeveloper & ADF: Shorter URL for your ADF application http://dlvr.it/1XqNLY
    OTNArchBeat: Bay Area Coherence Special Interest Group (BACSIG) Meeting June 7 http://bit.ly/JAa0Lx
    OTNArchBeat: Java EE 6 Sample Application on WebLogic 12c: Conference Planner | @arungupta http://bit.ly/LPvof4
    JDeveloper & ADF: Excellent example of Oracle ADF - Google Maps/Earth integration http://dlvr.it/1cbc80
    JDeveloper & ADF: Setting Up JDeveloper's Embedded WLS for MySQL http://dlvr.it/1c4b8P
    JDeveloper & ADF: Solution for Sharing Global User Data in ADF BC http://dlvr.it/1cc7SJ
    Java: Java Magazine May/June #javaee #javafx #javame #openJDK #hotspot #wicket #lotsmore http://ow.ly/aX07v
    Oracle WebLogic: http://bit.ly/JxQsnS if you have trouble finding the right #patchset when doing an upgrade to your #weblogic server
    OracleEnterpriseMgr: 15 minutes to go before we start our Application Testing Suite 12.1 webcast. http://bit.ly/JHyTEe Learn from the lead PM what's new. #em12c
    Sten Vesterli: Eating your own dog food - Oracle support site finally in ADF: http://lnkd.in/s6hg_p
    Adam Bien: Project "Jenever" (= poison) checked in with GIT: here CU at http://workshops.adam-bien.com. Thanks for attending!
    OTNArchBeat: Web Service Development with NetBeans and Testing with WebLogic Admin Console | @munz http://bit.ly/JcWk34

    Please feel free to send us your news! And add your blog to our SOA blog wiki.

    Read the article

  • CodePlex Daily Summary for Friday, December 10, 2010

    CodePlex Daily Summary for Friday, December 10, 2010

    Popular Releases

    Free Silverlight & WPF Chart Control - Visifire: Visifire Silverlight, WPF Charts v3.6.5 Released: Hi, today we are releasing the final version of Visifire, v3.6.5, with the following new feature: a new property AutoFitToPlotArea has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in bubble charts. You can visit the Visifire documentation to know more: http://www.visifire.com/visifirechartsdocumentation.php Also this release includes a few bug fixes: the chart threw an exception while adding a new Axis in the chart using Vi...
    PHPExcel: PHPExcel 1.7.5 Production: Donations: donate via PayPal. If you want to, we can also add your name / company on our Donation Acknowledgements page. PEAR channel: we now also have a full PEAR channel! Here's how to use it. New installation: pear channel-discover pear.pearplex.net, then pear install pearplex/PHPExcel. Or if you've already installed PHPExcel before: pear upgrade pearplex/PHPExcel. The official page can be found at http://pearplex.net. Want to contribute? Please refer to the Contribute page.
    UserVoice Helper for WebMatrix: UserVoice Helper v0.9: This version will work with ASP.NET WebPages and ASP.NET MVC applications.
    DNN Simple Article: DNNSimpleArticle Module V00.00.03: The initial release of the DNNSimpleArticle module (labelled V00.00.03). There are C# and VB versions of this module for this initial release. No promises that going forward there will be packages for both languages provided for future releases. This module provides the following functionality: create and display articles; display a paged list of articles; articles get created as DNN ContentItems; categorization provided through DNN Taxonomy; SEO functionality for article display providi...
    UOB & ME: UOB_ME 2.5: latest version.
    CouchDB.NET: CouchDB.NET 0.1: Libraries and providers to use CouchDB features from .NET. This distribution includes the following projects: MachineKeyGenerator - command line tool to generate a machine key string for use in App.Config and Web.Config files; CouchDB.NET - library to facilitate the use of CouchDB features, which uses Hadi Hariri's EasyHttp library to communicate with the CouchDB server (more info at https://github.com/hhariri/EasyHttp); CouchDb.ASP.NET - ASP.NET Membership Provider and ASP...
    AutoLoL: AutoLoL v1.4.3: AutoLoL now supports importing the build pages from Mobafire.com as well! Just insert the url to the build and voila. (For example: http://www.mobafire.com/league-of-legends/build/unforgivens-guide-how-to-build-a-successful-mordekaiser-24061) Stable release of AutoChat (it is still recommended to use it with caution and to read the documentation). It is now possible to associate *.lolm files with AutoLoL to quickly open them. The selected spells are now displayed in the masteries tab for qu...
    SubtitleTools: SubtitleTools 1.2: Added auto insertion of the RLE (RIGHT-TO-LEFT EMBEDDING) Unicode character for RTL languages. Fixed delete rows issue.
    PHP Manager for IIS: PHP Manager 1.1 for IIS 7: This is a final stable release of PHP Manager 1.1 for IIS 7. This is a minor incremental release that contains all the functionality available in 53121 plus the additional features listed below: improved detection logic for existing PHP installations (now PHP Manager detects the location of the php.ini file in accordance with the PHP specifications); configuring date.timezone (PHP Manager can automatically set the date.timezone directive, which is required to be set starting from PHP 5.3); ability to ...
    Algorithmia: Algorithmia 1.1: Algorithmia v1.1, released on December 8th, 2010.
    SuperSocket, an extensible socket application framework: SuperSocket 1.0 SP1: Fixed bugs: fixed a potential bug that the running state hadn't been updated after the socket server stopped; fixed a synchronization issue when clearing timed-out sessions; fixed a bug in ArraySegmentList; fixed a bug in getting a configuration value.
    CslaGenFork: CslaGenFork 4.0 CTP 2: The version is 4.0.1 CTP2, released 2010 December 7, and includes the following files: CslaGenFork 4.0.1-2010-12-07 Setup.msi, Templates-2010-10-07.zip. For getting started instructions, refer to the How to section. Overview of the changes: since CTP1 there were 53 work items closed (28 features, 24 issues and 1 task). During these 60 days a lot of work has been done in several areas. First the stereotypes: EditableRoot is OK, EditableChild is OK, EditableRootCollection is OK, Editable...
    My Web Pages Starter Kit: 1.3.1 Production Release (Security HOTFIX): Due to a critical security issue, it's strongly advised to update the My Web Pages Starter Kit to this version. Possible attackers could misuse the image upload to transmit any type of file to the website. If you already have a running version of My Web Pages Starter Kit 1.3.0, you can just replace the ftb.imagegallery.aspx file in the root directory with the one attached to this release.
    EnhSim: EnhSim 2.2.0 ALPHA: This release adds in the changes for 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Updated En...
    ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. New stuff: popup WhiteSpaceFilterAttribute; tested on Mozilla, Safari, Chrome, Opera, IE 9b/8/7/6.
    nopCommerce. ASP.NET open source shopping cart: nopCommerce 1.90: To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).
    myCollections: Version 1.2: New in version 1.2: Big performance improvement. New design (added Outlook style view, new detail view, new Group By...). Added sort by media. Added Manage Movie Studio. Zoom preference is now saved. Media names are now editable. Added Portuguese version. You can now hide the details panel. Added support for FLAC tags. You can now import books from a BibTex XML file. Bug fixing.
    mytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.49.0 beta: mytrip.mvc 1.0.49.0 beta web - web for install hosting. System requirements: .NET 4.0, MSSQL 2008 or MySql (auto creation of tables in the database; if .\SQLEXPRESS, auto creation of the database in the App_Data folder). mytrip.mvc 1.0.49.0 beta src - system requirements: Visual Studio 2010 or Web Developer 2010, MSSQL 2008 or MySql (auto creation of tables in the database; if .\SQLEXPRESS, auto creation of the database in the App_Data folder), Connector/Net 6.3.4, MVC3 RC. WARNING: to run and debug mytrip.mvc 1.0.49.0 beta src, download and ...
    Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.3 Beta: Added keyboard navigation support with access keys. Shortcuts like Ctrl-Alt-A are now supported (where the browser permits it). The PopupMenuSeparator is now completely based on the PopupMenuItem class. Moved item manipulation code to a partial class in PopupMenuItemsControl.cs. Moved menu management and keyboard navigation code to the new PopupMenuManager class. Simplified the layout by removing the RootGrid element (all content is now placed in OverlayCanvas and is accessed by the new ...
    MiniTwitter: 1.62

    New Projects

    AccountingGuid: for testing only.
    Chinese Nag Screen: This is a simple but effective program for learning to recognize Mandarin characters. The application sits in the system tray and displays a character at random through your day. You can only get rid of it by typing in the pinyin.
    CouchDB.NET: .NET libraries to use CouchDB from .NET. Included are Membership and Roles providers so that you may use CouchDB as your integrated DB backend on your ASP.NET projects. Please see the readme.txt file for instructions.
    DataSetMapper: The idea behind DataSetMapper is to provide support for the automatic mapping of legacy DataSet-based structures to proper domain objects. In essence the aim is to create the mapping aspect of an ORM without the persistence concerns.
    EasyXnaAudio: EasyXnaAudio is a simple component for use in XNA Game Studio 3.1/4.0 projects that provides an easy interface to load, play, and manage songs and sounds in your game.
    FixMailboxSD - Exchange Mailbox Security Descriptor Canonicalizer: This is a small utility to fix mailbox security descriptors in Microsoft Exchange that have become non-canonical. It must be run on a machine with Exchange System Manager for Exchange 2003 installed, but it will work against mailboxes on 2003 or 2007 (not 2010).
    GearSynth Plugin: a plugin for GraphSynth that makes gear trains.
    GroceryList: TBD with first version.
    IBMS Suite Build on the Associate Platform: A new way of approaching information systems. From the UI, users of the IS will be able to build and manipulate the IS in whatever way fits their needs. We have simplified development, removed the chasm between management and IT, and give the power of simplification to the user!
    Ivy Nasha Framework: A PHP framework.
    jQuery helpers for ASP.NET and ASP.NET MVC: jQuery helpers makes it easier for ASP.NET developers to build jQuery scripts. It's developed in C#.
    JSTest.NET: JSTest.NET enables JavaScript unit tests to be run directly in the test framework of your choice (MSTest, NUnit, xUnit, etc), all without the need for a web browser. JSTest.NET utilizes the Windows Script Host (CScript) to run fast, fully debuggable JavaScript unit tests!
    Multicore Task Framework: MTF is a visual tool to simplify building robust component-based .NET applications. MTF is designed to make full use of the power of multi-core processors.
    Nazha Script On DLR: Nazha
    PascalESE - a Delphi/Pascal class library for Microsoft ESENT database API: This Pascal class library, primarily written for Delphi's Object Pascal, provides a lightweight and easy-to-use wrapper around the ESENT API.
    Perpetuum Hangar: A character planner for the online game "Perpetuum".
    Projeto Exemplo: Example project for activity 3 of the course.
    PSiteCode: PSiteCode Manager.
    rScript Engine: The rScript scripting engine is a managed script engine written in C# that supports Visual Basic and C# syntax-based scripts. It provides types for dynamically getting and setting properties, invoking methods and run-time compilation of scripts.
    SharePoint 2010 User Profile WebPart: This webpart shows all user profile properties and the values of the properties for a particular user profile. The results are shown in a table containing the display and technical names together with the user value.
    SHC: shri
    SHMTools: SHMTools is a set of compatible software tools (mostly Matlab-based) for structural health monitoring (SHM) research. This includes algorithms for system design, modeling, data acquisition, feature extraction, classification, and prognosis.
    SwapWin: SwapWin is a tiny and handy tool which swaps windows on different screens. Developed in C# and .NET 3.5.
    Teachers Diary: Teachers Diary is an application realizing an electronic teacher's notepad with student marks. The current localization of the application is in Czech only.
    VkApp: Vk app for downloading.
    WebSpirit: A lightweight web server implemented in C# which supports a sufficiently extendible feature set. By zju.
    WPF & MEF Studio: WPF & MEF Studio

    Read the article

  • SOA Suite Integration: Part 1: Building a Web Service

    - by Anthony Shorten
    Over the next few weeks I will be posting blog entries outlining the SOA Suite integration of the Oracle Utilities Application Framework. This will illustrate how easy it is to integrate by providing some samples. I will use a consistent set of features as examples. The examples will be simple and, while they will not illustrate ALL the possibilities, they will illustrate the relative ease of integration. Think of them as a foundation. You can obviously build upon them.

    Now, to ease a few customers' minds, this series will certainly feature the latest version of SOA Suite and the latest version of the Oracle Utilities Application Framework, but the principles will apply to past versions of both those products. So if you have Oracle SOA Suite 10g, or are a customer of Oracle Utilities Application Framework V2.1 or above, most of what I will show you will work with those versions. It is just easier in Oracle SOA Suite 11g and Oracle Utilities Application Framework V4.x.

    This first posting will not feature SOA Suite at all but will concentrate on the capability of the Oracle Utilities Application Framework to create Web Services you can use for integration. The XML Application Integration (XAI) component of the Oracle Utilities Application Framework allows product objects to be exposed as XML based transactions or as Web Services (or both). XAI was written before Web Services became fashionable and has allowed customers of our products to provide a consistent interface into and out of our product line. XAI has been enhanced over the last few years to take advantage of the maturing landscape of Web Services in the marketplace, to a point where it is now easier to integrate to SOA infrastructure.

    There are a number of object types that can be exposed as Web Services:

    Maintenance Objects – These are the lowest level objects that can be exposed as Web Services. Customers of past versions of the product will be familiar with XAI services based upon Maintenance Objects, as they used to be the only method of generating Web Services. These are still supported for backward compatibility but are starting to become less popular, as they were strict in their structure and were solely attribute based. To generate Maintenance Object based Web Service definitions you need to use the XAI Schema Editor component.

    Business Objects – In Oracle Utilities Application Framework V2.1 we introduced the concept of Business Objects. These are site or industry specific objects that are based upon Maintenance Objects. These allow sites to respecify, in configuration, the structure and elements of a Maintenance Object and other Business Objects (they are true objects with support for inheritance, polymorphism, encapsulation etc.). These can be exposed as Web Services.

    Business Services – As with Business Objects, we introduced Business Services in Oracle Utilities Application Framework V2.1, which allowed application services and query zones to be expressed as custom services. These can then be exposed as Web Services via the Business Service definition.

    Service Scripts – As with Business Objects and Business Services, we introduced Service Scripts in Oracle Utilities Application Framework V2.1. These allow services and/or objects to be combined into complex objects, or simply expose common routines as callable scripts. These can also be defined as Web Services.

    For the purpose of this series we will restrict ourselves to Business Objects. The techniques can apply to any of the objects discussed above.

    Now, let's get to the important bit of this blog post, the creation of a Web Service. To build a Business Object, you first log on to the product and navigate to the Administration Menu by selecting the Admin Menu from the Menu action at the top left of the screen (next to Home). A popup menu will appear with the menus available. If you do not see the Admin menu then you do not have authority to use it. Here is an example:

    Navigate to the B menu and select the + symbol next to the Business Object menu item. This indicates that you want to ADD a new Business Object. This menu will appear if you are running Alphabetic mode in your installation (I almost forgot that point). You will be presented with the Business Object maintenance screen. You will fill out the following on the first tab (at a minimum):

    Business Object – The name of the Business Object. Typically you will make it descriptive and also prefix it with CM to denote it as a customization (you can easily find it if you prefix it). As I am running this on my personal copy of the product, I will use my initials as the prefix and call the sample Web Service "AS-User".

    Description – A short description of the object to tell others what it is used for. For my example, I will use "Anthony Shorten's User Object".

    Detailed Description – You can add a long description to help other developers understand your object. I am just going to specify "Anthony Shorten's Test Object for SOA Suite Integration".

    Maintenance Object – As this Business Object is going to be based upon a Maintenance Object, I will specify the desired Maintenance Object. In this example, I have decided to use the Framework object USER. Now, I chose this for a number of reasons. It is meaningful, simple and is across all our product lines. I could choose ANY maintenance object I wished to expose (including any custom ones, if I had them).

    Parent Business Object – If I was not using a Maintenance Object but building a child Business Object against another Business Object, then I would specify the Parent Business Object here. I am not using parents, so I will leave this blank. You either use Parent Business Object or Maintenance Object, not both.

    Application Service – Business Objects, like other objects, are subject to security. You can attach an Application Service to an object to specify which groups of users (remember services are attached to user groups, not users) have appropriate access to the object. I will use a default service provided with the product, F1-DFLTS, as this is just a demonstration so I do not have to be too sophisticated about security.

    Instance Control – This allows the object to create instances. You can specify a Business Object purely to hold rules. I am being simple here, so I will set it to Allow New Instances to allow the Business Object to be used to create, read, update and delete user records.

    The rest of the tab I will leave empty as I want this to be a very simple object. Other options allow lots of flexibility. The contents should look like this:

    Before saving your work, you need to navigate to the Schema tab and specify the contents of your object. I will save some time. When you create an object, the schema will only contain the basic root elements of the object (in fact only the schema tag is visible). When you go to the Schema tab, you will see a BO Schema zone on the dashboard with a solitary button. This will allow you to generate the schema from our metadata.

    Click on the Generate button to generate a basic schema from the metadata. You will now see a schema with the element tags and references to the metadata of the Maintenance Object (in the mapField attribute). I could spend a while outlining all the ways you can change the schema with defaults, formatting, tagging etc., but the online help has plenty of great examples to illustrate this. You can use the Schema Tips zone for more details of the available customizations.

    Note: The tags are generated from the language pack you have installed. The sample is English, so the tags are in English (which is the base language of all installations). If you are using a language pack then the tags will be generated in the language of the user that generated the object.

    At this point you can save your Business Object by pressing the Save action. You now have a basic Business Object based on the USER maintenance object ready for use, but it is not defined as a Web Service yet. To do this you need to define the newly created Business Object as an XAI Inbound Service. The easiest and quickest way is to select + next to XAI Inbound Service from the context menu on the Business Object maintenance screen. This will prepopulate the service definition with the following:

    Adapter – This will be set to Business Adaptor. This indicates that the service is either Business Object, Business Service or Service Script based.

    Schema Type – Whether the object is a Business Object, Business Service or Service Script. In this case it is a Business Object.

    Schema Name – The name of the object. In this case it is the Business Object AS-User.

    Active – Set to Yes. This means the service is available upon startup automatically. You can enable and disable services as needed.

    Transaction Type – A default transaction type, as this is a Business Object service. More about this in later postings. In our case we use the default, Read. This means that if we only specify data and not a transaction type, the product will assume you want to issue a read against the object.

    You need to fill in the following:

    XAI Inbound Service – The name of the Web Service. Usually people use the same name as the underlying object, as in this example, but this can match your site's interfacing standards. By the way, you can define multiple XAI Inbound Services/Web Services against the same object if you want.

    Description and Detail Description – Documentation for your Web Service. I just supplied some basic documentation for this demonstration.

    You can now save the service definition. Note: There are lots of other options on this screen that allow the behavior of your service to be specified. I will leave them blank for now.

    When you save the service you are issued with two new pieces of information. XAI Inbound Service Id is a randomly generated identifier used internally by the XAI servlet. WSDL URL is the standard WSDL URL used for integration. We will take advantage of that in later posts. An example of the definition is shown below:

    Now you have defined the service, but it will only be available at the next server restart or when you flush the data cache. XAI Inbound Services are cached for performance, so the cache needs to be told of this new service. To refresh the cache you can use the Admin –> X –> XAI Command menu item. From the command dropdown select Refresh Registry and press Send Command. You will see an XML representation of the command sent to the server (the presence of the XML means it is finished).

    If you get an error around authorization, then check your default user and password settings on the XAI Options menu item. Be careful with flushing the cache, as the cache is shared (unless of course you are the only Web Service user on the system - in that case it only affects you). The Web Service is NOW available to be used.

    To perform a simple test of your new Web Service, navigate to the Admin –> X –> XAI Submission menu item. You will see an open XML request tab. You need to type the request XML you want to test into the Main tab. The first tag is the XAI Inbound Service name and the elements are as per your schema (minus the schema tag itself, as that is only used internally). My example is as follows (I want to return the details of user SYSUSER) - remember to close tags. Hitting the Save button will issue the XML and return the response according to the Business Object schema.

    Now, before you panic: you noticed that it did not ask for credentials. It propagates the online credentials to the service call on this function. You now have a Web Service you can use for integration. We will reuse this information in subsequent posts. The process I just described can be used for ANY object in the system you want to expose. This whole process at a minimum can take under a minute. Obviously I only showed the basics, but you can at least get an appreciation of the ease of defining a Web Service (just by using a browser). The next posts build upon this. Hope you enjoyed the post.
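    Because the XAI servlet simply accepts XML over HTTP, any HTTP client can exercise the service outside the browser. The following C# sketch posts the same kind of read request for user SYSUSER. It is illustrative only: the servlet URL shown is an assumption (derive the real endpoint from the WSDL URL reported on your XAI Inbound Service definition), and the element names must match your generated Business Object schema.

        using System;
        using System.Net;

        class XaiReadUser
        {
            static void Main()
            {
                // Assumed XAI servlet endpoint - take the real one from the
                // WSDL URL shown on the XAI Inbound Service definition.
                const string xaiUrl = "http://host:port/spl/XAIApp/xaiserver";

                // The root tag is the XAI Inbound Service name; the child
                // element names are hypothetical and must match your schema.
                const string request = "<AS-User><user>SYSUSER</user></AS-User>";

                using (var client = new WebClient())
                {
                    client.Headers[HttpRequestHeader.ContentType] = "text/xml";
                    // Unlike the online XAI Submission page, a remote call must
                    // supply credentials explicitly.
                    client.Credentials = new NetworkCredential("SYSUSER", "password");
                    string response = client.UploadString(xaiUrl, request);
                    Console.WriteLine(response); // response follows the BO schema
                }
            }
        }

    Note that no transaction type is specified in the request, so under the default Read transaction type discussed above, the service would treat this as a read of the object.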

    Read the article

  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
    In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I’m going to postpone that until my next post, as I’ve had a request from a SQL Monitor user to explain how alerts are stored. In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables: alert.Alert alert.Alert_Cleared alert.Alert_Comment alert.Alert_Severity alert.Alert_Type In this post, I’m only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post. The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let’s have a look at it. SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType FROM alert.Alert ORDER BY AlertId DESC;  AlertIdAlertTypeTargetObjectReadSubType 165550397:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,10 265549387:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,10 365548187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 465547157:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 565546147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 665545187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 765544157:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 865543147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 965542187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 1065541147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 11…     So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to. Okay, now lets look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings use nicely to the next table, data.Alert_Type. Let’s have a look at what’s in this table: SELECT AlertType, Event, Monitoring, Name, Description FROM alert.Alert_Type ORDER BY AlertType;  AlertTypeEventMonitoringNameDescription 1100Processor utilizationProcessor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration 2210SQL Server error log entryAn error is written to the SQL Server error log with a severity level above a specified value. 3310Cluster failoverThe active cluster node fails, causing the SQL Server instance to switch nodes. 4410DeadlockSQL deadlock occurs. 
5500Processor under-utilizationProcessor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration 6610Job failedA job does not complete successfully (the job returns an error code). 7700Machine unreachableHost machine (Windows server) cannot be contacted on the network. 8800SQL Server instance unreachableThe SQL Server instance is not running or cannot be contacted on the network. 9900Disk spaceDisk space used on a logical disk drive is above a defined threshold for longer than a specified duration. 101000Physical memoryPhysical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration. 111100Blocked processSQL process is blocked for longer than a specified duration. 121200Long-running queryA SQL query runs for longer than a specified duration. 131400Backup overdueNo full backup exists, or the last full backup is older than a specified time. 141500Log backup overdueNo log backup exists, or the last log backup is older than a specified time. 151600Database unavailableDatabase changes from Online to any other state. 161700Page verificationTorn Page Detection or Page Checksum is not enabled for a database. 171800Integrity check overdueNo entry for an integrity check (DBCC DBINFO returns no date for dbi_dbccLastKnownGood field), or the last check is older than a specified time. 181900Fragmented indexesFragmentation level of one or more indexes is above a threshold percentage. 192400Job duration unusualThe duration of a SQL job duration deviates from its baseline duration by more than a threshold percentage. 202501Clock skewSystem clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds. 212700SQL Server Agent Service statusThe SQL Server Agent Service status matches the status specified. 222800SQL Server Reporting Service statusThe SQL Server Reporting Service status matches the status specified. 232900SQL Server Full Text Search Service statusThe SQL Server Full Text Search Service status matches the status specified. 243000SQL Server Analysis Service statusThe SQL Server Analysis Service status matches the status specified. 253100SQL Server Integration Service statusThe SQL Server Integration Service status matches the status specified. 263300SQL Server Browser Service statusThe SQL Server Browser Service status matches the status specified. 273400SQL Server VSS Writer Service statusThe SQL Server VSS Writer status matches the status specified. 283501Deadlock trace flag disabledThe monitored SQL Server’s trace flag cannot be enabled. 293600Monitoring stopped (host machine credentials)SQL Monitor cannot contact the host machine because authentication failed. 303700Monitoring stopped (SQL Server credentials)SQL Monitor cannot contact the SQL Server instance because authentication failed. 313800Monitoring error (host machine data collection)SQL Monitor cannot collect data from the host machine. 323900Monitoring error (SQL Server data collection)SQL Monitor cannot collect data from the SQL Server instance. 334000Custom metricThe custom metric value has passed an alert threshold. 344100Custom metric collection errorSQL Monitor cannot collect custom metric data from the target object. 
Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 – some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self-evident, and I’m going to skip over the Event and Monitoring columns as they’re not very interesting. The AlertType column is the primary key, and is referenced by the AlertType column in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts:

SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType
FROM alert.Alert a
JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
ORDER BY AlertId DESC;

AlertId | Name | TargetObject | Read | SubType
65550 | Monitoring error (SQL Server data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:, | 0 | 0
65549 | Monitoring error (host machine data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:, | 0 | 0
65548 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
65547 | Log backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
65546 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
65545 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
65544 | Log backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
65543 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
65542 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
65541 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0

Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one’s a bit tricky! The TargetObject of an alert is a serialized string representation of the position in the monitored object hierarchy of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it’s probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it’s best to consider an example from the table above. Have a look at the alert with an AlertId of 65543. It’s a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus:

Cluster: Name = "granger"
SqlServer: Name = "" (an empty string, denoting the default instance)
Database: Name = "SqlMonitorData"

Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,".
It is indeed composed of three parts, one for each level in the hierarchy:

Cluster: "7:Cluster,1,4:Name,s7:granger,"
SqlServer: "9:SqlServer,1,4:Name,s0:,"
Database: "8:Database,1,4:Name,s14:SqlMonitorData,"

Each part is handled in exactly the same way, so let’s concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following:

"7:Cluster," – This identifies the level in the hierarchy.
"1," – This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name).
"4:Name,s7:granger," – This represents the Name property, and its corresponding value, granger. It’s split up like this:
  "4:Name," – Indicates the name of the key property.
  "s" – Indicates the type of the key property; in this case, it’s a string.
  "7:granger," – Indicates the value of the property.

At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well, an encoding scheme is used, which consists of the following:

"7" – This is the length of the string "Cluster".
":" – This is a delimiter between the length of the string and the actual string’s contents.
"Cluster" – This is the string itself: 7 characters.
"," – This is a final terminating character that indicates the end of the encoded string.

You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows:

"I" – Denotes a bigint value. For example, "I65432,".
"g" – Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,".
"d" – Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I’ll describe how datetime values are handled in the SQL Monitor data repository in a future post.

I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert. Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I’m interested in:

SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType
FROM alert.Alert a
JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
WHERE AlertId = 65769;

AlertId | AlertType | Name | TargetObject | Read | SubType
65769 | 40 | Custom metric | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2

An AlertType value of 40 corresponds to the Custom metric alert type. The Name taken from the alert.Alert_Type table is simply Custom metric, but this doesn’t tell us anything about the specific custom metric that this alert pertains to. That’s where the SubType value comes in. For custom metric alerts, this provides us with the Id of the specific custom alert definition that can be found in the settings.CustomAlertDefinitions table.
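As a quick aside before we move on: if you fancy experimenting with these encoded values directly in the database, here is a minimal T-SQL sketch (my own illustration – this is not part of SQL Monitor, which does its parsing in C#) that decodes a single length-prefixed string using the length:value, scheme just described:

DECLARE @token varchar(max) = '7:Cluster,1,4:Name,s7:granger,';
DECLARE @colon int, @len int;

-- Everything before the first ':' is the length of the encoded string
SET @colon = CHARINDEX(':', @token);
SET @len = CAST(SUBSTRING(@token, 1, @colon - 1) AS int);

-- The string itself starts straight after the ':' and runs for @len
-- characters; the trailing ',' terminator is simply skipped over
SELECT SUBSTRING(@token, @colon + 1, @len) AS DecodedString; -- returns 'Cluster'

Applying the same logic repeatedly – stepping past each decoded token and its terminator – would let you walk a whole TargetObject value, but as I said above, T-SQL really isn’t the natural home for this kind of string manipulation.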
Back to the SubType column: I don’t really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the query above shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert.

SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType
FROM alert.Alert a
JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id
WHERE AlertId = 65769;

AlertId | AlertType | Name | CustomAlertName | TargetObject | Read | SubType
65769 | 40 | Custom metric | CPU pressure (avg runnable task count) | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2

The TargetObject value in this case breaks down like this:

"7:Cluster,1,4:Name,s7:granger," – Cluster named "granger".
"9:SqlServer,1,4:Name,s0:," – SqlServer named "" (the default instance).
"8:Database,1,4:Name,s6:master," – Database named "master".
"12:CustomMetric,1,8:MetricId,I2," – Custom metric with an Id of 2.

Note that the hierarchy for a custom metric is slightly different compared to the earlier Backup overdue alert. It’s root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and the value is a bigint (not a string). Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I’d like to point out that whilst the SubType references a custom alert definition, the MetricId value embedded in the TargetObject value references a custom metric definition. Although in this case both the custom metric definition and custom alert definition share the same Id value of 2, this is not generally the case.

Okay, that’s enough for now, not least because as I’m typing this, it’s almost 2am, I have to go to work tomorrow, and my alarm is set for 6am – eek! In my next post, I’ll either cover the remaining three tables in the alert schema, or I’ll delve into the way SQL Monitor stores its monitoring data, as I’d originally planned to cover in this post.

    Read the article

  • CodePlex Daily Summary for Tuesday, December 07, 2010

CodePlex Daily Summary for Tuesday, December 07, 2010

Popular Releases

My Web Pages Starter Kit: 1.3.1 Production Release (Security HOTFIX): Due to a critical security issue, it's strongly advised to update the My Web Pages Starter Kit to this version. Possible attackers could misuse the image upload to transmit any type of file to the website. If you already have a running version of My Web Pages Starter Kit 1.3.0, you can just replace the ftb.imagegallery.aspx file in the root directory with the one attached to this release.

ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. New stuff: popup; WhiteSpaceFilterAttribute; tested on Mozilla, Safari, Chrome, Opera, IE 9b/8/7/6.

nopCommerce. ASP.NET open source shopping cart: nopCommerce 1.90: To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).

Aura: Aura Preview 1: Rewritten from scratch. This release supports getting color only from the icon of the foreground window.

myCollections: Version 1.2: New in version 1.2: big performance improvement; new design (added Outlook-style view, new detail view, new Group By...); added sort by media; added manage movie studio; zoom preference is now saved; media names are now editable; added Portuguese version; you can now hide the details panel; added support for FLAC tags; you can now import books from a BibTex XML file; bug fixing.

mytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.49.0 beta: mytrip.mvc 1.0.49.0 beta web, for installation on hosting. System requirements: .NET 4.0, MSSQL 2008 or MySql (auto creation of tables in the database); if .\SQLEXPRESS, auto creation of the database (App_Data folder). mytrip.mvc 1.0.49.0 beta src. System requirements: Visual Studio 2010 or Web Developer 2010, MSSQL 2008 or MySql (auto creation of tables in the database); if .\SQLEXPRESS, auto creation of the database (App_Data folder); Connector/Net 6.3.4, MVC3 RC. WARNING: to run and debug mytrip.mvc 1.0.49.0 beta src, download and ...

Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.3 Beta: Added keyboard navigation support with access keys; shortcuts like Ctrl-Alt-A are now supported (where the browser permits it); the PopupMenuSeparator is now completely based on the PopupMenuItem class; moved item manipulation code to a partial class in PopupMenuItemsControl.cs; moved menu management and keyboard navigation code to the new PopupMenuManager class; simplified the layout by removing the RootGrid element (all content is now placed in OverlayCanvas and is accessed by the new ...

SubtitleTools: SubtitleTools 1.0: First public release.

MiniTwitter: 1.62: MiniTwitter 1.62 (Japanese release notes; the original text was garbled in this excerpt).

Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (December 2010): The release is targeted for stable daily use. With improved performance and enhanced compatibility with several of the latest PHP open source applications, this release makes a perfect replacement for your old PHP runtime. Changes made within this release include the following and much more: performance improvements based on real-world application experience – we determined the biggest bottlenecks and we found and removed overheads causing performance problems in many PHP applications. Reimplemented nat...

Chronos WPF: Chronos v2.0 Beta 3: Release notes: updated introduction document; updated Visual Studio 2010 Extension (vsix) package; added horizontal scrolling to the main window TaskBar; added new styles for ListView, ListViewItem, GridViewColumnHeader, ...; added a new WindowViewModel class (allowing to fetch data); added a new Navigate method (with several overloads) to the NavigationViewModel class (protected); reimplemented Task usage for the WorkspaceViewModel.OnDelete method; removed the reflection effect...

MDownloader: MDownloader-0.15.26.7024: Fixed updater; fixed Megaupload.

DJ - jQuery WebControls for ASP.NET: DJ 1.2: What is new? Update to support jQuery 1.4.2; update to support jQuery UI 1.8.6; update to Visual Studio 2010. New WebControls with samples added: Autocomplete WebControl, Button WebControl, ToggleButton WebControl. The example web site is included in the source code project.

LateBindingApi.Excel: LateBindingApi.Excel Release 0.7g: Differences from the previous version: additional Interior properties; Group / Ungroup methods for Range; bugfix for COM reference handling of the Application object in some classes. Release+Samples V0.7g: contains the runtime DLL and sample projects. Sample projects: COMAddinExample – demonstrates a version-independent COM add-in; Example01 – background colors and borders for cells; Example02 – font attributes and alignment for cells; Example03 – number formats; Example04 – Shapes, WordArts, P...

ESRI ArcGIS Silverlight Toolkit: November 2010 - v2.1: ESRI ArcGIS Silverlight Toolkit v2.1. Added Windows Phone 7 build. New controls added: InfoWindow, ChildPage (Windows Phone 7 only). See the full details of what's new in 2.1 here: http://help.arcgis.com/en/webapi/silverlight/help/#/What_s_new_in_2_1/016600000025000000/ Note: requires Visual Studio 2010, .NET 4.0 and Silverlight 4.0.

ASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.1: Atomic CMS 2.1.1 release notes; Atomic CMS installation guide.

Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.5 beta Released: Hi, today we are releasing Visifire 3.6.5 beta with the following new feature: a new property, AutoFitToPlotArea, has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in a bubble chart. This release also includes a few bug fixes: AxisXLabel labels were getting clipped if an angle was set for AxisLabels and ScrollingEnabled was not set in the chart; if the LabelStyle property was set to 'Inside', the size of the pie was not correct. Yo...

AI: Initial 0.0.1: It's simply just one code file; it simulates an AI and a machine in a simulated world. The AI has a little understanding of its body machine and parts, and is able to use its feet to do actions, just starting and stopping walking. The world is all white, with nothing but the machine on a white planet. Colors, odors and position information make no sense. I'm a former C# programmer and I'm learning F# during this project; although I'm still not a good F# programmer, in this project I'm learning to prog...

NKinect: NKinect Preview: Build features: accelerometer reading; motor serial number property; realtime image update; realtime depth calculation; export to PLY (on demand); control motor LED; control Kinect tilt.

Microsoft - Domain Oriented N-Layered .NET 4.0 App Sample (Microsoft Spain): V1.0 - N-Layer DDD Sample App .NET 4.0: Required software (Microsoft base software needed for the development environment): Visual Studio 2010 RTM & .NET 4.0 RTM (final versions); Expression Blend 4; SQL Server 2008 R2 Express/Standard/Enterprise; Unity Application Block 2.0 – published May 5th 2010: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2D24F179-E0A6-49D7-89C4-5B67D939F91B&displaylang=en http://unity.codeplex.com/releases/view/31277 PEX & MOLES 0.94.51023.0, 29/Oct/2010 – Visual Studio 2010 Power Tools http://re...

New Projects

Acorn: Little acorns lead to mighty oaks.

Algorithmia: Algorithm and data-structure library for .NET 3.5 and up. Algorithmia contains sophisticated algorithms and data-structures like graphs, priority queues, command, undo-redo and more.

Base Station Verification system: Base Station Verification system.

BlueAd: Simple app to broadcast messages to bluetooth enabled devices.

BuiltWith Fiddler Integration: BuiltWithFiddler adds BuiltWith functionality to the HTTP Debugging Proxy Fiddler. It helps to determine the underlying technologies used in HTTP responses. www.builtwith.com www.fiddler2.com It is written in C# by Andy at Bare Web.

BVCMS.app: The Bellevue Church Management System is a complete Web-based application for managing your church. This iPhone app provides tools to connect to bvcms so that users can search, check in members, and perform other actions.

coffeeGreet: CoffeeGreet is a WordPress plug-in that will greet your visitors with coffee depending on the hour of the day, by displaying images using the Flickr API.

DCEL data structure: Doubly-connected edge list data structure implementation in C#.

El Bruno ClickOnce Demo: ClickOnce demo on CodePlex.

Firen's Laboratory: Nothing.

FunCam: A fun application for playing with your webcam. Experiment with different overlays and exciting effects. Save the images when you want, or on a timer. Great fun for parties! (WPF/C#) Uses WPF Media Kit for webcam integration, and Shazzam for the great shader effects.

GammaJul LgLcd: A .NET wrapper around the Logitech SDK for G15/G19 keyboard screens. Supports raw byte sending, GDI+ drawing and rendering WPF elements onto the screen.

Getting Started CodePlex: This is a demo for using TFS in CodePlex.

GPUG (Dynamics GP User Group): The location for GPUG members to share code.

HPMC: Demo.

ImageOfMeLocator: Team Boarders Platform: WordPress. Objectives: 1. Create a plugin for WordPress. 2. Create a plugin that allows users to browse images uploaded on their Flickr Account and use them as overlays for store locations on a large map. 3. Create a plug...

jDepot: jQuery Ajax, jQuery UI and ASP.NET MVC based online store application. This software will let a user manage their product inventory by exposing CRUD operations through the UI. Customers can buy these products and track each shipment separately. It is developed in C#.

JQuery Cycle Carousel for DotNetNuke®: DNN Module JQuery Cycle Carousel. This module will show images as a carousel using the Cycle jQuery plugin. You can easily change the cycle effect and other settings in the module.

Local Movie DB in C#: C# WPF project. Will create a local movie database where users can create their own DB of the movies they own/seen/liked... etc.

Location Framework for Windows Phone 7 and Windows Azure: A framework to build location based applications with Windows Phone 7 and Windows Azure.

OraLibs: Collection of useful PL/SQL procedures, which contain methods for working with arrays, strings, numbers, dates.

Phyo: License management.

Repositório de Monografias: The monograph repository will: save to a repository all monographs posted during the term by the students of FACISA/FCM/ESAC; the system administrator will review each one according to ABNT standards and return the necessary corrections to the student.

Secure SharePoint Silverlight Web Part - Silverlight Security & Auditing: The Secure Silverlight WebPart provides both built-in security using default SharePoint security mechanisms and site collection specific auditing to record an event when a Silverlight file is newly hosted in the SharePoint environment.

SilverlightColorPicker: Photoshop-like ColorPicker built in Silverlight from scratch.

Sparrow.Net Connect: This is a passport system.

Sparrow.NET TaskMe: TaskMe is a project management web application, written using the Sparrow.Net framework.

SQLiteWrapper: A light C# wrapper around the SQLite library's functions.

SuperMarioBros.Net: A .Net Super Mario Bros clone.

Virtualizing Tree View: Tree view for large amounts of items.

Windows Forms GUI based Trace Listener: Gives a simple UI-based trace listener to debug/trace information. No need to look at EventLog / XML files etc. This code library helps you view the trace and debug entries, and can plug in to your WinForms app as well.

WP Socially Related: Automatically include related posts from Twitter, WordPress.com and Bing Search into each of your blog posts.

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process:

Putting your database in to a source control system, and,
Running a continuous integration process, each time changes are checked in.

However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above:

Putting some actual integration tests in to the CI process (otherwise, they don’t really do much, do they!?),
Deploying the database changes with a managed, automated approach,
Monitoring what you’ve just put live, to make sure you haven’t broken anything.

This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system.

In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!

Background on Continuous Delivery

For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references:

Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage
Versioning Databases – Branching and Merging, by Scott Allen

In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production. I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!

Integration Testing

Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test, found on http://www.red-gate.com/products/sql-development/sql-test/, or can be downloaded separately from www.tsqlt.org – though I’ll provide a step-by-step guide below for setting this up.

Getting tSQLt set up via SQL Test

Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed.
Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed.
Open SSMS. You should now see SQL Test under the Tools menu.

Clicking this link will give you the basic SQL Test dialogue. As yet, though we’ve installed the SQL Test product, we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link.

In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option. SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example. Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database.

We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database, so we should again check this in to source control, adding comments as required. It’s also worth a quick check that your build still runs with the new additions (and a quick check of the RedGateAppCI database shows that the changes have been made).

Creating and Testing a Unit Test

There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.

Adding Reference Data to our Database

To start, I want to add some reference data to my database, and have this source controlled (as well as the schema).
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity, adding a few rows of reference data directly to the table. Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”. NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and select the option to link static data. In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”. We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo.

NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from).

An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): the display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

Creating and Running the Test

As I mention, the test I’m going to use here is a very simple one – are the email addresses in my reference table valid? This isn’t, of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really those of the Fab Four) – but just a very basic check of the format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example. Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in. We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below.
This needs to be copied and pasted in to the query window, to replace the default given by tSQLt:

-- Basic email check test
ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Output varchar(max);
    SET @Output = '';

    SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
    FROM dbo.Email
    WHERE Email NOT LIKE '%_@__%.__%';

    IF @Output > ''
    BEGIN
        SET @Output = CHAR(13) + CHAR(10) + @Output;
        EXEC tSQLt.Fail @Output;
    END
END;

Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it’s worth just checking that it works! For a positive test, click on “SQL Test” from the Tools menu, then click Run Tests. You should see a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format, then re-run the same SQL Test as before. Great – we now know that our test is really doing something! You’ll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.) The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

Checking that the Tests are Running as Integration Tests

After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database shows that the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn’t want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we’re continuously checking any changes we make (in this case, to the reference data emails), to ensure we’re not breaking a test that we’ve set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows – sadly – a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

<!-- tSQLt tests -->
<!-- Optional -->
<!-- To run tSQLt tests in source control for the database, enter true. -->
<enableTsqlt>true</enableTsqlt>

Now, if we re-run the build in Bamboo (NB: I’ve moved to a new server here, hence different address and build number) – superb, a broken build!! The error message isn’t great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:

21-Jun-2013 11:35:19 Build FAILED.
21-Jun-2013 11:35:19
21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
21-Jun-2013 11:35:19 (sqlCI target) ->
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build – it worked!

Summary

This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?
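A quick postscript on running the tests: you don’t actually need the SQL Test window to kick tests off – the tSQLt framework ships with stored procedures that can be called from any query window, which is handy for ad-hoc checks and scripting. A minimal sketch, assuming the MyChecks test class created above:

-- Run every tSQLt test installed in the database
EXEC tSQLt.RunAll;

-- Or run just the tests in the MyChecks test class
EXEC tSQLt.Run 'MyChecks';

In effect, this is what the build step does for us on each commit once the enableTsqlt switch is turned on.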

    Read the article

  • The Product Owner

    - by Robert May
In a previous post, I outlined the rules of Scrum. This post details one of those rules. Picking a most important part of Scrum is difficult. All of the rules are required, but if there were one rule that is “more” required than every other rule, it’s having a good Product Owner. Simply put, the Product Owner can make or break the project.

Duties of the Product Owner

A Product Owner has many duties and responsibilities. I’ll talk about each of these duties in detail below. A Product Owner:

Discovers and records stories for the backlog.
Prioritizes stories in the Product Backlog, Release Backlog and Iteration Backlog.
Determines Release dates and Iteration dates.
Develops story details and helps the team understand those details.
Helps QA to develop acceptance tests.
Interacts with the Customer to make sure that the product is meeting the customer’s needs.

Discovers and Records Stories for the Backlog

When I do Scrum, I always use User Stories as the means for capturing functionality that’s required in the system. Some people will use Use Cases, but the same rule applies. The Product Owner has the ultimate responsibility for figuring out what functionality will be in the system. Many different mechanisms for capturing this input can be used. User interviews are great, but all sources should be considered, including talking with Customer Support types. Often, they hear what users are struggling with the most and are a great source for stories that can make the application easier to use.

Care should be taken when soliciting user stories from technical types such as programmers and the people that manage them. They will almost always give stories that are very technical in nature and may not have a direct benefit for the end user. Stories are about adding value to the company. If the stories don’t have direct benefit to the end user, the Product Owner should question whether or not the story should be implemented. In general, technical stories should be included as tasks in User Stories. Technical stories are often needed, but the ultimate value to the user is in user based functionality, so technical stories should be considered nothing more than overhead in providing that user functionality.

Until the iteration prior to development, stories should be nothing more than short, one line placeholders. An exercise called Story Planning can be used to brainstorm and come up with stories. I’ll save the description of this activity for another blog post. For more information on User Stories, please read the book User Stories Applied by Mike Cohn.

Prioritizes Stories in the Product Backlog, Release Backlog and Iteration Backlog

Prioritization of stories is one of the most difficult tasks that a Product Owner must do. A key concept of Scrum done right is the need to have the team working from a single set of prioritized stories. If the team does not have a single set of prioritized stories, Scrum will likely fail at your organization. The Product Owner is the ONLY person who has the responsibility to prioritize that list. The Product Owner must be very diplomatic and sincerely listen to the people around him so that he can get the priorities correct. Just listening will still not yield the proper priorities. Care must also be taken to ensure that Return on Investment is also considered. Ultimately, determining which stories give the most value to the company for the least cost is the most important factor in determining priorities.
Product Owners should be willing to look at cold, hard numbers to determine the order for stories. Even when many people want a feature, if that feature is costly to develop, it may not have as high of a return on investment as features that are cheaper, but not as popular.

The act of prioritization often causes conflict in an environment. Customer Service thinks that feature X is the most important, because it will stop people from calling. Operations thinks that feature Y is the most important, because it will stop servers from crashing. Developers think that feature Z is most important because it will make writing software much easier for them. All of these are useful goals, but the team can have only one list of items, and each item must have a priority that is different from all other stories. The Product Owner will determine which feature gives the best return on investment and the other features will have to wait their turn, which means that someone will not have their top priority feature implemented first.

A weak Product Owner will refuse to do prioritization. I’ve heard from multiple Product Owners the following phrase, “Well, it’s all got to be done, so what does it matter what order we do it in?” If your product owner is using this phrase, you need a new Product Owner. Order is VERY important. In Scrum, every release is potentially shippable. If the wrong priority items are developed, then the value added in each release isn’t what it should be. Additionally, the Product Owner with this mindset doesn’t understand Agile. A product is NEVER finished, until the company has decided that it is no longer a going concern and they are no longer going to sell the product. Therefore, prioritization isn’t an event, it’s something that continues every day. The logical extension of the phrase “It’s all got to be done” is that you will never ship your product, since a product is never “done.”

Once stories have been prioritized, assigning them to the Release Backlog and the Iteration Backlog becomes relatively simple. The top priority items are copied into the respective backlogs in order and the task is complete. The team does have the right to shuffle things around a little in the iteration backlog. For example, they may determine that working on story C with story A is appropriate because they’re related, even though story B is technically a higher priority than story C. Or they may decide that story B is too big to complete in the time available after Story A has tasks created, so they’ll work on Story C since it’s smaller. They can’t, however, go deep into the backlog to pick stories to implement. The team and the Product Owner should work together to determine what’s best for the company. Prioritization is time consuming, but it’s one of the most important things a Product Owner does.

Determines Release Dates and Iteration Dates

Product owners are responsible for determining release dates for a product. A common misconception that Product Owners have is that every “release” needs to correspond with an actual release to customers. This is not the case. In general, releases should be no more than 3 months long. You may decide to release the product to the customers, and many companies do release the product to customers, but it may also be an internal release. If a release date is too far away, developers will fall into the trap of not feeling a sense of urgency. The date is far enough away that they don’t need to give the release their full attention.
Additionally, important tasks, such as performance tuning, regression testing, user documentation, and release preparation, will not happen regularly, making them much more difficult and time consuming to do. The more frequently you do these tasks, the easier they are to accomplish.

The Product Owner will be a key participant in determining whether or not a release should be sent out to the customers. The determination should be made on whether or not the features contained in the release are valuable enough and complete enough that the customers will see real value in the release. Often, some features will take more than three months to get them to a state where they qualify for a release or need additional supporting features to be released. The product owner has the right to make this determination.

In addition to release dates, the Product Owner also will help determine iteration dates. In general, an iteration length should be chosen and the team should follow that iteration length for an extended period of time. If the iteration length is changed every iteration, you’re not doing Scrum. Iteration lengths help the team and company get into a rhythm of developing quality software. Iterations should be somewhere between 2 and 4 weeks in length. Any shorter, and significant software will likely not be developed. Any longer, and the team won’t feel urgency and planning will become very difficult.

Iterations may not be extended during the iteration. Companies where Scrum isn’t really followed will often use this as a strategy to complete all stories. They don’t want to face the harsh reality of what their true performance is, and looking good is more important than seeking visibility and improving the process and team. Companies like this typically don’t allow failure. This is unhealthy. Failure is part of life and unless we learn from it, we can’t improve. I would much rather see a team push out stories to the next iteration and then have healthy discussions about why they failed rather than extend the iteration and not deal with the core problems.

If iteration length varies, retrospectives become more difficult. For example, evaluating the performance of the team’s estimation efforts becomes much more difficult if the iteration length varies. Also, the team must have a velocity measurement. If the iteration length varies, measuring velocity becomes impossible and upper management no longer will have the ability to evaluate the team’s performance. People external to the team will no longer have the ability to determine when key features are likely to be developed. Variable iterations cause the entire company to fail and likely cause Scrum to fail at an organization.

Develops Story Details and Helps the Team Understand Those Details

A key concept in Scrum is that the stories are nothing more than a placeholder for a conversation. Stories should be nothing more than short, one line statements about the functionality. The team will then converse with the Product Owner about the details of that story. The product owner needs to have a very good idea about what the details of the story are and needs to be able to help the team understand those details.

Too often, we see this requirement translated into the need for comprehensive documentation about the story, including old fashioned requirements documentation.
The team should only develop the documentation that is required and should not develop documentation that is only created because there is a process to do so.

In general, what we see work best is that, in the iteration before a team starts development work on a story, the Product Owner, with other appropriate business analysts, will develop the details of that story. They’ll figure out what business rules are required, potentially make paper prototypes or other light weight mock-ups, and they seek to understand the story and what is implied. Note that the time allowed for this task is deliberately short. The Product Owner only has a single iteration to develop all of the stories for the next iteration.

If more than one iteration is used, I’ve found that teams will end up with Big Design Up Front and traditional requirements documents. This is a waste of time, since the team will need to then have discussions with the Product Owner to figure out what the requirements document says. Instead of this, skip making the pretty pictures and detailing the nuances of the requirements and build only what is minimally needed by the team to do development. If something comes up during development, you can address it at that time and figure out what you want to do. The goal is to keep things as light weight as possible so that everyone can move as quickly as possible.

Helps QA to Develop Acceptance Tests

In Scrum, no story can be counted until it is accepted by QA. Because of this, acceptance tests are very important to the team. In general, acceptance tests need to be developed prior to the iteration or at the very beginning of the iteration so that the team can make sure that the tasks that they develop will fulfill the acceptance criteria.

The Product Owner will help the team, including QA, understand what will make the story acceptable. Note that the Product Owner needs to be careful about specifying that the feature will work “perfectly” at the end of the iteration. In general, features are developed a little bit at a time, so only the bit that is being developed should be considered as necessary for acceptance.

A weak Product Owner will make statements like “Do it right the first time.” Not only are these statements damaging to the team (like they would try to do it WRONG the first time . . .), they’re also ignoring the iterative nature of Scrum. Additionally, a weak product owner will seek to add scope in the acceptance testing. For example, they will refuse to determine acceptance at the beginning of the iteration, and then, after the team has planned and committed to the iteration, they will expand scope by defining acceptance. This often causes the team to miss the iteration because scope that wasn’t planned on is included. There are ways that the team can mitigate this problem. For example, include extra “Product Owner” time to deal with the uncertainty that you know will be introduced by the Product Owner. This will slow the perceived velocity of the team and is not ideal, since they’ll be doing more work than they get credit for.

Interacts with the Customer to Make Sure that the Product is Meeting the Customer’s Needs

Once development is complete, what the team has worked on should be put in front of real live people to see if it meets the needs of the customer. One of the great things about Agile is that if something doesn’t work, we can revisit it in a future iteration!
This frees up the team to make the best decision now and know that if that decision proves to be incorrect, the team can revisit it and change that decision. Features are about adding value to the customer, so if the customer doesn’t find them useful, then having the team make tweaks is valuable. In general, most software will be 80 to 90 percent “right” after the initial round and only minor tweaks are required. If proper coding standards are followed, these tweaks are usually minor and easy to accomplish. Product Owners that are doing a good job will encourage real users to see and use the software, since they know that they are trying to add value to the customer. Poor product owners will think that they know the answers already, that their customers are silly and do stupid things and that they don’t need customer input. If you have a product owner that is afraid to show the team’s work to real customers, you probably need a different product owner.

Up Next, “Who Makes a Good Product Owner.” Followed by, “Messing with the Team.”

Technorati Tags: Scrum, Product Owner

    Read the article

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
Welcome to Oracle BI Development’s BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments. In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE. In this segment, I will lay the groundwork for the modeling concepts. First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing.

Oracle BI EE Query Cycle

The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources. There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources.

Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves. Most of them create queries without knowing it, by picking a dashboard page and some filters. Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations.

The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business focused measures and dimensional attributes that business people can use in their analyses and dashboards. After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts.

The CEIM is a model that controls the processing of the BI Server. It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis. It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules. The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.

Presentation services and other query clients create their queries in BI EE’s SQL-92 language, called Logical SQL or LSQL. The API simply uses ODBC or JDBC to pass the query to the BI Server. Presentation services writes the LSQL query in terms of the simplified objects presented to the users. The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources. For example, the LSQL below was rewritten into the physical SQL shown after it, for an Oracle 11g database.
Logical SQL:

SELECT "D0 Time"."T02 Per Name Month" saw_0,
       "D4 Product"."P01 Product" saw_1,
       "F2 Units"."2-01 Billed Qty (Sum All)" saw_2
FROM "Sample Sales"
ORDER BY saw_0, saw_1

Physical SQL:

WITH SAWITH0 AS (
    select T986.Per_Name_Month as c1, T879.Prod_Dsc as c2,
           sum(T835.Units) as c3, T879.Prod_Key as c4
    from Product T879 /* A05 Product */,
         Time_Mth T986 /* A08 Time Mth */,
         FactsRev T835 /* A11 Revenue (Billed Time Join) */
    where ( T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid)
    group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
)
select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
from SAWITH0
order by c1, c2

Probably everybody reading this blog can write SQL or MDX. However, the trick in designing the CEIM is that you are modeling a query-generation factory. Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests. This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

The Structure of the Common Enterprise Information Model (CEIM)

The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result. The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below. Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request. When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries. That’s the left to right flow in the diagram below. When the results come back from the data source queries, the right to left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

Business Model

Think of the business model as the heart of the CEIM you are designing. This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole. It also provides the baseline business-friendly names and user-readable dictionary. For these reasons, it is often called the “logical” model – it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models.

Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well. This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
Physical Model

The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries. Every schema, table, column, join, cube, hierarchy, and so on that will appear in any physical query manufactured at run time must be modeled here at design time.

Each physical data source will have its own physical model, or "database" object, in the CEIM. The shape of each physical model matches the shape of its physical source. In other words, if the source is normalized relational, the physical model will mimic that normalized shape. If it is a hypercube, the physical model will have a hypercube shape. If it is a flat file, it will have a denormalized tabular shape.

To aid in query optimization, the physical layer also tracks the specifics of the database brand and release. This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer. For most data sources, native APIs are used to further optimize performance and functionality.

The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics. This encapsulation is another enabler of packaged BI applications and federation. It is also key to hiding the complex shapes and relationships in the physical sources from the end users. Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database. The only difference from the user's point of view is that the second query added a more detailed dimension level column; everything else was the same.

Mappings

Within the Business Model and Mapping layer, the mappings provide the binding from each logical column and join in the dimensional business model to each of the objects that can provide its data in the physical layer. When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used. These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources. These mappings are usually the most sophisticated part of the CEIM. A sketch of how aggregate navigation plays out in the generated SQL follows.
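As a hypothetical illustration of aggregate navigation, consider how the same logical metric might be satisfied by two different physical queries depending on the grain of the request. The aggregate fact table (FactsRev_Mth_Agg) and the day-level names below are invented for this sketch; the detail-level names loosely follow the physical SQL example earlier in this post.

-- If the mappings declare a month-grain aggregate fact, a month-level
-- request can be served entirely from it (FactsRev_Mth_Agg is hypothetical):
select T986.Per_Name_Month as c1,
       sum(T100.Units) as c2
from Time_Mth T986,
     FactsRev_Mth_Agg T100
where T100.Bill_Mth = T986.Row_Wid
group by T986.Per_Name_Month

-- The same logical metric requested at day grain falls back to the detail
-- fact table (the day-level table and column names are also hypothetical):
select T987.Per_Name_Day as c1,
       sum(T835.Units) as c2
from Time_Day T987,
     FactsRev T835
where T835.Bill_Day = T987.Row_Wid
group by T987.Per_Name_Day

The point is that the analysis author never chooses a table; the mappings let the BI Server pick the cheapest source that can satisfy the requested grain.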
Presentation

You might think of the presentation layer as a set of very simple, relational-like views into the business model. Over ODBC/JDBC, they present a relational catalog consisting of databases, tables, and columns. For business users, presentation services interprets these as subject areas, folders, and columns, respectively. (Note that in 10g, subject areas were called presentation catalogs in the CEIM. In this blog, I will stick to 11g terminology.) Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio).

The purpose of the presentation layer is to specialize the business model for different categories of users. Based on their roles, users are restricted to specific subject areas, tables, and columns for security. The breakdown of the model into multiple subject areas organizes the content for users, and subject areas superfluous to a particular business role can be hidden from that set of users. Customized names and descriptions can be used to override the business model names for a specific audience. Variables in the object names can be used for localization.

For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables. The real semantics of tables and how they function are in the business model, and any grouping of columns can be included in any table in the presentation layer. In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model.

Other Model Objects

There are some objects that apply to multiple layers. These include security-related objects, such as application roles, users, data filters, and query limits (governors). There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or per-session basis (a small sketch of an initialization block query follows). Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.
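As one small example of these objects, a session initialization block is essentially a SQL query that the BI Server runs against a physical source at login to populate a variable. The table and column names below are hypothetical; the ':USER' token is the substitution for the signed-in user name in initialization block SQL.

-- Hypothetical session initialization block query: populates a session
-- variable (say, USER_REGION) for use in data filters. The BI Server
-- substitutes the signed-in user name for ':USER' before running it.
select region_name
from security_user_region   /* hypothetical security lookup table */
where user_login = ':USER'

A data filter on a logical table can then reference the loaded value, for example with an expression along the lines of VALUEOF(NQ_SESSION."USER_REGION"), so that each user sees only the rows for their own region.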
The Query Factory

At this point, you should have a grasp on the query factory concept. When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than hand-crafting individual SQL or MDX statements, it builds on the modeling and query concepts you already understand.

The following posts in this series will walk through the CEIM modeling concepts and best practices in detail. We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model. Along the way, we will also present the dimensional calculation template and learn how to configure the many additivity patterns.
