Search Results

Search found 49860 results on 1995 pages for 'reference type'.


  • Using Lightbox with _Screen

    Although I have to admit that I discovered Bernard Bout's ideas and concepts about implementing a lightbox in Visual FoxPro quite a while ago, there was no "spare" time in active projects that allowed me to have a closer look into his solution(s). Luckily, these days I received a demand to focus a little bit more on this. This article describes the steps needed to integrate and make use of Bernard's lightbox class in combination with _Screen in Visual FoxPro. The requirement in this project was to be able to visually lock the whole application (_Screen area) and guide the user to a piece of information that should not be easily ignored. Depending on its importance, any current user activity should be interrupted and focus put onto the notification.
    Getting the "meat", eh, source code
    Please check out Bernard's blog on Foxite directly in order to get the latest and greatest version. At the time of writing this article I use version 6.0 as described in this blog entry: The Fastest Lightbox Ever. The Lightbox class is sub-classed from the imgCanvas class from the GdiPlusX project on VFPx and therefore you need to have the source code of GdiPlusX as well, and integrate it into your development environment. The version I use is available here: Release GDIPlusX 1.20. As soon as you open the bbGdiLightbox class for the first time, VFP might ask you to update the reference to the gdiplusx.vcx. As we have the sources, that is no problem, and you have access to Bernard's code. The class itself is pretty easy to understand: some properties that you do not need to change and three methods: Setup(), ShowLightbox() and BeforeDraw().
    The challenge - _Screen or not?
    Reading Bernard's article about the fastest lightbox ever, he states the following: "The class will only work on a form. It will not support any other containers" Really? And what about _Screen? Isn't that a form class, too? Yes, of course it is, but nonetheless trying to use _Screen directly will fail. Well, let's have a look at the code to see why:

        WITH This
            .Left = 0
            .Top = 0
            .Height = ThisForm.Height
            .Width = ThisForm.Width
            .ZOrder(0)
            .Visible = .F.
        ENDWITH

    During the setup of the lightbox as well as while capturing the image as replacement for your forms and controls, the object reference Thisform is used, which is a little bit restrictive in my opinion, but let's continue. The second issue lies in the method ShowLightbox() and is introduced by the call of .Bitmap.FromScreen():

        Lparameters tlVisiblilty
        * tlVisiblilty - show or hide (T/F)
        * grab a screen dump with controls
        IF tlVisiblilty
            Local loCaptureBmp As xfcBitmap
            Local lnTitleHeight, lnLeftBorder, lnTopBorder, lcImage, loImage
            lnTitleHeight = IIF(ThisForm.TitleBar = 1,Sysmetric(9),0)
            lnLeftBorder = IIF(ThisForm.BorderStyle < 2,0,Sysmetric(3))
            lnTopBorder = IIF(ThisForm.BorderStyle < 2,0,Sysmetric(4))
            With _Screen.System.Drawing
                loCaptureBmp = .Bitmap.FromScreen(ThisForm.HWnd,;
                    lnLeftBorder,;
                    lnTopBorder+lnTitleHeight,;
                    ThisForm.Width ,;
                    ThisForm.Height)
            ENDWITH
            * save it to a property
            This.capturebmp = loCaptureBmp
            ThisForm.SetAll("Visible",.F.)
            This.DraW()
            This.Visible = .T.
        ELSE
            ThisForm.SetAll("Visible",.T.)
            This.Visible = .F.
        ENDIF

    My first trials in using the class ended in an exception - GdiPlusError:OutOfMemory - thrown by the Bitmap object. Frankly speaking, this happened mainly because of my lack of knowledge about GdiPlusX. After reading some documentation, especially about the FromScreen() method, I experimented a little bit.
    Capturing the visible area of _Screen actually was not the real problem, but rather the dimensions I specified for the bitmap.
    The modifications - step by step
    First of all, get rid of the restrictive object references to Thisform and change them into either This.Parent or, more generically, into This.oForm (even better: This.oControl). The Lightbox.Setup() method now sets the necessary object reference like so:

        *====================================================================
        * Initial setup
        * Default value: This.oControl = "This.Parent"
        * Alternative: This.oControl = "_Screen"
        *====================================================================
        With This
            .oControl = Evaluate(.oControl)
            If Vartype(.oControl) == T_OBJECT
                .Anchor = 0
                .Left = 0
                .Top = 0
                .Width = .oControl.Width
                .Height = .oControl.Height
                .Anchor = 15
                .ZOrder(0)
                .Visible = .F.
            EndIf
        Endwith

    Also, based on other developers' comments in Bernard's articles on his lightbox concept and its evolution, I found the source code to handle the differences between a form and _Screen; it goes into Lightbox.ShowLightbox() like this:

        *====================================================================
        * tlVisibility - show or hide (T/F)
        * grab a screen dump with controls
        *====================================================================
        Lparameters tlVisibility
        Local loControl
        m.loControl = This.oControl
        If m.tlVisibility
            Local loCaptureBmp As xfcBitmap
            Local lnTitleHeight, lnLeftBorder, lnTopBorder, lcImage, loImage
            lnTitleHeight = Iif(m.loControl.TitleBar = 1,Sysmetric(9),0)
            lnLeftBorder = Iif(m.loControl.BorderStyle < 2,0,Sysmetric(3))
            lnTopBorder = Iif(m.loControl.BorderStyle < 2,0,Sysmetric(4))
            With _Screen.System.Drawing
                If Upper(m.loControl.Name) == Upper("Screen")
                    loCaptureBmp = .Bitmap.FromScreen(m.loControl.HWnd)
                Else
                    loCaptureBmp = .Bitmap.FromScreen(m.loControl.HWnd,;
                        lnLeftBorder,;
                        lnTopBorder+lnTitleHeight,;
                        m.loControl.Width ,;
                        m.loControl.Height)
                EndIf
            Endwith
            * save it to a property
            This.CaptureBmp = loCaptureBmp
            m.loControl.SetAll("Visible",.F.)
            This.Draw()
            This.Visible = .T.
        Else
            This.CaptureBmp = .Null.
            m.loControl.SetAll("Visible",.T.)
            This.Visible = .F.
        Endif

    Are we done? Almost... Although Bernard says it clearly in his article: "Just drop the class on a form and call it as shown.", it did not become clear to me at first how this applies to _Screen, but, yeah, he is right. Dropping the class on a form provides a permanent link between those two classes; it creates a valid This.Parent object reference. Bearing in mind that the lightbox class cannot be "dropped" onto _Screen, we have to create the same type of binding at runtime, like so:

        *====================================================================
        * Create global lightbox component
        *====================================================================
        Local llOk, loException As Exception
        m.llOk = .F.
        m.loException = .Null.
        If Not Vartype(_Screen.Lightbox) == "O"
            Try
                _Screen.AddObject("Lightbox", "bbGdiLightbox")
            Catch To m.loException
                Assert .F. Message m.loException.Message
            EndTry
        EndIf
        m.llOk = (Vartype(_Screen.Lightbox) == "O")
        Return m.llOk

    Through runtime instantiation we create a valid binding to This.Parent in the lightbox object, and the code works as expected with _Screen.
    Ease your life: Use properties instead of constants
    Having a closer look at the BeforeDraw() method might whet your appetite to simplify the code a little bit. Looking at the sample screenshots in Bernard's article you see several forms in different colors.
    This got me to modify the code like so:

        *====================================================================
        * Apply the actual lightbox effect on the captured bitmap.
        *====================================================================
        If Vartype(This.CaptureBmp) == T_OBJECT
            Local loGfx As xfcGraphics
            loGfx = This.oGfx
            With _Screen.System.Drawing
                loGfx.DrawImage(This.CaptureBmp,This.Rectangle,This.Rectangle,.GraphicsUnit.Pixel)
                * change the colours as needed here
                * possible colours are (220,128,0,0),(220,0,0,128) etc.
                loBrush = .SolidBrush.New(.Color.FromArgb( ;
                    This.Opacity, .Color.FromRGB(This.BorderColor)))
                loGfx.FillRectangle(loBrush,This.Rectangle)
            Endwith
        Endif

    Create an additional property, Opacity, to specify the degree of translucency you would like to have, without the need to change the code in each instance of the class. This way you only need to change the values of Opacity and BorderColor to tweak the appearance of your lightbox. This could be quite helpful to signal different levels of importance (i.e. green, yellow, orange, red, etc.) of notifications to the users of the application.
    Final thoughts
    Using the lightbox concept in combination with _Screen instead of forms is possible. Jim Wiggins already comments in Bernard's article that one could loop through the _Screen.Forms collection in order to cascade the lightbox visibility to all active forms. Good idea. But honestly, I believe that instead of looping through all forms one could use _Screen.SetAll("ShowLightbox", .T./.F., "Form") with a Form.ShowLightbox_Access method to gain more speed. The modifications described above might provide even more features to your applications while consuming fewer resources. Additionally, the restriction of capturing only forms does not exist anymore. Using _Screen you are able to capture and cover anything. The captured area of _Screen does not include any toolbars, docked windows, or menus. Therefore, it is advisable to take this concept to a higher level and combine it with additional classes that handle the state of toolbars, docked windows and menus, which I did for the customer's project.

    Read the article

  • Func Delegate in C#

    - by Jalpesh P. Vadgama
    We already know about delegates in C#, and I have previously posted about the basics of delegates in C#. Following are the posts about delegate basics I have written: Delegates in C#, Multicast Delegates in C#. In this post we are going to learn about Func delegates in C#. As per MSDN, the following is the definition: “Encapsulates a method that has one parameter and returns a value of the type specified by the TResult parameter.” Func can handle multiple arguments. The Func delegate is a parameterized type: it takes any valid C# type as a type parameter, you can have multiple parameters, and the return type is specified as the last type parameter. The following are some example signatures: Func<T, TResult>, Func<T1, T2, TResult>. Now let’s take a string concatenation example. I am going to create two Func delegates which will concatenate two strings and three strings. Following is the code for that.

        using System;
        using System.Collections.Generic;

        namespace FuncExample
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Func<string, string, string> concatTwo = (x, y) => string.Format("{0} {1}", x, y);
                    Func<string, string, string, string> concatThree = (x, y, z) => string.Format("{0} {1} {2}", x, y, z);

                    Console.WriteLine(concatTwo("Hello", "Jalpesh"));
                    Console.WriteLine(concatThree("Hello", "Jalpesh", "Vadgama"));
                    Console.ReadLine();
                }
            }
        }

    As you can see in the above example, I have created two delegates, ‘concatTwo’ and ‘concatThree’. The first concatenates two strings and the other concatenates three strings. If you look at the Func declarations, the last type parameter is the return type; as the output here is a string, I have written string as the last type parameter in both statements. Now it’s time to run the example and, as expected, following is the output. That’s it. Hope you like it. Stay tuned for more updates.
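    As an extra illustration (not from the original post), here is a minimal sketch of the single-parameter Func<T, TResult> form from the MSDN definition quoted above, next to a three-parameter variant; the names are illustrative.

        using System;

        class FuncArityExample
        {
            static void Main()
            {
                // Func<T, TResult>: one input parameter, the last type argument is the return type.
                Func<int, int> square = x => x * x;

                // Func<T1, T2, T3, TResult>: three inputs, still returning the last type argument.
                Func<int, int, int, int> sumOfThree = (a, b, c) => a + b + c;

                Console.WriteLine(square(5));           // prints 25
                Console.WriteLine(sumOfThree(1, 2, 3)); // prints 6
            }
        }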

    Read the article

  • Gnome Do not Launching

    - by PyRulez
    When I try running gnome do, I get this. chris@Chris-Ubuntu-Laptop:~$ gnome-do pgrep: invalid user name: -u and it is not writable Trying sudo: chris@Chris-Ubuntu-Laptop:~$ sudo gnome-do [NetworkService] Could not initialize Network Manager dbus: Unable to open the session message bus. [Error 17:54:30.122] [SystemService] Could not initialize dbus: Unable to open the session message bus. (Do:2401): Wnck-CRITICAL **: wnck_set_client_type got called multiple times. (Do:2401): libdo-WARNING **: Binding '<Super>space' failed! [Error 17:54:30.649] [AbstractKeyBindingService] Key "" is already mapped. Tomboy.NotesItemSource "Tomboy Notes" encountered an error in UpdateItems: System.TypeInitializationException: An exception was thrown by the type initializer for Tomboy.TomboyDBus ---> System.Exception: Unable to open the session message bus. ---> System.ArgumentNullException: Argument cannot be null. Parameter name: address at NDesk.DBus.Bus.Open (System.String address) [0x00000] in <filename unknown>:0 at NDesk.DBus.Bus.get_Session () [0x00000] in <filename unknown>:0 --- End of inner exception stack trace --- at NDesk.DBus.Bus.get_Session () [0x00000] in <filename unknown>:0 at Tomboy.TomboyDBus..cctor () [0x00000] in <filename unknown>:0 --- End of inner exception stack trace --- at Tomboy.NotesItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . Firefox.PlacesItemSource "Firefox Places" encountered an error in UpdateItems: System.InvalidCastException: Cannot cast from source type to destination type. at Mono.Data.Sqlite.SqliteDataReader.VerifyType (Int32 i, DbType typ) [0x00000] in <filename unknown>:0 at Mono.Data.Sqlite.SqliteDataReader.GetString (Int32 i) [0x00000] in <filename unknown>:0 at Firefox.PlacesItemSource+<LoadPlaceItems>c__Iterator3.MoveNext () [0x00000] in <filename unknown>:0 at System.Collections.Generic.List`1[Firefox.PlaceItem].AddEnumerable (IEnumerable`1 enumerable) [0x00000] in <filename unknown>:0 at System.Collections.Generic.List`1[Firefox.PlaceItem]..ctor (IEnumerable`1 collection) [0x00000] in <filename unknown>:0 at System.Linq.Enumerable.ToArray[PlaceItem] (IEnumerable`1 source) [0x00000] in <filename unknown>:0 at Firefox.PlacesItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . Do.Universe.Linux.GNOMESpecialLocationsItemSource "GNOME Special Locations" encountered an error in UpdateItems: System.IO.FileNotFoundException: Could not find file "/root/.gtk-bookmarks". 
File name: '/root/.gtk-bookmarks' at System.IO.FileStream..ctor (System.String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, Boolean anonymous, FileOptions options) [0x00000] in <filename unknown>:0 at System.IO.FileStream..ctor (System.String path, FileMode mode, FileAccess access, FileShare share) [0x00000] in <filename unknown>:0 at (wrapper remoting-invoke-with-check) System.IO.FileStream:.ctor (string,System.IO.FileMode,System.IO.FileAccess,System.IO.FileShare) at System.IO.File.OpenRead (System.String path) [0x00000] in <filename unknown>:0 at System.IO.StreamReader..ctor (System.String path, System.Text.Encoding encoding, Boolean detectEncodingFromByteOrderMarks, Int32 bufferSize) [0x00000] in <filename unknown>:0 at System.IO.StreamReader..ctor (System.String path) [0x00000] in <filename unknown>:0 at (wrapper remoting-invoke-with-check) System.IO.StreamReader:.ctor (string) at Do.Universe.Linux.GNOMESpecialLocationsItemSource+<ReadBookmarkItems>c__Iterator3.MoveNext () [0x00000] in <filename unknown>:0 at Do.Universe.Linux.GNOMESpecialLocationsItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . ^[^\Full thread dump: "<unnamed thread>" tid=0x0xb7570700 this=0x0x56f18 thread handle 0x403 state : not waiting owns () at (wrapper managed-to-native) Mono.Unix.Native.Syscall.read (int,intptr,ulong) <0xffffffff> at Mono.Unix.Native.Syscall.read (int,void*,ulong) <0x00023> at Mono.Unix.UnixStream.Read (byte[],int,int) <0x0008b> at NDesk.DBus.Connection.ReadMessage () <0x0003c> at NDesk.DBus.Connection.Iterate () <0x0001b> at NDesk.DBus.BusG/<Init>c__AnonStorey0.<>m__0 (intptr,NDesk.GLib.IOCondition,intptr) <0x00033> at (wrapper native-to-managed) NDesk.DBus.BusG/<Init>c__AnonStorey0.<>m__0 (intptr,NDesk.GLib.IOCondition,intptr) <0xffffffff> at (wrapper managed-to-native) Gtk.Clipboard.gtk_clipboard_wait_is_text_available (intptr) <0xffffffff> at Gtk.Clipboard.WaitIsTextAvailable () <0x00017> at Do.Universe.SelectedTextItem.UpdateSelection (object,System.EventArgs) <0x00027> at Do.Platform.AbstractApplicationService.OnSummoned () <0x00025> at Do.Platform.ApplicationService.<ApplicationService>m__31 (object,System.EventArgs) <0x00013> at Do.Core.Controller.OnSummoned () <0x00025> at Do.Core.Controller.Summon () <0x00027> at Do.Do.Main (string[]) <0x001eb> at (wrapper runtime-invoke) <Module>.runtime_invoke_void_object (object,intptr,intptr,intptr) <0xffffffff> "<unnamed thread>" tid=0x0xb2c81b40 this=0x0x194150 thread handle 0x412 state : interrupted state owns () at (wrapper managed-to-native) System.IO.InotifyWatcher.ReadFromFD (intptr,byte[],intptr) <0xffffffff> at System.IO.InotifyWatcher.Monitor () <0x0005f> at System.Threading.Thread.StartInternal () <0x00057> at (wrapper runtime-invoke) object.runtime_invoke_void__this__ (object,intptr,intptr,intptr) <0xffffffff> "Universe Update Dispatcher" tid=0x0xb29ffb40 this=0x0x569d8 thread handle 0x41b state : interrupted state owns () at (wrapper managed-to-native) System.Threading.WaitHandle.WaitOne_internal (System.Threading.WaitHandle,intptr,int,bool) <0xffffffff> at System.Threading.WaitHandle.WaitOne (System.TimeSpan,bool) <0x00133> at System.Threading.WaitHandle.WaitOne (System.TimeSpan) <0x00022> at Do.Core.UniverseManager.UniverseUpdateLoop () <0x0007a> at System.Threading.Thread.StartInternal () <0x00057> at (wrapper runtime-invoke) object.runtime_invoke_void__this__ (object,intptr,intptr,intptr) <0xffffffff> 
Tomboy.NotesItemSource "Tomboy Notes" encountered an error in UpdateItems: System.TypeInitializationException: An exception was thrown by the type initializer for Tomboy.TomboyDBus ---> System.Exception: Unable to open the session message bus. ---> System.ArgumentNullException: Argument cannot be null. Parameter name: address at NDesk.DBus.Bus.Open (System.String address) [0x00000] in <filename unknown>:0 at NDesk.DBus.Bus.get_Session () [0x00000] in <filename unknown>:0 --- End of inner exception stack trace --- at NDesk.DBus.Bus.get_Session () [0x00000] in <filename unknown>:0 at Tomboy.TomboyDBus..cctor () [0x00000] in <filename unknown>:0 --- End of inner exception stack trace --- at Tomboy.NotesItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . Firefox.PlacesItemSource "Firefox Places" encountered an error in UpdateItems: System.InvalidCastException: Cannot cast from source type to destination type. at Mono.Data.Sqlite.SqliteDataReader.VerifyType (Int32 i, DbType typ) [0x00000] in <filename unknown>:0 at Mono.Data.Sqlite.SqliteDataReader.GetString (Int32 i) [0x00000] in <filename unknown>:0 at Firefox.PlacesItemSource+<LoadPlaceItems>c__Iterator3.MoveNext () [0x00000] in <filename unknown>:0 at System.Collections.Generic.List`1[Firefox.PlaceItem].AddEnumerable (IEnumerable`1 enumerable) [0x00000] in <filename unknown>:0 at System.Collections.Generic.List`1[Firefox.PlaceItem]..ctor (IEnumerable`1 collection) [0x00000] in <filename unknown>:0 at System.Linq.Enumerable.ToArray[PlaceItem] (IEnumerable`1 source) [0x00000] in <filename unknown>:0 at Firefox.PlacesItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . Do.Universe.Linux.GNOMESpecialLocationsItemSource "GNOME Special Locations" encountered an error in UpdateItems: System.IO.FileNotFoundException: Could not find file "/root/.gtk-bookmarks". File name: '/root/.gtk-bookmarks' at System.IO.FileStream..ctor (System.String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, Boolean anonymous, FileOptions options) [0x00000] in <filename unknown>:0 at System.IO.FileStream..ctor (System.String path, FileMode mode, FileAccess access, FileShare share) [0x00000] in <filename unknown>:0 at (wrapper remoting-invoke-with-check) System.IO.FileStream:.ctor (string,System.IO.FileMode,System.IO.FileAccess,System.IO.FileShare) at System.IO.File.OpenRead (System.String path) [0x00000] in <filename unknown>:0 at System.IO.StreamReader..ctor (System.String path, System.Text.Encoding encoding, Boolean detectEncodingFromByteOrderMarks, Int32 bufferSize) [0x00000] in <filename unknown>:0 at System.IO.StreamReader..ctor (System.String path) [0x00000] in <filename unknown>:0 at (wrapper remoting-invoke-with-check) System.IO.StreamReader:.ctor (string) at Do.Universe.Linux.GNOMESpecialLocationsItemSource+<ReadBookmarkItems>c__Iterator3.MoveNext () [0x00000] in <filename unknown>:0 at Do.Universe.Linux.GNOMESpecialLocationsItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . It stops when I try my key combination, ctrl-alt-. It does not pop up though.

    Read the article

  • JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue

    - by John-Brown.Evans
    Welcome to another post in the series of blogs which demonstrates how to use JMS queues in a SOA context. The previous posts were: JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue Today we will create a BPEL process which will read (dequeue) the message from the JMS queue, which we enqueued in the last example. The JMS adapter will dequeue the full XML payload from the queue. 1. Recap and Prerequisites In the previous examples, we created a JMS Queue, a Connection Factory and a Connection Pool in the WebLogic Server Console. Then we designed and deployed a BPEL composite, which took a simple XML payload and enqueued it to the JMS queue. In this example, we will read that same message from the queue, using a JMS adapter and a BPEL process.
As many of the configuration steps required to read from that queue were done in the previous samples, this one will concentrate on the new steps. A summary of the required objects is listed below. To find out how to create them please see the previous samples. They also include instructions on how to verify the objects are set up correctly. WebLogic Server Objects Object Name Type JNDI Name TestConnectionFactory Connection Factory jms/TestConnectionFactory TestJMSQueue JMS Queue jms/TestJMSQueue eis/wls/TestQueue Connection Pool eis/wls/TestQueue Schema XSD File The following XSD file is used for the message format. It was created in the previous example and will be copied to the new process. stringPayload.xsd <?xml version="1.0" encoding="windows-1252" ?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"                 xmlns="http://www.example.org"                 targetNamespace="http://www.example.org"                 elementFormDefault="qualified">   <xsd:element name="exampleElement" type="xsd:string">   </xsd:element> </xsd:schema> JMS Message After executing the previous samples, the following XML message should be in the JMS queue located at jms/TestJMSQueue: <?xml version="1.0" encoding="UTF-8" ?><exampleElement xmlns="http://www.example.org">Test Message</exampleElement> JDeveloper Connection You will need a valid Application Server Connection in JDeveloper pointing to the SOA server which the process will be deployed to. 2. Create a BPEL Composite with a JMS Adapter Partner Link In the previous example, we created a composite in JDeveloper called JmsAdapterWriteSchema. In this one, we will create a new composite called JmsAdapterReadSchema. There are probably many ways of incorporating a JMS adapter into a SOA composite for incoming messages. One way is design the process in such a way that the adapter polls for new messages and when it dequeues one, initiates a SOA or BPEL instance. This is possibly the most common use case. Other use cases include mid-flow adapters, which are activated from within the BPEL process. In this example we will use a polling adapter, because it is the most simple to set up and demonstrate. But it has one disadvantage as a demonstrative model. When a polling adapter is active, it will dequeue all messages as soon as they reach the queue. This makes it difficult to monitor messages we are writing to the queue, because they will disappear from the queue as soon as they have been enqueued. To work around this, we will shut down the composite after deploying it and restart it as required. (Another solution for this would be to pause the consumption for the queue and resume consumption again if needed. This can be done in the WLS console JMS-Modules -> queue -> Control -> Consumption -> Pause/Resume.) We will model the composite as a one-way incoming process. Usually, a BPEL process will do something useful with the message after receiving it, such as passing it to a database or file adapter, a human workflow or external web service. But we only want to demonstrate how to dequeue a JMS message using BPEL and a JMS adapter, so we won’t complicate the design with further activities. However, we do want to be able to verify that we have read the message correctly, so the BPEL process will include a small piece of embedded java code, which will print the message to standard output, so we can view it in the SOA server’s log file. Alternatively, you can view the instance in the Enterprise Manager and verify the message. 
The following steps are all executed in JDeveloper. Create the project in the same JDeveloper application used for the previous examples or create a new one. Create a SOA Project Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterReadSchema. When prompted for the composite type, choose Empty Composite. Create a JMS Adapter Partner Link In the composite editor, drag a JMS adapter over from the Component Palette to the left-hand swim lane, under Exposed Services. This will start the JMS Adapter Configuration Wizard. Use the following entries: Service Name: JmsAdapterRead Oracle Enterprise Messaging Service (OEMS): Oracle WebLogic JMS AppServer Connection: Use an application server connection pointing to the WebLogic server on which the JMS queue and connection factory mentioned under Prerequisites above are located. Adapter Interface > Interface: Define from operation and schema (specified later) Operation Type: Consume Message Operation Name: Consume_message Consume Operation Parameters Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue , which is the queue created in a previous example. JNDI Name: The JNDI name to use for the JMS connection. As in the previous example, this is probably the most common source of error. This is the JNDI name of the JMS adapter’s connection pool created in the WebLogic Server and which points to the connection factory. JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won’t find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue . (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.) Messages/Message SchemaURL: We will use the XSD file created during the previous example, in the JmsAdapterWriteSchema project to define the format for the incoming message payload and, at the same time, demonstrate how to import an existing XSD file into a JDeveloper project. Press the magnifying glass icon to search for schema files. In the Type Chooser, press the Import Schema File button. Select the magnifying glass next to URL to search for schema files. Navigate to the location of the JmsAdapterWriteSchema project > xsd and select the stringPayload.xsd file. Check the “Copy to Project” checkbox, press OK and confirm the following Localize Files popup. Now that the XSD file has been copied to the local project, it can be selected from the project’s schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string . Press Next and Finish, which will complete the JMS Adapter configuration.Save the project. Create a BPEL Component Drag a BPEL Process from the Component Palette (Service Components) to the Components section of the composite designer. Name it JmsAdapterReadSchema and select Template: Define Service Later and press OK. Wire the JMS Adapter to the BPEL Component Now wire the JMS adapter to the BPEL process, by dragging the arrow from the adapter to the BPEL process. A Transaction Properties popup will be displayed. Set the delivery mode to async.persist. This completes the steps at the composite level. 3 . 
Complete the BPEL Process Design Invoke the BPEL Flow via the JMS Adapter Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterReadSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterRead partner link in the left-hand swim lane. Drag a Receive activity onto the BPEL flow diagram, then drag a wire (left-hand yellow arrow) from it to the JMS adapter. This will open the Receive activity editor. Auto-generate the variable by pressing the green “+” button and check the “Create Instance” checkbox. This will result in a BPEL instance being created when a new JMS message is received. At this point it would actually be OK to compile and deploy the composite and it would pick up any messages from the JMS queue. In fact, you can do that to test it, if you like. But it is very rudimentary and would not be doing anything useful with the message. Also, you could only verify the actual message payload by looking at the instance’s flow in the Enterprise Manager. There are various other possibilities; we could pass the message to another web service, write it to a file using a file adapter or to a database via a database adapter etc. But these will all introduce unnecessary complications to our sample. So, to keep it simple, we will add a small piece of Java code to the BPEL process which will write the payload to standard output. This will be written to the server’s log file, which will be easy to monitor. Add a Java Embedding Activity First get the full name of the process’s input variable, as this will be needed for the Java code. Go to the Structure pane and expand Variables > Process > Variables. Then expand the input variable, for example, "Receive1_Consume_Message_InputVariable > body > ns2:exampleElement”, and note variable’s name and path, if they are different from this one. Drag a Java Embedding activity from the Component Palette (Oracle Extensions) to the BPEL flow, after the Receive activity, then open it to edit. Delete the example code and replace it with the following, replacing the variable parts with those in your sample, if necessary.: System.out.println("JmsAdapterReadSchema process picked up a message"); oracle.xml.parser.v2.XMLElement inputPayload =    (oracle.xml.parser.v2.XMLElement)getVariableData(                           "Receive1_Consume_Message_InputVariable",                           "body",                           "/ns2:exampleElement");   String inputString = inputPayload.getFirstChild().getNodeValue(); System.out.println("Input String is " + inputPayload.getFirstChild().getNodeValue()); Tip. If you are not sure of the exact syntax of the input variable, create an Assign activity in the BPEL process and copy the variable to another, temporary one. Then check the syntax created by the BPEL designer. This completes the BPEL process design in JDeveloper. Save, compile and deploy the process to the SOA server. 3. Test the Composite Shut Down the JmsAdapterReadSchema Composite After deploying the JmsAdapterReadSchema composite to the SOA server it is automatically activated. If there are already any messages in the queue, the adapter will begin polling them. To ease the testing process, we will deactivate the process first Log in to the Enterprise Manager (Fusion Middleware Control) and navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterReadSchema [1.0] . 
Press the Shut Down button to disable the composite and confirm the following popup. Monitor Messages in the JMS Queue In a separate browser window, log in to the WebLogic Server Console and navigate to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. This is the location of the JMS queue we created in an earlier sample (see the prerequisites section of this sample). Check whether there are any messages already in the queue. If so, you can dequeue them using the QueueReceive Java program created in an earlier sample. This will ensure that the queue is empty and doesn’t contain any messages in the wrong format, which would cause the JmsAdapterReadSchema to fail. Send a Test Message In the Enterprise Manager, navigate to the JmsAdapterWriteSchema created earlier, press Test and send a test message, for example “Message from JmsAdapterWriteSchema”. Confirm that the message was written correctly to the queue by verifying it via the queue monitor in the WLS Console. Monitor the SOA Server’s Output A program deployed on the SOA server will write its standard output to the terminal window in which the server was started, unless this has been redirected to somewhere else, for example to a file. If it has not been redirected, go to the terminal session in which the server was started, otherwise open and monitor the file to which it was redirected. Re-Enable the JmsAdapterReadSchema Composite In the Enterprise Manager, navigate to the JmsAdapterReadSchema composite again and press Start Up to re-enable it. This should cause the JMS adapter to dequeue the test message and the following output should be written to the server’s standard output: JmsAdapterReadSchema process picked up a message. Input String is Message from JmsAdapterWriteSchema Note that you can also monitor the payload received by the process, by navigating to the the JmsAdapterReadSchema’s Instances tab in the Enterprise Manager. Then select the latest instance and view the flow of the BPEL component. The Receive activity will contain and display the dequeued message too. 4 . Troubleshooting This sample demonstrates how to dequeue an XML JMS message using a BPEL process and no additional functionality. For example, it doesn’t contain any error handling. Therefore, any errors in the payload will result in exceptions being written to the log file or standard output. If you get any errors related to the payload, such as Message handle error ... ORABPEL-09500 ... XPath expression failed to execute. An error occurs while processing the XPath expression; the expression is /ns2:exampleElement. ... etc. check that the variable used in the Java embedding part of the process was entered correctly. Possibly follow the tip mentioned in previous section. If this doesn’t help, you can delete the Java embedding part and simply verify the message via the flow diagram in the Enterprise Manager. Or use a different method, such as writing it to a file via a file adapter. This concludes this example. In the next post, we will begin with an AQ JMS example, which uses JMS to write to an Advanced Queue stored in the database. Best regards John-Brown Evans Oracle Technology Proactive Support Delivery

    Read the article

  • Technical differences between square and hexagon for a grid?

    - by Marlon Dias
    I'm developing a 2D city-building game and trying to decide on the type of grid. There will be vehicles, so the unit movement is important too. I know there are visual differences between using squares or hexagons; what I want to know is: What are the issues for programming each type of grid regarding implementation and performance? Is there a tradeoff or specific benefit to using one of them in a game context?
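    Not part of the question, but as a small sketch of where the implementation difference usually shows up first, neighbour lookup: a square grid uses a fixed set of x/y offsets, while a hexagon grid is commonly handled with axial coordinates, which also reduce to fixed offsets once a coordinate system is chosen. All type and member names below are illustrative.

        using System;

        struct Cell
        {
            public int Q, R;
            public Cell(int q, int r) { Q = q; R = r; }
        }

        static class GridNeighbours
        {
            // 4-connected square grid offsets (add the diagonals for 8-connected movement).
            static readonly int[,] Square = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };

            // Axial-coordinate offsets for a hex grid; six neighbours, all at equal distance.
            static readonly int[,] Hex = { { 1, 0 }, { 1, -1 }, { 0, -1 }, { -1, 0 }, { -1, 1 }, { 0, 1 } };

            public static Cell[] Neighbours(Cell c, int[,] offsets)
            {
                var result = new Cell[offsets.GetLength(0)];
                for (int i = 0; i < result.Length; i++)
                    result[i] = new Cell(c.Q + offsets[i, 0], c.R + offsets[i, 1]);
                return result;
            }

            public static Cell[] SquareNeighbours(Cell c) { return Neighbours(c, Square); }
            public static Cell[] HexNeighbours(Cell c) { return Neighbours(c, Hex); }
        }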

    Read the article

  • How to stream H264 Video from camera over FTP?

    - by Jay
    I bought a h264 security camera system last year and set it up to ftp video to my computer. I was able to get the video to play (even though it played a little fast) on Ubuntu 11.04 using mplayer. A few months ago, I did a fresh install of 12.04 and I cannot seem to get the video to play with mplayer, smplayer or VLC. I have the restricted formats video packages installed and when playing with any of the players, all I get is a gray video. When calling mplayer from the command line to play the video with no options, I get a lot of these errors: [h264 @ 0x7f278c61f280]concealing 1320 DC, 1320 AC, 1320 MV errors No pts value from demuxer to use for frame! pts after filters MISSING I'm not a video expert and have been coming up with a lot of dead ends when Googling for this. Could someone offer some advice about how to play these videos? Here is the output of mediainfo for a sample file. mediainfo -f sec-cam01-m-20120921-212454.h264 General Count : 278 Count of stream of this kind : 1 Kind of stream : General Kind of stream : General Stream identifier : 0 Count of video streams : 1 Video_Format_List : AVC Video_Format_WithHint_List : AVC Codecs Video : AVC Complete name : sec-cam01-m-20120921-212454.h264 File name : sec-cam01-m-20120921-212454 File extension : h264 Format : AVC Format : AVC Format/Info : Advanced Video Codec Format/Url : http://developers.videolan.org/x264.html Format/Extensions usually used : avc h264 Commercial name : AVC Internet media type : video/H264 Codec : AVC Codec : AVC Codec/Info : Advanced Video Codec Codec/Url : http://developers.videolan.org/x264.html Codec/Extensions usually used : avc h264 File size : 1097315 File size : 1.05 MiB File size : 1 MiB File size : 1.0 MiB File size : 1.05 MiB File size : 1.046 MiB File last modification date : UTC 2012-09-22 01:27:12 File last modification date (local) : 2012-09-21 21:27:12 Video Count : 205 Count of stream of this kind : 1 Kind of stream : Video Kind of stream : Video Stream identifier : 0 Format : AVC Format/Info : Advanced Video Codec Format/Url : http://developers.videolan.org/x264.html Commercial name : AVC Format profile : [email protected] Format settings : 1 Ref Frames Format settings, CABAC : No Format settings, CABAC : No Format settings, ReFrames : 1 Format settings, ReFrames : 1 frame Format settings, GOP : M=1, N=3 Internet media type : video/H264 Codec : AVC Codec : AVC Codec/Family : AVC Codec/Info : Advanced Video Codec Codec/Url : http://developers.videolan.org/x264.html Codec profile : [email protected] Codec settings : 1 Ref Frames Codec settings, CABAC : No Codec_Settings_RefFrames : 1 Width : 704 Width : 704 pixels Height : 480 Height : 480 pixels Pixel aspect ratio : 1.000 Display aspect ratio : 1.467 Display aspect ratio : 3:2 Standard : NTSC Resolution : 8 Resolution : 8 bits Colorimetry : 4:2:0 Color space : YUV Chroma subsampling : 4:2:0 Bit depth : 8 Bit depth : 8 bits Scan type : Progressive Scan type : Progressive Interlacement : PPF Interlacement : Progressive Edit: Here is a sample video using the same encoding: https://www.dropbox.com/s/l5acwzy8rtqn9xe/sec-cam08-m-20121118-105815.h264 (not the same video as mediainfo output)

    Read the article

  • SortedDictionary and SortedList

    - by Simon Cooper
    Apart from Dictionary<TKey, TValue>, there are two other dictionaries in the BCL - SortedDictionary<TKey, TValue> and SortedList<TKey, TValue>. On the face of it, these two classes do the same thing - provide an IDictionary<TKey, TValue> interface where the iterator returns the items sorted by the key. So what's the difference between them, and when should you use one rather than the other? (as in my previous post, I'll assume you have some basic algorithm & datastructure knowledge)
    SortedDictionary
    We'll first cover SortedDictionary. This is implemented as a special sort of binary tree called a red-black tree. Essentially, it's a binary tree that uses various constraints on how the nodes of the tree can be arranged to ensure the tree is always roughly balanced (for more gory algorithmic details, see the wikipedia link above). What I'm concerned about in this post is how the .NET SortedDictionary is actually implemented. In .NET 4, behind the scenes, the actual implementation of the tree is delegated to a SortedSet<KeyValuePair<TKey, TValue>>. One example tree might look like this: Each node in the above tree is stored as a separate SortedSet<T>.Node object (remember, in a SortedDictionary, T is instantiated to KeyValuePair<TKey, TValue>):

        class Node {
            public bool IsRed;
            public T Item;
            public SortedSet<T>.Node Left;
            public SortedSet<T>.Node Right;
        }

    The SortedSet only stores a reference to the root node; all the data in the tree is accessed by traversing the Left and Right node references until you reach the node you're looking for. Each individual node can be physically stored anywhere in memory; what's important is the relationship between the nodes. This is also why there is no constructor to SortedDictionary or SortedSet that takes an integer representing the capacity; there are no internal arrays that need to be created and resized. This may seem trivial, but it's an important distinction between SortedDictionary and SortedList that I'll cover later on. And that's pretty much it; it's a standard red-black tree. Plenty of webpages and datastructure books cover the algorithms behind the tree itself far better than I could. What's interesting is the comparisons between SortedDictionary and SortedList, which I'll cover at the end. As a side point, SortedDictionary has existed in the BCL ever since .NET 2. That means that, all through .NET 2, 3, and 3.5, there has been a bona-fide sorted set class in the BCL (called TreeSet). However, it was internal, so it couldn't be used outside System.dll. Only in .NET 4 was this class exposed as SortedSet.
    SortedList
    Whereas SortedDictionary didn't use any backing arrays, SortedList does. It is implemented just as the name suggests; two arrays, one containing the keys, and one the values (I've just used random letters for the values): The items in the keys array are always guaranteed to be stored in sorted order, and the value corresponding to each key is stored in the same index as the key in the values array. In this example, the value for key item 5 is 'z', and for key item 8 is 'm'. Whenever an item is inserted or removed from the SortedList, a binary search is run on the keys array to find the correct index, then all the items in the arrays are shifted to accommodate the new or removed item.
    For example, if the key 3 was removed, a binary search would be run to find the array index the item was at, then everything above that index would be moved down by one; and then if the key/value pair {7, 'f'} was added, a binary search would be run on the keys to find the index to insert the new item, and everything above that index would be moved up to accommodate the new item. If another item was then added, both arrays would be resized (to a length of 10) before the new item was added to the arrays. As you can see, any insertions or removals in the middle of the list require a proportion of the array contents to be moved; an O(n) operation. However, if the insertion or removal is at the end of the array (i.e. the largest key), then it's only O(log n); the cost of the binary search to determine that it does actually need to be added to the end (excluding the occasional O(n) cost of resizing the arrays to fit more items). As a side effect of using backing arrays, SortedList offers IList Keys and Values views that simply use the backing keys or values arrays, as well as various methods utilising the array index of stored items, which SortedDictionary does not (and cannot) offer.
    The Comparison
    So, when should you use one and not the other? Well, here are the important differences:
    Memory usage
    SortedDictionary and SortedList have got very different memory profiles. SortedDictionary...
    - has a memory overhead of one object instance, a bool, and two references per item. On 64-bit systems, this adds up to ~40 bytes, not including the stored item and the reference to it from the Node object.
    - stores the items in separate objects that can be spread all over the heap. This helps to keep memory fragmentation low, as the individual node objects can be allocated wherever there's a spare 60 bytes.
    In contrast, SortedList...
    - has no additional overhead per item (only the reference to it in the array entries); however, the backing arrays can be significantly larger than you need - every time the arrays are resized they double in size. That means that if you add 513 items to a SortedList, the backing arrays will each have a length of 1024. To counteract this, the TrimExcess method resizes the arrays back down to the actual size needed, or you can simply assign list.Capacity = list.Count.
    - stores its items in a continuous block in memory. If the list stores thousands of items, this can cause significant problems with Large Object Heap memory fragmentation as the array resizes, which SortedDictionary doesn't have.
    Performance
    Operations on a SortedDictionary always have O(log n) performance, regardless of where in the collection you're adding or removing items. In contrast, SortedList has O(n) performance when you're altering the middle of the collection. If you're adding or removing from the end (i.e. the largest item), then performance is O(log n), same as SortedDictionary (in practice, it will likely be slightly faster, due to the array items all being in the same area in memory, also called locality of reference). So, when should you use one and not the other? As always with these sorts of things, there are no hard-and-fast rules. But generally, if you:
    - need to access items using their index within the collection
    - are populating the dictionary all at once from sorted data
    - aren't adding or removing keys once it's populated
    then use a SortedList.
    But if you:
    - don't know how many items are going to be in the dictionary
    - are populating the dictionary from random, unsorted data
    - are adding & removing items randomly
    then use a SortedDictionary. The default (again, there are no definite rules on these sorts of things!) should be to use SortedDictionary, unless there's a good reason to use SortedList, due to the bad performance of SortedList when altering the middle of the collection.
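    As a small illustration of the capacity behaviour described above (a sketch added here, not from the original article), the following shows the backing-array growth of SortedList and how TrimExcess, or assigning Capacity, brings it back down; SortedDictionary has no Capacity property at all, because there are no backing arrays to size.

        using System;
        using System.Collections.Generic;

        class SortedListCapacityDemo
        {
            static void Main()
            {
                var list = new SortedList<int, string>();
                for (int i = 0; i < 513; i++)
                    list.Add(i, "value" + i);

                // The backing arrays double as they fill, so 513 items leave a capacity of 1024.
                Console.WriteLine("Count = " + list.Count + ", Capacity = " + list.Capacity);

                // Trim the backing arrays back down to the number of items actually stored.
                list.TrimExcess(); // or: list.Capacity = list.Count;
                Console.WriteLine("Count = " + list.Count + ", Capacity = " + list.Capacity);

                // A SortedDictionary can be built from the same data; there is no capacity to manage.
                var dict = new SortedDictionary<int, string>(list);
                Console.WriteLine("SortedDictionary count = " + dict.Count);
            }
        }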

    Read the article

  • Python class representation under the hood

    - by decentralised
    OK, here is a simple Python class:

        class AddSomething(object):
            __metaclass__ = MyMetaClass
            x = 10
            def __init__(self, a):
                self.a = a
            def add(self, a, b):
                return a + b

    We have specified a metaclass, and that means we could write something like this:

        class MyMetaClass(type):
            def __init__(cls, name, bases, cdict):
                # do something with the class

    Now, the cdict holds a representation of AddSomething:

        AddSomething = type('AddSomething', (object,), {'x' : 10, '__init__': __init__, 'add': add})

    So my question is simple, are all Python classes represented in this second format internally? If not, how are they represented? EDIT - Python 2.7

    Read the article

  • how can I disable ssh prompt from kvm remote

    - by kamil
    When I upgraded my KVM virtual machine manager to the latest version, I got a question prompt every time I try to connect remotely to my machines: The authenticity of host 'kvm.local (ip address)' can't be established. ECDSA key fingerprint is b5:fa:0a:d0:39:af:0a:60:fa:04:87:6c:31:1d:13:15. Are you sure you want to continue connecting (yes/no)? And when changing any setting on a VM, I was obliged to type yes and then type the root password in another dialog. I am using Ubuntu 12.04 64-bit.

    Read the article

  • How to implement an email unsubscribe system for a site with many kinds of emails?

    - by Mike Liu
    I'm working on a website that features many different types of emails. Users have accounts, and when logged in they have access to a settings page that they can use to customize what types of emails they receive. However, I'd like to also give users an easy way to unsubscribe directly in the emails they receive. I've looked into List-Unsubscribe headers as well as creating some type of one-click link that would unsubscribe a user from that type of email without requiring login or further action. The latter would probably require me to break convention and make changes to the database in response to a GET on the link. However, am I incorrect in thinking that either of these would require me to generate and permanently store a unique identifier in my database for every email I ever send, really complicating email delivery? Without that, I'm not sure how I would be able to uniquely identify a user and a type of email in order to change their email preferences, and this identifier would need to be stored forever, as a user could have an email sitting in their inbox for a long time before they decide to act on it. Alternatively, I was considering having a no-login page for managing email preferences. In contrast to the above, where I would need one of these identifiers for each email, this would only need one identifier per user, with no generation or other action required on sending an email. All of these raise security issues, and they could potentially be used by people to tamper with others' email preferences. This could be mitigated somewhat by ensuring that the identifier is really difficult to guess. For the once-per-user identifier approach, I was considering generating the identifier by passing a user's ID through some type of encryption algorithm; is this a sound approach? For the per-email identifiers, perhaps I could use a user's ID appended to the time. However, even this would not eliminate the problem entirely, as this would really just be security through obscurity, and anyone with the URL could tamper, and in the end the main defense would have to be that most people aren't so bored as to tamper with other people's email preferences. Are there any other alternatives I've missed, or issues or solutions with these that anyone can provide insight on? What are best practices in this area?
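    A minimal sketch of the per-user identifier idea discussed above, using an HMAC over the user ID and email type rather than encryption, so the link can be verified without being guessable. The class, key handling, and token format here are illustrative assumptions, not an existing API.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class UnsubscribeToken
        {
            // Server-side secret; in practice this would come from configuration, not source code.
            static readonly byte[] Secret = Encoding.UTF8.GetBytes("replace-with-a-long-random-secret");

            // The token ties a user ID to an email type, so one link can flip exactly one preference.
            public static string Create(long userId, string emailType)
            {
                string payload = userId + ":" + emailType;
                using (var hmac = new HMACSHA256(Secret))
                {
                    byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
                    return payload + ":" + Convert.ToBase64String(mac);
                }
            }

            // Recompute the MAC and compare; only links generated with the secret will verify.
            // (A production version should use a constant-time comparison.)
            public static bool TryParse(string token, out long userId, out string emailType)
            {
                userId = 0;
                emailType = null;
                string[] parts = token.Split(':');
                if (parts.Length != 3 || !long.TryParse(parts[0], out userId))
                    return false;
                emailType = parts[1];
                return Create(userId, emailType) == token;
            }
        }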

    Read the article

  • Using Ogre particle point billboards with shaders

    - by Jay
    I'm learning about using Ogre particles and had some questions about how the point type particles work. Q. I believe point type particles are implemented as a single position. Is one single vertex is passed to the vertex shader? Q. If one vertex is passed to the vertex shader then what gets sent to the fragment shader? Q. Can I pass the particle size to the shader? Perhaps with a custom parameter?

    Read the article

  • How do I prevent tampering with AJAX process page? [closed]

    - by whamsicore
    I am using Ajax for processing with jQuery. The Data_string is sent to my process.php page, where it is saved. Issue: right now anyone can directly type example.com/process.php to access my process page, or type example.com/process.php/var1=foo1&var2=foo2 to emulate a form submission. How do I prevent this from happening? Also, in the Ajax code I specified POST. What is the difference here between POST and GET?

    Read the article

  • BizTalk 2009 - Custom Functoid Categories

    - by StuartBrierley
    I recently had cause to code a number of custom functoids to aid with some maps that I was writing. Once these were developed and deployed to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions, a quick refresh allowed them to appear in the toolbox. After dropping these on a map and configuring the appropriate inputs I tested the map to check that they worked as expected. All but one of the functoids worked as expected, but the final functoid appeared not to be firing at all. I had already tested the code used in a simple test harness application, so I was confident in the code used, but I still needed to figure out what the problem might be. Debugging the map helped me on the way; for some reason the functoid in question was not shown correctly - the functoid definition was wrong. After some investigation I found that the functoid type you assign when coding a custom functoid affects more than just the category it appears in; different functoid types have different capabilities, including what they can link to. For example, a Logical functoid cannot provide content for an output element; it can only say whether the element exists. Map this via a Value Mapping functoid and the value of true or false can be seen in the output element. The functoid I was having problems with was one where I had used the XPath functoid type; this had seemed to be a good fit as I was looking up content in a config file using XPath and I wanted it to appear in the Advanced area. From the table below you can see that this functoid type is marked as "Internal Use Only", preventing it from being used for custom functoids. Changing my type to String allowed the functoid to function as expected.

    | Category | Description | Toolbox Group |
    | --- | --- | --- |
    | Assert | Internal Use Only | Advanced |
    | Conversion | Converts characters to and from numerics and converts numbers from one base to another. | Conversion |
    | Count | Internal Use Only | Advanced |
    | Cumulative | Performs accumulations of the value of a field that occurs multiple times in a source document and outputs a single output. | Cumulative |
    | DatabaseExtract | Internal Use Only | Database |
    | DatabaseLookup | Internal Use Only | Database |
    | DateTime | Adds date, time, date and time, or add days to a specified date, in output data. | Date/Time |
    | ExistenceLooping | Internal Use Only | Advanced |
    | Index | Internal Use Only | Advanced |
    | Iteration | Internal Use Only | Advanced |
    | Keymatch | Internal Use Only | Advanced |
    | Logical | Controls conditional behavior of other functoids to determine whether particular output data is created. | Logical |
    | Looping | Internal Use Only | Advanced |
    | MassCopy | Internal Use Only | Advanced |
    | Math | Performs specific numeric calculations such as addition, multiplication, and division. | Mathematical |
    | NilValue | Internal Use Only | Advanced |
    | Scientific | Performs specific scientific calculations such as logarithmic, exponential, and trigonometric functions. | Scientific |
    | Scripter | Internal Use Only | Advanced |
    | String | Manipulates data strings by using well-known string functions such as concatenation, length, find, and trim. | String |
    | TableExtractor | Internal Use Only | Advanced |
    | TableLooping | Internal Use Only | Advanced |
    | Unknown | Internal Use Only | Advanced |
    | ValueMapping | Internal Use Only | Advanced |
    | XPath | Internal Use Only | Advanced |

    Links
    http://msdn.microsoft.com/en-us/library/microsoft.biztalk.basefunctoids.functoidcategory(BTS.20).aspx
    http://blog.eliasen.dk/CommentView,guid,d33b686b-b059-4381-a0e7-1c56e808f7f0.aspx
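    For context, here is a minimal sketch of where that functoid type is set when coding a custom functoid. The member names are recalled from the Microsoft.BizTalk.BaseFunctoids API rather than taken from this post, so treat them as assumptions to check against the reference documentation; resource-based name, tooltip and bitmap setup is omitted.

        using Microsoft.BizTalk.BaseFunctoids;

        // Sketch of a custom functoid that uses the String category (a supported,
        // non-"Internal Use Only" functoid type, as discussed above).
        public class TrimFunctoid : BaseFunctoid
        {
            public TrimFunctoid()
            {
                this.ID = 6001;                            // custom functoid IDs should be 6000 or above
                this.Category = FunctoidCategory.String;   // the property that caused the problem above
                this.SetMinParams(1);
                this.SetMaxParams(1);
                this.AddInputConnectionType(ConnectionType.AllExceptRecord);
                this.OutputConnectionType = ConnectionType.AllExceptRecord;

                // Point the mapper at the method that does the work at runtime.
                this.SetExternalFunctionName(GetType().Assembly.FullName, GetType().FullName, "Trim");
            }

            public string Trim(string value)
            {
                return value == null ? string.Empty : value.Trim();
            }
        }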

    Read the article

  • How to use ULS in SharePoint 2010 for Custom Code Exception Logging?

    - by venkatx5
    What is ULS in SharePoint 2010? ULS stands for Unified Logging Service which captures and writes Exceptions/Logs in Log File(A Plain Text File with .log extension). SharePoint logs Each and every exceptions with ULS. SharePoint Administrators should know ULS and it's very useful when anything goes wrong. but when you ask any SharePoint 2007 Administrator to check log file then most of them will Kill you. Because read and understand the log file is not so easy. Imagine open a plain text file of 20 MB in NotePad and go thru line by line. Now Microsoft developed a tool "ULS Viewer" to view those Log files in easily readable format. This tools also helps to filter events based on exception priority. You can read on this blog to know in details about ULS Viewer . Where to get ULS Viewer? ULS Viewer is developed by Microsoft and available to download for free. URL : http://code.msdn.microsoft.com/ULSViewer/Release/ProjectReleases.aspx?ReleaseId=3308 Note: Eventhought this tool developed by Microsoft, it's not supported by Microsoft. Means you can't support for this tool from Microsoft and use it on your own Risk. By the way what's the risk in viewing Log Files?! How to use ULS in SharePoint 2010 Custom Code? ULS can be extended to use in user solutions to log exceptions. In Detail, Developer can use ULS to log his own application errors and exceptions on SharePoint Log files. So now all in Single Place (That's why it's called "Unified Logging"). Well in this article I am going to use Waldek's Code (Reference Link). However the article is core and am writing container for that (Basically how to implement the code in Detail). Let's see the steps. Open Visual Studio 2010 -> File -> New Project -> Visual C# -> Windows -> Class Library -> Name : ULSLogger (Make sure you've selected .net Framework 3.5)   In Solution Explorer Panel, Rename the Class1.cs to LoggingService.cs   Right Click on References -> Add Reference -> Under .Net tab select "Microsoft.SharePoint"   Right Click on the Project -> Properties. Select "Signing" Tab -> Check "Sign the Assembly".   In the below drop down select <New> and enter "ULSLogger", uncheck the "Protect my key with a Password" option.   Now copy the below code and paste. (Or Just refer.. :-) ) using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.SharePoint; using Microsoft.SharePoint.Administration; using System.Runtime.InteropServices; namespace ULSLogger { public class LoggingService : SPDiagnosticsServiceBase { public static string vsDiagnosticAreaName = "Venkats SharePoint Logging Service"; public static string CategoryName = "vsProject"; public static uint uintEventID = 700; // Event ID private static LoggingService _Current; public static LoggingService Current {  get   {    if (_Current == null)     {       _Current = new LoggingService();     }    return _Current;   } }private LoggingService() : base("Venkats SharePoint Logging Service", SPFarm.Local) {}protected override IEnumerable<SPDiagnosticsArea> ProvideAreas() { List<SPDiagnosticsArea> areas = new List<SPDiagnosticsArea>  {   new SPDiagnosticsArea(vsDiagnosticAreaName, new List<SPDiagnosticsCategory>    {     new SPDiagnosticsCategory(CategoryName, TraceSeverity.Medium, EventSeverity.Error)    })   }; return areas; }public static string LogErrorInULS(string errorMessage) { string strExecutionResult = "Message Not Logged in ULS. 
"; try  {   SPDiagnosticsCategory category = LoggingService.Current.Areas[vsDiagnosticAreaName].Categories[CategoryName];   LoggingService.Current.WriteTrace(uintEventID, category, TraceSeverity.Unexpected, errorMessage);   strExecutionResult = "Message Logged"; } catch (Exception ex) {  strExecutionResult += ex.Message; } return strExecutionResult; }public static string LogErrorInULS(string errorMessage, TraceSeverity tsSeverity) { string strExecutionResult = "Message Not Logged in ULS. "; try  {  SPDiagnosticsCategory category = LoggingService.Current.Areas[vsDiagnosticAreaName].Categories[CategoryName];  LoggingService.Current.WriteTrace(uintEventID, category, tsSeverity, errorMessage);  strExecutionResult = "Message Logged";  } catch (Exception ex)  {   strExecutionResult += ex.Message;   } return strExecutionResult;  } } }   Just build the solution and it's ready to use now. This ULS solution can be used in SharePoint Webparts or Console Application. Lets see how to use it in a Console Application. SharePoint Server 2010 must be installed in the same Server or the application must be hosted in SharPoint Server 2010 environment. The console application must be set to "x64" Platform target.   Create a New Console Application. (Visual Studio -> File -> New Project -> C# -> Windows -> Console Application) Right Click on References -> Add Reference -> Under .Net tab select "Microsoft.SharePoint" Open Program.cs add "using Microsoft.SharePoint.Administration;" Right Click on References -> Add Reference -> Under "Browse" tab select the "ULSLogger.dll" which we created first. (Path : ULSLogger\ULSLogger\bin\Debug\) Right Click on Project -> Properties -> Select "Build" Tab -> Under "Platform Target" option select "x64". Open the Program.cs and paste the below code. using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.SharePoint.Administration; using ULSLogger; namespace ULSLoggerClient {  class Program   {   static void Main(string[] args)     {     Console.WriteLine("ULS Logging Started.");     string strResult = LoggingService.LogErrorInULS("My Application is Working Fine.");      Console.WriteLine("ULS Logging Info. Result : " + strResult);     string strResult = LoggingService.LogErrorInULS("My Application got an Exception.", TraceSeverity.High);     Console.WriteLine("ULS Logging Waring Result : " + strResult);      Console.WriteLine("ULS Logging Completed.");      Console.ReadLine();     }   } } Just build the solution and execute. It'll log the message on the log file. Make sure you are using Farm Administrator User ID. You can play with Message and TraceSeverity as required. Now Open ULS Viewer -> File -> Open From -> ULS -> Select First Option to open the default ULS Log. It's Uls RealTime and will show all log entries in readable table format. Right Click on a row and select "Filter By This Item". Select "Event ID" and enter value "700" that we used in the application. Click Ok and now you'll see the Exceptions/Logs which logged by our application.   If you want to see High Priority Messages only then Click Icons except Red Cross Icon on the Toolbar. The tooltip will tell what's the icons used for.

    Read the article

  • Banshee does not start (Ubuntu 12.04)

    - by balg
    I have installed Banshee, but during the installation something went wrong and now I am experiencing this:
    balg@scorpion:~$ banshee
    Unhandled Exception: System.TypeLoadException: Could not load type 'Banshee.ServiceStack.DBusServiceManager' from assembly 'Banshee.Services, Version=2.4.0.0, Culture=neutral, PublicKeyToken=null'.
    [ERROR] FATAL UNHANDLED EXCEPTION: System.TypeLoadException: Could not load type 'Banshee.ServiceStack.DBusServiceManager' from assembly 'Banshee.Services, Version=2.4.0.0, Culture=neutral, PublicKeyToken=null'.
    I have tried to remove and purge Banshee, delete the config files and then reinstall it, but it didn't help. Can anyone help me? Thanks, balg

    Read the article

  • How to store Role Based Access rights in web application?

    - by JonH
    Currently working on a web-based CRM-type system that deals with various modules such as Companies, Contacts, Projects, Sub Projects, etc. A typical CRM-type system (ASP.NET Web Forms, C#, SQL Server backend). We plan to implement role-based security so that a user can have one or more roles. Roles would be broken down first by module type, such as Company and Contact, and then by the actions for that module. For instance, each role would end up with a table such as this (Role1 example):

    Module  | Create | Edit       | Delete | View
    Company | Yes    | Owner Only | No     | Yes
    Contact | Yes    | Yes        | Yes    | Yes

    In the above case Role1 covers two module types (Company and Contact). For Company, a person assigned to this role can create companies, can view companies, can only edit records he/she created, and cannot delete. For the Contact module the same user can create, edit, delete, and view contacts (full rights basically). I am wondering: is it best, upon coming into the system, to store the user's roles in session with something like a List<Role> roles, where the Role class would have some sort of List<Module> modules (which can contain Company, Contact, etc.)? Something to the effect of:

    class Role {
        string name;
        string desc;
        List<Module> modules;
    }

    And the module actions class would have a set of actions (Create, Edit, Delete, etc.) for each module:

    class ModuleActions {
        List<Action> actions;
    }

    And the action has a value of whether the user can perform the right:

    class Action {
        string right;
    }

    Just a rough idea; I know the Action could be an enum and ModuleActions could probably be eliminated with a Dictionary<x, y>. My main question is what would be the best way to store this information in this type of application: Should I store it in the user's Session state (I have a session class where I manage things related to the user)? I generally load this during the initial loading of the application (global.asax), and I could simply tack this onto the session. Or should this be loaded at the page load event of each module (page load of Company, etc.)? I eventually need to be able to hide/unhide various buttons/divs based on the user's role, and that is what got me thinking to load this via session. Any examples or points would be great.
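    One hedged sketch of how this could be structured (illustrative only; the type and member names below are assumptions, not an established pattern): the actions become an enum, each module carries a set of allowed actions, and a small helper answers the "can this user do X on module Y" question that page-load and button-visibility code needs.

        using System;
        using System.Collections.Generic;

        public enum CrmAction { Create, Edit, EditOwnOnly, Delete, View }

        public class ModulePermissions
        {
            public string ModuleName { get; set; }   // e.g. "Company", "Contact"
            public HashSet<CrmAction> AllowedActions { get; } = new HashSet<CrmAction>();
        }

        public class Role
        {
            public string Name { get; set; }
            public string Description { get; set; }
            // Keyed by module name so lookups from page code are cheap.
            public Dictionary<string, ModulePermissions> Modules { get; } =
                new Dictionary<string, ModulePermissions>(StringComparer.OrdinalIgnoreCase);
        }

        public static class PermissionChecker
        {
            // Returns true if any of the user's roles grants the action on the module.
            public static bool Can(IEnumerable<Role> roles, string module, CrmAction action)
            {
                foreach (var role in roles)
                {
                    if (role.Modules.TryGetValue(module, out var perms) &&
                        perms.AllowedActions.Contains(action))
                        return true;
                }
                return false;
            }
        }

    Loaded once at login (for example in global.asax) and kept in session, a call such as PermissionChecker.Can(currentUserRoles, "Company", CrmAction.Delete) could then drive whether the Delete button is rendered on the Company page.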

    Read the article

  • How to sort a ListView control by a column in Visual C#

    - by bconlon
    Microsoft provide an article of the same name (previously published as Q319401) and it shows a nice class 'ListViewColumnSorter' for sorting a standard ListView when the user clicks the column header. This is very useful for String values, however for Numeric or DateTime data it gives odd results. E.g. 100 would come before 99 in an ascending sort as the string compare sees 1 < 9. So my challenge was to allow other types to be sorted. This turned out to be fairly simple as I just needed to create an inner class in ListViewColumnSorter which extends the .NET CaseInsensitiveComparer class, and then use this as the ObjectCompare member's type. Note: Ideally we would be able to use IComparer as the member's type, but the Compare method is not virtual in CaseInsensitiveComparer, so we have to create an exact type:

    public class ListViewColumnSorter : IComparer
    {
        // private CaseInsensitiveComparer ObjectCompare;   // original member, replaced by:
        private MyComparer ObjectCompare;

        ... rest of Microsoft's class implementation ...
    }

    Here is my private inner comparer class; note the 'new int Compare' as Compare is not virtual, and also note we pass the values to the base Compare as the correct type (e.g. Decimal, DateTime) so they compare correctly:

    private class MyComparer : CaseInsensitiveComparer
    {
        public new int Compare(object x, object y)
        {
            try
            {
                string s1 = x.ToString();
                string s2 = y.ToString();

                // check for a numeric column
                decimal n1, n2 = 0;
                if (Decimal.TryParse(s1, out n1) && Decimal.TryParse(s2, out n2))
                    return base.Compare(n1, n2);
                else
                {
                    // check for a date column
                    DateTime d1, d2;
                    if (DateTime.TryParse(s1, out d1) && DateTime.TryParse(s2, out d2))
                        return base.Compare(d1, d2);
                }
            }
            catch (ArgumentException) { }

            // just use base string compare
            return base.Compare(x, y);
        }
    }

    You could extend this for other types, even custom classes as long as they implement IComparable. Microsoft also have another article, How to: Sort a GridView Column When a Header Is Clicked, that shows this for WPF, which looks conceptually very similar. I need to test it out to see if it handles non-string types.
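    The article stops short of showing the wiring, so here is a minimal usage sketch. It assumes the ListViewColumnSorter class above with the SortColumn and Order properties from the original KB sample (adjust the names if your copy differs); the form and the myListView control are placeholders:

        using System.Windows.Forms;

        public partial class MainForm : Form
        {
            private readonly ListViewColumnSorter columnSorter = new ListViewColumnSorter();

            public MainForm()
            {
                InitializeComponent();
                myListView.ListViewItemSorter = columnSorter;
                myListView.ColumnClick += MyListView_ColumnClick;
            }

            private void MyListView_ColumnClick(object sender, ColumnClickEventArgs e)
            {
                // Toggle the sort order when the same column is clicked again,
                // otherwise sort the newly clicked column ascending.
                if (e.Column == columnSorter.SortColumn)
                {
                    columnSorter.Order = columnSorter.Order == SortOrder.Ascending
                        ? SortOrder.Descending
                        : SortOrder.Ascending;
                }
                else
                {
                    columnSorter.SortColumn = e.Column;
                    columnSorter.Order = SortOrder.Ascending;
                }

                myListView.Sort();
            }
        }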

    Read the article

  • SQL SERVER – Create a Very First Report with the Report Wizard

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. What is the Report Wizard? In today’s world automation is all around you. Henry Ford began building his Model T automobiles on a moving assembly line a century ago and changed the world. The moving assembly line allowed Ford to build identical cars quickly and cheaply. Henry Ford said in his autobiography “Any customer can have a car painted any color that he wants so long as it is black.” Today you can buy a car straight from the factory with your choice of several colors and with many options like back up cameras, built-in navigation systems and heated leather seats. The assembly lines now use robots to perform some tasks along with human workers. When you order your new car, if you want something special, not offered by the manufacturer, you will have to find a way to add it later. In computer software, we also have “assembly lines” called wizards. A wizard will ask you a series of questions, often branching to specific questions based on earlier answers, until you get to the end of the wizard. These wizards are used for many things, from something simple like setting up a rule in Outlook to performing administrative tasks on a server. Often, a wizard will get you part of the way to the end result, enough to get much of the tedious work out of the way. Once you get the product from the wizard, if the wizard is not capable of doing something you need, you can tweak the results. Create a Report with the Report Wizard Let’s get started with your first report!  Launch SQL Server Data Tools (SSDT) from the Start menu under SQL Server 2012. Once SSDT is running, click New Project to launch the New Project dialog box. On the left side of the screen expand Business Intelligence and select Reporting Services. Configure the properties, being sure to select Report Server Project Wizard as the type of report and to save the project in the C:\Joes2Pros\SSRSCompanionFiles\Chapter3\Project folder. Click OK and wait for the Report Wizard to launch. Click Next on the Welcome screen.  On the Select the Data Source screen, make sure that New data source is selected. Type JProCo as the data source name. Make sure that Microsoft SQL Server is selected in the Type dropdown. Click Edit to configure the connection string on the Connection Properties dialog box. If your SQL Server database server is installed on your local computer, type in localhost for the Server name and select the JProCo database from the Select or enter a database name dropdown. Click OK to dismiss the Connection Properties dialog box. Check Make this a shared data source and click Next. On the Design the Query screen, you can use the query builder to build a query if you wish. Since this post is not meant to teach you T-SQL queries, you will copy all queries from files that have been provided for you. In the C:\Joes2Pros\SSRSCompanionFiles\Chapter3\Resources folder open the sales by employee.sql file. Copy and paste the code from the file into the Query string text box. Click Next. On the Select the Report Type screen, choose Tabular and click Next. On the Design the Table screen, you have to figure out the groupings of the report. How do you do this? Well, you often need to know a bit about the data and report requirements. I often draw the report out on paper first to help me determine the groups. In the case of this report, I could group the data several ways.
    Do I want to see the data grouped by Year and Month? Do I want to see the data grouped by Employee or Category? The only thing I know for sure about this ahead of time is that the TotalSales goes in the Details section. Let’s assume that the CIO asked to see the data grouped first by Year and Month, then by Category. Let’s move the fields to the right-hand side. This is done by selecting Page >, Group > or Details > for each field, and then clicking Next. On the Choose the Table Layout screen, select Stepped and check Include subtotals and Enable drilldown. On the Choose the Style screen, choose any color scheme you wish (unlike the Model T) and click Next. I chose the default, Slate. On the Choose the Deployment Location screen, change the Deployment folder to Chapter 3 and click Next. At the Completing the Wizard screen, name your report Employee Sales and click Finish. After clicking Finish, the report and a shared data source will appear in the Solution Explorer and the report will also be visible in Design view. Click the Preview tab at the top. This report expects the user to supply a year which the report will then use as a filter. Type in a year between 2006 and 2013 and click View Report. Click the plus sign next to the Sales Year to expand the report to see the months, then expand again to see the categories and finally the details. You now have the assembly line report completed, and you probably already have some ideas on how to improve the report. Tomorrow’s Post Tomorrow’s blog post will show how to create your own data sources and data sets in SSRS. If you want to learn SSRS in simple words, I strongly recommend you get the Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

    Read the article

  • Equivalent of #map in ruby in golang

    - by Oct
    I'm playing with Go and have run into something I'm unable to find on Google, although it must certainly exist. I'm using the following structs:

    type Syntax struct {
        name       string
        extensions *regexp.Regexp
    }

    type Scanner struct {
        classifier           *bayesian.Classifier
        save_file            string
        name_to_syntax       map[string]*Syntax
        extensions_to_syntax map[*regexp.Regexp]*Syntax
    }

    I'd like to perform the following in Go, and I'm quoting Ruby because it's how I'd do it there:

    test_regexpes = my_scanner.extensions_to_syntax.keys

    My goal is to get an array of *regexp.Regexp. Any idea on how to do that in a simple way? Thank you!

    Read the article

  • TypeScript or JavaScript for noob web developer [closed]

    - by Phil Murray
    Following the recent release by Microsoft of TypeScript, I was wondering if this is something that should be considered by an experienced WinForms and XAML developer looking to get into more web development. From reviewing a number of sites and videos online, it appears that the type system of TypeScript makes more sense to me as a thick-client developer than the dynamic type system in JavaScript. I understand that TypeScript compiles down to JavaScript, but it appears that the learning curve is shallower due to the current tooling provided by Microsoft. What are your thoughts?

    Read the article

  • Using runtime checking of code contracts in Visual Studio 2010

    - by DigiMortal
    In my last posting about code contracts I introduced how to check the input parameters of a randomizer using static contract checking. But you can also compile code contracts into your assemblies and use them at runtime. In this posting I will show you a simple example of runtime checking of code contracts. NB! If you want to play with the code and try out the things described here, feel free to download the example solution. If you are a speaker and want to use this solution as part of your sessions then feel free to do so, but don't forget to credit me and this blog as the source, and please let me know about your session; as a speaker I am very interested in it. :) To see how code contracts are checked at runtime we have to enable runtime checking from the project properties. Make sure you have checked the box "Perform Runtime Contract Checking" and that you select "Full" from the dropdown. These parts are highlighted in the red box on the screenshot below. (Visual Studio 2010 settings for code contracts: runtime checking is turned on and checks are made only on the public surface. Click on the image to see it at original size.) Save the project settings, then compile the code and run it. As soon as code execution hits the call to GetRandomFromRangeContracted(), an exception is thrown. If you are not currently playing with the solution referred to above, take a look at the following screenshot. (Visual Studio 2010 runtime checking of code contracts: an exception of type ContractException is thrown when a contract is violated. Click on the image to see it at original size.) The exact type of the exception is ContractException and it is defined in the System.Diagnostics.Contracts.__ContractsRuntime namespace. In our example the message of the exception is the following: "Precondition failed: min < max  Min must be less than max". Besides the description we provided for the contract violation, the message also contains the violated contract type. In this case the type of contract is Precondition. Conclusion: Using runtime checking of code contracts enables you to ship the contracts with your code and have them checked every time your methods are called. This way you can ensure that all conditions required to run a method are met, or an exception is thrown and the calling system has to handle the situation.
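    The post does not repeat the method body here, but a contracted randomizer along these lines (a hedged sketch; the method name matches the one mentioned above, the rest of the implementation is assumed) would produce exactly that precondition message:

        using System;
        using System.Diagnostics.Contracts;

        public static class Randomizer
        {
            private static readonly Random _random = new Random();

            public static int GetRandomFromRangeContracted(int min, int max)
            {
                // Precondition: when violated at runtime this throws ContractException with
                // the message "Precondition failed: min < max  Min must be less than max".
                Contract.Requires(min < max, "Min must be less than max");

                // Postcondition: the result stays within the requested range.
                Contract.Ensures(Contract.Result<int>() >= min && Contract.Result<int>() <= max);

                return _random.Next(min, max + 1);
            }
        }

    With runtime checking set to Full, calling Randomizer.GetRandomFromRangeContracted(10, 5) then fails with the ContractException shown above.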

    Read the article

  • Modifying Contiguous Time Periods in a History Table

    Alex Kuznetsov is credited with a clever technique for creating a history table for SQL that is designed to store contiguous time periods and check that these time periods really are contiguous, using nothing but constraints. This is now increasingly useful with the DATE data type in SQL Server. The modification of data in this type of table isn't always entirely intuitive so Alex is on hand to give a brief explanation of how to do it.

    Read the article

  • BRE (Business Rules Engine) Data Services is out...!!!

    - by Vishal
    A few months ago we at Tellago had open sourced the BizTalk Data Services. We were meanwhile working on other artifacts which comes along with BizTalk Server like the “Business Rules Engine”.  We are happy to announce the first version of BRE Data Services. BRE Data Services is a same concept which we covered through BTS Data Services, providing a RESTFul OData – based API to interact with the Business Rules Engine via HTTP using ATOM Publishing Protocol or JSON as the encoding mechanism.   In the first version release, we mainly focused on the browsing, querying and searching BRE artifacts via a RESTFul interface. Also along with that we provide the functionality to execute Business Rules by inserting the Facts for policies via the IUpdatable implementation of WCF Data Services.   The BRE Data Services API provides a lightweight interface for managing Business Rules Engine artifacts such as Policies, Rules, Vocabularies, Conditions, Actions, Facts etc. The following are some examples which details some of the available features in the current version of the API.   Basic Querying: Querying BRE Policies http://localhost/BREDataServices/BREMananagementService.svc/Policies Querying BRE Rules http://localhost/BREDataServices/BREMananagementService.svc/Rules Querying BRE Vocabularies http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies   Navigation: The BRE Data Services API also leverages WCF Data Services to enable navigation across related different BRE objects. Querying a specific Policy http://localhost/BREDataServices/BREMananagementService.svc/Policies(‘PolicyName’) Querying a specific Rule http://localhost/BREDataServices/BREMananagementService.svc/Rules(‘RuleName’) Querying all Rules under a Policy http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Rules Querying all Facts under a Policy http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Facts Querying all Actions for a specific Rule http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Actions Querying all Conditions for a specific Rule http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Actions Querying a specific Vocabulary: http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies('VocabName')   Implementation: With the BRE Data Services, we also provide the functionality of executing a particular policy via HTTP. There are couple of ways you can do that though the API.   Ø First is though Service Operations feature of WCF Data Services in which you can execute the Facts by passing them in the URL itself. This is a very simple implementations of the executing the policies due to the limitations & restrictions (only primitive types of input parameters which can be passed) currently of the Service Operations of the WCF Data Services. Below is a code sample.                Below is a traced Request/Response message.                                 Ø Second is through the IUpdatable Interface of WCF Data Services. In this method, you can first query the rule which you want to execute and then inserts Facts for that particular Rules and finally when you perform the SaveChanges() call for the IUpdatable Interface API, it executes the policy with the facts which you inserted at runtime. Below is a sample of client side code. 
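    The client-side sample referred to above was shown as an image in the original post and is not reproduced here. The following is a hedged reconstruction of what such IUpdatable-based client code might look like, inferred from the AtomPub trace below; the Fact class and its ID, FactType and FactInstance properties are assumptions based on that payload:

        using System;
        using System.Data.Services.Client;

        // Assumed client-side shape of the Fact resource, matching the AtomPub payload below.
        public class Fact
        {
            public string ID { get; set; }
            public string FactType { get; set; }
            public string FactInstance { get; set; }
        }

        public class BreClientSample
        {
            public static void ExecutePolicy()
            {
                var context = new DataServiceContext(
                    new Uri("http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc"));

                var fact = new Fact
                {
                    ID = "TestPolicy",
                    FactType = "TestSchema",
                    FactInstance = "<ns0:LoanStatus xmlns:ns0=\"http://tellago.com\"><Age>10</Age><Status>true</Status></ns0:LoanStatus>"
                };

                // Attach the fact to the Facts entity set and mark it as updated;
                // SaveChanges() sends the MERGE request that triggers the policy execution.
                context.AttachTo("Facts", fact);
                context.UpdateObject(fact);
                context.SaveChanges();
            }
        }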
Due to the limitations of current version of WCF Data Services where there is no way you can return back the updates happening on the service side back to the client via the SaveChanges() method. Here we are executing the rule passing a serialized XML as Facts and there is no changes made to any data where we can query back to fetch the changes. This is overcome though the first way to executing the policies which is by executing it as a Service Operation call.     This actually generates a AtomPub message shown as below:   POST /Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/$batch HTTP/1.1 User-Agent: Microsoft ADO.NET Data Services DataServiceVersion: 1.0;NetFx MaxDataServiceVersion: 2.0;NetFx Accept: application/atom+xml,application/xml Accept-Charset: UTF-8 Content-Type: multipart/mixed; boundary=batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7 Host: localhost:8080 Content-Length: 1481 Expect: 100-continue   --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7 Content-Type: multipart/mixed; boundary=changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf   --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf Content-Type: application/http Content-Transfer-Encoding: binary   MERGE http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy') HTTP/1.1 Content-ID: 4 Content-Type: application/atom+xml;type=entry Content-Length: 927   <?xml version="1.0" encoding="utf-8" standalone="yes"?> <entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" font-size: x-small"http://www.w3.org/2005/Atom">   <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" term="Tellago.BRE.REST.Resources.Fact" />   <title />   <author>     <name />   </author>   <updated>2011-01-31T20:09:15.0023982Z</updated>   <id>http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy')</id>   <content type="application/xml">     <m:properties>       <d:FactInstance>&lt;ns0:LoanStatus xmlns:ns0="http://tellago.com"&gt;&lt;Age&gt;10&lt;/Age&gt;&lt;Status&gt;true&lt;/Status&gt;&lt;/ns0:LoanStatus&gt;</d:FactInstance>       <d:FactType>TestSchema</d:FactType>       <d:ID>TestPolicy</d:ID>     </m:properties>   </content> </entry> --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf-- --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7—     Installation: The installation of the BRE Data Services is pretty straight forward. ·         Create a new IIS website say BREDataServices. ·         Download the SourceCode from TellagoCodeplex and copy the content from Tellago.BRE.REST.ServiceHost to the physical location of the above created website.     ·         The appPool account running the website should have admin access to the BizTalkRuleEngineDb database. ·         TheRight click the BREManagementService.svc in the IIS ContentView for the website and wala..     Conclusion: The BRE Data Services API is an experiment intended to bring the capabilities of RESTful/OData based services to the Traditional BTS/BRE Solutions. The future releases will target on technologies like BAM, ESB Toolkit. This version has been tested with various version of BizTalk Server and we have uploaded the source code to our Tellago's DevLabs workspace at Codeplex. I hope you guys enjoy this release. Keep an eye on our new releases @ Tellago Codeplex. We are working on various other Biztalk Artifacts like BAM, ESB Toolkit.     Till than happy BizzRuling…!!!     Thanks,   Vishal Mody

    Read the article

  • How to hide jQuery Sub-Menus(ddsmoothmenu)?

    - by Tim
    I'm new to jQuery and i must admit that i've understood nothing yet, the syntax appears to me as an unknown language although i thought that i had my experiences with javascript. Nevertheless i managed it to implement this menu in my asp.net masterpage's header. Even got it to work that the content-page is loaded with ajax with help from here. But finally i'm failing with the menu to disappear when the new page was loaded asynchronously. I dont know how to hide this accursed jQuery Menu. Following the part of the js-file where the events are registered for hiding/disappearing. I dont know how to get the part that is responsible for it and even i dont know how to implement that part in my Anchor-onclick function where i dont have a reference to the jQuery Object. buildmenu:function($, setting){ var smoothmenu=ddsmoothmenu var $mainmenu=$("#"+setting.mainmenuid+">ul") //reference main menu UL $mainmenu.parent().get(0).className=setting.classname || "ddsmoothmenu" var $headers=$mainmenu.find("ul").parent() $headers.hover( function(e){ $(this).children('a:eq(0)').addClass('selected') }, function(e){ $(this).children('a:eq(0)').removeClass('selected') } ) $headers.each(function(i){ //loop through each LI header var $curobj=$(this).css({zIndex: 100-i}) //reference current LI header var $subul=$(this).find('ul:eq(0)').css({display:'block'}) $subul.data('timers', {}) this._dimensions={w:this.offsetWidth, h:this.offsetHeight, subulw:$subul.outerWidth(), subulh:$subul.outerHeight()} this.istopheader=$curobj.parents("ul").length==1? true : false //is top level header? $subul.css({top:this.istopheader && setting.orientation!='v'? this._dimensions.h+"px" : 0}) $curobj.children("a:eq(0)").css(this.istopheader? {paddingRight: smoothmenu.arrowimages.down[2]} : {}).append( //add arrow images '<img src="'+ (this.istopheader && setting.orientation!='v'? smoothmenu.arrowimages.down[1] : smoothmenu.arrowimages.right[1]) +'" class="' + (this.istopheader && setting.orientation!='v'? smoothmenu.arrowimages.down[0] : smoothmenu.arrowimages.right[0]) + '" style="border:0;" />' ) if (smoothmenu.shadow.enable){ this._shadowoffset={x:(this.istopheader?$subul.offset().left+smoothmenu.shadow.offsetx : this._dimensions.w), y:(this.istopheader? $subul.offset().top+smoothmenu.shadow.offsety : $curobj.position().top)} //store this shadow's offsets if (this.istopheader) $parentshadow=$(document.body) else{ var $parentLi=$curobj.parents("li:eq(0)") $parentshadow=$parentLi.get(0).$shadow } this.$shadow=$('<div class="ddshadow'+(this.istopheader? ' toplevelshadow' : '')+'"></div>').prependTo($parentshadow).css({left:this._shadowoffset.x+'px', top:this._shadowoffset.y+'px'}) //insert shadow DIV and set it to parent node for the next shadow div } $curobj.hover( function(e){ var $targetul=$subul //reference UL to reveal var header=$curobj.get(0) //reference header LI as DOM object clearTimeout($targetul.data('timers').hidetimer) $targetul.data('timers').showtimer=setTimeout(function(){ header._offsets={left:$curobj.offset().left, top:$curobj.offset().top} var menuleft=header.istopheader && setting.orientation!='v'? 0 : header._dimensions.w menuleft=(header._offsets.left+menuleft+header._dimensions.subulw>$(window).width())? (header.istopheader && setting.orientation!='v'? 
-header._dimensions.subulw+header._dimensions.w : -header._dimensions.w) : menuleft //calculate this sub menu's offsets from its parent if ($targetul.queue().length<=1){ //if 1 or less queued animations $targetul.css({left:menuleft+"px", width:header._dimensions.subulw+'px'}).animate({height:'show',opacity:'show'}, ddsmoothmenu.transition.overtime) if (smoothmenu.shadow.enable){ var shadowleft=header.istopheader? $targetul.offset().left+ddsmoothmenu.shadow.offsetx : menuleft var shadowtop=header.istopheader?$targetul.offset().top+smoothmenu.shadow.offsety : header._shadowoffset.y if (!header.istopheader && ddsmoothmenu.detectwebkit){ //in WebKit browsers, restore shadow's opacity to full header.$shadow.css({opacity:1}) } header.$shadow.css({overflow:'', width:header._dimensions.subulw+'px', left:shadowleft+'px', top:shadowtop+'px'}).animate({height:header._dimensions.subulh+'px'}, ddsmoothmenu.transition.overtime) } } }, ddsmoothmenu.showhidedelay.showdelay) }, function(e){ var $targetul=$subul var header=$curobj.get(0) clearTimeout($targetul.data('timers').showtimer) $targetul.data('timers').hidetimer=setTimeout(function(){ $targetul.animate({height:'hide', opacity:'hide'}, ddsmoothmenu.transition.outtime) if (smoothmenu.shadow.enable){ if (ddsmoothmenu.detectwebkit){ //in WebKit browsers, set first child shadow's opacity to 0, as "overflow:hidden" doesn't work in them header.$shadow.children('div:eq(0)').css({opacity:0}) } header.$shadow.css({overflow:'hidden'}).animate({height:0}, ddsmoothmenu.transition.outtime) } }, ddsmoothmenu.showhidedelay.hidedelay) } ) //end hover }) //end $headers.each() $mainmenu.find("ul").css({display:'none', visibility:'visible'}) } one link of my menu what i want to hide when the content is redirected to another page(i need "closeMenu-function"): <li><a href="DeliveryControl.aspx" onclick="AjaxContent.getContent(this.href);closeMenu();return false;">Delivery Control</a></li> In short: I want to fade out the submenus the same way they do automatically onblur, so that only the headermenu stays visible but i dont know how. Thanks, Tim EDIT: thanks to Starx' private-lesson in jQuery for beginners i solved it: I forgot the # in $("#smoothmenu1"). After that it was not difficult to find and call the hover-function from the menu's headers to let them fade out smoothly: $("#smoothmenu1").find("ul").hover(); Regards, Tim

    Read the article

  • Get your TFS 2012 task board demo ready in under 1 minute

    - by Tarun Arora
    Release Notes – http://tfsdemosetup.codeplex.com/  | Download | Source Code | Report a Bug | Ideas In this blog post, I’ll show you how to use the ‘TfsDemoSetup’ application to configure and setup the TFS 2012 task board for a demo in well less than 1 minute Step 1 – Note what you get with a newly created Team Project Create a new Team Project on TFS Preview         2. Click Create Project         3. The project creation has completed        4. Open the team web access and have a look at the home page Note – Since I created the project I am the only Team Member       A default Team by the name AdventureWorks Team has been created       A few sprints have been assigned to the default team but no dates for sprint start and end have been specified        A default Area Path for the team is missing       Step 2. Download the TFS Demo Setup Console application from Codeplex 1. Navigate to the TFS Demo Setup project on codeplex https://tfsdemosetup.codeplex.com/       2. Download Instructions and TFSDemo_<version>      3. Follow the steps in the Instructions.txt file      4. Unzip TFSDemo_<version> and open the target folder. Two important files in this folder, DemoDictionary.xml – This file contains the settings using which the demo environment will be setup SetupTfsDemo.exe – This will run the TFS demo environment setup application       Step 3 – Configure the setup (i.e. team name, members, sprint dates, etc) 1. Open up DemoDictionary.xml      2. Walkthrough DemoDictionary.xml             a. Basic Team Details         <Name> – Specify the name of the team         <Description> – Specify a description to go with the team         <SetAsDefaultTeam> – This accepts a value “true/false” when set to true, the newly created team will be set as the default team in the project         <BacklogIterationPath> – Specify a backlog iteration path for the team     b. Iterations – The iterations you specify here will be set as the Teams iterations        <Iterations> – Accepts multiple <Iteration> nodes.        <Iteration> – This is the most granular level of an Iteration        <Path> – The path to the sprint, sample values, Release 1\Sprint 1 or Release 2\Sprint 2        <StartDate> – The sprint start date, this accepts the format yyyy-MM-dd        <FinishDate> – The sprint finish date, this accepts the format yyyy-MM-dd     c. Team Members – Team Members that need to be added to the newly created team will be added under this section         <TeamMembers> – Accepts multiple <TeamMember> nodes.         <TeamMember> – This is the most granular level of a Team Member         <User> – This accepts the username, if you are running this against TFSPreview then the live id of the user will need to be passed. If you are running this against TFS Server then the user id i.e. Domain\UserName will need to be passed          <Team> – Specify the name of the team that you want the user to be assigned to.     d. WorkItems – This section will allow you to add work items (product backlog Items and linked tasks) to the current sprint of the team         <WorkItems> – Accepts multiple <WorkItem> nodes.         <WorkItem> – Accepts one <ProductBacklogItem> and multiple <Task> nodes         <ProductBacklogItem> – Used to create a Product Backlog Item type work item               <Title> – The title of the Product Backlog Item               <Description> – The description of the Product Backlog Type Work Item               <AssignedTo> – Used to assign the work item to a team member. 
The team member name or email address can be passed.               <Effort> – The total effort required to complete the Product Backlog Item         <Task> – Used to create a linked task to the Product Backlog type work item               <Title> – The title of the task type work item               <Description> – The description of the Task Type Work Item               <AssignedTo> – Used to assign the work item to a team member. The team member name or email address can be passed.               <RemainingWork> – The remaining effort to complete the task type work item Step 4 – Setup the demo environment against the newly created Team Project 1. Run SetupTfsDemo.exe    2. Enter Y or y on the prompt to continue setting up TFS Demo setup.     3. Select the newly created Team project, for this blogpost I had created the Team Project – AdventureWorks, so that is what I’ll select in the Connect to TFS Server pop up    3. Click Connect and follow the messages that are written to the console application       Step 5 – Validate that the Demo environment is set up as per the configuration 1. The team web access is all lit up You have a Sprint, a burn down chart, team members…    2. The team Demo has been added and has been set up as the default team    3. The Sprint Backlog Iteration path, Sprints and Sprint start and finish dates have been set    4. The default area path has been setup    5. Taskboard – Backlog items view    6. Taskboard – Team members view      Step 6 – Exception Handling! 1. This solution has been tested against TFS 2012 Service/Server for the Scrum 2.1 process template. 2. You are likely to run into an exception if you mess up the config file 3. If the team already exists and you run the console app to set up the team (with the same name) you will run into exceptions. Please remember this is just an alpha release, if you have any feedback please leave a comment! Didn’t I say that it would just take 1 minute, Enjoy!
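    Pulling the settings described above together, a minimal DemoDictionary.xml might look something like the following. This is an illustrative sketch based only on the node descriptions in this post; the root element name, nesting details and sample values are assumptions, so check the Instructions.txt that ships with the download for the authoritative layout:

        <?xml version="1.0" encoding="utf-8"?>
        <DemoDictionary>
          <Name>Demo</Name>
          <Description>Demo team created by TfsDemoSetup</Description>
          <SetAsDefaultTeam>true</SetAsDefaultTeam>
          <BacklogIterationPath>Release 1</BacklogIterationPath>
          <Iterations>
            <Iteration>
              <Path>Release 1\Sprint 1</Path>
              <StartDate>2012-10-01</StartDate>
              <FinishDate>2012-10-14</FinishDate>
            </Iteration>
          </Iterations>
          <TeamMembers>
            <TeamMember>
              <User>someone@example.com</User>
              <Team>Demo</Team>
            </TeamMember>
          </TeamMembers>
          <WorkItems>
            <WorkItem>
              <ProductBacklogItem>
                <Title>Sample backlog item</Title>
                <Description>Created by the demo setup tool</Description>
                <AssignedTo>someone@example.com</AssignedTo>
                <Effort>5</Effort>
              </ProductBacklogItem>
              <Task>
                <Title>Sample task</Title>
                <Description>Linked child task</Description>
                <AssignedTo>someone@example.com</AssignedTo>
                <RemainingWork>3</RemainingWork>
              </Task>
            </WorkItem>
          </WorkItems>
        </DemoDictionary>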

    Read the article

< Previous Page | 330 331 332 333 334 335 336 337 338 339 340 341  | Next Page >