Search Results

Search found 4670 results on 187 pages for 'jvm arguments'.

  • HOWTO: disable jmx in activemq network of brokers (spring, xbean)

    - by subes
    Since I've struggled a lot with this problem, I am posting my solution. Disabling JMX in an ActiveMQ network of brokers removes race conditions around the registration of the JMX connector. When starting multiple ActiveMQ servers on the same machine you may see:

        Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.NameAlreadyBoundException: jmxrmi [Root exception is java.rmi.AlreadyBoundException: jmxrmi]

    Another problem is that this exception can occur even without a race condition, even when starting one broker after another while waiting for each to initialize properly in between. If the first instance is run by root and the other by a normal user, the user process somehow tries to register its own JMX connector even though one already exists. And here is the exception that happens when the broker that successfully registered the JMX connector goes down:

        Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused]

    These exceptions cause the network of brokers to stop working, or to not work at all. The trick is that JMX has to be disabled in the connection factory as well. The documentation at http://activemq.apache.org/jmx.html does not say explicitly that this is needed, so I struggled for two days until I found the solution:

        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:amq="http://activemq.apache.org/schema/core"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                                   http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core-5.3.1.xsd">

            <!-- Spring JMS template -->
            <bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
                <constructor-arg ref="connectionFactory" />
            </bean>

            <!-- Caching, so that the JMS template is actually usable, performance-wise -->
            <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
                <constructor-arg ref="amqConnectionFactory" />
                <property name="exceptionListener" ref="jmsExceptionListener" />
                <property name="sessionCacheSize" value="1" />
            </bean>

            <!-- Each client connects to its own broker; the brokers are networked with
                 each other. Only if JMX is disabled here as well does it actually stay
                 disabled... -->
            <amq:connectionFactory id="amqConnectionFactory" brokerURL="vm://broker:default?useJmx=false" />

            <!-- The brokers pick their own ports and connect to each other, forming a
                 grid. This is somewhat slower, but more resilient to failures.
                 See http://activemq.apache.org/networks-of-brokers.html -->
            <amq:broker useJmx="false" persistent="false">
                <!-- Required to disable JMX for good -->
                <amq:managementContext>
                    <amq:managementContext connectorHost="localhost" createConnector="false" />
                </amq:managementContext>
                <!-- Now the usual network-of-brokers configuration -->
                <amq:networkConnectors>
                    <amq:networkConnector networkTTL="1" duplex="true" dynamicOnly="true" uri="multicast://default" />
                </amq:networkConnectors>
                <amq:persistenceAdapter>
                    <amq:memoryPersistenceAdapter />
                </amq:persistenceAdapter>
                <amq:transportConnectors>
                    <amq:transportConnector uri="tcp://localhost:0" discoveryUri="multicast://default" />
                </amq:transportConnectors>
            </amq:broker>
        </beans>

    With this, there is no need to specify -Dcom.sun.management.jmxremote=false for the JVM, which somehow didn't work for me anyway, because the connection factory still started the JMX connector.
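    For completeness, the same configuration can also be applied programmatically instead of via Spring XML. The sketch below is my own minimal outline using the standard ActiveMQ BrokerService API, not code from the original post; the broker name and connector URI are illustrative assumptions:

        import org.apache.activemq.ActiveMQConnectionFactory;
        import org.apache.activemq.broker.BrokerService;

        public class NoJmxBroker {
            public static void main(String[] args) throws Exception {
                BrokerService broker = new BrokerService();
                broker.setBrokerName("broker");
                broker.setUseJmx(false);                                  // same as useJmx="false" on <amq:broker>
                broker.setPersistent(false);
                broker.getManagementContext().setCreateConnector(false);  // no RMI connector on port 1099
                broker.addConnector("tcp://localhost:0");                 // OS-assigned port, as in the XML above
                broker.start();

                // The connection factory must not re-enable JMX either; this mirrors
                // the useJmx=false option from the brokerURL in the Spring config.
                ActiveMQConnectionFactory factory =
                        new ActiveMQConnectionFactory("vm://broker?create=false");
                // ... create connections, sessions, producers/consumers as usual ...

                broker.stop();
            }
        }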

  • Code Contracts: Hiding ContractException

    - by DigiMortal
    It’s time to move on and improve the randomizer I wrote as an example of static checking of code contracts. In this posting I will modify the contracts and give some explanations about pre-conditions and post-conditions. I will also show you how to avoid ContractExceptions and how to replace them with your own exceptions. As a first thing, let’s take a look at my randomizer.

        public class Randomizer
        {
            public static int GetRandomFromRange(int min, int max)
            {
                var rnd = new Random();
                return rnd.Next(min, max);
            }

            public static int GetRandomFromRangeContracted(int min, int max)
            {
                Contract.Requires(min < max, "Min must be less than max");

                var rnd = new Random();
                return rnd.Next(min, max);
            }
        }

    We have some problems here. We need a contract for the method’s output, and we also need a better exception handling mechanism. As ContractException is a type hidden from us, we have to switch from ContractException to some other exception type that we can catch.

    Adding a post-condition

    Pre-conditions are contracts for a method’s input interface. Read it as follows: pre-conditions make sure that all conditions for the method’s successful run are met. Post-conditions are contracts for the output interface of a method, that is, for output arguments and the return value. My code misses the post-condition that checks the return value. The return value in this case must be greater than or equal to the minimum value and less than or equal to the maximum value. To make sure the method can return only a correct value, I added a call to the Contract.Ensures() method.

        public static int GetRandomFromRangeContracted(int min, int max)
        {
            Contract.Requires(min < max, "Min must be less than max");

            Contract.Ensures(
                Contract.Result<int>() >= min &&
                Contract.Result<int>() <= max,
                "Return value is out of range"
            );

            var rnd = new Random();
            return rnd.Next(min, max);
        }

    I think the line I added does not need any further comment.

    Avoiding ContractException for the input interface

    ContractException lives in a hidden namespace and we cannot see it at design time, but it is the common exception type for all contract failures unless we switch over to some other type. The case of Contract.Requires() is simple: we can tell it what kind of exception we want if the contract it ensures is violated.

        public static int GetRandomFromRangeContracted(int min, int max)
        {
            Contract.Requires<ArgumentOutOfRangeException>(
                min < max,
                "Min must be less than max"
            );

            Contract.Ensures(
                Contract.Result<int>() >= min &&
                Contract.Result<int>() <= max,
                "Return value is out of range"
            );

            var rnd = new Random();
            return rnd.Next(min, max);
        }

    Now, if we violate the input interface contract by giving a min value that is not less than the max value, we get an ArgumentOutOfRangeException.

    Avoiding ContractException for the output interface

    The output interface is more complex to control. We cannot specify an exception type there and hope that this type of exception will be thrown if something goes wrong. Instead we have to use a delegate that gathers information about the problem and throws the exception we expect to be thrown. In the documentation you can find the following example of the delegate I mentioned.

        Contract.ContractFailed += (sender, e) =>
        {
            e.SetHandled();
            e.SetUnwind(); // cause code to abort after event
            Assert.Fail(e.FailureKind.ToString() + ":" + e.DebugMessage);
        };

    We can use this delegate to throw the exception. Let’s move the code to a separate method too. Here is our method, which now uses ContractException hiding.

        public static int GetRandomFromRangeContracted(int min, int max)
        {
            Contract.Requires(min < max, "Min must be less than max");

            Contract.Ensures(
                Contract.Result<int>() >= min &&
                Contract.Result<int>() <= max,
                "Return value is out of range"
            );
            Contract.ContractFailed += Contract_ContractFailed;

            var rnd = new Random();
            // The +1000 deliberately violates the post-condition so the handler fires.
            return rnd.Next(min, max) + 1000;
        }

    And here is the delegate that creates the exception.

        public static void Contract_ContractFailed(object sender,
            ContractFailedEventArgs e)
        {
            e.SetHandled();
            e.SetUnwind();

            throw new Exception(e.FailureKind.ToString() + ":" + e.Message);
        }

    Basically, we can do whatever we like with output interface errors in this delegate. We can even introduce our own contract exception type. As you will see later, the ContractFailed event is very useful in unit testing.

  • How to pass XML to DB using XMLTYPE

    - by James Taylor
    Probably not a common use case, but I have seen it pop up from time to time. The question: how do I take XML from a queue or web service and insert it into a DB table using XMLTYPE? In this example I create a basic table with the field PAYLOAD of type XMLTYPE. I then take the full XML payload of the web service and insert it into that database table for auditing purposes. I use SOA Suite 11.1.1.2, using a composite and a mediator to link the web service with the DB adapter.

    1. Insert Database Objects

        -- Create XML_EXAMPLE_TBL
        CREATE TABLE xml_example_tbl (payload XMLTYPE);

        -- Create procedure LOAD_TEST_XML
        CREATE OR REPLACE PROCEDURE load_test_xml (xmlFile IN CLOB) IS
        BEGIN
          INSERT INTO xml_example_tbl (payload) VALUES (XMLTYPE(xmlFile));
        -- Handle the exceptions
        EXCEPTION
          WHEN OTHERS THEN
            raise_application_error(-20101, 'Exception occurred in loadPurchaseOrder procedure :'
                                    || SQLERRM || ' **** ' || xmlFile);
        END load_test_xml;
        /

    2. Creating New SOA Project TestXMLTYPE in JDeveloper

    In JDeveloper, either create a new Application or open an existing Application where you want to put this work. Under File -> New -> SOA Tier -> SOA Project, provide a name for the project, e.g. TestXMLType, and choose Empty Composite. When you have selected Empty Composite, click Finish.

    3. Create Database Connection to Stored Procedure

    A blank composite will be displayed. From the Component Palette, drag a Database Adapter to the External References panel, and configure the Database Adapter Wizard to connect to the DB procedure created above. Provide the service name InsertXML. Select a database connection where you installed the table and procedure above; if it doesn't exist, create a new one. Select Call a Stored Procedure or Function, then click Next. Choose the schema you installed your procedure in at step 1 and query for the LOAD_TEST_XML procedure. Click Next for the remaining screens until you get to the end, then click Finish to complete the database adapter wizard.

    4. Create the Web Service Interface

    Download the sample schema singleString.xsd, which will be used as the input for the web service. It does not matter what schema you use; this solution will work with any, so feel free to use your own if required. Drag the Web Service from the component palette to the Exposed Services panel on the composite. Provide the name InvokeXMLLoad for the service and click the cog icon. Click the magnifying glass next to the URL to browse to the location where you downloaded the XML schema above. Import the schema file by selecting the import schema icon, browse to where you downloaded singleString.xsd, click OK for the Import Schema File, then select the singleString node of the imported schema. Accept all the defaults until you get back to the Web Service wizard screen, then click OK. This step has created a WSDL based on the schema we downloaded earlier. Your composite should now look something like this.

    5. Create the Mediator Routing Rules

    Drag a Mediator component into the middle of the composite, into the Components panel. Give it the name Route and accept the defaults. Link the services up to the Mediator by connecting the reference points so your composite looks like this.

    6. Perform Translations between Web Service and the Database Adapter

    From the composite, double-click the Route Mediator to show the Map Plan. Select the transformation icon to create the XSLT translation file. Choose Create New Mapper File and accept the defaults. From the Component Palette, drag the get-content-as-string function into the middle of the translation file. Your translation file should look something like this. Now we need to map the root element of the source, singleString, to the XMLTYPE of the database adapter, applying the function get-content-as-string. To do this, drag the element singleString to the left side of the get-content-as-string function, and drag the right side of get-content-as-string to the XMLFILE element of the database adapter, so the mapping looks like this. You have now completed the SOA component; you can now save your work, deploy, and test. When you deploy, I have assumed that you have the correct database configurations in the WebLogic Console based on the connection you set up to the stored procedure.

    7. Testing the Application

    Open Enterprise Manager, navigate to the TestXMLTYPE composite, and click the Test button. Load some dummy values into the Input Arguments and click the 'Test Web Service' button. Once completed, you can run a SQL statement to check the result. In this instance I have just used JDeveloper and opened a SQL Worksheet:

        select * from xml_example_tbl;

    You should see the full payload in the result.
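    Outside of SOA Suite, the LOAD_TEST_XML procedure can also be exercised directly over JDBC, which is handy for a quick smoke test. The snippet below is a hypothetical sketch, not part of the article; the connection URL, credentials, and payload are placeholder assumptions:

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;

        public class LoadTestXmlClient {
            public static void main(String[] args) throws Exception {
                // Placeholder payload roughly matching the singleString schema used above
                String xml = "<singleString><input>hello</input></singleString>";

                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//localhost:1521/XE", "scott", "tiger");
                     CallableStatement call = conn.prepareCall("{call load_test_xml(?)}")) {
                    call.setString(1, xml);  // bound as the CLOB argument xmlFile
                    call.execute();          // the procedure wraps it in XMLTYPE and inserts
                    conn.commit();
                }
            }
        }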

  • Java: micro-optimizing array manipulation

    - by Martin Wiboe
    Hello all, I am trying to make a Java port of a simple feed-forward neural network. This obviously involves lots of numeric calculations, so I am trying to optimize my central loop as much as possible. The results should be correct within the limits of the float data type. My current code looks as follows (error handling & initialization removed):

        /**
         * Simple implementation of a feedforward neural network. The network supports
         * including a bias neuron with a constant output of 1.0 and weighted synapses
         * to hidden and output layers.
         *
         * @author Martin Wiboe
         */
        public class FeedForwardNetwork {
            private final int outputNeurons;     // No of neurons in output layer
            private final int inputNeurons;      // No of neurons in input layer
            private int largestLayerNeurons;     // No of neurons in largest layer
            private final int numberLayers;      // No of layers
            private final int[] neuronCounts;    // Neuron count in each layer, 0 is input layer.
            private final float[][][] fWeights;  // Weights between neurons.
                                                 // fWeight[fromLayer][fromNeuron][toNeuron]
                                                 // is the weight from fromNeuron in fromLayer
                                                 // to toNeuron in layer fromLayer+1.
            private float[][] neuronOutput;      // Temporary storage of output from previous layer

            public float[] compute(float[] input) {
                // Copy input values to input layer output
                for (int i = 0; i < inputNeurons; i++) {
                    neuronOutput[0][i] = input[i];
                }

                // Loop through layers
                for (int layer = 1; layer < numberLayers; layer++) {
                    // Loop over neurons in the layer and determine weighted input sum
                    for (int neuron = 0; neuron < neuronCounts[layer]; neuron++) {
                        // Bias neuron is the last neuron in the previous layer
                        int biasNeuron = neuronCounts[layer - 1];

                        // Get weighted input from bias neuron - output is always 1.0
                        float activation = 1.0F * fWeights[layer - 1][biasNeuron][neuron];

                        // Get weighted inputs from rest of neurons in previous layer
                        for (int inputNeuron = 0; inputNeuron < biasNeuron; inputNeuron++) {
                            activation += neuronOutput[layer - 1][inputNeuron]
                                    * fWeights[layer - 1][inputNeuron][neuron];
                        }

                        // Store neuron output for next round of computation
                        neuronOutput[layer][neuron] = sigmoid(activation);
                    }
                }

                // Return output from network = output from last layer
                float[] result = new float[outputNeurons];
                for (int i = 0; i < outputNeurons; i++)
                    result[i] = neuronOutput[numberLayers - 1][i];

                return result;
            }

            private final static float sigmoid(final float input) {
                return (float) (1.0F / (1.0F + Math.exp(-1.0F * input)));
            }
        }

    I am running the JVM with the -server option, and as of now my code is between 25% and 50% slower than similar C code. What can I do to improve this situation? Thank you, Martin Wiboe
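    One concrete micro-optimization worth trying (my suggestion, not from the original post) is to hoist the repeated fWeights[layer - 1] and neuronOutput[layer - 1] lookups out of the inner loop into local variables, which removes redundant array indirections and bounds checks. A behavior-preserving sketch of compute() rewritten this way:

        public float[] compute(float[] input) {
            System.arraycopy(input, 0, neuronOutput[0], 0, inputNeurons);

            for (int layer = 1; layer < numberLayers; layer++) {
                final float[][] weights = fWeights[layer - 1];      // weights leaving the previous layer
                final float[] prevOutput = neuronOutput[layer - 1]; // outputs of the previous layer
                final float[] curOutput = neuronOutput[layer];
                final int biasNeuron = neuronCounts[layer - 1];

                for (int neuron = 0; neuron < neuronCounts[layer]; neuron++) {
                    // Bias output is always 1.0, so its term is just the weight
                    float activation = weights[biasNeuron][neuron];

                    for (int inputNeuron = 0; inputNeuron < biasNeuron; inputNeuron++) {
                        activation += prevOutput[inputNeuron] * weights[inputNeuron][neuron];
                    }
                    curOutput[neuron] = sigmoid(activation);
                }
            }

            float[] result = new float[outputNeurons];
            System.arraycopy(neuronOutput[numberLayers - 1], 0, result, 0, outputNeurons);
            return result;
        }

    Transposing fWeights so that the innermost loop walks sequential memory (weights[neuron][inputNeuron]) is a common follow-up, since it improves cache locality.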

  • terminating java applet thread on page unload

    - by Hammad Tariq
    Hello, I have a problem terminating threads. I am using an applet thread to access the JS DOM and manipulate the innerHTML of a span, writing the thread name as the innerHTML. It works fine while I keep refreshing the page, but if I don't give it a sleep of around 200ms and go to another page which doesn't have any applet on it, I get an error in the Firebug console:

        document.getElementById("userName") is null

    What I suspect is that, although I send an interrupt on stop (and I have also tried the same with a running flag variable checked in the run loop), the thread executes much faster than the time the JVM takes to unload it, so the page changes in the browser but the applet is still trying to find an element with id userName. Is that true? Is there any workaround for it? Is there any way to know how many threads are running at the moment? Is there any kill-all command to send in destroy()? I have searched a lot and gone through the Java thread and applet APIs, but couldn't get this to work. Below is the code:

        import java.applet.Applet;
        import netscape.javascript.*;

        // No need to extend JApplet, since we don't add any components;
        // we just paint.
        public class firstApplet extends Applet {

            private static final long serialVersionUID = 1L;

            firstThread first;
            boolean running = false;

            public class firstThread extends Thread {
                JSObject win;
                JSObject doc;
                firstThread second;

                public firstThread(JSObject window) {
                    this.win = window;
                }

                public void run() {
                    try {
                        System.out.println("running... ");
                        while (!isInterrupted()) {
                            doc = (JSObject) win.getMember("document");
                            String userName = Thread.currentThread().getName();
                            //String userName = "John Doe";
                            String S = "document.getElementById(\"userName\").innerHTML = \"" + userName + "\";";
                            doc.eval(S);
                            Thread.sleep(5);
                        }
                    } catch (Exception e) {
                        System.out.println("exception in first thread... ");
                        e.printStackTrace();
                    }
                }
            }

            public void init() {
                System.out.println("initializing... ");
            }

            public void start() {
                System.out.println("starting... ");
                JSObject window = JSObject.getWindow(this);
                first = new firstThread(window);
                first.start();
                running = true;
            }

            public void stop() {
                first.interrupt();
                first = null;
                System.out.println("stopping... ");
            }

            public void destroy() {
                if (first != null) {
                    first = null;
                }
                System.out.println("preparing for unloading...");
            }
        }

    Thanks
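    One possible direction (my suggestion, not part of the question): make stop() block until the worker has actually finished by pairing a volatile flag with Thread.join(), so the page cannot unload while the thread is still touching the DOM. A minimal sketch, with illustrative names:

        // Sketch of an applet whose stop() waits for the worker thread to exit.
        public class StoppableApplet extends java.applet.Applet {

            private volatile boolean running; // visible across threads without locking
            private Thread worker;

            public void start() {
                running = true;
                worker = new Thread(new Runnable() {
                    public void run() {
                        while (running) {
                            // ... access the JS DOM here, as in firstThread above ...
                            try {
                                Thread.sleep(5);
                            } catch (InterruptedException e) {
                                return; // interrupted during sleep: exit promptly
                            }
                        }
                    }
                });
                worker.start();
            }

            public void stop() {
                running = false;       // ask the loop to exit
                worker.interrupt();    // wake it if it is sleeping
                try {
                    worker.join(1000); // hold up unload until the thread is really done
                } catch (InterruptedException ignored) {
                }
            }
        }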

  • Silverlight for Windows Embedded Tutorial (step 5 and a bit of Windows Phone 7)

    - by Valter Minute
    If you haven’t spent the last week in the middle of the Sahara desert or traveling on a sled near the North Pole, you should have heard something about the launch of Windows Phone 7 Series (or Windows Phone Series 7, or Windows Series Phone 7, or something like that). Even if you are in the middle of the desert or somewhere around the North Pole, you may have been reached by the news, since it seems that WP7S (using the full name will kill my available bandwidth!) is generating a lot of buzz in the development and IT communities. One of the most important aspects of this new platform is that it will be programmed using a new set of tools and frameworks, completely different from the ones used on older releases of Windows Mobile (or SmartPhone, or PocketPC, or whatever…). WP7S applications can be developed using Silverlight or XNA. If you want to learn more about WP7S development, you can download the preview of Charles Petzold’s book about it: http://www.charlespetzold.com/phone/index.html. Charles Petzold is also the author of “Programming Windows”, the first book I ever read about programming on Windows (it was Windows 3.0 at that time!). The fact that even I was able to learn how to develop Windows applications is proof of the quality of Petzold’s work. This book is up to his standards, and the 150-page preview is already rich in technical content without being boring or complicated to understand. I may be able to become a Windows Phone developer thanks to Mr. Petzold. Mr. Petzold uses some nice samples to introduce the basic concepts of Silverlight development on WP7S. On this new platform you’ll use managed code to develop your applications, so those samples can’t be ported to Windows CE R3 as they are, but I would like to take one of the first samples (called “SilverlightTapHello1”) and adapt it to Silverlight for Windows Embedded, to show that even plain old native code can be used to develop “cool” user interfaces! The sample shows the standard WP7S title header and a TextBlock with a hello world message inside it. When the user touches the TextBlock, it will change its color. When the user touches the background (Grid) behind it, its default color (plain old White) will be restored. Let’s see how we can implement the same features on our embedded device! I took the XAML code of the sample (you can download the book samples here: http://download.microsoft.com/download/1/D/B/1DB49641-3956-41F1-BAFA-A021673C709E/CodeSamples_DRAFTPreview_ProgrammingWindowsPhone7Series.zip) and changed it a little bit to remove references to WP7S or the managed runtime. If you compare the resulting files, you will see that I was able to keep all the resources inside the App.xaml file and the structure of MainPage.XAML almost intact.
    This is the Silverlight for Windows Embedded version of MainPage.XAML:

        <UserControl x:Class="SilverlightTapHello1.MainPage"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:phoneNavigation="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Navigation"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            mc:Ignorable="d" d:DesignWidth="480" d:DesignHeight="800"
            FontFamily="{StaticResource PhoneFontFamilyNormal}"
            FontSize="{StaticResource PhoneFontSizeNormal}"
            Foreground="{StaticResource PhoneForegroundBrush}"
            Width="640" Height="480">

            <Grid x:Name="LayoutRoot" Background="{StaticResource PhoneBackgroundBrush}">
                <Grid.RowDefinitions>
                    <RowDefinition Height="Auto"/>
                    <RowDefinition Height="*"/>
                </Grid.RowDefinitions>

                <!--TitleGrid is the name of the application and page title-->
                <Grid x:Name="TitleGrid" Grid.Row="0">
                    <TextBlock Text="SILVERLIGHT TAP HELLO #1" x:Name="textBlockPageTitle" Style="{StaticResource PhoneTextPageTitle1Style}"/>
                    <TextBlock Text="main page" x:Name="textBlockListTitle" Style="{StaticResource PhoneTextPageTitle2Style}"/>
                </Grid>

                <!--ContentGrid is empty. Place new content here-->
                <Grid x:Name="ContentGrid" Grid.Row="1" MouseLeftButtonDown="ContentGrid_MouseButtonDown" Background="{StaticResource PhoneBackgroundBrush}">
                    <TextBlock x:Name="TextBlock" Text="Hello, Silverlight for Windows Embedded!" HorizontalAlignment="Center" VerticalAlignment="Center" />
                </Grid>
            </Grid>
        </UserControl>

    If you compare it to the WP7S sample (not reproduced here to avoid any copyright issue), you’ll notice that I had to replace the original phoneNavigation:PhoneApplicationPage with UserControl as the root node. This makes sense because there is no support for phone applications on CE 6. I also had to specify the width and height of my main page (on the WP7S device this will be adjusted by the OS), and I had to replace the multi-touch event handler with the MouseLeftButtonDown event (still no multi-touch support in Windows CE R3). I also changed the hello message, of course. I used XAML2CPP to generate the boring part of our application and then added the initialization code to WinMain:

        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                           LPTSTR lpCmdLine, int nCmdShow)
        {
            if (!XamlRuntimeInitialize())
                return -1;

            HRESULT retcode;
            IXRApplicationPtr app;

            if (FAILED(retcode = GetXRApplicationInstance(&app)))
                return -1;

            XRXamlSource dictsrc;
            dictsrc.SetResource(hInstance, TEXT("XAML"), IDR_XAML_App);

            if (FAILED(retcode = app->LoadResourceDictionary(&dictsrc, NULL)))
                return -1;

            MainPage page;

            if (FAILED(page.Init(hInstance, app)))
                return -1;

            UINT exitcode;

            if (FAILED(page.GetVisualHost()->StartDialog(&exitcode)))
                return -1;

            return exitcode;
        }

    You may have noticed that there is something different from the previous samples: I added the code to load a resource dictionary. Resources are an important feature of XAML that allows you to define some values that can be referenced inside any XAML file loaded by the runtime. You can use resources to define custom styles for your fonts, backgrounds, controls, etc., and to support internationalization by providing different strings for different languages. The rest of our WinMain isn’t that different. It creates an instance of our MainPage object and displays it. The MainPage class implements an event handler for the MouseLeftButtonDown event of the ContentGrid:

        class MainPage : public TMainPage<MainPage>
        {
        public:
            HRESULT ContentGrid_MouseButtonDown(IXRDependencyObject* source,
                                                XRMouseButtonEventArgs* args)
            {
                HRESULT retcode;
                IXRSolidColorBrushPtr brush;
                IXRApplicationPtr app;

                if (FAILED(retcode = GetXRApplicationInstance(&app)))
                    return retcode;

                if (FAILED(retcode = app->CreateObject(IID_IXRSolidColorBrush, &brush)))
                    return retcode;

                COLORREF color = RGBA(0xff, 0xff, 0xff, 0xff);

                if (args->pOriginalSource == TextBlock)
                    color = RGBA(rand() & 0xFF, rand() & 0xFF, rand() & 0xFF, 0xFF);

                if (FAILED(retcode = brush->SetColor(color)))
                    return retcode;

                if (FAILED(retcode = TextBlock->SetForeground(brush)))
                    return retcode;

                return S_OK;
            }
        };

    As you can see, this event is generated when a user clicks inside the grid or inside one of the objects it contains. Since our TextBlock is inside the grid, we don’t need to provide an event handler for its MouseLeftButtonDown event. We can just use the pOriginalSource member of the event arguments to check if the event was generated inside the TextBlock. If the event was generated inside the grid, we create a white brush; if it’s inside the TextBlock, we create some randomly colored brush. Notice that we need to use the RGBA macro to create colors, specifying also a transparency value for them. If we use the RGB macro, the resulting color will have its Alpha channel set to zero and will be transparent. Using the SetForeground method we can change the color of our control. You can compare this to the managed code on pages 40-41 of Petzold’s preview book, and you’ll see that the native version isn’t much more complex than the managed one. As usual, you can download the full code of the sample here: http://cid-9b7b0aefe3514dc5.skydrive.live.com/self.aspx/.Public/SilverlightTapHello1.zip And remember to pre-order Charles Petzold’s “Programming Windows Phone 7 Series”; I bet it will be a best-seller! Technorati Tags: Silverlight for Windows Embedded, Windows CE

  • My Thoughts On the Xbox 180

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2013/06/21/my-thoughts-on-the-xbox-180.aspx

    Everyone seems to be putting their 0.00237 cents into the wishing well over Microsoft's recent decision to reverse the DRM policy on the Xbox One. However, there have been a few issues that nobody has touched. As such, I have decided to dig 0.00237 cents out of my pocket.

    First, let me be clear about this point. I do not support the decision to reverse the DRM policy on the Xbox One. I wanted that point to be expressed first and unambiguously. I will say it again. I do not support the decision to reverse the DRM policy on the Xbox One.

    Now that I have that out of the way, let me go into my rationale. This decision removes most of the cool features that enticed me to pre-order the console. No, I didn't cancel my pre-order. There are still five months before the release of the console, and there is still a plethora of information that we, as consumers, do not have. With that, it should be noted that much of the talk in this post is speculation and rhetoric.

    The persistent connection would have allowed the console to do many of the functions for which we have been begging. That demo where someone was playing Ryse, seamlessly accepted a multiplayer challenge in Killer Instinct, played the match (and a rematch,) and then jumped back into Ryse? That's gone, if you bought the game on disc. The new, DRM-free system will require the disc in the system to play a game. That bullet point where one Xbox Live account could have up to 10 slave accounts so families could play together, no matter where they were located? That's gone as well. The promise of huge, expansive, dynamically changing worlds that was brought to us with the power of cloud computing? Well, "the people" didn't want there to be a forced, persistent connection. As such, developers can't rely on a connection and, as such, that feature is gone. This is akin to the removal of the hard drive on the Xbox 360. The list continues, but the enthusiast press has enumerated the list far better than I wish to.

    All of this is because the Xbox team saw the HUGE success of Steam and decided to borrow a few ideas. Yes, Steam. The service that everyone hated for the first six months (for the same reasons the Xbox One is getting flack.) There was an initial growing pain. However, it is now lauded as the way games distribution should be handled. Unless you are Microsoft.

    I do find it curious that many of the features were originally announced for the PS4 during its unveiling. However, much of that was left strangely absent from Sony's E3 press conference. Instead, we received a single, static slide that basically said the exact opposite of Microsoft's plans. It is not farfetched to believe that slide came into existence during the approximately seven hours between the two media briefings.

    The thing that majorly annoys me about this whole kerfuffle is that the single thing that caused the call to arms is, really, not an issue. Microsoft never said they were going to block used sales. They said it was up to the publisher to make that decision. This would have allowed publishers to reclaim some of the costs of development in subsequent sales of the product. If you sell your game to GameStop for 7 USD, GameStop is going to sell it for 55 USD. That is 48 USD pure profit for them. Some publishers asked GameStop for a small cut. Was this a huge, money-grubbing scheme? Well, yes, but the idea was that they have to handle server infrastructure for dormant accounts, etc. Of course, GameStop flatly refused, and the Online Pass was born. Fortunately, this trend didn't last, and most publishers have stopped the practice.

    The ability to sell "licenses" has already begun to be challenged. Are you living in the EU? If so, companies must allow you to sell digital property. With this precedent in place, it's only a matter of time before other areas follow suit. If GameStop were smart, they would have immediately contacted every publisher out there to get the rights to become a clearing house for these licenses. Then they could keep their business model and reduce their brick-and-mortar footprint.

    The digital landscape is changing. We need to not block this process. As Seth MacFarlane best said: "Some issues are so important that you should drag people kicking and screaming." I believe this was said on an episode of Real Time with Bill Maher about the issue of gay marriage. Much like the original source, this is an issue where we need to drag people to the correct, progressive position. Microsoft, as a company, actually has the resources to weather the transition period. They have a great pool of first- and second-party developers that can leverage this new framework to prove its validity. Over time, third-party developers will get excited to use these tools.

    As an old C++ guy, I resisted C# for years. Now, I think it's one of the best languages I've ever used. I have a server room and a co-lo full of servers, so I originally didn't see the value in Azure. Now, I wish I could move every one of my projects into the cloud. I still LOVE getting physical packaging, as my music and games collection will proudly attest. However, I have started to see the value in pure digital, and have found ways to integrate this into the ways I consume those products.

    I can, honestly, understand how some parts of the population would be very apprehensive about this new landscape. There were valid arguments about people with no internet access. There are ways to combat these problems. These methods do not require us to throw the baby out with the bathwater. However, the number of people in the computer industry that I have seen cry foul is truly appalling. We are the forward-looking people who help show how technology can improve people's lives. If we can't see the value of the brief pain involved with an exciting new ecosystem, then who will?

  • Using JDialog with Tabbed Pane to draw different pictures [migrated]

    - by Bryam Ulloa
    I am using NetBeans, and I have a class that extends JDialog. Inside that dialog box I have created a JTabbedPane. The tabbed pane contains 6 different tabs, with 6 different panels of course. What I want to do is: when I click on the different tabs, a diagram is supposed to be drawn with the paint method. My question is, how can I draw on the different panels with just one paint method in another class, called from the dialog class? Here is my code for the dialog class:

        package GUI;

        public class NewJDialog extends javax.swing.JDialog {

            /**
             * Creates new form NewJDialog
             */
            public NewJDialog(java.awt.Frame parent, boolean modal) {
                super(parent, modal);
                initComponents();
            }

            /**
             * This method is called from within the constructor to initialize the form.
             * WARNING: Do NOT modify this code. The content of this method is always
             * regenerated by the Form Editor.
             */
            @SuppressWarnings("unchecked")
            // <editor-fold defaultstate="collapsed" desc="Generated Code">
            private void initComponents() {

                jTabbedPane1 = new javax.swing.JTabbedPane();
                jPanel1 = new javax.swing.JPanel();
                jPanel2 = new javax.swing.JPanel();
                jPanel3 = new javax.swing.JPanel();
                jPanel4 = new javax.swing.JPanel();
                jPanel5 = new javax.swing.JPanel();
                jPanel6 = new javax.swing.JPanel();
                jPanel7 = new javax.swing.JPanel();
                jLabel1 = new javax.swing.JLabel();
                jLabel2 = new javax.swing.JLabel();

                setDefaultCloseOperation(javax.swing.WindowConstants.DISPOSE_ON_CLOSE);

                javax.swing.GroupLayout jPanel1Layout = new javax.swing.GroupLayout(jPanel1);
                jPanel1.setLayout(jPanel1Layout);
                jPanel1Layout.setHorizontalGroup(jPanel1Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING).addGap(0, 466, Short.MAX_VALUE));
                jPanel1Layout.setVerticalGroup(jPanel1Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING).addGap(0, 242, Short.MAX_VALUE));
                jTabbedPane1.addTab("FCFS", jPanel1);

                javax.swing.GroupLayout jPanel2Layout = new javax.swing.GroupLayout(jPanel2);
                jPanel2.setLayout(jPanel2Layout);
                jPanel2Layout.setHorizontalGroup(jPanel2Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING).addGap(0, 466, Short.MAX_VALUE));
                jPanel2Layout.setVerticalGroup(jPanel2Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING).addGap(0, 242, Short.MAX_VALUE));
                jTabbedPane1.addTab("SSTF", jPanel2);

                javax.swing.GroupLayout jPanel3Layout = new javax.swing.GroupLayout(jPanel3);
                jPanel3.setLayout(jPanel3Layout);
                jPanel3Layout.setHorizontalGroup(jPanel3Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING).addGap(0, 466, Short.MAX_VALUE));
                jPanel3Layout.setVerticalGroup(jPanel3Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING).addGap(0, 242, Short.MAX_VALUE));
                jTabbedPane1.addTab("LOOK", jPanel3);

                javax.swing.GroupLayout jPanel4Layout = new javax.swing.GroupLayout(jPanel4);
                jPanel4.setLayout(jPanel4Layout);
                jPanel4Layout.setHorizontalGroup(jPanel4Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING).addGap(0, 466, Short.MAX_VALUE));
                jPanel4Layout.setVerticalGroup(jPanel4Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING).addGap(0, 242, Short.MAX_VALUE));
                jTabbedPane1.addTab("LOOK C", jPanel4);

                javax.swing.GroupLayout jPanel5Layout = new javax.swing.GroupLayout(jPanel5);
                jPanel5.setLayout(jPanel5Layout);
                jPanel5Layout.setHorizontalGroup(jPanel5Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING).addGap(0, 466, Short.MAX_VALUE));
                jPanel5Layout.setVerticalGroup(jPanel5Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING).addGap(0, 242, Short.MAX_VALUE));
                jTabbedPane1.addTab("SCAN", jPanel5);

                javax.swing.GroupLayout jPanel6Layout = new javax.swing.GroupLayout(jPanel6);
                jPanel6.setLayout(jPanel6Layout);
                jPanel6Layout.setHorizontalGroup(jPanel6Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING).addGap(0, 466, Short.MAX_VALUE));
                jPanel6Layout.setVerticalGroup(jPanel6Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING).addGap(0, 242, Short.MAX_VALUE));
                jTabbedPane1.addTab("SCAN C", jPanel6);

                getContentPane().add(jTabbedPane1, java.awt.BorderLayout.CENTER);

                jLabel1.setText("Distancia:");
                jLabel2.setText("___________");

                javax.swing.GroupLayout jPanel7Layout = new javax.swing.GroupLayout(jPanel7);
                jPanel7.setLayout(jPanel7Layout);
                jPanel7Layout.setHorizontalGroup(
                    jPanel7Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
                    .addGroup(jPanel7Layout.createSequentialGroup()
                        .addGap(21, 21, 21)
                        .addComponent(jLabel1)
                        .addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.RELATED)
                        .addComponent(jLabel2)
                        .addContainerGap(331, Short.MAX_VALUE))
                );
                jPanel7Layout.setVerticalGroup(
                    jPanel7Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
                    .addGroup(jPanel7Layout.createSequentialGroup()
                        .addContainerGap()
                        .addGroup(jPanel7Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.BASELINE)
                            .addComponent(jLabel1)
                            .addComponent(jLabel2))
                        .addContainerGap(15, Short.MAX_VALUE))
                );
                getContentPane().add(jPanel7, java.awt.BorderLayout.PAGE_START);

                pack();
            }// </editor-fold>

            /**
             * @param args the command line arguments
             */
            public static void main(String args[]) {
                /* Set the Nimbus look and feel */
                //<editor-fold defaultstate="collapsed" desc=" Look and feel setting code (optional) ">
                /* If Nimbus (introduced in Java SE 6) is not available, stay with the default look and feel.
                 * For details see http://download.oracle.com/javase/tutorial/uiswing/lookandfeel/plaf.html
                 */
                try {
                    for (javax.swing.UIManager.LookAndFeelInfo info : javax.swing.UIManager.getInstalledLookAndFeels()) {
                        if ("Nimbus".equals(info.getName())) {
                            javax.swing.UIManager.setLookAndFeel(info.getClassName());
                            break;
                        }
                    }
                } catch (ClassNotFoundException ex) {
                    java.util.logging.Logger.getLogger(NewJDialog.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
                } catch (InstantiationException ex) {
                    java.util.logging.Logger.getLogger(NewJDialog.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
                } catch (IllegalAccessException ex) {
                    java.util.logging.Logger.getLogger(NewJDialog.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
                } catch (javax.swing.UnsupportedLookAndFeelException ex) {
                    java.util.logging.Logger.getLogger(NewJDialog.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
                }
                //</editor-fold>

                /* Create and display the dialog */
                java.awt.EventQueue.invokeLater(new Runnable() {
                    public void run() {
                        NewJDialog dialog = new NewJDialog(new javax.swing.JFrame(), true);
                        dialog.addWindowListener(new java.awt.event.WindowAdapter() {
                            @Override
                            public void windowClosing(java.awt.event.WindowEvent e) {
                                System.exit(0);
                            }
                        });
                        dialog.setVisible(true);
                    }
                });
            }

            // Variables declaration - do not modify
            private javax.swing.JLabel jLabel1;
            private javax.swing.JLabel jLabel2;
            private javax.swing.JPanel jPanel1;
            private javax.swing.JPanel jPanel2;
            private javax.swing.JPanel jPanel3;
            private javax.swing.JPanel jPanel4;
            private javax.swing.JPanel jPanel5;
            private javax.swing.JPanel jPanel6;
            private javax.swing.JPanel jPanel7;
            private javax.swing.JTabbedPane jTabbedPane1;
            // End of variables declaration
        }

    This is another class that I have created for the paint method (note the loop below stops one element early so that pistas[i + 1] stays in bounds, fixing an off-by-one in my original version):

        package GUI;

        import java.awt.Graphics;

        /**
         * @author TOSHIBA
         */
        public class Lienzo {

            private int width = 5;
            private int height = 5;
            private int y = 5;
            private int x = 0;
            private int x1 = 0;

            // I'm not sure if this is the correct way to do it.
            // The diagram gets drawn according to values from an array.
            // The array is not always the same; that's why I used the different panels.
            public Graphics Draw(Graphics g, int[] pistas) {
                // Stop before the last element to avoid reading past the array end.
                for (int i = 0; i < pistas.length - 1; i++) {
                    x = pistas[i];
                    x1 = pistas[i + 1];
                    g.drawOval(x, y, width, height);
                    g.drawString(Integer.toString(x), x, y);
                    g.drawLine(x, y, x1, y);
                }
                return g;
            }
        }

    I hope you guys understand what I am trying to do with my program.
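    One way to get a single paint routine shared by all six tabs (my suggestion, not from the question) is to give each tab its own instance of a small JPanel subclass whose paintComponent delegates to Lienzo with that tab's data array. Names here are illustrative:

        import java.awt.Graphics;
        import javax.swing.JPanel;

        // One reusable panel class: each tab gets its own instance with its own
        // data array, but all of them share the single drawing routine in Lienzo.
        public class DiagramPanel extends JPanel {

            private final int[] pistas;
            private final Lienzo lienzo = new Lienzo();

            public DiagramPanel(int[] pistas) {
                this.pistas = pistas;
            }

            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g); // clear the background before drawing
                lienzo.Draw(g, pistas);
            }
        }

    In the dialog, the empty generated panels would then be replaced with instances of this class, for example jTabbedPane1.addTab("FCFS", new DiagramPanel(fcfsOrder)) and jTabbedPane1.addTab("SSTF", new DiagramPanel(sstfOrder)), where fcfsOrder and sstfOrder are the per-algorithm arrays.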

  • C#: LINQ vs foreach - Round 1.

    - by James Michael Hare
    So I was reading Peter Kellner's blog entry on Resharper 5.0 and its LINQ refactoring and thought that was very cool. But that raised a point I had always been curious about: which is the better choice, manual foreach loops or LINQ?

    The answer is not really clear-cut. There are two sides to any code cost argument: performance and maintainability. The first of these is obvious and quantifiable. Given any two pieces of code that perform the same function, you can run them side-by-side and see which piece of code performs better.

    Unfortunately, this is not always a good measure. Well-written assembly language outperforms well-written C++ code, but you lose a lot in maintainability, which creates a big technical debt load that is hard to offset as the application ages. In contrast, higher-level constructs make the code more brief and easier to understand, hence reducing technical cost.

    Now, obviously in this case we're not talking about two separate languages; we're comparing doing something manually in the language versus using a higher-order set of IEnumerable extensions that are in the System.Linq library.

    Well, before we discuss any further, let's look at some sample code and the numbers. First, let's take a look at the foreach loop and the LINQ expression. This is just a simple find comparison:

        // find implemented via LINQ
        public static bool FindViaLinq(IEnumerable<int> list, int target)
        {
            return list.Any(item => item == target);
        }

        // find implemented via standard iteration
        public static bool FindViaIteration(IEnumerable<int> list, int target)
        {
            foreach (var i in list)
            {
                if (i == target)
                {
                    return true;
                }
            }

            return false;
        }

    Okay, looking at this from a maintainability point of view, the LINQ expression is definitely more concise (8 lines down to 1) and is very readable in intention. You don't have to actually analyze the behavior of the loop to determine what it's doing.

    So let's take a look at performance metrics from 100,000 iterations of these methods on a List<int> of varying sizes filled with random data. For this test, we fill a target array with 100,000 random integers and then run the exact same pseudo-random targets through both searches.

        List<T> on 100,000 iterations

        Method      Size     Total (ms)  Per Iteration (ms)  % Slower
        Any         10       26          0.00046             30.00%
        Iteration   10       20          0.00023             -
        Any         100      116         0.00201             18.37%
        Iteration   100      98          0.00118             -
        Any         1000     1058        0.01853             16.78%
        Iteration   1000     906         0.01155             -
        Any         10,000   10,383      0.18189             17.41%
        Iteration   10,000   8843        0.11362             -
        Any         100,000  104,004     1.8297              18.27%
        Iteration   100,000  87,941      1.13163             -

    The LINQ expression is running about 17% slower for average-size collections and worse for smaller collections. Presumably, this is due to the overhead of the state machine used to track the iterators for the yield returns in the LINQ expressions, which seems about right in a tight loop such as this.

    So what about other LINQ expressions? After all, Any() is one of the more trivial ones. I decided to try the TakeWhile() algorithm, using a Count() to get the position where it stopped, like the sample Pete was using in his blog that Resharper refactored for him into LINQ:

        // LINQ form
        public static int GetTargetPosition1(IEnumerable<int> list, int target)
        {
            return list.TakeWhile(item => item != target).Count();
        }

        // traditionally iterative form
        public static int GetTargetPosition2(IEnumerable<int> list, int target)
        {
            int count = 0;

            foreach (var i in list)
            {
                if (i == target)
                {
                    break;
                }

                ++count;
            }

            return count;
        }

    Once again, the LINQ expression is much shorter, easier to read, and should be easier to maintain over time, reducing the cost of technical debt. So I ran these through the same test data:

        List<T> on 100,000 iterations

        Method      Size     Total (ms)  Per Iteration (ms)  % Slower
        TakeWhile   10       41          0.00041             128%
        Iteration   10       18          0.00018             -
        TakeWhile   100      171         0.00171             88%
        Iteration   100      91          0.00091             -
        TakeWhile   1000     1604        0.01604             94%
        Iteration   1000     825         0.00825             -
        TakeWhile   10,000   15765       0.15765             92%
        Iteration   10,000   8204        0.08204             -
        TakeWhile   100,000  156950      1.5695              92%
        Iteration   100,000  81635       0.81635             -

    Wow! I expected some overhead due to the state machines iterators produce, but 90% slower? That seems a little heavy to me. So then I thought, well, what if TakeWhile() is not the right tool for the job? The problem is that TakeWhile() returns each item for processing using yield return, whereas our for-loop really doesn't care about the item beyond using it as a stop condition to evaluate. So what if that back-and-forth with the iterator state machine is the problem? Well, we can quickly create an (albeit ugly) lambda that uses Any() along with a count in a closure (if a LINQ guru knows a better way, PLEASE let me know!). After all, this is more consistent with what we're trying to do: we're trying to find the first occurrence of an item and halt once we find it; we just happen to be counting on the way. This mostly matches Any().

        // a new method that uses LINQ but evaluates the count in a closure.
        public static int TakeWhileViaLinq2(IEnumerable<int> list, int target)
        {
            int count = 0;
            list.Any(item =>
                {
                    if (item == target)
                    {
                        return true;
                    }

                    ++count;
                    return false;
                });
            return count;
        }

    Now how does this one compare?

        List<T> on 100,000 iterations

        Method         Size     Total (ms)  Per Iteration (ms)  % Slower
        TakeWhile      10       41          0.00041             128%
        Any w/Closure  10       23          0.00023             28%
        Iteration      10       18          0.00018             -
        TakeWhile      100      171         0.00171             88%
        Any w/Closure  100      116         0.00116             27%
        Iteration      100      91          0.00091             -
        TakeWhile      1000     1604        0.01604             94%
        Any w/Closure  1000     1101        0.01101             33%
        Iteration      1000     825         0.00825             -
        TakeWhile      10,000   15765       0.15765             92%
        Any w/Closure  10,000   10802       0.10802             32%
        Iteration      10,000   8204        0.08204             -
        TakeWhile      100,000  156950      1.5695              92%
        Any w/Closure  100,000  108378      1.08378             33%
        Iteration      100,000  81635       0.81635             -

    Much better! It seems that the overhead of TakeWhile() returning each item and updating the state in the state machine is drastically reduced by using Any(), since Any() simply iterates forward until it finds the value we're looking for -- which matches the task we're attempting to do.

    So the lesson there is: make sure when you use a LINQ expression that you're choosing the best expression for the job, because if you're doing more work than you really need, you'll have a slower algorithm. But this is true of any choice of algorithm or collection in general.

    Even the Any() with the count in the closure is still about 30% slower, but let's consider that angle carefully. For a list of 100,000 items, it was the difference between roughly 1.08 ms and 0.82 ms in a List<T>. That's really not that bad at all in the grand scheme of things. Even running at 90% slower with TakeWhile(), for the vast majority of my projects, an extra millisecond to save potential errors in the long term and improve maintainability is a small price to pay. And if your typical list is 1000 items or less, we're talking only microseconds' worth of difference.

    It's like they say: 90% of your performance bottlenecks are in 2% of your code, so over-optimizing almost never pays off. So personally, I'll take the LINQ expression wherever I can, because it will be easier to read and maintain (thus reducing technical debt), and I can rely on Microsoft's developers to have coded and unit tested those algorithms fully for me, instead of relying on a developer to code the loop logic correctly.

    If something's 90% slower, yes, it's worth keeping in mind, but it's really not until you start getting orders of magnitude slower (10x, 100x, 1000x) that alarm bells should really go off. And if I ever do need that last millisecond of performance? Well, then I'll optimize JUST THAT problem spot. To me it's worth it for the readability, speed-to-market, and maintainability.

  • CI Deployment Of Azure Web Roles Using TeamCity

    - by srkirkland
    After recently migrating an important new website to use Windows Azure “Web Roles” I wanted an easier way to deploy new versions to the Azure Staging environment as well as a reliable process to rollback deployments to a certain “known good” source control commit checkpoint.  By configuring our JetBrains’ TeamCity CI server to utilize Windows Azure PowerShell cmdlets to create new automated deployments, I’ll show you how to take control of your Azure publish process. Step 0: Configuring your Azure Project in Visual Studio Before we can start looking at automating the deployment, we should make sure manual deployments from Visual Studio are working properly.  Detailed information for setting up deployments can be found at http://msdn.microsoft.com/en-us/library/windowsazure/ff683672.aspx#PublishAzure or by doing some quick Googling, but the basics are as follows: Install the prerequisite Windows Azure SDK Create an Azure project by right-clicking on your web project and choosing “Add Windows Azure Cloud Service Project” (or by manually adding that project type) Configure your Role and Service Configuration/Definition as desired Right-click on your azure project and choose “Publish,” create a publish profile, and push to your web role You don’t actually have to do step #4 and create a publish profile, but it’s a good exercise to make sure everything is working properly.  Once your Windows Azure project is setup correctly, we are ready to move on to understanding the Azure Publish process. Understanding the Azure Publish Process The actual Windows Azure project is fairly simple at its core—it builds your dependent roles (in our case, a web role) against a specific service and build configuration, and outputs two files: ServiceConfiguration.Cloud.cscfg: This is just the file containing your package configuration info, for example Instance Count, OsFamily, ConnectionString and other Setting information. ProjectName.Azure.cspkg: This is the package file that contains the guts of your deployment, including all deployable files. When you package your Azure project, these two files will be created within the directory ./[ProjectName].Azure/bin/[ConfigName]/app.publish/.  If you want to build your Azure Project from the command line, it’s as simple as calling MSBuild on the “Publish” target: msbuild.exe /target:Publish Windows Azure PowerShell Cmdlets The last pieces of the puzzle that make CI automation possible are the Azure PowerShell Cmdlets (http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx).  These cmdlets are what will let us create deployments without Visual Studio or other user intervention. Preparing TeamCity for Azure Deployments Now we are ready to get our TeamCity server setup so it can build and deploy Windows Azure projects, which we now know requires the Azure SDK and the Windows Azure PowerShell Cmdlets. Installing the Azure SDK is easy enough, just go to https://www.windowsazure.com/en-us/develop/net/ and click “Install” Once this SDK is installed, I recommend running a test build to make sure your project is building correctly.  You’ll want to setup your build step using MSBuild with the “Publish” target against your solution file.  Mine looks like this: Assuming the build was successful, you will now have the two *.cspkg and *cscfg files within your build directory.  
If the build was red (failed), take a look at the build logs and keep an eye out for “unsupported project type” or other build errors, which will need to be addressed before the CI deployment can be completed. With a successful build we are now ready to install and configure the Windows Azure PowerShell Cmdlets: Follow the instructions at http://msdn.microsoft.com/en-us/library/windowsazure/jj554332 to install the Cmdlets and configure PowerShell After installing the Cmdlets, you’ll need to get your Azure Subscription Info using the Get-AzurePublishSettingsFile command. Store the resulting *.publishsettings file somewhere you can get to easily, like C:\TeamCity, because you will need to reference it later from your deploy script. Scripting the CI Deploy Process Now that the cmdlets are installed on our TeamCity server, we are ready to script the actual deployment using a TeamCity “PowerShell” build runner.  Before we look at any code, here’s a breakdown of our deployment algorithm: Setup your variables, including the location of the *.cspkg and *cscfg files produced in the earlier MSBuild step (remember, the folder is something like [ProjectName].Azure/bin/[ConfigName]/app.publish/ Import the Windows Azure PowerShell Cmdlets Import and set your Azure Subscription information (this is basically your authentication/authorization step, so protect your settings file Now look for a current deployment, and if you find one Upgrade it, else Create a new deployment Pretty simple and straightforward.  Now let’s look at the code (also available as a gist here: https://gist.github.com/3694398): $subscription = "[Your Subscription Name]" $service = "[Your Azure Service Name]" $slot = "staging" #staging or production $package = "[ProjectName]\bin\[BuildConfigName]\app.publish\[ProjectName].cspkg" $configuration = "[ProjectName]\bin\[BuildConfigName]\app.publish\ServiceConfiguration.Cloud.cscfg" $timeStampFormat = "g" $deploymentLabel = "ContinuousDeploy to $service v%build.number%"   Write-Output "Running Azure Imports" Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\*.psd1" Import-AzurePublishSettingsFile "C:\TeamCity\[PSFileName].publishsettings" Set-AzureSubscription -CurrentStorageAccount $service -SubscriptionName $subscription   function Publish(){ $deployment = Get-AzureDeployment -ServiceName $service -Slot $slot -ErrorVariable a -ErrorAction silentlycontinue   if ($a[0] -ne $null) { Write-Output "$(Get-Date -f $timeStampFormat) - No deployment is detected. Creating a new deployment. " } if ($deployment.Name -ne $null) { #Update deployment inplace (usually faster, cheaper, won't destroy VIP) Write-Output "$(Get-Date -f $timeStampFormat) - Deployment exists in $servicename. Upgrading deployment." 
UpgradeDeployment } else { CreateNewDeployment } }   function CreateNewDeployment() { write-progress -id 3 -activity "Creating New Deployment" -Status "In progress" Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: In progress"   $opstat = New-AzureDeployment -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service   $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot $completeDeploymentID = $completeDeployment.deploymentid   write-progress -id 3 -activity "Creating New Deployment" -completed -Status "Complete" Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: Complete, Deployment ID: $completeDeploymentID" }   function UpgradeDeployment() { write-progress -id 3 -activity "Upgrading Deployment" -Status "In progress" Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: In progress"   # perform Update-Deployment $setdeployment = Set-AzureDeployment -Upgrade -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service -Force   $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot $completeDeploymentID = $completeDeployment.deploymentid   write-progress -id 3 -activity "Upgrading Deployment" -completed -Status "Complete" Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: Complete, Deployment ID: $completeDeploymentID" }   Write-Output "Create Azure Deployment" Publish   Creating the TeamCity Build Step The only thing left is to create a second build step, after your MSBuild “Publish” step, with the build runner type “PowerShell”.  Then set your script to “Source Code,” the script execution mode to “Put script into PowerShell stdin with “-Command” arguments” and then copy/paste in the above script (replacing the placeholder sections with your values).  This should look like the following:   Wrap Up After combining the MSBuild /target:Publish step (which creates the necessary Windows Azure *.cspkg and *.cscfg files) and a PowerShell script step which utilizes the Azure PowerShell Cmdlets, we have a fully deployable build configuration in TeamCity.  You can configure this step to run whenever you’d like using build triggers – for example, you could even deploy whenever a new commit comes in on the master branch and passes all required tests. In the script I’ve hardcoded that every deployment goes to the Staging environment on Azure, but you could deploy straight to Production if you want to, or even set up a deployment configuration variable and set it as desired. After your TeamCity Build Configuration is complete, you’ll see something that looks like this: Whenever you click the “Run” button, all of your code will be compiled, published, and deployed to Windows Azure! One additional enormous benefit of automating the process this way is that you can easily deploy any specific source control changeset by clicking the little ellipsis button next to “Run”.  This will bring up a dialog like the one below, where you can select the last change to use for your deployment.  Since Azure Web Role deployments don’t have any rollback functionality, this is a critical feature.   Enjoy!
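    A closing aside: if you later want to promote a verified staging deployment to production without redeploying, the same service management cmdlet family offers a VIP swap. Here is a minimal sketch, assuming the $service variable and the module/subscription imports from the script above; this is my own addition, not part of the original script:

    # Hedged sketch: promote the verified staging deployment to production
    # via a VIP swap. Assumes the Azure module and the publish settings
    # have already been imported exactly as in the deploy script above.
    Move-AzureDeployment -ServiceName $service
    Write-Output "$(Get-Date -f $timeStampFormat) - VIP swap requested for $service"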

    Read the article

  • Hudson + Jboss AS 7.1 + Maven 3 Project Build Error

    - by Zehra Gül Çabuk
    I wanted to prepare an environment in which I can build my project and run my tests on an integration server. So I installed a JBoss application server 7.1 and deployed hudson.war to it. Then I created a project to trigger "mvn clean install", and when I built it I got the following exception. [INFO] Using bundled Maven 3 installation [INFO] Checking Maven 3 installation environment [workspace] $ /disk7/hudson_home/maven/slavebundle/bundled-maven/bin/mvn --help [INFO] Checking Maven 3 installation version [INFO] Detected Maven 3 installation version: 3.0.3 [workspace] $ /disk7/hudson_home/maven/slavebundle/bundled-maven/bin/mvn clean install -V -B -Dmaven.ext.class.path=/disk7/hudson_home/maven/slavebundle/resources:/disk7/hudson_home/maven/slavebundle/lib/maven3-eventspy-3.0.jar:/disk7/jboss-as-7.1.0.Final/standalone/tmp/vfs/deploymentdf2a6dfa59ee3407/hudson-remoting-2.2.0.jar-407215e5de02980f/contents -Dhudson.eventspy.port=37183 -f pom.xml [DEBUG] Waiting for connection on port: 37183 Apache Maven 3.0.3 (r1075438; 2011-02-28 19:31:09+0200) Maven home: /disk7/hudson_home/maven/slavebundle/bundled-maven Java version: 1.7.0_04, vendor: Oracle Corporation Java home: /usr/lib/jvm/jdk1.7.0_04/jre Default locale: en_US, platform encoding: UTF-8 OS name: "linux", version: "3.2.0-24-generic", arch: "amd64", family: "unix" [ERROR] o.h.m.e.DelegatingEventSpy - Init failed java.lang.NoClassDefFoundError: hudson/remoting/Channel at org.hudsonci.maven.eventspy.common.RemotingClient.open(RemotingClient.java:103) ~[maven3-eventspy-runtime.jar:na] at org.hudsonci.maven.eventspy_30.RemotingEventSpy.openChannel(RemotingEventSpy.java:86) ~[maven3-eventspy-3.0.jar:na] at org.hudsonci.maven.eventspy_30.RemotingEventSpy.init(RemotingEventSpy.java:114) ~[maven3-eventspy-3.0.jar:na] at org.hudsonci.maven.eventspy_30.DelegatingEventSpy.init(DelegatingEventSpy.java:128) ~[maven3-eventspy-3.0.jar:na] at org.apache.maven.eventspy.internal.EventSpyDispatcher.init(EventSpyDispatcher.java:84) [maven-core-3.0.3.jar:3.0.3] at org.apache.maven.cli.MavenCli.container(MavenCli.java:403) [maven-embedder-3.0.3.jar:3.0.3] at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:191) [maven-embedder-3.0.3.jar:3.0.3] at org.apache.maven.cli.MavenCli.main(MavenCli.java:141) [maven-embedder-3.0.3.jar:3.0.3] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_04] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_04] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_04] at java.lang.reflect.Method.invoke(Method.java:601) ~[na:1.7.0_04] at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290) [plexus-classworlds-2.4.jar:na] at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230) [plexus-classworlds-2.4.jar:na] at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409) [plexus-classworlds-2.4.jar:na] at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352) [plexus-classworlds-2.4.jar:na] Caused by: java.lang.ClassNotFoundException: hudson.remoting.Channel at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:50) ~[plexus-classworlds-2.4.jar:na] at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:244) ~[plexus-classworlds-2.4.jar:na] at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:230)
~[plexus-classworlds-2.4.jar:na] ... 16 common frames omitted [ERROR] ABORTED [ERROR] Failed to initialize [ERROR] Caused by: hudson/remoting/Channel [ERROR] Caused by: hudson.remoting.Channel [ERROR] Failure: java.net.SocketException: Connection reset FATAL: Connection reset java.net.SocketException: Connection reset at java.net.SocketInputStream.read(SocketInputStream.java:189) at java.net.SocketInputStream.read(SocketInputStream.java:121) at java.io.FilterInputStream.read(FilterInputStream.java:133) at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) at java.io.BufferedInputStream.read(BufferedInputStream.java:254) at hudson.remoting.Channel.<init>(Channel.java:385) at hudson.remoting.Channel.<init>(Channel.java:347) at hudson.remoting.Channel.<init>(Channel.java:320) at hudson.remoting.Channel.<init>(Channel.java:315) at hudson.slaves.Channels$1.<init>(Channels.java:71) at hudson.slaves.Channels.forProcess(Channels.java:71) at org.hudsonci.maven.plugin.builder.internal.PerformBuild.doExecute(PerformBuild.java:174) at org.hudsonci.utils.tasks.PerformOperation.execute(PerformOperation.java:58) at org.hudsonci.maven.plugin.builder.MavenBuilder.perform(MavenBuilder.java:169) at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19) at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:630) at hudson.model.Build$RunnerImpl.build(Build.java:175) at hudson.model.Build$RunnerImpl.doRun(Build.java:137) at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:429) at hudson.model.Run.run(Run.java:1366) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46) at hudson.model.ResourceController.execute(ResourceController.java:88) at hudson.model.Executor.run(Executor.java:145) I want to point out the command that Hudson tried to execute: /disk7/hudson_home/maven/slavebundle/bundled-maven/bin/mvn clean install -V -B -Dmaven.ext.class.path=/disk7/hudson_home/maven/slavebundle/resources:/disk7/hudson_home/maven/slavebundle/lib/maven3-eventspy-3.0.jar:/disk7/jboss-as-7.1.0.Final/standalone/tmp/vfs/deploymentdf2a6dfa59ee3407/hudson-remoting-2.2.0.jar-407215e5de02980f/contents -Dhudson.eventspy.port=37183 -f pom.xml It tries to find "hudson-remoting-2.2.0.jar" to put it on the build path, but it searches for it in the wrong place: when I looked for the hudson-remoting jar, I found it for this build at /disk7/jboss-as-7.1.0.Final/standalone/tmp/vfs/deploymentdf2a6dfa59ee3407/hudson-remoting-2.2.0.jar-407215e5de02980f/hudson-remoting-2.2.0.jar (not in contents). So how can I configure Hudson to force it to look in the right place for jars? Does anyone have an idea? Thanks in advance.

    Read the article

  • .NET to iOS: From WinForms to the iPad

    - by RobertChipperfield
    One of the great things about working at Red Gate is getting to play with new technology - and right now, that means mobile. A few weeks ago, we decided that a little research into the tablet computing arena was due, and purely from a numbers point of view, that suggested the iPad as a good target device. A quick trip to iPhoneDevCon in San Diego later, and Marine and I came back full of ideas, and with some concept of how iOS development was meant to work. Here's how we went from there to the release of Stacks & Heaps, our geeky take on the classic "Snakes & Ladders" game. Step 1: Buy a Mac I've played with many operating systems in my time: from the original BBC Model B, through DOS, Windows, Linux, and others, but I'd so far managed to avoid buying fruit-flavoured computer hardware! If you want to develop for the iPhone, iPad or iPod Touch, that's the first thing that needs to change. If you've not used OS X before, the first thing you'll realise is that everything is different! In the interests of avoiding a flame war in the comments section, I'll only go so far as to say that a lot of my Windows-flavoured muscle memory no longer worked. If you're in the UK, you'll also realise your keyboard is lacking a # key, and that " and @ are the other way around from normal. The wonderful Ukelele keyboard layout editor restores some sanity here, as long as you don't look at the keyboard when you're typing. I couldn't give up the PC entirely, but a handy application called Synergy comes to the rescue - it lets you share a single keyboard and mouse between multiple machines. There's a few limitations: Alt-Tab always seems to go to the Mac, and Windows 7's UAC dialogs require the local mouse for security reasons, but it gets you a long way at least. Step 2: Register as an Apple Developer You can register as an Apple Developer free of charge, and that lets you download XCode and the iOS SDK. You also get the iPhone / iPad emulator, which is handy, since you'll need to be a paid member before you can deploy your apps to a real device. You can either enroll as an individual, or as a company. They both cost the same ($99/year), but there's a few differences between them. If you register as a company, you can add multiple developers to your team (all for the same $99 - not $99 per developer), and you get to use your company name in the App Store. However, you'll need to send off significantly more documentation to Apple, and I suspect the process takes rather longer than for an individual, where they just need to verify some credit card details. Here's a tip: if you're registering as a company, do so as early as possible. The approval process can take a while to complete, so get the application in in plenty of time. Step 3: Learn to love the square brackets! Objective-C is the language of the iPad. C and C++ are also supported, and if you're doing some serious game development, you'll probably spend most of your time in C++ talking OpenGL, but for forms-based apps, you'll be interacting with a lot of the Objective-C SDK. Like shifting from Ctrl-C to Cmd-C, it feels a little odd at first, with the familiar string.Format(…) turning into: NSString *myString = [NSString stringWithFormat:@"Hello world, it's %@", [NSDate date]]; Thankfully XCode's auto-complete is normally passable, if not up to Visual Studio's standards, which coupled with a huge amount of content on Stack Overflow means you'll soon get to grips with the API.
You'll need to get used to some terminology changes, though; here's an incomplete approximation: [.NET-to-Objective-C terminology translation table] Coming from a .NET background, there's some luxuries you no longer have developing Objective C in XCode: Generics! Remember back in .NET 1.1, when all collections were just objects? Yup, we're back there now. ReSharper. Or, more generally, very much refactoring support. The not-many-keystrokes to rename a class, its file, and all references to it in Visual Studio turns into a much more painful experience in XCode. Garbage collection. This is actually rather less of an issue than you might expect: if you follow the rules, the reference counting provided by Objective C gets you a long way without too much pain. Circular references are their usual problematic self, though. Decent exception handling. You do have exceptions, but they're nowhere near as widely used. Generally, if something goes wrong, you get nil (see translation table above) back. Which brings me on to… Calling a method on a nil object isn't a failure - it just returns nil itself! There's many arguments for and against this, but personally I fall into the "stuff should fail as quickly and explicitly as possible" camp. Less specifically, I found that there's more chance of code failing at runtime than getting caught at compile-time: using the @selector(…) syntax to pass a method signature isn't (can't be) checked at compile-time, so the first you know about a typo is a crash when you try and call it. The solution to this is of course lots of great testing, both automated and manual, but I still find comfort in provably correct type safety being enforced in addition to testing. Step 4: Submit to the App Store Assuming you want to distribute to more than a handful of devices, you're going to need to submit your app to the Apple App Store. There's a few gotchas in terms of getting builds signed with the right certificates, and you'll be bouncing around between XCode and iTunes Connect a fair bit, but eventually you get everything checked off the to-do list, and are ready to upload your first binary! With some amount of anticipation, I pressed the Upload button in XCode, ready to release our creation into the world, but was instead greeted by an error informing me my XML file was malformed. Uh. A little Googling later, and it turned out that a simple rename from "Stacks&Heaps.app" to "StacksAndHeaps.app" worked around an XML escaping bug, and we were good to go. The next step is to wait for approval (or otherwise). After a couple of weeks of intensive development, this part is agonising. Did we make it? The Apple jury is still out at the moment, but our fingers are firmly crossed! In the meantime, you can see some screenshots and leave us your email address if you'd like us to get in touch when it does go live at the MobileFoo website. Step 5: Profit! Actually, that wasn't the idea here: Stacks & Heaps is free; there's no adverts, and we're not going to sell all your data either. So why did we do it? We wanted to get an idea of what it's like to move from coding for a desktop environment, to something completely different. We don't know whether in a year's time, the iPad will still be the dominant force, or whether Android will have smoothed out some bugs, tweaked the performance, and polished the UI, but I think it's a fairly sure bet that the tablet form factor is here to stay. We want to meet people who are using it, start chatting to them, and find out about some of the pain they're feeling.
What better way to do that than do it ourselves, and get to write a cool game in the process?

    Read the article

  • Metro: Grouping Items in a ListView Control

    - by Stephen.Walther
    The purpose of this blog entry is to explain how you can group list items when displaying the items in a WinJS ListView control. In particular, you learn how to group a list of products by product category. Displaying a grouped list of items in a ListView control requires completing the following steps: Create a Grouped data source from a List data source Create a Grouped Header Template Declare the ListView control so it groups the list items Creating the Grouped Data Source Normally, you bind a ListView control to a WinJS.Binding.List object. If you want to render list items in groups, then you need to bind the ListView to a grouped data source instead. The following code – contained in a file named products.js — illustrates how you can create a standard WinJS.Binding.List object from a JavaScript array and then return a grouped data source from the WinJS.Binding.List object by calling its createGrouped() method: (function () { "use strict"; // Create List data source var products = new WinJS.Binding.List([ { name: "Milk", price: 2.44, category: "Beverages" }, { name: "Oranges", price: 1.99, category: "Fruit" }, { name: "Wine", price: 8.55, category: "Beverages" }, { name: "Apples", price: 2.44, category: "Fruit" }, { name: "Steak", price: 1.99, category: "Other" }, { name: "Eggs", price: 2.44, category: "Other" }, { name: "Mushrooms", price: 1.99, category: "Other" }, { name: "Yogurt", price: 2.44, category: "Other" }, { name: "Soup", price: 1.99, category: "Other" }, { name: "Cereal", price: 2.44, category: "Other" }, { name: "Pepsi", price: 1.99, category: "Beverages" } ]); // Create grouped data source var groupedProducts = products.createGrouped( function (dataItem) { return dataItem.category; }, function (dataItem) { return { title: dataItem.category }; }, function (group1, group2) { return group1.charCodeAt(0) - group2.charCodeAt(0); } ); // Expose the grouped data source WinJS.Namespace.define("ListViewDemos", { products: groupedProducts }); })(); Notice that the createGrouped() method requires three functions as arguments: groupKey – This function associates each list item with a group. The function accepts a data item and returns a key which represents a group. In the code above, we return the value of the category property for each product. groupData – This function returns the data item displayed by the group header template. For example, in the code above, the function returns a title for the group which is displayed in the group header template. groupSorter – This function determines the order in which the groups are displayed. The code above displays the groups in alphabetical order: Beverages, Fruit, Other. Creating the Group Header Template Whenever you create a ListView control, you need to create an item template which you use to control how each list item is rendered. When grouping items in a ListView control, you also need to create a group header template. The group header template is used to render the header for each group of list items. 
Here’s the markup for both the item template and the group header template: <div id="productTemplate" data-win-control="WinJS.Binding.Template"> <div class="product"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> </div> </div> <div id="productGroupHeaderTemplate" data-win-control="WinJS.Binding.Template"> <div class="productGroupHeader"> <h1 data-win-bind="innerText: title"></h1> </div> </div> You should declare the two templates in the same file as you declare the ListView control – for example, the default.html file. Declaring the ListView Control The final step is to declare the ListView control. Here’s the required markup: <div data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource:ListViewDemos.products.dataSource, itemTemplate:select('#productTemplate'), groupDataSource:ListViewDemos.products.groups.dataSource, groupHeaderTemplate:select('#productGroupHeaderTemplate'), layout: {type: WinJS.UI.GridLayout} }"> </div> In the markup above, five properties of the ListView control are set when the control is declared. First the itemDataSource and itemTemplate are specified. Nothing new here. Next, the group data source and group header template are specified. Notice that the group data source is represented by the ListViewDemos.products.groups.dataSource property of the grouped data source. Finally, notice that the layout of the ListView is changed to Grid Layout. You are required to use Grid Layout (instead of the default List Layout) when displaying grouped items in a ListView. Here’s the entire contents of the default.html page: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>ListViewDemos</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- ListViewDemos references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> <script src="/js/products.js" type="text/javascript"></script> <style type="text/css"> .product { width: 200px; height: 100px; border: white solid 1px; font-size: x-large; } </style> </head> <body> <div id="productTemplate" data-win-control="WinJS.Binding.Template"> <div class="product"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> </div> </div> <div id="productGroupHeaderTemplate" data-win-control="WinJS.Binding.Template"> <div class="productGroupHeader"> <h1 data-win-bind="innerText: title"></h1> </div> </div> <div data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource:ListViewDemos.products.dataSource, itemTemplate:select('#productTemplate'), groupDataSource:ListViewDemos.products.groups.dataSource, groupHeaderTemplate:select('#productGroupHeaderTemplate'), layout: {type: WinJS.UI.GridLayout} }"> </div> </body> </html> Notice that the default.html page includes a reference to the products.js file: <script src="/js/products.js" type="text/javascript"></script> The default.html page also contains the declarations of the item template, group header template, and ListView control. Summary The goal of this blog entry was to explain how you can group items in a ListView control. You learned how to create a grouped data source, a group header template, and declare a ListView so that it groups its list items.

    Read the article

  • CEN/CENELEC Lacks Perspective

    - by trond-arne.undheim
    Over the last few months, two of the European Standardization Organizations (ESOs), CEN and CENELEC have circulated an unfortunate position statement distorting the facts around fora and consortia. For the benefit of outsiders to this debate, let's just say that this debate regards whether and how the EU should recognize standards and specifications from certain fora and consortia based on a process evaluating the openness and transparency of such deliverables. The topic is complex, and somewhat confusing even to insiders, but nevertheless crucial to the European economy. As far as I can judge, their positions are not based on facts. This is unfortunate. For the benefit of clarity, here are some of the observations they make: a)"Most consortia are in essence driven by technology companies making hardware and software solutions, by definition very few of the largest ones are European-based". b) "Most consortia lack a European presence, relevant Committees, even those that are often cited as having stronger links with Europe, seem to lack an overall, inclusive set of participants". c) "Recognising specific consortia specifications will not resolve any concrete problems of interoperability for public authorities; interoperability depends on stringing together a range of specifications (from formal global bodies or consortia alike)". d) "Consortia already have the option to have their specifications adopted by the international formal standards bodies and many more exercise this than the two that seem to be campaigning for European recognition. Such specifications can then also be adopted as European standards." e) "Consortium specifications completely lack any process to take due and balanced account of requirements at national level - this is not important for technologies but can be a critical issue when discussing cross-border issues within the EU such as eGovernment, eHealth and so on". f) "The proposed recognition will not lead to standstill on national or European activities, nor to the adoption of the specifications as national standards in the CEN and CENELEC members (usually in their official national languages), nor to withdrawal of conflicting national standards. A big asset of the European standardization system is its coherence and lack of fragmentation." g) "We always miss concrete and specific examples of where consortia referencing are supposed to be helpful." First of all, note that ETSI, the third ESO, did not join the position. The reason is, of course, that ETSI beyond being an ESO, also has a global perspective and, moreover, does consider reality. Secondly, having produced arguments a) to g), CEN/CENELEC has the audacity to call a meeting on Friday 25 February entitled "ICT standardization - improving collaboration in Europe". This sounds very nice, but they have not set the stage for constructive debate. Rather, they demonstrate a striking lack of vision and lack of perspective. I will back this up by three facts, and leave it there. 1. Since the 1980s, global industry fora and consortia, such as IETF, W3C and OASIS have emerged as world-leading ICT standards development organizations with excellent procedures for openness and transparency in all phases of standards development, ex post and ex ante. - Practically no ICT system can be built without using fora and consortia standards (FCS). - Without using FCS, neither the Internet, upon which the EU economy depends, nor EU institutions would operate. 
- FCS are of high relevance for achieving and promoting interoperability and driving innovation. 2. FCS are complementary to the formally recognized standards organizations including the ESOs. - No work will be taken away from the ESOs should the EU recognize certain FCS. - Each FCS would be evaluated on its merit and on the openness of the process that produced it. ESOs would, with other stakeholders, have a say. - ESOs could potentially educate and assist European stakeholders to engage more actively and constructively with FCS. - ETSI, also an ESO, seems to clearly recognize these facts. 3. Europe and its Member States have a strong voice in several of the most relevant global industry fora and consortia. - W3C: W3C was founded in 1994 by an Englishman, Sir Tim Berners-Lee, in collaboration with CERN, the European research lab. In April 1995, INRIA (Institut National de Recherche en Informatique et Automatique) in France became the first European W3C host and in 2003, ERCIM (European Research Consortium in Informatics and Mathematics), also based in France, took over the role of European W3C host from INRIA. Today, W3C has 326 Members, 40% of which are European. Government participation is also strong, and it could be increased - a development that is very much desired by W3C. Current members of the W3C Advisory Board includes Ora Lassila (Nokia) and Charles McCathie Nevile (Opera). Nokia is Finnish company, Opera is a Norwegian company. SAP's Claus von Riegen is an alumni of the same Advisory Board. - OASIS: its membership - 30% of which is European - represents the marketplace, reflecting a balance of providers, user companies, government agencies, and non-profit organizations. In particular, about 15% of OASIS members are governments or universities. Frederick Hirsch from Nokia, Claus von Riegen from SAP AG and Charles-H. Schulz from Ars Aperta are on the Board of Directors. Nokia is a Finnish company, SAP is a German company and Ars Aperta is a French company. The Chairman of the Board is Peter Brown, who is an Independent Consultant, an Austrian citizen AND an official of the European Parliament currently on long-term leave. - IETF: The oversight of its activities is by the Internet Architecture Board (IAB), since 2007 chaired by Olaf Kolkman, a Dutch national who lives in Uithoorn, NL. Kolkman is director of NLnet Labs, a foundation chartered to develop open source software and open source standards for the Internet. Other IAB members include Marcelo Bagnulo whose affiliation is the University Carlos III of Madrid, Spain as well as Hannes Tschofenig from Nokia Siemens Networks. Nokia is a Finnish company. Siemens is a German company. Nokia Siemens is a European joint venture. - Member States: At least 17 European Member States have developed Interoperability Frameworks that include FCS, according to the EU-funded National Interoperability Framework Observatory (see list and NIFO web site on IDABC). This also means they actively procure solutions using FCS, reference FCS in their policies and even in laws. Member State reps are free to engage in FCS, and many do. It would be nice if the EU adjusted to this reality. - A huge number of European nationals work in the global IT industry, on European soil or elsewhere, whether in EU registered companies or not. CEN/CENELEC lacks perspective and has engaged in an effort to twist facts that is quite striking from a publicly funded organization. 
I wish them all possible success with Friday's meeting but I fear all of the most important stakeholders will not be at the table. Not because they do not wish to collaborate, but because they just have been insulted. If they do show up, it would be a gracious move, almost beyond comprehension. While I do not expect CEN/CENELEC to line up perfectly in favor of fora and consortia, I think it would be to their benefit to stick to more palatable observations. Actually, I would suggest an apology, straightening out the facts. This works among friends and it works in an organizational context. Then, we can all move on. Standardization is important. Too important to ignore. Too important to distort. The European economy depends on it. We need CEN/CENELEC. It is an important organization. But CEN/CENELEC needs fora and consortia, too.

    Read the article

  • Search engine solution for Django that actually works?

    - by prometheus
    The story so far: Decided to go with Xapian as search backend because it has all search-engine features I was looking for, knows about Unicode, stemming, has few dependencies and requires no bloated app-server installation on top of it. Tried Django and Haystack (plus xapian-haystack, the backend glue code to tie Haystack to Xapian) because it was advertised on quite some blogs as "working". Did not work. Neither django-haystack nor the xapian-haystack project provide a version combination that actually works together. MASTER from both projects yields an error from Xapian, so it's not stable at all. Haystack 1.0.1 and xapian-haystack 1.0.x/1.1.0 are not API-compatible. Plus, in a minimally working installation of Haystack 1.0.1 and xapian-haystack MASTER, any complex query yields zero results due to errors in either django-haystack or xapian-haystack (I double-verified this), maybe because the unit-tests actually test very simple cases, and no edge-cases at all. Tried Djapian. The source-code is riddled with spelling errors (mind you, in variable names, not comments), documentation is also riddled with ambiguities and outdated information that will never lead to a working installation. Not surprisingly, users rarely ask for features but how to get it working in the first place. Next on the plate: exploring Solr (installing a Java environment plus Tomcat gives me headaches, the machine is RAM- and CPU-constrained), or Lucene (slightly less headaches, but still). Before I proceed spending more time with a solution that might or might not work as advertised, I'd like to know: Did anyone ever get an actual, real-world search solution working in Django? I'm serious. I find it really frustrating reading about "large problems mostly solved", and then realizing that you will never get a working installation from the source-code because, actually, all bloggers dealing with those "mostly solved problems" never went past basic installation and copy-pasting the official tutorials. So here are the requirements: must be able to search for 10-100 terms in one query must handle + (term must be present) and - (term must not be present), AND/OR must handle arbitrary grouping (i.e. parentheses around AND/OR) must allow for Django-ORM filtering before or after fulltext-search (i.e. pre-/post-processing of results with the full set of filters that Django knows about) alternatively, there must be a facility to bulk-fetch the result set and transform it into a QuerySet should be light on the machine, so preferably no humongous JVM and Java-based app-server installation Is there anything out there that does this? I'm not interested in anecdotal evidence, or references to some blog posts that claim it should be working. I'd like to hear from someone who actually has a fully-functional setup working in the real world, under real conditions, with real queries. EDIT: Let me repeat again that I'm not so much interested in anecdotal evidence that someone, somewhere has a somewhat running installation working with unspecified properties. I already went there, I read all the blog posts, mailing lists, I contacted the authors, but when it came to actual implementation of real-world scenarios, nothing ever worked as advertised. 
Also, and a user below brought that point up as well, considering the TCO of any project, I'm definitely not interested in hearing that someone, somewhere was able to pull it off once a vendor parachuted in an unknown number of specialists to monkey-patch the whole installation with specific domain-knowledge that's documented nowhere. So, please, if you claim you have a working installation that actually satisfies minimum requirements for a full-fledged search (see requirements above), please provide the following so that we can all benefit from a search solution for Django that actually solves the problem: exact Linux distribution, release version, exact release version of Haystack (or equivalent) and release version of search backend, exact release version of the search engine publicly (!) available documentation how to set up all components exactly in the way that your installation was set up such that the minimal requirements above are met. Thank you.

    Read the article

  • Deterministic/Consistent Unique Masking

    - by Dinesh Rajasekharan-Oracle
    One of the key requirements while masking data in large databases or multi-database environments is to consistently mask some columns, i.e. for a given input the output should always be the same. At the same time the masked output should not be predictable. Deterministic masking also eliminates the enormous amount of time spent in identifying data relationships, i.e. parent and child relationships among columns defined in the application tables. In this blog post I will explain different ways of consistently masking data across databases using Oracle Data Masking and Subsetting. Readers of this post should have minimal knowledge of Oracle Enterprise Manager 12c, Application Data Modeling, and Data Masking concepts. For more information on these concepts, please refer to the Oracle Data Masking and Subsetting documentation Oracle Data Masking and Subsetting 12c provides four methods with which users can consistently yet irreversibly mask their inputs: 1. Substitute 2. SQL Expression 3. Encrypt 4. User Defined Function SUBSTITUTE The substitute masking format replaces the original value with a value from a pre-created database table. As the method uses a hash-based algorithm in the back end, the mappings are consistent. For example, consider DEPARTMENT_ID in the EMPLOYEES table being replaced with FAKE_DEPARTMENT_ID from FAKE_TABLE. The substitute masking transformation ensures that all occurrences of DEPARTMENT_ID, say ‘101’, will be replaced with ‘502’, provided the same substitution table and column are used, i.e. FAKE_TABLE.FAKE_DEPARTMENT_ID. The following screen shot shows the usage of the Substitute masking format within a masking definition: Note that the uniqueness of the masked value depends on the number of values in the substitution column, i.e. if the original table contains 50000 unique values, then for the masked output to be unique and deterministic the substitution column should also contain 50000 unique values; without that, only consistency is maintained but not uniqueness. SQL EXPRESSION SQL Expression replaces an existing value with the output of a specified SQL Expression. For example, while masking an EMPLOYEES table the EMAIL_ID of an employee has to be in the format FIRST_NAME.LAST_NAME@COMPANY.COM, where FIRST_NAME and LAST_NAME are the actual column names of the EMPLOYEES table; the corresponding SQL Expression will look like %FIRST_NAME%||’.’||%LAST_NAME%||’@COMPANY.COM’. The advantage of this technique is that if you are masking FIRST_NAME and LAST_NAME of the EMPLOYEES table then the corresponding EMAIL_ID will be replaced accordingly by the masking scripts. One of the interesting aspects of SQL Expressions is that you can use sub SQL expressions, which means that you can write a nested SQL and use it as a SQL Expression to address complex masking business use cases. A SQL Expression can also be used to consistently replace a value with a hashed value using Oracle’s PL/SQL function ORA_HASH. The following SQL Expression will help in the previous example for replacing the DEPARTMENT_IDs with a hashed number: ORA_HASH (%DEPARTMENT_ID%, 1000) The following screen shot shows the usage of the SQL Expression masking format within the masking definition: ORA_HASH takes three arguments: 1. Expression, which can be of any data type except LONG, LOB, or User Defined Type [nested table type is allowed]. In the above example I used the original value as the expression. 2. Number of hash buckets, which can be any number between 0 and 4294967295. The default value is 4294967295.
You can also correlate the number of hash buckets to a range of numbers. In the example above the bucket value is specified as 1000, so the end result will be a hashed number between 0 and 1000. 3. Seed, which can be any number and decides the consistency, i.e. for a given seed value the output will always be the same. The default seed is 0. In the above SQL Expression a seed is not specified, so it defaults to 0. If you have to use a non-default seed then the function will look like: ORA_HASH (%DEPARTMENT_ID%, 1000, 1234) The uniqueness depends on the input and the number of hash buckets used. However, as ORA_HASH uses a 32-bit algorithm, considering the birthday paradox or pigeonhole principle there is a 0.5 probability of collision after 2^32-1 unique values. ENCRYPT The Encrypt masking format uses a blend of the 3DES encryption algorithm, hashing, and a regular expression to produce a deterministic and unique masked output. The format of the masked output corresponds to the specified regular expression. As this technique uses a key [string] to encrypt the data, the same string can be used to decrypt the data. The key also acts as a seed to maintain consistent outputs for a given input. The following screen shot shows the usage of the Encrypt masking format within the masking definition: Regular Expressions may look complex to first-time users but you will soon realize that it’s a simple language. There are many resources on the internet, in the Oracle documentation, the Oracle Learning Library, and My Oracle Support on writing Regular Expressions; of them all, the following My Oracle Support document helped me get started with Regular Expressions: Oracle SQL Support for Regular Expressions [Video] (Doc ID 1369668.1) USER DEFINED FUNCTION [UDF] A User Defined Function, or UDF, provides flexibility for users to code their own masking logic in PL/SQL, which can be called from a masking definition. The standard format of a UDF in Oracle Data Masking and Subsetting is: Function udf_func (rowid varchar2, column_name varchar2, original_value varchar2) returns varchar2; Where • rowid is the row identifier of the column that needs to be masked • column_name is the name of the column that needs to be masked • original_value is the column value that needs to be masked You can achieve deterministic masking by using Oracle’s built-in hash functions like ORA_HASH, DBMS_CRYPTO.MD4, DBMS_CRYPTO.MD5, and DBMS_UTILITY.GET_HASH_VALUE. Please refer to the Oracle Database documentation for more information on the Oracle hash functions. For example, the following masking UDF generates deterministic unique hexadecimal values for a given string input: CREATE OR REPLACE FUNCTION RD_DUX (rid varchar2, column_name varchar2, orig_val VARCHAR2) RETURN VARCHAR2 DETERMINISTIC PARALLEL_ENABLE IS stext varchar2 (26); no_of_characters number(2); BEGIN no_of_characters:=6; stext:=substr(RAWTOHEX(DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW(orig_val),1)),0,no_of_characters); RETURN stext; END; The uniqueness depends on the input, the length of the string, and the number of bits used by the hash algorithm. In the above function the MD4 hash is used [denoted by argument 1 in the DBMS_CRYPTO.HASH function], which is a 128-bit algorithm producing 2^128-1 unique hashed values; however, this is limited by the output length, which here is 6 hexadecimal characters, so only 16^6 unique values will be generated. Also do not forget about the birthday paradox/pigeonhole principle mentioned earlier in this post.
Another example is to consistently replace characters or numbers, preserving the length and special characters, as shown below: CREATE OR REPLACE FUNCTION RD_DUS(rid varchar2,column_name varchar2,orig_val VARCHAR2) RETURN VARCHAR2 DETERMINISTIC PARALLEL_ENABLE IS stext varchar2(26); BEGIN DBMS_RANDOM.SEED(orig_val); stext:=TRANSLATE(orig_val,'ABCDEFGHIJKLMNOPQRSTUVWXYZ',DBMS_RANDOM.STRING('U',26)); stext:=TRANSLATE(stext,'abcdefghijklmnopqrstuvwxyz',DBMS_RANDOM.STRING('L',26)); stext:=TRANSLATE(stext,'0123456789',to_char(DBMS_RANDOM.VALUE(1,9))); stext:=REPLACE(stext,'.','0'); RETURN stext; END; The following screen shot shows the usage of a UDF within a masking definition: To summarize, Oracle Data Masking and Subsetting helps you to consistently mask data across databases using one or all of the methods described in this post. It saves the hassle of identifying the parent-child relationships defined in the application tables. Happy Masking

    Read the article

  • Profiling Startup Of VS2012 &ndash; SpeedTrace Profiler

    - by Alois Kraus
    SpeedTrace is a relatively unknown profiler made by a company called Ipcas. A single professional license does cost 449€+VAT. For the test I did use SpeedTrace 4.5 which is currently Beta. Although it is cheaper than dotTrace it has by far the most options to influence how profiling does work. First you need to create a tracing project which does configure tracing for one process type. You can start the application directly from the profiler or (much more interestingly) have it attach to a specific process when it is started. For this you need to check the “Trace the specified …” radio button and enter the process name in the “Process Name of the Trace” edit box. You can even selectively enable tracing for processes with a specific command line. Then you need to activate the trace project by pressing the Activate Project button and you are ready to start VS as usual. If you want to profile the next 10 VS instances that you start you can set the Number of Processes counter to e.g. 10. This is immensely helpful if you are trying to profile only the next 5 started processes. As you can see there are many more tabs which allow you to influence tracing in a much more sophisticated way. SpeedTrace is the only profiler which does not rely entirely on the profiling API of .NET. Instead it does modify the IL code (instrumentation on the fly) to write tracing information to disk which can later be analyzed. This approach is not only very fast but it does give you unprecedented analysis capabilities. Once the traces are collected they do show up in your workspace where you can open the trace viewer. I do skip the other windows because this view is by far the most useful one. You can sort the methods not only by Wall Clock time but also by CPU consumption and wait time, which none of the other products support in their views at the same time. If you want to optimize for CPU consumption, sort by CPU time. If you want to find out where most time is spent you need Clock Total time and Clock Waiting. There you can directly see if the method did take long because it did wait on something or it did really execute stuff that did take so long. Once you have found a method you want to drill into, you can double-click on it to get to the Caller/Callee view which is similar to the JetBrains Method Grid view. But this time you do see much more. In the middle is the clicked method. Above are the methods that call you and below are the methods that you do directly call. Normally you would then start digging deeper to find the end of the chain where the slow method worth optimizing is located. But there is a shortcut. You can press the magic button to calculate the aggregation of all called methods. This is displayed in the lower left window where you can see each method call and how long it did take. There you can also sort to see if this call stack does only contain methods (e.g. WCF connect calls which you cannot make faster) not worth optimizing. YourKit has a similar feature where it is called Callees List. In the Functions tab the context menu also offers many other useful analysis options. One really outstanding feature is the View Call History Drilldown. When you select this one you get not a sum of all method invocations but a list with the duration of each method call. This is not surprising since SpeedTrace does use tracing to get its timings. There you can get many useful graphs of how this method behaved over time. Did it become slower at some point in time or was only the first call slow?
The diagrams and the list will tell you that. That is all fine but what should I do when one method call was slow? I want to see from where it was coming from. No problem select the method in the list hit F10 and you get the call stack. This is a life saver if you e.g. search for serialization problems. Today Serializers are used everywhere. You want to find out from where the 5s XmlSerializer.Deserialize call did come from? Hit F10 and you get the call stack which did invoke the 5s Deserialize call. The CPU timeline tab is also useful to find out where long pauses or excessive CPU consumption did happen. Click in the graph to get the Thread Stacks window where you can get a quick overview what all threads were doing at this time. This does look like the Stack Traces feature in YourKit. Only this time you get the last called method first which helps to quickly see what all threads were executing at this moment. YourKit does generate a rather long list which can be hard to go through when you have many threads. The thread list in the middle does not give you call stacks or anything like that but you see which methods were found most often executing code by the profiler which is a good indication for methods consuming most CPU time. This does sound too good to be true? I have not told you the best part yet. The best thing about this profiler is the staff behind it. When I do see a crash or some other odd behavior I send a mail to Ipcas and I do get usually the next day a mail that the problem has been fixed and a download link to the new version. The guys at Ipcas are even so helpful to log in to your machine via a Citrix Client to help you to get started profiling your actual application you want to profile. After a 2h telco I was converted from a hater to a believer of this tool. The fast response time might also have something to do with the fact that they are actively working on 4.5 to get out of the door. But still the support is by far the best I have encountered so far. The only downside is that you should instrument your assemblies including the .NET Framework to get most accurate numbers. You can profile without doing it but then you will see very high JIT times in your process which can severely affect the correctness of the measured timings. If you do not care about exact numbers you can also enable in the main UI in the Data Trace tab logging of method arguments of primitive types. If you need to know what files at which times were opened by your application you can find it out without a debugger. Since SpeedTrace does read huge trace files in its reader you should perhaps use a 64 bit machine to be able to analyze bigger traces as well. The memory consumption of the trace reader is too high for my taste. But they did promise for the next version to come up with something much improved.

    Read the article

  • Spacewalk 2.0 provided to manage Oracle Linux systems

    - by wcoekaer
    Oracle Linux customers have a few options to manage and provision their servers. We provide a license to use Oracle Enterprise Manager's Linux OS management, monitoring and provisioning features without additional cost for every server that has an Oracle Linux support subscription. So there is no additional pack to license and no additional per-server cost; it's all included in our Basic, Premier and Systems support subscriptions. The nice thing with Oracle Enterprise Manager is that you end up with a single management product that can manage all aspects of your software stack. You have complete insight into the applications running, you have roles and responsibilities, you have third-party connectors for storage or other products, and it makes it very easy and convenient to correlate data and events when something happens. If you use Oracle VM as well, you end up with a complete cloud portal with self-service, chargeback, etc. Another, much simpler, option is just using yum. It is very easy to take a server, create directories and expose these through Apache as repositories. You can have a simple yum config on each server pointing to a few specific repositories. It requires some manual effort in terms of creating directories, downloading packages and creating local repo files, but it's easy to do and for many people a preferred solution (a minimal sketch of such a setup follows below).
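    To make the yum option concrete, here is a hedged sketch of such a local repository setup. This is my own illustration, not from the original post: the paths, hostname and repo id are made up, and it assumes the createrepo package is installed and Apache serves /var/www/html.

        # On the repo server: stage the packages and generate yum metadata
        mkdir -p /var/www/html/repo/ol6/x86_64
        cp /path/to/downloaded/*.rpm /var/www/html/repo/ol6/x86_64/
        createrepo /var/www/html/repo/ol6/x86_64

        # On each client: point yum at the Apache-served repository,
        # e.g. in /etc/yum.repos.d/local-ol6.repo
        [local-ol6]
        name=Local Oracle Linux 6 repository
        baseurl=http://repohost.example.com/repo/ol6/x86_64/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=1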
    There are also a good number of customers that just connect their servers directly to ULN or to our free update server, public-yum. Just to re-iterate: our public-yum servers have all the errata and updates available for free. Now we have added another option. Many of our customers have switched from a competing Linux vendor, and they were familiar with that vendor's management tools. Switching to Oracle for support is very easy since we don't require changes to the installed servers, but we also want to make sure there is a very easy and almost transparent switch for the management tools as well. While Oracle Enterprise Manager is our preferred way of managing systems, we are now offering Spacewalk 2.0 to our customers. The community project can be found here. We have made a few changes to ensure easy and complete support for Oracle Linux, tested it with public-yum, etc. You can find the rpms in our public-yum repos at http://public-yum.oracle.com/repo/OracleLinux/OL6/. There are repositories for the spacewalk server, and for each version (OL5, OL6) and architecture (x86 and x86-64) we have the client repositories as well. Spacewalk itself is only made available for OL6 x86-64. Documentation can be found here. I set it up myself, and here are some quick steps on how you can get going in just a matter of minutes:

    Spacewalk Server Installation

    1) Install an Oracle Database

    Use an existing Oracle Database or install a new Oracle Database (Standard or Enterprise Edition) [at this time use 11g; we will add support for 12c in the near future]. This database can be installed on the spacewalk server or on a separate remote server. While Oracle XE might work to create a small sample POC, we do not support the use of Oracle XE; spacewalk repositories can become large and create a significant database workload. Customers can use their existing database licenses, they can download the database with a trial license from http://edelivery.oracle.com, or Oracle Linux subscribers (customers) are allowed to use the Oracle Database as a spacewalk repository as part of their Oracle Linux subscription at no additional cost.

    NOTE: spacewalk requires the database to be configured with the UTF8 character set, and installation will fail if your database does not use UTF8. To verify that your database is configured correctly, run the following command in sqlplus:

        select value from nls_database_parameters where parameter='NLS_CHARACTERSET';

    This should return 'AL32UTF8'.

    2) Configure the database schema for spacewalk

    Ideally, create a tablespace in the database to hold the spacewalk schema tables/data:

        create tablespace spacewalk datafile '/u01/app/oracle/oradata/orcl/spacewalk.dbf'
          size 10G autoextend on;

    Create the database user spacewalk (or use some other schema name) in sqlplus. Example:

        create user spacewalk identified by spacewalk;
        grant connect, resource to spacewalk;
        grant create table, create trigger, create synonym, create view, alter session to spacewalk;
        grant unlimited tablespace to spacewalk;
        alter user spacewalk default tablespace spacewalk;

    3) Spacewalk installation and configuration

    The Spacewalk server requires an Oracle Linux 6 x86-64 system. Clients can be Oracle Linux 5 or 6, both 32- and 64-bit; the server is only supported on OL6 x86-64. The easiest way to get started is to do a 'Minimal' install of Oracle Linux on a server and configure the yum repository to include the spacewalk repo from public-yum. Once you have a system with a minimal install, modify your yum repo to include the spacewalk repo. Example: edit /etc/yum.repos.d/public-yum-ol.repo and add the following lines at the end of the file:

        [spacewalk]
        name=spacewalk
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/spacewalk20/server/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=1

    Install the following prerequisite packages on your spacewalk server:

        rpm -ivh oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64
        rpm -ivh oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64

    The above RPMs can be found on the Oracle Technology Network website: http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html

    As the root user, configure the library path to include the Oracle Instant Client libraries:

        cd /etc/ld.so.conf.d
        echo /usr/lib/oracle/11.2/client64/lib > oracle-instantclient11.2.conf
        ldconfig

    Install spacewalk:

        # yum install spacewalk-oracle

    The above yum command should download and install all required packages to run spacewalk on your local server.

    NOTE: if you did a full, desktop or workstation installation, you have to remove the JTA package BEFORE installing spacewalk-oracle (rpm -e --nodeps jta).

    Once the installation completes, simply run the spacewalk configuration tool and you are all set (make sure to run the command with the 2 arguments):

        spacewalk-setup --disconnected --external-db

    Answer the questions during the setup; ensure you provide the database user (example: spacewalk), password (example: spacewalk) and database server hostname (the standard hostname of the server on which you have deployed the Oracle database). At the end of the setup script, your spacewalk server should be fully configured and you can log into the web portal. Use your favorite browser to connect to the website: http://[spacewalkserverhostname]. The very first action will be to create the main admin account.
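    Once the server is up, clients still need to be pointed at it. As a hedged sketch (my own addition, not part of the original write-up; the hostname and activation key are made up), registration on an Oracle Linux client typically uses the Spacewalk client tools and an activation key created in the web portal:

        # On an OL6 client: install the client tools from the spacewalk client repo,
        # then register against your spacewalk server with an activation key
        yum install rhn-client-tools rhn-check rhn-setup rhnsd m2crypto yum-rhn-plugin
        rhnreg_ks --serverUrl=https://spacewalk.example.com/XMLRPC \
                  --sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT \
                  --activationkey=1-mykey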

    Read the article

  • JSP Precompilation for ADF Applications

    - by Duncan Mills
    A question that comes up from time to time, particularly in relation to build automation, is how best to pre-compile the .jspx and .jsff files in an ADF application, thus ensuring that the app is ready to run as soon as it's installed into WebLogic. In the normal run of things, the first poor soul to hit a page pays the price and has to wait a little whilst the JSP is compiled into a servlet. Everyone else subsequently gets a free lunch. So it's a reasonable thing to want to do...

    Let Me List the Ways

    So forth to Google (other search engines are available)... which led me to a fairly old article on WLDJ - Removing Performance Bottlenecks Through JSP Precompilation. Technology-wise it's somewhat out of date, but the one good point it made is that it's really not very useful to try to use the precompile option in the weblogic.xml file. That's a really good observation - particularly if you're trying to integrate a pre-compile step into a Hudson Continuous Integration process. That same article mentioned an alternative approach for programmatic pre-compilation using weblogic.jspc. This seemed like a much more useful approach for a CI environment. However, weblogic.jspc is now superseded by weblogic.appc, so we'll use that instead. Thanks to Steve for the pointer there.

    And So To APPC

    APPC has documentation - always a great place to start - and supports usage both from Ant, via the wlappc task, and from the command line using the weblogic.appc command. In my testing I took the latter approach. Usage, as the documentation will show you, is superficially pretty simple. The nice thing here is that you can pass an existing EAR file (generated of course using OJDeploy) and that EAR will be updated in place with the freshly compiled servlet classes created from the JSPs. Appc takes care of all the unpacking, compiling and re-packing of the EAR for you. Neat. So we're done, right? Not quite.

    The Devil is in the Detail

    OK, so I'm being overly dramatic, but it's not all plain sailing, so here's a short guide to using weblogic.appc to compile a simple ADF application without pain.

    Information You'll Need

    The following is based on the assumption that you have a stand-alone WLS install with the Application Development Runtime installed and a suitable ADF-enabled domain created. This could of course all be run off of a JDeveloper install as well.

    1. Your WebLogic home directory. Everything you need is relative to this, so make a note. In my case it's c:\builds\wls_ps4.

    2. Next deploy your EAR as normal and have a peek inside it using your favourite zip management tool. First of all, look at the weblogic-application.xml inside the EAR /META-INF directory. Look for any library references. Something like this:

        <library-ref>
          <library-name>adf.oracle.domain</library-name>
        </library-ref>

    Make a note of the library ref (adf.oracle.domain in this case); you'll need that in a second.

    3. Next open the nested WAR file within the EAR and have a peek inside the weblogic.xml file in the /WEB-INF directory. Again, make a note of the library references.

    4. Now start WebLogic as per normal and run the WebLogic console app (e.g. http://localhost:7001/console). In the Domain Structure navigator, select Deployments.

    5. For each of the libraries you noted down, drill into the library definition and make a note of the .war, .ear or .jar that defines the library. For example, in my case adf.oracle.domain maps to "C:\ builds\ WLS_PS4\ oracle_common\ modules\ oracle. adf. model_11. 1. 1\ adf. oracle. domain. ear". Note the extra spaces that are salted throughout this string as it is displayed in the console - just to make it annoying, you'll have to strip these out.

    6. Finally you'll need the location of adfsharembean.jar. We need to pass this on the classpath for APPC so that the ADFConfigLifeCycleCallBack listener can be found. In a more complex app of your own you may need additional classpath entries as well.

    Now we're ready to go, and it's a simple matter of applying the information we have gathered to the relevant command line arguments for the utility.

    A Simple CMD File to Run APPC

    Here's the stub .cmd file I'm using on Windows to run this:

        @echo off
        REM Stub weblogic.appc Runner
        setlocal
        set WLS_HOME=C:\builds\WLS_PS4
        set ADF_LIB_ROOT=%WLS_HOME%\oracle_common\modules
        set COMMON_LIB_ROOT=%WLS_HOME%\wlserver_10.3\common\deployable-libraries
        set ADF_WEBAPP=%ADF_LIB_ROOT%\oracle.adf.view_11.1.1\adf.oracle.domain.webapp.war
        set ADF_DOMAIN=%ADF_LIB_ROOT%\oracle.adf.model_11.1.1\adf.oracle.domain.ear
        set JSTL=%COMMON_LIB_ROOT%\jstl-1.2.war
        set JSF=%COMMON_LIB_ROOT%\jsf-1.2.war
        set ADF_SHARE=%ADF_LIB_ROOT%\oracle.adf.share_11.1.1\adfsharembean.jar

        REM Set up the WebLogic environment so appc can be found
        call %WLS_HOME%\wlserver_10.3\server\bin\setWLSEnv.cmd
        CLS

        REM Now compile away!
        java weblogic.appc -verbose -library %ADF_WEBAPP%,%ADF_DOMAIN%,%JSTL%,%JSF% -classpath %ADF_SHARE% %1
        endlocal

    Running the above on a target ADF .ear file will zip through and create all of the relevant compiled classes inside your nested .war file in the \WEB-INF\classes\jsp_servlet\ directory (but don't take my word for it, run it and take a look!)

    And So...

    In the immortal words of the Pet Shop Boys, was it worth it? Well, here's where you'll have to do your own testing. In my case, with a simple ADF application, pre-compilation shaved a non-scientific "3 Elephants" off of the initial page load time for the first access of each page. That's a pretty significant payback for such a simple step to add into your CI process, so why not give it a go.

    Read the article

  • C#/.NET Little Wonders: Comparer<T>.Default

    - by James Michael Hare
    I’ve been working with a wonderful team on a major release where I work, which has had the side-effect of occupying most of my spare time preparing, testing, and monitoring.  However, I do have this Little Wonder tidbit to offer today.

    Introduction

    The IComparable<T> interface is great for implementing a natural order for a data type.  It’s a very simple interface with a single method:

        public interface IComparable<in T>
        {
            // Compare this instance to another instance of the same type.
            int CompareTo(T other);
        }

    So what do we expect for the integer return value?  It’s a pseudo-relative measure of the ordering of x and y, which returns an integer value in much the same way C++ returns an integer result from the strcmp() c-style string comparison function:

        If x == y, returns 0.
        If x > y, returns > 0 (often +1, but not guaranteed).
        If x < y, returns < 0 (often -1, but not guaranteed).

    Notice that the comparison operator you use to evaluate the result against zero should be the same comparison operator you would use between x and y.  That is, if you want to see if x > y, you check whether the result > 0.

    The Problem: Comparing With null Can Be Messy

    This gets tricky, though, when you have null arguments.  According to MSDN, a null value should be considered equal to a null value, and a null value should be less than a non-null value.  So taking this into account we’d expect this instead:

        If x == y (or both are null), return 0.
        If x > y (or y only is null), return > 0.
        If x < y (or x only is null), return < 0.

    But here’s the problem – if x is null, what happens when we attempt to call CompareTo() off of x?

        // what happens if x is null?
        x.CompareTo(y);

    It’s pretty obvious we’ll get a NullReferenceException here.  Now, we could guard against this before calling CompareTo():

        int result;

        // first check to see if lhs is null
        if (x == null)
        {
            // if lhs is null, check rhs to decide on the return value
            if (y == null)
            {
                result = 0;
            }
            else
            {
                result = -1;
            }
        }
        else
        {
            // CompareTo() should handle a null y correctly and return > 0 if so
            result = x.CompareTo(y);
        }

    Of course, we could shorten this with the ternary operator (?:), but even then it’s ugly, repetitive code:

        int result = (x == null)
            ? ((y == null) ? 0 : -1)
            : x.CompareTo(y);

    Fortunately, the null issues can be cleaned up by drafting in an external comparer.

    The Solution: Comparer<T>.Default

    You can always develop your own instance of IComparer<T> for the job of comparing two items of the same type.  It, too, is a very simple interface, this time with a single method that takes both instances as arguments:

        public interface IComparer<in T>
        {
            // Compare two instances of the same type.
            int Compare(T x, T y);
        }

    The nice thing about an IComparer is that it is independent of the things you are comparing, which makes it great for comparing in an alternative order to the natural order of items, or when one or both of the items may be null:

        public class NullableIntComparer : IComparer<int?>
        {
            public int Compare(int? x, int? y)
            {
                return (x == null)
                    ? ((y == null) ? 0 : -1)
                    : x.Value.CompareTo(y);
            }
        }

    Now, if you want a custom sort -- especially on large-grained objects with different possible sort fields -- this is the best option you have.  But if you just want to take advantage of the natural ordering of the type, there is an easier way.  If the type you want to compare already implements IComparable<T>, or if the type is System.Nullable<T> where T implements IComparable, there is a class in the System.Collections.Generic namespace called Comparer<T> which exposes a property called Default.  Default creates a singleton that represents the default comparer for items of that type.  For example:

        // compares integers
        var intComparer = Comparer<int>.Default;

        // compares DateTime values
        var dateTimeComparer = Comparer<DateTime>.Default;

        // compares nullable doubles using the null rules!
        var nullableDoubleComparer = Comparer<double?>.Default;

    This helps you avoid having to remember the messy null logic, and makes it easy to compare objects when you don’t know whether one or both of the values is null.  It works especially well when creating, say, an IComparer<T> implementation for a large-grained class with a field that may or may not be present.  For example, let’s say you want to create a sorting comparer for a stock’s opening price, but if the market the stock trades in hasn’t opened yet, the open price will be null.  We could handle this (assuming a reasonable Quote definition) like:

        public class Quote
        {
            // the opening price of the symbol quoted
            public double? Open { get; set; }

            // ticker symbol
            public string Symbol { get; set; }

            // etc.
        }

        public class OpenPriceQuoteComparer : IComparer<Quote>
        {
            // Compares two quotes by opening price
            public int Compare(Quote x, Quote y)
            {
                return Comparer<double?>.Default.Compare(x.Open, y.Open);
            }
        }

    Summary

    Defining a custom comparer is often needed for non-natural orderings or alternative orderings, but when you just want to compare two items that are IComparable<T> and account for null behavior, you can use the Comparer<T>.Default comparer generator and you’ll never have to worry about correct null value sorting again.

    Technorati Tags: C#,.NET,Little Wonders,BlackRabbitCoder,IComparable,Comparer
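    As a quick usage sketch (my own addition, not from the original post; it assumes the Quote and OpenPriceQuoteComparer types defined above), sorting with the comparer puts the null opens first, exactly as the null rules dictate:

        // Hypothetical usage of the types above: List<T>.Sort accepts any IComparer<T>.
        using System;
        using System.Collections.Generic;

        public static class ComparerDemo
        {
            public static void Main()
            {
                var quotes = new List<Quote>
                {
                    new Quote { Symbol = "AAA", Open = 12.50 },
                    new Quote { Symbol = "BBB", Open = null },   // market not open yet
                    new Quote { Symbol = "CCC", Open = 7.25 }
                };

                // BBB sorts first: Compare(null, non-null) returns < 0
                quotes.Sort(new OpenPriceQuoteComparer());

                foreach (var quote in quotes)
                {
                    Console.WriteLine("{0}: {1}", quote.Symbol,
                        quote.Open.HasValue ? quote.Open.ToString() : "null");
                }
            }
        }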

    Read the article

  • How can I make this Java code run faster?

    - by Martin Wiboe
    Hello all, I am trying to make a Java port of a simple feed-forward neural network. This obviously involves lots of numeric calculations, so I am trying to optimize my central loop as much as possible. The results should be correct within the limits of the float data type. My current code looks as follows (error handling & initialization removed):

        /**
         * Simple implementation of a feedforward neural network. The network supports
         * including a bias neuron with a constant output of 1.0 and weighted synapses
         * to hidden and output layers.
         *
         * @author Martin Wiboe
         */
        public class FeedForwardNetwork {
            private final int outputNeurons;     // No of neurons in output layer
            private final int inputNeurons;      // No of neurons in input layer
            private int largestLayerNeurons;     // No of neurons in largest layer
            private final int numberLayers;      // No of layers
            private final int[] neuronCounts;    // Neuron count in each layer, 0 is input layer.
            private final float[][][] fWeights;  // Weights between neurons.
                                                 // fWeight[fromLayer][fromNeuron][toNeuron]
                                                 // is the weight from fromNeuron in fromLayer
                                                 // to toNeuron in layer fromLayer+1.
            private float[][] neuronOutput;      // Temporary storage of output from previous layer

            public float[] compute(float[] input) {
                // Copy input values to input layer output
                for (int i = 0; i < inputNeurons; i++) {
                    neuronOutput[0][i] = input[i];
                }

                // Loop through layers
                for (int layer = 1; layer < numberLayers; layer++) {
                    // Loop over neurons in the layer and determine weighted input sum
                    for (int neuron = 0; neuron < neuronCounts[layer]; neuron++) {
                        // Bias neuron is the last neuron in the previous layer
                        int biasNeuron = neuronCounts[layer - 1];

                        // Get weighted input from bias neuron - output is always 1.0
                        float activation = 1.0F * fWeights[layer - 1][biasNeuron][neuron];

                        // Get weighted inputs from rest of neurons in previous layer
                        for (int inputNeuron = 0; inputNeuron < biasNeuron; inputNeuron++) {
                            activation += neuronOutput[layer - 1][inputNeuron]
                                    * fWeights[layer - 1][inputNeuron][neuron];
                        }

                        // Store neuron output for next round of computation
                        neuronOutput[layer][neuron] = sigmoid(activation);
                    }
                }

                // Return output from network = output from last layer
                float[] result = new float[outputNeurons];
                for (int i = 0; i < outputNeurons; i++)
                    result[i] = neuronOutput[numberLayers - 1][i];
                return result;
            }

            private final static float sigmoid(final float input) {
                return (float) (1.0F / (1.0F + Math.exp(-1.0F * input)));
            }
        }

    I am running the JVM with the -server option, and as of now my code is between 25% and 50% slower than similar C code. What can I do to improve this situation? Thank you, Martin Wiboe
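    Not part of the original question, but one common first step for hot loops like this is to hoist the repeated array dereferences into locals so they are not re-resolved on every iteration. Here is a hedged sketch of just the layer loop from compute() above (same math, same field names; whether it actually helps on a given JVM is something you would have to measure):

        // Hypothetical variant of the layer loop: hoisting the row references
        // (neuronOutput[layer - 1] and fWeights[layer - 1]) out of the inner loops
        // avoids repeated multi-dimensional array indirection in the hot path.
        for (int layer = 1; layer < numberLayers; layer++) {
            final float[] prevOutput = neuronOutput[layer - 1];
            final float[][] weights = fWeights[layer - 1];
            final int biasNeuron = neuronCounts[layer - 1];

            for (int neuron = 0; neuron < neuronCounts[layer]; neuron++) {
                // Bias neuron contributes its weight directly (its output is always 1.0)
                float activation = weights[biasNeuron][neuron];

                for (int inputNeuron = 0; inputNeuron < biasNeuron; inputNeuron++) {
                    activation += prevOutput[inputNeuron] * weights[inputNeuron][neuron];
                }

                // Store neuron output for the next layer
                neuronOutput[layer][neuron] = sigmoid(activation);
            }
        }

    A further, more invasive option along the same lines would be to transpose fWeights so the inner loop walks a contiguous row (weights[neuron][inputNeuron]), which tends to be friendlier to the CPU cache.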

    Read the article

  • CodePlex Daily Summary for Sunday, August 03, 2014

    Popular Releases

    - BoxStarter: Boxstarter 2.4.76: Running the Setup.bat file will install Chocolatey if not present and then install the Boxstarter modules.
    - GMare: GMare Beta 1.2: Features added: instance painting by holding the alt key down while pressing the left mouse button; functionality in the binary exporter so that backgrounds from image files can be used; background information can now be edited manually in the binary exporter; update to the GMare binary read GML script; Game Maker Studio export; import from GMare project, with multiple options to import desired properties of a .gmpx; 10 undo/redo levels instead of 5 is now the default; new preferences dia...
    - Json.NET: Json.NET 6.0 Release 4: New feature - added Merge to LINQ to JSON. New feature - added JValue.CreateNull and JValue.CreateUndefined. New feature - added Windows Phone 8.1 support to the .NET 4.0 portable assembly. New feature - added OverrideCreator to JsonObjectContract. New feature - added support for overriding the creation of interfaces and abstract types. New feature - added support for reading UUID BSON binary values as a Guid. New feature - added MetadataPropertyHandling.Ignore. New feature - Improv...
    - SQL Server Dialog: SQL Server Dialog: Input server, user and password; show folders and files in a treeview; customize icons; filter by file extension; skip system-generated folders and files.
    - Aitso - a platform for spatial optimization based on artificial immune systems: Aitso_0.14.08.01: Aitso0.14.08.01Installer.zip
    - VidCoder: 1.5.24 Beta: Added NL-Means denoiser. Updated HandBrake core to SVN 6254. Added extra error handling to DVD player code to avoid a crash when the player was moved.
    - AutoUpdater.NET (auto update library for VB.NET and C# developers): AutoUpdater.NET 1.3: Fixed a problem in DownloadUpdateDialog where the download continued even if you closed the dialog. Added support for a new url field for 64-bit application setups; AutoUpdater.NET decides which download url to use by looking at the value of IntPtr.Size. Added German translation provided by Rene Kannegiesser. Developers can now handle update logic themselves using the event suggested by ricorx7. Added Italian translation provided by Gianluca Mariani. Fixed a bug that prevents the application from exiti...
    - SEToolbox: SEToolbox 01.041.012 Release 1: Added voxel material textures to read in with mods. Fixed missing texture replacements for mods. Fixed a rounding issue in the raytrace code. Fixed a repair issue with a corrupt checkpoint file. Fixed an issue with updated SE binaries 01.041.012 using the new container configuration.
    - Magick.NET: Magick.NET 6.8.9.601: Magick.NET linked with ImageMagick 6.8.9.6. Breaking changes: changed arguments for the Map method of MagickImage; QuantizeSettings uses Riemersma by default.
    - Multiple Threads TCP Server: Project: this project is based on VS 2013 and .NET Framework 4.0; you can open it with VS 2010 or later.
    - Aricie Shared: Aricie.Shared Version 1.8.00: Version 1.8.0 release notes. New: Expression Builder to design Flee expressions. New: cryptographic helpers and configuration classes. Improvement: many fixes and improvements to the property editor. Improvement: the Token Replace property explorer now has a restricted mode for additional security. Improvement: better variables, types and object manipulation. Fixed: smart file and Flee bugs. Fixed: removed an exception while trying to read unsupported files. Improvement: several performance twe...
    - Accesorios de sitios Torrent en Español para Synology Download Station: Pack de Torrents en Español 6.0.0: Added the DivXTotal modules; the search module depends on the hosting module to download series. Uses the RSS feed: http://www.divxtotal.com/rss.php
    - DbEntry.Net (Leafing Framework): DbEntry.Net 4.2: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access component for .NET 4.0+. It has a clear and easy programming interface for ORM and direct SQL, and supports Access, SQL Server, MySQL, SQLite, Firebird, PostgreSQL and Oracle. It also provides a Ruby on Rails style MVC framework, an ASP.NET DataSource and a simple IoC. DbEntry.Net.v4.2.Setup.zip includes the setup package. DbEntry.Net.v4.2.Src.zip includes source files and unit tests. DbEntry.Net.v4.2.Samples.zip ...
    - Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1, the first release of the latest Azure Storage Explorer, code-named Phoenix. Some important things to know about version 6: open source - now run as a full open source project with full source code on CodePlex, collaboration encouraged! Updated code base - brand-new code base (WPF/C#/.NET 4.5), Visual Studio 2013 solution (previously VS2010), uses the Task Parallel Library (TPL) for asynchronous background operat...
    - Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files were missing from the latest release of WPP, which led to a crash when trying to make a custom update. Added a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Added the ability to publish detectoids. (This feature is still in a BETA phase; packages relying on these detectoids to determine which computers need to be updated may apply to all computers.)
    - VG-Ripper & PG-Ripper: PG-Ripper 1.4.32: Changes: NEW: added support for 'ImgMega.com' links; NEW: added support for 'ImgCandy.net' links; NEW: added support for 'ImgPit.com' links; NEW: added support for 'Img.yt' links; FIXED: 'Radikal.ru' links; FIXED: 'ImageTeam.org' links; FIXED: 'ImgSee.com' links; FIXED: 'Img.yt' links.
    - Asp.Net MVC-4, Entity Framework and JQGrid Demo with Todo List WebApplication: Asp.Net MVC-4, Entity Framework and JQGrid Demo: a simple Todo List web application. Overview: TodoList is a simple web application to create, store and modify Todo tasks maintained by users, comprising the following fields (Task Name, Task Description, Severity, Target Date, Task Status). The TodoList web application is created using the MVC-4 architecture, code-first Entity Framework (ORM) and JqGrid for displaying the data.
    - Waterfox: Waterfox 31.0 Portable: New features in Waterfox 31.0: added support for Unicode 7.0; experimental support for WebCL. New features in Firefox 31.0: New - added the search field to the new tab page; support of the Prefer:Safe http header for parental control; mozilla::pkix as the default certificate verifier; block malware from downloaded files; audio/video .ogg and .pdf files handled by Firefox if no application is specified. Changed - removal of the CAPS infrastructure for specifying site-sp...
    - SuperSocket, an extensible socket server framework: SuperSocket 1.6.3: The changes below are included in this release: fixed an exception when collecting a server's status after it had been stopped; fixed a bug that could cause an exception when sending data after the connection had already dropped; fixed the missing log4net issue for a QuickStart project; fixed a warning in a QuickStart project.
    - Ynote Classic: Ynote Classic 2.8.5 Beta: several changes - multiple carets and multiple selections; improved startup time; improved syntax highlighting; search improvements; shell command; improved stability.

    New Projects

    - Creek: Creek is a collection of many C# frameworks and my own.
    - Speaking Speedometer (android): Simple speaking speedometer.
    - T125Protocol { Alpha version }: implements the T125 protocol for communicating with a mainframe.
    - Unix Time: This library provides System.UnixTime as a new type, providing conversion between Unix time and .NET DateTime.

    Read the article

  • Book Review: Brownfield Application Development in .NET

    - by DotNetBlues
    I recently finished reading the book Brownfield Application Development in .NET by Kyle Baley and Donald Belcham.  The book is available from Manning.  First off, let me say that I'm a huge fan of Manning as a publisher.  I've found their books to be top-quality, overall.  As a Kindle owner, I also appreciate getting an ebook copy along with the dead tree copy.  I find ebooks to be much more convenient to read, but hard copies are easier to reference. The book covers, surprisingly enough, working with brownfield applications, which is well and good if that term has meaning to you.  It didn't for me.  Without retreading a chunk of the first chapter, the authors break code bases into three broad categories: greenfield, brownfield, and legacy.  Greenfield is, essentially, new development that hasn't had time to rust and is (hopefully) being approached with some discipline.  Legacy applications are those that are more or less stable and functional, that do not expect to see a lot of work done to them, and are more likely to be replaced than reworked. Brownfield code is the gray (brown?) area between the two, and the authors argue, quite effectively, that it is the most likely state for an application to be in.  Brownfield code has, in some way, been allowed to tarnish around the edges and can be difficult to work with.  Although I hadn't realized it, most of the code I've worked on has been brownfield.  Sometimes, there's talk of scrapping and starting over.  Sometimes, the team dismisses increased discipline as ivory tower nonsense.  And, sometimes, I've been the ignorant culprit vexing my future self. The book is broken into two major sections, plus an introduction chapter and an appendix.  The first section covers what the authors refer to as "The Ecosystem," which consists of version control, build and integration, testing, metrics, and defect management.  The second section is on actually writing code for brownfield applications and discusses object-oriented principles, architecture, external dependencies, and, of course, how to deal with these when coming into an existing code base. The ecosystem section is just shy of 140 pages long and brings some real meat to the matter.  The focus on "pain points" immediately sets the tone as problem-solution, rather than academic.  The authors also approach some of the topics from a different angle than some essays I've read on similar topics.  For example, the chapter on automated testing is on just that -- automated testing.  It's all well and good to criticize a project as conflating integration tests with unit tests, but it really doesn't make anyone's life better.  The discussion on testing is more focused on the "right" level of testing for existing projects.  Sometimes, an integration test is the best you can do without gutting a section of functional code.  Even if you can sell other developers and/or management on doing so, it doesn't actually provide benefit to your customers to rewrite code that works.  This isn't to say the authors encourage sloppy coding.  Far from it.  Just that they point out the wisdom of ignoring the sleeping bear until after you deal with the snarling wolf. The other sections take a similarly real-world, workable approach to the pain points they address.  As the section moves from technical solutions like version control and continuous integration (CI) to the softer, process issues of metrics and defect tracking, the authors begin to gently suggest moving toward a zero defect count.
    While that really sounds like an unreasonable goal for a lot of ongoing projects, it's quite apparent that the authors have first-hand experience with taming some gruesome projects.  The suggestions are grounded and workable, and the difficulty of some situations is explicitly acknowledged. I have to admit that I started getting bored by the end of the ecosystem section.  No matter how valuable I think a good project manager or business analyst is to a successful ALM, at the end of the day, I'm a gear-head.  Also, while I agreed with a lot of the ecosystem ideas in theory, I didn't necessarily feel that a lot of the single-developer projects that I'm often involved in really needed that level of rigor.  It's only after reading the sidebars and commentary in the coding section that I had the context for the arguments made in favor of a strong ecosystem supporting the development process.  That isn't to say that I didn't support good product management -- indeed, I've probably pushed too hard, on occasion, for a strong ALM outside of just development.  This book gave me deeper insight into why some corners shouldn't be cut and how damaging certain sins of omission can be. The code section, though, kept me engaged for its entirety.  Many technical books can be used as reference material from day one.  The authors were clear, however, that this book is not one of those.  The first chapter of the section (chapter seven, overall) addresses object-oriented (OO) practices.  I've read any number of definitions, discussions, and treatises on OO.  None of the chapter was new to me, but it was a good review, and I'm of the opinion that it's good to review the foundations of what you do from time to time, so I didn't mind. The remainder of the book is really just about how to apply OOP to existing code -- and just because all your code exists in classes does not mean that it's object-oriented.  That topic has the potential to be extremely condescending, but the authors miraculously managed to never once make me feel like a dolt, or as though they were wagging their fingers at me for my prior sins.  Instead, they continue the "pain points" and problem-solution presentation to give concrete examples of how to apply some pretty academic-sounding ideas.  That's a point worth emphasizing, as my experience with most OO discussions is that they stay in the academic realm.  This book gives some very, very good explanations of why things like the Liskov Substitution Principle exist and why a corporate programmer should even care.  Even if you know, with absolute certainty, that you'll never have to work on an existing code base, I would recommend this book just for the clarity it provides on OOP. This book goes beyond just theory, or even real-world application.  It presents some methods for fixing problems that any developer can, and probably will, encounter in the wild.  First, the authors address refactoring application layers and internal dependencies.  Then, they take you through those layers from the UI to the data access layer and external dependencies.  Finally, they come full circle to tie it all back to the overall process.  By the time the book is done, you're left with a lot of ideas, but also a reasonable plan to begin to improve an existing project structure. Throughout the book, it's apparent that the authors have their own preferred methodology (TDD and domain-driven design), as well as some preferred tools.  The "Our .NET Toolbox" sidebar is something of a neon sign pointing to that latter point.
They do not beat the reader over the head with anything resembling a "One True Way" mentality.  Even for the most emphatic points, the tone is quite congenial and helpful.  With some of the near-theological divides that exist within the tech community, I found this to be one of the more remarkable characteristics of the book.  Although the authors favor tools that might be considered Alt.NET, there is no reason the advice and techniques given couldn't be quite successful in a pure Microsoft shop with Team Foundation Server.  For that matter, even though the book specifically addresses .NET, it could be applied to a Java and Oracle shop, as well.

    Read the article

< Previous Page | 172 173 174 175 176 177 178 179 180 181 182 183  | Next Page >