Search Results

Search found 11617 results on 465 pages for 'big blue'.


  • Nightmare: Upgrading Tomcat 5.5 to 6.0

    - by pavanlimo
    I'm trying to upgrade a perfectly running embedded Tomcat 5.5 to Tomcat 6.0. I understand that all I need to do is replace the Tomcat 5.5 jars with the 6.0 ones. That's what I did. So I replaced the following jars: catalina-5.0.28.jar catalina-5.5.9.jar catalina-optional-5.5.9.jar commons-el.jar commons-modeler-1.1.0.jar jasper-compiler-jdt.jar jasper-compiler.jar jasper-runtime.jar jmx-5.0.28.jar jsp-api-2.0.jar naming-factory.jar naming-resources.jar servlet-api-2.4.jar servlets-default.jar tomcat-coyote.jar tomcat-http.jar tomcat-util.jar with: annotations-api.jar catalina.jar jasper.jar tomcat-dbcp.jar catalina-ant.jar el-api.jar jsp-api.jar tomcat-i18n-es.jar catalina-ha.jar jasper-el.jar servlet-api.jar tomcat-i18n-fr.jar catalina-tribes.jar jasper-jdt.jar tomcat-coyote.jar tomcat-i18n-ja.jar tomcat-juli.jar As soon as I start the server, I get the following message in the logs at INFO level: INFO: Starting Servlet Engine: Apache Tomcat/6.0.29 Dec 31, 2010 6:04:18 AM org.apache.catalina.loader.WebappClassLoader validateJarFile INFO: validateJarFile(/usr/local/blah/blue/./WEB-INF/lib/servlet-api.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class Based on this explanation, I need to remove a jar file which has a conflicting Servlet.class. I swear to God there is no other conflicting jar file; I grepped system-wide for Servlet.class and it matched only servlet-api.jar. I also downloaded javaee.jar and used it in place of servlet-api.jar, to no avail. Having tried a lot of these things and not having much else to go on, I set the Tomcat logging level to ALL. In the log I could see that it checks for Servlet.class in each and every jar it loads, and throws the "jar not loaded" message as soon as it finds servlet-api.jar. See below: FINE: Checking for javax/servlet/Servlet.class Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappLoader setRepositories FINE: Deploy JAR /WEB-INF/lib/servlet-api.jar to /usr/local/blah/blue/./WEB-INF/lib/servlet-api.jar Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappClassLoader addJar FINE: addJar(/WEB-INF/lib/servlet-api.jar) Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappClassLoader validateJarFile FINE: Checking for javax/servlet/Servlet.class Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappClassLoader validateJarFile INFO: validateJarFile(/usr/local/blah/blue/./WEB-INF/lib/servlet-api.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappLoader setRepositories Please note, however, that Tomcat starts successfully! And as soon as I hit the URL in the browser, I get a blank page (this may be specific to my case, I guess because of my web.xml, which is a bit different from most; other people on the internet get Error 404 instead)
    with the following log statements (at the finest level): Jan 2, 2011 9:40:01 AM org.apache.catalina.connector.CoyoteAdapter parseSessionCookiesId FINE: Requested cookie session id is 0FBA716E3F9B0147C3AF7ABAE3B1C27B Jan 2, 2011 9:40:01 AM org.apache.catalina.authenticator.AuthenticatorBase invoke FINE: Security checking request GET /login.jsp Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: No applicable constraint located Jan 2, 2011 9:40:01 AM org.apache.catalina.authenticator.AuthenticatorBase invoke FINE: Not subject to any constraint Jan 2, 2011 9:40:01 AM org.apache.catalina.core.StandardWrapper allocate FINEST: Returning non-STM instance I'm not sure if the above log messages are important, but I'm including them for full disclosure. One interesting thing, though: I manually created a dummy JSP file containing only "helloooo" just outside the WEB-INF folder (no security constraints on this file). This file was accessible and could be displayed. But all my JSPs and classes are inside WEB-INF (of course). I'm sick and tired of this issue; please help me solve it. I've already spent 20-24 hours on this unsuccessfully. Any pointers, directions, hints, or leads?

    Read the article

  • Best way to add an extra (nested) form in the middle of a tabbed form

    - by Scharrels
    I've got a web application, consisting mainly of a big form with information. The form is split into multiple tabs, to make it more readable for the user: <form> <div id="tabs"> <ul> <li><a href="#tab1">Tab1</a></li> <li><a href="#tab2">Tab2</a></li> </ul> <div id="tab1">A big table with a lot of input rows</div> <div id="tab2">A big table with a lot of input rows</div> </div> </form> The form is dynamically extended (extra rows are added to the tables). Every 10 seconds the form is serialized and synchronized with the server. I now want to add an interactive form on one of the tabs: when a user enters a name in a field, this information is sent to the server and an id associated with that name is returned. This id is used as an identifier for some dynamically added form fields. A quick sketchup of such a page would look like this: <form action="bigform.php"> <div id="tabs"> <ul> <li><a href="#tab1">Tab1</a></li> <li><a href="#tab2">Tab2</a></li> </ul> <div id="tab1">A big table with a lot of input rows</div> <div id="tab2"> <div class="associatedinfo"> <p>Information for Joe</p> <ul> <li><input name="associated[26][]" /></li> <li><input name="associated[26][]" /></li> </ul> </div> <div class="associatedinfo"> <p>Information for Jill</p> <ul> <li><input name="associated[12][]" /></li> <li><input name="associated[12][]" /></li> </ul> </div> <div id="newperson"> <form action="newform.php"> <p>Add another person:</p> <input name="extra" /><input type="submit" value="Add" /> </form> </div> </div> </div> </form> The above will not work: nested forms are not allowed in HTML. However, I really need to display the form on that tab: it's part of the functionality of that page. I also want the behaviour of a separate form: when the user hits return in the form field, the "Add" submit button is pressed and a submit action is triggered on the partial form. What is the best way to solve this problem?
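    One way to get the nested-form behaviour without an actual nested form (a minimal sketch only, assuming jQuery is available and that newform.php simply returns the new person's id; the handler names below are illustrative, not part of the original page) is to keep the "Add" field inside the outer form, turn its submit button into a plain button, and drive it with an ajax call, wiring the Enter key to the same handler:

      // Sketch: emulate the inner form with ajax; #newperson contains only the "extra" input and an Add button
      $('#newperson input[name="extra"]').keypress(function (e) {
          if (e.which === 13) {        // Enter pressed inside the field
              e.preventDefault();      // keep the outer form from submitting
              addPerson();
          }
      });
      $('#newperson input[value="Add"]').click(function (e) {
          e.preventDefault();
          addPerson();
      });
      function addPerson() {
          var name = $('#newperson input[name="extra"]').val();
          $.post('newform.php', { extra: name }, function (id) {
              // hypothetical response: the id used to name the dynamically added fields
              $('#newperson').before(
                  '<div class="associatedinfo"><p>Information for ' + name + '</p>' +
                  '<ul><li><input name="associated[' + id + '][]" /></li>' +
                  '<li><input name="associated[' + id + '][]" /></li></ul></div>');
          });
      }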

    Read the article

  • What's wrong with this jQuery? It isn't working as intended

    - by Doug Smith
    Using cookies, I want it to remember the colour layout of the page. (So, if they set the gallery one color and the body background another color, it will save that on refresh. But it doesn't seem to be working. jQuery: $(document).ready(function() { if (verifier == 1) { $('body').css('background', $.cookie('test_cookie')); } if (verifier == 2) { $('#gallery').css('background', $.cookie('test_cookie')); } if (verifier == 3) { $('body').css('background', $.cookie('test_cookie')); $('#gallery').css('background', $.cookie('test_cookie')); } $('#set_cookie').click(function() { var color = $('#set_cookie').val(); $.cookie('test_cookie', color); }); $('#set_page').click(function() { $('body').css('background', $.cookie('test_cookie')); var verifier = 1; }); $('#set_gallery').click(function() { $('#gallery').css('background', $.cookie('test_cookie')); var verifier = 2; }); $('#set_both').click(function() { $('body').css('background', $.cookie('test_cookie')); $('#gallery').css('background', $.cookie('test_cookie')); var verifier = 3; }); }); HTML: <p>Please select a background color for either the page's background, the gallery's background, or both.</p> <select id="set_cookie"> <option value="#1d375a" selected="selected">Default</option> <option value="black">Black</option> <option value="blue">Blue</option> <option value="brown">Brown</option> <option value="darkblue">Dark Blue</option> <option value="darkgreen">Dark Green</option> <option value="darkred">Dark Red</option> <option value="fuchsia">Fuchsia</option> <option value="green">Green</option> <option value="grey">Grey</option> <option value="#d3d3d3">Light Grey</option> <option value="#32cd32">Lime Green</option> <option value="#f8b040">Macaroni</option> <option value="#ff7300">Orange</option> <option value="pink">Pink</option> <option value="purple">Purple</option> <option value="red">Red</option> <option value="#0fcce0">Turquoise</option> <option value="white">White</option> <option value="yellow">Yellow</option> </select> <input type="button" id="set_page" value="Page's Background" /><input type="button" id="set_gallery" value="Gallery's Background" /><input type="button" id="set_both" value="Both" /> </div> </div> </body> </html> Thanks so much for the help, I appreciate it. jsFiddle: http://jsfiddle.net/hL6Ye/
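    For reference, a common reason a layout like this does not survive a refresh is that verifier only ever exists as a local variable inside the click handlers, so nothing is persisted between page loads. A minimal sketch of one possible fix (assuming the jquery.cookie plugin the question already uses; the cookie name 'test_target' is made up for illustration) stores which target was coloured in a second cookie and reads both cookies on load:

      $(function () {
          var color = $.cookie('test_cookie');    // the saved colour, if any
          var target = $.cookie('test_target');   // hypothetical cookie remembering what was coloured
          if (color && (target === 'page' || target === 'both')) {
              $('body').css('background', color);
          }
          if (color && (target === 'gallery' || target === 'both')) {
              $('#gallery').css('background', color);
          }
          // #set_cookie is a <select>, so save the colour on change rather than click
          $('#set_cookie').change(function () {
              $.cookie('test_cookie', $(this).val());
          });
          $('#set_page').click(function () {
              $('body').css('background', $.cookie('test_cookie'));
              $.cookie('test_target', 'page');    // persist the choice instead of a local variable
          });
          $('#set_gallery').click(function () {
              $('#gallery').css('background', $.cookie('test_cookie'));
              $.cookie('test_target', 'gallery');
          });
          $('#set_both').click(function () {
              $('body, #gallery').css('background', $.cookie('test_cookie'));
              $.cookie('test_target', 'both');
          });
      });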

    Read the article

  • can't find what's wrong with my code :(

    - by blood
    The point of my code is that when I press F1 it will scan 500 pixels down and 500 pixels across and put them in an array (it just captures a 500-by-500 box of the screen). Then, when I hit End, it will click only on the color black (or whatever I set it to). Anyway, it has been doing odd stuff and I can't find why: #include <iostream> #include <windows.h> using namespace std; COLORREF rgb[499][499]; HDC hDC = GetDC(HWND_DESKTOP); POINT main_coner; BYTE rVal; BYTE gVal; BYTE bVal; int red; int green; int blue; int ff = 0; int main() { for(;;) { if(GetAsyncKeyState(VK_F1)) { cout << "started"; int a1 = 0; int a2 = 0; GetCursorPos(&main_coner); int x = main_coner.x; int y = main_coner.y; for(;;) { //cout << a1 << "___" << a2 << "\n"; rgb[a1][a2] = GetPixel(hDC, x, y); a1++; x++; if(x > main_coner.x + 499) { y++; x = main_coner.x; a1 = 0; a2++; } if(y > main_coner.y + 499) { ff = 1; break; } } cout << "done"; break; } if(ff == 1) break; } for(;;) { if(GetAsyncKeyState(VK_END)) { GetCursorPos(&main_coner); int x = main_coner.x; int y = main_coner.y; int a1 = -1; int a2 = -1; for(;;) { x++; a1++; rVal = GetRValue(rgb[a1][a2]); gVal = GetGValue(rgb[a1][a2]); bVal = GetBValue(rgb[a1][a2]); red = (int)rVal; // get the colors into __int8 green = (int)gVal; // get the colors into __int8 blue = (int)bVal; // get the colors into __int8 if(red == 0 && green == 0 && blue == 0) { SetCursorPos(main_coner.x + x, main_coner.y + y); mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0); Sleep(10); mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0); Sleep(100); } if(x > main_coner.x + 499) { a1 = 0; a2++; } if(y > main_coner.y + 499) { Sleep(100000000000); break; } if(GetAsyncKeyState(VK_CONTROL)) { Sleep(100000); break; } } } } for(;;) { if(GetAsyncKeyState(VK_END)) { break; } } return 0; } Does anyone see what's wrong with my code? :( (feel free to add tags)

    Read the article

  • WPF storyboard animation issue when using VisualBrush

    - by Flack
    Hey guys, I was playing around with storyboards, a flipping animation, and visual brushes. I have encountered an issue though. Below is the xaml and code-behind of a small sample I quickly put together to try to demonstrate the problem. When you first start the app, you are presented with a red square and two buttons. If you click the "Flip" button, the red square will "flip" over and a blue one will appear. In reality, all that is happening is that the scale of the width of the StackPanel that the red square is in is being decreased until it reaches zero and then the StackPanel where a blue square is, whose width is initially scaled to zero, has its width increased. If you click the "Flip" button a few times, the animation looks ok and smooth. Now, if you hit the "Reflection" button, a reflection of the red/blue buttons is added to their respective StackPanels. Hitting the "Flip" button now will still cause the flip animation but it is no longer a smooth animation. The StackPanels width often does not shrink to zero. The width shrinks somewhat but then just stops before being completely invisible. Then the other StackPanel appears as usual. The only thing that changed was adding the reflection, which is just a VisualBrush. Below is the code. Does anyone have any idea why the animations are different between the two cases (stalling in the second case)? Thanks. <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xml:lang="en-US" xmlns:d="http://schemas.microsoft.com/expression/blend/2006" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" x:Class="WpfFlipTest.Window1" x:Name="Window" Title="Window1" Width="214" Height="224"> <Window.Resources> <Storyboard x:Key="sbFlip"> <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="redStack" Storyboard.TargetProperty="(UIElement.RenderTransform).(ScaleTransform.ScaleX)"> <SplineDoubleKeyFrame KeyTime="00:00:00.4" Value="0"/> </DoubleAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames BeginTime="00:00:00.4" Storyboard.TargetName="blueStack" Storyboard.TargetProperty="(UIElement.RenderTransform).(ScaleTransform.ScaleX)"> <SplineDoubleKeyFrame KeyTime="00:00:00.8" Value="1"/> </DoubleAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="sbFlipBack"> <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="blueStack" Storyboard.TargetProperty="(UIElement.RenderTransform).(ScaleTransform.ScaleX)"> <SplineDoubleKeyFrame KeyTime="00:00:00.4" Value="0"/> </DoubleAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames BeginTime="00:00:00.4" Storyboard.TargetName="redStack" Storyboard.TargetProperty="(UIElement.RenderTransform).(ScaleTransform.ScaleX)"> <SplineDoubleKeyFrame KeyTime="00:00:00.8" Value="1"/> </DoubleAnimationUsingKeyFrames> </Storyboard> </Window.Resources> <Grid x:Name="LayoutRoot" Background="Gray"> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> </Grid.ColumnDefinitions> <StackPanel Name="redStack" Grid.Row="0" Grid.Column="0" RenderTransformOrigin="0.5,0.5"> <StackPanel.RenderTransform> <ScaleTransform/> </StackPanel.RenderTransform> <Border Name="redBorder" BorderBrush="Transparent" BorderThickness="4" Width="Auto" Height="Auto"> <Button Margin="0" Name="redButton" Height="75" Background="Red" Width="105" /> </Border> <Border 
Width="{Binding ElementName=redBorder, Path=ActualWidth}" Height="{Binding ElementName=redBorder, Path=ActualHeight}" Opacity="0.2" BorderBrush="Transparent" BorderThickness="4" Name="redRefelction" Visibility="Collapsed"> <Border.OpacityMask> <LinearGradientBrush StartPoint="0,0" EndPoint="0,1"> <LinearGradientBrush.GradientStops> <GradientStop Offset="0" Color="Black"/> <GradientStop Offset=".6" Color="Transparent"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Border.OpacityMask> <Border.Background> <VisualBrush Visual="{Binding ElementName=redButton}"> <VisualBrush.Transform> <ScaleTransform ScaleX="1" ScaleY="-1" CenterX="52.5" CenterY="37.5" /> </VisualBrush.Transform> </VisualBrush> </Border.Background> </Border> </StackPanel> <StackPanel Name="blueStack" Grid.Row="0" Grid.Column="0" RenderTransformOrigin="0.5,0.5"> <StackPanel.RenderTransform> <ScaleTransform ScaleX="0"/> </StackPanel.RenderTransform> <Border Name="blueBorder" BorderBrush="Transparent" BorderThickness="4" Width="Auto" Height="Auto"> <Button Grid.Row="0" Grid.Column="1" Margin="0" Width="105" Background="Blue" Name="blueButton" Height="75"/> </Border> <Border Width="{Binding ElementName=blueBorder, Path=ActualWidth}" Height="{Binding ElementName=blueBorder, Path=ActualHeight}" Opacity="0.2" BorderBrush="Transparent" BorderThickness="4" Name="blueRefelction" Visibility="Collapsed"> <Border.OpacityMask> <LinearGradientBrush StartPoint="0,0" EndPoint="0,1"> <LinearGradientBrush.GradientStops> <GradientStop Offset="0" Color="Black"/> <GradientStop Offset=".6" Color="Transparent"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Border.OpacityMask> <Border.Background> <VisualBrush Visual="{Binding ElementName=blueButton}"> <VisualBrush.Transform> <ScaleTransform ScaleX="1" ScaleY="-1" CenterX="52.5" CenterY="37.5" /> </VisualBrush.Transform> </VisualBrush> </Border.Background> </Border> </StackPanel> <Button Grid.Row="1" Click="FlipButton_Click" Height="19.45" HorizontalAlignment="Left" VerticalAlignment="Top" Width="76">Flip</Button> <Button Grid.Row="0" Grid.Column="1" Click="ReflectionButton_Click" Height="19.45" HorizontalAlignment="Left" VerticalAlignment="Top" Width="76">Reflection</Button> </Grid> </Window> Here are the button click handlers: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows; using System.Windows.Controls; using System.Windows.Data; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Imaging; using System.Windows.Navigation; using System.Windows.Shapes; using System.Windows.Media.Animation; namespace WpfFlipTest { public partial class Window1 : Window { public Window1() { InitializeComponent(); } bool flipped = false; private void FlipButton_Click(object sender, RoutedEventArgs e) { Storyboard sbFlip = (Storyboard)Resources["sbFlip"]; Storyboard sbFlipBack = (Storyboard)Resources["sbFlipBack"]; if (flipped) { sbFlipBack.Begin(); flipped = false; } else { sbFlip.Begin(); flipped = true; } } bool reflection = false; private void ReflectionButton_Click(object sender, RoutedEventArgs e) { if (reflection) { reflection = false; redRefelction.Visibility = Visibility.Collapsed; blueRefelction.Visibility = Visibility.Collapsed; } else { reflection = true; redRefelction.Visibility = Visibility.Visible; blueRefelction.Visibility = Visibility.Visible; } } } } UPDATE: I have been testing this some more to try to find out what is causing the issue I am seeing and 
    I believe I found what is causing the issue. Below I have pasted the new XAML and code-behind. The new sample below is very similar to the original sample, with a few minor modifications. The XAML basically consists of two stack panels, each containing two borders. The second border in each stack panel is a visual brush (a reflection of the border above it). Now, when I click the "Flip" button, one stack panel gets its ScaleX reduced to zero, while the second stack panel, whose initial ScaleX is zero, gets its ScaleX increased to 1. This animation gives the illusion of flipping. There are also two TextBlocks which display the scale factor of each stack panel; I added those to try to diagnose my issue. The issue is (as described in the original post) that the flipping animation is not smooth. Every time I hit the flip button, the animation starts, but whenever the ScaleX factor gets to around .14 to .16 the animation looks like it stalls, and the stack panels never have their ScaleX reduced to zero, so they never totally disappear. Now, the strange thing is that if I change the Width/Height properties of the "frontBorder" and "backBorder" borders defined below to use explicit values instead of Auto, such as Width=105 and Height=75 (to match the button in the border), everything works fine. The animation stutters the first two or three times I run it, but after that the flips are smooth and flawless. (BTW, when an animation is run for the first time, is there something going on in the background, some sort of initialization, that causes it to be a little slow the first time?) Is it possible that the Auto Width/Height of the borders is causing the issue? I can reproduce it every time, but I am not sure why Auto Width/Height would be a problem. Below is the sample. Thanks for the help. 
<Window x:Class="FlipTest.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="300" Width="300"> <Window.Resources> <Storyboard x:Key="sbFlip"> <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="front" Storyboard.TargetProperty="(UIElement.RenderTransform).(ScaleTransform.ScaleX)"> <SplineDoubleKeyFrame KeyTime="00:00:00.5" Value="0"/> </DoubleAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames BeginTime="00:00:00.5" Storyboard.TargetName="back" Storyboard.TargetProperty="(UIElement.RenderTransform).(ScaleTransform.ScaleX)"> <SplineDoubleKeyFrame KeyTime="00:00:00.5" Value="1"/> </DoubleAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="sbFlipBack"> <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="back" Storyboard.TargetProperty="(UIElement.RenderTransform).(ScaleTransform.ScaleX)"> <SplineDoubleKeyFrame KeyTime="00:00:00.5" Value="0"/> </DoubleAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames BeginTime="00:00:00.5" Storyboard.TargetName="front" Storyboard.TargetProperty="(UIElement.RenderTransform).(ScaleTransform.ScaleX)"> <SplineDoubleKeyFrame KeyTime="00:00:00.5" Value="1"/> </DoubleAnimationUsingKeyFrames> </Storyboard> </Window.Resources> <Grid x:Name="LayoutRoot" Background="White" ShowGridLines="True"> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> </Grid.ColumnDefinitions> <StackPanel x:Name="front" RenderTransformOrigin="0.5,0.5"> <StackPanel.RenderTransform> <ScaleTransform/> </StackPanel.RenderTransform> <Border Name="frontBorder" BorderBrush="Yellow" BorderThickness="2" Width="Auto" Height="Auto"> <Button Margin="0" Name="redButton" Height="75" Background="Red" Width="105" Click="FlipButton_Click"/> </Border> <Border Width="{Binding ElementName=frontBorder, Path=ActualWidth}" Height="{Binding ElementName=frontBorder, Path=ActualHeight}" Opacity="0.2" BorderBrush="Transparent"> <Border.OpacityMask> <LinearGradientBrush StartPoint="0,0" EndPoint="0,1"> <LinearGradientBrush.GradientStops> <GradientStop Offset="0" Color="Black"/> <GradientStop Offset=".6" Color="Transparent"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Border.OpacityMask> <Border.Background> <VisualBrush Visual="{Binding ElementName=frontBorder}"> <VisualBrush.Transform> <ScaleTransform ScaleX="1" ScaleY="-1" CenterX="52.5" CenterY="37.5" /> </VisualBrush.Transform> </VisualBrush> </Border.Background> </Border> </StackPanel> <StackPanel x:Name="back" RenderTransformOrigin="0.5,0.5"> <StackPanel.RenderTransform> <ScaleTransform ScaleX="0"/> </StackPanel.RenderTransform> <Border Name="backBorder" BorderBrush="Yellow" BorderThickness="2" Width="Auto" Height="Auto"> <Button Margin="0" Width="105" Background="Blue" Name="blueButton" Height="75" Click="FlipButton_Click"/> </Border> <Border Width="{Binding ElementName=backBorder, Path=ActualWidth}" Height="{Binding ElementName=backBorder, Path=ActualHeight}" Opacity="0.2" BorderBrush="Transparent"> <Border.OpacityMask> <LinearGradientBrush StartPoint="0,0" EndPoint="0,1"> <LinearGradientBrush.GradientStops> <GradientStop Offset="0" Color="Black"/> <GradientStop Offset=".6" Color="Transparent"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Border.OpacityMask> 
<Border.Background> <VisualBrush Visual="{Binding ElementName=backBorder}"> <VisualBrush.Transform> <ScaleTransform ScaleX="1" ScaleY="-1" CenterX="52.5" CenterY="37.5" /> </VisualBrush.Transform> </VisualBrush> </Border.Background> </Border> </StackPanel> <Button Grid.Row="1" Click="FlipButton_Click" Height="19.45" HorizontalAlignment="Left" VerticalAlignment="Top" Width="76">Flip</Button> <TextBlock Grid.Row="2" Grid.Column="0" Foreground="DarkRed" Height="19.45" HorizontalAlignment="Left" VerticalAlignment="Top" Width="76" Text="{Binding ElementName=front, Path=(UIElement.RenderTransform).(ScaleTransform.ScaleX)}"/> <TextBlock Grid.Row="3" Grid.Column="0" Foreground="DarkBlue" Height="19.45" HorizontalAlignment="Left" VerticalAlignment="Top" Width="76" Text="{Binding ElementName=back, Path=(UIElement.RenderTransform).(ScaleTransform.ScaleX)}"/> </Grid> </Window> Code-behind: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows; using System.Windows.Controls; using System.Windows.Data; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Imaging; using System.Windows.Navigation; using System.Windows.Shapes; using System.Windows.Media.Animation; namespace FlipTest { /// <summary> /// Interaction logic for Window1.xaml /// </summary> public partial class Window1 : Window { public Window1() { InitializeComponent(); } bool flipped = false; private void FlipButton_Click(object sender, RoutedEventArgs e) { Storyboard sbFlip = (Storyboard)Resources["sbFlip"]; Storyboard sbFlipBack = (Storyboard)Resources["sbFlipBack"]; if (flipped) { sbFlipBack.Begin(); flipped = false; } else { sbFlip.Begin(); flipped = true; } } } }

    Read the article

  • Calculix Data Visualiser using QT

    - by Ann
    I am doing a project on a CalculiX data visualizer using Qt. I have to draw the structure, and after a force is applied the displacement should be shown as a variation in color. I chose HSV coloring, but while executing I got an error message: "QColor::fromHsv: HSV parameters out of range". The code is: DataViz1::DataViz1(QWidget *parent) : QWidget(parent), ui(new Ui::DataViz1) { DArea = new QGLScreen(this); DArea->setGeometry(QRect(10,10,700,600)); //TODO These values are fed by the user dfile="/home/41407/color.txt";//input file with displacement mfile="/home/41407/mesh21.txt";//input file nodeId="*NODE"; elId="*ELEMENT"; DataId="displ"; parseMfile(); parseDfile(); DArea->Nodes=Nodes; DArea->Elements=Elements; DArea->Data=Data; DArea->fillColorArray(); //printf("Colr is %d",DArea->pickColor(-11.02,0));fflush(stdout); ui->setupUi(this); } DataViz1::~DataViz1() { delete ui; } void DataViz1::parseMfile() { QFile file(mfile); if (!file.open(QIODevice::ReadOnly | QIODevice::Text)) return; int node_end=0; QTextStream in(&file); in.skipWhiteSpace(); while (!in.atEnd()) { QString line = in.readLine(); if(line.startsWith(nodeId))//Node block in Mfile { while(1) { line = in.readLine(); if(line.startsWith(elId)) { break; } Nodes< while(1) { line = in.readLine(); Elements<<line; //printf("Element is %s\n",line.toLocal8Bit().constData());fflush(stdout); if(in.atEnd()) break; } } } } void DataViz1::parseDfile() { QFile file(dfile); if (!file.open(QIODevice::ReadOnly | QIODevice::Text)) return; int node_end=0; QTextStream in(&file); in.skipWhiteSpace(); while (!in.atEnd()) { QString line = in.readLine(); if(line.startsWith(DataId)) { continue; } line = in.readLine(); Data< } /......................................................................../ #include "qglscreen.h" #include GLfloat LightAmbient[]= { 0.5f, 0.5f, 0.5f, 1.0f }; GLfloat LightDiffuse[]= { 1.0f, 1.0f, 1.0f, 1.0f }; GLfloat LightPosition[]= { 0.0f, 0.0f, 2.0f, 1.0f }; QGLScreen::QGLScreen(QWidget *parent):QGLWidget(QGLFormat(QGL::SampleBuffers), parent) { clearColor = Qt::black; xRot = 0; yRot = 0; zRot = 0; #ifdef QT_OPENGL_ES_2 program = 0; #endif //TODO user input ElType="HE8"; DType="SolidFrame"; axis="X"; } QGLScreen::~QGLScreen() { } QSize QGLScreen::minimumSizeHint() const { return QSize(50, 50); } QSize QGLScreen::sizeHint() const { return QSize(200, 200); } void QGLScreen::setClearColor(const QColor &color) { clearColor = color; updateGL(); } void QGLScreen::initializeGL() { xRot=0; yRot=0; zRot=0; scaling = 1.0; /* select clearing (background) color */ glClearColor (0.0, 0.0, 0.0, 0.0); glMatrixMode(GL_PROJECTION); glLoadIdentity(); // glViewport(0,0,10,10); glOrtho(-10.0, +10.0, -10.0, +10.0, -10.0,+10.0); glEnable (GL_LINE_SMOOTH); glHint (GL_LINE_SMOOTH_HINT, GL_DONT_CARE); } void QGLScreen::wheel1() { scaling1 += .0025; count2++; update(); } void QGLScreen::wheel2() { if(count2-14) { scaling1 -= .0025; count2--; update(); } } void QGLScreen::drawModel(int x1,int y1,int x2,int y2) { makeCurrent(); QStringList Cnode,Celement; for (int i = 0; i < Elements.size(); ++i) { Celement=Elements.at(i).split(","); // printf("Element is %s",Celement.at(0).toLocal8Bit().constData());fflush(stdout); //printf("Node at el is %s\n",(findNode(Celement.at(1).toInt())).at(1).toLocal8Bit().constData()); fflush(stdout); if(ElType=="HE8") { //First four nodes float ENX1=(findNode(Celement.at(1).toInt())).at(1).toDouble(); float ENX2=(findNode(Celement.at(2).toInt())).at(1).toDouble(); float ENX3=(findNode(Celement.at(3).toInt())).at(1).toDouble(); float 
ENX4=(findNode(Celement.at(4).toInt())).at(1).toDouble(); float ENY1=(findNode(Celement.at(1).toInt())).at(2).toDouble(); float ENY2=(findNode(Celement.at(2).toInt())).at(2).toDouble(); float ENY3=(findNode(Celement.at(3).toInt())).at(2).toDouble(); float ENY4=(findNode(Celement.at(4).toInt())).at(2).toDouble(); float ENZ1=(findNode(Celement.at(1).toInt())).at(3).toDouble(); float ENZ2=(findNode(Celement.at(2).toInt())).at(3).toDouble(); float ENZ3=(findNode(Celement.at(3).toInt())).at(3).toDouble(); float ENZ4=(findNode(Celement.at(4).toInt())).at(3).toDouble(); //Second four Nodes float ENX5=(findNode(Celement.at(5).toInt())).at(1).toDouble(); float ENX6=(findNode(Celement.at(6).toInt())).at(1).toDouble(); float ENX7=(findNode(Celement.at(7).toInt())).at(1).toDouble(); float ENX8=(findNode(Celement.at(8).toInt())).at(1).toDouble(); float ENY5=(findNode(Celement.at(5).toInt())).at(2).toDouble(); float ENY6=(findNode(Celement.at(6).toInt())).at(2).toDouble(); float ENY7=(findNode(Celement.at(7).toInt())).at(2).toDouble(); float ENY8=(findNode(Celement.at(8).toInt())).at(2).toDouble(); float ENZ5=(findNode(Celement.at(5).toInt())).at(3).toDouble(); float ENZ6=(findNode(Celement.at(6).toInt())).at(3).toDouble(); float ENZ7=(findNode(Celement.at(7).toInt())).at(3).toDouble(); float ENZ8=(findNode(Celement.at(8).toInt())).at(3).toDouble(); //Identify Colors GLfloat ENC[8][3]; for(int k=1;k<8;k++) { int hsv=pickColor(findData(Celement.at(k).toInt()).toDouble(),0); //printf("hsv is %d=",hsv);fflush(stdout); getRGB(hsv); //printf("%d*%d*%d\n",red,green,blue); //ENC[k]={red,green,blue}; ENC[k][0]=red; ENC[k][1]=green; ENC[k][2]=blue; } //Plot the first four direct loop if(DType=="WireFrame"){ glBegin(GL_LINE_LOOP); glColor3f(255,0,0); glVertex3f(ENX1,ENY1,ENZ1); glColor3f(255,0,0); glVertex3f(ENX2,ENY2,ENZ2); glColor3f(255,0,0); glVertex3f(ENX3,ENY3,ENZ3); glColor3f(255,0,0); glVertex3f(ENX4,ENY4,ENZ4); glEnd(); //Plot the second four direct loop glBegin(GL_LINE_LOOP); glColor3f(0,0,255); glVertex3f(ENX5,ENY5,ENZ5); glColor3f(0,0,255); glVertex3f(ENX6,ENY6,ENZ6); glColor3f(0,0,255); glVertex3f(ENX7,ENY7,ENZ7); glColor3f(0,0,255); glVertex3f(ENX8,ENY8,ENZ8); glEnd(); //Plot the interconnections glBegin(GL_LINE); glColor3f(150,150,150); glVertex3f(ENX1,ENY1,ENZ1); glVertex3f(ENX5,ENY5,ENZ5); glEnd(); glBegin(GL_LINE); glColor3f(150,150,150); glVertex3f(ENX2,ENY2,ENZ2); glVertex3f(ENX6,ENY6,ENZ6); glEnd(); glBegin(GL_LINE); glColor3f(150,150,150); glVertex3f(ENX3,ENY3,ENZ3); glVertex3f(ENX7,ENY7,ENZ7); glEnd(); glBegin(GL_LINE); glColor3f(150,150,150); glVertex3f(ENX4,ENY4,ENZ4); glVertex3f(ENX8,ENY8,ENZ8); glEnd(); } if(DType=="SolidFrame") { glBegin(GL_QUADS); glColor3fv(ENC[1]); glVertex3f(ENX1,ENY1,ENZ1); glColor3fv(ENC[2]); glVertex3f(ENX2,ENY2,ENZ2); glColor3fv(ENC[3]); glVertex3f(ENX3,ENY3,ENZ3); glColor3fv(ENC[4]); glVertex3f(ENX4,ENY4,ENZ4); glEnd(); //break; glBegin(GL_QUADS); glColor3fv(ENC[5]); glVertex3f(ENX5,ENY5,ENZ5); glColor3fv(ENC[6]); glVertex3f(ENX6,ENY6,ENZ6); glColor3fv(ENC[7]); glVertex3f(ENX7,ENY7,ENZ7); glColor3fv(ENC[8]); glVertex3f(ENX8,ENY8,ENZ8); glEnd(); glBegin(GL_QUAD_STRIP); glColor3fv(ENC[1]); glVertex3f(ENX1,ENY1,ENZ1); glColor3fv(ENC[5]); glVertex3f(ENX5,ENY5,ENZ5); glColor3fv(ENC[2]); glVertex3f(ENX2,ENY2,ENZ2); glColor3fv(ENC[6]); glVertex3f(ENX6,ENY6,ENZ6); glEnd(); glBegin(GL_QUAD_STRIP); glColor3fv(ENC[3]); glVertex3f(ENX3,ENY3,ENZ3); glColor3fv(ENC[7]); glVertex3f(ENX7,ENY7,ENZ7); glColor3fv(ENC[4]); glVertex3f(ENX4,ENY4,ENZ4); glColor3fv(ENC[8]); 
glVertex3f(ENX8,ENY8,ENZ8); glEnd(); glBegin(GL_QUAD_STRIP); glColor3fv(ENC[2]); glVertex3f(ENX2,ENY2,ENZ2); glColor3fv(ENC[6]); glVertex3f(ENX6,ENY6,ENZ6); glColor3fv(ENC[3]); glVertex3f(ENX3,ENY3,ENZ3); glColor3fv(ENC[7]); glVertex3f(ENX7,ENY7,ENZ7); glEnd(); glBegin(GL_QUAD_STRIP); glColor3fv(ENC[1]); glVertex3f(ENX1,ENY1,ENZ1); glColor3fv(ENC[5]); glVertex3f(ENX5,ENY5,ENZ5); glColor3fv(ENC[4]); glVertex3f(ENX4,ENY4,ENZ4); glColor3fv(ENC[8]); glVertex3f(ENX8,ENY8,ENZ8); glEnd(); } } } } QStringList QGLScreen::findNode(int element) { QStringList Temp; for (int i = 0; i < Nodes.size(); ++i) { Temp=Nodes.at(i).split(","); if(Temp.at(0).toInt()==element) { break; } } return Temp; } QString QGLScreen::findData(int Node) { QString Temp; QRegExp sep("\s+"); for (int i = 0; i < Data.size(); ++i) { if((Data.at(i).split("\t")).at(0).section(sep,1,1).toInt()==Node) { if(axis=="X") { Temp=Data.at(i).split("\t").at(0).section(sep,2,2); } if(axis=="Y") { Temp=Data.at(i).split("\t").at(0).section(sep,3,3); } if(axis=="Z") { Temp=Data.at(i).split("\t").at(0).section(sep,4,4); } break; } } return Temp; } void QGLScreen::fillColorArray() { QString Temp1,Temp2,Temp3; double d1s=0,d2s=0,d3s=0,d1l=0,d2l=0,d3l=0,diff=0; QRegExp sep("\\s+"); for (int i = 0; i < Data.size(); ++i) { Temp1=(Data.at(i).split("\t")).at(0).section(sep,2,2); if(d1s>Temp1.toDouble()) { d1s=Temp1.toDouble(); } if(d1l<Temp1.toDouble()) { d1l=Temp1.toDouble(); } Temp2=(Data.at(i).split("\t")).at(0).section(sep,3,3); if(d2s>Temp2.toDouble()) { d2s=Temp2.toDouble(); } if(d2l<Temp2.toDouble()) { d2l=Temp2.toDouble(); } Temp3=(Data.at(i).split("\t")).at(0).section(sep,4,4); if(d3s>Temp3.toDouble()) { d3s=Temp3.toDouble(); } if(d3l<Temp3.toDouble()) { d3l=Temp3.toDouble(); } // printf("data is %s",Temp.toLocal8Bit().constData());fflush(stdout); } color[0][0]=d1l; for(int i=1;i<360;i++) { //printf("Large is%f small is %f",d1l,d1s); diff=d1l-d1s; if(d1l==0&&d1s<0) color[0][i]=color[0][i-1]-diff/360; else if(d1l>0&&d1s==0) color[0][i]=color[0][i-1]+diff/360; else if(d1l>0&&d1s<0) color[0][i]=color[0][i-1]-diff/360; diff=d2l-d2s; if(d2l==0&&d2s<0) color[1][i]=color[1][i-1]-diff/360; else if(d2l>0&&d2s==0) color[1][i]=color[1][i-1]+diff/360; else if(d2l>0&&d2s<0) color[1][i]=color[1][i-1]-diff/360; diff=d3l-d3s; if(d3l==0&&d3s<0) color[2][i]=color[2][i-1]-diff/360; else if(d3l>0&&d3s==0) color[2][i]=color[2][i-1]+diff/360; else if(d3l>0&&d3s<0) color[2][i]=color[2][i-1]-diff/360; } //for(int i=0;i<360;i++) printf("%d %f %f %f\n",i,color[0][i],color[1][i],color[2][i]); } int QGLScreen::pickColor(double data,int Did) { int i,pos; if(axis=="X")Did=0; if(axis=="Y")Did=1; if(axis=="Z")Did=2; //printf("%f data is",data);fflush(stdout); for(int i=0;i<360;i++) { if(color[Did][i]<data && data>color[Did][i+1]) { //printf("Orginal dat is %f Data found is %f and pos %d\n",data,color[Did][i],i);fflush(stdout); pos=i; break; } } return pos; } void QGLScreen::getRGB(int hsv) { QColor c; c.setHsv(hsv,255,255,255); QColor r=QColor::fromHsv(hsv,255,255); red=r.red(); green=r.green(); blue=r.blue(); } void QGLScreen::paintGL() { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glPushAttrib(GL_ALL_ATTRIB_BITS); glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity(); GLfloat x = 3.0 * GLfloat(width()) / height(); glOrtho(-x, +x, -3.0, +3.0, 4.0, 15.0); glMatrixMode(GL_MODELVIEW); glPushMatrix(); glLoadIdentity(); glTranslatef(0.0, 0.0, -10.0); glScalef(scaling, scaling, scaling); glRotatef(xRot, 1.0, 0.0, 0.0); glRotatef(yRot, 0.0, 1.0, 0.0); 
glRotatef(zRot, 0.0, 0.0, 1.0); drawModel(0,0,1,1); /* don't wait! * start processing buffered OpenGL routines */ glFlush (); } /void QGLScreen::zoom1() { scaling+=.05; update(); }/ void QGLScreen::resizeGL(int width, int height) { int side = qMin(width, height); glViewport((width - side) / 2, (height - side) / 2, side, side); #if !defined(QT_OPENGL_ES_2) glMatrixMode(GL_PROJECTION); glLoadIdentity(); #ifndef QT_OPENGL_ES glOrtho(-0.5, +0.5, +0.5, -0.5, 4.0, 15.0); #else glOrthof(-0.5, +0.5, +0.5, -0.5, 4.0, 15.0); #endif glMatrixMode(GL_MODELVIEW); #endif } void QGLScreen::mousePressEvent(QMouseEvent *event) { lastPos = event-pos(); } void QGLScreen::mouseMoveEvent(QMouseEvent *event) { GLfloat dx = GLfloat(event->x() - lastPos.x()) / width(); GLfloat dy = GLfloat(event->y() - lastPos.y()) / height(); if (event->buttons() & Qt::LeftButton) { xRot+= 180 * dy; yRot += 180 * dx; update(); } else if (event->buttons() & Qt::RightButton) { xRot += 180 * dy; yRot += 180 * dx; update(); } lastPos = event->pos(); } void QGLScreen::mouseReleaseEvent(QMouseEvent * /* event */) { emit clicked(); }

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files, as we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side: once you have received a big file, stream or binaries, how to upload it into blob storage in blocks through the .NET SDK.  But the problem is, how can we upload these large files from the client side, for example a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks with high speed and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit them. The code has been well blogged in the community. 
    1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, let's say a 6GB HD movie, after uploading for a few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation on request length, and the maximum request length is defined in the web.config file. It's a number less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period. From my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break the limitations mentioned above, we will try to upload the large file in chunks. This gives us some benefits, such as: - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is. - No timeout problem: the size of the chunks is controlled by us, which means we should be able to make sure the request for each chunk upload will not exceed the timeout period of either ASP.NET or the Windows Azure load balancer. It was a big challenge to upload a big file in chunks until we had HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512th byte to the 768th byte, and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about Blob please refer here. File and Blob are very useful for implementing the chunk upload. 
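    To make the slicing behaviour concrete, here is a minimal stand-alone sketch (using the upload_files input that appears in the markup later in this post) that reads a single 512KB chunk from the selected file:

      // Minimal sketch: take the first 512KB of the selected file as a Blob
      var input = document.getElementById("upload_files");          // assumed <input type="file">
      var file = input.files[0];                                    // File implements the Blob interface
      var chunk = file.slice(0, Math.min(512 * 1024, file.size));   // [start, end) in bytes
      console.log(file.size, chunk.size);                           // chunk.size <= 512 * 1024
      // the chunk can now be appended to a FormData object and posted via ajax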
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload Mode: Sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk Format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than sequential. - No significant performance improvement between parallel 4 threads and 8 threads. - Transferring binaries provides better performance than base64 strings. - In all cases, a chunk size of 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
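For readers who want the server-side flow above in one self-contained piece, here is a minimal console sketch of the same PutBlock/PutBlockList pattern. It is an assumption-laden recap rather than part of the original sample: the connection string, file path, 1MB block size and class name below are illustrative choices of mine, while the container name and API calls match the post.

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.WindowsAzure.Storage;        // storage client library, as used in the post
using Microsoft.WindowsAzure.Storage.Blob;

class BlockUploadSketch
{
    static void Main(string[] args)
    {
        // Swap in the "DataConnectionString" value from the post; the emulator
        // string is only used here to keep the sketch self-contained.
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        var container = account.CreateCloudBlobClient().GetContainerReference("test");
        container.CreateIfNotExists();

        // Illustrative assumptions: a local file and a 1 MB block size.
        var path = args.Length > 0 ? args[0] : @"d:\temp\sample.dat";
        const int blockSize = 1024 * 1024;

        var blob = container.GetBlockBlobReference(Path.GetFileName(path));
        var blockIds = new List<string>();
        var buffer = new byte[blockSize];
        var index = 0;

        using (var file = File.OpenRead(path))
        {
            int read;
            while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Same block ID scheme as the controller: the integer index
                // encoded as Base64, so every ID has a constant length.
                var id = Convert.ToBase64String(BitConverter.GetBytes(index));
                using (var chunk = new MemoryStream(buffer, 0, read))
                {
                    blob.PutBlock(id, chunk, null);   // stage one block
                }
                blockIds.Add(id);
                index++;
            }
        }

        blob.PutBlockList(blockIds);   // commit the staged blocks as one blob
        Console.WriteLine("Uploaded {0} blocks.", index);
    }
}

The flow is exactly the one the article builds through the MVC controller: stage each chunk with PutBlock using a Base64-encoded index as the block ID, then commit the whole ID list with PutBlockList.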

    Read the article

  • Book Review: Oracle ADF Real World Developer’s Guide

    - by Frank Nimphius
    Recently PACKT Publishing published "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman, a product manager in our team. Though already the sixth book dedicated to Oracle ADF, it has a lot of great information in it that none of the previous books covered, making it a safe buy even for those who own the other books published by Oracle Press (McGraw-Hill) and PACKT Publishing. More than half of the "Oracle ADF Real World Developer’s Guide" book is dedicated to Oracle ADF Business Components in a depth and clarity that allows you to feel the expertise that Jobinesh gained in this area. If you enjoy Jobinesh's blog (http://jobinesh.blogspot.co.uk/) about Oracle ADF, then, no matter how much of an Oracle ADF expert you are, this book makes you happy as it provides you with the detailed information you always wished to have. If you are new to Oracle ADF, then this book alone doesn't get you flying, but, if you have some Java background, it accelerates your learning big, big, big times. Chapter 1 is an introduction to Oracle ADF and not only explains the layers but also how it compares to plain Java EE solutions (page 13). If you are new to Oracle JDeveloper and ADF, then at the end of this chapter you know how to start JDeveloper and begin your ADF development. Chapter 2 starts with what Jobinesh really is good at: ADF Business Components. In this chapter you learn about the architecture ingredients of ADF Business Components: View Objects, View Links, Associations, Entities, Row Sets, Query Collections and Application Modules. This chapter also provides an introduction to ADFBC SDO services, as well as sequence diagrams for what happens when you execute queries or commit updates. Chapter 3 is dedicated to entity objects and is one of many chapters in this book you will enjoy and never want to miss. Jobinesh explains the artifacts that make up an entity object, how to work with entities and resource bundles, and many advanced topics, including inheritance, change history tracking, custom properties, validation and cursor handling. Chapter 4 - you guessed it - is all about View objects. Comparable to entities, you learn about the XML files and classes that make up a view object, as well as how to define and work with queries. List-of-values, inheritance, polymorphism, bind variables and data filtering are interesting - and important - topics that follow. Again the chapter provides helpful sequence diagrams for you to understand what happens internally within a view object. Chapter 5 focuses on advanced view object and entity object topics, like lifecycle callback methods and when you want to override them. This chapter is a good digest of Jobinesh's blog entries (which most ADF developers have in their bookmark list). Really worth reading! Chapter 6 then is about Application Modules. Besides what application modules are, this chapter covers important topics like properties, passivation, activation, application module pooling, and how and where to write custom logic. In addition you learn about the AM lifecycle and request sequence. Chapter 7 is about the ADF binding layer. If you are new to Oracle ADF and got lost in the more advanced ADF Business Components chapters, then this chapter is where you get back into the game. In very easy terms, Jobinesh explains what the ADF binding is, how it fits into the JSF request lifecycle and what metadata files are involved. Chapter 8 then goes into building data bound web user interfaces. In this chapter you get the basics of JavaServer Faces (e.g. 
managed beans) and learn about the interaction between the JSF UI and the ADF binding layer. Later this chapter provides advanced solutions for working with tree components and lists of values. Chapter 9 introduces bounded task flows and the ADF controller. This is a chapter you want to read if you are new to ADF or have just started. Experts won't find anything new here, which doesn't mean it is not worth reading (I, for example, enjoyed the controller talk very much). Chapter 10 is an advanced coverage of bounded task flows and talks about contextual events. Chapter 11 is another highlight and explains error handling, trains, transactions and more. I can only recommend you read this chapter. I am aware of many documents that cover exception handling in Oracle ADF (and my Oracle Magazine article for January/February 2013 does the same), but none that covers it in such great depth. Chapter 12 covers ADF best practices, which is a great round-up of all the tips provided in this book (without Jobinesh repeating himself). It's all cool stuff that helps you with your ADF projects. In summary, "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman is a great book and a welcome addition for all Oracle ADF developers and those who want to become one. Frank

    Read the article

  • Record and Play your WebLogic Console Tasks Like a DVR

    - by james.bayer
    Automation using WebLogic Scripting Tool Today on the Oracle internal mailing list for WebLogic Server questions someone asked how to automate the configuration of the thread model for WebLogic Server and they were having trouble with the jython scripting syntax.  I’ve previously written about this feature called Work Managers and the associated constraints.  However, I did not show how to automate the process of configuring this without the console using WebLogic Scripting Tool – the jython scripting automation environment abbreviated as WLST.  I’ve written some very basic introductions to WLST before and there is also an Oracle By Example on the subject, but this is a bit more advanced.  Fear not because there is a really easy-to-use feature of the WLS console that lets you “Record” user actions just like a DVR.  Using these recordings of the web-based console, you can easily create a script even if you are unfamiliar with the WLST syntax and API.  I’m a big fan of both DVR’s and automation as can be evidenced with this old Halloween picture taken during simpler times.  Obviously the Cast Away and The Big Labowski references show some age.  I was a big Tivo fan-boy back in the day and I still think it’s the best DVR. I strongly believe that WebLogic Scripting Tool (WLST) is an absolutely essential tool for automating administration tasks in anything beyond a development environment.  Even in development environments you can make a case that it makes sense to start the automation for environments downstream.  I promise you that once you start using it for any tasks that you do even semi-regularly, you won’t go back to clicking through the console.  It’s simply so much more efficient and less error-prone to run a script. Let’s say you need to create a Work Manager and MaxThreadsConstraint – the easy way to do it is configure it in the WLS console first while capturing the commands with a recording.  See the images for the simple steps – click to enlarge. Record Console Configurations to a File Review the Recordings and Make Slight Modifications In order to make the recorded .py file directly callable as a stand-alone script I added calls to the connect() and edit() functions at the beginning and calls to disconnect() and exit() at the end – otherwise the main section of the script was provided by the console recording.  Below is the resulting file I saved as d:/temp/wm.py connect('weblogic','welcome1', 't3://localhost:7001') edit() startEdit()   cd('/SelfTuning/wl_server') cmo.createMaxThreadsConstraint('MaxThreadsConstraint-0')   cd('/SelfTuning/wl_server/MaxThreadsConstraints/MaxThreadsConstraint-0') set('Targets',jarray.array([ObjectName('com.bea:Name=examplesServer,Type=Server')], ObjectName)) cmo.setCount(5) cmo.unSet('ConnectionPoolName')   cd('/SelfTuning/wl_server') cmo.createWorkManager('WorkManager-0') cd('/SelfTuning/wl_server/WorkManagers/WorkManager-0') set('Targets',jarray.array([ObjectName('com.bea:Name=examplesServer,Type=Server')], ObjectName))   cmo.setMaxThreadsConstraint(getMBean('/SelfTuning/wl_server/MaxThreadsConstraints/MaxThreadsConstraint-0')) cmo.setIgnoreStuckThreads(false)   activate() disconnect() exit() Run the Script If you want to test it be sure to delete the Work Manager and MaxThreadConstraint that you had previously created in the console.  
Do something like the following - set up the environment and tell WLST to execute the script, which happens in the first 2 lines; the rest doesn't require any user input: D:\Oracle\wls11g\wlserver_10.3\samples\domains\wl_server\bin>setDomainEnv.cmd D:\Oracle\wls11g\wlserver_10.3\samples\domains\wl_server>java weblogic.WLST d:\temp\wm.py   Initializing WebLogic Scripting Tool (WLST) ...   Welcome to WebLogic Server Administration Scripting Shell   Type help() for help on available commands   Connecting to t3://localhost:7001 with userid weblogic ... Successfully connected to Admin Server 'examplesServer' that belongs to domain 'wl_server'.   Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.   Location changed to edit tree. This is a writable tree with DomainMBean as the root. To make changes you will need to start an edit session via startEdit().   For more help, use help(edit)   Starting an edit session ... Started edit session, please be sure to save and activate your changes once you are done. Activating all your changes, this may take a while ... The edit lock associated with this edit session is released once the activation is completed. Activation completed Disconnected from weblogic server: examplesServer     Exiting WebLogic Scripting Tool.   Now if you go back and look in the console the changes have been made and we now have a complete script.  Of course there is a full MBean reference and you can learn the nuances of jython and WLST, but why not let the WLS console do most of the work for you!  Happy scripting.

    Read the article

  • Going to the Score Cards - Exceptional DBA Awards 2011

    - by Rodney
    This year marks my 4th year as a judge for the Exceptional DBA Awards, founded by Red Gate in 2008 to "recognize the essential but often overlooked contributions of DBAs, the unsung heroes of the IT community." As a professional DBA myself I have been honored to participate as a judge. It is not an easy job because there is a voluminous amount of nominees from all over the world. Each judge has to read through every word of the nominee's answers, deciding what makes each person special and stand out amongst their peers. What drives them? What single element of their submission will shine above all others? It is my hope that what I am about to divulge to you as a judge will prompt you to think about yourself or someone you know and decide that you may be the exceptional DBA who can take home the gold at this year's award ceremony in Seattle. We are more than a few weeks into the nomination process and there are quite a number of submissions already. I can not tell you how many as that would not be fair. I can say it is not 1 million or more. I can also say that it is not 100,000. But that is all I can say about that. However, I can tell you that it is enough this year that we are breaking records on the number of people who have been influenced, inspired or intrigued by the awards in the past. I remember them all like it were yesterday. fuzzy thought cloud here. It was a rainy day in Seattle (all memories for each award ceremony will start thusly) and I was in the hotel going over my notes on what I wanted to say about the winner of the 2008 Red Gate Exceptional DBA Award. The notes were on index cards that I had either bought or stolen from my wife, I do not recall, but I was nervous which was unlike me. This was, after all, a big night for the winner. Of course, we, the judges and the SQL community, had already decided the winner and now all that remained was to present the award. The room was packed. It was Casino night, sponsored by sqlservercentral.com. Money (fake), drinks (not fake) and camaraderie flowed through the room. Dan McClain won the award that year. He worked for Anheuser-Busch at the time. I promise that did not influence my decision. We presented Dan with the award. He was very proud of this achievement, rightfully so, as was the SQL community for him. I spoke with Dan throughout the conference and realized how huge this award was for him, not just personally but professionally. It was a rainy day in Seattle in 2009 and I was nervous. I was asked to speak to a group of people again as a judge for the Exceptional DBA Awards. This year, Josef Richberg would be the recipient of the award, but he would not be able to attend. We all prayed for him as he fought through an illness and congratulated him for his accomplishments as a DBA for his company. He got better and sallied forth and continued to give back to the SQL community that he saw as one big family. In 2010, and I am getting ahead of myself, he was asked to be a judge himself for the very award he had just received the year before. It was a sunny day in Seattle and I missed it, because it was in July and I was not there. It was a rainy day in Seattle and it is 2010 and Tracy Hamlin enters a submission that blows this judge away. She is managing a 50 Terabyte distributed database ("50 Gigabytes! Are you kidding me!!!", Rodney jokes.)  and loves her daily job as a DBA working with developers, mentoring them and teaching them best practices with kindness and patience. 
She is a people person who just happens to have 10+ years of experience with RDBMSs. She wins the award and goes on to be recognized as famous at PASS. It will be a rainy day in Seattle this year when I sit amongst my old constituent judges and friends, Brad McGehee, http://www.simple-talk.com/books/sql-books/how-to-become-an-exceptional-dba,-2nd-edition/, Steve Jones, whom we all know and love at http://www.sqlservercentral.com and a young upstart to the SQL Community, this cat named Brent Ozar, to announce the 2011 winner. I personally have not heard of Brent but I am told I interviewed him for a DBA position several years ago and turned him down, http://www.brentozar.com/archive/2011/05/exceptional-dba-contest/ . I hope that did not jeopardize his future in the SQL world. I am a big-hearted oaf and would feel horrible. Hopefully I will meet him at PASS and we can work this all out and I can help him get a DBA job. The rain has stopped and a new year is upon us. The stakes are high...the competition is fierce...the rewards are incredible. The entry form awaits you. http://www.exceptionaldba.com/ I very much look forward to meeting you and presenting the award to you in front of hundreds of your envious but proud peers as the new Exceptional DBA for 2011 at the PASS Summit. Here is what you could win: The Exceptional DBA of the Year receives full conference registration for the 2011 PASS Summit in Seattle, where the awards ceremony will take place, four nights' hotel accommodation, and $300 towards travel expenses. They will also be featured on Simple-Talk. Are you ready? Are you nervous?

    Read the article

  • An MCM exam, Rob? Really?

    - by Rob Farley
    I took the SQL 2008 MCM Knowledge exam while in Seattle for the PASS Summit ten days ago. I wasn’t planning to do it, but I got persuaded to try. I was meaning to write this post to explain myself before the result came out, but it seems I didn’t get typing quickly enough. Those of you who know me will know I’m a big fan of certification, to a point. I’ve been involved with Microsoft Learning to help create exams. I’ve kept my certifications current since I first took an exam back in 1998, sitting many in beta, across quite a variety of topics. I’ve probably become quite good at them – I know I’ve definitely passed some that I really should’ve failed. I’ve also written that I don’t think exams are worth studying for. (That’s probably not entirely true, but it depends on your motivation. If you’re doing learning, I would encourage you to focus on what you need to know to do your job better. That will help you pass an exam – but the two skills are very different. I can coach someone on how to pass an exam, but that’s a different kind of teaching when compared to coaching someone about how to do a job. For example, the real world includes a lot of “it depends”, where you develop a feel for what the influencing factors might be. In an exam, its better to be able to know some of the “Don’t use this technology if XYZ is true” concepts better.) As for the Microsoft Certified Master certification… I’m not opposed to the idea of having the MCM (or in the future, MCSM) cert. But the barrier to entry feels quite high for me. When it was first introduced, the nearest testing centres to me were in Kuala Lumpur and Manila. Now there’s one in Perth, but that’s still a big effort. I know there are options in the US – such as one about an hour’s drive away from downtown Seattle, but it all just seems too hard. Plus, these exams are more expensive, and all up – I wasn’t sure I wanted to try them, particularly with the fact that I don’t like to study. I used to study for exams. It would drive my wife crazy. I’d have some exam scheduled for some time in the future (like the time I had two booked for two consecutive days at TechEd Australia 2005), and I’d make sure I was ready. Every waking moment would be spent pouring over exam material, and it wasn’t healthy. I got shaken out of that, though, when I ended up taking four exams in those two days in 2005 and passed them all. I also worked out that if I had a Second Shot available, then failing wasn’t a bad thing at all. Even without Second Shot, I’m much more okay about failing. But even just trying an MCM exam is a big effort. I wouldn’t want to fail one of them. Plus there’s the illusion to maintain. People have told me for a long time that I should just take the MCM exams – that I’d pass no problem. I’ve never been so sure. It was almost becoming a pride-point. Perhaps I should fail just to demonstrate that I can fail these things. Anyway – boB Taylor (@sqlboBT) persuaded me to try the SQL 2008 MCM Knowledge exam at the PASS Summit. They set up a testing centre in one of the room there, so it wasn’t out of my way at all. I had to squeeze it in between other commitments, and I certainly didn’t have time to even see what was on the syllabus, let alone study. In fact, I was so exhausted from the week that I fell asleep at least once (just for a moment though) during the actual exam. Perhaps the questions need more jokes, I’m not sure. 
I knew if I failed, then I might disappoint some people, but that I wouldn’t’ve spent a great deal of effort in trying to pass. On the other hand, if I did pass I’d then be under pressure to investigate the MCM Lab exam, which can be taken remotely (therefore, a much smaller amount of effort to make happen). In some ways, passing could end up just putting a bunch more pressure on me. Oh, and I did.

    Read the article

  • C++ Little Wonders: The C++11 auto keyword redux

    - by James Michael Hare
    I’ve decided to create a sub-series of my Little Wonders posts to focus on C++.  Just like their C# counterparts, these posts will focus on those features of the C++ language that can help improve code by making it easier to write and maintain.  The index of the C# Little Wonders can be found here. This has been a busy week with a rollout of some new website features here at my work, so I don’t have a big post for this week.  But I wanted to write something up, and since lately I’ve been renewing my C++ skills in a separate project, it seemed like a good opportunity to start a C++ Little Wonders series.  Most of my development work still tends to focus on C#, but it was great to get back into the saddle and renew my C++ knowledge.  Today I’m going to focus on a new feature in C++11 (formerly known as C++0x, which is a major move forward in the C++ language standard).  While this small keyword can seem so trivial, I feel it is a big step forward in improving readability in C++ programs. The auto keyword If you’ve worked on C++ for a long time, you probably have some passing familiarity with the old auto keyword as one of those rarely used C++ keywords that was almost never used because it was the default. That is, in the code below (before C++11): 1: int foo() 2: { 3: // automatic variables (allocated and deallocated on stack) 4: int x; 5: auto int y; 6:  7: // static variables (retain their value across calls) 8: static int z; 9:  10: return 0; 11: } The variable x is assumed to be auto because that is the default, thus it is unnecessary to specify it explicitly as in the declaration of y below that.  Basically, an auto variable is one that is allocated and de-allocated on the stack automatically.  Contrast this to static variables, that are allocated statically and exist across the lifetime of the program. Because auto was so rarely (if ever) used since it is the norm, they decided to remove it for this purpose and give it new meaning in C++11.  The new meaning of auto: implicit typing Now, if your compiler supports C++ 11 (or at least a good subset of C++11 or 0x) you can take advantage of type inference in C++.  For those of you from the C# world, this means that the auto keyword in C++ now behaves a lot like the var keyword in C#! For example, many of us have had to declare those massive type declarations for an iterator before.  Let’s say we have a std::map of std::string to int which will map names to ages: 1: std::map<std::string, int> myMap; And then let’s say we want to find the age of a given person: 1: // Egad that's a long type... 2: std::map<std::string, int>::const_iterator pos = myMap.find(targetName); Notice that big ugly type definition to declare variable pos?  Sure, we could shorten this by creating a typedef of our specific map type if we wanted, but now with the auto keyword there’s no need: 1: // much shorter! 2: auto pos = myMap.find(targetName); The auto now tells the compiler to determine what type pos should be based on what it’s being assigned to.  This is not dynamic typing, it still determines the type as if it were explicitly declared and once declared that type cannot be changed.  That is, this is invalid: 1: // x is type int 2: auto x = 42; 3:  4: // can't assign string to int 5: x = "Hello"; Once the compiler determines x is type int it is exactly as if we typed int x = 42; instead, so don’t' confuse it with dynamic typing, it’s still very type-safe. 
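Since the post draws the analogy to the C# var keyword, a small C# sketch may help readers coming from that side see the parallel; the dictionary and names below are my own illustration, not code from the article.

using System;
using System.Collections.Generic;
using System.Linq;

class VarAnalogy
{
    static void Main()
    {
        // The C# counterpart of the std::map<std::string, int> "names to ages" map.
        var ages = new Dictionary<string, int> { { "Alice", 30 }, { "Bob", 25 } };

        // Spelled out, the type is as wordy as the C++ iterator declaration...
        IEnumerable<KeyValuePair<string, int>> adults = ages.Where(p => p.Value >= 18);
        Console.WriteLine("Explicitly typed query found {0} adults", adults.Count());

        // ...while var lets the compiler infer exactly the same static type.
        var adultsInferred = ages.Where(p => p.Value >= 18);
        foreach (var pair in adultsInferred)
        {
            Console.WriteLine("{0} is {1}", pair.Key, pair.Value);
        }

        // Like C++11 auto, var is compile-time inference, not dynamic typing.
        var x = 42;       // x is int
        // x = "Hello";   // compile-time error, mirroring the C++ example
    }
}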
An interesting feature of the auto keyword is that you can modify the inferred type: 1: // declare method that returns int* 2: int* GetPointer(); 3:  4: // p1 is int*, auto inferred type is int 5: auto *p1 = GetPointer(); 6:  7: // ps is int*, auto inferred type is int* 8: auto p2 = GetPointer(); Notice in both of these cases, p1 and p2 are determined to be int* but in each case the inferred type was different.  because we declared p1 as auto *p1 and GetPointer() returns int*, it inferred the type int was needed to complete the declaration.  In the second case, however, we declared p2 as auto p2 which means the inferred type was int*.  Ultimately, this make p1 and p2 the same type, but which type is inferred makes a difference, if you are chaining multiple inferred declarations together.  In these cases, the inferred type of each must match the first: 1: // Type inferred is int 2: // p1 is int* 3: // p2 is int 4: // p3 is int& 5: auto *p1 = GetPointer(), p2 = 42, &p3 = p2; Note that this works because the inferred type was int, if the inferred type was int* instead: 1: // syntax error, p1 was inferred to be int* so p2 and p3 don't make sense 2: auto p1 = GetPointer(), p2 = 42, &p3 = p2; You could also use const or static to modify the inferred type: 1: // inferred type is an int, theAnswer is a const int 2: const auto theAnswer = 42; 3:  4: // inferred type is double, Pi is a static double 5: static auto Pi = 3.1415927; Thus in the examples above it inferred the types int and double respectively, which were then modified to const and static. Summary The auto keyword has gotten new life in C++11 to allow you to infer the type of a variable from it’s initialization.  This simple little keyword can be used to cut down large declarations for complex types into a much more readable form, where appropriate.   Technorati Tags: C++, C++11, Little Wonders, auto

    Read the article

  • Code Metrics: Number of IL Instructions

    - by DigiMortal
    In my previous posting about code metrics I introduced how to measure LoC (Lines of Code) in .NET applications. Now let's take a step further and take a look at how to measure compiled code. This way we can get some picture of what the compiler produces. In this posting I will introduce you to a code metric called the number of IL instructions. NB! The number of IL instructions is not something you can use to measure the productivity of your team. If you want to get a better idea about the context of this metric and LoC then please read my first posting about LoC. What are IL instructions? When code written in some .NET Framework language is compiled, the compiler produces assemblies that contain byte code. These assemblies are executed later by the Common Language Runtime (CLR), the code execution engine of the .NET Framework. The byte code is called Intermediate Language (IL) – this is a more general language than, for example, C# or VB.NET. You can use the ILDasm tool to convert assemblies to IL assembler so you can read them. As IL instructions are the building blocks of all .NET Framework binary code, these instructions are smaller and highly general – we don't want a very rich low-level language because it executes more slowly than a more general one. Every method or property call in some .NET Framework language corresponds to a set of IL instructions. There is no 1:1 relationship between a line in a high-level language and a line in IL assembler. There are more IL instructions than lines in C# code, for example. How many instructions are there? I have no common answer because it really depends on your code. Here you can see some metrics from my current community project that is developed on SharePoint Server 2007. On average I have about 7 IL instructions per line of code. This is not a metric you should use, it is just an illustrative example so you can see the differences between the numbers of lines and IL instructions. Why should I measure the number of IL instructions? Just take a look at the chart above. The compiler does something that you cannot see – it compiles your code to IL. This is not an intuitive process because you usually cannot say exactly what the end result will be. You know it at a general level but you don't know it exactly. Therefore we can expect some surprises, and that's why we should measure the number of IL instructions. For example, you may find a better solution for some method in your source code. It looks nice, it works nice and everything seems to be okay. But on a server under load your fix may be way slower than the previous code. Although you minimized the number of lines of code, it ended up increasing the number of IL instructions. How to measure the number of IL instructions? My choice is NDepend because Visual Studio is not able to measure this metric. The steps to take are easy. Open your NDepend project or create a new one and add all your application assemblies to the project (you can also add a Visual Studio solution to the project). Run the project analysis and wait until it is done. You can see the overall stats from the global summary window. This is the same window I used to read the LoC and the number of IL instructions metrics for my chart. Meanwhile I made some changes to my code (enabled advanced caching for the events and event registrations module) and then I ran the code analysis again to get results for this section of this posting. NDepend is also able to tell you exactly what parts of the code have problematically many IL instructions. 
The code quality section of the CQL Query Explorer shows you how many problems there are with members in the analyzed code. If you click on the line Methods too big (NbILInstructions) you can see all the problematic members of classes in the CQL Explorer shown in the image on the right. In my case I have 10 methods that are too big and two of them have a horrible number of IL instructions – just take a look at the first two methods in this TOP10. Also note the query box. NDepend has an easy, SQL-like query language to query code analysis results. You can modify these queries if you like and you can also define your own ones if the default set is not enough for you. What is a good result? As you can see from the query window, the number of IL instructions per member should be at most 200. Of course, as always, the fewer instructions you have, the better performing code you have. I don't mean little differences here but big ones. For example, take a look at the first method in the warnings list. The number of IL instructions it has is huge. And believe me – this method looks awful. Conclusion The number of IL instructions is a useful metric when optimizing your code. For analyzing code at a general level to find methods that are too long, you can use the number of LoC metric because it is more intuitive and you can therefore handle the situation more easily. You can also use NDepend as a code metrics tool because it has a lot of metrics to offer.
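If NDepend is not at hand and you only want a rough, programmatic feel for this metric, reflection can at least report the size of a method's IL body. The sketch below is only an approximation of the idea discussed above, and it is my own addition: it counts IL bytes rather than decoded instructions, and the inspected type is an arbitrary example.

using System;
using System.Linq;
using System.Reflection;

class IlSizeSketch
{
    static void Main()
    {
        // Report the raw IL body size of the ten "fattest" methods of a type.
        // GetILAsByteArray() returns bytes, not decoded instructions, so treat
        // the numbers only as a rough proxy for the metric NDepend reports.
        var type = typeof(string);   // illustrative; point this at your own types

        var biggest = type
            .GetMethods(BindingFlags.Public | BindingFlags.NonPublic |
                        BindingFlags.Instance | BindingFlags.Static |
                        BindingFlags.DeclaredOnly)
            .Select(m => new { m.Name, Body = m.GetMethodBody() })
            .Where(m => m.Body != null)                      // skip extern methods
            .Select(m => new { m.Name, Bytes = m.Body.GetILAsByteArray().Length })
            .OrderByDescending(m => m.Bytes)
            .Take(10);

        foreach (var method in biggest)
        {
            Console.WriteLine("{0,-40} {1,6} IL bytes", method.Name, method.Bytes);
        }
    }
}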

    Read the article

  • Stir Trek: Iron Man Edition Recap and Photos

    - by Brian Jackett
    If you’ve noticed my blogging activity has reduced in frequency and technical content lately it’s primarily due to all of the conferences I’ve been attending, speaking at, or planning in the past few months.  This past Friday myself and six other dedicated individuals put on Stir Trek: Iron Man Edition as the culmination of a few months of hard work.  For those unfamiliar, Stir Trek is a web developer conference that was founded last year as an event to showcase content from Microsoft’s MIX conference and end the day with a private showing of the then just-released Star Trek movie.  This year’s conference expanded from 2 to 4 content tracks and upped the number of tickets from 350 to 600.  Even more amazing was the fact that we had 592 people show up day of the event for the lowest drop-off percentage of any conference I’ve been to before.   Nerd Dinner and Swag Bags     The night before Stir Trek: Iron Man Edition we hosted a nerd dinner at the Polaris Shopping mall food court with about 30 in attendance.  Nerd dinners are a great time to meet others passionate about technology and socialize before the whirlwind of the conference hits.  After the nerd dinner 20+ volunteers headed to the conference location and helped us stuff swag bags.  This in and of itself was a monumental task of putting together 600 swag bags with numerous leaflets, sponsor items, and t-shirts.  A big thanks goes out to all who assisted us that night so that we could finish in just under 2 hours instead of taking all night.  My sleep schedule also thanks you. Morning of Stir Trek     After getting a decent amount of sleep I arrived at Marcus Crosswoods theater at 6am to begin setting up for the day.  Myself and Jody Morgan were in charge of registration so we got tables set up, laid out swag bags, and organized our volunteer crew to assist with checking-in attendees.  Despite having 600+ people registration went fairly smoothly and got the day off to a great start.  I especially appreciated the 3+ cups of coffee from Crimson Cup, a local coffee shop.  For any of you that know me you’ll know that I rarely drink coffee except a few times a year when I really need the energy, so that says a lot about how good their coffee is.   Conference Starts     Once registration was completed the day kicked off with Molly Holzschlag keynoting.  Unfortunately Molly suffered from an ear infection and wasn’t able to fly so she had a virtual keynote and a session later in the day.  I was working behind the scenes on various tasks so I was only able to drop in very briefly on the keynote and rest of the morning sessions.  Throughout the day I tried to grab at least 1 or 2 pics of each presenter.  See my album below for the full set of pics.      For lunch we ordered around 150 pizzas from Mellow Mushroom, a local pizza place (notice the theme of supporting local businesses.)  Early on we were concerned about Mellow Mushroom being able to supply that many pizzas and get them delivered (still hot) to the theater, but they did an excellent job day of the event.  I wish I had gotten some pictures of the old school VW van they delivered the pizza in, but I was just a bit busy running around trying to get theaters ready for lunch.  We had attendees from last year who specifically requested that we have Mellow Mushroom supply lunch this year and I’m glad everything worked out being able to use them again.     During the afternoon I was able to attend a few sessions and hear some great content from various speakers.  
It was also nice to just sit down and get off my feet for a bit.  After the last sessions the day concluded with a raffle.  There were a few logistical and technical issues that hampered our ability to smoothly conduct the raffle.  To those of you that agree the raffle wasn’t the smoothest experience I would like to say that the Stir Trek planning committee has already begun meeting to discuss ways of improving the conference for next year.  We are also accepting feedback (both positive and negative) at the following link: click here.  If you don’t wish to use the Joind In site you can also email me directly and I’ll be sure to pass along the feedback.   Iron Man 2 Movie     Last but not least, what Stir Trek event would be complete without the feature movie.  This year’s movie was Iron Man 2.  The theater had some really cool props and promotions (see pic below) for the movie.  I really enjoyed Iron Man 2, but I would recommend brushing up on the Iron Man comics and Marvel’s plans for future movies to understand some of the plot elements that come up.  Also make sure you stay through to the end of the movie credits to see a sneak peak of something special, that’s all I’ll say. Conclusion     Again a big thanks goes out to all of the speakers, sponsors, attendees, movie theater staff, volunteers, and everyone else involved in making this event great.  Also big thanks to my fellow Stir Trek planning committee members: Jeff Blankenburg, Matt Casto, Carey Payette, Jody Morgan, Rick Kierner, and Sarah Dutkiewitcz.  I am grateful for everything I learned while helping plan this event and look forward to being involved again next year.  For those interested we are currently targeting Thor as our movie theme for 2011 and then The Avengers for 2012.  These are tentative based on release dates that could shift as we get closer, but for now look solid.   Photos Pics on Facebook (includes tagging)     Stir Trek: Iron Man Edition photos on Facebook Pics on Live site (higher res)      View Full Album         -Frog Out

    Read the article

  • SQL SERVER – TechEd India 2012 – Content, Speakers and a Lots of Fun

    - by pinaldave
    TechEd is one event which every developers and IT professionals are looking forward to attend. It is opportunity of life time and no matter how many time one gets chance to engage with it, it is never enough. I still remember every single moment of every TechEd I have attended so far. We are less than 100 hours away from TechEd India 2012 event.This event is the one must attend event for every Technology Enthusiast. Fourth time in the row I am going to attend this event and I am equally excited as the first time of the event. There are going to be two very solid SQL Server track this time and I will be attending end of the end both the tracks. Here is my view on each of the 10 sessions. Each session is carefully crafted and leading exeprts from industry will present it. Day 1, March 21, 2012 T-SQL Rediscovered with SQL Server 2012 – This session is going to bring some of the lesser known enhancements that were brought with SQL Server 2012. When I learned that Jacob Sebastian is going to do this session my reaction to this is DEMO, DEMO and DEMO! Jacob spends hours and hours of his time preparing his session and this will be one of those session that I am confident will be delivered over and over through out the next many events. Catapult your data with SQL Server 2012 Integration Services – Praveen is expert story teller and one of the wizard when it is about SQL Server and business intelligence. He is surely going to mesmerize you with some interesting insights on SSIS performance too. Processing Big Data with SQL Server 2012 and Hadoop – There are three sessions on Big Data at TechEd India 2012. Stephen is going to deliver one of the session. Watching Stephen present is always joy and quite entertaining. He shares knowledge with his typical humor which captures ones attention. I wrote about what is BIG DATA in a blog post. SQL Server Misconceptions and Resolutions – I will be presenting this Session along with Vinod Kumar. READ MORE HERE. Securing with ContainedDB in SQL Server 2012 – Pranab is expert when it is about SQL Server and Security. I have seen him presenting and he is indeed very pleasant to watch. A dry subject like security, he makes it much lively. A Contained Database is a database which contains all the necessary settings and metadata, making database easily portable to another server. This database will contain all the necessary details and will not have to depend on any server where it is installed for anything. You can take this database and move it to another server without having any worries. Day 3, March 23, 2012 Peeling SQL Server like an Onion: Internals Demystified – Vinod Kumar has been writing about this extensively on his other blog post. In recent conversation he suggested that he will be creating very exclusive content for this presentation. I know Vinod for long time and have worked with him along many community activities. I am going to pay special attention to the details. I know Vinod has few give-away planned now for attending the session now only if he shares with us. Speed Up – Parallel Processes and unparalleled Performance – Performance tuning is my favorite subject. I will be discussing effect of parallelism on performance in this session. Here me out, there will be lots of quiz questions during this session and if you get the answers correct – you can win some really cool goodies – I Promise! READ MORE HERE. Keep your database available – AlwaysOn – Balmukund is like an army man. 
He is always ready to show and prove that he has coolest toys in terms of SQL Server and he knows how to keep them running AlwaysON. Availability groups, Listener, Clustering, Failover, Read-Only replica etc all will be demo’ed in this session. This is really heavy but very interesting content not to be missed. Lesser known facts about SQL Server Backup and Restore – Amit Banerjee – this name is known internationally for solving SQL Server problems in 140 characters. He has already blogged about this and this topic is going to be interesting. A successful restore strategy for applications is as good as their last good known backup. I have few difficult questions to ask to Amit and I am very sure that his unique style will entertain people. By the way, his one of the slide may give few in audience a funny heart attack. Top 5 reasons why you want SQL Server 2012 BI – Praveen plans to take a tour of some of the BI enhancements introduced in the new version. Business Insights with SQL Server is a critical building block and this version of SQL Server is no exception. For the matter of the fact, when I saw the demos he was going to show during this session, I felt like that I wish I can set up all of this on my machine. If you miss this session – you will miss one of the most informative session of the day. Also TechEd India 2012 has a Live streaming of some content and this can be watched here. The TechEd Team is planning to have some really good exclusive content in this channel as well. If you spot me, just do not hesitate to come by me and introduce yourself, I want to remember you! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLServer, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • What Will Happen to Real Estate Leases when Operating Leases are Gone?

    - by Theresa Hickman
    Many people are concerned about what will happen to real estate leases when FASB and IASB abolish operating leases. They plan to unveil the proposed standards on treating leases this summer as part of the convergence project but no "finalized ruling" is expected for at least a year because it will need to get formal consensus from many players, such as the SEC, American Association of Investors, Congress, the Big Four, American Associate of Realtors, the international equivalents of these, etc. If your accounting is a bit rusty, an Operating Lease is where you lease equipment or some asset for a shorter period than the actual (expected) life of the asset and then give the asset back while it still has some useful life in it. (Think leasing a car). Because an Operating Lease does not contain any of the provisions that would qualify it as a Capital Lease, the lease is not treated as a sale or purchase and hits the lessee's rental expense and the lessor's revenue. So it all stays on the P&L (assuming no prepayments are made). Capital Leases, on the other hand, hit lessee's and lessor's balance sheets because the asset is treated as a sale. (I'm ignoring interest and depreciation here to emphasize my point). Question: What will happen to real estate leases when Operating Leases go away and how will Oracle Financials address these changes? Before I attempt to address these questions, here's a real-life example to expound on some of the issues: Let's say a U.S. retailer leases a store in a mall for 15 years. Under U.S. GAAP, the lease is considered an operating or expense lease. Will that same lease be considered a capital lease under IFRS? Real estate leases are supposedly going to be capitalized under IFRS. If so, will everyone need to change all leases from operating to capital? Or, could we make some adjustments so we report the lease as an expense for operations reporting but capitalize it for SEC reporting? Would all aspects of the lease be capitalized, or would some line items still be expensed? For example, many retail store leases are defined to include (1) the agreed-to rent amount; (2) a negotiated increase in base rent, e.g., maybe a 5% increase in Year 5; (3) a sales rent component whereby the retailer pays a variable additional amount based on the sales generated in the prior month; (4) parking lot maintenance fees. Would the entire lease be capitalized, or would some portions still be expensed? To help answer these questions, I met up with our resident accounting expert and walking encyclopedia, Seamus Moran. Here's what he had to say: Oracle is aware of the potential changes specific to reporting/capitalization of real estate leases; i.e., we are aware that FASB and IASB have identified real estate leases as one of the areas for standards convergence. Oracle stays apprised of the on-going convergence through our domain expertise staff, our relationship with customers, our market awareness, and, of course, our relationships with the Big 4. This is part of our normal process with respect to regulatory compliance worldwide. At this time, Oracle expects that the standards convergence committee will make a recommendation about reporting standards for real estate leases in about a year. Following typical procedures, we also expect that the recommendation will be up for review for a year, and customers will then need to start reporting to the new standard about a year after that. So that means we would expect the first customer to report under the new standard in maybe 3 years. 
Typically, after the new standard is finalized and distributed, we find that our customers then begin to evaluate how they plan to meet the new standard. And through groups like the Customer Advisory Boards (CABs), our customers tell us what kind of product changes are needed in order to satisfy their new reporting requirements. Of course, Oracle is also working with the Big 4 and Accenture and other implementers in order to ascertain that these recommended changes will indeed meet new reporting standards. So the best advice we can offer right now is, stay apprised of the standards convergence committee; know that Oracle is also staying abreast of developments; get involved with your CAB so your voice is heard; know that Oracle products continue to be GAAP compliant, and we will continue to maintain that as our standard. But exactly what is that "standard"--we need to wait on the standards convergence committee. In a nut shell, operating leases will become either capital leases or month to month rentals, but it is still too early, too political and too uncertain to call out at this point.

    Read the article

  • Tuning Red Gate: #4 of Some

    - by Grant Fritchey
    First time connecting to these servers directly (keys to the kingdom, bwa-ha-ha-ha. oh, excuse me), so I'm going to take a look at the server properties, just to see if there are any issues there. Max memory is set, cool, first possible silly mistake clear. In fact, these look to be nicely set up. Oh, I'd like to see the ANSI Standards set by default, but it's not a big deal. The default location for database data is the F:\ drive, where I saw all the activity last time. Cool, the people maintaining the servers in our company listen: parallelism threshold is set to 35 and optimize for ad hoc is enabled. No shocks, no surprises. The basic setup is appropriate.

On to the problem database. Nothing wrong in the properties. The database is in SIMPLE recovery, but I think it's a reporting system, so no worries there. Again, I'd prefer to see the ANSI settings for connections, but that's the worst thing I can see. Time to look at the queries, tables, indexes and statistics, because all the information I've collected over the last several days suggests that we're not looking at a systemic problem (except possibly not enough memory), but at the traditional tuning issues. I just want to note that I started looking at the system, not the queries. So should you when tuning your environment. I know, from the data collected through SQL Monitor, what my top poor performing queries are, and the most frequently called, etc. I'm starting with the most frequently called. I'm going to get the execution plan for this thing out of the cache (although, with the cache dumping constantly, I might not get it). And it's not there. Called 1.3 million times over the last 3 days, but it's not in cache. Wow. OK. I'll see what's in cache for this database:

SELECT  deqs.creation_time,
        deqs.execution_count,
        deqs.max_logical_reads,
        deqs.max_elapsed_time,
        deqs.total_logical_reads,
        deqs.total_elapsed_time,
        deqp.query_plan,
        SUBSTRING(dest.text, (deqs.statement_start_offset / 2) + 1,
                  (deqs.statement_end_offset - deqs.statement_start_offset) / 2
                  + 1) AS QueryStatement
FROM    sys.dm_exec_query_stats AS deqs
        CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
        CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
WHERE   dest.dbid = DB_ID('Warehouse')
        AND deqs.statement_end_offset > 0
        AND deqs.statement_start_offset > 0
ORDER BY deqs.max_logical_reads DESC;

And looking at the most expensive operation, we have our first bad boy: multiple table scans against very large sets of data and a sort operation. A sort operation? It's an insert. Oh, I see, the table is a heap, so it's doing an insert, then sorting the data and then inserting into the primary key. First question: why isn't this a clustered index? Let's look at some more of the queries. The next one is deceiving. Looking at the query plan, you're thinking to yourself, what's the big deal? Well, what if I told you that this thing had 8,036,318 reads? I know, you're looking at skinny little pipes. Know why? Table variable. Estimated number of rows = 1. Actual number of rows? Well, I'm betting several more than one, considering it's read 8 MILLION pages off the disk in a single execution. We have a serious and real tuning candidate. Oh, and I missed this: it's loading the table variable from a user defined function. Let me check, let me check. YES! A multi-statement table valued user defined function. And another tuning opportunity.

This one's a beauty, seriously. Did I also mention that they're doing a hash against all the columns in the physical table? I'm sure that won't lead to scans of a 500,000 row table, no, not at all. OK. I lied. Of course it is. At least it's on the top part of the Loop, which means the scan is only executed once. I just did a cursory check on the next several poor performers. All calling the UDF. I think I found a big tuning opportunity. At this point, I'm typing up internal emails for the company. Someone just had their baby called ugly. In addition to a series of suggested changes that we need to implement, I'm also apologizing for being such an unkind monster as to question whether that third eye & those flippers belong on such an otherwise lovely child.

    Read the article

  • Binding a select in a client template

    - by Bertrand Le Roy
    I recently got a question on one of my client template posts asking me how to bind a select tag’s value to data in client templates. I was surprised not to find anything on the web addressing the problem, so I thought I’d write a short post about it. It really is very simple once you know where to look. You just need to bind the value property of the select tag, like this: <select sys:value="{binding color}"> If you do it from markup like here, you just need to use the sys: prefix. It just works. Here’s the full source code for my sample page: <!DOCTYPE html> <html> <head> <title>Binding a select tag</title> <script src=http://ajax.microsoft.com/ajax/beta/0911/Start.js type="text/javascript"></script> <script type="text/javascript"> Sys.require(Sys.scripts.Templates, function() { var colors = [ "red", "green", "blue", "cyan", "purple", "yellow" ]; var things = [ { what: "object", color: "blue" }, { what: "entity", color: "purple" }, { what: "thing", color: "green" } ]; Sys.create.dataView("#thingList", { data: things, itemRendered: function(view, ctx) { Sys.create.dataView( Sys.get("#colorSelect", ctx), { data: colors }); } }); }); </script> <style type="text/css"> .sys-template {display: none;} </style> </head> <body xmlns:sys="javascript:Sys"> <div> <ul id="thingList" class="sys-template"> <li> <span sys:id="thingName" sys:style-color="{binding color}" >{{what}}</span> <select sys:id="colorSelect" sys:value="{binding color}" class="sys-template"> <option sys:value="{{$dataItem}}" sys:style-background-color="{{$dataItem}}" >{{$dataItem}}</option> </select> </li> </ul> </div> </body> </html> This produces the following page: Each of the items sees its color change as you select a different color in the drop-down. Other details worth noting in this page are the use of the script loader to get the framework from the CDN, and the sys:style-background-color syntax to bind the background color style property from markup. Of course, I’ve used a fair amount of custom ASP.NET Ajax markup in here, but everything could be done imperatively and with completely clean markup from the itemRendered event using Sys.bind.

    Read the article

  • FrameBuffer Render to texture not working all the way

    - by brainydexter
    I am learning to use Frame Buffer Objects. For this purpose, I chose to render a triangle to a texture and then map that to a quad. When I render the triangle, I clear the color to something blue. So, when I render the texture on the quad from fbo, it only renders everything blue, but doesn't show up the triangle. I can't seem to figure out why this is happening. Can someone please help me out with this ? I'll post the rendering code here, since glCheckFramebufferStatus doesn't complain when I setup the FBO. I've pasted the setup code at the end. Here is my rendering code: void FrameBufferObject::Render(unsigned int elapsedGameTime) { glBindFramebuffer(GL_FRAMEBUFFER, m_FBO); glClearColor(0.0, 0.6, 0.5, 1); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // adjust viewport and projection matrices to texture dimensions glPushAttrib(GL_VIEWPORT_BIT); glViewport(0,0, m_FBOWidth, m_FBOHeight); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(0, m_FBOWidth, 0, m_FBOHeight, 1.0, 100.0); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); DrawTriangle(); glPopAttrib(); // setting FrameBuffer back to window-specified Framebuffer glBindFramebuffer(GL_FRAMEBUFFER, 0); //unbind // back to normal viewport and projection matrix //glViewport(0, 0, 1280, 768); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.0, 1.33, 1.0, 1000.0); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glClearColor(0, 0, 0, 0); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); render(elapsedGameTime); } void FrameBufferObject::DrawTriangle() { glPushMatrix(); glBegin(GL_TRIANGLES); glColor3f(1, 0, 0); glVertex2d(0, 0); glVertex2d(m_FBOWidth, 0); glVertex2d(m_FBOWidth, m_FBOHeight); glEnd(); glPopMatrix(); } void FrameBufferObject::render(unsigned int elapsedTime) { glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, m_TextureID); glPushMatrix(); glTranslated(0, 0, -20); glBegin(GL_QUADS); glColor4f(1, 1, 1, 1); glTexCoord2f(1, 1); glVertex3f(1,1,1); glTexCoord2f(0, 1); glVertex3f(-1,1,1); glTexCoord2f(0, 0); glVertex3f(-1,-1,1); glTexCoord2f(1, 0); glVertex3f(1,-1,1); glEnd(); glPopMatrix(); glBindTexture(GL_TEXTURE_2D, 0); glDisable(GL_TEXTURE_2D); } void FrameBufferObject::Initialize() { // Generate FBO glGenFramebuffers(1, &m_FBO); glBindFramebuffer(GL_FRAMEBUFFER, m_FBO); // Add depth buffer as a renderbuffer to fbo // create depth buffer id glGenRenderbuffers(1, &m_DepthBuffer); glBindRenderbuffer(GL_RENDERBUFFER, m_DepthBuffer); // allocate space to render buffer for depth buffer glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, m_FBOWidth, m_FBOHeight); // attaching renderBuffer to FBO // attach depth buffer to FBO at depth_attachment glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_DepthBuffer); // Adding a texture to fbo // Create a texture glGenTextures(1, &m_TextureID); glBindTexture(GL_TEXTURE_2D, m_TextureID); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_FBOWidth, m_FBOHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0); // onlly allocating space glBindTexture(GL_TEXTURE_2D, 0); // attach texture to FBO glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_TextureID, 0); // Check FBO Status if( glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) std::cout 
<< "\n Error:: FrameBufferObject::Initialize() :: FBO loading not complete \n"; // switch back to window system Framebuffer glBindFramebuffer(GL_FRAMEBUFFER, 0); } Thanks!

    Read the article

  • Making the most of next week's SharePoint 2010 developer training

    - by Eric Nelson
    [you can still register if you are free on the afternoons of 9th to 11th – UK time] We have 50+ registrations with more coming in – which is fantastic. Please read on to make the most of the training.

Background: We have structured the training to make sure that you can still learn lots during the three days even if you do not have SharePoint 2010 installed. Additionally, the course is based around a subset of the Channel 9 training to allow you to easily dig deeper or look again at specific areas. Which means if you have zero time between now and next Wednesday, then you are still good to go. But if you can do some pre-work you will likely get even more out of the three days.

Step 1: Check out the topics and resources available on-demand. The course is based around a subset of the Channel 9 training to allow you to easily dig deeper or look again at specific areas. Take a lap around the SharePoint 2010 Training Course on Channel 9 and download the SharePoint Developer Training Kit.

Step 2: Use a pre-configured Virtual Machine which you can download (best start today – it is large!). If you don't have access to SharePoint 2010, consider using the VM we created. You will need a 64bit host OS and a bare minimum of 4GB of RAM; 8GB recommended. Virtual PC cannot be used with this VM – Virtual PC only supports 32bit guests. The 2010-7a Information Worker VM gives you everything you need to develop for SharePoint 2010. Watch the video on how to use this VM, then download the VM. Remember you only need to download the "parts" for the 2010-7a VM. There are 3 subtly different ways of using this VM: easiest is to follow the advice of the video, get yourself a host OS of Windows Server 2008 R2 with Hyper-V and simply use the VM. Alternatively, you can take the VHD and create a "Boot to VHD" if you have Windows 7 Ultimate or Enterprise Edition. This works really well – especially if you are already familiar with "Boot to VHD" (this post I did will help you get started). Or you can take the VHD and use an alternative VM tool such as VirtualBox if you have a different host OS. NB: this tends to involve some work to get everything running fine. Check out parts 1 to 3 from Rolly, and if you go with VirtualBox use an IDE controller, not SATA. SATA will blue screen. Note that I also converted the vhd to a vmdk; I used the FREE Starwind Converter to do this whilst I was fighting blue screens – not sure it's necessary as VirtualBox does now work with VHDs.

Step 3: Install SharePoint 2010 on a 64bit Windows 7 or Vista host. I haven't tried this but it is now supported. Check out MSDN.

Final notes: I am in the process of securing a number of hosted VMs for ISVs directly managed by my team. Your Architect Evangelist will have details once I have them! Else we can sort out on the Wed. Regrettably I am unable to give folks 1:1 support on any issues around Boot to VHD, 3rd party VM products etc.

Related Links: Check you are fully plugged into the work of my team – have you done these simple steps, including joining our new LinkedIn group?

    Read the article

  • Java EE @ No Fluff Just Stuff Tour

    - by reza_rahman
    If you work in the US and still don't know what the No Fluff Just Stuff (NFJS) Tour is, you are doing yourself a very serious disfavor. NFJS is by far the cheapest and most effective way to stay up to date through some world class speakers and talks. This is most certainly true for US enterprise Java developers in particular. Following the US cultural tradition of old-fashioned roadshows, NFJS is basically a set program of speakers and topics offered at major US cities year round. Many now famous world class technology speakers can trace their humble roots to NFJS. Via NFJS you basically get to have amazing training without paying for an expensive venue, lodging or travel. The events are usually on the weekends so you don't need to even skip work if you want (a great feature for consultants on tight budgets and deadlines). I am proud to share with you that I recently joined the NFJS troupe. My hope is that this will help solve the lingering problem of effectively spreading the Java EE message here in the US. For NFJS I hope my joining will help beef up perhaps much desired Java content. In any case, simply being accepted into this legendary program is an honor I could have perhaps only dreamed of a few years ago. I am very grateful to Jay Zimmerman for seeing the value in me and the Java EE content. The current speaker line-up consists of the likes of Neal Ford, Venkat Subramaniam, Nathaniel Schutta, Tim Berglund and many other great speakers. I actually had my tour debut on April 4-5 with the NFJS New York Software Symposium - basically a short train commute away from my home office. The show is traditionally one of the smaller ones and it was not that bad for a start. I look forward to doing a few more in the coming months (more on that a bit later). I had four talks back to back (really my most favorite four at the moment). The first one was a talk on JMS 2 - some of you might already know JMS is one of my most favored Java EE APIs. The slides for the talk are posted below: What’s New in Java Message Service 2 from Reza Rahman The next talk I delivered was my Cargo Tracker/Java EE + DDD talk. This talk basically overviews DDD and describes how DDD maps to Java EE using code examples/demos from the Cargo Tracker Java EE Blue Prints project. Applied Domain-Driven Design Blue Prints for Java EE from Reza Rahman The third talk I delivered was our flagship Java EE 7/8 talk. As you may know, currently the talk is basically about Java EE 7. I'll probably slowly evolve this talk to gradually transform it into a Java EE 8 talk as we move forward (I'll blog about that separately shortly). The following is the slide deck for the talk: JavaEE.Next(): Java EE 7, 8, and Beyond from Reza Rahman My last talk for the show was my JavaScript+Java EE 7 talk. This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The slide deck for the talk is here: JavaScript/HTML5 Rich Clients Using Java EE 7 from Reza Rahman Unsurprisingly this talk was well-attended. The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it but the instructions should be fairly self-explanatory. My next NFJS show is the Central Ohio Software Symposium in Columbus on June 6-8 (sorry for the late notice - it's been a really crazy few weeks). Here's my tour schedule so far, I'll keep you up-to-date as the tour goes forward: June 6 - 8, Columbus Ohio. 
June 24 - 27, Denver Colorado (UberConf) - my most extensive agenda on the tour so far. July 18 - 20, Austin Texas. I hope you'll take this opportunity to get some updates on Java EE as well as the other awesome content on the tour!
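If JMS 2 is the session that interests you most, the core of the new simplified API fits in a few lines. Below is a minimal sketch of sending and receiving a message with JMSContext in a Java EE 7 container; the queue JNDI name, class name and payload are illustrative assumptions of mine rather than anything from the talk itself.

// Minimal JMS 2 "simplified API" sketch for a Java EE 7 container.
// The queue JNDI name ("jms/ordersQueue"), class name and payload are illustrative assumptions.
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.jms.Queue;

@Stateless
public class OrderService {

    @Inject
    private JMSContext context;   // container-managed; replaces the Connection/Session plumbing of JMS 1.1

    @Resource(lookup = "jms/ordersQueue")   // illustrative JNDI name
    private Queue ordersQueue;

    public void placeOrder(String payload) {
        // One line instead of the JMS 1.1 Connection/Session/MessageProducer dance.
        context.createProducer().send(ordersQueue, payload);
    }

    public String nextOrder() {
        // Synchronous receive with the simplified consumer API (5 second timeout).
        return context.createConsumer(ordersQueue).receiveBody(String.class, 5000);
    }
}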

    Read the article

  • Three Buckets of Knowledge

    - by BuckWoody
    As I learn more and more about SQL Server every day, I divide up my information into three "buckets":

Concepts: In the first bucket are the general concepts about the topic. What is it? What does it do (or sometimes, what is it supposed to do)? How does one operation flow to another? For this information I use books, magazine articles and, believe it or not, Wikipedia. I don't always trust that last source, but I do use it to see how others lay out their thoughts around a concept. I really like graphical charts that show me the process flow if I can get it, and this is an ideal place for a good presentation. In fact, this may be the only real use for a presentation – I'll explain what I mean in a moment.

Reference: The references for a topic include things like Transact-SQL (T-SQL) syntax, or the screen layout on a panel, things like that. Think Dictionary. The only reference I trust for this information is Books Online – presentations are fine, but we're talking about a dictionary. Ever go to a movie that just reads through a dictionary? Me neither. But I have gone to presentations where people try to include tons of reference materials in their slides. Even if you give me the presentation material later, it's not really a searchable, readable medium.

How To: A how-to for me is an example, or even better, a tutorial about an example. Whatever it is, it shows me a practical use for the concepts and of course involves the syntax. The important thing here is that you need to be able to separate out the example the person is showing you from the stuff you need to know. I can't tell you how many times folks have told me, "well, sure, if yours is red then that works. But mine is blue." And I have to explain, "then use "blue" for the search word here." You get the idea. No one will do your work for you – the examples are meant as a teaching tool only. I accept that, learn what I can, and then run off to create my own thing. You might think a How To works well in a presentation, and it does, for the most part. For a complex example or tutorial, I still prefer the printed word (electronic if possible) so that I can go over the example multiple times, skip around and so on.

The order here isn't actually that important. Most of the time I start with a concept, look at an example, and then read the reference material. But sometimes I look up an example, read a little of the concepts and then check the reference. The only primary thing I try to enforce is to read something from each of them. It's dangerous to base your work on any single example, reference or concept.

    Read the article

  • saving and retrieving a text file in java?

    - by user3319432
    import java.sql.*;
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;

public class saving extends JFrame implements ActionListener {

    JTextField edpno = new JTextField(10);
    JLabel l0 = new JLabel("EDP Number: ");
    JComboBox fname = new JComboBox();
    JLabel l1 = new JLabel("First Name: ");
    JTextField lname = new JTextField(20);
    JLabel l2 = new JLabel("Last Name: ");
    // JTextField contno = new JTextField(20);
    // JLabel l3 = new JLabel("Contact Number: ");
    JComboBox contno = new JComboBox();
    JLabel l3 = new JLabel("Contact Number: ");
    JButton bOK = new JButton("Save");
    JButton bRetrieve = new JButton("Retrieve");
    private ImageIcon icon;

    JPanel C = new JPanel() {
        protected void paintComponent(Graphics g) {
            g.drawImage(icon.getImage(), 0, 0, null);
            super.paintComponent(g);
        }
    };

    // Constructor; the original post declared this as "public Search Record()", which does not compile.
    public saving() {
        icon = new ImageIcon("images/canres.png");
        C.setOpaque(false);
        C.setLayout(new GridLayout(5, 2, 4, 4));
        setTitle("Search Record");
        C.add(l0);
        C.add(edpno);
        edpno.addActionListener(this);
        C.add(l1);
        C.add(fname);
        fname.setForeground(Color.BLUE);
        fname.setFont(new Font(" ", Font.BOLD, 15));
        C.add(l2);
        C.add(lname);
        C.add(l3);
        C.add(contno);
        contno.setForeground(Color.BLUE);
        contno.setFont(new Font(" ", Font.BOLD, 15));
        C.add(bOK);
        bOK.addActionListener(this);
        C.add(bRetrieve);
        bRetrieve.addActionListener(this);
        bOK.setBackground(Color.white);
        bRetrieve.setBackground(Color.white);
    }

    // Despite the name, this method retrieves the record for the EDP number entered.
    public void saverecord() {
        try {
            // Connect to the Database
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            String path = "jdbc:odbc:;DRIVER=Microsoft Access Driver (*.mdb);DBQ=Database/roomassign.mdb";
            String DBPassword = "";
            String DBUserName = "";
            Connection con = DriverManager.getConnection(path, "", "");
            Statement s = con.createStatement();
            s.executeQuery("select firstname, Lastname, contact number from name WHERE edpno ='" + edpno.getText() + "'");
            ResultSet rs = s.getResultSet();
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                fname.setSelectedItem(rs.getString(1));
                lname.setText(rs.getString(2));
                contno.setSelectedItem(rs.getString(3));
                // crs.setSelectedItem(rs.getString(4));
            }
            s.close();
            con.close();
        } catch (Exception Q) {
            JOptionPane.showMessageDialog(this, Q);
        }
    }

    public void SaveRecord() {
        try {
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            String path = "jdbc:odbc:;DRIVER=Microsoft Access Driver (*.mdb);DBQ=Database/roomassign.mdb";
            String DBPassword = "";
            String DBUsername = "";
            Connection con = DriverManager.getConnection(path, "", "");
            Statement s = con.createStatement();
            String sql = "UPDATE rooms SET Firstname='" + fname.getSelectedItem() + "',Lastname='" + lname.getText() + "',Contactnumber='" + contno.getSelectedItem() + "' WHERE '" + edpno.getText() + "'=edpno";
            s.executeUpdate(sql);
            JOptionPane.showMessageDialog(this, "New room Record has been successfully saved");
            dispose();
            s.close();
            con.close();
        } catch (Exception Mismatch) {
            JOptionPane.showMessageDialog(this, Mismatch);
        }
    }

    public void actionPerformed(ActionEvent ako) {
        if (ako.getSource() == bRetrieve) {
            dispose();
        } else if (ako.getSource() == bOK) {
            SaveRecord();
        }
    }

    public static void main(String[] awtsave) {
        new saving();   // the original post called "new Search()", which does not match any class shown
    }
}
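Beyond the compile errors, the string-concatenated SQL in both methods is worth calling out: it is open to SQL injection and breaks on any value containing a quote. Below is a minimal sketch of the same two operations using PreparedStatement and try-with-resources; the table and column names (rooms, Firstname, Lastname, Contactnumber, edpno) are guesses based on the UPDATE statement in the question, so adjust them to the real schema.

// A hedged sketch only: the table/column names are assumptions taken from the
// question's UPDATE statement (rooms, Firstname, Lastname, Contactnumber, edpno).
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class RecordDao {

    private static final String URL =
            "jdbc:odbc:;DRIVER=Microsoft Access Driver (*.mdb);DBQ=Database/roomassign.mdb";

    // Look up one record by EDP number; returns null if nothing matches.
    public String[] findByEdpNo(String edpNo) throws SQLException {
        String sql = "SELECT Firstname, Lastname, Contactnumber FROM rooms WHERE edpno = ?";
        try (Connection con = DriverManager.getConnection(URL, "", "");
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, edpNo);                  // bound parameter, no string concatenation
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    return new String[] { rs.getString(1), rs.getString(2), rs.getString(3) };
                }
                return null;
            }
        }
    }

    // Update the same record, again with bound parameters.
    public int update(String edpNo, String first, String last, String contact) throws SQLException {
        String sql = "UPDATE rooms SET Firstname = ?, Lastname = ?, Contactnumber = ? WHERE edpno = ?";
        try (Connection con = DriverManager.getConnection(URL, "", "");
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, first);
            ps.setString(2, last);
            ps.setString(3, contact);
            ps.setString(4, edpNo);
            return ps.executeUpdate();               // number of rows affected
        }
    }
}

The try-with-resources blocks also close the connection and statement on every code path, which the original code misses whenever an exception is thrown. On older JVMs you may still need Class.forName("sun.jdbc.odbc.JdbcOdbcDriver") before the first getConnection call.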

    Read the article

  • NFJS Central Iowa Software Symposium Des Moines Trip Report

    - by reza_rahman
    As some of you may be aware, I recently joined the well-respected US based No Fluff Just Stuff (NFJS) Tour. If you work in the US and still don't know what the No Fluff Just Stuff (NFJS) Tour is, you are doing yourself a very serious disfavor. NFJS is by far the cheapest and most effective way to stay up to date through some world class speakers and talks. Following the US cultural tradition of old-fashioned roadshows, NFJS is basically a set program of speakers and topics offered at major US cities year round. The NFJS Central Iowa Software Symposium was held August 8 - 10 in Des Moines. The attendance at the event and my sessions was moderate by comparison to some of the other shows. It is one of the few events of its kind that take place in this part of the country, so it is extremely important. I had five talks total over two days, more or less back-to-back.

The first one was my JavaScript + Java EE 7 talk titled "Using JavaScript/HTML5 Rich Clients with Java EE 7". This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The slide deck for the talk is here: JavaScript/HTML5 Rich Clients Using Java EE 7 from Reza Rahman. The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it, but the instructions should be fairly self-explanatory. I am delivering this material at JavaOne 2014 as a two-hour tutorial. This should give me a little more bandwidth to dig a little deeper, especially on the JavaScript end.

The second talk (on the second day) was our flagship Java EE 7/8 talk. Currently the talk is basically about Java EE 7, but I'm slowly evolving the talk to transform it into a Java EE 8 talk as we move forward. The following is the slide deck for the talk: JavaEE.Next(): Java EE 7, 8, and Beyond from Reza Rahman.

The next talk I delivered was my Cargo Tracker/Java EE + DDD talk. This talk basically overviews DDD and describes how DDD maps to Java EE using code examples/demos from the Cargo Tracker Java EE Blue Prints project: Applied Domain-Driven Design Blue Prints for Java EE from Reza Rahman.

The fourth was my talk titled "Using NoSQL with ~JPA, EclipseLink and Java EE". The talk covers an interesting gap that there is surprisingly little material on out there. The talk has three parts: a bird's-eye view of the NoSQL landscape, how to use NoSQL via a JPA centric facade using EclipseLink NoSQL, Hibernate OGM, DataNucleus, Kundera, Easy-Cassandra, etc., and how to use NoSQL native APIs in Java EE via CDI. The slides for the talk are here: Using NoSQL with ~JPA, EclipseLink and Java EE from Reza Rahman. The JPA based demo is available here, while the CDI based demo is available here. Both demos use MongoDB as the data store. Do let me know if you need help getting the demos up and running.

I finished off the event with a talk titled Building Java HTML5/WebSocket Applications with JSR 356. The talk introduces HTML5 WebSocket, overviews JSR 356, tours the API and ends with a small WebSocket demo on GlassFish 4. The slide deck for the talk is posted below: Building Java HTML5/WebSocket Applications with JSR 356 from Reza Rahman. The demo code is posted on GitHub: https://github.com/m-reza-rahman/hello-websocket.

My next NFJS show is the Greater Atlanta Software Symposium on September 12 - 14. Here's my tour schedule so far; I'll keep you up-to-date as the tour goes forward: September 12 - 14, Atlanta. September 19 - 21, Boston. October 17 - 19, Seattle. I hope you'll take this opportunity to get some updates on Java EE as well as the other useful content on the tour.
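If you have not tried JSR 356 yet, the annotated server API is small enough to show in a few lines. The sketch below is a minimal echo endpoint of the kind a small GlassFish 4 demo revolves around; the /echo path and class name are placeholders of mine, not something taken from the talk or the linked hello-websocket repository.

// Minimal JSR 356 (javax.websocket) echo endpoint; the path and class name are placeholders.
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/echo")
public class EchoEndpoint {

    @OnOpen
    public void opened(Session session) {
        System.out.println("Session opened: " + session.getId());
    }

    @OnMessage
    public String onMessage(String message, Session session) {
        // Returning a String sends it straight back to the connected peer.
        return "echo: " + message;
    }

    @OnClose
    public void closed(Session session) {
        System.out.println("Session closed: " + session.getId());
    }
}

On the browser side, new WebSocket("ws://host:port/app/echo") plus an onmessage handler is all that is needed to talk to an endpoint like this.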

    Read the article

< Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >