Search Results



  • The name 'GridView1' does not exist in the current context

    - by sameer
    Hi all, I have two files named TimeSheet.aspx.cs and TimSheet.aspx; the code for both is given below for reference. When I build the application I get the error "The name 'GridView1' does not exist in the current context", even though I have a control with the ID GridView1 and I have added runat="server" as well. I'm not able to figure out what is causing this issue. Can anyone see what is happening here? Thanks & Regards,

    =======================================
    TimeSheet.aspx.cs
    =======================================

        #region Using directives
        using System;
        using System.Data;
        using System.Configuration;
        using System.Web;
        using System.Web.Security;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Web.UI.WebControls.WebParts;
        using System.Web.UI.HtmlControls;
        using TSMS.Web.UI;
        #endregion

        public partial class TimeSheets : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                FormUtil.RedirectAfterUpdate(GridView1, "TimeSheets.aspx?page={0}");
                FormUtil.SetPageIndex(GridView1, "page");
                FormUtil.SetDefaultButton((Button)GridViewSearchPanel1.FindControl("cmdSearch"));
            }

            protected void GridView1_SelectedIndexChanged(object sender, EventArgs e)
            {
                string urlParams = string.Format("TimeSheetId={0}", GridView1.SelectedDataKey.Values[0]);
                Response.Redirect("TimeSheetsEdit.aspx?" + urlParams, true);
            }

            protected void GridView1_RowCommand(object sender, GridViewCommandEventArgs e)
            {
            }
        }

    =======================================
    TimeSheet.aspx
    =======================================

        <%@ Page Language="C#" Theme="Default" MasterPageFile="~/MasterPages/admin.master"
            AutoEventWireup="true" CodeFile="TimeSheets.aspx.cs" Inherits="TimeSheets"
            Title="TimeSheets List" %>
        <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder2" Runat="Server">Time Sheets List</asp:Content>
        <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
            <data:GridViewSearchPanel ID="GridViewSearchPanel1" runat="server"
                GridViewControlID="GridView1" PersistenceMethod="Session" />
            <br />
            <data:EntityGridView ID="GridView1" runat="server" AutoGenerateColumns="False"
                OnSelectedIndexChanged="GridView1_SelectedIndexChanged"
                DataSourceID="TimeSheetsDataSource" DataKeyNames="TimeSheetId"
                AllowMultiColumnSorting="false" DefaultSortColumnName=""
                DefaultSortDirection="Ascending" ExcelExportFileName="Export_TimeSheets.xls"
                onrowcommand="GridView1_RowCommand">
                <Columns>
                    <asp:CommandField ShowSelectButton="True" ShowEditButton="True" />
                    <asp:BoundField DataField="TimeSheetId" HeaderText="Time Sheet Id" SortExpression="[TimeSheetID]" ReadOnly="True" />
                    <asp:BoundField DataField="TimeSheetTitle" HeaderText="Time Sheet Title" SortExpression="[TimeSheetTitle]" />
                    <asp:BoundField DataField="StartDate" DataFormatString="{0:d}" HtmlEncode="False" HeaderText="Start Date" SortExpression="[StartDate]" />
                    <asp:BoundField DataField="EndDate" DataFormatString="{0:d}" HtmlEncode="False" HeaderText="End Date" SortExpression="[EndDate]" />
                    <asp:BoundField DataField="DateOfCreation" DataFormatString="{0:d}" HtmlEncode="False" HeaderText="Date Of Creation" SortExpression="[DateOfCreation]" />
                    <data:BoundRadioButtonField DataField="Locked" HeaderText="Locked" SortExpression="[Locked]" />
                    <asp:BoundField DataField="ReviewedBy" HeaderText="Reviewed By" SortExpression="[ReviewedBy]" />
                    <data:HyperLinkField HeaderText="Employee Id"
                        DataNavigateUrlFormatString="EmployeesEdit.aspx?EmployeeId={0}"
                        DataNavigateUrlFields="EmployeeId" DataContainer="EmployeeIdSource"
                        DataTextField="LastName" />
                </Columns>
                <EmptyDataTemplate>
                    <b>No TimeSheets Found!</b>
                </EmptyDataTemplate>
            </data:EntityGridView>
            <asp:GridView ID="GridView2" runat="server">
            </asp:GridView>
            <br />
            <asp:Button runat="server" ID="btnTimeSheets"
                OnClientClick="javascript:location.href='TimeSheetsEdit.aspx'; return false;"
                Text="Add New"></asp:Button>
            <data:TimeSheetsDataSource ID="TimeSheetsDataSource" runat="server"
                SelectMethod="GetPaged" EnablePaging="True" EnableSorting="True" EnableDeepLoad="True">
                <DeepLoadProperties Method="IncludeChildren" Recursive="False">
                    <Types>
                        <data:TimeSheetsProperty Name="Employees" />
                        <%--<data:TimeSheetsProperty Name="TimeSheetDetailsCollection" />--%>
                    </Types>
                </DeepLoadProperties>
                <Parameters>
                    <data:CustomParameter Name="WhereClause" Value="" ConvertEmptyStringToNull="false" />
                    <data:CustomParameter Name="OrderByClause" Value="" ConvertEmptyStringToNull="false" />
                    <asp:ControlParameter Name="PageIndex" ControlID="GridView1" PropertyName="PageIndex" Type="Int32" />
                    <asp:ControlParameter Name="PageSize" ControlID="GridView1" PropertyName="PageSize" Type="Int32" />
                    <data:CustomParameter Name="RecordCount" Value="0" Type="Int32" />
                </Parameters>
            </data:TimeSheetsDataSource>
        </asp:Content>


  • Driving me INSANE: Unable to Retrieve Metadata for

    - by Loren
    I've been spending the past three days trying to fix this problem, and it's driving me insane. I'm not quite sure what is causing this bug. Here are the details: MVC4 + Entity Framework 4.4 + MySql + POCO/Code First. I'm setting up the above configuration; here are my classes:

        namespace BTD.DataContext
        {
            public class BTDContext : DbContext
            {
                public BTDContext() : base("name=BTDContext") { }

                protected override void OnModelCreating(DbModelBuilder modelBuilder)
                {
                    base.OnModelCreating(modelBuilder);
                    //modelBuilder.Conventions.Remove<System.Data.Entity.Infrastructure.IncludeMetadataConvention>();
                }

                public DbSet<Product> Products { get; set; }
                public DbSet<ProductImage> ProductImages { get; set; }
            }
        }

        namespace BTD.Data
        {
            [Table("Product")]
            public class Product
            {
                [Key]
                public long ProductId { get; set; }

                [DisplayName("Manufacturer")]
                public int? ManufacturerId { get; set; }

                [Required]
                [StringLength(150)]
                public string Name { get; set; }

                [Required]
                [DataType(DataType.MultilineText)]
                public string Description { get; set; }

                [Required]
                [StringLength(120)]
                public string URL { get; set; }

                [Required]
                [StringLength(75)]
                [DisplayName("Meta Title")]
                public string MetaTitle { get; set; }

                [DataType(DataType.MultilineText)]
                [DisplayName("Meta Description")]
                public string MetaDescription { get; set; }

                [Required]
                [StringLength(25)]
                public string Status { get; set; }

                [DisplayName("Create Date/Time")]
                public DateTime CreateDateTime { get; set; }

                [DisplayName("Edit Date/Time")]
                public DateTime EditDateTime { get; set; }
            }

            [Table("ProductImage")]
            public class ProductImage
            {
                [Key]
                public long ProductImageId { get; set; }
                public long ProductId { get; set; }
                public long? ProductVariantId { get; set; }
                [Required]
                public byte[] Image { get; set; }
                public bool PrimaryImage { get; set; }
                public DateTime CreateDateTime { get; set; }
                public DateTime EditDateTime { get; set; }
            }
        }

    Here is my web.config setup:

        <connectionStrings>
            <add name="BTDContext"
                 connectionString="Server=localhost;Port=3306;Database=btd;User Id=root;Password=mypassword;"
                 providerName="MySql.Data.MySqlClient" />
        </connectionStrings>

    The database AND tables already exist. I'm still pretty new to MVC, but I was using this tutorial. The application builds fine; however, when I try to add a controller using Product (BTD.Data) as my model class and BTDContext (BTD.DataContext) as my data context class, I receive the following error:

        Unable to retrieve metadata for BTD.Data.Product using the same DbCompiledModel to create
        context against different types of database servers is not supported. Instead, create a
        separate DbCompiledModel for each type of server being used.

    I am at a complete loss. I've scoured Google with almost every variation of that error message I can think of, but to no avail. Here are the things I can verify: MySql is working properly; I'm using MySql Connector version 6.5.4 and have created other ASP.NET Web Forms + Entity Framework applications with ZERO problems. I have also tried including/removing this in my web.config:

        <system.data>
            <DbProviderFactories>
                <remove invariant="MySql.Data.MySqlClient"/>
                <add name="MySQL Data Provider" invariant="MySql.Data.MySqlClient"
                     description=".Net Framework Data Provider for MySQL"
                     type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.5.4.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
            </DbProviderFactories>
        </system.data>

    I've literally been working on this bug for days; I'm at the point now where I would be willing to pay someone to solve it, no joke. I'd really love to use MVC 4 and Razor. I was so excited to get started on this, but now I'm pretty discouraged. I truly appreciate any help or guidance on this! Also note: I'm using Entity Framework from NuGet.

    Another note: I was using the default Visual Studio template that creates your MVC project with the account pages and other stuff. I removed all references to the added files because they were trying to use the "DefaultConnection", which didn't exist; I thought those files might be what was causing the error, but there was still no luck after removing them. I just wanted everyone to know I'm using the Visual Studio MVC project template, which pre-creates a bunch of files. I will try to recreate this all from a blank MVC project that doesn't have those files and will update this once I test that.

    Other references: it appears someone else is having the same issue I am; the only difference is that they are using SQL Server. I tried tweaking all my code to follow the suggestions on that Stack Overflow question/answer, but still to no avail.
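    Editor's aside (hedged): the "different types of database servers" wording often points at the tooling resolving a second, SQL Server-backed connection (the template's "DefaultConnection") alongside the MySQL one. One way to take connection-string lookup out of the equation entirely, sketched below as an assumption rather than a verified fix, is the DbContext(DbConnection, bool) constructor with an explicit MySqlConnection:

        using System.Data.Entity;
        using MySql.Data.MySqlClient;

        public class BTDContext : DbContext
        {
            // Handing the context a concrete MySqlConnection bypasses any
            // by-name provider/connection-string resolution.
            public BTDContext()
                : base(new MySqlConnection(
                           "Server=localhost;Port=3306;Database=btd;" +
                           "User Id=root;Password=mypassword;"),
                       contextOwnsConnection: true)
            {
            }

            public DbSet<Product> Products { get; set; }
            public DbSet<ProductImage> ProductImages { get; set; }
        }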


  • How to set focus for a CustomComboBox in a CellEditingTemplate when entering the page for the first time (MVVM)

    - by Shamin
        PreparingCellForEdit="dg_PreparingCellForEdit"
        BeginningEdit="dg_BeginningEdit"

        <data:DataGridTemplateColumn MinWidth="300">
            <data:DataGridTemplateColumn.HeaderStyle>
                <Style TargetType="primitives:DataGridColumnHeader" BasedOn="{StaticResource FOTDataGridColumnHeaderStyle}">
                    <Setter Property="ContentTemplate">
                        <Setter.Value>
                            <DataTemplate>
                                <TextBlock Text="{Binding CancelReasonText2, Source={StaticResource LabelResource}}"
                                           Style="{StaticResource TextBlockLabelStandardStyle}"/>
                            </DataTemplate>
                        </Setter.Value>
                    </Setter>
                </Style>
            </data:DataGridTemplateColumn.HeaderStyle>
            <data:DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding CancelReason.CancelCodeDescription}"
                               Style="{StaticResource TextBlockLabelStandardStyle}"/>
                </DataTemplate>
            </data:DataGridTemplateColumn.CellTemplate>
            <data:DataGridTemplateColumn.CellEditingTemplate>
                <DataTemplate>
                    <input:AutoCompleteBox x:Name="cBoxCancelReason" FilterMode="StartsWith" IsDropDownOpen="True"
                                           SelectedItem="{Binding CancelReason, Mode=TwoWay}"
                                           ItemsSource="{Binding CancelCodes}"
                                           ValueMemberPath="CancelCodeDescription">
                        <input:AutoCompleteBox.ItemTemplate>
                            <DataTemplate>
                                <TextBlock Text="{Binding CancelCodeDescription}"
                                           Style="{StaticResource TextBlockLabelStandardStyle}"/>
                            </DataTemplate>
                        </input:AutoCompleteBox.ItemTemplate>
                    </input:AutoCompleteBox>
                </DataTemplate>
            </data:DataGridTemplateColumn.CellEditingTemplate>
        </data:DataGridTemplateColumn>
        </data:DataGrid.Columns>
        </data:DataGrid>

    --- Code-behind

        public partial class CancelFlightView : UserControl, ICancelFlightView
        {
            private data.CancelCode DefaultCancelCode
            {
                get
                {
                    data.CancelCode code = new data.CancelCode();
                    code.CancelCd = "-1";
                    code.CancelCodeDescription = "-- Select Cancel Reason --";
                    return code;
                }
            }

            public CancelFlightView()
            {
                InitializeComponent();
                this.dg.LoadingRow += new EventHandler<DataGridRowEventArgs>(dg_LoadingRow);
                //this.Loaded += new RoutedEventHandler(CancelFlightView_Loaded);
            }

            void dg_LoadingRow(object sender, DataGridRowEventArgs e)
            {
                CheckBox checkBox = (CheckBox)dg.Columns[0].GetCellContent(e.Row);
                if (checkBox.IsChecked.Value)
                {
                    FrameworkElement obj = (FrameworkElement)dg.Columns[1].GetCellContent(e.Row);
                    System.Windows.Browser.HtmlPage.Plugin.Focus();
                    DataGridCell cellEdit = (DataGridCell)obj.Parent;
                    cellEdit.Focus();
                    dg.BeginEdit();
                }
            }

            //private void UserControl_Loaded(object sender, RoutedEventArgs e)
            //{
            //    if (DataContext != null)
            //    {
            //        CancelFlightViewModel viewModel = (CancelFlightViewModel)DataContext;
            //        viewModel.View = this;
            //        viewModel.Grid = dg;
            //        //viewModel.InitFocus();
            //    }
            //}

            //void CancelFlightView_Loaded(object sender, RoutedEventArgs e)
            //{
            //    if (dg.SelectedItem != null)
            //    {
            //        CheckBox checkBox = (CheckBox)dg.Columns[0].GetCellContent(dg.SelectedItem);
            //        if (checkBox.IsChecked.Value)
            //        {
            //            DataGridCell cellEdit = ((DataGridCell)((System.Windows.Controls.Primitives.DataGridCellsPresenter)((DataGridCell)checkBox.Parent).Parent).Children[1]);
            //            dg.CurrentColumn = dg.Columns[1];
            //            System.Windows.Browser.HtmlPage.Plugin.Focus();
            //            cellEdit.Focus();
            //            dg.BeginEdit();
            //        }
            //    }
            //}

            public CancelFlightView(CancelFlightViewModel viewModel) : this()
            {
                ViewModel = viewModel;
            }

            private void dg_PreparingCellForEdit(object sender, DataGridPreparingCellForEditEventArgs e)
            {
                object obj = dg.Columns[1].GetCellContent(e.Row);
                if (obj != null && obj.GetType() == typeof(AutoCompleteBox))
                {
                    AutoCompleteBox cBoxCancelReason = (AutoCompleteBox)obj;
                    System.Windows.Browser.HtmlPage.Plugin.Focus();
                    cBoxCancelReason.Focus();
                }
            }

            private void CustomComboBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
            {
            }

            private void dg_BeginningEdit(object sender, DataGridBeginningEditEventArgs e)
            {
            }

            private void chkFlight_Click(object sender, RoutedEventArgs e)
            {
                CheckBox chkTemp = sender as CheckBox;
                if (!chkTemp.IsChecked.Value)
                {
                }
                else
                {
                    DataGridCell cellEdit = ((DataGridCell)((System.Windows.Controls.Primitives.DataGridCellsPresenter)((DataGridCell)chkTemp.Parent).Parent).Children[1]);
                    dg.CurrentColumn = dg.Columns[1];
                    cellEdit.Focus();
                    dg.BeginEdit();
                }
            }

            private void LayoutRoot_KeyUp(object sender, KeyEventArgs e)
            {
                //if (e.Key == Key.Enter)
                //{
                //}
            }

            #region ICancelFlightView Members

            public CancelFlightViewModel ViewModel
            {
                get { return DataContext as CancelFlightViewModel; }
                set { DataContext = value; }
            }

            #endregion
        }

    Now, when the user clicks the CheckBox, I can set focus on the CustomComboBox, but I can't set focus on the row whose checkBox.IsChecked.Value is true when the page is opened for the first time. Is this possible with the MVVM pattern? Looking forward to your reply; thanks very much.
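    Editor's aside (hedged): first-load focus usually fails because Focus() runs before the grid's visual tree is realized. A common workaround is to defer the work with Dispatcher.BeginInvoke from a Loaded handler; the sketch below reuses the poster's grid and column layout and is an untested assumption, not a confirmed MVVM-friendly fix.

        void CancelFlightView_Loaded(object sender, RoutedEventArgs e)
        {
            // Defer until layout completes; a synchronous Focus() during
            // Loaded is typically too early in Silverlight.
            Dispatcher.BeginInvoke(() =>
            {
                foreach (object item in dg.ItemsSource)
                {
                    CheckBox checkBox = dg.Columns[0].GetCellContent(item) as CheckBox;
                    if (checkBox != null && checkBox.IsChecked == true)
                    {
                        dg.SelectedItem = item;
                        dg.CurrentColumn = dg.Columns[1];
                        System.Windows.Browser.HtmlPage.Plugin.Focus();
                        dg.BeginEdit();
                        break;
                    }
                }
            });
        }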


  • Why won't my DIV height grow? Is it because I am using floats?

    - by user1684636
    Please help! I am trying to create some div sections, and some of these sections contain other divs that are floated, all of which I have cleared. But the sections are not growing to accommodate the content inside them. The following is my HTML:

        <div class="data">
            <div class="name">Data</div>
            <div class="first section">
                <div class="title">First Section</div>
                <div class="left settings">
                    <div class="row">
                        <div class="field">First Name</div>
                        <div class="value">John</div>
                    </div>
                    <div class="row">
                        <div class="field">Last Name</div>
                        <div class="value">Smith</div>
                    </div>
                </div>
                <div class="right settings">
                    <div class="row">
                        <div class="field">ID</div>
                        <div class="value">321</div>
                    </div>
                    <div class="row">
                        <div class="field">Group</div>
                        <div class="value">Eng</div>
                    </div>
                </div>
            </div>
            <div class="second section">
                <div class="title">Second Section</div>
            </div>
            <div class="third section">
                <div class="title">Third Section</div>
            </div>
        </div>

    The following is my CSS:

        div.data {
            position: relative;
            top: 1em;
            width: 100em;
        }
        div.data div.name {
            color: #0066FF;
            font-weight: bold;
        }
        div.data div.section div.title {
            color: green;
            font-weight: bold;
        }

        /** first section **/
        div.data div.first div.settings {
            position: relative;
            width: 799px;
            border-top: 1px solid;
            float: left;
        }
        div.data div.first div.left {
            border-left: 1px solid;
        }
        div.data div.first div.settings div.row {
            border-bottom: 1px solid;
            clear: both;
            height: 2em;
        }
        div.data div.first div.settings div.row div {
            float: left;
            height: 22px;
            padding: 5px;
            border-right: 1px solid;
        }
        div.data div.first div.settings div.row div.field {
            width: 192px;
        }
        div.data div.first div.settings div.row div.value {
            width: 585px;
        }

        /** second section **/
        div.data div.second {
            clear: both;
        }

    For the first section, I have a title and a table made up of a left and a right part. Both of these are floated left and cleared. The rows of the left and right tables have a field and a value, which are also floated and cleared. At this point, I expect "first section" to grow as the content inside it grows, but instead the height is stuck at 22px, the same height as the section title! What is going on? I tried overflow:auto for div.section and that worked, but when I tried to set this style for div.settings in "first section":

        div.data div.first div.settings {
            position: relative;
            top: 1em;
        }

    I ended up with scroll bars instead of the height growing to fit the new changes, and everything was out of whack. I am really at my wit's end trying to figure this out. If anyone can give me any suggestions on what I am doing wrong, it would be a great help. Thanks.
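    Editor's note (hedged): this is the classic case of a parent collapsing around floated children. A clearfix rule on the section, sketched below against the poster's own class names, normally makes the parent wrap its floats without the scrollbars that overflow:auto introduced:

        /* Generated element clears the floats, so each section grows to
           contain them - no overflow, hence no scrollbars. */
        div.data div.section:after {
            content: "";
            display: table;
            clear: both;
        }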


  • java: how to get a string representation of a compressed byte array?

    - by Guillaume
    I want to put some compressed data into a remote repository. To put data on this repository I can only use a method that takes the name of the resource and its content as a String (like "data.txt" + "hello world"). The repository mimics a filesystem but is not one, so I cannot use File directly. I want to be able to do the following:

    1. the client sends the server a file data.txt
    2. the server compresses data.txt into a compressed file data.zip
    3. the server sends a string representation of data.zip to the repository
    4. the repository stores data.zip
    5. the client downloads data.zip from the repository and is able to open it with its favorite zip tool

    The problem arises at step 3, when I try to get a string representation of my compressed file. Here is a sample class, using the Zip*Stream classes, that emulates the repository and showcases my problem. The created zip file is valid, but after its 'serialization' it gets corrupted. (The sample class uses Apache Commons IO.) Many thanks for your help.

        package zip;

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipInputStream;
        import java.util.zip.ZipOutputStream;
        import org.apache.commons.io.FileUtils;

        /**
         * Date: May 19, 2010 - 6:13:07 PM
         *
         * @author Guillaume AME.
         */
        public class ZipMe {

            public static void addOrUpdate(File zipFile, File... files) throws IOException {
                File tempFile = File.createTempFile(zipFile.getName(), null);
                // delete it, otherwise you cannot rename your existing zip to it.
                tempFile.delete();

                boolean renameOk = zipFile.renameTo(tempFile);
                if (!renameOk) {
                    throw new RuntimeException("could not rename the file " + zipFile.getAbsolutePath()
                            + " to " + tempFile.getAbsolutePath());
                }

                byte[] buf = new byte[1024];
                ZipInputStream zin = new ZipInputStream(new FileInputStream(tempFile));
                ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zipFile));

                ZipEntry entry = zin.getNextEntry();
                while (entry != null) {
                    String name = entry.getName();
                    boolean notInFiles = true;
                    for (File f : files) {
                        if (f.getName().equals(name)) {
                            notInFiles = false;
                            break;
                        }
                    }
                    if (notInFiles) {
                        // Add ZIP entry to output stream.
                        out.putNextEntry(new ZipEntry(name));
                        // Transfer bytes from the ZIP file to the output file
                        int len;
                        while ((len = zin.read(buf)) > 0) {
                            out.write(buf, 0, len);
                        }
                    }
                    entry = zin.getNextEntry();
                }
                // Close the streams
                zin.close();

                // Compress the files
                if (files != null) {
                    for (File file : files) {
                        InputStream in = new FileInputStream(file);
                        // Add ZIP entry to output stream.
                        out.putNextEntry(new ZipEntry(file.getName()));
                        // Transfer bytes from the file to the ZIP file
                        int len;
                        while ((len = in.read(buf)) > 0) {
                            out.write(buf, 0, len);
                        }
                        // Complete the entry
                        out.closeEntry();
                        in.close();
                    }
                    // Complete the ZIP file
                }
                tempFile.delete();
                out.close();
            }

            public static void main(String[] args) throws IOException {
                final String zipArchivePath = "c:/temp/archive.zip";
                final String tempFilePath = "c:/temp/data.txt";
                final String resultZipFile = "c:/temp/resultingArchive.zip";

                File zipArchive = new File(zipArchivePath);
                FileUtils.touch(zipArchive);
                File tempFile = new File(tempFilePath);
                FileUtils.writeStringToFile(tempFile, "hello world");
                addOrUpdate(zipArchive, tempFile);
                // archive.zip exists and contains a compressed data.txt that can be read using winrar

                // now simulate writing of the zip into an in-memory cache
                String archiveText = FileUtils.readFileToString(zipArchive);
                FileUtils.writeStringToFile(new File(resultZipFile), archiveText);
                // resultingArchive.zip exists and contains a compressed data.txt, but it can not
                // be read using winrar: CRC failed in data.txt. The file is corrupt
            }
        }
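    Editor's aside (hedged): the corruption at step 3 is expected; readFileToString decodes arbitrary zip bytes through a character encoding, which is lossy. One common remedy, an assumption rather than anything from the original post, is to Base64-encode the bytes so the repository only ever stores plain ASCII. A minimal sketch using java.util.Base64 (Java 8+) alongside Commons IO:

        import java.io.File;
        import java.io.IOException;
        import java.util.Base64;
        import org.apache.commons.io.FileUtils;

        public class ZipAsString {
            // Binary -> String: Base64 output is pure ASCII, so it survives
            // any string-based repository API unchanged.
            static String encode(File zipFile) throws IOException {
                return Base64.getEncoder()
                             .encodeToString(FileUtils.readFileToByteArray(zipFile));
            }

            // String -> Binary: losslessly reverses encode().
            static void decode(String text, File target) throws IOException {
                FileUtils.writeByteArrayToFile(target, Base64.getDecoder().decode(text));
            }
        }

    The client would then decode the downloaded string back to bytes before handing the archive to a zip tool.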


  • how to serialize / deserialize classes defined in .proto (protobuf)

    - by make
    Hi, could someone please help me with serializing/deserializing classes defined in a .proto (protobuf) file? Here is an example of what I am trying to build:

    file.proto

        message Data {
            required string x1 = 1;
            required uint32 x2 = 2;
            required float x3 = 3;
        }

        message DataExge {
            repeated Data data = 1;
        }

    client.cpp

        ...
        void serialize(const DataExge &data_snd) {
            try {
                ofstream ofs("DataExge");
                data_snd.SerializeToOstream(&ofs);
            } catch (exception &e) {
                cerr << "serialize/exception: " << e.what() << endl;
                exit(1);
            }
        }

        void deserialize(DataExge &data_rec) {
            try {
                ifstream ifs("DataExge");
                data_rec.ParseFromIstream(&ifs);
            } catch (exception &e) {
                cerr << "deserialize/exception: " << e.what() << endl;
                exit(1);
            }
        }

        int main() {
            ...
            DataExge dataexge;
            Data *dat = dataexge.add_data();

            char *y1 = "operation1";
            uint32_t y2 = 123;
            float y3 = 3.14;

            // assigning data to send()
            dat->set_x1(y1);
            dat->set_x2(y2);
            dat->set_x3(y3);

            // sending data to the server
            serialize(dataexge);
            if (send(socket, &dataexge, sizeof(dataexge), 0) < 0) {
                cerr << "send() failed";
                exit(1);
            }

            // receiving data from the server
            deserialize(dataexge);
            if (recv(socket, &dataexge, sizeof(dataexge), 0) < 0) {
                cerr << "recv() failed";
                exit(1);
            }

            // printing received data
            cout << dat->x1() << "\n";
            cout << dat->x2() << "\n";
            cout << dat->x3() << "\n";
            ...
        }

    server.cpp

        ...
        void serialize(const DataExge &data_snd) {
            try {
                ofstream ofs("DataExge");
                data_snd.SerializeToOstream(&ofs);
            } catch (exception &e) {
                cerr << "serialize/exception: " << e.what() << endl;
                exit(1);
            }
        }

        void deserialize(DataExge &data_rec) {
            try {
                ifstream ifs("DataExge");
                data_rec.ParseFromIstream(&ifs);
            } catch (exception &e) {
                cerr << "deserialize/exception: " << e.what() << endl;
                exit(1);
            }
        }

        int main() {
            ...
            DataExge dataexge;
            Data *dat = dataexge.add_data();

            // receiving data from the client
            deserialize(dataexge);
            if (recv(socket, &dataexge, sizeof(dataexge), 0) < 0) {
                cerr << "recv() failed";
                exit(1);
            }

            // printing received data
            cout << dat->x1() << "\n";
            cout << dat->x2() << "\n";
            cout << dat->x3() << "\n";

            // assigning data to send()
            dat->set_x1("operation2");
            dat->set_x2(dat->x2() + 1);
            dat->set_x3(dat->x3() + 1.1);

            // sending data to the client
            serialize(dataexge); //error// I am getting an error at this line
            ...
            if (send(socket, &dataexge, sizeof(dataexge), 0) < 0) {
                cerr << "send() failed";
                exit(1);
            }
            ...
        }

    Thanks for your help and replies.
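    Editor's aside (hedged): send(socket, &dataexge, sizeof(dataexge), 0) transmits the raw C++ object (internal pointers included), not the encoded message, which would explain trouble on the wire. The usual pattern, sketched below with the post's message type but with length-prefix framing that is my assumption, serializes to a std::string first:

        #include <cstdint>
        #include <string>
        #include <arpa/inet.h>   // htonl
        #include <sys/socket.h>  // send

        // Serialize and send with a 4-byte length prefix so the receiver
        // knows exactly how many bytes belong to this message.
        bool sendMessage(int sock, const DataExge &msg) {
            std::string wire;
            if (!msg.SerializeToString(&wire)) return false;
            uint32_t len = htonl(static_cast<uint32_t>(wire.size()));
            if (send(sock, &len, sizeof(len), 0) < 0) return false;
            return send(sock, wire.data(), wire.size(), 0) >= 0;
        }

    The receiving side would read the 4-byte length, then that many bytes, and call ParseFromString on the buffer.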


  • How to check all check boxes at a click of a button

    - by LivingThing
    I am new to Swing, UI, and MVC. I have created code based on MVC. My problem is that in the controller part I have an action event listener which listens to different button clicks. Among those buttons I have "Select All" and "De-select All". In my view I have a table, and one of the columns of that table contains check boxes. When I click the "Select All" button I want to check all the check boxes, and with "De-select All" I want to uncheck all of them. Below is my code, which is not working. Please tell me what I am doing wrong here. Also, if someone knows a more elegant way, please share. Thanks.

    In my view:

        public class CustomerSelectorDialogUI extends JFrame {

            public CustomerSelectorDialogUI(TestApplicationUI ownerView, DummyCustomerStore dCStore, boolean modality) {
                //super(ownerView, modality);
                setTitle("[=] Customer Selection Dialog [=]");
                //setDefaultCloseOperation(JFrame.DO_NOTHING_ON_CLOSE);
                custSelectPanel = new JPanel();
                buttonPanel = new JPanel();
                selectAllButton = new JButton(" Select All ");
                clearAllButton = new JButton(" Clear All ");
                applyButton = new JButton(" Apply ");
                cancelButton = new JButton(" Cancel ");
                PopulateAndShow(dCStore, Boolean.FALSE);
            }

            public void PopulateAndShow(DummyCustomerStore dCStore, Boolean select) {
                List data = new ArrayList();
                for (Customer customer : dCStore.getAllCustomers()) {
                    Object record[] = new Object[COLUMN_COUNT];
                    record[0] = (select == false) ? Boolean.FALSE : Boolean.TRUE;
                    record[1] = Integer.toString(customer.customerId);
                    record[2] = customer.fullName;
                    data.add(record);
                }
                tModel = new TableModel(data);
                // In the above for loop, according to user input (i.e. click on check all
                // or uncheck all), I have tried to update the data. As can be seen, I have
                // a condition for record[0].
                // After the loop, I have tried several options here, like validate() and
                // repaint(), but to no avail.
                customerTable = new JTable(tModel);
                scrollPane = new JScrollPane(customerTable);
                setContentPane(this.createContentPane());
                setSize(480, 580);
                setResizable(false);
                setVisible(true);
            }

            private JPanel createContentPane() {
                custSelectPanel.setLayout(null);
                customerTable.setDragEnabled(false);
                customerTable.setFillsViewportHeight(true);
                scrollPane.setLocation(10, 10);
                scrollPane.setSize(450, 450);
                custSelectPanel.add(scrollPane);
                buttonPanel.setLayout(null);
                buttonPanel.setLocation(10, 480);
                buttonPanel.setSize(450, 100);
                custSelectPanel.add(buttonPanel);
                selectAllButton.setLocation(0, 0);
                selectAllButton.setSize(100, 40);
                buttonPanel.add(selectAllButton);
                clearAllButton.setLocation(110, 0);
                clearAllButton.setSize(100, 40);
                buttonPanel.add(clearAllButton);
                applyButton.setLocation(240, 0);
                applyButton.setSize(100, 40);
                buttonPanel.add(applyButton);
                cancelButton.setLocation(350, 0);
                cancelButton.setSize(100, 40);
                buttonPanel.add(cancelButton);
                return custSelectPanel;
            }
        }

    Table Model:

        private class TableModel extends AbstractTableModel {
            private List data;

            private String[] columnNames = {"Selected ", "Customer Id ", "Customer Name " };

            public TableModel(List data) {
                this.data = data;
            }

            public int getColumnCount() {
                return COLUMN_COUNT;
            }

            public int getRowCount() {
                return data == null ? 0 : data.size();
            }

            public String getColumnName(int col) {
                return columnNames[col];
            }

            public void setValueAt(Object value, int rowIndex, int columnIndex) {
                getRecord(rowIndex)[columnIndex] = value;
                super.fireTableCellUpdated(rowIndex, columnIndex);
            }

            private Object[] getRecord(int rowIndex) {
                return (Object[]) data.get(rowIndex);
            }

            public Object getValueAt(int rowIndex, int columnIndex) {
                return getRecord(rowIndex)[columnIndex];
            }

            public Class getColumnClass(int columnIndex) {
                if (data == null || data.size() == 0) {
                    return Object.class;
                }
                Object o = getValueAt(0, columnIndex);
                return o == null ? Object.class : o.getClass();
            }

            public boolean isCellEditable(int row, int col) {
                if (col > 0) {
                    return false;
                } else {
                    return true;
                }
            }
        }

    The view's action listener:

        class CustomerSelectorUIListener implements ActionListener {
            CustomerSelectorDialogUI custSelectView;
            Controller controller;

            public CustomerSelectorUIListener(Controller controller, CustomerSelectorDialogUI custSelectView) {
                this.custSelectView = custSelectView;
                this.controller = controller;
            }

            @Override
            public void actionPerformed(ActionEvent e) {
                String actionEvent = e.getActionCommand();
                if (actionEvent.equals("clearAllButton")) {
                    controller.checkButtonControl(false);
                } else if (actionEvent.equals("selectAllButton")) {
                    controller.checkButtonControl(true);
                }
            }
        }

    Main Controller:

        public class Controller implements ActionListener {
            CustomerSelectorDialogUI selectUI;
            DummyCustomerStore store;

            public Controller(DummyCustomerStore store, TestApplicationUI appUI) {
                this.store = store;
                this.appUI = appUI;
                appUI.ButtonListener(this);
            }

            @Override
            public void actionPerformed(ActionEvent event) {
                String viewAction = event.getActionCommand();
                if (viewAction.equals("TEST")) {
                    selectUI = new CustomerSelectorDialogUI(appUI, store, true);
                    selectUI.showTextActionListeners(new CustomerSelectorUIListener(this, selectUI));
                    selectUI.setVisible(true);
                }
            }

            public void checkButtonControl(Boolean checkAll) {
                selectUI.PopulateAndShow(store, checkAll);
            }
        }
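    Editor's aside (hedged): rather than rebuilding the JTable and model on every click (which leaves the visible table pointing at the old model), the idiomatic fix is to flip column 0 of the existing model in place; setValueAt already fires fireTableCellUpdated, so the view repaints itself. A sketch, where getCustomerTable() is a hypothetical accessor you would add to the view:

        // In the controller: toggle the checkbox column of the live model.
        public void checkButtonControl(boolean checkAll) {
            javax.swing.table.TableModel model =
                selectUI.getCustomerTable().getModel(); // hypothetical getter
            for (int row = 0; row < model.getRowCount(); row++) {
                // Fires fireTableCellUpdated internally, so no repaint() needed.
                model.setValueAt(Boolean.valueOf(checkAll), row, 0);
            }
        }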


  • jquery tabs with form help

    - by sico87
    Hello, I am implementing jQuery tabs on my site, and one of the tabs holds a form; this is my problem. The form is loaded in via ajax, as it is used multiple times throughout the site. My issue is that when the form is submitted, the page leaves the tabbed area, whereas I need to stay within the tabbed system. Below is the code I am using.

    TABS HTML:

        <div id="tabs">
            <ul>
                <li><a href="#tabs-1">Active Categories</a></li>
                <li><a href="#tabs-2">De-activated Categories</a></li>
                <li><a href="<?=base_url();?>admin/addCategory">Add A New Category</a></li>
            </ul>

    FORM MARKUP:

        <div id="contact_form">
        <?php
        // open the form
        echo form_open(base_url().'admin/addCategory');

        // categoryTitle
        echo form_label('Category Name', 'categoryTitle');
        echo form_error('categoryTitle');
        $data = array(
            'name'  => 'categoryTitle',
            'id'    => 'categoryTitle',
            'value' => $categoryTitle,
        );
        echo form_input($data);

        // categoryAbstract
        $data = array(
            'name'  => 'categoryAbstract',
            'id'    => 'categoryAbstract wysiwyg',
            'value' => $categoryAbstract,
        );
        echo form_label('Category Abstract', 'categoryAbstract');
        echo form_error('categoryAbstract');
        echo form_textarea($data);

        // categorySlug
        $data = array(
            'name'  => 'categorySlug',
            'id'    => 'categorySlug',
            'value' => $categorySlug,
        );
        echo form_label('Category Slug', 'categorySlug');
        echo form_error('categorySlug');
        echo form_input($data);

        // categoryIsSpecial
        /*$data = array(
            'name'    => 'categoryIsSpecial',
            'id'      => 'categoryIsSpecial',
            'value'   => '1',
            'checked' => $checkedSpecial,
        );
        echo form_label('Is Category Special?', 'categoryIsSpecial');
        echo form_error('categoryIsSpecial');
        echo form_checkbox($data);*/

        // categoryOnline
        $data = array(
            'name'    => 'categoryOnline',
            'id'      => 'categoryOnline',
            'value'   => '1',
            'checked' => $checkedOnline,
        );
        echo form_label('Online?', 'categoryOnline');
        echo form_checkbox($data);
        echo form_error('categoryOnline');

        // hidden fields - check if we are adding or editing
        echo form_hidden('edit', $edit);
        echo form_hidden('categoryId', $categoryId);

        // categorySubmit
        $data = array(
            'class' => 'submit',
            'id'    => 'submit',
            'value' => 'Submit',
            'name'  => 'categorySubmit',
        );
        echo form_submit($data);

        echo form_close();
        ?>
        </div>

    FORM PROCESS:

        function saveCategory()
        {
            $data = array();

            // set which element the form errors get displayed in
            $this->form_validation->set_error_delimiters('<div class="formError">', '</div>');

            // establish rules so the form can be submitted without error,
            // or show errors if there are any
            $config = array(
                array(
                    'field' => 'categoryTitle',
                    'label' => 'Category title',
                    'rules' => 'required|trim|max_length[25]|xss_clean'
                ),
                array(
                    'field' => 'categoryAbstract',
                    'label' => 'Category abstract',
                    'rules' => 'required|trim|max_length[150]|xss_clean'
                ),
                array(
                    'field' => 'categorySlug',
                    'label' => 'Category slug',
                    'rules' => 'required|trim|alpha|max_length[25]|xss_clean'
                ),
                /*array(
                    'field' => 'categoryIsSpecial',
                    'label' => 'Special category',
                    'rules' => 'trim|xss_clean'
                ),*/
                array(
                    'field' => 'categoryOnline',
                    'label' => 'Category online',
                    'rules' => 'trim|xss_clean'
                )
            );
            $this->form_validation->set_rules($config);

            // With the validation rules set, we can now run the validation over the form.
            // If the validation returns false, the error messages will be returned to the
            // view in the delimiters set further up the page.
            if ($this->form_validation->run() == FALSE) {
                // we should reload the form
                $this->load->view('admin/add_category');
            }
        }
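    Editor's aside (hedged): because the form arrives via ajax, its submit has to be intercepted with a delegated handler and posted with $.post, loading the response back into the panel instead of navigating. A sketch against the markup above, assuming jQuery 1.7+ for .on() and jQuery UI's ui-tabs-panel class:

        // Delegated handler survives the form being (re)loaded via ajax.
        $('#tabs').on('submit', '#form', function (e) {
            e.preventDefault(); // stay inside the tabbed area
            var $form = $(this);
            $.post($form.attr('action'), $form.serialize(), function (response) {
                // Swap the server's response into the current tab panel.
                $form.closest('.ui-tabs-panel').html(response);
            });
        });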


  • strange behavior while including a class in php

    - by user1864539
    I'm experiencing some strange behavior with PHP. Basically, I want to require a class within a PHP script. I know this is straightforward, and I have done it before, but when I do so here it changes the behavior of my jQuery (1.8.3) ajax response. I'm running a WAMP setup and my PHP version is 5.4.6. Here is a sample of my index.html head (omitting the jQuery JS include):

        <script>
        $(document).ready(function(){
            $('#submit').click(function(){
                var action = $('#form').attr('action');
                var form_data = {
                    fname: $('#fname').val(),
                    lname: $('#lname').val(),
                    phone: $('#phone').val(),
                    email: $('#email').val(),
                    is_ajax: 1
                };
                $.ajax({
                    type: $('#form').attr('method'),
                    url: action,
                    data: form_data,
                    success: function(response){
                        switch(response){
                            case 'ok':
                                var msg = 'data saved';
                                break;
                            case 'ko':
                                var msg = 'Oops something wrong happen';
                                break;
                            default:
                                var msg = 'misc:<br/>'+response;
                                break;
                        }
                        $('#message').html(msg);
                    }
                });
                return false;
            });
        });
        </script>

    The body:

        <div id="message"></div>
        <form id="form" action="handler.php" method="post">
            <p>
                <input type="text" name="fname" id="fname" placeholder="fname">
                <input type="text" name="lname" id="lname" placeholder="lname">
            </p>
            <p>
                <input type="text" name="phone" id="phone" placeholder="phone">
                <input type="text" name="email" id="email" placeholder="email">
            </p>
            <input type="submit" name="submit" value="submit" id="submit">
        </form>

    And the handler.php file:

        <?php
        require('class/Container.php');

        $filename = 'xml/memory.xml';
        $is_ajax = $_REQUEST['is_ajax'];

        if(isset($is_ajax) && $is_ajax){
            $fname = $_REQUEST['fname'];
            $lname = $_REQUEST['lname'];
            $phone = $_REQUEST['phone'];
            $email = $_REQUEST['email'];

            $obj = new Container;
            $obj->insertData('fname',$fname);
            $obj->insertData('lname',$lname);
            $obj->insertData('phone',$phone);
            $obj->insertData('email',$email);

            $tmp = $obj->give();
            $result = $tmp['_obj'];

            /* Push data inside array */
            $array = array();
            foreach($result as $key => $value){
                array_push($array,$key,$value);
            }

            $xml = simplexml_load_file($filename);

            // check if there is any data in
            if(count($xml->elements->data) == 0){
                // if not, create the structure
                $xml->elements->addChild('data','');
            }

            // proceed now that we do have the structure
            if(count($xml->elements->data) == 1){
                foreach($result as $key => $value){
                    $xml->elements->data->addChild($key,$value);
                }
                $xml->saveXML($filename);
                echo 'ok';
            }else{
                echo 'ko';
            }
        }
        ?>

    The Container class:

        <?php
        class Container{
            private $_obj;

            public function __construct(){
                $this->_obj = array();
            }

            public function addData($data = array()){
                if(!empty($data)){
                    $oldData = $this->_obj;
                    $data = array_merge($oldData,$data);
                    $this->_obj = $data;
                }
            }

            public function removeData($key){
                if(!empty($key)){
                    $oldData = $this->_obj;
                    unset($oldData[$key]);
                    $this->_obj = $oldData;
                }
            }

            public function outputData(){
                return $this->_obj;
            }

            public function give(){
                return get_object_vars($this);
            }

            public function insertData($key,$value){
                $this->_obj[$key] = $value;
            }
        }
        ?>

    The strange thing is that my result always falls through to the default switch statement, even though the response looks like it matches one of the defined cases. I noticed that if I just paste the Container class at the top of the handler.php file, everything works properly, but that kind of defeats what I am trying to achieve. I tried different ways to include the Container class, but the issue seems specific to this scenario. I'm still learning PHP, and my guess is that I'm missing something really basic. I also searched Stack Overflow and PHP.net regarding this issue, without success.
    Regards,
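    Editor's aside (hedged): a frequent culprit for exactly this symptom is stray output from the included file. A UTF-8 BOM or whitespace after Container.php's closing ?> tag gets echoed before 'ok', so the strict switch comparison falls through to default. Omitting the closing tag in class files, as sketched below, removes that possibility; comparing against $.trim(response) on the JavaScript side is a complementary guard.

        <?php
        class Container
        {
            // ... methods exactly as above ...
        }
        // Deliberately no closing "?>": nothing after it can leak
        // whitespace into the HTTP response and corrupt 'ok'/'ko'.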


  • SQL SERVER – Update Statistics are Sampled By Default

    - by pinaldave
    After reading my earlier post SQL SERVER – Create Primary Key with Specific Name when Creating Table on Statistics, I received another question from a blog reader. The question is as follows:

    Question: Are statistics sampled by default?

    Answer: Yes. The sampling rate can be specified by the user, anywhere from a very low value up to 100%.

    Let us do a small experiment to verify this. We will create a very large table with default statistics and examine whether those statistics are sampled or not.

        USE [AdventureWorks]
        GO
        -- Create Table
        CREATE TABLE [dbo].[StatsTest](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [varchar](100) NULL,
            [LastName] [varchar](100) NULL,
            [City] [varchar](100) NULL,
            CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC)
        ) ON [PRIMARY]
        GO
        -- Insert 1 Million Rows
        INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City)
        SELECT TOP 1000000
            'Bob',
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
                 ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Update the statistics
        UPDATE STATISTICS [dbo].[StatsTest]
        GO
        -- Show the statistics
        DBCC SHOW_STATISTICS ("StatsTest", PK_StatsTest)
        GO
        -- Clean up
        DROP TABLE [dbo].[StatsTest]
        GO

    Now let us observe the result of DBCC SHOW_STATISTICS. The result shows that the resultset is definitely sampled for a large dataset. The percentage of sampling is based on the data distribution as well as the kind of data in the table. Before dropping the table, let us first check its size: the table is 35 MB. Now, let us run the above code with a smaller number of rows.

        USE [AdventureWorks]
        GO
        -- Create Table
        CREATE TABLE [dbo].[StatsTest](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [varchar](100) NULL,
            [LastName] [varchar](100) NULL,
            [City] [varchar](100) NULL,
            CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC)
        ) ON [PRIMARY]
        GO
        -- Insert 1 Hundred Thousand Rows
        INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City)
        SELECT TOP 100000
            'Bob',
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
                 ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Update the statistics
        UPDATE STATISTICS [dbo].[StatsTest]
        GO
        -- Show the statistics
        DBCC SHOW_STATISTICS ("StatsTest", PK_StatsTest)
        GO
        -- Clean up
        DROP TABLE [dbo].[StatsTest]
        GO

    You can see that Rows Sampled is now the same as the number of rows in the table; in this case, the sample rate is 100%. Before dropping the table, let us also check its size: the table is less than 4 MB. Let us compare the two result sets for reference.

    Test 1: Total Rows: 1,000,000; Rows Sampled: 255,420; Size of the Table: 35.516 MB
    Test 2: Total Rows: 100,000; Rows Sampled: 100,000; Size of the Table: 3.555 MB

    The reason behind the sampling in Test 1 is that the data space is larger than 8 MB and therefore uses more than 1024 data pages. If the data space is smaller than 8 MB and uses fewer than 1024 data pages, sampling does not happen.
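    As noted above, the sampling rate can also be pinned explicitly rather than left to the engine; a quick editor's illustration on the same test table:

        -- Force a full scan instead of the default sample
        UPDATE STATISTICS [dbo].[StatsTest] WITH FULLSCAN
        GO
        -- Or request an explicit sampling rate
        UPDATE STATISTICS [dbo].[StatsTest] WITH SAMPLE 50 PERCENT
        GO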
    Sampling helps reduce excessive data scanning; however, it can also reduce the accuracy of the statistics. Please note that this is just a sample test and in no way can it be claimed as a benchmark; the results may differ on different machines. There is a lot of other information that could be included on this subject; I will write a detailed post covering it all very soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics


  • Oracle Announces Oracle Exadata X3 Database In-Memory Machine

    - by jgelhaus
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point

    ORACLE OPENWORLD, SAN FRANCISCO – October 1, 2012

    News Facts

    During his opening keynote address at Oracle OpenWorld, Oracle CEO Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine, the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives and making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development, and disaster recovery systems, and is a fully redundant system that can be used with mission-critical applications.

    Next-Generation Technologies Deliver Dramatic Performance Improvements

    Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database workloads. Oracle Exadata X3 Database In-Memory Machine systems leverage next-generation technologies to deliver significant performance enhancements, including:

    - Four times the Flash memory capacity of the previous generation, with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata's unique Hybrid Columnar Compression capabilities, hundreds of terabytes of user data can now be managed entirely within Flash;
    - 20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new Exadata Smart Flash Cache software also runs on previous-generation Exadata systems, increasing their capacity for writes tenfold;
    - 33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel® Xeon E5-2600 series of processors;
    - Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2, providing 40 10Gb network ports per rack for connecting users and moving data;
    - Up to 30 percent reduction in power and cooling.

    Configured for Your Business, Available Today

    Oracle Exadata X3-2 Database In-Memory Machine systems are available in Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configurations to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability. Oracle Exadata X3-2 and X3-8 Database In-Memory Machines are fully compatible with prior Exadata generations, and existing systems can also be upgraded with Oracle Exadata X3-2 servers. Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle's PeopleSoft, Oracle's Siebel CRM, the Oracle E-Business Suite, and thousands of other applications.

    Supporting Quotes

    "Forward-looking enterprises are moving towards Cloud Computing architectures," said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. "Oracle Exadata's unique ability to run any database application on a fully scale-out architecture using a combination of massive memory for extreme performance and low-cost disk for high capacity delivers the ideal solution for Cloud-based database deployments today."

    Supporting Resources

    - Oracle Press Release
    - Oracle Exadata Database Machine
    - Oracle Exadata X3-2 Database In-Memory Machine
    - Oracle Exadata X3-8 Database In-Memory Machine
    - Oracle Database 11g
    - Follow Oracle Database via Blog, Facebook and Twitter
    - Oracle OpenWorld 2012
    - Oracle OpenWorld 2012 Keynotes
    - Like Oracle OpenWorld on Facebook
    - Follow Oracle OpenWorld on Twitter
    - Oracle OpenWorld Blog
    - Oracle OpenWorld on LinkedIn
    - Mark Hurd's keynote with Andy Mendelsohn and Juan Loaiza: watch for the replay to be available soon at http://www.youtube.com/user/Oracle or http://www.oracle.com/openworld/live/on-demand/index.html


  • Exploring packages in code

    In my previous post, Searching for tasks with code, you can see how to explore the control flow side of packages, drilling down through containers, tasks, and event handlers, but it didn't cover the data flow. I recently saw a post on the MSDN forum asking how to edit an existing package programmatically, and the sticking point was how to find the data flow and the components inside. This post builds on some of the previous code and shows how you can explore all objects inside a package.

    I took the sample Task Search application I'd written previously and came up with a totally pointless little console application that just walks through the package and writes out the basic type and name of every object it finds, starting with the package itself, e.g. Package – MyPackage. The sample package we used last time showed nested objects as well as an event handler: an OnPreExecute event tucked away on the task SQL In FEL. The output of this sample tool looks like this:

        PackageObjects v1.0.0.0 (1.0.0.26627)
        Copyright (C) 2009 Konesans Ltd
        Processing File - Z:\Users\Darren Green\Documents\Visual Studio 2005\Projects\SSISTestProject\EventsAndContainersWithExecSQLForSearch.dtsx
        Package - EventsAndContainersWithExecSQLForSearch
        For Loop - FOR Counter Loop
        Task - SQL In Counter Loop
        Sequence Container - SEQ For Each Loop Wrapper
        For Each Loop - FEL Simple Loop
        Task - SQL In FEL
        Task - SQL On Pre Execute for FEL SQL Task
        Sequence Container - SEQ Top Level
        Sequence Container - SEQ Nested Lvl 1
        Sequence Container - SEQ Nested Lvl 2
        Task - SQL In Nested Lvl 2
        Task - SQL In Nested Lvl 1 #1
        Task - SQL In Nested Lvl 1 #2
        Connection Manager - LocalHost

    The code is very similar to what we had previously, but there are a couple of extra bits to deal with connections and to look more closely at a task to see if it is a Data Flow task. For connections, you just examine the package's Connections collection, as shown in the abridged snippets below. First you can see the call to the ProcessConnections method, followed by the method itself.

        // Load the package file
        Application application = new Application();
        using (Package package = application.LoadPackage(filename, null))
        {
            // Write out the package name
            Console.WriteLine("Package - {0}", package.Name);

            ... More ...

            // Look at the connections
            ProcessConnections(package.Connections);
        }

        private static void ProcessConnections(Connections connections)
        {
            foreach (ConnectionManager connectionManager in connections)
            {
                Console.WriteLine("Connection Manager - {0}", connectionManager.Name);
            }
        }

    What we didn't see in the sample output above was anything to do with the Data Flow, but rest assured the code now handles it too. The following snippet shows how each task is examined to see if it is a Data Flow task and, if so, how we can then loop through all of the components inside the data flow.

        private static void ProcessTaskHost(TaskHost taskHost)
        {
            if (taskHost == null)
            {
                return;
            }

            Console.WriteLine("Task - {0}", taskHost.Name);

            // Check if the task is a Data Flow task
            MainPipe pipeline = taskHost.InnerObject as MainPipe;
            if (pipeline != null)
            {
                ProcessPipeline(pipeline);
            }
        }

        private static void ProcessPipeline(MainPipe pipeline)
        {
            foreach (IDTSComponentMetaData90 componentMetadata in pipeline.ComponentMetaDataCollection)
            {
                Console.WriteLine("Pipeline Component - {0}", componentMetadata.Name);

                // If you wish to make changes to the component then you should really use the managed wrapper.
                // CManagedComponentWrapper wrapper = componentMetadata.Instantiate();
                // wrapper.SetComponentProperty("PropertyName", "Value");
            }
        }

    Hopefully you can see how we get a reference to the Data Flow task and then use the ComponentMetaDataCollection to find out what components we have inside the pipeline. If you want to know more about a component, you can look at its ObjectType or ComponentClassID properties. After that it gets a bit harder: you should get a reference to the wrapper object, as the comment suggests, and start using its properties, just as in the create-package samples; see our Code Development category for some of these examples.

    Download sample code project PackageObjects.zip (5KB)
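    (Editor's addendum) To make the ObjectType/ComponentClassID suggestion above concrete, here is a hedged variant of ProcessPipeline that also prints those two identifying properties:

        private static void ProcessPipeline(MainPipe pipeline)
        {
            foreach (IDTSComponentMetaData90 componentMetadata in pipeline.ComponentMetaDataCollection)
            {
                // Name plus the identifying properties mentioned above.
                Console.WriteLine("Pipeline Component - {0} (ObjectType: {1}, ClassID: {2})",
                    componentMetadata.Name,
                    componentMetadata.ObjectType,
                    componentMetadata.ComponentClassID);
            }
        }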


  • Oracle BI and XS Energy Drinks – Don’t Miss the Amway Presentation!

    - by Michelle Kimihira
    By Maria Forney

    Amway is a global leader in the direct sales industry with $10.9B in annual sales in more than 100 countries and territories. The company has implemented a global BI framework that provides accurate, consistent, and timely insights to support global, regional, and local analytical research, business planning, performance measurement, and assessment. Oracle BI EE is used by 1,500 employees across Amway sales, marketing, finance, and supply chain business units, as well as Amway affiliates in Europe, Russia, South Africa, Japan, Australia, Latin America, Malaysia, Vietnam, and Indonesia.

    Last week, I spoke with Dan Arganbright, Lead Data Analyst with Amway Global Sales, and Mike Olson, IT Manager with the Amway BI Competency Center, about their upcoming presentation at Oracle OpenWorld in San Francisco. Scheduled during a prime speaking slot on Monday, October 1 at 12:15pm in Moscone West, 2007, Dan and Mike will discuss their experience building Amway's Distributor Consulting solution, powered by Oracle BI EE. You can find more information here.

    As background, Amway offers people an opportunity to own their own businesses, and offers consumers exclusive products in health and wellness, beauty, and home care. The internal Amway Sales organization is charged with consulting leadership-level Distributors to help them gain data insights and ultimately grow their businesses. Until recently, this was a resource-intensive process of gathering and formatting data. In some markets, it took over 40 hours to collect the data and produce the analysis needed for one consultation session.

    Amway began its global BI journey in 2006, and since then the company has migrated from having multiple technology providers and integration points to an integrated strategic vendor approach. Today, the company has standardized on Oracle technology for BI. Amway has achieved cost savings through the retirement of redundant technology platforms. In addition, Mike's organization has led the charge to align disparate BI organizations into a BI Competency Center. The following diagram highlights the simplicity of Amway's standardized architecture today.

    Dubbed Distributor Consulting, Amway has developed a BI solution using the Oracle technology stack to help Distributor leaders grow their businesses. The Distributor Consulting solution provides over 40 metrics for Sales staff to deliver data-driven insights on the Distributors and organizations they support. Using Oracle BI EE, Exadata, and Oracle Data Integrator, Amway provides customized and personalized business intelligence, and the Oracle BI EE dashboards were developed by the Amway Sales organization, which demonstrates business empowerment of the technology.

    Amway is also leveraging the power of BI to drive business growth in all of its markets. A new set of Distributor Segmentation metrics is enabling a better understanding of distributor behaviors. A Global Scorecard that Amway developed provides key metrics at a market and global level for executive-level discussions. Product Analysis teams can now highlight repeat purchase rates, product penetration, and the success of CRM campaigns.

    In the words of Dan and Mike, the addition of Exadata 11 months ago has been "a game changer." Amway has been able to dramatically reduce complexity, improve performance, and increase business productivity and cost savings. For example, the number of indexes on the global data warehouse was reduced from more than 1,000 to fewer than 20. Pulling data for the highest-level distributors or the largest markets in the company can now be done in minutes instead of hours. As a result, IT has shifted from performance tuning and keeping the system operational to higher-value, business-focused activities.

        "The distributors that have been introduced to the BI reports have found them extremely helpful. Because they have never had this kind of information before, when they were presented with the reports, they wanted to take action immediately!" - Sales Development Manager in Latin America

    Without giving away more, the Amway case study presentation will be one of the unique customer sessions at OpenWorld this year. Speakers Dan Arganbright and Mike Olson have planned an interactive and entertaining session on Monday, October 1 at 12:15pm in Moscone West, 2007. I'll see you there!


  • OWB 11gR2 – OLAP and Simba

    - by David Allan
    Oracle Warehouse Builder was the first ETL product to provide a single integrated and complete environment for managing enterprise data warehouse solutions that also incorporate multi-dimensional schemas. The OWB 11gR2 release provides Oracle OLAP 11g deployment for multi-dimensional models (in addition to support for prior releases of OLAP). This means users can easily utilize Simba's MDX Provider for Oracle OLAP (see here for details and cost) which allows you to use the powerful and popular ad hoc query and analysis capabilities of Microsoft Excel PivotTables® and PivotCharts® with your Oracle OLAP business intelligence data. The extensions to the dimensional modeling capabilities have been built on established relational concepts, with the option to seamlessly move from a relational deployment model to a multi-dimensional model at the click of a button. This now means that ETL designers can logically model a complete data warehouse solution using one single tool and control the physical implementation of a logical model at deployment time. As a result data warehouse projects that need to provide a multi-dimensional model as part of the overall solution can be designed and implemented faster and more efficiently. Wizards for dimensions and cubes let you quickly build dimensional models and realize either relationally or as an Oracle database OLAP implementation, both 10g and 11g formats are supported based on a configuration option. The wizard provides a good first cut definition and the objects can be further refined in the editor. Both wizards let you choose the implementation, to deploy to OLAP in the database select MOLAP: multidimensional storage. You will then be asked what levels and attributes are to be defined, by default the wizard creates a level bases hierarchy, parent child hierarchies can be defined in the editor. Once the dimension or cube has been designed there are special mapping operators that make it easy to load data into the objects, below we load a constant value for the total level and the other levels from a source table.   Again when the cube is defined using the wizard we can edit the cube and define a number of analytic calculations by using the 'generate calculated measures' option on the measures panel. This lets you very easily add a lot of rich analytic measures to your cube. For example one of the measures is the percentage difference from a year ago which we can see in detail below. You can also add your own custom calculations to leverage the capabilities of the Oracle OLAP option, either by selecting existing template types such as moving averages to defining true custom expressions. The 11g OLAP option now supports percentage based summarization (the amount of data to precompute and store), this is available from the option 'cost based aggregation' in the cube's configuration. Ensure all measure-dimensions level based aggregation is switched off (on the cube-dimension panel) - previously level based aggregation was the only option. The 11g generated code now uses the new unified API as you see below, to generate the code, OWB needs a valid connection to a real schema, this was not needed before 11gR2 and is a new requirement since the OLAP API which OWB uses is not an offline one. Once all of the objects are deployed and the maps executed then we get to the fun stuff! How can we analyze the data? 
One option which is powerful and at many users' fingertips is Microsoft Excel PivotTables® and PivotCharts®, which can be used with your Oracle OLAP business intelligence data by utilizing Simba's MDX Provider for Oracle OLAP (see the Simba site for details of cost). I'll leave the exotic reporting illustrations to the experts (see Bud's demonstration here), but with Simba's MDX Provider for Oracle OLAP it's very simple to access the analytics stored in the database (all built and loaded via the OWB 11gR2 release) and get the regular features of Excel, such as conditional formatting, at your fingertips. That's a very quick run-through of OWB 11gR2 with respect to Oracle 11g OLAP integration and reporting using Simba's MDX Provider for Oracle OLAP. Not a deep dive in any way, but a quick overview to illustrate the design capabilities and integrations possible.
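As a side note, a deployed Oracle OLAP 11g cube can also be verified directly from SQL through the CUBE_TABLE function, independently of the Excel/MDX route. A minimal sketch follows; the schema and cube names are assumptions for illustration and are not taken from the post:

    -- Expose the deployed cube as a relational row source
    -- (OLAPTRAIN.SALES_CUBE is a placeholder name)
    SELECT *
    FROM   TABLE(CUBE_TABLE('OLAPTRAIN.SALES_CUBE'));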

    Read the article

  • Excel Template Teaser

    - by Tim Dexter
    In lieu of some official documentation I'm in the process of putting together some posts on the new 10.1.3.4.1 Excel templates. No more HTML masquerading as Excel; far more flexibility than Excel Analyzer and no need to write complex XSL templates to create the same output. Multi-sheet outputs with macros and embeddable XSL commands are here. Their capabilities are pretty extensive, and I have not worked on them for a few years since I helped put them together for EBS FSG users, so I'm back on the learning curve. Let me say up front, there is no template builder; it's a completely manual process to build them, but the results can be fantastic and provide yet another 'superstar' opportunity for you. The templates can take hierarchical XML data and walk the structure much like an RTF template. They use named cells/ranges and a hidden sheet to provide the rendering engine the hooks to drop the data in. As a taster, here's the data and output I worked with on my first effort: <EMPLOYEES> <LIST_G_DEPT> <G_DEPT> <DEPARTMENT_ID>10</DEPARTMENT_ID> <DEPARTMENT_NAME>Administration</DEPARTMENT_NAME> <LIST_G_EMP> <G_EMP> <EMPLOYEE_ID>200</EMPLOYEE_ID> <EMP_NAME>Jennifer Whalen</EMP_NAME> <EMAIL>JWHALEN</EMAIL> <PHONE_NUMBER>515.123.4444</PHONE_NUMBER> <HIRE_DATE>1987-09-17T00:00:00.000-06:00</HIRE_DATE> <SALARY>4400</SALARY> </G_EMP> </LIST_G_EMP> <TOTAL_EMPS>1</TOTAL_EMPS> <TOTAL_SALARY>4400</TOTAL_SALARY> <AVG_SALARY>4400</AVG_SALARY> <MAX_SALARY>4400</MAX_SALARY> <MIN_SALARY>4400</MIN_SALARY> </G_DEPT> ... </LIST_G_DEPT> </EMPLOYEES> Structured XML coming from a data template; check out the data template progression post. I can then generate the following binary XLS file. There are a few cool things to notice in this output. DEPARTMENT-EMPLOYEE master-detail output - not easy to do in the Excel Analyzer. Date formatting - this is using an Excel function; remember BIP generates XML dates in the canonical format. I have formatted the other data in the template using native Excel functionality. Salary total - although present in the data, I have calculated this in the template. Conditional formatting - this is handled by Excel based on the incoming data. Bursting department data across sheets and using the department name for the sheet name - this alone is worth the wait! There's more, but this is surely enough to whet your appetite. These new templates are already tucked away in EBS R12 under controlled release by the GL team and have now come to the BIEE and standalone releases in the 10.1.3.4.1+ rollup patch. For the rest of you, it's going to be a bit of a waiting game for the relevant teams to uptake the latest BIP release. Look out for more soon with some explanation of how they work and how to put them together!
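For context on where the aggregate elements in that sample (TOTAL_EMPS, TOTAL_SALARY and friends) come from, a data template query along the following lines would produce that shape of data from the standard HR sample schema. This is a sketch inferred from the sample XML; the exact query behind the post is not shown, so the details are assumptions:

    -- Employees per department with the aggregates seen in the sample XML
    SELECT d.department_id,
           d.department_name,
           e.employee_id,
           e.first_name || ' ' || e.last_name AS emp_name,
           e.email,
           e.phone_number,
           e.hire_date,
           e.salary,
           COUNT(*)      OVER (PARTITION BY d.department_id) AS total_emps,
           SUM(e.salary) OVER (PARTITION BY d.department_id) AS total_salary,
           AVG(e.salary) OVER (PARTITION BY d.department_id) AS avg_salary,
           MAX(e.salary) OVER (PARTITION BY d.department_id) AS max_salary,
           MIN(e.salary) OVER (PARTITION BY d.department_id) AS min_salary
    FROM   hr.departments d
    JOIN   hr.employees   e ON e.department_id = d.department_id
    ORDER  BY d.department_id;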

    Read the article

  • A Review of the XXI Congress of the Oracle User Community

    - by Fabian Gradolph
    The XXI edition of the CUORE Congress (Comunidad de Usuarios de Oracle, the Oracle User Community) closed last Wednesday after two intense days of conferences, workshops, meetings, and round tables. The more than 600 attendees are a good indication of the great interest that Oracle's technology proposals generate among our customers. Big Data and the utilities sector were two of the main protagonists of the Congress. The event was opened by Félix del Barrio (second from the left in the second photo), general manager of Oracle in Spain. A good part of the event, Tuesday morning, was devoted to Big Data, with Andrew Sutherland, Senior Vice President of Technology for Oracle in EMEA, giving the main presentation, followed by specific sessions on the technologies needed in the different phases of Big Data projects (acquiring the data, organizing it, analyzing it and, finally, making the corresponding business decisions). We will not dwell on explaining what Big Data is, a topic we have covered previously on this blog (here and here), but it is worth highlighting a point that Andrew Sutherland raised in a meeting with journalists: Big Data projects make full sense when they help us modify business processes and models in ways that increase the organization's effectiveness. If our organization is based on rigid, immutable processes (which essentially has to do with the kind of applications in place), the benefit we can extract from Big Data will be limited. In other words, Big Data is a driver of change in organizations. The challenges facing a sector like energy occupied the second day of the Congress. Industry trends such as smart grids, smart metering, the entry of new players and distributors into the market, the atomization of operators, and frozen investments form the landscape taking shape for companies in the utilities sector. In addition to the big events (Big Data and Oracle Utilities Day), the two days of the Congress allowed those Oracle partners who wished to do so to obtain free certification for their professionals in several examination sessions. Additionally, parallel sessions were held on technologies and strategic visions, product demonstrations, and success stories. In short, the outcome of the XXI CUORE Congress is very positive for Oracle, for our customers, and for our partners. We look forward to seeing you all next year.

    Read the article

  • Partition Wise Joins

    - by jean-pierre.dijcks
    Some say they are the holy grail of parallel computing: PWJ is the basis for a shared nothing system and the only join method that is available on a shared nothing system (yes, this is oversimplified!). The magic in Oracle is of course that it is one of many ways to join data. And yes, this is the old flexibility vs. simplicity discussion all over, so I won't go there... the point is that what you must do in a shared nothing system, you can do in Oracle with the same speed and methods.
    The Theory
    A partition wise join is a join between (for simplicity) two tables that are partitioned on the same column with the same partitioning scheme. In shared nothing this is effectively hard partitioning, locating data on a specific node/storage combo. In Oracle it is logical partitioning. If you now join the two tables on that partitioned column, you can break up the join into smaller joins exactly along the partitions in the data. Since they are partitioned (grouped) into the same buckets, all values required to do the join live in the equivalent bucket on either side. No need to talk to anyone else, no need to redistribute data to anyone else... in short, the optimal join method for parallel processing of two large data sets.
    PWJs in Oracle
    Since we do not hard partition the data across nodes in Oracle, we use the Partitioning option to the database to create the buckets, then set the Degree of Parallelism (or run Auto DOP - see here) and get our PWJs. The main questions always asked are: How many partitions should I create? What should my DOP be? In a shared nothing system the answer is of course: as many partitions as there are nodes, which will also be your DOP. In Oracle we do want you to look at the workload and concurrency, and once you know that, to apply the following rules of thumb. Within Oracle we have more ways of joining data, so it is important to understand some of the PWJ ideas and what it means if you have an uneven distribution across processes. Assume we have a simple scenario where we partition the data on a hash key, resulting in 4 hash partitions (H1-H4). We have 2 parallel processes that have been tasked with reading these partitions (P1 and P2). The work is evenly divided, assuming the partitions are the same size, and we can scan this in time t1 as shown below. Now assume that we have changed the system and have a 5th partition but still have our 2 workers P1 and P2. The time it takes is actually 50% more, assuming the 5th partition has the same size as the original H1-H4 partitions. In other words, to scan these 5 partitions, the time t2 it takes is not just 1/5th more expensive; it is a lot more expensive, and some other join plans may now start to look exciting to the optimizer. Just to post the disclaimer, it is not as simple as I state it here, but you get the idea of how much more expensive this plan may now look... Based on this little example there are a few rules of thumb to follow to get partition wise joins. First, choose a DOP that is a power of two: 2, 4, 8, 16, 32 and so on... Second, choose a number of partitions that is greater than or equal to 2 x DOP. Third, make sure the number of partitions is divisible by 2 without orphans, in other words an even number... Fourth, choose a stable partition-count strategy, which is typically hash; this can be a subpartitioning strategy rather than the main strategy (range-hash is a popular one). 
Fifth, make sure you do this on the join key between the two large tables you want to join (and this should be the obvious one...). Translating this into an example: DOP = 8 (determined based on concurrency, or by using Auto DOP with a cap due to concurrency) implies that the number of partitions should be >= 16. A number of hash (sub)partitions of 32 gives each process four partitions to work on. This number is somewhat arbitrary and depends on your data and system. In this case my main reasoning is that if you get more room on the box you can easily move the DOP for the query to 16 without repartitioning... and of course it leaves no leftovers on the table... And yes, we recommend up-to-date statistics. And before you start complaining, do read this post on a cool way to do stats in 11.
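Putting those rules of thumb into DDL, a minimal sketch of two tables equipartitioned on the join key looks like this (table and column names are illustrative assumptions, not from the post):

    -- 32 hash partitions on the join key: the 2 x DOP rule with DOP = 8, plus headroom
    CREATE TABLE customers (
      customer_id NUMBER NOT NULL,
      name        VARCHAR2(100)
    )
    PARTITION BY HASH (customer_id) PARTITIONS 32;

    CREATE TABLE sales (
      sale_id     NUMBER NOT NULL,
      customer_id NUMBER NOT NULL,
      amount      NUMBER
    )
    PARTITION BY HASH (customer_id) PARTITIONS 32;

    -- Joining on the common hash key lets the optimizer use a full partition wise join
    SELECT /*+ PARALLEL(s,8) PARALLEL(c,8) */
           c.customer_id, SUM(s.amount)
    FROM   sales s
    JOIN   customers c ON c.customer_id = s.customer_id
    GROUP  BY c.customer_id;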

    Read the article

  • BizTalk Server Monitoring – SharePoint Web Part

    - by SURESH GIRIRAJAN
    I have worked with customers using BizTalk as shared infrastructure in the enterprise, where two or more BizTalk apps run on it for different business groups. These customers are not using the BizTalk ESB portal, even though they are using the BizTalk ESB exception framework. The main issue for all these business groups is that they have no visibility into the BizTalk apps running in production, even though they have SCOM and other monitoring in place. So I am going to list a few issues below and how I try to mitigate them: how to get visibility into production, how to provision access to the BizTalk resources with minimal effort, and how we can take advantage of the resources we have today. I was working on creating REST data services for BizTalk RFID a year ago (available on CodePlex), and I thought I would extend that idea to take advantage of the BizTalk Data Services available on CodePlex. I have extended the BizTalk data services and will upload the updated service soon. Let me walk through how my solution works. As a first step, I am using the BizTalk data service (a REST service), which exposes most of the BizTalk artifacts as resources, such as applications, orchestrations, send ports, receive ports, host instances, and in-process instances. BizTalk Server Monitoring – SharePoint Web Part I am hosting the BizTalk data service in IIS with the application pool configured to run under BizTalk administrator credentials. With this setup I make the service accessible anonymously. As the next step of this solution, I have created a SharePoint Visual Web Part which consumes the BizTalk data service and displays all the BizTalk application and platform settings in read-only mode, even though BizTalk data services offer the ability to browse resources as well as perform actions like starting and stopping orchestrations, send ports, receive locations, host instances, etc. Host Instances BizTalk Applications BizTalk Running / Suspended Instances This BizTalk Monitoring SharePoint web part will be added to SharePoint. This eliminates the need to grant access to BizTalk users explicitly: when a BizTalk contractor or BizTalk application user needs access to the BizTalk environment, all they need is access to the SharePoint website. You can configure the web part to point to different endpoints based on your environment. I am making this read-only to make things easier for the users and in terms of provisioning. This removes the dependency on BizTalk admins, at least for viewing BizTalk application status and errors. If we need to make any changes to the BizTalk application, then it is the application owner's responsibility to coordinate with the BizTalk admins. There are options like the BizTalk ESB portal, BizTalk360, etc., but this is one approach to reduce the number of steps required to give access to BizTalk application users and also to maximize the resources we have in the enterprise today. You can also expose this data service through Azure Service Bus and access it from other apps like mobile devices, or create a web site hosted in Azure, etc. One last thing: I have tested this only with BizTalk Server 2010 on an x64 VM, but it should work on other versions. I will try to upload the code shortly with instructions on how to set it up… I welcome thoughts and suggestions… Hope this helps….

    Read the article

  • SQL SERVER – CXPACKET – Parallelism – Advanced Solution – Wait Type – Day 7 of 28

    - by pinaldave
    Earlier we discussed the common solutions to the issue of CXPACKET wait time. Today I am going to talk about a few other suggestions which can help reduce CXPACKET waits. If you are going to suggest that I should focus on MAXDOP and COST THRESHOLD – I totally agree; I covered them in detail in yesterday's blog post. Today we are going to discuss a few other ways CXPACKET waits can be reduced.
    Potential Reasons: If data is heavily skewed, there are chances that the query optimizer may misestimate the amount of data, leading it to assign fewer threads to the query. This can easily lead to an uneven workload across threads and may create CXPACKET waits. While retrieving the data, one of the threads may face an IO, memory, or CPU bottleneck and have to wait for those resources to execute its tasks, which may create CXPACKET waits as well. The data being retrieved may sit on IO subsystems of different speeds (this is not common and hardly possible, but there is a chance). Higher fragmentation in some areas of the table can mean less data per page, which may lead to CXPACKET waits. As I said, the reasons mentioned here are not the major causes of CXPACKET waits, but any of these scenarios can contribute to the wait time.
    Best Practices to Reduce CXPACKET wait: Refer to the earlier article regarding MAXDOP and Cost Threshold. Defragmenting indexes can help, as more data can be obtained per page (assuming close to 100 fill factor). If data is in multiple files which are on multiple physical drives of similar speed, CXPACKET waits may reduce. Keep statistics updated, as this gives the query optimizer better estimates when assigning threads and dividing the data among the available threads. Updating statistics can significantly improve the query optimizer's ability to render a proper execution plan, which may affect the parallelism process in a positive way overall.
    Bad Practice: In one recent consultancy project, when I was called in I noticed that an 'experienced' DBA had seen high CXPACKET waits and, to reduce them, had increased the number of worker threads. In reality, increasing worker threads led to many other issues. With a greater number of threads, more memory was used, leading to memory pressure; and with more threads, the CPU scheduler faced more context switching, further degrading performance. When I explained all this to the 'experienced' DBA, he suggested that we should now reduce the number of threads. Not really! A lower number of threads may create heavy stalling for parallel queries. I suggest NOT touching the number-of-threads setting when dealing with CXPACKET waits. Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and I in no way claim it to be accurate. I suggest reading Books On-Line for further clarification. All the discussion of Wait Stats over here is generic and it varies from system to system. You are recommended to test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
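To check whether CXPACKET actually dominates on a system before tuning, and to apply the statistics advice above, something along these lines can be used. This is a sketch; the table in the UPDATE STATISTICS line is a placeholder, not from the post:

    -- How much CXPACKET wait has accumulated since the last restart/stats reset
    SELECT wait_type,
           wait_time_ms,
           waiting_tasks_count
    FROM   sys.dm_os_wait_stats
    WHERE  wait_type = 'CXPACKET';

    -- Keep statistics current so the optimizer divides work evenly across threads
    UPDATE STATISTICS dbo.YourLargeTable WITH FULLSCAN; -- placeholder table name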

    Read the article

  • SQL SERVER – Understanding XML – Contest Win Joes 2 Pros Combo (USD 198) – Day 5 of 5

    - by pinaldave
    In August 2011 we ran a contest where every day we gave away one book, for an entire month. The contest was extremely successful: lots of people participated and lots of books were given away. I have received lots of questions asking if we are doing something similar this month. Absolutely; instead of running a month-long contest, we are doing something more interesting. We are giving away gifts worth USD 198 every day this week. We are giving away the Joes 2 Pros 5-volume (BOOK) SQL 2008 Development Certification Training Kit every day: one copy in India and one in the USA, two giveaways in total (worth USD 198). All the gifts are sponsored by Koenig Training Solution and Joes 2 Pros. The books are available here Amazon | Flipkart | Indiaplaza How to Win: Read the Question Read the Hints Answer the Quiz in the Contact Form in the following format Question Answer Name of the country (The contest is open for USA and India residents only) 2 winners will be randomly selected and announced on August 20th. Question of the Day: Is the following XML a well-formed XML document? <?xml version="1.0"?> <address> <firstname>Pinal</firstname> <lastname>Dave</lastname> <title>Founder</title> <company>SQLAuthority.com</company> </address> a) Yes b) No c) I do not know Query Hints: BIG HINT POST A common observation by people seeing an XML file for the first time is that it looks like just a bunch of data inside a text file. XML files are text-based documents, which makes them easy to read. All of the data is literally spelled out in the document and relies on just a few characters (<, >, =) to convey relationships and structure of the data. XML files can be used by any commonly available text editor, like Notepad. Much like a book's Table of Contents, your first glance at well-formed XML will tell you the subject matter of the data and its general structure. Hints appearing within the data help you to quickly identify the main theme (similar to a book's subject), its headers (similar to chapter titles or sections of a book), data elements (similar to a book's characters or chief topics), and so forth. We'll learn to recognize and use the structural "hints," which are XML's markup components (e.g., XML tags, root elements). The XML Raw and Auto modes are great for displaying data as all attributes or all elements – but not both at once. If you want your XML stream to have some of its data shown in attributes and some shown as elements, then you can use the XML Path mode. If you are using an XML Path stream, then by default all values will be shown as elements. However, it is possible to pick one or more elements to be shown with attributes as well. Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 5. SQL Joes 2 Pros Development Series – OpenXML Options SQL Joes 2 Pros Development Series – Preparing XML in Memory SQL Joes 2 Pros Development Series – Shredding XML SQL Joes 2 Pros Development Series – Using Root With Auto XML Mode SQL Joes 2 Pros Development Series – What is XML? SQL Joes 2 Pros Development Series – What is XML? – 2 Next Step: Answer the Quiz in the Contact Form in the following format Question - Answer Name of the country (The contest is open for USA and India) Bonus Winner Leave a comment with your favorite article from the "additional hints" section and you may be eligible for a surprise gift. There is no country restriction for this Bonus Contest. 
Do mention why you liked any particular blog post, and I will announce the winner of the same along with the main contest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
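To see the XML Path attribute/element mix from the hints in action, here is a minimal T-SQL sketch; the table and column names are placeholders, not from the post:

    -- '@id' renders as an attribute; the other columns render as elements
    SELECT e.EMPLOYEE_ID AS '@id',
           e.EMP_NAME    AS 'Name',
           e.SALARY      AS 'Salary'
    FROM   dbo.Employees AS e   -- placeholder table
    FOR XML PATH('Employee'), ROOT('Employees');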

    Read the article

  • ASP.NET Web API - Screencast series Part 4: Paging and Querying

    - by Jon Galloway
    We're continuing a six-part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools. In Part 2 we started to build up a sample that returns data from a repository in JSON format via GET methods. In Part 3, we modified data on the server using DELETE and POST methods. In Part 4, we'll extend the simple querying methods from Part 2, adding support for paging and querying. This part shows two approaches to querying data (paging really just being a specific querying case) - you can do it yourself using parameters passed in via querystring (as well as headers, other route parameters, cookies, etc.). You're welcome to do that if you'd like. What I think is more interesting here is that Web API actions that return IQueryable automatically support OData query syntax, making it really easy to support some common query use cases like paging and filtering. A few important things to note: This is just support for OData query syntax - you're not getting back data in OData format. The screencast demonstrates this by showing the GET methods are continuing to return the same JSON they did previously. So you don't have to "buy in" to the whole OData thing, you're just able to use the query syntax if you'd like. This isn't full OData query support - full OData query syntax includes a lot of operations and features - but it is a pretty good subset: filter, orderby, skip, and top. All you have to do to enable this OData query syntax is return an IQueryable rather than an IEnumerable. Often, that could be as simple as using the AsQueryable() extension method on your IEnumerable. Query composition support lets you layer queries intelligently. If, for instance, you had an action that showed products by category using a query in your repository, you could also support paging on top of that. The result is an expression tree that's evaluated on-demand and includes both the Web API query and the underlying query. So with all those bullet points and big words, you'd think this would be hard to hook up. Nope, all I did was change the return type from IEnumerable<Comment> to IQueryable<Comment> and convert the Get() method's IEnumerable result using the .AsQueryable() extension method. public IQueryable<Comment> GetComments() { return repository.Get().AsQueryable(); } You still need to build up the query to provide the $top and $skip on the client, but you'd need to do that regardless. Here's how that looks: $(function () { //--------------------------------------------------------- // Using Queryable to page //--------------------------------------------------------- $("#getCommentsQueryable").click(function () { viewModel.comments([]); var pageSize = $('#pageSize').val(); var pageIndex = $('#pageIndex').val(); var url = "/api/comments?$top=" + pageSize + '&$skip=' + (pageIndex * pageSize); $.getJSON(url, function (data) { // Update the Knockout model (and thus the UI) with the comments received back // from the Web API call. 
viewModel.comments(data); }); return false; }); }); And the neat thing is that - without any modification to our server-side code - we can modify the above jQuery call to request the comments be sorted by author: $(function () { //--------------------------------------------------------- // Using Queryable to page //--------------------------------------------------------- $("#getCommentsQueryable").click(function () { viewModel.comments([]); var pageSize = $('#pageSize').val(); var pageIndex = $('#pageIndex').val(); var url = "/api/comments?$top=" + pageSize + '&$skip=' + (pageIndex * pageSize) + '&$orderby=Author'; $.getJSON(url, function (data) { // Update the Knockout model (and thus the UI) with the comments received back // from the Web API call. viewModel.comments(data); }); return false; }); }); So if you want to make use of OData query syntax, you can. If you don't like it, you're free to hook up your filtering and paging however you think is best. Neat. In Part 5, we'll add on support for Data Annotation based validation using an Action Filter.

    Read the article

  • Oracle Database 12c Spatial: Vector Performance Acceleration

    - by Okcan Yasin Saygili-Oracle
    Most business information has a location component, such as customer addresses, sales territories and physical assets. Businesses can take advantage of their geographic information by incorporating location analysis and intelligence into their information systems. This allows organizations to make better decisions, respond to customers more effectively, and reduce operational costs – increasing ROI and creating competitive advantage. Oracle Database, the industry's most advanced database, includes native location capabilities, fully integrated in the kernel, for fast, scalable, reliable and secure spatial and massive graph applications. It is a foundation for deploying enterprise-wide spatial information systems and location-enabled business applications. Developers can extend existing Oracle-based tools and applications, since they can easily incorporate location information directly in their applications, workflows, and services. Spatial Features The geospatial data features of the Oracle Spatial and Graph option support complex geographic information systems (GIS) applications, enterprise applications and location services applications. The Oracle Spatial and Graph option extends the spatial query and analysis features included in every edition of Oracle Database with the Oracle Locator feature, and provides a robust foundation for applications that require advanced spatial analysis and processing in the Oracle Database. It supports all major spatial data types and models, addressing challenging business-critical requirements from various industries, including transportation, utilities, energy, public sector, defense and commercial location intelligence. Network Data Model Graph Features The Network Data Model graph explicitly stores and maintains a persistent data model with network connectivity and provides network analysis capability such as shortest path, nearest neighbors, within cost and reachability. It loads partitioned networks into memory on demand, overcoming the limitations of in-memory analysis. Partitioning massive networks into manageable sub-networks simplifies the network analysis. RDF Semantic Graph Features RDF Semantic Graph has native support for World Wide Web Consortium standards. It has open, scalable, and secure features for storing RDF/OWL ontologies and data; native inference with OWL 2, SKOS and user-defined rules; and querying RDF/OWL data with SPARQL 1.1, Java APIs, and SPARQL graph patterns in SQL. Video: Oracle Spatial and Graph Overview Oracle Spatial is embedded in the Oracle Database product, so we can use the Oracle Universal Installer (OUI). OUI is used to install Oracle Database software; it is a graphical user interface utility that enables you to view the Oracle software that is installed on your machine, install new Oracle Database software, and delete Oracle software that you no longer need to use. Online Help is available to guide you through the installation process. One of the installation options is to create a database. If you select database creation, OUI automatically starts Oracle Database Configuration Assistant (DBCA) to guide you through the process of creating and configuring a database. If you do not create a database during installation, you must invoke DBCA after you have installed the software to create a database. You can also use DBCA to create additional databases. 
For installing Oracle Database 12c you may check the Installing Oracle Database Software and Creating a Database tutorial under the Oracle Database 12c 2-Day DBA Series. You can always check whether Spatial is available in your database using "select comp_id, version, status, comp_name from dba_registry where comp_id='SDO';" One of the most notable improvements with Oracle Spatial and Graph 12c can be seen in performance increases in vector data operations. Enabling the Spatial Vector Acceleration feature (available with the Spatial option) dramatically improves the performance of commonly used vector data operations, such as sdo_distance, sdo_aggr_union, and sdo_inside. With 12c, these operations also run more efficiently in parallel than in prior versions through the use of metadata caching. For organizations that have been facing processing limitations, these enhancements enable developers to make a small set of configuration changes and quickly realize significant performance improvements. Results include improved index performance, enhanced geometry engine performance, optimized secondary filter optimizations for Spatial operators, and improved CPU and memory utilization for many advanced vector functions. Vector performance acceleration is especially beneficial when using Oracle Exadata Database Machine and other large-scale systems. Oracle Spatial and Graph vector performance acceleration builds on general improvements available to all SDO_GEOMETRY operations in these areas: caching of index metadata, concurrent update mechanisms, and optimized spatial predicate selectivity and cost functions. These optimizations enable more efficient use of CPU, memory, and partitioning, resulting in substantial query performance improvements.
Usage
To accelerate the performance of spatial operators, it is recommended that you set the SPATIAL_VECTOR_ACCELERATION database system parameter to the value TRUE. (This parameter is authorized for use only by licensed Oracle Spatial users, and its default value is FALSE.) You can set this parameter for the whole system or for a single session. To set the value for the whole system, do either of the following:
Enter the following statement from a suitably privileged account: ALTER SYSTEM SET SPATIAL_VECTOR_ACCELERATION = TRUE;
Add the following to the database initialization file (xxxinit.ora): SPATIAL_VECTOR_ACCELERATION = TRUE;
To set the value for the current session, enter the following statement from a suitably privileged account: ALTER SESSION SET SPATIAL_VECTOR_ACCELERATION = TRUE;
Check out the complete list of new features on Oracle.com @ http://www.oracle.com/technetwork/database/options/spatialandgraph/overview/index.html Spatial and Graph Data Sheet (PDF) Spatial and Graph White Paper (PDF)
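As a small illustration of the feature in use, the sketch below enables acceleration for the session and runs a parallel proximity query with one of the accelerated operators. The tables and geometry columns are assumptions for illustration, not from the post:

    -- Enable vector acceleration for this session (licensed Spatial option required)
    ALTER SESSION SET SPATIAL_VECTOR_ACCELERATION = TRUE;

    -- Parallel proximity query; tables and columns are placeholders
    SELECT /*+ PARALLEL(4) */ c.customer_id
    FROM   customers c, stores s
    WHERE  s.store_id = 101
    AND    SDO_WITHIN_DISTANCE(c.location, s.geom,
             'distance=10 unit=KM') = 'TRUE';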

    Read the article

  • Install Oracle Configuration Manager's Standalone Collector

    - by Get Proactive Customer Adoption Team
    The Why and the How If you have heard of Oracle Configuration Manager (OCM), but haven't installed it, I'm guessing this is for one of two reasons. Either you don't know how it helps you or you don't know how to install it. I'll address both of those reasons today. First, let's take a quick look at how My Oracle Support and Oracle Configuration Manager work together, to gain a good understanding of their differences and roles before we tackle the install. Oracle Configuration Manager is the tool that actually performs the data collection task. You deploy this lightweight piece of software into your system to collect configuration information about the system, and OCM uploads that data to Oracle's customer configuration repository. Oracle Support Engineers then have the configuration data available when you file a service request. You can also view the data through My Oracle Support. The real value is that the data Oracle Configuration Manager collects can help you avoid problems and get your Service Requests solved more quickly. When you view the information in My Oracle Support's user interface to OCM, it may help you avoid situations that create problems. The proactive tools included in Oracle Configuration Manager help you avoid issues before they occur. You also save time because you didn't need to open a service request. For example, you can use this capability when you need to compare your system configuration at two points in time, or monitor the system health. If you make the configuration data available to Oracle Support Engineers, when you need to open a Service Request the data helps them diagnose and resolve your critical system issues more quickly, which means you get answers more quickly too.
Quick Installation Process Overview
Before we dive into the step-by-step details, let me provide a quick overview. For some of you, this will be all you need. Log in to My Oracle Support and download the data collector from the Collector tab. If you don't see the Collector tab, click the More tab to gain access. On the Collector tab, you will find a drop-down list showing which platforms are available. You can also see more ways the Collector can help you if you click through the carousel of benefits. After you download the software for your platform, use FTP to move that file (.zip) from your PC to the server that hosts the Oracle software. Once you have that file on the server, locate the $ORACLE_HOME directory, and unzip the file within that directory. You can then use the command line tool to start the installation process. The installation process requires the My Oracle Support credential (Support Identifier, username, and password) and the proxy specification (host IP address, port number, username and password).
Installation Step-by-Step
1. Download the collector zip file from My Oracle Support and place it into your $ORACLE_HOME.
2. Unzip the zip file you downloaded from My Oracle Support; this will create a directory named CCR with several subdirectories.
3. Using the command line, go to "$ORACLE_HOME/CCR/bin" and run the command "setupCCR".
4. Provide your My Oracle Support credential: login, password, and Support Identifier.
The installer will start deploying the collector application. You have installed the Collector.
Post Installation
Now that you have installed successfully, the scheduler is ready to collect configuration information for the software available in your Oracle Home. By default, the first collection will take place the day after the installation. 
If you want to run an instrumentation script to start the configuration collection of your Oracle Database server, E-Business Suite, or Enterprise Manager, you will find more details on that in the Installation and Administration Guide for My Oracle Support Configuration Manager. Related documents available on My Oracle Support Oracle Configuration Manager Installation and Administration Guide [ID 728989.5] Oracle Configuration Manager Prerequisites [ID 728473.5] Oracle Configuration Manager Network Connectivity Test [ID 728970.5] Oracle Configuration Manager Collection Overview [ID 728985.5] Oracle Configuration Manager Security Overview [ID 728982.5] Oracle Software Configuration Manager: Disconnected Mode Collection [ID 453412.1]

    Read the article

  • Exploring packages in code

    In my previous post Searching for tasks with code you can see how to explore the control flow side of packages, drilling down through containers, tasks, and event handlers, but it didn't cover the data flow. I recently saw a post on the MSDN forum asking how to edit an existing package programmatically, and the sticking point was how to find the data flow and the components inside. This post builds on some of the previous code and shows how you can explore all objects inside a package. I took the sample Task Search application I'd written previously, and came up with a totally pointless little console application that just walks through the package and writes out the basic type and name of every object it finds, starting with the package itself, e.g. Package – MyPackage. The sample package we used last time showed nested objects as well as an event handler; an OnPreExecute event tucked away on the task SQL In FEL. The output of this sample tool would look like this:

    PackageObjects v1.0.0.0 (1.0.0.26627) Copyright (C) 2009 Konesans Ltd
    Processing File - Z:\Users\Darren Green\Documents\Visual Studio 2005\Projects\SSISTestProject\EventsAndContainersWithExecSQLForSearch.dtsx
    Package - EventsAndContainersWithExecSQLForSearch
    For Loop - FOR Counter Loop
    Task - SQL In Counter Loop
    Sequence Container - SEQ For Each Loop Wrapper
    For Each Loop - FEL Simple Loop
    Task - SQL In FEL
    Task - SQL On Pre Execute for FEL SQL Task
    Sequence Container - SEQ Top Level
    Sequence Container - SEQ Nested Lvl 1
    Sequence Container - SEQ Nested Lvl 2
    Task - SQL In Nested Lvl 2
    Task - SQL In Nested Lvl 1 #1
    Task - SQL In Nested Lvl 1 #2
    Connection Manager – LocalHost

    The code is very similar to what we had previously, but there are a couple of extra bits to deal with connections and to look more closely at a task and see if it is a Data Flow task. For connections you just examine the package's Connections collection, as shown in the abridged snippets below. First you can see the call to the ProcessConnections method, followed by the method itself.

    // Load the package file
    Application application = new Application();
    using (Package package = application.LoadPackage(filename, null))
    {
        // Write out the package name
        Console.WriteLine("Package - {0}", package.Name);
        ... More ...
        // Look at the connections
        ProcessConnections(package.Connections);
    }

    private static void ProcessConnections(Connections connections)
    {
        foreach (ConnectionManager connectionManager in connections)
        {
            Console.WriteLine("Connection Manager - {0}", connectionManager.Name);
        }
    }

    What we didn't see in the sample output above was anything to do with the Data Flow, but rest assured the code now handles it too. The following snippet shows how each task is examined to see if it is a Data Flow task, and if so we can then loop through all of the components inside the data flow.

    private static void ProcessTaskHost(TaskHost taskHost)
    {
        if (taskHost == null)
        {
            return;
        }
        Console.WriteLine("Task - {0}", taskHost.Name);
        // Check if the task is a Data Flow task
        MainPipe pipeline = taskHost.InnerObject as MainPipe;
        if (pipeline != null)
        {
            ProcessPipeline(pipeline);
        }
    }

    private static void ProcessPipeline(MainPipe pipeline)
    {
        foreach (IDTSComponentMetaData90 componentMetadata in pipeline.ComponentMetaDataCollection)
        {
            Console.WriteLine("Pipeline Component - {0}", componentMetadata.Name);
            // If you wish to make changes to the component then you should really use the managed wrapper.
            // CManagedComponentWrapper wrapper = componentMetadata.Instantiate();
            // wrapper.SetComponentProperty("PropertyName", "Value");
        }
    }

    Hopefully you can see how we get a reference to the Data Flow task, and then use the ComponentMetaDataCollection to find out what components we have inside the pipeline. If you wanted to know more about the component you could look at the ObjectType or ComponentClassID properties. After that it gets a bit harder and you should get a reference to the wrapper object as the comment suggests and start using the properties, just like you would in the create packages samples; see our Code Development category for some of these examples. Download Sample code project PackageObjects.zip (5KB)

    Read the article

  • Career in Mobile Software/Application Development [closed]

    - by pramod
    I am planning to take a course on Wireless & Mobile Computing. The syllabus is given below. Please check it and let me know whether it is worth doing, and what the job prospects are afterwards. I am a fresher with an Electronics Engineering background. The modules are: *Wireless and Mobile Computing (WiMC) – Modules* C, C++ Programming and Data Structures 100 Hours C Revision C, C++ programming tools on Linux (Vi editor, gdb etc.) OOP concepts Programming constructs Functions Access Specifiers Classes and Objects Overloading Inheritance Polymorphism Templates Data Structures in C++ Arrays, Stacks, Queues, Linked Lists (Singly, Doubly, Circular) Trees, Threaded Trees, AVL Trees Graphs, Sorting (Bubble, Quick, Heap, Merge) System Development Methodology 18 Hours Software life cycle and various life cycle models Project Management Software: A Process Various Phases in s/w Development Risk Analysis and Management Software Quality Assurance Introduction to Coding Standards Software Project Management Testing Strategies and Tactics Project Management and Introduction to Risk Management Java Programming 110 Hours Data Types, Operators and Language Constructs Classes and Objects, Inner Classes and Inheritance Inheritance Interface and Package Exceptions Threads Java.lang Java.util Java.awt Java.io Java.applet Java.swing XML, XSL, DTD Java n/w programming Introduction to servlet Mobile and Wireless Technologies 30 Hours Basics of Wireless Technologies Cellular Communication: Single cell systems, multi-cell systems, frequency reuse, analog cellular systems, digital cellular systems GSM standard: Mobile Station, BTS, BSC, MSC, SMS server, call processing and protocols CDMA standard: spread spectrum technologies, 2.5G and 3G Systems: HSCSD, GPRS, W-CDMA/UMTS, 3GPP and international roaming, Multimedia services CDMA based cellular mobile communication systems Wireless Personal Area Networks: Bluetooth, IEEE 802.11a/b/g standards Mobile Handset Device Interfacing: Data Cables, IrDA, Bluetooth, Touch-Screen Interfacing Wireless Security, Telemetry Java Wireless Programming and Applications Development (J2ME) 100 Hours J2ME Architecture The CLDC and the KVM Tools and Development Process Classification of CLDC Target Devices CLDC Collections API CLDC Streams Model MIDlets MIDlet Lifecycle MIDP Programming MIDP Event Architecture High-Level Event Handling Low-Level Event Handling The CLDC Streams Model The CLDC Networking Package The MIDP Implementation Introduction to WAP, WML Script and XHTML Introduction to Multimedia Messaging Services (MMS) Symbian Programming 60 Hours Symbian OS basics Symbian OS services Symbian OS organization GUI approaches ROM building Debugging Hardware abstraction Base porting Symbian OS reference design porting File systems Overview of Symbian OS Development – DevKits, CustKits and SDKs CodeWarrior Tool Application & UI Development Client Server Framework ECOM STDLIB in Symbian iPhone Programming 80 Hours Introducing iPhone core specifications Understanding iPhone input and output Designing web pages for the iPhone Capturing iPhone events Introducing the webkit CSS transforms transitions and animations Using iUI for web apps Using Canvas for web apps Building web apps with Dashcode Writing Dashcode programs Debugging iPhone web pages SDK programming for web developers An introduction to object-oriented programming Introducing the iPhone OS Using Xcode and Interface builder Programming with the SDK Toolkit OS Concepts & Linux Programming 60 Hours Operating System Concepts What is an OS? 
Processes Scheduling & Synchronization Memory management Virtual Memory and Paging Linux Architecture Programming in Linux Linux Shell Programming Writing Device Drivers Configuring and Building GNU Cross-tool chain Configuring and Compiling Linux Virtual File System Porting Linux on Target Hardware WinCE.NET and Database Technology 80 Hours Execution Process in .NET Environment Language Interoperability Assemblies Need of C# Operators Namespaces & Assemblies Arrays Preprocessors Delegates and Events Boxing and Unboxing Regular Expression Collections Multithreading Programming Memory Management Exceptions Handling Win Forms Working with database ASP .NET Server Controls and client-side scripts ASP .NET Web Server Controls Validation Controls Principles of database management Need of RDBMS etc Client/Server Computing RDBMS Technologies Codd’s Rules Data Models Normalization Techniques ER Diagrams Data Flow Diagrams Database recovery & backup SQL Android Application 80 Hours Introduction of android Why develop for android Android SDK features Creating android activities Fundamental android UI design Intents, adapters, dialogs Android Technique for saving data Data base in Androids Maps, Geocoding, Location based services Toast, using alarms, Instant messaging Using blue tooth Using Telephony Introducing sensor manager Managing network and wi-fi connection Advanced androids development Linux kernel security Implement AIDL Interface. Project 120 Hours

    Read the article
