Search Results


  • How can I write JavaScript cookies to keep my form data persistent after page reloads?

    - by Johhny Thero
    Hello, I am trying to learn how to write cookies to keep the data in my CookieButton1 button persistent so it survives refreshes and page reloads. How can I do this in JavaScript? I have supplied my source code. Any advice, links or tutorials would be very helpful. If you navigate to http://iqlusion.net/test.html and click on Empty1, it will start to ask you questions. When finished, it stores everything in CookieButton1, but when I refresh my browser the data resets and goes away. Thanks!

        <html>
        <head>
        <title>no_cookies</title>
        </head>
        <script type="text/javascript">
        var Can1Set = "false";

        function Can1() {
          if (Can1Set == "false") {
            Can1Title = prompt("What do you want to name this new canned response?","");
            Can1State = prompt("Enter a ticket state (open or closed)","closed");
            Can1Response = prompt("Enter the canned response:","");
            Can1Points = prompt("What point percentage do you want to assign? (0-10)","2.5");

            // Set the "Empty 1" button text to the new name the user specified
            document.CookieTest.CookieButton1.value = Can1Title;

            // Set the cookie here, and then set the Can1Set variable to true
            document.CookieTest.CookieButton1 = "CookieButton1";
            Can1Set = true;
          } else {
            document.TestForm.TestStateDropDownBox.value = Can1State;
            document.TestForm.TestPointsDropDownBox.value = Can1Points;
            document.TestForm.TestTextArea.value = Can1Response;
            // document.TestForm.submit();
          }
        }
        </script>

        <form name=TestForm>
        State:
        <select name=TestStateDropDownBox>
          <option value=new selected>New</option>
          <option value=open selected>Open</option>
          <option value=closed>Closed</option>
        </select>
        Points:
        <select name=TestPointsDropDownBox>
          <option value=1>1</option> <option value=1.5>1.5</option>
          <option value=2>2</option> <option value=2.5>2.5</option>
          <option value=3>3</option> <option value=3.5>3.5</option>
          <option value=4>4</option> <option value=4.5>4.5</option>
          <option value=5>5</option> <option value=5.5>5.5</option>
          <option value=6>6</option> <option value=6.5>6.5</option>
          <option value=7>7</option> <option value=7.5>7.5</option>
          <option value=8>8</option> <option value=8.5>8.5</option>
          <option value=9>9</option> <option value=9.5>9.5</option>
          <option value=10>10</option>
        </select>
        <p>
        Ticket information:<br>
        <textarea name=TestTextArea cols=50 rows=7></textarea>
        </form>

        <form name=CookieTest>
        <input type=button name=CookieButton1 value="Empty 1" onClick="javascript:Can1()">
        </form>


  • Azure - Part 4 - Table Storage Service in Windows Azure

    - by Shaun
    In the Windows Azure platform there are three storage services we can use to save our data in the cloud: Table, Blob and Queue. Before the Chinese New Year, Microsoft announced that Azure SDK 1.1 had been released with support for a new type of storage, Drive, which allows us to operate on NTFS files in the cloud. I will cover it in the coming posts, but for now I would like to talk a bit about the Table Storage.

    Concept of the Table Storage Service

    The most common development scenario is to retrieve, create, update and remove data from a data store; normally we communicate with a database. When we move an application to the cloud, the most common requirement is a storage service. Windows Azure provides a built-in service that allows us to store structured data, called the Windows Azure Table Storage Service. The data stored in the table service is a collection of entities, and an entity is similar to a row or record in a traditional database. An entity has a partition key, a row key, a timestamp and a set of properties. You can treat the partition key as a group name, the row key as a primary key, and the timestamp as the identifier for resolving concurrency conflicts. Unlike a table in a database, the table service does not enforce a schema for its tables, which means you can have two entities in the same table with different property sets.

    The partition key is used by the Azure OS for load balancing and for grouping entity transactions. As you know, in the cloud you never know which machine is hosting your application and your data; they can be moved around based on transaction weight and the number of requests. If the Azure OS finds that many requests target your Book entities whose partition key equals "Novel", it can move them to another idle machine to increase performance. So when choosing a partition key for your entities, make sure it indicates the category or group information, so that the Azure OS can perform load balancing as you wish.

    Consuming the Table

    Although the table service looks like a database, you cannot access it the way you are used to, through ADO.NET or ODBC. The table service exposes itself through the ADO.NET Data Services protocol, which lets you consume it in a RESTful style over HTTP requests. The Azure SDK provides a set of classes to connect to it. There are two classes we are likely to need: TableServiceContext and TableServiceEntity. TableServiceContext inherits from DataServiceContext, which represents the runtime context of an ADO.NET data service. It provides four methods we mainly use:

    CreateQuery: creates an IQueryable instance for a given entity type.
    AddObject: adds the specified entity to the table service.
    UpdateObject: updates an existing entity in the table service.
    DeleteObject: deletes an entity from the table service.

    Before you operate on the table service you need to provide valid account information. It's something like a database connection string, but built from the account name and account key you received when you created the storage service on the Windows Azure Developer Portal. After getting the CloudStorageAccount you can create a CloudTableClient instance, which provides a set of methods for using the table service. A very useful one is CreateTableIfNotExist: it creates the table container for you if it does not already exist.
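    In code, that setup might look something like the sketch below; this is a minimal illustration assuming the local development storage account and an example table named "Account" (the post's own code appeared in screenshots that are not reproduced here):

        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        public static class TableSetup
        {
            public static CloudTableClient Connect()
            {
                // "UseDevelopmentStorage=true" points at the local development storage fabric.
                CloudStorageAccount storageAccount =
                    CloudStorageAccount.Parse("UseDevelopmentStorage=true");

                CloudTableClient tableClient = storageAccount.CreateCloudTableClient();

                // Creates the "Account" table container only if it does not already exist.
                tableClient.CreateTableIfNotExist("Account");
                return tableClient;
            }
        }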
    You can then operate on the entities in that table through the methods I mentioned above. Let me explain a bit more through an example; we always prefer code to sentences.

    Straightforward Access to the Table

    Here I would like to build a WCF service on the Windows Azure platform with, for now, just one requirement: allow the client to create an account entity in the table service. The WCF service has a method named Register that accepts an instance of the account the client wants to create; after performing some validation it adds the entity to the table service. So the first thing I did was create a Cloud Application in my Visual Studio 2010 RC. (The Azure SDK 1.1 only supports VS2008 and VS2010 RC.) Then I added a configuration item for the storage account through the Settings section under the cloud project. (Double-click the Services file under the Roles folder and navigate to the Settings section.) This setting is used to retrieve my storage account information; since I am still in the development phase, I selected "UseDevelopmentStorage=true". Then I navigated to the WebRole.cs file in my WCF project. If you have read my previous posts you know this file defines what happens when the application starts and terminates in the cloud. What I need to do there is set the configuration publisher, when the application starts, to load my config file with the config name I specified. I removed the original service and contract created by the VS template and added my IAccountService contract and its implementation class, AccountService, with the service method Register taking the parameters email and password and returning a boolean value to indicate the result, which is very simple. At this moment, if I press F5 the application is deployed to my local development fabric and I can see through the browser that my service runs well.

    Let's implement the service method Register, which adds a new entity to the table service. As I said before, the entities you want to store in the table service must have three properties: partition key, row key and timestamp. You could create a class with these three properties yourself, but the Azure SDK provides a base class for that, named TableServiceEntity, in the Microsoft.WindowsAzure.StorageClient namespace. So our job is simpler: create a class named Account derived from TableServiceEntity and add my own properties: Email, Password, DateCreated and DateDeleted. DateDeleted is a nullable date-time value indicating whether this entity has been deleted and when. Did you notice that I missed something here? Yes, the partition key and row key have not been assigned. The TableServiceEntity base class defines two constructors: a parameterless one, used to fill values into the properties from the table service when retrieving data, and one with two parameters, partition key and row key. As I said above, the partition key may affect load balancing and the row key must be unique, so here I use the email as the partition key and the email plus a Guid as the row key. OK, now we have finished the entity class we need to store in the table service. The next step is to create a data access class to add it. The Azure SDK gives us a base class for this as well, named TableServiceContext, as I mentioned above. So let's create a class to operate on the Account entities.
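    Based on the description above, the entity and context classes might look roughly like the following sketch; the design (email as partition key, email plus a Guid as row key, an Add method built on AddObject) follows the post, while the exact member layout is an assumption since the original code was shown in screenshots:

        using System;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        // Entity: partition key, row key and timestamp come from TableServiceEntity.
        public class Account : TableServiceEntity
        {
            public Account() { }  // used by the runtime when materializing query results

            public Account(string email)
                : base(email, email + "_" + Guid.NewGuid().ToString())  // partition key / row key
            {
                Email = email;
                DateCreated = DateTime.UtcNow;
            }

            public string Email { get; set; }
            public string Password { get; set; }
            public DateTime DateCreated { get; set; }
            public DateTime? DateDeleted { get; set; }
        }

        // Data access class built on TableServiceContext.
        public class AccountDataContext : TableServiceContext
        {
            private const string TableName = "Account";  // same as the entity class name

            public AccountDataContext(CloudStorageAccount account)
                : base(account.TableEndpoint.AbsoluteUri, account.Credentials)
            {
                // Make sure the table container exists before any operation.
                account.CreateCloudTableClient().CreateTableIfNotExist(TableName);
            }

            public void Add(Account account)
            {
                AddObject(TableName, account);
                SaveChanges();  // actually sends the insert to the table service
            }
        }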
    The TableServiceContext needs the storage account information for its constructor: the combination of the storage service URI that we create on the Windows Azure platform and the relevant account name and key. The TableServiceContext uses this information to find the related address and verify the account when operating on the storage entities. Hence in my AccountDataContext class I override this constructor and pass the storage account into it. All entities are saved in table storage in one or more tables, which we call "table containers". Before we operate on an entity we need to make sure the table container has been created in storage; there's a method we can use for that: CloudTableClient.CreateTableIfNotExist. So in the constructor I perform it first, to make sure every method is invoked only after the table has been created. Notice that I passed the storage account endpoint URI and the credentials to specify where my storage is located and who I am. One more piece of advice: make your entity class name the same as the table name when you create the table. It will increase performance when you operate on it in the cloud, especially when querying.

    Since the Register WCF method adds a new account to the table service, I created a corresponding method to add the account entity. Before implementing it, I had to add a reference to System.Data.Services.Client to the project. This assembly provides the common ADO.NET Data Services methods that can be used with the Windows Azure Table Service; I use its AddObject method to create my account entity. Note that the table service does not fully implement ADO.NET Data Services, so some methods in System.Data.Services.Client are not supported by TableServiceContext, such as AddLinks. Then I implemented the service method to add the account entity through the AccountDataContext. In the service implementation I load the storage account information through my configuration file and create the account table entity from the parameters. Then I create the AccountDataContext; the first time this method is invoked, the AccountDataContext constructor creates the table container for me. Finally I use the Add method to add the account entity to the table.

    Next, let's create a fairly simple client application to test this service. I created a Windows console application and added a service reference to my WCF service. The metadata of a WCF service cannot be retrieved when it's deployed on Windows Azure, even if <serviceMetadata httpGetEnabled="true"/> has been set; if we need the metadata we can deploy the service on the local development fabric and then change the endpoint to the address in the cloud. In the client-side app.config file I specified the endpoint as the local development fabric address, and then implemented the client to let me input an email and a password and invoke the WCF service to add my account. Let's run the application and see the result. Of course it returns TRUE, and in the local SQL Express I can see that the data has been saved in the table.

    Summary

    In this post I explained more about the Windows Azure Table Storage Service. I also created a small application to demonstrate how to connect to it and consume it through the ADO.NET Data Services managed library provided with the Azure SDK. I only showed how to create an entity in the storage service.
    In the next post I will explain how to query the entities with conditions through LINQ. I would also like to refactor my AccountDataContext class to make it dynamic for any kind of entity.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Using a WPF ListView as a DataGrid

    - by psheriff
    Many people like to view data in a grid format of rows and columns. WPF did not come with a data grid control that automatically creates rows and columns for you based on the object you pass it; however, the WPF Toolkit, which can be downloaded from CodePlex.com, does contain a DataGrid control. This DataGrid gives you the ability to pass it a DataTable or a collection class, and it will automatically figure out the columns or properties, create all the columns, and display the data. The DataGrid control also supports editing and many other features that you might not always need, which means the DataGrid takes a little more time to render the data. If you just want to display data (see Figure 1) in a grid format, a ListView works quite well for this task. Of course, you will need to create the columns for the ListView, but with just a little generic code you can create the columns on the fly, just like the WPF Toolkit's DataGrid.

    Figure 1: A List of Data using a ListView

    A Simple ListView Control

    The XAML below is what you would use to create the ListView shown in Figure 1. The problem with using XAML, however, is that you have to pre-define the columns; you cannot re-use this ListView except for "Product" data.

        <ListView x:Name="lstData" ItemsSource="{Binding}">
          <ListView.View>
            <GridView>
              <GridViewColumn Header="Product ID" Width="Auto"
                              DisplayMemberBinding="{Binding Path=ProductId}" />
              <GridViewColumn Header="Product Name" Width="Auto"
                              DisplayMemberBinding="{Binding Path=ProductName}" />
              <GridViewColumn Header="Price" Width="Auto"
                              DisplayMemberBinding="{Binding Path=Price}" />
            </GridView>
          </ListView.View>
        </ListView>

    So, instead of creating the GridViewColumns in XAML, let's learn to create them in code so any number of columns can be created in a ListView.

    Create GridViewColumns From a Data Table

    To display multiple columns in a ListView control you need to set its View property to a GridView collection object. You add GridViewColumn objects to the GridView collection and assign the GridView to the View property. Each GridViewColumn object needs to be bound to a column or property name of the object that the ListView will be bound to. An ADO.NET DataTable object contains a collection of columns, and these columns have a ColumnName property which you use to bind to the GridViewColumn objects. Listing 1 shows a sample that reads an XML file into a DataSet object. After reading the data, a GridView object is created. You can then loop through the DataTable columns collection and create a GridViewColumn object for each column in the DataTable. Notice that the DisplayMemberBinding property is set to a new Binding to the ColumnName in the DataTable.
    C#

        private void FirstSample()
        {
          // Read the data
          DataSet ds = new DataSet();
          ds.ReadXml(GetCurrentDirectory() + @"\Xml\Product.xml");

          // Create the GridView
          GridView gv = new GridView();

          // Create the GridView Columns
          foreach (DataColumn item in ds.Tables[0].Columns)
          {
            GridViewColumn gvc = new GridViewColumn();
            gvc.DisplayMemberBinding = new Binding(item.ColumnName);
            gvc.Header = item.ColumnName;
            gvc.Width = Double.NaN;
            gv.Columns.Add(gvc);
          }

          // Setup the GridView Columns
          lstData.View = gv;
          // Display the Data
          lstData.DataContext = ds.Tables[0];
        }

    VB.NET

        Private Sub FirstSample()
          ' Read the data
          Dim ds As New DataSet()
          ds.ReadXml(GetCurrentDirectory() & "\Xml\Product.xml")

          ' Create the GridView
          Dim gv As New GridView()

          ' Create the GridView Columns
          For Each item As DataColumn In ds.Tables(0).Columns
            Dim gvc As New GridViewColumn()
            gvc.DisplayMemberBinding = New Binding(item.ColumnName)
            gvc.Header = item.ColumnName
            gvc.Width = [Double].NaN
            gv.Columns.Add(gvc)
          Next

          ' Setup the GridView Columns
          lstData.View = gv
          ' Display the Data
          lstData.DataContext = ds.Tables(0)
        End Sub

    Listing 1: Loop through the DataTable columns collection to create GridViewColumn objects

    A Generic Method for Creating a GridView

    Instead of having to write the code shown in Listing 1 for each ListView you wish to create, you can create a generic method that, given any DataTable, will return a GridView column collection. Listing 2 shows how you can simplify the code in Listing 1 by setting up a class called WPFListViewCommon with a method called CreateGridViewColumns that returns your GridView.

    C#

        private void DataTableSample()
        {
          // Read the data
          DataSet ds = new DataSet();
          ds.ReadXml(GetCurrentDirectory() + @"\Xml\Product.xml");

          // Setup the GridView Columns
          lstData.View = WPFListViewCommon.CreateGridViewColumns(ds.Tables[0]);
          lstData.DataContext = ds.Tables[0];
        }

    VB.NET

        Private Sub DataTableSample()
          ' Read the data
          Dim ds As New DataSet()
          ds.ReadXml(GetCurrentDirectory() & "\Xml\Product.xml")

          ' Setup the GridView Columns
          lstData.View = WPFListViewCommon.CreateGridViewColumns(ds.Tables(0))
          lstData.DataContext = ds.Tables(0)
        End Sub

    Listing 2: Call a generic method to create GridViewColumns.

    The CreateGridViewColumns Method

    The CreateGridViewColumns method takes a DataTable as a parameter and creates a GridView object with a GridViewColumn object in its collection for each column in your DataTable.

    C#

        public static GridView CreateGridViewColumns(DataTable dt)
        {
          // Create the GridView
          GridView gv = new GridView();
          gv.AllowsColumnReorder = true;

          // Create the GridView Columns
          foreach (DataColumn item in dt.Columns)
          {
            GridViewColumn gvc = new GridViewColumn();
            gvc.DisplayMemberBinding = new Binding(item.ColumnName);
            gvc.Header = item.ColumnName;
            gvc.Width = Double.NaN;
            gv.Columns.Add(gvc);
          }

          return gv;
        }

    VB.NET

        Public Shared Function CreateGridViewColumns(ByVal dt As DataTable) As GridView
          ' Create the GridView
          Dim gv As New GridView()
          gv.AllowsColumnReorder = True

          ' Create the GridView Columns
          For Each item As DataColumn In dt.Columns
            Dim gvc As New GridViewColumn()
            gvc.DisplayMemberBinding = New Binding(item.ColumnName)
            gvc.Header = item.ColumnName
            gvc.Width = [Double].NaN
            gv.Columns.Add(gvc)
          Next

          Return gv
        End Function

    Listing 3: The CreateGridViewColumns method takes a DataTable and creates GridViewColumn objects in a GridView.
    By separating this method out into a class, you can call it any time you want to create a ListView with a collection of columns from a DataTable.

    Summary

    In this blog you learned how to create a ListView that acts like a DataGrid. You are able to use a DataTable as both the source of the data and the source for creating the columns for the ListView. In the next blog entry you will learn how to use the same technique, but for collection classes.

    NOTE: You can download the complete sample code (in both VB and C#) at my website: http://www.pdsa.com/downloads. Choose Tips & Tricks, then "WPF ListView as a DataGrid" from the drop-down.

    Good luck with your coding,
    Paul Sheriff

    ** SPECIAL OFFER FOR MY BLOG READERS **
    Visit http://www.pdsa.com/Event/Blog for a free eBook on "Fundamentals of N-Tier".


  • Convert a Row to a Column in Excel the Easy Way

    - by Matthew Guay
    Sometimes we’ve entered data in a column in Excel, only to realize later that it would be better to have this data in a row, or vice versa. Here’s a simple trick to convert any row or set of rows into a column, or vice versa, in Excel.

    Please Note: This is tested in Excel 2003, 2007, and 2010. The screenshots here are from Excel 2010 x64, but it works the same in the other versions.

    Convert a Row to a Column

    Here’s our data in Excel; we want to change these two columns into rows. Select all the cells you wish to convert, right-click, and select Copy (or simply press Ctrl+C). Now, right-click in the cell where you want to put the data in rows, and select “Paste Special…”. Check the box at the bottom that says “Transpose”, and then click OK. Now your data that was in columns is in rows!

    This works exactly the same for converting rows into columns. After copying some data in rows and pasting special with Transpose selected, the data appears in columns. This is a great way to get your data organized just how you want it in Excel.


  • 3-Tier Architecture in asp.net

    - by Aamir Hasan
    Three-tier (layer) is a client-server architecture in which the user interface, the business process (business rules), and the data storage and data access are developed and maintained as independent modules, most often on separate platforms. Basically, there are three layers: tier 1 (the presentation or GUI tier), tier 2 (business objects, the business logic tier) and tier 3 (the data access tier). These tiers can be developed and tested separately.

    A 3-tier architecture looks like the following:
    1. Presentation Layer
    2. Data Manager Layer
    3. Data Access Layer

    The communication between all these layers needs to be done using business entities.
    1. The Presentation Layer is where the UI comes into the picture.
    2. The Data Manager Layer is where all the manipulative code is written; basically, all the functional code belongs in this layer.
    3. The Data Access Layer is the one which communicates directly with the database.
    Data passed from one layer to another needs to be transformed into entities, as the sketch below illustrates.
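    As a rough illustration of this layering, here is a minimal hypothetical sketch in C#; the class and method names are invented for the example and are not taken from the article:

        using System.Collections.Generic;

        // Business entity shared by all layers (hypothetical example).
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Tier 3: Data Access Layer, talks directly to the database.
        public class CustomerDataAccess
        {
            public List<Customer> GetAll()
            {
                // A real implementation would run a query via ADO.NET or an ORM.
                return new List<Customer> { new Customer { Id = 1, Name = "Sample" } };
            }
        }

        // Tier 2: Data Manager (business logic) Layer, holds the functional code.
        public class CustomerManager
        {
            private readonly CustomerDataAccess _dataAccess = new CustomerDataAccess();

            public List<Customer> GetActiveCustomers()
            {
                // Business rules (filtering, validation, and so on) are applied here.
                return _dataAccess.GetAll();
            }
        }

        // Tier 1: Presentation Layer, calls only the manager, never the database.
        public class CustomerPage
        {
            public void BindGrid()
            {
                List<Customer> customers = new CustomerManager().GetActiveCustomers();
                // Bind 'customers' to a grid or other UI control here.
            }
        }

    Note how the Customer entity is the only type that crosses layer boundaries, which is the role the article assigns to business entities.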


  • Metro: Using Templates

    - by Stephen.Walther
    The goal of this blog post is to describe how templates work in the WinJS library. In particular, you learn how to use a template to display both a single item and an array of items. You also learn how to load a template from an external file.

    Why use Templates?

    Imagine that you want to display a list of products in a page. The following code is bad:

        var products = [
          { name: "Tesla", price: 80000 },
          { name: "VW Rabbit", price: 200 },
          { name: "BMW", price: 60000 }
        ];

        var productsHTML = "";
        for (var i = 0; i < products.length; i++) {
          productsHTML += "<h1>Product Details</h1>"
            + "<div>Product Name: " + products[i].name + "</div>"
            + "<div>Product Price: " + products[i].price + "</div>";
        }
        document.getElementById("productContainer").innerHTML = productsHTML;

    In the code above, an array of products is displayed by creating a for..next loop which loops through each element in the array. A string which represents the list of products is built through concatenation. The code above is a designer's nightmare: you cannot modify the appearance of the list of products without modifying the JavaScript code. A much better approach is to use a template like this:

        <div id="productTemplate">
          <h1>Product Details</h1>
          <div>
            Product Name: <span data-win-bind="innerText:name"></span>
          </div>
          <div>
            Product Price: <span data-win-bind="innerText:price"></span>
          </div>
        </div>

    A template is simply a fragment of HTML that contains placeholders. Instead of displaying a list of products by concatenating together a string, you can render a template for each product.

    Creating a Simple Template

    Let's start by using a template to render a single product. The following HTML page contains a template and a placeholder for rendering the template:

        <!DOCTYPE html>
        <html>
        <head>
          <meta charset="utf-8">
          <title>Application1</title>

          <!-- WinJS references -->
          <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
          <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
          <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>

          <!-- Application1 references -->
          <link href="/css/default.css" rel="stylesheet">
          <script src="/js/default.js"></script>
        </head>
        <body>

          <!-- Product Template -->
          <div id="productTemplate">
            <h1>Product Details</h1>
            <div>
              Product Name: <span data-win-bind="innerText:name"></span>
            </div>
            <div>
              Product Price: <span data-win-bind="innerText:price"></span>
            </div>
          </div>

          <!-- Place where Product Template is Rendered -->
          <div id="productContainer"></div>

        </body>
        </html>

    In the page above, the template is defined in a DIV element with the id productTemplate. The contents of the productTemplate are not displayed when the page is opened in the browser; the contents of a template are automatically hidden when you convert the productTemplate into a template in your JavaScript code. Notice that the template uses data-win-bind attributes to display the product name and price properties. You can use both data-win-bind and data-win-bindsource attributes within a template. To learn more about these attributes, see my earlier blog post on WinJS data binding: http://stephenwalther.com/blog/archive/2012/02/26/windows-web-applications-declarative-data-binding.aspx

    The page above also includes a DIV element named productContainer. The rendered template is added to this element.
    Here’s the code for the default.js script which creates and renders the template:

        (function () {
          "use strict";

          var app = WinJS.Application;

          app.onactivated = function (eventObject) {
            if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
              var product = { name: "Tesla", price: 80000 };
              var productTemplate = new WinJS.Binding.Template(document.getElementById("productTemplate"));
              productTemplate.render(product, document.getElementById("productContainer"));
            }
          };

          app.start();
        })();

    In the code above, a single product object is created with the following line of code:

        var product = { name: "Tesla", price: 80000 };

    Next, the productTemplate element from the page is converted into an actual WinJS template with the following line of code:

        var productTemplate = new WinJS.Binding.Template(document.getElementById("productTemplate"));

    The template is rendered to the productContainer element with the following line of code:

        productTemplate.render(product, document.getElementById("productContainer"));

    The result of this work is that the product details are displayed. Notice that you do not need to call WinJS.Binding.processAll(); the Template render() method takes care of the binding for you.

    Displaying an Array in a Template

    If you want to display an array of products using a template, you simply need to create a loop that iterates through the array, calling the Template render() method for each element:

        (function () {
          "use strict";

          var app = WinJS.Application;

          app.onactivated = function (eventObject) {
            if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
              var products = [
                { name: "Tesla", price: 80000 },
                { name: "VW Rabbit", price: 200 },
                { name: "BMW", price: 60000 }
              ];

              var productTemplate = new WinJS.Binding.Template(document.getElementById("productTemplate"));
              var productContainer = document.getElementById("productContainer");

              var i, product;
              for (i = 0; i < products.length; i++) {
                product = products[i];
                productTemplate.render(product, productContainer);
              }
            }
          };

          app.start();
        })();

    After each product in the array is rendered with the template, the result is appended to the productContainer element. No changes need to be made to the HTML page discussed in the previous section to display an array of products instead of a single product; the same product template can be used in both scenarios.

    Rendering an HTML TABLE with a Template

    When using the WinJS library, you create a template by creating an HTML element in your page. One drawback to this approach is that your templates are part of your HTML page, and in order for your HTML page to validate, the HTML within your templates must also validate. This means, for example, that you cannot enclose a single HTML table row within a template. The following HTML is invalid because you cannot place a TR element directly within the body of an HTML document:

        <!-- Product Template -->
        <tr>
          <td data-win-bind="innerText:name"></td>
          <td data-win-bind="innerText:price"></td>
        </tr>

    This template won't validate because, in a valid HTML5 document, a TR element must appear within a THEAD or TBODY element. Instead, you must create the entire TABLE element in the template.
    The following HTML page illustrates how you can create a template which contains a TR element:

        <!DOCTYPE html>
        <html>
        <head>
          <meta charset="utf-8">
          <title>Application1</title>

          <!-- WinJS references -->
          <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
          <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
          <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>

          <!-- Application1 references -->
          <link href="/css/default.css" rel="stylesheet">
          <script src="/js/default.js"></script>
        </head>
        <body>

          <!-- Product Template -->
          <div id="productTemplate">
            <table>
              <tbody>
                <tr>
                  <td data-win-bind="innerText:name"></td>
                  <td data-win-bind="innerText:price"></td>
                </tr>
              </tbody>
            </table>
          </div>

          <!-- Place where Product Template is Rendered -->
          <table>
            <thead>
              <tr>
                <th>Name</th><th>Price</th>
              </tr>
            </thead>
            <tbody id="productContainer">
            </tbody>
          </table>

        </body>
        </html>

    In the HTML page above, the product template includes TABLE and TBODY elements:

        <!-- Product Template -->
        <div id="productTemplate">
          <table>
            <tbody>
              <tr>
                <td data-win-bind="innerText:name"></td>
                <td data-win-bind="innerText:price"></td>
              </tr>
            </tbody>
          </table>
        </div>

    We discard these elements when we render the template. The only reason that we include the TABLE and TBODY elements in the template is to make the HTML page validate as valid HTML5 markup. Notice that the productContainer (the target of the template) in the page above is a TBODY element; we want to add the rows rendered by the template to the TBODY element in the page.

    The productTemplate is rendered in the default.js file:

        (function () {
          "use strict";

          var app = WinJS.Application;

          app.onactivated = function (eventObject) {
            if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
              var products = [
                { name: "Tesla", price: 80000 },
                { name: "VW Rabbit", price: 200 },
                { name: "BMW", price: 60000 }
              ];

              var productTemplate = new WinJS.Binding.Template(document.getElementById("productTemplate"));
              var productContainer = document.getElementById("productContainer");

              var i, product, row;
              for (i = 0; i < products.length; i++) {
                product = products[i];
                productTemplate.render(product).then(function (result) {
                  row = WinJS.Utilities.query("tr", result).get(0);
                  productContainer.appendChild(row);
                });
              }
            }
          };

          app.start();
        })();

    When the product template is rendered, the TR element is extracted from the rendered template by using the WinJS.Utilities.query() method. Next, only the TR element is added to the productContainer:

        productTemplate.render(product).then(function (result) {
          row = WinJS.Utilities.query("tr", result).get(0);
          productContainer.appendChild(row);
        });

    I discuss the WinJS.Utilities.query() method in depth in a previous blog entry: http://stephenwalther.com/blog/archive/2012/02/23/windows-web-applications-query-selectors.aspx

    When everything gets rendered, the products are displayed in an HTML table. You can see the actual HTML rendered by looking at the Visual Studio DOM Explorer window.

    Loading an External Template

    Instead of embedding a template in an HTML page, you can place your template in an external HTML file. It makes sense to create a template in an external file when you need to use the same template in multiple pages. For example, you might need to use the same product template in multiple pages in your application. The following HTML page does not contain a template.
    It only contains a container that will act as a target for the rendered template:

        <!DOCTYPE html>
        <html>
        <head>
          <meta charset="utf-8">
          <title>Application1</title>

          <!-- WinJS references -->
          <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
          <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
          <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>

          <!-- Application1 references -->
          <link href="/css/default.css" rel="stylesheet">
          <script src="/js/default.js"></script>
        </head>
        <body>

          <!-- Place where Product Template is Rendered -->
          <div id="productContainer"></div>

        </body>
        </html>

    The template is contained in a separate file located at the path /templates/productTemplate.html. Here's the contents of the productTemplate.html file:

        <!-- Product Template -->
        <div id="productTemplate">
          <h1>Product Details</h1>
          <div>
            Product Name: <span data-win-bind="innerText:name"></span>
          </div>
          <div>
            Product Price: <span data-win-bind="innerText:price"></span>
          </div>
        </div>

    Notice that the template file only contains the template and not the standard opening and closing HTML elements; it is an HTML fragment. If you prefer, you can include all of the standard opening and closing HTML elements in your external template, as these elements get stripped away automatically:

        <html>
        <head><title>product template</title></head>
        <body>

          <!-- Product Template -->
          <div id="productTemplate">
            <h1>Product Details</h1>
            <div>
              Product Name: <span data-win-bind="innerText:name"></span>
            </div>
            <div>
              Product Price: <span data-win-bind="innerText:price"></span>
            </div>
          </div>

        </body>
        </html>

    Either approach, using a fragment or using a full HTML document, works fine. Finally, the following default.js file loads the external template, renders the template for each product, and appends the result to the product container:

        (function () {
          "use strict";

          var app = WinJS.Application;

          app.onactivated = function (eventObject) {
            if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
              var products = [
                { name: "Tesla", price: 80000 },
                { name: "VW Rabbit", price: 200 },
                { name: "BMW", price: 60000 }
              ];

              var productTemplate = new WinJS.Binding.Template(null, { href: "/templates/productTemplate.html" });
              var productContainer = document.getElementById("productContainer");

              var i, product;
              for (i = 0; i < products.length; i++) {
                product = products[i];
                productTemplate.render(product, productContainer);
              }
            }
          };

          app.start();
        })();

    The path to the external template is passed to the constructor for the Template class as one of the options:

        var productTemplate = new WinJS.Binding.Template(null, { href: "/templates/productTemplate.html" });

    When a template is contained in a page, you use the first parameter of the WinJS.Binding.Template constructor to represent the template: instead of null, you pass the element which contains the template. When a template is located in an external file, you pass the href for the file as part of the second parameter of the WinJS.Binding.Template constructor.

    Summary

    The goal of this blog entry was to describe how you can use WinJS templates to render either a single item or an array of items to a page. We also explored two advanced topics: you learned how to render an HTML table by extracting the TR element from a template, and you learned how to place a template in an external file.


  • Storing Entity Framework Entities in a Separate Assembly

    - by Anthony Trudeau
    The Entity Framework has been valuable to me since it came out, because it provides a convenient and powerful way to model against my data source in a consistent way. The first versions had some deficiencies, which for me mostly fell into the category of the tight coupling between the model and its resulting object classes (entities). Version 4 of the Entity Framework pretty much solves this with its support for T4 templates, which allow you to implement your entities as self-tracking entities, plain old CLR objects (POCO), et al. Doing this involves either specifying a new code generation template or implementing one yourself. Visual Studio 2010 ships with a self-tracking entities template, and a POCO template is available from the Extension Manager. (Extension Manager is very nice, but it's very easy to waste a bunch of time exploring add-ins. You've been warned.)

    In a current project I wanted to use POCO; however, I didn't want my entities in the same assembly as the context classes. It would be nice if this were automatic, but since it isn't, here are the simple steps to move them. These steps detail moving the entity classes, not the context. The context can be moved in the same way, but I don't see a compelling reason to physically separate the context from my model.

    1. Turn off code generation for the template. To do this, set the Custom Tool property of the entity template file to an empty string (the entity template file will be named something like MyModel.tt).
    2. Expand the tree for the entity template file and delete all of its items. These are the items that were automatically generated when you added the template.
    3. Create a project for your entities (if you haven't already).
    4. Add an existing item and browse to your entity template file, but add it as a link (do not add it directly). Adding it as a link allows the model and the template to stay in sync, while the code generation occurs in the new assembly.


  • USB Hub and Ubuntu

    - by aserwin
    I have a powered 7 port hub connected to my Ubuntu box and it does nothing. The devices (zip drive and web cam) work direct, but aren't recognized through the hub. This worked fine in Windows 7. I can't prove it is the OS because this is a new motherboard and processor. Any advice? EDIT : Output from lsusb -v Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0002 2.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ehci_hcd iProduct 2 EHCI Host Controller iSerial 1 0000:00:12.2 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 5 wHubCharacteristic 0x000a No power switching (usb 1.0) Per-port overcurrent protection bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0503 highspeed power enable connect Port 3: 0000.0100 power Port 4: 0000.0100 power Port 5: 0000.0100 power Device Status: 0x0001 Self Powered Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0002 2.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ehci_hcd iProduct 2 EHCI Host Controller iSerial 1 0000:00:13.2 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 5 wHubCharacteristic 0x000a No power switching (usb 1.0) Per-port overcurrent protection bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0100 power Port 5: 0000.0100 power Device Status: 0x0001 Self Powered Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0002 2.0 root hub bcdDevice 3.02 
iManufacturer 3 Linux 3.2.0-32-generic ehci_hcd iProduct 2 EHCI Host Controller iSerial 1 0000:00:16.2 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x000a No power switching (usb 1.0) Per-port overcurrent protection bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0100 power Device Status: 0x0001 Self Powered Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:12.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 5 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 2 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0100 power Port 5: 0000.0100 power Device Status: 0x0001 Self Powered Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:13.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data 
wMaxPacketSize 0x0002 1x 2 bytes bInterval 255 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 5 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 2 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0100 power Port 5: 0000.0100 power Device Status: 0x0001 Self Powered Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:14.5 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 2 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 2 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Device Status: 0x0001 Self Powered Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:16.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 2 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0303 lowspeed power enable connect Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0100 power Device Status: 0x0001 Self Powered Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 1 Single TT bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 
0x0002 2.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic xhci_hcd iProduct 2 xHCI Host Controller iSerial 1 0000:02:00.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 2 wHubCharacteristic 0x0009 Per-port power switching Per-port overcurrent protection TT think time 8 FS bits bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Device Status: 0x0001 Self Powered Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 3.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 3 bMaxPacketSize0 9 idVendor 0x1d6b Linux Foundation idProduct 0x0003 3.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic xhci_hcd iProduct 2 xHCI Host Controller iSerial 1 0000:02:00.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 31 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 bMaxBurst 0 Hub Descriptor: bLength 12 bDescriptorType 42 nNbrPorts 2 wHubCharacteristic 0x0009 Per-port power switching Per-port overcurrent protection bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere bHubDecLat 0.0 micro seconds wHubDelay 0 nano seconds DeviceRemovable 0x00 Hub Port Status: Port 1: 0000.02a0 5Gbps power Rx.Detect Port 2: 0000.02a0 5Gbps power Rx.Detect Binary Object Store Descriptor: bLength 5 bDescriptorType 15 wTotalLength 15 bNumDeviceCaps 1 SuperSpeed USB Device Capability: bLength 10 bDescriptorType 16 bDevCapabilityType 3 bmAttributes 0x00 Latency Tolerance Messages (LTM) Supported wSpeedsSupported 0x0008 Device can operate at SuperSpeed (5Gbps) bFunctionalitySupport 3 Lowest fully-functional device speed is SuperSpeed (5Gbps) bU1DevExitLat 3 micro seconds bU2DevExitLat 2047 micro seconds Device Status: 0x0001 Self Powered Bus 001 Device 002: ID 04a9:1709 Canon, Inc. PIXMA MP150 Scanner Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x04a9 Canon, Inc. 
idProduct 0x1709 PIXMA MP150 Scanner bcdDevice 1.08 iManufacturer 1 Canon iProduct 2 MP150 iSerial 3 20BC24 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 62 bNumInterfaces 2 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 2mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 3 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x07 EP 7 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x88 EP 8 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x89 EP 9 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 11 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 7 Printer bInterfaceSubClass 1 Printer bInterfaceProtocol 2 Bidirectional iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Bus 007 Device 002: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 8 idVendor 0x046d Logitech, Inc. 
idProduct 0xc517 LX710 Cordless Desktop Laser bcdDevice 38.10 iManufacturer 1 Logitech iProduct 2 USB Receiver iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 59 bNumInterfaces 2 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xa0 (Bus Powered) Remote Wakeup MaxPower 98mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 1 Boot Interface Subclass bInterfaceProtocol 1 Keyboard iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 59 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 1 Boot Interface Subclass bInterfaceProtocol 2 Mouse iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 177 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10 Device Status: 0x0000 (Bus Powered) This is with the powered hub plugged in.

    Read the article

  • Few basic Billing facts

    - by Rajesh Sharma
    Quick basic points on billing:
    - In batch billing, there can be one and ONLY ONE bill for an Account per Bill Cycle. If an Account has already been billed within the current Bill Cycle's window period, it will not be billed again; it is skipped by the Bill Segment generation program as part of the batch eligibility check routine.
    - If an Account does not have any Stopped Service Agreements and you attempt to generate a Bill for that Account for a period for which it was already billed, no Bill Segments are generated and a Pending Bill is created for that Account.
    - If a Pending Bill exists for an Account and was generated from a batch, the Account will be re-billed in the next batch run. In contrast, if a Pending Bill exists for an Account and was generated online, the Account will be skipped in the next batch run of the Account's Bill Cycle.
    - The bill generation source, batch or online, is determined at the DB level as follows (a small code sketch follows below): Batch = CI_BILL.BILL_CYC_CD = {Bill Cycle Code} and CI_BILL.WIN_START_DT = {Window Start Date}; Online = CI_BILL.BILL_CYC_CD = "" and CI_BILL.WIN_START_DT IS NULL. The source can also be read off the Bill page (Batch and Online screenshots omitted here).
    - A closing/final Bill Segment is generated for Stopped Service Agreements; at the DB level this is indicated by CI_BSEG.CLOSING_BSEG_SW = "Y" (also visible on the Bill Segment page; screenshot omitted here).
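    To make the DB-level rule above concrete, here is a minimal C# sketch that classifies a bill as batch- or online-generated. This is an illustration only, not from the original post: it assumes an already opened ADO.NET connection to a CC&B-style database and uses the CI_BILL columns quoted above; the method name and parameter style are assumptions.

        using System.Data;
        using System.Data.Common;

        static string GetBillGenerationSource(DbConnection conn, string billId)
        {
            using (DbCommand cmd = conn.CreateCommand())
            {
                // Batch bills carry a Bill Cycle code and a window start date;
                // online bills have a blank cycle code and a NULL start date.
                cmd.CommandText =
                    "SELECT BILL_CYC_CD, WIN_START_DT FROM CI_BILL WHERE BILL_ID = :billId";
                DbParameter p = cmd.CreateParameter();
                p.ParameterName = ":billId"; // parameter marker syntax depends on the provider
                p.Value = billId;
                cmd.Parameters.Add(p);

                using (DbDataReader reader = cmd.ExecuteReader())
                {
                    if (!reader.Read()) return "No such bill";
                    string cycleCode = reader.IsDBNull(0) ? "" : reader.GetString(0).Trim();
                    bool hasWindowStart = !reader.IsDBNull(1);
                    return (cycleCode.Length > 0 && hasWindowStart) ? "Batch" : "Online";
                }
            }
        }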

    Read the article

  • Using Umbraco's Dropdown Datatype

    - by MightyZot
    In Umbraco, you can think of document types as models and of data types as property types for those models. For example, you may create a document type called "Prices" to represent a page that displays a list of prices, and then a document type called "Price Item" to represent the price list items. A property called "Price" could then represent the price of an item. When you create the Price property, you specify the data type, which in this case might be "Number", indicating that this particular field accepts only numerical values. Similarly, you could create a drop-down list property called "Category", allowing you to categorize the items in your price list. To add items to the drop-down list, you modify the data type definition in the Developer module of the Umbraco administrative utility. Instead of modifying the drop-down data type itself, you should first make a copy by right-clicking on the Data Types node and choosing Create. Give your new data type a name in the Create dialog and click the Create button. In the Edit datatype dialog, change "Render control" to "Dropdown list". To add your list items, simply enter each value into the textbox next to "Add prevalue" and click Save or press the Enter key. Now that you have a new drop-down list data type created, along with its assigned list items, you can use it in your document type definitions just like you would the other data types.
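    Once the property is in place, reading it back in code is a one-liner. A minimal sketch, not from the original article, assuming an Umbraco 4.x-era NodeFactory API; the property aliases ("category", "price") and the node id are hypothetical:

        using umbraco.NodeFactory;

        // Look up a "Price Item" node and read its dropdown and number properties.
        var priceItem = new Node(1234);                            // hypothetical node id
        string category = priceItem.GetProperty("category").Value; // dropdown prevalue text
        decimal price = decimal.Parse(priceItem.GetProperty("price").Value);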

    Read the article

  • Managing JS and CSS for a static HTML web application

    - by Josh Kelley
    I'm working on a smallish web application that uses a little bit of static HTML and relies on JavaScript to load the application data as JSON and dynamically create the web page elements from that. First question: is this a fundamentally bad idea? I'm unclear on how many web sites and web applications completely dispense with server-side generation of HTML. (There are obvious disadvantages of JS-only web apps in the areas of graceful degradation / progressive enhancement and search-engine friendliness, but I don't believe these are an issue for this particular app.) Second question: what's the best way to manage the static HTML, JS, and CSS? For my "development build," I'd like non-minified third-party code, multiple JS and CSS files for easier organization, etc. For the "release build," everything should be minified, concatenated together, etc. If I were doing server-side generation of HTML, it would be easy to have my web framework generate different development versus release HTML, including multiple verbose files versus concatenated minified ones. But given that I'm serving only static HTML, what's the best way to manage this? (I realize I could hack something together with ERB or Perl, but I'm wondering if there are any standard solutions.) In particular, since I'm not doing any server-side HTML generation, is there an easy, semi-standard way of setting up my static HTML so that it contains code like

        <script src="js/vendors/jquery.js"></script>
        <script src="js/class_a.js"></script>
        <script src="js/class_b.js"></script>
        <script src="js/main.js"></script>

    at development time and

        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
        <script src="js/entire_app.min.js"></script>

    for release?

    Read the article

  • Setting user's group and umask has no effect

    - by Andrew Vit
    I'm trying to allow my "deploy" user to have access to files created by www-data: I added "deploy" to the www-data group, and I set the umask to 002. When I run the following commands, I'm not seeing the result I expect:

        deploy@ubuntu-lucid-32-generic:/var/www$ groups
        www-data adm dialout cdrom plugdev lpadmin sambashare admin deploy sysadmin
        deploy@ubuntu-lucid-32-generic:/var/www$ newgrp www-data
        deploy@ubuntu-lucid-32-generic:/var/www$ umask
        0002
        deploy@ubuntu-lucid-32-generic:/var/www$ mkdir test
        deploy@ubuntu-lucid-32-generic:/var/www$ ls -la test
        total 0
        drwxr-xr-x 1 deploy deploy  68 Nov  7 20:37 .
        drwxr-xr-x 1 deploy deploy 476 Nov  7 20:37 ..

    I see that the folder doesn't belong to the www-data group, and that the folder permissions don't have group write (775). Note that the /var/www directory is owned by the deploy user:

        drwxr-xr-x 1 deploy deploy 510 Nov  7 20:45 .

    How can I give www-data selective access to directories? Or, how can I share the /var/www directory with my deploy user? I don't care who owns it, as long as I can write to it, and so can www-data. (Ideally I would set up a directory with SGID access for www-data.)

    Read the article

  • Iva's internship story

    - by anca.rosu
    Hello, my name is Iva and I am a member of the Internship program at Oracle Czech. When I joined Oracle, I initially worked as an Alliances and Channel Marketing Assistant at Oracle Czech Republic, but most recently, I have been working in the Demand Generation Team. I am a student of the Economics and Management Faculty at Czech University of Life Sciences in Prague, specializing in Marketing, Business and Administration. I have recently passed my Bachelor exams. I received the information about Oracle’s Internship opportunity from a friend. I joined Oracle in September 2008 and worked as an Alliances and Channel Marketing Assistant until May 2009. Here I was responsible for the Open Market Model (OMM) and at the same time I was covering communication with Partners, Oracle Events and Team Buildings as well as creating Partner Databases and Reports. At the moment, I support our Demand Generation Team to execute Direct Marketing campaigns in Czech Republic, Slovak Republic and Hungary. In addition to this, I help with Reporting and Contact Data Management for the whole of the European Enlargement (EE) and Commonwealth of Independent States (CIS) regions. I enjoy my job and I appreciate the experience. Every day is interesting, because every day I learn something new. I am very happy that I was presented with an opportunity to work at Oracle and cooperate with friendly people in a multicultural environment. Oracle gives me the chance to develop my skills and start building my career. I am able to attend interesting training classes, improve my language skills and enjoy sporting activities, such as squash, swimming and aerobics, at the same time. If you dream of working in an international company and you would like to join a very dynamic industry, I really can recommend Oracle without a doubt, even if you have no IT background! If you have any questions related to this article feel free to contact  [email protected].  You can find our job opportunities via http://campus.oracle.com   Technorati Tags: Internship program,Oracle Czech,Economics,Management Faculty,Czech University of Life Sciences in Prague,Demand Generation Team

    Read the article

  • Silverlight 4 Twitter Client – Part 6

    - by Max
    In this post, we are going to look at implementing lists in our Twitter application, and at enhancing the data grid to display the status messages in a pleasing way with the profile images. Twitter lists are a really cool feature that Twitter recently added; I love them, and I've quite a few lists set up: one for .NET gurus, one for SQL Server gurus and one for a few celebrities. You can follow them here. Now let us move on to our tutorial.
    1) Lists can be subscribed to in two ways: there are the user's own lists, which he has created, and there are the lists that the user is following. For example, I've created 3 lists myself and I am following 1 list created by another user. Both cannot be fetched in the same API call; it's a two-step process.
    2) In the TwitterCredentialsSubmit method we have in Home.xaml.cs, let us make the first API call to get the lists that the user has created. For this, the call has to be made to https://twitter.com/<TwitterUsername>/lists.xml. The API reference is available here.

        myService1.AllowReadStreamBuffering = true;
        myService1.UseDefaultCredentials = false;
        myService1.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());
        myService1.DownloadStringCompleted += new DownloadStringCompletedEventHandler(ListsRequestCompleted);
        myService1.DownloadStringAsync(new Uri("https://twitter.com/" + GlobalVariable.getUserName() + "/lists.xml"));

    3) Now let us look at implementing the event handler, ListsRequestCompleted:

        public void ListsRequestCompleted(object sender, System.Net.DownloadStringCompletedEventArgs e)
        {
            if (e.Error != null)
            {
                StatusMessage.Text = "This application must be installed first.";
                parseXML("");
            }
            else
            {
                //MessageBox.Show(e.Result.ToString());
                parseXMLLists(e.Result.ToString());
            }
        }

    4) Now let us look at parseXMLLists in detail:

        xdoc = XDocument.Parse(text);
        var answer = (from status in xdoc.Descendants("list")
                      select status.Element("name").Value);
        foreach (var p in answer)
        {
            Border bord = new Border();
            bord.CornerRadius = new CornerRadius(10, 10, 10, 10);
            Button b = new Button();
            b.MinWidth = 70;
            b.Background = new SolidColorBrush(Colors.Black);
            b.Foreground = new SolidColorBrush(Colors.Black);
            //b.Width = 70;
            b.Height = 25;
            b.Click += new RoutedEventHandler(b_Click);
            b.Content = p.ToString();
            bord.Child = b;
            TwitterListStack.Children.Add(bord);
        }

    So what am I doing here? I am simply creating a button dynamically for each of the lists, putting them all within a StackPanel, and wiring each button to an event handler, b_Click, which will be fired on button click. We will look into that method in detail soon; for now let us get the buttons displayed.
    5) The user might also have some lists he has subscribed to, and we need to create buttons for these as well. At the end of the TwitterCredentialsSubmit method, we need to make a call to http://api.twitter.com/1/<TwitterUsername>/lists/subscriptions.xml. The reference is available here. The code looks like this:

        myService2.AllowReadStreamBuffering = true;
        myService2.UseDefaultCredentials = false;
        myService2.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());
        myService2.DownloadStringCompleted += new DownloadStringCompletedEventHandler(ListsSubsRequestCompleted);
        myService2.DownloadStringAsync(new Uri("http://api.twitter.com/1/" + GlobalVariable.getUserName() + "/lists/subscriptions.xml"));

    6) In the event handler, ListsSubsRequestCompleted, we need to parse the XML string and create a button for each subscribed list. I've taken only the "full_name" element; you can choose what you want, refer to the documentation here. Note that full_name has the format @<UserName>/<ListName>; this will be useful very soon.

        xdoc = XDocument.Parse(text);
        var answer = (from status in xdoc.Descendants("list")
                      select status.Element("full_name").Value);
        foreach (var p in answer)
        {
            Border bord = new Border();
            bord.CornerRadius = new CornerRadius(10, 10, 10, 10);
            Button b = new Button();
            b.Background = new SolidColorBrush(Colors.Black);
            b.Foreground = new SolidColorBrush(Colors.Black);
            //b.Width = 70;
            b.MinWidth = 70;
            b.Height = 25;
            b.Click += new RoutedEventHandler(b_Click);
            b.Content = p.ToString();
            bord.Child = b;
            TwitterListStack.Children.Add(bord);
        }

    Please note, I am letting the button width size itself to the content and also giving it a MinWidth value. I wanted to create rounded-corner buttons, but for some reason that's not working. Also add this StackPanel, TwitterListStack, to Home.xaml:

        <StackPanel HorizontalAlignment="Center" Orientation="Horizontal" Name="TwitterListStack"></StackPanel>

    After doing this, you get a series of buttons at the top of the home page.
    7) Now the button click event handler, b_Click. In this method, once a button is clicked, I call another method with the content string of the clicked button as the parameter:

        Button b = (Button)e.OriginalSource;
        getListStatuses(b.Content.ToString());

    8) Now the getListStatuses method:

        toggleProgressBar(true);
        WebRequest.RegisterPrefix("http://", System.Net.Browser.WebRequestCreator.ClientHttp);
        WebClient myService = new WebClient();
        myService.AllowReadStreamBuffering = true;
        myService.UseDefaultCredentials = false;
        myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted);
        if (listName.IndexOf("@") > -1 && listName.IndexOf("/") > -1)
        {
            string[] arrays = null;
            arrays = listName.Split('/');
            arrays[0] = arrays[0].Replace("@", " ").Trim();
            //MessageBox.Show(arrays[0]);
            //MessageBox.Show(arrays[1]);
            string url = "http://api.twitter.com/1/" + arrays[0] + "/lists/" + arrays[1] + "/statuses.xml";
            //MessageBox.Show(url);
            myService.DownloadStringAsync(new Uri(url));
        }
        else
            myService.DownloadStringAsync(new Uri("http://api.twitter.com/1/" + GlobalVariable.getUserName() + "/lists/" + listName + "/statuses.xml"));

    Please note that the URL to call differs based on the list clicked. If it is a list the user created, the URL format is http://api.twitter.com/1/<CurrentUser>/lists/<ListName>/statuses.xml, but if it is a subscribed list, it is http://api.twitter.com/1/<ListOwnerUserName>/lists/<ListName>/statuses.xml. The first one is straightforward to implement, but for a subscribed list we need to split the listName string to get the list owner and the list name and use them to form the URL. That is what I've done in this method: if listName contains "@" and "/", I build the URL differently.
    9) Until now we've been using only a few nodes of the status message XML; now we will fetch a new field, "profile_image_url". Images in the data grid: cool. For that, we need to modify our Status.cs file to include two more properties, one string and one BitmapImage, with get and set:

        public string profile_image_url { get; set; }
        public BitmapImage profileImage { get; set; }

    10) Now let us change the generic parseXML method which is used for binding to the data grid:

        public void parseXML(string text)
        {
            XDocument xdoc;
            xdoc = XDocument.Parse(text);
            statusList = new List<Status>();
            statusList = (from status in xdoc.Descendants("status")
                          select new Status
                          {
                              ID = status.Element("id").Value,
                              Text = status.Element("text").Value,
                              Source = status.Element("source").Value,
                              UserID = status.Element("user").Element("id").Value,
                              UserName = status.Element("user").Element("screen_name").Value,
                              profile_image_url = status.Element("user").Element("profile_image_url").Value,
                              profileImage = new BitmapImage(new Uri(status.Element("user").Element("profile_image_url").Value))
                          }).ToList();
            DataGridStatus.ItemsSource = statusList;
            StatusMessage.Text = "Datagrid refreshed.";
            toggleProgressBar(false);
        }

    Here we are creating a new bitmap image from the image URL, creating a new Status object for every status, and binding them all to the data grid. Refer to the Twitter API documentation here; you can choose any column you want.
    11) Until now we've been using auto-generated columns for the data grid, but if you want it to be really cool, you need to define the columns with templates:

        <data:DataGrid AutoGenerateColumns="False" Name="DataGridStatus" Height="Auto" MinWidth="400">
            <data:DataGrid.Columns>
                <data:DataGridTemplateColumn Width="50" Header="">
                    <data:DataGridTemplateColumn.CellTemplate>
                        <DataTemplate>
                            <Image Source="{Binding profileImage}" Width="50" Height="50" Margin="1"/>
                        </DataTemplate>
                    </data:DataGridTemplateColumn.CellTemplate>
                </data:DataGridTemplateColumn>
                <data:DataGridTextColumn Width="Auto" Header="User Name" Binding="{Binding UserName}" />
                <data:DataGridTemplateColumn MinWidth="300" Width="Auto" Header="Status">
                    <data:DataGridTemplateColumn.CellTemplate>
                        <DataTemplate>
                            <TextBlock TextWrapping="Wrap" Text="{Binding Text}"/>
                        </DataTemplate>
                    </data:DataGridTemplateColumn.CellTemplate>
                </data:DataGridTemplateColumn>
            </data:DataGrid.Columns>
        </data:DataGrid>

    I've used only three columns: profile image, username and status text. Now our data grid will look super cool, as in the screenshot (omitted here). Coincidentally, Tim Heuer, who is a Silverlight guru and works on the Silverlight team at Microsoft, is in the screenshot; his blog is really super. Here is the zipped file with all the .xaml, .xaml.cs and class files. Ok, let us stop here for now; we will look into implementing a few more features in the next few posts, and then I am going to look into developing an ASP.NET MVC 2 application. Hope you all liked this post. If you have any queries or suggestions, feel free to comment below or contact me. Cheers! Technorati Tags: Silverlight, LINQ, Twitter API, Twitter, Silverlight 4

    Read the article

  • Asset and Work Management in Utilities: An Integrated Enterprise Success Story

    - by stephen.slade(at)oracle.com
    Jan 11 '11 webcast: utilities are turning to Oracle to deliver an integrated EAM platform that manages all of their assets, from fleet to facilities and from distribution to generation. Hear from solution experts and from Sunflower Electric Power Corporation about how an integrated enterprise asset and work management system helped them deliver bottom-line results. Do you have different work management systems for generation, distribution, and transmission? Fleet maintenance? Facilities? Are you on the latest release of these products? Have you considered your options for when a product is no longer supported? Do you struggle with integration and with keeping the various systems "in balance"? Do you have trouble retrieving data from these disparate systems and getting an enterprise view of asset and work management operations? Utilities are challenged to better manage information on generation, transmission and distribution assets. Point solutions for Enterprise Asset Management (EAM) and Computerized Maintenance Management Systems (CMMS) are often effective as departmental solutions, but they have limited ability to deliver an enterprise solution with accessible business intelligence. Date: January 11, 2011 @ 10am PT / 1pm ET. Evite: http://www.oracle.com/us/dm/h2fy11/63025-wwmk10040611mpp054c003-se-197386.html Register: HERE

    Read the article

  • SQL Azure Roadmap gets a little clearer – announcements from Tech Ed

    - by Eric Nelson
    On Monday at Tech·Ed 2010 we announced new stuff (I like new stuff) that "showcases our continued commitment to deliver value, flexibility and control of data through data cloud services to our customers". Ok, that does sound like marketing speak (and it is) but the good news is there is some meat behind it. We have some decent new features coming and we also have some clarity on when we will be able to get our hands on those features.
    - SQL Azure Business Edition extends to 50 GB – June 28th: the SQL Azure Business Edition database is extending from 10 GB to 50 GB. The new 50 GB database size will be available worldwide starting June 28th.
    - SQL Azure Business Edition subscription offer – August 1st: starting August 1st, we will have a new discounted SQL Azure promotional offer (SQL Azure Development Accelerator Core). More information is available at http://www.microsoft.com/windowsazure/offers/.
    - Public preview of the Data Sync Service – CTP now: Data Sync Service for SQL Azure allows for more flexible control over data by deciding which data components should be distributed across multiple datacenters in different geographic locations, based on your internal policies and business needs. Available as a community technology preview after registering at http://www.sqlazurelabs.com.
    - SQL Server Web Manager for SQL Azure – CTP this summer: SQL Server Web Manager (SSWM) is a lightweight and easy-to-use database management tool for SQL Azure databases, to be offered this summer.
    - Access 10 support for SQL Azure – available now: Yey, at last! Microsoft Office 2010 will natively support data connectivity to SQL Azure; we can now start developing those "departmental apps" with the confidence of a highly available SQL store provisioned in seconds (a minimal connection sketch follows below). NB: I don't believe we will support any previous versions of Access talking to SQL Azure.
    - The pre-announced spatial data support becomes live – live now: at MIX in March we announced spatial was coming, and apparently it is now here, although I need to check.
    Related links: UK based? Sign up at http://ukazure.ning.com. SQL Azure Team Blog: http://blogs.msdn.com/b/sqlazure/
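    For what it's worth, connecting to a SQL Azure database from .NET code looks just like connecting to an on-premises SQL Server. A minimal sketch, not part of the announcement; the server, database and credentials below are placeholders (note the user@server login form and the encrypted connection that SQL Azure expects):

        using System.Data.SqlClient;

        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "tcp:myserver.database.windows.net", // placeholder server
            InitialCatalog = "mydb",                          // placeholder database
            UserID = "myuser@myserver",                       // placeholder login
            Password = "...",
            Encrypt = true // SQL Azure requires encrypted connections
        };
        using (var conn = new SqlConnection(builder.ConnectionString))
        {
            conn.Open();
            // issue commands as usual, e.g. via SqlCommand
        }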

    Read the article

  • A Cautionary Tale About Multi-Source JNDI Configuration

    - by scott.s.nelson(at)oracle.com
    Here's a bit of fun with WebLogic JDBC configurations. I ran into this issue after reading that p13nDataSource and cgDataSource-NonXA should not be configured as multi-source. There were some issues changing them to use the basic JDBC connection string, and when rolling back to the bad configuration the server went "Boom". Since one purpose behind this blog is to share lessons learned, I just had to post this. If you write your descriptors manually (as opposed to generating them using the WLS console) and put a comma-separated list of JNDI addresses like this:

        <jdbc-data-source-params>
          <jndi-name>weblogic.jdbc.jts.commercePool,contentDataSource,contentVersioningDataSource,portalFrameworkPool</jndi-name>
          <algorithm-type>Load-Balancing</algorithm-type>
          <data-source-list>portalDataSource-rac0,portalDataSource-rac1</data-source-list>
          <failover-request-if-busy>false</failover-request-if-busy>
        </jdbc-data-source-params>

    then, so long as the first address resolves, it will still work. Sort of. If you use this connection to do an update, only one node of the RAC instance is updated. Other wonderful side effects include the server sometimes refusing to start. The proper way to list the JNDI sources is one per element, like this:

        <jdbc-data-source-params>
          <jndi-name>weblogic.jdbc.jts.commercePool</jndi-name>
          <jndi-name>contentDataSource</jndi-name>
          <jndi-name>contentVersioningDataSource</jndi-name>
          <jndi-name>portalFrameworkPool</jndi-name>
          <algorithm-type>Load-Balancing</algorithm-type>
          <data-source-list>portalDataSource-rac0, portalDataSource-rac1, portalDataSource-rac2</data-source-list>
          <failover-request-if-busy>false</failover-request-if-busy>
        </jdbc-data-source-params>

    (Props to Sandeep Seshan for locating the root cause.)

    Read the article

  • SSIS Denali as part of “Enterprise Information Management”

    - by jorg
    When watching the SQL PASS session "What's Coming Next in SSIS?" by Steve Swartz, the Group Program Manager for the SSIS team, an interesting question came up: why is SSIS thought of as BI, when we use it so frequently for other sorts of data problems? Steve's answer was that he breaks the world of data work into three parts:
    - Processing of inputs
    - BI
    - Enterprise Information Management: all the work you have to do, when you have a lot of data, to make it useful and clean and get it to the right place. This covers master data management, data quality work, data integration, and lineage analysis to keep track of where the data came from.
    Next, Steve told us that Microsoft is developing SSIS as part of a large push in all of these areas in the next release of SQL Server. So SSIS will be, next to a BI tool, part of Enterprise Information Management in the next release of SQL Server. I'm interested in the different ways people use SSIS; I've basically used it for ETL, data migrations and processing inputs. In which ways did you use SSIS?

    Read the article

  • Having trouble with projection matrix, need help

    - by Mr.UNOwen
    I'm having trouble with what appears to be the projection matrix. Given a wide enough screen, when a cube is at the left- or right-most edge, the left or right wall appears stretched to the point that the front face is 1/10 the width of the side. I do update the screen ratio along with the projection matrix and the viewport on screen resize, so am I safe to assume all the trouble comes from the matrix class? Also, the cube follows the mouse, but it is only vertically aligned with it, and it runs ahead of the mouse when going left or right from the center of the screen. The perspective function:

        /**
         * setPerspective
         *
         * @param fov: angle in radians
         * @param aspect: screen ratio w/h
         * @param near: near distance
         * @param far: far distance
         **/
        void APCamera::setPerspective(GMFloat_t fov, GMFloat_t aspect, GMFloat_t near, GMFloat_t far)
        {
            GMFloat_t difZ = near - far;
            GMFloat_t *data;

            mProjection->clear(); // set to identity matrix
            data = mProjection->getData();

            GMFloat_t v = 1.0f / tan(fov / 2.0f);

            data[_AP_MAA] = v / aspect;
            data[_AP_MBB] = v;
            data[_AP_MCC] = (far + near) / difZ;
            data[_AP_MCD] = -1.0f;
            data[_AP_MDD] = 0.0f;
            data[_AP_MDC] = 2.0f * far * near / difZ;

            mRatio = aspect;
            mInvProjOutdated = true;
            mIsPerspective = true;
        }

    and...

        #define _AP_MAA 0
        #define _AP_MAB 1
        #define _AP_MAC 2
        #define _AP_MAD 3
        #define _AP_MBA 4
        #define _AP_MBB 5
        #define _AP_MBC 6
        #define _AP_MBD 7
        #define _AP_MCA 8
        #define _AP_MCB 9
        #define _AP_MCC 10
        #define _AP_MCD 11
        #define _AP_MDA 12
        #define _AP_MDB 13
        #define _AP_MDC 14
        #define _AP_MDD 15

    Read the article

  • Site Web Analytics not updating Sharepoint 2010

    - by Rohit Gupta
    If you are facing the issue that the Web Analytics reports in SharePoint 2010 Central Administration are not updating (when you go to your site > Site Settings > Site Web Analytics reports or Site Collection Analytics reports, you get old data, with the ribbon displaying "Data Last Updated: 12/13/2010 2:00:20 AM"), please ensure that the following things are covered:
    - The Usage and Health Data Collection service is configured correctly.
    - The log collection schedule is configured correctly.
    - The Microsoft SharePoint Foundation Usage Data Import and Microsoft SharePoint Foundation Usage Data Processing timer jobs are configured to run at regular intervals.
    - One last important timer job is the Web Analytics Trigger Workflows timer job; ensure that it is enabled and scheduled to run at regular intervals (for each site that you need analytics for).
    After you have ensured that the web analytics service configuration is working fine, that the Usage Data Import job is importing the *.usage files from the ULS logs folder into the WSS_Logging database, and that all the required timer jobs are running as expected, wait for a day for the report to get updated. The report is updated automatically at 2:00 AM, and I could not find a way to control the schedule for this report update job. So be sure to wait for a day before giving up :)

    Read the article

  • AT&T Application Resource Analyzer in NetBeans IDE

    - by Geertjan
    Here at Øredev in Malmö I met Doug Sillars who does developer outreach for the AT&T Application Resource Optimizer. In this YouTube clip you see Doug explaining how it works and what it can do for optimizing performance of mobile applications. There's a free and open source Android app on GitHub that you can install on Android to collect data and then there's a Java Swing application for analyzing the results. And here's what that application looks like as a plugin in NetBeans IDE, click to enlarge the image, which shows the Android sources of the Data Collector, as well as the Data Analyzer ready to be used to collect data: Since the ARO Data Analyzer is written in Java and has JPanels defining its UI layer, integrating the user interface wasn't hard. Now working on the Actions, so there'll be a new ARO menu with start/stop data collecting menu items, etc, reusing as much of the original code as possible. That part is actually already working. I started up an Android emulator, then started the data collection process from the IDE. Now need to include the Actions for importing the data into the analyzer, together with a few other related features. A pretty cool feature in ARO is video capture, so that a movie can be made by ARO of all the steps taken on the device during the collection process, which will also be nice to have integrated into the NetBeans plugin. Ultimately, this will be handy for anyone creating Android applications in NetBeans IDE since they'll be able to use AT&T's ARO tool for optimizing the performance of the applications they're developing. It will also be useful for those using the built-in Cordova tools in NetBeans IDE to create iOS applications because ARO is also applicable to analyzing iOS application performance.

    Read the article

  • Working with QuickBooks using LINQPad

    - by dataintegration
    The RSSBus ADO.NET Providers can be used from many applications and development environments. In this article, we show how to use LINQPad to connect to QuickBooks using the RSSBus ADO.NET Provider for QuickBooks. Although this example uses the QuickBooks Data Provider, the same process applies to any of our ADO.NET Providers.
    Create the data model:
    Step 1: Download and install both the Data Provider from RSSBus and LINQPad (available at www.linqpad.net).
    Step 2: Create a new project in Visual Studio and create a data model for it using the ADO.NET Entity Data Model wizard.
    Step 3: Create a new connection by clicking "New Connection", specify the connection string options, and click Next.
    Step 4: Select the desired tables and views and click Finish to create the data model.
    Step 5: Right-click on the entity diagram and select 'Add Code Generation Item'. Choose the 'ADO.NET DbContext Generator'.
    Step 6: Now build the project. The generated files can be used to create a QuickBooks connection in LINQPad.
    Create the connection to QuickBooks in LINQPad:
    Step 7: Open LINQPad and click 'Add New Connection'.
    Step 8: Choose 'Entity Framework DbContext POCO'.
    Step 9: Choose the data model assembly ('.dll') created by Visual Studio as the 'Path to Custom Assembly'. Choose the name of the custom DbContext and the path to the config file, and assign a name to the connection that will allow you to recognize its purpose.
    Step 10: Congratulations! Now you have a connection to QuickBooks, and you can query data through LINQPad (a sample query follows below).
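    For instance, once the new connection is selected in LINQPad, queries run directly against the generated entity sets. A minimal sketch, not from the original article; the entity set and property names (Customers, CompanyName, Balance) are assumptions that depend on the tables chosen in Step 4:

        // C# Expression in LINQPad: list customers with their balances
        from c in Customers
        where c.CompanyName.StartsWith("A")
        orderby c.CompanyName
        select new { c.CompanyName, c.Balance }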

    Read the article

  • flat files vs. RDBMS database, few read/writes, few changes

    - by Bob Lapique
    I have to handle data from long-term (years, decades) climate monitoring stations. The data flow usually starts with raw data (voltages, etc.) plus quality-check information (pressure, temperature, flow rate, etc.), generally recorded at 1 Hz. The data are then assigned a quality flag (by a human and/or a program), processed (calibration curves applied) and flagged. So we basically end up with 2 datasets: raw and processed data. New data are typically added once a day (~500 KB/day/instrument). Simultaneous queries are not likely to ever happen. I wanted to go for an RDBMS (we have a MySQL server) and have some experience in database design, but the IT guy keeps telling me that flat files will do the job just as well. I suspect he is trying to make his life easier when it comes to backing up or upgrading MySQL. There are not so many links between the data, and they don't change much, but the quality flags will change. An RDBMS makes it easier to compare data from different instruments on a "many days" scale, compared to daily text files. Well, what would you advise? Thanks.

    Read the article

  • Recommended storage scheme for home server? (LVM/JBOD/RAID 5...)

    - by j-g-faustus
    Are there any guidelines for which storage scheme(s) make most sense for a multiple-disk home server? I am assuming a separate boot/OS disk (so bootability is not a concern; this is for data storage only) and 4-6 storage disks of 1-2 TB each, for a total storage capacity in the range of 4-12 TB. The file system is ext4; I expect there will be only one big partition spanning all disks. As far as I can tell, the alternatives are:
    - Individual disks. Pros: works with any combination of disk sizes; losing a disk loses only the data on that disk; no need for volume management. Cons: data management is clumsy when logical units (like a "movies" folder) are larger than the capacity of any single drive.
    - JBOD span. Pros: can merge disks of any size. Cons: losing a disk loses all data on all disks.
    - LVM. Pros: can merge disks of any size; relatively simple to add and remove disks. Cons: losing a disk loses all data on all disks.
    - RAID 0. Pros: speed. Cons: losing one drive loses all data; disks must be the same size.
    - RAID 5. Pros: data survives losing one disk. Cons: gives up one disk's worth of capacity; disks must be the same size.
    - RAID 6. Pros: data survives losing two disks. Cons: gives up two disks' worth of capacity; disks must be the same size.
    I'm primarily considering either LVM or JBOD span, simply because they will let me reuse older, smaller-capacity disks when I upgrade the system. The runner-up is RAID 0 for speed. I'm planning on having full backups to a separate system, so I expect the extra redundancy from RAID levels 5 or 6 won't be important. Is this a fair representation of the alternatives? Are there other considerations or alternatives I have missed? And what would you recommend?

    Read the article

  • The remote server returned an error: 227 Entering Passive Mode

    - by hmloo
    Today, while uploading a file to an FTP server, my code threw an error: "The remote server returned an error: 227 Entering Passive Mode". After some research, I learned a bit about how FTP works. FTP may run in active or passive mode, which determines how the data connection is established.

        Active mode:
          command connection: client >1024 -> server 21
          data connection:    client >1024 <- server 20

        Passive mode:
          command connection: client >1024 -> server 21
          data connection:    client >1024 -> server >1024

    In active mode, the client connects from a random unprivileged port (N > 1023) to the FTP server's command port (default port 21). If the client needs to transfer data, it uses the PORT command to tell the server: "hi, I opened port XXXX, please connect to me", and the server then uses port 20 to initiate the data connection to that client port. In passive mode, the client likewise connects from a random unprivileged port (N > 1023) to the FTP server's command port (default port 21), but if the client needs to transfer data, the server tells the client: "hi, I opened port XXXX, please connect to me", and the client then initiates the data connection to that server port. In a nutshell, active mode has the server connect to the client, and passive mode has the client connect to the server. So if your FTP server is configured to work in active mode only, or the firewalls between your client and the server are blocking the data port range, you will get this error message. To fix the issue, just set the System.Net.FtpWebRequest property UsePassive = false. Hope this helps! Thanks for reading!
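    In code, the workaround is a single property on the request object. A minimal upload sketch, assuming the standard System.Net FTP client; the host, credentials and file paths below are placeholders:

        using System;
        using System.IO;
        using System.Net;

        var request = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/upload/file.txt");
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.Credentials = new NetworkCredential("user", "password");
        request.UsePassive = false; // force active mode, avoiding the 227 error described above

        byte[] data = File.ReadAllBytes(@"C:\local\file.txt");
        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(data, 0, data.Length);
        }
        using (var response = (FtpWebResponse)request.GetResponse())
        {
            Console.WriteLine(response.StatusDescription); // e.g. "226 Transfer complete."
        }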

    Read the article
