Search Results

Search found 14693 results on 588 pages for 'azure storage tables'.

Page 151/588

  • How can I use Hibernate Criteria to query nested tables?

    - by cbmeeks
    I've looked all over SO and Google but I guess I'm not using the right search terms or something. Anyway, say I have three tables:

      Companies:  id, name, user_id
      Users:      id, username, usertype_id
      UserTypes:  id, typeofuser

    So ACME would be a company, it would have a user Moe, and Moe would be a usertype of Stooge. In SQL, I would do something like:

      select * from companies c
      join users u on (u.id = c.user_id)
      join usertypes ut on (ut.id = u.usertype_id)
      where ut.typeofuser = 'Stooge'

    But I can't seem to figure out how to do that in a Criteria. I have tried:

      Criteria crit = io.getSession().createCriteria(Company.class);
      List<Company> list = crit.createCriteria("users")
          .createCriteria("usertypes")
          .add(Restrictions.eq("typeofuser", "Stooge"))
          .list();

    But I get back way too many records, and the results don't even come close to being accurate. I've also tried:

      Criteria crit = io.getSession().createCriteria(Company.class);
      List<Company> list = crit.createAlias("users", "u")
          .createAlias("u.usertypes", "ut")
          .add(Restrictions.eq("ut.typeofuser", "Stooge"))
          .list();

    which seems to bring back the exact same result set. I actually have read the user manual, and nesting only one level deep (i.e. searching by users) is fine, but when I get two layers deep I can't quite get it, and the manual is no help. I just can't relate cats and kittens to business objects. Maybe they should use cats, kittens and fleas? :-/ Thanks for any suggestions.
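
    Since Companies carries a single user_id, the path is two many-to-one hops rather than a collection. Here is a minimal sketch of the two-level query, assuming the mapped property names are user, userType and typeOfUser (guesses from the schema above, so adjust to your actual mappings); the distinct-root-entity transformer is what usually collapses the duplicate rows a multi-table join fans out:

      Criteria crit = session.createCriteria(Company.class)
          .createAlias("user", "u")                        // Company.user  -> Users row
          .createAlias("u.userType", "ut")                 // User.userType -> UserTypes row
          .add(Restrictions.eq("ut.typeOfUser", "Stooge"))
          .setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY); // de-duplicate Company roots
      List<Company> companies = crit.list();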

    Read the article

  • Using Entity Framework to connect to multiple similar tables in .NET MVC

    - by Dite
    A relative newcomer to .NET MVC2 and the Entity Framework, I am working on a project which requires a single web application (C#, .NET 4) to connect to multiple different databases depending on the route of access (i.e. subdomain). No problem with this in principle, and all the logic is written to transform the subdomain into an entity connection and pass this through to the entity model. The problem comes from the fact that the different databases, whilst largely similar in structure, contain 3 or 4 unique tables bespoke to that instance. To my mind there are two ways to solve this issue, neither of which I am sure will be possible.

    1. Use a separate entity model for each database. Attempts down this route have thrown up conflicts where table/sp names are the same across different databases, or implicit conversion errors when I try to put the different models in different namespaces.

    2. Overwrite the classes which refer to the changeable database objects based on the value of a base controller property. I have found nothing to suggest I can even do this.

    My question is whether either of these routes can ever work in principle, or whether I should just give up on the EF and connect to the databases directly using ADO. Perhaps there is another way to solve this problem I haven't thought of? Thanks for any help...
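
    For the subdomain-to-connection wiring the post describes, a minimal sketch might look like the following; SharedModelEntities stands in for the EDMX-generated context, and the per-subdomain connection string names are hypothetical:

      // namespaces: System.Configuration, System.Data.EntityClient
      public static SharedModelEntities ContextFor(string subdomain)
      {
          var builder = new EntityConnectionStringBuilder
          {
              Provider = "System.Data.SqlClient",
              // assumes one <connectionStrings> entry per subdomain in web.config
              ProviderConnectionString = ConfigurationManager
                  .ConnectionStrings[subdomain].ConnectionString,
              Metadata = "res://*/SharedModel.csdl|res://*/SharedModel.ssdl|res://*/SharedModel.msl"
          };
          return new SharedModelEntities(builder.ConnectionString);
      }

    This keeps one model for the shared tables; it does not, by itself, solve the 3 or 4 bespoke tables per instance, which is the harder half of the question.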

    Read the article

  • Where to start to create an HTML website with 2 tables read from CSV files using anything but PHP

    - by CodingIsAwesome
    I want to design a website which displays, on loading, two tables, each with its respective data from a CSV file. Then every minute the website automatically refreshes. This problem seems so simple, but yet the solution eludes me. All of the files will be contained in one directory, not on a server but on a local machine, such as sitting on the desktop. I understand that if I use JavaScript I have to use ADO, and I'm still trying to work out how to use ASP. I am new to both languages. So far the only restriction is that I can't use PHP. The gist, as far as I can think right now, is:

    1. read the file
    2. place the file into an array by splitting at the commas
    3. write the array into td's ?????
    4. then print all this out into a div ????

    I have googled my heart out and can't seem to find what I'm looking for, or even piece together what I'm looking for. Everything with JavaScript and ADO leads me to dead ends, and I can't find anything on ASP that is helpful. Could someone please write up some sample code or a resource? Or have a better solution?
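
    A plain-JavaScript sketch of those four steps, assuming the page and a data1.csv sit in the same folder and the browser allows XMLHttpRequest against local files (older IE does; current browsers may need the folder served by a small local web server):

      <div id="table1"></div>
      <script>
      function loadCsv(url, targetId) {
        var xhr = new XMLHttpRequest();
        xhr.onreadystatechange = function () {
          if (xhr.readyState === 4) {
            var rows = xhr.responseText.split(/\r?\n/);      // step 1: read + split lines
            var html = '<table>';
            for (var i = 0; i < rows.length; i++) {
              if (rows[i] === '') continue;                  // skip blank lines
              html += '<tr><td>' +
                      rows[i].split(',').join('</td><td>') + // steps 2-3: one td per field
                      '</td></tr>';
            }
            document.getElementById(targetId).innerHTML = html + '</table>'; // step 4
          }
        };
        xhr.open('GET', url + '?t=' + new Date().getTime(), true); // cache-buster
        xhr.send(null);
      }
      loadCsv('data1.csv', 'table1');                                      // on load
      setInterval(function () { loadCsv('data1.csv', 'table1'); }, 60000); // every minute
      </script>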

    Read the article

  • Beginner question: How to delete records permanently in the case of linked tables?

    - by Serenity
    Let's say I have these two tables, QuesType and Ques:

      QuesType
      QuesTypeID | QuesType     | Active
      -----------------------------------
      101        | QuesType1    | True
      102        | QuesType2    | True
      103        | XXInActiveXX | False

      Ques
      QuesID | Ques  | Answer | QuesTypeID | Active
      ---------------------------------------------
      1      | Ques1 | Ans1   | 101        | True
      2      | Ques2 | Ans2   | 102        | True
      3      | Ques3 | Ans3   | 101        | True

    In the QuesType table, QuesTypeID is a primary key. In the Ques table, QuesID is a primary key and QuesTypeID is the foreign key that references QuesTypeID from the QuesType table. Now I am unable to delete records from the QuesType table; I can only make a QuesType inactive by setting Active = False. I am unable to delete QuesTypes permanently because of the foreign key relation it has with the Ques table, so I just set the column Active = False, and those QuesTypes then don't show on my grid when it is bound. What I want to do is be able to delete any QuesType permanently. Now it can only be deleted if it's not being used anywhere in the Ques table, right? So to delete any QuesType permanently, this is what I thought I could do: in the grid that displays QuesTypes, I have a check box for Active and a button for delete. What I thought was, when a user makes some QuesType inactive, the OnCheckChanged() event will run, and that will have the code to delete all the questions in the Ques table that are using that QuesTypeID. Then on the QuesType grid, that QuesType would show as deactivated, and only then can a user delete it permanently. Am I thinking correctly? Currently in my DeleteQuesType stored procedure what I am doing is setting Active = False and setting QuesType = some string like XXInactiveXX. Is there any other way?
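
    For the permanent delete itself, a minimal sketch (table and column names as described above, @QuesTypeID being the row the user picked) is to remove the child rows first so the foreign key no longer blocks the parent delete:

      -- delete dependent questions first, then the question type itself;
      -- one transaction so a failure leaves nothing half-deleted
      BEGIN TRANSACTION;
        DELETE FROM Ques     WHERE QuesTypeID = @QuesTypeID;
        DELETE FROM QuesType WHERE QuesTypeID = @QuesTypeID;
      COMMIT;

    Declaring the foreign key with ON DELETE CASCADE would achieve the same in a single DELETE, at the cost of making such deletes easy to trigger by accident.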

    Read the article

  • How to compare two tables' fields with another value in MySQL?

    - by I Like PHP
    I have two tables:

      table_school
      school_open_time | school_close_time | school_day
      8:00 AM          | 9:00 PM           | Monday
      10:00 AM         | 7:00 PM           | Wednesday

      table_college
      college_open_time | college_close_time | college_day
      10:00 AM          | 8:00 PM            | Monday
      10:00 AM          | 9:00 PM            | Tuesday
      10:00 AM          | 5:00 PM            | Wednesday

    Now I want to select school_open_time, school_close_time, college_open_time and college_close_time according to today (meaning college_day = school_day = today), and also, if there is no row for a specific day in one of the tables, it should display a blank field (a LEFT JOIN, I think I can use). Please suggest the best and most optimized query for this.

    UPDATE: if there is no open time and close time for the school, then college_open_time and college_close_time have to be returned (not filled in the database, just returned) as school_open_time and school_close_time, and there must always be a college_open_time and college_close_time for a given day.

    MORE UPDATE: I am using the query below:

      SELECT college_open_time, college_close_time, school_open_time, school_close_time
      FROM tbl_college
      LEFT JOIN tbl_school ON school_owner_id = college_owner_id
      WHERE college_owner_id = '".$_SESSION['user_id']."'
        AND college_day = '".date('l', time())."'";

    It returns a single row (left-hand side having some value, right-hand side blank) when there is no row for the given day in table_school, but it displays seven rows with the same value on the left-hand side (college_open_time, college_close_time) and six blank rows on the right-hand side (school_open_time and school_close_time). I need only one row when both tables have a row for the given day, but the above query just takes every row of the corresponding table_school where school_owner_id is, say, 50; it does not apply the condition that the school_day should be the given day.
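
    The row multiplication happens because the join condition matches only the owner, so every school row for that owner pairs with today's college row. A sketch of the fix (table and column names as above) is to put the day into the join condition and use COALESCE for the fallback the update asks for:

      SELECT c.college_open_time,
             c.college_close_time,
             COALESCE(s.school_open_time,  c.college_open_time)  AS school_open_time,
             COALESCE(s.school_close_time, c.college_close_time) AS school_close_time
      FROM   tbl_college c
      LEFT JOIN tbl_school s
             ON  s.school_owner_id = c.college_owner_id
             AND s.school_day      = c.college_day        -- match the day, not just the owner
      WHERE  c.college_owner_id = ?                       -- bind the session user id
        AND  c.college_day      = DAYNAME(CURDATE());     -- e.g. 'Monday'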

    Read the article

  • Cloud Based Load Testing Using TF Service & VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx

    One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is ‘Cloud Based Load Testing’. In this blog post I’ll walk you through: What is Cloud Based Load Testing? How have I been using this feature? – Success story! Where can you find more resources on this feature?

    What is Cloud Based Load Testing? It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress, but also gives you the ability to test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry, I have realized that the biggest barriers to Load Testing & Performance Testing adoption are:

      High infrastructure and administration cost that comes with this phase of testing
      Time taken to procure & set up the test infrastructure
      Finding use for this infrastructure investment after completion of testing

    Is cloud the answer?

      100% Visual Studio compatible
      Scalable and realistic
      Start testing in < 2 minutes
      Intuitive
      Pay only for what you need
      Use existing on-premise tests on the cloud

    There are a lot of vendors out there offering Cloud Based Load Testing; to name a few: LoadStorm, SOASTA, BlazeMeter, Blitz, and others. The question you may want to ask is why you should go with Microsoft’s cloud based load test offering. If you are a Microsoft shop or already have investments in Microsoft technologies, you’ll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing web tests authored in Visual Studio 2010 or Visual Studio 2012 will run on the cloud without requiring any modifications whatsoever. Microsoft’s cloud test rig also supports API-based testing; for example, if you are building a WPF application which consumes WCF services, you can write unit tests to invoke the WCF service, and these tests can be run on the cloud test rig and loaded with ‘N’ concurrent users for performance testing. If you have your assets already hosted in Azure, possibly in the same data centre as the cloud test rig, your Azure app will not incur a usage cost because of the generated traffic, since the traffic is coming from the same data centre. The licensing or pricing information on Microsoft’s cloud based load test service is yet to be announced, but I would expect it to be priced attractively to match the market competition. The only additional configuration required for running load tests on Microsoft’s cloud based load test service is to select the test run location as ‘Run tests using Visual Studio Team Foundation Service’.

    How have I been using Microsoft’s Cloud Based Load Test Service? I have been part of the Microsoft Cloud Based Load Test Service advisory council for the last 7 months. This gave me the opportunity to see the product shape up from concept to working solution. I was also the first person outside of Microsoft to try this offering out, which gave me the opportunity to test real-world applications at various clients using the Microsoft Load Test Service and provide real-world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test Service has been an insurance quote generation engine.
    This insurance quote generation engine is:

      hosted in Windows Azure
      expected to get quote requests from across the globe
      expected to handle 5 million quote requests in a day (not clear how this load will be distributed across the day)

    There was no way I could simulate that kind of load from on-premise without standing up additional hardware, but Microsoft’s cloud based load test service allowed me to test my key performance testing scenarios, i.e. simulating expected load, endurance testing, threshold testing and testing for latency.

    Simulating expected load: approach to devising a load pattern. My approach to devising a load test pattern has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. So, for example, invoking 1 quote from the quote engine takes 0.5 seconds, which means a single user issuing requests back to back generates 2 quotes per second. Now if you do the math:

      1 quote request by 1 user = 0.5 seconds
      quotes generated by 1 user in 24 hours = 2 × 60 × 60 × 24 = 172,800
      quotes generated by 30 users in 24 hours = 172,800 × 30 = 5,184,000

    This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, etc., then you can devise your own load pattern. Some examples of load test patterns can be found here.

    Endurance testing: To test for endurance, I loaded the quote generation engine with an expected fixed user load and ran the test for a very long duration, such as over 48 hours, and observed the effect of the long-running test on the Azure infrastructure. Currently the Microsoft Load Test Service does not support metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test.

    Threshold testing: To figure out how much user load the application could cope with before falling on its belly, I opted to step-load the quote generation engine by incrementing user load, with different variations of incremental user load per minute, till the application crashed out and forced an IIS reset.

    Testing for latency: Currently the Microsoft Load Test Service does not support generating geographically distributed load. I, however, deployed the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (this feature is provided out of the box right from Visual Studio 2010), I could see the difference in response times.

    More resources on the Microsoft Cloud Based Load Test Service. A few important links to get you started:

      Download Visual Studio Ultimate 2013 Preview
      Getting started guide for load testing using Team Foundation Service
      Troubleshooting guide for FAQs and known issues
      Team Foundation Service forum for questions and support
      Detailed demo and presentation (link to Tech-Ed session recording)
      Detailed demo and presentation (link to Build session recording)

    There are a few limits on the usage of the Microsoft Cloud Based Load Test Service that you can read about here. If you have any feedback on the Microsoft Cloud Based Load Test Service, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

    Read the article

  • USB 3.0 port with USB 3.0 device in Ubuntu 12.10

    - by fernando garcía
    When I try to connect a USB 3.0 device in Ubuntu 12.10 (ASUS K55VD, kernel 3.5.0-19-generic #30-Ubuntu SMP), the system says:

      [ 74.747832] hub 3-0:1.0: unable to enumerate USB device on port 1
      [ 74.931957] usb 4-1: new SuperSpeed USB device number 2 using xhci_hcd
      [ 74.949390] usb 4-1: New USB device found, idVendor=05e3, idProduct=0731
      [ 74.949396] usb 4-1: New USB device strings: Mfr=0, Product=1, SerialNumber=2
      [ 74.949400] usb 4-1: Product: USB Storage
      [ 74.949403] usb 4-1: SerialNumber: 0000000000000033
      [ 75.033327] usbcore: registered new interface driver uas
      [ 75.038548] Initializing USB Mass Storage driver...
      [ 75.038651] scsi7 : usb-storage 4-1:1.0
      [ 75.038700] usbcore: registered new interface driver usb-storage
      [ 75.038701] USB Mass Storage support registered.

    but it does not recognize the device, and the disk applications (GParted, Nautilus) act as if nothing had been connected. I have checked other questions, but either they have no answers or they talk about previous Ubuntu versions with 3.0.x kernels. A USB 2.0 device will work in the USB 3.0 ports. A USB 3.0 device will work (at USB 2.0 speeds) in the USB 2.0 ports. The problem, as I wrote, is between USB 3.0 devices and USB 3.0 ports. I have my USB 3.0 ports configured without legacy support via the BIOS (the way they should be, I suppose), but I have also tried configuring them with XHCI Preboot mode disabled. Has anyone solved a similar problem? Thanks in advance.

    Read the article

  • Inside Amazon’s Warehouses

    - by Jason Fitzpatrick
    If you’re expecting the inside of Amazon’s warehouses to be some sort of rigidly organized robot-filled warehouse of tomorrow, you’ll be quite surprised to find that the storage technique they employ is called “chaotic storage”. International Business Times paid a visit to a major Amazon warehouse and took a tour. Rather than finding robots, they found: Amazon must rely on barcodes and human hands to find the ordered items and drop them into the proper bins — without robots, Amazon utilizes a system known as “chaotic storage,” where products are essentially shelved at random. By storing items randomly instead of categorically, the warehouse has a much better flow of material. Even without robots or automation, Amazon can compile a “picking list” where each item needs to be taken off the shelf and scanned again before it can be shipped. The real advantage to chaotic storage is that it’s significantly more flexible than conventional storage systems. If there are big changes in a product range, the company doesn’t need to plan for more space, because the products or their sales volumes don’t need to be known or planned in advance if they’re simply being stored at random.

    Read the article

  • Backing up Information Store - Recovering to Different Information Store / RSG

    - by Kip
    Hi all, I have a question on a situation that hasn't yet arisen, but I wondered about the possibilities and how we would go about it. Currently we back up our Exchange 2003 cluster with Backup Exec. It is set to back up the Microsoft Information Store on that server and all of the mailbox stores beneath it. We have previously used this in conjunction with a recovery storage group on the same server to recover lost mailboxes. However, due to space constraints on that server (a separate issue that is being addressed in the very near future, but outside the scope of this question), we now don't have enough space on that server to do a recovery-storage-group type restore. Is it possible to restore an information store to a different server in the same administrative group (i.e. First)? By that I mean we have the following:

      Server1 | First Storage Group | Mailbox Store1/2/3

    Could Mailbox Store 1 be restored to:

      Server2 | First Storage Group | Recovery Storage Group

    Both servers are under the same administrative group. Currently, for whatever reason (mainly time), the mailboxes are not being backed up individually. Regards, Kip

    Read the article

  • Why does a user have to enter "Profile" data to enter data into other tables?

    - by Greg McNulty
    EDIT: It appears the user has to enter some data for his profile, otherwise I get the error below. I guess if there is no profile data, the user cannot continue to enter data in other tables by default? I do not want to make entering user profile data a requirement to use the rest of the site's functionality; how can I get around this?

    Currently I have been testing everything with the same user and everything has been working fine. However, when I created a new user for the very first time and tried to enter data into my custom table, I get the following error:

      The INSERT statement conflicted with the FOREIGN KEY constraint "FK_UserData_aspnet_Profile". The conflict occurred in database "C:\ISTATE\APP_DATA\ASPNETDB.MDF", table "dbo.aspnet_Profile", column 'UserId'. The statement has been terminated.

    Not sure why I am getting this error. I have the user controls set up in ASP.NET 3.5; however, all I am using is my own table, or at least so I am aware. I have a custom UserData table that includes the columns id, UserProfileID, CL, LL, SL, DateTime (id is the auto-incremented int). The intent is that all users will add their data to this table, and as I mentioned above, it has been working fine for the first user I created. However, when I created a new user I am getting this problem. Here is the code that updates the database:

      protected void Button1_Click(object sender, EventArgs e)
      {
          // connect to database
          MySqlConnection database = new MySqlConnection();
          database.CreateConn();

          // create command object
          Command = new SqlCommand(queryString, database.Connection);

          // add parameters. used to prevent sql injection
          Command.Parameters.Add("@UID", SqlDbType.UniqueIdentifier);
          Command.Parameters["@UID"].Value = Membership.GetUser().ProviderUserKey;
          Command.Parameters.Add("@CL", SqlDbType.Int);
          Command.Parameters["@CL"].Value = InCL.Text;
          Command.Parameters.Add("@LL", SqlDbType.Int);
          Command.Parameters["@LL"].Value = InLL.Text;
          Command.Parameters.Add("@SL", SqlDbType.Int);
          Command.Parameters["@SL"].Value = InSL.Text;
          Command.ExecuteNonQuery();
      }

    Source Error: Line 84: Command.ExecuteNonQuery();
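
    Rows in aspnet_Profile are only created the first time a user actually saves profile data, whereas aspnet_Users always has one row per user, so a foreign key into aspnet_Profile will fail for any user who has never saved a profile. A sketch of the schema change (constraint and column names taken from the error message and the table description above; adjust to your actual DDL):

      -- repoint the FK at aspnet_Users, which has a row for every member,
      -- instead of aspnet_Profile, which is populated lazily
      ALTER TABLE dbo.UserData
          DROP CONSTRAINT FK_UserData_aspnet_Profile;

      ALTER TABLE dbo.UserData
          ADD CONSTRAINT FK_UserData_aspnet_Users
          FOREIGN KEY (UserProfileID) REFERENCES dbo.aspnet_Users (UserId);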

    Read the article

  • Get XML from Server for Use on Windows Phone

    - by psheriff
    When working with mobile devices you always need to take into account bandwidth usage and power consumption. If you are constantly connecting to a server to retrieve data for an input screen, then you might think about moving some of that data down to the phone and cache the data on the phone. An example would be a static list of US State Codes that you are asking the user to select from. Since this is data that does not change very often, this is one set of data that would be great to cache on the phone. Since the Windows Phone does not have an embedded database, you can just use an XML string stored in Isolated Storage. Of course, then you need to figure out how to get data down to the phone. You can either ship it with the application, or connect and retrieve the data from your server one time and thereafter cache it and retrieve it from the cache. In this blog post you will see how to create a WCF service to retrieve data from a Product table in a database and send that data as XML to the phone and store it in Isolated Storage. You will then read that data from Isolated Storage using LINQ to XML and display it in a ListBox. Step 1: Create a Windows Phone Application The first step is to create a Windows Phone application called WP_GetXmlFromDataSet (or whatever you want to call it). On the MainPage.xaml add the following XAML within the “ContentPanel” grid: <StackPanel>  <Button Name="btnGetXml"          Content="Get XML"          Click="btnGetXml_Click" />  <Button Name="btnRead"          Content="Read XML"          IsEnabled="False"          Click="btnRead_Click" />  <ListBox Name="lstData"            Height="430"            ItemsSource="{Binding}"            DisplayMemberPath="ProductName" /></StackPanel> Now it is time to create the WCF Service Application that you will call to get the XML from a table in a SQL Server database. Step 2: Create a WCF Service Application Add a new project to your solution called WP_GetXmlFromDataSet.Services. Delete the IService1.* and Service1.* files and the App_Data folder, as you don’t generally need these items. Add a new WCF Service class called ProductService. In the IProductService class modify the void DoWork() method with the following code: [OperationContract]string GetProductXml(); Open the code behind in the ProductService.svc and create the GetProductXml() method. This method (shown below) will connect up to a database and retrieve data from a Product table. public string GetProductXml(){  string ret = string.Empty;  string sql = string.Empty;  SqlDataAdapter da;  DataSet ds = new DataSet();   sql = "SELECT ProductId, ProductName,";  sql += " IntroductionDate, Price";  sql += " FROM Product";   da = new SqlDataAdapter(sql,    ConfigurationManager.ConnectionStrings["Sandbox"].ConnectionString);   da.Fill(ds);   // Create Attribute based XML  foreach (DataColumn col in ds.Tables[0].Columns)  {    col.ColumnMapping = MappingType.Attribute;  }   ds.DataSetName = "Products";  ds.Tables[0].TableName = "Product";  ret = ds.GetXml();   return ret;} After retrieving the data from the Product table using a DataSet, you will want to set each column’s ColumnMapping property to Attribute. Using attribute based XML will make the data transferred across the wire a little smaller. You then set the DataSetName property to the top-level element name you want to assign to the XML. You then set the TableName property on the DataTable to the name you want each element to be in your XML. 
The last thing you need to do is to call the GetXml() method on the DataSet object which will return an XML string of the data in your DataSet object. This is the value that you will return from the service call. The XML that is returned from the above call looks like the following: <Products>  <Product ProductId="1"           ProductName="PDSA .NET Productivity Framework"           IntroductionDate="9/3/2010"           Price="5000" />  <Product ProductId="3"           ProductName="Haystack Code Generator for .NET"           IntroductionDate="7/1/2010"           Price="599.00" />  ...  ...  ... </Products> The GetProductXml() method uses a connection string from the Web.Config file, so add a <connectionStrings> element to the Web.Config file in your WCF Service application. Modify the settings shown below as needed for your server and database name. <connectionStrings>  <add name="Sandbox"        connectionString="Server=Localhost;Database=Sandbox;                         Integrated Security=Yes"/></connectionStrings> The Product Table You will need a Product table that you can read data from. I used the following structure for my product table. Add any data you want to this table after you create it in your database. CREATE TABLE Product(  ProductId int PRIMARY KEY IDENTITY(1,1) NOT NULL,  ProductName varchar(50) NOT NULL,  IntroductionDate datetime NULL,  Price money NULL) Step 3: Connect to WCF Service from Windows Phone Application Back in your Windows Phone application you will now need to add a Service Reference to the WCF Service application you just created. Right-mouse click on the Windows Phone Project and choose Add Service Reference… from the context menu. Click on the Discover button. In the Namespace text box enter “ProductServiceRefrence”, then click the OK button. If you entered everything correctly, Visual Studio will generate some code that allows you to connect to your Product service. On the MainPage.xaml designer window double click on the Get XML button to generate the Click event procedure for this button. In the Click event procedure make a call to a GetXmlFromServer() method. This method will also need a “Completed” event procedure to be written since all communication with a WCF Service from Windows Phone must be asynchronous.  Write these two methods as follows: private const string KEY_NAME = "ProductData"; private void GetXmlFromServer(){  ProductServiceClient client = new ProductServiceClient();   client.GetProductXmlCompleted += new     EventHandler<GetProductXmlCompletedEventArgs>      (client_GetProductXmlCompleted);   client.GetProductXmlAsync();  client.CloseAsync();} void client_GetProductXmlCompleted(object sender,                                   GetProductXmlCompletedEventArgs e){  // Store XML data in Isolated Storage  IsolatedStorageSettings.ApplicationSettings[KEY_NAME] = e.Result;   btnRead.IsEnabled = true;} As you can see, this is a fairly standard call to a WCF Service. In the Completed event you get the Result from the event argument, which is the XML, and store it into Isolated Storage using the IsolatedStorageSettings.ApplicationSettings class. Notice the constant that I added to specify the name of the key. You will use this constant later to read the data from Isolated Storage. Step 4: Create a Product Class Even though you stored XML data into Isolated Storage when you read that data out you will want to convert each element in the XML file into an actual Product object. 
This means that you need to create a Product class in your Windows Phone application. Add a Product class to your project that looks like the code below: public class Product{  public string ProductName{ get; set; }  public int ProductId{ get; set; }  public DateTime IntroductionDate{ get; set; }  public decimal Price{ get; set; }} Step 5: Read Settings from Isolated Storage Now that you have the XML data stored in Isolated Storage, it is time to use it. Go back to the MainPage.xaml design view and double click on the Read XML button to generate the Click event procedure. From the Click event procedure call a method named ReadProductXml().Create this method as shown below: private void ReadProductXml(){  XElement xElem = null;   if (IsolatedStorageSettings.ApplicationSettings.Contains(KEY_NAME))  {    xElem = XElement.Parse(     IsolatedStorageSettings.ApplicationSettings[KEY_NAME].ToString());     // Create a list of Product objects    var products =         from prod in xElem.Descendants("Product")        orderby prod.Attribute("ProductName").Value        select new Product        {          ProductId = Convert.ToInt32(prod.Attribute("ProductId").Value),          ProductName = prod.Attribute("ProductName").Value,          IntroductionDate =             Convert.ToDateTime(prod.Attribute("IntroductionDate").Value),          Price = Convert.ToDecimal(prod.Attribute("Price").Value)        };     lstData.DataContext = products;  }} The ReadProductXml() method checks to make sure that the key name that you saved your XML as exists in Isolated Storage prior to trying to open it. If the key name exists, then you retrieve the value as a string. Use the XElement’s Parse method to convert the XML string to a XElement object. LINQ to XML is used to iterate over each element in the XElement object and create a new Product object from each attribute in your XML file. The LINQ to XML code also orders the XML data by the ProductName. After the LINQ to XML code runs you end up with an IEnumerable collection of Product objects in the variable named “products”. You assign this collection of product data to the DataContext of the ListBox you created in XAML. The DisplayMemberPath property of the ListBox is set to “ProductName” so it will now display the product name for each row in your products collection. Summary In this article you learned how to retrieve an XML string from a table in a database, return that string across a WCF Service and store it into Isolated Storage on your Windows Phone. You then used LINQ to XML to create a collection of Product objects from the data stored and display that data in a Windows Phone list box. This same technique can be used in Silverlight or WPF applications too. NOTE: You can download the complete sample code at my website. http://www.pdsa.com/downloads. Choose Tips & Tricks, then "Get XML From Server for Use on Windows Phone" from the drop-down. Good Luck with your Coding,Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS **Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.  

    Read the article

  • Stack vs queue -based programming language efficiency [closed]

    - by Core Xii
    Suppose there are two programming languages: one where the only form of storage is one (preferred) or two (may be required for Turing-completeness) stacks, and another where the only form of storage is a single queue, with appropriate instructions in each to manipulate their respective storage to achieve Turing-completeness. Which one can more efficiently encode complex algorithms, such that most given algorithms take less code to implement, less time to compute and less memory to do so? Also, how do they compare to a language with a traditional array (or unbounded tape, if you will) as storage?
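
    To make the overhead concrete, here is a small illustrative sketch (in Python) of the classic simulation of a stack on a single FIFO queue: every push rotates the whole queue, so an operation that costs O(1) with native stack storage costs O(n) when only a queue is available.

      from collections import deque

      class QueueStack:
          """A LIFO stack simulated on a single FIFO queue."""
          def __init__(self):
              self._q = deque()

          def push(self, x):
              self._q.append(x)
              # rotate so the newest item sits at the front: O(n) per push
              for _ in range(len(self._q) - 1):
                  self._q.append(self._q.popleft())

          def pop(self):
              return self._q.popleft()   # O(1) once the rotation has been paid

      s = QueueStack()
      s.push(1); s.push(2); s.push(3)
      print(s.pop(), s.pop(), s.pop())   # 3 2 1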

    Read the article

  • iOS 5: View Details Of Used and Unused Space Of Your iCloud Account

    - by Gopinath
    Apple’s iCloud is an awesome free storage service that lets you store music, photos, apps, calendars, documents, and more in the cloud, and it wirelessly pushes them to all your iOS devices automatically. The free iCloud service gives everyone 5 GB of space to start with, and once you reach the cap you can subscribe to a premium account for a few dollars. If you would like to monitor iCloud usage details, here are the steps to follow on an iOS device:

    1. Tap on the Settings app
    2. Choose iCloud from the list of available options
    3. From the list of iCloud settings, tap on the Storage & Backup option
    4. Under the Storage section you will find the details of iCloud usage – Total Storage and Available Storage

    via tech-recipes

    Read the article

  • eSTEP Newsletter November 2012

    - by uwes
    Dear Partners, We would like to inform you that the November '12 issue of our Newsletter is now available. The issue contains information on the following topics:

    News from Corp: Oracle Celebrates 25 Years of SPARC Innovation; IDC White Paper Finds Growing Customer Comfort with Oracle Solaris Operating System; Oracle Buys Instantis; Pillar Axiom OpenWorld Highlights; Announcement: Oracle Solaris 11.1 Availability (data sheet, new features, FAQs, corporate pages, internal blog, download links, Oracle shop); Announcing StorageTek VSM 6; Announcement: Oracle Solaris Cluster 4.1 Availability (new features, FAQs, cluster corp page, download site, shop for media); Announcement: Oracle Database Appliance 2.4 patch update becomes available

    Technical Section: Oracle white papers on SPARC SuperCluster; Understanding Parallel Execution; With LTFS, Tape is Gaining Storage Ground, with an additional link to How to Create Oracle Solaris 11 Zones with Oracle Enterprise Manager Ops Center; Provisioning Capabilities of Oracle Enterprise Manager Ops Center 12c; Maximizing your SPARC T4 Oracle Solaris Application Performance, with the following articles: SPARC T4 Servers Set World Record on Siebel CRM 8.1.1.4 Benchmark, SPARC T4-Based Highly Scalable Solutions Post New World Record on SPECjEnterprise2010 Benchmark, SPARC T4 Server Delivers Outstanding Performance on Oracle Business Intelligence Enterprise Edition 11g; Oracle Sun ZFS Storage Appliance Reference Architecture for VMware vSphere4; Why 4K? - George Wilson's ZFS Day Talk; Pillar Axiom 600, with connected subjects: Oracle Introduces Pillar Axiom Release 5 Storage System Software, Driving Down the High Cost of Storage, Thin Provisioning with Pillar Axiom 600, Pillar Axiom 600 - System Overview and Architecture; Migrate to Oracle's SPARC Systems; Top 5 Reasons to Migrate to Oracle's SPARC Systems

    Learning & Events: Recently delivered Techcasts; Learning Paths; Oracle Database 11g: Database Administration (New) - Learning Path; Webcast: Drill Down on Disaster Recovery; What are Oracle Users Doing to Improve Availability and Disaster Recovery; SAP NetWeaver and Oracle Exadata Database Machine

    References: ARTstor Selects Oracle’s Sun ZFS Storage 7420 Appliances To Support Rapidly Growing Digital Image Library; Scottish Widows Cuts Sales Administration 20%, Reduces Time to Prepare Reports by 75%, and Achieves Return on Investment in First Year; Oracle's CRM Cloud Service Powers Innovation: Applications on Demand, Technology on Demand

    How to: How to Migrate Your Data to Oracle Solaris 11 Using Shadow Migration; Using svcbundle to Create SMF Manifests and Profiles in Oracle Solaris 11; How to Prepare a Sun ZFS Storage Appliance to Serve as a Storage Device with Oracle Enterprise Manager Ops Center 12c; Command Summary: Basic Operations with the Image Packaging System in Oracle Solaris 11; How to Update to Oracle Solaris 11.1 Using the Image Packaging System; How to Migrate Oracle Database from Oracle Solaris 8 to Oracle Solaris 11; Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster; Ease the Chaos with Automated Patching: Oracle Enterprise Manager Cloud Control 12c; Book excerpt: Oracle Exalogic Elastic Cloud Handbook

    You will find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.

      URL: http://launch.oracle.com/
      PIN: eSTEP_2011

    Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver. Thanks and best regards, Partner HW Enablement EMEA

    Read the article

  • PARTNER WEBCAST: Nimble SmartStack for Oracle with Cisco UCS (Nov 12)

    - by Zeynep Koch
    You are invited to the live webcast with Nimble Storage, Oracle and Cisco where we will talk about the new SmartStack solution from Nimble Storage that features Oracle Linux, Oracle VM and Cisco UCS products.

      When: Tuesday, November 12, 2013, 11:00 AM Pacific Time
      Panelists: Michele Resta, Director of Linux and Virtualization Alliances, Oracle; John McAbel, Senior Product Manager, Cisco; Ibby Rahmani, Solutions Marketing, Nimble Storage

    SmartStack™ solutions provide pre-validated reference architectures that speed deployments and minimize risk. All SmartStack solutions incorporate Cisco UCS as the compute and network infrastructure. In this webinar, you will learn how Nimble Storage SmartStack with Oracle and Cisco provides a converged infrastructure for Oracle Database environments with Oracle Linux and Oracle VM. SmartStack, built on best-of-breed components, delivers the performance and reliability needed for deploying Oracle on a single symmetric multiprocessing (SMP) server or Oracle Real Application Clusters (RAC) on multiple nodes. Register today.

    Read the article

  • Gnome trash on cifs mount random behaviour

    - by BobPenguin
    Hello everyone, I'm seeing some weird behaviour in Ubuntu 10.04. I have a CIFS mount that's mounted via fstab as follows:

      //192.168.1.1/share /media/storage cifs _netdev,username=guest,password="",uid=1000,gid=100,file_mode=0777,dir_mode=0777 0 0

    I can mount the share using sudo mount -a; my user can then access it, and create and delete files. The deleted files appear in the GNOME trash applet, and a folder /media/storage/.Trash-1000 is created automatically. When I log out, restart the machine and log in, the CIFS share is mounted but the trash applet is empty. If I unmount the share with sudo umount /media/storage and then remount with sudo mount -a, the trash applet displays the contents of the .Trash-1000 folder! It gets stranger... sometimes after umount then mount -a the trash is STILL empty, but another round of umount then mount -a fixes it. It seems like the trash applet is "forgetting" to scan the /media/storage mount point and is not always finding the .Trash-1000 folder at that mount point. Even when the trash applet is not displaying any trash from /media/storage/.Trash-1000, I can still delete things from /media/storage and they're moved to the .Trash-1000 folder. So I conclude there's a bug in the trash applet... does anyone know how to fix it?

    Read the article

  • Download the ZFSSA Objection Handling document (PDF)

    - by swalker
    View and download the new ZFS Storage Appliance objection handling document from the Oracle HW Technical Resource Centre here. This document aims to address the most common objections encountered when positioning the ZFS Storage Appliance disk systems in production environments. It will help you to be more successful in establishing the undeniable benefits of the Oracle ZFS Storage Appliance in your customers' IT environments. If you do not already have an account to access the Oracle Hardware Technical Resource Centre, please click here and follow the instructions to register.

    Read the article

  • Is the SAN dying???

    - by RickHeiges
    Is the SAN dying? The reason that I ask this question is that MSFT has unleashed technologies this year that point in that direction:

      AlwaysOn Availability Groups shun shared storage
      Windows 2012 has storage replication technology that does not require a SAN
      Windows 2012 has Hyper-V Replica technology that does not require a SAN
      PDW v2 continues to reinforce the approach of avoiding shared storage

    I'm not saying that SAN technology does not have its place or does not have benefits inherent to the beast... (read more)

    Read the article

  • Data Auditor by Example

    - by Jinjin.Wang
    OWB has a node, Data Auditors, under Oracle Module in the Projects Navigator. What is a data auditor and how do you use it? I will give an introduction to data auditors and show their usage by example. A data auditor is an important tool for ensuring that data quality levels meet business requirements. A data auditor validates data against a set of data rules to determine which records comply and which do not. It gathers statistical metrics on how well the data in a system complies with a rule by auditing and marking how many errors are occurring against the audited table. Data auditors are typically scheduled for regular execution as part of a process flow, to monitor the quality of the data in an operational environment such as a data warehouse or ERP system, either immediately after updates like data loads, or at regular intervals.

    How do you use a data auditor to monitor data quality? Only objects with data rules can be monitored, so the first step is to define data rules according to business requirements and apply them to the objects you want to monitor. The objects can be tables, views, materialized views, and external tables. Secondly, create a data auditor containing the objects. You can optionally configure the data auditor and set physical deployment parameters for it, which will be used while running the data auditor. Then deploy and run the data auditor, either manually or as part of a process flow. After execution, the data auditor sets several output values, and records that are identified as not complying with the defined data rules contained in the data auditor are written to error tables.

    Here is an example. We have two tables, DEPARTMENTS and EMPLOYEES (see Pic-1 and Pic-2; click here for DDL and data), imported into OWB. We want to gather statistical metrics on how well the data in these two tables satisfies the following requirements:

    a. Values of the EMPLOYEES.EMPLOYEE_ID attribute are three-digit numbers.
    b. Valid values for EMPLOYEES.JOB_ID are IT_PROG, SA_REP, SH_CLERK, PU_CLERK, and ST_CLERK.
    c. EMPLOYEES.EMPLOYEE_ID is related to DEPARTMENTS.MANAGER_ID.

    Pic-1 EMPLOYEES Pic-2 DEPARTMENTS

    1. To determine legal data within EMPLOYEES, and legal relationships between data in different columns of the two tables, we first define data rules based on the three requirements and apply them to the tables.

    a. The first requirement is about patterns that an attribute is allowed to conform to. We create a Domain Pattern List data rule, EMPLOYEE_PATTERN_RULE, here. The pattern is defined in the Oracle Database regular expression syntax as ^([0-9]{3})$. Apply data rule EMPLOYEE_PATTERN_RULE to table EMPLOYEES.
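
    For intuition, the domain-pattern rule in requirement (a) amounts to something like the following check in plain SQL (a sketch, not the code OWB generates); the rows returned here are the ones the auditor would count as errors and write to the error table:

      SELECT *
      FROM   EMPLOYEES
      WHERE  NOT REGEXP_LIKE(TO_CHAR(EMPLOYEE_ID), '^([0-9]{3})$');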

    Read the article

  • Closure Tables - Is this enough data to display a tree view?

    - by James Pitt
    Here is the table I have created by testing the closure table method:

      | id  | parentId | childId | hops |
      |-----|----------|---------|------|
      | 270 |        6 |       6 |    0 |
      | 271 |        7 |       7 |    0 |
      | 272 |        8 |       8 |    0 |
      | 273 |        9 |       9 |    0 |
      | 276 |       10 |      10 |    0 |
      | 281 |        9 |      10 |    1 |
      | 282 |        7 |       9 |    1 |
      | 283 |        7 |      10 |    2 |
      | 285 |        7 |       8 |    1 |
      | 286 |        6 |       7 |    1 |
      | 287 |        6 |       9 |    2 |
      | 288 |        6 |      10 |    3 |
      | 289 |        6 |       8 |    2 |
      | 293 |        6 |       9 |    1 |
      | 294 |        6 |      10 |    2 |

    I am trying to create a simple tree of this using PHP. There does not seem to be enough data to create the tree. For example, when I look purely at parentId = 6:

      - Part 6
        - Part 7
          - ?
          - ?
        - Part 9
          - ?
          - ?

    We know that parts 8 and 10 exist below part 7 or 9, but not which. We know that part 10 exists at both 3 and 4 nodes deep, but where? If I look at other data in the table, it is possible to tell it should be:

      - Part 6
        - Part 7
          - Part 9
            - Part 10
        - Part 9
          - Part 10

    I thought one of the benefits of closure tables was that there was no need for recursive queries? Could you help explain what I am doing wrong?

    EDIT: For clarification, this is a mapping table. There is another table called "parts" which has a column called part_id that correlates to both the parentId and childId columns in the "closure" table. The "id" column in the table above (closure) is just for the purposes of maintaining a primary key; it is not really necessary. The method I have used to create this closure table is described in the following article: http://dirtsimple.org/2010/11/simplest-way-to-do-tree-based-queries.html

    EDIT2: It can have two and three hops. I will explain more easily by assigning names to the items:

      Part 6  = Bicycle
      Part 7  = Gears
      Part 8  = Chain
      Part 9  = Bolt
      Part 10 = Nut

    Nut is part of Bolt. The Bolt and Nut combo exists directly within Bicycle and within Gears, which is part of Bicycle. In relation to what method to use, I have looked at adjacency lists, edges, enumerated paths, closures, DAGs (networks) and the nested set model. I am still trying to work out what is what, but this is an extremely complex component database where there are multiple parents, and any modification to a sub-tree must propagate through the other trees. More importantly, there will be insertions, deletions and tree views, and I wish to avoid recursion during general use, even at the cost of database space and query time during entry.
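
    One way out, as a sketch: keep the closure rows, but build the display tree from the hops = 1 rows only, since those are exactly the direct parent-to-child edges; the deeper rows remain useful for whole-subtree fetches. Assuming the table is named closure as in the edit:

      -- direct edges of the subtree rooted at part 6, no recursion needed:
      -- first collect every node under 6, then keep the direct edges among them
      SELECT DISTINCT e.parentId, e.childId
      FROM   closure s                 -- s: all descendants of 6 (any depth)
      JOIN   closure e                 -- e: direct parent->child edges
             ON e.parentId = s.childId
      WHERE  s.parentId = 6
        AND  e.hops = 1;

    PHP can then nest the result in one pass, keyed on parentId. Note that with multiple parents (a part used in several assemblies) the structure is a DAG rather than a tree, which is why rows like 6 -> 9 legitimately appear at both one and two hops.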

    Read the article

  • Oracle Advanced Compression Webcast Replay Available

    - by [email protected]
    Did you miss our webcast "Save BIG on Storage - with Oracle Database 11g and Advanced Compression"? Don't worry, you can still register and view the recording including the full Q&A session with Tim Shetler and Bill Hodak. Click here to learn how Oracle Advanced Compression can reduce your disk space requirements for all types of data, improve query and storage performance and lower storage costs throughout the datacenter.

    Read the article

  • Frustrated with MythTV 0.26

    - by Mike
    I've been using MythTV for a while now. Until a hardware crash I had a 0.25 box running without problems. I had to get new hardware and am now in the process of setting up 0.26. Every time I pick a menu option, it hangs for 1 minute. Every time I try to start the backend, same thing. I pick a new theme in the frontend, but it never gets used. I try to test audio, but all I get is static (from the proper channels, though). I've set up the storage groups in the backend and put a video in /storage/videos, but the frontend won't see it when I scan for changes. I make changes in the frontend configuration and they don't get saved (or get lost randomly 2-3 reloads later). Obviously there is something I am doing catastrophically wrong, but I have no idea what. Are the storage groups not working yet? Maybe I need to delete all the storage group entries and just use the frontend override to set the path? I'm currently using LVM across 3 hard disks, and this has worked well for me in the past. I'd like to use storage groups, but frankly I don't see them working yet at all - especially not for videos (which is what we watch 99% of the time). Anyone have any suggestions for me to try before I just call 0.26 bad names and wipe the system?

    Read the article

  • How do I mount a CIFS share via FSTAB and give full RW to Guest

    - by Kendor
    I want to create a public folder that has full RW access. The problem with my configuration is that while Windows users have no issues as guests (they can read, write and delete), my Ubuntu client can't do the same: we can only write and read, but not create or delete. Here is the smb.conf from my server:

      [global]
          workgroup = WORKGROUP
          netbios name = FILESERVER
          server string = TurnKey FileServer
          os level = 20
          security = user
          map to guest = Bad Password
          passdb backend = tdbsam
          null passwords = yes
          admin users = root
          encrypt passwords = true
          obey pam restrictions = yes
          pam password change = yes
          unix password sync = yes
          passwd program = /usr/bin/passwd %u
          passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
          add user script = /usr/sbin/useradd -m '%u' -g users -G users
          delete user script = /usr/sbin/userdel -r '%u'
          add group script = /usr/sbin/groupadd '%g'
          delete group script = /usr/sbin/groupdel '%g'
          add user to group script = /usr/sbin/usermod -G '%g' '%u'
          guest account = nobody
          syslog = 0
          log file = /var/log/samba/samba.log
          max log size = 1000
          wins support = yes
          dns proxy = no
          socket options = TCP_NODELAY
          panic action = /usr/share/samba/panic-action %d

      [homes]
          comment = Home Directory
          browseable = no
          read only = no
          valid users = %S

      [storage]
          create mask = 0777
          directory mask = 0777
          browseable = yes
          comment = Public Share
          writeable = yes
          public = yes
          path = /srv/storage

    The following fstab entry doesn't yield full R/W access to the share:

      //192.168.0.5/storage /media/myname/TK-Public/ cifs rw 0 0

    This doesn't work either:

      //192.168.0.5/storage /media/myname/TK-Public/ cifs rw,guest,iocharset=utf8,file_mode=0777,dir_mode=0777,noperm 0 0

    Using the following location in Nemo/Nautilus without the share being mounted does work:

      smb://192.168.0.5/storage/

    Extra info: I just noticed that if I copy a file to the share after mounting, my Ubuntu client immediately makes "nobody" the owner, and the group "no group" has read and write, with everyone else read-only. What am I doing wrong?
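
    One server-side avenue worth testing, sketched below: since guest writes already land as nobody, forcing every connection to the guest account and opening the create masks makes all clients write as the same owner, so Linux clients stop tripping over files owned by nobody. The force user / force group options are standard smb.conf settings; whether this level of openness fits your security needs is a judgment call:

      [storage]
          path = /srv/storage
          comment = Public Share
          browseable = yes
          writeable = yes
          guest ok = yes               # synonym of 'public = yes'
          force user = nobody          # every client writes as the guest account
          force group = nogroup
          create mask = 0777
          directory mask = 0777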

    Read the article
