Search Results

  • How to store and remove dynamically allocated and automatic variables of a generic data type in a custom list

    - by Vineel Kumar Reddy
    Hi, I have created a list data structure for a generic data type, with each node declared as follows:

        struct Node {
            void *data;
            ....
            ....
        };

    Each node in my list holds a pointer to the actual data item (generic, could be anything) stored in the list. I have the following signature for adding a node to the list:

        AddNode(struct List *list, void *eledata);

    The problem is that when I remove a node, I also want to free the data block pointed to by the *data pointer inside the node structure that is going to be freed. At first, freeing the data block seems straightforward: free(data) (forget about the syntax). If data points to a block created by malloc, that call is fine, and we can free the block using the free function:

        int *x = (int*) malloc(sizeof(int));
        *x = 10;
        AddNode(list, (void*)x);   // x can be freed, as it was created using malloc

    But what if a node is created as follows?

        int x = 10;
        AddNode(list, (void*)&x);  // x cannot be freed, as it was not created using malloc

    Here we cannot call free on the variable x. How do I know about, or implement, the correct freeing behaviour for both dynamically allocated and automatic variables that are passed to my list? Thanks in advance.
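
    A common way to handle this (a sketch of one approach, not something from the question itself) is to make ownership explicit: the caller passes a destructor function pointer along with the data, and a NULL destructor means "the list does not own this data, never free it". Assuming the List/Node shapes above:

        #include <stdlib.h>

        /* Sketch: each node records how to free its payload. A NULL free_fn
           means the list does not own the data (e.g. it points to an
           automatic variable) and will never attempt to free it. */
        typedef void (*free_fn_t)(void *);

        struct Node {
            void *data;
            free_fn_t free_fn;
            struct Node *next;
        };

        struct List {
            struct Node *head;
        };

        void AddNode(struct List *list, void *eledata, free_fn_t free_fn) {
            struct Node *n = malloc(sizeof *n);
            n->data = eledata;
            n->free_fn = free_fn;    /* free for malloc'd data, NULL otherwise */
            n->next = list->head;
            list->head = n;
        }

        void RemoveHead(struct List *list) {
            struct Node *n = list->head;
            if (n == NULL) return;
            list->head = n->next;
            if (n->free_fn != NULL)
                n->free_fn(n->data); /* freed only if the list owns it */
            free(n);
        }

    Callers then write AddNode(list, x, free) for heap-allocated data and AddNode(list, &x, NULL) for automatic variables; the list itself never has to guess where a pointer came from.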

  • Manually wiring up unobtrusive jquery validation client-side without Model/Data Annotations, MVC3

    - by cmorganmcp
    After searching and experimenting for two days, I relent. What I'd like to do is manually wire up injected HTML with jQuery validation. I'm getting a simple string array back from the server and creating a select with the strings as options. The static fields on the form validate fine. I've been trying the following:

        var dates = $("<select id='ShiftDate' data-val='true' data-val-required='Please select a date'>");
        dates.append("<option value=''>-Select a Date-</option>");
        for (var i = 0; i < data.length; i++) {
            dates.append("<option value='" + data[i] + "'>" + data[i] + "</option>");
        }
        $("fieldset", addShift).append($("<p>")
            .append("<label for='ShiftDate'>Shift Date</label>\r")
            .append(dates)
            .append("<span class='field-validation-valid' data-valmsg-for='ShiftDate' data-valmsg-replace='true'></span>"));

        // I tried the code below as well, instead of adding the data-val
        // attributes and span manually, with no luck:
        dates.rules("add", { required: true, messages: { required: "Please select a date" } });

        // Thought this would do it when I came across several posts, but it didn't:
        $.validator.unobtrusive.parse(dates.closest("form"));

    I know I could create a view model, decorate it with a required attribute, create a SelectList server-side and send that, but it's more of a "how would I do this" situation now. Can anyone shed light on why the above code wouldn't work as I expect? -chad
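
    One likely culprit (an assumption based on how jquery.validate caches its state, not something stated in the question): $.validator.unobtrusive.parse() does nothing on a form whose validator was already created during the initial page load, so elements injected afterwards never get rules attached. A commonly suggested workaround is to drop the cached validator objects and re-parse the whole form:

        // Sketch: force unobtrusive validation to re-scan a form after
        // injecting new inputs. Assumes jquery.validate and
        // jquery.validate.unobtrusive are loaded.
        var form = dates.closest("form");

        // Throw away the validator state built when the page first loaded...
        form.removeData("validator");
        form.removeData("unobtrusiveValidation");

        // ...then re-parse, so the data-val-* attributes on the injected
        // <select> are read along with the static fields.
        $.validator.unobtrusive.parse(form);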

  • Approach for caching data from data logger

    - by filip-fku
    Greetings,

    I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period, and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs need to be acquired: data for at least a few days. The hardware interface is a UART-to-USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days/weeks.

    What I would like to do is improve the perceived performance for the user. I realize that, with the hardware speed limitation, the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if ever requested again.

    The approach I have been considering is to use a lightweight database, like SQL Server CE, that can store the data logs as they are received. I am then hoping to search the cache first, prior to querying the device for logs. The cache would be updated with any logs obtained by the request that were not already cached.

    Finally, my question: would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!
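
    A sketch of the cache-first read path described above (illustrative only: the LogEntry type, device interface, and cache API are all assumptions, not the poster's real code):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Illustrative types standing in for the real app's log record,
        // SQL CE-backed cache, and UART/USB device link.
        public class LogEntry { public DateTime Timestamp; public double[] Values; }

        public interface ILogCache
        {
            List<LogEntry> Query(DateTime from, DateTime to);
            void Insert(IEnumerable<LogEntry> entries);
        }

        public interface IDeviceLink
        {
            List<LogEntry> Download(DateTime from, DateTime to); // ~30 logs/sec
        }

        public class CachingLogReader
        {
            private readonly ILogCache cache;
            private readonly IDeviceLink device;

            public CachingLogReader(ILogCache cache, IDeviceLink device)
            {
                this.cache = cache;
                this.device = device;
            }

            // Serve from the cache first; pull only the uncovered tail of the
            // requested range over the slow link, then cache it for next time.
            public List<LogEntry> GetLogs(DateTime from, DateTime to)
            {
                var cached = cache.Query(from, to);

                // Simplification: assumes the cache covers a contiguous range
                // starting at 'from'; a real version would track gaps.
                var gapStart = cached.Count > 0 ? cached.Max(e => e.Timestamp) : from;
                if (gapStart < to)
                {
                    var fresh = device.Download(gapStart, to);
                    cache.Insert(fresh);
                    cached.AddRange(fresh);
                }
                return cached;
            }
        }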

  • Data not transferred from form to MySQL table (updating of data is not happening)

    - by Jimson
    Hi all, and thanks in advance to all for this. I tried and was unable to find the answer I am looking for. My problem is that I am unable to update the values entered in the form. I have attached all the files; I'm using a MySQL database to fetch data.

    What happens is that I'm able to add and delete records from the form using AJAX and PHP scripts against the MySQL database, but I am not able to update data that was retrieved from the database. The file structure is as follows:

    - index.php is a file with AJAX functions. It displays the form for adding new data to MySQL (using the save.php file), and the list of all records is viewed without refreshing the page (calling load-list.php from index.php to view all records works fine, and save.php saves the form data).
    - Delete is an AJAX function called from index.php to delete a record from the MySQL database (the function calling delete.php works fine).
    - Update is an AJAX function called from index.php to update data using update-form.php, by retrieving a specific record from the MySQL table (works fine).

    The problem lies in updating data from update-form.php to update.php (in which the update query for MySQL is written). I have tried many things, and finally figured out that the data is not being transferred from update-form.php to update.php: there is a small problem in the jQuery AJAX function where it is not transferring data to the update.php page. Can anyone correct this? I will be grateful. Please find the link below for all files: link to get my form files
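
    Without the actual files one can only guess, but a frequent cause of exactly this symptom is the AJAX call never serializing the form fields. A minimal working shape for the update call (the form id, element ids and URLs here are assumptions matching the description):

        // Sketch: send every field of the update form to update.php and
        // refresh the record list on success, without reloading the page.
        $("#update-form").submit(function (e) {
            e.preventDefault();                 // don't do a normal page post
            $.ajax({
                type: "POST",
                url: "update.php",
                data: $(this).serialize(),      // must include the record id field
                success: function (response) {
                    $("#record-list").load("load-list.php");
                }
            });
        });

    On the PHP side, update.php then reads the same names out of $_POST. If serialize() sends nothing, check that every input sits inside the <form> tag and has a name attribute; that is the most common reason the data never arrives.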

  • Retain numerical precision in an R data frame?

    - by David
    When I create a data frame from numeric vectors, R seems to truncate the value below the precision that I require in my analysis:

        data.frame(x = 0.99999996)

    returns 1 (see update 1). I am stuck when fitting spline(x, y) and two of the x values are set to 1 due to rounding while y changes. I could hack around this, but I would prefer to use a standard solution if available.

    Example

    Here is an example data set:

        d <- data.frame(x = c(0.668732936336141, 0.95351462456867, 0.994620622127435,
                              0.999602102672081, 0.999987126195509, 0.999999955814133,
                              0.999999999999966),
                        y = c(38.3026509783688, 11.5895099585560, 10.0443344234229,
                              9.86152339768516, 9.84461434575695, 9.81648333804257,
                              9.83306725758297))

    The following solution works, but I would prefer something that is less subjective:

        plot(d$x, d$y, ylim = c(0, 50))
        lines(spline(d$x, d$y), col = 'grey')                      # bad fit
        lines(spline(d[-c(4:6), ]$x, d[-c(4:6), ]$y), col = 'red') # reasonable fit

    Update 1

    Since posting this question, I realize that this will return 1 even though the data frame still contains the original value, e.g.

        > dput(data.frame(x = 0.99999999996))

    returns

        structure(list(x = 0.99999999996), .Names = "x", row.names = c(NA, -1L), class = "data.frame")

    Update 2

    After using dput to post this example data set, and some pointers from Dirk, I can see that the problem is not in the truncation of the x values but in the limits of the numerical errors in the model that I have used to calculate y. This justifies dropping a few of the equivalent data points (as in the example red line).
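
    For completeness (a sketch, not part of the original question): the data frame stores the full double-precision value; only the default print format rounds it. Raising the digits setting makes that visible:

        # The data frame keeps full double precision; only printing rounds.
        d <- data.frame(x = 0.99999996)
        print(d)               # shows 1 under the default options(digits = 7)
        print(d, digits = 15)  # shows 0.99999996
        d$x == 1               # FALSE: the stored value was never truncated
        options(digits = 15)   # or raise the default for the whole session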

  • Creation of model in Core Data on the fly

    - by user1740045
    How can we create a model in Core Data on the fly, i.e. getting the schema of the database from somewhere and then creating a Core Data object graph?

    Question: Yes, that's fine; agreed with all the advantages. But can anybody tell me, practically, what the benefit is of integrating Core Data into a project instead of using SQL directly?

    1. No need to write SQL boilerplate code [but you need to learn the Core Data model (steep curve)].
    2. We can undo and redo changes [but practically, who needs it?].
    3. We can migrate to another schema [but that can be done with SQLite as well; you just need to add another field to the table].
    4. For, say, aggregation on some field in a table: in Core Data we need to loop through Core Data objects, whereas in SQLite we write the SQLite boilerplate code once and then the basic aggregation SQL query, which is easy to write; only the length of code increases. But in the case of Core Data, there is a lot to learn.

    So apart from reducing the length of code, does it actually add value to the project, in terms of memory efficiency, performance, etc.?

    PS: If anybody has actually worked with Core Data (model creation on the fly), please share some pointers if possible. Thanks!
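
    On the "model on the fly" part of the question: Core Data does allow the managed object model to be built entirely in code instead of from a compiled .xcdatamodel, which is how a schema fetched at runtime could be turned into an object graph. A minimal sketch (the "Job" entity and its attributes are made-up placeholders):

        #import <CoreData/CoreData.h>

        // Sketch: build an NSManagedObjectModel programmatically, e.g. from
        // a schema description downloaded at runtime.
        static NSManagedObjectModel *MakeJobModel(void)
        {
            NSManagedObjectModel *model = [[NSManagedObjectModel alloc] init];

            NSEntityDescription *job = [[NSEntityDescription alloc] init];
            job.name = @"Job";
            job.managedObjectClassName = @"NSManagedObject"; // no custom class

            NSAttributeDescription *jobName = [[NSAttributeDescription alloc] init];
            jobName.name = @"jobName";
            jobName.attributeType = NSStringAttributeType;
            jobName.optional = NO;

            NSAttributeDescription *startDate = [[NSAttributeDescription alloc] init];
            startDate.name = @"startDate";
            startDate.attributeType = NSDateAttributeType;

            job.properties = [NSArray arrayWithObjects:jobName, startDate, nil];
            model.entities = [NSArray arrayWithObject:job];

            // The result can back an NSPersistentStoreCoordinator as usual.
            return model;
        }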

  • VS 2010 SP1 and SQL CE

    - by ScottGu
    Last month we released the Beta of VS 2010 Service Pack 1 (SP1). You can learn more about the VS 2010 SP1 Beta from Jason Zander's two blog posts about it, and from Scott Hanselman's blog post that covers some of the new capabilities enabled with it. You can download and install the VS 2010 SP1 Beta here.

    Last week I blogged about the new Visual Studio support for IIS Express that we are adding with VS 2010 SP1. In today's post I'm going to talk about the new VS 2010 SP1 tooling support for SQL CE, and walk through some of the cool scenarios it enables.

    SQL CE – What is it and why should you care?

    SQL CE is a free, embedded database engine that enables easy database storage.

    No Database Installation Required

    SQL CE does not require you to run a setup or install a database server in order to use it. You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine. No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment.

    SQL CE runs in-memory within your ASP.NET application and will start up when you first access a SQL CE database, and will automatically shut down when your application is unloaded. SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET applications.

    Works with Existing Data APIs

    SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax. This means you can use existing data APIs like ADO.NET, as well as higher-level ORMs like Entity Framework and NHibernate, with SQL CE. This enables you to use the same data programming skills and data APIs you know today.

    Supports Development, Testing and Production Scenarios

    SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios. With the SQL CE 4 release we've done the engineering work to ensure that SQL CE won't crash or deadlock when used in a multi-threaded server scenario (like ASP.NET). This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments. Starting with SQL CE 4 you can use it in a web server as well. There are no license restrictions with SQL CE. It is also totally free.

    Easy Migration to SQL Server

    SQL CE is an embedded database – which makes it ideal for development, testing, and light-usage scenarios. For high-volume sites and applications you'll probably want to migrate your database to use SQL Server Express (which is free), SQL Server or SQL Azure. These servers enable much better scalability, more development features (including features like stored procedures – which aren't supported with SQL CE), as well as more advanced data management capabilities.

    We'll ship migration tools that enable you to optionally take SQL CE databases and easily upgrade them to use SQL Server Express, SQL Server, or SQL Azure. You will not need to change your code when upgrading a SQL CE database to SQL Server or SQL Azure. Our goal is to enable you to simply change the database connection string in your web.config file and have your application just work.

    New Tooling Support for SQL CE in VS 2010 SP1

    VS 2010 SP1 includes much improved tooling support for SQL CE, and adds support for using SQL CE within ASP.NET projects for the first time. With VS 2010 SP1 you can now:

    - Create new SQL CE databases
    - Edit and modify SQL CE database schema and indexes
    - Populate SQL CE databases with data
    - Use the Entity Framework (EF) designer to create model layers against SQL CE databases
    - Use EF Code First to define model layers in code, then create a SQL CE database from them, and optionally edit the DB with VS
    - Deploy SQL CE databases to remote servers using Web Deploy and optionally convert them to full SQL Server databases

    You can take advantage of all of the above features from within both ASP.NET Web Forms and ASP.NET MVC based projects.

    Download

    You can enable SQL CE tooling support within VS 2010 by first installing VS 2010 SP1 (beta). Once SP1 is installed, you'll also then need to install the SQL CE Tools for Visual Studio download. This is a separate download that enables the SQL CE tooling support for VS 2010 SP1.

    Walkthrough of Two Scenarios

    In this blog post I'm going to walk through how you can take advantage of SQL CE and VS 2010 SP1 using both an ASP.NET Web Forms and an ASP.NET MVC based application. Specifically, we'll walk through:

    - How to create a SQL CE database using VS 2010 SP1, then use the EF4 visual designers in Visual Studio to construct a model layer from it, and then display and edit the data using an ASP.NET GridView control.
    - How to use an EF Code First approach to define a model layer using POCO classes and then have EF Code First "auto-create" a SQL CE database for us based on our model classes. We'll then look at how we can use the new VS 2010 SP1 support for SQL CE to inspect the database that was created, populate it with data, and later make schema changes to it. We'll do all this within the context of an ASP.NET MVC based application.

    You can follow the two walkthroughs below on your own machine by installing VS 2010 SP1 (beta) and then installing the SQL CE Tools for Visual Studio download (which is a separate download that enables SQL CE tooling support for VS 2010 SP1).

    Walkthrough 1: Create a SQL CE Database, Create EF Model Classes, Edit the Data with a GridView

    This first walkthrough will demonstrate how to create and define a SQL CE database within an ASP.NET Web Forms application. We'll then build an EF model layer for it and use that model layer to enable data editing scenarios with an <asp:GridView> control.

    Step 1: Create a new ASP.NET Web Forms Project

    We'll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET Web Forms project. We'll use the "ASP.NET Web Application" project template option so that it has a default UI skin implemented.

    Step 2: Create a SQL CE Database

    Right-click on the "App_Data" folder within the created project and choose the "Add->New Item" menu command. This will bring up the "Add Item" dialog box. Select the "SQL Server Compact 4.0 Local Database" item (new in VS 2010 SP1) and name the database file to create "Store.sdf". Note that SQL CE database files have a .sdf filename extension. Place them within the /App_Data folder of your ASP.NET application to enable easy deployment. When we click the "Add" button, a Store.sdf file is added to our project.

    Step 3: Adding a "Products" Table

    Double-clicking the "Store.sdf" database file will open it up within the Server Explorer tab. Since it is a new database, there are no tables within it. Right-click on the "Tables" icon and choose the "Create Table" menu command to create a new database table. We'll name the new table "Products" and add 4 columns to it. We'll mark the first column as a primary key (and make it an identity column so that its value will automatically increment with each new row). When we click "OK", our new Products table will be created in the SQL CE database.

    Step 4: Populate with Data

    Once our Products table is created, it will show up within the Server Explorer. We can right-click it and choose the "Show Table Data" menu command to edit its data, and add a few sample rows.

    Step 5: Create an EF Model Layer

    We have a SQL CE database with some data in it – let's now create an EF model layer that will provide a way for us to easily query and update data within it.

    Let's right-click on our project and choose the "Add->New Item" menu command. This will bring up the "Add New Item" dialog – select the "ADO.NET Entity Data Model" item within it and name it "Store.edmx". This will add a new Store.edmx item to our Solution Explorer and launch a wizard that allows us to quickly create an EF model. Select the "Generate from Database" option and click next. Choose to use the Store.sdf SQL CE database we just created and then click next again. The wizard will then ask you what database objects you want to import into your model; choose to import the "Products" table we created earlier.

    When we click the "Finish" button, Visual Studio will open up the EF designer. It will have a Product entity already on it that maps to the "Products" table within our SQL CE database. The VS 2010 SP1 EF designer works exactly the same with SQL CE as it does already with SQL Server and SQL Express. The Product entity will be persisted as a class (called "Product") that we can programmatically work against within our ASP.NET application.

    Step 6: Compile the Project

    Before using your model layer you'll need to build your project. Do a Ctrl+Shift+B to compile the project, or use the Build->Build Solution menu command.

    Step 7: Create a Page that Uses our EF Model Layer

    Let's now create a simple ASP.NET Web Form that contains a GridView control that we can use to display and edit our Products data (via the EF model layer we just created).

    Right-click on the project and choose the Add->New Item command. Select the "Web Form from Master Page" item template, and name the page you create "Products.aspx". Base the master page on the "Site.Master" template that is in the root of the project. Add an <h2>Products</h2> heading to the new page, and add an <asp:GridView> control within it. Then click the "Design" tab to switch into design view.

    Select the GridView control, and then click the top-right corner to display the GridView's "Smart Tasks" UI. Choose the "New data source…" dropdown option. This will bring up a dialog which allows you to pick your data source type. Select the "Entity" data source option – which will allow us to easily connect our GridView to the EF model layer we created earlier. This will bring up another dialog that allows us to pick our model layer. Select the "StoreEntities" option in the dropdown – which is the EF model layer we created earlier. Then click next – which will allow us to pick which entity within it we want to bind to.

    Select the "Products" entity in the dialog – which indicates that we want to bind against the "Product" entity class we defined earlier. Then click the "Enable automatic updates" checkbox to ensure that we can both query and update Products. When you click "Finish", VS will wire up an <asp:EntityDataSource> to your <asp:GridView> control. The last two steps we'll do will be to click the "Enable Editing" checkbox on the grid (which will cause the grid to display an "Edit" link on each row) and (optionally) use the Auto Format dialog to pick a UI template for the grid.

    Step 8: Run the Application

    Let's now run our application and browse to the /Products.aspx page that contains our GridView. When we do so, we'll see a grid UI of the products within our SQL CE database. Clicking the "Edit" link for any of the rows will allow us to edit its values. When we click "Update", the GridView will post back the values, persist them through our EF model layer, and ultimately save them within our SQL CE database.

    Learn More about using EF with ASP.NET Web Forms

    Read this tutorial series on the http://asp.net site to learn more about how to use EF with ASP.NET Web Forms. The tutorial series uses SQL Express as the database – but the nice thing is that all of the same steps/concepts can also now be done with SQL CE.

    Walkthrough 2: Using EF Code First with SQL CE and ASP.NET MVC 3

    We used a database-first approach with the sample above – where we first created the database, and then used the EF designer to create model classes from the database. In addition to supporting a designer-based development workflow, EF also enables a more code-centric option which we call "code-first development". Code-first development enables a pretty sweet development workflow. It enables you to:

    - Define your model objects by simply writing "plain old classes" with no base classes or visual designer required
    - Use a "convention over configuration" approach that enables database persistence without explicitly configuring anything
    - Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping
    - Optionally auto-create a database based on the model classes you define – allowing you to start from code first

    I've done several blog posts about EF Code First in the past – I really think it is great. The good news is that it also works very well with SQL CE. The combination of SQL CE, EF Code First, and the new VS tooling support for SQL CE enables a pretty nice workflow. Below is a simple example of how you can use them to build a simple ASP.NET MVC 3 application.

    Step 1: Create a new ASP.NET MVC 3 Project

    We'll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET MVC 3 project. We'll use the "Internet Project" template so that it has a default UI skin implemented.

    Step 2: Use NuGet to Install EFCodeFirst

    Next we'll use the NuGet package manager (automatically installed by ASP.NET MVC 3) to add the EFCodeFirst library to our project. We'll use the Package Manager command shell to do this. Bring up the package manager console within Visual Studio by selecting the View->Other Windows->Package Manager Console menu command.
    Then type:

        install-package EFCodeFirst

    within the package manager console to download the EFCodeFirst library and have it be added to our project.

    Step 3: Build Some Model Classes

    Using a "code first" based development workflow, we will create our model classes first (even before we have a database). We create these model classes by writing code. For this sample, we will right-click on the "Models" folder of our project and add three classes to our project: "Dinner", "RSVP" and "NerdDinners" (a sketch of these classes appears after Step 6 below). The "Dinner" and "RSVP" model classes are "plain old CLR objects" (aka POCO). They do not need to derive from any base classes or implement any interfaces, and the properties they expose are standard .NET data types. No data persistence attributes or data code has been added to them. The "NerdDinners" class derives from the DbContext class (which is supplied by EFCodeFirst) and handles the retrieval/persistence of our Dinner and RSVP instances from a database.

    Step 4: Listing Dinners

    We've written all of the code necessary to implement our model layer for this simple project. Let's now expose and implement the URL /Dinners/Upcoming within our project. We'll use it to list upcoming dinners that happen in the future.

    We'll do this by right-clicking on our "Controllers" folder and selecting the "Add->Controller" menu command. We'll name the controller we want to create "DinnersController". We'll then implement an "Upcoming" action method within it that lists upcoming dinners using our model layer above, using a LINQ query to retrieve the data and pass it to a view to render. We'll then right-click within our Upcoming method and choose the "Add->View" menu command to create an "Upcoming" view template that displays our dinners. We'll use the "empty" template option within the "Add View" dialog and write the view template using Razor.

    Step 5: Configure our Project to use a SQL CE Database

    We have finished writing all of our code – our last step will be to configure a database connection string to use. We will point our NerdDinners model class to a SQL CE database by adding a <connectionString> entry to the web.config file at the top of our project.

    EF Code First uses a default convention where context classes will look for a connection string that matches the DbContext class name. Because we created a "NerdDinners" class earlier, we've also named our connection string "NerdDinners". We are configuring our connection string to use SQL CE as the database, and telling it that our SQL CE database file will live within the \App_Data directory of our ASP.NET project.

    Step 6: Running our Application

    Now that we've built our application, let's run it! We'll browse to the /Dinners/Upcoming URL – doing so will display an empty list of upcoming dinners.

    You might ask – but where did it query to get the dinners from? We didn't explicitly create a database?!? One of the cool features that EF Code First supports is the ability to automatically create a database (based on the schema of our model classes) when the database it points at doesn't exist. Above we configured EF Code First to point at a SQL CE database in the \App_Data\ directory of our project. When we ran our application, EF Code First saw that the SQL CE database didn't exist and automatically created it for us.
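
    The original post showed the model classes and the web.config entry as screenshots, which do not survive in this text capture. Below is a sketch of what they plausibly looked like; the Dinner/RSVP/NerdDinners names come from the post, but the exact property lists are assumptions:

        // Sketch of the Step 3 model classes (the post showed these as
        // screenshots; the property lists are assumed).
        using System;
        using System.Collections.Generic;
        using System.Data.Entity;  // DbContext/DbSet from the EFCodeFirst package

        public class Dinner
        {
            public int DinnerID { get; set; }
            public string Title { get; set; }
            public DateTime EventDate { get; set; }
            public string Address { get; set; }
            public string UrlLink { get; set; }  // added later, in Step 8
            public virtual ICollection<RSVP> RSVPs { get; set; }
        }

        public class RSVP
        {
            public int RsvpID { get; set; }
            public int DinnerID { get; set; }
            public string AttendeeEmail { get; set; }
            public virtual Dinner Dinner { get; set; }
        }

        // Handles retrieval/persistence of Dinner and RSVP instances, and is
        // the name EF Code First matches against the connection string.
        public class NerdDinners : DbContext
        {
            public DbSet<Dinner> Dinners { get; set; }
            public DbSet<RSVP> RSVPs { get; set; }
        }

        // And the Step 5 web.config entry, in its typical (assumed) form:
        //
        //   <connectionStrings>
        //     <add name="NerdDinners"
        //          connectionString="Data Source=|DataDirectory|NerdDinners.sdf"
        //          providerName="System.Data.SqlServerCe.4.0" />
        //   </connectionStrings>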

    Step 7: Using VS 2010 SP1 to Explore our newly created SQL CE Database

    Click the "Show All Files" icon within the Solution Explorer and you'll see the "NerdDinners.sdf" SQL CE database file that was automatically created for us by EF Code First within the \App_Data\ folder. We can optionally right-click on the file and "Include in Project" to add it to our solution. We can also double-click the file (regardless of whether it is added to the project) and VS 2010 SP1 will open it as a database we can edit within the "Server Explorer" tab of the IDE.

    When we double-click our NerdDinners.sdf SQL CE file, we can drill in to see the schema of the Dinners and RSVPs tables in the tree explorer. Notice how two tables – Dinners and RSVPs – were automatically created for us within our SQL CE database. This was done by EF Code First when we accessed the NerdDinners class by running our application above. We can right-click on a table and use the "Show Table Data" command to enter some upcoming dinners in our database, using the built-in editor that VS 2010 SP1 supports to populate our table data. And now when we hit "refresh" on the /Dinners/Upcoming URL within our browser, we'll see some upcoming dinners show up.

    Step 8: Changing our Model and Database Schema

    Let's now modify the schema of our model layer and database, and walk through one way that the new VS 2010 SP1 tooling support for SQL CE can make this easier. With EF Code First you typically start making database changes by modifying the model classes. For example, let's add an additional string property called "UrlLink" to our "Dinner" class. We'll use this to point to a link for more information about the event.

    Now when we re-run our project and visit the /Dinners/Upcoming URL, we'll see an error thrown. We are seeing this error because EF Code First automatically created our database, and by default when it does this it adds a table that helps track whether the schema of our database is in sync with our model classes. EF Code First helpfully throws an error when they become out of sync – making it easier to track down issues at development time that you might otherwise only find (via obscure errors) at runtime. Note that if you do not want this feature you can turn it off by changing the default conventions of your DbContext class (in this case our NerdDinners class) to not track the schema version.

    Our model classes and database schema are out of sync in the above example – so how do we fix this? There are two approaches you can use today:

    1. Delete the database and have EF Code First automatically re-create the database based on the new model class schema (losing the data within the existing DB)
    2. Modify the schema of the existing database to make it in sync with the model classes (keeping/migrating the data within the existing DB)

    There are a couple of ways you can do the second approach above. Below I'm going to show how you can take advantage of the new VS 2010 SP1 tooling support for SQL CE to use a database schema tool to modify our database structure. We are also going to be supporting a "migrations" feature with EF in the future that will allow you to automate/script database schema migrations programmatically.

    Step 9: Modify our SQL CE Database Schema using VS 2010 SP1

    The new SQL CE tooling support within VS 2010 SP1 makes it easy to modify the schema of our existing SQL CE database. To do this, we'll right-click on our "Dinners" table and choose the "Edit Table Schema" command. This will bring up the "Edit Table" dialog. We can rename, change or delete any of the existing columns in our table, or click at the bottom of the column listing and type to add a new column. Here we add a new "UrlLink" column of type "nvarchar" (since our property is a string). When we click OK, our database will be updated to have the new column and our schema will now match our model classes.

    Because we are manually modifying our database schema, there is one additional step we need to take to let EF Code First know that the database schema is in sync with our model classes. As I mentioned earlier, when a database is automatically created by EF Code First it adds an "EdmMetadata" table to the database to track schema versions (and hash our model classes against them to detect mismatches between our model classes and the database schema). Since we are manually updating and maintaining our database schema, we don't need this table – and can just delete it. This will leave us with just the two tables that correspond to our model classes. And now when we re-run our /Dinners/Upcoming URL, it will display the dinners correctly.

    One last touch we could do would be to update our view to check for the new UrlLink property and render an <a> link to it if an event has one. And now when we refresh /Dinners/Upcoming, we will see hyperlinks for the events that have a UrlLink stored in the database.

    Summary

    SQL CE provides a free, embedded database engine that you can use to easily enable database storage. With SQL CE 4 you can now take advantage of it within ASP.NET projects and applications (both Web Forms and MVC). VS 2010 SP1 provides tooling support that enables you to easily create, edit and modify SQL CE databases – as well as use the standard EF designer against them. This allows you to re-use your existing skills and data knowledge while taking advantage of an embedded database option.

    This is useful both for small applications (where you don't need the scalability of a full SQL Server), as well as for development and testing scenarios – where you want to be able to rapidly develop/test your application without having a full database instance. SQL CE makes it easy to later migrate your data to a full SQL Server or SQL Azure instance if you want to – without having to change any code in your application. All we would need to change in the above two scenarios is the <connectionString> value within the web.config file in order to have our code run against a full SQL Server. This provides the flexibility to scale up your application, starting from a small embedded database solution, as needed.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

  • data recovery from deleted partition

    - by anique
    I recently merged my hard disk partition F: into C: using a partition manager. I didn't need the data on F:, but unfortunately I forgot to back up some important office docs on that partition before the manager formatted F: and merged the space into C:. Is it possible to recover data from a deleted partition, and how would I do that? Thanks.

  • Data Recovery from Tru64 Unix

    - by RBA
    Hi. I accidentally deleted a directory on a Tru64 Unix server. Is it possible to get the directory back? It contained many other directories, and I need to recover the data somehow, as it was a very important directory. Kindly let me know the process, or some good website that covers this particular scenario. Thanks.

  • Invitation: Oracle EMEA Analytics & Data Integration Partner Forum, 12th November 2012, London (UK)

    - by rituchhibber
    INVITATION: ORACLE EMEA ANALYTICS & DATA INTEGRATION PARTNER FORUM, MONDAY 12TH NOVEMBER 2012, LONDON (UK)

    Dear partner,

    Come to hear the latest news from Oracle OpenWorld about Oracle BI & Data Integration, and propel your business growth as an Oracle partner. This event should appeal to BI or Data Integration specialised partners, executives, sales, pre-sales and solution architects, with a choice of participation in the plenary day and then a set of special interest (technical) sessions. The follow-on breakout sessions from the 13th November provide deeper dives and technical training for those of you who wish to stay for more detailed and hands-on workshops.

    Keynote: Andrew Sutherland, SVP Oracle Technology, on how Data Integration can bring great value to your customers by moving data to transform their business experiences, and on Oracle pan-EMEA Data Integration business development and opportunities for partners. Hot agenda items will include:

    - The Fusion Middleware stack: engineered to work together
    - A complete Analytics and Data Integration solution architecture: Big Data and Little Data combined
    - In-Memory Analytics for extreme insight
    - Latest product development roadmap for Data Integration and Analytics

    Venue: Oracle's London City Moorgate offices.

    During this event you can learn about partner success stories, participate in an array of breakout sessions, exchange information with other partners and enjoy a vibrant panel discussion. Places are limited. Register your seat today! To register for this event, CLICK HERE.

    Note: Registration for the conference and the deeper dives and technical training is free of charge to OPN member partners, but you will be responsible for your own travel and hotel expenses.

    Event Schedule

    November 12th: Day 1, main plenary session (full day, starting 10.30 am), with an Oracle-hosted dinner in the evening.

    November 13th onwards:
    - Architecture masterclass: IM Reference Architecture, Big Data and Little Data combined (1 day)
    - BI-Apps bootcamp (4 days)
    - Oracle Data Integrator and Oracle Enterprise Data Quality workshop (1 day)
    - GoldenGate workshop (1 day)

    For further information and detail, download the Agenda (pdf) or contact Michael Hallett at [email protected].

    We look forward to seeing you there.

    Best regards,

    Mike Hallett, Alliances and Channels Director, BI & EPM, Oracle EMEA, M: +44 7831 276 989, [email protected]
    Duncan Harvey, Business Development Director for Data Integration, M: +420 608 283, [email protected]
    Milomir Vojvodic, Business Development Manager for Data Integration, M: +420 608 283, [email protected]

  • Standard way of allowing general XML data

    - by Greg Jackson
    I'm writing a data gathering and reporting application that takes XML files as input, which will then be read, processed, and stored in a strongly-typed database. For example, an XML file for a "Job" might look like this:

        <Data type="Job">
          <ID>12345</ID>
          <JobName>MyJob</JobName>
          <StartDate>04/07/2012 10:45:00 AM</StartDate>
          <Files>
            <File name="a.jpeg" path="images\" />
            <File name="b.mp3" path="music\mp3\" />
          </Files>
        </Data>

    I'd like to use a schema to have a standard format for these input files (depending on what type of data is being used, for example "Job", "User", "View"), but I'd also like validation not to fail if extra data is provided. For example, perhaps a Job has additional properties such as "IsAutomated", "Requester", "EndDate", and so on. I don't particularly care about these extra properties. If they are included in the XML, I'll simply ignore them when I'm processing the XML file, and I'd like validation to do the same, without having to include in the schema every single possible property that a customer might provide me with.

    Is there a standard way of providing such a schema, or of allowing such a general XML file that can still be validated, without resorting to something as naïve (and potentially difficult to deal with) as the below?

        <Data type="Job">
          <Data name="ID">12345</Data>
          . . .
          <Data name="Files">
            <Data name="File">
              <Data name="Filename">a.jpeg</Data>
              <Data name="path">images</Data>
              . . .
            </Data>
          </Data>
        </Data>
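
    The standard schema-level answer (a sketch; the specific type choices are assumptions) is an xs:any wildcard with processContents="skip", which tells the validator to accept unknown trailing elements without validating them:

        <!-- Sketch: validates the known Job fields, ignores anything extra. -->
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="Data">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="ID" type="xs:integer"/>
                <xs:element name="JobName" type="xs:string"/>
                <xs:element name="StartDate" type="xs:string"/>
                <xs:element name="Files">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element name="File" maxOccurs="unbounded">
                        <xs:complexType>
                          <xs:attribute name="name" type="xs:string"/>
                          <xs:attribute name="path" type="xs:string"/>
                        </xs:complexType>
                      </xs:element>
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
                <!-- Extras like "IsAutomated" or "Requester" land here and
                     are skipped rather than failing validation. -->
                <xs:any minOccurs="0" maxOccurs="unbounded"
                        processContents="skip"/>
              </xs:sequence>
              <xs:attribute name="type" type="xs:string"/>
              <xs:anyAttribute processContents="skip"/>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    One caveat worth knowing: in XSD 1.0 the wildcard can only follow the known elements this cleanly; letting unknown elements interleave with known ones runs into the Unique Particle Attribution rule (XSD 1.1's xs:openContent relaxes this).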

  • SSAS: Utility to check you have the correct data types and sizes in your cube definition

    - by DrJohn
    This blog describes a tool I developed which allows you to compare the data types and data sizes found in the cube's data source view with the data types/sizes of the corresponding dimensional attribute.

    Why is this important? Well, when creating named queries in a cube's data source view, it is often necessary to use the SQL CAST or CONVERT operation to change the data type to something more appropriate for SSAS. This is particularly important when your cube is based on an Oracle data source or uses custom SQL queries rather than views in the relational database.

    The problem with BIDS is that if you change the underlying SQL query, the size of the data type in the dimension does not update automatically. This then causes problems during deployment, whereby processing the dimension fails because the data in the relational database is wider than that allowed by the dimensional attribute.

    In particular, if you use some of the string manipulation functions provided by SQL Server or Oracle in your queries, you may find that the 10-character string you expect suddenly turns into an 8,000-character monster. For example, the SQL Server function REPLACE returns a column with a width of 8,000 characters. So if you use this function in the named query in your DSV, you will get a column width of 8,000 characters. Although the Oracle REPLACE function is far more intelligent, the generated column size could still be way bigger than the maximum length of the data actually in the field.

    Now this may not be a problem when prototyping, but in your production cubes you really should clean up this kind of thing, as these massive strings will add to processing times and storage space. Similarly, you do not want to forget to change the size of the dimension attribute if your database columns increase in size.

    Introducing the CheckCubeDataTypes Utility

    The CheckCubeDataTypes application extracts all the data types and data sizes for all attributes in the cube and compares them to the data types and data sizes in the cube's data source view. It then generates an Excel CSV file which contains all this metadata, along with a flag indicating if there is a mismatch between the DSV and the dimensional attribute. Note that the app not only checks all the attribute keys but also the name and value columns for each attribute.

    Another benefit of having the metadata held in a CSV text file format is that you can place the file under source code control. This allows you to compare the metadata of the previous cube release with your new release to highlight problems introduced by new development.

    You can download the C# source code from here: CheckCubeDataTypes.zip

    In a typical example of the output CSV file, the last column flags a data size mismatch by showing TRUE.
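
    A sketch of the kind of comparison the tool performs, using AMO (Microsoft.AnalysisServices) to walk every attribute's columns; this illustrates the approach rather than reproducing the tool's actual source, and the server/database names are placeholders:

        using System;
        using Microsoft.AnalysisServices;  // AMO

        // Sketch: print dimension/attribute data types and sizes in CSV form.
        class DumpAttributeTypes
        {
            static void Main()
            {
                var server = new Server();
                server.Connect("Data Source=localhost");
                Database db = server.Databases["MyCubeDb"];

                foreach (Dimension dim in db.Dimensions)
                foreach (DimensionAttribute attr in dim.Attributes)
                {
                    // Each DataItem carries the OleDbType and declared size
                    // that must match the column in the data source view.
                    foreach (DataItem key in attr.KeyColumns)
                        Console.WriteLine("{0},{1},Key,{2},{3}",
                            dim.Name, attr.Name, key.DataType, key.DataSize);

                    if (attr.NameColumn != null)
                        Console.WriteLine("{0},{1},Name,{2},{3}",
                            dim.Name, attr.Name,
                            attr.NameColumn.DataType, attr.NameColumn.DataSize);
                }

                server.Disconnect();
                // Comparing these values against the DSV's column metadata
                // yields the TRUE/FALSE mismatch flag described above.
            }
        }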

  • SQL Server Manageability Series: how to change the default path of .cache files of a data collector? #sql #mdw #dba

    - by ssqa.net
    How do you change the default path of the .cache files of a data collector after the Management Data Warehouse (MDW) has been set up? This was the question asked by one of the DBAs at a client's site. I immediately enquired whether any folder had been specified while setting up the MDW, and the obvious answer was no, as it had been left at the default. This means all the .CACHE files are stored under the %C\TEMP directory, which may pose an out-of-disk-space problem on the server where the MDW is set up to collect. Going back... (read more)
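
    For reference, a sketch of the usual fix (the post itself is truncated above, so this is recalled from the data collector's documented msdb stored procedures rather than taken from it); collection sets generally need to be stopped and restarted for the change to take effect:

        USE msdb;
        GO
        -- Inspect the current data collector configuration, including the
        -- cache directory.
        SELECT parameter_name, parameter_value
        FROM dbo.syscollector_config_store;
        GO
        -- Point the collector's .CACHE files at a drive with room to grow.
        EXEC dbo.sp_syscollector_set_cache_directory
             @cache_directory = N'D:\DataCollectorCache';
        GO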

  • secure data transport between web server and database server

    - by atypicalgeek
    I asked this question on Stack Overflow and it was suggested to try here, so here goes... I'm planning on provisioning a web server and a database server in a server farm environment. They will be in the same network but not in the same domain; both are Windows Server 2008, and the database server runs SQL Server 2008. My question is: what is the best way to secure data in transport between the servers? I've looked into IPsec and SSL but am not sure how to go about implementing either.
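
    For the SSL option specifically, a sketch of the common setup (an assumption about a typical configuration, not a recommendation over IPsec): install a certificate the web server trusts on the database machine, optionally set "Force Encryption" in SQL Server Configuration Manager, and request encryption from the client side of the connection:

        using System.Data.SqlClient;

        // Sketch: ask for an encrypted channel to SQL Server 2008.
        // Host, database and credentials are placeholders.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "dbserver",
            InitialCatalog = "AppDb",
            IntegratedSecurity = false,    // SQL auth: the servers share no domain
            UserID = "app_user",
            Password = "********",
            Encrypt = true,                // encrypt traffic with SSL/TLS
            TrustServerCertificate = false // validate the server's certificate
        };

        using (var conn = new SqlConnection(builder.ConnectionString))
        {
            conn.Open();
            // ... queries now travel encrypted between the two servers
        }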

  • How to use data mining principles in this project?

    - by Simon
    I'm taking a data mining class this semester and we are free to choose our final project. For a few months I've been working on procedural planet rendering (something like this: http://www.youtube.com/watch?v=rL8zDgTlXso). Do you have any ideas about which data mining principles I could use to keep working on this project? Maybe I could try to generate interesting terrains from a set of real maps? Any publications on that subject? Any other ideas?

  • Should I keep investing into data structures and algorithms?

    - by 4bu3li
    These days, I'm investing heavily in data structures and algorithms and trying to solve some programming puzzles. I'm coding and solving in Java and Clojure. Am I wasting my time? Should I invest more in technologies and frameworks that I already know, in order to gain deeper knowledge (the ins and outs) and be able to code with them more quickly? By studying data structures and algorithms, am I going to become a better programmer, or are those subjects only important during college years?

  • Is ECC mandatory in SSD technology?

    - by Alexander Shcheblikin
    While shopping for an SSD, I have noticed that some manufacturers promote their "Pro" models as the ones sporting ECC data protection, and do not mention ECC in their budget models' descriptions. However, the Wikipedia article on flash memory states that "NAND relies on ECC to compensate for bits that may spontaneously fail during normal device operation." So the question is: does every SSD device use ECC behind the scenes for its normal operation, and is the advertised ECC "feature" just a marketing ploy?

  • Does moving a file outside NTFS lose data in alternate data streams?

    - by jay
    I have a lot of files on a machine running Windows Server 2008 which I want to move to a Fedora machine. How can I keep the attributes stored in, for example, media files (date taken, rating, length, etc.) while transferring them outside the realm of NTFS's alternate data streams? I'm aware that similar metadata exists in other file systems, but what happens when you move these files? And what's the best way to retain the metadata in other file systems?
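
    One way to see what is actually at risk before migrating (a sketch; the stream name shown is just the most common example): alternate data streams can be listed from the command line on Server 2008, so files carrying ADS-only metadata can be audited first:

        :: Sketch: enumerate NTFS alternate data streams before moving files.
        :: "dir /R" (Vista / Server 2008 and later) lists each stream, e.g.
        ::     photo.jpg:Zone.Identifier:$DATA
        dir /R C:\media

        :: A specific stream can be inspected by redirecting it through more:
        more < "C:\media\photo.jpg:Zone.Identifier"

    Note that much media metadata (EXIF "date taken", track length, ID3 tags) lives inside the file itself rather than in a stream, so it survives any copy; it is the ADS-only properties written by some Windows tools that a non-NTFS target drops.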

  • Free Data Recovery Software

    - by Morais
    I lost some of my very important files from my computer. I tried the Windows Vista recovery from a restore point but could not get the files back. Can someone please suggest some free data recovery tools? This is very urgent.

  • Data recovery for Mac

    - by josh3736
    I've got a Mac that won't boot and I'd like to recover whatever data I can before wiping the hard drive and reinstalling. I'm looking for something similar to TRK (which is Windows-centric) — boot from CD, mount the hard drive, and copy to a network share. I just noticed TRK does appear to support HFS+; has anyone had success with this?

  • Data Center Design and Preferences

    - by Warner
    When either selecting a data center as a co-location facility or designing a new one from scratch, what would your ideal specification be? Fundamentally, diversified power sources, multiple ISPs, redundant generators, UPS, cooling, and physical security are all desirable. What are the additional key requirements that someone might not consider on the first pass? What are the functional details someone might not consider during the initial high-level design?

  • Best practice for organizing/storing character/monster data in an RPG?

    - by eclecto
    Synopsis: I'm attempting to build a cross-platform RPG app in Adobe Flash Builder and am trying to figure out the best class hierarchy and the best way to store the static data used to build each of the individual "hero" and "monster" types. My programming experience, particularly in AS3, is embarrassingly small.

    My ultra-alpha method is to include a "_class" object in the constructor for each instance. The _class, in turn, is a static Object pulled from a class created specifically for that purpose, so things look something like this:

        // Character.as
        package {
            import flash.display.Sprite;

            public class Character extends Sprite {
                public var _strength:int;
                // etc.
                public function Character(_class:Object) {
                    _strength = _class._strength;
                    // etc.
                }
            }
        }

        // MonsterClasses.as
        package {
            public final class MonsterClasses extends Object {
                public static const Monster1:Object = {
                    _strength: 50
                    // etc.
                };
                // etc.
            }
        }

        // Some other class in which characters/monsters are created:
        // Create a new instance of Character
        var myMonster = new Character(MonsterClasses.Monster1);

    Another option I've toyed with is the idea of making each character class/monster type its own subclass of Character, but I'm not sure if it would be efficient or even make sense, considering that these classes would only be used to store variables and would add no new methods. On the other hand, it would make creating instances as simple as var myMonster = new Monster1; and potentially cut down on the overhead of having to read a class containing the data for, at a conservative preliminary estimate, over 150 monsters just to fish out the one monster I want (assuming, and I really have no idea, that such a thing might cause any kind of slowdown in execution).

    But long story short, I want a system that's both efficient at compile time and easy to work with during coding. Should I stick with what I've got or try a different method?

    As a subquestion, I'm also assuming here that the best way to store data that will be bundled with the final game and not read externally is simply to declare everything in AS3. It seems to me that if I used, say, XML or JSON I'd have to use the associated AS3 classes and methods to pull in the data, parse it, and convert it to AS3 object(s) anyway, so it would be inefficient. Right?
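
    One middle ground (a sketch of an alternative, not something proposed in the thread): keep the stats as plain data, but behind a static lookup keyed by name, so there is one small class instead of 150 subclasses and a monster's data object is only touched when requested:

        // MonsterData.as - sketch of a data-driven monster registry.
        package {
            public final class MonsterData {
                // One entry per monster type. Lookup only touches the entry
                // requested, so table size is not a per-instantiation cost.
                private static const TYPES:Object = {
                    "Monster1": { _strength: 50, _hp: 120 },
                    "Monster2": { _strength: 35, _hp: 80 }
                    // ... remaining ~150 entries
                };

                public static function lookup(name:String):Object {
                    return TYPES[name];
                }
            }
        }

        // Usage with the Character class from the question:
        // var myMonster:Character = new Character(MonsterData.lookup("Monster1"));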

  • Changing filesystem types "safely"

    - by warren
    Back in Windows 95 OSR2 (I believe), there was a conversion tool that would take your extant FAT16 partition and change it to FAT32 non-destructively (most of the time). Are there any tools like that now for going from one file system type to another in situ, without destroying the data? For example, from ext3 to ext4? Or NTFS to XFS?
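
    For the ext3-to-ext4 case specifically, an in-place upgrade path does exist in e2fsprogs (a sketch; run it against an unmounted partition and keep a backup, since the step is one-way). Nothing comparable exists for NTFS to XFS, as far as I know:

        # Sketch: convert an unmounted ext3 partition to ext4 in place.
        # /dev/sdXN is a placeholder; this is irreversible, so back up first.
        umount /dev/sdXN

        # Turn on the ext4 on-disk features for the existing filesystem...
        tune2fs -O extents,uninit_bg,dir_index /dev/sdXN

        # ...then let fsck rebuild the metadata the new features require
        # (it will report and fix checksum changes; this is expected).
        e2fsck -fD /dev/sdXN

        # Mount as ext4 from now on, and update /etc/fstab to match.
        mount -t ext4 /dev/sdXN /mnt

    One caveat: files created before the conversion keep the old indirect-block layout; only new files get extent-based mapping.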
