Search Results

Search found 14693 results on 588 pages for 'azure storage tables'.

Page 59 of 588

  • Reliable Storage Systems for SQL Server

    By validating the IO path before commissioning the production database system, and by performing ongoing validation through page checksums and DBCC checks, you can hopefully avoid data corruption altogether, or at least nip it in the bud. If corruption occurs, you have to make the right decisions fast to deal with it. Rod Colledge explains how a pessimistic mindset can be an advantage.
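
    For reference, the ongoing validation the article describes maps to two standard T-SQL commands, shown here against a hypothetical database name:

        -- enable page checksums so corrupt pages are detected when they are read
        ALTER DATABASE MyDatabase SET PAGE_VERIFY CHECKSUM;

        -- run a periodic consistency check
        DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS;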

    Read the article

  • Cookie Settings Storage Method

    - by Paul
    I've got a web app that needs to store some non-sensitive preferences for the user. Right now I'm storing their language preference, and the mode they want a window opened in by default, in two cookies:

        "lang" can be "en" or "de"
        "mode" can be "design" or "view"

    I might add a few more in the future. I'm not sure how many, but probably never more than a dozen. Language is parsed on every request, whereas the mode cookie is only used occasionally. I saw a recommendation that made sense: I shouldn't do what I was originally planning, which was to strongly type a user settings class and deserialize it on each request, because of the overhead involved. I see three options here, and I'm not sure which is the best overall:

    1. Keep things as they are, and add a new cookie for each new setting.
    2. Combine the cookies into a single settings cookie, and add future values to it.
    3. Change the mode cookie to a settings cookie (leaving language alone), and add new user settings values to it.

    All would work, obviously. I'm leaning toward option three, but is there a best practice for this?

    Read the article

  • CNC Information - Data Storage and Transfer

    A CNC machine is worth trying when there is a need to improve speed and accuracy. The machine performs better at repetitive tasks and gets large jobs done quicker. Woodworking shops or industria... [Author: Scheygen Smith - Computers and Internet - March 21, 2010]

    Read the article

  • Can't access external USB storage after updating to 12.10

    - by user99252
    I installed Ubuntu 12.04 after Windows failed me and Samsung got a bit pissy about providing any help. I then upgraded to 12.10 after a week or two, and suddenly my external USB devices no longer work. The same devices I used to plug in are no longer recognised. As I say, I've only been using Ubuntu for a fortnight, so any advice, and pointers to very simple instructions, would be greatly appreciated. I've seen this asked elsewhere, but the advice was to ask again if you needed clarification.

    Read the article

  • Sphinx As MySQL Storage Engine (SphinxSE)

    <b>Howtoforge:</b> "Sphinx is a great full-text search engine for MySQL. Installing the Sphinx daemon was straightforward, as you can compile it from source or use a .DEB/.RPM package, but SphinxSE was a little bit tricky, since it needed to be installed as a plugin on a running MySQL server."

    Read the article

  • Windows Azure Active Directory: Should You Use It?

    Computerworld offers a full review of the developer preview version of WAAD. Jonathan Hassell, the review's author, effectively gave the service a grade of incomplete. While you can't expect something that's effectively a beta to include everything that will be in the final version, WAAD presented a number of annoying problems that definitely need fixing before it can attract a wide share of the market. First, you can't even try out the service unless you sign up for a trial of Office 365, Microsoft's cloud Office suite. The software giant does plan to let you bring up an instance of WAAD as...

    Read the article

  • Tidy up old Windows Server Backup snapshots

    - by dty
    Hi, I'm running wbadmin from a scheduled job, backing up my C: and D: drives to my E: and (I believe!) including the system state:

        wbadmin start backup -backuptarget:e: -include:c:,d: -allCritical -noVerify -quiet

    I'd like to delete old backups, but I'm concerned that all the information I can find says to use wbadmin to delete old system state backups, and vssadmin to delete other backups. As far as I know, my backups ARE system state backups, but they are using VSS on E: for storage, so I'm worried about trying either of these techniques for fear of losing all my backups. This is a home network, so I don't have a spare server to test this on. I'm also happy to simply restrict the space used on E:, but I can't make sense of the difference between the /for and /on parameters of the relevant vssadmin command. For reference, here's the output of vssadmin show shadows:

        Contents of shadow copy set ID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
        Contained 1 shadow copies at creation time: 07/01/2011 08:12:05
           Shadow Copy ID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
           Original Volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Shadow Copy Volume: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy83
           Originating Machine: x.y.com
           Service Machine: x.y.com
           Provider: 'Microsoft Software Shadow Copy provider 1.0'
           Type: DataVolumeRollback
           Attributes: Persistent, No auto release, No writers, Differential
        [... repeated a lot...]

    vssadmin show shadowstorage:

        Shadow Copy Storage association
           For volume: (C:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Shadow Copy Storage volume: (C:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Used Shadow Copy Storage space: 0 B
           Allocated Shadow Copy Storage space: 0 B
           Maximum Shadow Copy Storage space: 5.859 GB

        Shadow Copy Storage association
           For volume: (D:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Shadow Copy Storage volume: (D:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Used Shadow Copy Storage space: 0 B
           Allocated Shadow Copy Storage space: 0 B
           Maximum Shadow Copy Storage space: 40.317 GB

        Shadow Copy Storage association
           For volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Shadow Copy Storage volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Used Shadow Copy Storage space: 168.284 GB
           Allocated Shadow Copy Storage space: 171.15 GB
           Maximum Shadow Copy Storage space: UNBOUNDED

    wbadmin get versions:

        Backup time: 07/01/2011 03:00
        Backup target: 1394/USB Disk labeled xxxxxxxxx(E:)
        Version identifier: 01/07/2011-03:00
        Can Recover: Volume(s), File(s), Application(s), Bare Metal Recovery, System State
        [... repeated a lot...]

    Read the article

  • How to upload files to Azure in the background with Delphi and OmniThread?

    - by mamcx
    I have tried to upload 100+ files to Azure with Delphi. However, the calls block the main thread, so I want to do this with an async call or with a background thread. This is what I do now (as explained here):

        procedure TCloudManager.UploadTask(const input: TOmniValue; var output: TOmniValue);
        var
          FileTask: TFileTask;
        begin
          FileTask := input.AsRecord<TFileTask>;
          Upload(FileTask.BaseFolder, FileTask.LocalFile, FileTask.CloudFile);
        end;

        function TCloudManager.MassiveUpload(const BaseFolder: String; Files: TDictionary<String, String>): TStringList;
        var
          pipeline: IOmniPipeline;
          FileInfo: TPair<String, String>;
          FileTask: TFileTask;
        begin
          // set up the pipeline
          pipeline := Parallel.Pipeline
            .Stage(UploadTask)
            .NumTasks(Environment.Process.Affinity.Count * 2)
            .Run;
          // queue the files to be uploaded
          for FileInfo in Files do
          begin
            FileTask.LocalFile := FileInfo.Key;
            FileTask.CloudFile := FileInfo.Value;
            FileTask.BaseFolder := BaseFolder;
            pipeline.Input.Add(TOmniValue.FromRecord(FileTask));
          end;
          pipeline.Input.CompleteAdding;
          // wait for the pipeline to complete
          pipeline.WaitFor(INFINITE);
        end;

    However, this blocks too (why? I don't understand).

    Read the article

  • Azure: The process cannot access the file "" because it is being used by another process.

    - by Shantanu
    Hi all, I am trying to get a MATLAB-compiled exe running on the Azure cloud, and for that purpose I need to get a v78.zip onto the local storage of the cloud and unzip it, before I can try to run the exe on the cloud. The program works fine when executed locally, but on deployment gives an error at the line marked below in the code. The error is:

        Exception Details: System.IO.IOException: The process cannot access the file 'C:\Resources\directory\cc0a20f5c1314f299ade4973ff1f4cad.WebRole.LocalStorage1\v78.zip' because it is being used by another process.

    The code is given below:

        string localPath = RoleEnvironment.GetLocalResource("LocalStorage1").RootPath;
        Response.Write(localPath + " \n");
        Directory.SetCurrentDirectory(localPath);
        CloudBlob mblob = GetProgramContainer().GetBlobReference("v78.zip");
        CloudBlockBlob mbblob = mblob.ToBlockBlob;
        CloudBlob zipblob = GetProgramContainer().GetBlobReference("7z.exe");
        string zipPath = Path.Combine(localPath, "7z.exe");
        string matlabPath = Path.Combine(localPath, "v78.zip");
        IEnumerable<ListBlockItem> blocklist = mbblob.DownloadBlockList();
        BlobStream stream = mbblob.OpenRead();
        FileStream fs = File.Create(matlabPath); // <-- the exception occurs here

    It'll be a great help if someone could tell me where I'm going wrong. Thanks! Shan

    Read the article

  • How do I include 2 tables in LocalStorage?

    - by Noor
    I've got a table that you can edit, and simple code that saves the table when you're done editing it (the tables have contenteditable on). The problem I've stumbled upon is that if I press Enter twice, the table gets divided into two separate tables with the same ID. This causes the code I'm using to set localStorage to store only one of the tables (I assume the first). I've thought of different solutions, and I wonder if someone could point out the pros and cons (if the solutions even work, that is):

    1. Make a loop that checks the page for tables and stores them into an array of localStorage items. I'd have to dynamically create a localStorage item for each table.
    2. Take the whole div that the tables are in and store that in localStorage; when a user revisits the page, the page checks for the items in storage and displays the whole divs.

    Any suggestions you have that can beat these :).. (but no cache, it has to be with localStorage!) Thanks

    Read the article

  • TSQL - How to join 1..* from multiple tables in one resultset?

    - by ElHaix
    A location table record has two address IDs - a mailing and a billing address ID - that refer to an address table. Thus, the address table will contain up to two records for a given location. Given a location ID, I need a sproc to return all tbl_Location fields and all tbl_Address fields in one resultset:

        LocationID INT,
        ClientID INT,
        LocationName NVARCHAR(50),
        LocationDescription NVARCHAR(50),
        MailingAddressID INT,
        BillingAddressID INT,
        MAddress1 NVARCHAR(255),
        MAddress2 NVARCHAR(255),
        MCity NVARCHAR(50),
        MState NVARCHAR(50),
        MZip NVARCHAR(10),
        MCountry CHAR(3),
        BAddress1 NVARCHAR(255),
        BAddress2 NVARCHAR(255),
        BCity NVARCHAR(50),
        BState NVARCHAR(50),
        BZip NVARCHAR(10),
        BCountry CHAR(3)

    I've started by creating a temp table with the required fields, but am a bit stuck on how to accomplish this. I could do sub-selects for each of the required address fields, but that seems a bit messy. I've already got a table-valued function that accepts an address ID and returns all fields for that ID, but I'm not sure how to integrate it into the required result. Offhand, it looks like it takes 3 selects to create this table - 1: Location, 2: Mailing address, 3: Billing address. What I'd like to do is just create a view and use that. Any assistance would be helpful. Thanks.
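
    A minimal sketch of that view approach joins the address table twice - once per address ID - and aliases each copy's columns. The tbl_Address column names here are assumptions based on the field list above:

        CREATE VIEW vw_LocationAddresses AS
        SELECT
            L.LocationID, L.ClientID, L.LocationName, L.LocationDescription,
            L.MailingAddressID, L.BillingAddressID,
            -- mailing address columns, aliased with the M prefix
            MA.Address1 AS MAddress1, MA.Address2 AS MAddress2, MA.City AS MCity,
            MA.State AS MState, MA.Zip AS MZip, MA.Country AS MCountry,
            -- billing address columns, aliased with the B prefix
            BA.Address1 AS BAddress1, BA.Address2 AS BAddress2, BA.City AS BCity,
            BA.State AS BState, BA.Zip AS BZip, BA.Country AS BCountry
        FROM tbl_Location AS L
        LEFT JOIN tbl_Address AS MA ON MA.AddressID = L.MailingAddressID
        LEFT JOIN tbl_Address AS BA ON BA.AddressID = L.BillingAddressID;

    The LEFT JOINs keep the location row even if one of the two address IDs is missing.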

    Read the article

  • Approach for altering Primary Key from GUID to BigInt in SQL Server related tables

    - by Tom
    I have two tables with 10-20 million rows that have GUID primary keys, and at least 12 tables related via foreign key. The base tables have 10-20 indexes each. We are moving from GUID to BigInt primary keys. I'm wondering if anyone has any suggestions on an approach. Right now this is the approach I'm pondering:

    1. Drop all indexes and fkeys on all the tables involved.
    2. Add a 'NewPrimaryKey' column to each table.
    3. Make the key an identity on the two base tables.
    4. Script the data change: update table x, set NewPrimaryKey = y where OldPrimaryKey = z.
    5. Rename the original primary key to 'OldPrimaryKey'.
    6. Rename the 'NewPrimaryKey' column to 'PrimaryKey'.
    7. Script back all the indexes and fkeys.

    Does this seem like a good approach? Does anyone know of a tool or script that would help with this? TD: Edited per additional information. See this blog post that addresses an approach when the GUID is the primary key: http://www.sqlmag.com/blogs/sql-server-questions-answered/sql-server-questions-answered/tabid/1977/entryid/12749/Default.aspx
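
    A rough T-SQL sketch of steps 2-4 for one base table and one related table (the table and column names are hypothetical, and a real migration would batch the UPDATE and run inside transactions):

        -- steps 2-3: add the new key as an identity on a base table
        ALTER TABLE dbo.BaseTable ADD NewPrimaryKey BIGINT IDENTITY(1,1) NOT NULL;

        -- step 2 for a related table: a plain BIGINT, populated below
        ALTER TABLE dbo.ChildTable ADD NewBaseKey BIGINT NULL;

        -- step 4: map each child row's GUID reference to the new integer key
        UPDATE c
        SET    c.NewBaseKey = b.NewPrimaryKey
        FROM   dbo.ChildTable AS c
        JOIN   dbo.BaseTable  AS b ON b.BaseGuid = c.BaseGuid; -- the old GUID key/FK pair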

    Read the article

  • Entity Framework Create Database & Tables At Runtime

    - by dhsto
    I created some tables in an .edmx file and have been generating the database by selecting "Generate Database From Model" and manually executing an .edmx.sql file on the database to build the tables. Now, however, I am creating a setup dialog that allows the user to connect the program up to their own database. I thought running context.CreateDatabase would be good enough to create the database, along with the tables, but the tables are not created. What is the preferred method for creating the database and tables when the user specifies their own server and database to use, when originally starting with a model?

    Read the article

  • Creating tables with Pylons and SQLAlchemy

    - by Sid
    I'm using SQLAlchemy, and I can create the tables that I have defined in /model/__init__.py, but my classes, tables and their mappings are defined in other files in the /model directory. For example, I have a profile class and a profile table which are defined and mapped in /model/profile.py. To create the tables I run:

        paster setup-app development.ini

    My problem is that the tables I have defined in /model/__init__.py are created properly, but the table definitions found in /model/profile.py are not. How can I execute the table definitions found in /model/profile.py so that all my tables can be created? Thanks for the help!

    Read the article

  • Storing SQL Tables for use in Visual Studio

    - by Raven Dreamer
    Greetings. I'm trying to create a Windows Forms application that manipulates data from several tables stored on a SQL server.

    1) What's the best way to store the data locally while the application is running? I had a previous program that only modified one table, and that was set up to use a DataGridView. However, as I don't necessarily want to view all the tables, I am looking for another way to store the data retrieved by the SELECT * FROM ... query.

    2) Is it better to load the tables, make changes within the C# application, and then update the modified tables at the end, or to simply perform all operations on the database remotely (retrieving the tables each time they are needed)?

    Thank you.

    Read the article

  • Sorting the columns of an HTML table using jQuery

    - by nikolaosk
    In this post I will show you how easy it is to sort the columns of an HTML table. I will use an external library, called Tablesorter, which makes life so much easier for developers. There are other posts in my blog regarding jQuery; you can find them all here. You can find another post regarding HTML tables and jQuery here. We will demonstrate this with a step by step example. I will use Visual Studio 2012 Ultimate. You can also use Visual Studio 2012 Express Edition. You can also use VS 2010 editions.

    1) Launch Visual Studio. Create an ASP.Net Empty Web application. Choose an appropriate name for your application.

    2) Add a web form, default.aspx page, to the application.

    3) Add a table from the HTML controls tab (in the Toolbox) to the default.aspx page.

    4) Now we need to download the jQuery library. Please visit http://jquery.com/ and download the minified version. Then we need to download the Tablesorter jQuery plugin. Please download it here.

    5) We need to reference the jQuery library and the external jQuery plugin. In the head section we add the following lines:

        <script src="jquery-1_8_2_min.js" type="text/javascript"></script>
        <script src="jquery.tablesorter.js" type="text/javascript"></script>

    6) We need to type the HTML markup, the HTML table and its columns:

        <body>
            <form id="form1" runat="server">
            <div>
                <h1>Liverpool Legends</h1>
                <table style="width: 50%;" border="1" cellpadding="10" cellspacing="10" class="liverpool">
                    <thead>
                        <tr><th>Defenders</th><th>MidFielders</th><th>Strikers</th></tr>
                    </thead>
                    <tbody>
                        <tr>
                            <td>Alan Hansen</td>
                            <td>Graeme Souness</td>
                            <td>Ian Rush</td>
                        </tr>
                        <tr>
                            <td>Alan Kennedy</td>
                            <td>Steven Gerrard</td>
                            <td>Michael Owen</td>
                        </tr>
                        <tr>
                            <td>Jamie Carragher</td>
                            <td>Kenny Dalglish</td>
                            <td>Robbie Fowler</td>
                        </tr>
                        <tr>
                            <td>Rob Jones</td>
                            <td>Xabi Alonso</td>
                            <td>Dirk Kuyt</td>
                        </tr>
                    </tbody>
                </table>
            </div>
            </form>
        </body>

    7) Inside the head section we also write the simple jQuery code:

        <script type="text/javascript">
            $(document).ready(function() {
                $('.liverpool').tablesorter();
            });
        </script>

    8) Run your application. This is how the HTML table looks before it is sorted on the basis of the selected column.

    9) Now I will click on the MidFielders header. Have a look at the picture below.

    Tablesorter is an excellent jQuery plugin that makes sorting HTML tables a piece of cake. Hope it helps!!!

    Read the article

  • How does one implement storage/retrieval of smart-search/mailbox features?

    - by humble_coder
    Hi All, I have a question regarding the implementation of smart-search features. For example, consider something like "smart mailboxes" in various email applications. Let's assume you have your data (emails) stored in a database and, depending on the field for which the query will be created, you present different options to the end user. For the moment, let's assume the Subject, Verb, Object approach. For instance, say you have the following:

        SUBJECTs: message, to_address, from_address, subject, date_received
        VERBs: contains, does_not_contain, is_equal_to, greater_than, less_than
        OBJECTs: ???????

    Now, in case it isn't clear, I want a table structure (although I'm not opposed to an external XMLesque file of some sort) to store, and later retrieve/present, my criteria for smart searches/mailboxes. As an example, using SVO I could easily store and then reconstruct a query for "date between two dates" - simply use "date greater than" AND "date less than". However, what if, in the same smart search, I wanted a "between" OR'ed with another criterion? You can see that it might get out of hand - not necessarily in the query creation (as that is rather simplistic), but in the option presentation and storage mechanism. Perhaps I need to think on a more granular level. Perhaps I need to simply allow the user to select AND or OR for each entry independently, instead of making it an all-or-nothing smart search (i.e. instead of MATCH ALL or MATCH ANY, I simply let them select per entry - I just don't want it to turn into a Hydra). Any input would be most appreciated. My apologies if the question is a bit incoherent. It is late, and my brain is toast. Best.
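
    One minimal way to persist SVO criteria in a table - a sketch only, where the table and column names are hypothetical and the per-row Connective column reflects the AND/OR-per-entry idea above:

        CREATE TABLE SmartSearch (
            SearchID INT IDENTITY(1,1) PRIMARY KEY,
            Name     NVARCHAR(100) NOT NULL
        );

        CREATE TABLE SmartSearchCriterion (
            CriterionID INT IDENTITY(1,1) PRIMARY KEY,
            SearchID    INT NOT NULL REFERENCES SmartSearch(SearchID),
            Subject     NVARCHAR(50)  NOT NULL, -- e.g. 'date_received'
            Verb        NVARCHAR(50)  NOT NULL, -- e.g. 'greater_than'
            Object      NVARCHAR(255) NOT NULL, -- the comparison value, stored as text
            Connective  CHAR(3)       NOT NULL DEFAULT 'AND' -- 'AND' or 'OR', per entry
        );

    A "between" then becomes two rows ('greater_than' AND 'less_than'), and OR'ing a criterion is just a row whose Connective is 'OR'.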

    Read the article

  • SQL SERVER – GUID vs INT – Your Opinion

    - by pinaldave
    I think the title makes clear what I am going to write in this post. This is an age-old problem, and I want to compile a list stating the advantages and disadvantages of using GUID and INT as a Primary Key or Clustered Index or both (the usual case). Let me start the list by suggesting one advantage and one disadvantage for each.

    INT
    Advantage: Numeric values (and specifically integers) are better for performance when used in joins, indexes and conditions. Numeric values are easier for application users to understand if they are displayed.
    Disadvantage: If your table is large, it is quite possible it will run out of values - after some numeric value there will be no additional identity to use.

    GUID
    Advantage: Unique across the server.
    Disadvantage: String values are not as optimal as integer values for performance when used in joins, indexes and conditions. More storage space is required than for INT.

    Please note that I am looking to create a list of all the generic comparisons. There can be special cases where the stated information is incorrect; feel free to comment on the same. Please leave your opinion and advice in the comment section. I will compile a final list and update this blog after a week. By listing your name in the post, I will also give due credit.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, SQL, SQL Authority, SQL Constraint and Keys, SQL Data Storage, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
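
    As a minimal sketch of the two choices (table names are hypothetical):

        -- INT: 4 bytes, sequential values generated by the server via IDENTITY
        CREATE TABLE dbo.OrdersInt (
            OrderID INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
            Detail  NVARCHAR(100) NULL
        );

        -- GUID: 16 bytes, unique across servers, can be generated by any tier
        CREATE TABLE dbo.OrdersGuid (
            OrderID UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY CLUSTERED,
            Detail  NVARCHAR(100) NULL
        );

    Random NEWID() values also scatter inserts across a clustered index (NEWSEQUENTIALID() is the usual mitigation), which feeds the performance disadvantage noted above.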

    Read the article
