Search Results

Search found 14693 results on 588 pages for 'azure storage tables'.

Page 344/588

  • Generating db schema from c# class

    - by Niran
    Hi, is there any method other than NHibernate by which we can generate a db schema from a class definition? My classes aren't that complex (a few one-to-many relations). I would just like to be able to save my objects in the db and recreate the schema when needed. I am stuck with .NET 2.0. I am not that particular about performance for this project; I am just too lazy to create the tables, write the save/load code, and deal with NHibernate XML. Thanks

    Read the article

  • How to Disable secondary drive from booting upon restart - Windows

    - by DevCompany
    I had a Windows 2003 hard drive in my server and it went bad, so I installed a new clean hard drive and put Windows 2008 R2 on it. I moved the old 2003 drive to be used only for general storage on the same computer. The machine usually boots into Windows 2008 on a restart, but sometimes it starts trying to boot the old 2003 drive and causes boot issues (NTLDR bootloader and other errors), even though the boot order is set to boot 2008, NOT 2003. I need to know how to remove any old boot code that keeps this old drive bootable. I still want to use it as a secondary drive; I just don't want any boot code on it. Hopefully my situation is clear enough for everyone to give a good response. Thank you...

    Read the article

  • Dell Powervault MD3000 - Not sharing Files between servers

    - by Kevin
    I'm a developer who has to set up a Dell PowerVault MD3000 due to a lack of resources. I have connected the PowerVault to 2 Dell 2950 servers via SAS cables. I performed the setup using Dell's MD Storage Manager software (4 disks, RAID 5 with a hot spare). Then I added the disks using Windows 2003 Disk Management (basic, not dynamic, disks formatted with NTFS). When I add files to the array from one server, they are not visible on the other server (and vice versa). Is the error in the Windows Disk Management configuration?

    Read the article

  • Choosing Truecrypt volume names and keyfile names

    - by Howiecamp
    Any recommendations on what to name TrueCrypt volumes (container files) and where to locate them? Certainly a name like "this is a truecrypt volume.tc" isn't a good idea. Any recommended storage locations? Same question for keyfiles that are generated with TrueCrypt. Finally, let's say you choose an existing file, ymca.mp3, as your keyfile. Given that the file is innocuous and normal-looking, isn't it easy to forget that it's your keyfile, so when you get sick of the Village People and delete the song, you're hosed?

    Read the article

  • node.js with SQL Server Native Client 11 scope_identity not being returned

    - by binderbound
    I'm having trouble inserting a value into a database through node.js. Here is the offending code:

        sql.query(conn_str,
            "INSERT INTO Login(email, hash, salt, firstName, lastName) VALUES(?, ?, ?, ?, ?); SELECT SCOPE_IDENTITY() AS 'Identity';",
            [email, hash, salt, firstName, lastName],
            function(err, results) {
                console.log(results);   // prints []
            });

    Unfortunately, the console is just echoing [], meaning results is an empty array, I suppose. Does anyone know why the identity is not being returned? Even if it were null, why isn't results [{ Identity: null }]? The database is on Azure, which does have a SCOPE_IDENTITY function, and the native client also recognises this function. I'm using the node package "msnodesql". Please help.

    Read the article

  • Resilient Linux Mail Server Setup

    - by Coops
    How would people design a resilient mail server setup with Linux? At the application level, the system needs to provide both an incoming and an outgoing mail service (i.e. SMTP & IMAP), along with filtering and archive storage (the archive part isn't critical yet, so we'll probably look at that later). What is required on top of this is a resilient system, i.e. one which will handle individual server failures without interrupting service. As such I would term this a High Availability mail system, in contrast to a High Performance mail setup: in our case the volume of mail being handled isn't the important factor, it's simply that it stays online. Having not approached this problem before, the first thing I thought of was a clustered file system (GFS/Gluster/etc.), combined with Heartbeat to fail over a floating IP to another box in the case of a server failure. Combined with Postfix & Dovecot, does this sound feasible to people?

    Read the article

  • mysql - union with creating demarcated field

    - by Qiao
    I need to UNION two tables while creating a new field that marks the origin: 1 for rows from the first table, 2 for rows from the second. I tried:

        ( SELECT field, 1 AS tmp FROM table1 )
        UNION
        ( SELECT field, 2 AS tmp FROM table2 )

    But in the result, the tmp field was full of "1". How can this be implemented?
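
    A minimal sketch of the usual pattern for a per-table marker column, assuming MySQL and keeping the illustrative table/column names from the question; UNION ALL also avoids the duplicate-elimination pass that a plain UNION performs:

        -- tag each source table with a literal, then stack the results
        SELECT field, 1 AS tmp FROM table1
        UNION ALL
        SELECT field, 2 AS tmp FROM table2
        ORDER BY tmp;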

    Read the article

  • Issue about Exchange 07 SP2 Backup in SBS 08

    - by Bastien974
    Hi, I'm trying to back up my Exchange 07 SP2 with Windows Server Backup. Since it's supposed to make an Exchange-aware backup with SP2, I created a scheduled full backup of the C: drive (where my First Storage Group is located). The backup is successful, but when I go into the Mailbox database's properties, I see that the last full backup was 2 months ago (at that time the backup worked, but we had some issues since then). In Server Manager, under Features, I checked that Windows Server Backup Features is installed. What am I missing? Thank you!

    Read the article

  • SQL query problem

    - by Pankaj
    Hello all, I have 2 tables, Project and ProjectList, like this:

        Project:     ProjectID | Name | ProjectListID (allows NULL)
        ProjectList: ProjectListID | ProjName

    What I need here is only those records from the ProjectList table whose ProjectListID is not in the Project table. I made a query, but it is taking a lot of time to execute:

        SELECT *
        FROM projectslist pl
        WHERE pl.ProjectsListID NOT IN
              (SELECT p.ProjectsListID
               FROM project p
               WHERE p.ProjectsListID IS NOT NULL AND p.ProjectsListID <> 0)

    Please help me to create an optimized query. I am using MySQL.
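
    A hedged sketch of the equivalent anti-join, which often runs faster than NOT IN on MySQL, keeping the table/column spellings from the query above and assuming ProjectsListID is indexed in both tables (the NULL/0 filters of the original become unnecessary with this form):

        SELECT pl.*
        FROM projectslist pl
        LEFT JOIN project p
               ON p.ProjectsListID = pl.ProjectsListID
        WHERE p.ProjectsListID IS NULL;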

    Read the article

  • SQL optimization: deletes taking a long time

    - by Will
    I have an Oracle SQL query as part of a stored proc:

        DELETE FROM item i
        WHERE NOT EXISTS (SELECT 1 FROM item_queue q WHERE q.n = i.n)
          AND NOT EXISTS (SELECT 1 FROM tool_queue t WHERE t.n = i.n);

    A bit about the tables: item contains about 10k rows with an index on the n column, item_queue contains about 1mil rows also with an index on n, and tool_queue contains about 5mil rows, indexed as well. I am wondering if the query/subqueries can be optimized somehow to make them run faster; I thought deletes were generally fairly fast.
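
    As a first step it usually helps to confirm what plan Oracle actually chooses for the delete (full scans of the queue tables versus anti-joins on the indexes); a hedged diagnostic sketch, reusing the statement and names from the question:

        -- capture and display the execution plan for the delete
        EXPLAIN PLAN FOR
        DELETE FROM item i
        WHERE NOT EXISTS (SELECT 1 FROM item_queue q WHERE q.n = i.n)
          AND NOT EXISTS (SELECT 1 FROM tool_queue t WHERE t.n = i.n);

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);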

    Read the article

  • Giving GUID for data using NDBUnit

    - by jess
    I am using NDBUnit to load data from an XML file. Right now, I am manually assigning a GUID to each record in the XML file (our primary key for all tables is a uniqueidentifier), but I wonder if there is a better way to do this?

    Read the article

  • Comprehensive HTML page for testing CSS?

    - by yaya3
    Looking for example code that uses semantic HTML that I can use to test stylesheets with. The version/doctype is unimportant, though HTML5 would be great. When I say "comprehensive", I am looking for use of definition lists, forms, tables, plus all the usual. Thanks

    Read the article

  • LINQ2SQL: how many datacontexts ?

    - by sh00
    I have a SQL Server 2008 database with 300 tables. The application I have to design is a Windows Forms app, .NET 3.5, C#. What is the best way to work with LINQ to SQL? I intend to create a DataContext for each business entity. Is there any problem with that? I need to know if this way of working with LINQ has any disadvantages or can create performance issues. Thanks.

    Read the article

  • Normalizing Item Names & Synonyms

    - by RabidFire
    Consider an e-commerce application with multiple stores. Each store owner can edit the item catalog of his store. My current database schema is as follows:

        item_names:    id | name | description | picture | common (BOOL)
        items:         id | item_name_id | picture | price | description
        item_synonyms: id | item_name_id | name | error (BOOL)

    Notes: error indicates a wrong spelling (e.g. "Ericson"). The description and picture of the item_names table are "globals" that can optionally be overridden by the "local" description and picture fields of the items table (in case the store owner wants to supply a different picture for an item). common helps separate unique item names ("Jimmy Joe's Cheese Pizza" from "Cheese Pizza").

    I think the bright side of this schema is:

    Optimized searching & handling synonyms: I can query the item_names & item_synonyms tables using name LIKE %QUERY% and obtain the list of item_name_ids that need to be joined with the items table. (Examples of synonyms: "Sony Ericsson", "Sony Ericson", "X10", "X 10")

    Autocompletion: Again, a simple query against the item_names table. I can avoid the use of DISTINCT, and it minimizes the number of variations ("Sony Ericsson Xperia™ X10", "Sony Ericsson - Xperia X10", "Xperia X10, Sony Ericsson").

    The downside would be:

    Overhead: When inserting an item, I query item_names to see if the name already exists; if not, I create a new entry. When deleting an item, I count the number of entries with the same name; if this is the only item with that name, I delete the entry from the item_names table (just to keep things clean; this accounts for possible erroneous submissions). Updating is the combination of both.

    Weird item names: Store owners sometimes use sentences like "Harry Potter 1, 2 Books + CDs + Magic Hat". There's something off about having so much overhead to accommodate cases like this. This would perhaps be the prime reason I'm tempted to go for a schema like this instead:

        items: id | name | picture | price | description

    (... with item_names and item_synonyms as utility tables that I could query)

    Is there a better schema you would suggest? Should item names be normalized for autocomplete? Is this probably what Facebook does for "School" and "City" entries? Is the first schema or the second better/optimal for search? Thanks in advance!

    References: (1) Is normalizing a person's name going too far?, (2) Avoiding DISTINCT
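
    For reference, a rough DDL sketch of the first schema exactly as described above; the column types, lengths, and constraints are illustrative assumptions (MySQL-style syntax), not part of the original post:

        CREATE TABLE item_names (
            id          INT PRIMARY KEY AUTO_INCREMENT,
            name        VARCHAR(255) NOT NULL,
            description TEXT,
            picture     VARCHAR(255),
            common      BOOLEAN NOT NULL DEFAULT FALSE   -- distinguishes generic names from unique ones
        );

        CREATE TABLE items (
            id           INT PRIMARY KEY AUTO_INCREMENT,
            item_name_id INT NOT NULL REFERENCES item_names(id),
            picture      VARCHAR(255),                   -- optional local override of the global picture
            price        DECIMAL(10,2),
            description  TEXT                            -- optional local override of the global description
        );

        CREATE TABLE item_synonyms (
            id           INT PRIMARY KEY AUTO_INCREMENT,
            item_name_id INT NOT NULL REFERENCES item_names(id),
            name         VARCHAR(255) NOT NULL,
            error        BOOLEAN NOT NULL DEFAULT FALSE  -- marks known misspellings such as "Ericson"
        );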

    Read the article

  • how to call include

    - by user276640
    I have 3 tables with the relationship table1-table2-table3. In code I call Include("table2"), but how can I call Include for table3 so that I can access the properties of table3 objects? If I call Include("table3") it shows the error "A specified Include path is not valid".

    Read the article

  • Office Web Components compatibility issues

    - by Sebastian
    Hello, I'm doing some research on the convenience of using Office Web Components on a web page to show pivot tables and graphics, and I have a question regarding this: will the use of these components turn my web app (at least for this feature) into an "Internet Explorer only" app? Thanks in advance!

    Read the article

  • Is it worth a try LINQ to SQL as a beginner to an ORM?

    - by Pandiya Chendur
    Thus far I have used SQL Server stored procedures for all my web applications... Now I'm thinking of moving to an ORM... I would like to ask SO users about LINQ to SQL: is LINQ to SQL worth a try as a beginner's introduction to an ORM, or should I look at some others? Any suggestions... EDIT: I have a SQL Server 2005 database with all the tables... How do I use this db with LINQ to SQL?

    Read the article

  • File store: CouchDB vs SQL Server + file system

    - by Andrey
    I'm exploring different ways of storing user-uploaded files (all are MS Office documents or the like) on our high-load web site. It's currently designed to store documents as files and have a SQL database store all the metadata for those files. I'm concerned about outgrowing the storage server and about SQL Server performance when the number of documents reaches hundreds of millions. I've been reading a lot of good information about CouchDB, including its built-in scalability and performance, but I'm not sure how storing files as attachments in CouchDB would compare to storing files on a file system in terms of performance. Has anybody used CouchDB clusters for storing LARGE numbers of documents in a high-load environment?

    Read the article

  • How do I create a stored procedure that calls sp_refreshview for each view in the database?

    - by Allrameest
    Today I run this:

        SELECT 'exec sp_refreshview N''[' + table_schema + '].[' + table_name + ']'''
        FROM information_schema.tables
        WHERE table_type = 'view'

    This generates a lot of lines like exec sp_refreshview N'[SCHEMA].[TABLE]'. I then copy the result into the query editor window and run all those execs. How do I do this all at once? I would like to have a stored procedure called something like dev.RefreshAllViews which I can execute to do this...
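
    A minimal sketch of such a procedure, assuming the dev schema already exists, using a cursor over the same information_schema query from the question:

        CREATE PROCEDURE dev.RefreshAllViews
        AS
        BEGIN
            DECLARE @view nvarchar(400);

            DECLARE view_cursor CURSOR LOCAL FAST_FORWARD FOR
                SELECT QUOTENAME(table_schema) + '.' + QUOTENAME(table_name)
                FROM information_schema.tables
                WHERE table_type = 'VIEW';

            OPEN view_cursor;
            FETCH NEXT FROM view_cursor INTO @view;

            WHILE @@FETCH_STATUS = 0
            BEGIN
                EXEC sp_refreshview @view;   -- refresh the metadata for this view
                FETCH NEXT FROM view_cursor INTO @view;
            END

            CLOSE view_cursor;
            DEALLOCATE view_cursor;
        END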

    Read the article

  • How to delete drupal's unused core modules correctly?

    - by vegatron
    Hi, I want to delete the unused Drupal core modules (blog, forum, taxonomy, ...), but I'm worried that if I delete the modules from the modules directory I might cause an error (now or in the future). Is it safe? And if I delete the corresponding tables, what will happen? The reason for this is that I want to deliver the site to my client and teach him how to use the admin page, and I want to make it as easy as possible for him.
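
    Before removing anything, it can help to see which modules the site still considers enabled; a hedged sketch, assuming a Drupal 6/7 database where module state is tracked in the system table:

        -- list modules and whether they are enabled (status = 1)
        SELECT name, status
        FROM system
        WHERE type = 'module'
        ORDER BY status DESC, name;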

    Read the article
