Search Results

Search found 14693 results on 588 pages for 'azure storage tables'.

Page 41 of 588

  • SQL SERVER – Finding Different ColumnName From Almost Identical Tables

    - by pinaldave
    I have mentioned earlier on this blog that I love social media – Facebook and Twitter. I receive so many interesting questions there that I sometimes wonder why I have never faced them in a real-life scenario. Well, let us look at one such situation. Here is one of the questions I received through my social media handle: "Pinal, I have a large database. I did not develop this database; I have inherited it. In our database we have many tables, and all the tables come in pairs: one archive table and one current table. Now here is the interesting situation. For a while, due to some reason, our organization stopped paying attention to archive data. We did not archive anything for a while. As if that were not enough, we even changed the schema of the current table but did not change the corresponding archive table. This is now becoming a huge problem. We know for sure that a few columns were added to the current table, but we do not know which ones. Is there any way we can figure out which new columns were added to the current table and do not exist in the archive table? We cannot use any third-party tool. Would you please guide us?" Well, here is an interesting example of how we can use the sys.columns catalog view to get the details of the newly added columns. I have previously written about EXCEPT over here, which is very similar to Oracle's MINUS. In the following example we create two tables, one of which has an extra column. Because we compare the column names from the catalog view, the result set gives us the name of the extra column.

    USE AdventureWorks2012
    GO
    CREATE TABLE ArchiveTable (ID INT, Col1 VARCHAR(10), Col2 VARCHAR(100), Col3 VARCHAR(100));
    CREATE TABLE CurrentTable (ID INT, Col1 VARCHAR(10), Col2 VARCHAR(100), Col3 VARCHAR(100), ExtraCol INT);
    GO
    -- Columns in ArchiveTable but not in CurrentTable
    SELECT name ColumnName
    FROM sys.columns
    WHERE OBJECT_NAME(OBJECT_ID) = 'ArchiveTable'
    EXCEPT
    SELECT name ColumnName
    FROM sys.columns
    WHERE OBJECT_NAME(OBJECT_ID) = 'CurrentTable'
    GO
    -- Columns in CurrentTable but not in ArchiveTable
    SELECT name ColumnName
    FROM sys.columns
    WHERE OBJECT_NAME(OBJECT_ID) = 'CurrentTable'
    EXCEPT
    SELECT name ColumnName
    FROM sys.columns
    WHERE OBJECT_NAME(OBJECT_ID) = 'ArchiveTable'
    GO
    DROP TABLE ArchiveTable;
    DROP TABLE CurrentTable;
    GO

    The above query returns the following result. I hope this solves the problem. It is not the most elegant solution possible, but it works. Here is the puzzle back to you – what native T-SQL solution would you have provided in this situation? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology
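    For completeness, a schema-qualified variant of the same idea (a sketch of my own, not part of the original post, assuming the tables live in the dbo schema) filters sys.columns by object_id instead of OBJECT_NAME, which avoids ambiguity when tables with the same name exist in different schemas:

    -- Hypothetical variant: compare by object_id so the schema is explicit
    SELECT name AS ColumnName
    FROM sys.columns
    WHERE object_id = OBJECT_ID('dbo.CurrentTable')
    EXCEPT
    SELECT name AS ColumnName
    FROM sys.columns
    WHERE object_id = OBJECT_ID('dbo.ArchiveTable');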

    Read the article

  • Using Bulk Operations with Coherence Off-Heap Storage

    - by jpurdy
    Some NamedCache methods (including clear(), entrySet(Filter), aggregate(Filter, …), invoke(Filter, …)) may generate large intermediate results. The size of these intermediate results may result in out-of-memory exceptions on cache servers, and in some cases on cache clients. This may be particularly problematic if out-of-memory exceptions occur on more than one server (since these operations may be cluster-wide) or if these exceptions cause additional memory use on the surviving servers as they take over partitions from the failed servers. This may be particularly problematic with clusters that use off-heap storage (such as NIO or Elastic Data storage options), since these storage options allow greater than normal cache sizes but do nothing to address the size of intermediate results or final result sets. One workaround is to use a PartitionedFilter, which allows the application to break up a larger operation into a number of smaller operations, each targeting either a set of partitions (useful for reducing the load on each cache server) or a set of members (useful for managing client result set sizes). It is also possible to return a key set, and then pull in the full entries using that key set. This also allows the application to take advantage of near caching, though this may be of limited value if the result is large enough to result in near cache thrashing.

    Read the article

  • C programming multiple storage backends

    - by ahjmorton
    I am starting a side project in C which requires multiple storage backends to be driven by a particular piece of logic. Each storage backend would be linked in, with the decision of which one to use specified at runtime. So, for example, if I invoke my program with one set of parameters it will perform the operations in memory, but if I change the program configuration it will write to disk. The underlying idea is that each storage backend should implement the same protocol; in other words, the logic for performing operations should not need to know which backend it is operating on. Currently the way I have thought to provide this indirection is to have a struct of function pointers, with the logic calling these function pointers. Essentially the struct would contain all the operations needed to implement the higher-level logic. E.g.

    // Each storage backend fills in this struct with its own implementations.
    typedef struct Context {
        void (*doPartOfDoOp)(void);
        int  (*getResult)(void);
    } Context;

    // logic.h: higher-level logic that only talks to the Context interface
    void doOp(Context *context) {
        // bunch of stuff
        context->doPartOfDoOp();
    }

    int getResult(Context *context) {
        // bunch of stuff
        return context->getResult();
    }

    My question is whether this way of solving the problem is one a C programmer would understand. I am a Java developer by trade but enjoy using C/C++. Essentially the Context struct provides an interface-like level of indirection. However, I would like to know if there is a more idiomatic way of achieving this.

    Read the article

  • How to read from multiple queues in real-world?

    - by Leon Cullens
    Here's a theoretical question: when I'm building an application using message queueing, I'm going to need multiple queues supporting different data types for different purposes. Let's assume I have 20 queues (e.g. one to create new users, one to process new orders, one to edit user settings, etc.). I'm going to deploy this to Windows Azure using the 'minimum' of 1 web role and 1 worker role. How does one read from all those 20 queues in a proper way? This is what I had in mind, but I have little or no real-world practical experience with this: create a class that spawns 20 threads in the worker role 'main' class. Let each of these threads execute a method that polls a different queue, and let all those threads sleep between each poll (of course with a back-off mechanism that increases the sleep time). This leads to having 20 threads (or 21?) and 20 queues that are being actively polled, resulting in a lot of wasted polls (each time you poll an empty queue it is still billed as a storage transaction). How do you solve this problem?

    Read the article

  • How can one prevent page breaks between Tables in SQL Reporting Services 2005?

    - by James
    I've been developing an SSRS 2005 report (a form letter) that contains two tables. The first "table" uses DataSet "A" to populate address fields on letterhead. The second table uses DataSet "B" to display a basic list of records that pertain to the addressed party. Even though the two tables are within the same page and all PageBreak* properties are set to "False", SSRS renders the report as two pages. How can I force both tables to render in series, without any forced page breaks?

    Read the article

  • How can I get the name of all tables in a JavaDB database?

    - by Jonas
    How can I programmatically get the names of all tables in a JavaDB database? Is there any specific SQL statement I can run over JDBC for this, or any built-in function in JDBC? I will use it for exporting the tables to XML, and I would like to do it this way so I don't miss any tables in the database when exporting.
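    One SQL-level approach (a sketch based on Derby/JavaDB's system catalog, not taken from the original question) is to query SYS.SYSTABLES, which lists every table in the database; filtering on TABLETYPE = 'T' keeps only user tables:

    -- List user tables in a JavaDB/Derby database (TABLETYPE 'T' = user table)
    SELECT TABLENAME
    FROM SYS.SYSTABLES
    WHERE TABLETYPE = 'T'
    ORDER BY TABLENAME;

    The same statement can be executed over JDBC with a plain Statement; alternatively, the standard JDBC DatabaseMetaData.getTables() call returns the same information in a driver-neutral way.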

    Read the article

  • Force.com presents Database.com SQL Azure/Amazon RDS unfazed

    - by Sarang
    At the DreamForce 2010 event in San Francisco, Force.com unveiled the next big thing in its Fat SaaS portfolio: "Database.com". I am still wondering how much they must have shelled out for that domain name. Now why would an already established SaaS player foray into a key building block like the database, potentially allowing enterprises to build apps that do not utilize the Force.com stack? One key reason is being seen as the Fat SaaS player with every trick in the SaaS space under its belt. You want CRM? Come hither. Want a custom-development, PaaS-like solution? Welcome home (VMForce). Want all your apps to talk to a cloud DB and minimize latency by having it reside closer to your cloud apps? You've come to the right place, sire! The other is potentially heading off the foray of other DB players (not surprisingly, the Database.com offering is a highly customized and scalable Oracle database) into the lucrative SaaS DB marketplace. The feature set promised looks great out of the box for someone who likes to visualize cool new architectures. The ground realities are certainly going to be a lot different, considering the SOAP/REST-style access patterns in lieu of the comfortable old shoe of SQL. Microsoft suffered heavily with its SDS (SQL Data Services) offering in early 2009 and had to pull the plug on the product, only to reintroduce it as a simple SQL Server in the cloud, SQL Azure. MSFT is playing it cool, though, by providing OData semantics to work with SQL Azure, satisfying at least some of the needs of Web-style access to a DB. The other features, like social data models including profiles, status updates and feeds, seem interesting as well (although I believe social is just one of the aspects of large-scale collaborative computing). All these features start "free" for devs, which is good news, but the good news stops there. The overall pricing model of $ per user per transaction per month is highly disproportionate compared to Amazon RDS (based on MySQL) or SQL Azure (based on SQL Server). Roger Jennings of OakLeaf did an interesting comparison based on 3, 10, 100 and 500 users, and it turns out that Database.com, going by the current understanding, is way too expensive for the services on offer. The offering may not impact the decision for .NET shops mulling their cloud strategy, or even some Java/MySQL shops thinking about Amazon RDS; however, for enterprises that have already invested in other Force.com offerings this could be a very important piece in the cloud-strategy jigsaw, one which would address the key cloud DB issue of latency for them; at the very least it helps to have the DB in the neighborhood. The tooling and "SQL-like" access provider drivers (think ODBC/JDBC) will be available later this year; Progress Software has already announced their JDBC driver stack for Database.com. It remains to be seen how effective the overall solution proves to be in the longer run, but for starters it is an important step toward consolidating Force.com's already strong positioning in the SaaS space. As always, contrasting views are welcome! :)

    Read the article

  • Connect Microsoft Excel To SQL Azure Database

    A number of people have found my post about getting started with SQL Azure pretty useful. But it's all worthless if it doesn't add up to user value. Databases are like potential energy in physics: a promise that something could be put in motion. Users actually making decisions based on analysis is analogous to kinetic energy in physics; it's the fulfillment of the promise of potential energy. So what does this have to do with Office 2010? In Excel 2010 we made it truly easy to connect to a SQL Azure...

    Read the article

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing and I am searching for some opinions and starting points on what a good network/storage layout would be that can serve my needs for a few years into the future. I think I have a decent starting point for equipment, but I am also willing to invest fairly heavily in a solution that can last me for a while. I am a bit of a tech nerd and I have a moderate tolerance for setting up the solution. I would prefer the maintenance of the system to be somewhat low once it is set up, but I am willing to accept some tradeoffs.
    Existing equipment:
    - Router - Netgear WNDR3700 (gigabit)
    - Router - DLink Gamerlounge DGL-4300 (gigabit)
    - Switch - 16 port Trendnet green switch (gigabit)
    - Switch - 5 port Trendnet green (gigabit)
    - Computer - i7-950 office computer (gigabit ethernet)
    - Computer - Q6600 quad core media center, hooked up to TV, records shows (gigabit ethernet)
    - Computer - Acer 1810T ultraportable laptop (gigabit and N ethernet)
    - NAS - Intel SS4200-E (gigabit)
    - External hard drive - 2TB WD Green drive (esata)
    - All kinds of miscellaneous network-connected devices: TV, Blu-ray player, Verizon network extender, HDHomeRun TV tuners, etc.
    Requirements:
    - Robust backup solution for a growing collection of huge family picture files and personal files, around 1.5TB (including offsite backup)
    - Central location for all users' files, while also keeping them secure from each other
    - Storage for terabytes of movie backups and recorded TV, and access to them from all computers (maybe around 4TB eventually)
    - Possibility to host files for friends and family easily
    Nice to have:
    - Backup of the terabytes of movie backups
    Intriguing possibilities:
    - Capability to have users' Windows desktops and files look the same from all network computers
    I am not sure if the new Windows Home Server 2011 would fit into this well, whether I need a domain server, how best to organize my backups, or how to most effectively use RAID. Currently I am simply backing up all computers to a RAID 1 on the NAS box, which I was thinking could prevent a situation where I reach for a backup and find that the disk is corrupt. One possibility I am considering now is simply using my media center PC with a huge RAID of hard drives on which all files are stored. Pseudo-backup of all files would be present because of the RAID, but important files would also be backed up off site by carrying hard drives to work. But what if corruption seeps into the files and the corrupted data is then backed up? Does RAID protect against this? I really want to take next to zero risks with the irreplaceable files; I can handle some degree of risk with the movies and other files. I'm looking for critiques of this idea as well as other possibilities. To summarize, my goal is high functionality, media capability, and robust backup of irreplaceable files.

    Read the article

  • Sesame update du jour: SL 4, OOB, Azure, and proxy support

    I've just published a new version of Sesame Data Browser. Here's what's new this time:
    - Upgraded to Silverlight 4
    - Can run out-of-browser (OOB), with elevated permissions. This gives you an icon on your desktop and enables new scenarios. Note: the application is unsigned for the moment.
    - Support for Windows Azure authentication
    - Support for SQL Azure authentication
    - If you are behind a proxy that requires authentication, just give Sesame a new try after clicking on "If you are behind a proxy that...

    Read the article

  • Register for Cloud Computing Bootcamp: Free Technical Training on Developing for Windows Azure

    This two-day workshop will help you prepare to deliver solutions on the Windows Azure Platform. We've worked to bring the region's best Azure experts together to teach you how to work in the cloud. Each day will be filled with training, discussion, reviewing real scenarios, and hands-on labs. It's more than just a training class; it's also an event-in-a-box. If you don't see a class near you, then throw your own...

    Read the article

  • Speaking in Raleigh NC June 15th

    - by Andrew Kelly
    Just a heads-up to those in the area that I will be speaking at the Raleigh SQL Server user group (TriPASS) on the 15th of June 2010. The topic is Storage & I/O Best Practices. The abstract is listed below: SQL Server relies heavily on a well-configured storage subsystem to perform at its peak, but unfortunately this is one of the most neglected or misconfigured areas of a SQL Server instance. Here we will focus on the best practices related to how SQL Server works with the underlying storage...(read more)

    Read the article

  • Fetching Data from Multiple Tables using Joins

    Applying normalization to relational databases tends to promote better accuracy of queries, but it also leads to queries that take a little more work to develop, as the data may be spread amongst several tables. In today's article, we'll learn how to fetch data from multiple tables by using joins.
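    As a minimal illustration of the idea (my own sketch, using hypothetical Customers and Orders tables rather than anything from the article), an inner join matches rows from two tables on a shared key:

    -- Hypothetical tables: Customers(CustomerID, Name) and Orders(OrderID, CustomerID, OrderDate)
    SELECT c.Name, o.OrderID, o.OrderDate
    FROM Customers AS c
    INNER JOIN Orders AS o
        ON o.CustomerID = c.CustomerID
    ORDER BY c.Name, o.OrderDate;

    Each row of the result pairs a customer with one of their orders; customers without orders are omitted (a LEFT JOIN would keep them).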

    Read the article

  • Live Security Talk Webcast: Security Best Practices for Design and Deployment on Windows Azure (Leve

    Developing secure applications and services in the cloud requires knowledge of the threat landscape specific to the cloud provider. The key is understanding threat mitigations implemented by the cloud architecture versus those that are the responsibility of the developer. Register for this exciting live webcast to learn about the threats that are specific to the cloud and how the Windows Azure architecture deals with these threats. We also cover how to use built-in Windows Azure security features...

    Read the article

  • Is there an online storage service that works with Windows 7 Backup and Restore? [closed]

    - by user57813
    I love Windows 7 Backup and Restore. However, I find it somewhat inconvenient to use an external hard drive: plug it in, take it out, put it away safely, and so on. Is there an online storage service to which Windows Backup can save the backup files? Can I make Windows Backup store the backed-up data on an online storage service? I do not want to manually upload the backup files; I want it to save directly to the cloud. It doesn't matter if the service costs me some $$$. I am using Windows 7 Ultimate 32-bit edition.

    Read the article

  • Windows Azure Platform TCO/ROI Analysis Tool

    - by kaleidoscope
    Microsoft have released a tool to help you figure out how much money you can save by switching to Windows Azure from your on-premises solution. The tool will provide you with a customized estimate of potential cost savings you (or your company or organization) may achieve by building on the Windows Azure Platform. Upon completion of the TCO and ROI Calculator profile analysis, you will be presented with a detailed report which shows estimated line item costs for an accurate TCO and a 1 to 3 year ROI analysis for you or your company or organization. You should not interpret the analysis report you receive as a part of this process to be a commitment on the part of Microsoft, and Microsoft makes no guarantees regarding the accuracy of any information presented in the report. You should not view the results of this report as a substitute for engaging with a third party expert to independently evaluate you or your company’s specific computing needs. The analysis report you will receive is for informational purposes only. For more information check this link. Geeta, G

    Read the article

  • Add SQL Azure database to Azure Web Role and persist data with entity framework code first.

    - by MagnusKarlsson
    In my last post I went for a warts-and-all approach to setting up a web role on Azure. In this post I'll describe how to add a SQL Azure database to the project, with as little code and as few screen dumps as possible. All questions are welcome in the comments area. Please don't email, since questions answered in the comments field are made available to other visitors. As an example we will add a comments section to the site we used in the previous post (link here). Steps:
    1. Create a Comment entity, use scaffolding to set up a controller and views, and add a connection string to web.config.
    2. Create a SQL Azure database in the Management Portal and link the new database.
    3. Test it online!

    1. Right-click the Models folder, choose Add, then "Class…". Name the class Comment.
    1.1 Replace the code in the class with the following:

    using System.Data.Entity;

    namespace MvcWebRole1.Models
    {
        public class Comment
        {
            public int CommentId { get; set; }
            public string Name { get; set; }
            public string Content { get; set; }
        }

        public class CommentsDb : DbContext
        {
            public DbSet<Comment> CommentEntries { get; set; }
        }
    }

    Now Entity Framework can create a database and a table named Comment. Build your project to assert there are no build errors.
    1.2 Right-click the Controllers folder, choose Add, then "Controller…". Name the class CommentController and fill out the values as in the example below.
    1.3 Click Add. Visual Studio now creates default views for the CRUD operations and a controller adhering to them, and opens them.
    1.4 Open Web.config and add the following connection string in the <connectionStrings> node:

    <add name="CommentsDb"
         connectionString="data source=(LocalDB)\v11.0;Integrated Security=SSPI;AttachDbFileName=|DataDirectory|\CommentsDb.mdf;Initial Catalog=CommentsDb;MultipleActiveResultSets=True"
         providerName="System.Data.SqlClient" />

    1.5 Save All and press F5 to start the application.
    1.6 Go to http://127.0.0.1:81/Comments, which will redirect you through CommentsController to the Index view. Click "Create New". In the Create view, add a name and some content and press Create.

    // POST: /Comments/Create
    [HttpPost]
    public ActionResult Create(Comment comment)
    {
        if (ModelState.IsValid)
        {
            db.CommentEntries.Add(comment);
            db.SaveChanges();
            return RedirectToAction("Index");
        }
        return View(comment);
    }

    The default action is Index, so that is the view you are redirected to:

    // GET: /Comments/
    public ActionResult Index()
    {
        return View(db.CommentEntries.ToList());
    }

    Resulting in the following screen dump (success!).
    2. Now go to the Management Portal and create a new database.
    2.1 With the new database created, click the DB icon in the left-most menu, then click the newly created database. Click DASHBOARD in the top menu, and finally click "Connection strings" in the right menu to get the connection string we need to add to our web.debug.config file.
    2.2 Take a copy of the connection string added earlier to web.config and paste it into the <connectionStrings> node of web.debug.config. Replace everything within the quotes in the copied connection string with the one you got from SQL Azure.
    2.3 Rebuild the application, right-click the cloud project and choose "Package…" (if you haven't set up a publishing profile, which we will do in the next blog post). Remember to choose the right config file: use debug for staging and release for production so your databases won't collide.
    2.4 Go to the Management Portal, click the Web Services menu, choose your service and click Update in the bottom menu.
    2.5 Link the newly created database to your application: click LINKED RESOURCES in the top menu and then click "Link" in the bottom menu.
    3. Alright then. Under the Dashboard you can find the link to your application. Click it to open it in a browser and then go to ~/Comments to try it out just the way we did locally. Success and end of this story!

    Read the article

  • Never update system tables directly - a study in Agent job scheduling

    It is often recommended that system tables should not be updated directly. This article presents a case in point, built around nightly job configuration, to demonstrate the possible issues with updating system tables directly.
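    To illustrate the general advice (a hedged sketch of my own, not taken from the article), changing when a SQL Server Agent job runs is better done through the documented msdb procedures than by updating the underlying system table:

    -- Risky: a direct update of a system table bypasses validation and Agent's own caching
    -- UPDATE msdb.dbo.sysschedules SET active_start_time = 30000 WHERE name = N'Nightly Load';

    -- Safer: use the supported stored procedure instead
    EXEC msdb.dbo.sp_update_schedule
        @name = N'Nightly Load',       -- hypothetical schedule name
        @active_start_time = 30000;    -- 03:00:00, HHMMSS encoded as an integer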

    Read the article

  • Integrating Azure ServiceBus and SharePoint 2010

    - by Sahil Malik
    SharePoint 2010 Training: more information My new article is finally online. I had been waiting for this for a while. The thing is, AppFabric became .NET 4, and left SharePoint 2010 behind. But fear not, we have REST API. But that brings up interesting challenges of how we can integrate Azure Service Bus with SharePoint 2010 (yes 2010, not vNext – I’m not giving NDA information out you fool), the design patterns you can use, figuring out challenging issues like security, sessions, and just app design patterns instead. Well, I hope you like my next article, SharePoint Applied: Azure ServiceBus and SharePoint 2010 Enjoy! Read full article ....

    Read the article

  • An overview of Windows Azure

    - by Sahil Malik
    SharePoint 2010 Training: more information I will be speaking on Windows Azure – an overview – at one of my favorite user groups, CMAPonline, on October 25th. Here are the details:
    When – October 25th 2011, 6:30 PM
    Where – 6021 University Blvd, Suite 250, Ellicott City, MD 21043
    About – "SharePoint Office365 and Azure – an Overview of what you can use today!"
    Everyone is talking about the cloud. Everyone is moving to the cloud. Microsoft's cloud offering is probably the most expansive of all. But how does it really compare with other offerings? What is the feature set of Google? Or Amazon? And in the jungle of betas, what is currently proven and production-ready in the Microsoft spectrum? Most of all, how do you move from your current setup to a cloud-based setup? In this session, Sahil provides a manager- and architect-level overview demystifying all these topics and more. Read full article ....

    Read the article
