Search Results

Search found 28024 results on 1121 pages for 'sql 2014'.


  • Azure Mobile Services: lessons learned

    - by svdoever
    When I first started using Azure Mobile Services I thought of it as a nice way to:

    - authenticate my users - login using Twitter, Google, Facebook, Windows Live
    - create tables, and use the client code to create the columns in the table, because that is not possible in the Azure Mobile Services UI
    - run some JavaScript code on the table CRUD actions (Insert, Update, Delete, Read)
    - schedule a JavaScript job to run every 15 minutes or more

    I had no idea of the magic that was happening inside... Where is the data stored? Is it a kind of big table? Are relationships between tables possible? Those JavaScripts on the table CRUD actions - are they interpreted, what are they exactly? After working for some time with Azure Mobile Services I became a lot wiser:

    - Those tables are just normal tables in an Azure SQL Server 2012 database.
    - Creating the table columns through client code sucks, at least from my JavaScript code, because the columns are deduced from the sent JSON data, and a datetime field is sent as a string in JSON, so a string-type column is created instead of a datetime column.
    - You can connect with SQL Management Studio to the Azure SQL Server, and although you can't manage your columns through the SQL Management Studio UI, it is possible to just run SQL scripts to drop and create tables and indices.
    - When you create a table through SQL script, add the table with the same name in the Azure Mobile Services UI to hook it up and be able to access the table through the provided abstraction layer.
    - You can also go to the SQL Database through the Azure Mobile Services UI, and from there get into a web-based SQL management studio where you can create columns and manage your data.
    - The table CRUD scripts and the scheduler scripts are full-blown node.js scripts, introducing a lot of power with great performance.
    - The web-based script editor is really powerful; I do most of my editing currently in the editor, which has syntax highlighting and code completion. While editing the code, JSHint is used for script validation.
    - The documentation on Azure Mobile Services is... suboptimal. It is such a pity that there is no way to comment on it so the community could fill in the missing holes, like which node modules are already loaded and which modules are available on Azure Mobile Services.

    Soon I was hacking away on Azure Mobile Services, creating my own database tables through script, and abusing the read script of an empty table named query to implement my own set of "services". The latest updates to Azure Mobile Services, described in the following posts, added some great new features: creating web APIs, using shared code from your scripts, command line tools for managing Azure Mobile Services (upload and download scripts, for example), support for node modules, and git support:

    http://weblogs.asp.net/scottgu/archive/2013/06/14/windows-azure-major-updates-for-mobile-backend-development.aspx
    http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/14/custom-apis-in-azure-mobile-services.aspx
    http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/19/custom-api-in-azure-mobile-services-client-sdks.aspx

    In the meantime I rewrote all my "service-like" table scripts as API scripts, which works like a breeze. One bad thing in the current state of Azure Mobile Services is that git support does not work if you are a co-administrator of your Azure subscription rather than an administrator (as in my case).
    Another bad thing is that Cross-Origin Resource Sharing (CORS) is not supported for the API yet, so APIs are a no-go from the browser client for now, which is my case. See http://social.msdn.microsoft.com/Forums/windowsazure/en-US/2b79c5ea-d187-4c2b-823a-3f3e0559829d/known-limitations-for-source-control-and-custom-api-features for more on these and other limitations. In his talk at Build 2013, Josh Twist showed that there is a work-around for accessing shared script code from the table scripts as well (another limitation mentioned in the post above). I could not find that code in the Votabl2 code example from the presentation at https://github.com/joshtwist/votabl2, but we can grab it from the presentation when it comes online on Channel9. By the way: you can always express your needs and ideas at http://mobileservices.uservoice.com, that's the place where they are listening (I hope!).

    Read the article

  • Puppet: Making Windows Awesome Since 2011

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2014/08/07/puppet-making-windows-awesome-since-2011.aspx

    Puppet was one of the first configuration management (CM) tools to support Windows, way back in 2011. It has the heaviest investment in Windows infrastructure, with 1/3 of the platform client development staff being Windows folks. It appears that Microsoft believed an end-state configuration tool like Puppet was the way forward, so much so that they cloned Puppet's DSL (domain-specific language) in many ways and are calling it PowerShell DSC. Puppet Labs is pushing the envelope on Windows. Here are several things to note:

    - Puppet x64 Ruby support for Windows coming in v3.7.0.
    - An awesome ACL module (with ordering, SIDs and very granular control of permissions, it is the best of any CM).
    - A wealth of modules that work with Windows on the Forge (and more on GitHub).
    - Documentation solely for Windows folks - https://docs.puppetlabs.com/windows.
    - Some of the common learning points for Puppet on Windows users are noted in a recent blog post.
    - Microsoft OpenTech supports Puppet.
    - Azure has the ability to deploy a Puppet Master (http://puppetlabs.com/solutions/microsoft).
    - At Microsoft //Build 2014, in the Day 2 keynote, Puppet Labs CEO Luke Kanies co-presented with Mark Russinovich (http://channel9.msdn.com/Events/Build/2014/KEY02 - fast forward to 19:30)!
    - Puppet has a Visual Studio plugin!

    It can be overwhelming learning a new tool like Puppet at first, but Puppet Labs has some resources to help you on that path. Take a look at the Learning VM, which has a quest-based learning tool. For real-time questions, feel free to drop onto #puppet on freenode.net (yes, some folks still use IRC) with questions, and #puppet-dev with thoughts/feedback on the language itself. You can subscribe to the puppet-users / puppet-dev mailing lists. There is also ask.puppetlabs.com for questions, and Server Fault if you want to go to a Stack Exchange site. There are books written on learning Puppet. There are even Puppet User Groups (PUGs) and other community resources!

    Puppet does take some time to learn, but as with anything you need to learn, you need to weigh the benefits against the ramp-up time. I learned NHibernate once; it had a very high ramp-up time back then but was the only game on the street. Puppet's ramp-up time is considerably less than that. The advantage is that you are learning a DSL, and it can apply to multiple platforms (Linux, Windows, OS X, etc.) with the same Puppet resource constructs.

    As you learn Puppet you may wonder why it has a DSL instead of just leveraging the language of Ruby (or maybe this is one of those things that keeps you up wondering at night). I like the DSL over a small layer on top of Ruby. It allows the Puppet language to be portable and go more places. It makes you think about the end state of what you want to achieve in a declarative sense instead of in an imperative sense.

    You may also find that right now Puppet doesn't run manifests (scripts) in the order in which resources are specified. This is the number one learning point for most folks, and a long-time point of consternation: manifest ordering was not possible in the past. In fact, it might be why some other CMs exist! As of 3.3.0, Puppet can do manifest ordering, and it will be the default in Puppet 4. http://puppetlabs.com/blog/introducing-manifest-ordered-resources

    You may have caught earlier that I mentioned PowerShell DSC. But what about DSC? Shouldn't that be what Windows users want to choose? Other CMs are integrating with DSC; will Puppet follow suit and integrate with DSC? The biggest concern I have with DSC is its lack of visibility into fine-grained reporting of changes (which Puppet has). The other is that it is a very young Microsoft product (pre-version 3 - you know what they say :) ). I tried getting it working in December and ran into some issues. I'm hoping newer releases actually work; it does have some promising capabilities, but it just doesn't quite come up to the standard of something that should be used in production. In contrast, Puppet is almost a ten-year-old language with an active community! It's very stable, and when trusting your business to configuration management, you want something that has been around a while and has been proven. Give DSC another couple of releases and you might see more folks integrating with it. That said, there may be a future with DSC integration. Portability and fine-grained reporting of configuration changes are reasons to take a closer look at Puppet on Windows. Yes, Puppet on Windows is here to stay, and it's continually getting better, folks.

    Read the article

  • Disaster Recovery Discovery

    - by Rodney Landrum
    Last weekend I joined several of my IT staff on a mission to perform a DR test in our remote CoLo center in a large South East city of the US. Can I be more obtuse? The goal was simple for me as the sole DBA in a throng of Windows, Storage, Network and SAN admins: restore the databases and make them work. There were 4 applications backed by 7 SQL Server databases on 4 different SQL Server instances. We would maintain the original server names, but beyond that it was fair game.

    We had time to prepare, so I was able to script out or otherwise automate the recovery process. I used sp_help_revlogin for three of the servers - a bit of a cheat actually, because restoring the Master database on the target DR servers was the specified course of action according to the DR procedures (the caveat "IF REQUIRED" left it open to interpretation). I really wanted to avoid the step of restoring Master for a number of reasons, but mainly because I did not want to deal with issues starting SQL services afterward. Having to account for the location of TempDB and the version conflicts of the resource DBs were just two of the battles I chose not to fight, not to mention other system database location problems that might arise and prevent SQL from starting. I was going to have to restore all of the user databases anyway, so I would not really gain any benefit, outside of logins, from taking the time to restore the source Master database over the newly installed one on the fresh server.

    What I wanted was the ability to restore the Master database as a user database, call it Master_Mine, from a backup on the source system and then use that restored database to script the SQL logins and passwords on the DR systems. While I did not attempt this on the trip, the thought stuck in my mind, and this past week I succeeded at scripting user accounts and passwords using only a restored copy of the Master database. Granted, there were several challenges to overcome. Also, as is usual for any work like this, the usual disclaimers apply: this is not something that I would imagine Microsoft would condone or support, and this was really only an experiment for me to learn if it was even possible. While I have tested the process with success, I do not know that I would use this technique in a documented procedure, because future updates for SQL Server will render this technique non-functional.

    I thought at first, incorrectly of course, that I could use sp_help_revlogin on a restored copy of the Master database, which I named Master_Mine. Since sp_help_revlogin uses system schema objects, sys.syslogins and sys.server_principals, this was not going to work, because all results would come from the main Master database. To test this I added a SQL login via SSMS, backed up Master, restored it as Master_Mine, and then deleted the login. The test account I created should presumably still be in the Master_Mine database, and I should be able to get to it and script out its creation with its password hash so that I would not need to know the password - any applications that stored that password would not have to be altered in the DR scenario; they would just work as expected.

    Once I realized that would not work, I began looking deeper. Knowing that sys.syslogins and sys.server_principals are system views, their underlying code should be available with sp_helptext, right? It was. And this led me to discover the two tables sys.sysxlgns and sys.sysprivs, where the data I needed was stored. These tables existed in both the real Master and the restored copy, Master_Mine. I used this information to tweak the sp_help_revlogin stored procedure to use these tables instead to create the logins cursor used in sp_help_revlogin.

    For the password hash, sp_help_revlogin uses the function LoginProperty(), which takes a user name and the option 'passwordhash' to return the hash for the user. Unfortunately, it requires the login to exist in the Master database, so this would not work. Another slight modification I had to make was to pull the password hash itself (pwdhash from sys.sysxlgns) into the logins cursor and comment out the section of sp_help_revlogin that uses LoginProperty. Instead, I pass the pwdhash value as the variable @PWD_varbinary to the sp_hexadecimal stored procedure, which is also created by and used within the code provided by Microsoft for sp_help_revlogin.

    The final challenge: sys.sysxlgns and sys.sysprivs are visible only within a Dedicated Administrator Connection (DAC) query window in SSMS or within SQLCMD. To open a DAC connection you have to be logged in on the SQL Server itself (via RDP in my case), and you preface the server name in the query connection with ADMIN:, so that the server connection looks like ADMIN:ServerName. From there you can create the modified stored procedure in the restored copy of a Master database from a source system under whatever name you like, and then run it. I named my new stored procedure usp_help_revlogin_MyMaster. Upon execution I was happy to see the logins and password hashes that I needed to apply from the source Master database, without having to restore over the new Master system database and without the need to access the original server (assuming it was down due to whatever disaster put it in that state).

    You will note that I am not providing full code samples here of the modifications. I will say that it was a slight bit of work, and anyone who needed to do this for whatever reason could fairly easily roll their own solution with the information provided herein. My goal, as I said, was to prove that this could be done and to provide another option, if required, to ease the burden of getting SQL Servers up and available in an emergency situation where alternatives may be more challenging or otherwise unavailable.

    Read the article

  • vagrant and puppet security for ssl certificates

    - by Sirex
    I'm pretty new to vagrant; would someone who knows more about it (and puppet) be able to explain how vagrant deals with the SSL certs needed when making vagrant testing machines that are processing the same node definition as the real production machines?

    I run puppet in master/client mode, and I wish to spin up a vagrant version of my puppet production nodes, primarily to test new puppet code against. If my production machine is, say, sql.domain.com, I spin up a vagrant machine of, say, sql.vagrant.domain.com. In the Vagrantfile I then use the puppet_server provisioner, and give a puppet.puppet_node entry of "sql.domain.com" so it gets the same puppet node definition. On the puppet server I use a regex of something like /*.sql.domain.com/ on that node entry so that both the vagrant machine and the real one get that node entry on the puppet server. Finally, I enable auto-signing for *.vagrant.domain.com in puppet's autosign.conf, so the vagrant machine gets signed.

    So far, so good... However: if one machine on my network gets rooted, say, unimportant.domain.com, what's to stop the attacker changing the hostname on that machine to sql.vagrant.domain.com, deleting the old puppet SSL cert off of it and then re-running puppet with a given node name of sql.domain.com? The new SSL cert would be autosigned by puppet, match the node name regex, and then this hacked node would get all the juicy information intended for the sql machine!

    One solution I can think of is to avoid autosigning, and put the known puppet SSL cert for the real production machine into the vagrant shared directory, and then have a vagrant ssh job move it into place. The downside of this is that I end up with all my SSL certs for each production machine sitting in one git repo (my vagrant repo) and thereby on each developer's machine - which may or may not be an issue, but it doesn't sound like the right way of doing this.

    tl;dr: How do other people deal with vagrant & puppet SSL certificates for development or testing clones of production machines?

    Read the article

  • WPF: How to connect to a sql ce 3.5 database with ADO.NET?

    - by EV
    Hi, I'm trying to write an application that has two DataGrids - the first one displays customers and the second one the selected customer's records. I have generated the ADO.NET Entity Model and the connection properties are set in the App.config file. Now I want to populate the datagrids with the data from a SQL CE 3.5 database, which does not support LINQ to SQL. I used this code for the first datagrid in code-behind:

        CompanyDBEntities db;

        private void Window_Loaded(object sender, RoutedEventArgs e)
        {
            db = new CompanyDBEntities();
            ObjectQuery<Employees> departmentQuery = db.Employees;
            try
            {
                // lpgrid is the name of the first datagrid
                this.lpgrid.ItemsSource = departmentQuery;
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

    However, the datagrid is empty although there are records in the database. It would also be better to use it in the MVVM pattern, but I'm not sure how ADO.NET and MVVM work together and if it is even possible to use them like that. Could anyone help? Thanks in advance.
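    One common first thing to try (a sketch of mine, not the poster's code, and not a verified fix for this particular setup): materialize the query with ToList() so it executes immediately and any connection error surfaces in the catch block, then bind the resulting list. CompanyDBEntities, Employees and lpgrid are the names from the question; using System.Linq and System.Collections.Generic are assumed.

        private void Window_Loaded(object sender, RoutedEventArgs e)
        {
            using (var db = new CompanyDBEntities())
            {
                try
                {
                    // ToList() runs the query now instead of deferring it,
                    // so an empty grid means the query really returned no rows.
                    List<Employees> employees = db.Employees.ToList();
                    this.lpgrid.ItemsSource = employees;
                }
                catch (Exception ex)
                {
                    MessageBox.Show(ex.Message);
                }
            }
        }

    For MVVM, the same list would be exposed as a property on a view model (for example an ObservableCollection<Employees>) and bound in XAML instead of assigned in code-behind.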

    Read the article

  • Add UIView and UILabel to UICollectionViewCell. Then Segue based on clicked cell index

    - by JetSet
    I am new to collection views in Objective-C. Can anyone tell me why I can't see my UILabel embedded in the transparent UIView, and the best way to resolve it? I want to also segue from the cell to several various UIViewControllers based on the selected cell index. I am using the GitHub project https://github.com/mayuur/MJParallaxCollectionView. Overall, in MJRootViewController.m I wanted to add a UIView with a transparency and a UILabel with details of the cell from an array.

    MJCollectionViewCell.h

        //
        //  MJCollectionViewCell.h
        //  RCCPeakableImageSample
        //
        //  Created by Mayur on 4/1/14.
        //  Copyright (c) 2014 RCCBox. All rights reserved.
        //

        #import <UIKit/UIKit.h>

        #define IMAGE_HEIGHT 200
        #define IMAGE_OFFSET_SPEED 25

        @interface MJCollectionViewCell : UICollectionViewCell

        /* image used in the cell which will be having the parallax effect */
        @property (nonatomic, strong, readwrite) UIImage *image;

        /* Image will always animate according to the imageOffset provided. Higher the value means higher offset for the image */
        @property (nonatomic, assign, readwrite) CGPoint imageOffset;

        //@property (nonatomic, readwrite) UILabel *textLabel;
        @property (weak, nonatomic) IBOutlet UILabel *textLabel;
        @property (nonatomic, readwrite) NSString *text;
        @property (nonatomic, readwrite) CGFloat x, y, width, height;
        @property (nonatomic, readwrite) NSInteger lineSpacing;
        @property (nonatomic, strong) IBOutlet UIView *overlayView;

        @end

    MJCollectionViewCell.m

        //
        //  MJCollectionViewCell.m
        //  RCCPeakableImageSample
        //
        //  Created by Mayur on 4/1/14.
        //  Copyright (c) 2014 RCCBox. All rights reserved.
        //

        #import "MJCollectionViewCell.h"

        @interface MJCollectionViewCell ()
        @property (nonatomic, strong, readwrite) UIImageView *MJImageView;
        @end

        @implementation MJCollectionViewCell

        - (instancetype)initWithFrame:(CGRect)frame
        {
            self = [super initWithFrame:frame];
            if (self) [self setupImageView];
            return self;
        }

        - (id)initWithCoder:(NSCoder *)aDecoder
        {
            self = [super initWithCoder:aDecoder];
            if (self) [self setupImageView];
            return self;
        }

        /*
        // Only override drawRect: if you perform custom drawing.
        // An empty implementation adversely affects performance during animation.
        - (void)drawRect:(CGRect)rect
        {
            // Drawing code
        }
        */

        #pragma mark - Setup Method
        - (void)setupImageView
        {
            // Clip subviews
            self.clipsToBounds = YES;

            // Add image subview
            self.MJImageView = [[UIImageView alloc] initWithFrame:CGRectMake(self.bounds.origin.x, self.bounds.origin.y, self.bounds.size.width, IMAGE_HEIGHT)];
            self.MJImageView.backgroundColor = [UIColor redColor];
            self.MJImageView.contentMode = UIViewContentModeScaleAspectFill;
            self.MJImageView.clipsToBounds = NO;
            [self addSubview:self.MJImageView];
        }

        #pragma mark - Setters
        - (void)setImage:(UIImage *)image
        {
            // Store image
            self.MJImageView.image = image;

            // Update padding
            [self setImageOffset:self.imageOffset];
        }

        - (void)setImageOffset:(CGPoint)imageOffset
        {
            // Store padding value
            _imageOffset = imageOffset;

            // Grow image view
            CGRect frame = self.MJImageView.bounds;
            CGRect offsetFrame = CGRectOffset(frame, _imageOffset.x, _imageOffset.y);
            self.MJImageView.frame = offsetFrame;
        }

        - (void)setText:(NSString *)text
        {
            _text = text;
            if (!self.textLabel) {
                CGFloat realH = self.height * 2 / 3 - self.lineSpacing;
                CGFloat latoA = realH / 3;
                // self.textLabel = [[UILabel alloc] initWithFrame:CGRectMake(10, latoA / 2, self.width - 20, realH)];
                self.textLabel.layer.anchorPoint = CGPointMake(.5, .5);
                self.textLabel.font = [UIFont fontWithName:@"HelveticaNeue-ultralight" size:38];
                self.textLabel.numberOfLines = 3;
                self.textLabel.textColor = [UIColor whiteColor];
                self.textLabel.shadowColor = [UIColor blackColor];
                self.textLabel.shadowOffset = CGSizeMake(1, 1);
                self.textLabel.transform = CGAffineTransformMakeRotation(-(asin(latoA / (sqrt(self.width * self.width + latoA * latoA)))));
                [self addSubview:self.textLabel];
            }
            self.textLabel.text = text;
        }

        @end

    MJViewController.h

        //
        //  MJViewController.h
        //  ParallaxImages
        //
        //  Created by Mayur on 4/1/14.
        //  Copyright (c) 2014 sky. All rights reserved.
        //

        #import <UIKit/UIKit.h>

        @interface MJRootViewController : UIViewController {
            NSInteger choosed;
        }
        @end

    MJViewController.m

        //
        //  MJViewController.m
        //  ParallaxImages
        //
        //  Created by Mayur on 4/1/14.
        //  Copyright (c) 2014 sky. All rights reserved.
        //

        #import "MJRootViewController.h"
        #import "MJCollectionViewCell.h"

        @interface MJRootViewController () <UICollectionViewDataSource, UICollectionViewDelegate, UIScrollViewDelegate>
        @property (weak, nonatomic) IBOutlet UICollectionView *parallaxCollectionView;
        @property (nonatomic, strong) NSMutableArray *images;
        @end

        @implementation MJRootViewController

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            // Do any additional setup after loading the view, typically from a nib.
            //self.navigationController.navigationBarHidden = YES;

            // Fill image array with images
            NSUInteger index;
            for (index = 0; index < 14; ++index) {
                // Setup image name
                NSString *name = [NSString stringWithFormat:@"image%03ld.jpg", (unsigned long)index];
                if (!self.images) self.images = [NSMutableArray arrayWithCapacity:0];
                [self.images addObject:name];
            }
            [self.parallaxCollectionView reloadData];
        }

        - (void)didReceiveMemoryWarning
        {
            [super didReceiveMemoryWarning];
            // Dispose of any resources that can be recreated.
        }

        #pragma mark - UICollectionViewDatasource Methods
        - (NSInteger)collectionView:(UICollectionView *)collectionView numberOfItemsInSection:(NSInteger)section
        {
            return self.images.count;
        }

        - (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath
        {
            MJCollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"MJCell" forIndexPath:indexPath];

            //get image name and assign
            NSString *imageName = [self.images objectAtIndex:indexPath.item];
            cell.image = [UIImage imageNamed:imageName];

            //set offset accordingly
            CGFloat yOffset = ((self.parallaxCollectionView.contentOffset.y - cell.frame.origin.y) / IMAGE_HEIGHT) * IMAGE_OFFSET_SPEED;
            cell.imageOffset = CGPointMake(0.0f, yOffset);

            NSString *text;
            NSInteger index = choosed >= 0 ? choosed : indexPath.row % 5;
            switch (index) {
                case 0: text = @"I am the home cell..."; break;
                case 1: text = @"I am next..."; break;
                case 2: text = @"Cell 3..."; break;
                case 3: text = @"Cell 4..."; break;
                case 4: text = @"The last cell"; break;
                default: break;
            }
            cell.text = text;

            cell.overlayView.backgroundColor = [UIColor colorWithWhite:0.0f alpha:0.4f];
            //cell.textLabel.text = @"Label showing";
            cell.textLabel.font = [UIFont boldSystemFontOfSize:22.0f];
            cell.textLabel.textColor = [UIColor whiteColor];

            //This is another attempt to display the label by using tags.
            //UILabel* label = (UILabel*)[cell viewWithTag:1];
            //label.text = @"Label works";

            return cell;
        }

        #pragma mark - UIScrollViewdelegate methods
        - (void)scrollViewDidScroll:(UIScrollView *)scrollView
        {
            for (MJCollectionViewCell *view in self.parallaxCollectionView.visibleCells) {
                CGFloat yOffset = ((self.parallaxCollectionView.contentOffset.y - view.frame.origin.y) / IMAGE_HEIGHT) * IMAGE_OFFSET_SPEED;
                view.imageOffset = CGPointMake(0.0f, yOffset);
            }
        }

        @end

    Read the article

  • LNK2001 error when compiling windows forms application with VC++ 2008

    - by Blin
    I've been trying to write a small application which will work with MySQL in C++. I am using MySQL Server 5.1.41 and MySQL C++ Connector 1.0.5. Everything compiles fine when I write console applications, but when I try to compile a Windows Forms application in exactly the same way (same libraries, same paths, same project properties) I get these errors:

    Error 1: error LNK2001: unresolved external symbol "public: virtual int __clrcall sql::mysql::MySQL_Savepoint::getSavepointId(void)" (?getSavepointId@MySQL_Savepoint@mysql@sql@@$$FUAMHXZ) test1.obj test1

    Error 2: error LNK2001: unresolved external symbol "public: virtual class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __clrcall sql::mysql::MySQL_Savepoint::getSavepointName(void)" (?getSavepointName@MySQL_Savepoint@mysql@sql@@$$FUAM?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@XZ) test1.obj test1

    Following instructions from here, I've got this:

    Undecoration of "?getSavepointId@MySQL_Savepoint@mysql@sql@@UEAAHXZ" is: "public: virtual int __cdecl sql::mysql::MySQL_Savepoint::getSavepointId(void) __ptr64"

    Undecoration of "?getSavepointName@MySQL_Savepoint@mysql@sql@@UEAA?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@XZ" is: "public: virtual class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __cdecl sql::mysql::MySQL_Savepoint::getSavepointName(void) __ptr64"

    But what should I do now?

    Read the article

  • How to implement an ID field on a POCO representing an Identity field in MS SQL?

    - by Dr. Zim
    If I have a Domain Model that has an ID that maps to a SQL identity column, what does the POCO look like that contains that field?

    Candidate 1: allows anyone to set and get the ID. I don't think we want anyone setting the ID except the Repository, from the SQL table.

        public class Thing
        {
            public int ID { get; set; }
        }

    Candidate 2: allows someone to set the ID upon creation, but we won't know the ID until after we create the object (the factory creates a blank Thing object where ID = 0 until we persist it). How would we set the ID after persisting?

        public class Thing
        {
            public Thing() : this(0) { }
            public Thing(int id) { _ID = id; }

            private int _ID;
            public int ID
            {
                get { return _ID; }
            }
        }

    Candidate 3: methods to set the ID? Somehow we would need to allow the Repository to set the ID without allowing the consumer to change it.

    Any ideas? Is this barking up the wrong tree? Do we send the object to the Repository, save it, throw it away, then create a new object from the loaded version and return that as a new object?
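    One hedged sketch of what "Candidate 3" could look like (my illustration, not from the question): give the setter internal visibility, so only code in the data-access assembly - the Repository - can stamp the identity value after the INSERT, while consumers see a read-only property. The repository helper below is hypothetical.

        public class Thing
        {
            // Readable everywhere; settable only inside the assembly
            // that contains the Repository.
            public int ID { get; internal set; }
        }

        public class ThingRepository
        {
            public void Add(Thing thing)
            {
                // Hypothetical helper: runs the INSERT and returns the new
                // identity value (e.g. via SELECT SCOPE_IDENTITY()).
                int newId = InsertAndReturnIdentity(thing);
                thing.ID = newId;   // legal here, not in consumer code
            }

            private int InsertAndReturnIdentity(Thing thing)
            {
                throw new NotImplementedException(); // persistence details omitted
            }
        }

    This keeps the object usable after persisting, avoiding the save-and-reload dance the question ends with.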

    Read the article

  • How to maintain an ordered table with Core Data (or SQL) with insertions/deletions?

    - by Jean-Denis Muys
    This question is in the context of Core Data, but if I am not mistaken, it applies equally well to the more general SQL case. I want to maintain an ordered table using Core Data, with the possibility for the user to:

    - reorder rows
    - insert new lines anywhere
    - delete any existing line

    What's the best data model to do that? I can see two ways:

    1) Model it as an array: I add an int position property to my entity.

    2) Model it as a linked list: I add two one-to-one relations, next and previous, from my entity to itself.

    1) makes it easy to sort, but painful to insert or delete, as you then have to update the position of all objects that come after. 2) makes it easy to insert or delete, but very difficult to sort. In fact, I don't think I know how to express a Sort Descriptor (SQL ORDER BY clause) for that case.

    Now I can imagine a variation on 1):

    3) add an int ordering property to the entity, but instead of having it count one-by-one, have it count 100 by 100 (for example). Then inserting is as simple as finding any number between the ordering of the previous and next existing objects. The expensive renumbering only has to occur when the 100 holes have been filled. Making that property a float rather than an int makes it even better: it's almost always possible to find a new float midway between two floats.

    Am I on the right track with solution 3), or is there something smarter?
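    The gap arithmetic in option 3) is tiny; here is a sketch of it (my illustration, in C# rather than anything Core Data-specific, since the idea is the same in any store):

        public static class Ordering
        {
            // Sort key for a row inserted between two neighbours.
            // With doubles, a midpoint almost always exists.
            public static double Between(double previous, double next)
            {
                double key = (previous + next) / 2.0;
                if (key == previous || key == next)
                    throw new InvalidOperationException("Gap exhausted - renumber the list.");
                return key;
            }
        }

        // Usage sketch: append with previous + 100; insert between 100 and 200
        // at 150, between 150 and 200 at 175, and so on. Renumbering back to
        // round gaps is only needed when two neighbours' keys finally collide.

    Because of the double's 52-bit mantissa, the midpoint between two distinct keys only stops existing after roughly fifty consecutive splits in the same spot, so the expensive renumbering stays rare.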

    Read the article

  • wxPython - ListCtrl and SQLite3

    - by Dunwitch
    I'm trying to get a SQLite3 DB to populate a wx.ListCtrl. I can get it to print to stdout/stderr without any problem. I just can't seem to figure out how to display the data in the DataWindow/DataList? I'm sure I've made some code mistakes, so any help is appreciated.

    Main.py

        import wx
        import wx.lib.mixins.listctrl as listmix
        from database import *
        import sys

        class DataWindow(wx.Frame):
            def __init__(self, parent=None):
                wx.Frame.__init__(self, parent, -1, 'DataList', size=(640, 480))
                self.win = DataList(self)
                self.Center()
                self.Show(True)

        class DataList(wx.ListCtrl, listmix.ListCtrlAutoWidthMixin, listmix.ColumnSorterMixin):
            def __init__(self, parent=DataWindow):
                wx.ListCtrl.__init__(self, parent, -1,
                                     style=wx.LC_REPORT|wx.LC_VIRTUAL|wx.LC_HRULES|wx.LC_VRULES)
                # building the columns
                self.InsertColumn(0, "Location")
                self.InsertColumn(1, "Address")
                self.InsertColumn(2, "Subnet")
                self.InsertColumn(3, "Gateway")
                self.SetColumnWidth(0, 100)
                self.SetColumnWidth(1, 150)
                self.SetColumnWidth(2, 150)
                self.SetColumnWidth(3, 150)

        class MainWindow(wx.Frame):
            def __init__(self, parent=None, id=-1, title="MainWindow"):
                wx.Frame.__init__(self, parent, id, title, size=(800, 600),
                                  style=wx.DEFAULT_FRAME_STYLE ^ (wx.RESIZE_BORDER))
                # StatusBar
                self.CreateStatusBar()
                # Filemenu
                filemenu = wx.Menu()
                # Filemenu - About
                menuitem = filemenu.Append(-1, "&About", "Information about this application")
                self.Bind(wx.EVT_MENU, self.onAbout, menuitem)
                # Filemenu - Data
                menuitem = filemenu.Append(-1, "&Data", "Get data")
                self.Bind(wx.EVT_MENU, self.onData, menuitem)
                # Filemenu - Separator
                filemenu.AppendSeparator()
                # Filemenu - Exit
                menuitem = filemenu.Append(-1, "&Exit", "Exit the application")
                self.Bind(wx.EVT_MENU, self.onExit, menuitem)
                # Menubar
                menubar = wx.MenuBar()
                menubar.Append(filemenu, "&File")
                self.SetMenuBar(menubar)
                # Show
                self.Show(True)
                self.Center()

            def onAbout(self, event):
                pass

            def onData(self, event):
                DataWindow(self)
                callDb = Database()
                sql = "SELECT rowid, address, subnet, gateway FROM pod1"
                records = callDb.select(sql)
                for v in records:
                    print "How do I get the records on the DataList?"
                    #print "%s%s%s" % (v[1], v[2], v[3])
                #for v in records:
                    #DataList.InsertStringItem("%s") % (v[0], v[1], v[2])

            def onExit(self, event):
                self.Close()
                self.Destroy()

            def onSave(self, event):
                pass

        if __name__ == '__main__':
            app = wx.App()
            frame = MainWindow(None, -1)
            frame.Show()
            app.MainLoop()

    database.py

        import os
        import sqlite3

        class Database(object):
            def __init__(self, db_file="data/data.sqlite"):
                database_allready_exists = os.path.exists(db_file)
                self.db = sqlite3.connect(db_file)
                if not database_allready_exists:
                    self.setupDefaultData()

            def select(self, sql):
                cursor = self.db.cursor()
                cursor.execute(sql)
                records = cursor.fetchall()
                cursor.close()
                return records

            def insert(self, sql):
                newID = 0
                cursor = self.db.cursor()
                cursor.execute(sql)
                newID = cursor.lastrowid
                self.db.commit()
                cursor.close()
                return newID

            def save(self, sql):
                cursor = self.db.cursor()
                cursor.execute(sql)
                self.db.commit()
                cursor.close()

            def setupDefaultData(self):
                pass

    Read the article

  • [CODE GENERATION] How to generate DELETE statements in PL/SQL, based on the tables FK relations?

    - by The chicken in the kitchen
    Is it possible via script/tool to automatically generate many DELETE statements based on the tables' FK relations, using Oracle PL/SQL? For example: I have the table CHICKEN (CHICKEN_CODE NUMBER), and there are 30 tables with FK references to its CHICKEN_CODE whose rows I need to delete; there are also another 150 tables foreign-key-linked to those 30 tables that I need to delete first. Is there some PL/SQL tool/script that I can run in order to generate all the necessary DELETE statements based on the FK relations for me? (By the way, I know about cascade delete on the relations, but please pay attention: I CAN'T USE IT IN MY PRODUCTION DATABASE, because it's dangerous!) I'm using Oracle Database 10g R2.

    This is a view I have previously written, but of course it is not recursive:

        CREATE OR REPLACE FORCE VIEW RUN
        (
           OWNER_1,
           CONSTRAINT_NAME_1,
           TABLE_NAME_1,
           TABLE_NAME,
           VINCOLO
        )
        AS
           SELECT OWNER_1,
                  CONSTRAINT_NAME_1,
                  TABLE_NAME_1,
                  TABLE_NAME,
                  '(' || LTRIM(EXTRACT(XMLAGG(XMLELEMENT("x", ',' || COLUMN_NAME)), '/x/text()'), ',') || ')' VINCOLO
             FROM (SELECT CON1.OWNER OWNER_1,
                          CON1.TABLE_NAME TABLE_NAME_1,
                          CON1.CONSTRAINT_NAME CONSTRAINT_NAME_1,
                          CON1.DELETE_RULE,
                          CON1.STATUS,
                          CON.TABLE_NAME,
                          CON.CONSTRAINT_NAME,
                          COL.POSITION,
                          COL.COLUMN_NAME
                     FROM DBA_CONSTRAINTS CON, DBA_CONS_COLUMNS COL, DBA_CONSTRAINTS CON1
                    WHERE CON.OWNER = 'TABLE_OWNER'
                      AND CON.TABLE_NAME = 'TABLE_OWNED'
                      AND ((CON.CONSTRAINT_TYPE = 'P') OR (CON.CONSTRAINT_TYPE = 'U'))
                      AND COL.TABLE_NAME = CON1.TABLE_NAME
                      AND COL.CONSTRAINT_NAME = CON1.CONSTRAINT_NAME
                      --AND CON1.OWNER = CON.OWNER
                      AND CON1.R_CONSTRAINT_NAME = CON.CONSTRAINT_NAME
                      AND CON1.CONSTRAINT_TYPE = 'R'
                 GROUP BY CON1.OWNER, CON1.TABLE_NAME, CON1.CONSTRAINT_NAME, CON1.DELETE_RULE, CON1.STATUS,
                          CON.TABLE_NAME, CON.CONSTRAINT_NAME, COL.POSITION, COL.COLUMN_NAME)
           GROUP BY OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME;

    ... and it contains the error of using DBA_CONSTRAINTS instead of ALL_CONSTRAINTS...

    Read the article

  • GUID or int entity key with SQL Compact/EF4?

    - by David Veeneman
    This is a follow-up to an earlier question I posted on EF4 entity keys with SQL Compact. SQL Compact doesn't allow server-generated identity keys, so I am left with creating my own keys as objects are added to the ObjectContext. My first choice would be an integer key, and the previous answer linked to a blog post that shows an extension method that uses the Max operator with a selector expression to find the next available key:

        public static TResult NextId<TSource, TResult>(this ObjectSet<TSource> table,
            Expression<Func<TSource, TResult>> selector) where TSource : class
        {
            TResult lastId = table.Any() ? table.Max(selector) : default(TResult);
            if (lastId is int)
            {
                lastId = (TResult)(object)(((int)(object)lastId) + 1);
            }
            return lastId;
        }

    Here's my take on the extension method: it will work fine if the ObjectContext that I am working with has an unfiltered entity set. In that case, the ObjectContext will contain all rows from the data table, and I will get an accurate result. But if the entity set is the result of a query filter, the method will return the last entity key in the filtered entity set, which will not necessarily be the last key in the data table. So I think the extension method won't really work.

    At this point, the obvious solution seems to be to simply use a GUID as the entity key. That way, I only need to call the Guid.NewGuid() method to set the ID property before I add a new entity to my ObjectContext.

    Here is my question: Is there a simple way of getting the last primary key in the data store from EF4 (without having to create a second ObjectContext for that purpose)? Any other reason not to take the easy way out and simply use a GUID? Thanks for your help.
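    For the GUID route, the pattern is small enough to sketch (my example, with a hypothetical Customer entity and MyEntities context; AddObject is the EF4 ObjectSet API):

        public class Customer
        {
            public Guid ID { get; set; }
            public string Name { get; set; }
        }

        public static class CustomerWriter
        {
            public static void Add(MyEntities context, string name)
            {
                // Client-generated key: no round trip to the store, no race
                // against other writers, and it works the same under any filter.
                var customer = new Customer { ID = Guid.NewGuid(), Name = name };
                context.Customers.AddObject(customer);
                context.SaveChanges();
            }
        }

    The usual trade-offs apply: GUID keys are larger than ints and random ones fragment a clustered index, but they remove the need to discover the "last" key entirely.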

    Read the article

  • How can I extend a LINQ-to-SQL class without having to make changes every time the code is generated

    - by csharpnoob
    Hi,

    Update from comment: I need to extend LINQ-to-SQL classes with my own parameters and don't want to touch any generated classes. Better suggestions are welcome. But I also don't want to redo all the attribute assignments every time the LINQ-to-SQL classes change: if Visual Studio generates a new attribute on a class, I want my own extended attributes kept separate, with the new ones inherited from the class itself.

    Original question: I'm not sure if it's possible. I have a class Car and a class MyCar extended from class Car. Class MyCar also has a string list. Only difference. How can I cast a Car object to a MyCar object without assigning all attributes by hand? Like:

        Car car = new Car();
        MyCar mcar = (MyCar)car;

    or

        MyCar mcar = new MyCar(car);

    or however I can extend Car with my own variables and not have to always do

        Car car = new Car();
        MyCar mcar = new MyCar();
        mcar.name = car.name;
        mcar.xyz = car.xyz;
        ...

    Thanks.
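    Worth noting alongside the question (this is general LINQ to SQL behaviour, not something from the post): the designer emits its entity classes as partial, so extra members can live in a hand-written file that regeneration never touches - no inheritance or casting needed. A minimal sketch, assuming a generated Car entity with a Name column and using System.Collections.Generic:

        // Car.Extensions.cs - hand-written file, never overwritten by the designer.
        public partial class Car
        {
            // Extra, non-persisted state added alongside the generated members.
            private readonly List<string> _tags = new List<string>();

            public List<string> Tags
            {
                get { return _tags; }
            }

            public string DisplayName
            {
                get { return "Car: " + this.Name; } // Name comes from the generated half
            }
        }

    Since both halves compile into the same class, every Car returned by a query already carries the extra members; there is no Car-to-MyCar conversion step at all.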

    Read the article

  • Creating android app Database with big amount of data

    - by Thomas
    Hi all, the database of my application needs to be filled with a lot of data, so during onCreate() it's not only some CREATE TABLE SQL instructions, there are also a lot of INSERTs. The solution I chose is to store all these instructions in a SQL file located in res/raw, which is loaded with Resources.openRawResource(id). It works well, but I face an encoding issue: I have some accented characters in the SQL file which appear wrong in my application. This is my code to do this:

        public String getFileContent(Resources resources, int rawId) throws IOException {
            InputStream is = resources.openRawResource(rawId);
            int size = is.available();
            // Read the entire asset into a local byte buffer.
            byte[] buffer = new byte[size];
            is.read(buffer);
            is.close();
            // Convert the buffer into a string.
            return new String(buffer);
        }

        public void onCreate(SQLiteDatabase db) {
            try {
                // get file content
                String sqlCode = getFileContent(mCtx.getResources(), R.raw.db_create);
                // execute code
                for (String sqlStatements : sqlCode.split(";")) {
                    db.execSQL(sqlStatements);
                }
                Log.v("Creating database done.");
            } catch (IOException e) {
                // Should never happen!
                Log.e("Error reading sql file " + e.getMessage(), e);
                throw new RuntimeException(e);
            } catch (SQLException e) {
                Log.e("Error executing sql code " + e.getMessage(), e);
                throw new RuntimeException(e);
            }
        }

    The solution I found to avoid this is to load the SQL instructions from a huge static final string instead of a file, and then all accented characters appear correctly. But isn't there a more elegant way to load SQL instructions than a big static final String attribute with all the SQL instructions? Thanks in advance. Thomas

    Read the article

  • Using an embedded DB (SQLite / SQL Compact) for Message Passing within an app?

    - by wk1989
    Hello, just out of curiosity: for applications that have a fairly complicated module tree, would something like SQLite / SQL Compact Edition work well for message passing? So if I have modules containing data such as:

        \SubsystemA\SubSubSysB\ModuleB\ModuleDataC
        \SubSystemB\SubSubSystemC\ModuleA\ModuleDataX

    With traditional message passing/routing, you have to go through intermediate modules in order to pass a message to ModuleB to request, say, ModuleDataC. Instead of doing that, if we simply store "\SubsystemA\SubSubSysB\ModuleB\ModuleDataC" in a SQLite database, getting that data is as simple as a SQL query and needs no routing and passing stuff around (a sketch of the lookup idea appears below). Has anyone done this before? Even if you haven't, do you foresee any issues or performance impact? The only concern I have right now is the passing of custom types, e.g. if ModuleDataC is a custom data structure or a pointer, I'll need some way of storing the data structure in the DB or storing the pointer in the DB. Thanks, JW

    EDIT: One usage case I haven't thought about is when you want to send a message from ModuleA to ModuleB to get ModuleB to do something rather than just getting/setting data. Is it possible to do this using an embedded DB? I believe a callback from the DB would be needed; how feasible is this?
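    To make the lookup idea concrete, here is a rough C# sketch (mine, not the poster's; it assumes the System.Data.SQLite provider and a hypothetical module_data table keyed by path):

        using System.Data.SQLite;

        public static class ModuleStore
        {
            // Reads the serialized value stored under a module path such as
            // @"\SubsystemA\SubSubSysB\ModuleB\ModuleDataC".
            public static string Read(string modulePath)
            {
                using (var conn = new SQLiteConnection("Data Source=modules.db"))
                using (var cmd = new SQLiteCommand(
                    "SELECT value FROM module_data WHERE path = @path", conn))
                {
                    conn.Open();
                    cmd.Parameters.AddWithValue("@path", modulePath);
                    return (string)cmd.ExecuteScalar();
                }
            }
        }

    This also shows where the poster's two concerns bite: a custom structure has to be serialized into the value column, and a raw pointer cannot meaningfully be stored at all; and since a table is passive data, "do something" messages would still need the receiver to poll or subscribe to changes rather than rely on a DB-initiated callback.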

    Read the article

  • Custom ADO.NET provider to intercept and modify sql queries.

    - by Faisal
    Our client has an application that stores blobs in a database, which has now grown enough to impact the performance of SQL Server. To overcome this issue, we are planning to offload all blobs to the file system and leave the path of each file in a new column in the user's table. For example, if the user has a table docs with columns id, name and content (blob), we would ask him to add a new column 'filepath' to this table. Our client is willing to make this change in the database. But when it comes to changing the SQL queries that read and write this table, they are not ready to accept it. Actually, they don't want any change that results in recompilation and deployment. Now we are planning to write a custom ADO.NET provider that will:

    - intercept the SELECT queries
    - add a column 'filepath' at the end of the SELECT statement
    - retrieve the result set and modify the 'content' column value based on the 'filepath' value

    Is there any use case that you think will certainly fail with this approach? I know this sounds dirty, but do we have a better way?
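    A very rough sketch of the two interception steps (mine, not the client's design; real query rewriting needs a SQL parser, since naive string surgery breaks on SELECT *, aliases, subqueries and joins):

        using System;
        using System.Data;
        using System.IO;

        public static class BlobOffloadInterceptor
        {
            // Step 1 sketch: turn "SELECT id, name, content FROM docs ..."
            // into "SELECT id, name, content, filepath FROM docs ...".
            public static string AddFilePathColumn(string selectSql)
            {
                int fromIndex = selectSql.IndexOf(" FROM ", StringComparison.OrdinalIgnoreCase);
                return fromIndex < 0 ? selectSql : selectSql.Insert(fromIndex, ", filepath");
            }

            // Step 2 sketch: when a row comes back, substitute the file's bytes
            // for the (now possibly empty) blob column.
            public static object ResolveContent(IDataRecord row, int contentOrdinal, int filePathOrdinal)
            {
                string path = row.IsDBNull(filePathOrdinal) ? null : row.GetString(filePathOrdinal);
                return string.IsNullOrEmpty(path)
                    ? row.GetValue(contentOrdinal)   // blob still lives in the database
                    : File.ReadAllBytes(path);       // blob offloaded to the file system
            }
        }

    The provider itself would wrap the real DbConnection/DbCommand/DbDataReader classes and call helpers like these from ExecuteReader and the reader's accessors; writes (INSERT/UPDATE of content) need the mirror-image treatment.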

    Read the article

  • Call from a singleton class to a function which in turn calls that class's method

    - by dare2be
    Hello, I am still looking for a way to phrase it properly (I'm not a native speaker...). So I have this class SQL which implements the singleton pattern (for obvious reasons) and I also have this function, checkUsr(), which queries the database using one of SQL's methods. Everything works fine as long as I don't call checkUsr() from within the SQL class. When I do so, the script just exits and a blank page is displayed - no errors are returned, no exception is thrown... What's happening? And how do I work around this problem?

    EDIT:

        class SQL {
            public static function singleton() {
                static $instance;
                if (!isset($instance))
                    $instance = new SQL;
                return $instance;
            }

            public function tryLoginAuthor( $login, $sha1 ) {
                (...)
            }
        }

        function checkUsr() {
            if (!isset($_SESSION['login']) || !isset($_SESSION['sha1']))
                throw new Exception('Not logged in', 1);
            $SQL = SQL::singleton();
            $res = $SQL->tryLoginAuthor($_SESSION['login'], $_SESSION['sha1']);
            if (!isset($res[0]))
                throw new Exception('Not logged in', 1);
        }

    Read the article

  • Java ORM related question - SQL Vs Google DB (Big Table?) GAE

    - by StackerFlow
    I was wondering about the following two options when one is not using SQL tables but ORM-based DBs (for example, when you are using GAE). Would the second option be less efficient?

    Requirement: There is an object. The object has a collection of similar items. I need to store this object. For example, say the object is a tree and it has a collection of leaves.

    Option 1: Traditional SQL-type structure: a table for the Tree (with TreeId as the identifier for a row in the table) and a table for the Leaves (where each leaf has a TreeId, and to show the leaves of a tree, I query all leaves where the TreeId is the Id of the tree). Here, the Tree structure DOES NOT have a field with leaves.

    Option 2: ORM / GAE tables: using the same example above, I have an object for Tree where the object has a collection (Set/List in Java/C++) of leaves. I store and retrieve the Tree together with the leaves (as the leaves are implemented as a Set in the Tree object).

    My question is, will the second one be less efficient than the first option? If so, why? Are there other alternatives? Thank you!

    Read the article

  • C# how to get trailing spaces from the end of a varchar(513) field while exporting SQL table to a flat file

    - by svon
    How do I keep the empty spaces at the end of a varchar(513) field while exporting data from a SQL table to a flat file? I have a console application. Here is what I am using to export a SQL table having only one column of varchar(513) to a flat file. But I need to get all 513 characters, including the spaces at the end. How do I change this code to incorporate that? Thanks

        var destination = args[0];
        var command = string.Format("Select * from {0}", Validator.Check(args[1]));
        var connectionstring = string.Format("Data Source={0}; Initial Catalog=dbname;Integrated Security=SSPI;", args[2]);
        var helper = new SqlHelper(command, CommandType.Text, connectionstring);

        using (StreamWriter writer = new StreamWriter(destination))
        using (IDataReader reader = helper.ExecuteReader())
        {
            while (reader.Read())
            {
                Object[] values = new Object[reader.FieldCount];
                int fieldCount = reader.GetValues(values);
                for (int i = 0; i < fieldCount; i++)
                    writer.Write(values[i]);
                writer.WriteLine();
            }
            writer.Close();
        }
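    One hedged adjustment (mine, not a verified fix for this environment): pad each value back out to the declared column width before writing, which makes the fixed-width intent explicit regardless of whether the trailing spaces were trimmed on the way out of SQL Server. The 513 matches the varchar(513) column in the question.

        for (int i = 0; i < fieldCount; i++)
        {
            string s = values[i] == null ? string.Empty : values[i].ToString();
            writer.Write(s.PadRight(513)); // restore the field to its full fixed width
        }

    If the spaces are being lost in the database itself rather than in the export, the table's ANSI_PADDING setting at creation time is also worth checking, since with it OFF trailing spaces are trimmed from varchar values on insert.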

    Read the article

  • New training on Power Pivot with recorded video courses

    - by Marco Russo (SQLBI)
    Alberto Ferrari and I started delivering training on Power Pivot in 2010, initially in classrooms and then also online. We also recorded videos for Project Botticelli, where you can find content about Microsoft tools and services for Business Intelligence. In the last months, we produced a recorded video course for people who want to learn Power Pivot without attending a scheduled course. We split the entire Power Pivot course training into three editions, offering the more introductory modules at a lower price:

    - Beginner: introduces Power Pivot to any user who knows Excel and wants to create reports with more complex and large data structures than a single table.
    - Intermediate: improves skills on Power Pivot for Excel, introducing the DAX language and important features such as CALCULATE and Time Intelligence functions.
    - Advanced: includes in-depth coverage of the DAX language, which is required for writing complex calculations, and other advanced features of both Excel and Power Pivot.

    There are also two bundles that include two or three editions at a lower price. Most important, we have a special 40% launch discount on all published video courses using the coupon SQLBI-FRNDS-14, valid until August 31, 2014. Just follow the link to see a more complete description of the editions available and their discounted prices. Regular prices start at $29, which means that you can start a training with less than $18 using the special promotion.

    P.S.: we recently launched a new responsive version of the SQLBI web site, and now we also have a page dedicated to all videos available about our sessions in conferences around the world. You can find more than 30 hours of free videos here: http://www.sqlbi.com/tv.

    Read the article

  • PASS Summit 2011 - Part II

    - by Tara Kizer
    I arrived in Seattle last Monday afternoon to attend PASS Summit 2011. I had really wanted to attend Gail Shaw's (blog|twitter) and Grant Fritchey's (blog|twitter) pre-conference seminar "All About Execution Plans" on Monday, but that would have meant flying out on Sunday, which I couldn't do. On Tuesday, I attended Allan Hirt's (blog|twitter) pre-conference seminar entitled "A Deep Dive into AlwaysOn: Failover Clustering and Availability Groups". Allan is a great speaker, and his seminar was packed with demos and information about AlwaysOn in SQL Server 2012. Unfortunately, I have lost my notes from this seminar and the presentation materials are only available on the pre-con DVD. Hmpf!

    On Wednesday, I attended Gail Shaw's "Bad Plan! Sit!", Andrew Kelly's (blog|twitter) "SQL 2008 Query Statistics", Dan Jones' (blog|twitter) "Improving your PowerShell Productivity", and Brent Ozar's (blog|twitter) "BLITZ! The SQL - More One Hour SQL Server Takeovers". In Gail's session, she went over how to fix bad plans and bad query patterns. Update your stale statistics!

    How to fix bad plans:

    - Use local variables - the optimizer can't sniff them, so it'll optimize for the "average" value
    - Use RECOMPILE (at the query or stored procedure level) - CPU hit
    - OPTIMIZE FOR hint - the most common value you'll pass

    How to fix bad query patterns:

    - Don't use them - ha!
    - Catch-all queries: use dynamic SQL or OPTION (RECOMPILE)
    - Multiple execution paths: split into multiple stored procedures or OPTION (RECOMPILE)
    - Modifying parameter values: use local variables, split into outer and inner procedures, or OPTION (RECOMPILE)

    She also went into "last resort" and "very last resort" options, but those are risky unless you know what you are doing. For the average Joe, she wouldn't recommend these. Examples are query hints and plan guides.

    While I enjoyed Andrew's session, I didn't take any notes as it was familiar material. Andrew is a great speaker though, and I'd highly recommend attending his sessions in the future.

    Next up was Dan's PowerShell session. I need to look into profiles, manifests, function modules, and function import scripts more, as I just didn't quite grasp these concepts. I am attending a PowerShell training class at the end of November, so maybe that'll help clear it up. I really enjoyed the Excel integration demo. It was very cool watching PowerShell build the spreadsheet in real time. I must look into this more! On a side note, I am jealous of Dan's hair. Fabulous hair!

    Brent's session showed us how to quickly gather information about a server that you will be taking over database administration duties for. He wrote a script to do a fast health check and then later wrapped it into a stored procedure, sp_Blitz. I can't wait to use this at my work, even on systems where I've been the primary DBA for years; maybe there's something I've overlooked. We are using EPM to help standardize our environment and uncover problems, but sp_Blitz will definitely still help us out. He even provides a cloud-based update feature, sp_BlitzUpdate, for sp_Blitz so you don't have to constantly update it when he makes a change. I think I'll utilize his update code for some other challenges that we face at my work.

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence), but this is not true if you have fewer than 6 updates to the temporary table. This can lead to poor performance of queries which are sensitive to the content of temporary tables.

    I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched for what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, which is part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer case we implemented a workaround to avoid this issue (see below for example workarounds).

    When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal.

    We can work around this issue by disabling temporary table caching by explicitly executing a DDL statement on the temporary table. One possibility is to execute an ALTER TABLE statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index.

    I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter: SQL Server: General Statistics -> Active Temp Tables.
    The script to understand the issue and the workaround is listed below:

        set nocount on
        set statistics time off
        set statistics io off

        drop table tab7
        go
        create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
        go
        create index test on tab7(c2, c1, c3)
        go
        begin tran
        declare @i int
        set @i = 1
        while @i <= 50000
        begin
            insert into tab7 values (@i, 1, 'a')
            set @i = @i + 1
        end
        commit tran
        go
        insert into tab7 values (50001, 1, 'a')
        go
        checkpoint
        go

        drop proc test_slow
        go
        create proc test_slow @i int
        as
        begin
            declare @j int
            create table #temp1 (c1 int primary key)
            insert into #temp1 (c1) select @i
            select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_slow 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        exec test_slow 2
        go

        drop proc test_with_recompile
        go
        create proc test_with_recompile @i int
        as
        begin
            declare @j int
            create table #temp1 (c1 int primary key)
            insert into #temp1 (c1) select @i
            select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
            option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        --low reads on 3rd execution as expected for parameter '2'
        exec test_with_recompile 2
        go

        drop proc test_with_alter_table_recompile
        go
        create proc test_with_alter_table_recompile @i int
        as
        begin
            declare @j int
            create table #temp1 (c1 int primary key)
            --to avoid caching of temporary tables one can create a constraint
            --but this might lead to duplicate constraint name error on concurrent usage
            alter table #temp1 add constraint test123 unique(c1)
            insert into #temp1 (c1) select @i
            select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
            option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_alter_table_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_alter_table_recompile 2
        go

        drop proc test_with_index_recompile
        go
        create proc test_with_index_recompile @i int
        as
        begin
            declare @j int
            create table #temp1 (c1 int primary key)
            --to avoid caching of temporary tables one can create an index
            create index test on #temp1(c1)
            insert into #temp1 (c1) select @i
            select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
            option (recompile)
        end
        go
        set statistics time on
        set statistics io on
        dbcc dropcleanbuffers
        go
        --high reads as expected for parameter '1'
        exec test_with_index_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_index_recompile 2
        go

    Read the article

  • SSISDB Analysis Script on Gist

    - by Davide Mauri
    I've created two simple, yet very useful, scripts to extract some useful data to quickly monitor SSIS package execution in SQL Server 2012 and later:

- get-ssis-execution-status
- get-ssis-data-pumped-rows

I've started to use gist since it comes in very handy for these quick'n'dirty scripts and snippets, and you can find the above scripts and others there (hopefully the number will increase over time; I plan to use gist to store all the code snippets I used to keep in a dedicated folder on my machine).

Now, back to the aforementioned scripts. The first one ("get-ssis-execution-status") returns a list of all executed and executing packages along with:

- the latest successful and running executions (so that one can get an idea of the expected run time)
- error messages
- warning messages related to duplicate rows found in lookups

The second one ("get-ssis-data-pumped-rows") returns information on Data Flow status. Here there's something interesting, IMHO. Nothing exceptional, let it be clear, but nonetheless useful: the script extracts information on destinations and rows sent to destinations right from the messages produced by the Data Flow component. This helps to quickly understand how many rows have been sent and where, without having to increase the logging level.

Enjoy!

PS
I haven't tested them with SQL Server 2014, but AFAIK they should work without problems. Of course any feedback on this is welcome.
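To give an idea of what scripts like these sit on top of, here is a minimal sketch of a query against the SSISDB catalog views. The status mapping follows the documented catalog.executions status codes, but this is an illustrative approximation of the first script's approach, not the gist code itself:

    use SSISDB;
    go
    -- Most recent executions with a human-readable status
    -- (status codes as documented for catalog.executions)
    select top 50
        e.execution_id,
        e.folder_name,
        e.project_name,
        e.package_name,
        case e.status
            when 1 then 'created'
            when 2 then 'running'
            when 3 then 'canceled'
            when 4 then 'failed'
            when 5 then 'pending'
            when 6 then 'ended unexpectedly'
            when 7 then 'succeeded'
            when 8 then 'stopping'
            when 9 then 'completed'
        end as execution_status,
        e.start_time,
        e.end_time
    from catalog.executions e
    order by e.execution_id desc;

The row counts reported by the second script come from the text of the Data Flow messages, which SSISDB stores in catalog.event_messages; that is why no increase in logging level is required.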

    Read the article

  • Christmas in the Clouds

    - by andrewbrust
    I have been spending the last two weeks immersing myself in a number of Windows Azure and SQL Azure technologies. And in setting up a new business (I'll speak more about that in the future), I have also become a customer of Microsoft's BPOS (Business Productivity Online Services). In short, it has been a fortnight of Microsoft cloud computing.

On the Azure side, I've looked, of course, at Web Roles and Worker Roles. But I've also looked at Azure Storage's REST API (including coding to it directly), I've looked at Azure Drive and the new VM Role; I've looked quite a bit at SQL Azure (including the project "Houston" Silverlight UI) and I've looked at SQL Azure Labs' OData service too. I've also looked at DataMarket and its integration with both PowerPivot and native Excel. Then there's AppFabric Caching, SQL Azure Reporting (what I could learn of it) and the Visual Studio tooling for Azure, including the storage of certificate-based credentials. And to round it out with some user stuff, on the BPOS side, I've been working with Exchange Online, SharePoint Online and Live Meeting.

I have to say I like a lot of what I've been seeing. Azure's not perfect, and BPOS certainly isn't either. But there's good stuff in all these products, and there's a lot of value.

Azure Goes Deep

Most people know that Web and Worker roles put the platform in charge of spinning virtual machines up and down, and keeping them up to date. But you can go way beyond that now. The still-in-beta VM Role gives you the power to craft the machine (much as Amazon's EC2 does), though it takes away the platform's self-managing attributes. It still spins instances up and down, making drive storage non-durable, but Azure Drive gives you the ability to store VHD files as blobs and mount them as virtual hard drives that are readable and writeable.

Whether with Azure Storage or SQL Azure, Azure does data. And OData is everywhere. Azure Table Storage supports an OData interface. So does SQL Azure, and so does DataMarket (the former project "Dallas"). That means that Azure data repositories aren't just straightforward to provision and configure; they're also easy to program against, from just about any programming environment, in a RESTful manner. And for more .NET-centric implementations, Azure AppFabric Caching takes the technology formerly known as "Velocity" and throws it up into the cloud, speeding data access even more.

Snapping in Place

Once you get the hang of it, this stuff just starts to work in a way that becomes natural to understand. I wasn't expecting that, and I was really happy to discover it. In retrospect, I am not surprised, because I think the various Azure teams are the center of gravity for Redmond's innovation right now. The products belie this, and so do my observations of the product teams' motivation and high morale. It is really good to see this; Microsoft needs to lead somewhere, and they need to be seen as the underdog while doing so. With Azure, both requirements are in place.

BPOS: Bad Acronym, Easy Setup

BPOS is about products you already know: Exchange, SharePoint, Live Meeting and Office Communications Server. As such, it's hard not to be underwhelmed by BPOS. Until you realize how easy it makes it to get all that stuff set up.

I would say that from sign-up to productive use took me about 45 minutes, and that included the time necessary to wrestle with my DNS provider, set Outlook and my smartphone up to talk to the Exchange account, create my SharePoint site collection, and configure the Outlook Conferencing add-in to talk to the provisioned Live Meeting account. Never before did I think setting up my own Exchange mail could come anywhere close to the simplicity of setting up an SMTP/POP account, and yet BPOS actually made it faster.

What I Want from My Azure Christmas Next Year

Not everything about Microsoft's cloud is good. I close this post with a list of things I'd like to see addressed:

- BPOS offerings are still based on the 2007 wave of Microsoft server technologies. We need to get to 2010, and fast. Arguably, the 2010 products should have been released to the off-premises channel before the on-premises ones. Office 365 can't come fast enough.
- Azure's Internet tooling and domain naming are scattered and confusing. Deployed ASP.NET applications go to cloudapp.net; SQL Azure and Azure Storage work off windows.net. The Azure portal and Project Houston are at azure.com. Then there's appfabriclabs.com and sqlazurelabs.com. There is a new Silverlight portal that replaces most, but not all, of the HTML ones. And Project Houston is Silverlight-based too, though separate from the Silverlight portal tooling.
- Microsoft is the king of tooling. They should not make me keep an entire OneNote notebook full of portal links, account names, access keys, assemblies and namespaces, and do so much CTRL-C/CTRL-V work. I'd like to see more project templates; have them automatically reference the appropriate assemblies, generate the right using/Imports statements and prime my config files with the right markup. Then I want a UI that lets me log in with my Live ID and pick the appropriate project, database, namespace and key string to get set up fast.
- Beta programs, if they're open, should onboard me quickly. I know the process is difficult and everyone's going as fast as they can. But I don't know why it's so difficult or why it takes so long. Getting developers up to speed on new features quickly helps popularize the platform. Make this a priority.
- Make Azure accessible from the simplicity platforms, i.e. ASP.NET Web Pages (Razor) and LightSwitch. Support .NET 4 now. Make WebMatrix, IIS Express and SQL Compact work with the Azure development fabric. Have HTML helpers make Azure programming easier. Have LightSwitch work with SQL Azure and not require SQL Express. LightSwitch has some promising Azure integration now, but we need more. WebMatrix has none, and that's just silly now that the Extra Small Instance is being introduced.
- The Windows Azure Platform Training Kit is great. But I want Microsoft to make it even better, and I want them to evangelize it much more aggressively. There's a lot of good material on Azure development out there, but it's scattered in the same way that the platform is. The Training Kit ties a lot of disparate stuff together nicely. Make it known.

Should Old Acquaintance Be Forgot

All in all, diving deep into Azure was a good way to end the year. Diving deeper into Azure should be a great way to spend next year, not just for me, but for Microsoft too.

    Read the article
