Search Results

Search found 64482 results on 2580 pages for 'sql source control'.

  • SQL Server 2008 Replication: Snapshot Agent timing out on a specific table

    - by Nai
    Hi all, I am going through the process of setting up transactional replication on my setup. However, I am unable to generate a full snapshot of my database; I keep getting stuck at the same point in the process. From the Snapshot Agent: [46%] The process is running and is waiting for a response from the server. Other sources have mentioned increasing the timeout period, but I do not think that is the problem, as this occurs at around the 10-minute mark, well within the default value of 1800s. So I removed that article/table from the snapshot and, lo and behold, it works. Looking at the schema of the table, the identity column has NOT FOR REPLICATION set to TRUE; all my other tables have it set to FALSE. Could this be the source of the error? I also tried changing it to FALSE via the GUI but received a timeout error. Is there a script that lets me change this? Thanks all.
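
    Not part of the original question, but a minimal hedged sketch for checking the flag the poster mentions, using the documented sys.identity_columns catalog view (SQL Server 2005 and later); it only reports the setting, it does not change it:

        -- List identity columns and whether NOT FOR REPLICATION is set
        SELECT OBJECT_SCHEMA_NAME(ic.object_id) AS SchemaName,
               OBJECT_NAME(ic.object_id)        AS TableName,
               ic.name                          AS ColumnName,
               ic.is_not_for_replication
        FROM sys.identity_columns AS ic
        ORDER BY SchemaName, TableName;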

    Read the article

  • sys.dm_exec_query_stats interaction with recompilation

    - by Sam Saffron
    We use sys.dm_exec_query_stats to track down slow queries and queries that are IO offenders. This works great; we get a lot of very insightful stats. It is clear this is not as accurate as running a profiler trace, as you have no idea when SQL Server will decide to chuck out an execution plan. We have quite a few queries where the wrong execution plan is cached, for example queries like the following:

        SELECT TOP 30 a.Id
        FROM Posts a
        JOIN Posts q ON q.Id = a.ParentId
        JOIN PostTags pt ON q.Id = pt.PostId
        WHERE a.PostTypeId = 2
          AND a.DeletionDate IS NULL
          AND a.CommunityOwnedDate IS NULL
          AND a.CreationDate > @date
          AND LEN(a.Body) > 300
          AND pt.Tag = @tag
          AND a.Score > 0
        ORDER BY a.Score DESC

    The problem is that the ideal plan really depends on the date selected (screenshot of ideal plan). However, if the wrong plan is cached, it totally chokes when the date range is big (notice the big fat lines). To overcome this we were recommended to use either OPTION (OPTIMIZE FOR UNKNOWN) or OPTION (RECOMPILE). OPTIMIZE FOR UNKNOWN results in a slightly better plan, which is far from optimal, and executions are tracked in sys.dm_exec_query_stats. RECOMPILE results in the best plan being chosen, however no execution counts or stats are tracked in sys.dm_exec_query_stats. Is there another DMV we could use to track stats on queries with OPTION (RECOMPILE)? Is this behavior by design? Is there another way we can force recompilation while keeping stats tracked in sys.dm_exec_query_stats? Note: the framework will always execute parameterized queries using sp_executesql.
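
    Not part of the original question, but a minimal sketch of the kind of sys.dm_exec_query_stats query used to find IO offenders, built only from documented columns of sys.dm_exec_query_stats and sys.dm_exec_sql_text; the TOP count is arbitrary:

        -- Top 20 cached statements by average logical reads
        -- (only plans still in cache appear here; statements run with
        --  OPTION (RECOMPILE) may be missing, as described above)
        SELECT TOP 20
               qs.execution_count,
               qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
               qs.total_elapsed_time  / qs.execution_count AS avg_elapsed_time,
               SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                         ((CASE qs.statement_end_offset
                             WHEN -1 THEN DATALENGTH(st.text)
                             ELSE qs.statement_end_offset
                           END - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY avg_logical_reads DESC;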

    Read the article

  • SQL Server 2012 Service Pack 1 is available - this time for sure!

    - by AaronBertrand
    Last week I mentioned in passing that Service Pack 1 is now available, while I was blogging from the PASS Summit keynote . I wanted to put up an official post instead of having it appear as a footnote there (I also updated my April Fools' joke to point to the right place). Service Pack 1 Details Service Pack 1 is build # 11.0.3000 and includes 13 fixes to public KB items and 35 other internal (VSTS) items. You can see the list of fixes in KB #2674319 . You can also read about new features included...(read more)
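
    Not part of the original post, but a small sketch for confirming the installed build after patching, using the documented SERVERPROPERTY function; the expected build number is the one quoted above:

        -- Confirm the installed build and service pack level
        -- (SP1 should report ProductVersion 11.0.3000 and ProductLevel 'SP1')
        SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
               SERVERPROPERTY('ProductLevel')   AS ProductLevel,
               SERVERPROPERTY('Edition')        AS Edition;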

    Read the article

  • Why hasn't C# gained much traction within the open source community?

    - by tmitchel2
    I'm not expecting C# to be on par with, say, Java or Python in the open source community, but it still surprises me just how far behind it is. 'Multi-language' open source repositories like Google Code or GitHub have barely any C# projects in comparison to the other languages I mentioned. I'd like to see C# and .NET shake off that slightly corporate feel and move more into the open source arena, but I just can't see that happening. I'd be interested to hear people's opinions on why this might be.

    Read the article

  • T-SQL Snack: How Much Free Storage Space is Available?

    - by andyleonard
    Introduction: Ever have a need to calculate the total available storage space for a server? Recently I did. Here's a solution I came up with - I bet someone can do this better! xp_fixeddrives: There's a handy stored procedure called xp_fixeddrives that reports the available storage space:

        exec xp_fixeddrives

    This returns:

        drive   MB free
        -----   -----------
        C       6998
        E       201066

    Problem solved right? Maybe. The Sum: What I really want is the sum total of all available space presented to the server. I built this...(read more)
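
    The excerpt is truncated before the author's solution; purely as a hedged sketch of one possible approach (an assumption, not the posted answer), the xp_fixeddrives output can be captured into a temp table and summed:

        -- Capture xp_fixeddrives output and total the free space
        CREATE TABLE #FixedDrives (drive char(1), MBFree int);

        INSERT INTO #FixedDrives (drive, MBFree)
        EXEC xp_fixeddrives;

        SELECT SUM(MBFree)          AS TotalMBFree,
               SUM(MBFree) / 1024.0 AS TotalGBFree
        FROM #FixedDrives;

        DROP TABLE #FixedDrives;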

    Read the article

  • Reporting SQL Vulnerability [migrated]

    - by Ciaran87Bel
    My first post here, so I'll hopefully keep it simple. I have just finished building a CMS targeted at a certain industry and built a test site to see how everything works. Anyway, I wrote a program to check for SQL injection vulnerabilities, and the program followed a blog link to an external website. The program discovered that the external site had a massive vulnerability that left it open to practically anyone, who could then access every bit of data on their MySQL server and run queries, etc. The thing is, this external site is the brand leader in their industry and does millions upon millions of sales per annum. I have tried contacting them to let them know, and even went as far as contacting the company that built their platform, but I was pretty much brushed off and haven't heard back from them. Their database would contain the details of hundreds of thousands of customers and all their data. I could easily make myself a site admin in a few seconds, but they won't listen to me, even though I have offered to share the vulnerability with them and help in any way I can. Is there anything else I can do? It is one of the biggest security risks I have ever personally come across. Are there any other steps I should take to report this? Thanks

    Read the article

  • IBM's RTC and Microsoft's TFS 2010

    - by gkdm
    Hi, what in your view are the most important differences? We need to make an expensive decision... Thanks. Information: we have both Java and .NET projects (a few more .NET); we are very interested in project lifecycle management; and we are migrating from ClearCase.

    Read the article

  • Tables created programmatically don't appear in WebBrowser control

    - by John Hall
    I'm creating HTML dynamically in a WebBrowser control. Most elements seem to appear correctly, with the exception of a table. My code is:

        var doc = webBrowser1.Document;
        var body = webBrowser1.Document.Body;

        body.AppendChild(webBrowser1.Document.CreateElement("hr"));

        var div = doc.CreateElement("DIV");
        var table = doc.CreateElement("TABLE");
        var row1 = doc.CreateElement("TR");

        var cell1 = doc.CreateElement("TD");
        cell1.InnerText = "Cell 1";
        row1.AppendChild(cell1);

        var cell2 = doc.CreateElement("TD");
        cell2.InnerText = "Cell 2";
        row1.AppendChild(cell2);

        table.AppendChild(row1);
        div.AppendChild(table);
        body.AppendChild(div);

        body.AppendChild(webBrowser1.Document.CreateElement("hr"));

    The HTML tags are visible in the OuterHTML property of the body, but the only things that appear in the browser are the two horizontal rules. If I replace div.AppendChild(table); with div.InnerHtml = table.OuterHtml; then everything appears as expected.

    Read the article

  • Transitioning to Branching with TFS

    - by Rob
    Our team is currently using plain old TFS 2005: no branching, shared checkouts, etc. I would like to introduce a DEV/MAIN/PROD branching system similar to the basic flavor in the TFS Guidance document so that we can do some parallel development and isolation, and firm up our review and deployment processes. I have read most of the whitepapers. Do you have any practical advice, suggested tools, gotchas, or recommendations? Also, we plan to migrate to 2010 once it comes out - not sure if that would affect anything. I appreciate all the suggestions and help I can get, as I am a branching neophyte. Thanks in advance.

    Read the article

  • Set predefined form values (WebBrowser control)

    - by Khou
    Hi, I want my Windows Forms application, which hosts a WebBrowser control, to load a web page, load my definitions, search for the elements that have been defined, and assign them default values that cannot be changed by the end user. For example: if my application finds "FirstName" it would always assign the value "John"; if it finds "LastName" it would always assign the value "Smith" (these values should not be changeable by the end user). Here's how to do it in HTML/JavaScript, but how do I do this in a Windows Forms application?

        HTML

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
        <html>
        <head>
          <title>page title</title>
          <script type="text/javascript" src="demo1.js"></script>
        </head>
        <body onload="def(document.someform, 'name', 'my default name value');">
          <h2 style="color: #8e9182">test form title</h2>
          <form name="someform" id="someform_frm" action="#">
            <table cellspacing="1">
              <tr><td><label for="name">NameX: </label></td><td><input type="text" size="30" maxlength="155" name="name" onchange="def(document.someform, 'name', 'my default name value');"></td></tr>
              <tr><td><label for="name2">NameY: </label></td><td><input type="text" size="30" maxlength="155" name="name2"></td></tr>
              <tr><td colspan="2"><input type="button" name="submit" value="Submit" onclick="showFormData(this.form);"></td></tr>
            </table>
          </form>
        </body>
        </html>

        JAVASCRIPT

        function def(oForm, element_name, def_txt) {
          oForm.elements[element_name].value = def_txt;
        }

    Read the article

  • In TFS, can we customize the merge algorithm (conflict resolution)?

    - by Jennifer Zouak
    In our case, we want to ignore changes in code comment headers for generated code. In Visual Studio, we can change the merge tool (the GUI that pops up) and use a 3rd-party tool that can be customized to ignore changes (http://msdn.microsoft.com/en-us/library/ms181446.aspx). Great, so a file comparison no longer highlights code comments as differences. However, when it comes time to check in, the TFS merge algorithm still prompts us to resolve conflicts. Is there any way to better inform the merge conflict resolution algorithm about which changes are actually important to us? Or can we replace the algorithm, or otherwise have it subcontract its work to a 3rd party?

    Read the article

  • Version control - stubs and mocks

    - by Tesserex
    For the sake of this question, I don't care about the difference between stubs, mocks, dummies, fakes, etc. Let's say I'm working on a project with one other person. I'm working on component A and he is working on component B. They work together, so I stub out B for testing, and he stubs out A. We're working in a DVCS, let's say Git, because that's actually the case here. When it comes time to merge our components together, we need to get the "real" files from my A and his B, but throw away all the fake stuff. During development, it's likely (unless I need to learn how to properly stub things) that the fakes have the same file names and class names as the real thing. So my question is: what is the proper procedure for doing version control on the fakes, and how are the components correctly merged, making sure to grab the real thing and not the fake? I would guess that one way is to just do the merge, expect it to say CONFLICT, and then manually delete all the fake code out of the half-merged files. But this sounds tedious and inefficient. Should the fake things not go under VC at all? Should they be ripped out just before merging? Sorry if the answer to this should be obvious or trivial; I'm just looking for a "suggested practice" here.

    Read the article

  • Vim, LaTeX, word-wrapping, and version control

    - by Bkkbrad
    I'm writing a LaTeX document in vim, and I have it hard wrapping at 80 characters to make reading easier. However, this causes problems with tracking changes in version control. For example, inserting "Lorem ipsum" at the beginning of this text:

        1 Dolor sit amet, consectetur adipiscing elit. Phasellus bibendum lobortis lectus
        2 quis porta. Aenean vestibulum magna vel purus laoreet at molestie massa
        3 suscipit. Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien
        4 ullamcorper elit, dignissim consectetur justo tellus et nunc.

    results in:

        1 Lorum ipsum dolor sit amet, consectetur adipiscing elit. Phasellus bibendum
        2 lobortis lectus quis porta. Aenean vestibulum magna vel purus laoreet at
        3 molestie massa suscipit. Vestibulum vestibulum, mauris nec convallis ultrices,
        4 tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc.

    When I review this change in git, it tells me that all the lines of the paragraph have changed because of the wrapping, even though only one semantic change has occurred. One way around this problem is to have every sentence on its own line. This looks the same in the rendered document, but the source is now harder to read, because line lengths vary widely:

        1 Lorum ipsum dolor sit amet, consectetur adipiscing elit.
        2 Phasellus bibendum lobortis lectus quis porta.
        3 Aenean vestibulum magna vel purus laoreet at molestie massa suscipit.
        4 Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc.

    (If I soft wrap at 80, things still look bad, just in a different way.) Is it possible to have my text on disk with one newline per sentence, but display and edit it in vim as if the text of each paragraph were one long line, soft wrapped at 80 characters? I assume it requires some vim-foo rather than tweaking git or LaTeX.

    Read the article

  • Add existing project to solution under VisualSVN

    - by Eric
    We are changing from SourceSafe 2005 to VisualSVN. How can an existing project be added to a solution? Example: I create solution1 with 3 projects and add it to /trunk. I create solution2 with 1 project and add it to /trunk. In solution1, I add the existing project from solution2, but I cannot add it to Subversion; I get "out of working copy, use the VisualSVN-Set Working Copy root menu". In SourceSafe 2005 it would just link. What is the procedure for VisualSVN? A branch? Regards, Eric

    Read the article

  • Does git ignore empty folders?

    - by Eno
    I created an Android project, added it to my git repo, committed, and pushed my clone to the master. Later I tried checking out the project and Eclipse complained about missing src folders. I checked my repo and the master repo, and the src folders are missing (I'm sure they were there when I created the project). So can someone explain what happened here? I'm new to git, so maybe I missed something?

    Read the article

  • Public code repository

    - by Andy White
    Can anyone recommend a public code repository? A few friends and I are thinking of starting a few projects for iPhone, web, Android, etc., and it would be nice to have a public (internet) code repository to use that would work well on any platform (Mac, PC, Linux). Any type of repository is fine (SVN, CVS, Git, etc.). A few ideas are Sourceforge or Google Code. Any recommendations? Thanks

    Read the article

  • Restoring a Subversion repository to the working copy revision

    - by tinny
    My Subversion VM died the other day (the host hardware melted) and I had to restore a backed-up copy of the VMware Server image. The restore went well and the VM is running again on a new host. The problem I have is that my restored repository is at revision 60, but my working copy on my PC is at 66. When I try to commit my working copy, I get the following error message: svn: Commit failed (details follow): svn: No such revision 61 What is the best way to force this commit and bring Subversion up to the same revision as my working copy? Thanks

    Read the article

  • Generic version control strategy for select table data within a heavily normalized database

    - by leppie
    Hi, sorry for the long-winded title, but the requirement/problem is rather specific. With reference to the following sample (but very simplified) structure (in pseudo SQL), I hope to explain it a bit better:

        TABLE StructureName {
          Id GUID PK,
          Name varchar(50) NOT NULL
        }

        TABLE Structure {
          Id GUID PK,
          ParentId GUID (FK to Structure),
          NameId GUID (FK to StructureName) NOT NULL
        }

        TABLE Something {
          Id GUID PK,
          RootStructureId GUID (FK to Structure) NOT NULL
        }

    As one can see, Structure is a simple tree structure (not worried about ordering of children for this problem). StructureName is a simplification of a translation system. Finally, Something is simply something referencing the tree's root structure. This is just one of many tables that need to be versioned, but it serves as a good example for most cases. There is a requirement to version any changes to the name and/or the tree 'layout' of the Structure table. Previous versions should always be available. There seem to be a few possibilities for tackling this issue, like copying the entire structure, but most approaches cause one to 'lose' referential integrity. For example, if one followed this approach, one would have to make a duplicate of the Something record, given that the root structure would be a new record with a new Id. Other avenues of possible solutions are looking into how wikis handle this, or going a lot further and looking at how proper version control systems work. Currently, I feel a bit clueless about how to proceed in a generic way. Any ideas will be greatly appreciated. Thanks, leppie
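
    For concreteness only, a hedged translation of the pseudo SQL above into runnable T-SQL DDL; the uniqueidentifier types and inline REFERENCES constraints are assumptions, and no versioning scheme is implied:

        CREATE TABLE StructureName (
            Id   uniqueidentifier NOT NULL PRIMARY KEY,
            Name varchar(50)      NOT NULL
        );

        CREATE TABLE Structure (
            Id       uniqueidentifier NOT NULL PRIMARY KEY,
            ParentId uniqueidentifier NULL REFERENCES Structure (Id),      -- tree parent
            NameId   uniqueidentifier NOT NULL REFERENCES StructureName (Id)
        );

        CREATE TABLE Something (
            Id              uniqueidentifier NOT NULL PRIMARY KEY,
            RootStructureId uniqueidentifier NOT NULL REFERENCES Structure (Id)
        );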

    Read the article

  • Control SQL Server CLR Reserved Memory

    - by Ryu
    I've recently enabled CLR on my 64-bit SQL Server 2005 machine for about 3 procs. When I run the following query to gather some info on memory usage...

        SELECT single_pages_kb + multi_pages_kb + virtual_memory_committed_kb AS TotalMemoryUsage,
               virtual_memory_reserved_kb
        FROM sys.dm_os_memory_clerks
        WHERE type = 'MEMORYCLERK_SQLCLR';

    ...I get 129 MB of memory usage and 6.3 GB of reserved virtual memory. The total memory of the machine is 21 GB. What does reserved virtual memory mean exactly, and how can I control the size that is allocated? 6 GB is overkill for what we're doing, and the memory would be much better utilized by the procedure cache. I'm concerned this reserved memory will cause swapping to the page file. Please help me take back control of the memory! Thanks
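
    As a hedged companion to the query above (same documented DMV, same 2005-era column names), a sketch that ranks all memory clerks so the SQLCLR reservation can be compared against other consumers:

        -- Memory clerks ranked by reserved virtual address space
        SELECT type,
               SUM(virtual_memory_reserved_kb)  / 1024      AS reserved_mb,
               SUM(virtual_memory_committed_kb) / 1024      AS committed_mb,
               SUM(single_pages_kb + multi_pages_kb) / 1024 AS pages_mb
        FROM sys.dm_os_memory_clerks
        GROUP BY type
        ORDER BY reserved_mb DESC;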

    Read the article

  • Git Reverting the Repository to Previous State

    - by azamsharp
    I have a .gitignore file in my project directory, and I placed the following entry in it so the files in this folder would not be committed: EStudyMongoDb.Integration.Test\ For some reason, Git pushed the files to the repository anyway! Now I want to remove those files that have been pushed to the repository, but I don't want to lose my local changes to the files inside the folder. How can I do that?

    Read the article

  • Assembla is no longer free, is there a good alternative?!

    - by pabloide86
    http://blog.assembla.com/assemblablog/tabid/12618/bid/6986/Release-2-0-restricting-free-plans-giving-back-with-features-and-pric I'm very disappointed about this... I use Assembla for my personal (commercial) projects, and now I have to move everything to another place! There are already some questions here about different free hosting options. I extracted some of the sites that offer free hosting for projects: http://www.svnhostingcomparison.com/ http://www.codespaces.com/ If you know about others like Assembla, please post them! Cheers from Argentina!

    Read the article

  • WPF Calling a custom command on a custom control (from a viewmodel?)

    - by user190615
    I want to take a snapshot of the visual tree of a custom WPF control when the user clicks a button in a toolbar. The control is bound to a viewmodel. I have a BitmapSource dependency property on the custom control holding the snapped image, which is bound to a property on my VM. The BitmapSource property on the control is updated via a custom command on the control, and I've tied the toolbar button's command to call that command. The end result I want is: when the user clicks the button, the control updates its image and then the VM offers to save it. I can't wrap my mind around an MVVM way of doing this. One inelegant solution is that the control fires an event after the image is updated, which is routed to the viewmodel as a command (command behavior), but then if I want to do something else with the image on some other button click, all the commands bound to the events will fire. All thoughts appreciated. EDIT: The command on the control is a RoutedCommand, and the commands in my VM are Prism delegate commands.

    Read the article
