Search Results

Search found 15041 results on 602 pages for 'breaking changes'.


  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2

    - by pinaldave
    Yesterday I wrote a real world story of how a friend thought they had an intrusion or virus issue, whereas the problem was really in the code. I strongly suggest you read my earlier blog post Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2 before continuing this blog post, as this is the second part of that post. Let me reproduce the simple scenario in T-SQL.

    Building Sample Data

        USE [TestDB]
        GO
        -- Creating Table Products
        CREATE TABLE [dbo].[Products](
            [ProductID] [int] NOT NULL,
            [ProductDesc] [varchar](50) NOT NULL,
            CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ([ProductID] ASC)
        ) ON [PRIMARY]
        GO
        -- Creating Table ProductDetails
        CREATE TABLE [dbo].[ProductDetails](
            [ProductDetailID] [int] NOT NULL,
            [ProductID] [int] NOT NULL,
            [Total] [int] NOT NULL,
            CONSTRAINT [PK_ProductDetails] PRIMARY KEY CLUSTERED ([ProductDetailID] ASC)
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[ProductDetails] WITH CHECK
            ADD CONSTRAINT [FK_ProductDetails_Products] FOREIGN KEY([ProductID])
            REFERENCES [dbo].[Products] ([ProductID])
            ON UPDATE CASCADE
            ON DELETE CASCADE
        GO
        -- Insert Data into Table
        USE TestDB
        GO
        INSERT INTO Products (ProductID, ProductDesc)
        SELECT 1, 'Bike'
        UNION ALL
        SELECT 2, 'Car'
        UNION ALL
        SELECT 3, 'Books'
        GO
        INSERT INTO ProductDetails ([ProductDetailID],[ProductID],[Total])
        SELECT 1, 1, 200
        UNION ALL
        SELECT 2, 1, 100
        UNION ALL
        SELECT 3, 1, 111
        UNION ALL
        SELECT 4, 2, 200
        UNION ALL
        SELECT 5, 3, 100
        UNION ALL
        SELECT 6, 3, 100
        UNION ALL
        SELECT 7, 3, 200
        GO

    Select Data from Tables

        -- Selecting Data
        SELECT * FROM Products
        SELECT * FROM ProductDetails
        GO

    Delete Data from Products Table

        -- Deleting Data
        DELETE FROM Products
        WHERE ProductID = 1
        GO

    Select Data from Tables Again

        -- Selecting Data
        SELECT * FROM Products
        SELECT * FROM ProductDetails
        GO

    Clean up Data

        -- Clean up
        DROP TABLE ProductDetails
        DROP TABLE Products
        GO

    My friend was confused: no delete was ever fired against the ProductDetails table, yet rows were being deleted from it. The reason is the foreign key created between the Products and ProductDetails tables with the keywords ON DELETE CASCADE. When ON DELETE CASCADE is specified and data from Table A is deleted, any rows referencing it in another table through the foreign key are deleted as well.

    Workaround 1: Design Changes – 3 Tables. Change the design to have more than two tables. Create one ProductMaster table with all the products. It should historically store the complete list of products; no product should ever be removed from it. Add another table called CurrentProduct that contains only the products which should be visible in the product catalogue. A third table should be called ProductHistory. There should be no use of the CASCADE keyword among them.

    Workaround 2: Design Changes – Column IsVisible. You can keep the same two tables: 1) Products and 2) ProductDetails. Add a column of BIT datatype named IsVisible. Now change your application code to display the catalogue based on this column. There should be no need to delete anything.

    Workaround 3: Bad Advice. (Bad advice begins here.) I call this bad advice because these suggestions are going to be bad for sure. You should make the necessary design changes and not use poor workarounds which can further damage system and database integrity. Here are the examples: 1) Do not delete the data – well, this is not a real solution, but it can buy time to implement design changes. 2) Remove ON DELETE CASCADE – in this case, you will have entries in ProductDetails with no corresponding ProductID, and later on there will be lots of confusion. 3) Duplicate the data – you can move all the data of the Products table into the ProductDetails table and repeat it on each row, then remove the CASCADE code. This will let you delete the Products table rows without any issue. There are so many things wrong with this suggestion that I will not even start here. (Bad advice ends here.)

    Well, did I miss anything? Please help me with your suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Make your TSQL easier to read during a presentation

    - by Jonathan Allen
    SQL Server Management Studio 2012 has some neat settings that you can use to make your presentations at a SQL event better for the attendees, if you are willing to spend a few minutes making some settings changes. Historically, I have been reluctant to make changes to my SSMS settings as it is such a tedious process and it's not 100% clear that what you think you are changing is actually what gets changed. With SSMS 2012 this has become a lot easier and a lot less risky. In any session that involves TSQL there is a trade-off between the speaker having all the code on screen and the attendees being able to read any of what is on screen. You (the speaker) might be able to read this when you are working on the code, but plenty of your audience won't be able to make head or tail of it. SSMS 2012 has a zoom facility that can help, but don't go nuts... Having the font too big means you will be scrolling a lot and the code will again be rendered unreadable. There is more, though, but you need to take a deep breath and open the Tools menu and delve into the SSMS options. In previous versions of SSMS this is a deep, dark and scary place where changing values can be obscure and sometimes catastrophic to the UI when you get back to the code editor. First things first, we set out as a good DBA and save our current (and presumably acceptable) SSMS configuration. From the Import and Export Settings wizard you can set up a file to hold all of the settings that you currently have. The wizard will open and ask you to pick an option; this time around choose to export settings. Hit Next and Next again, then name your settings profile in the final step of the wizard and click Finish. Once this is done you can change whatever you like and always get back to this configuration in a couple of clicks. So what can you change to make for a good experience? Well, there are plenty of things that can be altered, but don't go too mad and change too many things without taking a look at the results. For every item on the list you can change font, size, weight, colour, background colour, etc., but consider what you are trying to achieve and take it slowly. I have seen presenters with their settings set to have a yellow highlight and black font rather than the default pale blue background and slightly darker font; to achieve that, select Text Editor and then select "Selected Text" in the Display Items listbox. As you change things the Sample area gives you an idea of what effect you are going to have. Black and yellow is the colour combination with the highest contrast – that's why bees and wasps* are that colour. What next? How about increasing the default font for your demo scripts? This means that any script you open, and any new ones that you start, will take on this font. No more zooming (or forgetting to) in the middle of sessions. Now don't forget to save this profile – follow the same steps as above but give the profile a different name; something like PresentationBigFontHighContrast might be appropriate. Once you are done making changes, export the settings once more, then go into the Import Export wizard and import settings from the first profile you created. Everything will be back to normal. Now making changes to suit your environment can be done very easily and with confidence. * – and warning tape and safety signs and so forth – Health and Safety officers simply copy nature!

    Read the article

  • Universities 2030: Learning from the Past to Anticipate the Future

    - by Mohit Phogat
    What will the landscape of international higher education look like a generation from now? What challenges and opportunities lie ahead for universities, especially “global” research universities? And what can university leaders do to prepare for the major social, economic, and political changes—both foreseen and unforeseen—that may be on the horizon? The nine essays in this collection proceed on the premise that one way to envision “the global university” of the future is to explore how earlier generations of university leaders prepared for “global” change—or at least responded to change—in the past. As the essays in this collection attest, many of the patterns associated with contemporary “globalization” or “internationalization” are not new; similar processes have been underway for a long time (some would say for centuries).[1] A comparative-historical look at universities’ responses to global change can help today’s higher-education leaders prepare for the future. Written by leading historians of higher education from around the world, these nine essays identify “key moments” in the internationalization of higher education: moments when universities and university leaders responded to new historical circumstances by reorienting their relationship with the broader world. Covering more than a century of change—from the late nineteenth century to the early twenty-first—they explore different approaches to internationalization across Europe, Asia, Australia, North America, and South America. Notably, while the choice of historical eras was left entirely open, the essays converged around four periods: the 1880s and the international extension of the “modern research university” model; the 1930s and universities’ attempts to cope with international financial and political crises; the 1960s and universities’ role in an emerging postcolonial international development apparatus; and the 2000s and the rise of neoliberal efforts to reform universities in the name of international economic “competitiveness.” Each of these four periods saw universities adopt new approaches to internationalization in response to major historical-structural changes, and each has clear parallels to today. Among the most important historical-structural challenges that universities confronted were: (1) fluctuating enrollments and funding resources associated with global economic booms and busts; (2) new modes of transportation and communication that facilitated mobility (among students, scholars, and knowledge itself); (3) increasing demands for applied science, technical expertise, and commercial innovation; and (4) ideological reconfigurations accompanying regime changes (e.g., from one internal regime to another, from colonialism to postcolonialism, from the cold war to globalized capitalism, etc.). Like universities today, universities in the past responded to major historical-structural changes by internationalizing: by joining forces across space to meet new expectations and solve problems on an ever-widening scale. Approaches to internationalization have typically built on prior cultural or institutional ties. In general, only when the benefits of existing ties had been exhausted did universities reach out to foreign (or less familiar) partners. 
As one might expect, this process of “reaching out” has stretched universities’ traditional cultural, political, and/or intellectual bonds and has invariably presented challenges, particularly when national priorities have differed—for example, with respect to curricular programs, governance structures, norms of academic freedom, etc. Strategies of university internationalization that either ignore or downplay cultural, political, or intellectual differences often fail, especially when the pursuit of new international connections is perceived to weaken national ties. If the essays in this collection agree on anything, they agree that approaches to internationalization that seem to “de-nationalize” the university usually do not succeed (at least not for long). Please continue reading the other essays at http://globalhighered.wordpress.com/

    Read the article

  • Would this be a good web application architecture?

    - by Gustav Bertram
    My problem

    Our MVC-based framework does not allow us to cache only part of our output. Ideally we want to cache static and semi-static bits, and run dynamic bits. In addition, we need to consider data caching that reacts to database changes.

    My idea

    The concept I came up with was to represent a page as a tree of XML fragment objects. (I say XML, but I mean XHTML.) Some of the fragments are dynamic, and can pull their data directly from models or other sources, but most of the fragments are static scaffolding. If a subtree of fragments is completely static, then I imagine that they could unfold into pure XML that would then be cached as the text representation of their parent element. This process would ideally continue until we are left with a root element that contains all of the static XML, and has a couple of dynamic XML fragments that are resolved and attached to the relevant nodes of the XML tree just before the page is displayed. In addition to separating content into dynamic and static fragments, some fragments could be dynamic and cached. A simple expiry time which propagates up through the XML fragment tree would indicate that a specific fragment should periodically be refreshed. A newspaper section or front page does not need to be updated each second; minutes or sometimes even longer is sufficient. Other fragments would be dynamic and uncached. Typically too many articles are viewed for them to be cached - the cache would overflow. Some individual articles may be cached if they are extremely popular.

    Functional notes

    The folding mechanism would need to be smart enough to judge when it would be more profitable to fold a dynamic cached fragment and propagate the expiry date to the parent fragment, or to keep it separate and simply attach it to the XML tree when resolving the page. If some dynamic cached fragments are associated with database objects through mechanisms like a globally unique content id, then changes to the database could trigger changes to the output cache. If fragments store the identifiers of parent fragments, then they could trigger a refolding process that would then include the updated data. A set of pure XML with an ordered array of fragment objects (each storing the identifying information of the node to which it should be attached) can be resolved in a fairly simple way by walking the XML tree and merging the data from the fragments. Because it is not necessary to parse and construct the entire tree in memory before attaching nodes, processing should be fairly fast. The identifier of each fragment would be a combination of relevant identity data and the type of fragment object. Cached parent fragments would contain references to these identifiers, in order to then either pull them from the fragment cache or run their code. The controller's responsibility is reduced to making changes to the database and telling the root XML fragment object to render itself.

    The Question

    My question has two parts: Is this a good design? Are there any obvious flaws I'm missing? Has somebody else thought of this before? References? Is there an existing alternative that I should consider? A cool templating engine maybe?
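
    To make the render/fold idea concrete, here is a minimal sketch of such a fragment tree in Python. All class names (Fragment, StaticFragment, DynamicFragment) are invented for illustration, and it only shows per-fragment expiry caching, not the upward expiry propagation or database-triggered invalidation described above:

        # Hypothetical fragment tree: static scaffolding plus cached/uncached
        # dynamic parts, each rendering to an XHTML string.
        import time

        class Fragment:
            def __init__(self, children=None, expiry=None):
                self.children = children or []
                self.expiry = expiry      # seconds to keep cached output; None = never cache
                self._cached = None
                self._cached_at = 0.0

            def render_body(self):
                return "".join(child.render() for child in self.children)

            def render(self):
                # "Fold": reuse the cached XHTML until this fragment expires.
                if self._cached is not None and self.expiry is not None \
                        and time.time() - self._cached_at < self.expiry:
                    return self._cached
                out = self.render_body()
                if self.expiry is not None:
                    self._cached, self._cached_at = out, time.time()
                return out

        class StaticFragment(Fragment):
            def __init__(self, xml):
                super().__init__()
                self.xml = xml

            def render_body(self):
                return self.xml

        class DynamicFragment(Fragment):
            def __init__(self, pull, expiry=None):
                super().__init__(expiry=expiry)
                self.pull = pull          # callable that hits the model layer

            def render_body(self):
                return self.pull()

        # Front page: static masthead, headlines cached for 60s, user box uncached.
        page = Fragment(children=[
            StaticFragment("<div id='masthead'>The Paper</div>"),
            DynamicFragment(lambda: "<ul><li>latest headline</li></ul>", expiry=60),
            DynamicFragment(lambda: "<div>logged in as gustav</div>"),
        ])
        print(page.render())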

    Read the article

  • resizing row's height in QTreeWidget/QTreeView

    - by serge
    Hi everyone, I have some problems with sizing row height in a QTreeWidget. I use a QStyledItemDelegate with a QPlainTextEdit. During editing of the text in the QPlainTextEdit I check for changes with the help of: rect = self.blockBoundingRect(self.firstVisibleBlock()) and if the text's height changes I resize the editor, and I need the row in the QTreeWidget to resize as well. But I don't know how to inform the TreeWidget or the Delegate about the changes. I tried to initialize the editor with the index, so that I could use it later, but the Delegate creates a new editor every time and I failed to use signals. I also used the following function to catch the resize event, but it doesn't: bool QAbstractItemDelegate::editorEvent ( QEvent * event, QAbstractItemModel * model, const QStyleOptionViewItem & option, const QModelIndex & index ). How can I bind the editor's size changes to the TreeWidget? And one more thing: by default all items (cells) in a TreeWidget have -1 or some big value as the default width. I need the whole text in the cell to be visible, so how can I limit the cell's width to the visible range and make it expand in height? I want the same behavior as, for instance, a table in MS Word. Thank you in advance, Serge
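
    One possible route (a hedged sketch, not a tested answer): QAbstractItemDelegate already has a sizeHintChanged(index) signal that views listen to, so the delegate can remember the index it is editing and emit that signal whenever the editor's text grows; the view then re-asks sizeHint() for that row. In PyQt5 syntax, assuming the model stores plain text under DisplayRole:

        from PyQt5 import QtCore, QtGui, QtWidgets

        class GrowingTextDelegate(QtWidgets.QStyledItemDelegate):
            def createEditor(self, parent, option, index):
                editor = QtWidgets.QPlainTextEdit(parent)
                # A persistent index survives model changes while editing.
                self._edit_index = QtCore.QPersistentModelIndex(index)
                editor.textChanged.connect(self._emit_size_hint_changed)
                return editor

            def _emit_size_hint_changed(self):
                idx = self._edit_index
                # Rebuild a QModelIndex and tell the view to re-query sizeHint().
                self.sizeHintChanged.emit(
                    idx.model().index(idx.row(), idx.column(), idx.parent()))

            def sizeHint(self, option, index):
                # Height follows the wrapped text; width stays within the column.
                doc = QtGui.QTextDocument()
                doc.setPlainText(index.data(QtCore.Qt.DisplayRole) or "")
                doc.setTextWidth(option.rect.width() if option.rect.isValid() else 200)
                return QtCore.QSize(int(doc.idealWidth()), int(doc.size().height()))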

    Read the article

  • Permission denied: .hg\store\lock

    - by harpo
    This smells like a serverfault question, yet there are many similar questions here. Your call. I'm setting up Mercurial over IIS6, and thanks to a number of detailed blogs, it's working fine — almost. I can browse and clone the repositories fine, but this is what happens when I try to push:

        D:\sample2>hg push
        pushing to http://localhost/hg/sample2
        searching for changes
        abort: HTTP Error 500: Permission denied: .hg\store\lock

    First of all, there is no such file or folder. Second, the App Pool's logon has total permission on the repository's parent directory, with these inherited ad infinitum. The repository is located on another logical drive (on the same machine), and if I push to it directly, that also works:

        D:\sample2>hg push e:\hg\sample2
        pushing to e:\hg\sample2
        searching for changes
        adding changesets
        adding manifests
        adding file changes
        added 1 changesets with 1 changes to 1 files

    If I change the password in my hgrc, the message indicates a failed authorization, so I believe that's working. I've been fighting this for a couple of days, so any leads would be helpful. Thanks!

    Read the article

  • Core Data Migration - "Can't add source store" error

    - by Tofrizer
    Hi, In my iPhone app I'm using Core Data and I've made changes to my data model that cannot be automatically migrated over (i.e. added new relationships). I added a data model version (Design - Data Model - Add Model Version) and applied my new data model changes to the new version 2. I then created a mapping model and set the Source and Destination models to their correct data models (old and new respectively). When I run the app and call the persistentStoreCoordinator, my app barfs with the following:

        2010-02-27 02:40:30.922 XXXX[73578:20b] Unresolved error Error Domain=NSCocoaErrorDomain Code=134110 UserInfo=0xfc2240 "Operation could not be completed. (Cocoa error 134110.)", {
            NSUnderlyingError = Error Domain=NSCocoaErrorDomain Code=134130 UserInfo=0xfbb3a0 "Operation could not be completed. (Cocoa error 134130.)";
            reason = "Can't add source store";
        }

    FWIW (not much, I think) I've also made the usual code changes in persistentStoreCoordinator to use the NSMigratePersistentStoresAutomaticallyOption and NSInferMappingModelAutomaticallyOption (for future data model changes that can be automatically migrated). More relevantly, my managedObjectModel is created by calling initWithContentsOfURL where the file/resource type is "momd". I've tried updating both the source and destination model in the mapping model (Design - Mapping Model - Update XXX Model) as well as deleting the mapping model and recreating it. I've cleaned and re-built, but all to no avail. I still get the above error message. Any pointers/thoughts on how I can further debug or resolve this problem, please? I haven't posted any code snippets because this feels much more like a build environment issue (and my code is very standard - just the usual Core Data code to handle migrations using a mapping model, but I'm happy to show the code if it helps). Appreciate any help. Thanks

    Read the article

  • How to Encourage More Frequent Commits to SVN

    - by Yaakov Ellis
    A group of developers that I am working with switched from VSS to SVN about half a year ago. The transition from CheckOut-CheckIn to Update-Commit has been hard on a number of users. Now that they are no longer forced to check in their files when they are done (or more accurately, now that no one else can see that they have the file checked out and tell them to check back in in order to release the lock on the file), it has happened on more than one occasion that users have forgotten to Commit their changes until long after they were completed. Although most users are good about Committing their changes, the issue is serious enough that the decision might be made to force users to get locks on all files in SVN before editing. I would rather not see this happen, but I am at a loss over how to improve the situation in another way. So can anyone suggest ways to do any of the following:

    1. Track what files users have edited but have not yet Committed changes for
    2. Encourage users to be more consistent with Committing changes when they are done
    3. Help finish off the user education necessary to get people used to the new version control paradigm

    Out-of-the-box solutions welcome (i.e. a desktop program that reminds users to commit if they have not done so in a given interval, automatically gather stats of user Commit rates and send warning emails if frequency drops below a certain threshold, etc.). A reminder script along those lines is sketched below.
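
    As a sketch of that first out-of-the-box idea (assuming the svn command-line client is on PATH; the working-copy paths are placeholders and the "notification" is just a print):

        # Scheduled-task sketch: nag when a working copy holds uncommitted edits.
        import subprocess

        WORKING_COPIES = [r"C:\work\projectA", r"C:\work\projectB"]  # adjust per user

        def uncommitted_files(path):
            """Return modified/added/deleted/replaced files from `svn status`."""
            out = subprocess.run(["svn", "status", path],
                                 capture_output=True, text=True, check=True).stdout
            # First status column: M/A/D/R are local edits; '?' is just unversioned.
            return [line for line in out.splitlines() if line[:1] in "MADR"]

        for wc in WORKING_COPIES:
            changed = uncommitted_files(wc)
            if changed:
                print("%s: %d uncommitted file(s) - time to commit?" % (wc, len(changed)))
                print("\n".join(changed))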

    Read the article

  • Boto - How to delete a record set from route53 - Tried to delete resource record set but it was not found

    - by Tampa
    I am using the following to delete route53 records. I get no error messages.

        conn = Route53Connection(aws_access_key_id, aws_secret_access_key)
        changes = ResourceRecordSets(conn, zone_id)
        change = changes.add_change("DELETE", sub_domain, "A", 60, weight=weight, identifier=identifier)
        change.add_value(ip_old)
        changes.commit()

    All required fields are present and they match: weight, identifier, ttl=60, etc. E.g.:

        test.com. A 111.111.111.111 60 1 id1
        test.com. A 111.111.111.222 60 1 id2

    I want to delete 111.111.111.222 and its record set. So, what is the proper way to delete a record set? For a record set, I will have multiple values that are distinguished by a unique identifier. When an IP address becomes inactive I want to remove it from route53. I am using a poor man's load balancing. Here is the meta of the record I want to delete:

        {'alias_dns_name': None, 'alias_hosted_zone_id': None, 'identifier': u'15754-1', 'name': u'hui.com.', 'resource_records': [u'103.4.xxx.xxx'], 'ttl': u'60', 'type': u'A', 'weight': u'1'}

        Traceback (most recent call last):
          File "/home/ubuntu/workspace/rtbopsConfig/classes/redis_ha.py", line 353, in <module>
            deleteRedisSubDomains(aws_access_key_id, aws_secret_access_key,platform=platform,sub_domain=sub_domain,redis_domain=redis_domain,zone_id=zone_id,ip_address=ip_address,weight=1,identifier=identifier)
          File "/home/ubuntu/workspace/rtbopsConfig/classes/redis_ha.py", line 341, in deleteRedisSubDomains
            changes.commit()
          File "/usr/local/lib/python2.7/dist-packages/boto-2.3.0-py2.7.egg/boto/route53/record.py", line 131, in commit
            return self.connection.change_rrsets(self.hosted_zone_id, self.to_xml())
          File "/usr/local/lib/python2.7/dist-packages/boto-2.3.0-py2.7.egg/boto/route53/connection.py", line 291, in change_rrsets
            body)
        boto.route53.exception.DNSServerError: DNSServerError: 400 Bad Request
        <?xml version="1.0"?>
        <ErrorResponse xmlns="https://route53.amazonaws.com/doc/2011-05-05/"><Error><Type>Sender</Type><Code>InvalidChangeBatch</Code><Message>Tried to delete resource record set hui.com., type A, SetIdentifier 15754-1 but it was not found</Message></Error><RequestId>9972af89-cb69-11e1-803b-7bde5b9c457d</RequestId></ErrorResponse>

    Thanks
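
    Route 53 only accepts a DELETE when every field (name, type, TTL, weight, identifier, values) matches the stored record set exactly, so one defensive pattern is to read the live record back and echo its own values into the change batch. A hedged sketch against boto's Route 53 API; the credentials, zone_id and target_identifier values are placeholders:

        # Sketch: delete a weighted A record by copying the stored fields verbatim.
        from boto.route53.connection import Route53Connection
        from boto.route53.record import ResourceRecordSets

        aws_access_key_id = "AKIA..."       # placeholders
        aws_secret_access_key = "..."
        zone_id = "Z123EXAMPLE"
        target_identifier = "15754-1"       # SetIdentifier of the record to drop

        conn = Route53Connection(aws_access_key_id, aws_secret_access_key)

        # Find the live record so name, TTL, weight, and values match exactly.
        for record in conn.get_all_rrsets(zone_id):
            if record.type == "A" and record.identifier == target_identifier:
                changes = ResourceRecordSets(conn, zone_id)
                change = changes.add_change("DELETE", record.name, record.type,
                                            ttl=record.ttl, weight=record.weight,
                                            identifier=record.identifier)
                for value in record.resource_records:
                    change.add_value(value)
                changes.commit()
                break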

    Read the article

  • PHP Site Deployment Suggestion

    - by TheOnly92
    I'm currently quite troubled by the deployment process my team has adopted... It's very old-fashioned and I know it doesn't work very well. But I don't exactly know how to change it, so please give some suggestions about it... Here is our current setup: 2 webservers, 1 database server, 1 test server.

    Current deployment process:
    1. We develop and work on the test server; every change is uploaded manually to the test server.
    2. When a change or feature is complete, we commit the changes to the SVN repository.
    3. After committing the changes, we upload our changes to the first webserver, where a cronjob running every minute syncs the files between the servers.

    Something very annoying is that whenever we upload a file just as the sync job starts, the synced file appears corrupted, since it is only half-uploaded. Another thing is that whenever there is a deployment fault, it is extremely difficult to revert. These are basically the problems I'm facing; what should I do?

    Read the article

  • Status of Data in Rollback of Large Transaction in SQL Server

    - by Lloyd Banks
    I have a data warehousing procedure that downloads and replaces dozens of tables from a linked server to a local database. Every once in a while, the code will get stuck on one of the tables on the linked server because the table on the linked server is in a state of transition. I am under the assumption that since the entire procedure is considered one transaction commit, when the procedure gets stuck, none of the changes made by the procedure so far would have committed. But the opposite seems to be true: tables that were "downloaded" before the procedure got stuck have been updated with today's versions on the local server. Shouldn't SQL Server wait for the entire procedure to finish before the changes are durable?

        CREATE PROCEDURE MYIMPORT
        AS
        BEGIN
            SET NOCOUNT ON

            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE1')
                DROP TABLE TABLE1
            SELECT COLUMN1, COLUMN2, COLUMN3
            INTO TABLE1
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE1')

            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE2')
                DROP TABLE TABLE2
            SELECT COLUMN1, COLUMN2, COLUMN3
            INTO TABLE2
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE2')

            --IF THE PROCEDURE GETS STUCK HERE, THEN CHANGES TO TABLE1 WOULD HAVE BEEN MADE
            --ON THE LOCAL SERVER WHILE NO CHANGES WOULD HAVE BEEN MADE TO TABLE3

            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE3')
                DROP TABLE TABLE3
            SELECT COLUMN1, COLUMN2, COLUMN3
            INTO TABLE3
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE3')
        END

    Read the article

  • How to get local bzr commits to server?

    - by C.W.Holeman II
    launchpad.net states that for the project Emle - Electronic Mathematics Laboratory Equipment, the 2.0 series is the current focus of development. This is what I have done so far:

    1. Set the launchpad.net project to import from the sourceforge.net project Emle (this actually set the launchpad.net project to mirror the sourceforge.net project rather than just import the content once).
    2. Examined the launchpad.net project to see that the three commits (#1 - #3) which were done in the sourceforge.net project previously made it into launchpad.net.
    3. Used bzr to get the launchpad.net project, which I did while it was still set for mirroring.
    4. Made three changes and commits using bzr (#4 - #6).
    5. Was unable to see the changes on the launchpad.net site.
    6. Requested the mirroring to be stopped (it did stop).

    Here is an extract from launchpad.net for the project Emle 2.0 series showing that launchpad.net has #1 - #3:

        Code for this series
        The following branch has been registered as the mainline branch for this release series:
        lp:emle - C.W.Holeman II
        3 revisions, 3 in the past month.

    Here on my system, showing changes #1 - #6:

        $ bzr log --line
        6: C.W.Holeman II 2010-02-27 #528096 Corrected setting of paramter value for emleDi...
        5: C.W.Holeman II 2010-02-27 Minor refactor - improved comment regarding workaround...
        4: C.W.Holeman II 2010-02-27 #529089 #529087 Index file html tag lang attribute cor...
        3: cwhii 2010-02-25 {Emle 2.4-5 (BL0011)} New top levels: trunk and tags.
        2: cwhii 2010-02-25 New top levels: trunk and tags.
        1: cwhii 2010-02-25 New top level trunk and tags.

    How do I get the changes that are in bzr on my system to apply to launchpad.net?

    Read the article

  • Problem reintegrating a branch into the trunk in Subversion 1.5

    - by pako
    I'm trying to reintegrate a development branch into the trunk in my Subversion 1.5 repository. I merged all the changes from the trunk to the development branch prior to this operation. Now when I try to reintegrate the changes from the branch I get the following error message:

        Command: Reintegrate merge https://dev/svn/branches/devel into C:\trunk
        Error: Reintegrate can only be used if revisions 280 through 325 were previously
        Error: merged from https://dev/svn/trunk to the reintegrate
        Error: source, but this is not the case:
        Error: branches/devel/images/test
        Error: Missing ranges: /trunk/images/test:280-324
        ...

    The message then goes on complaining about some folders in my project. But when I try to merge the changes from the trunk to the development branch again, TortoiseSVN tells me that there's nothing to merge (as I already merged all the changes before):

        Command: Merging revisions 1-HEAD of https://dev/svn/trunk into C:\devel, respecting ancestry
        Completed: C:\devel

    I'm trying to follow the instructions from here: http://svnbook.red-bean.com/en/1.5/svn.branchmerge.basicmerging.html, but there's nothing about solving such a problem. Any ideas? Perhaps I should just delete the trunk and then make a copy of my branch? But I'm not really sure if it's safe.

    Read the article

  • JavaScript To Clear Form Field On Submit Before Form Submission To Perl Script

    - by Russell C.
    We have a very long form that has a number of fields and 2 different submit buttons. When a user clicks the 1st submit button ("Photo Search") the form should POST and our script will do a search for matching photos based on what the user entered in the text input ("photo_search_text") next to the 1st submit button, then reload the entire form with matching photos added to it. Upon clicking the 2nd submit button ("Save Changes") at the end of the form, it should POST and our script should update the database with the information the user entered in the form. Unfortunately the layout of the form makes it impossible to separate it into 2 separate forms. I inspected the form POST and unfortunately the submitted fields arriving at the Perl script are identical no matter which submit button is clicked, so the script can't differentiate which action to perform based on which submit button was pushed. The only thing I can think of is to update the onclick action of the 2nd submit button so it clears the "photo_search_text" field before the form is submitted, and then only perform a photo search if that field has a value. Based on all this, my question is: what does the JavaScript look like that could clear out the "photo_search_text" field when someone clicks the 2nd submit button? Here is what I've tried so far, none of which has worked successfully:

        <input type="submit" name="submit" onclick="document.update-form.photo_search_text.value='';" value="Save Changes">
        <input type="submit" name="submit" onsubmit="document.update-form.photo_search_text.value='';" value="Save Changes">
        <input type="submit" name="submit" onclick="document.getElementById('photo_search_text')='';" value="Save Changes">

    We also use jQuery on the site, so if there is a way to do this with jQuery instead of plain JavaScript feel free to provide example code for that instead. Lastly, if there is another way to handle this that I'm not thinking of, any and all suggestions would be welcome. Thanks in advance for your help!

    Read the article

  • Basic Subversion questions

    - by Epitaph
    I've just started using Subversion, and have read the official documentation (svn book), a cheat sheet and a couple of guides. I know how to install Subversion (on Linux), create a repository (svnadmin create), import my Eclipse project into the repository (svn import), and view the repository files (svn list). But I am unable to understand some of the other terminology. For example, after importing my Eclipse project into the newly created repository, I have made changes to my Eclipse project (more than 1 file). Now, how should I update the repository with these added files/changes made to my Eclipse project? The svn update command brings the changes from the repository into your working copy - which is the opposite of what I want, i.e. bringing the changes I made in my Eclipse project into the previously imported project in the repository. If I am correct, you update the repository more often (as you keep extending your project implementation) than your working copy (with update). Also, I do not understand when you would use svn merge. The svn book states it applies the differences between 2 sources to a working copy. Is there a scenario which would explain this? Finally, can I have more than 1 project checked into the repository? Or is it better to create a new repository for each project?

    Read the article

  • What does "active directory integration" mean in your .NET app?

    - by flipdoubt
    Our marketing department comes back with "active directory integration" being a key customer request, but our company does not seem to have the attention span to (1) decide on what functional changes we want to make toward this end, (2) interview a broad range of customers to identify the most requested functional changes, and (3) still have this be the "hot potato" issue next week. To help me get beyond the broad topic of "active directory integration," what does it mean in your .NET app, both ASP.NET and WinForms? Here are some sample changes I have to consider:

    - When creating and managing users in your app, are administrators presented with a list of all AD users or just a group of AD users?
    - When creating new security groups within your app (we call them Departments, like "Human Resources"), should this create new AD groups?
    - Do administrators assign users to security groups within your app or outside via AD? Does it matter?
    - Is the user signed on to your app by virtue of being signed on to Windows? If not, do you track users with your own user table and some kind of foreign key into AD?
    - What foreign key do you use to link app users to AD users? Do you have to prove your login process protects user passwords?
    - What foreign key do you use to link app security groups to AD security groups?
    - If you have a WinForms component to your app (we have both ASP.NET and WinForms), do you use the Membership Provider in your WinForms app? Currently, our Membership and Role management predates the framework's version, so we do not use the Membership Provider.

    Am I missing any other areas of functional changes?

    Followup question: Do apps that support "active directory integration" have the ability to authenticate users against more than one domain? Not that one user would authenticate to more than one domain, but that different users of the same system would authenticate against different domains.

    Read the article

  • Using Git to work with subversion: Ignoring modifications to tracked files

    - by Chris Nicola
    I am currently working with a subversion repository but I am using git to work locally on my machine. It makes work much easier, but it also makes some of the bad behavior going on in the subversion repo quite glaring, and that creates problems for me. There is a somewhat complex local build process after pulling down the code, and it creates (and unfortunately modifies) a number of files. Obviously these changes are not meant to be committed back to the repository. Unfortunately the build process is actually modifying some tracked files (yes, most likely because someone mistakenly committed these build artifacts at some point to the subversion repository). Since these are modifications to tracked files, adding them to my ignore file does nothing for me. I can avoid checking these changes back in - I simply don't stage or commit them - but having unstaged local changes means I can't rebase without first cleaning them up. What I would like to know is if there is any way to ignore future changes to a set of tracked files? Alternatively, is there another way to handle the problem I am having, or will I just have to tell whoever checked in these files to clean them up?

    Read the article

  • git: how to not delete files when rebasing commits with file deletion

    - by Benjol
    I have a branch that I would like to rebase onto the latest commit on my master. The problem is that one of the intervening commits on master was to delete and ignore a particular set of files (see this question). If I just do a straight rebase, those files will get deleted again. Is there any way of doing this, inside git, rather than copying all the files out by hand, then copying them back in again afterwards? Or should I do something like create a new branch off master, then merge in just the commits from the old branch?

    Attempted ASCII art:

        master  branch
          |       w      work in progress on branch
          C       |      committed further changes on master
          |       |
          B      /       committed delete/ignore files on master
          |     2        committed changes on branch
          |    /
          A   /          committed changes on master which I now need to get branch working
          |  1           committed changes on branch
          0___/          created branch

    (Doing the art, I realise that I could just rebase branch from A, then merge when I've finished, but I'd still like to know if there's a way to do this 'properly')

    UPDATE: Warning to anyone trying this. The solution proposed here is fine, but when you checkout master again, the B commit will be re-applied, and you lose all your files again :(

    Read the article

  • Detecting what changed in an HTML Textfield

    - by teehoo
    For a major school project I am implementing a real-time collaborative editor. For a little background, basically what this means is that two (or more) users can type into a document at the same time, and their changes are automatically propagated to one another (similar to Etherpad). Now my problem is as follows: I want to be able to detect what changes a user carried out on an HTML textfield. They could:

    1. Insert a character
    2. Delete a character
    3. Paste a string of characters
    4. Cut a string of characters

    I want to be able to detect which of these changes happened and then notify other clients, along the lines of "insert character 'c' at position 2", etc. Anyway, I was hoping to get some advice on how I would go about implementing the detection of these changes. My first attempt was to consider the caret position before and after a change occurred, but this failed miserably. For my second attempt I was thinking about doing a diff on the entire contents of the textfield's old and new values. Am I missing anything obvious with this solution? Is there something simpler?
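
    For scale, the diff-the-snapshots idea is cheap at textfield sizes. As a sketch of the logic (shown in Python for brevity; a browser deployment would need the same approach in JavaScript), a generic diff reduces two snapshots to exactly the positioned insert/delete operations listed above:

        # Sketch: turn two snapshots of a field into positioned insert/delete ops.
        from difflib import SequenceMatcher

        def field_ops(old, new):
            ops = []
            for tag, i1, i2, j1, j2 in SequenceMatcher(None, old, new).get_opcodes():
                if tag in ("delete", "replace"):
                    ops.append(("delete", i1, old[i1:i2]))   # cut / delete
                if tag in ("insert", "replace"):
                    ops.append(("insert", i1, new[j1:j2]))   # typing / paste
            return ops

        print(field_ops("the cat sat", "the black cat sat"))
        # [('insert', 3, ' black')]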

    Read the article

  • delivery mechanism, Rational ClearCase

    - by kadaba
    Hi all, we came up with a stream structure for the Rational ClearCase UCM model and recently migrated our code base into the new setup. We had three different (physical) code bases, and the migration was done this way: we moved the production code first and created a baseline, then the UAT code and created a baseline, and then the development code and created a baseline. As of now the integration stream has the latest baseline, which is the development baseline. We have two other streams for PRD and UAT, from which releases are made into the respective environments. I have my dev stream now. I create an activity and make some changes. Now I need to promote these changes into the UAT environment. If I deliver the changes to the integration stream, the merge is done, but on a development baseline. I do not want to rebase it to UAT, as many development apps will get rebased into UAT, which is not desired. How do I achieve promoting changes to the UAT environment (UAT stream)? Kindly advise.

    Read the article

  • git merge with renamed files

    - by Kevin
    I have a large website that I am moving into a new framework and in the process adding git. The current site doesn't have any version control on it. I started by copying the site into a new git repository. I made a new branch and made all of the changes that were needed to make it work with the new framework. One of those steps was changing the file extension of all of the pages. Now, in the time that I have been working on the new site, changes have been made to files on the old site. So I switched to master and copied all of those changes in. The problem is that when I merge the branch with the new framework back onto master, there is a conflict on every file that was changed on the master branch. I wouldn't be too worried about it, but there are a couple of hundred files with changes. I have tried git rebase and git rebase --merge with no luck. How can I merge these 2 branches without dealing with every file?

    Read the article

  • Bidirectional replication update record problem

    - by Mirek
    Hi, I would like to present my problem related to SQL Server 2005 bidirectional replication. What do I need? My team leader wants to solve one of our problems using bidirectional replication between two databases, each used by a different application. One application creates records in table A, and the changes should replicate to the second database into a copy of table A. When data on the second server is changed, those changes have to be propagated back to the first server. I am trying to achieve bidirectional transactional replication between two databases on one server, which is running SQL Server 2005. I have managed to set this up using scripts, and established 2 publications and 2 read-only subscriptions with loopback detection. The distribution database is created, publishing on both databases is enabled, and the distributor and publisher are up. We are using some rules to control which records will be replicated, so we need to call our custom stored procedures during replication; articles are therefore set to use custom update, insert and delete stored procedures. So far so good, but... Everything works fine and changes replicate, until updates are done on both tables simultaneously or before the changes have replicated (and that takes about 3-6 seconds). Both records then end up with different values:

        UPDATE db1.dbo.TestTable SET Col = 4 WHERE ID = 1
        UPDATE db2.dbo.TestTable SET Col = 5 WHERE ID = 1

    results in:

        db1.dbo.TestTable: Col = 5
        db2.dbo.TestTable: Col = 4

    But we want last-change-wins replication. Please, is there a way to solve my problem? How can I ensure the same values in both records? Or is there an easier solution than this kind of replication? I can provide the sample replication script which I am using. I am looking forward to your ideas, Mirek

    Read the article

  • How do I protect the trunk from hapless newbies?

    - by Michael Haren
    A coworker relayed the following problem; let's say it's fictional to protect the guilty: A team of 5-10 works on a project which is issue-driven. That is, the typical flow goes like this:

    1. A chunk of work (bug, enhancement, etc.) is created as an issue in the issue tracker
    2. The issue is assigned to a developer
    3. The developer resolves the issue and commits their code changes to the trunk
    4. At release time, the frozen and heavily tested trunk (or release branch or whatever) is built in release mode and released

    The problem he's having is that a couple of newbies made several bad commits that weren't caught due to an unfortunate chain of events. This was followed by a bad release with a rollback or flurry of hot fixes. One idea we're toying with: revoke commit access to the trunk for newbies and make them develop on a per-developer branch (we're using SVN).

    Good: newbies are isolated and can't hurt others
    Good: committers merge newbie branches with the trunk frequently
    Good: this enforces rigid code reviews
    Bad: this is burdensome on the committers (but there's probably no way around it since the code needs to be reviewed!)
    Bad: it might make traceability of trunk changes a little tougher since the reviewer would be doing the commit - not too sure on this.

    Update: Thank you, everyone, for your valuable input. I have concluded that this is far less a code/coder problem than I first presented. The root of the issue is that the release procedure failed to capture and test some poor quality changes to the trunk. Plugging that hole is most important. Relying on the false assumption that code in the trunk is "good" is not the solution. Once that hole - testing - is plugged, mistakes by everyone - newbie or senior - will be caught properly and dealt with accordingly. Next, a greater emphasis on code reviews and mentorship (probably driven by some systematic changes to encourage it) will go a long way toward improving code quality. With those two fixes in place, I don't think something as rigid or draconian as what I proposed above is necessary. Thanks!
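
    If the draconian option is ever needed, SVN can enforce it server-side with a pre-commit hook instead of relying on convention. A minimal sketch in Python, where the committer whitelist and the trunk/ layout are assumptions to adapt:

        #!/usr/bin/env python
        # Sketch of an SVN pre-commit hook: only listed users may touch trunk/.
        # Subversion invokes hooks as: pre-commit REPOS-PATH TXN-NAME
        import subprocess
        import sys

        REPOS, TXN = sys.argv[1], sys.argv[2]
        TRUNK_COMMITTERS = {"alice", "bob"}  # placeholder senior-dev whitelist

        def svnlook(subcommand):
            return subprocess.check_output(
                ["svnlook", subcommand, REPOS, "-t", TXN], text=True)

        author = svnlook("author").strip()
        # `svnlook changed` lines look like "U   trunk/foo.c"; path starts at col 4.
        changed = svnlook("changed").splitlines()

        if any(line[4:].startswith("trunk/") for line in changed) \
                and author not in TRUNK_COMMITTERS:
            sys.stderr.write("Commit blocked: %s may not commit to trunk; "
                             "use your personal branch and request a review.\n" % author)
            sys.exit(1)
        sys.exit(0)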

    Read the article

  • Sell me Distributed revision control

    - by ring bearer
    I know there are 1000s of similar topics floating around, and I have read at least 5 threads here on SO, but why am I still not convinced about DVCS? I have only the following questions (note that I am selfishly worried only about Java projects):

    1. What is the advantage or value of committing locally? What? Really? All modern IDEs allow you to keep track of your changes, and if required you can restore a particular change. Also, they have a feature to label your changes/versions at the IDE level!
    2. What if I crash my hard drive? Where did my local repository go? (So how is it cool compared to checking in to a central repo?)
    3. Working offline or on an airplane. What is the big deal? In order for me to build a release with my changes, I must eventually connect to the central repository. Till then it does not matter how I track my changes locally.
    4. OK, Linus Torvalds gives his life to Git and hates everything else. Is that enough to blindly sing its praises? Linus lives in a different world compared to the offshore developers on my mid-sized project.

    Pitch me!

    Read the article

  • Is ADO.NET Entity Framework database schema update possible?

    - by fyasar
    Hi all, I'm working on a proof-of-concept application (something like a CRM) and I need some advice. My application's data layer is completely dynamic and runs on EF 3.5. When the user updates an entity, changes a relation or adds a new column to the database, I'm planning to handle these changes with custom classes first. Then I rebuild my database model layer with the new changes during the application runtime. My model layer is tied to my project without tight coupling, to make reflecting model layer changes easy (it is connected to my project via interfaces and loaded into the application domain at runtime). I need to create dynamic entities, create entity relations and modify them during runtime, and after that I need to create a change script to update the database schema. I know the ADO.NET team says "we will be able to provide this property in EF 4.0", but I don't want to wait for them. How can I apply database changes during runtime via EF 3.5? For example, I need to create a new entity or change some entity's schema, add new properties or change property types; after that, how can I apply these changes to the physical database schema? Any ideas?

    Read the article
