Search Results

Search found 2601 results on 105 pages for 'commit'.

Page 30 of 105

  • Parse git log by modified files

    - by MrUser
    I have been told to write git messages for each modified file all on one line, so I can use grep to find all changes to that file. For instance:

        $ git commit -a
        modified: path/to/file.cpp/.h - 1) change1, 2) change2, etc.

        $ git log | grep path/to/file.cpp/.h
        modified: path/to/file.cpp/.h - 1) change1, 2) change2, etc.
        modified: path/to/file.cpp/.h - 1) change1, 2) change2, etc.
        modified: path/to/file.cpp/.h - 1) change1, 2) change2, etc.

    That's great, but the actual line is hard to read because it either runs off the screen or wraps and wraps and wraps. If I instead write messages like this:

        $ git commit -a
        modified: path/to/file.cpp/.h
        1) change1
        2) change2
        etc.

    is there a good way to use grep or cut or some other tool to get a readout like this?

        $ git log | grep path/to/file.cpp/.h
        modified: path/to/file.cpp/.h
        1) change1
        2) change2
        etc.
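    If each change list has a known maximum length, grep's -A flag already gets close (git log | grep -A 3 path/to/file.cpp prints three lines after every match). For variable-length blocks, a short script can split the log into whole commit messages instead. A minimal sketch in Python; the path constant and the message layout are assumptions, not details from the question:

        import subprocess

        PATH = "path/to/file.cpp"  # hypothetical file to search for

        # %B prints each full commit message; the NUL separator (%x00) keeps
        # messages apart even when they contain blank lines.
        log = subprocess.run(
            ["git", "log", "--format=%B%x00"],
            capture_output=True, text=True, check=True,
        ).stdout

        for message in log.split("\x00"):
            if PATH in message:
                print(message.strip())
                print("-" * 40)  # visual separator between matches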

    Read the article

  • Lucene best practice

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should replace the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says:

    - an IndexWriter is thread-safe
    - the constructor of this class takes a Directory object, so creating an instance seems costly (as in the case of an IndexReader)
    - all changes are buffered and flushed periodically (so a single instance seems to be encouraged)

    but:

    - the changes, although flushed, will only be visible after commit or close
    - after finishing the updates (add/delete), the instance should be closed

    I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (only ever commit, and never close it)? EDIT: What is more, if I use NRTManager, how can I make a commit? Is it even possible?

    Read the article

  • Commenting on Commits

    The source code experience on CodePlex just got more social. Now, you can comment directly on a commit when viewing a project's source code history. This feature acts as a nice complement to the existing ability to comment on a pull request's commits. To get started with this feature, navigate to the commit you are interested in from the source tab. As you hover over the line you would like to comment on, an icon to add a comment will appear to the left. Click on the icon, and comment away! Once you comment on a line, the person who committed the change and anyone involved in the conversation will be notified of the comment by email. Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @Rick_Marron.

    Read the article

  • Introducing RedPatch

    - by timhill
    The Ksplice team is happy to announce the public availability of one of our git repositories, RedPatch. RedPatch contains the source for all of the changes Red Hat makes to their kernel, one commit per fix and we've published it on oss.oracle.com/git. With RedPatch, you can access the broken-out patches using git, browse them online via gitweb, and freely redistribute the source under the terms of the GPL. This is the same policy we provide for Oracle Linux and the Unbreakable Enterprise Kernel (UEK). Users can freely access the source, view the commit logs and easily identify the changes that are relevant to their environments. To understand why we've created this project we'll need a little history. In early 2011, Red Hat changed how they released their kernel source, going from a tarball that had individual patch files to shipping the kernel source as one giant tarball with a single patch for all Red Hat-introduced changes. For most people who work in the kernel this is merely an inconvenience; driver developers and other out-of-kernel module developers can see the end result to make sure their module still performs as expected. For Ksplice, we build individual updates for each change and rely on source patches that are broken-out, not a giant tarball. Otherwise, we wouldn’t be able to take the right patches to create individual updates for each fix, and to skip over the noise — like a change that speeds up bootup — which is unnecessary for an already-running system. We’ve been taking the monolithic Red Hat patch tarball and breaking it into smaller commits internally ever since they introduced this change. At Oracle, we feel everyone in the Linux community can benefit from the work we already do to get our jobs done, so now we’re sharing these broken-out patches publicly. In addition to RedPatch, the complete source code for Oracle Linux and the Oracle Unbreakable Enterprise Kernel (UEK) is available from both ULN and our public yum server, including all security errata. Check out RedPatch and subscribe to [email protected] for discussion about the project. Also, drop us a line and let us know how you're using RedPatch!

    Read the article

  • Good SQL error handling in Stored Procedures

    - by developerit
    When writing SQL procedures, it is really important to handle errors carefully. Keeping that in mind will probably save you effort, time and money. I have been working with MS-SQL 2000 and MS-SQL 2005 (I have not had the opportunity to work with MS-SQL 2008 yet) for many years now, and I want to share with you how I handle errors in a T-SQL stored procedure. This code has been working for many years now without a hitch. N.B.: As another best practice, I suggest using only ONE level of TRY ... CATCH and only ONE level of TRANSACTION encapsulation, as doing otherwise may not be 100% reliable.

        BEGIN TRANSACTION;
        BEGIN TRY
            -- Code in the transaction goes here
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            -- Rollback on error
            ROLLBACK TRANSACTION;

            -- Re-raise the error with the appropriate message and error severity
            DECLARE @ErrMsg nvarchar(4000), @ErrSeverity int;
            SELECT @ErrMsg = ERROR_MESSAGE(), @ErrSeverity = ERROR_SEVERITY();
            RAISERROR(@ErrMsg, @ErrSeverity, 1);
        END CATCH;

    In conclusion, I will just mention that I have been using this code with .NET 2.0 and .NET 3.5 and it works like a charm. The .NET TDS parser throws back a SqlException, which is ideal to work with.

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters

    Pick a small project with a small(ish) team. This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board!

    Research

    Research the tool(s) that you want to use. Some tools provide all of the features you would need while some only provide a slice of the pie. DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next. Ideally a tool can track database versions and automatically apply updates. The change script generation process can be manual, but having diff tools available to automatically generate it can really reduce the overhead to adoption. Finally, an automated tool to generate a script file per database object is an added bonus, as your version control system can quickly identify what was changed in a commit (add/del/modify), just like with code changes. (A toy example of such a change-script runner is sketched at the end of this excerpt.)

    Don't settle on just one tool; identify several. Then work with the team to evaluate the tools. Have the team do some tests of the following scenarios with each tool:

    - Baseline an existing database: can the migration tool work with legacy databases? Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs.
    - Add/drop tables
    - Add/drop procedures/functions/views
    - Alter tables (rename columns, add columns, remove columns)
    - Massage data: migrations sometimes involve changing data types that cannot be implicitly cast and require you to decide how the data is explicitly cast to the new type. This is a requirement for a migrations platform. Think about a case where you might want to combine fields, or move a field from one table to another; you wouldn't want to lose the data.
    - Run the tool via the command line. If you cannot automate the tool in Continuous Integration, what is the point?
    - Create a copy of a database on demand.
    - Backup/restore databases locally.

    Let the team give feedback and decide together what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL-based migration platforms. In general I would recommend staying away from the fluent platforms, as they often lack baseline capabilities and add the overhead of learning a new API when SQL is already a very well known DSL. Code migrations often get messy with procedures/views/functions, as these have to be created with SQL and aren't cross-platform anyway. IMO, stick to SQL-based migrations.

    Reconciling Production

    If your project is a legacy application, you will need to reconcile the current state of production with your development databases. Find changes in production and bring them down to development, even if they are old and need to be removed. Once complete, produce a baseline of either dev or prod, as they are now in sync. Commit this to your VCS of choice.

    Add whatever schema-change tracking mechanism your tool requires to your development database. This often requires adding a table to track the schema version of that database. Your tool should support doing this for you. You can add this table to production when you do your next release.

    Script out any changes currently in dev. Remove production artifacts that you brought down during reconciliation. Add change scripts for any outstanding changes in dev since the last production release. Commit these to your repository.

    Say No to Shared Dev DBs

    Simply put, you wouldn't dream of sharing a code checkout, so why would you share a development database? If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev db server, once all projects are using DB VCS). Doing DB VCS with a shared database is bound to cause problems, as people won't be able to easily script out their own changes from those that others are working on.

    First prod release

    Copy prod to your beta/testing environment. Add the schema changes table (or mechanism) and do a test run of your changes. If successful, you can schedule this to be run on production.

    Evaluation

    After your first release, evaluate the pain points of the process. Try to find tools or modifications to existing tools to help fix them. Don't leave stones unturned; iteratively evolve your tools and practices to make the process as seamless as possible. This is why I suggest open source alternatives. Nothing is set in stone. A good example was adding transactional support to TSqlMigrations: we ran into situations where an update would break a database, so I added a feature to do transactional updates and roll back on errors! Another good example is generating change scripts. We had been manually making these for months. I found an open source project called Open DB Diff and integrated it with TSqlMigrations. These were things we just accepted at the time when we began adopting our tool set. Once we became comfortable with the base functionality, it was time to start automating more of the process. Just like anything else with development, never be afraid to try to find tools to make your job easier!

    Enjoy
    -Wes
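    To make the change-script idea concrete, here is the toy runner referenced above. It is a sketch only: the schema_version table, the migrations/ directory and the NNN_name.sql naming convention are all assumptions for illustration, not features of TSqlMigrations or RoundHouse:

        import sqlite3, glob, os

        # Toy change-script runner: applies numbered .sql files in order and
        # records the current schema version in the database itself.
        def migrate(db_path, script_dir="migrations"):
            con = sqlite3.connect(db_path)
            con.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
            row = con.execute("SELECT MAX(version) FROM schema_version").fetchone()
            current = row[0] or 0
            for path in sorted(glob.glob(os.path.join(script_dir, "*.sql"))):
                # e.g. 003_add_orders.sql -> version 3
                version = int(os.path.basename(path).split("_")[0])
                if version <= current:
                    continue  # already applied
                with open(path) as f:
                    con.executescript(f.read())  # apply the whole script
                con.execute("INSERT INTO schema_version VALUES (?)", (version,))
                con.commit()
            con.close()

    Running it in CI is then just a one-line call, which is exactly the command-line automation criterion listed above.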

    Read the article

  • GitHub Integration in Windows Azure Web Site

    - by Shaun
    Microsoft has just announced an update for Windows Azure Web Sites (a.k.a. WAWS). There are four major features added in WAWS: free scaling mode, GitHub integration, custom domains and multiple branches. Since I'm working in Node.js and I would like to have my code in GitHub and deployed automatically to my Windows Azure Web Site once I sync my code, this feature is big news to me.

    It's very simple to establish the GitHub integration in WAWS. First we need a clean WAWS. In its dashboard page, click "Set up Git publishing". Currently WAWS doesn't support changing the publish setting, so if you have an existing WAWS published by TFS or local Git, you have to create a new WAWS and set up Git publishing. Then in the deployment page we can see that WAWS now supports three Git publishing modes:

    - Push my local files to Windows Azure: In this mode we create a new Git repository on the local machine and commit and publish our code to Windows Azure through Git commands or some GUI.
    - Deploy from my GitHub project: In this mode we have a Git repository created on GitHub. Once we publish our code to GitHub, Windows Azure will download the code and trigger a new deployment.
    - Deploy from my CodePlex project: Similar to the previous one, but our code lives in a CodePlex repository.

    Now let's go back to GitHub and create a new publish repository. Currently the WAWS GitHub integration only supports public repositories; private repository support will be available in several weeks. We can manage our repositories on the GitHub website, but as a Windows geek I prefer the GUI tool. So I opened GitHub for Windows, logged in with my GitHub account, selected the "github" category and clicked the "add" button to create a new repository on GitHub. You can download GitHub for Windows here. I specified the repository name, description and local repository, and did not check "Keep this code private". After a few seconds it created a new repository on GitHub and associated it with that folder on my local machine. We can find this new repository on the GitHub website, and in GitHub for Windows we can also find the local repository by selecting the "local" category.

    Next, we need to associate this repository with our WAWS. Back in the Windows Azure developer portal, open "Deploy from my GitHub project" in the deployment page and click the "Authorize Windows Azure" link. It will bring up a new window on GitHub asking you to allow the Windows Azure application to access your repositories. After we clicked "Allow", Windows Azure retrieved all my public GitHub repositories and let me select which one to integrate with this WAWS. I selected the one I had just created in GitHub for Windows.

    So that's all. We have completed the GitHub integration configuration. Now let's have a try. In GitHub for Windows, right-click on this local repository and click "open in explorer". Then I added a simple HTML file:

        <html>
        <head>
        </head>
        <body>
        <h1>
        I came from GitHub, WOW!
        </h1>
        </body>
        </html>

    Save it, go back to GitHub for Windows, commit this change and publish. This uploads our changes to GitHub, and Windows Azure will detect the update and trigger a new deployment. If we go back to the Azure developer portal, we can find the new deployment, and our commit message is shown as the deployment description as well. And here is the page deployed to WAWS.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind.
    Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How to nest transactions nicely - "begin transaction" vs "save transaction" and SQL Server

    - by Brian Biales
    Do you write stored procedures that might be used by others?  And those others may or may not have already started a transaction?  And your SP does several things, but if any of them fail, you have to undo them all and return with a code indicating it failed? Well, I have written such code, and it wasn't working right until I finally figured out how to handle the case when we are already in a transaction, as well as the case where the caller did not start a transaction.  When a problem occurred, my "ROLLBACK TRANSACTION" would roll back not just my nested transaction, but the caller's transaction as well.  So when I tested the procedure stand-alone, it seemed to work fine, but when others used it, it would cause a problem if it had to roll back.  When something went wrong in my procedure, their entire transaction was rolled back.  This was not appreciated.

    Now, I knew one could "nest" transactions, but the technical documentation was very confusing.  And I still have not found the approach below documented anywhere.  So here is a very brief description of how I got it to work; I hope you find this helpful.

    My example is a stored procedure that must figure out on its own if the caller has started a transaction or not.  This can be done in SQL Server by checking the @@TRANCOUNT value.  If no BEGIN TRANSACTION has occurred yet, this will have a value of 0.  Any number greater than zero means that a transaction is in progress.  If there is no current transaction, my SP begins a transaction.  But if a transaction is already in progress, my SP uses SAVE TRANSACTION and gives it a name.  SAVE TRANSACTION creates a "save point".  Note that creating a save point has no effect on @@TRANCOUNT.  So my SP starts with something like this:

        DECLARE @startingTranCount int
        SET @startingTranCount = @@TRANCOUNT

        IF @startingTranCount > 0
            SAVE TRANSACTION mySavePointName
        ELSE
            BEGIN TRANSACTION
        -- ...

    Then, when ready to commit the changes, you only need to commit if we started the transaction ourselves:

        IF @startingTranCount = 0
            COMMIT TRANSACTION

    And finally, to roll back just your changes so far:

        -- Roll back changes...
        IF @startingTranCount > 0
            ROLLBACK TRANSACTION mySavePointName
        ELSE
            ROLLBACK TRANSACTION

    Here is some code that you can try that will demonstrate how the save points work inside a transaction. This sample code creates a temporary table, then executes selects and updates, documenting what is going on, then deletes the temporary table. If running in SQL Server Management Studio, set Query Results to Text for best readability of the results.

        -- Create a temporary table to test with, we'll drop it at the end.
        CREATE TABLE #ATable(
            [Column_A] [varchar](5) NULL
        ) ON [PRIMARY]
        GO

        SET NOCOUNT ON

        -- Ensure just one row - delete all rows, add one
        DELETE #ATable
        -- Insert just one row
        INSERT INTO #ATable VALUES('000')
        SELECT 'Before TRANSACTION starts, value in table is: ' AS Note, * FROM #ATable
        SELECT @@trancount AS CurrentTrancount

        --insert into a values ('abc')
        UPDATE #ATable SET Column_A = 'abc'
        SELECT 'UPDATED without a TRANSACTION, value in table is: ' AS Note, * FROM #ATable

        BEGIN TRANSACTION
        SELECT 'BEGIN TRANSACTION, trancount is now ' AS Note, @@TRANCOUNT AS TranCount
        UPDATE #ATable SET Column_A = '123'
        SELECT 'Row updated inside TRANSACTION, value in table is: ' AS Note, * FROM #ATable

        SAVE TRANSACTION MySavepoint
        SELECT 'Save point MySavepoint created, transaction count now:' as Note, @@TRANCOUNT AS TranCount
        UPDATE #ATable SET Column_A = '456'
        SELECT 'Updated after MySavepoint created, value in table is: ' AS Note, * FROM #ATable

        SAVE TRANSACTION point2
        SELECT 'Save point point2 created, transaction count now:' as Note, @@TRANCOUNT AS TranCount
        UPDATE #ATable SET Column_A = '789'
        SELECT 'Updated after point2 savepoint created, value in table is: ' AS Note, * FROM #ATable

        ROLLBACK TRANSACTION point2
        SELECT 'Just rolled back savepoint "point2", value in table is: ' AS Note, * FROM #ATable
        ROLLBACK TRANSACTION MySavepoint
        SELECT 'Just rolled back savepoint "MySavepoint", value in table is: ' AS Note, * FROM #ATable
        SELECT 'Both save points were rolled back, transaction count still:' as Note, @@TRANCOUNT AS TranCount
        ROLLBACK TRANSACTION
        SELECT 'Just rolled back the entire transaction..., value in table is: ' AS Note, * FROM #ATable
        DROP TABLE #ATable

    The output should look like this:

        Note                                            Column_A
        ----------------------------------------------  --------
        Before TRANSACTION starts, value in table is:   000

        CurrentTrancount
        ----------------
        0

        Note                                                Column_A
        --------------------------------------------------  --------
        UPDATED without a TRANSACTION, value in table is:   abc

        Note                                  TranCount
        ------------------------------------  ---------
        BEGIN TRANSACTION, trancount is now   1

        Note                                                 Column_A
        ---------------------------------------------------  --------
        Row updated inside TRANSACTION, value in table is:   123

        Note                                                    TranCount
        ------------------------------------------------------  ---------
        Save point MySavepoint created, transaction count now:  1

        Note                                                    Column_A
        ------------------------------------------------------  --------
        Updated after MySavepoint created, value in table is:   456

        Note                                               TranCount
        -------------------------------------------------  ---------
        Save point point2 created, transaction count now:  1

        Note                                                         Column_A
        -----------------------------------------------------------  --------
        Updated after point2 savepoint created, value in table is:   789

        Note                                                      Column_A
        --------------------------------------------------------  --------
        Just rolled back savepoint "point2", value in table is:   456

        Note                                                           Column_A
        -------------------------------------------------------------  --------
        Just rolled back savepoint "MySavepoint", value in table is:   123

        Note                                                          TranCount
        ------------------------------------------------------------  ---------
        Both save points were rolled back, transaction count still:  1

        Note                                                             Column_A
        ---------------------------------------------------------------  --------
        Just rolled back the entire transaction..., value in table is:  abc

    Read the article

  • JPRT: A Build & Test System

    - by kto
    DRAFT

    A while back I did a little blogging on a system called JPRT, the hardware used and a summary on my java.net weblog. This is an update on the JPRT system.

    JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :\^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or was that the 12 steps of Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;\^)

    Internally, when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or if the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (merged with the very latest source base) will build on many if not all 8 platforms, with a full 'from scratch' build, not an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed.

    Anyone that has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be on everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!"? But multiply the number of platforms by 8, and make all the platforms old and antiquated OS versions with bizarre system setup requirements, and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html).

    We don't tolerate bad commits, but our enforcement is somewhat lacking; usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories, and 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic; the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads', and well, trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past. Not that we have had many 'bad commits' to the master source base; in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their fingernails at night. ;\^)

    Over the years each of the teams has accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although the issues with consistency of the build machines haven't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up. Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurrence.

    I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/tests, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer. Well, maybe 'evil' isn't the right word; I haven't met many 'evil' developers, more like 'error prone' developers. ;\^) Humm, come to think about it, I may be one from time to time. :\^{

    But the point is that by spreading the build up over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback. We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often... Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' and start extensive testing (the testing not done by JPRT, or the platforms not tested by JPRT).

    Is it Open Source? No, not yet. Would you like it to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes?

    So enough blabbering on about this JPRT system; tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;\^)

    -kto

    Read the article

  • SVN Error 403 Forbidden

    - by Chris
    I can't figure this out. I try to import a new project into an svn repository from NetBeans and get 403 Forbidden. I just set up svn on my server box today. I can get to it through a browser just fine, though it's empty as I haven't imported my project yet. Apache's path for html files is /var/www. I set up the svn repo in /var/svn. This is the structure of /var/svn:

        [root@localhost svn]# ls -lR /var/svn
        /var/svn:
        total 4
        drwxrwxrwx 7 apache apache 4096 2010-03-26 10:18 repo

        /var/svn/repo:
        total 36
        drwxrwxrwx 2 apache apache 4096 2010-03-26 09:47 conf
        drwxrwxrwx 3 apache apache 4096 2010-03-26 10:18 dav
        drwxrwsrwx 6 apache apache 4096 2010-03-26 11:19 db
        -rwxrwxrwx 1 apache apache    2 2010-03-26 09:47 format
        drwxrwxrwx 2 apache apache 4096 2010-03-26 09:47 hooks
        drwxrwxrwx 2 apache apache 4096 2010-03-26 09:47 locks
        -rwxrwxrwx 1 apache apache  229 2010-03-26 09:47 README.txt
        -rwxrwxrwx 1 apache apache   15 2010-03-26 09:47 svnauth
        -rwxrwxrwx 1 apache apache   43 2010-03-26 09:48 svnpass

        /var/svn/repo/conf:
        total 12
        -rwxrwxrwx 1 apache apache 1080 2010-03-26 09:47 authz
        -rwxrwxrwx 1 apache apache  309 2010-03-26 09:47 passwd
        -rwxrwxrwx 1 apache apache 2279 2010-03-26 09:47 svnserve.conf

        /var/svn/repo/dav:
        total 4
        drwxrwxrwx 2 apache apache 4096 2010-03-26 11:19 activities.d

        /var/svn/repo/dav/activities.d:
        total 0

        /var/svn/repo/db:
        total 48
        -rwxrwxrwx 1 apache apache    2 2010-03-26 09:47 current
        -rwxrwxrwx 1 apache apache   22 2010-03-26 09:47 format
        -rwxrwxrwx 1 apache apache 1920 2010-03-26 09:47 fsfs.conf
        -rwxrwxrwx 1 apache apache    5 2010-03-26 09:47 fs-type
        -rwxrwxrwx 1 apache apache    2 2010-03-26 09:47 min-unpacked-rev
        -rwxrwxrwx 1 apache apache 4096 2010-03-26 09:47 rep-cache.db
        drwxrwsrwx 3 apache apache 4096 2010-03-26 09:47 revprops
        drwxrwsrwx 3 apache apache 4096 2010-03-26 09:47 revs
        drwxrwsrwx 2 apache apache 4096 2010-03-26 11:19 transactions
        -rwxrwxrwx 1 apache apache    2 2010-03-26 11:19 txn-current
        -rwxrwxrwx 1 apache apache    0 2010-03-26 09:47 txn-current-lock
        drwxrwsrwx 2 apache apache 4096 2010-03-26 11:19 txn-protorevs
        -rwxrwxrwx 1 apache apache   37 2010-03-26 09:47 uuid
        -rwxrwxrwx 1 apache apache    0 2010-03-26 09:47 write-lock

        /var/svn/repo/db/revprops:
        total 4
        drwxrwsrwx 2 apache apache 4096 2010-03-26 09:47 0

        /var/svn/repo/db/revprops/0:
        total 4
        -rwxrwxrwx 1 apache apache 50 2010-03-26 09:47 0

        /var/svn/repo/db/revs:
        total 4
        drwxrwsrwx 2 apache apache 4096 2010-03-26 09:47 0

        /var/svn/repo/db/revs/0:
        total 4
        -rwxrwxrwx 1 apache apache 115 2010-03-26 09:47 0

        /var/svn/repo/db/transactions:
        total 0

        /var/svn/repo/db/txn-protorevs:
        total 0

        /var/svn/repo/hooks:
        total 36
        -rwxrwxrwx 1 apache apache 1955 2010-03-26 09:47 post-commit.tmpl
        -rwxrwxrwx 1 apache apache 1638 2010-03-26 09:47 post-lock.tmpl
        -rwxrwxrwx 1 apache apache 2267 2010-03-26 09:47 post-revprop-change.tmpl
        -rwxrwxrwx 1 apache apache 1567 2010-03-26 09:47 post-unlock.tmpl
        -rwxrwxrwx 1 apache apache 3404 2010-03-26 09:47 pre-commit.tmpl
        -rwxrwxrwx 1 apache apache 2410 2010-03-26 09:47 pre-lock.tmpl
        -rwxrwxrwx 1 apache apache 2764 2010-03-26 09:47 pre-revprop-change.tmpl
        -rwxrwxrwx 1 apache apache 2100 2010-03-26 09:47 pre-unlock.tmpl
        -rwxrwxrwx 1 apache apache 2758 2010-03-26 09:47 start-commit.tmpl

        /var/svn/repo/locks:
        total 8
        -rwxrwxrwx 1 apache apache 139 2010-03-26 09:47 db.lock
        -rwxrwxrwx 1 apache apache 139 2010-03-26 09:47 db-logs.lock

    I've got httpd.conf loading svn.conf, which contains:

        <Location /svn>
            DAV on
            DAV svn
            #SVNParentPath /var/svn
            SVNPath /var/svn/repo
            Authtype Basic
            AuthName "Subversion"
            AuthUserFile /var/svn/repo/svnpass
            Require valid-user
            AuthzSVNAccessFile /var/svn/repo/svnauth
        </Location>

    The full error message is:

        org.tigris.subversion.javahl.ClientException: RA layer request failed
        Server sent unexpected return value (403 Forbidden) in response to
        CHECKOUT request for '/svn/!svn/bln/0'

    Sorry for the incredibly long post, but I thought more info would be better than less. I've been fidgeting with this problem for a long time now.

    Read the article

  • Flex DataGrid row auto-fit content

    - by orangestar
    Hi, all! I am trying to create a DataGrid in Flex that can fit its content. The item editor is a TextArea that automatically changes its height while typing. I set variableRowHeight="true" and wordWrap="true". The TextArea auto-resizes, but the height of the row does not change. Could anyone tell me how to change the row height of the DataGrid while typing? One way I found is to auto-commit the text to the DataGrid so the DataGrid knows the data changed, but which function commits the data? Thank you!

    Read the article

  • Can't open .svn/text-base/file.svn-base?

    - by Earlz
    I'm using TortoiseSVN. I just made quite a few changes to my working copy, and when I went to do a commit, some of the files went through, but at one file named Search.aspx.cs it says:

        Commit failed (details follow):
        Can't open file 'C:\-----\trunk\.svn\text-base\Search.aspx.cs.svn-base':
        The system cannot find the file specified.

    I have tried doing an SVN update and an SVN cleanup, and nothing restores this file. I can't even create a diff, because it gives a similar error about missing files. How do I fix this? What did I do to cause it?

    Read the article

  • Mercurial client error 255 and HTTP error 404 when attempting to push large files to server

    - by coderunner
    Problem: When attempting to push a changeset that contains 6 large files (.exe, .dmg, etc.) to my remote server, my client (MacHG) reports the error:

        Error During Push. Mercurial reported error number 255:
        abort: HTTP Error 404: Not Found

    What does the error even mean?! The only thing unique (that I can tell) about this commit is the size, type, and filenames of the files.

    - How can I determine which exact file within the changeset is failing?
    - How can I delete the corrupt changeset from the repository? Someone reported using the "mq" extension, but it looks overly complicated for what I'm trying to achieve.

    Background: I can push and pull the following to and from the server, using both MacHG and TortoiseHG: source files, directories, .class files and a .jar file. I successfully committed to my local repository the addition, for the first time, of the 6 large .exe, .dmg etc. installer files (about 130 MB total). In the following commit to my local repository, I removed ("untracked" / forgot) the 6 files causing the problem; however, the previous (failing) changeset is still queued to be pushed to the server (i.e. my local host is trying to push the "add" and then the "remove" to the remote server, in keeping with the "keep everything in history" philosophy of the source control system). I can commit .txt and .java files etc. using TortoiseHG from Windows PCs. I haven't actually tested committing or pushing the same large files using TortoiseHG. Please help!

    Setup: Client applications = MacHG v0.9.7 (SCM 1.5.4) and TortoiseHG v1.0.4 (SCM 1.5.4). Server = HTTPS, IIS 7.5, Mercurial 1.5.4, Python 2.6.5, set up using these instructions: http://www.jeremyskinner.co.uk/mercurial-on-iis7/. In IIS 7.5 the CGI handler is configured to handle ALL verbs (not just GET, POST and HEAD).

    My hgweb.cgi file on the server is as follows:

        #!/usr/bin/env python
        #
        # An example hgweb CGI script, edit as necessary

        # Path to repo or hgweb config to serve (see 'hg help hgweb')
        #config = "/path/to/repo/or/config"

        # Uncomment and adjust if Mercurial is not installed system-wide:
        #import sys; sys.path.insert(0, "/path/to/python/lib")

        # Uncomment to send python tracebacks to the browser if an error occurs:
        #import cgitb; cgitb.enable()

        from mercurial import demandimport; demandimport.enable()
        from mercurial.hgweb import hgweb, wsgicgi
        application = hgweb('C:\inetpub\wwwroot\hg\hgweb.config')
        wsgicgi.launch(application)

    My hgweb.config file on the server is as follows:

        [collections]
        C:\Mercurial Repositories = C:\Mercurial Repositories

        [web]
        baseurl = /hg
        allow_push = usernamea
        allow_push = usernameb
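    One possibility worth checking (an assumption, not a confirmed diagnosis for this setup): IIS request filtering rejects request bodies larger than roughly 30 MB by default and can surface the rejection as a 404, which would only bite on the one changeset carrying the ~130 MB of installers. Raising the limit in the hg application's web.config would look roughly like this; the 500 MB value is illustrative:

        <!-- web.config for the hg application: raise the default ~30 MB
             request-filtering limit (value below is illustrative). -->
        <configuration>
          <system.webServer>
            <security>
              <requestFiltering>
                <!-- 500 MB, expressed in bytes -->
                <requestLimits maxAllowedContentLength="524288000" />
              </requestFiltering>
            </security>
          </system.webServer>
        </configuration>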

    Read the article

  • DAL without web2py

    - by Jay
    I am using web2py to power my web site. I decided to use the web2py DAL for a long-running program that runs behind the site. This program sometimes does not seem to update its data or the database.

        from gluon.sql import *
        from gluon.sql import SQLDB
        from locdb import *
        # locdb contains:
        # db = SQLDB("mysql://user/pw@localhost/mydb", pool_size=10)
        # db.define_table('orders', Field('status', 'integer'), Field('item', 'string'),
        #                 migrate='orders.table')

        orderid = 20  # there is a row with id == 20 in table orders

        db(db.orders.id == orderid).update(status=6703)
        db.commit()

    This does not update the database, and yet a select on orders with this id shows the correct data. In some circumstances a db.rollback() after a commit seems to help. Very strange, to say the least. Have you seen this? More importantly, do you know the solution?

    Jay
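    For reference, a minimal sketch of the same flow using the standalone pydal package (the gluon.sql import style above is very old web2py). With long-running processes, the usual advice is to commit or roll back first, so the connection is not still reading from a stale transaction snapshot; the connection string and table definition here just follow the question:

        from pydal import DAL, Field

        db = DAL("mysql://user:pw@localhost/mydb", pool_size=10)
        db.define_table("orders",
                        Field("status", "integer"),
                        Field("item", "string"),
                        migrate="orders.table")

        orderid = 20

        # End the current transaction before reading; otherwise a long-lived
        # connection can keep seeing a stale snapshot of the data.
        db.commit()          # or db.rollback()

        db(db.orders.id == orderid).update(status=6703)
        db.commit()          # make the update visible to other connections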

    Read the article

  • java.sql.SQLException: SQL logic error or missing database, SQLite, JDBC

    - by Sunil Kumar Sahoo
    Hi All, I have created a database connection with SQLite using JDBC in Java. My SQL statements execute properly, but sometimes I get the following error when I call conn.commit():

        java.sql.SQLException: SQL logic error or missing database

    Can anyone please help me avoid this type of problem? Can anyone show me a better approach to writing this JDBC code?

        Class.forName("org.sqlite.JDBC");
        conn = DriverManager.getConnection("jdbc:sqlite:/home/Data/database.db3");
        conn.setAutoCommit(false);
        String query = "Update Chits set BlockedForChit = 0 where ServerChitID = '"
                + serverChitId + "' AND ChitGatewayID = '" + chitGatewayId + "'";
        Statement stmt = conn.createStatement();
        try {
            stmt.execute(query);
            conn.commit();
            stmt.close();
            stmt = null;
        }

    Thanks, Sunil Kumar Sahoo

    Read the article

  • Nested/Child TransactionScope Rollback

    - by Robert Wagner
    I am trying to nest TransactionScopes (.NET 4.0) as you would nest transactions in SQL Server; however, it looks like they operate differently. I want my child transactions to be able to roll back if they fail, but allow the parent transaction to decide whether to commit or roll back the whole operation. A greatly simplified example of what I am trying to do:

        static void Main(string[] args)
        {
            using (var scope = new TransactionScope()) // Trn A
            {
                // Insert Data A
                DoWork(true);
                DoWork(false);
                // Rollback or Commit
            }
        }

        // This class is a few layers down
        static void DoWork(bool fail)
        {
            using (var scope = new TransactionScope()) // Trn B
            {
                // Update Data A
                if (!fail)
                {
                    scope.Complete();
                }
            }
        }

    I can't use the Suppress or RequiresNew options, as Trn B relies on data inserted by Trn A. If I do use those options, Trn B is blocked by Trn A. Any ideas how I would get it to work, or if it is even possible using the System.Transactions namespace? Thanks
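    System.Transactions has no built-in savepoint support, so this exact child-rollback behavior is awkward there; compare the SAVE TRANSACTION discussion in the "How to nest transactions nicely" item earlier on this page. Purely to illustrate the desired semantics (the child rolls back alone, the parent decides the final outcome), here is a runnable sketch using SQLite savepoints from Python, not .NET:

        import sqlite3

        con = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
        con.execute("CREATE TABLE work (val TEXT)")

        def do_work(fail):
            # "Child transaction": a savepoint that can roll back on its own.
            con.execute("SAVEPOINT child")
            con.execute("INSERT INTO work VALUES (?)", ("B",))
            if fail:
                con.execute("ROLLBACK TO child")  # undo only the child's changes
            con.execute("RELEASE child")

        con.execute("BEGIN")                       # parent transaction A
        con.execute("INSERT INTO work VALUES (?)", ("A",))
        do_work(fail=False)
        do_work(fail=True)
        con.execute("COMMIT")                      # parent decides the final outcome

        print(con.execute("SELECT val FROM work").fetchall())  # [('A',), ('B',)]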

    Read the article

  • Ensuring all git commits make it back to CVS when using git-cvs

    - by Eric
    I'm using git-cvs, and my general workflow is something like this:

        ...write some code...
        $ git commit
        $ git cvsexportcommit -c -p -v <asdf>
        $ git cvs-import $CVSROOT
        $ git pull

    This generally works fine for pushing my commits back to the CVS server and keeping things in sync. However, I'm wondering how I will realize that something is missing if I happen to do the "git commit" but forget to export it to the CVS server. Is there a reasonable way to get a diff between my git repository and the CVS server, so I would know that something hadn't been committed all the way through? Or perhaps there's a better method of doing this altogether?

    Read the article

  • JPA 2.0 EclipseLink Check for unique

    - by Parhs
    Hello... I have a column marked unique=true in the Exam class. I found that because transactions are committed automatically, to force the commit I use em.commit(). However, I would like to know how to check that the value is unique. Running a query first isn't a solution, because an insert may happen after the check, due to concurrency. What is the best way to check for uniqueness?

        List<Exam_Normal> exam_normals = exam.getExam_Normal();
        exam.setExam_Normal(null);
        try {
            em.persist(exam);
            em.flush();
            Long i = 0L;
            if (exam_normals != null) {
                for (Exam_Normal e_n : exam_normals) {
                    i++;
                    e_n.setItem(i);
                    e_n.setId(exam);
                    em.persist(e_n);
                }
            }
        } catch (Exception e) {
            System.out.print("sfalma--");
        }
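    The usual answer is to skip the pre-check entirely: let the unique constraint itself do the checking, and catch the constraint-violation exception raised at flush or commit time (in JPA this surfaces as a PersistenceException wrapping the provider's constraint error). The same pattern, sketched in Python with SQLite only to keep the example self-contained and runnable:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE exam (code TEXT UNIQUE)")

        def insert_exam(code):
            # Don't SELECT first: another transaction could insert between the
            # check and the INSERT. Let the unique constraint be the check.
            try:
                with con:  # commits on success, rolls back on exception
                    con.execute("INSERT INTO exam VALUES (?)", (code,))
                return True
            except sqlite3.IntegrityError:
                return False  # duplicate value

        print(insert_exam("X-1"))  # True
        print(insert_exam("X-1"))  # False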

    Read the article

  • DELETE and EDIT are not working in my python program

    - by user2968025
    This is a simple Python program that ADDs, DELETEs, EDITs and VIEWs student records. The problem is that DELETE and EDIT are not working. I don't know why, but when I tried removing one '?' in the DELETE function, I got an error saying there are only 8 columns and it needs 10. But originally there are only 9 columns; I don't know where it got the other one to make it 10. Please help.. :(

        import sys
        import sqlite3
        import tkinter
        import tkinter as tk
        from tkinter import *
        from tkinter.ttk import *

        def newRecord():
            studentnum=""
            name=""
            age=""
            birthday=""
            address=""
            email=""
            course=""
            year=""
            section=""
            con=sqlite3.connect("Students.db")
            cur=con.cursor()
            cur.execute("CREATE TABLE IF NOT EXISTS student(studentnum TEXT, name TEXT, age TEXT, birthday TEXT, address TEXT, email TEXT, course TEXT, year TEXT, section TEXT)")

            def save():
                studentnum=en1.get()
                name=en2.get()
                age=en3.get()
                birthday=en4.get()
                address=en5.get()
                email=en6.get()
                course=en7.get()
                year=en8.get()
                section=en9.get()
                student=(studentnum,name,age,birthday,address,email,course,year,section)
                cur.execute("INSERT INTO student(studentnum,name,age,birthday,address,email,course,year,section) VALUES(?,?,?,?,?,?,?,?,?)",student)
                con.commit()

            win=tkinter.Tk();win.title("Students")
            lbl=tkinter.Label(win,background="#000",foreground="#ddd",width=30,text="Add Record")
            lbl.pack()
            lbl1=tkinter.Label(win,width=30,text="Student Number : ")
            lbl1.pack()
            en1=tkinter.Entry(win,width=30)
            en1.pack()
            lbl2=tkinter.Label(win,width=30,text="Name : ")
            lbl2.pack()
            en2=tkinter.Entry(win,width=30)
            en2.pack()
            lbl3=tkinter.Label(win,width=30,text="Age : ")
            lbl3.pack()
            en3=tkinter.Entry(win,width=30)
            en3.pack()
            lbl4=tkinter.Label(win,width=30,text="Birthday : ")
            lbl4.pack()
            en4=tkinter.Entry(win,width=30)
            en4.pack()
            lbl5=tkinter.Label(win,width=30,text="Address : ")
            lbl5.pack()
            en5=tkinter.Entry(win,width=30)
            en5.pack()
            lbl6=tkinter.Label(win,width=30,text="Email : ")
            lbl6.pack()
            en6=tkinter.Entry(win,width=30)
            en6.pack()
            lbl7=tkinter.Label(win,width=30,text="Course : ")
            lbl7.pack()
            en7=tkinter.Entry(win,width=30)
            en7.pack()
            lbl8=tkinter.Label(win,width=30,text="Year : ")
            lbl8.pack()
            en8=tkinter.Entry(win,width=30)
            en8.pack()
            lbl9=tkinter.Label(win,width=30,text="Section : ")
            lbl9.pack()
            en9=tkinter.Entry(win,width=30)
            en9.pack()
            btn1=tkinter.Button(win,background="#000",foreground="#ddd",width=30,text="Save Student",command=save)
            btn1.pack()

        def editRecord():
            studentnum1=""

            def edit():
                studentnum1=en10.get()
                studentnum=""
                name=""
                age=""
                birthday=""
                address=""
                email=""
                course=""
                year=""
                section=""
                con=sqlite3.connect("Students.db")
                cur=con.cursor()
                row=cur.fetchone()
                cur.execute("DELETE FROM student WHERE name = '%s'" % studentnum1)
                con.commit()

                def save():
                    studentnum=en1.get()
                    name=en2.get()
                    age=en3.get()
                    birthday=en4.get()
                    address=en5.get()
                    email=en6.get()
                    course=en7.get()
                    year=en8.get()
                    section=en8.get()
                    student=(studentnum,name,age,email,birthday,address,email,course,year,section)
                    cur.execute("INSERT INTO student(studentnum,name,age,email,birthday,address,email,course,year,section) VALUES(?,?,?,?,?,?,?,?,?)",student)
                    con.commit()

                win=tkinter.Tk();win.title("Students")
                lbl=tkinter.Label(win,background="#000",foreground="#ddd",width=30,text="Edit Reocrd :"+'\t'+studentnum1)
                lbl.pack()
                lbl1=tkinter.Label(win,width=30,text="Student Number : ")
                lbl1.pack()
                en1=tkinter.Entry(win,width=30)
                en1.pack()
                lbl2=tkinter.Label(win,width=30,text="Name : ")
                lbl2.pack()
                en2=tkinter.Entry(win,width=30)
                en2.pack()
                lbl3=tkinter.Label(win,width=30,text="Age : ")
                lbl3.pack()
                en3=tkinter.Entry(win,width=30)
                en3.pack()
                lbl4=tkinter.Label(win,width=30,text="Birthday : ")
                lbl4.pack()
                en4=tkinter.Entry(win,width=30)
                en4.pack()
                lbl5=tkinter.Label(win,width=30,text="Address : ")
                lbl5.pack()
                en5=tkinter.Entry(win,width=30)
                en5.pack()
                lbl6=tkinter.Label(win,width=30,text="Email : ")
                lbl6.pack()
                en6=tkinter.Entry(win,width=30)
                en6.pack()
                lbl7=tkinter.Label(win,width=30,text="Course : ")
                lbl7.pack()
                en7=tkinter.Entry(win,width=30)
                en7.pack()
                lbl8=tkinter.Label(win,width=30,text="Year : ")
                lbl8.pack()
                en8=tkinter.Entry(win,width=30)
                en8.pack()
                lbl9=tkinter.Label(win,width=30,text="Section : ")
                lbl9.pack()
                en9=tkinter.Entry(win,width=30)
                en9.pack()
                btn1=tkinter.Button(win,background="#000",foreground="#ddd",width=30,text="Save Record",command=save)
                btn1.pack()

            win=tkinter.Tk();win.title("Edit Student")
            lbl=tkinter.Label(win,background="#000",foreground="#ddd",width=30,text="Edit Record")
            lbl.pack()
            lbl10=tkinter.Label(win,width=30,text="Student Number : ")
            lbl10.pack()
            en10=tkinter.Entry(win)
            en10.pack()
            btn2=tkinter.Button(win,background="#000",foreground="#ddd",width=30,text="Edit",command=edit)
            btn2.pack()

        def deleteRecord():
            studentnum1=""
            win=tkinter.Tk();win.title("Delete Student Record")
            lbl=tkinter.Label(win,background="#000",foreground="#ddd",width=30,text="Delete Record")
            lbl.pack()
            lbl10=tkinter.Label(win,text="Student Number")
            lbl10.pack()
            en10=tkinter.Entry(win)
            en10.pack()

            def delete():
                studentnum1=en10.get()
                con=sqlite3.connect("Students.db")
                cur=con.cursor()
                row=cur.fetchone()
                cur.execute("DELETE FROM student WHERE name = '%s';" % studentnum1)
                con.commit()
                win=tkinter.Tk();win.title("Record Deleted")
                lbl=tkinter.Label(win,background="#000",foreground="#ddd",width=30,text="Record Deleted :")
                lbl.pack()
                lbl=tkinter.Label(win,width=30,text=studentnum1)
                lbl.pack()
                btn=tkinter.Button(win,background="#000",foreground="#ddd",width=30,text="Ok",command=win.destroy)
                btn.pack()

            btn2=tkinter.Button(win,background="#000",foreground="#ddd",width=30,text="Delete",command=delete)
            btn2.pack()

        def viewRecord():
            con=sqlite3.connect("Students.db")
            cur=con.cursor()
            win=tkinter.Tk();win.title("View Student Record")
            row=cur.fetchall()
            lbl1=tkinter.Label(win,background="#000",foreground="#ddd",width=300,text="\n\tStudent Number"+"\t\tName"+"\t\tAge"+"\t\tBirthday"+"\t\tAddress"+"\t\tEmail"+"\t\tCourse"+"\t\tYear"+"\t\nSection")
            lbl1.pack()
            for row in cur.execute("SELECT * FROM student"):
                lbl2=tkinter.Label(win,width=300,text= row[0] + '\t\t' + row[1] + '\t' + row[2] + '\t\t' + row[3] + '\t\t' + row[4] + '\t\t' + row[5] + '\t\t' + row[6] + '\t\t' + row[7] + '\t\t' + row[8] + '\n')
                lbl2.pack()
            con.close()
            but1=tkinter.Button(win,background="#000",foreground="#fff", width=150,text="Close",command=win.destroy)
            but1.pack()

        root=tkinter.Tk();root.title("Student Records")
        menubar=tkinter.Menu(root)
        manage=tkinter.Menu(menubar,tearoff=0)
        manage.add_command(label='New Record',command=newRecord)
        manage.add_command(label='Edit Record',command=editRecord)
        manage.add_command(label='Delete Record',command=deleteRecord)
        menubar.add_cascade(label='Manage',menu=manage)
        view=tkinter.Menu(menubar,tearoff=0)
        view.add_command(label='View Record',command=viewRecord)
        menubar.add_cascade(label='View',menu=view)
        root.config(menu=menubar)
        lbl=tkinter.Label(root,background="#000",foreground="#ddd",font=("Verdana",15),width=30,text="Student Records")
        lbl.pack()
        lbl1=tkinter.Label(root,text="\nSubmitted by :")
        lbl1.pack()
        lbl2=tkinter.Label(root,text="Chavez, Vissia Nicole P")
        lbl2.pack()
        lbl3=tkinter.Label(root,text="BSIT 4-4")
        lbl3.pack()
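    Two things stand out in the code above: delete() and edit() filter on the name column but are handed the student number, and the edit-save INSERT lists email twice, giving 10 values for 9 placeholders (the "needs 10" error). The statements are also built with % string formatting, which breaks on quoting and invites injection. A hedged sketch of the delete path using the studentnum column and a ? placeholder; the function name is illustrative:

        import sqlite3

        def delete_student(studentnum):
            con = sqlite3.connect("Students.db")
            cur = con.cursor()
            # Filter on the column that actually holds the value we were given,
            # and let the driver quote it via a ? placeholder.
            cur.execute("DELETE FROM student WHERE studentnum = ?", (studentnum,))
            con.commit()
            deleted = cur.rowcount  # how many rows actually went away
            con.close()
            return deleted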

    Read the article

  • .xcodeproj does not get committed with Xcode's SCM tools

    - by Dimitris
    I am using the SCM tools embedded in Xcode to manage my app's versioning. I have created an iPhone app and added/committed it to the Subversion server, but the .xcodeproj file won't upload (all the class files, resources etc. are there)! I don't even get the option to "Add to Repository". Sometimes it gets an "A" (add) next to it under the "SCM" column, but the next time I commit changes or commit the entire project, it still doesn't upload and show up on the server. As a result my team can't check out and run the project. Is there a way to do something (other than just use the terminal or Versions)? Thank you.

    Read the article

  • Function did not insert all records

    - by user1799459
    I wrote the following code for transferring data from Access to Firebird:

        def FirebirdDatetime(dt):
            return '\'%s.%s.%s\'' % (str(dt.day).rjust(2,'0'), str(dt.month).rjust(2,'0'), str(dt.year).rjust(4,'0'))

        def SelectFromAccessTable(tablename):
            return 'select * from [' + tablename + ']'

        def InsertToFirebirdTable(tablename, row):
            values=''
            i=0
            for elem in row:
                i+=1
                #print type(elem)
                if type(elem) == int:
                    temp = str(elem)
                elif (type(elem) == str) or (type(elem)==unicode):
                    temp = '\'%s\'' % (elem,)
                elif type(elem) == datetime.datetime:
                    temp = FirebirdDatetime(elem)
                elif type(elem) == decimal.Decimal:
                    temp = str(elem)
                elif elem==None:
                    temp='null'
                if (i<len(row)):
                    values+=temp+', '
                else:
                    values+=temp
            return 'insert into '+tablename+' values ('+values+')'

        def AccessToFirebird(accesstablename, firebirdtablename, accesscursor, firebirdcursor):
            SelectSql=SelectFromAccessTable(accesstablename)
            for row in accesscursor.execute(SelectSql):
                InsertSql=InsertToFirebirdTable(firebirdtablename, row)
                print InsertSql
                firebirdcursor.execute(InsertSql)

    In the main module there is an AccessToFirebird function call:

        conAcc = pyodbc.connect('DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=D:\ThirdTask\Northwind.accdb')
        SqlAccess=conAcc.cursor()
        conn.begin()
        cur=conn.cursor()
        sql.AccessToFirebird('Customers', 'CLIENTS', SqlAccess, cur)
        conn.commit()
        conn.begin()
        cur=conn.cursor()
        sql.AccessToFirebird('??????????', 'EMPLOYEES', SqlAccess, cur)
        sql.AccessToFirebird('????', 'ROLES', SqlAccess, cur)
        sql.AccessToFirebird('???? ???????????', 'EMPLOYEES_ROLES', SqlAccess, cur)
        sql.AccessToFirebird('????????', 'DELIVERY', SqlAccess, cur)
        sql.AccessToFirebird('??????????', 'SUPPLIERS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ?????? ???????', 'TAX_STATUS_OF_ORDERS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ???????? ? ??????', 'STATE_ORDER_DETAILS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ???????', 'CONDITION_OF_ORDERS', SqlAccess, cur)
        sql.AccessToFirebird('??????', 'ORDERS', SqlAccess, cur)
        sql.AccessToFirebird('?????', 'BILLS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ?????? ?? ????????????', 'STATUS_PURCHASE_ORDER', SqlAccess, cur)
        sql.AccessToFirebird('?????? ?? ????????????', 'ORDERS_FOR_ACQUISITION', SqlAccess, cur)
        sql.AccessToFirebird('???????? ? ?????? ?? ????????????', 'INFORMPURCHASEORDER', SqlAccess, cur)
        sql.AccessToFirebird('??????', 'PRODUCTS', SqlAccess, cur)
        conn.commit()
        conAcc.commit()
        conn.close()
        conAcc.close()

    But as a result, not all records were inserted into the PRODUCTS table (the Goods table of the Northwind database). For example, this statement does not work:

        insert into PRODUCTS values ('4', 1, 'NWTB-1', '?????????? ???', null, 13.5000, 18.0000, 10, 40, '10 ??????? ?? 20 ?????????', '10 ??????? ?? 20 ?????????', 10, '???????', '')

    In IBExpert, this statement produces the message:

        can't format message 13:587 -- message file C:\Windows\firebird.msg not found.
        conversion error from string "10 ?????????±???? ???? 20 ???°???µ?‚????????"

    Only these statements worked:

        insert into PRODUCTS values ('1', 82, 'NWTC-82', '???????', null, 2.0000, 4.0000, 20, 100, null, null, null, '????', '')
        insert into PRODUCTS values ('9', 83, 'NWTCS-83', '???????????? ?????', null, 0.5000, 1.8000, 30, 200, null, null, null, '????? ? ???????', '')
        insert into PRODUCTS values ('1', 97, 'NWTC-82', '???????', null, 3.0000, 5.0000, 50, 200, null, null, null, '????', '')
        insert into PRODUCTS values ('6', 98, 'NWTSO-98', '??????? ???', null, 1.0000, 1.8900, 100, 200, null, null, null, '????', '')
        insert into PRODUCTS values ('6', 99, 'NWTSO-99', '??????? ??????', null, 1.0000, 1.9500, 100, 200, null, null, null, '????', '')

    The other records were not inserted.
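    Building the INSERT strings by hand is the likely culprit: quoting and character encoding both go wrong (the '?' runes above are already mojibake from a lost Cyrillic encoding). Passing the rows through driver parameters sidesteps both problems. A sketch of the transfer loop rewritten with qmark placeholders; it assumes the same pyodbc source cursor and Firebird target cursor as in the question:

        def access_to_firebird(accesstablename, firebirdtablename, accesscursor, firebirdcursor):
            accesscursor.execute("select * from [%s]" % accesstablename)
            rows = accesscursor.fetchall()
            if not rows:
                return
            # One ? per column; the driver handles quoting, None -> NULL,
            # dates, decimals and unicode, so there are no literals to mangle.
            placeholders = ", ".join("?" * len(rows[0]))
            insert_sql = "insert into %s values (%s)" % (firebirdtablename, placeholders)
            firebirdcursor.executemany(insert_sql, rows)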

    Read the article

  • Git: can't undo local changes (error: path ... is unmerged)

    - by mklhmnn
    I have the following working tree state:

        $ git status foo/bar.txt
        # On branch master
        # Unmerged paths:
        #   (use "git reset HEAD <file>..." to unstage)
        #   (use "git add/rm <file>..." as appropriate to mark resolution)
        #
        #       deleted by us:      foo/bar.txt
        #
        # no changes added to commit (use "git add" and/or "git commit -a")

    The file foo/bar.txt is there, and I want to get it back to the "unchanged state" again (similar to 'svn revert'):

        $ git checkout HEAD foo/bar.txt
        error: path 'foo/bar.txt' is unmerged
        $ git reset HEAD foo/bar.txt
        Unstaged changes after reset:
        M       foo/bar.txt

    Now it is getting confusing:

        $ git status foo/bar.txt
        # On branch master
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #       new file:   foo/bar.txt
        #
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   foo/bar.txt
        #

    The same file in both sections, new and modified? What should I do? Thanks in advance.

    Read the article

  • Oracle global temporary tables

    - by mrp
    I created a global temporary table. When I execute the statements one at a time, it works fine, but when I execute them as a single script in TOAD, no records are created; there is just an empty global temp table. For example:

        CREATE GLOBAL TEMPORARY TABLE TEMP_TRAN
        (
            COL1 NUMBER(9),
            COL2 VARCHAR2(30),
            COL3 DATE
        )
        ON COMMIT PRESERVE ROWS
        /
        INSERT INTO TEMP_TRAN VALUES(1,'D',sysdate);
        /
        INSERT INTO TEMP_TRAN VALUES(2,'I',sysdate);
        /
        INSERT INTO TEMP_TRAN VALUES(3,'s',sysdate);
        /
        COMMIT;

    When I run the above code one statement at a time, it works fine. But when I execute it as a script, it runs without errors, yet there are no records in the temp table. Can anyone help me with this, please?

    Read the article
