Search Results

Search found 4953 results on 199 pages for 'git commit'.


  • Access DB Transaction on insert or update

    - by Raju Gujarati
    I am implementing the database access layer of a Windows application using C#. The database (.accdb) is located in the project files. When two notebooks (clients) connect to the one Access database through switches, a DBConcurrencyException is thrown. My goal is to check the timestamp of the SQL that was executed first and then run the SQL. Would you please provide me some guidelines to achieve this? Below is my code:

        protected void btnTransaction_Click(object sender, EventArgs e)
        {
            string custID = txtID.Text;
            string CompName = txtCompany.Text;
            string contact = txtContact.Text;
            string city = txtCity.Text;
            string connString = ConfigurationManager.ConnectionStrings["CustomersDatabase"].ConnectionString;
            OleDbConnection connection = new OleDbConnection(connString);
            connection.Open();
            OleDbCommand command = new OleDbCommand();
            command.Connection = connection;
            OleDbTransaction transaction = connection.BeginTransaction();
            command.Transaction = transaction;
            try
            {
                command.CommandText = "INSERT INTO Customers(CustomerID, CompanyName, ContactName, City, Country) VALUES(@CustomerID, @CompanyName, @ContactName, @City, @Country)";
                command.CommandType = CommandType.Text;
                command.Parameters.AddWithValue("@CustomerID", custID);
                command.Parameters.AddWithValue("@CompanyName", CompName);
                command.Parameters.AddWithValue("@ContactName", contact);
                command.Parameters.AddWithValue("@City", city);
                command.ExecuteNonQuery();
                command.CommandText = "UPDATE Customers SET ContactName = @ContactName2 WHERE CustomerID = @CustomerID2";
                command.CommandType = CommandType.Text;
                command.Parameters.AddWithValue("@CustomerID2", custIDUpdate);
                command.Parameters.AddWithValue("@ContactName2", contactUpdate);
                command.ExecuteNonQuery();
                adapter.Fill(table);
                GridView1.DataSource = table;
                GridView1.DataBind();
                transaction.Commit();
                lblMessage.Text = "Transaction successfully completed";
            }
            catch (Exception ex)
            {
                transaction.Rollback();
                lblMessage.Text = "Transaction is not completed";
            }
            finally
            {
                connection.Close();
            }
        }

    Read the article

  • How to programmatically create a node in Drupal 8?

    - by chapka
    I'm designing a new module in Drupal 8. It's a long-term project that won't be going public for a few months at least, so I'm using it as a way to figure out what's new. In this module, I want to be able to programmatically create nodes. In Drupal 7, I would do this by creating the object, then calling "node_submit" and "node_save". These functions no longer exist in Drupal 8. Instead, according to the documentation, "Modules and scripts may programmatically submit nodes using the usual form API pattern." I'm at a loss. What does this mean? I've used Form API to create forms in Drupal 7, but I don't get what the docs are saying here. What I'm looking to do is programmatically create at least one and possibly multiple new nodes, based on information not taken directly from a user-presented form. I need to be able to: 1) Specify the content type 2) Specify the URL path 3) Set any other necessary variables that would previously have been handled by the now-obsolete node_object_prepare() 4) Commit the new node object I would prefer to be able to do this in an independent, highly abstracted function not tied to a specific block or form. So what am I missing?

    Read the article

  • What Test Environment Setup do Top Project Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from you guys who are very passionate and versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to:

    - Type one command to run the test suite for ANY project I find on Github.
    - Run autotest for ANY Github project so I can fork and make TESTABLE contributions.
    - Build gems from the ground up with Autotest and Shoulda.

    For one reason or another, I hardly ever run tests for projects I clone from Github. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are: What is your, the SO reader and Github project committer, test environment setup using autotest, so that whenever you git clone a gem you can run the tests and autotest-develop it if desired? What are the guys who are writing the Paperclip tests and Authlogic tests doing? What is their setup? Thanks for the insight. Looking for answers that will make me a more effective tester.

    Read the article

  • Django forms: I cannot save the picture file

    - by dana
    I have the model:

        class OpenCv(models.Model):
            created_by = models.ForeignKey(User, blank=True)
            first_name = models.CharField(('first name'), max_length=30, blank=True)
            last_name = models.CharField(('last name'), max_length=30, blank=True)
            url = models.URLField(verify_exists=True)
            picture = models.ImageField(help_text=('Upload an image (max %s kilobytes)' % settings.MAX_PHOTO_UPLOAD_SIZE), upload_to='jakido/avatar', blank=True, null=True)
            bio = models.CharField(('bio'), max_length=180, blank=True)
            date_birth = models.DateField(blank=True, null=True)
            domain = models.CharField(('domain'), max_length=30, blank=True, choices=domain_choices)
            specialisation = models.CharField(('specialization'), max_length=30, blank=True)
            degree = models.CharField(('degree'), max_length=30, choices=degree_choices)
            year_last_degree = models.CharField(('year last degree'), max_length=30, blank=True, choices=year_last_degree_choices)
            lyceum = models.CharField(('lyceum'), max_length=30, blank=True)
            faculty = models.ForeignKey(Faculty, blank=True, null=True)
            references = models.CharField(('references'), max_length=30, blank=True)
            workplace = models.ForeignKey(Workplace, blank=True, null=True)
            objects = OpenCvManager()

    the form:

        class OpencvForm(ModelForm):
            class Meta:
                model = OpenCv
                fields = ['first_name', 'last_name', 'url', 'picture', 'bio', 'domain', 'specialisation', 'degree', 'year_last_degree', 'lyceum', 'references']

    and the view:

        def save_opencv(request):
            if request.method == 'POST':
                form = OpencvForm(request.POST, request.FILES)
                # if 'picture' in request.FILES:
                file = request.FILES['picture']
                filename = file['filename']
                fd = open('%s/%s' % (MEDIA_ROOT, filename), 'wb')
                fd.write(file['content'])
                fd.close()
                if form.is_valid():
                    new_obj = form.save(commit=False)
                    new_obj.picture = form.cleaned_data['picture']
                    new_obj.created_by = request.user
                    new_obj.save()
                    return HttpResponseRedirect('.')
            else:
                form = OpencvForm()
            return render_to_response('opencv/opencv_form.html', {
                'form': form,
            }, context_instance=RequestContext(request))

    but I don't seem to save the picture in my database... something is wrong, and I can't figure out what :(
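
    A minimal sketch (not part of the original question) of how the view might look on a newer Django, where an entry in request.FILES is an UploadedFile object (accessed via .name and .chunks() rather than ['filename'] and ['content']) and the ModelForm itself writes the image to the ImageField's upload_to path on save; the form and view names are taken from the post above, everything else is an assumption:

        # Sketch only: assumes Django >= 1.0 file-upload handling.
        from django.http import HttpResponseRedirect
        from django.shortcuts import render_to_response
        from django.template import RequestContext

        def save_opencv(request):
            if request.method == 'POST':
                form = OpencvForm(request.POST, request.FILES)  # FILES must be passed in
                if form.is_valid():
                    new_obj = form.save(commit=False)
                    new_obj.created_by = request.user
                    new_obj.save()  # stores the image under the ImageField's upload_to path
                    return HttpResponseRedirect('.')
            else:
                form = OpencvForm()
            return render_to_response('opencv/opencv_form.html', {'form': form},
                                      context_instance=RequestContext(request))

    Under that assumption the manual open()/write() block is unnecessary, since the storage backend handles the file itself.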

    Read the article

  • Managing Many to Many relationships in asp.net Wizard Control

    - by Luis
    Say I have an entity with a lot of attributes. In the input form I have decided to implement a wizard control so I can collect information about this entity in several steps. The problem is that I need to collect information that has been modeled as many-to-many relationships. I am planning to use a Telerik GridView to manage this (add/edit/delete); the problem is where to store that data, since in an insert form the entity has not been created in the database yet. OK, so I can store all that info in temporary lists residing in the viewstate, waiting for the final submit where I dump it all into the DB, but in one of the steps I am collecting files... and storing files in the viewstate is out of the question, the same as storing them in the session... I have been thinking of implementing it so that the user has to submit some info first (say the first 3 steps), commit that data to the database to create the parent entity, and then start inserting all the child entities... but this will get weird, as it's confusing that in the first steps you are not saving the data to the DB and in the later ones you are committing directly... Does anyone have any thoughts on this? Thanks

    Read the article

  • Update Input Value With jQuery, Old Value Submitted to Form

    - by Tyler DeWitt
    I've got a form with an input whose id/name is league_id:

        <form accept-charset="UTF-8" action="/user" method="post"><div style="margin:0;padding:0;display:inline"><input name="utf8" type="hidden" value="✓"><input name="authenticity_token" type="hidden" value="bb92urX83ivxOZZJzWLJMcr5ZSuamowO9O9Sxh5gqKo="></div>
          <input id="league_id" name="league_id" type="text" value="11">
          <select class="sport_selector" id="sport_type_id" name="sport_type_id"><option value="5" selected="selected">Football</option>
            <option value="25">Women's Soccer</option>
            <option value="30">Volleyball</option>
            <option value="10">Men's Soccer</option></select>
          <input name="commit" type="submit" value="Save changes">
        </form>

    In another part of my page, I have a drop-down that, when changed, clears the value of league_id and submits the form:

        $("#sport_type_id").change ->
          $("#league_id").val(null)
          $(this).parents('form:first').submit()

    If I debug this page, I can see the value get wiped from the text box, but when the form is submitted, my controller always gets the old value. I tried changing the name of the input and got the same results.

    Read the article

  • Real Time Sound Capturing in J2ME

    - by Abdul jalil
    I am capturing sound in J2ME and sending the bytes to a remote system, where I then play them back. Five seconds of voice are captured and sent to the remote system, but I get the same sound repeated again. I am making a sound messenger; please help me find where I am going wrong. I am using the following code:

        String remoteTimeServerAddress = "192.168.137.179";
        sc = (SocketConnection) Connector.open("socket://" + remoteTimeServerAddress + ":13");
        p = Manager.createPlayer("capture://audio?encoding=pcm&rate=11025&bits=16&channels=1");
        p.realize();
        RecordControl rc = (RecordControl) p.getControl("RecordControl");
        ByteArrayOutputStream output = new ByteArrayOutputStream();
        OutputStream outstream = sc.openOutputStream();
        rc.setRecordStream(output);
        rc.startRecord();
        p.start();
        int size = output.size();
        int offset = 0;
        while (true) {
            Thread.currentThread().sleep(5000);
            rc.commit();
            output.flush();
            size = output.size();
            if (size > 0) {
                recordedSoundArray = output.toByteArray();
                outstream.write(recordedSoundArray, 0, size);
            }
            output.reset();
            rc.reset();
            rc.setRecordStream(output);
            rc.startRecord();
        }

    Read the article

  • Rails: Can't set or update tag_list using a text field with acts_as_taggable_on

    - by Josh
    Hey everyone, I'm trying to add tagging to a rails photo gallery system I'm working on. It works from the back-end, but if I try to set or change it in the form view, it doesn't work. I added acts_as_taggable to the photo model and did the migrations. My gallery builder is programmed to add one tag automatically to each photo it creates. This works fine, just as if it were setting it for the console. However, I can't seem to set tags using a text_field in the photo form. Here's the code I added to my photo form: <p> <%= f.label :tag_list %><br /> <%= f.text_field :tag_list %> </p> Now, that's pretty trivial, and since :tag_list supports single-string comma-separated assignment (e.g. tag_list = "this, that, the other" #= ['this', 'that', 'the other']), I don't see why using a text field doesn't work. And to make even less sense, if a tag list has already been populated, the list will still show up in the text field when editing the photo. I just can't seem to commit any changes to the list. The documentation on their github page doesn't appear to give any information on how to set these values from the view. Any ideas? Oh, and I'm using the Rails 3 gem version.

    Read the article

  • Threads are blocked in malloc and free, virtual size

    - by Albert Wang
    Hi, I'm running a 64-bit multi-threaded program on the windows server 2003 server (X64), It run into a case that some of the threads seem to be blocked in the malloc or free function forever. The stack trace is like follows: ntdll.dll!NtWaitForSingleObject() + 0xa bytes ntdll.dll!RtlpWaitOnCriticalSection() - 0x1aa bytes ntdll.dll!RtlEnterCriticalSection() + 0xb040 bytes ntdll.dll!RtlpDebugPageHeapAllocate() + 0x2f6 bytes ntdll.dll!RtlDebugAllocateHeap() + 0x40 bytes ntdll.dll!RtlAllocateHeapSlowly() + 0x5e898 bytes ntdll.dll!RtlAllocateHeap() - 0x1711a bytes MyProg.exe!malloc(unsigned __int64 size=0) Line 168 C MyProg.exe!operator new(unsigned __int64 size=1) Line 59 + 0x5 bytes C++ ntdll.dll!NtWaitForSingleObject() ntdll.dll!RtlpWaitOnCriticalSection() ntdll.dll!RtlEnterCriticalSection() ntdll.dll!RtlpDebugPageHeapFree() ntdll.dll!RtlDebugFreeHeap() ntdll.dll!RtlFreeHeapSlowly() ntdll.dll!RtlFreeHeap() MyProg.exe!free(void * pBlock=0x000000007e8e4fe0) C BTW, the param values passed to the new operator is not correct here maybe due to optimization. Also, at the same time, I found in the process Explorer, the virtual size of this program is 10GB, but the private bytes and working set is very small (<2GB). We did have some threads using virtualalloc but in a way that commit the memory in the call, and these threads are not blocked. m_pBuf = VirtualAlloc(NULL, m_size, MEM_COMMIT, PAGE_READWRITE); ...... VirtualFree(m_pBuf, 0, MEM_RELEASE); This looks strange to me, seems a lot of virtual space is reserved but not committed, and malloc/free is blocked by lock. I'm guessing if there's any corruptions in the memory/object, so plan to turn on gflag with pageheap to troubleshoot this. Does anyone has similar experience on this before? Could you share with me so I may get more hints? Thanks a lot!

    Read the article

  • Error when creating an image from a UIView

    - by Raphael Pinto
    Hi, I'm creating an image from a view whith the folowing code : UIGraphicsBeginImageContext(myView.bounds.size); [myView.layer renderInContext:UIGraphicsGetCurrentContext()]; UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil); The image is created in the user album but in the consol I get this : 2010-05-11 10:17:17.974 myApp[3875:1807] sqlite error 8 [attempt to write a readonly database] 2010-05-11 10:17:18.014 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3d87f 0x34e3b029 0x34e3b22b 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 0x3034a12c) 2010-05-11 10:17:18.077 myApp[3875:1807] sqlite error 8 [attempt to write a readonly database] 2010-05-11 10:17:18.087 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3b053 0x34e3b22b 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 0x3034a12c) 2010-05-11 10:17:18.091 myApp[3875:1807] sqlite error 8 [attempt to write a readonly database] 2010-05-11 10:17:18.095 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3b085 0x34e3b22b 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 0x3034a12c) 2010-05-11 10:17:18.111 myApp[3875:1807] sqlite error 1 [SQL logic error or missing database] 2010-05-11 10:17:18.115 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3d87f 0x34e3b029 0x34e3b4dd 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 0x3034a12c) 2010-05-11 10:17:18.120 myApp[3875:1807] sqlite error 1 [SQL logic error or missing database] 2010-05-11 10:17:18.124 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3b053 0x34e3b4dd 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 0x3034a12c) 2010-05-11 10:17:18.129 myApp[3875:1807] sqlite error 1 [cannot commit - no transaction is active] 2010-05-11 10:17:18.133 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3b085 0x34e3b4dd 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 I don't know where the problem come from. Can you help me?

    Read the article

  • Hibernate Search + Spring

    - by Zane
    I'm trying to integrate Hibernate Search with Spring, but I can't seem to index anything. I was able to get Hibernate Search to work without Spring, but I'm having a problem integrating it with Spring. Any help would be much appreciated. Below is my springmvc-servlet.xml: <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory" /> </bean> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalEntityManagerFactoryBean"> <property name="persistenceUnitName" value="enewsclipsPersistenceUnit" /> </bean> And here is my DAO class: @Repository public class SearchDaoImpl implements SearchDao { JpaTemplate jpaTemplate; @Autowired public SearchDaoImpl(EntityManagerFactory entityManagerFactory) { this.jpaTemplate = new JpaTemplate(entityManagerFactory); } @SuppressWarnings("unchecked") public void updateSearchIndex() { /* Implement the callback method */ jpaTemplate.execute(new JpaCallback() { public Object doInJpa(EntityManager em) throws PersistenceException { List<Article> articles = em.createQuery("select a from Article a").getResultList(); FullTextEntityManager ftEm = Search.getFullTextEntityManager(em); ftEm.getTransaction().begin(); for(Article article : articles) { System.out.println("Indexing Item " + article.getTitle()); ftEm.index(article); } ftEm.getTransaction().commit(); return null; } }); } } I think that it may have to do with the transactions but I'm not exactly sure. If you could just point me in the right direction, that would be helpful too! Thank you.

    Read the article

  • Symfony 1.4 - Don't save a blank password on an executeUpdate action.

    - by Twelve47
    I have a form to edit a UserProfile, which is stored in a MySQL db and includes the following custom configuration:

        public function configure()
        {
            $this->widgetSchema['password'] = new sfWidgetFormInputPassword();
            $this->validatorSchema['password']->setOption('required', false); // you don't need to specify a new password if you are editing a user.
        }

    When the user tries to save, the executeUpdate method is called to commit the changes. If the password is left blank, the password field is set to '', but I want it to retain the old password instead of overwriting it. What is the best (/most in the Symfony ethos) way of doing this? My solution was to override the setter method on the model (which I had done anyway for password encryption) and ignore blank values:

        public function setPassword( $password )
        {
            if ($password == '') return false; // if password is blank don't save it.
            return $this->_set('password', UserProfile::encryptPassword( $password ));
        }

    It seems to work fine like this, but is there a better way? If you're wondering, I cannot use sfDoctrineGuard for this project, as I am dealing with a legacy database and cannot change the schema.

    Read the article

  • Subversion freaking out on me!

    - by Malfist
    I have two copies of a site: one is the production copy and the other is the development copy. I recently added everything in the production copy to a Subversion repository hosted on our Linux backup server. I created a tag of the current version and I was done. I then copied the development copy over the top of the production copy (on my local machine, where I have everything checked out). Only 10-20 files changed; however, when I use TortoiseSVN to do a commit, it says every file has changed. The generated diff shows Subversion removing everything and replacing it with the new version (which is exactly the same). What is going on? How do I fix it? An example diff:

        Index: C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html
        ===================================================================
        --- C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html (revision 5)
        +++ C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html (working copy)
        @@ -1,4 +1,4 @@
        -<html>
        -<body bgcolor="#FFFFFF">
        -</body>
        +<html>
        +<body bgcolor="#FFFFFF">
        +</body>
         </html>
        \ No newline at end of file

    Read the article

  • FluentNHibernate error -- "Invalid object name"

    - by goober
    I'm attempting to do the most simple of mappings with FluentNHibernate & Sql2005. Basically, I have a database table called "sv_Categories". I'd like to add a category, setting the ID automatically, and adding the userid and title supplied. Database table layout: CategoryID -- int -- not-null, primary key, auto-incrementing UserID -- uniqueidentifier -- not null Title -- varchar(50) -- not null Simple. My SessionFactory code (which works, as far as I can tell): _SessionFactory = Fluently.Configure().Database( MsSqlConfiguration.MsSql2005 .ConnectionString(c => c.FromConnectionStringWithKey("SVTest"))) .Mappings(x => x.FluentMappings.AddFromAssemblyOf<CategoryMap>()) .BuildSessionFactory(); My ClassMap code: public class CategoryMap : ClassMap<Category> { public CategoryMap() { Id(x => x.ID).Column("CategoryID").Unique(); Map(x => x.Title).Column("Title").Not.Nullable(); Map(x => x.UserID).Column("UserID").Not.Nullable(); } } My Class code: public class Category { public virtual int ID { get; private set; } public virtual string Title { get; set; } public virtual Guid UserID { get; set; } public Category() { // do nothing } } And the page where I save the object: public void Add(Category catToAdd) { using (ISession session = SessionProvider.GetSession()) { using (ITransaction Transaction = session.BeginTransaction()) { session.Save(catToAdd); Transaction.Commit(); } } } I receive the error Invalid object name 'Category'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: Invalid object name 'Category'. I think it might be that I haven't told the CategoryMap class to use the "sv_Categories" table, but I'm not sure how to do that. Any help would be appreciated. Thanks!

    Read the article

  • Problem updating collection using JPA

    - by FarmBoy
    I have an entity class Foo that contains a Collection<Bar> bars. I've tried a variety of ways, but I'm unable to successfully update my collection. One attempt:

        foo = em.find(key);
        foo.getBars().clear();
        foo.setBars(bars);
        em.flush(); // commit, etc.

    This appends the new collection to the old one. Another attempt:

        foo = em.find(key);
        bars = foo.getBars();
        for (Bar bar : bars) {
            em.remove(bar);
        }
        em.flush();

    At this point, I thought I could add the new collection, but I find that the entity foo has been wiped out. Here are some annotations. In Foo:

        @OneToMany(cascade = { CascadeType.ALL }, mappedBy = "foo")
        private List<Bar> bars;

    In Bar:

        @ManyToOne(optional = false, cascade = { CascadeType.ALL })
        @JoinColumn(name = "FOO_ID")
        private Foo foo;

    Has anyone else had trouble with this? Any ideas?

    Read the article

  • Efficient update of SQLite table with many records

    - by blackrim
    I am trying to use SQLite (sqlite3) for a project to store hundreds of thousands of records (I would like SQLite so users of the program don't have to run a [my]sql server). I sometimes have to update hundreds of thousands of records to enter left/right values (they are hierarchical), but have found the standard

        update table set left_value = 4, right_value = 5 where id = 12340;

    to be very slow. I have tried surrounding every thousand or so updates with

        begin;
        ....
        update ...
        update table set left_value = 4, right_value = 5 where id = 12340;
        update ...
        ....
        commit;

    but again, very slow. Odd, because when I populate it with a few hundred thousand records (with inserts), it finishes in seconds. I am currently testing the speed in Python (the slowness shows up both at the command line and in Python) before I move it to the C++ implementation, but right now this is way too slow and I need to find a new solution, unless I am doing something wrong. Thoughts? (I would also take an open-source alternative to SQLite that is portable.)
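
    A minimal Python sketch (not from the original question) of the batch-transaction approach described above, assuming a hypothetical tree table keyed by an indexed id column; the two points it illustrates are running the whole batch inside a single transaction and using executemany with a parameterized statement:

        import sqlite3

        # Hypothetical schema and data; only the batching pattern is the point here.
        conn = sqlite3.connect('tree.db')
        conn.execute('create table if not exists tree '
                     '(id integer primary key, left_value integer, right_value integer)')

        updates = [(4, 5, 12340), (6, 7, 12341)]  # (left_value, right_value, id) tuples

        with conn:  # one transaction for the whole batch, committed on success
            conn.executemany(
                'update tree set left_value = ?, right_value = ? where id = ?',
                updates)
        conn.close()

    Without an index (or primary key) on id, each UPDATE scans the whole table, which by itself can account for the slowdown described in the post.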

    Read the article

  • How do I prevent a branch from being pushed to another branch in BZR?

    - by cabbey
    We use a dev-test-prod branching scheme with bzr 2. I'd like to set up a bzr hook on the prod branch that will reject a push from the test branch. Looking at the bzr docs, this looks doable, but I'm kind of surprised that my searches don't turn up anyone having done it, at least not via any of the keywords I've thought to search by. I'm hoping someone has already gotten this working and can share their path to success. My current thought is to use the pre_change_branch_tip hook to check for the presence of a file on the test branch; if it's present, fail the commit. You may ask: why test for a file, why not just test the branch name? Because I actually need to handle the case where our developers have branched their devel branch, pulled in the shared test branch, and are now (erroneously) pushing that test branch to production instead of pushing their feature branch to production. And it seems a billion times easier to look for a file in the new branch than to try to interrogate the sending branch's lineage. So has someone done this? Seen it done? Or do I get to venture out into the uncharted wasteland that is hook development with bzr? :)
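
    A rough, untested sketch (an assumption, not a verified plugin) of what such a pre_change_branch_tip hook could look like as a bzrlib plugin on the prod server; the marker filename is hypothetical, and the exact hook and locking API should be checked against the bzr 2.x documentation:

        # Sketch of a bzr 2.x plugin, e.g. ~/.bazaar/plugins/reject_test_branch.py
        from bzrlib import branch, errors

        MARKER = 'TEST_BRANCH_MARKER'  # hypothetical file committed only on the test branch

        def reject_test_branch(params):
            """Refuse the new tip if the marker file exists in the pushed revision."""
            repo = params.branch.repository
            repo.lock_read()
            try:
                tree = repo.revision_tree(params.new_revid)
                if tree.has_filename(MARKER):
                    raise errors.TipChangeRejected(
                        'Refusing push: %s found, this looks like the test branch.' % MARKER)
            finally:
                repo.unlock()

        branch.Branch.hooks.install_named_hook(
            'pre_change_branch_tip', reject_test_branch, 'reject test-branch pushes')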

    Read the article

  • VisualSVN: How to roll back the revision number?

    - by Ita
    The company I work for has suffered a major server failure, and during this failure the SVN repository was lost. But there is still hope! We have an old backup of the repository, which I've managed to restore successfully using VisualSVN. The problem I'm facing now is that I can't update/commit folders that were checked out before the failure. The reason is that, for instance, a local folder has a revision number of 2361, while the repository itself holds a revision number of 2290, which is older. Is there a way to deal with this issue? Can I somehow change the revision numbers on either the local copy or the server copy? A few points:

    - I'm using TortoiseSVN 1.6.6.
    - I can check out folders from the repo and the connection is active.
    - I've picked one of my folders and used the Relocate option on it. This helped me see that there is something wrong with the revision number.
    - I've experimented a bit with the merge option, but this led me nowhere special. (I'm open to suggestions.)

    Thank you for your time, Ita

    Read the article

  • Updating a field in a record dynamically in ExtJS

    - by Daemon
    Scenario I want to update the column data of particular record in grid having store with static data. Here is my store: : : extend : 'Ext.data.Store', model : 'MyModel', autoLoad:true, proxy: { type: 'ajax', url:'app/data/data.json', reader: { type: 'json', root: 'users' } }, My data.json { 'users': [ { QMgrStatus:"active", QMgrName: 'R01QN00_LQYV', ChannelStatus:'active', ChannelName : 'LQYV.L.CLNT', MxConn: 50 } ] } What I am doing to update the record : var grid = Ext.getCmp('MyGrid'); var store = Ext.getStore('Mystore'); store.each(function(record,idx){ val = record.get('ChannelName'); if(val == "LQYV.L.CLNT"){ record.set('ChannelStatus','inactive'); record.commit(); } }); console.log(store); grid.getView().refresh(); MY PROBLEM I am getting the record updated over here.It is not getting reflected in my grid panel.The grid is using the same old store(static).Is the problem of static data? Or am I missing something or going somewhere wrong? Please help me out with this issue.Thanks a lot. MY EDIT I am tryng to color code the column based on the status.But here I am always getting the status="active" even though I am updating the store. What I am trying to do in my grid { xtype : 'grid', itemId : 'InterfaceQueueGrid', id :'MyGrid', store : 'Mytore', height : 216, width : 600, columns : [ { text: 'QueueMgr Status', dataIndex: 'QMgrStatus' , width:80 }, { text: 'Queue Manager \n Name', dataIndex: 'QMgrName' , width: 138}, { text: 'Channel Status',dataIndex: 'ChannelStatus' , width: 78,align:'center', renderer: function(value, meta, record) { var val = record.get('ChannelStatus'); console.log(val); // Here I am always getting status="active". if (val == 'inactive') { return '<img src="redIcon.png"/>'; }else if (val == 'active') { return '<img src="greenIcon.png"/>'; } } }, { text: 'Channel Name', align:'center', dataIndex: 'ChannelName', width: 80} { text: 'Max Connections', align:'center', dataIndex: 'MxConn', width: 80} ] }

    Read the article

  • Best practice for managing changes to 3rd party open source libraries?

    - by Jeff Knecht
    On a recent project, I had to modify an open source library to address a functional deficiency. I followed the SVN best practice of creating a "vendor source" repository and made my changes there. I also submitted the patch to the mailing list of that project. Unfortunately, the project only has a couple of maintainers and they are very slow to commit updates. At some point, I expect the library to be updated, and I expect that my project will want to use the upgraded library. But now I have a potential problem... I don't know whether my patch will have been applied to this future release of the 3rd party library. I also don't know whether my patch will even still be compatible with the internal implementation of the upgraded components. And in all likelihood, someone else will be maintaining my project by that point. Should I name the library in a special way so it is clear that we made special modifications (eg. commons-lang-2.x-for-my-project.jar)? Should I just document the patch and reference the SVN location and a link to the mailing list item in a README? No option that I can think of seems to be fool-proof in an upgrade scenario. What is the best practice for this?

    Read the article

  • Repository layout and sparse checkouts

    - by chuanose
    My team is considering moving from ClearCase to Subversion, and we are thinking of organising the repository like this:

        \trunk\project1
        \trunk\project2
        \trunk\project3
        \trunk\staticlib1
        \trunk\staticlib2
        \trunk\staticlib3
        \branches\..
        \tags\..

    The issue here is that we have lots of projects (1000+) and each project is a DLL that links in several common static libraries. Therefore checking out everything in trunk is a non-starter, as it will take way too long (~2 GB) and is unwieldy for branching. Using svn:externals to pull out the relevant folders for each project doesn't seem ideal, because it results in several working copies for each static library folder, and we also cannot do an atomic commit if the changes span the project and some static libraries. Sparse checkouts sound very suitable for this, as we can write a script to pull out only the required directories. However, when we want to merge changes from a branch back to the trunk, we will need to first check out a full trunk. Is there any advice on 1) a better repository organization, or 2) a way to merge branch changes into a trunk working copy that is sparse?

    Read the article

  • Using "from __future__ import division" in my program, but it isn't loaded with my program

    - by Sara Fauzia
    I wrote the following program in Python 2 to do Newton's method computations for my math problem set, and while it works perfectly, for reasons unbeknownst to me, when I initially load it in IPython with %run -i NewtonsMethodMultivariate.py, the Python 3 division is not imported. I know this because after I load my Python program, entering x**(3/4) gives "1". After manually importing the new division, x**(3/4) remains x**(3/4), as expected. Why is this?

        # coding: utf-8
        from __future__ import division
        from sympy import symbols, Matrix, zeros

        x, y = symbols('x y')
        X = Matrix([[x],[y]])
        tol = 1e-3

        def roots(h, a):
            def F(s):
                return h.subs({x: s[0,0], y: s[1,0]})
            def D(s):
                return h.jacobian(X).subs({x: s[0,0], y: s[1,0]})
            if F(a) == zeros(2)[:,0]:
                return a
            else:
                while (F(a)).norm() > tol:
                    a = a - ((D(a))**(-1))*F(a)
                print a.evalf(10)

    I would use Python 3 to avoid this issue, but my Linux distribution only ships SymPy for Python 2. Thanks for any help you can provide. Also, in case anyone was wondering, I haven't yet generalized this script for nxn Jacobians, and only had to deal with 2x2 in my problem set. Additionally, I'm slicing the 2x2 zero matrix instead of using the command zeros(2,1), because SymPy 0.7.1, installed on my machine, complains that "zeros() takes exactly one argument", though the wiki suggests otherwise. Maybe this command is only for the git version.
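
    A small sketch (my own illustration, not from the post) of the likely explanation: a __future__ import applies only to the compilation unit that contains it, so the script run via %run gets true division, while expressions typed afterwards at the IPython prompt are compiled separately without the flag — and since 3/4 is then 0, x**(3/4) collapses to x**0, which is 1. Importing division in the interactive session itself changes that. The demo below mimics the two compilation units with compile()'s dont_inherit flag:

        # Python 2 sketch: __future__ division is per compilation unit.
        from __future__ import division

        print 3 / 4                    # 0.75 -- this module was compiled with true division

        # Compile the same expression while refusing to inherit this module's
        # future flags (dont_inherit=True), mimicking a separate unit such as
        # the interactive prompt before its own "from __future__ import division":
        code = compile('3/4', '<prompt>', 'eval', 0, True)
        print eval(code)               # 0 -- classic integer division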

    Read the article

  • can I put my sqlite connection and cursor in a function?

    - by steini
    I was thinking I'd try to make my sqlite db connection a function, instead of copy/pasting the ~6 lines needed to connect and execute a query all over the place. I'd like to make it versatile so I can use the same function for create/select/insert/etc. Below is what I have tried. The 'INSERT' and 'CREATE TABLE' queries are working, but if I do a 'SELECT' query, how can I work with the values it fetches outside of the function? Usually I'd like to print the values it fetches and also do other things with them. When I do it like below I get an error:

        Traceback (most recent call last):
          File "C:\Users\steini\Desktop\py\database\test3.py", line 15, in <module>
            for row in connection('testdb45.db', "select * from users"):
        ProgrammingError: Cannot operate on a closed database.

    So I guess the connection needs to be open so I can get the values from the cursor, but I need to close it so the file isn't always locked. Here's my testing code:

        import sqlite3

        def connection(db, arg):
            conn = sqlite3.connect(db)
            conn.execute('pragma foreign_keys = on')
            cur = conn.cursor()
            cur.execute(arg)
            conn.commit()
            conn.close()
            return cur

        connection('testdb.db', "create table users ('user', 'email')")
        connection('testdb.db', "insert into users ('user', 'email') values ('joey', 'foo@bar')")

        for row in connection('testdb45.db', "select * from users"):
            print row

    How can I make this work?
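
    A minimal sketch (not from the original question) of one way around the closed-cursor problem: fetch the rows inside the helper while the connection is still open, and return the resulting list instead of the cursor:

        import sqlite3

        def run_query(db, sql, params=()):
            """Open, execute, commit, close; return any fetched rows as a list."""
            conn = sqlite3.connect(db)
            try:
                conn.execute('pragma foreign_keys = on')
                cur = conn.cursor()
                cur.execute(sql, params)
                rows = cur.fetchall()   # materialize results while the connection is open
                conn.commit()
                return rows
            finally:
                conn.close()

        # Usage sketch; database and table names are just examples:
        run_query('testdb.db', "create table if not exists users (user, email)")
        run_query('testdb.db', "insert into users (user, email) values (?, ?)", ('joey', 'foo@bar'))
        for row in run_query('testdb.db', "select * from users"):
            print row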

    Read the article

  • Visual SourceSafe (VSS): "Access to file (filename) denied" error

    - by tk-421
    Hi, can anybody help with the above SourceSafe error? I've spent hours trying to find a fix. I've also Googled the heck out of it but couldn't find a scenario matching mine, because in my case only a few files (not all) are affected. Here's what I found: only a few files in my project generate this error other files in the same directory (for example, App_Code has one of the problem files) work fine I've tried checking out from both the VSS client and Visual Studio another developer can check out the main problem file without any problems This sounds like a permission issue for my user, right? However: I found the location of one of the problem files in VSS's data directory (using VSS's naming format, as in 'fddaaaaa.a') and checked its permissions; everything looks fine and its permissions match those of other files I can check out successfully I can see no differences in the file properties between working and non-working files What else can I check? Has anyone encountered this problem before and found a solution? Thanks. P.S.: SourceGear, svn or git are not options, unfortunately. P.P.S.: Tried unsuccessfully to add tag "sourcesafe." EDIT: Hey Paddy, I tried to click 'add comment' to respond to your comment, but I'm getting a javascript error when loading this page in IE8 ("jquery undefined," etc.) so this isn't working. This is when checking out files, and yes, I've obliterated my local copy more times than I can remember. ;) EDIT 2: Thanks for the responses, guys (again I can't 'add comment' due to jQuery not loading, maybe blocked as discussed in Meta). If the problem was caused by antivirus or a bad disk, would other users still be able to check out the file(s)? That's the case here, which makes me think it's a permission issue specific to my account. However I've looked at the permissions and they match both other users' settings and settings on other files which I can check out.

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

    Read the article
