Search Results

Search found 3634 results on 146 pages for 'commit charge'.


  • Hibernate: "Field 'id' doesn't have a default value"

    - by André Neves
    Hi, all. I'm facing what I think is a simple problem with Hibernate, but I can't get past it (the Hibernate forums being unreachable certainly doesn't help). I have a simple class I'd like to persist, but I keep getting:

        SEVERE: Field 'id' doesn't have a default value
        Exception in thread "main" org.hibernate.exception.GenericJDBCException: could not insert: [hibtest.model.Mensagem]
            at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:103)
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:91)
            [ a bunch more ]
        Caused by: java.sql.SQLException: Field 'id' doesn't have a default value
            [ a bunch more ]

    The relevant code for the persisted class is:

        package hibtest.model;

        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.Inheritance;
        import javax.persistence.InheritanceType;

        @Entity
        @Inheritance(strategy = InheritanceType.JOINED)
        public class Mensagem {

            protected Long id;

            protected Mensagem() { }

            @Id
            @GeneratedValue
            public Long getId() {
                return id;
            }

            public Mensagem setId(Long id) {
                this.id = id;
                return this;
            }
        }

    And the actual running code is just plain:

        SessionFactory factory = new AnnotationConfiguration()
                .configure()
                .buildSessionFactory();

        Session session = factory.openSession();
        Transaction tx = session.beginTransaction();
        Mensagem msg = new Mensagem("YARR!");
        session.save(msg);
        tx.commit();
        session.close();

    I tried some strategies within the @GeneratedValue annotation, but none of them seemed to work. Initializing id doesn't help either (e.g. Long id = 20L;). Could anyone shed some light?

    EDIT 2: Confirmed: messing with @GeneratedValue(strategy = GenerationType.XXX) doesn't solve it.

    SOLVED: Recreating the database solved the problem.
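
    Since recreating the database fixed it, the likeliest cause is a stale schema: the id column was created without AUTO_INCREMENT before the mapping settled, and hbm2ddl won't alter an existing column. A minimal sketch of two repairs, assuming a MySQL backend and that the table name matches the class name:

        // Hedged option 1: pin the mapping to the database's native identity column
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        public Long getId() {
            return id;
        }

        -- Hedged option 2: repair the existing table in place instead of recreating it
        ALTER TABLE Mensagem MODIFY id BIGINT NOT NULL AUTO_INCREMENT;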

  • Bamboo to Build Specific SVN Revision

    - by Anton Gogolev
    Hi! Imagine there's a project in Bamboo with two build plans: Staging Deployment (SD) and Production Deployment (PD). Building SD checks out the latest sources, builds them, and deploys the web site to a staging server. Currently, PD does exactly the same, except it deploys the latest version of the web site to a production server. Clearly, this is not very good: I want to be able to deploy the exact same version of the web site that was previously deployed on the staging server, not the latest one.

    To illustrate: suppose we're at r101 in the SVN repo. Clicking "Build SD" will deploy a web site version, say, 2.1.0.101 to the staging server. Now we commit a breaking change and end up at r102. Now I want to deploy to the production server. If I hit "Build PD", Bamboo will happily check out r102 and build it, resulting in version 2.1.0.102 being deployed to the production server. What I want it to do, however, is to build and deploy the version which was previously built by the SD plan (that is, 2.1.0.101). Of course, I could make the SD plan tag the latest successful build as tags/builds/latest, but I would rather have Bamboo itself handle that.
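
    Pending a native Bamboo answer, the tagging workaround the question mentions can be a final task in the SD plan. A hedged sketch, assuming the svn client is available on the build agent; the repository URL is a placeholder:

        # Tag what SD just built, so PD can check out this tag instead of HEAD
        svn copy https://svn.example.com/repo/trunk \
                 https://svn.example.com/repo/tags/builds/latest \
                 -m "Tag staging build ${bamboo.buildNumber} for production deployment"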

  • Github file size limit changed 6/18/13. Can't push now

    - by slindsey3000
    How does this change as of June 18, 2013 affect my existing repository with a file that exceeds that limit? I last pushed 2 months ago with a large file. I have since removed the large file locally, but I cannot push anything now. I get a remote error:

        remote: error: File cron_log.log is 126.91 MB; this exceeds GitHub's file size limit of 100 MB

    I added the file to .gitignore after the original push, but it still exists on the remote (origin). Removing it locally should get rid of it at origin (GitHub), right? But it is not letting me push, because there is a file on GitHub that exceeds the limit: https://github.com/blog/1533-new-file-size-limits

    These are the commands I issued, plus the error messages:

        git add .
        git commit -m "delete cron_log.log"
        git push origin master

        remote: Error code: 40bef1f6653fd2410fb2ab40242bc879
        remote: warning: Error GH413: Large files detected.
        remote: warning: See http://git.io/iEPt8g for more information.
        remote: error: File cron_log.log is 141.41 MB; this exceeds GitHub's file size limit of 100 MB
        remote: error: File cron_log.log is 126.91 MB; this exceeds GitHub's file size limit of 100 MB
        To https://github.com/slinds(omited_here)/linexxxx(omited_here).git
         ! [remote rejected] master -> master (pre-receive hook declined)
        error: failed to push some refs to 'https://github.com/slinds(omited_here)

    I then tried things like:

        git rm cron_log.log
        git rm --cached cron_log.log

    Same error.
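
    The pre-receive hook scans every commit being pushed, not just the tip, so deleting the file in a new commit doesn't help: the old commits still carry it. The file has to be rewritten out of the branch history. A minimal sketch using the stock filter-branch recipe (the BFG Repo-Cleaner is a faster alternative); note that rewriting history means any other clones must re-clone or hard-reset afterwards:

        # Strip cron_log.log from every commit on every ref
        git filter-branch --force --index-filter \
            'git rm --cached --ignore-unmatch cron_log.log' \
            --prune-empty --tag-name-filter cat -- --all

        # Keep it out of future commits, then force-push the rewritten history
        echo cron_log.log >> .gitignore
        git add .gitignore
        git commit -m "Ignore cron_log.log"
        git push origin master --force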

  • Cannot turn off autocommit in a script using the Django ORM

    - by Wes
    I have a command line script that uses the Django ORM and a MySQL backend. I want to turn off autocommit and commit manually. For the life of me, I cannot get this to work. Here is a pared down version of the script. A row is inserted into testtable every time I run it, and I get this warning from MySQL: "Some non-transactional changed tables couldn't be rolled back".

        #!/usr/bin/python
        import os
        import sys

        django_dir = os.path.abspath(os.path.normpath(os.path.join(os.path.dirname(__file__), '..')))
        sys.path.append(django_dir)
        os.environ['DJANGO_DIR'] = django_dir
        os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'

        from django.core.management import setup_environ
        from myproject import settings
        setup_environ(settings)

        from django.db import transaction, connection

        cursor = connection.cursor()
        cursor.execute('SET autocommit = 0')
        cursor.execute('insert into testtable values (\'X\')')
        cursor.execute('rollback')

    I also tried placing the insert in a function and adding Django's commit_manually wrapper, like so:

        @transaction.commit_manually
        def myfunction():
            cursor = connection.cursor()
            cursor.execute('SET autocommit = 0')
            cursor.execute('insert into westest values (\'X\')')
            cursor.execute('rollback')

        myfunction()

    I also tried setting DISABLE_TRANSACTION_MANAGEMENT = True in settings.py, with no further luck. I feel like I am missing something obvious. Any help you can give me is greatly appreciated. Thanks!
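
    That MySQL warning is the telling part: "non-transactional changed tables" means the table's storage engine (almost certainly MyISAM, the historical MySQL default) does not support transactions at all, so no client-side autocommit setting can make the insert roll back. A hedged check and fix, assuming this is indeed the cause:

        -- Inspect the engine; MyISAM ignores ROLLBACK, InnoDB honors it
        SHOW TABLE STATUS WHERE Name = 'testtable';

        -- Convert the table so rollback actually works
        ALTER TABLE testtable ENGINE = InnoDB;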

  • git-p4 submit fails with "Not a valid object name HEAD~261"

    - by Harlan
    I've got a git repository that I'd like to mirror to a Perforce repository. I've downloaded the git-p4 script (the more recent version that doesn't give deprecation warnings) and have been working with that. I've figured out how to pull changes from Perforce, but I'm getting an error when I try to sync changes from the git repo back. Here's what I've done so far:

        git clone [email protected]:asdf/qwerty.git
        git-p4 sync //depot/path/to/querty
        git merge remotes/p4/master

    (There was a single README file.) So, I've copied the origin to a clean, new directory, got a lovely looking merged tree of files, and git status shows I'm up to date. But:

        > git-p4 submit
        fatal: Not a valid object name HEAD~261
        Command failed: git cat-file commit HEAD~261

    This thread on the git mailing list seems to be relevant, but I can't figure out what they're doing with all the A, B, and Cs. Could someone please clarify what "Not a valid object name" means, and what I can do to fix the problem? All I want to do is to periodically snapshot the origin/master into Perforce; a full history is not required. Thanks.

  • git push error '! [remote rejected] master -> master (branch is currently checked out)'

    - by hap497
    Hi, yesterday I posted a question about how to clone a git repository from one of my machines to another: http://stackoverflow.com/questions/2808177/how-can-i-git-clone-from-another-machine/2809612#2809612

    I am able to successfully clone a git repository from my src (192.168.1.2) to my dest (192.168.1.1). But when I edit a file, do a 'git commit -a -m "test"', and then do a git push, I get this error on my dest (192.168.1.1):

        git push
        [email protected]'s password:
        Counting objects: 21, done.
        Compressing objects: 100% (11/11), done.
        Writing objects: 100% (11/11), 1010 bytes, done.
        Total 11 (delta 9), reused 0 (delta 0)
        error: refusing to update checked out branch: refs/heads/master
        error: By default, updating the current branch in a non-bare repository
        error: is denied, because it will make the index and work tree inconsistent
        error: with what you pushed, and will require 'git reset --hard' to match
        error: the work tree to HEAD.
        error:
        error: You can set 'receive.denyCurrentBranch' configuration variable to
        error: 'ignore' or 'warn' in the remote repository to allow pushing into
        error: its current branch; however, this is not recommended unless you
        error: arranged to update its work tree to match what you pushed in some
        error: other way.
        error:
        error: To squelch this message and still keep the default behaviour, set
        error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
        To git+ssh://[email protected]/media/LINUXDATA/working
         ! [remote rejected] master -> master (branch is currently checked out)
        error: failed to push some refs to 'git+ssh://[email protected]/media/LINUXDATA/working'

    I have two versions of git; will that cause this problem? I have git 1.7 on 192.168.1.2 (src) but git 1.5 on 192.168.1.1 (dest). I'd appreciate it if someone could help me with this. Thank you.
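
    The error text itself points at the two standard ways out: push into a bare repository, or explicitly allow updating a checked-out branch. A hedged sketch of both, with the paths taken from the question:

        # Option 1 (preferred): make the destination bare, then pull into a
        # separate work tree from it
        git clone --bare /media/LINUXDATA/working /media/LINUXDATA/working.git

        # Option 2: allow the push, then refresh the work tree by hand afterwards
        git config receive.denyCurrentBranch ignore   # run in the destination repo
        git reset --hard                              # run after each push arrives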

  • mercurial .hgrc notify hook

    - by Eeyore
    Could someone tell me what is incorrect in my .hgrc configuration? I am trying to use gmail to send a e-mail after each push and/or commit.

    .hgrc:

        [paths]
        default = ssh://www.domain.com/repo/hg

        [ui]
        username = intern <[email protected]>
        ssh = "C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub"

        [extensions]
        hgext.notify =

        [hooks]
        changegroup.notify = python:hgext.notify.hook
        incoming.notify = python:hgext.notify.hook

        [email]
        from = [email protected]

        [smtp]
        host = smtp.gmail.com
        username = [email protected]
        password = sure
        port = 587
        tls = true

        [web]
        baseurl = http://dev/...

        [notify]
        sources = serve push pull bundle
        test = False
        config = /path/to/subscription/file
        template = \ndetails: {baseurl}{webroot}/rev/{node|short}\nchangeset: {rev}:{node|short}\nuser: {author}\ndate: {date|date}\ndescription:\n{desc}\n
        maxdiff = 300

    Error:

        Incoming command failed for P/project.
        running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" [email protected] "hg -R repo/hg serve --stdio""
        sending hello command
        sending between command
        remote: FATAL ERROR: Server unexpectedly closed network connection
        abort: no suitable response from remote hg! , error code: -1
        running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" [email protected] "hg -R repo/hg serve --stdio""
        sending hello command
        sending between command
        remote: FATAL ERROR: Server unexpectedly closed network connection
        abort: no suitable response from remote hg!
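
    The error shown never reaches the notify hook: plink is dying during the SSH handshake, before Mercurial runs any hook. One hedged observation about the [ui] line: plink's -i option expects a private key in PuTTY's .ppk format, and handing it a public key (key.pub) can produce exactly this kind of abrupt disconnect. A first diagnostic step, reusing the paths from the question (the .ppk filename is an assumption):

        REM Test the SSH leg on its own, outside Mercurial (Windows command line)
        "C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.ppk" [email protected]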

  • SVN Workflow - Chicken Before the Egg - Before merging V1 with V2, I need code from V1 to work on V2

    - by Jake
    Hi, our distributed team (3 internal devs and 3+ external devs) uses SVN to manage our codebase for a web site. We have a branch for each minor version (4.1.0, 4.1.1, 4.1.2, etc.) and a trunk, which we merge each version into when we do a release and publish to our site.

    An example of the problem we are having is this: a new feature, call it "Ability to Create a Project", is added to 4.1.1. Another feature which depends on it, "Ability to Add Tasks to Projects", is scheduled for 4.1.2. So, on Monday, we say 4.1.1 is 'closed' and needs to be tested. Our remote developers usually start working on features/tickets for 4.1.2 at this point. Throughout the week we test 4.1.1, fix any bugs, and commit them back to 4.1.1. Then, on Friday or so, we tag 4.1.1, merge it with trunk, and finally merge it into 4.1.2.

    But for the 4-5 days we are testing, 4.1.2 doesn't have the code from 4.1.1 that some of the new features for 4.1.2 depend on. So a dev adding the "Ability to Add Tasks to Projects" feature doesn't have the "Ability to Create a Project" feature to build upon, and has to do some file copy shenanigans to be able to keep working on it. What could/should we do to smooth out this process?

    P.S. Apologies if this question has been asked before - I did search but couldn't find what I'm looking for.
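
    One common smoothing step: merge 4.1.1 into 4.1.2 as soon as 4.1.1 closes, rather than waiting for Friday, and re-run the merge whenever a test-week bug fix lands in 4.1.1. Subversion's merge tracking (1.5+) keeps the repeat merges cheap. A hedged sketch, with placeholder branch paths:

        # In a working copy of the 4.1.2 branch
        svn merge ^/branches/4.1.1
        svn commit -m "Sync 4.1.1 into 4.1.2 so dependent features can build on it"
        # Repeat after each bug fix committed to 4.1.1 during test week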

  • What is causing this OverflowError in Django?

    - by orokusaki
    I'm using a normal ModelForm.save() to create an object, and this exception comes up. It worked fine until I added commit_manually, transaction.rollback() and transaction.commit() to my view. Has anyone else run into this? Is this because of sqlite3?

        OverflowError: long too big to convert
        C:\Python26\Lib\site-packages\django-trunk\django\db\backends\sqlite3\base.py in execute, line 197
        params: (203866156270872165269663274649746494334L,)
        query: u'SELECT (1) AS "a", "auth_user"."id", "auth_user"."username", "auth_user"."first_name", "auth_user"."last_name", "auth_user"."email", "auth_user"."password", "auth_user"."is_staff", "auth_user"."is_active", "auth_user"."is_superuser", "auth_user"."last_login", "auth_user"."date_joined" FROM "auth_user" WHERE "auth_user"."id" = ? LIMIT 1'
        self <django.db.backends.sqlite3.base.SQLiteCursorWrapper object at 0x015D5A98>

    Why would that L param be passed in, and

  • I'm trying to populate a MySQL table with some data, but mysqli won't let me insert every 10th statement

    - by Tunji Gbadamosi
    I want to initialise a 'ticket' table with some ticket IDs. To do this, I want to insert 120 ticket IDs into the table. However, at every 10th statement, MySQL tells me that the ID already exists and thus won't let me insert it. Here's my code:

        //make a query
        $insert_ticket_query = "INSERT INTO ticket (id) VALUES (?)";
        $insert_ticket_stmt = $mysqli->stmt_init();
        $insert_ticket_stmt->prepare($insert_ticket_query);
        $insert_ticket_stmt->bind_param('s', $ticket_id);

        $mysqli->autocommit(FALSE); //start transaction

        for ($i = 0; $i < NO_GUESTS; $i++) {
            $id = generate_id($i);
            $ticket_id = format_id($id, $prefix['ticket'], $suffix['ticket']);
            $t_id = $ticket_id;
            //echo '<p>'.$ticket_id.'</p>';
            //$result = $mysqli->query("SELECT * FROM ticket WHERE id='".$ticket_id."'");
            //$row_count = $result->num_rows;
            if (($result = $mysqli->query("SELECT * FROM ticket WHERE id='".$t_id."'")) == FALSE) {
                $result->close();
                if ($insert_ticket_stmt->execute()) {
                    $mysqli->commit();
                    echo "<p>".$t_id."added to the ticket table!</p>";
                } else {
                    $mysqli->rollback();
                    echo "problem inserting'".$t_id."' to the ticket table";
                }
            } else {
                echo "<p>".$t_id."already exists, so not adding it!</p>";
                $result->close();
            }
        }
        $mysqli->autocommit(TRUE);
        $insert_ticket_stmt->close();
        ?>
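
    Two things in that loop are worth a hedged look. First, mysqli::query() returns a result object even when zero rows match, so the == FALSE test distinguishes "query failed" from "query succeeded", not "id exists" from "id free"; the insert branch only runs when the SELECT errors out, and $result->close() is then called on FALSE. Second, if the duplicates arrive on a regular cycle, generate_id()/format_id() (not shown) may simply repeat every 10th value, which is worth echoing to verify. A sketch of a row-count-based check:

        // Assumes the same $mysqli connection; count rows instead of testing for failure
        $result = $mysqli->query("SELECT id FROM ticket WHERE id = '" . $t_id . "'");
        if ($result !== FALSE && $result->num_rows == 0) {
            // id is genuinely unused: safe to run the prepared INSERT
            $insert_ticket_stmt->execute();
        }
        if ($result !== FALSE) {
            $result->close();
        }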

  • OleDb database to DataSet and back in c#?

    - by Troy
    I'm writing a program that lets a user:

    - Connect to an (arbitrary) database
    - View all of the tables in that database in separate DataGridViews
    - Edit them in the program, generate random data, and see the results
    - Choose to commit those changes or revert

    So I discovered the DataSet class, which looks like it's capable of holding everything that a database would, and I decided that the best thing to do here would be to load everything into one DataSet, let the user edit it, and then save the DataSet back to the database. The problem is that the only way I can find to load the database tables is this:

        set = new DataSet();
        DataTable schema = connection.GetOleDbSchemaTable(
            OleDbSchemaGuid.Tables,
            new string[] { null, null, null, "TABLE" });
        foreach (DataRow row in schema.Rows)
        {
            string tableName = row.Field<string>("TABLE_NAME");
            DataTable dTable = new DataTable();
            new OleDbDataAdapter("SELECT * FROM " + tableName, connection).Fill(dTable);
            dTable.TableName = tableName;
            set.Tables.Add(dTable);
        }

    while it seems like there should be a simpler way, given that DataSets appear to be designed for exactly this purpose. The real problem, though, is when I try saving these things. In order to use the OleDbDataAdapter.Update() method, I'm told that I have to provide valid INSERT queries. Doesn't that kind of negate the whole point of having a class to handle this stuff for me? Anyway, I'm hoping somebody can either explain how to load and save a database into a DataSet, or maybe give me a better idea of how to do what I'm trying to do. I could always parse the commands together myself, but that doesn't seem like the best solution.
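
    The missing piece is usually OleDbCommandBuilder: attached to an adapter whose SelectCommand is set, it derives the INSERT/UPDATE/DELETE commands automatically, provided each table has a primary key the builder can see. A hedged sketch that keeps one adapter per table so Update() can be called later (the adapters dictionary is a hypothetical addition):

        // One adapter + command builder per table; the builder hooks into the
        // adapter's update events and supplies the missing commands
        var adapters = new Dictionary<string, OleDbDataAdapter>();
        foreach (DataRow row in schema.Rows)
        {
            string tableName = row.Field<string>("TABLE_NAME");
            var adapter = new OleDbDataAdapter("SELECT * FROM " + tableName, connection);
            new OleDbCommandBuilder(adapter);   // generates INSERT/UPDATE/DELETE
            var dTable = new DataTable(tableName);
            adapter.Fill(dTable);
            set.Tables.Add(dTable);
            adapters[tableName] = adapter;
        }

        // Commit: push each edited table back; revert: set.RejectChanges()
        foreach (var pair in adapters)
            pair.Value.Update(set.Tables[pair.Key]);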

  • PHP Site Deployment Suggestion

    - by TheOnly92
    I'm currently quite troubled by the deployment process my team has adopted. It's very old-fashioned and I know it doesn't work very well, but I don't exactly know how to change it, so please give me some suggestions. Here is our current setup:

    - 2 webservers
    - 1 database server
    - 1 test server

    Current deployment process:

    1. We develop and work on the test server; every change is uploaded manually to the test server.
    2. When a change or feature is complete, we commit the changes to the SVN repository.
    3. After committing the changes, we upload them to the first webserver, where a cronjob runs every minute to sync the files between the servers.

    Something very annoying is that whenever we upload a file just as the syncing job starts, the synced file will appear corrupted, since it is only half-uploaded. Another thing is that whenever there is a deployment fault, it is extremely difficult to revert. These are basically the problems I'm facing; what should I do?
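
    A common cure for both the half-uploaded-file corruption and the hard rollbacks is to deploy from SVN into a fresh, timestamped directory and switch a symlink once the copy is complete, instead of syncing files in place. A hedged sketch with placeholder paths and URLs:

        # On each webserver: export a clean copy, then flip the symlink atomically
        svn export https://svn.example.com/repo/trunk /var/www/releases/20130620-1412
        ln -sfn /var/www/releases/20130620-1412 /var/www/current
        # Rollback = point the symlink back at the previous release directory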

  • Status of Data in Rollback of Large Transaction in SQL Server

    - by Lloyd Banks
    I have a data warehousing procedure that downloads and replaces dozens of tables from a linked server to a local database. Every once in a while, the code gets stuck on one of the tables because the table on the linked server is in a state of transition. I was under the assumption that, since the entire procedure is considered one transaction commit, none of the changes made by the procedure so far would have committed when it gets stuck. But the opposite seems to be true: tables that were "downloaded" before the procedure got stuck have been updated with today's versions on the local server. Shouldn't SQL Server wait for the entire procedure to finish before the changes are durable?

        CREATE PROCEDURE MYIMPORT AS
        BEGIN
            SET NOCOUNT ON

            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE1')
                DROP TABLE TABLE1
            SELECT COLUMN1, COLUMN2, COLUMN3
            INTO TABLE1
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE1')

            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE2')
                DROP TABLE TABLE2
            SELECT COLUMN1, COLUMN2, COLUMN3
            INTO TABLE2
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE2')

            --IF THE PROCEDURE GETS STUCK HERE, THEN CHANGES TO TABLE1 WOULD HAVE BEEN MADE
            --ON THE LOCAL SERVER WHILE NO CHANGES WOULD HAVE BEEN MADE TO TABLE3

            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE3')
                DROP TABLE TABLE3
            SELECT COLUMN1, COLUMN2, COLUMN3
            INTO TABLE3
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE3')
        END
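
    The premise is the issue: a stored procedure is not implicitly one transaction. Under SQL Server's default autocommit mode, each DROP and SELECT ... INTO commits individually as it completes, which matches the observed behavior. Making the import all-or-nothing takes an explicit transaction; a hedged sketch of the shape (the schema-locking side effects of DROP inside a long transaction are worth testing):

        CREATE PROCEDURE MYIMPORT AS
        BEGIN
            SET NOCOUNT ON
            BEGIN TRY
                BEGIN TRANSACTION
                -- ... the existing DROP / SELECT ... INTO pairs go here ...
                COMMIT TRANSACTION
            END TRY
            BEGIN CATCH
                IF @@TRANCOUNT > 0
                    ROLLBACK TRANSACTION
            END CATCH
        END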

  • persistence.xml ignores Hibernate and chooses DataNucleus

    - by iNPUTmice
    Hi, I'm toying around with GWT (dunno if this matters) and Hibernate. I've created a file persistence.xml in META-INF with (among) other configuration the line:

        <provider>org.hibernate.ejb.HibernatePersistence</provider>

    But when I start up the EntityManager, it chooses DataNucleus instead of Hibernate (which later fails because DataNucleus isn't installed - its jars are not on the classpath). The Java code is:

        EntityManagerFactory factory = Persistence.createEntityManagerFactory("gwt");
        EntityManager em = factory.createEntityManager();
        EntityTransaction transaction = em.getTransaction();
        transaction.begin();
        Campaign campaign = new Campaign();
        campaign.setName("Test");
        em.persist(campaign);
        transaction.commit();

    The config file contains:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence xmlns="http://java.sun.com/xml/ns/persistence"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
                     version="1.0">
            <persistence-unit name="gwt" transaction-type="JTA">
                <provider>org.hibernate.ejb.HibernatePersistence</provider>
                <jta-data-source>java:/DefaultDS</jta-data-source>
                <properties>
                ...
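
    If a provider is picked that persistence.xml explicitly rules out, the usual suspect is that this persistence.xml isn't the one (or isn't on the classpath) that Persistence actually sees; provider discovery then falls through to whatever javax.persistence.spi.PersistenceProvider services are visible. A hedged workaround is to name the provider programmatically via JPA's standard override property, which also fails fast if Hibernate's jars are missing:

        // Force the provider at bootstrap time (requires java.util imports)
        Map<String, String> props = new HashMap<String, String>();
        props.put("javax.persistence.provider", "org.hibernate.ejb.HibernatePersistence");
        EntityManagerFactory factory = Persistence.createEntityManagerFactory("gwt", props);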

  • Is there an "embedded DBMS" to support multiple writer applications (processes) on the same db files

    - by Amir Moghimi
    I need to know if there is any embedded DBMS (preferably in Java and not necessarily relational) which supports multiple writer applications (processes) on the same set of db files. BerkeleyDB supports multiple readers but just one writer; I need multiple writers and multiple readers.

    UPDATE: It is not a multiple-connection issue. I do not mean multiple connections to a running DBMS application (process); I need multiple DBMS applications (processes) to commit to the same storage files. HSQLDB, H2, JavaDB (Derby) and MongoDB do not support this feature. I suspect there may be File System limitations that prohibit it. If so, is there a File System that allows multiple writers on a single file?

    Use case: a high-throughput clustered system that intends to store its high-volume business log entries on a SAN. Storing business logs in separate files for each server does not fit, because query and indexing capabilities are needed across the whole set of logs. Because a SAN typically is its own network of storage devices that are generally not accessible through the regular network by regular devices, I want to use SAN bandwidth for logging while cluster LAN bandwidth is used for other server-to-server and client-to-server communications.

  • Paperclip wont save image in rails app

    - by Micke
    Hello stackoverflow. I am trying to use Paperclip with my Rails app to add an avatar to a user, but it won't save the image when creating the user. This is what the model looks like:

        class User < ActiveRecord::Base
          has_attached_file :avatar

    And the register form in Haml:

        - form_for :user, @user, :url => { :action => "signup" }, :html => { :multipart => true } do |f|
          ...
          ...
          %li
            %div{:class => "header"} Profilepicture
            %div{:class => "input"}
              = f.file_field :avatar

    And this is what gets passed to the "signup" action, according to the log:

        Parameters: {"commit"=>"Save", "action"=>"signup", "controller"=>"user/register",
          "user"=>{"name"=>"Micke Lisinge", "birthmonth"=>"07",
          "password_confirmation"=>"[FILTERED]", "nickname"=>"lisinge",
          "avatar"=>#<File:/tmp/RackMultipart20100426-3076-1x04oxy-0>, "gen"=>"m",
          "birthday"=>"23", "password"=>"[FILTERED]", "birthyear"=>"1992",
          "email"=>"[email protected]"}}
        [paperclip] Saving attachments.

    Paperclip says it is saving the attachments, but when I look in the public folder in my app, it has created a system folder that is empty. So it seems like it isn't saving the picture to the folder, even though the upload is handled by the form and lands in my /tmp folder. Maybe you guys have any tips or know what this problem might be? Thanks in advance, Micke.

  • What does this svn2git error mean?

    - by Hisham
    I am trying to import my repository from svn to git using svn2git, but it seems like it's failing when it hits a branch. What's the problem?

        Found possible branch point: https://s.aaa.com/repo/trunk/project => https://s.aaa.com/repo/branches/project-beta1.0, 128
        Use of uninitialized value in substitution (s///) at /opt/local/libexec/git-core/git-svn line 1728.
        Use of uninitialized value in concatenation (.) or string at /opt/local/libexec/git-core/git-svn line 1728.
        refs/remotes/trunk: 'https://s.aaa.com/repo' not found in ''
        Running command: git branch -l --no-color
        * master
        Running command: git branch -r --no-color
          trunk
        Running command: git checkout trunk
        Note: checking out 'trunk'.

        You are in 'detached HEAD' state. You can look around, make experimental
        changes and commit them, and you can discard any commits you make in this
        state without impacting any branches by performing another checkout.

        If you want to create a new branch to retain commits you create, you may
        do so (now or later) by using -b with the checkout command again. Example:

          git checkout -b new_branch_name

        HEAD is now at f4e6268... Changing svn repository in cap files
        Running command: git branch -D master
        Deleted branch master (was f4e6268).
        Running command: git checkout -f -b master
        Switched to a new branch 'master'
        Running command: git gc
        Counting objects: 450, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (368/368), done.
        Writing objects: 100% (450/450), done.
        Total 450 (delta 63), reused 450 (delta 63)
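
    A hedged reading of the output: git-svn cannot map the branch copy back to trunk ("refs/remotes/trunk: ... not found in ''"), which usually means the trunk/branches layout assumed by svn2git doesn't match the repository - here trunk contains a per-project subdirectory (trunk/project). svn2git accepts explicit layout flags, so one thing worth trying, with the URL from the question (the flag values are a guess at this layout):

        svn2git https://s.aaa.com/repo --trunk trunk/project --branches branches --notags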

  • What purpose does “using” serve when used in the following way

    - by user287745
    What purpose does "using" serve when used in the following way? One example is this (an answerer - @richj - used this code to solve a problem, thanks):

        private void Method(SqlConnection connection)
        {
            using (SqlTransaction transaction = connection.BeginTransaction())
            {
                try
                {
                    // Use the connection here ....
                    transaction.Commit();
                }
                catch
                {
                    transaction.Rollback();
                    throw;
                }
            }
        }

    Another example I found while reading on the Microsoft support site:

        public static void ShowSqlException(string connectionString)
        {
            string queryString = "EXECUTE NonExistantStoredProcedure";
            StringBuilder errorMessages = new StringBuilder();

            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                SqlCommand command = new SqlCommand(queryString, connection);
                try
                {
                    command.Connection.Open();
                    command.ExecuteNonQuery();
                }
                catch (SqlException ex)
                {
                    for (int i = 0; i < ex.Errors.Count; i++)
                    {
                        errorMessages.Append("Index #" + i + "\n" +
                            "Message: " + ex.Errors[i].Message + "\n" +
                            "LineNumber: " + ex.Errors[i].LineNumber + "\n" +
                            "Source: " + ex.Errors[i].Source + "\n" +
                            "Procedure: " + ex.Errors[i].Procedure + "\n");
                    }
                    Console.WriteLine(errorMessages.ToString());
                }
            }
        }

    I am already doing using System.Data.SqlClient at the top of the page, so why this using in the middle of the code? What if I omit it? I know the code will work, but what functionality will I be losing?
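
    The two usings are unrelated: the directive at the top of the file imports a namespace, while the using statement in the method body guarantees Dispose() is called on an IDisposable object when the block exits, even if an exception is thrown. The first example is roughly equivalent to this expansion (a sketch of what the compiler generates):

        private void Method(SqlConnection connection)
        {
            SqlTransaction transaction = connection.BeginTransaction();
            try
            {
                // ... body of the using block ...
            }
            finally
            {
                if (transaction != null)
                    ((IDisposable)transaction).Dispose();  // runs even when an
                                                           // exception escapes
            }
        }

    Omitting it still compiles and runs, but the transaction or connection then waits for the garbage collector to release its underlying database resources, which can exhaust the connection pool under load.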

  • Cleanest RESTful design for purely "action" calls?

    - by Josh Handel
    Hello all, I am sticking my toe in the RESTful waters and I just can't find a satisfactory solution to handling truly "action"-oriented calls on a RESTful service. My quandary can be broken down into two parts.

    1) Transactional calls: I understand the idea of having an ActionTransactor that you get a resource to with a POST, update the parameters on, and then commit with a PUT (as described all over the place and in the O'Reilly RESTful Web Services book). But I struggle with the idea of keeping URLs with state present forever. If we really, honestly don't need to keep a transaction forever, can we kill the resource URI? Do URIs need to be permanent, or can they be transient URIs that expire?

    2) Non-transactional calls: these might be calls that perform some workflow spanning multiple resources, where having a resource just doesn't make sense. An example might be regenerating some calculated and cached value, like a large aggregate, or re-indexing a blog - some such "purely" action.

    Anyway, I'm curious about the community's thoughts on this. Thus far, I've read that overloading POST is the cleanest way to handle part 2, but there is an equal amount of argument against that approach as well. And (to me) it's not self-documenting, which I thought was one of the key design goals of RESTful APIs.
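
    For part 1, a common resolution is to treat the transaction itself as a short-lived resource: transient URIs are legitimate, provided the server answers clearly after expiry (404, or 410 Gone). A sketch of the exchange, with hypothetical URIs:

        POST /transactions                       -> 201 Created, Location: /transactions/42
        PUT  /transactions/42  {...params}       -> 200 OK   (update the pending work)
        PUT  /transactions/42/state "committed"  -> 200 OK   (the commit step)
        GET  /transactions/42  (a week later)    -> 410 Gone (expired, by design)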

  • How to determine if DataGridView contains uncommitted changes when bound to a SqlDataAdapter

    - by JYelton
    I have a form which contains a DataGridView, a BindingSource, a DataTable, and a SqlDataAdapter. I populate the grid and data bindings as follows:

        private BindingSource bindingSource = new BindingSource();
        private DataTable table = new DataTable();
        private SqlDataAdapter dataAdapter =
            new SqlDataAdapter("SELECT * FROM table ORDER BY id ASC;", ClassSql.SqlConn());

        private void LoadData()
        {
            table.Clear();
            dataGridView1.AutoGenerateColumns = false;
            dataGridView1.DataSource = bindingSource;
            SqlCommandBuilder commandBuilder = new SqlCommandBuilder(dataAdapter);
            table.Locale = System.Globalization.CultureInfo.InvariantCulture;
            dataAdapter.Fill(table);
            bindingSource.DataSource = table;
        }

    The user can then make changes to the data, and commit those changes or discard them by clicking either a save or a cancel button, respectively:

        private void btnSave_Click(object sender, EventArgs e)
        {
            // save everything to the displayed table
            dataAdapter.Update(table);
        }

        private void btnCancel_Click(object sender, EventArgs e)
        {
            // alert user if unsaved changes, otherwise close form
        }

    I would like to add a dialog if cancel is clicked that warns the user of unsaved changes, if unsaved changes exist. Question: How can I determine whether the user has modified data in the DataGridView but not committed it to the database? Is there an easy way to compare the current DataGridView data to the last-retrieved query? (Note that no other threads or users would be altering data in SQL at the same time.)
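
    No comparison against SQL should be needed: the DataTable already tracks row states until Update()/AcceptChanges() clears them. A hedged sketch of a check for the cancel handler (HasUnsavedChanges is a hypothetical helper name):

        private bool HasUnsavedChanges()
        {
            dataGridView1.EndEdit();   // flush any cell edit still in progress
            bindingSource.EndEdit();   // flush the row being edited into the table
            return table.GetChanges() != null;  // null = no added/modified/deleted rows
        }

        private void btnCancel_Click(object sender, EventArgs e)
        {
            if (HasUnsavedChanges() &&
                MessageBox.Show("Discard unsaved changes?", "Confirm",
                    MessageBoxButtons.YesNo) == DialogResult.No)
                return;   // user chose to keep editing
            Close();
        }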

  • Best Practices for Setup and Management of an Open Source Project

    - by VirtuosiMedia
    Later this year I want to release a PHP framework that I've been working on as open source. I do use source control (SVN), but on an extremely limited basis. I'm self-taught, I develop by myself, and I don't have the experience of working with large teams. I have some ideas about what can help make a project successful, but I'm fuzzy on some of the details. Since it's not yet released, I want to do everything I can to set up the right infrastructure from the beginning. What do I need to know in order to set up and manage a successful project?

    Some ideas that I have to make it successful (beyond marketing it):

    - Good documentation and tutorials
    - Automated unit tests and builds to push updates to the website
    - A clear roadmap
    - Bug tracking integrated with the source control
    - A style guide to keep the code consistent along with clear
    - A forum for the community to get support, share ideas, etc.
    - A good example application built with the framework
    - A blog to keep the community informed
    - Maintaining backwards compatibility wherever possible

    Some of my questions (see the sketch after this list for the first one):

    - How do I set up and automate a one-step submit-test-commit-generate API docs-push update to website process?
    - How do I handle (technically) submissions from other users? How can I ensure that those submissions must be approved before being integrated?
    - What are some of the pitfalls that can be avoided in terms of the project community? I'd prefer it to be as friendly and helpful as possible, without a lot of drama.

    I'd love to learn from your experience on any of these points. If you think I'm missing anything big, please share that as well. Any resources (preferably geared toward a beginner) that you could point me towards would also be greatly appreciated.
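
    One hedged way to bootstrap the one-step pipeline is an SVN post-commit hook: Subversion invokes it with the repository path and revision, and it can run the tests, regenerate the API docs, and push the site only when everything passes. All tool names and paths below are hypothetical placeholders for whatever the project actually uses:

        #!/bin/sh
        # post-commit hook sketch: test, document, deploy
        REPOS="$1"; REV="$2"
        svn export -q -r "$REV" "file://$REPOS/trunk" "/tmp/build-$REV"
        cd "/tmp/build-$REV" \
            && phpunit tests/ \
            && phpdoc -d src/ -t docs/api \
            && rsync -a docs/ deploy@www.example.com:/var/www/docs/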

  • How to best implement Version Control for Web Development?

    - by Adam Taylor
    Version control systems are obviously important in development projects, but their use in web development projects appears to be more complex, what with the requirement of having a web server to run all but the simplest of web applications. With that in mind, I have looked around and discovered a few different methods of using version control in web development projects:

    1. Provide each developer with a virtual machine which is a replica of the development server, and have the developer run their working copy of the application in the virtual machine.
    2. Have each developer use a subdomain on the development server, e.g. john.project.com, and check out their working copy of the app to the directories the subdomain points to.
    3. Use the version control system to check out code, make a change, commit the code, and then check it on the development server (which points to the head of the repository).

    I can see a drawback of 1 being the added time required to create the virtual machines and ensure that they are kept in sync with the development server (also the need(?) to continuously change the developer's hosts file to point at the virtual machine rather than the development server). I can see 2 possibly being a problem if absolute URLs are used within the site, unless there is an easy way to update the configuration to use the new subdomains as well. 3 is the easiest to set up, but is rather primitive, and it will presumably become quite tedious for a developer to keep checking in code after every change.

    How have the users of stackoverflow used version control with web development projects, and which method/workflow was most effective? Please also include extra methods I haven't thought of / read about.

  • python / sqlite - database locked despite large timeouts

    - by Chris Phillips
    Hi, I'm sure I'm missing something pretty obvious, but I can't for the life of me stop my pysqlite scripts crashing out with a "database is locked" error. I have two scripts, one to load data into the database and one to read data out, but both will frequently, and instantly, crash depending on what the other is doing with the database at any given time. I've got the timeout on both scripts set to 30 seconds:

        cx = sqlite.connect("database.sql", timeout=30.0)

    I think I can see some evidence of the timeouts, in that I occasionally get what appears to be a timing stamp (e.g. 0.12343827e-06 0.1 - and how do I stop that being printed?) dumped in the middle of my curses-formatted output screen, but no delay that ever gets remotely near the 30-second timeout; still, one of the two keeps crashing again and again.

    I'm running RHEL 5.4 on a 64-bit 4-CPU HS21 IBM blade, and have heard some mention of issues with multi-threading, but am not sure whether this is relevant. The packages in use are sqlite-3.3.6-5 and python-sqlite-1.1.7-1.2.1, and upgrading to newer versions outside of Red Hat's official provisions is not a great option for me. Possible, but not desirable, due to the environment in general.

    I have had autocommit=1 on previously in both scripts, but have since disabled it in both; I am now cx.commit()ing in the inserting script and not committing in the select script. Ultimately, as I only ever have one script actually making modifications, I don't really see why this locking should ever happen. I have noticed that it gets significantly worse over time as the database grows larger. It was recently at 13 MB with 3 equal-sized tables, which was about one day's worth of data. Creating a new file improved this significantly, which seems understandable, but ultimately the timeout just doesn't seem to be being obeyed.

    Any pointers very much appreciated. Thanks, Chris
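
    The "instantly" is the clue: SQLite returns SQLITE_BUSY immediately, without consulting the busy timeout, when waiting could deadlock - classically when one connection holds a SHARED lock inside a still-open transaction while the other needs to escalate its lock. A hedged sketch of the usual mitigation, assuming the sqlite3-style API that the timeout= keyword implies (table and column names are placeholders):

        # Reader: don't hold a transaction (and its SHARED lock) open between queries
        cx = sqlite.connect("database.sql", timeout=30.0)
        cx.isolation_level = None          # autocommit: each SELECT releases its lock
        cur = cx.cursor()
        cur.execute("SELECT count(*) FROM some_table")
        rows = cur.fetchall()              # consume the cursor fully, promptly

        # Writer: keep each transaction short and commit as soon as the work is done
        wx = sqlite.connect("database.sql", timeout=30.0)
        wx.execute("INSERT INTO some_table VALUES (1)")
        wx.commit()                        # releases the write locks for the reader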

  • maven dependencies and jetty - avoiding deploy

    - by James Cooper
    Hi, I have a project with 3 artifacts:

    - common - entities, business logic, no UI code
    - webapp-a - a public web app
    - webapp-b - an admin web app

    webapp-a and webapp-b depend on common. common is configured to deploy to a local Maven repository. So far so good. I have IntelliJ configured so that each artifact is a separate module, and module dependencies are configured properly. I can add a new method to a class in common and immediately use that method in a class in a webapp.

    However, when I run mvn jetty:run, it uses the currently deployed common snapshot in my repository; it does not use my local classes. If I add a method to a class in common, it compiles fine but blows up at runtime. So is it possible to either:

    a) convince jetty:run to use my local common build output, or
    b) deploy my common output to my local ~/.m2/repo while I'm testing locally, before I want to commit/deploy, or
    c) some other solution?

    thank you! -- James
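
    Option (b) is what Maven's install phase is for: mvn install copies the freshly built common artifact into ~/.m2/repository, which is exactly where the webapps' jetty:run resolves the dependency from (deploy, by contrast, targets a remote repository and isn't needed for local testing). A hedged sketch of the loop, assuming the module directories match the artifact names:

        # After changing common, refresh the local repository copy...
        cd common
        mvn install

        # ...then the webapp picks up the new snapshot
        cd ../webapp-a
        mvn jetty:run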

  • Git-svn branch hoses dcommit when using an odd branch structure

    - by Chuck Vose
    I had a boss, past tense, who decided to put SVN branches in the same folder as trunk. Normally this wouldn't affect me that much, but since I'm using git-svn, things aren't going so well. After I did a fetch, it created a folder for each branch in my root folder, so I have three folders: drupal, trunk, and client. The drupal folder is git's master branch; client and trunk are the SVN branches. Merging and committing work great; in fact, everything git-related is working superbly.

    However, dcommit is totally hosed: it's trying to commit a folder called client and one called trunk. I can't even imagine what havoc this would cause for SVN later on. So my question is: what have I done wrong in my .git/config, and is there anything I can do to fix this, or am I going to have to suffer and go back to using SVN? Please don't make me go back - I don't think I can take it anymore. Bastard boss knows how to leave a legacy.

        [svn-remote "svn"]
            url = https://svn.mydomain.com/svn/project_name
            fetch = trunk:refs/remotes/trunk
            branches = *:refs/remotes/*
            tags = tags/*:refs/remotes/tags/*

    Normally the branches line would look like this (when using --stdlayout):

        branches = branches/*:refs/remotes/branches/*

    ls output is this:

        $ ls
        client/  docs/  drupal/  sql/  trunk/
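
    One hedged reading: branches = *:refs/remotes/* makes git-svn treat every top-level directory - including trunk itself and non-branch folders like docs and sql - as a branch, which would explain dcommit trying to push whole folders. git-svn allows several explicit fetch lines instead of a glob, so listing only the real branch may untangle it (worth testing on a fresh clone before trusting a dcommit):

        [svn-remote "svn"]
            url = https://svn.mydomain.com/svn/project_name
            fetch = trunk:refs/remotes/trunk
            fetch = client:refs/remotes/client
            tags = tags/*:refs/remotes/tags/*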
