Search Results

Search found 25346 results on 1014 pages for 'update alternatives'.

Page 320/1014 | < Previous Page | 316 317 318 319 320 321 322 323 324 325 326 327  | Next Page >

  • No Change for Index of DropDownList in a Custom Control!!!

    - by mahdiahmadirad
    I have created a custom control that is a DropDownList with specific items. I exposed AutoPostback and SelectedCategoryId as properties and SelectedIndexChanged as an event of the custom control. Here is the code-behind of my ASCX file:

        private int _selectedCategoryId;
        private bool _autoPostback = false;
        public event EventHandler SelectedIndexChanged;

        public void BindData()
        {
            // Some code...
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            BindData();
            DropDownList1.AutoPostBack = this._autoPostback;
        }

        public int SelectedCategoryId
        {
            get { return int.Parse(this.DropDownList1.SelectedItem.Value); }
            set { this._selectedCategoryId = value; }
        }

        public string AutoPostback
        {
            get { return this.DropDownList1.AutoPostBack.ToString(); }
            set { this._autoPostback = Convert.ToBoolean(value); }
        }

        protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e)
        {
            if (SelectedIndexChanged != null)
                SelectedIndexChanged(this, EventArgs.Empty);
        }

    I want to use an UpdatePanel to update textbox fields according to the drop-down list's selected index. This is my code in the ASPX page:

        <asp:Panel ID="PanelCategory" runat="server">
          <p>
            Select Product Category:&nbsp;
            <myCtrl:CategoryDDL ID="CategoryDDL1" AutoPostback="true"
                OnSelectedIndexChanged="CategoryIndexChanged"
                SelectedCategoryId="0" runat="server" />
          </p>
          <hr />
        </asp:Panel>
        <asp:UpdatePanel ID="UpdatePanelEdit" runat="server">
          <ContentTemplate>
            <%--Some TextBoxes and Other Controls--%>
          </ContentTemplate>
          <Triggers>
            <asp:PostBackTrigger ControlID="CategoryDDL1" />
          </Triggers>
        </asp:UpdatePanel>

    But the selected index of CategoryDDL1 is always 0 (the default), so only a zero value is passed to the event that updates the textbox data. What is wrong with my code? Why doesn't the selected index change?
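
    A likely cause, assuming the standard ASP.NET page lifecycle: BindData() runs on every Page_Load, including postbacks, so the list is rebuilt before the posted selection can be read. A minimal sketch of the usual guard, using the control and method names from the question:

        protected void Page_Load(object sender, EventArgs e)
        {
            // Rebind only on the first request; on postbacks the posted selection
            // is restored from view state instead of being wiped out by BindData().
            if (!IsPostBack)
            {
                BindData();
            }
            DropDownList1.AutoPostBack = this._autoPostback;
        }

    If the update should stay asynchronous, an AsyncPostBackTrigger (rather than PostBackTrigger) on CategoryDDL1 is usually what the UpdatePanel needs, but that is a secondary point.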

    Read the article

  • Reading Characters from an Image

    - by Chinjoo
    I am working on an application which requires matching numbers from a scanned image file against database entries and updating the database with the result. Say I have an image, employee1.jpg. This image has two handwritten entries: an employee number and the amount to be paid to that employee. I have to read the employee number from the image, query the database for that number, and update the employee record with the amount to be paid as read from the image. Both the employee number and the amount are written inside two boxes at a specified place on the image. Is there any way to automate this? Basically I want a solution in .NET using C#. I know this can be done using artificial neural networks. Any ideas would be much appreciated.
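
    Since the two boxes sit at fixed positions on the form, a common first step is to crop just those regions and hand each crop to whatever recognition engine is chosen (Tesseract, a trained network, etc.). A rough sketch; the rectangle coordinates and the Recognize call are placeholders, not part of the question:

        using System.Drawing;

        // Coordinates below are made up; they would come from measuring the scanned form.
        Rectangle employeeNumberBox = new Rectangle(100, 50, 200, 40);
        Rectangle amountBox = new Rectangle(100, 120, 200, 40);

        using (Bitmap scan = new Bitmap(@"employee1.jpg"))
        {
            // Clone copies just the region of interest, keeping the OCR input small and clean.
            using (Bitmap numberCrop = scan.Clone(employeeNumberBox, scan.PixelFormat))
            using (Bitmap amountCrop = scan.Clone(amountBox, scan.PixelFormat))
            {
                string employeeNumber = Recognize(numberCrop); // hypothetical OCR/ANN call
                string amount = Recognize(amountCrop);         // hypothetical OCR/ANN call
                // ...look up employeeNumber in the database and update the amount...
            }
        }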

    Read the article

  • One model and Many edit views

    - by user179438
    I have a model named User, and I want to use two different views to edit it: the usual edit view and another view I called edit_profile. I had no problem creating the routing, controller and views: I added edit_profile and update_profile views, and I added this line to routes.rb:

        map.resources :users, :member => { :edit_profile => :get, :update_profile => :put }

    The problem: when I submit the form in edit_profile and an error occurs in some input fields, Rails reloads the edit_path page instead of the edit_profile_path page. This is the form in edit_profile.html.erb:

        form_for(:user, @user, :url => { :action => :update_profile }, :html => { :method => :put }) do |f|
          = f.text_field :description
          = f.text_area :description
          = f.error_message_on :description
          ....
          = f.submit 'Update profile'

    After clicking Update profile, if input errors occur I want to show the edit_profile view instead of the edit view. Do you have any ideas? Many thanks.

    Read the article

  • How does SqlDataAdapter work internally?

    - by tigrou
    I wonder how SqlDataAdapter works internally, especially when using UpdateCommand to update a huge DataTable (since it's usually a lot faster than just sending SQL statements from a loop). Here are some ideas I have in mind:

    1. It creates a prepared SQL statement (using SqlCommand.Prepare()) with CommandText filled in and SQL parameters initialized with the correct SQL types. Then it loops over the DataRows that need to be updated and, for each record, updates the parameter values and calls SqlCommand.ExecuteNonQuery().

    2. It creates a bunch of SqlCommand objects with everything filled in (CommandText and SQL parameters). Several SqlCommands at once are then batched to the server (depending on UpdateBatchSize).

    3. It uses some special, low-level or undocumented SQL driver instructions that allow it to perform an update on several rows in an efficient way (the rows to update would be provided in a special data format, and the same SQL query - the UpdateCommand here - would be executed against each of these rows).
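
    For what it's worth, batching along the lines of the second idea is configurable directly on the adapter. A minimal sketch, assuming an existing DataTable named ordersTable and a connection string in connectionString (both placeholders):

        using System.Data;
        using System.Data.SqlClient;

        using (var connection = new SqlConnection(connectionString))
        using (var adapter = new SqlDataAdapter("SELECT OrderId, Status FROM Orders", connection))
        {
            // The command builder derives a parameterized UpdateCommand from the SELECT.
            var builder = new SqlCommandBuilder(adapter);
            adapter.UpdateCommand = builder.GetUpdateCommand();

            // Batching requires UpdatedRowSource = None (or OutputParameters);
            // a batch size of 0 means "as many rows per round trip as possible".
            adapter.UpdateCommand.UpdatedRowSource = UpdateRowSource.None;
            adapter.UpdateBatchSize = 100;

            adapter.Update(ordersTable); // sends the changed rows in batches of up to 100
        }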

    Read the article

  • Declare global variables for a batch of execution statements - SQL Server 2005

    - by Shrewd Demon
    I have a SQL script with which I am trying to update a table on the client's machine. The script is as follows:

        BEGIN TRANSACTION

        DECLARE @CreatedBy INT

        SELECT @CreatedBy = [User_Id]
        FROM Users
        WHERE UserName = 'Administrator'

        PRINT @CreatedBy  -- (works fine here and shows me the output)

        PRINT N'Rebuilding [dbo].[Some_Master]'
        ALTER TABLE [dbo].[Some_Master]
            ADD [CreatedBy] [BIGINT] NULL,
                [Reason] [VARCHAR](200) NULL
        GO

        PRINT @CreatedBy  -- (does not work here and throws me an error)

        PRINT N'Updating data in [Some_Master] table'
        UPDATE Some_Master SET CreatedBy = @CreatedBy

        COMMIT TRANSACTION

    But I am getting the following error: Must declare the scalar variable "@CreatedBy". I have observed that if I write the PRINT statement above the ALTER command it works fine and shows me the value, but if I try to print the value after the ALTER command it throws the error above. I don't know why. Please help!
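
    The short explanation is that GO is not T-SQL at all: it is a batch separator understood by client tools (SSMS, sqlcmd, SMO), and each batch is compiled and executed separately, so local variables such as @CreatedBy do not survive it. A rough illustration of what a client tool effectively does, assuming the script text is in scriptText and an open SqlConnection in connection (both assumptions):

        using System;
        using System.Data.SqlClient;
        using System.Text.RegularExpressions;

        // Split on lines containing only GO; each piece is sent to the server as its own batch.
        string[] batches = Regex.Split(scriptText, @"^\s*GO\s*$",
                                       RegexOptions.Multiline | RegexOptions.IgnoreCase);

        foreach (string batch in batches)
        {
            if (string.IsNullOrWhiteSpace(batch)) continue;
            using (var cmd = new SqlCommand(batch, connection))
            {
                // Variables declared in an earlier batch no longer exist here,
                // which is exactly why the second PRINT @CreatedBy fails.
                cmd.ExecuteNonQuery();
            }
        }

    On the T-SQL side, the usual workarounds are to re-declare and re-select @CreatedBy after the GO, or to run the UPDATE through EXEC/sp_executesql so it compiles only after the ALTER has taken effect.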

    Read the article

  • I want to exchange the value of a column in two different rows in Microsoft SQL Server

    - by Silmaril89
    I want to run the following two SQL queries in Microsoft SQL Server:

        UPDATE Partnerships SET sortOrder = 2 WHERE sortOrder = 1;
        UPDATE Partnerships SET sortOrder = 1 WHERE sortOrder = 2;

    The only problem is that I don't allow sortOrder to contain duplicate values; it is a unique key. How can I get around this, given that the first query violates the unique key rule and terminates? Or will I have to get rid of the unique key constraint? Thanks!
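
    One way around it, without dropping the constraint, is to route one of the rows through a value that cannot collide (here -1, an assumption about what is unused in the table) inside a single transaction. A sketch in ADO.NET, with the connection string and the temporary value as assumptions:

        using System.Data.SqlClient;

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var tx = connection.BeginTransaction())
            using (var cmd = new SqlCommand(@"
                UPDATE Partnerships SET sortOrder = -1 WHERE sortOrder = 1;  -- park row 1 out of the way
                UPDATE Partnerships SET sortOrder = 1  WHERE sortOrder = 2;  -- move row 2 into slot 1
                UPDATE Partnerships SET sortOrder = 2  WHERE sortOrder = -1; -- move the parked row into slot 2
                ", connection, tx))
            {
                cmd.ExecuteNonQuery();
                tx.Commit();
            }
        }

    A single statement such as UPDATE Partnerships SET sortOrder = 3 - sortOrder WHERE sortOrder IN (1, 2) also tends to work, because the unique index is validated for the statement as a whole rather than row by row, but the parking approach above is the more defensive of the two.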

    Read the article

  • how to kill an older jsonp request?

    - by pfg
    I have a cross-domain long-polling request using getJSON with a callback that looks something like this:

        $.getJSON("http://long-polling.com/some.fcgi?jsoncallback=?", function(data) {
            if (data.items[0].rval == 1) {
                // update data in page
            }
        });

    The problem is that the request to the long-polling service might take a while to return, and another getJSON request might be made in the meantime and update the page even though the first response is stale. For example:

        Req1: h**p://long-polling.com/some.fcgi at 10:00 AM
        Req2: h**p://long-polling.com/some.fcgi at 10:01 AM
        Req1 returns and updates the data on the page at 10:02 AM

    I want a way to invalidate the return from Req1 if Req2 has already been made. Thanks!

    Read the article

  • Dynamic SQL queries in code possible?

    - by SeanD
    Instead of hard-coding SQL queries like

        SELECT * FROM users WHERE user_id = 220202

    can these be made dynamic, like

        SELECT * FROM $users WHERE $user_id = $input

    The reason I ask is that when changes are needed to table or column names I could just update them in one place, instead of asking developers to go line by line to find all the references to update, which is very time-consuming. I also do not like the idea of exposing database details in the code. My major concern is load time. As with dynamic pages, where the database has to fetch the page content, if the queries are dynamic the system first has to look up the references and then execute the queries, so does that impact load times? I am using the CodeIgniter PHP framework. If it is possible, the next question is where to store all the references: in the app, in a file, in the DB, and how?

    Read the article

  • Cannot "Install New Software" in Eclipse 3.5

    - by David E
    I have just installed Eclipse 3.5 for Java EE developers ("Galileo"). I need to add an extra plugin, but when I select the "Install New Software..." menu item, nothing happens. Literally nothing: no dialog opens and no error message is displayed. If I have the Debug window open, no messages are displayed there either. If I go to Preferences - Install/Update - Available Software Sites, that dialog opens fine; I can manage the list of update sites and test the connections, and they all appear OK. But I cannot actually use them to install anything. Is it just broken, or could there be something more subtle wrong? Thanks.

    Read the article

  • Dynamically update the canvasOverlay property of jQuery jqPlot

    - by You Kuper
    I want to dynamically update the canvasOverlay property of jQuery jqPlot. This will provide the effect of a timeline in my jqPlot. The effect should be similar to the one shown in this jsFiddle. However, instead of drawing points, I want to update the canvasOverlay property every second:

        canvasOverlay: {
          show: true,
          objects: [
            { rectangle: {
                xmax: new Date(),
                xminOffset: "0px", xmaxOffset: "0px",
                yminOffset: "0px", ymaxOffset: "0px",
                color: "rgba(0, 0, 0, 0.1)",
                showTooltip: true
            } },
          ]
        }

    How can I do this? Which functions should I use? UPDATE: My idea is to do something like this:

        canvasOverlay: {
          name: 'current',
          show: true,
          objects: [
            { rectangle: {
                xmax: new Date(),
                xminOffset: "0px", xmaxOffset: "0px",
                yminOffset: "0px", ymaxOffset: "0px",
                color: "rgba(0, 0, 0, 0.1)",
                showTooltip: true
            } },
          ]
        }

        // ...
        var co = plot.plugins.canvasOverlay;
        var current = co.get('current');
        current.options.objects.rectangle.xmax = new Date();
        co.draw(plot);

    Read the article

  • Rails botches the SQL on a complex save

    - by Dan B
    Hi, I am doing something seemingly pretty easy, but Rails is messing up the SQL. I could just execute my own SQL, but the framework should be able to handle this. Here is the save I am trying to perform:

        w = WhipSenVote.find(:first, :conditions => ["whip_bill_id = ? AND whip_sen_id = ?", bill_id, k])
        w.votes_no = w.votes_no - 1
        w.save

    My generated SQL looks like this:

        SELECT * FROM "whip_sen_votes" WHERE (whip_bill_id = E'1' AND whip_sen_id = 7) LIMIT 1

    And then:

        UPDATE "whip_sen_votes" SET "votes_yes" = 14, "updated_at" = '2009-11-13 19:55:54.807000' WHERE "id" = 15

    The first SELECT statement is correct, but as you can see, the UPDATE statement is pretty wrong, though the votes_yes value is correct. Any ideas? Thanks!

    Read the article

  • Count number of messages per user

    - by Pr0no
    Consider the following tables:

        users                  messages
        -------------------    -----------------------------
        user_id   messages     msg_id   user_id   content
        -------------------    -----------------------------
        1         0            1        1         foo
        2         0            2        1         bar
        3         0            3        1         foobar
                               4        3         baz
                               5        3         bar

    I want to count the number of messages per user and insert the outcome into users.messages, like this:

        users
        -------------------
        user_id   messages
        -------------------
        1         3
        2         0
        3         2

    I could use PHP to perform this operation, in pseudo-code:

        foreach ($user_id in users) {
            $count = select count(msg_id) from messages where user_id = $user_id
            update users set messages = $count
        }

    But this is probably very inefficient compared to one query executed in MySQL directly:

        UPDATE users SET messages = ( SELECT COUNT(msg_id) FROM messages )

    But I'm sure this is not a proper query. Therefore, any help would be appreciated :-)
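
    The missing piece is correlating the subquery with the outer row; without a WHERE clause every user gets the total message count. A hedged sketch of the corrected statement, shown here executed through the MySql.Data ADO.NET provider (the connection string is an assumption; the SQL itself is the point):

        using MySql.Data.MySqlClient;

        // The correlated WHERE clause ties each count to its own user row.
        const string sql = @"
            UPDATE users
            SET messages = (SELECT COUNT(m.msg_id)
                            FROM messages m
                            WHERE m.user_id = users.user_id)";

        using (var connection = new MySqlConnection(connectionString)) // connectionString is assumed
        using (var command = new MySqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }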

    Read the article

  • ProgressBar isn't updating

    - by Nuru Salihu
    I have a progressbar that that is show progress returned by the backgroundworker do_dowork event like below . if (ftpSourceFilePath.Scheme == Uri.UriSchemeFtp) { FtpWebRequest objRequest = (FtpWebRequest)FtpWebRequest.Create(ftpSourceFilePath); NetworkCredential objCredential = new NetworkCredential(userName, password); objRequest.Credentials = objCredential; objRequest.Method = WebRequestMethods.Ftp.DownloadFile; FtpWebResponse objResponse = (FtpWebResponse)objRequest.GetResponse(); StreamReader objReader = new StreamReader(objResponse.GetResponseStream()); int len = 0; int iProgressPercentage = 0; FileStream objFS = new FileStream((cd+"\\VolareUpdate.rar"), FileMode.Create, FileAccess.Write, FileShare.Read); while ((len = objReader.BaseStream.Read(buffer, 0, buffer.Length)) > 0) { objFS.Write(buffer, 0, len); iRunningByteTotal += len; double dIndex = (double)(iRunningByteTotal); double dTotal = (double)buffer.Length; double dProgressPercentage = (dIndex / dTotal); iProgressPercentage = (int)(dProgressPercentage); if (iProgressPercentage > 100) { iProgressPercentage = 100; } bw.ReportProgress(iProgressPercentage); } } However, my progressbar does not update. While searching , i was told the UI thread is being blocked and then i thought may be passing the progress outside the loop will do the trick. then i change to this if (ftpSourceFilePath.Scheme == Uri.UriSchemeFtp) { FtpWebRequest objRequest = (FtpWebRequest)FtpWebRequest.Create(ftpSourceFilePath); NetworkCredential objCredential = new NetworkCredential(userName, password); objRequest.Credentials = objCredential; objRequest.Method = WebRequestMethods.Ftp.DownloadFile; FtpWebResponse objResponse = (FtpWebResponse)objRequest.GetResponse(); StreamReader objReader = new StreamReader(objResponse.GetResponseStream()); int len = 0; int iProgressPercentage = 0; FileStream objFS = new FileStream((cd+"\\VolareUpdate.rar"), FileMode.Create, FileAccess.Write, FileShare.Read); while ((len = objReader.BaseStream.Read(buffer, 0, buffer.Length)) > 0) { objFS.Write(buffer, 0, len); iRunningByteTotal += len; double dIndex = (double)(iRunningByteTotal); double dTotal = (double)buffer.Length; double dProgressPercentage = (dIndex / dTotal); iProgressPercentage = (int)(dProgressPercentage); if (iProgressPercentage > 100) { iProgressPercentage = 100; } // System.Threading.Thread.Sleep(2000); iProgressPercentage++; // SetText("F", true); } bw.ReportProgress(iProgressPercentage); progressBar.Refresh(); } However still didn't help. When i put break point in my workerprogresschanged event, it show the progressbar.value however does not update. I tried progressbar.update(0, i also tried sleeping the thread for a while in the loop still didn't help. Please any suggestion/help would be appreciated .
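
    One thing stands out independently of any threading issue: dTotal is set to buffer.Length (the size of one read buffer) rather than the size of the whole download, and the ratio is never multiplied by 100, so the reported percentage is essentially meaningless. A hedged sketch of the percentage bookkeeping alone, meant as a drop-in for the loop in the question's DoWork handler; objResponse, objReader, objFS and bw come from the question, everything else is illustrative, and ContentLength may not be reported by every FTP server:

        long totalBytes = objResponse.ContentLength;   // may be -1 if the server doesn't report it
        long runningByteTotal = 0;
        int lastReported = -1;
        byte[] buffer = new byte[8192];
        int len;

        while ((len = objReader.BaseStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            objFS.Write(buffer, 0, len);
            runningByteTotal += len;

            if (totalBytes > 0)
            {
                // Percentage of the whole file, not of a single buffer.
                int percentage = (int)((runningByteTotal * 100L) / totalBytes);
                if (percentage != lastReported)   // avoid flooding the UI thread
                {
                    lastReported = percentage;
                    bw.ReportProgress(percentage);
                }
            }
        }

    ReportProgress only reaches the bar if WorkerReportsProgress is true on the BackgroundWorker and the ProgressChanged handler assigns e.ProgressPercentage to progressBar.Value; touching the ProgressBar directly from DoWork is what causes the cross-thread problems mentioned in the question.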

    Read the article

  • Access uploaded file in JSON encoded data

    - by okello
    I've encoded my form data into JSON. This has been achieved with the following ExtJS store configuration:

        Ext.define('XXX.store.Registration', {
            extend: 'Ext.data.Store',
            model: 'XXX.model.Registration',
            autoLoad: true,
            pageSize: 15,
            autoLoad: { start: 0, limit: 15 },
            proxy: {
                type: 'ajax',
                api: {
                    create:  './server/registration/create.php',
                    read:    './server/registration/get.php',
                    update:  './server/registration/update.php',
                    destroy: './server/registration/destroy.php'
                },
                reader: {
                    type: 'json',
                    root: 'registrations',
                    successProperty: 'success'
                },
                writer: {
                    type: 'json',
                    writeAllFields: true,
                    encode: true,
                    root: 'registrations'
                }
            }
        });

    My server-side code is implemented in PHP. I can access the encoded form fields by using the field name as a key, as exemplified below:

        $reg = $_REQUEST['registrations'];
        $data = json_decode(stripslashes($reg));
        $registerNum = $data->registerNum;
        $folioNum = $data->folioNum;

    One of the fields in my form is a fileuploadfield. How can I access the uploaded file from the uploaded JSON? Any assistance will be highly appreciated.

    Read the article

  • How should I implement simple caches with concurrency on Redis?

    - by solublefish
    Background

    I have a 2-tier web service - just my app server and an RDBMS. I want to move to a pool of identical app servers behind a load balancer. I currently cache a bunch of objects in-process, and I hope to move them to a shared Redis. I have a dozen or so caches of simple, small-sized business objects. For example, I have a set of Foos. Each Foo has a unique FooId and an OwnerId. One "owner" may own multiple Foos. In a traditional RDBMS this is just a table with an index on the PK FooId and one on OwnerId. I'm caching this in one process simply:

        Dictionary<int, Foo> _cacheFooById;
        Dictionary<int, HashSet<int>> _indexFooIdsByOwnerId;

    Reads come straight from here, and writes go here and to the RDBMS. I usually have this invariant: "For a given group [say by OwnerId], the whole group is in cache or none of it is." So when I cache-miss on a Foo, I pull that Foo and all the owner's other Foos from the RDBMS. Updates make sure to keep the index up to date and respect the invariant. When an owner calls GetMyFoos I never have to worry that some are cached and some aren't.

    What I did already

    The first/simplest answer seems to be to use plain ol' SET and GET with a composite key and a JSON value:

        SET("ServiceCache:Foo:" + theFoo.Id, JsonSerialize(theFoo));

    I later decided I liked:

        HSET("ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo));

    That lets me get all the values in one cache as HVALS. It also felt right - I'm literally moving hashtables to Redis, so perhaps my top-level items should be hashes. This works to first order. If my high-level code is like:

        UpdateCache(myFoo);
        AddToIndex(myFoo);

    That translates into:

        HSET("ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo));
        var myFoos = JsonDeserialize( HGET("ServiceCache:FooIndex", theFoo.OwnerId) );
        myFoos.Add(theFoo.OwnerId);
        HSET("ServiceCache:FooIndex", theFoo.OwnerId, JsonSerialize(myFoos));

    However, this is broken in two ways. Two concurrent operations can read/modify/write at the same time: the latter "wins" the final HSET and the former's index update is lost. And another operation could read the index in between the first and second lines; it would miss a Foo that it should find.

    So how do I index properly? I think I could use a Redis set instead of a JSON-encoded value for the index. That would solve part of the problem, since the "add-to-index-if-not-already-present" would be atomic. I also read about using MULTI as a "transaction", but it doesn't seem like it does what I want. Am I right that I can't really do MULTI; HGET; {update}; HSET; EXEC, since it doesn't even do the HGET before I issue the EXEC? I also read about using WATCH and MULTI for optimistic concurrency, then retrying on failure. But WATCH only works on top-level keys, so it's back to SET/GET instead of HSET/HGET - and now I need a new index-like thing to support getting all the values in a given cache. If I understand it right, I can combine all these things to do the job. Something like:

        while (!succeeded)
        {
            WATCH("ServiceCache:Foo:" + theFoo.FooId);
            WATCH("ServiceCache:FooIndexByOwner:" + theFoo.OwnerId);
            WATCH("ServiceCache:FooIndexAll");
            MULTI();
            SET("ServiceCache:Foo:" + theFoo.FooId, JsonSerialize(theFoo));
            SADD("ServiceCache:FooIndexByOwner:" + theFoo.OwnerId, theFoo.FooId);
            SADD("ServiceCache:FooIndexAll", theFoo.FooId);
            EXEC();
            // TODO somehow set succeeded properly
        }

    Finally I'd have to translate this pseudocode into real code depending on how my client library uses WATCH/MULTI/EXEC; it looks like they need some sort of context to hook them together. All in all this seems like a lot of complexity for what has to be a very common case; I can't help but think there's a better, smarter, Redis-ish way to do things that I'm just not seeing.

    How do I lock properly?

    Even if I had no indexes, there's still a (probably rare) race condition:

        A: HGET - cache miss
        B: HGET - cache miss
        A: SELECT
        B: SELECT
        A: HSET
        C: HGET - cache hit
        C: UPDATE
        C: HSET
        B: HSET   ** this is stale data that's clobbering C's update

    Note that C could just be a really fast A. Again, I think WATCH, MULTI, retry would work, but... ick. I know in some places people use special Redis keys as locks for other objects. Is that a reasonable approach here? Should those be top-level keys like ServiceCache:FooLocks:{Id} or ServiceCache:Locks:Foo:{Id}? Or should I make a separate hash for them - ServiceCache:Locks with subkeys Foo:{Id}, or ServiceCache:Locks:Foo with subkeys {Id}? How would I work around abandoned locks, say if a transaction (or a whole server) crashes while "holding" the lock?
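
    For the optimistic-concurrency path specifically, a client such as StackExchange.Redis expresses WATCH indirectly: you create a transaction, attach a condition on the watched key, queue the writes, and retry when Execute() returns false. A rough sketch under that assumption; the key names follow the question, JsonSerialize is a placeholder, and redis stands for an existing ConnectionMultiplexer:

        using StackExchange.Redis;

        IDatabase db = redis.GetDatabase();

        bool committed = false;
        while (!committed)
        {
            string fooKey = "ServiceCache:Foo:" + theFoo.FooId;
            RedisValue original = db.StringGet(fooKey);

            var tran = db.CreateTransaction();
            // Equivalent of WATCH: the queued commands run only if the value is unchanged.
            // If the key can be absent, Condition.KeyNotExists(fooKey) would be the matching guard.
            tran.AddCondition(Condition.StringEqual(fooKey, original));

            tran.StringSetAsync(fooKey, JsonSerialize(theFoo));                               // placeholder serializer
            tran.SetAddAsync("ServiceCache:FooIndexByOwner:" + theFoo.OwnerId, theFoo.FooId);
            tran.SetAddAsync("ServiceCache:FooIndexAll", theFoo.FooId);

            committed = tran.Execute();   // false means another writer got in first; loop and retry
        }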

    Read the article

  • Binding multiple objects in Grails

    - by WaZ
    I have three domain classes: Person (Person.ID, Name, Address), Designation (Designation.ID, Title, Band) and SalarySlip (Person.ID, Designation.ID, totalIncome, Tax, etc.). In the update method of the Person controller, when I assign a person a designation from a list of designation values, I want to insert a new record into SalarySlip. Something like:

        def update = {
            def salarySlipInstance = new SalarySlip()
            salarySlipInstance.person.id = params.id  // is this correct?
            salarySlipInstance.designation.id = ??    // since the value is coming from a list, how can I bind this field?
        }

    Much appreciated, thanks, WB

    Read the article

  • Updating label in another form (C#)

    - by cthulhu
    I want to update a label on Form1 from Form2. Here's what I have:

        // Form1
        public string Label1
        {
            get { return this.label1.Text; }
            set { this.label1.Text = value; }
        }

        // Form2
        private void button1_Click(object sender, EventArgs e)
        {
            Form1 frm1 = new Form1();
            frm1.Label1 = this.textBox1.Text;
            this.Close();
        }

    The code above doesn't work. However, when I add frm1.Show(); after this.Close(); in the Form2 code, Form1 is opened again (two windows). But I want it to update in the same window, so I suspect this.Close() is unnecessary.
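
    The likely culprit is that button1_Click creates a brand-new Form1 rather than talking to the one already on screen, so the new instance receives the text and is never shown. One conventional fix, sketched here with an assumed constructor parameter, is to hand Form2 a reference to the existing Form1:

        // Form2: keep a reference to the form that opened it.
        public partial class Form2 : Form
        {
            private readonly Form1 _owner;

            public Form2(Form1 owner)   // assumed constructor; Form1 would call new Form2(this)
            {
                InitializeComponent();
                _owner = owner;
            }

            private void button1_Click(object sender, EventArgs e)
            {
                _owner.Label1 = this.textBox1.Text;  // updates the label on the existing window
                this.Close();                        // closes only Form2
            }
        }

    An event raised by Form2 and handled by Form1 works just as well and keeps the coupling looser; the key point either way is to stop creating a second Form1.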

    Read the article

  • Fastest way to move records from an Oracle database into SQL Server

    - by user347748
    Ok this is the scenario... I have a table in Oracle that acts like a queue... A VB.net program reads the queue and calls a stored proc in SQL Server that processes and then inserts the message into another SQL Server table and then deletes the record from the oracle table. We use a DataReader to read the records from Oracle and then call the stored proc for each of the records. The program seems to be a little slow. The stored procedure itself isn't slow. The SP by itself when called in a loop can process about 2000 records in 20 seconds. But when called from the .Net program, the execution time is about 5 records per second. I have seen that most of the time consumed is in calling the stored procedure and waiting for it to return. Is there a better way of doing this? Here is a snippet of the actual code Function StartDataXfer() As Boolean Dim status As Boolean = False Try SqlConn.Open() OraConn.Open() c.ErrorLog(Now.ToString & "--Going to Get the messages from oracle", 1) If GetMsgsFromOracle() Then c.ErrorLog(Now.ToString & "--Got messages from oracle", 1) If ProcessMessages() Then c.ErrorLog(Now.ToString & "--Finished Processing all messages in the queue", 0) status = True Else c.ErrorLog(Now.ToString & "--Failed to Process all messages in the queue", 0) status = False End If Else status = True End If StartDataXfer = status Catch ex As Exception Finally SqlConn.Close() OraConn.Close() End Try End Function Private Function GetMsgsFromOracle() As Boolean Try OraDataAdapter = New OleDb.OleDbDataAdapter OraDataTable = New System.Data.DataTable OraSelCmd = New OleDb.OleDbCommand GetMsgsFromOracle = False With OraSelCmd .CommandType = CommandType.Text .Connection = OraConn .CommandText = GetMsgSql End With OraDataAdapter.SelectCommand = OraSelCmd OraDataAdapter.Fill(OraDataTable) If OraDataTable.Rows.Count > 0 Then GetMsgsFromOracle = True End If Catch ex As Exception GetMsgsFromOracle = False End Try End Function Private Function ProcessMessages() As Boolean Try ProcessMessages = False PrepareSQLInsert() PrepOraDel() i = 0 Dim Method As Integer Dim OraDataRow As DataRow c.ErrorLog(Now.ToString & "--Going to call message sending procedure", 2) For Each OraDataRow In OraDataTable.Rows With OraDataRow Method = GetMethod(.Item(0)) SQLInsCmd.Parameters("RelLifeTime").Value = c.RelLifetime SQLInsCmd.Parameters("Param1").Value = Nothing SQLInsCmd.Parameters("ID").Value = GenerateTransactionID() ' Nothing SQLInsCmd.Parameters("UID").Value = Nothing SQLInsCmd.Parameters("Param").Value = Nothing SQLInsCmd.Parameters("Credit").Value = 0 SQLInsCmd.ExecuteNonQuery() 'check the return value If SQLInsCmd.Parameters("ReturnValue").Value = 1 And SQLInsCmd.Parameters("OutPutParam").Value = 0 Then 'success 'delete the input record from the source table once it is logged c.ErrorLog(Now.ToString & "--Moved record successfully", 2) OraDataAdapter.DeleteCommand.Parameters("P(0)").Value = OraDataRow.Item(6) OraDataAdapter.DeleteCommand.ExecuteNonQuery() c.ErrorLog(Now.ToString & "--Deleted record successfully", 2) OraDataAdapter.Update(OraDataTable) c.ErrorLog(Now.ToString & "--Committed record successfully", 2) i = i + 1 Else 'failure c.ErrorLog(Now.ToString & "--Failed to exec: " & c.DestIns & "Status: " & SQLInsCmd.Parameters("OutPutParam").Value & " and TrackId: " & SQLInsCmd.Parameters("TrackID").Value.ToString, 0) End If If File.Exists("stop.txt") Then c.ErrorLog(Now.ToString & "--Stop File Found", 1) 'ProcessMessages = True 'Exit Function Exit For End If End With Next OraDataAdapter.Update(OraDataTable) 
c.ErrorLog(Now.ToString & "--Updated Oracle Table", 1) c.ErrorLog(Now.ToString & "--Moved " & i & " records from Oracle to SQL Table", 1) ProcessMessages = True Catch ex As Exception ProcessMessages = False c.ErrorLog(Now.ToString & "--MoveMsgsToSQL: " & ex.Message, 0) Finally OraDataTable.Clear() OraDataTable.Dispose() OraDataAdapter.Dispose() OraDelCmd.Dispose() OraDelCmd = Nothing OraSelCmd = Nothing OraDataTable = Nothing OraDataAdapter = Nothing End Try End Function Public Function GenerateTransactionID() As Int64 Dim SeqNo As Int64 Dim qry As String Dim SqlTransCmd As New OleDb.OleDbCommand qry = " select seqno from StoreSeqNo" SqlTransCmd.CommandType = CommandType.Text SqlTransCmd.Connection = SqlConn SqlTransCmd.CommandText = qry SeqNo = SqlTransCmd.ExecuteScalar If SeqNo > 2147483647 Then qry = "update StoreSeqNo set seqno=1" SqlTransCmd.CommandText = qry SqlTransCmd.ExecuteNonQuery() GenerateTransactionID = 1 Else qry = "update StoreSeqNo set seqno=" & SeqNo + 1 SqlTransCmd.CommandText = qry SqlTransCmd.ExecuteNonQuery() GenerateTransactionID = SeqNo End If End Function Private Function PrepareSQLInsert() As Boolean 'function to prepare the insert statement for the insert into the SQL stmt using 'the sql procedure SMSProcessAndDispatch Try Dim dr As DataRow SQLInsCmd = New OleDb.OleDbCommand With SQLInsCmd .CommandType = CommandType.StoredProcedure .Connection = SqlConn .CommandText = SQLInsProc .Parameters.Add("ReturnValue", OleDb.OleDbType.Integer) .Parameters("ReturnValue").Direction = ParameterDirection.ReturnValue .Parameters.Add("OutPutParam", OleDb.OleDbType.Integer) .Parameters("OutPutParam").Direction = ParameterDirection.Output .Parameters.Add("TrackID", OleDb.OleDbType.VarChar, 70) .Parameters.Add("RelLifeTime", OleDb.OleDbType.TinyInt) .Parameters("RelLifeTime").Direction = ParameterDirection.Input .Parameters.Add("Param1", OleDb.OleDbType.VarChar, 160) .Parameters("Param1").Direction = ParameterDirection.Input .Parameters.Add("TransID", OleDb.OleDbType.VarChar, 70) .Parameters("TransID").Direction = ParameterDirection.Input .Parameters.Add("UID", OleDb.OleDbType.VarChar, 20) .Parameters("UID").Direction = ParameterDirection.Input .Parameters.Add("Param", OleDb.OleDbType.VarChar, 160) .Parameters("Param").Direction = ParameterDirection.Input .Parameters.Add("CheckCredit", OleDb.OleDbType.Integer) .Parameters("CheckCredit").Direction = ParameterDirection.Input .Prepare() End With Catch ex As Exception c.ErrorLog(Now.ToString & "--PrepareSQLInsert: " & ex.Message) End Try End Function Private Function PrepOraDel() As Boolean OraDelCmd = New OleDb.OleDbCommand Try PrepOraDel = False With OraDelCmd .CommandType = CommandType.Text .Connection = OraConn .CommandText = DelSrcSQL .Parameters.Add("P(0)", OleDb.OleDbType.VarChar, 160) 'RowID .Parameters("P(0)").Direction = ParameterDirection.Input .Prepare() End With OraDataAdapter.DeleteCommand = OraDelCmd PrepOraDel = True Catch ex As Exception PrepOraDel = False End Try End Function WHat i would like to know is, if there is anyway to speed up this program? Any ideas/suggestions would be highly appreciated... Regardss, Chetan
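
    Given that the per-row stored-procedure call appears to dominate the time, one commonly used alternative is to pull the Oracle rows with a data reader and push them into a SQL Server staging table in one shot with SqlBulkCopy, then let a single set-based procedure process the staged rows. A minimal sketch, with the connection strings, table and column names as assumptions (and in C# rather than the VB.NET of the question):

        using System.Data.OleDb;
        using System.Data.SqlClient;

        using (var oraConn = new OleDbConnection(oracleConnectionString))
        using (var sqlConn = new SqlConnection(sqlServerConnectionString))
        {
            oraConn.Open();
            sqlConn.Open();

            using (var selectCmd = new OleDbCommand("SELECT * FROM QUEUE_TABLE", oraConn)) // assumed queue table
            using (var reader = selectCmd.ExecuteReader())
            using (var bulkCopy = new SqlBulkCopy(sqlConn))
            {
                bulkCopy.DestinationTableName = "dbo.StagingQueue"; // assumed staging table
                bulkCopy.BatchSize = 1000;
                bulkCopy.WriteToServer(reader);   // streams all rows in bulk, no per-row round trips
            }

            // A single set-based stored procedure can then process dbo.StagingQueue,
            // and the matching rows can be deleted from the Oracle queue afterwards.
        }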

    Read the article

  • Capistrano Error

    - by Casey van den Bergh
    I'm Running CentOS 5 32 bit version. This is my deploy.rb file on my local computer: #======================== #CONFIG #======================== set :application, "aeripets" set :scm, :git set :git_enable_submodules, 1 set :repository, "[email protected]:aeripets.git" set :branch, "master" set :ssh_options, { :forward_agent => true } set :stage, :production set :user, "root" set :use_sudo, false set :runner, "root" set :deploy_to, "/var/www/#{application}" set :app_server, :passenger set :domain, "aeripets.co.za" #======================== #ROLES #======================== role :app, domain role :web, domain role :db, domain, :primary => true #======================== #CUSTOM #======================== namespace :deploy do task :start, :roles => :app do run "touch #{current_release}/tmp/restart.txt" end task :stop, :roles => :app do # Do nothing. end desc "Restart Application" task :restart, :roles => :app do run "touch #{current_release}/tmp/restart.txt" end end And this the error I get on my local computer when I try to cap deploy. executing deploy' * executingdeploy:update' ** transaction: start * executing deploy:update_code' executing locally: "git ls-remote [email protected]:aeripets.git master" command finished in 1297ms * executing "git clone -q [email protected]:aeripets.git /var/www/seripets/releases/20111126013705 && cd /var/www/seripets/releases/20111126013705 && git checkout -q -b deploy 32ac552f57511b3ae9be1d58aec54d81f78f8376 && git submodule -q init && git submodule -q sync && export GIT_RECURSIVE=$([ ! \"git --version\" \\< \"git version 1.6.5\" ] && echo --recursive) && git submodule -q update --init $GIT_RECURSIVE && (echo 32ac552f57511b3ae9be1d58aec54d81f78f8376 > /var/www/seripets/releases/20111126013705/REVISION)" servers: ["aeripets.co.za"] Password: [aeripets.co.za] executing command ** [aeripets.co.za :: err] sh: git: command not found command finished in 224ms *** [deploy:update_code] rolling back * executing "rm -rf /var/www/seripets/releases/20111126013705; true" servers: ["aeripets.co.za"] [aeripets.co.za] executing command command finished in 238ms failed: "sh -c 'git clone -q [email protected]:aeripets.git /var/www/seripets/releases/20111126013705 && cd /var/www/seripets/releases/20111126013705 && git checkout -q -b deploy 32ac552f57511b3ae9be1d58aec54d81f78f8376 && git submodule -q init && git submodule -q sync && export GIT_RECURSIVE=$([ ! \"git --version`\" \< \"git version 1.6.5\" ] && echo --recursive) && git submodule -q update --init $GIT_RECURSIVE && (echo 32ac552f57511b3ae9be1d58aec54d81f78f8376 /var/www/seripets/releases/20111126013705/REVISION)'" on aeripets.co.za

    Read the article

  • What is the proper way to check the previous value of a field before saving an object? (Using Django)

    - by anonymous coward
    I have a Django Model with updated_by and an approved_by fields, both are ForeignKey fields to the built-in (auth) User models. I am aware that with updated_by, it's easy enough to simply over-ride the .save() method on the Model, and shove the request.user in that field before saving. However, for approved_by, this field should only ever be filled in when a related field (date_approved) is first filled in. I'm somewhat certain that I can check this logically, and fill in the field if the previous value was empty. What is the proper way to check the previous value of a field before saving an object? I do not anticipate that date_approved will ever be changed or updated, nor should there be any reason to ever update the approved_by entry. UPDATE: Regarding forms/validation, I should have mentioned that none of the fields in question are seen by or editable by users of the site. If I have misunderstood, I'm sorry, but I'm not sure how forms and validation apply to my question.

    Read the article

  • ios - almost there with updating Core Data

    - by Jeff Kranenburg
    I have been following the answer to this question: How to update existing object in core data? The answer uses this line of code to update a record within the array:

        Favorits* favoritsGrabbed = [results objectAtIndex:0];

    This updates whatever record is at index 0, no matter which cell I select to edit. I cannot figure out how to change it so that it updates the record for the cell I have selected. Starting to grow grey hairs here :-) The problem (maybe it is my mindset) is that objectAtIndex: requires an integer and I cannot work out what to substitute for the hard-coded 0. Any help would be great.

    Read the article

  • where id = multiple artists

    - by pixel
    Any time there is an update within my music community (song comment, artist update, new song added, yadda yadda yadda), a new row is inserted in my "updates" table. The row houses the artist id involved along with other information (what type of change, time and date, etc). My users have a "favorite artists" section where they can do just that -- mark artists as their favorites. As such, I'd like to create a new feature that shows the user the changes made to their various favorite artists. How should I be doing this efficiently? SELECT * FROM table_updates WHERE artist_id = 1 and artist_id = 500 and artist_id = 60032 Keep in mind, a user could have 43,000 of our artists marked as a favorite. Thoughts?

    Read the article

  • Best approach to synchronising properties across threads

    - by user290796
    Hi, I'm looking for some advice on the best approach to synchronising access to properties of an object in C++. The application has an internal cache of objects which have 10 properties. These objects are to be requested in sets which can then have their properties modified and be re-saved. They can be accessed by 2-4 threads at any given time but access is not intense so my options are: Lock the property accessors for each object using a critical section. This means lots of critical sections - one for each object. Return copies of the objects when requested and have an update function which locks a single critical section to update the object properties when appropriate. I think option 2 seems the most efficient but I just want to see if I'm missing a hidden 3rd option which would be more appropriate. Thanks, J

    Read the article

  • Do I have to hash twice in C#?

    - by Joe H
    I have code like the following:

        class MyClass
        {
            string Name;
            int NewInfo;
        }

        List<MyClass> newInfo = ....          // initialize list with some values
        Dictionary<string, int> myDict = .... // initialize dictionary with some values

        foreach (var item in newInfo)
        {
            if (myDict.ContainsKey(item.Name))     // 'A' I hash the first time here
                myDict[item.Name] += item.NewInfo; // 'B' I hash the second (and third?) time here
            else
                myDict.Add(item.Name, item.NewInfo);
        }

    Is there any way to avoid doing two lookups in the Dictionary: one to see if it contains an entry, and a second to update the value? There may even be two hash lookups on line 'B': one to get the int value and another to update it.
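
    A common way to collapse the ContainsKey probe is TryGetValue: it hashes once to fetch the current value (or learn that none exists), and then a single indexer assignment covers both the add and the update cases. A sketch using the names from the question:

        int existing;
        foreach (var item in newInfo)
        {
            // One lookup to read (or miss), one to write; no separate ContainsKey probe.
            myDict.TryGetValue(item.Name, out existing);   // existing is 0 when the key is absent
            myDict[item.Name] = existing + item.NewInfo;
        }

    That is still two hash computations per item (one read, one write), but it removes the extra pass; going below that generally needs something like ConcurrentDictionary.AddOrUpdate or a custom collection.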

    Read the article
