Search Results

Search found 5295 results on 212 pages for 'transaction scope'.

Page 67 of 212

  • Data Transfer Objects VS Domain/ActiveRecord Entities in the View in RoR

    - by leypascua
    I'm coming from a .NET background, where it is standard practice not to bind domain/entity models directly to the view in anything beyond basic CRUD applications, i.e. wherever the view does not simply project entity fields as-is. I'm wondering what the practice is in RoR, where the default persistence mechanism is ActiveRecord. I would assert that presentation-related info should not leak into the entities, though I'm not sure if this is how experienced RoR developers would do it. If DTOs/one model per view is the approach, how would you do it in Rails? Your thoughts?

    EDIT: Some examples:

    - A view shows a list of invoices, with the number of unique items in one column.
    - A list of credit card accounts, where possibly fraudulent transactions were executed. For these, the UI needs to show the row in red.

    For both scenarios, the lists don't show all of the fields of the entities, just a few (like invoice #, transaction date, name of the account, the amount of the transaction). For the invoice example, the invoice entity doesn't have a "No. of line items" field mapped on it; the database has not been denormalized for perf reasons, so the count is computed at query time using aggregate functions. For the credit card example, the card transaction entity surely doesn't have a "Show-in-red" or "IsFraudulent" invariant. Fraud detection may be a business rule, but in this example the red highlight is a presentation concern, so I would like to keep it out of my domain model.

    Read the article

  • In .NET, what is the difference between EventLog and ManagementObject for retrieving logs from a remote machine?

    - by Mitesh Patel
    I have found the following two ways of getting Application event log entries from a remote server.

    1. Using the EventLog object:

        string logType = "Application";
        EventLog ev = new EventLog(logType, "rspl200");
        EventLogEntryCollection evColl = ev.Entries;

    2. Using the ManagementObjectSearcher object:

        ConnectionOptions co = new ConnectionOptions();
        co.Username = "testA";
        co.Password = "testA";
        ManagementScope scope = new ManagementScope(@"\\" + machineName + @"\root\cimv2", co);
        scope.Connect();
        SelectQuery query = new SelectQuery(@"select * from Win32_NtLogEvent");
        EnumerationOptions opt = new EnumerationOptions();
        opt.BlockSize = 1000;
        using (ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query, opt))
        {
            foreach (ManagementObject mo in searcher.Get())
            {
                // write down log entries
                Console.WriteLine(mo["EventCode"]);
            }
        }

    I can easily get the remote event log using method #1 (the EventLog object) without any security access denied exception. But using method #2 (the ManagementObjectSearcher object) I get an access denied exception. Actually I want the remote event log (only the Application log, and only the latest entries, not all of them) to be displayed in a treeview like below:

        - ServerName
          - Logs
            + Error
            + Information
            + Warning

    Can anybody help me find the best way to do this, or any other? Also, the main thing is that the user who reads the remote logs may be in a different domain than the server. Thanks, Mitesh Patel
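
    A possible starting point for the access-denied case in method #2 - a minimal sketch, not a verified fix: WMI connections often need an explicit impersonation and authentication level before a remote query is allowed, and those can be set on ConnectionOptions. The specific property values below are assumptions to experiment with, not settings confirmed by the question:

        using System;
        using System.Management;

        class RemoteEventLogQuery
        {
            static void Main()
            {
                ConnectionOptions co = new ConnectionOptions();
                co.Username = "testA";
                co.Password = "testA";
                co.Impersonation = ImpersonationLevel.Impersonate;     // run the remote query as the supplied user
                co.Authentication = AuthenticationLevel.PacketPrivacy; // sign and encrypt the DCOM traffic
                co.EnablePrivileges = true;                            // needed for the Security log; harmless otherwise

                ManagementScope scope = new ManagementScope(@"\\rspl200\root\cimv2", co);
                scope.Connect();

                // filter server-side to the Application log instead of pulling every event
                SelectQuery query = new SelectQuery(
                    "SELECT * FROM Win32_NTLogEvent WHERE Logfile = 'Application'");

                using (ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query))
                {
                    foreach (ManagementObject mo in searcher.Get())
                    {
                        Console.WriteLine("{0}: {1}", mo["Type"], mo["EventCode"]);
                    }
                }
            }
        }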

    Read the article

  • Magento - Authorize.net - Get Payment Update for expired transactions

    - by pspahn
    Magento 1.6.1. I have set up Authorize.net (AIM) for the client's store. Previously they were using the saved CC method and entering information manually in Authorize.net's merchant terminal. Most of it is working as expected; however, for transactions that are flagged as 'Suspected Fraud' by Authorize.net, if the client does not update the transaction manually before the authorization expires, using 'Get Payment Update' in Magento fails because the transaction is expired (I believe it's five days for an authorize-only transaction). For the client, it seems the only way to update such an order in Magento is to simply delete it, as it doesn't appear the Paygate model knows about expired transactions. Performing 'Get Payment Update' simply returns 'There is no update for this payment'.

    I have already modified the file /app/code/core/Mage/Paygate/Model/Authorize.net to have the correct API URL as described in issue #27117 ( http://www.magentocommerce.com/bug-tracking/issue?issue=12991 - must be logged in to view ). This resolved the button not working for all other orders; however, it does not fix the issue I am describing.

    Is anyone familiar with Authorize.net's AIM API so that we can update these orders in Magento to something that makes sense (canceled, etc.) without having to delete the order? I am thinking it should be a case of adding a new order status to Magento, checking the update for an 'Expired' status, and setting the order to the newly created order status.

    -- edit --

    I just ran a diff for the file mentioned above and noticed that Magento 1.7.0.2 includes the _isTransactionExpired() method, which seems like it would be the fix. Can it be as simple as updating this model with the newer version?

    Read the article

  • Ibatis startBatch() only works with SqlMapClient's own start and commit transactions, not with Spring-managed transactions

    - by Brian
    Hi, I'm finding that even though I have code wrapped by Spring transactions, and it commits/rolls back when I would expect, in order to make use of JDBC batching when using Ibatis and Spring I need to use explicit SqlMapClient transaction methods. I.e. this does batching as I'd expect:

        dao.getSqlMapClient().startTransaction();
        dao.getSqlMapClient().startBatch();
        int i = 0;
        for (MyObject obj : allObjects) {
            dao.storeChange(obj);
            i++;
            if (i % DB_BATCH_SIZE == 0) {
                dao.getSqlMapClient().executeBatch();
                dao.getSqlMapClient().startBatch();
            }
        }
        dao.getSqlMapClient().executeBatch();
        dao.getSqlMapClient().commitTransaction();

    but if I don't have the opening and closing transaction statements, and rely on Spring to manage things (which is what I want to do!), batching just doesn't happen. Given that Spring does otherwise seem to be handling its side of the bargain regarding transaction management, can anyone advise on any known issues here? (Database is MySQL; I'm aware of the issues regarding its JDBC pseudo-batch approach with INSERT statement rewriting, that's definitely not an issue here.)

    Read the article

  • Compilation error while compiling an existing code base

    - by brijesh
    Hi, while building an existing code base on Mac OS using its native build setup, I am getting some strange basic errors during the compilation phase:

        /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/bits/locale_facets.h: In constructor 'std::collate_byname<_CharT>::collate_byname(const char*, size_t)':
        /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/bits/locale_facets.h:1072: error: '_M_c_locale_collate' was not declared in this scope
        /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/ppc-darwin/bits/messages_members.h: In constructor 'std::messages_byname<_CharT>::messages_byname(const char*, size_t)':
        /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/ppc-darwin/bits/messages_members.h:79: error: '_M_c_locale_messages' was not declared in this scope
        /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits: At global scope:
        /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:897: error: 'float __builtin_huge_valf()' cannot appear in a constant-expression
        /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:897: error: a function call cannot appear in a constant-expression
        /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:897: error: 'float __builtin_huge_valf()' cannot appear in a constant-expression
        /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:897: error: a function call cannot appear in a constant-expression
        /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:899: error: 'float __builtin_nanf(const char*)' cannot appear in a constant-expression
        /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:899: error: a function call cannot appear in a constant-expression
        /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:899: error: 'float __builtin_nanf(const char*)' cannot appear in a constant-expression
        /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:899: error: a function call cannot appear in a constant-expression
        /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:900: error: field initializer is not constant
        /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:915: error: field initializer is not constant

    Read the article

  • Can't commit or rollback: MySQL "Commands out of sync" error in .NET

    - by sergiogx
    I'm having trouble with a stored procedure; I can't commit after I execute it. It shows this error:

        [System.Data.Odbc.OdbcException] = {"ERROR [HY000] [MySQL][ODBC 5.1 Driver]Commands out of sync; you can't run this command now"}

    The SP by itself works fine. Does anyone have an idea of what might be happening?

    .NET code:

        [WebMethod()]
        [SoapHeader("sesion")]
        public Boolean aceptarTasaCero(int idMunicipio, double valor)
        {
            Boolean resultado = false;
            OdbcConnection cxn = new OdbcConnection();
            cxn.ConnectionString = ConfigurationManager.ConnectionStrings["mysqlConnection"].ConnectionString;
            cxn.Open();
            OdbcCommand cmd = new OdbcCommand("call aceptarTasaCero(?,?)", cxn);
            cmd.Parameters.Add("idMunicipio", OdbcType.Int).Value = idMunicipio;
            cmd.Parameters.Add("valor", OdbcType.Double).Value = valor;
            cmd.Transaction = cxn.BeginTransaction();
            try
            {
                OdbcDataReader dr = cmd.ExecuteReader();
                if (dr.Read())
                {
                    resultado = Convert.ToBoolean(dr[0]);
                }
                cmd.Transaction.Commit();
            }
            catch (Exception ex)
            {
                ExBi.log(ex, sesion.idUsuario);
                cmd.Transaction.Rollback();
            }
            finally
            {
                cxn.Close();
            }
            return resultado;
        }

    And this is the code for the stored procedure:

        DELIMITER $$
        DROP PROCEDURE IF EXISTS `aceptartasacero` $$
        CREATE DEFINER=`database`@`%` PROCEDURE `aceptartasacero`(pidMun INTEGER, pvalor double)
        BEGIN
            declare vExito BOOLEAN;
            INSERT INTO tasacero(anio, valor, idmunicipios)
            VALUES (YEAR(curdate()), pValor, pidMun);
            set vExito = true;
            select vExito;
        END $$
        DELIMITER ;

    Thanks.
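
    A hedged guess at a fix, based on the usual cause of this error (a new command is sent while the procedure's result set is still open on the connection): fully consume and dispose the reader before issuing the COMMIT. A minimal sketch of the reworked try block, assuming the rest of the method stays as above:

        try
        {
            using (OdbcDataReader dr = cmd.ExecuteReader())
            {
                if (dr.Read())
                {
                    resultado = Convert.ToBoolean(dr[0]);
                }
                while (dr.NextResult()) { }  // drain any extra result sets the CALL returns
            }                                // reader disposed here, so the connection is free again
            cmd.Transaction.Commit();        // commit only after the reader is closed
        }
        catch (Exception ex)
        {
            ExBi.log(ex, sesion.idUsuario);
            cmd.Transaction.Rollback();
        }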

    Read the article

  • Java JSTL / EL: nesting

    - by NoozNooz42
    I really don't know how to name this question; it's great if anyone with 2K+ rep can edit this title to better reflect my question (the fact that I can't name this easily is probably why I can't Google the solution). I've got this, which is working:

        <c:choose>
            <c:when test="${sometest}">
                Hello, world!
            </c:when>
            <c:otherwise>
                <fmt:message key="${page.title}" />
            </c:otherwise>
        </c:choose>

    And I want to change it to this:

        <c:choose>
            <c:when test="${sometest}">
                <c:set var="somevar" scope="page" value="Hello, world!"/>
            </c:when>
            <c:otherwise>
                <c:set var="somevar" scope="page" value="<fmt:message key="${page.title}">" />
            </c:otherwise>
        </c:choose>

    But of course the following line isn't correct:

        <c:set var="somevar" scope="page" value="<fmt:message key="${page.title}">" />

    How can I assign to the somevar variable the string resulting from a <fmt:message> call?

    Read the article

  • Declaration of arrays before "normal" variables in C?

    - by bjarkef
    Hi. We are currently developing an application for an MSP430 MCU, and are running into some weird problems. We discovered that declaring arrays within a scope after the declaration of "normal" variables sometimes causes what seems to be undefined behavior. Like this:

        foo(int a, int *b);

        int main(void)
        {
            int x = 2;
            int arr[5];
            foo(x, arr);
            return 0;
        }

    foo is sometimes passed a pointer as the second variable that does not point to the arr array. We verify this by single-stepping through the program, and see that the value of the arr variable in the main scope is not the same as the value of the b pointer variable in the foo scope. And no, this is not really reproducible; we have just observed this behavior once in a while. Changing the example seems to solve the problem, like this:

        foo(int a, int *b);

        int main(void)
        {
            int arr[5];
            int x = 2;
            foo(x, arr);
            return 0;
        }

    Does anybody have any input or hints as to why we experience this behavior? Or similar experiences? The MSP430 programming guide specifies that code should conform to the ANSI C89 spec, so I was wondering if it says that arrays have to be declared before non-array variables? Any input on this would be appreciated.

    Read the article

  • Instead of trigger in SQL Server - loses SCOPE_IDENTITY?

    - by kastermester
    Hey StackOverflow, I am (once again) having some issues with some SQL. I have a table on which I have created an INSTEAD OF trigger to enforce some business rules (the rules aren't really important). This works as intended. My issue is that now, when inserting data into this table, SCOPE_IDENTITY() returns a NULL value rather than the actual inserted identity. My guess is that this is because it is now out of scope - but then how do I get it in scope? I am using SQL Server 2008. Per request, here's the SQL:

    Insert + scope code:

        INSERT INTO [dbo].[Payment]([DateFrom], [DateTo], [CustomerId], [AdminId])
        VALUES ('2009-01-20', '2009-01-31', 6, 1)

        SELECT SCOPE_IDENTITY()

    Trigger:

        CREATE TRIGGER [dbo].[TR_Payments_Insert]
        ON [dbo].[Payment]
        INSTEAD OF INSERT
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            IF NOT EXISTS(SELECT 1 FROM dbo.Payment p
                          INNER JOIN Inserted i ON p.CustomerId = i.CustomerId
                          WHERE (i.DateFrom >= p.DateFrom AND i.DateFrom <= p.DateTo)
                             OR (i.DateTo >= p.DateFrom AND i.DateTo <= p.DateTo))
            AND NOT EXISTS (SELECT 1 FROM Inserted p
                            INNER JOIN Inserted i ON p.CustomerId = i.CustomerId
                            WHERE (i.DateFrom <> p.DateFrom AND i.DateTo <> p.DateTo)
                              AND ((i.DateFrom >= p.DateFrom AND i.DateFrom <= p.DateTo)
                                OR (i.DateTo >= p.DateFrom AND i.DateTo <= p.DateTo)))
            BEGIN
                INSERT INTO dbo.Payment (DateFrom, DateTo, CustomerId, AdminId)
                SELECT DateFrom, DateTo, CustomerId, AdminId FROM Inserted
            END
            ELSE
            BEGIN
                ROLLBACK TRANSACTION
            END
        END

    The code did work before the creation of this trigger. Also, I am using LINQ to SQL in C#, and as far as I can see I have no way of changing SCOPE_IDENTITY() to @@IDENTITY - is there really no way out of this one?

    Read the article

  • ASP.NET Web Forms, ASP.NET Identity - how to store claims from Facebook, Twitter, etc.

    - by user2959352
    This question is based upon the new Visual Studio 2013 integration of the ASP.NET Identity stack. I have seen some of the posts regarding this question for MVC, but for the life of me cannot get it to work for standard Web Forms. What I'm trying to do is populate the AspNetUserClaims table from the claims that I get back from Facebook (or another service). I can actually see the values coming back in OnAuthenticated below, but cannot for the life of me figure out how to add these claims to the context of the currently logged-in user. There are literally hundreds of MVC examples surrounding this, but alas no Web Forms examples.

    This should be completely straightforward, but for some reason I cannot match up the context of the currently logged-in user with the claims and credentials coming back from Facebook. Currently, after OnAuthenticated fires, it returns me to the page (RegisterExternalLogin.aspx) as the built-in example provides. However, the claims are gone, the context of the Facebook login is gone, and I can't do anything else at this point. So the ultimate question is: HOW does one populate the claims FROM Facebook into the AspNetUserClaims table, based upon the context of the currently logged-in user, WITHOUT using MVC?

        var fboptions = new FacebookAuthenticationOptions();
        fboptions.AppId = "xxxxxxxxxxxxxxxxxxx";
        fboptions.AppSecret = "yyyyyyyyyyyyyyyyyyyyyy";
        fboptions.Scope.Add("email");
        fboptions.Scope.Add("friends_about_me");
        fboptions.Scope.Add("friends_photos");
        fboptions.Provider = new FacebookAuthenticationProvider()
        {
            OnAuthenticated = (context) =>
            {
                foreach (var v in context.User)
                {
                    context.Identity.AddClaim(new System.Security.Claims.Claim(v.Key, v.Value.ToString()));
                }
                context.Identity.AddClaim(new System.Security.Claims.Claim("FacebookAccessToken", context.AccessToken));
                return Task.FromResult(0);
            },
        };
        app.UseFacebookAuthentication(fboptions);
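
    One hedged sketch of a possible approach, under these assumptions: the default Web Forms template's RegisterExternalLogin code-behind, and a UserManager plus the new user's id already in scope after the user is created and the external login is added. The claims added in OnAuthenticated travel on the external cookie, so they can be re-read from that identity and copied into the store (the helper name and surrounding types are illustrative, not from the question):

        // hypothetical helper inside RegisterExternalLogin.aspx.cs
        // needs: using System.Web; using System.Threading.Tasks;
        //        using Microsoft.AspNet.Identity; using Microsoft.AspNet.Identity.EntityFramework;
        private async Task StoreExternalClaims(UserManager<ApplicationUser> manager, string userId)
        {
            var authManager = Context.GetOwinContext().Authentication;

            // the identity built in OnAuthenticated is carried by the external cookie
            var result = await authManager.AuthenticateAsync(DefaultAuthenticationTypes.ExternalCookie);
            if (result == null || result.Identity == null)
                return;

            foreach (var claim in result.Identity.Claims)
            {
                // skip the identifier claim that Identity already tracks via AspNetUserLogins
                if (claim.Type == System.Security.Claims.ClaimTypes.NameIdentifier)
                    continue;

                await manager.AddClaimAsync(userId, claim); // persists a row in AspNetUserClaims
            }
        }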

    Read the article

  • Take values from table cells and turn them into an array

    - by liz
    Using jQuery, I need to retrieve an array from table cells, format the data, and pass it into a JS function. The code I am using is this:

        var l1 = new Array();
        $('table#datatable tbody td:first-child').each(function() {
            l1.push($(this).text());
        });

    This is the table fragment:

        <tr>
            <th scope="row">Age: 0-4</th>
            <td>0</td>
            <td>9.7</td>
        </tr>
        <tr>
            <th scope="row">5-17</th>
            <td>23.6</td>
            <td>18.0</td>
        </tr>
        <tr>
            <th scope="row">Total 0-17</th>
            <td>20.6</td>
            <td>16.1</td>
        </tr>

    The table's id is "datatable". I want to return an array of the contents of each first td and then format it like this: 0,23.6,20.6. I am very new to using arrays...

    Read the article

  • Can one Oracle schema support a large number of requests per day - is this safe?

    - by Hlex
    I'm a Java system designer. We have several large projects to deliver on a tight schedule, all of them Java APIs without web pages. My design is to create a general flow engine to support all projects. This idea uses one Oracle schema with a general transaction table, plus other tables that control routing. It is all nearly complete.

    But the DBA team is concerned about the burden of maintaining a very large request volume against one schema. One reason: if there is a problem in some table, the tablespace must be taken offline to fix it. This is a problem because all projects would be affected. I am trying to convince them by splitting the data of each table into partitions by project_code and "month number to delete". Example partitions:

        PROJ1_05
        PROJ1_06
        PROJ1_07
        PROJ2_05
        PROJ2_06
        PROJ2_07

    All transaction data would be stored in its own partition. So, if there is a problem in any part of the tablespace, the DBA can take some partitions offline while other projects using the same table remain in service. Volume should be around 10M records per day.

    Is this a good idea? If I must use one schema, what strategy should I follow? Do you have any comments?

    Read the article

  • CommandBuilder and SqlTransaction to insert/update a row

    - by Jesse
    I can get this to work, but I feel as though I'm not doing it properly. The first time this runs, it works as intended, and a new row is inserted where "thisField" contains "doesntExist". However, if I run it a subsequent time, I get a run-time error that I can't insert a duplicate key as it violates the primary key "thisField".

        static void Main(string[] args)
        {
            using (var sqlConn = new SqlConnection(connString))
            {
                sqlConn.Open();
                var dt = new DataTable();
                var sqlda = new SqlDataAdapter("SELECT * FROM table WHERE thisField = 'doesntExist'", sqlConn);
                sqlda.Fill(dt);

                DataRow dr = dt.NewRow();
                dr["thisField"] = "doesntExist"; // Primary key
                dt.Rows.Add(dr);
                //dt.AcceptChanges(); // I thought this may fix the problem. It didn't.

                var sqlTrans = sqlConn.BeginTransaction();
                try
                {
                    sqlda.SelectCommand = new SqlCommand("SELECT * FROM table WITH (HOLDLOCK, ROWLOCK) WHERE thisField = 'doesntExist'", sqlConn, sqlTrans);
                    SqlCommandBuilder sqlCb = new SqlCommandBuilder(sqlda);
                    sqlda.InsertCommand = sqlCb.GetInsertCommand();
                    sqlda.InsertCommand.Transaction = sqlTrans;
                    sqlda.DeleteCommand = sqlCb.GetDeleteCommand();
                    sqlda.DeleteCommand.Transaction = sqlTrans;
                    sqlda.UpdateCommand = sqlCb.GetUpdateCommand();
                    sqlda.UpdateCommand.Transaction = sqlTrans;
                    sqlda.Update(dt);
                    sqlTrans.Commit();
                }
                catch (Exception)
                {
                    //...
                }
            }
        }

    Even when I can get that working through trial and error of moving AcceptChanges around, or encapsulating changes within BeginEdit/EndEdit, I then begin to experience a "concurrency violation" in which it won't update the changes, but rather tells me it failed to update 0 of 1 affected rows. Is there something crazy obvious I'm missing?
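
    A minimal sketch of one way around the duplicate-key error, assuming "thisField" really is the primary key: add a new row only when the initial Fill found nothing, and otherwise modify the row that came back, so that Update() generates an INSERT or an UPDATE matching the row's actual state ("someOtherField" below is a hypothetical column, not one from the question):

        if (dt.Rows.Count == 0)
        {
            DataRow dr = dt.NewRow();
            dr["thisField"] = "doesntExist";
            dt.Rows.Add(dr);                           // RowState is Added -> Update() issues an INSERT
        }
        else
        {
            dt.Rows[0]["someOtherField"] = "newValue"; // RowState becomes Modified -> Update() issues an UPDATE
        }
        sqlda.Update(dt);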

    Read the article

  • SQL2008 merge replication fails to update dependent items when a table is added

    - by Dan Puzey
    Setup: an existing SQL2008 merge replication scenario. A large server database, including views and stored procs, being replicated to client machines.

    What I'm doing:

    * adding a new table to the database
    * marking the new table for replication (using SP_AddMergeArticle)
    * altering a view (which is already part of the replicated content) to include fields from this new table (which is joined to the tables in the existing view). A stored procedure is similarly updated.

    The problem: the table gets replicated to client machines, but the view is not updated. The stored procedure is also not updated.

    Non-useful workaround: if I run the snapshot agent after calling SP_AddMergeArticle and before updating the view/SP, both the view and the stored procedure changes correctly replicate to the client.

    The bigger problem: I'm running a list of database scripts in a transaction, as part of a larger process. The snapshot agent can't be run during a transaction, and if I interrupt the transaction (e.g. by running the scripts in multiple transactions), I lose the ability to roll back the changes should something fail.

    Does anyone have any suggestions? It seems like I must be missing something obvious, because I don't see why the changes to the view/sproc wouldn't be replicating anyway, regardless of what's going on with the new table.

    Read the article

  • Transactions and delete using Fluent NHibernate

    - by Will I Am
    I am starting to play with (Fluent) NHibernate and I am wondering if someone can help with the following. I'm sure it's a total noob question. I want to do:

        delete from TABX where name = 'abc'

    where table TABX is defined as:

        ID   int
        name varchar(32)
        ...

    I built the code based on internet samples:

        using (ITransaction transaction = session.BeginTransaction())
        {
            IQuery query = session.CreateQuery("FROM TABX WHERE name = :uid")
                .SetString("uid", "abc");
            session.Delete(query.List<Person>()[0]);
            transaction.Commit();
        }

    but alas, it's generating two queries (one SELECT and one DELETE). I want to do this in a single statement, as in my original SQL. What is the correct way of doing this?

    Also, I noticed that in most samples on the internet, people tend to always wrap all queries in transactions. Why is that? If I'm only running a single statement, that seems overkill. Do people tend to just mindlessly cut and paste, or is there a reason beyond that? For example, in my query above, if I do manage to get it from two queries down to one, I should be able to remove the begin/commit transaction, no?

    If it matters, I'm using PostgreSQL for experimenting.
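
    A hedged sketch of the single-statement route: DML-style HQL, which NHibernate (2.1 and later, as far as I recall) turns into one DELETE without loading the entities first. "TABX" is assumed here to be the mapped entity name:

        using (ITransaction transaction = session.BeginTransaction())
        {
            session.CreateQuery("delete from TABX where name = :uid")
                   .SetString("uid", "abc")
                   .ExecuteUpdate();  // executes a single DELETE statement
            transaction.Commit();
        }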

    Read the article

  • Can't wrap my head around appengine data store persistence

    - by aloo
    Hi, I've run into the "can't operate on multiple entity groups in a single transaction" problem when using App Engine for Java with JDO, with the following code:

        PersistenceManager pm = PMF.get().getPersistenceManager();
        Query q = pm.newQuery("SELECT this FROM " + TypeA.class.getName()
            + " WHERE userId == userIdParam ");
        q.declareParameters("String userIdParam");
        List<TypeA> allTypeAs = (List<TypeA>) q.execute(userIdParam);
        for (TypeA a : allTypeAs) {
            a.setSomeField(someValue);
        }
        pm.close();

    The problem, it seems, is that I can't operate on multiple entities at the same time because they aren't in the same entity group while in a transaction. Even though it doesn't seem like I'm in a transaction, App Engine generates one because I have the following set in my jdoconfig.xml:

        <property name="datanucleus.appengine.autoCreateDatastoreTxns" value="true"/>

    Fine. So far I think I understand. BUT - if I replace TypeA in the above code with TypeB, I don't get the error. I don't believe there is anything different between TypeA and TypeB - they both have the same key structure. They do have different fields, but that shouldn't matter, right?

    My question is: what could possibly be different between TypeA and TypeB that they give this different behavior? And consequently, what do I fundamentally misunderstand that this behavior could even exist? Thanks.

    Read the article

  • Network Authentication when running exe from WMI

    - by Andy
    Hi, I have a C# exe that needs to be run using WMI and access a network share. However, when I access the share I get an UnauthorizedAccessException. If I run the exe directly, the share is accessible. I am using the same user account in both cases.

    There are two parts to my application: a GUI client that runs on a local PC and a backend process that runs on a remote PC. When the client needs to connect to the backend, it first launches the remote process using WMI (code reproduced below). The remote process does a number of things, including accessing a network share using Directory.GetDirectories(), and reports back to the client.

    When the remote process is launched automatically by the client using WMI, it cannot access the network share. However, if I connect to the remote machine using Remote Desktop and manually launch the backend process, access to the network share succeeds. The user specified in the WMI call and the user logged in for the Remote Desktop session are the same, so the permissions should be the same, shouldn't they?

    I see in the MSDN entry for Directory.Exists() it states: "The Exists method does not perform network authentication. If you query an existing network share without being pre-authenticated, the Exists method will return false." I assume this is related? How can I ensure the user is authenticated correctly in a WMI session?

        ConnectionOptions opts = new ConnectionOptions();
        opts.Username = username;
        opts.Password = password;
        ManagementPath path = new ManagementPath(string.Format("\\\\{0}\\root\\cimv2:Win32_Process", remoteHost));
        ManagementScope scope = new ManagementScope(path, opts);
        scope.Connect();
        ObjectGetOptions getOpts = new ObjectGetOptions();
        using (ManagementClass mngClass = new ManagementClass(scope, path, getOpts))
        {
            ManagementBaseObject inParams = mngClass.GetMethodParameters("Create");
            inParams["CommandLine"] = commandLine;
            ManagementBaseObject outParams = mngClass.InvokeMethod("Create", inParams, null);
        }
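
    A hedged sketch of one common workaround, assuming this is the classic "double hop" situation: a process started via Win32_Process.Create gets a token that cannot forward credentials to a second machine, so the backend re-authenticates explicitly before touching the share. The LogonUser/impersonation pattern below is a standard technique, not something confirmed by the question; since the reading user may be in a different domain than the server, LOGON32_LOGON_NEW_CREDENTIALS is the variant that keeps local execution under the original account while outbound calls use the supplied one:

        using System;
        using System.IO;
        using System.Runtime.InteropServices;
        using System.Security.Principal;

        static class ShareAccess
        {
            [DllImport("advapi32.dll", SetLastError = true)]
            static extern bool LogonUser(string user, string domain, string password,
                                         int logonType, int logonProvider, out IntPtr token);

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool CloseHandle(IntPtr handle);

            const int LOGON32_LOGON_NEW_CREDENTIALS = 9; // apply credentials to outbound connections only
            const int LOGON32_PROVIDER_WINNT50 = 3;

            public static string[] GetDirectoriesAs(string path, string user, string domain, string password)
            {
                IntPtr token;
                if (!LogonUser(user, domain, password,
                               LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
                    throw new System.ComponentModel.Win32Exception(); // surfaces the Win32 logon error

                try
                {
                    using (WindowsImpersonationContext ctx = WindowsIdentity.Impersonate(token))
                    {
                        return Directory.GetDirectories(path); // network access now uses the supplied credentials
                    }
                }
                finally
                {
                    CloseHandle(token);
                }
            }
        }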

    Read the article

  • Monotouch threads, GC, WCF

    - by cvista
    Hi. This is a question about best practices, I guess, but it applies directly to my current MonoTouch project. I'm using WCF services to communicate with the server. To do this I call:

        services.MethodToCall(params);

    and for the async callback:

        services.OnMethodToCallCompleted += delegate {
            // do stuff and ting
        };

    This can lead to issues if you're not careful, in that variables defined within the scope of the async callback can sometimes be cleaned up by the GC, and this can cause crashes. So I am making it a practice to declare these outside of the scope of the callback unless I am 100% sure they are not needed.

    Now - when "doing stuff and ting" implies changing the UI, I wrap it all in an InvokeOnMainThread call. I guess wrapping everything in this would slow the main thread down and defeat the point of having multiple threads. Even though I'm being careful about all this, I am still getting crashes and I have no idea why! I am certain it has something to do with threads, scope and all that.

    Now - the only thing I can think of outside of updating the UI that may need to happen inside of InvokeOnMainThread is that I have a singleton Database class. This is based on the version 5 code from this thread: http://www.yoda.arachsys.com/csharp/singleton.html

    So now, if the service method returns data that needs to be added/updated in the Database class, I also wrap this inside an InvokeOnMainThread call. Still getting random crashes.

    So... my question is this: I am new to thick client dev - I'm coming from a web dev perspective, where we don't need to worry about threads so much :) Aside from what I have mentioned, are there any other things I should be aware of? Is the above stuff correct? Or am I misunderstanding something?

    Cheers w://
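
    A minimal sketch of the pattern usually recommended here, assuming a generated proxy called ServiceClient (ServiceClient, MethodToCall, and the controller name are all placeholder names built from the snippet above): keep the proxy and its handler reachable from an instance field, so neither can be collected while the call is in flight, and touch UIKit only inside InvokeOnMainThread:

        using System;
        using MonoTouch.UIKit;

        public class TransfersViewController : UIViewController
        {
            ServiceClient services; // instance field, not a local - stays alive as long as the controller does

            void LoadData()
            {
                services = new ServiceClient();
                services.OnMethodToCallCompleted += HandleCompleted; // named method with the same lifetime as the field
                services.MethodToCall("params");
            }

            void HandleCompleted(object sender, EventArgs e) // adjust the args type to the generated proxy's event
            {
                InvokeOnMainThread(() => {
                    // update the UI here; everything outside this lambda stays off the main thread
                });
            }
        }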

    Read the article

  • SQL: Join multiple tables and get a grouped sum

    - by Scienceprodigy
    I have a database with 3 tables that have related data. One table has transactions, and the other two relate to transaction categories. Basically it's financial data, so each transaction has a category (i.e. "gasoline" for a gas purchase transaction). A short version of my Transactions table looks like this:

        Transactions Table:
        ________________________________
        | ID | Type | Amount | Category |
        --------------------------------

    I also have two more tables relating a category to a category's parent. Basically, every Category entry in the Transactions table belongs to a parent category (i.e. "gasoline" would belong to, say, "Automotive Expenses"). For categories and their parents, I have two tables:

        Category Children:
        ____________________________________________
        | ID | Parent Category ID | Child Category |
        --------------------------------------------

        Category Parent:
        ________________________
        | ID | Parent Category |
        ------------------------

    What I'm trying to do is query the database and have it return total spending by parent category. To count as "spending", the Type of a transaction must be "Debit". I tried the following statement:

        SELECT category_parents.parent_category, SUM(amount) AS totals
        FROM (transactions
        INNER JOIN category_children
            ON transactions.category = 'category_children.child_category')
        INNER JOIN category_parents
            ON category_children.parent_category_id = category_parents._id
        WHERE trans_type = 'Debit'
        GROUP BY parent_category
        ORDER BY totals DESC

    but it gives me the following exception:

        12-31 13:51:21.515: ERROR/Exception on query(4403): android.database.sqlite.SQLiteException:
        no such column: category_children.parent_category_id: , while compiling: SELECT
        category_parents.parent_category, SUM(amount) AS totals FROM (transactions INNER JOIN
        category_children ON transactions.category='category_children.child_category') INNER JOIN
        category_parents ON category_children.parent_category_id=category_parents._id where
        trans_type='Debit' group by parent_category order by totals desc

    Any help is appreciated. (EXTRA CREDIT: I also need to make another statement to do spending by child category, given the parent category.)

    Read the article

  • Maven compile plugin

    - by phanikiran
    Hi everybody. In the pom of a project, I have added a dependency with scope compile, which is a jar file that contains some class files and some jars as well. My current java file needs the internal jars of the dependent jar to compile, but the maven compile goal returns a compilation error. All the jars needed to compile are in the single jar file which is added as a dependency.

    Please help me! My pom:

        <dependency>
            <groupId>eagle</groupId>
            <artifactId>zkui</artifactId>
            <version>360LTS</version>
            <type>jar</type>
            <scope>compile</scope>
        </dependency>

        <build>
            <sourceDirectory>./src/main/java/</sourceDirectory>
            <outputDirectory>./target/classes/</outputDirectory>
            <finalName>${project.groupId}-${project.artifactId}</finalName>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <version>2.2</version>
                    <configuration>
                        <source>1.6</source>
                        <target>1.6</target>
                    </configuration>
                </plugin>
            </plugins>
        </build>

    The error:

        package org.zkoss.zk.ui does not exist

    This package org.zkoss.zk.ui is in the jar file zkex.jar, which is inside the dependency jar eagle:zkui:360LTS. Please help me! Thanks in advance.

    Read the article

  • How to bind an ADF Table on button click

    - by Juan Manuel Formoso
    Coming from ASP.NET, I'm having a hard time with basic ADF concepts. I need to bind a table on a button click, and for some reason I don't understand (I'm leaning towards the page life cycle, which I guess is different from ASP.NET), it's not working. This is my ADF code:

        <af:commandButton text="#{viewcontrollerBundle.CMD_SEARCH}" id="cmdSearch"
                          action="#{backingBeanScope.indexBean.cmdSearch_click}"
                          partialSubmit="true"/>
        <af:table var="row" rowBandingInterval="0" id="t1"
                  value="#{backingBeanScope.indexBean.transactionList}"
                  partialTriggers="::cmdSearch"
                  binding="#{backingBeanScope.indexBean.table}">
            <af:column sortable="false" headerText="idTransaction" id="c2">
                <af:outputText value="#{row.idTransaction}" id="ot4"/>
            </af:column>
            <af:column sortable="false" headerText="referenceCode" id="c5">
                <af:outputText value="#{row.referenceCode}" id="ot7"/>
            </af:column>
        </af:table>

    This is cmdSearch_click:

        public String cmdSearch_click() {
            List l = new ArrayList();
            Transaction t = new Transaction();
            t.setIdTransaction(BigDecimal.valueOf(1));
            t.setReferenceCode("AAA");
            l.add(t);
            t = new Transaction();
            t.setIdTransaction(BigDecimal.valueOf(2));
            t.setReferenceCode("BBB");
            l.add(t);
            setTransactionList(l);
            // AdfFacesContext.getCurrentInstance().addPartialTarget(table);
            return null;
        }

    The commented line also doesn't work. If I populate the list in my bean's constructor, the table renders OK. Any ideas?

    Read the article

  • JTA or LOCAL transactions in JPA2+Hibernate 3.6.0?

    - by Pangea
    We are in the process of rethinking our tech stack, and below are our choices (we can't live without Spring and Hibernate due to the complexity of the app). We are also moving from J2EE 1.4 to JEE 5.

    Tech stack:

    - JEE 5
    - JPA 2.0 (I know JEE 5 only supports JPA 1.0, but we want to use Hibernate as the JPA provider)
    - Hibernate 3.6.0 (we already have lots of hbm files with custom types etc., so we don't want to migrate them to JPA at this time. This means we want both JPA and hbm mappings to work together, hence Hibernate as the JPA provider instead of the default that comes with the app server)

    Now the problem is that I want to stick with local transactions, but other team members want to use JTA. I have been working with J2EE for the last 9 years, and I've heard time and again that you should stick with local transactions if you don't need two-phase commits. This is not only for performance reasons; debugging/troubleshooting a local transaction is a lot easier than a distributed transaction.

    My suggestion is to use Spring declarative transaction management + local transactions (HibernateTransactionManager).

    I want to make sure whether I am being paranoid or have a valid point. I'd like to hear what the rest of the JEE world thinks. Thank you.

    Read the article

  • Getting a broken link error while using App Engine service accounts

    - by jade
    I'm following this tutorial: https://developers.google.com/bigquery/docs/authorization#service-accounts-appengine

    Here is my main.py code:

        import httplib2

        from apiclient.discovery import build
        from google.appengine.ext import webapp
        from google.appengine.ext.webapp.util import run_wsgi_app
        from oauth2client.appengine import AppAssertionCredentials

        # BigQuery API Settings
        SCOPE = 'https://www.googleapis.com/auth/bigquery'
        PROJECT_NUMBER = 'XXXXXXXXXX'  # REPLACE WITH YOUR Project ID

        # Create a new API service for interacting with BigQuery
        credentials = AppAssertionCredentials(scope=SCOPE)
        http = credentials.authorize(httplib2.Http())
        bigquery_service = build('bigquery', 'v2', http=http)

        class ListDatasets(webapp.RequestHandler):
            def get(self):
                datasets = bigquery_service.datasets()
                listReply = datasets.list(projectId=PROJECT_NUMBER).execute()
                self.response.out.write('Dataset list:')
                self.response.out.write(listReply)

        application = webapp.WSGIApplication(
            [('/listdatasets(.*)', ListDatasets)], debug=True)

        def main():
            run_wsgi_app(application)

        if __name__ == "__main__":
            main()

    Here is my app.yaml file:

        application: bigquerymashup
        version: 1
        runtime: python
        api_version: 1

        handlers:
        - url: /favicon\.ico
          static_files: favicon.ico
          upload: favicon\.ico
        - url: .*
          script: main.py

    And yes, I have added the App Engine service account name in the Google API Console Team tab with "can edit" permissions. Earlier I ran this locally and tried to access it at localhost:8080; then I thought running locally might be causing the error, so I uploaded my code to http://bigquerymashup.appspot.com/ but it still fails. When I try to access the link it says "Oops! This link appears to be broken."

    Read the article

  • Need help fixing unique key in Rails - Rails is adding id, causing duplicate key

    - by railsnew
    I need some help in fixing the issue below. I had transaction blocks in my Rails code like this:

        @sqlcontact = "INSERT INTO contacts (id, \"cid\", \"hphone\", mphone, provider, cemail, email, sms, mail, phone) VALUES ('" + @id1 + "','" + @id1 + "', '" + params[:hphone] + "', '" + params[:mphone] + "', '" + params[:provider] + "', '" + params[:cemail] + "', '" + @varemail + "', '" + @varsms + "', '" + @varmail + "', '" + @varphone + "')"

    My app was deployed to Heroku, so I was advised by them to remove the transaction blocks. So I changed the above to:

        @cont = Contact.new(:id => @id1, :cid => @id1, :hphone => params[:hphone],
                            :mphone => params[:mphone], :provider => params[:provider],
                            :cemail => params[:cemail], :email => @varemail, :sms => @varsms,
                            :mail => @varmail, :phone => @varphone)
        @cont.save

    My app also already had data stored. Now the problem is that when I try to save a record, I keep getting the error:

        duplicate key value violates unique constraint "contacts_pkey"

    The error also shows the SQL query trying to insert the data; however, in that SQL query I do not see the id value. As you can see from my code, I am passing the id - then why is Rails not accepting it? Does it always include its own sequential id? Can I not overwrite the default Rails magic? And if it does that, does it not look at the data that is already in the DB?? I am really stuck here. What should I do? Should I just go back to my transaction block?

    Read the article

  • How can the Three-Phase Commit Protocol (3PC) guarantee atomicity?

    - by AndiDog
    I'm currently exploring worst-case scenarios of atomic commit protocols like 2PC and 3PC, and am stuck at the point that I can't find out why 3PC can guarantee atomicity. That is, how does it guarantee that if cohort A commits, cohort B also commits? Here's the simplified 3PC from the Wikipedia article. Now let's assume the following case:

    - Two cohorts participate in the transaction (A and B)
    - Both do their work, then vote for commit
    - Coordinator now sends precommit messages...
    - A receives the precommit message, acknowledges, and then goes offline for a long time
    - B doesn't receive the precommit message (whatever the reason might be) and is thus still in the "uncertain" state

    The results:

    - Coordinator aborts the transaction because not all precommit messages were sent and acknowledged successfully
    - A, who is in the precommit state, is still offline, thus times out and commits
    - B aborts in any case: it either stays offline and times out (causing an abort) or comes online and receives the abort command from the coordinator

    And there you have it: one cohort committed, another aborted. The transaction is screwed. So what am I missing here? In my understanding, if the automatic commit on timeout (in the precommit state) were replaced by infinitely waiting for a coordinator command, that case should work fine.

    Read the article
