Search Results

Search found 5233 results on 210 pages for 'a records'.


  • Silverlight 4, Out of browser, Printing, Automatic updates

    - by minal
    I have a very critical business application presently running using WinForms. The application is a very thin UI shell: it accepts input data, calls a web service on my server to do the computation, displays the results on the WinForms app and finally sends a print stream to the printer. Presently the application is deployed using ClickOnce. Moving forward, I am trying to decide whether I should move the application to Silverlight. A couple of reasons I am considering Silverlight:
    - It gives clients the feel that it is a cloud based solution and can be accessed from any PC. While the ClickOnce app is able to do this as well, they have to install an app, and when updates are available they have to click "Yes" to update.
    - The application presently has a drop down list of customers; this list has expanded to over 3000 records and scrolling through it is very painful. With Silverlight I am thinking of the auto complete ability.
    - Out of browser support will be handy for those users who use the app daily.
    I haven't used Silverlight previously, hence I am looking for some expert advice on a few things:
    Printing - does Silverlight allow sending raw print data to the printer? The application prints to a Zebra thermal label printer, so I have to send raw bytes with the printer commands. Can this be done with SL, or will it always prompt the "Print" dialog?
    Out of browser - when SL apps are installed out of browser, how do updates come through? Does the app update automatically, or is the user prompted to opt in to the update?
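
    For the update question, a Silverlight out-of-browser app does not update silently on its own; the application has to ask for updates itself. Below is a minimal sketch of the usual Silverlight 4 pattern (CheckAndDownloadUpdateAsync and its completed event are the actual Silverlight API; the startup wiring and the restart message are only illustrative):

        // In App.xaml.cs - check for a newer XAP each time the OOB app starts.
        private void Application_Startup(object sender, StartupEventArgs e)
        {
            if (Application.Current.IsRunningOutOfBrowser)
            {
                Application.Current.CheckAndDownloadUpdateCompleted += OnUpdateCompleted;
                Application.Current.CheckAndDownloadUpdateAsync();
            }
            this.RootVisual = new MainPage();
        }

        private void OnUpdateCompleted(object sender, CheckAndDownloadUpdateCompletedEventArgs e)
        {
            if (e.UpdateAvailable)
            {
                // The new version downloads in the background but only runs after a restart.
                MessageBox.Show("An update has been downloaded. Please restart the application.");
            }
        }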

    Read the article

  • GridView empty when SelectedIndexChanged is called

    - by xan
    I have a GridView which is being bound dynamically to a database query. The user enters some search text into a text field, clicks search, and the code behind creates the appropriate database query using LINQ (searches a table based on the string and returns a limited set of the columns). It then sets the GridView data source to the query and calls DataBind().

        protected void btnSearch_Click(object sender, EventArgs e)
        {
            var query = from record in DB.Table
                        where record.Name.Contains(txtSearch.Text) // Extra string checking etc. removed.
                        select new { record.ID, record.Name, record.Date };
            gvResults.DataSource = query;
            gvResults.DataBind();
        }

    This works fine. When a user selects a row in the grid, the SelectedIndexChanged event handler gets the id from the row in the grid (one of the fields), queries the full record from the DB and then populates a set of editor / details fields with the record's full details.

        protected void gvResults_SelectedIndexChanged(object sender, EventArgs e)
        {
            int id = int.Parse(gvResults.SelectedRow.Cells[1].Text);
            DisplayDetails(id);
        }

    This works fine on my local machine where I'm developing the code. On the production server, however, the function is called successfully, but the row and column count on gvResults, the GridView, is 0 - the table is empty. The GridView's viewstate is enabled and I can't see obvious differences. Have I made some naive assumptions, or am I relying on something that is likely to be configured differently in debug? Locally I am running an empty ASP.NET web project in VS2008 to make development quicker. The production server is running the Sitecore CMS so is configured rather differently. Any thoughts or suggestions would be most welcome. Thanks in advance!
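
    A common way to make the selected id independent of whatever happens to be re-bound on postback is to carry it in the grid's data keys rather than reading it out of a cell. A short sketch reusing the control names from the question (only the relevant markup attributes are shown):

        <%-- Store the ID with each row so it is kept in viewstate across postbacks. --%>
        <asp:GridView ID="gvResults" runat="server" DataKeyNames="ID"
                      OnSelectedIndexChanged="gvResults_SelectedIndexChanged" ... />

        protected void gvResults_SelectedIndexChanged(object sender, EventArgs e)
        {
            // SelectedDataKey is read from viewstate, so it still works when the
            // grid has not been re-bound on this postback.
            int id = (int)gvResults.SelectedDataKey.Value;
            DisplayDetails(id);
        }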

    Read the article

  • Google App Engine on Google Apps Domain

    - by Bob Ralian
    I'm having trouble getting my domain pointed to my website hosted with Google App Engine. Here's the background... take care to separate the concepts of "Google Apps" (domain hosting, email, etc.) and "Google App Engine" (website framework).

    I have a domain that's using Google Apps for Your Domain, let's call it company.com. So my login for my Google Apps account is [email protected]. I have a different domain that is aliased back to my Google Apps account, let's call it mycompany.com. It's been successfully aliased and registered with my primary Google Apps account using the CNAME method, and has updated MX records. We have a ton of domains, and I only want to use one "Google Apps" account to maintain them all.

    Now I have a website I've built using Google App Engine, and the URL is effectively mycompany.appspot.com. I want to get mycompany.com to point to my website that currently resides at mycompany.appspot.com. There's a spot in the Google App Engine dashboard under application settings where you can add a domain. So I click there, enter mycompany.com, and I get an error message saying that domain is not using Google Apps. If I back up to the page I submitted, there's a note saying I need to register the domain with Google Apps. So I click the link to do that, enter mycompany.com, and I get an error message saying the domain has been registered and is in the process of ownership verification. But that process is already finished.

    So... what do I do? Does Google App Engine not support a domain that is only aliased to a primary Google Apps account? Does mycompany.com need to have its own primary Google Apps account?

    Read the article

  • Increase Query Speed in PostgreSQL

    - by Anthoni Gardner
    Hello, first time posting here, but an avid reader. I am experiencing slow query times on my database (all tested locally thus far) and am not sure how to go about fixing it. The database itself has 44 tables and some of those tables have over 1 million records (mainly the movies, actresses and actors tables). The tables are built via JMDB using the flat files from IMDB. The SQL query that I am about to show is from that program (which also experiences very slow search times). I have tried to include as much information as I can, such as the explain plan etc.

        QUERY PLAN
        HashAggregate  (cost=46492.52..46493.50 rows=98 width=46)
          Output: public.movies.title, public.movies.movieid, public.movies.year
          ->  Append  (cost=39094.17..46491.79 rows=98 width=46)
                ->  HashAggregate  (cost=39094.17..39094.87 rows=70 width=46)
                      Output: public.movies.title, public.movies.movieid, public.movies.year
                      ->  Seq Scan on movies  (cost=0.00..39093.65 rows=70 width=46)
                            Output: public.movies.title, public.movies.movieid, public.movies.year
                            Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '"%}'::text))
                ->  Nested Loop  (cost=0.00..7395.94 rows=28 width=46)
                      Output: public.movies.title, public.movies.movieid, public.movies.year
                      ->  Seq Scan on akatitles  (cost=0.00..7159.24 rows=28 width=4)
                            Output: akatitles.movieid, akatitles.language, akatitles.title,
                            Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '"%}'::text))
                      ->  Index Scan using movies_pkey on movies  (cost=0.00..8.44 rows=1 width=46)
                            Output: public.movies.movieid, public.movies.title, public.movies.year, public.movies.imdbid
                            Index Cond: (public.movies.movieid = akatitles.movieid)

        SELECT * FROM (
            (SELECT DISTINCT title, movieid, year FROM movies
             WHERE title ILIKE '%Babe%' AND NOT (title ILIKE '"%}'))
            UNION
            (SELECT movies.title, movies.movieid, movies.year
             FROM movies INNER JOIN akatitles ON movies.movieid = akatitles.movieid
             WHERE akatitles.title ILIKE '%Babe%' AND NOT (akatitles.title ILIKE '"%}'))
        ) AS union_tmp2;

    Returns 612 rows in 9078 ms. The database backup (plain text) is 1.61 GB. It's a really complex query and I am not fully cognizant of it; like I said, it was spat out by JMDB. Do you have any suggestions on how I can increase the speed? Regards, Anthoni
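
    For leading-wildcard ILIKE searches like these, an ordinary B-tree index cannot be used, which is why both branches fall back to sequential scans. One approach often suggested on PostgreSQL is a trigram index from the pg_trgm extension; note that LIKE/ILIKE acceleration through pg_trgm requires PostgreSQL 9.1 or later, so treat this as a sketch under that assumption (table and column names are taken from the question):

        -- Requires the pg_trgm contrib module (CREATE EXTENSION is 9.1+ syntax;
        -- older versions load contrib/pg_trgm.sql instead).
        CREATE EXTENSION IF NOT EXISTS pg_trgm;

        -- Trigram GIN indexes let ILIKE '%Babe%' use an index instead of a seq scan.
        CREATE INDEX idx_movies_title_trgm    ON movies    USING gin (title gin_trgm_ops);
        CREATE INDEX idx_akatitles_title_trgm ON akatitles USING gin (title gin_trgm_ops);

        ANALYZE movies;
        ANALYZE akatitles;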

    Read the article

  • Best way to change jqGrid rowNum from ALL to -1 before passing to a web service

    - by Billyhole
    I'm looking to find the best way to allow users to choose to show ALL records in a jqGrid. I know that a -1 value passed for the rows parameter denotes ALL, but I want the word "ALL", not a -1, to appear in the rowList select element, i.e. rowList: [15, 50, 100, 'ALL']. I'm passing the grid request to a web service which accepts an int for "rows", and I'm trying to find how and when I should change the user-selected value of "ALL" to a -1 before it gets sent to the web service.

    Below is my cleaned up grid code. I tried various code blocks before my $.ajax in the datatype function, but most attempts just seemed like I was doing this in the most convoluted way I possibly could. For example:

        datatype: function(postdata) {
            if ($("#gridTableAssets").jqGrid('getGridParam', 'rowNum') == 'ALL') {
                $("#gridTableAssets").appendPostData({ "rows": -1, "page": 1 });
            }
            $.ajax({...

    But doing that seemed to cause the actual "page" GridParam to be nulled out on subsequent grid actions, forcing me to handle that in other places. This just seems like something that would be frequently done out there and would have a clean way of doing it.

    Cleaned grid code:

        $("#gridTableAssets").jqGrid({
            datatype: function(postdata) {
                $.ajax({
                    url: "/Service/Repository.asmx/GetAssets",
                    data: JSON.stringify(postdata),
                    type: 'POST',
                    contentType: "application/json; charset=utf-8",
                    error: function(XMLHttpRequest, textStatus, errorThrown) {
                        alert('error');
                    },
                    success: function(msg) {
                        var assetsGrid = $("#gridTableAssets")[0];
                        assetsGrid.addJSONData(JSON.parse(msg));
                        ...
                    }
                });
            },
            ...
            pager: $('#pagerAssets'),
            rowNum: 15,
            rowList: [15, 50, 100, 'ALL'],
            ...
            onPaging: function(index, colindex, sortorder) { SessionKeepAlive(); }
        });

    And here is the web service:

        [WebMethod]
        public string GetAssetsOfAssetStructure(bool _search, int rows, int page, string sidx, string sord, string filters)
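
    Since the grid already uses a custom datatype function, one low-friction option is to translate the value in a copy of the post data just before the AJAX call, leaving the grid's own rowNum and page parameters untouched. A sketch of that idea (untested against this exact grid setup):

        datatype: function (postdata) {
            // Work on a copy so the grid's internal paging state is not disturbed.
            var sendData = $.extend({}, postdata);
            if (sendData.rows === 'ALL') {
                sendData.rows = -1;   // the service's int "rows" parameter expects -1 for ALL
            }
            $.ajax({
                url: "/Service/Repository.asmx/GetAssets",
                data: JSON.stringify(sendData),
                type: 'POST',
                contentType: "application/json; charset=utf-8",
                success: function (msg) {
                    $("#gridTableAssets")[0].addJSONData(JSON.parse(msg));
                }
            });
        }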

    Read the article

  • Is this possible? Overflow-y:visible with overflow-x:scroll/auto

    - by Kostrzak
    We have a problem in our team which we cannot solve. We made our own grid control. When you click on the icon next to a column name, a pop-up div (call it divFilter) appears and you can set filtering there. A div can be generated dynamically for each column, so we can have e.g. 5 divFilters in 5 different places. It works, but the only problem is that when there are, for example, only 1-2 records in the grid, the pop-up div is displayed under the horizontal scrollbar of the div. We've tried z-index but it looks like that won't work. We can set overflow: visible, but we also need the horizontal scrollbar (our grids have up to 50 columns). We thought we could solve it by setting overflow-y: visible and overflow-x: scroll, but according to our tests and this page: http://www.brunildo.org/test/Overflowxy2.html it isn't possible (for IE7, IE8). I've also found the similar question CSS overflow-y:visible, overflow-x:scroll, but our pop-up divs must be position: absolute, because we need to position them under the columns. Any ideas or workarounds? Is it even possible to do this with CSS only, without using JavaScript (for dynamically changing the grid view height etc.)? Thanks!!

    Read the article

  • SQL Server insert with XML parameter - empty string not converting to null for numeric

    - by Mayo
    I have a stored procedure that takes an XML parameter and inserts the "Entity" nodes as records into a table. This works fine unless one of the numeric fields has a value of empty string in the XML. Then it throws an "error converting data type nvarchar to numeric" error. Is there a way for me to tell SQL to convert empty string to null for those numeric fields in the code below?

        -- @importData XML <- stored procedure param
        DECLARE @l_index INT
        EXECUTE sp_xml_preparedocument @l_index OUTPUT, @importData

        INSERT INTO dbo.myTable
        (
             [field1]
            ,[field2]
            ,[field3]
        )
        SELECT
             [field1]
            ,[field2]
            ,[field3]
        FROM OPENXML(@l_index, 'Entities/Entity', 1)
        WITH
        (
             field1 int 'field1'
            ,field2 varchar(40) 'field2'
            ,field3 decimal(15, 2) 'field3'
        )

        EXECUTE sp_xml_removedocument @l_index

    EDIT: And if it helps, sample XML. The error is thrown unless I comment out field3 in the code above or provide a value in field3 below.

        <?xml version="1.0" encoding="utf-16"?>
        <Entities>
          <Entity>
            <field1>2435</field1>
            <field2>843257-3242</field2>
            <field3 />
          </Entity>
        </Entities>
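
    One approach that is often suggested is to read the numeric field as text in the WITH clause and do the conversion yourself, using NULLIF to turn empty strings into NULL. A sketch against the same column names (the varchar length chosen for field3 is an assumption):

        SELECT
             [field1]
            ,[field2]
            ,CONVERT(decimal(15, 2), NULLIF(LTRIM([field3]), ''))  -- '' becomes NULL instead of failing the cast
        FROM OPENXML(@l_index, 'Entities/Entity', 1)
        WITH
        (
             field1 int          'field1'
            ,field2 varchar(40)  'field2'
            ,field3 varchar(20)  'field3'   -- read as text first, convert in the SELECT
        )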

    Read the article

  • Integration tests in Continuous Integration environment: Database and filesystem state

    - by dario_ramos
    I'm trying to implement automated integration tests for my application. It's a very complex monster. You could say that its database and part of the filesystem are part of its state, because it saves image files to the hard drive and references to those in the DB. The software needs all of those, in a coherent state, to work properly.

    Back to writing tests: to run any relevant test, I need some image files in the filesystem and certain records filled in the database. I thought of putting all of these in a separate folder called TestEnvironmentData in the repository and retrieving them from the continuous integration server (TeamCity), but a colleague said the repo is quite full as it is, and that I should set up a special directory, and databases, only on the continuous integration server. I don't like that, because then the tests' success depends on me manually maintaining stuff on the server, and restoring the initial state before every test becomes cumbersome.

    What do you guys do when you need to write integration tests for an app like this? The main goal is having an automated test harness to approach a large scale refactoring. There's lots of spaghetti code and the app's current architecture is hardly unit testable, which is why I decided on integration tests first. Any alternative approach is welcome.

    Read the article

  • Excel Range Format: Number is automatically formatted when Range::Value2 is set

    - by A9S6
    I have an Excel add-in written in C# that imports a text file into an Excel worksheet. Some of the fields in the file are text and some are numbers.

    Problem steps:
    - Change the system's regional settings to Dutch (Belgium).
    - Open Excel and import the file. Records contain values such as 78,1118 which get converted to 781.118. Note that in Dutch (Belgium), COMMA is the decimal character and DOT is the thousands character.

    I do not require the number to be formatted automatically; I just want to display whatever I get from the file (78,1118). If I set the cell's NumberFormat to "@", i.e. Text, then it displays an error (SmartTag) saying "Number stored as Text". I know I can change the settings by going to the "Options" box, but I don't want to change any user options in Excel for this. I have tried setting the cell's Value2 with a leading "'" (apostrophe), but the same error is displayed. If I set the cell's format to something else after the value is set, then the actual value changes and I lose the decimal. Is there a way in Excel to just display the value and NOT display the "Number stored as Text" error in the cell?
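
    If the add-in targets Excel 2007 or later, one possibility is to keep the cell formatted as text and then suppress the "Number stored as Text" indicator for just that range through the Range.Errors collection, rather than touching the user's global error-checking options. A hedged sketch (assumes the Excel 2007+ interop object model; the cell address and value are placeholders):

        // Store the raw text exactly as read from the file.
        Excel.Range cell = (Excel.Range)worksheet.Cells[row, col];
        cell.NumberFormat = "@";          // treat the cell as text
        cell.Value2 = "78,1118";          // value is kept verbatim

        // Hide the green "Number stored as Text" indicator for this cell only.
        cell.Errors[Excel.XlErrorChecks.xlNumberAsText].Ignore = true;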

    Read the article

  • Nested Forms not passing belongs_to :id

    - by Bill Christian
    I have the following models:

        class Project < ActiveRecord::Base
          has_many :assignments, :conditions => {:deleted_at => nil}
          has_many :members, :conditions => {:deleted_at => nil}

          accepts_nested_attributes_for :members, :allow_destroy => true
        end

        class Member < ActiveRecord::Base
          belongs_to :project
          belongs_to :person
          belongs_to :role
          has_many :assignments, :dependent => :destroy, :conditions => {:deleted_at => nil}

          accepts_nested_attributes_for :assignments, :allow_destroy => true

          validates_presence_of :role_id
          validates_presence_of :project_id
        end

    and I assume the controller will populate member.project_id upon project.save for each nested member record. However, I get a validation error stating the project_id is blank. My controller method:

        def create
          # @project is created in before_filter
          if @project.save
            flash[:notice] = "Successfully created project."
            redirect_to @project
          else
            render :action => 'new'
          end
        end

    Do I need to manually set the project_id in each nested member record? Or what is necessary for the controller to populate it when it creates the member records?
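
    A pattern that often explains this: the parent Project has no id until its row is actually saved, so validates_presence_of :project_id on the child fails during the same save. One commonly used workaround is to validate the association object instead of the foreign key and declare :inverse_of so the in-memory parent is visible to the nested children. A sketch (Rails 2.3.6+ / 3.x; only the changed lines are shown):

        class Project < ActiveRecord::Base
          has_many :members, :conditions => {:deleted_at => nil}, :inverse_of => :project
          accepts_nested_attributes_for :members, :allow_destroy => true
        end

        class Member < ActiveRecord::Base
          belongs_to :project, :inverse_of => :members

          validates_presence_of :role_id
          validates_presence_of :project   # validate the association, not the raw foreign key
        end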

    Read the article

  • SQL select descendants of a row

    - by Joey Adams
    Suppose a tree structure is implemented in SQL like this:

        CREATE TABLE nodes (
            id     INTEGER PRIMARY KEY,
            parent INTEGER -- references nodes(id)
        );

    Although cycles can be created in this representation, let's assume we never let that happen. The table will only store a collection of roots (records where parent is null) and their descendants. The goal is to, given an id of a node in the table, find all nodes that are descendants of it. A is a descendant of B if either A's parent is B or A's parent is a descendant of B. Note the recursive definition. Here is some sample data:

        INSERT INTO nodes VALUES (1, NULL);
        INSERT INTO nodes VALUES (2, 1);
        INSERT INTO nodes VALUES (3, 2);
        INSERT INTO nodes VALUES (4, 3);
        INSERT INTO nodes VALUES (5, 3);
        INSERT INTO nodes VALUES (6, 2);

    which represents:

        1
        `-- 2
            |-- 3
            |   |-- 4
            |   `-- 5
            `-- 6

    We can select the (immediate) children of 1 by doing this:

        SELECT a.* FROM nodes AS a WHERE parent=1;

    We can select the children and grandchildren of 1 by doing this:

        SELECT a.* FROM nodes AS a WHERE parent=1
        UNION ALL
        SELECT b.* FROM nodes AS a, nodes AS b WHERE a.parent=1 AND b.parent=a.id;

    We can select the children, grandchildren, and great grandchildren of 1 by doing this:

        SELECT a.* FROM nodes AS a WHERE parent=1
        UNION ALL
        SELECT b.* FROM nodes AS a, nodes AS b WHERE a.parent=1 AND b.parent=a.id
        UNION ALL
        SELECT c.* FROM nodes AS a, nodes AS b, nodes AS c WHERE a.parent=1 AND b.parent=a.id AND c.parent=b.id;

    How can a query be constructed that gets all descendants of node 1, rather than those at a finite depth? It seems like I would need to create a recursive query or something. I'd like to know if such a query would be possible using SQLite. However, if this type of query requires features not available in SQLite, I'm curious to know if it can be done in other SQL databases.
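
    For reference, this is exactly the job of a recursive common table expression. PostgreSQL 8.4+ supports WITH RECURSIVE (SQL Server offers the same feature as a plain WITH), and newer SQLite releases (3.8.3 and later) support it too; older SQLite versions do not, so there the traversal has to be done in application code. A sketch against the schema above:

        WITH RECURSIVE descendants(id, parent) AS (
            SELECT id, parent FROM nodes WHERE parent = 1        -- immediate children of node 1
            UNION ALL
            SELECT n.id, n.parent
            FROM nodes AS n
            JOIN descendants AS d ON n.parent = d.id             -- children of anything already found
        )
        SELECT * FROM descendants;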

    Read the article

  • XMLhttpRequest > PHP > XMLhttpRequest

    - by usurper
    Hi guys, I have another question. XMLHttpRequests haunt me. Everything is now in the database, but I need this data to update my page on first page load or reload. The XHR is triggered in a JavaScript file which calls a PHP script. The PHP script accesses the MySQL database. But how do I get the fetched records back into my JavaScript for the page update? I cannot figure it out.

    First my synchronous XMLHttpRequest:

        function retrieveRowsDB() {
            if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
            } else { // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.open("GET", "retrieveRowData.php", false);
            xmlhttp.send(null);
            return xmlhttp.responseText;
        }

    Then my PHP script:

        <?php
        $con = mysql_connect("localhost", "root", "*************");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("sadb", $con);

        $data = "SELECT * FROM users ORDER BY rowdata ASC";
        if (!mysql_query($data, $con)) {
            die('Error: ' . mysql_error());
        } else {
            $dbrecords = mysql_query($data, $con);
        }
        $rowdata = mysql_fetch_array($dbrecords);
        return $rowdata;

        mysql_close($con);
        ?>

    What am I missing here? Anyone got a clue?
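
    The piece that usually trips people up here is that a PHP script has to echo its output: a top-level return never reaches xmlhttp.responseText. A common pattern is to collect the rows and emit them as JSON, then parse that string on the JavaScript side. A sketch reusing the connection code above (json_encode needs PHP 5.2+, and JSON.parse needs json2.js on old IE):

        <?php
        // ... same mysql_connect / mysql_select_db code as above ...
        $result = mysql_query("SELECT * FROM users ORDER BY rowdata ASC", $con);

        $rows = array();
        while ($row = mysql_fetch_assoc($result)) {
            $rows[] = $row;                // collect every record, not just the first one
        }
        mysql_close($con);

        header('Content-Type: application/json');
        echo json_encode($rows);           // echo, not return - this is what responseText receives
        ?>

    On the JavaScript side the records then come back with something like var records = JSON.parse(retrieveRowsDB());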

    Read the article

  • MVC and repository pattern data efficiency

    - by Shawn Mclean
    My project is structured as follows.

    DAL:

        public IQueryable<Post> GetPosts()
        {
            var posts = from p in context.Post
                        select p;
            return posts;
        }

    Service:

        public IList<Post> GetPosts()
        {
            var posts = repository.GetPosts().ToList();
            return posts;
        }

        // Returns a list of the latest feeds, restricted by the count.
        public IList<PostFeed> GetPostFeeds(int latestCount)
        {
            List<Post> post = GetPosts();
            // CODE TO CREATE FEEDS HERE
            return feeds;
        }

    Let's say GetPostFeeds(5) is supposed to return the 5 latest feeds. By going up the list, doesn't it pull down every single post from the database in GetPosts(), just to extract 5 from it? If each post is, say, 5 KB in the database and there are 1 million records, won't that be 5 GB of RAM being used per call to GetPostFeeds()? Is this the way it happens? Should I go back to my DAL and write queries that return only what I need?
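
    The usual way to avoid materializing the whole table is to keep composing on the IQueryable and apply the ordering and Take before any ToList(), so the restriction is translated into the SQL that hits the database. A sketch of what the service method could look like (the CreatedDate property and the PostFeed mapping are assumptions, not part of the original code):

        // Service layer: compose on IQueryable so the restriction runs in the database.
        public IList<PostFeed> GetPostFeeds(int latestCount)
        {
            var latestPosts = repository.GetPosts()        // still IQueryable<Post>; nothing has executed yet
                .OrderByDescending(p => p.CreatedDate)     // assumed date column
                .Take(latestCount)                         // becomes TOP/LIMIT in the generated SQL
                .ToList();                                 // only latestCount rows cross the wire

            return latestPosts
                .Select(p => new PostFeed { /* map the fields the feed needs here */ })
                .ToList();
        }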

    Read the article

  • MVC 2.0 - JqGrid Sorting with Multiple Tables

    - by Billy Logan
    I am in the process of implementing the jqGrid and would like to be able to use the sorting functionality. I have run into some issues with sorting columns that are related to the base table. Here is the script to load the grid:

        public JsonResult GetData(GridSettings grid)
        {
            try
            {
                using (IWE dataContext = new IWE())
                {
                    var query = dataContext.LKTYPE.Include("VWEPICORCATEGORY").AsQueryable();

                    // sorting
                    query = query.OrderBy<LKTYPE>(grid.SortColumn, grid.SortOrder);

                    // count
                    var count = query.Count();

                    // paging
                    var data = query.Skip((grid.PageIndex - 1) * grid.PageSize).Take(grid.PageSize).ToArray();

                    // converting to grid format
                    var result = new
                    {
                        total = (int)Math.Ceiling((double)count / grid.PageSize),
                        page = grid.PageIndex,
                        records = count,
                        rows = (from host in data
                                select new
                                {
                                    TYPE_ID = host.TYPE_ID,
                                    TYPE = host.TYPE,
                                    CR_ACTIVE = host.CR_ACTIVE,
                                    description = host.VWEPICORCATEGORY.description
                                }).ToArray()
                    };
                    return Json(result, JsonRequestBehavior.AllowGet);
                }
            }
            catch (Exception ex)
            {
                // send the error email
                ExceptionPolicy.HandleException(ex, "Exception Policy");
            }
            // have to return something if there is an issue
            return Json("");
        }

    As you can see, the description field is part of the related table ("VWEPICORCATEGORY") and the OrderBy is targeted at LKTYPE. I am trying to figure out how exactly one goes about sorting that particular field, or maybe even a better way to implement this grid using multiple tables and its sorting functionality. Thanks in advance, Billy
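
    One workable approach is to special-case the column that lives on the related entity and sort it with a lambda, falling back to the string-based OrderBy helper for the columns on LKTYPE itself. A sketch (it assumes grid.SortColumn carries the jqGrid column name and grid.SortOrder is "asc"/"desc"):

        // Sorting: related-table column handled explicitly, everything else as before.
        if (grid.SortColumn == "description")
        {
            query = (grid.SortOrder == "desc")
                ? query.OrderByDescending(t => t.VWEPICORCATEGORY.description)
                : query.OrderBy(t => t.VWEPICORCATEGORY.description);
        }
        else
        {
            query = query.OrderBy<LKTYPE>(grid.SortColumn, grid.SortOrder);
        }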

    Read the article

  • EAV Database Scheme

    - by GLO
    Hello Stack Overflow community! I believe my question is one for all the DB gurus here. Do you know the EAV DB scheme (http://en.wikipedia.org/wiki/Entity-attribute-value_model) and what is said about the performance of this model? I wonder, if I break this model into smaller tables, what the result would be. Let's talk about it. I have a DB with more than 100K records, a lot of categories, and many items (with different properties per category). Everything is stored in an EAV. If I try to break this scheme and create a unique table for each category, is that something I should avoid? Yes, I know that I'll probably have a lot of tables and I'll need to ALTER them if I want to add an extra field, BUT is this so wrong? I have also read that the more tables I have, the more files the DB will be populated with, and this isn't good for any filesystem. Any suggestions? Thank you!

    Read the article

  • Automatic Adjusting Range Table

    - by Bradford
    I have a table with a start date range, an end date range, and a few other additional columns. On input of a new record, I want to automatically adjust any overlapping date ranges (shrinking them to allow for the new input). I also want to ensure that no overlapping records can accidentally be inserted into this table. I'm using Oracle and Java for my application code.

    How should I enforce the prevention of overlapping date ranges and also allow for automatically adjusting overlapping ranges? Should I create an AFTER INSERT trigger, with a dbms_lock to serialize access, to prevent the overlapping data, and then apply the logic in Java to auto adjust everything? Or should that part be in PL/SQL in a stored procedure call? This is something that we need for a couple of other tables, so it'd be nice to abstract. If anyone has something like this already written, please share :) I did find this reference: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:474221407101

    Here's an example of how each of the 4 overlapping cases should be handled for adjustment on insert (** marks the rows affected):

        Example 1:
            In DB (Start, End, Value):  (0, 10, 'X')  **(30, 100, 'Z')  (200, 500, 'Y')
            Input (20, 50, 'A')
            Gives:                      (0, 10, 'X')  **(20, 50, 'A')  **(51, 100, 'Z')  (200, 500, 'Y')

        Example 2:
            In DB (Start, End, Value):  (0, 10, 'X')  **(30, 100, 'Z')  (200, 500, 'Y')
            Input (40, 80, 'A')
            Gives:                      (0, 10, 'X')  **(30, 39, 'Z')  **(40, 80, 'A')  **(81, 100, 'Z')  (200, 500, 'Y')

        Example 3:
            In DB (Start, End, Value):  (0, 10, 'X')  **(30, 100, 'Z')  (200, 500, 'Y')
            Input (50, 120, 'A')
            Gives:                      (0, 10, 'X')  **(30, 49, 'Z')  **(50, 120, 'A')  (200, 500, 'Y')

        Example 4:
            In DB (Start, End, Value):  (0, 10, 'X')  **(30, 100, 'Z')  (200, 500, 'Y')
            Input (20, 120, 'A')
            Gives:                      (0, 10, 'X')  **(20, 120, 'A')  (200, 500, 'Y')

    The algorithm is as follows (given range = g, input range = i, output range set = o):

        if i.start <= g.start
            if i.end >= g.end
                o_1 = i
            else
                o_1 = i
                o_2 = (i.end + 1, g.end)
        else
            if i.end >= g.end
                o_1 = (g.start, i.start - 1)
                o_2 = i
            else
                o_1 = (g.start, i.start - 1)
                o_2 = i
                o_3 = (i.end + 1, g.end)

    Read the article

  • WPF DataGrid inside Accordion height issue

    - by LucasS
    I am using the latest WPF Toolkit but am running into a height issue when I have a large record set bound to a DataGrid inside an AccordionItem. The height of the Accordion itself scales nicely, but the DataGrid inside the accordion control doesn't get a scrollbar or get constrained in any way, so the records are hidden. I know that I am most probably missing something very simple here (like a binding from the DataGrid's Height property to the Accordion, but that seems messy). Here is a cut down version of the code (and yes, this has the same problem if you bind in a big record set):

            ...
            </layouttoolkit:AccordionItem>
            <layouttoolkit:AccordionItem Header="grid 2">
                <dg:DataGrid AutoGenerateColumns="False" CanUserAddRows="False"
                             CanUserDeleteRows="False" SelectionMode="Single">
                    ...
                    </dg:DataGrid.Columns>
                </dg:DataGrid>
            </layouttoolkit:AccordionItem>
            <layouttoolkit:AccordionItem Header="grid 3">
                <dg:DataGrid AutoGenerateColumns="False" CanUserAddRows="False"
                             CanUserDeleteRows="False" SelectionMode="Single">
                    ...
                    </dg:DataGrid.Columns>
                </dg:DataGrid>
            </layouttoolkit:AccordionItem>
        </layouttoolkit:Accordion>
    </UserControl>

    Read the article

  • Lost Update Anomaly in Sql Server Update Command

    - by Javed
    Hi, I am very much confused. I have a transaction at ReadCommitted isolation level. Among other things I am also updating a counter value in it, something similar to below:

        UPDATE tblCount SET counter = counter + 1

    My application is a desktop application and this transaction happens to occur quite frequently and concurrently. We recently noticed an error that sometimes the counter value doesn't get updated or is missed. We also insert one record on each counter update, so we are sure that records have been inserted but somehow the counter fails to update. This happens about once in 2000 simultaneous transactions. I seriously doubt it is a lost update anomaly I am facing, because if you look at the command above, it just updates the counter from its own value: if I have started a transaction and the transaction has reached this statement, it should have locked the row. This should not cause a lost update, but it's happening somehow. Could it be that this update command works in two parts? Like first it reads the counter value (during which it doesn't get the exclusive lock) and then writes the new calculated value (when it does get an exclusive lock)? Please help, I have got really confused.
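
    For context: a single UPDATE ... SET counter = counter + 1 reads the row under an update lock that is converted to an exclusive lock for the write, so that statement on its own should not lose increments; the classic lost update shows up when the counter is read in one statement and written back in another. If the application also needs the new value, one way to keep the read and the write in the same atomic statement on SQL Server 2005+ is the OUTPUT clause. A sketch using the table from the question:

        -- Increment and return the post-increment value in one atomic statement.
        UPDATE tblCount
        SET counter = counter + 1
        OUTPUT inserted.counter;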

    Read the article

  • Cannot enlist Synchronization. LocalTransactionCoordinator is completing or completed issue with Hibernate

    - by Bijendra Singh
    I am getting a "Cannot enlist Synchronization. LocalTransactionCoordinator is completing or completed" exception when my method is called from the portlet. I am using Spring transaction management to handle all the Hibernate transactions, configured in the Spring configuration file through AOP. When I run my Hibernate DAO method for persisting the data through JUnit, it works fine.

    Exception description: when I run my code through the unit test case, the data gets updated in the database properly, but when I run the same code integrated with the portlet, the code executes without error yet after the completion of the transaction the records are not updated in the database. The following error can be seen in the log:

        [4/7/10 23:06:38:685 MDT] 0000006c LocalTranCoor E   WLTC0014E: Cannot enlist Synchronization. LocalTransactionContainment is completing or completed.
        [4/7/10 23:06:38:689 MDT] 0000006c LocalTransact E   J2CA0026E: Method addSync caught java.lang.IllegalStateException: Cannot enlist Synchronization. LocalTransactionCoordinator is completing or completed.
            at com.ibm.ws.LocalTransaction.LocalTranCoordImpl.enlistSynchronization(LocalTranCoordImpl.java(Compiled Code))
            at com.ibm.ejs.j2c.LocalTransactionWrapper.addSync(LocalTransactionWrapper.java(Compiled Code))
            at com.ibm.ejs.j2c.ConnectionManager.initializeForUOW(ConnectionManager.java(Compiled Code))
            at com.ibm.ejs.j2c.ConnectionManager.involveMCInTran(ConnectionManager.java(Compiled Code))
            at com.ibm.ejs.j2c.ConnectionManager.associateConnection(ConnectionManager.java(Compiled Code))
            at com.ibm.ejs.j2c.ConnectionManager.associateConnection(ConnectionManager.java(Compiled Code))
            at com.ibm.ws.rsadapter.jdbc.WSJdbcConnection.reactivate(WSJdbcConnection.java(Compiled Code))
            at com.ibm.ws.rsadapter.jdbc.WSJdbcConnection.getWarnings(WSJdbcConnection.java:1539)

    Read the article

  • How to modernize an enormous legacy database?

    - by smayers81
    I have a question, just looking for suggestions here. So, my application is 'modernizing' a desktop application by converting it to the web, with an ICEFaces UI and a server side written in Java. However, they are keeping around the same Oracle database, which at current count has about 700-900 tables and probably a billion total records in the tables. Some individual tables have 250 million rows, many have over 25 million.

    Needless to say, the database is not scaling well. As a result, the performance of the application is looking to be abysmal. The architects / decision makers-that-be have all either refused or are unwilling to restructure the persistence. So, basically we are putting a fresh coat of paint on a functional desktop application that currently serves most user needs and does so with relative ease and quick performance. I am having trouble sleeping at night thinking of how poorly this application is going to perform and how difficult it is going to be for everyday users to do their job.

    So, my question is, what options do I have to mitigate this impending disaster? Is there some type of intermediate layer I can put in between the database and the Java code to speed up performance while at the same time keeping the database structure intact? Caching is obviously an option, but I don't see that as being a cure-all. Is it possible to layer a NoSQL DB in between or something?

    Read the article

  • How to implement a counter when using golang's goroutines?

    - by MrROY
    I'm trying to make a queue struct that has push and pop functions. I need 10 goroutines to push and another 10 goroutines to pop data, just like I did in the code below.

    Questions:
    1. I need to print out how many records I have pushed/popped, but I don't know how to do that.
    2. Is there any way to speed up my code? The code is too slow for me.

        package main

        import (
            "runtime"
            "time"
        )

        const (
            DATA_SIZE_PER_THREAD = 10000000
        )

        type Queue struct {
            records string
        }

        func (self Queue) push(record chan interface{}) {
            // need push counter
            record <- time.Now()
        }

        func (self Queue) pop(record chan interface{}) {
            // need pop counter
            <-record
        }

        func main() {
            runtime.GOMAXPROCS(runtime.NumCPU())

            // record chan
            record := make(chan interface{}, 1000000)
            // finish flag chan
            finish := make(chan bool)

            queue := new(Queue)

            for i := 0; i < 10; i++ {
                go func() {
                    for j := 0; j < DATA_SIZE_PER_THREAD; j++ {
                        queue.push(record)
                    }
                    finish <- true
                }()
            }

            for i := 0; i < 10; i++ {
                go func() {
                    for j := 0; j < DATA_SIZE_PER_THREAD; j++ {
                        queue.pop(record)
                    }
                    finish <- true
                }()
            }

            for i := 0; i < 20; i++ {
                <-finish
            }
        }
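
    For the counting question, one common approach is to give the struct two counters and bump them with sync/atomic, which stays correct with many goroutines and avoids an extra lock. A sketch layered onto the struct above (the field names and the report function are assumptions):

        import (
            "fmt"
            "sync/atomic"
        )

        type Queue struct {
            pushed uint64 // how many records have been pushed
            popped uint64 // how many records have been popped
        }

        func (q *Queue) push(record chan interface{}) {
            record <- struct{}{}           // a cheaper payload than time.Now()
            atomic.AddUint64(&q.pushed, 1) // safe under concurrent goroutines
        }

        func (q *Queue) pop(record chan interface{}) {
            <-record
            atomic.AddUint64(&q.popped, 1)
        }

        // Call after every goroutine has signalled on the finish channel.
        func (q *Queue) report() {
            fmt.Printf("pushed=%d popped=%d\n",
                atomic.LoadUint64(&q.pushed),
                atomic.LoadUint64(&q.popped))
        }

    On the speed question, note that the per-item cost dominates here: sending a struct{}{} (or a plain int) instead of a time.Time boxed into interface{} and keeping the channel buffered already removes much of the per-operation overhead.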

    Read the article

  • Is this a good approach to address double-base64-encoding?

    - by Freiheit
    My software understands attachments, like PNGs attached to user records. These attachments are usually sent in from outside sources as a Base64 encoded string. The database stores whatever data it is given, Base64 encoded or not. When I serve up the attachment for download I do this:

        if (Base64.isBase64(data)) {
            data = Base64.decodeBase64(data);
        }

    There is a potential for data that is double encoded. For instance, the sender of a message had Base64 encoded data, then encoded it again when building the message to send to me. I think the following code would address that circumstance:

        while (Base64.isBase64(data)) {
            data = Base64.decodeBase64(data);
        }

    So if data is encoded multiple times, it would be decoded until it's in its 'raw' state and then served up for download. Is this approach an acceptable way to address that problem? Ideally some sort of checking could happen at the edge when I receive attachment data, but that will take more time. This looping seems to be a faster way to do it. The 'Base64' library is Apache Commons: http://commons.apache.org/codec/apidocs/org/apache/commons/codec/binary/Base64.html - I trust it to properly identify Base64 encoded data.

    Read the article

  • Using for or while loops

    - by Gary
    Every month, 4 or 5 text files are created. The data in the files is pulled into MS Access and used in a mail merge. Each file contains a header. This is an example:

        HEADER|0000000130|0000527350|0000171250|0000058000|0000756600|0000814753|0000819455|100106

    The 2nd field is the number of records contained in the file (excluding the header line). The last field is the date in the form yymmdd.

    Using gawk (for Windows), I've done OK with rearranging/modifying the data and writing it all out to a new file for importing into Access, except for the following. I'm trying to create a unique ID number for each record. The ID number has the form 1mmddyyXXXX, where XXXX is a number, padded with leading zeros. Using the header above, the first record in the output file would get the ID number 10106100001 and the last record would get the ID 10106100130.

    I've tried putting the second field of the header into a variable, rearranging the last header field into the required date format, and then looping with "for" statements to append the XXXX part of the ID and then outputting it all with printf, but so far I've been complete rubbish at it. Thanks for your help! Gary
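
    A sketch of one way to do this in gawk - it assumes the header is always the first line, fields are pipe-delimited, the counter simply increments in file order, and the new ID is prefixed to each output record (that last layout detail is an assumption):

        BEGIN { FS = OFS = "|" }

        NR == 1 {                       # header line
            d = $NF                     # last field, yymmdd, e.g. 100106
            prefix = "1" substr(d, 3, 2) substr(d, 5, 2) substr(d, 1, 2)   # 1mmddyy -> 1010610
            next                        # do not print the header itself
        }

        {
            id = sprintf("%s%04d", prefix, NR - 1)   # NR-1 counts records after the header
            print id, $0                             # e.g. 10106100001|<original record>
        }

    Run it as gawk -f addid.awk input.txt > output.txt (the script and file names are placeholders).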

    Read the article

  • Multiple Out-of-Browser Applications in One Application

    - by Otaku
    I'm looking at a scenario where I need to create a single "master" Silverlight application and then add "child" applications for an out-of-browser Silverlight application. The scenario is something like this. A user will visit a gameboard web site and choose a game to play. Let's call it Checkers. He likes it, so then he installs the out-of-browser app to his desktop. He then finds Chess, and installs that too. For both games, while played on the site, he has stats (games played, win/loss records, etc.). For each game on the site, he navigates to a different page. But now he wants to play offline and view his stats and other cross-games information. He wants to have a single app to launch to play either game. From his single out-of-browser app, he sees that Go is also available, and he places a checkmark against it to download on his next connection. Does anyone have any experience at developing multiple out-of-browser Silverlight apps that reside within a single master app? What considerations need to be had for this type of design? How would this work in terms of install experience from different web pages?

    Read the article

  • Rails 3 memory issue

    - by Erik
    Hello! I'm developing a new site based on Ruby on Rails 3 beta. I knew this might be a bad idea considering it's just a beta, but I still thought it might work. Now, though, I'm having HUGE problems with Rails consuming huge amounts of memory. For my application today it consumes about 10 MB per request and it doesn't seem to release it either. So I thought this might be because of bloat in my application, and thus I created a test app just to compare. For my test app I just generated a model with a scaffold and then created about 20 records on this model. I then went to the index page and hit refresh, and I could immediately see memory taking off! Less than my app, but still about 1-3 MB per request. I'm working on OS X Leopard, with Ruby 1.8.7, Rails 3.0.0.beta and an SQLite DB for development. Does anyone recognize my problem? I would really appreciate some help here. :/ Thanks!

    Read the article
