Search Results

Search found 60427 results on 2418 pages for 'data transfer'.


  • Design a PDF template and populate data at runtime using Java, XML, etc.

    - by Samant
    Well, I have been looking for a Java-based PDF solution, and I guess we still don't have a clean way; all solutions are primitive and feel like workarounds. There is no easy solution for this requirement: 1. Design a PDF template using an IDE (e.g. LiveCycle Designer, which is not free). 2. At runtime, using Java, populate data into this PDF template, either from XML or other data sources. Such a simple requirement, and no one has a good "open-source and free" solution yet! Is anyone aware of any? I have been searching for 3-4 years now for a clean way out. Eclipse BIRT comes close, but does not handle barcode elements out of the box. Jasper/iReport is also good, but that tool does not have a table concept and is kind of annoying! Barcode support there is not good either. XSL-FO has no free IDE for design. Looking for a better answer... got one?

    Read the article

  • Posting forms to a 404 + HttpHandler in IIS7: why has all POST data gone missing?

    - by Rahul
    OK, this might sound a bit confusing and complicated, so bear with me. We've written a framework that allows us to define friendly URLs. If you surf to any arbitrary URL, IIS tries to display a 404 error (or, in some cases, 403;14 or 405). However, IIS is set up so that anything directed to those specific errors is sent to an .aspx file. This allows us to implement an HttpHandler to handle the request and do stuff, which involves finding an associated template and then executing whatever's associated with it. Now, this all works in IIS 5 and 6 and, to an extent, on IIS 7, except for one catch, which happens when you post a form. See, when you post a form to a non-existent URL, IIS says "ah, but that URL doesn't exist" and throws a 405 "method not allowed" error. Since we're telling IIS to redirect those errors to our .aspx page and therefore handling them with our HttpHandler, this normally isn't a problem. But as of IIS 7, all POST data goes missing after the request is redirected to the 405, so you can no longer do the most trivial of things involving forms. To solve this we've tried using an HttpModule, which preserves POST data but appears not to have an initialized Session at the right time (when it's needed). We also tried using an HttpModule for all requests, not just the missing requests that hit 404/403;14/405, but that means things like images, CSS, and JS are being handled by .NET code, which is terribly inefficient. Which brings me to the actual question: has anyone ever encountered this, and does anyone have any advice or know what to do to get things working again? So far someone has suggested using Microsoft's own URL Rewriting module. Would this help solve our problem? Thanks.
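
    To make the session-timing issue concrete, here is a minimal sketch of the HttpModule variant (this is not the original framework's code; the handler, the "/friendly/" check, and all names are placeholders): remap matching requests to a handler before IIS decides the URL does not exist, so no 404/405 redirect happens and Request.Form survives, and let IRequiresSessionState ensure session state is loaded by the time the handler runs.

    ```csharp
    using System;
    using System.Web;
    using System.Web.SessionState;

    // Hypothetical handler for the framework's templates. Implementing IRequiresSessionState
    // means session state is acquired for this request before the handler executes.
    public class TemplateHandler : IHttpHandler, IRequiresSessionState
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            // Request.Form still holds the original POST data here, because the request
            // was remapped instead of being bounced through a 404/405 error page.
            var posted = context.Request.Form;
            context.Response.Write("template output for " + context.Request.Path);
        }
    }

    public class FriendlyUrlModule : IHttpModule
    {
        public void Init(HttpApplication context)
        {
            context.PostResolveRequestCache += delegate(object sender, EventArgs e)
            {
                var app = (HttpApplication)sender;
                // Placeholder check; a real implementation would consult the template catalog.
                if (app.Request.Path.StartsWith("/friendly/", StringComparison.OrdinalIgnoreCase))
                    app.Context.RemapHandler(new TemplateHandler());
            };
        }

        public void Dispose() { }
    }
    ```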

    Read the article

  • How can I compare market data feed sources for quality and latency improvement?

    - by yves Baumes
    I am in the very first stages of implementing a tool to compare two market data feed sources, in order to prove to my boss the quality of newly developed sources (meaning there are no regressions and no missed or wrong updates) and to prove latency improvements. So the tool must be able to check differences between updates as well as tell which source is the best (in terms of latency). Concretely, the reference source could be Reuters, while the other one is a feed handler we develop internally. People warned me that updates might not arrive in the same order, as the Reuters implementation could differ totally from ours. Therefore a simple algorithm that assumes updates arrive in the same order is likely not to work. My first idea was to use fingerprints to compare feed sources, the way the Shazam application finds the title of the tune you are submitting. Google told me it is based on FFT, and I was wondering whether signal processing theory could work well with market access applications. I wanted to know your own experience in this field: is it possible to develop a reasonably accurate algorithm to meet these needs? What was your own idea? What do you think about fingerprint-based comparison?
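
    For what it's worth, a rough sketch of the order-independent idea (illustrative only; the Update shape and field names are invented, and real feeds would need normalization first): fingerprint each update by its content, match fingerprints across the two feeds regardless of arrival order, and compare receive timestamps to measure latency.

    ```csharp
    using System;
    using System.Collections.Generic;

    // Illustrative shape for a normalized update from either feed; field names are invented.
    public record Update(string Symbol, decimal Price, long Size, DateTime ReceivedUtc);

    public static class FeedComparer
    {
        // Content-based fingerprint: identical updates hash the same regardless of arrival order.
        static string Fingerprint(Update u) => u.Symbol + "|" + u.Price + "|" + u.Size;

        // Pair updates by fingerprint and report per-update latency (candidate minus reference).
        // Fingerprints seen on only one side point at missed (or extra) updates.
        public static IEnumerable<(string Key, TimeSpan Latency)> Compare(
            IEnumerable<Update> reference, IEnumerable<Update> candidate)
        {
            var referenceTimes = new Dictionary<string, DateTime>();
            foreach (var u in reference)
                referenceTimes[Fingerprint(u)] = u.ReceivedUtc;  // last one wins; real code might keep a queue per key

            foreach (var u in candidate)
            {
                if (referenceTimes.TryGetValue(Fingerprint(u), out var refTime))
                    yield return (Fingerprint(u), u.ReceivedUtc - refTime);
            }
        }
    }
    ```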

    Read the article

  • ASP.Net MVC: Showing the same data using different layouts...

    - by vdh_ant
    Hi guys, I'm wanting to create a page that allows users to select how they would like to view their data, i.e. summary (which supports grouping), grid (which supports grouping), table (which supports grouping), map, timeline, XML, JSON, etc. Now, each layout would probably use a different view model, all inheriting from a common base class/view model, the reason being that each layout needs a different object structure to work with (some need a hierarchical structure, others a flatter one). Each layout would call the same repository method, and each layout would support the same functionality, i.e. searching and filtering (hence these controls would be shared between layouts). The main exception would be sorting, which only the grid and table views would need to support. Now, my question is: given this, what do people think is the best approach? Using DisplayFor to handle the rendering of the different types? Also, how do I wire this up with the actions? I would imagine I would use one action and pass in the layout type, but then how does that support the grouping required for the summary, grid and table views? Do I treat each grouping as just another layout type? Also, how would this work from a URL point of view: what do people think is the right URL template to support this layout functionality? Cheers, Anthony
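
    One possible shape for the single-action idea, sketched with placeholder names (this is an assumption about the design, not something from the original post): the action takes the layout as a route/query parameter, builds the matching view model through the common base class, and returns the view named after the layout.

    ```csharp
    using System.Web.Mvc;

    public abstract class ItemsViewModelBase { /* shared search/filter state lives here */ }
    public class SummaryViewModel : ItemsViewModelBase { /* hierarchical, grouped */ }
    public class GridViewModel : ItemsViewModelBase { /* flat rows, grouping + sorting */ }

    public class ItemsController : Controller
    {
        // e.g. /items?layout=grid&groupBy=category  (this URL shape is just one possibility)
        public ActionResult Index(string layout = "summary", string groupBy = null)
        {
            ItemsViewModelBase model = BuildModel(layout, groupBy);  // same repository call underneath
            return View(layout, model);  // one view per layout: "summary", "grid", "table", ...
        }

        private ItemsViewModelBase BuildModel(string layout, string groupBy)
        {
            switch (layout)
            {
                case "grid": return new GridViewModel();
                default: return new SummaryViewModel();
            }
        }
    }
    ```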

    Read the article

  • Sync Framework Considerations for Smart Client app

    - by DarkwingDuck
    Microsoft Sync Framework with SQL 2005: is it possible? The documentation seems to hint that the out-of-the-box providers use SQL 2008 functionality. I'm looking for some quick wins in relation to a sync project. The client app will be offline for a number of days. There will be a central server that MUST be SQL Server 2005. I can use .NET 3.5. Basically, the client app could go offline for a week; when it comes back online it needs to sync its data. The good thing is that the data only needs to be pushed to the server. The stuff that syncs back to the client will just be lookup data, which the client never changes, so I don't care about sync collisions. To simplify the scenario: this smart client goes offline and the user surveys data about some observations and enters it into the system. When the laptop is reconnected to the network, it syncs all that data back to the server. There will be other clients doing the same thing, but no one ever touches anyone else's data. Then there are some reports on the server for viewing the data that has been pushed up. This also needs to use ClickOnce. My biggest concern is an interim release happening while a client is offline. This release might require a new field in the database and a new field to fill in on the survey. Obviously that new field will be nullable, because we can't update old data; that's fine to set as an assumption. But when the client connects and its local data schema and the server schema don't match, will Sync Framework be able to handle this? After the data is pushed to the server it is discarded locally. Hope my problem makes sense.

    Read the article

  • Problems with JSON-serializing a Dictionary<Enum, Int32>

    - by dbemerlin
    Hi, whenever I try to serialize the dictionary I get this exception:

    ```
    System.ArgumentException: Type 'System.Collections.Generic.Dictionary`2[[Foo.DictionarySerializationTest+TestEnum, Foo, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null],[System.Int32, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]' is not supported for serialization/deserialization of a dictionary, keys must be strings or object
    ```

    My test case is:

    ```csharp
    public class DictionarySerializationTest
    {
        private enum TestEnum { A, B, C }

        public void SerializationTest()
        {
            Dictionary<TestEnum, Int32> data = new Dictionary<TestEnum, Int32>();
            data.Add(TestEnum.A, 1);
            data.Add(TestEnum.B, 2);
            data.Add(TestEnum.C, 3);

            JavaScriptSerializer serializer = new JavaScriptSerializer();
            String result = serializer.Serialize(data); // Throws
        }

        public void SerializationStringTest()
        {
            Dictionary<String, Int32> data = new Dictionary<String, Int32>();
            data.Add(TestEnum.A.ToString(), 1);
            data.Add(TestEnum.B.ToString(), 2);
            data.Add(TestEnum.C.ToString(), 3);

            JavaScriptSerializer serializer = new JavaScriptSerializer();
            String result = serializer.Serialize(data); // Succeeds
        }
    }
    ```

    Of course I could use .ToString() whenever I add something to the dictionary, but since it's used quite often in performance-relevant methods, I would prefer to keep using the enum. My only solution so far is calling .ToString() and converting before entering the performance-critical regions, but that is clumsy and I would have to change my code structure just to be able to serialize the data. Does anyone have an idea how I could serialize the dictionary as <Enum, Int32>? I use System.Web.Script.Serialization.JavaScriptSerializer for serialization.
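
    One possible workaround (a sketch, not something from the original post): keep the enum-keyed dictionary on the hot path and only project it to string keys at the serialization boundary, so the conversion cost is paid once per serialization rather than on every insert.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Script.Serialization;

    public enum TestEnum { A, B, C }

    public static class EnumDictionaryJson
    {
        // The rest of the code keeps working with Dictionary<TestEnum, int>;
        // keys are converted to strings only when the JSON is actually produced.
        public static string Serialize(Dictionary<TestEnum, int> data)
        {
            Dictionary<string, int> stringKeyed = data.ToDictionary(kv => kv.Key.ToString(), kv => kv.Value);
            return new JavaScriptSerializer().Serialize(stringKeyed);
        }
    }
    ```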

    Read the article

  • Rails: creating a custom data type, to use with generator classes and a bunch of questions related t

    - by Shyam
    Hi, after being productive with Rails for some weeks, I have learned some tricks and gained some experience with the framework. About 10 days ago, I figured out it is possible to build a custom data type for migrations by adding some code to the table definition. Also, after learning a bit about floating point (and how evil it is) versus integers, the money gem and other possible solutions, I decided I didn't WANT to use the money gem, but instead to learn more about programming and find a solution myself. Some suggestions said I should use integers, one for the whole units and one for the cents. While playing in script/console, I discovered how easy it is to work with calculations and arrays. But I am talking too much (and the reason I am is to give sufficient background). Right now, while playing with the scaffold generator (yes, I use it, because I like the way I can quickly set up a prototype while I am still researching my objectives), I would like to use a DRY method. In my opinion, I should build a custom "object" that can hold two variables (Fixnum), one for the whole units and one for the cents. In my big dream, I would be able to do the following: script/generate scaffold Cake name:string description:text cost:mycustom, where mycustom would create two integer columns (one for whole units, one for cents). Right now I could do this with: script/generate scaffold Cake name:string description:text cost_w:integer cost_c:integer. I also had the idea of creating a "cost model", which would hold two integer columns and add a cost_id column to my scaffold. But wouldn't that be an extra table that would cause some kind of performance penalty? And wouldn't that defeat the purpose of the Cake model in the first place, because the costs are an attribute of individual Cake entries? The reason I would want such functionality is that I am thinking of having multiple "costs" inside my Rails application. Thank you for your feedback, comments and answers! I hope my message is understandable; my apologies for incorrect grammar or weird sentences, as English is not my native language.

    Read the article

  • What is a good automated data import method for SQL Server?

    - by Joel Potter
    I'm in the process of porting some SQL Server 2005 databases to SQL Server 2008. One of these databases has an associated import application (a Windows task) which uses SSIS with a DTS package to import a large dataset from an MS Access database nightly. In upgrading to SQL Server 2008, I discovered that I can't run the same console application that has been performing the imports, due to the missing ManagedDTS DLL in SQL Server 2008. It's several years old and in need of a rewrite for various reasons; plus, I've been fairly unhappy with DTS in general. The original reason DTS was chosen was speed (5 minute import time compared to 30+ for ADO.NET). The format of the data to import is out of my control (the client likes Access). I would also like to be able to run the import from a machine completely separate from the server hosting SQL Server, preferably with minimal SQL features installed. Options I've considered: creating an Access application to connect to both databases (SQL Server and Access) and perform the import (ugh!); revisiting ADO.NET to see if the original implementation was poorly written; updated SSIS packages. What other technologies should I be considering for this job?
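
    One option in the same spirit as the ADO.NET revisit (a sketch under the assumption that the Access file is reachable through the ACE/Jet OLE DB provider; connection strings and table names below are hypothetical): stream the rows out of Access with an OleDbDataReader and push them into SQL Server with SqlBulkCopy, which typically gets much closer to DTS-style throughput than row-by-row inserts and runs fine from a machine with no SQL Server features installed.

    ```csharp
    using System.Data.OleDb;
    using System.Data.SqlClient;

    public static class AccessToSqlImport
    {
        // Hypothetical connection strings and table names; adjust to the real environment.
        const string AccessConn = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\import.accdb";
        const string SqlConn = @"Data Source=myServer;Initial Catalog=MyDb;Integrated Security=True";

        public static void Import()
        {
            using (var source = new OleDbConnection(AccessConn))
            using (var cmd = new OleDbCommand("SELECT * FROM SourceTable", source))
            {
                source.Open();
                using (var reader = cmd.ExecuteReader())
                using (var bulk = new SqlBulkCopy(SqlConn))
                {
                    bulk.DestinationTableName = "dbo.TargetTable";
                    bulk.BatchSize = 5000;       // commit in batches
                    bulk.BulkCopyTimeout = 0;    // no timeout for large imports
                    bulk.WriteToServer(reader);  // streams rows; no DataTable held in memory
                }
            }
        }
    }
    ```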

    Read the article

  • How does one _model_ data from relational databases in Clojure?

    - by sandeep
    I have asked this question on Twitter as well as the #clojure IRC channel, yet got no responses. There have been several articles about Clojure-for-Ruby-programmers, Clojure-for-Lisp-programmers, etc., but the missing part is Clojure for ActiveRecord programmers. There have been articles about interacting with MongoDB, Redis, etc., but these are key-value stores at the end of the day. Coming from a Rails background, we are used to thinking about databases in terms of associations: has_many, polymorphic, belongs_to, etc. The few articles about Clojure/Compojure + MySQL (ffclassic) delve right into SQL. Of course, it might be that an ORM induces an impedance mismatch, but the fact remains that after thinking like ActiveRecord, it is very difficult to think any other way. I believe relational DBs lend themselves very well to the object-oriented paradigm because they are, essentially, sets, and something like ActiveRecord is very well suited to modelling this data. For example, a blog, simply put:

    ```ruby
    class Post < ActiveRecord::Base
      has_many :comments
    end

    class Comment < ActiveRecord::Base
      belongs_to :post
    end
    ```

    How does one model this in Clojure, which is so strictly anti-OO? Perhaps the question would have been better if it referred to all functional programming languages, but I am more interested from a Clojure standpoint (and in Clojure examples).

    Read the article

  • Fetch Data using predicate. Retrieve single value.

    - by Mr. McPepperNuts
    ```objc
    XYZAppDelegate *appDelegate = [[UIApplication sharedApplication] delegate];
    NSManagedObjectContext *managedObjectContext = appDelegate.managedObjectContext;

    NSFetchRequest *request = [[NSFetchRequest alloc] init];
    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"name == %@", entryToSearchFor];
    NSEntityDescription *entity = [NSEntityDescription entityForName:@"Entry"
                                               inManagedObjectContext:managedObjectContext];
    NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:NO];
    NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];

    [request setSortDescriptors:sortDescriptors];
    [request setEntity:entity];
    [request setPredicate:predicate];

    NSArray *results = [managedObjectContext executeFetchRequest:request error:nil];
    if (results == nil) {
        NSLog(@"No results found");
    } else {
        NSLog(@"entryToSearchFor %@", entryToSearchFor);
        NSLog(@"results %@", [results objectAtIndex:0]);
    }
    ```

    I want to retrieve a single value (a string) from "Entry", but I believe I must be missing something. Can someone point it out? By the way, the NSLog outputs the following:

    ```
    results <NSManagedObject: 0x3d2d360> (entity: Entry; id: 0x3d13650 <x-coredata://6EA12ADA-8C0B-477F-801C-B44FE6E6C91C/Entry/p3> ; data: <fault>)
    ```

    Read the article

  • Using ASP.NET SQL Membership Provider, how do I store my own per-user data?

    - by Gary McGill
    I'm using the ASP.NET SQL Membership Provider. So, there's an aspnet_Users table that has details of each of my users. (Actually, the aspnet_Membership table seems to contain most of the actual data). I now want to store some per-user information in my database, so I thought I'd just create a new table with a UserId (GUID) column and an FK relationship to aspnet_Users. However, I then discovered that I can't easily get access to the UserId since it's not exposed via the membership API. (I know I can access it via the ProviderUserKey, but it seems like the API is abstracting away the internal UserID in favor of the UserName, and I don't want to go too far against the grain). So, I thought I should instead put a LoweredUserName column in my table, and create an FK relationship to aspnet_Users using that. Bzzzt. Wrong again, because while there is a unique index in aspnet_Users that includes the LoweredUserName, it also includes the ApplicationId - so in order to create my FK relationship, I'd need to have an ApplicationId column in my table too. At first I thought: fine, I'm only dealing with a single application, so I'll just add such a column and give it a default value. Then I realised that the ApplicationId is a GUID, so it'd be a pain to do this. Not hard exactly, but until I roll out my DB I can't predict what the GUID is going to be. I feel like I'm missing something, or going about things the wrong way. What am I supposed to do?
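
    For reference, a minimal sketch of the ProviderUserKey route mentioned above (assuming the SqlMembershipProvider, where the key is the UserId GUID from aspnet_Users, so it can be used directly as the FK value in an application-specific table):

    ```csharp
    using System;
    using System.Web.Security;

    public static class CurrentUser
    {
        // With SqlMembershipProvider, ProviderUserKey is the aspnet_Users UserId GUID.
        public static Guid GetUserId(string userName)
        {
            MembershipUser user = Membership.GetUser(userName);
            if (user == null)
                throw new InvalidOperationException("Unknown user: " + userName);
            return (Guid)user.ProviderUserKey;
        }
    }
    ```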

    Read the article

  • Where do objects merge/join data in a 3-tier model?

    - by BerggreenDK
    It's probably a simple 3-tier problem. I just want to make sure we use best practice for this, and I am not that familiar with the structures yet. We have the 3 tiers:

    GUI: ASP.NET for the presentation layer (first platform)
    BAL: business layer handling the logic on a web server in C#, so we can use it both for WebForms/MVC and web services
    DAL: LINQ to SQL in the data layer, returning business objects, not LINQ entities
    DB: the database will be Microsoft SQL Server/Express (haven't decided yet)

    Think of a setup where we have a database of [Person]s. They can all have multiple [Address]es, and we have a complete list of all [PostalCode]s and corresponding city names, etc. The deal is that we have joined a lot of details from other tables.

    {Relations}/[tables]
    [Person]:1 --- N:{PersonAddress}:M --- 1:[Address]
    [Address]:N --- 1:[PostalCode]

    Now we want to build the DAL for Person. How should the PersonBO look, and when do the joins occur? Is it a business-layer problem to fetch all city names and possible addresses per Person, or should the DAL complete all this before returning the PersonBO to the BAL?

    ```csharp
    class PersonBO
    {
        public int ID { get; set; }
        public string Name { get; set; }
        public List<AddressBO> Addresses { get; set; } // Question #1
    }
    // Q1: do we retrieve the objects before returning the PersonBO, and should it be an array instead?
    // Or is this totally wrong for n-tier/3-tier?

    class AddressBO
    {
        public int ID { get; set; }
        public string StreetName { get; set; }
        public int PostalCode { get; set; } // Question #2
    }
    // Q2: do we do the lookup here or just leave the PostalCode for a later lookup?
    ```

    Can anyone explain in what order to pull which objects? Constructive criticism is very welcome. :o)
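
    For what it's worth, one common arrangement (a sketch only; MyDataContext and the entity property names are placeholders): let the DAL own the LINQ to SQL context, perform the joins, and hand back a fully populated PersonBO, so the BAL and GUI never see LINQ entities.

    ```csharp
    using System.Linq;

    public class PersonDal
    {
        // Sketch: the DAL does the joins and maps entities to business objects before returning.
        public PersonBO GetPerson(int personId)
        {
            using (var db = new MyDataContext())   // placeholder LINQ to SQL data context
            {
                var p = db.Persons.Single(x => x.ID == personId);
                return new PersonBO
                {
                    ID = p.ID,
                    Name = p.Name,
                    Addresses = p.PersonAddresses.Select(pa => new AddressBO
                    {
                        ID = pa.Address.ID,
                        StreetName = pa.Address.StreetName,
                        PostalCode = pa.Address.PostalCode   // the city-name lookup could also be resolved here
                    }).ToList()
                };
            }
        }
    }
    ```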

    Read the article

  • MS Access SQL - where clause to get year from date based on the year - data located in MS Access form

    - by primus285
    OK, back again. I have a problem getting a drop-down list to populate based on information in two fields. I have got the SQL correct to select just one year if I put DateValue('01/01/2001') in both places, but I am now trying to get it to grab the year from the MS Access form, another drop-down named "cboYear". I'd hate to have to do something in VB unless necessary. So far I have gotten this to pull up something (it's always incorrect):

    ```sql
    SELECT DISTINCT Database_New.ASEC FROM Database_New
    WHERE Database_New.Date >= DateValue('01/01/' & [cboYear])
      And Database_New.Date <= DateValue('12/31/' & [cboYear]);
    ```

    and these:

    ```sql
    SELECT DISTINCT Database_New.ASEC FROM Database_New
    WHERE Database_New.Date >= DateValue('01/01/' + [cboYear])
      And Database_New.Date <= DateValue('12/31/' + [cboYear]);

    SELECT DISTINCT Database_New.ASEC FROM Database_New
    WHERE Database_New.Date >= DateValue('01/01/' AND [cboYear])
      And Database_New.Date <= DateValue('12/31/' AND [cboYear]);
    ```

    Both give errors saying the expression is too complex to compute. It's probably something simple, but where do I go from here?

    Read the article

  • If we make a number every millisecond, how much data would we have in a day?

    - by Roger Travis
    I'm a bit confused here... I'm being offered a project where there would be an array of certain sensors that would give off a reading every millisecond (yes, 1000 readings a second). Each reading would be a 3 or 4 digit number, for example 818 or 1529. These readings need to be stored in a database on a server and accessed remotely. I have never worked with such large amounts of data. What do you think: how much data, in terms of MB, would the readings from one sensor produce in a day? 4 (digits) x 1000 x 60 x 60 x 24 = 345,600,000 bits... right? About 42 MB per day... doesn't seem too bad, right? Therefore a DB of, say, 1 GB would hold 23 days of info from 1 sensor, correct? I understand that MySQL and PHP probably would not be able to handle it... What would you suggest, maybe some apps? Azure? Oracle? Thanks!
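
    As a sanity check on the arithmetic above (a rough estimate only; the per-value, timestamp, and overhead byte sizes below are assumptions, not measurements), a day of 1 kHz samples is 86,400,000 rows, so the daily volume lands far above 42 MB once each reading is stored with a timestamp and row overhead:

    ```csharp
    using System;

    class StorageEstimate
    {
        static void Main()
        {
            // Back-of-the-envelope estimate; the byte sizes below are assumptions.
            const long readingsPerDay = 1000L * 60 * 60 * 24;  // 86,400,000 readings per sensor per day

            const int valueBytes = 2;         // a 3-4 digit number fits in a 16-bit integer
            const int timestampBytes = 8;     // e.g. a 64-bit millisecond timestamp
            const int rowOverheadBytes = 10;  // rough per-row storage overhead, engine-dependent

            long bytesPerDay = readingsPerDay * (valueBytes + timestampBytes + rowOverheadBytes);
            Console.WriteLine("{0:N0} MB per sensor per day", bytesPerDay / (1024.0 * 1024.0));
            // ~1,650 MB/day with timestamps and overhead; ~165 MB/day for the 2-byte values alone.
        }
    }
    ```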

    Read the article

  • Grails - Removing an item from a hasMany association List on data bind?

    - by ecrane
    Grails offers the ability to automatically create and bind domain objects to a hasMany List, as described in the Grails user guide. So, for example, if my domain object "Author" has a List of many "Book" objects, I could create and bind these using the following markup (from the user guide):

    ```html
    <g:textField name="books[0].title" value="the Stand" />
    <g:textField name="books[1].title" value="the Shining" />
    <g:textField name="books[2].title" value="Red Madder" />
    ```

    In this case, if any of the books specified don't already exist, Grails will create them and set their titles appropriately. If there are already books at the specified indices, their titles will be updated and they will be saved. My question is: is there some easy way to tell Grails to remove one of those books from the 'books' association on data bind? The most obvious way would be to omit the form element that corresponds to the domain instance you want to delete; unfortunately, this does not work, as per the user guide: "Then Grails will automatically create a new instance for you at the defined position. If you 'skipped' a few elements in the middle ... then Grails will automatically create instances in between." I realize that a specific solution could be engineered as part of a command object or a particular controller; however, the need for this functionality appears repeatedly throughout my application, across multiple domain objects and for associations of many different types of objects. A general solution, therefore, would be ideal. Does anyone know if something like this is included in Grails?

    Read the article

  • jQuery autocomplete problem

    - by heffaklump
    I'm using jQuery's Autocomplete plugin, but it doesn't autocomplete upon entering anything. Any ideas why it doesn't work? The basic example works, but not mine.

    ```javascript
    var ppl = {"ppl":[{"name":"peterpeter", "work":"student"}, {"name":"piotr","work":"student"}]};

    var options = {
        matchContains: true, // so we can search inside the string too
        minChars: 2,         // this sets autocomplete to begin from X characters
        dataType: 'json',
        parse: function(data) {
            var parsed = [];
            data = data.ppl;
            for (var i = 0; i < data.length; i++) {
                parsed[parsed.length] = {
                    data: data[i],        // the entire JSON entry
                    value: data[i].name,  // the default display value
                    result: data[i].name  // to populate the input element
                };
            }
            return parsed;
        },
        // To format the data returned by the autocompleter for display
        formatItem: function(item) {
            return item.name;
        }
    };

    $('#inputplace').autocomplete(ppl, options);
    ```

    OK, updated with the markup:

    ```html
    <input type="text" id="inputplace" />
    ```

    So, when entering for example "peter" in the input field, no autocomplete suggestions appear. It should give "peterpeter", but nothing happens. And one more thing: using this example works perfectly:

    ```javascript
    var data = "Core Selectors Attributes Traversing Manipulation CSS Events Effects Ajax Utilities".split(" ");
    $("#inputplace").autocomplete(data);
    ```

    Read the article

  • How to implement conditional rendering in JS?

    - by mare
    Below is the JS (jQuery) code of the autocomplete's result function. You can see there are some lines where I print out <li>s containing some data properties (which come in as the result of the autocomplete's AJAX call). How could I rewrite this so that each <li> would be conditionally rendered based on whether the property contains any value, be it an int, a non-empty/non-whitespace string, or something else that can be represented as a string?

    ```javascript
    $(".clients-dropdown").result(function (event, data, formatted) {
        if (data) {
            // set the hidden input that we need for Client entity rematerialize
            $(".client-id").val(data.client_id);

            if (data.ClientName && data.Address1 && data.postalcode && data.postname) {
                $(".client-address").html(
                    "<li>" + data.ClientName + "</li>" +
                    "<li>" + data.Address1 + "</li>" +
                    "<li>" + data.postalcode + " " + data.postname + "</li>"
                );
                $(".client-details").html(
                    "<li>" + data.PrettyId + "</li>" +
                    "<li>" + data.VatNo + "</li>" +
                    "<li>" + data.Phone + "</li>" +
                    "<li>" + data.Mobile + "</li>" +
                    "<li>" + data.Email1 + "</li>" +
                    "<li>" + data.Contact + "</li>"
                );
            }
        }
    });
    ```

    Also, for the AJAX call, should my server-side action return null when there's a null for a property in the database, or an empty string?

    Read the article

  • How do I determine if form field is empty with jQuery?

    - by user338413
    I've got a form with two fields, firstname and lastname. The user does not have to fill in the fields. When the user clicks the submit button, a jQuery dialog displays the data the user entered in the form. I only want to show the data for the fields that were actually entered. I'm trying to do an if statement and use the length() function, but it isn't working. Help would be great! Here is my dialog jQuery script:

    ```javascript
    $(function(){
        // Initialize the validation dialog
        $('#validation_dialog').dialog({
            autoOpen: false,
            height: 600,
            width: 600,
            modal: true,
            resizable: false,
            buttons: {
                "Submit Form": function() {
                    document.account.submit();
                },
                "Cancel": function() {
                    $(this).dialog("close");
                }
            }
        });

        // Populate the dialog with form data
        $('form#account').submit(function(){
            $("p#dialog-data").append('<span>First Name: </span>');
            $("p#dialog-data").append('<span>');
            $("p#dialog-data").append($("input#firstname").val());
            $("p#dialog-data").append('</span><br/>');

            if (("input#lastname").val().length) > 0) {
                $("p#dialog-data").append('<span>Last Name: </span>');
                $("p#dialog-data").append('<span>');
                $("p#dialog-data").append($("input#lastname").val());
                $("p#dialog-data").append('</span><br/>');
            };

            $('#validation_dialog').dialog('open');
            return false;
        });
    });
    ```

    Read the article

  • How to efficiently store and update binary data in MongoDB?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes. I was looking for some $append_bytes and $replace_bytes types of modifiers, but it appears that the best I can do is $push for arrays. It seems like this would be doable by performing seek-write type operations if I had access somehow to the underlying BSON on disk, but it does not appear that there is any way to do this in MongoDB (and probably for good reason). If I were instead to just query this binary array, edit or add to it, and then update the document by rewriting the entire field, how costly would this be? Each binary array will be on the order of 1-2 MB, and updates occur once every 5 minutes across thousands of documents. Worse yet, there is no easy way to spread these out in time, and they will usually happen close to one another on the 5 minute intervals. Does anyone have a good feel for how disastrous this will be? Seems like it would be problematic. An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files on disk, and reference the filename from my MongoDB document. (I'm using Python and pymongo, so I was looking at pytables.) I'd prefer to avoid this if possible. Is there any other alternative that I am overlooking here? Thanks in advance.

    Read the article

  • Can I dispose a DataTable and still use its data later?

    - by Eduardo León
    Noob ADO.NET question: Can I do the following? (1) Retrieve a DataTable somehow. (2) Dispose it. (3) Still use its data (but not send it back to the database, or ask the database to update it). I have the following function, which is indirectly called by every WebMethod in a Web Service of mine:

    ```csharp
    public static DataTable GetDataTable(string cmdText, SqlParameter[] parameters)
    {
        // Read the connection string from the web.config file.
        Configuration configuration = WebConfigurationManager.OpenWebConfiguration("/WSProveedores");
        ConnectionStringSettings connectionString = configuration.ConnectionStrings.ConnectionStrings["..."];

        SqlConnection connection = null;
        SqlCommand command = null;
        SqlParameterCollection parameterCollection = null;
        SqlDataAdapter dataAdapter = null;
        DataTable dataTable = null;

        try
        {
            // Open a connection to the database.
            connection = new SqlConnection(connectionString.ConnectionString);
            connection.Open();

            // Specify the stored procedure call and its parameters.
            command = new SqlCommand(cmdText, connection);
            command.CommandType = CommandType.StoredProcedure;
            parameterCollection = command.Parameters;
            foreach (SqlParameter parameter in parameters)
                parameterCollection.Add(parameter);

            // Execute the stored procedure and retrieve the results in a table.
            dataAdapter = new SqlDataAdapter(command);
            dataTable = new DataTable();
            dataAdapter.Fill(dataTable);
        }
        finally
        {
            if (connection != null)
            {
                if (command != null)
                {
                    if (dataAdapter != null)
                    {
                        // Here the DataTable gets disposed.
                        if (dataTable != null)
                            dataTable.Dispose();
                        dataAdapter.Dispose();
                    }
                    parameterCollection.Clear();
                    command.Dispose();
                }
                if (connection.State != ConnectionState.Closed)
                    connection.Close();
                connection.Dispose();
            }
        }

        // However, I still return the DataTable as if nothing had happened.
        return dataTable;
    }
    ```

    Read the article

  • Cache layer for MVC - Model or controller?

    - by Industrial
    Hi everyone, I am having second thoughts about where to implement the caching part. Where is the most appropriate place to implement it, do you think: inside every model, or in the controller?

    Approach 1 (pseudo-code):

    ```php
    // mycontroller.php
    MyController extends Controller_class {
        function index() {
            $data = $this->model->getData();
            echo $data;
        }
    }

    // myModel.php
    MyModel extends Model_Class {
        function getData() {
            $data = memcached->get('data');
            if (!$data) {
                $query->SQL_QUERY("Do query!");
            }
            return $data;
        }
    }
    ```

    Approach 2:

    ```php
    // mycontroller.php
    MyController extends Controller_class {
        function index() {
            $dataArray = $this->memcached->getMulti('data', 'data2');
            foreach ($dataArray as $key) {
                if (!$key) {
                    $data = $this->model->getData();
                    $this->memcached->set($key, $data);
                }
            }
            echo $data;
        }
    }

    // myModel.php
    MyModel extends Model_Class {
        function getData() {
            $query->SQL_QUERY("Do query!");
            return $data;
        }
    }
    ```

    Thoughts: Approach 1 has no multiget/multiset, so if a high number of keys were returned there would be overhead, but it is easier to maintain since all database/cache handling is in each model. Approach 2 is better performance-wise since multiset/multiget is used, but it requires more code and is harder to maintain. Tell me what you think!

    Read the article

  • Why does this ActionFilterAttribute not import data to the ViewModel?

    - by Tomas Lycken
    I have the following attribute:

    ```csharp
    public class ImportStatusAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuted(ActionExecutedContext filterContext)
        {
            var model = (IHasStatus)filterContext.Controller.ViewData.Model;
            model.Status = (StatusMessageViewModel)filterContext.Controller.TempData["status"];
            filterContext.Controller.ViewData.Model = model;
        }
    }
    ```

    which I test with the following test method (the first of several I'll write once this one passes...):

    ```csharp
    [TestMethod]
    public void OnActionExecuted_ImportsStatusFromTempDataToModel()
    {
        // Arrange
        Expect(new
        {
            Status = new StatusMessageViewModel()
            {
                Subject = "The test",
                Predicate = "has been tested"
            },
            Key = "status"
        });

        var filterContext = new Mock<ActionExecutedContext>();
        var model = new Mock<IHasStatus>();
        var tempData = new TempDataDictionary();
        var viewData = new ViewDataDictionary(model.Object);
        var controller = new FakeController() { ViewData = viewData, TempData = tempData };

        tempData.Add(expected.Key, expected.Status);
        filterContext.Setup(c => c.Controller).Returns(controller);

        var attribute = new ImportStatusAttribute();

        // Act
        attribute.OnActionExecuted(filterContext.Object);

        // Assert
        Assert.IsNotNull(model.Object.Status, "The status was not exported");
        Assert.AreEqual(model.Object.Status.ToString(), ((StatusMessageViewModel)expected.Status).ToString(), "The status was not the expected");
    }
    ```

    (Expect() is a method that saves some expectations in the expected object...) When I run the test, it fails on the first assertion, and I can't get my head around why. Debugging, I can see that model is populated correctly, and that (StatusMessageViewModel)filterContext.Controller.TempData["status"] has the correct data. But after

    ```csharp
    model.Status = (StatusMessageViewModel)filterContext.Controller.TempData["status"];
    ```

    model.Status is still null in my watch window. Why can't I do this?

    Read the article

  • How can I get unique values from a data table using DQL?

    - by piemesons
    I have a table with a column in which various values are stored. I want to retrieve unique values from that table using DQL.

    ```php
    Doctrine_Query::create()
        ->select('rec.school')
        ->from('Records rec')
        ->where("rec.city = '$city'")
        ->execute();
    ```

    Now I want only unique values. Can anybody tell me how to do that?

    Edit - table structure:

    ```sql
    CREATE TABLE IF NOT EXISTS `records` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `state` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
      `city` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
      `school` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=16334 ;
    ```

    This is the query I am using:

    ```php
    Doctrine_Query::create()
        ->select('DISTINCT rec.city')
        ->from('Records rec')
        ->where("rec.state = '$state'")
        // ->getSql();
        ->execute();
    ```

    Generating the SQL for this gives me:

    ```sql
    SELECT DISTINCT r.id AS r__id, r.city AS r__city FROM records r WHERE r.state = 'AR'
    ```

    Now check the generated SQL: DISTINCT is applied to the 'id' column, whereas I want DISTINCT on the city column. Anybody know how to fix this?

    Edit 2: The id is unique because it's an auto-increment value. Yes, I have some real duplicates in the city column, like Delhi and Delhi, so when I try to fetch data I get Delhi two times. How can I make a query like this, which would give me the proper output?

    ```sql
    SELECT DISTINCT rec.city WHERE state = 'xyz';
    ```

    Edit 3: Anybody who can tell me how to figure out this query?

    Read the article

  • This is my first time asking here; I wanted to create a linked list while sorting it. Thanks

    - by user2738718
    ```java
    package practise;

    public class Node {
        // node contains data and next
        public int data;
        public Node next;

        // constructor
        public Node(int data, Node next) {
            this.data = data;
            this.next = next;
        }

        // list insert method
        public void insert(Node list, int s) {
            // case 1: only one element in the list
            if (list.next == null && list.data > s) {
                Node T = new Node(s, list);
            } else if (list.next == null && list.data < s) {
                Node T = new Node(s, null);
                list.next = T;
            }

            // case 2: more than 1 element in the list
            // 3 elements present: I set prev to null and curr to list, then performed a while loop
            if (list.next.next != null) {
                Node curr = list;
                Node prev = null;
                while (curr != null) {
                    prev = curr;
                    curr = curr.next;
                    if (curr.data > s && prev.data < s) {
                        Node T = new Node(s, curr);
                        prev.next = T;
                    }
                }
                // case 3: at the end, check the data
                if (prev.data < s) {
                    Node T = new Node(s, null);
                    prev.next = T;
                }
            }
        }
    }
    ```

    This is a homework problem. I created the insert method so I can check each value and place it in the correct position, so my list stays sorted. This is how far I got; please correct me if I am wrong. I keep inserting nodes in the main method, using Node root = new Node(); and root.insert() to add.

    Read the article

  • Using Lucene to index private data, should I have a separate index for each user or a single index

    - by Nathan Bayles
    I am developing an Azure-based website and I want to provide search capabilities using Lucene. (Structured JSON objects would be indexed and stored in Lucene, and other content such as Word documents would be indexed in Lucene but stored in blob storage.) I want the search to be secure, such that one user would never see a document belonging to another user. I want to allow ad-hoc searches as typed by the user. Lastly, I want to query programmatically to return predefined sets of data, such as "all notes for user X". I think I understand how to add properties to each document to achieve these 3 objectives. (I am listing them here so that anyone kind enough to answer will have a better idea of what I am trying to do.) My questions revolve around performance and security. Can I improve document security by having a separate index for each user, or is including the user's ID as a parameter in each search sufficient? Can I improve indexing speed and total throughput of the system by having a separate index for each user? My thinking is that having separate indexes would allow me to scale the system by having multiple index writers (perhaps even on different server instances) working at the same time, each on their own index. Any insight would be greatly appreciated. Regards, Nate
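
    For reference, a minimal sketch of the single-index, per-user-filter approach (assuming Lucene.Net 3.x; the field name and query composition here are illustrative, not prescriptive):

    ```csharp
    using Lucene.Net.Documents;
    using Lucene.Net.Index;
    using Lucene.Net.Search;

    public static class UserScopedSearch
    {
        // Index time: stamp every document with the owning user's id as a non-analyzed field.
        public static Document Tag(Document doc, string userId)
        {
            doc.Add(new Field("userId", userId, Field.Store.YES, Field.Index.NOT_ANALYZED));
            return doc;
        }

        // Query time: AND the user's parsed query with a mandatory userId term,
        // so one user's results can never include another user's documents.
        public static Query ScopeToUser(Query userQuery, string userId)
        {
            var combined = new BooleanQuery();
            combined.Add(userQuery, Occur.MUST);
            combined.Add(new TermQuery(new Term("userId", userId)), Occur.MUST);
            return combined;
        }
    }
    ```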

    Read the article
