Search Results

Search found 12189 results on 488 pages for 'ds store'.

  • MySQL Datefields: duplicate or calculate?

    - by Konerak
    We are using a table with a structure imposed upon us more than 10 years ago. We are allowed to add columns, but urged not to change existing columns. Certain columns are meant to represent dates, but are stored in different formats. Amongst others:

    * CHAR(6): YYMMDD
    * CHAR(6): DDMMYY
    * CHAR(8): YYYYMMDD
    * CHAR(8): DDMMYYYY
    * DATE
    * DATETIME

    Since we now would like to do some more complex queries using advanced date functions, my manager proposed to *duplicate those problem columns* to a proper FORMATTED_OLDCOLUMNNAME column using a DATE or DATETIME format. Is this the way to go? Couldn't we just use the STR_TO_DATE function each time we accessed the columns? To avoid every query having to copy-paste the function, I could still work with a view or a stored procedure, but duplicating data to avoid recalculation sounds wrong. Solutions I see (I guess I prefer 2.2.1):

    1. Physically duplicate columns
       1.1 In the same table
           1.1.1 Added by each script that does a modification (INSERT/UPDATE/REPLACE/...)
           1.1.2 Maintained by a trigger on each modification
       1.2 In a separate table
           1.2.1 Added by each script that does a modification (INSERT/UPDATE/REPLACE/...)
           1.2.2 Maintained by a trigger on each modification
    2. On-demand transformation
       2.1 Each query has to perform the transformation
           2.1.1 Using copy-paste in the source code
           2.1.2 Using a library
           2.1.3 Using a STORED PROCEDURE
       2.2 A view performs the transformation
           2.2.1 A separate view replacing the entire table
           2.2.2 A separate view just adding the date-fields for the primary keys

    Am I right to say it's better to recalculate than to store? And would a view be a good solution?
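
    For the view-based option (2.2), a minimal MySQL sketch could look like the following; the table and column names (legacy_orders, order_dt_ddmmyy, ship_dt_yyyymmdd) are hypothetical stand-ins for the real legacy columns:

        CREATE OR REPLACE VIEW legacy_orders_dated AS
        SELECT o.*,
               STR_TO_DATE(o.order_dt_ddmmyy,  '%d%m%y') AS order_date,  -- CHAR(6)  DDMMYY
               STR_TO_DATE(o.ship_dt_yyyymmdd, '%Y%m%d') AS ship_date    -- CHAR(8)  YYYYMMDD
        FROM legacy_orders o;

    Queries can then filter and sort on order_date directly; the conversion is recalculated on each read, so nothing is duplicated and nothing can drift out of sync.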

  • find and replace values in csv using PHP

    - by peirix
    I'd think there was a question on this already, but I can't find one. Maybe the solution is too easy... Anyway, I have a csv and want to let the user change the values based on a name. I've already sorted out creating new name+value pairs using the fopen('a') mode, using jQuery to send the AJAX call with newValue and newName. But say the content looks like this:

        host|http:www.stackoverflow.com
        folder|/questions/
        folder2|/users/

    And now I want to change the folder value. So I'll send in folder as oldName and /tags/ as newValue. What's the best way to overwrite the value? The order in the list doesn't matter, and the name will always be on the left, followed by a | (pipe), the value, and then a new-line. My first thought was to read the list, store it in an array, search all the [0]'s for oldName, then change the [1] that belongs to it, and then write it back to a file. But I feel there is a better way around this? Any ideas? Maybe regex?
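
    A minimal PHP sketch of the read-modify-write approach described above (the file name and helper name are hypothetical; it assumes every line is name|value):

        <?php
        // Overwrite the value stored for $oldName in a pipe-delimited file.
        function updateValue($path, $oldName, $newValue) {
            $lines = file($path, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
            foreach ($lines as $i => $line) {
                list($name, $value) = explode('|', $line, 2);
                if ($name === $oldName) {
                    $lines[$i] = $name . '|' . $newValue;
                }
            }
            file_put_contents($path, implode("\n", $lines) . "\n");
        }

        updateValue('settings.txt', 'folder', '/tags/');
        ?>

    For a file this small, rewriting the whole thing is simpler and safer than trying to patch it in place with a regex.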

  • Beginner MVC question - Correct approach to render out a List and details?

    - by fizzer
    I'm trying to set up a page where I display a list of items and the details of the selected item. I have it working but wonder whether I have followed the correct approach. I'll use customers as an example. I have set the aspx page to inherit from an IEnumerable of Customers. This seems to be the standard approach to display the list of items. For the details I have added a Customer user control which inherits from Customer. I think I'm on the right track so far, but I was a bit confused as to where I should store the id of the customer whose details I intend to display. I wanted to make the id optional in the controller action so that the page could be hit using "/customers" or "/customers/1", so I made the arg optional and stored the id in the ViewData like this:

        public ActionResult Customers(string id = "0")
        {
            Models.DBContext db = new Models.DBContext();
            var cList = db.Customers.OrderByDescending(c => c.CustomerNumber);
            if (id == "0")
            {
                ViewData["CustomerNumber"] = cList.First().CustomerNumber.ToString();
            }
            else
            {
                ViewData["CustomerNumber"] = id;
            }
            return View("Customers", cList);
        }

    I then rendered the user control using RenderPartial in the front end:

        <% var CustomerList = from x in Model
                              where x.CustomerNumber == Convert.ToInt32(ViewData["CustomerNumber"])
                              select x;
           Customer c = (Customer)CustomerList.First();
        %>
        <% Html.RenderPartial("Customer", c); %>

    Then I just have an ActionLink on each listed item:

        <%: Html.ActionLink("Select", "Customers", new { id = item.CustomerNumber }) %>

    It all seems to work, but as MVC is new to me I would just be interested in others' thoughts on whether this is a good approach.
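
    As an aside, a slightly tighter variant of the action might take a nullable int, so the "no id selected" case needs no magic "0" string. This is only a sketch, not tested against the original project:

        public ActionResult Customers(int? id)
        {
            var db = new Models.DBContext();
            var cList = db.Customers.OrderByDescending(c => c.CustomerNumber);
            // Fall back to the first customer when no id was supplied.
            int selected = id ?? cList.First().CustomerNumber;
            ViewData["CustomerNumber"] = selected;
            return View("Customers", cList);
        }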

  • Updating a composite primary key

    - by VBCSharp
    I am struggling with the philosophical discussions about whether or not to use composite primary keys on my SQL Server database. I have always used surrogate keys in the past, and I am challenging myself by leaving my comfort zone to try something different. I have read many discussions but can't come to any kind of conclusion yet. The struggle I have is when I need to update a record with a composite PK. For example, the record in question is like this: ContactID, RoleID, EffectiveDate, TerminationDT. The PK in this case is ContactID, RoleID, and EffectiveDate; TerminationDT can be null. If, in my UI, the user changes the RoleID, then I need to update the record. Using a surrogate key I can do:

        UPDATE Table SET RoleID = 1 WHERE SurrogateID = Z

    However, the composite-key way, once one of the fields in the composite key changes, I have no way to reference the old record to update it without maintaining somewhere in the UI a reference to the old values. I do not bind data sources in my UI: I open a connection, get the data and store it in a bucket, then close the connection. What are everyone's opinions? Thanks.
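
    With a composite key, the UPDATE has to name all the old key values in the WHERE clause, which is exactly why the UI (or a cached copy of the row as originally loaded) must hold on to them. A minimal sketch with a hypothetical table name and literal values:

        UPDATE ContactRole
        SET    RoleID = 2
        WHERE  ContactID     = 17
          AND  RoleID        = 1            -- the old value
          AND  EffectiveDate = '2010-01-01';

    That is the crux of the trade-off: a surrogate key gives every row a handle that never changes, while with a composite key the handle itself is editable.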

  • php - using file() incrementally?

    - by NeedBeerStat
    I'm not sure if this is possible, I've been googling for a solution... But, essentially, I have a very large file, the lines of which I want to store in an array. Thus, I'm using file(), but is there a way to do that in batches? So that every, say, 100 lines it creates, it "pauses"? I think there's likely to be something I can do with a foreach loop or something, but I'm not sure that I'm thinking about it the right way... Like:

        $i = 0;
        $j = 0;
        $throttle = 100;
        foreach ($files as $k => $v) {
            if ($i < $j + $throttle && $i > $j) {
                $lines[] = file($v);
                // Do some other stuff, like importing into a db
            }
            $i++;
            $j++;
        }

    But, I think that won't really work because $i & $j will always be equal... Anyway, feeling muddled... Can someone help me think a lil' clearer?
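
    One way to read a single large file in fixed-size batches is to use fgets() in a loop instead of file(), which always slurps the whole file at once. A sketch (the file name and processBatch helper are hypothetical):

        <?php
        $handle   = fopen('big_file.txt', 'r');
        $batch    = array();
        $throttle = 100;

        while (($line = fgets($handle)) !== false) {
            $batch[] = rtrim($line, "\n");
            if (count($batch) >= $throttle) {
                processBatch($batch);   // e.g. import these 100 lines into the db
                $batch = array();       // start the next batch
            }
        }
        if ($batch) {
            processBatch($batch);       // leftover lines after the last full batch
        }
        fclose($handle);
        ?>

    Memory use stays bounded at $throttle lines no matter how large the file is.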

  • How can I compare the performance of log() and fp division in C++?

    - by Ventzi Zhechev
    Hi, I'm using a log-based class in C++ to store very small floating-point values (as the values otherwise go beyond the range of double). As I'm performing a large number of multiplications, this has the added benefit of converting the multiplications to sums. However, at a certain point in my algorithm, I need to divide a standard double value by an integer value and then do a *= to a log-based value. I have overloaded the *= operator for my log-based class; the right-hand-side value is first converted to a log-based value by running log() and then added to the left-hand-side value. Thus the operations actually performed are floating-point division, log() and floating-point summation. My question is whether it would be faster to first convert the denominator to a log-based value, which would replace the floating-point division with floating-point subtraction, yielding the following chain of operations: twice log(), floating-point subtraction, floating-point summation. In the end, this boils down to whether floating-point division is faster or slower than log(). I suspect that a common answer would be that this is compiler and architecture dependent, so I'll say that I use gcc 4.2 from Apple on darwin 10.3.0. Still, I hope to get an answer with a general remark on the speed of these two operations and/or an idea of how to measure the difference myself, as there might be more going on here, e.g. executing the constructors that do the type conversion etc. Cheers!
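
    A rough way to measure the difference is a micro-benchmark that runs each operation many times over inputs the compiler cannot constant-fold, and compares CPU time. A minimal sketch using clock() (available on the gcc 4.2 mentioned above); the volatile accumulator keeps the optimizer from deleting the loops:

        #include <cmath>
        #include <cstdio>
        #include <ctime>

        int main() {
            const int N = 10000000;
            volatile double sink = 0.0;
            double x = 1.2345;

            std::clock_t t0 = std::clock();
            for (int i = 0; i < N; ++i)
                sink += (x + i) / 7.0;      // floating-point division, varying argument
            std::clock_t t1 = std::clock();
            for (int i = 0; i < N; ++i)
                sink += std::log(x + i);    // log(), varying argument
            std::clock_t t2 = std::clock();

            std::printf("div: %.3fs  log: %.3fs\n",
                        double(t1 - t0) / CLOCKS_PER_SEC,
                        double(t2 - t1) / CLOCKS_PER_SEC);
            return 0;
        }

    On most hardware a single division is considerably cheaper than log(), so replacing one division with two log() calls plus a subtraction is unlikely to win, but measuring on the target machine is the only reliable answer.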

  • Convert asp.net application to windows forms app

    - by rogdawg
    I have written and deployed an ASP.NET application that is pretty complex. It uses XSL transformations to create web forms for a large variety of data objects. The data comes from the database as XML via a web service. Now I need to create a Windows desktop application that will provide a small subset of the web application's functionality to a user who may not have access to the web (working in remote areas). I will provide the data syncing using the MS Sync Framework, and I will have the desktop app use a local data store. I would like to use the same XSLT files in the desktop app that I use in the web app for form creation, so that if changes are made, the desktop app can update itself when it connects and syncs its data. But I am wondering how to replicate the ASP.NET code-behind logic of my web app in the Windows forms. If I use a browser control to render the XSL transformation result, how can I handle click events, etc., in the form? Also, can I launch other windows as "dialog boxes" from my Windows forms (I do this in my web app using RadControls functionality)? Thanks for any advice you can give.
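
    One way to get code-behind-style events out of HTML rendered in a WinForms WebBrowser control is the ObjectForScripting bridge: the page's onclick handlers (which the XSLT could emit) call window.external, and .NET routes those calls to a COM-visible object. A minimal sketch; the class and method names are hypothetical:

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        [ComVisible(true)]
        public class ScriptBridge
        {
            // Called from the page: window.external.OnSave('42')
            public void OnSave(string recordId)
            {
                MessageBox.Show("Save clicked for record " + recordId);
            }
        }

        public class MainForm : Form
        {
            public MainForm()
            {
                var browser = new WebBrowser { Dock = DockStyle.Fill };
                browser.ObjectForScripting = new ScriptBridge();
                // In the real app, DocumentText would come from the XSL transform output.
                browser.DocumentText =
                    "<button onclick=\"window.external.OnSave('42')\">Save</button>";
                Controls.Add(browser);
            }

            [STAThread]
            static void Main() { Application.Run(new MainForm()); }
        }

    Dialogs are the easy part on the WinForms side: any Form can be shown with ShowDialog(), which plays the role the RadControls dialog windows play in the web app.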

  • What collection object is appropriate for fixed ordering of values?

    - by makerofthings7
    Scenario: I am tracking several performance counters and have a CounterDescription[] correlated to a DataSnapshot[]... where CounterDescription[n] describes the data loaded within DataSnapshot[n]. I want to expose an easy-to-use API within C# that will allow for the easy and efficient expansion of the arrays. For example:

        CounterDescription[0] = Humidity;
        DataSnapshot[0] = .9;
        CounterDescription[1] = Temp;
        DataSnapshot[1] = 63;

    My upload object is defined like this. Note how my intent is to correlate many data snapshots with a datetime reference, using the offset of the data to refer to its meaning. This was determined to be the most efficient way to store the data on the back-end, and has now reflected itself into the following structure:

        public class myDataObject
        {
            [DataMember]
            public SortedDictionary<DateTime, float[]> Pages { get; set; }

            /// <summary>
            /// An array that identifies what each position in the array is supposed to be
            /// </summary>
            [DataMember]
            public CounterDescription[] Counters { get; set; }
        }

    I will need to expand each of these arrays (float[] and CounterDescription[]), but whatever data already exists must stay at that relative offset. Which .NET objects support this? I think Array, LinkedList<T>, and List<T> are able to keep the data fixed in the right locations. What do you think?
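
    Of those, List<T> is the natural fit: it appends in amortized O(1) and never reorders existing elements, so offsets stay stable as long as nothing is inserted or removed in the middle (LinkedList<T> has no indexer at all, and a raw array needs manual Array.Resize calls). A sketch of the idea; the field names are hypothetical:

        using System;
        using System.Collections.Generic;

        public class CounterSet
        {
            private readonly List<string> names  = new List<string>();
            private readonly List<float>  values = new List<float>();

            // Appending keeps every existing index stable.
            public int Add(string name, float value)
            {
                names.Add(name);
                values.Add(value);
                return names.Count - 1;   // the new counter's permanent offset
            }

            public string NameAt(int i)  { return names[i]; }
            public float  ValueAt(int i) { return values[i]; }
        }

    The point is only that the two lists grow in lockstep, so offset n keeps meaning the same counter forever.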

  • Storing large numbers of varying size objects on disk

    - by Foredecker
    I need to develop a system for storing large numbers (10's to 100's of thousands) of objects. Each object is email-like: there is a main text body, and several ancillary text fields of limited size. A body will be from a few bytes to several KB in size. Each item will have a single unique ID (probably a GUID) that identifies it. The store will only be written to when an object is added to it. It will be read often. Deletions will be rare. The data is almost all human-readable text, so it will be readily compressible. A system that lets me issue the I/Os and manage the memory and caching would be ideal. I'm going to keep the indexes in memory, using them to map to the single (and primary) key for the objects. Once I have the key, I'll load the object from disk, or from the cache. The data management system needs to be part of my application: I do not want to depend on OS services or separately installed packages. Native (C++) would be best, but a managed (C#) thing would be ok. I believe that a database is an obvious choice, but this needs to be super-fast for looking up an object and loading it into memory. I am not experienced with database tech, and I'm concerned that general relational systems will not handle all this variable-sized data efficiently. (Note, this has nothing to do with my job; it's a personal project.) In your experience, what are the viable alternatives to a traditional relational DB? Or would a DB work well for this?
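
    For this access pattern (write-once, read-often, keyed by GUID), one classic non-relational arrangement is an append-only data file plus an in-memory index from GUID to file offset and length; embedded engines like SQLite or Berkeley DB are essentially packaged versions of the same idea. A compact C# sketch of the core shape only; it has no durability, compression, or compaction handling:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Text;

        public class BlobStore
        {
            private struct Entry { public long Offset; public int Length; }

            private readonly FileStream data;
            private readonly Dictionary<Guid, Entry> index =
                new Dictionary<Guid, Entry>();

            public BlobStore(string path)
            {
                // One data file; the index lives purely in memory.
                data = new FileStream(path, FileMode.OpenOrCreate,
                                      FileAccess.ReadWrite);
            }

            public void Put(Guid id, string body)
            {
                byte[] bytes = Encoding.UTF8.GetBytes(body);
                data.Seek(0, SeekOrigin.End);          // append-only writes
                var e = new Entry { Offset = data.Position, Length = bytes.Length };
                data.Write(bytes, 0, bytes.Length);
                index[id] = e;
            }

            public string Get(Guid id)
            {
                Entry e = index[id];
                byte[] bytes = new byte[e.Length];
                data.Seek(e.Offset, SeekOrigin.Begin);
                data.Read(bytes, 0, e.Length);         // one seek + one read per lookup
                return Encoding.UTF8.GetString(bytes);
            }
        }

    Each lookup is a dictionary hit plus a single seek-and-read, which is about as fast as a disk-backed fetch can get.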

  • C# XML parsing with LINQ storing directly to a struct?

    - by Luke
    Say I have the following XML (the repeated element is "chapters"):

        <root>
          <version>2.0</version>
          <type>fiction</type>
          <chapters>
            <chapter>1</chapter>
            <title>blah blah</title>
          </chapters>
          <chapters>
            <chapter>2</chapter>
            <title>blah blah</title>
          </chapters>
        </root>

    Would it be possible to parse the elements which I know will not be repeated in the XML and store them directly into the struct using LINQ? For example, could I do something like this for "version" and "type":

        // setup structs
        Book book = new Book();
        book.chapter = new Chapter();

        // use LINQ to parse the xml
        var bookData = from b in xmlDoc.Descendants("root")
                       select new
                       {
                           book.version = b.Element("version").Value,
                           book.type = b.Element("type").Value
                       };

    Then for "chapters", since I know there are multiple, I can do:

        var chapterData = from c in xmlDoc.Descendants("root")
                          select new { chapter = c.Element("chapters") };

        foreach (var ch in chapterData)
        {
            book.chapter.Add(getChapterData(ch.chapter));
        }
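
    As written, the first query won't compile: an anonymous type can only declare its own properties, not assign to an outer object. A working variant reads the singleton elements directly, no query needed; the Book/Chapter shapes below are assumptions sketched from the question:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        class Chapter { public int number; public string title; }
        class Book
        {
            public string version;
            public string type;
            public List<Chapter> chapters;
        }

        class Program
        {
            static void Main()
            {
                XElement root = XDocument.Load("book.xml").Element("root");

                Book book = new Book();
                // Non-repeated elements: read each one once, directly.
                book.version = root.Element("version").Value;
                book.type    = root.Element("type").Value;
                // Repeated element: project each <chapters> into a Chapter.
                book.chapters = (from c in root.Elements("chapters")
                                 select new Chapter
                                 {
                                     number = (int)c.Element("chapter"),
                                     title  = (string)c.Element("title")
                                 }).ToList();

                Console.WriteLine("{0} ({1}): {2} chapters",
                                  book.version, book.type, book.chapters.Count);
            }
        }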

  • C++ and its type system: How to deal with data with multiple types?

    - by sub
    "Introduction" I'm relatively new to C++. I went through all the basic stuff and managed to build 2-3 simple interpreters for my programming languages. The first thing that gave and still gives me a headache: Implementing the type system of my language in C++ Think of that: Ruby, Python, PHP and Co. have a lot of built-in types which obviously are implemented in C. So what I first tried was to make it possible to give a value in my language three possible types: Int, String and Nil. I came up with this: enum ValueType { Int, String, Nil }; class Value { public: ValueType type; int intVal; string stringVal; }; Yeah, wow, I know. It was extremely slow to pass this class around as the string allocator had to be called all the time. Next time I've tried something similar to this: enum ValueType { Int, String, Nil }; extern string stringTable[255]; class Value { public: ValueType type; int index; }; I would store all strings in stringTable and write their position to index. If the type of Value was Int, I just stored the integer in index, it wouldn't make sense at all using an int index to access another int, or? Anyways, the above gave me a headache too. After some time, accessing the string from the table here, referencing it there and copying it over there grew over my head - I lost control. I had to put the interpreter draft down. Now: Okay, so C and C++ are statically typed. How do the main implementations of the languages mentioned above handle the different types in their programs (fixnums, bignums, nums, strings, arrays, resources,...)? What should I do to get maximum speed with many different available types? How do the solutions compare to my simplified versions above?

  • Holding variables in memory, C++

    - by b-gen-jack-o-neill
    Today something strange came to my mind. When I want to hold some string in C (or C++) the old way, without using the string header, I just create an array and store that string in it. But I read that any variable definition in the local scope of a function ends up pushing those values onto the stack. So the string actually takes twice the memory needed: first, the push instructions are located in memory, and then, when they are executed (pushed onto the stack), another "copy" of the string is created. First the push instructions, then the stack space used for the one string. So, why is it this way? Why doesn't the compiler just add the string (or other variables) to the program image instead of creating it once again when executed? Yes, I know you cannot just have some data inside a program block, but it could just be attached to the end of the program, with some jump instruction before it. And then we would just point to that data, because it is stored in RAM when the program is executed. Thanks.
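
    The distinction the question is circling can be shown in a few lines of C++; whether the bytes are copied at runtime depends on how the string is declared:

        #include <cstdio>

        void demo() {
            const char* p = "hello";   // pointer to the literal in the program's
                                       // read-only data section: no copy is made
            char buf[]   = "hello";    // local array: the literal's bytes ARE
                                       // copied onto the stack on every call
            std::printf("%s %s\n", p, buf);
        }

        int main() { demo(); return 0; }

    So the "second copy" only exists for the array form, and it exists because a local array must be independently writable on each call. Declaring the data const (or static) lets the compiler keep a single copy in the binary, which is exactly the "attach it to the program and just point to it" scheme the question describes.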

  • Best approach to cache Counts from SQL tables ?

    - by pixel3cs
    I would like to develop a forum from scratch, with special needs and customization. I would like to prepare my forum for intensive usage, and am wondering how to cache things like user post counts and user reply counts. Having only three tables - tblForum, tblForumTopics, tblForumReplies - what is the best approach to caching the user topic and reply counts? Think of a simple scenario: a user clicks a link and opens the Replies.aspx?id=x&page=y page, and starts reading replies. On the HTTP request, the server will run an SQL command which will fetch all replies for that page, also inner joining with tblForumReplies to find out the number of replies made by each user that replied:

        select tblForumReplies.*, tblFR.TotalReplies
        from tblForumReplies
        inner join (
            select IdRepliedBy, count(*) as TotalReplies
            from tblForumReplies
            group by IdRepliedBy
        ) as tblFR on tblFR.IdRepliedBy = tblForumReplies.IdRepliedBy

    Unfortunately this approach is very CPU intensive, and I would like to see your ideas of how to cache things like table counts. If I count replies for each user on insert/delete and store it in a separate field, how do I keep it synchronized with manual data changes? Suppose I will manually delete replies from SQL.
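
    One common pattern is a denormalized counter column maintained by triggers, so even ad-hoc deletes run directly against SQL keep the count honest. A T-SQL sketch, assuming SQL Server given the .aspx pages; the tblUsers table and its IdUser/ReplyCount columns are hypothetical:

        CREATE TRIGGER trgReplyCount
        ON tblForumReplies
        AFTER INSERT, DELETE
        AS
        BEGIN
            -- Rows just added: bump each author's cached count.
            UPDATE u SET ReplyCount = ReplyCount + i.cnt
            FROM tblUsers u
            JOIN (SELECT IdRepliedBy, COUNT(*) AS cnt
                  FROM inserted GROUP BY IdRepliedBy) i
              ON i.IdRepliedBy = u.IdUser;

            -- Rows just removed: decrement accordingly.
            UPDATE u SET ReplyCount = ReplyCount - d.cnt
            FROM tblUsers u
            JOIN (SELECT IdRepliedBy, COUNT(*) AS cnt
                  FROM deleted GROUP BY IdRepliedBy) d
              ON d.IdRepliedBy = u.IdUser;
        END

    Because the trigger fires for any INSERT or DELETE, including ones issued manually in Management Studio, the cached count cannot silently drift, and the page query becomes a plain join against tblUsers instead of a GROUP BY over the whole replies table.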

  • Div's are not filtered as :hidden when display:none; is appended as style

    - by CodeMonkey
    Hey folks. I have some simple HTML:

        <div id="selectorContainer">
            <div id="chainedSelector" style="display: none;"><% Html.RenderPartial("ProjectSuggest/ChainedProjectSelector"); %></div>
            <div id="suggestSelector"><% Html.RenderPartial("ProjectSuggest/SuggestControl", new SuggestModeDTO{RegistrationMode = Model.RegistrationMode}); %></div>
        </div>

    which is two containers for controls. I have jQuery code to toggle between displaying these, but I need to store as a cookie which one was used the last time the user was logged in (i.e. which one was visible). The storing of the cookie is not the problem. The problem is that for some reason I am not able to detect which one is the hidden one using .is(":hidden"), and not able to detect which one is visible using .is(":visible"). When I use those two selectors, I always get both: "true" and "true" for both, even though one has display: none; and the other doesn't. Please note that they are NOT placed inside a hidden container which would otherwise hide both, so there are no hidden ancestor containers. Can anyone maybe explain why this could happen? Here is the jQuery code containing the source for getting the ids and for getting the selected one (which currently is broken):

        getChainedSelectorId: function() {
            return "#chainedSelector";
        },
        getSuggestSelectorId: function() {
            return "#suggestSelector";
        },
        getSelectedSelector: function() {
            alert($(this.getChainedSelectorId()).is(":hidden"));
            alert($(this.getSuggestSelectorId()).is(":hidden"));
            var selected = ($(this.getChainedSelectorId()).is(":visible") ?
                            this.getChainedSelectorId() : this.getSuggestSelectorId());
            alert(selected);
            return selected;
        },

    Thanks in advance.

  • Twitter API similar to Google Alert

    - by Felix Perdana
    I am trying to create a web application which has functionality similar to Google Alerts (by similar I mean the user can provide their email address for the alert to be sent to, daily or hourly). The only limitation is that it only gives alerts to the user based on a certain keyword or hashtag. I think I have found the fundamental API needed for this web application: https://dev.twitter.com/docs/api/1/get/search. The problem is I still don't know all the web technologies needed for this application to work properly. For example: Do I have to store all of the searched keywords in a database? Do I have to keep polling the API all the time in order to keep my database updated? What if the keyword the user provided is very popular right now and might get thousands of tweets in just an hour (not to mention, there might be several emails that request several trending topics)? By the way, I am trying to build this application using PHP. So please let me know what kind of techniques I need to learn for such a web app (and some references, maybe)? Any kind of help will be appreciated. Thanks in advance :) Regards, Felix Perdana
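
    A common shape for this kind of alert service is a cron-driven PHP script: store each keyword together with the id of the newest tweet already seen, ask the search API only for tweets newer than that, and mail the batch. A rough sketch against the v1 search endpoint the question links to; the helper functions and field names are hypothetical and the endpoint details should be checked against the docs:

        <?php
        // Run from cron every N minutes; no browser-side polling needed.
        foreach (getAlerts() as $alert) {          // keyword, email, last_seen_id
            $url = 'http://search.twitter.com/search.json?q=' .
                   urlencode($alert['keyword']) .
                   '&since_id=' . $alert['last_seen_id'];
            $response = json_decode(file_get_contents($url), true);
            if (!empty($response['results'])) {
                sendAlertEmail($alert['email'], $response['results']);
                saveLastSeenId($alert['keyword'], $response['max_id']);
            }
        }
        ?>

    Storing the keywords in a database is what makes the since_id bookkeeping possible, and since_id is also what keeps a trending hashtag manageable: each run only ever sees the tweets that arrived since the previous run.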

  • Should core application configuration be stored in the database, and if so what should be done to se

    - by Rl
    I'm writing an application around a lot of hierarchical data. Currently the hierarchy is fixed, but it's likely that new items will be added to the hierarchy in the future (please let them be leaves). My current application and database design is fairly generic, and nothing dealing with specific nodes in the hierarchy is hardcoded, with the exception of validation and lookup functions written to retrieve external data from each node's particular database. This pleases me from a design point of view, but I'm nervous at the realization that the entire application rests on a handful of records in the database. I'm also frustrated that I have to enforce certain aspects of data integrity with database triggers rather than with foreign key constraints (an example is where several different nodes in the hierarchy have their own proprietary IDs and I store them in a single column which, when coupled with the node ID, can be used to locate the foreign data). I'm starting to wonder whether it would have been appropriate to simply hardcode these known nodes into the system so that it would be more "type safe" and less generic. How does one know when something should be hardcoded and when it should be a configuration item? Is it just a cost-benefit analysis of clarity/safety now vs. less work later, or am I missing some metric I should be using to determine whether or not this is appropriate? The steps I'm taking to protect these valuable configurations are to add triggers that prevent updates/deletes. The database user that this application uses will only have the ability to manipulate data through stored procedures. What else can I do?

  • Indexing table with duplicates MySQL/SQL Server with millions of records

    - by Tesnep
    I need help with indexing in MySQL. I have a table in MySQL with the following columns:

        ID  Store_ID  Feature_ID  Order_ID  Viewed_Date  Deal_ID  IsTrial

    The ID is auto generated. Store_ID goes from 1 to 8. Feature_ID goes from 1 to, let's say, 100. Viewed_Date is the date and time at which the row is inserted. IsTrial is either 0 or 1. You can ignore Order_ID and Deal_ID for this discussion. There are millions of rows in the table, and we have a reporting backend that needs the number of views in a certain period (or overall) where trial is 0, for a particular store id and a particular feature. The query takes the form of:

        select count(viewed_date)
        from theTable
        where viewed_date between '2009-12-01' and '2010-12-31'
          and store_id = '2'
          and feature_id = '12'
          and Istrial = 0

    In SQL Server you can have a filtered index to use for IsTrial. Is there anything similar to this in MySQL? Also, Store_ID and Feature_ID have a lot of duplicate data. I created an index using Store_ID and Feature_ID. Although this seems to have decreased the search time, I need better improvement than this. Right now I have more than 4 million rows. To answer a particular query like the one above, it looks at 3.5 million rows in order to give me the count of 500k rows. PS. I forgot to add the viewed_date filter in the query. Now I have done this.
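
    MySQL has no filtered indexes, but a composite index that puts the equality columns first and the range column last lets the optimizer satisfy this query entirely from the index. A sketch:

        CREATE INDEX idx_store_feature_trial_date
            ON theTable (store_id, feature_id, istrial, viewed_date);

        -- count(*) avoids touching any column outside the index
        SELECT COUNT(*)
        FROM theTable
        WHERE store_id = 2
          AND feature_id = 12
          AND istrial = 0
          AND viewed_date BETWEEN '2009-12-01' AND '2010-12-31';

    With this column order the low-selectivity istrial column costs almost nothing, and EXPLAIN should show "Using index" (a covering index scan) instead of examining millions of table rows.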

  • Encrypting a file in win API

    - by Kristian
    Hi. I have to write a Windows API program that encrypts a file by adding three to each character. So I wrote this, but it's not doing anything... where did I go wrong?

        #include "stdafx.h"
        #include <windows.h>

        int _tmain(int argc, _TCHAR* argv[])
        {
            HANDLE filein, fileout;
            filein  = CreateFile(L"d:\\test.txt", GENERIC_READ,  0, NULL,
                                 OPEN_ALWAYS,   FILE_ATTRIBUTE_NORMAL, NULL);
            fileout = CreateFile(L"d:\\test.txt", GENERIC_WRITE, 0, NULL,
                                 CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

            DWORD really; // later this will be used to store how many bytes I succeeded in reading
            do {
                BYTE x[1024]; // the buffer I'm using to read in
                ReadFile(filein, x, 1024, &really, NULL);
                for (DWORD i = 0; i < really; i++) {
                    x[i] = (x[i] + 3) % 256;
                }
                DWORD really2;
                WriteFile(fileout, x, really, &really2, NULL);
            } while (really == 1024);

            CloseHandle(filein);
            CloseHandle(fileout);
            return 0;
        }

    And if I'm right, how can I know it's OK?
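
    For what it's worth, the likely culprit is that both handles point at the same path with a share mode of 0: the second CreateFile fails with a sharing violation, so fileout is INVALID_HANDLE_VALUE and every WriteFile goes nowhere (and even if sharing were allowed, CREATE_ALWAYS would truncate the very file being read). A sketch of the minimal change - write the result to a second file (output name chosen here for illustration):

        // Same program, but the output goes to a different file, so the
        // input is neither locked out nor truncated before it is read.
        filein  = CreateFile(L"d:\\test.txt",     GENERIC_READ,  0, NULL,
                             OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        fileout = CreateFile(L"d:\\test.enc.txt", GENERIC_WRITE, 0, NULL,
                             CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

    Checking each handle against INVALID_HANDLE_VALUE right after CreateFile (and calling GetLastError on failure) is the standard way to know the opens actually succeeded.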

  • Why doesn't Firefox redownload images already on a page?

    - by vvo
    Hello, I just read this article: https://developer.mozilla.org/en/HTTP_Caching_FAQ

    There's a Firefox behaviour (and other browsers too, I guess) I'd like to understand: if I take any webpage and try to insert the same image multiple times with JavaScript, the image is only downloaded ONCE, even if I specify all the headers needed to say "do not ever use the cache" (see the article). I know there are workarounds (like adding query strings to the ends of URLs etc.), but why does Firefox act like that? If I say that an image does not have to be cached, why is the image still taken from cache when I try to re-insert it? Plus, what cache is used for this? (I guess it's the memory cache.) Is this behaviour the same for dynamic script inclusion, for example? THE ANSWER IS NO :) I just tested it, and the same headers on a js script will make Firefox redownload it each time you append the script to the DOM. PS: I know you're wondering WHY I need to do that (appending the same image multiple times and forcing a redownload), but this is the way our app works. Thank you. The good answer is: Firefox will store images for the current page load in the memory cache even if you specify that it doesn't have to cache them. You can't change this behaviour, which is odd because it's not the same for JavaScript files, for example. Could someone explain, or link to a document describing, how the Firefox cache works?

  • Another boost error

    - by user1676605
    On this code I get an enormous error:

        static void ParseTheCommandLine(int argc, char *argv[])
        {
            int count;
            int seqNumber;

            namespace po = boost::program_options;
            std::string appName = boost::filesystem::basename(argv[0]);

            po::options_description desc("Generic options");
            desc.add_options()
                ("version,v", "print version string")
                ("help", "produce help message")
                ("sequence-number", po::value<int>(&seqNumber)->default_value(0),
                 "sequence number")
                ("pem-file", po::value< vector<string> >(), "pem file")
            ;

            po::positional_options_description p;
            p.add("pem-file", -1);

            po::variables_map vm;
            po::store(po::command_line_parser(argc, argv).
                      options(desc).positional(p).run(), vm);
            po::notify(vm);

            if (vm.count("pem file")) {
                cout << "Pem files are: "
                     << vm["pem-file"].as< vector<string> >() << "\n";
            }
            cout << "Sequence number is " << seqNumber << "\n";
            exit(1);
        }

    The error (abridged from the template spew) is:

        ../../../FIXMarketDataCommandLineParameters/FIXMarketDataCommandLineParameters.hpp|98|
        error: no match for 'operator<<' in
        'std::operator<<(std::cout, "Pem files are: ")
             << vm["pem-file"].boost::program_options::variable_value::as<std::vector<std::string> >()'
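
    The message boils down to: there is no operator<< for std::vector<std::string>. Streaming the elements yourself fixes it; a sketch of the replacement for the cout line, meant to drop into the function above (includes shown for completeness):

        #include <algorithm>
        #include <iostream>
        #include <iterator>
        #include <string>
        #include <vector>

        // Print each pem file, instead of streaming the vector itself.
        const std::vector<std::string>& files =
            vm["pem-file"].as< std::vector<std::string> >();
        std::copy(files.begin(), files.end(),
                  std::ostream_iterator<std::string>(std::cout, " "));
        std::cout << "\n";

    Note also the lookup key: the option is registered as "pem-file", so vm.count("pem file") - with a space - will never be non-zero, and the branch above would never run even once the operator<< problem is fixed.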

  • Is there a standard SQL Table design for overriding 'big picture' default values with lower level de

    - by RichardHowells
    Here's an example. Suppose we are trying to calculate a service charge. Say sales in the USA attract a 10 dollar charge and sales in the UK attract a 20 dollar charge. So far it's easy: we are starting to imagine a table that lists charges by country. Now let's assume that Alaska and Hawaii are treated as special cases: they are both 15 dollars. That suggests a table with states; Alaska and Hawaii are charged at 15, but presumably we need 48 (redundant) rows all saying 10. This gives us a maintenance problem: our user only wants to type 10 once, NOT 48 times. It does not sit well with the UK either, as the UK does not have states. Suppose we throw in another couple of cross-cutting rules. If you order by phone there is a 10% supplement on the charge. If you order via the web there is a 10% discount. But for some reason best known to the owners of the business, the web/phone supplement/discount is not applied in Hawaii. It seems to me that this is quite a common kind of problem and there is probably a well-known arrangement of tables to store the data. Most cases get handled by broad-brush answers, but there are some very detailed low-level variations that give rise to a huge number of theoretical combinations, most of which are not used.
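
    One well-worn arrangement is a sparse override table: store only the countrywide defaults plus the exceptions, and pick the most specific match at query time, so the 48 identical rows never have to exist. A MySQL-flavored sketch with a hypothetical schema:

        CREATE TABLE charge_rule (
            country VARCHAR(2)   NOT NULL,
            state   VARCHAR(2)   NULL,       -- NULL row = countrywide default
            charge  DECIMAL(9,2) NOT NULL
        );

        INSERT INTO charge_rule VALUES ('US', NULL, 10.00);
        INSERT INTO charge_rule VALUES ('UK', NULL, 20.00);
        INSERT INTO charge_rule VALUES ('US', 'AK', 15.00);
        INSERT INTO charge_rule VALUES ('US', 'HI', 15.00);

        -- Most specific match wins; falls back to the countrywide default.
        SELECT charge
        FROM charge_rule
        WHERE country = 'US'
          AND (state = 'HI' OR state IS NULL)
        ORDER BY (state IS NOT NULL) DESC
        LIMIT 1;

    The phone/web adjustment can be layered the same way: a modifier table whose most specific row (e.g. a Hawaii row with modifier 1.0) overrides the general 1.1/0.9 defaults, so the Hawaii exception becomes one data row rather than a special case in code.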

  • JSP includes and MVC pattern

    - by xingyu
    I am new to JSP/Servlets/MVC and am writing a JSP page (using servlets and the MVC pattern) that displays information about recipes, and I want the ability for users to "comment" on them too. So for the servlet, on doGet(), it grabs all the required info into a model POJO and forwards the request on to a JSP view for rendering. That is working just fine. I'd like the "comment" part to be a separate JSP, so in RecipeView.jsp I can use <jsp:include> to separate these views out. So I've made that, but am now a little stuck. The form in CommentOnRecipe.jsp posts to a CommentAction servlet that handles the recording of the comment just fine. So when I reload the recipe page, I can see the comment I just made. I'd like to:

    1. Reload the page automatically after commenting (no AJAX for now)
    2. Block the user from making more than one comment on each recipe over a 1-day timeframe (via a cookie). So I store a cookie indicating the product ID whenever the user makes a comment, so we can check this later? How would it work in an MVC context?
    3. Show a message to the user that they have already commented on the recipe when they visit one which they have commented on

    I'm confused about using beans/including JSPs etc. to achieve this. I know in ASP.NET land it would be a UserControl that I would place on a page, or in ASP.NET MVC it would be a PartialView of some sort. I'm just confused with the way this works in a JSP/Servlets/MVC context.
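
    A sketch of how points 1 and 2 usually look in plain servlets (the names are hypothetical): the POST handler sets a per-recipe cookie and then redirects back to the recipe page (the classic Post/Redirect/Get pattern), and the GET side checks for the cookie before deciding what the view should show.

        // Inside CommentAction's doPost(), after saving the comment:
        Cookie marker = new Cookie("commented_" + recipeId, "1");
        marker.setMaxAge(60 * 60 * 24);                  // expires after 1 day
        response.addCookie(marker);
        response.sendRedirect("recipe?id=" + recipeId);  // reloads the page automatically

        // Inside the recipe servlet's doGet(), before forwarding to the view:
        boolean alreadyCommented = false;
        Cookie[] cookies = request.getCookies();
        if (cookies != null) {
            for (Cookie c : cookies) {
                if (c.getName().equals("commented_" + recipeId)) {
                    alreadyCommented = true;
                }
            }
        }
        request.setAttribute("alreadyCommented", alreadyCommented);

    The JSP can then test ${alreadyCommented} to show either the comment-form include or the "you have already commented" message (point 3).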

  • Dealing with large number of text strings

    - by Fadrian
    My project, when it is running, will collect a large number of text blocks (about 20K of them, and the largest run I have seen is about 200K of them) in a short span of time and store them in a relational database. Each text block is relatively small, averaging about 15 short lines (about 300 characters). The current implementation is in C# (VS2008) on .NET 3.5, and the backend DBMS is MS SQL Server 2005. Performance and storage are both important concerns for the project, but the priority is performance first, then storage. I am looking for answers to these:

    1. Should I compress the text before storing it in the DB, or let SQL Server worry about compacting the storage?
    2. Do you know what would be the best compression algorithm/library to use in this context that gives me the best performance? Currently I just use the standard GZip in the .NET framework.
    3. Do you know any best practices for dealing with this? I welcome outside-the-box suggestions as long as they are implementable in the .NET framework. (It is a big project and this requirement is only a small part of it.)

    EDITED: I will keep adding to this to clarify points raised. I don't need text indexing or searching on these texts; I just need to be able to retrieve them at a later stage for display as a text block, using the primary key. I have a working solution implemented as above, and SQL Server has no issue at all handling it. This program will run quite often and needs to work with a large data context, so you can imagine the size will grow very rapidly; hence every optimization I can do will help.
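
    For reference, the GZip round-trip in .NET 3.5 is only a few lines; for 300-character blocks it is worth measuring whether the CPU cost beats the I/O savings at all, since the gzip header eats much of the win on very short text. A sketch:

        using System.IO;
        using System.IO.Compression;
        using System.Text;

        static byte[] Compress(string text)
        {
            byte[] raw = Encoding.UTF8.GetBytes(text);
            using (var output = new MemoryStream())
            {
                using (var gzip = new GZipStream(output, CompressionMode.Compress))
                    gzip.Write(raw, 0, raw.Length);
                return output.ToArray();   // store this in a varbinary(max) column
            }
        }

        static string Decompress(byte[] blob)
        {
            using (var gzip = new GZipStream(new MemoryStream(blob),
                                             CompressionMode.Decompress))
            using (var reader = new StreamReader(gzip, Encoding.UTF8))
                return reader.ReadToEnd();
        }

    An alternative worth benchmarking: DeflateStream uses the same algorithm but skips the gzip header and CRC, which measurably helps at these tiny payload sizes.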

  • Picture.writeToStream() not writing out all bitmaps

    - by quickdraw mcgraw
    I'm using webview.capturePicture() to create a Picture object that contains all the drawing objects for a webpage. I can successfully render this Picture object to a bitmap using canvas.drawPicture(picture, dst) with no problems. However, when I use picture.writeToStream(fos) to serialize the picture object out to a file, and then Picture.createFromStream(fis) to read the data back in and create a new Picture object, the resulting bitmap, when rendered as above, is missing any larger images (anything over around 20KB, by observation). This occurs on all the Android OS platforms that I have tested: 1.5, 1.6 and 2.1. Looking at the native code for Skia, which is the underlying Android graphics library, and at the output file produced from picture.writeToStream(), I can see how the file format is constructed. I can see that some of the images in this Skia spool file are not being written out (the larger ones); the code that appears to be the problem is in SkBitmap.cpp, in the method void SkBitmap::flatten(SkFlattenableWriteBuffer& buffer) const. It writes out the bitmap fWidth, fHeight, fRowBytes, fConfig and isOpaque values, but then just writes out SERIALIZE_PIXELTYPE_NONE (0). This means that the spool file does not contain any pixel information for the actual image and therefore cannot restore the picture object correctly. Effectively this renders the writeToStream() and createFromStream() APIs useless, as they do not reliably store and recreate the picture data. Has anybody else seen this behaviour, and if so: am I using the API incorrectly, can it be worked around, is there an explanation (i.e. incomplete API / bug), and if so are there any plans for a fix in a future release of Android? Thanks in advance.

  • Peculiar JRE behaviour running RMI server under load, should I worry?

    - by darri
    I've been developing a minimalistic Java rich client CRUD application framework for the past few years, mostly as a hobby but also actively using it to write applications for my current employer. The framework provides database access to clients either via a local JDBC based connection or a lightweight RMI server. Last night I started a load testing application, which ran 100 headless clients bombarding the server with requests, each client waiting only 1-2 seconds between running simple use cases consisting of selecting records along with associated detail records from a simple e-store database (Chinook). This morning, when I looked at the telemetry results from the server profiling session, I noticed something which to me seemed strange (and made me keep the setup running for the remainder of the day). I don't really know what conclusions to draw from it. Here are the results (profiler telemetry screenshots): memory, GC activity, threads, CPU load. Interesting, right? So the question is: is this normal or erratic? Is this simply the JRE (1.6.0_03 on Windows XP) doing its thing (perhaps related to the JRE configuration), or is my framework design somehow causing this? Running the server against MySQL as opposed to an embedded H2 database does not affect the pattern. I am leaving out the details of my server design, but I'll be happy to elaborate if this behaviour is deemed erratic.
