Search Results

Search found 9254 results on 371 pages for 'approach'.


  • Calculating rotation in > 360 deg. situations

    - by danglebrush
    I'm trying to work out a problem I'm having with degrees. I have data that is a list of angles in standard degree notation, e.g. 26 deg. Usually when dealing with angles, if an angle exceeds 360 deg it effectively "resets", i.e. the angle starts again: 357 deg, 358 deg, 359 deg, 0 deg, 1 deg, etc. What I want instead is for the degrees to keep increasing: 357 deg, 358 deg, 359 deg, 360 deg, 361 deg, etc. I want to convert my data so that it holds these continuous values. When numbers approach the 0 deg limit, I want them to go negative, i.e. 3 deg, 2 deg, 1 deg, 0 deg, -1 deg, -2 deg, etc. With multiples of 360 deg (both positive and negative), the degrees should simply continue, e.g. 720 deg, etc.

    Any suggestions on what approach to take? There is, no doubt, a frustratingly simple way of doing this, but my current solution is kludgey to say the least. My best attempt to date is to look at the percentage difference between angle n and angle n - 1. If this is a large difference, e.g. 60%, then the current value needs to be modified by adding or subtracting 360 deg, depending on the previous angle value: if the previous angle is negative, subtract 360; if it is positive, add 360. Any suggestions on improving this?
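
    A minimal sketch of one common way to "unwrap" such a series (the function name and sample data are illustrative, not from the question): each reading is compared with the previous unwrapped value modulo 360, and only the smallest signed difference is accumulated, so the running total is free to pass 360 deg or drop below 0 deg.

      import math

      def unwrap_degrees(angles):
          """Accumulate wrapped readings (0-360) into a continuous series.

          Each step adds the smallest signed difference to the previous
          unwrapped value, so 359 -> 0 becomes 359 -> 360 and
          1 -> 358 becomes 1 -> -2.
          """
          if not angles:
              return []
          unwrapped = [angles[0]]
          for current in angles[1:]:
              delta = (current - (unwrapped[-1] % 360) + 180) % 360 - 180
              unwrapped.append(unwrapped[-1] + delta)
          return unwrapped

      print(unwrap_degrees([357, 358, 359, 0, 1]))   # [357, 358, 359, 360, 361]
      print(unwrap_degrees([3, 2, 1, 0, 359, 358]))  # [3, 2, 1, 0, -1, -2]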

  • What is the best way to detect white color?

    - by dnul
    I'm trying to detect white objects in a video. The first step is to filter the image so that only white-colored pixels remain. My first approach was to convert to the HSV color space and then check for a high level in the value (V) channel. Here is the code:

      // convert image to HSV
      cvCvtColor( src, hsv, CV_BGR2HSV );
      cvCvtPixToPlane( hsv, h_plane, s_plane, v_plane, 0 );
      for (int x = 0; x < srcSize.width; x++) {
          for (int y = 0; y < srcSize.height; y++) {
              uchar * hue = &((uchar*) (h_plane->imageData + h_plane->widthStep * y))[x];
              uchar * sat = &((uchar*) (s_plane->imageData + s_plane->widthStep * y))[x];
              uchar * val = &((uchar*) (v_plane->imageData + v_plane->widthStep * y))[x];
              if (*val > 170)
                  *hue = 255;
              else
                  *hue = 0;
          }
      }

    This leaves the result in the hue channel. Unfortunately, this approach is very sensitive to lighting. I'm sure there is a better way. Any suggestions?
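
    Not the poster's code, but a rough sketch of a common variant that thresholds saturation as well as value (white pixels have low S and high V), which tends to be somewhat less lighting-sensitive than value alone. It uses OpenCV's Python bindings; the stand-in frame and the threshold numbers are placeholders to tune.

      import cv2
      import numpy as np

      # Stand-in frame (in real code this would come from the video capture).
      frame = np.full((120, 160, 3), 255, dtype=np.uint8)

      hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

      # OpenCV 8-bit HSV ranges: H 0-179, S 0-255, V 0-255.
      lower = np.array([0, 0, 200])           # any hue, low saturation floor, high value floor
      upper = np.array([179, 40, 255])
      mask = cv2.inRange(hsv, lower, upper)   # 255 where "white enough", else 0

      print(cv2.countNonZero(mask))           # 19200 for the all-white stand-in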

  • How to keep confirmation messages after POST while doing a post-submit redirect?

    - by MicE
    Hello, I'm looking for advice on how to share certain bits of data (i.e. post-submit confirmation messages) between individual requests in a web application. Let me explain.

    Current approach:

      - the user submits an add/edit form for a resource
      - if there were no errors, the user is shown a confirmation page with links to: submit a new resource (for the "add" form), view the submitted/edited resource, or view all resources (one step above in the hierarchy)
      - the user then has to click on one of the three links to proceed (i.e. to the page "above")

    Programmatically, the form and its confirmation page are one set of classes. The page above that is another. They can technically share code, but at the moment they are independent during the processing of individual requests.

    We would like to amend the above as follows:

      - the user submits an add/edit form for a resource
      - if there were no errors, the user is redirected to the page with all resources (one step above in the hierarchy), with one or more confirmation messages displayed at the top of the page (i.e. success message, to whom the request was assigned, etc.)

    This will save users one click (they have to go through a lot of these add/edit forms), and the post-submit redirect will address common problems with browser refresh / back buttons.

    What approach would you recommend for sharing the data needed for the confirmation messages between the two requests? I'm not sure if it helps: it's a PHP application backed by a RESTful API, but I think this is a language-agnostic question. A few simple solutions that come to mind are to share the data via cookies or in the session; this however breaks statelessness and would pose a significant problem for users who work in several tabs (the data could clash). Passing the data as GET parameters is not suitable, as we are talking about several messages which are dynamic (e.g. changing actors, dates). Thanks, M.
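
    One pattern that addresses the multi-tab clash (though it still relies on some server-side state) is a one-time "flash" message store keyed by a random token that travels in the redirect URL. A rough Python sketch, with a plain dict standing in for whatever session or cache store is actually used; the function names are illustrative only.

      import secrets

      session = {}   # stands in for the real server-side session/cache store

      def stash_messages(messages):
          """Store one-time confirmation messages under a random token and
          return the token to append to the redirect URL (?flash=<token>)."""
          token = secrets.token_urlsafe(8)
          session.setdefault("flash", {})[token] = messages
          return token

      def pop_messages(token):
          """Fetch and delete the messages; safe across multiple tabs because
          each redirect carries its own token."""
          return session.get("flash", {}).pop(token, [])

      token = stash_messages(["Resource saved", "Assigned to J. Smith"])
      print(pop_messages(token))   # ['Resource saved', 'Assigned to J. Smith']
      print(pop_messages(token))   # [] -- already consumed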

  • In .NET, how do I prevent, or handle, tampering with form data of disabled fields before submission?

    - by David
    Hi, if a disabled drop-down list is dynamically rendered to the page, it is still possible to use Firebug, or another tool, to tamper with the submitted value and to remove the "disabled" HTML attribute. This code:

      protected override void OnLoad(EventArgs e)
      {
          var ddlTest = new DropDownList() { ID = "ddlTest", Enabled = false };
          ddlTest.Items.AddRange(new [] {
              new ListItem("Please select", ""),
              new ListItem("test 1", "1"),
              new ListItem("test 2", "2")
          });
          Controls.Add(ddlTest);
      }

    results in this HTML being rendered:

      <select disabled="disabled" id="Properties_ddlTest" name="Properties$ddlTest">
          <option value="" selected="selected">Please select</option>
          <option value="1">test 1</option>
          <option value="2">test 2</option>
      </select>

    The problem occurs when I use Firebug to remove the "disabled" attribute and to change the selected option. On submission of the form, and re-creation of the field, the newly generated control has the correct value by the end of OnLoad, but by OnPreRender it has assumed the identity of the submitted control and has been given the submitted form value. .NET seems to have no way of detecting that the field was originally created in a disabled state and that the submitted value was faked. This is understandable, as there could be legitimate client-side functionality that would allow the disabled attribute to be removed.

    Is there some way, other than a brute-force approach, of detecting that this field's value should not have been changed? The brute-force approach I have in mind is something crude, like saving the correct value somewhere while still in OnLoad and restoring it in OnPreRender. As some fields have dependencies on others, that would be unacceptable to me.

  • Google Maps API and "rightclick" events on Macs

    - by samc
    Using the Google Maps API (v3), I can create a map and handle normal click events just fine, but when I want to handle rightclick events, it doesn't work on Macs. I assume this is because a right-click on a Mac is actually delivered as a ctrl-click, and the Google Maps API MouseEvent doesn't provide information about modifier keys, so I can't check for the ctrl key. I tried adding a "capture" event listener to the document that converts the click event to a rightclick event:

      function convertClick(e) {
          if (e.ctrlKey) {
              e.button = 2;
          }
      }
      document.addEventListener("click", convertClick, true);

    I added an alert to verify that the condition is met, but modifying the event in this way didn't work. So I decided to have my event handler set a global flag that my click handler could check. If the flag is set, it means ctrl was pressed, so the click handler just invokes the rightclick handler.

      var ctrl;
      function captureCtrl(e) {
          ctrl = e.ctrlKey;
      }

    This approach worked great, except for one thing: the ctrl flag gets set for the click after the one that occurred while ctrl was pressed. That means the event handler is being called during the bubble phase rather than the capture phase, which could also explain why the event modification approach didn't work.

    So, my question is: how can you detect "rightclick" events from Macs with the Google Maps API? I can't be the first person to want to do this. That said, when I right-click on the map at http://maps.google.com from a Windows or Linux machine, I get a popup box with options like "Directions from here...", etc. On a Mac, nothing happens. So not even the main Google Maps page has solved this problem. ...maybe I am the first person to want to do this.

  • Why can't I simply copy installed Perl modules to other machines?

    - by pistacchio
    Being very new to Perl but not to dynamic languages, I'm a bit surprised at how far from straightforward managing modules is. Sure, cpan X theoretically works, but I'm working on the same project from three different machines and OSs (at work, at home, and testing in an external environment):

      - At work (Windows 7) I have problems using cpan because our firewall makes FTP unusable.
      - At home (Mac OS X) it does work.
      - In the external environment (Linux CentOS) it worked, after hours, because I don't have root access and had to configure cpan to operate as a non-root user.
      - I've also tried another server where I have access. The previous external environment is a VPS, so I have shell access; this other one is a cheap shared hosting account where I have no way to install modules other than the pre-installed ones.

    At the moment I still can't install Template under Windows. I've seen that as an alternative I could compile it, and I've also tried ActiveState's PPM, but the module doesn't exist there.

    Now, my perplexity is about Perl being a dynamic language. I've had all these kinds of problems while working, for example, with C, where I had to compile every library for every platform, but I thought that with Perl the approach would be very similar to Python's or PHP's, where in 90% of cases copying the module into a directory and importing it simply works. So, my questions: if Perl's modules are written in Perl, why does the copy/paste approach not work? If some modules (or parts of them) have to be compiled, how can I see on CPAN whether a module is Perl-only or relies on compiled libraries? Isn't there a way to download the module (tar, zip...) and use cpan to deploy it? This would solve my problem under Windows.

  • C++ & proper TDD

    - by Kotti
    Hi! I recently tried developing a small-sized project in C#, and during the whole project our team used the Test-Driven Development (TDD) technique (xUnit, Moq). I really think this was awesome, because (when paired with C#) this approach allowed us to relax when coding, relax when designing and relax when refactoring. I suspect that all this TDD stuff actually simplifies the coding process and, well, it eventually let me get the same result with fewer brain cells working.

    Right after that I tried using TDD paired with C++ (I used the Google Test and Google Mock libraries), and, I don't know why, but I actually think that TDD here was a step back in terms of rapid application development. There were moments when I had to spend huge amounts of time thinking about my tests, building proper mocks, rebuilding them and swearing at my monitor. And, well, I obviously can't ask something like "what did I do wrong?" or "what was wrong in my approach?", because I don't know what to describe. But if there are people who are used to TDD in C++ (and probably C# too), could you please advise me how to do this properly? Framework recommendations, architecture approaches, plain coding advice - if you are experienced in TDD & C++, please respond.

  • Static code analysis for new language. Where to start?

    - by tinny
    I've just been given a new assignment which looks like it's going to be an interesting challenge. The customer wants a code style checking tool to be developed for their internal (soon to be open-sourced) programming language, which runs on the JVM. The language syntax is very Java-like. The customer basically wants me to produce something like Checkstyle. So my question is this: how would you approach this problem? Given a clean slate, what recommendations would you make to the customer? I think I have three options:

      1. Write something from scratch. I'd prefer not to do this, as this sort of code analysis problem has been solved so many times that there must be a more "framework" or "platform" oriented approach.
      2. Fork an existing code style checking tool and modify the parsing to fit this new language, etc.
      3. Extend or plug into an existing static code analysis tool (maybe write a plugin for Yasca?).

    Maybe you would like to share your experiences in this area? Thanks for reading.

  • Using Ant how can I dex a directory of jars?

    - by cbeaudin
    I have a directory full of jars (Felix bundles). I want to iterate through all of these jars and create dex'd versions. My intent is to deploy each of these dex'd jars as a standalone apk, since they are bundles. Feel free to straighten me out if I am approaching this from the wrong direction. This first part is just to try and create a corresponding .dex file for each jar. However, when I run this I get a "no resources specified" error out of Ant. Is this the right approach, or is there a simpler way to just take a jar as input and output a dex'd version of that jar? The ${file} is valid, as it is spitting out the name of the file in the echo command.

      <target name="dexBundles" description="Run dex on all the bundles">
          <taskdef resource="net/sf/antcontrib/antlib.xml"
                   classpath="${basedir}/libs/ant-contrib.jar" />
          <echo>Starting</echo>
          <for param="file">
              <path>
                  <fileset dir="${pre.dex.dir}">
                      <include name="**/*.jar" />
                  </fileset>
              </path>
              <sequential>
                  <echo message="@{file}" />
                  <echo>Converting jar file @{file} into ${post.dex.dir}/@{file}.class...</echo>
                  <apply executable="${dx}" failonerror="true" parallel="true" verbose="true">
                      <arg value="--dex" />
                      <arg value="--output=${post.dex.dir}/${file}.dex" />
                      <arg path="@{file}" />
                  </apply>
              </sequential>
          </for>
          <echo>Finished</echo>
      </target>

  • How to hide the console of batch scripts without losing std err/out streams

    - by cooper.thompson
    My question is similar to "Running a CMD or BAT in silent mode", but with one additional constraint. If you use WshShell.Run in VBScript, you lose access to the standard in/error/out streams of the process. WshShell.Exec gives you access to the standard streams, but you can't hide your windows. How can you have your cake (hide the windows) and eat it too (have direct access to the console streams)?

    I'm currently thinking about a C++ executable which creates a new window station and desktop (see MSDN) and runs a specified script within that new desktop (I'm not yet an expert on window stations and desktops, so this idea may be misguided). This idea is based loosely on Condor's USE_VISIBLE_DESKTOP feature, which, if disabled, runs Condor jobs on a non-visible desktop. I haven't quite figured out whether this requires elevated privileges. The trade-off of this approach is that your script can disappear into limbo if it blocks on user input.

    Does anyone have any additional ideas, or feedback on the approach outlined above?

    Edit: Also, the purpose of our script is to set up the user environment, so running as another user, or as a system scheduled task, isn't really an option (unless there are clever tricks I don't know about).

  • How to deal with Unicode strings in C/C++ in a cross-platform friendly way?

    - by Sorin Sbarnea
    On platforms other than Windows you can easily use char * strings and treat them as UTF-8. The problem is that on Windows you are required to accept and send messages using wchar_t* (W) strings; if you use the ANSI (A) functions you will not support Unicode. So if you want to write a truly portable application you need to compile it as Unicode on Windows.

    Now, in order to keep the code clean, I would like to see the recommended way of dealing with strings - one that minimizes ugliness in the code. Types of strings you may need: std::string, std::wstring, std::tstring, char *, wchar_t *, TCHAR*, CString (the ATL one). Issues you may encounter:

      - cout/cerr/cin and their wide variants wcout, wcerr, wcin
      - all the renamed wide string functions and their TCHAR macros - like strcmp, wcscmp and _tcscmp
      - constant strings inside code; with TCHAR you will have to fill your code with _T() macros

    What approach do you see as best? (Examples are welcome.) Personally I would go for a std::tstring approach, but I would like to see how you would do the conversions where they are necessary.

  • What is the best practice in regards to building composite dtos off of an aggregate root with domain

    - by Chance
    I'm trying to figure out the best approach/practice for assembling a composite data transfer object off of an aggregate root, and would love to hear people's thoughts on this.

    For example, let's say I have a root that has a few domain objects as children. I want to assemble a specific view DTO, based on some business logic, that has either attributes or full DTOs of its child objects. What I'm struggling with is figuring out where that assembly should happen.

    I can see it going on the domain object of the aggregate root, as there is some business logic associated with it. The benefit of this approach, from what I've deduced so far, is that it should keep the inevitable business logic from bleeding outside of the domain object. It also allows for private methods that take care of tasks that could become more complex for an external builder. The downsides are that the domain object becomes much more entrenched in the application's workflow and represents much more than just the domain object. It could also become very large in the scenario where you need multiple composite DTOs.

    Alternatively, I could see it belonging to some form of transfer object assembler, where there is a builder for each domain object. The domain objects would still be responsible for GetDto() and UpdateFromDto(dto). Outside of that, the builder would handle the construction and deconstruction of composite DTOs. The downside is kind of mentioned above: I fear this will easily lead to developers unfamiliar with DDD bleeding a ton of business logic into the assembler, which is what I desperately want to avoid.

    Any thoughts would be greatly appreciated.

  • Using memcache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the completely static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we are considering right now:

      - Use memcache on all major queries and leave it alone to do what it does best.
      - Use memcache for the most commonly retrieved data, and combine it with a standard hard-drive-backed cache for further usage.

    The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached also appears to have some replication features available, which may come in handy when it's time to add nodes.

    Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache, and simply upgrade the memory as the load increases with the number of users?

    Thanks a lot!
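
    For illustration, here is a rough Python sketch of the second, combined approach as a read-through lookup: memory tier first, then disk tier, then the expensive query, writing back to the faster tiers on the way out. The dict stands in for a real memcached client, and the paths, TTL and function names are assumptions, not anything from the question.

      import json, os, time

      MEMCACHE = {}                    # stands in for a real memcached client
      DISK_DIR = "/tmp/site-cache"     # hypothetical on-disk cache location

      def cached_get(key, loader, ttl=300):
          # 1) memory tier
          entry = MEMCACHE.get(key)
          if entry and entry[0] > time.time():
              return entry[1]
          # 2) disk tier
          path = os.path.join(DISK_DIR, key)
          if os.path.exists(path) and os.path.getmtime(path) + ttl > time.time():
              with open(path) as fh:
                  value = json.load(fh)
          else:
              # 3) the expensive query itself
              value = loader()
              os.makedirs(DISK_DIR, exist_ok=True)
              with open(path, "w") as fh:
                  json.dump(value, fh)
          # promote to the memory tier on the way out
          MEMCACHE[key] = (time.time() + ttl, value)
          return value

      print(cached_get("front-page-posts", lambda: ["post 1", "post 2"]))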

  • What is the best way to store site configuration data?

    - by DaveDev
    I have a question about storing site configuration data. We have a platform for web applications. The idea is that different clients can have their data hosted and displayed on their own site, which sits on top of this platform. Each site has a configuration which determines which panels relevant to the client appear on which pages.

    The system was originally designed to keep all the configuration data for each site in a database. When the site is loaded, all the configuration data is loaded into a SiteConfiguration object, and the client's panels are generated based on the content of this object. This works, but I find it very difficult to work with when applying change requests or adding new sites, because there is so much data to sift through and it's difficult to maintain a mental model of the site and its configuration.

    Recently I've been tasked with developing a subset of some of the sites to be generated as PDF documents for printing. I decided to take a different approach to defining the configuration: instead of storing configuration data in the database, I wrote XML files to contain the data. I find this much easier to work with, because instead of reading meaningless rows of data which are related to other meaningless rows of data, I have meaningful documents with semantic, readable information, with the relationships defined by visually understandable element nesting.

    So now, with these two approaches to storing site configuration data, I'd like to get the opinions of people more experienced in dealing with this issue. What is the best way of storing site configuration data? Is there a better way than the two I outlined here?

    Note: StackOverflow is telling me the question appears to be subjective and is likely to be closed. I'm not trying to be subjective. I'd like to know how best to approach this issue next time, and whether people with industry experience could provide some input.
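
    To make the XML variant concrete, here is a small Python sketch of reading a nested site configuration document; the element names and structure are invented for illustration, not taken from the actual platform.

      import xml.etree.ElementTree as ET

      config_xml = """
      <site name="client-a">
        <page id="home">
          <panel type="news" position="left"/>
          <panel type="contact" position="right"/>
        </page>
      </site>
      """

      site = ET.fromstring(config_xml)
      for page in site.findall("page"):
          panels = [p.get("type") for p in page.findall("panel")]
          print(page.get("id"), panels)   # home ['news', 'contact']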

  • PHP check http referer for form submitted by AJAX, secure?

    - by Michael Mao
    Hi all: this is the first time I am working on a front-end project that requires server-side authentication for AJAX requests. I've run into a problem: I cannot make a call to session_start at the beginning of the "destination page", because that gets me a PHP warning:

      Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent
      (output started at C:\xampp\htdocs\comic\app\ajaxInsert Book.php:1) in C:\xampp\htdocs\comic\app\common.php on line 10

    I reckon this means I have to figure out a way other than checking PHP session variables to authenticate the "caller" of this PHP script, so this is my approach: I have a "protected" PHP page, which must be used as the "container" of my JavaScript that posts the form through jQuery's $.ajax() method. In my "receiver" PHP script, what I've got is:

      <?php
      define(BOOKS_TABLE, "books");
      define(APPROOT, "/comic/");
      define(CORRECT_REFERER, "/protected/staff/addBook.php");

      function isRefererCorrect()
      {
          // the following line evaluates the relative path for the referer uri,
          // Say, $_SERVER['HTTP_REFERER'] returns "http://localhost/comic/protected/staff/addBook.php"
          // Then the part we concern is just this "/protected/staff/addBook.php"
          $referer = substr($_SERVER['HTTP_REFERER'], 6 + strrpos($_SERVER['HTTP_REFERER'], APPROOT));
          return (strnatcmp(CORRECT_REFERER, $referer) == 0) ? true : false;
      }

      //http://stackoverflow.com/questions/267546/correct-http-header-for-json-file
      header('Content-type: application/json charset=UTF-8');
      header('Cache-Control: no-cache, must-revalidate');
      echo json_encode(array (
          "feedback" => "ok",
          "info" => isRefererCorrect()
      ));
      ?>

    My code works, but I wonder: are there any security risks in this approach? Can someone manipulate the POST request so that they can pretend the calling JavaScript is from the "protected" page? Many thanks for any hints or suggestions.

  • Using a "take-home" coding component in interview process

    - by Jeff Sargent
    In recent interviews I have been asking candidates to code through some questions on a whiteboard. I don't feel I'm getting a clear enough picture of a candidate's technical ability with this approach. Granted, the questions might not be good enough, maybe the interview needs to be longer, etc., but I'm wondering if a different approach would be better.

    What I'd like to try is to create a simple, working project in Visual Studio and have it checked into source control. The candidate can check that code out from home/wherever and then check back in work representing their response to the assignment that I'll provide. I'm thinking that if the window of time is short enough and the assignment clear enough, then the exercise will be reasonably safe from all-out Googling (i.e. they couldn't search for and find the entire solution online). I would then be able to review the candidate's work.

    Has anyone worked with something like this before, either to vet a candidate or as a candidate yourself? Any thoughts in general? P.S. my first StackOverflow question - hi guys and gals.

    EDIT: I've seen comments about asking someone to work for free - I wouldn't mind paying the person for their time.

  • How do I find all paths through a set of given nodes in a DAG?

    - by Hanno Fietz
    I have a list of items (the blue nodes below) which are categorized by the users of my application. The categories themselves can be grouped and categorized themselves. The resulting structure can be represented as a directed acyclic graph (DAG), where the items are sinks at the bottom of the graph's topology and the top categories are sources. Note that while some of the categories might be well defined, a lot are going to be user defined and might be very messy. Example: On that structure, I want to perform the following operations:

      - find all items (sinks) below a particular node (all items in Europe)
      - find all paths (if any) that pass through all of a set of n nodes (all items sent via SMTP from example.com)
      - find all nodes that lie below all of a set of nodes (intersection: goyish brown foods)

    The first seems quite straightforward: start at the node, follow all possible paths to the bottom and collect the items there. However, is there a faster approach? Remembering the nodes I have already passed through probably helps avoid unnecessary repetition, but are there more optimizations?

    How do I go about the second one? It seems that the first step would be to determine the height of each node in the set, so as to decide at which one(s) to start, and then find all paths below that which include the rest of the set. But is this the best (or even a good) approach?

    The graph traversal algorithms listed on Wikipedia all seem to be concerned with either finding a particular node, or the shortest or otherwise most effective route between two nodes. I think neither is what I want, or did I just fail to see how this applies to my problem? Where else should I read?
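
    For the first operation, a rough Python sketch of the memoised depth-first collection described above; the adjacency list and node names are invented purely to make the example runnable.

      # Hypothetical adjacency list: node -> list of child nodes; leaves are items.
      GRAPH = {
          "Europe": ["France", "Germany"],
          "France": ["item1", "item2"],
          "Germany": ["item2", "item3"],
      }

      def items_below(node, graph, cache=None):
          """Collect all sinks reachable from `node`, memoising per node so
          shared sub-graphs are only walked once."""
          if cache is None:
              cache = {}
          if node in cache:
              return cache[node]
          children = graph.get(node, [])
          if not children:                      # a sink, i.e. an item
              result = {node}
          else:
              result = set()
              for child in children:
                  result |= items_below(child, graph, cache)
          cache[node] = result
          return result

      print(items_below("Europe", GRAPH))   # {'item1', 'item2', 'item3'}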

  • Creating a function in Postgresql that does not return composite values

    - by celenius
    I'm learning how to write functions in PostgreSQL. I've defined a function called _tmp_myfunction() which takes in an id and returns a table (I also define a table object type called _tmp_mytable):

      -- create object type to be returned
      CREATE TYPE _tmp_mytable AS (
          id integer,
          cost double precision
      );

      -- create function which returns query
      CREATE OR REPLACE FUNCTION _tmp_myfunction( id integer )
      RETURNS SETOF _tmp_mytable AS $$
      BEGIN
          RETURN QUERY
          SELECT id, cost
          FROM sales
          WHERE id = sales.id;
      END;
      $$ LANGUAGE plpgsql;

    This works fine when I use one id and call it using the following approach:

      SELECT * FROM _tmp_myfunction(402);

    What I would like to be able to do is to call it using a column of values instead of just one value. However, if I use the following approach, I end up with all values of the table in one column, separated by commas:

      -- call function using all values in a column
      SELECT _tmp_myfunction(t.id) FROM transactions AS t;

    I understand that I get the same result if I use SELECT _tmp_myfunction(402); instead of SELECT * FROM _tmp_myfunction(402);, but I don't know how to construct my query in such a way that I can separate out the results.

  • Caching issue with javascript and asp.net

    - by Ed Woodcock
    Hi guys: I asked a question a while back on here regarding caching data for a calendar/scheduling web app, and got some good responses. However, I have now decided to change my approach and start caching the data in JavaScript.

    I am directly caching the HTML for each day's column in the calendar grid inside the $('body').data() object, which gives very fast page load times (almost unnoticeable). However, problems start to arise when the user requests data that is not yet in the cache. This data is created by the server using an AJAX call, so it's asynchronous, and it takes about 0.2 s per week's worth of data.

    My current approach is simply to block for 0.5 s when the user requests information from the server, and to cache 4 weeks on either side in the initial page load (plus 1 extra week per page-change request); however, I doubt this is the optimal method. Does anyone have a suggestion as to how to improve the situation?

    To summarise:

      - Each week takes 0.2 s to retrieve from the server, asynchronously.
      - Performance must be as close to real-time as possible (however, the data does not need to be fully real-time: most appointments are added by the user, so we can re-cache after that).
      - Currently 4 weeks are cached on either side of the initial week loaded: this is not enough.
      - Caching 1 year takes ~21 s, which is too slow for an initial load.
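
    A rough sketch of the sliding prefetch window the question describes, written in Python only to show the shape of the idea (the cache keys, radius and fetch function are placeholders; in the real app the warm-up loop would run asynchronously after the requested week has been rendered).

      CACHE = {}                 # week index -> rendered HTML (stand-in strings)
      PREFETCH_RADIUS = 4

      def fetch_week(week):
          """Stand-in for the ~0.2 s ajax call that renders one week server-side."""
          return "<div>week %d</div>" % week

      def show_week(week):
          html = CACHE.get(week)
          if html is None:
              html = CACHE[week] = fetch_week(week)      # the blocking cache miss
          # Warm the surrounding window; in the real app this loop would fire
          # asynchronously once the requested week is displayed.
          for offset in range(-PREFETCH_RADIUS, PREFETCH_RADIUS + 1):
              neighbour = week + offset
              if neighbour not in CACHE:
                  CACHE[neighbour] = fetch_week(neighbour)
          return html

      print(show_week(10))      # first call fills weeks 6..14
      print(sorted(CACHE))      # [6, 7, 8, 9, 10, 11, 12, 13, 14]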

  • Aligning music notes using String matching algorithms or Dynamic Programming

    - by Dolphin
    Hi, I need to compare two sets of musical pieces: a performance, taken in MIDI format with the note details extracted and saved in a database table, against sheet music, taken in XML format. When evaluating the playing against the sheet music (i.e. note details: pitch, duration, rhythm), note alignment needs to be done, to identify notes that are missed, extra, incorrect or swapped relative to the reference (sheet music) notes.

    I have around 1800-2500 notes in one piece (it can be even more with polyphonic music; right now I'm working on monophonic). So will I have to load all of these into an array? Will that overload memory or overflow the stack?

    There are string matching algorithms like KMP and Boyer-Moore, but note alignment can also be done through dynamic programming. How can I use dynamic programming to approach this? What are the available algorithms? Is this really approximate string matching? Which approach is more productive: string matching algorithms like Boyer-Moore, or dynamic programming? How can I assess which is more effective?

    I greatly appreciate any insight or suggestions. Thanks in advance.
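
    For reference, the dynamic-programming formulation alluded to above is essentially edit-distance alignment. A rough Python sketch, assuming the notes are reduced to comparable values such as MIDI pitch numbers (the sample data is invented); the traceback labels each aligned position, which maps naturally onto missed / extra / wrong notes. A 2500 x 2500 cost matrix is only a few million cells, well within memory.

      def align(reference, played):
          """Edit-distance DP over two note sequences, with a traceback that
          labels each position as match, wrong, missed or extra."""
          n, m = len(reference), len(played)
          cost = [[0] * (m + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              cost[i][0] = i
          for j in range(1, m + 1):
              cost[0][j] = j
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  diff = 0 if reference[i - 1] == played[j - 1] else 1
                  cost[i][j] = min(cost[i - 1][j - 1] + diff,  # match / wrong note
                                   cost[i - 1][j] + 1,         # missed note
                                   cost[i][j - 1] + 1)         # extra note
          ops, i, j = [], n, m
          while i > 0 or j > 0:
              if i > 0 and j > 0 and cost[i][j] == cost[i - 1][j - 1] + (0 if reference[i - 1] == played[j - 1] else 1):
                  ops.append("match" if reference[i - 1] == played[j - 1] else "wrong")
                  i, j = i - 1, j - 1
              elif i > 0 and cost[i][j] == cost[i - 1][j] + 1:
                  ops.append("missed")
                  i -= 1
              else:
                  ops.append("extra")
                  j -= 1
          return list(reversed(ops))

      print(align([60, 62, 64, 65], [60, 64, 64, 65, 67]))
      # ['match', 'wrong', 'match', 'match', 'extra']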

  • Saving multiple items per single database cell...

    - by eugeneK
    Hi, I have a list of countries. Each user can check multiple countries. Once saved, this "user country list" will be used to decide whether other users fit into the countries a certain user chose. The question is what would be the most efficient approach to this problem.

    One option I have is to save the user's selection as a delimited list, like "Canada,USA,France ...", in a single varchar(max) field. The problem with that is that once a user from, say, Germany enters a page I perform this check on, searching for Germany would require fetching all rows and splitting each field to check against the value, or using SQL LIKE, which again is pretty damn slow. If you have a better solution or some tips, I would be glad to hear them.

    Just to make sure: many users will have their own selections of countries from which (and only from which) they want users to land on their page, while millions of users will reach those pages. So the faster the approach, the better. Technology: MSSQL and ASP.NET. Thanks.
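
    The usual normalized alternative to a delimited varchar is a link table with one row per (user, country) pair, so the check becomes a single indexed lookup. A rough sketch of the idea, using SQLite from Python purely for illustration (the table and column names are invented):

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.executescript("""
          CREATE TABLE user_country (user_id INTEGER, country TEXT);
          CREATE INDEX idx_country ON user_country (country);
      """)
      db.executemany("INSERT INTO user_country VALUES (?, ?)",
                     [(1, "Canada"), (1, "USA"), (2, "Germany"), (2, "France")])

      # Which users accept visitors from Germany? One indexed lookup,
      # no string splitting or LIKE scans required.
      print(db.execute("SELECT user_id FROM user_country WHERE country = ?",
                       ("Germany",)).fetchall())   # [(2,)]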

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2 hour period, 5-10 million inserts into a 34 GB table within a single master/slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single-field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two hour period. So, I have a couple of general questions.

    1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field at present, and approximating the order of insertion with something like a datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem.

    2) So, I've been thinking about turning the auto-incrementing primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID, though I don't really see what that achieves that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes?

    Thanks, Ben
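
    To illustrate why the GUID idea makes batching possible at all, here is a small Python sketch of pre-assigning keys in the application before a multi-row insert; the row shape and function name are invented for the example.

      import uuid

      def prepare_batch(rows):
          """Pre-assign a GUID key to each row so the program knows every id
          before the batch insert runs, instead of reading back auto-increment
          values one insert at a time."""
          return [dict(row, id=str(uuid.uuid4())) for row in rows]

      batch = prepare_batch([{"payload": "a"}, {"payload": "b"}])
      for row in batch:
          print(row["id"], row["payload"])   # ids are known up front
      # the whole `batch` can now go to the database in one multi-row INSERT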

  • MVC2 Modelbinder for List of derived objects

    - by user250773
    I want a list of different (derived) object types working with the default model binder in ASP.NET MVC 2. I have the following view model:

      public class ItemFormModel
      {
          [Required(ErrorMessage = "Required Field")]
          public string Name { get; set; }

          public string Description { get; set; }

          [ScaffoldColumn(true)]
          //public List<Core.Object> Objects { get; set; }
          public ArrayList Objects { get; set; }
      }

    And the list contains objects of different derived types, e.g.

      public class TextObject : Core.Object
      {
          public string Text { get; set; }
      }

      public class BoolObject : Core.Object
      {
          public bool Value { get; set; }
      }

    It doesn't matter whether I use the List or the ArrayList implementation: everything gets nicely scaffolded in the form, but the model binder doesn't resolve the derived object types' properties for me when posting back to the ActionResult.

    What could be a good solution for structuring the view model to get a list of different object types handled? Having an extra list for every object type (e.g. List<TextObject>, List<BoolObject>, etc.) does not seem like a good solution to me, since this is a lot of overhead both in building the view model and in mapping it back to the domain model. Thinking about the other approach of binding all properties in a custom model binder, how can I make use of the data annotations approach here (validating required attributes, etc.) without a lot of overhead?

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL Server 2008. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple of other tables that define the question text for a QuestionId and the demographics for a RespondentId.

    Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export; it's working. The problem is that it seems slow, and we don't have that much data (fewer than 50k respondents). Right now I'm thinking "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?"

    The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level, find out which respondents are being exported and then select their combined cached de-aggregated data.

    To me the quick and dirty fix is a totally flat table: RespondentId, Question1, Question2, etc. The problem is, we have multiple clients, so that doesn't scale, AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the result of a SELECT * FROM Data FOR XML AUTO WHERE RespondentId = x. With that in place, I would then be able to produce my export with filtering and XML calls into the XML column.

    What are you doing to export aggregated data in a flattened format (CSV, Excel, etc.)? Does this approach seem OK? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!
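
    To make the "de-aggregation" step concrete, here is a rough Python sketch of the pivot the dynamic SQL performs: normalized (respondent, question, answer) rows are turned into one flat CSV row per respondent. The sample rows, file name and column labels are invented for illustration only.

      import csv
      from collections import defaultdict

      # Rows as they might come back from the normalized answer table.
      rows = [
          (1, "Favourite colour", "blue"),
          (1, "Completed on", "2010-04-01"),
          (2, "Favourite colour", "green"),
      ]

      pivot = defaultdict(dict)
      questions = []
      for respondent, question, answer in rows:
          if question not in questions:
              questions.append(question)
          pivot[respondent][question] = answer

      with open("export.csv", "w", newline="") as fh:
          writer = csv.writer(fh)
          writer.writerow(["RespondentId"] + questions)
          for respondent, answers in pivot.items():
              writer.writerow([respondent] + [answers.get(q, "") for q in questions])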

  • Progressively stream the output of an ASP.NET page - or render a page outside of an HTTP request

    - by Evgeny
    I have an ASP.NET 2.0 page with many repeating blocks, including a third-party server-side control (so it's not just plain HTML). Each block is quite expensive to generate, in terms of both CPU and RAM. I'm currently using a standard Repeater control for this. There are two problems with this simple approach:

      - The entire page must be rendered before any of it is returned to the client, so the user must wait a long time before they see any data. (I write progress messages using Response.Write, so there is feedback, but no actual results.)
      - The ASP.NET worker process must hold everything in memory at the same time. There is no inherent need for this: once one block is processed it won't be changed, so it could be returned to the client and the memory could be freed.

    I would like to somehow return these blocks to the client one at a time, as each is generated. I'm thinking of extracting the content inside the Repeater into a separate page and fetching it repeatedly via AJAX, but there are some complications involved in that, and I wonder if there is a simpler approach. Ideally I'd like to keep it as one page (from the client's point of view), but return it incrementally.

    Another way would be to do something similar, but on the server: still create a separate page, but have the server access it and then Response.Write() the HTML it gets to the response stream for the real client request. Is there a way to avoid an HTTP request here, though? Is there some ASP.NET method that would render a UserControl or a Page outside of an HTTP request and simply return the HTML to me as a string? I'm open to other ideas on how to do this as well.
