Search Results

Search found 1399 results on 56 pages for 'separation of concerns'.

Page 46 of 56

  • asp.net mvc: What is the correct way to return html from controller to refresh select list?

    - by Mark Redman
    Hi, I am new to ASP.NET MVC, particularly ajax operations. I have a form with a jQuery dialog for adding items to a drop-down list. This posts to the controller action. If nothing is returned from the controller action (i.e. a void method), the page returns having updated the database, but obviously there is no change to the form. What would be the best practice for updating the drop-down list with the added id/value and selecting the item? I think my options are:

    1) Construct and return the HTML manually that makes up the new <select> tag [this would be easy enough and would work, but it seems like I am missing something]
    2) Use some kind of "helper" to construct the new HTML [this seems to make sense]
    3) Only return the id/value, add this to the list and select the item [this seems like overkill, considering the item needs to be placed in the correct order etc.]
    4) Use some kind of partial view [does this mean creating additional forms within ascx controls? I am not sure how this would affect submitting the main form it sits on. Also, unless this is reusable by passing in parameters (not sure how that's done), maybe 2 is the option?]

    UPDATE: Having looked around a bit, it seems that generating HTML within the controller is not a good idea. I have seen other posts that render partial views to strings, which I guess is what I need and which separates concerns (since the HTML bits are in the ascx). Any comments on whether that is good practice?
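
    A minimal sketch of that partial-view approach; the controller, repository and view names are assumptions, not the asker's code:

        // ItemsController.cs -- the jQuery dialog posts here; the action saves the
        // item, then returns just the re-rendered <select> markup.
        using System.Web.Mvc;

        public class ItemsController : Controller
        {
            [HttpPost]
            public ActionResult AddItem(string name)
            {
                ItemRepository.Add(name);                  // hypothetical data access
                var items = ItemRepository.GetAll();       // reload in display order
                return PartialView("_ItemSelect", items);  // the ascx renders the <select>
            }
        }

    On the client, something like $.post(url, { name: name }, function (html) { $("#itemSelect").replaceWith(html); }); swaps the fresh markup in, after which the new item can be selected.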

    Read the article

  • How to store dynamic references to parts of a text

    - by Antoine L
    In fact, my question concerns an algorithm. I need to be able to attach annotations to certain parts of a text, like a word or a group of words. The first thing that came to me to do so is to store the position of this part (indexes) in the text. For instance, in the text 'The quick brown fox jumps over the lazy dog', I'd like to attach an annotation to 'quick brown fox', so the indexes of the annotation would be 4 - 14. But since the text is editable (other annotations could provoke a modification from the text's author), the annotated part is likely to move (the indexes could change). In fact, I don't know how to update the indexes of the annotated part. What if the text becomes 'Everyday, the quick brown fox jumps over the lazy dog'? I guess I have to watch every change of the text in the front-end application? The front-end part of the application will be HTML with Javascript. I will be using PHP to develop the back-end part, and every text and annotation will be stored in a database.
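
    One common approach, sketched here in plain JavaScript with assumed property names: have the front end report where each edit happened and how many characters it added or removed, then shift the stored ranges accordingly.

        // annotations: [{ start, end, note }], editPos: index of the change,
        // delta: characters inserted (positive) or removed (negative)
        function shiftAnnotations(annotations, editPos, delta) {
            return annotations.map(function (a) {
                if (editPos <= a.start) {   // edit before the range: shift it whole
                    return { start: a.start + delta, end: a.end + delta, note: a.note };
                }
                if (editPos < a.end) {      // edit inside the range: grow or shrink it
                    return { start: a.start, end: a.end + delta, note: a.note };
                }
                return a;                   // edit after the range: nothing moves
            });
        }

    So for the example above, inserting 'Everyday, ' at position 0 (delta = 10) turns the 4 - 14 range into 14 - 24, which again covers 'quick brown fox'.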

    Read the article

  • Schema-less design guidelines for Google App Engine Datastore and other NoSQL DBs

    - by jamesaharvey
    Coming from a relational database background, as I'm sure many others are, I'm looking for some solid guidelines for setting up / designing my datastore on Google App Engine. Are there any good rules of thumb people have for setting up these kinds of schema-less data stores? I understand some of the basics, such as denormalizing since you can't do joins, but I was wondering what other recommendations people had. The particular simple example I am working with concerns storing searches and their results. For example, I have the following 2 models defined in my Google App Engine app using Python:

        class Search(db.Model):
            who = db.StringProperty()
            what = db.StringProperty()
            where = db.StringProperty()
            createDate = db.DateTimeProperty(auto_now_add=True)

        class SearchResult(db.Model):
            title = db.StringProperty()
            content = db.StringProperty()
            who = db.StringProperty()
            what = db.StringProperty()
            where = db.StringProperty()
            createDate = db.DateTimeProperty(auto_now_add=True)

    I'm duplicating a bunch of properties between the models for the sake of denormalization, since I can't join Search and SearchResult together. Does this make sense? Or should I store a search ID in the SearchResult model and effectively 'join' the 2 models in code when I retrieve them from the datastore? Please keep in mind that this is a simple example. Both models will have a lot more properties, and the way I'm approaching this right now, I would put any property I put in the Search model in the SearchResult model as well.
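
    For the 'join in code' alternative, the old db API's ReferenceProperty is the usual tool; a sketch of the second model without the duplicated columns:

        class SearchResult(db.Model):
            search = db.ReferenceProperty(Search, collection_name='results')
            title = db.StringProperty()
            content = db.StringProperty()
            createDate = db.DateTimeProperty(auto_now_add=True)

        # starting from a Search, its results come back via the reverse reference
        results = some_search.results.fetch(20)

    Which way to lean usually comes down to read patterns: if a SearchResult is always displayed alongside its Search, the duplication in the question costs little and saves a round trip; if results are listed on their own, the reference keeps writes cheap and consistent.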

    Read the article

  • Which version of Grady Booch's OOA/D book should I buy?

    - by jackj
    Grady Booch's "Object-Oriented Analysis and Design with Applications" is available brand new in both the 2nd edition (1993) and the 3rd edition (2007), while many used copies of both editions are available. Here are my concerns:

    1) The 2nd edition uses C++: given that I just finished reading my first two C++ books (Accelerated C++ and C++ Primer), I guess practical tips can only help, so the 2nd edition is probably best (I think the 3rd edition has absolutely no code). On the other hand, the C++ books I read insist on the importance of using standard C++, whereas Booch's 2nd edition was published before the 1998 standard.
    2) The 2nd edition is shorter (608 pages vs. 720) so, I guess, it will be slightly easier to get through.
    3) The 3rd edition uses UML 2.0, whereas the 2nd edition is pre-UML. Some reviews say that the notation in the 2nd edition is close enough to UML that it doesn't matter, but I don't know if I should be worrying about this or not.
    4) The 2nd edition is available in good-shape used copies for considerably less than what the 3rd one goes for.

    Given all the above factors, do you think I should buy the 2nd or the 3rd edition? Recommendations on other books are also welcome, but I would prefer it if whoever answers has read at least one of the versions of Booch's book (preferably both!). I have already bought but not read GoF and Riel's books. I also know that I should practice a lot with real-life code. Thanks.

    Read the article

  • Thoughts on GoGrid vs EC2

    - by Jason
    I am currently hosting my SaaS application at GoGrid (Microsoft stack). Here's what I have:

    - Database server: physical box, 12 GB RAM, 2 x quad-core CPU (2.13 GHz Xeon E5506)
    - 2 web / app servers: cloud servers, 2 GB RAM, 2 VCPUs
    - 300 GB monthly bandwidth

    I am paying around $900 / month for this. My web / app servers are bursting at the seams and need to be upgraded to 4 GB of RAM. I also need a firewall, and GoGrid just added this service for an additional $200. After the upgrade, I will be paying around $1,400. I started looking at Amazon EC2, specifically this config:

    - Database server: "High Memory Double Extra Large Instance", 34 GB RAM, 13 EC2 compute units
    - 2 web / app servers: "Large Instance", 7.5 GB RAM, 4 EC2 compute units

    If I go with 1-year reserved instances, my upfront cost would be $4,500 and my monthly cost would be $700. This comes to $1,075 / month when amortized. Amazon also includes a firewall for free. Here are my questions:

    - Do any of you have experience running a database (especially SQL Server) on an EC2 instance? How did it perform compared to a dedicated machine? One of my major concerns is with disk I/O.
    - Amazon's description of a compute unit is fairly vague. Any ideas on how the CPU performance on the database servers would compare?

    I am hoping that the Amazon solution will provide significantly better performance than my current or even improved GoGrid setup. Having a virtual database server would also be nice in terms of availability; right now I would be in serious trouble if I had any hardware issues. Thanks for any insight...

    Read the article

  • Changing the indexing on existing table in SQL Server 2000

    - by Raj
    Guys, here is the scenario: SQL Server 2000 (8.0.2055). The table currently has 478 million rows of data. The primary key column is an INT with IDENTITY. There is a unique constraint imposed on two other columns with a non-clustered index. This is a vendor application and we are only responsible for maintaining the DB. Now the vendor has recommended doing the following "to improve performance":

    1) Drop the PK and clustered index
    2) Drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT
    3) Recreate the PK with a NON-CLUSTERED index
    4) Create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT

    I am not convinced that this is the right thing to do; I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data, and then creating a CLUSTERED INDEX on two columns would be a really mammoth task. Would creating another table with the same structure and the new indexing scheme, copying the data over, dropping the old table and renaming the new one be a better approach? I am also not sure how the stored procs will react. Will they continue using the cached execution plan, considering that they are not being explicitly recompiled? I am simply not able to understand what kind of "performance improvement" this change will provide. I think that this will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj
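
    For reference, a sketch of that copy-then-swap approach, with invented column names standing in for the real schema:

        -- new table already laid out the way the vendor wants
        CREATE TABLE dbo.BigTable_New (
            Id   INT IDENTITY NOT NULL,
            ColA INT NOT NULL,
            ColB INT NOT NULL,
            CONSTRAINT PK_BigTable_New PRIMARY KEY NONCLUSTERED (Id),
            CONSTRAINT UQ_BigTable_New UNIQUE CLUSTERED (ColA, ColB)
        );

        -- preserve the identity values while copying (batch this in practice)
        SET IDENTITY_INSERT dbo.BigTable_New ON;
        INSERT INTO dbo.BigTable_New (Id, ColA, ColB)
        SELECT Id, ColA, ColB FROM dbo.BigTable;
        SET IDENTITY_INSERT dbo.BigTable_New OFF;

        EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
        EXEC sp_rename 'dbo.BigTable_New', 'BigTable';

    This keeps the old table intact until the swap, so there is always a way back, though the copy still needs a maintenance window and roughly double the disk space.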

    Read the article

  • Winsock WSAAsyncSelect sending without an infinite buffer

    - by Xexr
    Hi, this is more of a design question than a specific code question; I'm sure I am missing the obvious, I just need another set of eyes. I am writing a multi-client server based on WSAAsyncSelect; each connection is made into an object of a connection class I have written, which contains associated settings and buffers etc. My question concerns FD_WRITE. I understand how it operates: one FD_WRITE is sent immediately after a connection is established; thereafter, you should send until WSAEWOULDBLOCK is received, at which point you store what is left to send in a buffer and wait to be told that it is OK to send again. This is where I have a problem: how large do I make this holding buffer within each connection's object? The amount of time until a new FD_WRITE is received is unknown, and I could be attempting to send a lot of stuff during this period, all the time adding to my outgoing buffer. If I make the buffer dynamic, memory usage could spiral out of control if, for whatever reason, I am unable to send() and reduce the buffer. So my question is: how do you generally handle this situation? Note I am not talking about the network buffer itself which Winsock uses, but one of my own creation used to "queue" up sends. Hope I explained that well enough, thanks all!
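
    One common answer is to make the queue bounded and push the problem back to the caller; a rough C++ sketch (the cap and buffer sizes are invented):

        #include <winsock2.h>
        #include <algorithm>
        #include <deque>

        class Connection {
            std::deque<char> pending;                      // unsent bytes for this client
            static const size_t maxPending = 256 * 1024;   // assumed per-connection cap
        public:
            // Returns false if the peer is too far behind; the caller can then
            // drop the data or drop the connection, but memory stays bounded.
            bool queueSend(SOCKET s, const char* data, size_t len) {
                if (pending.size() + len > maxPending)
                    return false;
                pending.insert(pending.end(), data, data + len);
                drain(s);
                return true;
            }
            void drain(SOCKET s) {                         // also call on each FD_WRITE
                while (!pending.empty()) {
                    char buf[4096];
                    size_t n = std::min(pending.size(), sizeof(buf));
                    std::copy(pending.begin(), pending.begin() + n, buf);
                    int sent = send(s, buf, (int)n, 0);
                    if (sent == SOCKET_ERROR)
                        break;                             // WSAEWOULDBLOCK: wait for FD_WRITE
                    pending.erase(pending.begin(), pending.begin() + sent);
                }
            }
        };

    Slow or stalled clients then surface as queueSend() failures, which is usually the point where a chat-style server disconnects them anyway.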

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document db, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products. The product model would be pretty bad for an RDBMS, as there'd be many different kinds of products with unique attributes. For example, there'd be books, which have isbn, authors, title, pages etc., as well as DVDs with play time, directors, artists etc., and quite a few more types. In the end, I'd have about 9 different products with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most. The three commonly used ways in an RDBMS would be:

    - EAV model, which would have pretty bad performance characteristics and would make it either impractical or perform even worse if I'd like to display the author of a book in a list of different products (think start page, recommended products etc.).
    - Ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row-wise), I don't have any experience with tables with more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications.
    - Create a table for each product type: I personally don't like this approach, as it complicates everything else.

    C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product dto that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed on list views etc.). In the app I'll use BookBehavior / DvdBehaviour, which are wrappers around a product dto but only expose the relevant properties. My questions now:

    - Are my performance concerns with the many-columns approach valid?
    - Did I overlook something, and is there a much better way to do it in an RDBMS?
    - Is MongoDB on Windows stable enough?
    - Does my approach with different behaviour wrappers make sense?
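
    To make the document-db side concrete, a sketch of how one book might sit in a products collection (all field values invented): each document carries only the attributes its type actually has, which is what makes the 9-types-in-one-collection layout workable.

        {
            "_id": ObjectId("4c0e25b4a1b3f2d4e8000001"),
            "type": "book",
            "title": "An Example Title",
            "isbn": "978-0000000000",
            "authors": ["A. Author"],
            "pages": 320
        }

    A DVD document in the same collection would swap isbn/authors/pages for play time, directors and artists, and list views can project just the shared fields (type, title and so on).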

    Read the article

  • Javascript library: to obfuscate or not to obfuscate - that is the question

    - by morpheous
    I need to write a GUI-related javascript library. It will give my website a bit of an edge (in terms of functionality I can offer) - up until my competitors play with it long enough to figure out how to write it by themselves. I can accept the fact that it will be emulated over time - that's par for the course (it's part of business). However, what I cannot bear is the idea of effectively handing over all the hard work that went into the library to my competitors, by using plain javascript that anyone can download and use. It is an established fact that no one in the industry I am "attacking" has this functionality, so the value of such a library is undeniable and is not up for discussion (i.e. that's not what I'm asking here). What I am seeking to find out are the pros and cons of obfuscating a javascript library, so that I can come to a final decision. Two of my biggest concerns are debugging, and subtle errors that may be introduced by the obfuscator. I would like to know:

    - How can I manage those risks (being able to debug faulty code, ensuring/minimizing against obfuscation errors)?
    - Are there any good-quality, industry-standard obfuscators you can recommend (preferably something you use yourself)?
    - What are your experiences of using obfuscated code in a production environment?

    Read the article

  • PHP recaptcha send mail issues

    - by Mike
    Hey guys, if anybody can help me out I'd love it... What I have is a form that, when sent, uses doublecheck.php:

        <?php
        require_once('recaptchalib.php');
        $privatekey = "";
        $resp = recaptcha_check_answer($privatekey,
                                       $_SERVER["REMOTE_ADDR"],
                                       $_POST["recaptcha_challenge_field"],
                                       $_POST["recaptcha_response_field"]);
        if (!$resp->is_valid) {
            die("Sorry please go back and try it again. (" . $resp->error . ")");
        }
        if ($resp->is_valid) {
            require_once('sendmail.php');
        }
        ?>

    And then my sendmail.php:

        <?php
        $ip = $_POST['ip'];
        $httpref = $_POST['httpref'];
        $httpagent = $_POST['httpagent'];
        $visitor = $_POST['visitor'];
        $notes = $_POST['notes'];
        $attn = $_POST['attn'];
        $todayis = date("l, F j, Y, g:i a");
        $subject = $attn;
        $notes = stripcslashes($notes);
        $message = " $todayis [EST] \n Attention: $attn \n Message: $notes \n From: $visitor (Your Prayer or Concern)\n Additional Info : IP = $ip \n Browser Info: $httpagent \n Referral : $httpref \n ";
        $from = "From:\r\n";
        mail("", "Prayers and Concerns", $message);
        ?>

        Date: <?php echo $todayis; ?>
        Attention: <?php echo $attn; ?>
        Message: <?php $notesout = str_replace("\n", "<br>", $notes); echo $notesout; ?>

    What I'm having a hard time with is that when it's successful I need to send out $notes, but $notes is always blank. Should I just put my sendmail PHP inside of my successful PHP? Or can someone explain to me why $notes is blank? I do have my recaptcha key in, and I also do have an email address; I kept some things private here. Also, there is a notes textarea in my HTML.

    Read the article

  • How do I make a web interface for a socket server

    - by mgroat
    I've got a socket server running (it's something that's basically like a chat server). Users can telnet into it, but I'd like to make a web interface. This is the first time I've ever done something like this, so I'm not really sure where to start. A few thoughts I've had:

    - Have some server-side Python (or PHP) on my webserver which accesses the socket server. I think I know enough about sockets to have Python interact with the server, but how do I go about getting the website that the user sees to update in real time? Should I just have the website refresh every few seconds? I would prefer to do things this way if I can figure it out.
    - Write a Java applet that interacts with the socket server, and embed the applet in the website. I would have to re-learn a language that I haven't touched in years, but my main goal here is learning, so that wouldn't be such a bad thing. The main problem I have with this is that it requires end users to have Java installed on their computers, which I'd rather avoid.

    Is one of these two solutions the right way to go? Anybody know where I can find a good tutorial to get started? Edit: There are no real security concerns with exposing the server to the internet.
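
    For the first option, the server side can be as small as this Python sketch (the host, port and "RECENT" command are invented; substitute whatever the chat server actually speaks), with the page polling it every few seconds via AJAX:

        import socket

        def fetch_recent(host="127.0.0.1", port=4000):
            # open a short-lived connection, ask for recent messages, read one reply
            with socket.create_connection((host, port), timeout=5) as s:
                s.sendall(b"RECENT\r\n")
                return s.recv(65536).decode("utf-8", "replace")

    Polling is the simple, reliable choice here; long-polling or a persistent connection can come later if the refresh interval starts to feel laggy.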

    Read the article

  • Managing inverse relationships without CoreData

    - by Nathaniel Martin
    This is a question for Objective-J/Cappuccino, but I added the cocoa tag since the frameworks are so similar. One of the downsides of Cappuccino is that CoreData hasn't been ported over yet, so you have to make all your model objects manually. In CoreData, your inverse relationships get managed automatically for you... if you add an object to a to-many relationship in another object, you can traverse the graph in both directions. Without CoreData, is there any clean way to set up those inverse relationships automatically? For a more concrete example, let's take the typical Department and Employees example. To use Rails terminology, a Department object has-many Employees, and an Employee belongs-to a Department. So our Department model has an NSMutableSet (or CPMutableSet) "employees" that contains a set of Employees, and our Employee model has a variable "department" that points back to the Department model that owns it. Is there an easy way to make it so that, when I add a new Employee model into the set, the inverse relationship (employee.department) automatically gets set? Or the reverse: if I set the department model of an employee, it automatically gets added to that department's employee set? Right now I'm making an object, "ValidatedModel", that all my models subclass, which adds a few methods that set up the inverse relationships using KVO. But I'm afraid that I'm doing a lot of pointless work, and that there's already an easier way to do this. Can someone put my concerns to rest?
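
    Whatever the framework, the bookkeeping itself boils down to a guarded setter that maintains both sides; a language-neutral sketch (plain JavaScript, names assumed) of what the KVO-based ValidatedModel would effectively automate:

        function setDepartment(employee, department) {
            if (employee.department === department) return;   // guard against loops
            if (employee.department) {
                var list = employee.department.employees;
                list.splice(list.indexOf(employee), 1);       // detach from old inverse
            }
            employee.department = department;
            if (department) department.employees.push(employee);  // attach to new inverse
        }

    The add-to-set direction is the mirror image: an addEmployee(department, employee) helper delegates to the same setter, and the guard clause keeps the two sides from recursing into each other.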

    Read the article

  • How are you using C++0x today? [closed]

    - by Roger Pate
    This is a question in two parts; the first is the most important and concerns now:

    - Are you following the design and evolution of C++0x? What blogs, newsgroups, committee papers, and other resources do you follow?
    - Even where you're not using any new features, how have they affected your current choices?
    - What new features are you using now, either in production or otherwise?

    The second part is a follow-up, concerning the new standard once it is final:

    - Do you expect to use it immediately? What are you doing to prepare for C++0x, other than as listed for the previous questions? Obviously, compiler support must be there, but there's still co-workers, ancillary tools, and other factors to consider. What will most affect your adoption?

    Edit: The original really was too argumentative; however, I'm still interested in the underlying question, so I've tried to clean it up and hopefully make it acceptable. This seems a much better avenue than duplicating; even though some answers responded to the argumentative tone, they still apply to the extent that they addressed the questions, and all answers are community property to be cleaned up as appropriate, too.

    Read the article

  • Programming style question on how to code functions

    - by shawnjan
    Hey all! So, I was just coding a bit today, and I realized that I don't have much consistency when it comes to a coding style when programming functions. One of my main concerns is whether it's proper to check that the input of the user is valid OUTSIDE of the function, or to just throw the values passed by the user into the function and check if the values are valid in there. Let me sketch an example: I have a function that lists hosts based on an environment, and I want to be able to split the environment into chunks of hosts. So an example of the usage is this:

        listhosts -e testenv -s 2 1

    This will get all the hosts from the "testenv", split it up into two parts, and display part one. In my code, I have a function that you pass a list, and it returns a list of lists based on your parameters for splitting. BUT, before I pass it a list, I first verify the parameters in my MAIN during the getopts processing; so in the main I check to make sure there are no negatives passed by the user, I make sure the user didn't request to split into, say, 4 parts while asking to display part 5 (which would not be valid), etc.

    tl;dr: Would you check the validity of a user's input in the flow of your MAIN class, or would you do a check in your function itself, and either return a valid response in the case of valid input, or return NULL in the case of invalid input? Obviously both methods work; I'm just interested to hear from experts as to which approach is better :) Thanks for any comments and suggestions you guys have!
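
    A third variant worth weighing, shown as a Python sketch with invented names: validate inside the function, but raise instead of returning NULL, so bad input fails loudly wherever it comes from.

        def split_hosts(hosts, parts, index):
            # the function defends itself; callers don't have to remember to pre-check
            if parts < 1 or not 1 <= index <= parts:
                raise ValueError("need 1 <= index <= parts; got parts=%d, index=%d"
                                 % (parts, index))
            size = -(-len(hosts) // parts)        # ceiling division
            return hosts[(index - 1) * size : index * size]

    Returning NULL forces every caller to remember a second check; an exception (or its equivalent in your language) keeps the validation in one place and still lets MAIN translate it into a friendly usage message.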

    Read the article

  • Is it impossible to secure .net code (intellectual property) ?

    - by JL
    I used to work in JavaScript a lot, and one thing that really bothered my employers was that the source code was too easy to steal. Even with obfuscation, nothing really helped, because we all knew that any competent developer would be able to read that code if they wanted to. JS scripts are one thing, but what about SOA projects that have millions invested in IP (intellectual property)? I love .NET, and especially C#, but I recently again had to answer the question "If we give this compiled program over to our clients, can their developers reverse engineer it?" I had gone out of my way to obfuscate the code, but I knew it wouldn't take that much for another determined C# developer to get at the code. So I earnestly pose the question: is it impossible to secure .NET code? The considerations I have are as follows:

    - Even regular native executables can be reversed, but not every developer has the skill to do this. It's a lot harder to disassemble a native executable than a .NET assembly.
    - Obfuscation will only get you so far, but it does help a little.
    - Why have I never seen any public acknowledgement by Microsoft that anything written in .NET is subject to relatively easy IP theft?
    - Why have I never seen a scrap of countermeasure training on any Microsoft site? Why does VS come with a community obfuscator as an optional component? OK, maybe I have just had my head in the sand here, but it's not exactly high on most developers' priority list.
    - Are there any plans to address my concerns in any future version of .NET?

    I'm not knocking .NET, but I would like some realistic answers. Thank you; question marked as subjective and community!

    Read the article

  • What are the essential Java libraries and utilities for a returning dynamic language user?

    - by jbwiv
    Guys, long-time Java developer here, but I've spent more time working with Ruby over the past 3 years or so, as far as web applications go. I really have enjoyed it, but there are concerns I've uncovered that I won't cover here. Now that I've found the Play! framework, I'm thrilled about the prospect of having a Rails-like experience with Java's speed and reliability. Aside from what Play! provides out of the box, I'm looking for recommendations on "can't miss" libraries and tools for the Java developer used to pragmatic, dynamic experiences. I've found Project Lombok, which looks like a very intriguing way to eliminate a lot of the boilerplate, unnecessary Java noise. What else should I know about? I know Google has released quite a few libraries over the past three years that I've heard mentioned on the Java Posse, but I can't recall exactly what they are. I'm sure I've missed others in my absence. So, what makes up your essential Java toolbox these days? Thanks for your answers!

    Read the article

  • Transitioning from desktop app written in C++ to a web-based app

    - by Karim
    We have a mature Windows desktop application written in C++. The application's GUI sits on top of a Windows DLL that does most of the work for the GUI (it's kind of the engine). It, too, is written in C++. We are considering transitioning the Windows app to be a web-based app for various reasons. What I would like to avoid is having to write the CGI for this web-based app in C++. That is, I would rather have the power of a 4GL like Python, or a .NET language, for creating the web-based version of this app. So, the question is: given that I need to use a C++ DLL on the backend to do the work of the app, what technology stack would you recommend for sitting between the user's browser and our C++ DLL? We can assume that the web server will be Windows. Some options:

    - Write a COM layer on top of the Windows DLL, which can then be accessed via .NET, and use ASP.NET for the UI
    - Access the exported DLL interface directly from .NET and use ASP.NET for the UI
    - Write a custom Python library that wraps the Windows DLL, so that the rest of the code can be written in Python
    - Write the CGI using C++ and a C++-based MVC framework like Wt

    Concerns:

    - I would rather not use C++ for the web framework if it can be avoided; I think languages like Python and C# are simply more powerful and efficient in terms of development time.
    - I'm concerned that by mixing managed and unmanaged code with one of the .NET solutions, I'm asking for lots of little problems that are hard to debug (purely anecdotal evidence for that). The same is true for using a Python layer. Anything that's slightly off the beaten path like that worries me, in that I don't have much evidence one way or the other on whether it is a viable long-term solution.
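
    For scale, the second option is often just a few attributed method declarations; a C# sketch with invented names for the DLL and its export:

        using System.Runtime.InteropServices;

        static class Engine
        {
            // maps straight onto an exported C function in the existing engine DLL
            [DllImport("engine.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern int RunCalculation(int jobId);
        }

    The ASP.NET pages then call Engine.RunCalculation(...) like any other static method. The usual debugging pain points are calling-convention and string-marshalling mismatches, so keeping the exported surface small and C-like pays off.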

    Read the article

  • SQLite Databases and Grid Hosting

    - by jocull
    I'm considering moving my site from a GoDaddy shared hosting account to a Media Temple grid hosting account in anticipation of traffic. However, I first have some concerns with the grid hosting setup. My site stores a large personal set of data on a per-user basis (possibly 3-4 MB per user). At this rate I was worried about blowing past a 1 GB MySQL limit in no time, so to deal with this I created distributed SQLite databases per user to store the large data objects. It's worked wonderfully so far; SQLite is super fast and simple. I know that reading from and writing to files is different in a grid hosting environment, so I need to know if this setup is going to cause serious problems. These databases are not (and will not be) highly trafficked. They are personal to the user and will only be touched from maybe two locations at the same time (one updating the data hourly at the most, and one or more reading on demand). I'd like to keep this setup, as getting additional space (beyond 4 GB) on a MySQL database seems to be a real trouble point. Will grid hosting cause me serious problems? Thanks.

    Read the article

  • Disposing ActiveX resources owned by another thread

    - by Stefan Teitge
    I've got a problem with threading and disposing resources. I've got a C# Windows Forms application which runs an expensive operation in a thread. This thread instantiates an ActiveX control (AxControl). This control must be disposed, as it uses a high amount of memory, so I implemented a Dispose() method and even a destructor. After the thread ends, the destructor is called. This is sadly called by the UI thread, so invoking activexControl.Dispose() fails with the message "COM object that has been separated from its underlying RCW", as the object belongs to another thread. How do I do this correctly, or is it just a bad design I'm using? (I stripped the code down to the minimum, including removing any safety concerns.)

        class Program
        {
            [STAThread]
            static void Main()
            {
                // do stuff here, e.g. open a form
                new Thread(new ThreadStart(RunStuff)).Start();
                // do more stuff
            }

            private static void RunStuff()
            {
                DoStuff stuff = new DoStuff();
                stuff.PerformStuff();
            }
        }

        class DoStuff : IDisposable
        {
            private AxControl activexControl;

            public DoStuff()
            {
                activexControl = new AxControl();
                activexControl.CreateControl(); // force instance
            }

            ~DoStuff()
            {
                Dispose();
            }

            public void Dispose()
            {
                activexControl.Dispose();
            }

            public void PerformStuff()
            {
                // invent perpetuum mobile here, takes time
            }
        }
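
    One hedged way out, sketched against the code above: dispose deterministically on the thread that created the control, and keep the finalizer from ever touching the RCW.

        private static void RunStuff()
        {
            DoStuff stuff = new DoStuff();
            try
            {
                stuff.PerformStuff();
            }
            finally
            {
                stuff.Dispose();   // same thread that created the ActiveX control
            }
        }

        public void Dispose()
        {
            activexControl.Dispose();
            GC.SuppressFinalize(this);   // the destructor must not run later
        }

    The destructor then becomes a safety net that should never actually fire, which matches the usual IDisposable pattern for any COM-backed resource.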

    Read the article

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication; the idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LINQ support.) So here's the problem: with peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID), then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value. So here's the debate:

    1) Abandon Entity Framework and use another ORM: either use NHibernate and give up LINQ support, or use linq2sql and give up future support (not to mention get bound to SQL Server on the DB side)
    2) Abandon GUIDs and go with another PK strategy
    3) Devise a method to generate sequential GUIDs (COMBs?) at the application layer

    I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
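
    On option 3, a commonly used COMB-style sketch in C# (not production-hardened): the last six GUID bytes carry a big-endian timestamp, because SQL Server weighs that byte group first when ordering uniqueidentifier values, which is what keeps index fragmentation down.

        using System;

        static class CombGuid
        {
            public static Guid NewComb()
            {
                byte[] guidBytes = Guid.NewGuid().ToByteArray();
                byte[] timeBytes = BitConverter.GetBytes(DateTime.UtcNow.Ticks);
                if (BitConverter.IsLittleEndian)
                    Array.Reverse(timeBytes);                // most-significant byte first
                Array.Copy(timeBytes, 2, guidBytes, 10, 6);  // keep the low 6 time bytes
                return new Guid(guidBytes);
            }
        }

    Keys generated this way stay unique across nodes (the random bytes survive) while sorting roughly by creation time, so the app layer can satisfy Entity Framework without giving up the newsequentialid() benefit.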

    Read the article

  • Utility of List<T>.Sort() versus List<T>.OrderBy() for a member of a custom container class

    - by ccomet
    I've found myself running back through some old 3.5 framework legacy code, and found some points where there are a whole bunch of lists and dictionaries that must be updated in a synchronized fashion. I've determined that I can make this process infinitely easier to both utilize and understand by converging these into custom container classes of new custom classes. There are some points, however, where I have concerns about organizing the contents of these new container classes by a specific inner property; for example, sorting by the ID number property of one class. As the container classes are primarily based around a generic List object, my first instinct was to write the inner classes with IComparable and write the CompareTo method that compares the properties. This way, I can just call items.Sort() when I want to invoke the sorting. However, I've been thinking instead about using items = items.OrderBy(Func). This way it is more flexible if I need to sort by any other property. Readability is better as well, since the property used for sorting will be listed in line with the sort call, rather than having to look up the IComparable code. The overall implementation feels cleaner as a result. I don't care for premature or micro-optimization, but I like consistency. I find it best to stick with one kind of implementation for as many cases as is appropriate, and use different implementations where necessary. Is it worth it to convert my code to use the LINQ OrderBy instead of using List.Sort? Is it a better practice to stick with the IComparable implementation for these custom containers? Are there any significant mechanical advantages offered by either path that I should be weighing the decision on? Or is their end-functionality equivalent to the point that it just becomes coder's preference?
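
    The mechanical differences are small but real; a side-by-side sketch (Item and Id stand in for the actual classes):

        items.Sort();                                  // in place, uses IComparable<T>, unstable
        items.Sort((a, b) => a.Id.CompareTo(b.Id));    // in place, key named at the call site
        items = items.OrderBy(i => i.Id).ToList();     // allocates a new list, stable sort

    Sort() reorders the existing list with no allocation; OrderBy() builds a separate ordered copy and, unlike Sort(), is stable (equal keys keep their relative order). For containers of the sizes most application code handles, either is fine, which pushes the choice back to the readability argument above.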

    Read the article

  • Newb Question: scanf() in C

    - by riemannliness
    So I started learning C today, and as an exercise I was told to write a program that asks the user for numbers until they type a 0, then adds the even ones and the odd ones together. Here it is (don't laugh at my bad style):

        #include <stdio.h>

        int main()
        {
            int esum = 0, osum = 0;
            int n, mod;

            puts("Please enter some numbers, 0 to terminate:");
            scanf("%d", &n);
            while (n != 0) {
                mod = n % 2;
                switch (mod) {
                case 0:
                    esum += n;
                    break;
                case 1:
                    osum += n;
                }
                scanf("%d", &n);
            }
            printf("The sum of evens:%d,\t The sum of odds:%d", esum, osum);
            return 0;
        }

    My question concerns the mechanics of the scanf() function. It seems that when you enter several numbers at once separated by spaces (e.g. 1 22 34 2 8), the scanf() function somehow remembers each distinct number in the line, and steps through the while loop for each one respectively. Why/how does this happen? Example interaction within the command prompt:

        Please enter some numbers, 0 to terminate:
        42 8 77 23 11 (enter)
        0 (enter)
        The sum of evens:50, The sum of odds:111

    I'm running the program through the command prompt; it's compiled for win32 platforms with Visual Studio.

    Read the article

  • C++ - passing references to boost::shared_ptr

    - by abigagli
    If I have a function that needs to work with a shared_ptr, wouldn't it be more efficient to pass it a reference to it (so as to avoid copying the shared_ptr object)? What are the possible bad side effects? I envision two possible cases:

    1) inside the function a copy is made of the argument, like in

        ClassA::take_copy_of_sp(boost::shared_ptr<foo> &sp)
        {
            ...
            m_sp_member = sp;  // this will copy the object, incrementing refcount
            ...
        }

    2) inside the function the argument is only used, like in

        Class::only_work_with_sp(boost::shared_ptr<foo> &sp)  // again, no copy here
        {
            ...
            sp->do_something();
            ...
        }

    I can't see in either case a good reason to pass the boost::shared_ptr by value instead of by reference. Passing by value would only "temporarily" increment the reference count due to the copying, and then decrement it when exiting the function scope. Am I overlooking something? Andrea.

    EDIT: Just to clarify, after reading several answers: I perfectly agree on the premature-optimization concerns, and I always try to first-profile-then-work-on-the-hotspots. My question was more from a purely technical code-point-of-view, if you know what I mean.

    Read the article

  • PHP file upload issue

    - by Varun
    I am working on a PHP based, ticket management system. While creating a ticket, one can upload an attachment. I want to put a limit (say 10 MB) per file upload. To implement this I plan the following- 1. In php.ini set post_max_size = 10M 2.In PHP script which receives the POST- Since the file is larger than post_max_size, $_FILES[] will be empty. But I can still check the content-length header and discard the upload, if size more than 10M. While testing this I tried uploading a file of 1 GB and analysed the http traffic and this is what I found. - the entire 1 GB data is first uploaded to a to the server temporarily and discarded once the http request completes. Though I couldn't exactly find out where the file was getting saved(as it was not there in the temporary directory in the server.), but my http traffic analyzer showed that the browser did send 1 GB data to the server. - the PHP script execution started only after completion of the http request(i.e after uploading the entire 1 GB) Now I have 2 concerns: a) People may exploit my server bandwidth by trying to upload large file, which I will have to discard anyways. b) Even worse, if someone starts uploading a huge file (say 100 GB), entire 100 GB data is first uploaded to the server temporarily, that means for that period, it will consume that much of memory on my server. What's the common solution for this. Am I missing something here?

    Read the article

  • Possible to capture all events in a web browser?

    - by David
    I am working on a pet project and am at the research stage. Quick summary I am trying to intercept all form submits, onclick, and every single keydown. My library of choice is either jquery, or jquery + prototypejs. I figure I can batch up the events into a queue/stack and send it back to the server in time interval batches to keep performance relatively stable. Concerns Form submits and change's would be relatively simple.. Something like $("form :inputs").bind("change", function() { ... record event... }); But how to ensure I get precedence over the applications handlers as I have a habit of putting return false on a lot of my form handlers when there is a validation issue. As I understand it, that effectively stops the event in its tracks. My project For my smaller remote clients I will put their products onto a VPS or run it in my home data center. Currently I use a basic authentication system, given a username/password they see the website and then hopefully send me somewhat sane notes on what is broken or should be tweaked. As a better solution, I've written a simple proxy web server that does the above but allows me to have one DNS entry and then depending on credentials it makes an internal request relaying headers and re-writing URLS as needed. Every single html/text or application/* request is compressed and shoved into a sqlite table so I can partially replay what they've done. Now I am shifting to the frontend and would like to capture clicks, keydown's, and submits on everything on the page.

    Read the article
