Search Results

Search found 2286 results on 92 pages for 'benefits'.


  • What do you do when your boss doesn't care about code quality?

    - by Chad Johnson
    My boss (a proprietor) is a developer like me. He comes, however, from a C background and severely lacks knowledge of the benefits of proper object-oriented design. That, or he simply ignores them. So my co-worker developed this feature prototype in a week, and it's not release-ready--at least not from a good-code standpoint. It works; it does the job--but it's a freaking prototype. It's totally not scalable. My boss wants to wow clients and "just get the feature out." I understand that. But we could take two weeks and finish this up, or we could take three and finish it AND do it so that it's scalable. I just KNOW we are going to want to add onto this feature in the coming months, and then a customer is going to "need it in a week," and even though we've agreed to refactor when we want to add onto the feature, IT WILL NEVER HAPPEN! This ALWAYS happens. I'm the code quality assurance guy, but my boss seems to see me as a radical and thinks I just waste time, whereas I am actually trying to follow good, known, solid design patterns. He just wants his stinking feature, though, and he doesn't want to spend the time or money to do things well. He pretty much listens to what I have to say, and then ultimately just makes the decision to take the shortest path (which cuts a lot of corners). I often develop large, important features for our software. THOSE THINGS TAKE TIME! They're not happy with the time past projects have taken, but the features I've put in all work really damn well and are very scalable. How do you all deal with this kind of situation?


  • PHP Object References in Frameworks

    - by bigstylee
    Before I dive into the discussion part, a quick question: is there a method to determine if a variable is a reference to another variable/object? For example $foo = 'Hello World'; $bar = &$foo; echo (is_reference($bar) ? 'Is reference' : 'Is original'); I have been using PHP5 for a few years now (personal use only) and I would say I am moderately versed in the topic of object-oriented implementation. However, the concept of a Model-View-Controller framework is fairly new to me. I have looked at a number of tutorials and at some of the open source frameworks (mainly CodeIgniter) to get a better understanding of how everything fits together. I am starting to appreciate the real benefits of using this type of structure. I am used to implementing object referencing with the following technique: class Foo{ public $var = 'Hello World!'; } class Bar{ public function __construct(){ global $Foo; echo $Foo->var; } } $Foo = new Foo; $Bar = new Bar; I was surprised to see that CodeIgniter and Yii pass references to objects, which can be accessed via the following method: $this->load->view('argument') The immediate advantage I can see is a lot less code and a more user-friendly interface. But I do wonder: is it actually more efficient, as these frameworks are presumably optimised, or is it simply to make the code more user-friendly? This was an interesting article: Do not use PHP references.
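
    Regarding the opening question: PHP has no built-in is_reference(), so the usual workaround is a mutate-and-observe probe. A minimal sketch, assuming the variables of interest are collected into an array by reference (the helper name is made up):

    ```php
    <?php
    // Hypothetical helper: mutate one entry and watch another to see
    // whether the two share a reference set.
    function is_reference_in(array &$vars, string $a, string $b): bool
    {
        $saved = $vars[$a];
        $probe = uniqid('probe_', true);
        $vars[$a] = $probe;               // mutate one variable...
        $linked = ($vars[$b] === $probe); // ...did the other follow?
        $vars[$a] = $saved;               // restoring $a restores $b if linked
        return $linked;
    }

    $foo = 'Hello World';
    $bar = &$foo;
    $vars = ['foo' => &$foo, 'bar' => &$bar];
    echo is_reference_in($vars, 'foo', 'bar') ? 'Is reference' : 'Is original';
    ```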


  • Using SetParent to steal the main window of another process but keeping the message loops separate

    - by insta
    Background: My coworker and I are maintaining a million-line legacy application we inherited. Its frontend is written in VB6, and as we're devoting almost all of our resources to converting it to C#, we are looking for quick & dirty solutions to our specific problem. The application behaves in a plugin-ish manner. There are up to 20ish separate ActiveX controls that can be loaded at once in a grid-style layout. The problem is that the ActiveX controls do all of their processing on their own UI thread, and as a lot of it is blocking waiting on network access, the UI gets very soupy. When our hosting C# app loads these controls, it becomes unresponsive because of how many controls are chewing up UI resources doing nothing. To top it off, the controls are fragile and will crash at the slightest provocation. When they are hosted in the main C# app, this creates serious instability. The best my coworker and I have come up with so far is starting a process per ActiveX control. This process, which we call the proxy, is another WinForms app. It uses named pipes to communicate with the main process. The proxy creates a window, loads an ActiveX control of our choice (via some reflection & AxHost magic), and tells the main process what its window handle is via the named pipe. The main process uses a combination of SetParent and SetWindowPos to move the proxy application into itself to emulate a plugin. Size updates are sent via the named pipe. This works well enough until the ActiveX application does some sort of lengthy processing and we click around on the main window while it's working. For a while the main window is responsive, but eventually it becomes unresponsive as the child window waits for its UI thread. How can we keep the child windows on their own complete thread while still getting the benefits of SetParent? (Please let me know if anything isn't clear!)
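
    For reference, a minimal sketch of the reparenting step described above (all names illustrative). Worth noting: when a window is reparented across processes, Win32 attaches the input queues of the two UI threads, which is one reason the host ends up waiting on the child's blocked message loop.

    ```csharp
    using System;
    using System.Runtime.InteropServices;

    // Host-side helper: adopts the proxy's window once its HWND arrives
    // over the named pipe.
    static class WindowAdopter
    {
        [DllImport("user32.dll", SetLastError = true)]
        static extern IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent);

        [DllImport("user32.dll", SetLastError = true)]
        static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter,
            int x, int y, int cx, int cy, uint uFlags);

        const uint SWP_NOZORDER = 0x0004;
        const uint SWP_SHOWWINDOW = 0x0040;

        public static void Adopt(IntPtr proxyHwnd, IntPtr hostPanelHwnd,
                                 int x, int y, int width, int height)
        {
            SetParent(proxyHwnd, hostPanelHwnd);
            SetWindowPos(proxyHwnd, IntPtr.Zero, x, y, width, height,
                         SWP_NOZORDER | SWP_SHOWWINDOW);
        }
    }
    ```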


  • how to implement enhanced session handling in PHP

    - by praksant
    Hi, I'm working with sessions in PHP, and I have different applications on a single domain. The problem is that cookies are domain-specific, so session ids are sent to every page on the domain (I don't know if there is a way to make cookies work differently), and session variables are therefore visible on every page of the domain. I'm trying to implement a custom session manager to overcome this behavior, but I'm not sure if I'm thinking about it right. I want to completely avoid PHP's session system and make a global object which would store session data and, at the end of the script, save it to a database. On first access I would generate a unique session_id and create a cookie. At the end of the script I would save the session data with the session_id, timestamps for the start of the session and last access, and data from $_SERVER such as REMOTE_ADDR, REMOTE_PORT and HTTP_USER_AGENT. On every access I would check the database for the session_id sent in the cookie from the client, check the IP, port and user agent (for security), and read the data into the session variable (if not expired). If the session_id has expired, I delete it from the database. That session variable would be implemented as a singleton (I know I get tight coupling with this class, but I don't know of a better solution). I'm trying to get the following benefits: session variables invisible to other scripts on the same server and same domain; custom management of session expiration; a way to see open sessions (something like a list of online users). I'm not sure if I'm overlooking any disadvantages of this solution. Is there a better way? Thank you!!
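
    For comparison, PHP's own hooks cover most of these goals without replacing the session system: session_set_cookie_params() can scope the cookie to each application's path, and session_set_save_handler() redirects storage to a database. A minimal sketch (the sessions table, its columns, and the $pdo connection are assumptions):

    ```php
    <?php
    // DB-backed handler via PHP's built-in hook (PHP 5.4+ API).
    class DbSessionHandler implements SessionHandlerInterface
    {
        private $db;
        public function __construct(PDO $db) { $this->db = $db; }

        public function open($path, $name) { return true; }
        public function close() { return true; }

        public function read($id)
        {
            $stmt = $this->db->prepare(
                'SELECT data FROM sessions WHERE id = ? AND expires_at > NOW()');
            $stmt->execute([$id]);
            return (string) $stmt->fetchColumn();   // '' when expired/missing
        }

        public function write($id, $data)
        {
            $stmt = $this->db->prepare(
                'REPLACE INTO sessions (id, data, expires_at)
                 VALUES (?, ?, NOW() + INTERVAL 30 MINUTE)');
            return $stmt->execute([$id, $data]);
        }

        public function destroy($id)
        {
            return $this->db->prepare('DELETE FROM sessions WHERE id = ?')
                            ->execute([$id]);
        }

        public function gc($max_lifetime)
        {
            return $this->db->exec(
                'DELETE FROM sessions WHERE expires_at < NOW()') !== false;
        }
    }

    // Scope the cookie so each app on the domain gets its own session id:
    session_set_cookie_params(0, '/app1/');
    session_set_save_handler(new DbSessionHandler($pdo), true);
    session_start();
    ```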


  • Large scale perspective lights casting shadow maps, in the most optimized way?

    - by meds
    I'm using projected texture shadows coupled with lights to light a large sports field at night. To do this I'm using shadow cameras, which I place at the positions of the stadium's lights and shine down on the field at the appropriate angle. The problem with this method is that the textures the shadows are rendered into have to be very large so they can keep sufficient detail over the entire stadium. This is badly under-optimized, since at any given moment the player's attention is directed at only a small portion of the field, meaning large chunks of the texture take up space with no benefit. The lights still need to be perspective-based, though, as they come from actual directional lights hanging over the stadium. The way to solve this, I believe, is to work out where within the shadow camera's view the actual rendering camera should be placed, and to adjust its view matrix according to that position. So my question is: how can I calculate the optimal position for the shadow camera, and calculate its view matrix, such that the shadows it projects appear to come from the light source rather than the camera?


  • What would be different in Java if Enum declaration didn't have the recursive part

    - by atamur
    Please see http://stackoverflow.com/questions/211143/java-enum-definition and http://stackoverflow.com/questions/3061759/why-in-java-enum-is-declared-as-enume-extends-enume for general discussion. Here I would like to learn what exactly would be broken (no longer typesafe, or requiring additional casts, etc.) if the Enum class were defined as public class Enum<E extends Enum> I'm using this code for testing my ideas: interface MyComparable<T> { int myCompare(T o); } class MyEnum<E extends MyEnum> implements MyComparable<E> { public int myCompare(E o) { return -1; } } class FirstEnum extends MyEnum<FirstEnum> {} class SecondEnum extends MyEnum<SecondEnum> {} With it I wasn't able to find any benefits in this exact case. PS: the fact that I'm not allowed to do class ThirdEnum extends MyEnum<SecondEnum> {} when MyEnum is defined with recursion is a) not relevant, because with real enums you are not allowed to do that anyway, since you can't extend an enum yourself, and b) not true--please try it in a compiler and see that it does in fact compile without any errors. PPS: I'm more and more inclined to believe that the correct answer would be "nothing would change if you remove the recursive part"--but I just can't believe that.
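
    One place the recursive bound does visibly matter is generic code written against the bound, in the style of Collections.max. A hedged sketch, with the question's classes renamed so the block is self-contained:

    ```java
    import java.util.List;

    interface Cmp<T> { int myCompare(T o); }

    class Base<E extends Base<E>> implements Cmp<E> {
        public int myCompare(E o) { return -1; }
    }

    class A extends Base<A> {}
    class B extends Base<B> {}
    class Wrong extends Base<B> {}  // legal, as noted in the question

    public class BoundDemo {
        // The recursive bound lets this helper demand that every element is
        // comparable to ITS OWN type, exactly as Collections.max does with
        // <T extends Comparable<? super T>>.
        static <T extends Base<T>> T max(List<T> xs) {
            T best = xs.get(0);
            for (T x : xs) if (best.myCompare(x) < 0) best = x;
            return best;
        }

        public static void main(String[] args) {
            max(List.of(new A(), new A()));   // fine
            // max(List.of(new Wrong()));     // rejected: Wrong is not a Base<Wrong>.
            // Relax the bound to <T extends Base> and this call compiles,
            // pushing the myCompare argument-type mismatch to runtime.
        }
    }
    ```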


  • What is the best practice in regards to building composite dtos off of an aggregate root with domain

    - by Chance
    I'm trying to figure out the best approach/practice for assembling a composite data transfer object off of an aggregate root, and would love to hear people's thoughts on this. For example, let's say I have a root that has a few domain objects as children. I want to assemble a specific view DTO, based on some business logic, that has either attributes or full DTOs of its objects. What I'm struggling with is figuring out where that assembly should happen. I can see it going on the domain object of the aggregate root, as there is some business logic associated with it. The benefit of this approach, from what I've deduced thus far, is that it should reduce the inevitable bleeding of business logic outside the domain object. It also allows for private methods that take care of tasks that would become more complex in an external builder. The downsides are that the domain object becomes much more entrenched in the application's workflow and represents much more than just the domain object. It could also become very large in the scenario where you need multiple composite DTOs. Alternatively, I could see it belonging to some form of transfer object assembler, where there is a builder for each domain object. The domain objects would still be responsible for GetDto() and UpdateFromDto(dto). Outside of that, the builder would handle the construction and deconstruction of composite DTOs. The downside is kind of mentioned above: I fear this will easily lead to developers unfamiliar with DDD bleeding a ton of business logic into the assembler, which is what I desperately want to avoid. Any thoughts would be greatly appreciated.


  • Question about the benefit of using an ORM

    - by johnny
    I want to use an ORM for learning purposes and am trying NHibernate. I am using the tutorial, and then I have a real project. I can go the "old way" or use an ORM. I'm not sure I totally understand the benefit. On the one hand, I can create my abstractions in code such that I can change my databases and be database-independent. On the other, it seems that if I actually change the database columns I have to change all my code anyway. Why wouldn't I have my application without the ORM, change my database and change my code, instead of changing my database, ORM, and code? Is it that the database structure doesn't change that much? I believe there are real benefits, because ORMs are used by so many people. I'm just not sure I get it yet. Thank you. EDIT: In the tutorial they have many files that are used to make the ORM work: http://www.hibernate.org/362.html In the event of an application change, it seems like a lot of extra work just to say that I have "proper" abstraction layers. Because I'm new at it, it doesn't look that easy to maintain, and again it seems like extra work, not less.


  • Creation of Objects: Constructors or Static Factory Methods

    - by Rachel
    I am going through Effective Java, and some things I consider standard are not what the book suggests. Take object creation: I was under the impression that constructors are the best way of doing it, but the book says we should make use of static factory methods. There are a few advantages and disadvantages I am not able to follow, so I am asking this question. Here are the stated benefits. Advantages: One advantage of static factory methods is that, unlike constructors, they have names. A second is that, unlike constructors, they are not required to create a new object each time they're invoked. A third is that, unlike constructors, they can return an object of any subtype of their return type. A fourth is that they reduce the verbosity of creating parameterized type instances--I am not able to understand this advantage and would appreciate it if someone could explain it. Disadvantages: The main disadvantage of providing only static factory methods is that classes without public or protected constructors cannot be subclassed. A second disadvantage is that static factory methods are not readily distinguishable from other static methods--I am not getting this point either, so I would really appreciate some explanation. Reference: Effective Java, Joshua Bloch, 2nd edition, pp. 5-10. Also, how do you decide whether to go with a constructor or a static factory method for object creation?
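
    On the fourth advantage: before Java 7, Map<String, List<String>> m = new HashMap<String, List<String>>(); had to spell the type parameters twice, while a static factory such as the book's proposed HashMap.newInstance() could infer them from the assignment target; the diamond operator (new HashMap<>()) has since absorbed this benefit. The first two advantages are easy to sketch in the style of Boolean.valueOf (all names below are illustrative):

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // A descriptive name, and no obligation to allocate on every call.
    final class Temperature {
        private static final Map<Integer, Temperature> CACHE = new HashMap<>();
        private final int celsius;

        private Temperature(int celsius) { this.celsius = celsius; }

        public static Temperature ofCelsius(int celsius) {
            return CACHE.computeIfAbsent(celsius, Temperature::new);
        }

        public int celsius() { return celsius; }
    }

    class FactoryDemo {
        public static void main(String[] args) {
            // Same argument, same instance: something a constructor can never do.
            System.out.println(
                Temperature.ofCelsius(20) == Temperature.ofCelsius(20)); // true
        }
    }
    ```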


  • STL vector performance

    - by iAdam
    The STL vector class stores a copy of the object, using the copy constructor, each time I call push_back. Wouldn't that slow down the program? I could have a custom linked-list kind of class which deals with pointers to objects. Though it would lack some of the benefits of the STL, it should still be faster. See this code below: #include <vector> #include <iostream> #include <cstring> using namespace std; class myclass { public: char* text; myclass(const char* val) { text = new char[10]; strcpy(text, val); } myclass(const myclass& v) { cout << "copy\n"; text = new char[10]; strcpy(text, v.text); /* actually copy the data */ } }; int main() { vector<myclass> list; myclass m1("first"); myclass m2("second"); cout << "adding first..."; list.push_back(m1); cout << "adding second..."; list.push_back(m2); cout << "returning..."; myclass& ret1 = list.at(0); cout << ret1.text << endl; return 0; } Its output comes out as: adding first...copy adding second...copy copy The output shows the copy constructor being called both times when adding--and a third time besides. Does this have any effect on performance, especially when we have larger objects?
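
    Note that the third "copy" in that output comes from reallocation--the existing element is copied into the new buffer when the second push_back outgrows capacity--while at(0) returns a reference and copies nothing. A small sketch that isolates the effect, using reserve() to remove the reallocation copies:

    ```cpp
    #include <iostream>
    #include <vector>

    // One "copy" per push_back, plus one per element on every reallocation.
    struct Tracer {
        Tracer() {}
        Tracer(const Tracer&) { std::cout << "copy\n"; }
    };

    int main() {
        std::vector<Tracer> v;
        v.reserve(2);          // capacity for both: no reallocation copies
        v.push_back(Tracer{}); // copy
        v.push_back(Tracer{}); // copy
        Tracer& ref = v.at(0); // no copy: at() returns a reference
        (void)ref;
        return 0;              // total output: exactly two "copy" lines
    }
    ```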


  • VB.Net - Launch app on Windows startup

    - by Queops
    We all know the tricky folders where your application runs when you publish your VB.NET app to other people, but I won't give up the benefits of the system (auto-update, you know). Problem is: the program is supposed to start up, or not, with Windows, if the user wishes so. I'm saving program preferences into My.Settings. All fine with that. If you debug it, it will save the values between sessions. The problem is after deployment. I installed the program on a testing machine. The application works okay and the settings load, if the user launches it themselves (using the shortcut on the desktop, for example). Upon restarting Windows, the program does indeed start up as I want it to, but the My.Settings don't show up! It's like the config file has been erased. If I close the program and re-open it by clicking the shortcut, it loads the settings just fine, though. So I wonder what the problem is. This is the code I use to save the registry key: regKey = Registry.CurrentUser.OpenSubKey("SOFTWARE\Microsoft\Windows\CurrentVersion\Run", True) regKey.SetValue("ScapeTracker", Chr(34) & Application.ExecutablePath & Chr(34) & " startup") It does what it's supposed to. The startup parameter is needed so the program knows whether it's launched on startup or not (to show up in the tray and idle there until the user decides to use it). So the problem is that I can't use the settings upon restart of Windows. I'm assuming VB.NET applications have some extra parameters when launching? How can I solve this?


  • How do you use stl's functions like for_each?

    - by thomas-gies
    I started using STL containers because they came in very handy when I needed the functionality of a list, set and map and had nothing else available in my programming environment. I did not care much about the ideas behind them. The STL documentation was only interesting up to the point where it came to functions, etc. Then I skipped reading and just used the containers. But yesterday, still being relaxed from my holidays, I gave it a try and wanted to go a bit more the STL way. So I used the transform function (can I have a little bit of applause for me, thank you). From an academic point of view it really looked interesting, and it worked. But the thing that bothers me is that if you intensify the use of those functions, you need dozens of helper classes for almost everything you want to do in your code. The whole logic of the program is sliced into tiny pieces. This slicing is not the result of good coding habits; it's just a technical need--something that probably makes my life harder, not easier. And I learned the hard way that you should always choose the simplest approach that solves the problem at hand. I can't see what, for example, the for_each function is doing for me that justifies the use of a helper class over several simple lines of code that sit inside a normal loop, so that everybody can see what is going on. I would like to know what you think about my concerns. Did you see it like I do when you started working this way, and did you change your mind when you got used to it? Are there benefits that I overlooked? Or do you just ignore this stuff, as I did (and will probably go on doing)? Thanks. PS: I know that there is a real for_each loop in Boost, but I ignore it here since it is just a convenient replacement for my usual loops with iterators, I guess.
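
    Since C++11, this particular complaint has a direct answer: a lambda keeps the body next to the call, so for_each no longer forces a named helper class. A short illustrative comparison:

    ```cpp
    #include <algorithm>
    #include <iostream>
    #include <vector>

    struct Doubler {                 // pre-C++11: one-off helper class
        void operator()(int& x) const { x *= 2; }
    };

    int main() {
        std::vector<int> v{1, 2, 3};

        std::for_each(v.begin(), v.end(), Doubler{});              // functor
        std::for_each(v.begin(), v.end(), [](int& x) { x *= 2; }); // lambda

        for (int x : v) std::cout << x << ' ';   // prints: 4 8 12
        return 0;
    }
    ```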


  • CSS Clearing Floats

    - by Frank
    I'm making more of an effort to separate my HTML structure from presentation, but sometimes when I look at the complexity of the hacks or workarounds needed to make things work cross-browser, I'm amazed at the huge collective waste of productive hours that is put into this. As I understand it, floats were never created for building layouts, but because many layouts need a footer, that's how they're often used. To clear the floats, you can add an empty div that clears both sides (div class="clear"). That is simple and works cross-browser, but it adds "non-semantic" HTML rather than solving the presentation problem within the CSS. I realize this, but after looking at all of the solutions with their benefits and drawbacks, it seems to make more sense to go with the empty div (predictable behavior across browsers) rather than create separate stylesheets, various CSS hacks and workarounds, etc., which would also need to change as CSS evolves. Is it OK to do this as long as you understand what you're doing and why you're doing it? Or is it better to find the CSS workarounds and hacks, and separate structure from presentation at all costs, even when the CSS presentation tools provided have not evolved to the point where they can handle such basic layout issues?
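
    For reference, the usual middle ground is the "clearfix" pattern, which keeps the clearing in the stylesheet via generated content instead of an extra div (class name illustrative; very old IE versions need extra shims):

    ```css
    /* Clearfix: the container clears its own floated children, no empty div. */
    .clearfix::after {
        content: "";
        display: block;
        clear: both;
    }
    ```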


  • Good Replacement for User Control?

    - by David Lively
    I found user controls to be incredibly useful when working with ASP.NET WebForms. By encapsulating the code required for displaying a control together with its markup, creating reusable components was very straightforward and very, very useful. While MVC provides convenient separation of concerns, this seems to break encapsulation (i.e., you can add a control without adding or using its supporting code, leading to runtime errors). Having to modify a controller every time I add a control to a view seems to me to integrate concerns, not separate them. I'd rather break the purist MVC ideology than give up the benefits of reusable, packaged controls. I need to be able to include components similar to WebForms user controls throughout a site, but not for the entire site, and not at a level that belongs in a master page. These components should have their own code, not just markup (to interact with the business layer), and it would be great if the page controller didn't need to know about the control. Since MVC user controls don't have codebehind, I can't see a good way to do this. I've searched previous SO questions and have yet to find a good answer. Options so far (in an attempt to avoid turning the comments section into a discussion): RenderAction--allows the view to call another controller, which is responsible for interacting with the BLL and whatever data is necessary for its corresponding view. The calling view needs to be aware of the sub-controller. This seems to provide a nice way to encapsulate partial views and controls without having to modify the calling controller. RenderPartial--the calling controller is still responsible for executing whatever code is associated with the partial view, and for making sure the model passed to the partial view contains the data it expects. Effectively, modifying the partial view potentially means modifying the calling controller--annoying, especially if it is used in multiple places. Portable Areas--place each control in its own project/area?
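
    Of these, RenderAction (built into ASP.NET MVC 2, available via MVC Futures for MVC 1) comes closest to a self-contained control: the widget gets its own controller, so embedding pages never need to know about it. A hedged sketch; NewsService and all other names are made up:

    ```csharp
    using System.Web.Mvc;

    public class NewsWidgetController : Controller
    {
        private readonly NewsService _news = new NewsService();

        [ChildActionOnly]  // reachable only via Html.RenderAction, not by URL
        public ActionResult Latest()
        {
            return PartialView(_news.GetLatest(5));  // widget-specific BLL call
        }
    }

    public class NewsService
    {
        public string[] GetLatest(int count) { return new string[count]; }
    }

    // In any view, without touching that view's controller:
    //   <% Html.RenderAction("Latest", "NewsWidget"); %>
    ```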


  • Are there any code critique sites or similar resources?

    - by Ukko
    I have noticed that when people post example code illustrating some issue they're having, they often gather a number of comments addressing the quality of the code they presented rather than the actual problem asked. This is very helpful--if not always well directed. Often the effort is wasted, since the asker is often not receptive, and the code has been chopped down to something small to post, leaving lots of rough edges. In the old days you would see people asking questions like this on comp.lang.lisp and other parts of the comp.lang hierarchy, but that bit of the net kind of sank into the sewers of neglect. Is there a comparable one-stop shop today? I am partially asking for selfish reasons: I know how to write good idiomatic C, Lisp, OCaml, and Java code, but I learned C++ pre-template and pre-STL, and those rusty skills are not really applicable to today's C++. I have picked up languages like Scala in a vacuum and get by, but am I really doing it correctly? There are so many ways you can abuse a language. I am currently working against a codebase of Fortran written in C, and I recognize and loathe the "that guy" who wrote it. I don't want to be someone else's "that guy" if I can help it. Just because it works does not mean that one did not totally miss the boat on how it should have been done. Do you seek out this type of critique? If so: how, where, and why? What types of benefits do you derive from it? How about abuse and trolls?


  • Git repos over multiple machines - backups and keeping in sync

    - by a-or-b
    I'm new to git, so please feel free to RTFM me... I have multiple development sites (none of which can communicate with each other over a network) and am working on a few projects (with a few people) at any one time. What I would ideally have is, at each site, a centralized repository that can be pulled from, with development occurring in our own (personal) repos. Then I would like to be able to sync the centralized repos across sites (via USB key, for example). I want a centralized repo at each location because (1) I'm new to git and do break my (personal) local repo by playing around, and (2) some projects get put on hold, so I want to be able to free up disk space by deleting them. This is the "backup" part of my question. I was also hoping to use 'git clone --bare' for my centralized repos (and the USB key repos too?), as we don't need the full checkout, just the git benefits. However, I can't seem to get a bare repo to work as a repo I can push from. I've used 'git remote' to set up a remote origin (similar to http://toolmantim.com/thoughts/setting_up_a_new_remote_git_repository) but I can't get 'git push' to work--it seems I need a checked-out repo. Does anyone else use this sort of repo/development structure, or is there something fundamental about git usage that I'm missing? A solution that I thought about but that might not work: if I had a 'git clone --bare' at each site and then used a git repo on my removable media with remotes set up for each site, I could ('pull') sync my USB key with each repo. But then can I update the site repo from my USB key? Could I push from USB?
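
    For what it's worth, a bare repo is the natural push target--by default git refuses to push into a checked-out branch, which may be the wall being hit here. A sketch of the sneakernet flow under that assumption; every path below is illustrative:

    ```sh
    # Each site keeps a central bare repo:
    git clone --bare /home/me/project /srv/git/project.git

    # Developers work in personal clones and push finished work back:
    git clone /srv/git/project.git ~/work/project
    cd ~/work/project && git push origin master

    # The USB key carries a bare mirror between sites:
    git clone --bare /srv/git/project.git /media/usb/project.git

    # Before travelling, refresh the key; at the other site, fetch from it:
    cd /srv/git/project.git && git push --mirror /media/usb/project.git
    cd /srv/git/site-b/project.git && \
        git fetch /media/usb/project.git '+refs/heads/*:refs/heads/*'
    ```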


  • Telerik RadGrid: grid clientside pagination

    - by ram
    I have a web service which returns me some data; I am massaging this data and using it as the data source for my RadGrid (Telerik). The data source is quite large, and I would like to paginate it. I found a couple of problems. When I paginate on the server side, I have to bind the grid again for pagination, which essentially means I have to make a call to the web service again to get the data. This is an expensive call for me. I would rather forgo the benefits of pagination and display all the results on the same page, except that it would be a bit clumsy. During the postback, RadGrid1.Items.Count happens to be the number of items on the current page (25 in my case), which is expected, as not all the items in the data source get bound. This of course is not the issue. The real issue is that we have some checkboxes which get checked based on some business condition, and we add these to our business object/DB later. So if the user has not navigated through all the pages, those "checked" items do not get added, as pagination limits the "Items" in the grid to the ones bound for that particular page index. My thoughts: I would rather have some sort of client-side pagination, where we can hide/show contents, than go to the server and do a databind every time. Though it would return all the results, the UI would not be clumsy, and the grid would have all the items during postback. Is there a way to do it? If it were a regular ASP.NET GridView, can someone point me to a good article that would serve my purpose? Ram PS: Who else thinks RadGrid is crazy? (Unfortunately I did not make this choice.)


  • Advantage of WPF app vs Winform for business apps?

    - by Abdu
    I know ASP.NET and WinForms development. I am not the type of developer who jumps into a new technology just because it's new; it needs to give me extra benefits like higher productivity. What are the advantages of WPF over WinForms for pure business apps? I am not interested in the extra eye candy--animation, gradients, image display effects and so on--which WPF provides. The business apps are for data entry and data reporting, with maybe some charts and static display of photos. How will WPF help in these apps? Better, richer data binding? WinForms is a mature, proven technology, and I like the fact that I can do everything in Visual Studio, versus multiple IDEs for WPF (the VS & Blend family). Plus, I think WPF doesn't have data-bound controls as rich as their WinForms counterparts (DataGridView, etc.). AFAIK, Microsoft will still support WinForms for many years. Try to convince someone like me to switch.


  • What single software development tool do you think holds the most value?

    - by Phobis
    Every day I realize how much I love Visual Studio for .NET development... but I believe that ReSharper may hold a value that surpasses Visual Studio's (I am using VS 2005 for WPF/WCF development). I decided it would be great to compile a list of the most valuable tools for software development. These can be applications, plug-ins--anything that you think holds GREAT value. Also, please explain the benefits of the tool that you are posting. ReSharper: integrated unit testing; "camel hump" code completion; find usages (the inverse of "Go to Declaration"); code formatting and member rearranging; assembly and namespace inclusion (based on your code); checks for common optimizations and possible bugs in code and suggests/rewrites the code for you (things like null checking, redundant delegate creation, inverting if statements, etc.); tells you when code can be more generic (it may suggest things like "use this interface instead" if your code never refers to something specific on an object); helps you see code that is not being used and will clean up any unused members; the file structure view helps you jump around the regions of your file (this is really awesome and clean); class searching (you can use things like camel humps); asks you which partial file to open once you find a class. It also has its own plugin support, so you can add things like FxCop, documentation and Reflector integration (all free). This thing has so much I don't think I've hit 10% of it yet :) [When I get time, I will try to add more... feel free to help me out]


  • Using virtualization infrastructure for J2EE application distribution- viable alternative?

    - by Dan
    Our company builds custom J2EE web solutions. At the moment, we use standard J2EE distribution mechanisms (ear/war archives). Application servers are generally administered by our clients' IT departments, and since we do not have complete control over the environment, a lot of entropy can be introduced into the solution: the latest app server patch not applied; conflicting third-party libraries inside the app server root; server runtime and tuning parameters not configured (for example, the number of connections in the database pool). We are looking into using virtualization infrastructure for J2EE application distribution. Instead of sending an ear/war archive, we'd send an image with an application server node and our application preinstalled. Some of the benefits are the same as for virtualization infrastructure in general, namely better use of hardware resources. For us, it also reduces the entropy of the hosting infrastructure--distributing a VM should be less affected by the hosting environment. So far, the downside I see is in application server licenses: clients would have to use dedicated servers for our solution, but this is generally already done that way. There is also the complexity of maintaining virtualization infrastructure, but this is often something IT departments have more experience with than administering and fine-tuning J2EE solutions. Does anyone have experience with this model? What are the downsides? Will we not just replace one type of complexity with another?


  • codeigniter active record and mysql

    - by sea_1987
    I am running a query with Active Record in a model of my CodeIgniter application. The query looks like this: public function selectAllJobs() { $this->db->select('*') ->from('job_listing') ->join('job_listing_has_employer_details', 'job_listing_has_employer_details.employer_details_id = job_listing.id', 'left'); /* ->join('employer_details', 'employer_details.users_id = job_listing_has_employer_details.employer_details_id'); */ $query = $this->db->get(); return $query->result_array(); } This returns an array that looks like this: [0]=> array(13) { ["id"]=> string(1) "1" ["job_titles_id"]=> string(1) "1" ["location"]=> string(12) "Huddersfield" ["location_postcode"]=> string(7) "HD3 4AG" ["basic_salary"]=> string(19) "£20,000 - £25,000" ["bonus"]=> string(12) "php, html, j" ["benefits"]=> string(11) "Compnay Car" ["key_skills"]=> string(1) "1" ["retrain_position"]=> string(3) "YES" ["summary"]=> string(73) "Lorem Ipsum is simply dummy text of the printing and typesetting industry" ["description"]=> string(73) "Lorem Ipsum is simply dummy text of the printing and typesetting industry" ["job_listing_id"]=> NULL ["employer_details_id"]=> NULL } } The job_listing_id and employer_details_id return as NULL; however, if I run the SQL in phpMyAdmin I get a full set of results. The query I am running in phpMyAdmin is: SELECT * FROM ( `job_listing` ) LEFT JOIN `job_listing_has_employer_details` ON `job_listing_has_employer_details`.`employer_details_id` LIMIT 0 , 30 Is there a reason why I am getting differing results?


  • Generating jquery 'rules' from business model to UI in asp.net mvc

    - by jim
    Hi all, I've had a good look around and am certain that there's no matching question on SO, so here goes. Has anyone created a 'helper' method on their model that generates jQuery (or plain JavaScript) validation rules dynamically, based on the criteria/rules that are contained within the object and taken from a repository (i.e. DB)? What I'm thinking of is a discrete set of partial views (and associated models) that have rules at the business-logic 'level', and--rather than (or in combination with) validating the rules at postback--translating the same rules into tightly focused jQuery methods that work identically at the client (JS) and server (C#) levels. I can see benefits here re performance. Also, the rule definitions could be created in a single place (in C#) and the jQuery generated off of that, thus allowing single edits to update both code streams. I appreciate that there would be limitations imposed by language-specific constraints, but the general principle could be quite interesting if used appropriately. I'm also aware that testability could be an issue when using two different language structures and hoping to achieve similar test outcomes--but those aside... any thoughts or experiences of similar out there?? cheers jimi
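
    A minimal sketch of the single-source idea, with all names invented: the rule lives on the C# model, is enforced at postback, and serializes itself into the shape jQuery Validate's rules option expects:

    ```csharp
    // Hypothetical single-source rule: checked in C# at postback and
    // emitted as JSON for jQuery Validate on the client.
    public class FieldRules
    {
        public bool Required { get; set; }
        public int? MaxLength { get; set; }

        // Server-side check, run at postback.
        public bool IsValid(string value)
        {
            if (Required && string.IsNullOrEmpty(value)) return false;
            if (MaxLength.HasValue && (value ?? "").Length > MaxLength.Value)
                return false;
            return true;
        }

        // Client-side twin: JSON for jQuery Validate's rules option,
        // e.g. { "required": true, "maxlength": 50 }.
        public string ToJQueryRules()
        {
            var json = "{ \"required\": " + Required.ToString().ToLower();
            if (MaxLength.HasValue) json += ", \"maxlength\": " + MaxLength.Value;
            return json + " }";
        }
    }
    ```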


  • The best way to write a jQuery plugin - If there is such a way?

    - by Nick Lowman
    There are quite a few ways to write plugins--here's a nice example, for instance--and what I've seen quite a lot of lately is the following code pattern, used by Doug Neiner here: (function($){ $.formatLink = function(el, options){ var base = this; base.$el = $(el); base.el = el; base.$el.data("formatLink", base); base.init = function(){ base.options = $.extend({}, $.formatLink.defaultOptions, options); /* code here */ }; base.init(); }; $.formatLink.defaultOptions = { }; $.fn.formatLink = function(options){ return this.each(function(){ (new $.formatLink(this, options)); }); }; })(jQuery); So, can anyone tell me the benefits of using the pattern above rather than the one below? I can't see the point in calling the $.extend function for every element in the jQuery stack (above), when the example below only does it once and then works on the stack. To test it, I created two plugins, using both patterns, which applied styles to about 5000 li elements, and the code below took about 1 second whereas the pattern above took about 1.3 seconds. (function($){ $.fn.formatLink = function(options){ var options = $.extend({}, $.fn.formatLink.defaultOptions, options || {}); return this.each(function(){ /* code here */ }); }; $.fn.formatLink.defaultOptions = {}; })(jQuery);


  • Can I have the gcc linker create a static library?

    - by Lucas Meijer
    I have a library consisting of some 300 C++ files. The program that consumes the library does not want to link to it dynamically. (For various reasons, but the best one is that some of the supported platforms do not support dynamic linking.) So I use g++ and ar to create a static library (.a); this file contains all the symbols of all those files, including ones that the library doesn't want to export. I suspect linking the consuming program against this library takes unnecessarily long, as all the .o files inside the .a still need to have their references resolved, and the linker has more symbols to process. When creating a dynamic library (.dylib/.so) you can actually use a linker, which can resolve all intra-library symbols and export only those that the library wants to export. The result, however, can only be "linked" into the consuming program at runtime. I would like to somehow get the benefits of dynamic linking but use a static library. If my Google searches are right that this is indeed not possible, I would love to understand why, as it seems like something that many C and C++ programs could benefit from.
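
    For the record, a partial (relocatable) link gets close to what is described here. A hedged sketch, assuming the public API is explicitly annotated with default visibility:

    ```sh
    # Assumes exported functions carry __attribute__((visibility("default")))
    # and everything else is compiled with hidden visibility.
    g++ -c -fvisibility=hidden src/*.cpp          # internals become "hidden"
    ld -r -o libmine_combined.o *.o               # merge + resolve intra-lib refs
    objcopy --localize-hidden libmine_combined.o  # hidden symbols -> local
    ar rcs libmine.a libmine_combined.o           # package as an ordinary .a
    ```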


  • CDN for Images in ASP.NET

    - by Chris
    I am in the process of moving all of the images in my web application over to a CDN, but I want to be able to switch the CDN on or off easily without having to hard-code the path to the images. My first thought was to add an HttpHandler for image extensions that, depending on a variable in the web.config (something like an appSettings flag), serves the image either from the server or from the CDN. But after giving this a little thought I think I've essentially ruled it out, as it would cause ASP.NET to handle the request for every single image, adding overhead, and it might actually cancel out the benefits of using a CDN entirely. An alternative approach is, since all of my pages inherit from a base page class, to create a function in the base class that determines what path to serve the files from, based on the web.config variable. I would then do something like this in the markup: <img src='<%= GetImagePath() %>/image.png' /> I think this is probably what I'll have to end up doing, but it seems a little clunky to me. I also envision problems with the old .NET error of not being able to modify the control collection because of the "<%=", though the "<%#" solution will probably work. Any thoughts or ideas on how to implement this?
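
    A minimal sketch of that base-page helper, assuming appSettings entries named UseCdn and CdnRoot (both invented):

    ```csharp
    using System.Configuration;
    using System.Web.UI;

    // Flip "UseCdn" in web.config to switch every page between sources.
    public class BasePage : Page
    {
        protected string GetImagePath()
        {
            bool useCdn = string.Equals(
                ConfigurationManager.AppSettings["UseCdn"], "true",
                System.StringComparison.OrdinalIgnoreCase);

            return useCdn
                ? ConfigurationManager.AppSettings["CdnRoot"] // e.g. https://cdn.example.com/images
                : ResolveUrl("~/images");                     // served locally
        }
    }
    ```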

