Search Results

Search found 20852 results on 835 pages for 'intellij idea'.


  • Minimize the chance my email is blocked/filtered as spam

    - by justSteve
    I'm running a web-based store where order confirmations are sometimes blocked and never reach the intended user. The structure of the business model is such that our product is marketed to end-users by 3rd parties - affiliates who are known entities to the end-users, and email flows freely between end-users and our affiliates. Our confirmations being blocked is becoming a big enough problem that we are considering a system where a 'confirmations' address is created within each affiliate's domain, and our app would then send via the affiliate's mail server instead of our own. But that would be a lot of work. The idea has been raised to have our app use our affiliates' email in the FROM field but still send from our server. My thinking is that this would be detected on the end-user's side and blocked just as often - we're dealing with institutions large enough that at least some checks happen at the perimeter. Is this assumption correct (more likely to be blocked), or is there a less roundabout way to send messages under the auspices of 3rd parties? Thanks
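
    For illustration only, here is what that FROM-swapping idea looks like with System.Net.Mail (all addresses are hypothetical). Note that receiving systems which validate SPF/DKIM alignment against the From domain are indeed likely to flag mail sent this way unless the affiliate publishes your server in their DNS, which supports the assumption above:

        using System.Net.Mail;

        var msg = new MailMessage();
        // The From header claims the affiliate's identity (hypothetical address)...
        msg.From = new MailAddress("confirmations@affiliate.example");
        // ...while Sender identifies the party actually submitting the message.
        msg.Sender = new MailAddress("orders@store.example");
        msg.To.Add(new MailAddress("customer@enduser.example"));
        msg.Subject = "Order confirmation";
        msg.Body = "Thank you for your order.";

        // Still relayed through our own server, not the affiliate's.
        using (var smtp = new SmtpClient("mail.store.example"))
        {
            smtp.Send(msg);
        }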

  • MAC and IP address text-identicon as avatar

    - by rubo77
    I would like to create something like identicons, but with a unique word rather than an image for each IP address and MAC address - an easy-to-remember alias for an address that is unique and reverse-lookupable. For example, the IP 123.456.789.132 would map to an alias drawn from an existing wordlist, and that alias would be unique. Background of this idea: this way we could easily identify our routers in our Opennet in Hamburg in a graphical NodeGraph. Is there already a site where I can convert MAC addresses to unique human-readable words?
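
    A minimal sketch of one reversible scheme, assuming a 256-entry wordlist (the PGP word list is one published example): each byte of the address indexes one word, so the alias can be decoded back to the original bytes. The names are illustrative; this is not an existing service:

        using System;
        using System.Linq;

        static class WordAlias
        {
            // Encode: one word per byte, so a 6-byte MAC becomes six words.
            public static string FromBytes(byte[] address, string[] words256) =>
                string.Join("-", address.Select(b => words256[b]));

            // Decode: look each word back up to recover the original bytes.
            public static byte[] ToBytes(string alias, string[] words256)
            {
                var index = words256
                    .Select((word, i) => (word, i))
                    .ToDictionary(t => t.word, t => (byte)t.i);
                return alias.Split('-').Select(w => index[w]).ToArray();
            }
        }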

  • Memory consumption of each accept() call on a server running on Windows 2008 [migrated]

    - by Atul
    I've written a simple, small server application on Windows 2008 that just accepts connections and does nothing else. I am assessing the memory footprint of socket calls, and what I found is that each connection (after accept()) consumes at least 2.5 KB of memory. Interestingly, the memory is not consumed by the process that makes the accept() call; it is consumed by an OS process. I believe this might be because of data structures created inside the OS for each connection. Now, I have two key questions: Is it possible by any means (changing parameters, configuration, etc.) to reduce this memory footprint? If yes, how? (Because 2.5 KB per connection would be too much if we are planning for the server to accept millions of connections.) And if my server is intended to accept a million connections, is it a good idea to use Windows 2008, or should I switch to some other OS? Please advise.
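
    One knob that might be worth experimenting with is the per-socket kernel buffering: shrinking or zeroing SO_RCVBUF/SO_SNDBUF is a documented technique for very-high-connection-count Windows servers, at the cost of having to post your own receive buffers promptly. A hedged sketch (port and backlog are illustrative, and whether this reduces the footprint you measured would need to be verified):

        using System.Net;
        using System.Net.Sockets;

        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, 9000));
        listener.Listen(1000);

        Socket client = listener.Accept();
        // Reduce kernel-side buffering for this connection; with overlapped/async I/O,
        // data is then transferred directly into the buffers your receives post.
        client.ReceiveBufferSize = 0;
        client.SendBufferSize = 0;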

  • Advice: The first-time interviewer's dilemma

    - by shan23
    I've been working in my first job for about 2 years now, and I've been "asked" to interview a potential teammate (whom I might have to mentor as well) on pretty short notice (2 days from now). Initially, I had been given free rein (or so I thought, and hence agreed), but today I've been told "not to pose bookish questions" - implying I can only ask basic programming puzzles and things similar to the 'FizzBuzz' question. I strongly believe that not knowing basic algorithmic notation (the haziest idea of space/time complexity) or the tiniest bit of regular expressions would make working with the guy very difficult for anyone. I know I'm asking for a lot here, but in your view, what would be a comprehensive way to test the absolutely basic requirements of a CS candidate (he has 2 years of experience) without sounding too pedantic or bookish? It seems it would be legitimate to ask only C questions and simple puzzles, but I really do want something a bit different from "finding loops in linked lists", which has practically become the opening statement of most tech interviews! This is a face-to-face interview with about an hour or more of time - I looked at Steve's basic phone-screen questions, and I was wondering if there exists a guide to "basic face-to-face interview questions" that I can use (or compile from the community's answers here). EDIT: The position is mostly for a kernel-level C programming job, with some smattering of C++ required for writing the test framework.

  • Cisco ASA - VPN and Hairpinning....

    - by Nordberg
    Hi, we have 2 sites that will be linked by an IPsec VPN between 2 Cisco ASAs:

    Site 1: 8Mb ADSL connection, Cisco ASA 505
    Site 2: 2Mb SDSL connection, Cisco ASA 505

    Basically, both sites need access to a service at the end of another IPsec VPN, Site 3, which I plan to terminate at Site 2. This is due to the way the service is sold - it's billed per gateway, so if Site 1 and Site 2 each had their own VPN connection to Site 3, it would cost us twice as much. Anyway, my idea is to have all traffic from Site 1 destined for Site 3 go via the VPN between Site 1 and Site 2, the end result being that all traffic hitting Site 3 has come via Site 2. I understand this is known as hairpinning, but I'm struggling to find much information on how it is set up. So, firstly, can anyone confirm that what I'm trying to achieve is possible, and secondly, can anyone point me in the direction of an example of such a configuration? Many thanks.

  • Having trouble installing guest additions for a Xubuntu 13.10 guest OS in VirtualBox 4.2.10

    - by Duval Pearson III
    I am using Ubuntu 13.04 64-bit as my host operating system, running VirtualBox 4.2.10. I get this message when I tell VirtualBox to install guest additions (CTRL+D), mount the volume in the guest OS, and run the VBoxLinuxAdditions.run file as root with: sudo ./VBoxLinuxAdditions.run It starts and then produces these messages:

        Verifying archive integrity... All good.
        Uncompressing VirtualBox 4.2.10 Guest Additions for Linux..........
        VirtualBox Guest Additions installer
        Removing existing VirtualBox non-DKMS kernel modules ...done.
        Building the VirtualBox Guest Additions kernel modules
        The headers for the current running kernel were not found. If the following module compilation fails then this could be the reason.
        Building the main Guest Additions module ...done.
        Building the shared folder support module ...fail!
        (Look at /var/log/vboxadd-install.log to find out what went wrong)
        Doing non-kernel setup of the Guest Additions ...done.
        Installing the Window System drivers
        Warning: unknown version of the X Window System installed. Not installing X Window System drivers.
        Installing modules ...done.
        Installing graphics libraries and desktop services components ...done.
        allusers@allusers-VirtualBox:/media/allusers/VBOXADDITIONS_4.2.10_84104$

    I followed everything from the official VirtualBox manual on installing guest additions for Linux. I also used some other commands, such as: sudo apt-get install build-essential linux-headers-$(uname -r) dkms and: sudo apt-get install virtualbox-guest-x11 I rebooted after executing those commands and it still won't work. It still says the kernel modules are missing, and the window is not seamless. Any idea what could be the problem?

  • Is there any reason not to go directly from client-side Javascript to a database?

    - by Chris Smith
    So, let's say I'm going to build a Stack Exchange clone and I decide to use something like CouchDB as my backend store. If I use its built-in authentication and database-level authorization, is there any reason not to allow the client-side Javascript to write directly to the publicly available CouchDB server? Since this is basically a CRUD application and the business logic consists of "Only the author can edit their post", I don't see much need for a layer between the client-side stuff and the database. I would simply use validation on the CouchDB side to make sure someone isn't putting in garbage data, and make sure that permissions are set properly so that users can only read their own _user data. The rendering would be done client-side by something like AngularJS. In essence you could have just a CouchDB server and a bunch of "static" pages and you're good to go; you wouldn't need any kind of server-side processing, just something that can serve up the HTML pages. Opening my database up to the world seems wrong, but in this scenario I can't think of why, as long as permissions are set properly. It goes against my instinct as a web developer, but I can't think of a good reason. So, why is this a bad idea? EDIT: Looks like there is a similar discussion here: Writing Web "server less" applications EDIT: Awesome discussion so far, and I appreciate everyone's feedback! I feel I should add a few generic assumptions instead of calling out CouchDB and AngularJS specifically. So let's assume that:

    - The database can authenticate users directly from its hidden store
    - All database communication would happen over SSL
    - Data validation can (but maybe shouldn't?) be handled by the database
    - The only authorization we care about, other than admin functions, is someone only being allowed to edit their own post
    - We're perfectly fine with everyone being able to read all data (EXCEPT user records, which may contain password hashes)
    - Administrative functions would be restricted by database authorization
    - No one can add themselves to an administrator role
    - The database is relatively easy to scale
    - There is little to no true business logic; this is a basic CRUD app

  • phpize: m4 error just in one extension

    - by Francois
    On Linux, I installed PHP 5.3.8 from source. Using phpize to install an extension works fine, except for one specific extension (mysqlnd):

        # cd /opt/php/5.3.8/ext/pdo && /opt/php/5.3.8/bin/phpize ... this runs ok
        # cd /opt/php/5.3.8/ext/mysqlnd && /opt/php/5.3.8/bin/phpize
        Cannot find config.m4. Make sure that you run '/opt/php/5.3.8/bin/phpize' in the top level source directory of the module

    As you can see, the error cannot be that I am not in the top-level source directory, since I am. I also tried calling phpize from the ext folder - that did not work either. For info, I have m4 installed. Any idea? Thanks :)

  • Mixing XNA and Silverlight gives weird graphics

    - by Mech0z
    I'm making a small 3D game built as a combined Silverlight and XNA app, but when I draw the sprites the graphics become all weird. All my primitive types are rendered correctly, but my 3D models come out wrong. My Draw method looks like this when Silverlight is set to draw:

        private void OnDraw(object sender, GameTimerEventArgs e)
        {
            // Render the Silverlight controls using the UIElementRenderer
            elementRenderer.Render();

            // Clear the screen to a solid color
            SharedGraphicsDeviceManager.Current.GraphicsDevice.Clear(Color.CornflowerBlue);

            switch (gameState)
            {
                case GameState.ChooseStarter:
                    TextBlockStatus.Text = "Find Starting Player";
                    break;
                case GameState.PlaceBrick:
                    TextBlockPlayer.Text = (playerTurn == PlayerTurn.PlayerOne) ? "Player One" : "Player Two";
                    TextBlockState.Text = "Place Brick";
                    foreach (IGraphicObject obj in _3dObjects)
                    {
                        obj.Draw(cameraPosition, e);
                    }
                    break;
                case GameState.GiveBrick:
                    TextBlockState.Text = "Give Brick";
                    break;
            }

            spriteBatch.Begin();
            // Using the texture from the UIElementRenderer,
            // draw the Silverlight controls to the screen
            spriteBatch.Draw(elementRenderer.Texture, cameraProjection, Color.White);
            spriteBatch.End();
        }

    This gives me the garbled output (screenshot not reproduced in this listing). If I comment out the spriteBatch lines I get the correct output, except of course the Silverlight text is not shown. I am not entirely sure what to look for, except that zero vector I am giving to the spriteBatch - but if that's the source, I have no idea what I am supposed to set it to, especially since it's a 2D vector.
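
    A common cause of this symptom when mixing SpriteBatch with 3D in XNA 4.0 is that SpriteBatch.Begin/End leave the GraphicsDevice in sprite-friendly render states (alpha blending, no depth buffer), which are still in effect when the 3D models draw on the next frame. A sketch of the usual state reset, placed before the 3D pass in OnDraw:

        var device = SharedGraphicsDeviceManager.Current.GraphicsDevice;

        // SpriteBatch leaves BlendState.AlphaBlend and DepthStencilState.None behind;
        // restore 3D-friendly defaults before drawing the models.
        device.BlendState = BlendState.Opaque;
        device.DepthStencilState = DepthStencilState.Default;
        device.SamplerStates[0] = SamplerState.LinearWrap;

        foreach (IGraphicObject obj in _3dObjects)
        {
            obj.Draw(cameraPosition, e);
        }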

  • Using USB to Ethernet with Linux Ubuntu 12.04

    - by Sriram
    Being a newbie, please excuse me if the technical jargon used is not universally accepted. :) I have a particular device (say device A) whose USB 2.0 driver is available from the Linux community; a Linux Ubuntu 12.04 based PC is able to detect the device via the available driver. My requirement is to ensure that the PC can exchange commands as well as data with device A over TCP/IP packets. In other words, instead of just a USB-based driver, there should be a TCP/IP wrapper over the device's USB driver that still does the same job the USB driver was doing before. I bought a USB (female) to RJ-45 adapter, connected device A's (male) USB plug to the USB female end of the adapter, and connected the Ethernet end to the router. The PC is also connected to the same router, so that both device A and the PC have IP addresses in the same subnet range, and the packets produced by device A can be routed to the PC via some binding (not sure how to achieve this; it is a conceptual idea). Here are the issues I can see as of now: 1) USB to RJ-45 is just a hardware signal conversion, not a NIC in itself, so no MAC/IP address is assigned. Can we bind a virtual NIC created on the PC to this connector? 2) Are any USB-to-IP command and data translation wrappers available? E.g., a command for device A sent over Ethernet is converted into a command for device A over USB, which is then acted on by the device as a command from the USB driver. There is some missing link in my understanding, so it would be of great help if you could bounce off some ideas on how I can take this forward so that device A and the PC exchange data over IP. Thanks and regards, Sriram

  • IIS7 Custom ASP.NET Errors

    - by Nathan
    I'm trying to set up a custom error page for the IIS 7 404.13 (Content length too large) error. Here are the relevant sections of my web.config file:

        <system.webServer>
          <httpErrors errorMode="Custom" existingResponse="Replace">
            <remove statusCode="404" subStatusCode="13" />
            <error statusCode="404" subStatusCode="13" prefixLanguageFilePath="" path="/FileUpload/Test.aspx" responseMode="ExecuteURL" />
          </httpErrors>
          <security>
            <requestFiltering>
              <requestLimits maxAllowedContentLength="10240" />
            </requestFiltering>
          </security>
        </system.webServer>

    The response that is being sent back to the client is blank. The Test.aspx file is not blank. Any idea what's going on here?

  • What is the evidence that an API has exceeded its orthogonality in the context of types?

    - by hawkeye
    Wikipedia defines software orthogonality as: "orthogonality in a programming language means that a relatively small set of primitive constructs can be combined in a relatively small number of ways to build the control and data structures of the language. The term is most frequently used regarding assembly instruction sets, as orthogonal instruction set." Jason Coffin has defined software orthogonality as: "Highly cohesive components that are loosely coupled to each other produce an orthogonal system." C. Ross has defined software orthogonality as: "the property that means 'Changing A does not change B'. An example of an orthogonal system would be a radio, where changing the station does not change the volume and vice-versa." Now there is a hypothesis published in the ACM Queue by Tim Bray - which some have called the Bánffy-Bray type system criteria - that he summarises as:

    - Static typing's attractiveness is a direct function (and dynamic typing's an inverse function) of API surface size.
    - Dynamic typing's attractiveness is a direct function (and static typing's an inverse function) of unit testing workability.

    Stuart Halloway has reformulated Bánffy-Bray as: "the more your APIs exceed orthogonality, the better you will like static typing". My question is: What is the evidence that an API has exceeded its orthogonality in the context of types? Clarification: Tim Bray introduces the idea of orthogonality and APIs. Where you have one API and it mainly deals with Strings (e.g. a web server serving requests and responses), a uni-typed language (Python, Ruby) is 'aligned' to that API - the type system of these languages isn't sophisticated, but it doesn't matter, since you're dealing with Strings anyway. He then moves on to Android programming, which has a whole bunch of sensor APIs that are all 'different' from the web server API he was working with previously. Because you're dealing not just with Strings but with different types, the API is non-orthogonal. Tim's point is that there is an empirical relationship between your 'liking' of types and the API you're programming against (i.e. a subjective point is actually objective depending on your context).

  • Facebook like button for a Facebook feed item

    - by jverdeyen
    Here is what I do: when the admin of a website I'm creating adds a new 'news' item to his website, it directly posts the same item on the website's Facebook 'fanpage'. This is done by a Facebook app with permission to post items on the Facebook fanpage. So far so good; I'm getting the new post id from the Facebook Graph. Here is what I want to achieve: I'm adding a 'like' button on the original website, but when this is clicked it should reference the same post item on the Facebook page, so if someone likes the news post on the website, he also likes the corresponding Facebook feed post - which is exactly the same content. Can this be done? I've been playing around with the href etc. of the like button without any success. When I hit the like button it likes the given URL, but it doesn't recognize that this is a feed post. Any idea? Thanks in advance.

  • Sponsor the Hottest .NET Community Event in Germany: dotnet Cologne 2011

    - by WeigeltRo
    The “dotnet Cologne” conference organized by the .NET user groups Bonn and Cologne has quickly become the .NET community event in Germany. So when we opened registration for dotnet Cologne 2011 on Monday, we expected some interest. But we didn't expect the 200 “early bird” seats to be gone in less than three hours! And the registrations at normal price keep coming in, so it looks like this event will sell out even earlier than last year. In December I wrote about sponsorship opportunities at dotnet Cologne 2011 - and why it's a good idea to be a sponsor at this particular conference. If you are interested in becoming a sponsor: we still offer a wide variety of sponsorship packages in different sizes. At our new, larger event location, we still have space for exhibition booths. Last year's exhibitors were very happy and had many interesting conversations with the attendees. And this year we planned for longer breaks between sessions, which means even more time for presenting your products. And yes, German developers understand English demos. But maybe a booth is a bit too much for you. With the Bronze package, you can make sure the attendees receive promotional material from your company in their bags - for a fraction of what you'd pay at a commercial conference. Or you could sponsor a couple of licenses of your product for the raffle at the end of the day. If you want to learn more, just send an email to Roland.Weigelt at dotnet-koelnbonn.de and I'll send you our sponsor information.

  • Linux cron spamming me about php/suhosin, then stopping

    - by acidzombie24
    My server emails me when any message goes to root, so cron sends me messages. Today I got over 300 emails from my server, all of which are:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626+lfs/suhosin.so' - /usr/lib/php5/20090626+lfs/suhosin.so: cannot open shared object file: No such file or directory in Unknown on line 0

    I have no idea why. I went to debug it, however it stopped about 5 hours ago, so there's nothing I can look at except maybe logs. Why might this have happened? The disk isn't full and I have enough RAM available.

  • Posting data from multiple servers to a client server, routed through one server

    - by Swaroop Kundeti
    I have 5 webservers behind a load balancer, and we have a client server at the other end. The client has whitelisted my 5 webservers' public IPs so that my webservers can post a file to the client server. The problem is that the number of my webservers is going to increase, and I cannot keep asking the client to whitelist every new webserver IP. So I would like to arrange my infrastructure so that my webservers post data to the client server routed through a single server - say web-1 is the main server, and the remaining 4 webservers post data to the client server routed through web-1. I was told this can be achieved with IP tunneling, but I have no idea how to do that. Any kind of help would be great.

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've needed pseudo-ETL applications that can extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place them in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.
            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);

            foreach (var file in files)
            {
                // Extract metadata from the file.
                // Validate some attributes of the file; add any validation errors to an in-memory
                // structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    var fileData = File.ReadAllBytes(file);
                    // Upload using some wrapper for an ORM or DAL
                    someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file
                }
            }

            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!
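
    For what it's worth, the "Worker" wrapper described above might be sketched like this; the interface names are hypothetical stand-ins for the queue, upload, and reporting dependencies, chosen so each can be faked in a unit test:

        using System.Collections.Generic;

        // Hypothetical seams for the three external dependencies.
        public interface IFileQueue { IEnumerable<string> GetFiles(); }
        public interface IContentUploader { void Upload(byte[] data, string fileName); }
        public interface IErrorReporter { void Report(IEnumerable<string> errors); }

        public class DocumentUploadWorker
        {
            private readonly IFileQueue queue;
            private readonly IContentUploader uploader;
            private readonly IErrorReporter reporter;

            public DocumentUploadWorker(IFileQueue queue, IContentUploader uploader, IErrorReporter reporter)
            {
                this.queue = queue;
                this.uploader = uploader;
                this.reporter = reporter;
            }

            public void Run()
            {
                var errors = new List<string>();

                // Same flow as Main() above: fetch, validate (elided), upload or bounce, then report.
                foreach (var file in queue.GetFiles())
                {
                    byte[] data = System.IO.File.ReadAllBytes(file);
                    uploader.Upload(data, file);
                }

                reporter.Report(errors);
            }
        }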

  • Boot disc isn't loading on my system

    - by acidzombie24
    I am trying to update the firmware on my hard disk. I grabbed Seagate's Windows setup tool, which didn't boot into the firmware update app, so I burned their ISO image. Their ISO also doesn't boot, and I vaguely remember something about Windows not recognizing my disc because of an EFI thing - it probably has nothing to do with this. Anyway, how do I boot into the disc? I tried going into advanced options to boot directly from the disc, and I get a blank screen. I can use Ctrl+Alt+Del, which reboots the system, but other than that the screen is blank and nothing on the disc seems to load. The disc was a 7 MB ISO burnt using Windows 7's built-in ISO burner (Seagate's site suggests using it). I have no idea what to do. Do any of you know what my problem may be? The media is DVD-R.

  • Unity scaling instantiated GameObject at Start() doesn't "keep"

    - by Shivan Dragon
    I have a very simple scenario: a box-like prefab imported automatically from Blender (I have the .blend file in the Assets folder), and a script, attached to the camera, with two public GameObject fields. In one I place the above prefab, and in the other I place a terrain object (which I've created in Unity's graphical view):

        public Collider terrain;
        public GameObject aStarCellHighlightPrefab;

    The idea is to have the Blender prefab instantiated, have the terrain set as its parent, and then scale the prefab instance up. I first did it like this, in the Start() method:

        void Start()
        {
            cursorPositionOnTerrain = new RaycastHit();
            aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab, new Vector3(300, 300, 300), terrain.transform.rotation);
            aStarCellHighlight.name = "cellHighlight";
            aStarCellHighlight.transform.parent = terrain.transform;
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
        }

    At first I thought it didn't work. However, I later noticed that it did in fact work, in the sense that the scale was applied right at the start, but right afterwards the prefab instance came back to its initial scale. Putting the scale code in the Update() method fixes it, in the sense that now it stays scaled all the time:

        void Update()
        {
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
            //...
        }

    However, I've noticed that when I run this code, the object is first displayed without the scale being applied, and it takes about 5-10 seconds for the scale to happen. During this time everything else works fine (like input and logging). The scene is very simple; it's not like it has a lot of stuff to load or anything (there's a ray cast from the camera onto the terrain, but that seems to happen without such delays). My (2-part) question is: Why doesn't it keep the scale transform when I do it at the beginning in the Start() method - why do I have to keep scaling it in the Update() method? And why does it take so long for the scale to "apply/show up"?
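
    One way to narrow this down (a debugging sketch, not a known fix - the component name is made up): attach a small watcher to the instantiated object that logs the frame on which something rewrites the scale. If it changes on the very next frame, a likely suspect is another component driving the transform every frame, for example an Animation component that came along with the Blender import:

        using UnityEngine;

        // Logs whenever some other code or component rewrites this object's scale,
        // so you can see exactly when (and deduce what) is resetting it.
        public class ScaleWatcher : MonoBehaviour
        {
            private Vector3 last;

            void Start()
            {
                last = transform.localScale;
            }

            void LateUpdate()
            {
                if (transform.localScale != last)
                {
                    Debug.Log("Scale changed from " + last + " to " + transform.localScale + " at frame " + Time.frameCount);
                    last = transform.localScale;
                }
            }
        }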

  • How to automatically make a change to Outlook Microsoft Exchange Proxy Settings

    - by Richard West
    I need to make a change on all computers in our domain, specifically to the Microsoft Exchange Proxy Settings. Our users have Outlook 2010 installed. These settings can be manually accessed from: Control Panel - Mail - E-mail Accounts - (Select Account) - Change Account - More Settings - Connection Tab - Exchange Proxy Settings. I need both the "On fast networks" and "On slow networks" check boxes selected. Obviously, asking my users to go through the process above to make these changes themselves is not ideal, so I'm looking for advice on how I can push these settings to my user base automatically. I have searched the registry but have been unable to find where this setting is saved. Thanks for any help!

  • Useless Plesk Error?

    - by Josh Pennington
    I am trying to determine why Plesk on my server won't start, and I have no idea where to even begin (since my hosting company appears unwilling to help me). Anyway, the error in my Plesk error_log is as follows:

        2010-12-25 21:30:28: (log.c.75) server started
        2010-12-25 21:30:28: (network.c.336) SSL: error:00000000:lib(0):func(0):reason(0)
        2010-12-25 21:30:28: (log.c.75) server started
        2010-12-25 21:30:28: (network.c.336) SSL: error:00000000:lib(0):func(0):reason(0)

    It leads me to believe it's a problem with the SSL on the server, but I am not sure what to make of the error. Can someone point me in the right direction? Thanks, Josh Pennington

  • Make a controller a superclass in MVC design pattern

    - by Nikola
    I am really confused about how to handle this. I have: model.Outsider and model.SubContractor (which extends Outsider). Basically, Outsider would mean Supplier, but a SubContractor is not a Supplier itself, so I had to divide it somehow. I used to have model.Outsider (the superclass of the two below), model.Supplier, and model.SubContractor, but Supplier didn't have any unique fields, so I removed it (was this correct?). Continuing, I have a db layer which has db.DBOutsider and db.DBSubContractor (which again extends the former), and I have no place to put my Supplier methods. I could put them in DBOutsider, but then DBSubContractor (which extends DBOutsider) would get unnecessary methods. Do I create a DBSupplier even though there is no such class in the model layer? Then again, moving to the controller layer, it is the same situation as the db layer: I have OutsiderController and SubContractorController (which extends the first one) (is this correct in the first place? is an extended controller acceptable?). But I have no place to put the methods concerned with Supplier, and if I put them in OutsiderController, then SubContractorController would receive unnecessary methods. At the moment I am going for the extra DBSupplier and SupplierController classes, but I have no idea if this is the correct way. Basically what perplexes me is the "empty" Supplier: since it is supposed to be an Outsider but has no extra methods or fields, should there be an empty class?
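
    For illustration only (sketched in C#, though the package-style names above suggest Java; the idea is the same): an empty subclass is a legitimate "marker" type when its job is to keep the hierarchy symmetric across layers, so Supplier-specific methods get a distinct home instead of leaking into the shared Outsider base:

        // Model layer: Supplier adds no members, but keeps the type distinct.
        public class Outsider { /* shared fields */ }
        public class Supplier : Outsider { }                 // empty marker subclass
        public class SubContractor : Outsider { /* subcontractor-specific fields */ }

        // Controller layer mirrors the model, so each type's methods have a home.
        public class OutsiderController { /* behaviour common to all outsiders */ }
        public class SupplierController : OutsiderController { /* supplier-only methods */ }
        public class SubContractorController : OutsiderController { /* subcontractor-only methods */ }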

  • Gathering all data in single iteration vs using functions for readable code

    - by user828584
    Say I have an array of runners, from which I need to find the tallest runner, the fastest runner, and the lightest runner. It seems like the most readable solution would be:

        runners = getRunners();
        tallestRunner = getTallestRunner(runners);
        fastestRunner = getFastestRunner(runners);
        lightestRunner = getLightestRunner(runners);

    ..where each function iterates over the runners and keeps track of the greatest height, greatest speed, and lowest weight. Iterating over the array three times, however, doesn't seem like a very good idea. It would instead be better to do:

        int greatestHeight, greatestSpeed, leastWeight;
        Runner tallestRunner, fastestRunner, lightestRunner;
        for(runner in runners){
            if(runner.height > greatestHeight) {
                greatestHeight = runner.height;
                tallestRunner = runner;
            }
            if(runner.speed > ...
        }

    While this isn't too unreadable, it can get messy when there is more logic for each piece of information being extracted in the iteration. What's the middle ground here? How can I use only a single iteration while still keeping the code divided into logical units?
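
    One possible middle ground, sketched in C# (the Runner fields are assumed from the pseudocode above): fold the collection once with a composite accumulator, keeping the single pass while each comparison stays a small, named piece:

        using System.Collections.Generic;
        using System.Linq;

        public record Runner(string Name, double Height, double Speed, double Weight);

        public static class RunnerStats
        {
            // One pass over the collection; each comparison is its own named line.
            public static (Runner Tallest, Runner Fastest, Runner Lightest) FindExtremes(IEnumerable<Runner> runners) =>
                runners.Aggregate(
                    (Tallest: (Runner)null, Fastest: (Runner)null, Lightest: (Runner)null),
                    (acc, r) => (
                        Tallest:  acc.Tallest  == null || r.Height > acc.Tallest.Height ? r : acc.Tallest,
                        Fastest:  acc.Fastest  == null || r.Speed  > acc.Fastest.Speed  ? r : acc.Fastest,
                        Lightest: acc.Lightest == null || r.Weight < acc.Lightest.Weight ? r : acc.Lightest));
        }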

  • ASP.NET MVC vs AngularJS model binding

    - by aw04
    So I've noticed a trend lately of .NET web developers using AngularJS on the client side of applications, and I've become more curious as I play around with Angular and compare it to how I would do things in ASP.NET MVC. I'll give a quick example of what really got me thinking. I recently came across a situation at work (I work in a .NET environment) where I needed to create a table, bound to a collection of objects, with the ability to add and remove rows/items from the collection. I had an add button that created a new object and appended a row to the end of the table, and a remove button in each row to remove that particular object/row. Using ASP.NET MVC, I first found myself making an ajax call to the server for each operation, updating the server-side model, and refreshing part of the page to show the result in the table. This worked, but I didn't really like the idea of calling the server to update the model each time, so I tried to come up with a solution that did this on the client side. It turned out to be quite a task, as I had to generate the HTML on add, with validation and all, and with the correct indexing for the model binding to work. It got worse on remove, as I ended up with a crazy string-replace function to recreate the indexes on each item to satisfy the binding requirements (if an item other than the last is removed, the indexes are no longer correct). Now out of curiosity, I tried to recreate this at home in Angular (which I had no experience with) and it took me all of about 10 minutes, with simple functions to add and remove items from the client-side model. This is just one example, but it seems to me that I'm able to achieve the same results with far fewer calls to the server in Angular because it binds to a client-side model. So my question is: is this a distinct advantage of using a JavaScript MVC framework, or am I somehow underutilizing the power of ASP.NET MVC? And am I right in thinking that these operations should be done on the client and have no business requiring calls to the server?

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    As a development team lead, part of my job was interviewing new candidates. Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: "Name some benefits of object-oriented development." Nearly every time, the candidate would chime in with a plethora of canned answers, which typically included: "it helps ease code reuse." Of course, this is a gross oversimplification. Tools only ease reuse; it's developers that ultimately make code reusable or not, regardless of the language or methodology. But it did get me thinking... we always used to say that as part of our mantra for why object-oriented programming was so great. With polymorphism, inheritance, encapsulation, etc., we in essence set up the concepts to facilitate reuse as much as possible. And yes, as a developer of many years now, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years. In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now. It used to be that I was in complete opposition to that view, but more and more I've come to see the logic in it. Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand for all time. Nearly without fail, all of these pieces of code become obsolete in a matter of months or years.

    It's not that I don't like reuse - it's just that reuse is hard. In fact, reuse is DAMN hard. Many times it is just a distraction that eats up architect and developer time, and worse yet it can be counter-productive and force wrong decisions. Now don't get me wrong, I love the idea of reusable code when it makes sense. These are the few cases where you are designing something that is inherently reusable. The problem is, most business-class code is inherently unfit for reuse! Furthermore, code that is reusable will often fail to be reused if you don't have the proper framework in place for effective reuse - one that includes standardized versioning, building, releasing, and documenting of the components. That should always be standard across the board when promoting reusable code. All of this is hard, and it should only be done when you have code that is truly reusable, or you will be exerting a large amount of development effort for very little bang for your buck.

    But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused. First, let's look at an extension method. There are many times when I want to kick off a thread to handle a task, and when I want to rein that thread back in, of course I want to do a Join on it. But what if I only want to wait a limited amount of time and then Abort? Well, I could write that logic out by hand each time, but it seemed like a great extension method:

        public static class ThreadExtensions
        {
            public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
            {
                bool isJoined = false;

                if (thread != null)
                {
                    isJoined = thread.Join(timeToWait);

                    if (!isJoined)
                    {
                        thread.Abort();
                    }
                }
                return isJoined;
            }
        }

    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable.
    Some of them are standard OO principles, and some are kind-of home-grown litmus tests:

    - Single Responsibility Principle (SRP) - The only reason this extension method need change is if the Thread class itself changes (one responsibility).
    - Stable Dependencies Principle (SDP) - This method only depends on classes that are more stable than it is (System.Threading.Thread), and is itself very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes.
    - Open-Closed Principle (OCP) - This class is inherently closed to change.
    - Small and Stable Problem Domain - This method only cares about System.Threading.Thread.
    - All-or-None Usage - A user of a reusable class should want the functionality of that class, not parts of that functionality. That's not to say they must use every method, but they shouldn't be using a method just to get half of its result.
    - Cost of Reuse vs. Cost to Recreate - Since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as "ready-to-go", already unit tested (important!), and available through a standard release cycle (very important!).

    Okay, all seems good there. Now let's look at an entity and DAO. I don't know about you all, but there have been times I've been in organizations that get the grand idea that all DAOs and entities should be standardized and shared. While this may work for small or static organizations, it's near ludicrous for anything large or volatile.

        namespace Shared.Entities
        {
            public class Account
            {
                public int Id { get; set; }

                public string Name { get; set; }

                public Address HomeAddress { get; set; }

                public int Age { get; set; }

                public DateTime LastUsed { get; set; }

                // etc, etc, etc...
            }
        }

        ...

        namespace Shared.DataAccess
        {
            public class AccountDao
            {
                public Account FindAccount(int id)
                {
                    // dao logic to query and return account
                }

                ...

            }
        }

    Now to be fair, I'm not saying there doesn't exist an organization where some entities may be extremely static and unchanging. But at best such entities and DAOs will be problematic cases for reuse. Let's examine those same tests:

    - Single Responsibility Principle (SRP) - The reasons for these classes to change depend strongly on the definition of an account, which can change over time and may have multiple influences depending on the number of systems an account covers.
    - Stable Dependencies Principle (SDP) - This class depends on the data model beneath it, which in turn is largely dependent on the business definition of an account, which can be inherently unstable.
    - Open-Closed Principle (OCP) - This class is not really closed for modification; every time the account definition changes, you'd need to modify this class.
    - Small and Stable Problem Domain - The definition of an account is inherently unstable and may in fact be very large. What if you are designing a system that aggregates account information from several sources?
    - All-or-None Usage - What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data? Should you have to take the hit of looking up all the other data? On the other hand, should you have ten different methods returning portions of data in chunks people tend to ask for? Neither is really a great solution.
    - Cost of Reuse vs. Cost to Recreate - DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate them as needed.

    It's no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO. In general, the smaller and more stable an abstraction is, the higher its level of reuse. When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects. Now, of course, many of you will point to nHibernate and Entity for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc). As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn't "smell" right. We ended up having session after session of design meetings to try and find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple: "Reuse is hard..." And that's when I realized that while reuse is an awesome goal and we should strive to make code maintainable, often you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn't.

    Think about the times you've worked in a company where people fight in design sessions over the best way to implement a class to make it maximally reusable, extensible, and any other buzzword-able. Then think about how quickly that design became obsolete. Many times I set out to do a project and think, "yes, this is the best design, I can extend it easily!" only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid. Code, in general, tends to rust and age over time. As such, writing reusable code can often be difficult, many times ends up being a futile exercise, and worse yet sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability.

    So what do I think are reusable components?

    - Generic utility classes - these tend to be small classes that assist in a task and have no business context whatsoever.
    - Implementation abstraction frameworks - home-grown frameworks that try to isolate changes to third-party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc).
    - Simplification and uniformity frameworks - to some extent similar to an abstraction framework, but where there is one chosen provider and a development-shop mandate to perform certain complex tasks in a certain way, or perhaps to simplify and dumb down a complex task for the average developer (such as implementing a particular development shop's method of encryption).

    And what are less reusable?

    - Application and business layers - these tend to fluctuate a lot as requirements change and new features are added, so they tend to be unstable dependencies. They may be reused across applications but are also very volatile.
    - Entities and data access layers - these tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable.

    So what's the big lesson? Reuse is hard. In fact it's damn hard. And much of the time I'm not convinced we should focus too hard on it. If you're designing a utility or framework, then by all means design it for reuse. But you must also set down a good versioning, release, and documentation process to maximize your chances. For anything else, design it to be maintainable and extendable, but don't waste the effort on reusability for something that will most likely be obsolete in a year or two anyway.
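
    As a closing illustration, typical usage of the JoinOrAbort extension from earlier might look like this sketch (the worker body is a stand-in):

        using System;
        using System.Threading;

        var worker = new Thread(() => Thread.Sleep(10000)); // stand-in for real work
        worker.Start();

        // Wait up to five seconds for the thread; abort it if it hasn't finished by then.
        bool finishedCleanly = worker.JoinOrAbort(TimeSpan.FromSeconds(5));
        Console.WriteLine(finishedCleanly ? "joined" : "aborted");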
