Search Results

Search found 1504 results on 61 pages for 'pros and cons'.

Page 15/61 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Inline function v. Macro in C -- What's the Overhead (Memory/Speed)?

    - by Jason R. Mick
    I searched Stack Overflow for the pros/cons of function-like macros v. inline functions. I found the following discussion: Pros and Cons of Different macro function / inline methods in C ...but it didn't answer my primary burning question. Namely, what is the overhead in C of using a macro function (with variables, possibly other function calls) v. an inline function, in terms of memory usage and execution speed? Are there any compiler-dependent differences in overhead? I have both icc and gcc at my disposal. The code snippet I'm modularizing is:
    double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3); double RepulsiveTerm = AttractiveTerm * AttractiveTerm; EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);
    My reason for turning it into an inline function/macro is so I can drop it into a C file and then conditionally compile other similar, but slightly different, functions/macros, e.g.:
    double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3); double RepulsiveTerm = pow(SigmaSquared/RadialDistanceSquared,9); EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);
    (note the difference in the second line...) This function is central to my code; it gets called thousands of times per step, and my program performs millions of steps. Thus I want the LEAST overhead possible, hence the time spent worrying about the overhead of inlining v. transforming the code into a macro. Based on the prior discussion I already realize other pros/cons of macros (type independence and the resulting errors from that)... but what I want to know most, and don't currently know, is the PERFORMANCE. I know some of you C veterans will have some great insight for me!!

    Read the article

  • Should I *always* import my file references into the database in Drupal?

    - by sprugman
    I have a cck type with an image field, and a unique_id text field. The file name of the image is based on the unique_id. All of the content, including the image itself, is being generated automatically via another process, and I'm parsing what that generates into nodes. Rather than creating separate fields for the id and the image, and doing an official import of the image into the files table, I'm tempted to only create the id field and create the file reference in the theme layer. I can think of pros and cons:
    1) Theme Layer Approach
    Pros:
    - makes the import process much less complex
    - don't have to worry about syncing the db with the file system as things change
    - more flexible -- I can move my images around more easily if I want
    Cons:
    - maybe not as much The Drupal Way™
    - not as pure -- I'll wind up with more logic on the theme side
    2) Import Approach
    Pros:
    - import method is required if we ever wanted to make the files private (we won't.)
    - safer? Maybe I'll know if there's a problem with the image at import time, rather than view time. Since I'll be bulk importing, that might make a difference.
    - if I delete a node through the admin interface, Drupal might be able to delete the file for me, as well
    Con:
    - more complex import and maintenance
    All else being equal, simpler is always better, so I'm leaning toward #1. Are there any other issues I'm missing? (Since this is an open ended question, I guess I'll make it a community wiki, whatever that means.)

    Read the article

  • Building a structure/object in a place other than the constructor

    - by Vishal Naidu
    I have different types of objects representing the same business entity. UIObject, PowershellObject, DevCodeModelObject, WMIObject are all different representations of the same entity. So, say the entity is Animal; then I have AnimalUIObject, AnimalPSObject, AnimalModelObject, AnimalWMIObject, etc. The implementations of AnimalUIObject, AnimalPSObject, AnimalModelObject are all in separate assemblies. My scenario is that I want to verify the contents of the business entity Animal irrespective of the assembly it came from. So I created a GenericAnimal class to represent the Animal entity, and added the following constructors: GenericAnimal(AnimalUIObject), GenericAnimal(AnimalPSObject), GenericAnimal(AnimalModelObject). Basically I made GenericAnimal depend on all the underlying assemblies so that while verifying I deal with this abstraction. The other approach is to give GenericAnimal an empty constructor and allow the underlying assemblies to have a Transform() method which would build the GenericAnimal. Both approaches have some pros and cons:
    The 1st approach:
    Pros: All construction logic is in one place, in one class: GenericAnimal.
    Cons: GenericAnimal must be touched every time there is a new representation form.
    The 2nd approach:
    Pros: Construction responsibility is delegated to the underlying assembly.
    Cons: As construction logic is spread across assemblies, tomorrow if I need to add a property X to GenericAnimal then I have to touch all the assemblies to change the Transform method.
    Which approach looks better? Or which would you consider the lesser evil? Is there an alternative that is better than the above two?
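    A minimal C# sketch (not from the original post) of the second approach: each representation assembly implements a small shared contract exposing the Transform() method the question mentions, so GenericAnimal itself carries no per-representation constructor. The interface name and property names below are illustrative assumptions.

        // Shared assembly: the generic entity plus the conversion contract.
        public class GenericAnimal
        {
            public string Name { get; set; }
            public int LegCount { get; set; }
        }

        public interface IGenericAnimalSource
        {
            // Each representation builds the generic form of itself.
            GenericAnimal Transform();
        }

        // In the UI assembly (similarly for the PS, Model and WMI assemblies):
        public class AnimalUIObject : IGenericAnimalSource
        {
            public string DisplayName { get; set; }
            public int Legs { get; set; }

            public GenericAnimal Transform()
            {
                return new GenericAnimal { Name = DisplayName, LegCount = Legs };
            }
        }

        // Verification code then depends only on the contract:
        // GenericAnimal animal = representation.Transform();

    The con from the question still holds: adding a property to GenericAnimal means touching every Transform() implementation, so the choice is really about where you prefer that coupling to live.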

    Read the article

  • Socket 1155 vs 2011 vs Haswell

    - by woody
    The title says it all. I am trying to decide between sockets and just can't pinpoint which to get based on pros and cons. The build this will go into will be my primary PC. It will be used for everyday computing, coding, some multimedia and gaming. I have read that 1155 and 2011 will be dead within the new year and that Haswell will double the performance of Ivy Bridge. What is a general run-down on the different sockets? Pros and cons? More specifically, what are the technical differences between the three?

    Read the article

  • Mac Joining Active Directory Still Prompts For Authentication

    - by David Potter
    My Mac is joined to an Active Directory domain. What I expected to see was the same ease of access to file shares and internal websites that Windows computers joined to the domain experience (i.e., no authentication needed; it just uses Windows Integrated Authentication). Instead I am asked for credentials each time I try to access those shares and protected websites (e.g. SharePoint). Is this normal behavior, or is something wrong with my Mac that it prompts me for my username and password for the domain when I access Windows file shares or intranet sites protected by NTLM/Kerberos? Machines include:
    - MacBook Pros running Mountain Lion
    - MacBook Pros running Lion
    - MacServer running Lion Server

    Read the article

  • Recommendations or advice for shared computer control

    - by Telemachus
    Basic scenario: we are a school (overwhelmingly Mac, some Windows machines via BootCamp), and we are considering using DeepFreeze to guard the state of our shared machines. We have roughly 250 machines that are either shared laptops (which move around quite a bit) or common desktops in public spaces. Obviously, we spend a lot of time maintaining the machines and trying to reverse the inevitable drift as people make changes to the computers. We would like to control the integrity of the build we initially put onto the machines without handcuffing users and especially without using Mac's Parental Control software. (We've had nothing but bad experiences with it.) We've been testing DeepFreeze, and so far it's very impressive. But I'm curious to hear if people who have used DeepFreeze or any similar software have any advice or tips. To get things started, I will post my own pros and cons. Pros: The state of the machine is frozen in our chosen state. All changes made to the machine after that disappear upon restart. (This frozen state really appears to cover everything. I have yet to do something to a test machine that isn't instantly healed.) Tons of trivial but time-consuming maintenance is gone in an instant. Also, lots of not-so-trivial breakage should be avoided. There are good options, however, that allow you to create storage spaces either globally or per user. (Otherwise, stored files disappear upon reboot. For some machines, this is a good option itself. Simply warn people: save externally or else; this machine is a kiosk, not your storage space.) Cons: Anytime we actually need to make a change (upgrade basic software, add a printer or an airport permanently, add new software), the process is a bit more complex. Reboot into a special mode (thaw state), make changes, reboot back into frozen mode. If (when?) we forget this, we will end up making changes that disappear after the next reboot. Users will forget to save files correctly (in the right place or externally), and we will have loud, unpleasant conversations explaining that we can't recover the document they worked on all afternoon yesterday. The machine rebooted. The file is gone. These are my initial thoughts, but I would love to hear from other people who have experience with DeepFreeze or any similar software. What should we be careful about? Do the pros outweigh the cons? What gains or problems am I not seeing? Thanks.

    Read the article

  • Multiple VMs for Tomcat cluster vs Multiple Tomcat instances on one physical box

    - by Greymeister
    I'm working on a project that will be implemented into production using a cluster of Apache Tomcat instances, and I'm looking for the best hardware/OS solution; VMs have come up as one option. I have run ESXi/ESX instances before for development and testing, but I'm curious whether, for a hosting environment, having multiple VMs is actually worse than just configuring a server to host multiple instances of Tomcat. These are my guesses:
    Pros for VMware:
    - Easier maintenance/backup for individual VMs (VMware makes this easy)
    - Can remote login to individual VMs without having to give host access (security?)
    - Easier way to re-purpose the machine for OS/hardware changes
    Pros for running on one physical machine:
    - Overhead of only one OS (also no VMware footprint)
    - Update OS/security changes once
    - One less administrative layer (no VM expertise required)
    I'm curious if anyone has any other ideas about what the benefits would be for either option.

    Read the article

  • What is this service called in English?

    - by moomoochoo
    DETAILS
    I'm not familiar with server administration and am trying to find the name of a particular service offered by a Japanese website provider. The service is for a dedicated server. I have attached a picture detailing the service below. The left side of the picture shows the server without the service; the right side shows the server with the service. Once I know the name of the service I will Google it, but any additional information regarding the pros and cons of such a service would be much appreciated. Thanks!
    QUESTION
    What is the name of the service in the picture? What are the pros and cons of this service?

    Read the article

  • SQL Server environment

    - by Olegas D
    Hello, I'm considering some changes to our current sales environment and am trying to weigh all the pros and cons. Current situation: a SQL Server machine (a quite decent HP server - server1) plus a backup server (a smaller Dell server - server2). All SQL files and SQL Server itself are on server1. If something goes wrong with server1, I will have to move to server2 manually. Connecting to the SQL Server: 1 HQ (where the server is located) + 4 sites through VPN. Now I'm considering 2 scenarios: (1) buy a storage system, update the existing servers (add RAM, upgrade processors) and go for VMware ESXi; (2) rent a server at a datacenter, rent a virtual server in case the real server goes down, and also rent some space at a data storage provider to keep the SQL files there. Has anyone considered these things and maybe found a good pros/cons list? ;) Thanks

    Read the article

  • function to org-sort by three (3) criteria: due date / priority / title

    - by lawlist
    Is anyone aware of an org-sort function / modification that can refile / organize a group of TODO so that it sorts them by three (3) criteria: first sort by due date, second sort by priority, and third sort by by title of the task? EDIT: If anyone can please help me to modify this so that undated TODO are sorted last, that would be greatly appreciated -- at the present time, undated TODO are not being sorted: ;; multiple sort (defun org-sort-multi (&rest sort-types) "Multiple sorts on a certain level of an outline tree, or plain list items. SORT-TYPES is a list where each entry is either a character or a cons pair (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and CHAR is one of the characters defined in `org-sort-entries-or-items'. Entries are applied in back to front order. Example: To sort first by TODO status, then by priority, then by date, then alphabetically (case-sensitive) use the following call: (org-sort-multi '(?d ?p ?t (t . ?a)))" (interactive) (dolist (x (nreverse sort-types)) (when (char-valid-p x) (setq x (cons nil x))) (condition-case nil (org-sort-entries (car x) (cdr x)) (error nil)))) ;; sort current level (defun lawlist-sort (&rest sort-types) "Sort the current org level. SORT-TYPES is a list where each entry is either a character or a cons pair (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and CHAR is one of the characters defined in `org-sort-entries-or-items'. Entries are applied in back to front order. Defaults to \"?o ?p\" which is sorted by TODO status, then by priority" (interactive) (when (equal mode-name "Org") (let ((sort-types (or sort-types (if (or (org-entry-get nil "TODO") (org-entry-get nil "PRIORITY")) '(?d ?t ?p) ;; date, time, priority '((nil . ?a)))))) (save-excursion (outline-up-heading 1) (let ((start (point)) end) (while (and (not (bobp)) (not (eobp)) (<= (point) start)) (condition-case nil (outline-forward-same-level 1) (error (outline-up-heading 1)))) (unless (> (point) start) (goto-char (point-max))) (setq end (point)) (goto-char start) (apply 'org-sort-multi sort-types) (goto-char end) (when (eobp) (forward-line -1)) (when (looking-at "^\\s-*$") ;; (delete-line) ) (goto-char start) ;; (dotimes (x ) (org-cycle)) )))))

    Read the article

  • Best Method to SFTP or FTPS Files via SSIS

    - by Registered User
    What is the best method using SSIS (SQL Server Integration Services) to upload a file to either a remote SFTP (secure FTP with SSH2 protocol) or FTPS (FTP over SSL) site? I've used the following methods, but each has shortcomings I would like to avoid:
    COZYROC LIBRARY
    Method: Install the CozyRoc library on each development and production server and use the SFTP task to upload the files.
    Pros: Easy to use. It looks, smells, and feels like a normal SSIS task. SSIS also recognizes the password as sensitive information and allows you all the normal options for protecting the sensitive information instead of just storing it in clear text in a non-secure manner. Works well with other SSIS tasks such as ForEach Loop Containers. Errors out when uploads and downloads fail. Works well when you don't know the names of the files on the remote FTP site to download or when you won't know the name of the file to upload until run-time.
    Cons: Costs money to license in a production environment. Makes you dependent upon the vendor to update their libraries between each version. Although they already have a 2008 version, this caused me a problem during the CTPs of 2008. Requires installing the libraries on each development and production machine.
    COMMAND LINE SFTP PROGRAM
    Method: Install a free command-line SFTP application such as Putty and execute it either by running a batch file or an operating system process task.
    Pros: Free, free, and free. You can be sure it is secure if you are using Putty, since numerous GUI FTP clients appear to use Putty under the covers. You DEFINITELY know you are using SSH2 and not SSH.
    Cons: The two command-line utilities I tried (Putty and Cygwin) required storing the SFTP password in a non-secure location. I haven't found a good way to capture failures or errors when uploading files. The process doesn't look and smell like SSIS. Most of the code is encapsulated in text files instead of SSIS itself. Difficult to use if you don't know the exact name of the file you are uploading or downloading.
    A 3RD PARTY C# OR VB.NET LIBRARY
    Method: Install an SFTP or FTPS library and use a Script Task that references the library to upload the files. (I've never tried this, so I'm going to guess at the pros and cons -- see the sketch below.)
    Pros: Probably easy to capture errors. Should work well with variables, so it would probably be easy to use even when you don't know the exact name of the file you are uploading or downloading.
    Cons: It's a script task combined with .NET libraries. If you are using SSIS, then you probably are more comfortable with SSIS tasks than .NET code. Script tasks are also difficult to troubleshoot since they don't have the same debugging tools and features as regular .NET projects. Creates a dependency on 3rd party code that may not work between different versions of SQL Server. To be fair, it is probably MORE likely to work between different versions of SQL Server than a 3rd party SSIS task library. Another huge con -- I haven't found a free C# or VB.NET library that does this as of yet. So if anyone knows of one, then please let me know!
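    As a rough illustration of the third option, here is a hedged C# sketch (not from the original post) of the code an SSIS Script Task might call, using the free open-source SSH.NET (Renci.SshNet) library; the host, port, credentials and paths are placeholders you would normally feed in from package variables rather than hard-coded strings, which also keeps the password out of the script itself.

        using System.IO;
        using Renci.SshNet;

        public static class SftpUploader
        {
            // Uploads a single local file to a remote SFTP server.
            // All connection details passed in here are illustrative placeholders.
            public static void Upload(string host, int port, string user, string password,
                                      string localPath, string remotePath)
            {
                using (var client = new SftpClient(host, port, user, password))
                {
                    client.Connect();
                    using (var fileStream = File.OpenRead(localPath))
                    {
                        client.UploadFile(fileStream, remotePath);
                    }
                    client.Disconnect();
                }
            }
        }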

    Read the article

  • Recommended integration mechanism for bi-directional, authenticated, encrypted connection in C client

    - by rcampbell
    Let me first give an example. Imagine you have a single server running a JVM application. This server keeps a collection of N equations, once for each client: Client #1: 2x Client #2: 1 + y Client #3: z/4 This server includes an HTTP interface so that random visitors can type https://www.acme.com/client/3 int their browsers and see the latest evaluated result of z/4. The tricky part is that either the client or the server may change the variable value at any time, informing the other party immediately. More specifically, Client #3 - a C app - can initially tell the server that z = 20. An hour later that same client informs the server that z = 23. Likewise the server can later inform the client that z = 28. As caf pointed out in the comments, there can be a race condition when values are changed by the client and server simultaneously. The solution would be for both client and server to send the operation performed in their message, which would need to be executed by the other party. To keep things simple, let's limit the operations to (commutative) addition, allowing us to disregard message ordering. For example, the client seeds the server with z = 20: server:z=20, client:z=20 server sends {+3} message (so z=23 locally) & client sends {-2} message (so z=18 locally) at the exact same time server receives {-2} message at some point, adds to his local copy so z=21 client receives {+3} message at some point, adds to his local copy so z=21 As long as all messages are eventually evaluated by both parties, the correct answer will eventually be given to the users of the client and server since we limited ourselves to commutative operations (addition of 3 and -2). This does mean that both client and server can be returning incorrect answers in the time it takes for messages to be exchanged and processed. While undesirable, I believe this is unavoidable. Some possible implementations of this idea include: Open an encrypted, always on TCP socket connection for communication Pros: no additional infrastructure needed, client and server know immediately if there is a problem (disconnect) with the other party, fairly straightforward (except the the encryption), native support from both JVM and C platforms Cons: pretty low-level so you end up writing a lot yourself (protocol, delivery verification, retry-on-failure logic), probably have a lot of firewall headaches during client app installation Asynchronous messaging (ex: ActiveMQ) Pros: transactional, both C & Java integration, free up the client and server apps from needing retry logic or delivery verification, pretty straightforward encryption, easy extensibility via message filters/routers/etc Cons: need additional infrastructure (message server) which must never fail, Database or file system as asynchronous integration point Same pros/cons as above but messier RESTful Web Service Pros: simple, possible reuse of the server's existing REST API, SSL figures out the encryption problem for you (maybe use RSA key a la GitHub for authentication?) Cons: Client now needs to run a C HTTP REST server w/SSL, client and server need retry logic. Axis2 has both a Java and C version, but you may be limited to SOAP. What other techniques should I be evaluating? What real world experiences have you had with these mechanisms? Which do you recommend for this problem and why?

    Read the article

  • Custom Online Backup Solution Advice

    - by Martín Marconcini
    I have to implement a way for our customers to back up their SQL 2000/2005/2008 databases online. The application they use is a C#/.NET 3.5 WinForms application that connects to a SQL Server (can be 2000/2005/2008, sometimes Express editions). The SQL Server is on the same LAN. Our application has a very specific UI and we must code each form following those guidelines. There's lots of GDI+ to give it the look and feel we want. For that reason, using a 3rd party application is not a very good idea. We need to charge the customer on a monthly/annual basis for the service. Preferably, the customer doesn't need to care about bandwidth and storage space. It must be transparent. Given the above reqs., my first thoughts are:
    Solution 1: Code some sort of basic FTP functionality with a behind-the-scenes SQL backup mechanism, then hire a hosting service and compress/transfer the .BAK to the hosting provider. Maintain a series of folders (one for each customer). Customers won't see what's happening; they will just see a list of their files and a big "Backup now" button that will perform the SQL backup, compress it and upload it (and update the file list) ;)
    Pros: Not very complicated to implement, simple to use, fairly simple to configure (could have a dedicated FTP user/pass).
    Cons: Finding an "FTP only" hosting plan is probably not going to be easy; they usually come with a bunch of stuff. FTP is not always the best protocol. More?
    Solution 2: Similar to 1, but instead of FTP, find a cloud computing service like Amazon S3, Mosso or similar.
    Pros: Cloud storage is fast, reliable, etc. It's kind of easy to implement (especially if there are APIs like AWS or Mosso).
    Cons: I have been unable to come up with a service optimized for resellers where I can have multiple sub-accounts (one for each customer). Billing is going to be a nightmare because these services bill per GB, and with one account it's impossible to differentiate each customer.
    Solution 3: Similar to 2, but letting the user create their own account on Amazon S3 (for example).
    Pros: You forget about billing and such.
    Cons: A mess for the customer, who has to open the Amazon (or whatever) account and will be charged by them rather than by you. You can't really charge the customer (since you're just not doing anything).
    Solution 4: Use one of the many online backup solutions built on cloud storage.
    Pros: Many of these include SQL Server backup, and a lot of features that we'd have to implement. Plus web access and stuff like that will come included.
    Cons: Still have the billing problem described in number 2. Few of these companies (if any) offer "reseller" accounts. You have to eventually use their software (some offer certain branding).
    Any better approach? Summary: You have a software product (.NET WinForms app). You want your users to be able to back up their SQL Server databases online (and be able to retrieve the backups if needed). You ideally would like to charge the customer for this service (i.e. XX € a year).
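    For the S3 variant of Solution 2, the upload step itself is small. Here is a hedged C# sketch (not part of the original post) using the AWS SDK for .NET; the bucket name, the per-customer key scheme, and the use of the synchronous PutObject call available in the .NET Framework builds of the SDK are all assumptions.

        using System.IO;
        using Amazon;
        using Amazon.S3;
        using Amazon.S3.Model;

        public static class BackupUploader
        {
            public static void UploadBackup(string accessKey, string secretKey,
                                            string compressedBackupPath, string customerId)
            {
                using (var client = new AmazonS3Client(accessKey, secretKey, RegionEndpoint.EUWest1))
                {
                    var request = new PutObjectRequest
                    {
                        BucketName = "customer-backups",   // assumed bucket name
                        // One key prefix per customer, e.g. "cust-123/mydb-20130731.bak.gz"
                        Key = customerId + "/" + Path.GetFileName(compressedBackupPath),
                        FilePath = compressedBackupPath
                    };
                    client.PutObject(request);
                }
            }
        }

    Keying objects by a per-customer prefix at least makes it possible to meter storage per customer (e.g. by listing objects under that prefix), which partially addresses the billing concern in the cons above, though it does not solve the reseller-account problem.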

    Read the article

  • When should I use Areas in TFS instead of Team Projects

    - by Martin Hinshelwood
    Well, it depends…. If you are a small company that creates a finite number of internal projects then you will find it easier to create a single project for each of your products and have TFS do the heavy lifting with reporting, SharePoint sites and Version Control. But what if you are not… Update 9th March 2010 Michael Fourie gave me some feedback which I have integrated. Ed Blankenship via @edblankenship offered encouragement and a nice quote. Ewald Hofman gave me a couple of Cons, and maybe a few more soon. Ewald’s company, Avanade, currently uses Areas, but it looks like the manual management is getting too much and the project is getting cluttered. What if you are likely to have hundreds of projects, possibly with a multitude of internal and external projects? You might have 1 project for a customer or 10. This is the situation that most consultancies find themselves in and thus they need a more sustainable and maintainable option. What I am advocating is that we should have 1 “Team Project” per customer, and use areas to create “sub projects” within that single “Team Project”. "What you describe is what we generally do internally and what we recommend. We make very heavy use of area path to categorize the work within a larger project." - Brian Harry, Microsoft Technical Fellow & Product Unit Manager for Team Foundation Server   "We tend to use areas to segregate multiple projects in the same team project and it works well." - Tiago Pascoal, Visual Studio ALM MVP   "In general, I believe this approach provides consistency [to multi-product engagements] and lowers the administration and maintenance costs. All good." - Michael Fourie, Visual Studio ALM MVP   “@MrHinsh BTW, I'm very much a fan of very large, if not huge, team projects in TFS. Just FYI :) Use Areas & Iterations.” Ed Blankenship, Visual Studio ALM MVP   This would mean that SSW would have a single Team Project called “SSW” that contains all of our internal projects and consequently all of the Areas and Iteration move down one hierarchy to accommodate this. Where we would have had “\SSW\Sprint 1” we now have “\SSW\SqlDeploy\Sprint1” with “SqlDeploy” being our internal project. At the moment SSW has over 70 internal projects and more than 170 total projects in TFS. This method has long term benefits that help to simplify the support model for companies that often have limited internal support time and many projects. But, there are implications as TFS does not provide this model “out-of-the-box”. These implications stretch across Areas, Iterations, Queries, Project Portal and Version Control. Michael made a good comment, he said: I agree with your approach, assuming that in a multi-product engagement with a client, they are happy to adopt the same process template across all products. If they are not, then it’ll either be easy to convince them or there is a valid reason for having a different template - Michael Fourie, Visual Studio ALM MVP   At SSW we have a standard template that we use and this is applied across the board, to all of our projects. We even apply any changes to the core process template to all of our existing projects as well. If you have multiple projects for the same clients on multiple templates and you want to keep it that way, then this approach will not work for you. However, if you want to standardise as we have at SSW then this approach may benefit you as well. Implications around Areas Areas should be used for topological classification/isolation of work items. 
You can think of this as architecture areas, organisational areas or even the main features of your application. In our scenario there is an additional top level item that represents the Project / Product that we want to chop our Team Project into. Figure: Creating a sub area to represent a product/project is easy. <teamproject> <teamproject>\<Functional Area/module whatever> Becomes: <teamproject> <teamproject>\<ProjectName>\ <teamproject>\<ProjectName>\<Functional Area/module whatever> Implications around Iterations Iterations should be used for chronological classification/isolation of work items. This could include isolated time boxes, milestones or release timelines and really depends on the logical flow of your project or projects. Due to the new level in Area we need to add the same level to Iteration. This is primarily because it is unlikely that the sprints in each of your projects/products will start and end at the same time. This is just a reality of managing multiple projects. Figure: Adding the same Area value to Iteration as the top level item adds flexibility to Iteration. <teamproject>\Sprint 1 Or <teamproject>\Release 1\Sprint 1 Becomes: <teamproject>\<ProjectName>\Sprint 1 Or <teamproject>\<ProjectName>\Release 1\Sprint 1 Implications around Queries Queries are used to filter your work items based on a specified level of granularity. There are a number of queries that are built into a project created using the MSF Agile 5.0 template, but we now have multiple projects and it would be a pain to have to edit all of the work items every time we changed project, and that would only allow one team to work on one project at a time.   Figure: The Queries that are created in a normal MSF Agile 5.0 project do not quite suit our new needs. In order for project contributors to be able to query based on their project we need a couple of things. The first thing I did was to create an “_Area Template” folder that has a copy of the project layout with all the queries setup to filter based on the “_Area Template” Area and the “_Sprint template” you can see in the Area and Iteration views. Figure: The template is currently easily drag and drop, but you then need to edit the queries to point at the right Area and Iteration. This needs a tool. I then created an “Areas” folder to hold all of the area specific queries. So, when you go to create a new TFS Sub-Project you just drag “_Area Template” while holding “Ctrl” and drop it onto “Areas”. There is a little setup here. That said I managed it in around 10 minutes which is not so bad, and I can imagine it being quite easy to build a tool to create these queries Figure: These new queries can be configured in around 10 minutes, which includes setting up the Area and Iteration as well. Version Control What about your source code? Well, that is the easiest of the lot. Just create a sub folder for each of your projects/products.   Figure: Creating sub folders in source control is easy as “Right click | Create new folder”. <teamproject>\DEV\Main\ Becomes: <teamproject>\<ProjectName>\DEV\Main\ Conclusion I think it is up to each company to make a call on how you want to configure your Team Projects and it depends completely on how many projects/products you are going to have for each customer including yourself. If we decide to utilise this route it will require some configuration to get our 170+ projects into this format, and I will probably be writing some tools to help. 
Pros You only have one project to upgrade when a process template changes – After going through an upgrade of over 170 project prior to the changes in the RC I can tell you that that many projects is no fun. Standardises your Process Template – You will always have the same Process implementation across projects/products without exception You get tighter control over the permissions – Yes, you can do this on a standard Team Project, but it gets a lot easier with practice. You can “move” work items from one “product” to another – Have we not always wanted to do that. You can rename your projects – Wahoo: everyone wants to do this, now you can. One set of Reporting Services reports to manage – You set an area and iteration to run reports anyway, so you may as well set both. Simplified Check-In Policies– There is only one set of check-in policies per client. This simplifies administration of policies. Simplified Alerts – As alerts are applied across multiple projects this simplifies your alert rules as per client. Cons All of these cons could be mitigated by a custom tool that helps automate creation of “Sub-projects” within Team Projects. This custom tool could create areas, Iteration, permissions, SharePoint and queries. It just does not exist yet :) You need to configure the Areas and Iterations You need to configure the permissions You may need to configure sub sites for SharePoint (depends on your requirement) – If you have two projects/products in the same Team Project then you will not see the burn down for each one out-of-the-box, but rather a cumulative for the Team Project. This is not really that much of a problem as you would have to configure your burndown graphs for your current iteration anyway. note: When you create a sub site to a TFS linked portal it will inherit the settings of its parent site :) This is fantastic as it means that you can easily create sub sites and then set the Area and Iteration path in each of the reports to be the correct one. Every team wants their own customization (via Ewald Hofman) - small teams of 2 persons against teams of 30 – or even outsourcing – need their own process, you cannot allow that because everybody gets the same work item types. note: Luckily at SSW this is not a problem as our template is standardised across all projects and customers. Large list of builds (via Ewald Hofman) – As the build list in Team Explorer is just a flat list it can get very cluttered. note: I would mitigate this by removing any build that has not been run in over 30 days. The build template and workflow will still be available in version control, but it will clean the list. Feedback Now that I have explained this method, what do you think? What other pros and cons can you see? What do you think of this approach? Will you be using it? What tools would you like to support you?   Technorati Tags: Visual Studio ALM,TFS Administration,TFS,Team Foundation Server,Project Planning,TFS Customisation

    Read the article

  • Entity Framework v1 – Tips and Tricks Part 3

    - by Rohit Gupta
    General Tips on Entity Framework v1 & Linq to Entities: ToTraceString() If you need to know the underlying SQL that the EF generates for a Linq To Entities query, then use the ToTraceString() method of the ObjectQuery class. (or use LINQPAD) Note that you need to cast the LINQToEntities query to ObjectQuery before calling TotraceString() as follows: 1: string efSQL = ((ObjectQuery)from c in ctx.Contact 2: where c.Address.Any(a => a.CountryRegion == "US") 3: select c.ContactID).ToTraceString(); ================================================================================ MARS or MultipleActiveResultSet When you create a EDM Model (EDMX file) from the database using Visual Studio, it generates a connection string with the same name as the name of the EntityContainer in CSDL. In the ConnectionString so generated it sets the MultipleActiveResultSet attribute to true by default. So if you are running the following query then it streams multiple readers over the same connection: 1: using (BAEntities context = new BAEntities()) 2: { 3: var cons = 4: from con in context.Contacts 5: where con.FirstName == "Jose" 6: select con; 7: foreach (var c in cons) 8: { 9: if (c.AddDate < new System.DateTime(2007, 1, 1)) 10: { 11: c.Addresses.Load(); 12: } 13: } 14: } ================================================================================= Explicitly opening and closing EntityConnection When you call ToList() or foreach on a LINQToEntities query the EF automatically closes the connection after all the records from the query have been consumed. Thus if you need to run many LINQToEntities queries over the same connection then explicitly open and close the connection as follows: 1: using (BAEntities context = new BAEntities()) 2: { 3: context.Connection.Open(); 4: var cons = from con in context.Contacts where con.FirstName == "Jose" 5: select con; 6: var conList = cons.ToList(); 7: var allCustomers = from con in context.Contacts.OfType<Customer>() 8: select con; 9: var allcustList = allCustomers.ToList(); 10: context.Connection.Close(); 11: } ====================================================================== Dispose ObjectContext only if required After you retrieve entities using the ObjectContext and you are not explicitly disposing the ObjectContext then insure that your code does consume all the records from the LinqToEntities query by calling .ToList() or foreach statement, otherwise the the database connection will remain open and will be closed by the garbage collector when it gets to dispose the ObjectContext. Secondly if you are making updates to the entities retrieved using LinqToEntities then insure that you dont inadverdently dispose of the ObjectContext after the entities are retrieved and before calling .SaveChanges() since you need the SAME ObjectContext to keep track of changes made to the Entities (by using ObjectStateEntry objects). So if you do need to explicitly dispose of the ObjectContext do so only after calling SaveChanges() and only if you dont need to change track the entities retrieved any further. ======================================================================= SQL InjectionAttacks under control with EFv1 LinqToEntities and LinqToSQL queries are parameterized before they are sent to the DB hence they are not vulnerable to SQL Injection attacks. EntitySQL may be slightly vulnerable to attacks since it does not use parameterized queries. However since the EntitySQL demands that the query be valid Entity SQL syntax and valid native SQL syntax at the same time. 
So the only way one can do a SQLInjection Attack is by knowing the SSDL of the EDM Model and be able to write the correct EntitySQL (note one cannot append regular SQL since then the query wont be a valid EntitySQL syntax) and append it to a parameter. ====================================================================== Improving Performance You can convert the EntitySets and AssociationSets in a EDM Model into precompiled Views using the edmgen utility. for e.g. the Customer Entity can be converted into a precompiled view using edmgen and all LinqToEntities query against the contaxt.Customer EntitySet will use the precompiled View instead of the EntitySet itself (the same being true for relationships (EntityReference & EntityCollections of a Entity)). The advantage being that when using precompiled views the performance will be much better. The syntax for generating precompiled views for a existing EF project is : edmgen /mode:ViewGeneration /inssdl:BAModel.ssdl /incsdl:BAModel.csdl /inmsl:BAModel.msl /p:Chap14.csproj Note that this will only generate precompiled views for EntitySets and Associations and not for existing LinqToEntities queries in the project.(for that use CompiledQuery.Compile<>) Secondly if you have a LinqToEntities query that you need to run multiple times, then one should precompile the query using CompiledQuery.Compile method. The CompiledQuery.Compile<> method accepts a lamda expression as a parameter, which denotes the LinqToEntities query  that you need to precompile. The following is a example of a lamda that we can pass into the CompiledQuery.Compile() method 1: Expression<Func<BAEntities, string, IQueryable<Customer>>> expr = (BAEntities ctx1, string loc) => 2: from c in ctx1.Contacts.OfType<Customer>() 3: where c.Reservations.Any(r => r.Trip.Destination.DestinationName == loc) 4: select c; Then we call the Compile Query as follows: 1: var query = CompiledQuery.Compile<BAEntities, string, IQueryable<Customer>>(expr); 2:  3: using (BAEntities ctx = new BAEntities()) 4: { 5: var loc = "Malta"; 6: IQueryable<Customer> custs = query.Invoke(ctx, loc); 7: var custlist = custs.ToList(); 8: foreach (var item in custlist) 9: { 10: Console.WriteLine(item.FullName); 11: } 12: } Note that if you created a ObjectQuery or a Enitity SQL query instead of the LINQToEntities query, you dont need precompilation for e.g. 1: An Example of EntitySQL query : 2: string esql = "SELECT VALUE c from Contacts AS c where c is of(BAGA.Customer) and c.LastName = 'Gupta'"; 3: ObjectQuery<Customer> custs = CreateQuery<Customer>(esql); 1: An Example of ObjectQuery built using ObjectBuilder methods: 2: from c in Contacts.OfType<Customer>().Where("it.LastName == 'Gupta'") 3: select c This is since the Query plan is cached and thus the performance improves a bit, however since the ObjectQuery or EntitySQL query still needs to materialize the results into Entities hence it will take the same amount of performance hit as with LinqToEntities. However note that not ALL EntitySQL based or QueryBuilder based ObjectQuery plans are cached. So if you are in doubt always create a LinqToEntities compiled query and use that instead ============================================================ GetObjectStateEntry Versus GetObjectByKey We can get to the Entity being referenced by the ObjectStateEntry via its Entity property and there are helper methods in the ObjectStateManager (osm.TryGetObjectStateEntry) to get the ObjectStateEntry for a entity (for which we know the EntityKey). 
Similarly The ObjectContext has helper methods to get an Entity i.e. TryGetObjectByKey(). TryGetObjectByKey() uses GetObjectStateEntry method under the covers to find the object, however One important difference between these 2 methods is that TryGetObjectByKey queries the database if it is unable to find the object in the context, whereas TryGetObjectStateEntry only looks in the context for existing entries. It will not make a trip to the database ============================================================= POCO objects with EFv1: To create POCO objects that can be used with EFv1. We need to implement 3 key interfaces: IEntityWithKey IEntityWithRelationships IEntityWithChangeTracker Implementing IEntityWithKey is not mandatory, but if you dont then we need to explicitly provide values for the EntityKey for various functions (for e.g. the functions needed to implement IEntityWithChangeTracker and IEntityWithRelationships). Implementation of IEntityWithKey involves exposing a property named EntityKey which returns a EntityKey object. Implementation of IEntityWithChangeTracker involves implementing a method named SetChangeTracker since there can be multiple changetrackers (Object Contexts) existing in memory at the same time. 1: public void SetChangeTracker(IEntityChangeTracker changeTracker) 2: { 3: _changeTracker = changeTracker; 4: } Additionally each property in the POCO object needs to notify the changetracker (objContext) that it is updating itself by calling the EntityMemberChanged and EntityMemberChanging methods on the changeTracker. for e.g.: 1: public EntityKey EntityKey 2: { 3: get { return _entityKey; } 4: set 5: { 6: if (_changeTracker != null) 7: { 8: _changeTracker.EntityMemberChanging("EntityKey"); 9: _entityKey = value; 10: _changeTracker.EntityMemberChanged("EntityKey"); 11: } 12: else 13: _entityKey = value; 14: } 15: } 16: ===================== Custom Property ==================================== 17:  18: [EdmScalarPropertyAttribute(IsNullable = false)] 19: public System.DateTime OrderDate 20: { 21: get { return _orderDate; } 22: set 23: { 24: if (_changeTracker != null) 25: { 26: _changeTracker.EntityMemberChanging("OrderDate"); 27: _orderDate = value; 28: _changeTracker.EntityMemberChanged("OrderDate"); 29: } 30: else 31: _orderDate = value; 32: } 33: } Finally you also need to create the EntityState property as follows: 1: public EntityState EntityState 2: { 3: get { return _changeTracker.EntityState; } 4: } The IEntityWithRelationships involves creating a property that returns RelationshipManager object: 1: public RelationshipManager RelationshipManager 2: { 3: get 4: { 5: if (_relManager == null) 6: _relManager = RelationshipManager.Create(this); 7: return _relManager; 8: } 9: } ============================================================ Tip : ProviderManifestToken – change EDMX File to use SQL 2008 instead of SQL 2005 To use with SQL Server 2008, edit the EDMX file (the raw XML) changing the ProviderManifestToken in the SSDL attributes from "2005" to "2008" ============================================================= With EFv1 we cannot use Structs to replace a anonymous Type while doing projections in a LINQ to Entities query. While the same is supported with LINQToSQL, it is not with LinqToEntities. For e.g. the following is not supported with LinqToEntities since only parameterless constructors and initializers are supported in LINQ to Entities. 
(the same works with LINQToSQL) 1: public struct CompanyInfo 2: { 3: public int ID { get; set; } 4: public string Name { get; set; } 5: } 6: var companies = (from c in dc.Companies 7: where c.CompanyIcon == null 8: select new CompanyInfo { Name = c.CompanyName, ID = c.CompanyId }).ToList(); ;

    Read the article

  • Should we enforce code style in our large codebase?

    - by eighttrackmind
    By "code style" I mean 2 things: Style, eg. // bad if(foo){ ... } // good if (foo) { ... } Conventions and idiomaticity, where two ways of writing the same thing are functionally equivalent, but one is more idiomatic. eg. // bad if (fooLib.equals(a, b)) { ... } // good if (a == b) { ... } I think it makes sense to use an auto-formatter to enforce #1 automatically. So my question is specifically about #2. I like to break things down into pros and cons, here's what I've come up with so far: Pros: Used by many large codebases (eg. Google, jQuery) Helps make it a bit easier to work on new areas of the codebase Helps make code more portable (this is not necessarily true) Code style is automatic once you get used to it Makes it easier to fast-decline pull requests Cons: Takes engineers’ and code reviewers’ time away from more important things (like developing features) Code should ideally be rewritten every 2-3 years anyway, so it’s more important to focus on getting the architecture right, and achieving high test coverage Adds strain to code reviews (eg. “don’t do it this way, I like this other way better”) Even if I’ve been using a code style for a while, I still sometime have to pause and think about how to write a line better Having an enforced, uniform code style makes it hard to experiment with potentially better styles Maintaining a style guide takes a lot of incremental effort Engineers rarely read through the style guide. More often, it's cited in code reviews And as a secondary question: we also have many smaller repositories - should the same code style be enforced there?

    Read the article

  • HTML5 or JavaScript game engine to develop a browser game

    - by Jack Duluoz
    I would like to start developing an MMO browser game, like Travian or OGame, probably also involving some more sophisticated graphical features such as players interacting in real time with a 2D map or something like that. My main doubt is what kind of development tools I should use: I have good experience with PHP and MySQL for the server side and JavaScript (and jQuery) for the client side. Coding everything from scratch would of course be really painful, so I was wondering if I should use a JavaScript game engine or not. Are there (preferably free) game engines you would recommend? Are they good enough to develop a big game? Also, I have seen a lot of HTML5 games popping up lately, but I'm not sure if using HTML5 is a good idea or not. Would you recommend it? What are the pros and cons of using HTML5? If you'd recommend it, do you have any good links regarding game development with HTML5? (PS: I know that HTML5 and a JavaScript engine are not mutually exclusive; I just didn't know how to formulate a proper title since English is not my main language. So, please, answer addressing HTML5 and game engine pros and cons separately.)

    Read the article

  • Dilemma for growing a project: Open source volunteer developers VS closed source paid / revshare developers? [closed]

    - by giorgio79
    I am trying to grow my project, and I am vacillating between some examples. Some options seem to be:
    1. Open sourcing the project to draw volunteer developers.
    Pros: This would mean anyone can try to make some money off the code, which would motivate them to contribute back and grow the project.
    Cons: Existing, bigger players could easily copy and paste my work so far. They can also replicate it without having access to the code, but that would take more time. I also thought of using the AGPL license, but again, code can still be copied without redistribution. After all, enforcing a license costs a lot of money, and I cannot just say to a possible copycat "it seems you copied my code, show me what you got."
    2. Keep the project closed source, but create some kind of developer program where contributors get revshare.
    Pros: I keep the main rights to the project, but still generate interest by creating a developer program. No one can copy the code easily (only with considerable effort), while contributions stay easy as a breeze. I am also seeing many companies open source only part of their projects -- for example, Acquia does not open source its multisite setup, and GitHub does not open source some of its core business.
    Cons: Less attention from developers committed to open source.
    Conclusion: So option 2 seems the most secure, but I would love some feedback.

    Read the article

  • Node.js MMO - process and/or map division

    - by Gipsy King
    I am in the design phase of an MMO browser-based game (certainly not massive, but all connected players are in the same universe), and I am struggling to find a good solution to the problem of distributing players across processes. I'm using node.js with socket.io. I have read this helpful article, but I would like some advice since I am also concerned with different processes.
    Solution 1: Tie a process to a map location (like a map cell), and connect players to the process corresponding to their location. When a player performs an action, transmit it to all other players in this process. When a player moves away, he will eventually have to connect to another process (automatically).
    Pros:
    - Easier to implement
    Cons:
    - Must divide the map into zones
    - Player reconnection when moving into a different zone is probably annoying
    - If one zone/process is always busy (has players in it), it doesn't really load-balance, unless I split the zone, which may not always be viable
    - There shouldn't be any visible borders
    Solution 1b: Same as 1, but connect processes of bordering cells, so that players on the other side of the border are visible and such. Maybe even let them interact.
    Solution 2: Spawn processes on demand, unrelated to a location. Have one special process keep track of all connected player handles, their locations, and the processes they're connected to. Then when a player performs an action, the process finds all other nearby players (from the special player-process-location tracking node) and instructs their matching processes to relay the action.
    Pros:
    - Easy load balancing: spawn more processes
    - Avoids player reconnecting / borders between zones
    Cons:
    - Harder to implement and test
    - Additional steps of finding players and relaying the event/action to another process
    - If the player-location-process tracking process fails, all others fail too
    I would like to hear if I'm missing something, or am completely off track.

    Read the article

  • Examples of permission-based authorization systems in .Net?

    - by Rachel
    I'm trying to figure out how to do roles/permissions in our application, and I am wondering if anyone knows of a good place to get a list of different permission-based authorization systems (preferably with code samples) and perhaps a list of pros/cons for each method. I've seen examples using simple dictionaries, custom attributes, claims-based authorization, and custom frameworks, but I can't find a simple explanation of when to use one over another and what the pros/cons are to using each method. (I'm sure there's other ways than the ones I've listed....) I have never done anything complex with permissions/authorization before, so all of this seems a little overwhelming to me and I'm having trouble figuring out what what is useful information that I can use and what isn't. What I DO know is that this is for a Windows environment using C#/WPF and WCF services. Some permission checks are done on the WCF service and some on the client. Some are business rules, some are authorization checks, and others are UI-related (such as what forms a user can see). They can be very generic like boolean or numeric values, or they can be more complex such as a range of values or a list of database items to be checked/unchecked. Permissions can be set on the group-level, user-level, branch-level, or a custom level, so I do not want to use role-based authorization. Users can be in multiple groups, and users with the appropriate authorization are in charge of creating/maintaining these groups. It is not uncommon for new groups to be created, so they can't be hard-coded.
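    As a very small illustration of the "simple dictionary" style mentioned above, here is a hedged C# sketch (not from the original question) of resolving a permission across group/branch/user levels; the level-precedence rule and all type and member names are assumptions, and a real system would load the grants from the WCF service or database.

        using System.Collections.Generic;

        // The scopes a grant can be attached to, from least to most specific.
        public enum PermissionLevel { Group = 0, Branch = 1, User = 2 }

        public sealed class PermissionGrant
        {
            public string PermissionName { get; set; }   // e.g. "CanViewReports"
            public PermissionLevel Level { get; set; }
            public object Value { get; set; }            // bool, number, range, list of items...
        }

        public sealed class PermissionResolver
        {
            private readonly List<PermissionGrant> _grants;

            public PermissionResolver(IEnumerable<PermissionGrant> grants)
            {
                _grants = new List<PermissionGrant>(grants);
            }

            // Returns the value of the most specific grant for a permission,
            // so a user-level grant overrides a branch- or group-level one.
            public object Resolve(string permissionName)
            {
                PermissionGrant best = null;
                foreach (var grant in _grants)
                {
                    if (grant.PermissionName != permissionName) continue;
                    if (best == null || grant.Level > best.Level) best = grant;
                }
                return best != null ? best.Value : null;
            }
        }

    Claims-based authorization can model the same idea by encoding each grant as a claim, but the precedence logic between group, branch and user levels still has to live somewhere in your own code.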

    Read the article

  • Choosing the right language for the job

    - by Ampt
    I'm currently working for a company on the engineering team of about 5-6 people and have been given the job of heading up the redesign of an embedded system tester. We've decided the general requirements and attributes that would be desirable in the system, and now I have to decide on a language to use for the system, or at the very least come up with a list of languages with pros and cons to present to the team. The general idea of the project is that we currently have a tester written in c++, which was never designed to be a tester, but instead has evolved to be such over the course of 3-4 years due to need. Writing tests for a new product requires modifying the 'framework' and writing code that is completely non-human readable or intuitive due to the way the system was originally designed. Now, we've decided that the time to modify this tester for each new product that we want to test has become too high and want to partially re-write the system so that we can program the actual tests in a scripting language that would then use the modified c++ framework on the back end to test the actual systems. The c++ framework would be responsible for doing all the actual work and the scripting language would just integrate with that to tell the framework what to do. Never having programmed in a scripting language (we program embedded systems), I've run into a wall where I have no experience with any of the languages that we could possibly use, but must somehow give pros and cons of each language so that we can choose the best one for the job. Currently my short list of possibilities includes: Python TCL Lua Perl My question is this: How can a person evaluate a language that he/she has never used before? What criteria are good indicators for a languages potential usability on a project? While helpful suggestions for my particular case are appreciated, I feel that this is a good skill to possess and would like to be able to apply this to many different projects if at all possible

    Read the article

  • Use IIS Express in Visual Studio SP1

    - by anirudha
    Many developers want to adopt IIS Express (aka WebMatrix) for use inside Visual Studio; that is a feature of Visual Studio 2010 Service Pack 1. With SP1, the developer can choose either the Visual Studio development server or IIS Express. The Visual Studio 2010 SP1 beta launched today, but I do not recommend it because it is a beta. If you use it you may find yourself in big trouble, because we all know how betas behave. Don't be fooled into installing it just because the announcements tell you the pros and not the cons. The stable version is Visual Studio 2010 without Service Pack 1, which can be obtained from the Microsoft Download Center, so beware of using an unstable version. With a browser you have three options -- Chrome, Firefox, or Internet Explorer -- but this is an IDE and we only have one, which can be a real problem if it does not work well. Note that Service Pack 1 also changes your installation of the Visual Studio Express editions (tools such as Visual Web Developer or Visual C# Express) if you already have them. If you have installed Service Pack 1, share your experience with us! And don't rely only on the Microsoft website blogs, because they never tell you both the pros and the cons of their software.

    Read the article

  • SQL – Download NuoDB and Qualify for FREE Amazon Gift Cards

    - by Pinal Dave
    July has been a fantastic month and Team NuoDB has really appreciated the active participation of the SQLAuthority.com reader base. Earlier we launched two contests with NuoDB and both were very much appreciated by readers. There are constant demands for more contests, and team NuoDB is very excited to support them. Here are the details of the contests run earlier: What ACID stands in the Database? – Contest to Win 24 Amazon Gift Cards and Joes 2 Pros 2012 Kit; What is the latest Version of NuoDB? – A Quick Contest to Get Amazon Gift Cards. Based on the earlier successful contests, the kind folks at NuoDB decided to support one more round of giveaways for SQLAuthority.com contests. However, please note that this month's contest will end in the next 48 hours. You have to take part before July 31st, 2013 11:59:00 PM PST. Here is the quick contest: you just have to go and download NuoDB. The first 10 people who download NuoDB will get USD 10 Amazon gift cards. Everyone else will be entered into a lucky draw for USD 50 Amazon gift cards. Winners will be announced in the next 24 hours. To be eligible for this contest, please download NuoDB before July 31st, 2013 11:59:00 PM PST. Bonus Round: If you have entered the contest above, you can also enter to win the latest Beginning SSRS Joes 2 Pros book. You just have to leave a comment over here with a note about how many different platforms NuoDB supports. Here are a few of the blog posts I wrote earlier on that subject: Part 1 – Install NuoDB in 90 Seconds; Part 2 – Manage NuoDB Installation; Part 3 – Explore NuoDB Database; Part 4 – Migrate from SQL Server to NuoDB; Part 5 - NuoDB and Third Party Explorer – SQuirreL SQL Client, SQL Workbench/J and DbVisualizer. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Best ways to collect location-based user input

    - by user359650
    I'm working on a website where users will be able to register and provide information about their location. In order to prevent users from inputting incorrect data, we don't want users to provide free-text information but instead choose from predefined values as much as possible. We believe there are 2 ways of providing those values: use an API to an external service provider or create your own local database. APIs Some resources: - https://developers.facebook.com/docs/reference/ads-api/get-autocomplete-data/ - http://developer.yahoo.com/geo/geoplanet/ Pros: -accuracy and completeness of data. -no maintenance related to update of data as this it taken care of by API provider. -easier/faster to get started (no need to create local database, just implement API). Cons: -degradation of performance when availability issues with external API. -outage due to changes to the external API (until your code is updated to reflect those changes). -lock-in with external provider. Local database Some resources: - http://developer.yahoo.com/geo/geoplanet/data/ - http://www.maxmind.com/app/geolitecity - http://download.geonames.org/export/dump/ Pros: -no external dependency: improved stability and performance. Cons: -more work to get started (you need to create the database and code to interact with it). -risks of inaccurate/incomplete data, either initially or over time. -more maintenance work to keep database up to date. Assuming the depth information requested from users is as follows: -country: interested in value. also used to narrow down list of regions. -region (state in the US, county in the UK...): not interested in value itself, only used to narrow down list of cities. -city: interested in value (which can be used to work out related region should we need regional statistics). -address: interested in value although OPTIONAL. Which option (whether API or local database) would you choose? What tips you would give for the implementation? What other resources can you share?
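    To make the "local database" option concrete, here is a small hedged C# sketch (not from the original question) of loading a predefined country-to-cities lookup from a tab-separated dump such as the GeoNames export; the column positions used here are assumptions and must be checked against the actual file documentation.

        using System;
        using System.Collections.Generic;
        using System.IO;

        public static class CityLookupLoader
        {
            // Builds a country-code -> city-names map from a tab-separated dump.
            // ASSUMPTION: column 1 holds the place name and column 8 the ISO country
            // code (as in the GeoNames "cities" files); verify against the real format.
            public static Dictionary<string, List<string>> Load(string dumpPath)
            {
                var citiesByCountry = new Dictionary<string, List<string>>(StringComparer.OrdinalIgnoreCase);
                foreach (var line in File.ReadLines(dumpPath))
                {
                    var columns = line.Split('\t');
                    if (columns.Length < 9) continue;       // skip malformed rows

                    string cityName = columns[1];
                    string countryCode = columns[8];

                    if (!citiesByCountry.TryGetValue(countryCode, out var cities))
                    {
                        cities = new List<string>();
                        citiesByCountry[countryCode] = cities;
                    }
                    cities.Add(cityName);
                }
                return citiesByCountry;
            }
        }

    The region (admin1) code lives in another column of the same dump, so the same pass can also build the region-to-cities index used to narrow the city list, which covers the country/region/city depth described above without an external API call.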

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >