Search Results

Search found 5915 results on 237 pages for 'practices'.


  • User input and automated input separation

    - by tpaksu
    I have a MySQL database and an automation script which modifies the data inside once a day. Some of these columns may also have been changed manually by a user. What is the best approach to make the system update only the automated data, and not the manually edited values? Yes, flagging a manually edited cell is one way to do it, but I want to know if there's another way to accomplish this. Just curious. BTW, the question is about cell values, not rows.
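
    One way to see the flagging idea concretely: keep a per-cell override marker next to each automated column, and have the automation respect it. A minimal MySQL sketch (the table and column names are hypothetical):

        -- One override flag per automated column; a manual edit sets the flag.
        ALTER TABLE products
            ADD COLUMN price_manual TINYINT(1) NOT NULL DEFAULT 0;

        -- Manual edit path: update the value and mark the cell as user-owned.
        UPDATE products SET price = 9.99, price_manual = 1 WHERE id = 42;

        -- Automation path: only touch cells the user has not claimed.
        UPDATE products SET price = 8.49 WHERE id = 42 AND price_manual = 0;

    An alternative without flag columns is a shadow table of (row id, column name) pairs recording manual edits, which the script consults before updating; that keeps the main schema clean at the cost of a join.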

    Read the article

  • Working in America from the UK

    - by thedixon
    I've been toying with the idea for the last few years and I figured it was about time to start asking questions about it! Here goes: I'm a Senior-level .NET/C# programmer from the UK, with 7 years of commercial experience in industry, looking to work across the pond in the big ol' USA! This is with a view to living there on a permanent basis. My idea is to try to set up some interviews and go over there for a week to attend them, then I guess wait for responses in the hope they'd sponsor me for a working visa. I'd like to know: is there anyone out there with experience of doing the same thing? Was it difficult finding work? Is there anything I should know before embarking on this route? How long did the transition take? Update: Considering the downvotes, either I've posted this in the wrong place, or people really don't like my query. If so, please shed some light.

    Read the article

  • Multiple parameters vs single parameter (object with multiple properties)

    - by Shwetanka
    I have an Entity Student with the following properties (name, joinedOn, birthday, age, batch, etc.) and a function fetchStudents(<params>). I want to fetch students based on multiple filters. In my method I have two ways to pass the filters: pass all filters as individual params to the method, or make a class StudentCriteria with the filters as fields and pass an object of that class. While working in Java I always go with the second option, but recently I've been working in PHP and I was advised to go with the first way. I can't figure out which way is better for maintainability, reusability and performance. Thanks.
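
    For what it's worth, the criteria-object approach translates directly to most languages. A minimal C# sketch, with names mirroring the question (the repository shape is hypothetical):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Student
        {
            public string Name { get; set; }
            public DateTime JoinedOn { get; set; }
            public int Batch { get; set; }
        }

        // Criteria object: optional filters are nullable, so callers set only what they need.
        public class StudentCriteria
        {
            public string Name { get; set; }
            public DateTime? JoinedAfter { get; set; }
            public int? Batch { get; set; }
        }

        public static class StudentRepository
        {
            public static IEnumerable<Student> FetchStudents(
                IEnumerable<Student> students, StudentCriteria c)
            {
                // Apply each filter only when it was actually supplied.
                if (c.Name != null)
                    students = students.Where(s => s.Name == c.Name);
                if (c.JoinedAfter.HasValue)
                    students = students.Where(s => s.JoinedOn > c.JoinedAfter.Value);
                if (c.Batch.HasValue)
                    students = students.Where(s => s.Batch == c.Batch.Value);
                return students;
            }
        }

    Usage is e.g. FetchStudents(all, new StudentCriteria { Batch = 3 }). The main win is that adding a new filter changes one class instead of every call site; the cost is a little ceremony for simple one-filter calls.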

    Read the article

  • Child to Parent linking - bad idea?

    - by Thraka
    I have a situation where my parent knows about its child (duh) but I want the child to be able to reference the parent. The reason for this is that I want the child to have the ability to designate itself as most important or least important when it feels like it. When the child does this, it moves to the top or bottom of the parent's children. In the past I've used a WeakReference property on the child to refer back to the parent, but I feel that adds an annoying overhead; then again, maybe it's just the best way to do it. Is this just a bad idea? How would you implement this ability differently?
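
    A minimal C# sketch of the WeakReference approach the poster describes (class and member names are hypothetical):

        using System;
        using System.Collections.Generic;

        public class Parent
        {
            private readonly List<Child> children = new List<Child>();

            public void Add(Child child)
            {
                child.ParentRef = new WeakReference<Parent>(this);
                children.Add(child);
            }

            // Called by a child that wants to be first in line.
            public void Promote(Child child)
            {
                children.Remove(child);
                children.Insert(0, child);
            }
        }

        public class Child
        {
            internal WeakReference<Parent> ParentRef { get; set; }

            public void MakeMostImportant()
            {
                // The weak reference avoids a strong cycle; the parent may be gone.
                if (ParentRef != null && ParentRef.TryGetTarget(out Parent p))
                    p.Promote(this);
            }
        }

    An alternative that removes the back-reference entirely is for the child to raise an event (e.g. a "promote me" notification) that the parent subscribes to when it adds the child; the child then knows nothing about who reorders it.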

    Read the article

  • Are my negative internship experiences representative of the real world?

    - by attemptAtAnonymity
    I'm curious whether my current experiences as an intern are representative of the actual industry. As background, I'm through the better part of two computing majors and a math major at a major university; I've aced every class and adored all of them, so I'd like to think that I'm not terrible at programming. I got an internship with one of the major software companies, and halfway through I've been shocked at the extraordinarily low quality of the code. Comments don't exist, it's all spaghetti code, and everything that could be wrong is even worse. I've done a ton of tutoring/TAing, so I'm very used to reading bad code, but the major industry products I've been seeing trump all of that. I work 10-12 hours a day and never feel like I'm getting anywhere, because it's endless hours of trying to figure out an undocumented API or determine the behavior of some other part of the (completely undocumented) product. I've left work hating the job every day so far, and I desperately want to know if this is what's in store for the rest of my life. Did I draw a short straw on internships (the absurdly large paychecks imply that it's not a low-quality position), or is this what the real world is like?

    Read the article

  • Should you promise to deliver a feature when you aren't sure it's implementable?

    - by user476
    In an article from HN, I came across the following advice: "Always tell your customer/user 'yes', even if you're not sure. 90% of the time, you'll find a way to do it. 10% of the time, you'll go back and apologize. Small price to pay for major personal growth." But I've always thought that one should do a feasibility analysis before making any kind of promise to a customer/user, so that they aren't misled at any point. Under what circumstances, then, is the above advice applicable?

    Read the article

  • Software development based on a reference implementation

    - by Kanishka Dilshan
    Let's say I have library "A2" as a dependency in a project. Library "A2" is derived from library "A1": someone made a few changes to library "A1"'s source code. Now say there is a new version of "A1", and I want to use it with no modification to its source code at all. I am planning to identify the changes that were made to the original library when "A2" was derived from it, and decorate the latest version of the library with those changes. Is this a good approach? If not, can someone suggest the best way to solve this kind of problem?
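
    As a hedged illustration only: if the "A2" changes amount to altered behavior of a few entry points, wrapping the unmodified new "A1" can stand in for maintaining a fork. All names below are hypothetical:

        // The new, unmodified class from library A1 that we depend on.
        public class Exporter
        {
            public virtual string Export(string data) => data;
        }

        // The A2 customizations re-applied as a wrapper instead of a source fork.
        public class PatchedExporter : Exporter
        {
            public override string Export(string data)
            {
                // Re-apply the A2-specific tweak on top of the stock behavior.
                var result = base.Export(data);
                return result.Trim();
            }
        }

    This only works cleanly when the forked changes sit behind overridable or injectable seams; if "A2" patched private internals, you're back to diffing the two source trees and maintaining that patch set against each new "A1" release.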

    Read the article

  • Preferring examples over documentation: is it a behavioral problem?

    - by user1324816
    Whenever I come across a new API or programming language or even simple Linux man pages, I have always (ever since I can remember) avoided them and instead lazily relied on examples to gain understanding of new concepts. Subconsciously, I avoid documentation/API references whenever they are not straightforward, or cryptic, or just plain boring. It's been years since I began programming, and now I feel I need to mend my ways: I realize I'm causing damage by refraining from reading cryptic or difficult documentation, since official documentation has far more coverage than any example out there. So, having realized that examples should be treated as "added" value rather than the "primary" source for learning: how do I break this bad habit as a programmer, or am I overthinking it? Any wisdom from fellow programmers is appreciated.

    Read the article

  • Basic security practices for desktop Ubuntu

    - by Daisetsu
    Most of us know the basic security practices on Windows: use a limited account, set a password, disable unused services, uninstall bloatware, run antivirus/antimalware, etc. I haven't run Linux as my main desktop computer before, so I don't know how to properly secure it. I have heard Linux is supposed to be more secure than Windows, but I know that the default settings of anything are rarely secure. What are some things I should do as a new Linux user to secure my desktop system from attack?
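
    Not an answer from the thread, just a commonly cited starting point for a stock Ubuntu desktop (package names are the standard Ubuntu ones):

        # Keep the system patched, and patch automatically.
        sudo apt-get update && sudo apt-get upgrade
        sudo apt-get install unattended-upgrades

        # Turn on the host firewall; deny inbound by default.
        sudo ufw default deny incoming
        sudo ufw default allow outgoing
        sudo ufw enable

        # See what is actually listening before deciding what to disable.
        sudo ss -tulpn

    Beyond that, the stock advice mirrors the Windows list: don't run as root day to day (Ubuntu's sudo model already enforces this), and only install software from trusted repositories.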

    Read the article

  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, whether what we are doing is appropriate, and what factors you should consider when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go:

    If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, and proper adherence to the SOLID principles are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access database, an Excel spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS).

    I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over, so I won’t bother going into them again here, but there are also (unnecessary) costs in over-engineering a solution. Sometimes putting in multiple layers, and IoC containers, and abstracting out the persistence, etc. is complete overkill if a one-time-use Access database could solve the problem perfectly well.

    A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, and then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure, you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point… maybe even multiple times).

    My thought is that we should be making a conscious decision around the start of each project about approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors:

    1. Importance – How important to the business is this application? What is the impact if the application were to suddenly stop working?
    2. Complexity – How complex is the application functionality?
    3. Life-Expectancy – How long is this application expected to be in use? Is this a one-time-use application? Does it fill a short-term need, or is it more strategic, expected to be in use for many years to come?

    Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at and then translate that into some prescriptive set of practices and techniques you should be using. Rather, my point is that we need to be aware that there is a spectrum, that not everything is going to be (or should be) at the edges of that spectrum (indeed a large number of projects should probably fall somewhere within the middle), and that different projects should adopt a different level of software engineering practices and maturity based on the needs of that project.

    To give an example of this way of thinking from my day job: every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event, all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics.

    The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so, for example, if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS, we can see why:

    Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer-facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss).

    Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data and link them together appropriately: precisely the task that Access (and/or Excel) can do with minimal custom development required.

    Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below).

    Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much. We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis).

    At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple of years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also, as we hold focus groups and gather feedback from customers and potential customers, there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company.

    In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release, we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, a proper ORM and IoC container, etc. More likely, the jump along the SEMS in this example would be so large that we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch.

    The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time, but rather that you should be aware of when and how the context and objectives around a project change, and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs.

    Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed, since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay).

    Developer Skills

    Another important aspect to this whole discussion is the skill sets of your architects and lead developers. When talking about the progression of a developer’s skills from junior->intermediate->senior->…, they generally start by only being able to write code that belongs on the left side of the SEMS, and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right end of the SEMS (the exact practices and techniques that translates to are constantly changing, but that’s not the point here).

    A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right end of the SEMS, but more importantly being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill is being able to make the conscious decision to move a project’s code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch.

    An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • Microsoft.Practices.ObjectBuilder

    - by nishant
    I have installed the application on the client's server. Here's the issue: the client's server is running in medium trust, and with medium trust GoDaddy does not give permissions to certain files. If you go to the following URL: http://helpingyougethired.com/Introduction.aspx you will see the error: Could not load file or assembly 'Microsoft.Practices.ObjectBuilder'. If I delete "Microsoft.Practices.ObjectBuilder.dll" and "Microsoft.Practices.ObjectBuilder.xml", the pages load, but when I try to save the form I get an error related to this DLL. So I need to find an alternative to this DLL; I cannot use it, yet my save, delete and select processes all depend on it. Please suggest a solution if you have one. Thanks in advance.

    Read the article

  • Coding guidelines + Best Practices?

    - by Chathuranga Chandrasekara
    I couldn't find any question that directly applies to my query, so I am posting this as a new question. If there is an existing discussion that may help me, please point it out and close the question. Question: I am going to give a presentation on C# coding guidelines, but it is not supposed to be limited to coding standards. I have a rough idea, but I think I need to address good programming practices too. So the contents will be something like this: basic coding standards (casing, formatting, etc.) and good practices (usage of HashSet over other data structures, String vs StringBuilder, string immutability and using strings effectively, etc.). I would really like to add more good practices, especially ones that improve performance. So I'd like to hear some more good practices to use with C#. Any suggestions? (No need for long descriptions: just the idea is sufficient.)
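
    To make the String vs StringBuilder point concrete, a standard C# illustration (not from the post): strings are immutable, so each += allocates and copies the whole intermediate result, while StringBuilder appends into a growable buffer.

        using System;
        using System.Text;

        class Demo
        {
            static void Main()
            {
                // O(n^2): every iteration allocates and copies a new string.
                string s = "";
                for (int i = 0; i < 10000; i++)
                    s += "x";

                // O(n): StringBuilder appends into a reusable buffer.
                var sb = new StringBuilder();
                for (int i = 0; i < 10000; i++)
                    sb.Append("x");
                string t = sb.ToString();

                Console.WriteLine(s.Length == t.Length);  // True
            }
        }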

    Read the article

  • Best practices for mass email platform

    - by Niro
    I am in the process of setting up a mass email service. My question is: what are the best practices to achieve maximum deliverability? More precisely, what should I do/know to prevent spam filters from blocking the emails (the emails are not spam)? For example, how can I tell if my IP address is blacklisted somewhere, and how can I prevent it from becoming blacklisted? Is Amazon Web Services a suitable platform, given its dynamic IP addresses? What are the restrictions on the From address; can it be different from the mail server domain? ... you get it.
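
    Not from the thread, but the usual checklist starts with DNS. A sketch, with example.com standing in for the real sending domain and 203.0.113.7 for the sending IP:

        ; SPF: declare which hosts may send mail for the domain.
        example.com.               IN TXT "v=spf1 ip4:203.0.113.7 -all"

        ; Reverse DNS: the sending IP should resolve back to the mail host.
        7.113.0.203.in-addr.arpa.  IN PTR mail.example.com.

    To check whether an IP is on a DNSBL such as Spamhaus, query the reversed IP under the list's zone; an answer means it is listed, NXDOMAIN means it is not:

        dig +short 7.113.0.203.zen.spamhaus.org

    DKIM signing and monitored abuse@/postmaster@ addresses round out the basics. On shared or dynamic IP pools (including stock EC2) you inherit your neighbors' reputation, which is why dedicated IPs or a vetted relay service are commonly recommended.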

    Read the article

  • Plesk Backup Best Practices?

    - by The MYYN
    My client utilizes Plesk (9.x) for server management. We're implementing a custom backup solution, which should include a complete restorable representation of the actual Plesk configuration (emails, domains, etc.). We have full access, since it is a dedicated server. Plesk offers some backups, but they do not include the actual content of the (sub)domains, and browsing the docs and the internet I haven't found many ideas on that problem. Our target is a disaster recovery scenario resembling these steps:
    1. Reinstall a clean OS (Ubuntu) from scratch.
    2. Install MySQL/PHP and dependencies (since this runs the app).
    3. Install a bare Plesk.
    4. Restore all domains + Plesk configuration from an archive.
    5. Continue operations.
    Now steps 1, 2, 3 and 5 are trivial. But what are the best practices for step 4? A side question: are there any easy-to-use open source apps out there to create and restore server images (even on machines with possibly different hardware)? Thanks for your time and input.

    Read the article

  • Best practices for setting lm-factor in Squid refresh patterns

    - by Mpentecost
    I am running a Squid (3.1) cache in front of Django. The content of the site does not change very often, so Squid gives our backend much-needed breathing room. Currently, this is the refresh pattern we are using to cache the content: refresh_pattern . 60 100% 60. We basically want to cache everything for at least an hour (and only an hour) before Squid re-validates the content. My question is about the "100%" parameter, which sets the lm-factor. I'm not sure that setting it to 100% is doing what we want. The assumption was that 100% would ensure that objects stay in the cache for the max cache time. Is this an incorrect assumption? What are the best practices one should follow when setting up a refresh pattern like this?
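
    For background (squid.conf semantics, paraphrased from the Squid docs): the line reads refresh_pattern regex min percent max, with min and max in minutes. The lm-factor percentage only matters for responses that have a Last-Modified header but no explicit expiry, and only for ages between min and max: the object is fresh while its age is under (time since Last-Modified) x percent. With min and max both 60, that window is empty, so the 100% is inert here:

        # Fresh for exactly one hour regardless of Last-Modified:
        refresh_pattern . 60 100% 60

        # lm-factor actually in play: fresh for at least 15 minutes, at most
        # a day, and in between fresh for 50% of the object's observed age.
        refresh_pattern . 15 50% 1440

    In short, for a hard one-hour lifetime the existing line already does what's wanted; the percent value just never gets consulted.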

    Read the article

  • Active Directory LDS Structure Best Practices

    - by Mark A Johnson
    I'm looking for guidance on structuring an LDS directory, and I'm finding only best practices targeted at Domain Services. Does anyone here have references for the hierarchical structure to set up in the directory? I'm interested in small items, like whether to name the top node with "DC" tags or "O" tags, etc. E.g., should it be "DC=CompanyName,DC=local" when we're not actually using any specific domain? Shouldn't it be "O=CompanyName"? And I'm interested in whether this question is even worth considering.
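
    For illustration only (AD LDS accepts either form when you create an application directory partition; the names are hypothetical):

        Domain-component style, echoing DNS even without a real domain:
            DC=CompanyName,DC=local
                OU=People
                OU=Groups

        X.500 style, with no implied DNS name:
            O=CompanyName
                OU=People
                OU=Groups

    LDS itself doesn't care: the partition name is just a distinguished name. The usual argument for the DC= form is consistency with tooling and examples written for Domain Services, while O= makes it obvious at a glance that the partition isn't backed by a real domain.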

    Read the article

  • Best practices to avoid Jenkins error: sudo: no tty present and no askpass program specified

    - by s g
    When running any sudo command from Jenkins I get the following error: sudo: no tty present and no askpass program specified. I understand that I can solve this by adding a NOPASSWD entry to my /etc/sudoers file, which will allow user jenkins to run commands without needing a password. I can add an entry like this: %jenkins ALL=(ALL)NOPASSWD:/home/vts_share/test/sudotest.sh ...but this leads to the following issue: how to avoid specifying the full path in the sudoers file? I can add an entry like this: %jenkins ALL=NOPASSWD: ALL ...but this allows user jenkins to skip the password prompt for all commands, which seems a bit unsafe. I'm just curious what my options are here, and whether there are any best practices I should consider.
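
    One middle ground, sketched with placeholder paths: whitelist a root-owned directory of scripts via a command alias, so adding a new job means dropping in a script rather than editing sudoers, while jenkins still can't run arbitrary commands:

        # /etc/sudoers.d/jenkins  (edit with: visudo -f /etc/sudoers.d/jenkins)
        Cmnd_Alias JENKINS_CMDS = /opt/jenkins-scripts/*
        %jenkins ALL=(root) NOPASSWD: JENKINS_CMDS

    The directory and the scripts must be writable only by root: a whitelisted script that the jenkins user can edit is equivalent to NOPASSWD: ALL.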

    Read the article

  • Roaming Profiles: Best Practices

    - by Noah Clark
    I want to set up roaming profiles for about 50 users. What is the best way to go about doing this? What are the best practices? I've read about desktops/My Documents getting too big; how big is too big? We have a few users who keep a lot of media on their machines to listen to throughout the day. I would imagine they have a few gigs of MP3s in their My Documents folder. How do you deal with this? Thanks!

    Read the article

  • Web Farm Application deployment best practices

    - by rauts
    Hi all, we have a web farm which hosts multiple ASP.NET applications; we typically have 4 servers in the farm. The dilemma I'm facing concerns the capacity of the farm. Let's say I currently have 200 apps in total. Should I deploy all 200 apps on all 4 servers (i.e. all the servers in the farm are identical), or should I split the applications between 2 sets of servers and create 2 smaller farms, so that I can then manage each application based on its criticality, usage, etc.? Any best practices in this area would be highly appreciated. Thanks, Rauts

    Read the article

  • Any practices, samples for ERD?

    - by just_name
    Q: I want websites or books just for training on ERD and normalization. I'm after lots of samples, practice exercises and case studies with recommended answers, to strengthen myself in database design and avoid the poor database designs I've made. Note: I don't need books that explain the concepts; what I need is practice material, i.e. examples and case studies with recommended answers. Thanks in advance.

    Read the article

  • Best practices and Design Patterns for iPhone forms?

    - by cannyboy
    Part of the app I'm making requires the user to fill in a multi-page form, the contents of which will be saved locally (perhaps using Core Data). Are there any best practices for this? The form just includes text fields. I guess the options are UITextFields, or perhaps a UIWebView with the fields as part of an HTML form. Are there any best practices or design patterns which are good for this kind of thing?

    Read the article

  • Securing SSH/SFTP and best practices on security

    - by MultiformeIngegno
    I'm on a fresh VPS with Ubuntu Server 12.04, and I wanted to ask about the good practices to apply to enhance security over a stock Ubuntu server. This is what I did up to now: I added Google Authenticator to SSH, then I created a new user (which I'll use instead of 'root' for SSH and SFTP access) and added it to my /etc/sudoers list below 'root', so now it's: # User privilege specification root ALL=(ALL:ALL) ALL new_user ALL=(ALL:ALL) ALL Then I edited sshd_config, set PermitRootLogin to 'no', and restarted the ssh service. Is this OK? There are a few things I'd like to ask, though: 1) What's the point of adding a new (sudoer) user while the root user still exists (OK, it can't log in over SSH with root privileges, but it's still there)? 2) System files are owned by 'root', and I want to use my new_user for SFTP access, but with it I can't edit those files! Should I mass-chmod them so that new_user has write permissions too? What's the good practice on this? Thanks in advance; I hope you'll tell me if I did something wrong and/or other ways to secure the system. :)
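
    On question 2, the usual alternative to a mass chmod is a dedicated group on just the tree you need to edit. A sketch, with /var/www standing in for whatever new_user actually maintains:

        # Create a group and put the SFTP user in it.
        sudo addgroup webadmin
        sudo usermod -aG webadmin new_user

        # Grant the group write access to that tree only, not to system files.
        sudo chgrp -R webadmin /var/www
        sudo chmod -R g+w /var/www

        # New files created under the tree keep the group (setgid on dirs).
        sudo find /var/www -type d -exec chmod g+s {} +

    Loosening permissions on actual system files (anything under /etc, /usr, etc.) to make SFTP convenient is the thing to avoid; for those, sudo plus an editor over SSH is the safer route.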

    Read the article

  • Enterprise class storage best practices

    - by churnd
    One thing that has always perplexed me is storage best practices. Filesystems brag about how they can be petabytes or exabytes in size, yet I do not know many sysadmins who are willing to let a single volume grow over several terabytes. I do know the primary reason behind this: how long it would take to rebuild the array should a drive fail. The more drives in a single LUN, the longer this takes, and the greater your risk of losing another drive while the rebuild is taking place. Then there are usage reasons: admins will carve out a LUN based on how much space they think needs to be allocated to the project. It seems more practical to me for the LUN to be one large array and to use quotas. I understand this wouldn't satisfy every requirement (iSCSI), but I see a lot of NAS systems (NFS) managed this way. I also understand that the underlying volumes can be grown/shrunk as needed quite easily, but wouldn't it be less "risky" to use quotas rather than manipulating volumes and bringing possible data loss into the equation? There may be some other reasons I'm missing, so please enlighten me. Can we not expect filesystems to ever be so large? Are we waiting for the hardware to get faster to cut down on rebuild times?

    Read the article

  • Best Practices for adding Exchange Archive to current 3 server setup

    - by ADquestion
    I'm looking to add an archive database (which I know is just a mailbox database) to our current Exchange 2010 environment. I have done this in the past at a previous job, but we had a simpler setup than at this current job. I've been trying to find some best practices to make sure it's set up in an ideal way, but so far I'm not finding the details I would prefer. Hoping someone on here can give me a few pointers. Currently we have a 3-server setup: Server1, Server2 and Server3, with three databases, DB1, DB2 and DB3, and a DAG set up between them. Server1 holds DB1 and DB3 (DB1 passive, DB3 active). Server2 holds DB1 and DB2 (both active). Server3 holds DB2 and DB3 (both passive). All three servers are virtual (VMware), and each one is set up identically to the others as follows:
    C:\ 60GB - OS
    E:\ 600GB - DB (currently only 90GB used, pointing to the datastore just for Server2)
    F:\ 200GB - Log (2GB used, pointing to the same datastore as above)
    G:\ 200GB - Restore (0 used, pointing to the same datastore as above)
    The drives are all set to thin provisioning, and it looks as though I have 600GB of available space. They have not been on Exchange that long and only have about 70GB worth of PSTs to import back in that will be going to the archive database, plus anything older than 2 years from their current inboxes that will be moved into there. I was considering placing the archive DB on the E:\ drive of Server3 (only), like the current DBs, but wasn't sure if that was acceptable. I don't plan on setting the archive DB up with the DAG; I just plan on having it as a single repository for older emails and manually backing it up every now and then. If anyone has any suggestions on this I would appreciate the input. I've done it on a slightly smaller scale before and it worked well, but I like to think it through before pulling the trigger, especially at a new job. :) Thanks again!
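
    If it helps, creating a standalone (non-DAG) mailbox database is a one-liner in the Exchange 2010 Management Shell. A sketch only, with the database name and paths as placeholders matching the drive layout above (the -ArchiveDatabase switch assumes Exchange 2010 SP1 or later):

        New-MailboxDatabase -Name "ArchiveDB" -Server Server3 `
            -EdbFilePath "E:\ArchiveDB\ArchiveDB.edb" `
            -LogFolderPath "F:\ArchiveDB"
        Mount-Database "ArchiveDB"

        # Then point a user's personal archive at it:
        Enable-Mailbox -Identity someuser -Archive -ArchiveDatabase "ArchiveDB"

    Since the database has no DAG copies, its placement on a single server's E:\ drive is exactly the single point of failure being accepted; the manual backups carry the whole recovery story.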

    Read the article
