Search Results

Search found 2170 results on 87 pages for 'earning potential'.

Page 17/87 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • set a default saved value in a selection box in a HABTM model

    - by vincent low
    Here is the problem in my system. The system allows a user to add an event, which includes a date, time and place. The user can also choose which other users the event is shared with ("Event sharing to"). I have the sharing working: when a user who was chosen logs in, he is able to view that particular event. The problem is that when a user adds an event, he must also choose his own name in the select box; if he does not, he is unable to read the event he has just created. So I need to save a default value to my model: whether or not the user chooses anyone to share the event with, it should also save the data against his own user id. Here is my code for the select box. What can I edit so that it always saves the user's own value as well? $form->input('User', array('label' => 'Select Related Potential', 'options' => $users, /* 'id' => 'user', */ 'style' => 'width:250px;height:100px' /* , 'selected' => $ownUserId */ )); I tried to solve it by adding one more input to add.ctp, but then the permission is only set for the user who created the event, and the other users I chose are unable to read it: $form->input('User', array('label' => 'Select Related Potential', 'options' => $users, /* 'id' => 'user', */ 'style' => 'width:250px;height:100px', 'selected' => $ownUserId)); $form->input('User', array('type' => 'hidden', 'value' => $ownUserId));

    Read the article

  • Work for huge company or small company that makes products for huge company?

    - by TheGambler
    I'm facing a career decision. I currently work for a huge, huge, huge company (this should bring 3-4 companies to mind) as a programmer. I work on a really big (3 million lines of code) J2EE web application. I've been out of college for 2 years and I pretty much know that big corporate life isn't for me. There is a career opportunity to work for a small company that has a lot of funding and huge potential to sell their main project to this huge company I work for. This company makes kiosk software and works with a nearby kiosk software company. The deal isn't done, but I would come straight in to work on this project. This would be more of an interactive media position and I would do the development in Flash/ActionScript. I personally think this will be more interesting work than what I do now. This company does have other clients right now, just not on the scale of this potential huge client. I was told that my initial salary would be equal to what I'm making now, but if we land this client it could go up a lot more. I have a wife and child who should and will play a part in my decision about whether to take this job. We do, though, have family we could live with if crap hit the fan, until we got back on our feet. So with the information provided, does this sound like a good opportunity and should I take it? This is a huge decision and the reason I'm asking the question here is because there are a lot of people here who have probably faced this decision before.

    Read the article

  • Objective-C - Releasing objects passed as parameters

    - by NobleK
    Ok, here goes. Being a Java developer I'm still struggling with the memory management in Objective-C. I have all the basics covered, but once in a while I encounter a challenge. What I want to do is something which in Java would look like this: MyObject myObject = new MyObject(new MyParameterObject()); The constructor of the MyObject class takes a parameter of type MyParameterObject which I instantiate on the fly. In Objective-C I tried to do this using the following code: MyObject *myObject = [[MyObject alloc] init:[[MyParameterObject alloc] init]]; However, running the Build and Analyze tool this gives me a "Potential leak of an object" warning for the MyParameterObject, which indeed occurs when I test it using Instruments. I do understand why this happens, since I am taking ownership of the object with the alloc method and not relinquishing it; I just don't know the correct way of doing it. I tried using MyObject *myObject = [[MyObject alloc] init:[[[MyParameterObject alloc] init] autorelease]]; but then the Analyze tool told me that "Object sent -autorelease too many times". I could solve the issue by modifying the init method of MyParameterObject to say return [self autorelease]; instead of just return self;. Analyze still warns about a potential leak, but it doesn't actually occur. However I believe that this approach violates the convention for managing memory in Objective-C and I really want to do it the right way. Thanx in advance.

    Read the article

  • designing an ASP.NET MVC partial view - showing user choices within a large set of choices

    - by p.campbell
    Consider a partial view whose job is to render markup for a pizza order. The desire is to reuse this partial view in the Create, Details, and Update views. It will always be passed an IEnumerable<Topping>, and output a multitude of checkboxes. There are lots... maybe 40 in all (yes, that might smell). A-OK so far. Problem The question is around how to include the user's choices on the Details and Update views. From the datastore, we've got a List<ChosenTopping>. The goal is to have each checkbox set to true for each chosen topping. What's the easiest to read, or most maintainable way to achieve this? Potential Solutions Create a ViewModel with the List<Topping> and the List<ChosenTopping>. Write out the checkboxes as per normal. While writing each, check whether the ToppingID exists in the list of ChosenTopping. Create a new ViewModel that's a hybrid of both. Perhaps call it DisplayTopping or similar. It would have the properties ID, Name and IsUserChosen. The respective controller methods for Create, Update, and Details would have to create this new collection with respect to the user's choices as they see fit. The Create controller method would basically set all to false so that it appears to be a blank slate. The real application isn't pizza, and the organization is a bit different from the fakeshot, but the concept is the same. Is it wise to reuse the control for the 3 different scenarios? How better can you display the list of options + the user's current choices? Would you use jQuery instead to show the user selections? Any other thoughts on the potential smell of splashing up a whole bunch of checkboxes?

    Read the article

  • Deploying DotNetNuke and separate ASP.NET Application together - Possible Issues?

    - by TheTXI
    I am posting this in a proactive attempt to head off any potential problems which could arise from this setup. The situation is that we are developing an ASP.NET application for a client which will handle the online ordering from their customers. This application is going to be using the same database that their current WinForms application uses (no real issue here). At the same time we are developing a new front-end website for them using DotNetNuke. The DotNetNuke app will simply be linking to the ASP.NET application for the customers to submit their orders (no need for them to communicate back and forth, etc.). The plan is to host both applications on the same box at the client location. What I am looking for are potential problems or setup tips which would prevent possible conflict between the two apps (web.config conflicts, etc.). Is there a problem with having both hosted in the same location, how should IIS be set up, etc.? If there are also any external resources available which could address this, please feel free to link them as well.

    Read the article

  • Good ways to earn income as a self employed developer

    - by nullptr
    I was just wondering if people could share their experiences and ideas about generating / earning income from a software product or service they have personally developed. To me this seems like a good way to earn a living while doing what we love (programming) and working on projects and problems which interest us, i.e. NOT boring bank or marketing software, etc., 9-5 all week... Some ideas I have are things like web 2.0 style sites (Facebook, YouTube, Twitter, Digg) etc... - These can be very, very profitable as we all know, but can take years to take off. Are there ways to survive until/if this does happen? Mobile applications. iPhone, Google Android and the new upcoming Nintendo DS app store. These have good potential to make it easy to find a market for your application and make selling it easy. Shareware/PC software. A bit 80's and 90's, and you kind of need to be a salesman/marketer to sell it, but it's the only other thing I can think of. Also, I'm not talking about doing freelance work. I'm only interested in ideas you can come up with and develop yourself (not other people's ideas or problems which you are paid to develop). Things that a sole developer, or at most 2 developers, could work on and that have good potential for high returns on investment (in terms of time) would be great. PS, I wish I'd thought of stackoverflow!

    Read the article

  • j_security_check to SSO in different module under Oracle App Server?

    - by thebearinboulder
    I have an existing J2EE application running on Oracle App Server. It is targeted towards paying customers, so the content is secured and an SSO module properly intercepts all requests for secured content. Now the company is adding an unbranded public-facing module with a number of unsecured pages. At one point the user is expected to register for a free account and log in to proceed further. Think doctors adding a public-facing site with information for potential patients, or lawyers adding a public-facing site with information for potential clients. There's some information on the session, and the usual approach would be to authenticate the user, persist the session information using the now-known user id, invalidate the existing session (to prevent certain types of attacks), then reload the session information before returning to the user. I can't just persist it under the session id since that's about to change. The glitch is that the existing application already has an SSO module and I get a 404 error every time I try to direct to j_security_check. I've tried that, /sso/j_security_check, even http://localhost/sso/j_security_check, all without success. I noticed that an earlier question said that Tomcat requires access to a secured page before j_security_check is even visible. I don't know if that's the case with Oracle AS. Ideas? Or is the best approach to continue arguing that we have a different user base so it would be better to handle authentication in our own module anyway?

    Read the article

  • Loading specific files from arbitrary directories?

    - by Haydn V. Harach
    I want to load foo.txt. foo.txt might exist in the data/bar/ directory, or it might exist in the data/New Folder/ directory. There might be a different foo.txt in both of these directories, in which case I would want to either load one and ignore the other according to some order that I've sorted the directories by (perhaps manually, perhaps by date of creation), or else load them both and combine the results somehow. The latter (combining the results of both/all foo.txt files) is circumstantial and beyond the scope of this question, but something I want to be able to do in the future. I'm using SDL and boost::filesystem. I want to keep my list of dependencies as small as possible, and as cross-platform as possible. I'm guessing that my best bet would be to get a list of every directory (within the data/ folder), sort/filter this list, then when I go to load foo.txt, I search for it in each potential directory? This sounds like it would be very inefficient, if I have dozens of potential directories to search through every time. What's the best way to go about accomplishing this? Bonus: What if I want some of the directories to be archives? ie. considering both data/foo/ and data/bar.zip to both be valid, and pull foobar.txt from either one without caring.
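    A minimal C++/boost::filesystem sketch of the lookup described above - the alphabetical priority ordering, the function names, and the up-front directory scan are assumptions for illustration, and the archive (.zip) case is not covered:

        #include <boost/filesystem.hpp>
        #include <algorithm>
        #include <vector>
        #include <string>

        namespace fs = boost::filesystem;

        // Collect the immediate sub-directories of data/ once, sort them into a
        // priority order (alphabetical here as a placeholder), and reuse the list
        // for every lookup instead of re-scanning the disk each time.
        std::vector<fs::path> collectSearchPaths(const fs::path& dataRoot) {
            std::vector<fs::path> dirs;
            for (fs::directory_iterator it(dataRoot), end; it != end; ++it)
                if (fs::is_directory(it->status()))
                    dirs.push_back(it->path());
            std::sort(dirs.begin(), dirs.end()); // swap in date/manual ordering as needed
            return dirs;
        }

        // Return the first directory (in priority order) that contains the file,
        // or an empty path if none of them do.
        fs::path findFile(const std::vector<fs::path>& searchPaths, const std::string& name) {
            for (std::vector<fs::path>::const_iterator it = searchPaths.begin();
                 it != searchPaths.end(); ++it) {
                fs::path candidate = *it / name;
                if (fs::exists(candidate))
                    return candidate;
            }
            return fs::path();
        }

    With the directory list cached, each lookup only touches the handful of candidate paths rather than walking the whole data/ tree again.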

    Read the article

  • Sorting Algorithms

    - by MarkPearl
    General Every time I go back to university I find myself wading through sorting algorithms and their implementation in C++. Up to now I haven’t really appreciated their true value. However as I discovered this last week with Dictionaries in C# – having a knowledge of some basic programming principles can greatly improve the performance of a system and make one think twice about how to tackle a problem. I’m going to cover briefly in this post the following: Selection Sort, Insertion Sort, Shellsort, Quicksort, Mergesort, Heapsort (not complete). Selection Sort Array based selection sort is a simple approach to sorting an unsorted array. Simply put, it repeats two basic steps to achieve a sorted collection. It starts with a collection of data and repeatedly parses it, each time sorting out one element and reducing the size of the next iteration of parsed data by one. So the first iteration would go something like this… Go through the entire array of data and find the lowest value. Place the value at the front of the array. The second iteration would go something like this… Go through the array from position two (position one has already been sorted with the smallest value) and find the next lowest value in the array. Place the value at the second position in the array. This process would be repeated until the entire array had been sorted. A positive about selection sort is that it does not make many item movements. In fact, in a worst case scenario every item is only moved once. Selection sort is however a comparison intensive sort. If you had 10 items in a collection, just to parse the collection you would have to look at 10+9+8+7+6+5+4+3+2=54 elements (45 comparisons in total), regardless of how sorted the collection was to start with. If you think about it, if you applied selection sort to a collection already sorted, you would still perform relatively the same number of iterations as if it was not sorted at all. Many of the following algorithms try to reduce the number of comparisons if the list is already sorted – leaving one with a best case and worst case scenario for comparisons. Likewise different approaches have different levels of item movement. Depending on which is more expensive – a comparison or an item move – one may give priority to one approach over another. Insertion Sort Insertion sort tries to reduce the number of key comparisons it performs compared to selection sort by not “doing anything” if things are sorted. Assume you had a collection of numbers in the following order… 10 18 25 30 23 17 45 35 There are 8 elements in the list. If we were to start at the front of the list – 10 18 25 & 30 are already sorted. Element 5 (23) however is smaller than element 4 (30) and so needs to be repositioned. We do this by copying the value at element 5 to a temporary holder, and then begin shifting the elements before it up one. So… Element 5 would be copied to a temporary holder 10 18 25 30 23 17 45 35 – T 23 Element 4 would shift to Element 5 10 18 25 30 30 17 45 35 – T 23 Element 3 would shift to Element 4 10 18 25 25 30 17 45 35 – T 23 Element 2 (18) is smaller than the temporary holder so we put the temporary holder value into Element 3. 10 18 23 25 30 17 45 35 – T 23   We now have a sorted list up to element 5. And so we would repeat the same process by moving element 6 to a temporary value and then shifting everything up by one from element 2 to element 5. 
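
    A minimal C++ sketch of the two array-based sorts described above, assuming a plain std::vector<int>; this is an illustration of the steps in the post, not the post's original code:

        #include <vector>
        #include <utility>

        // Selection sort: repeatedly find the smallest remaining element and
        // swap it to the front of the unsorted region. Few moves, many comparisons.
        void selectionSort(std::vector<int>& a) {
            for (std::size_t i = 0; i + 1 < a.size(); ++i) {
                std::size_t smallest = i;
                for (std::size_t j = i + 1; j < a.size(); ++j)
                    if (a[j] < a[smallest]) smallest = j;
                std::swap(a[i], a[smallest]);
            }
        }

        // Insertion sort: grow a sorted prefix, shifting larger elements up one
        // position until the held value (the "temporary holder" above) fits.
        void insertionSort(std::vector<int>& a) {
            for (std::size_t i = 1; i < a.size(); ++i) {
                int held = a[i];
                std::size_t j = i;
                while (j > 0 && a[j - 1] > held) {
                    a[j] = a[j - 1];   // shift element up one
                    --j;
                }
                a[j] = held;
            }
        }
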
As you can see, one major setback for this technique is the shifting of values up by one – this is because up to now we have been considering the collection to be an array. If however the collection was a linked list, we would not need to shift values up, but merely remove the link from the unsorted value and “reinsert” it in a sorted position, which would reduce the number of transactions performed on the collection. So… insertion sort seems to perform better than selection sort – however an implementation is slightly more complicated. This is typical with most sorting algorithms – generally, greater performance leads to greater complexity. Also, insertion sort performs better if a collection of data is already sorted. If for instance you were handed a sorted collection of size n, then only n comparisons would need to be performed to verify that it is sorted. It’s important to note that insertion sort (array based) performs a number of item moves – every time an item is “out of place” several items before it get shifted up. Shellsort – Diminishing Increment Sort So up to now we have covered Selection Sort & Insertion Sort. Selection Sort makes many comparisons and insertion sort (with an array) has the potential of making many item movements. Shellsort is an approach that takes the normal insertion sort and tries to reduce the number of item movements. In Shellsort, elements in a collection are viewed as sub-collections of a particular size. Each sub-collection is sorted so that the elements that are far apart move closer to their final position. Suppose we had a collection of 15 elements… 10 20 15 45 36 48 7 60 18 50 2 19 43 30 55 First we may view the collection as 7 sub-collections and sort each sublist, let’s say at intervals of 7 10 60 55 – 20 18 – 15 50 – 45 2 – 36 19 – 48 43 – 7 30 10 55 60 – 18 20 – 15 50 – 2 45 – 19 36 – 43 48 – 7 30 (Sorted) We then sort each sublist at a smaller interval – let’s say 4 10 55 60 18 – 20 15 50 2 – 45 19 36 43 – 48 7 30 10 18 55 60 – 2 15 20 50 – 19 36 43 45 – 7 30 48 (Sorted) We then sort elements at a distance of 1 (i.e. we apply a normal insertion sort) 10 18 55 60 2 15 20 50 19 36 43 45 7 30 48 2 7 10 15 18 19 20 30 36 43 45 48 50 55 60 (Sorted) The important thing with shellsort is deciding on the increment sequence of each sub-collection. From what I can tell, there isn’t any definitive method and depending on the order of your elements, different increment sequences may perform better than others. There are however certain increment sequences that you may want to avoid. An even based increment sequence (e.g. 2 4 8 16 32 …) should typically be avoided because it does not allow for even elements to be compared with odd elements until the final sort phase – which in a way would negate many of the benefits of using sub-collections. The performance of Shellsort, in terms of the number of comparisons and item movements, is hard to determine, however it is considered to be considerably better than the normal insertion sort. Quicksort Quicksort uses a divide and conquer approach to sort a collection of items. The collection is divided into two sub-collections – and the two sub-collections are sorted and combined into one list in such a way that the combined list is sorted. The general pseudo code for the algorithm is below… Divide the collection into two sub-collections Quicksort the lower sub-collection Quicksort the upper sub-collection Combine the lower & upper sub-collections together As hinted at above, quicksort uses recursion in its implementation. 
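
    A minimal C++ sketch of Shellsort as described above; the gap sequence here is Knuth's 1, 4, 13, 40, ... (3h+1), chosen as one common sequence that avoids the all-even problem the post warns about - it is not the only valid choice:

        #include <vector>

        // Shellsort: insertion-sort elements that are 'gap' apart, shrinking the gap
        // each pass so items move close to their final position in big jumps first.
        void shellSort(std::vector<int>& a) {
            std::size_t n = a.size();
            std::size_t gap = 1;
            while (gap < n / 3) gap = 3 * gap + 1;   // 1, 4, 13, 40, ...
            for (; gap > 0; gap /= 3) {
                for (std::size_t i = gap; i < n; ++i) {
                    int held = a[i];
                    std::size_t j = i;
                    while (j >= gap && a[j - gap] > held) {
                        a[j] = a[j - gap];   // shift within this gap-separated sublist
                        j -= gap;
                    }
                    a[j] = held;
                }
            }
        }
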
The real trick with quicksort is to get the lower and upper sub-collections to be of equal size. The size of a sub-collection is determined by what value the pivot is. Once a pivot is determined, one would partition into sub-collections and then repeat the process on each sub-collection until you reach the base case. With quicksort, the work is done when dividing the sub-collections into lower & upper collections. The actual combining of the lower & upper sub-collections at the end is relatively simple since every element in the lower sub-collection is smaller than the smallest element in the upper sub-collection. Mergesort With quicksort, the average-case complexity is O(n log2 n), however the worst case complexity is still O(n^2). Mergesort improves on quicksort by always having a complexity of O(n log2 n) regardless of the best or worst case. So how does it do this? Mergesort makes use of the divide and conquer approach to partition a collection into two sub-collections. It then sorts each sub-collection and combines the sorted sub-collections into one sorted collection. The general algorithm for mergesort is as follows… Divide the collection into two sub-collections Mergesort the first sub-collection Mergesort the second sub-collection Merge the first sub-collection and the second sub-collection As you can see.. it still pretty much looks like quicksort – so let’s see where it differs… Firstly, mergesort differs from quicksort in how it partitions the sub-collections. Instead of having a pivot – merge sort partitions each sub-collection based on size so that the first and second sub-collections are of relatively the same size. This dividing keeps getting repeated until the sub-collections are the size of a single element. If a sub-collection is one element in size – it is now sorted! So the trick is how do we put all these sub-collections together so that they maintain their sorted order. Sorted sub-collections are merged into a sorted collection by comparing the elements of the sub-collection and then adjusting the sorted collection. Let’s have a look at a few examples… Assume 2 sub-collections with 1 element each 10 & 20 Compare the first element of the first sub-collection with the first element of the second sub-collection. Take the smaller of the two and place it as the first element in the sorted collection. In this scenario 10 is smaller than 20 so 10 is taken from sub-collection 1 leaving that sub-collection empty, which means by default the next smallest element is in sub-collection 2 (20). So the sorted collection would be 10 20 Let’s assume 2 sub-collections with 2 elements each 10 20 & 15 19 So… again we would Compare 10 with 15 – 10 is the winner so we add it to our sorted collection (10) leaving us with 20 & 15 19 Compare 20 with 15 – 15 is the winner so we add it to our sorted collection (10 15) leaving us with 20 & 19 Compare 20 with 19 – 19 is the winner so we add it to our sorted collection (10 15 19) leaving us with 20 & _. 20 is by default the winner so our sorted collection is 10 15 19 20. Make sense? Heapsort (still needs to be completed) So by now I am tired of sorting algorithms and trying to remember why they were so important. I think every year I go through this stuff I wonder to myself why we are made to learn about selection sort and insertion sort if they are so bad – why didn’t we just skip to Mergesort & Quicksort. 
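
    A minimal C++ sketch of the merge step walked through above, plus the size-based split that drives it; the function names and the half-open [lo, hi) ranges are illustrative choices, not the post's code:

        #include <vector>

        // Merge two sorted halves a[lo..mid) and a[mid..hi) into one sorted run,
        // always taking the smaller of the two front elements (as in the examples).
        static void merge(std::vector<int>& a, std::size_t lo, std::size_t mid, std::size_t hi) {
            std::vector<int> merged;
            merged.reserve(hi - lo);
            std::size_t i = lo, j = mid;
            while (i < mid && j < hi)
                merged.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
            while (i < mid) merged.push_back(a[i++]);   // drain whichever half remains
            while (j < hi)  merged.push_back(a[j++]);
            for (std::size_t k = 0; k < merged.size(); ++k)
                a[lo + k] = merged[k];
        }

        // Mergesort: split by size (no pivot), sort each half, then merge them.
        // Call as mergeSort(v, 0, v.size()).
        void mergeSort(std::vector<int>& a, std::size_t lo, std::size_t hi) {
            if (hi - lo < 2) return;               // one element is already sorted
            std::size_t mid = lo + (hi - lo) / 2;
            mergeSort(a, lo, mid);
            mergeSort(a, mid, hi);
            merge(a, lo, mid, hi);
        }
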
I guess the only explanation I have for this is that sometimes you learn things so that you can implement them in future – and other times you learn things so that you know it isn’t the best way of implementing things and that you don’t need to implement it in future. Anyhow… luckily this is going to be the last one of my sorts for today. The first step in heapsort is to convert a collection of data into a heap. After the data is converted into a heap, sorting begins… So what is the definition of a heap? If we have to convert a collection of data into a heap, how do we know when it is a heap and when it is not? The definition of a heap is as follows: A heap is a list in which each element contains a key, such that the key in the element at position k in the list (positions numbered from 1) is at least as large as the key in the element at position 2k (if it exists) and position 2k + 1 (if it exists). Does that make sense? At first glance I’m thinking what the heck??? But then after re-reading my notes I see that we are doing something different – up to now we have really looked at data as an array or sequential collection of data that we need to sort – a heap represents data in a slightly different way – although the data is stored in a sequential collection, for a sequential collection of data to be a valid heap it must be “semi sorted”. Let me try and explain a bit further with an example… Example 1 of Potential Heap Data Assume we had a collection of numbers as follows 1[1] 2[2] 3[3] 4[4] 5[5] 6[6] For this to be a valid heap, the element with value 1 at position [1] needs to be greater than or equal to the elements at position [2] (2k) and position [3] (2k + 1). It is not, so in the above example the collection of numbers is not a valid heap. Example 2 of Potential Heap Data Let’s look at another collection of numbers as follows 6[1] 5[2] 4[3] 3[4] 2[5] 1[6] Is this a valid heap? Well… the element with the value 6 at position [1] must be greater than or equal to the elements at position [2] and position [3]. Is 6 > 5 and 6 > 4? Yes it is. Let’s look at element 5 at position [2]. It must be greater than or equal to the values at [4] & [5]. Is 5 > 3 and 5 > 2? Yes it is. If you continued to examine this second collection of data you would find that it is a valid heap based on the definition above.
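
    A minimal C++ sketch of checking the heap property described above; the vector is treated as 0-indexed, so the children of index i sit at 2i + 1 and 2i + 2, which correspond to the 1-indexed positions 2k and 2k + 1 used in the examples:

        #include <vector>

        // Check the (max-)heap property: every element must be at least as large
        // as both of its children, when they exist.
        bool isMaxHeap(const std::vector<int>& a) {
            for (std::size_t i = 0; i < a.size(); ++i) {
                std::size_t left = 2 * i + 1, right = 2 * i + 2;
                if (left  < a.size() && a[i] < a[left])  return false;
                if (right < a.size() && a[i] < a[right]) return false;
            }
            return true;
        }

        // With this check, {6, 5, 4, 3, 2, 1} is a valid heap and {1, 2, 3, 4, 5, 6} is not,
        // matching the two examples above.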

    Read the article

  • Get to Know a Candidate (2 of 25): Merlin Miller&ndash;American Third Position Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post for whom I am voting. Information sourced from Wikipedia. Meet Merlin Miller of the American Third Position Party In addition to being the American Third Position Party nominee, Miller is an independent film maker.  He is a graduate of West Point and has commanded two units in the United States Army.  After military service he worked as an industrial engineering manager for Michelin Tire.  He learned about film making by earning an MFA from the University of Southern California. Mr. Miller is on the ballot in: CO, NJ, and TN. In the 2000s, Miller began taking an increasingly paleoconservative political stance.  He claimed that Hollywood “surreptitiously seeks to destroy our European-American heritage and our Christian-based traditional values, and replace them with values that debase these traditional values and elevate minorities as paragons of virtue and wisdom....Today’s motion pictures, in concert with other forms of mass media entertainment, are the greatest enemies to the well-being of our progeny and the future of our country.” Miller states that he "doesn't like" interracial marriage; however, he does not support outlawing interracial marriage, either.  Miller has denied being anti-Semitic, instead claiming that he merely opposes "favoritism" granted to Jews in the film industry.  He also opposes illegal immigration and what he refers to as "wide open borders" in the United States. The American Third Position Party (A3P) is a third positionist American political party which promotes white supremacy.  It was officially launched in January 2010 (although in November 2009 it filed papers to get on a ballot in California), partially to channel the right-wing populist resentment engendered by the financial crisis of 2007–2010 and the policies of the Obama administration, and defines its principal mission as representing the political interests of white Americans. The party takes a strong stand against immigration and globalization, and strongly supports an anti-interventionist foreign policy. Although the party does not support labor unions, it does strongly support the labor rights of the American working class, on a platform of placing American workers first over illegal immigrant workers and the banning of overseas corporate relocation of American industry and technology. Third Position or Third Alternative refers to a revolutionary nationalist political position that emphasizes its opposition to both communism and capitalism. Advocates of Third Position politics typically present themselves as "beyond left and right", instead claiming to syncretize radical ideas from both ends of the political spectrum. Learn more about Merlin Miller and the American Third Position Party on Wikipedia.

    Read the article

  • NEW: Oracle Certification Exam Preparation Seminars

    - by Harold Green
    Hi Everyone, I am really excited about a new offering that we are announcing this week - Oracle Certification Exam Preparation Seminars. These are something that will make a big difference for many of you in your efforts to become certified and move your career forward. They are also something that has previously been available only to the limited number of customers who attended our annual conferences in San Francisco (Oracle OpenWorld and JavaOne), where they were very popular. These are the first in a series of offerings that we are releasing over the next few months. So for those of you either preparing for or considering Oracle certification - keep watching here on the blog, Facebook, Twitter and the Oracle Certification website for additional announcements related to our most popular certification areas. Details of the new Exam Preparation Seminars are found below: NEW: ORACLE CERTIFICATION EXAM PREPARATION SEMINARS Becoming Oracle certified is a great way to build your career, gain additional credibility and improve your earning power. We know that the decision to become certified is not trivial. Our surveys indicate that people consider their time investment a critical factor in their decision to become certified. Your time is important. In order to help candidates maximize the efficiency of their study time we are releasing a new series of video-based seminars called Exam Preparation Seminars. These seminars are patterned after the extremely popular Exam Cram sessions that until now have only been available at our annual customer conferences (Oracle OpenWorld and JavaOne). Beginning today they are available to anyone, anywhere as a part of this Exam Prep Seminar series. Features: Fast-paced, objective-by-objective review of the exam topics - led by top Oracle University instructors. 24/7 access through Oracle University's training on demand platform. Ability to re-watch all or part of the seminar. All the conveniences of video-based training: start, stop, fast-forward, skip, rewind, review. Tips that will help you better understand what you need to know to pass the exam. The Exam Preparation Seminars are meant to help anyone with a working knowledge of the technology get that extra boost to help them finalize their preparation, and will help anyone who wants a better understanding of the depth and breadth of the exam topics and objectives. Benefits: Save time by understanding what you should study. Makes you efficient because you will understand the breadth and depth of each of the exam topics. Helps you create a better, more efficient study plan. Improves your confidence in your skills and ability to pass the certification exam. Exam Preparation Seminars are available individually, or in convenient Value Packages (which include the Exam Preparation Seminar and an exam voucher which includes one free retake if you need it). Currently we are releasing two seminars - one for DBA SQL and one for DBA Administration I. Additional offerings are in process. Find out more: General WEB: Oracle Certification Exam Preparation Seminars VIDEO: Exam Preparation Seminars Promo (1:27) Oracle Database Administration I (11g, 10g) VIDEO: Instructor Introduction (1:08) VIDEO: Sample Video (2:16) Oracle Database SQL VIDEO: Instructor Introduction (1:08) VIDEO: Sample Video (2:16)

    Read the article

  • Best Advice Ever: Learn By Helping Others

    - by Argenis
    I remember when back in 2001 my friend and former SQL Server MVP Carlos Eduardo Rojas was busy earning his MVP street-cred in the NNTP forums, aka Newsgroups. I always thought he was playing the Sheriff, trying to put some order in a Wild Wild West town by trying to understand what these people were asking. He spent a lot of time doing this stuff – and I thought it was just plain crazy. After all, he was doing it for free. What was he gaining from all of that work? It was not until the advent of Twitter and #SQLHelp that I realized the real gain behind helping others. Forget about the glory and the laurels of others thanking you (and thinking you’re the best thing ever – ha!), or whatever award with whatever three letter acronym might be given to you. It’s about what you learn in the process of helping others. See, when you teach something, it’s usually at a fixed date and time, and on a specific topic. But helping others with their issues or general questions is something that goes on 24x7, on whatever topic under the sun. Just go look at sites like DBA.StackExchange.com, or the SQLServerCentral forums. It’s questions coming in literally non-stop from all corners of the world. And yet a lot of people are willing to help you, regardless of who you are, where you come from, or what time of day it is. And in my case, this process of helping others usually leads to me learning something new. Especially in those cases where the question isn’t really something I’m good at. The delicate part comes when you’re ready to give an answer, but you’re not sure. Oftentimes I’ll try to validate with Internet searches and what have you. Oftentimes I’ll throw in a question mark at the end of the answer, so as not to look authoritative, but rather suggestive. But as time passes by, you get more and more comfortable with that topic. And that’s the real gain.  I have done this for many years now on #SQLHelp, which is my preferred vehicle for providing assistance. I cannot tell you how much I’ve learned from it. By helping others, by watching others help. It’s all knowledge and experience you gain…and you might not be getting all that in your day job today. Such a thing, my dear reader, is invaluable. It’s what will differentiate your resume amongst a pack of resumes. It’s what will get you places. Take it from me - a guy who, like you, knew nothing about SQL Server.

    Read the article

  • Employer admits that its developers are underpaid and undervalued. Time to part ways?

    - by Psionic
    My employer recently posted an opening for a C# Developer with 3-5 years of experience. The requirements and expectations for the position were fair, up until the criteria for salary determination. It was stated clearly that compensation would depend ONLY on experience with C#, and that years of programming experience with other languages & frameworks would be considered irrelevant and not factored in. I brought up my concern with HR that good candidates would see this as a red flag and steer away. I attempted to explain that software development is about much more than specific languages, and that paying someone for their experience in a single language is a very shortsighted approach to hiring good developers (I'm telling this to the HR dept of a software company). The response: "We are tired of wasting time interviewing developers who expect 'big salaries' because they have lots of additional programming experience in languages other than what we require." The #1 issue here is that 'big salaries' = Market Rate. After some serious discussion, they essentially admitted that nobody at the company is paid near market rate for their skills, and there's nothing that can be done about it. The C-suite has the mentality that employees should only be paid for skills proven over years under their watch. Entry-level developers are picked up for less than $38K and may reach 50K after 3 years, which I'm assuming is around what they plan on offering candidates for the C# position. Another interesting discovery (not as relevant) - people 'promoted' to higher responsibilities do not get raises. The 'promotion' is considered an adjustment of the individuals' roles to better suit their 'strengths', which is what they're already being paid for. After hearing these hard truths straight from HR, I would assume that most people who are looking out for themselves would quickly begin searching for a new employer that has a better idea of what they're doing in the industry (this company fails in many other ways, but I don't want to write a book). Here is my dilemma however: This is the first official software development position I've held, for barely 1 year now. My previous position of 3 years was with a very small company where I performed many duties, among them software development (not in my official job description, but I tried very hard to make it so). I've identified local openings that I'm currently qualified for, most paying at least 50% more than I'm getting now. Question is, is it too soon for a jump? I am getting valuable experience in my current position, with no shortage of exciting projects. The work environment is very comfortable, and I'm told by many that I'm in the spotlight of the C-level guys for the stuff that I've been able to accomplish during my short time (for what that's worth). However, there is a clear opportunity cost to staying, knowing now with certainty that I will have to wait 3-5 years only to be capped at what I could potentially be earning elsewhere this year. I am also aware that 'job hopper' is a dangerous label to have, regardless of the reasons.

    Read the article

  • Pros/Cons of switching from Exchange to GMail

    - by Brent
    We are a medium-large non-profit company, with around 1000 staff and volunteers, and have been using MS Exchange (currently 2003) for our mail system for years. I recently attended a Google conference where they were positing that "Cloud computing is the way of the future", and encouraging us to switch from doing our own email with Exchange, to using GMail and Google Apps for everything. Additionally, one of our departments has been pushing from inside to do this transition within their own department, if not throughout the entire organization. I can definitely see some benefits - such as: Archive space - we never seem to have the space our users want, and of course, the more we get, the more we have to back up. OS Agnostic - Exchange is definitely built for Windows, and with Mac and Linux users on the rise, these users increasingly demand better tools / support. Google offers this. Better archiving - potential for e-discovery that doesn't exist in a practical way with our current setup. Switching would relieve us of a fair bit of server administration, give more options to our end users, and free up the server resources we are now using for Exchange. Our IT department wants to be perceived as providing up-to-date solutions to technical problems, and this change would definitely provide such an image. Google's infrastructure is obviously much more robust than ours, and they employ some of the world's best security and network experts. However, there are also some serious drawbacks: We would be essentially outsourcing one of our mission-critical systems to a 3rd party. The switch would inevitably involve Google Apps and perhaps more as well. That means we would have a lot more at the mercy of a single (potentially weak) password. (Is there a way to make this more secure using a password plus a physical key of some sort?) Our data would not remain under our roof - or even in our country (Canada). This obviously has plusses on the Disaster Recovery side, but I think there are potential negatives on the legal side. I can't imagine that somebody as large as Google would be as responsive as we would want with regard to non-critical issues such as tracing missing emails, etc. (not sure how much access we would have to basic mail logs - for instance). Can anyone help me evaluate this decision? What issues am I overlooking? What experiences have you had with this transition (or the opposite - Gmail to Exchange)? Can you add to the points I have already outlined?

    Read the article

  • Can I compile Passenger (mod_rails/mod_rack) to make a statically linked Apache httpd?

    - by Oleg
    I prefer to disable httpd dynamic module loading on my production server. I've been using mod_jk linked statically into httpd for quite a long time and it proved to be stable. Now I would like to add Ruby Passenger (mod_rails/mod_rack) to my httpd. I wonder if it is possible to link it statically into Apache httpd the same way too? (without producing a huge httpd) If it is, are there any potential pitfalls, safety or performance concerns having both mod_jk and mod_rails within the same executable? Thanks

    Read the article

  • Keep IIS7 Failed Request Tracing as a sysadmin only diagnostic tool?

    - by Kev
    I'm giving some of our customers the ability to manage their sites via IIS Feature Delegation and IIS Manager for Remote Administration. One feature I'm unsure about permitting access to is Failed Request Tracing for the following reasons: Customers will forget to turn it off The server will be taking a performance hit (especially if 500 sites all have it turned on) The server will become littered with old FRT's The potential to leak sensitive information about how the server is configured thus providing useful information to would-be intruders. Should we just keep this as a troubleshooting tool for our own admins?

    Read the article

  • Encode divx into iPhone-compatible MPEG4 while downloading?

    - by Jonathan
    Is there a way to encode a DivX file into an iPhone-compatible video file while the DivX is downloading/streaming? Further, is it possible to re-stream the resulting iPhone-compatible video to the iPhone with a minimal (say, 2 minute or so) delay? I've seen other questions about encoding iPhone-compatible videos but they don't quite address the streaming video source and potential of streaming output that I'm interested in.

    Read the article

  • Convert all short (8.3) paths to long paths in the registry?

    - by Mehrdad
    Running FSUtil 8Dot3Name Scan /v /s C: gives you a list of potential 8.3 paths in the registry that are referring to your file system, but is there any tool that can actually convert those paths to long paths in the registry? I do understand this could break some programs, but I have backups so I'd be willing to give it a try. (I tried to make a tool like this myself, but it's harder than it looks, because of false positives and all the varieties of ways the path could be embedded.)
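    I'm not aware of a ready-made tool either; as a minimal C++ sketch under the assumption you build your own, the core short-to-long conversion on Windows can use GetLongPathNameW - scanning and rewriting the registry values themselves, and filtering the false positives, is the hard part and is not shown here:

        #include <windows.h>
        #include <string>

        // Expand a short (8.3) path such as C:\PROGRA~1\...\app.exe to its long
        // form. Returns the input unchanged if the path does not resolve (for
        // example, a false positive), so callers can simply skip it.
        std::wstring toLongPath(const std::wstring& shortPath) {
            DWORD needed = GetLongPathNameW(shortPath.c_str(), NULL, 0);
            if (needed == 0)
                return shortPath;                       // not a resolvable path
            std::wstring longPath(needed, L'\0');
            DWORD written = GetLongPathNameW(shortPath.c_str(), &longPath[0], needed);
            if (written == 0 || written >= needed)
                return shortPath;
            longPath.resize(written);
            return longPath;
        }

    Note that GetLongPathNameW only works for paths that actually exist on the file system, which conveniently filters out many of the false positives a registry scan produces.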

    Read the article

  • SQL Server High Availability - Mirroring with MSCS?

    - by David
    I'm looking at options for high-availability for my SQL Server-powered application. The requirements are: HA protection from storage failure. Data accessibility when one of the DB servers is undergoing software updates (e.g. planned outage for Windows Update / SQL Server service-packs). Must not involve much in the way of hardware procurement. The application is an ASP.NET web application. The web application's users have their own database instances. I've seen two main options: SQL Server failover clustering, and SQL Server mirroring. I understand that SQL Server Failover Clustering requires the purchasing of a shared disk array and doesn't offer any protection if the shared storage goes down (so the documentation recommends to set up a Mirroring between two clusters). Database Mirroring seems the cheaper option (as it only requires two database servers and a simple witness box) - but I've heard it doesn't work well when you have a large number of databases. The application I'm developing involves giving each client their own database for their application - there could be hundreds of databases. Setting up the mirroring is no problem thanks to the automation systems we have in place. My final point concerns how failover works with respect to client connections - SQL Server Failover Clustering uses MSCS which means that the cluster is invisible to clients - a connection attempt might fail during the failover, but a simple reconnect will have it working again. However mirroring, as far as I know, requires that the client be aware of the mirrored partners: if the client cannot connect to the primary server then it tries the secondary server. I'm wondering how this work with respect to Connection Pooling in ASP.NET applications - does the client connection failovering mean that there's a potential 2-second (assuming 2000ms TCP timeout policy) pause when the connection pool tries the primary server on every connection attempt? I read somewhere that Mirroring can be used on top of MSCS which means that the client does not need to be aware of mirroring (so there wouldn't be any potential delays during connection, and also that no changes would need to be made to the client, not even the connection string) - however I'm finding it hard to get documentation or white papers on this approach. But if true, then it means the best method is then Mirroring (for HA) with MSCS (for client ignorance and connection performance). ...but how does this scale to a server instance that might contain hundreds of mirrored databases?

    Read the article

  • The Koyal Group Info Mag News¦Charged building material could make the renewable grid a reality

    - by Chyler Tilton
    What if your cell phone didn’t come with a battery? Imagine, instead, if the material from which your phone was built was a battery. The promise of strong load-bearing materials that can also work as batteries represents something of a holy grail for engineers. And in a letter published online in Nano Letters last week, a team of researchers from Vanderbilt University describes what it says is a breakthrough in turning that dream into an electrocharged reality. The researchers etched nanopores into silicon layers, which were infused with a polyethylene oxide-ionic liquid composite and coated with an atomically thin layer of carbon. In doing so, they created small but strong supercapacitor battery systems, which stored electricity in a solid electrolyte, instead of using corrosive chemical liquids found in traditional batteries. These supercapacitors could store and release about 98 percent of the energy that was used to charge them, and they held onto their charges even as they were squashed and stretched at pressures up to 44 pounds per square inch. Small pieces of them were even strong enough to hang a laptop from—a big, fat Dell, no less. Although the supercapacitors resemble small charcoal wafers, they could theoretically be molded into just about any shape, including a cell phone’s casing or the chassis of a sedan. They could also be charged—and evacuated of their charge—in less time than is the case for traditional batteries. “We’ve demonstrated, for the first time, the simple proof-of-concept that this can be done,” says Cary Pint, an assistant professor in the university’s mechanical engineering department and one of the authors of the new paper. “Now we can extend this to all kinds of different materials systems to make practical composites with materials specifically tailored to a host of different types of applications. We see this as being just the tip of a very massive iceberg.” Pint says potential applications for such materials would go well beyond “neat tech gadgets,” eventually becoming a “transformational technology” in everything from rocket ships to sedans to home building materials. “These types of systems could range in size from electric powered aircraft all the way down to little tiny flying robots, where adding an extra on-board battery inhibits the potential capability of the system,” Pint says. And they could help the world shift to the intermittencies of renewable energy power grids, where powerful batteries are needed to help keep the lights on when the sun is down or when the wind is not blowing. “Using the materials that make up a home as the native platform for energy storage to complement intermittent resources could also open the door to improve the prospects for solar energy on the U.S. grid,” Pint says. “I personally believe that these types of multifunctional materials are critical to a sustainable electric grid system that integrates solar energy as a key power source.”

    Read the article

  • Excel for programmers

    - by Rohit
    Recently as part of my job I have had to edit and create a lot of excel spreadsheets. I am familiar with some Excel formulas but while editing the spreadsheets I don't feel that I'm using the full potential of excel. Are there any books/online resources which guide someone with a programming background in Excel?

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >