Search Results

Search found 1771 results on 71 pages for 'knowing me knowing you'.

Page 30 of 71

  • Programming as a minor

    - by Tomas Cokis
    Hello everyone! I've never asked a question here at Programmers, and for reasons which will become obvious later I've never answered one here, but I do poke around in short bursts. Anyway, I'm 15 right now, and I've been programming in C++ for 4 years, just working on my own projects that aim so high as to never be finished. I've been working on a single project for the last year, and every 3 months I add a new system into it. It might be a value-tabling, directory-enabled log system, or a render system, or a class to load up XML files; whatever it is, I don't mind too much that the overall project (a 3D engine) isn't ever going to get finished, I just get some satisfaction from getting what I have done building and running. I don't know what I want to do when I grow up, although I suspect I'll go into some form of engineering, but I was interested in knowing, if I do choose to go into a career as a developer, what kind of material I could look at to push myself up and get myself experience that might help my career later. I'm not talking about books in particular; I'm more interested in subject areas that will get me access to good job opportunities, or that will give me a hand up if I do computer science and software-related courses at uni. One of the things I was thinking of doing was designing some of the logic gate components of a small computer - which I started briefly over the holidays, working out integer addition, subtraction and multiplication. That kind of stuff interests me, but is it really useful - or more useful than just more programming? But anyway, any advice? Should I continue on my perpetual 3D engine? Are there any other projects or particular accomplishments that would help my education? Perhaps I should mention that I live in Perth, Australia, so local software companies are likely to be scarcer than usual.
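
    For anyone curious what the logic-gate exercise described above might look like in code, here is a minimal C# sketch of integer addition built from simulated AND/OR/XOR gates (a ripple-carry adder). The structure and names are illustrative assumptions, not taken from the question.

        using System;

        // Sketch: 8-bit integer addition built only from simulated logic gates.
        static class GateAdder
        {
            static bool And(bool a, bool b) => a && b;
            static bool Or(bool a, bool b)  => a || b;
            static bool Xor(bool a, bool b) => a ^ b;

            // Full adder: sum = a XOR b XOR carryIn, carryOut = majority(a, b, carryIn).
            static (bool Sum, bool Carry) FullAdder(bool a, bool b, bool carryIn)
            {
                bool sum = Xor(Xor(a, b), carryIn);
                bool carry = Or(And(a, b), And(carryIn, Xor(a, b)));
                return (sum, carry);
            }

            // Ripple-carry addition: one full adder per bit, carry flows upward.
            static byte Add(byte x, byte y)
            {
                bool carry = false;
                int result = 0;
                for (int bit = 0; bit < 8; bit++)
                {
                    bool a = ((x >> bit) & 1) == 1;
                    bool b = ((y >> bit) & 1) == 1;
                    (bool sum, bool nextCarry) = FullAdder(a, b, carry);
                    if (sum) result |= 1 << bit;
                    carry = nextCarry;
                }
                return (byte)result;   // any carry out of bit 7 is discarded
            }

            static void Main() => Console.WriteLine(Add(23, 42));   // prints 65
        }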

    Read the article

  • #altnetseattle – Collaboration, Why is it so hard!

    - by GeekAgilistMercenary
    The session convened and we began a discussion about why collaboration is so hard. To work together better on software, we engineers have to overcome traditional software approaches (silos of work) and the human element of tending to go off in a corner to work through an issue. It was agreed that software engineers are jackasses of jackassery. Some notes from the session: break down the stoic and silent types by presenting continuous enthusiasm until they open up to the group. Know it is OK to ask the dumb question or work through basic things once in a while. Non-work interactions are pivotal to work-related collaboration. Collaboration is mostly independent of process (i.e. Agile or Waterfall). Latency should be minimal in the feedback loop for software development. Collaboration is enhanced by Agile ideals, and things like Scrum or Lean. Agile is not a process; Lean and Scrum are processes. Agile is an ideal. Lean, Agile, Scrum, Waterfall, Six Sigma, CMMI, oh dear... Great session. Off to the next session and more brain crunching... weeeeeeee!

    Read the article

  • Automated Error Reporting in .NET Reflector - harnessing the most powerful test rig in existence

    - by Alex.Davies
    I know a testing system that will find more bugs than all the unit testing, integration testing, and QA you could possibly do. And the chances are you're not using it. It's called your users. It's a cliché that you should test so that you find your bugs rather than your users finding them. Of course you should. But it's also a cliché that no software is ever shipped bug-free. Lost cause? No, opportunity! I think .NET Reflector 6 is pretty stable. In fact I know exactly how stable it is, because a (surprisingly high) proportion of its users tell me every time it crashes. If they press "Send Error Report", I get the report, and then I fix it. As a rough guess, while a standard stack trace is enough to fix a problem 30% of the time, having all those local variables in the stack trace means I can fix it about 80% of the time. How does this all happen? Did it take ages to code this swish system? Nope, it was one checkbox in SmartAssembly. It adds some clever code to your assembly to capture local variables every time an exception is thrown, and to ask your user to report it to you, along with a variety of other useful information. Of course not all bugs show up as exceptions. But if you get used to knowing that SmartAssembly will tell you when an exception happens, you begin to change your coding style. Now, as long as an exception gets thrown in any situation you don't expect, you'll fix it if it ever happens. You'll start throwing exceptions liberally, and stop having to think about whether tiny edge cases are possible, as long as they throw an exception if they happen.
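
    The SmartAssembly feature described above is a product feature rather than something you wire up by hand, but as a rough, hand-rolled illustration of the general idea (an assumption for illustration, not Red Gate's actual implementation), a last-chance handler that records crash details for reporting might look like this in C#:

        using System;
        using System.IO;

        // Illustrative only: record unhandled-exception details so they can be
        // sent back to the developer. SmartAssembly goes further (it also
        // captures local variable values) and does this automatically.
        static class CrashReporter
        {
            static void Main()
            {
                AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
                {
                    var ex = args.ExceptionObject as Exception;
                    string report =
                        $"Time:    {DateTime.UtcNow:o}\n" +
                        $"Type:    {ex?.GetType().FullName}\n" +
                        $"Message: {ex?.Message}\n" +
                        $"Stack:\n{ex?.StackTrace}";

                    // A real reporter would ask the user's permission and post this
                    // to a web service; here it is just written to disk.
                    File.WriteAllText("crash-report.txt", report);
                };

                throw new InvalidOperationException("Demo crash");   // triggers the handler
            }
        }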

    Read the article

  • Issue with multiple student result printing requests

    - by dotman14
    I'm not really sure how to ask this question, but I'll try my best to make it clear enough. I have a student result application whereby students' results are managed over several academic sessions, each having two semesters. During each semester students take different courses and have results based on that semester. The application is done now and I'm using a PDF library to crop the final result page to hand over to the students each semester. If a student requests a particular semester's result it's a straightforward issue and there are no complications when it comes to printing out the result. My issue is this: what about a case where a student requests a combination of semesters... say 3rd year rain semester, 4th year rain semester and 5th year harmattan semester. How can I handle this? Does the user pick these options at the user interface level, or is there a special way to handle issues like this? Also, if I'm to display these multiple results, how could this be done, knowing full well that I'll have to print the different results separately? Hopefully I've been able to make my situation clear enough. Thanks for your time and patience. Expecting your comments and answers. Thanks.
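
    One common way to handle the multi-semester request described above is to let the user tick the sessions/semesters they want in the UI, then filter the stored results by those selections and hand each group to the PDF step separately. A minimal C# sketch, with hypothetical types since the question does not describe a data model:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical model types (C# 9 records), invented for illustration.
        record SemesterKey(string Session, string Semester);
        record CourseResult(string MatricNo, string Session, string Semester, string Course, string Grade);

        static class ResultPrinter
        {
            // One result set per requested semester, in the order requested,
            // so each can be rendered and printed as its own document.
            static IEnumerable<(SemesterKey Key, List<CourseResult> Rows)> ResultsFor(
                string matricNo,
                IEnumerable<CourseResult> allResults,
                IEnumerable<SemesterKey> requested)
            {
                foreach (var key in requested)
                {
                    var rows = allResults
                        .Where(r => r.MatricNo == matricNo
                                 && r.Session == key.Session
                                 && r.Semester == key.Semester)
                        .ToList();
                    yield return (key, rows);
                }
            }

            static void Main()
            {
                var all = new List<CourseResult>
                {
                    new CourseResult("A123", "2010/2011", "Rain", "MTH301", "A"),
                    new CourseResult("A123", "2011/2012", "Harmattan", "EEE501", "B"),
                };
                var wanted = new[]
                {
                    new SemesterKey("2010/2011", "Rain"),
                    new SemesterKey("2011/2012", "Harmattan"),
                };

                foreach (var (key, rows) in ResultsFor("A123", all, wanted))
                    Console.WriteLine($"{key.Session} {key.Semester}: {rows.Count} course(s)");
            }
        }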

    Read the article

  • Advice: The first-time interviewer's dilemma

    - by shan23
    I've been working in my first job for about 2 years now, and I've been "asked" to interview a potential teammate (whom I might have to mentor as well) on pretty short notice (2 days from now). Initially, I had been given free rein (or so I thought, and hence agreed), but today I've been told "not to pose bookish questions" - implying I can only ask basic programming puzzles and stuff similar to the 'FizzBuzz' question. I strongly believe that not knowing basic algorithmic notation (the haziest ideas of space/time complexities) or the tiniest idea of regular expressions would make working with the guy very difficult for anyone. I know I'm asking for a lot here, but according to you, what would be a comprehensive way to test out the absolutely basic requirements of a CS guy (he has 2 yrs of exp) without sounding too pedantic/bookish? It seems it would be legit to ask C questions/simple puzzles only... but I really do want to have something a bit different from "finding loops in linked lists", which has kind of become the opening statement of most techie interviews! This is a face-to-face interview with about an hour or more of time - I looked at Steve's basic phone-screen questions, and I was wondering if there exists a guide on "basic face-to-face interview questions" that I can use (or compile from the community's answers here). EDIT: The position is mostly for a kernel-level C programming job, with some smattering of C++ required for writing the test framework.
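
    For reference, the 'FizzBuzz' screen mentioned above is usually posed roughly as: print the numbers 1 to 100, but print Fizz for multiples of 3, Buzz for multiples of 5, and FizzBuzz for multiples of both. A minimal C# version, just to calibrate what counts as a "basic" puzzle:

        using System;

        static class FizzBuzz
        {
            static void Main()
            {
                for (int i = 1; i <= 100; i++)
                {
                    if (i % 15 == 0)     Console.WriteLine("FizzBuzz"); // multiple of both 3 and 5
                    else if (i % 3 == 0) Console.WriteLine("Fizz");
                    else if (i % 5 == 0) Console.WriteLine("Buzz");
                    else                 Console.WriteLine(i);
                }
            }
        }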

    Read the article

  • Newbie seeking advice on programming in general

    - by user974685
    I need some of you to remember back to a time when you might have been bad at programming... I've been at my new job (as a software developer) for a couple of months now, and have passed the probation period. I have very little programming experience (C++ only) and am currently working with ASP.NET MVC and Silverlight. So there's a website the company has been working on and I am joining the effort to make it better, iron out bugs, etc. The problem is learning about a system/website which has already been made, via Visual Studio. I ALWAYS feel HUGELY overwhelmed, never knowing which part of a line I should look up, and generally having lots of trouble getting the big picture. Visual Studio itself is something I'm finding difficult to get to grips with, let alone the ASP.NET framework. I get the impression that because my coworkers have more experience than me, they are getting all the good jobs, and I am left with crap to do - stuff which is not even vaguely programming. Meaning they are learning/creating more, and I am learning/creating next to nothing. I'm getting demoralised, and too scared to say anything. I'm not stupid; I've read and practiced plenty of the fundamental programming concepts... I'm just bloody scared of this damn framework. I look at it and just feel paralysed. The result is that I keep asking the veteran guy questions, and he is getting irritated, and would rather give me easy/mindless/non-programming jobs to avoid wasting time helping me out. Then when I don't understand something, I hesitate about whether or not I should ask him yet, and try to decide if it would be a waste of time. I'm the kind of person who picks things up slowly, but with a lot of attention to detail. The former, I think, is making me look incompetent. If anyone gets where I'm coming from, please say something helpful... I'm scared of losing my job in a few months or something...

    Read the article

  • Getting into game/game engine programming

    - by Darkslash
    So I am interested in learning game programming, but I really have an interest in the lower-level engineering in games. I have OpenGL experience, and I am really interested in learning more about implementing AI, physics, etc. I have a computer science degree, so I really like getting into technical stuff. Many times when I ask about this sort of thing, I get a lot of "use an engine", "use Unity3D", "why waste your time writing code that already exists", etc. My idea was to use simpler libraries such as SFML or XNA so that I could learn how to implement the more complex systems. The thing is, although I do want to write games, I want to learn things that using something like Unity simply doesn't teach you. My goal is not to make a current-generation-quality 3D game to sell; I just want to make some cool smaller games and learn all I can about the programming side of game development. Is this something that people just do not do anymore? It seems like everywhere I turn people are using Unity or UDK or GameMaker. I fully understand why you would use tools like these, but I can't see how they would suit my purposes. So where does someone like myself turn? Am I trying to learn something that people just do not bother doing anymore? Is the innovation in this area gone, and is it all about gameplay now? I'm sorry if this question seems silly, but I am genuinely interested in knowing more about this and meeting more people who are interested in this sort of thing.

    Read the article

  • How can we plan projects realistically while accounting for support issues?

    - by Thomas Clayson
    We're having a problem at work: we're trying to schedule work so that we can assess time scales and get deadline dates. The problem is that it's difficult to plan for a project without knowing everything that's going to happen. For instance, right now we've planned all our projects through the start of December, but in that time we will have various in-house and external meetings, teleconferences and extra work. It's all well and good to say that a project will take three weeks, but if there is a week's worth of interruption in that time then the date of completion will be pushed back a week. The problem is threefold: (1) when we schedule projects the time scales are taken literally - if we estimate three weeks, the deadline is set for three weeks' time, the client is told, and there is no room for extension; (2) interim work and such means that we lose productive time working on the project; (3) sometimes clients don't have the time that we need to do the work, so they'll come to us and say they need a project done by the end of the month even when we think the work will take two months - not to mention we already have work to be doing. We have a Gantt chart which we are trying to fill in with all the projects we have, and we fill in timesheets, but they're not compared to the Gantt chart at all. This makes it difficult to say "Well, we scheduled 3 weeks for this project, but we've lost a week here, so the deadline has to move back a week." It's also not professional to keep missing deadlines we've communicated to the client. How do other people deal with this type of situation? How do you manage the planning of projects? How much "extra" time do you schedule into a project to account for non-project work that occurs during it? How do you deal with support issues and bugs and stuff - things you can't account for during planning? UPDATE: Lots of good answers, thank you.

    Read the article

  • Which algorithms/data structures should I "recognize" and know by name?

    - by Earlz
    I'd like to consider myself a fairly experienced programmer. I've been programming for over 5 years now. My weak point, though, is terminology. I'm self-taught, so while I know how to program, I don't know some of the more formal aspects of computer science. So, what are practical algorithms/data structures that I should recognize and know by name? Note, I'm not asking for a book recommendation about implementing algorithms. I don't care about implementing them; I just want to be able to recognize when an algorithm/data structure would be a good solution to a problem. I'm asking more for a list of algorithms/data structures that I should "recognize". For instance, I know the solution to a problem like this: You manage a set of lockers labeled 0-999. People come to you to rent a locker and then come back to return the locker key. How would you build a piece of software to manage knowing which lockers are free and which are in use? The solution would be a queue or a stack. What I'm looking for are things like "in what situation should a B-tree be used", "what search algorithm should be used here", etc. And maybe a quick introduction to how the more complex (but commonly used) data structures/algorithms work. I tried looking at Wikipedia's list of data structures and algorithms but I think that's a bit overkill. So I'm looking more for the essential things I should recognize.
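
    As a concrete version of the locker example above, here is a small C# sketch of the stack-based approach: the free locker numbers sit on a stack, renting pops one off, and returning pushes it back. (The class and method names are invented for illustration.)

        using System;
        using System.Collections.Generic;

        class LockerManager
        {
            private readonly Stack<int> free = new Stack<int>();

            public LockerManager(int count)
            {
                for (int i = count - 1; i >= 0; i--)
                    free.Push(i);                     // lockers 0..count-1 start out free
            }

            public int Rent()
            {
                if (free.Count == 0)
                    throw new InvalidOperationException("No lockers available");
                return free.Pop();                    // O(1) to hand out a free locker
            }

            public void Return(int locker) => free.Push(locker);   // O(1) to take it back

            static void Main()
            {
                var lockers = new LockerManager(1000);
                int mine = lockers.Rent();
                Console.WriteLine($"Rented locker {mine}");
                lockers.Return(mine);
            }
        }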

    Read the article

  • SQL SERVER – New Look for CodePlex Project – Hosting for Open Source Software

    - by pinaldave
    CodePlex is my favorite site. CodePlex is Microsoft's free open source project hosting site. You can create projects to share with the world, collaborate with others on their projects, and download open source software. It is a great place to find so many open source projects available to explore. All the software is free and open source. I often go there at intervals to check what is new in the SQL Server field as well as other technologies. Yesterday when I visited it, I had a nice surprise, as it has had a total makeover and looks very decent as well as elegant at the same time. I have noticed that when I talk about CodePlex with the user community, not everybody knows about it. The quickest way I explain what CodePlex is, is to start naming a few of the projects which are available there, and suddenly I start noticing a few hands going up from people who know the projects. This is an indirect way to prove that many of us use CodePlex but do not pay special attention to what it actually is. Let me name a few popular projects of CodePlex here: SQL Server Sample Database [link], Image Resizer for Windows [link], Ajax Control Toolkit [link], Skype Voice Changer [link], Silverlight Toolkit [link], Windows 7 USB/DVD Download Tool [link], Orchard Project [link]. There are very interesting SQL Server projects available on CodePlex as well. I am listing a few of them here for reference, in no particular order: SQL Server Sample Database [link], SQL Server Compact Toolbox [link], Microsoft Drivers for PHP for SQL Server [link], Internals Viewer for SQL Server [link], SQL Server Spatial Tools [link], SQL Monitor - managing SQL Server performance [link], SQL Server 2008 Extended Events SSMS Addin [link]. How many of the above-mentioned projects have you come across before? Leave a comment; it will be interesting to know what our community is familiar with. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Should one always know what an API is doing just by looking at the code?

    - by markmnl
    Recently I have been developing my own API, and with that vested interest in API design I have been keenly interested in how I can improve my API design. One aspect that has come up a couple of times (not from users of my API but in discussions I have observed about the topic) is that one should know just by looking at the code calling the API what it is doing. For example, see this discussion on GitHub for the Discourse repo; it goes something like: foo.update_pinned(true, true); Just by looking at the code (without knowing the parameter names, documentation, etc.) one cannot guess what it is going to do - what does the 2nd argument mean? The suggested improvement is to have something like: foo.pin() foo.unpin() foo.pin_globally() And that clears things up (the 2nd arg was whether to pin foo globally, I am guessing), and I agree that in this case the latter would certainly be an improvement. However I believe there can be instances where methods to set different but logically related state would be better exposed as one method call rather than separate ones, even though you would not know what it is doing just by looking at the code. (So you would have to resort to looking at the parameter names and documentation to find out - which personally I would always do no matter what if I am unfamiliar with an API.) For example, I expose one method SetVisibility(bool, string, bool) on a FalconPeer, and I acknowledge that just looking at the line: falconPeer.SetVisibility(true, "aerw3", true); you would have no idea what it is doing. It is setting 3 different values that control the "visibility" of the falconPeer in the logical sense: accept join requests, only with password, and reply to discovery requests. Splitting this out into 3 method calls could lead a user of the API to set one aspect of "visibility" while forgetting to set others that I force them to think about by only exposing the one method to set all aspects of "visibility". Furthermore, when the user wants to change one aspect they almost always will want to change another aspect, and can now do so in one call.
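
    One middle ground between the single three-parameter call and three separate setters, sketched here in C# (the VisibilitySettings type is invented for illustration and is not part of the real FalconPeer API), is to keep the single atomic call but make every aspect named at the call site:

        using System;

        // Hypothetical named-settings variant of the SetVisibility call discussed above.
        class VisibilitySettings
        {
            public bool AcceptJoinRequests { get; set; }
            public string JoinPassword { get; set; }           // null means no password required
            public bool ReplyToDiscoveryRequests { get; set; }
        }

        class FalconPeerExample
        {
            public void SetVisibility(VisibilitySettings settings)
            {
                // Still one atomic call, so the caller is forced to consider every
                // aspect of visibility, but each aspect is named where it is set.
                Console.WriteLine(
                    $"join={settings.AcceptJoinRequests}, " +
                    $"password={(settings.JoinPassword != null)}, " +
                    $"discovery={settings.ReplyToDiscoveryRequests}");
            }

            static void Main()
            {
                new FalconPeerExample().SetVisibility(new VisibilitySettings
                {
                    AcceptJoinRequests = true,
                    JoinPassword = "aerw3",
                    ReplyToDiscoveryRequests = true
                });
            }
        }

    In C# specifically, named arguments (for example SetVisibility(acceptJoinRequests: true, password: "aerw3", replyToDiscovery: true)) get much of the same readability without introducing a new type, while keeping the call atomic.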

    Read the article

  • Do you ever worry that you're more concerned with how something is built rather than what you are actually building?

    - by Rob Stevenson-Leggett
    As a programmer I have an inherent nagging annoyance at my tools, other people's code, my code, and the world in general. I always want to improve it. So I refactor, and I stay on top of the latest techniques. I try to learn patterns, and I try to use frameworks so as not to reinvent the wheel. I can write a tech spec that will blow your socks off with the amount of patterns I can squeeze in. However, lately I feel I actually know more about the tools I use than how to actually implement successful software. I feel like I'm lacking in the human-factors skill set, and I believe that being a successful software engineer takes more than knowing the coolest framework. I think it needs some of the following skill sets too: interaction design, user experience, and marketing. I've got a bit of this that I've learned from people I've worked with and great projects I've worked on, but I don't feel like I "own" these skills. Am I right? Should I be trying to develop these skills further, or should these be left to the people who do them for a career? How do you make sure you don't get too tied up in how you're doing something and make sure you "make your users awesome"? Does anyone know of good resources for learning these skills from a programming point of view?

    Read the article

  • Why I don't use SSIS checkpoint files

    - by jamiet
    In a recent discussion regarding general ETL best practices, the subject of checkpoint files as a means for package restartability came up, and I stated that I was dead against using them. For anyone that may care, here is why: (1) configuring them is distinctly unintuitive (that's a matter of opinion, but if you follow the link I'll wager that you will agree); (2) they don't make any allowance for loop iterations; (3) they cannot store variables of type Object; (4) they are limited in ability - there are many scenarios where you may want to execute certain containers regardless of whether the package is started from a checkpoint file, but the current usage model does not allow for this; (5) they are ignored by event handlers, which wouldn't be so bad if there were a way to toggle this behaviour in certain scenarios; and (6) they don't work properly. I'll expand on the last point. I have encountered situations where the behaviour for tasks executing concurrently is unpredictable. That is, sometimes the completion of a task that executes concurrently with a failed/failing task will make it into the checkpoint file and sometimes it won't. This is near-impossible to reproduce, but it does happen, as my good friend John Welch will hopefully concur (if he is reading). Is anyone out there making successful use of checkpoint files within SSIS? I would be interested in knowing about that if so. @Jamiet

    Read the article

  • Is this form of cloaking likely to be penalised?

    - by Flo
    I'm looking to create a website which is considerably JavaScript-heavy, built with Backbone.js and with most content being passed as JSON and loaded via Backbone. I just need some advice or opinions on the likelihood of my website being penalised for using the method of serving plain HTML (text, images, everything) to search engine bots and a JS front-end version to normal users. This is my basic plan for my site: I plan on having the first request to any page be HTML which will only give about 1/4 of the page, and thereafter load the last 3/4 with Backbone.js. Therefore non-JavaScript users get a 'bit' of the experience. Once that new user has visited and been detected to have JS, a cookie will be saved on their machine and requests from there on will be AJAX only. Example: If (AJAX || HasJSCookie) { // Pass JSON } Search engine served content: that entire experience of loading via AJAX will be stripped if, for example, a Google bot is detected; the same content will be served, but all HTML. I thought about just allowing search engines to index the first 1/4 of content, but as I'm concerned about inner links and picking up every bit of content I thought it would be better to give search engines the entire content. I plan to do this by just checking against a list of user agents to know if it's a bot or not. If (Bot) { // serve plain html } In addition I plan to make clean URLs for the entire website despite full AJAX, therefore providing AJAX content to www.example.com/#/page and normal HTML to www.example.com/page is kind of out of the question. I would rather avoid the practice of using # when technologies such as HTML5 pushState are around. So my question is really just asking the opinion of the masses on whether it's likely that my website will be penalised? And can you suggest an alternative which avoids the 'noscript' method?
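
    As a rough sketch of the branching described above, written here in C#/ASP.NET MVC purely for illustration (the question does not name a server stack, and the bot list, cookie name and LoadPage helper are assumptions):

        using System.Linq;
        using System.Web.Mvc;

        // Illustrative controller: full HTML for crawlers, JSON for the Backbone
        // front end, and a partial HTML shell for a first (or no-JS) visit.
        public class PageController : Controller
        {
            private static readonly string[] BotAgents =
                { "googlebot", "bingbot", "slurp", "baiduspider" };

            private bool IsBot()
            {
                string ua = (Request.UserAgent ?? string.Empty).ToLowerInvariant();
                return BotAgents.Any(bot => ua.Contains(bot));
            }

            public ActionResult Show(string slug)
            {
                var model = LoadPage(slug);                    // hypothetical data-access helper

                if (IsBot())
                    return View("FullHtml", model);            // everything rendered server-side

                if (Request.IsAjaxRequest() || Request.Cookies["hasjs"] != null)
                    return Json(model, JsonRequestBehavior.AllowGet);   // Backbone consumes JSON

                return View("Shell", model);                   // first visit: the 1/4 HTML shell
            }

            private object LoadPage(string slug) => new { Slug = slug };
        }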

    Read the article

  • Open source clone for Starcraft

    - by sinekonata
    Two questions about an SC: Brood War clone. Is there one yet? How likely is legal pursuit? Since almost all games I usually play now have a FOSS alternative, from alpha quality to way polished, I was wondering why I can't find one for SC, one of the biggest titans of the gaming community. So my first question is: is there a game that was made with the intention of emulating SC? Is it that I didn't look well enough? Could it really be that no one tackled what seems like a small effort compared to the creation of a game engine like Spring or games like Rigs of Rods or Minetest? And since SC is not being maintained at all, shouldn't the incentive to see a bug-free, moddable, balanced version be huge? What am I not getting here? In the event that there is none, is it a legal problem? Could it be that people expect Blizzard to release the sources themselves? Or that developers don't see the point in having SC mechanics without the patented lore and aesthetics? And the trickier question: if I were to make SC an open source game, a total clone of it for the purposes of maintenance, modability, etc., would Blizzard really sue a team of developer fans that just do them a favour, knowing they don't lose any money from Korea broadcasts? Or would they do it just to avoid setting a precedent? So thanks for reading all that; I hope I'm not the only one to think it's weird that no one talks about it. See you.

    Read the article

  • SSDT gotcha – Moving a file erases code analysis suppressions

    - by jamiet
    I discovered a little wrinkle in SSDT today that is worth knowing about if you are managing your database schemas using SSDT. In short, if a file is moved to a different folder in the project then any code analysis suppressions that reference that file will disappear from the suppression file. This makes sense if you think about it, because the paths stored in the suppression file are no longer valid, but you probably won't be aware of it until it happens to you. If you don't know what code analysis is or you don't know what the suppression file is then you can probably stop reading now; otherwise read on for a simple short demo. Let's create a new project and add a stored procedure to it called sp_dummy. Naming stored procedures with an sp_ prefix is generally frowned upon, and hence SSDT static code analysis will look for occurrences of this and flag them. So, the next thing we need to do is turn on static code analysis in the project properties. A subsequent build causes a code analysis warning, as we might expect. Let's suppose we actually don't mind stored procedures with sp_ prefixes; we can just right-click on the message to suppress it and get rid of it. That causes a suppression file to get created in our project. Notice that the suppression file contains a relative path to the file that has had the suppression placed upon it. Now if we simply move the file within our project to a new folder, notice that the suppression that we just created gets removed from the suppression file. As I alluded to above, this behaviour is intuitive because the path originally stored in the suppression file is no longer relevant, but you're probably not going to be aware of it until it happens to you and messages that you thought you had suppressed start appearing again. Definitely one to be aware of. @Jamiet

    Read the article

  • How to check Early Z efficiency on AMD GPU with Windows 7

    - by Suma
    I have a game using DirectX 9, and a development station using Win 7 x64. I am still able to get access to another station with Vista x64, dual-booted with WinXP x86. I wanted to check early Z efficiency in the game, and to my sadness all the tools I have tried seem unable to perform this task. AMD PerfStudio: GPUPerfStudio 2 does not support DirectX 9 at all, and GPUPerfStudio 1.2 does not install correctly on Windows 7. When I tweaked the MSI package (a simple OS version check adjustment was needed), it complained that the drivers I have do not provide the needed instrumentation. Drivers old enough to support GPUPerfStudio would most likely not be able to operate with my Radeon 5750 card (though this is something I am not 100% sure about; I did not attempt to try any older drivers, not knowing which I should look for). PIX: PIX does not seem to contain any counters like this. It offers some ATI-specific counters, but when I try to activate them, PIX reports "PIX encountered a problem while attaching to the target program." I do not want to upgrade to DX 10/11 just to be able to profile the game, but it seems that without that step I am somewhat locked into a toolset which is no longer supported. I see only one obvious option which would probably work, and that is using the WinXP (or, with a little bit of luck, even the Vista) station, perhaps with some older AMD card, to make sure GPUPerfStudio 1.2 works. Other than that, can anyone recommend other options for checking GPU HW counters (HiZ/EarlyZ in particular, but if others were available as well, it would be a nice bonus) for a DirectX 9 game on Windows 7, preferably on an AMD GPU? (If that is not possible, I would definitely prefer switching GPU to switching the OS, but before I do so I would like to know whether I would just hit the same problem with nVidia again.)

    Read the article

  • I've been hired on as an entry-level game developer at a company and have little/no experience in API programming, what should I expect?

    - by Mr. Geneth
    So, I've been hired on as an entry-level game developer with little/no experience working with any API other than Win32. This will be an overall learning experience for me as a person, and I have gone over this multiple times with the boss and he has no problem with my inexperience. He says that if I'm not worth it now, I will be later. This gives me confidence, but I still feel that I should know a lot more before tackling this position. I would be stupid to pass it up. This is one of my favorite places to come for advice and help, and I have tried to just accept this, but it keeps bothering me that I can't go in knowing how to do at least the basics. I want to give the company its money's worth, ya know? My questions are: What should I expect from the other programmers on this project (in terms of patience with me, working together, and being taught)? Is this normal? Any other advice on this sort of thing would be wonderful. I just want to feel comfortable with it.

    Read the article

  • Are there any actual case studies on rewrites of software success/failure rates?

    - by James Drinkard
    I've seen multiple posts about rewrites of applications being bad, people's experiences with them here on Programmers, and an article I've read by Joel Spolsky on the subject, but no hard evidence or case studies. Other than the two examples Joel gave and some other posts here, what do you do with a bad codebase, and how do you decide what to do with it based on real studies? For the case in point, there are two clients I know of that both have old legacy code. They keep limping along with it because, as one of them found out, a rewrite was a disaster: it was expensive and didn't really improve the code much. That customer has some very complicated business logic, as the rewriters quickly found out. In both cases, these are mission-critical applications that bring in a lot of revenue for the company. The one that attempted the rewrite felt that they would hit a brick wall if the legacy software didn't get upgraded at some point in the future. To me, that kind of risk warrants research and analysis to ensure a successful path. My question is: have there been actual case studies that have investigated this? I wouldn't want to attempt a major rewrite without knowing some best practices, pitfalls, and successes based on actual studies. Aftermath: okay, I was wrong, I did find one article: Rewrite or Reuse. They did a study on a COBOL app that was converted to Java.

    Read the article

  • Building a Store Locator ASP.NET Application Using Google Maps API (Part 2)

    Last week's article, Building a Store Locator ASP.NET Application Using Google Maps API (Part 1), was the first in a multi-part article series exploring how to add store locator-type functionality to your ASP.NET website using the free Google Maps API. Part 1 started with an examination of the database used to power the store locator, which contains a single table named Stores with columns capturing the store number, its address and its latitude and longitude coordinates. Next, we looked at using Google Maps API's geocoding service to translate a user-entered address, such as San Diego, CA or 92101 into its latitude and longitude coordinates. Knowing the coordinates of the address entered by the user, we then looked at writing a SQL query to return those stores within (roughly) 15 miles of the user-entered address. These nearby stores were then displayed in a grid, listing the store number, the distance from the address entered to each store, and the store's address. While a list of nearby stores and their distances certainly qualifies as a store locator, most store locators also include a map showing the area searched, with markers denoting the store locations. This article looks at how to use the Google Maps API, a sprinkle of JavaScript, and a pinch of server-side code to add such functionality to our store locator. Read on to learn more!
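
    The "within roughly 15 miles" filter in the article comes down to a great-circle distance between the geocoded point and each store's coordinates. The article does that filtering in SQL; as a language-neutral illustration of the same haversine formula (not the article's actual query), here it is in C#:

        using System;

        static class GeoDistance
        {
            // Haversine great-circle distance between two lat/long points, in miles.
            static double DistanceMiles(double lat1, double lon1, double lat2, double lon2)
            {
                const double earthRadiusMiles = 3958.8;
                double dLat = ToRadians(lat2 - lat1);
                double dLon = ToRadians(lon2 - lon1);

                double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
                         + Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2))
                         * Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
                double c = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
                return earthRadiusMiles * c;
            }

            static double ToRadians(double degrees) => degrees * Math.PI / 180.0;

            static void Main()
            {
                // Downtown San Diego to a hypothetical store in La Jolla.
                double d = DistanceMiles(32.7157, -117.1611, 32.8328, -117.2713);
                Console.WriteLine($"{d:F1} miles");   // roughly 10 miles, inside the 15-mile radius
            }
        }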

    Read the article

  • SQL SERVER – Follow up – Usage of $rowguid and $IDENTITY

    - by pinaldave
    The most common question I receive is: why do I blog? The answer is even simpler – I blog because I get extremely constructive comments and conversation from people like DHall and Kumar Harsh. Earlier this week, I shared a conversation between Madhivanan and myself regarding how to find out whether a table uses a ROWGUID or not. I encourage all of you to read the conversation here: SQL SERVER – Identifying Column Data Type of uniqueidentifier without Querying System Tables. In simple words, the conversation between Madhivanan and myself brought out a simple query which returns the values of the UNIQUEIDENTIFIER without knowing the name of the column. David Hall wrote a few excellent comments as a follow-up, and every SQL enthusiast should read them first, second and third. David always brings positive energy; he first of all shows the limitations of my solution here and here, which he follows up with his own solution here. As he said, his solution is also not perfect, but it indeed leaves learning bites for all of us - worth reading if you are interested in unorthodox solutions. Kumar Harsh suggested that one can also find the identity column used in a table in a very similar way, using $IDENTITY. Here is how one can do the same:

        DECLARE @t TABLE
        (
            GuidCol UNIQUEIDENTIFIER DEFAULT newsequentialid() ROWGUIDCOL,
            IDENTITYCL INT IDENTITY(1,1),
            data VARCHAR(60)
        )
        INSERT INTO @t (data) SELECT 'test'
        INSERT INTO @t (data) SELECT 'test1'
        SELECT $rowguid, $IDENTITY FROM @t

    There are also alternative ways to find an identity column in the database. The following query will give a list of all identity column names with their corresponding table names:

        SELECT SCHEMA_NAME(so.schema_id) SchemaName, so.name TableName, sc.name ColumnName
        FROM sys.objects so
        INNER JOIN sys.columns sc ON so.OBJECT_ID = sc.OBJECT_ID AND sc.is_identity = 1

    Let me know if you use any alternate method related to identity; I would like to know what you do and how you do it when you have to deal with identity columns. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Is Intellisense faster in Visual Studio 2012 compared to Visual Studio 2010 for C++ projects?

    - by syplex
    We switched to VS2010 from VS2003 a few months ago, and there are many, many improvements. But the speed of IntelliSense is not one of them (although it does generate higher-quality results, which is great). I read that IntelliSense and the MSDN help system were being improved in VS2012, so I'm curious whether it's actually faster. The only data I could find were graphs from an early release (VS2011). For the record, I am using a vanilla install of VS2010 with SP1 on Windows 7 SP1 (x64). No plugins or add-ins running. What I'm looking for specifically: Has the speed of IntelliSense autocomplete improved? Has the speed of F12 (go to definition) improved? The answers to these questions will help in determining if VS2012 is worth the money to upgrade at this time, as the IntelliSense slowness would be the only major reason for upgrading. I'd also be interested in knowing if the help system has improved. I'm currently using MSDN help from VS2008 SP1 because it has filtering and is faster.

    Read the article

  • Where can I find video and audio drivers for my netbook?

    - by Ari
    I got a little netbook yesterday (an Acer Aspire One) for schoolwork and read in numerous reviews that Ubuntu was a must. I dove into it blindly and downloaded 12.04 LTS, and though I admit it's faster than before, I have no idea how it works yet and was stupid enough to completely get rid of the Windows 7 Starter that was on there before. I can see myself getting used to the rest of Ubuntu, but the main issue is that it seems to have eaten all of my drivers and I don't know where to get any Ubuntu-compatible ones. Any video at all is jolty and kind of creepy, and any audio is very distorted and REALLY creepy. I understand that this is a tiny $150 typing and light-web-use school device I'm talking about, but I feel really stupid making the poor thing worse than it was before and not knowing what to do about it. Does anyone know how to install graphics/sound drivers? Or, alternatively, does anyone know where I can get a copy of Windows 7 Starter? I can see myself liking this if the problems are fixable, but if not, I still have the product key and would be able to tackle the problem by myself with that.

    Read the article

  • Experience vs. versatility

    - by Florin Bombeanu
    Let's say a .NET programmer works at a company which provides software on demand, not as a product. The programmer works in WPF for a period of time and invests lots of time in it. He/she gets very good at WPF and Windows Forms and desktop development in general. But the company has to provide a web application now, so the developer has to learn MVC or Web Forms. He/she is not experienced in web development, so he/she starts investing time in this new technology and in time gets good at it. But this time the company has to provide a SharePoint solution, and so on. What is more important: being very, very good at a certain technology, or being as versatile as possible, knowing less about each technology but covering a greater area of expertise? Should the programmer keep studying and working in WPF until he/she reaches guru level, or is it a good thing that they had to learn other technologies as well? I agree with those of you who will say that when learning different technologies you also learn things which are useful no matter the technology you're programming in. But eventually, when the programmer wants to change jobs, will it matter more that he/she knows some WPF, MVC or SharePoint than the fact that he/she is insanely good at one of them? I would think the second one is more important, since most companies are looking for a developer for a certain technology. I don't think there are many companies looking for technical know-it-all people. What do you think?

    Read the article

  • Advantages of Hudson and Sonar over manual process or homegrown scripts.

    - by Tom G
    My coworker and I recently got into a debate over a proposed plan at our workplace. We've more or less finished transitioning our Java codebase into one managed and built with Maven. Now, I'd like for us to integrate with Hudson and Sonar or something similar. My reasons for this are that it'll provide a 'zero-click' build step to provide testers with new experimental builds, that it will let us deploy applications to a server more easily, and that tools such as Sonar will provide us with much-needed metrics on code coverage, Javadoc, package dependencies and the like. He thinks that the overhead of getting up to speed with two new frameworks is unacceptable, and that we should simply double down on documentation and create our own scripts for deployment. Since we plan on some aggressive rewrites to pay down the technical debt previous developers incurred (gratuitous use of Java's Serializable interface as a file storage mechanism that has predictably bitten us in the ass), he argues that we can document as we go, and that we'll end up changing a large swath of code in the process anyway. I contend that having the accurate metrics that Sonar (or your favorite similar tool) provides gives us a good place to start for any refactoring efforts, not to mention general maintenance - after all, knowing which classes are the most poorly documented, even if it's just a starting point, is better than seat-of-the-pants guessing. Am I wrong, and trying to introduce more overhead than we really need? Some more background: an alumnus of our company is working at a Navy research lab now and suggested these two tools in particular as ones they've had great success using. My coworker and I have also had our share of friendly disagreements before - he's more of the "CLI for all, compiles Gentoo in his spare time and uses Git" type and I'm more of a "give me an intuitive GUI, plays with XNA and is fine with SVN" type, so there's definitely some element of culture clash here.

    Read the article
