Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • How to manage primary key while updating [migrated]

    - by Subin Jacob
    In the following table, primaryKeyColumn is the primary key. To maintain the data history I always query the values with a WHERE condition (WHERE StatusColumn = 1), and I set StatusColumn to 0 when the data is edited (so that I can keep the previous data). The problem is that once I update it to 0, I can't insert the same key into primaryKeyColumn again, since the column is validated as a primary key. How can I manage this kind of validation? What mistake did I make in this design?

        primaryKeyColumn  ValueColumn  StatusColumn
        ----------------  -----------  ------------
        2                 Name1        1
        3                 Name2        1
        4                 Name3        0

    Read the article

  • Program error trying to generate Outlook 2013 email from Visual Basic 2010 [on hold]

    - by Dewayne Pinion
    I am using VB to send emails through Outlook. Currently we have a mix of Outlook versions at our office: 2010 and 2013, with a mix of 32-bit and 64-bit (a mess, I know). The code I have works well for Outlook 2010:

        Private Sub btnEmail_Click(sender As System.Object, e As System.EventArgs) Handles btnEmail.Click
            CreateMailItem()
        End Sub

        Private Sub CreateMailItem()
            Dim application As New Application
            Dim mailItem As Microsoft.Office.Interop.Outlook.MailItem = _
                CType(application.CreateItem(Microsoft.Office.Interop.Outlook.OlItemType.olMailItem), _
                      Microsoft.Office.Interop.Outlook.MailItem)
            'Me.a(Microsoft.Office.Interop.Outlook.OlItemType.olMailItem)
            mailItem.Subject = "This is the subject"
            mailItem.To = "[email protected]"
            mailItem.Body = "This is the message."
            mailItem.Importance = Microsoft.Office.Interop.Outlook.OlImportance.olImportanceLow
            mailItem.Display(True)
        End Sub

    However, I cannot get this to work for 2013. I have referenced the version 15 DLL for 2013 and it seems to be backward compatible, but when I try to use the above code against 2013 (it is 64-bit) it says it cannot start Microsoft Outlook: a program error has occurred. This happens on the Dim application statement line. I have tried Googling around, but there doesn't seem to be much out there referencing 2013; I suspect the problem has more to do with 64-bit than with the software version. Thank you for any suggestions!
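    One workaround that may be worth trying in a mixed 2010/2013, 32/64-bit environment is late binding: instead of compiling against a version-specific interop assembly, ask COM for whatever Outlook is actually registered on the machine. The sketch below is in C# rather than VB and is only an illustration of the idea, not the poster's code; the class name and placeholder address are invented.

        // Minimal late-binding sketch (assumes .NET 4+ for 'dynamic').
        using System;

        static class OutlookMailSketch
        {
            public static void CreateMailItem()
            {
                // Version-independent ProgID: resolves to whichever Outlook is installed.
                Type outlookType = Type.GetTypeFromProgID("Outlook.Application");
                if (outlookType == null)
                    throw new InvalidOperationException("Outlook is not registered on this machine.");

                dynamic outlook = Activator.CreateInstance(outlookType);
                dynamic mailItem = outlook.CreateItem(0); // 0 = olMailItem

                mailItem.Subject = "This is the subject";
                mailItem.To = "someone@example.com";   // placeholder address
                mailItem.Body = "This is the message.";
                mailItem.Display(true);                // show the mail modally
            }
        }

    If the error persists even with late binding, it may point at an environment issue (bitness of the calling process versus Outlook, or Outlook not being configured for the logged-in user) rather than at the interop reference itself.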

    Read the article

  • C# - How to store and reuse queries

    - by Jason Holland
    I'm learning C# by programming a real monstrosity of an application for personal use. Part of my application uses several SPARQL queries, like so:

        const string ArtistByRdfsLabel = @"
        PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT DISTINCT ?artist
        WHERE {{
          {{
            ?artist rdf:type <http://dbpedia.org/ontology/MusicalArtist> .
            ?artist rdfs:label ?rdfsLabel .
          }}
          UNION
          {{
            ?artist rdf:type <http://dbpedia.org/ontology/Band> .
            ?artist rdfs:label ?rdfsLabel .
          }}
          FILTER ( str(?rdfsLabel) = '{0}' )
        }}";

        string Query = String.Format(ArtistByRdfsLabel, Artist);

    I don't like the idea of keeping all these queries in the same class I use them in, so I thought I would move them into their own dedicated class to remove clutter from my RestClient class. I'm used to working with SQL Server and wrapping every query in a stored procedure, but since this is not SQL Server I'm scratching my head over what would be best for these SPARQL queries. Are there any better approaches to storing these queries, using any special C# language features (or general, non-C#-specific approaches), that I may not already know about?
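    One minimal option, sketched below (the class and method names are invented, not from the original project), is a dedicated static class that owns the raw templates and hands back ready-to-run query text, so the REST client never touches String.Format or the template itself:

        // Sketch: a central home for query templates; names are illustrative.
        internal static class SparqlQueries
        {
            private const string ArtistByRdfsLabelTemplate = @"
                PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
                PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
                SELECT DISTINCT ?artist
                WHERE {{
                  {{ ?artist rdf:type <http://dbpedia.org/ontology/MusicalArtist> . ?artist rdfs:label ?rdfsLabel . }}
                  UNION
                  {{ ?artist rdf:type <http://dbpedia.org/ontology/Band> . ?artist rdfs:label ?rdfsLabel . }}
                  FILTER ( str(?rdfsLabel) = '{0}' )
                }}";

            // Callers ask for a finished query; the formatting stays in one place.
            public static string ArtistByRdfsLabel(string artist)
            {
                return string.Format(ArtistByRdfsLabelTemplate, artist);
            }
        }

        // Usage in the REST client:
        // string query = SparqlQueries.ArtistByRdfsLabel(Artist);

    Other common homes for query text are embedded resource files (one .sparql file per query, read via Assembly.GetManifestResourceStream) or a .resx resource class; both keep the queries out of the code entirely while still shipping them inside the assembly.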

    Read the article

  • Interviewing a DBA

    - by kev
    Our company is in the process of recruiting a DBA. I have built a group of test questions, from basics such as PK and FK constraints and simple queries (FizzBuzz style) to more advanced things such as indexes, collation, isolation levels, and how to trace deadlocks. However, that is the limit of my knowledge. So my question to all the DBAs is: what is the base level of knowledge that every DBA should have? We are really looking for someone who will be able to manage our replication, analyze some of our slower-running queries (someone the devs can go to for help), and trace some of the deadlock issues we are having. Any help would be most appreciated!

    Read the article

  • Can someone explain to me C#'s coding convention?

    - by AedonEtLIRA
    I recently started working with Unity3D, primarily scripting with C#. As I normally program in Java, the differences aren't too great, but I still referred to a crash course just to make sure I am on the right track. However, my biggest curiosity with C# is that it capitalises the first letter of its method names (e.g. Java: getPrime(), C#: GetPrime() -- a.k.a. Pascal case). Is there a good reason for this? I understand from the crash course page I read that this is apparently the convention for .NET and I have no way of ever changing it, but I am curious to hear why it was done this way as opposed to the (relatively) normal camel case that, say, Java uses. Note: I understand that languages have their own coding conventions (Python methods are all lower case, which also applies to this question), but I've never really understood why this isn't formalised into a standard.

    Read the article

  • Is there a best practice / standard approach to a free trial for a web app

    - by wobbily_col
    I have an idea for a web app and would be interested in implementing it, offering a free trial of, say, 5 uses before asking people to sign up. I can think of numerous ways of doing this (off the top of my head: using cookies, logging IP addresses, limiting functionality). Is there a standard approach to this? Are there best practices? Are there any good tutorials on this? (I would prefer not to go the limited-functionality route, as it would not show what the app is capable of.)

    Read the article

  • How to handle people who lie on their resume [closed]

    - by Juliet
    Moderator comment: Please note that this is a two-year-old question that has just been migrated from Stack Overflow. Please take your time to read all the answers and ask yourself, "Would my answer add anything to this?" I'm conducting technical interviews to fill a few .NET positions. Many of the people I interview really do know .NET pretty well, but I find at least 90% embellish their skillset anywhere from "a little" to "quite drastically". Sometimes they fabricate skills relevant to the position they're applying for, sometimes they don't. Most of the people I interview, even the most egregious liars, are not scam artists. They just want to stand out from the crowd, so they drop a few buzzwords on their resume like "JBoss", "LINQ", "web services", "Django" or whatever, just to pad their skillset and stay competitive. (You might wonder whether a person who lies about those skills is just bluffing their way through a technical interview. My interviews involve a lot of hands-on coding and problem-solving -- people who attempt to bluff will bomb the hands-on coding portion in the first 3 minutes.) These are two open-ended questions, but it would really help me out when I make my recommendations to the hiring managers: Regarding interviewing etiquette, should I attempt to determine whether a person really possesses all of the skills they claim to have? Can I do this without making the candidate feel uncomfortable? Regarding the final decision, should I recommend candidates who are genuinely qualified for the positions they're applying for, even if they've fabricated portions of their skillset?

    Read the article

  • Windows driver signing

    - by Artem Smolny
    My company is developing a driver for our hardware, and now I need to sign the driver for 32-bit and 64-bit platforms. Please tell me: I need to buy an Authenticode certificate, right? Which CA should I use? DigiCert? GlobalSign? ( http://www.sslshopper.com/microsoft-authenticode-certificates.html ) Symantec? ( http://www.symantec.com/verisign/code-signing/microsoft-authenticode ) What is the difference between these CAs' offerings? And do I need to use the tools from the WDK?

    Read the article

  • Data Flow Diagrams - Difference between Lines and Arrows

    - by Howdy_McGee
    I'm currently working with Visio to create Data Flow Diagrams for a System Analysis and Design class, but I'm unsure what the difference between ------ and ------> is. I can connect two shapes (process, entity, data store) together with a line, but does a plain line connecting the two mean data flow? Do I need to explicitly use the data flow arrow to show which way data is flowing? (There don't seem to be tags for this topic -- maybe I'm in the wrong place?)

    Read the article

  • Java Application for handling records(CRUD)

    - by LivingThing
    I am new to Java EE and am faced with a tight situation here. I have to develop a Java application for handling records (CRUD) and for saving and loading an XML file concerning those records. Obviously, I won't be asking you to do this for me; what I am asking for is some hints/pointers. Initially I thought JAXB would be enough for this, but after putting a lot of time into learning it and implementing the program, I realized that it can only create the XML and read it back; for update and delete I would have to do something else. Even if update and delete weren't requirements for my project, I would still think that using JAXB alone is not a good implementation. I was wondering whether "REST with Java (JAX-RS) using Jersey" would do the trick for me?

    Read the article

  • How is time calculation performed by a computer?

    - by Jorge Mendoza
    I need to add a certain feature to a module in a given project regarding time calculation. For this specific case I'm using Java and reading through the documentation of the Date class I found out the time is calculated in milliseconds starting from January 1, 1970, 00:00:00 GMT. I think it's safe to assume there is a similar "starting date" in other languages so I guess the specific implementation in Java doesn't matter. How is the time calculation performed by the computer? How does it know exactly how many milliseconds have passed from that given "starting date and time" to the current date and time?
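    Purely as an illustration of the arithmetic (the question is about Java; the snippet below is an equivalent sketch in C#, not part of the original post): "milliseconds since the epoch" is nothing more than the current clock reading minus a fixed, agreed-upon reference instant. The operating system maintains the current clock reading (from the hardware clock at boot, tick counters while running, and usually network time synchronisation); the language library only performs the subtraction.

        // Sketch: the epoch value is just "now" minus a fixed reference point.
        using System;

        class EpochDemo
        {
            static void Main()
            {
                DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
                long millisSinceEpoch = (long)(DateTime.UtcNow - epoch).TotalMilliseconds;

                // Comparable in spirit to Java's System.currentTimeMillis().
                Console.WriteLine(millisSinceEpoch);
            }
        }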

    Read the article

  • How should I pitch moving to an agile/iterative development cycle with mandated 3-week deployments?

    - by Wayne M
    I'm part of a small team of four, and I'm the unofficial team lead (I'm lead in all but title, basically). We've largely been a "cowboy" environment, with no architecture or structure and everyone doing their own thing. Previously, our production deployments would happen every few months without a set schedule, as things were added to or removed from each developer's task list. Recently, our CIO (semi-technical but not really a programmer) decided we will do deployments every three weeks; because of this I instantly thought that adopting an iterative development process (not necessarily full-blown Agile/XP, which would be a huge thing to convince everyone else to do) would go a long way towards managing expectations properly, so there isn't this far-fetched idea that any new feature will be done in three weeks. IMO the biggest hurdle is that we don't have ANY kind of development approach in place right now (among other things, like no CI or automated tests whatsoever). We don't even use Waterfall; we use "tell Developer X to do a task, expect him to do everything and get it done". Are there any pointers that would help me start to ease us towards an iterative approach and A) get the other developers on board with it and B) get management to understand how iterative development works? So far my idea involves setting up a CI server and getting our build process automated (it takes about 10-20 minutes right now simply to build the application and put it on our development server), since pushing tests and/or TDD will be met with a LOT of resistance at this point, and constantly forcing us to break larger projects into smaller chunks that can be done iteratively in a three-week cycle. My only concern is that, unless I'm misunderstanding, an agile/iterative process may or may not release the software (depending on the project scope you might have "working" software after three weeks, but not enough of it works to let users make use of it), while I think the expectation from management is that there will always be something "ready to go" in three weeks, and that disconnect could cause problems. On that note, is there any literature that explains the agile/iterative approach from a business standpoint? Everything I've seen focuses only on the developers and how to do it; nothing seems to describe it from the perspective of actually getting buy-in from the businesspeople.

    Read the article

  • Am I permitted to use an LGPL library without releasing the source to the rest of my application, if I dynamically reference the library?

    - by user185812
    I am a bit confused as to what I am and am not allowed to do with an LGPL library that I intend to use in a small-scale commercial C++ application I am developing. My current understanding, although I don't know if it is correct, is that I am permitted to use the library without releasing the source to the rest of my application if I dynamically reference (dynamically link to) the library. Does anyone know if this is correct? Are there any restrictions on how I reference the library? Thank you! I am not a native English speaker and don't understand the licence entirely.

    Read the article

  • Difference between pseudocode and an algorithm?

    - by Vamsi Emani
    Technically, is there a difference between these two words, or can we use them interchangeably? Both of them more or less describe a logical sequence of steps to follow in solving a problem, don't they? So why do we use two such words if they are meant to mean the same thing? Or, if they aren't synonymous, what is it that differentiates them? In what contexts should we use the word "pseudocode" versus the word "algorithm"? Thanks.
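    For illustration only (this example is mine, not from the original question): one common way to draw the line is that the algorithm is the abstract recipe, pseudocode is an informal notation for writing that recipe down, and source code is a formal, executable notation for it. The same recipe, "linear search", in both notations:

        // Algorithm: linear search -- scan the list until the target is found.
        //
        // Pseudocode (one informal notation among many):
        //   for each position i in list:
        //       if list[i] equals target, answer i
        //   answer "not found"
        //
        // The same algorithm written as compilable C#:
        static class SearchExample
        {
            static int LinearSearch(int[] list, int target)
            {
                for (int i = 0; i < list.Length; i++)
                {
                    if (list[i] == target)
                        return i;   // found: report the position
                }
                return -1;          // conventional "not found" marker
            }
        }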

    Read the article

  • What are the most commonly used enterprise Java technologies, and what would you want a non-technical audience to understand about them?

    - by overstood
    I have been asked to give a presentation to a non-technical audience on what Java technologies are currently being used in the enterprise world. The goal is to give this non-technical audience the background they need to understand what engineers are talking about. It's part of a broader series of talks that I'm giving. I'm primarily a .NET and C++ dev, so I thought I'd try to get some input from some Java devs. What technologies do you use? What Java related acronyms would you like to be able to use around non-coders? What would you like non-coders to understand about them?

    Read the article

  • Apache Commons PropertiesConfiguration escapes characters on Save [migrated]

    - by Anuvrat
    I am using commons-configuration from the Apache Commons library. I have a properties file with properties like:

        blog_loc=http://my.blog.com
        blog_name="my blog name"

    I open the properties file, change the blog_name property, and save the file. These are the lines of code I use:

        PropertiesConfiguration propertyFile = new PropertiesConfiguration(propertyFileName);
        propertyFile.setProperty(blog_name, "blog name");
        propertyFile.save(propertyFileName + ".out");

    Unfortunately, in the output file certain characters get escaped, as follows:

        blog_loc=http:\/\/my.blog.com
        blog_name=\"blog name\"

    Is there any way of preventing the escaping of these characters?

    Read the article

  • How I might think like a hacker so that I can anticipate security vulnerabilities in .NET or Java before a hacker hands me my hat [closed]

    - by Matthew Patrick Cashatt
    Premise: I make a living developing web-based applications for all form factors (mobile, tablet, laptop, etc.). I make heavy use of SOA, and send and receive most data as JSON objects. Although most of my work is done on the .NET or Java stacks, I have also recently been delving into Node.js. This new stack has got me thinking: I know reasonably well how to secure applications using the known facilities of .NET and Java, but I am woefully ignorant when it comes to best practices or, more importantly, the driving motivation behind the best practices. You see, as I gain more prominent clientele, I need to be able to assure them that their applications are secure, and in order to do that, I feel I should learn to think like a malevolent hacker.

    1. What motivates a malevolent hacker: what is their prime mover? What is it that they are most after? Ultimately the answer is money or notoriety, I am sure, but I think it would be good to understand the nuanced motivators that lead to those ends: credit card numbers, damning information, corporate espionage, shutting down a highly visible site, etc.
    2. As an extension of question #1, but more specific: what are the things most likely to be sought out by a hacker in almost any application? Passwords? Financial info? Profile data that will gain them access to other applications a user has joined? Let me be clear here: this is not judgement for or against the aforementioned motivations, because that is not the goal of this post. I simply want to know what motivates a hacker, regardless of our individual judgement.
    3. What are some heuristics followed to accomplish hacker goals? Ultimately, specific processes would be great to know; however, in order to think like a hacker, I would really value your comments on the broader heuristics followed. For example: "A hacker always looks first for the low-hanging fruit such as HTTP spoofing", or "In the absence of a CAPTCHA or other deterrent, a hacker will likely run a cracking script against a login prompt and then go from there", or possibly "A hacker will try to attack a site via Foo (browser) first, as it is known for Bar vulnerability."
    4. What are the most common hacks employed when following the common heuristics? Specifics here: HTTP spoofing, password cracking, SQL injection, etc.

    Disclaimer: I am not a hacker, nor am I judging hackers (heck -- I even respect their ingenuity). I simply want to learn how I might think like a hacker so that I may begin to anticipate vulnerabilities before .NET or Java hands me a way to defend against them after the fact.

    Read the article

  • Demonstrate bad code to client?

    - by jtiger
    I have a new client who has asked me to do a redesign of their website, an ASP.NET WebForms application that was developed by another consultant. It seemed straightforward (it never is), but I took a look at the code to make sure I knew what I was in for. This application was not written well. At all. It is extremely vulnerable to SQL injection attacks, business logic is spread throughout the entire application, there is a lot of duplication, and there is dead-end code that does nothing. On top of that, it keeps throwing exceptions that are being smothered, so it all appears to be running smoothly. My job is simply to update the HTML and CSS, but much of the HTML is being generated in business logic, and it would be a nightmare for me to sort everything out. My estimates for the redesign were longer than the client was aiming for, and they are asking why so long. How can I explain to my client just how bad this code is? In their mind, the application is running great and the redesign should be a quick one-off. It's my word against the previous consultant's, so how can I give simple, concrete examples that a non-technical client would understand?
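    One concrete demonstration that tends to land with non-technical clients is SQL injection, because it can be shown live in under a minute. The pair of methods below is a generic sketch (the table, column, and method names are invented, not taken from the client's application): the first builds SQL by pasting user input into the query text, the second is the same query written safely with parameters.

        // Sketch only: illustrative names, not the client's real schema or code.
        using System.Data.SqlClient;

        static class LoginDemo
        {
            // Vulnerable: typing   ' OR '1'='1   as the password logs in with no credentials.
            public static bool LoginUnsafe(SqlConnection conn, string user, string password)
            {
                var cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM Users WHERE UserName = '" + user +
                    "' AND Password = '" + password + "'", conn);
                return (int)cmd.ExecuteScalar() > 0;
            }

            // Safer: the same query with parameters, so input is treated as data, not as SQL.
            public static bool LoginSafe(SqlConnection conn, string user, string password)
            {
                var cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM Users WHERE UserName = @user AND Password = @password",
                    conn);
                cmd.Parameters.AddWithValue("@user", user);
                cmd.Parameters.AddWithValue("@password", password);
                return (int)cmd.ExecuteScalar() > 0;
            }
        }

    Watching a login form accept ' OR '1'='1 is usually far more persuasive to a non-technical audience than any discussion of code structure or duplication.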

    Read the article

  • Calling methods on Objects

    - by Mashael
    Let's say we have a class called Automobile and an instance of that class called myCar. Why do we need to put the value that a method returns into a variable before using it? Why don't we just call the method directly? For example, why should we write:

        string message = myCar.SpeedMessage();
        Console.WriteLine(message);

    instead of:

        Console.WriteLine(myCar.SpeedMessage());

    Read the article

  • What reasons are there for not using a third party version control service?

    - by Earlz
    I've recently noticed a bit of a trend in my projects. I used to run my own SVN server on my VPS, but recently the last nail went into that coffin when my latest project was migrated from my server to a Mercurial repo on Bitbucket. What are some of the ramifications of this (disregarding the change in version control systems)? It seems like there has been a huge explosion in version control hosting: companies like Bitbucket even offer private repos for free, and GitHub and other such services are extremely cheap now. Also, by using them you get the benefit of their infrastructure's speed and stability. What reasons are there these days to host your own version control? The only real reason I can think of is if your source code is super top secret.

    Read the article

  • Syntactic sugar in PHP with static functions

    - by Anna
    The dilemma I'm facing is: should I use static classes for the components of an application just to get a nicer-looking API? Example -- the "normal" way:

        // example component
        abstract class Cache {
            abstract function get($k);
            abstract function set($k, $v);
        }

        class APCCache extends Cache {
            // ...
        }

        class Application {
            function __construct() {
                $this->cache = new APCCache();
            }

            function whatever() {
                $this->cache->set('blabla', 5);
                print $this->cache->get('blabla');
            }
        }

    Notice how ugly $this->cache->... is. And it gets way uglier when you try to make the application extensible through plugins, because then you have to pass the application instance to its plugins, and you get $this->application->cache->... With static functions:

        interface CacheAdapter {
            function get($k);
            function set($k, $v);
        }

        class Cache {
            public static $ad;

            public static function setAdapter(CacheAdapter $ad) {
                static::$ad = $ad;
            }

            public static function get($k) {
                return static::$ad->get($k);
            }

            // ...
        }

        class APCCache implements CacheAdapter {
            // ...
        }

        class Application {
            function __construct() {
                Cache::setAdapter(new APCCache);
            }

            function whatever() {
                Cache::set('blabla', 5);
                print Cache::get('blabla');
            }
        }

    Here it looks nicer because you just call Cache::get() everywhere. The disadvantage is that I lose the ability to extend this class easily, but I've added a setAdapter method to make the class extensible to some degree. I'm relying on the fact that I won't ever need to replace the cache wrapper, and that I won't need to run multiple application instances simultaneously (it's basically a site, and nobody works with two sites at the same time). So, am I doing it wrong?

    Read the article

  • How should Code Review be Carried Out?

    - by Graviton
    My previous question had to do with how to promote code review among developers. Here I am interested in how a code review session should be carried out, so that both the reviewer and the person being reviewed feel comfortable with it. I have done some code review before, but the experience sucked big time. My previous manager would come to us -- on an ad hoc basis -- and tell us to explain our code to him. Since he wasn't very familiar with the code base, I spent a huge amount of time explaining just the most basic structure of my code to him. This took a long time, and by the time we were done we were both exhausted. Then he would raise issues with my code. Most of the issues he raised were cosmetic in nature (e.g. don't use a region for this code block; change the variable name from xxx to yyy even though the latter makes even less sense; and so on). We did this for a few rounds, the review sessions didn't bring us much benefit, and we stopped. What do you have to do to make code review a natural, enjoyable, thought-stimulating, bug-fixing and mutual-learning experience?

    Read the article

  • Why do operating systems do low level stuff in C and C++? Why not just C++?

    - by Cole Johnson
    On the Wikipedia page for Windows, it states that Windows is written in Assembly for the bootloader and task switcher, and in C and C++ for the kernel routines. This confuses me, because AFAIK you can call C++ functions from an extern "C" block, as C++ is just C with extra features (all of which can be rewritten in C if you wanted to, AFAIK). I can understand using C for the kernel functions so that pure C apps can use them (like printf and such), but if those can simply be wrapped in an extern "C" block, then why code in C at all? So my question is: why would a kernel be written in both C and C++ instead of just C++?

    Read the article

  • What should I expect from a system engineer university career

    - by Trufa
    Tomorrow I'm starting a series of interviews to decide which university I should choose to get a degree in Systems Engineering. I know this is a serious university, but I would like some feedback about what I should expect, or "demand", from the university. My experience in the technology field is (obviously) limited, and I would like to know what to look at to judge whether a university might be good or not, especially in the following areas:

    - Infrastructure: what are the essentials? What are big pluses?
    - Theoretical vs practical: how practical should it be? What is a "good" mix?
    - Programming languages, frameworks, etc.: which are ideal for learning? Which are most in demand?
    - Latest technologies: what should they be teaching right now to "prove" they are up to date?
    - Qualification system: what exam methods do you think are ideal for this kind of degree -- good ol' Q&A, multiple choice, projects, a fair mix?

    What other points do you think I should care about? What isn't important? Thanks in advance. I realize this might be a very subjective topic, so I tried to make it as specific and on-topic as I could, but any recommendations are of course welcome. I also understand that none of these questions will guarantee a good university, but they might give me another reference point when the moment comes to choose.

    Read the article

  • R vs Python for data analysis

    - by The_Cthulhu_Kid
    I have been programming for about a year and I am really interested in data analysis and machine learning. I am taking part in a couple of online courses and am reading a couple of books. Everything I am doing uses either R or Python and I am looking for suggestions on whether or not I should concentrate on one language (and if so which) or carry on with both; do they complement each other? -- I should mention that I use C# in school but am familiar with Python through self-study.

    Read the article
