Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • Is a Model Driven Architecture in Language Oriented Programming (MPS) feasible at this time

    - by Steven Jeuris
    As a side project I am developing a sort of DSL in which I describe a data model and generate the desired code files from it. I believe this is called Model Driven Architecture. My partial existing implementation uses C#, CodeDOM, XML and XSLT to do this manually. I have discovered that better environments for this already exist. The one which fascinated me the most is called MPS, which follows the Language Oriented Programming paradigm. This article, written by a co-founder of JetBrains, was a real eye-opener for me. I truly believe LOP has a very good chance of becoming the next big programming paradigm once it has broader support. From my short experience with MPS, I noticed it is still mainly Java-oriented.

    My question is: how feasible is it to generate code files for other (multiple) languages instead of just Java? I don't need full language support from the start, so preferably I would implement a language in an agile way: e.g. first support only one type, then add access modifiers, and so on. Perhaps some other (free) environment already provides this out of the box. P.S.: I find it important to have a lot of control over the naming conventions and such of the generated code. This is one of the reasons why I started my own implementation.
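
    For reference, a minimal sketch of the kind of CodeDOM-based generation described above, with the model hard-coded instead of read from an XML description; the namespace, type and member names are illustrative only:

        using System;
        using System.CodeDom;
        using System.CodeDom.Compiler;
        using System.IO;

        class CodeDomSketch
        {
            static void Main()
            {
                // A one-type, one-field "model", standing in for data that a
                // real implementation would read from the XML model.
                var unit = new CodeCompileUnit();
                var ns = new CodeNamespace("Generated");
                unit.Namespaces.Add(ns);

                var type = new CodeTypeDeclaration("Customer") { IsClass = true };
                type.Members.Add(new CodeMemberField(typeof(string), "_name"));
                ns.Types.Add(type);

                // Naming conventions and formatting are controlled at this
                // stage, which is the kind of control mentioned in the P.S.
                using (var provider = CodeDomProvider.CreateProvider("CSharp"))
                using (var writer = new StringWriter())
                {
                    provider.GenerateCodeFromCompileUnit(unit, writer, new CodeGeneratorOptions());
                    Console.WriteLine(writer);
                }
            }
        }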


  • Career paths after web development?

    - by Mike
    I know this is open-ended, but I'm just curious what you've done after your web development career, or whether you've stayed loyal. I have a feeling (from what I've read and heard) that web development salaries top out at a certain amount, even after 10-15 years of experience. The reason I ask is that I graduated last summer with a BS in Chemical Engineering but have not been able to find a job in California. I've been web designing/developing since high school and thought that I should start a career, even if it's not related to my major, rather than lose more time. Even though I'd really like to have an engineering career, I don't think that will happen. Do you guys have any suggestions or experiences for choices after several years in web development, or ways to enhance such a career? Thanks!

    Update: Thanks for the responses, guys! One more question: is it likely to be accepted into an MS/PhD program if you've been out of uni for a couple of years, or with semi-related job experience? Would I be a bit of a misfit with a BS in ChemE studying CS/CompE for an MS?


  • Learning to be a good developer: what parts can you skip over?

    - by Andrew M
    I have set myself the goal of becoming a decent developer by this time next year. By this I mean full experience of the development 'lifecycle', a few good apps/sites/webapps under my belt, and most importantly being able to work at a steady pace without getting sidelined for hours by some should-know-this-already technique.

    I'm not starting from scratch. I've written a lot of HTML/CSS, SQL, JavaScript, Python and VB.NET, and studied other languages like C and Java. I know about things like OOP, design patterns, TDD, complexity, computational linguistics, pointers/references, functional programming, and other academic/theoretical matters. It's just that I can't say I've really done these things yet. So I want to get up to speed, and I want to know what things I can leave till a later date. For instance, studying algorithms and the maths behind them is interesting and all, but so far I've hardly needed to write anything but the most basic nested loops. Investigating assembly to get a clearer picture of low-level operations would be cool... but I imagine it rarely impinges on daily work. On the other hand, looking at a functional programming language might help me write programs that are more comprehensible and less prone to hidden failures (at the moment I'm finding the biggest difficulty is when the complexity of the app exceeds my capacity to understand it - for instance, passing data around was fine... until I had to start doing it with AJAX, which was a painful step up). I could spend time working through case studies of design patterns, but I'm not sure how many of them get used in 'real life'.

    I'm a programmer with basic abilities - what skills should I focus on developing? (Also, my Unix skills are very weak, as is my knowledge of Windows configuration... not sure how much time I should spend on that.)


  • .NET developer needs FoxPro advice

    - by katit
    We have a prospect with a FoxPro 2.6 system (whatever that means). Our product integrates with other systems by means of triggers (usually). We would place a couple of triggers on system X and then just pull the collected data for our use. This way there is no need to customize the customer's product, and it works great (almost real time - we poll for changes every 30 seconds). Question: Can I put triggers on FoxPro 2.6? Can I access FoxPro from .NET? Any catches/caveats?
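
    On the .NET side, here is a minimal sketch of one way this is often done, assuming the Visual FoxPro OLE DB provider (VFPOLEDB) is installed and can read the 2.6 tables - the path, table and column names are hypothetical:

        using System;
        using System.Data.OleDb;

        class FoxProPollSketch
        {
            static void Main()
            {
                // Point the provider at the directory containing the .DBF files.
                var connStr = @"Provider=VFPOLEDB.1;Data Source=C:\data\foxtables\;";
                using (var conn = new OleDbConnection(connStr))
                {
                    conn.Open();
                    // Poll for rows changed since the last sweep. This assumes
                    // the table carries some LAST_UPD column to filter on; the
                    // tables themselves have no built-in change tracking.
                    var cmd = new OleDbCommand(
                        "SELECT * FROM customers WHERE last_upd >= ?", conn);
                    cmd.Parameters.AddWithValue("", DateTime.Today);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine(reader["cust_name"]);
                    }
                }
            }
        }

    Note that the provider is 32-bit only, so the polling process would need to run as x86 - worth verifying against the actual 2.6 files before committing to this route.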


  • Looking for Application Framework Features Lists, Comparisons and Guides [closed]

    - by Blah McBlah
    I am looking for lists of the things that application frameworks can do, and for websites that have matrices, marketing content, blog articles and whatnot comparing application frameworks to each other or just selling a framework. I'm talking generally, regardless of language, operating system or client device - I want it all. I've found a few online, and would appreciate whatever sources I can glean from this site too.


  • Parallelism implies concurrency but not the other way round, right?

    - by Cedric Martin
    I often read that parallelism and concurrency are different things. Very often the answerers/commenters go as far as writing that they're two entirely different things. Yet in my view they're related, and I'd like some clarification on that. For example, if I'm on a multi-core CPU and manage to divide the computation into x smaller computations (say using fork/join), each running in its own thread, I'll have a program that is both doing parallel computation (because supposedly at any point in time several threads are going to run on several cores) and being concurrent, right? Whereas if I'm simply using, say, Java and dealing with UI events and repaints on the Event Dispatch Thread, plus running the only thread I created myself, I'll have a program that is concurrent (EDT + GC thread + my main thread, etc.) but not parallel.

    I'd like to know if I'm getting this right, and whether parallelism (on a "single but multi-core" system) always implies concurrency. Also, are multi-threaded programs running on multi-core CPUs, but where the different threads are doing totally different computations, considered to be using "parallelism"?
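
    As a concrete version of the fork/join scenario, here is a minimal C# sketch (C# standing in for the Java example) that splits a sum across one task per core - parallel because the chunks can run on separate cores, and therefore also concurrent because several tasks are in flight at once:

        using System;
        using System.Linq;
        using System.Threading.Tasks;

        class ForkJoinSumSketch
        {
            static void Main()
            {
                long[] data = Enumerable.Range(1, 10_000_000)
                                        .Select(i => (long)i).ToArray();

                // Fork: one task per core, each summing its own slice.
                int chunks = Environment.ProcessorCount;
                int size = data.Length / chunks;
                var tasks = Enumerable.Range(0, chunks).Select(c => Task.Run(() =>
                {
                    int end = (c == chunks - 1) ? data.Length : (c + 1) * size;
                    long sum = 0;
                    for (int i = c * size; i < end; i++) sum += data[i];
                    return sum;
                })).ToArray();

                // Join: wait for the partial results and combine them.
                Task.WaitAll(tasks);
                Console.WriteLine(tasks.Sum(t => t.Result));
            }
        }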


  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications that can extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.

            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);

            foreach (var file in files)
            {
                // Extract metadata from the file.

                // Validate some attributes of the file; add any validation errors
                // to an in-memory structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    var fileData = File.ReadAllBytes(file);
                    // Upload using some wrapper for an ORM or DAL.
                    someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file.
                }
            }

            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!


  • Where's Randall Hyde?

    - by user1124893
    This probably doesn't belong here, but I couldn't think of any other Stack Exchange site that would fit it. Quick question: whatever happened to Randall Hyde, author of The Art of Assembly, HLA, and other works? I ask because I was just exploring some of the content on his website and a lot of it is now gone. His website was hosted on Apple's MobileMe, and as of the writing of this question, Apple closed off all MobileMe content a few days ago. Correct me if I'm wrong, but didn't Apple warn of this a year in advance? If so, then where's Randall Hyde? Come to think of it, all of the content on his website that I have seen is several years old. A lot of it is rather useful but unfinished.


  • What's the difference between MariaDB and MySQL?

    - by Chris J. Lee
    What's the difference between MariaDB and MySQL? I'm not very familiar with either; I'm primarily a front-end developer for the most part. Are they syntactically similar? Where do the two differ? Wikipedia only mentions the difference in licensing: "MariaDB is a community-developed branch of the MySQL database, the impetus being the community maintenance of its free status under GPL, as opposed to any uncertainty of MySQL license status under its current ownership by Oracle."


  • Unit testing in Django

    - by acjohnson55
    I'm really struggling to write effective unit tests for a large Django project. I have reasonably good test coverage, but I've come to realize that the tests I've been writing are definitely integration/acceptance tests, not unit tests at all, and I have critical portions of my application that are not being tested effectively. I want to fix this ASAP. Here's my problem. My schema is deeply relational and heavily time-oriented, giving my model objects high internal coupling and lots of state. Many of my model methods query based on time intervals, and I've got a lot of auto_now_add going on in timestamped fields. So take a method that looks like this, for example:

        def summary(self, startTime=None, endTime=None):
            # ... logic to assign a proper start and end time
            # if none was provided, probably using datetime.now()
            objects = self.related_model_set.manager_method.filter(...)
            return sum(object.key_method(startTime, endTime) for object in objects)

    How does one approach testing something like this? Here's where I am so far. It occurs to me that the unit-testing objective should be: given some mocked behavior by key_method on its arguments, does summary correctly filter/aggregate to produce a correct result? Mocking datetime.now() is straightforward enough, but how can I mock out the rest of the behavior? I could use fixtures, but I've heard pros and cons of using fixtures for building my data (poor maintainability being a con that hits home for me). I could also set up my data through the ORM, but that can be limiting, because then I have to create related objects as well - and the ORM doesn't let you mess with auto_now_add fields manually. Mocking the ORM is another option, but not only is it tricky to mock deeply nested ORM methods, the logic in the ORM code gets mocked out of the test, and mocking seems to make the test really dependent on the internals and dependencies of the function-under-test. The toughest nuts to crack seem to be functions like this, which sit on a few layers of models and lower-level functions and are very dependent on the time, even though these functions may not be super complicated. My overall problem is that no matter how I slice it, my tests end up looking way more complex than the functions they are testing.


  • How does the ? make a quantifier lazy in regex

    - by Uriel Katz
    I've been looking into regex lately and figured out that the ? operator makes the *, +, or ? lazy. My question is: how does it do that? Is it that *?, for example, is a special operator, or does the ? have an effect on the *? In other words, does regex recognize *? as one operator in itself, or does regex recognize *? as the two separate operators * and ?? If *? is being recognized as two separate operators, how does the ? affect the * to make it lazy? If ? means that the * is optional, shouldn't this mean that the * doesn't have to exist at all? If so, then in a pattern like .*?, wouldn't regex just match separate letters and the whole string instead of the shorter string? Please explain, I'm desperate to understand.
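
    For what it's worth, engines treat *? as a single token: a lazy quantifier that matches as few repetitions as possible, not an optional *. A quick C# demonstration of the difference (the input string is arbitrary):

        using System;
        using System.Text.RegularExpressions;

        class LazyQuantifierDemo
        {
            static void Main()
            {
                var input = "<a><b><c>";

                // Greedy: .* takes as much as it can, so the match runs
                // from the first '<' to the last '>'.
                Console.WriteLine(Regex.Match(input, "<.*>").Value);   // <a><b><c>

                // Lazy: .*? takes as little as it can, so the match stops
                // at the first '>'.
                Console.WriteLine(Regex.Match(input, "<.*?>").Value);  // <a>
            }
        }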


  • Is comparing an OO compiler to a SQL compiler/optimizer valid?

    - by Brad
    I'm now doing a lot of SQL development at my new job, whereas before I was doing Object Oriented desktop app stuff. I keep running across very large scripts (thousands of lines) and wanting to refactor in some way. I am seeing that SQL is a different sort of beast and it's probably fine to have these big scripts for the most part, but while explaining this to me, people are also insisting that the whole idea of refactoring is bad - that stuff like the .NET compiler is actually burdened by refactored code, and that a big wall of code is more efficient and better design than code designed for reuse, readability and scalability. The other argument is that OO compilers are almost dangerously inefficient, don't have efficient memory management, or run too many CPU instructions compared to older "simpler" compilers and compared to SQL. Are these valid complaints? Even if some compiler like a C compiler is modestly more "efficient" (whatever that means at this high a level without seeing code), would you want to write applications in C over C# or Java? Is comparing an OO compiler to a SQL compiler/optimizer even valid?


  • Is there a way to add unique items to an array without doing a ton of comparisons?

    - by hydroparadise
    Please bear with me; I want this to be as language-agnostic as possible because of the languages I am working with (one of which is a language called PowerOn). However, most languages support for loops and arrays. Say I have the following list in an array:

        0x 0 Foo
        1x 1 Bar
        2x 0 Widget
        3x 1 Whatsit
        4x 0 Foo
        5x 1 Bar

    Anything with a 1 should be uniquely added to another array with the following result:

        0x 1 Bar
        1x 1 Whatsit

    Keep in mind this is a very elementary example. In reality, I am dealing with tens of thousands of elements on the old list. Here is what I have so far, in pseudocode:

        For each element in oldlist
            For each element in newlist
                Compare
                If value from oldlist equals value from newlist, break newlist loop
                If reached end of newlist with nothing equal from oldlist,
                    add value from oldlist to newlist
            End
        End

    Is there a better way of doing this? Algorithmically, is there any room for improvement? And as a bonus question, what is the O notation for this type of algorithm (if there is one)?
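
    Where a hash-backed set is available (PowerOn may not have one, but most general-purpose languages do), the inner comparison loop can be replaced with an average O(1) membership test, taking the whole pass to roughly O(n) instead of the nested loops' O(n^2) worst case. A C# sketch using the sample data above:

        using System;
        using System.Collections.Generic;

        class UniqueFilterSketch
        {
            static void Main()
            {
                // (flag, value) pairs standing in for the old list.
                var oldList = new[]
                {
                    (Flag: 0, Value: "Foo"),    (Flag: 1, Value: "Bar"),
                    (Flag: 0, Value: "Widget"), (Flag: 1, Value: "Whatsit"),
                    (Flag: 0, Value: "Foo"),    (Flag: 1, Value: "Bar"),
                };

                var seen = new HashSet<string>();  // O(1) average lookups
                var newList = new List<string>();

                foreach (var item in oldList)
                {
                    // HashSet.Add returns false when the value is already
                    // present, so it replaces the inner loop entirely.
                    if (item.Flag == 1 && seen.Add(item.Value))
                        newList.Add(item.Value);
                }

                newList.ForEach(Console.WriteLine);  // Bar, Whatsit
            }
        }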


  • How can you become a competent web application security expert without breaking the law?

    - by hal10001
    I find this to be equivalent to undercover police officers who join a gang, do drugs and break the law as a last resort in order to enforce it. To be a competent security expert, I feel hacking has to be a constant hands-on effort. Yet, that requires finding exploits, testing them on live applications, and being able to demonstrate those exploits with confidence. For those that consider themselves "experts" in Web application security, what did you do to learn the art without actually breaking the law? Or, is this the gray area that nobody likes to talk about because you have to bend the law to its limits?


  • How to attach WAR file in email from Jenkins

    - by birdy
    We have a case where a developer needs to access the last successfully built WAR file from Jenkins, but they can't access the Jenkins server itself. I'd like to configure Jenkins so that on every successful build, it sends the WAR file to this user. I've installed the ext-email plugin and it seems to be working fine: emails are being received, along with the build.log. However, the WAR file isn't being attached. The WAR file lives at this path on the server: /var/lib/jenkins/workspace/Ourproject/dist/our.war So I configured it under Post-build Actions (screenshot omitted), but although the emails are sent, the WAR file isn't attached. Do I need to do something else?


  • Should these concerns be separated into separate objects?

    - by Lewis Bassett
    I have objects which implement the interface BroadcastInterface, which represents a message that is to be broadcast to all users of a particular group. It has a setter and getter method for the Subject and Body properties, and an addRecipientRole() method, which takes a given role and finds the contact token (e.g., an email address) for each user in the role and stores it. It then has a getContactTokens() method. BroadcastInterface objects are passed to an object that implements BroadcasterInterface. These objects are responsible for broadcasting a passed BroadcastInterface object. For example, an EmailBroadcaster implementation of the BroadcasterInterface will take EmailBroadcast objects and use the mailer services to email them out. Now, depending on what BroadcasterInterface implementation is used to broadcast, a different implementation of BroadcastInterface is used by client code. The Single Responsibility Principle seems to suggest that I should have a separate BroadcastFactory object, for creating BroadcastInterface objects, depending on what BroadcasterInterface implementation is used, as creating the BroadcastInterface object is a different responsibility to broadcasting them. But the class used for creating BroadcastInterface objects depends on what implementation of BroadcasterInterface is used to broadcast them. I think, because the knowledge of what method is used to send the broadcasts should only be configured once, the BroadcasterInterface object should be responsible for providing new BroadcastInterface objects. Does the responsibility of “creating and broadcasting objects that implement the BroadcastInterface interface” violate the Single Responsibility Principle? (Because the contact token for sending the broadcast out to the users will differ depending on the way it is broadcasted, I need different broadcast classes—though client code will not be able to tell the difference.)
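
    As a concrete shape for the factory-on-the-broadcaster idea, here is a hedged C# sketch; the interface members are adapted from the description above, and the names are illustrative rather than taken from the real code:

        using System.Collections.Generic;

        // The broadcast: subject, body, and contact-token collection.
        public interface IBroadcast
        {
            string Subject { get; set; }
            string Body { get; set; }
            void AddRecipientRole(string role);
            IEnumerable<string> GetContactTokens();
        }

        // The broadcaster both creates matching broadcasts and sends them,
        // so "which transport is in use" is configured in exactly one place.
        public interface IBroadcaster
        {
            IBroadcast CreateBroadcast();
            void Broadcast(IBroadcast broadcast);
        }

        public class EmailBroadcast : IBroadcast
        {
            private readonly List<string> _tokens = new List<string>();
            public string Subject { get; set; }
            public string Body { get; set; }

            public void AddRecipientRole(string role)
            {
                // Stub: a real implementation would look up each user in the
                // role and store their email address.
                _tokens.Add(role + "@example.com");
            }

            public IEnumerable<string> GetContactTokens() => _tokens;
        }

        public class EmailBroadcaster : IBroadcaster
        {
            public IBroadcast CreateBroadcast() => new EmailBroadcast();

            public void Broadcast(IBroadcast broadcast)
            {
                foreach (var address in broadcast.GetContactTokens())
                {
                    // Hand each address and the subject/body to the mailer
                    // service here.
                }
            }
        }

    Client code then asks whatever IBroadcaster it was configured with for a new IBroadcast, fills it in, and passes it back, never knowing the concrete types - the trade-off against a separate factory being one extra responsibility on the broadcaster versus one more class to keep in sync.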


  • Developing an internet-enabled application as a Kiosk on Windows 7

    - by maple_shaft
    I am finalizing development of a desktop Java application that communicates with an outside web server, and now I need to start seriously considering deployment. This application will run on a large touchscreen all-in-one workstation running Windows 7. It will be located in a public area and thus must be LOCKED DOWN, Hannibal Lecter style. Early in the project nobody really concerned themselves with this fact, just assuming that we could buy some magical software for Windows 7 that would automatically take care of it all; however, I am finding now that this looks to be a LOT more complicated than my manager ever thought. I need to:

    - Lock down the standard hot-keys (ALT+TAB, ALT+CTRL+DEL, etc...)
    - Prevent the user from opening ANY programs other than the kiosk application and its spawned executables
    - Prevent the user from closing the application
    - Start the kiosk application on startup (this can be done without kiosk software)
    - Auto-login to Windows on reboot (Windows Updates, power failure, bratty kid pressing the power button, etc...)
    - Provide an administrator passcode escape sequence for routine maintenance by desktop support professionals

    To my dismay I am having a really hard time finding software that covers the whole package, and am finding numerous swaths of competing information on the best way to do this. I am not necessarily looking for free or open source software and am willing to pay for software that can help me achieve this. Have any of you ever written kiosk software before, and if so, what approaches have you taken?


  • How would I implement this application idea?

    - by Mike Wills
    I am a D&D gamer and a developer who has mostly worked with ASP.NET applications professionally. I have written some chat bots in Node.js, and I have only played a little with PHP but written nothing serious. I've been inspired to create a site that allows a person to keep track of characters (aka the character sheet), and I am thinking of using this as a learning opportunity to learn NoSQL and to write a full JavaScript front end. I want this application to save each value as I change it. So if I edit the armor class, it is saved immediately instead of waiting until I hit a submit button. I think that will make it easier to use while gaming, with nothing lost because I forgot to save a change. I have never done anything like this. How do you implement this style of application? Is there a tutorial or how-to that would get me on the right path? While I would really like to use ASP.NET, I don't have a Windows server to publish on (and I really can't afford to pay for a service). What language that runs on Linux would work well for this type of application? Note: I feel NoSQL would work in this case because of the sheer number of tables required to create something like this in SQL.


  • Books or help on OO Analysis

    - by Pat
    I have this course where we learn about the domain model, use cases, contracts, and eventually leap into class diagrams and sequence diagrams to define good software classes. I just had an exam and I got trashed, but part of the reason is we barely have any practical material; I spent at least two good months without drawing a single class diagram by myself from a case study. I'm not here to blame the system or the class I'm in, I'm just wondering if people know of exercise-style books that provide domain models with glossaries and system sequence diagrams, and then ask you to use GRASP to make software classes? I could really use some alone-time practicing going from analysis to the design of software entities. I'm almost done with Larman's book, "Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development, Third Edition". It's a good book, but I'm not doing anything by myself since it doesn't come with exercises. Thanks.


  • Abstract Data Type and Data Structure

    - by mark075
    It's quite difficult for me to understand these terms. I searched on Google and read a little on Wikipedia, but I'm still not sure. I've determined so far that: an Abstract Data Type is a definition of a new type, describing its properties and operations; a Data Structure is an implementation of an ADT, and many ADTs can be implemented as the same data structure. If I'm thinking about this right, an array as an ADT means a collection of elements, and as a data structure, how they are stored in memory. A stack is an ADT with push and pop operations, but can we speak of a stack data structure if I mean that I used a stack implemented as an array in my algorithm? And why isn't a heap an ADT? It can be implemented as a tree or an array.
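
    One way to see the split in code - a minimal C# sketch, with the interface playing the role of the ADT and the class being a data structure that implements it:

        using System;

        // The ADT: a contract of operations, saying nothing about storage.
        public interface IStack<T>
        {
            void Push(T item);
            T Pop();
        }

        // One data structure implementing it: a growable array.
        public class ArrayStack<T> : IStack<T>
        {
            private T[] _items = new T[4];
            private int _count;

            public void Push(T item)
            {
                if (_count == _items.Length)
                    Array.Resize(ref _items, _items.Length * 2);
                _items[_count++] = item;
            }

            public T Pop()
            {
                if (_count == 0)
                    throw new InvalidOperationException("empty stack");
                return _items[--_count];
            }
        }

        // A linked-list-backed class could implement the same IStack<T>,
        // and code written against the ADT couldn't tell the difference.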


  • How to handle people who lie on their resume [closed]

    - by Juliet
    Moderator comment: Please note that this is a two-year-old question that has just been migrated from Stack Overflow. Please take your time to read all the answers and ask yourself "would my answer add anything to this?"

    I'm conducting technical interviews to fill a few .NET positions. Many of the people I interview really do know .NET pretty well, but I find at least 90% embellish their skillset anywhere between "a little" and "quite drastically". Sometimes they fabricate skills relevant to the position they're applying for, sometimes they don't. Most of the people I interview, even the most egregious liars, are not scam artists. They just want to stand out among the crowd, so they drop a few buzzwords on their resume like "JBoss", "LINQ", "web services", "Django" or whatever, just to pad their skillset and stay competitive. (You might wonder whether a person who lies about those skills can bluff their way through a technical interview. My interviews involve a lot of hands-on coding and problem-solving - people who attempt to bluff will bomb the hands-on coding portion in the first 3 minutes.) These are two open-ended questions, but it would really help me out when I make my recommendations to the hiring managers: Regarding interviewing etiquette, should I attempt to determine whether a person really possesses all of the skills they claim to have? Can I do this without making the candidate feel uncomfortable? Regarding the final decision, should I recommend candidates who are genuinely qualified for the positions they're applying for, even if they've fabricated portions of their skillset?


  • Software bug/defect classification

    - by Dustin K
    We're trying to come up with terms that better describe our bugs/defects. To us, the term 'bug' or 'defect' is too generic and doesn't accurately reflect what is happening. For example, instead of saying that there is a bug (in the general sense), we'd rather say what type of bug (an error, or enhancement, or improvement, etc.). What names do you use for describing 'bugs'? I found http://www.softwaredevelopment.ca/bugs.shtml which has some pretty good classifications. How do you classify them?


  • Best way to use Git to maintain a web application template

    - by Darren
    I am a sole developer and I have a web application template that I have created in Visual Studio. I am using Git for source control, but only on my development machine. Presently I have a master branch, and I create branches for new features, merging them back into master as I complete each feature. I am at a point now where I am ready to use the template for deployments, and of course I want to continue adding new features via branching/merging. My question is: what would be the typical/recommended way for me to create application deployments based on the master? Should I clone the repository into a new directory that is for a particular web application? Or should I also use branching to do project development based on the main project? The projects would never be merged back into the master. However, it would be nice if I could merge future features into the master and have the ability to incorporate them into previously completed projects if desired. For more specific details of my environment: I am using TortoiseGit on Windows 7, Visual Studio 2012, and ASP.NET Web Pages. Obviously the main differences between deployments would simply be differing pages, CSS files and jQuery scripts. I found this post as I was writing this one - in order to do this, should I clone the master repository and check out from it?


  • Sending email notifications to users

    - by Web Girl
    What is the preferable way to send email notifications to users? I can do it either way, but which is better?

    1. Some C# code calls a stored procedure in the database; the stored procedure, based on some logic, pulls all the email data and sends the email using Database Mail.
    2. The C# code calls a stored procedure, gets all the necessary data back, and sends the email itself using an SMTP server, etc.

    I just wonder which way is preferable in the sense of performance and so on. The C# code is a library that would be part of the web application, so it comes down to where it's better to put the load: on the application server or the database server? The system will not be crazy busy - it's not like Amazon or something - but still it would be nice to create something that makes sense.
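
    For the second option, the sending side stays small; a minimal sketch using System.Net.Mail, with the host and addresses as placeholders:

        using System.Net.Mail;

        class NotificationSender
        {
            // Option 2's sending step: the library sends the mail itself
            // after the stored procedure has returned the data.
            public void Send(string to, string subject, string body)
            {
                using (var client = new SmtpClient("smtp.example.com", 25))
                using (var message = new MailMessage(
                    "noreply@example.com", to, subject, body))
                {
                    client.Send(message);
                }
            }
        }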


  • Author has inserted copyright into code with GNU Public License notice - implications?

    - by Nicholas Pickering
    I've found a project on GitHub that I'm interested in contributing to, which claims to be open source and has a GPL license included with it. But the original author has added a copyright notice to each source file. I'm not sure why, but I don't feel right contributing to a project that's always going to have someone else's name on it. It really breaks the community-created feel, and makes me uneasy about what the author might choose to do with the project next. What are the implications of copyrighting open source GPL code like this? What power does this give the original author over a contributor?

        # Copyright (C) 2012, 2013 __AUTHORNAME__
        # This file is part of __PROJECTNAME__.
        #
        # __PROJECTNAME__ is free software: you can redistribute it and/or modify
        # it under the terms of the GNU General Public License as published by
        # the Free Software Foundation, either version 3 of the License, or
        # (at your option) any later version.
        #
        # __PROJECTNAME__ is distributed in the hope that it will be useful,
        # but WITHOUT ANY WARRANTY; without even the implied warranty of
        # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
        # GNU General Public License for more details.
        #
        # You should have received a copy of the GNU General Public License

