Search Results

Search found 11009 results on 441 pages for 'third party integration'.


  • CodeStock 2012 Review: Eric Landes (@ericlandes) - Automated Tests into Automated Builds! How to put the right type of automated tests into the right automated builds.

    Speaker: Eric Landes
    Twitter: @ericlandes
    Blog: http://ericlandes.com/
    This was one of the first sessions I attended during CodeStock 2012. Eric’s talk focused mostly on unit testing, and on the idea that the lack of proper unit testing can be compared to stealing from an employer. His point was that if you’re not doing proper unit testing, then all of the time wasted on fixing issues that could have been detected with unit tests is like stealing money from the employer. He makes the assumption that the time spent on fixing these issues could have been better spent developing new features that drive the business. To a point I can agree with Eric’s argument regarding unit testing and stealing from a company’s perspective. I can see how he relates shifting resources from new development to bug fixes to stealing, based on the fact that the resources used to fix bugs are taken directly from other projects. He also states that boring/redundant and build/test tasks should be automated, because it reduces the chance of errors and frees up developers to do what they do best: DEVELOP! When he refers to testing, he breaks it down into four distinct types: Unit Tests, Acceptance Tests (which also include Integration Tests), Performance Tests, and UI Tests. He also recommends that developers should not go buck wild striving for 100% code coverage, because some tests may not provide a great return on investment. In his experience, 70% test coverage is a very acceptable rate.

    Read the article

  • Forking GPL dual-licensed software with business-owned copyrights

    - by Eric
    After receiving threats from the copyright holder of a dual-licensed piece of software (GPL2 and commercial) to buy the commercial version for projects in production, I am thinking of making a fork. In the case of software that is dual-licensed under the GPL2 and a commercial license, with business-owned copyrights, is forking the GPL2 version an option? Also, is forking a good way to deal with such cases? Background information: The software is a web CMS released in two versions, a free open-source GPL2 edition and a commercial edition that includes technical support and extra functionality. The problem is that now, basing their argument on the GPL2's definition of "distribution", the company holding the copyrights argues that delivering the software and some extensions to a client counts as a "distribution", and that such a "distribution" falls under the GPL2 obligation to release the custom-made extension code. The custom-made extensions are mainly designs, templates, and very specific functionality. Basically they give me three choices: buy the commercially licensed edition for the GPL-based projects in production; delete all the projects in production based on the GPL2 version; or release all the extensions as GPL2 code. The first two options are not realistic for finished projects. The third option could be fine, but as most of the extensions are very specific, cleaning up the code to make it usable by other users would be a lot of work, and I am also not sure the clients would appreciate having their website designs and specific functionality released publicly. The copyright-holding company even contacted some clients directly, giving them the "choice". I know that this is a very corporate interpretation of the GPL2, and that such an action is nothing close to legal, but as an independent developer I don't want to risk getting involved in long and tiring legal proceedings. PS: This question was first asked on Stack Overflow, where it fell out of scope and was closed; after reading the present site's FAQ, discussing software licensing seems fine here.

    Read the article

  • Project Management Software / 1 maybe 2 developers

    - by Ominus
    I am looking for software that I can use to "manage" multiple projects (5-10). Here are the features I would like, but any recommendation is welcome: bug/feature tracking on a per-project basis; some way to keep all documents, diagrams, specs, and requirements in one place with the project (better yet, a tool where all or most of these things could be authored); task management during the development phase with milestones and estimates/actuals; Git integration. I have been doing contract work and have been doing really well for myself as far as getting projects, but it's becoming VERY hard to manage everything in an efficient manner. I am trying to learn about best practices when it comes to software programming methodologies, and the more I read, the more I realize that I am just managing these projects poorly. I am getting things done, but the more I take on, the less "solid" everything is. I am afraid that if I don't get some good, solid tools and practices in place, I am going to do my customers and myself a disservice. The problem is that there are SO many options that it's hard to weed through them all. I was at a point today where I had decided that I would just code my own (there is some irony here)! Obviously everyone has their likes and dislikes; I would love to hear from some of you lone programmers how you manage everything, since our needs aren't exactly the same as what a large team might need. I also want a solution that can scale to 2, maybe 3 developers if I end up hiring some people to help with my workload. Thanks again for your usual insights!

    Read the article

  • Any empirical evidence on the efficacy of CMMI?

    - by mehaase
    I am wondering if there are any studies that examine the efficacy of software projects in CMMI-oriented organizations. For example, are CMMI organizations more likely to finish projects on time and/or on budget than non-CMMI organizations? Edit for clarification: CMMI stands for "Capability Maturity Model Integration". It's developed by the Software Engineering Institute at Carnegie-Mellon University (SEI-CMU). It's not a certification, but there are various companies that will "appraise" your organization to various levels of CMMI, such as level 2 and level 3. (I believe CMMI level 1 is an animalistic, Hobbesian free-for-all that nobody aspires to. In other words, everybody is at least CMMI level 1, even if you've never heard of CMMI before.) I'm definitely not an expert, but I believe that an organization can be appraised for CMMI levels within different scopes of work: i.e. service delivery, software development, foobaring, etc. My question is focused on the software development appraisal: is an organization that has been appraised to CMMI Level X for software projects more likely to finish a software project on time and on budget than another organization that has not been appraised to CMMI Level X? However, in the absence of hard data about software-oriented CMMI, I'd be interested in the effect that CMMI appraisals have on other activities as well. I originally asked the question because I've seen various studies conducted on software (e.g. the essays in The Mythical Man Month refer to numerous empirical studies, as does McConnell's Code Complete), so I know that there are organizations performing empirical studies of software development.

    Read the article

  • Coordinate spaces and transformation matrices

    - by Belgin
    I'm trying to get an object from object space into projected space using these intermediate matrices: The first matrix (I) is the one that transforms from object space into inertial space, but since my object is not rotated or translated in any way inside the object space, this matrix is the 4x4 identity matrix. The second matrix (W) is the one that transforms from inertial space into world space, which is just a scale transform matrix with factor a = 14.1 on all coordinates, since the inertial space origin coincides with the world space origin.

            /a 0 0 0\
        W = |0 a 0 0|
            |0 0 a 0|
            \0 0 0 1/

    The third matrix (C) is the one that transforms from world space into camera space. This matrix is a translation matrix with a translation of (0, 0, 10), because I want the camera to be located behind the object, so the object must be positioned 10 units along the z axis.

            /1 0 0  0\
        C = |0 1 0  0|
            |0 0 1 10|
            \0 0 0  1/

    And finally, the fourth matrix is the projection matrix (P). Bearing in mind that the eye is at the origin of the world space and the projection plane is defined by z = 1, the projection matrix is:

            /1 0 0   0\
        P = |0 1 0   0|
            |0 0 1   0|
            \0 0 1/d 0/

    where d is the distance from the eye to the projection plane, so d = 1. I'm multiplying them like this: (((P x C) x W) x I) x V, where V is the vertex's coordinates in column-vector form:

            /x\
        V = |y|
            |z|
            \1/

    After I get the result, I divide the x and y coordinates by w to get the actual screen coordinates. Apparently, I'm doing something wrong or missing something completely here, because it's not rendering properly. Here's a picture of what is supposed to be the bottom side of the Stanford Dragon (image omitted). Also, I should add that this is a software renderer, so no DirectX or OpenGL stuff here.
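
    For clarity, the computation I'm doing is equivalent to the following sketch (C# is used only for illustration; my renderer's actual types differ, and the vertex values are made up):

        using System;

        static class ProjectionCheck
        {
            // Multiply a 4x4 matrix by a 4-component column vector.
            static double[] Mul(double[,] m, double[] v)
            {
                var r = new double[4];
                for (int i = 0; i < 4; i++)
                    for (int j = 0; j < 4; j++)
                        r[i] += m[i, j] * v[j];
                return r;
            }

            // Multiply two 4x4 matrices.
            static double[,] Mul(double[,] a, double[,] b)
            {
                var r = new double[4, 4];
                for (int i = 0; i < 4; i++)
                    for (int j = 0; j < 4; j++)
                        for (int k = 0; k < 4; k++)
                            r[i, j] += a[i, k] * b[k, j];
                return r;
            }

            static void Main()
            {
                const double a = 14.1, d = 1.0;
                // W: uniform scale, C: translate 10 units along z, P: project onto the plane z = 1.
                double[,] W = { { a, 0, 0, 0 }, { 0, a, 0, 0 }, { 0, 0, a, 0 }, { 0, 0, 0, 1 } };
                double[,] C = { { 1, 0, 0, 0 }, { 0, 1, 0, 0 }, { 0, 0, 1, 10 }, { 0, 0, 0, 1 } };
                double[,] P = { { 1, 0, 0, 0 }, { 0, 1, 0, 0 }, { 0, 0, 1, 0 }, { 0, 0, 1 / d, 0 } };

                double[] v = { 1.0, 2.0, 3.0, 1.0 };          // sample object-space vertex (I is the identity)
                double[] clip = Mul(Mul(Mul(P, C), W), v);    // (((P x C) x W) x I) x V
                double sx = clip[0] / clip[3];                // divide x and y by w
                double sy = clip[1] / clip[3];
                Console.WriteLine($"({sx}, {sy})");
            }
        }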

    Read the article

  • LINQ to Twitter Maintenance Feedback

    - by Joe Mayo
    Originally posted on: http://geekswithblogs.net/WinAZ/archive/2013/06/16/linq-to-twitter-maintenance-feedback.aspx

    It’s always fun to receive positive feedback on your work. If you receive a sufficient amount of positive feedback, you know you’re doing something right. Sometimes, people provide negative feedback too. There are a couple of ways to handle it: come back fighting or engage for clarification. The way you handle the negative feedback depends on what your goals are.

    Feedback Approaches
    If you know the feedback is incorrect and you need to promote your idea or product, you might want to come back fighting. The feedback might just be comments by a troll or a competitor wanting to spread FUD. However, this could be the totally wrong approach if you misjudge the source and intentions of the feedback. In a lot of cases, feedback is a golden opportunity. Sometimes, a problem exists that you either don’t know about or don’t realize the true impact of. If you decide to come back fighting, you might lose the opportunity to learn something new. However, if you engage the person providing the feedback, looking for clarification, you might learn something very important. Negative feedback and its clarification can lead to the collection of useful and actionable data. In my case, something that prompted this blog post, I noticed someone who tweeted a negative comment about LINQ to Twitter. Normally, any less-than-stellar comments are from folks that need help, so I help if I can. This was different. It was like “Don’t use LINQ to Twitter”. This is an open source project, the comment didn’t come from a competing project, and it sounded more like an expression of frustration. So I engaged. Not only did the person respond, but I got some decent quality feedback. What’s also interesting is that a couple of other side conversations sprouted on the subject, which gave me more useful data.

    LINQ to Twitter Thread Actions
    Essentially, this particular issue centered around maintenance. There are actually several sub-issues at play here: dependencies, error handling, debugging, and visibility. I’ll describe each one and my interpretation.

    Dependencies
    Dependencies are where a library has references to other libraries. This means that when you build your application, you need DLLs for the entire dependency graph of your application. There are several potential problems with this, including more libraries for configuration management, potential versioning mismatches, and lack of cross-platform support. In the early days of LINQ to Twitter, I allowed developers to contribute and add dependencies, but it became very problematic (for the reasons stated). It was like a ball and chain that kept me from moving forward. So, I refactored and pulled other open source into my project to eliminate external dependencies. This lets me fix the code in my project without relying on someone else to upgrade or fix their DLL. The motivation for this came from early negative feedback that I translated into important data and acted on. Today, LINQ to Twitter has zero dependencies. Note: Rejecting good code from community members who worked hard to make your project better is a painful experience in itself. I have to point out that those contributions were not in vain, because they had a positive influence on my subsequent refactoring, which resulted in a better developer experience.

    Error Handling
    Error handling has been a problem in the past. I have this combination of supporting both synchronous and asynchronous (APM) processing that can be complex at times. Within the last 6 months, I did a fair amount of refactoring to detect errors and process them properly. I also refactored TwitterQueryException so it includes important data from Twitter. During this refactoring, I’ve made breaking changes that I felt would improve the development experience (small things like renaming a callback property to Exception, rather than Error). I think the async error handling is much better than it was a year ago. For all the work I’ve done, there is more to do. I think that a combination of more error handling support, e.g. improving semantics, and education through documentation and samples will improve the error handling story. Because of what I’ve done so far, it isn’t bad, but I see opportunities for improvement.

    Debugging
    Debugging can be painful. Here’s why: you have multiple layers of technology to navigate to figure out where the real problem is – Twitter API, security, HTTP, LINQ to Twitter, and the application. You can probably add your own nuances to that list, but the point is that debugging in this environment can be complex. I think that my plans for error handling will contribute to making the debugging process easier. However, there’s more I can do in the way of documentation and guidance. Some of the questions to be answered revolve around, when something goes wrong, how the developer figures out that there is a problem, what the problem is, and what to do about it. One example that has gone a long way toward helping LINQ to Twitter developers is the 401 FAQ. A 401 Unauthorized is the error that the Twitter API returns when a user isn’t able to authenticate, and it is one of the most difficult problems faced by LINQ to Twitter developers. What I did was read guidance from Twitter and collect techniques from my own development and from helping other developers, to compile an extensive list of reasons for the 401 and ways to fix the problem. At one time, over half of the questions I answered in the forums were to help solve 401 issues. After publishing the 401 FAQ, I rarely get a 401 question, and when I do, it’s because the person didn’t know about the FAQ. If the person is too lazy to read the FAQ, that’s not my issue, but the results in support issues have been dramatic. I think debugging can benefit from the education and documentation approach, but I’m always open to suggestions on whatever else I can do.

    Visibility
    Visibility is a nuance of the error handling/debugging discussion but is deeply rooted in comfort and control. The questions to ask in this area are: what is happening as my code runs, and how testable is the code? In support of these areas, LINQ to Twitter does have logging and TwitterContext properties that help you see what’s happening on requests. The logging functionality allows any developer to connect a TextWriter to the Log property of TwitterContext to see what’s happening. Further, TwitterContext has a Headers property to see the headers Twitter returns and a RawResults property to show the JSON string Twitter returns. From a testing perspective, I’ve been able to write hundreds of unit tests, over 600 when this post is published, and growing. If you write your own library, you have full control over all of these aspects.
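
    To make the logging and raw-results features above concrete, here is a minimal sketch (the authorizer setup and the query are illustrative assumptions; only the Log, Headers, and RawResults properties come from the description above):

        using System;
        using System.Linq;
        using LinqToTwitter;

        class VisibilityDemo
        {
            static void Main()
            {
                // Assumed credential wiring; the exact authorizer type and properties
                // depend on your LINQ to Twitter version, so treat this part as a placeholder.
                var auth = new SingleUserAuthorizer { /* consumer key/secret and access token/secret go here */ };

                var ctx = new TwitterContext(auth);
                ctx.Log = Console.Out;   // attach a TextWriter to see what happens on each request

                var tweets = (from tweet in ctx.Status
                              where tweet.Type == StatusType.Home
                              select tweet).ToList();

                Console.WriteLine(ctx.RawResults);   // raw JSON string returned by Twitter
            }
        }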
    The tradeoff here is that while you have access to the LINQ to Twitter source code and can modify it for all the visibility you want, LINQ to Twitter *will* change (which is good) and you will have to figure out how to merge that with your changes (which is hard). The fact is that this is a limitation of any 3rd party library, not just LINQ to Twitter. So, it’s a design decision where the tradeoff is between control and productivity. That said, there are things I can do with LINQ to Twitter to make the visibility story more compelling. I think there are opportunities to improve diagnostics. This would be a ton of work, because it would need to provide multi-level logging that can be tuned for production and support any logging provider you want to attach. I’ve considered approaches such as how the new Semantic Logging application block connects to Windows Error Reporting as a potential target. Whatever I do would need to be extensible without creating native external dependencies; e.g. think of how many 3rd party libraries force a dependency on a logging framework that you don’t use. So, this won’t be an easy feat, but I believe it can be part of the roadmap. I think that a lot of developers are unaware of existing visibility features, so the first step would be to provide more documentation and guidance. My thought is that this would lead to more feedback that will help improve this area.

    Summary
    Recent feedback highlights some of the items that are important to LINQ to Twitter developers, such as dependencies, error handling, debugging, and visibility. I know that there are maintenance issues that have been problems for LINQ to Twitter developers in the past. I’ve done a lot of work in this area, such as improving error handling, adding visibility features, and providing extensive API documentation. That said, there is more to be done to make LINQ to Twitter the best Twitter API experience available for .NET developers, and I welcome anyone’s thoughts on what I’ve written here or on new improvements. @JoeMayo

    Read the article

  • ReSharper 7.1 update

    - by TATWORTH
    JetBrains has announced ReSharper 7.1: a considerable update to the powerful .NET developer productivity tool for Visual Studio. They invite you to download ReSharper 7.1 and take it for a free 30-day trial. I urge you to try this excellent Visual Studio add-on. Here is their announcement: Following this update, ReSharper 7 brings even more value to all .NET developers, such as more ways to refactor, inspect, clean up, review and generate code. Feature highlights of ReSharper 7 now include:
    - Full integration with Visual Studio 2012 while maintaining support for Visual Studio 2005, 2008, and 2010.
    - Performance and bug fixes: since releasing version 7.0 this summer, we have fixed over 300 performance problems and bugs.
    - New code inspections and contract annotations for a more robust .NET code quality analysis. Sharing ReSharper code inspection results with teammates has been streamlined as well for the purposes of code review.
    - Improved tooling for .NET code maintenance, including the top-requested Extract Class refactoring that helps decrease code complexity, as well as a way to remove unused assembly references across the entire solution.
    - Enhanced code formatter: we have implemented some of the most demanded code formatter improvements so far. For example, ReSharper 7.1 is able to format XML doc comments and chained method calls.
    - Additional code exploration features helping visualize hierarchies of polymorphic members and CSS styles.
    - An extended and fine-tuned code generation toolset.
    In terms of support for specific technologies and frameworks, ReSharper 7 is on the cutting edge as well, providing:
    - Support for VB.NET refined with the Extract Class refactoring, new quick-fixes and improved IntelliSense.
    - XAML support considerably enhanced in terms of code completion, typing assistance, naming style control, and code generation.
    - An extensive pack of functionality for developers looking to create Windows Store applications for Windows 8.
    - An INotifyPropertyChanged interface support pack to improve productivity of Windows Forms, WPF and Silverlight application developers.
    - An extended web development toolset, including improvements to JavaScript support, and initial support for ASP.NET 4.5 and ASP.NET MVC 4.
    - Addition of two previously unsupported Microsoft development technologies: LightSwitch and SharePoint.
    For details on features and improvements in ReSharper 7 and a 30-day free trial, please read What's New in ReSharper 7.

    Read the article

  • b2b SOA Suite partner training November 13th & 14th Bucharest

    - by JuergenKress
    Description: Oracle SOA Suite 11g is a complete infrastructure for building, deploying, and managing composite applications and business processes. For an enterprise to extend business processes to its trading partners, it requires a platform that addresses compliance, security, visibility, scalability, and standards. The Oracle SOA Suite (Oracle B2B) is this platform. Oracle B2B, the "Edge Component", enables an enterprise to define, configure, manage, and monitor the exchange of information with its trading partners. Oracle SOA Suite, the "B2B Infrastructure", enables business process orchestration, administration, monitoring, auditing, inter-enterprise connectivity, governance and security. Together they provide a complete end-to-end business process integration platform.
    Date: 13-14 November
    Location: Oracle Room MtgRm15_6, Bucharest - Nusco Tower, Romania
    Time: Day 1: 08:30 - 17:30, Day 2: 09:00 - 17:30
    Facilitator: Krishnaprem Bhatia
    Registration: Please contact us directly. Please note that there are limited seats and confirmation will be on a first come, first served basis.
    Travel: Each delegate is responsible for his/her own travel arrangements. Please obtain approval from your manager first.
    Contact: For logistic questions, please contact Nadja Vogl.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: b2b,SOA Suite,training,education,b2b training,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Multiple vulnerabilities in Firefox

    - by Ritwik Ghoshal
    CVE | Description | CVSSv2 Base Score
    CVE-2012-3982 | Denial of service (DoS) vulnerability | 10.0
    CVE-2012-3983 | Denial of service (DoS) vulnerability | 10.0
    CVE-2012-3986 | Permissions, Privileges, and Access Controls vulnerability | 6.4
    CVE-2012-3988 | Resource Management Errors vulnerability | 9.3
    CVE-2012-3990 | Resource Management Errors vulnerability | 10.0
    CVE-2012-3991 | Permissions, Privileges, and Access Controls vulnerability | 9.3
    CVE-2012-3992 | Permissions, Privileges, and Access Controls vulnerability | 5.8
    CVE-2012-3993 | Design Error vulnerability | 9.3
    CVE-2012-3994 | Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability | 4.3
    CVE-2012-3995 | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 10.0
    CVE-2012-4179 | Resource Management Errors vulnerability | 10.0
    CVE-2012-4180 | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 10.0
    CVE-2012-4181 | Resource Management Errors vulnerability | 10.0
    CVE-2012-4182 | Resource Management Errors vulnerability | 10.0
    CVE-2012-4183 | Resource Management Errors vulnerability | 10.0
    CVE-2012-4184 | Permissions, Privileges, and Access Controls vulnerability | 9.3
    CVE-2012-4185 | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 10.0
    CVE-2012-4186 | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 10.0
    CVE-2012-4187 | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 10.0
    CVE-2012-4188 | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 10.0
    CVE-2012-4192 | Permissions, Privileges, and Access Controls vulnerability | 4.3
    CVE-2012-4193 | Design Error vulnerability | 9.3
    CVE-2012-4194 | Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability | 4.3
    CVE-2012-4195 | Permissions, Privileges, and Access Controls vulnerability | 5.1
    CVE-2012-4196 | Permissions, Privileges, and Access Controls vulnerability | 5.0
    Component: Firefox
    Product and Resolution: Solaris 10 SPARC: 145080-13, X86: 145081-12
    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page. Note: Solaris 10 patches SPARC: 145080-13 and X86: 145081-12 contain the fix for all CVEs between Firefox version 10.0.7 and 10.0.12.

    Read the article

  • What does an individual need to be aware of when signing an NDA with a client?

    - by doNotCheckMyBlog
    I am very new to the IT industry and have no prior experience. However, I came into contact with a party who is gearing up to build a mobile application, and they want me to sign an NDA (Non-Disclosure Agreement). The definition seems vague: The following definitions apply in this Agreement: Confidential Information means information relating to the online and mobile application concepts discussed and that: (a) is disclosed to the Recipient by or on behalf of XYZ; (b) is acquired by the Recipient directly or indirectly from XYZ; (c) is generated by the Recipient (whether alone or with others); or (d) otherwise comes to the knowledge of the Recipient. When they say "otherwise comes to the knowledge of the recipient", does it mean that if I think of any idea from my own creative mind which is similar to their idea, then it would be a breach of this agreement? Also, is it okay to ask them to include the application name in the definition? As it stands, it sounds to me like I must not disclose any online or mobile application concept they think of to anybody: "Confidential Information means information relating to the online and mobile application concepts discussed and that:". I am more concerned about this part: Without limiting XYZ’s rights at law, the Recipient agrees to indemnify XYZ in respect of all claims, losses, liabilities, costs or expenses of any kind incurred directly or indirectly as a result of or in connection with a breach by it or any of its officers, employees, or consultants of this Agreement. Is it really common in the IT industry to sign this kind of agreement between client and developer? Is there any particular thing I should be concerned about?

    Read the article

  • Win7 and Ubuntu lost after installing an Ubuntu 12.04 and Win7 dual-boot system; I have no OS on my laptop now

    - by abos
    Here is the procedure: In the morning I installed Ubuntu from a USB stick, directly, without configuring anything on my Win7 system. After the install completed, the Ubuntu installer told me to reboot, and everything seemed just fine. While rebooting, there was NO Ubuntu entry for me to select, and my laptop went straight to logging in to Win7. No Ubuntu entry shows up in Win7's boot configuration (Default System) either. Logging in to Ubuntu from the USB ("Try Ubuntu without installing"), I could see that Ubuntu's filesystem was already there. I formatted the disk in Win7's Disk Management and rearranged the partitions onto another disk; still no trouble with Win7. In the afternoon I tried installing and uninstalling Ubuntu a few times; there was still no way to select the Ubuntu system at boot. In the evening I made another attempt, where the installer offered three options: install Ubuntu alongside Win7, erase Win7 and install Ubuntu, or "Something else". My attempt failed while configuring partitions under the "Something else" option, and I rebooted. Now I have no system at all, with a message saying: "Reboot and Select proper Boot Device or Insert Boot Media in selected Boot device and press a key." The files on Win7's original filesystem and on the Ubuntu filesystem can still be found when I use "Try Ubuntu without installing", but I get no OS when I reboot my laptop normally.

    Read the article

  • What does SVN do better than git?

    - by doug
    No question that the majority of debates over programmer tools distill to either personal choice (by the user) or design emphasis, i.e., optimizing the design according to particular use cases (by the tool builder). Text editors are probably the most prominent example--a coder who works on Windows at work and codes in Haskell on a Mac at home values cross-platform support and compiler integration and so chooses Emacs over TextMate, etc. It's less common that a newly introduced technology is genuinely, demonstrably superior to the extant options. I wonder if this is in fact the case with version-control systems, in particular, centralized VCS (CVS, SVN) versus distributed VCS (git, hg). I used SVN for about five years, and SVN is currently used where I work. A little less than three years ago, I switched to git (and GitHub) for all of my personal projects. I can think of a number of advantages of git over Subversion (which for the most part abstract to advantages of distributed over centralized VCS), but I cannot think of one counterexample--some task (that's relevant and arises in a programmer's usual workflow) that Subversion does better than git. The only conclusion I have drawn from this is that I don't have any data--not that git is better, etc. My guess is that such counterexamples exist, hence this question.

    Read the article

  • Setting up multiple cores for apache solr for Ubuntu 12.04 and Drupal 7

    - by chrisjlee
    I'm setting up Solr locally for my development purposes and integration with Drupal 7. I'm not very familiar with Tomcat; my background has primarily been LAMP setups. So I went and installed the package provided by Ubuntu for Apache Solr following this guide:

        sudo apt-get install tomcat6 tomcat6-admin tomcat6-common tomcat6-user tomcat6-docs tomcat6-examples
        sudo apt-get install solr-tomcat

    I've got that working. The apt-get package manager does a great job and allows me to set up Solr, but with only one core. What steps need to be taken to enable a multi-core setup for Apache Solr? Below is my solr.xml file (sudo nano /var/lib/tomcat6/conf/Catalina/localhost/solr.xml):

        <!-- Context configuration file for the Solr Web App -->
        <Context path="/solr" docBase="/usr/share/solr"
                 debug="0" privileged="true"
                 allowLinking="true" crossContext="true">
          <!-- make symlinks work in Tomcat -->
          <Resources className="org.apache.naming.resources.FileDirContext" allowLinking="true" />
          <Environment name="solr/home" type="java.lang.String" value="/usr/share/solr" override="true" />
        </Context>
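
    For reference, a multi-core setup for this era of Solr is usually driven by a solr.xml placed in the Solr home directory (/usr/share/solr in the setup above). The sketch below is an assumption, not a tested config: the core names and instanceDir paths are made up, and each instanceDir needs its own conf/ directory (solrconfig.xml and schema.xml) copied from the single-core install.

        <solr persistent="true">
          <cores adminPath="/admin/cores">
            <!-- one core per Drupal site; each instanceDir holds its own conf/ -->
            <core name="site1" instanceDir="site1" />
            <core name="site2" instanceDir="site2" />
          </cores>
        </solr>

    Drupal's Solr module would then be pointed at a per-core URL such as http://localhost:8080/solr/site1 instead of the bare /solr path.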

    Read the article

  • How can I include my derived class type name in the serialized JSON?

    - by ChrisD
    Sometimes working with the JS serializer is easy, sometimes it's not. When I attempt to serialize an object that is derived from a base, the serializer decides whether or not to include the type name. When it's present, the type name is represented by a __type attribute in the serialized JSON, like this:

        {"d":{"__type":"Commerce.Integration.Surfaces.OrderCreationRequest","RepId":0}}

    The missing type name is a problem if I intend to ship the object back into a web method that needs to deserialize it. Without the type name, deserialization will fail and result in an ugly web exception. The solution, which feels more like a work-around, is to explicitly tell the serializer to ALWAYS generate the type name for each derived type. You make this declaration by adding a [GenerateScriptType()] attribute for each derived type to the top of the web page declaration. For example, assuming I had 3 derivations of OrderCreationRequest - PersonalOrderCreationRequest, CompanyOrderCreationRequest, and InternalOrderCreationRequest - the code-behind for my web page would be decorated as follows:

        [GenerateScriptType(typeof(PersonalOrderCreationRequest))]
        [GenerateScriptType(typeof(CompanyOrderCreationRequest))]
        [GenerateScriptType(typeof(InternalOrderCreationRequest))]
        public partial class OrderMethods : Page
        {
            ...
        }

    With the type names generated in the serialized JSON, the serializer can successfully deserialize instances of any of these types passed into a web method. Hope this helps you as much as it did me.
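
    To round out the picture, a sketch of the receiving side might look like the following. The method name, its behavior, and the namespace of the derived request types are assumptions made for this example; only the attributes and the type names above come from the scenario described.

        using System.Web.Services;
        using System.Web.Script.Services;
        using Commerce.Integration.Surfaces;   // assumed namespace for the request types

        [GenerateScriptType(typeof(PersonalOrderCreationRequest))]
        [GenerateScriptType(typeof(CompanyOrderCreationRequest))]
        [GenerateScriptType(typeof(InternalOrderCreationRequest))]
        public partial class OrderMethods : System.Web.UI.Page
        {
            [WebMethod]
            [ScriptMethod]
            public static string CreateOrder(OrderCreationRequest request)
            {
                // The serializer instantiates the derived type named in "__type",
                // so this reports e.g. "PersonalOrderCreationRequest".
                return request.GetType().Name;
            }
        }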

    Read the article

  • How to configure KDE default settings for a new user of a group?

    - by Adobe
    I'm a sysadmin on a Kubuntu 11.10 machine. Where do I configure the basic config for a new user (say, belonging to the group "users")? Edit 1: I want to configure languages - currently my new users get the English and Bulgarian languages. I want them to get English and Russian, and also to have Alt-CapsLock set as the input-language-switching combination. Edit 2: How do I configure things in /usr/share/kde4? When I do kdesudo systemsettings and save configurations, only root's settings get changed, not the ones in /usr/share/kde4. Edit 3: A new user gets the /etc/skel files controlling bash behaviour and appearance. What about the KDE new-user default files - where are they stored? Edit 4: Oh, I found some hints: kde4-config --path config gives a list of folders (separated by colons) where KDE looks for configs. My machine responded with:

        /home/boris/.kde/share/config/
        /etc/kde4/
        /usr/share/kubuntu-default-settings/kde4-profile/default/share/config/
        /usr/share/kde4/config/
        /usr/share/desktop-base/profiles/kde-profile/share/config/

    It looks like the third line is where KDE takes the default options from. So I found these zillions of settings - but no GUI way to configure them. Edit 5: Finally, I've created a dummy user, configured it, and wrote a script which copies its settings to a given user (or users). The trick is to chown the dot files after transferring them from one user to another. I've tested it - it works fine.
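
    The script is roughly a sketch like this (the dummy user name kdetemplate and the choice of copying only ~/.kde are assumptions):

        #!/bin/bash
        # Usage: sudo ./copy-kde-defaults.sh newuser
        TEMPLATE=kdetemplate        # the pre-configured dummy user
        NEWUSER="$1"

        # Copy the template user's KDE settings, then fix ownership.
        cp -a "/home/$TEMPLATE/.kde" "/home/$NEWUSER/"
        chown -R "$NEWUSER:$NEWUSER" "/home/$NEWUSER/.kde"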

    Read the article

  • Wired connection to Windows ICS stopped working

    - by cmpickle
    I have had my Linux box connected to the Internet through a Windows computer, by wiring them together with a Cat5 cable while the Windows machine connects wirelessly to a router. This setup has worked since I got the Linux box, but it stopped working just yesterday. The changes that occurred were: I changed the networking so that the Linux box and the Windows computer were connected through a second router, so during that time the Linux box went through a router to reach the Windows computer, which was still connected wirelessly to the first router. The Internet didn't work on the Linux box while the second router was involved; when I removed it and directly connected the Linux box and the Windows machine, the Internet worked again. Another thing that changed was that I had disconnected my Windows computer from all network connections and deleted them to make my Ethernet work better. Also around that time my dad had me connect the Windows machine to a third router, and he changed some settings in it. After reconnecting this time, the connection between the router and Windows would not establish. If anyone has any ideas as to why this is, I would greatly appreciate it!

    Read the article

  • Triple-display setup using AMD drivers

    - by Halik
    I am currently running a dual display setup with nVidia 8800GTS video card, on a Ubuntu 12.10 box. The current setup uses nVidia TwinView to render the image on a 1920x1200 display and 1600x1200 one. I'm planning to add a third, 1280x1024 display to the setup. The change will require me to upgrade my GFX card to one supporting triple displays. I'll probably go with Sapphire Radeon 7770 (FLEX edition, to avoid additional active DP-DVI adapters). Before I invest in new GFX I wanted to ask - how well the AMD drivers will support such a setup. It does not matter whether it's fglrx or the OSS ones. If I remember correctly, when running Fedora on a Radeon x800, I had 'void' areas above and below the working area on my second display. The desktop was rendered in 1920+1280 width and 1200 height (which left 176px of vertical space accessible for my cursor and windows but not displayed on the screen - I'd prefer to avoid that). It may have very well been my misconfiguration back then. Generally, are there any solutions from AMD on par with TwinView? Or is it a non-issue at all? Also, I'm wondering about the usual stuff - hardware h264 decoding support, glitch-free flash support, any issues with Compiz/Unity?

    Read the article

  • Do you keep intermediate files under version control?

    - by Subb
    Here's an example with a Flash project, but I'm sure a lot of projects are like this. Suppose I create an image with Photoshop. I then export this image as a JPEG for integration in Flash. I compile the FLA as an asset library, which is then used in my Flash Builder project to produce the final SWF. So it goes like:

        psd => jpg -> fla => swc -> Flash Builder project => swf
        (=> : produces, -> : is used in)

    The psd, fla, and Flash Builder project are source files: they are not the result of some process. The jpg and swc are what I would call "intermediate" files. They are the product of one (or more) source file(s). The swf is the final result. So, would you keep those intermediate files under version control? How do you deal with them?

    Read the article

  • A Virtual Seat at the Architect’s Table

    - by Bob Rhubart
    I always have fun producing the Arch2Arch podcasts, but the latest batch was all that and a bag of chips, since I was required to do absolutely no preparation and very little talking, and since the conversation was reminiscent of those I’ve had with various architects (you know who you are) in various watering holes: free-ranging, extemporaneous, and far, far from dull. The three most recent programs were recorded during a virtual mini meet-up of architects back in February. You’ll find more detail here, but in a nutshell, I invited several previous Arch2Arch panelists to join me on Skype to talk about whatever was on their minds. The resulting conversation yielded the three latest programs. Check them out – it’s like you’re sitting at the table. Listen to Part 1 | Listen to Part 2 | Listen to Part 3. The conversation begins with the participants’ responses to my challenge to fill in the blank in the sentence “Most conversations about Enterprise Architecture are too ____.” From there the conversation morphed into a discussion of the sheer joy of finding funding for architecture projects. The architects seated at the virtual table in these programs are:
    Todd Biske, a veteran enterprise architect and the author of the book SOA Governance, from Packt Publishing. (LinkedIn | Twitter | Blog | Oracle Mix)
    Jordan Braunstein, an Oracle ACE Director and the Business Integration and Architecture Partner at TUSC. (Blog | Twitter | LinkedIn | Oracle Mix)
    Basheer Khan, also an Oracle ACE Director, and the founder and CEO of Innowave Technology. (Blog | LinkedIn | Twitter | Oracle Mix)
    Pat Shepherd, an enterprise architect with the Oracle Enterprise Solutions Group. (Oracle Mix | LinkedIn | Blog)
    Coming Soon: I was so pleased with the results of this meet-up format that I did the same thing for the next series of programs. These free-ranging conversations feature a different group of participants, covering a different set of topics, including the fear of SOA, the misunderstanding and misinformation behind that fear, and the idea of beauty in architecture. Yeah, you read that right. So stay tuned: RSS. Technorati Tags: oracle,otn,enterprise architecture,podcast,arch2arch,meet-up del.icio.us Tags: oracle,otn,enterprise architecture,podcast,arch2arch,meet-up

    Read the article

  • C# class architecture for REST services

    - by user15370
    Hi. I am integrating with a set of REST services exposed by our partner. The unit of integration is at the project level, meaning that for each project created on our partner's side of the fence they will expose a unique set of REST services. To be more clear, assume there are two projects - project1 and project2. The REST services available to access the project data would then be:

        /project1/search/getstuff?etc...
        /project1/analysis/getstuff?etc...
        /project1/cluster/getstuff?etc...
        /project2/search/getstuff?etc...
        /project2/analysis/getstuff?etc...
        /project2/cluster/getstuff?etc...

    My task is to wrap these services in a C# class to be used by our app developer. I want to make it simple for the app developer and am thinking of providing something like the following class:

        class ProjectClient
        {
            SearchClient _searchclient;
            AnalysisClient _analysisclient;
            ClusterClient _clusterclient;

            string Project { get; set; }

            ProjectClient(string _project)
            {
                Project = _project;
            }
        }

    SearchClient, AnalysisClient and ClusterClient are my classes to support the respective services shown above. The problem with this approach is that ProjectClient will need to provide public methods for each of the APIs exposed by SearchClient, etc...

        public void SearchGetStuff()
        {
            _searchclient.getStuff();
        }

    Any suggestions on how I can architect this better?

    Read the article

  • radeon display driver clones monitors while using Xinerama

    - by gregmuellegger
    I'm trying to get my two Radeon HD 4770 cards working with three monitors. Xinerama works so far, in the sense that I have two fully working monitors where I can move windows from one to the other. My problem now is that my third monitor is a clone of my second monitor (displaying the exact same thing). These monitors are connected to the same graphics card ("Screen Middle" and "Screen Right" in the xorg.conf below). Here is my xorg.conf:

        Section "ServerLayout"
            Identifier  "ThreeMonitors"
            Screen      "Screen Left" 0 0
            Screen      "Screen Middle" RightOf "Screen Left"
            Screen      "Screen Right" RightOf "Screen Middle"
            Option      "Xinerama"
        EndSection

        Section "Monitor"
            Identifier  "Monitor Left"
            Option      "DPMS"
        EndSection

        Section "Monitor"
            Identifier  "Monitor Middle"
            Option      "DPMS"
        EndSection

        Section "Monitor"
            Identifier  "Monitor Right"
            Option      "DPMS"
        EndSection

        Section "Device"
            Identifier  "Device Left"
            Driver      "radeon"
            VendorName  "ATI Technologies Inc"
            BoardName   "ATI Radeon HD 4770 [RV740]"
            BusID       "PCI:3:0:0"
            Screen      0
        EndSection

        Section "Screen"
            Identifier  "Screen Left"
            Device      "Device Left"
            Monitor     "Monitor Left"
            SubSection "Display"
                Depth   24
            EndSubSection
        EndSection

        Section "Device"
            Identifier  "Device Middle"
            Driver      "radeon"
            VendorName  "ATI Technologies Inc"
            BoardName   "ATI Radeon HD 4770 [RV740]"
            BusID       "PCI:2:0:0"
            Screen      0
        EndSection

        Section "Screen"
            Identifier  "Screen Middle"
            Device      "Device Middle"
            Monitor     "Monitor Middle"
            SubSection "Display"
                Depth   24
            EndSubSection
        EndSection

        Section "Device"
            Identifier  "Device Right"
            Driver      "radeon"
            VendorName  "ATI Technologies Inc"
            BoardName   "ATI Radeon HD 4770 [RV740]"
            BusID       "PCI:2:0:1"
            Screen      1
        EndSection

        Section "Screen"
            Identifier  "Screen Right"
            Device      "Device Right"
            Monitor     "Monitor Right"
            SubSection "Display"
                Depth   24
            EndSubSection
        EndSection

    I'm using a fresh Kubuntu 10.10 installation with proposed-updates enabled, since that repo contains an xorg fix for using multiple graphics cards. I hope someone can help me out. Very many thanks!!

    Read the article

  • Alternatives to type casting in your domain

    - by Mr Happy
    In my domain I have an entity Activity which has a list of ITasks. Each implementation of this task has its own properties besides the implementation of ITask itself. Now each operation of the Activity entity (e.g. Execute()) only needs to loop over this list and call an ITask method (e.g. ExecuteTask()). Where I'm having trouble is when a specific task's properties need to be updated. How do I get an instance of that task? The options I see are: (1) Get the Activity by Id and cast the task I need, which will sprinkle my code with either Tasks.OfType<SpecificTask>().Single(t => t.Id == taskId) or Tasks.Single(t => t.Id == taskId) as SpecificTask; or (2) make each task unique in the whole system (make each task an entity), and create a new repository for each ITask implementation. I don't like either option: the first because I don't like casting (I'm using NHibernate and I'm sure this will come back and bite me when I start using lazy loading, which NHibernate currently implements with proxies), and the second because there are, and will be, dozens of different kinds of tasks, which would mean I'd have to create as many repositories. Am I missing a third option here? Or are any of my objections to the two options not justified? How have you solved this problem in the past?

    Read the article

  • Please tell us how you made Agile work for you

    - by Paul
    I've been seeing many questions related to Agile. There seems to be confusion between the people who are doing Agile successfully and those of us who don't understand it. So I'm wondering if some of the successful teams would be willing to give the rest of us some examples of how you succeeded. Some of the things I wonder about: What steps did you use (i.e. talk to users, mock up, tests, code, testing, whatever)? What tools helped you? Did you generate any artifacts other than a working implementation? How did you prevent spaghetti architecture/code? How do you pass knowledge along to new team members, or is the team stable for the project? How did you determine exit criteria, or was it open ended (scope of project)? Did you do this as contracting, and if so, how did you develop a contract up front? Did the business do any up-front work, or did they come to the table with "We want to implement a 'bleh bleh blah'"? What types of tests did you use - unit, integration, UAT - or did the process make some or all of those unnecessary? Bonus: Do you have any situations or links to "How To" Agile articles, books, etc.? The wiki describes the what but not the how (to the uninitiated). At least to me, this is not a duplicate.

    Read the article

  • Errors in ~/.xsession-errors

    - by Kuberan Naganathan
    I'm getting errors in ~/.xsession-errors. I'm running Ubuntu 12.04. Many apps fail to run, without any mention of the problems in the .xsession-errors file. I looked around and tried to resolve the issues myself, but have failed so far. I have to say it's possible that the issue is related to me mounting /home on another partition. (I say possibly because stuff worked OK for a while.) Fortunately my .xsession-errors file is small enough to post here. Thanks in advance for the help:

        gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
        gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
        gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
        gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
        Backend : gconf
        Integration : true
        Profile : unity
        Adding plugins
        Initializing core options...done
        (gnome-settings-daemon:2547): color-plugin-WARNING **: failed to get edid: unable to get EDID for output
        (gnome-settings-daemon:2547): color-plugin-WARNING **: unable to get EDID for xrandr-default: unable to get EDID for output
        (gnome-settings-daemon:2547): color-plugin-WARNING **: failed to reset xrandr-default gamma tables: gamma size is zero
        Initializing composite options...done
        Initializing opengl options...done
        Initializing decor options...done
        ** Message: applet now removed from the notification area
        Initializing vpswitch options...done
        Initializing snap options...done
        Initializing mousepoll options...done
        Initializing resize options...done
        Initializing place options...done
        Initializing move options...done
        Initializing wall options...done
        Initializing grid options...done
        I/O warning : failed to load external entity "/home/kuberan/.compiz/session/10754cf696d335e98e13471376531156900000024960034"
        Initializing session options...done
        Initializing gnomecompat options...done
        Initializing animation options...done
        Initializing fade options...done
        Initializing unitymtgrabhandles options...done
        Initializing workarounds options...done
        Initializing scale options...done
        compiz (expo) - Warn: failed to bind image to texture
        Initializing expo options...done
        Initializing ezoom options...done
        ** Message: using fallback from indicator to GtkStatusIcon
        (compiz:2560): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
        Initializing unityshell options...done
        Setting Update "main_menu_key"
        Setting Update "run_key"
        Setting Update "icon_size"
        ** Message: moving back from GtkStatusIcon to indicator

    Read the article

  • International Pricing of Software [closed]

    - by arachnode.net
    I operate a small company that charges $99 for a piece of software. I'd like to know what would be a fair price for non-US customers. Today I sold a license to a party in South Africa. He told me he had been watching the project for two years while the business justification for the purchase was being made, as SA's currency is nine times weaker than the US dollar. I found this resource detailing how much a Big Mac costs in various countries: http://howmuchatyourplace.com/how_much_does/Big%20Mac_cost.php I realize that the cost of producing a Big Mac varies from locale to locale, as does the demand for one. I am aware that many software companies charge prices in local currencies that equate to the price in US dollars. I am aware that my costs remain fixed, and obviously I cannot discount the rate at which my time costs me. I'm OK with earning less per sale, as I would rather get my software onto the desktops of those that need it than have them try to write it themselves. Support is light, and I can usually point a user to an existing blog or forum post. Being a resident of Hawaii, I am aware that certain goods and services cost more here: power is up to six times as much per kWh as it is in, say, Seattle, and wages are approximately 60% of what they are elsewhere for my profession (programmer). I'd like to offer my software at a price that would be fair for everyone around the globe. If a currency is 2 foreign units to 1 US dollar, goods and services cost 50% more, and pay for an equivalent job is 50% of what it is here, should I charge, say, $50 instead of $99? Is there a resource that would allow me to input a price in US dollars and adjust it for a list of international locations?

    Read the article
