Search Results

Search found 31762 results on 1271 pages for 'js future software'.


  • Mobile HCM: It’s not the future, it is right now

    - by Natalia Rachelson
    A guest post by Steve Boese, Director Product Strategy, Oracle

    I'll bet you reached for your iPhone or Android or BlackBerry and took a quick look at email or Facebook or last night's text messages before you even got out of bed this morning. Come on, admit it, it's ok, you are among friends here. See, feel better now? But seriously, the incredible growth and near-ubiquity of increasingly powerful, capable mobile devices that are, for many of us, essential to daily life has profoundly changed the way we communicate, consume information, socialize, and, more and more, conduct business and get our work done. And if you doubt that profound change has happened, just think for a moment about the last time you misplaced your iPhone. The shivers, the cold sweats, the panic... We have all been there. And indeed your personal experiences with mobile technology echo throughout the world. Here are a few data points to consider: market research firm IDC estimates 1.8 billion mobile phones will be shipped in 2012; a recent Pew study reports 46% of Americans own a smartphone of some kind; and in the USA, ownership of tablets like the iPad has doubled from 10% to 19% in the last year.

    So truly, for the Human Resources leader, the question is no longer "Should HR explore ways to exploit mobile devices and their always-on nature to better support and empower the modern workforce?", but rather "How can HR best take advantage of smartphone and tablet capability to provide information, enable transactions, and enhance decision making?". Even though moving HCM applications to mobile devices seems inherently logical given today's fast-moving and mobile workforces, and promises to deliver incredible value to the organization, HR leaders also have to consider many factors before devising their Mobile HCM strategy and embarking on mobile HR technology projects. Here are just some of the important considerations for HR leaders as you build your strategies and evaluate mobile HCM solutions:

    - Does your organization provide mobile devices to the workforce today, and if so, will the current set of deployed devices have the necessary capability and ecosystems to support your mobile HCM initiatives?
    - Will you allow workers to use or bring their own mobile devices (commonly abbreviated as "BYOD"), and if so, are your IT and Security organizations in agreement and capable of supporting that strategy?
    - Do you know which workers need access to mobile HCM applications? Often mobile HCM capability flows down in an organization, with executives and other "road-warrior" types having the most immediate needs, followed by field sales staff, project managers, and even potential job candidates.

    But just as an organization will have to spend time understanding "who" should have access to mobile HCM technology, the "what" of how the solutions should be deployed to these groups will also vary. What works and makes sense for the executive (company-wide dashboards and analytics on an iPad) might not be as relevant for a retail store manager (employee schedules, location-level sales and inventory data, transaction approvals, etc.). With Oracle Fusion HCM, we are taking an approach to mobile HR that encompasses not just the mobile solution needs of the various types of worker, but also incorporates the fundamental attributes of great mobile applications: the ability to support end-to-end transactions, apps that respond with lightning-fast speed, functions that are embedded in a worker's daily activities, and features that can be mashed up easily with other business areas like Finance and CRM. Finally, and perhaps most importantly for the Oracle Fusion HCM team, delivering mobile experiences that truly enhance, enable, and empower the mobile workforce, and that deliver on the design mantras of the best-in-class consumer applications, continues to shape and drive design decisions. Mobile is no longer the future, it is right now, and the cutting-edge HR leader of today will need to consider how mobile fits her HCM technology strategy from here on out. You can learn more about our ideas and plans for Oracle Fusion HCM mobile solutions at https://fusiontap.oracle.com/.

    Read the article

  • Oracle Utilities Application Framework future feature deprecation

    - by Paula Speranza-Hadley
    From time to time, existing functionality is replaced with alternative features to offer greater flexibility and standardization. In Oracle Utilities Application Framework V4.2.0.0.0 the following features are being announced for deprecation in the next release, or have been previously announced and are not delivered with this version of the Oracle Utilities Application Framework:

    - No SQL Server support: Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with any support for SQL Server.
    - No MPL support: Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with the Multi-Purpose Listener (MPL) component of the XML Application Integration (XAI) component. Customers using the MPL should migrate to Oracle Service Bus.
    - No provided Crystal Reports/Business Objects interface: Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with a supported Crystal Reports/Business Objects interface. This facility is now available as a downloadable customization for existing or new customers; responsibility for maintenance and new features now rests with the individual customer.
    - XAI Servlet deprecation: the XAI servlets (xaiserver and classicxai) will be removed in the next release of the Oracle Utilities Application Framework. Customers are encouraged to migrate to the native Web Services support as outlined in the XAI Best Practices whitepaper available from My Oracle Support (Doc Id: 942074.1).
    - ConfigLab deprecation: the ConfigLab facility will be removed in the next release of the Oracle Utilities Application Framework for the products it ships with. Customers are recommended to migrate to the Configuration Migration Assistant, which provides the same functionality and more.
    - Archiving deprecation: the inbuilt Archiving has been removed from Oracle Utilities Application Framework V4.2.0.0.0 or above for the products it ships with. Customers considering an archiving solution should migrate to the Information Lifecycle Management based solution provided for their product.
    - DISTRIBUTED batch execution mode deprecation: the DISTRIBUTED execution mode used by the batch component of the Oracle Utilities Application Framework will be deprecated in the next release. Customers using DISTRIBUTED mode should migrate to CLUSTERED mode as outlined in the Batch Best Practices For Oracle Utilities Application Framework Based Products whitepaper available from My Oracle Support (Doc Id: 836362.1).
    - XAI Schema Editor deprecation: the XAI Schema Editor, a component of the Oracle Utilities Software Development Kit, will be removed in the next release of the Oracle Utilities Application Framework. Customers should migrate their existing schemas to Business Object based schemas and use the browser-based Schema Editor instead.

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed computing, and more importantly "-as-a-Service" models of computing, have a different cost model. This sounds obvious on the surface, but it's often forgotten during the design and coding phase of a project. In on-premises computing, we're used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or "sunk" cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you've already paid for, we don't often have to think about bandwidth, hits on the data store or the amount of compute we use; we just know more is better.

    In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don't buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later: saving that up-front cost allows you to invest it in other things. It's not just that you're using things that now cost money; the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie the cost of a series of clicks back to what a user will pay to perform them, you can set a profit margin that is easy to track.

    Here's a case in point. Assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don't monitor the path of the application, you may not know what you are really using. Since you're paying by the size of the instance, it's best to maximize it all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, and moving part of the data into Windows Azure tables not only increased the speed of the application, but reduced the cost and more closely tied the cost to the profit.

    The key is this: from the very outset, the design, make sure you include metrics to measure the cost/performance (sometimes these are the same) of your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try to force the application design you have on-premises into your new application structure.
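    As a rough illustration of how those design decisions turn into dollars, here is a minimal C# sketch of a pay-as-you-go cost model. Every rate below is a made-up placeholder, not real Windows Azure pricing, and the two scenarios mirror the case above: one maximized large instance versus two smaller instances plus a cache.

        using System;

        class CostModel
        {
            // Assumed example rates; substitute your provider's actual pricing.
            const decimal StorageGbRate   = 0.10m; // per GB stored, per month
            const decimal TransactionRate = 0.01m; // per 10,000 storage transactions
            const decimal EgressGbRate    = 0.15m; // per GB sent out

            static decimal MonthlyCost(decimal instanceHourRate, int instances,
                                       double storageGb, long transactions, double egressGb)
            {
                const int hoursPerMonth = 730;
                return instances * hoursPerMonth * instanceHourRate
                     + (decimal)storageGb * StorageGbRate
                     + transactions / 10000m * TransactionRate
                     + (decimal)egressGb * EgressGbRate;
            }

            static void Main()
            {
                // One large instance (made-up $0.48/hr), chatty uncached data access:
                Console.WriteLine(MonthlyCost(0.48m, 1, 50, 30000000, 200));
                // Two small instances (made-up $0.12/hr each) plus a cache that
                // cuts storage transactions and egress:
                Console.WriteLine(MonthlyCost(0.12m, 2, 50, 3000000, 120));
            }
        }

    The point is not the numbers but that the model exists at design time: once each click maps to a measurable charge, the cost of a feature can be compared directly against what customers pay for it.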

    Read the article

  • Migrating to Natty (or any other future versions of ubuntu)

    - by Nik
    I am hoping that this question will help other Ubuntu users when migrating to a newer version of Ubuntu. It should have all the info that they need, so please, when you answer, try to phrase things as points for easy understanding. I understand that some of the questions I ask might have been asked before by other users; in that case, just provide the links to those questions. I am running Ubuntu 10.10 Maverick Meerkat, in case that is important. I can say for sure that a clean install is definitely better than an upgrade, since it gives you an opportunity to clean your system and get a fresh start. However, some of us like to retain certain software configurations or files. The questions are as follows:

    1. How do you save the configuration files of applications like Thunderbird and Firefox, so that you can basically drop them into the new version of Ubuntu? (Thunderbird, for instance, has all my mail, so I definitely want to back up its configuration and then use it in the new installation.)
    2. I have some applications like MATLAB and Maple (Java-based) installed. When I migrate, can I just copy the entire installation folder to the new version of Ubuntu? Would it still work as it does now?
    3. When doing a backup, which folders should be backed up? Obviously my personal files would be backed up, but other than that, is it necessary to back up stuff in the home folder, /usr/bin, etc.?
    4. I have BURG installed. I am guessing it would be erased when I do a clean install, along with the program's configuration and everything. How can I back it up?
    5. I am dual booting Ubuntu alongside Windows 7. When I perform the clean install of Ubuntu, would GRUB (the bootloader) be removed and in any way jeopardize my Windows installation?
    6. Over time I have added a lot of PPAs, which are of course compatible with my current Ubuntu version. How do I make a backup of all my PPAs, and would they be compatible with the newer version of Ubuntu when I restore them?

    I hope this covers all the questions or doubts a user might face when thinking about performing a clean install of his system. If I missed anything, please mention it as a comment and I will add it to my answer.
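    A minimal command-line sketch of the backup side of questions 1, 3 and 6, assuming a single-user machine; the destination folder and the chosen dot-directories are illustrative only:

        # Create a place for the backups:
        mkdir -p ~/backup

        # Question 1: most applications keep their settings in dot-directories
        # under $HOME (Firefox in ~/.mozilla, Thunderbird in ~/.thunderbird):
        tar czf ~/backup/app-configs.tar.gz ~/.mozilla ~/.thunderbird

        # Question 3: record the installed package set so it can be re-installed
        # later, instead of copying binaries out of /usr/bin:
        dpkg --get-selections > ~/backup/package-list.txt

        # Question 6: save the PPA definitions; the release name inside each
        # file (e.g. 'maverick') may need editing for the new release:
        cp -r /etc/apt/sources.list.d ~/backup/sources.list.d

    On the new install, the package set can be restored with sudo dpkg --set-selections < package-list.txt followed by sudo apt-get dselect-upgrade, though PPA-provided software should only be restored after the PPAs themselves are re-added and verified to have builds for the new release.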

    Read the article

  • How often is software speed evident in the eyes of customers?

    - by rwong
    In theory, customers should be able to feel software performance improvements from first-hand experience. In practice, sometimes the improvements are not noticeable enough, so that in order to monetize the improvements it is necessary to quote performance figures in marketing to attract customers. We already know the difference between perceived performance (GUI latency, etc.) and server-side performance (machines, networks, infrastructure, etc.). How often do programmers need to go the extra length to "write up" performance analyses for an audience that is not fellow programmers, but managers and customers?

    Read the article

  • Is measuring software project metrics popular in today's industry?

    - by Russ K
    I encountered a developer who wanted some outside advice on their team's project. I found out they're developing a huge software suite for the company's executives, project managers and developers that can calculate metrics automatically and graph them per iteration. As a student from a computer science background I know very little about metrics and their importance, but my questions are:

    1. Do most companies have some way, not necessarily an elegant program, to measure meaningful metrics?
    2. Which metrics, single or combined, help you narrow down your project's scope and estimates?
    3. As a person who analyzes metrics, how often do you base decisions on them? E.g., tests failed per week is increasing drastically?
    4. Do you feel that the introduction of studying metrics has helped you understand the project better?

    Not sure why, but the developer's project intrigued me and I must know more.

    Read the article

  • Tutorial: Getting Started with the NoSQL JavaScript / Node.js API for MySQL Cluster

    - by Mat Keep
    Tutorial authored by Craig Russell and JD Duncan.

    The MySQL Cluster team are working on a new NoSQL JavaScript connector for MySQL. The objectives are simplicity and high performance for JavaScript users:

    - it allows end-to-end JavaScript development, from the browser to the server and now to the world's most popular open source database
    - it gives native "NoSQL" access to the storage layer without going first through SQL transformations and parsing.

    Node.js is a complete web platform built around JavaScript, designed to deliver millions of client connections on commodity hardware. With the MySQL NoSQL Connector for JavaScript, Node.js users can easily add data access and persistence to their web, cloud, social and mobile applications. While the initial implementation is designed to plug and play with Node.js, the actual implementation doesn't depend heavily on Node, potentially enabling wider platform support in the future.

    Implementation

    The architecture and user interface of this connector are very different from other MySQL connectors in a major way: it is an asynchronous interface that follows the event model built into Node.js. To make it as easy as possible, we decided to use a domain object model to store the data. This allows users to query data from the database and get back a fully-instantiated object to work with, instead of having to deal with rows and columns of the database. The domain object model can have any user behavior that is desired, with the NoSQL connector providing the data from the database. To make it as fast as possible, we use a direct connection from the user's address space to the database. This approach means that no SQL (pun intended) is needed to get to the data, and no SQL server sits between the user and the data. The connector is being developed to be extensible to multiple underlying database technologies, including direct, native access to both the MySQL Cluster "ndb" and InnoDB storage engines. The connector integrates the MySQL Cluster native API library directly within the Node.js platform itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed persistence layer delivering 99.999% availability. The following sections take you through how to connect to MySQL, query the data, and how to get started.

    Connecting to the database

    A Session is the main user access path to the database. You can get a Session object directly from the connector using the openSession function:

        var nosql = require("mysql-js");

        var dbProperties = {
            "implementation" : "ndb",
            "database" : "test"
        };

        nosql.openSession(dbProperties, null, onSession);

    The openSession function calls back into the application upon creating a Session. The Session is then used to create, delete, update, and read objects.

    Reading data

    The Session can read data from the database in a number of ways. If you simply want the data from the database, you provide a table name and the key of the row that you want. For example, consider this schema:

        create table employee (
          id int not null primary key,
          name varchar(32),
          salary float
        ) ENGINE=ndbcluster;

    Since the primary key is a number, you can provide the key as a number to the find function:

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find('employee', 0, onData);
        };

        var onData = function(err, data) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(data));
          // ... use data in application
        };

    If you want to have the data stored in your own domain model, you tell the connector which table your domain model uses by specifying an annotation, and pass your domain model to the find function:

        var annotations = new nosql.Annotations();

        var Employee = function(id, name, salary) {
          this.id = id;
          this.name = name;
          this.salary = salary;
          this.giveRaise = function(percent) {
            this.salary *= (1 + percent); // raise salary by the given fraction
          };
        };

        annotations.mapClass(Employee, {'table' : 'employee'});

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find(Employee, 0, onData);
        };

    Updating data

    You can update the emp instance in memory, but to make the raise persistent, you need to write it back to the database using the update function:

        var onData = function(err, emp) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(emp));
          emp.giveRaise(0.12); // gee, thanks!
          session.update(emp); // oops, session is out of scope here
        };

    Using JavaScript can be tricky because it does not have the concept of block scope for variables. You can create a closure to handle these variables, or use a feature of the connector to remember your variables. The connector API takes a fixed number of parameters and returns a fixed number of result parameters to the callback function, but the connector will keep track of extra variables for you and return them to the callback. So in the above example, change the onSession function to remember the session variable, and you can refer to it in the onData function:

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find(Employee, 0, onData, session);
        };

        var onData = function(err, emp, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(emp));
          emp.giveRaise(0.12); // gee, thanks!
          session.update(emp, onUpdate); // session is now in scope
        };

        var onUpdate = function(err, emp) {
          if (err) {
            console.log(err);
            // ... error handling
          }
        };

    Inserting data

    Inserting data requires a mapped JavaScript user function (constructor) and a session. Create a variable and persist it:

        var onSession = function(err, session) {
          var data = new Employee(999, 'Mat Keep', 20000000);
          session.persist(data, onInsert); // onInsert: your completion callback
        };

    Deleting data

    To remove data from the database, use the session's remove function. You use an instance of the domain object to identify the row you want to remove; only the key field is relevant:

        var onSession = function(err, session) {
          var key = new Employee(999);
          session.remove(key, onDelete); // onDelete: your completion callback
        };

    More extensive queries

    We are working on the implementation of more extensive queries along the lines of the criteria query API. Stay tuned.

    How to evaluate

    The MySQL Connector for JavaScript is available for download from labs.mysql.com. Select the build: MySQL-Cluster-NoSQL-Connector-for-Node-js. You can also clone the project on GitHub. Since it is still early in development, feedback is especially valuable (so don't hesitate to leave comments on this blog, or head to the MySQL Cluster forum). Try it out and see how easy (and fast) it is to integrate MySQL Cluster into your Node.js platforms. You can learn more about other previewed functionality of MySQL Cluster 7.3 here.

    Read the article

  • What should you bring to the table as a Software Architect?

    - by Ahmad Mageed
    There have been many questions with good answers about the role of a Software Architect (SA) on StackOverflow and Programmers SE. I am trying to ask a slightly more focused question than those. The very definition of an SA is broad, so for the sake of this question let's define an SA as follows: a Software Architect guides the overall design of a project, gets involved with coding efforts, conducts code reviews, and selects the technologies to be used.

    In other words, I am not talking about managerial rest-and-vest-at-the-crest (further rhyming words elided) types of SAs. If I were to pursue any type of SA position, I don't want to be away from coding. I might sacrifice some time to interface with clients and business analysts, but I would still be technically involved, not just aware of what's going on through meetings.

    With these points in mind, what should an SA bring to the table? Should they come in with a mentality of "laying down the law" (so to speak) and enforcing the usage of certain tools to fit "their way", i.e. coding guidelines, source control, patterns, UML documentation, etc.? Or should they specify initial direction and strategy, then be laid back and jump in as needed to correct the ship's direction? Depending on the organization this might not work: an SA who relies on TFS to enforce everything may struggle to implement their plan at an employer that only uses StarTeam. Similarly, an SA needs to be flexible depending on the stage of the project: a fresh project offers more choices, whereas an existing one offers fewer.

    Here are some SA stories I have experienced, as a way of sharing some background, in hopes that answers to my questions might also shed some light on these issues:

    1. I've worked with an SA who code reviewed literally every single line of code of the team, and not just for our project but for other projects in the organization (imagine the time spent on this). At first it was useful for enforcing certain standards, but later it became crippling. FxCop was how the SA would find issues. Don't get me wrong, it was a good way to teach junior developers and force them to think about the consequences of their chosen approach, but for senior developers it was seen as somewhat draconian.
    2. One particular SA was against the use of a certain library, claiming it was slow. This forced us to write tons of code to achieve things differently, while the other library would have saved us a lot of time. Fast forward to the last month of the project, and the clients were complaining about performance. The only solution was to change certain functionality to use the originally ignored approach, despite early warnings from the devs. By that point a lot of code was thrown out and not reusable, leading to overtime and stress. Sadly, the estimates used for the project were based on the old approach, which my project was forbidden from using, so they weren't an appropriate basis for estimation. I would hear the PM say "we've done this before", when in reality they had not, since we were using a new library and the devs working on it were not the same devs from the old project.
    3. The SA who would enforce the usage of DTOs, DOs, BOs, service layers and so on for all projects. New devs had to learn this architecture, and the SA adamantly enforced the usage guidelines; exceptions were made only when following the guidelines was absolutely difficult. The SA was grounded in their approach. Classes for DTOs and all CRUD operations were generated via CodeSmith, and database schemas were another similar ball of wax. However, having used this setup everywhere, the SA was not open to new technologies such as LINQ to SQL or Entity Framework.

    I am not using this post as a platform for venting; there were positive and negative aspects to my experiences with the SA stories mentioned above. My questions boil down to:

    1. What should an SA bring to the table?
    2. How can they strike a balance in their decision making?
    3. Should one approach an SA job (as defined earlier) with the mentality that they must enforce certain ground rules?
    4. Anything else to consider?

    Thanks! I'm sure these job tasks easily extend to senior devs or technical leads, so feel free to answer in that capacity as well.

    Read the article

  • Should the Joel Test be essential for every software company? [closed]

    - by Mahbubur R Aaman
    The Joel Test has 12 steps for better code:

    1. Do you use source control?
    2. Can you make a build in one step?
    3. Do you make daily builds?
    4. Do you have a bug database?
    5. Do you fix bugs before writing new code?
    6. Do you have an up-to-date schedule?
    7. Do you have a spec?
    8. Do programmers have quiet working conditions?
    9. Do you use the best tools money can buy?
    10. Do you have testers?
    11. Do new candidates write code during their interview?
    12. Do you do hallway usability testing?

    Should these steps be mandatory for every software company? And when being recruited, should programmers ask the company whether it follows the Joel Test?

    Read the article

  • Software Update Notifications

    - by devio
    I am considering implementing some sort of software update notification for one of the web applications I am developing. There are several questions I came across:

    1. Should the update check be executed on the client or on the server? A client-side check means the software retrieves the most current version information, performs its checks, and displays the update information. A server-side check means the software sends its version info to the server, which in turn does the calculations and returns information to the client. My guess is that a server-side implementation may turn out to be more flexible and more powerful than client-side, as I can add functionality to the server easily, as long as the client understands it.
    2. Where should the update info be displayed? Is it ok to display it on the login screen? Should only admins see it? (This is a web app with a database, so updating requires manipulation of the db and the web app, which is only done by admins.) What about a little beeping, flashing icon which increases in size as the version gets more obsolete every day ;) ?
    3. Privacy issues: not everybody likes to have their app usage stats broadcast over the internet.

    TheOnion question: What do you think?
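    A minimal sketch of the server-side variant in Node-style JavaScript, assuming a hypothetical updates.example.com host, /api/update-check endpoint, and response shape; none of these names come from the original post. The client only reports its version; all comparison logic lives on the server:

        // Client side: report our version, let the server decide.
        var https = require('https');

        var CURRENT_VERSION = '1.4.2'; // this installation's version

        var req = https.request({
          host: 'updates.example.com',
          path: '/api/update-check?version=' + encodeURIComponent(CURRENT_VERSION),
          method: 'GET'
        }, function (res) {
          var body = '';
          res.on('data', function (chunk) { body += chunk; });
          res.on('end', function () {
            // Assumed server answer: {"updateAvailable":true,"latest":"1.5.0","notes":"..."}
            var info = JSON.parse(body);
            if (info.updateAvailable) {
              console.log('Update available: ' + info.latest + ' - ' + info.notes);
            }
          });
        });
        req.end();

    Because the comparison lives on the server, new rules (security-critical flags, per-admin targeting, staged rollouts) can be added without touching deployed clients, which matches the flexibility argument in point 1 above.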

    Read the article

  • What software development process should I learn first for a solo project?

    - by Omar Kohl
    I want to develop a project on my own (if it is successful, more people might start working on it too). Also, I want to apply some proper software engineering from the first day until the last: on one hand just to try it out and compare the results with previous projects that were just about writing code quick and dirty, and on the other hand to learn! I know the proper answer to this question is "It depends very much on the project...", "There is no single correct answer...". But I just need someplace to start, somewhere where every step is written down and tells me what to do. If I'm not happy, next time I'll try something else. So, how/where should I start? I would love to hear some book suggestions, because I'm all about books :-D.

    Read the article

  • Why are there 2 Adobe Flash Plugins in USC (Ubuntu Software Center)?

    - by LuC1F3R
    As you may know, the Adobe Flash plugin appears twice in the Ubuntu Software Center: one entry is called "Adobe Flash Plugin" and the other "Adobe Flash Plugin 10". Which of the two should I install? Or rather, which is the recommended installation method? Alternatively, we could install the Adobe Flash plugin for Firefox from the "install missing plugin" notification, or by going to the Adobe website and downloading the .deb package. So after all, how does one properly install Flash Player on Ubuntu? (But my big question is: why are there two Adobe Flash Plugin entries in USC? For what purpose? If you click "More Info", the descriptions are the same for both.)
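    For the "how to properly install" part, one common command-line route (an addition here, not from the original question) is the installer package from Ubuntu's multiverse component, which downloads Flash from Adobe during installation:

        sudo apt-get update
        sudo apt-get install flashplugin-installer

    Installing via the package manager keeps Flash updated along with the rest of the system, unlike a .deb downloaded by hand from the Adobe site.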

    Read the article

  • Is it possible to install ZSNES Emulator from default software sources?

    - by Mike L
    I can find it listed when I search for "zsnes" in the Ubuntu Software Center, but it doesn't have the "Install" button. If I click the "More information" button, I get a "package not found" message. Synaptic can't find this package either. (From user @REJ:) I have Natty 64-bit. When I run sudo apt-get install zsnes, it gives the following output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package zsnes is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package 'zsnes' has no installation candidate
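    Some generic diagnostics for a "no installation candidate" error (standard apt tooling; whether they resolve this particular zsnes case on 64-bit Natty is an assumption):

        # Which repositories (if any) offer the package, and at what versions?
        apt-cache policy zsnes

        # zsnes has historically shipped in the community-maintained 'universe'
        # component; check that it is enabled:
        grep -h '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/*.list | grep universe

        # If universe was missing, enable it (Software Sources, or sources.list),
        # then refresh the package lists and retry:
        sudo apt-get update
        sudo apt-get install zsnes

    If apt-cache policy shows the package only for another architecture, the package simply has no build for your system, and no sources change will fix that.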

    Read the article

  • Two Weeks As A Software Estimation Rule of Thumb?

    - by Todd Williamson
    I saw a blog posting that spoke to me: http://james-iry.blogspot.com/2010/10/how-to-estimate-software.html Oddly, this is the kind of estimate that I tend to do on smaller projects. Just about everything is "two weeks" as that is comfortably far enough out. I once had an instructor walk us through how to create a more detailed estimate, wherein we already had the requirements up front, etc. and even after all the careful tabulation and such the final instruction was "Now that you have all this documentation go ahead and double it." Agile practitioners seem to like two weeks also as a sprint length. Is there something magical about two weeks? Is it a hrair number for our psyches or some other kind of crutch? Do you have an immediate default fall-back schedule strategy when you are pressed for an initial delivery date?

    Read the article

  • Why is my display name in Ubuntu Software Center some weird set of letters?

    - by Ike
    In USC, after I submit a review, my display name is "Bnxdcty"... a swell name, but where did it come from? I have checked the Ubuntu Single Sign On page, verified my nickname there, and changed it to something else and back again for good measure, but my reviewer name is somehow still "Bnxdcty". I even deauthorized Ubuntu Software Center and then re-opened it and re-authorized it with my account. Does it just appear this way to me, while others see my correct nickname? It doesn't bother me as much as it confuses me. I just know it will be something stupid that everyone knows but me.

    Read the article

  • What are options for 3rd Party Centralized Software Settings Management?

    - by Jeff Martin
    I am an architect in an enterprise looking to build a SaaS solution. Our products are distributed over many different deployable containers: web services, web UIs, etc. I am looking for an open-source or third-party software solution to manage the settings of our application. These would be similar to the settings you might find in Word or Eclipse or Visual Studio. The settings would control various behaviors and features of the product (probably not settings like which database to connect to, but more like whether to show line numbers on the page by default). Ideally, we would be able to store values for different dimensions (by tenant, by user, by application environment...). Because we have so many different deployables, I am looking for a centralized solution that can provide a web service from which each of the deployables can get its individual settings. Does anyone know of a centralized service providing this sort of feature, or can you give me some help in searching for an alternative to rolling our own?

    Read the article

  • Places to find free software projects who need developers/project managers?

    - by MHarrison
    While I have plenty of project management "booksmarts" and a handful of PM experience, I don't seem to have enough experience to get the sort of job I want. Since "I read another PM book/blog today" doesn't really count, I was thinking I could find some free/open source software (FOSS) projects who are looking for/hiring project managers or developers and see if there was anything I could volunteer for. Does anyone know of any FOSS employment sites where I might be able to find such projects? Something similar to careers.stackoverflow.com. I know I could just go to sourceforge/freshmeat and look around, but I was hoping to find some site that fills this need (and if any such sites exist, my google-fu is apparently VERY weak at finding them).

    Read the article

  • Why does my Ubuntu Software Center not work? [closed]

    - by Alex Mundy
    Possible duplicate: How do I fix a "Problem with MergeList" error when trying to do an update?

    I've been having trouble with my Software Center. Whenever I try to open it, or even do an apt-get in the terminal, I get this message:

        Reading package lists... Error!
        E: Encountered a section with no Package: header
        E: Problem with MergeList /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_precise-security_restricted_binary-i386_Packages
        E: The package lists or status file could not be parsed or opened.

    How do I fix it? Note: I'm new to Ubuntu. I need simple instructions for the moment.
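    The widely used remedy for this error (the one given in the duplicate question referenced above) is to delete the corrupted cached package lists and let apt rebuild them. This is safe because the lists are just cached copies of the repository indexes:

        sudo rm -vf /var/lib/apt/lists/*
        sudo apt-get update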

    Read the article

  • How do I return a purchase from the Ubuntu Software Centre?

    - by Garry Cairns
    I purchased Amnesia (game) from the Ubuntu Software Centre. It crashes on startup every time, and I therefore want to return it for a refund. I can't find a way of doing so through the USC and, strangely, I can't find any trace of this having been asked before (maybe bad googling on my part). So the question is: how do I return a purchase and get a refund through the USC? EDIT: Installing the proprietary AMD Catalyst driver fixed Amnesia, but I still think this is an important question, because basic customer service applies to any purchase. I'd therefore appreciate any answers anyone can find, and I will continue to look too. If I find the answer I'll post it here.

    Read the article

  • Is it possible to configure Ubuntu as a software firewall?

    - by user3215
    I have some systems running Ubuntu in the private IP range 192.168.2.0-255. These systems are connected to a switch, and the switch is connected to the ISP's modem. Neither the switch nor the modem supports firewall options. I don't have any firewall device, and I'm not willing to configure firewalls individually on all the systems (via GUI/iptables). Is it possible to make an Ubuntu system into something like a software firewall, so that all the traffic/packets sent to or from the WAN (internet) would be allowed or denied based on its firewall rules?
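    A minimal sketch of the usual approach (an addition here, not from the question): give one Ubuntu box two network interfaces, make it the LAN's default gateway, and filter/NAT with iptables. The interface names and addresses below are assumptions.

        # eth0 = WAN (to the ISP modem), eth1 = LAN (192.168.2.0/24) -- assumed names.

        # 1. Let the box forward packets between the two interfaces:
        sudo sysctl -w net.ipv4.ip_forward=1

        # 2. NAT the LAN out through the WAN interface:
        sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # 3. Default-deny forwarding, then allow LAN->WAN and established replies:
        sudo iptables -P FORWARD DROP
        sudo iptables -A FORWARD -i eth1 -o eth0 -s 192.168.2.0/24 -j ACCEPT
        sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT

    Point the other machines' default gateway at the Ubuntu box's LAN address, and all WAN traffic then passes through (and is filtered by) these rules.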

    Read the article

  • Knowledge and user generated content management system to track files, research, proposals, etc.?

    - by Eshwar
    I'll try to keep it short. Here's the scenario: we have employees all over the world performing similar work, i.e. research, generating PowerPoint slides, Word documents, graphics, etc. Often a lot of this previous work can be reused for a future project. The current arrangement is email and phone calls, which, as you would agree, is quick if you know where to look, but otherwise archaic and very, very inefficient. So I am looking for software that will allow me to do the following:

    - Tag files, e.g. an investor presentation on cellphone usage in Kenya would be tagged investor, cellphone, kenya.
    - Manage references, e.g. if we read something on the internet, we should be able to paste that link in some fashion and tag it as above.
    - Preferably cloud based, so that it can be accessed by anybody; additionally it would be nice (though NOT a must) to have access levels (director, manager, everyone).
    - A nice interface that non-technically-savvy folks can warm up to ;)
    - A desktop app would be handy, so that people don't always have to click upload or something.

    A tree-based system is inefficient in this case, because content is usually linked across branches and also people might not quite agree on one format of a tree; tagging works around this very nicely.

    What I have considered so far:

    - Evernote (for its more professional look)
    - Springpad (for its versatility with content)
    - Mendeley (a research manager, and in some ways ideal, but I fear it's limited to PDFs)

    The goal is that when somebody wants to look for a document, they don't have to ask a colleague; they can just search with keywords and all relevant information shows up. Thanks!

    Read the article

  • Day 6 - Game Menuing Woes and Future Screen Sneak Peeks

    - by dapostolov
    So, after my last post on Day 5, I dabbled with my game class design. I took the approach where each game object is tightly coupled with a graphic. The good news is I got the menu working, but not without some hard knocks and game growing pains. I'll explain later, but for now... here is a class diagram of my first stab at my class structure, and some code.

    Ok, there are a few mistakes, however, I'm going to leave it as is for now... As you can see, I created an initial abstract base class called GameSprite. This class, when inherited, will provide a simple virtual default draw method:

        public virtual void DrawSprite(SpriteBatch spriteBatch)
        {
            spriteBatch.Draw(Sprite, Position, Color.White);
        }

    The benefit of coding it this way is that we can inherit the class and utilise the method in the screen draw method... So regardless of what the graphic object type is, it will now have the ability to render a static image on the screen. Example:

        public class MyStaticTreasureChest : GameSprite {}

    If you remember the window draw method from Day 3's post, we could use the above code as follows:

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            spriteBatch.Begin(SpriteBlendMode.AlphaBlend);
            foreach (var gameSprite in ListOfGameObjects)
            {
                gameSprite.DrawSprite(spriteBatch);
            }
            spriteBatch.End();
            base.Draw(gameTime);
        }

    I have to admit the GameSprite object is pretty plain, as is its DrawSprite method... But... we now have the ability to render 3 static menu items on the screen... BORING! I want those menu items to do something exciting, which of course involves animation... So, let's have a peek at AnimatedGameSprite in the above game diagram. The idea with the AnimatedGameSprite is that it has an image to animate... such as... characters, fireballs, and... menus! So after inheriting from the GameSprite class, I added a few more options, such as UpdateSprite...

        public virtual void UpdateSprite(float elapsed)
        {
            _totalElapsed += elapsed;
            if (_totalElapsed > _timePerFrame)
            {
                _frame++;
                _frame = _frame % _framecount;
                _totalElapsed -= _timePerFrame;
            }
        }

    And an overridden DrawSprite...

        public override void DrawSprite(SpriteBatch spriteBatch)
        {
            int FrameWidth = Sprite.Width / _framecount;
            Rectangle sourcerect = new Rectangle(FrameWidth * _frame, 0, FrameWidth, Sprite.Height);
            spriteBatch.Draw(Sprite, Position, sourcerect, Color.White, _rotation, _origin, _scale, SpriteEffects.None, _depth);
        }

    With these two methods, I can animate an image. All I had to do was add a few more lines to the screen's Update method (from Day 3), like such:

        float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
        foreach (var item in ListOfAnimatedGameObjects)
        {
            item.UpdateSprite(elapsed);
        }

    And voila! My images begin to animate in one spot on the screen... Hmm, but how do I interact with the menu items using a mouse? Well, the mouse cursor was easy enough:

        this.IsMouseVisible = true;

    But to have it "interact" with an image was a bit more tricky... I had to perform collision detection!

        mouseStateCurrent = Mouse.GetState();
        var uiEnabledSprites = (from s in menuItems
                                where s.IsEnabled
                                select s).ToList();
        foreach (var item in uiEnabledSprites)
        {
            var r = new Rectangle((int)item.Position.X, (int)item.Position.Y, item.Sprite.Width, item.Sprite.Height);
            item.MenuState = MenuState.Normal;
            if (r.Intersects(new Rectangle(mouseStateCurrent.X, mouseStateCurrent.Y, 0, 0)))
            {
                item.MenuState = MenuState.Hover;
                if (mouseStatePrevious.LeftButton == ButtonState.Pressed
                    && mouseStateCurrent.LeftButton == ButtonState.Released)
                {
                    item.MenuState = MenuState.Pressed;
                }
            }
        }
        mouseStatePrevious = mouseStateCurrent;

    So, basically, what it is doing above is iterating through all my interactive objects, detecting a rectangle collision with the mouse position, and playing the state animation (or static image).

    Lessons Learned, Time Burned...

    So, I think I did well to start, but after I hammered out my prototype... well... things got sloppy and I began to realise some design flaws. At the time:

    - I couldn't seem to figure out how to open another window, such as the character creation screen.
    - Input was not event based, and it was bugging me.
    - My menu design relied heavily on mouse input, and I couldn't use the keyboard.
    - Mouse input is tightly bound with graphic rendering / positioning, so its logic will have to be in each scene.
    - Menu animations would stop mid-frame, then continue when the action occurred again. This is bad, because... what if I had a sword sliding onto the screen? It would slide a quarter of the way, stop, then render again mid-slide... it just looked sloppy.

    Menu, Solved!?

    To solve the above problems I did a little research, and I found some great code in the XNA forums. The one worth mentioning was the GameStateManagementSample. With this sample, you can create a basic "text based" menu system which allows you to swap screens, pop up screens, play the game, and quit... basic game state management. In my next post I'm going to delve a bit more into this code and adapt it to my code from this prototype. Text based menus just won't cut it for me, for now... however, I'm still going to stick with my animated menu item idea. A sneak peek using the Game State Management sample... with no changes made...

    Cool Things to Mention:

    At work... I tend to break out in random conversations every so often, and I get talking about some of my challenges with this game (or some stupid observation about something... stupid). During one conversation I was discussing how I should animate my images; I explained that I knew I had to use the Update method provided, but I didn't know (at the time) how to render an image at an appropriate "pace", how many frames to use, etc. I also got thinking that if a machine rendered my images faster or slower, that was surely going to f-up my animations. To which a friend, Sheldon, answered: surely the Draw method is like a camera taking a snapshot of a scene in time. Then it clicked... I understood the big picture of the game engine...

    After some research, I discovered that the Draw method attempts to keep a framerate of 60 fps. From what I understand, the game engine will even leave out a few calls to the Draw method if it begins to slow down. This is why we want to put our sprite updates in the Update method. Then, using a game timer (provided by the engine), we want to render the scene based on real time passed, not framerate. So even if the engine renders at 20 fps, the animations will still animate at the same real-time speed! Which brings up another point: why 60 fps? I'm speculating that Microsoft capped it because LCDs don't refresh faster than 60 fps. On another note, if the game engine knows it's falling behind in rendering... then surely we can harness this to speed up our games. Maybe I can find some flag which tells me if the game is lagging, what the current framerate is, etc. (instead of coding it like I did last time). Sheldon suggested maybe I can render like WoW does, in prioritised layers... I think he's onto something, however I don't think I'll have that many graphics to worry about such a problem of graphic latency. We'll see.

    People to Mention:

    Well, as you are aware, I hadn't posted in a couple of days, and I was surprised to see a few emails and messenger queries about my game progress (and some concern as to why I stopped). I want to thank everyone for their kind words of support, and put everyone at ease by stating that I do intend on completing this project. Granted, I only have a few hours each night, but I'll do it. Thank you to Garth for mailing in my next screen! That was a nice surprise!

    The sneak peek you've been waiting for...

    Garth has also volunteered to render me some wizard images. He was a bit shocked when I asked for them in 2D animated strips. He said I was going backward (and that I have really bad game development lingo). But I advised Garth that I will use 3D images later... for now... 2D images. Garth also had some great game design ideas to add on. I advised him that I will save his ideas and include them in the future design document (for the 3D version?). Lastly, my best friend Alek is going to join me in developing this game. This was a project we started eons ago but never completed because of our careers. Now priorities change and we have some spare time on our hands. Let's see what trouble Alek and I can get into! Tonight I'll be uploading my prototypes and base game to source control for both of us to work off of. D.

    Read the article

  • batch file to merge .js files from subfolders into one combined file

    - by Andrew Johns
    I'm struggling to get this to work. There are plenty of examples on the web, but they all do something just slightly different from what I'm aiming for, and every time I think I can solve it, I get hit by an error that means nothing to me. After giving up on the JSLint.VS plugin, I'm attempting to create a batch file that I can call from a Visual Studio build event, or perhaps from CruiseControl, which will generate JSLint warnings for a project. The final goal is to get a combined js file that I can pass to jslint, using:

        cscript jslint.js < tmp.js

    which would validate that my scripts are ready to be combined into one file for use in a js minifier, or output a bunch of errors using standard output. But the js files that would make up tmp.js are likely to be in multiple subfolders of the project, e.g.:

        D:\_projects\trunk\web\projectname\js\somefile.debug.js
        D:\_projects\trunk\web\projectname\js\jquery\plugins\jquery.plugin.js

    The ideal solution would be to be able to call a batch file along the lines of:

        jslint.bat %ProjectPath%

    and this would then combine all the js files within the project into one temp js file. This way I would have flexibility in which project was being passed to the batch file. I've been trying to make this work with copy, xcopy, type, and echo, using a for loop with dir /s etc., but whatever I try I get an error.
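    A minimal sketch of one way to do this (not from the question; the temp-file location and layout are illustrative), using for /r to walk the subfolders recursively:

        @echo off
        rem jslint.bat -- combine every .js under the given project path, then lint.
        rem Usage: jslint.bat D:\_projects\trunk\web\projectname

        set PROJECT=%~1
        set TMPFILE=%TEMP%\tmp.js

        if exist "%TMPFILE%" del "%TMPFILE%"

        rem Recurse through %PROJECT% and append each .js file to the temp file.
        for /r "%PROJECT%" %%f in (*.js) do type "%%f" >> "%TMPFILE%"

        rem Feed the combined file to JSLint; warnings go to standard output.
        cscript //nologo jslint.js < "%TMPFILE%"

    The %%f escaping is for use inside a batch file; at an interactive prompt it would be %f. One caveat: make sure the temp file is not written inside %PROJECT%, or the loop will happily concatenate its own output into itself on the next run.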

    Read the article

  • Are all of the default scripts loaded by Magento really needed?

    - by pxl
    Here's a listing of all the scripts loaded by Magento by default:

        ../js/prototype/prototype.js    // prototype library
        ../js/prototype/validation.js   // don't know what this does
        ../js/scriptaculous/builder.js  // don't know what this does
        ../js/scriptaculous/effects.js  // base scriptaculous effects library?
        ../js/scriptaculous/dragdrop.js // component of scriptaculous effects
        ../js/scriptaculous/controls.js // not sure?
        ../js/scriptaculous/slider.js   // more scriptaculous effects
        ../js/varien/js.js              // don't know what this is
        ../js/varien/form.js            // form validation scripts?
        ../js/varien/menu.js            // menu/drop-down menu scripts
        ../js/mage/translate.js         // don't know what this does
        ../js/mage/cookies.js           // don't know what this does

    These scripts total 316.8K of JavaScript, in various states of being minified (for example, prototype.js isn't minified). So my questions:

    1. Aside from prototype.js, are all of the others really needed?
    2. What is the "correct" way to remove these scripts? Layout updates? Or hardcoded in templates?

    I want to make the loading of my Magento site as lightweight as possible. Thanks!
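    For the layout-update route, Magento 1.x exposes a removeItem action on the head block; here is a sketch of what a theme's local.xml could look like. The file names follow the listing above, but which ones are actually safe to drop is an open question: test each removal, since Mage core and many extensions assume prototype.js and varien/js.js are present.

        <?xml version="1.0"?>
        <layout version="0.1.0">
            <default>
                <reference name="head">
                    <!-- Drop scriptaculous pieces this theme does not use. -->
                    <action method="removeItem"><type>js</type><name>scriptaculous/dragdrop.js</name></action>
                    <action method="removeItem"><type>js</type><name>scriptaculous/slider.js</name></action>
                    <action method="removeItem"><type>js</type><name>scriptaculous/builder.js</name></action>
                </reference>
            </default>
        </layout>

    Doing this in layout XML rather than in templates keeps the removals upgrade-safe and scoped per handle (e.g. only on the default handle, as above).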

    Read the article

  • The musical instrument software developer

    - by Peter Mortensen
    There is a correlation between playing a musical instrument and being a great software developer (the same for mathematics). But what is the causation (if any)? That is, should a software developer learn to play a musical instrument to become a better software developer? Or does proficiency in software development make it more likely that an interest in performing on a musical instrument will develop? Update: a very similar question was asked in podcast .NET Rocks, episode 614 (from Øredev 2010), 35 min 40 secs.

    Read the article
