Search Results

Search found 19838 results on 794 pages for 'software'.

  • How do you balance the speed of Sprints with the customer's conservative adoption schedule?

    - by Cheeso
    I'd prefer to have sprints that last 3-4 weeks, but customers don't want to adopt new features and functions every 3-4 weeks. Existing customers are conservative and, once we meet their minimum bar for features and capabilities, they like to remain on a stable release for much longer than 4 weeks. Even a 3-month cycle would be pushing it for them. On the other hand, newer customers tend to have more feature requests and are willing to follow sprints, but this willingness dissipates after we've met their bar. How do you balance the need for rapid sprints with the customer's conservative view of application change? I'm particularly interested in SaaS scenarios.

  • Telecommuting with a foreign employer as a permanent job

    - by grabah
    Does anyone have any experience in telecommuting (working at home) for a company based in some foreign country? By this I don't mean working on some contracted job, but a more or less permanent job. Is this even possible? What are the options for payment, and can you expect to be paid the usual rates for that country, or significantly less? Is there any control of working hours, or is it fine as long as you deliver on time?

  • libgtk2.0-common fails to build with Gdk-2.0.gir error, Type reference 'GdkPixbuf' not found

    - by Stefano Palazzo
    I'm trying to build gtk, but it fails. Here's what I'm doing:

        sudo apt-get build-dep libgtk2.0-common
        sudo apt-get source libgtk2.0-common
        cd gtk+2.0-2.22.0/
        sudo gedit gtk/gtktreeview.c &
        # ...editing a few files (or not, it's the same error)
        sudo ./configure --prefix=/usr
        sudo make

    The compilation runs for a while and then quits:

        Gdk-2.0.gir: error: Type reference 'GdkPixbuf' not found
        ...
        make: *** [all] Error 2

    What am I doing wrong?

  • Installing Broadcom Wireless Drivers

    - by Fer1805
    I'm having serious problems installing the Broadcom drivers for Ubuntu. It worked perfectly on my previous version, but now it seems impossible. What are the steps to install Broadcom wireless drivers for a BCM43xx card? I'm a user with no advanced knowledge of Linux, so I would need clear explanations on how to make, compile, etc.

        lspci -vnn | grep Network showed:
        Broadcom Corporation BCM4322 802.11a/b/g/n Wireless LAN Controller [14e4:432b]

        iwconfig showed:
        lo      no wireless extensions.
        eth0    no wireless extensions.

  • Is there an equivalent of RDP?

    - by detly
    The "Desktop Sharing" settings that come installed by default seem to use VNC. VNC is a bit of a bandwidth hog, can only work at the resolution of whatever screen is attached to the host, and mirrors every action on the host. (It also seems to work poorly with compositing, but maybe that's been fixed.) I know about X tunnelling, but that's annoying to use and doesn't always work properly (or, more accurately, some apps don't work properly). Is there any kind of protocol in between the two, similar to RDP used for Windows? Specifically, something that can run at a different resolution to the host screen and is a little lighter on the network? (Ideally, the more the protocol could have in common with RDP, the better.)

  • Calculation of Milestones/Task list

    - by sugar
    My project manager assigned me the task of estimating the development time for an iPad application. Let's assume that I gave an estimate of 15 working days. He thought that was too many days, and the client needed the changes to the application urgently (as in most cases). So he told me: "I am going to assign two developers, including you, and as per my understanding and experience it won't take more than seven working days." Clarifications: I was given the task of estimating development time for an individual. How could I be sure that two developers are going to finish it within seven days? (I am new to the team and I hardly know the others' abilities.) Questions: Why do most project managers and team leaders assume that if one developer requires N days, then two developers will require N/2 days? Do they think of developers as software production machines? Should a team member (a developer, not a team lead or anyone in a higher post) estimate other developers' work? I didn't deny anything in the meeting and didn't say anything, but what would be the appropriate answer to convince them that the N/2 formula they follow is not correct?
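
    A back-of-the-envelope model makes the objection concrete. The sketch below is Python with entirely made-up parameters (the parallelizable fraction and the communication-overhead factor are assumptions for illustration, not measurements):

        def estimate_days(total_days, devs, parallel_fraction=0.7, comm_overhead=0.1):
            """Amdahl-style sketch of why N developers rarely give an N-fold speedup."""
            serial = total_days * (1 - parallel_fraction)     # work that can't be split
            parallel = total_days * parallel_fraction / devs  # work that parallelizes
            pairs = devs * (devs - 1) / 2                     # communication paths
            return serial + parallel + comm_overhead * total_days * pairs

        print(estimate_days(15, 1))  # 15.0 days for one developer
        print(estimate_days(15, 2))  # ~11.25 days for two, not the 7.5 that N/2 promises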

  • Notable programs/games made in C/C++/Java/Python? [closed]

    - by ThePlan
    What are some famous programs or video games that were written in the following languages? C, C++, Java, Python. I'm asking this mainly so I can understand how powerful an impact those languages have had on our lives. I believe Windows was also written in C/C++, but I'm not sure if that's fully the case. If you are kind enough, you can also mention the impact of other languages beyond programs and video games. These languages are by far the most common, which is why I've picked them. Besides the impact on our lives, I'd also like to see the power these languages have. I'm studying programming and I've learned bits of all those languages, and I think that if I knew some famous examples of programs written in them I could understand their power, as well as be further inspired in my career.

  • Should one use a separate database for application data and user data?

    - by trycatch
    I've been working on a project for a little while and I'm unsure which is the better architecture; I'm interested in the consensus. The answer seems fairly obvious to me, but something about it keeps nagging at me and I can't pick out what. The TL;DR is: how do you handle a program with application data and user data in the same DB which needs to be able to receive updates to the application data periodically? One database for user data and one for application data, or both in one? The detailed version is: if an application has a database which needs to maintain application data AND user data, and the user data all references application data, it feels more natural to me to store them in the same database. But if there is a need to update the application data in this database periodically, should it be split into two databases, so that one can simply download an updated application-data database file and replace the old one? Or should they remain one database, with the application data updated via a script that inserts the new data into the existing database? The second clearly sounds preferable to me... but for some reason it just doesn't feel right, and I can't pick out quite why.
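
    If the single-database route wins, the periodic update can be a small script that replaces the application tables inside one transaction, so the user tables are never touched and a failed update rolls back cleanly. A minimal sketch, assuming SQLite and a hypothetical app_items table:

        import sqlite3

        def apply_app_data_update(db_path, rows):
            """Swap in new application data atomically; user tables are untouched."""
            con = sqlite3.connect(db_path)
            try:
                with con:  # one transaction: commits on success, rolls back on error
                    con.execute("DELETE FROM app_items")
                    con.executemany("INSERT INTO app_items (id, name) VALUES (?, ?)", rows)
            finally:
                con.close()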

  • HELP: I Broke Ubuntu By Uninstalling Compiz

    - by tSquirrel
    I'm still getting used to Linux, having come from Windows. I was getting an error that "compiz" had crashed a few times, so I figured I'd uninstall and reinstall it:

        sudo apt-get remove compiz
        sudo apt-get install compiz

    I logged out and then back in. After that, the GUI was totally gone, and I have no idea how to get it back or what I need to do to restore the GUI to what it was before I killed poor Compiz. The GUI was pretty much unmodified after a fresh install of 14.04. How can I fix it? I'm not even sure how to get to a terminal or anything. The login screen looks normal, but after logging in it's a totally bare desktop with my background and a few icons. No Dash, toolbar, etc. Hotkeys don't seem to work either (Super for the Dash doesn't work, etc.), although I did accidentally open the "Disks" UI, not sure how. Please help! Right now I'm working off my Windows 7 dual-boot.

  • Are small amounts of functional programming understandable by non-FP people?

    - by kd35a
    Case: I'm working at a company, writing an application in Python that handles a lot of data in arrays. I'm the only developer of this program at the moment, but it will probably be used/modified/extended in the future (1-3 years) by some other programmer, at this moment unknown to me. I will probably not be there directly to help then, but maybe can give some support via email if I have time for it. So, as a developer who has learned functional programming (Haskell), I tend to solve, for example, filtering like this:

        filtered = filter(lambda item: included(item.time, dur), measures)

    The rest of the code is OO; it's just some small cases where I want to solve it like this, because to me it is much simpler and more beautiful. Question: Is it OK today to write code like this? How does a developer who hasn't written or learned FP react to code like this? Is it readable? Modifiable? Should I write documentation as if explaining to a child what the line does?

        # Filter out the items from measures for which included(item.time, dur) != True

    I have asked my boss, and he just says "FP is black magic, but if it works and is the most efficient solution, then it's OK to use it." What is your opinion on this? As a non-FP programmer, how do you react to the code? Is the code "googleable" so you can understand what it does? I would love feedback on this :) Edit: I marked phant0m's post as the answer, because he gives good advice on how to write the code in a more readable way while keeping the advantages. But I would also like to recommend superM's post because of his viewpoint as a non-FP programmer.
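
    To make the trade-off concrete, here is a sketch of the same filter written three ways; 'included', 'dur' and 'measures' are the names from the post, stubbed here only so the snippet runs standalone:

        from collections import namedtuple

        # Hypothetical stand-ins for the post's data and predicate.
        Measure = namedtuple("Measure", "time")
        measures = [Measure(t) for t in (1, 5, 12)]
        dur = (0, 10)

        def included(time, dur):
            return dur[0] <= time <= dur[1]

        # 1. The FP one-liner from the post (filter returns an iterator in Python 3).
        filtered = list(filter(lambda item: included(item.time, dur), measures))

        # 2. A list comprehension, which many non-FP Python developers find more natural.
        filtered = [item for item in measures if included(item.time, dur)]

        # 3. A named predicate, so the intent documents itself.
        def is_within_duration(item):
            return included(item.time, dur)

        filtered = list(filter(is_within_duration, measures))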

  • Should developers be involved in testing phases?

    - by LudoMC
    Hi, we are using a classical V-shaped development process. We have requirements, architecture, design, implementation, integration tests, system tests, and acceptance. Testers prepare test cases during the first phases of the project. The issue is that, due to resource issues (*), test phases are too long and are often shortened due to time constraints (you know project managers... ;)). So my question is simple: should developers be involved in the test phases, and isn't that too 'dangerous'? I'm afraid it will give the project managers a false feeling of better quality just because the work has been done, but would the added man-days be of any value? I'm not really confident in developers doing tests (no offense here, but we all know it's quite hard to break in a few clicks what you have spent several days building). Thanks for sharing your thoughts. (*) For obscure reasons, increasing the number of testers is not an option as of today. (Just upfront: this is not a duplicate of "Should programmers help testers in designing tests?", which talks about test preparation rather than test execution, where we avoid involving developers.)

  • Distributing a very simple application

    - by vanna
    I have a very simple working console application written in C++, linked with a light static library. It is just for testing purposes. Now that the coding part is done, I would like to know the process of actually distributing the program. I wrote a very basic CMakeLists.txt that creates makefiles or VS projects to build the sources. I also have a program that calls the static library in order to run some Google tests. To me, the distribution of this application looks like this: to developers, the src directory with the CMakeLists.txt file (multi-platform distribution), plus a README.txt and an INSTALL.txt; to users, the executable and a README.txt; on my git repo, everything mentioned above plus the sources for testing and the gtest external lib. At this point, considering the complexity of my application, am I doing it right? Is there any reference that formalizes this distribution process so I can get better and go further? Say I would like to add dynamic libraries that can be updated, or external libraries like Boost: how should I package this to distribute it in a professional way?

  • From HttpRuntime.Cache to Windows Azure Caching (Preview)

    - by Jeff
    I don't know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn't have to hit the database, a service, or whatever.

    When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data-fetching pieces of your app, you can see if an object is available in cache, and return that instead of hitting the data store. I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data, and the others will have no knowledge of the change, making it stale. Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to do a distributed cache, and achieve the caching flavor you desire.

    You could do the same thing in Azure before, but it would cost more because you'd need to pay for another role or VM or something to host the cache. Now, you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That's huge. So if you're using a percentage of memory that comes out to 100 MB, and you have three instances running, that's 300 MB available for caching.

    For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced to the various instances. It's like adding or removing servers in a Web farm all willy-nilly and at your discretion, and it's what the cloud is all about. I'd say it's my favorite thing about Windows Azure.

    The slightly annoying thing about developing for a Web role in Azure is that the local emulator that's launched by Visual Studio is a little on the slow side. If you're used to using the built-in Web server, you're used to building and then alt-tabbing to your browser and refreshing a page. If you're just changing an MVC view, you're not even doing the building part. Spinning up the simulated Azure environment is too slow for this, but ideally you want to code your app to use this fantastic distributed cache mechanism.

    So first off, here's the link to the page showing how to code using the caching feature. If you're used to using HttpRuntime.Cache, this should be pretty familiar to you. Let's say that you want to use the Azure cache preview when you're running in Azure, but HttpRuntime.Cache if you're running locally, or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly. First, design an interface to handle cache insertion, fetching and removal.
    Mine looks like this:

        public interface ICacheProvider
        {
            void Add(string key, object item, int duration);
            T Get<T>(string key) where T : class;
            void Remove(string key);
        }

    Now we'll create two implementations of this interface... one for Azure cache, one for HttpRuntime:

        public class AzureCacheProvider : ICacheProvider
        {
            public AzureCacheProvider()
            {
                _cache = new DataCache("default"); // in Microsoft.ApplicationServer.Caching, see how-to
            }

            private readonly DataCache _cache;

            public void Add(string key, object item, int duration)
            {
                _cache.Add(key, item, new TimeSpan(0, 0, 0, 0, duration));
            }

            public T Get<T>(string key) where T : class
            {
                return _cache.Get(key) as T;
            }

            public void Remove(string key)
            {
                _cache.Remove(key);
            }
        }

        public class LocalCacheProvider : ICacheProvider
        {
            public LocalCacheProvider()
            {
                _cache = HttpRuntime.Cache;
            }

            private readonly System.Web.Caching.Cache _cache;

            public void Add(string key, object item, int duration)
            {
                _cache.Insert(key, item, null, DateTime.UtcNow.AddMilliseconds(duration),
                    System.Web.Caching.Cache.NoSlidingExpiration);
            }

            public T Get<T>(string key) where T : class
            {
                return _cache[key] as T;
            }

            public void Remove(string key)
            {
                _cache.Remove(key);
            }
        }

    Feel free to expand these to use whatever cache features you want. I'm not going to go over dependency injection here, but I assume that if you're using ASP.NET MVC, you're using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject calls its container a "kernel"). For this example, I'll show you how StructureMap does it. It uses a convention-based scheme, where if you need to get an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this:

        ObjectFactory.Initialize(x =>
        {
            x.Scan(scan =>
            {
                scan.AssembliesFromApplicationBaseDirectory();
                scan.WithDefaultConventions();
            });
            if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable)
                x.For<ICacheProvider>().Use<AzureCacheProvider>();
            else
                x.For<ICacheProvider>().Use<LocalCacheProvider>();
        });

    If you use Ninject or Windsor or something else, that's OK. Conceptually they're all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider; otherwise it maps to LocalCacheProvider. Now when a request comes into your MVC app, and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller -> BusinessLogicClass -> Repository.
    Let's say your repository class looks like this:

        public class MyRepo : IMyRepo
        {
            public MyRepo(ICacheProvider cacheProvider)
            {
                _context = new MyDataContext();
                _cache = cacheProvider;
            }

            private readonly MyDataContext _context;
            private readonly ICacheProvider _cache;

            public SomeType Get(int someTypeID)
            {
                var key = "somename-" + someTypeID;
                var cachedObject = _cache.Get<SomeType>(key);
                if (cachedObject != null)
                {
                    _context.SomeTypes.Attach(cachedObject);
                    return cachedObject;
                }
                var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID);
                _cache.Add(key, someType, 60000);
                return someType;
            }

            // ... more stuff to update, delete or whatever, being sure to remove
            // from cache when you do so
        }

    When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn't care what the underlying implementation is: Azure, HttpRuntime, or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework), and inserts the object into the cache before returning it. The important thing not pictured here is that the other methods in the repo class construct the key for the cached object, in this case "somename-" plus the ID of the object, and then remove it from cache in any method that alters or deletes the object. That way, no matter what instance of the role is processing the request, it won't find the object if it has been made stale, that is, updated or outright deleted, forcing it to attempt to hit the database.

    So is this good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution of the two caching providers, you could see some strange errors. For example, I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet-packaged DI resolver for StructureMap was eating an exception thrown by the Azure components that said my configuration, outlined in that how-to article, was wrong. That error wouldn't occur when using the HttpRuntime. That's something a lot of people debate about when using components like these, and how you configure them. I kinda hate XML config files, and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences.

  • ACL tool for audit of Ubuntu production servers

    - by migrator
    In my production environment I have close to 10 Ubuntu 12.04 servers, and I want to get the list of users from them. I am looking for some kind of script or tool (non-GUI) to do this. Yes, I can get the list from the /etc/passwd and /etc/group files, but it would be good to have a tool or script for the following reasons. I currently have 10 systems on Ubuntu and 30 systems on Windows 2003, and I am recommending that my organization and IT move all of the systems to Ubuntu except the one running MS SQL Server. We do not have good Ubuntu admins with us, and they should not mess up the systems if I give them manual commands. I also need to find out the date of creation of each user and group, and password standards like strength, expiry, etc. Please help me, as I want to automate the process and get the list on a weekly basis from the IT team. Thanks in advance.
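
    A starting point could be a small script that walks /etc/passwd and asks chage for each account's password-aging details. This is only a sketch under assumptions: UID 1000 is taken as the human-account cutoff, it must run as root to read other users' aging data, and account creation dates are not recorded in /etc/passwd at all:

        import pwd
        import subprocess

        def audit_users(min_uid=1000):
            """List human accounts with home dirs and password-aging info."""
            for entry in pwd.getpwall():
                if entry.pw_uid < min_uid:
                    continue  # skip system accounts
                # chage -l prints expiry/aging details; needs root for other users
                aging = subprocess.check_output(["chage", "-l", entry.pw_name])
                print(entry.pw_name, entry.pw_uid, entry.pw_dir)
                print(aging.decode())

        if __name__ == "__main__":
            audit_users()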

  • Making CopySourceAsHtml add-on work with VS2010

    - by DigiMortal
    As there are still bloggers who use the CopySourceAsHtml add-on for Visual Studio to get syntax-highlighted code into their blog posts, and there is no guidance on the CSAH site on how to make it work with Visual Studio 2010, I will give my guidance here. Almost all code in this blog is syntax highlighted by this add-on (read more in my post Visual Studio add-in: CopySourceAsHTML). The last version of CSAH is available for VS2008, but it is easy to make it work with VS2010. Just follow these steps:

    1. Close VS2010 if it is open.
    2. Go to the folder MyDocuments\Visual Studio 2010.
    3. Move to the AddIns subfolder (create it if there is no such subfolder).
    4. Create a file called CopySourceAsHtml.AddIn and open it in a text editor.
    5. Paste the following XML into the editor:

        <?xml version="1.0" encoding="utf-8" standalone="no"?>
        <Extensibility xmlns="http://schemas.microsoft.com/AutomationExtensibility">
          <HostApplication>
            <Name>Microsoft Visual Studio Macros</Name>
            <Version>10.0</Version>
          </HostApplication>
          <HostApplication>
            <Name>Microsoft Visual Studio</Name>
            <Version>10.0</Version>
          </HostApplication>
          <Addin>
            <FriendlyName>CopySourceAsHtml</FriendlyName>
            <Description>Adds support to Microsoft Visual Studio 2010 for copying source code, syntax highlighting, and line numbers as HTML.</Description>
            <Assembly>JTLeigh.Tools.Development.CopySourceAsHtml, Version=3.0.3215.1, Culture=neutral, PublicKeyToken=bb2a58bdc03d2e14, processorArchitecture=MSIL</Assembly>
            <FullClassName>JTLeigh.Tools.Development.CopySourceAsHtml.Connect</FullClassName>
            <LoadBehavior>1</LoadBehavior>
            <CommandPreload>0</CommandPreload>
            <CommandLineSafe>0</CommandLineSafe>
          </Addin>
        </Extensibility>

    6. Save the file and close it.
    7. Run VS2010 and activate the add-on if it is not activated yet.

    That's it. If you are a heavy user of CSAH then I recommend you bookmark this post. :)

  • Should I force users to update an application?

    - by Brian Green
    I'm writing an application for a medium-sized company that will be used by about 90% of our employees and our clients. Planning for the future, we decided to add functionality that verifies that the version of the program that is running is a version we still support. Currently the application force-quits if the version is not among our supported versions. Here is my concern. Hypothetically, in version 2.0.0.1, method "A" crashes and burns in glorious fashion while method "B" works just fine. We release 2.0.0.2 to fix method A and deprecate version 0.1. Now, if someone is running 0.1 to use method B, they will be forced to update to fix something that isn't an issue for them right now. My question is: will the time saved not troubleshooting old, unsupported versions outweigh the cost in usability?
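
    For reference, the version gate being described can be a few lines; this sketch is Python with a hypothetical version scheme and whitelist (the real application's scheme may differ):

        SUPPORTED = {"2.0.0.2", "2.0.0.3"}  # hypothetical set of still-supported builds

        def parse(version):
            # "2.0.0.2" -> (2, 0, 0, 2), so versions compare numerically, not as strings
            return tuple(int(part) for part in version.split("."))

        def enforce_supported(current, supported=SUPPORTED):
            """Force-quit policy from the post: refuse to run unsupported versions."""
            if parse(current) not in {parse(v) for v in supported}:
                raise SystemExit("This version is no longer supported; please update.")

    A softer variant would only warn here, and force-quit solely when the running version contains a known-broken method, which would address the method-B objection above.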

  • Storage device manager regarding NTFS automount at boot time

    - by muneesh
    I am using Storage Device Manager to auto-mount an NTFS file system at boot time. I repeatedly try to uncheck the 'read only mode' checkbox in the assistant option of Storage Device Manager, but I am not able to auto-mount my NTFS partition in read/write mode. Can you suggest a solution to this problem? Remember, I am repeatedly trying to uncheck the read-only checkbox but am not able to do it!

  • DXperience v2010 vol 2 Review

    - by kulrom
    Hi guys, this is my second review of these controls, and it will be quite different from the first. A few months ago I was engaged by a client to develop a web-based application. He asked me (I guess they all ask the same) to make it good looking and attractive. The first thing I thought of when I heard this was "I must renew my DXperience subscription", and now I am glad I did. Before I continue, I would like to say something to those readers who are totally new to DXperience. Guys, we all know that one of the more frustrating things for ASP.NET developers is designing a good-looking application. Well folks, your troubles are over! DXperience takes much of the agony out of developing and designing an outstanding web application. Here we go!... (read more)

  • What simple offline GUI database should I use for this application?

    - by gcc
    I am looking for an open source application. The application should have: database support (create two or three tables) and a GUI (so what I have created can be seen). Example: assume that I have created a table X_table:

        | A | B | C | D |

    After creating the table, I load data:

        | A | B  | C | D |
        | 1 | 11 | b | f |
        | 3 | 12 | a | o |
        | 4 | 13 | r | o |

    When I open the application, not to load data but just to look at it, I want to see the data in a graphical interface. Is there any open source application with the features above? The application can be very simple: no internet connection, support for only one database, static table creation (once created, never changed). The application should run on Ubuntu 12.04 and/or Windows. In other words, I want a database viewer and editor. EDIT: I should be able to load a PDF file, image, etc., or give the path of the file to the application. This link can serve as a reference for my question. (The interface should be like this, just a list.)
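
    For what it's worth, the data side described above can be modeled in a few lines of SQLite; this sketch covers only the table and rows (the column types are guesses), not the GUI viewer being asked for:

        import sqlite3

        con = sqlite3.connect("x_table.db")
        con.execute("CREATE TABLE IF NOT EXISTS x_table (A INTEGER, B INTEGER, C TEXT, D TEXT)")
        con.executemany(
            "INSERT INTO x_table VALUES (?, ?, ?, ?)",
            [(1, 11, "b", "f"), (3, 12, "a", "o"), (4, 13, "r", "o")],
        )
        con.commit()
        con.close()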

  • how to install gerix on ubuntu 12.04 using backtrack repositories?

    - by werever
    I got instructions from here, and I found several sites with similar instructions, but something is wrong with the repositories on 12.04. This should work for 11.10 and previous versions. Any ideas how to install the BackTrack repos and some BackTrack apps on 12.04? I read about downloading the .deb package for gerix, manually making the same folder structure that gerix uses on BackTrack, and then manually running the Python script, but this way didn't work for me either. How can I get gerix installed and working on my Ubuntu 12.04 system?

  • how much knowledge do you need to call yourself a programmer?

    - by nore
    There is a guy who calls himself a C/C++ programmer, but what does he actually know? What knowledge of C++ does he have? There is so much to know about C/C++. Does he know the core language? Does he know Visual C++? Does he know how to program with the WIN API? Does he know how to program on Linux with GTK? Network programming? The real question is: what do you need to know to be called a C/C++ programmer? I ask because I know C and I really do not feel like I own the power of programming... please illuminate my path.

  • Is there an application or method to log data transfers?

    - by Gaurav_Java
    My friend asked me for some files, and I let him take them from my system. But I did not watch him do it, and I was left with a doubt: what extra files or data did he take from my system? I was wondering: is there any application or method which shows what data was copied to which USB device (showing its name if available, or otherwise its device ID), and what data was copied onto the Ubuntu machine? Something like a history of USB and system data transfers. I think this feature exists in KDE. This would be really useful in many ways, as a real-time monitoring utility for USB mass storage device activity on any machine.
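
    For the device-history half of this, a small pyudev watcher could record every USB partition as it is attached. This is a sketch of an assumed approach (it requires the pyudev package), and it does not log which files were copied; file-level auditing would need something like auditd:

        import pyudev

        context = pyudev.Context()
        monitor = pyudev.Monitor.from_netlink(context)
        monitor.filter_by(subsystem="block", device_type="partition")

        # Block on each udev event, logging USB partitions as they appear.
        for device in iter(monitor.poll, None):
            if device.action == "add" and device.get("ID_BUS") == "usb":
                print("attached:", device.device_node,
                      device.get("ID_FS_LABEL", "(no label)"))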

  • Which Linux OS should I use?

    - by dylan0005
    I have used Ubuntu 10.04 and 12.04, but my problem is disk space: I have just 4 GB of HDD! Awesome, considering all the hardware that exists nowadays. It's an Asus eeePC 900 notebook with 2 GB of RAM and an Intel CPU. So, which is the best OS for me to use? I need one that has no compatibility problems and also isn't too old. I've tried some of the lightweight versions like Puppy, Slax, LPS (the US Army OS), Precise... but I don't like any of them. What do you think about Debian or Linux Mint?

  • makeMKV setup error

    - by PitaJ
    When I run sudo bash configure (./configure doesn't work), I get this:

        checking whether we are cross compiling... configure: error: in `/media/pitaj/Shared/Documents/makeMKV/makemkv-oss':
        configure: error: cannot run C compiled programs.
        If you meant to cross compile, use `--host'.
        See `config.log' for more details

    In config.log, it says that gcc -V isn't valid. I'm following this tutorial: http://www.makemkv.com/forum2/viewtopic.php?f=3&t=224
