Search Results

Search found 11930 results on 478 pages for 'shared machines'.

Page 199/478 | < Previous Page | 195 196 197 198 199 200 201 202 203 204 205 206  | Next Page >

  • How can I have the passphrase for a private key remembered for a user?

    - by Jon Cram
    I have a collection of web services running on Ubuntu Server 12.04 that pull code from a GitHub repository. These services run under a specific user (let's call that user 'example'). In /home/example/.ssh/id_rsa is the private key associated with the relevant GitHub account. When performing an operation such as git pull I am greeted with: Enter passphrase for key '/home/simplytestable/.ssh/id_rsa':. Enter the correct passphrase and all is ok. The same private key is present on local development Ubuntu Desktop 12.04 machines and no passphrase is asked for. I'd like to have the passphrase remembered so that after entering it once it is never asked for again. This will aid in automating various web service updates. I'm guessing that the passphrase needs to be stored in the relevant user's keychain so that I don't have to enter it every time the private key needs to be unlocked. How can I achieve this?
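
    A minimal sketch of the usual approach, assuming the stock OpenSSH tools on the server: ssh-agent caches the decrypted key for a session, and the keychain package (a separate install) can share one agent across logins so the passphrase is entered only once per reboot:

        # cache the key for this session; prompts for the passphrase once
        eval "$(ssh-agent -s)"
        ssh-add ~/.ssh/id_rsa

        # or, with the 'keychain' package installed, add this to ~/.bashrc
        # so every login for user 'example' reuses one agent:
        eval "$(keychain --eval --quiet ~/.ssh/id_rsa)"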

    Read the article

  • Is port scanning considered harmful?

    - by Manoj R
    If an application scans the ports of other machines to find out whether a particular service/application is running, will it be considered harmful? Is this treated as hacking? How else can one find out which port the desired application is running on (without user input)? Let's say I only know the port range in which the other application could be running, but not the exact port. In this case, my application probes each port in the range to check whether the other application is listening on it, using an already-defined protocol. Is this a normal design, or is it considered harmful to security?
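
    As an illustration of the probing design described above (leaving the etiquette question aside), the check can be as small as a loop around netcat; the host and range here are hypothetical:

        # probe TCP ports 8000-8100 on a hypothetical host, 1-second timeout each
        for p in $(seq 8000 8100); do
            nc -z -w 1 192.0.2.10 "$p" && echo "listening on port $p"
        done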

    Read the article

  • How To Use DOSBox for Windows 7

    While many modern games use every bit of technology available to give gamers the latest in graphics, sound, and advanced gameplay, some just cannot duplicate the fun that old games used to offer. A lot of fun yet old DOS games will not run on modern computers or operating systems like Windows 7, keeping you from experiencing nostalgic gaming bliss. Thanks to an emulator called DOSBox, however, you can enjoy many old DOS games on your Windows 7 machines, and it does not matter whether you are running the 32-bit or 64-bit version of the operating system...
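
    If you have never used it, a typical first session at the DOSBox prompt looks something like this (the folder and game names are assumptions):

        Z:\> mount c c:\oldgames
        Z:\> c:
        C:\> cd game
        C:\GAME> game.exe

    The mount command maps an ordinary Windows folder to a DOS drive letter; after that the game runs as if installed on a real DOS machine.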

    Read the article

  • Forward .html/.htm to .php with web.config

    - by PhilipK
    I'm moving a site from my Linux-hosted server to a client's Windows-hosted server. The .htaccess file no longer works, and I'm told that Windows servers use web.config instead. How can I forward all users accessing .html and .htm files to the equivalent .php file?

    Server info:
    OS/Hosting type: Windows / shared hosting (GoDaddy)
    .NET runtime version: ASP.NET 2.0/3.0/3.5
    PHP version: PHP 5.2
    IIS version: IIS 7.0
    Data center: US Regional

    EDIT: A friend told me the following should work, but it has no effect on the site.

        <configuration>
          <system.webServer>
            <handlers>
              <add name="PHP-FastCGI" verb="*" path="*.html" modules="FastCgiModule"
                   scriptProcessor="c:\php\php-cgi.exe" resourceType="Either" />
            </handlers>
          </system.webServer>
        </configuration>
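
    If mapping .html straight to the PHP handler keeps failing, an alternative sketch uses the IIS URL Rewrite module to rewrite .html/.htm requests onto the matching .php files. This assumes the URL Rewrite module is available on the shared plan, which is worth confirming with GoDaddy:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="html-to-php">
                  <match url="^(.*)\.html?$" />
                  <action type="Rewrite" url="{R:1}.php" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>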

    Read the article

  • Compilable modern alternatives to C/C++

    - by Jeremy French
    I am considering writing a new software product. Performance will be critical, so I am wary of using an interpreted language or one that uses an emulation layer (read: Java). Which leads me to thinking of using C (or C++); however, these are both rather long in the tooth, and I haven't used either for a long time. I figure in the last 20 years someone should have created something that is reasonably popular, nice to code in, and compiled. What more modern alternatives are there to C for writing high-performance compiled code? Edit, in response to comments: if C++ is a different beast than it was 15 years ago, I would consider it; I guess I had an assumption that it had some inherent problems. Parallelisation would be important, but probably not across multiple machines.

    Read the article

  • Microsoft and Stade Toulousain put IT into their clothing: what applications can you imagine for these new garments?

    IT soon woven into our clothes. A tweeting dress, a promotional T-shirt for a rugby club: what applications can you imagine for these new generations of garments? In collaboration with Gordon Fowler. "The Printing Dress" is a creation by two experienced designers working for Microsoft Research, Asta Roseway and Sheridan Martin Petit. Made from black and white rice paper, the dress incorporates buttons, sewn onto the bodice, that recall the keys of old typewriters. A laptop, a projector, and four circuit boards are also built in. Although still a prototype...

    Read the article

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted a comprehensive online forum that shared a complete path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions; below are highlights of the most informative.

    Q: DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment to test it out?
    A: You should have an OEM 12c environment for DBaaS administration and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities, for example.

    Q: How does this benefit companies having their own data center?
    A: It allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility, and reduced risk. The benefits to the (internal) consumers of services are much faster provisioning and response to changes in business requirements.

    Q: From a deployment perspective, is DBaaS solely the DBA's job?
    A: The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

    Q: The purpose of self service seems to be that the end user does not rely on the DBA. I just need to give him a template; he decides how much AMM he needs. Why should I set it one by one? That doesn't seem to be the purpose of self service.
    A: Most customers we have worked with define a standardized service catalog with a few (2 to 5) different classes of service. For each of these classes there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once; each user then simply selects from the options offered in the catalog.

    Q: Looking at the DBaaS service definition, it seems no different from a service definition provided by a well-run DBA team. Why do you attribute it to DBaaS?
    A: There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their Service Definition.

    Q: How does Database as a Service impact or help with "Click to Compute" or deploying a database in cloud infrastructure?
    A: DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud. As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant, and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

    Q: How exactly is DBaaS different from a traditional database? Storage/OS/DB all together to 'transparently' provide service to applications? Will there be across-database access by application/user?
    A: Some key differences are: 1) the services run on a shared platform; 2) the services can be rapidly provisioned (under 15 minutes); 3) the services are dynamic and can be relocated, grown, or shrunk as needed to meet business needs, rapidly and without disruption; 4) the user is able to provision the services directly from a standardized service catalog.

    Q: With 24x7x365 databases it's difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, and batch jobs. How does pluggable database handle this, and the differing patching/downtime needs of the apps the databases might be serving?
    A: You can gather stats in Oracle Multitenant the same way you had been in regular databases. Regarding patching/upgrading, Oracle Multitenant makes patching and upgrades very efficient in that you can pre-provision a new version/patched multitenant database in a different ORACLE_HOME, then unplug a PDB from its CDB and plug it into the newer/patched CDB in seconds.

    Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.

    Read the article

  • Faulty memtest result

    - by dhojgaard
    I've been a bit suspicious about my RAM lately; it doesn't seem to work like I expect it to. For instance, I run a lot of virtual test environments in VMware Workstation, and lately Ubuntu starts to lag when running just 4 virtual machines with 512 MB dedicated to each. BTW, I have 6 GB of memory on my laptop. This did not use to be a problem on my last laptop, which had far lower CPU resources. So I was led to try a memtest after reading some websites about RAM testing. I ran a memtest overnight, but when I woke up this morning it was just looping and I could not exit or do anything. It was just looping the number of errors beside test 7, which you can see in the lower right part of the screenshot. Can anyone interpret this screenshot for me? Do I have a faulty set of RAM?

    Read the article

  • Resources for Virtual Machine programming

    - by good_computer
    I am a beginner (a little more than that) programmer of C. I am really interested in the field of virtual machines. When I read about the Python VM, the PyPy project, the advancements in JVM technology, Google V8, and the Erlang VM, I get really excited about these amazing pieces of technology and want to get my hands dirty building them or contributing to one of these projects. I need to know:
    - what things (languages, concepts, algorithms, math, etc.) I need to learn to be able to build a virtual machine
    - any books or other resources that will be helpful
    - career prospects for a virtual machine engineer (this is least important for me for now)
    (One more side question: somewhere I'd read something like "the JVM is on the cutting edge of virtual machine technology", that it is the most advanced VM so far; is that true?) Please give me a LONG answer detailing all that you know.

    Read the article

  • While upgrading a virtual machine, do I need to install a bootloader?

    - by Bond
    I have a virtualization server with a few virtual machines running on top of it. All this was done using Ubuntu Server edition with KVM, using virt-manager over an SSH connection. The VMs are Lucid 10.04 64-bit. When I upgrade them via apt-get upgrade over SSH, partway through an ncurses screen asks me whether it should install a bootloader, with Yes or No to select. I have no clue what to select here, so I cancel the upgrade. Since it is a production machine I cannot guess at anything like this. So let me know what the correct choice would be.

    Read the article

  • Parleys Testimonial at GlassFish Community Event, JavaOne 2012

    - by arungupta
    Parleys.com is an e-learning platform that provides a unique experience of viewing presentations online and offline, with integrated movies and chaptering, from top-notch developer conferences and about 40 JUGs all around the world. Stephan Janssen (the Devoxx man and Parleys webmaster) presented at the GlassFish Community Event at JavaOne 2012 and shared why they moved from Tomcat to GlassFish. The move paid off, as GlassFish was able to handle 2000 concurrent users very easily. Now they are also running the Devoxx CFP and registration on this updated infrastructure. GlassFish clustering, the asadmin CLI, application versioning, and the JMS implementation are some of the features that made them a happy user. Recently they migrated their application from Spring to Java EE 6. This allows them to avoid getting locked into proprietary frameworks and also to avoid 40 MB WAR file deployments. A stateless application, JAX-RS, MongoDB, and Elastic Search are their magic formula for success there. Watch the video below showing him in full action. More details about their infrastructure are available here.

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I've been an Agile coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It's important to realize that "Agile" is not a black/white, yes/no thing. Teams can be agile to varying degrees; I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the Definition of Done as a technique to continuously improve their agile maturity over time.

    We're probably all familiar with the concept of "Done Done", which represents what *actually* being done means for a feature. It is not just the moment a developer says he's done, right after writing the last line of code that makes the feature kind-of work. Done Done means the coding is done, it's been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To internalize the concept of "Done Done", teams usually get together and come up with a Definition of Done (DoD) that lists all the activities that need to be completed before a feature is considered Done Done.

    The Done Done technique is typically applied only to features (aka user stories). What I do is extend it to apply to several concepts: user stories, sprints, releases (and sometimes check-ins). During project kick-off I'll usually sit down with the team and go through an exercise of creating DoDs for each of these concepts (stories/sprints/releases). We'll usually start by just brainstorming a bunch of activities that could end up in these various DoDs. Here are some examples:

    - Code reviews
    - StyleCop
    - FxCop
    - User manuals updated
    - Architecture docs updated
    - Tested by QA
    - Tested by UAT
    - Installers created
    - Support knowledge base updated
    - Deployment instructions (for Ops) written
    - Automated unit tests run
    - Automated integration tests run

    Then we arrange these activities into the place they occur today (e.g. do you do UAT testing only once per release? every sprint? every feature?). If the team was previously waterfall, most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the user stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don't have to move them all down immediately; as a team we figure out what we're capable of moving down to the sprint cycle and story cycle right away, and that becomes our starting DoDs. Over time the team makes an effort to keep moving activities down from Release to Sprint to Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each user story is completed: they would need to be updating user manuals, creating installers, and doing UAT testing (typical release-cycle activities) on every single user story. They may never actually reach that point, but they should envision it, and strive to keep driving the activities down closer to the user story cycle as they mature.

    This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time. Sure, there are other aspects to maturity outside of this, but it's a great technique, and easy to visualize, to drive agility into the team. Just keep moving those activities (aka "gates") down the board from Release to Sprint to Story. Here is an example of what a recent client of mine had for their DoDs (this is from memory, so probably not 100% accurate):

    Release
    - Create/update deployment instructions for Ops
    - Instructional videos updated
    - Run manual regression test suite
    - UAT testing (in this case, that meant deploying to an environment shared across the enterprise that mirrored production and asking other business groups to test their own apps to ensure we didn't break anything outside our system)

    Sprint
    - Deploy to UAT environment (but not necessarily actually request that UAT testing occur)
    - User guides updated
    - Sprint features video created (in this case we decided to create a video each sprint showing off the progress, a video version of the sprint demo)

    User Story
    - Manual test scripts developed and run
    - Tested by BA
    - Deployed in shared QA environment using the automated deployment process
    - Peer code review

    Code Check-In
    - Compiled (warning-free)
    - Passes StyleCop
    - Passes FxCop
    - Create installer packages
    - Run automated tests
    - Run automated integration tests

    PS: one of my clients had a great question when we went through this activity. They said that if a sprint is by definition done when the end date rolls around (time-boxed), isn't a DoD on a sprint meaningless, since it's done on the end date regardless of whether those other activities are complete? My answer is that while that statement is true (the sprint is done regardless when the end date rolls around), if the DoD activities haven't been completed I would consider the sprint a failure, similar to not completing what was committed/planned (failure may be too strong a word, but you get the idea). In the retrospective, that becomes an agenda item to discuss and understand why we weren't able to complete the activities we agreed would need to be completed each sprint.

    Read the article

  • How to organise projects with dependencies on BitBucket?

    - by Timwi
    Both Mercurial and BitBucket make one fundamental assumption: 1 repo = 1 project. If I have a project that has a dependency (a library) which is shared by many projects, this assumption gets in the way. Now it is no longer possible to have a separate BitBucket page for each project while still being able to commit atomic revisions to multiple projects. If I put all the projects into one repo, they all become one “project” on BitBucket. If I put them in separate repos, it is no longer possible to know which version of the library project was in use at revision X of a dependent project. How is this situation normally solved on BitBucket, or is there explicitly no support for this common scenario?
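
    One conventional answer on BitBucket is Mercurial subrepositories: each project keeps its own repo (and its own BitBucket page), while the parent repo records in .hgsubstate exactly which revision of the library each of its commits used. A minimal sketch with hypothetical URLs:

        # inside the dependent project's working copy
        hg clone https://bitbucket.org/you/sharedlib lib
        echo "lib = https://bitbucket.org/you/sharedlib" > .hgsub
        hg add .hgsub
        hg commit -m "track sharedlib as a subrepository"
        # from now on, each commit of the parent pins the lib revision in .hgsubstate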

    Read the article

  • Samba installation failed on Ubuntu 12.10

    - by Ayyaz
    I was trying to install Samba to access a shared printer on a Windows PC connected via the office network, and the following response came out of the terminal. Please guide me on how to install Samba or an alternative.

        crm@crm-HP-G62-Notebook-PC:~$ sudo apt-get install samba
        [sudo] password for crm:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         samba : Depends: samba-common (= 2:3.6.6-3ubuntu4) but 2:3.6.6-3ubuntu5 is to be installed
                 Depends: libwbclient0 (= 2:3.6.6-3ubuntu4) but 2:3.6.6-3ubuntu5 is to be installed
                 Recommends: tdb-tools but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.
        crm@crm-HP-G62-Notebook-PC:~$
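
    The mismatch between 2:3.6.6-3ubuntu4 and 2:3.6.6-3ubuntu5 usually means the package index is out of step with the mirror, or part of the Samba stack was upgraded ahead of the rest. A sketch of the usual recovery steps, to adapt as needed:

        sudo apt-get update                              # refresh the package index first
        sudo apt-get install samba-common libwbclient0  # bring the dependencies to matching versions
        sudo apt-get install samba
        # if apt still reports held broken packages:
        sudo apt-get -f install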

    Read the article

  • What would you change about C# if you could?

    - by Casebash
    Assume that your objectives are the same as for C#. I don't want an answer like "I'd write Python instead"; I'm only interested in changes to the language itself, not to the standard library. (Perhaps the library would be interesting to discuss too, but as it is shared between many languages it is better discussed in another question.) Please post a single idea per answer; if you want to post several ideas, post several separate answers. Thanks!

    Read the article

  • Extremely slow transfer speed Ubuntu -> Windows

    - by Hailwood
    I have two laptops: one running Ubuntu 12.04 (EXT4), the other running Windows 7 (NTFS). I am copying over 40 GB of data (one file) from the Ubuntu laptop to the Windows laptop (browsing the shared folder on Ubuntu from Windows, then copy/paste), but I am getting transfer speeds topping out at ~700 KB/s. Surely this is not right. I am transferring via wifi on both laptops. My download speeds can reach 7-8 MB/s on both laptops, so I know it is not the wifi cards or the router topping out.

        wlan0  Link encap:Ethernet  HWaddr 84:4b:f5:db:b4:85
               inet addr:192.168.1.66  Bcast:192.168.1.255  Mask:255.255.255.0
               inet6 addr: fe80::864b:f5ff:fedb:b485/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:11941185 errors:0 dropped:0 overruns:0 frame:0
               TX packets:11306693 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:10087111370 (10.0 GB)  TX bytes:7843524888 (7.8 GB)

    Read the article

  • How to Implement Single Sign-On between Websites

    - by hmloo
    Introduction

    Single sign-on (SSO) is a way to control access to multiple related but independent systems: a user only needs to log in once and gains access to all the other systems. There are a lot of commercial systems that provide single sign-on solutions, and you can also choose open-source solutions like OpenSSO, CAS, etc. Both of those use centralized authentication and provide a more robust authentication mechanism; but if each system has its own authentication mechanism, how do we provide a seamless transition between them? Here I will show you one approach.

    How it Works

    The method we'll use is based on a secret key shared between the sites. The origin site builds a hashed authentication token from that key and some other parameters, and redirects the user to the target site.

        Variable     Status     Description
        ssoEncode    required   hash(ssoSharedSecret + "," + ssoTime + "," + ssoUserName)
        ssoTime      required   timestamp in YYYYMMDDHHMMSS format, used to prevent replay attacks
        ssoUserName  required   unique username; required when a user is logged in

    Note: the variables are sent via POST for security reasons.

    Building a Single Sign-On Solution

    The origin site has to: 1. create the URL for the request; 2. generate the required authentication parameters; 3. redirect to the target site.

        using System;
        using System.Text;
        using System.Web.Security;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string postbackUrl = "http://www.targetsite.com/sso.aspx";
                string ssoTime = DateTime.Now.ToString("yyyyMMddHHmmss");
                string ssoUserName = User.Identity.Name;
                string ssoSharedSecret = "58ag;ai76"; // get this from config or similar
                string ssoHash = FormsAuthentication.HashPasswordForStoringInConfigFile(
                    string.Format("{0},{1},{2}", ssoSharedSecret, ssoTime, ssoUserName), "md5");
                string value = string.Format("{0}:{1},{2}", ssoHash, ssoTime, ssoUserName);

                // render a self-submitting form that POSTs the token to the target site
                Response.Clear();
                StringBuilder sb = new StringBuilder();
                sb.Append("<html>");
                sb.AppendFormat(@"<body onload='document.forms[""form""].submit()'>");
                sb.AppendFormat("<form name='form' action='{0}' method='post'>", postbackUrl);
                sb.AppendFormat("<input type='hidden' name='t' value='{0}'>", value);
                sb.Append("</form>");
                sb.Append("</body>");
                sb.Append("</html>");
                Response.Write(sb.ToString());
                Response.End();
            }
        }

    The target site has to: 1. get the authentication parameters; 2. validate them against the shared secret; 3. if the user is valid, authenticate and redirect to the target page; 4. if the user is invalid, show errors and return.

        using System;
        using System.Web.Security;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                if (!IsPostBack)
                {
                    if (User.Identity.IsAuthenticated)
                    {
                        Response.Redirect("~/Default.aspx");
                    }
                }
                if (Request.Params.Get("t") != null)
                {
                    string ticket = Request.Params.Get("t");
                    char[] delimiters = new char[] { ':', ',' };
                    string[] ssoVariable = ticket.Split(delimiters, StringSplitOptions.None);
                    string ssoHash = ssoVariable[0];
                    string ssoTime = ssoVariable[1];
                    string ssoUserName = ssoVariable[2];
                    DateTime appTime = DateTime.MinValue;
                    int offsetTime = 60; // get this from config or similar
                    try
                    {
                        appTime = DateTime.ParseExact(ssoTime, "yyyyMMddHHmmss", null);
                    }
                    catch
                    {
                        // show error
                        return;
                    }
                    // reject tokens outside the allowed clock-skew window (replay protection)
                    if (Math.Abs(appTime.Subtract(DateTime.Now).TotalSeconds) > offsetTime)
                    {
                        // show error
                        return;
                    }
                    bool isValid = false;
                    string ssoSharedSecret = "58ag;ai76"; // get this from config or similar
                    string hash = FormsAuthentication.HashPasswordForStoringInConfigFile(
                        string.Format("{0},{1},{2}", ssoSharedSecret, ssoTime, ssoUserName), "md5");
                    if (string.Compare(ssoHash, hash, true) == 0)
                    {
                        isValid = true;
                    }
                    if (isValid)
                    {
                        // do authenticate
                    }
                    else
                    {
                        // show error
                        return;
                    }
                }
                else
                {
                    // show error
                }
            }
        }

    Summary

    This is a very simple and basic SSO solution, and its main advantage is its simplicity: it only needs a single added page to do SSO authentication, and it does not require modifying the existing system infrastructure.

    Read the article

  • How do I disable changing proxy settings?

    - by gap
    I've got several machines, running 14.04.1, for kids to access. Each machine has accounts for each kid, as well as accounts for several adults. Although I've set a system-wide proxy use policy, it doesn't get used by Chrome. Instead, I had to log into each kid's account and set a proxy use policy, pointing to tinyproxy/dansguardian, for safe internet access. The problem is that, should the kids gain an ounce of computer savvy, they'll figure out that they can launch the proxy settings config panel from Unity, change the proxy settings to None, and completely bypass the "safe" internet scheme I've set up. Can anyone tell me how I can prevent those user accounts from modifying their proxy settings (not the "system wide" ones... those aren't used... the per-user settings)? Thanks
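
    On 14.04 with Unity, the proxy panel writes per-user dconf keys under /system/proxy, and dconf supports system-wide locks that make keys read-only for users. A sketch of the standard lockdown layout (the file paths are the stock dconf convention; the values are assumptions for this setup):

        # /etc/dconf/profile/user
        user-db:user
        system-db:local

        # /etc/dconf/db/local.d/00-proxy  (force manual proxy settings)
        [system/proxy]
        mode='manual'

        # /etc/dconf/db/local.d/locks/proxy  (lock the key against user changes)
        /system/proxy/mode

        # rebuild the binary database after editing
        sudo dconf update

    Once locked, the panel can no longer switch the setting to None; this only helps, of course, to the extent that Chrome is reading the system proxy settings.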

    Read the article

  • Subdomain redirects to different server, maintaining original URL

    - by Jason Baker
    I just took over the webmaster position in my organization, and I'm trying to set up a development environment separate from the production environment. The two environments are hosted with different companies, both on shared hosting plans. Right now, production is at domain.com and development is at domain2.com. What I want is to direct my development team to dev.domain.com. More importantly, I don't want the URL to change from dev.domain.com back to domain2.com, but I also want it to be responsive to page changes. For example, if my dev team navigates from dev.domain.com to a page (called "page"), I'd like the URL to show as dev.domain.com/page. Is this possible, or am I just dreaming?
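
    It is possible if the host serving dev.domain.com can act as a reverse proxy. On an Apache plan that means mod_rewrite with proxying (mod_proxy) enabled, which many shared hosts disallow, so treat this .htaccess sketch as an assumption to verify with the host:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^dev\.domain\.com$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [P,L]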

    Read the article

  • Where are the systray icons for Dropbox in Ubuntu desktop 13.04 (minimal)?

    - by samvv
    I reinstalled Ubuntu desktop using the minimal CD image and the following command:

        $ sudo apt-get install ubuntu-desktop --no-install-recommends

    After that I used Ubuntu Software Center to make sure Unity supports application indicators: http://i.imgur.com/bYF162w.png. Everything works great, except for Dropbox. For some reason the icon doesn't appear in the tray, even though the application is running. Steam, on the other hand, runs just fine, so it seems there is nothing wrong with the tray itself. According to this post the tray icons should be in /usr/share/icons/hicolor/22x22/status, but it doesn't contain any Dropbox icons; neither do any of the other resolutions. That answer is a bit outdated, so I'm not entirely sure it still applies to the current version of Dropbox. I did the usual thing of reinstalling Dropbox with:

        sudo apt-get purge nautilus-dropbox
        sudo apt-get install nautilus-dropbox

    But that didn't solve anything either. Does somebody know how to fix this issue?
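
    One workaround often reported for this particular Dropbox/Unity tray problem is restarting the daemon with an empty D-Bus session address, which makes it fall back to an indicator that Unity displays. This is a community workaround, not an official fix:

        dropbox stop
        DBUS_SESSION_BUS_ADDRESS="" dropbox start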

    Read the article

  • Library and several small programs that use it: how should I structure my git repository?

    - by Dan
    I have some code that uses a library that I and others frequently modify (usually only by adding functions and methods). We each keep a local fork of the library for our own use. I also have a lot of small "driver" programs (~100 lines) that use the library and are used exclusively by me. Currently, I have both the driver programs and the library in the same repository, because I frequently make changes to both that are logically connected (adding a function to the library and then calling it). I'd like to merge my fork of the library with my co-workers' forks, but I don't want the driver programs to be part of the merged library. What's the best way to organize the git repositories for a large, shared library that needs to be merged frequently, and for a number of small programs whose changes are tied to changes in the library?
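
    One common layout splits the shared library into its own repository that everyone merges, and keeps the driver programs in a private repository that pins the library as a git submodule, so every driver commit records which library revision it was built against. A sketch with hypothetical URLs:

        # in the private 'drivers' repo
        git submodule add https://example.com/git/sharedlib.git lib
        git commit -m "pin sharedlib as a submodule"

        # later, after the library moves forward:
        cd lib && git pull origin master && cd ..
        git add lib
        git commit -m "bump sharedlib to latest"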

    Read the article

  • Firefox to integrate Weave natively, offering encrypted, anonymous synchronization by default

    Update, 25/05/10: Firefox will integrate Weave natively to offer encrypted, anonymous synchronization by default. Weave is the first "Mozilla service". It is in fact an extension that synchronizes Firefox data (such as bookmarks, passwords, etc.; read above) between several machines. The Mozilla Foundation has just decided to include this service natively in its browser. Nothing new under the sun compared to the competition? Yes and no. No, because it is true that some of Firefox's competitors already offer this. ...

    Read the article

  • Webhosting with custom database choice [closed]

    - by churchill614
    Possible duplicate: How to find web hosting that meets my requirements? I am trying to find somewhere to host a website which uses OrientDB as its database. My budget doesn't stretch to a dedicated server where I can configure everything as I need it. Rather, I am hoping to find a host, ideally UK-based, of the normal shared-server variety, that will allow me to install OrientDB (or will install it for me) on their server. Is anybody able to point me in a good direction for this, please? (Whilst UK is preferable, it is not essential.)

    Read the article

  • Windows 8 client virtualization

    - by John Paul Cook
    Hyper-V is coming to Windows 8, but you must have a processor that supports SLAT. Virtual machines created with Virtual PC aren't easily transferred to Windows 2008 Hyper-V and vice versa. With Windows 8, it will be easy to move VHDs from Windows 8 on your laptop or desktop to Windows 8 Server and back again. To find out if your processor supports SLAT, run coreinfo -v from a command window running as administrator. Download Coreinfo from here. My MacBook Pro supports SLAT, as this output shows: ...(read more)

    Read the article

  • MVC .Net, WebMatrix talk presentations and webinars

    - by subodhnpushpak
    I presented sessions on MVC .Net and WebMatrix. I covered what's new in MVC .Net and the architectural goodness of the MVC pattern. I also demonstrated how MVC 3 / MVC 4 harness HTML5 and mobile along with jQuery and Modernizr. PHP coding using MVC and WebMatrix, and other advanced stuff like hosting PHP on Windows or porting a MySQL DB to MSSQL, is also part of the demo in the sessions. The slide decks are available at the links below, and all the demos are recorded and shared there as well.

    WebMatrix (view more presentations from Subodh Pushpak)

    WebMatrix2 (view more presentations from Subodh Pushpak)

    The recordings / demos can be accessed at the links above. If you have any suggestions / ideas / comments, please do post.

    Read the article
