Search Results

Search found 10046 results on 402 pages for 'repository pattern'.

Page 204/402 | < Previous Page | 200 201 202 203 204 205 206 207 208 209 210 211  | Next Page >

  • URL Rewrite http to https EXCEPT files in a specific subfolder

    - by BrettRobi
    I am trying to force all traffic on my web site to use HTTPS, using the URL Rewrite 2.0 module added to IIS 7.5. I got that working and now need to exclude a couple of pages from using SSL. So I need a rule that redirects all URLs except those referencing this folder to HTTPS. I've been banging my head against the wall on this and am hoping someone can help. I tried creating a rule to match all URLs except those in a nossl subfolder, as in this example:

        <rule name="HTTP to HTTPS redirect" enabled="true" stopProcessing="true">
          <match url="(/nossl/.*)" negate="true" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Found" />
        </rule>

    But this doesn't work. Can anyone help?
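
    For comparison, a hedged sketch of the variant commonly suggested for this situation: match everything (so {R:1} stays populated) and exclude the folder with a negated condition on {URL}, which, unlike the match pattern, includes the leading slash. This is an illustration, not a verified fix:

        <rule name="HTTP to HTTPS redirect" enabled="true" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAll">
            <add input="{HTTPS}" pattern="off" />
            <add input="{URL}" pattern="^/nossl/" negate="true" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Found" />
        </rule>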

    Read the article

  • Why is my Quickly app full of fail?

    - by bstpierre
    I tried to use quickly on Ubuntu 12.04 to create an application, but it does not behave as described in that linked page. I don't get a popup when creating the application (see error below).

        % quickly create ubuntu-application foo
        Creating project directory foo
        Creating bzr repository and committing
        Launching your newly created project!
        (foo:16847): GLib-GIO-ERROR **: Settings schema 'org.gnome.desktop.interface' is not installed
        Congrats, your new project is setup! cd /tmp/foo/ to start hacking.

    It creates a project, but when I try to run, it crashes and burns:

        % cd foo
        % quickly run
        (foo:22639): GLib-GIO-ERROR **: Settings schema 'org.gnome.desktop.interface' is not installed

    Is this because I'm not using gnome-shell? What can I do to get a working project? (Edit: As a side note, I'd be willing to debug this myself, but I don't even get a traceback. What do I have to do to get quickly to give me a traceback?)
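
    The 'org.gnome.desktop.interface' schema normally ships in the gsettings-desktop-schemas package; a hedged first thing to try (whether that package is actually missing on this system is an assumption):

        sudo apt-get install gsettings-desktop-schemas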

    Read the article

  • Drawing different per-pixel data on the screen

    - by Amir Eldor
    I want to draw different per-pixel data on the screen, where each pixel has a specific value according to my needs. An example may be a random noise pattern where each pixel is randomly generated. I'm not sure what is the correct and fastest way to do this. Locking a texture/surface and manipulating the raw pixel data? How is this done in modern graphics programming? I'm currently trying to do this in Pygame but realized I will face the same problem if I go for C/SDL or OpenGL/DirectX.
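
    A minimal Pygame sketch of one way to do this, assuming NumPy is available: build the noise as an array and copy it to the surface in one call instead of locking and looping per pixel (window size is arbitrary and the event loop is omitted):

        import numpy as np
        import pygame

        pygame.init()
        screen = pygame.display.set_mode((320, 240))

        # One random RGB value per pixel, shaped (width, height, 3) as surfarray expects.
        noise = np.random.randint(0, 256, (320, 240, 3)).astype(np.uint8)
        pygame.surfarray.blit_array(screen, noise)   # copy the whole array onto the surface at once
        pygame.display.flip()

        # For raw per-pixel access without NumPy, pygame.PixelArray(screen) gives an indexable
        # view, but doing tens of thousands of Python-level assignments per frame is slow.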

    Read the article

  • How do I start working as a programmer - what do I need?

    - by giorgo
    I am currently learning Java and PHP, as I have some projects from university which require me to apply both languages: specifically, a Java GUI application connecting to a MySQL database, and a web application that will be implemented in PHP/MySQL. I have started learning the MVC pattern, Struts and Spring, and I am also learning PHP with Zend. My first question is: how can I find employment as a programmer/software engineer? I ask because I have sent my CV to many companies, but all of them stated that I need work experience. I really need some guidance on how to improve my career opportunities. At present, I work on my own and haven't collaborated with anyone on a particular project. I'm assuming most people create projects and submit them along with their CVs. My second question is: everyone has to make a start somewhere, but what if this somewhere doesn't come? What do I need to do to create the circumstances where I can progress forward? Thanks

    Read the article

  • Which VCS is more applicable for our workflow?

    - by Thomas Mancini
    Currently we have code stored on a shared network drive and do not use any kind of VCS. The code stored on our shared network drive is always being backed up. We would like to keep things as close to how they are now as possible, while using some kind of VCS software. I am envisioning a centralized workflow with each developer having a local copy of the code on his/her machine. We don't do any branching or working offline. Typically, when we spin off a new version, we just copy the current working directory to a new directory. I believe we would continue doing this and just create a repository for the new version. I would rather not get into an argument over which VCS is better; I am just hoping to get some opinions on which is best suited and most applicable for what we are trying to do.
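
    For illustration only, a sketch of how the "copy the working directory for each new version" step usually maps onto tags and branches in a centralized setup; Git and Mercurial are shown as examples, and the version names are made up:

        # Git
        git tag -a v2.0 -m "Version 2.0 as shipped"   # permanent marker instead of a directory copy
        git checkout -b version-3                     # ongoing work for the next version

        # Mercurial equivalent
        hg tag v2.0
        hg branch version-3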

    Read the article

  • Setting up ASP.NET structure for code

    - by user1175327
    I've always coded in C# MVC3 when developing web applications, but now I want to learn a bit more about developing web sites with plain ASP.NET. I'm wondering what a good setup for my code would be. For me, an MVC-like pattern seems a good way to go, but obviously ASP.NET doesn't have any router or controller classes, so I guess people set up their code differently when they do ASP.NET. I'm looking for more information on how to get started with this: not really the basics of ASP.NET, but something that focuses on a good code setup. Any good tutorials or information about this?

    Read the article

  • Google Maps API: Premier License or excess map loads?

    - by j0nes
    I am currently looking for a way to deal with the Google Maps API usage limits. I am planning a redesign of our page that will probably get around 2 million map loads per month, which will surely break the limit of 750,000 map loads per month available in the free version. If we pay for excess map loads, this means we would have to pay $5,000 per month. The other option would be to use a Premier license; however, there is very little information available on its usage limits and price. I have filled in the request form to get a custom offer from Google, but I have not received any response yet. Can any of the Premier license holders tell me which option will be cheaper for my usage pattern: paying for a Premier license, or paying for excess map loads?
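
    For context, a quick check of the $5,000 figure quoted above; the per-1,000 rate is inferred from those numbers, not taken from a Google price list:

        monthly_loads = 2000000   # expected traffic after the redesign
        free_quota = 750000       # free map loads per month
        rate_per_1000 = 4.0       # USD per 1,000 excess loads (assumed from the figures given)

        excess = monthly_loads - free_quota
        print(excess / 1000 * rate_per_1000)   # -> 5000.0 USD per month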

    Read the article

  • Is it a good idea to simplify a character-driven game engine to the point where it's unnecessary to learn scripting/programming?

    - by jokoon
    I remember, and I still think, that one cannot even make a prototype 3D game to test simple behaviors without using gigantic tools like Unity, or without knowing extensive C++ programming, design patterns, a decent or basic 3D engine, and so on. Since I know programming, I'm still luckier than those who need to learn programming before they can make anything: even scripted engines such as Unity are not for kids, and to my mind they tend to dictate their way of doing things, which is not the case with engines like Ogre or Irrlicht. I remember toying a little with the Blender game engine; it was possible to link states or something, I don't remember very well. Now I'm thinking that character-driven games occupy a big part of the game market. Do you think it is a good idea to make a character-driven game engine which only lets you build AI, instead of anything else?

    Read the article

  • Python Multiprocessing with Queue vs ZeroMQ IPC

    - by Imraan
    I am busy writing a Python application using ZeroMQ and implementing a variation of the Majordomo pattern as described in the ZGuide. I have a broker as an intermediary between a set of workers and clients. I want to do some extensive logging for every request that comes in, but I do not want the broker to waste time doing that. The broker should pass that logging request to something else. I have thought of two ways:

      1. Create workers that are only for logging and use the ZeroMQ IPC transport
      2. Use Multiprocessing with a Queue

    I am not sure which one is better or faster for that matter. The first option does allow me to use the current worker base classes that I already use for normal workers, but the second option seems quicker to implement. I would like some advice or comments on the above, or possibly a different solution.
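
    A minimal sketch of the first option, assuming the broker pushes each log record to a dedicated logging process over the ipc:// transport; the endpoint path and record format are made up for illustration:

        import zmq

        LOG_ENDPOINT = "ipc:///tmp/mdp-logging"   # hypothetical socket path

        def broker_logging_socket(ctx):
            # Inside the broker: fire-and-forget each record so request handling is not slowed down.
            sock = ctx.socket(zmq.PUSH)
            sock.connect(LOG_ENDPOINT)
            return sock   # later: sock.send_string("request-id=42 client=A worker=echo elapsed=3ms")

        def logging_worker():
            # Separate process: drain records and hand them to the real logging back end.
            ctx = zmq.Context.instance()
            sock = ctx.socket(zmq.PULL)
            sock.bind(LOG_ENDPOINT)
            while True:
                record = sock.recv_string()
                print(record)   # stand-in for writing to disk, syslog, etc.

    The multiprocessing.Queue variant is structurally the same (put() in the broker, a process blocking on get()); the ZeroMQ version just keeps the existing worker base-class machinery.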

    Read the article

  • Bitbucket and a small development house

    - by Marlon
    I am in the process of finally rolling out Mercurial as our version control system at work. This is a huge deal for everyone as, shockingly, they have never used a VCS. After months of putting the bug in management's ears, they finally saw the light and now realise how much better it is than working with a network of shared folders! In the process of rolling this out, I am thinking of different strategies to manage our stuff, and I am leaning towards using Bitbucket as our "central" repository. The projects in Bitbucket will solely be private projects and everyone will push and pull from there. I am open to different suggestions, but has anyone got a similar setup? If so, what caveats have you encountered?
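
    For illustration, the day-to-day commands this setup implies, assuming a private Mercurial repository hosted on Bitbucket; the repository URL is hypothetical:

        hg clone https://bitbucket.org/ourteam/ourproject    # one-time local copy per developer
        hg pull -u                                           # fetch and apply colleagues' changesets
        hg commit -m "Describe the change"                   # record work locally
        hg push                                              # publish to the central Bitbucket repository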

    Read the article

  • Cannot install build essential from the CD?

    - by munir
    After a fresh installation of Ubuntu 10.10 I tried to install build-essential from the Ubuntu installation CD. I put the CD in the drive and, in the software repositories settings, checked the box "Install from CD (Ubuntu 10.10 release Maverick Meerkat)". Then I reloaded the software repositories. Synaptic then tried to download some repository-related files but failed, as I didn't have an internet connection. Then I opened a terminal and ran sudo apt-get install build-essential. It asked whether I wanted to install build-essential (y/N); I typed y, but the terminal showed some errors and nothing was installed. I also tried to add the CD in the software repositories: I clicked Add and it prompted me to insert a CD even though the CD was still in the drive. I clicked OK and it said it could not find any CD. What is wrong?
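
    A hedged sketch of the command-line route usually suggested for this, assuming the CD is in the drive; it registers the disc with apt directly instead of going through the Software Sources dialog:

        sudo apt-cdrom add                   # scan the CD and add it to /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install build-essential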

    Read the article

  • How will wayland be delivered?

    - by Chris Woollard
    Mark just announced future support for Wayland in Ubuntu. I was just wondering the following (hopefully Mark can answer). Obviously this is quite a significant piece of work. When will a version be available to test? Will it be available as a package (or packages) in the repository, or will it be delivered as its own distribution, e.g. wayBuntu? When do you expect it to take over as the default (e.g. 11.11)? Thanks, Chris

    Read the article

  • Cannot enter password for sudo [duplicate]

    - by Michael
    This question already has an answer here: "add repository to ubuntu from terminal with pgp key" (3 answers)

    I have used Ubuntu for several years, and I cannot enter my password for sudo. This happens when I want to add a key to public.gpg for itunes10; it does not work. The password normally works with sudo, but not in the terminal when I enter:

        sudo wget -q "http://deb.playonlinux.com/public.gpg" -o- | sudo apt-get add -

    It says "sorry try again". I have just installed itunes10 and have to add a key with wget to public.gpg. When I run sudo apt-get update in the terminal, the password works fine, but not when using sudo wget. Can someone please help?
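
    For reference, a hedged sketch of the form this kind of command is usually published in: apt-key (not apt-get) consumes the key, and wget needs a capital -O to send the downloaded file to stdout. Whether this also resolves the password prompt issue is a separate question:

        wget -q "http://deb.playonlinux.com/public.gpg" -O- | sudo apt-key add -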

    Read the article

  • Ubuntu 12.10 - VirtualBox not sharing internet with guest system

    - by Fernando Briano
    I went from ArchLinux to Ubuntu on my dev box. I use VirtualBox to test web sites on Windows and IE. I have my Windows 7 VirtualBox image running on Ubuntu's VirtualBox. Back with ArchLinux, internet worked "out of the box" in the Windows guests. I left the default options in the guest's Network settings (NAT). The Windows machine shows as "connected to Ethernet" but reports "The DNS server isn't responding", so I can't access the internet from there. I tried searching Ubuntu's official docs but they seem pretty outdated. I tried using my old guests from Arch (which boot normally but have no internet) and creating a new guest from Ubuntu itself, but still get the same results. Update: I'm using VirtualBox 4.1.18 from Ubuntu's repository (apt-get install virtualbox).
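
    One commonly suggested tweak for DNS failures under NAT is to let VirtualBox forward DNS queries to the host's resolver; a hedged sketch, assuming the VM is named "Windows 7" (run it while the VM is powered off):

        VBoxManage modifyvm "Windows 7" --natdnshostresolver1 on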

    Read the article

  • Do delegates defy OOP

    - by Dave Rook
    I'm trying to understand OOP so I can write better OOP code, and one thing which keeps coming up is this concept of a delegate (using .NET). I could have an object which is totally self-contained (encapsulated); it knows nothing of the outside world... but then I attach a delegate to it. In my head this is still quite well separated, as the delegate only knows what to reference, but that by itself means the object has to know about something outside its world: that a method exists within another class! Have I got myself in a total muddle here, or is this a grey area, or is this actually down to interpretation (and if so, sorry, as that will be off topic I'm sure)? My question is: do delegates defy/muddy the OOP pattern?

    Read the article

  • Is "watermarking" code with random trailing whitespace a good way to detect plagiarism?

    - by paperjam
    Consider this:

        int f(int x)
        {
          return 2 * x * x;
        }

    and this:

        int squareAndDouble(int y)
        {
          return 2*y*y;
        }

    If you found these in independent bodies of code, you might give the two programmers the benefit of the doubt and assume they came up with more-or-less the same function independently. But look at the whitespace at the end of each line of code: the same pattern in both is surely evidence of copying. On a larger piece of code, correlation of random whitespace at line ends would be irrefutable evidence of a shared origin. Now, aside from the obvious weaknesses (e.g. visible in some editors, easily removed), I was wondering whether it was worth deploying something like this in my open source project. My industry has a history of companies ripping off open source projects.
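
    A minimal sketch of how such a watermark could be stamped onto a source file: derive a reproducible, per-project pseudo-random run of trailing spaces and append it line by line. The file name, seed, and pad length are arbitrary:

        import random

        def watermark(path, seed="my-project-fingerprint", max_pad=3):
            # Append a reproducible pseudo-random run of trailing spaces to every line.
            rng = random.Random(seed)            # same seed -> same whitespace pattern
            with open(path) as f:
                lines = f.read().splitlines()
            stamped = [line + " " * rng.randint(0, max_pad) for line in lines]
            with open(path, "w") as f:
                f.write("\n".join(stamped) + "\n")

        watermark("f.c")   # hypothetical source file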

    Read the article

  • Bzr to git migration

    - by Sardathrion
    I am planning to do two things on several large (several gigs) and old (several years) repositories:

      1. Move from bzr to git without losing the commit history.
      2. Restructure all the repositories, either using bzr or git. This will involve moving files/directories from one repository to another with their change history.

    Doing both at once would be foolish (I think!), but I am not sure which one should be done first. Any suggestions? Anything I should watch out for when migrating/restructuring?
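
    For the conversion step, a hedged sketch of the usual export/import route, assuming the bzr fastimport plugin (bzr-fastimport) is installed; paths and branch name are hypothetical:

        git init /tmp/converted
        cd /path/to/bzr-branch
        bzr fast-export --plain . | (cd /tmp/converted && git fast-import)
        cd /tmp/converted && git checkout -f master   # assumes the exported branch ref is master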

    Read the article

  • How do we install Unity-2D and dependencies offline?

    - by Takkat
    We have installed 11.04 32-bit on an old machine that has no internet connection and a graphics card that is not suitable for running Compiz or Unity. Still, we would like to run Unity-2D on this machine. We are aware of the answers to this question. Sadly, Keryx will not run on 11.04 32-bit because of unmet dependencies. Building an offline repository is not an option because of limited storage capacity. Is there any other convenient way to find, download, install, and eventually update unity-2d and all its dependencies (preferably from an OS-independent download path)?
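
    One hedged sketch of a manual route, assuming temporary access to another machine (or chroot) running the same 11.04 i386 release with up-to-date package lists; file names are illustrative:

        # On the connected machine: list every .deb an install of unity-2d would fetch
        apt-get install --print-uris -qq unity-2d | cut -d"'" -f2 > uris.txt
        wget -i uris.txt                     # download the packages into the current directory

        # Copy the .deb files to the offline machine, then install them together:
        sudo dpkg -i *.deb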

    Read the article

  • How to determine the version and origin of proprietary drivers installed by Additional Drivers?

    - by Bribles
    How can I tell which version and from which repository the Additional Drivers tool is trying to install the fglrx graphics driver? It says that I have a different version of the driver in use. I installed the driver from maverick/restricted and apt-cache tells me it's from a regular Ubuntu mirror. The installed version is the same as the candidate version. Can I get Additional Drivers to tell me what it would install if I activated the driver through it? Is it possible Additional Drivers just assumes it's a different version since it was installed by a different process?
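
    A few commands commonly used to cross-check which fglrx build is installed and where it came from (these assume the package and module names in the question):

        apt-cache policy fglrx            # installed vs. candidate version, plus the repository each comes from
        dpkg -s fglrx | grep -i version   # version recorded for the installed package
        modinfo fglrx | grep -i version   # version of the kernel module actually available to the system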

    Read the article

  • Should I limit my type name suffix vocabulary when using OOP?

    - by Den
    My co-workers tend to think that it is better to limit non-domain type suffixes to a small fixed set of OOP-pattern-inspired words, e.g.:

        *Service, *Repository, *Factory, *Manager, *Provider

    I believe there is no reason not to extend that set with more names, e.g. (a "translation" into the previous vocabulary is given in brackets):

        *Distributor (= *DistributionManager or *SendingService)
        *Generator
        *Browser (= *ReadonlyRepositoryService)
        *Processor
        *Manipulator (= *StateMachineManager)
        *Enricher (= *EnrichmentService)

    (*) denotes some domain word, e.g. "Order", "Student", "Item", etc. The domain is probably not complex enough to use specialized approaches such as DDD, which could drive the naming.

    Read the article

  • I have a library and several small programs that use it: how should I structure my git repositories?

    - by Dan
    I have some code that uses a library that I and others frequently modify (usually only by adding functions and methods). We each keep a local fork of the library for our own use. I also have a lot of small "driver" programs (~100 lines) that use the library and are used exclusively by me. Currently, I have both the driver programs and the library in the same repository, because I frequently make changes to both that are logically connected (adding a function to the library and then calling it). I'd like to merge my fork of the library with my co-workers' forks, but I don't want the driver programs to be part of the merged library. What's the best way to organize the git repositories for a large, shared library that needs to be merged frequently and a number of small programs that have changes that are connected to changes in the library?
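
    One layout that is often suggested for this split is a stand-alone library repository pulled into the drivers repository as a submodule; a hedged sketch with hypothetical URLs and paths:

        # One-time setup inside the drivers repository
        git submodule add git@example.com:team/shared-lib.git lib
        git commit -m "Track shared-lib as a submodule"

        # A logically connected change becomes two commits: one in lib/, one in the drivers repo
        cd lib && git commit -am "Add new helper function" && cd ..
        git commit -am "Call the new helper; record the new lib revision"

        # Fresh clones need the submodule initialised
        git submodule update --init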

    Read the article

  • Alsa doesn't work in vlc

    - by freebird
    ALSA audio output works fine from the terminal, e.g. aplay /usr/share/sounds/alsa/Noise.wav, but I have to change from the default to ALSA audio output in VLC. I found the setting under Tools > Preferences > Audio > Outputs. The issue is that when I change it to ALSA, I lose all sound. When I leave the default I get an annoying audio delay of about 200ms or 500ms, and from what I have found you have to use ALSA audio output to fix that issue.

    Updated 6-26-2011 10:28pm: To fix the ALSA audio output:

        sudo add-apt-repository ppa:ferramroberto/vlc
        sudo apt-get update
        sudo apt-get install vlc mozilla-plugin-vlc

    Then I opened Update Manager, where there were two updates for VLC; I installed them and rebooted. Now ALSA works fine and audio is in sync with video.

    Read the article

  • How to get rid of bookmarks in synced Chromium

    - by Lambda Dusk
    I'm using three Ubuntu systems in an irregular pattern, and since I use Chrome/Chromium anyway and have a Google account, I decided to make my life a bit easier and sync them. Now I am having a problem: when I remove bookmarks from my lists, they not only come back when I switch machines, they double. By now I have up to ten identical bookmarks in the list, and I spend a lot of time scrolling over them. Is there any way to remove them permanently? EDIT: Apps, too.

    Read the article

  • Lubuntu 13.04 Resolution Issue due to PPA makson96

    - by Choupa
    I've installed Lubuntu 13.04 on my notebook (it's the first time I've used Lubuntu). I had a graphical issue, apparently due to the graphics driver (ATI Radeon Xpress 1200), and followed this procedure to correct my problem:

        sudo add-apt-repository ppa:makson96
        sudo apt-get update
        sudo apt-get upgrade
        sudo apt-get install fglrx-legacy

    My graphical issue has been corrected, but now the screen shows a low resolution (1024x768) and I can't change it. The xrandr result:

        xrandr: failed to get size of gamma output default
        Screen 0: minimum 1024x768, current 1027x768, maximum 1024x768
        default connected 1024x768+0+0 0mmx0mm
        1024x768 0.0*

    I've already read this topic but I don't really understand how things work there. I'd really appreciate your help! Many thanks!

    Read the article

  • Useful git commit messages for merged branches

    - by eykanal
    As a follow-up to this question: If I'm working on a team by myself, I can maintain useful commit messages when merging branches by squashing all the commits to a single diff and then merging that diff. That way I can easily see what changes were introduced in the branch, and I have a single summary describing the feature/change/whatever that was accomplished in that branch when browsing the master branch. My question now is, how can I accomplish this when working with a team? In that situation, the branches will be pushed to a remote repository, meaning that I can't squash all the commits in the branch down to a single commit. If the branch is public, can I still have a single useful merge commit in the master branch? (By "useful" I mean that the commit in the master line tells me (1) a useful summary of what was done in the branch and (2) diffs of the same.)
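
    A hedged sketch of the approach usually suggested for shared branches: leave the branch's commits alone, but force a merge commit and put the summary there; the first-parent view of master then reads like the squashed history (the branch name is hypothetical):

        git checkout master
        git merge --no-ff feature/report-export -m "Add CSV/PDF export to reports"

        git log --first-parent --oneline master   # master reads as one summary entry per merged branch
        git diff HEAD^1 HEAD                      # the full diff the merge introduced, relative to master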

    Read the article
