Search Results

Search found 108599 results on 4344 pages for 'one click publish'.

  • Publish Git repository to SVN

    - by Ken Williams
    I and my small team work in Git, and the larger group uses Subversion. I'd like to schedule a cron job to publish our repositories' current HEADs every hour into a certain directory in the SVN repo. I thought I had this figured out, but the recipe I wrote down previously doesn't seem to be working now:

        git clone ssh://me@gitserver/git-repo/Projects/ProjX px2
        cd px2
        svn mkdir --parents http://me@svnserver/svn/repo/play/me/fromgit/ProjX
        git svn init -s http://me@svnserver/svn/repo/play/me/fromgit/ProjX
        git svn fetch
        git rebase trunk master
        git svn dcommit

    Here's what happens when I attempt it:

        % git clone ssh://me@gitserver/git-repo/Projects/ProjX px2
        Cloning into 'ProjX'...
        ...
        % cd px2
        % svn mkdir --parents http://me@svnserver/svn/repo/play/me/fromgit/ProjX
        Committed revision 123.
        % git svn init -s http://me@svnserver/svn/repo/play/me/fromgit/ProjX
        Using higher level of URL: http://me@svnserver/svn/repo/play/me/fromgit/ProjX => http://me@svnserver/svn/repo
        % git svn fetch
        W: Ignoring error from SVN, path probably does not exist: (160013): Filesystem has no item: File not found: revision 100, path '/play/me/fromgit/ProjX'
        W: Do not be alarmed at the above message git-svn is just searching aggressively for old history. This may take a while on large repositories
        % git rebase trunk master
        fatal: Needed a single revision
        invalid upstream trunk

    I could have sworn this worked previously. Does anyone have any suggestions? Thanks.
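
    A hedged guess at the failure plus a sketch of the usual workaround: git svn init -s expects a standard trunk/branches/tags layout under the given URL, and the freshly created ProjX directory has none, so the fetch may never create a ref literally named trunk. The ref names below are assumptions to check against your own clone:

        # See which remote refs git-svn actually created; depending on the git-svn
        # version and the -s flag they may be remotes/trunk, remotes/origin/trunk
        # or remotes/git-svn rather than plain trunk.
        git branch -r

        # Rebase against whichever name shows up, then push to SVN, e.g.:
        git rebase remotes/trunk master
        git svn dcommit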

  • WCF publish/subscribe service, and ASP.NET MVC client

    - by d3j4vu
    I managed to develop a custom WCF service, using the publish/subscribe model, hosted inside a managed Windows service. Everything's working. I developed an interface as the service contract, implementing a method definition marked as a non-one-way operation contract ([OperationContract(IsOneWay = false)]). This makes it possible to return an instance of a custom class derived from System.Web.Mvc.ActionResult. In the MVC app the event fires OK and is handled by an action method (the one defined in the interface), but, and this is my current problem, I believe that something relative to the execution context of the Windows service (and the hosted WCF counterpart) blocks the execution of the action method in the MVC app. This is what I have until now (some pieces ripped out just to be clearer):

        /// Method definition for the contract's service. Maps to an MVC ActionMethod.
        [OperationContract(IsOneWay = false)]
        ActionResult Imagen(string data, CustomActionResult result);

    The class to hold an ActionResult-derived class instance:

        public class ServiceEventArgsMvc : ServiceEventArgs
        {
            /// <summary>
            ///
            /// </summary>
            public CustomActionResult Result { get; set; }
        }

    And the code in the MVC client app:

        /// <summary>
        /// Just a simple action method to hold an abstract ActionResult derived class instance.
        /// </summary>
        public ActionResult Image(string data, CustomActionResult result)
        {
            ViewData["data"] = data;
            return View();
        }

    OK. The action method successfully executes... but when it's done (I expected to obtain a redirection to a View named Image, like the action method), the WCF service throws a timeout exception, making it clear that it's still waiting for a response from the MVC client. The response never arrives, so the MVC app never finishes its work (redirecting to the "Image" view as expected). Any ideas? I guess I'm missing something very simple, but I don't know what it could be. This is drivin' me nuts.
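
    A hedged sketch of the usual way out of the timeout, assuming the notification doesn't actually need a synchronous reply from the MVC app: in a duplex publish/subscribe setup the callback operation is normally marked one-way, so the publisher returns immediately instead of blocking until the subscriber replies (WCF also requires one-way operations to return void). The contract below is hypothetical, not the poster's actual interface:

        using System.ServiceModel;

        [ServiceContract]
        public interface INotificationCallback
        {
            // Fire-and-forget: the Windows-service side returns as soon as the
            // message is sent, so it never blocks waiting on the MVC app's reply.
            [OperationContract(IsOneWay = true)]
            void Imagen(string data);
        }

    Whatever the MVC action needs to send back would then travel over a separate call from the client to the service, rather than as the return value of the callback.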

  • Android Nexus One - Can I save energy with color scheme?

    - by Max Gontar
    Hi! I'm wondering which color scheme is more energy-saving for an AMOLED display. I've already decided to manage the color scheme according to ambient light, thanks to this post: Somewhat-proof, the link posted by nickf: Ironic Sans: Ow My Eyes. If you read that in a well-lit room, the black-on-white will be the most pleasant to read. If you read it in a dark room, the white-on-black will be nicer. But if I want to save battery power, should I use light content on a dark background or vice versa? Is it possible anyway (they say it's not)? Thanks!

  • Spring-Hibernate: How to submit a form when the object has one-to-many relations?

    - by Czar
    Hi, I have a form that changes the properties of my CUSTOMER object. Each customer has related ORDERS. The ORDERS table has a customer_id column which is used for the mapping. All works so far; I can read customers without any problem. But when I now, e.g., change the name of the CUSTOMER in the form (which does NOT show the orders), the name is updated after saving, but all relations in the ORDERS table are set to NULL (the customer_id for the items is set to NULL). How can I keep the relationship working? Thanks.
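
    A hedged sketch of one common fix, assuming the symptom comes from binding the form to a brand-new Customer instance (with an empty orders collection) and saving that: load the persistent customer first, copy only the edited fields onto it, and save the attached entity. Customer, CustomerForm and customerDao are stand-ins for the poster's own types:

        // Load the managed CUSTOMER so its ORDERS association stays untouched,
        // then apply only what the form edits.
        public void updateCustomer(long customerId, CustomerForm form) {
            Customer existing = customerDao.findById(customerId);
            existing.setName(form.getName());        // copy edited fields only
            customerDao.saveOrUpdate(existing);      // no orders are overwritten
        }

    Alternatively, a Spring MVC form controller can use the loaded entity itself as the form-backing object, so the orders collection is never replaced; which option fits depends on how the form is wired up.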

  • Disable click action after letting up a mouse wheel hold scroll in Chrome browser

    - by Joe Miller
    I apologize in advance for the confusing title; I'm not sure of the best way to describe this action. Basically, I am holding down the mouse wheel and then moving the mouse itself up and down to scroll (not actually rotating the mouse wheel forward or backward). This is often the most convenient way to scroll for me. Unfortunately, when I scroll in this way and then let up the mouse wheel again, it performs a click action, so if the cursor happens to land on a link when I let up the mouse wheel, I end up inadvertently clicking that link. How can I prevent the mouse from performing a click action when I use the mouse wheel to scroll by holding it down and then letting it up when I am done scrolling? It seems like this is only happening in the Chrome browser. Thanks! Windows 7, Chrome browser, Logitech mouse.

  • How to PERMANENTLY disable touchpad tap-to-click on Dell Inspiron/Windows 7

    - by Graham
    Hi all, my first time here, so I hope you can help. I've seen a lot of material on various forums (including here) about disabling the annoying "tap" function on a laptop touchpad. I learned the hard way not to uninstall the driver (as the software suggests), since you then lose the Synaptics tab in the mouse control settings, and with it all means to modify the touchpad settings. Incidentally, if this happens to you, reboot in safe mode and do a restore, and the Synaptics tab comes back. Not ideal, I know, but it works. Anyway, I have the most up-to-date drivers, and I can go to the Synaptics tab and disable the tap-to-click function, no problem. However, the next time the machine is booted, tap-to-click is back on. It can always be disabled again, but it's a pain having to reset it every time the machine is powered up. Is there a way to permanently disable it, once and for all? Thanks in advance, Graham

  • Persisting right click menu highlight

    - by Charlie Somerville
    I used to have this problem sometimes in Vista, but now that I'm using Windows 7 (it was a clean install on a reformatted hard drive) I'm disappointed that it's happening again. Basically, sometimes when I right-click on something and click an entry in the context menu, the highlight from that entry remains on the screen, in front of everything else. I can get rid of it by changing my theme to Aero Basic and back again, but it's not a nice solution as it takes too long, and often once I get rid of it, it comes back. Here you can see an example of what's happening: the highlight is left over from Chrome's context menu. Does anyone know how to fix this?

  • How to make FileZilla open all the required files with one click

    - by Omar Tariq
    Is there any way of configuring FileZilla so that I can open all the files on a server that I need to edit, with just one click? For example, if the files are like this:

        /home/abc/def/one.txt
        /home/abc/def/yet/another/directory/two.txt
        /home/abc/def/ghi/yet/another/directory/three.txt

    then it is very time-consuming to navigate through each directory and open the required files. These are only 3 files, but what if we have around 10 to 20 files? Yes, copying the paths of the directories is one thing, but if there were something built in so that I could just click a button like "open all the required files for this connection" and have it open all the files in the editor (as set in the FileZilla preferences), that would be great!

  • Google Chrome auto-clicker extension?

    - by Joel Murphy
    I'm looking for an auto-clicker that will automatically click page elements in Google Chrome. Standard auto-clickers work fine, but I'd like to continue working on my computer without having to keep Google Chrome open. Does anyone know of any extensions that offer this functionality? Anything that allows me to specify an element to be clicked on, or to set screen coordinates within a webpage, and will click away until I decide to stop the script would be perfect. I've tried looking at macro extensions, but they don't seem to offer the functionality I want. Can anybody suggest a particular extension? Thanks in advance.
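
    For reference, this is roughly what such an extension does under the hood; a minimal sketch of a content-script-style snippet, where the selector and interval are placeholders (it could equally be pasted into the DevTools console for a quick test):

        // Click a chosen element every 5 seconds until clearInterval is called.
        const target = document.querySelector('#some-button');   // hypothetical selector
        const timer = setInterval(() => {
            if (target) target.click();                           // synthetic click on the element
        }, 5000);

        // Stop it later with: clearInterval(timer);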

  • Can't double-click files to open them in InDesign (CS5)

    - by Matt
    I cannot open a file unless I open InDesign (the program) and then use File > Open. If I double-click, it starts to open, then just hangs forever. AFTER I close it and look in the directory where the files are saved, I see a (temporary?) "lock" file. Now I can double-click the original file and it opens just fine. However, when I then close InDesign it deletes the lock file and the whole process starts again... I have tried updating the software, uninstalled COMPLETELY and reinstalled, and tried a brand new Win7 install. These files are all saved on a network drive; the computer is a new quad-core Dell with 12GB of RAM and a fresh x64 Win7 install on the SSD. This does not happen with other programs.

  • Use DOS batch to move all files up 1 directory

    - by Harminoff
    I have created a batch file to be executed through the right-click menu in Win7. When I right-click on a folder, I would like the batch file to move all files (excluding folders) up 1 directory. I have this so far:

        PUSHHD %1
        MOVE "%1\*.*" ..\

    This seems to work as long as the folder I'm moving files from doesn't have any spaces. When the folder does have spaces, I get an error message: "The syntax of the command is incorrect." So my batch works on a folder titled PULLTEST but not on a folder titled PULL TEST. Again, I don't need it to move folders, just files. And I would like it to work in any directory on any drive. There will be no specific directories that I will be working in. It will be random. Below is the registry file I made if needed for reference.

        Windows Registry Editor Version 5.00

        [HKEY_CLASSES_ROOT\Directory\shell\PullFiles]
        @="PullFilesUP"

        [HKEY_CLASSES_ROOT\Directory\shell\PullFiles\command]
        @="\"C:\\Program Files\\MyBatchs\\PullFiles.bat\" \"%1\""
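
    A hedged sketch of a likely fix, on the assumption that the error comes from quoting: since the .bat is invoked with "%1" already wrapped in quotes, a path containing spaces arrives as a quoted string, and wrapping it again in MOVE "%1\*.*" produces doubled quotes. %~1 strips the surrounding quotes so the argument can be re-quoted cleanly (and PUSHD is presumably what PUSHHD was meant to be):

        @echo off
        rem Hypothetical replacement for PullFiles.bat, not the poster's original:
        rem %~1 removes the quotes Explorer adds around paths containing spaces.
        pushd "%~1"
        move "%~1\*.*" ..
        popd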

  • Mouse(s) double-click instead of single click (it's not the mouse)

    - by Iznogood
    I am aware of this very similar question, but I have tried three different mice and every one of them exhibits the same behavior: simply enough, about 1 out of 3 times I get a double click when single-clicking. I have searched the net a lot about this problem and have yet to find a solution. I have tried:

    1- Switched to another mouse
    2- Uninstalled the mouse drivers + rebooted
    3- I do not have any special mouse drivers/software like Intellisense or Logitech's to uninstall
    4- Verified that I was not in fact on some setting that opens files with a single click
    5- Everything is up to date, including the antivirus
    6- Installed fresh drivers from Dell's website

    It is a Dell Vostro 260 computer running Windows 7 Pro 64-bit.

    Edit: added a 6th thing I tried.
    Edit 2: tried reinstalling every Windows update I could find; nothing. The boss just said he'd buy me a Logitech mouse, hoping the drivers will fix my problems. Hopefully!

  • hardware: delay and distinct 'click' before hard drive access

    - by matt lohkamp
    I have a Windows 7 box stashed away in my closet, containing (among other things) 2 big HDDs linked together as a mirrored volume, basically a super lazy NAS / media server. I've noticed that when that drive is accessed (whether locally, on the machine itself, or remotely, from another computer or my Xbox, for example) there's a noticeable pause, and then, from the computer itself, a 'click!' noise, after which the drive is accessed; e.g. open \\computername\shared\, wait 2 seconds, hear 'click!', and then see files appear in Windows Explorer. Any ideas? Otherwise the drive performs normally. Is it a Windows thing? An HDD-about-to-die thing? Or a "yeah, that always happens, you've just never noticed it before" thing?

  • Windows 7 default VPN - Single Click to Connect

    - by Goyuix
    The default way to connect to a VPN (with the standard included MS client) seems to be to click on the network icon in the system tray to expand it, then pick the VPN connection, and click the Connect button. This brings up a dialog where you can enter your username and password. I have told the VPN connection to remember my credentials. Is there some way I can skip that dialog and just have it connect? I have tried using rasdial.exe, and I can connect as long as I pass the username and password as arguments. It doesn't seem to want to use the stored credentials for some reason; maybe I need to store them with an elevated account.

  • Adding Cygwin to the right-click menu in Windows Explorer

    - by PushpRaj
    I wish to add a command to the right-click menu in Explorer that opens the current directory with Cygwin. For that I have successfully added these registry entries:

        [HKEY_CURRENT_USER\software\classes\directory\shell\cygwin]
        @="c:\\cygwin\\bin\\bash.exe --login -i -c \"cd '%1'; bash\""

        [HKEY_CURRENT_USER\software\classes\drive\shell\cygwin]
        @="c:\\cygwin\\bin\\bash.exe --login -i -c \"cd '%1'; bash\""

    but this adds the command only when right-clicking on a folder or drive. I want it on a generic right-click in Explorer, for which my search gives me this registry key to edit:

        [HKEY_CLASSES_ROOT\Directory\Background\shell\cygwin]
        @="c:\\cygwin\\bin\\bash.exe --login -i -c \"cd '%1'; bash\""

    My problem lies with the value of the key, which doesn't work with %1, only with some static value like /cygdrive/c. Could someone please tell me the proper way to pass the current directory to the command? Please also point me to some basic and advanced references on this. Thank you.
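
    A hedged sketch of what usually works here: for keys under Directory\Background\shell, Explorer does not substitute %1, but it does pass the current folder as %V, so swapping %V in for %1 is typically all that's needed. The sketch also uses the conventional layout with the command string under a command subkey and the menu label as the key's default value; the install path c:\cygwin is taken from the entries above:

        Windows Registry Editor Version 5.00

        [HKEY_CLASSES_ROOT\Directory\Background\shell\cygwin]
        @="Open Cygwin here"

        [HKEY_CLASSES_ROOT\Directory\Background\shell\cygwin\command]
        @="c:\\cygwin\\bin\\bash.exe --login -i -c \"cd '%V'; bash\""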

  • Certain running applications cause middle-click in Firefox to function differently under Windows 7

    - by Charlie
    If I have Photoshop, VLC player, or even just the Windows task list open, then when I middle-click in Firefox to open a link in a new tab (by depressing both buttons on my Lenovo G550 with Alps touchpad), the mouse cursor changes to some variety of scroll icon, the middle click doesn't work, and if I persist then the other programs take focus, and some crash. I assume the scroll-like feature must be intended as added functionality in either Windows 7 or the Alps touchpad driver, but no settings adjustments seem to be able to remove it, and I could see nothing regarding this in Firefox. I would really like to fix this! Thanks.

  • PowerShell One-Liner: Duplicating a folder structure in a SharePoint document library

    - by Darren Gosbell
    I was asked by someone at work the other day if it was possible in SharePoint to create a set of top-level folders in one document library based on the set of folders in another library. One document library has a set of top-level folders that is basically a client list, and we needed to create the same top-level folders in another library. I knew that it was possible to open a SharePoint document library in Explorer using a UNC-style path, and that you could map a drive using a technique like this one: http://www.endusersharepoint.com/2007/11/16/can-i-map-a-document-library-as-a-mapped-drive/. But while Explorer would let us copy the folders, it would also take all of the folder contents too, which was not what we wanted. So I figured that some sort of PowerShell script was probably the way to go, and it turned out to be even easier than I thought. The following script did it in one line, so I thought I would post it here in my "online memory". :)

        dir "\\sharepoint\client documents" | where {$_.PSIsContainer} | % {mkdir "\\sharepoint\admin documents\$($_.Name)"}

    I use "dir" to get a listing from the source folder, pipe it through "where" to get only objects that are folders, and then do a foreach (using the % alias) and call "mkdir".

  • Merge two different API calls into one

    - by dhilipsiva
    I have two different apps in my Django project. One is "comment" and the other is "files". A comment might have some files attached to it. The current way of creating a comment with attachments is by making two API calls. The first one creates the actual comment and replies with the comment ID, which serves as the foreign key for the files. Then, for each file, a new request is made with the comment ID. Please note that "files" is a generic app that can be used with other apps too. What is the cleanest way of turning this into one API call? I want to have this as a single API call because I am in a situation where I need to send the user an email with all the files as attachments when a comment is made. I know queueing is the ideal way to do it, but I don't have the liberty to add queueing to our stack right now, so this was the only way I could think of.
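
    One way to picture it, as a hedged sketch rather than a drop-in solution: a single multipart POST that carries the comment fields plus any number of files, with the comment and its file rows created in one transaction. Comment and Attachment below are stand-ins for the real models in the two apps:

        # Hypothetical single endpoint; adjust field names and model imports to taste.
        from django.db import transaction
        from django.http import JsonResponse
        from django.views.decorators.http import require_POST

        @require_POST
        @transaction.atomic          # comment and its files succeed or fail together
        def create_comment_with_files(request):
            comment = Comment.objects.create(
                body=request.POST["body"],
                author=request.user,
            )
            for f in request.FILES.getlist("attachments"):
                Attachment.objects.create(comment=comment, file=f)

            # The notification email (with the files attached) can be sent here,
            # since everything is available within this single request.
            return JsonResponse({"id": comment.id}, status=201)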

  • Dual monitors with one above the other?

    - by Felix
    I'm using GNOME 3 and the proprietary NVIDIA drivers. I have tried, in nvidia-settings, to set my external monitor to be "above" my main one (it's a laptop). However, when I try to drag a window up from the main display to the external one, it gets stuck and can't move past a certain point. Trying to maximize it changes its decoration so it looks maximized (i.e. no borders, etc.), but its size and position don't change. Now, if I set my external monitor to be "to the left" of the main one, it works, which is why I suspect this is a GNOME issue, not an NVIDIA one. Does anyone know how to fix this?

    Update: some versions: GNOME 3.2.2.1, NVIDIA 280.13.

    Update 2: I can see that GNOME 3.4 is out, and among the release notes is better external monitor support. However, they only mention a small fix that is unrelated to my problem. Can anyone with GNOME 3.4 and access to an external monitor please test this out and tell me if it works? I don't want to go through the hassle of upgrading my Ubuntu installation unless I know for certain it's going to fix the problem.
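
    As a hedged workaround to try rather than a fix for the underlying issue: the same "above" arrangement can be applied with xrandr instead of nvidia-settings, provided the driver exposes RandR 1.2 outputs (older proprietary NVIDIA drivers using TwinView may not). The output names below are placeholders; the real ones come from running xrandr with no arguments:

        # List outputs and their current modes to find the real names.
        xrandr

        # Place the external monitor above the laptop panel (names are examples).
        xrandr --output HDMI-0 --auto --above LVDS-0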

  • Complex shading using one single (small) texture

    - by teodron
    Recently I stumbled upon a demo reel in UDK about how one can attain beautiful results using just one (rather tiny) texture that's sent through the shader pipeline. The famous link is this one. Basically, the author states that they've used just one texture, and gives a snapshot of the technique here. I see that every RGBA channel contains different grayscale information, and that info could be used inside a shader to obtain a colour-blended output. The problem is that the reel displays a fairly complex scene. To top that, the author even makes use of a normal map. How did they manage to fit a normal map into an already cluttered texture? It makes sense to have a half-space normal map by using only RG from an RGB texture, but what about the rest of the information? Since it was proven to be possible, could someone please explain how it was done (the big picture, not the dirty details!)? Here's the texture being used; click to see it in full size.
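
    A hedged sketch of the big picture, not the reel author's actual material: two channels can hold the tangent-space normal's X and Y (Z is reconstructed, since a unit normal satisfies x^2 + y^2 + z^2 = 1), while the remaining channels act as grayscale masks that blend between shader-constant tints, so no full-colour texture is needed. Every channel assignment and constant below is an assumption:

        // HLSL-style pseudocode for unpacking one RGBA texture (D3D9-era syntax).
        float4 packed = tex2D(PackedSampler, uv);

        // R,G -> normal XY in [-1,1]; reconstruct Z from the unit-length constraint.
        float2 nxy = packed.rg * 2.0 - 1.0;
        float  nz  = sqrt(saturate(1.0 - dot(nxy, nxy)));
        float3 normalTS = float3(nxy, nz);

        // B,A -> masks that blend constant tints into a final base colour.
        float3 albedo = lerp(BaseTint, DirtTint, packed.b);   // B = dirt mask (assumed)
        albedo = lerp(albedo, WearTint, packed.a);            // A = wear mask (assumed)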

  • ClickOnce install location from within the application

    - by rein
    I'd like to programmatically determine the "publish location" (the location on the server which contains the installation) of the ClickOnce application I'm running. I know that the .appref-ms file contains this information and I could parse the file to find it, but the application has no idea of the location of the .appref-ms file, and I can't seem to find a way of determining that location either. Does anyone have any ideas on how I can easily determine the publish location from within my application?
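
    A hedged sketch of the usual route, assuming the app really is running as a ClickOnce deployment: the System.Deployment.Application API exposes the deployment metadata, and UpdateLocation normally points back at the publish/update URL (a reference to System.Deployment is required):

        using System;
        using System.Deployment.Application;

        static class PublishInfo
        {
            // Returns null when not running as a ClickOnce deployment
            // (e.g. when launched straight from the IDE).
            public static Uri GetPublishLocation()
            {
                if (!ApplicationDeployment.IsNetworkDeployed)
                    return null;

                // For most deployments this points at the publish/update folder.
                return ApplicationDeployment.CurrentDeployment.UpdateLocation;
            }
        }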

  • Azure, don't give me multiple VMs, give me one elastic VM

    - by FransBouma
    Yesterday, Microsoft revealed new major features for Windows Azure (see ScottGu's post). It all looks shiny and great, but after reading most of the material describing the new features, I still find the overall idea behind all of it flawed: why should I care about how many VMs my web app runs on? Isn't that a problem for the Windows Azure engineers / software to solve? And if I need the file system, why can't I simply get a virtual filesystem?

    To illustrate my point, let's use a real example: a product website with a customer system/database and, next to it, a support site with an accompanying database. Both are written in .NET, use ASP.NET, and each uses a SQL Server database. The product website offers files for customers to download, very simple. You have a couple of options to host these websites:

      - Buy a server, place it in a rack at an ISP and run the sites on that server.
      - Use 'shared hosting' with an ISP, which means your sites' appdomains run on the same machine as other sites, the files are stored there as well, and the databases are hosted on the same server as the other shared databases.
      - Hire a VM, install your OS of choice at an ISP, and host the sites on that VM; basically the same as the first option, except you don't have a physical server.
      - At some cloud vendor, either host the sites 'shared' or in a VM. See above.

    With all of those options, scalability is a problem, even with the cloud-based ones, though not for the same reasons:

      - The physical server solution has the obvious problem that if you need more power, you need to buy a bigger server or more servers, which requires you to add replication and other overhead.
      - Shared hosting solutions are almost always capped on memory usage / traffic and database size: if your sites get too big, you have to move out of the shared hosting environment and start over with one of the other solutions.
      - The VM solution, be it a VM at an ISP or 'in the cloud' at e.g. Windows Azure or Amazon, in theory allows scaling out by simply instantiating more VMs; however, that too introduces the same overhead problems as the physical servers: suddenly more than one instance runs your sites.

    If a cloud vendor offers its services in the form of VMs, you won't gain much over having a VM at some ISP: the main problems you have to work around are still there. When you spin up more than one VM, your application must be completely stateless at any moment, including the DB subsystem, because what's in memory in instance 1 might not be in memory in instance 2. This might sound trivial, but it's not. A lot of the websites out there started rather small: they were perfectly runnable on a single machine with normal memory and CPU power. After all, you don't need a big machine to run a website with even thousands of users a day. Moving these sites to a multi-VM environment will cause a problem: all the in-memory state they use, all the multi-page transitions they use while keeping state across the transition, they can't do that anymore the way they did on a single machine. State is something of the past: you have to store every byte of state in either a DB, a viewstate or a cookie somewhere, so that with the next request all state information is available through the request, as nothing is kept in memory.

    Our example uses a bunch of files in a file system. Using multiple VMs will require that these files move to a cloud storage system which is mounted in each VM, so we don't have to store the files on each VM. This might require different file paths, but this change should be minor. What's perhaps less minor is the maintenance procedure for the new type of cloud storage used: instead of ftp-ing into a VM, you might have to update the files using different tools. All in all this makes moving an existing website, which was written for an environment based around a VM (namely .NET with its CLR), overly cumbersome and problematic: it forces you to refactor your website system to be usable 'in the cloud', and that is caused by the limited way in which e.g. Windows Azure offers its cloud services: in blocks of VMs.

    Offer a scalable, flexible VM which extends with my needs

    Instead, cloud vendors should simply offer one VM to me. On that VM I run the websites, store my DB and my files. As it's a virtual machine, how this machine is actually run on physical hardware (e.g. partitioned) is not my concern; that's the problem for the cloud vendor to solve. If I need more resources, e.g. I have more traffic to my server, way more visitors per day, the VM stretches, as if I had bought a bigger box. This frees me from the problem which comes with multiple VMs: I don't have any refactoring to do at all. I can simply build my website as if it runs on my local hardware server, upload it to the VM offered by the cloud vendor, install it on the VM, and I'm done.

    "But that might require changes to Windows!" Yes, but Microsoft is Windows. Windows Azure is their service; they can make whatever change to what they offer to make it look like it's Windows. Yet they're stuck, like Amazon, in thinking in VMs, which forces developers to 'think ahead' and gamble on whether they will need to migrate to a cloud with multiple VMs in the future or not. Which comes down to: gamble on whether they should invest time in code / architecture which they might never need. (YAGNI anyone?)

    So the VM we're talking about: is that a low-level VM which runs a guest OS, or is that VM a different kind of VM?

    The flexible VM: .NET's CLR?

    My example websites are ASP.NET based, which means they run inside a .NET appdomain, on the .NET CLR, which is a VM. The only physical OS resource the sites need is the file system; however, this too is accessed through .NET. In short: all the websites see is what .NET allows them to see; the world as the websites know it is what .NET shows them and lets them access. How the .NET appdomain is run physically, that's the concern of .NET, not mine.

    This begs the question why Windows Azure doesn't offer virtual appdomains. Or better: .NET environments which look like one machine but could physically be multiple machines. In such an environment, no change has to be made to the websites to migrate them from a local machine or their own server to the cloud to get proper scaling: the .NET VM will simply scale with the need: more memory needed, more CPU power needed, it stretches. What it offers to the application running inside the appdomain simply increases, but is not fragmented: all resources are available to the application. This means that the problem of how to scale is back where it should be: with the cloud vendor.

    "Yeah, great, but what about the databases?" The .NET application communicates with the database server through an ADO.NET provider. Where the database is located is not a problem of the appdomain: the ADO.NET provider has to solve that. In other words: we can host the databases in an environment which offers itself as a single resource, accessible through one connection string without replication overhead on the outside, and use that environment inside the .NET VM as if it were a single DB. But what about memory replication and other problems? This environment isn't simple, at least not for the cloud vendor. But it is simple for the customer who wants to run his sites in that cloud: no work needed, no refactoring of existing code needed. Upload it, run it.

    Perhaps I'm dreaming and what I described above isn't possible. Yet, I think if cloud vendors don't move in that direction, what they're offering isn't interesting: it doesn't solve a problem at all; it simply offers a way to instantiate more VMs with the guest OS of choice, at the cost of me needing to refactor my website code so it can run in the straitjacket form factor dictated by the cloud vendor. Let's not kid ourselves here: most of us developers will never build a website which needs a truckload of VMs to run it; almost all websites created by developers can run on just a few VMs at most. Yet the most expensive change is right at the start: moving from one to two VMs. As soon as you have refactored your website code to run across multiple VMs, adding another one is just as easy as clicking a mouse button. But that first step, that's the problem here, and as it's right at the beginning of scaling the website, it's particularly strange that cloud vendors refuse to solve that problem and leave it to the developers. Which makes migrating 'to the cloud' particularly expensive.
