Search Results

Search found 14900 results on 596 pages for 'git remote repository'.


  • Remote access to Microsoft Dynamics NAV (C/Side) with native non-SQL database

    - by Joannes Vermorel
    I am working with a company that has a fairly recent Microsoft Dynamics NAV (C/Side) setup that comes with a non-SQL storage system called the native database server. I need to connect to this database remotely and perform what would equate to SQL queries, with very modest needs (no joins, no complex filtering). I am rather ignorant of this technology; does anyone know how to make remote queries against this ERP?

    Read the article

  • Recommendations for an efficient offsite remote backup solution for VMs

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to up to 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end for administration). I have two nearly identical boxes with AMD Phenom II X4s and Asus motherboards. Each has 4 500 GB SATA2 HDDs: 1 for the OS and other data for the Proxmox install, and 3 using mdadm+DRBD+LVM to share 1.5 TB of storage between the two machines. I mount LVM images to KVM for all of the virtual machines, and I can do live migration from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, running Win2008 with MS SQL Server).

    I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store those on an external hard drive on the network. I then have the Jungle Disk service (using Rackspace) sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With Jungle Disk's block-level incremental transfers, the sync only sends a small portion of the data offsite, but that still takes at least half an hour.

    A much better solution would of course be something that lets me instantly take the difference between two points in time (say, what was written from 6am to 7am), compress it, then send that difference file to the backup server, which would in turn transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip2 or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols?) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about Zumastor. It looks like it could do what I want, but it appears to have halted development in 2008. So: ZFS, Zumastor, or something else?
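
    For the ZFS send/receive idea, a sketch of the incremental flow (pool, dataset, and host names are hypothetical; it assumes the VM images live on a ZFS dataset rather than on the current mdadm+DRBD+LVM stack):

        # nightly: snapshot, then ship only the delta since the previous snapshot
        zfs snapshot tank/vmstore@snap-now
        zfs send -i tank/vmstore@snap-prev tank/vmstore@snap-now \
            | bzip2 -c | ssh backuphost "cat > /backups/vmstore-snap-now.zfs.bz2"

    The delta is computed from snapshot metadata, so "what was written from 6am to 7am" costs seconds rather than a full scan. The catch is the one the question already identifies: the data has to be on ZFS in the first place.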

    Read the article

  • Get remote processes on Windows 2003 with CPU percentage

    - by Brettski
    I have a production server whose CPUs are running excessively high. Except in critical circumstances, nobody is allowed to log on to servers outside of maintenance windows. I am looking for an application I can use to look at the processes on the remote server, including CPU % usage, similar to top. Windows' native tasklist.exe doesn't show percentages, nor does Sysinternals' pslist.exe. Suggestions?
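
    One agentless option (a sketch; SERVER05 is a placeholder name, and both commands need admin rights on the target): WMI exposes per-process performance counters that can be queried remotely from any workstation:

        wmic /node:SERVER05 path Win32_PerfFormattedData_PerfProc_Process get Name,PercentProcessorTime

        typeperf "\\SERVER05\Process(*)\% Processor Time" -sc 5

    The wmic query gives a one-shot snapshot; typeperf samples repeatedly (here 5 times), which is closer to top's behavior.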

    Read the article

  • Remote deployment of OS X Mountain Lion 10.8 upgrade, not as a fresh install

    - by Dean A. Vassallo
    Does anyone have any ideas or suggestions (or know if it's even possible) for remotely upgrading a fleet of Macs from 10.6.8 to 10.8? I presume I can push the InstallESD through ARD, but I want it to run completely unattended. If it is not possible through "traditional" methods, does anyone know of any tools that might help automate this process? Thank you for your thoughts, feedback, and suggestions.
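
    If pushing InstallESD via ARD works, here is a hedged sketch of the unattended half (the volume and package paths are assumptions based on how 10.7/10.8-era installers were commonly driven; test on a single machine first):

        # run via ARD's "Send UNIX Command" as root
        hdiutil attach /tmp/InstallESD.dmg -nobrowse
        installer -pkg "/Volumes/Mac OS X Install ESD/Packages/OSInstall.mpkg" -target /
        shutdown -r now

    This performs an in-place upgrade rather than a wipe, which matches the goal of not doing a fresh install.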

    Read the article

  • Remote access to internal machine (ssh port-forwarding)

    - by MacUsers
    I have a server (serv05) at work with a public IP, hosting two KVM guests, vtest1 and vtest2, in two different private networks (192.168.122.0 and 192.168.100.0 respectively):

        [root@serv05 ~]# ip -o addr show | grep -w inet
        1: lo      inet 127.0.0.1/8 scope host lo
        2: eth0    inet xxx.xxx.xx.197/24 brd xxx.xxx.xx.255 scope global eth0
        4: virbr1  inet 192.168.100.1/24 brd 192.168.100.255 scope global virbr1
        6: virbr0  inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

        [root@serv05 ~]# route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr1
        xxx.xxx.xx.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
        192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
        0.0.0.0         xxx.xxx.xx.62   0.0.0.0         UG    0      0        0 eth0

    I've also set up IP forwarding and masquerading this way:

        iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
        iptables --append FORWARD --in-interface virbr0 -j ACCEPT

    All works up to this point. If I want to remotely access vtest1 (or vtest2), first I ssh to serv05 and then from there ssh to vtest1. Is there a way to set up port forwarding so that vtest1 can be accessed directly from the outside world? This is what I probably need:

        external_ip (tcp port 4444) -> DNAT -> 192.168.122.50 (tcp port 22)

    I know it's easily doable with a SOHO router, but I can't figure out how to do it on a Linux box. Any help from you guys? Cheers!

    Update 1: Now I've made ssh listen on both ports:

        [root@serv05 ssh]# netstat -tulpn | grep ssh
        tcp   0   0 xxx.xxx.xx.197:22     0.0.0.0:*   LISTEN   5092/sshd
        tcp   0   0 xxx.xxx.xx.197:4444   0.0.0.0:*   LISTEN   5092/sshd

    and port 4444 is allowed in the iptables rules:

        [root@serv05 sysconfig]# grep 4444 iptables
        -A PREROUTING -i eth0 -p tcp -m tcp --dport 4444 -j DNAT --to-destination 192.168.122.50:22
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 4444 -j ACCEPT
        -A FORWARD -i eth0 -p tcp -m tcp --dport 4444 -j ACCEPT

    But I'm getting connection refused:

        maci:~ santa$ telnet serv05 4444
        Trying xxx.xxx.xx.197...
        telnet: connect to address xxx.xxx.xx.197: Connection refused
        telnet: Unable to connect to remote host

    Any idea what I'm still missing? Cheers!
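
    A likely culprit, offered as a sketch rather than a verified fix: by the time a packet reaches the FORWARD chain it has already been DNATed, so a FORWARD rule matching --dport 4444 never fires; the forwarded traffic now has destination port 22. The usual recipe looks like this (assuming 192.168.122.50 is vtest1 and its sshd listens on 22):

        # make sure the kernel forwards packets at all
        sysctl -w net.ipv4.ip_forward=1

        # rewrite incoming port 4444 to the guest's sshd
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 4444 \
            -j DNAT --to-destination 192.168.122.50:22

        # match the post-DNAT destination (port 22, not 4444) in FORWARD
        iptables -A FORWARD -d 192.168.122.50 -p tcp --dport 22 -j ACCEPT

    Note that sshd on serv05 does not need to listen on 4444 at all; PREROUTING rewrites the packet before local delivery is ever considered.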

    Read the article

  • Access Windows from Mac via Remote Desktop Connection using hostname

    - by stevekuo
    I'm using Snow Leopard with Remote Desktop Connection, attempting to access a Windows XP machine on a home network. If I specify the Windows PC's hostname it won't connect; only by specifying the IP address does it connect. It's the same issue when trying to ping the Windows machine: the IP address works, the hostname doesn't. Both machines are on the same subnet, connected through a wireless router. Is there a way to get OS X to resolve the Windows PC by its hostname?
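
    One low-tech workaround (a sketch; the address below is hypothetical): OS X doesn't resolve NetBIOS names by default, so if the XP box has a fixed IP, a static entry in /etc/hosts on the Mac sidesteps the whole problem:

        # append to /etc/hosts on the Mac
        echo "192.168.1.50  winxp-hostname" | sudo tee -a /etc/hosts

    Appending .local to the name can also work, but only if the Windows machine is advertising itself over mDNS/Bonjour (e.g. has Bonjour for Windows installed).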

    Read the article

  • RDPClip is not launching when I use Remote Desktop

    - by Ross
    When I'm using Remote Desktop to connect to my PC from my laptop (both running Windows 7 Ultimate), rdpclip.exe never gets started. I can run it manually and copy/paste then works just fine, but I have no idea why it won't start automatically. I've done the usual check of making sure the "Drives" checkbox is ticked, but other than that I have no idea why it's misbehaving.
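
    A common workaround (it restarts the clipboard agent inside the remote session, though it doesn't explain the failure to auto-start): run these from a command prompt on the remote machine:

        taskkill /im rdpclip.exe /f
        rdpclip.exe

    Clipboard redirection usually comes back immediately and lasts until the next reconnect.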

    Read the article

  • Alternative Remote Desktop Software

    - by squillman
    What are good alternatives to the Windows built-in Remote Desktop client? I have tried Terminals, and it is great, but I've run into numerous bugs with the latest release (currently 1.7e). Can anyone recommend an alternative similar to Terminals? EDIT (in response to Adam Gibbins' answer): one of the biggest things I'm looking for is session management and a tabbed environment similar to the Terminals interface.

    Read the article

  • How to stay connected to Remote Desktop even if a different user tries to connect

    - by Darqer
    I log in to Windows 7 through Remote Desktop. Other users sometimes try to connect to the same computer, and a message box pops up telling me I have 30 seconds to block the attempt or I will be logged off. Sometimes I'm away, so I get logged off, and when I come back I have to log on again. Is there a way to turn off this functionality for a single user, or some application that automatically blocks these login attempts?

    Read the article

  • Need a hardware solution for remotely controlling a PC

    - by ShacharWeis
    Hello. We have kiosk computers scattered around the country and are using VNC to control them. But VNC has limitations (it only works if the OS is intact, for instance). I want to be able to control the computer even if it is stuck in boot. Is there a cheap hardware solution for remotely controlling a PC? Thanks.

    Read the article

  • Remote edit with local editor (Linux)

    - by Eisaj
    Hello, I have a server I can SSH into, and I am also running Ubuntu locally. How do I edit a remote file using any program I have installed on my local Ubuntu machine, without copying it to my machine, editing it, and copying it back? Thanks!
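
    Two common approaches (a sketch; hosts and paths are placeholders): mount the remote directory locally with sshfs, or let vim's netrw edit straight over scp.

        # mount the remote tree with sshfs (in the sshfs package, uses FUSE)
        mkdir -p ~/remote
        sshfs user@server:/path/to/dir ~/remote
        gedit ~/remote/somefile.conf      # any local editor now works
        fusermount -u ~/remote            # unmount when done

        # or, for vim specifically (double slash = absolute remote path):
        vim scp://user@server//path/to/somefile.conf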

    Read the article

  • Remote Computer renting (moving my desktop to the cloud)

    - by Carl
    I would like to rent a remote computer, such as a virtual Vista or Windows 7 desktop, run everything on it, and access it with RDP (the fastest option). It could be virtual (running on Xen or Hyper-V), and the price needs to be right. Windows 7 to Windows 7 has a nice RDP offload feature, and doing stuff in the cloud is fast. Where could I rent something like that? I've been using Amazon and CloudLayer, but they are optimized for server versions of Windows.

    Read the article

  • Vimdiff with git mergetool error: "More than two buffers in diff mode"

    - by Elizabeth Buckwalter
    I've read "Vimdiff" and "Viewing differences with Vimdiff", plus done various Google searches for things like "vimdiff multiple", "vimdiff git", "vimdiff commands", etc. When using do or diffg I get the error "More than two buffers in diff mode, don't know which one to use". When using diffg v:fname_in I get "No matching buffer for v:fname_in".

    From the vimdiff documentation:

        :[range]diffg[et] [bufspec]
                Modify the current buffer to undo difference with another
                buffer. If [bufspec] is given, that buffer is used. If
                [bufspec] refers to the current buffer then nothing happens.
                Otherwise this only works if there is one other buffer in
                diff mode.

    and more:

        When 'diffexpr' is not empty, Vim evaluates it to obtain a diff file
        in the format mentioned. These variables are set to the file names used:
                v:fname_in    original file
                v:fname_new   new version of the same file
                v:fname_out   resulting diff file

    So I need to give diffget a bufspec, but the default variables (v:fname_in, v:fname_new, and v:fname_out) aren't set. I ran git mergetool on a Linux box through a terminal.
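
    For what it's worth, a sketch of the usual way out: git mergetool puts four buffers in diff mode (LOCAL, BASE, REMOTE, and the merge target), and diffget's [bufspec] only has to match part of a buffer name, so from the merged file's window:

        :diffget LOCAL    " take the hunk under the cursor from your side
        :diffget REMOTE   " take it from the branch being merged in
        :diffget BASE     " take it from the common ancestor

    The v:fname_* variables aren't usable here; vim only sets them while it is evaluating 'diffexpr'.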

    Read the article

  • Best practice: git, github, lighthouse and 2 developers

    - by Alxandr
    I'm setting up a new project and plan on using git and GitHub for source control and repo hosting, and Lighthouse for bug tracking. I've been working with git for a while now, but have been using it more as a backup solution than a collaborative coding solution. I've noticed that in GitHub you can set up a service hook to Lighthouse, so that whenever you push to GitHub it notifies Lighthouse of the changes. This uses a token for user authentication and can change tickets to resolved, etc. However, I believe the token works in such a way that whenever anyone pushes to the repo (doesn't matter who), it's the owner of the repo who "updates" Lighthouse. This is a problem. So I believe we need two separate repos on GitHub (one for each dev), and I'm wondering about the workflow we should use. Anyone care to shed some light on this? Like when to pull and push (and where), and how to keep the two GitHub repos in sync, or another solution to the problem altogether.
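
    A minimal sketch of the fork-per-developer workflow, assuming devs "alice" and "bob" each own a GitHub fork and add the other's as a remote (names and URLs are hypothetical):

        # one-time setup in bob's clone
        git remote add alice git@github.com:alice/project.git

        # daily routine
        git fetch alice               # see what alice pushed
        git merge alice/master        # fold her work into the local master
        git push origin master        # publish to bob's own fork

    Because each developer only ever pushes to the fork they own, each fork's service hook fires under its owner's Lighthouse token, which sidesteps the attribution problem.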

    Read the article

  • git workflow incorporating many, but not all, commits from many forks

    - by becomingGuru
    I have a git repo. It has been forked several times and many independent commits have been made on top of it; everything is normal, like what happens in many GitHub-hosted projects. Now, what exact workflow should I follow if I want to see all those commits individually and apply the ones I like? The workflow I followed, which is not optimal, was to create a branch named after each GitHub username, merge the changes into my master, and manually undo any changes from the commits I don't need (there were not many, so it worked). What I want is the ability to see all commits from different forks individually, and to cherry-pick and apply them on top of my master. What is the workflow for that? And what GUI (gitk?) lets me see all the individual commits? I realize that merge should be a primary part of the workflow, not cherry-pick, as cherry-picking creates a different commit (from git's point of view). Even rebasing others' changes on top of mine might not preserve the history on the graph to indicate whose commits I rebased. So then: how do I ignore just a few commits out of many? I think GitHub should have an "apply this commit on top of my master" button in their graph after each commit node, so I could just pull it after doing all that.
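
    A sketch of one way to browse and pick fork commits individually (the remote name and URL are placeholders):

        git remote add somefork git://github.com/someuser/project.git
        git fetch somefork
        git log --oneline master..somefork/master   # commits the fork has that master lacks
        gitk --all &                                # one graph covering every fetched fork
        git cherry-pick <sha>                       # apply just the ones you want

    Cherry-picked commits do get new SHAs, but git cherry-pick -x appends "(cherry picked from commit ...)" to the message, which preserves some of the provenance the question worries about losing.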

    Read the article

  • git crlf configuration in mixed environment

    - by Jonas Byström
    I'm running a mixed environment and keep a central bare repository where I pull and push most of my stuff. This central repository runs on Linux, and I check out on Windows XP/7, Mac, and Linux. In all repositories I put the following in my .git/config:

        [core]
            autocrlf = true

    I don't have safecrlf=true set anywhere. The first time I modify stuff on one Windows machine (XP) there is no problem, and the diff looks fine. But when I do the same on the other Windows machine (7), all lines show as changed, even though the local line endings are \r\n as expected (checked in a hex editor). The same applies to a Mac OS X machine. Sometimes I get the feeling that the different systems are wrestling over line endings, but I can't be sure (I'm losing track of all the times I change specific files). I didn't used to have autocrlf set, but I set the flag many months back. Could that be causing my current problems? Do I need to clone everything again to lose some old baggage, or are there other things that need configuring too? I've tried git checkout -- . about a million times, with no success.
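
    The leftover baggage is most likely blobs that were committed with CRLF endings before autocrlf was turned on; a sketch of the classic renormalization recipe (run it in a clean working tree; it rewrites every affected file in one commit):

        git rm --cached -r .     # empty the index, leave the working tree alone
        git reset --hard         # rebuild index and working tree through the current filters
        git add .                # re-stage; CRLF blobs are now stored normalized as LF
        git commit -m "Normalize line endings"

    Once that commit reaches the central bare repository and the other machines pull it, the whole-file phantom diffs should stop; no recloning is needed.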

    Read the article

  • Is there a way to easily convert a series of tarballs of a source tree into a git repository?

    - by Hotei
    I'm new to git, and I have a moderately large number of weekly tarballs from a long-running project. Each tarball has on average a few hundred files in it. I'm looking for a git strategy that will allow me to add the expanded contents of each tarball to a new git repository, starting from version 1.001 and going through version 1.650. At this stage of the project, 99.5% of tarball(n) is just a copy of version(n-1): in other words, a perfect candidate for git. The desired end result is to have only the master branch remaining at the end of the process.

    I think I know git well enough to do this by hand. As I understand it, there is no possibility of a merge conflict, since there will be no opportunity to change master before the next version is added and committed. A shell script is my first guess, but I'm not sure how well bash will like it if git checkout branch_n gets processed while bash is executing in branch_n-1. For the purposes of this project the host environment is Ubuntu 10.4; available resources are 8 GB of RAM, 500 GB of free disk space, and a 4-core 3 GHz CPU.

    I don't need someone else to solve the problem, but I could use a nudge in the right direction as to how a git expert would approach it. Any advice from someone who's been there and done that would be appreciated.

    Hotei

    PS: I have looked at the site's suggested "related questions" and found nothing relevant.
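
    Since every import can land directly on master in order, no branch switching is needed at all; a rough sketch (the archive path is hypothetical, and --strip-components assumes each tarball has a single top-level directory):

        #!/bin/bash
        git init project && cd project
        for tb in /archive/project-1.*.tar.gz; do
            # clear the tree but keep .git, so deletions get picked up too
            find . -mindepth 1 -maxdepth 1 ! -name .git -exec rm -rf {} +
            tar xzf "$tb" --strip-components=1
            git add -A .                    # stages additions, edits, and deletions
            git commit -m "Import $(basename "$tb")"
        done

    Because the loop never checks out another branch, the bash-running-inside-a-changing-checkout concern doesn't arise, and git's object store keeps the 99.5% unchanged content only once per blob.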

    Read the article

  • How to automatically split git commits to separate changes to a single file

    - by Hercynium
    I'm just plain stuck as to how to accomplish this, or whether it's even possible. Even if it can be done, I wonder if it could be setting us up for a messed-up, unmanageable repository.

    I have set up two branches of the code base. One is "master" and the other is "prod". The HEAD of prod is always the latest code in production, and master is the main development branch. Here's the problem, though: we're converting from CVS here at $work, and most of the developers are still getting used to git. Their CVS workflow involved tagging versions of individual files for production, then updating the servers using the tag. Unfortunately, this has led to sloppy practices like committing unrelated changes together and then tagging the files after the fact... and the devs want to know how they can do the following: in their local repos, they hack and commit to their hearts' delight, then at the end of the day run a command that takes a list of files and merges the day's commits to those files (and only those files) into their local prod, even if those commits combine changes to other files.

    I know how to split commits with git rebase --interactive, but I have no clue how I would automate splitting commits at all, never mind the way I want to. I do realize the simplest thing would be to just tell them to switch to their prod branches, check out the files from their master branches into the working tree, and then commit to prod. My problem with that is losing the history of their commits over the day.
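
    For reference, a sketch of the simple approach named above (paths are placeholders); it yields one squashed prod commit per promotion rather than preserving the day's individual commits:

        git checkout prod
        git checkout master -- path/to/file1 path/to/file2   # copy master's versions into the tree
        git commit -m "Promote file1 and file2 to prod"

    Keeping per-commit history for just those paths would mean replaying each commit with something like git cherry-pick -n plus a path-limited checkout and discarding the rest, which is exactly the automation the question finds hard; the squash-per-promotion route is the predictable one.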

    Read the article
