Search Results

Search found 4989 results on 200 pages for 'svn merge'.

Page 172/200

  • Permission / owner issue with pushing to git when editing directly from repo?

    - by Susan
    I have a web interface for deploying scripts from our repo at GitHub to our live server. The web interface just triggers a bash script with some git commands. If I make changes locally, push to the repo, then run the bash script to pull from the repo to live, it works fine. However, if I make changes directly in the repo (via GitHub's web interface), I run into fast-forward / lock issues. These are the steps I'm taking:

    1. Make a change to a file in the GitHub repo.
    2. Run a bash script (as apache) via web from the live server that attempts a git push / pull.
    3. Get these problems:

        PUSH
        To [email protected]:name/name.git
         ! [rejected]        master -> master (non-fast-forward)
        error: failed to push some refs to '[email protected]:name/name.git'
        To prevent you from losing history, non-fast-forward updates were rejected
        Merge the remote changes before pushing again. See the 'Note about
        fast-forwards' section of 'git push --help' for details.

        PULL
        From github.com:name/name
         * branch            master     -> FETCH_HEAD
        error: unable to unlink old 'includes/footer.inc' (Permission denied)
        Updating 8f6d922..d1eba9d

    If I SSH in as root and attempt a push / pull, it works fine. Any ideas on why this method would not work as apache?
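
    A minimal sketch of the pull side of such a deploy script, assuming the live checkout is /var/www/site, apache is the deploy user, and the checkout only ever mirrors origin/master (all of these are placeholders):

        #!/bin/bash
        # Hypothetical deploy script, run as apache.
        cd /var/www/site || exit 1

        # Fetch instead of pull: nothing to merge, so no fast-forward errors.
        git fetch origin

        # Force the working tree to match the remote; stray local state
        # (e.g. files once written by root) cannot block the update.
        git reset --hard origin/master

    The "unable to unlink" error suggests some files in the checkout are owned by root from an earlier root-run pull; a one-time chown -R apache:apache /var/www/site (as root) would clear that. And if the live server only deploys, it never needs to push at all.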

    Read the article

  • Are there cloud network drives that let users lock files or mark them as "in use"?

    - by Brandon Craig Rhodes
    Having spent several hours reading about the features and limitations of services like DropBox and Jungle Disk and the hundreds of competitors they seem to have (as though everyone with an AWS account these days goes ahead and writes a file sharing application just for fun), I have yet to find one that would let a team of people at a small business collaborate without stepping all over each other's toes. At a small business there are often many small documents per project — estimates, contracts, project plans, budgets — and team members frequently have to open and edit them, with all sorts of problems happening if two people edit a file at once. Even if a sharing service is smart enough to keep both versions of a file that two people edited at the same time, most small-business software (like word processors, spreadsheets, estimating software, or billing systems) has no way to compare — much less to merge! — the changes in two rival versions edited without each other's knowledge.

    So, my question: are there cloud-based file sharing solutions that not only provide a virtual network drive that people can access, but also let users lock files — even if it's not a real lock but just a flag or indicator — to prevent remote workers from both editing the same file at once? Having one person wait for another to finish editing is a very, very small inconvenience compared to the hour or more that it can take to compare two estimates by hand until you find and resolve the rival changes. Given this, I am surprised that almost none of the popular file sharing solutions seem to recognize this problem and provide a solution! Does anyone know of a service that does?

    Read the article

  • Windows 7 x64 RTM USB Port Has Power But Won't Recognize Mouse/Keyboard/Anything

    - by ben
    I have an odd error that doesn't seem to fit any of the other odd Windows 7 x64 USB errors that Google kicks up. Here we go:

    1. Uninstalled TortoiseSVN and clicked restart computer. My machine had been up for around 28 days.
    2. On reboot my mouse and keyboard no longer worked, so I couldn't log in.
    3. Tried every USB port on my Dell 390 and the ports on my Dell 19" monitors; nothing worked. They had power, but Windows would not respond when I manipulated the keyboard/mouse.
    4. Rebooted and pressed F2 to get into the BIOS; the keyboard works fine there.
    5. The keyboard and mouse work fine on other computers over USB.
    6. Found adapters to convert the keyboard and mouse from USB to PS/2, which works: I'm actually typing this question on the same keyboard, same computer, just using PS/2 ports for my mouse and keyboard.

    It appears to be a Windows 7 x64 issue. Other things I have tried:

    - Multiple other mice and keyboards, and an iPhone, all with no luck. Each one gets power, but Windows never tries to install drivers or sees that they are connected.
    - Uninstalled and reinstalled all USB drivers. The drivers uninstall and reinstall fine and report no errors in Control Panel.
    - In Power Management, disallowed Windows from turning off USB ports to save power.
    - Installed the latest nVidia drivers for my graphics card; no change.

    Anyplace else I can look/try? Thanks!

    Read the article

  • Merging cuesheet chapter halves into single track for an audiobook

    - by TheSavo
    I have an audiobook that I have ripped and I need some help constructing chapters. I have already made a cue sheet:

        TITLE "Bookname"
        PERFORMER "the Author"
        FILE "File1.FLAC" WAVE ; 23971906.667 milliseconds
          TRACK 01 AUDIO
            TITLE "_Intro"
            INDEX 01 00:00:00
          TRACK 02 AUDIO
            TITLE "CH 01"
            INDEX 01 24:15:50
          TRACK 03 AUDIO
            TITLE "CH 02"
            INDEX 01 66:21:00
          TRACK 04 AUDIO
            TITLE "CH 03"
            INDEX 01 87:05:00

    The audiobook is in two files, and the chapter at the end of the first file is continued in the second file. However, the second file opens by restating:

    - The publisher
    - The book title
    - Blah blah blah

    I would like to merge the two 'halves' of the chapter into one seamless track. The only way I can think to do this would be:

    1. Bulk cut down the tracks.
    2. Drop the junk info into a junk track.
    3. Continue the track listings as normal.
    4. Take the two "halves" of the target chapter and build a separate cue sheet for them.

    I know there has to be an easier way. I am OK with making the 'junk' info a 'gap' or something. These are FLAC files that will be converted to MP3 for my phone and other portable devices. I have read the primers on cue sheets, but I am just not getting it.
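
    One lossless route that avoids hand-building a second cue sheet, sketched with shntool (assuming it is installed with FLAC support; the track numbers are made up for illustration):

        # Cut each file into per-track pieces using its cue sheet
        shntool split -f book1.cue -o flac File1.FLAC
        shntool split -f book2.cue -o flac File2.FLAC

        # If the restated publisher/title intro got its own track in
        # book2.cue, delete that piece, then glue the two halves of
        # the broken chapter back into a single track:
        shntool join -o flac split-track04.flac split-track05.flac

    The joined track can then appear in the final cue sheet as one ordinary TRACK entry, and the junk piece simply never makes it into the MP3 conversion.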

    Read the article

  • Sound out of sync after merging multiple mp4 files with Avidemux

    - by Goto10
    I am trying to join (merge) two or more .mp4 files together without re-encoding. Here is what I did:

    1. Started Avidemux 2.5.5.
    2. With File > Open, selected Input1.mp4. I received this message: "H.264 detected. If the file is using B-frames as reference it can lead to a crash or stuttering. Avidemux can use another mode which is safe but YOU WILL LOOSE SOME FRAME ACCURACY. Do you want to use that mode?". I chose "No".
    3. With File > Append, selected Input2.mp4. I received the same "H.264 detected" message again and chose "No".
    4. Changed the output format from AVI to MP4.
    5. Saved the output file (called Output.mp4) with File > Save > Save Video.

    Unfortunately, when I play Output.mp4 in VLC, the sound is out of sync with the second video. How can I correct this?
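
    If Avidemux keeps desyncing the appended part, one hedged alternative that also joins without re-encoding is ffmpeg's concat demuxer (this assumes a reasonably recent ffmpeg build, and that both inputs share identical codecs and stream parameters):

        # List the files to join, in order
        printf "file '%s'\n" Input1.mp4 Input2.mp4 > list.txt

        # -c copy concatenates the existing streams without re-encoding
        ffmpeg -f concat -safe 0 -i list.txt -c copy Output.mp4

    If the inputs differ in codec, resolution, or audio sample rate, stream copy cannot work and a re-encode of at least the audio is unavoidable.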

    Read the article

  • Unable to remove Read-Only attribute from folder in Windows XP

    - by elcuco
    I have this directory from which I cannot remove the read-only attribute. The computer is running XP SP2 (or SP3, not sure) and the directory sits on an NTFS file system. Looking into the web I found http://support.microsoft.com/kb/256614 , which says that if the directory is "customized" it's treated as a system folder and thus read-only. I don't think this is the scenario in my case, but anyway it's not helping; their recommendation is more or less:

        attrib -r -s d:\data /s /d

    and this is not working for me. Any other ideas?

    More info: the directory is served by an HTTP server (WAMP) and the directory is an SVN checkout. What happens is that the web server cannot write files into the directory (imagecache from Drupal, if you are really interested).

    Edit 2: the original post claimed that the directory sits on a VFAT FS; however, I booted Fedora 11 from a live CD and the partition is marked as NTFS.

    Edit 3: I have since left the company where this happened, so I cannot fully close this question. But things got even worse: I tested the "attrib -r" answer I posted and it did not work for me, and now the developer says that it worked for her. A nice WTF moment. Probably a reboot helped... Sorry for losing the details. If anyone has the same problem and one of the answers helps, please comment.

    Read the article

  • Tri-head Linux system with Xmonad: is it possible to have HW acceleration?

    - by progo
    What means exist to have three monitors, all controlled by Xmonad, and hardware 3D acceleration as well?

    I had the pleasure of using three monitors earlier this year, and while Xmonad and Xinerama handle three monitors easily, I had to throw in an extra display driver and also let go of Nvidia's own TwinView (which is a hack on Xinerama). This left me with no HW acceleration and some flickering, as double buffering wouldn't work with certain applications. However, three monitors handle so beautifully that I had a hard time coming back to two.

    I understand the easiest way to achieve an HW-accelerated tri-head combo is to split into two X servers. I wouldn't be able to move windows between them, so I'm not really into this solution. What's more, having a cheap old PCI card alongside an even slightly better PCIe card seemed to slow things down. Even when I disabled the third monitor in the Xorg configuration, I couldn't get HW acceleration to work; only after I physically disconnected the old PCI card could I get games back in business.

    Would a Matrox DualHead2Go/TripleHead2Go and a powerful Nvidia GPU do the trick? I understand Xmonad can be configured to "believe" that a "single" (as the DualHead2Go will merge) 3360x1050 display is actually two different ones, so that Xmonad's Mod-w and Mod-e would work properly there?

    Read the article

  • Per-connection bandwidth limit

    - by Kyr
    Apparently, our server box running Windows Server 2008 R2 has a per-connection bandwidth limit of 0.2 MB/s. Meaning, while one TCP connection can pull at most 0.2 MB/s, 60 parallel connections can pull 12 MB/s. We first noticed this when trying to check out a large SVN repository from this server.

    I used a simple Java application to test this, transferring data from server to workstation using a variable number of threads (one connection per thread). The server part of the application simply writes a 1 MB memory buffer to the socket 100 times, so there is no disk involvement. Each connection topped out at 0.2 MB/s; the per-connection limit was the same for a single connection as for 60 parallel ones.

    The problem is that I have no idea where this limit comes from. I have very little experience administering Windows Server, so I was mostly trying to find something by googling. I have checked the following:

    - Local Computer Policy > QoS Packet Scheduler > Limit reservable bandwidth: it's Not configured;
    - Group Policy Management Console: we have two GPOs, but neither has any policy-based QoS defined;
    - There isn't any bandwidth limiter program installed, as far as I can tell. We're using the standard Windows Firewall.

    I can update this question with any additional information if needed.

    Read the article

  • Multiple LDAP servers with mod_authn_alias: failover not working when the first LDAP is down?

    - by quanta
    I've been trying to set up redundant LDAP servers with Apache 2.2.3.

    /etc/httpd/conf.d/authn_alias.conf:

        <AuthnProviderAlias ldap master>
            AuthLDAPURL ldap://192.168.5.148:389/dc=domain,dc=vn?cn
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

        <AuthnProviderAlias ldap slave>
            AuthLDAPURL ldap://192.168.5.199:389/dc=domain,dc=vn?cn
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

    /etc/httpd/conf.d/authz_ldap.conf:

        # mod_authz_ldap can be used to implement access control and
        # authenticate users against an LDAP database.
        LoadModule authz_ldap_module modules/mod_authz_ldap.so

        <IfModule mod_authz_ldap.c>
            <Location />
                AuthBasicProvider master slave
                AuthzLDAPAuthoritative Off
                AuthType Basic
                AuthName "Authorization required"
                AuthzLDAPMemberKey member
                AuthUserFile /home/setup/svn/auth-conf
                AuthzLDAPSetGroupAuth user
                require valid-user
                AuthzLDAPLogLevel error
            </Location>
        </IfModule>

    If I understand correctly, mod_authz_ldap should try to find users in the second LDAP server if the first one is down or OpenLDAP is not running on it. But in practice, that does not happen. Testing by stopping LDAP on the master, I get a "500 Internal Server Error" when accessing the Subversion repository, and the error_log shows:

        [11061] auth_ldap authenticate: user quanta authentication failed; URI / [LDAP: ldap_simple_bind_s() failed][Can't contact LDAP server]

    Did I misunderstand? Does "AuthBasicProvider ldap1 ldap2" only mean that if mod_authz_ldap can't find the user in ldap1, it will continue with ldap2? It doesn't include a failover feature (ldap1 must be running and working fine)?

    Read the article

  • Backup plan for linux webserver in small business?

    - by radman
    Hi, I am currently in the process of writing a backup plan for the webserver used by my business. I am very new to this area and have a few ideas about how things should work, but am unsure what tools to use and what sort of restore process is appropriate. I'm looking for something relatively simple; it doesn't have to be 100% paranoid, just enough to give me a reliable backup. Speed is not of the essence and there is not going to be a live fallback in place. The backup will go onto a single HDD stored onsite (no option for offsite as yet), and backups will take place weekly. I am constrained by both time and money, which is why I'm aiming for a good-enough solution.

    - Is taking an image of the webserver's system drive periodically and using that as the backup appropriate?
    - Should I be testing that the backups restore correctly every time I perform one?
    - This is a bit broad, but what setup would you use in my place, given the services I am running? Should I add additional machines and split the services?

    Any advice is much appreciated! Server details below.

    Platform: Linux (Ubuntu Server)
    Running: mail server, SVN server, MediaWiki, WordPress, Apache webserver
    Hardware: single 500 GB SATA drive
    Architecture: single machine behind a router (with firewall), accessible to the internet
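
    For services like these, a weekly dump-then-archive script is often "good enough"; a minimal sketch, assuming MySQL sits behind MediaWiki/WordPress and with all paths as placeholders to adapt:

        #!/bin/bash
        # Weekly backup sketch: dump what can't safely be copied live,
        # then archive everything to the attached backup disk.
        set -e
        DEST=/mnt/backup/$(date +%F)
        mkdir -p "$DEST"

        # Subversion: dump the repository rather than copying it hot
        svnadmin dump /var/svn/repo > "$DEST/repo.svndump"

        # MediaWiki / WordPress databases
        mysqldump --all-databases > "$DEST/mysql.sql"

        # Web roots, mail spools, and system configuration
        tar czf "$DEST/files.tar.gz" /var/www /var/mail /etc

    On the testing question: a backup that has never been restored is a hope, not a backup; an occasional test restore onto a scratch machine or VM is cheap insurance.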

    Read the article

  • PHP crashing during oAuth scripts

    - by FunkyChicken
    I just installed Nginx 1.2.4 and PHP 5.4.0 (from svn, php-fpm) on CentOS 5.8 x64. The problem I have is that PHP crashes the moment I run any social oAuth scripts. I have tried to log into Facebook, Twitter and Google with various scripts that I know work on my other servers. When I load the scripts I get a 502 error from Nginx, and I find these errors in the logs.

    In the php-fpm log:

        WARNING: [pool www] child 23821 exited on signal 11 (SIGSEGV) after 1132.862984 seconds from start

    In the nginx log:

        ERROR: recv() failed (104: Connection reset by peer) while reading response header from upstream

    From what I can see, it goes wrong when PHP tries to make a request to any of the oAuth servers. For example, https://github.com/mahmudahsan/PHP-SDK-3.0---Graph-API-base-Facebook-Connect-Tutorial-Source is one of the scripts that works perfectly on my other machines but causes PHP to crash here. I found http://stackoverflow.com/questions/3616191/nginx-php-fpm-502-bad-gateway which seems to be a similar problem, but I cannot find a way to solve it.

    UPDATE: I have been doing some debugging in one of the scripts that is playing up. Line 808 ( http://pastebin.com/gSnzRtXb ) runs the curl_exec() command, and when that runs, it crashes. If I put echo 'test'; exit; just above that line, it echoes correctly; if I put it below that line, PHP crashes. Which means it's line 808 that causes the crash. So I made a very simple script to do some testing ( http://pastebin.com/Rshnyhcm ), which also uses curl_exec, but that runs just fine. So I started to dig deeper into that query from the Facebook script to see what values the $opts array from line 806 contains. Output of that array: http://pastebin.com/Cq9ffd3R

    What the problem is, I still have no clue :(
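
    A hedged way to split the problem in half is to replay the same kind of request outside PHP: if the CLI transfer succeeds while curl_exec() still segfaults, the network and CA bundle are fine and suspicion falls on the PHP/libcurl build itself (the URL below is just an example oAuth endpoint):

        # Same kind of HTTPS POST, no PHP involved
        curl -v 'https://graph.facebook.com/oauth/access_token' -d 'code=test'

        # See which libcurl and SSL library the PHP module was built against
        php -i | grep -iE 'curl|ssl'

    Since this PHP was built from svn, a mismatch between the libcurl headers it was compiled against and the shared library loaded at runtime is a classic source of exactly this kind of SIGSEGV.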

    Read the article

  • Mac OS X: Open up 3 terminals, run different commands in each of them, to set up a development environment

    - by taelor
    I'm a Ruby on Rails web developer and there is a lot of repetition I go through to start up my development environment. I was wondering if there is any way to remove some of this repetition by writing a script, or using a program (like Quicksilver) or something, to get my work environment going.

    I know how to use Quicksilver to open up Terminal, and I even have a saved window group to get my 3 or 4 panes open. The next thing I would love to happen automatically is getting all three to go to a certain directory and each run different commands: one starts the local server and, in another tab, a background process; another opens TextMate and then starts a console session; the last one runs an svn (or git) status. Oh, and I would love to also open Firefox with a few tabs pointing at a couple of locations.

    Does anyone have any suggestions on how I could make all this happen with one Quicksilver command, or a double-click on some sort of script on my desktop?
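
    A minimal sketch of the script route, driving Terminal through AppleScript from a plain shell script that Quicksilver (or a Desktop double-click) can launch; the project path and commands are placeholders:

        #!/bin/bash
        PROJ=~/code/myapp    # hypothetical project directory

        # Each 'do script' opens a new Terminal window running the command
        osascript <<EOF
        tell application "Terminal"
            do script "cd $PROJ && script/server"
            do script "cd $PROJ && script/console"
            do script "cd $PROJ && svn status"
        end tell
        EOF

        # Editor and browser
        open -a TextMate "$PROJ"
        open http://localhost:3000 http://localhost:3000/admin

    Saved as something like ~/Desktop/devup.command and marked executable, Finder will run it in Terminal on double-click, and Quicksilver can trigger it like any other item.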

    Read the article

  • Is ffmpeg incorrectly interpreting .aif files?

    - by marue
    On an Ubuntu 10.04 server I installed the ffmpeg packages with apt. ffmpeg works afterwards and does what it should. Almost. For testing purposes I uploaded a few audio files, and one of them, an aif file, is not interpreted correctly. While on my workhorse (Mac, Snow Leopard) ffmpeg reports the format as

        Stream #0.0: Audio: pcm_s24be, 44100 Hz, 2 channels, s32, 2116 kb/s

    my Ubuntu server says it is

        Stream #0.0: Audio: pcm_s24be, 44100 Hz, stereo, s16, 2116 kb/s

    which is the wrong bit depth. Ubuntu then fails to convert the file with the error message

        [pcm_s24be @ 0xcd4b580]invalid PCM packet
        Error while decoding stream #0.0

    which is certainly not true; the file is perfectly valid. Are there any known issues with ffmpeg interpreting the aif format? How can I find out which version of the aif codec ffmpeg is using? Any ideas how to approach this issue?

    ffprobe output:

        FFprobe version SVN-r20090707, Copyright (c) 2007-2009 Stefano Sabatini
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 0 / 52.20. 1
          libavformat   52.31. 0 / 52.31. 0
          built on Jan 20 2010 00:13:01, gcc: 4.4.3 20100116 (prerelease)
        Input #0, aiff, from 'testfile.aif':
          Duration: 00:00:04.00, start: 0.000000, bitrate: 2117 kb/s
          Stream #0.0: Audio: pcm_s24be, 44100 Hz, stereo, s16, 2116 kb/s

    Update 2: forcing the conversion with -sample_fmt s32 doesn't change anything. Strange thing is: even without -sample_fmt s32, I just realized that the conversion is working and creates valid audio files. There is just the error message from above.
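
    Since the codec (pcm_s24be) is identified correctly and only the reported sample format differs, the s16 line smells like a probing quirk of that 2009-era build rather than file corruption. A hedged check (file name from the question; pcm_s24le and -acodec are standard ffmpeg options):

        # Force an explicit 24-bit output codec; if this yields a clean
        # 24-bit WAV, the decoder is fine and only the report is wrong.
        ffmpeg -i testfile.aif -acodec pcm_s24le test24.wav

        # Note the library versions blamed, then retry with a newer build
        ffmpeg -version

    Given that the conversion does in fact produce valid files, comparing a current ffmpeg build against SVN-r20090707 would likely pin the "invalid PCM packet" message on a long-fixed bug.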

    Read the article

  • Command line scripts to restore the 4 system databases of MS SQL Server 2008

    - by ciscokid
    Hi there, can someone give me some advice on how to restore the 4 system databases (master, msdb, model, tempdb) of a SQL Server 2008 instance, please? I've already done some testing myself (on restoring the master database) with the following command line script as a result:

        ::set variables
        set dbname=master
        set dbdirectory=C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA

        title Restoring %dbname% database
        net stop mssqlserver
        cd C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn
        sqlservr -m
        sqlcmd -Slocalhost -E -Q "restore database master from disk='c:\master.bak' WITH REPLACE"
        net start mssqlserver
        pause

    After the execution of the 'sqlservr -m' command (used to start the server instance in single-user mode, which is only necessary when restoring the master database), the script stops. So in order to execute the last two commands I need to separate the script into two smaller scripts and run them one after the other. Does anyone have an idea how I can merge them into one single script that runs completely without any interruption?

    I also want to restore the other three system databases using command line scripts like this one. Can someone please advise me how to go on? I've already noticed that restoring tempdb is not so easy, but there has to be a way... Looking forward to your advice!
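
    The script stops because sqlservr -m runs in the foreground and never returns. A hedged sketch of one way around that: launch the instance in its own window with start, wait, then restore (paths are the ones from the question; the 20-second wait is a crude placeholder for a real readiness check). Note also that tempdb is rebuilt from model at every startup, so it is never backed up or restored:

        net stop mssqlserver
        :: run single-user sqlservr in its own window so this script continues
        start "" "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn\sqlservr.exe" -m
        timeout /t 20
        sqlcmd -Slocalhost -E -Q "RESTORE DATABASE master FROM DISK='c:\master.bak' WITH REPLACE"
        :: restoring master makes the instance shut itself down
        net start mssqlserver
        :: model and msdb can be restored on a normally started instance
        :: (msdb additionally needs SQL Server Agent stopped)
        sqlcmd -Slocalhost -E -Q "RESTORE DATABASE model FROM DISK='c:\model.bak' WITH REPLACE"
        net stop sqlserveragent
        sqlcmd -Slocalhost -E -Q "RESTORE DATABASE msdb FROM DISK='c:\msdb.bak' WITH REPLACE"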

    Read the article

  • mplayer (mplayerhq.hu) repeats ending audio frames

    - by kamikatze
    mplayer (from mplayerhq.hu) on Windows repeats the last few audio frames upon exit. When the video ends, before "Exiting... (End of file)" appears in the command prompt, you hear the last half second or so of the audio track again. This behavior is the same across multiple containers, codecs and sound cards, on Vista and Windows 7. Is there a workaround for this?

    My playback specs:

        MPlayer Sherpya-MT-SVN-r31027-4.2.5 (C) 2000-2010 MPlayer Team
        150 audio & 343 video codecs

        Playing splash_final.wmv.
        ASF file format detected.
        [asfheader] Audio stream found, -aid 1
        [asfheader] Video stream found, -vid 2
        VIDEO:  [WMV3]  1280x720  24bpp  1000.000 fps  6291.5 kbps (768.0 kbyte/s)
        ==========================================================================
        Opening video decoder: [dmo] DMO video codecs
        DMO dll supports VO Optimizations 0 1
        DMO dll might use previous sample when requested
        Decoder supports the following formats: YV12 YUY2 UYVY YVYU RGB8 [..]
        Decoder is capable of YUV output (flags 0x1b)
        Movie-Aspect is undefined - no prescaling applied.
        VO: [directx] 1280x720 => 1280x720 Planar YV12
        Selected video codec: [wmv9dmo] vfm: dmo (Windows Media Video 9 DMO)
        ==========================================================================
        Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
        AUDIO: 44100 Hz, 2 ch, s16le, 329.8 kbit/23.37% (ratio: 41221->176400)
        Selected audio codec: [ffwmav2] afm: ffmpeg (DivX audio v2 (FFmpeg))
        ==========================================================================
        AO: [dsound] 44100Hz 2ch s16le (2 bytes per sample)
        Starting playback...

    Read the article

  • Installing ffmpeg + dependencies on AWS Linux AMI (repo issues)

    - by HdN8
    I'm installing ffmpeg to run on an Amazon Linux AMI and have added the rpmforge and dag repos. Here are some guides I'm using for reference: TWoZaO and Razuna. The rpmforge repo has ffmpeg, but if you try to install it, it complains about missing dependencies (for me, libSDL-1.2.so.0()(64bit)). Regardless, I will install ffmpeg from svn so I can be sure to enable the options I want (namely libx264). It seems strange to me, though, that SDL is not in rpmforge or dag; according to both of my references above, it should be there. I tried to grab it manually, but it needs these dependencies, so no go:

        error: Failed dependencies:
            SDL = 1.2.10-8.el5 is needed by SDL-devel-1.2.10-8.el5.x86_64
            alsa-lib-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libGL-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libGLU-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libSDL-1.2.so.0()(64bit) is needed by SDL-devel-1.2.10-8.el5.x86_64
            libX11-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXext-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXrandr-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXrender-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXt-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
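
    A hedged outline of the from-source route (the svn and git URLs are the historical upstream ones; package names may differ on the Amazon AMI):

        # Toolchain and the troublesome SDL headers, if yum can find them
        yum install -y gcc make nasm SDL-devel

        # Build x264 first so ffmpeg's configure can find it
        git clone git://git.videolan.org/x264.git && cd x264
        ./configure --enable-shared && make && make install && cd ..

        # Then ffmpeg from svn with x264 enabled (GPL required for libx264)
        svn checkout svn://svn.ffmpeg.org/ffmpeg/trunk ffmpeg && cd ffmpeg
        ./configure --enable-gpl --enable-libx264 --enable-shared
        make && make install && ldconfig

    SDL only matters for ffplay; if all that's needed is the ffmpeg binary, configure simply skips ffplay when SDL is absent, which sidesteps the broken SDL packages entirely.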

    Read the article

  • How could application installations/configurations be easier in Linux?

    - by ajsie
    Although you can do anything in Linux, it tends to require a lot of tweaking in config files and reading a lot of manuals/tutorials before you have it running your way. I know that it gets a lot easier over time, and the apt-get installations on Ubuntu/Debian are heading in the right direction. But how can Linux be more user-friendly for us in the future?

    I thought that more could be automated, like in an IDE environment: e.g. typing svn would show all the subcommands, with a description of each, as you move between them with your keyboard. That would be great, but that's just one example. Another is navigation in the terminal between folders: right now you have to type a lot just to jump between directories, and some more automation would be welcome here too.

    I know these extra features would cost the server something, but it's 2010 now; features like these are not that heavy for the CPU, and they make a server friendlier to maintain rather than frightening people off. What do you think? Should/could we have a more user-friendly Linux environment on servers? Is there something that has annoyed you a lot? A lot of things are done in the Unix way, but maybe we should reinvent the wheel in some areas, because apparently it is still repetitive and difficult to do easy tasks. It should be easier, I think.
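
    Some of the svn example already exists through bash's programmable completion (the bash-completion package ships rules for svn, git, apt and many others); a toy sketch of the underlying mechanism:

        # Offer these svn subcommands when Tab is pressed after 'svn'
        complete -W "checkout commit update status log diff merge" svn

        # Less typing between directories: cd searches these roots too
        export CDPATH=.:~/projects:/var/www

    Descriptions next to each candidate, as in an IDE, go beyond what bash's completion menu shows, though zsh's completion system does display per-option explanations.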

    Read the article

  • How can one make vim change terminal colors?

    - by amn
    I am using command-line vim running in an xterm (which runs sh). I have color in vim according to a color scheme I like. The problem is, as usual with 256-color terminals and truecolor color schemes, the colors are wrong. Now, I know I can do a gazillion things to fix this, including installing gvim, but I like my terminal. In fact, using xrdb [-merge] on an .Xresources file, I now actually have xterm override the color values, and the theme looks perfect.

    Since I may be switching to another theme, I need some workflow to have vim do what xrdb does: reset the terminal's color palette. Right now I have to reset the color values with xrdb first, then launch another xterm to actually pick up those values, then launch vim from that newly opened xterm to get the exact colors.

    The way I understand it, a vim color scheme, just like any other terminal application, refers to colors by their ids, and X resources set the values themselves. I am sure terminal control sequences can reset the actual color values; I managed to set my terminal's background color at runtime that way. How would I make vim execute these sequences to match the values to the color scheme? And is there any reference for these control sequences, as part of some standard?
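
    The sequences in question are xterm's OSC 4 (redefine a palette slot) and OSC 10/11 (default foreground/background), documented in xterm's ctlseqs reference. A sketch of a palette script that a vim autocmd could shell out to when the color scheme changes (the color values are placeholders for a scheme's real ones):

        #!/bin/sh
        # Redefine palette slot $1 to color spec $2 via OSC 4.
        set_color() {
            printf '\033]4;%d;%s\033\\' "$1" "$2"
        }

        set_color 1 'rgb:cc/24/1d'    # red
        set_color 2 'rgb:98/97/1a'    # green
        set_color 4 'rgb:45/85/88'    # blue

        # Default background (OSC 11) and foreground (OSC 10)
        printf '\033]11;%s\033\\' 'rgb:28/28/28'
        printf '\033]10;%s\033\\' 'rgb:eb/db/b2'

    Wired up as something like autocmd ColorScheme mytheme silent !sh ~/.vim/palette-mytheme.sh in .vimrc, vim would repaint the terminal's palette whenever that scheme loads, which removes the xrdb-then-new-xterm dance.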

    Read the article

  • Can two mocha for After Effects X-Spline Layers be merged?

    - by George Profenza
    Hello, I'm new to mocha for After Effects, but I like the auto-tracking feature. There are a few markers I'm adding, but one is giving me a bit of a headache. I'm tracking a circle that moves around a larger object: it moves in front of it initially, then behind, being occluded by the larger object, then in front again.

    I tried to track the circle in the first part (when it's in front, before being occluded) and in the second part (when it's in front again, coming out on the other side of the large object) using a single X-Spline. The problem is that the second part starts in a different location, and I don't know how to move the X-Spline to the new position and track from there without affecting the previous keyframes.

    As a workaround I use two X-Spline layers, export the data to .txt files, then manually merge the two files into a new one containing keyframes from both layers. Is there an easier way to do this (either merging two X-Spline layers, or using a single X-Spline layer that can move to a new location without affecting previous keyframes)? Any suggestion would help.

    Read the article

  • IIS replication - Is it possible?

    - by Ian
    Hi all, I have a requirement from a client for a centralised system that all his satellite branches can work on. Currently this is an ASP.NET Web Forms app running under IIS 7 on Windows Server 2008 R2, using a SQL Server backend. The client has now requested that each branch have a local server, so that when the internet connection is down the branch's productivity does not suffer. His other request is that everything can be updated via the central hub, with the updates filtering down to the individual sites by some mechanism. What are my options here? I see the following:

    - Multiple redundant internet connections controlled by load balancers.
    - SQL replication for the DB (what is better: snapshot, merge or transactional?).
    - Rolling my own IIS sync service that periodically checks whether there is a new version of the web app and downloads it (I hope there are better options than this; see the sketch after this list).
    - Something way better I don't yet know about (I hope this is the one I need).

    One of my client's concerns is that the branches are often in very remote areas, where everything from technicians to internet connectivity is hard to find and very scarce. Any ideas, suggestions, tips etc. are welcome. Thanks all.
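
    For the content half of option 3, a hedged sketch that avoids writing a service at all: a scheduled task on each branch server that mirrors the published build from the hub whenever the link is up (share and path names are placeholders, and this covers files only, not IIS configuration):

        :: Pull the latest published build from the hub
        robocopy \\hub\deploy\webapp C:\inetpub\webapp /MIR /R:2 /W:5

    On the database question: merge replication is the variant designed for nodes that work disconnected and synchronize both ways when connectivity returns; transactional replication fits better when branches mostly read central data.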

    Read the article

  • How should I perform database maintenance on a 24x7 system?

    - by solublefish
    I'm a software developer who inherited a part-time DBA role. I'm responsible for an application backed by a small, high-volume 24x7 database on SQL Server 2008. While there's other stuff in the DB, the critical piece is a 50 GB, 7.5M-row table that serves 100K requests/sec during peak load, and about half that at "night". This is 99%+ read traffic, but the writes are constant and required. I need to be able to perform periodic maintenance without a maintenance window: say an index rebuild, a job to purge old data, Windows Update, or a hardware upgrade. Most of the advice I've seen is along the lines of "MAKE a maintenance window." While I appreciate the sentiment, I hope there's another way. If it will solve this problem, I do have the ability to purchase new hardware or modify the database, the clients (a set of web services servers), and much of the application code (ADO.NET + ASP.NET).

    I've been thinking along the lines of using the warm spare (or a third server) to do the maintenance, and then "swapping" it into production:

    1. Synchronize the spare by restoring backups, including a current transaction log.
    2. Perform the maintenance tasks.
    3. Reconfigure clients to connect to the spare server. Existing connections finish within a minute or so.
    4. The spare server is now the production server.

    The remaining problem is that the new production server is now out of date by however long the maintenance took. Is there some way that the original production server can be made to queue up changes and merge them to the spare between steps 2 and 3? Any other ideas?
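
    What step 1 describes is essentially log shipping, and log shipping also answers the "queue up changes" question: keep restoring log backups to the spare WITH NORECOVERY while maintenance runs, and only recover it at cutover. A hedged sqlcmd sketch (server, database and share names are placeholders):

        :: Bring the spare up to date, leaving it able to accept more logs
        sqlcmd -S spare -E -Q "RESTORE DATABASE AppDb FROM DISK='\\backup\appdb_full.bak' WITH NORECOVERY, REPLACE"
        sqlcmd -S spare -E -Q "RESTORE LOG AppDb FROM DISK='\\backup\appdb_log_01.trn' WITH NORECOVERY"

        :: ...repeat for each log backup taken during maintenance; then,
        :: at cutover, restore the final tail-log backup WITH RECOVERY
        sqlcmd -S spare -E -Q "RESTORE LOG AppDb FROM DISK='\\backup\appdb_tail.trn' WITH RECOVERY"

    SQL Server 2008's database mirroring (or, on Enterprise Edition, online index rebuilds) attacks the same problem with less hand-rolled machinery, at the cost of more setup.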

    Read the article

  • How to replicate a Windows server (IIS, files, configuration state)?

    - by Geo
    Maybe a better question is: what is the closest competitor to DoubleTake? I am looking to replicate a Windows production server so that if it fails I have an immediate backup. Any ideas?

    Note 1: I forgot to add that this server is on the Amazon EC2 cloud.

    Note 2: The main situation we have is recreating the configuration settings, like IIS, FTP Server, SQL Server and SVN Server.

    Note 3: So far I have been given three options as answers to my original question:

    - AppAssurance: after talking to their sales team, they do not support Amazon as a cloud provider. Basically there is a technical need to be able to reboot from a disk or similar media, so an ESX virtual machine environment will work, but not EC2.
    - Acronis: works as a backup in Ghost style. This will work for other types of scenarios.
    - Use the Amazon EC2 API: this option is ideal, but only works if you are developing a cloud application, rather than hosting a regular application in a cloud scenario.

    This means I am still looking for the answer. Any other ideas?

    Read the article

  • Shared configuration for Eclipse on Debian server

    - by Joris Meys
    I've manually installed the latest Eclipse on our Debian server and wanted to configure it so that all users share the same configuration. It turned out to be less obvious than I thought: I don't seem to be able to install packages for all users. If I run Eclipse myself, all configuration data is saved under my own home directory. If I run Eclipse using sudo, everything is saved under the root directory but is not accessible to other users when they run Eclipse. I've been browsing the Eclipse manual and some forums, but apart from a "yes, you can" I couldn't find any information on how it should be done. The biggest problem is installing plugins so that all users can find them. Any help is greatly appreciated.

    Eclipse: 3.6.1 Classic, installed using this procedure.
    Server uname: GNU/Linux 2.6.26-2-amd64.
    The server is accessed using PuTTY, and a GNOME desktop through RealVNC; just mentioning it in case that is of any importance. Our sysadmin is on "prolonged leave" (working in Spain and never replaced), so I'm stuck without help here.

    EDIT: I asked this question on Stack Overflow as well, as I wasn't certain it is a genuine server-related question. Please feel free to merge both questions at the appropriate place.
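
    For the plugin half, Eclipse's p2 director application can install features into the shared installation from the command line, run as whichever account owns the install directory (the install path, repository URL and feature id below are examples for a 3.6/Helios setup):

        # Install a feature into the shared Eclipse for all users
        /opt/eclipse/eclipse -nosplash \
          -application org.eclipse.equinox.p2.director \
          -repository http://download.eclipse.org/releases/helios \
          -installIU org.eclipse.egit.feature.group

        # Users still get private workspaces and preferences in $HOME;
        # only the installation itself (plugins/features) is shared.

    Keeping the install owned by a dedicated non-root account (or group-writable for an admin group) avoids the sudo-then-unreadable problem described above.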

    Read the article

  • Most cost-efficient way to back up Subversion data to S3?

    - by sludge
    I'm looking at using S3 as an offsite backup repo for my Subversion database. When I dump my SVN database, it's about 10 gigabytes, and I would like to avoid the charge of uploading that data repeatedly. The anatomy of this large file is such that new changes to Subversion modify the tail of the file, with everything else staying the same. Because Amazon S3 does not allow you to "patch" files with changes, I would have to upload ten gigs every time I instantiate a backup, even after a simple commit to Subversion. Here are the options as I see them:

    Option 1: I am looking at duplicity, which has --volsize, which splits data over a given number of megs. Is it possible to split the Subversion dumps using this, so that further incremental backups are measured in megabytes?

    Option 2: Can I just back up the hot Subversion repository? This seems like a bad idea if it is in the middle of a commit; however, I have the option of taking the repo offline between midnight and 4am. Each revision in my Berkeley DB uses a file as its record.
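
    svnadmin can produce the increments directly, which sidesteps the 10 GB dump entirely: dump only the revisions added since the last run and upload just that piece. A sketch (bucket name and state file are placeholders; s3cmd stands in for whatever S3 uploader is available):

        #!/bin/sh
        # Incremental SVN-to-S3 backup sketch.
        REPO=/var/svn/repo
        STATE=/var/backups/last_rev
        LAST=$(cat "$STATE" 2>/dev/null || echo 0)
        HEAD=$(svnlook youngest "$REPO")

        if [ "$HEAD" -gt "$LAST" ]; then
            FILE=svn-$((LAST + 1))-$HEAD.dump.gz
            svnadmin dump "$REPO" -r $((LAST + 1)):$HEAD --incremental \
                | gzip > "/tmp/$FILE"
            s3cmd put "/tmp/$FILE" "s3://my-svn-backups/$FILE"
            echo "$HEAD" > "$STATE"
        fi

    Restoring means replaying the full dump plus each incremental with svnadmin load, so an occasional fresh full dump keeps the chain short. Since svnadmin dump only reads committed revisions, this also avoids the need to take the hot repository offline.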

    Read the article

  • Would like to change audio codec, but keep video settings with ffmpeg

    - by Craig Tataryn
    I have a video for which I'd like to convert the audio codec to AAC at 320 kbps / 44.100 kHz. What ffmpeg switches would I use so that all the video settings and codec remain the same, but only the audio codec and settings change? Here's my video:

        $ ffmpeg -i Winnipeg.rb\ Scala-Talk.mov
        FFmpeg version SVN-r25375, Copyright (c) 2000-2010 the FFmpeg developers
          built on Oct  6 2010 13:02:41 with gcc 4.2.1 (Apple Inc. build 5664)
          configuration: --enable-libmp3lame --enable-shared --disable-mmx --arch=x86_64
          libavutil     50.32. 2 / 50.32. 2
          libavcore      0. 9. 1 /  0. 9. 1
          libavcodec    52.92. 0 / 52.92. 0
          libavformat   52.80. 0 / 52.80. 0
          libavdevice   52. 2. 2 / 52. 2. 2
          libavfilter    1.48. 0 /  1.48. 0
          libswscale     0.12. 0 /  0.12. 0
        Seems stream 0 codec frame rate differs from container frame rate: 2000.00 (2000/1) -> 10.00 (10/1)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Winnipeg.rb Scala-Talk.mov':
          Metadata:
            major_brand     : qt
            minor_version   : 537199360
            compatible_brands: qt
          Duration: 01:10:53.00, start: 0.000000, bitrate: 283 kb/s
            Stream #0.0(eng): Video: h264, yuv420p, 800x598, 94 kb/s, 10 fps, 10 tbr, 1k tbn, 2k tbc
            Stream #0.1(eng): Audio: adpcm_ima_qt, 22050 Hz, 1 channels, s16
            Stream #0.2(eng): Audio: adpcm_ima_qt, 22050 Hz, 1 channels, s16
        At least one output file must be specified

    Many thanks in advance! One thing with ffmpeg I've never been able to grok is how to just "tweak" files without having to restate every little setting for things you don't want changed.
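
    A hedged sketch for exactly this: -vcodec copy passes the video stream through untouched, so no video settings need restating, and only the audio is re-encoded (the AAC encoder name varies by build; libfaac is typical for this 2010-era ffmpeg, while newer builds use the built-in aac encoder):

        ffmpeg -i "Winnipeg.rb Scala-Talk.mov" \
            -vcodec copy \
            -acodec libfaac -ab 320k -ar 44100 \
            Winnipeg-aac.mov

    Note the input has two identical mono audio streams; by default ffmpeg of this vintage picks one of them, so if both are wanted (or they should be combined to stereo), a -map or -ac 2 option would need adding.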

    Read the article
