Search Results

Search found 10107 results on 405 pages for 'remote backups'.


  • Is Active Directory required for a team using TFS 2010?

    - by Andy
    I am new to TFS 2010 and want to give it a fair try on a small project with a team of 2-3 remote people. Is it a requirement that all of my team members be part of an Active Directory setup, or can they remain loosely coupled and log in with a username/password?

    Read the article

  • How should I configure backup of my server?

    - by ed209
    I have just rented a dedicated server. If it helps, this is the configuration I have:

        CPU1: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz (8 cores)
        RAM:  15975 MB
        Disk /dev/sda: 120.0 GB (114 GiB) - doesn't contain a valid partition table
        Disk /dev/sdb: 3000.6 GB (2861 GiB) - doesn't contain a valid partition table
        Disk /dev/sdc: 3000.6 GB (2861 GiB) - doesn't contain a valid partition table

    /dev/sda is a 120 GB SSD. This is where I have Ubuntu/LAMP installed, and it's the drive that will run my site. The account also came with two other 3000 GB drives which I don't really need, so I figured I could use them to back up the main 120 GB drive. A couple of things I wondered: Should I use these drives for backups? And how should I back up? The data I want to back up is a user uploads directory full of images, plus the database. Everything else is either in a code repo or backed up some other way. It would be nice to know there is a disk image of the 120 GB drive somewhere that I can restore should there be any problems, but equally I don't mind doing a fresh install of all the software and copying over just the images and a database dump. Thanks for your advice! (Also, I'm happy not to use the two extra drives and back up elsewhere if that's more sensible.)
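
    For illustration, a minimal sketch of the kind of backup described above, assuming a MySQL database named "mysite" with credentials in ~/.my.cnf, uploads in /var/www/uploads, and one of the spare drives mounted at /mnt/backup - all of these names are assumptions, not taken from the question:

        #!/usr/bin/env python3
        """Nightly backup sketch: dump the database and mirror the uploads directory
        onto one of the spare 3 TB drives. Requires mysqldump, gzip and rsync."""
        import datetime
        import os
        import subprocess

        BACKUP_ROOT = "/mnt/backup"
        UPLOADS_DIR = "/var/www/uploads/"   # trailing slash: copy the contents
        DB_NAME = "mysite"

        def dump_database() -> None:
            os.makedirs(os.path.join(BACKUP_ROOT, "db"), exist_ok=True)
            stamp = datetime.date.today().isoformat()
            dump_path = os.path.join(BACKUP_ROOT, "db", f"{DB_NAME}-{stamp}.sql.gz")
            # Pipe mysqldump through gzip so keeping a few weeks of dumps stays cheap.
            with open(dump_path, "wb") as out:
                dump = subprocess.Popen(
                    ["mysqldump", "--single-transaction", DB_NAME],
                    stdout=subprocess.PIPE,
                )
                subprocess.run(["gzip"], stdin=dump.stdout, stdout=out, check=True)
                if dump.wait() != 0:
                    raise RuntimeError("mysqldump failed")

        def mirror_uploads() -> None:
            # rsync only transfers files that changed, so repeated runs are cheap.
            subprocess.run(
                ["rsync", "-a", "--delete", UPLOADS_DIR,
                 os.path.join(BACKUP_ROOT, "uploads/")],
                check=True,
            )

        if __name__ == "__main__":
            dump_database()
            mirror_uploads()

    Run from cron, this covers the two things the question says must survive (the images and the database); a full disk image of the SSD would be a separate, optional step.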

    Read the article

  • File upload-download in its actual format.

    - by ras
    Hi, I have to write code to upload/download a file to/from a remote machine. But when I upload the file, newlines are not preserved and some binary characters are inserted automatically. Also, I'm not able to save the file in its actual format; I have to save it as "filename.ser". I'm using Java's serialization/deserialization concept. Thanks in advance.

    Read the article

  • Get methods params type parsing wsdl file in a rails/ruby application

    - by Marco Sangiorgi
    Hi, I have a question about Ruby and WSDL/SOAP. I couldn't find a way to get each method's params and their types. For example, if I find out (using wsdlDriver) that a SOAP service has a method called "get_user_information", is there a way to know whether this method requires params and what type of params it requires (int, string, complex type, etc.)? I'd like to be able to build HTML forms from a remote WSDL for each method... Sorry for my horrible English :D

    Read the article

  • .NET Based Radio Automation

    - by Brent Pabst
    I'm curious whether anyone has seen an open-source radio automation package built on .NET (I found one in Russian on CodePlex). In addition, if I wanted to build something like this in a client-server environment, are WCF and WPF the best way to do it? Is it fast enough to trigger songs to play/encode on the server from a remote WPF client? These are somewhat vague questions, but I wanted to get some community feedback.

    Read the article

  • Why does OpenID look so hard to implement?

    - by user198729
    I read through this post: http://stackoverflow.com/questions/741345/how-do-i-implement-direct-identity-based-openid-authentication-with-zend-openid Why does it look so complicated to implement? IMO, it's just sending a request to a remote site and retrieving the response. What problems are those OpenID libraries actually dealing with?

    Read the article

  • Connection to mysql works through php but not through a form submission?

    - by Legend
    I have a download.php file that connects to a remote MySQL database. If I run it using "php download.php" it works fine. But if I create another PHP file, form.php, and submit the form to download.php, it fails with the following error:

        Can't connect to MySQL server on '<IP_ADDRESS>' (13)

    Does anyone know why this might be happening? I can't see a reason why this works directly but fails on form submission...

    Read the article

  • Efficient mirroring of directories using hardlinks

    - by zoqaeski
    I'm backing up my music collection onto a number of NTFS-formatted external hard drives. I store my main collection in FLAC but keep the library on my laptop as MP3s to save space, and I want to be able to back up both sets, because mass conversion between formats is time-consuming. The "music" directory can contain any format; the "mp3s" directory contains only MP3s converted from files in the "music" directory. The music collection on the laptop contains only MP3s, but they come from both sources. When I back up my laptop's library to the "mp3s" directory, I want to copy across only those MP3 files that don't exist in the "music" directory; those that do should be hard-linked to the "music" directory. All directories have an identical hierarchy, sorted by artist, album, date, disc number if applicable, etc., and I use a tagging editor to ensure consistency across all these locations. I'm also using a Linux computer, but I keep the music collections on NTFS-formatted partitions so that they are readable by both Linux and Windows. At the moment I use the following command to perform the backups, but it is time-consuming due to the expensive nature of finding hard links:

        rsync -avu --progress --relative --ignore-existing --link-dest=../music/ **/*.mp3 /media/ntfspocket/mp3s

    Is there a way to perform this backup more efficiently, taking advantage of the directory hierarchy?
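
    For illustration, a minimal sketch of the per-file rule described above (copy laptop-only MP3s, hard-link the ones that already exist in "music"), written in Python instead of rsync; the three root paths are placeholders, and it assumes the destination filesystem supports hard links (ntfs-3g does):

        #!/usr/bin/env python3
        """Mirror a laptop MP3 library into mp3s/, hard-linking files that already
        exist in music/ and copying only the ones that don't. Assumes the directory
        hierarchies are identical, as the question states."""
        import os
        import shutil

        LAPTOP_ROOT = "/home/me/laptop-music"        # source: MP3s only
        MUSIC_ROOT = "/media/ntfspocket/music"       # master collection (FLAC + MP3)
        MP3S_ROOT = "/media/ntfspocket/mp3s"         # destination for this backup

        for dirpath, _dirnames, filenames in os.walk(LAPTOP_ROOT):
            rel_dir = os.path.relpath(dirpath, LAPTOP_ROOT)
            os.makedirs(os.path.join(MP3S_ROOT, rel_dir), exist_ok=True)
            for name in filenames:
                if not name.lower().endswith(".mp3"):
                    continue
                rel_path = os.path.join(rel_dir, name)
                dest = os.path.join(MP3S_ROOT, rel_path)
                if os.path.exists(dest):
                    continue                          # already backed up
                master_copy = os.path.join(MUSIC_ROOT, rel_path)
                if os.path.exists(master_copy):
                    # Same track already lives in music/: hard-link instead of copying.
                    os.link(master_copy, dest)
                else:
                    # Laptop-only MP3: copy it across.
                    shutil.copy2(os.path.join(LAPTOP_ROOT, rel_path), dest)

    Because the hierarchy is identical everywhere, each file needs only a single existence check in "music", rather than the broader hard-link search that makes the --link-dest run slow.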

    Read the article

  • What is the best way of testing XML responses?

    - by user303396
    I'm using Selenium IDE to test my web pages, but unfortunately I cannot use it to test pages that return an XML response. Some people use Selenium Remote Control, others use modules like WWW::Mechanize with Test::XML or Test::XPath. What is the best way to test the XML responses?
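
    As an illustration of the non-Selenium route (fetch the XML over plain HTTP and assert on it with XPath), here is a minimal sketch in Python's unittest rather than the Perl modules mentioned; the endpoint URL and the expected element are made up for the example:

        """Sketch: test an XML endpoint directly, without driving a browser."""
        import unittest
        import urllib.request
        import xml.etree.ElementTree as ET

        ENDPOINT = "http://localhost:8080/api/users.xml"   # hypothetical endpoint

        class XmlResponseTest(unittest.TestCase):
            def test_users_feed_contains_a_user_id(self):
                with urllib.request.urlopen(ENDPOINT) as resp:
                    self.assertEqual(resp.status, 200)
                    self.assertIn("xml", resp.headers.get("Content-Type", ""))
                    root = ET.fromstring(resp.read())
                # ElementTree supports a limited XPath subset; this looks for any
                # <user> element with an <id> child anywhere in the document.
                self.assertIsNotNone(root.find(".//user/id"))

        if __name__ == "__main__":
            unittest.main()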

    Read the article

  • R language: open ssh connection

    - by marpo_it
    I am trying to open a remote shell via SSH so I can send commands from R. As I send commands, I need to get the results back and send new commands that depend on the output of the previous ones. For this reason I am looking for a way to open a connection from within the R code and keep it open until I have finished. I also need to open the connection with SSH key exchange (so without password authentication). Looking at CRAN, I didn't find anything useful.

    Read the article

  • Custom attributes in Active Directory - determining usage/function and possible removal options?

    - by HopelessN00b
    I've bumped into a highly customized Active Directory environment (2003 FL) that's got me wondering whether there's any particularly easy way to figure out what a custom attribute's function is and what, if anything, is "using" that particular attribute - and then what some good options for removing custom attributes from the schema might be, aside from a restore or starting from scratch, if such an option even exists. For example, I think I can be fairly certain what the "isDumbass" attribute with a value of TRUE means, but not so much with "IRPextCONST", containing a value of 393684. Likewise, I'd think it should be pretty safe to delete the "isDumbass" attribute, but would like to a) be sure and b) find out what's querying or updating that value anyway, because I suspect that anything using that attribute might be next on the list of things to remove. Ideally, without having to run a search on the contents of every custom script and bit of source code I can get my hands on, of course. And finally, aside from rebuilding from scratch, or doing an authoritative AD restore from backups that don't exist... is there a way to delete a given custom attribute? (Not blank the value, but actually delete the attribute from the schema - some folks would rather not have attributes like "FaggotMeter" and "DouchebagCounter" hanging around.) I've been able to find and successfully test a method on Windows 2000, but it seems Microsoft disabled this option in SP4, and the domain in question is at the 2003 functional level.

    Read the article

  • Writing text file on local server MVC 2.0

    - by Liado
    Hi, I'm trying to write a text file on a remote server, using the following code:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Index(UserModels model)
        {
            if (!ModelState.IsValid)
            {
                return View("Index");
            }
            try
            {
                using (StreamWriter w = new StreamWriter(Server.MapPath(TEXT_FILE_NAME), true))
                {
                    w.WriteLine(model.Email.ToString()); // Write the text
                }
            }
            catch { }

    The folder is still empty. Can someone help? What could be the problem? Thanks

    Read the article

  • How to perform an action when file changed?

    - by ZeissS
    Hi, I want to create a script that checks a URL and performs an action (download + unzip) when the "Last-Modified" header of the remote file changes. I thought about fetching the header with curl, but then I have to store it somewhere for each file and perform a date comparison. Does anyone have a different idea using (mostly) standard Unix tools? Thanks
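
    For reference, a minimal sketch of the store-and-compare approach described above, written in Python rather than curl plus shell; the URL, state-file name, and extraction directory are placeholders:

        """Sketch: re-download and unzip a remote file only when Last-Modified changes."""
        import pathlib
        import urllib.request
        import zipfile

        URL = "http://example.com/data.zip"          # placeholder URL
        STATE_FILE = pathlib.Path("data.last-modified")
        ARCHIVE = pathlib.Path("data.zip")

        def remote_last_modified(url: str) -> str:
            # A HEAD request is enough to read the headers without downloading the body.
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req) as resp:
                return resp.headers.get("Last-Modified", "")

        def main() -> None:
            current = remote_last_modified(URL)
            previous = STATE_FILE.read_text() if STATE_FILE.exists() else ""
            if current and current == previous:
                return                                # nothing changed, nothing to do
            urllib.request.urlretrieve(URL, ARCHIVE)  # download the new version
            with zipfile.ZipFile(ARCHIVE) as zf:
                zf.extractall("data")                 # unzip next to the script
            STATE_FILE.write_text(current)

        if __name__ == "__main__":
            main()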

    Read the article

  • not in gzip format error

    - by Ravindra
    While installing any gem, or listing gems, a gzip-related error comes up as shown below:

        C:\Documents and Settings\gangunrag>gem install rhosync -v 2.0.0.beta7 --pre
        ERROR:  While executing gem ... (Zlib::GzipFile::Error)
            not in gzip format

        C:\Documents and Settings\gangunrag>gem list rails -r

        *** REMOTE GEMS ***

        ERROR:  While executing gem ... (Zlib::GzipFile::Error)
            not in gzip format

    Please help me out with how to resolve this.

    Read the article

  • php: how do i store an array in a file to access as an array later with php?

    - by Haroldo
    I just want to quickly store an array which I get from a remote API, so that I can mess around with it on a local host. So: I currently have an array, and I want people to be able to use the array without having to get it from the API. There's no need for efficiency here; this isn't for an actual site, just for writing some sanitizing/formatting methods. Is there a function like store_array() / restore_array()?
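
    The question is about PHP, but for illustration here is the same store/restore idea sketched in Python using a JSON file; the file name and the sample payload are made up:

        """Sketch: stash a structure fetched from an API in a file and load it back later."""
        import json

        def store_array(data, path="api_snapshot.json"):
            # Serialise whatever the API returned (lists/dicts of plain values) to disk.
            with open(path, "w") as f:
                json.dump(data, f, indent=2)

        def restore_array(path="api_snapshot.json"):
            # Read the snapshot back so the API never has to be hit again locally.
            with open(path) as f:
                return json.load(f)

        if __name__ == "__main__":
            store_array({"users": [{"id": 1, "name": "example"}]})   # made-up API payload
            print(restore_array())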

    Read the article

  • Magento order status change events

    - by Christian
    Hi people, I want to update a remote inventory via a web service. I know that the Event/Observer method can trigger my code, but I don't know which event is useful to complete my task (something like on_order_complete). Is there an updated list of events, or more documentation?

    Read the article

  • Client/Server communication via internet

    - by user957829
    Hi, which is the best solution for bidirectional communication between a remote server and a client behind a home internet box?

        1. UPnP with sockets.
        2. HTTPS/database: the client makes one request every X seconds to check whether there is new data.
        3. The client opens one connection to the server and keeps it open as a tunnel.

    Thanks in advance for your help.
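
    For illustration, a minimal sketch of option 2 (the polling approach) in Python; the endpoint URL and the JSON response shape are assumptions, not anything from the question:

        """Sketch of option 2: the client polls an HTTPS endpoint every X seconds."""
        import json
        import time
        import urllib.request

        ENDPOINT = "https://example.com/api/pending-messages"   # hypothetical endpoint
        POLL_INTERVAL_SECONDS = 10                               # the "X" in the question

        def poll_once() -> list:
            # The server is assumed to answer with a JSON list of new messages,
            # empty when nothing has happened since the last poll.
            with urllib.request.urlopen(ENDPOINT, timeout=30) as resp:
                return json.loads(resp.read())

        def main() -> None:
            while True:
                for message in poll_once():
                    print("new data from server:", message)
                time.sleep(POLL_INTERVAL_SECONDS)

        if __name__ == "__main__":
            main()

    Polling is the simplest of the three because the home box only ever sees outbound HTTPS, but it trades latency for that simplicity; options 1 and 3 avoid the polling delay at the cost of port mapping or a connection that must be kept alive.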

    Read the article

  • Not sure what happens to my app's objects when using NSURLSession in the background - what state is my app in?

    - by Avner Barr
    More of a general question - I don't understand the workings of NSURLSession when using it in "background session mode". I will supply some simple, contrived example code.

    I have a database which holds objects, such that portions of this data can be uploaded to a remote server. It is important to know which data/objects were uploaded in order to accurately display information to the user. It is also important to be able to upload to the server in a background task, because the app can be killed at any point. For instance, a simple profile picture object:

        @interface ProfilePicture : NSObject
        @property int userId;
        @property UIImage *profilePicture;
        // We want to know if the image was uploaded to our server - this could also be
        // a property that is queryable, but let's assume it is attached to this object.
        @property BOOL successfullyUploaded;
        @end

    Now let's say I want to upload the profile picture to a remote server. I could do something like:

        @implementation ProfilePictureUploader

        - (void)uploadProfilePicture:(ProfilePicture *)profilePicture
                          completion:(void (^)(BOOL successInUploading))completion
        {
            NSURLSession *uploadImageSession = ..... // code to set up uploading the image - and calling the completion handler
            [uploadImageSession resume];
        }

        @end

    Now somewhere else in my code I want to upload the profile picture, and if it was successful, update the UI and the database to reflect that this happened:

        ProfilePicture *aNewProfilePicture = ...;
        aNewProfilePicture.profilePicture = aImage;
        aNewProfilePicture.userId = 123;
        aNewProfilePicture.successfullyUploaded = NO;

        // write the change to disk
        [MyDatabase write:aNewProfilePicture];

        // upload the image to the server
        ProfilePictureUploader *uploader = [ProfilePictureUploader ....];
        [uploader uploadProfilePicture:aNewProfilePicture completion:^(BOOL successInUploading) {
            if (successInUploading) {
                // persist the change to my db.
                aNewProfilePicture.successfullyUploaded = YES;
                [MyDatabase update:aNewProfilePicture]; // persist the change
            }
        }];

    Now obviously if my app keeps running, this ProfilePicture object is successfully uploaded and all is well - the database object has its own internal workings with data structures/caches and whatnot. All callbacks that may exist are maintained and the app state is straightforward. But I'm not clear on what happens if the app dies at some point during the upload. It seems that any callbacks/notifications are dead. According to the API documentation, the uploading is handled by a separate process, therefore the upload will continue and my app will be awakened at some point in the future to handle completion. But the object aNewProfilePicture is nonexistent at that point and all callbacks/objects are gone. I don't understand what context exists at this point. How am I supposed to ensure consistency in my DB and UI (for instance, update the successfullyUploaded property for that user)? Do I need to rework everything touching the DB or UI to correspond with the new API and work in a context-free environment?

    Read the article

  • How to use git to manage one codebase but have different environments

    - by emostar
    I'm using git for a personal project at the moment and have run into the problem of having one codebase for two different environments, and I was wondering what the cleanest way to use git would be.

    Main Desktop: I use this machine for most of my development. I have a git repository here that I cloned off an empty repository on my internal server. I do most of my work here and push back to the internal server so I can use that as the master of truth and to ease making backups.

    Laptop: I sometimes want to code on the road, so I cloned from the internal server and created a new branch called "laptop-branch". Unfortunately, some directories and the MSVC++ version differ from the Main Desktop environment, so I just modified those files in "laptop-branch" and committed them there.

    Now I made a lot of changes while on vacation with my laptop and want to push them to origin, but I don't want the changes related to directories and compiler versions to be pushed back to origin. What would be the best way to get this done?

    Read the article

  • Identifying Exchange 2010 regular process that is walking the mailbox database

    - by toongeneral
    I have an Exchange 2010 server running on a SAN-backed platform. The platform does block-level backups on a snapshot/incremental basis, capturing only changed data. I was surprised to see a regular period of time where data changes were happening at a high, sustained rate. Due to the way this system works, that can lead to 1.2 TB of stored data per month. The regularity implied a scheduled task, but it is not a fixed interval: it is approximately every 26-32 hrs. The disks were performing read operations at ~5 MB/s and write operations at ~4.5 MB/s for a period of 3-4 hrs, and the total written data was ~55-60 GB. Reading on TechNet, I am wondering if the following is causing this: http://blogs.technet.com/b/exchange/archive/2011/12/14/database-maintenance-in-exchange-2010.aspx#checksumming The somewhat restrictive thing is that the process only happens at most once every 24 hours. I was able to investigate while it was running, finding the following:

        - the process is store.exe
        - it is working on the mailbox database files
        - while running, it is generating .log files (in the mailbox database folder) consistent with database changes
        - the mailbox database is ~60 GB in size, which fits with the total data changes on each iteration

    I have currently switched to a fixed maintenance window, as a test. It's not clear whether this is the cause; the symptoms fit, but are not conclusive. Does anyone have any suggestions for additional troubleshooting?

    Read the article

  • How can records be deleted without activating the delete trigger?

    - by Servaas Phlips
    Hello there, for about a month we have been experiencing records that disappear from our database without any reason. (Part of) our database structure is at http://i.imgur.com/i15nG.png Users and credentials should never be deleted, yet thanks to our backups we noticed that users have unfortunately disappeared from the database. The users and credentials that disappear appear to be completely random. In order to find out which application deletes these records, we created triggers with the following checks:

        CREATE TRIGGER Credential_SoftDelete ON [Credential]
        INSTEAD OF DELETE
        AS
            DECLARE @message nvarchar(255)
            DECLARE @hostName nvarchar(30)
            DECLARE @loginName nvarchar(30)
            DECLARE @deletedId nvarchar(30)

            SELECT @deletedId = credentialid FROM deleted;

            SELECT @hostName = host_name, @loginName = login_name
            FROM sys.dm_exec_sessions
            WHERE session_id = @@SPID;

            SELECT @message = '[FAULT] Credential : ' + USER_NAME() + ' deleted ' + @deletedId
                            + ' on ' + @@SERVERNAME + ' from [' + @hostName + ' by ' + @loginName;

            EXEC xp_logevent 50001, @message, ERROR
        GO

    After we added this trigger, we hoped to find out which application deletes these credentials by searching the log files. Unfortunately the credentials are still being deleted, and the Credential_SoftDelete trigger is never logged. I did try running a delete on the database where the trigger is installed and where the users have disappeared:

        DELETE FROM [User] WHERE userid = 296

    The trigger prevented deletion of this user and also logged it in the event log. This was on exactly the same database where the users disappeared (so no test copy or anything like that). Please note that we also use replication; the type of replication we use is merge replication. How is this possible? Can the fact that we use replication on this database be the cause of this problem?

    Read the article

  • Best format for hard drive for Windows and Mac?

    - by Neil
    I have a 500 GB USB external hard drive. I need four partitions on it, for the following purposes:

        - 160 GB for a bootable backup of my Mac
        - 160 GB for a bootable backup of my Windows
        - 11 GB for a bootable Snow Leopard install disk
        - the rest for file storage

    Now I need a partition table which will be recognised on both Windows and Mac without needing extra software on Windows, which will let me keep bootable copies of both OSes but still let me access the file storage from both OSes. Currently I have a GUID Partition Table, with Mac OS Extended (Journaled) partitions for the two backups, Mac OS Extended for the install disk, and NTFS for the file storage. While this is recognised perfectly on my Mac, thanks to an NTFS for Mac driver from Paragon, when connected to Windows the drive is detected by the machine (listed in Safely Remove USB) but not recognised in Windows Explorer unless I install MacDrive, which is not feasible on public Windows machines I might want to access my storage area on. Can someone recommend the best combination of formats and software/drivers to get this done seamlessly?

    Read the article

  • error when uploading with Git

    - by user560831
    I am new to GitHub and was able to successfully create an SSH key and upload it to the website. However, when I type in "git push origin master" I receive the following error:

        error: cannot run ssh: no such file or directory
        fatal: unable to fork

    I am using Cygwin on a Windows Vista machine, if that is also useful. OK... after installing OpenSSH I now get the error:

        Permission denied (publickey)
        fatal: the remote end hung up unexpectedly

    Read the article
