Search Results

Search found 14013 results on 561 pages for 'remote backup'.

Page 60/561 | < Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >

  • SQL Server 2005 remote connection problem, cannot solve it, help please, thank you

    - by user287745
    Note: if this question does not fit this site, please do not just close it; redirect it to the appropriate sister site, thank you. The steps taken and the error are described below; please help, I am stuck! I installed SQL Server 2005 Express on both computers and installed SQL Server Management Studio Express on both computers. I ran Management Studio on each machine and connected to the local instance using Windows authentication (one computer's connection, for example, is "A-63A9D4D7E7834\SQLEXPRESS"). I created a database named "test1", created a few tables with data, saved and exited. I did everything described in "How to configure SQL Server 2005 to allow remote connections" (http://support.microsoft.com/kb/914277/en-us), and I have also disabled the firewalls completely. Connecting from one computer to the other: I started SQL Server Management Studio Express on computer A-63A9D4D7E7834, entered server name "ALL-E425BE6C41D\SQLEXPRESS" and authentication "Windows Authentication", and clicked Connect. I get the following error: Cannot connect to ALL-E425BE6C41D\SQLEXPRESS. ADDITIONAL INFORMATION: Login failed for user 'ALL-E425BE6C41D\Guest'. (Microsoft SQL Server, Error: 18456) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=18456&LinkId=20476
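
    A likely culprit, given the 'Guest' in the error: the two machines are not in a domain, so Windows authentication maps the remote user to the Guest account, which has no SQL Server login. A minimal sketch of one common fix, assuming the instance is switched to mixed-mode authentication; the login name and password are illustrative:

        -- run on ALL-E425BE6C41D\SQLEXPRESS after enabling mixed-mode authentication
        CREATE LOGIN test_user WITH PASSWORD = 'StrongP@ssw0rd!';
        USE test1;
        CREATE USER test_user FOR LOGIN test_user;
        EXEC sp_addrolemember 'db_datareader', 'test_user';

    You would then connect from the other machine with SQL Server authentication rather than Windows authentication.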

    Read the article

  • jQuery Validation using remote option posts empty data to web service

    - by user319721
    I'm using the jQuery Validation plugin with the remote option to call my web service and check whether a company name exists. The web service only accepts JSON data. I pass the data to the web service from the Company input field in my form as follows: data: "{'company': '" + $('#Company').val() + "'}" But this always posts a blank value for company, so the data sent is {'company':''}, i.e. correct JSON but missing the Company input field's value. Can anyone shed some light on why I always get a blank value here? Thanks for the help, Ciaran
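
    A sketch of the likely cause and fix (assuming the jQuery Validation plugin's documented remote option; the endpoint URL is illustrative): the data string above is built once, when validate() runs at page load, while #Company is still empty. Function values in the data object are evaluated at request time instead:

        rules: {
            Company: {
                remote: {
                    url: '/MyService.asmx/CheckCompany',  // hypothetical endpoint
                    type: 'POST',
                    data: {
                        company: function () {
                            return $('#Company').val();   // read when validation fires
                        }
                    }
                }
            }
        }

    If the service strictly requires a raw JSON body, that string likewise has to be assembled inside a function (e.g. with JSON.stringify) so it picks up the field's current value.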

    Read the article

  • Remote stream multiple files in SOLR

    - by Mark
    I want to use Solr's remote-streaming facility to extract and index the content of files. This works fine if I pass stream.file=xxx as a parameter in an HTTP GET. However, I have a lot of these files and want to batch them up (i.e. not issue one GET per file). Is there a way to do this in Solr? For example, I'd like to be able to POST some XML like this:

        <add>
          <doc stream_file="filename">
            <field name="id">123</field>
          </doc>
          <doc>...

    Read the article

  • Remote Seam Persistence

    Hi. I have a button in a .xhtml file which calls a JavaScript function, which in turn calls a Java function remotely (in a JBoss Seam environment). That Java function calls entityManager.persist(object). Do you know why this line of code doesn't commit to the DB? It says that a transaction hasn't started. I suppose that in a remote context no transaction has begun, because if I put an action on that button which calls the same Java function directly, instead of using JavaScript as above, it works fine: entityManager persists the object and I can see it in the DB. Does anyone have any ideas how I could actually persist the object while using JavaScript to call the Java function? (I have to use JavaScript because I need the callback function.)
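
    A sketch of one common Seam 2.x arrangement (assuming Seam Remoting; the component, method and entity names are illustrative): expose the method with @WebRemote and mark it @Transactional so persist() runs inside a transaction even when invoked from JavaScript:

        import javax.persistence.EntityManager;
        import org.jboss.seam.annotations.In;
        import org.jboss.seam.annotations.Name;
        import org.jboss.seam.annotations.Transactional;
        import org.jboss.seam.annotations.remoting.WebRemote;

        @Name("objectSaver")
        public class ObjectSaver {
            @In
            private EntityManager entityManager;

            @WebRemote       // callable from JavaScript via Seam Remoting
            @Transactional   // begin/commit a transaction around the call
            public void save(MyObject object) {  // MyObject is an illustrative entity
                entityManager.persist(object);
            }
        }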

    Read the article

  • Open Folder within ClearCase Remote Client using Windows Explorer

    - by sammy
    Is there a way to open the folder location of a file from within CCRC? I know I can open/copy directly within CCRC, but it is often useful to work with the file directly in Windows Explorer. I am looking for something like "open file location" or "open in Windows Explorer". The folder within CCRC does not appear to allow opening it directly, as the double-click action just expands the tree listing. The path is listed (and copyable) in the "ClearCase Details" tab, but I am trying to take my laziness to a whole new level by opening the folder with a single click. Any idea whether such a feature is available and where I can find it? Thanks. Info: Rational ClearCase Remote Client 7.1.1, Windows 7

    Read the article

  • form_for [@parent, @son], :remote => true not asking for JS

    - by Cibernox
    Hi. I have a plain old form, used to create new objects of a nested model.

        # restaurant.rb
        has_many :courses
        # courses.rb
        belongs_to :restaurant
        # routes.rb
        resources :restaurants do
          resources :courses
        end

    In my views (in haml) I have this code:

        %li.course{'data-random' => random}
          = form_for([restaurant, course], :remote => true) do |f|
            .name= f.text_field :name, :placeholder => 'Name here'
            .cat= f.hidden_field :category
            .price= f.text_field :price, :placeholder => 'Price here'
            .save
              = hidden_field_tag :random, random
              = f.submit "Save"

    I expected that form to be answered by the create action of courses_controller with JS (create.js.erb), but it is submitted like a normal form and answered with HTML. What am I doing wrong? This problem is similar to this one, but the only answer doesn't make sense to me. Thanks
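
    A sketch of the usual checklist for this symptom (assuming Rails 3 with the bundled UJS driver; not from the original thread): :remote => true only adds data-remote="true" to the form tag, and it is the UJS JavaScript that upgrades the submit to XHR, so the driver must be loaded and the action must respond to JS:

        # app/views/layouts/application.html.haml must load the UJS driver:
        #   = javascript_include_tag :defaults
        #   = csrf_meta_tag

        # app/controllers/courses_controller.rb
        class CoursesController < ApplicationController
          def create
            @restaurant = Restaurant.find(params[:restaurant_id])
            @course = @restaurant.courses.create(params[:course])
            respond_to do |format|
              format.js    # renders create.js.erb
              format.html { redirect_to @restaurant }
            end
          end
        end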

    Read the article

  • PowerShell - remote folder availability while counting files

    - by ziklop
    I'm trying to write a PowerShell script that reports whether there is a file older than x minutes in a remote folder. I have this:

        $strfolder = 'folder1 ..................'
        $pocet = (Get-ChildItem \\server1\edi1\folder1\*.*) |
            Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-0).AddHours(-0).AddMinutes(-20) } |
            Measure-Object
        if ($pocet.Count -eq 0) { Write-Host $strfolder "OK" -ForegroundColor Green }
        else { Write-Host $strfolder "ERROR" -ForegroundColor Red }

    But there is one huge problem. The folder is often unavailable because of high load, and I found out that when there is no connection the script doesn't report an error but continues with zero in $pocet.Count. That means it reports everything is OK when the folder is unavailable. I was thinking about using if (Test-Path ...), but what if the folder becomes unavailable just after Test-Path passes? Does anyone have a solution? Thank you in advance.
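
    One way around both problems (a sketch, not from the thread): make Get-ChildItem itself fail loudly when the share is unreachable. With -ErrorAction Stop the availability check and the enumeration are the same call, which also closes the Test-Path race:

        $strfolder = 'folder1'
        try {
            $stale = Get-ChildItem \\server1\edi1\folder1\*.* -ErrorAction Stop |
                Where-Object { $_.LastWriteTime -lt (Get-Date).AddMinutes(-20) }
            if (@($stale).Count -eq 0) { Write-Host $strfolder "OK" -ForegroundColor Green }
            else { Write-Host $strfolder "ERROR" -ForegroundColor Red }
        }
        catch {
            Write-Host $strfolder "UNAVAILABLE: $_" -ForegroundColor Red
        }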

    Read the article

  • Git: How do I rewind the Master branch on the remote origin

    - by user277260
    I made 5 commits to the Master branch while bug hunting on a private project and pushed them to the remote origin (my own private VPS). Then I saw that commits 4 and 5 were going to cause trouble elsewhere and needed to be undone, so I checked out commit 3 again, made a new branch "Dev" from that point, and did a few more commits fixing the issue properly. Then I did git reset --hard HEAD~2 on Master to pull it back to the point where I branched Dev. Then I did git merge to fast-forward Master to the end of the Dev branch. So now I have a local repository with Dev and Master both pointing to the same, up-to-date version of the project with the latest bug fix. The problem is that when I try to push the project now to the origin, it fails and gives me an error message:

        ! [rejected]        master -> master (non-fast forward)
        error: failed to push some refs to 'myserver...myproject.git'

    What have I done wrong, and how do I fix it? Thanks
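
    Nothing is wrong locally; the remote master still contains the two bad commits, so your rewound branch is no longer a descendant of the remote tip and the push is rejected by design. A sketch of the usual fix, assuming nobody else has pulled those commits:

        git push --force origin master
        # on newer Git versions, the safer variant refuses if the remote
        # moved since your last fetch:
        git push --force-with-lease origin master

    Anyone else who already fetched the old master will then need to reset their local branch to the rewritten remote.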

    Read the article

  • Drupal install on remote MySQL

    - by user1448660
    I am trying to install Drupal using a remote MySQL server. I have created the user in MySQL and granted the privileges. I am able to connect from my web server on the command line like this: "mysql -u xxxx -h 10.xxx.yy.zz3 -p". But when I try to install Drupal I get "SQLSTATE[28000] [1045] Access denied for user 'xxxx'@'localhost'". I have granted the privileges for "xxxx"@"10.xxx.yy.zz3", but Drupal appends localhost instead of the IP to the user name. I have changed settings.php to the MySQL server's IP. What am I missing?
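
    The 'localhost' in the error means Drupal is still opening a local socket connection, i.e. the host value is not being picked up. A sketch of what the settings.php entry should look like (Drupal 7 array syntax assumed; all values illustrative):

        $databases['default']['default'] = array(
          'driver'   => 'mysql',
          'database' => 'drupal',
          'username' => 'xxxx',
          'password' => 'secret',
          'host'     => '10.xxx.yy.zz3',  // an IP forces TCP; 'localhost' means a local socket
          'port'     => '3306',
        );

    On Drupal 6 the equivalent setting is the $db_url string, where the host goes after the '@'.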

    Read the article

  • Uploading to a remote server periodically?

    - by user1048138
    I have been working on an app that takes screenshots, kind of like http://puush.me/. However, I would like to be able to upload the screenshots to a remote server. What protocols can I use to do so? It needs to be cross-platform and secure. I know that SSH, SFTP and FTP are options; however, they all require logins that I don't want to provide to the end user. Nor do I want to sign a key for them, as it would still allow their machines to log in remotely.
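
    One common pattern that avoids shipping any shell-capable credentials (a sketch in Python; the endpoint URL, token and response shape are illustrative): have the app POST each screenshot over HTTPS to a small upload endpoint that authenticates with a per-install token, which can be revoked server-side without ever exposing a login:

        import requests

        def upload_screenshot(path):
            with open(path, "rb") as f:
                resp = requests.post(
                    "https://uploads.example.com/api/screenshot",
                    headers={"Authorization": "Bearer PER_INSTALL_TOKEN"},
                    files={"file": f},      # multipart/form-data upload
                    timeout=30,
                )
            resp.raise_for_status()
            return resp.json()["url"]       # assumed response shape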

    Read the article

  • How to restore a slave from a mysql backup?

    - by robsf
    I'm running MySQL 5.1. I have a Master and a Slave on two machines and I have set up replication. I do periodic backups on my slave server: I stop mysql, copy all the files, and restart mysql. If I lose the Master, I can set up a new one from the last backup. What if I lose the Slave? Can I restart the slave from the last backup? Am I supposed to keep track of the replication position every time I take a backup?
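
    In principle yes, and recording the replication position is the key part (a sketch, not from the thread): while the slave is stopped for the file copy, note how far it has executed on the master, and after restoring the files point replication back at that spot. The host, log file and position are illustrative:

        -- on the slave, while taking the backup:
        STOP SLAVE;
        SHOW SLAVE STATUS\G
        -- record Relay_Master_Log_File and Exec_Master_Log_Pos,
        -- copy the data files, then START SLAVE;

        -- on a replacement slave restored from those files:
        CHANGE MASTER TO
          MASTER_HOST     = 'master.example.com',
          MASTER_LOG_FILE = 'mysql-bin.000042',
          MASTER_LOG_POS  = 12345;
        START SLAVE;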

    Read the article

  • Help to argue why to develop software on a physical computer rather than via a remote desktop

    - by s5804
    Remote desktops are great, many times a blessing, and cost effective (compared to leasing expensive lines). I am not arguing against remote desktops; just that if one has the choice between a remote desktop and a physical computer, I would choose the latter. Also note that I am not arguing for or against remote work practices; in my case I am required to be physically present in the office when developing software. Background: I work in a company whose main business is not software development, so the company IT policies are mainly focused on security and on efficiently deploying and maintaining thousands of computers for users. Further, the typical employee runs typical office applications, like a word processor. Because safety/stability is such a big priority, every non-production system or application must be deployed into a physically separate network, called the test network. Software development, of course, also belongs in the test network. To access the test network the company has created a standard policy, which dictates that access shall go only via a remote desktop client: practically, from one's production computer one opens a remote desktop client to a virtual computer located in the test network. On that virtual computer's remote desktop one can access, run, and install all development tools, like the Eclipse IDE. The other solution is a dedicated physical computer that is physically connected only to the test network. Both solutions are available in the company. I have tested both approaches and found running Eclipse IDE or SQL Developer in the remote desktop client to be sluggish (keystrokes are delayed), commands like Alt-Tab take me out of the remote client, and so on. Further, screen resolution and colors are different, just to mention a few issues. So there is nothing technically wrong with the remote client; it is just not optimal and, frankly, demotivating. Now, with the new policies put in place, the plan is to remove the physical computers connected to the test network. I am looking for help to argue why software developers should have a dedicated physical development computer, to be productive and cost effective. Remember that we are physically in the office. Note also that we are talking about approximately 50 computers out of 2,000 employees, so the extra budget is relatively small; this is more about policy than cost. Please note that there are lots of similar setups in other companies that work great thanks to perfectly tuned systems. In my case, however, it is sluggish, and it would cost more money to troubleshoot and fine-tune the performance than to keep a few physical computers. As a business case we have argued that productivity will go down by 25%, though my feeling is that the reality is probably closer to 50%. This business case isn't really accepted, and I find it very difficult to defend to managers who have never used a rich IDE in their lives, never mind developed software. Further, the test network and remote client have no guaranteed service level, so they are down for a few hours per month with the lowest priority on the fix list. Help is appreciated.

    Read the article

  • MySQL Open Source Backup and Recovery Alternative: Xtrabackup

    MySQL database administrators are always looking for a solid backup and recovery tool that will suit all their needs. Xtrabackup, created by Percona, is the open source alternative to the commercial InnoDB Hot Backup tool. This article explains a good methodology for testing and verifying Xtrabackup's capabilities and precision.

    Read the article

  • Problem restoring from tar backup: why are there /dev/disk/by-id/ symlinks and how can I avoid them?

    - by SK.
    Hello, I'm trying to make a bare-bones backup system with the most basic tools available on openSUSE 11.3 (in this case: bash, fdisk, tar and GRUB legacy). Here's the workflow for my scripts. backup.sh (run from an external system, e.g. a LiveCD): make an fdisk script ($fscript) from the output of fdisk -l [works]; mount the partitions from the system's fstab [works]; tar the crucial stuff into file.tgz [works]. restore.sh (run from an external system, e.g. a LiveCD): run fdisk $dest < $fscript to restore the partitioning [works]; format and mount the partitions from the system's fstab [fails]; extract from file.tgz [works when mounting manually]; restore GRUB [fails]. I have recently noticed that openSUSE (though I'm sure it has nothing to do with the distro) writes different device names in /etc/fstab and /boot/grub/menu.lst; more precisely, the partition name is for example "/dev/disk/by-id/numbers-brandname-morenumbers-part2" instead of "/dev/sda2", but it is basically a simple symlink. My questions about this: what is the point of such symlinks, especially if we're restoring on a different disk? Is there a way to cleanly prevent the creation of those symlinks and use the "true" /dev/sdx everywhere instead? If not, do you know a way to replace those symlinks on the fly in a text file? I tried this script, but it only works if the line starts with the symlink path (the case for fstab, not menu.lst):

        ### search and replace /dev/disk/by-id/... with /dev/sdx
        while read oldVolume rest; do  # get first element, ignore rest of line
            if [[ "$oldVolume" =~ ^/dev/disk/by-id/.*(-part[0-9]*$)? ]]; then
                newVolume=$(readlink $oldVolume)  # replace pointer by pointee, returns "../../sdx"
                echo /dev/${newVolume##*/} $rest >> TMP  # format to "/dev/sdx", write line
            else
                echo $oldVolume $rest >> TMP  # nothing to do
            fi
        done < $file
        mv -f TMP $file  # save changes

    I've had trouble finding a solution to this on Google, so I was hoping some of the members here could help me. Thank you.
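
    For the third question, a sketch of an alternative that also handles menu.lst (assuming $file is the file being rewritten): resolve every by-id link present on the disk and substitute it wherever it occurs in a line, instead of only matching the first field:

        # replace each /dev/disk/by-id/... path with its target, anywhere in the file
        for link in /dev/disk/by-id/*; do
            dev=$(readlink -f "$link")        # e.g. /dev/sda2
            sed -i "s|$link|$dev|g" "$file"   # '|' delimiter avoids escaping the slashes
        done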

    Read the article

  • Replacing Buffalo LinkStations with FreeNAS, overall backup strategy, am I on the right path?

    - by Shreko
    We've been using two Buffalo LinkStations of 320 GB each for a shared directory and employees' server storage (around 20 employees): only documents (Word, Excel, CAD drawings, etc.) and the database backup of the main application server (ERP, accounting). One Buffalo box serves as the main one, located in the server room next to the main application server; the other is located on the opposite side of the building (for fire protection) in a secure storage room and backs up the first one. We also have several external HDs that back up everything from the Buffalo box for offsite backup. After 3.5 years of using these, capacity is the main limitation. I'm planning a replacement and would like to use FreeNAS (we already use m0n0wall with great success). I would like to keep it simple and continue a similar setup, building two low-power boxes with one 2 TB HD each. Is a low-power Atom mobo OK? I'm not sure about the HDs: I've read somebody on this site mentioning the Seagate ES.2 as more reliable and better performing. How would those eco/green drives compare? We've been pretty happy with the speed of the Buffalo boxes and I don't want my users to notice any slowdown. Any suggestions?

    Read the article

  • Can my employer force me to backup my personal machine? [closed]

    - by Eric B
    Here's the background: approximately 1.25 years ago, the company I work for was acquired by a larger, 400-person company. Before the acquisition (and still today) we are all remote employees using our own personal hardware for work-related duties (coding, email, etc). We are approximately 15 employees within the larger organization. Some time after the acquisition, the owning company was slapped with a civil lawsuit. Part of this lawsuit (discovery) requires them to retrieve and store any related information from us. Because we were a separate company up until the acquisition, there is a high probability that our personal machines contain information about what the lawsuit alleges (email, documents, chat logs?, etc). Obviously, this depends largely on the person's job function (engineer vs. customer support vs. CEO). All employees are being required to comply. Since the acquisition (1.25 years), the new company has not provided us with company laptops/desktops. We continue to use personal hardware, licenses, etc. for work. Email is via POP3s and does not hang around on the mail server; it's on everyone's client. Documents are spread across personal machines. So, now they want each of us to back up our complete personal machines. They are allowing us to create a "personal" folder where we can place personal documents; that single folder will be excluded from the backup. Of course, that means a total rearrangement of documents, etc. For most of us, 99% of the data on the machine is NOT related to work. So, what's the consensus? Should we comply? What is their recourse if we do not?

    Read the article

  • git rebase onto remote updates

    - by Blake Chambers
    I work with a small team that uses git for source code management. Recently we have been using topic branches to keep track of features, merging them into master locally and then pushing to a central git repository on a remote server. This works great when no changes have been made in master: I create my topic branch, commit it, merge it into master, then push. Hooray. However, if someone has pushed to origin before I do, my commits are not fast-forward, and a merge commit ensues. This also happens when a topic branch needs to merge with master locally to ensure my changes work with the current code. So we end up with merge commits everywhere and a git log rivaling a friendship bracelet. Rebasing is the obvious choice. What I would like is to: create topic branches holding several commits; checkout master and pull (fast-forward, because I haven't committed to master); rebase the topic branches onto the new head of master (so the topics start at master's head), bringing master up to my topic head. My current way of doing this is listed below:

        git checkout master
        git rebase master topic_1
        git rebase topic_1 topic_2
        git checkout master
        git rebase topic_2
        git branch -d topic_1 topic_2

    Is there a faster way to do this?
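
    A sketch of a slightly shorter equivalent (assuming topic_2 is stacked on topic_1, as in the commands above): the day-to-day saving is git pull --rebase, which keeps merge commits out of master in the first place, and the final step is an explicit fast-forward merge:

        git checkout master
        git pull --rebase              # update master without a merge commit
        git rebase master topic_1      # replay topic_1 onto the new master
        git rebase topic_1 topic_2     # keep topic_2 stacked on topic_1 (leaves HEAD on topic_2)
        git checkout master
        git merge topic_2              # fast-forward only
        git branch -d topic_1 topic_2

    Setting branch.master.rebase to true (or pull.rebase on newer Git) makes pulls behave this way by default.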

    Read the article

  • WMI to reboot remote machine

    - by Stephen Murby
    I found this code on an old thread to shut down the local machine:

        using System.Management;

        void Shutdown()
        {
            ManagementBaseObject mboShutdown = null;
            ManagementClass mcWin32 = new ManagementClass("Win32_OperatingSystem");
            mcWin32.Get();

            // You can't shut down without security privileges
            mcWin32.Scope.Options.EnablePrivileges = true;
            ManagementBaseObject mboShutdownParams = mcWin32.GetMethodParameters("Win32Shutdown");

            // Flag 1 means we want to shut down the system. Use "2" to reboot.
            mboShutdownParams["Flags"] = "1";
            mboShutdownParams["Reserved"] = "0";

            foreach (ManagementObject manObj in mcWin32.GetInstances())
            {
                mboShutdown = manObj.InvokeMethod("Win32Shutdown", mboShutdownParams, null);
            }
        }

    Is it possible to use a similar WMI method to reboot (flag "2") a remote machine, for which I only have the machine name, not the IP address? EDIT: I currently have:

        SearchResultCollection allMachinesCollected = machineSearch.FindAll();
        Methods myMethods = new Methods();
        string pcName;
        ArrayList allComputers = new ArrayList();
        foreach (SearchResult oneMachine in allMachinesCollected)
        {
            //pcName = oneMachine.Properties.PropertyNames.ToString();
            pcName = oneMachine.Properties["name"][0].ToString();
            allComputers.Add(pcName);
            MessageBox.Show(pcName + " has been sent the restart command.");
            Process.Start("shutdown.exe", "-r -f -t 0 -m \\\\" + pcName);
        }

    but this doesn't work, and I would prefer WMI going forward.
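
    A sketch of the remote variant (assuming the caller has admin rights on the target and WMI/DCOM is allowed through its firewall; the method name Reboot is illustrative): point the ManagementScope at the remote machine's \\name\root\cimv2 namespace, which works with just the machine name, no IP address required:

        using System.Management;

        static void Reboot(string machineName)
        {
            ConnectionOptions options = new ConnectionOptions();
            options.EnablePrivileges = true;   // shutdown needs security privileges
            ManagementScope scope = new ManagementScope(
                "\\\\" + machineName + "\\root\\cimv2", options);
            scope.Connect();

            ManagementClass os = new ManagementClass(
                scope, new ManagementPath("Win32_OperatingSystem"), null);
            os.Get();
            foreach (ManagementObject instance in os.GetInstances())
            {
                ManagementBaseObject args = instance.GetMethodParameters("Win32Shutdown");
                args["Flags"] = 6;             // 2 = reboot, 4 = force; 6 = forced reboot
                args["Reserved"] = 0;
                instance.InvokeMethod("Win32Shutdown", args, null);
            }
        }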

    Read the article

  • SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008

    - by pinaldave
    Note: Please read the complete post before taking any action. This blog post discusses SHRINKFILE and TRUNCATE of the log file. The email received from a reader contains the following questionable code: "Hi Pinal, If you remember, my manager and I met you at TechEd in Bangalore. We just upgraded to SQL Server 2008. One of our jobs failed as it was using the following code. The error was: Msg 155, Level 15, State 1, Line 1 'TRUNCATE_ONLY' is not a recognized BACKUP option. The code was:

        DBCC SHRINKFILE(TestDBLog, 1)
        BACKUP LOG TestDB WITH TRUNCATE_ONLY
        DBCC SHRINKFILE(TestDBLog, 1)
        GO

    I have modified that code to the following and it works fine. But do you have other suggestions at the moment?

        USE [master]
        GO
        ALTER DATABASE [TestDb] SET RECOVERY SIMPLE WITH NO_WAIT
        DBCC SHRINKFILE(TestDbLog, 1)
        ALTER DATABASE [TestDb] SET RECOVERY FULL WITH NO_WAIT
        GO

    The configuration of our server and system is as follows: [removed irrelevant data]" An email like this popping up early in the morning is alarming. Because I was dead busy, I had only one minute to reply, and I quickly wrote down the following note. (As I said, it was a one-minute email, so it is not completely accurate.) Here is that quick email, shared with all of you. "Hi Mr. DBA [name removed], Thanks for your email. I suggest you stop this practice. There are many issues here, but I would list two major ones: 1) By setting the database to simple recovery, shrinking the file, and once again setting it to full recovery, you are in fact losing your valuable log data and will not be able to restore to a point in time. Not only that, you will also not be able to use subsequent log backups. 2) Shrinking a file or database adds fragmentation. There are a lot of things you can do. First, start taking proper log backups using the following command instead of truncating them and losing them frequently:

        BACKUP LOG [TestDb] TO DISK = N'C:\Backup\TestDb.bak'
        GO

    Remove the SHRINKFILE code. If you are taking proper log backups, your log file usually (again, usually; special cases excluded) does not grow very big. There is much more to add here, but you can call me on my [phone number]. Before you call me, for accuracy I suggest you read Paul Randal's two posts here and here and Brent Ozar's post here. Kind Regards, Pinal Dave" I guess this post is very much clear to you. Please leave your comments here. As mentioned, this is a very large subject; I have just touched the tip of the iceberg and tried to point to authentic knowledge. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Sends backups to a Network Folder, FTP Server, Dropbox, Google Drive or Amazon S3

    - by pinaldave
    Let me tell you about one of the most useful SQL tools that every DBA should use: SQLBackupAndFTP. I have been using this tool since 2009, and it is the first program I install on a SQL Server. Download the free version; after a one-minute configuration your daily backups are safe in the cloud. In summary, SQLBackupAndFTP: creates SQL Server database and file backups on schedule; compresses and encrypts the backups; sends backups to a network folder, FTP server, Dropbox, Google Drive or Amazon S3; and sends email notifications of a job's success or failure. SQLBackupAndFTP comes in Free and Paid versions (starting from $29); see the version comparison. The Free version is fully functional for unlimited ad hoc backups or for scheduled backups of up to two databases, which will be sufficient for many small customers. What has impressed me from the beginning is that I understood how it works and was able to configure the job from a single form (see Image 1, Main form, above): connect to your SQL Server and select the databases to be backed up; click "Add backup destination" to configure where backups should go (network folder, FTP server, Dropbox, Google Drive or Amazon S3); enter your email to receive confirmations; set the time to start daily full backups (or go to Settings if you need Differential or Transaction Log backups on a flexible schedule); and press the "Run Now" button to test. You can get to this form by clicking the "Settings" button in the "Schedule" section. Select what types of backups you want and how often to run them, and you will see them in the "Estimated backup plan" list. A detailed tutorial is available on the developer's website. The SQLBackupAndFTP setup also gives you the option to install "One-Click SQL Restore" (you can install it stand-alone too), a basic tool for restoring just full backups. However basic, you can drag and drop onto it the zip file created by SQLBackupAndFTP; it unzips the BAK file if necessary, connects to the SQL Server on startup, selects the right database, and is smart enough to restart the server to drop open connections if necessary. Very handy for developers who need to restore databases often. You may ask why this tool is better than the maintenance tasks available in SQL Server. While maintenance tasks are easy to set up, SQLBackupAndFTP is still far easier and integrates compression, encryption, FTP, cloud storage and email, which makes it superior to maintenance tasks in every aspect. On the flip side, SQLBackupAndFTP is not the fanciest tool for managing backups or checking their health. It only works reliably on local SQL Server instances; in other words, it has to be installed on the SQL Server itself. For remote servers it uses scripting, which is less reliable. This limitation is actually inherent in SQL Server itself, as the BACKUP DATABASE command creates the backup not on the client but on the server. This tool is compatible with almost all known SQL Server versions: it works with SQL Server 2008 (all editions) and many of the previous versions. It is especially useful for SQL Server Express 2005 and SQL Server Express 2008, as they lack built-in backup tools. I strongly recommend this tool to all DBAs; try it, as it is free and does exactly what it promises. You can download your free copy of the tool from here. Please share your experience using this tool; I am eager to receive your feedback on this article.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, SQLServer, T SQL, Technology

    Read the article
