Search Results

Search found 14013 results on 561 pages for 'remote backup'.

Page 144 of 561

  • How to set up an easy-to-use proxy for the whole system with a WinXP client and server?

    - by Pekka
    I am working together intensively with a colleague on the Canary Islands. We talk through Live Messenger and work together using RDP software. She has frequent problems with connections to certain big-name and small-name sites (among others live.com, google.com, gmx.de), very likely caused by the Spanish provider: the connections simply time out, and this has been going on for weeks already. I have been thinking about setting up my computer as a proxy to make these connections work. I have a DSL connection and am behind a NAT-capable router that I control. Does anybody know a simple, "one-click" way to route ALL network traffic through a remote proxy, without having to set proxy settings for each application that uses the internet? VPN is not an option: I am behind a firewall that supports protocol 47 and such, but I have never succeeded in getting an incoming VPN connection to work. I can, however, redirect normal traffic using NAT. A VPN solution that does not need strange protocols would also be an option.

    Read the article

  • Work from home on an iPad?

    - by Alex Basson
    The situation: My wife has a 13" MacBook Pro that she uses for email, Facebook, web surfing, and working from home. I'm about to buy us our first iPad. My wife's brother's computer just went belly-up, and she's contemplating giving him her MacBook and just using the iPad. The question is whether or not this is possible or realistic. Obviously, the iPad is well-suited for the email/web/Facebook tasks, but the working-from-home thing is an absolute must -- if the iPad can't handle that, it's a deal-breaker. For my wife, working from home means two things: Accessing her workplace computer's Windows Vista desktop, which she currently does via Remote Desktop. Editing Office documents locally, which she currently syncs via Dropbox. Being able to edit documents locally is important, because sometimes she will download documents and edit them when she doesn't have network access (e.g. on the subway). I'm more than happy to get a keyboard dock for her, so typing won't be an issue. Are there any iPad apps she can use to access her work computer and edit her work files? Thanks for any suggestions!

    Read the article

  • Setting up xpra for client use in OS X

    - by Jonathan
    I've been trying to get xpra to run on OS X for the last few days, to connect to my Ubuntu server. Note that there's a GUI for it called shifter, but that (at least on OS X) is still far too buggy. For those who don't know what xpra is: if you know what screen is, xpra is like screen for GUI X Windows apps tunneled over ssh. You render a remote X app locally, so it's faster than sending a series of compressed screenshots (like VNC), but with xpra you can disconnect and reconnect on different computers. To get the basic functionality you can just type "ssh -X server.location" and any GUI app you open from the command line will open locally. I've been able to get xpra to build by doing the following:

    1. Download pari-all-0.0.6.tar.gz from the xpra site listed under upstream and untar it.
    2. Issue the following MacPorts command (dependencies thanks to RogBlog): sudo port install python25 python26 py26-pyrex py26-gtk xorg-libXtst py25-gobject py25-gtk py25-nose py26-nose xorg-libXdamage xorg-libXcomposite xorg-libXtst xorg-libXfixes
    3. In the upstream list of v0.0.06 patches (NOT 0.0.8pre!) on the xpra site listed above, download mswindows-conditional-pyrex.patch.
    4. Open the patch with your favorite text editor and change the single occurrence of "win" in it to "darwin".
    5. Apply the patch to setup.py.
    6. Run do-build on the command line.

    Now where I'm stumped: how do I run xpra? The build produces a subdirectory called install/bin in which xpra is located, but when I try to run it I get the following error:

        Traceback (most recent call last):
          File "./xpra", line 4, in
            import xpra.scripts.main
        ImportError: No module named xpra.scripts.main

    There is a file called main.py under xpra/scripts, but I don't know any Python and I'm not sure if this is what it's looking for, or what to do with it even if it is. My goal is to set up xpra so I can install it into /usr/bin (or some other common path for executables) and execute it whenever I please. What do I do next?
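
    One hedged guess at the ImportError, not a verified fix: the install/bin/xpra script cannot find the xpra Python package, so putting the source tree (which contains xpra/scripts/main.py) on PYTHONPATH may get it running:

        # paths are assumptions -- adjust to where the source was untarred
        cd /path/to/xpra-source
        PYTHONPATH=$(pwd) ./install/bin/xpra --help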

    Read the article

  • What is my BaseDN supposed to be with the following configuration of OpenLDAP?

    - by fuzzy lollipop
    I have the following in my OpenLDAP configuration, using the latest version of OpenLDAP on CentOS 5.3, installed using yum. From my /etc/openldap/slapd.conf:

        database bdb
        suffix "dc=company,dc=com"
        rootdn "cn=Manager,dc=company,dc=com"

    From my /etc/openldap/ldap.conf:

        BASE dc=company,dc=com

    I have successfully added an entry with ldapadd and retrieved it with ldapsearch from a local bash shell on the box. Now I am trying to get a graphical editor to connect to this server remotely so I can enter people from my laptop, but I am having no luck. I tried JXplorer, and it connects with an anonymous bind without me having to specify a BaseDN, but I can't edit anything that way. If I try to give it a user name and password, using Manager and the rootpw I have in clear text just for testing, every GUI client on my remote laptop complains about my BaseDN not being in the correct format when I enter dc=company,dc=com, and I also tried cn=Manager,dc=company,dc=com: Error opening connection: [LDAP: error code 34 - invalid DN] I have tried multiple clients and all of them connect as anonymous; none let me connect authenticated so that I can actually create or edit anything. I am using Manager as my username and the password from rootpw; is that correct?
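
    For comparison, a sketch of an authenticated search against the configuration shown above (the host name and password are placeholders); GUI clients typically want the Base DN field set to dc=company,dc=com and the full rootdn, not the bare name Manager, in the bind/user DN field:

        # SERVER_HOSTNAME and the -w value are placeholders
        ldapsearch -x -H ldap://SERVER_HOSTNAME \
          -D "cn=Manager,dc=company,dc=com" -w 'YourRootPw' \
          -b "dc=company,dc=com" "(objectClass=*)"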

    Read the article

  • RDP suggesting a user name when connecting to a server

    - by Neolisk
    Prior to Event X, RDPing to Server 2003 always left the user name blank and the "Log in to" box enabled, so you could pick which domain to log in to; for us it's either local or our domain. Since a recent Event X, a domain + user name is being suggested for every server, and it's not the most recently used user name. If you remove it manually from the RDP dialog, it is still pre-populated for you, and at the next available opportunity it returns to the General/User name option of the RDP dialog. So the user name field comes pre-populated and you cannot change it to log in locally (only if you manually erase the domain specifier, everything before \); the "Log in to" option is disabled by default. We did not make any changes to our domain or client machines, so I suspect some Windows update caused it (this being Event X). Interesting fact: it does not happen consistently on all machines, and some can log in to some servers fine, while other servers keep suggesting a default user name. What could that Event X be, and is there a way to fix it? EDIT: I tried this - How to clear remote desktop connections history - and specifically this part of it:

        reg delete "HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\UsernameHint" /f

    The problem still persists.
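
    Since RDP also caches per-server hints under the Servers key and in the hidden Default.rdp file, a hedged cleanup sketch worth trying alongside the UsernameHint deletion (this clears saved server history, so use with care):

        reg delete "HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\Servers" /f
        reg delete "HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\Default" /f
        del /a:h "%USERPROFILE%\Documents\Default.rdp"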

    Read the article

  • AppFabric Cache - An existing connection was forcibly closed by the remote host

    - by Wallace Breza
    I'm trying to get AppFabric Cache up and running in my local development environment. I have Windows Server AppFabric Beta 2 Refresh installed, and the cache cluster and host configured and started on Windows 7 64-bit. I'm running my MVC2 website in a local IIS website under a v4.0 app pool in integrated mode.

        HostName : CachePort    Service Name               Service Status    Version Info
        --------------------    ------------               --------------    ------------
        SN-3TQHQL1:22233        AppFabricCachingService    UP                1 [1,1][1,1]

    I have my web.config configured with the following:

        <configSections>
          <section name="dataCacheClient" type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" allowLocation="true" allowDefinition="Everywhere"/>
        </configSections>
        <dataCacheClient>
          <hosts>
            <host name="SN-3TQHQL1" cachePort="22233" />
          </hosts>
        </dataCacheClient>

    I'm getting an error when I attempt to initialize the DataCacheFactory:

        protected CacheService()
        {
            _cacheFactory = new DataCacheFactory();   // <-- Error here
            _defaultCache = _cacheFactory.GetDefaultCache();
        }

    I'm getting the ASP.NET yellow error screen with the following:

        An existing connection was forcibly closed by the remote host
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
        Source Error:
        Line 21:     protected CacheService()
        Line 22:     {
        Line 23:         _cacheFactory = new DataCacheFactory();
        Line 24:         _defaultCache = _cacheFactory.GetDefaultCache();
        Line 25:     }
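
    One check frequently suggested for this symptom, offered here as a sketch rather than a confirmed fix: the app pool identity has to be on the cache cluster's allowed-client list, which is granted from the caching administration PowerShell module (the account name below is an assumption; substitute whatever identity the v4.0 app pool actually runs as):

        Import-Module DistributedCacheAdministration
        Use-CacheCluster
        # hypothetical account name -- substitute the real app pool identity
        Grant-CacheAllowedClientAccount "SN-3TQHQL1\YourAppPoolUser"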

    Read the article

  • Load Balance WCF and Share a Remote MSMQ for High Throughput

    - by BarDev
    After a ton of reading in books and on the web, I have noticed hints that WCF and MSMQ can be used to achieve high throughput. The information I have seen mentions using multiple WCF services in a farm that read from a single MSMQ queue. The problem is that I have found paragraphs here and there that mention high throughput can be achieved, but I cannot seem to find a document describing how to implement it. The following paragraph is from the MSDN article Best Practices for Queued Communication, http://msdn.microsoft.com/en-us/library/ms731093.aspx:

        To achieve higher throughput and availability, use a farm of WCF services that read from the queue. This requires that all of these services expose the same contract on the same endpoint. The farm approach works best for applications that have high production rates of messages because it enables a number of services to all read from the same queue.

    This is what I'm trying to solve. I have an intranet application where a client sends a request to a WCF service, but I want the ability to load balance the WCF services across multiple servers in a farm. I also want these WCF services in the farm to do transactional reads from a remote MSMQ when an item is available in the queue. If this is possible, an issue I have is that I do not understand the activation process of WCF to retrieve messages from a remote queue. If this is possible, does anyone know of any articles or webcasts that would explain it in detail? BarDev

    Read the article

  • git push error '[remote rejected] master -> master (branch is currently checked out)'

    - by hap497
    Yesterday I posted a question about how to clone a git repository from one of my machines to another: http://stackoverflow.com/questions/2808177/how-can-i-git-clone-from-another-machine/2809612#2809612 I am able to successfully clone a git repository from my src (192.168.1.2) to my dest (192.168.1.1). But when I edit a file, do a 'git commit -a -m "test"' and then a git push, I get this error on my dest (192.168.1.1):

        git push
        [email protected]'s password:
        Counting objects: 21, done.
        Compressing objects: 100% (11/11), done.
        Writing objects: 100% (11/11), 1010 bytes, done.
        Total 11 (delta 9), reused 0 (delta 0)
        error: refusing to update checked out branch: refs/heads/master
        error: By default, updating the current branch in a non-bare repository
        error: is denied, because it will make the index and work tree inconsistent
        error: with what you pushed, and will require 'git reset --hard' to match
        error: the work tree to HEAD.
        error:
        error: You can set 'receive.denyCurrentBranch' configuration variable to
        error: 'ignore' or 'warn' in the remote repository to allow pushing into
        error: its current branch; however, this is not recommended unless you
        error: arranged to update its work tree to match what you pushed in some
        error: other way.
        error:
        error: To squelch this message and still keep the default behaviour, set
        error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
        To git+ssh://[email protected]/media/LINUXDATA/working
         ! [remote rejected] master -> master (branch is currently checked out)
        error: failed to push some refs to 'git+ssh://[email protected]/media/LINUXDATA/working'

    I have two versions of git; could that cause this problem? I have git 1.7 on 192.168.1.2 (src) but git 1.5 on 192.168.1.1 (dest). I appreciate it if someone can help me with this. Thank you.
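
    The error text already names the usual remedy: push into a bare repository (or a branch that is not checked out) rather than into the working clone. A sketch, with the destination host and path written as placeholders:

        # on the destination machine (192.168.1.1): create a bare repository
        git init --bare /media/LINUXDATA/working.git

        # on the source machine (192.168.1.2): push to the bare repository instead
        git push git+ssh://git@DEST_HOST/media/LINUXDATA/working.git master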

    Read the article

  • IIS8 Asp.net State service remote connection failure

    - by maxisam
    Recently we upgraded our web server to Windows Server 2012 with IIS8. We have this issue when users try to connect to the ASP.NET State Service on this web server remotely. It always pops up:

        Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name.

    In IIS7 / 7.5 we used the same approach and it works fine: as long as the state service is running and the firewall is set properly, we don't have any problems. However, in IIS8 it doesn't work (we even turned off the firewall to test it). Thanks for helping.
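
    For reference, the registry value the error text points at is typically created like this on the machine running the state service, followed by a service restart (a sketch; adjust to your environment):

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters" /v AllowRemoteConnection /t REG_DWORD /d 1 /f
        net stop aspnet_state
        net start aspnet_state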

    Read the article

  • Remote JMS connection still using localhost

    - by James
    I have created a JMS connection factory on a remote GlassFish server and want to use that server from a Java client app on my local machine. I have the following configuration to get the context and connection factory:

        Properties env = new Properties();
        env.setProperty("java.naming.factory.initial", "com.sun.enterprise.naming.SerialInitContextFactory");
        env.setProperty("java.naming.factory.url.pkgs", "com.sun.enterprise.naming");
        env.setProperty("java.naming.factory.state", "com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl");
        env.setProperty("org.omg.CORBA.ORBInitialHost", JMS_SERVER_NAME);
        env.setProperty("org.omg.CORBA.ORBInitialPort", "3700");
        initialContext = new InitialContext(env);
        TopicConnectionFactory topicConnectionFactory = (TopicConnectionFactory) initialContext.lookup("jms/MyConnectionFactory");
        topicConnection = topicConnectionFactory.createTopicConnection();
        topicConnection.start();

    This seems to work, and when I delete the ConnectionFactory from the GlassFish server I get an exception indicating that it can't find jms/MyConnectionFactory, as expected. However, when I subsequently use my topicConnection to get a topic, it tries to connect to localhost:7676 (this fails as I am not running GlassFish locally). If I dynamically create a topic:

        TopicSession pubSession = topicConnection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = pubSession.createTopic(topicName);
        TopicPublisher publisher = pubSession.createPublisher(topic);
        Message mapMessage = pubSession.createTextMessage(message);
        publisher.publish(mapMessage);

    and the GlassFish server is not running locally, I get the same connection refused. However, if I start my local GlassFish server, the topics are created locally and I can see them in the GlassFish admin console. In case you ask, I do not have jms/MyConnectionFactory on my local GlassFish instance; it is only available on the remote server. I can't see what I am doing wrong here or why it is trying to use localhost at all. Any ideas? Cheers, James

    Read the article

  • hg archive to Remote Directory

    - by Brett Daniel
    Is there any way to archive a Mercurial repository to a remote directory over SSH? For example, it would be nice if one could do the following:

        hg archive ssh://[email protected]/path/to/archive

    However, that does not appear to work; it instead creates a directory called ssh: in the current directory. I made the following quick-and-dirty script that emulates the desired behavior by creating a temporary ZIP archive, copying it over SSH, and unzipping it into the destination directory. However, I would like to know if there is a better way.

        if [[ $# != 1 ]]; then
            echo "Usage: $0 [user@]hostname:remote_dir"
            exit
        fi

        arg=$1
        arg=${arg%/}            # remove trailing slash
        host=${arg%%:*}
        remote_dir=${arg##*:}

        # zip named to match lowest directory in $remote_dir
        zip=${remote_dir##*/}.zip

        # root of archive will match zip name
        hg archive -t zip $zip

        # make $remote_dir if it doesn't exist
        ssh $host mkdir --parents $remote_dir

        # copy zip over ssh into destination
        scp $zip $host:$remote_dir

        # unzip into containing directory (will prompt for overwrite)
        ssh $host unzip $remote_dir/$zip -d $remote_dir/..

        # clean up zips
        ssh $host rm $remote_dir/$zip
        rm $zip

    Edit: clone-and-push would be ideal, but unfortunately the remote server does not have Mercurial installed.
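
    Since the remote side only needs tar rather than Mercurial, another sketch (host and path are placeholders, and depending on your hg version you may want -p to control the directory prefix inside the tar) is to stream the archive over SSH instead of staging a ZIP file locally:

        hg archive -t tar - | ssh USER@REMOTE_HOST "mkdir -p /path/to/archive && tar -x -C /path/to/archive"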

    Read the article

  • ssh_exchange_identification: Connection closed by remote host under Git bash

    - by MoreFreeze
    I work on Win7 and set up a git server with sshd. I ran git --bare init myapp.git, and I can clone ssh://git@localhost/home/git/myapp.git from Cygwin correctly. But I would rather not configure git under Cygwin again, so I want to git clone from Git Bash. I run "git clone ssh://git@localhost/home/git/myapp.git" and get the following message:

        ssh_exchange_identification: Connection closed by remote host

    Then I run "ssh -vvv git@localhost" in Git Bash and get this:

        debug2: ssh_connect: needpriv 0
        debug1: Connecting to localhost [127.0.0.1] port 22.
        debug1: Connection established.
        debug1: identity file /c/Users/MoreFreeze/.ssh/identity type -1
        debug3: Not a RSA1 key file /c/Users/MoreFreeze/.ssh/id_rsa.
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug3: key_read: missing keytype
        debug3: key_read: missing whitespace
        // above it repeats 24 times
        debug2: key_type_from_name: unknown key type '-----END'
        debug3: key_read: missing keytype
        debug1: identity file /c/Users/MoreFreeze/.ssh/id_rsa type 1
        debug1: identity file /c/Users/MoreFreeze/.ssh/id_dsa type -1
        ssh_exchange_identification: Connection closed by remote host

    It seems my private key has the wrong format? I also find that there are exactly 25 lines in the private key, not counting "BEGIN" and "END". I'm confused why it said NOT an RSA1 key; I'm certain it is an RSA 2 key. Any advice is welcome. By the way, I have read the first 3 pages on Google about this problem.
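
    The "Not a RSA1 key" lines are normally harmless debug noise; ssh_exchange_identification usually means sshd dropped the connection before key exchange even started. A hedged diagnostic sketch is to run a second sshd in debug mode on a spare port so the server side prints why it closes the connection (paths assume a Cygwin sshd install):

        # on the server, from an elevated Cygwin shell
        /usr/sbin/sshd -d -p 2222

        # from Git Bash
        ssh -vvv -p 2222 git@localhost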

    Read the article

  • Exception calling remote SOAP call from thread

    - by Duncan
    This is an extension / next step of this question I asked a few minutes ago. I have a Delphi application with a main form and a thread. Every X seconds the thread makes a web services request for a remote object. It then posts back to the main form, which handles updating the UI with the new information. I was previously using a TTimer object in my thread, and when the TTimer callback function ran, it ran in the context of the main thread (but the remote web services request did work). This rather defeated the purpose of the separate thread, so I now have a simple loop-and-sleep routine in my thread's Execute function. The problem is, an exception is thrown when returning from GetIMySOAPService().

        procedure TPollingThread.Execute;
        var
          SystemStatus : TCWRSystemStatus;
        begin
          while not Terminated do
          begin
            sleep(5000);
            try
              SystemStatus := GetIMySOAPService().GetSystemStatus;
              PostMessage( ParentHandle, Integer(apiSystemStatus), Integer(SystemStatus), 0 );
              SystemStatus.DataContext := nil;
              LParam(SystemStatus) := 0;
            except
            end;
          end;
        end;

    Can anyone advise as to why this exception is being thrown when calling this function from the thread? I'm sure I'm overlooking something fundamental and simple. Thanks, Duncan

    Read the article

  • Architecture for data layer that uses both localStorage and a REST remote server

    - by Zack
    Does anybody have ideas or references on how to implement a data persistence layer that uses both localStorage and a REST remote storage? The data of a certain client is stored with localStorage (using an ember-data IndexedDB adapter). The locally stored data is synced with the remote server (using the ember-data RESTAdapter). The server gathers all data from clients. Using mathematical set notation: Server = Client1 ∪ Client2 ∪ ... ∪ ClientN, where, in general, a record may not be unique to a certain client. Here are some scenarios: A client creates a record. The id of the record cannot be set on the client, since it may conflict with a record stored on the server. Therefore a newly created record needs to be committed to the server, receive its id, and then be created in localStorage. A record is updated on the server, and as a consequence the data in localStorage and on the server go out of sync. Only the server knows that, so the architecture needs to implement a push mechanism (?). Would you use 2 stores (one for localStorage, one for REST) and sync between them, or use a hybrid indexedDB/REST adapter and write the sync code within the adapter? Can you see any way to avoid implementing push (Web Sockets, ...)?

    Read the article

  • jQuery Dialog - Remote Page Javascript not running in FF / Chrome

    - by Robert
    Hey SO, I've created a page intended only to be opened via a modal. This page contains a calendar of the user's favorite events, etc. I'm using the jQuery Dialog and FullCalendar plugins to do this. In IE there is no issue; everything works fine. However, the JavaScript on the remote page (Calendar.aspx) never gets executed in FF or Chrome. Here's my code.

    SomeRandomPage.aspx

        $("#showcalendar").click(function() {
            var dialog = $('<div style="display:hidden"></div>').appendTo('body');
            // load remote content
            dialog.load("Calendar.aspx", {}, function(responseText, textStatus, XMLHttpRequest) {
                dialog.dialog({
                    modal: true,
                    width: 750,
                    resizable: false,
                    draggable: false,
                    title: "My Calendar",
                    closeOnEscape: false,
                    close: function(event, ui) { $(this).remove(); }
                });
            });
        });

    Calendar.aspx

        <!-- Necessary js above -->
        <div id="calendar"></div>
        <script>
        $(document).ready(function() {
            alert("HERE"); // Doesn't work in FF or Chrome. Works in IE
            $('#calendar').fullCalendar({
                header: { left: 'prev,next today', center: 'title', right: 'month' },
                editable: false,
                events: '/webservices/CalendarEvents.ashx'
            });
        });
        </script>

    Read the article

  • Saving a remote image with cURL?

    - by thebluefox
    Morning all. There are a few questions around this, but none that really answer my question as far as I can understand. Basically I have a GD script that deals with resizing and caching images on our server, but I need to do the same with images stored on a remote server. So I want to save the image locally, then resize and display it as normal. I've got this far...

        $file_name_array = explode('/', $filename);
        $file_name_array_r = array_reverse($file_name_array);
        $save_to = 'system/cache/remote/'.$file_name_array_r[1].'-'.$file_name_array_r[0];

        $ch = curl_init($filename);
        $fp = fopen($save_to, "wb");

        // set URL and other appropriate options
        $options = array(CURLOPT_FILE => $fp,
                         CURLOPT_HEADER => 0,
                         CURLOPT_FOLLOWLOCATION => 1,
                         CURLOPT_TIMEOUT => 60); // 1 minute timeout (should be enough)
        curl_setopt_array($ch, $options);

        curl_exec($ch);
        curl_close($ch);
        fclose($fp);

    This creates the image file but does not copy it across. Am I missing the point? Cheers guys.

    Read the article

  • Typical Hadoop setup for remote job submission

    - by Artii
    So I am still a bit new to Hadoop and am currently in the process of setting up a small test cluster on Amazon AWS. My question relates to tips on structuring the cluster so it is possible to submit jobs from remote machines. Currently I have 5 machines: 4 are basically the Hadoop cluster with the NameNodes, YARN, etc., and one machine is used as a manager machine (Cloudera Manager). I am going to describe my thinking on the setup, and if anyone can chime in on the points I am not clear on, that would be great. I was thinking about the best setup for a small cluster, so I decided to expose only the one manager machine and probably use that to submit all the jobs through it. The other machines will see each other etc., but not be accessible from the outside world. I have a conceptual idea of how to do this, but I am not sure how to properly go about it, so if anyone could point me in the right direction that would be great. Another big point is that I want to be able to submit jobs to the cluster through the exposed machine from a client machine (which might be Windows). I am not so clear on this setup either. Do I need to have Hadoop installed on that machine in order to use the normal hadoop commands and to write/submit jobs, say from Eclipse or something similar? So to sum it up, my questions are: Is this an OK setup for a small test cluster? How can I go about using one exposed machine to submit/route jobs to the cluster, without having any of the Hadoop nodes on it? How do I set up a client machine to submit jobs to a remote cluster, with an example of how to do it on Windows? Also, are there any reasons not to use Windows as a client machine in this setup? Thanks, I would greatly appreciate any advice or help on this.
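
    For the client-machine question, a hedged sketch of what remote submission commonly looks like: the client gets the Hadoop client libraries (no daemons needed) and points the job at the cluster's NameNode and ResourceManager. The host names and job class below are assumptions, and the -D overrides only take effect if the job uses ToolRunner:

        hadoop jar my-job.jar com.example.MyJob \
          -D fs.defaultFS=hdfs://manager-node:8020 \
          -D mapreduce.framework.name=yarn \
          -D yarn.resourcemanager.address=manager-node:8032 \
          input/ output/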

    Read the article

  • Exceptions over remote methods

    - by Andrei Vajna II
    What are the best practices for exceptions over remote methods? I'm sure that you need to handle all exceptions at the level of the remote method implementation, because you need to log them on the server side. But what should you do afterwards? Should you wrap the exception in a RemoteException (Java) and throw it to the client? This would mean that the client would have to import all exceptions that could be thrown. Would it be better to throw a new custom exception with fewer details, since the client won't need to know all the details of what went wrong? What should you log on the client? I've even heard of using return codes (for efficiency, maybe?) to tell the caller what happened. The important thing to keep in mind is that the client must be informed of what went wrong. A generic answer of "Something failed" or no notification at all is unacceptable. And what about runtime (unchecked) exceptions?

    Read the article

  • How to get a response from a remote server

    - by ruhit
    I have made a desktop application in .NET using C# that connects to a remote server. I am able to connect, but how do I check whether my login was successful or not? After that I want to retrieve data from the remote server, so please help me. I have written the code below; is there a better way?

        try
        {
            string strId = UserId_TextBox.Text;
            string strpasswrd = Password_TextBox.Text;
            ASCIIEncoding encoding = new ASCIIEncoding();
            string postData = "UM_email=" + strId;
            postData += ("&UM_password=" + strpasswrd);
            byte[] data = encoding.GetBytes(postData);
            MessageBox.Show(postData);

            // Prepare web request...
            //HttpWebRequest myRequest = (HttpWebRequest)WebRequest.Create("http://localhost/ruhit/basic_framework/index.php?menu=login=" + postData);
            HttpWebRequest myRequest = (HttpWebRequest)WebRequest.Create("http://www.facebook.com/login.php=" + postData);
            myRequest.Method = "POST";
            myRequest.ContentType = "application/x-www-form-urlencoded";
            myRequest.ContentLength = data.Length;
            Stream newStream = myRequest.GetRequestStream();

            // Send the data.
            newStream.Write(data, 0, data.Length);
            MessageBox.Show("u r now connected");

            HttpWebResponse response = (HttpWebResponse)myRequest.GetResponse();
            // WebResponse response = myRequest.GetResponse();
            StreamReader reader = new StreamReader(response.GetResponseStream());
            string str = reader.ReadLine();
            while (str != null)
            {
                str = reader.ReadLine();
                MessageBox.Show(str);
            }
            reader.Close();
            newStream.Close();
        }
        catch
        {
            MessageBox.Show("error connecting");
        }

    Read the article

  • Passing the form scope to a Remote cfc

    - by cf_PhillipSenn
    What is the syntax for passing the form scope into a cfc with access="remote"? I have:

        <cfcomponent>
            <cfset Variables.Datasource = "xxx">
            <cffunction name="Save" access="remote">
                <cfset var local = {}>
                <!--- todo: try/catch --->
                <cfif arguments.PersonID>
                    <cfquery datasource="#Variables.Datasource#">
                        UPDATE Person
                        SET    FirstName = <cfqueryparam cfsqltype="cf_sql_varchar" value="#arguments.FirstName#">
                              ,LastName  = <cfqueryparam cfsqltype="cf_sql_varchar" value="#arguments.LastName#">
                        WHERE  PersonID  = <cfqueryparam cfsqltype="cf_sql_integer" value="#arguments.PersonID#">
                    </cfquery>
                    <cfset local.result = arguments.PersonID>
                <cfelse>
                    <cfquery name="local.qry" datasource="#Variables.Datasource#">
                        INSERT INTO Person(FirstName,LastName)
                        VALUES(
                            <cfqueryparam cfsqltype="cf_sql_varchar" value="#arguments.FirstName#">
                           ,<cfqueryparam cfsqltype="cf_sql_varchar" value="#arguments.LastName#">
                        );
                        SELECT PersonID FROM Person WHERE PersonID=Scope_Identity()
                    </cfquery>
                    <cfset local.result = local.qry.PersonID>
                </cfif>
                <cfreturn local.result>
            </cffunction>
        </cfcomponent>

    I need to pass in form.PersonID, form.firstname, and form.lastname.

    Read the article

  • Read header data from files on remote server

    - by rejeep
    Hi! I'm working on a project right now where I need to read header data from files on remote servers. I'm talking about many large files, so I can't read whole files, just the header data I need. The only solution I have is to mount the remote server with FUSE and then read the headers from the files as if they were on my local computer. I've tried it and it works, but it has some drawbacks, especially with FTP: Really slow (comparing FTP via curlftpfs to SSH: from the same server, SSH read 90 files in 18 seconds, while FTP read 10 files in 39 seconds). Not dependable: sometimes the mountpoint will not be unmounted, and if the server is active and a passive mount is done, that mountpoint and the parent folder get locked for about 3 minutes. It times out, even when a data transfer is going on (I guess this is the FTP protocol and not curlftpfs). FUSE is a solution, but I don't like it very much because I don't feel that I can trust it. So my question is basically whether there are any other solutions to the problem. The language is preferably Ruby, but any other will work if Ruby does not support the solution. Thanks!
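
    As a sketch of the underlying idea (partial reads with no mounting at all), both commands below fetch only the first kilobyte of a file; hosts and paths are placeholders, and the same approach can be driven from Ruby via Net::SSH or Net::FTP:

        # over SSH: read just the first 1 KiB of the remote file
        ssh user@host "dd if=/path/to/file bs=1024 count=1" > header.bin

        # over FTP, curl can ask for a byte range (not every server honours it)
        curl -r 0-1023 "ftp://user:password@host/path/to/file" -o header.bin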

    Read the article

  • Pushing changes to a remote server from a locally started repo

    - by Eliseo Soto
    I started a new project and created a local git repo with "git init" and now I have a few branches and everything works great. However, since my webhosting company offers git hosting (details if you're curious), I'd like to push my entire repo to their servers to have a backup in the cloud in case something bad happens to my local repo. How can I make the remote repo the "origin" since the repo was started locally?
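
    A sketch of the usual sequence, assuming the hosting company gives you an empty repository URL (the URL below is a placeholder):

        # register the hosted repository as "origin" in the locally started repo
        git remote add origin ssh://git@githost.example.com/username/project.git

        # push all branches and tags, setting upstream tracking as you go
        git push --all -u origin
        git push --tags origin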

    Read the article

  • Restore Files from Backups on Windows Home Server

    - by Mysticgeek
    If you use Windows Home Server to back up the machines on your network, you're in luck if you accidentally delete important files or they become corrupted. Today we take a look at getting your data back from backups on your home server. Open the Windows Home Server Console and select the Computers and Backup tab. Right-click on the computer you need to restore files for and select View Backups. This will open a list of your recent backups. Highlight the one you want to open, then click the Open button in the Restore or View Files section. If this is the first time you're restoring a file, you'll be asked to verify installation of the device software. Check the box next to Always trust software from Microsoft Corporation and click Install. Now wait while the backup data is retrieved. After the backup data has been retrieved, an Explorer window opens showing drive (Z:), which is the backup data. It's just like opening a drive on your local machine. Now you can browse through the backup and find the files you're missing. You can open the files directly, or drag them onto your machine to the location where you want to restore them. Restoring your data is actually a very easy process with Windows Home Server. Of course you'll want to make sure the computers on your network are being backed up to WHS. If you need help with that, check out our article on how to configure your computer to back up to WHS. If you want to back up your home server shares, check out our article on how to back up WHS folders to an external drive.

    Read the article

  • BizTalk 2009 - SQL Server Job Configuration

    - by StuartBrierley
    Following the installation of BizTalk Server 2009 on my development laptop, I used the BizTalk Server Best Practice Analyser, which highlighted the fact that two of the SQL Server Agent jobs that BizTalk relies on were not running successfully. Upon investigation it turned out that these jobs needed to be configured before they would run successfully. To configure these jobs, open SQL Server Management Studio, expand SQL Server Agent > Jobs and double click on the appropriate job. Select Steps and then edit the appropriate entries.

    Backup BizTalk Server (BizTalkMgmtDb)

    This job is comprised of three steps: BackupFull, MarkAndBackupLog and ClearBackupHistory.

    BackupFull

        exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */

    The frequency here is set/left as daily and the name is left as BTS. You must provide a full destination path for the backup files to be stored. There are also two optional parameters: a flag that controls whether the job forces a full backup if a partial backup fails, and a parameter to control the time of day to run the full backup (the default is midnight UTC time). For example:

        exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */, 0, 22

    MarkAndBackupLog

        exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */

    You must provide a destination path for the log backups. Optionally you can also add an extra parameter that tells the procedure to use local time:

        exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */, 1

    Clear Backup History

        exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=7

    This will clear out the instances in the MarkLog table older than 7 days.

    DTA Purge and Archive (BizTalkDTADb)

    This job is comprised of a single step.

    Archive and Purge

        exec dtasp_BackupAndPurgeTrackingDatabase
            0,    --@nLiveHours tinyint,
            1,    --@nLiveDays tinyint = 0,
            30,   --@nHardDeleteDays tinyint = 0,
            null, --@nvcFolder nvarchar(1024) = null,
            null, --@nvcValidatingServer sysname = null,
            0     --@fForceBackup int = 0

    Any completed instance that is older than the live days plus live hours will be deleted, as will any associated data. Any data older than the HardDeleteDays will be deleted; this means that those long-running orchestration instances that would otherwise never be purged will at some point have their data cleared down while allowing the instance to continue, thus preventing the DTA database from growing indefinitely. This should always be greater than the soft purge window. The nvcFolder parameter is the path for the backup files; if this is null the job will not run, failing with the error:

        DTA Purge and Archive (BizTalkDTADb) Job failed
        SQL Server Management Studio, job activity monitor, view history
        The @nvcFolder parameter cannot be null.
        Archive and Purge step

    How long you choose to keep instances in the Tracking Database is really up to you. For development I have set this up as:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 1, 30, '<destination path>', null, 0

    On a live server you may want to adjust these figures:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 15, 20, '<destination path>', null, 0

    Read the article

  • least privilege account for WinRM remote calls on Windows 2008 Server

    - by aldrin
    ServerFault Windows experts, please consider the following use case:

    1. I have two Windows 2008 Server SP2 boxes; let's call them SOURCE and CLIENT.
    2. On SOURCE: I create a new user called 'normal'. Just a plain user, no special privileges.
    3. On CLIENT: I run the following from a command prompt: winrm get wmi/root/cimv2/Win32_UTCTime -r:SOURCE -u:normal -p:NormalPassword
    4. I get an output containing: WSManFault: Message = Access is denied.
    5. On CLIENT: I repeat step 3 with the Administrator identity: winrm get wmi/root/cimv2/Win32_UTCTime -r:SOURCE -u:Administrator -p:AdminPassword
    6. I get the current UTC time at SOURCE.

    The question is: what are the least privileges I need to assign to the user 'normal' to ensure that step 3 behaves like step 5? In other words, what is the least privilege needed to enable WinRM access for a non-Admin account?
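
    The combination usually cited for non-admin WinRM reads of WMI on 2008-era servers, offered here as a hedged sketch rather than a confirmed minimal set, is (a) Read/Execute on the WinRM SDDL and (b) Remote Enable on the WMI namespace being queried, both set on SOURCE:

        rem On SOURCE, as an administrator: add Read and Execute rights for 'normal'
        rem to the WinRM root SDDL (a permissions dialog opens).
        winrm configsddl default

        rem Then grant 'normal' "Remote Enable" on the root\cimv2 namespace via
        rem WMI Control (Properties -> Security -> Root\CIMV2 -> Security).
        wmimgmt.msc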

    Read the article
