Search Results

Search found 5793 results on 232 pages for 'ftp sync'.

Page 37/232

  • Changing the sequencing strategy for File/Ftp Adapter

    - by [email protected]
    The File/Ftp Adapter lets you configure the outbound write to use a sequence number. For example, if I choose address-data_%SEQ%.txt as the FileNamingConvention, my files are generated as address-data_1.txt, address-data_2.txt, and so on. But where does this sequence number come from? The answer lies in the "control directory" for the particular adapter project (or scenario). For every project that uses the File or Ftp Adapter, a unique directory is created for bookkeeping purposes, and since this control directory must be unique, the adapter uses a digest to make sure that no two control directories are the same. For example, for my FlatStructure sample, the control information for my project goes under FMW_HOME/user_projects/domains/soainfra/fileftp/controlFiles/[DIGEST]/outbound, where the value of DIGEST differs from one project to another. If you look under this directory, you will see a file control_ob.properties, and this is where the sequence number is maintained. Note that the sequence number is stored in binary form, so you might need a hex editor to view its content. You will also see another zero-byte file, SEQ_nnn; ignore that for now. We'll get to it some other time. For now, just remember that this extra file is maintained as a backup.

    One of the challenges faced by the adapter runtime is to guard all writes to the control files so that no two threads inadvertently try to update them at the same time. It does so with the help of a "mutex". For now, remember that the mutex comes in different flavors: in-memory, DB-based, Coherence-based, and user-defined. Again, we will talk about these mutexes some other time. Note that there are scenarios, particularly under heavy load, where the mutex can become a bottleneck. The adapter, however, allows you to change the configuration so that the sequence value comes from a database sequence or a stored procedure; in that situation the mutex is actually bypassed, resulting in better throughput. In later releases, the adapter's default behavior will be to use a DB sequence. The simplest way to achieve this is to switch the JNDI name in your outbound JCA file to "eis/HAFileAdapter", as shown below.

    But what does this do? Internally, the adapter runtime creates a sequence on the Oracle database. For example, if you do a "select * from user_sequences" in your soa-infra schema, you will see a new sequence with a name like SEQ_<GUID>__, where the GUID differs from one project to another. However, if you want to use your own sequence, you need to add a new property called SequenceName to your JCA file, as shown below. Note that you must create this sequence in your soainfra schema beforehand.

    But what if you use DB2 or MSSQL Server as the dehydration store? DB2 supports sequences natively but MSSQL Server does not, so the adapter runtime uses a natively generated sequence for DB2, while for MSSQL Server it relies on a stored procedure that ships with the product. If you wish to achieve the same result for SOA Suite running DB2 as the dehydration store, simply change your connection factory JNDI name in the JCA file to eis/HAFileAdapterDB2; for MSSQL, use eis/HAFileAdapterMSSQL. And if you wish to use a stored procedure other than the one that ships with the product, you need to rely on binding properties to override the adapter behavior; in particular, you need to instruct the adapter that you wish to use a stored procedure.

    Finally, note that if you're using the File/Ftp Adapter in Append mode, the adapter runtime degrades the mutex to use pessimistic locks, because we don't want writers from different nodes appending to the same file at the same time.
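
    The configuration screenshots from the original post are not reproduced here. As a rough sketch only (the config name, directory, port type and property values are placeholders, not taken from the original), an outbound JCA file that switches to the HA connection factory and names its own sequence might look like this:

        <adapter-config name="WriteAddressData" adapter="File Adapter"
                        xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
          <!-- Point at the HA connection factory so the sequence comes from the
               dehydration-store database instead of control_ob.properties -->
          <connection-factory location="eis/HAFileAdapter"/>
          <endpoint-interaction portType="Write_ptt" operation="Write">
            <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
              <property name="PhysicalDirectory" value="/tmp/outbound"/>
              <property name="FileNamingConvention" value="address-data_%SEQ%.txt"/>
              <!-- Optional: use a sequence you created yourself in the soainfra schema -->
              <property name="SequenceName" value="MY_ADDRESS_SEQ"/>
            </interaction-spec>
          </endpoint-interaction>
        </adapter-config>

    The sequence named here (MY_ADDRESS_SEQ is a made-up name) would have to exist in the soainfra schema beforehand, e.g. created with a plain CREATE SEQUENCE statement.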

    Read the article

  • Ubuntu One has high CPU usage but no syncing

    - by Peter
    Over the weekend I updated my computer to Windows 8. So far Ubuntu One had been running smoothly, but ever since the update (a clean, new install) Ubuntu One doesn't sync any more. In Windows 7 it would start to sync at full internet speed as soon as I dropped a file. Now, in Windows 8, as soon as I drop a new file into the Ubuntu One folder, CPU usage goes up to about 50% and no network traffic occurs. It stays like that for a couple of minutes, then CPU usage goes back to normal and the client says that everything is in sync - which isn't true. Is it too early for Windows 8? Do others have the same problem, or is there something I can do about it? I tried a couple of different things and realized that if the file size is 20 MB, the files do get uploaded. The original file was 1.5 GB. It also didn't work with 200, 100 and 50 MB files. Even with 20 MB files, the upload is very slow and not steady. The log shows plenty of this error: - twisted - ERROR - Failure: exceptions.TypeError - whose meaning I don't know. By the way, the account works just fine on my Ubuntu 12.04 partition. Any help is greatly appreciated. -Peter

    Read the article

  • Set which pulseaudio sink is affected by volume control buttons

    - by Michael K
    On my previous installation of Ubuntu 10.04 it was possible to choose, in pavucontrol, which sink is affected by the volume up/down buttons on my keyboard. In Lubuntu 11.10 this no longer works: I can check the green tick, but it has no effect - it is always one and the same sink that is affected. Has anybody seen this before? Where is this function configured - is there a config file in which I could change this setting directly?
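
    This isn't from the original question, but since keyboard volume keys usually act on the default sink, one thing worth trying is changing the default sink from the command line; a minimal sketch, assuming pactl (pulseaudio-utils) is installed and using a placeholder sink name:

        # List the available sinks, then make the one you want the default,
        # so that hardware volume keys (which target the default sink) affect it.
        pactl list short sinks
        pactl set-default-sink alsa_output.pci-0000_00_1b.0.analog-stereo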

    Read the article

  • How to Sync iTunes to Your Android Phone

    - by Zainul Franciscus
    If you have iTunes but an Android phone instead of an iPhone, keeping the two in sync can be frustrating. Here are some tips for syncing iTunes and making sure your cover art displays correctly on your Android phone.

    Read the article

  • Vsftpd (FTP) server drag and drop issue

    - by user109705
    Hi, I have installed and configured an FTP server (vsftpd) on Ubuntu 12.04. In vsftpd.conf I have: #anonymous_enable=YES and write_enable=YES. When I drag and drop files onto the server with FileZilla, it fails with: 550 Permission denied. Error: Critical file transfer error. But when I try to do the same thing to another server on the Internet, it works just fine. I have tried several times to change settings in the vsftpd.conf file, but I keep getting the same error. Thanks.
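
    Not part of the original question, but for reference, uploads through vsftpd typically depend on the following kind of configuration (example values only; which lines apply depends on whether you log in as a local or an anonymous user, and the target directory must also be writable by that user at the filesystem level):

        # /etc/vsftpd.conf - settings commonly involved in "550 Permission denied" on upload
        write_enable=YES          # allow write commands at all
        local_enable=YES          # allow local system users to log in
        # For anonymous uploads these are needed as well:
        # anonymous_enable=YES
        # anon_upload_enable=YES
        # anon_mkdir_write_enable=YES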

    Read the article

  • How do I set the LRECL in C#.NET?

    - by donde
    I have been trying to FTP a .dtl file from .NET to what I believe is an AS/400. The error being reported back to me is "One or more lines have been truncated", and the admin says the file is coming over with 256 lines that have variable-length columns. I found this explanation online: "we have to establish defaults because no specifics about the file exist. The default RECFM is V and LRECL is 256. This means that SAS will scan the input record looking for the CR & LF to tell us that we are at the EOR. If the marker isn't found within the limit of the LRECL, SAS discards the data from the LRECL value to the end of the record and adds a message to the Log that 'One or more lines have been truncated'." So I need to set the LRECL. How do I do this in C#/.NET? This is my upload code:

        FtpWebRequest ftp = (FtpWebRequest)FtpWebRequest.Create(ftpfullpath);
        ftp.Credentials = new NetworkCredential(user, pwd);
        ftp.KeepAlive = false;
        ftp.UseBinary = false;                 // ASCII transfer mode
        ftp.Method = WebRequestMethods.Ftp.UploadFile;

        // Read the whole local file into memory
        FileStream fs = File.OpenRead(inputfilepath + ftpfileName);
        byte[] buffer = new byte[fs.Length];
        fs.Read(buffer, 0, buffer.Length);
        fs.Close();

        // Write the buffer to the FTP request stream in 1786-byte blocks
        Stream ftpstream = ftp.GetRequestStream();
        int i = 0;
        int intBlock = 1786;
        int intBuffLeft = buffer.Length;
        while (i < buffer.Length)
        {
            if (intBuffLeft >= intBlock)
            {
                ftpstream.Write(buffer, i, intBlock);
            }
            else
            {
                ftpstream.Write(buffer, i, intBuffLeft);
            }
            i += intBlock;
            intBuffLeft -= intBlock;
        }
        ftpstream.Close();

    Read the article

  • How would I construct a terminal command to download a folder with wget from a Media Temple (gs) server?

    - by racl101
    I'm trying to download a folder using wget in the Terminal (I'm using a Mac, if that matters) because my FTP client sucks and keeps timing out; it doesn't stay connected for long. So I was wondering if I could use wget to connect to the server over FTP and download the directory in question. I have searched the internet for this and attempted to write the command, but it keeps failing. So, assuming the following: the FTP username is serveradmin@s12345.gridserver.com, the FTP host is ftp.s12345.gridserver.com, and the FTP password is somepassword, I have tried to write the command in the following ways:

        wget -r ftp://serveradmin@s12345.gridserver.com:somepassword@ftp.s12345.gridserver.com/path/to/desired/folder/
        wget -r ftp://serveradmin:somepassword@s12345.gridserver.com/path/to/desired/folder/

    When I try the first way I get this error: Bad port number. When I try the second way I get a little further, but then I get this error:

        Resolving s12345.gridserver.com... 71.46.226.79
        Connecting to s12345.gridserver.com|71.46.226.79|:21... connected.
        Logging in as serveradmin ... Login incorrect.

    What could I be doing wrong?
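
    Not from the original question, but since the FTP username itself appears to contain an '@', embedding it in the URL confuses wget's host/port parsing (hence "Bad port number"). wget can take the credentials outside the URL instead; a sketch with placeholder credentials and path:

        # --ftp-user / --ftp-password keep the '@' in the username out of the URL
        wget -r --ftp-user='serveradmin@s12345.gridserver.com' \
                --ftp-password='somepassword' \
                ftp://ftp.s12345.gridserver.com/path/to/desired/folder/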

    Read the article

  • How to resume an FTP download at any point (shell script, wget option)?

    - by Dave
    Hi! I want to download a huge file from an FTP server in chunks of 50-100 MB each. At each point, I want to be able to set the "starting" point and the length of the chunk I want. I won't have the "previous" chunks saved locally (i.e. I can't ask the program to "resume" the download). What is the best way of going about this? I use wget mostly, but would something else be better?
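
    One option worth considering (not from the original post) is curl, which can request an arbitrary byte range of a remote FTP file, so each chunk can be fetched independently of the others; a sketch with placeholder host, path and offsets:

        # Fetch a 50 MB chunk starting at byte 100,000,000 of the remote file
        curl --range 100000000-149999999 \
             -o chunk_003.part \
             ftp://user:password@ftp.example.com/pub/huge-file.iso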

    Read the article

  • LINUX: how to detect that ftp file upload is finished.

    - by duke84
    In my project I have a file uploading feature; files are uploaded via FTP. I need to configure a listener that checks for new files and invokes a script only when the upload is finished, because if I run the script immediately after detecting a new file, it may start processing a file that is not completely uploaded, which causes an error. Can anybody tell me whether this is possible on Linux, and how I can do it?
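
    One common approach (not given in the original question) is to watch the upload directory with inotify and react to the close_write event, which fires when the FTP daemon closes the file after the transfer completes; a minimal sketch, assuming inotify-tools is installed and with placeholder paths:

        #!/bin/sh
        # close_write fires once the FTP server has finished writing the file and closed it;
        # note that a client-side abort or resumed upload can still leave an incomplete file.
        inotifywait -m -e close_write --format '%w%f' /srv/ftp/uploads |
        while read -r uploaded_file; do
            /usr/local/bin/process-upload.sh "$uploaded_file"    # placeholder processing script
        done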

    Read the article

  • How do I setup a cloud server to share and sync files on ESXi hosted environment?

    - by Manoj Agarwal
    I want to set up a private cloud network for my company for syncing and sharing files. Instead of using existing services like Dropbox, Google Drive, Amazon, etc., I want to set up my own cloud infrastructure. The requirement is to easily share private data internally within the organization. I already have an ESXi-based environment running several virtual machines. Is this feasible and achievable?

    Read the article

  • XCOPY-deploying Microsoft Sync Framework in a no admin rights scenario (i.e. ClickOnce install)

    - by Mike Bouck
    I'm currently designing a smart client app (WPF) which needs to operate in an "occasionally disconnected" mode. For the offline scenario, I'm looking at using the Disconnected Service Agent Application Block (from the Smart Client Software Factory) and the Microsoft Sync Framework. I should mention that I want my smart client app to be XCOPY-deployable, auto-updating, and installable without administrative privileges - basically a ClickOnce-deployed app. From what I can tell, this rules out the Microsoft Sync Framework, because its implementation includes COM components that need to be registered on the client, which requires admin rights. Is it possible to XCOPY-deploy and run MSF from a ClickOnce app? Any other ideas for data synchronization?

    Read the article

  • How to resume repo sync

    - by webgenius
    Can anyone please tell me how to resume the sync command? I followed these steps:

        $ repo init -u git://git.omapzoom.org/platform/omapmanifest.git -b eclair
        $ repo sync

    The sync took more than 6 hours and I had to terminate it myself due to a shortage of bandwidth. Is there any way I can resume the sync from the previous session? I can see that the following folders were created: bionic.git, bootable, build.git, cts.git, and many more. I have access to free bandwidth for only 6 hours a day, and I have to do the sync within that time. Any help is really appreciated.
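
    For what it's worth (not part of the original question), repo sync is generally resumable: rerunning the same command in the same working directory continues fetching the projects that have not finished yet, and already-fetched projects are only updated rather than re-downloaded. A sketch, using flags available in reasonably recent repo versions:

        # Re-run inside the same working directory to continue the interrupted sync
        repo sync -j2            # -j limits the number of parallel fetch jobs
        # Fetch only the manifest-specified branch and skip tags to save bandwidth
        repo sync -c --no-tags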

    Read the article
