Search Results

Search found 5793 results on 232 pages for 'ftp sync'.

  • Accents in uploaded file being replaced with '?'

    - by Katfish
    I am building a data import tool for the admin section of a website I am working on. The data is in both French and English and contains many accented characters. Whenever I upload a file, parse the data, and store it in my MySQL database, the accents are replaced with '?'. I have text files containing the data (charset is iso-8859-1), which I upload to my server using CodeIgniter's file upload library and then read in PHP. My code is similar to this:

        $this->upload->do_upload();
        $data = array('upload_data' => $this->upload->data());
        $fileHandle = fopen($data['upload_data']['full_path'], "r");
        while (($line = fgets($fileHandle)) !== false) {
            echo $line;
        }

    This produces lines with the accents replaced with '?'; everything else is correct. If I download my uploaded file from my server over FTP, the charset is still iso-8859-1, but a diff reveals that the file has changed. However, if I open the file in TextEdit, it displays properly. I attempted to use PHP's stream_encoding method to explicitly set my file stream to iso-8859-1, but my build of PHP does not have that method. After running out of ideas, I tried wrapping my strings in both utf8_encode and utf8_decode; neither worked. If anyone has any suggestions about things I could try, I would be extremely grateful.
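
    A minimal sketch of one thing worth trying (an assumption, not a confirmed fix): transcode each line from ISO-8859-1 to UTF-8 as it is read, which only helps if the MySQL connection and tables are themselves UTF-8:

        // Sketch: transcode before echoing/inserting; assumes the connection
        // charset is UTF-8 (e.g. $db->set_charset('utf8') for mysqli).
        while (($line = fgets($fileHandle)) !== false) {
            $line = mb_convert_encoding($line, 'UTF-8', 'ISO-8859-1');
            echo $line;
        }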

  • How to maintain ComboBox.SelectedItem reference when DataSource is resorted?

    - by Dave
    This really seems like a bug to me, but perhaps some databinding gurus can enlighten me? (My WinForms databinding knowledge is quite limited.) I have a ComboBox bound to a sorted DataView. When the properties of the items in the DataView change such that the items are resorted, the SelectedItem in my ComboBox does not keep in sync; it seems to point to someplace completely random. Is this a bug, or am I missing something in my databinding? Here is a sample application that reproduces the problem. All you need is a Button and a ComboBox:

        public partial class Form1 : Form
        {
            private DataTable myData;

            public Form1()
            {
                this.InitializeComponent();
                this.myData = new DataTable();
                this.myData.Columns.Add("ID", typeof(int));
                this.myData.Columns.Add("Name", typeof(string));
                this.myData.Columns.Add("LastModified", typeof(DateTime));
                this.myData.Rows.Add(1, "first", DateTime.Now.AddMinutes(-2));
                this.myData.Rows.Add(2, "second", DateTime.Now.AddMinutes(-1));
                this.myData.Rows.Add(3, "third", DateTime.Now);
                this.myData.DefaultView.Sort = "LastModified DESC";
                this.comboBox1.DataSource = this.myData.DefaultView;
                this.comboBox1.ValueMember = "ID";
                this.comboBox1.DisplayMember = "Name";
            }

            private void saveStuffButton_Click(object sender, EventArgs e)
            {
                DataRowView preUpdateSelectedItem = (DataRowView)this.comboBox1.SelectedItem;
                // OUTPUT: SelectedIndex = 0; SelectedItem.Name = third
                Debug.WriteLine(string.Format("SelectedIndex = {0:N0}; SelectedItem.Name = {1}",
                    this.comboBox1.SelectedIndex, preUpdateSelectedItem["Name"]));

                this.myData.Rows[0]["LastModified"] = DateTime.Now;

                DataRowView postUpdateSelectedItem = (DataRowView)this.comboBox1.SelectedItem;
                // OUTPUT: SelectedIndex = 2; SelectedItem.Name = second
                Debug.WriteLine(string.Format("SelectedIndex = {0:N0}; SelectedItem.Name = {1}",
                    this.comboBox1.SelectedIndex, postUpdateSelectedItem["Name"]));

                // FAIL!
                Debug.Assert(object.ReferenceEquals(preUpdateSelectedItem, postUpdateSelectedItem));
            }
        }
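
    A hedged workaround sketch (not a confirmed fix): capture the stable key before triggering the resort and restore the selection through SelectedValue afterwards, since the ComboBox loses track of the item but the underlying row keeps its ID:

        // Sketch: re-select by ID after the update forces a resort.
        object selectedValue = this.comboBox1.SelectedValue;
        this.myData.Rows[0]["LastModified"] = DateTime.Now;
        this.comboBox1.SelectedValue = selectedValue;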

  • Design an Application That Stores and Processes Files

    - by phasetwenty
    I'm tasked with writing an application that acts as a central storage point for files (usually document formats) as provided by other applications. It also needs to take commands like "file 395 needs a copy in X format", at which point some work is offloaded to a 3rd party application. I'm having trouble coming up with a strategy for this. I'd like to keep the design as simple as possible, so I'd like to avoid big extra frameworks or techniques like threads for as long as it makes sense. The clients are expected to be web applications (for example, one is a django application that receives files from our customers; the others are not yet implemented). The platform it will be running on is likely going to be Python on Linux, unless I have a strong argument to use something else.

    In the beginning I thought I could fit the information I wanted to communicate in the filenames, and let my application parse the filename to figure out what it needed to do, but this is proving too inflexible with the amount of information I'm realizing I need to make available. Another idea is to pair FTP with a database used as a communication medium (the client uploads a file and updates the database with a command as a row in a table), but I don't like this idea because adding commands (a known change) looks like it will require adding code as well as changing database schemas. It will also muddy up the interface my clients will have to use. I looked into Pyro to let applications communicate more directly, but I don't like the idea of running an extra nameserver for this one purpose, and I also don't see a good way to do file transfer within that framework.

    What I'm looking for is techniques and/or technologies applicable to my problem. At the simplest level, I need the ability to accept files and messages with them.
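
    One hedged sketch of the simplest thing that could work for "accept files and messages with them" (all paths and field names here are invented for illustration): store each file next to a small JSON command document, so new commands become new fields rather than schema changes:

        # Sketch: a file-plus-JSON-sidecar intake; paths and fields are assumptions.
        import json
        import shutil
        from pathlib import Path

        STORE = Path("/var/spool/docstore")

        def accept(file_path, command):
            """Store the file and its command message side by side."""
            src = Path(file_path)
            dest = STORE / src.name
            shutil.copy2(src, dest)                  # the payload
            sidecar = Path(str(dest) + ".json")      # the message
            sidecar.write_text(json.dumps(command))
            return dest

        accept("/tmp/incoming/report.pdf", {"id": 395, "convert_to": "X"})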

  • Automated test, build and deploy

    - by mike79
    I have Visual Studio Team Suite 2008. I was unable to meet the requirements to set up TFS, so I'm using TortoiseSVN and VisualSVN as my version control in VSTS. I need the system set up to do the following: I need to be able to create and track work items; when updates are made to the current project worked on in VSTS, the updates will be committed back to version control; tests will be run to check that updates don't break the application; if there's a problem with an update, it will be reported back to the developer; and if there's no problem with the app, which is a ClickOnce application, it will automatically be built and deployed to an FTP server. I've never worked with version control, build servers, automated testing, or continuous integration, so I need to know what needs to be put in place for this type of system. I don't know which combination/stack I should be using: CC.net, TeamCity, Hudson, NAnt, NUnit, MsTest, Trac, BugTracker.net, NDepend, VisualSvn Server, Perforce, MSDeploy, SCM. I want something that is free/open source and relatively easy to set up and use. Please suggest a setup that will fit my needs. Any help is appreciated.

  • Validating parameters according to a fixed reference

    - by James P.
    The following method is for setting the transfer type of an FTP connection. Basically, I'd like to validate the character input (see comments). Is this going overboard? Is there a more elegant approach? How do you approach parameter validation in general? Any comments are welcome.

        public void setTransferType(Character typeCharacter, Character optionalSecondCharacter)
                throws NumberFormatException, IOException {
            // http://www.nsftools.com/tips/RawFTP.htm#TYPE
            // Syntax: TYPE type-character [second-type-character]
            //
            // Sets the type of file to be transferred. type-character can be any of:
            //
            //   * A - ASCII text
            //   * E - EBCDIC text
            //   * I - image (binary data)
            //   * L - local format
            //
            // For A and E, the second-type-character specifies how the text should
            // be interpreted. It can be:
            //
            //   * N - Non-print (not destined for printing). This is the default if
            //         second-type-character is omitted.
            //   * T - Telnet format control (<CR>, <FF>, etc.)
            //   * C - ASA Carriage Control
            //
            // For L, the second-type-character specifies the number of bits per
            // byte on the local system, and may not be omitted.
            final Set<Character> acceptedTypeCharacters = new HashSet<Character>(
                    Arrays.asList(new Character[] { 'A', 'E', 'I', 'L' }));
            final Set<Character> acceptedOptionalSecondCharacters = new HashSet<Character>(
                    Arrays.asList(new Character[] { 'N', 'T', 'C' }));

            if (acceptedTypeCharacters.contains(typeCharacter)) {
                if (new Character('A').equals(typeCharacter)
                        || new Character('E').equals(typeCharacter)) {
                    if (acceptedOptionalSecondCharacters.contains(optionalSecondCharacter)) {
                        executeCommand("TYPE " + typeCharacter + " " + optionalSecondCharacter);
                    }
                } else {
                    executeCommand("TYPE " + typeCharacter);
                }
            }
        }
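
    One hedged alternative sketch (a different technique, not the original code): model the legal values as enums so invalid characters are unrepresentable and most of the validation disappears; executeCommand is assumed from the class above:

        public enum TransferType { A, E, I, L }
        public enum TextFormat { N, T, C }

        // Sketch: the compiler now rejects invalid type characters outright.
        public void setTransferType(TransferType type, TextFormat format) throws IOException {
            if ((type == TransferType.A || type == TransferType.E) && format != null) {
                executeCommand("TYPE " + type + " " + format);
            } else {
                executeCommand("TYPE " + type);
            }
        }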

  • Some Sonatype Nexus questions

    - by smallufo
    I deployed a Sonatype Nexus server inside my LAN, mapping some remote repositories to my public repositories.

    First question: why don't these repositories sync with the "real" repositories? For example, I mapped Maven Central (http://repo1.maven.org/maven2) to "central", but when I browse http://smallufo:8081/nexus/content/repositories/central/org/springframework/, the packages are not complete. In http://repo2.maven.org/maven2/org/springframework/ there are tons of artifacts, but I only have some of them, and the versions are old; for example, spring-core is only 2.5.6.SEC01, while the latest version is 3.0.2.RELEASE, and my Maven client seems to find only the old artifacts. "central" is a proxy repository, so it should be the same as the remote server. I tried "Expire Cache", "ReIndex", and "Incremental ReIndex" on the whole "central" repository; after a long time with the java process at almost 100% load, the situation is no better. A few artifacts were added, but they do not reflect the real Maven Central data.

    Second question: what is the difference between "Expire Cache", "ReIndex", and "Incremental ReIndex"?

    Even though I can search for spring-core-3.0.2.RELEASE, my m2eclipse still cannot find it. I can also see spring-core-3.0.2.RELEASE in the "index" (but it is not available in "storage"). Why can't m2eclipse make use of it? It seems m2eclipse can only install artifacts that are in storage. If this is how Nexus works, how do I "force" Nexus to download spring-core-3.0.2.RELEASE into its storage? How do I solve these strange inconsistencies? Thanks a lot!

  • Implementing intelligent file transfer software in Java over TCP/IP

    - by whyjava
    Hello, I am working on a proposal where we have to implement software that can move files from one source to a destination. The overall goal of this project is to create intelligent file transfer. The software will have three components:

    1) Broker: the module that communicates with other brokers, monitors files, moves files, retrieves configurations from the Configuration Manager, supplies process information to the monitor, archives files, writes all process data to log files, and escalates issues if necessary.
    2) Configuration Manager: a web-based application used to configure and deploy the configuration to all brokers.
    3) Monitor: a web-based application used to monitor each broker in the environment.

    The project has to be built in Java, with file transfer over TCP/IP; the client does not want to use FTP. File transfer seems very easy until there are several processes waiting to pick the file up automatically. Several problems arise:

    - How can we guarantee the file is received at the destination?
    - If a file isn't received the first time, how do we try again (even after a restart or power breakdown)?
    - How does the receiver know that the file received is complete?
    - How can we transfer multiple files synchronously?
    - How can we protect the bandwidth, so file transfer isn't blocking other processes?
    - How does one interoperate between multiple OS platforms?
    - What about authentication?
    - How can we monitor the workflow?
    - Auditing / logging
    - Archiving

    Can you please provide answers to some of these? Thanks
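
    For the "is the file received and complete?" questions, a minimal hedged sketch (an assumed wire format, not part of the proposal): prefix the stream with the file length and follow it with a digest the receiver can verify before acknowledging:

        import java.io.DataOutputStream;
        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.net.Socket;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public class FileSender {
            // Sketch: length header + SHA-256 trailer lets the receiver detect
            // truncated or corrupted transfers and request a resend.
            public static void send(String host, int port, File file)
                    throws IOException, NoSuchAlgorithmException {
                try (Socket socket = new Socket(host, port);
                     DataOutputStream out = new DataOutputStream(socket.getOutputStream());
                     FileInputStream in = new FileInputStream(file)) {
                    out.writeLong(file.length());
                    MessageDigest digest = MessageDigest.getInstance("SHA-256");
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        digest.update(buffer, 0, read);
                        out.write(buffer, 0, read);
                    }
                    out.write(digest.digest()); // receiver recomputes and compares
                }
            }
        }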

  • How should I manage/declare dependencies between open source C# projects?

    - by munificent
    I've got a game (a roguelike, to be specific) in C# that I'm in the process of cleaning up to open source. One step I'd like to take is splitting it into three distinct pieces:

    1. A simple package of utility classes: things like 2D arrays, vectors, etc.
    2. A terminal UI package that gives you a curses-like display. It depends on 1.
    3. The actual game, which uses 1 and 2.

    Right now, these are all separate projects in the same solution, but I'd kind of like to make them completely separate projects (in the "open source project" sense, not the "Visual Studio project" use of the term) with their own names and repos. I think, at the very least, #1 is generally useful even if you aren't building a game, and I don't want someone to have to build an entire game just to get some handy functions. What I'm not sure about is how to handle the dependencies if I split up the solution. If someone decides they want to sync the game, how should I ensure they also get 1 and 2?

    - Include the built dependent .dlls in the game's repo?
    - Just document: "you need these other projects, and they must be in a path relative to the game like this"?
    - Just leave it all one giant solution and a single repo?
    - Something I'm not thinking of?

  • Best practice for storing HTML in a database column

    - by tbrandao
    I have an application that modifies a table dynamically (think spreadsheet). Upon saving the form (which the table is part of), I store that changed table (with user modifications) in a database column named html_Spreadhseet, along with the rest of the form data. Right now I'm just storing the HTML in plain-text format with basic escaping of characters. I'm aware that this could be stored as a separate file; the source table (html_workseeet) already is. But from a data-handling perspective it's easier to save the changed HTML table to and from a column, so as to avoid having to come up with a file-management strategy (which folder will this live in, the folder must now be included in backups, security rules now need to apply to files, how to sync DB security with the file system, etc.). To minimize these issues I'm only storing the ... part in the database column. My question is: should I gzip the HTML, or maybe use JSON or some other format to easily store and retrieve the HTML from the database column? What is the best practice for storing HTML content in a database? Or should I just store it as I currently am, as an escaped text column?
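
    For the gzip option, a hedged PHP sketch (the table, column, and mysqli connection are assumptions for illustration): compress into a BLOB column on write and decompress on read:

        // Sketch: gzip the fragment before storing; html_spreadsheet is an
        // assumed BLOB column, and binding the blob as 's' is a simplification.
        $compressed = gzencode($html, 6);
        $stmt = $db->prepare("UPDATE forms SET html_spreadsheet = ? WHERE id = ?");
        $stmt->bind_param('si', $compressed, $id);
        $stmt->execute();

        // Reading it back:
        $html = gzdecode($compressed);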

  • Approach For Syncing One SharePoint List With One or More SharePoint Lists

    - by plattnum
    What would be the best approach or strategy for configuring, customizing, or developing a SharePoint solution that allows me to keep one or more SharePoint lists in sync with a SharePoint list I have designated as a master or parent list? I would like to be able to create a master/parent list of some information that can be extended or used by different parts of the organization without them being able to CRUD any items on the actual columns of the master list. (I have seen some commercial web parts that offer column security on SharePoint lists, and although that's one way of potentially meeting my needs, I would like to explore other options.)

    Scenario: I have a list called FOO:

        FOO
          Title
          Description

    I would like to create a new list BAR based off of FOO (BAR is managed by a sub-organization that doesn't have access to the FOO list):

        BAR
          FOO.Title (read-only)
          FOO.Description (read-only)
          NewColumn1
          NewColumn2

    Actions:

    - Create: if a new item is entered in FOO, I would like the new item added to BAR.
    - Read: N/A.
    - Update: if the Title or Description is changed in FOO, I would like it changed in BAR.
    - Delete: no deletes in this scenario (deletes are handled by the business with a status column).

    Templates with content extraction offer me this, but it's a one-time shot at list creation. I'm just not sure what the best approach or strategy would be for this in MOSS 2007. Thanks!
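
    One hedged sketch of the custom-development route (list names assumed from the scenario above; receiver registration and item matching omitted): an event receiver on the master list that mirrors creates and updates into the child list using the MOSS 2007 object model:

        // Sketch: mirror FOO changes into BAR; locating the mirrored BAR item
        // by a stored source ID is assumed, not shown.
        public class FooSyncReceiver : SPItemEventReceiver
        {
            public override void ItemUpdated(SPItemEventProperties properties)
            {
                SPListItem source = properties.ListItem;
                using (SPSite site = new SPSite(properties.SiteId))
                using (SPWeb web = site.OpenWeb())
                {
                    SPList bar = web.Lists["BAR"];
                    // Find the matching item, copy Title/Description, call Update().
                }
            }
        }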

  • architecture - centralized location for different modules (cms, web applications, ...) - best practice

    - by NicoJuicy
    Let's just say that I want to create a CMS plus other online applications. I want them all to integrate into a central location, but they also have to be available separately (not everyone wants more than the CMS solution).

    Should I create one huge central application that contains the whole database and communicates through a web service with the "standalone but integrated" modules? Or should I create them separately, so the only thing the "central" application does is sync the information (e.g. the CMS and another solution can have the same tables, such as clients or employees)? Or do you have another idea? (I know I'm a little vague, but I can't give a lot of details because of my work contract.)

    If someone has all the packages, it should be possible for the central application to integrate all the modules in one place. And if someone has more than one module, the website should combine them.

    What I thought would be best is that the central location contains only the users and their rights (e.g. cms - all rights, ...), and the information gets synced with every change (module cms adds a new client - store locally and send the data to the central location; central location - send to modules = clients table updated everywhere). This way, even if someone only "bought" a module, it can sync easily through the complete architecture. I hope I made myself clear!

  • C# Type conversion between two similar DataTable objects

    - by Ali
    I have a .NET project using the Sync Framework, with two separate DataSets for MS SQL and Compact SQL. In my base class I have a generic DataTable object. In my derived classes I assign a typed DataTable to the generic object based on whether the application is operating online or offline. Example:

        if (online)
            _dataTable = new MSSQLDataSet.Customer();
        else
            _dataTable = new CompactSQLDataSet.Customer();

    Now everywhere in my code I have to check and cast based on the current network mode, like this:

        public void changeCustomerID(int ID)
        {
            if (online)
                ((MSSQLDataSet.CustomerDataTable)_dataTable)[i].CustomerID = value;
            else
                ((CompactMSSQLDataSet.CustomerDataTable)_dataTable)[i].CustomerID = value;
        }

    But I don't think this is very efficient, and I believe it can be done in a smarter way, using only one line of code, by getting the type of _dataTable dynamically at run time. My problem is at design time: in order to access DataTable properties such as "CustomerID", it has to be cast to either MSSQLDataSet.CustomerDataTable or CompactMSSQLDataSet.CustomerDataTable. Is there a way to have a function or operator that converts _dataTable to its runtime type, but still allows using its design-time properties, which are the same between the two types? Something like:

        ((aType)_dataTable)[i].CustomerID = value;
        // or
        GetRuntimeType(_dataTable)[i].CustomerID = value;
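
    A hedged observation with a one-line sketch: because both typed tables derive from DataTable, the untyped row indexer already works at run time without knowing the concrete type, at the cost of design-time checking:

        // Sketch: column-name access works on the base DataTable either way.
        _dataTable.Rows[i]["CustomerID"] = ID;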

  • Having a problem inserting into a database

    - by neo skosana
    I have a stored procedure:

        CREATE PROCEDURE busi_reg(
            IN instruc VARCHAR(10), IN tble VARCHAR(20), IN busName VARCHAR(50),
            IN busCateg VARCHAR(100), IN busType VARCHAR(50), IN phne VARCHAR(20),
            IN addrs VARCHAR(200), IN cty VARCHAR(50), IN prvnce VARCHAR(50),
            IN pstCde VARCHAR(10), IN nm VARCHAR(20), IN surname VARCHAR(20),
            IN eml VARCHAR(50), IN pswd VARCHAR(20), IN srce VARCHAR(50),
            IN refr VARCHAR(50), IN sess VARCHAR(50))
        BEGIN
            INSERT INTO b_register SET
                business_name = busName, business_category = busCateg,
                business_type = busType, phone = phne, address = addrs,
                city = cty, province = prvnce, postal_code = pstCde,
                firstname = nm, lastname = surname, email = eml,
                password = pswd, source = srce, ref_no = refr;
        END;

    This is my PHP script:

        $busName = $_POST['bus_name'];
        $busCateg = $_POST['bus_cat'];
        $busType = $_POST['bus_type'];
        $phne = $_POST['phone'];
        $addrs = $_POST['address'];
        $cty = $_POST['city'];
        $prvnce = $_POST['province'];
        $pstCde = $_POST['postCode'];
        $nm = $_POST['name'];
        $surname = $_POST['lname'];
        $eml = $_POST['email'];
        $srce = $_POST['source'];
        $ref = $_POST['ref_no'];

        $result2 = $db->query("CALL busi_reg('$instruc', '$tble', '$busName', '$busCateg',
            '$busType', '$phne', '$addrs', '$cty', '$prvnce', '$pstCde', '$nm',
            '$surname', '$eml', '$pswd', '$srce', '$refr', '')");
        if ($result) {
            echo "Data has been saved";
        } else {
            printf("Error: %s\n", $db->error);
        }

    Now the error that I get: Commands out of sync; you can't run this command now
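
    A hedged sketch of the usual mysqli remedy for this error (with a side note: the script assigns $result2 but tests $result): a CALL produces an extra result set that must be drained before the connection can run another query:

        // Sketch: drain every pending result set after CALL so the mysqli
        // connection is ready for the next query.
        $result2 = $db->query("CALL busi_reg(/* ... */)");
        while ($db->more_results() && $db->next_result()) {
            if ($extra = $db->store_result()) {
                $extra->free();
            }
        }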

  • How to write a "thread safe" function in C?

    - by Andrei Ciobanu
    Hello, I am writing some data structures in C, and I've realized that their associated functions aren't thread safe. The code I am writing uses only standard C, and I want to achieve some sort of 'synchronization'. I was thinking of doing something like this:

        enum sync_e { TRUE, FALSE };
        typedef enum sync_e sync;

        struct list_s {
            /* Other stuff */
            struct list_node_s *head;
            struct list_node_s *tail;
            enum sync_e locked;
        };
        typedef struct list_s list;

    i.e., including a "boolean" field in the list structure that indicates the structure's state: locked or unlocked. For example, an insertion function would be rewritten this way:

        int list_insert_next(list *l, list_node *e, int x) {
            while (l->locked == TRUE) {
                /* Wait */
            }
            l->locked = TRUE;
            /* Insert element */
            /* -------------- */
            l->locked = FALSE;
            return (0);
        }

    While operating on the list, the 'locked' field is set to TRUE, not allowing any other alterations. After the operation completes, the 'locked' field is set back to FALSE. Is this approach good? Do you know other approaches (using only standard C)?
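
    A hedged side-by-side sketch: the flag above is a race (two threads can both observe locked == FALSE before either sets it), and strictly standard C only gained a fix with C11's <stdatomic.h>, whose atomic test-and-set makes the spinlock sound:

        #include <stdatomic.h>

        /* Sketch: a C11 spinlock; unlike the enum field, the test-and-set is
         * one indivisible operation, so two threads cannot both acquire it. */
        static atomic_flag list_lock = ATOMIC_FLAG_INIT;

        void list_acquire(void) {
            while (atomic_flag_test_and_set(&list_lock)) {
                /* spin */
            }
        }

        void list_release(void) {
            atomic_flag_clear(&list_lock);
        }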

  • In Django, how to define a "location path" in order to display it to the users?

    - by naw
    I want to put a "location path" in my pages showing where the user is. Supposing the user is viewing one product, it could be Index > Products > ProductName, where each word is also a link to another page. I was thinking of passing the template a variable with the path, like:

        [(_('Index'), 'index_url_name'),
         (_('Products'), 'products_list_url_name'),
         (_('ProductName'), 'product_url_name')]

    But then I wonder: where and how would you define the hierarchy without repeating myself (DRY)? So far I have seen two options:

    1. Define the hierarchy in the urlconf. It could be a good place, since the URL hierarchy should be similar to the "location path", but I will end up repeating fragments of the paths.
    2. Write a context processor that guesses the location path from the URL and includes the variable in the context. But this would imply maintaining an independent hierarchy which will need to be kept in sync with the urls every time I modify them. Also, I'm not sure how to handle the urls that require parameters.

    Do you have any tips or advice about this? Is there any canonical way to do this?
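
    A hedged sketch of option 1's flavor without repeating full paths (all names here are invented): keep one dictionary keyed by URL name, build the links with reverse(), and pass the result to the template:

        # Sketch: the hierarchy lives beside the urlconf; reverse() supplies hrefs.
        # (django.core.urlresolvers was the import path in Django of this era;
        # parents here take no URL arguments; parameterized ones need args too.)
        from django.core.urlresolvers import reverse

        BREADCRUMBS = {
            'product_url_name': ['index_url_name', 'products_list_url_name'],
        }

        def location_path(url_name, current_label, titles):
            trail = [(titles[name], reverse(name)) for name in BREADCRUMBS[url_name]]
            trail.append((current_label, None))  # current page, unlinked
            return trail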

  • ThreadPool design question

    - by ZeroVector
    I have a design question and want some feedback on whether a ThreadPool is appropriate for the client program I am writing. I have a client running as a service, processing database records. Each of these records contains connection information for external FTP sites (basically it is a queue of files to transfer). A lot of them are to the same host, just moving different files, so I am grouping them together by host. I want to be able to create a new thread per host. I really don't care when the transfers finish; they just need to do all the work (or try to do it) they were assigned, and then terminate once they are finished, cleaning up all the resources they used in the process. I anticipate no more than 10-25 connections being established. Once the transfer queue is empty, the program will simply wait until there are records in the queue again. Is the ThreadPool a good candidate for this, or should I use a different approach?

    Edit: For the most part, this is the only significant custom application running on the server.
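
    A hedged sketch of the ThreadPool shape this suggests (TransferJob and TransferFile are invented placeholders): one queued work item per host, with cleanup inside the callback:

        // Sketch: the pool caps concurrency and recycles threads between
        // bursts, which fits fire-and-forget, 10-25 connection workloads.
        foreach (var hostGroup in jobs.GroupBy(j => j.Host))
        {
            ThreadPool.QueueUserWorkItem(state =>
            {
                foreach (TransferJob job in (IEnumerable<TransferJob>)state)
                {
                    TransferFile(job);   // assumed helper owning the FTP client
                }
            }, hostGroup);
        }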

  • GPG error occurs while using "deb file:/local-path-to-repo ..." in /etc/apt/sources.list

    - by Chandler.Huang
    I need to install packages in an environment with no Internet connection. My plan is to download the dist structure from the Internet and then add the local file path to /etc/apt/sources.list. So I downloaded the related structure, which includes ubuntu/dists/precise, precise-backports, precise-proposed, precise-security, and precise-updates, from an FTP mirror server. Then I removed the original sources and added the following to my /etc/apt/sources.list:

        deb file:path-to-local-ubuntu-directory/ precise main restricted multiverse universe
        deb-src file:path-to-local-ubuntu-directory/ precise main restricted multiverse universe

    Then I got the following GPG error after apt-get update:

        root@openstack:/~# apt-get update
        Ign file: precise InRelease
        Get:1 file: precise Release.gpg [198 B]
        Get:2 file: precise Release [50.1 kB]
        Ign file: precise Release
        Get:3 file: precise/main TranslationIndex [3,761 B]
        Get:4 file: precise/multiverse TranslationIndex [2,716 B]
        Get:5 file: precise/restricted TranslationIndex [2,636 B]
        Get:6 file: precise/universe TranslationIndex [2,965 B]
        Reading package lists... Done
        W: GPG error: file: precise Release: The following signatures were invalid:
        BADSIG 0976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]>

    I tried the following steps, found via Google, but in vain:

        sudo apt-get clean
        cd /var/lib/apt
        sudo mv lists lists.old
        sudo mkdir -p lists/partial
        sudo apt-get update

    Is there any way to resolve this? And why does this error occur? Thanks a lot.
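
    A hedged sketch of one common way around this (assuming the mirrored copy is trusted and the apt version supports the option): mark the local source as trusted so signature checking is skipped for it, or alternatively regenerate and sign the Release file with your own key:

        # Sketch: per-source trust override in /etc/apt/sources.list
        deb [trusted=yes] file:path-to-local-ubuntu-directory/ precise main restricted multiverse universe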

  • Large-scale Merge Replication strategy - what can go wrong?

    - by niidto
    Hi, I'm developing a piece of software that uses Merge Replication and SQL Compact on Windows Mobile 6. At the moment it is running on 5 devices reasonably well. The issues I've come up against are as follows:

    - The schema has had to change a lot, and will continue to change as the application evolves. There have been various errors replicating these schema changes down to the devices, with uploads failing due to schema inconsistencies.
    - Subscriptions expiring (after 14 days) and being unable to reinitialize with upload: in other words, potential loss of any unsynced data up to that point.

    Basically, the worst-case scenario is data loss, and when merge replication fails there seems to be no way to get the data back off the device. My method until now has been to drop and re-create the subscription on the device. I don't hear of many people doing this, though it seems to solve everything. The long-term plan is to roll this out to 500+ devices. Any advice from people who have undertaken similar projects, on how to minimise data loss and put appropriate error-handling code in place to recover from sync failures, would be much appreciated. James

  • Syncing Data to Remote Services, Best Practices for Caching?

    - by viatropos
    I want to be able to publish events to Eventbrite, Eventful, and Google Calendar for my Google Apps. Each service has slightly different properties for events. I will be syncing many other things too, such as users with Google Contacts and MailChimp, documents with Google Docs and some other services, etc. So I'm wondering: what is the recommended way of retrieving the data for the end user, so that it's reasonably maintainable and optimized? Here are the options I'm having trouble deciding between:

    1. My app keeps a central database of all the models (Event, Document, User, Form, etc.), and whenever an admin creates an object (e.g. through Eventbrite or through our admin panel), we sync it and store a copy in our local database. When a user goes to /events, the app retrieves the events from the local database.
    2. Read events from a target feed, such as the Eventbrite or Eventful feed, and scrap the local database.

    Basically: if we're storing all of the data on a remote service, do we really need a local database copy of the data? When would we need a local database, and when wouldn't we?

  • Callbacks on GUI Thread

    - by miguel
    We have an external data provider which, in its constructor, takes a callback thread for returning data on. There are some issues in the system which I suspect are related to threading; however, in theory they cannot be, because the callbacks should all be returned on the same thread. My question is: does code like this require thread synchronisation?

        class Foo
        {
            ExternalDataProvider _provider;

            public Foo()
            {
                // The provider's c'tor takes the callback loop as a param
                _provider = new ExternalDataProvider(UILoop);
                _provider.DataArrived += ExternalProviderCallbackMethod;
            }

            public void ExternalProviderCallbackMethod()
            {
                var itemArray = new string[4] { "item1", "item2", "item3", "item4" };
                for (int i = 0; i < itemArray.Length; i++)
                {
                    string s = itemArray[i];
                    switch (s)
                    {
                        case "item1": DoItem1Action(); break;
                        case "item2": DoItem2Action(); break;
                        default: DoDefaultAction(); break;
                    }
                }
            }
        }

    The issue is that, very infrequently, DoItem2Action is executing when DoItem1Action should be executing. Is it at all possible threading is at fault here? In theory, as all callbacks are arriving on the same thread, they should be serialized, right? So there should be no need for thread sync here?
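
    A hedged diagnostic sketch (the field is invented, not part of the original class): record the first callback's thread ID and assert that every subsequent callback arrives on it; if the assert ever fires, the single-thread assumption is wrong and synchronisation is needed:

        private int _callbackThreadId = -1;

        public void ExternalProviderCallbackMethod()
        {
            // Sketch: verify the "all callbacks on one thread" premise.
            int current = Thread.CurrentThread.ManagedThreadId;
            if (_callbackThreadId == -1) _callbackThreadId = current;
            Debug.Assert(current == _callbackThreadId,
                "Callback arrived on a different thread");
            // ... existing switch logic ...
        }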

  • Exporting database from external server (without SSH access)

    - by Derek Carlisle
    Our current website is hosted by the design agency who originally built it; however, we are bringing development of the website in house and therefore need to export the database from their server and import it into ours. We have FTP and phpMyAdmin access, but we don't have SSH access to the server. I was hoping to run a PHP script that would dump the database, compress it, and then copy it across to our server using scp:

        $backupFile = $_SERVER['DOCUMENT_ROOT'] . '/backup' . date("Y-m-d-H-i-s") . '.gz';
        system("mysqldump -h DB_HOST -u DB_USER -pDB_PASS DB_NAME | gzip > $backupFile");
        exec("sshpass -p PASSWORD scp -r -P PORT_NUMBER $backupFile [email protected]:/path/to/directory/");

    I have run this locally from the command line and it worked fine, although I had to install sshpass (the hosting server might not have it installed). I was also hoping to run it from the browser, as I don't have command-line access on the hosting server; however, it didn't work, with no errors produced. Can anyone recommend how I can export from a server I don't have SSH access to and import it to my server? Thanks
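
    A hedged alternative sketch: instead of pushing with scp (which depends on sshpass existing on their host), dump into the web/FTP-accessible directory and pull the file down over the FTP access you already have:

        // Sketch: write the dump where FTP can reach it, then download it
        // from your side and delete it afterwards.
        $backupFile = $_SERVER['DOCUMENT_ROOT'] . '/backup-' . date('Y-m-d-H-i-s') . '.sql.gz';
        system('mysqldump -h DB_HOST -u DB_USER -pDB_PASS DB_NAME | gzip > '
            . escapeshellarg($backupFile));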

  • File IO with Streams - Best Memory Buffer Size

    - by AJ
    I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read/written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time-consuming (it could just be a simple file copy, for example, or may involve encryption...).

    My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app.

    As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP/HTTP servers (over a local network / fast-ish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness and performance)?
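
    A hedged sketch of one common middle ground in .NET (a convention, not a measured optimum): an 80 KB buffer, large enough to amortize syscalls yet below the ~85 KB large-object-heap threshold, passed to both streams and used per read, with progress raised on each pass:

        // Sketch: 80 KB buffer for copy-with-progress; tune per workload.
        const int BufferSize = 80 * 1024;
        using (var input = new FileStream(srcPath, FileMode.Open, FileAccess.Read,
                                          FileShare.Read, BufferSize))
        using (var output = new FileStream(dstPath, FileMode.Create, FileAccess.Write,
                                           FileShare.None, BufferSize))
        {
            var buffer = new byte[BufferSize];
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, read);
                OnProgress(read);   // assumed event-raising helper
            }
        }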

  • Modeling many-to-one with constraints?

    - by Greg Beech
    I'm attempting to create a database model for movie classifications, where each movie could have a single classification from each one of multiple rating systems (e.g. BBFC, MPAA). This is the current design, with all implied PKs and FKs:

        TABLE Movie (
            MovieId INT
        )
        TABLE ClassificationSystem (
            ClassificationSystemId TINYINT
        )
        TABLE Classification (
            ClassificationId INT,
            ClassificationSystemId TINYINT
        )
        TABLE MovieClassification (
            MovieId INT,
            ClassificationId INT,
            Advice NVARCHAR(250) -- description of why the classification was given
        )

    The problem is with the MovieClassification table, whose constraints would allow multiple classifications from the same system, whereas it should ideally permit only zero or one classifications from a given system. Is there any reasonable way to restructure this so that a movie having exactly zero or one classifications from any given system is enforced by database constraints, given the following requirements?

    1. Do not duplicate information that could be looked up (i.e. duplicating ClassificationSystemId in the MovieClassification table is not a good solution, because it could get out of sync with the value in the Classification table).
    2. Remain extensible to multiple classification systems (i.e. a new classification system must not require any changes to the table structure).

    Note also the Advice column: each mapping of a movie to a classification needs a textual description of why that classification was given to that movie. Any design would need to support this.
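
    A hedged sketch of one classic compromise (it does repeat ClassificationSystemId, but a composite foreign key keeps the copy from ever drifting, which may satisfy the spirit of requirement 1):

        -- Sketch: the composite FK forces the pair to stay consistent with
        -- Classification, and the unique constraint enforces one-per-system.
        ALTER TABLE Classification
            ADD CONSTRAINT UQ_Classification
            UNIQUE (ClassificationId, ClassificationSystemId);

        CREATE TABLE MovieClassification (
            MovieId INT REFERENCES Movie (MovieId),
            ClassificationId INT,
            ClassificationSystemId TINYINT,
            Advice NVARCHAR(250),
            FOREIGN KEY (ClassificationId, ClassificationSystemId)
                REFERENCES Classification (ClassificationId, ClassificationSystemId),
            CONSTRAINT UQ_MovieSystem UNIQUE (MovieId, ClassificationSystemId)
        );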

  • All Callbacks on GUI Thread - Multithreading issues possible?

    - by miguel
    We have an external data provider which, in its constructor, takes a callback thread for returning data on. There are some issues in the system which I suspect are related to threading; however, in theory they cannot be, because the callbacks should all be returned on the same thread. My question is: does code like this require thread synchronisation?

        class Foo
        {
            ExternalDataProvider _provider;

            public Foo()
            {
                // The provider's c'tor takes the callback loop as a param
                _provider = new ExternalDataProvider(UILoop);
                _provider.DataArrived += ExternalProviderCallbackMethod;
            }

            public void ExternalProviderCallbackMethod(...)
            {
                // ...(code omitted)
                var itemArray = new string[4] { "item1", "item2", "item3", "item4" };
                for (int i = 0; i < itemArray.Length; i++)
                {
                    string s = itemArray[i];
                    switch (s)
                    {
                        case "item1": DoItem1Action(); break;
                        case "item2": DoItem2Action(); break;
                        default: DoDefaultAction(); break;
                    }
                    // ...(code omitted)
                }
            }
        }

    The issue is that, very infrequently, DoItem2Action is executing when DoItem1Action should be executing. Is it at all possible threading is at fault here? In theory, as all callbacks are arriving on the same thread, they should be serialized, right? So there should be no need for thread sync here?

  • HTTP POST request with cross-origin in JavaScript

    - by Calamarico
    I have a problem with an HTTP POST call in Firefox. I know that when there is a cross-origin request, Firefox first does an OPTIONS request before the POST, to discover the Access-Control-Allow headers. With this code I don't have any problem:

        Net.requestSpeech.prototype.post = function(url, data) {
            if (this.xhr != null) {
                this.xhr.open("POST", url);
                this.xhr.onreadystatechange = Net.requestSpeech.eventFunction;
                this.xhr.setRequestHeader("Content-Type", "application/json; charset=utf-8");
                this.xhr.send(data);
            }
        }

    I tested this code with a simple HTML page that invokes the function. Everything is OK: I get the responses to the OPTIONS and the POST, and I process the response. But I'm trying to integrate this code with an existing application that uses jQuery (I don't know if that is a problem), and when send(data) executes in this case, the browser (Firefox) does the same thing: first it makes an OPTIONS request, but this time it doesn't receive the response from the server, and it logs this message in the console:

        [18:48:13.529] OPTIONS http://localhost:8111/ [undefined 31ms]

    The "undefined" is because no response is received, but the code is the same; I don't know why in this case the OPTIONS gets no response. Does someone have an idea? I debugged my server app and the OPTIONS arrives at the server fine, but it seems like the browser doesn't wait for the response.

    Edit: I think the problem is this: when I run a simple HTML page with a SCRIPT tag that invokes the method making the request, it runs OK; but in this app, which doesn't receive the response, I have a form that fires an onsubmit event. I think the submit event returns very quickly and the browser doesn't have time to get the OPTIONS response.

    Edit: WTF, I resolved the problem by making the POST request synchronous:

        this.xhr.open("POST", url, false);

    The submit returns very quickly and can't wait for the OPTIONS response from the browser. Any ideas about this?
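
    A hedged sketch of what may be the underlying issue (an assumption from the description, not a confirmed diagnosis): the form's native submit navigates the page before the asynchronous preflight returns, so cancelling the default submit should let the request stay async:

        // Sketch: stop the native submit, keep the XHR asynchronous.
        // `speech` is an assumed instance of the Net.requestSpeech wrapper above.
        form.onsubmit = function (event) {
            event.preventDefault();     // or `return false;` in older code
            speech.post(url, data);
            return false;
        };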
