Search Results

Search found 5793 results on 232 pages for 'ftp sync'.


  • Stored procedure for generic MERGE

    - by GilliVilla
    I have a set of 10 tables in a database (DB1), and there are 10 tables with exactly the same schema in another database (DB2) on the same SQL Server 2008 R2 server. The 10 tables in DB1 are frequently updated with data. I intend to write a stored procedure that runs once every day and synchronizes the 10 tables in DB1 with DB2. The stored procedure would make use of the MERGE statement.

    Now, my aim is to make this as generic and parameterized as possible. That is, it should accommodate more tables down the line and accommodate different source and target database names. Definitely no hard-coding is intended. This is my algorithm so far:

    - Take the database names as parameters.
    - Have the first query within the stored procedure return the names of the 10 tables from a lookup table (this can be 10, 20 or whatever).
    - Run a generic MERGE statement that does the sync for each of the above tables (based on the primary key?).

    This last step is where I need the most input; a sketch of one possibility follows below. What is the best way to achieve this stored procedure? SQL syntax would be helpful.
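
    For reference, a minimal sketch of the per-table worker such a procedure could be built around - a driver would cursor over the lookup table and call it once per entry. Everything here is hypothetical: single-column primary keys are assumed, the procedure is assumed to live in the source database (so sys.columns describes the source tables), and identity or computed columns would need extra handling:

        CREATE PROCEDURE dbo.usp_SyncTable
            @SourceDb  SYSNAME,
            @TargetDb  SYSNAME,
            @TableName SYSNAME,
            @KeyColumn SYSNAME
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @Cols NVARCHAR(MAX), @SrcCols NVARCHAR(MAX),
                    @SetCols NVARCHAR(MAX), @Sql NVARCHAR(MAX);

            -- Build the column lists from the catalog so nothing is hard-coded.
            SELECT @Cols    = STUFF((SELECT N',' + QUOTENAME(name)
                                     FROM sys.columns
                                     WHERE object_id = OBJECT_ID(@TableName)
                                     ORDER BY column_id
                                     FOR XML PATH('')), 1, 1, N''),
                   @SrcCols = STUFF((SELECT N',src.' + QUOTENAME(name)
                                     FROM sys.columns
                                     WHERE object_id = OBJECT_ID(@TableName)
                                     ORDER BY column_id
                                     FOR XML PATH('')), 1, 1, N''),
                   @SetCols = STUFF((SELECT N',tgt.' + QUOTENAME(name) + N' = src.' + QUOTENAME(name)
                                     FROM sys.columns
                                     WHERE object_id = OBJECT_ID(@TableName)
                                       AND name <> @KeyColumn
                                     ORDER BY column_id
                                     FOR XML PATH('')), 1, 1, N'');

            -- One MERGE per table, built as dynamic SQL; QUOTENAME guards the identifiers.
            SET @Sql = N'MERGE ' + QUOTENAME(@TargetDb) + N'.dbo.' + QUOTENAME(@TableName) + N' AS tgt'
                     + N' USING ' + QUOTENAME(@SourceDb) + N'.dbo.' + QUOTENAME(@TableName) + N' AS src'
                     + N' ON tgt.' + QUOTENAME(@KeyColumn) + N' = src.' + QUOTENAME(@KeyColumn)
                     + N' WHEN MATCHED THEN UPDATE SET ' + @SetCols
                     + N' WHEN NOT MATCHED BY TARGET THEN INSERT (' + @Cols + N') VALUES (' + @SrcCols + N')'
                     + N' WHEN NOT MATCHED BY SOURCE THEN DELETE;';

            EXEC sys.sp_executesql @Sql;
        END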

    Read the article

  • How to combine two separate unrelated Git repositories into one with single history timeline

    - by Antony
    I have two unrelated (not sharing any ancestor commit) Git repositories. One is a super-repository which contains a number of smaller projects (let's call it repository A). The other is just a makeshift local Git repository for a smaller project (let's call it repository B). Graphically, it looks like this:

        A0-B0-C0-D0-E0-F0-G0-HEAD (repo A)
        A0-B0-C0-D0-E0-F0-G0-HEAD (remote/master bare repo pulled & pushed from repo A)
        A1-B1-C1-D1-E1-HEAD (repo B)

    Ideally, I would really like to merge repo B into repo A with a single history timeline, so it would appear that I originally started the project in repo A. Graphically, this would be the ideal end result:

        A0-A1-B1-B0-D1-C0-D0-E0-F0-G0-E1-H(from repo B)-HEAD (new repo A)
        A0-A1-B1-B0-D1-C0-D0-E0-F0-G0-E1-H(from repo B)-HEAD (remote/master bare repo pulled & pushed from repo A)

    I have been doing some reading on submodules and subtree merges (Pro Git is a pretty good book, by the way), but both of them seem to cater to maintaining two separate branches, with submodules able to pull changes from upstream and subtrees being slightly less of a headache. Both solutions require additional, specialized git commands to handle check-ins and syncing between the master and the submodule/subtree branch. Both solutions also result in multiple timelines (with --squash you even get three timelines with subtree). The closest solution from SO seems to talk about "graft", but is that really it? The goal is to have a single unified repository where I can pull/push check-ins, so that there is no more repo B, just repo A in the end.
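
    For comparison, a minimal sketch of a graft-free route (paths and branch names are hypothetical): fetch B into A, then rebase B's commits onto A's tip. This produces one linear timeline, though with B's commits appended after A's rather than interleaved by date:

        cd /path/to/repoA
        git remote add repoB /path/to/repoB
        git fetch repoB
        git checkout -b import-b repoB/master
        git rebase master      # replay all of B's commits on top of A's tip
        git checkout master
        git merge import-b     # fast-forwards: a single linear history
        git branch -d import-b
        git remote rm repoB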

    Read the article

  • Asynchronous javascript issue

    - by amit
    I am trying to create a function which takes values from various HTML elements on the page to build a string and pass it on to a variable. This works great in all browsers except IE 8 and 9: IE tends to skip the part that fetches the values and goes straight to the variable, finding nothing. Is there a way to sync it all so that it works in IE?

        function seturl() {
            var qstring = returnQString();
            $('span.keyword').text($.trim($('#hdnKeyWord').attr('value')));
            $('input.search_box').attr('value', $.trim($('#hdnKeyWord').attr('value')));
            $('#hdnSearchKeyword').attr('value', $.trim($('#hdnKeyWord').attr('value')));
            $(".search_box").val($.trim($("#hdn_span_hdnKeyWord").text()));
            $(".header_inner input[type='text']").focus();
            $(".search_term input[type='text']").focus();
            $('#locationurl').attr('value', qstring);
        }

        function returnQString() {
            var qstring = $.trim($('#locationurl').attr('init'));                        // initial value of the url
            qstring += "?type=" + $('#hdnSTSearch').attr('value');                       // type of handler hit
            qstring += "&keyword=" + encodeURIComponent($('#hdnKeyWord').attr('value')); // keyword addition
            qstring += "&pagestart=" + $('#current_page').attr('value');                 // pagestart (current page) addition
            qstring += "&pagesize=" + $('#show_per_page').attr('value');                 // per-page size addition
            qstring += "&facets=";                                                       // facet search
            $.each(selectedFilter.items, function (index, value) {
                qstring += value.filter + ",";
            });
            qstring += "&selectedSection=" + selectedSection;                            // section select
            return qstring;
        }

    Read the article

  • perl system command return code

    - by Mel
    I have a script that has been running for over a year and is now failing. It creates a command file:

        open (FTPFILE, ">get_list");
        print FTPFILE "dir *.txt\n";
        print FTPFILE "quit\n";
        close FTPFILE;

    Then I run the system command:

        $command = "ftp ".$Server." < get_list | grep \"\^-\" >new_list";
        $code = system($command);

    The logic then checks:

        if ($code == 0) {
            # do stuff
        } else {
            # log error
        }

    It is logging an error. When I print the $code variable, I am getting 256. I used this to parse the $? variable:

        $exit_value = $? >> 8;
        $signal_num = $? & 127;
        $dumped_core = $? & 128;
        print "Exit: $exit_value Sig: $signal_num Core: $dumped_core\n";

    Results: Exit: 1 Sig: 0 Core: 0

    Thanks for any help/insight.

    Read the article

  • Newbie - eclipse workflow (PHP development)

    - by engil
    Hi all - this is a bit of a newbie question, but I'm hoping I can get some guidance. I've been playing around with Eclipse for a couple of months, yet I'm still not completely comfortable with my setup, and it seems like every time I install it on a new system I end up with different results.

    What I'm hoping to achieve is (I think) fairly standard. In my environment I'd like SVN (currently using Subclipse), FTP support (currently using the Aptana plugin), debugging (going to use XDebug) and all the usual bells and whistles of development (code completion, refactoring, etc.).

    My biggest current issue is how to set up my environment to support both a 'development' and a 'production' server. Optimally, I would be able to work directly against the dev server (Eclipse on my Vista desktop against the VM Ubuntu dev server) and then push to the production server (shared hosting). I'd prefer to work directly against the dev server (with no local project files, just using the Connections provided by Aptana), but I'm guessing this won't allow for code completion or all the other bells and whistles provided for development.

    Any thoughts? This is kind of an open-ended question, but maybe it could be an opportunity for some of you with a great deal of Eclipse experience to describe your setups, so people like me can get some insight into good ways to get set up.

    Read the article

  • How to get timestamp of tick precision in .NET / C#?

    - by Hermann
    Up until now I used DateTime.Now for getting timestamps, but I noticed that if you print DateTime.Now in a loop you will see that it increments in discrete jumps of approx. 15 ms. But for certain scenarios in my application I need to get the most accurate timestamp possible, preferably with tick (= 100 ns) precision. Any ideas?

    Update: Apparently, Stopwatch / QueryPerformanceCounter is the way to go, but it can only be used to measure time. So I was thinking about calling DateTime.Now when the application starts up, letting a Stopwatch run from then on, and adding the elapsed time from the Stopwatch to the initial value returned by DateTime.Now. At least that should give me accurate relative timestamps, right? What do you think about that (hack)?

    NOTE: Stopwatch.ElapsedTicks is different from Stopwatch.Elapsed.Ticks! I used the former, assuming 1 tick = 100 ns, but in that case 1 tick = 1 / Stopwatch.Frequency. So to get ticks equivalent to DateTime, use Stopwatch.Elapsed.Ticks. I just learned this the hard way.

    NOTE 2: Using the Stopwatch approach, I noticed it gets out of sync with the real time. After about 10 hours, it was ahead by 5 seconds. So I guess one would have to resync it every X, where X could be 1 hour, 30 min, 15 min, etc. I am not sure what the optimal timespan for resyncing would be, since every resync will change the offset, which can be up to 20 ms.
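
    For reference, a minimal sketch of the hybrid hack described in the update (assumptions: UTC timestamps are acceptable, and the periodic resync from NOTE 2 is left out):

        using System;
        using System.Diagnostics;

        static class HighResClock
        {
            // Wall-clock baseline captured once at startup...
            private static readonly DateTime Start = DateTime.UtcNow;
            // ...plus a Stopwatch measuring time elapsed since then.
            private static readonly Stopwatch Watch = Stopwatch.StartNew();

            public static DateTime UtcNow
            {
                // Elapsed.Ticks is already in 100 ns DateTime units, unlike
                // ElapsedTicks, which counts in 1 / Stopwatch.Frequency units.
                get { return Start + Watch.Elapsed; }
            }
        }

    A production version would re-capture the DateTime baseline every so often to bound the drift observed in NOTE 2.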

    Read the article

  • running php script manually and getting back to the command line?

    - by Simpson88Keys
    I'm running a PHP script via the command line, and it works just fine, except that when it's finished it doesn't go back to the command line. It just sits there, so I never know when it's done... This is the script:

        $conn_id = ftp_connect(REMOTE) or die("Couldn't connect to ".REMOTE);
        $login_result = ftp_login($conn_id, 'OMITTED', 'OMITTED');
        if ((!$conn_id) || (!$login_result)) die("FTP Connection Failed");

        $dir = 'download';
        if ($dir != ".") {
            if (ftp_chdir($conn_id, $dir) == false) {
                echo ("Change Dir Failed: $dir<BR>\r\n");
                return;
            }
            if (!(is_dir($dir))) mkdir($dir);
            chdir($dir);
        }

        $contents = ftp_nlist($conn_id, ".");
        foreach ($contents as $file) {
            if ($file == '.' || $file == '..') continue;
            if (@ftp_chdir($conn_id, $file)) {
                ftp_chdir($conn_id, "..");
                ftp_sync($file);
            } else {
                ftp_get($conn_id, $file, $file, FTP_BINARY);
            }
        }

        ftp_chdir($conn_id, "..");
        chdir("..");
        ftp_close($conn_id);

    Read the article

  • Is there any way to access codeigniter language and config properties from included javascript files

    - by ubermensch
    Good morning! I'm having great success so far with CodeIgniter. I'm new to PHP and web development in general, but I feel that CodeIgniter is giving me a leg up while I catch up on the basics.

    My question for today is this: I have been happily loading config and lang values from my views for a while now, and everything is working fine. But what about JavaScript files being linked into my views? Is there any way to make the $this->lang->line() and $this->config->item() function references available to me in my JavaScript files?

    I am implementing jQuery client-side validation and would like to pull in my error messages from the server, both to support internationalisation and to make sure that validation degrades gracefully if JavaScript is not available, in that the error messages pushed back into the view from the server-side validation are identical to those displayed dynamically by the jQuery validation. I would not like to have to keep coming back to make sure that these strings are kept in sync. As for internationalisation, I'm fresh out of ideas on how to support that if it turns out that lang and config item strings are completely unavailable from my JS files. Any help you can provide would be greatly appreciated! :)
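
    One common pattern for this - a hedged sketch, not built-in CodeIgniter machinery: have a view emit the needed lines as a JavaScript object before the static .js files load, so plain JavaScript can read them (the line keys here are hypothetical):

        <script type="text/javascript">
            // Rendered by PHP, so the language can be swapped server-side:
            var LANG = <?php echo json_encode(array(
                'required'   => $this->lang->line('required'),
                'min_length' => $this->lang->line('min_length'),
            )); ?>;
        </script>
        <script type="text/javascript" src="/js/validation.js"></script>

    validation.js can then use LANG.required as a jQuery validation message, keeping the client-side strings and the server-side ones in a single place.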

    Read the article

  • How to run stored procedures and ad-hoc scripts asynchronously with "loosely" connected SQL Server 2000 instances

    - by sanga
    Is there a way to initiate a script against an instance of SQL Server when it is not connected, and then have it run on the instance the next time it connects? This needs to happen without any intervention from me.

    Background situation, if you are interested: we have about 120 machines, each with its own instance of SQL Server 2000. Most of them are laptops. We have merge replication set up with each one. From time to time, there is a need to delete "rogue" GUIDs from some tables in some instances that overwrite legitimate records on the main publisher, as well as to perform administrative tasks via stored procedures or ad-hoc SQL statements.

    The problem is there is no telling when each machine is going to be connected to the network. Some folks turn their machines completely off at the end of the day. Others disconnect their machines and take them on business trips, home for the weekend, etc. Did I mention that about 35 of these machines are in utility trucks and "attempt" to sync over a wireless connection?

    Thanks in advance for any assistance or suggestions. Sanga

    Read the article

  • jQuery failure after site went live

    - by Brandon Condrey
    I have been designing a site for weeks using jQuery. I don't have a local server or a testing server, so I just created a directory through FTP, '/testing'. Everything was working great in the testing directory. I attempted to go live tonight by moving all the files in '/testing' to the root directory, and I changed all file paths and script sources accordingly.

    The site loads, but everything related to jQuery is non-functional. The JavaScript console gives errors of (just as an example from a plugin):

        '$.os.name' is not a function

    I'm at a loss for what to do. I changed the paths referencing the jQuery library, installed a fresh copy of jQuery (to a new directory), etc. There is a WordPress installation in a different directory, '/blog'. I've read about some compatibility issues with WordPress, but that seems to be related to using jQuery inside WordPress, which I am not. I'm not sure if any code would be beneficial, since it was all functional in a different directory. Your help is greatly appreciated.

    Read the article

  • How can I get a list of modified records from a SQL Server database?

    - by Pixelfish
    I am currently in the process of revamping my company's management system to run a little more lean in terms of network traffic. Right now I'm trying to figure out an effective way to query only the records that have been modified (by any user) since the last time I asked.

    When the application starts, it loads the job information and caches it locally, like the following:

        SELECT * FROM jobs

    I write out the date/time a record was modified, à la:

        UPDATE jobs SET Widgets=@Widgets, LastModified=GetDate() WHERE JobID=@JobID

    When any user requests the list of jobs, I query for all records that have been modified since the last time I requested the list, like the following:

        SELECT * FROM jobs WHERE LastModified>=@LastRequested

    and store the date/time of the request to pass in as @LastRequested when the user asks again. In theory this will return only the records that have been modified since the last request.

    The issue I'm running into is when the user's date/time is not quite in sync with the server's date/time, and also server load when querying an un-indexed date/time column. Is there a more effective system than querying date/time information?

    Read the article

  • Convert asp.net application to windows forms app

    - by rogdawg
    I have written and deployed an ASP.NET application that is pretty complex. It uses XSL transformations to create web forms for a large variety of data objects. The data comes from the database as XML via a web service.

    Now, I need to create a Windows desktop application that will provide a small subset of the web application's functionality to a user who may not have access to the web (working in remote areas). I will provide the data syncing using the MS Sync Framework, and I will have the desktop app use a local data store.

    I would like to use the same XSLT files in the desktop app that I use in the web app for form creation, so that if changes are made, the desktop app can update itself when it connects and syncs its data. But I am wondering how to replicate the ASP.NET code-behind logic of my web app in the Windows forms. If I use a browser control to render the XSL transformation result, how could I handle click events, etc., in the form (see the sketch below)? Also, can I launch other windows as "dialog boxes" from my Windows forms (I do this in my web app using RadControls functionality)?

    Thanks for any advice you can give.
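
    For what it's worth, a minimal sketch of one way a browser control can surface clicks back to the form, assuming a WinForms WebBrowser control hosts the transformed HTML (the handler name and markup here are hypothetical):

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        // ComVisible so the page's window.external can call back into the form.
        [ComVisible(true)]
        public class HostForm : Form
        {
            private readonly WebBrowser browser = new WebBrowser { Dock = DockStyle.Fill };

            public HostForm()
            {
                Controls.Add(browser);
                browser.ObjectForScripting = this; // window.external now points here
                // In practice this would be the XSL transformation output:
                browser.DocumentText =
                    "<html><body>" +
                    "<button onclick=\"window.external.OnSave('42')\">Save</button>" +
                    "</body></html>";
            }

            // Invoked from the rendered HTML via window.external.
            public void OnSave(string id)
            {
                MessageBox.Show("Save clicked for record " + id);
                // Modal child windows cover the dialog-box question:
                // new SomeOtherForm().ShowDialog(this);
            }

            [STAThread]
            static void Main() { Application.Run(new HostForm()); }
        }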

    Read the article

  • What happens if I just add a second IP to a domain?

    - by tntu
    We have two servers that are in constant sync, and two applications that connect to them, each app to a different server. We devised a new version of those apps that reads a DNS entry, gets a list of IP addresses and tries them in order.

    Now the problem is the old apps. We have noticed that some people still use the old ones even though we have released the new version. If we were to add two IPs to each domain (see the sketch below), would the clients receive the IPs in the order we set them, or in random order? Either way it will still work for us, but I'm just curious. If the first server goes offline, will the client application try the other?

    To be noted for the old version:

    - Interruption does not affect in any way the continuation once the connection is re-established.
    - Each communication is independent of previous ones.
    - Applications connect at set intervals of time, anywhere between 5 seconds and 1 hour.
    - Connection is done simply using an HTTP POST to the URL in question.
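
    For concreteness, a hedged sketch of what the extra record would look like in a zone file (names and addresses are placeholders). With plain round-robin DNS, most resolvers rotate the order of the A records between queries rather than preserving the order they were entered in, and whether an old client retries the second address after a failure depends entirely on its HTTP stack:

        ; two A records for the same name
        app.example.com.    300    IN    A    192.0.2.10
        app.example.com.    300    IN    A    192.0.2.20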

    Read the article

  • Shared Git repo syncing to SVN causing git svn rebase to pollute repo with a lot of no-op merge problems

    - by John K
    This wasn't so bad at the beginning, but now I have hundreds of no-op merge problems (solved by git rebase --skip). I have set up a shared Git repo for my group because it is easier to deal with, but the company uses SVN, so I have to keep SVN in sync with Git. It worked like a dream at first, but after weeks of doing this Git is giving me a lot of the following errors:

        Applying: * making all config actions work
        Using index info to reconstruct a base tree...
        Falling back to patching base and 3-way merge...
        Auto-merging app/controllers/vulnerabilities_controller.rb
        CONFLICT (content): Merge conflict in app/controllers/vulnerabilities_controller.rb
        Auto-merging public/javascripts/network_analysis_vulnerability_config.js
        CONFLICT (content): Merge conflict in public/javascripts/network_analysis_vulnerability_config.js
        Failed to merge in the changes.
        Patch failed at 0046 * making all config actions work

    My workflow:

        git co master
        git pull origin
        git svn rebase
        ... deal with no-op merge problems ...
        git svn dcommit
        git pull origin
        git push origin

    The problem is that what is in SVN is correct, so I use git rebase --skip, but I have to do that hundreds of times before I can dcommit. How do I clear these merge problems permanently?

    Read the article

  • SharePoint 2007 - Content deployment and swapping content database

    - by Mel Lota
    Hi all, I'm currently working on a SharePoint 2007 site which is set up to allow clients to author content on a staging server; this content is then automatically pushed up to the live environment via content deployment. The content deployment is set up in 'Content deployment jobs and paths' in Central Admin.

    Now, the problem I've got is that historically there seems to have been a mixture of full and incremental deployments done to the live site collection, which, according to Stefan Goßner's best-practices post (http://blogs.technet.com/stefan_gossner/pages/content-deployment-best-practices.aspx), is a bad idea, because things soon become out of sync. It's gotten to the point where the content deployment has just stopped working, and incremental or full deployments are throwing errors in the logs.

    What I'm thinking is that I probably need to perform a full content deployment to an empty site collection and then somehow swap the new, clean site collection with the current live one. I was wondering if anybody has any experience with this and could provide any pointers. I'm currently investigating the feasibility of performing the clean content deployment and then switching the live content database with the new one; however, in my tests I've found that as soon as I switch content databases, the incremental deployment still fails.

    Any help much appreciated. (Note: I did post this on SharePoint Overflow as well, but thought I'd put it on here in case anybody else has any ideas.) Cheers

    Read the article

  • How to make chrome.tabs.update work with content scripts

    - by user1673772
    I'm working on a little extension for Google Chrome. I want to create a new tab, go to the URL "sample"+i+".com", launch a content script on this URL, update the current tab to "sample"+(i+1)+".com", and launch the same script again. I looked at the Q&As available on Stack Overflow and I googled it, but I didn't find a solution that works.

    This is my current code for background.js (it works): it creates two tabs (i=21 and i=22) and loads my content script for each URL. When I tried to use chrome.tabs.update, Chrome launched a tab directly with i = 22 (and the script worked only one time):

        function extraction(tab) {
            for (var i = 21; i < 23; i++) {
                chrome.storage.sync.set({'extraction': 1}, function() {}); // for my content script
                chrome.tabs.create({url: "http://example.com/" + i + ".html"}, function() {});
            }
        }

        chrome.browserAction.onClicked.addListener(function(tab) { extraction(tab); });

    If anyone can help me - the content script and manifest.json are not the problem. I want to do this 15,000 times, so I can't do it any other way. Thank you.
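
    As an illustration of the sequential pattern this is reaching for - a hedged sketch, not a drop-in fix. It assumes the content script ends by calling chrome.runtime.sendMessage({done: true}), and the URL scheme and counts are placeholders:

        // background.js - advance to the next URL only after the
        // content script reports that it finished the current page.
        var i = 21, last = 15000, workTabId = null;

        function loadNext() {
            if (i >= last) { return; }
            var url = "http://example.com/" + (i++) + ".html";
            if (workTabId === null) {
                chrome.tabs.create({ url: url }, function (tab) { workTabId = tab.id; });
            } else {
                chrome.tabs.update(workTabId, { url: url });
            }
        }

        chrome.runtime.onMessage.addListener(function (msg, sender) {
            if (msg.done && sender.tab && sender.tab.id === workTabId) {
                loadNext();
            }
        });

        chrome.browserAction.onClicked.addListener(function () { loadNext(); });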

    Read the article

  • AppleScript: open frontmost file with another application

    - by jacobianism
    I'd like to write an AppleScript program to do the following (Automator would be fine too): I want to open the currently active TextMate file (possibly there are several tabs open, and other windows) with the application Transmit 2. (This will upload the file over FTP using Transmit's DockSend feature.) Here I've used a specific application (TextMate), but ideally I'd like it to work for any file currently active in any application. Ultimately I will assign a keyboard shortcut to run it. Here's what I have so far:

        tell application (path to frontmost application as text)
            set p to path of document 1
        end tell
        tell application "Finder"
            open POSIX file p using "Transmit 2"
        end tell

    I've tried many variants of this and nothing works.

    EDIT: I have found this page: http://wiki.macromates.com/Main/Howtos and someone has made exactly the script I'm looking for:

        tell application "Transmit" to open POSIX file "$TM_FILEPATH"

    This is for Transmit [not 2] and, I think, for TextMate pre-v2. When using Transmit 2, I get the error:

        Transmit 2 got an error: AppleEvent handler failed.

    One of the updates to v2 has broken it (not sure which one).

    Read the article

  • Is there a way to identify that a file has been modified and moved?

    - by Eric
    I'm writing an application that catalogs files and attributes them with extra metadata through separate "sidecar" files. If changes to the files are made through my program, then it is able to keep everything in sync between them and their corresponding metadata files. However, I'm trying to figure out a way to deal with someone modifying the files manually while my program is not running.

    When my program starts up, it scans the file system and compares the files it finds to its previous record of what files it remembers being there. It's fairly straightforward to update after a file has been deleted or added. However, if a file was moved or renamed, my program sees that as the old file being deleted and a new file being added. Yet I don't want to lose the association between the file and its metadata.

    I was thinking I could store a hash of each file so I could check whether newly found files were really previously known files that had been moved or renamed. However, if the file is both moved/renamed and modified, the hash would not match either. So is there some other unique identifier of a file that I can track which stays with it even after it is renamed, moved, or modified?
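
    For illustration, a minimal sketch under one stated assumption - a POSIX filesystem, where a file's inode number survives both a rename and an in-place edit (NTFS has an analogous file ID; neither survives copying to another volume):

        import os

        # Create a file and record its inode number.
        with open("notes.txt", "w") as f:
            f.write("original\n")
        before = os.stat("notes.txt").st_ino

        # Rename it AND modify it, the case a content hash cannot handle.
        os.rename("notes.txt", "renamed-notes.txt")
        with open("renamed-notes.txt", "a") as f:
            f.write("modified\n")

        after = os.stat("renamed-notes.txt").st_ino
        assert before == after  # same inode: same underlying file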

    Read the article

  • Managing My Database in Source Control

    - by Jason
    As I am working with a new database project (within VS2008), and as I have never developed a database from scratch, I immediately began looking into how to manage a database within source control (in this case, Subversion). I found some information on SO, including this post: Keeping development databases in multiple environments in sync. One of the answers in particular pointed to a number of links, all of which had good, useful information.

    I was reading a series of posts by K. Scott Allen which describe how he manages database change. From my reading (and please pardon the noobishness of my question), it seems as though the database itself is never checked into a repository. Rather, scripts that can build the database, along with test data (which is also populated from scripts), are checked into the repository. Ultimately, this means that when a developer is testing his or her app, these scripts, which are part of the build process, are run. This ensures that the database is up to date, but it is also run locally on every developer's machine.

    This makes sense to me (if I am indeed reading that correctly). However, if I am missing something, I would appreciate correction or additional guidance. In addition, another question I wanted to ask: does this also mean that I should NOT check in the mdf or ldf files that are created from Visual Studio?

    Thanks for any help and additional insight. Always appreciated.

    Read the article

  • Best Approach for Checking and Inserting Records

    - by nevets1219
    In one of our existing C programs, the purpose is:

        open connection to DB
        for record in all_records:
            if record contains certain data:
                if record is NOT in table A:                      // see #1
                    insert record information into tables A and B // see #2
        close connection to DB

    #1: select field from table where field=XXX
    #2: two inserts

    This is typically done every X months to sync everything up, or so I'm told. I've also been told that this process takes roughly a couple of days. There are (currently) at most 2.5 million records (though not necessarily all 2.5M will be inserted). One of the tables contains 10 fields and the other 5.

    There isn't much to be done about iterating through the records, since that part can't be changed at the moment. What I would like to do is speed up the part where I query MySQL. I'm not sure if I have left out any important details - please let me know! I'm also no SQL expert, so feel free to point out the obvious. I thought about:

    - Putting all the inserts into a transaction (at the moment I'm not sure how important it is for the transaction to be all-or-none, or if this affects performance).
    - Using Insert X Where Not Exists Y (as sketched below).
    - LOAD DATA INFILE (but that would require I create a (possibly) large temp file).
    - I read that (hopefully someone can confirm) I should drop indexes so they aren't re-calculated.

    mysql Ver 14.7 Distrib 4.1.22, for sun-solaris2.10 (sparc) using readline 4.3
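
    To make the insert-where-not-exists idea concrete, a hedged sketch (table and column names are hypothetical); it folds the per-record existence check and the insert into a single statement, removing one round trip per record:

        INSERT INTO table_a (id, field1)
        SELECT 42, 'some value'
        FROM DUAL
        WHERE NOT EXISTS (SELECT 1 FROM table_a WHERE id = 42);

    Alternatively, with a UNIQUE key on the column being checked, INSERT IGNORE gives the same skip-if-present behavior in one statement.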

    Read the article

  • Problems setting vertical scrollbar value in a datagrid (old one, not the better DataGridView).

    - by user365581
    I need to save the selected row and the vertical scrollbar's position after a refresh. This is how I do it:

        int currRow = myGrid.CurrentRowIndex;
        int vScrollPos = ((ScrollBar)myGrid.Controls[1]).Value;

        // some code that refreshes the data, among other things

        myGrid.CurrentRowIndex = currRow;  // this sets the property
        myGrid.Select(currRow);            // this selects in the UI (both commands required)
        ((ScrollBar)myGrid.Controls[1]).Value = vScrollPos;

    Here's my problem: the grid always jumps to a place where the selected row is at the bottom. Setting the current row makes this happen - similar to EnsureVisible in newer grid implementations. But after that there's the vertical scrollbar repositioning, and it just doesn't work right. In debug I see that the scrollbar's value gets updated. In the UI, if I hit the down/up arrow on the scrollbar it suddenly jumps to the right place - but if I don't click anything, the grid is just in the wrong position.

    I tried refreshing the grid/scrollbar to force a redraw, but it doesn't help. The actual grid position is just not in sync with the vertical scrollbar's value. Any ideas?

    Read the article

  • actionscript find and convert text to url

    - by gravesit
    I have this script that grabs a Twitter feed and displays it in a little widget. What I want to do is look at the text for a URL and convert that URL to a link.

        public class Main extends MovieClip {
            private var twitterXML:XML; // This holds the xml data

            public function Main() {
                // This is Untold Entertainment's Twitter id. Did you grab yours?
                var myTwitterID = "username";
                // Fire the loadTwitterXML method, passing it the url to your Twitter info:
                loadTwitterXML("http://twitter.com/statuses/user_timeline/" + myTwitterID + ".xml");
            }

            private function loadTwitterXML(URL:String):void {
                var urlLoader:URLLoader = new URLLoader();
                // When all the junk has been pulled in from the url, we'll fire finishedLoadingXML:
                urlLoader.addEventListener(Event.COMPLETE, finishLoadingXML);
                urlLoader.load(new URLRequest(URL));
            }

            private function finishLoadingXML(e:Event = null):void {
                // All the junk has been pulled in from the xml! Hooray!
                // Remove the eventListener as a bit of housecleaning:
                e.target.removeEventListener(Event.COMPLETE, finishLoadingXML);
                // Populate the xml object with the xml data:
                twitterXML = new XML(e.target.data);
                showTwitterStatus();
            }

            private function addTextToField(text:String, field:TextField):void {
                /* Regular expression for replacement, g: replace all, i: case-insensitive.
                   Finds all strings starting with "http://", "https://", "ftp://" or "file://",
                   followed by any number of characters that are neither spaces nor new lines. */
                var reg:RegExp = /(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/ig;
                // Replace. Note: "$&" stands for the matched string. Strings are immutable,
                // so the result of replace() must be assigned back:
                text = text.replace(reg, "<a href=\"$&\">$&</a>");
                field.htmlText = text;
            }

            private function showTwitterStatus():void {
                // Uncomment this line if you want to see all the fun stuff Twitter sends you:
                //trace(twitterXML);
                // Prep the text field to hold our latest Twitter update:
                twitter_txt.wordWrap = true;
                twitter_txt.autoSize = TextFieldAutoSize.LEFT;
                // Populate the text field with the first element in the status.text nodes:
                addTextToField(twitterXML.status.text[0], twitter_txt);
            }
        }

    Read the article

  • How to get a current snapshot of a MySQL table and store it in a CSV file (after creating it)?

    - by Rachel
    I have a large database table, approximately 5 GB, and I want to get a current snapshot of it using "SELECT * FROM MyTableName". I am using PDO in PHP to interact with the database. So preparing a query and then executing it:

        // Execute the prepared query
        $result->execute();
        $resultCollection = $result->fetchAll(PDO::FETCH_ASSOC);

    is not an efficient way, as lots of memory is used for storing the data in the associative array - approximately 5 GB.

    My final goal is to collect the data returned by the SELECT query into a CSV file and put the CSV file at an FTP location from where the client can get it. Another option I thought of was:

        SELECT * INTO OUTFILE "c:/mydata.csv"
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY "\n"
        FROM my_table;

    But I am not sure this would work, as I have a cron job that initiates the complete process and we do not have a CSV file - so basically, for this approach, the PHP script would have to:

    - Create a CSV file.
    - Do a SELECT query on the database.
    - Store the SELECT query result in the CSV file.

    What would be the best or most efficient way to do this kind of task? Any suggestions!
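
    A third route worth noting - a minimal sketch, assuming the PDO MySQL driver and placeholder credentials and paths: switch off buffered queries so rows stream from the server, and write each row out with fputcsv() so only one row is held in memory at a time:

        <?php
        $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
        // Unbuffered mode streams rows instead of loading the full result set.
        $pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

        $out = fopen('/tmp/mydata.csv', 'w');
        $stmt = $pdo->query('SELECT * FROM MyTableName');
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            fputcsv($out, $row); // one row in memory at a time
        }
        fclose($out);
        // The finished file can then be pushed to the FTP location, e.g. with ftp_put().
        ?>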

    Read the article

  • how to fetch data using jquery

    - by user3566029
    I have tried to connect to my index.php file on 000webhost.com using jQuery. From the Site menu I connected to 000webhost using the FTP details available on 000webhost.com. When I try to read data via jQuery, it doesn't work. Can anyone please tell me what I need to change? Do I need to connect my Dreamweaver database to the webhost? If yes, please explain. This is what I have done:

    index.php:

        $mysql_host = "mysql0.000webhost.com";
        $mysql_database = "a000000_mydb";
        $mysql_user = "a000000_root";
        $mysql_password = "******";

        $conn = @mysql_connect($mysql_host, $mysql_user, $mysql_password) or die('aa');
        mysql_select_db($mysql_database, $conn) or die('eoor on db');

        $quer = mysql_query("SELECT * FROM sam");
        $res = array();
        while ($row = mysql_fetch_row($quer)) {
            $res[] = $row;
        }
        print(json_encode($res));
        return json_encode($res);
        mysql_close();

    index.html:

        $.get('public_html/index.php', function( data ) {
            alert( 'Successful ' + data);
        });

    Read the article

  • Compile Apache 2.4.3 on Centos 6.2 (64bit)

    - by RiseCakoPlusplus
    I am attempting to compile Apache 2.4.3 with apr-1.4.6 and apr-util-1.5.1 on CentOS 6.2 (64-bit):

        ./configure --build=x86_64-unknown-linux-gnu --host=x86_64-unknown-linux-gnu --target=x86_64-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --cache-file=../config.cache --with-libdir=lib64 --with-config-file-path=/etc --with-config-file-scan-dir=/etc/php.d --disable-debug --with-pic --disable-rpath --without-pear --with-bz2 --with-exec-dir=/usr/bin --with-freetype-dir=/usr --with-png-dir=/usr --with-xpm-dir=/usr --enable-gd-native-ttf --with-t1lib=/usr --without-gdbm --with-gettext --with-gmp --with-iconv --with-jpeg-dir=/usr --with-openssl --with-zlib --with-layout=GNU --enable-exif --enable-ftp --enable-magic-quotes --enable-sockets --with-kerberos --enable-ucd-snmp-hack --enable-shmop --enable-calendar --with-libxml-dir=/usr --enable-xml --with-system-tzdata --with-mhash --with-apxs2=/usr/sbin/apxs --libdir=/usr/lib64/php --enable-pdo=shared --with-mysql=shared,/usr --with-mysqli=shared,/usr/lib64/mysql/mysql_config --with-pdo-mysql=shared,/usr/lib64/mysql/mysql_config --without-pdo-sqlite --without-gd --disable-dom --disable-dba --without-unixODBC --disable-xmlreader --disable-xmlwriter --without-sqlite3 --disable-phar --disable-fileinfo --disable-json --without-pspell --disable-wddx --without-curl --disable-posix --disable-sysvmsg --disable-sysvshm --disable-sysvsem
        ./configure --with-included-apr --with-included-apr-util

    When I issue make, this happens:

        /root/httpd-2.4.3/srclib/apr/libtool: line 5989: cd: yes/lib: No such file or directory
        libtool: link: cannot determine absolute directory name of `yes/lib'
        make[3]: *** [libaprutil-1.la] Error 1
        make[3]: Leaving directory `/root/httpd-2.4.3/srclib/apr-util'
        make[2]: *** [all-recursive] Error 1
        make[2]: Leaving directory `/root/httpd-2.4.3/srclib/apr-util'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/root/httpd-2.4.3/srclib'
        make: *** [all-recursive] Error 1

    Anything I missed?

    Read the article
