Search Results

Search found 1968 results on 79 pages for 'pickle dump'.

Page 67/79

  • Why does Android allocate more memory than needed when loading images

    - by Simon
    Folks, I don't think that this is a duplicate and it is NOT one of those "how do I avoid OOMs" questions. This is a genuine quest for knowledge so hold off on those down votes please... Imagine I have a JPEG of 500x500 pixels. I load it as ARGB_8888 which is as "bad as it gets". I would expect Android to allocate 500x500x4 bytes = a little under 1MB; however, look at a heap dump and you will see that Android allocates significantly more, often factors of 5-10 times greater. You frequently see questions on here about OOMs where the stack trace shows a heap request of say 15MB and it is ALWAYS much larger than is required simply to hold the bytes of the image. The OP usually catches some downvotes then is bombarded with stock answers and comments about using less memory (thanks Romain!) and about scaling. I think there is more than meets the eye here. Anybody know why this is? If there is no apparent answer, I will put together an SSCCE if it helps. PS. I assume that JPEG vs PNG etc is irrelevant since we're talking about the memory usage of the backing bitmap which is simply x times y times BPP - or am I being slow?
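    One likely culprit (an assumption, not something the question confirms) is resource density scaling: a JPEG decoded with BitmapFactory.decodeResource() from res/drawable is treated as mdpi and scaled up for hdpi/xhdpi screens, so the backing bitmap can easily be several times 500x500. A minimal Java sketch for checking this, with a hypothetical resource id:

        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inScaled = false;  // decode at the source's pixel size, no density scaling
        Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.photo, opts);

        // Bytes actually backing the bitmap (rows can be padded beyond width * 4).
        int bytes = bmp.getRowBytes() * bmp.getHeight();
        Log.d("BitmapSize", bmp.getWidth() + "x" + bmp.getHeight() + " = " + bytes + " bytes");

    Comparing the logged size with and without inScaled shows whether scaling, rather than the decode itself, accounts for the extra allocation.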

    Read the article

  • entity framework insert bug

    - by tmfkmoney
    I found a previous question which seemed related but there's no resolution and it's 5 months old so I've opened my own version. http://stackoverflow.com/questions/1545583/entity-framework-inserting-new-entity-via-objectcontext-does-not-use-existing-e When I insert records into my database with the following it works fine for a while and then eventually it starts inserting null values in the referenced field. This typically happens after I do an update on my model from the database although not always after I do an update. I'm using a MySQL database for this. I have debugged the code and the values are being set properly before the save event. They're just not getting inserted properly. I can always fix this issue by re-creating the model without touching any of my code. I have to recreate the entire model, though. I can't just dump the relevant tables and re-add them. This makes me think it doesn't have anything to do with my code but something with the entity framework. Does anyone else have this problem and/or solved it? using (var db = new MyModel()) { var stocks = from record in query let ticker = record.Ticker select new { company = db.Companies.FirstOrDefault(c => c.ticker == ticker), price = Convert.ToDecimal(record.Price), date_stamp = Convert.ToDateTime(record.DateTime) }; foreach (var stock in stocks) { if (stock.company != null) { var price = new StockPrice { Company = stock.company, price = stock.price, date_stamp = stock.date_stamp }; db.AddToStockPrices(price); } } db.SaveChanges(); }

    Read the article

  • jquery for ruby on rails

    - by Cezar
    Hello, I am trying to use this code http://gist.github.com/110410 to dump Prototype in favor of jQuery but I do have a problem. This is my HTML (a link_to generated link): <a onclick="var f = document.createElement('form'); f.style.display = 'none'; this.parentNode.appendChild(f); f.method = 'POST'; f.action = this.href;var s = document.createElement('input'); s.setAttribute('type', 'hidden'); s.setAttribute('name', 'authenticity_token'); s.setAttribute('value', 'Mi6RcR6YDyvg2uNwGrpbeIJutSHa2fYboU37wSDE7AU='); f.appendChild(s);f.submit();return false;" class="post add_to_cart " href="/line_items?product_id=547">Add to cart</a> Issue: Everything works as it should except that the page does a reload. I suspect that the submit goes through, which causes the page reload. Is there an elegant way to prevent that? return false; doesn't seem to cut it in this case.

    Read the article

  • Send html array as post variable using Request.JSON

    - by ian
    I have this HTML: First name: <input type='text' name='first_name' value='' /><br/> Last name: <input type='text' name='last_name' value='' /><br/> <input type='checkbox' name='category[]' value='Math' /> Math<br/> <input type='checkbox' name='category[]' value='Science' /> Science<br/> <input type='checkbox' name='category[]' value='History' /> History<br/> etc..... I want to send (using the POST method) the selected categories (category[]) via MooTools Ajax so that if I dump the $_POST variable on the server I will get something like: array(1) { [category]=> array(2) { [0]=> string(4) "Math" [1]=> string(7) "History" } } What should the JavaScript (MooTools) code for it be? Below is my partial code. new Request.JSON({url: '/ajax_url', onSuccess: function(){ alert('success'); } }).post(???); Note that I don't want to send first_name and last_name fields. I only want to send the category field which is an HTML array.
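    A minimal MooTools sketch of one way to do this (assuming the markup above; the URL is the placeholder from the question): collect only the checked category boxes into a query string, so PHP receives $_POST['category'] as an array.

        var pairs = [];
        $$('input[type=checkbox]').each(function (box) {
            if (box.name == 'category[]' && box.checked) {
                pairs.push('category[]=' + encodeURIComponent(box.value));
            }
        });
        new Request.JSON({
            url: '/ajax_url',
            onSuccess: function () { alert('success'); }
        }).post(pairs.join('&'));

    Because first_name and last_name are never added to the string, only the categories are posted.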

    Read the article

  • What is a reasonable OSGi development workflow?

    - by levand
    I'm using OSGi for my latest project at work, and it's pretty beautiful as far as modularity and functionality. But I'm not happy with the development workflow. Eventually, I plan to have 30-50 separate bundles, arranged in a dependency graph - supposedly, this is what OSGi is designed for. But I can't figure out a clean way to manage dependencies at compile time. Example: You have bundles A and B. B depends on packages defined in A. Each bundle is developed as a separate Java project. In order to compile B, A has to be on the javac classpath. Do you: Reference the file system location of project A in B's build script? Build A and throw the jar into B's lib directory? Rely on Eclipse's "referenced projects" feature and always use Eclipse's classpath to build (ugh) Use a common "lib" directory for all projects and dump the bundle jars there after compilation? Set up a bundle repository, parse the manifest from the build script and pull down the required bundles from the repository? No. 5 sounds the cleanest, but also like a lot of overhead.
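    One common way to get option 5 without writing the plumbing yourself (an assumption about the toolchain, not something the question specifies) is to make each bundle a Maven module and let the Felix maven-bundle-plugin generate the manifest, so bundle B depends on bundle A exactly like any other Maven artifact. A sketch of the relevant fragment of a module's pom.xml, with placeholder package names:

        <packaging>bundle</packaging>
        <build>
          <plugins>
            <plugin>
              <groupId>org.apache.felix</groupId>
              <artifactId>maven-bundle-plugin</artifactId>
              <extensions>true</extensions>
              <configuration>
                <instructions>
                  <Export-Package>com.example.bundlea.api</Export-Package>
                </instructions>
              </configuration>
            </plugin>
          </plugins>
        </build>

    The compile-time classpath then comes from the local repository rather than from Eclipse project references or a shared lib directory.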

    Read the article

  • How would you start automating my job?

    - by Jurily
    At my new job, we sell imported stuff. In order to be able to sell said stuff, currently the following things need to happen for every incoming shipment: Invoice arrives, in the form of an email attachment, Excel spreadsheet Monkey opens invoice, copy-pastes the relevant part of three columns into the relevant parts of a spreadsheet template, where extremely complex calculations happen, like =B2*550 Monkey sends this new spreadsheet to boss (email if lucky, printer otherwise), who sets the retail price Monkey opens the reply, then proceeds to input the data into the production database using a client program that is unusable on so many levels it's not even worth detailing Monkey fires up HyperTerminal, types in "AT", disconnect Monkey sends text messages and emails to customers using another part of the horrible client program, one at a time I want to change Monkey from myself to software wherever possible. I've never written anything that interfaces with email, Excel, databases or SMS before, but I'd be more than happy to learn if it saves me from this. Here's my uneducated wishlist: Monkey asks Thunderbird (mail server perhaps?) for the attachment Monkey tells Excel to dump the spreadsheet into a more Jurily-friendly format, like CSV or something Monkey parses the output, does the complex calculations // TODO: find a way to get the boss-generated prices with minimal manual labor involved Monkey connects to the database, inserts data Monkey spams customers Is all this feasible? If yes, where do I start reading? How would you improve it? What language/framework do you think would be ideal for this? What would you do about the boss?
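    A minimal Python sketch of steps 2-3 of the wishlist only (the "dump to CSV, parse, apply the calculation" part), under the assumption that the invoice has been exported to CSV and that column B holds the cost price; the =B2*550 markup and the file name are placeholders taken from the example above:

        import csv

        def priced_rows(path, markup=550):
            """Yield (item, cost, proposed_price) for each data row of the exported invoice."""
            with open(path, newline="") as f:
                for row in csv.reader(f):
                    try:
                        cost = float(row[1])          # column B of the spreadsheet
                    except (IndexError, ValueError):
                        continue                      # skip headers and blank lines
                    yield row[0], cost, cost * markup

        for item, cost, price in priced_rows("invoice.csv"):
            print(item, cost, price)

    The email, database and SMS steps would bolt onto this the same way: each one replaces a Monkey step with a small function.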

    Read the article

  • How I May Have Taken A Wrong Path in Programming

    - by Ygam
    I am in a major stump right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I observed that I have the following attitude in programming: I tend to be more of a purist, scorning inelegant approaches to solving problems using code I tend to look at anything in a large scale, planning everything before I start coding, either in simple flowcharts or complex UML charts I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development times I am obsessed with good directory structures, file naming conventions, class, method, and variable naming conventions I tend to always want to study something new, even, as I said, at the cost of missing deadlines I tend to see software development as something to engineer, to architect; that is, seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling) i.e. the OOP way of thinking I tend to combine OOP and procedural coding whenever I see fit I want my code to execute fast (thus the elegant approaches and refactoring) This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they have been programming since our first year in college). By the other way around I mean, they fire up coding, get the job done much faster because they don't have to really look at how clean their code is or how elegant their algorithms are, they don't bother with OOP however big their projects are, they mostly use web APIs, piece them together and voila! Working code! Clients are happy, they get paid fast, at the expense of really unmaintainable or hard-to-read code that lacks structure and conventions, or slow executions of certain actions (which the common reasoning against would be that internet connections are much faster these days, hardware is more powerful). The excuse I often receive is clients don't care about how you write the code, but they do care about how long you take to deliver it. If it works then all is good. Now, might my "purist" approach to programming have been the wrong way to start programming? Should I just dump these purist concepts and just code the hell up because I have seen it: clients don't really care how beautifully coded it is?

    Read the article

  • Send file FTP over SSL with custom port number

    - by JM4
    I have asked the question before but in a different manner. I am taking form data, compiling it into a temporary CSV file and trying to send it over to a client via FTP over SSL (this is the only route I am interested in hearing solutions for; unless there is a workaround to doing this, I cannot make changes). I have tried the following: ftp_connect - nothing happens, the page just times out ftp_ssl_connect - nothing happens, the page just times out curl library - same thing; given the URL it also gives an error. I am given the following information: FTPS Server IP Address TCP Port (1234) Username Password Data Directory to dump file FTP Mode: Passive Here is my very, very basic code (which I believe should initiate a connection at minimum): <?php $ftp_server = "00.000.00.000"; //masked for security $ftp_port = "1234"; // masked but not 990 $ftp_user_name = "username"; $ftp_user_pass = "password"; // set up basic ssl connection $conn_id = ftp_ssl_connect($ftp_server, $ftp_port, "20"); // login with username and password $login_result = ftp_login($conn_id, $ftp_user_name, $ftp_user_pass); echo ftp_pwd($conn_id); // / echo "hello"; // close the ssl connection ftp_close($conn_id); ?> When I run this over a SmartFTP client, everything works just fine. I just can't get it to work using PHP (which is a necessity). Has anybody had success doing this in the past? I would be very interested to hear your approach.
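    A minimal sketch of the next steps after the connection above, assuming the spec really does require passive mode (the snippet never switches to it) and that ftp_ssl_connect() is available, i.e. PHP's FTP extension was built with OpenSSL support. The remote directory and local file name are placeholders:

        <?php
        $conn_id = ftp_ssl_connect($ftp_server, (int) $ftp_port, 20);
        if ($conn_id === false) {
            die('Could not connect');
        }
        if (!ftp_login($conn_id, $ftp_user_name, $ftp_user_pass)) {
            die('Login failed');
        }
        ftp_pasv($conn_id, true); // passive mode, as required by the client's spec
        $ok = ftp_put($conn_id, '/data/outgoing/export.csv', $local_csv_path, FTP_ASCII);
        echo $ok ? 'uploaded' : 'upload failed';
        ftp_close($conn_id);
        ?>

    If the connect call itself times out, the port or an outbound firewall is the more likely suspect, since the same credentials work in SmartFTP.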

    Read the article

  • Tool to migrate a repo with only read access?

    - by Corazu
    I have a subversion repository located on our school servers from a project my friends and I worked on the past semester. We want to take the project and polish it and make it more of a portfolio item and I'm not quite sure that the repo will stay intact on the school servers (they usually wipe it after a few semesters). I don't have access to dump the repo, although I've sent an email to see if I can get one - which would be the best solution. I tried svnsync but the school server doesn't support the replay command (probably turned off), so that won't work for me. Now, theoretically, wouldn't it be possible to (manually or programmatically) check out each revision of the repository and check it in to a new repository on our own server? I think it would work - there's only 3 of us, and there are no working copy conflicts that we'd have to worry about; we just want a copy of all the history in the repo in case we need to go back and look at it. That being said, before I go and reinvent a wheel by writing a script to do it - does something like this already exist? I figure there's already a tool to do it, or that someone has written a script to do it.

    Read the article

  • Getting list of Facebook friends with latest API

    - by Eric
    I'm using the most recent version of the Facebook SDK (which lets me connect to something called the 'Graph API', though I'm not sure). I've adapted Facebook's example code to let me connect to Facebook and that works... but I can't get a list of my friends. $friends = $facebook->api('friends.get'); This produces the error message: "Fatal error: Uncaught OAuthException: (#803) Some of the aliases you requested do not exist: friends.get thrown in /mycode/facebook.php on line 543" No clue why that is or what that means. Can someone tell me the right syntax (for the latest Facebook API) to get a list of friends? (I tried "$friends = $facebook->api->friends_get();" and get a different error, "Fatal error: Call to a member function friends_get() on a non-object in /mycode/example.php on line 129".) I can confirm that BEFORE this point in my code, things are fine: I'm connected to Facebook with a valid session and I can get my info and dump it to the screen just fine, i.e. this code executes perfectly before the failed friends.get call: $session = $facebook->getSession(); if ($session) { $uid = $facebook->getUser(); $me = $facebook->api('/me'); } print_r($me);
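    A short sketch of the Graph-API style call, which is what the newer SDK expects (friends.get is an old REST method name, hence the "alias" error). This assumes the logged-in session shown above and the usual Graph response envelope with a data array:

        <?php
        $friends = $facebook->api('/me/friends');
        foreach ($friends['data'] as $friend) {
            echo $friend['name'] . ' (' . $friend['id'] . ")<br/>\n";
        }
        ?>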

    Read the article

  • How do I prevent qFatal() from aborting the application?

    - by Dave
    My Qt application uses Q_ASSERT_X, which calls qFatal(), which (by default) aborts the application. That's great for the application, but I'd like to suppress that behavior when unit testing the application. (I'm using the Google Test Framework.) I have my unit tests in a separate project, statically linking to the class I'm testing. The documentation for qFatal() reads: Calls the message handler with the fatal message msg. If no message handler has been installed, the message is printed to stderr. Under Windows, the message is sent to the debugger. If you are using the default message handler this function will abort on Unix systems to create a core dump. On Windows, for debug builds, this function will report a _CRT_ERROR enabling you to connect a debugger to the application. ... To suppress the output at runtime, install your own message handler with qInstallMsgHandler(). So here's my main.cpp file: #include <gtest/gtest.h> #include <QApplication> void testMessageOutput(QtMsgType type, const char *msg) { switch (type) { case QtDebugMsg: fprintf(stderr, "Debug: %s\n", msg); break; case QtWarningMsg: fprintf(stderr, "Warning: %s\n", msg); break; case QtCriticalMsg: fprintf(stderr, "Critical: %s\n", msg); break; case QtFatalMsg: fprintf(stderr, "My Fatal: %s\n", msg); break; } } int main(int argc, char **argv) { qInstallMsgHandler(testMessageOutput); testing::InitGoogleTest(&argc, argv); return RUN_ALL_TESTS(); } But my application is still stopping at the assert. I can tell that my custom handler is being called, because the output when running my tests is: My Fatal: ASSERT failure in MyClass::doSomething: "doSomething()", file myclass.cpp, line 21 The program has unexpectedly finished. What can I do so that my tests keep running even when an assert fails?
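    One hedged workaround (a common trick, not something the Qt docs promise): qFatal() still calls abort() after a custom handler returns, so for QtFatalMsg the handler must not return. Throwing instead lets Google Test report the assert as a failed test, assuming exceptions are enabled and --gtest_catch_exceptions is in effect. A variant of the handler above:

        #include <cstdio>
        #include <stdexcept>
        #include <string>

        void testMessageOutput(QtMsgType type, const char *msg)
        {
            if (type == QtFatalMsg) {
                // Never return to qFatal(), otherwise it calls abort().
                throw std::runtime_error(std::string("Qt fatal: ") + msg);
            }
            fprintf(stderr, "%s\n", msg);
        }

    The exception unwinds through Qt code that was not written with exceptions in mind, so this is only reasonable inside a test binary, not in production.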

    Read the article

  • Is there a Symfony callback at the termination of a session?

    - by Rob Wilkerson
    I have an application that is authenticating against an external server in a filter. In that filter, I'm trying to set a couple of session attributes on the user by using Symfony's setAttribute() method: $this->getContext()->getUser()->setAttribute( 'myAttribute', 'myValue' ); What I'm finding is that, if I dump $_SESSION immediately after setting the attribute, the new attribute is nowhere to be found. On the other hand, if I call getAttribute( 'myAttribute' ), I get back exactly what I put in. All along, I've assumed that reading/writing to user attributes was synonymous with reading/writing to the session, but that seems to be an incorrect assumption. Is there a timing issue? I'm not getting any non-object errors, so it seems that the user is fully initialized. Where is the disconnect here? Thanks. UPDATE The reason this was happening is that I had some code in myUser::shutdown() that cleared out a bunch of stuff. Because myUser is loosely equivalent to $_SESSION (at least with respect to attributes), I assumed that the shutdown() method would be called at the end of each session. It's not. It seems to get called at the close of each request, which is why my attributes never seemed to get set. Now, though, I'm left wondering whether there's a session closing callback. Anyone know?

    Read the article

  • Rails upload file to ftp server

    - by Bob
    I'm on Rails 2.3.5 and Ruby 1.8.6 and trying to figure out how to let a user upload a file to an FTP server on a different machine than my Rails app. Also my Rails app will be hosted on Heroku which doesn't facilitate the writing of files to the local filesystem. index.html.erb <% form_tag '/ftp/upload', :method => :post, :multipart => true do %> <label for="file">File to Upload</label> <%= file_field_tag "file" %> <%= submit_tag 'Upload' %> <% end %> ftp_controller.rb require 'net/ftp' class FtpController < ApplicationController def upload file = params[:file] ftp = Net::FTP.new('remote-ftp-server') ftp.login(user = "***", passwd = "***") ftp.puttextfile(file.read, File.basename(file.original_filename)) ftp.quit() end def index end end Currently I'm just trying to get the Rails app to work on my Windows laptop. With the above code, I'm getting this error Errno::ENOENT in FtpController#upload No such file or directory -.... followed by a dump of the file contents. Anyone know what's going on?
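    The likely explanation (hedged, based only on the code above): Net::FTP#puttextfile expects a local filename as its first argument, so passing file.read hands it the file's contents and it raises ENOENT while trying to open them as a path. A minimal sketch that streams the uploaded IO object instead, which also avoids writing to the local filesystem on Heroku:

        require 'net/ftp'

        def upload
          file = params[:file]
          ftp = Net::FTP.new('remote-ftp-server')
          ftp.passive = true
          ftp.login('***', '***')
          # storbinary reads from the IO object in blocks; no temp file needed.
          ftp.storbinary("STOR #{File.basename(file.original_filename)}", file, 4096)
          ftp.quit
        end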

    Read the article

  • Managing Many to Many relationships in asp.net Wizard Control

    - by Luis
    Say I have this entity with a lot of attributes. In the input form I have decided to implement a wizard control so I can collect information about this entity in several steps. The problem is that I need to collect information that has been modeled as many-to-many relationships. I am planning to use a telerik gridview to manage this (add/edit/delete); the problem is where to store that data, since in an insert form the entity is not created in the database yet. OK, so I can store all that info in temporary lists residing in the viewstate, waiting for the final submit where I dump all that in the DB, but in one of the steps I am collecting files... now storing files in the viewstate is out of the question, same as storing them in the session... I have been thinking of implementing it in a way that the user has to submit some info first (say the first 3 steps), commit the data to the database creating the parent entity and then start inserting all the child entities... but this will get weird, as it's confusing that on the first steps you are not saving the data to the DB and on the next ones you are committing directly... Does anyone have any thoughts on this? Thanks

    Read the article

  • MySQL to SQL Server ODBC Connector?

    - by Scott C.
    My boss wants the data in the MySQL DBs used for our website to be "linked and synced" with a Financial Server that has its DB in SQL Server. Sooooo...even though I have no idea how to accomplish this, this just sounds like an absolute nightmare, especially since the MySQL DB is most likely going to be hosted in the cloud and not on a machine next to the Financial Server. Any ideas how to accomplish this? (within reason?) Also, his big thing is he wants to basically pull up the data from any record a user enters and, using data pulled from that, do all sorts of calculations using ANOTHER program that stores its data (apparently) in SQL Server. Thinking of all the data I might have to convert makes me very uneasy. Please tell me ODBC eliminates complicated junk like this. :/ I'm trying to talk him into just having MySQL do a nightly dump into a CSV file or something and using that (rather than a connector) to update the SQL Server DBs. I guess I'm just not that comfortable with a server and/or programming I have no say over being connected DIRECTLY to my MySQL DB for the website. If there's no good answer for this, can anyone offer a suggestion as to what I can say to talk him out of this? (I'm a low-level IT guy w/ a decent grasp on programming...but I'm no expert - should I try to push this off to a seasoned IT pro?) Thanks in advance.
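    A hedged sketch of the nightly-dump route being proposed (table, column and path names are placeholders): export from MySQL with SELECT ... INTO OUTFILE, copy the file across, and load it into a staging table on the SQL Server side with BULK INSERT.

        -- On the MySQL side:
        SELECT id, customer, amount, created_at
        INTO OUTFILE '/tmp/orders_export.csv'
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        FROM orders;

        -- On the SQL Server side, after the file has been transferred:
        BULK INSERT dbo.orders_staging
        FROM 'C:\imports\orders_export.csv'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

    Loading into a staging table first keeps the conversion and validation logic in SQL, where it is easier to review and to argue about.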

    Read the article

  • Pre Project Documentation

    - by DeanMc
    I have an issue that I feel many programmers can relate to... I have worked on many small scale projects. After my initial paper brainstorm I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces come last, as the library usually dictates what is needed in the UI. As my projects get bigger I worry that my "spec" or design document should grow with them. The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code. When I need to research a topic I dump information into OneNote and from there I prioritise features into versions and then into related chunks so that development runs in a fairly linear fashion; my tasks tend to look like so: Implement Binary File Reader Implement Binary File Writer Create Object to encapsulate Data for expression to the caller Now any programmer worth his salt is aware that between those three to-do items could be a potential wall of code that could expand out to multiple files. I have tried to map the complete code process for each task but I simply don't think it can be done effectively. By the time one mangles pseudo code it is essentially code anyway, so the time investment is negated. So my question is this: Am I right in assuming that the best documentation is the code itself? We are all in agreement that a high level overview is needed. How high should this be? Do you design to statement, class or concept level? What works for you?

    Read the article

  • Rails 3, Devise and custom controller action

    - by Johnny Klassy
    routes.rb match 'agencies/stub' => 'agencies#stub', :via => :get resources :agencies Here's the rake routes dump: agencies_stub GET /agencies/stub(.:format) {:controller=>"agencies", :action=>"stub"} agencies GET /agencies(.:format) {:action=>"index", :controller=>"agencies"} POST /agencies(.:format) {:action=>"create", :controller=>"agencies"} new_agency GET /agencies/new(.:format) {:action=>"new", :controller=>"agencies"} edit_agency GET /agencies/:id/edit(.:format) {:action=>"edit", :controller=>"agencies"} agency GET /agencies/:id(.:format) {:action=>"show", :controller=>"agencies"} PUT /agencies/:id(.:format) {:action=>"update", :controller=>"agencies"} DELETE /agencies/:id(.:format) {:action=>"destroy", :controller=>"agencies"} Devise is set up to have all agencies routes accessible only as admin. The call I'm testing with is http://xyz:12345@localhost:3000/agencies/stub but it doesn't authenticate properly, i.e., it doesn't recognize me as admin and throws me back to the Devise login page. The creds are a valid admin account. I'm baffled and have no idea why this is happening. Any insights will be much appreciated.
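    A hedged sketch of one thing to try (an assumption, since the Devise configuration isn't shown): wrap the routes in Devise's authenticate block for the admin scope, rather than relying on a bare match line, and note that credentials embedded in the URL only take effect if HTTP Basic auth (config.http_authenticatable) is enabled for that scope.

        # routes.rb
        authenticate :admin do
          match 'agencies/stub' => 'agencies#stub', :via => :get
          resources :agencies
        end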

    Read the article

  • ASP.NET 2.0 + Firefox/Safari - UI Issues?

    - by Tejaswi Yerukalapudi
    I'm developing on a system that was originally developed five years ago. I don't have access to the complete source code of the system, but it is completely driven by XML and runs on ASP.NET 2.0. This was originally written for IE6, but since Microsoft has officially decided to dump it, we moved to IE7. Some javascript is added on the client side, but nothing that changes the UI has been done. (We had to integrate a credit card reader into the system) This code is accessed primarily on tablet PCs running windows, but I'd like to persuade my company to use the iPad. [The tablet somehow costs around 3k$. I think selling a 3000$ device to a client when you have the iPad for 500$ is ridiculous.] Now, my problem is if we open it in any other browser (Tested it on safari / firefox), the UI is completely messed up with elements completely out of place. Doesn't ASP.NET generate HTML that runs on any browser? My second question is if there are any credit card readers available in the market that integrate with the iPad. I don't really care about the software part as it's taken care by our company, I just need it to read the card details and post it to the server. Thanks, Teja.

    Read the article

  • Mimic Coldfusion's debug output in PHP?

    - by TekiusFanatikus
    I'm trying to mimic Coldfusion's debug output in PHP. Here's an example of what it looks like (i.e. the Execution Time section): I've turned to XDebug. Ideally, the exception stack error output would be what I'd be looking for. However, it only shows up when an exception occurs. I also tried something like (in our CMS-ish app) this (original question here): $content.= "<?php xdebug_start_trace('e:/xdebug/trace');?>"; $content.= "<?php require('".$page['file_'.LG]."'); ?>"; $content.= "<?php xdebug_stop_trace();?>"; ... $content.= "<?php echo readfile('e:/xdebug/trace.xt');?>"; However, I get an insane, browser-crashing HTML table dropped at the bottom of the page. Not very efficient. My php.ini config: xdebug.trace_format = 2 xdebug.collect_vars = 1 xdebug.collect_params = 4 xdebug.dump_globals = 1 xdebug.dump.SERVER = 'REQUEST_URI' xdebug.show_local_vars = 1 xdebug.show_mem_delta = 1 I'm just wondering if someone has already done something similar?
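    A minimal sketch of a lighter-weight "Execution Time" readout that mirrors the $content pattern above, without an xdebug trace at all; it just times the template include with microtime() and appends an HTML comment (swap the comment for a small table if it should be visible on the page):

        $content .= "<?php \$__t0 = microtime(true); ?>";
        $content .= "<?php require('".$page['file_'.LG]."'); ?>";
        $content .= "<?php printf('<!-- template rendered in %.1f ms -->', (microtime(true) - \$__t0) * 1000); ?>";

    Collecting the per-template timings into an array and printing one summary table at the end of the page gets closer to the ColdFusion debug footer.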

    Read the article

  • How to clean up (destructor) a dynamic array of pointers?

    - by Ahmed Sharara
    Is that destructor enough, or do I have to iterate and delete the new nodes? #include "stdafx.h" #include<iostream> using namespace std; struct node{ int row; int col; int value; node* next_in_row; node* next_in_col; }; class MultiLinkedListSparseArray { private: char *logfile; node** rowPtr; node** colPtr; // used in constructor node* find_node(node* out); node* ins_node(node* ins,int col); node* in_node(node* ins,node* z); node* get(node* in,int row,int col); bool exist(node* so,int row,int col); //add anything you need public: MultiLinkedListSparseArray(int rows, int cols); ~MultiLinkedListSparseArray(); void setCell(int row, int col, int value); int getCell(int row, int col); void display(); void log(char *s); void dump(); }; MultiLinkedListSparseArray::MultiLinkedListSparseArray(int rows,int cols){ rowPtr=new node* [rows+1]; colPtr=new node* [cols+1]; for(int n=0;n<=rows;n++) rowPtr[n]=NULL; for(int i=0;i<=cols;i++) colPtr[i]=NULL; } MultiLinkedListSparseArray::~MultiLinkedListSparseArray(){ // is that destructor enough?? cout<<"array is deleted"<<endl; delete [] rowPtr; delete [] colPtr; }
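    A hedged sketch of the extra loop the destructor needs (an assumption about the insert helpers: every node is linked into exactly one row list via next_in_row, so walking the rows visits each node exactly once and the column lists must not be deleted separately). The number of rows would have to be remembered as a member in the constructor first:

        MultiLinkedListSparseArray::~MultiLinkedListSparseArray(){
            // delete every node first; the head-pointer arrays do not own them
            for(int r=0;r<=rows;r++){          // `rows` stored as a member in the ctor
                node* cur=rowPtr[r];
                while(cur!=NULL){
                    node* next=cur->next_in_row;
                    delete cur;
                    cur=next;
                }
            }
            delete [] rowPtr;
            delete [] colPtr;
        }

    Without this loop, only the two arrays of head pointers are freed and every node allocated with new is leaked.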

    Read the article

  • Unable to save data in database manually and get latest auto increment id, cakePHP

    - by shabby
    I have checked this question as well as this one. I am trying to implement the model described in this question. What I want to do is, on the add function of the message controller, create a record in the thread table (this table only has one field, which is the auto-increment primary key), then take its id and insert it in the message table along with the user id which I already have, and then save it in the message_read_state and thread_participant tables. This is what I am trying to do in the Thread model: function saveThreadAndGetId(){ //$data= array('Thread' => array()); $data= array('id' => ' '); //Debugger::dump(print_r($data)); $this->save($data); debug('id: '.$this->id); $threadId = $this->getInsertID(); debug($threadId); $threadId = $this->getLastInsertId(); debug($threadId); die(); return $threadId; } $data= array('id' => ' '); This line from the above function adds a row in the thread table, but I am unable to retrieve the id. Is there any way I can get the id, or am I saving it wrongly? Initially I was doing the query thing in the message controller: $this->Thread->query('INSERT INTO threads VALUES();'); but then I found out that the last-insert-id function doesn't work on manual queries, so I reverted.

    Read the article

  • What is the fastest way to get a DataTable into SQL Server?

    - by John Gietzen
    I have a DataTable in memory that I need to dump straight into a SQL Server temp table. After the data has been inserted, I transform it a little bit, and then insert a subset of those records into a permanent table. The most time consuming part of this operation is getting the data into the temp table. Now, I have to use temp tables, because more than one copy of this app is running at once, and I need a layer of isolation until the actual insert into the permanent table happens. What is the fastest way to do a bulk insert from a C# DataTable into a SQL Temp Table? I can't use any 3rd party tools for this, since I am transforming the data in memory. My current method is to create a parameterized SqlCommand: INSERT INTO #table (col1, col2, ... col200) VALUES (@col1, @col2, ... @col200) and then for each row, clear and set the parameters and execute. There has to be a more efficient way. I'm able to read and write the records on disk in a matter of seconds...
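    A hedged C# sketch of the usual answer here: SqlBulkCopy streams the whole DataTable in one bulk operation instead of one round-trip per row. The column list is a placeholder, and the temp table has to be created on the same connection the bulk copy uses, otherwise #table will not be visible. (Uses System.Data.SqlClient.)

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            using (var create = new SqlCommand(
                "CREATE TABLE #table (col1 INT, col2 NVARCHAR(50) /* ... col200 */)", conn))
            {
                create.ExecuteNonQuery();
            }

            using (var bulk = new SqlBulkCopy(conn))
            {
                bulk.DestinationTableName = "#table";
                bulk.BatchSize = 5000;
                bulk.WriteToServer(dataTable); // the in-memory DataTable described above
            }

            // ... transform and insert the subset into the permanent table here,
            // still on the same connection so the temp table stays in scope.
        }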

    Read the article

  • Exporting database from external server (without SSH access)

    - by Derek Carlisle
    Our current website is hosted by the design agency that originally built it; however, we are bringing development of the website in house and therefore need to export the database from their server and import it to ours. We have FTP and phpMyAdmin access but don't have SSH access to the server. I was hoping to run a PHP script that would mysqldump the database, compress it and then copy it across to our server using scp: $backupFile = $_SERVER['DOCUMENT_ROOT'].'/backup' . date("Y-m-d-H-i-s") . '.gz'; system("mysqldump -h DB_HOST -u DB_USER -pDB_PASS DB_NAME | gzip > $backupFile"); exec("sshpass -p PASSWORD scp -r -P PORT_NUMBER $backupFile [email protected]:/path/to/directory/"); I have run this locally from the command line and it worked fine, although I had to install sshpass (the hosting server might not have this installed). Also, I was hoping to run it from the browser as I don't have command line access on the hosting server, however it didn't work, though no errors were produced. Can anyone recommend how I can export from the server that I don't have SSH access to and import to my server? Thanks
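    A hedged alternative that needs no scripts on their server at all, assuming their MySQL accepts remote connections for your user (host names and credentials are placeholders): run mysqldump from your own server and pull the data across the wire, then load it locally.

        # Run on your server; works only if the agency's MySQL allows remote access.
        mysqldump -h their-db-host.example.com -P 3306 -u DB_USER -p DB_NAME \
          | gzip > backup-$(date +%F).sql.gz

        # Import into your own MySQL server:
        gunzip < backup-2012-01-01.sql.gz | mysql -u local_user -p local_db

    If remote connections are blocked, phpMyAdmin's Export tab (SQL format, gzipped) produces the same dump without any shell access.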

    Read the article

  • In .NET, how do you send multiple arguments into a DOS command prompt?

    - by donde
    I am trying to execute DOS commands in ASP.NET 2.0. What I have now calls a BAT file which, in turn, calls a CMD file. It works (with the end result being a file gets ftp'ed). However, I'd like to dump the BAT and CMD files and run everything in .NET. What is the format of sending multiple arguments into the command window? Here is what I have now. The .NET Code: System.Diagnostics.Process proc = new System.Diagnostics.Process(); proc.EnableRaisingEvents = false; proc.StartInfo.FileName = "C:\\MyBat.BAT"; proc.Start(); proc.WaitForExit(); The Bat File looks like this (all it does is run the cmd file): ftp.exe -s:C:\MyCMD.cmd And here is the content of the Cmd file: open <my host> <my user name> <my pw> quote site cyl pri=1 sec=1 lrecl=1786 blksize=0 recfm=fb retpd=30 put C:\MyDTLFile.dtl 'MyDTLFile.dtl' quit
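    A hedged C# sketch of dropping the BAT wrapper: point FileName at ftp.exe itself and put the switches in StartInfo.Arguments, where multiple arguments are simply separated by spaces (quote any paths that contain spaces). The -s: script file here is the existing MyCMD.cmd from the question; -i is just an example of a second switch (it turns off per-file prompts):

        System.Diagnostics.Process proc = new System.Diagnostics.Process();
        proc.EnableRaisingEvents = false;
        proc.StartInfo.FileName = "ftp.exe";
        proc.StartInfo.Arguments = "-i -s:C:\\MyCMD.cmd";  // several switches, space-separated
        proc.StartInfo.UseShellExecute = false;
        proc.StartInfo.CreateNoWindow = true;
        proc.Start();
        proc.WaitForExit();

    Replacing the CMD script as well would mean feeding the FTP commands to standard input (RedirectStandardInput) or, more simply, generating the script file from .NET before launching ftp.exe.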

    Read the article

  • c++ floating point precision loss: 3015/0.00025298219406977296

    - by SigTerm
    The problem. Microsoft Visual C++ 2005 compiler, 32bit windows xp sp3, amd 64 x2 cpu. Code: double a = 3015.0; double b = 0.00025298219406977296; //*((unsigned __int64*)(&a)) == 0x40a78e0000000000 //*((unsigned __int64*)(&b)) == 0x3f30945640000000 double f = a/b;//3015/0.00025298219406977296; the result of the calculation (i.e. "f") is 11917835.000000000 (*((unsigned __int64*)(&f)) == 0x4166bb4160000000) although it should be 11917834.814763514 (i.e. *((unsigned __int64*)(&f)) == 0x4166bb415a128aef). I.e. the fractional part is lost. Unfortunately, I need the fractional part to be correct. Questions: 1) Why does this happen? 2) How can I fix the problem? Additional info: 0) The result is taken directly from the "watch" window (it wasn't printed, and I didn't forget to set printing precision). I also provided a hex dump of the floating point variable, so I'm absolutely sure about the calculation result. 1) The disassembly of f = a/b is: fld qword ptr [a] fdiv qword ptr [b] fstp qword ptr [f] 2) f = 3015/0.00025298219406977296; yields the correct result (f == 11917834.814763514 , *((unsigned __int64*)(&f)) == 0x4166bb415a128aef ), but it looks like in this case the result is simply calculated at compile time: fld qword ptr [__real@4166bb415a128aef (828EA0h)] fstp qword ptr [f] So, how can I fix this problem? P.S. I've found a temporary workaround (I need only the fractional part of the division, so I simply use f = fmod(a, b)/b at the moment), but I still would like to know how to fix this problem properly - double precision is supposed to be 16 decimal digits, so such calculation isn't supposed to cause problems.
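    A small hedged check that is often useful in cases like this: print both values to full precision and dump their bit patterns from code, so any rounding done by the debugger's watch window can be ruled out. It is also worth noticing that b's reported pattern (0x3f30945640000000) ends in a long run of zero bits, i.e. the stored value already carries fewer significant bits than the decimal literal suggests. The snippet below only verifies what the program itself computes; it does not assert a cause.

        #include <iostream>
        #include <iomanip>
        #include <cstring>

        static unsigned long long bits(double d) {
            unsigned long long u;
            std::memcpy(&u, &d, sizeof u);   // avoids the pointer-aliasing cast used above
            return u;
        }

        int main() {
            double a = 3015.0;
            double b = 0.00025298219406977296;
            double f = a / b;
            std::cout << std::setprecision(17)
                      << "b = " << b << " (0x" << std::hex << bits(b) << ")\n" << std::dec
                      << "f = " << f << " (0x" << std::hex << bits(f) << ")\n";
            return 0;
        }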

    Read the article
