Search Results

Search found 926 results on 38 pages for 'overwrite'.


  • MySQL Server 5.6 default my.cnf and my.ini

    - by user12626240
    We've introduced a default my.cnf / my.ini file for MySQL Server that you can now see in the 5.6.8 release candidate:

        # For advice on how to change settings please see
        # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html

        [mysqld]

        # Remove leading # and set to the amount of RAM for the most important data
        # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
        # innodb_buffer_pool_size = 128M

        # Remove leading # to turn on a very important data integrity option: logging
        # changes to the binary log between backups.
        # log_bin

        # These are commonly set, remove the # and set as required.
        # basedir = .....
        # datadir = .....
        # port = .....
        # socket = .....
        # server_id = .....

        # Remove leading # to set options mainly useful for reporting servers.
        # The server defaults are faster for transactions and fast SELECTs.
        # Adjust sizes as needed, experiment to find the optimal values.
        # join_buffer_size = 128M
        # sort_buffer_size = 2M
        # read_rnd_buffer_size = 2M

        sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

    There is also a template file called my-default.cnf or my-default.ini that has these lines near the start:

        # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
        # *** default location during install, and will be replaced if you
        # *** upgrade to a newer version of MySQL.

    On Linux systems, the mysql_install_db command will copy the template file to the final location, where the server will read and use the file, removing the extra three lines. On Windows, the installer will create extra settings based on the answers you gave during installation. Neither will overwrite an existing my.cnf or my.ini file.

    The only initially active setting here is to change the value of sql_mode from the server default of NO_ENGINE_SUBSTITUTION to NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES. This strict mode changes warnings for some non-standard behaviour into errors. This can cause applications which rely on the non-standard things, like dates that aren't valid, to lose data. If we had just changed the server default, the new setting would affect all servers that lack an explicit sql_mode setting, including those where strict mode is harmful. So we did it in the default file instead, because that will only affect new server installations.

    You should expect that in our next version after 5.6, the server default will include STRICT_TRANS_TABLES. Our Windows installer and some of our connectors already use STRICT_TRANS_TABLES by default. Strict has been our preferred setting for many years and it is good to see some development platforms are using it. If you need the old behaviour, just remove the STRICT_TRANS_TABLES setting. If you do this, please also ask your application provider to make it unnecessary. They can do that by setting the session sql_mode setting in their own connections, so the rest of the applications using the server don't have to have an undesirable default.

    We've kept this file as small as possible because we found that our old files were too big and confused people. We've also now removed the old my-huge and related example files. One key part of this is the link to the documentation, where we will provide an introduction to some key settings. We'd like to hear your feedback on settings that will benefit most users or are most important to call out for existing users. Please do that by commenting here or, if you prefer, by adding comments to this bug report.
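    As a sketch of that last suggestion: an application that needs the old behaviour for its own connections only could issue something like this after connecting (the exact mode list is illustrative and depends on what the application actually relies on):

        -- Hypothetical example: relax strict mode for this connection only,
        -- leaving the server-wide default untouched.
        SET SESSION sql_mode = 'NO_ENGINE_SUBSTITUTION';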


  • How do I deploy building blocks (quick parts) for Microsoft Outlook 2007?

    - by now
    I want to deploy some building blocks for Microsoft Outlook 2007. Microsoft has put up a poor solution at http://office.microsoft.com/en-us/outlook/HA102086531033.aspx#4 that asks you to save a template. That solution would require you to distribute the template to all the clients.

    An optimal solution would allow you to put the template containing the building blocks somewhere on the network and simply use the "Workgroup building blocks path" group policy setting for shared paths in Microsoft Office 2007. Sadly, Outlook doesn't respect that policy.

    Also, the solution described in the article above doesn't work. Step 4 requests you to save the template as a Word Template after first asking you to save it as an Outlook Template. It seems that they copy-pasted the steps from the Word article and forgot to check whether it worked (and to adjust the steps accordingly).

    Anyway, does anyone have any suggestions for how to distribute the building blocks without distributing NormalEmail.dotm (which would overwrite the clients' own building blocks each time it is updated)? Thanks!


  • Add a row to UITableView for adding new item?

    - by David.Chu.ca
    In order to provide UI for the user to add new items to my table view, I would like to add a new row to my table at a specified location (the last row, for example) when the view is in edit mode (I have an edit button on the right side of the view's navigation bar). This new row will have an add button indicator on the left side and a disclosure accessory arrow on the right. When the view is not in edit mode, this add row should not be displayed.

    I am not sure if I should overwrite:

        - (void)setEditing:(BOOL)editing animated:(BOOL)animated {...}

    where I call the UITableView method:

        insertRowsAtIndexPaths:(NSArray *)indexPaths
              withRowAnimation:(UITableViewRowAnimation)animation

    to insert a new row? My understanding is that this call may add a new row into the table view. The table view's data source is from Core Data storage. I'm not sure whether this may cause inconsistent numbers of rows between the data store and the table view?

    If that is OK and I have to manage rows in the table view, how can I add the add indicator on the left and the disclosure arrow on the right for the new row? Another question: if I can insert a new row as an Add row this way, should I remove it when the table view is not in edit mode? Just want to know if I am on the right track.
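    For what it's worth, a minimal hypothetical sketch of that override (it assumes a UITableViewController with a one-section table, a hypothetical self.items data array, and a numberOfRowsInSection that reports one extra row while editing):

        // Sketch: toggle an extra "add" row while entering/leaving edit mode.
        - (void)setEditing:(BOOL)editing animated:(BOOL)animated {
            [super setEditing:editing animated:animated];

            // The add row sits after the last data row (single section assumed).
            NSIndexPath *addRow = [NSIndexPath indexPathForRow:[self.items count] inSection:0];
            NSArray *paths = [NSArray arrayWithObject:addRow];

            if (editing) {
                [self.tableView insertRowsAtIndexPaths:paths
                                      withRowAnimation:UITableViewRowAnimationFade];
            } else {
                [self.tableView deleteRowsAtIndexPaths:paths
                                      withRowAnimation:UITableViewRowAnimationFade];
            }
        }

        // The green + indicator comes from the editing style, not the cell itself:
        // returning UITableViewCellEditingStyleInsert from
        // tableView:editingStyleForRowAtIndexPath: for the add row should show it.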


  • Fancybox Auto Close, but remain user control

    - by justinw
    Hi, I've searched through the forum, yet I can't find the solution. I'm referring to this thread to do the auto close function: http://groups.google.com/group/fancybox/browse_thread/thread/d09438b7...

    I did follow JFK's solution, which works just right:

        'onComplete': function() {
            $("#fancybox-wrap, #fancybox-overlay").delay(3000).fadeOut();
        }

        // if you don't want the user to close the box, then add modal=true

    The scenario is that I would like the user to have the option to close the modal when they click on the [close] button or click anywhere on the overlay. I'm using the latest version of FB and jQuery on Rails. Here's my script:

        <script type="text/javascript">
            jQuery(document).ready(function() {
                jQuery("#link_post").fancybox({
                    'autoDimensions': false,
                    'width': 380,
                    'height': 50,
                    'title': 'This message box will automatically close in 10 seconds.',
                    'titlePosition': 'outside',
                    'onComplete': function() {
                        jQuery("#fancybox-wrap, #fancybox-overlay").delay(10000).fadeOut();
                    }
                });
            });
        </script>

    However, when I click on the close button, the title and close button fade away, but FB's content and overlay are still there! They only fade away after 10 seconds. So, my question is: how do I overwrite the 'onComplete' function if the user clicks on the close button before it automatically closes?
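    A hedged sketch of one workaround, assuming Fancybox's onClosed callback fires on a manual close and that $.fancybox.close() is available (both illustrative here): use a cancellable timer instead of .delay().

        // Hypothetical sketch: keep a handle on the pending auto-close so a
        // manual close can cancel it.
        var autoClose;

        jQuery("#link_post").fancybox({
            'autoDimensions': false,
            'width': 380,
            'height': 50,
            'title': 'This message box will automatically close in 10 seconds.',
            'titlePosition': 'outside',
            'onComplete': function() {
                autoClose = setTimeout(function() {
                    jQuery.fancybox.close();
                }, 10000);
            },
            'onClosed': function() {
                clearTimeout(autoClose);   // user closed early: cancel the timer
            }
        });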


  • Crystal Reports Reportviewer - Set Datasource Dynamically Not Working :argh:

    - by Albert
    I'm running CR XI and accessing .RPT files through a ReportViewer in my ASP.NET pages. I've already got the following code, which is supposed to set the report datasource dynamically:

        rptSP = New ReportDocument
        Dim rptPath As String = Request.QueryString("report")
        rptSP.Load(rptPath.ToString, 0)

        Dim SConn As New System.Data.SqlClient.SqlConnectionStringBuilder(ConfigurationManager.ConnectionStrings("MyConnectionString").ConnectionString)
        rptSP.DataSourceConnections(SConn.DataSource, SConn.InitialCatalog).SetConnection(SConn.DataSource, SConn.InitialCatalog, SConn.UserID, SConn.Password)

        Dim myConnectionInfo As ConnectionInfo = New ConnectionInfo
        myConnectionInfo.ServerName = SConn.DataSource
        myConnectionInfo.DatabaseName = SConn.InitialCatalog
        myConnectionInfo.UserID = SConn.UserID
        myConnectionInfo.Password = SConn.Password

        'Two new methods to loop through all objects and tables contained in the
        'requested report and set login credentials for each object and table.
        SetDBLogonForReport(myConnectionInfo, rptSP)
        SetDBLogonForSubreports(myConnectionInfo, rptSP)

        Me.CrystalReportViewer1.ReportSource = rptSP

    But when I go into each .RPT file and open up the Database Expert section, there are obviously still server names hardcoded in there, and the code listed above doesn't seem to be able to change them. I say this because I have training and production environments. When the .RPT file is hardcoded with my production server, and I open it on my training server with the code above (and the web.config has the training server in the connection string), I get the old:

        Object reference not set to an instance of an object.

    And then if I go into the .RPT file, change the datasource over to the training server, and try to open it again, it works fine. Why doesn't the code above overwrite the .RPT file's datasource? How can I avoid having to open up each .RPT file and change the datasource when migrating reports from server to server? Is there a setting in the .RPT file I'm missing or something?


  • git push merge error, but git pull is already up-to-date. Tried reclone, same problem.

    - by Jasie
    I do:

        git commit .
        git push
        error: Entry 'file.php' not uptodate. Cannot merge.

    Then I do:

        git pull
        Already up-to-date.

    What do I do? I just want to get the latest version from the remote copy and overwrite anything on my local copy.

    Edit: I tried everything. I deleted my local repo and recloned:

        git clone ssh://[email protected]/directory
        ...
        Checking out files: 100%, done.

        git status
        On branch master
        nothing to commit (working directory clean)

    All looks good, right? Pull just in case:

        git pull
        Already up-to-date.

    I make a one line change in a file to see if I can push it:

        git commit .
        [master 1e18af1] Rando change
         1 files changed, 2 insertions(+), 0 deletions(-)

        git push
        Counting objects: 13, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (6/6), done.
        Writing objects: 100% (7/7), 646 bytes, done.
        Total 7 (delta 3), reused 0 (delta 0)
        From /directory
           d6d61aa..1e18af1  master -> origin/master
        error: Entry 'someotherfile.php' not uptodate. Cannot merge.
        Updating b8f9a54..1e18af1
        To ssh://[email protected]/directory
           d6d61aa..1e18af1  master - master

    I have no idea what's going on! How can I commit/pull again normally? Thanks very much!
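    For the stated goal (take the remote version and overwrite the local copy), a commonly used sequence is the following; it is a sketch, and it is destructive to uncommitted local work:

        # Discard local state and make the local branch match the remote exactly.
        # WARNING: uncommitted local changes are lost.
        git fetch origin
        git reset --hard origin/master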


  • OCR: How to improve accuracy - existing libraries for removing non-text 'furniture', shapes, etc to

    - by Rob
    I want to remove rectangles etc. that enclose text in a screenshot image, so that I can perform optical character recognition to get accurate text from the screenshot.

    Background: I'm doing this to extract data from a legacy application for use with other applications. This is the only way to get at this data, as the associated files are in a closed, proprietary, binary format. I will be using AutoItScript to drive the application to show data in its UI, then I will screenshot this and feed it to tesseract. I've already had some success in automating the UI, and have been able to use tesseract to get plain ASCII text out of the bitmap. There are several AutoItScript forum articles discussing its use with tesseract/OCR, but not specifically for my question: http://www.autoitscript.com/forum/index.php?s=6c32c3ece12756e635a619cdf175eff9&showforum=2

    What I need to do: There are thin, 1-pixel-wide rectangles that closely enclose some text. When fed to tesseract, it sees them as characters; for example, a vertical line of the rectangle is read as an I. Any thoughts on how to remove the rectangles, or best practices? I'm asking if there is a generic command-line-based toolset to overwrite rectangles, for example, in .png files. I could then pass the .png through this, then pass it to tesseract.

    Details on the tesseract release/setup I've used are as follows. Go here: http://code.google.com/p/tesseract-ocr/downloads/list. For the basic English generic character set, to get Tesseract up and running and recognising your bitmapped text into ASCII text, use tesseract-2.00.eng.tar.gz (current version at time of writing: "English language data for Tesseract (2.00 and up) Jul 2007 989 KB 84845").

    Related questions I have already looked at on Stack Overflow:

    http://stackoverflow.com/questions/1335581/how-to-give-best-chance-of-success-to-an-ocr-software
    http://stackoverflow.com/questions/2296568/analysis-and-transformation-of-the-image-on-the-basis-of-this-analysis-for-better
    http://stackoverflow.com/questions/2268028/reading-characters-off-of-the-screen

    In these, my question is not completely answered, or a commercial solution is being sold. I do not want to consider a commercial solution at this stage.
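    There may not be a ready-made command-line toolset for exactly this, but as a hedged sketch of the idea (Python with PIL/Pillow; filenames and thresholds are illustrative): dark runs much longer than any glyph stroke are almost certainly box edges, and can be painted white before the image goes to tesseract.

        # Hypothetical sketch: erase long 1-pixel horizontal/vertical dark runs
        # (box edges) from a screenshot before OCR. Assumes text strokes are
        # much shorter than the enclosing rectangle's edges.
        from PIL import Image

        MIN_RUN = 50  # runs at least this long are treated as box edges
        DARK = 128    # greyscale threshold for "ink"

        img = Image.open("screenshot.png").convert("L")
        px = img.load()
        w, h = img.size

        # Erase long horizontal runs of dark pixels.
        for y in range(h):
            run_start = None
            for x in range(w + 1):
                dark = x < w and px[x, y] < DARK
                if dark and run_start is None:
                    run_start = x
                elif not dark and run_start is not None:
                    if x - run_start >= MIN_RUN:
                        for i in range(run_start, x):
                            px[i, y] = 255   # overwrite the line with white
                    run_start = None

        # Erase long vertical runs of dark pixels.
        for x in range(w):
            run_start = None
            for y in range(h + 1):
                dark = y < h and px[x, y] < DARK
                if dark and run_start is None:
                    run_start = y
                elif not dark and run_start is not None:
                    if y - run_start >= MIN_RUN:
                        for i in range(run_start, y):
                            px[x, i] = 255
                    run_start = None

        img.save("cleaned.png")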


  • Heroku and Github integration (how to structure the project)

    - by Noah
    I'm creating a webservice and I want to store the source on GitHub and run the app on Heroku. I haven't seen my exact scenario addressed anywhere on the 'net so far, so I'll ask it here. I want to have the following directory structure:

        /project
            .git
            README       <-- project readme file
            TODO.otl     <-- project outline
            ...          <-- other project-related stuff
            /my_rails_app
                app
                config
                ...
                README   <-- rails' readme file

    In the above, project corresponds to http://github.com/myuser/project, and my_rails_app is the code that should be pushed to Heroku. Do I need a separate branch for the rails app, or is there a simpler way that I'm missing? I guess my project-related non-rails files could live in my_rails_app, but the rails README already lives there and it seems inconsistent to overwrite that. However, if I leave it, my GitHub page for the rails app will contain the rails readme, which makes no sense.

    Thanks, Noah

    P.S. I tried just setting it up as described above and running git push heroku from the main project folder. Of course, Heroku doesn't know I want to deploy the subfolder:

        -----> Heroku receiving push
         !     Heroku push rejected, no Rails or Rack app detected.
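    One possible sketch, assuming a git recent enough to ship the subtree command (otherwise an older read-tree/split workflow is needed): push only the subfolder's tree to Heroku's master branch while GitHub keeps the whole project.

        # Hypothetical: 'heroku' is the name of the Heroku git remote.
        git subtree push --prefix my_rails_app heroku master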


  • Rails: Single Table Inheritance and models subdirectories

    - by Chris
    I have a card-game application which makes use of Single Table Inheritance. I have a class Card, a database table cards with column type, and a number of subclasses of Card (including class Foo < Card and class Bar < Card, for the sake of argument). As it happens, Foo is a card from the original printing of the game, while Bar is a card from an expansion.

    In an attempt to rationalise my models, I have created a directory structure like so:

        app/
          models/
            card.rb
            base_game/
              foo.rb
            expansion/
              bar.rb

    And I modified environment.rb to contain:

        Rails::Initializer.run do |config|
          config.load_paths += Dir["#{RAILS_ROOT}/app/models/**"]
        end

    However, when my app reads a card from the database, Rails throws the following exception:

        ActiveRecord::SubclassNotFound (The single-table inheritance mechanism failed to
        locate the subclass: 'Foo'. This error is raised because the column 'type' is
        reserved for storing the class in case of inheritance. Please rename this column
        if you didn't intend it to be used for storing the inheritance class or overwrite
        Card.inheritance_column to use another column for that information.)

    Is it possible to make this work, or am I doomed to a flat directory structure?
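    A hedged workaround sketch, assuming foo.rb and bar.rb really define top-level Foo and Bar constants (not namespaced ones): force-load the subclasses so ActiveRecord can resolve the names stored in the type column.

        # Hypothetical sketch, at the bottom of card.rb: eagerly load the STI
        # subclasses; the load path additions above make these names resolvable.
        require_dependency 'base_game/foo'
        require_dependency 'expansion/bar'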


  • JQTOUCH, Binding to links pulled in via AJAX, to make another AJAX call? Possible?

    - by nobosh
    Hello. I'm using JQTouch with the AJAX example provided in the demo:

        $('#customers').bind('pageAnimationEnd', function(e, info){
            // Make sure the data hasn't already been loaded
            // (we'll set 'loaded' to true a couple lines further down)
            if (!$(this).data('loaded')) {
                $('.loadingscreen').css({'display':'block'});
                // Append a placeholder in case the remote HTML takes its
                // sweet time making it back
                $(this).append($('<div> </div>')
                    // Overwrite the "Loading" placeholder text with the remote HTML
                    .load('/mobile/ajax/customers/ .info', function() {
                        // Set the 'loaded' var to true so we know not to re-load
                        // the HTML next time the #callback div animation ends
                        $(this).parent().data('loaded', true);
                        $('.loadingscreen').css({'display':'none'});
                    }));
            }
        });

    This then returns a nice UL which outputs just fine:

        <ul class="edgetoedge">
            <li class="viewaction" id="715">
                <span class="Title"><a href="/c-view/715/">Lorem Ipsum is simply dummy text of the...</a></span>
                <div class="meta">
                    <span class="dateAdded"> 1d ago </span>
                </div>
            </li>
        </ul>

    This is where I get stuck. How can I make it so that when you click on the link above, it loads the URL wrapped inside the class="Title" span? I'd like it to load in JQTouch like the first code example. I tried the following two things without success:

        $('.viewaction').bind('click', function() {
            alert('wow');
        });

        $('.viewaction').live('pageAnimationEnd', function(e, info){
        });

    Thank you!
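    A hedged sketch of the usual fix for handlers on AJAX-injected content in jQuery of that era: .live() registers the handler for matching elements that appear later, which .bind() does not.

        // Hypothetical sketch: react to clicks on links inside .viewaction
        // items, including ones loaded after page start.
        $('.viewaction a').live('click', function(e) {
            e.preventDefault();
            var url = $(this).attr('href');
            // hand the URL to whatever should load the next JQTouch panel;
            // the alert is just a placeholder
            alert('would load: ' + url);
        });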


  • Passing in defaults within window.onload?

    - by Matrym
    I now understand that the following code will not work because I'm assigning window.onload to the result of the function, not the function itself. But if I remove the parens, I suspect that I have to explicitly call a separate function to process the config before the onload. So, where I now have:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <script type="text/javascript" src="lb-core.js"></script>
        <script type="application/javascript">
            var lbp = {
                defaults: {
                    color: "blue"
                },
                init: function(config) {
                    if (config) {
                        for (prop in config) {
                            lbp.defaults[prop] = config[prop];
                        }
                    }
                    var bod = document.body;
                    bod.style.backgroundColor = lbp.defaults.color;
                }
            }

            var config = { color: "green" }

            window.onload = lbp.init(config);
        </script>
        </head>
        <body>
            <div id="container">test</div>
        </body>
        </html>

    I imagine I would have to change it to:

        var lbp = {
            defaults: {
                color: "blue"
            },
            configs: function(config) {
                for (prop in config) {
                    lbp.defaults[prop] = config[prop];
                }
            },
            init: function() {
                var bod = document.body;
                bod.style.backgroundColor = lbp.defaults.color;
            }
        }

        var config = { color: "green" }

        lbp.configs(config);
        window.onload = lbp.init;

    Then, for people to use this script and pass in a configuration, they would need to call both of those bottom lines separately (configs and init). Is there a better way of doing this?

    Note: If your answer is to bundle a function off window.onload, please also confirm that it is not hazardous to assign window.onload within scripts. It's my understanding that another script coming after my own could, in fact, overwrite what I'd assigned to onload.
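    As a hedged sketch addressing the note's concern: instead of assigning window.onload, the handler can be added, so later scripts cannot clobber it (and vice versa); a closure over the config also removes the need for a separate configs call.

        // Hypothetical sketch: register a load handler without overwriting
        // window.onload; attachEvent is the fallback for old IE.
        function onReady(fn) {
            if (window.addEventListener) {
                window.addEventListener('load', fn, false);
            } else if (window.attachEvent) {
                window.attachEvent('onload', fn);
            }
        }

        onReady(function() {
            lbp.init({ color: "green" });
        });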


  • ASP.Net: Writing chunks of file..HTTP File upload with resume..

    - by Manish
    Please refer to this question: http://stackoverflow.com/questions/2062852/resume-in-upload-file-control

    Now, to find a solution for the above question, I want to work on it and develop a user control which can resume an HTTP file upload process. For this, I'm going to create a temporary file on the server until the upload is complete. Once the uploading is done completely, it will just be saved to the desired location. The procedure I have thought of is (a sketch of step 2 follows this list):

    1. Create a temporary file with a unique name (maybe a GUID) on the server.
    2. Read a chunk of the file and append it to this temp file on the server.
    3. Continue steps 1 to 3 until EOF is reached.
    4. Now, if the connection is lost or the user clicks on the pause button, the writing of the file is stopped and the temporary file is not deleted.
    5. Clicking on resume again, or uploading the same file again, will check on the server whether the file exists, and ask the user whether to resume or to overwrite. (Not sure how to check if it's the same file. Also not sure how to stop sending the chunks from client to server.)
    6. Clicking on resume will start from where the upload needs to continue and will append the rest to the file on the server. (Again, not sure how to do this.)

    Questions: Are these steps correct to achieve the goal? Or do they need some modifications? I'm not sure how to implement all these steps. :-( Need ideas, links... Any help appreciated...
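    For step 2, a minimal server-side sketch (names and paths are hypothetical; it assumes each upload gets its own id so the temp file can be found again on resume):

        using System;
        using System.IO;

        // Hypothetical sketch: append one received chunk to a per-upload temp file.
        public static class ChunkWriter
        {
            public static void AppendChunk(string uploadId, byte[] chunk)
            {
                string tempPath = Path.Combine(@"C:\Uploads\Temp", uploadId + ".part");

                // FileMode.Append creates the file for the first chunk and
                // appends for every later one.
                using (FileStream fs = new FileStream(tempPath, FileMode.Append, FileAccess.Write))
                {
                    fs.Write(chunk, 0, chunk.Length);
                }
            }

            // On resume, the server can report how much it already has, so the
            // client restarts from that byte offset.
            public static long BytesAlreadyUploaded(string uploadId)
            {
                string tempPath = Path.Combine(@"C:\Uploads\Temp", uploadId + ".part");
                return File.Exists(tempPath) ? new FileInfo(tempPath).Length : 0;
            }
        }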


  • Django custom managers - how do I return only objects created by the logged-in user?

    - by Tom Tom
    I want to overwrite the custom objects model manager to only return objects a specific user created. Admin users should still get all objects using the objects model manager. Now I have found an approach that could work. They propose to create your own middleware looking like this:

        #### myproject/middleware/threadlocals.py
        try:
            from threading import local
        except ImportError:
            # Python 2.3 compatibility
            from django.utils._threading_local import local

        _thread_locals = local()

        def get_current_user():
            return getattr(_thread_locals, 'user', None)

        class ThreadLocals(object):
            """Middleware that gets various objects from the request
            object and saves them in thread local storage."""
            def process_request(self, request):
                _thread_locals.user = getattr(request, 'user', None)
        #### end

    And in the custom manager you could call the get_current_user() method to return only objects a specific user created:

        class UserContactManager(models.Manager):
            def get_query_set(self):
                return super(UserContactManager, self).get_query_set().filter(creator=get_current_user())

    Is this a good approach to this use-case? Will this work? Or is this like "using a sledgehammer to crack a nut"? ;-) Just using:

        Contact.objects.filter(created_by=user)

    in each view doesn't look very neat to me.

    EDIT: Do not use this middleware approach!!! Use the approach stated by Jack M. below. After a while of testing, this approach behaved pretty strangely, and with it you mix up a global state with the current request. Use the approach presented below. It is really easy, and there is no need to hack around with the middleware. Create a custom manager in your model with a function that expects the current user or any other user as an input:

        # in your models.py
        class HourRecordManager(models.Manager):
            def for_user(self, user):
                return self.get_query_set().filter(created_by=user)

        class HourRecord(models.Model):
            # Managers
            objects = HourRecordManager()

        # in your view you can call the manager like this and get back only
        # the objects created by the currently logged-in user:
        hr_set = HourRecord.objects.for_user(request.user)


  • multiple compiling errors with basic C++ application on VS2010 Beta 1

    - by ratata
    I just recently installed VS2010 Beta 1 from the Microsoft website. I started a basic C++ Win32 Console Application, which generated the following code:

        #include "stdafx.h"

        int _tmain(int argc, _TCHAR* argv[])
        {
            return 0;
        }

    I tried compiling the code just to see how it runs, and encountered several (over a hundred) compile errors. Here is the first part of the build output:

        1>ClCompile:
        1>  stdafx.cpp
        1>c:\program files\microsoft visual studio 10.0\vc\include\crtdefs.h(520): error C2065: '_In_opt_z_' : undeclared identifier
        1>c:\program files\microsoft visual studio 10.0\vc\include\crtdefs.h(520): error C2143: syntax error : missing ')' before 'const'
        1>c:\program files\microsoft visual studio 10.0\vc\include\crtdefs.h(520): warning C4229: anachronism used : modifiers on data are ignored
        1>c:\program files\microsoft visual studio 10.0\vc\include\crtdefs.h(520): error C2182: '_invalid_parameter' : illegal use of type 'void'
        1>c:\program files\microsoft visual studio 10.0\vc\include\crtdefs.h(520): error C2491: '_invalid_parameter' : definition of dllimport data not allowed
        1>c:\program files\microsoft visual studio 10.0\vc\include\crtdefs.h(520): error C2059: syntax error : ')'
        1>c:\program files\microsoft visual studio 10.0\vc\include\crtdefs.h(527): error C2065: '_In_opt_z_' : undeclared identifier
        1>c:\program files\microsoft visual studio 10.0\vc\include\crtdefs.h(527): error C2143: syntax error : missing ')' before 'const'
        1>c:\program files\microsoft visual studio 10.0\vc\include\crtdefs.h(527): warning C4229: anachronism used : modifiers on data are ignored

    (pastebin for the full list)

    I thought maybe the include files got mixed up with some other compiler version I had installed previously (I have VS 2008 as well), so I reinstalled VS2010 just to overwrite the headers, but that didn't do much. Thanks in advance for any help you might offer, as I am helpless.


  • Why isn't my operator overloading working properly?

    - by Mithrax
    I have the following Polynomial class I'm working on:

        #include <iostream>
        using namespace std;

        class Polynomial
        {
        //define private member functions
        private:
            int coef[100]; // array of coefficients:
                           // coef[0] holds the coefficient of x^0,
                           // coef[1] holds the coefficient of x^1,
                           // coef[n] holds the coefficient of x^n ...
            int deg;       // degree of polynomial (0 for the zero polynomial)

        //define public member functions
        public:
            Polynomial() // default constructor
            {
                for ( int i = 0; i < 100; i++ )
                {
                    coef[i] = 0;
                }
            }

            void set ( int a, int b ) // setter function
            {
                //coef = new Polynomial[b+1];
                coef[b] = a;
                deg = degree();
            }

            int degree()
            {
                int d = 0;
                for ( int i = 0; i < 100; i++ )
                    if ( coef[i] != 0 ) d = i;
                return d;
            }

            void print()
            {
                for ( int i = 99; i >= 0; i-- )
                {
                    if ( coef[i] != 0 )
                    {
                        cout << coef[i] << "x^" << i << " ";
                    }
                }
            }

            // use Horner's method to compute and return the polynomial evaluated at x
            int evaluate ( int x )
            {
                int p = 0;
                for ( int i = deg; i >= 0; i-- )
                    p = coef[i] + ( x * p );
                return p;
            }

            // differentiate this polynomial and return it
            Polynomial differentiate()
            {
                if ( deg == 0 )
                {
                    Polynomial t;
                    t.set ( 0, 0 );
                    return t;
                }
                Polynomial deriv; // = new Polynomial ( 0, deg - 1 );
                deriv.deg = deg - 1;
                for ( int i = 0; i < deg; i++ )
                    deriv.coef[i] = ( i + 1 ) * coef[i + 1];
                return deriv;
            }

            Polynomial operator + ( Polynomial b )
            {
                Polynomial a = *this; // a is the poly on the L.H.S.
                Polynomial c;
                for ( int i = 0; i <= a.deg; i++ ) c.coef[i] += a.coef[i];
                for ( int i = 0; i <= b.deg; i++ ) c.coef[i] += b.coef[i];
                c.deg = c.degree();
                return c;
            }

            Polynomial operator += ( Polynomial b )
            {
                Polynomial a = *this; // a is the poly on the L.H.S.
                Polynomial c;
                for ( int i = 0; i <= a.deg; i++ ) c.coef[i] += a.coef[i];
                for ( int i = 0; i <= b.deg; i++ ) c.coef[i] += b.coef[i];
                c.deg = c.degree();
                for ( int i = 0; i < 100; i++ ) a.coef[i] = c.coef[i];
                a.deg = a.degree();
                return a;
            }

            Polynomial operator -= ( Polynomial b )
            {
                Polynomial a = *this; // a is the poly on the L.H.S.
                Polynomial c;
                for ( int i = 0; i <= a.deg; i++ ) c.coef[i] += a.coef[i];
                for ( int i = 0; i <= b.deg; i++ ) c.coef[i] -= b.coef[i];
                c.deg = c.degree();
                for ( int i = 0; i < 100; i++ ) a.coef[i] = c.coef[i];
                a.deg = a.degree();
                return a;
            }

            Polynomial operator *= ( Polynomial b )
            {
                Polynomial a = *this; // a is the poly on the L.H.S.
                Polynomial c;
                for ( int i = 0; i <= a.deg; i++ )
                    for ( int j = 0; j <= b.deg; j++ )
                        c.coef[i+j] += ( a.coef[i] * b.coef[j] );
                c.deg = c.degree();
                for ( int i = 0; i < 100; i++ ) a.coef[i] = c.coef[i];
                a.deg = a.degree();
                return a;
            }

            Polynomial operator - ( Polynomial b )
            {
                Polynomial a = *this; // a is the poly on the L.H.S.
                Polynomial c;
                for ( int i = 0; i <= a.deg; i++ ) c.coef[i] += a.coef[i];
                for ( int i = 0; i <= b.deg; i++ ) c.coef[i] -= b.coef[i];
                c.deg = c.degree();
                return c;
            }

            Polynomial operator * ( Polynomial b )
            {
                Polynomial a = *this; // a is the poly on the L.H.S.
                Polynomial c;
                for ( int i = 0; i <= a.deg; i++ )
                    for ( int j = 0; j <= b.deg; j++ )
                        c.coef[i+j] += ( a.coef[i] * b.coef[j] );
                c.deg = c.degree();
                return c;
            }
        };

        int main()
        {
            Polynomial a, b, c, d;

            a.set ( 7, 4 );  // 7x^4
            a.set ( 1, 2 );  // x^2

            b.set ( 6, 3 );  // 6x^3
            b.set ( -3, 2 ); // -3x^2

            c = a - b;       // (7x^4 + x^2) - (6x^3 - 3x^2)
            a -= b;

            c.print();
            cout << "\n";
            a.print();
            cout << "\n";

            c = a * b;       // (7x^4 + x^2) * (6x^3 - 3x^2)
            c.print();
            cout << "\n";

            d = c.differentiate().differentiate();
            d.print();
            cout << "\n";

            cout << c.evaluate ( 2 ); // substitute x with 2

            cin.get();
        }

    Now, I have the "-" operator overloaded and it works fine:

        Polynomial operator - ( Polynomial b )
        {
            Polynomial a = *this; // a is the poly on the L.H.S.
            Polynomial c;
            for ( int i = 0; i <= a.deg; i++ ) c.coef[i] += a.coef[i];
            for ( int i = 0; i <= b.deg; i++ ) c.coef[i] -= b.coef[i];
            c.deg = c.degree();
            return c;
        }

    However, I'm having difficulty with my "-=" operator:

        Polynomial operator -= ( Polynomial b )
        {
            Polynomial a = *this; // a is the poly on the L.H.S.
            Polynomial c;
            for ( int i = 0; i <= a.deg; i++ ) c.coef[i] += a.coef[i];
            for ( int i = 0; i <= b.deg; i++ ) c.coef[i] -= b.coef[i];
            c.deg = c.degree();
            // overwrite value of 'a' with the newly computed 'c' before returning 'a'
            for ( int i = 0; i < 100; i++ ) a.coef[i] = c.coef[i];
            a.deg = a.degree();
            return a;
        }

    I just slightly modified my "-" operator method to overwrite the value in 'a' and return 'a', and just use the 'c' polynomial as a temp. I've put in some debug print statements, and I confirm that at the time of computation, both c = a - b; and a -= b; are computed to the same value. However, when I go to print them, their results are different:

        Polynomial a, b;

        a.set ( 7, 4 );  // 7x^4
        a.set ( 1, 2 );  // x^2

        b.set ( 6, 3 );  // 6x^3
        b.set ( -3, 2 ); // -3x^2

        c = a - b;       // (7x^4 + x^2) - (6x^3 - 3x^2)
        a -= b;

        c.print();
        cout << "\n";
        a.print();
        cout << "\n";

    Result:

        7x^4 -6x^3 4x^2
        7x^4 1x^2

    Why are c = a - b and a -= b giving me different results when I go to print them?
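    The difference comes from the first line of the compound operator: Polynomial a = *this; makes a copy, so the -= version updates and returns that copy while the calling object is never touched (and the value returned by a -= b is then discarded). A minimal sketch of the conventional form:

        // Sketch of the usual fix: compound assignment mutates *this and
        // returns a reference to it, so the left-hand object really changes.
        Polynomial& operator -= ( const Polynomial& b )
        {
            for ( int i = 0; i <= b.deg; i++ )
                coef[i] -= b.coef[i];
            deg = degree();
            return *this;
        }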


  • R: dev.copy2pdf, multiple graphic devices to a single file, how to append to file?

    - by Timtico
    Hi everybody, I have a script that makes barplots, opens a new window when 6 barplots have been written to the screen, and keeps opening new graphics devices whenever necessary. Depending on the input, this leaves me with a potentially large number of open windows (graphics devices) which I would like to write to a single PDF file.

    Considering my Perl background, I decided to iterate over the different graphics devices, printing them out one by one. I would like to keep appending to a single PDF file, but I do not know how to do this, or if this is even possible. I would like to avoid looping in R. :)

    The code I use:

        for (i in 1:length(dev.list())) {
            dev.set(which = dev.list()[i])
            dev.copy2pdf(device = quartz, file = "/Users/Tim/Desktop/R/Filename.pdf")
        }

    However, this is not working, as it will overwrite the file each time. Now, is there an append function in R, like there is in Perl, which allows me to keep adding pages to the existing PDF file? Or is there a way to capture the information in a graphics window into an object, keep adding new graphics devices to this object, and finally print the whole thing to a file?

    Other possible solutions I thought about:

    - writing different PDF files, then combining them after creation (perhaps even possible in R, with the right libraries installed?)
    - copying the information in all the different windows to one big graphics device and then printing this to a PDF file.
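    A hedged sketch of one way to get a single multi-page PDF (it still loops, but only over the devices): open one pdf() device with onefile = TRUE, so each plot sent to it becomes a new page, and copy every screen device onto it.

        # Hypothetical sketch: copy every open screen device into one PDF.
        screens <- dev.list()                 # the currently open windows
        pdf("/Users/Tim/Desktop/R/Filename.pdf", onefile = TRUE)
        pdf_dev <- dev.cur()

        for (d in screens) {
            dev.set(which = d)                # make a screen device current
            dev.copy(which = pdf_dev)         # copy its plot as the next page
        }

        dev.off(pdf_dev)                      # close and write the PDF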


  • uncompressing .zip file in linux [closed]

    - by Suren
    Hi, I have a .zip file (it contains multiple files, e.g. file1.txt, file2.txt, file3.txt and so on) in a directory. My query is: how do I extract the files from the .zip archive into the very same directory, and how do I create a list of all the files extracted from the .zip archive? The extracted file names should be printed, one per line, into a file named file_list:

        file1.txt
        file2.txt
        file3.txt
        filen.txt

    I have tried the following command, assuming that my .zip file name is "data.zip":

        unzip -qoj data.zip | unzip -ql data.zip > file_list

    I have used unzip -qoj data.zip to extract all the files into the same directory (quietly, overwrite, junk paths). When I try to add -l to the first unzip command, the command doesn't extract the files into the current directory; the files are only listed. That's why I have to use unzip again after the first pipe (if I am making a mistake here, please let me know). I get the following output:

          Length     Date   Time    Name
         --------    ----   ----    ----
                0  12-21-09 14:25   data/
             6148  12-21-09 14:25   data/.DS_Store
                0  12-21-09 14:25   __MACOSX/
                0  12-21-09 14:25   __MACOSX/data/
               82  12-21-09 14:25   __MACOSX/data/._.DS_Store
               82  12-11-09 13:59   data/file1.txt
              120  12-11-09 13:59   data/file2.txt
              166  12-11-09 13:59   data/file3.txt
         --------                   -------
             6598                   8 files

    How do I extract only file1.txt, file2.txt, file3.txt from this stdout? Is it possible to do this with a Linux command, or do I have to write a Perl script for this? Thank you.
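    A hedged sketch that sidesteps parsing the -l table: zipinfo's bare mode prints one entry name per line, which can then be filtered down to the plain files (the grep/sed filters are illustrative and tuned to the __MACOSX noise shown above):

        # Extract quietly, overwriting, junking paths, into the current directory.
        unzip -qoj data.zip

        # List entry names only, drop directory entries and AppleDouble noise,
        # strip leading directories, and save the names to file_list.
        zipinfo -1 data.zip | grep -v '/$' | grep -v '__MACOSX' | sed 's:.*/::' > file_list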


  • Eclipse / Aptana File Sync Solutions

    - by Brad
    Our development team uses Eclipse + Aptana to do their web development work. Currently, most of them are mapping their Eclipse projects directly to the web server. I'd rather them create a local project and use that to sync to the web server project directory they are working on. The issue is that there aren't any good solutions which is just appalling given the popularity of the two. The FileSync plugin for Eclipse is only one-way. Meaning if another developer makes a change to the file on the server, another dev isn't even notified and could overwrite the change. The File Transfer option in Aptana 2.0 doesn't support any sort of Sync, just manually uploading/downloading files. The Sync option in Aptana 1.5.1 doesn't allow you to merge files when they are different. You can only update one or the other. It does however allow you to view a diff (but only if you right click and select) and in that diff you can't make any changes. I did find a way to allow files to be uploaded to their Sync repositories in Aptana using Eclipse Monkey. However it doesn't work if a user saves multiple files at once, 'Save All', again it doesn't work. And additionally, there is no notification if a user opens a local file that has an updated copy on the server. I tried to add one using Eclipse Monkey but I couldn't find any sort of listener in the Eclipse API to do it and any Eclipse Monkey documentation is far and few between. My only solution at this point is just to let them continue to map directly to the server or ask them to do a manual download before they do any work (but again what if someone uploads a change right after they do that). Anyone have any ideas?


  • Assigning static final int in a JUnit (4.8.1) test suite

    - by Dr. Monkey
    I have a JUnit test class in which I have several static final ints that can be redefined at the top of the tester code to allow some variation in the test values. I have logic in my @BeforeClass method to ensure that the developer has entered values that won't break my tests.

    I would like to improve variation further by allowing these ints to be set to (sensible) random values in the @BeforeClass method if the developer sets a boolean useRandomValues = true;. I could simply remove the final keyword to allow the random values to overwrite the initialisation values, but I have final there to ensure that these values are not inadvertently changed, as some tests rely on the consistency of these values.

    Can I use a constructor in a JUnit test class? Eclipse starts putting red underlines everywhere if I try to make my @BeforeClass into a constructor for the test class, and making a separate constructor doesn't seem to allow assignment to these variables (even if I leave them unassigned at their declaration).

    Is there another way to ensure that any attempt to change these variables after the @BeforeClass method will result in a compile-time error? Can I make something final after it has been initialised?
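    One hedged sketch that keeps final: decide the values in a static initialiser, which runs before any @BeforeClass method, so each field is assigned exactly once and later reassignment stays a compile-time error (names and ranges here are illustrative):

        import java.util.Random;

        public class ValuesTest {
            private static final boolean useRandomValues = true;

            // Blank final: assigned exactly once in the static initialiser below;
            // any later assignment is a compile-time error.
            private static final int BOUND;

            static {
                if (useRandomValues) {
                    BOUND = 1 + new Random().nextInt(100); // sensible random range
                } else {
                    BOUND = 42;                            // hand-picked default
                }
            }
        }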


  • hg archive to Remote Directory

    - by Brett Daniel
    Is there any way to archive a Mercurial repository to a remote directory over SSH? For example, it would be nice if one could do the following:

        hg archive ssh://[email protected]/path/to/archive

    However, that does not appear to work. It instead creates a directory called ssh: in the current directory.

    I made the following quick-and-dirty script that emulates the desired behavior by creating a temporary ZIP archive, copying it over SSH, and unzipping it into the destination directory. However, I would like to know if there is a better way.

        if [[ $# != 1 ]]; then
            echo "Usage: $0 [user@]hostname:remote_dir"
            exit
        fi

        arg=$1
        arg=${arg%/}          # remove trailing slash
        host=${arg%%:*}
        remote_dir=${arg##*:}

        # zip named to match lowest directory in $remote_dir
        zip=${remote_dir##*/}.zip

        # root of archive will match zip name
        hg archive -t zip $zip

        # make $remote_dir if it doesn't exist
        ssh $host mkdir --parents $remote_dir

        # copy zip over ssh into destination
        scp $zip $host:$remote_dir

        # unzip into containing directory (will prompt for overwrite)
        ssh $host unzip $remote_dir/$zip -d $remote_dir/..

        # clean up zips
        ssh $host rm $remote_dir/$zip
        rm $zip

    Edit: clone-and-push would be ideal, but unfortunately the remote server does not have Mercurial installed.
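    A hedged alternative sketch that avoids the temporary ZIP entirely: hg archive can write a tar stream to stdout ('-' as the destination), which can be piped straight into tar on the remote host, so only tar (not Mercurial) is needed there:

        # Stream the archive over SSH; host and path are illustrative.
        hg archive -t tar - | ssh user@host 'mkdir -p /path/to/archive && tar x -C /path/to/archive'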


  • Trying to use a authlogic-connect as a plugin in place of gem - Server doesn't start

    - by Arkid
    I am trying to use authlogic-connect as a plugin in Rails 3 in place of a gem. I have made an entry in the Gemfile as:

        gem "authlogic-connect", :require => "authlogic-connect", :path => "localgems"

    Now when I run bundle install, it runs fine. When I try to start the server, I get the error:

        Could not find gem 'authlogic-connect (>= 0, runtime)' in source at localgems.
        Source does not contain any versions of 'authlogic-connect (>= 0, runtime)'
        Try running `bundle install`.

    I have placed the unzipped gem, renamed to authlogic-connect, in the localgems folder. What is the problem? Here is what I get on using rails plugin install:

        arkidmitra$ rails plugin install git://github.com/viatropos/authlogic-connect.git
        Usage:
          rails new APP_PATH [options]

        Options:
              [--skip-gemfile]        # Don't create a Gemfile
          -d, [--database=DATABASE]   # Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db)
                                      # Default: sqlite3
          -O, [--skip-active-record]  # Skip Active Record files
              [--dev]                 # Setup the application with Gemfile pointing to your Rails checkout
          -J, [--skip-prototype]      # Skip Prototype files
          -T, [--skip-test-unit]      # Skip Test::Unit files
          -G, [--skip-git]            # Skip Git ignores and keeps
          -r, [--ruby=PATH]           # Path to the Ruby binary of your choice
                                      # Default: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
          -m, [--template=TEMPLATE]   # Path to an application template (can be a filesystem path or URL)
          -b, [--builder=BUILDER]     # Path to an application builder (can be a filesystem path or URL)
              [--edge]                # Setup the application with Gemfile pointing to Rails repository

        Runtime options:
          -q, [--quiet]    # Supress status output
          -s, [--skip]     # Skip files that already exist
          -f, [--force]    # Overwrite files that already exist
          -p, [--pretend]  # Run but do not make any changes

        Rails options:
          -h, [--help]     # Show this help message and quit
          -v, [--version]  # Show Rails version number and quit

        Description:
            The 'rails new' command creates a new Rails application with a default
            directory structure and configuration at the path you specify.

        Example:
            rails new ~/Code/Ruby/weblog

            This generates a skeletal Rails installation in ~/Code/Ruby/weblog.
            See the README in the newly created application to get going.


  • Question about functional OOP style in JavaScript

    - by valums
    I prefer to use a functional OOP style for my code (similar to the module pattern) because it helps me avoid the "new" keyword and all the problems with the scope of the "this" keyword in callbacks. But I've run into a few minor issues with it. I would like to use the following code to create a class:

        namespace.myClass = function () {
            var self = {},
                somePrivateVar1;

            // initialization code that would call
            // private or public methods
            privateMethod();
            self.publicMethod(); // sorry, error here

            function privateMethod() {}

            self.publicMethod = function () {};

            return self;
        }

    The problem is that I can't call public methods from my initialization code, as these functions are not defined yet. The obvious solution would be to create an init method and call it before the "return self" line. But maybe you know a more elegant solution?

    Also, how do you usually handle inheritance with this pattern? I use the following code, but I would like to hear your ideas and suggestions.

        namespace.myClass2 = function () {
            var self = namespace.parentClass(),
                somePrivateVar1;

            var superMethod = self.someMethod;
            self.someMethod = function () {
                // example shows how to overwrite parent methods
                superMethod();
            };

            return self;
        }

    Edit. For those who asked what the reasons are for choosing this style of OOP, you can look into the following questions:

    http://stackoverflow.com/questions/1557386/prototypal-vs-functional-oop-in-javascript
    http://stackoverflow.com/questions/383402/is-javascript-s-new-keyword-considered-harmful


  • Can a GeneralPath be modified?

    - by Dov
    Java2D is fairly expressive, but requires constructing lots of objects. In contrast, the older API would let you call methods to draw various shapes, but lacks all the new features like transparency, stroke, etc. Java has fairly high costs associated with object creation. For speed, I would like to create a GeneralPath whose structure does not change, but go in and change the x,y points inside.

        path = new GeneralPath(GeneralPath.WIND_EVEN_ODD, 10);
        path.moveTo(x, y);
        path.lineTo(x2, y2);
        double len = Math.sqrt((x2-x)*(x2-x) + (y2-y)*(y2-y));
        double dx = (x-x2) * headLen / len;
        double dy = (y-y2) * headLen / len;
        double dx2 = -dy * (headWidth/headLen);
        double dy2 = dx * (headWidth/headLen);
        path.lineTo(x2 + dx + dx2, y2 + dy + dy2);
        path.moveTo(x2 + dx - dx2, y2 + dy - dy2);
        path.lineTo(x2, y2);

    This one isn't even that long. Imagine a much longer sequence of commands, where only the ones at the end are changing. I just want to be able to overwrite commands; to have an iterator, effectively. Does that exist?
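    GeneralPath does not expose its points for in-place editing, but a hedged sketch of the usual workaround: keep the coordinates yourself and rebuild with reset(), which clears the segments while reusing the already-allocated path object, so per-frame allocation stays low.

        import java.awt.geom.GeneralPath;

        // Hypothetical sketch: one reusable path whose commands are re-issued
        // with fresh coordinates each frame instead of allocating a new path.
        public class ReusablePath {
            private final GeneralPath path =
                new GeneralPath(GeneralPath.WIND_EVEN_ODD, 10);

            public GeneralPath update(double x, double y, double x2, double y2) {
                path.reset();          // drop old segments, keep the object
                path.moveTo(x, y);     // same command structure, new points
                path.lineTo(x2, y2);
                return path;
            }
        }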


  • What is best strategy to handle exceptions & errors in Rails?

    - by Nick Gorbikoff
    Hello. I was wondering if people would share their best practices / strategies on handling exceptions and errors. Now, I'm not asking when to throw an exception (that has been thoroughly answered here: SO: When to throw an Exception). And I'm not using this for my application flow, but there are legitimate exceptions that happen all the time. For example, the most popular one would be ActiveRecord::RecordNotFound.

    What would be the best way to handle it? The DRY way? Right now I'm doing a lot of checking within my controller, so if Post.find(5) raises the error, I catch that and show a flash message. However, while this is very granular, it's a bit cumbersome, in the sense that I need to check for exceptions like that in every controller, while most of them are essentially the same and have to do with a record not found or related records not found, such as either Post.find(5) not found, or trying to display comments related to a post that doesn't exist; that would throw an exception (something like Post.find(5).comments[0].created_at).

    I know you can do something like this in ApplicationController and overwrite it later in a particular controller/method to get more granular support. However, would that be a proper way to do it?

        class ApplicationController < ActionController::Base
          rescue_from ActiveRecord::RecordInvalid do |exception|
            render :action => (exception.record.new_record? ? :new : :edit)
          end
        end
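    For the record-not-found case specifically, a hedged sketch along the same rescue_from lines (handler name and flash wording are illustrative):

        class ApplicationController < ActionController::Base
          # Centralised handling: any action that raises
          # ActiveRecord::RecordNotFound falls through to this handler,
          # so individual controllers don't need their own checks.
          rescue_from ActiveRecord::RecordNotFound, :with => :record_not_found

          private

          def record_not_found
            flash[:error] = "The record you were looking for could not be found."
            redirect_to root_url
          end
        end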


  • Securely erasing a file using simple methods?

    - by Jason
    Hello, I am using C# on .NET Framework 2.0. I have a question relating to file shredding. My target operating systems are Windows 7, Windows Vista, and Windows XP; possibly Windows Server 2003 or 2008, but I'm guessing they should behave the same as the first three.

    My goal is to securely erase a file. I don't believe using File.Delete is secure at all. I read somewhere that the operating system simply marks the raw hard-disk data for deletion when you delete a file; the data is not erased at all. That's why there exist so many working methods to recover supposedly "deleted" files. I also read that this is why it's much more useful to overwrite the file, because then the data on disk actually has to be changed. Is this true? Is this generally what's needed? If so, I believe I can simply write the file full of 1's and 0's a few times.

    I've read:

    http://www.codeproject.com/KB/files/NShred.aspx
    http://blogs.computerworld.com/node/5756
    http://blogs.computerworld.com/node/5687
    http://stackoverflow.com/questions/4147775/securely-deleting-a-file-in-c-net
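    A hedged sketch of the overwrite-then-delete idea with .NET 2.0 APIs (the pass count is illustrative; note that filesystem and hardware behaviour, e.g. wear levelling, journaling, or shadow copies, can defeat a simple in-place overwrite):

        using System;
        using System.IO;
        using System.Security.Cryptography;

        public static class Shredder
        {
            // Hypothetical sketch: overwrite the file's bytes in place, then delete.
            public static void Shred(string path, int passes)
            {
                byte[] buffer = new byte[4096];
                RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();

                using (FileStream fs = new FileStream(path, FileMode.Open,
                                                      FileAccess.Write, FileShare.None))
                {
                    long length = fs.Length;
                    for (int pass = 0; pass < passes; pass++)
                    {
                        fs.Position = 0;
                        long remaining = length;
                        while (remaining > 0)
                        {
                            rng.GetBytes(buffer);  // random data each pass
                            int count = (int)Math.Min(buffer.Length, remaining);
                            fs.Write(buffer, 0, count);
                            remaining -= count;
                        }
                        fs.Flush();  // push each pass out of the stream buffer
                    }
                }
                File.Delete(path);
            }
        }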

