Search Results

Search found 21331 results on 854 pages for 'require once'.


  • Shark tool on iphone crashes

    - by Joey
    Hi, I am trying to use Shark to profile my app. However, it crashes after I hit "stop", analyzes, and then goes to "load session". Only once, when I decided not to select my app but chose to target "everything", did it actually display a trace; I could not reproduce this case. Does anyone have any idea what might be going wrong? Could it be something to do with the wrong version of Shark, or my SDK, or some other detail? I have the latest SDK and am running 3.1.3 on the phone. The various documentation I've found on Google or via Apple's docs doesn't seem to be terribly helpful, so if anyone has found some that's useful, I'd love to see it. Thanks.

    Read the article

  • mySQL one-to-many query

    - by Stomped
    I've got 3 tables that are something like this (simplified here, of course):

        users:    user_id, user_name
        info:     info_id, user_id, rate
        contacts: contact_id, user_id, contact_data

    users has a one-to-one relationship with info, although info doesn't always have a related entry. users has a one-to-many relationship with contacts, although contacts doesn't always have related entries. I know I can grab the proper 'users' + 'info' with a left join; is there a way to get all the data I want at once? For example, one returned record might be:

        user_id: 5
        user_name: tom
        info_id: 1
        rate: 25.00
        contact_id: 7   contact_data: 555-1212
        contact_id: 8   contact_data: 555-1315
        contact_id: 9   contact_data: 555-5511

    Is this possible with a single query? Or must I use multiple?
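
    Two common approaches, sketched against the simplified schema above (MySQL syntax; table and column names as given): accept one row per contact and regroup in application code, or collapse the contacts into one column with GROUP_CONCAT.

        -- One row per contact; user/info columns repeat on each row and the
        -- application groups them back together by user_id.
        SELECT u.user_id, u.user_name, i.info_id, i.rate,
               c.contact_id, c.contact_data
        FROM users u
        LEFT JOIN info     i ON i.user_id = u.user_id
        LEFT JOIN contacts c ON c.user_id = u.user_id
        WHERE u.user_id = 5;

        -- Or collapse the contacts into one delimited column (MySQL-specific):
        SELECT u.user_id, u.user_name, i.rate,
               GROUP_CONCAT(c.contact_data SEPARATOR ', ') AS contacts
        FROM users u
        LEFT JOIN info     i ON i.user_id = u.user_id
        LEFT JOIN contacts c ON c.user_id = u.user_id
        GROUP BY u.user_id;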

    Read the article

  • Can I use the whatthetrend.com API to get daily and weekly twitter trends?

    - by Charles S.
    The Twitter API allows me to receive daily trends and weekly trends (in addition to current trends, of course) as either JSON or XML. Is there an equivalent with the whatthetrend.com API? I don't see a method/parameter at first glance, but I wanted to see if anyone out there knew a way... http://api.whatthetrend.com/api There's a method to look up a trend definition by keyword, so I guess I could use that based off the Twitter API, but I'd rather load all the data at once than have to access the API remotely every time I want to look up a definition. Thanks

    Read the article

  • I want to use 960 or Blueprint, but I also want to use lots of Padding and Borders, is it a good fit

    - by viatropos
    I started using 960 today and thought it would be really easy. However, trying to translate a site to 960 quickly proved tough, mainly because I can't use any padding or borders unless I add many more divs. Question is, if I want to use lots of padding and borders (where padding and borders are either 5px "thin" or 10px "thick" styles), are 960 and Blueprint overkill? It seems pretty easy to create a custom grid, but once I add padding and borders, 99% of the work is making sure the grid doesn't break. I am still going to end up lining everything up to a 960 grid with 12 columns, but I want to have padding and borders included in the width, and it seems that's not easily possible with 960 or Blueprint. What are your thoughts?
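
    If the goal is padding and borders counted inside the column width, one option (independent of either framework) is the CSS box-sizing property; a minimal sketch, assuming a 960.gs column class and era-appropriate vendor prefixes:

        /* Make padding and borders count inside the declared column width,
           so the grid column's overall footprint never changes. */
        .grid_4 {
            -moz-box-sizing: border-box;    /* Firefox */
            -webkit-box-sizing: border-box; /* Safari/Chrome */
            box-sizing: border-box;
            padding: 10px;                  /* "thick" style */
            border: 5px solid #ccc;         /* "thin" style */
        }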

    Read the article

  • Books are Dead! Long Live the Books!

    - by smisner
    We live in interesting times with regard to the availability of technical material. We have lots of free written material online in the form of vendor documentation, forums, blogs, and Twitter. And we have written material that we can buy in the form of books, magazines, and training materials. Online videos and training – some free and some not – are also an option. All of these formats are useful for one need or another.

    As an author, I pay particular attention to the demand for books, and for now I see no reason to stop authoring books. I assure you that I don’t get rich from the effort, and fortunately that is not my motivation. As someone who likes to refer to books frequently, I am still a big believer in books and have evidence from book sales that there are others like me. If I can do my part to help others learn about the technologies I work with, I will continue to produce content in a variety of formats, including books. (You can view a list of all of my books on the Publications page of my site and my online training videos at Pluralsight.)

    As a consumer of technical information, I prefer books because a book typically can get into a topic much more deeply than a blog post, and can provide more context than vendor documentation. It comes with a table of contents and a (hopefully accurate) index that helps me zero in on a topic of interest, and of course I can use the Search feature in digital form. Some people suggest that technology books are outdated as soon as they get published. I guess it depends on where you are with technology. Not everyone is able to upgrade to the latest and greatest version at release. I do assume, however, that the SQL Server 7.0 titles in my library have little value for me now, but I’m certain that the minute I discard the book, I’m going to want it for some reason! Meanwhile, as electronic books overtake physical books in sales, my husband is grateful that I can continue to build my collection digitally rather than physically, as the books have a way of taking over significant square footage in our house!

    Blog posts, on the other hand, are useful for describing the scenarios that come up in real-life implementations that wouldn’t fit neatly into a book. As many years as I have been working with the Microsoft BI stack, I still run into new problems that require creative thinking. Likewise, people who work with BI and other technologies that I use share what they learn through their blogs. Internet search engines help us find information in blogs that simply isn’t available anywhere else. Another great thing about blogs is the connection to community and the dialog that can ensue between people with common interests.

    With the trend towards electronic formats for books, I imagine that we’ll see books continue to adapt to incorporate different forms of media and better ways to keep the information current. At the moment, I wish I had a better way to help readers with my last two Reporting Services books. In the case of the Microsoft® SQL Server™ 2005 Reporting Services Step by Step book, I have heard many cases of readers having problems with the sample database that shipped on CD – either the database was missing or it was corrupt. So I’ve provided a copy of the database on my site for download from http://datainspirations.com/uploads/rs2005sbsDW.zip.

    Then for the Microsoft® SQL Server™ 2008 Reporting Services Step by Step book, we decided to avoid the database problem by using the AdventureWorks2008 samples that Microsoft published on Codeplex (although code samples are still available on CD). We had this silly idea that the URL for the download would remain constant, but it seems that expectation was ill-founded. Currently, the sample database is found at http://msftdbprodsamples.codeplex.com/releases/view/37109 but I have no idea how long that will remain valid.

    My latest books (#9 and #10, milestones I never anticipated), Building Integrated Business Intelligence Solutions with SQL Server 2008 R2 and Office 2010 (McGraw Hill, 2011) and Business Intelligence in Microsoft SharePoint 2010 (Microsoft Press, 2011), will not ship with a CD, but will provide all code samples for download at a site maintained by the respective publishers. I expect that the URLs for those downloads will remain valid, but there are lots of references to other sites that can change or disappear over time. Does that mean authors shouldn’t make reference to such sites? Personally, I think the benefits to be gained from including links are greater than the risks of the links becoming invalid at some point.

    Do you think the time for technology books has come to an end? Is the delivery of books in electronic format enough to keep them alive? If technological barriers were no object, what would make a book more valuable to you than other formats through which you can obtain information?

    Read the article

  • Multi-dimensional array in php

    - by pundit
    Hi all, I would like to create a multi-dimensional array with two variables but don't know how. This is what I have so far:

        $_SESSION['name'][] = $row_subject['name'];
        $_SESSION['history'][] = $_SERVER['REQUEST_URI'];

    I wanted to know if this is possible:

        $_SESSION['name'][] = $row_subject['name'], $_SERVER['REQUEST_URI'];

    I want to get the name of a programme, which is generated via a database, and also to retrieve the URL. What I am actually doing: once the name is retrieved, I want to make it a link, for which the URL is necessary. Any help would be appreciated. Thanks
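
    The comma syntax in the second snippet isn't valid PHP. A minimal sketch of one way to keep the name and URL together in a single session entry (array keys are illustrative):

        <?php
        session_start();

        // Store each programme as one entry holding both values,
        // instead of two parallel arrays that must stay in sync.
        $_SESSION['history'][] = array(
            'name' => $row_subject['name'],
            'url'  => $_SERVER['REQUEST_URI'],
        );

        // Later, build the links from the stored pairs:
        foreach ($_SESSION['history'] as $item) {
            echo '<a href="' . htmlspecialchars($item['url']) . '">'
               . htmlspecialchars($item['name']) . '</a>';
        }
        ?>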

    Read the article

  • debug javascript in release mode with yui compressions

    - by mickyjtwin
    In our build scripts, we are using YUI Compressor to minify/compress JavaScript and CSS files. As this combines the JS into one file, it needs to be referenced in the master layout script. First question: what is the best way to use both, so that if we are developing (in debug mode) it will reference each JS file individually? Second question: once on production, would there be any steps/solution for debugging the JavaScript on the production server, i.e. conditionally load the JavaScript files based on a "debug=true" setting in either JS or .NET?
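
    A minimal sketch of the conditional-reference idea in an ASP.NET master page (the file names are made up); HttpContext.Current.IsDebuggingEnabled reflects the <compilation debug="..."> switch in web.config, so flipping that setting switches between the two sets of references:

        <%-- Emit individual script tags in debug, the minified bundle otherwise. --%>
        <% if (HttpContext.Current.IsDebuggingEnabled) { %>
            <script type="text/javascript" src="/js/module1.js"></script>
            <script type="text/javascript" src="/js/module2.js"></script>
        <% } else { %>
            <script type="text/javascript" src="/js/combined.min.js"></script>
        <% } %>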

    Read the article

  • WPF Binding XAML vs C#

    - by kubal5003
    Hello, I've got a strange problem: a binding created through XAML (either with the markup extension or the normal syntax) isn't working (BindingOperations.IsDataBound returns false and in fact no Binding object is created). When I do literally the same from code, everything works perfectly. One more thing: the binding in XAML is created in a DataTemplate. What's funny about that is that the first time I use the DataTemplate it fails; then I fix it from code (add the binding to the specific objects), and as more objects are added to the collection, the binding set in XAML just works. If I remove all the objects from the collection and then add a new one, the binding fails once again. In reality this is a shortened version of another of my questions. For details please refer to: http://stackoverflow.com/questions/2986511/wpf-debugging-avalonedit-binding-to-document-property Sorry for doing it this way, but there's no answer there and it's probably too long for anybody to read.
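
    For comparison, a generic sketch of the two forms being contrasted (all names here are purely illustrative, not taken from the linked question):

        // XAML inside the DataTemplate:
        //     <TextBlock Text="{Binding Title}" />
        //
        // The code-behind equivalent that reportedly works when applied directly:
        var binding = new System.Windows.Data.Binding("Title") { Source = item };
        System.Windows.Data.BindingOperations.SetBinding(
            textBlock, System.Windows.Controls.TextBlock.TextProperty, binding);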

    Read the article

  • How to properly update a feature branch from trunk?

    - by Pavel Radzivilovsky
    The SVN book says: ...Another way of thinking about this pattern is that your weekly sync of trunk to branch is analogous to running svn update in a working copy, while the final merge step is analogous to running svn commit from a working copy. I find this approach very impractical in large developments, for several reasons, mostly related to the reintegration step. From SVN v1.5, merging is done rev-by-rev. Cherry-picking the areas to be merged would cause us to resolve the trunk-branch conflicts twice (once when merging trunk revisions to the feature branch, and once more when merging back). Repository size: trunk changes might be significant for a large code base, and copying the changed files (unlike an SVN copy) from trunk elsewhere may be a significant overhead. Instead, we do what we call "re-branching": when a significant chunk of trunk changes is needed, a new feature branch is opened from the current trunk, and the merge is always downward (feature branches -> trunk -> stable branches). This does not follow the SVN book guidelines, and developers see it as extra pain. How do you handle this situation?
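
    A sketch of the re-branching flow in commands (paths are illustrative; the caret repository-root syntax needs SVN 1.6+):

        # 1. Branch again from the current trunk:
        svn copy ^/trunk ^/branches/feature-x-2 -m "re-branch feature X from current trunk"

        # 2. In a working copy of the new branch, replay the feature work done so far
        #    (a 2-URL merge applies the diff between trunk and the old branch):
        svn checkout ^/branches/feature-x-2 wc && cd wc
        svn merge ^/trunk ^/branches/feature-x .
        svn commit -m "carry feature X changes onto the re-branch"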

    Read the article

  • Monkeypatch a model in a rake task to use a method provided by a plugin?

    - by gduquesnay.mp
    During some recent refactoring we changed how our user avatars are stored, not realizing that once deployed it would affect all the existing users. So now I'm trying to write a rake task to fix this by doing something like this:

        namespace :fix do
          desc "Create associated ImageAttachment using data in the Users photo fields"
          task :user_avatars => :environment do
            class User
              # Paperclip
              has_attached_file :photo ... <paperclip stuff, styles etc>
            end

            User.all.each do |user|
              i = ImageAttachment.new
              i.photo_url = user.photo.url
              user.image_attachments << i
            end
          end
        end

    When I try running that, though, I'm getting

        undefined method `has_attached_file' for User:Class

    I'm able to do this in script/console, but it seems like it can't find the Paperclip plugin's methods from a rake task.

    Read the article

  • How do I find the hash value of a 3D vector?

    - by brainydexter
    I am trying to perform broad-phase collision detection with a fixed-grid-size approach. Thus, for each entity's position (x, y, z) (each of type float), I need to find which cell the entity lies in. I then intend to store all the cells in a hash table and iterate through it to report collisions (if any). So, here is what I am doing: the grid cell's position (int type) is (Gx, Gy, Gz) = (x / M, y / M, z / M), where M is the size of the grid cell. Once I have a cell, I'd like to add it to a hash table with its key being a unique hash based on (Gx, Gy, Gz) and the value being the cell itself. Now, I cannot think of a good hash function and I need some help with that. Can someone please suggest a good hash function? Thanks
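
    For integer cell coordinates, a widely used spatial-hash function XORs the coordinates multiplied by three large primes (from Teschner et al., "Optimized Spatial Hashing for Collision Detection of Deformable Objects"); a minimal C++ sketch:

        #include <cstddef>

        // Hash a grid cell (Gx, Gy, Gz) into a bucket index.
        std::size_t HashCell(int gx, int gy, int gz, std::size_t tableSize)
        {
            // Large primes decorrelate the three axes; XOR mixes them.
            const std::size_t h = (static_cast<std::size_t>(gx) * 73856093u) ^
                                  (static_cast<std::size_t>(gy) * 19349663u) ^
                                  (static_cast<std::size_t>(gz) * 83492791u);
            return h % tableSize; // bucket index into the hash table
        }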

    Read the article

  • How to assign default values and define unique keys in Entity Framework 4 Designer

    - by csharpnoob
    Hello, I've had a look at the Entity Framework 4. While generating code for SQL Server 2008 I came to the point where I want to define default values for some fields. How do I define, in the designer, a DateTime.Now default value for a Created DateTime field? I get: Error 54: Default value (DateTime.Now) is not valid for DateTime. The value must be in the form 'yyyy-MM-dd HH:mm:ss.fffZ'. And how do I make a string field unique for code generation, so that an e-mail or username can exist only once in the table? I've searched a lot on the internet and also checked my books Pro Entity Framework 4.0 and Programming Entity Framework, but none of them seems to address the default value issue, or using SQL commands as default values for database generation. Another thing: how do I prevent database generation from always dropping tables? Instead I want it to append non-existing fields and keep the data. Thanks for any suggestions.
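
    For the DateTime.Now default specifically, the designer error suggests only literal values are accepted there, so a common workaround is code rather than model metadata: the generated entities are partial classes, and a constructor in your half can set the default. A hedged sketch (the entity and property names are assumptions, and it presumes the generated type doesn't already define a parameterless constructor):

        using System;

        public partial class Order   // hypothetical generated entity
        {
            public Order()
            {
                // Assumed property: a DateTime named Created on the entity.
                this.Created = DateTime.Now;
            }
        }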

    Read the article

  • Development deployment: how to achieve edit-and-reload with JSP pages?

    - by doublep
    Our project uses WebLogic as the web server and mostly JSP for the user interface. With the standard setup it is possible to copy edited JSP files into the exploded deployment directory, and WebLogic will automatically pick them up, recompile them and serve the new content over HTTP. However, is it possible to avoid the copying altogether, so that I just save a file in my editor and it is immediately (well, after a couple of seconds for recompilation) visible? The project uses Apache Ant as its build tool. I would imagine what I want would be possible with symlinks (since this is for development deployment only, I don't care about cross-platform support), but then I don't see how to symlink lots of files at once with Ant. So, how do I achieve save-JSP-hit-F5-in-browser functionality, either with some setting in WebLogic, or by symlinking JSPs using Apache Ant (instead of copying them as is done now), or something else completely?
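
    If symlinks are acceptable, Ant's optional <symlink> task (Unix only) can link the whole JSP directory in one go, which sidesteps linking lots of files individually; a minimal sketch with illustrative paths:

        <!-- Link the exploded deployment's jsp directory back at the source
             tree, so saved edits are picked up without a copy step. -->
        <target name="link-jsps">
            <symlink link="${weblogic.deploy.dir}/jsp"
                     resource="${basedir}/web/jsp"
                     overwrite="true"/>
        </target>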

    Read the article

  • Global variables in jQuery

    - by Thorpe Obazee
    I have been working on this script:

        <script type="text/javascript" src="/js/jquery.js"></script>
        <script type="text/javascript">
        $(function(){
            compentecy = $('#competency_id');
            $('#add_competency').bind('click', function(e){
                e.preventDefault();
                $.post('/script.php', {competency_id: compentecy.val(), syllabus_id: 2}, function(){
                    // competency = $('#competency_id');
                    competency.children('option[value=' + compentecy.val() + ']').remove();
                });
            });
        });
        </script>

    In the $.post callback function, it seems that I can't access global variables. I tried $.competency but it didn't work. I always get a "competency is undefined" error. I had to reinitialize the variable once again inside the callback. Is there a way to NOT reinitialize the variable inside the callback?

    Read the article

  • C# app fails to load Matlab DLL when running from a shared drive?

    - by jg
    I have a C# .NET 2.0 program that calls a Matlab .dll file that I created using Matlab Builder for .NET. This Matlab .dll file is a wrapper for an .m file function that I need to call from my C# program. Everything works fine when I run this app from my local drive. However, once I copy the app to a shared drive, the Matlab dll fails when it's first loaded. I set up caspol to allow .NET programs to run from shared drives. Does anyone know what could cause this problem, or a tool that I could use to easily figure out what the problem is? Thanks.
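
    For reference, a machine-level caspol grant looks roughly like the sketch below; the group number, share path, and group name are illustrative, so check caspol -lg first for the right parent group on your machine. Note that such a grant covers the main assembly, but dependent DLLs loaded from the share need to fall under the granted URL too.

        caspol -machine -addgroup 1 -url "file://\\server\apps\*" FullTrust -name "SharedAppTrust"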

    Read the article

  • Converting LINQ to Twitter to Twitter API v1.1

    - by Joe Mayo
    Twitter recently updated their API to v1.1 (current status: API v1.1). Naturally, LINQ to Twitter needed to be updated too. This blog post outlines the changes made to LINQ to Twitter during this conversion and highlights important features that LINQ to Twitter developers will want to know.

    Overall Impact

    Generally speaking, Twitter API v1.1 is semantically very much the same as its predecessor. The base URL changed and so did a few resource segments, but the resources themselves are still intact. The good news is that LINQ to Twitter has always shielded the developer from this plumbing, so the entities, types, and filters didn’t change much at all. The following sections describe what did change.

    Authentication

    In Twitter API v1.0 authentication was not required for some resources, such as user timelines and search. However, that’s all changed because *all* queries must be authenticated in Twitter API v1.1. LINQ to Twitter has various types of authorizers you can use, supporting whatever OAuth options are available via Twitter. You can see the LINQ to Twitter documentation, Securing Your Applications, for more info on OAuth support.

    The New Search

    One of the larger changes to the API was Search. To be more specific, the Search entity now contains a List<Status>, named Statuses, to hold results. Additionally, any metadata associated with the search is now in a property named SearchMetaData. The change to the Search entity and responses is the big change, but the good news is that your Search query syntax doesn’t change.

    Different Rate Limits

    The issue of rate limits itself is contentious, but this discussion is focused on the coding experience, and I’ll leave the politics to those who prefer to engage in that activity. What’s important here is that both headers and resources have changed. You should review Twitter’s Rate Limit documentation to understand what the changes mean. A quick explanation is that rate limits are applied individually to each resource in 15-minute time intervals.

    In LINQ to Twitter these changes surface on the Help entity, via HelpType.RateLimits. The RateLimits query has a Resources filter where you can specify a comma-separated list of categories to return rate limit info for. The results materialize in the RateLimits dictionary, keyed on category. The Help entity also has a RateLimitsAuthorizationContext, holding the Access Token for the user performing queries – and to whom the rate limits apply.

    In addition to the new RateLimits query, there are new RateLimit headers that appear in the query response, whose HTTP header name is of the form X-Rate-Limit…, which is different from the previous header name. LINQ to Twitter surfaces these headers via the existing properties of the TwitterContext instance. Anyone who retrieved rate limit information via the Headers property of TwitterContext should be aware of the new header names. I haven’t done anything with Feature rate limit properties yet, but they appear to no longer be available – this will require more follow-up.

    Error Handling

    Twitter API v1.1 has a new format for Error Codes & Responses. LINQ to Twitter wraps these messages in the TwitterQueryException, which has been updated appropriately. The Message property of TwitterQueryException now reflects the Twitter error message, when available. There’s also a new ErrorCode that’s populated with the message error code.

    Parameters

    Most parameters stayed the same, but one of interest is Include Entities (different from LINQ to Twitter data object entities). Entities are metadata hanging off tweets, providing start/end position in the tweet and other information for mentions, urls, hash tags, and media. Entities used to not be included unless you specified you wanted them. Now, in v1.1, entities are included by default for all APIs that return a Status. If you were always setting IncludeEntities to true, then you won’t see a change. However, be aware that you’ll now be receiving additional data in your response from Twitter, which will explain a sudden increase in bandwidth utilization. This might or might not matter to you depending on the requirements of your application, but you should be aware of it.

    Everything Else

    There might be small changes here and there that I haven’t mentioned, but these were the ones you should be most aware of. Streams didn’t change, but Twitter will be deprecating username/password authentication on public streams, in favor of OAuth, so you’ll be seeing me make that change some time in the future. Also, Twitter will continue to evolve the API and you can expect that LINQ to Twitter will change accordingly.

    Summary

    The big changes to the Twitter API were Authentication, Search, Rate Limits, and Error Handling. All API calls must be authenticated. You’ll need to change your code to read Search results differently, but the query is much the same as you use now. There’s a new RateLimits API, one of the Help queries. Also, the new error messages are integrated into TwitterQueryException. Besides these changes, I expect most others to be small or affect a smaller percentage of developers. You can get the latest version of LINQ to Twitter from NuGet or visit the LINQ to Twitter download page at CodePlex.com.

    @JoeMayo
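
    A hedged sketch of the new RateLimits query, pieced together from the names mentioned above (the Help entity, HelpType.RateLimits, the Resources filter, and the RateLimits dictionary keyed on category); treat the exact shapes as assumptions and consult the LINQ to Twitter documentation for the authoritative form:

        // twitterCtx is an authenticated TwitterContext instance.
        var helpResult =
            (from help in twitterCtx.Help
             where help.Type == HelpType.RateLimits &&
                   help.Resources == "search,statuses"  // comma-separated categories
             select help)
            .SingleOrDefault();

        // helpResult.RateLimits is the dictionary keyed on category described above.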

    Read the article

  • Why would a UIDatePicker with no functionality, added to my app via IB, cause my app to crash?

    - by BeachRunnerJoe
    I just added a UIDatePicker to my iPad app using IB, linked it to its outlet in the code, saved it in IB, and added the UIPickerViewDelegate to my UIViewController as well as the UIDatePicker outlet in code. When I build and run, the app launches, but it crashes intermittently when I attempt to open the popover view that contains the date picker. I say intermittently because the popover view will occasionally open successfully, but never more than once (it always crashes the second time you open the popover, if it doesn't crash the first time). Also, in the console, I get the following message: objc[594]: FREED(id): message lastClickRow sent to freed object=0x6015a70 Why is this happening and how can I fix it? What does that console message indicate? It may be worth mentioning that the popover view also contains a table view along with the date picker control. Thanks so much in advance for your help!

    Read the article

  • "The stylesheet was not loaded because its MIME type, "text/html" is not "text/css".

    - by Null Pointer
    I have a JavaScript application, and when I run it on Firefox I get the following error on the console: "The stylesheet was not loaded because its MIME type, "text/html" is not "text/css"." Dumbstruck!! EDIT: Note that it says "The stylesheet ABCD..." but ABCD is actually an HTML file. EDIT (ANSWER): Actually I had wrongly put href="", and hence the HTML file was referencing itself as the CSS. Mozilla had a similar bug once, and it is from there that I got the answer. But everyone else's answers helped me too. Thanks.

    Read the article

  • Async Socket Listener on separate thread - VB.net

    - by TheHockeyGeek
    I am trying to use the code from Microsoft for an async socket connection. It appears the listener runs in the main thread, locking the GUI. I am new at both socket connections and multi-threading, all at the same time, and am having a hard time getting my mind wrapped around this all at once. The code used is at http://msdn.microsoft.com/en-us/library/fx6588te.aspx. Using this example, how can I move the listener to its own thread?

        Public Shared Sub Main()
            ' Data buffer for incoming data.
            Dim bytes() As Byte = New [Byte](1023) {}

            ' Establish the local endpoint for the socket.
            Dim ipHostInfo As IPHostEntry = Dns.GetHostEntry(Dns.GetHostName())
            Dim ipAddress As IPAddress = ipHostInfo.AddressList(1)
            Dim localEndPoint As New IPEndPoint(ipAddress, 11000)

            ' Create a TCP/IP socket.
            Dim listener As New Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)

            ' Bind the socket to the local endpoint and listen for incoming connections.
            listener.Bind(localEndPoint)
            listener.Listen(100)
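
    A minimal sketch of one way to do that, assuming the MSDN sample's Main routine above: start it on a background thread so the accept loop never blocks the GUI thread.

        Imports System.Threading

        ' Place inside the same class as Main and call from the form's startup code.
        Public Shared Sub StartListenerThread()
            Dim listenerThread As New Thread(AddressOf Main)
            listenerThread.IsBackground = True  ' won't keep the app alive on exit
            listenerThread.Start()
        End Sub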

    Read the article

  • Preventing multiple reporting of the same rule violation in FxCop -- What is Id?

    - by Dave
    FxCop is currently reporting the same rule violation twice for a particular method -- it has two out parameters, because I want to return two values to the caller without creating a struct for them. I wonder if anonymous types would solve my problem, but I didn't know about them at the time I wrote the method. Anyhow, I'm getting CheckId CA1021 reported once for each parameter. I've copied the SuppressMessage text from FxCop, and then realized that the Id for each message is different! To me, it seems like you only need the CheckId, so... what is the Id used for? I haven't been able to find information about it online. Will the Id remain the same? I assume so, or SuppressMessage wouldn't work the way one would want it to. Is there a way to specify the SuppressMessage attribute so that it suppresses for all Ids?
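
    For reference, a hedged sketch of what the copied attribute looks like; the MessageId value here is illustrative (it is the per-violation Id in question), and SuppressMessage only takes effect when the CODE_ANALYSIS compilation symbol is defined:

        using System.Diagnostics.CodeAnalysis;

        [SuppressMessage("Microsoft.Design", "CA1021:AvoidOutParameters", MessageId = "1#")]
        public static bool TryGetBounds(string input, out int min, out int max)
        {
            // Placeholder body -- the attribute usage is the point here.
            min = 0;
            max = 0;
            return false;
        }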

    Read the article

  • How best to pre-install OR pre-load OR cache JavaScript library to optimize performance?

    - by Kabeer
    Hello. I am working on an intranet application, so I have some control over the client machines. The JavaScript library I am using is somewhat large. I would like to pre-install, pre-load, or cache the JavaScript library on each machine (in each browser as well) so that it does not travel with each request. I know that browsers do cache a JavaScript library for subsequent requests, but I would like the library to be cached once for all subsequent requests, sessions and users. What is the best mechanism to achieve this?
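
    Beyond default browser caching, one lever is explicit far-future cache headers plus a versioned file name, so each browser fetches the library once per version; a hedged sketch for Apache's mod_expires (the MIME type spelling and file name are illustrative, and IIS has equivalent clientCache settings):

        # httpd.conf / .htaccess sketch
        ExpiresActive On
        ExpiresByType application/x-javascript "access plus 1 year"
        # Reference the file as e.g. /js/biglib-1.4.2.min.js and change the
        # name on upgrade, so clients never re-download an unchanged library.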

    Read the article

  • ASP.NET MVC: Problem generating thumbnails...need help!

    - by Ryan Pitts
    Ok, so I'm new to ASP.NET MVC and I'm trying to make a web application photo gallery. I've posted once on here about this issue I am having trying to generate thumbnails on the fly on the page instead of the actual full-size images. Basically, the functionality I am looking for is to be able to have thumbnails on the page and then be able to click the images to see the full-size version. I am pulling the images and image info from an XML file. I did this so I could display them dynamically and so it would be easier to make changes later. Later, I am going to add functionality to upload new images to specific galleries (when I figure out how to do that as well). I am providing a link to download the project I am working on so you can see the code. I would appreciate any help with this! Thanks! URL to project: http://www.diminished4th.com/TestArtist.zip Ryan
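
    One common approach is a controller action that scales the image on the fly with System.Drawing and returns it as a file result; a minimal sketch (the paths, sizes, and action name are made up):

        // Inside a controller; assumes gallery images live under ~/Content/gallery.
        public ActionResult Thumbnail(string fileName)
        {
            string path = Server.MapPath("~/Content/gallery/" + fileName);
            using (var original = System.Drawing.Image.FromFile(path))
            using (var thumb = original.GetThumbnailImage(150, 100, () => false, IntPtr.Zero))
            using (var ms = new System.IO.MemoryStream())
            {
                thumb.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
                return File(ms.ToArray(), "image/jpeg");
            }
        }

    The view then points its img tags at the action (e.g. src="/Gallery/Thumbnail?fileName=photo1.jpg") and wraps each one in a link to the full-size file.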

    Read the article

  • .NET - downloading multiple pages from a website with a single DNS query

    - by lampak
    I'm using HttpWebRequest to download several pages from a website (in a loop). Simplified, it looks like this:

        HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create("http://sub.domain.com/something/" + someString);
        HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
        // do something

    I'm not quite sure, actually (I don't know how to test whether I'm right), but every request seems to resolve the address again. I would like to speed things up a little by resolving the address once and then reusing it for all requests. I can't work out how to force HttpWebRequest into using it, though. I have tried using Dns.GetHostAddresses, converting the result to a string and passing it as the address to HttpWebRequest.Create. Unfortunately, the server then returns error 404. Some googling suggests that's probably because the "Host" header of the HTTP query doesn't match what the server expects. Is there a simple way to solve this?
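
    A hedged sketch of the resolve-once idea; note that the settable HttpWebRequest.Host property only appeared in .NET 4.0, so on earlier versions this exact approach isn't available:

        // Resolve once, request by IP, and restore the virtual-host name by hand.
        IPAddress[] addresses = Dns.GetHostAddresses("sub.domain.com");
        string ip = addresses[0].ToString();

        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(
            "http://" + ip + "/something/" + someString);
        req.Host = "sub.domain.com";   // .NET 4.0+ only
        HttpWebResponse resp = (HttpWebResponse)req.GetResponse();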

    Read the article

  • Recover backup copy of a ubuntu linux installation on a usb stick using dd

    - by Werner
    Hi, I installed Ubuntu 10.04 on a USB stick in persistent install mode, so I could boot the laptop or my desktop computer from the stick at boot time. Once, I needed the 8GB stick for other purposes, so I thought about copying it to my desktop, doing this from Mac OS X:

        dd if=/dev/disks3s of=/Users/jack/Desktop/usb_copy

    Now I am trying to do the opposite, after having used the stick, which was formatted to NTFS, just doing

        dd if=/Users/jack/Desktop/usb_copy of=/dev/disks3s

    but although I can see that almost all of the files are there, I can not boot from it again. It is also strange that the file permissions are kind of strange, something like _user. What can I do? Thanks

    Read the article

  • Hard to append a table with many records into another without generating duplicates

    - by Bill Mudry
    I may seem a bit wordy at first, but in the hope it will be easier for all of you to understand what I am doing in the first place: I have an uncommon but enjoyable activity of collecting as many species of wood from around the world as I can (over 2,900 so far). OK, that is the real world. Meanwhile I have spent over 8 years compiling over 5.8 MB of text data on all the woods of the world. That got so large that learning some basic PHP and MySQL was most welcome, so I could build a new database-driven home for all this research. I am still slow at it but getting there.

    The original premise was to find evidence of as many species of wood in the world as I can. The more names identified, the more successful the project. I have named the project TAXA for ease of conversation (short for Taxonomy). You are most welcome to take a look at what I have so far at www.prowebcanada.com/taxa. It is 95% dynamically driven.

    So far I am reporting about 6,500 botanical wood names and, as said above, the more I can report, the more successful the project. I have a file of all the woods in the second largest wood collection in the world, the Tervuren wood collection, with over 11,300 wood names even after cleaning out all duplicates. That is almost twice the number I am reporting now, so porting all the new wood names from Tervuren to the 'species' table where I keep the reported data would be a major advancement in the project.

    At one point I was able to add all the Tervuren records to the species table, but over 3,000 duplicates also formed. They were not in the Tervuren file in the first place but represent the same wood names common to both files. It is common sense that there would be woods common to both that, when merged, would create new duplicates.

    At one point, and with the help of others from another forum, I may very well have finally got the proper SQL statement. When I ran it, though, the system said (semi-amusingly at first) that it had gone away! After looking up on the net what could have caused this, one likely reason is that the MySQL timeout lapsed, probably because of the large size of the files I am running. I am running this on a rented account on GoDaddy, so I cannot go about adjusting any config file.

    For safety, I copied the tervuren.sql file as tervuren_target.sql and the species.sql file as species_master.sql to use as working files, just to make sure I protect the original files from destruction or damage. Later I can rename species_master back to just species.sql once I am happy all worked well.

    The species file has about 18 columns in it, but only 5 columns match the columns in the Tervuren file (name for name, and collation also). The rest of the columns are just along for the ride, so to speak. The common key in both is the 'species_name' column. I am not sure it is proper to call one a primary key and the other a foreign key, since there really is no relational connection between them. One is just more data for the other and can disappear afterwards, never to be referred to by the working code in the application.

    I have been very surprised and flabbergasted at how hard it can be to append records from one large table into another (with the same column names, plus others) without generating NEW duplicates in the first place. Watch out thinking that a SELECT DISTINCT statement may do the job, because absolutely NO records in the species table must get destroyed in the process, and there is no way (well, that I know of) to tell the DISTINCT command this. Yes, the original 'species' table has duplicates in it even before all this, but, trust me, they have to be removed the long hard way, manually, record by record, or I will lose precious information. It is more important to just make sure no NEW duplicates form through bringing new names from tervuren_target.species_name into species.species_name.

    I am hoping and thinking that a straight SQL solution should work, except for that nasty timeout. How do I get past that? Could it mean that I have to turn to a PHP-plus-SQL method? Or would I have to break up the Tervuren file into a few smaller ones and run them independently (I hope not...)? So far, what seems like it should be easy has proven unexpectedly tricky. I appreciate any help you can give, but start from the assumption that this may be harder to do right than it seems on the surface.

    By the way, I am running a quad-core 64-bit system with Windows 7, so at least I have some fairly hefty power on the client end, and I have a direct Ethernet cable feeding a cable connection to the Internet. Once I get an algorithm and code working for this, I also have many other lists to process that could make the 'species' table grow even more. It could be equivalent to (ahem) lighting a rocket under my project, especially compared to doing this record by record manually!

    This is my first time in this forum, so I do not know how I can receive any replies. Do I have to come back here periodically, or are replies emailed out also? It would be great if you CC'd copies to me at billmudry at rogers.com :-)

    Much thanks for your patience and help,

    Bill Mudry, Mississauga, Ontario, Canada (next to Toronto).
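
    For the no-new-duplicates append itself, a minimal SQL sketch (the column names besides species_name are illustrative): only names missing from the species table are inserted, and no existing record is touched. If the hosted server still times out on one pass, the same statement can be run in batches, e.g. with an added WHERE on alphabet ranges of species_name.

        -- Append only the Tervuren rows whose species_name is not already present.
        INSERT INTO species_master (species_name, col_a, col_b, col_c, col_d)
        SELECT t.species_name, t.col_a, t.col_b, t.col_c, t.col_d
        FROM tervuren_target AS t
        LEFT JOIN species_master AS s
               ON s.species_name = t.species_name
        WHERE s.species_name IS NULL;   -- no match = genuinely new name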

    Read the article
