Search Results

Search found 3179 results on 128 pages for 'merge replication'.

Page 50 of 128

  • Offline backup synchronization

    - by Pavan Kumar
    There is a central server running Windows Server 2003 and SQL Server 2005, and there are 7 client machines situated in various places, each with XP Pro and SQL Server 2005 installed. They are not interconnected, so they are physically separate. One person goes to each of these centers maybe twice a month, takes a backup (full database, consisting of the mdf and ldf files) on a pen drive, and brings it to the central server, which holds a central database with the same schema as all the client databases. I need to synchronize each backup database (belonging to a different center) one by one, updating existing data or inserting new data into the central database.

    The solution I came up with was replication. The pen drive with the 7 database instances is brought to the central server, and each database is attached one by one to the same SQL Server instance where the central database exists. My idea was then to replicate each backup database in turn, i.e. a single-subscription (central database), multiple-publication (the 7 attached databases, in my case) topology, performing the replication locally on the same machine. So I tried to develop a UI in C#.NET to programmatically run transactional replication with push subscriptions using RMO programming (incomplete as of now, because there is no point developing further once you know it is not the solution).

    Transactional replication can be initialized either with or without a snapshot. If I go for the first option, i.e. with a snapshot, whatever data is present in the central database is overwritten by the new data, so the data initially present in the central database is lost. If I initialize without a snapshot, no data is sent from the backup database to the server (the backup already contains the updated and new data): replication only works for incremental changes made after replication is set up. So the data present in the backup database when replication is configured will not be replicated when the snapshot agent first runs; only changes made to the backup database thereafter would be reflected in the central database. (Remember, I am not going to insert new data or make any changes to the backup database after I attach it to the central server.) So this solution is not feasible.

    I want a solution for synchronizing from a client database to the central database present on the same machine using C#.NET. If you can provide a small example, maybe with two databases with the same schema, DB1 (client) to DB2 (server), consisting of one or two tables, it would be very helpful. The synchronization is not bidirectional: I only want to update existing data or insert new data from DB1 into DB2 (DB2 may contain some data initially). Thanks and regards, Pavan
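
    A minimal T-SQL sketch of the one-way merge being asked for, assuming a hypothetical table Customer(ID, Name, City) present in both the attached backup database and the central database; it could be run once per attached backup, for example from C# via SqlCommand:

        -- Update rows that already exist in the central database...
        UPDATE c
        SET c.Name = b.Name, c.City = b.City
        FROM CentralDB.dbo.Customer AS c
        JOIN BackupDB.dbo.Customer AS b ON b.ID = c.ID;

        -- ...then insert the rows that are new.
        INSERT INTO CentralDB.dbo.Customer (ID, Name, City)
        SELECT b.ID, b.Name, b.City
        FROM BackupDB.dbo.Customer AS b
        WHERE NOT EXISTS (SELECT 1 FROM CentralDB.dbo.Customer AS c WHERE c.ID = b.ID);

    All database, table, and column names here are placeholders. Note that if the ID columns are IDENTITY values generated independently at each center, the key would need to be widened (for example a center code plus the ID) before rows can be matched safely.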

    Read the article

  • MySQL December Webinars

    - by Bertrand Matthelié
    We'll be running 3 webinars next week and hope many of you will be able to join us:

    MySQL Replication: Simplifying Scaling and HA with GTIDs
    Wednesday, December 12, at 15.00 Central European Time
    Join the MySQL replication developers for a deep dive into the design and implementation of Global Transaction Identifiers (GTIDs) and how they enable users to simplify MySQL scaling and HA. GTIDs are one of the most significant new replication capabilities in MySQL 5.6, making it simple to track and compare replication progress between the master and slave servers. Register Now

    MySQL 5.6: Building the Next Generation of Web/Cloud/SaaS/Embedded Applications and Services
    Thursday, December 13, at 9.00 am Pacific Time
    As the world's most popular web database, MySQL has quickly become the leading cloud database, with most providers offering MySQL-based services. Indeed, built to deliver web-based applications and to scale out, MySQL's architecture and features make the database a great fit for delivering cloud-based applications. In this webinar we will focus on the improvements in MySQL 5.6 performance, scalability, and availability designed to enable DBA and developer agility in building the next generation of web-based applications. Register Now

    Getting the Best MySQL Performance in Your Products: Part IV, Partitioning
    Friday, December 14, at 9.00 am Pacific Time
    We're adding Partitioning to our extremely popular "Getting the Best MySQL Performance in Your Products" webinar series. Partitioning can greatly increase the performance of your queries, especially when doing full table scans over large tables. Partitioning is also an excellent way to manage very large tables, and one of the best ways to build higher performance into your product's embedded or bundled MySQL, particularly for hardware-constrained appliances and devices. Register Now

    We have live Q&A during all webinars, so you'll get the opportunity to ask your questions!

    Read the article

  • Fixing up Visual Studio's gitignore, using IFix

    - by terje
    Originally posted on: http://geekswithblogs.net/terje/archive/2014/06/13/fixing-up-visual-studiorsquos-gitignore--using-ifix.aspx

    Download tool

    Is there anything wrong with the built-in Visual Studio gitignore? Yes, there is! First, some background: when you set up a git repo, it should be small and not contain anything not really needed. One thing you should not have in your git repo is binary files.

    These binary files may come from two sources. One is the output files, in the bin and obj folders. If you have a gitignore file present, which you should always have (!!), these folders are excluded by the standard included file (the one included when you choose Team Explorer/Settings/GitIgnore – Add). The other source is the packages folder coming from your NuGet setup. You do use NuGet, right? Of course you do! But that gitignore file doesn't have any exclude clause for those folders; you have to add that manually. (It will very probably be included in some upcoming update or release.) This is one thing that is missing from the built-in gitignore. Adding those few lines is a no-brainer, you just include this:

        # NuGet Packages
        packages/*
        *.nupkg
        # Enable "build/" folder in the NuGet Packages folder since
        # NuGet packages use it for MSBuild targets.
        # This line needs to be after the ignore of the build folder
        # (and the packages folder if the line above has been uncommented)
        !packages/build/

    Now, if you are like me, and you probably are, you add git repos faster than you can code, and you end up with a bunch of repos, and then start to wonder: did I fix up those gitignore files, or did I forget?

    The next thing you learn, for example by reading this blog post, is that the "standard" latest Visual Studio gitignore file exists at https://github.com/github/gitignore, under the file name VisualStudio.gitignore. Here you will find all the new stuff; for example, the exclusion of the Roslyn ide folders was committed on May 24th. So, you think, all is well, Visual Studio will use this file... I am very sorry, it won't. Visual Studio comes with a gitignore file that is baked into the release, and that is by this time "very old". The one at github is the latest. The included gitignore misses the exclusion of the NuGet packages folder, and it also misses a lot of new stuff, like the Roslyn stuff.

    So, how do you fix this (while we wait for the next version)? You can manually update it for every single repo you create, which works, but it does get boring after a few times, doesn't it?

    IFix

    Enter IFix, install it from here. IFix is a command line utility (the installer adds it to the system path; you might need to reboot), and one of the commands is gitignore. If you run it from a directory, it will check and optionally fix all gitignores in all git repos in that folder or below. So, start by running it from your C:/<user>/source/repos folder. To run it in check mode – which will not change anything, just do a check:

        IFix gitignore --check

    What it will do is check if the gitignore file is present, and if it is, check if the packages folder has been excluded. If you want to see those that are ok, add the --verbose option too. The result may look like this:

    Fixing missing packages

    Let us fix a single repo by adding the missing packages structure, using IFix --fix. We first check, then fix, then check again to verify that the gitignore is correct, and that the "packages/" part has been added.
    If we open up the .gitignore, we see that the block shown below has been added to the end of the .gitignore file.

    Comparing and fixing with the latest standard Visual Studio gitignore (from github)

    Now, this tells you if you miss the NuGet packages folder, but what about the latest gitignore from github? You can check for this too, just add the option --merge (why it is named so will become clear further down). So:

        IFix gitignore --check --merge

    The result may come out like this (sorry, no colors, not got that far yet here). As you can see, one repo has the latest gitignore (test1); the others are missing either 57 or 150 lines. IFix has three ways to fix this:

        --add
        --merge
        --replace

    The options work as follows:

    Add: used to add the standard gitignore in the cases where a .gitignore file is missing, and only that; it won't touch other existing gitignores.
    Merge: used to merge the missing lines from the standard into the gitignore file. If the gitignore file is missing, the whole standard will be added.
    Replace: used to force a complete replacement of the existing gitignore with the standard one.

    The Add and Replace options can be used without Fix, which means they will actually do the action. If you combine them with --check, they will not touch any files, just do a verification. So a merge check will tell you if there is any difference between the local gitignore and the standard gitignore – a compare, in effect. When you do a fix merge, it will combine the local gitignore with the standard and add what is missing to the end of the local gitignore. It may mean some things are doubled up if they are spelled a bit differently. You might also see some extra comments added, but they do no harm.

    Init new repo with standard gitignore

    One cool thing is that with a new repo, or a repo that is missing its gitignore, you can grab the latest standard just by using either the Add or the Replace command; both will in effect do the same in this case. So:

        IFix gitignore --add

    will add it in, as in the complete example below, where we set up a new git repo and add in the latest standard gitignore.

    Notes

    The project is open sourced at github, and you can also report issues there.

    Read the article

  • Why might changes be populated from one NSManagedObjectContext to another without an explicit merge?

    - by Mike Laurence
    I'm working on an object import feature that utilizes multiple threads/NSManagedObjectContexts, using http://www.mac-developer-network.com/columns/coredata/may2009/ as my guide (note that I am developing for iPhone). For some reason, when I save one of my contexts the other is immediately updated with the changes, even though I have commented out my calls to mergeChangesFromContextDidSaveNotification. Are there any reasons the contexts might be merging into one another without an explicit call? Here is a log of what's going on:

    // 1.) Main context is saved with "Peter Gabriel"
    // 2.) Test context is created, begins with same contents as main context
    // 3.) Main context is inserted with "Spoon"
    // 4.) Test context is inserted with "Phoenix"
    // Contents at this point:
    CoreTest[4341:903] Artists in main context: ("Peter Gabriel", "Spoon")
    CoreTest[4341:903] Artists in test context: ("Peter Gabriel", "Phoenix")
    // 5.) testContext is saved
    // New contents of contexts:
    CoreTest[4341:903] Artists in main context: ("Peter Gabriel", "Phoenix", "Spoon")
    CoreTest[4341:903] Artists in test context: ("Peter Gabriel", "Phoenix")

    As you can see, the test context is saved midway through, and the main context suddenly has the new objects from the test context, even though I haven't performed the whole NSManagedObjectContextDidSaveNotification/mergeChangesFromContext combo. My understanding is that no changes will ever be merged unless done so explicitly... does anyone know what's going on here?

    Read the article

  • iPad: Merge concept of SplitViewController and NavigationController in RootView?

    - by MikeN
    I'm having trouble merging the two concepts of using a SplitViewController in my main view and having the "RootView" controller that controls the left pane's popup/sidebar table view. I want the left "RootView" to act as a navigation menu, but how do I do this when the RootView is tied through MainWindow.xib into the left pane of the SplitView? Basically, I want the left navigation to work just like the built-in email application's folder drilldown navigation. Is there an example iPad project out there that uses both a SplitView and a NavigationView for the left/Root pane?

    Read the article

  • LVDiff not working in Git

    - by Tanner
    I'm trying to get lvdiff from the meta-diff suite to work with Git. My .gitconfig looks like this:

        [gui]
            recentrepo = C:/Users/Tanner/Desktop/FIRST 2010 Beta/Java/LoganRover
        [user]
            name = Tanner Smith
            email = [email protected]
        [merge "labview"]
            name = LabVIEW 3-Way Merge
            driver = 'C:/Program Files/National Instruments/Shared/LabVIEW Merge/LVMerge.exe' 'C:/Program Files/National Instruments/LabVIEW 8.6/LabVIEW.exe' %O %B %A %A
            recursive = binary
        [diff "lvdiff"]
            #command = 'C:/Program Files/meta-diff suite/lvdiff.exe'
            external = C:/Users/Tanner/Desktop/FIRST 2010 Beta/lvdiff.sh
        [core]
            autocrlf = true

    lvdiff.sh looks like this:

        #!/bin/sh
        "C:/Program Files/meta-diff suite/lvdiff.exe" "$2" "$5" | cat

    And my .gitattributes file looks like this:

        # Use a custom driver to merge LabVIEW files
        *.vi merge=labview
        # Use lvdiff as the external diff program for LabVIEW files
        *.vi diff=lvdiff

    But every time I do a diff, all Git returns is:

        diff --git a/Build DashBoard Data.vi b/Build DashBoard Data.vi
        index fd50547..662237f 100644
        Binary files a/Build DashBoard Data.vi and b/Build DashBoard Data.vi differ

    It is like it is not using it or even recognizing my changes. Any ideas?

    Read the article

  • Merge functionality of two xsl files into a single file (not an xsl import or include issue)

    - by anuamb
    I have two xsl files; both of them perform different tasks on the source xml, one after another. Now I need a single xsl file that performs both tasks (it's not an issue of xsl:import or xsl:include). Say my source xml is:

        <LIST_R7P1_1>
          <R7P1_1>
            <LVL2>
              <ORIG_EXP_PRE_CONV>#+#</ORIG_EXP_PRE_CONV>
              <EXP_AFT_CONV>abc</EXP_AFT_CONV>
              <GUARANTEE_AMOUNT>#+#</GUARANTEE_AMOUNT>
              <CREDIT_DER/>
            </LVL2>
            <LVL21>
              <AZ>#+#</AZ>
              <BZ>bz1</BZ>
              <AZ>az2</AZ>
              <BZ>#+#</BZ>
              <CZ/>
            </LVL21>
          </R7P1_1>
        </LIST_R7P1_1>

    My first xsl (tr1.xsl) removes all nodes whose value is blank or null:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="@*|node()">
            <xsl:if test=". != '' or ./@* != ''">
              <xsl:copy>
                <xsl:apply-templates select="@*|node()"/>
              </xsl:copy>
            </xsl:if>
          </xsl:template>
        </xsl:stylesheet>

    The output here is:

        <LIST_R7P1_1>
          <R7P1_1>
            <LVL2>
              <ORIG_EXP_PRE_CONV>#+#</ORIG_EXP_PRE_CONV>
              <EXP_AFT_CONV>abc</EXP_AFT_CONV>
              <GUARANTEE_AMOUNT>#+#</GUARANTEE_AMOUNT>
            </LVL2>
            <LVL21>
              <AZ>#+#</AZ>
              <BZ>bz1</BZ>
              <AZ>az2</AZ>
              <BZ>#+#</BZ>
            </LVL21>
          </R7P1_1>
        </LIST_R7P1_1>

    And my second xsl (tr2.xsl) does a global replace (of #+# with blank text '') on the output of the first:

        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
          <xsl:template name="globalReplace">
            <xsl:param name="outputString"/>
            <xsl:param name="target"/>
            <xsl:param name="replacement"/>
            <xsl:choose>
              <xsl:when test="contains($outputString,$target)">
                <xsl:value-of select="concat(substring-before($outputString,$target), $replacement)"/>
                <xsl:call-template name="globalReplace">
                  <xsl:with-param name="outputString" select="substring-after($outputString,$target)"/>
                  <xsl:with-param name="target" select="$target"/>
                  <xsl:with-param name="replacement" select="$replacement"/>
                </xsl:call-template>
              </xsl:when>
              <xsl:otherwise>
                <xsl:value-of select="$outputString"/>
              </xsl:otherwise>
            </xsl:choose>
          </xsl:template>
          <xsl:template match="text()">
            <xsl:call-template name="globalReplace">
              <xsl:with-param name="outputString" select="."/>
              <xsl:with-param name="target" select="'#+#'"/>
              <xsl:with-param name="replacement" select="''"/>
            </xsl:call-template>
          </xsl:template>
          <xsl:template match="@*|*">
            <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:template>
        </xsl:stylesheet>

    So my final output is:

        <LIST_R7P1_1>
          <R7P1_1>
            <LVL2>
              <ORIG_EXP_PRE_CONV/>
              <EXP_AFT_CONV>abc</EXP_AFT_CONV>
              <GUARANTEE_AMOUNT/>
            </LVL2>
            <LVL21>
              <AZ/>
              <BZ>bz1</BZ>
              <AZ>az2</AZ>
              <BZ/>
            </LVL21>
          </R7P1_1>
        </LIST_R7P1_1>

    My concern is that instead of these two xsl files (tr1.xsl and tr2.xsl) I only need a single xsl (tr.xsl) that gives me the final output. Say when I combine the two as:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="@*|node()">
            <xsl:if test=". != '' or ./@* != ''">
              <xsl:copy>
                <xsl:apply-templates select="@*|node()"/>
              </xsl:copy>
            </xsl:if>
          </xsl:template>
          <xsl:template name="globalReplace">
            <xsl:param name="outputString"/>
            <xsl:param name="target"/>
            <xsl:param name="replacement"/>
            <xsl:choose>
              <xsl:when test="contains($outputString,$target)">
                <xsl:value-of select="concat(substring-before($outputString,$target), $replacement)"/>
                <xsl:call-template name="globalReplace">
                  <xsl:with-param name="outputString" select="substring-after($outputString,$target)"/>
                  <xsl:with-param name="target" select="$target"/>
                  <xsl:with-param name="replacement" select="$replacement"/>
                </xsl:call-template>
              </xsl:when>
              <xsl:otherwise>
                <xsl:value-of select="$outputString"/>
              </xsl:otherwise>
            </xsl:choose>
          </xsl:template>
          <xsl:template match="text()">
            <xsl:call-template name="globalReplace">
              <xsl:with-param name="outputString" select="."/>
              <xsl:with-param name="target" select="'#+#'"/>
              <xsl:with-param name="replacement" select="''"/>
            </xsl:call-template>
          </xsl:template>
          <xsl:template match="@*|*">
            <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:template>
        </xsl:stylesheet>

    it outputs:

        <LIST_R7P1_1>
          <R7P1_1>
            <LVL2>
              <ORIG_EXP_PRE_CONV/>
              <EXP_AFT_CONV>abc</EXP_AFT_CONV>
              <GUARANTEE_AMOUNT/>
              <CREDIT_DER/>
            </LVL2>
            <LVL21>
              <AZ/>
              <BZ>bz1</BZ>
              <AZ>az2</AZ>
              <BZ/>
              <CZ/>
            </LVL21>
          </R7P1_1>
        </LIST_R7P1_1>

    Only the replacement is performed, but not the null/blank node removal.

    Read the article

  • What is the best way to join/merge two tables by column cell matching in Excel?

    - by blunders
    I've found an Excel add-in for sale that appears to do what I need, but I'd rather have code that's open for me to use as I wish. While a GUI is nice, it's not required. To make the question more clear, here are two sample "input" tables in tab-delimited form, and the resulting output table:

        SAMPLE_INPUT_TABLE_01
        NAME<tab>Location
        John<tab>US
        Mike<tab>CN
        Tom<tab>CA
        Sue<tab>RU

        SAMPLE_INPUT_TABLE_02
        NAME<tab>Age
        John<tab>18
        Mike<tab>36
        Tom<tab>54
        Mary<tab>18

        SAMPLE_OUTPUT_TABLE
        NAME<tab>Age<tab>Location
        John<tab>18<tab>US
        Mike<tab>36<tab>CN
        Tom<tab>54<tab>CA
        Sue<tab>""<tab>RU
        Mary<tab>18<tab>""

    If it matters, I'm using Office 2010 on Windows 7.
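
    For reference, the desired output is what SQL would call a full outer join on NAME; stated in SQL terms (using the sample table names above) as a point of comparison for whatever Excel approach is chosen:

        SELECT COALESCE(a.NAME, b.NAME) AS NAME, b.Age, a.Location
        FROM SAMPLE_INPUT_TABLE_01 AS a
        FULL OUTER JOIN SAMPLE_INPUT_TABLE_02 AS b ON b.NAME = a.NAME;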

    Read the article

  • Is it possible to "merge" the values of multiple records into a single field without using a stored procedure?

    - by j0rd4n
    A co-worker posed this question to me, and I told them, "No, you'll need to write a sproc for that". But I thought I'd give them a chance and put this out to the community. Essentially, they have a table with keys mapping to multiple values. For a report, they want to aggregate on the key and "mash" all of the values into a single field. Here's a visual:

        Key  Value
        ---  -----
        1    A
        1    B
        1    C
        2    X
        2    Y

    The result would be as follows:

        Key  Value
        ---  -----
        1    A,B,C
        2    X,Y

    They need this in SQL Server 2005. Again, I think they need to write a stored procedure, but if anyone knows a magic out-of-the-box function that does this, I'd be impressed.
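
    For the record, SQL Server 2005 can do this without a stored procedure using FOR XML PATH; a minimal sketch, assuming a hypothetical table KeyValue with columns [Key] and [Value]:

        SELECT k.[Key],
               STUFF((SELECT ',' + v.[Value]
                      FROM KeyValue AS v
                      WHERE v.[Key] = k.[Key]
                      FOR XML PATH('')), 1, 1, '') AS [Value]
        FROM KeyValue AS k
        GROUP BY k.[Key];

    The inner query concatenates each key's values with leading commas, and STUFF strips the first comma.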

    Read the article

  • Mercurial: How to merge 2 repositories that share a common ancestor but are not clones of the same repository?

    - by sylvanaar
    I am using hgsubversion, and I have 2 different hg repositories: one from our svn trunk, and one from a branch of the trunk. I would like to link them somehow. At some point in their history both hg repositories will be identical; is there some way to join them? In other words, is there a way to relate the repositories from within hg? The technique I am currently using is to just export the second repository over top of the working copy of the revision they share, and then commit that working copy as a branch in hg, but I lose the history this way. Any advice would be great.

    Read the article

  • Where do objects merge/join data in a 3-tier model?

    - by BerggreenDK
    It's probably a simple 3-tier problem. I just want to make sure we use best practice for this, and I am not that familiar with the structures yet. We have the 3 tiers:

    GUI: ASP.NET for the presentation layer (first platform)
    BAL: the business layer will handle the logic on a web server in C#, so we can use it for both webforms/MVC and web services
    DAL: LINQ to SQL in the data layer, returning business objects, not LINQ types
    DB: the SQL will be Microsoft SQL Server/Express (haven't decided yet)

    Let's think of a setup where we have a database of [Person]s. They can all have multiple [Address]es, and we have a complete list of all [PostalCode]s and corresponding city names etc. The deal is that we have joined a lot of details from other tables. {Relations}/[tables]:

        [Person]:1 --- N:{PersonAddress}:M --- 1:[Address]
        [Address]:N --- 1:[PostalCode]

    Now we want to build the DAL for Person. How should the PersonBO look, and when do the joins occur? Is it a business-layer problem to fetch all city names and possible addresses per Person, or should the DAL complete all this before returning the PersonBO to the BAL?

        class PersonBO
        {
            public int ID {get;set;}
            public string Name {get;set;}
            public List<AddressBO> Addresses {get;set;} // Question #1
        }
        // Q1: do we retrieve the objects before returning the PersonBO, and should it
        // be an array instead? Or is this totally wrong for n-tier/3-tier?

        class AddressBO
        {
            public int ID {get;set;}
            public string StreetName {get;set;}
            public int PostalCode {get;set;} // Question #2
        }
        // Q2: do we make the lookup here or just leave the PostalCode for later lookup?

    Can anyone explain in what order to pull which objects? Constructive criticism is very welcome. :o)

    Read the article

  • Merge two text files at a specific location, sed or awk.

    - by S1syphus
    I have two text files, and I want to place the text of one in the middle of the other. I did some research and found information about adding single strings: I have a comment in the second text file called STUFFGOESHERE, so I tried:

        sed '/^STUFFGOESHERE/a file1.txt' file2.txt
        sed: 1: "/^STUFFGOESHERE/a long.txt": command a expects \ followed by text

    So I tried something different, trying to place the contents of the text file based on a given line, but no luck. Any ideas?

    Read the article

  • Loading datasets from the datastore and merging them into a single dictionary: resource problem

    - by fredrik
    Hi, I have a product database that contains products, parts, and labels for each part based on langcodes. The problem I'm having, and haven't gotten around, is the huge amount of resources used to get the different datasets and merge them into a dict to suit my needs. The products in the database are based on a number of parts that are of a certain type (i.e. color, size), and each part has a label for each language. I created 4 different models for this: Products, ProductParts, ProductPartTypes and ProductPartLabels. I've narrowed it down to about 10 lines of code that seem to generate the problem. Currently I have 3 products, 3 types, 3 parts for each type, and 2 languages, and the request takes a whopping 5500 ms to generate.

        for product in productData:
            productDict = {}
            typeDict = {}
            productDict['productName'] = product.name

            cache_key = 'productparts_%s' % (slugify(product.key()))
            partData = memcache.get(cache_key)

            if not partData:
                for type in typeData:
                    typeDict[type.typeId] = { 'default' : '', 'optional' : [] }

                ## Start of problem lines ##
                for defaultPart in product.defaultPartsData:
                    for label in labelsForLangCode:
                        if label.key() in defaultPart.partLabelList:
                            typeDict[defaultPart.type.typeId]['default'] = label.partLangLabel

                for optionalPart in product.optionalPartsData:
                    for label in labelsForLangCode:
                        if label.key() in optionalPart.partLabelList:
                            typeDict[optionalPart.type.typeId]['optional'].append(label.partLangLabel)
                ## End of problem lines ##

                memcache.add(cache_key, typeDict, 500)
                partData = memcache.get(cache_key)

            productDict['parts'] = partData
            productList.append(productDict)

    I guess the problem is that there are too many for loops and I have to iterate over the same data over and over again. labelsForLangCode gets all labels from ProductPartLabels that match the current langCode. All parts for a product are stored in a db.ListProperty(db.Key), and the same goes for all labels for a part. The reason I need the somewhat complex dict is that I want to display all data for a product with its default parts and show a selector for the optional ones. The defaultPartsData and optionalPartsData are properties in the Product model that look like this:

        @property
        def defaultPartsData(self):
            return ProductParts.gql('WHERE __key__ IN :key', key = self.defaultParts)

        @property
        def optionalPartsData(self):
            return ProductParts.gql('WHERE __key__ IN :key', key = self.optionalParts)

    When the completed dict is in the memcache it works smoothly, but isn't the memcache reset if the application goes into hibernation? Also, I would like to show the page for a first-time user (memcache empty) without the enormous delay. And as I said above, this is only a small amount of parts/products; what will the result be with 30 products with 100 parts? Is one solution to create a scheduled task that caches it in the memcache every hour? Is that efficient? I know this is a lot to take in, but I'm stuck. I've been at this for about 12 hours straight and can't figure out a solution. ..fredrik

    Read the article

  • How can I merge CSS definitions in files into inline style attributes, using Perl?

    - by mintywalker
    Many email clients don't like linked CSS stylesheets, or even the embedded <style> tag, but rather want the CSS to appear inline as style attributes on all your markup.

        BAD:   <link rel=stylesheet type="text/css" href="/style.css">
        BAD:   <style type="text/css">...</style>
        WORKS: <h1 style="margin: 0">...</h1>

    However, this inline style attribute approach is a right pain to manage. I've found tools for Ruby and PHP that will take a CSS file and some separate markup as input and return you the merged result – a single file of markup with all the CSS converted to style attributes. I'm looking for a Perl solution to this problem, but I've not found one on CPAN or by searching Google. Any pointers? Alternatively, are there CPAN modules one could combine to achieve the same result?

        Ruby: http://code.dunae.ca/premailer.web/
        PHP:  http://www.pelagodesign.com/sidecar/emogrifier/
        Perl: ?

    Read the article

  • Are there any FREE Xaml Diff/Merge tools available?

    - by Ahikam
    I am searching for a free diff tool with support for XAML's native syntax and markup, but have failed to find one. I like Altova's DiffDog, but it's paid. CodeCompare is a useful tool; its integration into Visual Studio and use of native editors give it some real value, and it's a perfect solution for XML. However, it does not support XAML editors. Can anyone recommend a diff tool for XAML?

    Read the article

  • In MySQL, is it possible to SELECT from two tables and merge the columns?

    - by Travis
    If I have two tables in MySQL that have similar columns...

        TABLEA
            id
            name
            somefield1

        TABLEB
            id
            name
            somefield1
            somefield2

    How do I structure a SELECT statement so that I can SELECT from both tables simultaneously and have the result sets merged for the columns that are the same? So, for example, I am hoping to do something like...

        SELECT name, somefield1 FROM TABLEA, TABLEB WHERE name="mooseburgers";

    ...and have the name and somefield1 columns from both tables merged together in the result set. Thank you for your help!
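
    If the goal is to stack the rows from both tables into one result set, a UNION over the shared columns is the usual sketch:

        SELECT name, somefield1 FROM TABLEA WHERE name = 'mooseburgers'
        UNION ALL
        SELECT name, somefield1 FROM TABLEB WHERE name = 'mooseburgers';

    UNION ALL keeps duplicate rows; plain UNION would collapse rows that are identical across the two tables.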

    Read the article

  • Best way to merge two identical ASP.NET web sites?

    - by ase69s
    We have two websites whose only difference is in the design (different images, styles, layouts, etc.), but the web structure of files and C# code is the same, so we want to simplify their maintenance... The actual structure would be:

        DefaultA.aspx
        DefaultA.aspx.cs
        DefaultB.aspx
        DefaultB.aspx.cs
        LoginA.aspx
        LoginA.aspx.cs
        LoginB.aspx
        LoginB.aspx.cs

    One idea would be to change the design differences at runtime depending on the originating website, but we don't much like this because of performance, ease of designing them, and URL confusion... Another is sharing the .cs file (both aspx pages inheriting from and using the same .cs), but we have never done this or seen it done in any website before, so we wonder if it is a good approach... What do you think? Is there any other way that is better in terms of performance vs. ease of development?

    Read the article

  • How to merge two databases into one in SQL Server 2008?

    - by SzamDev
    Hi. I have 2 PCs, each with SQL Server 2008 installed and a database with data in it. I need a way to move the data in my DB from one SQL Server to the other (another PC which has the same DB). There is one problem: the ID column. Because each DB on my 2 PCs has data in it, this column counts from 1, 2, 3, ... in both, so rows from one DB would conflict with rows in the other. Is there any way to solve my problem and move the data successfully?
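
    A minimal T-SQL sketch of one approach, assuming both databases end up attached to (or linked from) the same instance, and a hypothetical table MyTable with an IDENTITY ID column plus Col1 and Col2: omit the ID on insert and let the target database renumber the incoming rows.

        INSERT INTO TargetDB.dbo.MyTable (Col1, Col2)
        SELECT s.Col1, s.Col2
        FROM SourceDB.dbo.MyTable AS s;

    If other tables reference ID as a foreign key, the old-to-new ID mapping has to be captured (for example with the OUTPUT clause) and those references remapped as well.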

    Read the article

  • How to merge existing row with new data in SQLite?

    - by CSharperWithJava
    I have a database full of simple note data, with columns for title, due date, priority, and details. There is also an _id column, the PRIMARY KEY int. Say I have a note in the table already, with some fields filled and the rest NULL, and I also have a set of data that will fill all those fields. Is there a way that I can write data only to the fields that are NULL? I can't overwrite existing data, but I'd like to add data to the NULL columns. I know the rowId of the target row. If my target row had a rowId of 5, I could do something like this:

        UPDATE SET duedate='some date', priority='2', details='some text' WHERE _id=5

    But that would overwrite all the data in that row, and I don't want to lose any data that might be there. How can I change this statement to avoid writing to non-NULL fields?
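
    A minimal sketch of one way to do this in SQLite, using COALESCE so each column keeps its current value unless it is NULL (the table name notes is hypothetical):

        UPDATE notes
        SET duedate  = COALESCE(duedate,  'some date'),
            priority = COALESCE(priority, '2'),
            details  = COALESCE(details,  'some text')
        WHERE _id = 5;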

    Read the article

  • On Google AppEngine what is the best way to merge two tables?

    - by gpjones
    If I have two tables, Company and Sales, and I want to display both sets of data in a single list, how would I do this on Google App Engine using GQL? The models are:

        class Company(db.Model):
            companyname = db.StringProperty()
            companyid = db.StringProperty()
            salesperson = db.StringProperty()

        class Sales(db.Model):
            companyid = db.StringProperty()
            weeklysales = db.StringProperty()
            monthlysales = db.StringProperty()

    The views are:

        def company(request):
            companys = db.GqlQuery("SELECT * FROM Company")
            sales = db.GqlQuery("SELECT * FROM Sales")
            template_values = {
                'companys' : companys,
                'sales' : sales
            }
            return respond(request, 'list', template_values)

    The list html includes:

        {% for company in companys %}
            {% for sale in sales %}
                {% ifequal company.companyid sale.companyid %}
                    {{ sale.weeklysales }}
                    {{ sale.monthlysales }}
                {% endifequal %}
            {% endfor %}
            {{ company.companyname }}
            {{ company.companyid }}
            {{ company.salesperson }}
        {% endfor %}

    Any help would be greatly appreciated.

    Read the article

  • PHP resized image functions and S3 upload functions - but how to merge the two?

    - by chocolatecoco
    I am using S3 to store images, and I am resizing and compressing the images with PHP before they get uploaded. I'm using this class for storing the images to an S3 bucket – http://undesigned.org.za/2007/10/22/amazon-s3-php-class. This all works fine if I'm not doing any file processing before the file is uploaded, because it reads the file upload from the $_FILES array. The problem is that I am resizing and compressing the image before storing it in the S3 bucket, so I'm no longer able to read from the $_FILES array. The functions for resizing:

        public function resizeImage($newWidth, $newHeight, $option="auto")
        {
            // *** Get optimal width and height - based on $option
            $optionArray = $this->getDimensions($newWidth, $newHeight, $option);
            $optimalWidth = $optionArray['optimalWidth'];
            $optimalHeight = $optionArray['optimalHeight'];

            // *** Resample - create image canvas of x, y size
            $this->imageResized = imagecreatetruecolor($optimalWidth, $optimalHeight);
            imagecopyresampled($this->imageResized, $this->image, 0, 0, 0, 0, $optimalWidth, $optimalHeight, $this->width, $this->height);

            // *** If option is 'crop', then crop too
            if ($option == 'crop') {
                $this->crop($optimalWidth, $optimalHeight, $newWidth, $newHeight);
            }
        }

    The script I am using to store the file after resizing and compressing to a local directory:

        public function saveImage($savePath, $imageQuality="100")
        {
            // *** Get extension
            $extension = strrchr($savePath, '.');
            $extension = strtolower($extension);
            switch($extension)
            {
                case '.jpg':
                case '.jpeg':
                    if (imagetypes() & IMG_JPG) {
                        imagejpeg($this->imageResized, $savePath, $imageQuality);
                    }
                    break;
                case '.gif':
                    if (imagetypes() & IMG_GIF) {
                        imagegif($this->imageResized, $savePath);
                    }
                    break;
                case '.png':
                    // *** Scale quality from 0-100 to 0-9
                    $scaleQuality = round(($imageQuality/100) * 9);
                    // *** Invert quality setting as 0 is best, not 9
                    $invertScaleQuality = 9 - $scaleQuality;
                    if (imagetypes() & IMG_PNG) {
                        imagepng($this->imageResized, $savePath, $invertScaleQuality);
                    }
                    break;
                // ... etc
                default:
                    // *** No extension - no save.
                    break;
            }
            imagedestroy($this->imageResized);
        }

    with this PHP code to invoke it:

        $resizeObj = new resize("$images_dir/$filename");
        $resizeObj->resizeImage($thumbnail_width, $thumbnail_height, 'crop');
        $resizeObj->saveImage($images_dir."/tb_".$filename, 90);

    How do I modify the code above so I can pass the result through this function?

        $s3->putObjectFile($thefile, "s3bucket", $s3directory, S3::ACL_PUBLIC_READ)

    Read the article
