Search Results

Search found 3710 results on 149 pages for 'databases'.

Page 128/149

  • Changing character encoding in MySQL, PHP scripts, HTML

    - by Sandman
    So, I have built on this system for quite some time, and it currently outputs Latin1 (ISO-8859-1) to the web browser. These are the components:

    - MySQL - all data is stored with the Latin1 character set
    - PHP - all PHP text files are stored on disk with Latin1 encoding
    - HTML - the output has the http-equiv="content-type" content="text/html; charset=iso-8859-1" meta tag

    So, I'm trying to understand how the encodings of the different parts come into play in my workflow. If I open a PHP script, change its encoding within the text editor to UTF-8, save it back to disk and reload the web browser, the text is all messed up - unless the text comes from the DB. If I change the encoding of the DB to UTF-8 and keep the PHP files in Latin1, I have to use utf8_decode() for the data to display correctly. And if I change the HTML code, the browser will read it incorrectly.

    So yeah, I realise that if I want to "upgrade" to UTF-8 I have to update all three parts of this setup for it to work correctly, but since it's a huge system with some 180k lines of PHP code and millions of posts in a lot of databases/tables, I don't want to start something like this without understanding everything correctly. What haven't I thought about? What could mess this up beyond fixing? What are the procedures for changing the encoding of an entire MySQL installation, and what's the easiest way to change the encoding of hundreds or thousands of PHP files on disk? The META tag is luckily added dynamically, so I'll change that in one place only :) Let me hear about your experiences with this.
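
    A minimal conversion sketch, assuming a Unix-like shell with iconv available and that every .php file really is pure Latin1; the database and table names are placeholders, and none of this is a tested migration:

        # convert PHP source files in place from ISO-8859-1 to UTF-8
        find . -name '*.php' -print0 | while IFS= read -r -d '' f; do
            iconv -f ISO-8859-1 -t UTF-8 "$f" > "$f.utf8" && mv "$f.utf8" "$f"
        done

        # on the MySQL side, convert data and column definitions table by table
        # (back up first)
        mysql -e "ALTER TABLE mydb.some_table CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;"

    The usual gotcha is data that is already UTF-8 sitting in Latin1 columns: converting such a table double-encodes it, so sampling a few rows before and after is worth the time.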

    Read the article

  • Why use Entity Framework over Linq2SQL ...

    - by Refracted Paladin
    To be clear, I am not asking for a side-by-side comparison, which has already been asked ad nauseam here on SO. I am also not asking if Linq2Sql is dead, as I don't care.

    What I am asking is this: I am building internal apps only for a non-profit organization. I am the only developer on staff. We ALWAYS use SQL Server as our database backend. I design and build the databases as well. I have used L2S successfully a couple of times already. Taking all this into consideration, can someone offer me a compelling reason to use EF instead of L2S?

    I was at Code Camp this weekend and, after an hour-long demonstration on EF, all of which I could have done in L2S, I asked this same question. The speaker's answer was, "L2S is dead..." Very well then! NOT! (see here) I understand EF is what MS WANTS us to use in the future (see here) and that it offers many more customization options. What I can't figure out is whether any of that should, or does, matter for me in this environment.

    One particular issue we have here is that I inherited the Core App, which was built on 4 different SQL databases. L2S has great difficulty with this, but when I asked the aforementioned speaker if EF would help me in this regard he said "No!"

    Read the article

  • Database frontend for multiple db engines

    - by xeroxed_yeti
    Hey Stackoverflow, yeah, it's spring and a lot of things are happening to me... I'm also changing some software on my computer, because suddenly everything seemed boring after starting my laptop. I even changed my wallpaper!!!

    Besides that, I'm looking for a new database frontend, and after several Google queries I didn't find the right software. You have to know, my laptop and me are very very special :) I'm looking for a database frontend with the following features:

    - can access PostgreSQL and MySQL databases
    - can handle schemata
    - offers a nice SQL query tool
    - supports import and export functionality (something like tab-separated text files)
    - is free
    - looks awesome - every time a colleague comes to my office he must get the feeling: oh boy, this man really knows his job and should get more money!

    At the moment I use phpMyAdmin, phpPgAdmin, pgAdmin III, mysqladmin and DbVisualizer. Furthermore, I was a big fan of Aqua Data Studio until it became commercial. That tool offers a great variety of functionality that can simplify a programmer's life. However, now you have to buy a license... I'm a scientist and money for software is limited =) So it's my first time (question) here at stackoverflow, please be cheerful :)

    Read the article

  • Can I copy an entire Magento application to a new server?

    - by Gapton
    I am quite new to Magento, and I have a case where a client has a running ecommerce website built with Magento, running on an Amazon AWS EC2 server. If I can locate the web root (/var/www/) and download everything from it (say via FTP), I should be able to boot up my virtual machine with LAMP installed, put every single file in there, and it should run, right?

    So I did just that, and Magento is giving me an error. I am guessing there are configs somewhere that need to be changed in order to make it work, or even a lot of paths that need to be changed, etc. The databases also need to be replicated. What are the typical things I should get right? The database is one, and the EC2 instance uses nginx while I use plain Apache; I guess that won't work straight out of the box, and I will probably have to install nginx and a lot of other stuff. But basically, if I set up the environment right, it should run. Is my assumption correct? Thanks for answering!
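
    For what it's worth, the two places Magento 1 normally keeps environment-specific settings are the database credentials in app/etc/local.xml and the base URLs stored in the core_config_data table. A sketch of the relevant local.xml fragment, with placeholder values:

        <!-- app/etc/local.xml : point the copied site at the new database -->
        <connection>
            <host><![CDATA[localhost]]></host>
            <username><![CDATA[magento_user]]></username>
            <password><![CDATA[secret]]></password>
            <dbname><![CDATA[magento]]></dbname>
        </connection>

    Clearing var/cache/ after the move also tends to save some head-scratching.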

    Read the article

  • Rails Binary Stream support

    - by Craig Walker
    I'm going to be starting a project soon that requires support for large-ish binary files. I'd like to use Ruby on Rails for the webapp, but I'm concerned about the BLOB support. In my experience with other languages, frameworks, and databases, BLOBs are often overlooked and thus have poor, difficult, and/or buggy functionality. Does RoR support BLOBs adequately? Are there any gotchas that creep up once you're already committed to Rails?

    BTW: I want to be using PostgreSQL and/or MySQL as the backend database. Obviously, BLOB support in the underlying database is important. For the moment, I want to avoid focusing on the DB's BLOB capabilities; I'm more interested in how Rails itself reacts. Ideally, Rails should be hiding the details of the database from me, so I should be able to switch from one to the other. If this is not the case (i.e. there's some problem with using Rails with a particular DB) then please do mention it.

    UPDATE: Also, I'm not just talking about ActiveRecord here. I'll need to handle binary files on the HTTP side (file upload, effectively). That means getting access to the appropriate HTTP headers and streams via Rails. I've updated the question title and description to reflect this.

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document db, probably MongoDB, in an ASP.Net MVC web shop. A little reasoning at the beginning: there are about 2 million products. The product model would be pretty bad for an RDBMS, as there'd be many different kinds of products with unique attributes. For example, there'd be books, which have isbn, authors, title, pages etc., as well as DVDs with play time, directors, artists etc., and quite a few more types. In the end, I'd have about 9 different products with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most.

    The three commonly used ways to handle this in an RDBMS would be:

    - EAV model, which would have pretty bad performance characteristics and would make it either impractical, or perform even worse, if I'd like to display the author of a book in a list of different products (think start page, recommended products etc.).
    - Ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row-wise), I don't have any experience with tables of more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications.
    - Create a table for each product type: I personally don't like this approach, as it complicates everything else.

    C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed on list views etc.). In the app I'll use BookBehavior / DvdBehaviour, which are wrappers around a product DTO but only expose the relevant properties. (A sketch of that DTO shape follows below.)

    My questions now:

    - Are my performance concerns with the many-columns approach valid?
    - Did I overlook something, and is there a much better way to do it in an RDBMS?
    - Is MongoDB on Windows stable enough?
    - Does my approach with different behaviour wrappers make sense?
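
    A rough sketch of the DTO shape described above, with illustrative names (plain C#, not tied to the NoRM API):

        public class Product
        {
            public string Id { get; set; }
            public string Title { get; set; }
            public decimal Price { get; set; }

            // detail groups: only the one matching the product type is populated
            public BookDetails Book { get; set; }
            public DvdDetails Dvd { get; set; }
        }

        public class BookDetails
        {
            public string Isbn { get; set; }
            public string[] Authors { get; set; }
            public int Pages { get; set; }
        }

        public class DvdDetails
        {
            public int PlayTimeMinutes { get; set; }
            public string[] Directors { get; set; }
            public string[] Artists { get; set; }
        }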

    Read the article

  • Sorting a list in PHP

    - by Andy
    Alright, so I've just (almost) finished my first little PHP script. Basically, all it does is handle forms. It writes whatever the user puts into the fields of the form to a text file, then I include that text file in the index of the little page I have set up.

    Currently it writes to the beginning of the text file (without overwriting previous data). But several users want this list to be alphabetically sorted instead. So what I want to do is make whatever they submit fall into the list in alphabetical order. The thing here is also that all I use in the text file are divs. The list is basically 'divided' into 3 parts: 'Title', 'Link', and 'Poster'. All I have done is position them with CSS. So, can I sort this list (by the titles, in this case) alphabetically and still have the 'Link' and 'Poster' information assigned the way they already are, but just with the titles sorted? I don't use databases at all on my site, so there is no database handling in this script (mainly because I'm not experienced at all with that).
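
    One way to keep the three fields together while sorting is to treat each submission as one record and sort on the title only. A sketch, assuming the entries are first collected into an array (the field names and file layout are illustrative):

        <?php
        // each entry keeps its three fields together
        $entries[] = array('title' => $title, 'link' => $link, 'poster' => $poster);

        // case-insensitive sort on the title field only
        usort($entries, function ($a, $b) {
            return strcasecmp($a['title'], $b['title']);
        });

        // rebuild the include file from the sorted entries
        $html = '';
        foreach ($entries as $e) {
            $html .= '<div class="title">' . $e['title'] . '</div>'
                   . '<div class="link">' . $e['link'] . '</div>'
                   . '<div class="poster">' . $e['poster'] . '</div>' . "\n";
        }
        file_put_contents('list.txt', $html);
        ?>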

    Read the article

  • Looking for combinations of server and embedded database engines

    - by codeelegance
    I'm redesigning an application that will be run as both a single-user and a multiuser application. It is a .NET 2.0 application. I'm looking for server and embedded databases that work well together. I want to deploy the embedded database in the single-user setup and, of course, the server in the multiuser setup.

    Past releases have been based on MSDE, but in the past year we've been having a lot of install issues: new installs hanging and leaving the system in an unknown state, upgrades disconnecting the database, etc. I migrated the application to SQL Server 2005 and the install is more reliable (as long as a user doesn't try to install over a broken MSDE installation). Since next year's release will be a complete redesign, I figured now's the best time to address the database issue as well. The database has been abstracted from the rest of the application, so I just need to choose which database(s) to use and write an implementation for each one.

    So far I've considered:

    - SQL Server / SQL Server Compact Edition
    - Firebird (the same DB engine is available in two different server modes and as an embedded dll)

    Each has its own merits, but I'm also interested in any other suggestions. This is a fairly simple program and its data requirements are simple as well. I don't expect it to strain whatever database I eventually choose, so easy configuration and deployment hold more weight than performance.

    Read the article

  • using a database and deploying the application

    - by evan
    I have a WPF application that stores a large amount of information in XML files, and as the user uses the application they add more information to the XML files. It's basically using the XML files as a database. Since over the life of the program the XML files have gotten quite large, and I've been thinking about putting the data on a website, I've been looking into how to move all the information into an SQL database.

    I've used SQL databases with web applications (PHP, Ruby, and ASP.NET) but never with a desktop application. Ideally I'd like to be able to keep all the information in one database file and distribute it along with the application, without requiring the user to connect to a remote database (so they don't need an internet connection - though eventually it would be nice to compare the local file's version with one online somewhere and update if necessary) and without making them install a local database server on their computer. Is this possible?

    I'd also like to use LINQ with any new database solution, so switching to a database doesn't force too many changes (I read the XML with LINQ). I'm sure this question has been asked before and that there are already some good tutorials on the subject, but I just can't find them.
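
    One combination that fits these constraints is SQL Server Compact: a single .sdf file that ships with the application, no server install, and it can be queried through LINQ to SQL. A minimal sketch, with placeholder file, table and entity names:

        // requires references to System.Data.Linq and System.Data.SqlServerCe
        [System.Data.Linq.Mapping.Table(Name = "Notes")]
        public class Note
        {
            [System.Data.Linq.Mapping.Column(IsPrimaryKey = true)] public int Id;
            [System.Data.Linq.Mapping.Column] public string Text;
            [System.Data.Linq.Mapping.Column] public System.DateTime Modified;
        }

        // ...

        var conn = new System.Data.SqlServerCe.SqlCeConnection(
            @"Data Source=|DataDirectory|\appdata.sdf");

        using (var db = new System.Data.Linq.DataContext(conn))
        {
            var cutoff = System.DateTime.Today.AddDays(-7);
            var recent = db.GetTable<Note>()
                           .Where(n => n.Modified > cutoff)
                           .ToList();
        }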

    Read the article

  • Left/Right/Inner joins using C# and LINQ

    - by Keith Barrows
    I am trying to figure out how to do a series of queries to get the updates, deletes and inserts segregated into their own calls. I have 2 tables, one in each of 2 databases. One is a read-only feeds database and the other is the T-SQL R/W production source. There are a few key columns in common between the two. What I am doing to set up is this:

        List<model.AutoWithImage> feedProductList = _dbFeed.AutoWithImage
            .Where(a => a.ClientID == ClientID).ToList();

        List<model.vwCompanyDetails> companyDetailList = _dbRiv.vwCompanyDetails
            .Where(a => a.ClientID == ClientID).ToList();

        foreach (model.vwCompanyDetails companyDetail in companyDetailList)
        {
            List<model.Product> productList = _dbRiv.Product.Include("Company")
                .Where(a => a.Company.CompanyId == companyDetail.CompanyId).ToList();
        }

    Now that I have a (source) list of products from the feed, and an existing (target) list of products from my prod DB, I'd like to do 3 things:

    - Find all SKUs in the feed that are not in the target
    - Find all SKUs that are in both, that are active feed products, and update the target
    - Find all SKUs that are in both, that are inactive, and soft delete from the target

    What are the best practices for doing this without running a double loop? I would prefer a LINQ to Objects solution as I already have my objects.

    EDIT: BTW, I will need to transfer info from feed rows to target rows in the first 2 instances, and just set a flag in the last instance. TIA
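
    A set-based sketch using LINQ to Objects, keyed on SKU through a dictionary so no nested loop is needed (the Sku and IsActive property names are guesses for whatever the real columns are):

        var targetBySku = productList.ToDictionary(p => p.Sku);

        // 1. in the feed but not in the target -> inserts
        var inserts = feedProductList
            .Where(f => !targetBySku.ContainsKey(f.Sku))
            .ToList();

        // 2. in both and active in the feed -> updates
        var updates = feedProductList
            .Where(f => f.IsActive && targetBySku.ContainsKey(f.Sku))
            .ToList();

        // 3. in both but inactive in the feed -> soft deletes on the target
        var softDeletes = feedProductList
            .Where(f => !f.IsActive && targetBySku.ContainsKey(f.Sku))
            .Select(f => targetBySku[f.Sku])
            .ToList();

    Each dictionary lookup is O(1), so the three passes stay linear in the size of the feed.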

    Read the article

  • Sharepoint user details not visible to other users

    - by richardoz
    I am managing a SharePoint site that uses Forms Based Authentication. We have several generic lists, document libraries and active task lists that users can create, update and delete. Users can use the people pickers to select/search for everyone, but they cannot see other users' names, email addresses etc. in display lists or the people pickers. If I log in as the site collection administrator, I can see everyone's details, so I know the data is available.

    Updated details on this problem: non-administrator SharePoint users cannot see other users' information. Example: user A assigns a task to user B. User A creates a new task and uses the people picker to find user B. User B is only visible by the login name "bname", and any information about user B is not visible or searchable within the people picker. Once user B is assigned the task, user A no longer sees the name in the task list - even though user A created it. No Modified By, Created By, Assigned To or Owner field data is visible to non-administrator users.

    Facts:

    - The extranet site is configured to use Forms Based Authentication; the intranet uses Windows authentication
    - Users of both the intranet and extranet have the same problem
    - All databases are local
    - The site uses SSRS integration
    - SharePoint WSS on Windows 2003 Std

    After activating verbose logging, it looks like SharePoint is definitely asking SQL Server for only the user info of the currently logged-in user:

        SELECT TOP 6 /* lots of columns */ FROM UserData
        INNER MERGE JOIN Docs AS t1 ON (
            1 = 1
            AND UserData.[tp_RowOrdinal] = 0
            AND t1.SiteId = UserData.tp_SiteId
            AND t1.SiteId = @L2
            AND t1.DirName = UserData.tp_DirName
            AND t1.LeafName = UserData.tp_LeafName
            AND t1.Level = UserData.tp_Level
            AND t1.IsCurrentVersion = 1
            AND (1 = 1)
        )
        LEFT OUTER JOIN AllUserData AS t2 ON (
            UserData.[tp_Author] = t2.[tp_ID]
            AND UserData.[tp_RowOrdinal] = 0
            AND t2.[tp_RowOrdinal] = 0
            AND ((t2.tp_IsCurrent = 1))
            AND t2.[tp_CalculatedVersion] = 0
            AND t2.[tp_DeleteTransactionId] = 0x
            AND t2.tp_ListId = @L3
            AND UserData.tp_ListId = @L4
            AND t2.[tp_Author] = 162 /* this is the currently logged in user */
        )
        WHERE (UserData.tp_IsCurrent = 1)
            AND UserData.tp_SiteId = @L2
            AND (UserData.tp_DirName = @DN)
            AND UserData.tp_RowOrdinal = 0
            AND (((UserData.[datetime1] IS NULL) OR (UserData.[datetime1] = @L5DTP))
                AND t1.SiteId = @L2
                AND (t1.DirName = @DN))
        ORDER BY UserData.[tp_Modified] DESC, UserData.[tp_ID] ASC

    Again, any ideas would be appreciated.

    Read the article

  • SQLce DAL Linq to Sql or EntityFramework

    - by bretddog
    Hi, I'm learning databases, using SQL CE, and need business object to database mapping. Currently I'm trying to decide whether to use Linq to Sql or EntityFramework. (I understand L2S a bit, but haven't familiarized myself with EF yet.)

    The program will only be developed and used by myself, so I have good control of the priorities:

    - I don't need to consider a potential change of database type or data storage type, as I'm quite certain SQL CE will stay sufficient.
    - I DO expect continued development and changes to the data scheme while the program is in active use: changes to business object properties (hence database columns), and possibly the overall table scheme, so old data must be transported to the new scheme.
    - I also want to keep a decent degree of layer separation DAL/BLL; although this may not be necessary, it is good for me to learn these principles.

    My question is: with these priorities, would I have any benefit by choosing either Linq2Sql or EntityFramework? (And please explain why.) Btw, the project involves a very simple table scheme with only 4-5 tables and very simple relations. Thanks!

    Read the article

  • How to convert Unicode strings (\u00e2, etc) into NSString for display?

    - by karlbecker_com
    I am trying to support arbitrary Unicode from a variety of international users. They have already put a bunch of data into sqlite databases on their iPhones, and now I want to capture the data into a database, then send it back to their device.

    Right now I am using a PHP page that sends data back from an internet MySQL database. The data is saved in the MySQL database properly, but when it's sent back it comes out as Unicode escape sequences, such as Frank\u00e2\u0080\u0099s iPad instead of just Frank's iPad, where the apostrophe should really be a curly apostrophe.

    The answer posted to another question indicates that there are no built-in Cocoa methods to convert the "\u00e2\u0080\u0099" portion of the Unicode string from the webserver to an NSString object. Is this correct? That seems really surprising (and scarily disappointing), since Cocoa definitely allows input of many different Unicode characters, and I need to support any arbitrary language that I have never heard of, and all of the possible characters. I save them to and from the local sqlite database just fine now, but once I send it to a web server, then perhaps pull down different data, I want to ensure the data pulled from the web server is correctly formatted.
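
    As an aside, \u00e2\u0080\u0099 is the three-byte UTF-8 encoding of the curly apostrophe (U+2019) escaped one byte at a time, which usually points at the bytes being mislabeled as Latin-1 somewhere on the server side rather than at Cocoa. A hedged PHP-side check, assuming the classic mysql extension and UTF-8 data in the table:

        <?php
        // force the client connection to UTF-8 before any SELECT; without this,
        // MySQL may hand the bytes over as if the client expected latin1
        mysql_query("SET NAMES 'utf8'");

        // and label the HTTP response explicitly
        header('Content-Type: text/plain; charset=utf-8');
        ?>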

    Read the article

  • Unattended Install of SQL Server 2005 Express with LOCAL Server InstanceName

    - by Jeff
    I'm creating an install package using Inno Setup and installing SQL Server 2005 Express. Here's the entry from my [Run] section:

        Filename: "{app}\SQL Server 2005 Express\SQLEXPR.exe"; Parameters: "-q /norebootchk /qn reboot=ReallySuppress addlocal=all INSTANCENAME=(LOCAL) SCCCHECKLEVEL=IncompatibleComponents:1;MDAC25Version:0 ERRORREPORTING=2 SQLAUTOSTART=1 SAPWD=passwordhere SECURITYMODE=SQL"; WorkingDir: {app}\SQL Server 2005 Express; StatusMsg: Installing Microsoft SQL Server 2005 Express... Please Wait...; Check: SQLVerifyInstall

    What I'm trying to accomplish is to have the SQL Server package install so that the instance name references nothing more than the machine name. What I'm receiving instead is a named instance such as MachineName\SQLEXPRESS, which is not what I want. I need a local (default) instance instead of a named instance because of the way my code is written to install and talk with the databases in question. I would change it, trust me, were it not for the fact that this install package is a replacement for a previous package that used the MSDE installer, and I have to be able to support both through code.

    Any suggestions are welcome, but a clear and concise method to get the installer to quietly install using only the machine name is my main goal. Thanks for the help and support!
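
    A hedged pointer: in the SQL Server 2005 setup parameters, the default (unnamed) instance - the one addressed by the machine name alone - is requested with the reserved instance name MSSQLSERVER rather than (LOCAL), so the parameter would read something like:

        ... addlocal=all INSTANCENAME=MSSQLSERVER SCCCHECKLEVEL=IncompatibleComponents:1;MDAC25Version:0 ...

    This is worth verifying against the SQL Server 2005 Express template.ini documentation before shipping.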

    Read the article

  • Implementing a 1 to many relationship with SQLite

    - by Patrick
    I have the following schema implemented successfully in my application. The application connects desk unit channels to IO unit channels. The DeskUnits and IOUnits tables are basically just a list of desk/IO units and the number of channels on each. For example, a desk could be 4 or 12 channel.

        CREATE TABLE DeskUnits (Name TEXT, NumChannels NUMERIC);
        CREATE TABLE IOUnits (Name TEXT, NumChannels NUMERIC);
        CREATE TABLE RoutingTable (DeskUnitName TEXT, DeskUnitChannel NUMERIC, IOUnitName TEXT, IOUnitChannel NUMERIC);

    The RoutingTable table then connects each DeskUnit channel to an IOUnit channel. For example, the DeskUnit called "Desk1" channel 1 may route to the IOUnit named "IOUnit1" channel 2, etc.

    So far I hope this is pretty straightforward and understandable. The problem is, however, that this is a strictly 1-to-1 relationship: any DeskUnit channel can route to only 1 IOUnit channel. Now I need to implement a 1-to-many relationship, where any DeskUnit channel can connect to multiple IOUnit channels. I realise I may have to rearrange the tables completely, but I am not sure of the best way to go about this. I am fairly new to SQLite and databases in general, so any help would be appreciated. Thanks, Patrick
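
    One observation, offered as a sketch rather than a definitive design: RoutingTable is already a junction table, so 1-to-many falls out of simply allowing the same desk channel to appear in several rows and keying the table on all four columns:

        CREATE TABLE RoutingTable (
            DeskUnitName    TEXT,
            DeskUnitChannel NUMERIC,
            IOUnitName      TEXT,
            IOUnitChannel   NUMERIC,
            PRIMARY KEY (DeskUnitName, DeskUnitChannel, IOUnitName, IOUnitChannel)
        );

        -- Desk1 channel 1 now routes to two different IO channels
        INSERT INTO RoutingTable VALUES ('Desk1', 1, 'IOUnit1', 2);
        INSERT INTO RoutingTable VALUES ('Desk1', 1, 'IOUnit2', 5);

    If the application previously enforced uniqueness on (DeskUnitName, DeskUnitChannel), that constraint is the only thing that has to go.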

    Read the article

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day, and who knows how many updates...

    Me and a couple of others need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET, with some legacy ASP. We thought of making a simple console app that tests/times the same interactions against a flat file (stored on the network) and SQL over the network - large inserts, searches, updates etc. - along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records.

    What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

    - Security
    - Concurrent access
    - Performance with large amounts of data
    - Amount of time to do such a massive rewrite/switch
    - Lack of transactions
    - PITA to map relational data to flat files
    - NTFS doesn't support tons of files in a directory well

    I fear that this will be a great post on the Daily WTF someday if I can't stop it now.
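
    A sketch of the demo-harness idea: time keyed lookups, where an indexed table can seek but a flat file can only scan. The connection string, table and file layout are all placeholders:

        // needs System, System.Collections.Generic, System.Data.SqlClient,
        // System.Diagnostics, System.IO
        var keys = new List<int>();
        var rand = new Random(42);
        for (int i = 0; i < 1000; i++) keys.Add(rand.Next(1000000));

        var sw = Stopwatch.StartNew();
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            foreach (int k in keys)
            {
                using (var cmd = new SqlCommand(
                    "SELECT Payload FROM Demo WHERE Id = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", k);
                    cmd.ExecuteScalar(); // index seek on the primary key
                }
            }
        }
        Console.WriteLine("SQL lookups: {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        foreach (int k in keys)
        {
            string prefix = k + "\t";
            // no index: every lookup re-scans the whole file
            foreach (string line in File.ReadAllLines(@"\\server\share\demo.txt"))
            {
                if (line.StartsWith(prefix)) break;
            }
        }
        Console.WriteLine("Flat file lookups: {0} ms", sw.ElapsedMilliseconds);

    Running the same loop from two machines at once makes the concurrency point just as vividly.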

    Read the article

  • Modelling deterministic and nondeterministic data separately

    - by Superstringcheese
    I'm working with the Microsoft ADO.NET Entity Framework for a game project. Following the advice of other posters on SO, I'm considering modelling deterministic and nondeterministic data separately. The idea for this came from a discussion on multiplayer games, but it seemed to make sense in a single-player scenario as well.

    Deterministic (things that aren't going to change during gameplay):

    - Attributes (Strength, Agility, etc.) and their descriptions
    - Skills and their descriptions and requirements
    - Races, Factions, Equipment, etc.
    - Base Attribute/Skill/Equipment loadouts for monsters

    Nondeterministic (things that will change a lot during gameplay):

    - Beings' current AttributeModifiers (Potion of Might = +10 Strength), current health and mana, etc.
    - Player inventory, cash, experience, level
    - Player quest states
    - Player FactionRelationships
    - ...and so on.

    My deterministic model would serve as a set of constants. My nondeterministic model would provide my on-the-fly operable data and would be serialized to a savegame file to maintain game state between play sessions. The data store will be an embedded SQL Compact database. So I might want to create relations between my Attributes table (deterministic model) and my BeingAttributeModifiers table (nondeterministic model), but how do I set that up across models?

        Det model/db         Nondet model/db
        ____________         ________________________
        |Attributes  |       |PlayerAttributeModifiers|
        |------------|       |------------------------|
        |Id          |       |Id                      |
        |Name        |       |AttributeId             |
        |Description |       |SourceId                |
        --------------       |Value                   |
                             --------------------------

    Should I use two separate models (edmx) that transact with a single database containing both deterministic-type and nondeterministic-type tables? Or should/can I use two separate databases in one model? Or two models, each with their own database? With distinct models/dbs it seems like this will get really complicated and I'll end up fighting EF a lot, rolling my own transaction code, and generally losing out on a lot of the advantages of the framework. I know these are vague questions; I'm just looking for a sanity check before I forge ahead any further.

    Read the article

  • Copying a subset of data to an empty database with the same schema

    - by user193655
    I would like to export part of a database full of data to an empty database. Both databases have the same schema, and I want to maintain referential integrity. Simplified, my case is like this:

    MainTable has the following fields:

    1) MainID integer PK
    2) Description varchar(50)
    3) ForeignKey integer, FK to MainID of SecondaryTable

    SecondaryTable has the following fields:

    4) MainID integer PK (referenced by (3))
    5) AnotherDescription varchar(50)

    The goal I'm trying to accomplish is "export all records from MainTable using a WHERE condition", for example all records where MainID < 100. To do it manually, I should first export all data from SecondaryTable contained in this select:

        select * from SecondaryTable ST join MainTable MT on ST.MainID = MT.ForeignKey

    then export the needed records from MainTable:

        select * from MainTable where MainID < 100

    This is manual, OK. Of course my case is much, much more complex: I have 200+ tables, so doing it manually is painful/impossible, and I have many cascading FKs. Is there a way to force the copy of the main table only while "enforcing referential integrity", so that my query is something like:

        select * from MainTable where MainID < 100 WITH "COPYING ALL FK sources"

    In this case field (5) would also be copied. Is there a syntax or a tool to do this? Table by table, I'd like to insert conditions (MainID < 100 applies only to MainTable; I have other tables too).
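
    For the two-table case, a manual sketch of the dependency-ordered copy (SQL Server syntax with a placeholder TargetDb name; referenced rows go first so the FK in MainTable never dangles):

        -- 1. copy the SecondaryTable rows that qualifying MainTable rows point at
        INSERT INTO TargetDb.dbo.SecondaryTable (MainID, AnotherDescription)
        SELECT DISTINCT ST.MainID, ST.AnotherDescription
        FROM SecondaryTable ST
        JOIN MainTable MT ON MT.ForeignKey = ST.MainID
        WHERE MT.MainID < 100;

        -- 2. now the filtered MainTable rows can be copied safely
        INSERT INTO TargetDb.dbo.MainTable (MainID, Description, ForeignKey)
        SELECT MainID, Description, ForeignKey
        FROM MainTable
        WHERE MainID < 100;

    Scaling this to 200+ tables means topologically sorting the FK graph, which is what tools in this space automate; the sketch only shows the ordering rule they follow.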

    Read the article

  • Problems with display of UTF-8 encoded content from a DB

    - by LookUp Webmaster
    Dear members of the Stackoverflow community, we are developing a web application using the Zend Framework, and we are facing some encoding issues that we hope you might help us solve.

    The situation goes something like this: there are certain tables in a MySQL database that need to be displayed as HTML. Because the site is designed in the Spanish language, the database contains characters like "á" or "ñ". Our internal policy is to set all the encodings to UTF-8, including all the databases and tables. The problem is that when we retrieve the content from the DB, some characters are displayed as question marks.

    We are out of ideas. These are all the things that we have already tried and double-checked:

    1. The SQL file from which we load all the data is properly UTF-8 encoded.
    2. The SQL is loaded through phpMyAdmin (which is configured as UTF-8), and the resulting tables are displayed properly.
    3. The NetBeans environment used for coding is also set to UTF-8.

    The weird thing is that all the content that is hard-coded as either PHP or HTML is displayed properly. Only the values that are extracted from the database have issues. Any ideas? Thank you very much.
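
    One item missing from that checklist is the charset of the database connection itself, which on many MySQL setups defaults to latin1 regardless of how the tables are encoded. With Zend Framework 1 the adapter accepts a charset option; a sketch with placeholder credentials:

        $db = Zend_Db::factory('Pdo_Mysql', array(
            'host'     => 'localhost',
            'username' => 'user',
            'password' => 'secret',
            'dbname'   => 'mydb',
            'charset'  => 'utf8',  // without this, "á" and "ñ" can degrade to ?
        ));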

    Read the article

  • Set "Start With" value for Oracle sequence dynamically

    - by Allan
    I'm trying to create a release script that can be deployed on multiple databases, but where the data can be merged back together at a later date. The obvious way to handle this is to set the sequence numbers for production data sufficiently high in subsequent deployments to prevent collisions. The problem is in coming up with a release script that will accept the environment number and set the "Start With" value of the sequences appropriately. Ideally, I'd like to use something like this:

        ACCEPT EnvironNum PROMPT 'Enter the Environment Number: '
        --[more scripting]
        CREATE SEQUENCE seq1 START WITH &EnvironNum*100000;
        --[more scripting]

    This doesn't work because you can't evaluate a numeric expression in DDL. Another option is to create the sequences using dynamic SQL via PL/SQL:

        ACCEPT EnvironNum PROMPT 'Enter the Environment Number: '
        --[more scripting]
        EXEC execute immediate 'CREATE SEQUENCE seq1 START WITH ' || &EnvironNum*100000;
        --[more scripting]

    However, I'd prefer to avoid this solution, as I generally try to avoid issuing DDL from PL/SQL. Finally, the third option I've come up with is simply to accept the Start With value as a substitution variable instead of the environment number. Does anyone have a better thought on how to go about this?
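
    One more SQL*Plus-only possibility, sketched here without having been run against this exact script: compute the product in a query and capture it into a new substitution variable with COLUMN ... NEW_VALUE, so the DDL only ever sees a plain literal:

        ACCEPT EnvironNum PROMPT 'Enter the Environment Number: '

        COLUMN start_with_val NEW_VALUE StartWith NOPRINT
        SELECT &EnvironNum * 100000 AS start_with_val FROM dual;

        CREATE SEQUENCE seq1 START WITH &StartWith;

    This keeps the arithmetic out of the DDL statement itself and avoids dynamic SQL in PL/SQL.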

    Read the article

  • Technology Plan - Which tools should I use?

    - by Armadillo
    Hi, soon I'll start my own software company. My primary product/solution will be billing/invoicing software. In the near future I intend to expand this first module into an ERP. My app should be able to run as both a stand-alone application and a web-based application (so there will probably be two GUIs for the same database).

    My problem now is choosing the right tools; I'm talking about what programming language(s) I should use, what kind of database I should choose, and stuff like that. I'm primarily a VB6 programmer, so I'll probably choose the .NET framework (VB/C#). But I'm seriously thinking about Java. Java has 2 pros that I really like: write once, run anywhere, and it is free (I think...). I've been thinking about RIAs too, but I just don't have any substantial feedback about them...

    Then I'll need a report tool. Crystal Reports? HTML-based reports? Something else? Databases: I'm not sure whether I should use SQL Server Express or PostgreSQL (or another). I'd be happy to hear any comments and advice. Thanks

    Read the article

  • How can I synchronise two datatables and update the target in the database?

    - by Craig
    I am trying to synchronise two tables between two databases. I thought that the best way to do this would be to use the DataTable.Merge method. This seems to pick up the changes, but nothing ever gets committed to the database. So far I have:

        string tableName = "[Table_1]";
        string sql = "select * from " + tableName;

        using (SqlConnection sourceConn = new SqlConnection(
            ConfigurationManager.ConnectionStrings["source"].ConnectionString))
        {
            SqlDataAdapter sourceAdapter = new SqlDataAdapter();
            sourceAdapter.SelectCommand = new SqlCommand(sql, sourceConn);
            sourceConn.Open();

            DataSet sourceDs = new DataSet();
            sourceAdapter.Fill(sourceDs);

            using (SqlConnection targetConn = new SqlConnection(
                ConfigurationManager.ConnectionStrings["target"].ConnectionString))
            {
                SqlDataAdapter targetAdapter = new SqlDataAdapter();
                targetAdapter.SelectCommand = new SqlCommand(sql, targetConn);

                SqlCommandBuilder builder = new SqlCommandBuilder(targetAdapter);
                targetAdapter.InsertCommand = builder.GetInsertCommand();
                targetAdapter.UpdateCommand = builder.GetUpdateCommand();
                targetAdapter.DeleteCommand = builder.GetDeleteCommand();

                targetConn.Open();
                DataSet targetDs = new DataSet();
                targetAdapter.Fill(targetDs);

                targetDs.Tables[0].TableName = tableName;
                sourceDs.Tables[0].TableName = tableName;

                targetDs.Tables[0].Merge(sourceDs.Tables[0]);
                targetAdapter.Update(targetDs.Tables[0]);
            }
        }

    At present there is one row in the source that is not in the target, and this row is never transferred. I have also tried it with an empty target, and nothing is transferred.
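
    A hedged guess at the cause: Fill calls AcceptChanges on the rows it loads, so by the time Merge copies them into the target table they are in the Unchanged state and Update finds nothing to send. Two small adjustments that should make the new row surface as an insert (untested against this exact code):

        // keep the source rows in the Added state instead of Unchanged
        sourceAdapter.AcceptChangesDuringFill = false;

        // give both tables a primary key so Merge can match rows
        // instead of blindly appending duplicates
        sourceAdapter.MissingSchemaAction = MissingSchemaAction.AddWithKey;
        targetAdapter.MissingSchemaAction = MissingSchemaAction.AddWithKey;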

    Read the article

  • PHP Transferring Photos From One Oracle Database Table to Another

    - by Jonathan Swift
    I am attempting to transfer a set of photos (blobs) from one table to another across databases. I'm nearly there, except for binding the photo parameter. I have the following code:

        $conn_db1 = oci_pconnect('username', 'password', 'db1');
        $conn_db2 = oci_pconnect('username', 'password', 'db2');

        $parse_db1_select = oci_parse($conn_db1,
            "SELECT REF PID, BINARY_OBJECT PHOTOGRAPH FROM BLOBS");
        $parse_db2_insert = oci_parse($conn_db2,
            "INSERT INTO PHOTOGRAPHS (PID, PHOTOGRAPH) VALUES (:pid, :photo)");

        oci_execute($parse_db1_select);
        while ($row = oci_fetch_assoc($parse_db1_select)) {
            $pid = $row['PID'];
            $photo = $row['PHOTOGRAPH'];
            oci_bind_by_name($parse_db2_insert, ':pid', $pid, -1, OCI_B_INT); // This line causes an error
            oci_bind_by_name($parse_db_insert, ':photo', $photo, -1, OCI_B_BLOB);
            oci_execute($parse_db2_insert);
        }

        oci_close($db1);
        oci_close($db2);

    But I get the following error on the line commented above:

        Warning: oci_execute() [function.oci-execute]: ORA-03113: end-of-file on communication channel
        Process ID: 0
        Session ID: 790
        Serial number: 118

    Does anyone know the right way to do this?
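
    For what it's worth, the oci8 extension cannot bind a raw PHP string as OCI_B_BLOB; that bind type expects an OCI-Lob descriptor, and passing a plain string can blow up the round trip with errors like the ORA-03113 above. A hedged sketch of the descriptor-based pattern (note it also assumes the :photo bind uses the db2 statement handle, i.e. $parse_db2_insert):

        $lob = oci_new_descriptor($conn_db2, OCI_D_LOB);
        oci_bind_by_name($parse_db2_insert, ':pid', $pid, -1, OCI_B_INT);
        oci_bind_by_name($parse_db2_insert, ':photo', $lob, -1, OCI_B_BLOB);

        // write the fetched bytes into a temporary LOB, execute, then commit
        $lob->writeTemporary($photo, OCI_TEMP_BLOB);
        oci_execute($parse_db2_insert, OCI_DEFAULT);
        oci_commit($conn_db2);
        $lob->free();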

    Read the article

  • Syncing Magento database from development to production

    - by ringerce
    I use git for version control. I have development, staging and production environments. When I finish in development, I push to staging for review by the client. When approved, I push changes from staging to production. That works fine as long as there are no database changes. What happens if I install modules via Magento Connect on local development and they make database modifications? How would I push those changes up to the production server, since the production server is always changing?

    Edit: I wrote two shell scripts. One pulls the production database down to my development server, replaces the base URL with the development URL and updates my development DB accordingly. It also leaves the production SQL dump behind to be added to my git repo. I'm not really sure it's beneficial to keep the raw dumps in source control, but I'm going to try it out. The second script moves the development database up to staging and essentially performs the same operations as the first.

    Now, when it comes time to move to production, I pull the updated production repo into the production server and allow Magento to do its thing. I also started using SQLyog recently; it has a database comparison wizard which gives me the differences between my development and production databases and allows me to merge the changes in selectively. It always creates a migration script, which I add to source control as well. If anything goes wrong, I can run the comparison to see if anything was missed. Does this sound like a decent workflow to you guys?
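
    A rough sketch of the pull script described above, with placeholder hosts and credentials (the config paths are the standard Magento 1 base-URL rows):

        #!/bin/sh
        # dump production and load it into the development database
        mysqldump -h prod-host -u dbuser -p'secret' magento_prod > prod_dump.sql
        mysql -u dbuser -p'secret' magento_dev < prod_dump.sql

        # point the copy at the development URL
        mysql -u dbuser -p'secret' magento_dev -e "
            UPDATE core_config_data
            SET value = 'http://dev.example.com/'
            WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');"

    The prod_dump.sql file left behind is what gets committed to the repo.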

    Read the article
