Search Results

Search found 32851 results on 1315 pages for 'vsts database edition'.


  • How do I fix issue causing "incomplete startup packet" log message trying to implement replication in Postgresql?

    - by colour me brad
    I've got two cloud servers running Ubuntu 13.04 and PostgreSQL 9.2. I've primarily used this blog post to aid me in setting things up. However, to do the initial database dump to the slave I'm using pg_start_backup/pg_stop_backup strategy used in this other blog post. I've read through the docs and postgres wikis as well. I ran into several problems I was able to solve, but I can't get past this wretched "the database is starting up" failure. I'm not sure if seeing "cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory" after "consistent recover state reached" is normal or the first sign of a problem. The searching I've done on "the database is starting up" and "incomplete startup packet" tells me that something is sending empty TCP packets to the slave. The only thing that even knows about the slave is the master, so I'm not sure why it's sending empty packets... Has anyone worked with this and have an idea what might be going wrong? The postgres log on the slave looks like so: 2013-08-26 13:01:38 CDT LOG: entering standby mode 2013-08-26 13:01:38 CDT LOG: restored log file "000000010000000000000039" from archive 2013-08-26 13:01:38 CDT LOG: incomplete startup packet 2013-08-26 13:01:39 CDT LOG: redo starts at 0/39000020 2013-08-26 13:01:39 CDT LOG: consistent recovery state reached at 0/390000E0 cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory 2013-08-26 13:01:39 CDT LOG: streaming replication successfully connected to primary 2013-08-26 13:01:39 CDT FATAL: the database system is starting up 2013-08-26 13:01:39 CDT FATAL: the database system is starting up 2013-08-26 13:01:40 CDT FATAL: the database system is starting up 2013-08-26 13:01:40 CDT FATAL: the database system is starting up 2013-08-26 13:01:41 CDT FATAL: the database system is starting up 2013-08-26 13:01:42 CDT FATAL: the database system is starting up 2013-08-26 13:01:42 CDT FATAL: the database system is starting up 2013-08-26 13:01:43 CDT FATAL: the database system is starting up 2013-08-26 13:01:43 CDT FATAL: the database system is starting up 2013-08-26 13:01:44 CDT FATAL: the database system is starting up 2013-08-26 13:01:44 CDT FATAL: the database system is starting up 2013-08-26 13:01:44 CDT LOG: incomplete startup packet 2013-08-26 13:03:27 CDT FATAL: the database system is starting up 2013-08-26 13:03:27 CDT FATAL: the database system is starting up 2013-08-26 13:03:30 CDT FATAL: the database system is starting up 2013-08-26 13:03:30 CDT FATAL: the database system is starting up thanks! brad
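
    Two things may be worth separating out here. "FATAL: the database system is starting up" is the standard response a standby gives to ordinary client connections while it is in recovery and hot_standby is not enabled in its postgresql.conf, so those lines may just mean something (psql, a health check, the app) is trying to log in to the slave rather than that replication is broken; "incomplete startup packet" typically means something opened the PostgreSQL port and closed it again without speaking the protocol, which monitoring probes commonly do. A hedged way to confirm the replication itself is healthy, using PostgreSQL 9.2 catalog names:

    ```sql
    -- On the standby (requires hot_standby = on so read-only connections are allowed):
    -- returns 't' while the server is acting as a replica
    SELECT pg_is_in_recovery();

    -- On the master: one row per connected standby; state = 'streaming' confirms the
    -- "streaming replication successfully connected to primary" line is a live link
    SELECT client_addr, state, sent_location, replay_location
    FROM pg_stat_replication;
    ```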


  • Server Performance

    - by sb12
    I know very little about performance tuning of servers, so I thought I'd put this up here as I start some research on it, just to get some direction. I am in the process of migrating from my old server to a new one - both are 64-bit machines. One is a few years old, the other brand new (PowerEdge R410). The old server spec is: 2 CPUs, 3.4 GHz Pentiums, 8 GB of RAM, Fedora 11 currently installed. The new server spec is: 16 CPUs, 3.2 GHz Xeons, 16 GB of RAM, CentOS 6.2 installed. The new server also has RAID10 - no RAID on the old one. Both servers currently have the same database (MySQL) with the same data migrated. I wrote a Perl script that simply steps through each row of a table in the database (about 18,000 rows) and updates a value in that row. Every row in the table is updated. Out of curiosity I ran this Perl script on both machines, just to see how the new server would perform vs. the old one, and it produced interesting results: the old server was twice as fast as the new one to complete. Looking at the database, both are configured exactly the same (the new one being a dump of the old one). Does anyone have any ideas why this would be, given the hardware gap between the two? As I said, I'm about to start some digging, but thought I'd put this up here to maybe get some good direction. Many thanks in advance.
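
    If the data and schema really are identical, one thing worth ruling out before blaming the hardware is durability configuration: with InnoDB's default innodb_flush_log_at_trx_commit = 1, every autocommitted UPDATE forces a log flush to disk, so a row-by-row script largely measures the fsync latency of each disk subsystem rather than CPU or RAM. A hedged sketch of what to compare and how to take per-row commits out of the test (table and column names are placeholders, not from the original script):

    ```sql
    -- Compare the settings that most affect single-row update speed on both servers
    SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
    SHOW VARIABLES LIKE 'sync_binlog';

    -- Re-run the benchmark inside one transaction so there is a single flush at COMMIT
    -- instead of ~18,000 of them (placeholder names for illustration only)
    START TRANSACTION;
    UPDATE my_table SET my_value = my_value + 1 WHERE id = 1;  -- ...repeated per row by the script
    COMMIT;
    ```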


  • MySQL server stopped working after upgrade

    - by umpirsky
    I upgraded to 12.04 and my MySQL server just stopped working. It throws: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) I tried to reinstall it from software center, but it fails with: Package operation failed The installation or removal of a software package failed. installArchives() failed: Selecting previously unselected package mysql-server. (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 243412 files and directories currently installed.) Unpacking mysql-server (from .../mysql-server_5.5.22-0ubuntu1_all.deb) ... Setting up mysql-server-5.5 (5.5.22-0ubuntu1) ... start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: mysql-server-5.5 mysql-server Error in function: Setting up mysql-server-5.5 (5.5.22-0ubuntu1) ... start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured I also tried: $ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: mysql-server-5.5 mysql-server-core-5.5 Use 'apt-get autoremove' to remove them. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up mysql-server-5.5 (5.5.22-0ubuntu1) ... start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: mysql-server-5.5 E: Sub-process /usr/bin/dpkg returned an error code (1) Any idea? EDIT: Crash report is being auto generated. EDIT: After trying and trying I got suggestion to do: #apt-get --purge remove mysql-server-5.1 mysql-server-5.5 Reading package lists... Done Building dependency tree Reading state information... 
Done Virtual packages like 'mysql-server-5.1' can't be removed The following packages will be REMOVED: mysql-server-5.5* 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. 1 not fully installed or removed. After this operation, 31.3 MB disk space will be freed. Do you want to continue [Y/n]? Y (Reading database ... 243407 files and directories currently installed.) Removing mysql-server-5.5 ... Purging configuration files for mysql-server-5.5 ... Processing triggers for ureadahead ... Processing triggers for man-db ... The most important part is: Virtual packages like 'mysql-server-5.1' can't be removed Any idea?


  • Need help tuning Mysql and linux server

    - by Newtonx
    We have multi-user application (like MailChimp,Constant Contact) . Each of our customers has it's own contact's list (from 5 to 100.000 contacts). Everything is stored in one BIG database (currently 25G). Since we released our product we have the following data history. 5 years of data history : - users/customers (200+) - contacts (40 million records) - campaigns - campaign_deliveries (73.843.764 records) - campaign_queue ( 8 millions currently ) As we get more users and table records increase our system/web app is getting slower and slower . Some queries takes too long to execute . SCHEMA Table contacts --------------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------------------+------------------+------+-----+---------+----------------+ | contact_id | int(10) unsigned | NO | PRI | NULL | auto_increment | | client_id | int(10) unsigned | YES | | NULL | | | name | varchar(60) | YES | | NULL | | | mail | varchar(60) | YES | MUL | NULL | | | verified | int(1) | YES | | 0 | | | owner | int(10) unsigned | NO | MUL | 0 | | | date_created | date | YES | MUL | NULL | | | geolocation | varchar(100) | YES | | NULL | | | ip | varchar(20) | YES | MUL | NULL | | +---------------------+------------------+------+-----+---------+----------------+ Table campaign_deliveries +---------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------------+------------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | newsletter_id | int(10) unsigned | NO | MUL | 0 | | | contact_id | int(10) unsigned | NO | MUL | 0 | | | sent_date | date | YES | MUL | NULL | | | sent_time | time | YES | MUL | NULL | | | smtp_server | varchar(20) | YES | | NULL | | | owner | int(5) | YES | MUL | NULL | | | ip | varchar(20) | YES | MUL | NULL | | +---------------+------------------+------+-----+---------+----------------+ Table campaign_queue +---------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------------+------------------+------+-----+---------+----------------+ | queue_id | int(10) unsigned | NO | PRI | NULL | auto_increment | | newsletter_id | int(10) unsigned | NO | MUL | 0 | | | owner | int(10) unsigned | NO | MUL | 0 | | | date_to_send | date | YES | | NULL | | | contact_id | int(11) | NO | MUL | NULL | | | date_created | date | YES | | NULL | | +---------------+------------------+------+-----+---------+----------------+ Slow queries LOG -------------------------------------------- Query_time: 350 Lock_time: 1 Rows_sent: 1 Rows_examined: 971004 SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 70 AND contacts.verified = 1); Query_time: 235 Lock_time: 1 Rows_sent: 1 Rows_examined: 4455209 SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 2); How can we optimize it ? Queries should take no more than 30 secs to execute? Can we optimize it and keep all data in one BIG database or should we change app's structure and set one single database to each user ? Thanks
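
    One low-risk first step, before splitting databases, is to make those two COUNTs answerable from an index instead of from millions of table rows: the slow-query log shows the filter is on owner plus verified, and a composite index covering both columns lets MySQL satisfy the count from the index alone. A sketch of the change and how to verify it (the index name is arbitrary):

    ```sql
    -- Covering index for the owner/verified filter on the 40M-row contacts table
    ALTER TABLE contacts ADD INDEX idx_owner_verified (owner, verified);

    -- The plan should now show "Using index" and far fewer examined rows
    EXPLAIN SELECT COUNT(*) AS total
    FROM contacts
    WHERE contacts.owner = 70 AND contacts.verified = 1;
    ```

    If per-owner counts are requested constantly, a maintained summary table (or caching the totals) avoids re-scanning even the index, but the composite index alone should already make a large difference to those 235-350 second queries.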


  • How will I support 100,000 requests an hour?!

    - by tylerl
    I know this question is a little strange but I got lucky with an idea and I need some numbers to use for when I try to make a deal with a company. I'm wondering how much it'll cost me to run a site that's heavy on PHP and gets between 70,000 and 100,000 requests an hour on something like Rackspace's Cloud Servers. I have no idea how many servers I need or how much RAM each one should have. There will be a decent number of images on the site (probably something like 10,000 in the first couple weeks) and the site runs on about 2,500 lines of PHP code. I figure I should sign up for a CDN of some kind, although CDN In A Box is all I've heard of and I'm not sure it's necessary for a site that's already on a cloud platform. I've obviously never done anything like this before so I'm just looking to get an estimation of what I need for this massive site... Also, I use a database and I was wondering how that works - would I dedicate one of the cloud servers to running the database or would I need to put the database into each of the cloud servers? Thanks in advance...


  • MySQL Error: FUNCTION LEVENSHTEIN already exists

    - by kgrote
    I've got an ExpressionEngine database and I exported a couple of tables from it, then dropped those tables. When I try to re-import the tables in PHPMyAdmin, I get this error: SQL query: -- -- Database: `my_db` -- DELIMITER $$ -- -- Functions -- CREATE DEFINER=`my_username`@`%` FUNCTION `LEVENSHTEIN`(s1 VARCHAR(255), s2 VARCHAR(255)) RETURNS int(11) DETERMINISTIC BEGIN DECLARE s1_len, s2_len, i, j, c, c_temp, cost INT; DECLARE s1_char CHAR; DECLARE cv0, cv1 VARBINARY(256); SET s1_len = CHAR_LENGTH(s1), s2_len = CHAR_LENGTH(s2), cv1 = 0x00, j = 1, i = 1, c = 0; IF s1 = s2 THEN RETURN 0; ELSEIF s1_len = 0 THEN RETURN s2_len; ELSEIF s2_len = 0 THEN RETURN s1_len; ELSE WHILE j <= s2_len DO SET cv1 = CONCAT(cv1, UNHEX(HEX(j))), j = j + 1; END WHILE; WHILE i <= s1_len DO SET s1_char = SUBSTRING(s1, i, 1), c = i, cv0 = UNHEX(HEX(i)), j = 1; WHILE j <= s2_len DO SET c = c + 1; IF s1_char = SUBSTRING(s2, j, 1) THEN SET cost = 0; ELSE SET cost = 1; END IF; SET c_temp = CONV(HEX(SUBSTRING(cv1, j, 1)), 16, 10) + cost; IF c > c_temp THEN SET c = [...] MySQL said: Documentation #1304 - FUNCTION LEVENSHTEIN already exists I get this error even if I drop all tables from the DB and try to import anything. The only way I can get the error to go away is to totally delete the database and re-create it. What's causing that error and how can I stop it from happening?
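
    The duplicate comes from the fact that stored functions live at the database level, not inside any table, so dropping the tables leaves the LEVENSHTEIN routine behind, and the export (which includes the CREATE FUNCTION block but no matching DROP) then collides with it on import. A minimal way to clear it before re-importing, assuming sufficient privileges:

    ```sql
    -- Stored routines survive DROP TABLE; remove the old copy first
    DROP FUNCTION IF EXISTS LEVENSHTEIN;

    -- Optional: see what routines the database still holds
    SHOW FUNCTION STATUS WHERE Db = 'my_db';
    ```

    Adding that DROP FUNCTION IF EXISTS line to the top of the exported SQL (or enabling the exporter's "add DROP statement" option for routines, if it offers one) makes the dump re-runnable without deleting the whole database.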


  • How to publish internal data to the internet - as simple as possible

    - by mlarsen
    I asked this at Stack Overflow, but I would like your opinion too, as it has as much to do with administration as it does with coding. We have a .NET 2-tier application where a desktop program is talking to a database. We support MS SQL Server 2000, 2005, 2008 and Oracle 9, 10 and 11. The application is sold, not as shrink-wrap, but pretty close. It is quite important for us that installation and configuration are as easy as possible, as installation instructions are usually supplied in written form to the customer's internal IT department. Our application is usually not seen as mission critical by the IT department, so we need to keep their work down to a minimum. Now we are starting to get wishes for a web application built on top of the same data. The web application will be hosted by us and delivered as a SaaS application. The challenge is how to move data back and forth between the web application and the customer's internal database. As I see it we have some requirements:
    - We must be ready to handle the situation where the customer's database is not accessible from the DMZ. I guess the easiest solution is that all communication is initiated from inside the customer's LAN.
    - As little firewall configuration as possible. The best is if we can run without any special configuration as long as outgoing traffic from the customer's LAN is not blocked. If we need something changed in the firewall, we must be able to document that the change is secure.
    - It doesn't have to be real time. Moving data in batches every ten minutes or so is OK.
    - Data moves both ways, but not the same tables, so we don't have to support merges.
    - It would be nice if we don't have to roll our own framework completely.
    Looking forward to hearing your suggestions.


  • Hosting options for data-enabled web application

    - by Hertfordian
    I am independently developing an asp.net business application with a MySQL database. I currently have a Windows web hosting account which includes MySQL and MS SQL as installed supported options. I am not yet finally committed to using MySQL and I want to keep my options open to evaluate MS SQL and possibly other options such as PostGreSQL later when more of the business logic is in place - my data access layer will handle the database connectivity. The web hosting setup I have now is fine for development purposes, but if in future I want to use, say, PostGreSQL Server, and a level of usage of, say, 10,000 hits per day concentrated in business hours, I'm assuming I'll need a dedicated server. But in that case, should I just install PostGreSQL on the dedicated server, or is best practice to have a separate database server - perhaps locked down so that it can only be accessed through the web server? And supposing it was only 2000 hits a day - how would that change things? I'd appreciate it if anyone could point me in the direction of a useful guide to these sorts of issues. Naturally if I start paying for separate servers, I would like to know exactly why I'm doing it and what the performance issues and thresholds are.


  • Calculating memory footprints using /proc/sysvipc/shm

    - by MarkTeehan
    This is for a SLES 10 database server. One of my servers runs three databases and three app servers; I am analyzing how their shared memory segments grow and shrink to avoid intermittent out-of-memory scenarios. "Top" is not helpful for this since its calculations for RES and VIRT are inconsistent. I am doing this by matching up the contents of /proc/sysvipc/shm with memory usage reported by the database admin console: I save the contents of /proc/sysvipc/shm and then total up "bytes" for all of the segments for the offending userid. This is a large server with hundreds of segments and tens (or hundreds) of GB of allocated memory per userid. However it doesn't match up - the database management software claims to be using around 25% more memory than the total I calculate. Negligible swap space is in use, so I am ignoring that. I am running it as root so I am sure I see all shared memory segments. My question is: is all (significant) allocated memory recorded in /proc/sysvipc/shm, or does this cover only shared memory (and not "un-shared" memory)? If this is incorrect, what is the correct way to calculate the total allocated memory for each userid? Also: I believe doing a 'cat' on this file locks server IPC. I check it every 5 seconds - is it likely that this frequency could be problematic? Thanks! Mark Teehan Singapore


  • Excel 2013: VLookup for cells that share common characters within cell but are both surrounded by other non-matching text

    - by Kylie Z
    I am pulling information from 2 different databases. The databases use different naming protocol for the exact same item/specified placement however they always have certain components of the name in common. The length of these names can vary throughout each of the databases (see the pic below) so I don't think counting characters would help. I need a formula (probably a vlookup/match/index of some sort) to pair up the names from the 2nd database name with the 1st database name and then place it in the adjacent column(B2) on sheet1. Until this point I've had to match, copy, and paste the pairs manually from one sheet to the other and it takes FOREVER. Any help would be much appreciated!!! For example: Database1 Name in Sheet1,A2: 728x90_Allstate_629930_ALL_JUL_2013_MASSACHUSETTSAUTO_BAN_MSN_ROSMSNAUTOSMASSACHUSETTS_7.2.13 Database2 Name in Sheet2, A13: BAN_MSN_ROSMSNAUTOSMASSACHUSETTS728X90_728X90_DFA Common Factors: "ROSMSNAUTOSMASSACHUSETTS" & "728X90" Therefore A2 and A13 need to pair up In some cases, Database 1 and 2 will have a common name aspect but sizing will be different. They need to have BOTH aspects in common in order to be paired so I would NOT want the below example to pair up. Database1 Name in Sheet1,A2: 728x90_Allstate_629930_ALL_JUL_2013_MASSACHUSETTSAUTO_BAN_MSN_ROSMSNAUTOSMASSACHUSETTS_7.2.13 Database2 Name in Sheet2, A12: BAN_MSN_ROSMSNAUTOSMASSACHUSETTS300X250_300X250_DFA Common Factor: Only "ROSMSNAUTOSMASSACHUSETTS" matches. "728x90" is not equal to "300X250" - Sizing is different so they should not be paired.


  • How to handle ViewModel and Database in C#/WPF/MVVM App

    - by Mike B
    I have a task management program with a "Urgency" field. Valid values are Int16 currently mapped to 1 (High), 2 (Medium), 3 (Low), 4 (None) and 99 (Closed). The urgency field is used to rank tasks as well as alter the look of the items in the list and detail view. When a user is editing or adding a new task they select or view the urgency in a ComboBox. A Converter passes Strings to replace the Ints. The urgency collection is so simple I did not make it a table in the database, instead it is a, ObservableCollection(Int16) that is populated by a method. Since the same screen may be used to view a closed task the "Closed" urgency must be in the ItemsSource but I do not want the user to be able to select it. In order to prevent the user from being able to select that item in the ComboBox but still be able to see it if the item in the database has that value should I... Manually disable the item in the ComboBox in code or Xaml (I doubt it) Change the Urgency collection from an Int16 to an Object with a Selectable Property that the isEnabled property of the ComboBoxItem Binds to. Do as in 2 but also separate the urgency information into its own table in the database with a foreign key in the Tasks table None of the above (I suspect this is the correct answer) I ask this because this is a learning project (My first real WPF and first ever MVVM project). I know there is rarely one Right way to do something but I want to make sure I am learning in a reasonable manner since it if far harder to Unlearn bad habits Thanks Mike


  • Javascript html5 database transaction problem in loops

    - by Marek
    I'm hittig my head on this and i will be glad for any help. I need to store in a database a context (a list of string ids) for another page. I open a page with a list of artworks and this page save into the database those artowrks ids. When i click on an artwork i open its webpage but i can access the database to know the context and refer to the next and prev artwork. This is my code to retrieve the contex: updateContext = function () { alert("updating context"); try { mydb.transaction( function(transaction) { transaction.executeSql("select artworks.number from artworks, collections where collections.id = artworks.section_id and collections.short_name in ('cro', 'cra', 'crp', 'crm');", [], contextDataHandler, errorHandler); }); } catch(e) { alert(e.message); } } the contextDatahandler function then iterates through the results and fill again the current_context table: contextDataHandler = function(transaction, results) { try { mydb.transaction( function(transaction) { transaction.executeSql("drop table current_context;", [], nullDataHandler, errorHandler); transaction.executeSql("create table current_context(id String);", [], nullDataHandler, errorHandler); } ) } catch(e) { alert(e.message); } var s = ""; for (var i=0; i < results.rows.length; i++) { var item = results.rows.item(0); s += item['number'] + " "; mydb.transaction( function(tx) { tx.executeSql("insert into current_context(id) values (?);", [item['number']]); } ) } alert(s); } the result is, i get the current_context table deleted, recreated, and filled, but all the rows are filled with the LAST artwork id. the query to retrieve the artworks ids works, so i think it's a transaction problem, but i cant figure out where. thanks for any help


  • converting mysql database to sql server

    - by every_answer_gets_a_point
    i have a mysql database: /* MySQL Data Transfer Source Host: 10.0.0.5 Source Database: jnetdata Target Host: 10.0.0.5 Target Database: jnetdata Date: 5/26/2009 12:27:33 PM */ SET FOREIGN_KEY_CHECKS=0; -- ---------------------------- -- Table structure for chavrusas -- ---------------------------- CREATE TABLE `chavrusas` ( `id` int(11) NOT NULL auto_increment, `date_created` datetime default NULL, `luser_id` int(11) default NULL, `ruser_id` int(11) default NULL, `luser_type` varchar(50) default NULL, `ruser_type` varchar(50) default NULL, `SessionDay` varchar(250) default NULL, `SessionTime` datetime default NULL, `WeeklyReminder` tinyint(1) NOT NULL default '0', `reminder_phone` tinyint(1) NOT NULL default '0', `calling_card` varchar(50) default NULL, `active` tinyint(1) NOT NULL default '0', `notes` mediumtext, `ended` tinyint(1) NOT NULL default '0', `end_date` datetime default NULL, `initiated_by_student` tinyint(1) NOT NULL default '0', `initiated_by_volunteer` tinyint(1) NOT NULL default '0', `student_general_reason` varchar(50) default NULL, `volunteer_general_reason` varchar(50) default NULL, `student_reason` varchar(250) default NULL, `volunteer_reason` varchar(250) default NULL, `student_nli` tinyint(1) NOT NULL default '0', `volunteer_nli` tinyint(1) NOT NULL default '0', `jnet_initiated` tinyint(1) default '0', `belongs_to` varchar(50) default NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM AUTO_INCREMENT=5913 DEFAULT CHARSET=latin1; -- ---------------------------- -- Table structure for tbluseravailability -- ---------------------------- CREATE TABLE `tbluseravailability` ( `availability_id` int(11) NOT NULL auto_increment, `user_id` int(11) NOT NULL, `weekday_id` int(11) NOT NULL, `timeslot_id` int(11) NOT NULL, PRIMARY KEY (`availability_id`) ) ENGINE=MyISAM AUTO_INCREMENT=10865 DEFAULT CHARSET=latin1; -- ---------------------------- -- Table structure for tblusers -- ---------------------------- CREATE TABLE `tblusers` ( `id` int(11) NOT NULL auto_increment, `password` varchar(50) default NULL, `title` varchar(255) default NULL, `first` varchar(255) default NULL, `last` varchar(255) default NULL, `gender` varchar(255) default NULL, `address` varchar(255) default NULL, `address_2` varchar(255) default NULL, `city` varchar(255) default NULL, `state` varchar(255) default NULL, `postcode` varchar(255) default NULL, `country` varchar(255) default NULL, `email` varchar(255) default NULL, `emailnotes` varchar(255) default NULL, `Home_Phone` varchar(255) default NULL, `Office_Phone` varchar(255) default NULL, `Cell_Phone` varchar(255) default NULL, `Contact_Preference` varchar(255) default NULL, `Birthdate` datetime default NULL, `Age` varchar(255 and it goes on for about 10mb i need to convert it to ms sql, how do i do it?
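
    For a 10 MB dump like this there are migration tools (SQL Server Migration Assistant for MySQL, for example), but the DDL can also be translated by hand: the MySQL-specific parts are the backtick quoting, auto_increment, tinyint(1) flags, mediumtext, and the ENGINE/CHARSET trailer. A hedged, partial sketch of what the first table might become in T-SQL - a starting point, not a verified conversion:

    ```sql
    CREATE TABLE chavrusas (
        id             INT IDENTITY(1,1) NOT NULL PRIMARY KEY,  -- replaces auto_increment
        date_created   DATETIME NULL,
        luser_id       INT NULL,
        ruser_id       INT NULL,
        luser_type     VARCHAR(50) NULL,
        ruser_type     VARCHAR(50) NULL,
        SessionDay     VARCHAR(250) NULL,
        SessionTime    DATETIME NULL,
        WeeklyReminder BIT NOT NULL DEFAULT 0,                  -- tinyint(1) flags map to BIT
        reminder_phone BIT NOT NULL DEFAULT 0,
        calling_card   VARCHAR(50) NULL,
        active         BIT NOT NULL DEFAULT 0,
        notes          VARCHAR(MAX) NULL,                       -- mediumtext has no direct equivalent
        -- ...remaining columns follow the same pattern
        belongs_to     VARCHAR(50) NULL
    );
    ```

    The row data itself usually moves more easily via a migration tool or a linked server than by editing the dump text.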


  • Mapping a child collection without indexing based on database primary key or using bag

    - by Colin Bowern
    I have a existing parent-child relationship I am trying to map in Fluent Nhibernate: [RatingCollection] -- [Rating] Rating Collection has: ID (database generated ID) Code Name Rating has: ID (database generated id) Rating Collection ID Code Name I have been trying to figure out which permutation of HasMany makes sense here. What I have right now: HasMany<Rating>(x => x.Ratings) .WithTableName("Rating") .KeyColumnNames.Add("RatingCollectionId") .Component(c => { c.Map(x => x.Code); c.Map(x => x.Name); ); It works from a CRUD perspective but because it's a bag it ends up deleting the rating contents any time I try to do a simple update / insert to the Ratings property. What I want is an indexed collection but not using the database generated ID (which is in the six digit range right now). Any thoughts on how I could get a zero-based indexed collection (so I can go entity.Ratings[0].Name = "foo") which would allow me to modify the collection without deleting/reinserting it all when persisting?


  • Multiple database connection in Rails

    - by Sanal
    I'm using active_delegate for multiple connection in Rails. Here I'm using mysql as master_database for some models,and postgresql for some other models. Problem is that when I try to access the mysql models, I'm getting the error below! Stack trace shows that, it is still using the postgresql adapter to access my mysql models! RuntimeError: ERROR C42P01 Mrelation "categories" does not exist P15 F.\src\backend\parser\parse_relation.c L886 RparserOpenTable: SELECT * FROM "categories" STACKTRACE =========== d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract_adapter.rb:212:in `log' d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/postgresql_adapter.rb:507:in `execute' d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/postgresql_adapter.rb:985:in `select_raw' d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/postgresql_adapter.rb:972:in `select' d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:7:in `select_all_without_query_cache' d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:60:in `select_all' d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:81:in `cache_sql' d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:60:in `select_all' d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:661:in `find_by_sql' d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:1553:in `find_every' d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:615:in `find' D:/ROR/Aptana/dedomenon/app/models/category.rb:50:in `get_all_with_exclusive_scope' D:/ROR/Aptana/dedomenon/app/models/category.rb:50:in `get_all_with_exclusive_scope' D:/ROR/Aptana/dedomenon/app/controllers/categories_controller.rb:48:in `index' here is my database.yml file postgre: &postgre adapter: postgresql database: codex host: localhost username: postgres password: root port: 5432 mysql: &mysql adapter: mysql database: project host: localhost username: root password: root port: 3306 development: <<: *postgre test: <<: *postgre production: <<: *postgre master_database: <<: *mysql and my master_databse model is like this class Category < ActiveRecord::Base delegates_connection_to :master_database, :on => [:create, :save, :destroy] end Anyone has any solution??


  • Access SSAS cube from across domains without direct database connection

    - by SuperKing
    Hello, I'm working with SQL Server Analysis Services for the first time and have the dilemma of working on a project in which users must be able to access SSAS Cubes (via a custom web dashboard) that live across different servers and domains, but without having access to the other server's SSAS database connection strings. So Organization A and Organization B will have their own cubes on their own servers, but Organization A users must be able to view Organization B's cubes, and Organization B users must be able to view Organization A's cubes, but neither organization should have access to the connection string. I've read about allowing HTTP access to the SSAS server and cube from the link below, but that requires setting up users for authentication or allowing anonymous access to one organization's server for users of another organization, and I'm not sure this would be acceptable for this situation, or if this is the preferred way to do this. Is performance acceptable here? http://technet.microsoft.com/en-us/library/cc917711.aspx I also wonder if perhaps it makes sense to run a nightly/weekly process that accesses the other organization's SSAS database via a web service or something, and pull that data into a database on the organization's server, and then rebuild the cube. Then that cube would be queried without having to go and connect to the other organization server when viewing the cube. Has anyone else attempted to accomplish something similar? Is HTTP access the standard way to go for this? Or any other possible options? Thanks, and please let me know if you need more info, still unclear on how some of this works.


  • Calling SubmitChanges on DataContext does not update database.

    - by drasto
    In a C# ASP.NET MVC application I use LINQ to SQL to provide data for my application. I have got a simple database schema like this: In my controller class I reference this data context called Model (as you can see on the right side of the picture in properties) like this: private Model model = new Model(); I've got a table (List) of Series rendered on my page. It renders properly and I was able to add delete functionality to delete Series like this: public ActionResult Delete(int id) { model.Series.DeleteOnSubmit(model.Series.SingleOrDefault(s => s.ID == id)); model.SubmitChanges(); return RedirectToAction("Index"); } The appropriate action link looks like this: <%: Html.ActionLink("Delete", "Delete", new { id=item.ID })%> Create (implemented in a similar way) also works fine. However, edit does not work. My edit looks like this: public ActionResult Edit(int id) { return View(model.Series.SingleOrDefault(s => s.ID == id)); } [HttpPost] public ActionResult Edit(Series series) { if (ModelState.IsValid) { UpdateModel(series); series.Title = series.Title + " some string to ensure title has changed"; model.SubmitChanges(); return RedirectToAction("Index"); } I have verified that my database has a primary key set up correctly. I debugged my application and found out that everything works as expected until the line with model.SubmitChanges();. This command does not apply the changes to the Title property (or any other) against the database. Please help.


  • PHP MySQL database problem

    - by Jordan Pagaduan
    Code 1: <?php class dbConnect { var $dbHost = 'localhost', $dbUser = 'root', $dbPass = '', $dbName = 'input_oop', $dbTable = 'users'; function __construct() { $dbc = mysql_connect($this->dbHost,$this->dbUser,$this->dbPass) or die ("Cannot connect to MySQL : " . mysql_error()); mysql_select_db($this->dbName) or die ("Database not Found : " . mysql_error()); } } class User extends dbConnect { var $name; function userInput($q) { $sql = "INSERT INTO $this->dbTable set name = '".$q."'"; mysql_query($sql) or die (mysql_error()); } } ?> Code 2: <?php $q = $_GET['q']; $dbc=mysql_connect("localhost","root","") or die (mysql_error()); mysql_select_db('input_oop') or die (mysql_error()); $sql = "INSERT INTO users set name = '".$q."'"; mysql_query($sql) or die (mysql_error()); ?> My Code 1 save in my database: Saving Multiple! My Code 2 save in my database: What is wrong with my code 1?


  • Database access through collections

    - by Mike
    Hi all, I have a 3-tier application where I need to get database results and populate the UI. I have a MessagesCollection class that deals with messages. I load my user from the database. On the instantiation of a user (i.e. new User()), a MessageCollection Messages = new MessageCollection(this) is performed. Message collection accepts a user as a parameter. User user = user.LoadUser("bob"); I want to get the messages for Bob: user.Messages.GetUnreadMessages(); GetUnreadMessages calls my business data provider, which in turn calls the data access layer. The business data provider returns a List. My question is - I am not sure what the best practice is here - if I have a collection of messages in an array inside the MessagesCollection class, I could implement ICollection to provide GetEnumerator() and the ability to traverse the messages. But what happens if the messages change and the user has old messages loaded? What about big message collections? What if my user had 10,000 unread messages? I don't think accessing the database and returning 10,000 Message objects would be efficient.
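
    On the 10,000-messages concern: the usual pattern is to page at the database and have GetUnreadMessages hand back one page at a time, so the in-memory collection never holds more than a screenful. A sketch of the kind of query the data access layer could run (T-SQL shown; table and column names are invented for illustration):

    ```sql
    -- First page: newest 50 unread messages for the user
    SELECT TOP (50) MessageId, Subject, SentAt
    FROM Messages
    WHERE RecipientUserId = @UserId AND IsRead = 0
    ORDER BY MessageId DESC;

    -- Subsequent pages: pass the last MessageId already shown as a cursor
    SELECT TOP (50) MessageId, Subject, SentAt
    FROM Messages
    WHERE RecipientUserId = @UserId AND IsRead = 0 AND MessageId < @LastSeenMessageId
    ORDER BY MessageId DESC;
    ```

    The staleness question is then handled the same way: each page is read fresh when requested, rather than keeping 10,000 Message objects cached inside the collection.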


  • "Priming" a whole database in MSSQL for first-hit speed

    - by David Spillett
    For a particular app I have a set of queries that I run each time the database has been restarted for any reason (server reboot usually). These "prime" SQL Server's page cache with the common core working set of the data so that the app is not unusually slow the first time a user logs in afterwards. One instance of the app is running on an over-specced arrangement where the SQL box has more RAM than the size of the database (4 GB in the machine, the DB is under 1.5 GB currently and unlikely to grow too much relative to that in the near future). Is there a neat/easy way of telling SQL Server to go away and load everything into RAM? It could be done the hard way by having a script scan sysobjects & sysindexes and running SELECT * FROM <table> WITH(INDEX(<index_name>)) ORDER BY <index_fields> for every key and index found, which should cause every used page to be read at least once and so be in RAM, but is there a cleaner or more efficient way? All planned instances where the database server is stopped are outside normal working hours (all the users are at most one timezone away and, unlike me, none of them work at silly hours), so the risk of such a process slowing users down before it completes - more than an unprimed working set would - is not an issue.
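
    The sysobjects/sysindexes loop described above is essentially the standard answer; a slightly tidier variant on SQL Server 2005+ is to generate the scans from sys.tables and let COUNT_BIG(*) pull each table's clustered index or heap through the buffer pool. A rough sketch (covers clustered indexes and heaps only; nonclustered indexes would still need per-index scans with WITH (INDEX(...)) hints as in the question):

    ```sql
    DECLARE @sql NVARCHAR(MAX);
    SET @sql = N'';

    -- Build one full scan per user table; reading it drags its pages into the buffer pool
    SELECT @sql = @sql
        + N'SELECT COUNT_BIG(*) FROM ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N';' + CHAR(10)
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

    EXEC sp_executesql @sql;
    ```

    With the database well under the 4 GB of RAM on the box, a one-off pass like this after each restart should be enough to keep the first logins from hitting cold disk.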


  • Connecting Error to Remote Oracle XE database using ASP.NET

    - by imsatasia
    Hello, I have installed Oracle XE on my Development machine and it is working fine. Then I installed Oracle XE client on my Test machine which is also working fine and I can access Development PC database from Browser. Now, I want to create an ASP.Net application which can access that Oracle XE database. I tried it too, but it always shows me an error on my TEST machine to connect database to the Development Machine using ASP.Net. Here is my code for ASP.Net application: protected void Page_Load(object sender, EventArgs e) { string connectionString = GetConnectionString(); OracleConnection connection = new OracleConnection(connectionString); connection.Open(); Label1.Text = "State: " + connection.State; Label1.Text = "ConnectionString: " + connection.ConnectionString; OracleCommand command = connection.CreateCommand(); string sql = "SELECT * FROM Users"; command.CommandText = sql; OracleDataReader reader = command.ExecuteReader(); while (reader.Read()) { string myField = (string)reader["nID"]; Console.WriteLine(myField); } } static private string GetConnectionString() { // To avoid storing the connection string in your code, // you can retrieve it from a configuration file. return "User Id=System;Password=admin;Data Source=(DESCRIPTION=" + "(ADDRESS=(PROTOCOL=TCP)(HOST=myServerAddress)(PORT=1521))" + "(CONNECT_DATA=(SERVICE_NAME=)));"; }


  • PDO Database Connections Problem

    - by Metropolis
    Hey everyone, Over a year ago I created my own database classes which use PDO and handle all preparing, executing, and closing of connections. These classes have been working great up until now. There are two different database servers I am grabbing from: MySQL and MS SQL Express. I am retrieving an employee id from the MySQL server and using it to get that employee's information from the MS SQL server. There are about 11k records coming from the MySQL server and my program is only making it through 1200 before crashing with an error like the following: Connection failed (odbc:Driver=FreeTDS;Servername=MSSQLExpress;Database=SMDINC) Class (PDOException) SQLSTATE[08001] SQLDriverConnect: 0 [unixODBC][FreeTDS][SQL Server]Unable to connect to data source It seems like the program is not able to connect to the data source, but it runs the exact same query about 30 times before this without any problem. Also, I have thoroughly checked all of the data coming into the query and it all looks fine. I believe the issue may be that there are too many connections being created, but I have tried to close all connections in many different places, and nothing seems to be fixing the problem. Any debugging help or suggestions would be appreciated! Craig Metrolis


  • SSIS - Bulk Update at Database Field Level

    - by Adam
    Hello, Here's our mission: Receive files from clients. Each file contains anywhere from 1 to 1,000,000 records. Records are loaded to a staging area and business-rule validation is applied. Valid records are then pumped into an OLTP database in a batch fashion, with the following rules: If record does not exist (we have a key, so this isn't an issue), create it. If record exists, optionally update each database field. The decision is made based on one of 3 factors...I don't believe it's important what those factors are. Our main problem is finding an efficient method of optionally updating the data at a field level. This is applicable across ~12 different database tables, with anywhere from 10 to 150 fields in each table (original DB design leaves much to be desired, but it is what it is). Our first attempt has been to introduce a table that mirrors the staging environment (1 field in staging for each system field) and contains a masking flag. The value of the masking flag represents the 3 factors. We've then put an UPDATE similar to... UPDATE OLTPTable1 SET Field1 = CASE WHEN Mask.Field1 = 0 THEN Staging.Field1 WHEN Mask.Field1 = 1 THEN COALESCE( Staging.Field1 , OLTPTable1.Field1 ) WHEN Mask.Field1 = 2 THEN COALESCE( OLTPTable1.Field1 , Staging.Field1 ) ... As you can imagine, the performance is rather horrendous. Has anyone tackled a similar requirement? We're a MS shop using a Windows Service to launch SSIS packages that handle the data processing. Unfortunately, we're pretty much novices at this stuff.
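
    If the target is SQL Server 2008 or later, one option worth evaluating is collapsing the "create if missing, otherwise field-level update" step into a single MERGE per table, keeping the mask-driven CASE logic in the UPDATE branch; for 10-150 fields the column lists would be generated from metadata rather than hand-written. A hedged sketch for one table and one field - names are illustrative, not from the actual schema:

    ```sql
    MERGE OLTPTable1 AS target
    USING (SELECT s.BusinessKey, s.Field1, m.Field1 AS Field1Mask
           FROM StagingTable1 AS s
           JOIN MaskTable1  AS m ON m.BusinessKey = s.BusinessKey) AS src
        ON target.BusinessKey = src.BusinessKey
    WHEN MATCHED THEN
        UPDATE SET Field1 = CASE src.Field1Mask
                                WHEN 0 THEN src.Field1
                                WHEN 1 THEN COALESCE(src.Field1, target.Field1)
                                WHEN 2 THEN COALESCE(target.Field1, src.Field1)
                            END
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (BusinessKey, Field1) VALUES (src.BusinessKey, src.Field1);
    ```

    Whether MERGE or the existing UPDATE is used, performance usually hinges less on the CASE expressions than on having the staging and mask tables indexed on the join key and processing the batch in reasonably sized chunks.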


  • Mysql Database Question about Large Columns

    - by murat
    Hi, I have a table that has 100.000 rows, and soon it will be doubled. The size of the database is currently 5 gb and most of them goes to one particular column, which is a text column for PDF files. We expect to have 20-30 GB or maybe 50 gb database after couple of month and this system will be used frequently. I have couple of questions regarding with this setup 1-) We are using innodb on every table, including users table etc. Is it better to use myisam on this table, where we store text version of the PDF files? (from memory usage /performance perspective) 2-) We use Sphinx for searching, however the data must be retrieved for highlighting. Highlighting is done via sphinx API but still we need to retrieve 10 rows in order to send it to Sphinx again. This 10 rows may allocate 50 mb memory, which is quite large. So I am planning to split these PDF files into chunks of 5 pages in the database, so these 100.000 rows will be around 3-4 million rows and couple of month later, instead of having 300.000-350.000 rows, we'll have 10 million rows to store text version of these PDF files. However, we will retrieve less pages, so again instead of retrieving 400 pages to send Sphinx for highlighting, we can retrieve 5 pages and it will have a big impact on the performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds, however if we retrieve PDF files that have less than 5 pages, the execution time reduces to 0.06 seconds, and it also uses less memory. Do you think, this is a good trade-off? We will have million of rows instead of having 100k-200k rows but it will save memory and improve the performance. Is it a good approach to solve this problem and do you have any ideas how to overcome this problem? The text version of the data is used only for indexing and highlighting. So, we are very flexible. Thanks,
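
    If the chunking route is taken, the large text usually moves into its own child table keyed by document and page range, so the frequently read metadata rows stay small and Sphinx indexes the chunk table directly. A rough sketch of what that child table could look like (names and engine choice are illustrative, not a recommendation):

    ```sql
    -- Holds ~5 pages of extracted PDF text per row
    CREATE TABLE document_text_chunks (
        chunk_id    INT UNSIGNED NOT NULL AUTO_INCREMENT,
        document_id INT UNSIGNED NOT NULL,
        first_page  SMALLINT UNSIGNED NOT NULL,
        last_page   SMALLINT UNSIGNED NOT NULL,
        body        MEDIUMTEXT NOT NULL,
        PRIMARY KEY (chunk_id),
        KEY idx_document_page (document_id, first_page)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    ```

    Since the text is only fetched for indexing and highlighting, the main win is retrieving a few kilobytes per hit instead of a whole multi-hundred-page document, which matches the 0.35 s vs 0.06 s difference already observed.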


  • Database nesting layout confusion

    - by arzon
    I'm no expert in databases and a beginner in Rails, so here goes something which kinda confuses me... Assuming I have three classes as a sample (note that no effort has been made to address any possible Rails reserved words issue in the sample). class File < ActiveRecord::Base has_many :records, :dependent => :destroy accepts_nested_attributes_for :records, :allow_destroy => true end class Record < ActiveRecord::Base belongs_to :file has_many :users, :dependent => :destroy accepts_nested_attributes_for :users, :allow_destroy => true end class User < ActiveRecord::Base belongs_to :record end Upon entering records, the database contents will appear as such. My issue is that if there are a lot of Files for the same Record, there will be duplicate record names. This will also be true if there will be multiple Records for the same user in the the Users table. I was wondering if there is a better way than this so as to have one or more files point to a single Record entry and one or more Records will point to a single User. BTW, the File names are unique. Files table: id name 1 name1 2 name2 3 name3 4 name4 Records table: id file_id record_name record_type 1 1 ForDaisy1 ... 2 2 ForDonald1 ... 3 3 ForDonald2 ... 4 4 ForDaisy1 ... Users table: id record_id username 1 1 Daisy 2 2 Donald 3 3 Donald 4 4 Daisy Is there any way to optimize the database to prevent duplication of entries, or this should really the correct and proper behavior. I spread out the database into different tables to be able to easily add new columns in the future.
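
    The duplication follows from the direction of the foreign keys: as modelled, every File row owns its own Record row, and every Record row owns its own User row. If the intent is "many files share one record, many records share one user", the keys point the other way - files carry a record_id, records carry a user_id, and the names become unique. A hedged sketch of that layout in plain SQL (the Rails associations would flip accordingly, e.g. File belongs_to :record and Record has_many :files):

    ```sql
    CREATE TABLE users (
        id       INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        username VARCHAR(255) NOT NULL,
        UNIQUE KEY uniq_username (username)
    );

    CREATE TABLE records (
        id          INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        user_id     INT NOT NULL,                 -- many records can reference one user
        record_name VARCHAR(255) NOT NULL,
        record_type VARCHAR(255) NULL,
        UNIQUE KEY uniq_user_record (user_id, record_name),
        FOREIGN KEY (user_id) REFERENCES users (id)
    );

    CREATE TABLE files (
        id        INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name      VARCHAR(255) NOT NULL,
        record_id INT NOT NULL,                   -- many files can reference one record
        UNIQUE KEY uniq_file_name (name),
        FOREIGN KEY (record_id) REFERENCES records (id)
    );
    ```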

