Search Results

Search found 8523 results on 341 pages for 'bobby tables'.


  • Should I Split Tables Relevant to X Module Into a Different DB? (MySQL)

    - by Michael Robinson
    I've inherited a rather large and somewhat messy codebase, and have been tasked with making it faster, less noodly, and generally better. Currently we use one big database to hold all data for all aspects of the site. As we need to plan for significant growth in the future, I'm considering splitting tables relevant to specific sections of the site into different databases, so that if/when one gets too large for one server, I can more easily migrate some user data to different MySQL servers while retaining overall integrity. I would still need to use joins on some tables across the new databases. Is this a normal thing to do? Would I incur a performance hit because of this?
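
    As a side note on feasibility (a hedged sketch, not from the question; database, table, and credential names are hypothetical): as long as the split databases stay on the same MySQL server, cross-database joins keep working by qualifying each table with its database name. The join only breaks once the databases move to separate servers, which is where the integrity caveat kicks in.

      # Sketch: joining across two databases on one MySQL server.
      # Assumes the MySQLdb (mysqlclient) driver; all names are made up.
      import MySQLdb

      conn = MySQLdb.connect(host="localhost", user="app", passwd="secret")
      cur = conn.cursor()
      cur.execute("""
          SELECT u.user_id, o.order_total
          FROM site_users.users AS u
          JOIN site_orders.orders AS o ON o.user_id = u.user_id
      """)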

    Read the article

  • Mapping tables from an existing database to an object -- is Hibernate suitable?

    - by Bernhard V
    Hello! I've got some tables in an existing database and I want to map them to a Java object. Actually it's one table that contains the main information, and some other tables that reference such a table entry with a foreign key. I don't want to store objects in the database; I only want to read from it. The program should not be allowed to apply any changes to the underlying database. Currently I read from the database with five JDBC SQL queries and then set the results on an object. I'm now looking for a less code-intensive way. Another goal is the learning aspect. Is Hibernate suitable for this task, or is there another ORM framework that better fits my requirements?
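
    For a feel of what an ORM buys here, a cross-language sketch: it uses Python's SQLAlchemy (which another question on this page uses) rather than Hibernate itself, whose equivalents are its mapping files/annotations and eager fetching. The main table and its referencing tables become mapped classes, and one query loads the whole object graph in place of the five hand-written JDBC queries. All table and column names below are hypothetical.

      # Sketch: ORM mapping of an existing parent/child schema for reading.
      from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
      from sqlalchemy.orm import relationship, sessionmaker, joinedload
      from sqlalchemy.ext.declarative import declarative_base

      Base = declarative_base()

      class Detail(Base):
          __tablename__ = 'details'
          id = Column(Integer, primary_key=True)
          main_id = Column(Integer, ForeignKey('main.id'))
          text = Column(String(200))

      class Main(Base):
          __tablename__ = 'main'
          id = Column(Integer, primary_key=True)
          name = Column(String(100))
          details = relationship(Detail)  # child rows load with the parent

      engine = create_engine('mysql://reader:secret@localhost/existing_db')
      session = sessionmaker(bind=engine)()
      # one query instead of five: parent plus children via a JOIN
      obj = session.query(Main).options(joinedload(Main.details)).first()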

    Read the article

  • T-SQL - Is there a (free) way to compare data in two tables?

    - by RPM1984
    Okay, so I have table A and table B (SQL Server 2008). Both tables have the exact same schema. For the purposes of this question, consider table A = my local dev table, table B = the live table. I need to create a SQL script (containing UPDATE/DELETE/INSERT statements) that will update table B to be the same as table A. This script will then be deployed to the live database. Are there any free tools out there that can do this, or better yet, a way I can do it myself? I'm thinking I probably need to do some type of join on all the fields in the tables, then generate dynamic SQL based on that. Anyone have any ideas?
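
    On the free-tool side, the tablediff utility that ships with SQL Server is worth a look. For the do-it-yourself route, one starting point (a sketch under the assumption that both tables are reachable from one connection; table names follow the question): SQL Server 2008 supports EXCEPT, which finds the rows present in one table but not the other - the raw material for generating the INSERT/UPDATE/DELETE script.

      # Sketch: diffing two identically-shaped tables with EXCEPT via pyodbc.
      # The connection string and table names are assumptions.
      import pyodbc

      conn = pyodbc.connect(
          "DRIVER={SQL Server};SERVER=localhost;DATABASE=dev;Trusted_Connection=yes")
      cur = conn.cursor()
      # rows to INSERT into (or UPDATE in) table B
      missing_in_b = cur.execute(
          "SELECT * FROM TableA EXCEPT SELECT * FROM TableB").fetchall()
      # rows to DELETE from (or UPDATE in) table B
      extra_in_b = cur.execute(
          "SELECT * FROM TableB EXCEPT SELECT * FROM TableA").fetchall()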

    Read the article

  • Select from multiple tables in Rails - Has Many "articles" through [table_1, table_2]?

    - by viatropos
    I'm in a situation where I need to get all articles that are tied to a User through 2 tables:
      - article_access: gives users privilege to see an article
      - article_favorites: of public articles, users have favorited these

    So in ActiveRecord you might have this:

      class User < ActiveRecord::Base
        has_many :article_access_tokens
        has_many :article_favorites

        def articles
          unless @articles
            ids = article_access_tokens.all(:select => "article_id").map(&:article_id) +
                  article_favorites.all(:select => "article_id").map(&:article_id)
            @articles = Article.send(:scoped, :conditions => {:id => ids.uniq})
          end
          @articles
        end
      end

    That gives me basically an articles association which reads from two separate tables. The question, though, is: what's the right way to do this? Can I somehow make one SQL SELECT call to do this?
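
    On the single round trip: the two id-collecting queries plus the final lookup can collapse into one statement along the lines below. This is a sketch shown as a raw parameterized query; in Rails it could be fed through find_by_sql or a scope's :conditions. Table names are taken from the associations above and may not match the real schema exactly.

      # Sketch: one SELECT over both join tables via a UNION subquery.
      SINGLE_QUERY = """
          SELECT a.*
          FROM articles a
          WHERE a.id IN (
              SELECT article_id FROM article_access_tokens WHERE user_id = %s
              UNION
              SELECT article_id FROM article_favorites WHERE user_id = %s
          )
      """
      # e.g. cursor.execute(SINGLE_QUERY, (user_id, user_id))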

    Read the article

  • One-to-one table relation - is it harmful to keep the relation in both tables?

    - by EBAGHAKI
    I have 2 tables whose rows have a one-to-one relation. To understand the situation: suppose there is one table with user information, and another table that contains a very specific kind of information, where each user can link to only one entry of that kind (suppose the second table holds characters). And that character can only be assigned to the user who grabs it. Is it against the rules of clean database design to hold the relation key in both tables?

      User Table:      user_id, name, age, character_id
      Character Table: character_id, shape, user_id

    I have to do it for performance; what do you think about it?
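
    For contrast, a sketch of the usual single-sided design (assuming MySQL-style DDL, with the columns from the question): keep the key on one side only and declare it UNIQUE, which still enforces one-to-one but leaves nothing to drift out of sync. Holding the key in both tables mainly costs you the risk of the two copies disagreeing after an update.

      # Sketch: one-to-one via a single UNIQUE foreign key (DDL only).
      ddl = """
      CREATE TABLE users (
          user_id      INT PRIMARY KEY,
          name         VARCHAR(100),
          age          INT
      );
      CREATE TABLE characters (
          character_id INT PRIMARY KEY,
          shape        VARCHAR(50),
          user_id      INT UNIQUE,
          FOREIGN KEY (user_id) REFERENCES users (user_id)
      );
      """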

    Read the article

  • Is there a limit on MERGE tables in MySQL?

    - by sysko
    I'm working on a database with MySQL 5.0 for an open source project. It's used to store sentences in specific languages and their translations in other languages. I used to have one big "sentences" table and a "sentences_translations" table (used to join sentences to sentences), but as we now have nearly one million entries, this has begun to be a bit slow; moreover, most requests are made using a "where lang =" clause. So I've decided to create one table per language, sentences_LANGUAGECODE, plus sentences_translation_LANGSOURCE_LANGTARGET tables, and to create MERGE tables like this: sentences_ENG_OTHERS, which merges sentences_ENG_ARA, sentences_ENG_DEU, etc., for when we want the translations of an English sentence in all languages, and sentences_OTHERS_ENG for when we want only the English translations of some sentences. I've created a script to create all these tables (there are around 31 languages, so more than 60 MERGE tables) and tested it; it works really well. A request which used to take 160 ms now takes only 30. :) But I've discovered that all my MERGE tables after the 15th have "NULL" as their storage engine instead of MRG_MYISAM, and if I delete one, I can then create another; running FLUSH TABLES between creations also lets me create more MERGE tables. So is this a limitation of MySQL? Can we override it? Thanks for your answers.
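
    For readers unfamiliar with the setup being described, a sketch of one such MERGE table (the union members and columns here are hypothetical): a MRG_MYISAM table is only a thin wrapper that opens every underlying MyISAM table, so each one consumes file descriptors and table-cache slots. That is why FLUSH TABLES freeing things up is a telling symptom, and why open_files_limit and table_open_cache are the first settings worth checking.

      # Sketch: DDL for one of the per-language-pair MERGE tables.
      create_merge = """
      CREATE TABLE sentences_ENG_OTHERS (
          sentence_id    INT NOT NULL,
          translation_id INT NOT NULL,
          INDEX (sentence_id)
      ) ENGINE=MRG_MYISAM UNION=(sentences_ENG_ARA, sentences_ENG_DEU)
        INSERT_METHOD=NO;
      """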

    Read the article

  • How to map one class against multiple tables with SQLAlchemy?

    - by tote
    Let's say that I have a database structure with three tables that look like this:

      items
        - item_id
        - item_handle

      attributes
        - attribute_id
        - attribute_name

      item_attributes
        - item_attribute_id
        - item_id
        - attribute_id
        - attribute_value

    I would like to be able to do this in SQLAlchemy:

      item = Item('item1')
      item.foo = 'bar'
      session.add(item)
      session.commit()

      item1 = session.query(Item).filter_by(handle='item1').one()
      print item1.foo # => 'bar'

    I'm new to SQLAlchemy and I found this in the documentation (http://www.sqlalchemy.org/docs/05/mappers.html#mapping-a-class-against-multiple-tables):

      j = join(items, item_attributes, items.c.item_id == item_attributes.c.item_id). \
          join(attributes, item_attributes.c.attribute_id == attributes.c.attribute_id)

      mapper(Item, j, properties={
          'item_id': [items.c.item_id, item_attributes.c.item_id],
          'attribute_id': [item_attributes.c.attribute_id, attributes.c.attribute_id],
      })

    It only adds item_id and attribute_id to Item, and it's not possible to add attributes to the Item object. Is what I'm trying to achieve possible with SQLAlchemy? Is there a better way to structure the database to get the same behaviour of "dynamic columns"?
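
    One commonly suggested alternative to mapping the class against the three-way join (a sketch, simplified to fold the attribute name into item_attributes rather than keeping the separate attributes table): map the EAV rows as their own class and expose them through an association proxy, so attributes read and write like a dict.

      # Sketch: dynamic attributes via association_proxy; schema simplified.
      from sqlalchemy import Column, Integer, String, ForeignKey
      from sqlalchemy.orm import relationship
      from sqlalchemy.ext.declarative import declarative_base
      from sqlalchemy.ext.associationproxy import association_proxy
      from sqlalchemy.orm.collections import attribute_mapped_collection

      Base = declarative_base()

      class ItemAttribute(Base):
          __tablename__ = 'item_attributes'
          item_attribute_id = Column(Integer, primary_key=True)
          item_id = Column(Integer, ForeignKey('items.item_id'))
          name = Column(String(50))
          value = Column(String(200))

      class Item(Base):
          __tablename__ = 'items'
          item_id = Column(Integer, primary_key=True)
          handle = Column(String(50))
          _attrs = relationship(
              ItemAttribute,
              collection_class=attribute_mapped_collection('name'))
          attrs = association_proxy(
              '_attrs', 'value',
              creator=lambda name, value: ItemAttribute(name=name, value=value))

      # usage: item.attrs['foo'] = 'bar'; item.attrs['foo']  # => 'bar'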

    Read the article

  • What to do if 2 (or more) relationship tables would have the same name?

    - by primehunter326
    So I know the convention for naming M-M relationship tables in SQL is to have something like so: for tables User and Data the relationship table would be called UserData or User_Data or something similar (from here). What happens, then, if you need to have multiple relationships between User and Data, representing each in its own table? I have a site I'm working on where I have two primary items and multiple independent M-M relationships between them. I know I could just use a single relationship table and have a field which determines the relationship type, but I'm not sure whether this is a good solution. Assuming I don't go that route, what naming convention should I follow to work around my original problem?
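
    One workaround seen in practice (an assumption, not an established SQL convention): qualify each join table with the role the relationship plays, so two User/Data relationships get two self-describing names.

      # Sketch: role-qualified names for two M-M tables between User and Data.
      ddl = """
      CREATE TABLE UserData_Access    (user_id INT, data_id INT,
                                       PRIMARY KEY (user_id, data_id));
      CREATE TABLE UserData_Favorites (user_id INT, data_id INT,
                                       PRIMARY KEY (user_id, data_id));
      """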

    Read the article

  • How to crosscheck two tables and insert relevant data into a new table in MySQL?

    - by JackDamery
    I'm trying to cross-check rows that exist in two tables using a MySQL query in phpMyAdmin, and then, if a userID is found in both tables, insert their userID and user name into another table. Here's my code:

      INSERT INTO userswithoutmeetings
      SELECT user.userID
      IF('user.userID'='meeting.userID');

    I keep getting plagued by this error:

      1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'IF('user.userID'='meeting.userID')' at line 3

    Other statements I've tried have worked, but haven't deposited the values in the table.
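
    For reference, the cross-check described here is normally written as an INSERT ... SELECT over a JOIN rather than an IF; a sketch, with the column list assumed from the question's "userID and user name":

      # Sketch: insert users that appear in both tables (assumed columns).
      sql = """
      INSERT INTO userswithoutmeetings (userID, name)
      SELECT u.userID, u.name
      FROM user u
      INNER JOIN meeting m ON m.userID = u.userID;
      """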

    Read the article

  • How to copy tables from one website to another with PHP?

    - by Lost_in_code
    I have 2 websites, let's say example.com and example1.com. example.com has a database fruits which has a table apple with 7000 records. I exported apple and tried to import it into example1.com, but I keep getting a "MySQL server has gone away" error. I suspect this is due to some server-side restriction. So, how can I copy the tables without having to contact the system admins? Is there a way to do this using PHP? I went through an example of copying tables, but that was inside the same database. Both example.com and example1.com are on the same server.
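
    Since both sites' databases live on the same MySQL server, one hedged way around the import limit is to skip the dump/import cycle entirely and copy server-side. The target database name and the credentials below are assumptions.

      # Sketch: server-side copy between two databases on one MySQL server.
      import MySQLdb

      conn = MySQLdb.connect(host="localhost", user="admin", passwd="secret")
      cur = conn.cursor()
      cur.execute("CREATE TABLE example1_db.apple LIKE fruits.apple")
      cur.execute("INSERT INTO example1_db.apple SELECT * FROM fruits.apple")

    The same two statements could just as well be issued from PHP; the point is that the 7000 rows never travel through a client connection as one oversized statement, which is the usual trigger for "MySQL server has gone away".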

    Read the article

  • Silverlight Cream for March 13, 2010 -- #813

    - by Dave Campbell
    In this Issue: András Velvárt, Bobby Diaz, John Papa/Laurent Bugnion, Jesse Liberty, Christopher Bennage, Tim Greenfield, and Cameron Albert.

    Shoutouts:
      - Svetla Stoycheva of SilverlightShow has an Interview with SilverlightShow Eco Contest Grand Prize Winner Daniel James
      - Svetla Stoycheva of SilverlightShow also has an Interview with SilverlightShow Eco Contest Community Vote Winner Cigdem Patlak
      - File this under #DoesHeEverSleep, Nokola has an EasyPainter Source Pack 1 Refresh
      - And another filing in the same category by Nokola: EasyPainter Source Pack 2: Flickr, ComboCheckBox and more!

    In my last pre-#MIX10 'Cream post, from SilverlightCream.com:
      - A Designer-friendly Approach to MVVM: András Velvárt has an MVVM tutorial up on SilverlightShow from the Designer perspective of wiring up the View and ViewModel... good stuff my friend!
      - Generate Strongly Typed Observable Events for the Reactive Extensions for .NET (Rx): In an effort to get more familiar with Rx, Bobby Diaz built a WhiteBoard app... there's a demo page plus the source... thanks Bobby!
      - Silverlight TV 13: MVVM Light Toolkit: My friend Laurent Bugnion is probably in the air flying to MIX10 right now, but at the MVP Summit last month he recorded with John Papa, and they produced an episode of Silverlight TV for Laurent's MVVM Light... good stuff guys... see you tomorrow :)
      - WCF RIA Services For Real: Jesse Liberty has a great tutorial up on what it takes to get real data into your app... you know, the stuff you have to get after reading a blog post that uses local stuff :)
      - 1 Simple Step for Commanding in Silverlight: Christopher Bennage is discussing Commanding and of course is using Caliburn for the example :)
      - Use Silverlight Reactive Extensions (Rx) to build responsive UIs: Tim Greenfield is beginning a two-parter on using Rx. In this first he has a comparison of with and without Rx... cool idea.
      - General Purpose Sprite Class: On the heels of Bill Reiss' Sprite posts, Cameron Albert has a General Purpose Sprite class post up using Bill's SilverSprite.

    Stay in the 'Light!

    Read the article

  • MySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our MySQL configuration for our large Magento website. The reason I believe that MySQL needs to be configured further is because New Relic has shown that our SELECT queries are taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results... (Disclaimer: I restarted MySQL earlier after tweaking some settings, and so the results here may not be 100% accurate):

      >> MySQLTuner 1.3.0 - Major Hayden <[email protected]>
      >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
      >> Run with '--help' for additional options and output filtering
      [OK] Currently running supported MySQL version 5.5.37-35.0
      [OK] Operating on 64-bit architecture
      -------- Storage Engine Statistics -------------------------------------------
      [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
      [--] Data in MyISAM tables: 7G (Tables: 332)
      [--] Data in InnoDB tables: 213G (Tables: 8714)
      [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
      [--] Data in MEMORY tables: 0B (Tables: 353)
      [!!] Total fragmented tables: 5492
      -------- Security Recommendations -------------------------------------------
      [!!] User '@host5.server1.autopartsnetwork.com' has no password set.
      [!!] User '@localhost' has no password set.
      [!!] User 'root@%' has no password set.
      -------- Performance Metrics -------------------------------------------------
      [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B)
      [--] Reads / Writes: 95% / 5%
      [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads)
      [!!] Maximum possible memory usage: 220.0G (174% of installed RAM)
      [OK] Slow queries: 0% (6K/5M)
      [OK] Highest usage of available connections: 5% (61/1024)
      [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G
      [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads)
      [OK] Query cache efficiency: 66.9% (3M cached / 5M selects)
      [!!] Query cache prunes per day: 3486361
      [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts)
      [!!] Joins performed without indexes: 1328
      [OK] Temporary tables created on disk: 11% (126K on disk / 1M total)
      [OK] Thread cache hit rate: 99% (61 created / 42K connections)
      [!!] Table cache hit rate: 19% (9K open / 49K opened)
      [OK] Open file limit used: 2% (712/25K)
      [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks)
      [!!] InnoDB buffer pool / data size: 32.0G/213.4G
      [OK] InnoDB log waits: 0
      -------- Recommendations -----------------------------------------------------
      General recommendations:
          Run OPTIMIZE TABLE to defragment tables for better performance
          MySQL started within last 24 hours - recommendations may be inaccurate
          Reduce your overall MySQL memory footprint for system stability
          Enable the slow query log to troubleshoot bad queries
          Increasing the query_cache size over 128M may reduce performance
          Adjust your join queries to always utilize indexes
          Increase table_cache gradually to avoid file descriptor limits
          Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
      Variables to adjust:
          *** MySQL's maximum memory usage is dangerously high ***
          *** Add RAM before increasing MySQL buffer variables ***
          query_cache_size (> 512M) [see warning above]
          join_buffer_size (> 128.0M, or always use indexes with joins)
          table_cache (> 12288)
          innodb_buffer_pool_size (>= 213G)

    My my.cnf configuration is as follows...

      [client]
      port = 3306

      [mysqld_safe]
      nice = 0

      [mysqld]
      tmpdir = /var/lib/mysql/tmp
      user = mysql
      port = 3306
      skip-external-locking
      character-set-server = utf8
      collation-server = utf8_general_ci
      event_scheduler = 0
      key_buffer = 512M
      max_allowed_packet = 64M
      thread_stack = 512K
      thread_cache_size = 512
      sort_buffer_size = 24M
      read_buffer_size = 8M
      read_rnd_buffer_size = 24M
      # for some nightly processes client sessions set the join buffer to 8 GB
      join_buffer_size = 128M
      auto-increment-increment = 1
      auto-increment-offset = 1
      myisam-recover = BACKUP
      max_connections = 1024
      # max connect errors artificially high to support behaviors of NetScaler monitors
      max_connect_errors = 999999
      concurrent_insert = 2
      connect_timeout = 5
      wait_timeout = 180
      net_read_timeout = 120
      net_write_timeout = 120
      back_log = 128
      # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384
      table_open_cache = 12288
      tmp_table_size = 512M
      max_heap_table_size = 512M
      bulk_insert_buffer_size = 512M
      open-files-limit = 8192
      open-files = 1024
      query_cache_type = 1
      # large query limit supports SOAP and REST API integrations
      query_cache_limit = 4M
      # larger than 512 MB query cache size is problematic; this is typically ~60% full
      query_cache_size = 512M
      # set to true on read slaves
      read_only = false
      slow_query_log_file = /var/log/mysql/slow.log
      slow_query_log = 0
      long_query_time = 0.2
      expire_logs_days = 10
      max_binlog_size = 1024M
      binlog_cache_size = 32K
      sync_binlog = 0
      # SSD RAID10 technically has a write capacity of 10000 IOPS
      innodb_io_capacity = 400
      innodb_file_per_table
      innodb_table_locks = true
      innodb_lock_wait_timeout = 30
      # These servers have 80 CPU threads; match 1:1
      innodb_thread_concurrency = 48
      innodb_commit_concurrency = 2
      innodb_support_xa = true
      innodb_buffer_pool_size = 32G
      innodb_file_per_table
      innodb_flush_log_at_trx_commit = 1
      innodb_log_buffer_size = 2G
      skip-federated

      [mysqldump]
      quick
      quote-names
      single-transaction
      max_allowed_packet = 64M

    I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!
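
    One sanity check worth doing on the numbers above (a quick sketch using only figures from the MySQLTuner output): the 220G warning is just the global buffers plus the per-thread buffers multiplied by max_connections, which is why lowering max_connections or the per-session buffers shrinks it.

      # Reproduce MySQLTuner's worst-case memory figure from its own numbers.
      global_buffers_gib = 35.5      # "Total buffers: 35.5G global"
      per_thread_mib = 184.5         # "184.5M per thread"
      max_threads = 1024             # max_connections

      worst_case_gib = global_buffers_gib + per_thread_mib * max_threads / 1024
      print(worst_case_gib)          # -> 220.0, the "Maximum possible memory usage"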

    Read the article

  • Tables in the SQL Server "master" database, will they cause problems?

    - by pepoluan
    Folks, please be kind to me... I'm just an 'accidental' DBA since our DBA resigned, so I'm a total newbie at DBA work... You see, I have this application, "ESET Remote Administration Server" (ERAS), that stores its logs and analysis in (originally) a local Access database. The decision was made to migrate its database to a SQL Server 2008 R2 machine. ESET (the maker of the software) helpfully provided tools to perform such a migration; unfortunately, being the DBA neophyte that I am, I didn't realize that I had to first create my own database (on the SQL Server side) and assign that database as the 'default' database for ERAS' ODBC connection. Now, the migration tool has successfully created a whole bunch of tables inside the "master" database. My questions: Should I leave things as they are, or should I re-migrate the ERAS database to a different database? If you suggest I perform a re-migration, my plan is to (1) create a new instance, (2) create a new database within the new instance, (3) create a new ODBC System DSN on the ERAS server pointing to the new DB in step 2, and (4) use ESET's migration tool to migrate from the current DSN to the new DSN. Do you think I missed a step there? Thanks beforehand for any guidance.
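
    Before deciding, it can help to see exactly what the tool created in master; a small sketch (connection details are placeholders) that lists the user tables there:

      # Sketch: list non-system tables that ended up in the master database.
      import pyodbc

      conn = pyodbc.connect(
          "DRIVER={SQL Server};SERVER=localhost;DATABASE=master;Trusted_Connection=yes")
      for (name,) in conn.execute("SELECT name FROM sys.tables ORDER BY name"):
          print(name)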

    Read the article

  • Sending changes from multiple tables in a disconnected dataset to SQL Server...

    - by Stecy
    We have a third party application that accepts calls using an XML-RPC mechanism for calling stored procs. We send a ZIP-compressed dataset containing multiple tables with a bunch of update/delete/insert operations using this mechanism. On the other end, a CLR sproc decompresses the data and gets the dataset. Then, the following code gets executed:

      using (var conn = new SqlConnection("context connection=true"))
      {
          if (conn.State == ConnectionState.Closed)
              conn.Open();
          try
          {
              foreach (DataTable table in ds.Tables)
              {
                  string columnList = "";
                  for (int i = 0; i < table.Columns.Count; i++)
                  {
                      if (i == 0)
                          columnList = table.Columns[0].ColumnName;
                      else
                          columnList += "," + table.Columns[i].ColumnName;
                  }
                  var da = new SqlDataAdapter("SELECT " + columnList + " FROM " + table.TableName, conn);
                  var builder = new SqlCommandBuilder(da);
                  builder.ConflictOption = ConflictOption.OverwriteChanges;
                  da.RowUpdating += onUpdatingRow;
                  da.Update(ds, table.TableName);
              }
          }
          catch (....)
          {
              .....
          }
      }

    Here's the event handler for the RowUpdating event:

      public static void onUpdatingRow(object sender, SqlRowUpdatingEventArgs e)
      {
          if ((e.StatementType == StatementType.Update) && (e.Command == null))
          {
              e.Command = CreateUpdateCommand(e.Row, sender as SqlDataAdapter);
              e.Status = UpdateStatus.Continue;
          }
      }

    ...and the CreateUpdateCommand method:

      private static SqlCommand CreateUpdateCommand(DataRow row, SqlDataAdapter da)
      {
          string whereClause = "";
          string setClause = "";
          SqlConnection conn = da.SelectCommand.Connection;
          for (int i = 0; i < row.Table.Columns.Count; i++)
          {
              char quoted;
              if ((row.Table.Columns[i].DataType == Type.GetType("System.String")) ||
                  (row.Table.Columns[i].DataType == Type.GetType("System.DateTime")))
                  quoted = '\'';
              else
                  quoted = ' ';
              string val = row[i].ToString();
              if (row.Table.Columns[i].DataType == Type.GetType("System.Boolean"))
                  val = (bool)row[i] ? "1" : "0";
              bool isPrimaryKey = false;
              for (int j = 0; j < row.Table.PrimaryKey.Length; j++)
              {
                  if (row.Table.PrimaryKey[j].ColumnName == row.Table.Columns[i].ColumnName)
                  {
                      if (whereClause != "")
                          whereClause += " AND ";
                      if (row[i] == DBNull.Value)
                          whereClause += row.Table.Columns[i].ColumnName + "=NULL";
                      else
                          whereClause += row.Table.Columns[i].ColumnName + "=" + quoted + val + quoted;
                      isPrimaryKey = true;
                      break;
                  }
              }
              /* Only values for column that is not a primary key can be modified */
              if (!isPrimaryKey)
              {
                  if (setClause != "")
                      setClause += ", ";
                  if (row[i] == DBNull.Value)
                      setClause += row.Table.Columns[i].ColumnName + "=NULL";
                  else
                      setClause += row.Table.Columns[i].ColumnName + "=" + quoted + val + quoted;
              }
          }
          return new SqlCommand("UPDATE " + row.Table.TableName +
                                " SET " + setClause +
                                " WHERE " + whereClause, conn);
      }

    However, this is really slow when we have a lot of records. Is there a way to optimize this, or an entirely different way to send lots of updates/deletes on several tables? I would really much like to use T-SQL for this but can't figure out a way to send a dataset to a regular sproc. Additional notes: We cannot directly access the SQL Server database. We tried without compression and it was slower.
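
    One set-based alternative (a sketch of the general shape, not the poster's CLR environment; it is written in Python with pyodbc purely for illustration, and the staging table and columns are hypothetical): bulk-insert all changed rows into a staging table in one round trip, then let the server apply them with a single MERGE instead of one dynamically built UPDATE per row.

      # Sketch: staging table + MERGE instead of row-by-row updates.
      import pyodbc

      changed_rows = [(1, 'alpha'), (2, 'beta')]  # (Id, Name) pairs to apply

      conn = pyodbc.connect(
          "DRIVER={SQL Server};SERVER=localhost;DATABASE=app;Trusted_Connection=yes")
      cur = conn.cursor()
      cur.fast_executemany = True  # send the whole batch in one round trip
      cur.executemany("INSERT INTO Staging_Item (Id, Name) VALUES (?, ?)",
                      changed_rows)
      cur.execute("""
          MERGE Item AS t
          USING Staging_Item AS s ON t.Id = s.Id
          WHEN MATCHED THEN UPDATE SET t.Name = s.Name
          WHEN NOT MATCHED THEN INSERT (Id, Name) VALUES (s.Id, s.Name);
      """)
      conn.commit()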

    Read the article

  • So Singletons are bad, then what?

    - by Bobby Tables
    There has been a lot of discussion lately about the problems with using (and overusing) Singletons. I was one of those people earlier in my career too. I can see what the problem is now, and yet, there are still many cases where I can't see a nice alternative - and not many of the anti-Singleton discussions really provide one. Here is a real example from a major recent project I was involved in: The application was a thick client with many separate screens and components that use huge amounts of data from a server state which isn't updated too often. This data was basically cached in a Singleton "manager" object - the dreaded "global state". The idea was to have this one place in the app which keeps the data stored and synced, so that any new screens that are opened can just query most of what they need from there, without making repetitive requests for various supporting data from the server. Constantly requesting from the server would take too much bandwidth - and I'm talking thousands of dollars in extra Internet bills per week, so that was unacceptable. Is there any other approach that could be appropriate here than basically having this kind of global data manager cache object? This object doesn't officially have to be a "Singleton" of course, but it does conceptually make sense to be one. What is a nice clean alternative here?
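
    A minimal sketch of the alternative most of those discussions point at (class names here are hypothetical): keep exactly one cache instance, but construct it once at startup and hand it to each screen as an explicit dependency, so nothing reaches for a global.

      # Sketch: the same single shared cache, wired by injection instead of
      # through a Singleton. Names are made up.
      class ServerDataCache:
          def __init__(self, client):
              self._client = client
              self._data = {}

          def get(self, key):
              if key not in self._data:      # fetch once, serve from memory after
                  self._data[key] = self._client.fetch(key)
              return self._data[key]

      class AccountsScreen:
          def __init__(self, cache):         # dependency passed in, not looked up
              self._cache = cache

      # composition root, run once at startup:
      # cache = ServerDataCache(server_client)
      # screen = AccountsScreen(cache)

    Conceptually there is still "one" cache, but only the startup code knows that, which keeps each screen testable with a fake cache.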

    Read the article

  • Good resources for learning Rails?

    - by Bobby Tables
    I just finished working through Peter Cooper's "Beginning Ruby". So now I've got a reasonable grounding in the Ruby language and would like to move onto learning Rails. This question's answers give some good pointers, but I'd like to hear some specific reviews of books and online materials. I generally learn best by working through books with good practical/technical examples AND some passive reading content that breaks up the study between practical and reading sessions (this is what made "Beginning Ruby" great for me), but I'm worried that RoR is evolving fast and that any printed book I order might be obsolete by the time I get it and work through it. Is this a fair worry? Or can anyone recommend a good Rails 3 book that should be up to date at least for the next year or so? Also, I had a brief look at some of the online resources from the other questions, and Rails for Zombies seems to get a lot of praise. Has anyone here actually used it as their introductory guide to Rails? Basically I'd like to hear first-hand accounts of people who went through this "Ruby-to-Rails" learning phase recently and which materials were useful to you.

    Read the article

  • Ruby or Python?

    - by Bobby Tables
    Hi all, This question is extremely subjective and open-ended. It might even sound like something I should just research for myself and make my own decision. But I'd like to put it out there and get some thoughts from others. Long story short - I burned out with the rat race and am on a self-funded sabbatical this year. Much of it is to take a break from the corporate grind and travel around, but I also want to play around with new technologies and do some self-learning projects, to stay up to speed on programming, and well - I just love tinkering with programming, when there's no pressure! Here's the thing: I am a lifetime C/C++/Java programmer. I'm a bit of a squiggly bracket snob since I've been working with this family of languages for my entire programming career. So I'd like to learn a language which isn't so closely syntactically related to this group. What I'm basically looking for is a language which is relatively general purpose, fun to learn, has some new concepts that are different from C++/Java, and has a good community. A secondary consideration is that it has good web development frameworks. A tertiary consideration is that it's not totally academic (read: there are real world jobs out there using it). I've narrowed it down to Ruby or Python. My impression of Ruby is that it is extremely web oriented - that the only real application of it is as a server side scripting language for doing web stuff (mainly Ruby on Rails). For Python I'm not so sure. TL;DR and to put it as succinctly as possible: which of these would be better for a C++/Java guy to learn to get some new perspectives on programming? And which is more open and general purpose and applicable to a wider set of applications? I'm leaning towards Ruby at the moment, but I worry to an extent that it looks like it's used as nothing but a server side web language.

    Read the article

  • Tips On Using The Service Contracts Import Program

    - by LuciaC
    Prior to release 12.1 there was no supported way to import contracts into the EBS Service Contracts application - there were no public APIs nor contract load programs provided. From release 12.1 onwards the 'Service Contracts Import Program' is provided to load service contracts into the application. The Service Contracts Import functionality is explained in How to Use the Service Contracts Import Program - Scope and Limitations (Doc ID 1057242.1). This note includes an attached document which explains the program architecture, shows the Entity Relationship Diagram and details the interface table definitions.

    The Import program takes data from the interface tables listed below and populates the contracts schema tables:
      - OKS_USAGE_COUNTERS_INTERFACE
      - OKS_SALES_CREDITS_INTERFACE
      - OKS_NOTES_INTERFACE
      - OKS_LINES_INTERFACE
      - OKS_HEADERS_INTERFACE
      - OKS_COVERED_LEVELS_INTERFACE

    These interface tables must be loaded via a custom load program. The Service Contracts Import concurrent request is then submitted to create contracts from this legacy data. The parameters to run the Import program are:

      Parameter         | Description
      Mode              | Validate only, Import
      Batch Number      | Batch_Id (unique id populated into the OKS_HEADERS_INTERFACE table)
      Number of Workers | Number of workers required (these are spawned as separate sub-requests)
      Commit size       | Number of successfully processed contracts committed to the database

    The program spawns sub-requests for the import worker(s) and the 'Service Contracts Import Report'. The data is validated prior to import into the Contracts tables, and errors are reported in the Service Contracts Import Report program output file (Import Execution Report). Troubleshooting tips are provided in R12.1 - Common Service Contract Import Errors (Doc ID 762545.1); this document lists some, but not all, import errors, and will be updated over time. Additional help is given in Debugging Tip for Service Contracts Import Errors (Doc ID 971426.1).

    After you successfully import contracts, you can purge the records from the interface tables by running the Service Contracts Import Purge concurrent program. Note that there is no supported way to mass delete data from the Contracts schema tables once they are populated, so data loaded by the Import program must be fully tested and verified before the program is run to load data into a Production system.

    A Service Contracts Import Test program has been provided which will take an existing contract in the application and load the interface tables using the data from that contract. This can be used as an example for guidance on how to load the interface tables. The Test program functionality is explained in How to Use the Service Contracts Test Import Program Provided in Release 12.1 (Doc ID 761209.1). Note that the Test program has some limitations which do not apply to the full Import program, and it is not a supported program; it is simply a testing tool.

    Read the article

  • Temp Table Recompiles

    - by Derek D.
    If you landed on this article, then you most likely know that temp tables can cause recompilation. This happens because temp tables are treated just like regular tables by the SQL Server Engine. When the tables (which the underlying queries rely on) change significantly, SQL Server detects this change (using auto update statistics) [...]

    Read the article

  • How will Deja-Dup operate when backing up to an external USB drive?

    - by Little Bobby Tables
    I want to set up regular backups, and deja-dup seems like a nice tool. However, I want to put my backups on an external USB drive that I have, not on a remote network location. Naturally, this drive is not always connected. If I configure deja-dup to back up to a directory on this drive (e.g. /media/extention/backup), what will happen? Will it prompt me to connect the drive when it is missing (the desired behavior), or just fail silently? Is there some way to tweak it to do so? I can roll my own cron-based backup script that checks if this drive is mounted, but I would really prefer to use an existing, integrated tool.
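
    For completeness, a sketch of the cron-based fallback mentioned above (the mount point and the use of rsync are assumptions, not deja-dup behavior):

      # Sketch: skip the backup run when the USB drive is not mounted.
      import os
      import subprocess
      import sys

      MOUNT_POINT = "/media/extention"
      BACKUP_DIR = os.path.join(MOUNT_POINT, "backup")

      if not os.path.ismount(MOUNT_POINT):
          sys.exit("backup drive not mounted; skipping this run")

      subprocess.check_call(
          ["rsync", "-a", "--delete", os.path.expanduser("~") + "/", BACKUP_DIR])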

    Read the article

  • Graduate expectations versus reality

    - by Bobby Tables
    When choosing what we want to study, and do with our careers and lives, we all have some expectations of what it is going to be like. Now that I've been in the industry for almost a decade, I've been reflecting a bit on what I thought (back when I was studying Computer Science) programming working life was going to be like, and how it's actually turning out to be. My two biggest shocks (or should I say, broken expectations) by far are the sheer amount of maintenance work involved in software, and the overall lack of professionalism:

      - Maintenance: At uni, we were all told that the majority of software work is maintenance of existing systems. So I knew to expect this in the abstract. But I never imagined exactly how overwhelming this would turn out to be. Perhaps it's something I mentally glazed over, and hoped I'd be building cool new stuff from scratch a lot more. But it really is the case that most jobs are overwhelmingly maintenance, bug fixing, and support oriented.

      - Lack of professionalism: At uni, I always had the impression that commercial software work is very process-oriented and stringently engineered. I had images of ISO processes, reams of technical documentation, every feature and bug being strictly documented, and a generally professional environment. It came as a huge shock to realise that most software companies operate no differently to a team of students working on a large semester-long project. And I've worked in both the small agile hack shop, and the medium sized corporate enterprise. While I wouldn't say that it's always been outright "unprofessional", it definitely feels like the software industry (on the whole) is far from the strong engineering discipline that I expected it to be.

    Has anyone else had similar experiences to this? What are the ways in which your expectations of what our profession would be like were different to the reality?

    Read the article

  • Is there such a thing as "closure" with software work?

    - by Bobby Tables
    I burned out last year (after a decade of fulltime programming jobs) and am on a sabbatical now. With all the self-examination, I've started to figure out some of the root causes of my burnout, and one of the major ones is basically this: there was never any real closure in any of the work I've ever done. It was always a case of getting into an open-ended support/maintenance grind and going stale. When I first entered the industry, I had this image of programming work being very project-based. And I expected projects to have a beginning, a middle, and an END. And then you move on and start on something totally new and fresh. Basically I never expected that a lot (most) of software work involves supporting and maintaining the same code base for open-ended long periods of time - years and even decades. That, combined with generally having itchy feet, makes me think that burnout is inevitable for me, after 2-3 years, in ANY fulltime software job. All this sounds like I probably should have been a contractor instead of a fulltimer. But when I discuss this with people, a lot of them say that even THEN you can't really escape having to go back and maintain/support the stuff you worked on, over and over (e.g. coming back on support contracts). The nature of software work is simply like that. There is no project closure, unlike in many other engineering fields. So my question is: Is there ANY programming work out there which is based on short to mid term projects/stints and then moving on cleanly? And is there any particular industry domain or specialization where this kind of project work is typical?

    Read the article

  • Can't print, CUPS package corrupted and hangs on re-install

    - by Little Bobby Tables
    When I upgraded to Ubuntu 10.04, the upgrade process got stuck on the post-installation of the CUPS package. I had to kill processes and run several forced updates before I could finally get regular updates. Ever since, I can't print: the printed file gets messed up and crashes the printer. I also can't re-install CUPS, as each time the installation hangs and I have to kill it before it completes. I tried to find a workaround for this problem, but in vain. Does anyone know how to bypass this? Or at least why the post-installation can hang, and how to re-install a problematic package? Some system specs and other hints: Dell D630 laptop running Ubuntu 10.04, GNOME desktop, standard LAN network, printing to an LPD server. Everything worked fine on 9.10. Also, the printed files themselves are not corrupted. The problem does not seem to be Evince-specific, but common to all printouts.

    Read the article
