Search Results

Search found 10101 results on 405 pages for 'temporary tables'.


  • Announcing Oracle Receivables Generic Data Fix (GDF) for Refunds

    - by user793553
    Here's the first of what will be a series of Generic Data Fixes (GDFs) to be released by Receivables Development. Generic Data Fixes are created by development to fix data issues caused by bugs in the application code. GDF benefits and features include:
    - Developed for bugs that can cause data issues.
    - Provides a SELECT script that uses an identification/signature query to identify and report all data affected by the condition a bug causes.
    - Allows customers to view and modify what will be fixed.
    - Provides a separate FIX script to fix the data reported by the SELECT script. The FIX script creates backup tables for the data that is fixed/updated.
    - Available on My Oracle Support for download.
    In Release 12 a refund can be created in either of two ways: by applying a receipt to the Refund activity, which creates an Invoice in Payables, or by going directly into Payables to create a refund for an open Credit Memo in Receivables. When the Invoice in Payables associated with the refund is cancelled, the corresponding refund application or credit memo in Receivables is not properly re-instated: the receipt application remains applied to the Refund when it should be automatically unapplied, and the credit memo stays closed instead of being re-opened. Doc ID 761993.1 includes the patch that prevents this from happening in the future, as well as a GDF script to fix the current data (script name: ar_std_refund_unapp.sql). Download the script and run it in READ_ONLY_MODE to identify 'refund' applications with this problem. Stay tuned for more GDF scripts coming soon...
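    As a rough, purely illustrative sketch of the SELECT-then-FIX pattern described above (the table and column names below are hypothetical stand-ins, not the actual Receivables schema or the contents of ar_std_refund_unapp.sql):

      -- Identification (SELECT) step: report rows matching the issue signature.
      SELECT application_id, receipt_id, status
        FROM my_refund_applications                -- hypothetical table
       WHERE status = 'APP'                        -- still applied...
         AND refund_invoice_status = 'CANCELLED';  -- ...to a cancelled refund invoice

      -- FIX step: back up the affected rows, then correct them.
      CREATE TABLE my_refund_applications_bkp AS
        SELECT * FROM my_refund_applications
         WHERE status = 'APP'
           AND refund_invoice_status = 'CANCELLED';

      UPDATE my_refund_applications
         SET status = 'UNAPP'
       WHERE status = 'APP'
         AND refund_invoice_status = 'CANCELLED';

    The real GDF follows the same shape: run the identification query first, review the rows it reports, and only then run the FIX script, which keeps a backup of everything it touches.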

    Read the article

  • Update to SQL Server Configuration Scripting Utility

    - by Bill Graziano
    Last spring I released a utility to script SQL Server configuration information on CodePlex.  I’ve been making small changes in this application as my needs have changed.  The application is a .NET 2.0 console application.  This utility serves two needs for me.  First it helps with disaster recovery.  All server level objects (logins, jobs, linked servers, audits) are scripted to a single file per object type.  This enables the scripts to be easily run against a DR server.  If these are checked into source control you can view the history of the script and find out what changed and when. The second goal is to capture what changed inside a database.  Objects inside a database (tables, stored procedures, views, etc.) are each scripted to their own file.  This makes it easier to track the changes to an object over time.  This does include permissions and role membership so you can capture security changes.  My assumption is that a database backup is the primary method of disaster recovery for databases so this utility is designed to capture changes to objects.  You can find the full list of changes from the original on the Downloads page on CodePlex.

    Read the article

  • How to Deal with an out of touch "Project manager"

    - by Joe
    This "manager" is 70+ yrs old and a math genius. We were tasked with creating a web application. He loves SQL and stored procedures. He first created this in MS access. For the web app I had to take his DB migrate to SQL server. His first thought was to have a master stored procedure with a WAITFOR Handling requests from users. I eventually talked him out of that and use asp.net mvc. Then eventually use the asp.net membership. Now the web app is a mostly handles requests from the pages that is passed to stored procedures. It is all stored procedure driven. The business logic as well. Now we are having an one open DB connection per user logged in plus 1. I use linq to sql to check 2 tables and return the values thats it period. So 25 users is a load. He complains why my code is bad cause his test driver stored procedure simulates over 100 users with no issue. What are the best arguments for not having the business logic not all in stored procedures?? How should I deal with this?? I am giving an abbreviated story of course. He is a genius part owner of the company all the other owners trust him because he is a genius. and quoting -"He gets things done. old school".

    Read the article

  • How to capture a Header or Trailer Count Value in a Flat File and Assign to a Variable

    - by Compudicted
    Recently I had several questions concerning how to process files that carry a header and a trailer record. Typically those files are the product of a data extract from non-Microsoft products, e.g. an Oracle database, encompassing data from various tables, where every row starts with a record identifier. For example, such a file could look like:
      HDR,INTF_01,OUT,TEST,3/9/2011 11:23
      B1,121156789,DATA TEST DATA,2011-03-09 10:00:00,Y,TEST 18 10:00:44,2011-07-18 10:00:44,Y
      B2,TEST DATA,2011-03-18 10:00:44,Y
      B3,LEG 1 TEST DATA,TRAN TEST,N
      B4,LEG 2 TEST DATA,TRAN TEST,Y
      FTR,4,TEST END,3/9/2011 11:27
    A developer can normally break the records apart using a Conditional Split Transformation component with an expression similar to SUBSTRING(Output1,1,2) == "B1" for each output, but a verification is often required after this step to check whether the number of data records read corresponds to the number specified in the trailer record of the file. This part sometimes trips people up, so I decided to share what I came up with. As an aside, the approach I use is slightly more portable than some others I have seen, because it is a separate DFT that can be copied and pasted onto a new SSIS package designer surface or re-used within the same package, and it can survive several trailer/footer records. The first step is to create a Flat File Connection Manager and make sure the row is split into columns. After you are done with the Flat File connection, add an Aggregate transformation, which is used simply to assign a value to a variable (the aggregate handles the possibility of multiple footers/headers). The next step is adding a Script Transformation as a destination; after a little variable setup it requires very little coding, but it is important to place your code into the appropriate routine in the script, otherwise the end result may not be as expected. As the last step, use a regular Script Component to compare the variable value obtained from the DFT above to a package variable obtained, say, via a Row Count component, to determine whether the file being processed has the right number of rows.

    Read the article

  • Unicode license

    - by Eric Grange
    The Unicode Terms of Use (http://www.unicode.org/copyright.html) state that any software that uses their data files (or a modification of them) should carry the Unicode license references. It seems to me that most Unicode libraries have functions to check if a character is a digit, a letter, a symbol, etc., and so will contain a modification of the Unicode Data Files (usually in the form of tables). Does that mean the license applies, and all applications that use such Unicode libraries should carry the license? I've checked around, and it appears very few Unicode software packages do carry the license, though arguably most of those that didn't were from companies that are members of the Unicode consortium (do they get a license exemption?). Some (for instance Mozilla) are only "Liaison Members", and while their software does not carry the license (AFAICT), they obviously rely on data derived from those data files. Is Mozilla in breach of the license? Should we carry the license in all apps that include any form of advanced Unicode support (i.e. that are bound to rely on the Unicode data files)? Or is there some form of broad exemption, since very, very few software packages out there carry the license?

    Read the article

  • Modular Database Structures

    - by John D
    I have been examining the code base we use at work and I am worried about the size the packages have grown to. The actual code is modular; procedures have been broken down into small functional (and testable) parts. The issue I see is that we have 100 procedures in a single package - almost an entire domain model. I had thought of breaking these packages down to create sub-domains centered around the procedures' relationships to other objects: group a bunch of procedures that have 80% of their relationships to three tables, and so on. The end result would be a lot more packages, but the packages would be smaller, and I feel the entire code base would be more readable - when procedures cross between two domain models it is less of a struggle to figure out which package one belongs to. The problem I now have is what the actual benefit of all this would really be. I looked at the general advantages of modularity: 1. Re-usability 2. Asynchronous development 3. Maintainability. Yet when I consider our latest development, the procedures within the packages are already reusable. At this advanced stage we rarely require asynchronous development - and when it is required we simply ladder the stories across iterations. So I guess my question is whether people know of reasons why you would break down classes rather than just the methods inside of classes? Right now I do believe there is an issue with these mega-packages forming, but the only benefit I can really pin down for breaking them down is readability - something that experience gained from working with them would solve.

    Read the article

  • Splitting a tetris game apart - where to put time-management?

    - by nightcracker
    I am creating a tetris game in C++ & SDL, and I'm trying to do it "good" by making it object-oriented and keeping scopes small. So far I have the following structure:
    - A main with some low-level SDL setup and input handling
    - A game class that keeps track of score and provides the interface for main (move block down, etc.)
    - A map class that keeps track of the current game field and which blocks are where. Used by the game class.
    - A block class that represents the currently falling block, used by game.
    - A renderer class abstracting low-level SDL to a format where you render "tetris blocks". Used by map and block.
    Now I am having a tough time deciding where to place the time-management of this game. For example, where should it be decided, once a block bumps the bottom of the screen, how long it takes before the current block locks in place and a new block spawns? I also have another, unrelated question: is there some place where you can find standard data on tetris, like standard score tables, rulesets, timings, etc.?

    Read the article

  • Laptop Display Not Working

    - by etrask
    Hello, I just purchased this laptop. It is working fine but I want to use Xubuntu on it. I managed to get 10.04 installed and running but it did not recognize/use the wireless card. So I want to try 10.10. However, every time I try, the screen goes black and never comes back on. I have tried the Live CD and Alternate install CDs, for both Ubuntu and Xubuntu 10.10. After I select "Install Ubuntu" or "Try Ubuntu", the screen blacks out and never displays anything. So I deleted "quiet" and "splash" off the command line to see what messages would come up. A bunch of text flies by but it ends at:
      [time] TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
      [time] TCP bind hash table entries 65536 (order: 8, 1048576 bytes)
      [time] TCP: Hash tables configured (established 524288 bind 65536)
      [time] TCP reno registered
      [time] UDP hash table entries: 2048 (order: 4, 65536 bytes)
      [time] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
      [time] NET: Registered protocol family 1
    And freezes here forever. I have tried nomodeset, vga=771, and xforcevesa with no progress. Is my video card simply not going to work with this distro? It seems strange that 10.04 would work fine but not 10.10. Any help is appreciated, thank you.

    Read the article

  • Help with dual booting Windows 8.1 Professional and Ubuntu 13.10

    - by user1292548
    I recently installed a clean version of Windows 8.1 Professional on my Lenovo Y500 (with a Samsung 256GB 840 Pro SSD). I have Windows all set up and running normally. I am trying to dual boot Windows 8.1 and Ubuntu 13.10, but the installation procedure doesn't allow me to "Install alongside...", nor does it show my SSD partitions correctly when I choose the "Something Else" option. I have created a 25GB partition of free space in the Windows disk manager, but on the Ubuntu installation screen it shows the whole drive as free space. I have tried installing with a burned .ISO disk and a bootable USB; the results are the same for both. Windows Disk Management screen: http://imageshack.us/a/img855/9504/59zu.jpg The Ubuntu installation screen: http://imageshack.us/a/img62/2712/9g6i.jpg I've run into this problem before when trying to dual boot Ubuntu and Windows 7 Professional a month ago, but I gave up and never resolved the issue. --EDIT-- I tried what Eero Aaltonen suggested, and this is my result:
      ubuntu@ubuntu:~$ sudo parted /dev/sda print
      Warning: /dev/sda contains GPT signatures, indicating that it has a GPT table. However, it does not have a valid fake msdos partition table, as it should. Perhaps it was corrupted -- possibly by a program that doesn't understand GPT partition tables. Or perhaps you deleted the GPT table, and are now using an msdos partition table. Is this a GPT partition table?
      Yes/No? yes
      Model: ATA Samsung SSD 840 (scsi)
      Disk /dev/sda: 256GB
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt
      Number  Start  End  Size  File system  Name  Flags
      ubuntu@ubuntu:~$

    Read the article

  • Does LINQ require significantly more processing cycles and memory than lower-level data iteration techniques?

    - by Matthew Patrick Cashatt
    Background: I am currently enduring grueling tech interviews for positions that use the .NET stack, some of which include silly questions like this one, and some questions that are more valid. I recently came across an issue that may be valid, but I want to check with the community here to be sure. When asked by an interviewer how I would count the frequency of words in a text document and rank the results, I answered that I would:
    1. Use a stream object to put the text file in memory as a string.
    2. Split the string into an array on spaces while ignoring punctuation.
    3. Use LINQ against the array to .GroupBy() and .Count(), then .OrderBy() said count.
    I got this answer wrong for two reasons:
    1. Streaming an entire text file into memory could be disastrous. What if it was an entire encyclopedia? Instead I should stream one block at a time and begin building a hash table.
    2. LINQ is too expensive and requires too many processing cycles. I should have built a hash table instead and, for each iteration, only add a word to the hash table if it didn't already exist, and otherwise increment its count.
    The first reason seems, well, reasonable. But the second gives me more pause. I thought that one of the selling points of LINQ is that it simply abstracts away lower-level operations like hash tables but that, under the veil, it is still the same implementation. Question: aside from a few additional processing cycles to call the abstracted methods, does LINQ require significantly more processing cycles to accomplish a given data-iteration task than a lower-level approach (such as building a hash table) would?

    Read the article

  • Why it is called "hash table", or "hash function"? Hash doesn't make any sense to me here

    - by Saeed Neamati
    It's now been about 4 years of development that I've spent using, hearing about, talking about, and implementing hash tables and hash functions, but I never really understood why they're called "hash". I remember in my first days of programming this term was a kind of cumbersome piece of terminology for me. I never figured out what it was, based on its name; I just experimentally understood what it does and why and when we should use it. However, I still sometimes try to figure out why it's called hash. I have no problem with table or function and, to be honest, they are pretty deductive, rational terms. However, I think better words could be used instead of hash, like key or uniqueness - why not key table or uniqueness table? According to my dictionary, hash means:
    - Fried dish of potato and meats (highly irrelevant)
    - The # symbol (AKA number sign, pound sign, etc.) (still irrelevant, maybe just a mis-nomenclature)
    - Apply an algorithm to a character string (still has nothing to do with uniqueness, which is the most important feature of a hash table)
    - Cut food
    - Another term for hashish
    Does anyone know why it's called hash?

    Read the article

  • One codebase - lots of hosted services (similar to a basecamp style service) - planning structure

    - by RickM
    We have built a service (PHP-based) for a client, and are now looking to offer it to other clients as a hosted service. For this example, think of it like a hosted forum service: a client signs up on our site and is given a subdomain, or can use their own domain; the code picks up the domain, checks it against a 'master' users table, and then loads the content as needed. I'm trying to work out the best way of handling multiple clients. At the moment I can only think of two options that would work:
    Option 1 - Have one set of database tables, but give each table a column called 'siteid' - this would mean every query has to filter on the siteid. This would effectively work with just one codebase and one database.
    Option 2 - Have one 'master' database with all the core data such as the client details and their domain. When the system resolves the domain, it pulls that client's database details (username/password/dbname) from a table and loads a second, per-client database. The issue here is the security of the MySQL server details; however, it does have the benefit that each client runs on their own database instead of sharing one.
    Which option would I be better taking here, and why? Ideally I want it to be fairly easy to convert the 'standalone' script to the 'multi-domain' script as we're on a tight deadline.
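    As a minimal sketch of Option 1 above - one shared database, every tenant-owned table carrying a 'siteid' column that every query filters on - the DDL below uses hypothetical table and column names, not anything from the actual service:

      -- Master table mapping client domains to a site id (hypothetical names).
      CREATE TABLE sites (
          site_id  INT PRIMARY KEY,
          domain   VARCHAR(255) NOT NULL
      );

      -- Every application table carries the owning site_id, with an index on it.
      CREATE TABLE forum_posts (
          post_id  INT PRIMARY KEY,
          site_id  INT NOT NULL,
          title    VARCHAR(255),
          body     TEXT
      );
      CREATE INDEX ix_forum_posts_site ON forum_posts (site_id);

      -- Every query must then filter on the tenant:
      SELECT post_id, title
        FROM forum_posts
       WHERE site_id = 42;

    The trade-off the question describes follows directly from this: the filter has to appear in every query, but there is only one schema to maintain.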

    Read the article

  • Creating a SQL Azure Database Should be Easier

    - by Ken Cox [MVP]
    Every time I try to create a database + tables + data for Windows Azure SQL I get errors. One of them is 'Filegroup reference and partitioning scheme is not supported in this version of SQL Server.' It's partly due to my poor memory (since I've succeeded before) and partly due to the failure of tools that should be helping me. For example, when I want to create a script from an existing database on my local workstation, I use SQL Server Management Studio (currently v 11.0.2100.60). I go to Tasks > Generate Scripts, which brings up the nice Generate and Publish Scripts wizard. When I go into the Advanced button, under Script for Server Version, why don't I see SQL Azure as an option by now? The tool should be sorting this out for me, right? Maybe this is available in SQL Server Data Tools? I haven't got into that yet. Just merge the functionality with SSMS, please. Anyway, I pick an older version of SQL for the target and still need to tweak it for Azure. For example, I take out all the "[dbo]." stuff. Why is it put there by the wizard? I also have to get rid of "ON [PRIMARY]" to deal with the error I noted at the top. Yes, there's information on what a table needs to look like in SQL Azure, but the tools should know this so I don't have to mess with it.
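    To make the hand-tweaking above concrete, here is a sketch of the kind of edit involved; the table is made up, and the "before" reflects the general shape of script SSMS generates rather than the author's actual output:

      -- As scripted locally by the Generate Scripts wizard (hypothetical table):
      CREATE TABLE [dbo].[Orders](
          [OrderId]  INT          NOT NULL,
          [Customer] NVARCHAR(50) NULL,
          CONSTRAINT [PK_Orders] PRIMARY KEY CLUSTERED ([OrderId] ASC)
              WITH (PAD_INDEX = OFF) ON [PRIMARY]
      ) ON [PRIMARY]

      -- Hand-tweaked for SQL Azure: filegroup references ("ON [PRIMARY]") removed,
      -- clustered primary key kept (SQL Azure of that era required one per table).
      CREATE TABLE [Orders](
          [OrderId]  INT          NOT NULL,
          [Customer] NVARCHAR(50) NULL,
          CONSTRAINT [PK_Orders] PRIMARY KEY CLUSTERED ([OrderId] ASC)
      )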

    Read the article

  • Which frontend framework/library should I learn to enhance an existing site? [on hold]

    - by Codemonkey
    I have a large site that I've coded by hand over the last couple of years. It's a sports results service that allows users to view their results, compare themselves to others, buy photographs, that sort of thing. The code base is fairly substantial and, scarily, uses no frameworks or libraries. It's a PHP backend and a clean & compact frontend. I use the Highcharts library, but other than that all of the JS is my own. I'm not a fan of bulk, even if it is CDN-hosted and heavily cacheable. Maybe I need to change my outlook on this? I want to make some significant changes to the site now, and it seems an appropriate time to enhance my skillset by learning AngularJS, or something else of that ilk. A large part of the site is tables of data and, as just one example of the sort of thing I want to achieve, I'd like to let users add/remove/sort columns better than they currently can. Are any of the various frameworks/libraries out there more suitable for shoehorning into an existing project?

    Read the article

  • Install Base Transaction Error Troubleshooting

    - by LuciaC
    Oracle Installed Base is an item instance life cycle tracking application that facilitates enterprise-wide life cycle item management and tracking capability. In a typical process flow a sales order is created and shipped; this updates Inventory and creates a new item instance in Install Base (IB). The Inventory update results in a record being placed in the SFM Event Queue. If the record is successfully processed the IB tables are updated; if there is an error the record is placed in the csi_txn_errors table, and the error needs to be resolved so that the IB instance can be created. It's extremely important to be proactive and monitor IB Transaction Errors regularly. Errors cascade and can build up exponentially if not resolved. Due to this cascade effect, error records need to be considered as a whole and not individually; the root cause of any error needs to be resolved first, and this may result in the subsequent errors resolving themselves.
    Install Base Transaction Error Diagnostic Program: In the past the IBtxnerr.sql script was used to diagnose transaction errors; this is now replaced by an enhanced concurrent program version of the script. See Doc ID 1501025.1 - Install Base Transaction Error Diagnostic Program for details of how to download, install and run the concurrent program, as well as how to interpret the results. The program provides comprehensive information about the errors found, along with links to knowledge articles which can help to resolve each specific error.
    Troubleshooting: Watch the replay of the 'EBS CRM: 11i and R12 Transaction Error Troubleshooting - an Overview' webcast or download the presentation PDF (go to Doc ID 1455786.1 and click on the 'Archived 2011' tab). The webcast and PDF include more information, including SQL statements that you can use to identify errors and their sources, as well as recommended setup and troubleshooting tips. Refer to these notes for comprehensive information:
    - Doc ID 1275326.1: E-Business Oracle Install Base Product Information Center
    - Doc ID 1289858.1: Install Base Transaction Errors Master Repository
    - Doc ID 577978.1: Troubleshooting Install Base Errors in the Transaction Errors Processing Form
    Don't forget your Install Base Community where you can ask questions to help you resolve your IB transaction errors.
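    Not part of the note above, but as a first, rough look at the error backlog many sites run a query along these lines against the error table the article names; treat the column name here as an assumption and verify it against the csi_txn_errors definition in your own instance:

      -- Hypothetical sketch: summarise outstanding IB transaction errors by message.
      -- The error_text column name is an assumption - check your instance.
      SELECT error_text, COUNT(*) AS error_count
        FROM csi_txn_errors
       GROUP BY error_text
       ORDER BY COUNT(*) DESC;

    Grouping by message text lines up with the note's advice to chase the root cause first: the largest group often points at the cascade source.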

    Read the article

  • Hash Algorithm Randomness Visualization

    - by clstroud
    I'm curious if anyone here has any idea how the images were generated as shown in this response: Which hashing algorithm is best for uniqueness and speed? Ian posted a very well-received response but I can't seem to understand how he went about making the images. I hate to make a new question dedicated to this, but I can't find any means to ask him more directly. On the other hand, perhaps someone has an alternative perspective. The best I can personally come up with would be something almost like a bar graph, which would illustrate how evenly the buckets of the hash table are being filled. I have a working Cocoa program that does this, but it can't generate anything like what he showed there. So the question is twofold, I suppose: A) How does one truly interpret the data he shows? Is it more than "less whitespace = better"? B) How does one generate such an image based on some set of inputs, a hash, and an index? Perhaps I'm misunderstanding entirely, but I really would like to know more about this particular visualization technique. Or maybe I'm misapplying this to hash tables rather than just hashes in general, but in that case I don't know how it would be "bounded" for the image.

    Read the article

  • TraceTune: Improved Comment View

    - by Bill Graziano
    I wanted an easier way to identify queries I'd already looked at so I could skip them. I've been entering comments for each query as I review it. These comments typically fall into three categories: a change I made, no easy fix available, or something that needs to be changed on the client. TraceTune now highlights any statement with a comment in bold. If you hover over the statement you'll see the most recent comment for that statement. This gives me a quick way to see what's new and identify those queries that still need work. This is especially helpful when I come back to a server after weeks or months away. These comments help jog my memory and remind me what I've worked on. I made the font slightly smaller in some of the tables. It's still readable, but I'm able to get more of a SQL statement on the screen. I also got to re-experience the pain of Internet Explorer, Chrome and Firefox all displaying text (and pop-up text) slightly differently. Seeing the comments on a trace has been a big help to me. I often do a round of tuning and then don't come back to a server until months later. Having the comments available helps me get back up to speed quickly.

    Read the article

  • Reporting Solution in PHP / CodeIgniter - Server side logic vs client side

    - by dot
    I'm building a report for an end user. They would like to see a list of all widgets... but they also want to see widgets with missing attributes, like missing names or missing size. So I was thinking of creating one method that returns JSON data containing all widgets... and then using JavaScript to let them filter the data for the missing-data views, instead of re-querying the database. Ultimately, they need to be able to save all "reports" (filtered versions of the data) to a CSV file. These are the two options I'm mulling over:
    Design 1 - Create 3 separate methods in my controller/model, like get_all_data(), get_records_with_missing_names() and get_records_with_missing_size(). When these methods are called, I would display the data on screen and give them a button to save it to a CSV file.
    Design 2 - Create one method called get_all_data() and then, somehow, give them tools in the view to filter the JSON data using tables etc... and then let them save subsets of the data.
    The reality is that in order to display all the data I still need to massage it, and therefore I already know which records are missing attributes, so I'd rather not create a separate method for each filter. I'm not sure how I would do that just yet, but at this point I would like to know some pros/cons of each method. Thanks.
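    One way to see why a single get_all_data() can feed every "report": the query behind it can return the missing-attribute flags alongside the data, so the JavaScript layer only ever filters rows it already has. This is just a sketch - the widgets table and its columns are hypothetical, not from the question:

      -- One result set carries the flags the client-side filters need.
      SELECT w.id,
             w.name,
             w.size,
             CASE WHEN w.name IS NULL OR w.name = '' THEN 1 ELSE 0 END AS missing_name,
             CASE WHEN w.size IS NULL                THEN 1 ELSE 0 END AS missing_size
        FROM widgets w;

    Filtering on missing_name or missing_size client-side then reproduces Design 1's extra methods without extra round trips, and the same filtered rows can be handed to the CSV export.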

    Read the article

  • Best way to update UI when dealing with data synchronization

    - by developerdoug
    I'm working on a bug at work. The app is written in Objective-C for iOS, for the iPad. I'm the new guy there and I've been given a hard task: sometimes the UIButton text property does not show the correct state when syncing. Basically, when the app is syncing, my UI control should say "Syncing", and when it's not syncing it should display "Updated @ [specific date]". Right now there is a property on the app delegate called "SyncInProgress". Querying and syncing occur on a background thread and update a counter, and the property returns a bool by checking the expression 'counter > 0'. There are three states I need to deal with: 1. Sync has started. 2. Sync is updating tables. 3. Sync finished. These need to occur in order. My coworker suggested taking a state-based approach instead of just responding to events. I'm not sure how to go about that. Would it be best to have the UI receive a notification to determine what state it's in, or to poll every so often to see if the state changed? Here are two posts that I put on Stack Overflow, in the last few days, that relate to this: http://stackoverflow.com/questions/11025469/ios-syncing-using-a-state-approach-instead-of-just-reacting-to-events http://stackoverflow.com/questions/11037930/viewcontroller-when-viewwillappear-called-does-not-always-correctly-reflect-stat Any ideas that anyone might have are very much appreciated. Thanks, developerDoug

    Read the article

  • Single-Click Checkbox in RadGridView

    - by Anthony Trudeau
    Originally posted on: http://geekswithblogs.net/tonyt/archive/2014/06/05/156809.aspx
    The Telerik RadGridView for WPF is a flexible visual control for displaying and manipulating tables of data. It's not without its quirks, though, and I ran into one of those quirks recently. The behavior of the RadGridView requires that you first activate a cell for editing; that enables the editing controls underneath. Although that provides better editing capabilities, it can also be unintuitive. In my case I don't want my users to have to click a checkbox value twice in order to change the value. This can be solved easily by setting the EditTriggers and AutoSelectOnEdit properties to CellClick and True respectively. Unfortunately, the story doesn't end there. You would think you could set those properties in a style with a TargetType of GridViewCheckBoxColumn. You'd be wrong.
      <!-- This works -->
      <telerik:GridViewCheckBoxColumn Header="Flag #1"
            DataMemberBinding="{Binding Flag1}"
            EditTriggers="CellClick" AutoSelectOnEdit="True"/>
      <!-- This doesn't work -->
      <Style TargetType="{x:Type telerik:GridViewCheckBoxColumn}">
            <Setter Property="EditTriggers" Value="CellClick" />
            <Setter Property="AutoSelectOnEdit" Value="True" />
      </Style>
    Telerik told me that is the correct, expected behavior, because the columns in the RadGridView are not visual elements. Their explanation is that the Style property comes from the base class FrameworkContentElement, but the column objects aren't visual elements. That may be an implementation truth, but it doesn't make the behavior correct or expected. It's unintuitive that the GridViewCheckBoxColumn exposes a Style property that cannot be used properly, but there it is. At least there's a way to get the desired effect.

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and think I understand the basic concepts of good schema design pretty well. I was recently tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or whether this guy is such a genius that he's operating out of my realm. The DB in question is an in-house app used to enter requests from employees. Looking at just a small section of it, you have information on the users and information on the request being made. I would design this like so:
    User table:
      UserID (primary key, indexed, no dupes)
      FirstName
      LastName
      Department
    Request table:
      RequestID (primary key, indexed, no dupes)
      <...> various data fields containing request details
      UserID -- foreign key associated with User table
    Simple, right? The consultant designed it like this (with sample data):
    UsersTable
      UserID  FirstName  LastName
      234     John       Doe
      516     Jane       Doe
      123     Foo        Bar
    DepartmentsTable
      DepartmentID  Name
      1             Sales
      2             HR
      3             IT
    UserDepartmentTable
      UserDepartmentID  UserID  Department
      1                 234     2
      2                 516     2
      3                 123     1
    RequestTable
      RequestID  UserID  <...>
      1          516     blah
      2          516     blah
      3          234     blah
    The entire database is constructed like this, with every piece of data encapsulated in its own table, with numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this valid design for a small to mid-sized SQL DB? Thanks for comments/answers...
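    For reference, a minimal DDL sketch of the two-table design described in the question; the column types are assumptions, since the question only names the columns:

      -- Sketch of the simpler schema described above (types assumed).
      CREATE TABLE Users (
          UserID     INT PRIMARY KEY,
          FirstName  NVARCHAR(50),
          LastName   NVARCHAR(50),
          Department NVARCHAR(50)
      );

      CREATE TABLE Requests (
          RequestID INT PRIMARY KEY,
          -- <...> various data fields containing request details
          UserID    INT NOT NULL REFERENCES Users(UserID)
      );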

    Read the article

  • Caching by in-memory dictionaries. Are we doing it all wrong?

    - by user73983
    This approach is pretty much the accepted way to do anything in our company. A simple example: when a piece of data for a customer is requested from a service, we fetch all the data for that customer (the part relevant to the service) and save it in an in-memory dictionary, then serve it from there on subsequent requests (we run singleton services). Any update goes to the DB and then updates the in-memory dictionary. It all seems simple and harmless, but as we implement more complicated business rules the cache gets out of sync and we have to deal with hard-to-find bugs. Sometimes we defer writing to the database, keeping new data in the cache until then. There are cases where we store millions of rows in memory because the table has many relations to other tables and we need to show aggregate data quickly. All this cache handling is a big part of our codebase and I sense this is not the right way to do it. All of this juggling adds too much noise to the code and makes it hard to understand the actual business logic. However, I don't think we can serve data in a reasonable amount of time if we have to hit the database every time. I am unhappy about the current situation but I don't have a better alternative. My only solution would be to use the NHibernate 2nd-level cache, but I have nearly no experience with it. I know many companies use Redis or memcached heavily to gain performance, but I have no idea how I would integrate them into our system. I also don't know if they can perform better than in-memory data structures and queries. Are there any alternative approaches that I should look into?

    Read the article

  • TFS SQL Deployment Data Script

    - by Greg
    We are using TFS and SQL 2005 (looking to upgrade to SQL 2012 if that makes a difference). We store our database schema in a Visual Studio Database project (VS 2010). When code is released to live we currently use the Visual Studio Database Project to build a script for all our schema changes. The problem we have been getting is having to alter or add to that script to add/fix data for the deployment. For example if we add a new non-nullable column to an existing table we need to populate that column with data during the insert. Other times we may want to create new records in transactional tables (e.g. assign specific users to a new security access). Do Visual Studio Database Projects have a way to store these scripts that only need to be run once and somehow include them in the build? Does it know which scripts need to be run (for example if we are inserting default data we don't want to do that again a second time)? OR Is there a better way to manage these scripts?
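    Visual Studio database projects do support pre- and post-deployment scripts that are appended to the generated deployment script, and the usual answer to the "only run once" concern is to write those data scripts so they are idempotent, making them harmless to re-run on every release. A sketch of the pattern, with hypothetical tables and values:

      -- Post-deployment data script written to be safe to run repeatedly.
      -- Seed a row only if it is not already there (hypothetical table):
      IF NOT EXISTS (SELECT 1 FROM dbo.SecurityAccess WHERE AccessName = 'ReportViewer')
          INSERT INTO dbo.SecurityAccess (AccessName) VALUES ('ReportViewer');

      -- Populate a newly added column only where it has not been set yet:
      UPDATE dbo.Customers
         SET Region = 'UNKNOWN'
       WHERE Region IS NULL;

    Whether this slots into the VS 2010 database project's build exactly as described is worth verifying against the project documentation; the idempotent-script pattern itself is independent of the tooling.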

    Read the article

  • Architectural approaches to creating a game menu/shell overlay on PC/Linux?

    - by Ghopper21
    I am working on a collection of games for a custom digital tabletop installation (similar to Microsoft Surface tables). Each game will be an individual executable that runs full-screen. In addition, there needs to be a menu/shell overlay program running simultaneously. The menu/shell will allow users to pause games, switch to other games, check their game history, etc. Some key requirements of the shell: it intercepts all user input (mainly multitouch) before passing it on to the currently running game (so that it can, for instance, know to pop up at a "pause" command); and it can be revealed over arbitrary portions of the screen, with the currently running (but presumably paused) game still showing underneath, ideally with its shape/size being dynamic to allow for an animated drawer effect sliding in and out over the game. I'm currently looking into different architectural approaches to this problem, including Fraps and DirectX overlays, but I'm sure I'm missing some ways to think about this. What are the main approaches I should be considering? (Note: the table is currently run by a Windows PC, but it could potentially be a Linux box instead.)

    Read the article

  • Is this table replicated?

    - by fatherjack
    Another in the potentially quite sporadic series of "I need to do ... but I can't find it on the internet". I have a table that I think might be involved in replication, but I don't know which publication it's in... We know the table name - 'MyTable'. We have replication running on our server and it's replicating our database, or part of it - 'MyDatabase'. We need to know if the table is replicated and, if so, which publication will need to be reviewed if we make changes to the table. How?
      USE MyDatabase
      GO
      /* Lots of info about our table but not much that's relevant to our current requirements */
      SELECT * FROM sysobjects WHERE NAME = 'MyTable'
      -- mmmm, getting there
      /* To quote BOL - "Contains one row for each merge article defined in the local database.
         This table is stored in the publication database." The interesting column is [pubid] */
      SELECT * FROM dbo.sysmergearticles AS s WHERE NAME = 'MyTable'
      -- really close now
      /* The sysmergepublications table - "Contains one row for each merge publication defined in
         the database. This table is stored in the publication and subscription databases."
         So this is where we get the publication details */
      SELECT * FROM dbo.sysmergepublications AS s WHERE s.pubid = '2876BBD8-3D4E-4ED8-88F3-581A659E8144'
      -- DONE IT.
      /* Combine the two tables above and we get the information we need */
      SELECT s.[name] AS [Publication name]
      FROM dbo.sysmergepublications AS s
      INNER JOIN dbo.sysmergearticles AS s2 ON s.pubid = s2.pubid
      WHERE s2.NAME = 'MyTable'
    So I now know which

    Read the article
