Search Results

Search found 7957 results on 319 pages for 'production databases'.

Page 8/319 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Alternatives to SQL-like databases

    - by user613326
    Well, I was wondering: these days computers usually have 2GB or 4GB of memory, and I'd like to use a secure client-server model, for which an SQL database is the likely candidate. On the other hand, I only have about 8000 records, which will not be read or written frequently; in total they would consume less than 16 megabytes. That made me wonder what good, secure options there are in a Windows environment to store the data and work with it in a multi-client, single-server model, without using SQL Server or MySQL. For such a small amount of data, would other ideas be better? I'd like to keep maintenance as simple as possible (no administrator should need to know SQL maintenance, as they don't know databases in my target environment). Maybe storing it in XML files or... something else. I just wonder how others would go about this if ease of administration is the main goal. Oh, and it should be secure too; the client-server data must be at least somewhat protected (maybe NTLM file shares, HTTPS, or... etc.)
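    Purely as an illustration of the "plain files instead of a SQL server" idea raised in the question (not a recommendation, and the class and file names below are made up), a record set this small can live entirely in memory and be persisted to a single file; access control would still have to come from the file share / HTTPS layer the question mentions:

    ```python
    import json, os, tempfile

    class TinyStore:
        """Keep a few thousand records in memory; persist them to one JSON file."""
        def __init__(self, path):
            self.path = path
            self.records = {}
            if os.path.exists(path):
                with open(path, "r", encoding="utf-8") as f:
                    self.records = json.load(f)

        def put(self, key, value):
            self.records[key] = value
            self._save()

        def get(self, key):
            return self.records.get(key)

        def _save(self):
            # Write to a temp file first, then replace, so readers never see a half-written file.
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
            with os.fdopen(fd, "w", encoding="utf-8") as f:
                json.dump(self.records, f)
            os.replace(tmp, self.path)

    store = TinyStore("records.json")
    store.put("user42", {"name": "example", "balance": 10})
    print(store.get("user42"))
    ```

    At 8000 records the whole store loads in milliseconds; the atomic replace on write is what keeps concurrent readers from seeing a torn file.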

    Read the article

  • Migrating SharePoint databases

    - by Alex Bransky
    If you're wondering how to migrate your SharePoint databases to a new server, this Microsoft article is actually pretty useful, though still overly complex like most of their other articles. http://technet.microsoft.com/en-us/library/cc512725.aspx The one thing I would change is that they seem to recommend installing SQL Server Configuration Manager on web servers, when all that was needed in my case was to add an entry to the hosts file on the SharePoint web server that used the IP address of the new SQL Server with the name of the old SQL Server.  This might not be appropriate in cases where the old server is not being decommissioned.

    Read the article

  • How do you track production tasks?

    - by M.C
    I manage a team of coders (5 people) that maintains a few modules in a large project. On top of coding, we also do production operational tasks (like server housekeeping and batch backlog tracking). These tasks are done daily by one person, and the duty is rotated weekly. The problem is this: the tasks are routine, but I can't think of a practical way of ensuring the person does what he is supposed to do. I thought of using spreadsheets to track them, or even a paper checklist which the person on duty has to physically sign off. I just want the person on duty to remember and execute every daily item. What works on your project?

    Read the article

  • Quickly Investigating What's in the Tables of SQL Server Databases

    From SQL Server Management Studio it's hard to look through the first few rows of a whole lot of tables in a database. This is odd, since it is a great way to get quickly familiar with a database. Phil tidied up a SQL routine he uses to investigate databases quickly in a browser. He explains how to use it, how it works, and how to use it from PowerShell.
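    For readers who want to experiment with the general idea before reading the article, here is a rough sketch (in Python with pyodbc, not Phil's routine; the connection string is a placeholder) that loops over every user table and prints its first few rows:

    ```python
    import pyodbc  # assumed available, along with a SQL Server ODBC driver

    conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
                          "DATABASE=MyDb;Trusted_Connection=yes")
    cur = conn.cursor()

    # List every user table, then pull the first few rows of each.
    cur.execute("SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES "
                "WHERE TABLE_TYPE = 'BASE TABLE' ORDER BY TABLE_SCHEMA, TABLE_NAME")
    tables = cur.fetchall()

    for schema, name in tables:
        print(f"--- [{schema}].[{name}] ---")
        # Bracket quoting here is naive; it assumes object names contain no ']'.
        rows = cur.execute(f"SELECT TOP (5) * FROM [{schema}].[{name}]").fetchall()
        print([d[0] for d in cur.description])   # column names
        for row in rows:
            print(tuple(row))
    ```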

    Read the article

  • ACL tool for audit of Ubuntu production servers

    - by migrator
    In my production environment I have close to 10 Ubuntu 12.04 servers, and I want to get the list of users from them. I am looking for some kind of script or tool (non-GUI) to do this. Yes, I can get the list from the /etc/passwd and /etc/group files, but it would be good to have a tool or script for the following reasons. I currently have 10 systems on Ubuntu and 30 systems on Windows 2003, and I am recommending that my organization and IT move all the systems to Ubuntu except the one running MS SQL Server. We do not have good Ubuntu admins, and they should not mess with the system if I give them manual commands. I also need to find out the date of creation of each user and group, and password standards like strength and expiry. Please help me, as I want to automate the process and get the list from the IT team on a weekly basis. Thanks in advance.
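    As a starting point, a minimal sketch of such a script might look like the following (Python 3, assuming it runs with root privileges so chage can read password-ageing data; note that Linux does not record an account's creation date by default, so that item would need a separate convention, such as checking the home directory's timestamp). Run it on each server, for example over ssh in a loop or from cron, and collect the CSV output centrally:

    ```python
    #!/usr/bin/env python3
    """Rough sketch: dump local accounts and password-ageing info on one Ubuntu host."""
    import csv, subprocess, sys

    def list_users(min_uid=1000):
        """Return ordinary (non-system) accounts from /etc/passwd."""
        users = []
        with open("/etc/passwd") as f:
            for line in f:
                name, _pw, uid, gid, _gecos, home, shell = line.strip().split(":")
                if int(uid) >= min_uid and shell not in ("/usr/sbin/nologin", "/bin/false"):
                    users.append((name, uid, gid, home, shell))
        return users

    def password_ageing(user):
        # 'chage -l' prints last password change / expiry dates; needs root.
        out = subprocess.run(["chage", "-l", user], capture_output=True, text=True)
        return out.stdout.replace("\n", " | ")

    writer = csv.writer(sys.stdout)
    writer.writerow(["user", "uid", "gid", "home", "shell", "ageing"])
    for name, uid, gid, home, shell in list_users():
        writer.writerow([name, uid, gid, home, shell, password_ageing(name)])
    ```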

    Read the article

  • Rebuilding system databases in 2008 R2

    - by TiborKaraszi
    All my attempts so far to rebuild the system databases in 2008 R2 have failed. I first tried to run setup from the path below: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Release But the above turns out to be the 2008 setup program, not the 2008 R2 setup, even though I have no 2008 instances installed (I have only R2 instances installed). Apparently, the 2008 setup program does a version check of the instance to be rebuilt, and since it is > 10.50.0, the rebuild fails. Books Online for R2, the section...(read more)

    Read the article

  • New in Production: Fusion CRM Implementation Specialist Exams!

    - by Richard Lefebvre
    The Oracle PartnerNetwork Specialized program is releasing new certifications on our latest products, and partners are invited to be the first candidates. Oracle Fusion Customer Relationship Management 11g Sales Essentials Exam (1Z0-456) – now in Production! All Beta exam participants will receive their exam scores as of September 24, 2012. The successful candidates will receive their certificates starting mid-October 2012. Contact Us: Please direct any inquiries you may have to the Oracle Partner Enablement team at [email protected].

    Read the article

  • Best Way for Developers to Upload Files to Production Server

    - by ultrajohn
    We are a small team of developers doing their work here and there. We have a team leader, who is solely responsible for uploading updated source files from the development server to the production server. So if an updated file needs to be uploaded to the prod server, the developer concerned notifies the team lead about it, and the team lead then pushes the files to the prod server. No developer has access to the prod server except for the team lead. That's our current setup. Now, what we want is to give developers a way to upload their updated files to the server without the team lead intervening in the process. What do you think is the best way to go about this?

    Read the article

  • re: Production ASP.NET MVC Plug

    Just wanted to do a quick plug for my first production ASP.NET MVC app. It's not too complex at the moment, but it's going to grow into something potentially large, so I am now working on a new version to handle multiple cities. I also wanted to get it up and running with only a few jQuery/client-side features and plug in these "nice to haves" as we progress. Let me know what you guys think! I've got a pretty solid handle on Facebook Connect, so...

    Read the article

  • SQL SERVER – Faster SQL Server Databases and Applications – Power and Control with SafePeak Caching Options

    - by Pinal Dave
    Update: This blog post is written based on SafePeak, which is available for free download. Today, I'd like to examine more closely one of my preferred technologies for accelerating SQL Server databases: SafePeak. SafePeak's software provides a variety of advanced data caching options, techniques and tools to accelerate the performance and scalability of SQL Server databases and applications. I'd like to look more closely at some of these options, as some of these capabilities could help you address lagging database performance on your systems. To better understand the available options, it is best to start with the difference between the usual "Basic Caching" and SafePeak's "Dynamic Caching".

    Basic Caching

    Basic Caching (the stale, static cache) is the ability to put the results of a query into cache for a certain period of time. It is based on TTL, or Time-to-live, and the entry is designed to stay in cache no matter what happens to the data. For example, although the actual data can be modified by DML commands (update/insert/delete), the cache will still hold the same obsolete query data; Basic Caching really is a static, stale cache. As you can tell, this approach has its limitations.

    Dynamic Caching

    Dynamic Caching (the non-stale cache) is the ability to put the results of a query into cache while keeping the cache transaction-aware, watching for possible data modifications. The modifications can come as a result of DML commands (update/insert/delete), indirect modifications due to triggers on other tables, executions of stored procedures with internal DML commands, and complex cases of stored procedures with multiple levels of internal stored-procedure logic. When data modification commands arrive, the caching system identifies the related cache items and evicts them from cache immediately. In the dynamic caching option the TTL setting still exists, although its importance is reduced, since the main factor for cache invalidation (or cache eviction) becomes the actual data update commands. Now that we have a basic understanding of the differences between "basic" and "dynamic" caching, let's dive in deeper.

    SafePeak: A comprehensive and versatile caching platform

    SafePeak comes with a wide range of caching options. Some of SafePeak's caching options are automated, while others require manual configuration. Together they provide a complete solution for IT and data managers to reach excellent performance acceleration and application scalability for a wide range of business cases and applications.

    Automated caching of SQL Queries: Fully or semi-automated caching of all "read" SQL queries, containing any type of data, including Blobs, XMLs and Texts as well as all other standard data types. SafePeak automatically analyzes the incoming queries and categorizes them into SQL Patterns, identifying directly and indirectly accessed tables, views, functions and stored procedures.

    Automated caching of Stored Procedures: Fully or semi-automated caching of all "read" stored procedures, including procedures with complex sub-procedure logic as well as procedures with complex dynamic SQL code. All procedures are analyzed in advance by SafePeak's Metadata-Learning process and their SQL schemas are parsed, resulting in a full understanding of the underlying code and object dependencies (tables, views, functions, sub-procedures). This enables automated or semi-automated (manually review and activate with a mouse click) cache activation, with full understanding of the transaction logic for real-time cache invalidation.

    Transaction aware cache: Automated cache awareness for SQL transactions (SQL and in-procs).

    Dynamic SQL Caching: Procedures with dynamic SQL are pre-parsed, enabling easy cache configuration, eliminating SQL Server load for parsing time and delivering high response times even in the most complicated use cases.

    Fully Automated Caching: SQL Patterns (including SQL queries and stored procedures) that are categorized by SafePeak as "read and deterministic" are automatically activated for caching.

    Semi-Automated Caching: SQL Patterns categorized as "read and non-deterministic" are patterns of SQL queries and stored procedures that contain references to non-deterministic functions, like getdate(). Such SQL Patterns are reviewed by the SafePeak administrator, and usually most of them are activated manually for caching (point-and-click activation).

    Fully Dynamic Caching: Automated detection of all dependent tables in each SQL Pattern, with automated real-time eviction of the relevant cache items in the event of "write" commands (a DML statement or a stored procedure) to one of the relevant tables. This is the default setting.

    Semi Dynamic Caching: A manual cache configuration option that reduces the sensitivity of specific SQL Patterns to "write" commands to certain tables/views. An optimization technique relevant for cases when the query data is either known to be static (like archived order details), or when the application's sensitivity to fresh data is not critical and the data can be stale for a short period of time (gaining better performance and reduced load).

    Scheduled Cache Eviction: A manual cache configuration option for scheduling SQL Pattern cache eviction at certain times of day. A very useful optimization technique when, for example, certain SQL Patterns can be cached but are time-sensitive. Example: "select customers whose birthday is today", an SQL with the getdate() function, which can and should be cached, but whose data stays relevant only until 00:00 (midnight).

    Parsing Exceptions Management: Stored procedures that were not fully parsed by SafePeak (due to overly complex dynamic SQL or unfamiliar syntax) are flagged as "Dynamic Objects" with the highest transaction safety settings (such as full global cache eviction, DDL Check = lock cache and check for schema changes, and more). The SafePeak solution points the user to the Dynamic Objects that are important for cache effectiveness and provides an easy configuration interface, allowing you to improve cache hits and reduce global cache evictions. Usually this is the first configuration step in a deployment.

    Overriding Settings of Stored Procedures: Override the settings of stored procedures (or other object types) for cache optimization. For example, if a stored procedure SP1 has an "insert" into table T1, it will not be allowed to be cached. However, it is possible that T1 is just a "logging or instrumentation" table left by developers. By overriding the settings, a user can allow caching of the problematic stored procedure.

    Advanced Cache Warm-Up: Creating an XML-based list of queries and stored procedures (with lists of parameters) for periodic, automated pre-fetching and caching. An advanced tool allowing you to handle rarer but very performance-sensitive queries and pre-fetch them into cache, providing high performance for users' data access.

    Configuration Driven by Deep SQL Analytics: All SQL queries are continuously logged and analyzed, providing users with deep SQL analytics and performance monitoring. Reduce troubleshooting from days to minutes with a heat map of database objects and SQL Patterns. The performance-driven configuration helps you focus on the most important settings that bring the highest performance gains. Using SafePeak SQL Analytics allows continuous performance monitoring and analysis, and easy identification of bottlenecks in both real-time and historical data.

    Cloud Ready: Available for instant deployment on Amazon Web Services (AWS).

    As you can see, there are many options to configure SafePeak's SQL Server database and application acceleration caching technology to best fit a lot of situations. If you're not familiar with their technology, they offer free-trial software you can download that comes with a free "help session" to help get you started. You can access the free trial here. Also, SafePeak is available for use on Amazon Cloud.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
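    To make the "dynamic caching" idea above concrete, here is a toy, in-memory sketch (plain Python, not SafePeak's implementation) of a cache that remembers which tables each cached query depends on and evicts the affected entries as soon as a write to one of those tables is seen:

    ```python
    import time

    class DynamicCache:
        """Toy illustration of table-aware result caching with write-based eviction."""
        def __init__(self, ttl_seconds=300):
            self.ttl = ttl_seconds
            self.entries = {}        # query text -> (result, expires_at, dependent tables)
            self.by_table = {}       # table name -> set of cached query texts

        def get(self, query):
            hit = self.entries.get(query)
            if hit and hit[1] > time.time():
                return hit[0]
            return None

        def put(self, query, result, tables):
            self.entries[query] = (result, time.time() + self.ttl, set(tables))
            for t in tables:
                self.by_table.setdefault(t, set()).add(query)

        def on_write(self, table):
            # A DML statement (or a proc that writes) touched `table`: evict every
            # cached query that depends on it, so readers never see stale rows.
            for query in self.by_table.pop(table, set()):
                self.entries.pop(query, None)

    cache = DynamicCache()
    cache.put("SELECT * FROM Orders WHERE CustomerId = 7", [("row",)], tables=["Orders"])
    cache.on_write("Orders")   # e.g. an INSERT into Orders arrived
    print(cache.get("SELECT * FROM Orders WHERE CustomerId = 7"))   # None: evicted
    ```

    The TTL still exists as a backstop, but, as described above, the write-driven eviction is what keeps the cache from serving stale rows.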

    Read the article

  • Demantra Post Production Support Common Issues, Troubleshooting Tips, and Maintaining Your Instance

    - by Annemarie Provisero
    ADVISOR WEBCAST: Demantra Post Production Support Common Issues, Troubleshooting Tips, and Maintaining Your Instance. PRODUCT FAMILY: Manufacturing - Demantra Solutions. March 2, 2011 at 8 am PT, 9 am MT, 11 am ET. You have now gone live, or are preparing to go live, on Demantra. What do you need to keep the application running smoothly? This one-hour session is recommended for functional users who give direction to the Demantra application and the technical users who support the application. TOPICS WILL INCLUDE: key troubleshooting logs; keeping the database well maintained, both in backup and performance; when data should be removed and/or archived out of the Demantra application. A short, live demonstration (only if applicable) and a question-and-answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session. The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • SQL Authority News – Download and Install Adventure Works 2014 Sample Databases

    - by Pinal Dave
    If you are using SQL Server, there is a good chance that you are familiar with AdventureWorks. AdventureWorks is a sample database shipped with SQL Server, and it can be downloaded from the CodePlex site. AdventureWorks replaced Northwind and Pubs as the sample database in SQL Server 2005. The Microsoft team keeps updating the sample database as they release new versions. I use the AdventureWorks database for most of my examples, as it is an easy-to-use sample database that is accessible to most people out there. Every new version of SQL Server should have its own AdventureWorks database. The reason is that SQL Server comes up with new features in every version, and most of the new features need a new sample dataset to demonstrate their capabilities. This is why every version of SQL Server has its own AdventureWorks database. SQL Server 2014 has many new features, and to support them Microsoft has released the new AdventureWorks 2014 sample database. You can download the Adventure Works 2014 Sample Databases from here. Here is a quick tutorial on how to install the AdventureWorks database on your server. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone with an interesting situation. He has 15,000 customers, and he asks if he should have a database per customer for their data. Without a LOT more data it's impossible to say, of course, but there are some general concepts to keep in mind. Whenever you're segmenting data, it's all about boundary choices. You have not only boundaries around how big the data will get, but also things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information) and – very important – what the security requirements are. From the answers to these types of questions, you then have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a Schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it's a mix – perhaps this person could segment customers into larger regions or districts or products, each in a database. Within that database there might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
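    As a rough illustration of that spreadsheet method (the options, requirements, weights and scores below are invented for the example, not taken from the post), the same scoring can be done in a few lines of Python:

    ```python
    # Score each design option against weighted requirements; the highest total wins.
    requirements = {"security isolation": 3, "admin overhead": 2,
                    "cross-customer queries": 2, "per-customer cost": 1}

    options = {
        "one database per customer":  {"security isolation": 5, "admin overhead": 1,
                                       "cross-customer queries": 1, "per-customer cost": 2},
        "one schema per customer":    {"security isolation": 3, "admin overhead": 2,
                                       "cross-customer queries": 3, "per-customer cost": 4},
        "shared tables + CustomerId": {"security isolation": 2, "admin overhead": 5,
                                       "cross-customer queries": 5, "per-customer cost": 5},
    }

    for name, scores in options.items():
        total = sum(scores[req] * weight for req, weight in requirements.items())
        print(f"{name}: {total}")
    ```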

    Read the article

  • PHP profiling on production server or other options

    - by absentx
    Alright, I need some help here. I am commonly asked to speed up certain sections of some websites that I program for. I have yet to figure out how to use a good PHP diagnosis/profiling tool. Some things to consider: the sites I am working on are already built; getting a testing server set up locally is just a huge pain... I have to rewrite include paths and so many other things. This is a results-oriented deal, and spending days to get a site fully working on a testing platform so I can debug one page probably isn't an option. I can write tons of PHP, but I have no clue how to interact or mess with servers. So every tutorial I read about setting up xdebug or xhprof seems to involve getting something installed on a production server that I don't have access to or have no clue how to work with. So are there any solutions out there that will show me where my PHP is slow without having to do all sorts of server stuff that I just don't know how to do? Xhprof seems the closest to usable for me, but from what I can tell it still has to be installed on a server. If anyone can just point me in the right direction on this I would be very grateful. Maybe getting these things put on the server isn't a big deal... but I have never interacted with server command lines or anything like that. I suppose I should start sometime, but I really have no idea where to start. Plus I realize that profiling on a live platform is not the greatest idea either, but I feel I am in a tough spot. I have speed issues to solve, and setting up a local environment, while a great idea, just doesn't seem practical at the moment.

    Read the article

  • CMS and Databases vs. DIY

    - by hozza
    I have been programming for many years now, primarily in PHP and the like, and would consider myself an intermediate programmer. Some of my online projects have now gone global and are very widely used, and I am now thinking hard about scalability, etc. All of my systems so far are written in PHP, with no conventional database structure such as MySQL. Instead, our databases use an 'operating system style' method of storing information: files and folders, if you will. We also do not use any outside/third-party software or CMS; so far this has worked out extremely well. Most people, when they hear about the way we do things, criticize it and say it is an idiotic idea, but normally after seeing our systems in more depth they are converted to our way of doing things. Is it really that bad not to use a standard database system and to use only the one (slightly heavier than others) language, PHP? How well, on the face of it, will this kind of setup scale? N.B. Our systems include things such as account and user management, documentation development and task/project management.

    Read the article

  • Are document-oriented databases meant to replace relational databases?

    - by evolve
    Recently I've been working a little with MongoDB and I have to say I really like it. However, it is a completely different type of database than I am used to. I've noticed that it is definitely better for certain types of data; however, for heavily normalized databases it might not be the best choice. It appears to me, though, that it could take the place of just about any relational database you may have, and in most cases perform better, which is mind-boggling. This leads me to ask a few questions: Have document-oriented databases been developed to be the next generation of databases and basically replace relational databases completely? Might projects be better off using both a document-oriented database and a relational database side by side, for the various data that is better suited to one or the other? If document-oriented databases are not meant to replace relational databases, does anyone have an example of a database structure which would absolutely be better off in a relational database (or vice versa)?

    Read the article

  • Designing binary operations (AND, OR, NOT) in graph DBs like neo4j

    - by Nicholas
    I'm trying to create a recipe website using a graph database, specifically neo4j with spring-data-neo4j, to see what can be done in graph databases. My model so far is:

    (Chef)-[HAS_INGREDIENT]->(Ingredient)
    (Chef)-[HAS_VALUE]->(Value)
    (Ingredient)-[HAS_INGREDIENT_VALUE]->(Value)
    (Recipe)-[REQUIRES_INGREDIENT]->(Ingredient)
    (Recipe)-[REQUIRES_VALUE]->(Value)

    I have this set up so I can do things like have the "chef" enter the ingredients they have on hand and suggest recipes, as well as suggest recipes that are close matches but are missing one ingredient. Some recipes can get complex, utilizing AND, OR, and NOT type logic, something like (Milk AND (Butter OR Spread OR (Vegetable oil OR Olive oil))), and I'm wondering if it would be sane to model this in a graph using a tree-type representation. An example of what I was thinking is to create three "node" types, AND, OR, and NOT, and have each of them connect to the value nodes underneath. How else might this be represented in a graph database, or is my example above a decent representation?
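    As a sanity check of that tree idea, the sketch below (plain Python rather than Cypher, with a hypothetical tuple encoding) evaluates an AND/OR/NOT requirement tree against the set of ingredients a chef has on hand; this is the same traversal a graph version would perform over AND/OR/NOT nodes and their child relationships:

    ```python
    # Evaluate a recipe requirement tree such as
    #   (Milk AND (Butter OR Spread OR (Vegetable oil OR Olive oil)))
    # against the set of ingredients a chef has on hand.

    def satisfied(node, pantry):
        kind = node[0]
        if kind == "INGREDIENT":
            return node[1] in pantry          # leaf: is the ingredient available?
        if kind == "AND":
            return all(satisfied(child, pantry) for child in node[1:])
        if kind == "OR":
            return any(satisfied(child, pantry) for child in node[1:])
        if kind == "NOT":
            return not satisfied(node[1], pantry)
        raise ValueError(f"unknown node type: {kind}")

    recipe = ("AND",
              ("INGREDIENT", "milk"),
              ("OR",
               ("INGREDIENT", "butter"),
               ("INGREDIENT", "spread"),
               ("OR", ("INGREDIENT", "vegetable oil"), ("INGREDIENT", "olive oil"))))

    print(satisfied(recipe, {"milk", "olive oil"}))   # True
    print(satisfied(recipe, {"milk"}))                # False
    ```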

    Read the article

  • jMonkey Quest Database

    - by theJollySin
    I am building a game in jMonkey (Java) and I have so far only used default quest text. But now I need to start populating a lot of quests with text. My design requires A LOT of quest texts. What is the best way to build a database of quest texts in jMonkey? I don't have a lot of real experience with databases. Is there a database that integrates well with jMonkey? Here are the ideal properties I want in my database, in order of priority: a reasonably light learning curve; easy portability (in Java) to Windows, Linux, and Mac OS X; a good interface with Java; a good interface with jMonkey; the ability to add properties to the quests (ID, level, gender, quest chain ID, etc.). Or am I wrong in thinking I need to use some giant monster like SQL? I haven't been able to find much information on this, so are people using non-database methods for storing things like quest text in jMonkey?

    Read the article

  • How do I implement a score database in Android?

    - by Michael Seun Araromi
    I am making a 2D game for Android using OpenGL ES. It is a space shooting game where the player shoots enemy ships. I want to keep track of the score for the number of enemy ships destroyed, and a record of the local high score. The score should be incremented whenever an enemy is destroyed. I also want a way of displaying both the current score and the high score on the game screen. I am not familiar with databases at all, and I would appreciate a clear answer or a link to a good tutorial for my situation. Thanks.

    Read the article

  • Databases and Beer

    - by Johnm
    It is a bit of a no-brainer: include the word "beer" in the subject line of an e-mail or blog post title and you can be certain that it will be read. While there are times this practice might be a ploy to increase readership, that is not the case for this blog post. There is inspiration to be drawn from other industries that we, as database professionals, can apply in our own. In this post I will highlight one of my favorite participants in the brewing industry. The Boston Beer Company started in the 1970s in Boston, Massachusetts. Others may be more familiar with this company through their Samuel Adams Boston Lager and various seasonal beers. I am continually inspired by their commitment to mastery of the brewing process, which they evangelize frequently in their commercials. They are also continually pushing the boundaries of beer as we know it while working within traditional constraints. A recent example of this is their collaboration with the Weihenstephan Brewery of Munich, Germany to produce the soon-to-be-released Infinium beer. This beer, while brewed as an ale, is touted as something closer to Champagne, all while complying with the Reinheitsgebot. The Reinheitsgebot is also known as the "German Beer Purity Law", which originated in 1516. This law states that beer is to consist of water, barley, hops and yeast. That's it. Quite a limiting constraint indeed. And yet, the Boston Beer Company pushed forward. Much like the process of brewing, the discipline of database design and architecture is one that is continually in process and driven by the pursuit of mastery. While we do not have purity laws to constrain us, we have many other types: best practices, company policies, government regulations, security and budgets. Through our fellow comrades, we discuss the challenges and constraints within which we operate. We boil down the principles and theories that define our profession. We reassemble these into something that is complementary to the business needs we must fulfill. As a result, it is not uncommon to see something amazingly innovative in a small business that is pushing the boundaries of its database well beyond its intended state. It is equally common to see innovation in the use of the more advanced database features found in large businesses. The tag line for The Boston Beer Company is "Take Pride In Your Beer." I would like to offer an alternative and say "Take Pride In Your Database." So, as you pour your next Boston Lager into a frosted glass, consider those who spend their lives mastering the craft of brewing, and strive to interject their spirit into everything that you do as a database professional. Cheers!

    Read the article

  • Alternatives for comparing data from different databases

    - by Alex
    I have two huge tables in separate databases. One of them has the information on all the SMS that passed through the company's servers, while the other one has the information on the actual billing of those SMS. My job is to compare samples of both of these tables (for example, the records between 1 and 2 pm) to see if there are any differences: SMS that were sent but not charged to the user, for whatever reason that may be happening. The columns I will be using to compare are the sender's phone number and the exact date the SMS was sent. An issue here is that the dates are usually the same on both sides, but in many cases they differ by 1 or 2 seconds. I have, so far, two alternatives to do this: (PL/SQL) Create two tables where I'm going to temporarily store all the records of that 1-hour sample, one for each of the main tables. Then, for each distinct phone number, select the time of every SMS sent from that phone from both my temporary tables and start comparing them one by one using cursors. In this case, the procedure would be run on the server where one of the sources is, so the contents of the other one would be looked up over a dblink. (sqlplus + C++) Instead of storing the 1-hour samples in new tables, output the query to a text file. I will have two text files, one for each source. Then, open the first file and load all of its content into a hash_map (key-value) using C++, where the key will be the phone number and the value a list of times of SMS sent from that phone. Finally, open the second file, grab each line (in this format: numberX timeX), look for numberX's entry in the hash_map (which will be a list of times) and then check if timeX is in that list. If it isn't, save it somewhere to finally store it in an "uncharged" table (this would also be the final step in case 1). My main concern is efficiency. These samples have about 2 million records on each source, so just grabbing one record on one side and looking it up on the other would not be possible. That's the reason I wanted to use hash_maps. Which do you think is a better option?
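    A minimal sketch of option 2 in Python (the file names and the "number timestamp" line format are assumptions; the question's own approach uses C++ and a hash_map, which a dict mirrors here), treating two timestamps as the same SMS when they are within 2 seconds of each other:

    ```python
    from collections import defaultdict
    from datetime import datetime, timedelta

    TOLERANCE = timedelta(seconds=2)

    def load(path):
        """Each line: '<phone_number> <YYYY-mm-dd HH:MM:SS>' (hypothetical export format)."""
        index = defaultdict(list)
        with open(path) as f:
            for line in f:
                number, date_part, time_part = line.split()
                index[number].append(datetime.strptime(f"{date_part} {time_part}",
                                                       "%Y-%m-%d %H:%M:%S"))
        for times in index.values():
            times.sort()
        return index

    sent = load("sms_sent.txt")         # export from the SMS traffic database
    billed = load("sms_billed.txt")     # export from the billing database

    uncharged = []
    for number, times in sent.items():
        billed_times = billed.get(number, [])
        for t in times:
            # Match if the billing side has a record within +/- 2 seconds.
            if not any(abs(t - b) <= TOLERANCE for b in billed_times):
                uncharged.append((number, t))

    print(f"{len(uncharged)} SMS sent but apparently not billed")
    ```

    Since the per-number lists are sorted, a bisect-based lookup could replace the linear scan if the inner lists turn out to be long.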

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 3)

    - by Sadequl Hussain
    Continuing from Part 2 of the Database Migration Checklist series.

    Step 10: Full-text catalogs and full-text indexing

    This is one area of SQL Server that people do not seem to notice until something goes wrong. Full-text functionality is a specialised area of database application development and is not usually implemented in your everyday OLTP systems. Nevertheless, if you are migrating a database that uses full-text indexing on one or more tables, you need to be aware of a few points. First of all, SQL Server 2005 now allows full-text catalog files to be restored or attached along with the rest of the database. However, after migration, if you are unable to look at the properties of any full-text catalogs, you are probably better off dropping and recreating them. You may also get the following error messages along the way: Msg 9954, Level 16, State 2, Line 1 The Full-Text Service (msftesql) is disabled. The system administrator must enable this service. This basically means the full-text service is not running (disabled or stopped) on the destination instance. You will need to start it from the Configuration Manager. Similarly, if you get the following message, you will also need to drop and recreate the catalog and populate it: Msg 7624, Level 16, State 1, Line 1 Full-text catalog 'catalog_name' is in an unusable state. Drop and re-create this full-text catalog. A full population of full-text indexes can be a time- and resource-intensive operation. Obviously you will want to schedule it for low-usage hours if the database is restored to an existing production server. Also, bear in mind that any scheduled job that existed on the source server for populating the full-text catalog (e.g. a nightly process for incremental updates) will need to be re-created on the destination.

    Step 11: Database collation considerations

    Another sticky area to consider during a migration is the collation setting. Ideally you would want to restore or attach the database to a SQL Server instance with the same collation. Although not commonly used, SQL Server allows you to change a database's collation by using the ALTER DATABASE command: ALTER DATABASE database_name COLLATE collation_name You should not use this command without good reason, as it can get really dangerous. When you change the database collation, it does not change the collation of the existing user table columns. However, the columns of every new table, every new UDT and subsequently created variables or parameters in code will use the new setting. The collation of every char, nchar, varchar, nvarchar, text or ntext field of the system tables will also be changed. Stored procedure and function parameters will be changed to the new collation and, finally, every character-based system data type and user-defined data type will also be affected. And the change may not be successful either if there are dependent objects involved. You may get one or multiple messages like the following: Cannot ALTER 'object_name' because it is being referenced by object 'dependent_object_name'. That is why it is important to test and check for collation-related issues. Collation also affects queries that use comparisons of character-based data. If errors arise due to the two sides of a comparison being in different collation orders, the COLLATE keyword can be used to cast one side to the same collation as the other. Continues…

    Read the article

  • Pages in IE render differently when served through the ASP.NET Development server and Production Server

    - by rajbk
    You see differences in the way IE renders your web application locally on the ASP.NET Development server compared to your production server. Comparing the response from both servers, including response headers and CSS, shows no difference. The issue may occur because of a setting in IE. In IE, go to Tools –> Compatibility View Settings. The checkbox "Display intranet sites in Compatibility View", when turned on, forces IE8 to display the web application content in a way similar to how Internet Explorer 7 handles standards-mode web pages. Since your local web server is considered to be in the intranet zone, IE uses "Compatibility View" to render your pages. While you could uncheck this setting or propagate the change to all developers through group policy settings, a different way is described below. To force IE to mimic the behavior of a certain version of IE when rendering the pages, you use the meta element to include an "X-UA-Compatible" http-equiv header in your web page, or have it sent as part of the header by adding it to your web.config file. The values are listed below:

    <meta http-equiv="X-UA-Compatible" content="IE=4"> <!-- IE5 mode -->
    <meta http-equiv="X-UA-Compatible" content="IE=7.5"> <!-- IE7 mode -->
    <meta http-equiv="X-UA-Compatible" content="IE=100"> <!-- IE8 mode -->
    <meta http-equiv="X-UA-Compatible" content="IE=a"> <!-- IE5 mode -->

    This value can also be set in web.config like so:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <system.webServer>
        <httpProtocol>
          <customHeaders>
            <clear />
            <add name="X-UA-Compatible" value="IE=EmulateIE7" />
          </customHeaders>
        </httpProtocol>
      </system.webServer>
    </configuration>

    The setting can also be added in the IIS metabase, as described here. Similarly, you can do the same in Apache by adding the directive in httpd.conf:

    <Location /store>
      Header set X-UA-Compatible "IE=EmulateIE7"
    </Location>

    Even though it can be done at the site level, I recommend you do it at the per-application level to avoid confusing the developer. References: Defining Document Compatibility; Implementing the META Switch on IIS; Implementing the META Switch on Apache.

    Read the article
