Search Results

Search found 29814 results on 1193 pages for 'sql datetime'.


  • How to optimize a SQL Server table for faster response?

    - by Thomas
    I found that a table with 50 thousand records takes one minute to return data from SQL Server when I issue a simple SQL query. There is one primary key, which means a clustered index already exists. I just do not understand why it takes one minute. Besides an index, what other ways are there to optimize a table so it returns data faster? What do I need to do in this situation to get a faster response? Also, tell me how to always write optimized SQL. Please tell me all the steps for optimization in detail. Thanks.
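
    A starting point for a case like this, sketched below with hypothetical table and column names: cover the columns your WHERE clause filters on with a nonclustered index, and measure the effect rather than guessing.

        -- Hypothetical names; a nonclustered index on a commonly filtered column.
        -- INCLUDE columns let the query be answered from the index alone (no key lookups).
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
            ON dbo.Orders (CustomerID)
            INCLUDE (OrderDate, TotalDue);

        SET STATISTICS IO ON;  -- compare logical reads before and after adding the index
        SELECT OrderDate, TotalDue
        FROM dbo.Orders
        WHERE CustomerID = 42;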

    Read the article

  • Is there an open source repository for SQL code?

    - by morpheous
    I find myself writing SQL code (queries or stored procs) to solve problems that can definitely be described as 'patterns' that occur frequently in business. Rather than having to rack my brain each time I encounter a new problem (which must have been solved countless times by other coders/DB analysts), I wondered if there was a repository I could visit to check out (peer-reviewed) code, and maybe add my two pence every now and then. I know different DB vendors tend to write slightly variant forms of SQL, but there could still be a repository with ANSI stuff and proprietary stuff. Hopefully, such a site would encourage more people to write standardized SQL. Is there such a site? If not, why not? (Would anyone else be interested in such a site?) If such a site exists, please provide link(s), as Google is not finding anything remotely useful.

    Read the article

  • Why can't we use a strong REF CURSOR with a dynamic SQL statement?

    - by Vineet
    Hi all, I am trying to use a strong ref cursor with a dynamic SQL statement, but it gives an error; when I use a weak cursor, it works. Please explain the reason, and please forward me any link about Oracle server architecture that covers how compilation and parsing are done in the Oracle server. This is the error, along with the code:

        ERROR at line 6:
        ORA-06550: line 6, column 7:
        PLS-00455: cursor 'EMP_REF_CUR' cannot be used in dynamic SQL OPEN statement
        ORA-06550: line 6, column 2:
        PL/SQL: Statement ignored

        DECLARE
            TYPE ref_cur_type IS REF CURSOR RETURN employees%ROWTYPE;  -- creating a strong REF cursor; employees is a table
            emp_ref_cur  ref_cur_type;
            emp_rec      employees%ROWTYPE;
        BEGIN
            OPEN emp_ref_cur FOR 'SELECT * FROM employees';
            LOOP
                FETCH emp_ref_cur INTO emp_rec;
                EXIT WHEN emp_ref_cur%NOTFOUND;
            END LOOP;
        END;
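
    For reference, a minimal sketch of the weak-cursor form that PLS-00455 points toward: a strong REF CURSOR carries a compile-time RETURN type, and the compiler cannot check a dynamic SQL string against it, so dynamic OPEN ... FOR requires a weak type such as the built-in SYS_REFCURSOR.

        DECLARE
            emp_ref_cur  SYS_REFCURSOR;       -- weak: no RETURN clause, so dynamic SQL is allowed
            emp_rec      employees%ROWTYPE;
        BEGIN
            OPEN emp_ref_cur FOR 'SELECT * FROM employees';
            LOOP
                FETCH emp_ref_cur INTO emp_rec;
                EXIT WHEN emp_ref_cur%NOTFOUND;
            END LOOP;
            CLOSE emp_ref_cur;
        END;
        /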

    Read the article

  • How do you make long SQL invoked from other code readable?

    - by Artem
    This is a very open question, but I think it can be very beneficial for SQL readability. You have a Java program, and you are trying to call a monster SQL statement from it, with many subqueries and joins. The starting point for my question is a string constant like this:

        static String MONSTER_STATEMENT =
            "SELECT " +
            "  fields " +
            "FROM " +
            "  tableA INNER JOIN tableB ON ... " +
            "WHERE " +
            "  fieldA = (SELECT a FROM TableC) " +
            "  AND fieldB IN (%s) " +
            "  AND fieldC = %d ";

    It later gets filled in using String.format and executed. What are your tricks for making this kind of thing readable? Do you separate out your inner joins? Do you indent the SQL itself inside the string? Where do you put the comments? Please share all the tricks in your arsenal.

    Read the article

  • How to access Oracle system tables from inside a PL/SQL function or procedure?

    - by mjumbewu
    I am trying to access information from an Oracle metadata table from within a function. For example (purposefully simplified):

        CREATE OR REPLACE PROCEDURE MyProcedure IS
            users_datafile_path VARCHAR2(100);
        BEGIN
            SELECT file_name
              INTO users_datafile_path
              FROM dba_data_files
             WHERE tablespace_name = 'USERS'
               AND rownum = 1;
        END MyProcedure;
        /

    When I try to execute this in an sqlplus session, I get the following errors:

        LINE/COL ERROR
        -------- -----------------------------------------------------------------
        5/5      PL/SQL: SQL Statement ignored
        6/12     PL/SQL: ORA-00942: table or view does not exist

    I know the user has access to the table, because when I execute the following command from the same sqlplus session, it displays the expected information:

        SELECT file_name FROM dba_data_files
         WHERE tablespace_name = 'USERS' AND rownum = 1;

    Which results in:

        FILE_NAME
        --------------------------------------------------------------------------------
        /usr/lib/oracle/xe/oradata/XE/users.dbf

    Is there something I need to do differently?
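
    A likely explanation, offered as a hedged note rather than a confirmed diagnosis: in Oracle, privileges acquired through a role (such as access to the DBA_* views via the DBA role) are disabled inside definer's-rights PL/SQL, so a query that works interactively can still raise ORA-00942 inside a procedure. The usual fix is a direct grant to the procedure's owner (the user name below is hypothetical):

        -- Run as a suitably privileged user (e.g. SYS); role-granted privileges
        -- do not apply inside definer's-rights stored procedures.
        GRANT SELECT ON sys.dba_data_files TO app_user;

    Declaring the procedure with AUTHID CURRENT_USER is an alternative worth testing, since invoker's-rights code does honor role privileges.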

    Read the article

  • Can SQL Server 2008 Express be used for offering commercial web hosting services?

    - by Tony_Henrich
    I read that the SQL Server 2008 R2 Express database size limit has been increased to 10 GB. That's good news. Can I use the Express edition of SQL Server to offer web hosting services to the public? Microsoft would be best placed to answer this, but I can't find a clear answer on their site. I also see several Windows web hosting plans that include SQL Server in the total package for less than $5/month, and I am wondering how they can afford to offer this.

    Read the article

  • How do I use SQL Server 2008 with Visual Studio?

    - by acidzombie24
    It's a two-part question: how do I use SQL Server 2008, and how do I use it with Visual Studio? I started a dummy project, and in Server Explorer I tried both 'Create New SQL Server Database' and 'Add Connection', using my computer name (it came from a dropdown) as the server location. When I tried to create the database 'TestDB1' I got an error. I don't understand why. It's a fresh install, and I have restarted the computer a few times since then. I haven't messed with Visual Studio or the servers, or even the control options, to disable anything that would have been automatic. So what's with this? Edit: My goals are to 1) create a database, 2) be able to see all the databases that exist on the server, 3) execute SQL queries in the IDE, and 4) be able to browse tables. I don't need all of these, but as many as possible would be nice.

    Read the article

  • SQL Server 2008: Allow Remote Connections?

    - by George
    I have SQL Server 2000 and 2008 installed on a Windows XP Pro box. I can connect to both DB instances locally. From another box, a Windows 7 machine, I can connect to the SQL 2000 instance on the first box, but I cannot connect to the 2008 instance using the same SQL Server authentication credentials that worked locally. Allow Remote Connections is set to TRUE for both the 2000 and 2008 database instances. What else can I look at to be able to connect to the remote 2008 instance from the Windows 7 box? I am trying to connect using Management Studio 2008.

    Read the article

  • Windows Backup failed as another backup or recovery is in progress.

    - by remunda
    Hi, I have set up backup schedules on our server: SQL Server 2008 backs up at 01:00 AM, and the Windows Server 2008 R2 backup runs at 4:00 AM. The SQL backup runs well, but the system backup sometimes ends with an error. The error is described here: http://technet.microsoft.com/en-us/library/dd364768%28WS.10%29.aspx I believe it is caused by SQL Server, because it unexpectedly runs a backup of its own. This is from the SQL Server log (it appears for every database):

        Date: 4.2.2010 4:00:21
        Log: SQL Server (Current - 29.1.2010 23:25:00)
        Source: spid72
        Message: I/O is frozen on database master. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.

    and

        Date: 4.2.2010 4:00:24
        Log: SQL Server (Current - 29.1.2010 23:25:00)
        Source: Backup
        Message: Database backed up. Database: master, creation date(time): 2010/01/29(23:25:32), pages dumped: 370, first LSN: 859:56:37, last LSN: 859:80:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{17B91D5C-9968-4D11-A7F1-1A31523D32F0}25'}). This is an informational message only. No user action is required.

    My questions are: why does a SQL backup run together with the Windows backup, and how can I resolve these errors? Thank you.

    Read the article

  • SQL Server 2005 database lost. How to recover all records? MDF/LDF size is what it should be

    - by Shantanu Gupta
    A few months back, I installed SQL Server 2005 on one of my client's machines. I gave him a backup option so he could take backups regularly, but he never took any backup. Today he called me and said, 'I am not able to see any of my records.' I visited my client's system and saw that none of the records were present in the tables; there was not even a single row in any of them. Then I checked whether he had any backup file, which turned out to be absent. I asked him what the possible cause could be, and he said it might be due to a virus. After this, I checked the sizes of the MDF and LDF files and found them to be what they should be: when I set up his server, the MDF/LDF files held about 2 MB of data, and now they are 83 MB and 193 MB respectively. This suggests the data is still present in the files but is not being displayed. What could be the possible cause, and how can I restore all the data back to my tables?

    Read the article

  • Diagnosing Microsoft SQL Server error 9001: The log for the database is not available.

    - by Scott Mitchell
    Over the weekend a website I run stopped functioning, recording the following error in the Event Viewer each time a request is made to the website:

        Event ID: 9001
        The log for database 'database name' is not available. Check the event log for related error messages. Resolve any errors and restart the database.

    The website is hosted on a dedicated server, so I am able to RDP into the server and poke around. The LDF file for the database exists in the C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA folder, but attempting to do any work with the database from Management Studio results in a dialog box reporting the same error: 9001, the log for the database is not available. This is the first time I've received this error, and I've been hosting this site (and others) on this dedicated web server for over two years now. It is my understanding that this error indicates a corrupt log file. I was able to get the website back online by detaching the database and then restoring a backup from a couple of days ago, but my concern is that this error is indicative of a more sinister problem, namely a hard drive failure. I emailed support at the web hosting company and this was their reply:

        There doesn't appear to be any other indications of the cause in the Event Log, so it's possible that the log was corrupted. Currently the memory's resources is at 87%, which also may have an impact but is unlikely.

    Can the log just "become corrupted?" My questions: what are the next steps I should take to diagnose this problem? How can I determine if this is, indeed, a hardware problem? And if it is, are there any options beyond replacing the disk? Thanks
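
    One concrete diagnostic step worth noting here (a standard consistency check, not specific to this incident): run DBCC CHECKDB against the restored database and review the output before drawing any conclusions about the hardware.

        -- Reports allocation and consistency errors; substitute your database name.
        DBCC CHECKDB (N'database_name') WITH NO_INFOMSGS, ALL_ERRORMSGS;

    Repeated corruption findings here, or disk errors in the Windows System event log, would point toward hardware rather than a one-off torn log.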

    Read the article

  • Trying to connect simple VB6 ADO to SQL Server 2008

    - by Henry
    We have a VB6 app that, for current purposes, is very basic ADO: Dim a new ADODB.Recordset, set some basic properties like CursorLocation and LockType, set a connection string and a .Source like "SELECT * FROM CustomerMaster", and .Open. Nothing fancy here! Yet, on a new SBS installation with SQL Server 2008 across two servers (one for apps, the other dedicated to SQL!), it dies/hangs/crashes if you try to run such a query from anything but the SQL Server box. Initially, we were using the SQLOLEDB.1 provider, which would crash/hang the entire SQL Server after about 4 such queries (I built a simple 6-line app just for this purpose). Then we switched to the SQL Native Client driver, which did allow us unlimited, happy queries, until you did the first change/update; THEN it would corrupt the SQL Server if you exited and tried to go back in. All this 'corruption' is happening from the app server of the SBS pair, and I presume that the app server (also installed in tandem with the SQL server this week) has the latest MDACs, etc. And running it from a lowly XP workstation is (obviously) no better. ANY ideas??? -Henry

    Read the article

  • Device CAL, User CAL, or processor license needed for SQL 2008 (architecture explained inside)?

    - by nycgags
    So we have a number of servers in the Amazon cloud running SQL Server Standard Edition to aggregate data. For that purpose we are fine; the licensing is handled by our contract with Amazon, no problem there. For the beefier work, we want to install Enterprise Edition (EE) on the servers processing raw data so that we can take advantage of table partitioning. We currently have 3 servers aggregating data from about 40 node servers; all 43 of these servers are running Standard Edition, which is fine. We also have 4 servers running Standard Edition processing the raw data, but I think we can get away with 2 (for redundancy) running Enterprise Edition. We have 2-3 DBAs who access these DW servers for maintenance (using the same Windows login via Remote Desktop). So visually:

        40 -- 3 -- [2] -- 2 -- 1
        nodes -- aggregators -- raw (which we want to run EE) -- calculators -- data warehouse

    Nodes PUSH to aggregators, raws PULL from aggregators, calculators PULL from raw, and calculators PUSH to the data warehouse. I am specifying push vs. pull in case that changes how the number of licenses is calculated.

    Q1) How many device (or user) CALs do we need?
    Q2) Do I need to speak with someone from MSFT to find out if it is OK to install in the Amazon cloud (Amazon said we need to verify it is OK in our license terms)?
    Q3) What happens if another device tries to access a server with a limited number of device CALs?
    Q4) Are the device CALs a simultaneous number of devices, or a total?
    Q5) Do device and user CALs cost the same, or is there a difference?
    Q6) Would we need to buy a processor license (we are hoping not to)?

    Read the article

  • "...then an error occurred during the login process" - Connection Error 233

    - by scott brunner
    We have SQL Server 2008 installed on 64-bit Windows Server 2003. When we try to connect to the local SQL Server using SQL Server Management Studio at the console, we get the error:

        A connection was successfully established with the server, but then an error occurred during the login process. provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.

    When we try TCP from the same local SSMS to the local server, we get the same error, but instead of the pipe message it's something like "connection forcibly closed". Now, here is the strange part: we CAN connect to this SQL Server from any other machine on the network using SSMS, AND we CAN'T connect to ANY SQL Server from the problem server. So it seems the SQL Server instance is fine and accepting remote connections; however, SSMS on that machine will not connect to any SQL Server, even remotely. When we try an ADO.NET connection from C# remotely, we can connect; run that same code on the console of the trouble server and we get the same errors. How can this be solved?

    Read the article

  • Alternative method of viewing a database diagram in SQL Server to see what tables have gone missing?

    - by Triynko
    I have a database diagram for my database, but when I open it in SQL Server, I almost immediately get a message saying some permissions changed or tables in the diagram were dropped or renamed, and tables in the diagram vanish before I can even scroll over to see what or where they were. Basically, it's saying: "Hey, you know all that time you spent laying out tables in this diagram... half of them are going to vanish when you view it, and I'm not going to tell you which tables vanished or where they were in the diagram. You're just going to see a bunch of random empty spaces where tables used to be ;)" Ridiculous. So I thought that maybe if I looked in the dbo.sysdiagrams table, I could find some plain-text definition of the diagram to get a clue about the names of the tables that went missing (because their names were probably only changed slightly) or their coordinates in the diagram (because their spatial location would give me a clue as to what they were), so that I could re-add them. But I can't, because it's a binary definition. So, is there some other program I could use to view the existing database diagram that's not going to just drop and forget the missing tables without telling me what they were? Or is this information lost, at the mercy of an SSMS-proprietary database diagram format and viewer that refuses to cooperate with me?

    Read the article

  • Interview with Geoff Bones, developer on SQL Storage Compress

    - by red(at)work
    How did you come to be working at Red Gate? I've been working at Red Gate for nine months; before that I had been at a multinational engineering company. A number of my colleagues had left to work at Red Gate and spoke very highly of it, but I was happy in my role and thought, 'It can't be that great there, surely? They'll be back!' Then one day I visited to catch up with them over lunch in the Red Gate canteen. I was so impressed with what I found there that, three days later, I'd applied for a role as a developer.

    And how did you get into software development? My first job out of university was working as a systems programmer on IBM mainframes. This was quite a while ago: there was a lot of assembler and loading programs from tape drives and that kind of stuff. I learned a lot about how computers work, and this stood me in good stead when I moved over to development in the '90s.

    What's the best thing about working as a developer at Red Gate? Where should I start? One of the great things as a developer at Red Gate is the useful feedback and close contact we have with the people who use our products, either directly at trade shows and other events or through information coming in via the product managers. The company's whole ethos is built around assisting the user, and this is in big contrast to my previous development roles. We aim to produce tools that people really want to use, that they enjoy using, and, as a developer, this is a great thing to aim for and a great feeling when we get it right. At Red Gate we also try to cut out the things that distract and stop us doing our jobs. As a developer, this means that I can focus on the code and the product I'm working on, knowing that others are doing a first-class job of making sure that the builds are running smoothly and that I'm getting great feedback from the testers. We keep our process light and effective, as we want to produce great software more than we want to produce great audit trails.

    Tell us a bit about the products you are currently working on. You mean HyperBac? First let me explain a bit about what HyperBac is. At heart it's a compression and encryption technology, but with a few added features that open up a wealth of really exciting possibilities. Right now we have the HyperBac technology in just three products: SQL HyperBac, SQL Virtual Restore and SQL Storage Compress, but we're only starting to develop what it can do. My personal favourite is SQL Virtual Restore; for example, I love the way you can use it to run independent test databases that are all backed by a single compressed backup. I don't think the market yet realises the kinds of things you can do once you are using these products. On the other hand, the benefits of SQL Storage Compress are straightforward: run your databases, but use only 20% of the disk space. Databases are getting larger and larger, and, as they do, so does your ROI.

    What's a typical day for you? My days are pretty varied. We have our daily team stand-up meeting, and then sometimes I will work alone on a current issue, or I'll be pair programming with one of my colleagues. From time to time we give up half a day to future planning with the team, when we look at the long- and short-term aims for the product and work out the development priorities. I also get to go to conferences and events, which is unusual for a development role and gives me the chance to meet and talk to our customers directly.

    Have you noticed anything different about developing tools for DBAs rather than other kinds of IT user? It seems to me that DBAs are quite independent-minded; they know exactly what the problem they are facing is, and often have a solution in mind before they begin to look at what's on the market. This means that they're likely to cherry-pick tools from a range of vendors, picking the ones that are the best fit for them and that disrupt their environments the least. When I've met with DBAs, I've often been very impressed by their ability to summarise their setup, the issues, the obstacles they face when implementing a tool, and their plans for their environment. It's easier to develop products for this audience, as they give such a detailed overview of their needs, and I feel I understand their problems.

    Read the article

  • Datetime Format Setting In SSRS

    - by Amit
    We have a requirement where date-time values are passed to a report parameter that is of the "String" data type (not "DateTime"). The report parameter is a queried one, i.e. it has a list of values that the passed value should fall within. The strange part is that the call succeeds only if the date-time value passed to this parameter is in the format mm/dd/yyyy hh:mm:ss AM/PM; otherwise an error is displayed (and if the month/day/hour/minute/second has a single-digit value, then we need to pass a single-digit value to the parameter as well). Assuming that the report server picks this format up from the regional settings in Control Panel, we tried modifying the date and time formats to yyyy-MM-dd and HH:mm:ss, but the outcome was the same. On researching further, I found some suggestions to change the Language property in the RDL (I was not able to figure out how), but that is not the solution I am looking for. I also found another topic here, but it didn't provide the solution we are looking for either. I need to understand whether this format is controlled by the report server and whether it is configurable. It would be great if someone could provide some guidance. Thanks.
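
    One way to sidestep culture-sensitive string matching altogether, offered as a sketch rather than a confirmed fix for this report (table and column names are hypothetical): generate the parameter's available values with an explicit CONVERT style, and build the passed value the same way, so the two sides always compare as identical strings.

        -- Style 120 is the fixed format yyyy-mm-dd hh:mi:ss, independent of regional settings.
        -- Use the same CONVERT for the available-values query and for the value the caller passes.
        SELECT CONVERT(varchar(19), EventDate, 120) AS ParamValue,
               EventName AS ParamLabel
        FROM dbo.Events;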

    Read the article

  • Set a datetime for next or previous Sunday at a specific time

    - by Marc
    I have an app where there is always a current contest (defined by the start_date and end_date datetime columns). I have the following code in application_controller.rb as a before_filter:

        def load_contest
          @contest_last = Contest.last
          @contest_last.present? ? @contest_leftover = (@contest_last.end_date.utc - Time.now.utc).to_i : @contest_leftover = 0
          if @contest_last.nil?
            Contest.create(:start_date => Time.now.utc, :end_date => Time.now.utc + 10.minutes)
          elsif @contest_leftover < 0
            @winner = Organization.order('votes_count DESC').first
            @contest_last.update_attributes!(:organization_id => @winner.id, :winner_votes => @winner.votes_count) if @winner.present?
            Organization.update_all(:votes_count => 0)
            Contest.create(:start_date => @contest_last.end_date.utc, :end_date => Time.now.utc + 10.minutes)
          end
        end

    My questions: 1) I would like to change the :end_date to something that signifies the next Sunday at a certain time (e.g. next Sunday at 8pm). Similarly, I could then set the :start_date to the previous Sunday at a certain time. I saw that there is a sunday() method (http://api.rubyonrails.org/classes/Time.html#method-i-sunday), but I am not sure how to specify a certain time on that day. 2) For this situation of always wanting the current contest, is there a better way of loading it in the app? Would caching it be better, and then reloading it when a new one is created? I am not sure how this would be done, but it seems it would be more efficient. Thanks!

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly, and finally to outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved.

    What is an OLAP data mart?

    In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube, which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically created and loaded with the subset of data, and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools.

    Why build OLAP data marts?

    Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs, and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever-popular Microsoft Excel.

    Extension Attributes: The Challenge

    One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge, as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem with the second approach is that name/value pairs are useless to an OLAP cube, so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of a future blog entry.

    What is involved in building an OLAP data mart?

    There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order.
    The only permanent database is the metadata database (shown in orange in the original diagram), which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client-specific requirements and attributes, etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP data mart has been populated. The OLAP data mart itself comprises the two blue components: the data mart, which is a relational database, and the OLAP cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together, depending on their reporting requirements. So, in broad terms, the steps required to fulfil a client order are as follows:

    Step 1: Prepare metadata
    - Create a set of database names unique to the client's order
    - Modify all package connection strings used by SSIS to point to the new databases and file locations

    Step 2: Create relational databases
    - Create the staging and data mart relational databases using dynamic SQL, and set the database recovery mode to SIMPLE, as we do not need the overhead of logging anything
    - Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases

    Step 3: Load staging database
    - Use SSIS to load all data files into the staging database in a parallel operation
    - Load extension files containing name/value pairs; these will provide client-specific attributes in the OLAP cube

    Step 4: Load data mart relational database
    - Load the data from staging into the data mart relational database, again in parallel where possible
    - Allocate surrogate keys and use SSIS to perform surrogate key lookups during the load of fact tables

    Step 5: Load extension tables & attributes
    - Pivot the extension attributes from their native name/value pairs into proper relational tables (a sketch of this pivot follows below)
    - Add the extension attributes to the views used by the OLAP cube

    Step 6: Deploy & process OLAP cube
    - Deploy the OLAP database directly to the server using a C# script task in SSIS
    - Modify the connection string used by the OLAP cube to point to the data mart relational database
    - Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions
    - Remove any standard attributes that are not required
    - Process the OLAP cube

    Step 7: Backup and drop databases
    - Drop the staging database, as it is no longer required
    - Back up the data mart relational and OLAP databases and ship these to the client's infrastructure
    - Drop the data mart relational and OLAP databases from the build server
    - Mark the order complete, and start processing the next order, ad infinitum

    So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
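
    To make Step 5 concrete, here is a minimal T-SQL sketch of pivoting name/value pairs into a relational table; all table and column names (ExtensionStage, ExtensionAttributes, EntityKey, AttrName, AttrValue) are hypothetical, and the column list is built dynamically so the statement survives each client's different attribute set:

        DECLARE @cols nvarchar(max), @sql nvarchar(max);

        -- Build a bracketed column list from whatever attribute names this client supplied.
        SELECT @cols = STUFF((SELECT DISTINCT ',' + QUOTENAME(AttrName)
                              FROM ExtensionStage
                              FOR XML PATH('')), 1, 1, '');

        -- Pivot the name/value pairs into one row per entity and execute.
        SET @sql = N'SELECT EntityKey, ' + @cols + N'
                     INTO ExtensionAttributes
                     FROM (SELECT EntityKey, AttrName, AttrValue FROM ExtensionStage) src
                     PIVOT (MAX(AttrValue) FOR AttrName IN (' + @cols + N')) p;';

        EXEC sp_executesql @sql;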

    Read the article

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover:

    SQLU part 1 - What and why of database testing
    SQLU part 2 - What and why of database refactoring
    SQLU part 3 - Tools of the trade

    This is the second part of the series, and in it we'll take a look at what database refactoring is and why to do it.

    Why refactor a database

    To know why we refactor, we first have to know what refactoring actually is. Code refactoring is a process where we change a module's internals in a way that does not change that module's input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven't broken the input/output behavior by refactoring. If you haven't, go back and read my post on the matter. Then start writing them. The next thing you need is a code module. Those are views, UDFs and stored procedures. With direct table access we can kiss fast and sweet refactoring goodbye. One more point in favor of a database abstraction layer. And no, ORMs don't fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don't be one of them, and resist the lure of the dark side. And it's a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus. Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black-box scenario to the outside world. If we upgrade the car engine, it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white-box tests will break when we refactor.

    What to refactor in a database

    Refactoring databases doesn't happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.

    Adding or removing database schema objects

    Adding, removing or changing table columns in any way, adding constraints, keys, etc. All of these can be counted as internal changes not visible to the data consumer. But each of them carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint exposes duplicated data that shouldn't exist. Foreign keys break a TRUNCATE TABLE command executed from an application that runs once a month. All these scenarios are very real and can happen. With a proper database abstraction layer fully covered by black-box tests, we can make sure something like that does not happen (hopefully at all).

    Changing physical structures

    Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database. But the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders-of-magnitude performance improvement. Won't that make users happy? But what if that index causes our write operations to crawl to a stop? Again, we have to test this. There are a lot of things to think about and have tests for. Without tests we can't do successful refactoring!

    Fixing bad code

    We all have some bad code in our systems. We usually refer to such code as a code smell, as it violates good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc. Each of those is a huge code smell and can result in major code changes. Take SELECT *, for example. If we remove a column from a table, the client using that SELECT * statement won't have a clue about it until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Thomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail; go read about it, it's informative (a sketch of the refresh command follows at the end of this post). Refactoring this includes replacing the * with column names, and most likely changing the application using the database.

    Breaking apart huge stored procedures

    Have you ever seen a stored procedure that was 2000 lines long? I have. It's not pretty. It hurts the eyes and sucks away the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I'm willing to bet that 100% of the time they don't have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database: that's the application's part. Refactoring such behemoths requires writing lots of edge-case tests for the stored procedure's input/output behavior, and then starting to refactor it. First we split the logic inside into smaller parts, like new stored procedures and UDFs. Those then get called from the master stored procedure. Once we've successfully modularized the database code, it's best to transfer that logic into the applications consuming it. This leaves the stored procedure with only common data manipulation logic. Of course this isn't always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we've actually improved the codebase in some way.

    Refactoring is not a popular chore amongst developers or managers. The former don't like fixing old code, the latter can't see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I've given you some ideas on how to get started. In the last post of the series we'll take a look at the tools to use, and an example of testing and refactoring.
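
    As a concrete footnote to the SELECT * smell: SQL Server captures a view's column list when the view is created, so after a base table changes, a SELECT * view keeps serving the stale list until its metadata is refreshed. A minimal sketch (the view name is hypothetical):

        -- Refresh the metadata of a non-schema-bound view after altering its base tables,
        -- so a view defined with SELECT * picks up the current column list.
        EXEC sp_refreshview N'dbo.CustomerOrdersView';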

    Read the article

  • Date/Time formatting in .NET (Devexpress Gantt charts to show time rather than date)

    - by calico-cat
    I have some data about a day's events that I'm trying to visualise as a Gantt chart using DevExpress XtraCharts. DevExpress's example shows the chart being populated by date; however, I'd like it to be populated by time, to compare the events throughout one day. My X-axis is displaying correctly, done like so:

        ganttDiagram.AxisY.DateTimeMeasureUnit = DateTimeMeasurementUnit.Minute

    I have data with the correct times; however, the label on each series is showing the date (and the dates are all the same, because it's all the same day!). Thus, instead of being a bar, each series is just a single point, with the label showing 31/03/2010 - 31/03/2010. Each series is created with the code below:

        s.Points.Add(New SeriesPoint("Machine", New DateTime() {ev.StartTime, ev.EndTime}))

    Read the article

  • Python Ephem calculation

    - by dassouki
    The output should process the first date as "day" and the second as "night". I've been playing with this for a few hours now and can't figure out what I'm doing wrong. Any ideas? Output:

        $ python time_of_day.py
        * should be day:
        event date:   2010/4/6 16:00:59
        prev rising:  2010/4/6 09:24:24
        prev setting: 2010/4/5 23:33:03
        next rise:    2010/4/7 09:22:27
        next set:     2010/4/6 23:34:27
        day
        * should be night:
        event date:   2010/4/6 00:01:00
        prev rising:  2010/4/5 09:26:22
        prev setting: 2010/4/5 23:33:03
        next rise:    2010/4/6 09:24:24
        next set:     2010/4/6 23:34:27
        day

    time_of_day.py:

        import datetime
        import ephem  # install from http://pypi.python.org/pypi/pyephem/

        # event_time is just a datetime corresponding to an sql timestamp
        def type_of_light(latitude, longitude, event_time, utc_time, horizon):
            o = ephem.Observer()
            o.lat, o.long, o.date, o.horizon = latitude, longitude, event_time, horizon
            print "event date   ", o.date
            print "prev rising: ", o.previous_rising(ephem.Sun())
            print "prev setting:", o.previous_setting(ephem.Sun())
            print "next rise:   ", o.next_rising(ephem.Sun())
            print "next set:    ", o.next_setting(ephem.Sun())
            if o.previous_rising(ephem.Sun()) <= o.date <= o.next_setting(ephem.Sun()):
                return "day"
            elif o.previous_setting(ephem.Sun()) <= o.date <= o.next_rising(ephem.Sun()):
                return "night"
            else:
                return "error"

        print "should be day:  ", type_of_light('45.959', '-66.6405', '2010/4/6 16:01', '-4', '-6')
        print "should be night:", type_of_light('45.959', '-66.6405', '2010/4/6 00:01', '-4', '-6')

    Read the article

  • Launching Python within Python, and a timezone issue

    - by Gabi Purcaru
    I had to make a launcher script for my Django app, and it seems it somehow switches the timezone to GMT (the default being +2), and every datetime is two hours behind when using the script. What could be causing that? Here is the launcher script that I use:

        #!/usr/bin/env python
        import os
        import subprocess
        import shlex
        import time

        cwd = os.getcwd()
        p1 = subprocess.Popen(shlex.split("python manage.py runserver"),
                              cwd=os.path.join(cwd, "drugsworld"))
        p2 = subprocess.Popen(shlex.split("python coffee_auto_compiler.py"),
                              cwd=os.path.join(cwd))
        try:
            while True:
                time.sleep(2)
        except KeyboardInterrupt:
            p1.terminate()
            p2.terminate()

    If I manually run python manage.py runserver, the timezone is +2. If, however, I use this script, the timezone is set to GMT.

    Read the article

  • I need to calculate the date/time difference between rows in one datetime column

    - by Stringz
    Details: I have a notes table with the following columns:

        ID       - INT(3)
        Date     - DATETIME
        Note     - VARCHAR(100)
        Title    - VARCHAR(100)
        UserName - VARCHAR(100)

    This table holds NOTES, along with their Titles, entered by UserName at the specified date/time. I need to calculate the date-time difference between TWO ROWS in the SAME COLUMN. For example, the table contains these rows:

        64, '2010-03-26 18:16:13', 'Action History', 'sending to Level 2.', 'Salman Khwaja'
        65, '2010-03-26 18:19:48', 'Assigned By', 'This is note one for the assignment of RF.', 'Salman Khwaja'
        66, '2010-03-27 19:19:48', 'Assigned By', 'This is note one for the assignment of CRF.', 'Salman Khwaja'

    Now I need the following result set in query reports using MySQL:

        TASK           - TIME TAKEN
        Action History - 2010-03-26 18:16:13
        Assigned By    - 00:03:35
        Assigned By    - 25:00:00

    An even smarter approach would be:

        TASK           - TIME TAKEN
        Action History - 2010-03-26 18:16:13
        Assigned By    - 3 minutes 35 seconds
        Assigned By    - 1 day, 1 hour

    I would appreciate it if someone could give me the plain query, along with the PHP code to embed it.
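
    A minimal MySQL sketch of the first result set, assuming IDs increase in entry order (the correlated subquery finds the immediately preceding note; add WHERE conditions to scope it per user or ticket as needed):

        -- TIMEDIFF returns the gap as HH:MM:SS (00:03:35 and 25:00:00 for the sample rows);
        -- the first row has no predecessor, so its difference is NULL.
        SELECT cur.Title AS TASK,
               TIMEDIFF(cur.`Date`, prev.`Date`) AS TIME_TAKEN
        FROM notes cur
        LEFT JOIN notes prev
               ON prev.ID = (SELECT MAX(n.ID) FROM notes n WHERE n.ID < cur.ID)
        ORDER BY cur.ID;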

    Read the article

  • C# check age of DB item

    - by Jacob Huggart
    Hello all, I am using C# to send an email with an encrypted link. The encrypted portion of the link contains a timestamp that needs to be used to verify whether the link is more than 48 hours old. How do I compare the old time to the current time and find out if the old time was more than 48 hours ago? This is what I have now:

        var hours = DateTime.Now.Ticks - data.DTM.Value.Ticks;  // data.DTM = stored time stamp

        if (hours.CompareTo(48) > 1)  // if link is more than 48 hours old, deny access
            return View("LinkExpired");

    Comparing ticks seems like a very backwards way to do it, and I know that the hours.CompareTo would have to be adjusted if I stick with comparing ticks. How can I just get a value for the number of hours that have passed?

    Read the article
