Search Results

Search found 3403 results on 137 pages for 'monthly statements'.


  • SQL Server 2008 Compression

    - by Peter Larsson
    Hi! Today I am going to talk about compression in SQL Server 2008. The data warehouse I currently design and develop holds historical data back to 1973. The data warehouse itself will get another blog post later due to its complexity. The server has 60GB of memory (of which 48GB is dedicated to the SQL Server service), so not all data fits in memory, and the SAN is not the fastest one around. So I decided to give compression a go, since we use Enterprise Edition anyway. This is the code I use to generate the statements that compress all tables with PAGE compression:

        DECLARE @SQL VARCHAR(MAX)

        DECLARE curTables CURSOR FOR
            SELECT  'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                    + '.' + QUOTENAME(OBJECT_NAME(object_id))
                    + ' REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)'
            FROM    sys.tables

        OPEN    curTables

        FETCH   NEXT
        FROM    curTables
        INTO    @SQL

        WHILE @@FETCH_STATUS = 0
            BEGIN
                IF @SQL IS NOT NULL
                    RAISERROR(@SQL, 10, 1) WITH NOWAIT

                FETCH   NEXT
                FROM    curTables
                INTO    @SQL
            END

        CLOSE       curTables
        DEALLOCATE  curTables

    Copy and paste the result to a new code window and execute the statements. One thing I noticed when doing this is that the database grows by the same amount as the table being rebuilt. If the database cannot grow by this amount, the operation fails. For me, I first ended up with an orphaned connection. Not good. And this is the code I use to create the index compression statements:

        DECLARE @SQL VARCHAR(MAX)

        DECLARE curIndexes CURSOR FOR
            SELECT      'ALTER INDEX ' + QUOTENAME(name)
                        + ' ON '
                        + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                        + '.'
                        + QUOTENAME(OBJECT_NAME(object_id))
                        + ' REBUILD PARTITION = ALL WITH (FILLFACTOR = 100, DATA_COMPRESSION = PAGE)'
            FROM        sys.indexes
            WHERE       OBJECTPROPERTY(object_id, 'IsMSShipped') = 0
                        AND OBJECTPROPERTY(object_id, 'IsTable') = 1
            ORDER BY    CASE type_desc
                            WHEN 'CLUSTERED' THEN 1
                            ELSE 2
                        END

        OPEN    curIndexes

        FETCH   NEXT
        FROM    curIndexes
        INTO    @SQL

        WHILE @@FETCH_STATUS = 0
            BEGIN
                IF @SQL IS NOT NULL
                    RAISERROR(@SQL, 10, 1) WITH NOWAIT

                FETCH   NEXT
                FROM    curIndexes
                INTO    @SQL
            END

        CLOSE       curIndexes
        DEALLOCATE  curIndexes

    When this was done, I noticed that the 90GB database was now only 17GB. And most important, the complete database could now reside in memory! After this I took care of the administrative tasks: backups. Here I copied the code from Management Studio because I didn't want to spend too much time on this. The code looks like this (notice the COMPRESSION option):

        BACKUP DATABASE [Yoda]
        TO              DISK = N'D:\Fileshare\Backup\Yoda.bak'
        WITH            NOFORMAT,
                        INIT,
                        NAME = N'Yoda - Full Database Backup',
                        SKIP,
                        NOREWIND,
                        NOUNLOAD,
                        COMPRESSION,
                        STATS = 10,
                        CHECKSUM
        GO

        DECLARE @BackupSetID INT

        SELECT  @BackupSetID = Position
        FROM    msdb..backupset
        WHERE   database_name = N'Yoda'
                AND backup_set_id = (SELECT MAX(backup_set_id) FROM msdb..backupset WHERE database_name = N'Yoda')

        IF @BackupSetID IS NULL
            RAISERROR(N'Verify failed. Backup information for database ''Yoda'' not found.', 16, 1)

        RESTORE VERIFYONLY
        FROM    DISK = N'D:\Fileshare\Backup\Yoda.bak'
        WITH    FILE = @BackupSetID,
                NOUNLOAD,
                NOREWIND
        GO

    After running the backup, the file size was reduced even more thanks to the zip-like compression algorithm used by SQL Server 2008 backup compression. The file size? Only 9 GB. //Peso
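    If you want an estimate of what PAGE compression will save before rebuilding anything, SQL Server 2008 also ships with sp_estimate_data_compression_savings. A minimal sketch (dbo.FactSales is a placeholder name; substitute your own schema and table):

        -- Estimate PAGE compression savings for one table before rebuilding it.
        -- dbo.FactSales is a hypothetical table name used only for this example.
        EXEC sp_estimate_data_compression_savings
            @schema_name      = N'dbo',
            @object_name      = N'FactSales',
            @index_id         = NULL,    -- NULL = all indexes on the table
            @partition_number = NULL,    -- NULL = all partitions
            @data_compression = N'PAGE';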

    Read the article

  • Stairway to MDX - STEP 1: Getting Started with MDX

    To learn MDX, there is really no alternative to installing the system and trying out the statements, and experimenting. William Pearson, the well-known expert on MDX, kicks off a stairway series on this important topic by getting you running from a standing start.

    Read the article

  • Grant Showplan : MS SQL Server 2005

    SQL version applied to: 2005. The SHOWPLAN permission only governs who can run the various SET SHOWPLAN statements; granting it has no impact on performance. And with some of the SHOWPLAN statements in effect, the statement(s) are not executed and go through the compilation phase only. By Sachin Diwaker.

    Read the article

  • Rails Book Suggestions [closed]

    - by Solomon081
    Possible Duplicate: Is there a canonical book on Ruby on Rails? I'm looking to learn Ruby on Rails. I already have a small background in Ruby and don't really need a book that covers both, as I ordered the Pickaxe book a couple of days ago. I recently read some of Beginning Ruby on Rails by Steven Holzner, but abandoned it after seeing multiple statements on SO about how terrible the code in it was, and also the fact that it used Rails 1... Just wondering, what is the best book for an ABSOLUTE BEGINNER in Rails?

    Read the article

  • Is there a procedural graphical programming environment?

    - by Marc
    I am searching for a graphical programming environment for procedural programming in which you can integrate some or all of the common sources of calculation procedures, such as Excel sheets, MATLAB scripts, or even .NET assemblies. I am thinking of something like a flowchart configurator in which you define the procedures via drag & drop using flow statements (if-else, loops, etc.). Do you know of any systems heading in this direction?

    Read the article

  • Running Multiple Queries in Oracle SQL Developer

    - by thatjeffsmith
    There are two methods for running queries in SQL Developer: Run Statement (Shift+Enter, F9, or the toolbar button) and Run Script (no grids, just script output, SQL*Plus like, thank you very much!). What's the difference? There are some obvious differences between the two features, the most obvious being the format of the output delivered. But there are some other, more subtle differences here, primarily around fetching. What is a fetch? After you send your query to Oracle, it has to do three things: parse, execute, and fetch. Technically it has to do at least two, and sometimes only one. But to get the data back to the user, the fetch must occur. Whether you have a 10 row query or a 1,000,000 row query, this can mean one or many fetches in groups of records. Ok, before I went on the fetch tangent, I said there were two ways to run statements in SQL Developer. Run Statement brings your query results to a grid with a single fetch. The user sees 50, 100, 500, etc. rows come back, but SQL Developer and the database know that there are more rows waiting to be retrieved. The process on the server that was used to execute the query is still hanging around too. To alleviate this, increase your fetch size to 500. Every query run will then come back with the first 500 rows, and rows will continue to be fetched in 500 row increments. You'll then see most of your ad hoc queries complete with a single fetch. Scroll down, or hit Ctrl+End, to force a full fetch and get all your rows back. Run Script runs the contents of the worksheet (or what's highlighted) as a 'script.' What does that mean exactly? Think of this as being equivalent to running @my_script.sql in SQL*Plus: each statement is executed, and ALL rows are fetched, so once it's finished executing there are no open cursors left around. The more obvious difference here is that the output comes back formatted as plain old text, and you can run one or more statements plus SQL*Plus commands like SET and SPOOL. The trick: Run Statement works with multiple statements! It says 'run statement,' but if you select more than one with your mouse and hit the button, it will run each one and throw the results into one grid per statement. If you hover over a Query Result panel tab, SQL Developer will tell you the query used to populate that grid. This will work regardless of what you have this preference set to: Database - Worksheet - Show query results in new tabs. Mind the fetch though! Close those cursors by bringing back all the records or closing the grids when you're done with them.

    Read the article

  • What are the canonical problem sets or problem domains for the different types of languages?

    - by SnOrfus
    I'm just wondering what some of the canonical problem sets are for certain types of languages? i.e. fill in the blanks: __ is the perfect language to use for solving __. The question arose from reading or hearing people say things like "such and such module in our codebase would be much easier to implement using a functional language". Feel free to include examples that would seem obvious to you.

    Read the article

  • Databinder.Eval using Lambda Expressions

    Using lambda expression to help with compile time checking of Eval statements...

    Read the article

  • SQL SERVER – Concurrency Problems and their Relationship with Isolation Level

    - by pinaldave
    Concurrency is, simply put, the capability of the system to support two or more transactions working with the same data at the same time. It usually becomes an issue when data is being modified; during pure retrieval of data it is not. Most concurrency problems can be avoided with SQL locks. There are four types of concurrency problems visible in normal programming.

    1) Lost Update – This problem occurs when two transactions are involved and both are unaware of each other. The transaction which occurs later overwrites the update made by the earlier transaction.

    2) Dirty Reads – This problem occurs when a transaction selects data that has not been committed by another transaction, so it may read data which no longer exists once both transactions are over. Example: Transaction 1 changes a row. Transaction 2 selects the changed row. Transaction 1 rolls back the change. Transaction 2 has selected a row which no longer exists.

    3) Nonrepeatable Reads – This problem occurs when two SELECT statements of the same data return different values because another transaction has updated the data between the two SELECTs. Example: Transaction 1 selects a row, which is then updated by Transaction 2. When Transaction 1 later selects the row again, it gets a different value.

    4) Phantom Reads – This problem occurs when an UPDATE/DELETE is happening on one set of data while an INSERT/UPDATE is happening on the same set of data, leaving inconsistent data for the earlier transaction when both transactions are over. Example: Transaction 1 is deleting 10 rows which match the delete criteria; during the same time Transaction 2 inserts a row that also matches those criteria. When Transaction 1 is done deleting its rows, there is still a row left that should have been deleted.

    When two or more transactions are updating the data, concurrency is the biggest issue. I commonly see people toying around with isolation levels or locking hints (e.g. NOLOCK), which can very well compromise your data integrity and lead to much larger issues in the future. Here is the quick mapping of the isolation levels to the concurrency problems:

        Isolation Level     Dirty Reads   Lost Update   Nonrepeatable Reads   Phantom Reads
        Read Uncommitted    Yes           Yes           Yes                   Yes
        Read Committed      No            Yes           Yes                   Yes
        Repeatable Read     No            No            No                    Yes
        Snapshot            No            No            No                    No
        Serializable        No            No            No                    No

    I hope this short article gives some quick understanding of concurrency issues and their relation to isolation levels. Reference: Pinal Dave (http://blog.SQLAuthority.com)
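    As a hedged illustration of the table above (a minimal sketch; the dbo.Account table, Balance column, and values are invented for the example), this is roughly how a dirty read shows up and how raising the isolation level stops it:

        -- Session 1: modify a row but do not commit yet.
        BEGIN TRANSACTION;
        UPDATE dbo.Account SET Balance = Balance - 100 WHERE AccountID = 1;
        -- ...leave the transaction open for now...

        -- Session 2: under READ UNCOMMITTED the uncommitted change is visible (a dirty read).
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        SELECT Balance FROM dbo.Account WHERE AccountID = 1;

        -- Session 2 again: under READ COMMITTED the same SELECT waits for (or returns)
        -- committed data instead of the dirty value.
        SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
        SELECT Balance FROM dbo.Account WHERE AccountID = 1;

        -- Session 1: roll back; whatever Session 2 read under READ UNCOMMITTED
        -- described data that never officially existed.
        ROLLBACK TRANSACTION;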

    Read the article

  • Book Review: SSIS Design Patterns

    - by andyleonard
    Samuel Vanga ( Blog | @SamuelVanga ) has posted a review of our new book SSIS Design Patterns at his blog . Several of Sam’s statements struck me, but none more than this: Within a few hours of reading SQL Server 2012 Integration Services Design Patterns , it stood out that none of the authors were trying to impress by showing what they all know in SSIS. Instead, they focused on describing solutions and patterns in a great detail (exactly why I paid for). Sam mentions he could not locate the source...(read more)

    Read the article

  • What does an if(event) statement mean in JavaScript?

    - by j flo
    I'm rather new to JavaScript and programming in general, so I am pretty much only used to seeing if statements that have some kind of comparison operator, like if (x < 10) or if (myBool). I have seen an if statement checking against an event, but I don't understand what the event is being checked for, or why it is being checked like that. What's the semantic meaning behind that check or comparison? Here is the code in question:

        if (event) {
            event.preventDefault();
        }

    Read the article

  • MySQL Input Parameters Add Flexibility to Crosstab Stored Procedures

    When generating a result set where the query contains an unknown number of column and/or row values we can use a combination of Prepared Statements, which allows us to tailor the output based on the number of data values. We can also add input parameters to a procedure to assign the field names, aliases, and even the aggregate function!
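    As a rough sketch of the idea in MySQL syntax (the orders table and its columns are invented for illustration), the pivot columns can be assembled into a string and run through a prepared statement:

        -- Hypothetical example: pivot order amounts into one column per status value.
        -- Assumes a table orders(customer_id, status, amount).
        SELECT GROUP_CONCAT(DISTINCT
                 CONCAT('SUM(CASE WHEN status = ''', status,
                        ''' THEN amount ELSE 0 END) AS `', status, '`'))
        INTO   @cols
        FROM   orders;

        SET @sql = CONCAT('SELECT customer_id, ', @cols,
                          ' FROM orders GROUP BY customer_id');

        PREPARE stmt FROM @sql;      -- the number of output columns is decided at run time
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;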

    Read the article

  • Are there deprecated practices for multithread and multiprocessor programming that I should no longer use?

    - by DeveloperDon
    In the early days of FORTRAN and BASIC, essentially all programs were written with GOTO statements. The result was spaghetti code, and the solution was structured programming. Similarly, pointers can have difficult-to-control characteristics in our programs. C++ started with plenty of pointers, but the use of references is recommended. Libraries like the STL can reduce some of our dependency. There are also idioms for creating smart pointers that have better characteristics, and some versions of C++ permit references and managed code. Programming practices like inheritance and polymorphism use a lot of pointers behind the scenes (just as for, while, and do structured programming generates code filled with branch instructions). Languages like Java eliminate pointers and use garbage collection to manage dynamically allocated data instead of depending on programmers to match all their new and delete statements. In my reading, I have seen examples of multi-process and multi-thread programming that don't seem to use semaphores. Do they use the same thing with different names, or do they have new ways of structuring protection of resources from concurrent use? For example, a specific example of a system for multithread programming with multicore processors is OpenMP. It represents a critical region as follows, without the use of semaphores, which seem not to be included in the environment:

        th_id = omp_get_thread_num();
        #pragma omp critical
        {
            cout << "Hello World from thread " << th_id << '\n';
        }

    This example is an excerpt from http://en.wikipedia.org/wiki/OpenMP. Alternatively, similar protection of threads from each other using semaphores with functions wait() and signal() might look like this:

        wait(sem);
        th_id = get_thread_num();
        cout << "Hello World from thread " << th_id << '\n';
        signal(sem);

    In this example things are pretty simple, and just a simple review is enough to show that the wait() and signal() calls are matched and, even with a lot of concurrency, thread safety is provided. But other algorithms are more complicated, using multiple semaphores (both binary and counting) spread across multiple functions, with complex conditions that can be called by many threads. The consequences of creating deadlock or failing to make things thread safe can be hard to manage. Do systems like OpenMP eliminate the problems with semaphores? Do they move the problem somewhere else? How do I transform my favorite semaphore-using algorithm so it doesn't use semaphores anymore?

    Read the article

  • Idera Compliance Manager 3.5 and SQL Server 2012 Release Candidate

    Unlike most conventional database auditing solutions, SQL Compliance Manager places a blanket over data access with real-time auditing. Clients can pinpoint any malicious intent with sensitive column auditing. This feature gives specifics as to who has accessed information located within an audited table's sensitive columns. With transaction status auditing, database administrators can detect suspicious activity by auditing the status of transactions that execute DML statements on an audited database with the help of rollbacks and save-points. In addition, SQL Compliance Manager lives up t...

    Read the article

  • Sub query pass through

    - by SQL and the like
    Occasionally in forums and on client sites I see conditional subqueries in statements. This is where the developer has decided that it is only necessary to process some data under a certain condition. By way of example, something like this:

        Create Procedure GetOrder
            @SalesOrderId integer,
            @CountDetails tinyint
        as
        Select SOH.salesorderid,
               case when @CountDetails = 1
                    then (Select count(*)
                          from  Sales.SalesOrderDetail SOD
                          where SOH.SalesOrderID = SOD.SalesOrderID)
               end
        from sales.SalesOrderHeader...(read more)
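    A quick, hedged way to see what the optimizer actually does with a conditional subquery like this is to run the procedure both ways and compare the I/O (43659 is just a sample SalesOrderID from the AdventureWorks database the example uses):

        SET STATISTICS IO ON;

        -- With @CountDetails = 0 the CASE condition is false; if the optimizer adds a
        -- pass-through predicate, SalesOrderDetail should show no reads here.
        EXEC GetOrder @SalesOrderId = 43659, @CountDetails = 0;

        -- With @CountDetails = 1 the subquery runs and SalesOrderDetail is read.
        EXEC GetOrder @SalesOrderId = 43659, @CountDetails = 1;

        SET STATISTICS IO OFF;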

    Read the article

  • How do you version/track changes to SQL tables?

    - by gabe.
    When working in a team of developers where everyone is making changes to local tables and development tables, how do you keep all the changes in sync? A central log file where everyone keeps their SQL changes? A wiki page to track ALTER TABLE statements? Individual .sql files that the devs can run to bring their local DBs to the latest version? I've used some of these solutions, and I'm trying to get a good, solid solution together that works, so I'd appreciate your ideas.

    Read the article

  • Why is my class worse than the hierarchy of classes in the book (beginner OOP)?

    - by aditya menon
    I am reading this book. The author is trying to model a lesson in a college. The goal is to output the lesson type (lecture or seminar) and the charges for the lesson, depending on whether it is an hourly or fixed-price lesson. So the output should be:

        lesson charge 20. Charge type: hourly rate. lesson type seminar.
        lesson charge 30. Charge type: fixed rate. lesson type lecture.

    When the input is as follows:

        $lessons[] = new Lesson('hourly rate', 4, 'seminar');
        $lessons[] = new Lesson('fixed rate', null, 'lecture');

    I wrote this:

        class Lesson
        {
            private $chargeType;
            private $duration;
            private $lessonType;

            public function __construct($chargeType, $duration, $lessonType)
            {
                $this->chargeType = $chargeType;
                $this->duration   = $duration;
                $this->lessonType = $lessonType;
            }

            public function getChargeType()
            {
                return $this->chargeType;
            }

            public function getLessonType()
            {
                return $this->lessonType;
            }

            public function cost()
            {
                if ($this->chargeType == 'fixed rate') {
                    return "30";
                } else {
                    return $this->duration * 5;
                }
            }
        }

        $lessons[] = new Lesson('hourly rate', 4, 'seminar');
        $lessons[] = new Lesson('fixed rate', null, 'lecture');

        foreach ($lessons as $lesson) {
            print "lesson charge {$lesson->cost()}.";
            print " Charge type: {$lesson->getChargeType()}.";
            print " lesson type {$lesson->getLessonType()}.";
            print "<br />";
        }

    But according to the book, I am wrong (I am pretty sure I am, too). The author gave a large hierarchy of classes as the solution instead. In a previous chapter, the author stated the following 'four signposts' as the times when I should consider changing my class structure:

        Code Duplication
        The Class Who Knew Too Much About Its Context
        The Jack of All Trades - classes that try to do many things
        Conditional Statements

    The only problem I can see is Conditional Statements, and that too in a vague manner, so why refactor this? What problems do you think might arise in the future that I have not foreseen?

    Read the article

  • WebCenter Customer Spotlight: Global Village Telecom Ltda

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary: Global Village Telecom Ltda. (GVT) is a leading Brazilian telecommunications company, developing solutions and providing services for corporate and end users. GVT is located in Curitiba, Brazil, employs 6,000 people, and has an annual revenue of around US$1 billion. GVT's business objectives were to improve corporate communications, accelerate internal information flow, provide continuous access to all business files, and enable the company's leadership to provide information to all departments in real time. GVT implemented Oracle WebCenter Content to centralize the company's content, and they built a portal to share and find content in real time. Oracle WebCenter Content enabled GVT to quickly and efficiently integrate communication among all company employees, ensuring that GVT maintains a competitive edge in the market. Human Resources reduced the time required for issuing internal statements to all staff from three weeks to one day.

    Company Overview: Global Village Telecom Ltda. (GVT) is a leading telecommunications company, developing solutions and providing services for corporate and end users. The company offers diverse innovative products and advanced solutions in conventional fixed telephone communications, data transmission, high-speed broadband internet services, and voice over IP (VoIP) services for all market segments. GVT is located in Curitiba, Brazil, employs 6,000 people, and has an annual revenue of around US$1 billion.

    Business Challenges: GVT's business objectives were to improve corporate communications, accelerate internal information flow, provide continuous access to all business files, and enable the company's leadership to provide information to all departments in real time.

    Solution Deployed: GVT worked with the Oracle partner IT7 to deploy Oracle WebCenter Content to securely centralize the company's content, such as growth indicators, spreadsheets, and corporate and descriptive project schedules. The solution enabled real-time information sharing through the development of Click GVT, a portal that currently receives 100,000 monthly impressions from employee searches.

    Business Results: GVT gained a competitive edge in the communications market by accelerating internal information flow, streamlining and standardizing content, and enabling real-time information sharing and discovery. Human Resources reduced the time required for issuing internal statements to all staff from three weeks to one day. "The competitive nature of telecommunication industry demands rapid information in the internal flow of the company. Oracle WebCenter Content enabled us to quickly and efficiently integrate communication among all company employees—ensuring that we maintain a competitive edge in the market." Marcel Mendes Filho, Systems Manager, Global Village Telecom Ltda.

    Additional Information: Global Village Telecom Ltda Customer Snapshot, Oracle WebCenter Content

    Read the article

  • EJB Named Criteria - Apply bind variable in Backingbean

    - by Deepak Siddappa
    EJB named criteria are predefined and reusable where-clause definitions that are dynamically applied to a ViewObject query. We often use them to filter the ViewObject's SQL query based on WHERE clause conditions. Take a scenario where we need to filter the query based on WHERE clause conditions: instead of playing with SQL statements, use an EJB named criteria, which is supported by default in ADF, and set the bind variable parameter at run time. You can download the sample workspace from here [runs with Oracle JDeveloper 11.1.2.0.0 (11g R2) + HR schema].

    Implementation steps: Create a Java EE web application with an entity based on the Employees table, then create a session bean and a data control for the session bean. Open the DataControls.dcx file and create the sparse XML as shown below. In the sparse XML, navigate to the Named Criteria tab and, in the Bind Variable section, create a bind variable named deptId. Now create a named criteria and map the query attributes to the bind variable. In the ViewController, create an index.jspx page; from the Data Controls palette drop employeesFindAll -> Named Criteria -> EmployeesCriteria -> Table as an ADF Read-Only Filtered Table and create the backing bean "IndexBean". Open the index.jspx page, remove the "filterModel" binding from the table, add an <af:inputText /> and a command button, and bind them to the backing bean. For the command button, create the actionListener "applyEmpCriteria" and add the code below to the file.

        public void applyEmpCriteria(ActionEvent actionEvent) {
            DCIteratorBinding dc = (DCIteratorBinding) evaluteEL("#{bindings.employeesFindAllIterator}");
            ViewObject vo = dc.getViewObject();
            vo.applyViewCriteria(vo.getViewCriteriaManager().getViewCriteria("EmployeesCriteria"));
            vo.ensureVariableManager().setVariableValue("deptId", this.getDeptId().getValue());
            vo.executeQuery();
        }

        /**
         * Programmatic evaluation of EL.
         *
         * @param el EL to evaluate
         * @return result of the evaluation
         */
        public Object evaluteEL(String el) {
            FacesContext fctx = FacesContext.getCurrentInstance();
            ELContext elContext = fctx.getELContext();
            Application app = fctx.getApplication();
            ExpressionFactory expFactory = app.getExpressionFactory();
            ValueExpression valExp = expFactory.createValueExpression(elContext, el, Object.class);
            return valExp.getValue(elContext);
        }

    Run the index.jspx page, enter a departmentId value of 90, and click the ApplyEmpCriteria button. The bind variable for the named criteria will now be applied at runtime in the backing bean, and the ViewObject query will be re-executed to filter on the WHERE clause condition.

    Read the article

  • efficient collision detection - tile based html5/javascript game

    - by Tom Burman
    I'm building a basic RPG game and am onto collisions/pickups etc. now. It's tile based and I'm using HTML5 and JavaScript. I use a 2D array to create my tile map. I'm currently using a switch statement for whatever key has been pressed to move the player. Inside the switch statement I have if statements to stop the player going off the edge of the map and viewport, and also so that if the player is about to land on a tile with tile ID 3, the player stops. Here is the statement:

        canvas.addEventListener('keydown', function(e) {
            console.log(e);
            var key = null;
            switch (e.which) {
                case 37: // Left
                    if (playerX > 0) {
                        playerX--;
                    }
                    if (board[playerX][playerY] == 3) {
                        playerX++;
                    }
                    break;
                case 38: // Up
                    if (playerY > 0)
                        playerY--;
                    if (board[playerX][playerY] == 3) {
                        playerY++;
                    }
                    break;
                case 39: // Right
                    if (playerX < worldWidth) {
                        playerX++;
                    }
                    if (board[playerX][playerY] == 3) {
                        playerX--;
                    }
                    break;
                case 40: // Down
                    if (playerY < worldHeight)
                        playerY++;
                    if (board[playerX][playerY] == 3) {
                        playerY--;
                    }
                    break;
            }
            viewX = playerX - Math.floor(0.5 * viewWidth);
            if (viewX < 0) viewX = 0;
            if (viewX + viewWidth > worldWidth) viewX = worldWidth - viewWidth;
            viewY = playerY - Math.floor(0.5 * viewHeight);
            if (viewY < 0) viewY = 0;
            if (viewY + viewHeight > worldHeight) viewY = worldHeight - viewHeight;
        }, false);

    My question is: is there a more efficient way of handling collisions than loads of if statements for each key? The reason I ask is because I plan on having many items that the player will need to be able to pick up or not walk through, like walls, cliffs, etc. Thanks for your time and help. Tom

    Read the article

  • Avoid SQL Injection with Parameters

    - by simonsabin
    The best way to avoid SQL injection is with parameters. With parameters you can't get SQL injection. You only get SQL injection where you are building a SQL statement by concatenating your parameter values into your SQL statement. Annoyingly, many TSQL statements don't take parameters, CREATE DATABASE for instance, or, really annoyingly, ALTER USER. In these situations you have to rely on using QUOTENAME or REPLACE to avoid SQL injection. (Kimberly Tripp talks about this in her recent blog post Little...(read more)
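    A minimal sketch of the QUOTENAME approach for one of those statements that cannot take parameters (the database name value here is just an example of hostile-looking input):

        -- @dbName would typically arrive from the application layer.
        DECLARE @dbName sysname = N'CustomerDb]; DROP DATABASE master; --';
        DECLARE @sql    nvarchar(max);

        -- QUOTENAME wraps the value in [ ] and doubles any ] inside it, so the
        -- injected text stays part of the identifier instead of starting a new statement.
        SET @sql = N'CREATE DATABASE ' + QUOTENAME(@dbName) + N';';

        PRINT @sql;       -- inspect the generated statement
        -- EXEC (@sql);   -- run it once you are happy with it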

    Read the article

  • SQL Injection prevention

    - by simonsabin
    Just asking people not to use a list of certain words is not protection from SQL injection: https://homebank.sactocu.org/UA2004/faq-mfa.htm#pp6. To protect yourself from SQL injection you have to do one simple thing: do not build your SQL statements by concatenating values passed by the user into a string and executing them. If your query has to be dynamic, then make sure any values passed by a user are passed as parameters, and use sp_executesql in TSQL or a SqlCommand object in ADO.Net...(read more)
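    And a minimal sketch of the sp_executesql side (dbo.Customers and the input value are illustrative only):

        -- Dynamic SQL, but the user-supplied value travels as a bound parameter,
        -- never as part of the concatenated statement text.
        DECLARE @sql       nvarchar(max) = N'SELECT * FROM dbo.Customers WHERE LastName = @name';
        DECLARE @userInput nvarchar(100) = N'O''Brien'';--';   -- whatever the user typed

        EXEC sp_executesql
             @sql,
             N'@name nvarchar(100)',    -- parameter definition list
             @name = @userInput;        -- the value is bound, not concatenated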

    Read the article
