Search Results

Search found 3935 results on 158 pages for 'extended procedures'.

  • First-Global-Teach for the Oracle Imaging and Process Management 11g: Administration: San Francisco

    - by stephen.schleifer
    First-Global-Teach for the Oracle Imaging and Process Management 11g: Administration: San Francisco | June 23-25. This course enables participants to use Oracle Imaging and Process Management (I/PM) 11g to access, track, and annotate documents. In addition, they get an overview of the product architecture of Oracle I/PM running on Oracle WebLogic Server. The course also delves into administration tasks such as setting security permissions, configuration tasks such as creating BPEL connections, and procedures for creating applications, searches, and input mappings. Customers and partners can register by looking up the course (#D61575GC10) on http://education.oracle.com

  • dependency analysis from C# code thru to database tables/columns

    - by fpdave
    I'm looking for a tool to do system-wide dependency analysis across C# code and SQL Server databases. It looks like the only tool available that does this is CAST (CAST Software), which is expensive and does a lot more besides that I don't really need. C#-code-through-to-database-column dependency would be hugely useful for many reasons, including:
    - determining the effects of database changes throughout the system
    - seeing hot spots in the database schema
    - finding dead stored procedures, tables, etc.
    - understanding the existing code base
    Does anyone know of any such tools?
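
    For illustration, the database-side half of this analysis can be approximated with SQL Server's own dependency catalog. A minimal sketch - it won't see dependencies hidden in dynamic SQL, and it covers only objects inside the database, not the C# side:

        -- List which objects (procedures, views, functions) reference which
        -- columns, as a rough cross-reference for impact analysis.
        SELECT
            OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
            OBJECT_NAME(d.referencing_id)        AS referencing_object,
            d.referenced_entity_name             AS referenced_table,
            d.referenced_minor_name              AS referenced_column
        FROM sys.sql_expression_dependencies AS d
        WHERE d.referenced_minor_name IS NOT NULL  -- column-level references only
        ORDER BY referencing_object, referenced_table;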

  • What would a start-to-finish development procedure look like?

    - by Tom Busby
    I have a problem that my developer friends share. We recently left university and find ourselves either working for a firm which already has good procedures (TDD, automated testing, proper agile development, etc.) or working for one which doesn't. I want to learn some of these vital skills and get a grip on what a complete start-to-finish development procedure would look like. What differences would there be between a smaller project and a long-term project with many team members?

  • T-SQL in SQL Azure

    - by kaleidoscope
    The following summarizes the Transact-SQL support provided by SQL Azure Database, as presented at PDC 2009.

    Supported Transact-SQL features:
    - Constants
    - Constraints
    - Cursors
    - Index management and rebuilding indexes
    - Local temporary tables
    - Reserved keywords
    - Stored procedures
    - Statistics management
    - Transactions
    - Triggers
    - Tables, joins, and table variables
    - Transact-SQL language elements such as create/drop databases, create/alter/drop tables, and create/alter/drop users and logins
    - User-defined functions
    - Views, including the sys.synonyms view

    Unsupported Transact-SQL features:
    - Common Language Runtime (CLR)
    - Database file placement
    - Database mirroring
    - Distributed queries
    - Distributed transactions
    - Filegroup management
    - Global temporary tables
    - Spatial data and indexes
    - SQL Server configuration options
    - SQL Server Service Broker
    - System tables
    - Trace flags

    Amit, S

  • SQL Server data platform upgrade - Why upgrade, and how best to reduce pre- & post-upgrade problems?

    - by ssqa.net
    In a SQL Server upgrade, whether of database(s), instance(s), or both, the process and procedures must follow best practices in order to reduce any problems that may occur even after the platform is upgraded. The success of any project relies upon simple methods of implementation and a process that reduces the complexity of testing, to ensure a successful outcome. This has also been a popular topic that .... read more from here ......(read more)

  • How to Name Linked Servers

    - by Bill Graziano
    I did another SQL Server migration over the weekend that dealt with linked servers. I've seen all kinds of odd naming schemes, and there are a few I like and a few I suggest you avoid.

    Don't name your linked server for its IP address. At some point whatever is on the other end of that IP address will move. You'll probably need to point your linked server to a new IP address but not change the name of the linked server. And then you've completely lost any context around this. Bonus points if a new SQL Server eventually ends up at the old IP address, further adding confusion when you're trying to troubleshoot.

    Don't name your linked server based on its instance name. This one is less obvious. It sounds nice to have a linked server named [VSRV1\SQLTRAN01]. You know what it is and it's easy to use. It's less nice when you've got 200 stored procedures that all reference this linked server but the database they reference has moved to a new instance. Now when you query this you're actually querying a different instance. (Please note: I'm not saying it's a good idea to have 200 stored procedures that all reference a linked server. I'm just saying it's not all that uncommon.)

    Consider naming your linked server something that you can easily search on. See my note above. You can also get around this by always enclosing the name in brackets. That is harder to enforce unless you use some odd characters in it.

    Consider naming your linked server based on its function. For example, I've had some luck having a linked server named [DW] that points to our data warehouse server. That server can change names or physically move, and all I need to do is update the linked server to point to the new destination. The descriptive name of the linked server is still accurate. No code needs to change and people still know what it is just by looking at it.

    Consider naming your linked server for the database. I'm still thinking through this one. It may mean you have multiple linked servers that point to the same instance. I've found that database names rarely change. It also makes it easier to move individual databases to new servers.

    Consider pointing your linked servers to DNS entries and not IP addresses. I've done this for reporting databases and had some success, especially for read-only snapshots that can get created on the main database or on the mirror.

    What issues have you had with linked server names? What has worked for you? Where are the holes in my approach?
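
    For illustration, a minimal T-SQL sketch of the function-based, DNS-backed naming the post recommends. The [DW] name comes from the post; the DNS alias, catalog, and table names are hypothetical:

        -- A linked server named for its function, pointing at a DNS alias
        -- rather than an IP address or an instance name.
        EXEC master.dbo.sp_addlinkedserver
            @server     = N'DW',
            @srvproduct = N'',
            @provider   = N'SQLNCLI',
            @datasrc    = N'dw.example.com';  -- hypothetical DNS entry

        -- Callers reference the stable functional name; when the warehouse
        -- moves, only the linked server definition changes.
        SELECT TOP (10) *
        FROM [DW].SalesDW.dbo.FactSales;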

  • Achieving Zero Downtime Deployment

    - by MattW
    I am trying to achieve zero-downtime deployments so I can deploy less during off hours and more during "slower" hours - or anytime, in theory.

    My current setup, somewhat simplified:
    - Web Server A (.NET App)
    - Web Server B (.NET App)
    - Database Server (SQL Server)

    My current deployment process:
    1. "Stop" the sites on both Web Server A and B
    2. Upgrade the database schema for the version of the app being deployed
    3. Update Web Server A
    4. Update Web Server B
    5. Bring everything back online

    Current problem: This leads to a small amount of downtime each month - about 30 mins. I do this during off hours, so it isn't a huge problem - but it is something I'd like to get away from. Also - there is no way to really go 'back'. I don't generally make rollback DB scripts - only upgrade scripts.

    Leveraging the load balancer: I'd love to be able to upgrade one web server at a time. Take Web Server A out of the load balancer, upgrade it, put it back online, then repeat for Web Server B. The problem is the database. Each version of my software will need to execute against a different version of the database - so I am sort of "stuck".

    Possible solution: A current solution I am considering is adopting the following rules:
    - Never delete a database table.
    - Never delete a database column.
    - Never rename a database column.
    - Never reorder a column.
    - Every stored procedure must be versioned. Meaning - 'spFindAllThings' will become 'spFindAllThings_2' when it is edited, then 'spFindAllThings_3' when edited again. The same rule applies to views.

    While this seems a bit extreme, I think it solves the problem. Each version of the application will be hitting the DB in a non-breaking way. The code expects certain results from the views/stored procedures - and this keeps that 'contract' valid. The problem is - it just seems sloppy. I know I can clean up old stored procedures after the app is deployed for a while, but it just feels dirty. Also - it depends on all of the developers following these rules, which will mostly happen, but I imagine someone will make a mistake.

    Finally - my question: Is this sloppy or hacky? Is anybody else doing it this way? How are other people solving this problem?
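
    For illustration, a minimal T-SQL sketch of the additive, versioned rules listed above. The spFindAllThings names come from the question; the table and column are hypothetical:

        -- Additive change only: add a nullable column rather than renaming,
        -- so the previous application version keeps working mid-rollout.
        ALTER TABLE dbo.Widget ADD DisplayName nvarchar(100) NULL;
        GO

        -- Version the procedure instead of altering it in place: the old app
        -- keeps calling spFindAllThings, the new app calls spFindAllThings_2.
        CREATE PROCEDURE dbo.spFindAllThings_2
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT WidgetId, Name, DisplayName
            FROM dbo.Widget;
        END;
        GO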

  • Izenda Reports 6.3 Top 10 Features

    - by gt0084e1
    Izenda 6.3 Top 10 New Features and Capabilities

    1. Izenda Maps Add-On: The Izenda Maps add-on allows rapid visualization of geographic or geospatial data. It is fully integrated with the rest of the Izenda report package and adds a Maps tab which allows users to add interactive maps to their reports. Contact your representative or [email protected] for limited-time discounts. Izenda Maps even has rich drill-down capabilities that allow you to dive deeper with a simple hover (also requires dashboards).

    2. Streamlined Pie Charts with "Other" Slices: The advanced properties of the Pie Chart now allow you to combine the smaller slices into a single "Other" slice. This reduces the visual complexity without throwing off the scale of the chart.

    3. Combined Bar + Line Charts: The Bar chart now allows dual visualization of multiple metrics simultaneously by adding a line for secondary data. Enabled via AdHocSettings.AllowLineOnBar = true;

    4. Stacked Bar Charts: The stacked bar chart lets you see a breakdown of a measure based on categorical data. It is enabled with the following code: AdHocSettings.AllowStackedBarChart = true;

    5. Self-Joining Data Sources: The self-join feature allows parent-child relationships to be accessed from the Data Sources tab. The same table can be used as a secondary child table within the Report Designer.

    6. Report Design From Dashboard View: Dashboards now sport both view and design icons to allow quick access to both.

    7. Field Arithmetic on Dates: Differences between dates can now be used as measures with the arithmetic feature.

    8. Simplified Multi-Tenancy: Integrating with multi-tenant systems is now easier than ever. The following APIs have been added to facilitate common scenarios: AdHocSettings.CurrentUserTenantId = value; AdHocSettings.SchedulerTenantID = value; AdHocSettings.SchedulerTenantField = "AccountID";

    9. Support for SQL 2008 R2 and SQL Azure: Izenda now supports the latest version of Microsoft's database as well as the SQL Azure service.

    10. Enhanced Performance and Compatibility for Stored Procedures: Izenda now supports more stored procedures than ever, and runs them faster too.

  • The role of the Infrastructure DBA

    - by GavinPayneUK
    Do you have someone performing an Infrastructure DBA role within your organisation? Do you realise why you might now need one? When I first started working with SQL Server there were three distinct roles in the SQL Server virtual team: developer, DBA and sysadmin. In my simple terms, the developer looked after the “code”: the schema, stored procedures, and any ETL to get data in, out or updated within the database. They could talk in business entity terms about Customer numbers, Product codes...(read more)

  • WebCenter Customer Spotlight: Instituto Mexicano de la Propiedad Industrial

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Instituto Mexicano de la Propiedad Industrial (IMPI) is a decentralized federal agency with the goals of protecting and ensuring awareness of industrial property rights in Mexico. IMPI's business objectives were to increase efficiency, improve client service, accelerate services to the public, and reduce paper use by digitizing management of the documentation necessary for patent and trademark submissions and approvals. IMPI implemented Oracle WebCenter Content to develop an electronic inquiry service by digitizing and managing documents, and a public Web site making patent-related information easily available online. With the implemented solution, IMPI increased the number of monthly inquiries from 200 in-person consultations to 80,000 electronic consultations, and the number of trademark record inquiries from 30,000 to 300,000.

    Company Overview
    Instituto Mexicano de la Propiedad Industrial (IMPI) is a decentralized federal agency with the goals of protecting and ensuring awareness of industrial property rights in Mexico. IMPI is responsible for registering and publicizing inventions, distinctive signs, trademarks, and patents. In addition to its Mexico City headquarters, IMPI has five regional offices.

    Business Challenges
    IMPI's business objectives were to increase efficiency by automating internal operations and patent- and trademark-related procedures and services, improve client service by simplifying patent and trademark procedures, accelerate services to the public, and reduce paper use by digitizing management of the documentation necessary for patent and trademark submissions and approvals.

    Solution Deployed
    IMPI worked with Oracle Consulting to implement Oracle WebCenter Content, developing an electronic inquiry service - services that were previously provided in person only - by digitizing and managing documents. IMPI uses Oracle Database 11g, Enterprise Edition to manage data for all mission-critical systems, automating patent and trademark transactions and providing consistent, readily available, and accurate data. IMPI also developed a Web site to support the newly digitized information with simple and flexible interfaces, making patent-related information easily available online to the public.

    Business Results
    With the implemented solution, IMPI increased the number of monthly inquiries from 200 in-person consultations to 80,000 electronic consultations, and the number of trademark record inquiries from 30,000 to 300,000.

    “Oracle WebCenter Content structure is unique. It lets us separately manage communication with other applications and databases, and performs content management itself. It’s a stable tool, at an appropriate cost, that lets us develop and provide reliable electronic services.” - Eugenio Ponce de León, Divisional Director of Systems and Technology, Instituto Mexicano de la Propiedad Industrial

    Additional Information: Instituto Mexicano de la Propiedad Customer Snapshot | Oracle WebCenter Content

  • Simple Query tuning with STATISTICS IO and Execution plans

    A great deal can be gleaned from the use of STATISTICS IO and the execution plan when you are checking that a query is performing properly. Josef Richberg, the current holder of the 'Exceptional DBA' award, explains how an apparently draconian IT policy turns out to be a useful way of ensuring that stored procedures are carefully checked for performance before they are released.
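
    For illustration, a minimal sketch of the technique (the table name is hypothetical):

        -- Report logical/physical reads per table, alongside elapsed times.
        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        SELECT OrderId, OrderDate, TotalDue
        FROM dbo.Orders                      -- hypothetical table
        WHERE OrderDate >= '20100101';

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;
        -- The Messages tab then shows lines such as:
        --   Table 'Orders'. Scan count 1, logical reads ..., physical reads ...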

  • Installing Ubuntu 12.04.1 x64 with Fake RAID 1 [SOLVED]

    - by Arkadius
    I had:

    Software: dual boot with Windows XP and Ubuntu 10.04 LTS x32.

    Hardware: fake RAID 1 (mirroring) with 2x1 TB:
    - Partition 1 - Windows
    - Partition 2 - SWAP
    - Partition 3 - / (root)
    - Partition 4 - Extended
    - Partition 5 - /home
    - Partition 6 - /data

        arek@domek:/var/log/installer$ sudo fdisk -l

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000de1b9

        Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *       63   524297339   262148638+   7  HPFS/NTFS/exFAT
        /dev/sda2    524297340   528506369     2104515   82  Linux swap / Solaris
        /dev/sda3    528506370   570468149    20980890   83  Linux
        /dev/sda4    570468150  1953118439   691325145    5  Extended
        /dev/sda5    570468213   675340469    52436128+  83  Linux
        /dev/sda6    675340533  1953118439   638888953+  83  Linux

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000de1b9

        Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *       63   524297339   262148638+   7  HPFS/NTFS/exFAT
        /dev/sdb2    524297340   528506369     2104515   82  Linux swap / Solaris
        /dev/sdb3    528506370   570468149    20980890   83  Linux
        /dev/sdb4    570468150  1953118439   691325145    5  Extended
        /dev/sdb5    570468213   675340469    52436128+  83  Linux
        /dev/sdb6    675340533  1953118439   638888953+  83  Linux

        arek@domek:/var/log/installer$ ls -l /dev/mapper/
        total 0
        crw------- 1 root root 10, 236 Oct 7 20:17 control
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha -> ../dm-0
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha1 -> ../dm-1
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha2 -> ../dm-2
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha3 -> ../dm-3
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha4 -> ../dm-4
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha5 -> ../dm-5
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha6 -> ../dm-6

    I wanted to upgrade from 10.04 x32 to 12.04 x64 using a FRESH installation, so I ran the installation of Ubuntu 12.04.1 x64 LTS from the alternate CD. During the installation I selected manual partitioning and chose to:
    - use and format / (root)
    - use and format SWAP
    - use and keep data on /home
    - use and keep data on /data

    After I clicked "Continue" I got an error creating and formatting the SWAP partition. I went to a terminal with Alt+F2 and discovered that the RAID was visible only as a disk with NO partitions, something like this:

        arek@domek:/var/log/installer$ ls -l /dev/mapper/
        lrwxrwxrwx 1 root root 7 Oct 7 20:17 /dev/mapper/pdc_jhjbcaha -> ../dm-0
        arek@domek:/var/log/installer$ ls -l /dev/dm*
        brw-rw---- 1 root disk 252, 0 Oct 7 20:17 /dev/dm-0

    So I switched to the log console with Alt+F3 and saw errors like those below:

        Oct 7 14:02:45 check-missing-firmware: /dev/.udev/firmware-missing does not exist, skipping
        Oct 7 14:02:45 check-missing-firmware: /run/udev/firmware-missing does not exist, skipping
        Oct 7 14:02:45 check-missing-firmware: no missing firmware in /dev/.udev/firmware-missing /run/udev/firmware-missing
        Oct 7 14:02:45 anna-install: Installing dmraid-udeb
        Oct 7 14:02:45 anna[12599]: DEBUG: retrieving dmraid-udeb 1.0.0.rc16-4.1ubuntu8
        Oct 7 14:02:49 anna[12599]: DEBUG: retrieving libdmraid1.0.0.rc16-udeb 1.0.0.rc16-4.1ubuntu8
        Oct 7 14:02:49 anna[12599]: DEBUG: retrieving kpartx-udeb 0.4.9-3ubuntu5
        Oct 7 14:02:49 disk-detect: Serial ATA RAID disk(s) detected.
        Oct 7 14:02:55 disk-detect: Enabling dmraid support.
        Oct 7 14:02:55 disk-detect: RAID set "pdc_jhjbcaha" was activated
        Oct 7 14:02:55 HERE --> dmraid-activate: ERROR: Cannot retrieve RAID set information for pdc_jhjbcaha
        Oct 7 14:02:56 check-missing-firmware: /dev/.udev/firmware-missing does not exist, skipping
        Oct 7 14:02:56 check-missing-firmware: /run/udev/firmware-missing does not exist, skipping
        Oct 7 14:02:56 check-missing-firmware: no missing firmware in /dev/.udev/firmware-missing /run/udev/firmware-missing
        Oct 7 14:02:57 main-menu[428]: DEBUG: resolver (libnewt0.52): package doesn't exist (ignored)
        Oct 7 14:02:57 main-menu[428]: DEBUG: resolver (ext2-modules): package doesn't exist (ignored)
        Oct 7 14:02:57 main-menu[428]: INFO: Menu item 'partman-base' selected
        Oct 7 14:02:57 kernel: [ 316.512999] NTFS driver 2.1.30 [Flags: R/O MODULE].
        Oct 7 14:02:57 kernel: [ 316.523221] Btrfs loaded
        Oct 7 14:02:57 kernel: [ 316.534781] JFS: nTxBlock = 8192, nTxLock = 65536
        Oct 7 14:02:57 kernel: [ 316.554749] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
        Oct 7 14:02:57 kernel: [ 316.555336] SGI XFS Quota Management subsystem
        Oct 7 14:02:58 md-devices: mdadm: No arrays found in config file or automatically
        Oct 7 14:02:58 partman: No matching physical volumes found
        Oct 7 14:02:58 partman: No volume groups found
        Oct 7 14:02:58 partman: Reading all physical volumes. This may take a while...
        Oct 7 14:02:58 partman-lvm: No volume groups found
        Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha'
        Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha'
        Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha'
        Oct 7 14:06:11 HERE --> partman: mkswap: can't open '/dev/mapper/pdc_jhjbcaha2': No such file or directory
        Oct 7 14:07:28 init: starting pid 401, tty '/dev/tty2': '-/bin/sh'
        Oct 7 14:15:00 net/hw-detect.hotplug: Detected hotpluggable network interface eth0
        Oct 7 14:15:00 net/hw-detect.hotplug: Detected hotpluggable network interface lo

    As you can see, there are two errors:

        Oct 7 14:02:55 dmraid-activate: ERROR: Cannot retrieve RAID set information for pdc_jhjbcaha

    and

        Oct 7 14:06:11 partman: mkswap: can't open '/dev/mapper/pdc_jhjbcaha2': No such file or directory

    I looked on the internet and tried running the command "dmraid -ay", which gave something like this:

        dmraid -ay
        /dev/mapper/pdc_jhjbcaha -> Already activated
        /dev/mapper/pdc_jhjbcaha1 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha2 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha3 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha4 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha5 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha6 -> Successfully activated

    Then I returned to the installer with Alt+F1 and clicked "Return" to go back to the partitioning menu. I did NOT change anything, just selected "Continue" again, and everything went smoothly. I hope this will help someone.

    arkadius

  • SolidQ Journal - free SQL goodness for February

    - by Greg Low
    The SolidQ Journal for February just made it out, right at the end of February 28th - but again, it's great to see the content appearing. I've included the second part of the article on controlling the execution context of stored procedures; the first part was in December. Also this month, along with Fernando Guerrero's editorial, Analysis Services guru Craig Utley has written about aggregations, and Herbert Albert and Gianluca Holz have continued their double act and described how to automate database migrations,...(read more)

  • Is there any good hosting for ASP.NET and MySQL?

    - by HAJJAJ
    Hi everyone. I have an account with one of the hosting companies, and I did my project in ASP.NET with MySQL for the database. The hosting company is not giving me the privileges to create new users or new stored procedures. This is what they said to me:

    "Due to the shared nature of our environment we had to make some modifications to your procedure (namely the definer). We also had to review your procedure to determine if it would be compatible with our environment. While your procedures will work (via phpMyAdmin or some other interface), it is unlikely they will be accessible via the Connector/.NET (ADO.NET) that your application is likely using. This is due to a security restriction with how that connector works in shared environments. http://dev.mysql.com/doc/refman/5.0/en/connector-net-programming-stored.html 'Note: When you call a stored procedure, the command object makes an additional SELECT call to determine the parameters of the stored procedure. You must ensure that the user calling the procedure has the SELECT privilege on the mysql.proc table to enable them to verify the parameters. Failure to do this will result in an error when calling the procedure.' Unfortunately, giving read privileges on the mysql.proc table will give you access to the data of our other customers and that is not an acceptable risk. If your application can only work using stored procedures, then MSSQL will probably be the better option for your site. I apologize for the inconvenience and the wait to have this ticket completed."

    So is there any good host that anybody has already used to publish an ASP.NET and MySQL project? This is one of my stored procedures; I think it's simple and will not harm any other users:

        -- --------------------------------------------------------------------------------
        -- Routine DDL
        -- Note: comments before and after the routine body will not be stored by the server
        -- --------------------------------------------------------------------------------
        DELIMITER $$

        CREATE DEFINER=`root`@`localhost` PROCEDURE `SpcategoriesRead`(
            IN PaRactioncode VARCHAR(5),
            IN PaRCatID BIGINT,
            IN PaRSearchText TEXT
        )
        BEGIN
            -- Temporary table to hold the rows selected by the action code.
            DROP TEMPORARY TABLE IF EXISTS tmp;
            CREATE TEMPORARY TABLE tmp (
                CatID BIGINT PRIMARY KEY NOT NULL,
                CatTitle TEXT,
                CatDescription TEXT,
                CatTitleAr TEXT,
                CatDescriptionAr TEXT,
                PictureID BIGINT,
                Published BOOLEAN,
                DisplayOrder BIGINT,
                CreatedOn DATE
            );

            IF PaRactioncode = 1 THEN
                -- Retrieve all rows.
                INSERT INTO tmp
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr,
                       PictureID, Published, DisplayOrder, CreatedOn
                FROM tbcategories;
            ELSEIF PaRactioncode = 2 THEN
                -- Retrieve by ID.
                INSERT INTO tmp
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr,
                       PictureID, Published, DisplayOrder, CreatedOn
                FROM tbcategories
                WHERE CatID = PaRCatID;
            ELSEIF PaRactioncode = 3 THEN
                -- Published rows, in display order.
                INSERT INTO tmp
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr,
                       PictureID, Published, DisplayOrder, CreatedOn
                FROM tbcategories
                WHERE Published = 1
                ORDER BY DisplayOrder;
            END IF;

            IF PaRSearchText IS NOT NULL THEN
                SET PaRSearchText = CONCAT('%', PaRSearchText, '%');
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr,
                       PictureID, Published, DisplayOrder, CreatedOn
                FROM tmp
                WHERE CONCAT(CatTitle, CatDescription, CatTitleAr, CatDescriptionAr) LIKE PaRSearchText;
            ELSE
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr,
                       PictureID, Published, DisplayOrder, CreatedOn
                FROM tmp;
            END IF;

            DROP TEMPORARY TABLE IF EXISTS tmp;
        END$$
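
    For illustration, on a server you administer yourself, the connector's parameter lookup can be satisfied directly - it is exactly this grant that the shared host refuses. The account name is hypothetical:

        -- Give the calling account the SELECT privilege on mysql.proc that
        -- Connector/NET needs to verify the procedure's parameters.
        GRANT SELECT ON mysql.proc TO 'myuser'@'%';

        -- Calling the procedure itself needs no special privileges:
        CALL SpcategoriesRead('1', NULL, NULL);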

  • Enhance your Queries with Stored Functions

    HeidiSQL 4's Stored Routine Editor offers a user-friendly alternative to using a command-line interface to create and manage your stored procedures and functions. Today, we'll be learning how to take advantage of some useful native MySQL functions as well as use the editor to create our own custom functions.
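
    As a taste of what the article walks through, a minimal sketch of a custom stored function (the name and logic are hypothetical):

        DELIMITER $$

        -- Hypothetical helper: upper-case the first letter of a string.
        CREATE FUNCTION ucfirst(s VARCHAR(255))
        RETURNS VARCHAR(255)
        DETERMINISTIC
        BEGIN
            RETURN CONCAT(UPPER(LEFT(s, 1)), LOWER(SUBSTRING(s, 2)));
        END$$

        DELIMITER ;

        SELECT ucfirst('heidiSQL');  -- returns 'Heidisql'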

  • Is reliance on parametrized queries the only way to protect against SQL injection?

    - by Chris Walton
    All I have seen on SQL injection attacks seems to suggest that parametrized queries, particularly ones in stored procedures, are the only way to protect against such attacks. While I was working (back in the Dark Ages) stored procedures were viewed as poor practice, mainly because they were seen as less maintainable, less testable, highly coupled, and locking a system into one vendor (this question covers some other reasons).

    Although when I was working, projects were virtually unaware of the possibility of such attacks, various rules were adopted to secure the database against corruption of various sorts. These rules can be summarised as:
    - No client/application had direct access to the database tables.
    - All accesses to all tables were through views (and all the updates to the base tables were done through triggers).
    - All data items had a domain specified.
    - No data item was permitted to be nullable - this had implications that had the DBAs grinding their teeth on occasion, but was enforced.
    - Roles and permissions were set up appropriately - for instance, a restricted role to give only views the right to change the data.

    So is a set of (enforced) rules such as this (though not necessarily this particular set) an appropriate alternative to parametrized queries in preventing SQL injection attacks? If not, why not? Can a database be secured against such attacks by database (only) specific measures?

    EDIT: Emphasis of the question changed slightly, in the light of the initial responses received. Base question unchanged.

    EDIT 2: The approach of relying on parametrized queries seems to be only a peripheral step in defence against attacks on systems. It seems to me that more fundamental defences are both desirable, and may render reliance on such queries unnecessary, or less critical, even to defend specifically against injection attacks. The approach implicit in my question was based on "armouring" the database, and I had no idea whether it was a viable option. Further research has suggested that there are such approaches. I have found the following sources that provide some pointers to this type of approach:
    - http://database-programmer.blogspot.com
    - http://thehelsinkideclaration.blogspot.com
    The principal features I have taken from these sources are:
    - An extensive data dictionary, combined with an extensive security data dictionary
    - Generation of triggers, queries and constraints from the data dictionary
    - Minimize code and maximize data

    While the answers I have had so far are very useful and point out difficulties arising from disregarding parametrized queries, ultimately they do not answer my original question(s) (now emphasised in bold).
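
    For illustration, a minimal T-SQL sketch of the "armoured database" rules described in the question; all object names are hypothetical. Note that this limits what injected SQL can reach rather than preventing injection itself:

        CREATE ROLE app_user;
        GO
        -- Clients see only the view, never the base table.
        CREATE VIEW dbo.CustomerView
        AS
            SELECT CustomerId, Name, Email FROM dbo.Customer;
        GO
        GRANT SELECT, UPDATE ON dbo.CustomerView TO app_user;
        DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Customer TO app_user;
        GO
        -- Updates to the base table go through a trigger, where values can
        -- be checked before being applied.
        CREATE TRIGGER dbo.CustomerView_Update ON dbo.CustomerView
        INSTEAD OF UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE c
            SET c.Name = i.Name, c.Email = i.Email
            FROM dbo.Customer AS c
            JOIN inserted AS i ON i.CustomerId = c.CustomerId
            WHERE i.Email LIKE '%@%';  -- crude domain check, illustrative only
        END;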

  • Have you used the ExecutionValue and ExecValueVariable properties?

    The ExecutionValue property and its friend ExecValueVariable are a much undervalued feature of SSIS, and many people I talk to are not even aware of their existence, so I thought I'd try and raise their profile a bit.

    The ExecutionValue property is defined on the base object Task, so all tasks have it available, but it is up to the task developer to do something useful with it. The basic idea behind it is that it allows the task to return something useful and interesting about what it has performed, in addition to the standard success or failure result. The best example perhaps is the Execute SQL Task, which uses the ExecutionValue property to return the number of rows affected by the SQL statement(s). This is a very useful feature - the rows-affected count is something people often want to capture into a variable, and they often resort to the result set options to do it. Unfortunately we cannot read the value of a task property at runtime from within a SSIS package, so the ExecutionValue property on its own is a bit of a let-down - but enter the ExecValueVariable and we have the perfect marriage.

    The ExecValueVariable is another property exposed through the task (TaskHost), which lets us select a SSIS package variable. What happens now is that when the task sets the ExecutionValue, the interesting value is copied into the variable we set on the ExecValueVariable property, and a variable is something we can access and do something with. So put simply, if the ExecutionValue property value is of interest, make sure you create yourself a package variable and set its name as the ExecValueVariable. Have a look at the 3-step guide below:

    1. Configure your task as normal - for example, the Execute SQL Task, which here calls a stored procedure to do some updates.
    2. Create a variable of a suitable type to match the ExecutionValue; an integer is used to match the result we want to capture, the number of rows.
    3. Set the ExecValueVariable for the task - just select the variable we created in step 2. You need to do this in the Properties grid for the task (short-cut key: select the task and press F4).

    Now when we execute the sample task above, our variable UpdateQueueRowCount will get the number of rows we updated in our Execute SQL Task.

    I've tried to collate a list of tasks that return something useful via the ExecutionValue and ExecValueVariable mechanism, but the documentation isn't always great.

    - Execute SQL Task: returns the number of rows affected by the SQL statement or statements.
    - File System Task: returns the number of successful operations performed by the task.
    - File Watcher Task: returns the full path of the file found.
    - Transfer Error Messages Task: returns the number of error messages that have been transferred.
    - Transfer Jobs Task: returns the number of jobs that are transferred.
    - Transfer Logins Task: returns the number of logins transferred.
    - Transfer Master Stored Procedures Task: returns the number of stored procedures transferred.
    - Transfer SQL Server Objects Task: returns the number of objects transferred.
    - WMI Data Reader Task: returns an object that contains the results of the task. Not exactly clear, but I assume it depends on the WMI query used.
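
    For illustration, a hypothetical procedure of the kind step 1 might call. The Execute SQL Task takes its ExecutionValue from the rows-affected count of the statement(s), so the count should be left visible (SET NOCOUNT ON would typically suppress it):

        CREATE PROCEDURE dbo.UpdateQueue
        AS
        BEGIN
            -- Deliberately no SET NOCOUNT ON: the rows-affected count of this
            -- UPDATE is what flows into ExecutionValue, and from there into
            -- the UpdateQueueRowCount variable via ExecValueVariable.
            UPDATE dbo.Queue                  -- hypothetical table
            SET ProcessedDate = GETDATE()
            WHERE ProcessedDate IS NULL;
        END;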

  • SQL Injection Prevention

    To protect your application from SQL injection, perform the following steps:
    Step 1. Constrain input.
    Step 2. Use parameters with stored procedures.
    Step 3. Use parameters with dynamic SQL.
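
    For steps 2 and 3, a minimal T-SQL sketch using sp_executesql; the table and parameter names are hypothetical:

        -- The user-supplied value travels as a typed parameter and is never
        -- concatenated into the statement text.
        DECLARE @search nvarchar(256) = N'alice@example.com';

        EXEC sys.sp_executesql
            N'SELECT UserId, Email FROM dbo.Users WHERE Email = @Email;',
            N'@Email nvarchar(256)',
            @Email = @search;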

  • Handling SQL Server Errors

    This article covers the basics of TRY CATCH error handling in T-SQL, introduced in SQL Server 2005. It covers the common functions that return information about an error, and the use of the TRY CATCH block in stored procedures and transactions.
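
    For illustration, a minimal sketch of the pattern the article describes; the table and updates are hypothetical:

        BEGIN TRY
            BEGIN TRANSACTION;
            UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
            UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK TRANSACTION;
            -- The common functions that return information about the error:
            SELECT ERROR_NUMBER()   AS ErrorNumber,
                   ERROR_SEVERITY() AS ErrorSeverity,
                   ERROR_LINE()     AS ErrorLine,
                   ERROR_MESSAGE()  AS ErrorMessage;
        END CATCH;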

  • SQL Triggers and when (or when not) to use them

    - by John Mitchell
    When I was originally learning about SQL I was always told: only use triggers if you really need to, and opt for stored procedures instead if possible. Unfortunately, at the time (a good few years ago) I wasn't as curious about fundamentals as I am now, so I never did ask the reason why.

    What's the community's opinion on this? Is it just someone's personal preference, or should triggers be avoided (just like cursors) unless there is a good reason for them?
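
    For context, a sketch of the classic case where a trigger earns its keep: auditing in a trigger catches every write to the table, whereas auditing inside a stored procedure covers only callers of that procedure. All names here are hypothetical:

        CREATE TRIGGER dbo.trg_Product_Audit ON dbo.Product
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- Record price changes no matter which code path made them.
            INSERT INTO dbo.ProductAudit (ProductId, OldPrice, NewPrice, ChangedAt)
            SELECT d.ProductId, d.Price, i.Price, GETDATE()
            FROM deleted AS d
            JOIN inserted AS i ON i.ProductId = d.ProductId
            WHERE i.Price <> d.Price;
        END;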

  • Free tools for SQL Server - Automating Execution Plan Analysis

    - by jchang
    Since this topic is being discussed, I will plug my own tools: SQL Exec Stats and its (a little dated) documentation. The main capability is cross-referencing index usage with specific execution plans. Another feature is generating execution plans for all stored procedures in a database, along with the index usage cross-reference. There are several sources of execution plans or plan handles; these could be a live trace, a previously saved trace, previously saved sqlplan files, or dm_exec_cached_plans,...(read more)
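
    For illustration, one of the plan sources mentioned above: a minimal sketch that pulls XML plans straight from the plan cache for offline analysis:

        SELECT TOP (50)
            st.text      AS sql_text,
            qp.query_plan,               -- XML; can be saved as a .sqlplan file
            cp.usecounts,
            cp.objtype
        FROM sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)   AS st
        CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
        ORDER BY cp.usecounts DESC;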

  • Kernel Panic Troubleshooting For Dummies

    - by iamsid
    Ubuntu has been my primary OS since 10.04, and I have converted many of my friends and neighbors into Ubuntu users since then. I am the fix-it guy they approach when they encounter problems. The most frantic calls I get are when someone encounters an "update-induced kernel panic". (Many human beings are allergic to command-line interfaces and search engines.) What are the step-by-step procedures to resolve an "update-induced kernel panic"?

  • SEO Tips

    SEO is the acronym for search engine optimization. The term "optimization" has been defined as a complete set of procedures that are implemented to make the system fully functional. In the case of search engine optimization, the websites are designed to match the algorithm of the search engines so that the "crawlers" can easily pick up the pages for indexing purposes.

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I’ve been meaning to write for a while; good that it fits with this month’s T-SQL Tuesday, hosted by Joey D’Antoni (@jdanton).

    Ever since I got into databases, I’ve been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I’ve also spent a long time as a developer, and appreciate that databases don’t exactly fit within the stuff I learned in my first year of uni, particularly the “Algorithms and Data Structures” subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps it’s why I’m a fan of database internals, of indexes, latches, execution plans, and so on - the developer in me wants to be reassured that we’re getting to the data as efficiently as possible.

    Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, which was obviously going to be much faster than the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I’d looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn’t exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance.

    SQL Server 2014 is different though, through Hekaton - its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you’d have made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it’s durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and bw-trees (which avoid locking through the use of pointers), and each update is handled as a delete-and-insert. This is data the way that developers do it when they’re coding for performance - the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don’t support every feature that regular SQL tables do, this is still an excellent direction that has been taken.

    @rob_farley
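
    For illustration, a minimal sketch of a SQL Server 2014 memory-optimized table; the names are hypothetical, and the database needs a MEMORY_OPTIMIZED_DATA filegroup first:

        CREATE TABLE dbo.SessionState
        (
            SessionId   int           NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            UserName    nvarchar(50)  NOT NULL,
            LastTouched datetime2     NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);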
