Search Results

Search found 90467 results on 3619 pages for 'new rows'.

  • MySQL returning multiple rows from a stored procedure / function

    - by pistacchio
    Hi, I need to make a stored procedure or function that returns a set of rows. I've noted that in a stored procedure I can SELECT * FROM table successfully. But if I fetch rows in a loop and SELECT something, something_other FROM table once per loop iteration, I only get one single result. What I need is to loop, do some calculations, and return a row set. What's the best way to do this? A temporary table? Stored functions? Any help appreciated.
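
    The usual MySQL answer is a temporary table: fill it inside the procedure and make the final statement a plain SELECT, which is returned to the caller as a result set. A minimal sketch, with hypothetical table and column names:

        DELIMITER //
        CREATE PROCEDURE calc_rows()
        BEGIN
            -- scratch table, private to this connection
            DROP TEMPORARY TABLE IF EXISTS tmp_results;
            CREATE TEMPORARY TABLE tmp_results (id INT, total DECIMAL(10,2));

            -- looping / per-row calculations populate it
            INSERT INTO tmp_results (id, total)
            SELECT id, price * quantity FROM orders;

            -- a trailing SELECT sends the rows back to the caller
            SELECT id, total FROM tmp_results;
        END //
        DELIMITER ;

        CALL calc_rows();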

  • Remove specific string from multiple database rows in SQL

    - by Scott
    I have a column that contains page titles, each with the website name appended to the end (e.g. Product Name | Company Name Inc.). I would like to remove the " | Company Name Inc." from multiple rows simultaneously. What SQL query (or commands) would allow me to accomplish this? To re-illustrate, I want to convert multiple rows of one column from this: Product Name | Company Name Inc. To this: Product Name
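
    REPLACE inside an UPDATE does this in one statement; a sketch assuming a hypothetical pages table with a title column:

        UPDATE pages
        SET title = REPLACE(title, ' | Company Name Inc.', '')
        WHERE title LIKE '% | Company Name Inc.';

    The WHERE clause is optional (REPLACE leaves non-matching rows untouched), but it keeps the update from rewriting every row.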

  • Question about inserting/updating rows with SQL Server (ASP.NET MVC)

    - by Alex
    I have a very big table with a lot of rows; every row has stats for a user for a certain day. And obviously I don't have any stats for the future. So to update the stats I use UPDATE Stats SET Visits=@val WHERE ... a lot of conditions ... AND Date=@Today. But what if the row doesn't exist? I'd have to use INSERT INTO Stats (...) VALUES (Visits=@val, ..., Date=@Today). How can I check whether the row exists or not? Is there any way other than doing a COUNT(*)? If I pre-fill the table with empty cells, it'd take hundreds of thousands of rows, occupying megabytes and storing no data.
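
    One common T-SQL pattern is to try the UPDATE first and fall back to an INSERT when nothing was updated; a sketch, with the long WHERE clause reduced to a hypothetical UserId for brevity:

        UPDATE Stats
        SET Visits = @val
        WHERE UserId = @UserId AND Date = @Today;

        IF @@ROWCOUNT = 0
            INSERT INTO Stats (UserId, Date, Visits)
            VALUES (@UserId, @Today, @val);

    This avoids a separate COUNT(*) round trip, and on SQL Server 2008 or later the same upsert can also be written as a single MERGE statement.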

  • Loop through non-integer rows using SQL

    - by Jesse
    I know how to accomplish my task with .NET, but I wanted to do this just in SQL. I need to loop through all of the rows where the primary key is somewhat arbitrary. It can be a number or a series of letters, and probably any number of unusual things. I know I could do something like this...

        DECLARE @numRows INT
        SET @numRows = (SELECT COUNT(pkField) FROM myTable)
        DECLARE @I INT
        SET @I = 1
        WHILE (@I <= @numRows)
        BEGIN
            --Do what I need to here
            SET @I = @I + 1
        END

    ...if my rows were indexed in a contiguous fashion, but I don't know enough about SQL to do that if they're not. I keep coming across the use of "cursors," but I come across just as much reading about avoiding them. I found this SO solution but I'm not sure if that's what I need. I appreciate any ideas.
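
    When a set-based rewrite isn't possible, the usual way to walk rows with an arbitrary key is a cursor; a minimal sketch, assuming the key fits in an NVARCHAR:

        DECLARE @pk NVARCHAR(100);

        DECLARE rowCur CURSOR LOCAL FAST_FORWARD FOR
            SELECT pkField FROM myTable;

        OPEN rowCur;
        FETCH NEXT FROM rowCur INTO @pk;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- per-row work goes here, keyed by @pk
            FETCH NEXT FROM rowCur INTO @pk;
        END
        CLOSE rowCur;
        DEALLOCATE rowCur;

    A LOCAL FAST_FORWARD cursor sidesteps most of the overhead that the "avoid cursors" advice targets; that advice mostly means "prefer one set-based statement when one exists", not "never iterate".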

  • Delete rows using the SQL LIKE operator with data from another table

    - by Captastic
    Hi all, I am trying to delete rows from a table ("lovalarm") where a field ("pointid") is like any one of a number of strings. Currently I am entering them all manually; however, I need to be able to handle a list of over 100,000 options. My thought is to have a table ("lovdata") containing all possible strings and to run a query that deletes rows where the field is 'like' any of the strings in the other table. Can anyone point me in the right direction as to whether/how I can use LIKE in this way? Many thanks, Cap
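
    LIKE can take its pattern from a column, so the pattern table can drive the delete through EXISTS; a sketch assuming a hypothetical pattern column in lovdata that already contains any needed % wildcards:

        DELETE FROM lovalarm
        WHERE EXISTS (SELECT 1
                      FROM lovdata
                      WHERE lovalarm.pointid LIKE lovdata.pattern);

    Note that with 100,000+ patterns this forces a pattern scan per row, so expect it to be slow; if the match is really a prefix or exact match, comparing plain columns instead of LIKE patterns will let the database use indexes.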

  • PHP and SQL_CALC_FOUND_ROWS

    - by Lizard
    I am trying to add SQL_CALC_FOUND_ROWS into a query (please note this isn't for pagination, and that I am trying to add it to a CakePHP query). The code I currently have is below:

        return $this->find('all', array(
            'conditions' => $conditions,
            'fields' => array('SQL_CALC_FOUND_ROWS', 'Category.*', 'COUNT(`Entity`.`id`) as `entity_count`'),
            'joins' => array('LEFT JOIN `entities` AS Entity ON `Entity`.`category_id` = `Category`.`id`'),
            'group' => '`Category`.`id`',
            'order' => $sort,
            'limit' => $params['limit'],
            'offset' => $params['start'],
            'contain' => array('Domain' => array('fields' => array('title')))
        ));

    Note the 'fields' => array('SQL_CALC_FOUND_ROWS', ... part: this obviously doesn't work, as it tries to apply SQL_CALC_FOUND_ROWS to the table, e.g. SELECT `Category`.`SQL_CALC_FOUND_ROWS`. Is there any way of doing this? Any help would be greatly appreciated, thanks.
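
    For reference, SQL_CALC_FOUND_ROWS is a SELECT modifier, not a column: it must appear immediately after the SELECT keyword, and the count is read back afterwards with FOUND_ROWS(). A sketch of the SQL the find() call ultimately needs to generate (table names assumed from the joins above):

        SELECT SQL_CALC_FOUND_ROWS
               Category.*, COUNT(Entity.id) AS entity_count
        FROM categories AS Category
        LEFT JOIN entities AS Entity ON Entity.category_id = Category.id
        GROUP BY Category.id
        LIMIT 20 OFFSET 0;

        -- total number of matching rows, ignoring the LIMIT
        SELECT FOUND_ROWS();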

  • PostgreSQL 2D array to rows

    - by PostGreSQL newbie
    Hello, I am new to PostgreSQL arrays. I am trying to write a procedure to convert an array into rows, and want the following output:

        alphabet | number
        ---------+--------
        A        | 10
        B        | 10
        C        | 6
        D        | 9
        E        | 3

    from the following:

        id | alphabet_series
        ---+---------------------------------------------------------------------------------------------
        1  | {{A,10},{B,10},{C,6},{D,9},{E,3},{F,9},{I,10},{J,17},{K,16},{L,17},{M,20},{N,13},{O,19}}

    I have searched for array-to-rows functions, but they all seem to accept a 1-d array, whereas in this case it is a 2-d array. Any pointers will be appreciated. Many thanks.
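
    On a reasonably recent PostgreSQL, slicing by the first dimension with generate_series does this without any procedural code; a sketch assuming a hypothetical table t holding the column shown above:

        SELECT alphabet_series[i][1] AS alphabet,
               alphabet_series[i][2] AS number
        FROM t,
             generate_series(array_lower(alphabet_series, 1),
                             array_upper(alphabet_series, 1)) AS i;

    Both output columns come back as text, since a single array cannot mix types; cast the second column (e.g. ::int) if a numeric type is needed.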

  • iPhone - Adding new sections and rows into UITableView

    - by Dev
    I have a UITableView to display a list of data. The list will contain images, so I am loading the images lazily. Now, I don't want to load the whole list at once: first it should load some 10 records, and when we scroll down the table view it should automatically load the next 10 records, and so on. For this I need to add rows when scrolling reaches the bottom. The table view may contain different sections. So how can I add new rows and new sections at the end of the table view while scrolling down?

  • Using LINQ to filter rows of a matrix based on inclusion in an array

    - by Bob Feng
    I have a matrix, IEnumerable<IEnumerable<int>> matrix, for example:

        { {10,23,16,20,2,4}, {22,13,1,33,21,11}, {7,19,31,12,6,22}, ... }

    and another array:

        int[] arr = { 10, 23, 16, 20 };

    I want to filter the matrix by grouping together all rows of the matrix which contain the same number of elements from arr. That is to say, the first row in the matrix, {10,23,16,20,2,4}, has 4 numbers from arr, so it should be grouped with the rest of the rows that have 4 numbers from arr. Preferably using LINQ; thank you very much!

  • DataGridView - DefaultCellStyle, rows and columns priority

    - by angelPL
    Hi! In C#, in a DataGridView I want to set the BackColor property for the first row and the first column. The cell at the intersection of the first row and first column should take the property of the first column, not the row - but it takes the row's. For example (3 x 3 table; 'X' - property for the first row, 'Y' - property for the first column, 'a' - default property), it should be:

        Y X X
        Y a a
        Y a a

    but is:

        X X X
        Y a a
        Y a a

    It doesn't matter which property I set first:

        dataGridView1.Rows[0].DefaultCellStyle.BackColor = Color.Lavender;
        dataGridView1.Columns[0].DefaultCellStyle.BackColor = Color.Beige;

    or:

        dataGridView1.Columns[0].DefaultCellStyle.BackColor = Color.Beige;
        dataGridView1.Rows[0].DefaultCellStyle.BackColor = Color.Lavender;

    Sorry for my English...

  • mysql_fetch_row() not a valid result resource

    - by user1717305
    I am confused; I don't know what's wrong. I'm about to transfer all data from my first table to the other. Here is my code:

        $getdata = mysql_query("SELECT Quantity, Description, Total FROM ordercart");
        while($row = mysql_fetch_row($getdata)) {
            foreach($row as $cell){
                $query1 = mysql_query("INSERT INTO ordermem (Quantity, Description, Total) VALUES ($cell)",$connect);
            }
            mysql_free_result($getdata);
        }

    I get the error: Warning: mysql_fetch_row(): 5 is not a valid MySQL result resource.
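
    Incidentally, mysql_free_result($getdata) sits inside the while loop, so the result handle is destroyed after the first iteration, which is what produces the "not a valid MySQL result resource" warning on the next fetch. For a straight table-to-table copy, though, the fetch loop isn't needed at all; one statement does it (column names taken from the question):

        INSERT INTO ordermem (Quantity, Description, Total)
        SELECT Quantity, Description, Total
        FROM ordercart;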

  • MySQL pivot tables - convert rows to columns

    - by user2723490
    This is the structure of my table (screenshot omitted). Then I run a query:

        SELECT `date`, `index_name`, `results`
        FROM `mst_ind`
        WHERE `index_name` IN ('MSCI EAFE Mid NR USD', 'Alerian MLP PR USD')
          AND `time_period` = 'M1'

    and get a table like the one shown (screenshot omitted). How can I convert the "index_name" rows to columns like:

        date | MSCI EAFE Mid NR USD | Alerian MLP PR USD | etc.

    In other words, I need each column to represent an index and the rows to represent date-result pairs. I understand that MySQL doesn't have pivot-table functions. What is the easiest way of doing this? I've tried this code, but it generates an error:

        SELECT `date`,
               MAX(IF(index_name = 'Alerian MLP PR USD' AND `time_period`='M1', results, NULL)) AS res1,
               MAX(IF(index_name = 'MSCI EAFE Mid NR USD' AND `time_period`='M1', results, NULL)) AS res2
        FROM `mst_ind`
        GROUP BY `date

    I need to make the conversion at the query level - not in PHP. Please suggest a nice and elegant solution. Thanks!
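
    The conditional-aggregation query above is the standard MySQL pivot workaround, and it is nearly right: note the missing closing backtick on `date` in the GROUP BY, which by itself produces a syntax error. A cleaned-up sketch, with the shared time_period test moved into WHERE:

        SELECT `date`,
               MAX(IF(index_name = 'Alerian MLP PR USD',   results, NULL)) AS alerian_mlp,
               MAX(IF(index_name = 'MSCI EAFE Mid NR USD', results, NULL)) AS msci_eafe_mid
        FROM mst_ind
        WHERE time_period = 'M1'
        GROUP BY `date`;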

  • Oracle SQL: how to subtract rows of one table from another only once

    - by slyder07
    I'm working on a university project, and I have the following question: I have two tables in an Oracle DB, and I need to select those rows from table1 which are not included in table2. But the main problem is that each row from table2 may only be used to exclude one row from table1. For example:

        Table1                   Table2                   ResultTable
        id | Number | Letter     id | Number | Letter     id | Number | Letter
        ---+--------+-------     ---+--------+-------     ---+--------+-------
         1 | 4      | S           1 | 6      | G           2 | 2      | P
         2 | 2      | P           2 | 8      | B           3 | 5      | B
         3 | 5      | B           3 | 4      | S           4 | 4      | S
         4 | 4      | S           4 | 1      | A           6 | 2      | P
         5 | 1      | A           5 | 1      | H
         6 | 2      | P           6 | 2      | X

    So, as you can see, if a row from Table1 has a "twin" in Table2, they are both excluded. Hope I was explicit enough. Sorry for my bad English. Thanks.
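
    What is being asked for is a bag (multiset) difference, which Oracle's MINUS does not give, since it removes all duplicates. One way is to number duplicate occurrences on each side with ROW_NUMBER and anti-join on the rank, so each table2 row cancels at most one table1 row; a sketch using the columns above (Number renamed to num, since NUMBER is a reserved word in Oracle):

        SELECT id, num, letter
        FROM (SELECT id, num, letter,
                     ROW_NUMBER() OVER (PARTITION BY num, letter ORDER BY id) AS rn
              FROM table1) t1
        WHERE NOT EXISTS
              (SELECT 1
               FROM (SELECT num, letter,
                            ROW_NUMBER() OVER (PARTITION BY num, letter ORDER BY id) AS rn
                     FROM table2) t2
               WHERE t2.num = t1.num
                 AND t2.letter = t1.letter
                 AND t2.rn = t1.rn)
        ORDER BY id;

    Applied to the sample data, the first (4, S) and the single (1, A) in Table1 are cancelled by their Table2 twins, leaving exactly the four ResultTable rows.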

  • MySQL with InnoDB and serializable transactions does not (always) lock rows

    - by Tobias G.
    Hello, I have a transaction with a SELECT and a possible INSERT. For concurrency reasons, I added FOR UPDATE to the SELECT. To prevent phantom rows, I'm using the SERIALIZABLE transaction isolation level. This all works fine when there are rows in the table, but not if the table is empty. When the table is empty, the SELECT FOR UPDATE does not do any (exclusive) locking, and a concurrent thread/process can issue the same SELECT FOR UPDATE without being blocked.

        CREATE TABLE t (
            id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            display_order INT
        ) ENGINE = InnoDB;

        SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
        START TRANSACTION;
        SELECT COALESCE(MAX(display_order), 0) + 1 FROM t FOR UPDATE;
        ..

    This concept works as expected with SQL Server, but not with MySQL. Any ideas on what I'm doing wrong? EDIT: Adding an index on display_order does not change the behavior.
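
    What is likely happening: on an empty index range the SELECT ... FOR UPDATE only takes a gap lock, and InnoDB gap locks are compatible with each other, so both sessions acquire one and neither blocks until their later INSERTs collide. One common workaround is to serialize the critical section with a user-level lock; a sketch with a hypothetical lock name:

        SELECT GET_LOCK('t_display_order', 10);  -- wait up to 10 seconds

        START TRANSACTION;
        SELECT COALESCE(MAX(display_order), 0) + 1 FROM t;
        -- ... INSERT the new row using that value ...
        COMMIT;

        SELECT RELEASE_LOCK('t_display_order');

    An alternative is to keep a dedicated parent/counter row and lock it with FOR UPDATE, which gives InnoDB a real record to lock even when t is empty.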

  • Storing data from database [mysql_num_rows]

    - by user1717305
    So I have this code to pass items from the database to my order table. When I echo the session variables they contain something, so there's no problem with that. But when I echo the variables set under $numrows, they show nothing. Is there something wrong?

        <?php
        error_reporting(E_ALL ^ E_NOTICE);
        session_start();
        require("connect.php");
        $UserID = $_SESSION['CustNum'];
        $UserN = $_SESSION['UserName'];
        $ProdGTotal = $_SESSION['ProdGTotal'];
        $queryord = mysql_query("SELECT * FROM customer WHERE UserName = '$UserN'");
        $numrows = mysql_num_rows($queryord);
        if(numrows == 1){
            $row = mysql_fetch_assoc($queryord) or die ('Unable to run query:'.mysql_error());
            // fetch associated: get function from a query for a database
            $dbstreet = $row['Street'];
            $dhousenum = $row['HouseNum'];
            $dbcnum = $row['CelNum'];
            $dbarea = $row['Area'];
            $dbbuilding = $row['Building'];
            $dbcity = $row['City'];
            $dbpnum = $row['PhoneNum'];
            $dbfname = $row['FName'];
            $dblname = $row['LName'];
        }
        else die(mysql_error());
        $query4=mysql_query("INSERT INTO orderdetails VALUES ('', '$UserID', Now(), '$dbhousenum', '$dbstreet', '$dbarea', '$dbbuilding', '$dbcity', '$dbfname', '$dblname', '$dbcnum', '$dbpnum', '$ProdGTotal')",$connect);
        if ($query4){
            header("location:index.php");
        }
        else die(mysql_error());
        ?>

  • New Validation Attributes in ASP.NET MVC 3 Future

    - by imran_ku07
    Introduction: Validating user inputs is a very important step in collecting information from users, because it helps you to prevent errors while processing data. Incomplete or improperly formatted user inputs will create a lot of problems for your application. Fortunately, ASP.NET MVC 3 makes it very easy to perform the most common input validations. ASP.NET MVC 3 includes Required, StringLength, Range, RegularExpression, Compare and Remote validation attributes for common input validation scenarios. These validation attributes validate most of your user inputs, but validations for Email, File Extension, Credit Card, URL, etc. are still missing. Fortunately, some of these validation attributes are available in ASP.NET MVC 3 Future. In this article, I will show you how to leverage the Email, Url, CreditCard and FileExtensions validation attributes (which are available in ASP.NET MVC 3 Future) in an ASP.NET MVC 3 application.

    Description: First of all you need to download the ASP.NET MVC 3 RTM source code from here. Then extract all files into a folder. Then open the MvcFutures project from the mvc3-rtm-sources\mvc3\src\MvcFutures folder and build it. In case you get compile-time error(s), simply remove the references to the System.Web.WebPages and System.Web.Mvc assemblies, add references to System.Web.WebPages and System.Web.Mvc 3 again but from the .NET tab, and then build the project again; it will create a Microsoft.Web.Mvc assembly inside the mvc3-rtm-sources\mvc3\src\MvcFutures\obj\Debug folder. Now we can use the Microsoft.Web.Mvc assembly inside our application.

    Create a new ASP.NET MVC 3 application. For demonstration purposes, I will create a dummy model UserInformation. So create a new class file UserInformation.cs inside the Model folder and add the following code:

        public class UserInformation
        {
            [Required]
            public string Name { get; set; }

            [Required]
            [EmailAddress]
            public string Email { get; set; }

            [Required]
            [Url]
            public string Website { get; set; }

            [Required]
            [CreditCard]
            public string CreditCard { get; set; }

            [Required]
            [FileExtensions(Extensions = "jpg,jpeg")]
            public string Image { get; set; }
        }

    Inside the UserInformation class, I am using the Email, Url, CreditCard and FileExtensions validation attributes, which are defined in the Microsoft.Web.Mvc assembly. By default, FileExtensionsAttribute allows the png, jpg, jpeg and gif extensions. You can override this through the Extensions property of the FileExtensionsAttribute class.
    Then just open (or create) the HomeController.cs file and add the following code:

        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                return View();
            }

            [HttpPost]
            public ActionResult Index(UserInformation u)
            {
                return View();
            }
        }

    Next, just open (or create) the Index view for the Home controller and add the following code:

        @model NewValidationAttributesinASPNETMVC3Future.Model.UserInformation
        @{
            ViewBag.Title = "Index";
            Layout = "~/Views/Shared/_Layout.cshtml";
        }
        <h2>Index</h2>
        <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>
        @using (Html.BeginForm()) {
            @Html.ValidationSummary(true)
            <fieldset>
                <legend>UserInformation</legend>
                <div class="editor-label">
                    @Html.LabelFor(model => model.Name)
                </div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.Name)
                    @Html.ValidationMessageFor(model => model.Name)
                </div>
                <div class="editor-label">
                    @Html.LabelFor(model => model.Email)
                </div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.Email)
                    @Html.ValidationMessageFor(model => model.Email)
                </div>
                <div class="editor-label">
                    @Html.LabelFor(model => model.Website)
                </div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.Website)
                    @Html.ValidationMessageFor(model => model.Website)
                </div>
                <div class="editor-label">
                    @Html.LabelFor(model => model.CreditCard)
                </div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.CreditCard)
                    @Html.ValidationMessageFor(model => model.CreditCard)
                </div>
                <div class="editor-label">
                    @Html.LabelFor(model => model.Image)
                </div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.Image)
                    @Html.ValidationMessageFor(model => model.Image)
                </div>
                <p>
                    <input type="submit" value="Save" />
                </p>
            </fieldset>
        }
        <div>
            @Html.ActionLink("Back to List", "Index")
        </div>

    Now just run your application. You will find that both client-side and server-side validation for the above validation attributes work smoothly.

    Summary: Email, URL, Credit Card and File Extension input validations are very common. In this article, I showed you how you can implement these validations in your application, and I explained this with an example. I am also attaching a sample application which includes Microsoft.Web.Mvc.dll, so you can add a reference to the Microsoft.Web.Mvc assembly directly instead of doing any manual work. Hope you will enjoy this article too.

  • European Interoperability Framework - a new beginning?

    - by trond-arne.undheim
    The most controversial document in the history of the European Commission's IT policy is out. EIF is here, wrapped in the Communication "Towards interoperability for European public services", and including the new feature European Interoperability Strategy (EIS), arguably a higher strategic take on the same topic. Leaving EIS aside for a moment, the EIF controversy has been around IPR, the definition of open standards, and the proper terminology for standardization deliverables. Today, as the document finally emerges, what is the verdict? First of all, to be fair to those among you who do not spend your lives in the intricate labyrinths of Commission IT policy documents on interoperability, let's define what we are talking about. According to the Communication: "An interoperability framework is an agreed approach to interoperability for organisations that want to collaborate to provide joint delivery of public services. Within its scope of applicability, it specifies common elements such as vocabulary, concepts, principles, policies, guidelines, recommendations, standards, specifications and practices."

    The Good

    - EIF reconfirms that "The Digital Agenda can only take off if interoperability based on standards and open platforms is ensured" and also confirms that "The positive effect of open specifications is also demonstrated by the Internet ecosystem."
    - EIF takes a productive and pragmatic stance on openness: "In the context of the EIF, openness is the willingness of persons, organisations or other members of a community of interest to share knowledge and stimulate debate within that community, the ultimate goal being to advance knowledge and the use of this knowledge to solve problems" (p.11). "If the openness principle is applied in full: - All stakeholders have the same possibility of contributing to the development of the specification and public review is part of the decision-making process; - The specification is available for everybody to study; - Intellectual property rights related to the specification are licensed on FRAND terms or on a royalty-free basis in a way that allows implementation in both proprietary and open source software" (p. 26).
    - EIF is a formal Commission document. The former EIF 1.0 was a semi-formal deliverable from the PEGSCO, a working group of Member State representatives.
    - EIF tackles interoperability head-on and takes a clear stance: "Recommendation 22. When establishing European public services, public administrations should prefer open specifications, taking due account of the coverage of functional needs, maturity and market support."
    - The Commission will continue to support the National Interoperability Framework Observatory (NIFO), reconfirming the importance of coordinating such approaches across borders.
    - The Commission will align its internal interoperability strategy with the EIS through the eCommission initiative.
    - One cannot stress the importance of using open standards enough, whether in the context of open source or non-open source software. The EIF seems to have picked up on this fact. What does the EIF say about the relation between open specifications and open source software? The EIF introduces, as one of the characteristics of an open specification, the requirement that IPRs related to the specification have to be licensed on FRAND terms or on a royalty-free basis in a way that allows implementation in both proprietary and open source software.
    In this way, companies working under various business models can compete on an equal footing when providing solutions to public administrations, while administrations that implement the standard in their own software (software that they own) can share such software with others under an open source licence if they so decide.

    - EIF is now among the centerpieces of the Digital Agenda (even though this demands extensive inter-agency coordination in the Commission): "The EIS and the EIF will be maintained under the ISA Programme and kept in line with the results of other relevant Digital Agenda actions on interoperability and standards such as the ones on the reform of rules on implementation of ICT standards in Europe to allow use of certain ICT fora and consortia standards, on issuing guidelines on essential intellectual property rights and licensing conditions in standard-setting, including for ex-ante disclosure, and on providing guidance on the link between ICT standardisation and public procurement to help public authorities to use standards to promote efficiency and reduce lock-in. (Communication, p.7)"

    All in all, quite a few good things have happened to the document in the two years it has been on the shelf or was being re-written, depending on your perspective; in any case, awaiting the storms to calm.

    The Bad

    - While a certain pragmatism is required, and governments cannot migrate to full openness overnight, EIF gives a bit too much room for governments not to apply the openness principle in full. Plenty of reasons are given, which should maybe have been put as challenges to be overcome: "However, public administrations may decide to use less open specifications, if open specifications do not exist or do not meet functional interoperability needs. In all cases, specifications should be mature and sufficiently supported by the market, except if used in the context of creating innovative solutions".
    - EIF does not use the internationally established terminology: open standards. Rather, the EIF introduces the notion of "formalised specification". How do "formalised specifications" relate to "standards"? According to the FAQ provided: The word "standard" has a specific meaning in Europe as defined by Directive 98/34/EC. Only technical specifications approved by a recognised standardisation body can be called a standard. Many ICT systems rely on the use of specifications developed by other organisations such as a forum or consortium. The EIF introduces the notion of "formalised specification", which is either a standard pursuant to Directive 98/34/EC or a specification established by ICT fora and consortia. The term "open specification" used in the EIF, on the one hand, avoids terminological confusion with the Directive and, on the other, states the main features that comply with the basic principle of openness laid down in the EIF for European Public Services. Well, this may be somewhat true, but in reality, Europe is 30 years behind in terminology. Unless the European Standardization Reform gets completed in the next few months, most Member States will likely conclude that they will go on referencing and using standards beyond those created by the three European endorsed monopolists of standardization, CEN, CENELEC and ETSI. Who can afford to begin following the strict Brussels rules for what they can call open standards when, in reality, standards stemming from global standardization organizations, so-called fora/consortia, dominate in the IT industry? What exactly is EIF saying?
    Does it encourage Member States to go on using non-ESO standards as long as they call them something else? I guess I am all for it, although it is a bit cumbersome, no? Why was there so much interest around the EIF? The FAQ attempts to explain: Some Member States have begun to adopt policies to achieve interoperability for their public services. These actions have had a significant impact on the ecosystem built around the provision of such services, e.g. providers of ICT goods and services, standardisation bodies, industry fora and consortia, etc. The Commission identified a clear need for action at European level to ensure that actions by individual Member States would not create new electronic barriers that would hinder the development of interoperable European public services. As a result, all stakeholders involved in the delivery of electronic public services in Europe have expressed their opinions on how to increase interoperability for public services provided by the different public administrations in Europe. Well, it does not take two years to read 50 consultation documents, and the EU Standardization Reform is not yet completed, so, more pragmatically, you finally had to release the document. Ok, let's leave some of that aside because the document is out and some people are happy (and others definitely not).

    The Verdict

    Considering the controversy, the delays, the lobbying, and the interests at stake both in the EU, in Member States and among vendors large and small, this document is pretty impressive. As with a good wine that has not yet come to full maturity, let's say that it seems to be coming in somewhere in the 85-88/100 range, but only a more fine-grained analysis, enjoyment in good company, and ultimately, implementation, will tell. The European Commission has today adopted a significant interoperability initiative to encourage public administrations across the EU to maximise the social and economic potential of information and communication technologies. Today, we should rally around this achievement. Tomorrow, let's sit down and figure out what it means for the future.

  • A New Threat To Web Applications: Connection String Parameter Pollution (CSPP)

    - by eric.maurice
    Hi, this is Shaomin Wang. I am a security analyst in Oracle's Security Alerts Group. My primary responsibility is to evaluate the security vulnerabilities reported externally by security researchers on Oracle Fusion Middleware and to ensure timely resolution through the Critical Patch Update. Today, I am going to talk about a serious type of attack: Connection String Parameter Pollution (CSPP). Earlier this year, at the Black Hat DC 2010 Conference, two Spanish security researchers, Jose Palazon and Chema Alonso, unveiled a new class of security vulnerabilities, which target insecure dynamic connections between web applications and databases. The attack, called Connection String Parameter Pollution (CSPP), specifically exploits the semicolon-delimited database connection strings that are constructed dynamically based on user inputs from web applications. CSPP, if carried out successfully, can be used to steal user identities and hijack web credentials. CSPP is a high-risk attack because of the relative ease with which it can be carried out (low access complexity) and the potential results it can have (high impact). In today's blog, we are going to first look at what connection strings are and then review the different ways connection string injections can be leveraged by malicious hackers. We will then discuss how CSPP differs from traditional connection string injection, and the measures organizations can take to prevent this kind of attack. In web applications, a connection string is a set of values that specifies information to connect to backend data repositories, in most cases databases. The connection string is passed to a provider or driver to initiate a connection. Vendors or manufacturers write their own providers for different databases. Since there are many different providers and each provider has multiple ways to make a connection, there are many different ways to write a connection string. Here are some examples of connection strings from Oracle Data Provider for .Net/ODP.Net (Manufacturer: Oracle; Type: .NET Framework Class Library):

    - Using TNS: Data Source = orcl; User ID = myUsername; Password = myPassword;
    - Using integrated security: Data Source = orcl; Integrated Security = SSPI;
    - Using the Easy Connect Naming Method: Data Source = username/password@//myserver:1521/my.server.com
    - Specifying pooling parameters: Data Source=myOracleDB; User Id=myUsername; Password=myPassword; Min Pool Size=10; Connection Lifetime=120; Connection Timeout=60; Incr Pool Size=5; Decr Pool Size=2;

    There are many variations of connection strings, but the majority of them are key-value pairs delimited by semicolons. Attacks on connection strings are not new (see, for example, this SANS White Paper on Securing SQL Connection Strings). Connection strings are vulnerable to injection attacks when dynamic string concatenation is used to build connection strings based on user input. When the user input is not validated or filtered, and malicious text or characters are not properly escaped, an attacker can potentially access sensitive data or resources. For a number of years now, vendors, including Oracle, have created connection string builder classes to help developers generate valid connection strings and potentially prevent this kind of vulnerability. Unfortunately, not all application developers use these utilities because they are not aware of the danger posed by this kind of attack.
    So how are Connection String Parameter Pollution (CSPP) attacks different from traditional connection string injection attacks? First, let's look at what parameter pollution attacks are. Parameter pollution is a technique which typically involves appending repeating parameters to the request strings to attack the receiving end. Much of the public attention around parameter pollution was initiated as a result of a presentation on HTTP Parameter Pollution attacks by Stefano Di Paola and Luca Carettoni delivered at the 2009 Appsec OWASP Conference in Poland. In HTTP Parameter Pollution attacks, an attacker submits additional parameters in HTTP GET/POST to a web application, and if these parameters have the same name as an existing parameter, the web application may react in different ways depending on how the web application and web server deal with multiple parameters with the same name. When applied to connection strings, the rule for the majority of database providers is the "last one wins" algorithm. If a KEYWORD=VALUE pair occurs more than once in the connection string, the value associated with the LAST occurrence is used. This opens the door to some serious attacks. By way of example, in a web application, a user enters a username and password; a connection string is then generated to connect to the back-end database:

        Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = XXX;

    In the password field, if the attacker enters "xxx; Integrated Security = true", the connection string becomes:

        Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = XXX; Integrated Security = true;

    Under the "last one wins" principle, the web application will then try to connect to the database using the operating system account under which the application is running, bypassing normal authentication. CSPP poses serious risks for unprepared organizations. It can be particularly dangerous if an Enterprise Systems Management web front-end is compromised, because attackers can then gain access to control panels to configure databases, systems accounts, etc. Fortunately, organizations can take steps to prevent this kind of attack. CSPP falls into the injection category of attacks, like Cross Site Scripting or SQL Injection, which are made possible when inputs from users are not properly escaped or sanitized. Escaping is a technique used to ensure that characters (mostly from user inputs) are treated as data, not as characters that are relevant to the interpreter's parser. Software developers need to become aware of the danger of these attacks and learn about the defense mechanisms they need to introduce in their code. As well, software vendors need to provide templates or classes to facilitate coding and eliminate developers' guesswork for protecting against such vulnerabilities. Oracle has introduced the OracleConnectionStringBuilder class in Oracle Data Provider for .NET. Using this class, developers can employ a configuration file to provide the connection string and/or dynamically set the values through key/value pairs. It makes creating connection strings less error-prone and easier to manage, and ultimately using the OracleConnectionStringBuilder class provides better security against injection into connection strings.
    For More Information:

    - The OracleConnectionStringBuilder is documented at http://download.oracle.com/docs/cd/B28359_01/win.111/b28375/OracleConnectionStringBuilderClass.htm
    - Oracle has developed a publicly available course on preventing SQL injections. The Server Technologies Curriculum course "Defending Against SQL Injection Attacks!" is located at http://st-curriculum.oracle.com/tutorial/SQLInjection/index.htm
    - The OWASP web site also provides a number of useful resources. It is located at http://www.owasp.org/index.php/Main_Page

  • New SSIS tool on Codeplex – SSIS Log Analyzer

    I stumbled across a new SSIS tool on Codeplex today, the SSIS Log Analyzer, which was only released a few days ago. Whilst it is a beta release and currently only supports 2005 (2008 is promised), it looks quite interesting. It seems to be a fancy log viewer, but with some clever features and a nice-looking front-end. I’ve only read the documentation so far, but it has graphs and a debug view that shows your package with the colour animations similar to when debugging in BIDS, and everyone knows the way the pretty colours and numbers change is the best bit! I’ll quote some of the features for you here and then let you make your own mind up: is it useful in the real world?

    - Option to analyze the logs manually by applying row and column filters over the log data or by using queries to specify more complex criteria.
    - Automated Performance Analysis which provides a quick graphical look at which tasks spent the most time during package execution.
    - Rerun (debug) the entire sequence of events which happened during package execution, showing the flow of control in graphical form and the changes in runtime values for each task, like execution duration etc.
    - Support for Auto Analyzers to automatically find issues and provide suggestions for problems which can be figured out with the help of SSIS logs and/or the package.
    - Option to analyze just the log file, or the log and package together.
    - Provides a lightweight environment to have a quick look at the package. Opening it in BIDS takes some time as, being an authoring environment, it does all sorts of validations, resulting in some delay.

    See http://ssisloganalyzer.codeplex.com/ for more details.

  • Visual Studio 2010 Extension Manager (and the new VS 2010 PowerCommands Extension)

    - by ScottGu
    This is the twenty-third in a series of blog posts I’m doing on the VS 2010 and .NET 4 release. Today’s blog post covers some of the extensibility improvements made in VS 2010 – as well as a cool new “PowerCommands for Visual Studio 2010” extension that Microsoft just released (and which can be downloaded and used for free). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Extensibility in VS 2010

    VS 2010 provides a much richer extensibility model than previous releases. Anyone can build extensions that add, customize, and light up the Visual Studio 2010 IDE, code editors, project system and associated designers. VS 2010 extensions can be created using the new MEF (Managed Extensibility Framework), which is built into .NET 4. You can learn more about how to create VS 2010 extensions from this blog post from the Visual Studio Team Blog.

    VS 2010 Extension Manager

    Developers building extensions can distribute them on their own (via their own web-sites or by selling them). Visual Studio 2010 also now includes a built-in “Extension Manager” within the IDE that makes it much easier for developers to find, download, and enable extensions online. You can launch the “Extension Manager” by selecting the Tools->Extension Manager menu option. This loads an “Extension Manager” dialog which accesses an “online gallery” at Microsoft, and then populates a list of available extensions that you can optionally download and enable within your copy of Visual Studio. There are already hundreds of cool extensions populated within the online gallery. You can browse them by category (use the tree-view on the top-left to filter them). Clicking “download” on any of the extensions will download, install, and enable it.

    PowerCommands for Visual Studio 2010

    This weekend Microsoft released the free PowerCommands for Visual Studio 2010 extension to the online gallery. You can learn more about it here, and download and install it via the “Extension Manager” above (search for PowerCommands to find it). The PowerCommands download adds dozens of useful commands to Visual Studio 2010, many of them in the Solution Explorer context menus. Below is a list of all the commands included with this weekend’s PowerCommands for Visual Studio 2010 release:

    Enable/Disable PowerCommands in Options dialog: This feature allows you to select which commands to enable in the Visual Studio IDE. Point to the Tools menu, then click Options. Expand the PowerCommands options, then click Commands. Check the commands you would like to enable. Note: all PowerCommands are initially defaulted to Enabled.

    Format document on save / Remove and Sort Usings on save: The Format document on save option formats the tabs, spaces, and so on of the document being saved. It is equivalent to pointing to the Edit menu, clicking Advanced, and then clicking Format Document. The Remove and sort usings option removes unused using statements and sorts the remaining using statements in the document being saved. Note: the Remove and sort usings option is only available for C# documents. Format document on save and Remove and sort usings are both initially defaulted to OFF.

    Clear All Panes: This command clears all output panes. It can be executed from the button on the toolbar of the Output window.

    Copy Path: This command copies the full path of the currently selected item to the clipboard.
    It can be executed by right-clicking one of these nodes in the Solution Explorer: the solution node; a project node; any project item node; any folder.

    Email CodeSnippet: To email the lines of text you select in the code editor, right-click anywhere in the editor and then click Email CodeSnippet.

    Insert Guid Attribute: This command adds a Guid attribute to a selected class. From the code editor, right-click anywhere within the class definition, then click Insert Guid Attribute.

    Show All Files: This command shows the hidden files in all projects displayed in the Solution Explorer when the solution node is selected. It enhances the Show All Files button, which normally shows only the hidden files in the selected project node.

    Undo Close: This command reopens a closed document, returning the cursor to its last position. To reopen the most recently closed document, point to the Edit menu, then click Undo Close. Alternately, you can use the Ctrl+Shift+Z shortcut. To reopen any other recently closed document, point to the View menu, click Other Windows, and then click Undo Close Window. The Undo Close window appears, typically next to the Output window. Double-click any document in the list to reopen it.

    Collapse Projects: This command collapses a project or projects in the Solution Explorer, starting from the selected root node. Collapsing a project can increase the readability of the solution. This command can be executed from three different places: solution, solution folder and project nodes respectively.

    Copy Class: This command copies a selected class's entire content to the clipboard, renaming the class. This command is normally followed by a Paste Class command, which renames the class to avoid a compilation error. It can be executed from a single project item or a project item with dependent sub-items.

    Paste Class: This command pastes a class's entire content from the clipboard, renaming the class to avoid a compilation error. This command is normally preceded by a Copy Class command. It can be executed from a project or folder node.

    Copy References: This command copies a reference or set of references to the clipboard. It can be executed from the references node, a single reference node or a set of reference nodes.

    Paste References: This command pastes a reference or set of references from the clipboard. It can be executed from different places depending on the type of project. For C# projects it can be executed from the references node. For Visual Basic and Website projects it can be executed from the project node.

    Copy As Project Reference: This command copies a project as a project reference to the clipboard. It can be executed from a project node.

    Edit Project File: This command opens the MSBuild project file for a selected project inside Visual Studio. It combines the existing Unload Project and Edit Project commands.

    Open Containing Folder: This command opens a Windows Explorer window pointing to the physical path of a selected item. It can be executed from a project item node.

    Open Command Prompt: This command opens a Visual Studio command prompt pointing to the physical path of a selected item. It can be executed from four different places: solution, project, folder and project item nodes respectively.

    Unload Projects: This command unloads all projects in a solution. This can be useful in MSBuild scenarios when multiple projects are being edited. This command can be executed from the solution node.

    Reload Projects: This command reloads all unloaded projects in a solution.
    It can be executed from the solution node.

    Remove and Sort Usings: This command removes and sorts using statements for all classes in a given project. It is useful, for example, in removing or organizing the using statements generated by a wizard. This command can be executed from a solution node or a single project node.

    Extract Constant: This command creates a constant definition statement for a selected text. Extracting a constant effectively names a literal value, which can improve readability. This command can be executed from the code editor by right-clicking selected text.

    Clear Recent File List: This command clears the Visual Studio recent file list. The Clear Recent File List command brings up a Clear File dialog which allows any or all recent files to be selected.

    Clear Recent Project List: This command clears the Visual Studio recent project list. The Clear Recent Project List command brings up a Clear File dialog which allows any or all recent projects to be selected.

    Transform Templates: This command executes a custom tool with associated text template items. It can be executed from a DSL project node or a DSL folder node.

    Close All: This command closes all documents. It can be executed from a document tab.

    How to temporarily disable extensions

    Extensions provide a great way to make Visual Studio even more powerful, and can help improve your overall productivity. One thing to keep in mind, though, is that extensions run within the Visual Studio process (DevEnv.exe), so a bug within an extension can impact both the stability and performance of Visual Studio. If you ever run into a situation where things seem slower than they should be, or if you crash repeatedly, please temporarily disable any installed extensions and see if that fixes the problem. You can do this for extensions that were installed via the online gallery by re-running the Extension Manager (using the Tools->Extension Manager menu option), selecting the “Installed Extensions” node on the top-left of the dialog, and then clicking “Disable” on any of the extensions within your installed list.

    Hope this helps, Scott

  • SQL SERVER – sp_describe_first_result_set New System Stored Procedure in SQL Server 2012

    - by pinaldave
    I might have said this earlier many times, but I will say it again: SQL Server never ceases to amaze me. Here is an example of it: sp_describe_first_result_set. I stumbled upon it when I was looking for something else on BOL. This new system stored procedure attracted me enough to experiment with it. This SP does exactly what its name suggests: it describes the first result set. Let us see a very simple example of the same. Please note that this will work only on SQL Server 2012.

        EXEC sp_describe_first_result_set
            N'SELECT * FROM AdventureWorks.Sales.SalesOrderDetail', NULL, 1
        GO

    Here is the partial resultset. Now let us take this simple example to the next level and learn one more interesting detail about this procedure. First I will create a view, and then we will use the same procedure over the view.

        USE AdventureWorks
        GO
        CREATE VIEW dbo.MyView
        AS
        SELECT [SalesOrderID] soi_v
              ,[SalesOrderDetailID] sodi_v
              ,[CarrierTrackingNumber] stn_v
        FROM [Sales].[SalesOrderDetail]
        GO

    Now let us execute the above stored procedure with various options. You can notice I am changing the very last parameter which I am passing to the stored procedure. This option is known as browse_information_mode.

        EXEC sp_describe_first_result_set
            N'SELECT soi_v soi, sodi_v sodi, stn_v stn FROM MyView', NULL, 0;
        GO
        EXEC sp_describe_first_result_set
            N'SELECT soi_v soi, sodi_v sodi, stn_v stn FROM MyView', NULL, 1;
        GO
        EXEC sp_describe_first_result_set
            N'SELECT soi_v soi, sodi_v sodi, stn_v stn FROM MyView', NULL, 2;
        GO

    Here is the result of all three queries together in a single image for easier understanding of their differences. You can see that when browse_information_mode is set to 1, the resultset describes the details of the original source database, schema, as well as the source table. When browse_information_mode is set to 2, the resultset describes the details of the view as the source database. I found it really, really interesting that there now exists a system stored procedure which describes the resultset of the output. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, T SQL, Technology

  • SQL SERVER – Online Session on What is New in Denali – Today Online

    - by pinaldave
    I will be presenting today on the subject "Inside of Next Generation SQL Server - Denali" online at Zeollar.com. These sessions are really fun as they are online, downloadable, and 100% demo oriented. I will be using SQL Server ‘Denali’ CTP 1 to present on the subject of What is New in Denali. The webcast will start at 12:30 PM sharp and will end at 1 PM India Time. It will be 100% demo oriented, with no slides. I will be covering the following topics in the session:

    - SQL SERVER – Denali Feature – Zoom Query Editor
    - SQL SERVER – Denali – Improvement in Startup Options
    - SQL SERVER – Denali – Clipboard Ring – CTRL+SHIFT+V
    - SQL SERVER – Denali – Multi-Monitor SSMS Windows
    - SQL SERVER – Denali – Executing Stored Procedure with Result Sets
    - SQL SERVER – Performance Improvement of Executing Stored Procedure with Result Sets in Denali
    - SQL SERVER – ‘Denali’ – A Simple Example of Contained Databases
    - SQL SERVER – Denali – ObjectID in Negative – Local TempTable has Negative ObjectID
    - SQL SERVER – Server Side Paging in SQL Server Denali – A Better Alternative
    - SQL SERVER – Server Side Paging in SQL Server Denali Performance Comparison
    - SQL SERVER – Denali – SEQUENCE is not IDENTITY
    - SQL SERVER – Denali – Introduction to SEQUENCE – Simple Example of SEQUENCE

    If time permits, we will cover a few more topics as well. The session will be recorded. My earlier session on the topic of Best Practices Analyzer is also available to watch online here: SQL SERVER – Video – Best Practices Analyzer using Microsoft Baseline Configuration Analyzer. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
