Search Results

Search found 6988 results on 280 pages for 'if else statement'.


  • db2 stored procedure. locking / releasing table

    I use a stored procedure to read/update/return certain fields in a journaled AS400 table. I want to lock the table first and then release it after the record is updated. I have tried tons of things, but releasing the table is the problem. The SP defines and opens a cursor, selects the record into variables and updates the record. I tried BEGIN ATOMIC and then LOCK TABLE ... IN EXCLUSIVE MODE, but when the procedure is over the lock is not released. Is there a statement I am missing, or do I need to compile it with certain parameters? I use a simple CREATE PROCEDURE statement in the AS400 Navigator SQL panel to compile it. I would really appreciate some help with an example. Thanks.
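
    On DB2 for i an exclusive table lock is normally held until the unit of work ends, so the usual approach is to run the procedure under commitment control and end the transaction explicitly rather than expecting the lock to drop when the procedure returns. A minimal sketch, assuming a hypothetical journaled table MYLIB.JRNTAB with columns ID and STATUS (the names and the commitment-control option are illustrative, not taken from the original question):

        CREATE PROCEDURE MYLIB.UPD_WITH_LOCK (IN P_ID INTEGER)
        LANGUAGE SQL
        SET OPTION COMMIT = *CHG
        BEGIN
          -- the exclusive lock is acquired here and held for the unit of work
          LOCK TABLE MYLIB.JRNTAB IN EXCLUSIVE MODE;

          UPDATE MYLIB.JRNTAB
             SET STATUS = 'DONE'
           WHERE ID = P_ID;

          -- ending the unit of work is what releases the table lock
          COMMIT;
        END

    If the caller manages the transaction instead, the same effect comes from committing (or rolling back) in the caller; either way the lock lives until the unit of work ends, not until the procedure returns.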

    Read the article

  • Creating Oracle stored procedure that returns data

    - by user3614327
    In Firebird you can create a stored procedure that returns data and invoke it like a table, passing the arguments:

        create or alter procedure SEL_MAS_IVA (PCANTIDAD double precision)
        returns (CANTIDAD_CONIVA double precision)
        as
        begin
          CANTIDAD_CONIVA = pCANTIDAD*(1.16);
          suspend;
        end

    select * from SEL_MAS_IVA(100) will return a single-row, single-column (named CANTIDAD_CONIVA) relation with the value 116. This is a very simple example. The stored procedure can of course have any number of input and output parameters and do whatever it needs to return data (including multiple rows), which is accomplished by the "suspend" statement (which, as its name implies, suspends the SP execution, returns data to the caller, and resumes with the next statement). How can I create this kind of stored procedure in Oracle?
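
    In Oracle the closest equivalent is a table function, typically a pipelined one, which is queried with SELECT ... FROM TABLE(...). A minimal sketch of that approach, using the built-in SYS.ODCINUMBERLIST collection type so no extra type has to be created (the function name simply mirrors the Firebird example; this is one illustration among several, a SYS_REFCURSOR OUT parameter being another common option):

        create or replace function SEL_MAS_IVA (PCANTIDAD in number)
          return sys.odcinumberlist pipelined
        as
        begin
          -- each PIPE ROW behaves roughly like Firebird's SUSPEND
          pipe row (PCANTIDAD * 1.16);
          return;
        end;
        /

        -- invoke it like a table; COLUMN_VALUE is the generated column name
        select column_value as CANTIDAD_CONIVA
          from table(SEL_MAS_IVA(100));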

    Read the article

  • files get uploaded just before they get cancelled

    - by user1763986
    Got a little situation here where I am trying to cancel a file's upload. What I have done is stated that if the user clicks on the "Cancel" button, then it will simply remove the iframe so that it does not go to the page where it uploads the files into the server and inserts data into the database. Now this works fine if the user clicks on the "Cancel" button in quickish time the problem I have realised though is that if the user clicks on the "Cancel" button very late, it sometimes doesn't remove the iframe in time meaning that the file has just been uploaded just before the user has clicked on the "Cancel" button. So my question is that is there a way that if the file does somehow get uploaded before the user clicks on the "Cancel" button, that it deletes the data in the database and removes the file from the server? Below is the image upload form: <form action="imageupload.php" method="post" enctype="multipart/form-data" target="upload_target_image" onsubmit="return imageClickHandler(this);" class="imageuploadform" > <p class="imagef1_upload_process" align="center"> Loading...<br/> <img src="Images/loader.gif" /> </p> <p class="imagef1_upload_form" align="center"> <br/> <span class="imagemsg"></span> <label>Image File: <input name="fileImage" type="file" class="fileImage" /></label><br/> <br/> <label class="imagelbl"><input type="submit" name="submitImageBtn" class="sbtnimage" value="Upload" /></label> </p> <p class="imagef1_cancel" align="center"> <input type="reset" name="imageCancel" class="imageCancel" value="Cancel" /> </p> <iframe class="upload_target_image" name="upload_target_image" src="#" style="width:0px;height:0px;border:0px;solid;#fff;"></iframe> </form> Below is the jquery function which controls the "Cancel" button: $(imageuploadform).find(".imageCancel").on("click", function(event) { $('.upload_target_image').get(0).contentwindow $("iframe[name='upload_target_image']").attr("src", "javascript:'<html></html>'"); return stopImageUpload(2); }); Below is the php code where it uploads the files and inserts the data into the database. The form above posts to this php page "imageupload.php": <body> <?php include('connect.php'); session_start(); $result = 0; //uploads file move_uploaded_file($_FILES["fileImage"]["tmp_name"], "ImageFiles/" . 
$_FILES["fileImage"]["name"]); $result = 1; //set up the INSERT SQL query command to insert the name of the image file into the "Image" Table $imagesql = "INSERT INTO Image (ImageFile) VALUES (?)"; //prepare the above SQL statement if (!$insert = $mysqli->prepare($imagesql)) { // Handle errors with prepare operation here } //bind the parameters (these are the values that will be inserted) $insert->bind_param("s",$img); //Assign the variable of the name of the file uploaded $img = 'ImageFiles/'.$_FILES['fileImage']['name']; //execute INSERT query $insert->execute(); if ($insert->errno) { // Handle query error here } //close INSERT query $insert->close(); //Retrieve the ImageId of the last uploded file $lastID = $mysqli->insert_id; //Insert into Image_Question Table (be using last retrieved Image id in order to do this) $imagequestionsql = "INSERT INTO Image_Question (ImageId, SessionId, QuestionId) VALUES (?, ?, ?)"; //prepare the above SQL statement if (!$insertimagequestion = $mysqli->prepare($imagequestionsql)) { // Handle errors with prepare operation here echo "Prepare statement err imagequestion"; } //Retrieve the question number $qnum = (int)$_POST['numimage']; //bind the parameters (these are the values that will be inserted) $insertimagequestion->bind_param("isi",$lastID, 'Exam', $qnum); //execute INSERT query $insertimagequestion->execute(); if ($insertimagequestion->errno) { // Handle query error here } //close INSERT query $insertimagequestion->close(); ?> <!--Javascript which will output the message depending on the status of the upload (successful, failed or cancelled)--> <script> window.top.stopImageUpload(<?php echo $result; ?>, '<?php echo $_FILES['fileImage']['name'] ?>'); </script> </body> UPDATE: Below is the php code "cancelimage.php" where I want to delete the cancelled file from the server and delete the record from the database. It is set up but not finished, can somebody finish it off so I can retrieve the name of the file and it's id using $_SESSION? <?php // connect to the database include('connect.php'); /* check connection */ if (mysqli_connect_errno()) { printf("Connect failed: %s\n", mysqli_connect_error()); die(); } //remove file from server unlink("ImageFiles/...."); //need to retrieve file name here where the ... line is //DELETE query statement where it will delete cancelled file from both Image and Image Question Table $imagedeletesql = " DELETE img, img_q FROM Image AS img LEFT JOIN Image_Question AS img_q ON img_q.ImageId = img.ImageId WHERE img.ImageFile = ?"; //prepare delete query if (!$delete = $mysqli->prepare($imagedeletesql)) { // Handle errors with prepare operation here } //Dont pass data directly to bind_param store it in a variable $delete->bind_param("s",$img); //execute DELETE query $delete->execute(); if ($delete->errno) { // Handle query error here } //close query $delete->close(); ?> Can you please provide an sample code in your answer to make it easier for me. Thank you

    Read the article

  • Fill data gaps without UNION

    - by Dave Jarvis
    Problem: There are data gaps that need to be filled, possibly using PARTITION BY.

    Query Statement: The select statement reads as follows:

        SELECT count( r.incident_id ) AS incident_tally,
               r.severity_cd,
               r.incident_typ_cd
          FROM report_vw r
         GROUP BY r.severity_cd, r.incident_typ_cd
         ORDER BY r.severity_cd, r.incident_typ_cd

    Code Tables: The severity codes and incident type codes are from severity_vw and incident_type_vw.

    Actual Result Data:

        36  0  ENVIRONMENT
        1   1  DISASTER
        27  1  ENVIRONMENT
        4   2  SAFETY
        1   3  SAFETY

    Required Result Data:

        36  0  ENVIRONMENT
        0   0  DISASTER
        0   0  SAFETY
        27  1  ENVIRONMENT
        0   1  DISASTER
        0   1  SAFETY
        0   2  ENVIRONMENT
        0   2  DISASTER
        4   2  SAFETY
        0   3  ENVIRONMENT
        0   3  DISASTER
        1   3  SAFETY

    Any ideas how to use PARTITION BY (or JOINs) to fill in the zero counts?
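
    One way to fill the gaps without UNION is to build the complete severity/type grid with a CROSS JOIN of the two code views and LEFT JOIN the aggregated counts onto it. A minimal sketch, assuming severity_vw exposes a severity_cd column and incident_type_vw exposes an incident_typ_cd column (the question does not show their definitions, so those column names are a guess):

        SELECT COALESCE(c.incident_tally, 0) AS incident_tally,
               s.severity_cd,
               t.incident_typ_cd
          FROM severity_vw s
         CROSS JOIN incident_type_vw t
          LEFT JOIN (
                 SELECT count(r.incident_id) AS incident_tally,
                        r.severity_cd,
                        r.incident_typ_cd
                   FROM report_vw r
                  GROUP BY r.severity_cd, r.incident_typ_cd
               ) c
            ON c.severity_cd = s.severity_cd
           AND c.incident_typ_cd = t.incident_typ_cd
         ORDER BY s.severity_cd, t.incident_typ_cd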

    Read the article

  • issue with selecting 1 or 0 in mysql db

    - by jason
    I am having trouble with the following SQL statement. I have had this issue before but I can't remember how I fixed it; I am guessing the issue is that MySQL sees 0 as NULL? Note that I didn't show the first part of the SELECT statement as it's irrelevant and my SQL works; the problem is that it is showing rows with toffline equal to 1 as well as 0...

        FROM table1 AS tb1
        LEFT JOIN table2 AS tb2 ON tb2.id = tb1.id2
        LEFT JOIN table3 AS tb3 ON tb3.id = tb1.id3 AND tb1.toffline = 0

    I have also tried AND NOT tb1.toffline = 1 and also WHERE tb1.toffline = 0, all with the same result...
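
    One thing worth checking: a condition on the left-hand table inside a LEFT JOIN's ON clause only controls which tb3 rows get joined; it never removes tb1 rows from the result. A minimal sketch of the usual fix, filtering in the WHERE clause instead (and, if toffline happens to be stored as a string, comparing against '0' rather than the number 0, since MySQL's implicit string-to-number conversion can give surprising matches):

        FROM table1 AS tb1
        LEFT JOIN table2 AS tb2 ON tb2.id = tb1.id2
        LEFT JOIN table3 AS tb3 ON tb3.id = tb1.id3
        WHERE tb1.toffline = 0
          -- or, if the column is CHAR/VARCHAR:
          -- WHERE tb1.toffline = '0'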

    Read the article

  • casting a node to integer

    - by user1708762
    The code gives an error saying that "no operator matches these two operands" on the if comparison statement. I interpret that to mean that a node can't be converted/cast to an integer. But the printf statement prints an integer value for w[2] when used with the %d format. Why is that happening? Isn't printf casting it?

        NODE *w = (NODE *)malloc(4 * sizeof(NODE));
        if (w[2] == 0)
            printf("%d\n", w[2]);

    The structure of the node is:

        struct node {
            int key;
            struct node *father;
            struct node *child[S];
            int *ss;
            int current;
        };

    Read the article

  • Best Practice for loading non-existent data

    - by Aizotu
    I'm trying to build a table in MS SQL 2008, loaded with roughly 50,000 rows of data. Right now I'm doing something like:

        CREATE TABLE MyCustomData
        (
            ColumnKey Int Null,
            Column1 NVarChar(100) Null,
            Column2 NVarChar(100) Null,
            PRIMARY KEY CLUSTERED
            (
                ColumnKey ASC
            )
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
        )

        CREATE INDEX IDX_COLUMN1 ON MyCustomData([COLUMN1])
        CREATE INDEX IDX_COLUMN2 ON MyCustomData([COLUMN2])

        DECLARE @MyCount Int
        SET @MyCount = 0
        WHILE @MyCount < 50000
        BEGIN
            INSERT INTO MyCustomData (ColumnKey, Column1, Column2)
            SELECT @MyCount + 1, 'Custom Data 1', 'Custom Data 2'
            SET @MyCount = @MyCount + 1
        END

    My problem is that this is crazy slow. I thought at one point I could create a SELECT statement to build my custom data and use that as the data source for my INSERT INTO statement, i.e. something like:

        INSERT INTO MyCustomData (ColumnKey, Column1, Column2)
        FROM (Select Top 50000 Row_Count(), 'Custom Data 1', 'Custom Data 2')

    I know this doesn't work, but it's the only thing I can show that seems to provide an example of what I'm after. Any suggestions would be greatly appreciated.
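
    A set-based INSERT is usually far faster than a row-at-a-time loop. A minimal sketch of that idea for SQL Server 2008, generating the 50,000 key values with ROW_NUMBER() over a cross join of a system catalog view (the catalog view is just a convenient row source here; any sufficiently large table would do):

        INSERT INTO MyCustomData (ColumnKey, Column1, Column2)
        SELECT TOP (50000)
               ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
               'Custom Data 1',
               'Custom Data 2'
          FROM sys.all_objects a
         CROSS JOIN sys.all_objects b;

    If the loop has to stay, wrapping the whole WHILE block in a single BEGIN TRAN ... COMMIT also helps considerably, because each singleton INSERT otherwise pays for its own log flush.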

    Read the article

  • Conditional Statements - If Then vs. Select Case

    - by cloyd800
    I'm a bit new to programming, and the few sources I've read, both on the web and in the books I'm using to teach myself, define what IF THEN and SELECT CASE conditional statements are, but they fail to give a comparison of why I would use one over the other and what best practices guide that decision. If I'm understanding these conditional statements correctly, both are based on a set of conditions with an outcome based on meeting those conditions, and if no conditions are met then an alternative outcome can be defined. I'm having trouble understanding when I would use an IF THEN statement, when I'd use a SELECT CASE statement, and what best practices are used to make this decision. Any insight on this would be greatly appreciated!

    Read the article

  • Check if user in a database is banned JDBC

    - by user2297666
    Using an Oracle database, I need to perform a check to see if a user in my 'users' table is banned or not. The user is banned if his 'banned' column has a value of 1, and not banned if it is 0. I have the following working code:

        public boolean banUser(String username) { //TODO check if user is banned already
            try {
                pstmnt = conn.prepareStatement("UPDATE users SET banned = 1 WHERE username = ?");
                pstmnt.setString(1, username);
                pstmnt.execute();
                logger.info("Banned User : " + username);
                return true;
            } catch (SQLException e) {
                e.getMessage();
            }
            return false;
        }

    I'm not sure how to perform an if statement on top of a prepared statement. Any ideas?
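
    One way to add the check is to run a lookup query first and only issue the UPDATE when the user is not already banned. A minimal sketch of that lookup, assuming the users table described above (it would be executed with a second PreparedStatement, and the first row of its ResultSet inspected before calling the update):

        SELECT banned
          FROM users
         WHERE username = ?

    If the returned value is already 1, the method can skip the UPDATE and return false; otherwise it proceeds with the UPDATE shown above.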

    Read the article

  • BSOD & System Failure after trying to install a new RAM

    - by Praveen Kumar
    I have updated the question with sections, so that people won't find it difficult to read. Basic System Information Let me give a basic introduction on my system. I have a system of following configuration: Processor: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz 3.40GHz RAM: Corsair Vengeance - 4GB Single Module DDR3 Memory Kit (CMZ4GX3M1A1600C9) x 2 OS: Windows 7 Ultimate, SP1 Build 7601 HDD: 1 TB Seagate 7200 RPM The Problem It was working fine for about an year. Yesterday I planned to increase my RAM to 16 GB by putting another set of two Corsair Vengeance - 4GB Single Module DDR3 Memory Kit (CMZ4GX3M1A1600C9). I got it from an authorized reseller and also, the RAM was fitted by a service engineer only. After the RAM was fit (all the four), the system failed to start, with an error code of 0x000000f4. The complete information of it is: Problem signature: Problem Event Name: BlueScreen OS Version: 6.1.7601.2.1.0.256.1 Locale ID: 16393 Additional information about the problem: BCCode: f4 BCP1: 0000000000000003 BCP2: FFFFFA8008A39060 BCP3: FFFFFA8008A39340 BCP4: FFFFF800037C8510 OS Version: 6_1_7601 Service Pack: 1_0 Product: 256_1 Files that help describe the problem: C:\Windows\Minidump\093012-13041-01.dmp C:\Users\Praveen Kumar\AppData\Local\Temp\WER-30716-0.sysdata.xml Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409 If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-US\erofflps.txt Another Problem We first thought that it was the RAM, which caused the issue. So I returned the RAMs and now my computer configuration is exactly how it was the previous day. But, following the removal of the RAM, I also had several crashes after that. One suspicious thing was with an error code c0000134: STOP: c0000135 The program can’t start because %hs is missing from your computer . Try resintalling the program to fix this problem. After reading contents from this, this and this, which were never my case, they didn't help me. But I didn't receive any more STOP c0000134 messages. But this 0x000000f4 keeps on coming. I am writing from the same system and it allows me to work for say, half an hour max. Then I hear a device disconnect sound, the one you hear in Windows 7, when a USB Mass Storage Device is plugged out. Immediately following that, my screen goes blank and I get 0x000000f4 blue screen. Okay, now I am really concerned about my Hard Disk data, but I have no clue if there is a problem with the HDD. My Question What all files do I need to submit for your reference? Can this issue be fixed? I am getting more time if I remove my RAM, clean it and then put it back. Weird! Hope I have given the necessary information to help you guys. Thanks in advance. Minidumps I have uploaded all the Minidump DMP files from C:\Windows\Minidump folder here: http://www.praveen-kumar.com/Minidumps.zip Let me know if you face any issues in accessing it. Will be able to share elsewhere. Updates 30-Sep-2012 10:15 AM IST: When I keep the system cover opened, pressed the HDD Cable well, it is allowing me to be on for about half an hour, I guess? Also, I feel that the CPU fan speed is kind of slow. It rotates at around 900 RPM, but the CPU Temperature is not more than 70° C. 30-Sep-2012 10:30 AM IST: My Modem (Beetel 220BX ADSL2+ Router) failed. I have no idea how it is related to this issue, but I thought that I need to document this too. I really have a bad day here. 
30-Sep-2012 11:00 AM IST: System still running fine, with the cabinet cover open, now for about an hour. 30-Sep-2012 12:00 PM IST: I shut down the system and closed the cabinet. Started the system, and it hung after giving the password. After a few minutes, got the same 0x000000f4 error. So, while it is in the upright position, fixed the Hard Disk cable and now it is booting fine. Waiting for more observations and answers.

    Read the article

  • ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor

    - by ScottGu
    This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features: New @model keyword in Razor (Oct 19th) Layouts with Razor (Oct 22nd) Server-Side Comments with Razor (Nov 12th) Razor’s @: and <text> syntax (Dec 15th) Implicit and Explicit code nuggets with Razor (today) In today’s post I’m going to discuss how Razor enables you to both implicitly and explicitly define code nuggets within your view templates, and walkthrough some code examples of each of them.  Fluid Coding with Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to the existing .aspx view engine).  You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post. Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. For example, the Razor snippet below can be used to iterate a collection of products and output a <ul> list of product names that link to their corresponding product pages: When run, the above code generates output like below: Notice above how we were able to embed two code nuggets within the content of the foreach loop.  One of them outputs the name of the Product, and the other embeds the ProductID within a hyperlink.  Notice that we didn’t have to explicitly wrap these code-nuggets - Razor was instead smart enough to implicitly identify where the code began and ended in both of these situations.  How Razor Enables Implicit Code Nuggets Razor does not define its own language.  Instead, the code you write within Razor code nuggets is standard C# or VB.  This allows you to re-use your existing language skills, and avoid having to learn a customized language grammar. The Razor parser has smarts built into it so that whenever possible you do not need to explicitly mark the end of C#/VB code nuggets you write.  This makes coding more fluid and productive, and enables a nice, clean, concise template syntax.  Below are a few scenarios that Razor supports where you can avoid having to explicitly mark the beginning/end of a code nugget, and instead have Razor implicitly identify the code nugget scope for you: Property Access Razor allows you to output a variable value, or a sub-property on a variable that is referenced via “dot” notation: You can also use “dot” notation to access sub-properties multiple levels deep: Array/Collection Indexing: Razor allows you to index into collections or arrays: Calling Methods: Razor also allows you to invoke methods: Notice how for all of the scenarios above how we did not have to explicitly end the code nugget.  Razor was able to implicitly identify the end of the code block for us. 
Razor’s Parsing Algorithm for Code Nuggets The below algorithm captures the core parsing logic we use to support “@” expressions within Razor, and to enable the implicit code nugget scenarios above: Parse an identifier - As soon as we see a character that isn't valid in a C# or VB identifier, we stop and move to step 2 Check for brackets - If we see "(" or "[", go to step 2.1., otherwise, go to step 3  Parse until the matching ")" or "]" (we track nested "()" and "[]" pairs and ignore "()[]" we see in strings or comments) Go back to step 2 Check for a "." - If we see one, go to step 3.1, otherwise, DO NOT ACCEPT THE "." as code, and go to step 4 If the character AFTER the "." is a valid identifier, accept the "." and go back to step 1, otherwise, go to step 4 Done! Differentiating between code and content Step 3.1 is a particularly interesting part of the above algorithm, and enables Razor to differentiate between scenarios where an identifier is being used as part of the code statement, and when it should instead be treated as static content: Notice how in the snippet above we have ? and ! characters at the end of our code nuggets.  These are both legal C# identifiers – but Razor is able to implicitly identify that they should be treated as static string content as opposed to being part of the code expression because there is whitespace after them.  This is pretty cool and saves us keystrokes. Explicit Code Nuggets in Razor Razor is smart enough to implicitly identify a lot of code nugget scenarios.  But there are still times when you want/need to be more explicit in how you scope the code nugget expression.  The @(expression) syntax allows you to do this: You can write any C#/VB code statement you want within the @() syntax.  Razor will treat the wrapping () characters as the explicit scope of the code nugget statement.  Below are a few scenarios where we could use the explicit code nugget feature: Perform Arithmetic Calculation/Modification: You can perform arithmetic calculations within an explicit code nugget: Appending Text to a Code Expression Result: You can use the explicit expression syntax to append static text at the end of a code nugget without having to worry about it being incorrectly parsed as code: Above we have embedded a code nugget within an <img> element’s src attribute.  It allows us to link to images with URLs like “/Images/Beverages.jpg”.  Without the explicit parenthesis, Razor would have looked for a “.jpg” property on the CategoryName (and raised an error).  By being explicit we can clearly denote where the code ends and the text begins. Using Generics and Lambdas Explicit expressions also allow us to use generic types and generic methods within code expressions – and enable us to avoid the <> characters in generics from being ambiguous with tag elements. One More Thing….Intellisense within Attributes We have used code nuggets within HTML attributes in several of the examples above.  One nice feature supported by the Razor code editor within Visual Studio is the ability to still get VB/C# intellisense when doing this. Below is an example of C# code intellisense when using an implicit code nugget within an <a> href=”” attribute: Below is an example of C# code intellisense when using an explicit code nugget embedded in the middle of a <img> src=”” attribute: Notice how we are getting full code intellisense for both scenarios – despite the fact that the code expression is embedded within an HTML attribute (something the existing .aspx code editor doesn’t support).  
This makes writing code even easier, and ensures that you can take advantage of intellisense everywhere. Summary Razor enables a clean and concise templating syntax that enables a very fluid coding workflow.  Razor’s ability to implicitly scope code nuggets reduces the amount of typing you need to perform, and leaves you with really clean code. When necessary, you can also explicitly scope code expressions using a @(expression) syntax to provide greater clarity around your intent, as well as to disambiguate code statements from static markup. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Using linked servers, OPENROWSET and OPENQUERY

    - by BuckWoody
    SQL Server has a few mechanisms to reach out to another server (even another server type) and query data from within a Transact-SQL statement. Among them are a set of stored credentials and information (called a Linked Server), a statement that uses a linked server called OPENQUERY, another called OPENROWSET, and one called OPENDATASOURCE. This post isn't about those particular functions or statements – hit the links for more if you're new to those topics. I'm actually more concerned about where I see these used than the particular method. In many cases, a Linked Server isn't another Relational Database Management System (RDBMS) like Oracle or DB2 (which is possible with a linked server), but another SQL Server. My concern is that linked servers are the new Data Transformation Services (DTS) from SQL Server 2000 – something that was designed for one purpose but which is being morphed into something much more. In the case of DTS, most of us turned that feature into a full-fledged job system. What was designed as a simple data import and export system has been pressed into service doing logic, routing and timing. And of course we all know how painful it was to move off of a complex DTS system onto SQL Server Integration Services. In the case of linked servers, what should be used as a method of running a simple query or two on another server, where you have an occasional connection or need a quick import of a small data set, is morphing into a full federation strategy. In some cases I've seen a complex web of linked servers, and when credentials, names or anything else changes there are huge problems. Now don't get me wrong – linked servers and other forms of distributed queries are a fantastic set of tools that we have to move data around. I'm just saying that when you start having lots of workarounds and when things get really complicated, you might want to step back a little and ask if there's a better way. Are you able to tolerate some latency? Perhaps you're able to use Service Broker. Would you like to be platform-independent on the data source? Perhaps a middle tier might make more sense, abstracting the queries there and sending them to the proper server. Designed properly, I've seen these systems scale further and be more resilient than loading up on linked servers.
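
    For readers new to the syntax, here is a minimal sketch of the two query styles mentioned above, assuming a linked server named REMOTESRV has already been defined (the server, database and table names are purely illustrative):

        -- Four-part name through the linked server
        SELECT TOP (10) *
          FROM REMOTESRV.SalesDb.dbo.Orders;

        -- OPENQUERY ships the inner query to the remote server for execution
        SELECT *
          FROM OPENQUERY(REMOTESRV, 'SELECT OrderID, OrderDate FROM dbo.Orders');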

    Read the article

  • Top 10 Reasons SQL Developer is Perfect for Oracle Beginners

    - by thatjeffsmith
    Learning new technologies can be daunting. If you’ve never used a Mac before, you’ll probably be a bit baffled at first. But, you’re probably at least coming from a desktop computing background (Windows), so you common frame of reference. But what if you’re just now learning to use a relational database? Yes, you’ve played with Access a bit, but now your employer or college instructor has charged you with becoming proficient with Oracle database. Here’s 10 reasons why I think Oracle SQL Developer is the perfect vehicle to help get you started. 1. It’s free No need to break into one of these… No start-up costs, no need to wrangle budget dollars from your company. Students don’t have any money after books and lab fees anyway. And most employees don’t like having to ask for ‘special’ software anyway. So avoid all of that and make sure the free stuff doesn’t suit your needs first. Upgrades are available on a regular base, also at no cost, and support is freely available via our public forums. 2. It will run pretty much anywhere Windows – check. OSX (Apple) – check. Unix – check. Linux – check. No need to start up a windows VM to run your Windows-only software in your lab machine. 3. Anyone can install it There’s no installer, no registry to be updated, no admin privs to be obtained. If you can download and extract files to your machine or USB storage device, you can run it. You can be up and running with SQL Developer in under 5 minutes. Here’s a video tutorial to see how to get started. 4. It’s ubiquitous I admit it, I learned a new word yesterday and I wanted an excuse to use it. SQL Developer’s everywhere. It’s had over 2,500,000 downloads in the past year, and is the one of the most downloaded items from OTN. This means if you need help, there’s someone sitting nearby you that can assist, and since they’re in the same tool as you, they’ll be speaking the same language. 5. Simple User Interface Up-up-down-down-Left-right-left-right-A-B-A-B-START will get you 30 lives, but you already knew that, right? You connect, you see your objects, you click on your objects. Or, you can use the worksheet to write your queries and programs in. There’s only one toolbar, and just a few buttons. If you’re like me, video games became less fun when each button had 6 action items mapped to it. I just want the good ole ‘A’, ‘B’, ‘SELECT’, and ‘START’ controls. If you’re new to Oracle, you shouldn’t have the double-workload of learning a new complicated tool as well. 6. It’s not a ‘black box’ Click through your objects, but also get the SQL that drives the GUI As you use the wizards to accomplish tasks for you, you can view the SQL statement being generated on your behalf. Just because you have a GUI, doesn’t mean you’re ceding your responsibility to learn the underlying code that makes the database work. 7. It’s four tools in one It’s not just a query tool. Maybe you need to design a data model first? Or maybe you need to migrate your Sybase ASE database to Oracle for a new project? Or maybe you need to create some reports? SQL Developer does all of that. So once you get comfortable with one part of the tool, the others will be much easier to pick up as your needs change. 8. Great learning resources available Videos, blogs, hands-on learning labs – you name it, we got it. Why wait for someone to train you, when you can train yourself at your own pace? 9. 
You can use it to teach yourself SQL Instead of being faced with the white-screen-of-panic, you can visually build your queries by dragging and dropping tables and views into the Query Builder. Yes, ‘just like Access’ – only better. And as you build your query, toggle to the Worksheet panel and see the SQL statement. Again, SQL Developer is not a black box. If you prefer to learn by trial and error, the worksheet will attempt to suggest the next bit of your SQL statement with it’s completion insight feature. And if you have syntax errors, those will be highlighted – just like your misspelled words in your favorite word processor. 10. It scales to match your experience level You won’t be a n00b forever. In 6-8 months, when you’re ready to tackle something a bit more complicated, like XML DB or Oracle Spatial, the tool is already there waiting on you. No need to go out and find the ‘advanced’ tool. 11. Wait, you said this was a ‘Top 10′ list? Yes. Yes, I did. I’m using this ‘trick’ to get you to continue reading because I’m going to say something you might not want to hear. Are you ready? Tools won’t replace experience, failure, hard work, and training. Just because you have the keys to the car, doesn’t mean you’re ready to head out on the race track. While SQL Developer reduces the barriers to entry, it does not completely remove them. Many experienced folks simply do not like tools. Rather, they don’t like the people that pick up tools without the know-how to properly use them. If you don’t understand what ‘TRUNCATE’ means, don’t try it out. Try picking up a book first. Of course, it’s very nice to have your own sandbox to play in, so you don’t upset the other children. That’s why I really like our Dev Days Database Virtual Box image. It’s your own database to learn and experiment with.

    Read the article

  • MODx character encoding

    - by Piet
    Ahhh character encodings. Don't you just love them? Having character issues in MODx? Then probably the MODx manager character encoding, the character encoding of the site itself, your database's character encoding, or the encoding MODx/php uses to talk to MySQL isn't correct.

    The Website Encoding

    Your MODx site's character encoding can be configured in the manager under Tools/Configuration/Site/Character encoding. This is the encoding your website's visitors will get.

    The Manager's Encoding

    The manager's encoding can be changed by setting $modx_manager_charset in manager/includes/lang/<language>.inc.php like this (for example): $modx_manager_charset = 'iso-8859-1'; To find out what language you're using (and thus which file you need to change), check Tools/Configuration/Site/Language (1 line above the character encoding setting). This needs to be the same encoding as your site. You can't have your manager in utf8 and your site in iso-8859-1.

    Your Database's Encoding

    The charset MODx/php uses to talk to your database can be set by changing $database_connection_charset in manager/includes/config.inc.php. This needs to be the same as your database's charset. Make sure you use the correct corresponding charset: for iso-8859-1 you need to use 'latin1'; utf8 is just utf8. Example: $database_connection_charset = 'latin1';

    Now, if you check Reports/System info, the 'Database Charset' might say something else. This is because the mysql variable 'character_set_database' is displayed here, which contains the character set used by the default database and not the one for the current database/connection. However, if you'd change this to display 'character_set_connection', it could still say something else, because the 'set character set' statement used by MODx doesn't change this value either. The 'set names' statement does, but since it turns out my MODx install now works as expected I'll just leave it at this before I get a headache. If I saved you a potential headache or you think I'm totally wrong or overlooked something, let me know in the comments. btw: I want to be able to use a real editor with MODx. Somehow.
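
    For reference, a minimal sketch of the two MySQL statements mentioned above and a way to see what they actually change (behaviour summarised from the MySQL documentation rather than from the post itself):

        -- SET NAMES sets character_set_client, character_set_connection
        -- and character_set_results to the given charset.
        SET NAMES 'utf8';

        -- SET CHARACTER SET sets character_set_client and character_set_results,
        -- but resets character_set_connection to the database default.
        SET CHARACTER SET 'utf8';

        -- Inspect the session values to see the difference.
        SHOW VARIABLES LIKE 'character_set_%';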

    Read the article

  • SQL SERVER – Best Reference – Wait Type – Day 27 of 28

    - by pinaldave
    I have had a great learning experience writing my article series on Extended Events. It was truly a learning experience where I learned way more than I would have learned otherwise. Besides my blog series, there were excellent quality references available on the internet which one can use to learn this subject further. Here is the list of resources (in no particular order):

      - sys.dm_os_wait_stats (Books Online) – This is an excellent beginning point and the official documentation on the wait type descriptions.
      - SQL Server Best Practices Article by Tom Davidson – I think it goes without saying that this document is the BEST reference available on this subject.
      - Performance Tuning with Wait Statistics by Joe Sack – One of the best slide decks available on this subject. It covers many real world scenarios.
      - Wait statistics, or please tell me where it hurts by Paul Randal – Notes from the real world from SQL Server Skilled Master Paul Randal.
      - The SQL Server Wait Type Repository… by Bob Ward – A thorough article on wait types and their resolution. A MUST read.
      - Tracking Session and Statement Level Waits by Jonathan Kehayias – A unique article on the subject where wait stats and extended events come together.
      - Wait Stats Introductory References by Jimmy May – An excellent collection of reference links.
      - Great Resource On SQL Server Wait Types by Glenn Berry – A perfect DMV to find top wait stats.
      - Performance Blog by Idera – An in-depth article on top wait statistics in the community.

    I have listed all the references I have found, in no particular order. If I have missed any good reference, please leave a comment and I will add it to the list. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Tracking Session and Statement Level Waits Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • SSIS - XML Source Script

    - by simonsabin
    The XML Source in SSIS is great if you have a 1 to 1 mapping between entity and table. You can do more complex mapping but it becomes very messy and won't perform. What other options do you have? The challenge with XML processing is to not need a huge amount of memory. I remember using the early versions of Biztalk with loaded the whole document into memory to map from one document type to another. This was fine for small documents but was an absolute killer for large documents. You therefore need a streaming approach. For flexibility however you want to be able to generate your rows easily, and if you've ever used the XmlReader you will know its ugly code to write. That brings me on to LINQ. The is an implementation of LINQ over XML which is really nice. You can write nice LINQ queries instead of the XMLReader stuff. The downside is that by default LINQ to XML requires a whole XML document to work with. No streaming. Your code would look like this. We create an XDocument and then enumerate over a set of annoymous types we generate from our LINQ statement XDocument x = XDocument.Load("C:\\TEMP\\CustomerOrders-Attribute.xml");   foreach (var xdata in (from customer in x.Elements("OrderInterface").Elements("Customer")                        from order in customer.Elements("Orders").Elements("Order")                        select new { Account = customer.Attribute("AccountNumber").Value                                   , OrderDate = order.Attribute("OrderDate").Value }                        )) {     Output0Buffer.AddRow();     Output0Buffer.AccountNumber = xdata.Account;     Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate); } As I said the downside to this is that you are loading the whole document into memory. I did some googling and came across some helpful videos from a nice UK DPE Mike Taulty http://www.microsoft.com/uk/msdn/screencasts/screencast/289/LINQ-to-XML-Streaming-In-Large-Documents.aspx. Which show you how you can combine LINQ and the XmlReader to get a semi streaming approach. I took what he did and implemented it in SSIS. What I found odd was that when I ran it I got different numbers between using the streamed and non streamed versions. I found the cause was a little bug in Mikes code that causes the pointer in the XmlReader to progress past the start of the element and thus foreach (var xdata in (from customer in StreamReader("C:\\TEMP\\CustomerOrders-Attribute.xml","Customer")                                from order in customer.Elements("Orders").Elements("Order")                                select new { Account = customer.Attribute("AccountNumber").Value                                           , OrderDate = order.Attribute("OrderDate").Value }                                ))         {             Output0Buffer.AddRow();             Output0Buffer.AccountNumber = xdata.Account;             Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate);         } These look very similiar and they are the key element is the method we are calling, StreamReader. This method is what gives us streaming, what it does is return a enumerable list of elements, because of the way that LINQ works this results in the data being streamed in. 
static IEnumerable<XElement> StreamReader(String filename, string elementName) {     using (XmlReader xr = XmlReader.Create(filename))     {         xr.MoveToContent();         while (xr.Read()) //Reads the first element         {             while (xr.NodeType == XmlNodeType.Element && xr.Name == elementName)             {                 XElement node = (XElement)XElement.ReadFrom(xr);                   yield return node;             }         }         xr.Close();     } } This code is specifically designed to return a list of the elements with a specific name. The first Read reads the root element and then the inner while loop checks to see if the current element is the type we want. If not we do the xr.Read() again until we find the element type we want. We then use the neat function XElement.ReadFrom to read an element and all its sub elements into an XElement. This is what is returned and can be consumed by the LINQ statement. Essentially once one element has been read we need to check if we are still on the same element type and name (the inner loop) This was Mikes mistake, if we called .Read again we would advance the XmlReader beyond the start of the Element and so the ReadFrom method wouldn't work. So with the code above you can use what ever LINQ statement you like to flatten your XML into the rowsets you want. You could even have multiple outputs and generate your own surrogate keys.        

    Read the article

  • Enhanced REST Support in Oracle Service Bus 11gR1

    - by jeff.x.davies
    In a previous entry on REST and Oracle Service Bus (see http://blogs.oracle.com/jeffdavies/2009/06/restful_services_with_oracle_s_1.html) I encoded the REST query string really as part of the relative URL. For example, consider the following URI: http://localhost:7001/SimpleREST/Products/id=1234 Now, technically there is nothing wrong with this approach. However, it is generally more common to encode the search parameters into the query string. Take a look at the following URI that shows this principle http://localhost:7001/SimpleREST/Products?id=1234 At first blush this appears to be a trivial change. However, this approach is more intuitive, especially if you are passing in multiple parameters. For example: http://localhost:7001/SimpleREST/Products?cat=electronics&subcat=television&mfg=sony The above URI is obviously used to retrieve a list of televisions made by Sony. In prior versions of OSB (before 11gR1PS3), parsing the query string of a URI was more difficult than in the current release. In 11gR1PS3 it is now much easier to parse the query strings, which in turn makes developing REST services in OSB even easier. In this blog entry, we will re-implement the REST-ful Products services using query strings for passing parameter information. Lets begin with the implementation of the Products REST service. This service is implemented in the Products.proxy file of the project. Lets begin with the overall structure of the service, as shown in the following screenshot. This is a common pattern for REST services in the Oracle Service Bus. You implement different flows for each of the HTTP verbs that you want your service to support. Lets take a look at how the GET verb is implemented. This is the path that is taken of you were to point your browser to: http://localhost:7001/SimpleREST/Products/id=1234 There is an Assign action in the request pipeline that shows how to extract a query parameter. Here is the expression that is used to extract the id parameter: $inbound/ctx:transport/ctx:request/http:query-parameters/http:parameter[@name="id"]/@value The Assign action that stores the value into an OSB variable named id. Using this type of XPath statement you can query for any variables by name, without regard to their order in the parameter list. The Log statement is there simply to provided some debugging info in the OSB server console. The response pipeline contains a Replace action that constructs the response document for our rest service. Most of the response data is static, but the ID field that is returned is set based upon the query-parameter that was passed into the REST proxy. Testing the REST service with a browser is very simple. Just point it to the URL I showed you earlier. However, the browser is really only good for testing simple GET services. The OSB Test Console provides a much more robust environment for testing REST services, no matter which HTTP verb is used. Lets see how to use the Test Console to test this GET service. Open the OSB we console (http://localhost:7001/sbconsole) and log in as the administrator. Click on the Test Console icon (the little "bug") next to the Products proxy service in the SimpleREST project. This will bring up the Test Console browser window. Unlike SOAP services, we don't need to do much work in the request document because all of our request information will be encoded into the URI of the service itself. Belore the Request Document section of the Test Console is the Transport section. 
Expand that section and modify the query-parameters and http-method fields as shown in the next screenshot. By default, the query-parameters field will have the tags already defined. You just need to add a tag for each parameter you want to pass into the service. For out purposes with this particular call, you'd set the quer-parameters field as follows: <tp:parameter name="id" value="1234" /> </tp:query-parameters> Now you are ready to push the Execute button to see the results of the call. That covers the process for parsing query parameters using OSB. However, what if you have an OSB proxy service that needs to consume a REST-ful service? How do you tell OSB to pass the query parameters to the external service? In the sample code you will see a 2nd proxy service called CallREST. It invokes the Products proxy service in exactly the same way it would invoke any REST service. Our CallREST proxy service is defined as a SOAP service. This help to demonstrate OSBs ability to mediate between service consumers and service providers, decreasing the level of coupling between them. If you examine the message flow for the CallREST proxy service, you'll see that it uses an Operational branch to isolate processing logic for each operation that is defined by the SOAP service. We will focus on the getProductDetail branch, that calls the Products REST service using the HTTP GET verb. Expand the getProduct pipeline and the stage node that it contains. There is a single Assign statement that simply extracts the productID from the SOA request and stores it in a local OSB variable. Nothing suprising here. The real work (and the real learning) occurs in the Route node below the pipeline. The first thing to learn is that you need to use a route node when calling REST services, not a Service Callout or a Publish action. That's because only the Routing action has access to the $oubound variable, especially when invoking a business service. The Routing action contains 3 Insert actions. The first Insert action shows how to specify the HTTP verb as a GET. The second insert action simply inserts the XML node into the request. This element does not exist in the request by default, so we need to add it manually. Now that we have the element defined in our outbound request, we can fill it with the parameters that we want to send to the REST service. In the following screenshot you can see how we define the id parameter based on the productID value we extracted earlier from the SOAP request document. That expression will look for the parameter that has the name id and extract its value. That's all there is to it. You now know how to take full advantage of the query parameter parsing capability of the Oracle Service Bus 11gR1PS2. Download the sample source code here: rest2_sbconfig.jar Ubuntu and the OSB Test Console You will get an error when you try to use the Test Console with the Oracle Service Bus, using Ubuntu (or likely a number of other Linux distros also). The error (shown below) will state that the Test Console service is not running. The fix for this problem is quite simple. Open up the WebLogic Server administrator console (usually running at http://localhost:7001/console). In the Domain Structure window on the left side of the console, select the Servers entry under the Environment heading. The select the Admin Server entry in the main window of the console. By default, you should be viewing the Configuration tabe and the General sub tab in the main window. Look for the Listen Address field. 
By default it is blank, which means it is listening on all interfaces. For some reason Ubuntu doesn't like this. So enter a value like localhost or the specific IP address or DNS name for your server (usually its just localhost in development envirionments). Save your changes and restart the server. Your Test Console will now work correctly.

    Read the article

  • The embarrassingly obvious about SQL Server CE

    - by Edward Boyle
    I have been working with SQL servers in one form or another for almost two decades now. But I am new to SQL Server Compact Edition. In the past weeks I have been working with SQL Server CE a lot. The SQL is not a problem, but the engine itself is very new to me. One of the issues I ran into was a simple SQL statement taking excessive amounts of time; by excessive, I mean over one second. I wrote a little code to time the method. Sometimes it took under one second, other times as long as three seconds. But it was a simple update statement! As embarrassing as it is, why it was slow eluded me. I posted my issue to MSDN and I got a reply from ErikEJ (MS MVP), who runs the blog "Everything SQL Server Compact". I know little to nothing about SQL Server Compact. This guy is completely obsessed with, and very well versed in, CE. If you spend any time in the MSDN forums, it seems that this guy single-handedly has the answer for every CE question that comes up. Anyway, he said: "Opening a connection to a SQL Server Compact database file is a costly operation, keep one connection open per thread (incl. your UI thread) in your app, the one on the UI thread should live for the duration of your app." It hit me: all databases have some connection overhead, and SQL Server CE is not a database engine running as a service, drinking Jolt Cola, waiting for someone to talk to him so he can spring into action and show off his quarter-mile sprint capabilities. Imagine if you had to start the SQL Server process every time you needed to make a database connection. In principle, that is what you are doing with SQL Server CE. For someone who has worked with enterprise-level SQL Servers a lot, I had to come to the mental image that my open connection to SQL Server CE is basically starting a service, my own private service, and by closing the connection, I am shutting down my little private service. After making the changes in my code, I lost any reservations I had about using CE. At present, my Data Access Layer class has a constructor; in that constructor I open my connection. I also have OpenConnection and CloseConnection methods, and I have implemented IDisposable and clean up any connections in Dispose(). I am still finalizing how this assembly will function. But that's beside the point. All I'm trying to say is: "Opening a connection to a SQL Server Compact database file is a costly operation."

    Read the article

  • SQL SERVER – ColumnStore Index – Batch Mode vs Row Mode

    - by pinaldave
    What do you do when you are in a hurry and hear someone say something you do not agree with, or that is simply wrong? Well, let me tell you what I do, or rather what I recently did. I was walking by and heard someone mention, "Columnstore Indexes are really great as they are using Batch Mode, which makes them seriously fast." When I heard this statement my first reaction was that a Columnstore Index can use both – Batch Mode and Row Mode. I stopped by, even though I was in a hurry, and asked the person whether he meant that Columnstore Indexes are seriously fast because they use Batch Mode all the time, or whether Batch Mode is one of the reasons for a Columnstore Index to be faster. He responded that Columnstore Indexes can run only in Batch Mode. However, I do not like to confront anybody without hearing their complete story. Honestly, I like to share information and avoid confrontation as much as possible. There are always ways to communicate the same thing positively. Well, this is what I did: I quickly pulled up my earlier article on the Columnstore Index and copied the script to SQL Server Management Studio. I created two versions of the script: 1) a very large table and 2) a reasonably small table. I ran a query which uses the columnstore index on both versions, and I found the results of my tests very interesting. I saved my tests and sent them to the person who had said that Columnstore Indexes use Batch Mode only. He immediately acknowledged that he was indeed incorrect in saying that a Columnstore Index uses only Batch Mode. What really caught my attention is that he also thanked me for sending him a detailed email instead of just having an argument while he and I were both standing in the corridor with no way to prove any theory. Here are the screenshots of both scenarios: 1) Columnstore Index using Batch Mode, 2) Columnstore Index using Row Mode. Here is the logic behind when a Columnstore Index uses Batch Mode and when it uses Row Mode. A batch typically represents about 1000 rows of data. Batch mode processing also uses algorithms that are optimized for multicore CPUs and increased memory throughput. Batch mode processing spreads metadata access costs and overhead over all the rows in a batch. Batch mode processing operates on compressed data when possible, leading to superior performance. Here is one last point – a Columnstore Index can use Batch Mode or Row Mode, but Batch Mode processing is only available with a Columnstore Index. I hope this statement truly sums up the whole concept. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
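
    For anyone who wants to reproduce the comparison, here is a minimal sketch of the kind of test involved, assuming nothing about Pinal's original script (the table and column names below are made up purely for illustration). With a very small table the optimizer will often choose row mode even though a columnstore index is present, and the choice is visible in the Actual Execution Mode property of the scan operator in the actual execution plan:

        -- SQL Server 2012+ nonclustered columnstore index on a demo table
        CREATE NONCLUSTERED COLUMNSTORE INDEX IX_Sales_CS
            ON dbo.SalesSmall (SaleDate, Amount);

        -- Run an aggregate and check the actual execution plan:
        -- the Columnstore Index Scan operator reports either
        -- "Batch" or "Row" as its Actual Execution Mode.
        SELECT SaleDate, SUM(Amount) AS TotalAmount
          FROM dbo.SalesSmall
         GROUP BY SaleDate;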

    Read the article

  • Microsoft Contributing More to OSS

    - by Josh Holmes
    I’m all excited – Microsoft has signed the Joomla! Contributor Agreement. You can read about that on the official Joomla! blog – Microsoft signs the Joomla! Contributor Agreement. There’s a couple of fairly momentous things about that statement. Read more at Microsoft Contributing More to OSS | Josh Holmes

    Read the article
