Search Results

Search found 5682 results on 228 pages for 'lord of scripts'.


  • Upgrade problem during Ubuntu 11.10

    - by pankaj singh
    During the upgrade, at the last moment it showed a memory problem, but I restarted my laptop in a hurry. Now it gives this kind of message; what can I do?

        * Starting automatic crash report generation [fail]
        PulseAudio configured for per-user sessions
        saned disabled; edit /etc/default/saned
        * Stopping save kernel messages [OK]
        Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service S90binfmt-support start
        Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the start(8) utility, e.g. start S90binfmt-support
        start: Unknown job: S90binfmt-support
        * Stopping anac(h)ronistic cron [OK]
        * Checking battery state... [OK]
        * Stopping System V runlevel compatibility [OK]

    Read the article

  • SQL SERVER – Download FREE PDFs from SQLAuthority.com

    - by Pinal Dave
    Throughout the last seven years we have created many PDF downloads on SQLAuthority.com, and many of them are much appreciated by users. I wanted to list all the downloads we have created so far in a single place, hence this blog post, which contains every PDF download we have published so far:

    SQL Server Interview Questions and Answers Download
    Beginning Big Data with NuoDB
    SQL Server Management Studio Keyboard Shortcuts Download
    SQL Server 2008 Certification Path Complete Download
    SQL Server Cheat Sheet Download
    SQL Server Database Coding Standards and Guidelines Complete List Download
    SQL Server Indexing Checklist

    Let me know which of the PDFs you like the most and whether you would like us to create any more PDF articles; leave a comment. Additionally, we have created a script bank for all the scripts which have been used on SQLAuthority.com so far. You can get access to the scripts by clicking on the following link: SQLAuthority.com Scripts Download. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Using SQL Source Control with Fortress or Vault – Part 1

    - by AjarnMark
    I am fanatical when it comes to managing the source code for my company.  Everything that we build (in source form) gets put into our source control management system.  And I’m not just talking about the UI and middle-tier code written in C# and ASP.NET, but also the back-end database stuff, which at times has been a pain.  We even script out our Scheduled Jobs and keep a copy of those under source control. The UI and middle-tier stuff has long been easy to manage as we mostly use Visual Studio which has integration with source control systems built in.  But the SQL code has been a little harder to deal with.  I have been doing this for many years, well before Microsoft came up with Data Dude, so I had already established a methodology that, while not as smooth as VS, nonetheless let me keep things well controlled, and allowed doing my database development in my tool of choice, Query Analyzer in days gone by, and now SQL Server Management Studio.  It just makes sense to me that if I’m going to do database development, let’s use the database tool set.  (Although, I have to admit I was pretty impressed with the demo of Juneau that Don Box did at the PASS Summit this year.)  So as I was saying, I had developed a methodology that worked well for us (and I’ll probably outline in a future post) but it could use some improvement. When Solutions and Projects were first introduced in SQL Management Studio, I thought we were finally going to get our same experience that we have in Visual Studio.  Well, let’s say I was underwhelmed by Version 1 in SQL 2005, and apparently so were enough other people that by the time SQL 2008 came out, Microsoft decided that Solutions and Projects would be deprecated and completely removed from a future version.  So much for that idea. Then I came across SQL Source Control from Red-Gate.  I have used several tools from Red-Gate in the past, including my favorites SQL Compare, SQL Prompt, and SQL Refactor.  SQL Prompt is worth its weight in gold, and the others are great, too.  Earlier this year, we upgraded from our earlier product bundles to the new Developer Bundle, and in the process added SQL Source Control to our collection.  I thought this might really be the golden ticket I was looking for.  But my hopes were quickly dashed when I discovered that it only integrated with Microsoft Team Foundation Server and Subversion as the source code repositories.  We have been using SourceGear’s Vault and Fortress products for years, and I wholeheartedly endorse them.  So I was out of luck for the time being, although there were a number of people voting for Vault/Fortress support on their feedback forum (as did I) so I had hope that maybe next year I could look at it again. But just a couple of weeks ago, I was pleasantly surprised to receive notice in my email that Red-Gate had an Early Access version of SQL Source Control that worked with Vault and Fortress, so I quickly downloaded it and have been putting it through its paces.  So far, I really like what I see, and I have been quite impressed with Red-Gate’s responsiveness when I have contacted them with any issues or concerns that I have had.  I have had several communications with Gyorgy Pocsi at Red-Gate and he has been immensely helpful and responsive. I must say that development with SQL Source Control is very different from what I have been used to.  
This post is getting long enough, so I’ll save some of the details for a separate write-up, but the short story is that in my regular mode, it’s all about the script files.  Script files are King and you dare not make a change to the database other than by way of a script file, or you are in deep trouble.  With SQL Source Control, you make your changes to your development database however you like.  I still prefer writing most of my changes in T-SQL, but you can also use any of the GUI functionality of SSMS to make your changes, and SQL Source Control “manages” the script for you.  Basically, when you first link your database to source control, the tool generates scripts for every primary object (tables and their indexes are together in one script, not broken out into separate scripts like DB Projects do) and those scripts are checked into your source control.  So, if you needed to, you could still do a GET from your source control repository and build the database from scratch.  But for the day-to-day work, SQL Source Control uses the same technique as SQL Compare to determine what changes have been made to your development database and how to represent those in your repository scripts.  I think that once I retrain myself to just work in the database and quit worrying about having to find and open the right script file, that this will actually make us more efficient. And for deployment purposes, SQL Source Control integrates with the full SQL Compare utility to produce a synchronization script (or do a live sync).  This is similar in concept to Microsoft’s DACPAC, if you’re familiar with that. If you are not currently keeping your database development efforts under source control, definitely examine this tool.  If you already have a methodology that is working for you, then I still think this is worth a review and comparison to your current approach.  You may find it more efficient.  But remember that the version which integrates with Vault/Fortress is still in pre-release mode, so treat it with a little caution.  I have found it to be fairly stable, but there was one bug that I found which had inconvenient side-effects and could have really been frustrating if I had been running this on my normal active development machine.  However, I can verify that that bug has been fixed in a more recent build version (did I mention Red-Gate’s responsiveness?).

    Read the article

  • JQGrid PDF Export

    - by thanigai
    Originally posted on: http://geekswithblogs.net/thanigai/archive/2013/06/17/jqgrdi-pdf-export.aspx

    JQGrid PDF Export. The aim of this article is to address PDF export from client-side grid frameworks. The solution is built with ASP.NET MVC 4 and Visual Studio 2012, and the article assumes the developer has a fair amount of knowledge of ASP.NET MVC and C#. Tools used: Visual Studio 2012, ASP.NET MVC 4, NuGet Package Manager.

    JQGrid is one of the client-side grid frameworks built on top of jQuery. It helps in building a nice grid with paging, sorting and editing options. There are also other features available as extension plugins, and developers can write their own if needed. You can download JQGrid from the JQGrid homepage or as a NuGet package. To use the package manager console, from the Tools menu select "Library Package Manager" and then "Package Manager Console" (screenshot in the original post). The install command pulls down the latest JQGrid package and adds the files to the Scripts folder. Once the script is downloaded and referenced in the project, update the bundle config (found in the App_Start folder) to add the script reference to the pages:

        bundles.Add(new StyleBundle("~/Content/jqgrid").Include("~/Content/ui.jqgrid.css"));
        bundles.Add(new ScriptBundle("~/bundles/jquerygrid").Include("~/Scripts/jqGrid/jquery.jqGrid*"));

    Once the configs are added, refer the bundles in Views/Shared/_Layout.cshtml. Add the following line to the head section of the page:

        @Styles.Render("~/Content/jqgrid")

    Add the following lines to the end of the page, before the closing html tag:

        @Scripts.Render("~/bundles/jquery")
        @Scripts.Render("~/bundles/jqueryui")
        @Scripts.Render("~/bundles/jquerygrid")

    That's all from the view perspective. Once these steps are done, the developer can start coding for the JQGrid. In this example we will modify the HomeController for the demo. The Index action will be the default action; we add a nullable bool argument to it, just to mark the PDF request. In Index.cshtml we add a table tag with the id "gridTable"; we will use this table for making the grid. Since JQGrid is a jQuery extension, we initialize the grid settings in the script section of the page. This section is placed at the end of the page, just below the bundle references for jQuery and jQueryUI, to improve performance; this is one of the improvement factors from Yahoo's "why slow" guidance.

        <table id="gridTable" class="scroll"></table>
        <input type="button" value="Export PDF" onclick="exportPDF();" />

        @section scripts {
        <script type="text/javascript">
            $(document).ready(function () {
                $("#gridTable").jqGrid({
                    datatype: "json",
                    url: '@Url.Action("GetCustomerDetails")',
                    mtype: 'GET',
                    colNames: ["CustomerID", "CustomerName", "Location", "PrimaryBusiness"],
                    colModel: [
                        { name: "CustomerID", width: 40, index: "CustomerID", align: "center" },
                        { name: "CustomerName", width: 40, index: "CustomerName", align: "center" },
                        { name: "Location", width: 40, index: "Location", align: "center" },
                        { name: "PrimaryBusiness", width: 40, index: "PrimaryBusiness", align: "center" }
                    ],
                    height: 250,
                    autowidth: true,
                    sortorder: "asc",
                    rowNum: 10,
                    rowList: [5, 10, 15, 20],
                    sortname: "CustomerID",
                    viewrecords: true
                });
            });

            function exportPDF() {
                document.location = '@Url.Action("Index")?pdf=true';
            }
        </script>
        }

    The exportPDF method just sets the document location to the Index action method with the pdf flag set to true, to mark the request as a PDF download. An in-memory list collection is used for demo purposes. The GetCustomerDetails method is the server-side action method that provides the data as a JSON list:

        [HttpGet]
        public JsonResult GetCustomerDetails()
        {
            var result = new
            {
                total = 1,
                page = 1,
                records = customerList.Count(),
                rows = customerList.Select(e => new
                {
                    id = e.CustomerID,
                    cell = new string[]
                    {
                        e.CustomerID.ToString(),
                        e.CustomerName,
                        e.Location,
                        e.PrimaryBusiness
                    }
                }).ToArray()
            };
            return Json(result, JsonRequestBehavior.AllowGet);
        }

    JQGrid expects the response data from the server in a certain format, and the server method shown above takes care of formatting the response so that JQGrid understands it. The response data should contain the total pages, the current page, the full record count, and the rows of data, each with an id and the remaining columns as a string array. The response is built using an anonymous object and sent back as an MVC JsonResult. Since we are using GET, it's better to mark the method with the HttpGet attribute and set the JSON request behavior to AllowGet. The in-memory list is initialized in the HomeController constructor:

        public class HomeController : Controller
        {
            private readonly IList<CustomerViewModel> customerList;

            public HomeController()
            {
                customerList = new List<CustomerViewModel>
                {
                    new CustomerViewModel { CustomerID = 100, CustomerName = "Sundar", Location = "Chennai", PrimaryBusiness = "Teaching" },
                    new CustomerViewModel { CustomerID = 101, CustomerName = "Sudhagar", Location = "Chennai", PrimaryBusiness = "Software" },
                    new CustomerViewModel { CustomerID = 102, CustomerName = "Thivagar", Location = "China", PrimaryBusiness = "SAP" },
                };
            }

            public ActionResult Index(bool? pdf)
            {
                if (!pdf.HasValue)
                {
                    return View(customerList);
                }
                string filePath = Server.MapPath("~/Content/") + "Sample.pdf";
                ExportPDF(customerList, new string[] { "CustomerID", "CustomerName", "Location", "PrimaryBusiness" }, filePath);
                return File(filePath, "application/pdf", "list.pdf");
            }
        }

    The Index action method has a nullable Boolean argument named "pdf", used to indicate a PDF download. When the application starts, this method is hit first for the initial page request. For the PDF operation, a file name is generated and then sent to the ExportPDF method, which takes care of generating the PDF from the data source:

        private static void ExportPDF<TSource>(IList<TSource> customerList, string[] columns, string filePath)
        {
            Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE);
            Font rowFont = FontFactory.GetFont("Verdana", 10, Color.BLUE);
            Document document = new Document(PageSize.A4);
            PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate));
            document.Open();

            PdfPTable table = new PdfPTable(columns.Length);
            foreach (var column in columns)
            {
                PdfPCell cell = new PdfPCell(new Phrase(column, headerFont));
                cell.BackgroundColor = Color.BLACK;
                table.AddCell(cell);
            }

            foreach (var item in customerList)
            {
                foreach (var column in columns)
                {
                    string value = item.GetType().GetProperty(column).GetValue(item, null).ToString();
                    PdfPCell cell = new PdfPCell(new Phrase(value, rowFont));
                    table.AddCell(cell);
                }
            }

            document.Add(table);
            document.Close();
        }

    iTextSharp is one of the pioneers in PDF export. It's an open-source library readily available as a NuGet package; the install command pulls down the latest available library. I am using version 4.1.2.0 (the latest version may have changed). There are three main classes in this library. Document creates the document sheet with a particular size; we have used A4, and there is also an option to define a rectangle size. This document instance is used by the other classes for reference. PdfWriter takes the file stream and the document as references, and enables the document class to generate the PDF content and save it to a file. Font controls the font features; since I want a nice-looking font I am using Verdana. Following this, PdfPTable and PdfPCell are used for generating the normal table layout.

    We created two fonts, one for the header and one for the rows. The header columns come in as a string array; the columns array is looped over and the header cells are generated using the header font. Then reflection is used to read each row's property values and fill the grid. Once that is done, the PDF table is added to the document and the document is closed, which writes all the changes to the given file path. Control then moves back to the controller, which sends the response as a File result with a file name. If the file name is not given, the PDF opens in the same page; otherwise a popup opens asking whether to save or open the file:

        return File(filePath, "application/pdf", "list.pdf");

    The final result screen is shown below, with the PDF file opened to show the output. Conclusion: this is how PDF export is done for JQGrid. The problem addressed here is that client-side grid frameworks don't support PDF export themselves; in that situation it's better to have fine-grained control over the data and the generated PDF, and iTextSharp has helped us achieve that goal.

    Read the article

  • How to recover unavailable memory in /dev/shm

    - by Alain Labbe
    Good day to all. I have a question regarding the use of /dev/shm. I use it as a temporary folder for large files, to speed up processing and save IO on the HD. My problem is that some of my scripts sometimes require "forceful" interruption for a variety of reasons. I can then manually remove the files left over in /dev/shm, but the memory is not returned to available space (as seen by df -h). Is there any way to recover the memory without restarting the system? I'm using Ubuntu 12.04 LTS and most of the scripts are Perl, running system calls on C programs (bioinformatics tools). Thanks.
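    A hedged aside: on a tmpfs like /dev/shm, space from a deleted file is only reclaimed once no process still holds it open, so an interrupted script that left file handles behind can keep the memory pinned. A quick sketch for locating the culprit (assuming standard lsof and fuser):

        # List open files whose on-disk link count is zero (deleted but still held open);
        # any process shown here is still pinning tmpfs memory.
        lsof +L1 /dev/shm

        # Alternatively, show every process holding something on the mount.
        fuser -vm /dev/shm

    Once the offending processes exit (or are killed), df -h should show the space returned.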

    Read the article

  • Should devs, testers and business users have one unified test script?

    - by Carlos Jaime C. De Leon
    In development, I would normally have my own test scripts that document the data, scenarios and execution steps that I plan to test; this is my dev test plan. When the functionality has been deployed to Test, testers test it using their own test script that they wrote. In UAT, the business user then tests using their own test plan. In retrospect, it looks like this provides better coverage, with dev tests having a mix of black and white box testing, while testers and business users focus on black box testing. But on the other hand, this produces distinct test cases that are only executed in one stage (i.e. some cases the testers thought of are only executed in the Test stage), and it then looks like the dev missed them, which makes them a finding/bug. Is it worth consolidating the test scripts from the start, thus using one unified test script, or is it a bit difficult to do this upfront?

    Read the article

  • Methods to Manage/Document "one-off" Reports

    - by Jason Holland
    I'm a programmer that also does database stuff, and I get a lot of so-called one-time report requests as well as recurring report requests. I work at a company with a SQL Server database that we integrate third-party data with, and we also have some third-party vendors whose proprietary reporting systems we have to use to extract data in flat-file format, data we don't integrate into SQL Server for security reasons. To generate many of these reports I have to query data from various systems, write small scripts to combine data from the separate systems, cry, pull my hair, curse the name of the last guy that made the report before me, etc. My question is: what are some good methods for documenting the steps taken to generate these reports, so the next poor soul that has to do them won't curse my name? As of now I just have a folder with subfolders per project, holding the selects and scripts that generated the last report, but that seems like a "poor man's" solution. :)
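    One lightweight convention, offered purely as a sketch (the report name and steps below are hypothetical): keep a short plain-text runbook next to the scripts in each report folder, so the folder itself documents the procedure:

        Report:       Monthly Vendor Reconciliation (hypothetical example)
        Requested by: J. Smith, Accounting, 2012-05-01
        Sources:      SQL Server (ServerA.SalesDB), VendorX portal flat-file export
        Steps:
          1. Export last month's data from the vendor portal; save as vendor.csv
          2. Run 01_load_staging.sql, then 02_report_query.sql
          3. Paste the result set into report_template.xlsx and refresh the pivot
        Gotchas:      vendor file is pipe-delimited; dates arrive as DD/MM/YYYY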

    Read the article

  • Easy user management on html site?

    - by James Buldon
    I hope I'm not asking a question for which the answer is obvious... If I am, apologies. Within my HTML site (i.e. not WordPress, Joomla, etc.) I want to have a level of user management, meaning that some pages should be accessible only to certain people with the correct username and password. What's the best way to do this? Are there any available scripts out there? I guess I'm looking for a free/open-source version of something like this: http://www.webassist.com/php-scripts-and-solutions/user-registration/
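    For illustration only (a minimal sketch of the usual pattern, not a specific product recommendation): protected pages are served as PHP, and each one includes a session check at the top. The file names here are hypothetical:

        <?php
        // protect.php: include this at the top of every protected page.
        session_start();
        if (empty($_SESSION['user'])) {
            // Not logged in: send the visitor to the login form and stop.
            header('Location: login.php');
            exit;
        }
        ?>

    A login.php form would then validate the submitted username and password and set $_SESSION['user'] on success.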

    Read the article

  • Oracle R Enterprise Tutorial Series on Oracle Learning Library

    - by mhornick
    Oracle Server Technologies Curriculum has just released the Oracle R Enterprise Tutorial Series, which is publicly available on the Oracle Learning Library (OLL). This eight-part interactive lecture series with review sessions covers Oracle R Enterprise 1.1 and an introduction to Oracle R Connector for Hadoop 1.1:

    Introducing Oracle R Enterprise
    Getting Started with ORE
    R Language Basics
    Producing Graphs in R
    The ORE Transparency Layer
    ORE Embedded R Scripts: R Interface
    ORE Embedded R Scripts: SQL Interface
    Using the Oracle R Connector for Hadoop

    We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for R-related software: Oracle R Distribution, Oracle R Enterprise, ROracle, Oracle R Connector for Hadoop. As always, we welcome comments and questions on the Oracle R Forum.

    Read the article

  • SQL Prompt Easter Egg

    - by Johnm
    Having Red Gate's SQL Prompt installed alongside SQL Server Management Studio has saved me many headaches over the years of its use. It is extremely nice to type in a table name and see not only the column names, but also their data types and an indication of primary keys. Another cool feature is the built-in shortcut scripts that are listed toward the bottom of the suggestion box. As an example of these shortcut scripts, type in the letters cv and hit Enter, and the following template for CREATE VIEW appears:

        CREATE VIEW
        --WITH ENCRYPTION, SCHEMABINDING, VIEW_METADATA
        AS
            SELECT /* query specification */
        -- WITH CHECK OPTION
        GO

    These scripts are great, and on occasion rather humorous. Recently, I was writing an UPDATE statement that would update a derived and aliased set of data. An example of such a statement is as follows:

        UPDATE y
        SET a.[FieldA] = b.[FieldB]
        FROM
            (
                SELECT
                    a.[FieldA]
                    ,b.[FieldB]
                FROM
                    [MyTableA] a
                    INNER JOIN [MyTableB] b
                        ON a.[PKA] = b.[PKB]
            ) y;

    Upon typing the UPDATE y portion I hit Enter, and the expression "A A A A R G H !" appeared, resulting in an unexpected burst of laughter. With a dash of curiosity and a pinch of research, I discovered that at the bottom of the SQL Prompt suggestion box resides a shortcut script called "yell", described as "Vent your frustration". Another humorous shortcut script is "neo", described as "-- I know Kung-Fu". All that is required for these to activate is to type the first letter and hit Enter. I wonder if there are any undocumented ones?

    Read the article

  • RoundhousE now supports Oracle, SQL2000

    RoundhousE, the database migration tool based on SQL scripts, has added support for Oracle and SQL Server 2000. There have also been numerous other small improvements, including better logging and a script run errors table. The script errors table captures what went wrong when/if your scripts are not quite up to par or there is some other issue. A special thanks goes out to http://twitter.com/PascalMestdach and http://twitter.com/jochenjonc. They worked hard on this and all I did was provide guidance.

    Read the article

  • Gumblar Attack

    Gumblar appears to be a combination of exploit scripts and malware. The scripts are embedded in .html, .js and .php files using obfuscated JavaScript. They load malware content from third-party sites without the user's knowledge, while also stealing FTP credentials from the victim's computer, which then allows the malware to spread and infect additional sites. So when someone visits an infected site they get infected; if they have FTP credentials for a website on their machine, then that site gets infected too. This explains the exponential growth of the exploit in such a short space of time.
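    As a purely illustrative aside (the post itself names no tooling), infections of this kind are often hunted with a recursive search for the obfuscation patterns the injected scripts rely on. The patterns below are examples, not a definitive Gumblar signature:

        # Flag web files containing common obfuscated-JavaScript injection telltales.
        grep -rlE 'eval\(unescape\(|document\.write\(unescape\(' \
            --include='*.html' --include='*.js' --include='*.php' /var/www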

    Read the article

  • Logging Errors with messages in Codeigniter

    - by user1260776
    I'm using CodeIgniter on a production server, and I'm not able to properly log the errors generated to a file. I edited php.ini like this:

        error_reporting = E_ALL | E_NOTICE | E_STRICT | E_WARNING
        display_errors = Off
        log_errors = On
        error_log = "/var/log/php-scripts.log"   ; this is where I would like to log all the errors and notices

    And php-scripts.log does show entries like this:

        [06-Jun-2012 03:22:20 UTC] PHP Deprecated: Directive 'safe_mode' is deprecated in PHP 5.3 and greater in Unknown on line 0
        [06-Jun-2012 03:26:06 UTC] PHP Deprecated: Directive 'safe_mode' is deprecated in PHP 5.3 and greater in Unknown on line 0
        [06-Jun-2012 03:30:05 UTC] PHP Deprecated: Directive 'safe_mode' is deprecated in PHP 5.3 and greater in Unknown on line 0

    Now, in the "index.php" file in my "public_html" folder (the rest of the CI folders are outside public_html), I have these settings:

        define('ENVIRONMENT', 'production');

        if (defined('ENVIRONMENT'))
        {
            switch (ENVIRONMENT)
            {
                case 'development':
                    error_reporting(E_ALL);
                    break;
                case 'testing':
                case 'production':
                    error_reporting(0);
                    break;
                default:
                    exit('The application environment is not set correctly.');
            }
        }

    Though everything seems to be fine, if I just change the environment to "development", this is what I get on my website homepage:

        A PHP Error was encountered
        Severity: Warning
        Message: fclose() expects parameter 1 to be resource, null given
        Filename: core/Common.php
        Line Number: 91

        A PHP Error was encountered
        Severity: Warning
        Message: Cannot modify header information - headers already sent by (output started at /home/theuser/codeigniter/system/core/Exceptions.php:185)
        Filename: core/Security.php
        Line Number: 188

    The rest of the page is also displayed. And when I look at php-scripts.log, I'm not able to see any of these errors there; only the safe_mode deprecation notices shown above. One more thing: I don't know how or where CodeIgniter itself would log the errors. The permissions on the "application/logs" folder are 777, but there is no log file there (I was expecting CodeIgniter to create a log file automatically; should I create one myself if I have to log errors there?). I've set these configurations in config/config.php:

        $config['log_threshold'] = 4;
        $config['log_path'] = ''; // hoping it would log errors at the default location

    Ideally, I just wish all those errors, warnings, and notices (with messages) that were displayed on my webpage were sent to the log file /var/log/php-scripts.log when the environment is "production". If that's not possible, I would also be fine with logging them somewhere else. I'm confused as to what the settings in the "index.php" page (or some other configuration) should be, so as to suppress all the errors and warnings on the webpage when the environment is "production" and send all those errors, warnings, and notices to php-scripts.log (or any other file). My PHP version is PHP 5.3.13 with Suhosin v0.9.33. Please help me with this. Thank you.
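    A hedged sketch of one common approach (not from the original question): in the production branch of index.php, keep error_reporting on so PHP can still log, and switch off on-screen display instead. Calling error_reporting(0) silences logging as well as display, which is one reason nothing reaches the file:

        <?php
        // Sketch of an index.php environment switch, assuming CodeIgniter 2.x conventions.
        define('ENVIRONMENT', 'production');

        switch (ENVIRONMENT) {
            case 'development':
                error_reporting(E_ALL);
                ini_set('display_errors', 1);
                break;
            case 'testing':
            case 'production':
                error_reporting(E_ALL);        // keep reporting on so errors reach the log
                ini_set('display_errors', 0);  // ...but never render them in the page
                ini_set('log_errors', 1);
                ini_set('error_log', '/var/log/php-scripts.log');
                break;
            default:
                exit('The application environment is not set correctly.');
        }

    For CodeIgniter's own file logging, $config['log_path'] must point to an existing writable directory (when left empty it defaults to application/logs/), and a threshold of 4 then logs debug messages as well as errors.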

    Read the article

  • Using the jQuery UI Library in a MVC 3 Application to Build a Dialog Form

    - by ChrisD
    Using a simulated dialog window is a nice way to handle inline data editing, and the jQuery UI has a dialog widget that makes it easy to get up and running with one in your application. With the release of ASP.NET MVC 3, Microsoft included the jQuery UI scripts and files in the MVC 3 project templates for Visual Studio, and with the release of the MVC 3 Tools Update, Microsoft implemented the inclusion of those via NuGet packages. That means we can get up and running using the latest version of the jQuery UI with minimal effort. To the code! (Another post that might interest you: jQuery Mobile and ASP.NET MVC 3 with C#.)

    If you are starting with a new MVC 3 application and have the Tools Update, then you are a NuGet update and a <link> and <script> tag away from adding the jQuery UI to your project. If you are using an existing MVC project, you can still get the jQuery UI library added via NuGet and then add the link and script tags. Assuming that you have pulled down the latest version (at the time of this publish it was 1.8.13), you can add the following link and script tags to your <head> tag:

        <link href="@Url.Content("~/Content/themes/base/jquery.ui.all.css")" rel="Stylesheet" type="text/css" />
        <script src="@Url.Content("~/Scripts/jquery-ui-1.8.13.min.js")" type="text/javascript"></script>

    The jQuery UI library relies upon the CSS files and some image files to handle rendering of its widgets (you can choose a different theme or roll your own if you like). Adding these to the stock _Layout.cshtml results in the following markup:

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="utf-8" />
            <title>@ViewBag.Title</title>
            <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
            <link href="@Url.Content("~/Content/themes/base/jquery.ui.all.css")" rel="Stylesheet" type="text/css" />
            <script src="@Url.Content("~/Scripts/jquery-1.5.1.min.js")" type="text/javascript"></script>
            <script src="@Url.Content("~/Scripts/modernizr-1.7.min.js")" type="text/javascript"></script>
            <script src="@Url.Content("~/Scripts/jquery-ui-1.8.13.min.js")" type="text/javascript"></script>
        </head>
        <body>
            @RenderBody()
        </body>
        </html>

    Our example will involve building a list of notes with an id, title and description. Each note can be edited and new notes can be added, and the user will never have to leave the single page of notes to manage the note data. The add and edit forms will be delivered in a jQuery UI dialog widget, and the note list content will get reloaded via an AJAX call after each change to the list. To begin, we need to craft a model and a data management class, so we can simulate data storage and get a feel for the workflow of the user experience. The first class, named Note, has properties to represent our data model:

        namespace Website.Models
        {
            public class Note
            {
                public int Id { get; set; }
                public string Title { get; set; }
                public string Body { get; set; }
            }
        }

    The second class, named NoteManager, is used to set up our simulated data storage and provide methods for querying and updating the data. We will take a look at the class as a whole and then walk through each method:

        using System.Collections.ObjectModel;
        using System.Linq;
        using System.Web;

        namespace Website.Models
        {
            public class NoteManager
            {
                public Collection<Note> Notes
                {
                    get
                    {
                        if (HttpRuntime.Cache["Notes"] == null)
                            this.loadInitialData();
                        return (Collection<Note>)HttpRuntime.Cache["Notes"];
                    }
                }

                private void loadInitialData()
                {
                    var notes = new Collection<Note>();
                    notes.Add(new Note { Id = 1, Title = "Set DVR for Sunday", Body = "Don't forget to record Game of Thrones!" });
                    notes.Add(new Note { Id = 2, Title = "Read MVC article", Body = "Check out the new iwantmymvc.com post" });
                    notes.Add(new Note { Id = 3, Title = "Pick up kid", Body = "Daughter out of school at 1:30pm on Thursday. Don't forget!" });
                    notes.Add(new Note { Id = 4, Title = "Paint", Body = "Finish the 2nd coat in the bathroom" });
                    HttpRuntime.Cache["Notes"] = notes;
                }

                public Collection<Note> GetAll()
                {
                    return Notes;
                }

                public Note GetById(int id)
                {
                    return Notes.Where(i => i.Id == id).FirstOrDefault();
                }

                public int Save(Note item)
                {
                    if (item.Id <= 0)
                        return saveAsNew(item);
                    var existingNote = Notes.Where(i => i.Id == item.Id).FirstOrDefault();
                    existingNote.Title = item.Title;
                    existingNote.Body = item.Body;
                    return existingNote.Id;
                }

                private int saveAsNew(Note item)
                {
                    item.Id = Notes.Count + 1;
                    Notes.Add(item);
                    return item.Id;
                }
            }
        }

    The class has a read-only property named Notes that instantiates a collection of Note objects in the runtime cache if it doesn't exist, and then returns the collection from the cache. This property is there to give us simulated storage so that we don't have to add a full-blown database (beyond the scope of this post). The private method loadInitialData pre-fills the collection of Note objects with some initial data and stuffs them into the cache. Both of these chunks of code would be refactored out with a move to a real means of data storage. The GetAll and GetById methods access our simulated data storage to return all of our notes, or a specific note by id. The Save method takes in a Note object and checks whether it has an Id less than or equal to zero (we assume that an Id that is not greater than zero represents a new note), and if so, calls the private method saveAsNew. If the Note sent in has an Id, the code finds that Note in the simulated storage, updates the Title and Body, and returns the Id value. The saveAsNew method sets the Id, adds the note to the simulated storage, and returns the Id value. The increment of the Id is simulated here by getting the current count of the note collection and adding 1 to it; the setting of the Id is the only other chunk of code that would be refactored out when moving to a different data storage approach.

    With our model and data manager code in place, we can turn our attention to the controller and views. We can do all of our work in a single controller. If we use a HomeController, we can add an action method named Index that returns our main view. An action method named List gets all of our Note objects from the manager and returns a partial view; we will use some jQuery to make an AJAX call to that action method and update our main view with the partial view content returned. Since the jQuery AJAX call will cache the content in Internet Explorer by default (a setting in jQuery), we decorate the List, Create and Edit action methods with the OutputCache attribute and a duration of 0. This sends the no-cache flag back in the header of the content, and jQuery picks that up and does not cache the AJAX call. The Create action method instantiates a new Note model object and returns a partial view, specifying the NoteForm.cshtml view file and passing in the model. The NoteForm view is used for both the add and the edit functionality. The Edit action method takes in the Id of the note to be edited, loads the Note model object based on that Id, and does the same return of the partial view as the Create method. The Save method takes in the posted Note object and sends it to the manager to save. It is decorated with the HttpPost attribute to ensure that it will only be available via a POST, and it returns a Json object with a property named Success that can be used by the UX to verify everything went well (we won't use that in our example). Both the add and edit actions in the UX post to the Save action method, allowing us to reduce the amount of unique jQuery we need to write in our view. The contents of the HomeController.cs file:

        using System.Web.Mvc;
        using Website.Models;

        namespace Website.Controllers
        {
            public class HomeController : Controller
            {
                public ActionResult Index()
                {
                    return View();
                }

                [OutputCache(Duration = 0)]
                public ActionResult List()
                {
                    var manager = new NoteManager();
                    var model = manager.GetAll();
                    return PartialView(model);
                }

                [OutputCache(Duration = 0)]
                public ActionResult Create()
                {
                    var model = new Note();
                    return PartialView("NoteForm", model);
                }

                [OutputCache(Duration = 0)]
                public ActionResult Edit(int id)
                {
                    var manager = new NoteManager();
                    var model = manager.GetById(id);
                    return PartialView("NoteForm", model);
                }

                [HttpPost]
                public JsonResult Save(Note note)
                {
                    var manager = new NoteManager();
                    var noteId = manager.Save(note);
                    return Json(new { Success = noteId > 0 });
                }
            }
        }

    The view for the note form, NoteForm.cshtml, looks like so:

        @model Website.Models.Note

        @using (Html.BeginForm("Save", "Home", FormMethod.Post, new { id = "NoteForm" }))
        {
            @Html.Hidden("Id")
            <label class="Title">
                <span>Title</span><br />
                @Html.TextBox("Title")
            </label>
            <label class="Body">
                <span>Body</span><br />
                @Html.TextArea("Body")
            </label>
        }

    It is a strongly typed view for our Note model class. We give the <form> element an id attribute so that we can reference it via jQuery, and the <label> and <span> tags give our UX some structure that we can style with some CSS. The List.cshtml view is used to render out a <ul> element with all of our notes:

        @model IEnumerable<Website.Models.Note>

        <ul class="NotesList">
            @foreach (var note in Model)
            {
            <li>
                @note.Title<br />
                @note.Body<br />
                <span class="EditLink ButtonLink" noteid="@note.Id">Edit</span>
            </li>
            }
        </ul>

    This view is strongly typed as well. It includes a <span> tag that we will use as an edit button, with a custom attribute named noteid that we can use in our jQuery to identify the Id of the note object we want to edit. The main view, Index.cshtml, contains a bit of HTML block structure and all of our jQuery logic:

        @{
            ViewBag.Title = "Index";
        }

        <h2>Notes</h2>

        <div id="NoteListBlock"></div>
        <span class="AddLink ButtonLink">Add New Note</span>
        <div id="NoteDialog" title="" class="Hidden"></div>

        <script type="text/javascript">
            $(function () {
                $("#NoteDialog").dialog({
                    autoOpen: false, width: 400, height: 330, modal: true,
                    buttons: {
                        "Save": function () {
                            $.post("/Home/Save",
                                $("#NoteForm").serialize(),
                                function () {
                                    $("#NoteDialog").dialog("close");
                                    LoadList();
                                });
                        },
                        Cancel: function () { $(this).dialog("close"); }
                    }
                });

                $(".EditLink").live("click", function () {
                    var id = $(this).attr("noteid");
                    $("#NoteDialog").html("")
                        .dialog("option", "title", "Edit Note")
                        .load("/Home/Edit/" + id, function () { $("#NoteDialog").dialog("open"); });
                });

                $(".AddLink").click(function () {
                    $("#NoteDialog").html("")
                        .dialog("option", "title", "Add Note")
                        .load("/Home/Create", function () { $("#NoteDialog").dialog("open"); });
                });

                LoadList();
            });

            function LoadList() {
                $("#NoteListBlock").load("/Home/List");
            }
        </script>

    The <div> tag with the id attribute of "NoteListBlock" is used as a container target for the load of the partial view content of our List action method. It starts out empty and gets loaded with content via jQuery once the DOM is loaded. The <div> tag with the id attribute of "NoteDialog" is the element for our dialog widget. The jQuery UI library uses the title attribute for the text in the dialog widget's top header bar; we start out with it empty and dynamically change the text via jQuery based on whether the request is to add or to edit a note. This <div> tag is given a CSS class named "Hidden" that sets the display:none style on the element. Since our call to the jQuery UI method that makes the element a dialog widget occurs in the jQuery document ready block, the end user would otherwise see the <div> element rendered as the page renders and then watch it hide after that jQuery call; adding display:none via CSS ensures that it is never rendered until the user triggers the request to open the dialog.

    The jQuery document ready block contains the setup for the dialog node, click event bindings for the edit and add links, and a call to a JavaScript function called LoadList that handles the AJAX call to the List action method. The .dialog() method is called on the "NoteDialog" <div> element and the options are set for the dialog widget. The buttons option defines two buttons and their click actions. The first is the "Save" button (the text in quotations is used as the text for the button), which does an AJAX post to our Save action method and sends the serialized form data from the note form (targeted with the id attribute "NoteForm"). Upon completion it closes the dialog widget and calls LoadList to update the UX without a redirect. The "Cancel" button simply closes the dialog widget. The .live() method binds a function to the "click" event on all elements with the CSS class EditLink. We use .live() because it will catch and bind our function to elements even as the DOM changes; since we will be constantly changing the note list as we add and edit, we want to ensure that the edit links get wired up with click events. The function for the click event on the edit links gets the noteid attribute and stores it in a local variable, clears out the HTML in the dialog element (to ensure a fresh start), calls the .dialog() method to set the "title" option, and then calls the .load() AJAX method to hit our Edit action method and inject the returned content into the "NoteDialog" <div> element. Once the .load() method is complete, it opens the dialog widget. The click event binding for the add link is similar to the edit one, only we don't need the id value and we load the Create action method. This binding is done via the .click() method because it only needs to be bound on the initial load of the page: the add button will always exist.

    Finally, we toss in some CSS in the Content/Site.css file to style our form and the add/edit links:

        .ButtonLink { color: Blue; cursor: pointer; }
        .ButtonLink:hover { text-decoration: underline; }
        .Hidden { display: none; }
        #NoteForm label { display: block; margin-bottom: 6px; }
        #NoteForm label > span { font-weight: bold; }
        #NoteForm input[type=text] { width: 350px; }
        #NoteForm textarea { width: 350px; height: 80px; }

    With all of our code in place we can hit F5 and see our list of notes. If we click on an edit link we get the dialog widget with the correct note data loaded, and if we click on the add new note link we get the dialog widget with the empty form. The end result of our solution tree for our sample is shown below.

    Read the article

  • Batch file to Delete Old Virtual Directories.

    - by Michael Freidgeim
    On some servers we have many old virtual directories, created for previous versions of our application. The IIS user interface allows you to delete only one at a time. Fortunately, we can use IIS scripts as described in "How to manage Web sites and Web virtual directories by using command-line scripts in IIS 6.0". I've created a batch file, DeleteOldVDirs.cmd:

        rem http://support.microsoft.com/kb/816568
        rem syntax: iisvdir /delete WebSite [/Virtual Path]Name [/s Computer [/u [Domain\]User /p Password]]

        rem list all directories and create a batch of deletes
        iisvdir /query "Default Web Site"
        echo "Enter Ctrl-C if you want to stop deleting"
        pause
        iisvdir /delete "Default Web Site/VDirName1"
        iisvdir /delete "Default Web Site/VDirName2"
        ...

    If the name of the web site or virtual directory contains spaces (e.g. "Default Web Site"), don't forget to use double quotes. Note that the batch file doesn't delete the physical directories from the file system. You need to delete them using Windows Explorer, but it does support multiple selection!
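    A small variation, offered as a sketch (the directory names are placeholders): when the old directories are known up front, a for loop keeps the batch shorter than one iisvdir line per directory:

        rem Sketch: delete a list of virtual directories in one loop.
        for %%v in (VDirName1 VDirName2 VDirName3) do (
            iisvdir /delete "Default Web Site/%%v"
        )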

    Read the article

  • What could be causing this long waiting time on page load?

    - by Andrew Findlay
    What could be causing a 1.18s wait time when my page loads? Just to make sure I did not have any conflicting or parallel scripts loading, I completely deleted all the script on my home page and ran the speed test again. Although I had a blank website and a 5kb file size, there was still a 900ms "waiting" time. I'm wondering if it could be my server? Any other thoughts or suggestions, since it doesn't seem to be scripts? EDIT: I just ran a DNS test on Pingdom and here are my results. Does this tell me anything? No nameservers found at child?
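    One way to separate server time from page weight (an illustrative sketch; the URL is a placeholder) is to break the wait down with curl's timing variables. A high time_starttransfer alongside a low connect time points at the server or application rather than DNS or the network:

        # Break the waiting time down: DNS lookup, TCP connect, first byte, total.
        curl -o /dev/null -s -w \
          'dns: %{time_namelookup}  connect: %{time_connect}  ttfb: %{time_starttransfer}  total: %{time_total}\n' \
          http://example.com/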

    Read the article

  • What is new in Oracle SOA Suite 11g R1 PS6? by Shanny Anoep

    - by JuergenKress
    Oracle has released a new version 11.1.1.7.0 for their Oracle Fusion Middleware product line. This version includes Patch Set #6 (PS6) for Oracle SOA Suite 11g R1, with a big list of improvements and fixes for each component in that suite. In this post we will highlight some of the interesting updates with regards to troubleshooting, performance, reliability and scalability. Infrastructure/Purging scripts Database growth is a common problem for large-scale Oracle SOA Suite deployments. Oracle already provides multiple purging strategies for the SOA Suite runtime database. This patch set includes two new scripts for purging most of the runtime data: Table Recreation Script (TRS): This script can be used to reclaim as much database space as possible, while still retaining the open instances. It can be used as a corrective action for databases that grew excessively, for example when purging was not performed at all. This should be used as a single corrective action only; the script does not replace the normal purging scripts. Truncate script: Remove all records from the SOA Suite runtime tables without dropping the tables. This script can be used for cloning SOA Suite environments without copying the instance data, or for recreating test scenarios by cleaning all the runtime data. The Oracle SOA Suite Administrator's guide contains a table with the available purging strategies. Diagnostic dumps Using WLST you could already dump diagnostic information about various components of the SOA Suite. This version adds support to retrieve more information on BPEL and Adapters from the command-line. Diagnostic dumps for BPEL New diagnostic dumps are available for BPEL to get information on thread pools, average processing time for BPEL components, and average waiting times for asynchronous instances. This information can be very useful for performance analysis or troubleshooting. With WLST this information can be retrieved from the command-line and included for monitoring or reporting. Read the full article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: SOA Suite PS6,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Configure Jenkins and Tomcat using Puppet

    - by ex3v
    I'm trying to set up a Spring dev environment (Jenkins, Tomcat) on Vagrant. What I really want to achieve is to limit the config to Puppet scripts only, so I can share it with my colleagues and we can work together on the same environment. So far I have managed to set up simple scripts to install Jenkins, Tomcat and so on, and they work fine. What about the Jenkins configuration, though? I'm pretty green at Jenkins usage and configuration, and not sure if I'm doing it the right way... I found this article and I want to migrate the whole setup described in it to Puppet. Any ideas? Thanks.
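    For flavor, a minimal sketch of the install half in Puppet (the package names are assumptions about the guest distro; Jenkins job configuration itself usually means managing files under the Jenkins home directory or using a Jenkins-specific module):

        # Sketch: install and run Jenkins and Tomcat, assuming both exist in the distro's repos.
        package { ['jenkins', 'tomcat7']:
          ensure => installed,
        }

        service { ['jenkins', 'tomcat7']:
          ensure  => running,
          enable  => true,
          require => Package['jenkins', 'tomcat7'],
        }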

    Read the article

  • Encrypting your SQL Server Passwords in Powershell

    - by laerte
    A couple of months ago, a friend of mine who is now bewitched by the seemingly supernatural abilities of Powershell (+1 for the team) asked me what initially appeared to be a trivial question: "Laerte, I do not have the luxury of being able to work with my SQL Servers through Windows Authentication, and I need a way to automatically pass my username and password. How would you suggest I do this?" Given that I knew he, like me, was using the SQLPSX modules (an open source project created by Chad Miller; a fantastic library of reusable functions and PowerShell scripts), I merrily replied, "Simply pass the username and password in the SQLPSX functions". He rather pointedly responded: "My friend, I might as well pass Username 'Me', Password 'NowEverybodyKnowsMyPassword'." As I do have the pleasure of working with Windows Authentication, I had not really thought this situation through yet (and thank goodness I only revealed my temporary ignorance to a friend, so the embarrassment was minimized). After discussing this puzzle with Chad Miller, he showed me some code for saving passwords in SQL Server tables, which he had demo'd in his Powershell ETL session at Tampa SQL Saturday (you can download the scripts from here). The solution seemed to be pretty much ready to go, so I showed it to my Authentication-impoverished friend, only to discover that we were only half-way there: "That's almost what I want, but the details need to be stored in a local txt file, together with the names of the servers that I'll actually use the Powershell scripts on. Something like: Server1,UserName,Password; Server2,UserName,Password." I thought about it for just a few milliseconds (Ha! Of course I'm not telling you how long it actually took me; I have to do my own marketing, after all) and the solution was finally ready.

    First, we have to download Library-StringCripto (with many thanks to Steven Hystad), which is composed of two functions: one for encryption and the other for decryption, both of which are used to manage the password. If you want to know more about the library, you can see more details in the help for the functions. Next, we have to create a txt file with your encrypted passwords:

        $ServerName = "Server1"
        $UserName = "Login1"
        $Password = "Senha1"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($ServerName),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

        $ServerName = "Server2"
        $UserName = "Login2"
        $Password = "senha2"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($ServerName),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

    In the c:\temp\ServersSecurePassword.txt file which we've just created, you will find your usernames and passwords, all neatly encrypted, with server names, usernames and passwords separated by commas. Decryption is much simpler:

        Read-EncryptedString -InputString $EncryptString -password "YourPassword"

    (Just remember that the password you decrypt with must be exactly the same one you encrypted with.) Finally, just to show you how smooth this solution is, let's say I want to use the Invoke-DBMaint function from SQLPSX to perform a CHECKDB on the system databases: it's just a case of split, decrypt and be happy!

        Get-Content c:\temp\ServersSecurePassword.txt | foreach {
            [array] $Split = ($_).split(",")
            Invoke-DBMaint -server $($Split[0]) `
                -UserName (Read-EncryptedString -InputString $Split[1] -password "YourPassword") `
                -Password (Read-EncryptedString -InputString $Split[2] -password "YourPassword") `
                -Databases "SYSTEM" -Action "CHECK_DB" -ReportOn c:\Temp
        }

    This is why I love Powershell.

    Read the article

  • SQL SERVER – Template Browser – A Very Important and Useful Feature of SSMS

    - by pinaldave
    Let me start today's blog post with a direct question: how many of you have ever used Template Browser? Template Browser is a very important and useful feature of SQL Server Management Studio (SSMS). Every time I am talking about SQL Server, someone comes up with the question of why there is no step-by-step procedure included in SSMS for features. Honestly, every time I get this question, the question I ask back is: how many of you have ever used Template Browser? I think the answer, most of the time, is either "no" or "we have not heard of the feature". One person asked me back whether I had ever written about it on my blog, and I had not. Basically there is not much to write about it; it is a pretty straightforward feature, and like any other such feature it is difficult to elaborate on. However, I will try to give a quick introduction. Templates are like a quick cheat sheet or quick reference. Templates are available to create objects like databases, tables, views, indexes, stored procedures, triggers, statistics, and functions, and templates are available for Analysis Services as well. The template scripts contain parameters to help you customize the code; you can use the Replace Template Parameters dialog box to insert values into the script. Additionally, users can create new custom templates, with their own folder structure. To open a template from Template Explorer, go to the View menu >> Template Explorer, or type CTRL+ALT+T. You will find a list of categories; click on any category and expand the folder structure. For our sample example, let us expand the Index folder. In this folder you will notice various T-SQL scripts. These scripts can be opened by double-clicking, or they can be dragged to the editor area and modified as needed. The sample template is then available in the query editor area with all the necessary parameter placeholders. You can replace the parameters by typing CTRL+SHIFT+M, or by going to the Query menu >> Specify Values for Template Parameters. This brings up the Specify Values for Template Parameters dialog box; accept each value or replace it with a new one. This will get your script ready to go; check it one more time and change the script to fit your requirements. I personally use Template Explorer for two things. The first is obviously for templates, but the hidden and important second use is for learning new features and T-SQL commands. There is so much to learn and so little time. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
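    To make the parameter mechanics concrete, here is a hedged sketch of what a template body looks like (illustrative, not copied from a shipped template): angle-bracket tokens of the form <name, type, default> are what the Specify Values for Template Parameters dialog replaces:

        -- Each <parameter, type, default> token below is filled in via CTRL+SHIFT+M.
        CREATE NONCLUSTERED INDEX <index_name, sysname, IX_MyTable_MyColumn>
        ON <schema_name, sysname, dbo>.<table_name, sysname, MyTable>
        (
            <column_name, sysname, MyColumn>
        );
        GO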

    Read the article

  • Command 'setenv' not found upon installation of Crystal09 program

    - by Geninho
    I use Ubuntu 11.04. I tried to install the program Crystal09, and the tutorial asks you to copy the file cry2k9.cshrc to the home directory. I copied the file, but when I run the command "source cry2k9.cshrc" (which is in the installation tutorial), the following error appears (the original message is in Portuguese-BR; translated here):

        Command 'setenv' not found, did you mean:
          Command 'netenv' from package 'netenv' (universe)
        setenv: command not found
        (the two lines above repeat for each setenv line in the file)
        CRY2K9_SCRDIR - scratch directory (integrals and temp files):
        CRY2K9_EXEDIR - directory with crystal executables:
        CRY2K9_UTILS - running scripts and misc: /runcry09, runprop09
        CRY2K6_GRA - graphical scripts: /maps06, doss06, band06
        CRY2K9_TEST - directory with test cases:
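    A hedged note on what is happening: cry2k9.cshrc is a csh script, and setenv is a csh built-in rather than a bash command, so sourcing the file from bash fails. Two illustrative workarounds, assuming the file only sets environment variables (the path below is a placeholder):

        # Option 1: use a csh-family shell, which understands setenv.
        tcsh
        source ~/cry2k9.cshrc

        # Option 2: translate each csh line to its bash equivalent, e.g.
        #   setenv CRY2K9_EXEDIR /opt/crystal09/bin
        # becomes:
        export CRY2K9_EXEDIR=/opt/crystal09/bin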

    Read the article

  • OWB 11gR2 &ndash; OMB and File Editing

    - by David Allan
    Here we will see how we can use the IDE for editing OMB scripts. The 11gR2 release is based on the common Oracle platform IDE also used by JDeveloper, and it comes with a bunch of standard behavior for editing and rendering code. One of the lesser-known things is that if you drop a text file into OWB you can edit it, so you can drop your tcl scripts right into OWB and edit them in place, with no need for another IDE like Eclipse just for this task. Cool, so you have the file here. There may be no line numbers; you can toggle line numbers on by right-clicking in the gutter. If we edit the file within the OWB IDE, the save is a little different from normal: OWB doesn't normally manipulate files, so things like Ctrl-S save the OWB objects, but if you edit a file, closing it will ask whether you want to save it. Check it out. Now we enter the realm of 'he who dares'... Note that the IDE doesn't know about tcl files out of the box, so you see above there is no syntax highlighting. The code is identified by the extension: .java is Java, .html is HTML, and so on. With OWB, the OMB scripts are tcl, and we usually give these files a .tcl extension. One of the things we can do to trick up the syntax highlighting is simply to rename the file to have a .java suffix; then all of a sudden we get syntax highlighting. See the illustration here where, side by side, we see the file with a .java extension and with a .tcl extension. Pretending to be .java is not ideal, but it gets us something more useful than Notepad. We can then change the syntax highlighting from the Tools > Preferences option so that we get Eclipse-like highlighting within the IDE, albeit with that little tweak to the file names. This might be useful if you are doing any kind of heavy-duty OMB script development and just want a single IDE. The OMBPlus panel is then at hand for executing and testing it out.

    Read the article

  • SQL Server Maintenance Utilities Update for SQL Server 2008 R2

    - by Greg Low
    Great to see that our friend Ola Hallengren has updated his maintenance utility scripts to deal with SQL Server 2008 R2. These scripts are highly regarded, particularly given the price: free! You'll find them here: http://ola.hallengren.com/Versions.html. Ola noted that the main change from 2008 is that backup compression is now supported in the Standard Edition of SQL Server. That in itself is good news.
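    For readers who haven't tried the scripts, a hedged usage sketch (the parameter names follow the solution's documented conventions, but verify against the current documentation before relying on them):

        -- Full, compressed backup of all user databases using Ola Hallengren's
        -- DatabaseBackup procedure (illustrative; the directory is a placeholder).
        EXECUTE dbo.DatabaseBackup
            @Databases = 'USER_DATABASES',
            @Directory = 'D:\Backup',
            @BackupType = 'FULL',
            @Compress = 'Y';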

    Read the article

  • ASP.NET 3.5 Functions and Subroutines

    The most basic of all ASP.NET 3.5 server-side scripts that I've covered using the Visual Basic programming language is not modular in nature. This means that an ASP.NET 3.5 server will interpret the scripts in a Visual Basic file, e.g. Default.aspx.vb, from top to bottom. In most real-world applications that use Visual Basic in ASP.NET websites, however, most web developers structure their programs in modules. This article will give you information about subroutines and functions, along with practical examples and their advantages.
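    As a flavor of the distinction the article covers (an illustrative sketch, not taken from the article itself): a Sub performs an action, while a Function returns a value:

        ' Minimal sketch of a subroutine versus a function in an ASP.NET code-behind.
        Public Sub ShowGreeting(ByVal name As String)
            ' A Sub performs an action and returns nothing.
            Response.Write("Hello, " & name)
        End Sub

        Public Function Add(ByVal a As Integer, ByVal b As Integer) As Integer
            ' A Function computes and returns a value.
            Return a + b
        End Function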

    Read the article
