Search Results

Search found 24208 results on 969 pages for 'cmd script'.

Page 190/969 | < Previous Page | 186 187 188 189 190 191 192 193 194 195 196 197  | Next Page >

  • SQL SERVER – Selecting Domain from Email Address

    - by pinaldave
    Recently I came across a quick need where I had to retrieve the domain of an email address. The email addresses are stored in a database table. I quickly wrote the following script, which extracts the domain and also counts how many email addresses share the same domain. SELECT RIGHT(Email, LEN(Email) - CHARINDEX('@', email)) Domain , COUNT(Email) EmailCount FROM   dbo.email WHERE  LEN(Email) > 0 GROUP BY RIGHT(Email, LEN(Email) - CHARINDEX('@', email)) ORDER BY EmailCount DESC The above script selects the domain after the @ character. Please note that if there is more than one @ character in the email, this script will not work, as that email address is already invalid. Do you have a similar script which can do the same thing efficiently? Please post it as a comment. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
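
    In answer to the closing question, here is one possible variation (a sketch only, reusing the dbo.email table and Email column from the post): computing the domain once in a CROSS APPLY avoids repeating the RIGHT/CHARINDEX expression in the GROUP BY, and the extra CHARINDEX filter skips rows with no @ sign at all.

        -- Sketch: same idea as the post's query, with the domain computed once.
        SELECT d.Domain,
               COUNT(*) AS EmailCount
        FROM   dbo.email e
        CROSS APPLY (SELECT RIGHT(e.Email, LEN(e.Email) - CHARINDEX('@', e.Email)) AS Domain) d
        WHERE  LEN(e.Email) > 0
               AND CHARINDEX('@', e.Email) > 0   -- ignore rows without an @ sign
        GROUP BY d.Domain
        ORDER BY EmailCount DESC;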

    Read the article

  • Using SQL Source Control with Fortress or Vault – Part 1

    - by AjarnMark
    I am fanatical when it comes to managing the source code for my company.  Everything that we build (in source form) gets put into our source control management system.  And I’m not just talking about the UI and middle-tier code written in C# and ASP.NET, but also the back-end database stuff, which at times has been a pain.  We even script out our Scheduled Jobs and keep a copy of those under source control. The UI and middle-tier stuff has long been easy to manage as we mostly use Visual Studio which has integration with source control systems built in.  But the SQL code has been a little harder to deal with.  I have been doing this for many years, well before Microsoft came up with Data Dude, so I had already established a methodology that, while not as smooth as VS, nonetheless let me keep things well controlled, and allowed doing my database development in my tool of choice, Query Analyzer in days gone by, and now SQL Server Management Studio.  It just makes sense to me that if I’m going to do database development, let’s use the database tool set.  (Although, I have to admit I was pretty impressed with the demo of Juneau that Don Box did at the PASS Summit this year.)  So as I was saying, I had developed a methodology that worked well for us (and I’ll probably outline in a future post) but it could use some improvement. When Solutions and Projects were first introduced in SQL Management Studio, I thought we were finally going to get our same experience that we have in Visual Studio.  Well, let’s say I was underwhelmed by Version 1 in SQL 2005, and apparently so were enough other people that by the time SQL 2008 came out, Microsoft decided that Solutions and Projects would be deprecated and completely removed from a future version.  So much for that idea. Then I came across SQL Source Control from Red-Gate.  I have used several tools from Red-Gate in the past, including my favorites SQL Compare, SQL Prompt, and SQL Refactor.  SQL Prompt is worth its weight in gold, and the others are great, too.  Earlier this year, we upgraded from our earlier product bundles to the new Developer Bundle, and in the process added SQL Source Control to our collection.  I thought this might really be the golden ticket I was looking for.  But my hopes were quickly dashed when I discovered that it only integrated with Microsoft Team Foundation Server and Subversion as the source code repositories.  We have been using SourceGear’s Vault and Fortress products for years, and I wholeheartedly endorse them.  So I was out of luck for the time being, although there were a number of people voting for Vault/Fortress support on their feedback forum (as did I) so I had hope that maybe next year I could look at it again. But just a couple of weeks ago, I was pleasantly surprised to receive notice in my email that Red-Gate had an Early Access version of SQL Source Control that worked with Vault and Fortress, so I quickly downloaded it and have been putting it through its paces.  So far, I really like what I see, and I have been quite impressed with Red-Gate’s responsiveness when I have contacted them with any issues or concerns that I have had.  I have had several communications with Gyorgy Pocsi at Red-Gate and he has been immensely helpful and responsive. I must say that development with SQL Source Control is very different from what I have been used to.  
This post is getting long enough, so I’ll save some of the details for a separate write-up, but the short story is that in my regular mode, it’s all about the script files.  Script files are King and you dare not make a change to the database other than by way of a script file, or you are in deep trouble.  With SQL Source Control, you make your changes to your development database however you like.  I still prefer writing most of my changes in T-SQL, but you can also use any of the GUI functionality of SSMS to make your changes, and SQL Source Control “manages” the script for you.  Basically, when you first link your database to source control, the tool generates scripts for every primary object (tables and their indexes are together in one script, not broken out into separate scripts like DB Projects do) and those scripts are checked into your source control.  So, if you needed to, you could still do a GET from your source control repository and build the database from scratch.  But for the day-to-day work, SQL Source Control uses the same technique as SQL Compare to determine what changes have been made to your development database and how to represent those in your repository scripts.  I think that once I retrain myself to just work in the database and quit worrying about having to find and open the right script file, that this will actually make us more efficient. And for deployment purposes, SQL Source Control integrates with the full SQL Compare utility to produce a synchronization script (or do a live sync).  This is similar in concept to Microsoft’s DACPAC, if you’re familiar with that. If you are not currently keeping your database development efforts under source control, definitely examine this tool.  If you already have a methodology that is working for you, then I still think this is worth a review and comparison to your current approach.  You may find it more efficient.  But remember that the version which integrates with Vault/Fortress is still in pre-release mode, so treat it with a little caution.  I have found it to be fairly stable, but there was one bug that I found which had inconvenient side-effects and could have really been frustrating if I had been running this on my normal active development machine.  However, I can verify that that bug has been fixed in a more recent build version (did I mention Red-Gate’s responsiveness?).

    Read the article

  • Should I use the same AddThis tag on multiple sites?

    - by ripper234
    I have an AddThis for one site: <script type="text/javascript" src="http://s7.addthis.com/js/250/addthis_widget.js#pubid=ripper234"> </script> Now I logged into AddThis and wanted to get my tag again, I saw it changed: <script type="text/javascript" src="http://s7.addthis.com/js/300/addthis_widget.js#pubid=ripper234"> </script> Should I use the same tag I got before, or the new tag? What's the difference? Is 250/300 the internal version number?

    Read the article

  • Command works partially when run from startup applications

    - by Gaurav Butola
    I have this script (or rather a set of commands with permission to execute) to enable two finger scrolling and two finger tap = right click. The script is located in /home/gaurav/Multigesture/multigesture. When I run the following command in a terminal, two finger scrolling and two finger tap = right click start working. I have to run this command each time I boot my laptop: "/home/gaurav/Multigesture/multigesture" So I put this command in Startup Applications so that I don't have to run it each time I boot, but when I reboot, two finger scrolling is not working; only two finger tap = right click works. What could be the problem? If the command works fine from the terminal, then why does it work only partially when I put it into Startup Applications? Here is the content of the script: xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Two-Finger Scrolling" 8 1 xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Synaptics Two-Finger Scrolling" 8 1 1 xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Synaptics Two-Finger Pressure" 32 10 xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Synaptics Two-Finger Width" 32 8 PS: the file that holds all the commands (the script) is named multitouch
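
    One common workaround (my assumption, not something stated in the question) is that the xinput calls run before the X session and Synaptics driver have finished initialising at login, so some settings are silently dropped; delaying the script a few seconds often helps. A minimal wrapper, reusing the path from the question, could be pointed to by the Startup Applications entry instead of the script itself:

        #!/bin/sh
        # delayed-multigesture: wait for the session/touchpad driver to settle,
        # then apply the touchpad settings from the original script.
        sleep 15
        /home/gaurav/Multigesture/multigesture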

    Read the article

  • How do I choose a package format for Linux software distribution?

    - by Ian C.
    We have a Java-based application that, to date, we've been distributing as a tarball with instructions for deploying. It's mostly self-contained so deployment is fairly straight-forward: Untar on the disk you'd like it to live on; Make sure Java is in your path and a suitable distro and version; Verify ownership and group on all the files Start up the server processes with our start script If the user wants to get in to start-on-boot stuff with SysV we have some written instructions and a template init file for it in our tarball. We'd like to make this installation process a little more seamless; take care of the permissions and the init script deployment. We're also going to start bundling our own JRE with the application so that we're mostly free of external dependencies. The question we're faced with now is: how do we pick a package format for distribution? Is RPM the standard? Can all package management tools deal with it now? Our clients primarily run RHEL and CentOS, but we do have some using SuSE and even Debian. If we can pick a distro-agnostic format we'd prefer that. What about a self-extracting shell script? Something akin to how Java is distributed. If we're dependency-free would the self-extracting script be sufficient? What features or conveniences would we lose out on going with the script versus a proper package format meant for use by a package manager?
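
    As a rough illustration of the self-extracting option mentioned above (a sketch only; the install path, service account and init file location are assumptions, not details from the question), the usual trick is to append a gzipped tarball to a small shell stub:

        #!/bin/sh
        # install.sh - minimal self-extracting installer stub.
        # Everything after the __ARCHIVE__ marker is a gzipped tarball of the app.
        set -e
        INSTALL_DIR="${1:-/opt/myapp}"                      # assumed install location
        mkdir -p "$INSTALL_DIR"
        ARCHIVE_LINE=$(awk '/^__ARCHIVE__$/ {print NR + 1; exit}' "$0")
        tail -n +"$ARCHIVE_LINE" "$0" | tar xzf - -C "$INSTALL_DIR"
        chown -R myapp:myapp "$INSTALL_DIR"                 # assumed service account
        cp "$INSTALL_DIR/init.d/myapp" /etc/init.d/myapp    # assumed init template inside the tarball
        echo "Installed to $INSTALL_DIR"
        exit 0
        __ARCHIVE__

    The installer is then produced with something like cat stub.sh app.tar.gz > install.sh; an RPM or DEB buys you dependency tracking and clean upgrades on top of this, which is the real trade-off being asked about.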

    Read the article

  • SOA Suite 11g Dynamic Payload Testing with soapUI Free Edition

    - by Greg Mally
    Overview Many web service developers use soapUI for various tests like: smoke test, unit test, and load testing because you can get a free edition that is fairly robust. However, if you need to venture into more complex testing that requires a dynamic payload, then the free edition doesn't necessarily make it easy. This feature does exist in soapUI, but for obvious reasons it is in the Pro version. In this blog I will show you how to use soapUI free edition for dynamic payloads in a simplified example. Hopefully this will open the doors for you to expand into more complex scenarios. The following assumes that you have a working knowledge of soapUI and will not go into concepts like setting up a project etc. For the basics, please review the documentation for soapUI: http://www.soapui.org/Getting-Started/. Additionally, we will be using asynchronous web services and you can review the setup for this in my blog: SOA Suite 11g Asynchronous Testing with soapUI. Features in soapUI Free Edition Relating to this Topic The soapUI test tool provides a very feature rich environment that can do many things provided you are willing to go beyond point and click. For this example, we will be leveraging just a couple features for our dynamic payload example: Test Case Properties Scripting with Groovy Basically, we will be using a property as a global variable and we will manipulate that property using a Groovy script. Setting Up Our Property Properties are available throughout soapUI and here is a snippet from the soapUI website defining the locations: Projects : for handling Project scope values, for example a subscription ID TestSuite : for handling TestSuite scoped values, can be seen as "arguments" to a TestSuite TestCases : for handling TestCase scoped values, can be seen as "arguments" to a TestCase Properties TestStep : for providing local values/state within a TestCase Local TestStep properties : several TestStep types maintain their own list of properties specific to their functionality : DataSource, DataSink, Run TestCase MockServices : for handling MockService scoped values/arguments MockResponses : for handling MockResponse scoped values Global Properties : for handling Global properties, optionally from an external source For our example, we will be defining a custom property in a TestCase called SimpleAsyncPayload. The property can be created in either the Custom Properties tab located at the bottom of the Navigator panel when the TestCase is selected in the Navigator or the Properties label in the TestCase editor: Navigator Panel TestCase Editor You will notice that I set a value of “0” for the custom property. For this simplified example, we will need to retrieve that value and manipulate it prior to making the web service request invocation. In order to accomplish this, we will need to get Groovy ;) Let's Get Groovy We will now add a new Groovy Script step to the TestCase called Manipulate Payload: TestCase Editor > Append Step > Groovy Script Once we have added the Groovy Script step to our TestCase, we can open the Groovy Script editor to add the code to: Get the current value of the property we created called SimpleAsyncPayload. Convert the value of the property to an integer. Increment the value. Store the incremented value back into the TestCase property called SimpleAsyncPayload. 
The script should look something like the following: Groovy Script Editor – Manipulate Payload At this point we can test the script to see if it is working by simply running the TestCase (left-click on the green triangle in the upper left-hand corner of the TestCase editor). To verify if it ran correctly, we can look at the value of the SimpleAsyncPayload property which should now be 1: TestCase Editor – Run Results All that is left to complete the TestCase is to append another step of type Test Request. The information required to append the request is a name and an operation to invoke. In this example we will use the default name and select the SimpleAsyncBPELProcessBingd -> process as the operation (any other information being requested, simply use the defaults unless you are calling an asynchronous operation then do not add any assertions). We are now in familiar ground with the Test Request editor. Depending upon the type of operation you are invoking (synchronous or asynchronous), please update the request with the necessary information (e.g., callback information for asynchronous operations). We will now tweak the Test Request payload to retrieve the value of the SimpleAsyncPayload property. The soapUI editor makes this very simple: right-click in the payload and navigate to the property (e.g., right-click > Get Data.. > TestCase: [Groovy TestCase] > Property [SimpleAsyncPayload]): Test Request Editor – Insert Property Value Your payload should now look something like the following: Test Request Editor – Inserted Property Value Just like before, we are now ready to run the TestCase. If everything goes as expected we should see a response like the following: Message Viewer – Results of TestCase Run We are now setup to be able to run a stress test where the payload will change for each request. This simple example can be expanded to include multiple payload values, complex calculations in the scripts, or whatever can be done via the soapUI scripting. Hopefully you have found this useful and happy testing to you :)
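
    For reference, the Manipulate Payload step that the post shows only as a screenshot might look something like this in the Groovy Script editor (a sketch based on the four steps listed above, not the author's exact script):

        // Read the TestCase property, treat it as an integer, increment it, and write it back.
        def current = testRunner.testCase.getPropertyValue("SimpleAsyncPayload")
        def next = current.toInteger() + 1
        testRunner.testCase.setPropertyValue("SimpleAsyncPayload", next.toString())
        log.info("SimpleAsyncPayload is now " + next)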

    Read the article

  • Phishing attack stuck with jsp loginAction.do page?

    - by user970533
    I'm testing a phishing website on a staged replica of a JSP web application. I'm doing the usual attack, which involves changing the post and action fields of the source code to divert to my own JSP script, capture the logins, and redirect the victim to the original website. It looks easy, but trust me, it has been more than 2 weeks and I cannot write the logins to the text file. I have tested the JSP page on my local WAMP server and it works fine. In staging, when I click on the OK button for the user/password field I'm taken to the loginAction.do script. I checked this using the Tamper Data add-on in Firefox. The only way I was able to make my script run was to use the Burp proxy to intercept the request and change the action parameter to refer to my uploaded script. I want to know: what does loginAction.do do? I have googled it - it's quite common to see it in JSP applications. I have checked the code; there is nothing that tells me why the page always points to the .do script instead of mine. Is there some kind of redirection in Tomcat? I'd like to know. I'm unable to exploit this attack vector and need the community's help.

    Read the article

  • AutoHotkey cannot interact with Windows 8 Windows… or can it!

    - by deadlydog
    If you’ve installed Windows 8 and are trying to use AutoHotkey (AHK) to interact with some of the Windows 8 windows (such as the Control Panel, for example), or with apps that need to be run as administrator, then you’ve likely become very frustrated, as I did, to discover that AHK cannot send any commands (keyboard or mouse input) to these windows. This was a huge concern as I often need to run Visual Studio as an administrator and wanted my hotkeys and hotstrings to work in Visual Studio. After a day of fighting I finally realized the answer (and it’s pretty obvious once you think about it). If you want AHK to be able to interact with Windows 8 windows or apps running as administrator, then you also need to have your AHK script run as administrator. If you are like me then you probably have your AHK scripts set to run automatically at login, which means you don’t have the opportunity to right-click on the script and manually tell it to Run As Administrator. Luckily the workaround is simple. First, if you want to have your AHK script (or any program for that matter) run when you log in, create a shortcut to the application and place the shortcut in: C:\Users\[User Name]\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup Note that you will need to replace “[User Name]” with your username, and that “AppData” is a hidden folder, so you’ll need to turn on viewing hidden folders to see it. By placing that shortcut there, Windows will automatically run your script when you log on. Now, to get it to run as an administrator by default, right-click on the shortcut and go to Properties. Under the Shortcut tab, click on the “Advanced…” button and check off “Run as administrator”. That’s it. Now when you log onto Windows your script will automatically start up, running as an administrator, allowing it to interact with any application and window like you had expected it to in the first place. Happy coding!

    Read the article

  • Is there a good [and modern] reason not to have static HTML pages with AJAX content, rather than generated pages?

    - by user1725
    Assumptions: We don't care about IE6 or NoScript users. Let's pretend we have the following design concept: All your pages are HTML/CSS that create the aesthetics, layout, colours, and general design-related things. Let's pretend this basic code below is that: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html> <head> <link href="/example.css" rel="stylesheet" type="text/css"/> <script src="example.js" type="text/javascript"></script> </head> <body> <div class="left"> </div> <div class="mid"> </div> <div class="right"> </div> </body> </html> Which in theory should produce, with the right CSS, three vertical columns on the web page. Now, here's the root of the question: what are the serious advantages and/or disadvantages of loading the content of these columns (let's assume they are all indeed dynamic content, not static) via AJAX requests, versus having the content pre-set with a scripting language? So for instance, in the AJAX example, let's assume jQuery is used on load: //Multiple http requests $("body > div.left").load("./script.php?content=news"); $("body > div.right").load("./script.php?content=blogs"); $("body > div.mid").load("./script.php?content=links"); OR--- //Single http request $.ajax({ url: './script.php?content=news|blogs|links', dataType: 'json', success: function (data) { $("body > div.left").html(data.news); $("body > div.right").html(data.blogs); $("body > div.mid").html(data.links); } }); Versus doing this: <body> <div class="left"> <?php echo function_returning_news(); ?> </div> <div class="mid"> <?php echo function_returning_blogs(); ?> </div> <div class="right"> <?php echo function_returning_links(); ?> </div> </body> I'm personally thinking right now that doing static HTML pages is a better method; my reasoning is: I've separated my data, logic, and presentation (i.e., "MVC") code. I can make changes to one without touching the others. Browser caches mean I'm mostly getting server load for the content, not the presentation wrapped around it. I could turn my "script.php" into a more robust API for the website. But I'm not certain or clear that these are legitimately good reasons, and I'm not confidently aware of other issues that could happen, so I would like to know the pros and cons, so to speak.

    Read the article

  • Run Linux command as predefined user

    - by vijay.shad
    Hi all, I have created a shell script to start a server program: startup.sh start When the above command executes, it tries to start the server as adminuser. To achieve this, my script is written like this: SUBIT="su - adminuser -c " SERVER_BOX_COMMAND_A="Server" ############## # Function to start cluster function start(){ $SUBIT "$SERVER_BOX_COMMAND_A" } When I execute the command it asks for a password. Is there any other way to do this so that it will not ask for a password? I have seen this behavior in the JBoss startup script provided with JBoss: that script changes the user to jboss and then starts the JBoss server. I want my script to behave the same way.
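
    A common way to avoid the password prompt (an approach I'm assuming here, not one taken from the question) is to switch from su to sudo and grant a passwordless rule for just this command in /etc/sudoers. A sketch, assuming the script is launched by a user named deploy and the server binary lives at /opt/server/bin/Server:

        # /etc/sudoers (edit with visudo) - user name and path are assumptions.
        deploy ALL = (adminuser) NOPASSWD: /opt/server/bin/Server

        # startup.sh - replace the su invocation with sudo -u.
        SUBIT="sudo -u adminuser"
        SERVER_BOX_COMMAND_A="/opt/server/bin/Server"

        # Function to start cluster
        function start(){
            $SUBIT "$SERVER_BOX_COMMAND_A"
        }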

    Read the article

  • SSIS: Building SQL databases on-the-fly using concatenated SQL scripts

    - by DrJohn
    Over the years I have developed many techniques which help automate the whole SQL Server build process. In my current process, where I need to build entire OLAP data marts on-the-fly, I make regular use of a simple but very effective mechanism to concatenate all the SQL Scripts together from my SSMS (SQL Server Management Studio) projects. This proves invaluable because in two clicks I can redeploy an entire SQL Server database with all tables, views, stored procedures etc. Indeed, I can also use the concatenated SQL scripts with SSIS to build SQL Server databases on-the-fly. You may be surprised to learn that I often redeploy the database several times per day, or even several times per hour, during the development process. This is because the deployment errors are logged and you can quickly see where SQL Scripts have object dependency errors. For example, after changing a table structure you may have forgotten to change any related views. The deployment log immediately points out all the objects which failed to build so you can fix and redeploy the database very quickly. The alternative approach (i.e. doing changes in the database directly using the SSMS UI) would require you to check all dependent objects before making changes. The chances are that you will miss something and wonder why your app returns the wrong data – a common problem caused by changing a table without re-creating dependent views. Using SQL Projects in SSMS A great many developers fail to make use of SQL Projects in SSMS (SQL Server Management Studio). To me they are an invaluable way of organizing your SQL Scripts. The screenshot below shows a typical SSMS solution made up of several projects – one project for tables, another for views etc. The key point is that the projects naturally fall into the right order in the file system because of the project name. The number in the folder or file name ensures that the SQL scripts are concatenated together in the order in which they need to be executed. Hence the script filenames start with 100, 110 etc. Concatenating SQL Scripts To concatenate the SQL Scripts together into one file, I use notepad.exe to create a simple batch file (see example screenshot) which uses the TYPE command to write the content of the SQL Script files into a combined file. As the SQL Scripts are in several folders, I simply use the TYPE command multiple times and append the output together. If you are unfamiliar with batch files, you may not know that the angled bracket (>) means write the output of the program into a file. Two angled brackets (>>) mean append the output of the program to a file. So the command-line DIR > filelist.txt would write the output of the DIR command into a file called filelist.txt. In the example shown above, the concatenated file is called SB_DDS.sql. If, like me, you place the concatenated file under source code control, then the source code control system will change the file's attribute to "read-only", which in turn would cause the TYPE command to fail. The ATTRIB command can be used to remove the read-only flag. Using SQLCmd to execute the concatenated file Now that the SQL Scripts are all in one big file, we can execute the script against a database using SQLCmd in another batch file, as shown below: SQLCmd has numerous options, but the script shown above simply executes the SB_DDS.sql file against the SB_DDS_DB database on the local machine and logs the errors to a file called SB_DDS.log.
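
    Put together, the two batch files described above might look something like the following (a sketch; the numbered folder names and server are illustrative, while SB_DDS.sql, SB_DDS_DB and SB_DDS.log come from the post):

        REM concat.bat - clear the read-only flag, then concatenate the project scripts in order.
        ATTRIB -R SB_DDS.sql
        TYPE 100_Tables\*.sql      >  SB_DDS.sql
        TYPE 110_Views\*.sql       >> SB_DDS.sql
        TYPE 120_StoredProcs\*.sql >> SB_DDS.sql

        REM deploy.bat - run the combined script against the local database and log any errors.
        SQLCMD -S localhost -d SB_DDS_DB -E -i SB_DDS.sql -o SB_DDS.log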
So after executing the batch file you can simply check the error log to see if your database built without a hitch. If you have errors, then simply fix the source files, re-create the concatenated file and re-run the SQLCmd to rebuild the database. This two click operation allows you to quickly identify and fix errors in your entire database definition.Using SSIS to execute the concatenated file To execute the concatenated SQL script using SSIS, you simply drop an Execute SQL task into your package and set the database connection as normal and then select File Connection as the SQLSourceType (as shown below). Create a file connection to your concatenated SQL script and you are ready to go.   Tips and TricksAdd a new-line at end of every fileThe most common problem encountered with this approach is that the GO statement on the last line of one file is placed on the same line as the comment at the top of the next file by the TYPE command. The easy fix to this is to ensure all your files have a new-line at the end.Remove all USE database statementsThe SQLCmd identifies which database the script should be run against.  So you should remove all USE database commands from your scripts - otherwise you may get unintentional side effects!!Do the Create Database separatelyIf you are using SSIS to create the database as well as create the objects and populate the database, then invoke the CREATE DATABASE command against the master database using a separate package before calling the package that executes the concatenated SQL script.    

    Read the article

  • How to sudo as another user, without specifying the username

    - by Pedro
    So I'm currently trying to create a sudoers file, but I ran into something I can't figure out. The end result I'm looking for is that I want users to be able to do something like: sudo /usr/sbin/script.pl But, instead of running as root, I'd like the script to run as "other_user". I looked into the sudoers file, and I tried adding a line like: pedro ALL = (other_user) /usr/sbin/script.pl But that only works if I specify the user by doing sudo -u other_user /usr/sbin/script. Is there an (easy) way to have the script run as a specific user, without having to specify it in the command line?
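
    One way around this (an assumption on my part, not something from the question) is to keep the sudoers entry as it is and hide the -u flag behind a small wrapper script, so users only ever run the wrapper:

        #!/bin/sh
        # /usr/local/bin/runscript - hypothetical wrapper that always runs
        # script.pl as other_user through the existing sudoers rule.
        exec sudo -u other_user /usr/sbin/script.pl "$@"

    Alternatively, sudoers has a per-user Defaults setting (Defaults:pedro runas_default = other_user), but that changes the default target user for every sudo command pedro runs, which is usually too broad.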

    Read the article

  • OWB 11gR2 – JDBC Helper Utility

    - by David Allan
    One of the common queries when importing tables via JDBC with 11gR2 is determining why the import wizard doesn’t display the tables that you think it should. I often just use the script below to dump out the schemas, tables and columns that the JDBC driver is returning. This is useful in a few areas: to figure out what schema name is returned, so you can double-check it against the schema name you have used in the location (this is what is used in the DatabaseMetaData.getTables API call within the basic JDBC metadata import); to figure out the data types returned from the JDBC driver when you see columns skipped because of "no datatype supported" messages; and also… I can do it via scripting and don’t need to recompile classes and stuff :-) Edit the tcl script and set the JDBC driver, the connection URL and the username and password (they are at the bottom of the script); the script then calls a basic tcl procedure which writes the schemas, tables and columns with various properties to standard out. For example, I executed it using the XML JDBC driver from ODI over a simple customers XML file and it writes the following metadata. You can add more details as you need and execute it from the OMBPlus panel within OWB. Download the sample tcl JDBC script here. There is a bunch of really useful stuff on OTN documenting this area (start with the white paper here) that is worth checking out, all related to the OWB SDK, covering everything from platform definitions, custom metadata importers, application adapters, code templates etc. You can find a bunch of goodies on the OWB SDK here.
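
    For readers who would rather see the idea outside OMB Plus, a plain Java equivalent of what the tcl helper does might look roughly like this (a sketch; the driver URL and credentials are placeholders, and the real script prints more properties):

        // DumpJdbcMetadata.java - list the schemas, tables and columns a JDBC driver returns.
        import java.sql.*;

        public class DumpJdbcMetadata {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//host:1521/service", "user", "password")) {
                    DatabaseMetaData md = con.getMetaData();
                    try (ResultSet tables = md.getTables(null, null, "%", new String[] {"TABLE"})) {
                        while (tables.next()) {
                            String schema = tables.getString("TABLE_SCHEM");
                            String table  = tables.getString("TABLE_NAME");
                            System.out.println(schema + "." + table);
                            // For each table, dump the column names and the driver's type names.
                            try (ResultSet cols = md.getColumns(null, schema, table, "%")) {
                                while (cols.next()) {
                                    System.out.println("    " + cols.getString("COLUMN_NAME")
                                            + " : " + cols.getString("TYPE_NAME"));
                                }
                            }
                        }
                    }
                }
            }
        }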

    Read the article

  • SQL SERVER – Monitoring SQL Server Database Transaction Log Space Growth – DBCC SQLPERF(logspace) – Puzzle for You

    - by pinaldave
    First of all – if you are going to say this is very old subject, I agree this is very (very) old subject. I believe in earlier time we used to have this only option to monitor Log Space. As new version of SQL Server released we all equipped with DMV, Performance Counters, Extended Events and much more new enhancements. However, during all this year, I have always used DBCC SQLPERF(logspace) to get the details of the logs. It may be because when I started my career I remember this command and it did what I wanted all the time. Recently I have received interesting question and I thought, I should request your help. However, before I request your help, let us see traditional usage of DBCC SQLPERF(logspace). Every time I have to get the details of the log I ran following script. Additionally, I liked to store the details of the when the log file snapshot was taken as well so I can go back and know the status log file growth. This gives me a fair estimation when the log file was growing. CREATE TABLE dbo.logSpaceUsage ( id INT IDENTITY (1,1), logDate DATETIME DEFAULT GETDATE(), databaseName SYSNAME, logSize DECIMAL(18,5), logSpaceUsed DECIMAL(18,5), [status] INT ) GO INSERT INTO dbo.logSpaceUsage (databaseName, logSize, logSpaceUsed, [status]) EXEC ('DBCC SQLPERF(logspace)') GO SELECT * FROM dbo.logSpaceUsage GO I used to record the details of log file growth every hour of the day and then we used to plot charts using reporting services (and excel in much earlier times). Well, if you look at the script above it is very simple script. Now here is the puzzle for you. Puzzle 1: Write a script based on a table which gives you the time period when there was highest growth based on the data stored in the table. Puzzle 2: Write a script based on a table which gives you the amount of the log file growth from the beginning of the table to the latest recording of the data. You may have to run above script at some interval to get the various data samples of the log file to answer above puzzles. To make things simple, I am giving you sample script with expected answers listed below for both of the puzzle. Here is the sample query for puzzle: -- This is sample query for puzzle CREATE TABLE dbo.logSpaceUsage ( id INT IDENTITY (1,1), logDate DATETIME DEFAULT GETDATE(), databaseName SYSNAME, logSize DECIMAL(18,5), logSpaceUsed DECIMAL(18,5), [status] INT ) GO INSERT INTO dbo.logSpaceUsage (databaseName, logDate, logSize, logSpaceUsed, [status]) SELECT 'SampleDB1', '2012-07-01 7:00:00.000', 5, 10, 0 UNION ALL SELECT 'SampleDB1', '2012-07-01 9:00:00.000', 16, 10, 0 UNION ALL SELECT 'SampleDB1', '2012-07-01 11:00:00.000', 9, 10, 0 UNION ALL SELECT 'SampleDB1', '2012-07-01 14:00:00.000', 18, 10, 0 UNION ALL SELECT 'SampleDB3', '2012-06-01 7:00:00.000', 5, 10, 0 UNION ALL SELECT 'SampleDB3', '2012-06-04 7:00:00.000', 15, 10, 0 UNION ALL SELECT 'SampleDB3', '2012-06-09 7:00:00.000', 25, 10, 0 GO Expected Result of Puzzle 1 You will notice that there are two entries for database SampleDB3 as there were two instances of the log file grows with the same value. Expected Result of Puzzle 2 Well, please a comment with valid answer and I will post valid answers with due credit next week. Not to mention that winners will get a surprise gift from me. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: DBCC

    Read the article

  • Opening a new Window from ASP.NET code behind

    - by TATWORTH
    At http://weblogs.asp.net/infinitiesloop/archive/2007/09/25/response-redirect-into-a-new-window-with-extension-methods.aspx there is an excellent post on how to open a new windows from code behind. The purists may not like it but it helped solve a problem for a client's client. Here is an update for VS2010 users: using System; using System.Web; using System.Web.UI; /// <summary> /// Response Helper for opening popup windo from code behind. /// </summary> public static class ResponseHelper {   /// <summary>   /// Redirect to popup window   /// </summary>   /// <param name="response">The response.</param>   /// <param name="url">URL to open to</param>   /// <param name="target">Target of window _self or _blank</param>   /// <param name="windowFeatures">Features such as window bar</param>   /// <remarks>   ///     <list type="bullet">   ///         <item>   /// From http://weblogs.asp.net/infinitiesloop/archive/2007/09/25/response-redirect-into-a-new-window-with-extension-methods.aspx   /// </item>   /// <item>   /// Note: If you use it outside the context of a Page request, you can't redirect to a new window. The reason is the need to call the ResolveClientUrl method on Page, which I can't do if there is no Page. I could have just built my own version of that method, but it's more involved than you might think to do it right. So if you need to use this from an HttpHandler other than a Page, you are on your own.   /// </item>   ///         <item>   /// Beware of popup blockers.   /// </item>   /// <item>   /// Note: Obviously when you are redirecting to a new window, the current window will still be hanging around. Normally redirects abort the current request -- no further processing occurs. But for these redirects, processing continues, since we still have to serve the response for the current window (which also happens to contain the script to open the new window, so it is important that it completes).   /// </item>   /// <item>   /// Sample call Response.Redirect("popup.aspx", "_blank", "menubar=0,width=100,height=100");   /// </item>   ///     </list>   /// </remarks>   public static void Redirect(this HttpResponse response, string url, string target, string windowFeatures)   {     if ((String.IsNullOrEmpty(target) || target.Equals("_self", StringComparison.OrdinalIgnoreCase)) && String.IsNullOrEmpty(windowFeatures))     {       response.Redirect(url);     }     else     {       Page page = (Page)HttpContext.Current.Handler;       if (page == null)       {         throw new InvalidOperationException("Cannot redirect to new window outside Page context.");       }       url = page.ResolveClientUrl(url);       string script;       if (!String.IsNullOrEmpty(windowFeatures))       {         script = @"window.open(""{0}"", ""{1}"", ""{2}"");";       }       else       {         script = @"window.open(""{0}"", ""{1}"");";       }       script = String.Format(script, url, target, windowFeatures);       ScriptManager.RegisterStartupScript(page, typeof(Page), "Redirect", script, true);     }   } }

    Read the article

  • Communicate between NodeJS and PHP

    - by Zenth
    I need ideas to solve this: I have an entire website in PHP (5.2) on a PHP "shared server"; I can only use Apache+PHP, CGI & NodeJS - no memcached, redis or other software. And I need to communicate between the PHP and the NodeJS script. My first approach is using a socket connection: creating a socket listener in NodeJS, connecting to it with PHP, and then sending commands, waiting for the response, and closing the connection (and ending the PHP script). For the other direction, can I call the PHP script via an HTTP request, or using sockets again? The problem with using sockets from Node to PHP is that I CANT leave the PHP script running with set_time_limit(0) because of the fuc... server, so I need to "call" PHP another way. The NodeJS and Apache + PHP are on the same machine, and I need the code to have the fastest response time (sockets better than web calls). Better ideas or other solutions? Thanks!
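
    For what it's worth, the socket approach described above needs very little code on the Node side; this is a sketch (the port and the line-based protocol are my assumptions, not the asker's):

        // listener.js - PHP connects, sends one command per line, gets one reply back.
        var net = require('net');

        net.createServer(function (socket) {
            socket.setEncoding('utf8');
            socket.on('data', function (line) {
                var cmd = line.trim();
                // Replace this with real command handling.
                socket.end('you sent: ' + cmd + '\n');
            });
        }).listen(5000, '127.0.0.1');

    On the PHP side, fsockopen('127.0.0.1', 5000), fwrite() and fgets() are enough to send a command and read the reply before closing the connection, so the PHP request finishes normally.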

    Read the article

  • Brightness problem after upgrading to Ubuntu 13.10

    - by Daniel Yunus
    I have upgraded to 13.10 recently. Brightness was working in version 13.04 after I added the code below, but after upgrading to 13.10 it is no longer working on my ASUS Slimbook X401U: #!/bin/sh -e # # rc.local # # This script is executed at the end of each multiuser runlevel. # Make sure that the script will "exit 0" on success or any other # value on error. # # In order to enable or disable this script just change the execution # bits. # # By default this script does nothing. echo 0 > /sys/class/backlight/acpi_video0/brightness exit 0

    Read the article

  • How do I cut and paste commands from your blog?

    - by Maria Colgan
    At the recent ODTUG  Kscope 12 conference several people told me that they really enjoyed our blog on the Optimizer but were frustrated because they couldn’t cut and paste the commands used in the blog posts straight into their environment. Typically I use screen shots in the blog posts to make the commands clear but it does mean that it is impossible to cut and paste the commands into your environment. In order to get around this I have created a downloadable .sql script for each of our blog posts. You should now see the sentence “You can get a copy of the script I used to generate this post here”, appearing at the bottom of each blog post. Clicking on the link will open the .sql script that contains all of the commands used in the post. You can either save the entire script or just cut and paste the particular command you are interested in! I have added scripts for all of this year’s blog posts and am slowly making my way through our old posts until we have a script for everything we have posted to date. Hopefully this will help! +Maria Colgan

    Read the article

  • Adsense block not displaying anything

    - by Mild Fuzz
    I have copied and pasted the following code into my page, but it seems to be having no effect. It should have a fall back block colour, but nothing is showing. Code copied/pasted straight from google <script type="text/javascript"><!-- google_ad_client = "ca-pub-7972043490779920"; /* SALF */ google_ad_slot = "4085311300"; google_ad_width = 300; google_ad_height = 250; //--> </script> <script type="text/javascript" src="http://pagead2.googlesyndication.com/pagead/show_ads.js"> </script>

    Read the article

  • Sum of two Textbox values into a third Textbox using jQuery

    - by Rajneesh Verma
    A script that sums up two textbox values using jQuery. Note that I am not using any validation for textbox values. <html xmlns= "http://www.w3.org/1999/xhtml" > <head runat= "server" > <title></title> <script type= "text/javascript" src= "http://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js" ></script> <script type= "text/javascript" > $(function () { var textBox1 = $( 'input:text[id$=TextBox1...(read more)
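
    Since the snippet is cut off at "(read more)", here is a minimal self-contained version of the same idea (a sketch that uses plain element IDs rather than the ASP.NET id$= selectors from the post, and adds a parseFloat fallback the original does not claim to have):

        <input type="text" id="TextBox1" />
        <input type="text" id="TextBox2" />
        <input type="text" id="TextBox3" readonly="readonly" />
        <script type="text/javascript">
            $(function () {
                // Recalculate the sum whenever either source textbox changes.
                $('#TextBox1, #TextBox2').bind('keyup change', function () {
                    var a = parseFloat($('#TextBox1').val()) || 0;
                    var b = parseFloat($('#TextBox2').val()) || 0;
                    $('#TextBox3').val(a + b);
                });
            });
        </script>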

    Read the article

  • Last click counts link cookies

    - by user3636031
    I want to fix so my only the last click gets the cookie, here is my script: <script type="text/javascript"> document.write('<scr' + 'ipt type="text/javascript" src="' + document.location.protocol + '//sc.tradetracker.net/public/tradetracker/tracking/?e=dedupe&amp;t=js"></scr' + 'ipt>'); </script> <script type="text/javascript"> // The pixels. var _oPixels = { tradetracker: '<img id="tt" />', tradedoubler: '<img id="td" />', zanox: '<img id="zx" />', awin: '<img id="aw" />' }; // Run the dedupe. _ttDedupe( 'conversion', 'network' ); </script> <noscript> <img id="tt" /> <img id="td" /> <img id="zx" /> <img id="aw" /> </noscript> How can I get this right? Thanks!

    Read the article

  • TinyMCE autoresize plugin does not work

    - by user31929
    I want to reproduce this simple behaviour: http://tinymcesupport.com/tutorials/autoresize-automatic-resize-plugin This is my init: <!-- TinyMCE --> <script type="text/javascript" src="js/jscripts/tiny_mce/tiny_mce.js"></script> <script type="text/javascript"> tinyMCE.init({ mode : "exact", elements : "pagina_testo_colonna1,pagina_testo_colonna2,pagina_testo_colonna3", theme : "advanced", plugins:"paste,autoresize", plugin_preview_width : "100%", width : "100%", theme_advanced_buttons1 : "pastetext,|,bold,italic,underline,strikethrough,|,bullist,numlist,|,indent,outdent,|,undo,redo,|,justifyleft,justifycenter,justifyright,justifyfull,|,link,unlink,|,charmap", theme_advanced_buttons2 : "", theme_advanced_buttons3 :"", theme_advanced_disable : "image,anchor,cleanup,help,code,hr,removeformat,sub,sup", theme_advanced_resizing : true, paste_text_use_dialog : true, relative_urls : false, remove_script_host : false }); </script> <!-- /TinyMCE --> I have added "autoresize" to the plugins list, but my editors do not resize while I'm writing; they simply scroll. I have multiple editors on the same page. What's wrong with my code?

    Read the article

  • Who spotted the omission?

    - by olaf.heimburger
    In my entry OFM 11g: Install OAM 10.1.4.3 (32-bit) on 64-bit RedHat AS 5 I explained how to install OAM 10.1.4.3 (32-bit) on 64-bit RedHat. This is great and works. If you seriously want to use OAM 10.1.4.3 you should consider OHS 11g 32-bit. But this installation is a bit tricky. Nearly all the tricks to get this done are described in the above-mentioned entry. Today I realized that I missed a small bit needed to get the installation successfully done. The missing part is within the script that creates a vital piece of the OHS 11g package. This part is called genclientsh and resides in $OHS_HOME/bin. This script uses gcc to link binaries. By default this script works great, but on 64-bit Linux it fails. To get around this, find the variable LD and change its value from gcc to gcc -m32. Done. Caveat: On support.oracle.com you will find a Note that suggests building a small shell script named gcc that includes the -m32 switch. Actually, I consider this dangerous, because we are humans and tend to forget things quickly. Building a globally available script that changes things for a single setup has side effects and will lead to unpredictable results.

    Read the article

  • Why is my Google Translate data not showing up in Analytics?

    - by learnvst
    I have a google translate widget on my website with gaTrack set to true and the correct gaId, but no translate events show up in analytics under content Events Overview after a week of the widget being live. The code snippet below was auto generated by the translate website, and works fine on the site. Any ideas why I'm not getting any translate events data? <li> <div id="google_translate_element"></div><script type="text/javascript"> function googleTranslateElementInit() { new google.translate.TranslateElement({pageLanguage: 'en', layout: google.translate.TranslateElement.InlineLayout.SIMPLE, gaTrack: true, gaId: 'UA-blahblah'}, 'google_translate_element'); } </script><script type="text/javascript" src="//translate.google.com/translate_a/element.js?cb=googleTranslateElementInit"></script> </li>

    Read the article

  • What are the necessary periodic checks for a server?

    - by Edmund
    Hi all, I have some servers which my team uses for hosting internal applications for development purposes. I am thinking of setting up some periodic checks but do not know how to go about it. Can you advise on the following (preferably as a Windows bat file or Linux script)? How to write a script that will check the content of a webpage to verify whether the site is down. How to write a script that will check if the website is down by pinging it. How to write a script that will check whether the server is running out of disk space. How to write a script that will email the system administrator if any of the above checks fail.
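
    As a starting point on the Linux side, a single cron-able script can cover all four checks. The sketch below is an illustration only: the URL, hostname, marker string, threshold and admin address are placeholders, and it assumes curl and a working mail command are available.

        #!/bin/sh
        # server-check.sh - rough sketch of a periodic health check; run it from cron.
        URL="http://intranet.example.com/app/"      # page to test
        HOST="intranet.example.com"                 # host to ping
        MARKER="Login"                              # string expected in the page body
        ADMIN="sysadmin@example.com"                # where to send alerts
        THRESHOLD=90                                # % disk usage that triggers an alert

        FAIL=""

        # 1. Content check: fetch the page and look for the expected marker text.
        curl -s --max-time 10 "$URL" | grep -q "$MARKER" \
            || FAIL="$FAIL Content check failed for $URL."

        # 2. Reachability check: ping the host a few times.
        ping -c 3 "$HOST" > /dev/null 2>&1 \
            || FAIL="$FAIL Host $HOST did not answer ping."

        # 3. Disk space check: list filesystems above the threshold.
        DISK=$(df -P | awk -v t="$THRESHOLD" 'NR > 1 && $5 + 0 > t {print $6 " at " $5}')
        [ -n "$DISK" ] && FAIL="$FAIL Disk usage warning: $DISK."

        # 4. Mail the administrator if anything failed.
        [ -n "$FAIL" ] && echo "$FAIL" | mail -s "Server check failed on $(hostname)" "$ADMIN"
        exit 0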

    Read the article
