  • Where in a typical rendering pipeline do visibility and shading occur?

    - by user29163
    I am taking a computer graphics course. The book and the lecture notes are vague on the order of the different steps in the rendering process. For example, if we have specified a view of a scene and then want to perform a projection transformation for that view, we have to go through a sequence of transformations. In the end we end up with a normalized "view cube", ready to be mapped to 2D after clipping. But why do we end up with a cube (i.e., a 3D thing) when a projection projects the 3D objects to 2D? (Is the depth information lost?) The other line of reasoning is that all the information needed later is stored within the "cube", that visibility detection and shading are performed with respect to this cube, and that we then perform rasterization.
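
    Not part of the original question, but a small C++ sketch may make the "cube" point concrete. The conventions below (OpenGL-style clip space, a symmetric 90-degree frustum) are assumptions for illustration; the key observation is that the projective divide foreshortens x and y while z is remapped into the normalized cube rather than discarded, which is exactly what later visibility (z-buffer) tests need:

        #include <cstdio>

        struct Vec3 { float x, y, z; };

        // Projects a view-space point (right-handed frame, camera looking down -z)
        // into normalized device coordinates, using an OpenGL-style symmetric
        // 90-degree frustum with near plane n and far plane f.
        Vec3 toNDC(Vec3 v, float n, float f) {
            float xc = v.x;                                       // left/right planes at +-n cancel out
            float yc = v.y;
            float zc = -(f + n) / (f - n) * v.z - 2.0f * f * n / (f - n);
            float wc = -v.z;                                      // the "perspective" row of the matrix
            return Vec3{ xc / wc, yc / wc, zc / wc };             // projective divide
        }

        int main() {
            Vec3 nearPt = { 1.0f, 1.0f, -2.0f };   // closer to the camera
            Vec3 farPt  = { 1.0f, 1.0f, -8.0f };   // farther away, same x/y in view space
            Vec3 a = toNDC(nearPt, 1.0f, 10.0f);
            Vec3 b = toNDC(farPt,  1.0f, 10.0f);
            std::printf("near -> (%.3f, %.3f, %.3f)\n", a.x, a.y, a.z);
            std::printf("far  -> (%.3f, %.3f, %.3f)\n", b.x, b.y, b.z);
            // x and y are foreshortened (the "2D" part of the projection), but
            // a.z < b.z still holds inside the normalized cube, which is what
            // a z-buffer uses to resolve visibility after clipping.
            return 0;
        }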

  • There are Cloud Heroes Among Us: download the ebook

    - by Javier Puerta
    Given the importance of information systems in today's business world, database administrators (DBAs) and other technology professionals often perform heroic deeds for their organizations. While many of these IT pros are too humble to acknowledge their worth, we profiled five real-world IT heroes to demonstrate their value to their organizations, and to the industry at large. Many of our heroes are bloggers who share new ideas and developments with their colleagues. Our heroes are creative individuals who can accurately assess a situation and rally their colleagues to address pressing issues. These heroes are authors and well-known Oracle technology user group leaders. Read their stories today and join them in leading a greater future for the IT industry.

  • Number of iterations to real time

    - by Ivansek
    I have an animation of traffic: 20 cars in a road network, each with a start node and an end node. Each car knows how far it has to travel to reach its end node. I move each car by 10 px every 20 ms. To move all the cars from their start nodes to their end nodes I need 60 iterations, that is, 60 * 20 ms = 1200 ms. Now I want to convert this time, or use the data I have, to the real time it would take a car moving at 50 km/h. How can I do that? Any ideas?
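
    One way to think about it, as a minimal C++ sketch: the missing piece is a map scale (how many meters one pixel represents), which the question does not specify, so the value below is an assumption:

        #include <cstdio>

        int main() {
            // Assumed map scale (not given in the question): how many meters
            // one pixel represents. Everything below follows from this choice.
            const double metersPerPixel = 0.5;

            const double pixelsPerTick = 10.0;                         // each car moves 10 px per iteration
            const double speedKmh      = 50.0;                         // desired real-world speed
            const double speedMps      = speedKmh * 1000.0 / 3600.0;   // ~13.89 m/s

            // Real-world distance covered in one iteration:
            const double metersPerTick = pixelsPerTick * metersPerPixel;

            // Real time that one iteration represents at 50 km/h:
            const double secondsPerTick = metersPerTick / speedMps;

            std::printf("one tick = %.3f s of real time\n", secondsPerTick);
            std::printf("60 ticks = %.1f s of real time (vs 1.2 s of animation)\n",
                        60.0 * secondsPerTick);
            return 0;
        }

    With the scale fixed, you can either report the real-time equivalent of the animation (21.6 s in this sketch), or invert it: delay each iteration by secondsPerTick of wall-clock time instead of 20 ms, and the cars then move at 50 km/h in real time.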

  • Why Deliver Customer Service in the Cloud?

    - by Charles Knapp
    In volatile, competitive markets, delivering exceptional service across channels is essential. But delivering world-class service on tight budgets, and delivering improvements quickly, is a tough challenge. That's why so many of the world's most successful organizations choose to deliver customer service in the cloud. Example: Michele Watson, VP of Global Customer Care at Match.com, says Oracle's service in the cloud "helps our customers receive the support they need in real time, our contact center agents be more productive and helpful, and our executive and product development teams receive detailed feedback to continue to improve our customers' experience." Learn more here about why you should consider delivering customer service in the cloud.

  • How to pass user-defined structs using Boost.MPI

    - by lava
    I am trying to send a user-defined structure named ABC using the boost::mpi::send() call. The struct contains a vector "data" whose size is determined at runtime. Objects of struct ABC are sent by the master to the slaves, but a slave needs to know the size of the vector "data" so that a sufficient buffer is available to receive the data. I can work around this by sending the size first and initializing a sufficient buffer on the slave before receiving the objects of struct ABC, but that defeats the whole purpose of using STL containers. Does anyone know of a better way to handle this? Any suggestions are greatly appreciated. Here is a sample program that describes the intent of my code; it fails at runtime for the reason mentioned above.

        #include <boost/mpi.hpp>
        #include <boost/serialization/vector.hpp> // needed to serialize std::vector
        #include <iostream>
        #include <vector>

        namespace mpi = boost::mpi;

        struct ABC
        {
            double cur_stock_price;
            double strike_price;
            double risk_free_rate;
            double option_price;
            std::vector<char> data;
        };

        namespace boost { namespace serialization {
        template <class Archive>
        void serialize (Archive &ar, ABC &abc, unsigned int version)
        {
            ar & abc.cur_stock_price;
            ar & abc.strike_price;
            ar & abc.risk_free_rate;
            ar & abc.option_price;
            ar & abc.data; // was 'bopr.data', a typo that does not compile
        }
        }}

        BOOST_IS_MPI_DATATYPE (ABC);

        int main (int argc, char* argv[])
        {
            mpi::environment env (argc, argv);
            mpi::communicator world;

            if (world.rank () == 0)
            {
                ABC abc_obj;
                abc_obj.cur_stock_price = 1.0;
                abc_obj.strike_price = 5.0;
                abc_obj.risk_free_rate = 2.5;
                abc_obj.option_price = 3.0;
                abc_obj.data.push_back ('a');
                abc_obj.data.push_back ('b');
                world.send (1, 0, abc_obj); // tag 0; a send needs a concrete tag
                std::cout << "Rank 0 OK!" << std::endl;
            }
            else if (world.rank () == 1)
            {
                ABC abc_obj;
                // Fails here because abc_obj is not big enough
                world.recv (0, 0, abc_obj);
                std::cout << "Rank 1 OK!" << std::endl;
                for (std::size_t i = 0; i < abc_obj.data.size (); i++)
                    std::cout << i << "=" << abc_obj.data[i] << std::endl;
            }
            // no explicit MPI_Finalize(): mpi::environment finalizes MPI on destruction
            return 0;
        }
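
    For comparison, here is a minimal sketch of the approach Boost.MPI intends for types like this, as far as I can tell: drop BOOST_IS_MPI_DATATYPE (it declares the type to be a fixed-layout MPI datatype, which a runtime-sized vector is not) and let the serialize() function run through a real archive. The archive then transmits the vector's size along with its contents, so the receiver needs no pre-sized buffer. The Payload struct and tag value are illustrative, not from the original post:

        #include <boost/mpi.hpp>
        #include <boost/serialization/vector.hpp> // required to serialize std::vector
        #include <iostream>
        #include <vector>

        namespace mpi = boost::mpi;

        struct Payload
        {
            double price;
            std::vector<char> data;

            template <class Archive>
            void serialize (Archive &ar, const unsigned int /*version*/)
            {
                ar & price;
                ar & data; // the vector's size travels with the archive
            }
        };
        // Note: no BOOST_IS_MPI_DATATYPE here, so send/recv use serialization.

        int main (int argc, char* argv[])
        {
            mpi::environment env (argc, argv);
            mpi::communicator world;

            if (world.rank () == 0)
            {
                Payload p;
                p.price = 3.0;
                p.data.push_back ('a');
                p.data.push_back ('b');
                world.send (1, 0, p);       // serialized send, tag 0
            }
            else if (world.rank () == 1)
            {
                Payload p;                  // no pre-sizing needed
                world.recv (0, 0, p);       // vector is resized automatically
                std::cout << "received " << p.data.size () << " bytes" << std::endl;
            }
            return 0;
        }

    Run with two or more ranks, e.g. mpirun -np 2 ./demo. The trade-off is that serialized sends are slower than plain MPI datatypes, but they are the mechanism that handles dynamically sized members.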

  • How to support 3-byte UTF-8 Characters in ANT

    - by efelton
    I am trying to support UTF-8 characters in my Ant script. As long as the character strings are made up of 2-byte UTF-8 characters, such as:

        Lògìñ Ùsèr ÌÐ

    things work fine. But when I use the Unicode Han character 我, which according to http://www.fileformat.info/info/unicode/char/6211/index.htm has the UTF-8 encoding 0xE6 0x88 0x91, things break. In UltraEdit I can see that my input properties file has the byte values "E6 88 91" in a row, so I'm fairly confident that my input is correct, and when I open the same file in Notepad++ I can see all the characters correctly. Here is my build script:

        <?xml version="1.0" encoding="UTF-8" ?>
        <project name="utf8test" default="all" basedir=".">
            <target name="all">
                <loadproperties encoding="UTF-8" srcfile="./apps.properties.all.txt" />
                <echo>No encoding ${common.app.name}</echo>
                <echo encoding="UTF-8">UTF-8 ${common.app.name}</echo>
                <echo encoding="UnicodeLittle">UnicodeLittle ${common.app.name}</echo>
                <echo encoding="UnicodeLittleUnmarked">UnicodeLittleUnmarked ${common.app.name}</echo>
                <echo>${common.app.ServerName}</echo>
                <echo>${bb.vendor}</echo>
                <echo>No encoding ${common.app.UserIdText}</echo>
                <echo encoding="UTF-8">UTF-8 ${common.app.UserIdText}</echo>
                <echo encoding="UnicodeLittle">UnicodeLittle ${common.app.UserIdText}</echo>
                <echo encoding="UnicodeLittleUnmarked">UnicodeLittleUnmarked ${common.app.UserIdText}</echo>
                <echoproperties />
            </target>
        </project>

    And here is my properties file:

        common.app=VrvPsLTst
        common.app.name=??
        common.app.description=Pseudo Loc Test App for Build Script testing
        common.app.ServerName=http://Vèrìvò.com
        bb.vendor=Vèrìvò
        common.app.PasswordText=Pàsswòrð
        bb.override.list=MP_COPYRIGHTTEXT, "Çòpÿrìght 2012 Vèrívó Bùîlð TéàM"
        common.app.LoginButtonText=Lògìñ
        common.app.UserIdText=Ùsèr ÌÐ
        bb.SMSSuccess=Mèssàgéß Sùççêssfúllÿ Sëñt
        common.app.LoginScreenMessage=WèlçòMé Mêssàgë
        common.app.LoginProgressMessage=Àùthèñtìçàtíòñ îñ prógréss...
        ios.RegistrationText=Règìstràtíòñ Téxt
        ios.RegistrationURL=http://www.josscrowcroft.com/2011/code/utf-8-multibyte-characters-in-url-parameters-%E2%9C%93/

    Here is what the output looks like:

        Buildfile: C:\Temp\utf8\build.xml
        all:
        [echo] No encoding ??
        [echo] UTF-8 ??
        [echo] ÿþU n i c o d e L i t t l e ? ?
        [echo] U n i c o d e L i t t l e U n m a r k e d ? ?
        [echo] http://Vèrìvò.com
        [echo] Vèrìvò
        [echo] No encoding Ùsèr ÌÐ
        [echo] UTF-8 Ùsèr ÃŒÃ?
        [echo] ÿþU n i c o d e L i t t l e Ù s è r Ì Ð
        [echo] U n i c o d e L i t t l e U n m a r k e d Ù s è r Ì Ð
        [echoproperties] #Ant properties
        [echoproperties] #Mon Jun 18 15:25:13 EDT 2012
        [echoproperties] ant.core.lib=C\:\\ant\\lib\\ant.jar
        [echoproperties] ant.file=C\:\\Temp\\utf8\\build.xml
        [echoproperties] ant.file.type=file
        [echoproperties] ant.file.type.utf8test=file
        [echoproperties] ant.file.utf8test=C\:\\Temp\\utf8\\build.xml
        [echoproperties] ant.home=c\:\\ant\\bin\\..
        [echoproperties] ant.java.version=1.6
        [echoproperties] ant.library.dir=C\:\\ant\\lib
        [echoproperties] ant.project.default-target=all
        [echoproperties] ant.project.invoked-targets=all
        [echoproperties] ant.project.name=utf8test
        [echoproperties] ant.version=Apache Ant version 1.8.1 compiled on April 30 2010
        [echoproperties] awt.toolkit=sun.awt.windows.WToolkit
        [echoproperties] basedir=C\:\\Temp\\utf8
        [echoproperties] bb.SMSSuccess=M\u00E8ss\u00E0g\u00E9\u00DF S\u00F9\u00E7\u00E7\u00EAssf\u00FAll\u00FF S\u00EB\u00F1t
        [echoproperties] bb.override.list=MP_COPYRIGHTTEXT, "\u00C7\u00F2p\u00FFr\u00ECght 2012 V\u00E8r\u00EDv\u00F3 B\u00F9\u00EEl\u00F0 T\u00E9\u00E0?"
        [echoproperties] bb.vendor=V\u00E8r\u00ECv\u00F2
        [echoproperties] common.app=VrvPsLTst
        [echoproperties] common.app.LoginButtonText=L\u00F2g\u00EC\u00F1
        [echoproperties] common.app.LoginProgressMessage=\u00C0\u00F9th\u00E8\u00F1t\u00EC\u00E7\u00E0t\u00ED\u00F2\u00F1 \u00EE\u00F1 pr\u00F3gr\u00E9ss...
        [echoproperties] common.app.LoginScreenMessage=W\u00E8l\u00E7\u00F2?\u00E9 M\u00EAss\u00E0g\u00EB
        [echoproperties] common.app.PasswordText=P\u00E0ssw\u00F2r\u00F0
        [echoproperties] common.app.ServerName=http\://V\u00E8r\u00ECv\u00F2.com
        [echoproperties] common.app.UserIdText=\u00D9s\u00E8r \u00CC\u00D0
        [echoproperties] common.app.description=Pseudo Loc Test App for Build Script testing
        [echoproperties] common.app.name=??
        [echoproperties] file.encoding=Cp1252
        [echoproperties] file.encoding.pkg=sun.io
        [echoproperties] file.separator=\\
        [echoproperties] ios.RegistrationText=R\u00E8g\u00ECstr\u00E0t\u00ED\u00F2\u00F1 T\u00E9xt
        [echoproperties] ios.RegistrationURL=http\://www.josscrowcroft.com/2011/code/utf-8-multibyte-characters-in-url-parameters-%E2%9C%93/
        [echoproperties] java.awt.graphicsenv=sun.awt.Win32GraphicsEnvironment
        [echoproperties] java.awt.printerjob=sun.awt.windows.WPrinterJob
        [echoproperties] java.class.path=c\:\\ant\\bin\\..\\lib\\ant-launcher.jar;C\:\\Temp\\utf8\\.\\;C\:\\Program Files (x86)\\Java\\jre7\\lib\\ext\\QTJava.zip;C\:\\ant\\lib\\ant-antlr.jar;C\:\\ant\\lib\\ant-apache-bcel.jar;C\:\\ant\\lib\\ant-apache-bsf.jar;C\:\\ant\\lib\\ant-apache-log4j.jar;C\:\\ant\\lib\\ant-apache-oro.jar;C\:\\ant\\lib\\ant-apache-regexp.jar;C\:\\ant\\lib\\ant-apache-resolver.jar;C\:\\ant\\lib\\ant-apache-xalan2.jar;C\:\\ant\\lib\\ant-commons-logging.jar;C\:\\ant\\lib\\ant-commons-net.jar;C\:\\ant\\lib\\ant-contrib-1.0b3.jar;C\:\\ant\\lib\\ant-jai.jar;C\:\\ant\\lib\\ant-javamail.jar;C\:\\ant\\lib\\ant-jdepend.jar;C\:\\ant\\lib\\ant-jmf.jar;C\:\\ant\\lib\\ant-jsch.jar;C\:\\ant\\lib\\ant-junit.jar;C\:\\ant\\lib\\ant-launcher.jar;C\:\\ant\\lib\\ant-netrexx.jar;C\:\\ant\\lib\\ant-nodeps.jar;C\:\\ant\\lib\\ant-starteam.jar;C\:\\ant\\lib\\ant-stylebook.jar;C\:\\ant\\lib\\ant-swing.jar;C\:\\ant\\lib\\ant-testutil.jar;C\:\\ant\\lib\\ant-trax.jar;C\:\\ant\\lib\\ant-weblogic.jar;C\:\\ant\\lib\\ant.jar;C\:\\ant\\lib\\bb-ant-tools.jar;C\:\\ant\\lib\\xercesImpl.jar;C\:\\ant\\lib\\xml-apis.jar;C\:\\Program Files\\Java\\jre7\\lib\\tools.jar
        [echoproperties] java.class.version=51.0
        [echoproperties] java.endorsed.dirs=C\:\\Program Files\\Java\\jre7\\lib\\endorsed
        [echoproperties] java.ext.dirs=C\:\\Program Files\\Java\\jre7\\lib\\ext;C\:\\Windows\\Sun\\Java\\lib\\ext
        [echoproperties] java.home=C\:\\Program Files\\Java\\jre7
        [echoproperties] java.io.tmpdir=C\:\\Users\\efelton\\AppData\\Local\\Temp\\
        [echoproperties] java.library.path=C\:\\Windows\\SYSTEM32;C\:\\Windows\\Sun\\Java\\bin;C\:\\Windows\\system32;C\:\\Windows;C\:\\Windows\\SYSTEM32;C\:\\Windows;C\:\\Windows\\SYSTEM32\\WBEM;C\:\\Windows\\SYSTEM32\\WINDOWSPOWERSHELL\\V1.0\\;C\:\\PROGRAM FILES\\INTEL\\WIFI\\BIN\\;C\:\\PROGRAM FILES\\COMMON FILES\\INTEL\\WIRELESSCOMMON\\;C\:\\PROGRAM FILES (X86)\\MICROSOFT SQL SERVER\\100\\TOOLS\\BINN\\;C\:\\PROGRAM FILES\\MICROSOFT SQL SERVER\\100\\TOOLS\\BINN\\;C\:\\PROGRAM FILES\\MICROSOFT SQL SERVER\\100\\DTS\\BINN\\;C\:\\PROGRAM FILES (X86)\\MICROSOFT SQL SERVER\\100\\TOOLS\\BINN\\VSSHELL\\COMMON7\\IDE\\;C\:\\PROGRAM FILES (X86)\\MICROSOFT SQL SERVER\\100\\DTS\\BINN\\;C\:\\Program Files\\ThinkPad\\Bluetooth Software\\;C\:\\Program Files\\ThinkPad\\Bluetooth Software\\syswow64;C\:\\Program Files (x86)\\QuickTime\\QTSystem\\;C\:\\Program Files (x86)\\AccuRev\\bin;C\:\\Program Files\\Java\\jdk1.7.0_04\\bin;C\:\\Program Files (x86)\\IDM Computer Solutions\\UltraEdit\\;.
        [echoproperties] java.runtime.name=Java(TM) SE Runtime Environment
        [echoproperties] java.runtime.version=1.7.0_04-b22
        [echoproperties] java.specification.name=Java Platform API Specification
        [echoproperties] java.specification.vendor=Oracle Corporation
        [echoproperties] java.specification.version=1.7
        [echoproperties] java.vendor=Oracle Corporation
        [echoproperties] java.vendor.url=http\://java.oracle.com/
        [echoproperties] java.vendor.url.bug=http\://bugreport.sun.com/bugreport/
        [echoproperties] java.version=1.7.0_04
        [echoproperties] java.vm.info=mixed mode
        [echoproperties] java.vm.name=Java HotSpot(TM) 64-Bit Server VM
        [echoproperties] java.vm.specification.name=Java Virtual Machine Specification
        [echoproperties] java.vm.specification.vendor=Oracle Corporation
        [echoproperties] java.vm.specification.version=1.7
        [echoproperties] java.vm.vendor=Oracle Corporation
        [echoproperties] java.vm.version=23.0-b21
        [echoproperties] line.separator=\r\n
        [echoproperties] os.arch=amd64
        [echoproperties] os.name=Windows 7
        [echoproperties] os.version=6.1
        [echoproperties] path.separator=;
        [echoproperties] sun.arch.data.model=64
        [echoproperties] sun.boot.class.path=C\:\\Program Files\\Java\\jre7\\lib\\resources.jar;C\:\\Program Files\\Java\\jre7\\lib\\rt.jar;C\:\\Program Files\\Java\\jre7\\lib\\sunrsasign.jar;C\:\\Program Files\\Java\\jre7\\lib\\jsse.jar;C\:\\Program Files\\Java\\jre7\\lib\\jce.jar;C\:\\Program Files\\Java\\jre7\\lib\\charsets.jar;C\:\\Program Files\\Java\\jre7\\lib\\jfr.jar;C\:\\Program Files\\Java\\jre7\\classes
        [echoproperties] sun.boot.library.path=C\:\\Program Files\\Java\\jre7\\bin
        [echoproperties] sun.cpu.endian=little
        [echoproperties] sun.cpu.isalist=amd64
        [echoproperties] sun.desktop=windows
        [echoproperties] sun.io.unicode.encoding=UnicodeLittle
        [echoproperties] sun.java.command=org.apache.tools.ant.launch.Launcher -cp .;C\:\\Program Files (x86)\\Java\\jre7\\lib\\ext\\QTJava.zip
        [echoproperties] sun.java.launcher=SUN_STANDARD
        [echoproperties] sun.jnu.encoding=Cp1252
        [echoproperties] sun.management.compiler=HotSpot 64-Bit Tiered Compilers
        [echoproperties] sun.os.patch.level=Service Pack 1
        [echoproperties] user.country=US
        [echoproperties] user.dir=C\:\\Temp\\utf8
        [echoproperties] user.home=C\:\\Users\\efelton
        [echoproperties] user.language=en
        [echoproperties] user.name=efelton
        [echoproperties] user.script=
        [echoproperties] user.timezone=
        [echoproperties] user.variant=
        BUILD SUCCESSFUL
        Total time: 1 second

    Thank you for your help.

    EDIT/UPDATE 6/19/2012: I am developing in a Windows environment. I have installed a TTF from http://freedesktop.org/wiki/Software/CJKUnifonts/Download and updated UltraEdit to use it, and I can now see the Chinese characters.

        <?xml version="1.0" encoding="UTF-8" ?>
        <project name="utf8test" default="all" basedir=".">
            <target name="all">
                <echo>??</echo>
                <echo encoding="ISO-8859-1">ISO-8859-1 ??</echo>
                <echo encoding="UTF-8">UTF-8 ??</echo>
                <echo file="echo_output.txt" append="true">?? ${line.separator}</echo>
                <echo file="echo_output.txt" append="true" encoding="ISO-8859-1">ISO-8859-1 ?? ${line.separator}</echo>
                <echo file="echo_output.txt" append="true" encoding="UTF-8">UTF-8 ?? ${line.separator}</echo>
                <echo file="echo_output.txt" append="true" encoding="UnicodeLittle">UnicodeLittle ?? ${line.separator}</echo>
                <echo file="echo_output.txt" append="true" encoding="UnicodeLittleUnmarked">UnicodeLittleUnmarked ?? ${line.separator}</echo>
            </target>
        </project>

    The output captured by running inside UltraEdit is:

        Buildfile: E:\temp\utf8\build.xml
        all:
        [echo] ??
        [echo] ISO-8859-1 ??
        [echo] UTF-8 ??
        BUILD SUCCESSFUL
        Total time: 1 second

    And the echo_output.txt file shows up like this:

        ??
        ISO-8859-1 ??
        UTF-8 ??
        ÿþU n i c o d e L i t t l e ? ?
        U n i c o d e L i t t l e U n m a r k e d ? ?

    So there appears to be something fundamentally wrong with how my Ant environment is set up, since I cannot simply echo the character to the screen or to a file.

  • Create XML file to be used in WordPress post import

    - by adedoy
    Alright, here is the thing. I have a site that was once WordPress but has been converted into 70+ static pages; the admin is deleted and the whole site is static (every page is an index.html). I want to create a script that builds an XML file so that I just have to import it into the new WordPress install. So far I am able to create an XML file, but it only imports one post. The data source is the URL of a page, and I use a jQuery $.get to gather only the posts of a given archive.

        //html
        <input type="text" class="full_path">
        <input type="button" value="Get Data" class="getdata">

        //script
        $('.getdata').click(function(){
            $.get($('.full_path').val(), function(data) {
                post = $(data).find('div [style*="width:530px;"]');
                $('.result').html(post.html());
            });
        }); //get data through AJAX

    I send the cleaned data to the PHP below, which creates the XML:

        $file = 'newpost.xml';
        $post_data = $_REQUEST['post_data'];

        // Open the file to get existing content
        $current = file_get_contents($file);

        // Append a new post to the file
        $catStr = '';
        if (isset($post_data['categories']) && count($post_data['categories']) > 0) {
            foreach ($post_data['categories'] as $category) {
                $catStr .= '<category domain="category" nicename="'.$category.'"><![CDATA['.$category.']]></category>';
            }
        }
        $tagStr = '';
        if (isset($post_data['tags']) && count($post_data['tags']) > 0) {
            foreach ($post_data['tags'] as $tag) {
                // must append with .= (a plain = here would keep only the last tag)
                $tagStr .= '<category domain="post_tag" nicename="'.$tag.'"><![CDATA['.$tag.']]></category>';
            }
        }
        $post_name = str_replace(' ', '-', $post_data["title"]);
        $post_name = str_replace(array('"','/',':','.',',','[',']','“','”'), '', strtolower($post_name));
        // note: this concatenation produces values like "2011-4-0132:45:4",
        // not a valid "Y-m-d H:i:s" date
        $post_date = '2011-4-0'.rand(1, 29).''.rand(1, 12).':'.rand(1, 59).':'.rand(1, 59);
        $pubTime = rand(1, 12).':'.rand(1, 59).':'.rand(1, 59).' +0000';
        // note: wp:post_id and the guid are hard-coded to 1 for every item
        $post = '
        <item>
            <title>'.$post_data["title"].'</title>
            <link>'.$post_data["link"].'</link>
            <pubDate>'.$post_data["date"].' '.$pubTime.'</pubDate>
            <dc:creator>admin</dc:creator>
            <guid isPermaLink="false">http://localhost/saunders/?p=1</guid>
            <description></description>
            <content:encoded><![CDATA['.$post_data["content"].']]></content:encoded>
            <excerpt:encoded><![CDATA[]]></excerpt:encoded>
            <wp:post_id>1</wp:post_id>
            <wp:post_date>'.$post_date.'</wp:post_date>
            <wp:post_date_gmt>'.$post_date.'</wp:post_date_gmt>
            <wp:comment_status>open</wp:comment_status>
            <wp:ping_status>open</wp:ping_status>
            <wp:post_name>'.$post_name.'</wp:post_name>
            <wp:status>publish</wp:status>
            <wp:post_parent>0</wp:post_parent>
            <wp:menu_order>0</wp:menu_order>
            <wp:post_type>post</wp:post_type>
            <wp:post_password></wp:post_password>
            <wp:is_sticky>0</wp:is_sticky>
            '.$catStr.'
            '.$tagStr.'
            <wp:postmeta>
                <wp:meta_key>_edit_last</wp:meta_key>
                <wp:meta_value><![CDATA[1]]></wp:meta_value>
            </wp:postmeta>
        </item>
        ';

        // Write the contents back to the file with the appended post
        file_put_contents($file, $current.$post);

    After appending, I add the code below to complete the XML:

        </channel>
        </rss>

    If I compare the result with an XML file exported from a WordPress site, I see little difference. Please HELP!! Here is a sample of a generated XML:

        <?xml version="1.0" encoding="UTF-8" ?>
        <rss version="2.0"
            xmlns:excerpt="http://wordpress.org/export/1.2/excerpt/"
            xmlns:content="http://purl.org/rss/1.0/modules/content/"
            xmlns:wfw="http://wellformedweb.org/CommentAPI/"
            xmlns:dc="http://purl.org/dc/elements/1.1/"
            xmlns:wp="http://wordpress.org/export/1.2/"
        >
        <channel>
            <title>lols why</title>
            <link>http://localhost/lols</link>
            <description>Just another WordPress site</description>
            <pubDate>Wed, 03 Oct 2012 04:24:04 +0000</pubDate>
            <language>en-US</language>
            <wp:wxr_version>1.2</wp:wxr_version>
            <wp:base_site_url>http://localhost/lols</wp:base_site_url>
            <wp:base_blog_url>http://localhost/lols</wp:base_blog_url>
            <wp:author><wp:author_id>1</wp:author_id><wp:author_login>adedoy</wp:author_login><wp:author_email>[email protected]</wp:author_email><wp:author_display_name><![CDATA[adedoy]]></wp:author_display_name><wp:author_first_name><![CDATA[]]></wp:author_first_name><wp:author_last_name><![CDATA[]]></wp:author_last_name></wp:author>
            <generator>http://wordpress.org/?v=3.4.1</generator>
            <item>
                <title>Sample lift?</title>
                <link>../../breast-lift/delaware-breast-surgery-do-i-need-a-breast-lift/</link>
                <pubDate>Wed, 03 Oct 2012 9:29:16 +0000</pubDate>
                <dc:creator>admin</dc:creator>
                <guid isPermaLink="false">http://localhost/lols/?p=1</guid>
                <description></description>
                <content:encoded><![CDATA[<p>sample</p>]]></content:encoded>
                <excerpt:encoded><![CDATA[]]></excerpt:encoded>
                <wp:post_id>1</wp:post_id>
                <wp:post_date>2011-4-0132:45:4</wp:post_date>
                <wp:post_date_gmt>2011-4-0132:45:4</wp:post_date_gmt>
                <wp:comment_status>open</wp:comment_status>
                <wp:ping_status>open</wp:ping_status>
                <wp:post_name>sample-lift?</wp:post_name>
                <wp:status>publish</wp:status>
                <wp:post_parent>0</wp:post_parent>
                <wp:menu_order>0</wp:menu_order>
                <wp:post_type>post</wp:post_type>
                <wp:post_password></wp:post_password>
                <wp:is_sticky>0</wp:is_sticky>
                <category domain="category" nicename="Sample Lift"><![CDATA[Sample Lift]]></category><category domain="category" nicename="Sample Procedures"><![CDATA[Yeah Procedures]]></category>
                <category domain="post_tag" nicename="delaware"><![CDATA[delaware]]></category>
                <wp:postmeta>
                    <wp:meta_key>_edit_last</wp:meta_key>
                    <wp:meta_value><![CDATA[1]]></wp:meta_value>
                </wp:postmeta>
            </item>
            <item>
                <title>lalalalalala</title>
                <link>../../administrative-tips-for-surgery/delaware-cosmetic-surgery-a-better-experience/</link>
                <pubDate>Wed, 03 Oct 2012 3:20:43 +0000</pubDate>
                <dc:creator>admin</dc:creator>
                <guid isPermaLink="false">http://localhost/lols/?p=1</guid>
                <description></description>
                <content:encoded><![CDATA[ lalalalalala ]]></content:encoded>
                <excerpt:encoded><![CDATA[]]></excerpt:encoded>
                <wp:post_id>1</wp:post_id>
                <wp:post_date>2011-4-0124:39:30</wp:post_date>
                <wp:post_date_gmt>2011-4-0124:39:30</wp:post_date_gmt>
                <wp:comment_status>open</wp:comment_status>
                <wp:ping_status>open</wp:ping_status>
                <wp:post_name>lalalalalala</wp:post_name>
                <wp:status>publish</wp:status>
                <wp:post_parent>0</wp:post_parent>
                <wp:menu_order>0</wp:menu_order>
                <wp:post_type>post</wp:post_type>
                <wp:post_password></wp:post_password>
                <wp:is_sticky>0</wp:is_sticky>
                <category domain="category" nicename="lalalalalala"><![CDATA[lalalalalala]]></category>
                <category domain="post_tag" nicename="oink"><![CDATA[oink]]></category>
                <wp:postmeta>
                    <wp:meta_key>_edit_last</wp:meta_key>
                    <wp:meta_value><![CDATA[1]]></wp:meta_value>
                </wp:postmeta>
            </item>
        </channel>
        </rss>

    Please tell me what I am missing.

  • Issues with signal handling [closed]

    - by user34790
    I am trying to study signal handling behavior in a multiprocess system. I have three signal-generating processes that raise signals of type SIGUSR1 and SIGUSR2, two handler processes that each handle one particular type of signal, and another monitoring process that also receives the signals and then does its work. I have a certain issue: whenever a generating process raises a signal, it is sent to the process group, so it is received by the handler processes as well as the monitoring process. Whenever the signal handlers of the monitoring and handler processes are called, I print a line to indicate it. I was expecting a uniform interleaving of calls between the monitoring and handler processes, and at the beginning the output is indeed uniform. After a while, however, the handler processes' handlers are called in a burst, followed by the monitoring process's handler being called in a burst. Here are my code and output.

        #include <iostream>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <sys/time.h>
        #include <signal.h>
        #include <cstdio>
        #include <stdlib.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>
        #include <unistd.h> // fork, pause, sleep, getpid (was missing)

        #define NUM_SENDER_PROCESSES 3
        #define NUM_HANDLER_PROCESSES 4
        #define NUM_SIGNAL_REPORT 10
        #define MAX_SIGNAL_COUNT 100000

        using namespace std;

        volatile int *usrsig1_handler_count;
        volatile int *usrsig2_handler_count;
        volatile int *usrsig1_sender_count;
        volatile int *usrsig2_sender_count;
        volatile int *lock_1;
        volatile int *lock_2;
        volatile int *lock_3;
        volatile int *lock_4;
        volatile int *lock_5;
        volatile int *lock_6;

        // Used only by the monitoring process
        volatile int monitor_count;
        volatile int usrsig1_monitor_count;
        volatile int usrsig2_monitor_count;
        double time_1[NUM_SIGNAL_REPORT];
        double time_2[NUM_SIGNAL_REPORT];

        // Used only by the main process
        int total_signal_count;

        // For shared memory
        int shmid;
        const int shareSize = sizeof(int) * (10);

        double timestamp()
        {
            struct timeval tp;
            gettimeofday(&tp, NULL);
            return (double)tp.tv_sec + tp.tv_usec / 1000000.;
        }

        pid_t senders[NUM_SENDER_PROCESSES];
        pid_t handlers[NUM_HANDLER_PROCESSES];
        pid_t reporter;

        void signal_catcher_1(int);
        void signal_catcher_2(int);
        void signal_catcher_int(int);
        void signal_catcher_monitor(int);
        void signal_catcher_main(int);

        void terminate_processes()
        {
            // Kill the child processes
            int status;
            cout << "Time up terminating the child processes" << endl;
            for(int i=0; i<NUM_SENDER_PROCESSES; i++) { kill(senders[i], SIGKILL); }
            for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { kill(handlers[i], SIGKILL); }
            kill(reporter, SIGKILL);
            // Wait for the child processes to finish
            for(int i=0; i<NUM_SENDER_PROCESSES; i++) { waitpid(senders[i], &status, 0); }
            for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { waitpid(handlers[i], &status, 0); }
            waitpid(reporter, &status, 0);
        }

        int main(int argc, char *argv[])
        {
            if(argc != 2) {
                cout << "Required parameters missing. " << endl;
                cout << "Option 1 = 1 which means run for 30 seconds" << endl;
                cout << "Option 2 = 2 which means run until 100000 signals" << endl;
                exit(0);
            }
            int option = atoi(argv[1]);
            pid_t pid;

            if(option == 2) {
                if(signal(SIGUSR1, signal_catcher_main) == SIG_ERR) { perror("1"); exit(1); }
                if(signal(SIGUSR2, signal_catcher_main) == SIG_ERR) { perror("2"); exit(1); }
            } else {
                if(signal(SIGUSR1, SIG_IGN) == SIG_ERR) { perror("1"); exit(1); }
                if(signal(SIGUSR2, SIG_IGN) == SIG_ERR) { perror("2"); exit(1); }
            }
            if(signal(SIGINT, signal_catcher_int) == SIG_ERR) { perror("3"); exit(1); }

            // Initializing the shared memory
            cout << "Initializing the shared memory" << endl;
            if((shmid = shmget(IPC_PRIVATE, shareSize, IPC_CREAT|0660)) < 0) {
                perror("shmget fail");
                exit(1);
            }
            usrsig1_handler_count = (int *) shmat(shmid, NULL, 0);
            usrsig2_handler_count = usrsig1_handler_count + 1;
            usrsig1_sender_count = usrsig2_handler_count + 1;
            usrsig2_sender_count = usrsig1_sender_count + 1;
            lock_1 = usrsig2_sender_count + 1;
            lock_2 = lock_1 + 1;
            lock_3 = lock_2 + 1;
            lock_4 = lock_3 + 1;
            lock_5 = lock_4 + 1;
            lock_6 = lock_5 + 1;
            // Initialize them to be zero
            *usrsig1_handler_count = 0;
            *usrsig2_handler_count = 0;
            *usrsig1_sender_count = 0;
            *usrsig2_sender_count = 0;
            *lock_1 = 0; *lock_2 = 0; *lock_3 = 0;
            *lock_4 = 0; *lock_5 = 0; *lock_6 = 0;
            cout << "End of initializing the shared memory" << endl;

            // Registering the signal handlers
            cout << "Registering the signal handlers" << endl;
            for(int i=0; i<NUM_HANDLER_PROCESSES; i++) {
                if((pid = fork()) == 0) {
                    if(i%2 == 0) {
                        struct sigaction action;
                        action.sa_handler = signal_catcher_1;
                        sigemptyset(&action.sa_mask); // sa_mask was left uninitialized
                        action.sa_flags = 0;
                        sigaction(SIGUSR1, &action, NULL);
                        if(signal(SIGUSR2, SIG_IGN) == SIG_ERR) { perror("2"); exit(1); }
                    } else {
                        if(signal(SIGUSR1, SIG_IGN) == SIG_ERR) { perror("1"); exit(1); }
                        struct sigaction action;
                        action.sa_handler = signal_catcher_2;
                        sigemptyset(&action.sa_mask); // same here
                        action.sa_flags = 0;
                        sigaction(SIGUSR2, &action, NULL);
                    }
                    if(signal(SIGINT, SIG_DFL) == SIG_ERR) { perror("2"); exit(1); }
                    while(true) { pause(); }
                    exit(0);
                } else {
                    //cout << "Registered the handler " << pid << endl;
                    handlers[i] = pid;
                }
            }
            cout << "End of registering the signal handlers" << endl;

            // Registering the monitoring process
            cout << "Registering the monitoring process" << endl;
            if((pid = fork()) == 0) {
                struct sigaction action;
                action.sa_handler = signal_catcher_monitor;
                sigemptyset(&action.sa_mask);
                sigset_t block_mask;
                sigemptyset(&block_mask);
                sigaddset(&block_mask, SIGUSR1);
                sigaddset(&block_mask, SIGUSR2);
                action.sa_flags = 0;
                action.sa_mask = block_mask;
                sigaction(SIGUSR1, &action, NULL);
                sigaction(SIGUSR2, &action, NULL);
                if(signal(SIGINT, SIG_DFL) == SIG_ERR) { perror("2"); exit(1); }
                while(true) { pause(); }
                exit(0);
            } else {
                cout << "Monitor's pid is " << pid << endl;
                reporter = pid;
            }
            cout << "End of registering the monitoring process" << endl;

            // Sleep to make sure that the monitor and handler processes are
            // well initialized and ready to handle signals
            sleep(5);

            // Registering the signal generators
            cout << "Registering the signal generators" << endl;
            for(int i=0; i<NUM_SENDER_PROCESSES; i++) {
                if((pid = fork()) == 0) {
                    if(signal(SIGUSR1, SIG_IGN) == SIG_ERR) { perror("1"); exit(1); }
                    if(signal(SIGUSR2, SIG_IGN) == SIG_ERR) { perror("2"); exit(1); }
                    if(signal(SIGINT, SIG_DFL) == SIG_ERR) { perror("2"); exit(1); }
                    srand(i);
                    while(true) {
                        int signal_id = rand()%2 + 1;
                        if(signal_id == 1) {
                            killpg(getpgid(getpid()), SIGUSR1);
                            while(__sync_lock_test_and_set(lock_4, 1) != 0) { }
                            (*usrsig1_sender_count)++;
                            *lock_4 = 0;
                        } else {
                            killpg(getpgid(getpid()), SIGUSR2);
                            while(__sync_lock_test_and_set(lock_5, 1) != 0) { }
                            (*usrsig2_sender_count)++;
                            *lock_5 = 0;
                        }
                        int r = rand()%10 + 1;
                        double s = (double)r/100;
                        usleep((useconds_t)(s * 1000000)); // sleep() takes whole seconds and would truncate s to 0
                    }
                    exit(0);
                } else {
                    //cout << "Registered the sender " << pid << endl;
                    senders[i] = pid;
                }
            }
            //cout << "End of registering the signal generators" << endl;

            // Either sleep and terminate the program, or terminate it once the
            // number of signals generated reaches MAX_SIGNAL_COUNT
            if(option == 1) { // was 'option = 1', an assignment that made option 2 unreachable
                sleep(90);
                terminate_processes();
            } else {
                while(true) {
                    if(total_signal_count >= MAX_SIGNAL_COUNT) {
                        terminate_processes();
                    } else {
                        usleep(1000); // sleep(0.001) truncates to 0
                    }
                }
            }
        }

        void signal_catcher_1(int the_sig)
        {
            while(__sync_lock_test_and_set(lock_1, 1) != 0) { }
            (*usrsig1_handler_count) = (*usrsig1_handler_count) + 1;
            // Note: cout in a signal handler is not async-signal-safe
            cout << "Signal Handler 1 " << *usrsig1_handler_count << endl;
            __sync_lock_release(lock_1);
        }

        void signal_catcher_2(int the_sig)
        {
            while(__sync_lock_test_and_set(lock_2, 1) != 0) { }
            (*usrsig2_handler_count) = (*usrsig2_handler_count) + 1;
            __sync_lock_release(lock_2);
        }

        void signal_catcher_main(int the_sig)
        {
            while(__sync_lock_test_and_set(lock_6, 1) != 0) { }
            total_signal_count++;
            *lock_6 = 0;
        }

        void signal_catcher_int(int the_sig)
        {
            for(int i=0; i<NUM_SENDER_PROCESSES; i++) { kill(senders[i], SIGKILL); }
            for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { kill(handlers[i], SIGKILL); }
            kill(reporter, SIGKILL);
            exit(3);
        }

        void signal_catcher_monitor(int the_sig)
        {
            cout << "Monitoring process " << *usrsig1_handler_count << endl;
        }

    Here is the initial segment of output:

        Monitoring process 0
        Monitoring process 0
        Monitoring process 0
        Monitoring process 0
        Signal Handler 1 1
        Monitoring process 2
        Signal Handler 1 2
        Signal Handler 1 3
        Signal Handler 1 4
        Monitoring process 4
        Monitoring process
        Signal Handler 1 6
        Signal Handler 1 7
        Monitoring process 7
        Monitoring process 8
        Monitoring process 8
        Signal Handler 1 9
        Monitoring process 9
        Monitoring process 9
        Monitoring process 10
        Signal Handler 1 11
        Monitoring process 11
        Monitoring process 12
        Signal Handler 1 13
        Signal Handler 1 14
        Signal Handler 1 15
        Signal Handler 1 16
        Signal Handler 1 17
        Signal Handler 1 18
        Monitoring process 19
        Signal Handler 1 20
        Monitoring process 20
        Signal Handler 1 21
        Monitoring process 21
        Monitoring process 21
        Monitoring process 22
        Monitoring process 22
        Monitoring process 23
        Signal Handler 1 24
        Signal Handler 1 25
        Monitoring process 25
        Signal Handler 1 27
        Signal Handler 1 28
        Signal Handler 1 29

    Here is a segment where the handler processes' signal handlers are called in a burst: Signal Handler 1 456 Signal Handler 1 457 Signal Handler 1 458 Signal Handler 1 459 Signal Handler 1 460 Signal Handler 1 461 Signal Handler 1 462 Signal Handler 1 463 Signal Handler 1 464 Signal Handler 1 465 Signal Handler 1 466 Signal Handler 1 467 Signal Handler 1 468 Signal Handler 1 469 Signal Handler 1 470 Signal Handler 1 471 Signal Handler 1 472 Signal Handler 1 473 Signal Handler 1 474 Signal Handler 1 475 Signal Handler 1 476 Signal Handler 1 477 Signal Handler 1 478 Signal Handler 1 479 Signal Handler 1 480 Signal Handler 1 481 Signal Handler 1 482 Signal Handler 1 483 Signal Handler 1 484 Signal Handler 1 485 Signal Handler 1 486 Signal Handler 1 487 Signal Handler 1 488 Signal Handler 1 489 Signal Handler 1 490 Signal Handler 1 491 Signal Handler 1 492 Signal Handler 1 493 Signal Handler 1 494 Signal Handler 1 495 Signal Handler 1 496 Signal Handler 1 497 Signal Handler 1 498 Signal Handler 1 499 Signal Handler 1 500 Signal Handler 1 501 Signal Handler 1 502 Signal Handler 1 503 Signal Handler 1 504 Signal Handler 1 505 Signal Handler 1 506

    And here is a segment where the monitoring process's signal handler is called in a burst: Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140

    Why isn't the interleaving uniform afterwards? Why are the handlers called in bursts?
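
    A standalone C++/POSIX sketch (my own, not from the thread) of the effect that usually produces such bursts: standard signals like SIGUSR1 are not queued. While one instance is pending, or while the process is blocked in a handler, further instances of the same signal are coalesced into a single delivery, so under load the counts seen by different processes drift apart and deliveries clump together. POSIX real-time signals (SIGRTMIN and up, sent with sigqueue) are queued instead; Linux is assumed here:

        #include <signal.h>
        #include <unistd.h>
        #include <cstdio>
        #include <cstring>

        static volatile sig_atomic_t usr1_count = 0;
        static volatile sig_atomic_t rt_count = 0;

        static void on_usr1(int) { usr1_count = usr1_count + 1; }
        static void on_rt(int)   { rt_count = rt_count + 1; }

        int main() {
            struct sigaction sa;
            std::memset(&sa, 0, sizeof sa);
            sigemptyset(&sa.sa_mask);
            sa.sa_handler = on_usr1;
            sigaction(SIGUSR1, &sa, NULL);
            sa.sa_handler = on_rt;
            sigaction(SIGRTMIN, &sa, NULL);

            // Block both signals, raise each one 10 times, then unblock.
            sigset_t block, old;
            sigemptyset(&block);
            sigaddset(&block, SIGUSR1);
            sigaddset(&block, SIGRTMIN);
            sigprocmask(SIG_BLOCK, &block, &old);

            union sigval sv;
            sv.sival_int = 0;
            for (int i = 0; i < 10; ++i) {
                kill(getpid(), SIGUSR1);           // standard signal: at most one stays pending
                sigqueue(getpid(), SIGRTMIN, sv);  // real-time signal: every instance is queued
            }
            sigprocmask(SIG_SETMASK, &old, NULL);  // unblock; pending signals are delivered now

            std::printf("SIGUSR1 deliveries: %d (10 raised, coalesced)\n", (int)usr1_count);
            std::printf("SIGRTMIN deliveries: %d (10 raised, queued)\n", (int)rt_count);
            return 0;
        }

    If one delivery per event really matters, the usual alternatives to SIGUSR1/SIGUSR2 are sigqueue with real-time signals, or a pipe/message queue instead of signals altogether. Printing with cout inside a handler also interleaves unpredictably, which is why one output line above lost its number.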

  • Seeking on a Heap, and Two Useful DMVs

    - by Paul White
    So far in this mini-series on seeks and scans, we have seen that a simple ‘seek’ operation can be much more complex than it first appears.  A seek can contain one or more seek predicates – each of which can either identify at most one row in a unique index (a singleton lookup) or a range of values (a range scan).  When looking at a query plan, we will often need to look at the details of the seek operator in the Properties window to see how many operations it is performing, and what type of operation each one is.  As you saw in the first post in this series, the number of hidden seeking operations can have an appreciable impact on performance.

    Measuring Seeks and Scans

    I mentioned in my last post that there is no way to tell from a graphical query plan whether you are seeing a singleton lookup or a range scan.  You can work it out – if you happen to know that the index is defined as unique and the seek predicate is an equality comparison, but there’s no separate property that says ‘singleton lookup’ or ‘range scan’.  This is a shame, and if I had my way, the query plan would show different icons for range scans and singleton lookups – perhaps also indicating whether the operation was one or more of those operations underneath the covers.

    In light of all that, you might be wondering if there is another way to measure how many seeks of either type are occurring in your system, or for a particular query.  As is often the case, the answer is yes – we can use a couple of dynamic management views (DMVs): sys.dm_db_index_usage_stats and sys.dm_db_index_operational_stats.

    Index Usage Stats

    The index usage stats DMV contains counts of index operations from the perspective of the Query Executor (QE) – the SQL Server component that is responsible for executing the query plan.  It has three columns that are of particular interest to us:

    user_seeks – the number of times an Index Seek operator appears in an executed plan
    user_scans – the number of times a Table Scan or Index Scan operator appears in an executed plan
    user_lookups – the number of times an RID or Key Lookup operator appears in an executed plan

    An operator is counted once per execution (generating an estimated plan does not affect the totals), so an Index Seek that executes 10,000 times in a single plan execution adds 1 to the count of user seeks.  Even less intuitively, an operator is also counted once per execution even if it is not executed at all.  I will show you a demonstration of each of these things later in this post.

    Index Operational Stats

    The index operational stats DMV contains counts of index and table operations from the perspective of the Storage Engine (SE).  It contains a wealth of interesting information, but the two columns of interest to us right now are:

    range_scan_count – the number of range scans (including unrestricted full scans) on a heap or index structure
    singleton_lookup_count – the number of singleton lookups in a heap or index structure

    This DMV counts each SE operation, so 10,000 singleton lookups will add 10,000 to the singleton lookup count column, and a table scan that is executed 5 times will add 5 to the range scan count.

    The Test Rig

    To explore the behaviour of seeks and scans in detail, we will need to create a test environment.  The scripts presented here are best run on SQL Server 2008 Developer Edition, but the majority of the tests will work just fine on SQL Server 2005.  A couple of tests use partitioning, but these will be skipped if you are not running an Enterprise-equivalent SKU.  Ok, first up we need a database:

        USE master;
        GO
        IF DB_ID('ScansAndSeeks') IS NOT NULL
            DROP DATABASE ScansAndSeeks;
        GO
        CREATE DATABASE ScansAndSeeks;
        GO
        USE ScansAndSeeks;
        GO
        ALTER DATABASE ScansAndSeeks
            SET ALLOW_SNAPSHOT_ISOLATION OFF;
        ALTER DATABASE ScansAndSeeks
            SET AUTO_CLOSE OFF,
                AUTO_SHRINK OFF,
                AUTO_CREATE_STATISTICS OFF,
                AUTO_UPDATE_STATISTICS OFF,
                PARAMETERIZATION SIMPLE,
                READ_COMMITTED_SNAPSHOT OFF,
                RESTRICTED_USER;

    Notice that several database options are set in particular ways to ensure we get meaningful and reproducible results from the DMVs.  In particular, the options to auto-create and update statistics are disabled.  There are also three stored procedures, the first of which creates a test table (which may or may not be partitioned).  The table is pretty much the same one we used yesterday: it has 100 rows, and both the key_col and data columns contain the same values – the integers from 1 to 100 inclusive.  The table is a heap, with a non-clustered primary key on key_col, and a non-clustered non-unique index on the data column.  The only reason I have used a heap here, rather than a clustered table, is so I can demonstrate a seek on a heap later on.  The table has an extra column (not shown because I am too lazy to update the diagram from yesterday) called padding – a CHAR(100) column that just contains 100 spaces in every row.  It’s just there to discourage SQL Server from choosing table scan over an index + RID lookup in one of the tests.

    The first stored procedure is called ResetTest:

        CREATE PROCEDURE dbo.ResetTest
            @Partitioned BIT = 'false'
        AS
        BEGIN
            SET NOCOUNT ON;

            IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
            BEGIN
                DROP TABLE dbo.Example;
            END;

            -- Test table is a heap
            -- Non-clustered primary key on 'key_col'
            CREATE TABLE dbo.Example
            (
                key_col INTEGER NOT NULL,
                data INTEGER NOT NULL,
                padding CHAR(100) NOT NULL DEFAULT SPACE(100),
                CONSTRAINT [PK dbo.Example key_col]
                    PRIMARY KEY NONCLUSTERED (key_col)
            );

            IF @Partitioned = 'true'
            BEGIN
                -- Enterprise, Trial, or Developer
                -- required for partitioning tests
                IF SERVERPROPERTY('EngineEdition') = 3
                BEGIN
                    EXECUTE ('
                        DROP TABLE dbo.Example;

                        IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = N''PS'')
                            DROP PARTITION SCHEME PS;

                        IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = N''PF'')
                            DROP PARTITION FUNCTION PF;

                        CREATE PARTITION FUNCTION PF (INTEGER)
                        AS RANGE RIGHT
                        FOR VALUES (20, 40, 60, 80, 100);

                        CREATE PARTITION SCHEME PS
                        AS PARTITION PF ALL TO ([PRIMARY]);

                        CREATE TABLE dbo.Example
                        (
                            key_col INTEGER NOT NULL,
                            data INTEGER NOT NULL,
                            padding CHAR(100) NOT NULL DEFAULT SPACE(100),
                            CONSTRAINT [PK dbo.Example key_col]
                                PRIMARY KEY NONCLUSTERED (key_col)
                        ) ON PS (key_col);
                    ');
                END
                ELSE
                BEGIN
                    RAISERROR('Invalid SKU for partition test', 16, 1);
                    RETURN;
                END;
            END;

            -- Non-unique non-clustered index on the 'data' column
            CREATE NONCLUSTERED INDEX [IX dbo.Example data]
            ON dbo.Example (data);

            -- Add 100 rows
            INSERT dbo.Example WITH (TABLOCKX)
            (
                key_col, data
            )
            SELECT
                key_col = V.number,
                data = V.number
            FROM master.dbo.spt_values AS V
            WHERE V.[type] = N'P'
            AND V.number BETWEEN 1 AND 100;
        END;
        GO

    The second stored procedure, ShowStats, displays information from the Index Usage Stats and Index Operational Stats DMVs:

        CREATE PROCEDURE dbo.ShowStats
            @Partitioned BIT = 'false'
        AS
        BEGIN
            -- Index Usage Stats DMV (QE)
            SELECT
                index_name = ISNULL(I.name, I.type_desc),
                scans = IUS.user_scans,
                seeks = IUS.user_seeks,
                lookups = IUS.user_lookups
            FROM sys.dm_db_index_usage_stats AS IUS
            JOIN sys.indexes AS I
                ON I.object_id = IUS.object_id
                AND I.index_id = IUS.index_id
            WHERE IUS.database_id = DB_ID(N'ScansAndSeeks')
            AND IUS.object_id = OBJECT_ID(N'dbo.Example', N'U')
            ORDER BY I.index_id;

            -- Index Operational Stats DMV (SE)
            IF @Partitioned = 'true'
                SELECT
                    index_name = ISNULL(I.name, I.type_desc),
                    partitions = COUNT(IOS.partition_number),
                    range_scans = SUM(IOS.range_scan_count),
                    single_lookups = SUM(IOS.singleton_lookup_count)
                FROM sys.dm_db_index_operational_stats
                (
                    DB_ID(N'ScansAndSeeks'),
                    OBJECT_ID(N'dbo.Example', N'U'),
                    NULL, NULL
                ) AS IOS
                JOIN sys.indexes AS I
                    ON I.object_id = IOS.object_id
                    AND I.index_id = IOS.index_id
                GROUP BY
                    I.index_id, -- Key
                    I.name,
                    I.type_desc
                ORDER BY I.index_id;
            ELSE
                SELECT
                    index_name = ISNULL(I.name, I.type_desc),
                    range_scans = SUM(IOS.range_scan_count),
                    single_lookups = SUM(IOS.singleton_lookup_count)
                FROM sys.dm_db_index_operational_stats
                (
                    DB_ID(N'ScansAndSeeks'),
                    OBJECT_ID(N'dbo.Example', N'U'),
                    NULL, NULL
                ) AS IOS
                JOIN sys.indexes AS I
                    ON I.object_id = IOS.object_id
                    AND I.index_id = IOS.index_id
                GROUP BY
                    I.index_id, -- Key
                    I.name,
                    I.type_desc
                ORDER BY I.index_id;
        END;

    The final stored procedure, RunTest, executes a query written against the example table:

        CREATE PROCEDURE dbo.RunTest
            @SQL VARCHAR(8000),
            @Partitioned BIT = 'false'
        AS
        BEGIN
            -- No execution plan yet
            SET STATISTICS XML OFF;

            -- Reset the test environment
            EXECUTE dbo.ResetTest @Partitioned;

            -- Previous call will throw an error if a partitioned
            -- test was requested, but SKU does not support it
            IF @@ERROR = 0
            BEGIN
                -- IO statistics and plan on
                SET STATISTICS XML, IO ON;

                -- Test statement
                EXECUTE (@SQL);

                -- Plan and IO statistics off
                SET STATISTICS XML, IO OFF;

                EXECUTE dbo.ShowStats @Partitioned;
            END;
        END;

    The Tests

    The first test is a simple scan of the heap table:

        EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example';

    The top result set comes from the Index Usage Stats DMV, so it is the Query Executor’s (QE) view.  The lower result is from Index Operational Stats, which shows statistics derived from the actions taken by the Storage Engine (SE).  We see that QE performed 1 scan operation on the heap, and SE performed a single range scan.  Let’s try a single-value equality seek on a unique index next:

        EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 32';

    This time we see a single seek on the non-clustered primary key from QE, and one singleton lookup on the same index by the SE.  Now for a single-value seek on the non-unique non-clustered index:

        EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32';

    QE shows a single seek on the non-clustered non-unique index, but SE shows a single range scan on that index – not the singleton lookup we saw in the previous test.  That makes sense because we know that only a single-value seek into a unique index is a singleton seek.  A single-value seek into a non-unique index might retrieve any number of rows, if you think about it.  The next query is equivalent to the IN list example seen in the first post in this series, but it is written using OR (just for variety, you understand):

        EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32 OR data = 33';

    The plan looks the same, and there’s no difference in the stats recorded by QE, but the SE shows two range scans.  Again, these are range scans because we are looking for two values in the data column, which is covered by a non-unique index.  I’ve added a snippet from the Properties window to show that the query plan does show two seek predicates, not just one.  Now let’s rewrite the query using BETWEEN:

        EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data BETWEEN 32 AND 33';

    Notice the seek operator only has one predicate now – it’s just a single range scan from 32 to 33 in the index – as the SE output shows.  For the next test, we will look up four values in the key_col column:

        EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col IN (2,4,6,8)';

    Just a single seek on the PK from the Query Executor, but four singleton lookups reported by the Storage Engine – and four seek predicates in the Properties window.  On to a more complex example:

        EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WITH (INDEX([PK dbo.Example key_col])) WHERE key_col BETWEEN 1 AND 8';

    This time we are forcing use of the non-clustered primary key to return eight rows.  The index is not covering for this query, so the query plan includes an RID lookup into the heap to fetch the data and padding columns.  The QE reports a seek on the PK and a lookup on the heap.  The SE reports a single range scan on the PK (to find key_col values between 1 and 8), and eight singleton lookups on the heap.  Remember that a bookmark lookup (RID or Key) is a seek to a single value in a ‘unique index’ – it finds a row in the heap or cluster from a unique RID or clustering key – so that’s why lookups are always singleton lookups, not range scans.

    Our next example shows what happens when a query plan operator is not executed at all:

        EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 8 AND @@TRANCOUNT < 0';

    The Filter has a start-up predicate which is always false (if your @@TRANCOUNT is less than zero, call CSS immediately).  The index seek is never executed, but QE still records a single seek against the PK because the operator appears once in an executed plan.  The SE output shows no activity at all.  This next example is 2008 and above only, I’m afraid:

        EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WHERE key_col BETWEEN 1 AND 30', @Partitioned = 'true';

    This is the first example to use a partitioned table.  QE reports a single seek on the heap (yes – a seek on a heap), and the SE reports two range scans on the heap.  SQL Server knows (from the partitioning definition) that it only needs to look at partitions 1 and 2 to find all the rows where key_col is between 1 and 30 – the engine seeks to find the two partitions, and performs a range scan seek on each partition.

    The final example for today is another seek on a heap – try to work out the output of the query before running it!

        EXECUTE dbo.RunTest @SQL = 'SELECT TOP (2) WITH TIES * FROM Example WHERE key_col BETWEEN 1 AND 50 ORDER BY $PARTITION.PF(key_col) DESC', @Partitioned = 'true';

    Notice the lack of an explicit Sort operator in the query plan to enforce the ORDER BY clause, and the backward range scan.

    © 2011 Paul White
    email: [email protected]
    twitter: @SQL_Kiwi

  • GRUB doesn't recognize partitions on one hard disk

    - by knizz
    I have a dual-boot computer with Windows Vista (on hd0) and Ubuntu 9.10. The bootloader is GRUB, and the Windows bootloader lets me decide between Vista and the Ubuntu installation (broken Wubi). But now (I don't know why this changed) I can't start the Windows bootloader anymore. I tried "ls" at the GRUB prompt and it gave me a list like:

        (hd0) (hd1) (hd1,0) (hd1,1) (hd1,2) ... (fd0)

    It recognizes all partitions of hd1 (the Ubuntu hard disk) but none of hd0 (the Windows disk). Why? Here is the result of the "boot info script" for the technical details:

        Boot Info Script 0.55 dated February 15th, 2010

        ============================= Boot Info Summary: ==============================

        => Grub 2 is installed in the MBR of /dev/sda and looks for
           (UUID=a7c510e3-2399-437b-ab92-fa609e48d63f)/boot/grub.
        => No boot loader is installed in the MBR of /dev/sdb

        sda1: _________________________________________________________________________
            File system:       ntfs
            Boot sector type:  Windows Vista/7
            Boot sector info:  No errors found in the Boot Parameter Block.
            Operating System:  Windows Vista
            Boot files/dirs:   /bootmgr /Boot/BCD /Windows/System32/winload.exe
                               /wubildr.mbr /wubildr

        sda2: _________________________________________________________________________
            File system:       ntfs
            Boot sector type:  Windows Vista/7
            Boot sector info:  No errors found in the Boot Parameter Block.
            Operating System:
            Boot files/dirs:

        sdb1: _________________________________________________________________________
            File system:
            Boot sector type:  Unknown
            Boot sector info:  Mounting failed: mount: unknown filesystem type ''

        sdb2: _________________________________________________________________________
            File system:       ntfs
            Boot sector type:  Windows Vista/7
            Boot sector info:  No errors found in the Boot Parameter Block.
            Operating System:
            Boot files/dirs:

        sdb3: _________________________________________________________________________
            File system:       Bios Boot Partition
            Boot sector type:  -
            Boot sector info:

        sdb4: _________________________________________________________________________
            File system:       ext4
            Boot sector type:  -
            Boot sector info:
            Operating System:  Ubuntu 9.10
            Boot files/dirs:   /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img

        sdb5: _________________________________________________________________________
            File system:       swap
            Boot sector type:  -
            Boot sector info:

        =========================== Drive/Partition Info: =============================

        Drive: sda ___________________ _____________________________________________________
        Disk /dev/sda: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Disk identifier: 0x52554d66

        Partition  Boot  Start        End            Size         Id  System
        /dev/sda1  *     2,048        307,202,047    307,200,000  7   HPFS/NTFS
        /dev/sda2        307,202,048  1,250,258,943  943,056,896  7   HPFS/NTFS

        Drive: sdb ___________________ _____________________________________________________
        Disk /dev/sdb: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Disk identifier: 0x00000000

        Partition  Boot  Start  End            Size           Id  System
        /dev/sdb1        1      1,250,263,727  1,250,263,727  ee  GPT

        GUID Partition Table detected.

        Partition  Start          End            Size           System
        /dev/sdb1  34             262,177        262,144        Microsoft Windows
        /dev/sdb2  262,178        1,131,253,933  1,130,991,756  Linux or Data
        /dev/sdb3  1,131,253,934  1,131,255,887  1,954          Bios Boot Partition
        /dev/sdb4  1,131,255,888  1,245,312,528  114,056,641    Linux or Data
        /dev/sdb5  1,245,312,529  1,250,263,694  4,951,166      Linux Swap

        blkid -c /dev/null: ____________________________________________________________

        Device     UUID                                  TYPE  LABEL
        /dev/sda1  AE1440441440122F                      ntfs
        /dev/sda2  3AE66E4DE66E0A09                      ntfs  data
        /dev/sdb2  5419D16119DAA4DE                      ntfs  LaufwerkD
        /dev/sdb4  a7c510e3-2399-437b-ab92-fa609e48d63f  ext4
        /dev/sdb5  60a0143a-e01b-450a-bbd1-f22059e47b65  swap

        ============================ "mount | grep ^/dev" output: =====================

        Device     Mount_Point  Type  Options
        /dev/sdb4  /            ext4  (rw,errors=remount-ro)

        =========================== sdb4/boot/grub/grub.cfg: ==========================

        #
        # DO NOT EDIT THIS FILE
        #
        # It is automatically generated by /usr/sbin/grub-mkconfig using templates
        # from /etc/grub.d and settings from /etc/default/grub
        #

        ### BEGIN /etc/grub.d/00_header ###
        if [ -s /boot/grub/grubenv ]; then
          have_grubenv=true
          load_env
        fi
        set default="0"
        if [ ${prev_saved_entry} ]; then
          saved_entry=${prev_saved_entry}
          save_env saved_entry
          prev_saved_entry=
          save_env prev_saved_entry
        fi
        insmod ext2
        set root=(hd1,4)
        search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f
        if loadfont /usr/share/grub/unicode.pf2 ; then
          set gfxmode=640x480
          insmod gfxterm
          insmod vbe
          if terminal_output gfxterm ; then true ; else
            # For backward compatibility with versions of terminal.mod that don't
            # understand terminal_output
            terminal gfxterm
          fi
        fi
        if [ ${recordfail} = 1 ]; then
          set timeout=-1
        else
          set timeout=10
        fi
        ### END /etc/grub.d/00_header ###

        ### BEGIN /etc/grub.d/05_debian_theme ###
        set menu_color_normal=white/black
        set menu_color_highlight=black/white
        ### END /etc/grub.d/05_debian_theme ###

        ### BEGIN /etc/grub.d/10_linux ###
        menuentry "Ubuntu, Linux 2.6.31-20-generic" {
          recordfail=1
          if [ -n ${have_grubenv} ]; then save_env recordfail; fi
          set quiet=1
          insmod ext2
          set root=(hd1,4)
          search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f
          linux /boot/vmlinuz-2.6.31-20-generic root=UUID=a7c510e3-2399-437b-ab92-fa609e48d63f ro quiet splash
          initrd /boot/initrd.img-2.6.31-20-generic
        }
        menuentry "Ubuntu, Linux 2.6.31-20-generic (recovery mode)" {
          recordfail=1
          if [ -n ${have_grubenv} ]; then save_env recordfail; fi
          insmod ext2
          set root=(hd1,4)
          search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f
          linux /boot/vmlinuz-2.6.31-20-generic root=UUID=a7c510e3-2399-437b-ab92-fa609e48d63f ro single
          initrd /boot/initrd.img-2.6.31-20-generic
        }
        menuentry "Ubuntu, Linux 2.6.31-14-generic" {
          recordfail=1
          if [ -n ${have_grubenv} ]; then save_env recordfail; fi
          set quiet=1
          insmod ext2
          set root=(hd1,4)
          search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f
          linux /boot/vmlinuz-2.6.31-14-generic root=UUID=a7c510e3-2399-437b-ab92-fa609e48d63f ro quiet splash
          initrd /boot/initrd.img-2.6.31-14-generic
        }
        menuentry "Ubuntu, Linux 2.6.31-14-generic (recovery mode)" {
          recordfail=1
          if [ -n ${have_grubenv} ]; then save_env recordfail; fi
          insmod ext2
          set root=(hd1,4)
          search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f
          linux /boot/vmlinuz-2.6.31-14-generic root=UUID=a7c510e3-2399-437b-ab92-fa609e48d63f ro single
          initrd /boot/initrd.img-2.6.31-14-generic
        }
        ### END /etc/grub.d/10_linux ###

        ### BEGIN /etc/grub.d/20_memtest86+ ###
        menuentry "Memory test (memtest86+)" {
          linux16 /boot/memtest86+.bin
        }
        menuentry "Memory test (memtest86+, serial console 115200)" {
          linux16 /boot/memtest86+.bin console=ttyS0,115200n8
        }
        ### END /etc/grub.d/20_memtest86+ ###

        ### BEGIN /etc/grub.d/30_os-prober ###
        menuentry "Windows Vista (loader) (on /dev/sda1)" {
          insmod ntfs
          set root=(hd0,1)
          search --no-floppy --fs-uuid --set ae1440441440122f
          chainloader +1
        }
        ### END /etc/grub.d/30_os-prober ###

        ### BEGIN /etc/grub.d/40_custom ###
        # This file provides an easy way to add custom menu entries. Simply type the
        # menu entries you want to add after this comment. Be careful not to change
        # the 'exec tail' line above.
        ### END /etc/grub.d/40_custom ###

        =============================== sdb4/etc/fstab: ===============================

        # /etc/fstab: static file system information.
        #
        # Use 'blkid -o value -s UUID' to print the universally unique identifier
        # for a device; this may be used with UUID= as a more robust way to name
        # devices that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc defaults 0 0
        # / was on /dev/sdb4 during installation
        UUID=a7c510e3-2399-437b-ab92-fa609e48d63f / ext4 errors=remount-ro 0 1
        # swap was on /dev/sdb5 during installation
        UUID=60a0143a-e01b-450a-bbd1-f22059e47b65 none swap sw 0 0
        /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0
        /dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0

        =================== sdb4: Location of files loaded by Grub: ===================

        583.8GB: boot/grub/core.img
        583.8GB: boot/grub/grub.cfg
        579.7GB: boot/initrd.img-2.6.31-14-generic
        580.0GB: boot/initrd.img-2.6.31-20-generic
        579.7GB: boot/vmlinuz-2.6.31-14-generic
        579.8GB: boot/vmlinuz-2.6.31-20-generic
        580.0GB: initrd.img
        579.7GB: initrd.img.old
        579.8GB: vmlinuz
        579.7GB: vmlinuz.old

        =========================== Unknown MBRs/Boot Sectors/etc =====================

        Unknown BootLoader on sdb1

        00000000  54 34 dc 3b 8b ff 6c fa  3e 59 3d 24 25 af 5f 9b  |T4.;..l.>Y=$%._.|
        00000010  72 f8 36 3d 56 30 22 fd  c6 08 5e 39 7f dc 29 48  |r.6=V0"...^9..)H|
        00000020  48 e5 24 52 77 b0 fc 64  b6 ce 48 c3 07 ce b5 81  |H.$Rw..d..H.....|
        00000030  06 68 60 4f 6e fb 83 92  df 3a 54 b9 df 21 2a cd  |.h`On....:T..!*.|
        00000040  1e 2f e2 49 fe cf 81 2d  52 17 1a 4e 66 b4 f3 f0  |./.I...-R..Nf...|
        00000050  41 25 e3 96 26 28 fe 19  61 72 75 f8 40 a3 b7 ef  |A%..&(..aru.@...|
        00000060  5f 79 dc cb 28 44 44 7c  9b 9a 7b 6c 4b 4b 60 0f  |_y..(DD|..{lKK`.|
        00000070  a9 97 87 bc 85 9f db bb  d2 1a 88 9f aa 49 18 d5  |.............I..|
        00000080  92 2d db 7e fe f7 8d 7a  18 c0 33 c5 bd 7a 46 07  |.-.~...z..3..zF.|
        00000090  c8 27 13 66 94 49 62 9f  bc 99 56 55 25 fb 94 a9  |.'.f.Ib...VU%...|
        000000a0  3f b2 a7 0a 87 d0 a4 4e  51 f1 09 02 c4 29 eb ff  |?......NQ....)..|
        000000b0  26 3b 51 3e 5a 0c db ee  a6 57 a7 c3 ba a1 74 90  |&;Q>Z....W....t.|
        000000c0  ee 70 08 18 cc b8 d0 22  ce 96 c7 cb 68 40 98 20  |.p....."....h@. |
        000000d0  49 3d 07 ec df d1 8d cf  19 bc 42 90 70 24 01 b4  |I=........B.p$..|
        000000e0  28 cf c6 50 d3 95 5a 1b  18 15 33 c7 b2 a8 95 92  |(..P..Z...3.....|
        000000f0  bb 93 fe 18 2b 81 c1 6b  9c 30 f1 65 50 6a 80 3d  |....+..k.0.ePj.=|
        00000100  74 37 a8 59 a6 51 8a 63  b6 d8 16 9f a9 47 2a 7c  |t7.Y.Q.c.....G*||
        00000110  04 a7 fe 69 47 02 bf e9  b7 1b 7a ea 60 5c 3c 53  |...iG.....z.`\<S|
        00000120  5b 10 78 dc 4d d2 a8 22  30 45 37 fb 56 06 9f 06  |[.x.M.."0E7.V...|
        00000130  aa df cf 87 3a 3e cf 72  f2 e5 a6 c6 aa e2 7c 1c  |....:>.r......|.|
        00000140  64 c2 fc 80 ce 02 fc 7f  0f c6 60 81 bf cd 3b 5a  |d.........`...;Z|
        00000150  37 a5 38 1b 0c 1b 39 2e  d6 f6 3d a2 36 e5 87 c3  |7.8...9...=.6...|
        00000160  17 b5 fd ee 33 c7 ce a3  d9 c2 57 dc ee 85 48 9d  |....3.....W...H.|
        00000170  33 60 02 cd c5 83 44 44  ea b6 07 25 0a 4b a6 6e  |3`....DD...%.K.n|
        00000180  fc 51 42 cd 84 0b 65 b6  19 a1 e5 b2 eb 14 0c fa  |.QB...e.........|
        00000190  24 77 f5 44 6e 5d 39 dd  b6 8e cc f8 30 fe 21 46  |$w.Dn]9.....0.!F|
        000001a0  9c ff 95 c6 c7 b5 0a df  54 ca d2 ac bc 64 d0 97  |........T....d..|
        000001b0  94 54 d9 29 0f 91 60 20  c3 e4 53 c2 b0 e4 40 72  |.T.)..` ..S...@r|
        000001c0  7e 25 bc 81 06 ad 05 46  14 a7 e6 71 6b 5c db 9c  |~%.....F...qk\..|
        000001d0  0a 5e 76 23 ae 06 01 36  98 21 65 2c 90 e7 4b 1a  |.^v#...6.!e,..K.|
        000001e0  2a 2d 80 a5 48 db 9e 14  e0 9f e9 aa 00 e3 77 32  |*-..H.........w2|
        000001f0  0f fd 94 db 55 a6 64 46  be ae ca de da ee 89 68  |....U.dF.......h|
        00000200

        ======= Devices which don't seem to have a corresponding hard drive ==========

        sdc sdd sde

    Read the article

  • "ImportError: No module named flask" - Trouble with nginx + uWSGI + Flask in a virtualenv setup

    - by vjk2005
I got nginx + uWSGI running on localhost inside a virtualenv with a simple hello world program, but I get this error when I replace the hello world with a simple Flask app: File "./wsgi_configuration_module.py", line 1, in <module> from flask import Flask ImportError: No module named flask unable to load app mountpoint Here's the Flask app (wsgi_configuration_module.py): from flask import Flask application = Flask(__name__) @application.route("/") def hello(): return "hello world" if __name__ == "__main__": application.run() uWSGI config (app_conf.xml): <uwsgi> <socket>127.0.0.1:9001</socket> <chdir>/srv/www/labs/application</chdir> <pythonpath>/srv/www</pythonpath> <module>wsgi_configuration_module</module> <callable>application</callable> <no-site>true</no-site> </uwsgi> nginx config: server { listen 80; server_name localhost; access_log /srv/www/labs/logs/access.log; error_log /srv/www/labs/logs/error.log; location / { include uwsgi_params; uwsgi_pass 127.0.0.1:9001; } location /static { root /srv/www/labs/public_html/static/; index index.html index.htm; } } The virtualenv lives at ~/virtual_env, with Python 2.7 + nginx + uWSGI + Flask installed into a virtualenv there called basic. Things I've tried to solve this: setting the --home (-H) option to my virtualenv folder ~/virtual_env while running uWSGI. Other info: I have the same setup working outside of a virtualenv. Things go wrong only when I try to replicate the setup inside a virtualenv. Where have I gone wrong?
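Not an authoritative diagnosis, but two things are worth checking here. First, the uWSGI workers only see the virtualenv if they are pointed at it from the config itself; second, <no-site>true</no-site> suppresses Python's site module, which is exactly the mechanism that puts the virtualenv's site-packages (where Flask lives) on the path. A sketch of app_conf.xml carrying both changes (the exact virtualenv path is an assumption):

<uwsgi>
    <socket>127.0.0.1:9001</socket>
    <chdir>/srv/www/labs/application</chdir>
    <!-- same effect as -H on the command line; path assumed from the description above -->
    <home>/home/you/virtual_env/basic</home>
    <pythonpath>/srv/www</pythonpath>
    <module>wsgi_configuration_module</module>
    <callable>application</callable>
    <!-- <no-site>true</no-site> removed so the virtualenv's site-packages get processed -->
</uwsgi>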

    Read the article

  • Security for university research lab systems

    - by ank
Being responsible for security in a university computer science department is no fun at all. Let me explain: I often get requests to install new hardware or software systems that are so experimental I would not dare put them even in the DMZ. If I can avoid it and force an installation on a restricted internal VLAN, that is fine, but occasionally I get requests that need access to the outside world. And it actually makes sense for such systems to have access to the world for testing purposes. Here is the latest request: a newly developed system that uses SIP is in the final stages of development. This system will enable communication with outside users (that is its purpose, and the point of the research proposal), in fact hospital patients who are not very familiar with technology. So it makes sense to open it to the rest of the world. What I am looking for is anyone who has experience dealing with such highly experimental systems that need wide outside network access. How do you secure the rest of the network and systems from this security nightmare without hindering research? Is placement in the DMZ enough? Any extra precautions? Any other options or methodologies?
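For what it's worth, DMZ placement usually only works when paired with strict filtering in both directions. Below is a rough iptables sketch for a firewall in front of such a SIP host; the address, the interface layout, and the RTP port range are all assumptions, offered only to illustrate the least-privilege idea:

# Hypothetical DMZ firewall rules for the experimental SIP host at 10.0.50.10
iptables -A FORWARD -d 10.0.50.10 -p udp --dport 5060 -j ACCEPT          # SIP signalling
iptables -A FORWARD -d 10.0.50.10 -p udp --dport 10000:20000 -j ACCEPT   # assumed RTP media range
iptables -A FORWARD -s 10.0.50.10 -d 10.0.0.0/16 -j DROP                 # no pivoting into internal VLANs
iptables -A FORWARD -d 10.0.50.10 -j DROP                                # everything else stays out

The third rule is the important one for the nightmare scenario: even if the experimental box is compromised, it cannot initiate connections toward the internal VLANs. A real ruleset would also need stateful ESTABLISHED handling for the host's own replies.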

    Read the article

  • What performance degradation to expect with Nginx over raw Gunicorn+Gevent?

    - by bouke
I'm trying to get a very high-performing webserver setup for handling long-polling, websockets etc. I have a VM running (Rackspace) with 1GB RAM / 4 cores. I've set up a very simple gunicorn 'hello world' application with (async) gevent workers. In front of gunicorn, I put Nginx with a simple proxy to Gunicorn. Using ab, Gunicorn spits out 7700 requests/sec, where Nginx only does 5000 requests/sec. Is such a performance degradation expected? Hello world: #!/usr/bin/env python def application(environ, start_response): start_response("200 OK", [("Content-type", "text/plain")]) return [ "Hello World!" ] Gunicorn: gunicorn -w8 -k gevent --keep-alive 60 application:application Nginx (stripped): user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; upstream app_server { server 127.0.0.1:8000 fail_timeout=0; } server { listen 8080 default; keepalive_timeout 5; root /home/app/app/static; location / { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://app_server; } } } Benchmark: (results: nginx TCP, nginx UNIX, gunicorn) ab -c 32 -n 12000 -k http://localhost:[8000|8080]/ Running gunicorn over a unix socket gives somewhat higher throughput (5500 r/s), but it still doesn't match raw gunicorn's performance.
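For reference, the unix-socket variant mentioned in the benchmark, sketched here assuming a socket path of /tmp/gunicorn.sock, removes the extra TCP hop between Nginx and Gunicorn:

# Gunicorn bound to a unix socket instead of 127.0.0.1:8000
gunicorn -w8 -k gevent --keep-alive 60 -b unix:/tmp/gunicorn.sock application:application

# Matching Nginx upstream block
upstream app_server { server unix:/tmp/gunicorn.sock fail_timeout=0; }

Some proxy overhead is unavoidable, since Nginx parses and rewrites every request before handing it on, so a remaining gap between the 5500 r/s unix-socket figure and raw Gunicorn is plausible rather than a sign of misconfiguration.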

    Read the article

  • Using SQL Developer to Debug your Anonymous PL/SQL Blocks

    - by JeffS
Everyone knows that SQL Developer has a PL/SQL debugger – check! Everyone also knows that it’s only set up for debugging standalone PL/SQL objects like Functions, Procedures, and Packages, right? – NO! SQL Developer can also debug your Stored Java Procedures AND it can debug your standalone PL/SQL blocks. These bits of PL/SQL which do not live in the database are also known as ‘Anonymous Blocks.’ Anonymous PL/SQL blocks can be submitted to interactive tools such as SQL*Plus and Enterprise Manager, or embedded in an Oracle Precompiler or OCI program. At run time, the program sends these blocks to the Oracle database, where they are compiled and executed. Here’s an example of something you might want help debugging: Declare x number := 0; Begin Dbms_Output.Put(Sysdate || ' ' || Systimestamp); For Stuff In 1..100 Loop Dbms_Output.Put_Line('Stuff is equal to ' || Stuff || '.'); x := Stuff; End Loop; End; / With the power of remote debugging and unshared worksheets, we are going to be able to debug this ANON block! The trick – we need to create a dummy stored procedure and call it in our ANON block. Then we’re going to create an unshared worksheet and execute the script from there while the SQL Developer session is listening for remote debug connections. We step through the dummy procedure, and this takes us OUT to our calling ANON block. Then we can use watches, breakpoints, and all that fancy debugger stuff! First things first, create this dummy procedure - create or replace procedure do_nothing is begin null; end; Then mouse-right-click on your Connection and select ‘Remote Debug.’ For an in-depth post on how to use the remote debugger, check out Barry’s excellent post on the subject. Open an unshared worksheet using Ctrl+Shift+N. This gives us a dedicated connection for our worksheet and any scripts or commands executed in it. Paste in the ANON block you want to debug. Add a call to the dummy procedure above as the first line of your BEGIN block, like so: Begin do_nothing(); ... Then we need to set up the machine for remote debug for the session we have listening – basically we connect to SQL Developer. You can do that via an Environment Variable, or you can just add this line to your script - CALL DBMS_DEBUG_JDWP.CONNECT_TCP( 'localhost', '4000' ); Where ‘localhost’ is the machine where SQL Developer is running and ’4000′ is the port you started the debug listener on. Ok, with that all set, now just RUN the script. Once the PL/SQL call is made, the debugger will be invoked. You’ll end up in the DO_NOTHING() object. Debugging an ANON block from SQL Developer is possible! If you step out to the ANON block, you’ll end up in the script that’s used to call the procedure – which is the script you want to debug. The Anonymous Block is opened in a new SQL Dev page. You can now step through the block, using watches and breakpoints as expected. I’m guessing your scripts are going to be a bit more complicated than mine, but this serves as a decent example to get you started. Here’s a screenshot of a watch and breakpoint defined in the anon block being debugged: Breakpoints, watches, and callstacks - oh my! For giggles, I created a breakpoint with a passcount of 90 for the FOR LOOP to see if it works. And of course it does. You Might Also Enjoy: Using Pass Counts to Turbo Charge Your PL/SQL Breakpoints · SQL Developer Tip: Viewing REFCURSOR Output · The PL/SQL Debugger Strikes Back: Episode V · Debugging PL/SQL with SQL Developer: Episode IV · How to find dependent objects in your PL/SQL Programs using SQL Developer
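Pulling those steps into a single script, the instrumented version ends up looking roughly like this (everything below comes from the steps above; the host and port must match your own debug listener):

CALL DBMS_DEBUG_JDWP.CONNECT_TCP( 'localhost', '4000' );

Declare
  x number := 0;
Begin
  do_nothing(); -- dummy call: stepping out of it lands the debugger in this block
  Dbms_Output.Put(Sysdate || ' ' || Systimestamp);
  For Stuff In 1..100 Loop
    Dbms_Output.Put_Line('Stuff is equal to ' || Stuff || '.');
    x := Stuff;
  End Loop;
End;
/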

    Read the article

  • Logging connection strings

If you use some of the dynamic features of SSIS, such as package configurations or property expressions, then trying to work out where your connections are pointing can sometimes be a bit confusing. You will work it out in the end, but it can be useful to explicitly log this information so that when things go wrong you can just review the logs. You may wish to develop this idea further and encapsulate such logging into a custom task, but for now let's keep it simple and use the Script Task. The Script Task code below will raise an Information event showing the name and connection string for a connection. Imports System Imports Microsoft.SqlServer.Dts.Runtime Public Class ScriptMain Public Sub Main() Dim fireAgain As Boolean ' Get the connection string, we need to know the name of the connection Dim connectionName As String = "My OLE-DB Connection" Dim connectionString As String = Dts.Connections(connectionName).ConnectionString ' Format the message and log it via an information event Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _ connectionName, connectionString) Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain) Dts.TaskResult = Dts.Results.Success End Sub End Class Building on that example, it is probably more flexible to log all connections in a package, as shown in the next example. Imports System Imports Microsoft.SqlServer.Dts.Runtime Public Class ScriptMain Public Sub Main() Dim fireAgain As Boolean ' Loop through all connections in the package For Each connection As ConnectionManager In Dts.Connections ' Get the connection string and log it via an information event Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _ connection.Name, connection.ConnectionString) Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain) Next Dts.TaskResult = Dts.Results.Success End Sub End Class Using the Information event makes the output readily available in the designer, for example in the Visual Studio Output window (Ctrl+Alt+O) or the package designer Execution Results tab, and also allows you to readily control the logging by choosing which events to log in the normal way. Now before somebody starts commenting that this is a security risk, I would like to highlight good practice for building connection managers. Firstly, the Password property, or any other similar sensitive property, is always defined as write-only; secondly, the connection string property only uses the public properties to assemble the connection string value when requested. In other words the connection string will never contain the password. I have seen a couple of cases where this is not true, but that was just bad development by third parties; you won’t find anything like that in the box from Microsoft. Whilst writing this code it made me wish that there was a custom log entry that you could just turn on that did this for you, but alas connection managers do not even seem to support custom events. It did however remind me of a very useful event that is often overlooked and fits rather well alongside connection string logging: the Execute SQL Task’s custom ExecuteSQLExecutingQuery event. To quote the help reference for Custom Messages for Logging - Provides information about the execution phases of the SQL statement. Log entries are written when the task acquires connection to the database, when the task starts to prepare the SQL statement, and after the execution of the SQL statement is completed.
The log entry for the prepare phase includes the SQL statement that the task uses. It is the last part that is so useful: how often have you used an expression to derive a SQL statement and wanted to log it to make sure the correct SQL is being returned? You need to turn it on; by default no custom log events are captured, but I’ll refer you to a walkthrough on setting up the logging for ExecuteSQLExecutingQuery by Jamie.
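If you would rather switch that custom event on from code than through the package's logging dialog, something along these lines should work against the SSIS object model (a sketch from memory, so verify the property names against your version):

' "package" is your loaded Microsoft.SqlServer.Dts.Runtime.Package (assumed)
Dim sqlTask As TaskHost = CType(package.Executables("Execute SQL Task"), TaskHost)
sqlTask.LoggingMode = DTSLoggingMode.Enabled
' Capture only the custom event we care about
sqlTask.LoggingOptions.EventFilterKind = DTSEventFilterKind.Inclusion
sqlTask.LoggingOptions.EventFilter = New String() {"ExecuteSQLExecutingQuery"}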

    Read the article

  • USB software protection dongle for Java with an SDK which is cross-platform “for real”. Does it exist?

    - by Unai Vivi
What I'd like to ask is if anybody knows of a hardware USB dongle for software protection which offers very complete out-of-the-box API support for cross-platform Java deployments. Its SDK should provide a jar (only one, not a different library per OS & bitness) ready to be added to one's project as a library. The jar should contain all the native stuff for the various OSes and bitnesses. From the application's point of view, one should continue to write (api calls) once and run everywhere, without having to care where the end-user will run the software. The provided jar should itself deal with loading the appropriate native library. Does such a thing exist? With what I've tried so far, you have different APIs and compiled libraries for win32, linux32, win64, linux64, etc (or you even have to compile stuff yourself on the target machine), but hey, we're doing Java here, we don't know (and don't care) where the program will run! And we can't expect the end-user to be a software engineer, tweak (and break!) their linux server, link libraries, mess with gcc, litter the filesystem, etc... In general, Java support (in a transparent cross-platform fashion) is quite bad with the dongle SDKs I've evaluated so far (e.g. KeyLok and SecuTech's UniKey). I even purchased (no free evaluation kit available) SecureMetric SDKs & dongles (they should've been "soooo" straightforward to integrate -- according to the marketing material :\ ) and they were the worst ever: SecureDongle X has no 64bit support and SecureDongle SD is not cross-platform at all. So, has anyone out there been through this and found the ultimate Java security usb dongle for cross-platform deployments? Note: the software is low-volume, high-value; the application is off-line (intranet with no internet access), so no online-activation alternatives and the like. -- EDIT Tried out HASP dongles (used to be called "Aladdin") and added them to the no-no list: here, too, there is no out-of-the-box (out-of-the-jar) support: e.g. the end linux user has to manually put the .so library (the specific file for the appropriate bitness) in the right place on their filesystem, and export an env. variable accordingly. -- EDIT 2 I really don't understand all the negativity and all the downvoting: is this a taboo topic? Is it so hard to understand that a freelance developer has to put food on the table every day to feed their family and pay the bills at the end of the month? Please don't talk about "adding value" as a supplier, because that'd be off-topic. Furthermore I'm not in direct contact with end-customers, but there's an intermediate reselling entity: it's this entity I want to prevent from selling copies of the software without sharing the revenue. -- EDIT 3 I'd like to emphasize the fact that the question is looking for a technical answer, not one about opinions concerning business models, philosophical lucubrations on the concept of value, resellers' reliability, etc. I cannot change resellers, because this isn't a "general purpose" kind of software but a very vertical one, and (for some reasons it's not worth explaining here) I must go through them. I just need to prevent the "we sold 2 copies, here's your share [bwahaha we sold 10]" scenario.
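No product endorsement to offer, but the transparent loading being asked for is a standard pattern regardless of vendor: detect os.name/os.arch at runtime, extract the matching native library from the jar to a temporary file, and System.load() it. A minimal Java sketch follows; the resource layout and the libdongle name are invented purely for illustration:

import java.io.*;
import java.nio.file.*;

public final class NativeLoader {
    // Illustrative pattern only: resource layout and library names are hypothetical.
    public static void load(String baseName) throws IOException {
        String os = System.getProperty("os.name").toLowerCase();
        String arch = System.getProperty("os.arch").toLowerCase();
        String ext = os.contains("win") ? ".dll" : ".so";
        // e.g. /native/linux-amd64/libdongle.so bundled inside the jar (hypothetical layout)
        String resource = "/native/" + (os.contains("win") ? "win" : "linux") + "-" + arch + "/" + baseName + ext;
        try (InputStream in = NativeLoader.class.getResourceAsStream(resource)) {
            if (in == null) throw new FileNotFoundException("No bundled library for this platform: " + resource);
            Path tmp = Files.createTempFile(baseName, ext);
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            System.load(tmp.toAbsolutePath().toString());
        }
    }
}

Any vendor jar that did this internally would satisfy the write-once requirement; the complaint above is precisely that the evaluated SDKs leave this step to the end-user.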

    Read the article

  • Bitmask data insertions in SSDT Post-Deployment scripts

    - by jamiet
On my current project we are using SQL Server Data Tools (SSDT) to manage our database schema, and one of the tasks we often need to do is insert data into that schema once deployed; the typical method employed to do this is to leverage Post-Deployment scripts, and that is exactly what we are doing. Our requirement is a little different though: our data is split up into various buckets that we need to selectively deploy on a case-by-case basis. I was going to use a SQLCMD variable for each bucket (defaulted to some value other than “Yes”) to define whether it should be deployed or not, so we could use something like this in our Post-Deployment script: IF ($(DeployBucket1Flag) = 'Yes') BEGIN :r .\Bucket1.data.sql END IF ($(DeployBucket2Flag) = 'Yes') BEGIN :r .\Bucket2.data.sql END IF ($(DeployBucket3Flag) = 'Yes') BEGIN :r .\Bucket3.data.sql END That works fine and is, I’m sure, a very common technique for doing this. It is however slightly ugly because we have to litter our deployment with various SQLCMD variables. My colleague James Rowland-Jones (whom I’m sure many of you know) suggested another technique – bitmasks. I won’t go into detail about how this works (James has already done that at Using a Bitmask - a practical example) but I’ll summarise by saying that you can deploy different combinations of the buckets simply by supplying a different numerical value for a single SQLCMD variable. Each bit of that value’s binary representation signifies whether a particular bucket should be deployed or not. This is better demonstrated using the following simple script (which can be easily leveraged inside your Post-Deployment scripts): /* $(DeployData) is a SQLCMD variable that would, if you were using this in SSDT, be declared in the SQLCMD variables section of your project file. It should contain a numerical value, defaulted to 0. In this example I have declared it using a :setvar statement. Test the effect of different values by changing the :setvar statement accordingly. Examples: :setvar DeployData 1 will deploy bucket 1 :setvar DeployData 2 will deploy bucket 2 :setvar DeployData 3 will deploy buckets 1 & 2 :setvar DeployData 6 will deploy buckets 2 & 3 :setvar DeployData 31 will deploy buckets 1, 2, 3, 4 & 5 */ :setvar DeployData 0 DECLARE @bitmask VARBINARY(MAX) = CONVERT(VARBINARY,$(DeployData)); IF (@bitmask & 1 = 1) BEGIN PRINT 'Bucket 1 insertions'; END IF (@bitmask & 2 = 2) BEGIN PRINT 'Bucket 2 insertions'; END IF (@bitmask & 4 = 4) BEGIN PRINT 'Bucket 3 insertions'; END IF (@bitmask & 8 = 8) BEGIN PRINT 'Bucket 4 insertions'; END IF (@bitmask & 16 = 16) BEGIN PRINT 'Bucket 5 insertions'; END An example of running this using DeployData=6: the binary representation of 6 is 110. The second and third significant bits of that binary number are set to 1 and hence buckets 2 and 3 are “activated”. Hope that makes sense and is useful to some of you! @Jamiet P.S. I used the awesome HTML Copy feature of Visual Studio’s Productivity Power Tools in order to format the T-SQL code above for this blog post.
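At publish time the whole selection then rides on a single variable. As a hedged example (the file, server, and database names are placeholders), deploying buckets 2 and 3 with SqlPackage would look like:

SqlPackage.exe /Action:Publish /SourceFile:MyDatabase.dacpac /TargetServerName:MyServer /TargetDatabaseName:MyDatabase /Variables:DeployData=6

Because 6 is 110 in binary, only the & 2 and & 4 tests succeed, exactly as in the walkthrough above.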

    Read the article


< Previous Page | 303 304 305 306 307 308 309 310 311 312 313 314  | Next Page >