Search Results

  • Apache: scope for environmental variables

    - by Anonymous
    While there is documentation available on Apache environment variables, I cannot find the answer to one important question. Imagine I use a rewrite rule to set an environment variable:

        RewriteRule ... ... [E=something:1]

    What is the scope of "something"? Is it global to the Apache server (meaning "something" becomes available to other request transactions)? Or is it scoped to this request only (meaning "something" is valid only for THIS HTTP request and its related processing)? And what about internal redirects and other internal processing: are they considered part of THIS request or another one? Can the variable be set differently within another (concurrent) request?
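    For reference, a minimal sketch of the pattern in question (the URL pattern and variable name are illustrative, not from the original config; Header comes from mod_headers):

        RewriteEngine On
        # Set an environment variable while rewriting (E= flag):
        RewriteRule ^downloads/ - [E=something:1]
        # Read it later in the same request, e.g. from mod_headers:
        Header set X-Something "%{something}e" env=something

    As documented for mod_rewrite, after an internal redirect the rewrite engine re-runs and such variables reappear with a REDIRECT_ prefix, which is the visible symptom of the per-request scoping being asked about.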

    Read the article

  • .htaccess to pass FULL URL into other PHP link as $_GET

    - by 4lvin
    From an .htaccess file, how can I pass/redirect the FULL URL to another URL as a GET variable? For example, http://www.test.com/foo/bar.asp should be redirected to http://www.newsite.com/?url=http://www.test.com/foo/bar.asp, i.e. the full URL including the domain. I tried:

        RewriteEngine on
        RedirectMatch 301 ^/(.*)\.asp http://www.newsite.com/?url=%{REQUEST_URI}

    But it is going out like: http://www.newsite.com/?url=?%{REQUEST_URI}
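    One likely fix, as a sketch (untested): RedirectMatch belongs to mod_alias, which does not expand %{...} server variables; mod_rewrite does, so the same redirect can be expressed as a RewriteRule that rebuilds the full URL from the request:

        RewriteEngine On
        # mod_rewrite expands server variables, so the full URL with domain
        # can be reassembled from HTTP_HOST and REQUEST_URI:
        RewriteRule ^(.*)\.asp$ http://www.newsite.com/?url=http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]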

    Read the article

  • How to get the Date in a batch file in a predictable format?

    - by AngryHacker
    In a batch file I need to extract the month, day, and year from the date command. So I used the following, which essentially parses the %Date% value to extract its substrings into variables:

        set Day=%Date:~3,2%
        set Mth=%Date:~0,2%
        set Yr=%Date:~6,4%

    This is all great, but if I deploy this batch file to a machine with different regional/country settings, it fails because the month, day and year are in different positions. How can I extract the month, day and year regardless of the date format?
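    One commonly suggested locale-independent approach, sketched below (requires WMIC, which ships with Windows XP and later; %%x is batch-file syntax, use %x at a prompt):

        :: WMIC emits LocalDateTime as yyyymmddHHMMSS... in every locale
        for /f "skip=1 delims=" %%x in ('wmic os get LocalDateTime') do if not defined ldt set ldt=%%x
        set Yr=%ldt:~0,4%
        set Mth=%ldt:~4,2%
        set Day=%ldt:~6,2%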

    Read the article

  • Availability of big files on multiple servers

    - by Imises
    I have to handle many (1'000 - 30'000) big files ranging from 200MB up to 2GB. The demand for these files is variable (0 - 300 downloads / file). This is why a single file must be saved on 2 or more servers. My servers are placed in different datacenters (France), with HDDs of different sizes (750GB to 4TB). Currently I share the files using PHP and ncftpget / ncftpput, but it's very slow. I need a solution to handle balancing these files across 7+ servers.
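    This doesn't solve the balancing logic, but as a sketch of a faster copy primitive than scripted FTP, rsync over SSH is a common replacement (the hostname and paths below are illustrative):

        # Mirror one file to a second server, resuming partial transfers:
        rsync -av --partial --progress /data/files/big.bin user@server2.example.com:/data/files/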

    Read the article

  • cygwin, PATH problem?

    - by jayjaypg22
    I run a .ksh script containing an awk call. awk.exe and its shortcut awk are in /bin, and /bin is in the PATH environment variable. But when I try to launch awk, I get this error message:

        bash: /usr/bin/awk: no such file or directory

    Why doesn't bash look for it in the /bin folder too?

    Edit: tar has the same permissions, tar.exe is in /bin and is listed in /usr/bin/ in exactly the same way as awk. tar works fine, whereas awk does not.
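    A few diagnostic commands that may narrow this down (a sketch; cygcheck ships with Cygwin):

        # Confirm what /bin/awk and /usr/bin/awk actually point at:
        ls -l /bin/awk /usr/bin/awk
        # "No such file or directory" for an .exe that exists often means a
        # missing DLL dependency; cygcheck lists the DLLs the exe needs:
        cygcheck /bin/awk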

    Read the article

  • Google chrome proxy authentication dialogue timeout

    - by Nihar Sarangi
    I am on a network that uses an LDAP proxy for authentication based on a username and password. Whenever I start Google Chrome, it pops up a proxy authentication dialogue, but the dialogue disappears automatically after a variable amount of time (sometimes it stays for 5 seconds, sometimes less than 1 second). I have found the same issue with Chromium as well. Is there any configuration I can set to control this timeout, or, say, auto-authenticate with my credentials from the shell or DE (GNOME 3 on Arch)?

    Read the article

  • Creating a menu using xslt for Umbraco

    - by rob_g
    I've created a menu in Umbraco using XSLT. The menu is using the usual ul and li elements and I'm displaying only the first level of the menu. The aim is to create a menu that expands to show the sub menu when I click a parent node (in the top level). I am after the XSLT I would need to expose the sub menu when clicked. I think I would need to make use of ancestor-or-self to detect the current menu and parent menu and display them, and also the $currentPage variable. I have the following XSLT:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE xsl:stylesheet [ <!ENTITY nbsp "&#x00A0;"> ]>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:msxml="urn:schemas-microsoft-com:xslt"
            xmlns:umbraco.library="urn:umbraco.library"
            xmlns:Exslt.ExsltCommon="urn:Exslt.ExsltCommon"
            xmlns:Exslt.ExsltDatesAndTimes="urn:Exslt.ExsltDatesAndTimes"
            xmlns:Exslt.ExsltMath="urn:Exslt.ExsltMath"
            xmlns:Exslt.ExsltRegularExpressions="urn:Exslt.ExsltRegularExpressions"
            xmlns:Exslt.ExsltStrings="urn:Exslt.ExsltStrings"
            xmlns:Exslt.ExsltSets="urn:Exslt.ExsltSets"
            xmlns:tagsLib="urn:tagsLib"
            xmlns:urlLib="urn:urlLib"
            exclude-result-prefixes="msxml umbraco.library Exslt.ExsltCommon Exslt.ExsltDatesAndTimes Exslt.ExsltMath Exslt.ExsltRegularExpressions Exslt.ExsltStrings Exslt.ExsltSets tagsLib urlLib ">

          <xsl:output method="xml" omit-xml-declaration="yes"/>
          <xsl:param name="currentPage"/>

          <xsl:template match="/">
            <div id="kb-categories">
              <h3>Categories</h3>
              <xsl:call-template name="drawNodes">
                <xsl:with-param name="parent" select="$currentPage/ancestor-or-self::node [@level=1]"/>
              </xsl:call-template>
            </div>
          </xsl:template>

          <xsl:template name="drawNodes">
            <xsl:param name="parent"/>
            <xsl:if test="(umbraco.library:IsProtected($parent/@id, $parent/@path) = 0 or (umbraco.library:IsProtected($parent/@id, $parent/@path) = 1)) and $parent/@level = 1">
              <ul class="kb-menuLevel1">
                <xsl:for-each select="$parent/node [string(./data [@alias='showInMenu']) = 1]">
                  <li>
                    <a href="/kb{umbraco.library:NiceUrl(@id)}">
                      <xsl:value-of select="@nodeName"/>
                    </a>
                    <xsl:variable name="level" select="@level" />
                    <xsl:if test="(count(./node [string(./data [@alias='showInMenu']) = '1']) &gt; 0)">
                      <xsl:call-template name="drawNodes">
                        <xsl:with-param name="parent" select="."/>
                      </xsl:call-template>
                    </xsl:if>
                  </li>
                </xsl:for-each>
              </ul>
            </xsl:if>
            <xsl:if test="(umbraco.library:IsProtected($parent/@id, $parent/@path) = 0 or (umbraco.library:IsProtected($parent/@id, $parent/@path) = 1)) and $parent/@level &gt; 1">
              <ul class="kb-menuLevel{@level}" style="display: none;">
                <xsl:for-each select="$parent/node [string(./data [@alias='showInMenu']) = 1]">
                  <li>
                    <a href="/kb{umbraco.library:NiceUrl(@id)}">
                      <xsl:value-of select="@nodeName"/>
                    </a>
                    <xsl:variable name="level" select="@level" />
                    <xsl:if test="(count(./node [string(./data [@alias='showInMenu']) = '1']) &gt; 0)">
                      <xsl:call-template name="drawNodes">
                        <xsl:with-param name="parent" select="."/>
                      </xsl:call-template>
                    </xsl:if>
                  </li>
                </xsl:for-each>
              </ul>
            </xsl:if>
          </xsl:template>

        </xsl:stylesheet>

    I suspect this could be improved using apply-templates, but I'm not yet up to speed with that (this being only the second day of my learning XSLT). My menu:

        Item 1
        Item 2
        Item 3
        Item 4

    When I click on Item 2, I want to see its child menu too:

        Item 1
        Item 2
        -- Item 2.1
        -- Item 2.2
        Item 3
        Item 4

    and so on down the nested menu.
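    A sketch of the change being asked about (untested, and assuming node ids are unique): instead of always emitting style="display: none;" for levels deeper than 1, hide the sub-list only when the branch does not contain the current page, using the $currentPage param the stylesheet already declares:

        <!-- Inside drawNodes, for levels > 1: keep open the branch that leads
             to the current page, hide all the others -->
        <ul class="kb-menuLevel{$parent/@level}">
          <xsl:if test="not($currentPage/ancestor-or-self::node[@id = $parent/@id])">
            <xsl:attribute name="style">display: none;</xsl:attribute>
          </xsl:if>
          <!-- ... the existing xsl:for-each over $parent/node goes here ... -->
        </ul>

    Note this also uses $parent/@level in the class attribute; the original {@level} refers to the context node of the calling template rather than the list being drawn.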

    Read the article

  • Accessing py2exe program over network in Windows 98 throws ImportErrors

    - by darvids0n
    I'm running a py2exe-compiled Python program from one server machine on a number of client machines (mapped to a network drive on every machine, say W:). For Windows XP and later machines, I have so far had zero problems with Python picking up W:\python23.dll (yes, I'm using Python 2.3.5 for W98 compatibility and all that). It will then use W:\zlib.pyd to decompress W:\library.zip containing all the .pyc files like os and such, which are then imported and the program runs with no problems.

    The issue I'm getting is on some Windows 98 SE machines (note: SOME Windows 98 SE machines; others seem to work with no apparent issues). What happens is, the program runs from W:, the W:\python23.dll is, I assume, found (since I'm getting Python ImportErrors, we'd need to be able to execute a Python import statement), but a couple of things don't work:

    1) If W:\library.zip contains the only copy of the .pyc files, I get ZipImportError: can't decompress data; zlib not available (nonsense, considering W:\zlib.pyd IS available and works fine with the XP and higher machines on the same network).

    2) If the .pyc files are actually bundled INSIDE the python exe by py2exe, OR put in the same directory as the .exe, OR put into a named subdirectory which is then set as part of the PYTHONPATH variable (e.g. W:\pylib), I get ImportError: no module named os (os is the first module imported, before sys and anything else).

    Come to think of it, sys.path wouldn't be available to search if os was imported before it, maybe? I'll try switching the order of those imports, but my question still stands: why is this a sporadic issue, working on some networks but not on others? And how would I force Python to find the files that are bundled inside the very executable I run? I have immediate access to the working Windows 98 SE machine, but I only get access to the non-working one (a customer of mine) every morning before their store opens. Thanks in advance!

    EDIT: Okay, big step forward. After debugging with PY2EXE_VERBOSE, the problem occurring on the specific W98SE machine is that it's not using the right path syntax when looking for imports. Firstly, it doesn't seem to read the PYTHONPATH environment variable (there may be a py2exe-specific one I'm not aware of, like PY2EXE_VERBOSE). Secondly, it only looks in one place before giving up (if the files are bundled inside the EXE, it looks there; if not, it looks in library.zip).

    EDIT 2: In fact, according to this, there is a difference between the sys.path in the Python interpreter and that of py2exe executables. Specifically, sys.path contains only a single entry: the full pathname of the shared code archive. Blah. No fallbacks? Not even the current working directory? I'd try adding W:\ to PATH, but py2exe doesn't conform to any sort of standards for locating system libraries, so it won't work.

    Now for the interesting bit. The path it tries to load atexit, os, etc. from is:

        W:\\library.zip\<module>.<ext>

    Note the single slash after library.zip, but the double slash after the drive letter (someone correct me if this is intended and should work). It looks like if this is a string literal, then since the slash isn't doubled, it's read as an (invalid) escape sequence and the raw character is printed (giving W:\library.zipos.pyd, W:\library.zipos.dll, ... instead of with a slash); if it is NOT a string literal, the double slash might not be normpath'd automatically (as it should be), and so the double slash confuses the module loader.

    Like I said, I can't just set PYTHONPATH=W:\\library.zip\\ because it ignores that variable. It may be worth using sys.path.append at the start of my program, but hard-coding module paths is an absolute LAST resort, especially since the problem occurs in ONE configuration of an outdated OS. Any ideas? I have one, which is to normpath the sys.path... pity I need os for that. Another is to just append os.getenv('PATH') or os.getenv('PYTHONPATH') to sys.path... again, needing the os module. The site module also fails to initialise, so I can't use a .pth file. I also recently tried the following code at the start of the program:

        for pth in sys.path:
            fErr.write(pth)
            fErr.write(' to ')
            pth.replace('\\\\','\\') # Fix Windows 98 pathing issues
            fErr.write(pth)
            fErr.write('\n')

    But it can't load linecache.pyc, or anything else for that matter; it can't actually execute those commands from the looks of things. Is there any way to use built-in functionality which doesn't need linecache to modify sys.path dynamically? Or am I reduced to hard-coding the correct sys.path?
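    Incidentally, independent of the loader problem, the loop above has a Python bug worth noting: strings are immutable, so pth.replace(...) returns a new string whose result is discarded, and sys.path is never changed. Rebuilding the list needs no imports beyond sys (a sketch, valid on Python 2.3):

        import sys
        # str.replace returns a new string; rebuild sys.path in place instead
        # of discarding the result (needs neither os nor linecache)
        sys.path[:] = [p.replace('\\\\', '\\') for p in sys.path]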

    Read the article

  • Session caching problem

    - by Levani
    I have a strange problem with PHP sessions. I use them for authorization on my site. I store two variables in the session - the currently logged in user's id and username. When I log in with one username, then log out and log in again with another username, the previous user's id is returned from the session variable instead of the current user's. The strangest thing is that this happens only when it comes to inserting data into the database. When I directly echo this variable the correct id is displayed, but when I insert a new record into the database this variable sends the incorrect id. Here is the PHP code I use for sending data to the database:

        <?php
        session_start();
        //connect database
        require_once 'dbc.php';

        $authorID = $_SESSION['user_id'];
        if ( $authorID != 0 ) {
            $content = htmlentities($_POST["answ_content"],ENT_COMPAT,'UTF-8');
            $dro = date('Y-m-d H:i:s');
            $qID = $_POST["question_ID"];
            $author = 'avtori';
            $sql="INSERT INTO comments (comment_ID, comment_post_ID, comment_author, comment_date, comment_content, user_id)
                  VALUES (NULL, '$qID', '$author', '$dro', '$content', '$authorID')";
            $result = mysql_query($sql);
        } else {
            echo 'error';
        }
        ?>

    Can anyone please help? Here is the logout function:

        function logout() {
            global $db;
            session_start();
            if(isset($_SESSION['user_id']) || isset($_COOKIE['user_id'])) {
                mysql_query("update `users` set `ckey`= '', `ctime`= ''
                             where `id`='$_SESSION[user_id]' OR `id` = '$_COOKIE[user_id]'")
                    or die(mysql_error());
            }

            /************ Delete the sessions ****************/
            unset($_SESSION['user_id']);
            unset($_SESSION['user_name']);
            unset($_SESSION['user_level']);
            unset($_SESSION['HTTP_USER_AGENT']);
            session_unset();
            session_destroy();

            /************ Delete the cookies *****************/
            setcookie("user_id", '', time()-60*60*24*COOKIE_TIME_OUT, "/");
            setcookie("user_name", '', time()-60*60*24*COOKIE_TIME_OUT, "/");
            setcookie("user_key", '', time()-60*60*24*COOKIE_TIME_OUT, "/");
            header("Location: index.php");
        }

    Here is the authentication script:

        include 'dbc.php';
        $err = array();

        foreach($_GET as $key => $value) {
            $get[$key] = filter($value); //get variables are filtered.
        }

        if ($_POST['doLogin']=='Login') {
            foreach($_POST as $key => $value) {
                $data[$key] = filter($value); // post variables are filtered
            }

            $user_email = $data['usr_email'];
            $pass = $data['pwd'];

            if (strpos($user_email,'@') === false) {
                $user_cond = "user_name='$user_email'";
            } else {
                $user_cond = "user_email='$user_email'";
            }

            $result = mysql_query("SELECT `id`,`pwd`,`full_name`,`approved`,`user_level`
                                   FROM users WHERE $user_cond AND `banned` = '0' ")
                or die (mysql_error());
            $num = mysql_num_rows($result);

            // Match row found with more than 1 results - the user is authenticated.
            if ( $num > 0 ) {
                list($id,$pwd,$full_name,$approved,$user_level) = mysql_fetch_row($result);
                if(!$approved) {
                    //$msg = urlencode("Account not activated. Please check your email for activation code");
                    $err[] = "Account not activated. Please check your email for activation code";
                    //header("Location: login.php?msg=$msg");
                    //exit();
                }

                //check against salt
                if ($pwd === PwdHash($pass,substr($pwd,0,9))) {
                    // this sets session and logs user in
                    session_start();
                    session_regenerate_id(true); //prevent against session fixation attacks.

                    // this sets variables in the session
                    $_SESSION['user_id'] = $id;
                    $_SESSION['user_name'] = $full_name;
                    $_SESSION['user_level'] = $user_level;
                    $_SESSION['HTTP_USER_AGENT'] = md5($_SERVER['HTTP_USER_AGENT']);

                    //update the timestamp and key for cookie
                    $stamp = time();
                    $ckey = GenKey();
                    mysql_query("update users set `ctime`='$stamp', `ckey` = '$ckey' where id='$id'")
                        or die(mysql_error());

                    //set a cookie
                    if(isset($_POST['remember'])){
                        setcookie("user_id", $_SESSION['user_id'], time()+60*60*24*COOKIE_TIME_OUT, "/");
                        setcookie("user_key", sha1($ckey), time()+60*60*24*COOKIE_TIME_OUT, "/");
                        setcookie("user_name",$_SESSION['user_name'], time()+60*60*24*COOKIE_TIME_OUT, "/");
                    }
                    if(empty($err)){
                        header("Location: myaccount.php");
                    }
                } else {
                    //$msg = urlencode("Invalid Login. Please try again with correct user email and password. ");
                    $err[] = "Invalid Login. Please try again with correct user email and password.";
                    //header("Location: login.php?msg=$msg");
                }
            } else {
                $err[] = "Error - Invalid login. No such user exists";
            }
        }
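    One defensive change sometimes suggested for this symptom (a sketch, not a confirmed diagnosis): clear the session array right after regenerating the id at login, so nothing set for a previous user can survive into the new login:

        <?php
        session_start();
        session_regenerate_id(true);  // new session id; the old one is discarded
        $_SESSION = array();          // drop any values left from a previous login
        $_SESSION['user_id']    = $id;
        $_SESSION['user_name']  = $full_name;
        $_SESSION['user_level'] = $user_level;
        ?>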

    Read the article

  • Dynamic scoping in Clojure?

    - by j-g-faustus
    Hi, I'm looking for an idiomatic way to get dynamically scoped variables in Clojure (or a similar effect) for use in templates and such. Here is an example problem using a lookup table to translate tag attributes from some non-HTML format to HTML, where the table needs access to a set of variables supplied from elsewhere:

        (def *attr-table*
          ; Key: [attr-key tag-name] or [boolean-function]
          ; Value: [attr-key attr-value] (empty array to ignore)
          ; Context: Variables "tagname", "akey", "aval"
          '(
            ; translate :LINK attribute in <a> to :href
            [:LINK "a"] [:href aval]
            ; translate :LINK attribute in <img> to :src
            [:LINK "img"] [:src aval]
            ; throw exception if :LINK attribute in any other tag
            [:LINK] (throw (RuntimeException. (str "No match for " tagname)))
            ; ... more rules
            ; ignore string keys, used for internal bookkeeping
            [(string? akey)] [])) ; ignore

    I want to be able to evaluate the rules (left hand side) as well as the result (right hand side), and need some way to put the variables in scope at the location where the table is evaluated. I also want to keep the lookup and evaluation logic independent of any particular table or set of variables. I suppose there are similar issues involved in templates (for example, for dynamic HTML), where you don't want to rewrite the template processing logic every time someone puts a new variable in a template.

    Here is one approach using global variables and bindings. I have included some logic for the table lookup:

        ;; Generic code, works with any table on the same format.
        (defn rule-match? [rule-val test-val]
          "true if a single rule matches a single argument value"
          (cond
            (not (coll? rule-val)) (= rule-val test-val) ; plain value
            (list? rule-val) (eval rule-val)             ; function call
            :else false))

        (defn rule-lookup [test-val rule-table]
          "looks up rule match for test-val. Returns result or nil."
          (loop [rules (partition 2 rule-table)]
            (when-not (empty? rules)
              (let [[select result] (first rules)]
                (if (every? #(boolean %) (map rule-match? select test-val))
                  (eval result) ; evaluate and return result
                  (recur (rest rules)))))))

        ;; Code specific to *attr-table*
        (def tagname) ; need these globals for the binding in html-attr
        (def akey)
        (def aval)

        (defn html-attr [tagname h-attr]
          "converts to html attributes"
          (apply hash-map
                 (flatten
                  (map (fn [[k v :as kv]]
                         (binding [tagname tagname akey k aval v]
                           (or (rule-lookup [k tagname] *attr-table*) kv)))
                       h-attr))))

        (defn test-attr []
          "test conversion"
          (prn "a" (html-attr "a" {:LINK "www.google.com"
                                   "internal" 42
                                   :title "A link"}))
          (prn "img" (html-attr "img" {:LINK "logo.png"})))

        user=> (test-attr)
        "a" {:href "www.google.com", :title "A link"}
        "img" {:src "logo.png"}

    This is nice in that the lookup logic is independent of the table, so it can be reused with other tables and different variables. (Plus, of course, the general table approach is about a quarter of the size of the code I had when I did the translations "by hand" in a giant cond.) It is not so nice in that I need to declare every variable as a global for the binding to work.

    Here is another approach using a "semi-macro", a function with a syntax-quoted return value, that doesn't need globals:

        (defn attr-table [tagname akey aval]
          `(
            [:LINK "a"] [:href ~aval]
            [:LINK "img"] [:src ~aval]
            [:LINK] (throw (RuntimeException. (str "No match for " tagname)))
            ; ... more rules
            [(string? ~akey)] []))

    Only a couple of changes are needed to the rest of the code. In rule-match?, when syntax-quoted the function call is no longer a list:

        - (list? rule-val) (eval rule-val)
        + (seq? rule-val) (eval rule-val)

    In html-attr:

        - (binding [tagname tagname akey k aval v]
        -   (or (rule-lookup [k tagname] *attr-table*) kv)))
        + (or (rule-lookup [k tagname] (attr-table tagname k v)) kv)))

    And we get the same result without globals. (And without dynamic scoping.) Are there other alternatives to pass along sets of variable bindings declared elsewhere, without the globals required by Clojure's binding? Is there an idiomatic way of doing it, like Ruby's binding or JavaScript's function.apply(context)?
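    A side note on the global-vars approach, for anyone reading this on a recent Clojure: since Clojure 1.3, a var must carry ^:dynamic metadata to be rebindable with binding, or the rebind throws. A minimal sketch (earmuffed names are the conventional style for such vars):

        ;; Vars must be declared dynamic to be rebindable (Clojure 1.3+).
        (def ^:dynamic *tagname*)
        (def ^:dynamic *akey*)
        (def ^:dynamic *aval*)

        (binding [*tagname* "img", *akey* :LINK, *aval* "logo.png"]
          ;; rule lookups that eval references to these vars go here
          (prn *tagname* *akey* *aval*))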

    Read the article

  • Writing an optimised and efficient search engine with mySQL and ColdFusion

    - by Mel
    I have a search page with the scenarios listed below. I was told there is a better way to do it (and that I am using too many if statements), but not how, and that it's prone to causing an error through URL manipulation:

    - search.cfm processes a search made from a search bar present on all pages, with one search input (titleName).
    - If search.cfm is accessed manually (through the URL, not through the simple search bar), it displays an advanced search form with three inputs (titleName, genreID, platformID), or it evaluates the searchResponse variable and decides what to do.
    - If a simple search query is blank, has no results, or is less than 3 characters, it displays an error.
    - If an advanced search query is blank, has no results, or is less than 3 characters, it displays an error.
    - If any successful search returns results, they come back normally.

    The top-of-page logic is as follows:

        <!---SET DEFAULT VARIABLE--->
        <cfparam name="variables.searchResponse" default="">

        <!---CHECK TO SEE IF SIMPLE SEARCH FORM WAS SUBMITTED AND EXECUTE SEARCH IF IT WAS--->
        <cfif IsDefined("Form.simpleSearch") AND Len(Trim(Form.titleName)) LTE 2>
            <cfset variables.searchResponse = "invalidString">
        <cfelseif IsDefined("Form.simpleSearch") AND Len(Trim(Form.titleName)) GTE 3>
            <!---EXECUTE METHOD AND GET DATA--->
            <cfinvoke component="myComponent" method="simpleSearch"
                      searchString="#Form.titleName#" returnvariable="simpleSearchResult">
            <cfset variables.searchResponse = "simpleSearchResult">
        </cfif>

        <!---CHECK IF ANY RECORDS WERE FOUND--->
        <cfif IsDefined("variables.simpleSearchResult") AND simpleSearchResult.RecordCount IS 0>
            <cfset variables.searchResponse = "noResult">
        </cfif>

        <!---CHECK IF ADVANCED SEARCH FORM WAS SUBMITTED--->
        <cfif IsDefined("Form.AdvancedSearch") AND Len(Trim(Form.titleName)) LTE 2>
            <cfset variables.searchResponse = "invalidString">
        <cfelseif IsDefined("Form.advancedSearch") AND Len(Trim(Form.titleName)) GTE 2>
            <!---EXECUTE METHOD AND GET DATA--->
            <cfinvoke component="myComponent" method="advancedSearch" returnvariable="advancedSearchResult"
                      titleName="#Form.titleName#" genreID="#Form.genreID#" platformID="#Form.platformID#">
            <cfset variables.searchResponse = "advancedSearchResult">
        </cfif>

        <!---CHECK IF ANY RECORDS WERE FOUND--->
        <cfif IsDefined("variables.advancedSearchResult") AND advancedSearchResult.RecordCount IS 0>
            <cfset variables.searchResponse = "noResult">
        </cfif>

    I'm using the searchResponse variable to decide what the page displays, based on the following scenarios:

        <!---ALWAYS DISPLAY SIMPLE SEARCH BAR AS IT'S PART OF THE HEADER--->
        <form name="simpleSearch" action="search.cfm" method="post">
            <input type="hidden" name="simpleSearch" />
            <input type="text" name="titleName" />
            <input type="button" value="Search" onclick="form.submit()" />
        </form>

        <!---IF NO SEARCH WAS SUBMITTED DISPLAY DEFAULT FORM--->
        <cfif searchResponse IS "">
            <h1>Advanced Search</h1>
            <!---DISPLAY FORM--->
            <form name="advancedSearch" action="search.cfm" method="post">
                <input type="hidden" name="advancedSearch" />
                <input type="text" name="titleName" />
                <input type="text" name="genreID" />
                <input type="text" name="platformID" />
                <input type="button" value="Search" onclick="form.submit()" />
            </form>
        </cfif>

        <!---IF SEARCH IS BLANK OR LESS THAN 3 CHARACTERS DISPLAY ERROR MESSAGE--->
        <cfif searchResponse IS "invalidString">
            <cfoutput>
                <h1>INVALID SEARCH</h1>
            </cfoutput>
        </cfif>

        <!---IF SEARCH WAS MADE BUT NO RESULTS WERE FOUND--->
        <cfif searchResponse IS "noResult">
            <cfoutput>
                <h1>NO RESULT FOUND</h1>
            </cfoutput>
        </cfif>

        <!---IF SIMPLE SEARCH WAS MADE AND A RESULT WAS FOUND--->
        <cfif searchResponse IS "simpleSearchResult">
            <cfoutput>
                <h1>Search Results</h1>
            </cfoutput>
            <cfoutput query="simpleSearchResult">
                <!---DISPLAY QUERY DATA--->
            </cfoutput>
        </cfif>

        <!---IF ADVANCED SEARCH WAS MADE AND A RESULT WAS FOUND--->
        <cfif searchResponse IS "advancedSearchResult">
            <cfoutput>
                <h1>Search Results</h1>
                <p>Your search for "#Form.titleName#" returned #advancedSearchResult.RecordCount# result(s).</p>
            </cfoutput>
            <cfoutput query="advancedSearchResult">
                <!---DISPLAY QUERY DATA--->
            </cfoutput>
        </cfif>

    Is my logic a) not efficient because of my if statements, and is there a better way to do this? And b) can you see any scenarios where my code can break? I've tested it, but I have not been able to find any issues with it, and I have no way of measuring performance. Any thoughts and ideas would be greatly appreciated. Many thanks
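    A sketch of one way to collapse the duplicated branches (untested; variable and method names are the ones from the question, except searchResult, which is introduced here). Both forms share the same titleName validation, so the form type can be decided once and validated on a single path, which also guards against a forged or missing field from URL manipulation:

        <cfparam name="variables.searchResponse" default="">

        <!--- Decide once which form (if any) was posted --->
        <cfset variables.isSimple   = IsDefined("Form.simpleSearch")>
        <cfset variables.isAdvanced = IsDefined("Form.advancedSearch")>

        <cfif variables.isSimple OR variables.isAdvanced>
            <cfif NOT IsDefined("Form.titleName") OR Len(Trim(Form.titleName)) LTE 2>
                <!--- also covers direct URL access with a forged/missing field --->
                <cfset variables.searchResponse = "invalidString">
            <cfelseif variables.isSimple>
                <cfinvoke component="myComponent" method="simpleSearch"
                          searchString="#Form.titleName#" returnvariable="searchResult">
                <cfset variables.searchResponse = "simpleSearchResult">
            <cfelse>
                <cfinvoke component="myComponent" method="advancedSearch"
                          titleName="#Form.titleName#" genreID="#Form.genreID#"
                          platformID="#Form.platformID#" returnvariable="searchResult">
                <cfset variables.searchResponse = "advancedSearchResult">
            </cfif>
            <cfif IsDefined("variables.searchResult") AND searchResult.RecordCount IS 0>
                <cfset variables.searchResponse = "noResult">
            </cfif>
        </cfif>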

    Read the article

  • Using BPEL Performance Statistics to Diagnose Performance Bottlenecks

    - by fip
    Tuning the performance of Oracle SOA 11G applications can be challenging. Because SOA is a platform for building composite applications that connect many applications and "services", when the overall performance is slow the bottleneck could be anywhere in the system: the applications/services that SOA connects to, the infrastructure database, or the SOA server itself. Quickly identifying the bottleneck becomes crucial in tuning the overall performance. Fortunately, the BPEL engine in Oracle SOA 11G (and 10G, for that matter) collects BPEL Engine Performance Statistics, which show the latencies of low-level BPEL engine activities. These statistics can make it a bit easier for you to identify the performance bottleneck. Although the BPEL engine performance statistics are always available, the access to and interpretation of them are somewhat obscure in the early and current (PS5) 11G versions. This blog attempts to offer instructions that help you enable, retrieve and interpret the performance statistics, before future versions provide a more pleasant user experience.

    Overview of BPEL Engine Performance Statistics

    SOA BPEL has a feature of collecting performance statistics and storing them in memory. One MBean attribute, StatLastN, configures the size of the memory buffer that stores the statistics. This memory buffer is a "moving window", in the sense that old statistics are flushed out by the new when the amount of data exceeds the buffer size. Since the buffer size is limited by StatLastN, the impact of statistics collection on performance is minimal. By default StatLastN=-1, which means no collection of performance data. Once the statistics are collected in the memory buffer, they can be retrieved via another MBean, oracle.as.soainfra.bpel:Location=[Server Name],name=BPELEngine,type=BPELEngine.

    My friend in Oracle SOA development wrote a simple 'bpelstat' web app that looks up and retrieves the performance data from the MBean and displays it in a human-readable form. It does not have a beautiful UI, but it is fairly useful. Although from Oracle SOA 11.1.1.5 onwards the same statistics can be viewed via a more elegant UI under "request break down" at EM -> SOA Infrastructure -> Service Engines -> BPEL -> Statistics, some unsophisticated minds like mine may still prefer the simplicity of the 'bpelstat' JSP. One thing the simple JSP does do well is that you can save the page and send it to someone else to analyze. Following are the instructions for installing and invoking the BPEL statistics JSP. My friend in SOA Development will soon blog about interpreting the statistics. Stay tuned.

    Step 1: Enable BPEL Engine Statistics for Each SOA Server via Enterprise Manager

    First you need to set StatLastN to some number as a way of enabling the collection of BPEL Engine Performance Statistics:

    1. EM Console -> soa-infra(Server Name) -> SOA Infrastructure -> SOA Administration -> BPEL Properties
    2. Click on "More BPEL Configuration Properties"
    3. Click on the attribute "StatLastN" and set its value to some integer. Typically you want to set it to 1000 or more.

    Step 2: Download and Deploy the bpelstat.war File to the Admin Server

    Note: the WAR file contains a JSP that does NOT have any security restriction. You do NOT want to keep it on your production server for a long time, as it is a security hazard. Deactivate the war once you are done.

    1. Download the bpelstat.war to your local PC
    2. At the WebLogic Console, go to Deployments -> Install
    3. Click on "upload your file(s)"
    4. Click the "Browse" button to upload the deployment to the Admin Server
    5. Accept the uploaded file as the path, click Next
    6. Check the default option "Install this deployment as an application"
    7. Check "AdminServer" as the target server
    8. Finish the rest of the deployment with default settings
    9. Console -> Deployments: check the box next to the "bpelstat" application and click the "Start" button. It will change the state of the app from "prepared" to "active".

    Step 3: Invoke the BPEL Statistics Tool

    The bpelstat tool merely calls the MBean of the BPEL server, then collects and displays the in-memory performance statistics. You usually want to do that after some peak load.

    1. Go to http://<admin-server-host>:<admin-server-port>/bpelstat
    2. Enter the correct admin hostname, port, username and password
    3. Enter the SOA Server Name from which you want to collect the performance statistics, for example SOA_MS1
    4. Click Submit
    5. Repeat the same for all SOA servers

    Step 4: Interpret the BPEL Engine Statistics

    You will see a few categories of BPEL statistics on the JSP page. It starts with the overall latency of BPEL processes, grouped by synchronous and asynchronous processes. Then it provides a further breakdown of the measurements through the lifetime of a BPEL request, which is called the "request breakdown".

    1. Overall latency of BPEL processes

    The top of the page shows that the elapsed time of executing the synchronous process TestSyncBPELProcess from the composite TestComposite averages about 1543.21 ms, while the elapsed time of executing the asynchronous process TestAsyncBPELProcess from the composite TestComposite2 averages about 1765.43 ms. The maximum and minimum latencies are also shown.

    Synchronous process statistics:

        <statistics>
            <stats key="default/TestComposite!2.0.2-ScopedJMSOSB*soa_bfba2527-a9ba-41a7-95c5-87e49c32f4ff/TestSyncBPELProcess" min="1234" max="4567" average="1543.21" count="1000">
            </stats>
        </statistics>

    Asynchronous process statistics:

        <statistics>
            <stats key="default/TestComposite2!2.0.2-ScopedJMSOSB*soa_bfba2527-a9ba-41a7-95c5-87e49c32f4ff/TestAsyncBPELProcess" min="2234" max="3234" average="1765.43" count="1000">
            </stats>
        </statistics>

    2. Request breakdown

    Under the overall latency categorized by synchronous and asynchronous processes is the "request breakdown". Organized by statistic keys, the request breakdown gives finer-grained performance statistics through the lifetime of the BPEL requests. It uses indentation to show the hierarchy of the statistics:

        <statistics>
            <stats key="eng-composite-request" min="0" max="0" average="0.0" count="0">
                <stats key="eng-single-request" min="22" max="606" average="258.43" count="277">
                    <stats key="populate-context" min="0" max="0" average="0.0" count="248">

    Please note that in SOA 11.1.1.6 the statistics under the request breakdown are aggregated across all BPEL processes based on statistic keys; they do not differentiate between BPEL processes. If two BPEL processes happen to have statistics that share the same statistic key, the statistics from the two BPEL processes will be aggregated together. Keep this in mind as we go through more details below.

    2.1 BPEL process activity latencies

    A very useful measurement in the request breakdown is the performance statistics of the BPEL activities you put in your BPEL processes: Assign, Invoke, Receive, etc.
    The names of the measurements on the JSP page come directly from the names assigned to each BPEL activity. These measurements are under the statistic key "actual-perform".

    Example 1: Following is the measurement for the BPEL activity "AssignInvokeCreditProvider_Input", which looks like the Assign activity in a BPEL process that assigns an input variable before passing it to the invocation:

        <stats key="AssignInvokeCreditProvider_Input" min="1" max="8" average="1.9" count="153">
            <stats key="sensor-send-activity-data" min="0" max="1" average="0.0" count="306">
            </stats>
            <stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="153">
            </stats>
            <stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="306">
            </stats>
        </stats>

    Note: because, as previously mentioned, the statistics across all BPEL processes are aggregated together based on statistic keys, if two BPEL processes happen to name their Invoke activity the same, they will show up under one measurement (i.e. one statistic key).

    Example 2: Following is the measurement of the BPEL activity called "InvokeCreditProvider". You can see not only that on average it takes 3.31 ms to finish this call (pretty fast) but also, from the further breakdown, that most of the 3.31 ms was spent on "invoke-service":

        <stats key="InvokeCreditProvider" min="1" max="13" average="3.31" count="153">
            <stats key="initiate-correlation-set-again" min="0" max="0" average="0.0" count="153">
            </stats>
            <stats key="invoke-service" min="1" max="13" average="3.08" count="153">
                <stats key="prep-call" min="0" max="1" average="0.04" count="153">
                </stats>
            </stats>
            <stats key="initiate-correlation-set" min="0" max="0" average="0.0" count="153">
            </stats>
            <stats key="sensor-send-activity-data" min="0" max="0" average="0.0" count="306">
            </stats>
            <stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="153">
            </stats>
            <stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="306">
            </stats>
            <stats key="update-audit-trail" min="0" max="2" average="0.03" count="153">
            </stats>
        </stats>

    2.2 BPEL engine activity latency

    Another type of measurement under the request breakdown is the latency of underlying system-level engine activities. These activities are not directly tied to a particular BPEL process or process activity, but they are critical factors in overall engine performance. They include the latency of saving asynchronous requests to the database, and the latency of process dehydration.

    My friend Malkit Bhasin is working on providing more information on interpreting the statistics on engine activities on his blog (https://blogs.oracle.com/malkit/). I will update this blog once the information becomes available.

    Update on 2012-10-02: My friend Malkit Bhasin has published the detailed interpretation of the BPEL service engine statistics on his blog: http://malkit.blogspot.com/2012/09/oracle-bpel-engine-soa-suite.html.

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo

    Talks summarized below:

    - Introduction to Toorcon 15 (2013)
    - A Tale of One Software Bypass of MS Windows 8 Secure Boot
    - Breaching SSL, One Byte at a Time
    - Running at 99%: Surviving an Application DoS
    - Security Response in the Age of Mass Customized Attacks
    - x86 Rewriting: Defeating RoP and other Shinanighans
    - Clowntown Express: interesting bugs and running a bug bounty program
    - Active Fingerprinting of Encrypted VPNs
    - Making Attacks Go Backwards
    - Mask Your Checksums—The Gorry Details
    - Adventures with weird machines thirty years after "Reflections on Trusting Trust"

    Introduction to Toorcon 15 (2013)

    Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here, starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about them. Be aware that I may have misrepresented the speakers' remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.

    A Tale of One Software Bypass of MS Windows 8 Secure Boot

    Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak

    Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.

    MS Windows 8 Secure Boot overview: UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi); once replaced, it can modify the kernel. It is trivial to replace the bootloader. Today there are many legacy bootkits—UEFI replaces most of them.

    MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as used for phones), but you can't do that for PCs—PCs use writable memory with signatures.

    - The DXE core verifies the UEFI boot loader(s)
    - The OS loader (winload.efi, winresume.efi) verifies the OS kernel

    A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx): X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys; they can't be accessed once the OS starts, which uses Run-Time (RT) services instead. The root cert uses RSA-2048 public keys and PKCS#7 format signatures. The relevant variables are:

    - SecureBoot — enable/disable image signature checks
    - SetupMode — update keys, self-signed keys, and secure boot variables
    - CustomMode — allows updating keys

    Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation.

    Attacking MS Windows 8 Secure Boot: Secure Boot does NOT protect from physical access; it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors, so it becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly, meaning they:

    - allow only signed UEFI firmware updates
    - protect UEFI firmware from direct modification in flash memory
    - protect FW update components
    - program the SPI controller securely
    - protect secure boot policy settings in NVRAM
    - protect the runtime API
    - disable the compatibility support module, which allows unsigned legacy code

    One attack can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash; if the PK is not found, the firmware enters setup mode with secure boot turned off. The TPM can be exploited in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, but they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future.

    Recommendations: allow only signed updates; protect UEFI firmware in ROM; protect the EFI variable store in ROM.

    Breaching SSL, One Byte at a Time

    Angelo Prado and Yoel Gluck, Salesforce.com

    CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME requests with every possible character and measures the ciphertext length, looks for the plaintext which compresses the most, and finds the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy; Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and a CVE was issued.

    BREACH, breachattack.com

    BREACH exploits the SSL response body (Accept-Encoding, Content-Encoding): it takes advantage of the fact that the HTTP response body is compressed even when TLS-level compression is off. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say, from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key.

    Compression can be used to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses 11 bytes, so each character can be found by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. The compression length then "leaks" information.

    Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include:

    - lookahead (guess 2 or 3 characters at a time instead of 1); expensive
    - rollback to the last known conflict
    - checking the compression ratio
    - brute-forcing the first 3 "bootstrap" characters, if needed (expensive)
    - for block ciphers, which hide the exact plaintext length, aligning the response to the block size in advance

    Mitigations:

    - length: use variable padding
    - secrets: dynamic CSRF tokens per request
    - secrets: change over time
    - separate secrets to input-less servlets

    Future work: either understand DEFLATE/GZIP, or HTTPS extensions.

    Running at 99%: Surviving an Application DoS

    Ryan Huber, Risk I/O

    Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or to download large files. Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx.

    How to identify malicious hosts:

    - short, sudden web requests
    - obvious user-agents (curl, python)
    - the same URL requested repeatedly
    - no web page referer (not normal)
    - hidden links: hide a link and see if a bot follows it
    - restricted access if not your geo IP (unless the website is global)
    - missing common headers in the request
    - regular timing
    - first-seen IPs at the beginning of an attack
    - count of requests per host (usually a very large number)

    Use of captcha can mitigate attacks, but you'll lose a lot of genuine users.

    Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer

    Bouncer is software written by Ryan in netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it is not nice about it; no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers to get clean data for use by Bouncer. Bouncer is also useful for popularity storms ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitasking, logging of destroyed connections.

    Takeaways: DoS mitigation is easier with a complete picture. Bouncer is designed to make it easier to detect and defend against DoS—it is not a complete cure.

    Security Response in the Age of Mass Customized Attacks

    Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/

    Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School: differentiating a product for an individual customer, but at a mass-production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts.

    Exploit kits are another example of mass customization. The kits support multiple browsers and plugins and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits.

    Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation.

    Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now, so the drive with malware is towards mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware they infer edicts to:

    - support all versions of Reader
    - support all versions of Windows
    - support all versions of Flash
    - support all browsers
    - write large, complex, difficult-to-maintain code (8750 lines of JavaScript, for example)

    Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
    However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

    x86 Rewriting: Defeating RoP and other Shinanighans

    Richard Wartell

    The attack vector addressed here: first, some malware causes a buffer overflow. The malware has no program access, but has input access, and overflows code onto the stack. Later the stack became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways.

    "RoP" is Return-oriented Programming attacks. RoP attacks use your own code, writing return addresses on the stack that point to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is anti-RoP compilers that compile source code with NO return instructions. ASLR does not randomize the address space, just "gadgets". IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine.

    Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles a binary and operates on "basic blocks" of code; the STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory: STIR writes new code sections with copies of the "basic blocks" in randomized locations, the old code is copied and rewritten with jumps to the new code, and the original code sections in the file are marked non-executable.

    STIR has better entropy than ASLR in the location of code, which makes brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved their addresses). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies, for example: don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.

    Clowntown Express: interesting bugs and running a bug bounty program

    Collin Greene, Facebook

    Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing security on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable).

    Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring in people with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money than just code review or static code analysis. The bounty program started in July 2011 and has paid out $1.5 million to date. 14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else)—the top-paid submitters make six figures a year. Spammers like to backstab competitors. The youngest submitter was 13, and some submitters have been hired. Bug bounties also surface bugs that were missed by tools or reviews, allowing improvement of the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

    Active Fingerprinting of Encrypted VPNs

    Anna Shubina, Dartmouth Institute for Security, Technology, and Society

    (I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially if far away from the network. More exploration is needed of how TCP works in real life with respect to timing.

    Making Attacks Go Backwards

    FuzzyNop, Mandiant

    This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels, and there are a lot of fat-fingered fumblers out there.

    Always look for low-hanging fruit first:

    - "hiding" malware in the temp, recycle, or root directories
    - creation of unnamed scheduled tasks
    - obvious names of files and syscalls ("ClearEventLog")
    - uncleared event logs; clearing the event log itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system

    Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly.

    Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" or "[RightShift]"), or timestamped printf strings. Look for these in files. Presence is not proof they are used; absence is not proof they are not used.

    Java exploits: you can parse a jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware, which also typically needs command and control.

    Application of Artificial Intelligence in Ad-Hoc Static Code Analysis

    John Ashaman, Security Innovation

    Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, the problem is that making a call graph is really hard; for example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST), with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph, with the goal of finding a path through the AST (a maze) from source to sink.
The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts in Security Innovation blog and NRefactory on GitHub. Mask Your Checksums—The Gorry Details Eric (XlogicX) Davisson Eric (XlogicX) Davisson Sometimes in emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found in stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from checksum (that is, it leaks data). Other parts of packet may leak more data about the IP. TCP and IP checksums both refer to the same data, so can get more bits of information out of using both checksums than just using one checksum. Also, one can usually determine the OS from the TTL field and ports in a packet header. If we get hundreds of possible results (16x each masked nibble that is unknown), one can do other things to narrow the results, such as look at packet contents for domain or geo information. With hundreds of results, can import as CSV format into a spreadsheet. Can corelate with geo data and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached. Was able to find the exact IP address, given the geo and university of the sender. Point is if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, to not create a false impression of security. Adventures with weird machines thirty years after "Reflections on Trusting Trust" Sergey Bratus Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present) "Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper. "You can't trust code that you did not totally create yourself." There's invisible links in the chain-of-trust, such as "well-installed microcode bugs" or in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose if there's no bugs and you trust the author, can you trust the code? Hell No! There's too many factors—it's Babylonian in nature. Why not? Well, Input is not well-defined/recognized (code's assumptions about "checked" input will be violated (bug/vunerabiliy). For example, HTML is recursive, but Regex checking is not recursive. Input well-formed but so complex there's no telling what it does For example, ELF file parsing is complex and has multiple ways of parsing. Input is seen differently by different pieces of program or toolchain Any Input is a program input executes on input handlers (drives state changes & transitions) only a well-defined execution model can be trusted (regex/DFA, PDA, CFG) Input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age ELF ABI (UNIX/Linux executible file format) case study. Problems can arise from these steps (without planting bugs): compiler linker loader ld.so/rtld relocator DWARF (debugger info) exceptions The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable). 
The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading; you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine. @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. There is also a DWARF exception-handling example: .eh_frame + glibc == Turing machine. X86 MMU (IDT, GDT, TSS): Sergey's group used address translation to create a Turing machine. The page handler reads and writes memory (on page fault), using a page table, which serves as the Turing machine's byte code. There is an example on GitHub using this TM that will fly a glider across the screen. Next Sergey talked about "parser differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF: there are several parsers in the OS toolchain, all of them different. One can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.

Conclusions: trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it. Complex data formats mean bugs. There is no "chain of trust" in Babylon (that is, with parser differentials). We need to squeeze complexity out of data until data stops being "code equivalent".

Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.

    Read the article

  • Agile Awakenings and the Rules of Agile

    - by Robert May
For those that care, you can read my history of management and technology to understand why I think I'm qualified to talk about this at all.  It's boring, so feel free to skip it.

Awakenings

I first started to play around with the idea of "agile" in 2004 or 2005.  I found a book on the Rational Unified Process that I thought was good, and attempted to implement parts of it.  I thought I was agile, but really, I wasn't.  I still didn't understand the concept of a team.  I still wanted to tell the team what to do and how to get it done.  I still thought I was smarter than the team.

After that job, I started work on another project and began helping that team.  The first few months were really rough.  We were implementing Scrum, which was relatively new to everyone on the team, and, quite frankly, I was doing a poor job of it.  I was trying to micro-manage every aspect of the team's work, and we were all miserable.

The moment of change came when the senior architect bailed on the project.  His comment to me was: "This isn't Agile.  Where are the stand-ups?  Where are the stories?"  He was dead on, and I finally woke up.  I finally realized that I was the problem!  I wasn't trusting the team.  I wasn't helping the team.  I was being a manager.

Like many (most?), I was claiming to be Agile and to use Scrum, but I wasn't in fact following the rules of Scrum.  Since then, I've done a lot of studying, hands-on practice, coaching of many different teams, and other learning around Scrum, and I have discovered that Scrum has some rules that must be followed for success, even though the process is about continuous improvement.  I've been practicing Scrum right for about 4 years now and have helped multiple teams implement it successfully, so what you're about to get is based on experience rather than just theory.

The Rules of Scrum

In my experience, what I've found is that most companies that claim to be doing Scrum or Agile are actually NOT doing either.  This stems largely from the belief that they can "adopt the rules of Agile that fit their organization."  Sadly, many of them think that this means they can adopt iterations (sprints) and not much else.  Either that, or they think they can do whatever they want, or whatever they were doing before, and call it Scrum.  This is simply not true.  Here are some rules that must be followed for you to really be doing Scrum.  I'll go into detail on each one of these in future blog posts and update the links here.  My intent is that this will help other teams implementing Scrum to see more success.

- Agile does not allow you to do whatever you want
- A Product Owner is required
- A ScrumMaster is required
- The team must function as a Team, and QA must be part of the team
- Support from upper management is required
- A prioritized product backlog is required
- A prioritized sprint backlog is required
- Release planning is required
- Complete sprint planning is required
- Showcases are required
- Velocity must be measured
- Retrospectives are required
- Daily stand-ups are required
- Visibility is absolutely required

For now, I think that's enough, although I reserve the right to add more.  If you're breaking any of these rules, you're probably not doing Scrum.  There are exceptions to these rules, but until you have practiced Scrum for a while, you don't know what those exceptions are.

Breaking the Rules

Many teams break these rules because they are the ones that expose the most pain.  Scrum is not Advil.  It's not intended to mask the pain; it's intended to cure it.
Let me explain that analogy a bit more.  Recently, my 7-year-old son broke his arm, quite severely (see the X-Ray to the right).  That caused him a great deal of pain.  We went first to one doctor, and after viewing the X-Ray, they determined that there was no way they'd cast the arm at their location.  It was simply too bad of a break for them to deal with.  They did, however, give him some Advil for the pain and put a splint on his arm to stabilize the broken bones.  Within minutes, he was feeling much better.  Had we been stupid, we could have gone home and he'd have been just as happy as ever . . . until the pain medication wore off or one of his siblings touched the splint.  Then, all of that pain would come right back to the top.  Sure, he could make it go away by just taking more Advil and moving the splint out of the way, but that wasn't going to fix the problem permanently.

We ended up in an emergency room with a doctor who could fix his arm.  However, we were warned that the fix was going to be VERY painful, and it was.  Even with heavy sedation (Propofol), my son was in enough pain that he squirmed and wiggled, trying to get his arm away from the doctor.  He had to endure this pain in order to have a functional arm.  But the setting wasn't the end.  He had to have several casts, had to have the arm re-broken once, since the first setting didn't take, and finally was given a clean bill of health.

Agile implementation is much like this story.  Agile was developed as a result of people recognizing that the development methodologies then in place were simply ineffective.  However, the fix to the broken development process that has been festering for many years is not painless.  Many people start Agile thinking that things will be wonderful.  They won't!  Agile is about visibility, and often it brings great pain to the surface.  It causes all of the missed deadlines, the cowboy coders, the coasters, the micro-managers, the lazy, and all of the other problems that are really part of your development process now to become painfully visible to EVERYONE.  Many people don't like this exposure.  Agile will make the pain better, but not if you remove the cast (the rules above) prematurely and start breaking the rules that expose the most pain.  The healing will take time and is not instant (like Advil).  Figuring out the true source of pain and fixing it is very valuable to you, your team, and your company.  Remember as you're doing this that Agile isn't the source of the pain; it's really just exposing it.  Find the source.

My recommendation is that ALL of these rules are followed for a minimum of six months, and preferably for an entire year, before you decide to break any of them.  Get a few good releases under your belt.  Figure out what your velocity is and start firing as a team.  Chances are, after you see Agile really in action, you won't want to break the rules, because you'll see their value.

More Reading

Jean Tabaka recently published a list of 78 Things I Have Learned in 6 Years of Agile Coaching.  Highly recommended.

Technorati Tags: Agile, Scrum, Rules

    Read the article

  • URL Parts available to URL Rewrite Rules

URL Rewrite is a powerful URL rewriting tool available for IIS7 and newer.  Your rewriting options are almost unlimited, giving you the ability to optimize URLs for search engine optimization (SEO), support multiple domain names on a single site, hide complex paths, and much more. URL Rewrite allows you to use any server variable in conditions, and with URL Rewrite 2.0, you can also update them on the fly.  To see all variables available to your site, see this post. An understanding...
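For illustration, here is a minimal sketch of a rule that tests the {HTTP_HOST} server variable in a condition to redirect a bare domain to its www form; the domain name is a placeholder:

<rewrite>
  <rules>
    <rule name="Canonical host name" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <!-- {HTTP_HOST} is one of the server variables available to conditions -->
        <add input="{HTTP_HOST}" pattern="^example\.com$" />
      </conditions>
      <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>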

    Read the article

  • Trace File Source Adapter

The Trace File Source adapter is a useful addition to your SSIS toolbox.  It allows you to read SQL Server 2005 and 2008 Profiler traces stored as .trc files into the Data Flow.  From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; this just uses the trace file.

Example usages:
- Cache warming for SQL Server Analysis Services
- Reading the flight recorder
- Finding the longest-running queries on a server
- Analyzing statements for CPU or memory by user, or by some other criteria you choose

Properties
The Trace File Source adapter has two properties, which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported, for both the Database Engine (SQL Server) and Analysis Services. The properties are managed through the Editor form or can be set directly in the Properties Grid in Visual Studio.

- AccessMode (Enumeration): Determines how the Filename property is interpreted. The available values are DirectInput and Variable.
- Filename (String): Holds the path of the trace file to load (*.trc). The value is either a full path or the name of a variable which contains the full path to the trace file, depending on the AccessMode property.

Trace Column Definition
Hopefully the majority of you can skip this section entirely, but if you encounter problems processing a trace file, this may explain them and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately, the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files, which enable the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string.

TraceDataColumnsSQL.xml - SQL Server Database Engine trace columns
TraceDataColumnsAS.xml - SQL Server Analysis Services trace columns

The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" and "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml". If at runtime the component encounters a type conversion or sizing error, it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files, enabling you to fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below:

Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".

Installation
The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache, as per Microsoft's recommendations.
You may need to restart the SQL Server Integration Services service, as it caches information about which components are installed, as well as restart any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry, How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server service pack to your SQL Server servers and workstations. Please note that the Microsoft trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details.

Downloads: Trace Sources for SQL Server 2005 | Trace Sources for SQL Server 2008

Version History
SQL Server 2008: Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009)
SQL Server 2005: Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008)

    Read the article

  • C# 4.0: Named And Optional Arguments

    - by Paulo Morgado
As part of the co-evolution effort of C# and Visual Basic, C# 4.0 introduces named and optional arguments. First of all, let's clarify what arguments and parameters are: method definition parameters are the input variables of the method, and method call arguments are the values provided to the method parameters. In fact, the C# Language Specification states the following in §7.5: "The argument list (§7.5.1) of a function member invocation provides actual values or variable references for the parameters of the function member." Given the above definitions, we can state that parameters have always been named and still are, and that parameters have never been optional and still aren't.

Named Arguments
Until now, the C# compiler matched method call arguments with method parameters by position. The first argument provides the value for the first parameter, the second argument provides the value for the second parameter, and so on, regardless of the names of the parameters. If a parameter was missing a corresponding argument to provide its value, the compiler would emit a compilation error. For this call: Greeting("Mr.", "Morgado", 42); this method: public void Greeting(string title, string name, int age) will receive as parameters: title: "Mr.", name: "Morgado", age: 42. What this new feature allows is the use of the names of the parameters to identify the corresponding arguments, in the form: name:value. Not all arguments in the argument list must be named; however, all named arguments must be at the end of the argument list. The matching between arguments (and the evaluation of their values) and parameters is done first by name for the named arguments and then by position for the unnamed arguments. This means that, for this method definition: public static void Method(int first, int second, int third) this call declaration: int i = 0; Method(i, third: i++, second: ++i); will have this code generated by the compiler: int i = 0; int CS$0$0000 = i++; int CS$0$0001 = ++i; Method(i, CS$0$0001, CS$0$0000); which will give the method the following parameter values: first: 2, second: 2, third: 0. Notice the variable names: although invalid as C# identifiers, they are valid .NET identifiers, thus avoiding collisions between user-written and compiler-generated code. Besides allowing you to reorder the argument list, this feature is very useful for self-documenting code, for example when the argument list is very long or when it is not clear from the call site what the arguments are.

Optional Arguments
Parameters can now have default values: public static void Method(int first, int second = 2, int third = 3). Parameters with default values must be the last in the parameter list, and a parameter's default value is used as its value if the corresponding argument is missing from the method call. For this call declaration: int i = 0; Method(i, third: ++i); this code will be generated by the compiler: int i = 0; int CS$0$0000 = ++i; Method(i, 2, CS$0$0000); which will give the method the following parameter values: first: 1, second: 2, third: 1. Because arguments can be omitted from the call when method parameters have default values, this might seem like method overloading or a good replacement for it, but it isn't.
Although methods like this: public static StreamReader OpenTextFile( string path, Encoding encoding = null, bool detectEncoding = true, int bufferSize = 1024) allow their calls to be written like this: OpenTextFile("foo.txt", Encoding.UTF8); OpenTextFile("foo.txt", Encoding.UTF8, bufferSize: 4096); OpenTextFile( bufferSize: 4096, path: "foo.txt", detectEncoding: false); the compiler handles default values like constant fields, taking the value and using it instead of a reference to the value. So, as with constant fields, methods with default parameter values are exposed publicly (and remember that internal members might be publicly accessible – InternalsVisibleToAttribute). If such methods are publicly accessible and used by another assembly, those values will be hard-coded in the calling code and, if the called assembly has its default values changed, the new values won't be picked up by already-compiled code. At first glance, I thought that using optional arguments for "badly" written code was great, but that the ability to write code like that was just pure evil. But then I realized that, since I use private constant fields, it's OK to use default parameter values on privately accessible methods.
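A minimal sketch of that pitfall (the class and method names here are hypothetical):

// Library assembly, version 1:
public static class Logger
{
    // The default is baked into every call site compiled against this version.
    public static void Log(string message, int level = 1) { /* ... */ }
}

// Client assembly, compiled against version 1:
Logger.Log("hello"); // compiled as Logger.Log("hello", 1)

// If version 2 of the library changes the default to level = 2, the client
// above keeps passing 1 until it is recompiled against the new version.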

    Read the article

  • How to make the mex compiler of Matlab work on Mint?

    - by Erogol
The mex compiler of Matlab does not work, failing with the following error: Warning: You are using gcc version "4.7.2-2ubuntu1)". The version currently supported with MEX is "4.4.6". For a list of currently supported compilers see: http://www.mathworks.com/support/compilers/current_release/ /home/krm/matlab/bin/mex: 1: eval: g++: not found mex: compile of ' "fv_cache/fv_cache.cc"' failed. It is obvious that I need an earlier version of gcc, but this specific version is not included in Mint's software manager. I installed gcc-4.4, but it is not recognized by Matlab. I also removed the latest version from my computer and set an environment variable pointing gcc to gcc-4.4, but again it does not work. Is there any other way to solve this issue? Maybe an interface or something?
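One possible fix, sketched here as a suggestion rather than a verified answer: install the older compiler pair and point mex at it explicitly. Note that the error also says "g++: not found", so the C++ compiler may be missing entirely.

sudo apt-get install gcc-4.4 g++-4.4

Then run mex -setup inside Matlab and edit the generated options file (on Linux of that era, typically ~/.matlab/<release>/mexopts.sh) so that CC points to gcc-4.4 and CXX points to g++-4.4.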

    Read the article

  • Part 14: Execute a PowerShell script

In the series the following parts have been published:
Part 1: Introduction
Part 2: Add arguments and variables
Part 3: Use more complex arguments
Part 4: Create your own activity
Part 5: Increase AssemblyVersion
Part 6: Use custom type for an argument
Part 7: How is the custom assembly found
Part 8: Send information to the build log
Part 9: Impersonate activities (run under other credentials)
Part 10: Include Version Number in the Build Number
Part 11: Speed up opening my build process template
Part 12: How to debug my custom activities
Part 13: Get control over the Build Output
Part 14: Execute a PowerShell script
Part 15: Fail a build based on the exit code of a console application

With PowerShell you can add powerful scripting to your build, for example to execute a deployment. If you want more information on PowerShell, please refer to http://technet.microsoft.com/en-us/library/aa973757.aspx

For this example we will create a simple PowerShell script that prints "Hello World!". To create the script, create a new text file and name it "HelloWorld.ps1". Add the following to the contents of the script:

Write-Host "Hello World!"

To test the script, do the following:
1. Open the command prompt.
2. To run the script you must change the execution policy. To do this, execute in the command prompt: powershell set-executionpolicy remotesigned
3. Now go to the directory where you have saved the PowerShell script.
4. Execute the following command: powershell .\HelloWorld.ps1

In this example I use a relative path, but when the path to the PowerShell script contains spaces, you need to change the syntax to powershell "& '<full path to script>' ", for example: powershell "& 'C:\sources\Build Customization\SolutionToBuild\PowerShell Scripts\HelloWorld.ps1' "

In this blog post, I create a new solution, and that solution also includes this PowerShell script. I want to create an argument on the Build Process Template that holds the path to the PowerShell script. In the Build Process Template I will add an InvokeProcess activity to execute the PowerShell command. This InvokeProcess activity needs the location of the script as an argument for the PowerShell command. Since you don't know the full path of this script on the build server, you could specify the relative path in the argument, but it is hard to find out what that relative path is. I prefer to specify the location of the script in source control and then convert that server path to a local path. To do this conversion you can use the ConvertWorkspaceItem activity. So, to complete the task, open the Build Process Template CustomTemplate.xaml that we created in earlier parts and follow these steps:
1. Add a new argument called "DeploymentScript" and set the appropriate settings in the metadata. See Part 2: Add arguments and variables for more information.
2. Scroll down beneath the TryCatch activity called "Try Compile, Test, and Associate Changesets and Work Items".
3. Add a new If activity and set the condition to Not String.IsNullOrEmpty(DeploymentScript) to ensure it will only run when the argument is passed.
4. Add a new Sequence activity in the Then branch of the If activity and rename it to "Start deployment".
5. Click on the activity and add a new variable called DeploymentScriptFilename (scoped to the "Start deployment" Sequence).
6. Add a ConvertWorkspaceItem activity on the "Start deployment" Sequence.
7. Add an InvokeProcess activity beneath the ConvertWorkspaceItem activity in the "Start deployment" Sequence.
8. Click on the ConvertWorkspaceItem activity and change the properties: DisplayName = Convert deployment script filename; Input = DeploymentScript; Result = DeploymentScriptFilename; Workspace = Workspace.
9. Click on the InvokeProcess activity and change the properties: Arguments = String.Format(" ""& '{0}' "" ", DeploymentScriptFilename); DisplayName = Execute deployment script; FileName = "PowerShell".
10. To see results from the PowerShell command, drop a WriteBuildMessage activity on the "Handle Standard Output" section and pass the stdOutput variable to the Message property. Do the same with a WriteBuildError activity on the "Handle Error Output" section.
11. To publish it, check in the Build Process Template.

This leads to the following result. We now go to the build definition that depends on the template and set the path of the deployment script to the server path of HelloWorld.ps1. (If you want to see the result of the PowerShell script, change the logging verbosity to Detailed or Diagnostic.) Save and run the build.

A lot of the deployment scripts you have will take some kind of arguments (like username/password or environment variables) that you want to define in the Build Definition. To make the PowerShell script configurable, follow these steps. Create a new script and give it the name "HelloWho.ps1". Add the following lines to the contents of the file:

param (
    $person
)
$message = [System.String]::Format("Hello {0}!", $person)
Write-Host $message

When you now run the script at the command prompt, you will see the personalized greeting. So let's change the Build Process Template to accept one parameter for the deployment script. You could of course make it more configurable by adding a loop that reads through a collection of parameters, but that is out of scope for this blog post.
1. Add a new argument called DeploymentScriptParameter.
2. In the InvokeProcess activity where the PowerShell command is executed, modify the Arguments property to String.Format(" ""& '{0}' '{1}' "" ", DeploymentScriptFilename, DeploymentScriptParameter).
3. Check in the Build Process Template.

Now modify the build definition, set the parameter of the deployment script to any value, and run the build. You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.
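To make the quoting a bit more concrete: with a script parameter value of World, the Arguments expression above produces a command line roughly like this (the local path shown is just an illustration):

PowerShell "& 'C:\Builds\1\Demo\Sources\Scripts\HelloWho.ps1' 'World' "

The outer double quotes keep the whole expression together as a single argument to PowerShell, and the single quotes protect any spaces in the script path and in the parameter value.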

    Read the article

  • Introducing Microsoft SQL Server 2008 R2 - Business Intelligence Samples

    - by smisner
On April 14, 2010, Microsoft Press (blog | twitter) released my latest book, co-authored with Ross Mistry (twitter), as a free ebook download: Introducing Microsoft SQL Server 2008 R2. As the title implies, this ebook is an introduction to the latest SQL Server release. Although you'll find a comprehensive review of the product's features in this book, you will not find the step-by-step details that are typical in my other books. For those readers who are interested in a more interactive learning experience, I have created two sample files for download: the IntroSQLServer2008R2Samples project and the Sales Analysis workbook. Here's a recap of the business intelligence chapters and the samples I used to generate the screenshots, by chapter:

Chapter 6: Scalable Data Warehousing covers a new edition of SQL Server, Parallel Data Warehouse. Understandably, Microsoft did not ship me the software and hardware to set up my own Parallel Data Warehouse environment for testing purposes, and consequently you won't see any screenshots in this chapter. I received a lot of information and a lot of help from the product team during the development of this chapter to ensure its technical accuracy.

Chapter 7: Master Data Services is a new component in SQL Server. After you install Master Data Services (MDS), which is a separate installation from SQL Server although it's found on the same media, you can install sample models to explore (which is what I did to create screenshots for the book). To do this, you deploy the packages found at \Program Files\Microsoft SQL Server\Master Data Services\Samples\Packages. You will first need to use the Configuration Manager (in the Microsoft SQL Server 2008 R2\Master Data Services program group) to create a database and a Web application for MDS. Then, when you launch the application, you'll see a Getting Started page which has a Deploy Sample Data link that you can use to deploy any of the sample packages.

Chapter 8: Complex Event Processing is an introduction to another new component, StreamInsight. This topic was way too large to cover in depth in a single chapter, so I focused on information such as architecture, development models, and an overview of the key sections of code you'll need to develop for your own applications. StreamInsight is an engine that operates on data in flight, and as such it has no user interface that I could include in the book as screenshots. The November CTP version of SQL Server 2008 R2 included code samples as part of the installation, but these are not the official samples that will eventually be available on CodePlex. At the time of this writing, the samples are not yet published.

Chapter 9: Reporting Services Enhancements provides an overview of all the changes to Reporting Services in SQL Server 2008 R2, and there are many! In previous posts, I shared more details than you'll find in the book about new functions (Lookup, MultiLookup, and LookupSet), properties for page numbering, and the new global variable RenderFormat. I will confess that I didn't use actual data in the book for my discussion of the Lookup functions, but I did create real reports for the blog posts and will upload those separately. For the other screenshots and examples in the book, I have created the IntroSQLServer2008R2Samples project for you to download. To preview these reports in Business Intelligence Development Studio, you must have the AdventureWorksDW2008R2 database installed, and you must download and install SQL Server 2008 R2.
For the map report, you must execute the PopulationData.sql script that I included in the samples file to add a table to the AdventureWorksDW2008R2 database. The IntroSQLServer2008R2Samples project includes the following files:
- 01_AggregateOfAggregates.rdl, to illustrate the use of embedded aggregate functions
- 02_RenderFormatAndPaging.rdl, to illustrate the use of page break properties (Disabled, ResetPageNumber), the PageName property, and the RenderFormat global variable
- 03_DataSynchronization.rdl, to illustrate the use of the DomainScope property
- 04_TextboxOrientation.rdl, to illustrate the use of the WritingMode property
- 05_DataBar.rdl
- 06_Sparklines.rdl
- 07_Indicators.rdl
- 08_Map.rdl, to illustrate a simple analytical map that uses color to show population counts by state
- PopulationData.sql, to provide the data necessary for the map report

Chapter 10: Self-Service Analysis with PowerPivot introduces two new components to the Microsoft BI stack, PowerPivot for Excel and PowerPivot for SharePoint, which you can learn more about at the PowerPivot site. To produce the screenshots for this chapter, I created the Sales Analysis workbook, which you can download (although you must have Excel 2010 and the PowerPivot for Excel add-in installed to explore it fully). It's a rather simple workbook, because space in the book did not permit a complete exploration of all the wonderful things you can do with PowerPivot. I used a tutorial that was available with the CTP version as a basis for the report, so it might look familiar if you've already started learning about PowerPivot. In future posts, I'll continue exploring the new features in greater detail. If there are any special requests, please let me know!

    Read the article

  • C# ?? null coalescing operator

    - by anirudha
The null coalescing operator is used to supply a value when an object is null; if the object has a value, nothing changes and it keeps the value it has. string str = "i am string"; string message = str ?? "it is null"; Here message has the same value as str, because str is not null; if str were null, message would have the value "it is null", as declared in the statement. The operator also works with nullable value types such as int?: if the nullable has no value, the right-hand operand is used instead.
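A short sketch of both uses:

string str = "i am string";
string message = str ?? "it is null";    // message == "i am string"

string nothing = null;
string result = nothing ?? "it is null"; // result == "it is null"

int? count = null;
int actual = count ?? 0;                 // actual == 0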

    Read the article

  • Refresh bounded taskflows across regions using InputParameters

    - by raghu.yadav
Use case 1: Selecting a record in a table in the left region refreshes the dependent detail form of the same table in the right region, using InputParameters. Here is the example given by Andre. Three important points to take from that example:
1) Create a primary key attribute in the pagedef of the table in region 1.
2) Add the input parameter name in the taskflow InputParameters of region 2.
3) Bind the primary key attribute from the page definition to the above input parameter in the main page where the two regions are dropped.

Use case 2: Selecting a record in the location table in the left region shows the corresponding department records from the department table in the right region.
1) Create a bind variable on location id in the departmentVO.
2) Create an input parameter, say LocationParam, with type Number and value #{pageFlowScope.LocationParam}.
3) Assign the LocationId parameter from the pagedef to LocationParam in taskflow 2.
4) Create an ExecuteWithParams action in the region 2 pagedef and invoke it on the IfRefresh condition.
At run time the steps execute backwards (3, 2, 1): as the user selects a row in the location table, the location id is assigned from the pagedef to LocationParam, then to pageFlowScope, and from there to the view criteria.

    Read the article

  • Best methods for Lazy Initialization with properties

    - by Stuart Pegg
I'm currently altering a widely used class to move as much of the expensive initialization as possible from the class constructor into lazily initialized properties. Below is an example (in C#):

Before:

public class ClassA
{
    public readonly ClassB B;

    public ClassA()
    {
        B = new ClassB();
    }
}

After:

public class ClassA
{
    private ClassB _b;

    public ClassB B
    {
        get
        {
            if (_b == null)
            {
                _b = new ClassB();
            }
            return _b;
        }
    }
}

There are a fair few more of these properties in the class I'm altering, and some are not used in certain contexts (hence the laziness), but if they are used they're likely to be called repeatedly. Unfortunately, the properties are often also used inside the class. This means there is a potential for the private variable (_b) to be used directly by a method without it being initialized. Is there a way to make only the public property (B) available inside the class, or an alternative approach with the same initialized-when-needed behavior?
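One possible approach (a sketch, assuming .NET 4's System.Lazy<T> is available): store the lazy wrapper in the field, so even direct field access inside the class goes through the initialization logic:

public class ClassA
{
    // The field holds the wrapper, not the value; _b.Value always
    // initializes on first access, so there is no uninitialized path.
    private readonly Lazy<ClassB> _b = new Lazy<ClassB>(() => new ClassB());

    public ClassB B
    {
        get { return _b.Value; }
    }
}

As a bonus, Lazy<T> is thread-safe by default, which the hand-rolled null check is not.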

    Read the article

  • Coding error at open URL

    - by Lobo
Hi, I have the following method to open a URL API:

String c = "";
URL direccionURL;
try {
    direccionURL = new URL("http://api.stackoverflow.com/1.0/users/523725");
    BufferedReader in = new BufferedReader(new InputStreamReader(
            direccionURL.openStream()));
    String inputLine;
    while ((inputLine = in.readLine()) != null)
        c += inputLine;
    in.close();
} catch (MalformedURLException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
return c;

In the end, the c variable contains a set of characters that are not the same as what I get if I open the same URL with a browser. Why? What am I doing wrong? Thanks for the help. Regards!
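A likely explanation, worth verifying: the old Stack Overflow 1.0 API returned its responses gzip-compressed, so the raw bytes read from the stream will not match what a browser (which decompresses transparently) displays. A sketch of the fix is to wrap the stream in a GZIPInputStream:

import java.util.zip.GZIPInputStream;

BufferedReader in = new BufferedReader(new InputStreamReader(
        new GZIPInputStream(direccionURL.openStream()), "UTF-8"));

Note also that c += inputLine drops the newline characters, so even a decompressed response would differ slightly from the browser view.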

    Read the article

  • Are there real world applications where the use of prefix versus postfix operators matters?

    - by Kenneth
In college you are taught how expressions that use the ++ or -- operators on some variable can yield different results if you switch the operator from postfix to prefix or vice versa. Are there any real-world applications of the postfix or prefix operators where it makes a difference which one you use? It doesn't seem to me (maybe I just don't have enough programming experience yet) that there is much use in having the different operators if the distinction only matters in math-style expressions.

EDIT: Suggestions so far include:
function calls // f(++x) != f(x++)
loop conditions // while (++i < MAX) != while (i++ < MAX)
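A small illustration of both suggestions (a sketch in C#; the names are arbitrary):

int i = 0;
Console.WriteLine(i++); // prints 0; i is incremented after being read (now 1)

int j = 0;
Console.WriteLine(++j); // j is incremented before being read, so this prints 1

// In a loop condition the choice changes the iteration count:
// with MAX = 3 and i starting at 0, "while (i++ < MAX)" runs the body
// 3 times, but "while (++i < MAX)" runs it only 2 times.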

    Read the article

< Previous Page | 188 189 190 191 192 193 194 195 196 197 198 199  | Next Page >