Search Results

Search found 600 results on 24 pages for 'variations'.


  • How to access a matrix in a matlab struct's field from a mex function?

    - by B. Ruschill
    I'm trying to figure out how to access a matrix that is stored in a field in a matlab structure from a mex function. That's awfully long winded... Let me explain: I have a matlab struct that was defined like the following: matrixStruct = struct('matrix', {4, 4, 4; 5, 5, 5; 6, 6 ,6}) I have a mex function in which I would like to be able to receive a pointer to the first element in the matrix (matrix[0][0], in c terms), but I've been unable to figure out how to do that. I have tried the following: /* Pointer to the first element in the matrix (supposedly)... */ double *ptr = mxGetPr(mxGetField(prhs[0], 0, "matrix"); /* Incrementing the pointer to access all values in the matrix */ for(i = 0; i < 3; i++){ printf("%f\n", *(ptr + (i * 3))); printf("%f\n", *(ptr + 1 + (i * 3))); printf("%f\n", *(ptr + 2 + (i * 3))); } What this ends up printing is the following: 4.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 I have also tried variations of the following, thinking that perhaps it was something wonky with nested function calls, but to no avail: /* Pointer to the first location of the mxArray */ mxArray *fieldValuePtr = mxGetField(prhs[0], 0, "matrix"); /* Get the double pointer to the first location in the matrix */ double *ptr = mxGetPr(fieldValuePtr); /* Same for loop code here as written above */ Does anyone have an idea as to how I can achieve what I'm trying to, or what I am potentially doing wrong? Thanks! Edit: As per yuk's comment, I tried doing similar operations on a struct that has a field called array which is a one-dimensional array of doubles. The struct containing the array is defined as follows: arrayStruct = struct('array', {4.44, 5.55, 6.66}) I tried the following on the arrayStruct from within the mex function: mptr = mxGetPr(mxGetField(prhs[0], 0, "array")); printf("%f\n", *(mptr)); printf("%f\n", *(mptr + 1)); printf("%f\n", *(mptr + 2)); ...but the output followed what was printed earlier: 4.440000 0.000000 0.000000

    Read the article

  • Chrome extension: sendMessage doesn't work

    - by user3334776
    I've already read the documentation from Google on 'message passing' a few times and have probably looked at over 10 other questions with the same problem and already tried quite a few variations of most of their "solutions" and of what I have below... This is black magic, right? Either way, here it goes. Manifest File: { "manifest_version" : 2, "name" : "Message Test", "version" : "1.0", "browser_action": { "default_popup": "popup.html" }, "background": { "scripts": ["background.js"] }, "content_scripts": [ { "matches" : ["<all_urls>"], "js": ["message-test.js"] } ] } I'm aware extensions aren't supposed to use inline JS, but I'm leaving this in so the original question can be left as it was, since I still can't get the message to send from the background page. When I switch from the popup to the background, I removed the appropriate lines from the manifest.json. popup.html file: <html> <head> <script> chrome.tabs.query({active: true, currentWindow: true}, function(tabs) { chrome.tabs.sendMessage(tabs[0].id, {greeting: "hello", theMessage: "Why isn\'t this working?"}, function(response) { console.log(response.farewell); }); }); </script> </head> <body> </body> </html> OR background.js file: chrome.tabs.query({active: true, currentWindow: true}, function(tabs) { chrome.tabs.sendMessage(tabs[0].id, {greeting: "hello", theMessage: "Why isn\'t this working?"}, function(response) { console.log(response.farewell); }); }); message-test.js file: var Mymessage; chrome.runtime.onMessage.addListener(function(message, sender, sendResponse) { if (message.greeting == "hello"){ Mymessage = message.theMessage; alert(Mymessage); } else{ sendResponse({}); } }); No alert(Mymessage) goes off. I'm also trying to execute this after pressing a button from a popup and having a window at a specified url, but that's a later issue. The other files can be found here except with the background.js content wrapped in an addEventListener("click"....: http://pastebin.com/KhqxLx5y AND http://pastebin.com/JaGcp6tj

    Read the article

  • Normalizing Item Names & Synonyms

    - by RabidFire
    Consider an e-commerce application with multiple stores. Each store owner can edit the item catalog of his store. My current database schema is as follows: item_names: id | name | description | picture | common(BOOL) items: id | item_name_id | picture | price | description | picture item_synonyms: id | item_name_id | name | error(BOOL) Notes: error indicates a wrong spelling (eg. "Ericson"). description and picture of the item_names table are "globals" that can optionally be overridden by "local" description and picture fields of the items table (in case the store owner wants to supply a different picture for an item). common helps separate unique item names ("Jimmy Joe's Cheese Pizza" from "Cheese Pizza") I think the bright side of this schema is: Optimized searching & Handling Synonyms: I can query the item_names & item_synonyms tables using name LIKE %QUERY% and obtain the list of item_name_ids that need to be joined with the items table. (Examples of synonyms: "Sony Ericsson", "Sony Ericson", "X10", "X 10") Autocompletion: Again, a simple query to the item_names table. I can avoid the usage of DISTINCT and it minimizes number of variations ("Sony Ericsson Xperia™ X10", "Sony Ericsson - Xperia X10", "Xperia X10, Sony Ericsson") The down side would be: Overhead: When inserting an item, I query item_names to see if this name already exists. If not, I create a new entry. When deleting an item, I count the number of entries with the same name. If this is the only item with that name, I delete the entry from the item_names table (just to keep things clean; accounts for possible erroneous submissions). And updating is the combination of both. Weird Item Names: Store owners sometimes use sentences like "Harry Potter 1, 2 Books + CDs + Magic Hat". There's something off about having so much overhead to accommodate cases like this. This would perhaps be the prime reason I'm tempted to go for a schema like this: items: id | name | picture | price | description | picture (... with item_names and item_synonyms as utility tables that I could query) Is there a better schema you would suggested? Should item names be normalized for autocomplete? Is this probably what Facebook does for "School", "City" entries? Is the first schema or the second better/optimal for search? Thanks in advance! References: (1) Is normalizing a person's name going too far?, (2) Avoiding DISTINCT

    Read the article

  • How do I get javascript-generated image maps to work with internet explorer?

    - by schwerwolf
    I'm using javascript to generate a high resolution grid for an image that I generated on a web server. The high-resolution grid is composed of a 'map' element with hundreds of 'area' child elements. Each 'area' element has onmouseover attribute that causes the display of a popup box. After assigning the map to the img (via the usemap attribute), Internet explorer ignores the 'onmouseover' attribute of the area elements that I added via javascript. The behavior is not caused by syntactical variations between IE and other browsers. A static map behaves correctly. Only the elements that I add dynamically to an existing image map fail to fire their corresponding mouse-over events. How can I get IE to fire the mouse-over event for the added 'area' elements? function generate_image_map ( img ) { var tile_width = 8; var tile_height = 10; var plotarea_left = 40; var plotarea_top = 45; var column_count = 100; var row_count = 120; var img_id = YAHOO.util.Dom.getAttribute(img, "id"); var img_map_id = YAHOO.util.Dom.getAttribute(img, "usemap"); var original_map = YAHOO.util.Selector.query(img_map_id)[0]; var area_nodes = YAHOO.util.Selector.query("area", original_map); var last_node = area_nodes[area_nodes.length - 1]; for (var y = 0; y < row_count; y++) { var top = Math.round(plotarea_top + (y * tile_height)); var bottom = Math.round(plotarea_top + (y * tile_height) + tile_height); for (var x = 0; x < column_count; x++) { var left = Math.round(plotarea_left + (x * tile_width)); var right = Math.round(plotarea_left + (x * tile_width) + tile_width); var area = document.createElement("area"); YAHOO.util.Dom.setAttribute(area, "shape", "rect"); YAHOO.util.Dom.setAttribute(area, "onmouseover", "alert('This does not appear in IE')" ); var coords = [ left, top, right, bottom ]; YAHOO.util.Dom.setAttribute(area, "coords", coords.join(",")); YAHOO.util.Dom.insertBefore(area, last_node); } } }

    Read the article

  • importing modules in app engine

    - by tanky
    I've asked this before, but it seems I wasn't clear/detailed enough, and after a week of trying I'm still struggling, so I will try again. I am trying to use oauth2 and ply on App Engine. I have tried copying their directories into my App Engine project directory (in the form ply-3.4 or brosner-python-oauth2-82a05f9) and I have tried copying the specific subdirectory contained within the aforementioned one (ply or oauth2). I have tried saying import oauth2, from brosner-oauth2_python-82a05f9 import oauth, and other variations on the theme, but I still can't get it to work; nothing has worked. I have tried including them in app.yaml, but that seemed to create an even bigger error, as my entire project wouldn't even run when I tried that. And now I have run out of things to try. The error log I am getting is as follows. INFO 2012-10-20 22:33:29,358 dev_appserver.py:2884] "GET / HTTP/1.1" 500 - WARNING 2012-10-20 22:33:58,453 py_zipimport.py:139] Can't open zipfile C:\Python27\lib\site-packages\oauth2-1.0.2-py2.7.egg: IOError: [Errno 13] file not accessible: 'C:\Python27\lib\site-packages\oauth2-1.0.2-py2.7.egg' WARNING 2012-10-20 22:33:58,453 py_zipimport.py:139] Can't open zipfile C:\Python27\lib\site-packages\ply-3.4-py2.7.egg: IOError: [Errno 13] file not accessible: 'C:\Python27\lib\site-packages\ply-3.4-py2.7.egg' WARNING 2012-10-20 22:33:58,453 py_zipimport.py:139] Can't open zipfile C:\Python27\lib\site-packages\tweepy-1.11-py2.7.egg: IOError: [Errno 13] file not accessible: 'C:\Python27\lib\site-packages\tweepy-1.11-py2.7.egg' ERROR 2012-10-20 22:34:00,015 wsgi.py:189] Traceback (most recent call last): File "C:\Program Files\Google\google_appengine\google\appengine\runtime\wsgi.py", line 187, in Handle handler = _config_handle.add_wsgi_middleware(self._LoadHandler()) File "C:\Program Files\Google\google_appengine\google\appengine\runtime\wsgi.py", line 225, in _LoadHandler handler = __import__(path[0]) File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver_import_hook.py", line 676, in Decorate return func(self, *args, **kwargs) File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver_import_hook.py", line 1850, in load_module return self.FindAndLoadModule(submodule, fullname, search_path) File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver_import_hook.py", line 676, in Decorate return func(self, *args, **kwargs) File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver_import_hook.py", line 1722, in FindAndLoadModule description) File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver_import_hook.py", line 676, in Decorate return func(self, *args, **kwargs) File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver_import_hook.py", line 1665, in LoadModuleRestricted description) File "C:\Documents and Settings\ladds\My Documents\udacity\sigh\main.py", line 3, in import ply ImportError: No module named ply INFO 2012-10-20 22:34:00,030 dev_appserver.py:2884] "GET / HTTP/1.1" 500 - Thanks for any help.

    Read the article

  • How do I create a thread-safe write-once read-many value in Java?

    - by Software Monkey
    This is a problem I encounter frequently in working with more complex systems and which I have never figured out a good way to solve. It usually involves variations on the theme of a shared object whose construction and initialization are necessarily two distinct steps. This is generally because of architectural requirements, similar to applets, so answers that suggest I consolidate construction and initialization are not useful. By way of example, let's say I have a class that is structured to fit into an application framework like so: public class MyClass { private /*ideally-final*/ SomeObject someObject; MyClass() { someObject=null; } public void startup() { someObject=new SomeObject(...arguments from environment which are not available until startup is called...); } public void shutdown() { someObject=null; // this is not necessary, I am just expressing the intended scope of someObject explicitly } } I can't make someObject final since it can't be set until startup() is invoked. But I would really like it to reflect its write-once semantics and be able to directly access it from multiple threads, preferably avoiding synchronization. The idea being to express and enforce a degree of finalness, I conjecture that I could create a generic container, like so: public class WoRmObject<T> { private T object; WoRmObject() { object=null; } public WoRmObject set(T val) { object=val; return this; } public T get() { return object; } } and then in MyClass, above, do: private final WoRmObject<SomeObject> someObject; MyClass() { someObject=new WoRmObject<SomeObject>(); } public void startup() { someObject.set(SomeObject(...arguments from environment which are not available until startup is called...)); } Which raises some questions for me: Is there a better way, or existing Java object (would have to be available in Java 4)? Is this thread-safe provided that no other thread accesses someObject.get() until after its set() has been called? The other threads will only invoke methods on MyClass between startup() and shutdown() - the framework guarantees this. Given the completely unsynchronized WoRmObject container, is it ever possible under either JMM to see a value of object which is neither null nor a reference to a SomeObject? In other words, has the JMM always guaranteed that no thread can observe the memory of an object to be whatever values happened to be on the heap when the object was allocated?
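
    A minimal sketch of one way to get write-once visibility on a Java 1.4-era JVM (WriteOnceHolder below is illustrative, not code from the question): both accessors are synchronized, so any thread that calls get() after set() is guaranteed to see the fully published object; on Java 5 and later a volatile field gives the same visibility guarantee without locking, but that relies on the revised memory model.

        public final class WriteOnceHolder {
            private Object value;                       // written at most once

            public synchronized void set(Object newValue) {
                if (value != null) {
                    throw new IllegalStateException("value already set");
                }
                value = newValue;                       // published under the lock
            }

            public synchronized Object get() {
                if (value == null) {
                    throw new IllegalStateException("value not set yet");
                }
                return value;                           // callers cast to the expected type
            }
        }

    Under the pre-Java-5 memory model, the completely unsynchronized WoRmObject shown above has no such guarantee: a reader could observe a non-null reference to a SomeObject whose fields are not yet visibly initialized.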

    Read the article

  • Can XPath concatenate two nodeset values? (for use in XForms)

    - by iHeartGreek
    Hi! I am wanting to concatenate two nodeset values using XPath in XForms. I know that XPath has a concat(string, string) function, but how would I go about concatenating two nodeset values? BEGIN EDIT: I tried concat function.. I tried this.. and variations of it to make it work, but it doesn't <xf:value ref="concat(instance('param_choices')/choice/root, .)"/> END EDIT Below is a simplified code example of what I am trying to achieve. XForms model: <xf:instance id="param_choices" xmlns=""> <choices> <root label="Param Choices">/param</root> <choice label="Name">/@AAA</choice> <choice label="Value">/@BBB</choice> </choices> </xf:instance> XForms ui code that I currently have: <xf:select ref="instance('criteria_data')/criteria/criterion" appearance="full"> <xf:label>Param choices:</xf:label> <br/> <xf:itemset nodeset="instance('param_choices')/choice"> <xf:label ref="@label"></xf:label> <xf:value ref="."></xf:value> </xf:itemset> </xf:select> (if user selects "Name" checkbox..) the XML output is: <criterion>/@BBB</criterion> However! I want to combine the root nodeset value with the current choice nodeset value. Essentially: <xf:value ref="(instance('definition_choices')/choice/root) + ."/> to achieve the following XML output: <criterion>/param/@BBB</criterion> Any suggestions on how to do this? (I am fairly new to XPath and XForms) p.s. what I am asking makes sense to me when I typed it out, but if you have trouble figuring out what I'm asking, just let me know.. thanks!

    Read the article

  • SAX parser does not resolve filename

    - by phantom-99w
    Another day, another strange error with SAX, Java, and friends. I need to iterate over a list of File objects and pass them to a SAX parser. However, the parser fails because of an IOException, even though the various File object methods confirm that the file does indeed exist. The output which I get: 11:53:57.838 [MainThread] DEBUG DefaultReactionFinder - C:\project\trunk\application\config\reactions\TestReactions.xml 11:53:57.841 [MainThread] ERROR DefaultReactionFinder - C:\project\trunk\application\config\reactions\null (The system cannot find the file specified) So the problem is obviously that null in the second line. I've tried nearly all variations of passing the file as a parameter to the parser, including as a String (both from getAbsolutePath() and entered by hand), as a URI and, even more weirdly, as a FileInputStream (for this I get the same error, except that the entire relative path gets reported as null, so C:\project\trunk\null). All that I can think of is that the SAXParserFactory is incorrectly configured. I have no idea what is wrong, though. Here is the code concerned: SAXParserFactory factory = SAXParserFactory.newInstance(); factory.setValidating(true); try { parser = factory.newSAXParser(); } catch (ParserConfigurationException e) { throw new InstantiationException("Error configuring an XML parser. Given error message: \"" + e.getMessage() + "\"."); } catch (SAXException e) { throw new InstantiationException("Error creating a SAX parser. Given error message: \"" + e.getMessage() + "\"."); } ... for (File f : fileLister.getFileList()) { logger.debug(f.getAbsolutePath()); try { parser.parse(f, new ReactionHandler(input)); //FileInputStream fs = new FileInputStream(f); //parser.parse(fs, new ReactionHandler(input)); //fs.close(); } catch (IOException e) { logger.error(e.getMessage()); throw new ReactionNotFoundException("An error occurred processing file \"" + f + "\"."); } ... } I have made no special provisions to provide a custom SAX parser implementation: I use the system default. Any help would be greatly appreciated!
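
    If the stray "null" comes from the validating parser resolving an external DTD or entity reference against the document's directory (an assumption; the XML itself is not shown), one way to see exactly what it is trying to open is to log the identifiers passed to resolveEntity(). EntityLoggingHandler below is a hypothetical diagnostic sketch, not code from the question; ReactionHandler would be swapped back in once the offending reference is found.

        import java.io.File;
        import javax.xml.parsers.SAXParser;
        import javax.xml.parsers.SAXParserFactory;
        import org.xml.sax.InputSource;
        import org.xml.sax.helpers.DefaultHandler;

        public class EntityLoggingHandler extends DefaultHandler {
            @Override
            public InputSource resolveEntity(String publicId, String systemId) {
                // Print every external reference the parser asks for; a null or
                // literal "null" systemId here would explain a path ending in "\null".
                System.err.println("resolveEntity: publicId=" + publicId + ", systemId=" + systemId);
                return null; // fall back to the parser's default resolution
            }

            public static void main(String[] args) throws Exception {
                SAXParserFactory factory = SAXParserFactory.newInstance();
                factory.setValidating(true);
                SAXParser parser = factory.newSAXParser();
                // Parse one of the reaction files and watch which identifiers are requested.
                parser.parse(new File(args[0]), new EntityLoggingHandler());
            }
        }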

    Read the article

  • php regex guitar tab (tabs or tablature, a type of music notation)

    - by John
    I am in the process of creating a guitar tab to rtttl (Ring Tone Text Transfer Language) converter in PHP. In order to prepare a guitar tab for rtttl conversion I first strip out all comments (comments noted by #- and ended with -#); I then have a few lines that set tempo, note the tuning and define multiple instruments (Tempo 120\nDefine Guitar 1\nDefine Bass 1, etc etc) which are stripped out of the tab and set aside for later use. Now I essentially have nothing left except the guitar tabs. Each tab is prefixed with its instrument name in conjunction with the instrument name noted prior. Sometimes we have tabs for 2 separate instruments that are linked because they are to be played together, i.e. a Guitar and a Bass Guitar playing together. Example 1, Standard Guitar Tab: |Guitar 1 e|--------------3-------------------3------------| B|------------3---3---------------3---3----------| G|----------0-------0-----------0-------0--------| D|--------0-----------0-------0-----------0------| A|------2---------------2---2---------------2----| E|----3-------------------3-------------------3--| Example 2, Conjunction Tab: |Guitar 1 e|--------------3-------------------3------------| B|------------3---3---------------3---3----------| G|----------0-------0-----------0-------0--------| D|--------0-----------0-------0-----------0------| A|------2---------------2---2---------------2----| E|----3-------------------3-------------------3--| | | |Bass 1 G|----------0-------0-----------0-------0--------| D|--------2-----------2-------2-----------2------| A|------3---------------3---3---------------3----| E|----3-------------------3-------------------3--| I have considered other methods of identifying the tabs with no solid results. I am hoping that someone who does regular expressions could help me find a way to identify a single guitar tab and, if possible, also be able to match a tab with multiple instruments linked together. Once the tabs are in an array I will go through them one line at a time and convert them into rtttl lines (exploded at each new line "\n"). I do not want to separate the guitar tabs in the document via explode "\n\n" or something similar because it does not identify the guitar tab; rather, it identifies the space between the tabs - not the tabs themselves. I have been messing with this for about a week now and this is the only major hold-up I have. Everything else is fairly simple. So far, I have tried many variations of the regex pattern.
Here is one of the most recent test samples: <?php $t = " |Guitar 1 e|--------------3-------------------3------------| B|------------3---3---------------3---3----------| G|----------0-------0-----------0-------0--------| D|--------0-----------0-------0-----------0------| A|------2---------------2---2---------------2----| E|----3-------------------3-------------------3--| |Guitar 1 e|--------------3-------------------3------------| B|------------3---3---------------3---3----------| G|----------0-------0-----------0-------0--------| D|--------0-----------0-------0-----------0------| A|------2---------------2---2---------------2----| E|----3-------------------3-------------------3--| | | |Bass 1 G|----------0-------0-----------0-------0--------| D|--------2-----------2-------2-----------2------| A|------3---------------3---3---------------3----| E|----3-------------------3-------------------3--| "; preg_match_all("/^.*?(\\|).*?(\\|)/is",$t,$p); print_r($p); ?> It is also worth noting that inside the tabs, where the dashes and #'s are, you may also have any variation of letters, numbers and punctuation. The beginning of each line marks the tuning of each string with one of the following case insensitive: a,a#,b,c,c#,d,d#,e,f,f#,g or g. Thanks in advance for help with this most difficult problem.

    Read the article

  • Multiple column subselect in mysql 5 (5.1.42)

    - by rubber boots
    This one seems to be a simple problem, but I can't make it work in a single select or nested select. Retrieve the authors and (if any) advisers of a paper (article) into one row. In order to explain the problem, here are the two data tables (pseudo) papers (id, title, c_year) persons (id, firstname, lastname) plus a link table w/one extra attribute (pseudo): paper_person_roles( paper_id person_id act_role ENUM ('AUTHOR', 'ADVISER') ) This is basically a list of written papers (table: papers) and a list of staff and/or students (table: persons) An article may have (1,N) authors. An article may have (0,N) advisers. A person can be in the 'AUTHOR' or 'ADVISER' role (but not at the same time). The application eventually puts out table rows containing the following entries: TH: || Paper_ID | Author(s) | Title | Adviser(s) | TD: || 21334 |John Doe, Jeff Tucker|Why the moon looks yellow|Brown, Rayleigh| ... My first approach was like: select/extract a full list of articles into the application, e.g. SELECT q.id, q.title FROM papers AS q ORDER BY q.c_year and save the results of the query into an array (in the application). After this step, loop over the array of the returned information and retrieve authors and advisers (if any), via prepared statement (? is the paper's id) from the link table like: APPLICATION_LOOP(paper_ids in array) SELECT p.lastname, p.firstname, r.act_role FROM persons AS p, paper_person_roles AS r WHERE p.id=r.person_id AND r.paper_id = ? # The application does further processing from here (pseudo): foreach record from resulting records if record.act_role eq 'AUTHOR' then join to author_column if record.act_role eq 'ADVISER' then join to adviser_column end print id, author_column, title, adviser_column APPLICATION_LOOP This works so far and gives the desired output. Would it make sense to put the computation back into the DB? I'm not very proficient in nontrivial SQL and can't find a solution with a single (combined or nested) select call. I tried something like SELECT q.title (CONCAT_WS(' ', (SELECT p.firstname, p.lastname AS aunames FROM persons AS p, paper_person_roles AS r WHERE q.id=r.paper_id AND r.act_role='AUTHOR') ) ) AS aulist FROM papers AS q, persons AS p, paper_person_roles AS r in several variations, but no luck ... Maybe there is some chance? Thanks in advance r.b.

    Read the article

  • Random generates same number in java

    - by user1613360
    This is my Java code. import java.io.*; import java.util.*; import java.util.concurrent.TimeUnit; class search { private int numelem; private int[] input=new int[100]; public void setNumofelem() { System.out.println("Enter the total numebr of elements"); Scanner yz=new Scanner(System.in); numelem=yz.nextInt(); } public void randomnumber() throws Exception { int max=500,min=1,n=numelem; Random rand = new Random(); for (int j=0;j < n;j++) { input[j]=rand.nextInt(max)+1; } } public void printinput() { int b=numelem,t=0; while(true) if(b!=0) { System.out.print(" "+input[t]); b--; t++; } else break; } } public class mycode { public static void main(String args[]) throws Exception { search a=new search(); a.setNumofelem(); a.randomnumber(); a.printinput(); } } Now the function randomnumber() just returns the same number. The function executes perfectly if I execute it as a separate Java program but fails miserably if I call it using an object. I have also tried the following variations, but nothing works; everything returns the same number. Variation 1: public void randomnumber() throws Exception { int max=500,min=1,n=numelem; Random rand = new Random(); for (int j=0;j < n;j++) { TimeUnit.SECONDS.sleep(1); input[j]=rand.nextInt(max)+1; } } Variation 2: public void randomnumber() throws Exception { int max=500,min=1,n=numelem; Random rand = new Random(); for (int j=0;j < n;j++) { rand.setSeed(System.nanoTime()); input[j]=rand.nextInt(max)+1; } } Variation 3: public void randomnumber() throws Exception { int max=500,min=1,n=numelem; Random rand = new Random(); for (int j=0;j < n;j++) { TimeUnit.SECONDS.sleep(1); rand.setSeed(System.nanoTime()); input[j]=rand.nextInt(max)+1; } } Sample input/Output: Enter the number of elements: 5 23 23 23 23 23 23
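
    For comparison, a minimal sketch of the usual pattern (class and method names below are illustrative, not taken from the question): create one Random instance, reuse it for every draw, and do not re-seed it inside the loop, since re-seeding from a coarse clock in a tight loop is a classic way to get identical values.

        import java.util.Arrays;
        import java.util.Random;

        public class RandomFillDemo {
            private static final Random RAND = new Random();    // one shared generator

            static int[] randomNumbers(int count, int max) {
                int[] values = new int[count];
                for (int i = 0; i < count; i++) {
                    values[i] = RAND.nextInt(max) + 1;          // uniform in 1..max
                }
                return values;
            }

            public static void main(String[] args) {
                System.out.println(Arrays.toString(randomNumbers(5, 500)));
            }
        }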

    Read the article

  • actionscript 2.0 input text box

    - by user1769760
    So, here's what I'm trying to do, and I, frankly, believe it should be obvious, but I can't figure it out. I am creating a very simple Artificial Intelligence simulation. And in this simulation there's an input box at the bottom of the screen (called "input" exactly). "input" has a variable in its properties that is called "inbox" (exactly). Using a key listener the script calls up a function when the enter button is pressed. This function has 2 if statements and an else statement which dictate the responses of the AI (named "nistra"). The problem is this, When I type in what I want to say, and hit enter, it always uses the second response ("lockpick" in the code below). I have tried variations on the code but I still don't see the solution. I believe the problem is that the "typein" variable holds all the format information from the text box as well as the variable, but I could be wrong, that information is in here as well, underneath the code itself. Any help I can get would be greatly appreciated. var typein = ""; //copies the text from inbox into here, this is what nistra responds to var inbox = ""; //this is where the text from the input text box goes var respond = ""; //nistra's responses go here my_listener = new Object(); // key listener my_listener.onKeyDown = function() { if(Key.isDown(13)) //enter button pressed { typein = inbox; // moves inbox into typein nistraresponse(); // calles nistra's responses } //code = Key.getCode(); //trace ("Key pressed = " + code); } Key.addListener(my_listener); // key listener ends here nistraresponse = function() // nistra's responses { trace(typein); // trace out what "typein" holds if(typein = "Hello") // if you type in "Hello" { respond = "Hello, How are you?"; } if(typein = "lockpick") // if you type in "lockpick" { respond = "Affirmative"; } else // anything else { respond = "I do not understand the command, please rephrase"; } cntxtID = setInterval(clearnistra, 5000); // calls the function that clears out nistra's response box so that her responses don't just sit there } clearnistra = function() // clears her respond box { respond = ""; clearInterval(cntxtID); } // "typein" traces out the following <TEXTFORMAT LEADING="2"><P ALIGN="CENTER"><FONT FACE="Times New Roman" SIZE="20" COLOR="#FF0000" LETTERSPACING="0" KERNING="0">test</FONT></P></TEXTFORMAT>

    Read the article

  • Div not filling width of floated container (css expert needed)

    - by Rayden
    I know there are many variations of this question posted, but none I've found quite provide an answer that works for this case. I basically have two left floated divs. Inside those two divs are div headers and tabled content. I want the Div headers (Hour/Minute) to stretch to the width of the tabled content, but they only do this in FF and Chrome, not IE7. IE7 is my works official browser so the one I need it to work with the most. Here is the CSS: #ui-timepicker-div { padding:0.2em; } #ui-timepicker-hours { float:left; } #ui-timepicker-minutes { margin:0 0 0 0.2em; float:left; } .ui-timepicker .ui-timepicker-header { padding:0.2em 0; } .ui-timepicker .ui-timepicker-title { line-height:1.8em; text-align:center; } .ui-timepicker table { margin:0.15em 0 0 0; font-size:.9em; border-collapse:collapse; } .ui-timepicker td { padding:1px; width:2.2em; } .ui-timepicker th, .ui-timepicker td { border:0; } .ui-timepicker td a { display:block; padding:0.2em 0.3em 0.2em 0.5em; text-align:right; text-decoration:none; } Here is the HTML (did not include tabled content): <div style="position: absolute; top: 252.667px; left: 648px; z-index: 1; display: none;" class="ui-timepicker ui-widget ui-widget-content ui-helper-clearfix ui-corner-all" id="ui-timepicker-div"> <div id="ui-timepicker-hours"> <div class="ui-timepicker-header ui-widget-header ui-helper-clearfix ui-corner-all"> <div class="ui-timepicker-title">Hour</div> </div> <table class="ui-timepicker"> </table> </div> <div id="ui-timepicker-minutes"> <div class="ui-timepicker-header ui-widget-header ui-helper-clearfix ui-corner-all"> <div class="ui-timepicker-title">Minutes</div> </div> <table class="ui-timepicker"> </table> </div> </div>

    Read the article

  • Refactor the following two C++ methods to move out duplicate code

    - by ossandcad
    I have the following two methods that (as you can see) are similar in most of its statements except for one (see below for details) unsigned int CSWX::getLineParameters(const SURFACE & surface, vector<double> & params) { VARIANT varParams; surface->getPlaneParams(varParams); // this is the line of code that is different SafeDoubleArray sdParams(varParams); for( int i = 0 ; i < sdParams.getSize() ; ++i ) { params.push_back(sdParams[i]); } if( params.size() > 0 ) return 0; return 1; } unsigned int CSWX::getPlaneParameters(const CURVE & curve, vector<double> & params) { VARIANT varParams; curve->get_LineParams(varParams); // this is the line of code that is different SafeDoubleArray sdParams(varParams); for( int i = 0 ; i < sdParams.getSize() ; ++i ) { params.push_back(sdParams[i]); } if( params.size() > 0 ) return 0; return 1; } Is there any technique that I can use to move the common lines of code of the two methods out to a separate method, that could be called from the two variations - OR - possibly combine the two methods to a single method? The following are the restrictions: The classes SURFACE and CURVE are from 3rd party libraries and hence unmodifiable. (If it helps they are both derived from IDispatch) There are even more similar classes (e.g. FACE) that could fit into this "template" (not C++ template, just the flow of lines of code) I know the following could (possibly?) be implemented as solutions but am really hoping there is a better solution: I could add a 3rd parameter to the 2 methods - e.g. an enum - that identifies the 1st parameter (e.g. enum::input_type_surface, enum::input_type_curve) I could pass in an IDispatch and try dynamic_cast< and test which cast is NON_NULL and do an if-else to call the right method (e.g. getPlaneParams() vs. get_LineParams()) The following is not a restriction but would be a requirement because of my teammates resistance: Not implement a new class that inherits from SURFACE/CURVE etc. (They would much prefer to solve it using the enum solution I stated above)

    Read the article

  • ufw portforwarding to virtualbox guest

    - by user85116
    My goal is to be able to connect using remote desktop on my desktop machine, to windows xp running in virtualbox on my linux server. My setup: server = debian squeeze, 64 bit, with a public IP address (host) virtualbox-ose 3.2.10 (from debian repo) windows xp running inside VBox as a guest; bridged networking mode in VBox, ip = 192.168.1.100 ufw as the firewall on debian, 3 ports are opened: 22 / ssh, 80 / apache, and 3389 for remote desktop My problem: If I try to use remote desktop on my home computer, I am unable to connect to the windows guest. If I first "ssh -X -C" into the debian server, then run "rdesktop 192.168.1.100", I am able to connect without issue. The windows firewall was configured to allow remote desktop connections, and I've even turned it off (as it is redundant here) to see if that was the problem but it made no difference. Since I am able to connect from inside the local subnet, I suspect that I have not setup my debian firewall correctly to handle connections from outside the LAN. Here is what I've done... First my ufw status: ufw status Status: active To Action From -- ------ ---- 22 ALLOW Anywhere 80 ALLOW Anywhere 3389 ALLOW Anywhere I edited /etc/ufw/sysctl.conf and added: net/ipv4/ip_forward=1 Edited /etc/default/ufw and added: DEFAULT_FORWARD_POLICY="ACCEPT" Edited /etc/ufw/before.rules and added: # setup port forwarding to forward rdp to windows VM *nat :PREROUTING - [0:0] -A PREROUTING -i eth0 -p tcp --dport 3389 -j DNAT --to-destination 192.168.1.100 -A PREROUTING -i eth0 -p udp --dport 3389 -j DNAT --to-destination 192.168.1.100 COMMIT # Don't delete these required lines, otherwise there will be errors *filter <snip> Restarted the firewall etc., but no connection. My log files on the debian host show this (my public ip address was removed for this posting but it is correct in the actual log): Feb 6 11:11:21 localhost kernel: [171991.856941] [UFW AUDIT] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27518 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:21 localhost kernel: [171991.856963] [UFW ALLOW] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27518 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:24 localhost kernel: [171994.856701] [UFW AUDIT] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27519 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:24 localhost kernel: [171994.856723] [UFW ALLOW] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27519 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:30 localhost kernel: [172000.856656] [UFW AUDIT] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27520 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:30 localhost kernel: [172000.856678] [UFW ALLOW] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27520 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Although this is the current setup / configuration, I've also tried several variations of this; I thought maybe the ISP would be blocking 3389 for some reason and tried using different ports, but again there was no connection. Any ideas...? Did I forget to modify some file somewhere?

    Read the article

  • Expert iptables help needed?

    - by Asad Moeen
    After a detailed analysis, I collected these details. I am under a UDP flood which is application-dependent. I run a Game-Server, and an attacker is flooding me with "getstatus" queries, which make the GameServer respond with replies that cause output to the attacker's IP as high as 30mb/s, plus server lag. Here are the packet details: the packet starts with 4 bytes of 0xff and then getstatus. Theoretically, the packet is like "\xff\xff\xff\xffgetstatus " I have tried a lot of iptables variations, like state matching and rate-limiting alongside each other, but those didn't work. Rate limiting works well, but only when the server is not started. As soon as the server starts, no iptables rule seems to block it. Anyone else got more solutions? Someone asked me to contact the provider and get it done at the network/router level, but that looks very odd and I believe they might not do it, since that would also affect other clients. Responding to all those answers, I'd say: Firstly, it's a VPS, so they can't do it for me. Secondly, I don't care if something is coming in, but since the traffic is application generated, there has to be an OS-level solution to block the outgoing packets. At least the outgoing ones must be stopped. Thirdly, it's not DDoS, since just 400kb/s input generates 30mb/s output from my GameServer. That never happens in a DDoS. A provider/hardware-level solution should be used in that case, but this one is different. And yes, banning his IP stops the flood of outgoing packets, but he has many more IP addresses, as he spoofs his original, so I just need something to block him automatically. I have even tried a lot of firewalls, but as you know they are just front-ends to iptables, so if something doesn't work in iptables, what would the firewalls do? These were the rules I tried, iptables -A INPUT -p udp -m state --state NEW -m recent --set --name DDOS --rsource iptables -A INPUT -p udp -m state --state NEW -m recent --update --seconds 1 --hitcount 5 --name DDOS --rsource -j DROP This works for attacks on unused ports, but when the server is listening and responding to the incoming queries from the attacker, it never works. Okay Tom.H, your rules were working when I modified them somehow like this: iptables -A INPUT -p udp -m length --length 1:1024 -m recent --set --name XXXX --rsource iptables -A INPUT -p udp -m string --string "xxxxxxxxxx" --algo bm --to 65535 -m recent --update --seconds 1 --hitcount 15 --name XXXX --rsource -j DROP They worked very well for about 3 days: the string "xxxxxxxxxx" would be rate-limited and blocked if someone flooded, and the clients were not affected. But just today I tried updating the chain to remove a previously blocked IP; for that I had to flush the chain and restore this rule (iptables -X and iptables -F) while some clients, including me, were already connected to servers. Restoring the rules now blocks some of the clients' strings completely while others are not affected. So does this mean I need to restart the server, or why else would this happen, given that the last time the rules were working no one was connected?

    Read the article

  • Capistrano + Nginx + Passenger = 403

    - by slimchrisp
    I asked this over at stackoverflow as well, but still haven't received any answers that have helped me to solve this problem. I have spent almost a week at this point trying to solve the issue, and I'm just not making any headway. It seems that this issue is pretty common, but none of the solutions I found online work for me. A buddy of mine is actually creating the same setup, and he is having the same issue. After a few days stuck with the 403 error I started over using this tutorial: http://blog.ninjahideout.com/posts/a-guide-to-a-nginx-passenger-and-rvm-server I had hoped starting from scratch using this tutorial would work, but no dice. Either way, if you view the tutorial you can see what steps I have taken. Here is essentially what I have going on. I have a VPS account on linode.com Server OS is Ubuntu 10.04 Local OS (shouldn't matter, but just so you know) used to deploy with Capistrano is Snow Leopard 10.6.6 I use RVM on the server. Version is 1.2.2 I was previously on ruby-1.9.2-p0 [ i386 ], but per the tutorial listed above I switched to ree-1.8.7-2010.02 [ i386 ]. Running 'which ruby' from the command line verifies that I am using 1.8.7 with the following output: /usr/local/rvm/rubies/ree-1.8.7-2010.02/bin/ruby passenger -v prints the following: Phusion Passenger version 3.0.2 Running 'nginx -v' gives me a message that the command nginx could not be found. The server is definitely there and running as I can use nginx to serve static files, but this could have something to do with my problem. I have two users dealing with the install. root which I used to install everything, and deployer which is a user I created specifically to for deploying my applications My web app directory is in the deployer user's home directory as follows: /home/deployer/webapps/mysite.com/public Per Capistrano default deploy, a symbolic link called current is created in the public folder, and points to /home/deployer/webapps/mysite.com/public/releases/most_current_release I have chmodded the deployer directory recursively to 777 /opt/nginx permissions: rwxr-xr-x /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2 permissions: rwxrwsrwx My nginx config file has gone through just short of eternity variations, but currently looks like this: ================================================================================== worker_processes 1; events { worker_connections 1024; } http { passenger_root /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2; passenger_ruby /usr/local/rvm/bin/passenger_ruby; include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { # listen *:80; server_name mysite.com www.mysite.com; root /home/deployer/webapps/mysite.com/public/current; passenger_enabled on; passenger_friendly_error_pages on; access_log logs/mysite.com/server.log; error_log logs/mysite.com/error.log info; error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } } ================================================================================== I bounce nginx, hit the site, and boom. 403, and logs say directory index of /home/deployer... is forbidden As others with a similar problem have said, you can drop an index.html into the public/releases/current_release and it will render. But rails no worky. That's basically it. At this point I have just about completely exhausted every possible solution attempt I can think of. 
I am a programmer and definitely not a sysadmin, so I am 99% sure this has something to do with permissions that I have hosed, but for the life of me I just can't figure out where. If anyone can help I would really really appreciate it. If there's any specific permission things you want me to check (ie groups/permissions), can you please include the commands to do so as well. Hopefully this will help others in the future who read this post. Let me know if there is any other information I can provide, and thanks in advance!!!

    Read the article

  • Parallelism in .NET – Part 14, The Different Forms of Task

    - by Reed
    Before discussing Task creation and actual usage in concurrent environments, I will briefly expand upon my introduction of the Task class and provide a short explanation of the distinct forms of Task.  The Task Parallel Library includes four distinct, though related, variations on the Task class. In my introduction to the Task class, I focused on the most basic version of Task.  This version of Task, the standard Task class, is most often used with an Action delegate.  This allows you to implement each task within the task decomposition as a single delegate. Typically, when using the new threading constructs in .NET 4 and the Task Parallel Library, we use lambda expressions to define anonymous methods.  The advantage of using a lambda expression is that it allows the Action delegate to directly use variables in the calling scope.  This eliminates the need to make separate Task classes for Action<T>, Action<T1,T2>, and all of the other Action<…> delegate types.  As an example, suppose we wanted to make a Task to handle the "Show Splash" task from our earlier decomposition.  Even if this task required parameters, such as a message to display, we could still use an Action delegate specified via a lambda: // Store this as a local variable string messageForSplashScreen = GetSplashScreenMessage(); // Create our task Task showSplashTask = new Task( () => { // We can use variables in our outer scope, // as well as methods scoped to our class! this.DisplaySplashScreen(messageForSplashScreen); }); This provides a huge amount of flexibility.  We can use this single form of task for any task which performs an operation, provided the only information we need to track is whether the task has completed successfully or not.  This leads to my first observation: Use a Task with a System.Action delegate for any task for which no result is generated. This observation leads to an obvious corollary: we also need a way to define a task which generates a result.  The Task Parallel Library provides this via the Task<TResult> class. Task<TResult> subclasses the standard Task class, providing one additional feature – the ability to return a value back to the user of the task.  This is done by switching from providing an Action delegate to providing a Func<TResult> delegate.  If we decompose our problem, and we realize we have one task where its result is required by a future operation, this can be handled via Task<TResult>.  For example, suppose we want to make a task for our "Check for Update" task; we could do: Task<bool> checkForUpdateTask = new Task<bool>( () => { return this.CheckWebsiteForUpdate(); }); Later, we would start this task, and perform some other work.
At any point in the future, we could get the value from the Task<TResult>.Result property, which will cause our thread to block until the task has finished processing: // This uses Task<bool> checkForUpdateTask generated above... // Start the task, typically on a background thread checkForUpdateTask.Start(); // Do some other work on our current thread this.DoSomeWork(); // Discover, from our background task, whether an update is available // This will block until our task completes bool updateAvailable = checkForUpdateTask.Result; This leads me to my second observation: Use a Task<TResult> with a System.Func<TResult> delegate for any task which generates a result. Task and Task<TResult> provide a much cleaner alternative to the previous Asynchronous Programming design patterns in the .NET framework.  Instead of trying to implement IAsyncResult, and providing BeginXXX() and EndXXX() methods, implementing an asynchronous programming API can be as simple as creating a method that returns a Task or Task<TResult>.  The client side of the pattern also is dramatically simplified – the client can call a method, then either choose to call task.Wait() or use task.Result when it needs to wait for the operation’s completion. While this provides a much cleaner model for future APIs, there is quite a bit of infrastructure built around the current Asynchronous Programming design patterns.  In order to provide a model to work with existing APIs, two other forms of Task exist.  There is a constructor for Task which takes an Action<Object> and a state parameter.  In addition, there is a constructor for creating a Task<TResult> which takes a Func<Object, TResult> as well as a state parameter.  When using these constructors, the state parameter is stored in the Task.AsyncState property. While these two overloads exist, and are usable directly, I strongly recommend avoiding this for new development.  The two forms of Task which take an object state parameter exist primarily for interoperability with traditional .NET Asynchronous Programming methodologies.  Using lambda expressions to capture variables from the scope of the creator is a much cleaner approach than using the untyped state parameters, since lambda expressions provide full type safety without introducing new variables.

    Read the article

  • Cloud Based Load Testing Using TF Service & VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is ‘Cloud Based Load Testing’. In this blog post I’ll walk you through, What is Cloud Based Load Testing? How have I been using this feature? – Success story! Where can you find more resources on this feature? What is Cloud Based Load Testing? It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress but also gives you the ability to test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry I have realized that the biggest barriers in Load Testing & Performance Testing adoption are, High infrastructure and administration cost that comes with this phase of testing Time taken to procure & set up the test infrastructure Finding use for this infrastructure investment after completion of testing Is cloud the answer? 100% Visual Studio Compatible Scalable and Realistic Start testing in < 2 minutes Intuitive Pay only for what you need Use existing on premise tests on cloud There are a lot of vendors out there offering Cloud Based Load Testing, to name a few, Load Storm Soasta Blaze Meter Blitz And others… The question you may want to ask is, why should you go with Microsoft’s Cloud based Load Test offering. If you are a Microsoft shop or already have investments in Microsoft technologies, you’ll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing Web tests authored in Visual Studio 2010 or Visual Studio 2012 will run on the cloud without requiring any modifications what so ever. Microsoft’s cloud test rig also supports API based testing, for example, if you are building a WPF application which consumes WCF services, you can write unit tests to invoke the WCF service, these tests can be run on the cloud test rig and loaded with ‘N’ concurrent users for performance testing. If you have your assets already hosted in the Azure and possibly in the same data centre as the Cloud test rig, your Azure app will not incur a usage cost because of the generated traffic since the traffic is coming from the same data centre. The licensing or pricing information on Microsoft’s cloud based Load test service is yet to be announced, but I would expect this to be priced attractively to match the market competition.   The only additional configuration required for running load tests on Microsoft Cloud based Load Tests service is to select the Test run location as Run tests using Visual Studio Team Foundation Service, How have I been using Microsoft’s Cloud based Load Test Service? I have been part of the Microsoft Cloud Based Load Test Service advisory council for the last 7 months. This gave the opportunity to see the product shape up from concept to working solution. I was also the first person outside of Microsoft to try this offering out. This gave me the opportunity to test real world application at various clients using the Microsoft Load Test Service and provide real world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test Service has been an insurance quote generation engine. 
    This insurance quote generation engine is hosted in Windows Azure, expected to get quote requests from across the globe, and expected to handle 5 million quote requests in a day (it is not clear how this load will be distributed across the day). There was no way I could simulate that kind of load from on premises without standing up additional hardware. But Microsoft's Cloud based Load Test service allowed me to test my key performance testing scenarios, i.e. Simulate expected Load, Endurance Testing, Threshold Testing and Testing for Latency. Simulating expected load: approach to devising a load pattern My approach to devising a load test pattern has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. So, for example, invoking 1 quote from the quote engine software takes 0.5 seconds. Now if you do the math: 1 quote request by 1 user = 0.5 seconds; quotes generated by 1 user in 24 hours (at 2 quotes per second) = 1 * (((2 * 60) * 60) * 24) = 172,800; quotes generated by 30 users in 24 hours = 172,800 * 30 = 5,184,000. This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, etc., then you can devise your own load pattern; some examples of load test patterns can be found here. Endurance Testing To test for endurance, I loaded the quote generation engine with an expected fixed user load and ran the test for a very long duration, such as over 48 hours, and observed the effect of the long-running test on the Azure infrastructure. Currently the Microsoft Load Test service does not support metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test. Threshold Testing To figure out how much user load the application could cope with before falling on its belly, I opted to step load the quote generation engine by incrementing user load with different variations of incremental user load per minute till the application crashed out and forced an IIS reset. Testing for Latency Currently the Microsoft Load Test service does not support generating geographically distributed load; I did, however, deploy the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (this feature is provided out of the box right from Visual Studio 2010), I could see the difference in response times. More resources on Microsoft Cloud based Load Test Service A few important links to get you started: Download Visual Studio Ultimate 2013 Preview Getting started guide for load testing using Team Foundation Service Troubleshooting guide for FAQs and known issues Team Foundation Service forum for questions and support Detailed demo and presentation (link to Tech-Ed session recording) Detailed demo and presentation (link to Build session recording) There are a few limits on the usage of the Microsoft Cloud based Load Test service that you can read about here. If you have any feedback on the Microsoft Cloud based Load Test service, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

    Read the article

  • CodePlex Daily Summary for Thursday, March 11, 2010

    CodePlex Daily Summary for Thursday, March 11, 2010

New Projects
ASP.NET Wiki Control: This ASP.NET user control allows you to embed a very useful wiki directly into your already existing ASP.NET website taking advantage of the popula...
BabyLog: Log baby daily activity.
buddyHome: buddyHome is a project that can make your home smarter. as good as your buddy.
Cloud Community: Cloud Community makes it easier for organizations to have a simple to use community platform. Our mission is to create an easy to use community pl...
Community Connectors for Microsoft CRM 4.0: Community Connectors for Microsoft CRM 4.0 allows Microsoft CRM 4.0 customers and partners to monitor and analyze customers’ interaction from their...
Console Highlighter: Highlights Microsoft Windows Command prompt (cmd.exe) by outputting ANSI VT100 Control sequences to color the output. These sequences are not hand...
Cornell Store: This is IN NO WAY officially affiliated or related to the Cornell University store. Instead, this is a project that I am doing for a class. Ther...
DevUtilities: This project is for creating some utility tools, and they will be useful during the development.
DotNetNuke® Skin Maple: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by DyNNamite.co.uk. The package includes 4 color variations and sev...
HRNet: HRNet
IIS Web Site Monitoring: A software for monitoring a particular web site on IIS, even if its IP is shared between different web sites.
Iowa Code Camp: The source code for the Iowa Code Camp website.
Leonidas: Leonidas is a virtual tutor
Lunch 'n Learn: The Lunch 'n Learn web application is an open source ASP.NET MVC application that allows you to setup lunch 'n learn presentations for your team, c...
MNT Cryptography: A very simple cryptography class
MooiNooi MVC2LINQ2SQL Web Databinder: mvc2linq2sql is a databinder for ASP.NET MVC that enables developers to cleanly bind objects from HTML FORMS to Linq entities. Even 1 to N relations ...
MoqBot: MoqBot is an auto mocking library for Moq and Ninject.
mtExperience1: hoi
MvcPager: MvcPager is a free paging component for ASP.NET MVC web applications, it exposes a series of extension methods for using in ASP.NET MVC applications...
OCal: OCal is based on object calisthenics to identify code smells
Pex Custom Arithmetic Solver: Pex Custom Arithmetic Solver contains a collection of meta-heuristic search algorithms. The goal is to improve Pex's code coverage for code involvi...
SetControls: Extended controls for ASP.NET applications. Full information closer to the release...
shadowrage1597: CTC 195 Game Design class
SharePoint Team-Mailer: A SharePoint 2007 solution that defines a generic CustomList for sending e-mails to SharePoint Groups.
Sql Share: SQL Share is a collaboration tool used within the science to allow database engineers to work tightly with domain scientists.
TechCalendar: Tech Events Calendar ASP.NET project.
ZLYScript: A very simple script language compiler.

New Releases
ALGLIB: ALGLIB 2.4.0: New ALGLIB release contains: improved versions of several linear algebra algorithms: QR decomposition, matrix inversion, condition number estimatio...
AmiBroker Plug-Ins with C#: AmiBroker Plug-Ins v0.0.2: Source codes and a binary
AppFabric Caching UI Admin Tool: AppFabric Caching Beta 2 UI Admin Tool: System Requirements: .NET 4.0 RC, AppFabric Caching Beta2. Tested on: Win 7 (64x)
Autodocs - WCF REST Automatic API Documentation Generator: Autodocs.ServiceModel.Web: This archive contains the reference DLL, instructions and license.
Compact Plugs & Compact Injection: Compact Injection and Compact Plugs 1.1 Beta: First release of Compact Plugs (CP). The solution includes a simple example project of CP, called "TestCompactPlugs1". Also some fixes were made ...
Console Highlighter: Console Highlighter 0.9 (preview release): Preliminary release.
Encrypted Notes: Encrypted Notes 1.3: This is the latest version of Encrypted Notes (1.3). It has an installer - it will create a directory 'CPascoe' in My Documents. The last one was ...
Family Tree Analyzer: Version 1.0.2: Family Tree Analyzer Version 1.0.2. This early beta version implements loading a gedcom file and displaying some basic reports. These reports inclu...
FRC1103 - FRC Dashboard viewer: 2010 Documentation v0.1: This is my current version of the control system documentation for 2010. It isn't complete, but it has the information required for a custom dashbo...
jQuery.cssLess: jQuery.cssLess 0.5 (Even less release): NEW - support for nested special CSS classes (like :hover). MAIN RELEASE. This release, code "Even less", is the one that will interpret cssLess wit...
MooiNooi MVC2LINQ2SQL Web Databinder: MooiNooi MVC2LINQ2SQL DataBinder: I didn't try this... I just took it off from my project. Please, tell me any problem implementing it in your own development and I'll be pleased to h...
MvcPager: MvcPager 1.2 for ASP.NET MVC 1.0: MvcPager 1.2 for ASP.NET MVC 1.0
Mytrip.Mvc: Mytrip 1.0 preview 1: Article Manager, Blog Manager, L2S Membership (.NET Framework 3.5), EF Membership (.NET Framework 4), User Manager, File Manager, Localization, Captcha ...
NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.117: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's New: This version adds ...
Pex Custom Arithmetic Solver: PexCustomArithmeticSolver: This is the alpha release containing the Alternating Variable Method and Evolution Strategies to try and solve constraints over floating point vari...
Scrum Sprint Monitor: v1.0.0.44877: What is new in this release? Major performance increase in animations (up to 50 fps from 2 fps) by replacing DropShadow effect with png bitmaps; ...
sELedit: sELedit v1.0b: + Added support for empty strings / wstrings + Fixed: critical bug in configuration files (list 53)
sPWadmin: pwAdmin v0.9_nightly: + Fixed: XML editor can now open and save character templates + Added: PWI item name database + Added: Plugin Support
TechCalendar: Events Calendar v.1.0: Initial release.
The Silverlight Hyper Video Player [http://slhvp.com]: Beta 2: Beta 2.0. Some fixes from Beta 1, and a couple small enhancements. Intensive testing continues, and I will continue to update the code at least ever...
ThreadSafe.Caching: 2010.03.10.1: Updates to the scavenging behaviour since last release. Scavenging will now occur every 30 seconds by default and all objects in the cache will be ...
VCC: Latest build, v2.1.30310.0: Automatic drop of latest build
Visual Studio DSite: Email Sender (C++): The same Email Sender program that I made, but built in Visual C++ 2008 instead of Visual Basic 2008.
Web Forms MVP: Web Forms MVP CTP7: The release can be considered stable, and is in use behind several high traffic, public websites. It has been marked as a CTP release as it is not ...
White Tiger: 0.0.3.1: Now you can load or create files with whatever root element you want *check for sets file permissions

Most Popular Projects
MetaSharp
WBFS Manager
Rawr
AJAX Control Toolkit
Microsoft SQL Server Product Samples: Database
Silverlight Toolkit
Windows Presentation Foundation (WPF)
ASP.NET
Microsoft SQL Server Community & Samples
ASP.NET Ajax Library

Most Active Projects
Umbraco CMS
Rawr
SDS: Scientific DataSet library and tools
N2 CMS
Fasterflect - A Fast and Simple Reflection API
jQuery Library for SharePoint Web Services
BlogEngine.NET
Farseer Physics Engine
patterns & practices – Enterprise Library
Caliburn: An Application Framework for WPF and Silverlight

    Read the article

  • Dynamic Code for type casting Generic Types 'generically' in C#

    - by Rick Strahl
    C# is a strongly typed language, and while that's a fundamental feature of the language there are more and more situations where dynamic types make a lot of sense. I've written quite a bit about how I use dynamic for creating new type extensions:
- Dynamic Types and DynamicObject References in C#
- Creating a dynamic, extensible C# Expando Object
- Creating a dynamic DataReader for dynamic Property Access
Today I want to point out an example of a much simpler usage for dynamic that I use occasionally to get around potential static typing issues in C# code, especially those concerning generic types. TypeCasting Generics Generic types have been around since .NET 2.0. I've run into a number of situations in the past - especially with generic types that don't implement specific interfaces that can be cast to - where I've been unable to properly cast an object when it's passed to a method or assigned to a property. Granted, often this can be a sign of bad design, but in at least some situations the code that needs to be integrated is not under my control, so I have to make do with what's available, or the parent object is too complex or intermingled to be easily refactored to a new usage scenario. Here's an example that I ran into in my own RazorHosting library - so I have really no excuse, but I also don't see another clean way around it in this case. A Generic Example Imagine I've implemented a generic type like this: public class RazorEngine<TBaseTemplateType> where TBaseTemplateType : RazorTemplateBase, new() You can now happily instantiate new generic versions of this type with custom template bases, or even a non-generic version which is implemented like this: public class RazorEngine : RazorEngine<RazorTemplateBase> { public RazorEngine() : base() { } } To instantiate one: var engine = new RazorEngine<MyCustomRazorTemplate>(); Now imagine that the template class receives a reference to the engine when it's instantiated. This code is fired as part of the Engine pipeline when it gets ready to execute the template. It instantiates the template and assigns itself to the template: var template = new TBaseTemplateType() { Engine = this }; The problem here is that possibly many variations of RazorEngine<T> can be passed. I can have RazorTemplateBase, RazorFolderHostTemplateBase, CustomRazorTemplateBase etc. as generic parameters, and the Engine property has to reflect that somehow. So, how would I cast that? My first inclination was to use an interface on the engine class and then cast to the interface. Generally that works, but unfortunately here the engine class is generic and has a few members that require the template type in the member signatures. So while I certainly can implement an interface: public interface IRazorEngine<TBaseTemplateType> it doesn't really help for passing this generically templated object to the template class - I still can't cast it if multiple differently typed versions of the generic type could be passed. I have the exact same issue in that I can't specify a 'generic' generic parameter, since there's no underlying base type that's common. In light of this I decided on using object and the following syntax for the property (and the same would be true for a method parameter): public class RazorTemplateBase : MarshalByRefObject, IDisposable { public object Engine { get; set; } } Now because the Engine property is a non-typed object, when I need to do something with this value, I still have no way to cast it explicitly.
What I really would need is: public RazorEngine<> Engine { get; set; } but that's not possible. Dynamic to the Rescue Luckily, with the dynamic type this sort of thing can be mitigated fairly easily. For example, here's a method that uses the Engine property and uses the well-known class interface by simply casting the plain object reference to dynamic and then firing away on the properties and methods of the base template class that are common to all templates: /// <summary> /// Allows rendering a dynamic template from a string template /// passing in a model. This is like rendering a partial /// but providing the input as a /// </summary> public virtual string RenderTemplate(string template, object model) { if (template == null) return string.Empty; // if there's no template markup if (!template.Contains("@")) return template; // use dynamic to get around generic type casting dynamic engine = Engine; string result = engine.RenderTemplate(template, model); if (result == null) throw new ApplicationException("RenderTemplate failed: " + engine.ErrorMessage); return result; } Prior to .NET 4.0 I would have had to use Reflection for this sort of thing, which would have been a heck of a lot more verbose, but dynamic makes this so much easier and cleaner, and in this case at least the overhead is negligible since it's a single dynamic operation on an otherwise very complex operation call. Dynamic as a Bailout This sort of thing often reeks of a design flaw, and I agree that in hindsight this could have been designed differently. But as is often the case, this particular scenario wasn't planned for originally, and removing the generic signatures from the base type would break a ton of other code in the framework. Given the existing fairly complex engine design, refactoring an interface to remove generic types just to make this particular code work would have been overkill. Instead dynamic provides a nice, simple and relatively clean solution. Now if there were many other places where this occurred I would probably consider reworking the code to make this cleaner, but given this isolated instance and relatively low-profile operation, use of dynamic seems a valid choice for me. This solution really works anywhere you might end up with an inheritance structure that doesn't have a common base or interface that is sufficient. In the example above I know what I'm getting, but there's no common base type that I can cast to. All that said, it's a good idea to think about use of dynamic before you rush in. In many situations there are alternatives that can still work with static typing. Dynamic definitely has some overhead compared to direct static access of objects, so if possible we should definitely stick to static typing. In the example above the application already uses dynamics extensively for dynamic page templating and passing models around, so introducing dynamics here has very little additional overhead. The operation itself also fires off a fairly resource-heavy operation where the overhead of a couple of dynamic member accesses is not a performance issue. So, what's your experience with dynamic as a bailout mechanism?
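As a postscript, here is a minimal, self-contained sketch of the same bailout pattern. It is not taken from the RazorHosting code base; the class names are made up for illustration. Two differently typed generic engines can be assigned to the same untyped property, and dynamic is used to call a common member at runtime without knowing the generic parameter.

using System;

public class TemplateBase { public object Engine { get; set; } }
public class FolderTemplate : TemplateBase { }

public class Engine<T> where T : TemplateBase, new()
{
    // A member shared by every closed generic version of the engine.
    public string Render(string template) { return "[" + typeof(T).Name + "] " + template; }
}

class Program
{
    static void Main()
    {
        var template = new TemplateBase();

        // Differently typed generic engines can end up on the same untyped property...
        template.Engine = new Engine<TemplateBase>();
        // template.Engine = new Engine<FolderTemplate>();   // ...or this one

        // ...and dynamic lets us invoke the common member without a cast.
        dynamic engine = template.Engine;
        Console.WriteLine(engine.Render("Hello World"));
    }
}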
© Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp

    Read the article

  • A New Threat To Web Applications: Connection String Parameter Pollution (CSPP)

    - by eric.maurice
    Hi, this is Shaomin Wang. I am a security analyst in Oracle's Security Alerts Group. My primary responsibility is to evaluate the security vulnerabilities reported externally by security researchers on Oracle Fusion Middleware and to ensure timely resolution through the Critical Patch Update. Today, I am going to talk about a serious type of attack: Connection String Parameter Pollution (CSPP). Earlier this year, at the Black Hat DC 2010 Conference, two Spanish security researchers, Jose Palazon and Chema Alonso, unveiled a new class of security vulnerabilities, which target insecure dynamic connections between web applications and databases. The attack, called Connection String Parameter Pollution (CSPP), specifically exploits the semicolon-delimited database connection strings that are constructed dynamically based on user inputs from web applications. CSPP, if carried out successfully, can be used to steal user identities and hijack web credentials. CSPP is a high risk attack because of the relative ease with which it can be carried out (low access complexity) and the potential results it can have (high impact). In today's blog, we are going to first look at what connection strings are and then review the different ways connection string injections can be leveraged by malicious hackers. We will then discuss how CSPP differs from traditional connection string injection, and the measures organizations can take to prevent this kind of attack. In web applications, a connection string is a set of values that specifies information to connect to backend data repositories, in most cases, databases. The connection string is passed to a provider or driver to initiate a connection. Vendors or manufacturers write their own providers for different databases. Since there are many different providers and each provider has multiple ways to make a connection, there are many different ways to write a connection string. Here are some examples of connection strings from Oracle Data Provider for .Net/ODP.Net (Manufacturer: Oracle; Type: .NET Framework Class Library):
- Using TNS: Data Source = orcl; User ID = myUsername; Password = myPassword;
- Using integrated security: Data Source = orcl; Integrated Security = SSPI;
- Using the Easy Connect Naming Method: Data Source = username/password@//myserver:1521/my.server.com
- Specifying pooling parameters: Data Source=myOracleDB; User Id=myUsername; Password=myPassword; Min Pool Size=10; Connection Lifetime=120; Connection Timeout=60; Incr Pool Size=5; Decr Pool Size=2;
There are many variations of connection strings, but the majority of them are key value pairs delimited by semicolons. Attacks on connection strings are not new (see for example, this SANS White Paper on Securing SQL Connection String). Connection strings are vulnerable to injection attacks when dynamic string concatenation is used to build connection strings based on user input. When the user input is not validated or filtered, and malicious text or characters are not properly escaped, an attacker can potentially access sensitive data or resources. For a number of years now, vendors, including Oracle, have created connection string builder class tools to help developers generate valid connection strings and potentially prevent this kind of vulnerability. Unfortunately, not all application developers use these utilities because they are not aware of the danger posed by this kind of attack.
So how are Connection String Parameter Pollution (CSPP) attacks different from traditional Connection String Injection attacks? First, let's look at what parameter pollution attacks are. Parameter pollution is a technique which typically involves appending repeating parameters to the request strings to attack the receiving end. Much of the public attention around parameter pollution was initiated as a result of a presentation on HTTP Parameter Pollution attacks by Stefano Di Paola and Luca Carettoni delivered at the 2009 AppSec OWASP Conference in Poland. In HTTP Parameter Pollution attacks, an attacker submits additional parameters in HTTP GET/POST to a web application, and if these parameters have the same name as an existing parameter, the web application may react in different ways depending on how the web application and web server deal with multiple parameters with the same name. When applied to connection strings, the rule for the majority of database providers is the "last one wins" algorithm. If a KEYWORD=VALUE pair occurs more than once in the connection string, the value associated with the LAST occurrence is used. This opens the door to some serious attacks. By way of example, in a web application, a user enters a username and password; a subsequent connection string is generated to connect to the back end database. Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = XXX; In the password field, if the attacker enters "xxx; Integrated Security = true", the connection string becomes: Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = xxx; Integrated Security = true; Under the "last one wins" principle, the web application will then try to connect to the database using the operating system account under which the application is running to bypass normal authentication. CSPP poses serious risks for unprepared organizations. It can be particularly dangerous if an Enterprise Systems Management web front-end is compromised, because attackers can then gain access to control panels to configure databases, systems accounts, etc. Fortunately, organizations can take steps to prevent this kind of attack. CSPP falls into the Injection category of attacks like Cross Site Scripting or SQL Injection, which are made possible when inputs from users are not properly escaped or sanitized. Escaping is a technique used to ensure that characters (mostly from user inputs) are treated as data, not as characters that are relevant to the interpreter's parser. Software developers need to become aware of the danger of these attacks and learn about the defense mechanisms they need to introduce in their code. As well, software vendors need to provide templates or classes to facilitate coding and eliminate developers' guesswork for protecting against such vulnerabilities. Oracle has introduced the OracleConnectionStringBuilder class in Oracle Data Provider for .NET. Using this class, developers can employ a configuration file to provide the connection string and/or dynamically set the values through key/value pairs. It makes creating connection strings less error-prone and easier to manage, and ultimately using the OracleConnectionStringBuilder class provides better security against injection into connection strings.
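To make the defensive advice concrete, here is a small hedged sketch using the generic System.Data.Common.DbConnectionStringBuilder that ships with .NET (the Oracle-specific OracleConnectionStringBuilder mentioned above follows the same key/value pattern). Because each keyword is set as a discrete key/value pair rather than concatenated into one string, the builder quotes the attacker-supplied value, so the injected "Integrated Security = true" stays inside the Password value instead of becoming a new keyword. The server, catalog and user names are placeholders taken from the example above.

using System;
using System.Data.Common;

class ConnectionStringDemo
{
    static void Main()
    {
        // Simulated malicious input attempting parameter pollution.
        string userSuppliedPassword = "xxx; Integrated Security = true";

        var builder = new DbConnectionStringBuilder();
        builder["Data Source"] = "myDataSource";
        builder["Initial Catalog"] = "db";
        builder["Integrated Security"] = "no";
        builder["User ID"] = "myUsername";
        builder["Password"] = userSuppliedPassword;

        // The value containing semicolons is quoted by the builder,
        // so it cannot override the earlier Integrated Security keyword.
        Console.WriteLine(builder.ConnectionString);
    }
}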
For More Information: - The OracleConnectionStringBuilder is located at http://download.oracle.com/docs/cd/B28359_01/win.111/b28375/OracleConnectionStringBuilderClass.htm - Oracle has developed a publicly available course on preventing SQL Injections. The Server Technologies Curriculum course "Defending Against SQL Injection Attacks!" is located at http://st-curriculum.oracle.com/tutorial/SQLInjection/index.htm - The OWASP web site also provides a number of useful resources. It is located at http://www.owasp.org/index.php/Main_Page

    Read the article

  • Personal Development : Time, Planning , Repairs & Maintenance

    - by Rajesh Pillai
    Personal Development : Time, Planning, Repairs & Maintenance
These are just my thoughts, but you may find something interesting in them. Please think them over. We may know many things, yet we still keep procrastinating. I have written this because I have heard many people come back and say they don't have time to do the things they like. These are my thoughts, but they may be useful to someone else too. Certain things in life need periodic repairs and maintenance. To cite some examples: your CAR, your HOUSE, your personal laptop/desktop, your health, etc. Likewise, there are certain other things in professional life that require repair, maintenance or some kind of polishing, so that you always stay on top of them. But they are not always obvious. Some of them are:
- Improving your communication skills
- Increasing your vocabulary
- Upgrading your technical skills
- Pursuing your hobby
- Increasing your knowledge/awareness etc… etc…
And then there are certain things that we are always short of… one is TIME. We all know TIME is one of the most precious things in life, and yet we are all very miserable at managing it. Remember, you can only manage it and not control it. You can only control what you own or what you create. In theory time is infinite, so there should be an abundance of it. But remember one thing: it's not reversible. Once it has elapsed you cannot live it again. Think it over. So, how do you find that golden 25th hour every day? To find the 25th hour you need to reflect back on your current daily activities. Analyze them and see where you are spending most of your time and whether it is really important. Even the 8 hours that you spend in the office: are they spent fruitfully? At the end of the day, were those 8 precious hours worth it? Just reflect back on your activities. Did you learn something? If yes, did you make a point to NOTE IT? If you didn't note it, then was the time you spent really worth it? Just ponder over it. Here are some calculations of the daily activities where most of the time is spent. Let's start (in no particular order, though):
- Sleep (6.5 hours) [Remember you only require 6 good hours of sleep every day. Some may think it is 8, but that's a myth. To achieve 6 hours of sleep and be in good health you can practice 15 minutes of daily meditation, so effectively you can round it to 6.5 hours.]
- Morning chores (2 hours): some may need to prepare breakfast and all other things.
- Office commuting (avg. to and fro, 3 hours)
- Office work (avg. 9.5 hours)
Total: 21 hours of effective time which is spent irrespective of what you do. There may be some variations here and there. Still, you have 3 hours EXTRA. Where do these 3 hours go? If you can find them, then you may get that golden 25th hour out of these 3 hours. Let's discount 2 hours for contingencies; you still have 1 hour with you. If you can't find it, then you are living a directionless life. As you can see, the 25th hour lies within the 24 hours of the day. It's up to each one of us to find it and make use of it. Now what can you do with that 25th hour, i.e. 1 extra hour of your life? Imagine the possibilities. Again, some calculations:
- 1 hour daily * 30 days = 30 hours every month
- 30 hours per month * 12 months = 360 hours every year
360 hours every year seems very promising. Let's add some contingencies; let's be optimistic and say a 50% contingency. You still have 180 hours every year. That leaves you with about 30 minutes of extra time every day. That's a hell of a lot of time, if you can manage it.
This may sound like high talk [yes, it is, unless you apply these simple rules, rationalize your everyday living and stop procrastinating]. NOTE: I haven't taken weekends, holidays and leave into account, so that leaves us with a lot of buffer time. You can meet family, friends and relatives, handle other tasks, and yet have these 180 pure hours of joy every year. Do whatever you want to do with them. So, how important are these 180 hours per year to you? Just think it over. You may use them the way you like:
- 50 hours [pursue your hobby, like drawing, crafting, learning to dance, juggling, swimming, travelling… hmm, anything you like doing and didn't have time to do]
- 30 hours [learn a new programming language or technology (i.e. get comfortable with it)]
- 50 hours [improve existing skills]
- 20 hours [improve your communication skills; do some light reading]
- 30 hours [YOU DECIDE WHAT TO DO]
So, if you did this for one year you would have learnt a new programming language, upgraded existing skills, improved your communication, etc. If you did this for two years… imagine the level of personal development or growth you may have attained. If you did this for three years… NOW I think I don't need to mention this. So, you still have TIME; as they say, TIME is infinite. So, make judicious use of this precious thing. And never ever come back saying "I don't have time". If you are RICH in TIME, everything else will automatically be taken care of, as those things may just be a byproduct of how you spend your time. So, happy TIMING your TIME every day.

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently”, I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy. This is where SQL Compare’s command line comes into its own. I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database; these are then passed to sqlcompare.exe to be used as target parameters. In the example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.

-- Create a DeploymentTargets database and a Servers table
CREATE DATABASE DeploymentTargets
GO
USE DeploymentTargets
GO
CREATE TABLE [dbo].[Servers](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [serverName] [nvarchar](50) NULL,
    [environment] [nvarchar](50) NULL,
    [databaseName] [nvarchar](50) NULL,
    CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC)
)
GO
-- Now insert your target server and database details
INSERT INTO dbo.Servers (serverName, environment, databaseName) VALUES (N'myserverinstance', N'myenvironment1', N'mydb1')
INSERT INTO dbo.Servers (serverName, environment, databaseName) VALUES (N'myserverinstance', N'myenvironment2', N'mydb2')

Here’s the PowerShell script you can adapt for yourself as well.

# We're holding the server names and database names that we want to deploy to in a database table.
# We need to connect to that server to read these details.
$serverName = ""
$databaseName = "DeploymentTargets"
$authentication = "Integrated Security=SSPI"
#$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication.

# Path to the scripts folder we want to deploy to the databases
$scriptsPath = "SimpleTalk"

# Path to SQLCompare.exe
$SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe"

# Create SQL connection string, and connection
$ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication"
$ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString);

# Create a Dataset to hold the DataTable
$dataSet = new-object "System.Data.DataSet" "ServerList"

# Create a query
$query = "SET NOCOUNT ON;"
$query += "SELECT serverName, environment, databaseName "
$query += "FROM dbo.Servers; "

# Create a DataAdapter to populate the DataSet with the results
$dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection)
$dataAdapter.Fill($dataSet) | Out-Null

# Close the connection
$ServerConnection.Close()

# Populate the DataTable
$dataTable = new-object "System.Data.DataTable" "Servers"
$dataTable = $dataSet.Tables[0]

# For every row in the DataTable
$dataTable | FOREACH-OBJECT {
    "Server Name: $($_.serverName)"
    "Database Name: $($_.databaseName)"
    "Environment: $($_.environment)"

    # Compare the scripts folder to the database and synchronize the database to match.
    # NB. Have set SQL Compare to abort on medium level warnings.
    $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync" ) # Commented out the 'sync' parameter for safety.
    write-host $arguments
    & $SQLComparePath $arguments
    "Exit Code: $LASTEXITCODE"

    # Some interesting variations

    # Check that every database matches a folder.
    # For example this might be a pre-deployment step to validate everything is at the same baseline state,
    # or a post-deployment script to validate the deployment worked.
    # An exit code of 0 means the databases are identical.
    #
    # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

    # Generate a report of the difference between the folder and each database, and a SQL update script for each database.
    # For example use this after the above to generate upgrade scripts for each database.
    # Examine the warnings and the HTML diff report to understand how the script will change objects.
    #
    # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical")
}

It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run it en masse against multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script. You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings. See the commented-out example in the PowerShell:

#$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical")

There is a drawback to running a pre-generated deployment script: it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe exit code 79), a drift report is outputted instead of executing the deployment script. See the commented-out example (a small C# sketch of the same drift check appears at the end of this post).

# $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment.
You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.   You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free.  (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions)
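If you would rather drive the same drift check from C# (for example from a custom build or deployment task) instead of PowerShell, a minimal sketch might look like the following. The /Assertidentical flag and the exit code of 79 for "differences found" are taken from the article above; the installation path, scripts folder, server and database names are placeholders you would replace with your own.

using System;
using System.Diagnostics;

class DriftCheck
{
    static void Main()
    {
        // Placeholder paths and targets - adjust for your environment.
        var psi = new ProcessStartInfo
        {
            FileName = @"C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe",
            Arguments = "/scripts1:SimpleTalk /server2:myserverinstance /database2:mydb1 /Assertidentical",
            UseShellExecute = false
        };

        using (var process = Process.Start(psi))
        {
            process.WaitForExit();

            if (process.ExitCode == 0)
                Console.WriteLine("No drift detected - safe to run the pre-generated deployment script.");
            else if (process.ExitCode == 79)
                Console.WriteLine("Schema drift detected - investigate before deploying.");
            else
                Console.WriteLine("sqlcompare.exe returned exit code " + process.ExitCode);
        }
    }
}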

    Read the article

  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: A recent customer survey reveals the deleterious effects of data fragmentation. by Trevor Naidoo, December 2010   Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions, or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor in delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey assessing customers' master data management (MDM) capabilities over the past two years to get a sense of where they are in terms of their capabilities. The responses, by 27 respondents from six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results. 1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you stored your customer information, and whether a customer's address was the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad-hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross reference for all other sources and ensures consistent, high-quality master data throughout the organization. 2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department is tracking their interactions with the same customers independently, and the finance department also has a different perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact. 3. 
Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases, it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology. 4. Approximately 50 percent of respondents spend manual efforts cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be stored in inches and can also be entered as "'' ". These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses we don't reside, and using channels (like direct mail) we don't like. An all-too-common example of a potential challenge follows: Customers end up receiving duplicate communications, which not only impacts customer satisfaction, but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues. 5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts are stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but to also share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system. 
Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting.
Characteristics of Stellar MDM
When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics:
- enterprise-grade MDM performance
- complete technology that can be rapidly deployed and addresses multiple business issues
- end-to-end MDM process management with data quality monitoring and assurance
- pre-built MDM business relevant applications with data stores and workflows
These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers.
Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle.

    Read the article

< Previous Page | 18 19 20 21 22 23 24  | Next Page >