Search Results

Search found 11766 results on 471 pages for 'info plist'.


  • Getting 404 in Android app while trying to get xml from localhost

    - by Patrick
    This must be something really stupid; I've been trying to solve this issue for a couple of days now and it's really not working. I searched everywhere, and there probably is someone with the same problem, but I can't seem to find it. I'm working on an Android app that pulls some XML from a website. Since that website is down a lot, I decided to save the XML and serve it locally. Here is what I did:

    - I downloaded the kWS app to host the downloaded XML file.
    - I put the file in the right directory and could access it through the mobile browser, but not with my app (same code as I used when pulling it from another website not hosted by me; the only difference was the URL, obviously).

    So I tried hosting it on my PC and accessing it with my app from there. Same result: the mobile browsers had no problem finding it, but the app kept saying 404 Not Found: "The requested URL /test.xml&parama=Someone&paramb= was not found on this server." Note: don't mind the two parameters I am sending; I needed them to get the right data from the website that isn't hosted by me. My code:

        public String getStuff(String name) {
            String URL = "http://10.0.0.8/test.xml";
            ArrayList<NameValuePair> params = new ArrayList<NameValuePair>(2);
            params.add(new BasicNameValuePair("parama", name));
            params.add(new BasicNameValuePair("paramb", ""));
            APIRequest request = new APIRequest(URL, params);
            try {
                RequestXML rxml = new RequestXML();
                AsyncTask<APIRequest, Void, String> a = rxml.execute(request);
                ...
            } catch (Exception e) {
                e.printStackTrace();
            }
            return null;
        }

    That should be working correctly. Now the relevant part of the RequestXML class:

        class RequestXML extends AsyncTask<APIRequest, Void, String> {
            @Override
            protected String doInBackground(APIRequest... uri) {
                HttpClient httpclient = new DefaultHttpClient();
                String completeUrl = uri[0].url;
                // ... add parameters to URL ...
                HttpGet request = null;
                try {
                    request = new HttpGet(new URI(completeUrl));
                } catch (URISyntaxException e1) {
                    e1.printStackTrace();
                }
                HttpResponse response;
                String responseString = "";
                try {
                    response = httpclient.execute(request);
                    StatusLine statusLine = response.getStatusLine();
                    if (statusLine.getStatusCode() == HttpStatus.SC_OK) {
                        // ...

    It fails here, because statusLine.getStatusCode() returns a 404 instead of a 200. The XML is just plain XML, nothing special about it. I changed the contents of the .htaccess file to "ALLOW FROM ALL" (which works, because the browser on my mobile device can access it and shows the correct XML). I am running Android 4.0.4 and I am using both the default browser and Chrome on my mobile device. I am using MoWeS to host the website on my PC. Any help would be appreciated, and if you need to know anything else before you can find an answer, I'll be more than happy to provide it. Thank you for your time! Cheers.
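
    One detail worth checking, going only by the error message quoted above: the failing URL reads /test.xml&parama=..., with the query string appended by & where the first parameter needs ?. A minimal sketch of building the URL with the same Apache HttpClient API the post already uses (the fix is an assumption based on that message, not a confirmed diagnosis):

        import java.util.ArrayList;
        import java.util.List;
        import org.apache.http.NameValuePair;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.client.utils.URLEncodedUtils;
        import org.apache.http.message.BasicNameValuePair;

        public class UrlSketch {
            public static HttpGet buildRequest(String name) {
                List<NameValuePair> params = new ArrayList<NameValuePair>(2);
                params.add(new BasicNameValuePair("parama", name));
                params.add(new BasicNameValuePair("paramb", ""));
                // '?' separates the path from the query string; URLEncodedUtils
                // joins the pairs with '&' and URL-encodes the values.
                String completeUrl = "http://10.0.0.8/test.xml?"
                        + URLEncodedUtils.format(params, "UTF-8");
                return new HttpGet(completeUrl);
            }
        }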

  • Infinite sharing system (PHP/MySQLi)

    - by Toine Lille
    I'm working on a discount system for any customer who shares a product and brings in new customers. Each unique visit = $0.05 off, each new customer = $0.50 off (it's a cheap product, so no big numbers). When a new customer shares the site in turn, the customer originally responsible for them (if any) gets half of the new customer's discount as well; for the next level down the original customer gets a fourth, the new customer half of that, and so on, creating a tree or pyramid that could be infinitely deep:

        Initial customer ($1.35 discount: 2 new + 3 visits, plus half of 1 new + 2 visits)
            Visitor ($0)
            Visitor ($0)
            New customer ($0.60)
                Visitor ($0)
                Visitor ($0)
                Newer customer ($0)
            New customer ($0)
            Visitor ($0)

    Customers are saved along with their IP addresses (bin2hex(inet_pton)) in a database table (customers) with a unique id, e-mail address and the date/time they first purchased a product (= time of registration). Shares are saved in a separate table in the same database (sharing): each unique IP address that visits the site creates a new row with the IP address (also stored as bin2hex(inet_pton)), the id of the customer who shared the link and the date/time of the visit. Sharing goes via URL, with a GET parameter containing the sharing customer's id. Visits and new customers overlap, since a visit always occurs before the new customer does; that's fine. The date/times are there just to make it a little more secure (I also use the IP along with cookies to check whether people cheat the system). If an IP is already in the sharing or customers table, it does not count and no new row is created.

    Now the problem is: how do I make the infinite levels happen and apply the different values to them? That's all I'd need to know. It needs to calculate the discount for each customer separately, but also allow monitoring of everything together (though that's just a matter of passing all the ids through it). I figured I'd start (after the database connection) with:

        $stmt = $con->prepare('SELECT ip,datetime FROM sharing WHERE sender=?');
        $stmt->bind_param('i', $customerid);
        $stmt->execute();
        $stmt->store_result();
        $discount = $discount + ($stmt->num_rows * 0.05);
        $stmt->bind_result($ip, $timeofsharing);

    to translate all the visits into $0.05 of discount each. To check for the new customers that came from these visits, I wrote the following:

        while ($stmt->fetch()) {
            $stmt2 = $con->prepare("SELECT datetime FROM users WHERE ip=?");
            $stmt2->bind_param('s', $ip);
            $stmt2->execute();
            $stmt2->store_result();
            $stmt2->bind_result($timeofpurchase);

    followed by a little more security, comparing the datetimes:

        while ($stmt2->fetch()) {
            if (strtotime($timeofpurchase) < strtotime($timeofsharing)) {
                $discount = $discount + 0.50;
            }

    But this only covers the initial customer's direct results. If I wanted to check the next level, I'd basically have to nest the exact same check and loop inside itself, checking each new customer the initial customer brought to the site, and then again for all the newer customers, etc. What to do? Where to go? What would be the correct practice for this? Thanks!
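
    The "nest the same loop inside itself" observation is exactly what recursion expresses. A hedged sketch of that approach, not production code: each level passes a halved $factor down the tree. Table and column names follow the post where they appear; the id column on customers and the comparison direction are assumptions on my part.

        <?php
        function calculateDiscount(mysqli $con, int $customerId, float $factor = 1.0): float
        {
            $discount = 0.0;
            $stmt = $con->prepare('SELECT ip, datetime FROM sharing WHERE sender = ?');
            $stmt->bind_param('i', $customerId);
            $stmt->execute();
            $stmt->store_result();                    // buffer, so nested queries are safe
            $stmt->bind_result($ip, $timeOfSharing);

            while ($stmt->fetch()) {
                $discount += 0.05 * $factor;          // every unique visit

                // Did this visit turn into a customer?
                $stmt2 = $con->prepare('SELECT id, datetime FROM customers WHERE ip = ?');
                $stmt2->bind_param('s', $ip);
                $stmt2->execute();
                $stmt2->store_result();
                $stmt2->bind_result($newCustomerId, $timeOfPurchase);

                // The post compares the datetimes the other way round; adjust
                // to whichever rule is actually intended.
                if ($stmt2->fetch() && strtotime($timeOfPurchase) > strtotime($timeOfSharing)) {
                    $discount += 0.50 * $factor;
                    // Recurse: the new customer's own shares count at half rate.
                    $discount += calculateDiscount($con, $newCustomerId, $factor / 2);
                }
                $stmt2->close();
            }
            $stmt->close();
            return $discount;
        }

    Calling calculateDiscount($con, $id) per customer gives each one their own total; a real system might cache or iterate instead of recursing if the tree gets deep.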

  • Finding images and placing them into another img src

    - by saorabh
    My HTML markup is like this:

        <div id="main">
          <div id="slider">
            <img src=""/>
          </div>
          <div class="clear slider"></div>
          <div class="slider ndslider">
            <a href="#" rel="bookmark">
              <img src="#" title=""/>
            </a>
          </div>
          <div class="slider rdslider">
            <a href="" rel="bookmark">
              <img src="" title=""/>
            </a>
          </div>
          <div class="slider thslider nivoSlider">
            <a rel="bookmark" href="" class="nivo-imageLink">
              <img src="image src">
            </a>
            <a rel="bookmark" href="" class="nivo-imageLink">
              <img title="" src="">
            </a>
            <a rel="bookmark" href="#" class="nivo-imageLink">
              <img title="" src="">
            </a>
            <div class="nivo-slice"></div>
            <div class="nivo-slice"></div>
            <div class="nivo-slice"></div>
            <div class="nivo-directionNav">
              <a class="nivo-prevNav">Prev</a>
              <a class="nivo-nextNav">Next</a>
            </div>
            <div class="nivo-controlNav">
              <a rel="0" class="nivo-control active">
                <img alt="" src="undefined">
              </a>
              <a rel="1" class="nivo-control">
                <img alt="" src="undefined">
              </a>
            </div>
          </div>
        </div>

    I want to find all the images inside this div:

        <div class="slider thslider nivoSlider"> </div>

    and set those images onto this one:

        <div class="nivo-controlNav">
          <a rel="0" class="nivo-control active">
            <img alt="" src="undefined">
          </a>
        </div>

    To do this I wrote my own custom jQuery, but it is not working:

        jQuery(document).ready(function() {
            jQuery('.thslider a img').each(function() {
                var imgSrc = jQuery(this).attr('src');
                var newSrc = 'http://saorabh-test1.rtcamp.info/wp-content/themes/twentyten/timthumb.php?src=' + imgSrc + '&h=50&w=50&zc=1';
                jQuery(".thslider .nivo-controlNav a img").attr('src', newSrc);
            });
        });

    Help me!
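
    A hedged sketch of the likely fix (untested): the line inside .each() assigns the same newSrc to every control-nav image on every pass, so they all end up with the last slide's thumbnail, and the selector '.thslider a img' also matches the control-nav thumbnails themselves. Pairing slide n with control image n via .eq(index), and tightening the selector, avoids both:

        jQuery(document).ready(function() {
            // only the slide links, not the control-nav thumbnails
            jQuery('.thslider > a.nivo-imageLink > img').each(function(index) {
                var imgSrc = jQuery(this).attr('src');
                var newSrc = 'http://saorabh-test1.rtcamp.info/wp-content/themes/twentyten/timthumb.php?src=' + imgSrc + '&h=50&w=50&zc=1';
                // write only to the control image at the same position
                jQuery('.thslider .nivo-controlNav a img').eq(index).attr('src', newSrc);
            });
        });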

  • Using Dynamic Proxies to centralize JPA code

    - by Daziplqa
    Hi all. This is not really a question; I need your opinions on something. I put this post here because I know you are always active, so please don't consider it a bad question, and share your opinions with me. I've used Java dynamic proxies to centralize the JPA code that I use in standalone mode. Here's the dynamic proxy code:

        package com.forat.service;

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.Method;
        import java.lang.reflect.Proxy;
        import java.util.logging.Level;
        import java.util.logging.Logger;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.EntityTransaction;
        import javax.persistence.Persistence;
        import com.forat.service.exceptions.DAOException;

        /**
         * Example of usage:
         * <pre>
         * OnlineFromService onfromService =
         *     (OnlineFromService) DAOProxy.newInstance(new OnlineFormServiceImpl());
         * try {
         *     Student s = new Student();
         *     s.setName("Mohammed");
         *     s.setNationalNumber("123456");
         *     onfromService.addStudent(s);
         * } catch (Exception ex) {
         *     System.out.println(ex.getMessage());
         * }
         * </pre>
         * @author mohammed hewedy
         */
        public class DAOProxy implements InvocationHandler {
            private Object object;
            private Logger logger = Logger.getLogger(this.getClass().getSimpleName());

            private DAOProxy(Object object) {
                this.object = object;
            }

            public static Object newInstance(Object object) {
                return Proxy.newProxyInstance(object.getClass().getClassLoader(),
                        object.getClass().getInterfaces(), new DAOProxy(object));
            }

            @Override
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                EntityManagerFactory emf = null;
                EntityManager em = null;
                EntityTransaction et = null;
                Object result = null;
                try {
                    emf = Persistence.createEntityManagerFactory(Constants.UNIT_NAME);
                    em = emf.createEntityManager();
                    Method entityManagerSetter = object.getClass().
                            getDeclaredMethod(Constants.ENTITY_MANAGER_SETTER_METHOD, EntityManager.class);
                    entityManagerSetter.invoke(object, em);
                    et = em.getTransaction();
                    et.begin();
                    result = method.invoke(object, args);
                    et.commit();
                    return result;
                } catch (Exception ex) {
                    et.rollback();
                    Throwable cause = ex.getCause();
                    logger.log(Level.SEVERE, cause.getMessage());
                    if (cause instanceof DAOException)
                        throw new DAOException(cause.getMessage(), cause);
                    else
                        throw new RuntimeException(cause.getMessage(), cause);
                } finally {
                    em.close();
                    emf.close();
                }
            }
        }

    And here's the link with more info: http://m-hewedy.blogspot.com/2010/04/using-dynamic-proxies-to-centralize-jpa.html (please don't consider this an ad; I can copy and paste the entire topic here if you want). So, please give me your opinions. Thanks.
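
    One opinion to kick things off (my suggestion, not from the post): Persistence.createEntityManagerFactory() is expensive, and the factory is normally created once per application rather than on every invoke() call; only the EntityManager is per-call. A trimmed sketch of that change, with the entity-manager setter injection from the original omitted for brevity:

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.Method;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class DAOProxy implements InvocationHandler {
            // Built once and shared: the factory is thread-safe and costly to
            // create; EntityManager instances are neither, so they stay per-call.
            private static final EntityManagerFactory EMF =
                    Persistence.createEntityManagerFactory(Constants.UNIT_NAME);

            private final Object target;

            public DAOProxy(Object target) { this.target = target; }

            @Override
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                EntityManager em = EMF.createEntityManager();
                try {
                    em.getTransaction().begin();
                    Object result = method.invoke(target, args);
                    em.getTransaction().commit();
                    return result;
                } catch (Exception ex) {
                    if (em.getTransaction().isActive()) em.getTransaction().rollback();
                    throw ex;
                } finally {
                    em.close();   // but not EMF.close() -- it lives for the app
                }
            }
        }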

  • Including objects from external .js files

    - by Molle
    I have been searching for many hours over several days for this answer, and though there are many topics on how to include files in a project (also here at Stack Overflow), I have not yet found THE solution to my problem. I'm working on a project where I want to include one single object at a time from many different files (I do not want to include the files themselves, only their content). All the objects in all the files have the same name; only the content is different. It is important that I do not get a SCRIPT tag in the head section of the page, since the content of all the files uses the same names. None of the files will contain functions anyway, only one single object, which needs to be loaded one at a time and then discarded when the next one is loaded. The objects hold the data that will be shown on the page, and they are requested from the menu by an onclick event.

        function setMenu() // The menu is being built.
        {
            var html = '';
            html += '<table border="0">';
            for (var i = 0; i < menu.pages.length; i++) {
                html += '<tr class="menuPunkt"><td width="5"></td><td onclick="pageName(this)">' + menu.pages[i] + '</td><td width="5"></td></tr>';
            }
            // menu is a global object containing elements such as an array with
            // all the pages that need to be shown, and styling for the menu.
            html += '</table>';
            document.getElementById("menu").innerHTML = html;
            style.setMenu(); // The menu is being positioned and styled.
        }

    Now, when I click on a menu item, the pageName function is triggered with the clicked HTML element as its argument. It is here that I want the content of my external file loaded into a local variable and used to display content on the page. The answer I want is: how do I load the external obj into the function where I need it? (It may be an external file, but only in the sense of not being included in the head section; I'm still loading the file from my own local library.)

        function pageName(elm) // The element that I clicked is elm.
        {
            var page = elm.innerHTML;           // I need only the innerHTML of the element.
            var file = 'sites/' + page + '.js'; // The file to be loaded.
            var obj = ?? // Here I somehow want the object from the external file
                         // to be loaded, before doing stuff with the obj.
            style.content();
        }

    The content of the external file could look like this:

        // The src for the external page: 'sites/page.js'
        var obj = {
            innerHTML: 'Text to be shown',
            style: 'Not important for the problem at hand',
            otherStuff: ' --||-- '
        };

    Any help will be appreciated, Molle
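
    A minimal sketch of one approach that avoids a SCRIPT tag entirely (hedged: the 'content' element id is an illustrative assumption, and evaluating fetched source assumes the files are trusted): fetch the file with XMLHttpRequest and evaluate its text inside a function scope, so the obj it declares stays local and each load simply replaces the previous one.

        function loadPageObject(file, callback) {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', file, true);
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4 && xhr.status === 200) {
                    // Run the file's source in its own function scope and hand
                    // back the `obj` it declared; nothing touches the page head.
                    var obj = new Function(xhr.responseText + '; return obj;')();
                    callback(obj);
                }
            };
            xhr.send(null);
        }

        function pageName(elm) {
            loadPageObject('sites/' + elm.innerHTML + '.js', function (obj) {
                // use the freshly loaded object, then let it be garbage-collected
                document.getElementById('content').innerHTML = obj.innerHTML;
                style.content();
            });
        }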

  • SQL: How can I use a CASE statement to update 1000 rows at once?

    - by SoLoGHoST
    OK, I would like to use a CASE statement for this, but I am lost with it. Basically, I need to update a ton of rows, but just the "position" column. I need to update all "position" values from 0 to count(position) for each id_layout_position value, per id_layout value. Here's what I have for a regular update, but I don't want to throw this into a foreach loop, as it would take forever. I'm using SMF (Simple Machines Forum), so it might look a little different, but the idea is the same, and CASE statements are supported.

        $smcFunc['db_query']('', '
            UPDATE {db_prefix}dp_positions
            SET position = {int:position}
            WHERE id_layout_position = {int:id_layout_position}
                AND id_layout = {int:id_layout}',
            array(
                'position' => $position++,
                'id_layout_position' => (int) $id_layout_position,
                'id_layout' => (int) $id_layout,
            )
        );

    Anyway, I need to apply some sort of CASE to this so that it can auto-increment each value it finds by 1 and update to the next possible value. I know I'm doing this wrong, even in this query, but I'm totally lost when it comes to CASEs. Here's an example of a CASE being used within SMF, so you can see it and hopefully relate:

        $conditions = '';
        foreach ($postgroups as $id => $min_posts)
        {
            $conditions .= '
                WHEN posts >= ' . $min_posts . (!empty($lastMin) ? ' AND posts <= ' . $lastMin : '') . ' THEN ' . $id;
            $lastMin = $min_posts;
        }

        // A big fat CASE WHEN... END is faster than a zillion UPDATEs ;).
        $smcFunc['db_query']('', '
            UPDATE {db_prefix}members
            SET id_post_group = CASE ' . $conditions . '
            ELSE 0
            END' .
            ($parameter1 != null ? ' WHERE ' . (is_array($parameter1) ? 'id_member IN ({array_int:members})' : 'id_member = {int:members}') : ''),
            array(
                'members' => $parameter1,
            )
        );

    Before I do the update, I actually run a SELECT that throws everything I need into arrays, like so:

        $disabled_sections = array();
        $positions = array();
        while ($row = $smcFunc['db_fetch_assoc']($request))
        {
            if (!isset($disabled_sections[$row['id_group']][$row['id_layout']]))
                $disabled_sections[$row['id_group']][$row['id_layout']] = array(
                    'info' => $module_info[$name],
                    'id_layout_position' => $row['id_layout_position']
                );

            // Increment the positions...
            if (!is_null($row['position']))
            {
                if (!isset($positions[$row['id_layout']][$row['id_layout_position']]))
                    $positions[$row['id_layout']][$row['id_layout_position']] = 1;
                else
                    $positions[$row['id_layout']][$row['id_layout_position']]++;
            }
            else
                $positions[$row['id_layout']][$row['id_layout_position']] = 0;
        }

    Thanks, I know if anyone can help me here it's you. So here is my question: how do I use a CASE statement in the first code example so that I can update all of the rows in the position column from 0 to the total number of rows found that have a given id_layout value and id_layout_position value, and continue this for all the different id_layout values in that table? Can I use the arrays above somehow? I'm sure I'll have to use the id_layout and id_layout_position values for this, right? But how can I do this?
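
    For reference, a minimal, generic sketch of the pattern the quoted SMF code relies on (hypothetical ids and values; note that a plain CASE like this assigns one value per id_layout_position, so rows sharing that id would all receive the same position):

        UPDATE dp_positions
        SET position = CASE id_layout_position
            WHEN 3 THEN 0
            WHEN 7 THEN 1
            WHEN 9 THEN 2
            ELSE position
        END
        WHERE id_layout = 1;

    The $conditions loop in the SMF example builds exactly this kind of WHEN list as a string before running a single UPDATE.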

  • jQuery / PHP / Joomla: one of two select comboboxes does not get updated

    - by bluesbrother
    I am making a Joomla component which has three comboboxes/selects on the page: one with languages and two with subjects. If you change the language, the other two get filled with the same data (the subjects in the selected language); the names of the select boxes are different, but they are otherwise the same. I get an error for one of the subject boxes (hence the URL turns red), but there is no logic to which one fails. In Firebug I get the HTML back for one but not the other; that one gets updated while the other returns nothing. If I right-click the failed request in Firebug and do "send again", it loads fine. Is there a timing problem?

    The change event of the language select box:

        jQuery('#cmbldcoi_ldlink_language').bind('change', function() {
            var cmbLangID = jQuery('#cmbldcoi_ldlink_language').val();
            if (cmbLangID != 0) {
                getSubjectCmb_lang(cmbLangID, 'cmbldcoi_ldlink_subjects', '#ldlinksubjects');
            }
        });

    The function that requests the PHP file to create the HTML for the select:

        function getSubjectCmb_lang(langID, cmbName, DivWhereIn) {
            var xdate = new Date().getTime();
            var url = 'index.php?option=com_ldadmin&view=ldadmin&format=raw&task=getcmbsubj_lang&langid=' + langID + '&cmbname=' + cmbName + '&' + xdate;
            jQuery(DivWhereIn).load(url, function() {
            });
        }

    In the PHP file a database connection is made to get the information to build the select box. I use a function for this that is fine, because it builds all my select boxes. The only place with problems is on pages that have two selects that must change when a third one changes. My guess is that this goes wrong somewhere in the jQuery, and I think it has to do with timing. But I am open to all suggestions. Thanks.

    UPDATE: No, the id and name fields are different. They are named: cmbldcoi_child and cmbldcoi_parent. Here is my code. The change event for the first combobox, which makes the other two change:

        jQuery('#cmbldcoi_language_chain_subj').bind('change', function() {
            var langID = jQuery('#cmbldcoi_language_chain_subj').val();
            if (langID != 0) {
                getSubjectCmb_lang(langID, 'cmbldcoi_child', '#div_cmbldcoi_child');
                getSubjectCmb_lang(langID, 'cmbldcoi_parent', '#div_cmbldcoi_parent');
            }
        });

    The function which calls the PHP file to get the info from the database:

        function getSubjectCmb_lang(langID, cmbName, DivWhereIn) {
            var xdate = new Date().getTime();
            var url = 'index.php?option=com_ldadmin&view=ldadmin&format=raw&task=getcmbsubj_lang&langid=' + langID + '&cmbname=' + cmbName + '&' + xdate;
            jQuery(DivWhereIn).load(url, function() {
            });
        }

    The PHP code:

        function getcmbsubj_lang() {
            $langid = JRequest::getVar('langid');
            if ($langid > 0) {
                $langid = JRequest::getVar('langid');
            } else {
                $langid = 1;
            }
            $cmbName = JRequest::getVar('cmbname');
            //$lang_sufx = self::get_#__sufx($langid);
            print ld_html::ld_create_cmb_html($cmbName, '#__ldcoi_subjects', 'id', 'subject_name', " WHERE id_language={$langid} ORDER BY subject_name");
        }

    There is a class called ld_html which has a function that creates a combobox, ld_html::ld_create_cmb_html(). It takes a table name, an id field, a name field and, optionally, a WHERE clause. It all works fine if there is just one combobox that needs updating; the problem only appears when there are two. Thanks for the help!
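
    A quick way to test the timing theory (a diagnostic sketch, not a confirmed fix): serialize the two requests by firing the second .load() from the completion callback of the first. If both boxes then always fill, the two parallel requests are stepping on each other server-side; PHP session locking is a common culprit in Joomla components.

        jQuery('#cmbldcoi_language_chain_subj').bind('change', function() {
            var langID = jQuery(this).val();
            if (langID != 0) {
                var base = 'index.php?option=com_ldadmin&view=ldadmin&format=raw'
                         + '&task=getcmbsubj_lang&langid=' + langID
                         + '&' + new Date().getTime();
                jQuery('#div_cmbldcoi_child').load(base + '&cmbname=cmbldcoi_child', function() {
                    // second request starts only after the first has finished
                    jQuery('#div_cmbldcoi_parent').load(base + '&cmbname=cmbldcoi_parent');
                });
            }
        });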

  • Button with cell inside too big

    - by josh_24_2
    When the page loads, the cell with the button inside loads too big. Link to an example image: http://i.stack.imgur.com/x5j40.jpg. Also, is there a way to change the size of cells? Please help me; if you need more info, just ask and I will be happy to give more information.

        <div class="content">
        <?php
        $con = mysql_connect("", "", "");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("", $con);
        $result = mysql_query("SELECT * FROM users ORDER BY user_id");
        echo "<link rel='stylesheet' href='<?php echo URL; ?>public/css/style.css' /><div class='CSSTableGenerator'>
            <table>
                <tr>
                    <td> </td>
                    <td> ID </td>
                    <td> Username </td>
                    <td> Email </td>
                    <td> Admin </td>
                    <td> User Active </td>
                </tr>";
        while ($row = mysql_fetch_array($result)) {
            if ($row['user_perm_level'] == "2") {
                $admin = 'Yes';
            } else {
                $admin = 'No';
            }
            if ($row['user_active'] == "1") {
                $active = 'Yes';
            } else {
                $active = 'No';
            }
            echo "<tr>";
            echo "<td><input type='checkbox' name='1' value='1'></td>";
            echo "<td>" . $row['user_id'] . "</td>";
            echo "<td>" . $row['user_name'] . "</td>";
            echo "<td>" . $row['user_email'] . "</td>";
            echo "<td>" . $admin . "</td>";
            echo "<td>" . $active . "</td>";
            echo "</tr>";
        }
        echo "</table></div>";
        mysql_close($con);
        ?>
        </div>

    Thanks, josh_24_2

  • Not sure I am using inheritance/polymorphism correctly

    - by planker1010
    For this assignment I have to create a Car class (parent) and a CertifiedPreOwnCar class (child), and the parent class must have a method, checkWarrantyStatus(), to check whether the car is still under warranty. That method calls the boolean isCoveredUnderWarranty() to verify whether the car still has warranty. My issue: in the CertifiedPreOwnCar class I have to implement isCoveredUnderWarranty() as well, to check coverage under the extended warranty, and then have it be called via checkWarrantyStatus() in the Car class. I hope this makes sense. To sum it up: in the child class the isCoveredUnderWarranty check has to use the extended-warranty info, and then flow through to the parent class so it can be called via checkWarrantyStatus(). Here is my code; I have one error.

        public class Car {
            public int year;
            public String make;
            public String model;
            public int currentMiles;
            public int warrantyMiles;
            public int warrantyYears;
            int currentYear = java.util.Calendar.getInstance().get(java.util.Calendar.YEAR);

            /** construct car object with specific parameters */
            public Car(int y, String m, String mod, int mi) {
                this.year = y;
                this.make = m;
                this.model = mod;
                this.currentMiles = mi;
            }

            public int getWarrantyMiles() {
                return warrantyMiles;
            }

            public void setWarrantyMiles(int warrantyMiles) {
                this.warrantyMiles = warrantyMiles;
            }

            public int getWarrantyYears() {
                return warrantyYears;
            }

            public void setWarrantyYears(int warrantyYears) {
                this.warrantyYears = warrantyYears;
            }

            public boolean isCoveredUnderWarranty() {
                if (currentMiles < warrantyMiles) {
                    if (currentYear < (year + warrantyYears))
                        return true;
                }
                return false;
            }

            public void checkWarrantyStatus() {
                if (isCoveredUnderWarranty()) {
                    System.out.println("Your car " + year + " " + make + " " + model
                            + " With " + currentMiles + " is still covered under warranty");
                } else
                    System.out.println("Your car " + year + " " + make + " " + model
                            + " With " + currentMiles + " is out of warranty");
            }
        }

        public class CertifiedPreOwnCar extends Car {
            public CertifiedPreOwnCar(int y, String m, String mod, int mi) {
                super(mi, m, mod, y);
            }

            public int extendedWarrantyYears;
            public int extendedWarrantyMiles;

            public int getExtendedWarrantyYears() {
                return extendedWarrantyYears;
            }

            public void setExtendedWarrantyYears(int extendedWarrantyYears) {
                this.extendedWarrantyYears = extendedWarrantyYears;
            }

            public int getExtendedWarrantyMiles() {
                return extendedWarrantyMiles;
            }

            public void setExtendedWarrantyMiles(int extendedWarrantyMiles) {
                this.extendedWarrantyMiles = extendedWarrantyMiles;
            }

            public boolean isCoveredUnderWarranty() {
                if (currentMiles < extendedWarrantyMiles) {
                    if (currentYear < (year + extendedWarrantyYears))
                        return true;
                }
                return false;
            }
        }

        public class TestCar {
            public static void main(String[] args) {
                Car car1 = new Car(2014, "Honda", "Civic", 255);
                car1.setWarrantyMiles(60000);
                car1.setWarrantyYears(5);
                car1.checkWarrantyStatus();

                Car car2 = new Car(2000, "Ferrari", "F355", 8500);
                car2.setWarrantyMiles(20000);
                car2.setWarrantyYears(7);
                car2.checkWarrantyStatus();

                CertifiedPreOwnCar car3 = new CertifiedPreOwnCar(2000, "Honda", "Accord", 65000);
                car3.setWarrantyYears(3);
                car3.setWarrantyMiles(30000);
                car3.setExtendedWarrantyMiles(100000);
                car3.setExtendedWarrantyYears(7);
                car3.checkWarrantyStatus();
            }
        }
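
    One thing stands out (hedged, since the post doesn't say what the one error is; the inheritance design itself is sound, because Car.checkWarrantyStatus() dynamically dispatches to the child's overriding isCoveredUnderWarranty()): the subclass constructor forwards its arguments to super() in a different order than Car declares them, swapping year and miles. A sketch of the corrected constructor:

        public CertifiedPreOwnCar(int y, String m, String mod, int mi) {
            // Car's constructor is Car(int y, String m, String mod, int mi),
            // so the arguments must be forwarded in that same order.
            super(y, m, mod, mi);
        }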

  • XForms and multiple inputs for same model tag

    - by iHeartGreek
    Hi! I apologize ahead of time if I am not asking this properly; it is hard to put into words what I am asking. I have an XForms model such as:

        <file>
          <criteria>
            <criterion></criterion>
          </criteria>
        </file>

    I want to have multiple input text boxes, each of which creates a new criterion tag. The user interface looks like:

        <xf:input ref="/file/criteria/criterion" model="select_data">
          <xf:label>Select</xf:label>
        </xf:input>
        <xf:input ref="/file/criteria/criterion" model="select_data">
          <xf:label>Select</xf:label>
        </xf:input>
        <xf:input ref="/file/criteria/criterion" model="select_data">
          <xf:label>Select</xf:label>
        </xf:input>

    And I would like the XML output to look like this (once the user has entered the info):

        <file>
          <criteria>
            <criterion>AAA</criterion>
            <criterion>BBB</criterion>
            <criterion>CCC</criterion>
          </criteria>
        </file>

    The way I have it doesn't work, since all three input fields refer to the same criterion tag. How do I differentiate? Thanks! I hope that made some sense.

    BEGIN FIRST EDIT

    Thanks for the responses for the basic text box! However, I now need to do this with a listbox, and for the life of me I can't figure out how. I read somewhere to use the xforms:select and deselect events, but I didn't know where to place them, and the places I tried gave me very strange behaviour. I am currently implementing the following:

        <xf:select ref="instance('criteria_data')/criteria/criterion" selection="" appearance="compact">
          <xf:label>Choose criteria</xf:label>
          <xf:itemset nodeset="instance('criteria_choices')/choice">
            <xf:label ref="@label"></xf:label>
            <xf:value ref="."></xf:value>
          </xf:itemset>
        </xf:select>

    However, when multiple choices are submitted, all selected values are inserted into the same node, separated by spaces. For example, if AAA, BBB and FFF were selected from the listbox, the result would be:

        <criterion>AAA BBB FFF</criterion>

    How do I change my code so that each selection ends up in a separate node? I.e., I want:

        <criterion>AAA</criterion>
        <criterion>BBB</criterion>
        <criterion>FFF</criterion>

    Thanks!

    END FIRST EDIT

    BEGIN SECOND EDIT

    For the listboxes (i.e. xf:select appearance="compact") I ended up allowing the spaces to occur in the same node and then transforming that XML with XSL to generate a properly formatted new XML document (with separate individual nodes). Unfortunately, I did not find a less cumbersome solution that inserts them into separate nodes from the start. The selected answer works very well for text boxes, however, hence why I selected it as the answer.

    END SECOND EDIT
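
    For the text-box part, the usual XForms pattern is xf:repeat plus xf:insert, so each input binds to its own criterion node. A hedged sketch (untested; the xf/ev namespace prefixes and the trigger wiring are assumed as typical):

        <xf:repeat nodeset="/file/criteria/criterion">
          <xf:input ref=".">
            <xf:label>Select</xf:label>
          </xf:input>
        </xf:repeat>

        <xf:trigger>
          <xf:label>Add criterion</xf:label>
          <xf:action ev:event="DOMActivate">
            <!-- clones the last criterion and appends it, then blanks the copy -->
            <xf:insert nodeset="/file/criteria/criterion" at="last()" position="after"/>
            <xf:setvalue ref="/file/criteria/criterion[last()]" value="''"/>
          </xf:action>
        </xf:trigger>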

  • Weird behavior of fork() and execvp() in C

    - by ron
    After some remarks on my previous post, I made the following modifications:

        int main()
        {
            char errorStr[BUFF3];
            while (1)
            {
                int i, errorFile;
                char *line = malloc(BUFFER);
                char *origLine = line;
                fgets(line, 128, stdin);   // get a line from stdin

                // get complete diagnostics on the given string
                lineData info = runDiagnostics(line);

                char command[20];
                sscanf(line, "%20s ", command);
                // here I remove the command from the line; the command is stored
                // in "command" above
                line = strchr(line, ' ');
                printf("The Command is: %s\n", command);

                int currentCount = 0;                 // number of elements in the line
                int *argumentsCount = &currentCount;  // pointer to that

                // get the elements separated
                char **arguments = separateLineGetElements(line, argumentsCount);

                printf("\nOutput after separating the given line from the user\n");
                for (i = 0; i < *argumentsCount; i++)
                    printf("Argument %i is: %s\n", i, arguments[i]);

                // here we call a method that would execute the commands
                pid_t pid;
                if (-1 == (pid = fork()))
                {
                    sprintf(errorStr, "fork: %s\n", strerror(errno));
                    write(errorFile, errorStr, strlen(errorStr + 1));
                    perror("fork");
                    exit(1);
                }
                else if (pid == 0)  // fork was successful: child process
                {
                    printf("\nIn son process\n");
                    // if (execvp(arguments[0], arguments) < 0)  // for the moment I ignore this line
                    if (execvp(command, arguments) < 0)  // execute the command
                    {
                        perror("execvp");
                        printf("ERROR: execvp failed\n");
                        exit(1);
                    }
                }
                else  // parent
                {
                    int status = 0;
                    pid = wait(&status);
                    printf("Process %d returned with status %d.", pid, status);
                }

                // print each element of the line
                for (i = 0; i < *argumentsCount; i++)
                    printf("Argument %i is: %s\n", i, arguments[i]);

                // free all the elements from the memory
                for (i = 0; i < *argumentsCount; i++)
                    free(arguments[i]);
                free(arguments);
                free(origLine);
            }
            return 0;
        }

    When I enter in the console:

        ls out.txt

    I get:

        The Command is: ls
        execvp: No such file or directory
        In son process
        ERROR: execvp failed
        Process 4047 returned with status 256.Argument 0 is: >
        Argument 1 is: out.txt

    So I guess that the child process is active, but for some reason the execvp fails. Why? Regards.

    REMARK: The ls command is just an example. I need to make this work with any given command.

    EDIT 1: User input:

        ls > qq.out

    Program output:

        The Command is: ls
        Output after separating the given line from the user
        Argument 0 is: >
        Argument 1 is: qq.out
        In son process
        >: cannot access qq.out: No such file or directory
        Process 4885 returned with status 512.Argument 0 is: >
        Argument 1 is: qq.out
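
    Two conventions of execvp() are worth keeping in mind here (hedged: this is a sketch of the calling convention, not a diagnosis of separateLineGetElements, whose code isn't shown): the argument array must have the program name as element 0 and be NULL-terminated, and > redirection is shell syntax that execvp() does not interpret, so it has to be set up with open()/dup2() before the exec. A minimal self-contained example:

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            /* argv[0] is the program itself; the array ends with NULL */
            char *argv[] = { "ls", NULL };

            /* emulate "ls > qq.out": redirect stdout before exec'ing */
            int fd = open("qq.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }
            dup2(fd, STDOUT_FILENO);
            close(fd);

            execvp(argv[0], argv);
            perror("execvp");   /* reached only if the exec fails */
            return 1;
        }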

  • jQuery autocomplete doesn't reveal existing matches

    - by Alexander
    Hello, fellow engineers. I have come across a problem I just can't solve. I am using the autocomplete plugin for jQuery on an input. The HTML looks something like this:

        <tr id="row_house" class="no-display">
          <td class="col_num">4</td>
          <td class="col_label">House Number</td>
          <td class="col_data">
            <input type="text" title="House Number" name="house" id="house"/>
            <button class="pretty_button ui-state-default ui-corner-all button-finish">Get house info</button>
          </td>
        </tr>

    I am sure that this is the only id="house" field. Other fields before this one work fine with autocomplete, and it's basically the same algorithm (other variables, other data, other calls). So why doesn't it work with the following initialization code?

        $("#house").autocomplete(
            ["1/4","6","6/1","6/4","8","8/1","8/5","10","10/1","10/3","10/4","12","12/1","12/5","12/6",
             "14","14/1","15","15/1","15/2","15/4","15/5","16","16/1","16/2","16/21","16/2B","16/3","16/4",
             "17","17/1","17/2","17/4","17/5","17/6","17/7","17/8","18","18/1","18/2","18/3","18/5","18/95",
             "19","19/1","19/2","19/3","19/4","19/5","19/6","19/7","19/8","20","20/1","20/2","20/3","20/4",
             "21","21/1","21/2","21/3","21/4","22","22/9","23","23/2","23/4","24","24/1","24/2","24/3","24/A",
             "25","25/1","25/10","25/2","25/4","25/5","25/6","25/7","25/8","25/9","26","26/1","26/6","27","27/2",
             "28","28/1","29","29/2","29/3","29/4","30","30/1","30/2","30/3","31","31/1","31/3","32/A","33",
             "34","34/1","34/11","34/2","34/3","35","35/1","35/2","35/4","36","36/1","36/A","37","37/1","37/2",
             "38","38/1","38/2","39/1","39/2","39/3","39/4","40","40/1","41","41/2","42","43","44","45","45/1",
             "45/10","45/11","45/12","45/13","45/14","45/15","45/16","45/17","45/2","45/3","45/6","45/7","45/8",
             "45/9","46","47","47/2","49","49/1","50","51","51/1","51/2","52","53","54","55/7","66","109","122",
             "190/8","412"],
            {minChars: 1, mustMatch: true}
        ).result(function(event, result, formatted) {
            var found = false;
            for (var index = 0; index < HChouses.length; index++)
                // HChouses is the same array used for init, but each entry is
                // paired with a database ID.
                if (HChouses[index][0] == result) {
                    found = true;
                    HChouseId = HChouses[index][1];
                    $("#row_house .button-finish").click(function() {
                        QueryServer("HouseConnect", "FillData", true, HChouseId); // performs an AJAX request
                    });
                    break;
                }
            if (!found)
                $("#row_house .button-finish").unbind("click");
        });

    Each time I start typing (say I press the "1" key), the text appears and gets deleted instantly. Rarely, after repeated presses, I get the list (although much shorter than it should be), but if I then press the second digit, the whole thing disappears again.

    P.S. I use Firefox 3.6.3 for development.
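
    A diagnostic sketch, not a confirmed fix: mustMatch in the classic bassistance jquery.autocomplete plugin erases input that doesn't match a suggestion, which fits the "text gets deleted instantly" symptom. Relaxing it, and enabling matchContains so "6/1" is offered while typing "1", helps tell a data problem from an option problem:

        $("#house").autocomplete(houseNumbers, {  // houseNumbers: the array above
            minChars: 1,
            mustMatch: false,    // stop the plugin from wiping partial input
            matchContains: true  // offer "6/1" when the user types "1"
        });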

  • Opening a file with a variable as name and checking for undefined values

    - by Harm De Weirdt
    I'm having some problems writing data into a file using Perl.

        sub startNewOrder {
            my $name = makeUniqueFileName();
            open (ORDER, ">$name.txt") or die "can't open file: $!\n";

            format ORDER_TOP =
        PRODUCT<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<CODE<<<<<<<<AANTAL<<<<EENHEIDSPRIJS<<<<<<TOTAAL<<<<<<<
        .
            format ORDER =
        @<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< @<<<<<<<< @<<<< @<<<<<< @<<<<<
        $title, $code, $amount, $price, $total
        .
            close (ORDER);
        }

    This is the sub I use to make the file (I translated most of it). The makeUniqueFileName method builds a file name based on the current time ("minuteshoursdayOrder"). The problem now is that I have to write to this file in another sub:

        sub addToOrder {
            print "give productcode:";
            $code = <STDIN>;
            chop $code;
            print "Give amount:";
            $amount = <STDIN>;
            chop $amount;
            if ($inventory{$code} eq undef) {   # Does the product exist?
                print "This product does not exist";
            } elsif ($inventory{$code}[2] < $amount && !defined($inventory{$code}[2])) {   # Is there enough in the inventory?
                print "There is not enough in stock";
            } else {
                $inventory{$code}[2] -= $amount;
                # write in order file
                open (ORDER, ">>$name.txt") or die "can't open file: $!\n";
                $title  = $inventory{$code}[0];
                $code   = $code;
                $amount = $inventory{$code}[2];
                $price  = $inventory{$code}[1];
                $total  = $inventory{$code}[1];
                write;
                close(ORDER);
            }
        }

    %inventory is a hash that has the product code as key and an array with the title, price and amount as value. There are two problems here. First, when I enter an invalid product number, I still have to enter an amount, even though my code looks as if it should print the error directly after checking whether a product with the given code exists. Second, the writing doesn't seem to work: it always gives a "No such file or directory" error. Is there a way to open the ORDER file I made in the first sub without making $name non-local? Or just a way to write to this file? I really don't know how to start here. I can't find much info on writing to a file that has been closed before, in a different sub. Any help is appreciated, Harm
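
    A sketch of one way around the scoping problem (hedged: makeUniqueFileName comes from the post; the parameter passing is my suggestion): have startNewOrder() return the file name and pass it into addToOrder(), so $name never needs to be global.

        sub startNewOrder {
            my $name = makeUniqueFileName();
            open (ORDER, ">$name.txt") or die "can't open file: $!\n";
            close (ORDER);
            return $name;                 # hand the name back to the caller
        }

        sub addToOrder {
            my ($name) = @_;              # receive the file name as an argument
            open (ORDER, ">>$name.txt") or die "can't open file: $!\n";
            # ... set $title, $code, $amount, $price, $total and call write; ...
            close (ORDER);
        }

        my $order_file = startNewOrder();
        addToOrder($order_file);

    As for the first problem: both <STDIN> prompts run unconditionally before the if, so the existence check can only fire after the amount has already been asked for; checking $code between the two prompts changes that.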

  • mapping rect in small image to larger image (in order to do a copyPixels operation)

    - by skinnyTOD
    Hi all - this is (I think) a relatively simple math question, but I've spent a day banging my head against it and have only the dents and no solution. I'm coding in ActionScript 3. The functionality is:

    1. A large image is loaded at runtime. Its bitmapData is stored and a smaller version is created to display in the available screen area (I may end up just scaling the large image, since it is in memory anyway).
    2. The user can create a rectangular hotspot on the smaller image (the real functionality will be more complex: multiple rects with transparency, for example a donut shape with a hole, etc.).
    3. When the user clicks on the hotspot, the rect of the hotspot is mapped to the larger image and a new bitmap "callout" is created, using the larger bitmapData. The reason for this is that the "callout" will be better quality than just scaling up the area of the hotspot.

    The image below shows where I am so far: the blue rect is the clicked hotspot. In the upper left is the "callout", copied from the larger image. I have the aspect ratio right, but I am not mapping to the larger image correctly. Ugly code below. Sorry this post is so long; I just figured I ought to provide as much info as possible. Thanks for any tips!

    Trace of my data values:

        source BitmapData 1152 864
        scaled to rect 800 600
        scaled BitmapData 800 600
        selection BitmapData 58 56
        scaled selection 83 80
        ratio 1.44
        before (x=544, y=237, w=58, h=56)
        (x=544, y=237, w=225.04, h=217.28)

    Image here: http://i795.photobucket.com/albums/yy237/skinnyTOD/exampleST.jpg

        public function onExpandCallout(event:MouseEvent):void {
            if (maskBitmapData.getPixel32(event.localX, event.localY) != 0) {
                var maskClone:BitmapData = maskBitmapData.clone();
                // amount to scale callout - this will vary / can be changed by user
                var scale:Number = 150; // scale percentage
                var normalizedScale:Number = scale /= 100;
                var w:Number = maskBitmapData.width * normalizedScale;
                var h:Number = maskBitmapData.height * normalizedScale;
                var ratio:Number = (sourceBD.width / targetRect.width);

                // create bmpd of the scaled size to copy source into
                var scaledBitmapData:BitmapData = new BitmapData(maskBitmapData.width * ratio, maskBitmapData.height * ratio, true, 0xFFFFFFFF);

                trace("source BitmapData " + sourceBD.width, sourceBD.height);
                trace("scaled to rect " + targetRect.width, targetRect.height);
                trace("scaled BitmapData", bkgnImageSprite.width, bkgnImageSprite.height);
                trace("selection BitmapData", maskBitmapData.width, maskBitmapData.height);
                trace("scaled selection", scaledBitmapData.width, scaledBitmapData.height);
                trace("ratio", ratio);

                var scaledBitmap:Bitmap = new Bitmap(scaledBitmapData);
                var scaleW:Number = sourceBD.width / scaledBitmapData.width;
                var scaleH:Number = sourceBD.height / scaledBitmapData.height;

                var scaleMatrix:Matrix = new Matrix();
                scaleMatrix.scale(ratio, ratio);

                var sRect:Rectangle = maskSprite.getBounds(bkgnImageSprite);
                var sR:Rectangle = sRect.clone();

                var ss:Sprite = new Sprite();
                ss.graphics.lineStyle(8, 0x0000FF);
                //ss.graphics.beginFill(0x000000, 1);
                ss.graphics.drawRect(sRect.x, sRect.y, sRect.width, sRect.height);
                //ss.graphics.endFill();
                this.addChild(ss);

                trace("before " + sRect);

                w = uint(sRect.width * scaleW);
                h = uint(sRect.height * scaleH);

                sRect.inflate(maskBitmapData.width * ratio, maskBitmapData.height * ratio);
                sRect.offset(maskBitmapData.width * ratio, maskBitmapData.height * ratio);

                trace(sRect);

                scaledBitmapData.copyPixels(sourceBD, sRect, new Point());
                addChild(scaledBitmap);
                scaledBitmap.x = offsetPt.x;
                scaledBitmap.y = offsetPt.y;
            }
        }
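
    For the mapping itself, the usual approach is to multiply the whole hotspot rect (position and size) by the small-to-large ratio, rather than inflate()/offset() it. A minimal sketch using names from the post where they exist (sRect, sourceBD, ratio); this is a hypothesis to test, not verified against the full app:

        // map the hotspot rect from display coordinates to source coordinates
        var srcRect:Rectangle = new Rectangle(
            sRect.x * ratio,
            sRect.y * ratio,
            sRect.width * ratio,
            sRect.height * ratio);

        // copy that region of the full-resolution source, pixel for pixel,
        // so the callout keeps the large image's quality
        var callout:BitmapData = new BitmapData(srcRect.width, srcRect.height, true, 0);
        callout.copyPixels(sourceBD, srcRect, new Point(0, 0));
        addChild(new Bitmap(callout));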

  • Parsing XML with PHP: children

    - by moustafa
    Hello. I successfully created my parser, and everything is working great except one thing: since my XML is formatted a little differently, I am totally lost on how to assign variables to the children of <photos>. XML portion:

        <item>
          <url />
          <name />
          <photos>
            <photo>1020944_0.jpg</photo>
            <photo>1020944_1.jpg</photo>
            <photo>1020944_2.jpg</photo>
          </photos>
          <user_id />
        </item>

    PHP code:

        <?php
        global $insideitem, $tag, $name, $photos, $user_id;
        global $count, $db;

        $db = mysql_connect("localhost", "user", "pass");
        mysql_select_db("db_name", $db);
        $result = mysql_query("SELECT user_id FROM table", $db);
        while ($myrow = mysql_fetch_array($result)) {
            $uid = $myrow['user_id'];
            $UN_ID[$uid] = $uid;
        }
        $count = 1;
        $count2 = 1;

        // ##########################################################
        // ************* START ELEMENT FUNCTION *********************
        // ##########################################################
        function startElement($parser, $name, $attrs) {
            global $insideitem, $tag, $name, $photos, $user_id;
            if ($insideitem) {
                $tag = $name;
            } elseif ($name == "ITEM") {
                $insideitem = true;
            }
        }

        function endElement($parser, $name) {
            global $insideitem, $tag, $name, $photos, $user_id;
            global $count, $count2, $db, $UN_ID;
            if ($name == "ITEM") {
                if (!$UN_ID[$unique_id]) {
                    $name = addslashes($name);
                    $photo1 = addslashes($photo);
                    $photo2 = addslashes($photo);
                    $photo3 = addslashes($photo);
                    $photo4 = addslashes($photo);
                    $user_id = addslashes($category);
                    $sql = "INSERT INTO table (
                                name, photo1, photo2, photo3, photo4, user_id
                            ) VALUES (
                                '$name', '$photo', '$photo', '$photo', '$photo', '$user_id'
                            )";
                    $resultupdate = mysql_query($sql);
                }
                $name = '';
                $photos = '';
                $user_id = '';
            }
        }

        function characterData($parser, $data) {
            global $insideitem, $tag, $name, $photos, $user_id;
            if ($insideitem) {
                switch ($tag) {
                    case "NAME":
                        $name .= $data;
                        break;
                    case "PHOTOS":
                        $photos .= $data;
                        break;
                    case "USER_ID":
                        $user_id .= $data;
                        break;
                }
            }
        }

        $xml_parser = xml_parser_create();
        xml_set_element_handler($xml_parser, "startElement", "endElement");
        xml_set_character_data_handler($xml_parser, "characterData");

        $fp = fopen("../myfile.xml", "r") or die("Error reading RSS data.");
        while ($data = fread($fp, 4096))
            // Parse each 4KB chunk with the XML parser created above
            xml_parse($xml_parser, $data, feof($fp))
                // Handle errors in parsing
                or die(sprintf("XML error: %s at line %d",
                    xml_error_string(xml_get_error_code($xml_parser)),
                    xml_get_current_line_number($xml_parser)));
        fclose($fp);

        // ##########################################################
        // *********************** FREE MEMORY **********************
        // ##########################################################
        xml_parser_free($xml_parser);
        ?>

    The number of <photo> tags can range between 1 and 4. I have tried searching everywhere for info on how to do this and tried everything, but I just can't get it. After several days of headaches, I really hope someone can enlighten me.
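
    A sketch of one way to collect the repeated <photo> children with this same expat-style parser (hedged: variable names follow the post; only the photo handling is shown): switch on the PHOTO tag rather than PHOTOS, accumulate its text, and push each completed photo into an array when its end tag fires.

        <?php
        function startElement($parser, $name, $attrs) {
            global $insideitem, $tag, $photo, $photos;
            if ($insideitem) {
                $tag = $name;
                if ($name == "PHOTO") $photo = '';   // start a fresh photo
            } elseif ($name == "ITEM") {
                $insideitem = true;
                $photos = array();                   // reset per item
            }
        }

        function endElement($parser, $name) {
            global $tag, $photo, $photos;
            if ($name == "PHOTO") $photos[] = $photo; // 1 to 4 entries per item
            $tag = '';                                // stop stray text collection
        }

        function characterData($parser, $data) {
            global $insideitem, $tag, $photo;
            if ($insideitem && $tag == "PHOTO") $photo .= $data;
        }

    Inside the ITEM end-tag handler, $photos[0] through $photos[3] (when set) then map naturally onto the photo1..photo4 columns of the INSERT.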

  • How can I display an extract from a MySQL database entry?

    - by ThatMacLad
    I've created a webpage that includes a blog section, and it currently displays the whole post on the homepage. I'd like it to display only a certain part of each entry, i.e. 50 words. I'd then like a "read more" button below the post that links to the post id; I currently use post.php?id=# (# = whatever the post id is). Here is the homepage:

        <!DOCTYPE html>
        <html>
        <head>
        <meta charset="UTF-8">
        <title>Blog Name</title>
        <link rel="stylesheet" href="css/style.css" type="text/css" />
        <body>
        <div id="upper-bar">
            <div id="bar-content">
                <a href="#">Home</a>
                <a href="#">Archives</a>
                <a href="#">Contact</a>
                <a href="#">About</a>
                <a href="#"><img src="images/twitter.png" id="tweet"></a><a href="#"><img src="images/feed.png" id="feed"></a>
            </div>
        </div>
        <div id="clear"> </div>
        <div class="main">
        <h1>Blog Name</h1>
        <div class="post-col">
        <?php
        mysql_connect('localhost', 'root', 'root');
        mysql_select_db('tmlblog');
        $sql = "SELECT * FROM php_blog ORDER BY timestamp DESC LIMIT 5";
        $result = mysql_query($sql) or print("Can't select entries from table php_blog.<br />" . $sql . "<br />" . mysql_error());
        while ($row = mysql_fetch_array($result)) {
            $date  = date("l F d Y", $row['timestamp']);
            $title = stripslashes($row['title']);
            $entry = stripslashes($row['entry']);
            $id    = $row['id'];
        ?>
            <div id='post-info'><?php echo "<p id='title'><strong><a href=\"post.php?id=" . $id . "\">" . $title . "</a></strong></p>"; ?><br /></div>
            <div id="post">
                <?php echo $entry; ?>
                <!--<br /><br /> Posted on <?php echo $date; ?> !-->
                </p>
            </div>
            </p>
        </div>
        <?php } ?>
        </div>
        </div>
        </body>
        </html>
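
    A minimal sketch of a word-based excerpt (the helper name and the 50-word limit are illustrative; $entry and $id come from the loop above):

        <?php
        // Return the first $limit words of $text, with tags stripped so a
        // truncated tag can't break the page layout.
        function excerpt($text, $limit = 50) {
            $words = preg_split('/\s+/', strip_tags($text));
            if (count($words) <= $limit) {
                return $text;
            }
            return implode(' ', array_slice($words, 0, $limit)) . '&hellip;';
        }

        echo '<div id="post">' . excerpt($entry) . '</div>';
        echo '<p><a href="post.php?id=' . $id . '">Read more</a></p>';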

  • Shrinking an image by 57% and centering it inside a CSS structure

    - by Johua
    Hi, I'm really stuck. I'll go step by step and try to keep it short. This is the HTML structure:

        <li class="FAVwithimage">
          <a href="">
            <img src="pics/Joshua.png">
            <span class="name">Joshua</span>
            <span class="comment">Developer</span>
            <span class="arrow"></span>
          </a>
        </li>

    Before I paste the CSS classes, some info about the exact goal:

    1. Resize the picture (img) by 57%. If it cannot be done with CSS, then a jQuery/JavaScript solution. For example: if the original pic is 240x240px, I need to resize it by 57%; a pic of 400x400 would of course still be bigger after resizing.
    2. After resizing, the picture needs to be centered vertically and horizontally inside boundaries of 68x90.

    So you have an LI element, which contains an A element, and inside the A we have the IMG; the IMG is resized by 57% and centered, where the maximum width can be 68px and the maximum height 90px. For that to work I was wrapping a SPAN element around the IMG:

        <li class="FAVwithimage">
          <a href="">
            <span class="picHolder"><img src="pics/Joshua.png"></span>
            <span class="name">Joshua</span>
            <span class="comment">Developer</span>
            <span class="arrow"></span>
          </a>
        </li>

    Then I would give the span element display: block and w=68px, h=90px, but unfortunately that didn't work. I know it's a long post, but I've done my best to describe it simply. Below are the CSS classes and a picture showing what I need.

        li.FAVwithimage {
            height: 90px !important;
        }
        li.FAVwithimage a, li.FAVwithimage:hover a {
            height: 81px !important;
        }

    That's all that's relevant; I have not included the classes for name, comment and arrow. And now the classes that are incomplete and refer to the IMG:

        li.FAVwithimage a span.picHolder {
            /* put the picHolder at the beginning of the LI element */
            position: absolute;
            left: 0;
            top: 0;
            width: 68px;
            height: 90px;
            display: block;
            border: 1px solid #F00; /* temporary, to show the actual picHolder */
        }

    This positions picHolder at the beginning of the LI, with width and height set.

        li.FAVwithimage span.picHolder img {
            max-width: 68px !important;
            max-height: 90px !important;
        }

    This is the class that should shrink the pic by 57% and center it inside picHolder. Here I have a drawing describing what I need:
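
    CSS alone can't compute 57% of an image's natural size, so a small script is the usual route. A hedged sketch with jQuery (which the post allows as a fallback); clamping to the 68x90 box while preserving the aspect ratio is a design choice, so it is only noted in a comment:

        jQuery(function() {
            jQuery('li.FAVwithimage span.picHolder img').each(function() {
                var el = jQuery(this);
                var probe = new Image();   // load a copy to read the natural size
                probe.onload = function() {
                    var w = Math.round(probe.width * 0.57);
                    var h = Math.round(probe.height * 0.57);
                    // (optionally clamp w/h to 68/90 here, keeping the ratio)
                    el.css({
                        width: w + 'px',
                        height: h + 'px',
                        position: 'absolute',  // relative to the positioned picHolder
                        left: Math.round((68 - w) / 2) + 'px',
                        top: Math.round((90 - h) / 2) + 'px'
                    });
                };
                probe.src = el.attr('src');
            });
        });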

  • Upgrading from TFS 2010 RC to TFS 2010 RTM done

    - by Martin Hinshelwood
    Today is the big day, with the Launch of Visual Studio 2010 already done in Asia, and rolling around the world towards us, we are getting ready for the RTM (Released). We have had TFS 2010 in Production for nearly 6 months and have had only minimal problems. Update 12th April 2010  – Added Scott Hanselman’s tweet about the MSDN download release time. SSW was the first company in the world outside of Microsoft to deploy Visual Studio 2010 Team Foundation Server to production, not once, but twice. I am hoping to make it 3 in a row, but with all the hype around the new version, and with it being a production release and not just a go-live, I think there will be a lot of competition. Developers: MSDN will be updated with #vs2010 downloads and details at 10am PST *today*! @shanselman - Scott Hanselman Same as before, we need to Uninstall 2010 RC and install 2010 RTM. The installer will take care of all the complexity of actually upgrading any schema changes. If you are upgrading from TFS 2008 to TFS2010 you can follow our Rules To Better TFS 2010 Migration and read my post on our successes.   We run TFS 2010 in a Hyper-V virtual environment, so we have the advantage of running a snapshot as well as taking a DB backup. Done - Snapshot the hyper-v server Microsoft does not support taking a snapshot of a running server, for very good reason, and Brian Harry wrote a post after my last upgrade with the reason why you should never snapshot a running server. Done - Uninstall Visual Studio Team Explorer 2010 RC You will need to uninstall all of the Visual Studio 2010 RC client bits that you have on the server. Done - Uninstall TFS 2010 RC Done - Install TFS 2010 RTM Done - Configure TFS 2010 RTM Pick the Upgrade option and point it at your existing “tfs_Configuration” database to load all of the existing settings Done - Upgrade the SharePoint Extensions Upgrade Build Servers (Pending) Test the server The back out plan, and you should always have one, is to restore the snapshot. Upgrading to Team Foundation Server 2010 – Done The first thing you need to do is off the TFS server and then log into the Hyper-v server and create a snapshot. Figure: Make sure you turn the server off and delete all old snapshots before you take a new one I noticed that the snapshot that was taken before the Beta 2 to RC upgrade was still there. You should really delete old snapshots before you create a new one, but in this case the SysAdmin (who is currently tucked up in bed) asked me not to. I guess he is worried about a developer messing up his server Turn your server on and wait for it to boot in anticipation of all the nice shiny RTM’ness that is coming next. The upgrade procedure for TFS2010 is to uninstal the old version and install the new one. Figure: Remove Visual Studio 2010 Team Foundation Server RC from the system.   Figure: Most of the heavy lifting is done by the Uninstaller, but make sure you have removed any of the client bits first. Specifically Visual Studio 2010 or Team Explorer 2010.  Once the uninstall is complete, this took around 5 minutes for me, you can begin the install of the RTM. Running the 64 bit OS will allow the application to use more than 2GB RAM, which while not common may be of use in heavy load situations. Figure: It is always recommended to install the 64bit version of a server application where possible. 
I do not think it is likely, with SharePoint 2010 and Exchange 2010  and even Windows Server 2008 R2 being 64 bit only, I do not think there will be another release of a server app that is 32bit. You then need to choose what it is you want to install. This depends on how you are running TFS and on how many servers. In our case we run TFS and the Team Foundation Build Service (controller only) on out TFS server along with Analysis services and Reporting Services. But our SharePoint server lives elsewhere. Figure: This always confuses people, but in reality it makes sense. Don’t install what you do not need. Every extra you install has an impact of performance. If you are integrating with SharePoint you will need to run this install on every Front end server in your farm and don’t forget to upgrade your Build servers and proxy servers later. Figure: Selecting only Team Foundation Server (TFS) and Team Foundation Build Services (TFBS)   It is worth noting that if you have a lot of builds kicking off, and hence a lot of get operations against your TFS server, you can use a proxy server to cache the source control on another server in between your TFS server and your build servers. Figure: Installing Microsoft .NET Framework 4 takes the most time. Figure: Now run Windows Update, and SSW Diagnostic to make sure all your bits and bobs are up to date. Note: SSW Diagnostic will check your Power Tools, Add-on’s, Check in Policies and other bits as well. Configure Team Foundation Server 2010 – Done Now you can configure the server. If you have no key you will need to pick “Install a Trial Licence”, but it is only £500, or free with a MSDN subscription. Anyway, if you pick Trial you get 90 days to get your key. Figure: You can pick trial and add your key later using the TFS Server Admin. Here is where the real choices happen. We are doing an Upgrade from a previous version, so I will pick Upgrade the same as all you folks that are using the RC or TFS 2008. Figure: The upgrade wizard takes your existing 2010 or 2008 databases and upgraded them to the release.   Once you have entered your database server name you can click “List available databases” and it will show what it can upgrade. Figure: Select your database from the list and at this point, make sure you have a valid backup. At this point you have not made ANY changes to the databases. At this point the configuration wizard will load configuration from your existing database if you have one. If you are upgrading TFS 2008 refer to Rules To Better TFS 2010 Migration. Mostly during the wizard the default values will suffice, but depending on the configuration you want you can pick different options. Figure: Set the application tier account and Authentication method to use. We use NTLM to keep things simple as we host our TFS server externally for our remote developers.  Figure: Setting your TFS server URL’s to be the remote URL’s allows the reports to be accessed without using VPN. Very handy for those remote developers. Figure: Detected the existing Warehouse no problem. Figure: Again we love green ticks. It gives us a warm fuzzy feeling. Figure: The username for connecting to Reporting services should be a domain account (if you are on a domain that is). Figure: Setup the SharePoint integration to connect to your external SharePoint server. You can take the option to connect later.   You then need to run all of your readiness checks. These check can save your life! 
it will check all of the settings that you have entered as well as checking all the external services are configures and running properly. There are two reasons that TFS 2010 is so easy and painless to install where previous version were not. Microsoft changes the install to two steps, Install and configuration. The second reason is that they have pulled out all of the stops in making the install run all the checks necessary to make sure that once you start the install that it will complete. if you find any errors I recommend that you report them on http://connect.microsoft.com so everyone can benefit from your misery.   Figure: Now we have everything setup the configuration wizard can do its work.  Figure: Took a while on the “Web site” stage for some point, but zipped though after that.  Figure: last wee bit. TFS Needs to do a little tinkering with the data to complete the upgrade. Figure: All upgraded. I am not worried about the yellow triangle as SharePoint was being a little silly Exception Message: TF254021: The account name or password that you specified is not valid. (type TfsAdminException) Exception Stack Trace:    at Microsoft.TeamFoundation.Management.Controls.WizardCommon.AccountSelectionControl.TestLogon(String connectionString)    at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) [Info   @16:10:16.307] Benign exception caught as part of verify: Exception Message: TF255329: The following site could not be accessed: http://projects.ssw.com.au/. The server that you specified did not return the expected response. Either you have not installed the Team Foundation Server Extensions for SharePoint Products on this server, or a firewall is blocking access to the specified site or the SharePoint Central Administration site. For more information, see the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=161206). (type TeamFoundationServerException) Exception Stack Trace:    at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.VerifyTeamFoundationSharePointExtensions(ICredentials credentials, Uri url)    at Microsoft.TeamFoundation.Admin.VerifySharePointSitesUrl.Verify() Inner Exception Details: Exception Message: TF249064: The following Web service returned an response that is not valid: http://projects.ssw.com.au/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. Either the extensions are not installed, the request resulted in HTML being returned, or there is a problem with the URL. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://projects.ssw.com.au. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerInvalidResponseException) Exception Data Dictionary: ResponseStatusCode = InternalServerError I’ll look at SharePoint after, probably the SharePoint box just needs a restart or a kick If there is a problem with SharePoint it will come out in testing, But I will definatly be passing this on to Microsoft.   Upgrading the SharePoint connector to TFS 2010 You will need to upgrade the Extensions for SharePoint Products and Technologies on all of your SharePoint farm front end servers. To do this uninstall  the TFS 2010 RC from it in the same way as the server, and then install just the RTM Extensions. 
Figure: Only install the SharePoint Extensions on your SharePoint front end servers. TFS 2010 supports both SharePoint 2007 and SharePoint 2010.
Figure: When you configure SharePoint it uploads all of the solutions and templates.
Figure: Everything is uploaded successfully.
Figure: TFS even remembered the settings from the previous installation, fantastic.
Upgrading the Team Foundation Build Servers to TFS 2010
Just like on the SharePoint servers, you will need to upgrade the build servers to the RTM. Just uninstall TFS 2010 RC and then install only the Team Foundation Build Services component. Unlike on the SharePoint server you will probably have some version of Visual Studio installed; you will need to remove this as well. (Coming Soon)
Connecting Visual Studio 2010 / 2008 / 2005 and Eclipse to TFS 2010
If you have developers still on Visual Studio 2005 or 2008 you will need to download the respective compatibility pack:
Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
If you are using Eclipse you can download the new Team Explorer Everywhere install for connecting to TFS. Get your developers to check that they have the latest versions of their applications with SSW Diagnostic, which will check for Service Packs and hotfixes to Visual Studio as well.
Technorati Tags: TFS,TFS2010,TFS 2010,Upgrade

    Read the article

  • Making Sense of ASP.NET Paths

    - by Rick Strahl
ASP.NET includes quite a plethora of properties to retrieve path information about the current request, control and application. There's a ton of information available about paths on the Request object, some of it appearing to overlap and some of it buried several levels down, and it can be confusing to find just the right path that you are looking for. To keep things straight I thought it a good idea to summarize the path options along with descriptions and example paths. I wrote a post about this a long time ago in 2004 and I find myself frequently going back to that page to quickly figure out which path I'm looking for in processing the current URL. Apparently a lot of people must be doing the same, because the original post is the second most visited even to this date on this blog, to the tune of nearly 500 hits per day. So, I decided to update and expand a bit on the original post with a little more information and clarification based on the original comments.
Request Object Paths Available
Here's a list of the path related properties on the Request object (and the Page object). Assume a path like http://www.west-wind.com/webstore/admin/paths.aspx for the paths below, where webstore is the name of the virtual directory.
ApplicationPath: Returns the web root-relative logical path to the virtual root of this app. Example: /webstore/
PhysicalApplicationPath: Returns the local file system path of the virtual root for this app. Example: c:\inetpub\wwwroot\webstore
PhysicalPath: Returns the local file system path to the current script or path. Example: c:\inetpub\wwwroot\webstore\admin\paths.aspx
Path, FilePath, CurrentExecutionFilePath: All of these return the full root relative logical path to the script page including path and script name. CurrentExecutionFilePath will return the 'current' request path after a Transfer/Execute call while FilePath will always return the original request's path. Example: /webstore/admin/paths.aspx
AppRelativeCurrentExecutionFilePath: Returns an ASP.NET root relative virtual path to the script or path for the current request. If in a Transfer/Execute call the transferred path is returned. Example: ~/admin/paths.aspx
PathInfo: Returns any extra path following the script name (the /ExtraPathInfo portion of /webstore/admin/paths.aspx/ExtraPathInfo). Returns string.Empty if no PathInfo is available.
RawUrl: Returns the full root relative URL including querystring and extra path as a string. Example: /webstore/admin/paths.aspx?sku=wwhelp40
Url: Returns a fully qualified URL including querystring and extra path. Note this is a Uri instance rather than a string. Example: http://www.west-wind.com/webstore/admin/paths.aspx?sku=wwhelp40
UrlReferrer: The fully qualified URL of the page that sent the request. This is also a Uri instance, and this value is null if the page was accessed directly by typing into the address bar or if the client did not send a Referrer HTTP header. Example: http://www.west-wind.com/webstore/default.aspx?Info
Control.TemplateSourceDirectory: Returns the logical path to the folder of the page, master or user control on which it is called. This is useful if you need to know the path only to a Page or control from within the control. For non-file controls this returns the Page path. Example: /webstore/admin/
As you can see there's a ton of information available for each of the three common path formats:
Physical Path: an OS-style path that points to a folder or file on disk.
Logical Path: a Web path that is relative to the Web server's root. It includes the virtual directory plus the application relative path.
~/ (Root-relative) Path: an ASP.NET specific path that uses ~/ to indicate the virtual root Web path.
ASP.NET can convert virtual paths into either logical paths using Control.ResolveUrl(), or physical paths using Server.MapPath(). Root relative paths are useful for specifying portable URLs that don't rely on relative directory structures and are very useful from within control or component code. You should be able to get any necessary format from ASP.NET from just about any path or script using these mechanisms.
~/ Root Relative Paths and ResolveUrl() and ResolveClientUrl()
ASP.NET supports root-relative virtual path syntax in most of its URL properties in Web Forms. So you can easily specify a root relative path in a control rather than a location relative path: <asp:Image runat="server" ID="imgHelp" ImageUrl="~/images/help.gif" /> ASP.NET internally resolves this URL by using ResolveUrl("~/images/help.gif") to arrive at the root-relative URL of /webstore/images/help.gif, which uses the Request.ApplicationPath as the base path to replace the ~. By convention any custom Web controls should also use ResolveUrl() on URL properties to provide the same functionality. In your own code you can use Page.ResolveUrl() or Control.ResolveUrl() to accomplish the same thing: string imgPath = this.ResolveUrl("~/images/help.gif"); imgHelp.ImageUrl = imgPath; Unfortunately ResolveUrl() is limited to WebForm pages, so if you're in an HttpHandler or Module it's not available. ASP.NET MVC also has its own more generic version of ResolveUrl in Url.Content: <script src="<%= Url.Content("~/scripts/new.js") %>" type="text/javascript"></script> which is part of the UrlHelper class. In ASP.NET MVC the above sort of syntax is actually even more crucial than in WebForms, due to the fact that views are not referencing specific pages but rather are often path based, which can lead to various variations on how a particular view is referenced. In Module or Handler code Control.ResolveUrl() unfortunately is not available, which in retrospect seems like an odd design choice – URL resolution really should happen on a Request basis, not as part of the Page framework. Luckily you can also rely on the static VirtualPathUtility class: string path = VirtualPathUtility.ToAbsolute("~/admin/paths.aspx"); VirtualPathUtility also has many other quite useful methods for dealing with paths and converting between the various kinds of paths supported. One thing to watch out for is that ToAbsolute() will throw an exception if a query string is provided, and it doesn't work on fully qualified URLs. I wrote about this topic with a custom solution that works with fully qualified URLs and query strings here (check the comments for some interesting discussions too); a minimal version of the same idea follows below. Similar to ResolveUrl() is ResolveClientUrl(), which creates a fully qualified HTTP path that includes the protocol and domain name. It's rare that this full resolution is needed but it can be useful in some scenarios.
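To give a flavor of that workaround, here is a minimal sketch of a query-string-safe wrapper around ToAbsolute() – my own illustration of the approach, not the full solution from the linked post:

    public static string ResolveServerUrl(string virtualPath)
    {
        // VirtualPathUtility.ToAbsolute() throws if a query string is present,
        // so split the query string off first and re-append it afterwards
        string query = string.Empty;
        int queryIndex = virtualPath.IndexOf('?');
        if (queryIndex > -1)
        {
            query = virtualPath.Substring(queryIndex);
            virtualPath = virtualPath.Substring(0, queryIndex);
        }
        return VirtualPathUtility.ToAbsolute(virtualPath) + query;
    }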
Mapping Virtual Paths to Physical Paths with Server.MapPath()
If you need to map root relative or current folder relative URLs to physical paths you can use HttpContext.Current.Server.MapPath(). Inside of a Page you can do the following: string physicalPath = Server.MapPath("~/scripts/ww.jquery.js"); MapPath is pretty flexible and it understands both ASP.NET style virtual paths as well as plain relative paths, so the following also works: string physicalPath = Server.MapPath("scripts/silverlight.js"); as well as dot relative syntax: string physicalPath = Server.MapPath("../scripts/jquery.js"); Once you have the physical path you can perform standard System.IO Path and File operations on the file. Remember, with physical paths and IO or copy operations you need to make sure you have permissions to access files and folders based on the Web server user account that is active (NETWORK SERVICE or ASPNET typically). Note that Server.MapPath will not map up beyond the virtual root of the application, for security reasons.
Server and Host Information
Between these settings you can get all the information you may need to figure out where you are at and to build a new URL if necessary. If you need to build a URL completely from scratch you can get access to information about the server you are accessing:
SERVER_NAME: The name of the domain or the IP address. Example: www.west-wind.com or 127.0.0.1
SERVER_PORT: The port that the request runs under. Example: 80
SERVER_PORT_SECURE: Determines whether https was used. Example: 0 or 1
APPL_MD_PATH: ADSI DirectoryServices path to the virtual root directory. Note that LM typically doesn't work for ADSI access, so you should replace that with LOCALHOST or the machine's NetBios name. Example: /LM/W3SVC/1/ROOT/webstore
Request.Url and Uri Parsing
If you still need more control over the current request URL, or you need to create new URLs from an existing one, the current Request.Url Uri property offers a lot of control. Using the Uri class and UriBuilder makes it easy to retrieve parts of a URL and create new URLs based on an existing URL. The UriBuilder class is the preferred way to create URLs – much preferable over creating URIs via string concatenation.
Scheme: The URL scheme or protocol prefix. Example: http or https
Port: The port, if specifically specified.
DnsSafeHost: The domain name or local host NetBios machine name. Example: www.west-wind.com or rasnote
LocalPath: The full path of the URL including script name and extra PathInfo. Example: /webstore/admin/paths.aspx
Query: The query string, if any. Example: ?id=1
The Uri class itself is great for retrieving Uri parts, but most of the properties are read only. If you need to modify a URL you can use the UriBuilder class to load up an existing URL and modify it to create a new one. Here are a few common operations I've needed to do to get specific URLs:
Convert the Request URL to an SSL/HTTPS link
For example, to take the current request URL and convert it to a secure URL: UriBuilder build = new UriBuilder(Request.Url); build.Scheme = "https"; build.Port = -1; // don't inject port Uri newUri = build.Uri; string newUrl = build.ToString();
Retrieve the fully qualified URL without a QueryString
AFAIK, there's no native routine to retrieve the current request URL without the query string. It's easy to do with UriBuilder however: UriBuilder builder = new UriBuilder(Request.Url); builder.Query = ""; string logicalPathWithoutQuery = builder.ToString();
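Going the other direction – adding or replacing a query string value on the current URL – works the same way, combining UriBuilder with HttpUtility.ParseQueryString(). A small sketch of my own (the 'page' parameter is purely illustrative):

    UriBuilder builder = new UriBuilder(Request.Url);
    // ParseQueryString returns a writable collection that serializes back to k=v&k2=v2
    var query = HttpUtility.ParseQueryString(builder.Query);
    query["page"] = "2";               // add or replace the hypothetical 'page' parameter
    builder.Query = query.ToString();  // UriBuilder re-adds the leading '?' itself
    string newUrl = builder.ToString();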
What else? I took a look through the old post's comments and addressed as many of the questions and comments that came up in there. With a few small and silly exceptions this update post handles most of these. But I'm sure there are more things that could go in here. What else would be useful to put onto this post so it serves as a nice all-in-one place to go for path references? If you think of something leave a comment and I'll try to update the post with it in the future. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • Mapping UrlEncoded POST Values in ASP.NET Web API

    - by Rick Strahl
If there's one thing that's a bit unexpected in ASP.NET Web API, it's the limited support for mapping url encoded POST data values to simple parameters of ApiController methods. When I first looked at this I thought I was doing something wrong, because it seems mighty odd that you can bind query string values to parameters by name, but can't bind POST values to parameters in the same way. To demonstrate, here's a simple example. If you have a Web API method like this: [HttpGet] public HttpResponseMessage Authenticate(string username, string password) { …} and then hit it with a URL like this: http://localhost:88/samples/authenticate?Username=ricks&Password=sekrit it works just fine. The query string values are mapped to the username and password parameters of our API method. But if you now change the method to work with [HttpPost] instead, like this: [HttpPost] public HttpResponseMessage Authenticate(string username, string password) { …} and hit it with a POST HTTP request like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Content-type: application/x-www-form-urlencoded Content-Length: 30 Username=ricks&Password=sekrit you'll find that while the request works, it doesn't actually receive the two string parameters. The username and password parameters are null, and so the method is definitely going to fail. When I mentioned this over Twitter a few days ago I got a lot of responses back asking why I'd want to do this in the first place – after all, HTML form submissions are the domain of MVC and not Web API, which is a valid point. However, the more common use case is using POST variables with AJAX calls. The following is quite common for passing simple values: $.post(url,{ Username: "Rick", Password: "sekrit" },function(result) {…}); but alas that doesn't work.
How ASP.NET Web API handles Content Bodies
Web API supports parsing content data in a variety of ways, but it does not deal with multiple posted content values. In effect you can only post a single content value to a Web API action method. That one parameter can be very complex and you can bind it in a variety of ways, but ultimately you're tied to a single POST content value in your parameter definition. While it's possible to support multiple parameters on a POST/PUT operation, only one parameter can be mapped to the actual content – the rest have to be mapped to route values or the query string. Web API treats the whole request body as one big chunk of data that is sent to a Media Type Formatter that's responsible for de-serializing the content into whatever value the method requires. The restriction comes from the async nature of Web API, where the request data is read only once inside of the formatter that retrieves and deserializes it. Because it's read once, checking for content (like individual POST variables) first is not possible. However, Web API does provide a couple of ways to access the form POST data:
Model Binding – object property mapping to bind POST values
FormDataCollection – collection of POST keys/values
ModelBinding POST Values – Binding POST data to Object Properties
The recommended way to handle POST values in Web API is to use Model Binding, which maps individual urlencoded POST values to properties of a model object provided as the parameter.
Model binding requires a single object as input to be bound to the POST data, with each POST key that matches a property name (including nested properties like Address.Street) being mapped and updated, including automatic type conversion of simple types. This is a very nice feature – and a familiar one from MVC – that makes it very easy to have model objects mapped directly from inbound data. The obvious drawback with Model Binding is that you need a model for it to work: you have to provide a strongly typed object that can receive the data, and this object has to map the inbound data. To rewrite the example above to use Model Binding I have to create a class that maps the properties that I need as parameters: public class LoginData { public string Username { get; set; } public string Password { get; set; } } and then accept the data like this in the API method: [HttpPost] public HttpResponseMessage Authenticate(LoginData login) { string username = login.Username; string password = login.Password; … } This works fine, mapping the POST values to the properties of the login object. As a side benefit of this method definition, the method now also allows posting of JSON or XML to the same endpoint. If I change my request to send JSON like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: application/json Content-type: application/json Content-Length: 40 {"Username":"ricks","Password":"sekrit"} it works as well, and transparently, courtesy of the nice Content Negotiation features of Web API. There's nothing wrong with using Model Binding, and in fact it's a common practice to use (view) model objects for inputs coming back from the client and to map them into these models. But it can be kind of a hassle if you have AJAX applications with a ton of backend hits, especially if many methods are very atomic and focused and don't effectively require a model or view. You don't always have to pass structured data; sometimes there are just a couple of simple values that need to be sent. If all you need is to pass a couple of operational parameters, creating a view model object just for parameter purposes seems like overkill. Maybe you can use the query string instead (if that makes sense), but if you can't then you can often end up with a plethora of 'message objects' that serve no further purpose than to make Model Binding work. Note that you can accept multiple parameters with Model Binding, so the following would still work: [HttpPost] public HttpResponseMessage Authenticate(LoginData login, string loginDomain) but only the object will be bound to POST data. As long as loginDomain comes from the query string or route data this will work.
Collecting POST values with FormDataCollection
Another, more dynamic approach to handling POST values is to collect POST data into a FormDataCollection. FormDataCollection is a very basic key/value collection (like FormCollection in MVC and Request.Form in ASP.NET in general); you then read the values out individually by querying each key: [HttpPost] public HttpResponseMessage Authenticate(FormDataCollection form) { var username = form.Get("Username"); var password = form.Get("Password"); …} The downside to this approach is that it's not strongly typed, you have to handle type conversions on non-string parameters, and it gets a bit more complicated to test such a setup, as you have to seed a FormDataCollection with data. On the other hand it's flexible and easy to use, especially with string parameters. It's also dynamic, so if the client sends you a variety of combinations of values on which you make operating decisions, this is much easier to work with than a strongly typed object that would have to account for all possible values up front. The downside is that the code looks old school and isn't as self-documenting as a parameter list or object parameter would be. Nevertheless it's totally functional and a viable choice for collecting POST values.
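Since everything comes back from FormDataCollection as a string, non-string values need explicit conversion. A small illustrative sketch of my own (the action and parameter names are made up, not from the original post):

    [HttpPost]
    public HttpResponseMessage UpdateQuantity(FormDataCollection form)
    {
        // Form values always arrive as strings - convert and validate explicitly
        int quantity;
        if (!int.TryParse(form.Get("Quantity"), out quantity))
            return Request.CreateErrorResponse(HttpStatusCode.BadRequest,
                                               "Quantity must be a whole number");
        // ... apply the update using the converted value ...
        return Request.CreateResponse(HttpStatusCode.OK, quantity);
    }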
What about [FromBody]?
Web API also has a [FromBody] attribute that can be assigned to parameters. If you have multiple parameters on a Web API method signature you can use [FromBody] to specify which one will be parsed from the POST content. Unfortunately it's not terribly useful, as it only returns content in raw format and requires a totally non-standard format ("=content") to specify your content. For more info on how [FromBody] works and several related issues with how POST data is mapped, you can check out Mike Stall's post: How WebAPI does Parameter Binding. I'm not really sure where the Web API team thought [FromBody] would be a good fit, other than as a down and dirty way to send a full string buffer.
Extending Web API to make multiple POST Vars work? Don't think so
Clearly there's no native support for multiple POST variables being mapped to parameters, which is a bit of a bummer. I know in my own work on one project my customer actually found this to be a real sticking point in their AJAX backend work, and we ended up not using Web API and using MVC JSON features instead (a minimal sketch of that fallback follows below). That's kind of sad because Web API is supposed to be the proper solution for AJAX backends. With all of ASP.NET Web API's extensibility you'd think there would be some way to build this functionality on our own, but after spending a bit of time digging and asking some of the experts from the team and the Web API community I didn't hear anything that even suggests that this is possible. From what I could find I'd say it's not possible, primarily because Web API's routing engine does not account for the POST variable mapping. This means [HttpPost] methods with url encoded POST buffers are not mapped to the parameters of the endpoint, and so the routes would never even trigger a request that could be intercepted. Once the routing doesn't work there's not much that can be done. If somebody has an idea how this could be accomplished I would love to hear about it.
Do we really need multi-value POST mapping?
I think that POST value mapping is a feature that one would expect of any API tool. If you look at common APIs out there like Flickr and Google Maps, they all work with POST data. POST data is very prominent, much more so than JSON inputs, and so supporting as many input options as possible would seem to be crucial. All that aside, Web API does provide very nice features with Model Binding that allow you to capture many POST variables easily enough, and logistically this will let you build whatever you need with POST data of all shapes as long as you map objects. But having to have an object for every operation that receives a data input is going to take its toll in heavy AJAX applications, with a lot of types created that do nothing more than act as parameter containers. I also think that POST variable mapping is an expected behavior, and Web API's non-support will likely result in many, many questions like this one: How do I bind a simple POST value in ASP.NET WebAPI RC? with no clear answer to this question.
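For contrast, here's roughly what that MVC fallback looks like – a sketch of the general pattern, assuming a plain MVC controller rather than anything from the original project:

    public class AccountController : Controller
    {
        [HttpPost]
        public ActionResult Authenticate(string username, string password)
        {
            // MVC's model binder happily maps multiple POST values to simple parameters
            bool valid = (username == "ricks" && password == "sekrit"); // demo check only
            return Json(new { success = valid });
        }
    }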
I hope that for v.next of Web API Microsoft will consider this a feature that's worth adding.
Related Articles: Passing multiple POST parameters to Web API Controller Methods; Mike Stall's post: How Web API does Parameter Binding; Where does ASP.NET Web API Fit?
© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property. While applying GZip and Deflate behavior is pretty easy, there are a few caveats that you have to watch out for, as I found out today with an application that was throwing up some garbage data. But before looking at caveats let's review GZip implementation for ASP.NET.
ASP.NET GZip/Deflate Basics
Response filters are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream which, if a filter is present, is also written through the filter stream's interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages, .NET includes the GZipStream and DeflateStream stream classes, which can be readily assigned to the Response.Filter property. With these two stream classes in place it's almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods – IsGZipSupported and GZipEncodePage – that check whether the client supports GZip encoding and then actually encode the current output (note that although the method includes 'Page' in its name this code will work with any ASP.NET output). The listing below reconstructs both methods; the Vary header line at the end is the piece the text quotes further down:

    /// <summary>
    /// Determines if GZip is supported
    /// </summary>
    public static bool IsGZipSupported()
    {
        string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
        if (!string.IsNullOrEmpty(AcceptEncoding) &&
            (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
            return true;
        return false;
    }

    /// <summary>
    /// Sets up the current page or handler to use GZip through a Response.Filter
    /// IMPORTANT: You have to call this method before any output is generated!
    /// </summary>
    public static void GZipEncodePage()
    {
        HttpResponse Response = HttpContext.Current.Response;
        if (IsGZipSupported())
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (AcceptEncoding.Contains("deflate"))
            {
                Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                      System.IO.Compression.CompressionMode.Compress);
                Response.Headers.Remove("Content-Encoding");
                Response.AppendHeader("Content-Encoding", "deflate");
            }
            else
            {
                Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                      System.IO.Compression.CompressionMode.Compress);
                Response.Headers.Remove("Content-Encoding");
                Response.AppendHeader("Content-Encoding", "gzip");
            }
            // Allow proxy servers to cache encoded and unencoded versions separately
            Response.AppendHeader("Vary", "Content-Encoding");
        }
    }

As you can see, the actual assignment of the filter is as simple as: Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding and ensure that any pages that are cached in proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding) – hence the Content-Encoding and Vary headers set above.
To use this utility function is trivially easy: in any ASP.NET code that wants to compress its Response output you simply use: protected void Page_Load(object sender, EventArgs e) { WebUtils.GZipEncodePage(); Entry = WebLogFactory.GetEntry(); var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount, "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries"); if (entries == null) throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage); this.repEntries.DataSource = entries; this.repEntries.DataBind(); } Here I use an ASP.NET page, but the above WebUtils.GZipEncodePage() call will work in any ASP.NET application type, including HTTP handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation, output over a certain size is GZip encoded by default. The output that is generated is JSON or XML, and if the output is over 5k in size I apply WebUtils.GZipEncodePage(): if (sbOutput.Length > GZIP_ENCODE_TRESHOLD) WebUtils.GZipEncodePage(); Response.ContentType = ControlResources.STR_JsonContentType; HttpContext.Current.Response.Write(sbOutput.ToString()); Ok, so you probably get the idea: encoding GZip/Deflate content is pretty easy.
Hold on there Hoss – Watch your Caching
Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The first issue is that you need to deal with the fact that some clients don't support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic HTTP client accessing your content, GZip/Deflate support is by no means guaranteed. For example, WinInet HTTP clients don't support GZip out of the box – it has to be explicitly implemented. Other low level HTTP clients on other platforms also don't support GZip out of the box. The problem is that your application, your Web server and proxy servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or, worse, the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients*, which is also very hard to debug and fix once in production. You already saw the issue of proxy servers addressed in the GZipEncodePage() function: // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); This ensures that any proxy servers also check the Content-Encoding HTTP header to cache their content – not just the URL. The same thing applies if you do output caching in your own ASP.NET code. If you generate output for GZip on an OutputCached page, the GZipped content will be cached (either by ASP.NET's cache or in some cases by the IIS kernel cache). But what if the next client doesn't support GZip? She'll get served a cached GZip page that won't decode and she'll get a page full of garbage. Wholly undesirable.
To fix this you need to add some custom OutputCache rules by way of the GetVaryByCustomString() HttpApplication method in your global.asax file: public override string GetVaryByCustomString(HttpContext context, string custom) { // Override Caching for compression if (custom == "GZIP") { string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"]; if (string.IsNullOrEmpty(acceptEncoding)) return ""; else if (acceptEncoding.Contains("gzip")) return "GZIP"; else if (acceptEncoding.Contains("deflate")) return "DEFLATE"; return ""; } return base.GetVaryByCustomString(context, custom); } In a page that uses output caching you then specify: <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %> to use that custom rule.
It's all Fun and Games until ASP.NET throws an Error
Ok, so you're up and running with GZip, you have your caching squared away, and the pages that you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page (the screenshot shows a page full of binary garbage). Lovely, isn't it? What's happened here is that I have WebUtils.GZipEncodePage() applied to my page, but there's an error in the page. The error falls back to the ASP.NET error handler, and the error handler removes all existing output (good) and removes all the custom HTTP headers I've set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncodePage) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here's what Fiddler says about the raw HTTP header output when an error occurs while GZip encoding was applied: HTTP/1.1 500 Internal Server Error Cache-Control: private Content-Type: text/html; charset=utf-8 Date: Sat, 30 Apr 2011 22:21:08 GMT Content-Length: 2138 Connection: close ?`I?%&/m?{J?J??t??` … binary output stripped here. Notice: no Content-Encoding header, and that's why we're seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up, and in this case I've been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top: protected void Application_Error(object sender, EventArgs e) { // Remove any special filtering especially GZip filtering Response.Filter = null; … } And voila, I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for page level errors handled in Page_Error or ASP.NET MVC error handling methods in a controller.
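For completeness, a Page-level version might look like the following – a minimal sketch along the same lines, not code from the original post:

    protected void Page_Error(object sender, EventArgs e)
    {
        // Same idea as Application_Error: drop the compression filter so
        // uncompressed error output reaches the browser without a stale
        // Content-Encoding mismatch
        Response.Filter = null;
    }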
Another, and possibly even better, solution is to check whether a filter is attached just before the headers are sent to the client, as pointed out by Adam Schroeder in the comments: protected void Application_PreSendRequestHeaders() { // ensure that if GZip/Deflate Encoding is applied that headers are set // also works when error occurs if filters are still active HttpResponse response = HttpContext.Current.Response; if (response.Filter is GZipStream && response.Headers["Content-encoding"] != "gzip") response.AppendHeader("Content-encoding", "gzip"); else if (response.Filter is DeflateStream && response.Headers["Content-encoding"] != "deflate") response.AppendHeader("Content-encoding", "deflate"); } This uses the Application_PreSendRequestHeaders() pipeline event to check for a compression filter and adjusts the headers accordingly. This is actually a better solution since it is generic – it'll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or a short error display might get changed and the filter not cleared, and this code actually handles that. Sweet, thanks Adam. It's unfortunate that ASP.NET doesn't natively clear out Response.Filter when an error occurs, just as it clears the Response and headers. I can't see where leaving a filter in place in an error situation would make any sense, but hey – this is what it is, and it's easy enough to fix as long as you know where to look. Riiiight!
IIS and GZip
I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding and let IIS perform it for you rather than doing it in your code, by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic compression is also supported, but it is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I'll post another entry next with some more info on IIS compression since information on it seems to be a bit hard to come by.
Related Content: Built-in GZip/Deflate Compression in IIS 7.x; HttpWebRequest and GZip Responses
© Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS7

    Read the article

  • CodePlex Daily Summary for Monday, April 12, 2010

CodePlex Daily Summary for Monday, April 12, 2010
New Projects
3 Hour Game Design Contest: The 3 Hour Game Design Contest is a programming contest for making simple games in 3 hours. 3 hours may not seem like enough time to make a game, b...
BI Monkey SSIS ETL Framework: The BI Monkey SSIS ETL Framework is an ETL Execution, Control and Logging system for ETL projects using SSIS. It is supported by a SQL Server metad...
Blend Sample Data Helpers: Helper behavior classes to generate sample images and data from Internet sources such as Flickr images.
Bold TCP for Delphi 7: Open sourcing the Bold TCP for Delphi 7.
cfThreadingTools: This library project contains classes and extensions which allow easy handling of multi-threaded UI accesses.
CuBiX_SDL: CuBiX is a personal project.
Draglets: Draglets makes it easier for editors and CMS developers to move and reorder content on their web sites. It's developed in ASP.NET, C# with WCF and ...
DSQLT - Dynamic SQL Templates: Use Stored Procedures as templates for dynamic SQL statements. Substitute parameters @0-@9 with values like objectna...
Edtter: Edtter is a sample web application built on the ASP.NET MVC 2 Framework. (Japanese Version Only)
Forms Based Authentication Management - SharePoint2007FBA: This is my own update to Stacy Draper's FBABasic project for Forms Based Authentication in MOSS 2007. In addition to managing your FBA user's roles,...
Height Map to 3D World at XNA: Height Map to 3D World is an XNA project developed firstly by Eric Grossinger and secondly improved by Karadeniz Technical University Computer ...
HouseFly: A simple contact and note taking application
ITM 495 - iPhone Web App: School project
Kaufleute: This will be finished later
LR: this project is about connecting to
PowerShell Integration Services: A set of tools aimed at Extract Transform and Load tasks. Focused on getting the most common ETL tasks done without SSIS.
Salient: A collection of, hopefully, useful libraries.
Samurai.Validation: Extensible and flexible .Net object validation framework
Samurai.Workflow: Samurai Workflow is a slim, easy-to-use workflow framework for WPF applications.
SharePoint User Management WebPart: SharePoint User Management WebPart
Url shorte(ne)r: It's a simple Url Shortener (like http://tinyurl.com). Currently only Polish is supported. Multi-language support will be provided in future...
Yasbg: Yasbg (pronounced yas-bug) is Yet Another Static Blog Generator. It is made in C# using MarkdownSharp for markdown. Currently in alpha.
New Releases
.NET Extensions - Extension Methods Library: Release 2010.06: Added a universal approach for grouping extension methods like conversions. Conversions are now available on any data type (it's actually extension...
3 Hour Game Design Contest: 3H-GDC mVII: This is the collection of game files for the 7th 3H-GDC
B&W Port Scanner: Black`n`White Port Scanner 3.0: B&W Port Scanner 3 includes an FTP Server detection tool, better stability, optimized memory management, saving & opening result sets ... and more new...
BI Monkey SSIS ETL Framework: Framework v1 Alpha: This Alpha release is not fully tested and some functionality is not operating as intended.
Bluetooth Radar: Version 1.7: UI changes; Device UserControl; randomly placed devices.
BugTracker.NET: BugTracker.NET 3.4.1: For the tasks/time tracking feature, added a way of viewing all the tasks at once, not just the tasks for one bug. Also added a way of exporting a...
cfThreadingTools: cfThreadingTools 0.1.1.8: This is the first publicly available release. The following items are included: BaseTools class which allows thread-safe setting of properties and callin...
DeepZoom Pivot Constructor: DeepZoom Pivot Constructor v0.1: This is a test release of the library platform - targets .NET 3.5. No samples yet, etc., but it works well :-)
DSQLT - Dynamic SQL Templates: Initial release with License Included: nothing changed, but the license print procedure is included; the zip file contains a database backup, SQL script and readme
Forms Based Authentication Management - SharePoint2007FBA: SharePoint2007FBA 1.0.0.0: Downloads for the project solution and the WSP package. Please read the Setup Guide. If you are unfamiliar with setting up Forms Based Authenticati...
Foursquare BlogEngine Widget: foursquare widget for BlogEngine.NET version 0.3: To see the changes which have been made, visit http://philippkueng.ch/post/Foursquare-BlogEngineNET-widget-version-03.aspx For installation instruc...
Framework Detector: FrameworkDetect Support .NET 4 v2: FrameworkDetect Support .NET 4
Happy Turtle Plugins for BVI :: Repository Based Versioning for Visual Studio: Happy Turtle 1.0.46860: This is the second beta release of the SVN based version incrementor. Please feel free to create a thread in the discussion tabs and provide feedb...
Height Map to 3D World at XNA: 3DWorld: Just open the .rar file, extract it to any folder and run Proje2Dto3D.exe.
HTML Ruby: 6.20.2: Removed rubyLineSpace option; improved options panel; fixed ruby text font-size rendering issue with complex ruby annotation; removed more waste...
HTML Ruby: 6.20.3: Removed unused code; temporary partial fix for Firefox 3.7a4pre nightly build
HTML Ruby: 6.21.0: Added support for the current HTML5 ruby annotation format. All ruby annotations are converted to XHTML 1.1 complex ruby annotation.
Kooboo HTML form: Kooboo HTML Form Module for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0; upgrade to MVC 2
Kooboo Menu: Kooboo CMS Menu for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0; upgrade to MVC 2
Kooboo Meta: Kooboo Meta Module for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0; upgrade to MVC 2
Kooboo PageMenu: Kooboo CMS PageMenu for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0; upgrade to MVC 2
Kooboo Search: Kooboo CMS Search module for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0; upgrade to MVC 2
Numina Application/Security Framework: Numina.Framework Core 50212: Added bulk import user page; added General settings page for updating Company Name, Theme, and API Key; Add/Edit application calls full URL to h...
Rawr: Rawr 2.3.14: Rawr3: Tons of fixes for Rawr3 compatibility and UI. Significant performance improvements all around. More fixes and improvements to Wowhea...
Rich Ajax empowered Web/Cloud Applications: 6.4 beta 2: The first fully featured version of Visual WebGui offering a web/cloud development tool that puts all ASP.NET Ajax limits behind with enhanced perfor...
SharePoint User Management WebPart: User Management Web part 1.0: Most organizations have one SharePoint site configured with Windows authentication for internal employees with AD authenti...
SkeinLibManaged: SkeinLibManaged 1.1.0.0 (Beta): This is the compiled DLL with XML documentation, so there should be plenty of context sensitive help and Intellisense. This is the Release version,...
VCC: Latest build, v2.1.30411.0: Automatic drop of latest build
VFPX: Code References 1.1 Beta: Visit the Code References Info Page for complete information about this release.
VisioAutomation: VisioAutomation 2.5.0: General cleanup/bugfixes - many low-level changes to the VisioAutomation extension methods - these are far fewer now - This...
Visual Studio DSite: English To Spanish Translator (Visual C++ 2008): A simple English to Spanish translator made in Visual C++ 2008, using the Google Translate API.
WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.10.00: What's New - File Browser: inherits folder permissions from DotNetNuke; updated the Editor to Version 3.2.1 revision 5372; added CkEditor jQuery Adap...
Web/Cloud Applications Development Framework | Visual WebGui: 6.4 beta 2: The first fully featured version of Visual WebGui offering a web/cloud development tool that puts all ASP.NET Ajax limits behind with unique develope...
WPF Data Virtualization: 1.0.0.0: First release
Yasbg: Yasbg Alpha: Readme: Yet Another Static Blog Generator is a command line utility that generates static html files for blogs. Currently, it is NOT feed enabled. I...
異世界の新着動画: Ver. 10-04-12: Supports the Nico Nama (niconico Live) specification changes; added a poll duration setting
Most Popular Projects
WBFS Manager; Rawr; ASP.NET Ajax Library; Microsoft SQL Server Product Samples: Database; AJAX Control Toolkit; Silverlight Toolkit; Windows Presentation Foundation (WPF); ASP.NET; Microsoft SQL Server Community & Samples; Facebook Developer Toolkit
Most Active Projects
Rawr; nopCommerce. Open Source online shop e-commerce solution.; AutoPoco; patterns & practices – Enterprise Library; Shweet: SharePoint 2010 Team Messaging built with Pex; Farseer Physics Engine; NB_Store - Free DotNetNuke Ecommerce Catalog Module; Ionics Isapi Rewrite Filter; BlogEngine.NET; BeanProxy

    Read the article

  • Top tweets SOA Partner Community – October 2013

    - by JuergenKress
Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity
Ronald Luttikhuizen: My latest upload: SOA Made Simple | Introduction to SOA on @slideshare http://www.slideshare.net/rluttikhuizen/soa-made-simple-introduction-to-soa … via @SlideShare
OTNArchBeat: ArchBeat Link-o-Rama for October 4, 2013 #cloud #linux #oaam #soa http://pub.vitrue.com/y4SK
Lucas Jellema: My blog article shows news on the new SOA Suite 12c release - as it was publicly available during #oow13 see: http://technology.amis.nl/2013/09/27/oow13-soa-suite-12c/ …
Yogesh Sontakke: Introducing OER's new Express Workflows - Simplified Lifecycle Management. Blog post: http://bit.ly/16JKHCf @soacommunity #soagovernance
SrinivasPadmanabhuni: "@OTNArchBeat: SOA and User Interfaces - by @soacommunity @HajoNormann @gschmutz @t_winterberg et al #industrialsoa http://pub.vitrue.com/KmOp "
SOA Community: SOA and User-Interfaces http://servicetechmag.com/I76/0913-2 article published part of #industrialSOA at Service Technology Magazine #soacommunity
Estafet Limited: @Estafet win @UKOUG Middleware Partner of the Year 2013
Yogesh Sontakke: RT @VikasAatOracle: #Oracle #B2B - written by experts #soa #soacommunity #oraclesoa - time to get a copy ! @SOAScott
Danilo Schmiedel: Thanks a lot to Juergen @soacommunity for the super interesting and well-organized Partner Advisory Council yesterday! Such a Great Value!
OTNArchBeat: Case management supporting re-landscaping application portfolios | @leonsmiers http://pub.vitrue.com/MC5j
Samantha Searle: Apply for the #GartnerBPM 2014 Excellence Awards - find out how via this link http://ow.ly/ptaNQ #Gartner #bpm #process #entarch #cio
OTNArchBeat: SOA and User Interfaces - by @soacommunity @hajonormann @gschmutz @t_winterberg et al #industrialsoa http://pub.vitrue.com/KmOp
Dain Hansen: Hybrid #cloud is on the rise, but is the IT department's culture standing in the way? http://add.vc/eJN #CloudIntegration #OracleSOA
OTNArchBeat: #SOASuite 11g ps6 - Download your log files directly from the Enterprise Manager | @whitehorsenl http://pub.vitrue.com/KrJ2
Whitehorses: Whiteblog: SOA Suite 11g ps6 - Download your log files directly from the Enterprise Manager (http://goo.gl/2Gqiax )
Rajesh Raheja: Cloud integration session recap #oow13 http://blog.raastech.com/2013/09/recap-of-real-world-cloud-integration.html?m=1 …
Vikas Anand: @Ahmed_Aboulnaga thanks for the excellent summary and kind words. #oow13 #cloud #oraclesoa http://blog.raastech.com/2013/09/recap-of-real-world-cloud-integration.html?m=1 …
Luis Augusto Weir: REST is also SOA. Check it out http://www.soa4u.co.uk/2013/09/restful-is-also-soa.html?m=1 … #soacommunity
Graham: "@OracleBPM & @soacommunity: 5 Ways to Modernize Applications with BPM #AppAdvantage" #oracleday http://bit.ly/15yC6e3
SOA Community: #ACED director asked me for BPM references in FSI - ever visited my #SOACommunity workspace? https://beehiveonline.oracle.com/teamcollab/overview/SOA_Community_Workspace … #soacommunity #bpm
OracleBlogs: SOA Community Newsletter September 2013 http://ow.ly/2Aj6oK
OTNArchBeat: OOW13: First glimpses of the new #SOASuite12c | @LucasJellema http://pub.vitrue.com/2YgX
sbernhardt: Just published new blog entry on OOW 2013 wrap up. http://thecattlecrew.wordpress.com/2013/09/30/oracle-open-world-2013-wrap-up/ … #oow13 @OC_WIRE @soacommunity
Emiel Paasschens: Home with family after an overwhelming #OOW week in San Francisco with lot of info & meetings. Special thanx to @OracleBelux & @soacommunity
Robert van Mölken: Had a awesome week at #OOW13 in SF. Highlights were the @soacommunity Wine tour, @OracleBelux meet-ups and @OracleSOA CAB. Thanks to all :)
SOA Community: The place Oracle Fusion middleware comes from - Oracle 200 - TKs office - next Oracle 100 - SOA & BPM #soacommunity pic.twitter.com/qibFOQVbRo
Oracle BPM: 5 Ways to Modernize Applications with BPM #AppAdvantage http://pub.vitrue.com/l2dn
Simon Haslam: Ha ha - how did we miss that! RT @lucasjellema: Post conference announcement of a new middleware appliance? #oow13 pic.twitter.com/3NvcjPfjXb
OTNArchBeat: The OTNArchBeat Daily is out! http://paper.li/OTNArchBeat/1329828521 … Top stories today via @lucasjellema @myfear @TylerJewell
Packt Publishing: Get 50% off ALL our DRM-free eBooks - this weekend only! Go to http://www.packtpub.com/ and use code BIG50, as often as you like! #BIG50
OracleBlogs: Global Perspective: ACE Director from EMEA Weighs in on AppAdvantage http://ow.ly/2Afek2
orclateamsoa: #orclateamsoa Blog: BPM Auditing Demystified - I've heard from a couple of customers recently asking about BPM aud... http://ow.ly/2AfbAn
AMIS, Oracle & Java: Cool #soasuite 12c feature managed file transfer - visit Dave Barry at demo point sr212 #oow #soacommunity pic.twitter.com/gb4HLbUarR
SOA Community: Let us know what was best at #OOW @soacommunity save trip home - thanks for coming to #SF ;-) see you at #OOW2014 pic.twitter.com/xbWXjRapqh
Lonneke Dikmans: Nice @dschmied is talking about the different steps in his project. He starts with explaining the user interface design #oow13 #ux #acm
Lonneke Dikmans: Saving the best for the end: managing knowledge worker processes by @dschmied and Prasen. #oow13 #acm cool stuff: adaptive case management
Luis Augusto Weir: SOA Governance is more than just OER. Requires people, processes and tools. Check it out #SOA #soacommunity http://youtu.be/Ohn06smVKVw
Lonneke Dikmans: "@OracleSOA: #oow Join us for: Enterprise SOA Infrastructure Best Practices Thu 9/26 2:00 PM - 3:00 PM Moscone West - 2020
SOA Community: Business Process Management (BPM) 11g PS6 Awareness Course http://wp.me/p10C8u-1as
Ajay Khanna: Detect, Analyze, Act - Fast! http://wp.me/p10C8u-1ao via @soacommunity #OracleBPM
Simone Geib: It took a while, but I finally reached 500 followers. Thanks everybody and especially @soacommunity :)
SOA Community: Functional Testing Business Processes In Oracle BPM Suite 11g by Arun Pareek http://wp.me/p10C8u-1aq
SOA Community: Distribute the September edition of the SOA Community newsletter READ it! Didn't receive it register http://www.oracle.com/goto/emea/soa #soacommunity
SOA Community: Detect, Analyze, Act - Fast! by Ajay Khanna http://wp.me/p10C8u-1ao
Robert van Mölken: Finalised my #OOW presentation #CON8736 and live demo on wednesday 25th at 11:45am. Also giving a short version at the SOA CAB on thursday.
Rajesh Raheja: "The AppAdvantage of Oracle Cloud & On-premises Integration" http://bit.ly/14RYHmZ
SOA Community: Additional new content SOA & BPM Partner Community http://wp.me/p10C8u-1aw
Dain Hansen: Right now #oow13 SOA, BPM - Customer Advisory Boards. 'No tweeting' says @SOASimone. Instagram of funny cats still ok.
leonsmiers: Case Management with Oracle BPM Suite our presentation on #oow13 http://www.slideshare.net/leonsmiers/oracle-open-world-2013-case-management-smiers-kitson … #capgemini @nkitson72
Mark Simpson: Flextronics reduced cost of processing an invoice to <$1 from $7 due to BPM @OracleBPM #oow13 saving millions. Way less than industry avg.
Holger Mueller: #Siemens Shared Services CIO says that #Fusion #Middleware made the difference for #Oracle over #Workday. #Integration matters. #OOW13
oracleopenworld: Miss any #oow13 keynotes, or simply want to rewatch? Check out the live streaming site for keynotes on demand: http://pub.vitrue.com/RG4D
SOA Community: Analyze your m2m data and act on it! Big data Pattern matching, fast data & soa #soacommunity #oow pic.twitter.com/48Q1z4ckh7
SOA Community: Top tweets SOA Partner Community – September 2013 http://wp.me/p10C8u-1cR
Simone Geib: #oraclesoa hands on lab at #oow13 pic.twitter.com/IJJrqXIMiu
Danilo Schmiedel: #oow13 CON8436: Managing Knowledge Worker Processes. Come & get a free Adaptive Case Management poster @soacommunity pic.twitter.com/FRc2CSyLwb
John Sim: Great job again Jurgen @soacommunity helping bring Ace Community together!
Danilo Schmiedel: Excellent #OracleBPM Adaptive Case Management intro by @heidibuelowBPM and Prasen at the #oow13 demo ground. Last chance today @soacommunity
SOA Community: Thanks to all our #bpm #soa and #weblogic partners for the great middleware business #oow #soacommunity pic.twitter.com/dBwZ8DMHfH
Whitehorses: Thanks @soacommunity for the party tonight. Great to meet product management & see all the talented EMEA middleware specialists. #oow13
Danilo Schmiedel: Great tool demo from Link Consulting about managing your SOA with OER #oow13 @soacommunity
Torsten Winterberg: "@soacommunity: thanks to @dschmied and @OC_WIRE for making it happen to have our case management poster as printed version here at #oow13
Ronald Luttikhuizen: These were the architects involved in the diagram excitement :) just after State of SOA podcast with @OTNArchBeat pic.twitter.com/5B8jIrVTA9
SOA Community: Tanks to AVIO for the excellent #bpmn poster and the great bpm business - visit then at #OOW & get the poster pic.twitter.com/ebTg9pFY1C
Dain Hansen: Kurian introducing Oracle Platform-as-a-Service developments. #oow13 #OracleCloud pic.twitter.com/evJLTU53rx
Bruce Tierney: API Management "multi-level pie chart" at #oow13 by Oracle's Tim Hall pic.twitter.com/q12OIRdaue
Dain Hansen: This is not your Daddy's BAM @soacommunity: Is this BAM? Very cool in #soasuite 12c get a demo at sr225 pic.twitter.com/EvwqXW9U5j
SOA Community: Is this BAM? Very cool in #soasuite 12c get a demo at sr225 pic.twitter.com/LybHxyF362
SOA Community: SOA governance by @Yogesh_Sontakke at demo point sr214 many good new features - key for soa projects #oow #soa pic.twitter.com/DFK0ummsK1
SOA Community: Cool #soasuite 12c feature managed file transfer - visit Dave Barry at demo point sr212 #oow #soacommunity pic.twitter.com/GDKcqDGhCF
SOA Community: Adaptive Case Management demo point at #OOW visit @heidibuelowBPM get a demo and cmmn notation poster #soacommunity pic.twitter.com/T7yEyI7tdn
Lonneke Dikmans: In case you missed it: http://blog.vennster.nl/2013/09/case-management-part-1.html?spref=tw …
Lucas Jellema: SOA Suite news: Cloud Adapters RightNow and SalesForce plus SDK to develop custom cloud adapters (CY13); REST/JSON support in SB/SCA (12c)
Oracle SOA: Cloud Integration and AppAdvantage: Transform your Enterprise #soa #oow13 http://pub.vitrue.com/UfPB
Dain Hansen: Cloud Integration and AppAdvantage: Transform your Enterprise #soa #oow13 http://pub.vitrue.com/4QWA
Hajo Normann: #BigData, eventing & real time #analytics suggest timely next actions in #oracleBPM & #oracleACM; #oow13 #FastData pic.twitter.com/aFVGrTXPqu
Mark Simpson: OEP CQL engine now used in BAM12c for event stream summary computation with temporal and pattern match features to feed dashboards. #oow13
Mark Simpson: BAM12c virtually a new product. Analytics that senses ahead of time and also compares to historical trends to guide process or case #oow13
Andrejus Baranovskis: Enabling UI Shell 12c/11g Multitasking Behavior http://fb.me/18l9vxQfA
Amit Zavery: Oracle Fusion Middleware Empowers Business Users, EVP Thomas Kurian's session summary http://onforb.es/18Ta1jf #oow13 #oraclemiddle #oracle
Vikas Anand: #oow13 #oracleopenworld BPM on display at Middleware keynote by Thomas Kurian pic.twitter.com/PMm719S0Ui
SOA Community: BPM composer - business user empowerment #oow #soacommunity #bpmsuite pic.twitter.com/0Qgl6oVh0h
SOA Community: Model your process in BPMN - make is executable and analyze & improve them #oow #soacommunity pic.twitter.com/jkLlObDdoi
Bruce Tierney: @demed and Thomas Kurian talk mobile and cloud at #oow13 pic.twitter.com/bAAeqn5a2V
Amit Zavery: Thomas Kurian showcasing all the new features of Oracle Fusion Middleware #oraclemiddle #oow13
SOA Community: Demo time cloud adapters in #soasuite at Thomas Kurian keynote. Build and integrate mobile apps in minutes #oow pic.twitter.com/qTnCOJLLwS
SOA Community: Soa suite cloud adapters and mobile apps by @demed at Thomas Kurian keynote #oow #oracle #soacommunity pic.twitter.com/5aMLkNH4Ng
Danilo Schmiedel: First impressions from Oracle Open World 2013 http://wp.me/p2fG8x-77 @soacommunity @OC_WIRE
SOA Community: Good morning SFO let us know if you attend #OOW & #OPN keynote - #soacommunity pic.twitter.com/hzLYGDlRgE
Simon Haslam: Had a very useful @wlscommunity PAC meeting yesterday... & probably the best swag to date! pic.twitter.com/Lqus8ysbp7
Vikas Anand: Oracle SOA Suite - Team Blog http://bit.ly/18I1Zj7
Rajesh Raheja: Introducing new Cloud Connectivity Adapters #soa #demopod #oow13. I'll be there Sep 23 & 24 3-6pm to meetup http://bit.ly/18I1Zj7
leonsmiers: ..and again a very successful Oracle SOA/BPM partner council on the eve of #oow13. Thanks Jurgen! @soacommunity pic.twitter.com/aM1LMlb7Yw
Vikas Anand: #oow13 #soa #oep #exalogic Canon Delivers Fast Data with Oracle Event Processing (Oracle SOA Suite) http://bit.ly/1dwPeHb #soacommunity
Rolf Scheuch: The ACM poster is a big success. Great talks and .... I am soon out of posters! #bpmcon #ACM pic.twitter.com/TriaUyXRWK
Oracle SOA: British Telecom Sucess with Oracle B2B #oow #soa #b2b http://pub.vitrue.com/1RWi
leonsmiers: (Oracle) Case Management supporting re-platforming, a pre-read before our presentation at #oow13 http://leonsmiers.blogspot.com/2013/09/case-management-supporting-re.html … #capgemini #yammer
SOA & BPM Partner Community
For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
Blog Twitter LinkedIn Facebook Wiki Mix Forum
Technorati Tags: Twitter,SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
Overview
·        With HPC 2008 R2 SP1 you can add Azure worker roles as compute nodes in a local Windows HPC Server cluster.
·        The subscription for Windows Azure is charged like any other Azure service: for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes.
·        Win-win? Azure charges the compute-hour cost (according to VM size) amortized over a month, so you save on purchasing compute-node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud.
·        Blob storage is used to hold the input and output files of each job. I can see how parametric sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure queues.
·        This is not the end of the story for Azure grid computing. If MS requires you to purchase a local HPC license (and administer it), what's to stop a third party from doing the same and exposing the HPC WCF Broker Service to you for managing compute nodes? If MS doesn't provide the head node as a service, someone else will!

Process
·        Requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure, plus an availability policy for the worker nodes.
·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs.
·        A Windows Azure worker role instance runs an HPC-compatible Azure guest operating system on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released; all role instances defined by your service will then run on the guest operating system version that you specify. See Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549).
·        Use the hpcpack command to upload file packages and install files to run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

Requirements
·        This assumes you have an Azure subscription account and the HPC head node installed and configured.
·        Install HPC Pack 2008 R2 SP1. See Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812).
·        Configure the head node to connect to the Internet. Connectivity is provided by the connection of the head node to the enterprise network; you may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported.
·        Configure the firewall: allow outbound TCP traffic on the following ports: 80, 443, 5901, 5902, 7998, 7999 (a quick connectivity check is sketched after this list).
·        Note: HPC Server uses Admin Mode (elevated privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes.
·        Obtain a Windows Azure subscription certificate. The Windows Azure subscription must be configured with a public subscription (API) certificate: a valid X.509 certificate with a key size of at least 2048 bits. Generate a self-signed certificate and upload a .cer file via the Windows Azure Portal Account page > Manage my API Certificates link. See Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526).
·        Import the certificate with an associated private key on the HPC cluster head node, into the trusted root store of the local computer account.
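Before going further, it is worth confirming that the head node can actually open outbound connections on those ports. The loop below is a minimal PowerShell sketch using plain .NET TcpClient calls; the host name is a placeholder for your own <ServiceName>.cloudapp.net.

# Minimal sketch: verify outbound TCP connectivity on the ports HPC needs.
# The target host is a placeholder; substitute your own hosted service name.
$target = "yourservice.cloudapp.net"
foreach ($port in 80, 443, 5901, 5902, 7998, 7999) {
    $client = New-Object System.Net.Sockets.TcpClient
    try     { $client.Connect($target, $port); Write-Host "Port ${port}: open" }
    catch   { Write-Host "Port ${port}: blocked or unreachable" }
    finally { $client.Close() }
}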
Obtain Windows Azure Connection Information for HPC Server
·        Required for each worker node template.
·        Copy it from the Azure portal: navigation pane > Hosted Services > Storage Accounts & CDN.
·        Subscription ID: a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In the Properties pane.
·        Subscription certificate thumbprint: a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane.
·        Service name: the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane.
·        Blob storage account name: the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane.

Import the Azure Subscription Certificate on the HPC Head Node
·        This enables the services for Windows HPC Server to authenticate properly with the Windows Azure subscription.
·        Use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password.
·        See Certificates (http://go.microsoft.com/fwlink/?LinkId=163918).
·        To open the Certificates snap-in: Run > mmc. File > Add/Remove Snap-in > Certificates > Computer account > Local Computer.
·        To import the certificate via the wizard: Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import.
·        After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status.
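Both the certificate generation and the import can also be scripted. The sketch below assumes makecert from the Windows SDK is on the path; the subject name, file paths and password are placeholder assumptions, and the import uses plain .NET X509Store calls rather than any HPC-specific tooling.

# Generate a self-signed 2048-bit certificate (placeholder subject name).
# The private key lands in the current user's My store with an exportable
# key (-pe); the public part is written to a .cer file for the Azure portal.
makecert -r -pe -n "CN=HpcAzureMgmt" -len 2048 -ss My -sky exchange HpcAzureMgmt.cer

# Import the exported .pfx (certificate plus private key) into the local
# computer's Trusted Root store. Run from an elevated PowerShell prompt;
# the path and password are placeholders.
$pfxPath = "C:\certs\HpcAzureMgmt.pfx"
$pfxPass = "placeholder-password"
$flags   = [System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]"MachineKeySet,PersistKeySet"
$cert    = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList $pfxPath, $pfxPass, $flags
$store   = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList "Root", "LocalMachine"
$store.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)
$store.Add($cert)
$store.Close()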
Configure a Proxy Client on the HPC Head Node
·        The following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker.

Create a Windows Azure Worker Node Template
·        Edit HPC node templates in HPC Node Template Editor.
·        Specify: 1) the Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster, and 2) the worker node availability policy: rules for deploying and removing worker role instances in Windows Azure.
o   HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New (Create Node Template Wizard) or Edit (Node Template Editor).
o   Choose Node Template Type page: Windows Azure worker node template.
o   Specify Template Name page: template name and description.
o   Provide Connection Information page: Azure subscription ID (text) and subscription certificate (browse).
o   Provide Service Information page: Azure service name plus blob storage account name (optionally click Retrieve Connection Information to get the list of those available from Azure - possible LRT).
o   Configure Azure Availability Policy page: how Windows Azure worker nodes start and stop (bring the worker role instances online/offline, add/remove them): manual or automatic.
o   For automatic: in the Configure Windows Azure Worker Availability Policy dialog, select the days and hours for worker nodes to start and stop.
·        To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information.
·        You can upload a file package to the storage account that is specified in the template, e.g. application or service files that will run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
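For the upload itself, the hpcpack verbs below show the shape of the commands. I am writing them from memory, so treat the exact switches, and the package and template names (which are placeholders), as assumptions to verify against the hpcpack documentation linked above.

# Sketch only: verify these switches against the hpcpack documentation.
# Package a folder of application binaries, then upload the package to the
# storage account associated with the worker node template (placeholder names).
hpcpack create MyApp.zip C:\MyAppBinaries
hpcpack upload MyApp.zip /nodetemplate:AzureWorkerTemplate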
Add Azure Worker Nodes to the HPC Cluster
·        Use the Add Node Wizard and specify: 1) the worker node template, 2) the number of worker nodes (within the quota of role instances in the Azure subscription), and 3) the VM size of the worker nodes: ExtraSmall, Small, Medium, Large, or ExtraLarge.
·        To add worker nodes of different sizes, you must run the Add Node Wizard separately for each size.
·        All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes.
·        To add Windows Azure worker nodes:
o   In HPC Cluster Manager: Node Management > Actions pane > Add Node (Add Node Wizard).
o   Select Deployment Method page: Add Azure Worker nodes.
o   Specify New Nodes page: select a worker node template, then specify the number and size of the worker nodes.
·        After you add worker nodes to the cluster, they are in the Not-Deployed state and have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online.
·        Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN; this is non-configurable.

Deploying Windows Azure Worker Nodes
·        To deploy the role instances in Windows Azure, start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure worker node template (Azure Availability Policy) to be automatic or manual.
·        The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates).
·        Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure.
·        To start worker nodes manually and bring them online:
o   In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view, select one or more worker nodes.
o   Actions pane > Start: in the Start Azure Worker Nodes dialog, select a node template.
o   The state of the worker nodes changes from Not Deployed; track the provisioning progress in the worker node Details Pane > Provisioning Log tab.
o   If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes.
o   After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online (a PowerShell sketch follows at the end of this article).
·        Troubleshooting:
o   Check the node template.
o   Use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999
o   Check node status: deployment status information appears in the service account information in the Windows Azure Portal (HPC queries this); see the node status information for any failed nodes in HPC Node Management.
·        When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
·        To remove a set of role instances in Windows Azure, stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed.
·        Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added; however, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.
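Finally, the start/online cycle does not have to be driven through Cluster Manager: the HPC PowerShell snapin exposes the same node operations. The sketch below is from memory of the snapin's cmdlets, and the node group name is a placeholder, so verify the names with Get-Help on your head node before scripting against them.

# Sketch only: cmdlet and parameter names recalled from the HPC PowerShell
# snapin; confirm them with Get-Help before relying on this.
Add-PSSnapin Microsoft.HPC
# "AzureWorkerNodes" is a placeholder node group name; adjust for your cluster.
Get-HpcNode -GroupName AzureWorkerNodes |
    Where-Object { $_.NodeState -eq "Offline" } |
    Set-HpcNodeState -State Online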

    Read the article

< Previous Page | 447 448 449 450 451 452 453 454 455 456 457 458  | Next Page >