Search Results


  • DallasXAML.com – A New User Group for Silverlight, WPF, XBAP, etc.

    - by vblasberg
    http://DallasXAML.com

    I've devoted much of the last month to starting the DallasXAML User Group. I finally got back into user group management after 2 years away from leading the Dallas C# SIG, and now I'm having fun getting a Silverlight/WPF user group going strong for the Dallas / Ft. Worth community. Our first meeting was March 3rd at the Improving Enterprises offices in North Dallas. We had about 25 to 35 attendees and it went well. We covered the most important topic that everyone should understand well - data binding.

    I chose a XAML user group so we can get together for common group improvement in the Dallas / Ft. Worth area and learn cross-technology information that we can use now. It is not a lecture hall. The great thing is that we'll provide hands-on experience with most every meeting. The goal is to gain experience that we can use the next work day. I unfortunately broke that rule by speaking all through the first meeting, but next month is part two, with more hands-on data binding.

    The differentiation is that this group concentrates on XAML, not Silverlight or Windows Client alone. What we learn in one area, we gain for all areas. That includes Silverlight for Windows Phone 7, coming later this year. Next year it may be Windows Phone 8, 9, or whatever.

    I started developing WPF seriously almost a year ago, and I experienced the painful learning curve. Anyone who reports that there isn't a big learning curve either thought in XAML before it was developed, is on the Silverlight or WPF development team, or has already conquered the learning curve and forgotten the pain. So I wanted to share the pain, or make it easier for others - same thing. I have found that the more I learn and use good, disciplined techniques, the more interesting and rewarding development is again.

    A few months ago, I was sitting in the iPhone development session at the Dallas C# SIG. After the meeting, the audience was polled for future topics. After a few suggestions, Silverlight got the big hands up. That makes sense, because it's still the hot topic for many Microsoft developers. So I surfed around and found that there aren't enough user groups to help in this area. I polled a few local group leaders and did the work to start the group.

    This week I got a Telerik controls license and improved the site with some great controls, namely the RadHtmlPlaceholder control. It provides a Silverlight control to show HTML in an IFrame-like area. On DallasXAML.com, the newsletters and resource pages display in HTML because Silverlight just isn't there yet. I'm looking forward to a Silverlight XPS viewer with flow documents. There are some good commercial versions available, but this is a non-profit group.

    The DallasXAML.com site points to many other resources, such as podcasts and webcasts. I would rather give them the credit than try to out-do them. So check out the DallasXAML user group site and attend our meetings if you can. We meet the first Tuesday of the month.

    -Vince
    DallasXAML User Group Leader

    Read the article

  • Creating a thematic map

    - by jsharma
    This post describes how to create a simple thematic map - just a state population layer, with no underlying map tile layer. The map shows states color-coded by total population. The map is interactive, with info-windows, and can be panned and zoomed. The sample code demonstrates the following:

    - Displaying an interactive vector layer with no background map tile layer (i.e. the purpose and use of the Universe object)
    - Using a dynamic (i.e. defined via the JavaScript client API) color bucket style
    - Dynamically changing a layer's rendering style
    - Specifying which attribute value to use in determining the bucket, and hence style, for a feature (FOI)

    The result is shown in the screenshot below. The states layer was defined, and stored in the user_sdo_themes view of the mvdemo schema, using MapBuilder. The underlying table is defined as:

        SQL> desc states_32775
         Name                                      Null?    Type
         ----------------------------------------- -------- ----------------------------
         STATE                                              VARCHAR2(26)
         STATE_ABRV                                         VARCHAR2(2)
         FIPSST                                             VARCHAR2(2)
         TOTPOP                                             NUMBER
         PCTSMPLD                                           NUMBER
         LANDSQMI                                           NUMBER
         POPPSQMI                                           NUMBER
         ...
         MEDHHINC                                           NUMBER
         AVGHHINC                                           NUMBER
         GEOM32775                                          MDSYS.SDO_GEOMETRY

    We'll use the TOTPOP column value in the advanced (color bucket) style for rendering the states layer. The predefined theme (US_STATES_BI) is defined as follows:

        SQL> select styling_rules from user_sdo_themes where name='US_STATES_BI';

        STYLING_RULES
        --------------------------------------------------------------------------------
        <?xml version="1.0" standalone="yes"?>
        <styling_rules highlight_style="C.CB_QUAL_8_CLASS_DARK2_1">
          <hidden_info>
            <field column="STATE" name="Name"/>
            <field column="POPPSQMI" name="POPPSQMI"/>
            <field column="TOTPOP" name="TOTPOP"/>
          </hidden_info>
          <rule column="TOTPOP">
            <features style="states_totpop"> </features>
            <label column="STATE_ABRV" style="T.BLUE_SERIF_10"> 1 </label>
          </rule>
        </styling_rules>

    The theme definition specifies that the state, poppsqmi, totpop, state_abrv, and geom columns will be queried from the states_32775 table. The state_abrv value will be used to label each state, while the totpop value will be used to determine the color fill from those defined in the states_totpop advanced style. The states_totpop style, which we will not use in our demo, is defined as shown below:

        SQL> select definition from user_sdo_styles where name='STATES_TOTPOP';

        DEFINITION
        --------------------------------------------------------------------------------
        <?xml version="1.0" ?>
        <AdvancedStyle>
          <BucketStyle>
            <Buckets default_style="C.S02_COUNTRY_AREA">
              <RangedBucket seq="0" label="10K - 5M" low="10000" high="5000000" style="C.SEQ6_01" />
              <RangedBucket seq="1" label="5M - 12M" low="5000001" high="1.2E7" style="C.SEQ6_02" />
              <RangedBucket seq="2" label="12M - 20M" low="1.2000001E7" high="2.0E7" style="C.SEQ6_04" />
              <RangedBucket seq="3" label="&gt; 20M" low="2.0000001E7" high="5.0E7" style="C.SEQ6_05" />
            </Buckets>
          </BucketStyle>
        </AdvancedStyle>

    The demo defines additional advanced styles via the OM.style object and methods, and uses those instead when rendering the states layer. Now let's look at the relevant snippets of code.
    The code defines the map extent and zoom levels (i.e. the OM.universe), loads the states predefined vector layer (OM.layer), and sets up the advanced (color bucket) style.

    Defining the map extent and zoom levels:

        function initMap() {
            //alert("Initialize map view");

            // Define the map extent and number of zoom levels.
            // The Universe object is similar to the map tile layer configuration:
            // it defines the map extent, number of zoom levels, and spatial reference system.
            // Well-known ones (like web mercator/google/bing or maps.oracle/elocation) are predefined.
            // The Universe must be defined when there is no underlying map tile layer.
            // When there is a map tile layer, that defines the map extent, srid, and zoom levels.
            var uni = new OM.universe.Universe(
            {
                srid : 32775,
                bounds : new OM.geometry.Rectangle(
                                -3280000, 170000, 2300000, 3200000, 32775),
                numberOfZoomLevels: 8
            });

    The srid specifies the spatial reference system, which is Equal-Area Projection (United States):

        SQL> select cs_name from cs_srs where srid=32775;

        CS_NAME
        ---------------------------------------------------
        Equal-Area Projection (United States)

    The bounds defines the map extent. It is a Rectangle defined using the lower-left and upper-right coordinates and the srid.

    Loading and displaying the states layer: this is done in the states() function. The full code is at the end of this post; here's the snippet which defines the states VectorLayer.

        // States is a predefined layer in user_sdo_themes
        var layer2 = new OM.layer.VectorLayer("vLayer2",
        {
            def:
            {
                type: OM.layer.VectorLayer.TYPE_PREDEFINED,
                dataSource: "mvdemo",
                theme: "us_states_bi",
                url: baseURL,
                loadOnDemand: false
            },
            boundingTheme: true
        });

    The first parameter is a layer name; the second is an object literal for a layer config. The config object has two attributes: the first is the layer definition, and the second specifies whether the layer is a bounding one (i.e. used to determine the current map zoom and center such that the whole layer is displayed within the map window) or not. The layer config has the following attributes:

    - type - specifies whether the layer is a predefined one, defined via a SQL query (JDBC), or in a JSON-format file (DATAPACK)
    - theme - the predefined theme's name
    - url - the location of the mapviewer server
    - loadOnDemand - specifies whether to load all the features up front, or only those that lie within the current map window, loading additional ones as needed on a pan or zoom

    The code snippet below dynamically defines an advanced style and then uses it, instead of the 'states_totpop' style, when rendering the states layer.

        // override predefined rendering style with programmatic one
        var theRenderingStyle =
            createBucketColorStyle('YlBr5', colorSeries, 'States5', true);
        // Specify which attribute is used in determining the bucket (i.e. color) to use for the state.
        // It can be an array because the style could be a chart type (pie/bar)
        // which requires multiple attribute columns.
        // Use the STATE.TOTPOP column (aka attribute) value here.
        layer2.setRenderingStyle(theRenderingStyle, ["TOTPOP"]);

    The style itself is defined in the createBucketColorStyle() function.

    Dynamically defining an advanced style: the advanced style used here is a bucket color style, i.e. a color style is associated with each bucket. So first we define the colors and then the buckets.
        numClasses = colorSeries[colorName].classes;
        // create Color styles
        for (var i = 0; i < numClasses; i++)
        {
            theStyles[i] = new OM.style.Color(
                         {fill: colorSeries[colorName].fill[i],
                          stroke: colorSeries[colorName].stroke[i],
                          strokeOpacity: useGradient ? 0.25 : 1
                         });
        };

    numClasses is the number of buckets. The colorSeries array contains the color fill and stroke definitions:

        var colorSeries = {
            //multi-hue color scheme #10 YlBl.
            "YlBl3": { classes: 3,
                       fill:  [0xEDF8B1, 0x7FCDBB, 0x2C7FB8],
                       stroke:[0xB5DF9F, 0x72B8A8, 0x2872A6] },
            "YlBl5": { classes: 5,
                       fill:  [0xFFFFCC, 0xA1DAB4, 0x41B6C4, 0x2C7FB8, 0x253494],
                       stroke:[0xE6E6B8, 0x91BCA2, 0x3AA4B0, 0x2872A6, 0x212F85] },
            //multi-hue color scheme #11 YlBr.
            "YlBr3": { classes: 3,
                       fill:  [0xFFF7BC, 0xFEC44F, 0xD95F0E],
                       stroke:[0xE6DEA9, 0xE5B047, 0xC5360D] },
            "YlBr5": { classes: 5,
                       fill:  [0xFFFFD4, 0xFED98E, 0xFE9929, 0xD95F0E, 0x993404],
                       stroke:[0xE6E6BF, 0xE5C380, 0xE58A25, 0xC35663, 0x8A2F04] },
            etc.

    Next we create the bucket style:

        bucketStyleDef = {
            numClasses : colorSeries[colorName].classes,
        //  classification: 'custom',      // since we are supplying all the buckets
        //  buckets: theBuckets,
            classification: 'logarithmic', // use a logarithmic scale
            styles: theStyles,
            gradient: useGradient ? 'linear' : 'off'
        //  gradient: useGradient ? 'radial' : 'off'
        };
        theBucketStyle = new OM.style.BucketStyle(bucketStyleDef);
        return theBucketStyle;

    A BucketStyle constructor takes a style definition as input. The style definition specifies the number of buckets (numClasses), a classification scheme (which can be equal-ranged, logarithmic, or custom), the styles for each bucket, whether to use a gradient effect, and optionally the buckets themselves (required when using a custom classification scheme).

    The full source for the demo:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <title>Oracle Maps V2 Thematic Map Demo</title>
        <!-- jQuery is required by the UI wiring below ($(document).ready, etc.);
             the original listing assumes it is already loaded. -->
        <script src="http://localhost:8080/mapviewer/jslib/v2/oraclemapsv2.js" type="text/javascript">
        </script>
        <script type="text/javascript">
        //var $j = jQuery.noConflict();
        var baseURL = "http://localhost:8080/mapviewer";      // location of mapviewer
        OM.gv.proxyEnabled = false;                           // no mvproxy needed
        OM.gv.setResourcePath(baseURL + "/jslib/v2/images/"); // location of resources for UI elements like nav panel buttons

        var map = null;                                  // the client mapviewer object
        var statesLayer = null, stateCountyLayer = null; // the vector layers for states and counties in a state
        var layerName = "States";

        // initial map center and zoom
        var mapCenterLon = -20000;
        var mapCenterLat = 1750000;
        var mapZoom = 2;
        var mpoint = new OM.geometry.Point(mapCenterLon, mapCenterLat, 32775);
        var currentPalette = null, currentStyle = null;

        // set an onchange listener for the color palette select list,
        // initialize the map,
        // load and display the states layer
        $(document).ready( function() {
            $("#demo-htmlselect").change(function() {
                var theColorScheme = $(this).val();
                useSelectedColorScheme(theColorScheme);
            });
            initMap();
            states();
        });

        /**
         * color series from the ColorBrewer site (http://colorbrewer2.org/).
         */
        var colorSeries = {
            //multi-hue color scheme #10 YlBl.
            "YlBl3": { classes: 3,
                       fill:  [0xEDF8B1, 0x7FCDBB, 0x2C7FB8],
                       stroke:[0xB5DF9F, 0x72B8A8, 0x2872A6] },
            "YlBl5": { classes: 5,
                       fill:  [0xFFFFCC, 0xA1DAB4, 0x41B6C4, 0x2C7FB8, 0x253494],
                       stroke:[0xE6E6B8, 0x91BCA2, 0x3AA4B0, 0x2872A6, 0x212F85] },
            //multi-hue color scheme #11 YlBr.
            "YlBr3": { classes: 3,
                       fill:  [0xFFF7BC, 0xFEC44F, 0xD95F0E],
                       stroke:[0xE6DEA9, 0xE5B047, 0xC5360D] },
            "YlBr5": { classes: 5,
                       fill:  [0xFFFFD4, 0xFED98E, 0xFE9929, 0xD95F0E, 0x993404],
                       stroke:[0xE6E6BF, 0xE5C380, 0xE58A25, 0xC35663, 0x8A2F04] },
            // single-hue color schemes (blues, greens, greys, oranges, reds, purples)
            "Purples5": { classes: 5,
                       fill:  [0xf2f0f7, 0xcbc9e2, 0x9e9ac8, 0x756bb1, 0x54278f],
                       stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] },
            "Blues5": { classes: 5,
                       fill:  [0xEFF3FF, 0xbdd7e7, 0x68aed6, 0x3182bd, 0x18519C],
                       stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] },
            "Greens5": { classes: 5,
                       fill:  [0xedf8e9, 0xbae4b3, 0x74c476, 0x31a354, 0x116d2c],
                       stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] },
            "Greys5": { classes: 5,
                       fill:  [0xf7f7f7, 0xcccccc, 0x969696, 0x636363, 0x454545],
                       stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] },
            "Oranges5": { classes: 5,
                       fill:  [0xfeedde, 0xfdb385, 0xfd8d3c, 0xe6550d, 0xa63603],
                       stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] },
            "Reds5": { classes: 5,
                       fill:  [0xfee5d9, 0xfcae91, 0xfb6a4a, 0xde2d26, 0xa50f15],
                       stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }
        };

        function createBucketColorStyle(colorName, colorSeries, rangeName, useGradient) {
            var theBucketStyle;
            var bucketStyleDef;
            var theStyles = [];
            var theColors = [];
            var aBucket, aStyle, aColor, aRange;
            var numClasses;

            numClasses = colorSeries[colorName].classes;
            // create Color styles
            for (var i = 0; i < numClasses; i++)
            {
                theStyles[i] = new OM.style.Color(
                             {fill: colorSeries[colorName].fill[i],
                              stroke: colorSeries[colorName].stroke[i],
                              strokeOpacity: useGradient ? 0.25 : 1
                             });
            };

            bucketStyleDef = {
                numClasses : colorSeries[colorName].classes,
            //  classification: 'custom',      // since we are supplying all the buckets
            //  buckets: theBuckets,
                classification: 'logarithmic', // use a logarithmic scale
                styles: theStyles,
                gradient: useGradient ? 'linear' : 'off'
            //  gradient: useGradient ? 'radial' : 'off'
            };
            theBucketStyle = new OM.style.BucketStyle(bucketStyleDef);
            return theBucketStyle;
        }

        function initMap() {
            //alert("Initialize map view");

            // Define the map extent and number of zoom levels.
            // The Universe object is similar to the map tile layer configuration:
            // it defines the map extent, number of zoom levels, and spatial reference system.
            // Well-known ones (like web mercator/google/bing or maps.oracle/elocation) are predefined.
            // The Universe must be defined when there is no underlying map tile layer.
            // When there is a map tile layer, that defines the map extent, srid, and zoom levels.
            var uni = new OM.universe.Universe(
            {
                srid : 32775,
                bounds : new OM.geometry.Rectangle(
                                -3280000, 170000, 2300000, 3200000, 32775),
                numberOfZoomLevels: 8
            });

            map = new OM.Map(
                document.getElementById('map'),
                {
                    mapviewerURL: baseURL,
                    universe: uni
                });

            var navigationPanelBar = new OM.control.NavigationPanelBar();
            map.addMapDecoration(navigationPanelBar);
        } // end initMap

        function states() {
            //alert("Load and display states");
            layerName = "States";
            if (statesLayer) {
                // states were already visible but the style may have changed,
                // so set the style to the currently selected one
                var theData = $('#demo-htmlselect').val();
                setStyle(theData);
            } else {
                // States is a predefined layer in user_sdo_themes
                var layer2 = new OM.layer.VectorLayer("vLayer2",
                {
                    def:
                    {
                        type: OM.layer.VectorLayer.TYPE_PREDEFINED,
                        dataSource: "mvdemo",
                        theme: "us_states_bi",
                        url: baseURL,
                        loadOnDemand: false
                    },
                    boundingTheme: true
                });

                // add drop shadow effect and hover style
                var shadowFilter = new OM.visualfilter.DropShadow({opacity: 0.5,
                    color: "#000000", offset: 6, radius: 10});
                var hoverStyle = new OM.style.Color(
                    {stroke: "#838383", strokeThickness: 2});
                layer2.setHoverStyle(hoverStyle);
                layer2.setHoverVisualFilter(shadowFilter);
                layer2.enableFeatureHover(true);
                layer2.enableFeatureSelection(false);
                layer2.setLabelsVisible(true);

                // override predefined rendering style with programmatic one
                var theRenderingStyle =
                    createBucketColorStyle('YlBr5', colorSeries, 'States5', true);
                // Specify which attribute is used in determining the bucket (i.e. color) to use for the state.
                // It can be an array because the style could be a chart type (pie/bar)
                // which requires multiple attribute columns.
                // Use the STATE.TOTPOP column (aka attribute) value here.
                layer2.setRenderingStyle(theRenderingStyle, ["TOTPOP"]);
                currentPalette = "YlBr5";

                var stLayerIdx = map.addLayer(layer2);
                //alert('State Layer Idx = ' + stLayerIdx);
                map.setMapCenter(mpoint);
                map.setMapZoomLevel(mapZoom);

                // display the map
                map.init();
                statesLayer = layer2;

                // add rt-click event listener to show counties for the state
                layer2.addListener(OM.event.MouseEvent.MOUSE_RIGHT_CLICK, stateRtClick);
            } // end if
        } // end states

        function setStyle(styleName) {
            // alert("Selected Style = " + styleName);
            // There may be a counties layer also displayed.
            // That will have different bucket ranges, so create
            // one style for states and one for counties.
            var newRenderingStyle = null;
            if (layerName === "States") {
                if (/3/.test(styleName)) {
                    newRenderingStyle = createBucketColorStyle(styleName, colorSeries, 'States3', false);
                    currentStyle = createBucketColorStyle(styleName, colorSeries, 'Counties3', false);
                } else {
                    newRenderingStyle = createBucketColorStyle(styleName, colorSeries, 'States5', false);
                    currentStyle = createBucketColorStyle(styleName, colorSeries, 'Counties5', false);
                }
                statesLayer.setRenderingStyle(newRenderingStyle, ["TOTPOP"]);
                if (stateCountyLayer)
                    stateCountyLayer.setRenderingStyle(currentStyle, ["TOTPOP"]);
            }
        } // end setStyle

        function stateRtClick(evt) {
            var foi = evt.feature;
            //alert('Rt-Click on State: ' + foi.attributes['_label_'] +
            //      ' with pop ' + foi.attributes['TOTPOP']);
            // Display another layer with counties info.
            // The layer may change on each rt-click, so create and add it each time.
            var countyByState = null;
            // The _label_ attribute of a feature in this case is the state abbreviation;
            // we will use that to query and get the counties for a state.
            var sqlText = "select totpop,geom32775 from counties_32775_moved where state_abrv=" +
                          "'" + foi.getAttributeValue('_label_') + "'";
            // alert(sqlText);

            if (currentStyle === null)
                currentStyle = createBucketColorStyle('YlBr5', colorSeries, 'Counties5', false);
            /* try a simple style instead
               new OM.style.ColorStyle( { stroke: "#B8F4FF", fill: "#18E5F4", fillOpacity: 0 } );
            */

            // remove existing layer if any
            if (stateCountyLayer)
                map.removeLayer(stateCountyLayer);

            countyByState = new OM.layer.VectorLayer("stCountyLayer",
                {def: {type: OM.layer.VectorLayer.TYPE_JDBC,
                       dataSource: "mvdemo",
                       sql: sqlText,
                       url: baseURL}});
            //         url: baseURL},
            //         renderingStyle: currentStyle});
            countyByState.setVisible(true);
            // specify which attribute is used in determining the bucket (i.e. color) to use for the county
            countyByState.setRenderingStyle(currentStyle, ["TOTPOP"]);

            var ctLayerIdx = map.addLayer(countyByState);
            // alert('County Layer Idx = ' + ctLayerIdx);
            //map.addLayer(countyByState);
            stateCountyLayer = countyByState;
        } // end stateRtClick

        function useSelectedColorScheme(theColorScheme) {
            if (map) {
                // code to update renderStyle goes here
                //alert('will try to change render style');
                setStyle(theColorScheme);
            } else {
                // do nothing
            }
        }
        </script>
        </head>
        <body bgcolor="#b4c5cc" style="height:100%;font-family:Arial,Helvetica,Verdana">
        <h3 align="center">State population thematic map</h3>
        <div id="demo" style="position:absolute; left:68%; top:44px; width:28%; height:100%">
        <HR/>
        <p/>
        Choose Color Scheme:
        <select id="demo-htmlselect">
          <option value="YlBl3"> YellowBlue3</option>
          <option value="YlBr3"> YellowBrown3</option>
          <option value="YlBl5"> YellowBlue5</option>
          <option value="YlBr5" selected="selected"> YellowBrown5</option>
          <option value="Blues5"> Blues</option>
          <option value="Greens5"> Greens</option>
          <option value="Greys5"> Greys</option>
          <option value="Oranges5"> Oranges</option>
          <option value="Purples5"> Purples</option>
          <option value="Reds5"> Reds</option>
        </select>
        <p/>
        </div>
        <div id="map" style="position:absolute; left:10px; top:50px; width:65%; height:75%; background-color:#778f99"></div>
        <div style="position:absolute;top:85%; left:10px;width:98%" class="noprint">
        <HR/>
        <p>
        Note: This demo uses HTML5 Canvas and requires IE9+, Firefox 10+, or Chrome. No map will show up in IE8 or earlier.
        </p>
        </div>
        </body>
        </html>

    Read the article

  • IE9 and the Mystery of the Broken Video Tag

    - by David Wesst
    I was very excited when Microsoft released the Internet Explorer 9 Release Candidate. As far as I was concerned, this was another nail in the coffin for IE6 and a step in the right direction for us .NET web developers, as our base camp was finally starting to support the latest and greatest future-web standards. Unfortunately, my celebration was short-lived, as I soon hit a snag while loading up an HTML5 site I was building in Visual Studio 2010.

    The Mystery

    After updating Internet Explorer, I ran my HTML5 site that had the oh-so-lovely HTML5 video tag showing a video. Even though this worked in IE9 Beta, it appeared that IE9 RC could not load the same file. I figured that it was the video codec. Maybe IE9 RC no longer supported the codec I had used to encode my video. Here's the code I used:

        <video width="854" height="480" id="myOtherVideo" autoplay="" controls="">
            <source src="/DemoSite1/Media/big_buck_bunny.mp4"/>
            <div>
                <p>Your browser does not support HTML5 Video.</p>
            </div>
        </video>

    As you can see from the code, I had the "fail-safe" code inside the video tag. The idea there being that if the video tag, or the video files themselves, are not supported by the browser, my video should fail gracefully. What was even stranger was the fact that it worked in all the other HTML5 browsers that supported video.

    The Investigation

    Whoa! DJ, stop the music. How can any of that make sense? Would the IE team really take such huge strides forward only to forget to include a feature that was already in the beta? I don't think so. I did plenty of searching and asking around on the web, but could not seem to find anyone else having the same problem. Eventually I came across this post talking about declaring the MIME type in the .htaccess file. That got me thinking: does my web server support the video MIME type? I was using VS2010, so how do I know what kind of MIME types are supported by default? Still, my page hosted in Cassini (the web development server in VS2010) worked in the other browsers. Why wouldn't it work with IE9 RC? To answer that, it was time to open up the upgraded toolbox known as the Developer Tools in IE9 and use the new Network tab.

    The Conclusion

    If you take a closer look at the results displayed in the Network tab, you can see that IE9 RC has interpreted the video file as text/html rather than video/mp4. To make this work, I decided to use IIS to debug my HTML5 web application by setting the web project's properties. Then, I added the MIME types that I want to support (i.e. video/mp4, video/ogg, video/webm); a sketch of the equivalent web.config entries is at the end of this post. Et voila! The Mystery of the Broken Video Tag is solved.

    After Thoughts

    After solving the mystery, I still had the question of why my site worked in Chrome, Safari, and Firefox 3.6. After asking around, the best answer I received was from my colleague Tyler Doerksen. He said that IE9 likely depends on the server telling it what kind of file it is downloading, rather than trying to read the metadata about the data it is downloading before doing anything. I have no facts to back this up, but it makes sense to me. In a browser war where milliseconds can make your browser fall back a few places in the race for supremacy, maybe the IE team opted to depend on the server knowing what kind of content it is serving up. In any case, that is just an educated guess. If you have any comments, feel free to post them below.

    This post also appears at http://david.wes.st
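    For reference, here's a minimal web.config sketch of the equivalent MIME mappings when hosting under IIS 7 or later. This is an illustration rather than the exact configuration I used through the project properties UI; note that if a mapping already exists at a higher configuration level, you'll need to remove it first or IIS will complain about the duplicate:

        <configuration>
          <system.webServer>
            <staticContent>
              <!-- map video file extensions to the MIME types IE9 expects -->
              <mimeMap fileExtension=".mp4"  mimeType="video/mp4"  />
              <mimeMap fileExtension=".ogv"  mimeType="video/ogg"  />
              <mimeMap fileExtension=".webm" mimeType="video/webm" />
            </staticContent>
          </system.webServer>
        </configuration>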

    Read the article

  • How to retrieve SharePoint data from a Windows Forms application.

    - by Michael M. Bangoy
    In this demo I'm going to show how to retrieve SharePoint data and display it on a Windows Forms application.

    1. Open Visual Studio 2010 and create a new project.
    2. In the project template, select Windows Forms Application.
    3. In order to communicate with SharePoint from a Windows Forms application, we need to add the two SharePoint Client DLLs located in c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI.
    4. Select Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll.

    That's it - we're ready to write our code. Note: in this example I've added a few controls to the form: a Button, a TextBox, a Label and a DataGridView.

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Data.Objects;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Security;
        using System.Windows.Forms;
        using SP = Microsoft.SharePoint.Client;

        namespace ClientObjectModel
        {
            public partial class Form1 : Form
            {
                // url of the SharePoint site
                string _context = "theurlofyoursharepointsite";

                public Form1()
                {
                    InitializeComponent();
                }

                private void Form1_Load(object sender, EventArgs e)
                {
                }

                private void getsitetitle()
                {
                    SP.ClientContext context = new SP.ClientContext(_context);
                    SP.Web _site = context.Web;
                    context.Load(_site);
                    context.ExecuteQuery();
                    txttitle.Text = _site.Title;
                    context.Dispose();
                }

                private void loadlist()
                {
                    using (SP.ClientContext _clientcontext = new SP.ClientContext(_context))
                    {
                        SP.Web _web = _clientcontext.Web;
                        SP.ListCollection _lists = _clientcontext.Web.Lists;
                        _clientcontext.Load(_lists);
                        _clientcontext.ExecuteQuery();

                        DataTable dt = new DataTable();
                        DataColumn column;
                        DataRow row;

                        column = new DataColumn();
                        column.DataType = Type.GetType("System.String");
                        column.ColumnName = "List Title";
                        dt.Columns.Add(column);

                        foreach (SP.List listitem in _lists)
                        {
                            row = dt.NewRow();
                            row["List Title"] = listitem.Title;
                            dt.Rows.Add(row);
                        }
                        dataGridView1.DataSource = dt;
                    }
                }

                private void cmdload_Click(object sender, EventArgs e)
                {
                    getsitetitle();
                    loadlist();
                }
            }
        }

    That's it. Running the application and clicking the Load button will retrieve the title of the SharePoint site and display it in the TextBox, and it will also retrieve ALL of the SharePoint lists on that site and populate the DataGridView with the list titles. Hope this helps. Thank you.
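    P.S. If you also want the items inside a particular list, and not just the list titles, a minimal sketch along the same lines would be the following (the list title "Announcements" is just a placeholder - use one that exists on your site):

        private void loaditems()
        {
            using (SP.ClientContext ctx = new SP.ClientContext(_context))
            {
                // "Announcements" is a hypothetical list title
                SP.List list = ctx.Web.Lists.GetByTitle("Announcements");
                SP.CamlQuery query = SP.CamlQuery.CreateAllItemsQuery();
                SP.ListItemCollection items = list.GetItems(query);
                ctx.Load(items);
                ctx.ExecuteQuery();

                foreach (SP.ListItem item in items)
                {
                    // field names depend on the list schema; "Title" exists on most lists
                    System.Diagnostics.Debug.WriteLine(item["Title"]);
                }
            }
        }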

    Read the article

  • Hijax == sneaky Javascript redirects? Will I get banned from Google?

    - by Chris Jacob
    Question

    Will I get penalised as "sneaky JavaScript redirects" by Google if I have the following Hijax setup (which requires a JavaScript redirect on the page indexed by Google)?

    Goal

    I want to implement Hijax to make AJAX content accessible to non-JavaScript users and search engine crawlers.

    Background

    I'm working on a static file server (GitHub Pages). No server-side tricks allowed (so Google's #! "hash bang" solution is not an option). I'm trying to keep my files DRY: I don't want to repeat the common OUTER template in all my files, i.e. header, navigation menu, footer, etc. They will live in the main index.html.

    The Hijax setup

    - index.html contains all the OUTER html/css/js... the site's template.
    - index.html has a <div id="content"> which defaults to containing the "homepage" html.
    - index.html has a navigation menu, with a Hijax link to an "about" page.
    - With JavaScript disabled (e.g. a crawler) the link is followed to /about.html.
    - With JavaScript enabled (e.g. most people) the link updates the url hash fragment to /#about and jQuery replaces the <div id="content"> innerHTML with $("#content").load("about.html #inner-container");.

    AJAX content

    - about.html does not contain anything extra to try and cloak content for crawlers.
    - about.html contains enough HTML / CSS / JavaScript to display /about.html as a standalone page with its own META data... e.g. <html><head><title>About</title>...</head><body></body></html>.
    - about.html has NO OUTER HTML template (i.e. header, navigation menu, footer, etc).
    - about.html's <body> contains a <div id="inner-container"> which holds the content that is injected into index.html.
    - about.html has a <noscript> tag as the first child of <body> which explains to non-JavaScript users that they are viewing the about page's "inner content" - with a link to the index.html page to get the full page layout with menu.

    The (Sneaky?) Redirect

    Google indexes the /about.html page. However, when a person with JavaScript enabled visits that page there is no OUTER html template (e.g. header, navigation menu, footer, etc). So I need to do a JavaScript redirect to get the person over to the /#about page (deep-linking to the "about" page "state" in index.html). I'm thinking of doing a "redirect on click or after 10 seconds" (see the sketch at the end of this question). The end result is that the user ends up on an "enhanced" page back on index.html with all its OUTER template - but the core "page" content is practically identical.

    Known issue with inbound links, e.g. share / bookmarking

    It seems that if a user shares the URL /#about on their blog, then when allocating inbound links to my site Google ignores everything after the #... it allocates value to the / page - see: http://stackoverflow.com/questions/5028405/hashbang-vs-hijax/5166665#5166665. I can only try and minimise this issue by offering "share" buttons on the page with the appropriate urls, i.e. /about.html.

    Duplicate

    Sorry. I posted this same question over on http://stackoverflow.com/questions/5561686/hijax-sneaky-javascript-redirects-will-i-get-banned-from-google ... then realised it probably belongs more on this Stack Exchange site... Not sure if I should delete the Stack Overflow question, or just leave it on both sites? Please leave a comment.
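    For reference, the redirect I have in mind is just a few lines of JavaScript at the top of about.html - a sketch only; the "click or 10 seconds" timing and the hash format are my own choices, not anything Google prescribes:

        // Runs only in about.html, which has no outer template.
        // Send JS-enabled visitors to the enhanced page: index.html with the
        // "about" state deep-linked in the hash fragment.
        (function () {
            function redirect() {
                window.location.replace("/#about");
            }
            document.onclick = redirect;          // redirect on first click anywhere
            window.setTimeout(redirect, 10000);   // ... or after 10 seconds
        })();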

    Read the article

  • Web Safe Area (optimal resolution) for web app design?

    - by M.A.X
    I'm in the process of designing a new web app and I'm wondering what 'Web Safe Area' I should optimize the app layout and design for. By Web Safe Area I mean the actual area available to display the website in the browser (which is influenced by monitor resolution as well as the space taken up by the browser and OS). I did some investigation and thinking on my own, but wanted to share this to see what the general opinion is. Here is what I found:

    Optimal Display Resolution:

    - w3schools web stats seems to be the most referenced source (however, they state that these are results from their own site and are biased towards tech-savvy users)
    - http://www.w3counter.com/globalstats.php (aggregate data from something like 15,000 different sites that use their tracking services)
    - StatCounter Global Stats Display Resolution (stats are based on aggregate data collected by StatCounter on a sample exceeding 15 billion pageviews per month, collected from across the StatCounter network of more than 3 million websites)
    - NetMarketShare Screen Resolutions (marketshare.hitslink.com) (a web analytics consulting firm; they get data from browsers of site visitors to their on-demand network of live stats customers. The data is compiled from approximately 160 million visitors per month)

    Display Resolution Summary:

    There is a bit of variation between the above sources, but in general as of Jan 2011 it looks like 1024x768 is about 20%, while ~85% have a higher resolution of at least 1280x768 (1280x800 is the most common of these with 15-20% of the total web, depending on the source; 1280x1024 and 1366x768 follow behind with 9-14% of the share). My guess would be that the higher resolution values will be even more common if we filter on North America, and even higher if we filter on N. American corporate users (unfortunately I couldn't find any free geographically filtered statistics). Another point to note is that the 1024x768 desktop user population is likely lower than the aforementioned 20%, seeing as the iPad (1024x768 native display) is likely propping up those numbers (the app I'm designing is Flash-based, and Apple mobile devices don't support Flash, so iPad support isn't a concern). My recommendation would be to optimize around the 1280x768 constraint (note: 1280x768 is actually a relatively rare resolution, but I think it's a valid constraint range considering that 1366x768 is relatively common and 1280 is the most common horizontal resolution).

    Browser + OS Constraints:

    To further add to the constraints, we have to subtract the space taken up by the browser (assuming IE, which is the most space-consuming) and the OS (assuming WinXP-Win7):

    - Win7 has the biggest taskbar footprint, at a height of 40px (XP's and Vista's is 30px)
    - The default IE8 view uses up 25px at the bottom of the screen with the status bar, and a further 120px at the top of the screen with the window title bar and the browser UI (assuming the default 'favorites' toolbar is present; it would instead be 91px without the favorites toolbar)
    - Assuming no scrollbar, we also lose a total of 4px horizontally for the window outline

    This means that we are left with 583px of vertical space (768 - 40 - 25 - 120) and 1276px of horizontal (1280 - 4). In other words, a Web Safe Area of 1276 x 583.

    Is this a correct line of thinking? I'm really surprised that I couldn't find this type of investigation anywhere on the web. Lots of websites talk about designing for 1024x768, but that's only half the equation! There is no mention of browser/OS influences on the actual area you have to display the site/app.

    Any help on this would be greatly appreciated! Thanks.

    EDIT: Another caveat to my line of thinking above is that different browsers actually take up different numbers of pixels based on the OS they're running on. For example, under WinXP, IE8 takes up 142px at the top of the screen (instead of the aforementioned 120px for Win7) because the file menu shows up by default on XP, while in Win7 the file menu is hidden by default. So it looks like on WinXP + IE8 the Web Safe Area would be a mere 572px high (768px - 142 - 30 - 24 = 572).
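    One way to sanity-check these numbers, instead of deriving them from chrome and taskbar measurements, is to ask the browser directly. A quick sketch (document.documentElement.clientWidth/Height reports the viewport in standards mode, including in IE8; screen.availWidth/Height is the desktop minus the OS taskbar):

        // Report the area actually available to the page, and the usable desktop.
        var viewportW = document.documentElement.clientWidth;
        var viewportH = document.documentElement.clientHeight;
        alert('Web Safe Area: ' + viewportW + ' x ' + viewportH +
              '\nUsable desktop: ' + screen.availWidth + ' x ' + screen.availHeight);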

    Read the article

  • Test your internet connection - Emtel Fixed Broadband

    Already at the beginning of April, I had a phone conversation with my representative at Emtel Ltd. about some upcoming issues due to the ongoing construction work in my neighbourhood. Unfortunately, they finally raised the house two levels above ours, and of course this has a negative impact on the visibility between the WiMAX outdoor unit on the roof and the aimed access point at Medine. So, today I had a technical team here to do a site survey and to come up with potential solutions. Short version: it doesn't look good after all.

    The site survey

    Well, the two technicians did their work properly, and even re-arranged the antenna to check the connection with another end point down at La Preneuse. But no improvements. Looks like we are out of luck, since the construction next door hasn't finished yet, and at the moment it even looks like they are planning to put at least one more level on top. I really wonder about the sanity of the responsible bodies at the local district council. But that's another story. Anyway, the outdoor unit was once again pointed towards Medine and properly fixed with new cable guides (air from the sea and rust...). Both of them did a good job and fine-tuned the reception signal to a mere 3 over 9, compared to the original 7 over 9 I had before the daily terror started. The site survey has been done, and now it's up to Emtel to come up with (better) solutions. Well, I wouldn't mind an unlimited, symmetric 3G/UMTS or even LTE connection. Let's see what they can do...

    Testing the connection

    There are several online sites available which let you check certain aspects of your internet connection. Personally, I'm used to speedtest.net and it works very well. I think it is good and necessary to check your connection from time to time, and only a couple of days ago I posted the following on Emtel's wall at Facebook (21.05.2013 - 14:06 hrs):

        Dear Emtel, could you eventually provide an answer on the miserable results of SpeedTest? I chose Rose Hill (Hosted by Emtel Ltd.) as testing endpoint...

    Sadly, no response to this. Seems that the marketing department is not willing to deal with customers on Facebook. Okay, over at speedtest.net you can use their Flash-based test suite to check your connection to quite a number of servers of different providers world-wide. It's actually very interesting to see the results for different end points and to compare them to each other.

    The results

    Following are the results for Rose Hill (hosted by Emtel) and for Frankfurt, Germany (hosted by Vodafone DE):

        Speedtest.net result of 30.05.2013 between Flic en Flac and Rose Hill, Mauritius (Emtel - Fixed Broadband)

        Speedtest.net result of 30.05.2013 between Flic en Flac and Frankfurt, Germany (Emtel - Fixed Broadband)

    Luckily, the results are quite similar in terms of connection speed, which is good. I'm currently on a WiMAX tariff called 'Classic Browsing 2', or Fixed Broadband as they call it now, which provides a symmetric line of 768 Kbps (or roughly 0.75 Mbps). In terms of downloads or uploads this means that I should be able to transfer files in either direction at approximately 96 KB/s. Frankly speaking, thanks to compression, my choice of browser and operating system, I usually exceed this value and get download rates of up to 120 KB/s - not too bad after all. Only the ping times are a little bit of a concern. Due to the difference in distance, or better said, based on the number of hops between the endpoints, they indicate the amount of time that it takes to send a packet from your machine to the remote server and get a response back. A lower value is better, and usually the ping is less than 300 ms between Mauritius and Europe.

    The alternatives in Mauritius

    Not sure whether I should note this down, because for my requirements there are no alternatives to Emtel WiMAX at the moment. It would be great to have your opinion on the situation of internet connectivity in Mauritius. Are there really alternatives? And if so, what are the conditions?

    Read the article

  • When things go awry

    - by Phil Factor
    The moment the Entrepreneur opened his mouth on prime-time national TV, spelled out the URL and waxed big on how exciting ‘his’ new website was, I knew I was in for a busy night. I’d designed and built it. All at once, half a million people tried to log into the website. Although all my stress-testing paid off, I have to admit that the network locked up tight long before there was any danger of a database or website problem. Soon afterwards, the Entrepreneur and the Big Boss were there in the autopsy meeting. We picked through all our systems in detail to see how they’d borne the unexpected strain. Mercifully, in view of the sour mood of the Big Boss, it turned out that the only thing we could have done better was buy a bigger pipe to and from the internet. We’d specified that ‘big pipe’ when designing the system. The Big Boss had then railed at the cost and so we’d subsequently compromised. I felt that my design decisions were vindicated. The Big Boss brooded for a while. Then he made the significant comment: “What really ****** me off is the fact that, for ten minutes, we couldn’t take people’s money.” At that point I stopped feeling smug. Had the internet connection been better, the system would have reached its limit and failed rather precipitously, and that wasn’t what he wanted. Then it occurred to me that what had gummed up the connection was all those images on the site, that had made it so impressive for the visitors. If there had been a way to automatically pare down the site to the bare essentials under stress… Hmm. I began to consider disaster-recovery in the broadest sense – maintaining a service in spite of unusual or unexpected events. What he said makes a lot of sense: sacrifice whatever isn’t essential to keep the core service running when we approach the capacity limits. Maybe in IT we should borrow (or revive) the business concept of the ‘Skeleton service’, maintaining only the priority parts under stress, using a process that is well-prepared and carefully rehearsed. How might this work? Whatever the event we have to prepare for, it is all about understanding the priorities; knowing what one can dispense with when the going gets tough. In the event of database disaster, it’s much faster to deploy a skeletal system with only the essential data than to restore the entire system, though there would have to be a reconciliation process to update the revived database retrospectively, once the emergency was over. It isn’t just the database that could be designed for resilience. One could prepare for unusually high traffic in a website by designing a system that degraded gradually to a ‘skeletal’ site, one that maintained the commercial essentials without fat images, JavaScript libraries and razzmatazz. This is all what the Big Boss scathingly called ‘a mere technicality’. It seems to me that what is needed first is a culture of application and database design which acknowledges that we live in a very imperfect world, and react accordingly when things go awry.

    Read the article

  • Cannot get ATI Drivers installed

    - by bittoast67
    I am trying to install the Catalyst driver. The best I can get is a strange resolution problem, and Firefox acts all wonky. The worst I have gotten is low graphics mode, in which case I just reinstall Ubuntu. I have an HP Pavilion dv7 laptop with a Radeon HD 3200. I plan to try again with a fresh install of Ubuntu 12.04.3, as I have heard it's the most compatible.

    This is what I have done:

    1. I have tried just the easy way of going to Synaptic and installing the drivers that way - the fglrx package (not fglrx-updates). And if memory serves, I think that boots me into low graphics mode. So, fresh install of Ubuntu, and tried again.
    2. I have done everything a couple of times from this site (http://wiki.cchtml.com/index.php/Ubuntu_Precise_Installation_Guide), following every instruction to a T. That gets me something, such as a lowered fan speed and a much cooler computer, but I also lose most of my resolution. And Displays says it's the best resolution I can get. I also have a very screwy Firefox. Using this method I can see AMD Catalyst Control Center in my dash (two of them really, one administrator and one not), but when I try to open it, it says no AMD driver detected. So again, Ubuntu reinstall.
    3. I have tried the GUI method with the Legacy driver I got from AMD's site. It runs through smoothly, and at the very end, after I exit the installer, it gives me an error.
    4. I have also tried various other methods using the terminal, as well as various other drivers (the one from AMD's site and the one suggested in the above link for my graphics card), both to no avail. When I try the method in the link in number 2, I get the super low res and screwy Firefox. I type in fglrxinfo and get a BadRequest error. I have yet to type in fglrxinfo and get anything like what I am supposed to.

    UPDATE: I am now currently reinstalling Ubuntu 12.04. I tried the above-mentioned link - thank you very much! - just to see, on the previously failed driver attempt, by following the purge commands (roughly the ones sketched at the end of this post). And to no avail: when typing fglrxinfo I still get the BadRequest thing. I will update again after a try with a true fresh install. Thanks again!!

    UPDATE: Alright everyone. Still no go. I have done everything word for word in the provided tutorial. I have rebooted my computer, again to a messed-up resolution, and this is what I get when typing fglrxinfo:

        $ fglrxinfo
        X Error of failed request:  BadRequest (invalid request code or no such operation)
          Major opcode of failed request:  153 (GLX)
          Minor opcode of failed request:  19 (X_GLXQueryServerString)
          Serial number of failed request:  12
          Current serial number in output stream:  12

    I would like to add that when installing the file fglrx_8.970-0ubuntu1_amd64.deb I got this:

        Building initial module for 3.8.0-29-generic
        Error! Bad return status for module build on kernel: 3.8.0-29-generic (x86_64)
        Consult /var/lib/dkms/fglrx/8.970/build/make.log for more information.
        update-initramfs: deferring update (trigger activated)
        Processing triggers for ureadahead ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.8.0-29-generic
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place

    Any ideas? Anyone? I can't for the life of me figure out what I am doing wrong.
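    In case it helps anyone reproduce this, the purge sequence I followed between attempts was roughly this (a sketch of the commonly suggested cleanup, not an exact transcript of my terminal; package names can vary with the driver version):

        # remove every fglrx package and its configuration
        sudo apt-get remove --purge fglrx*
        # put the stock open-source stack back and regenerate the X configuration
        sudo apt-get install --reinstall libgl1-mesa-glx xserver-xorg-core
        sudo dpkg-reconfigure xserver-xorg
        sudo reboot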

    Read the article

  • Using Private Extension Galleries in Visual Studio 2012

    - by Jakob Ehn
    Note: The installer and the complete source code is available over at CodePlex at the following location: http://inmetavsgallery.codeplex.com

    Extensions and add-ins are everywhere in the Visual Studio ALM ecosystem! Microsoft releases new cool features in the form of extensions, and the list of 3rd-party extensions that plug into Visual Studio just keeps growing. One of the nice things about VSIX extensions is how they are deployed: Microsoft hosts a public Visual Studio Gallery where you can upload extensions and make them available to the rest of the community. Visual Studio checks for updates to the installed extensions when you start Visual Studio, and installing/updating the extensions is fast, since it is only a matter of extracting the files within the VSIX package to the local extension folder.

    But for custom, enterprise-specific extensions, you don't want to publish them online to the whole world - yet you still want an easy way to distribute them to your developers and partners. This is where Private Extension Galleries come into play. In Visual Studio 2012 it is now possible to add custom extension galleries that can point to any URL, as long as that URL returns the expected content, of course (see below). Registering a new gallery in Visual Studio is easy, but there is very little documentation on how to actually host the gallery.

    Visual Studio galleries use Atom Feed XML as the protocol for delivering new and updated versions of the extensions. This MSDN page describes how to create a static XML file that returns the information about your extensions. This approach works, but requires manual updates of that file every time you want to deploy an update of an extension. Wouldn't it be nice to have a web service that takes care of this for you - that just lets you drop a new version of your VSIX file and have it automatically detect the new version and produce the correct Atom Feed XML? Well, search no more: this is exactly what the Inmeta Visual Studio Gallery Service does for you :-) (A sketch of the feed format involved is at the end of this post.)

    Here you can see that in addition to the standard Online galleries there is an Inmeta Gallery that contains two extensions (our WiX templates and our custom TFS check-in policies). These can be installed/updated in the same way as extensions from the public Visual Studio Gallery.

    Installing the Service

    1. Download the installer (Inmeta.VSGalleryService.Install.msi) for the service and run it. The installation is straightforward - just select the web site, application pool and (optionally) a virtual directory where you want to install the service. (Note: if you want to run it in the web site root, just leave the application name blank.) Press Next and finish the installer.
    2. Open web.config in a text editor and locate the <applicationSettings> element. Edit the following setting values:
       - FeedTitle: the name that is shown if you browse to the service using a browser. Not used by Visual Studio.
       - BaseURI: when Visual Studio downloads the extension, it will be given this URI plus the name of the extension that you selected. This value should be in the following format: http://SERVER/[VDIR]/gallery/extension/
       - VSIXAbsolutePath: the path where you will deploy your extensions. This can be a local folder or a remote share. You just need to make sure that the application pool identity account has read permissions in this folder.
    3. Save web.config to finish the installation.
    4. Open a browser and enter the URL to the service. It should show an empty feed page.

    Adding the Private Gallery in Visual Studio 2012

    Now you need to add the gallery in Visual Studio. This is very easy and is done as follows:

    1. Go to Tools -> Options and select Environment -> Extensions and Updates.
    2. Press Add to add a new gallery.
    3. Enter a descriptive name, and add the URL that points to the web site/virtual directory where you installed the service in the previous step.
    4. Press OK to save the settings.

    Deploying an Extension

    This one is easy: just drop the file in the designated folder! :-) If it is a new version of an existing extension, the developers will be notified in the same way as for extensions from the public Visual Studio gallery.

    I hope that you will find this service useful. Please contact me if you have questions or suggestions for improvements!
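    For reference, the Atom Feed XML that a gallery URL must return looks roughly like the following. This is a from-memory sketch of the static-file format described on the MSDN page mentioned above (the service generates the equivalent automatically); the IDs, titles and version numbers are placeholders:

        <?xml version="1.0" encoding="utf-8"?>
        <feed xmlns="http://www.w3.org/2005/Atom">
          <title type="text">My Company Gallery</title>
          <id>5a2e8e98-0000-0000-0000-000000000000</id>
          <updated>2013-01-01T00:00:00Z</updated>
          <entry>
            <id>MyCompany.CheckinPolicies</id>
            <title type="text">My Company Checkin Policies</title>
            <summary type="text">Our custom TFS check-in policies</summary>
            <!-- src points at the VSIX file, relative to the feed URL -->
            <content type="application/octet-stream" src="MyCompany.CheckinPolicies.vsix" />
            <Vsix xmlns="http://schemas.microsoft.com/developer/vsx-syndication-schema/2010">
              <Id>MyCompany.CheckinPolicies</Id>
              <Version>1.0</Version>
            </Vsix>
          </entry>
        </feed>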

    Read the article

  • How to be Agile when new work keeps affecting completed work?

    - by jdln
    The project I'm working on is to re-skin an existing website. The functionality will stay the same; it's just the styles that are changing. The HTML is not changing; I'm only modifying the CSS files. The site is pretty complex. There are dozens of pages. Users can be logged in and have a number of different roles, and depending on their role, the content of the page and which pages they are allowed to see varies. We're using Git and GitHub.

    I'm trying to write CSS that works as components. So when the same form elements, headings, etc. appear on multiple pages, they are already styled and are consistent. Most of the time this is working well. Sadly, the format and class names in the HTML are at times messy and unpredictable. When I fix something on one page it can break another. The job is also harder because no one knows exactly all the variations that are possible due to the user roles. As such, I'm continuously finding new variations as I go along.

    I'm making headway by putting a lot of comments in my CSS. If I need to remove a CSS rule, I'll comment it out so I can still see it with the Chrome dev tools, and I'll put a comment in the CSS saying why I removed it and for what page this was done. This means that if, on another page, I'm about to add the rule back to fix a different problem, there is more of a chance I will see how this would break the first page. This allows me to either find a different solution that works for both pages, or to make the override page-specific (see the sketch at the end of this question). This has been working quite well for me.

    If I had complete free rein, and the only deadline was to finish the project by the end, then this method would be fine. However, my manager is trying to mitigate risk by breaking the work into areas to be completed per sprint. This is counter to how I have been approaching things, as something like my typography styles will affect all other pages on the site. The other issue is that the different stakeholders want to sign off each section as I go along. However, once I've finished a section, it may change if I change CSS that affects it while also affecting a new section I'm working on. I've asked that the stakeholders do a quick unofficial sign-off in stages (e.g. per sprint), and have the final official sign-off at the end of the project, but this is being met with resistance. I do understand why it would be higher risk to do this, but the only way to guarantee that a signed-off section will not change is to make ALL future changes page-specific.

    In addition to this, I'm being told that all work that I push to the Git repo should be ready to go live, and as such should not contain any code comments. This is risky for me, as I won't know until I've finished the site whether I will ever benefit from these comments or not.

    Has anyone else been in a similar situation and managed to find a compromise that worked for both my development approach and the desire of management and stakeholders for a more Agile approach? A more Agile workflow works great when you can break the work into components and know that once something is done it won't be affected by future work. However, the nature of this project makes this hard to achieve.
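    To make the "page-specific override" idea concrete, this is the shape of what I've been doing (all class names here are made up for illustration):

        /* shared component rule - used on every page that renders a form heading */
        .form-heading {
            font: bold 16px/1.4 Arial, sans-serif;
            color: #333;
        }

        /* removed rule, commented out rather than deleted so the Chrome dev tools
           still show it; removing it fixed the profile page */
        /* .form-heading { text-transform: uppercase; } */

        /* page-specific override, scoped by a per-page body class */
        .page-about .form-heading {
            color: #665544;
        }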

    Read the article

  • How can I get TFS2010 to run MSDEPLOY for me through MSBUILD?

    - by Simon_Weaver
    There is an excellent PDC talk available here which describes the new MSDEPLOY features in Visual Studio 2010, as well as how to deploy an application within TFS. You can use MSBUILD within TFS2010 to call through to MSDEPLOY to deploy your package to IIS. This is done by means of parameters to MSBUILD. The talk explains some of the command-line parameters, such as:

        /p:DeployOnBuild
        /p:DeployTarget=MsDeployPublish
        /p:CreatePackageOnPublish=True
        /p:MSDeployPublishMethod=InProc
        /p:MSDeployServiceURL=localhost
        /p:DeployIISAppPath="Default Web Site"

    But where is the documentation for this? I can't find any. I've been spending all day trying to get this to work and can't quite get it right, and I keep ending up with various errors. If I run the package's .cmd file it deploys perfectly. But I want to get the whole deployment running through msbuild using these arguments, and not a separate call to msdeploy or running the package .cmd file. How can I do this?

    PS. Yes, I do have the Web Deployment Agent Service running. I also have the management service running under IIS. I've tried using both.

    Args I'm using:

        /p:DeployOnBuild=True
        /p:DeployTarget=MsDeployPublish
        /p:Configuration=Release
        /p:CreatePackageOnPublish=True
        /p:DeployIisAppPath=staging.example.com
        /p:MsDeployServiceUrl=https://staging.example.com:8172/msdeploy.axd
        /p:AllowUntrustedCertificate=True

    giving me:

        C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets(2660):
        VsMsdeploy failed. (Remote agent (URL https://staging.example.com:8172/msdeploy.axd?site=staging.example.com)
        could not be contacted. Make sure the remote agent service is installed and started on the target computer.)
        Error detail:
        Remote agent (URL https://staging.example.com:8172/msdeploy.axd?site=staging.example.com) could not be contacted.
        Make sure the remote agent service is installed and started on the target computer.
        An unsupported response was received. The response header 'MSDeploy.Response' was '' but 'v1' was expected.
        The remote server returned an error: (401) Unauthorized.
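    For context, a full invocation with these arguments (whether typed at a command line or pasted into the build definition's "MSBuild Arguments") would look something like the sketch below; the project path and server name are placeholders:

        msbuild MyWebApp\MyWebApp.csproj /p:Configuration=Release ^
            /p:DeployOnBuild=True /p:DeployTarget=MsDeployPublish ^
            /p:CreatePackageOnPublish=True ^
            /p:DeployIisAppPath=staging.example.com ^
            /p:MsDeployServiceUrl=https://staging.example.com:8172/msdeploy.axd ^
            /p:AllowUntrustedCertificate=True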

    Read the article

  • new ActiveXObject('Word.Application') creates new winword.exe process when IE security does not allow it

    - by Mark Ott
    We are using MS Word as a spell checker for a few fields on a private company web site, and when IE security settings are correct it works well. (Zone for the site set to Trusted, and trusted zone modified to allow control to run without prompting.) The script we are using creates a word object and closes it afterward. While the object exists, a winword.exe process runs, but it is destroyed when the word object is closed. If our site is not set in the trusted zone (Internet zone with default security level) the call that creates the word object fails as expected, but the winword.exe process is still created. I do not have any way to interact with this process in the script, so the process stays around until the user logs off (users have no way to manually destroy the process, and it wouldn't be a good solution even if they did.) The call that attempts to create the object is... try { oWordApplication = new ActiveXObject('Word.Application'); } catch(error) { // irrelevant code removed, described in comments.. // notify user spell check cannot be used // disable spell check option } So every time the page is loaded this code may be run again, creating yet another orphan winword.exe process. oWordApplication is, of course, undefined in the catch block. I would like to be able to detect the browser security settings beforehand, but I have done some searching on this and do not think that it is possible. Management here is happy with it as it is. As long as IE security is set correctly it works, and it works well for our purposes. (We may eventually look at other options for spell check functionality, but this was quick, inexpensive, and does everything we need it to do.) This last problem bugs me and I'd like to do something about it, but I'm out of ideas and I have other things that are more in need of my attention. Before I put it aside, I thought I'd ask for suggestions here...
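    For the configurations where creation does succeed, it also helps to make the teardown explicit; a minimal sketch of the pattern (IE-only, and it cannot recover the orphaned process described above, since that is spawned before the script ever holds a reference):

        var oWordApplication = null;
        try {
            oWordApplication = new ActiveXObject('Word.Application');
            // ... call the spell-checking members here ...
        } catch (error) {
            // Creation blocked by zone security: notify the user and
            // disable the spell check option, as described above.
        } finally {
            if (oWordApplication !== null) {
                oWordApplication.Quit(0); // 0 = wdDoNotSaveChanges
                oWordApplication = null;
            }
        }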

    Read the article

  • Return template as string - Django

    - by Ninefingers
    Hi All, I'm still not sure this is the correct way to go about this, maybe not, but I'll ask anyway. I'd like to re-write WordPress (justification: because I can), albeit more simply, in Django, and I'm looking to be able to configure elements in different ways on the page. So for example I might have: blog models, a site update message model, a latest comments model. Now, for each page on the site I want the user to be able to choose the order of, and any items that go on, it. In my thought process, this would work something like:

        class Page(models.Model):
            Slug = models.CharField(max_length=100)

        class PageItem(models.Model):
            Page = models.ForeignKey(Page)
            ItemType = models.CharField(max_length=100)
            InstanceNum = models.IntegerField()  # all models have primary keys

    Then, ideally, my template would loop through all the PageItems in a page, which is easy enough to do. But what if my page item is a site update as opposed to a blog post? Basically, I am thinking I'd like to pull different item types back in different orders and display them using the appropriate templates. Now, I thought one way to do this would be, in views.py, to loop through all of the objects, call the appropriate view function, return a bit of HTML as a string, and then pipe that into the resultant template. My question is: is this the best way to go about doing things? If so, how do I do it? If not, which way should I be going? I'm pretty new to Django so I'm still learning what it can and can't do, so please bear with me. I've checked SO for dupes and don't think this has been asked before...
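    Django does support exactly this pattern through django.template.loader.render_to_string, which renders a template straight to a string; a rough sketch of the view (model fields as above, template names hypothetical):

        from django.shortcuts import render_to_response
        from django.template.loader import render_to_string

        def page(request, slug):
            page = Page.objects.get(Slug=slug)
            rendered = []
            for item in page.pageitem_set.order_by('InstanceNum'):
                # Pick a template per item type and render it to an HTML string.
                rendered.append(render_to_string(
                    'items/%s.html' % item.ItemType.lower(),
                    {'item': item}))
            return render_to_response('page.html', {'items': rendered})

    In page.html the strings would then need the safe filter (e.g. {{ item|safe }} inside the loop) so the pre-rendered HTML is not escaped.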

    Read the article

  • Windows Sharepoint Services - FullTextSqlQuery Document library Unable to find items created by SYSTEM ACCOUNT

    - by Ashok
    We have created an ASP.NET web app that uploads files to a WSS document library. The files get added under 'SYSTEM ACCOUNT' in the library. The FullTextSqlQuery class is used to search the document library items, but it only finds files that were uploaded by a Windows user account like 'Administrator' and ignores the ones uploaded by 'SYSTEM ACCOUNT'. As a result the search results are empty even though we have the necessary data in the document library. What could be the reason for this? The code is given below:

        public static List<SPListItem> GetListItemsFromFTSQuery(string searchText)
        {
            string docLibUrl = "http://localhost:6666/Articles%20Library/Forms/AllItems.aspx";
            List<SPListItem> items = new List<SPListItem>();
            DataTable retResults = new DataTable();

            SPSecurity.RunWithElevatedPrivileges(delegate
            {
                using (SPSite site = new SPSite(docLibUrl))
                {
                    SPWeb CRsite = site.OpenWeb();
                    SPList ContRep = CRsite.GetListFromUrl(docLibUrl);

                    FullTextSqlQuery fts = new FullTextSqlQuery(site);
                    fts.QueryText = "SELECT Title,ContentType,Path FROM portal..scope() WHERE freetext('" + searchText + "') AND (CONTAINS(Path,'\"" + ContRep.RootFolder.ServerRelativeUrl + "\"'))";
                    fts.ResultTypes = ResultType.RelevantResults;
                    fts.RowLimit = 300;

                    if (SPSecurity.AuthenticationMode != System.Web.Configuration.AuthenticationMode.Windows)
                        fts.AuthenticationType = QueryAuthenticationType.PluggableAuthenticatedQuery;
                    else
                        fts.AuthenticationType = QueryAuthenticationType.NtAuthenticatedQuery;

                    ResultTableCollection rtc = fts.Execute();
                    if (rtc.Count > 0)
                    {
                        using (ResultTable relevantResults = rtc[ResultType.RelevantResults])
                            retResults.Load(relevantResults, LoadOption.OverwriteChanges);

                        foreach (DataRow row in retResults.Rows)
                        {
                            if (!row["Path"].ToString().EndsWith(".aspx"))
                            //if (row["ContentType"].ToString() == "Item")
                            {
                                using (SPSite lookupSite = new SPSite(row["Path"].ToString()))
                                using (SPWeb web = lookupSite.OpenWeb())
                                {
                                    SPFile file = web.GetFile(row["Path"].ToString());
                                    items.Add(file.Item);
                                }
                            }
                        }
                    }
                } // using ends here
            });

            return items;
        }
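    One commonly reported cause is the upload path itself: files created inside SPSecurity.RunWithElevatedPrivileges are stamped with SYSTEM ACCOUNT as the author, and such items are frequently reported as missing from search results. A hedged workaround sketch is to impersonate a real user via SPUserToken when uploading (the URLs, account name, and library title below are placeholders):

        // Upload as a named user instead of SYSTEM ACCOUNT so the item
        // carries a normal author. Sketch only - adjust URLs and names.
        using (SPSite site = new SPSite("http://localhost:6666"))
        {
            SPUserToken token = site.RootWeb.AllUsers[@"DOMAIN\uploader"].UserToken;
            using (SPSite userSite = new SPSite(site.ID, token))
            using (SPWeb web = userSite.OpenWeb())
            {
                SPList library = web.Lists["Articles Library"];
                byte[] contents = System.IO.File.ReadAllBytes(@"C:\article.docx");
                library.RootFolder.Files.Add("article.docx", contents, true);
            }
        }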

    Read the article

  • AuthSub token from Google/YouTube API is always returned as invalid

    - by Miriam Raphael Roberts
    Anyone out there have experience with the YouTube/Google API? I am trying to login to Google/Youtube using clientLogin, retrieve an AuthSub token, exchange it for a multi-session token and then use it in our upload form. Just a note that we are not going to have other users logging into our (secure) website, this is for our use only (no multi-users). We just want a way to upload videos to our YT account via our own website without having to login/upload to YouTube. Ultimately, everything is dependent on the first step. My AuthSub token is always being returned as invalid (Error '403'). All the steps I used are below with username/password changed. Anyone have an insight on why my AuthSub is always invalid? I am spending an enormous amount of time trying to get this to work. STEP 1: Getting the authsub token from Youtube/Google POST /youtube/accounts/ClientLogin HTTP/1.1 User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4 Host: www.google.com Pragma: no-cache Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Content-Type:application/x-www-form-urlencoded Content-Length: 86 Email=MyGoogleUsername&Passwd=MyGooglePasswd&accountType=GOOGLE&service=youtube&source=Test RESPONSE RECEIVED: Auth=AIwbFAR99f3iACfkT-5PXCB-1tN4vlyP_1CiNZ8JOn6P-......yv4d4zeGRemNm4il1e-M6czgfDXAR0w9fQ YouTubeUser=MyYouTubeUsername CURL COMMAND USED: /usr/bin/curl -S -v --location https://www.google.com/youtube/accounts/ClientLogin --data Email=MyGoogleUsername&Passwd=MyGooglePasswd&accountType=GOOGLE&service=youtube&source=Test --header Content-Type:application/x-www-form-urlencoded STEP 2: Exchanging the AuthSub token for a multi-use token GET /accounts/AuthSubSessionToken HTTP/1.1 User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4 Host: www.google.com Pragma: no-cache Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Content-Type:application/x-www-form-urlencoded Authorization: AuthSub token="AIwbFASiRR3XDKs......p5Oy_VA_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" Response received: 403 Invalid AuthSub token. curl command used: /usr/bin/curl -S -v --location https://www.google.com/accounts/AuthSubSessionToken --header Content-Type:application/x-www-form-urlencoded -H Authorization: AuthSub token="AIwbFAQR_4xG2g.....vp3BQZW5XEMyIj_wFozHSTEQ-BQRfYuIY-1CyqLeQ" STEP 3: Checking to see if the token is good/valid GET /accounts/AuthSubTokenInfo HTTP/1.1 User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4 Host: www.google.com Pragma: no-cache Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Content-Type:application/x-www-form-urlencoded Authorization: AuthSub token="AIwbFASiRR3XDKsNkaIoPaujN5RQhKs3u.....A_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" Received response: 403 Invalid AuthSub token. 
curl command used: /usr/bin/curl -S -v --location https://www.google.com/accounts/AuthSubTokenInfo --header Content-Type:application/x-www-form-urlencoded -H Authorization: AuthSub token="AIwbFAQR_4xG2gHoAKDsNdFqdZdwWjGeNquOLpvp3BQZW5XEMyIj_wFozHSTEQ-BQRfYuIY-1CyqLeQ" STEP 4: Trying to get the upload token using the authsub token POST /action/GetUploadToken HTTP/1.1 User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4 Host: gdata.youtube.com Pragma: no-cache Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Content-Type:application/atom+xml Authorization: AuthSub token="AIwbFASiRR3XDKsNkaIoPaujN5RQhp5Oy_VA_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" X-Gdata-Key:key="AI39si5EQyo-TZPFAnmGjxJGFKpxd_7a6hEERh_3......R82AShoQ" Content-Length:0 GData-Version:2 Recevied Response: 401 Token invalid - Invalid AuthSub token. Curl command used: /usr/bin/curl -S -v --location http://gdata.youtube.com/action/GetUploadToken -H Content-Type:application/atom+xml -H Authorization: AuthSub token="AIwbFASiRR3XDKs....sYDp5Oy_VA_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" -H X-Gdata-Key:key="AI39si5EQyo-TZPFAnmGjxJGF......Kpxd6dN2J1oHFQYTj_7a6hEERh_3E48R82AShoQ" -H Content-Length:0 -H GData-Version:2
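    One detail worth flagging in the traces above: the token returned by ClientLogin in step 1 is normally presented with an 'Authorization: GoogleLogin auth=...' header, not wrapped in an 'AuthSub token="..."' header (AuthSub tokens come from a separate, browser-based flow), which would explain why every AuthSub endpoint reports the token as invalid. A hedged sketch of step 4 using that header form (token and key values truncated as in the question):

        /usr/bin/curl -S -v --location http://gdata.youtube.com/action/GetUploadToken \
            -H "Content-Type: application/atom+xml" \
            -H "Authorization: GoogleLogin auth=AIwbFAR99f3iACfk..." \
            -H "X-GData-Key: key=AI39si5EQyo-..." \
            -H "Content-Length: 0" -H "GData-Version: 2"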

    Read the article

  • Asp.Net membership via ASP.NET Website Administrator Tool

    - by luppi
    I created a database with aspnet_regsql; the database was created in SQL Server 2008 and not in the data folder of my project (do I need to move it to the folder manually?). Next, in the Web Site Administration Tool, I went to the provider section and clicked the Test button. I got an error: Could not establish a connection to the database. If you have not yet created the SQL Server database, exit the Web Site Administration tool, use the aspnet_regsql command-line utility to create and configure the database, and then return to this tool to set the provider. Maybe I need to set something in web.config, like membership settings or connection strings (or should the Web Site Administration Tool create those settings for me)? Update: Maybe it happens because I am using the full edition of SQL Server 2008 and not Express? Update 2: After setting the membership section and connection string to my aspnetdb database in the Web Site Administration Tool, I opened Security -> Security Setup Wizard -> Define Roles (stage 4) and got this error: An error was encountered. Please return to the previous page and try again. The following message may help in diagnosing the problem: Unable to connect to SQL Server database. at System.Web.Administration.WebAdminPage.CallWebAdminHelperMethod(Boolean isMembership, String methodName, Object[] parameters, Type[] paramTypes) at ASP.security_wizard_wizardpermission_ascx.OnInit(EventArgs e) at System.Web.UI.Control.InitRecursive(Control namingContainer) at System.Web.UI.Control.InitRecursive(Control namingContainer) at System.Web.UI.Control.InitRecursive(Control namingContainer) at System.Web.UI.Control.InitRecursive(Control namingContainer) at System.Web.UI.Control.InitRecursive(Control namingContainer) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
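    The Administration Tool does not write these settings for you, so the usual next step is to add a connection string pointing at the aspnetdb database and a membership provider that references it in web.config; a rough sketch (server and database names are placeholders, and the full edition of SQL Server 2008 works just as well as Express here):

        <connectionStrings>
          <add name="AspNetDb"
               connectionString="Data Source=.;Initial Catalog=aspnetdb;Integrated Security=True"
               providerName="System.Data.SqlClient" />
        </connectionStrings>
        <system.web>
          <membership defaultProvider="SqlProvider">
            <providers>
              <add name="SqlProvider"
                   type="System.Web.Security.SqlMembershipProvider"
                   connectionStringName="AspNetDb"
                   applicationName="/" />
            </providers>
          </membership>
        </system.web>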

    Read the article

  • Silverlight 4 - MVC 2 ASP.NET Membership integration "single sign on"

    - by Scrappydog
    Scenario: I have an ASP.NET MVC 2 site using ASP.NET Forms Authentication. The site includes a Silverlight 4 application that needs to securely call internal web services. The web services also need to be publicly exposed for third-party authenticated access. Challenges: Securely accessing web services from Silverlight using the current user's identity, without requiring the user to log in again in the Silverlight application. Providing a secure way for third-party applications to access the same web services with the same user's credentials, ideally without using ASP.NET Forms Authentication. Additional details and limitations: This application is hosted in Azure. We would rather NOT use RIA Services if at all possible. Solutions Under Consideration: I think that if the web services are part of the same MVC site that hosts the Silverlight application, then forms authentication should probably "just work" from Silverlight based on the user's forms auth cookies. But this seems to rule out the possibility of hosting the web services separately (which is desirable in our scenario). For third-party access to the web services I'm guessing that separate endpoints with a different authentication solution are probably the right answer, but I would rather only support one version of the services if possible... Questions: Can anybody point me towards any sample applications that implement something like this? How would you recommend implementing this solution?

    Read the article

  • Why do I get a connection error / timeout when using python suds to connect to Microsoft CRM?

    - by Chris R
    When I try to connect to an MS CRM web service using suds/python-ntlm, I am getting a timeout on requests. However, the code that I'm trying to replace -- which calls out to the cURL command line app to do the same call -- succeeds. Clearly something is different in the way that cURL is sending the command data, but I'll be damned if I know what the difference is. Below are the full details of the various calls. Anyone got any tips? Here's the code that is making the request, followed by the output. The cURL command code is below that, and its response follows. Hosts, users, and passwords have been changed to protect the innocent, of course. wsdl_url = 'https://client.service.host/MSCrmServices/2007/MetadataService.asmx?WSDL' username = r'domain\user.name' password = 'userpass' from suds.transport.https import WindowsHttpAuthenticated from suds.client import Client import logging logging.basicConfig(level=logging.INFO) logging.getLogger('suds.client').setLevel(logging.DEBUG) logging.getLogger('suds.transport').setLevel(logging.DEBUG) ntlmTransport = WindowsHttpAuthenticated(username=username, password=password) metadata_client = Client(wsdl_url, transport=ntlmTransport) request = metadata_client.factory.create('RetrieveAttributeRequest') request.MetadataId = '00000000-0000-0000-0000-000000000000' request.EntityLogicalName = 'opportunity' request.LogicalName = 'new_typeofcontact' request.RetrieveAsIfPublished = 'false' attr = metadata_client.service.Execute(request) print attr Here's the output: DEBUG:suds.client:sending to (http://client.service.host/MSCrmServices/2007/MetadataService.asmx) message: <SOAP-ENV:Envelope xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://schemas.microsoft.com/crm/2007/WebServices" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP-ENV:Header/> <ns0:Body> <ns1:Execute> <ns1:Request xsi:type="ns1:RetrieveAttributeRequest"> <ns1:MetadataId>00000000-0000-0000-0000-000000000000</ns1:MetadataId> <ns1:EntityLogicalName>opportunity</ns1:EntityLogicalName> <ns1:LogicalName>new_typeofcontact</ns1:LogicalName> <ns1:RetrieveAsIfPublished>false</ns1:RetrieveAsIfPublished> </ns1:Request> </ns1:Execute> </ns0:Body> </SOAP-ENV:Envelope> DEBUG:suds.client:headers = {'SOAPAction': u'"http://schemas.microsoft.com/crm/2007/WebServices/Execute"', 'Content-Type': 'text/xml'} DEBUG:suds.transport.http:sending: URL:http://client.service.host/MSCrmServices/2007/MetadataService.asmx HEADERS: {'SOAPAction': u'"http://schemas.microsoft.com/crm/2007/WebServices/Execute"', 'Content-Type': 'text/xml', 'Content-type': 'text/xml', 'Soapaction': u'"http://schemas.microsoft.com/crm/2007/WebServices/Execute"'} MESSAGE: <SOAP-ENV:Envelope xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://schemas.microsoft.com/crm/2007/WebServices" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP-ENV:Header/> <ns0:Body> <ns1:Execute> <ns1:Request xsi:type="ns1:RetrieveAttributeRequest"> <ns1:MetadataId>00000000-0000-0000-0000-000000000000</ns1:MetadataId> <ns1:EntityLogicalName>opportunity</ns1:EntityLogicalName> <ns1:LogicalName>new_typeofcontact</ns1:LogicalName> <ns1:RetrieveAsIfPublished>false</ns1:RetrieveAsIfPublished> </ns1:Request> </ns1:Execute> </ns0:Body> </SOAP-ENV:Envelope> ERROR: An unexpected error occurred while tokenizing input The following traceback may be corrupted or invalid The error message is: ('EOF in 
multi-line statement', (16, 0)) --------------------------------------------------------------------------- URLError Traceback (most recent call last) /Users/crose/projects/2366/crm/<ipython console> in <module>() /var/folders/nb/nbJAzxR1HbOppPcs6xO+dE+++TY/-Tmp-/python-67186icm.py in <module>() 19 request.LogicalName = 'new_typeofcontact' 20 request.RetrieveAsIfPublished = 'false' 21 ---> 22 attr = metadata_client.service.Execute(request) 23 print attr /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/client.pyc in __call__(self, *args, **kwargs) 537 return (500, e) 538 else: --> 539 return client.invoke(args, kwargs) 540 541 def faults(self): /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/client.pyc in invoke(self, args, kwargs) 596 self.method.name, timer) 597 timer.start() --> 598 result = self.send(msg) 599 timer.stop() 600 metrics.log.debug( /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/client.pyc in send(self, msg) 621 request = Request(location, str(msg)) 622 request.headers = self.headers() --> 623 reply = transport.send(request) 624 if retxml: 625 result = reply.message /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/transport/https.pyc in send(self, request) 62 def send(self, request): 63 self.addcredentials(request) ---> 64 return HttpTransport.send(self, request) 65 66 def addcredentials(self, request): /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/transport/http.pyc in send(self, request) 75 request.headers.update(u2request.headers) 76 log.debug('sending:\n%s', request) ---> 77 fp = self.u2open(u2request) 78 self.getcookies(fp, u2request) 79 result = Reply(200, fp.headers.dict, fp.read()) /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/transport/http.pyc in u2open(self, u2request) 116 return url.open(u2request) 117 else: --> 118 return url.open(u2request, timeout=tm) 119 120 def u2opener(self): /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in open(self, fullurl, data, timeout) 381 req = meth(req) 382 --> 383 response = self._open(req, data) 384 385 # post-process response /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in _open(self, req, data) 399 protocol = req.get_type() 400 result = self._call_chain(self.handle_open, protocol, protocol + --> 401 '_open', req) 402 if result: 403 return result /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in _call_chain(self, chain, kind, meth_name, *args) 359 func = getattr(handler, meth_name) 360 --> 361 result = func(*args) 362 if result is not None: 363 return result /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in http_open(self, req) 1128 1129 def http_open(self, req): -> 1130 return self.do_open(httplib.HTTPConnection, req) 1131 1132 http_request = AbstractHTTPHandler.do_request_ /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in do_open(self, http_class, req) 1103 r = h.getresponse() 1104 except socket.error, err: # XXX what error? 
-> 1105 raise URLError(err) 1106 1107 # Pick apart the HTTPResponse object to get the addinfourl URLError: <urlopen error [Errno 60] Operation timed out> The cURL command is: /opt/local/bin/curl --ntlm -u "domain\user.name:userpass" -k -d @- -A "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.1)" -H "Connection: Keep-Alive" -H "Content-Type: text/xml; charset=utf-8" -H "SOAPAction: http://schemas.microsoft.com/crm/2007/WebServices/Execute" https://client.service.host/MSCrmServices/2007/MetadataService.asmx The data that is piped to that cURL command: <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <soap:Header> <CrmAuthenticationToken xmlns="http://schemas.microsoft.com/crm/2007/WebServices"> <AuthenticationType xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes">0</AuthenticationType> <CrmTicket xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes"></CrmTicket> <OrganizationName xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes">CMIFS</OrganizationName> <CallerId xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes">00000000-0000-0000-0000-000000000000</CallerId> </CrmAuthenticationToken> </soap:Header> <soap:Body> <Execute xmlns="http://schemas.microsoft.com/crm/2007/WebServices"> <Request xsi:type="RetrieveAttributeRequest"> <MetadataId>00000000-0000-0000-0000-000000000000</MetadataId> <EntityLogicalName>opportunity</EntityLogicalName> <LogicalName>new_typeofcontact</LogicalName> <RetrieveAsIfPublished>false</RetrieveAsIfPublished> </Request> </Execute> </soap:Body> </soap:Envelope> Here's the response: <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <soap:Body> <ExecuteResponse xmlns="http://schemas.microsoft.com/crm/2007/WebServices"> <Response xsi:type="RetrieveAttributeResponse"> <AttributeMetadata xsi:type="PicklistAttributeMetadata"> <MetadataId>101346cf-a6af-4eb4-a4bf-9c3c6bbd6582</MetadataId> <SchemaName>New_TypeofContact</SchemaName> <LogicalName>new_typeofcontact</LogicalName> <EntityLogicalName>opportunity</EntityLogicalName> <AttributeType> <Value>Picklist</Value> </AttributeType> <!-- stuff here --> </AttributeMetadata> </Response> </ExecuteResponse> </soap:Body> </soap:Envelope>

    Read the article

  • Visual Studio 2010 Web Deployment

    - by Cranialsurge
    I am trying to use VS2010's 1-Click Publish feature to deploy a test site from my laptop to my server. I have the firewall turned off on both machines, and the MS Deployment Service is up and running on both my laptop and the server. However, when I try to publish from VS2010 on my laptop I get the following error: Error 1 Web deployment task failed.(Remote agent (URL https://192.168.1.181/:8172/msdeploy.axd?site=LocationsTest) could not be contacted. Make sure the remote agent service is installed and started on the target computer.) The requested resource does not exist, or the requested URL is incorrect. Error details: Remote agent (URL https://192.168.1.181/:8172/msdeploy.axd?site=LocationsTest) could not be contacted. Make sure the remote agent service is installed and started on the target computer. An unsupported response was received. The response header 'MSDeploy.Response' was '' but 'v1' was expected. The remote server returned an error: (404) Not Found. 0 0 Test.Web Any idea what I am doing wrong here?
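    As an aside, the stray "/:8172" in the logged URL suggests the Service URL box contains a trailing slash after the host; the publish dialog normally wants just the host there, with the site entered separately. A sketch of typical values (the address is the one from the question, the rest are illustrative):

        Service URL:      https://192.168.1.181:8172/msdeploy.axd
        Site/application: Default Web Site/LocationsTest
        Credentials:      an account configured as an IIS Manager user on the server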

    Read the article

  • How do I point a ListFieldIterator to a subweb when ControlMode = Display?

    - by Rich Bennema
    I have a custom page with the following ListFieldIterator: <SharePoint:ListFieldIterator runat="server" ControlMode="Display" OnInit="listFieldIterator_Init" /> Here is the Init event:

        protected void listFieldIterator_Init(object sender, EventArgs e)
        {
            ListFieldIterator listFieldIterator = sender as ListFieldIterator;
            SPContext current = SPContext.Current;
            SPFieldUrlValue value = new SPFieldUrlValue(current.ListItem[SPBuiltInFieldId.URL].ToString());
            Uri uri = new Uri(value.Url);

            using (SPWeb web = current.Site.OpenWeb(uri.AbsolutePath))
            {
                SPListItem item = web.GetListItem(uri.PathAndQuery);
                if (null != item)
                {
                    listFieldIterator.ItemContext = SPContext.GetContext(this.Context, item.ID, item.ParentList.ID, web);
                }
            }
        }

    Everything works great if the target List Item is in the same site as the page. But once it points to a different site, all the fields appear, but the values all appear in the following format: Failed to render "Title" column because of an error in the "Single line of text" field type control. See details in log. Exception message: List does not exist. The page you selected contains a list that does not exist. It may have been deleted by another user. If I change the ControlMode to Edit, the values are displayed correctly. So how do I get it to work while in Display mode?

    Read the article

  • URL detection with JavaScript

    - by josh
    Hi! I'm using the following script to force a specific page, when loaded for the first time, into a (third-party) iframe. <script type="text/javascript"> if(window.top==window) { location.reload() } else { } </script> (For clarification: this embedding is done automatically by the third-party system, but only if the page is refreshed once; for styling and some other reasons I want it there from the beginning.) Right now, I'm wondering if this script could be enhanced so that it can detect the current URL of its parent document and trigger a specific action. Let's say the URL of the third-party site is 'http://cgi.site.com/hp/...' and the URL of the iframe 'http://co.siteeps.com/hp/...'. Is it possible to realize something like this with JS: <script type="text/javascript"> if(URL is 'http://cgi.site.com/hp/...') { location.reload() } if(URL is 'http://co.siteeps.com/hp/...') { /* do nothing */ } </script> TIA josh
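    Reading the parent document's URL directly (window.top.location.href) is blocked across domains, but on the first load document.referrer usually carries the embedding page's address; a rough sketch of the idea (host names as in the question):

        if (window.top === window) {
            // Not framed yet: trigger the reload so the third-party
            // system embeds the page.
            location.reload();
        } else {
            // Framed: the embedding page's URL is usually in document.referrer.
            if (document.referrer.indexOf('http://cgi.site.com/hp/') === 0) {
                // embedded by the expected host - do nothing
            }
        }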

    Read the article

  • configuring local W3C validator on xampp - windows xp sp3

    - by Gabriel
    Hi, I have no experience with Perl. I am trying to configure the W3C validator on my localhost (win32). I have already followed all the instructions given by the W3C at http://validator.w3.org/docs/install_win.html, but I am getting the following error: Can't locate loadable object for module Encode::HanExtra in @INC (@INC contains: C:/xampp/perl/lib C:/xampp/perl/site/lib .) at C:/xampp/validator-0.8.6/httpd/cgi-bin/check line 49 Compilation failed in require at C:/xampp/validator-0.8.6/httpd/cgi-bin/check line 49. BEGIN failed--compilation aborted at C:/xampp/validator-0.8.6/httpd/cgi-bin/check line 49. I'm running Perl 5.10.1 on XAMPP and the following HanExtra module: http://cpansearch.perl.org/src/AUDREYT/Encode-HanExtra-0.23/lib/Encode/HanExtra.pm This is C:/xampp/validator-0.8.6/httpd/cgi-bin/check line 49: use Encode::HanExtra qw(); # for some chinese character encodings If I comment that line out using #, I get a similar message about another object: Can't locate loadable object for module Sub::Name in @INC (@INC contains: C:/xampp/perl/lib C:/xampp/perl/site/lib .) at C:/xampp/perl/site/lib/Moose.pm line 12 This is line 12 of Moose.pm: use Sub::Name 'subname'; I don't know how to proceed. I will appreciate any advice. Thanks
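    That particular error usually means the pure-Perl side of a module is present but its compiled (XS) loadable object is not, so reinstalling the modules with a CPAN client that can build XS code is the usual fix; a sketch using the XAMPP Perl (paths as in the question):

        C:\xampp\perl\bin\perl.exe -MCPAN -e "install Encode::HanExtra"
        C:\xampp\perl\bin\perl.exe -MCPAN -e "install Sub::Name"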

    Read the article

  • ASP.NET MVC GoogleBot Issues

    - by Khalid Abuhakmeh
    I wrote a site using ASP.NET MVC, and although it is not completely SEO optimized at this point I figured it is a good start. What I'm finding is that when I use Google's Webmaster Tools to fetch my site (to see what a GoogleBot sees) it sees this. HTTP/1.1 200 OK Cache-Control: public, max-age=1148 Content-Type: application/xhtml+xml; charset=utf-8 Expires: Mon, 18 Jan 2010 18:47:35 GMT Last-Modified: Mon, 18 Jan 2010 17:07:35 GMT Vary: * Server: Microsoft-IIS/7.0 X-AspNetMvc-Version: 2.0 X-AspNet-Version: 2.0.50727 X-Powered-By: ASP.NET Date: Mon, 18 Jan 2010 18:28:26 GMT Content-Length: 254 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title> Index </title> </head> <body> </body> </html> Obviously this is not what my site looks like. I have no clue where Google is getting that HTML from. Anybody have an answer and a solution? Anybody experience the same issues? Thanks in advance.
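    For what it's worth, the Cache-Control: public (with a counting-down max-age) and Vary: * headers in that trace are the fingerprint of ASP.NET output caching, so one hypothesis is that the crawler is being handed a stale cached variant of the page; the thing to look for would be an attribute along these lines on the action or controller:

        // Hypothetical example - if something like this is present, a cached
        // (possibly empty) response can be served to the crawler.
        [OutputCache(Duration = 3600, VaryByParam = "*")]
        public ActionResult Index()
        {
            return View();
        }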

    Read the article

  • SSL Slow in IE 8.0.7600.16385IC

    - by discovery.jerrya
    I'm having a performance problem on my company's web site using a specific version of IE 8 to load a page over HTTPS. Here's what I know.

    Server: virtual machine running on VMware ESX; Windows Server 2003 Enterprise Edition SP2; Tomcat 6.0.16.

    Client: Windows XP and Windows 7; Internet Explorer 8.0.7600.16385IC.

    The page loads/refreshes in under 1 second using HTTP, but in 15-16 seconds over HTTPS with this version of IE. The problem is reproduced on multiple client machines with the same IE version, and on multiple client machines with different Windows versions (XP and 7). There is no performance problem using Chrome, Firefox, Opera, or Safari from the same machine, no performance problem using other versions of IE 8 on other machines, and no performance problem on other sites using HTTPS from the same client machine. The slow load causes virtually no CPU, memory, or I/O spike on either the server or the client machine.

    The pages in question use JavaScript and innerHTML to replace the contents of div elements to create a collapsible menu, and an iframe to display some content. A couple of the div elements contain images. If I remove the iframe and the JavaScript, the performance issues go away. However, rewriting the entire site to make these changes would be very time consuming. We're in the process of replacing the whole site, but it may be 2-3 months before we do so, and we really cannot live with this slowdown that long.

    I've already looked at several IE tuning options, such as disabling add-ons, running IE-rereg, and resetting IE, with no luck. Does anyone have any suggestions?

    Read the article

< Previous Page | 329 330 331 332 333 334 335 336 337 338 339 340  | Next Page >