Search Results

Search found 21160 results on 847 pages for 'vs 2010'.


  • Can I stop SharePoint from prompting during file edit?

    - by uSlackr
    We use a SharePoint 2007 site internally with Office 2010. Whenever I open a Word document to edit it, I get a prompt saying: "Some files can harm your computer. If the file information below looks suspicious, or you do not fully trust the source, do not open the file." I've been unable to find a reliable answer around the web. Some suggested using the Windows File Types dialog to remove the prompt-on-download option, but that dialog is not available in Windows 7.


  • Have optical drive connectors changed in the last year?

    - by Zippityboomba
    In early 2010 my motherboard freaked out, and I was able to drop in a new board and connect my two DVD drives that were initially bought in 2005. Tonight my new components arrived to build my next system, and I figured I'd just reuse the old DVD drives. Surprise, the long flat connector cable has no mate on the new motherboard. Reckon I'll just pick up a new DVD drive for $25, but what gives? Thanks!


  • Small Business Server 2011 and Remote access to documents

    - by Tim Long
    Assume I'm working away from the office; it's a hotel computer - Windows 7, Office 2010 and fast - so the best possible conditions. Using Companyweb, every time I open a document I have to go through the logon process, which seems odd. Is this a 'by design' feature, or is something wrong with my configuration? When I do open the documents, are they being stored somewhere locally that I should be looking to delete on this computer, or are they in a temporary file?


  • Powerpoint: control image sequence with a slider

    - by Mat
    I have an image sequence (10 images) that visualizes, step by step, the construction of something. I'd like to include these images in my PowerPoint presentation in such a way that I can step between them by moving a slider below the image, similar to the time bar of a movie player (in QuickTime, for example, you can step through a movie file frame by frame by moving the bar at the bottom). What's the easiest way to do this with Microsoft PowerPoint 2010?
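
    A minimal sketch of one possible route, with assumptions throughout: ten pictures named Step1..Step10 sit on slide 2, and a Microsoft Forms 2.0 ScrollBar control named ScrollBar1 (Min = 1, Max = 10) has been placed below them via the Developer tab. Its Change event in VBA then drives which picture is visible:

        ' Sketch only - goes in the code module PowerPoint creates for the slide's ActiveX controls.
        Private Sub ScrollBar1_Change()
            Dim i As Integer
            Dim sld As Slide
            Set sld = ActivePresentation.Slides(2)   ' slide holding the image sequence (assumed)
            For i = 1 To 10
                ' Show only the picture matching the current scroll bar position.
                sld.Shapes("Step" & i).Visible = (i = ScrollBar1.Value)
            Next i
        End Sub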


  • Script to restart server when no user is remotely connected

    - by RitonLaJoie
    I'm looking for something special: here is a Windows server used for development by 4-5 people who work on it remotely through Remote Desktop. I'm installing Visual Studio 2010 on it, and the installation won't continue until the server has been restarted. I'm looking for a way to reboot when the server is idle (when nobody is connected to it through Remote Desktop) so that the reboot doesn't bother anyone. Is there some software that can do that? Thanks!
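
    One low-tech sketch rather than a packaged product: a batch file run from Task Scheduler that checks for active Remote Desktop sessions and reboots only when none are found. The "rdp" filter is an assumption about how the sessions appear in query session output, and a console logon would not be detected:

        @echo off
        rem Reboot only if no RDP session is currently Active.
        query session | findstr /i "rdp" | findstr /i "Active" >nul
        if errorlevel 1 (
            shutdown /r /t 60 /c "Rebooting idle dev server to finish the VS2010 install"
        )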


  • Is there a fix for Windows 8 Release Preview freezing intermittently on Macbook Air

    - by RyanCEI
    I've successfully installed Windows 8 x64 Release Preview on a 2010 MacBook Air (4GB/256GB). I was also able to install the Apple Boot Camp tools without incident, but the computer will intermittently freeze up completely (requiring a forced shutdown) for no apparent reason, and there is no logging information to help me troubleshoot this any further. I have no other software installed at this point; only the latest updates from Microsoft. Any ideas?


  • How can I install Java on Windows 7 without messing up the system?

    - by robert_d
    I've installed Java 1.6.0_31 32-bit on a Windows 7 64-bit system, but the installation messed up my system. For example: when I start Google Chrome I get the error "Your preferences can not be read"; Visual Studio 2010, after launching, shows an error that "The Application Data folder for Visual Studio could not be created"; and the shortcut to the Downloads folder in Windows Explorer no longer works. Is there a way to install Java without messing up Windows 7? Or maybe this mess can be cleaned up after installing Java, but how?


  • XSL testing empty strings with <xsl:if> and sorting

    - by AdRock
    I am having trouble with a template that has to check 3 different nodes and, if they are not empty, print the data. I am using <xsl:if> for each node and then doing the output, but it is not printing anything. It is like the test returns zero. I have selected the parent node of each node I want to check the length on as the template match, but it still doesn't work.

    Another thing: how do I sort the list using <xsl:sort>? I tried using this, but I get an error about loading the stylesheet. If I take out the sort, it works:

        <xsl:template match="folktask/member">
          <xsl:if test="user/account/userlevel='3'">
            <xsl:sort select="festival/event/datefrom"/>
            <div class="userdiv">
              <xsl:apply-templates select="user"/>
              <xsl:apply-templates select="festival"/>
            </div>
          </xsl:if>
        </xsl:template>

        <xsl:template match="festival">
          <xsl:apply-templates select="contact"/>
        </xsl:template>

    This should hopefully finish all my stylesheets. This is the template I am calling:

        <xsl:template match="contact">
          <xsl:if test="string-length(contelephone)!=0">
            <div class="small bold">TELEPHONE:</div>
            <div class="large">
              <xsl:value-of select="contelephone/." />
            </div>
          </xsl:if>
          <xsl:if test="string-length(conmobile)!=0">
            <div class="small bold">MOBILE:</div>
            <div class="large">
              <xsl:value-of select="conmobile/." />
            </div>
          </xsl:if>
          <xsl:if test="string-length(fax)!=0">
            <div class="small bold">FAX:</div>
            <div class="large">
              <xsl:value-of select="fax/." />
            </div>
          </xsl:if>
        </xsl:template>

    And here is a section of my XML. If you need me to edit my post so you can see the full code I will, but the rest works fine:

        <folktask>
          <member>
            <user id="4">
              <personal>
                <name>Connor Lawson</name>
                <sex>Male</sex>
                <address1>12 Ash Way</address1>
                <address2></address2>
                <city>Swindon</city>
                <county>Wiltshire</county>
                <postcode>SN3 6GS</postcode>
                <telephone>01791928119</telephone>
                <mobile>07338695664</mobile>
                <email>[email protected]</email>
              </personal>
              <account>
                <username>iTuneStinker</username>
                <password>3a1f5fda21a07bfff20c41272bae7192</password>
                <userlevel>3</userlevel>
                <signupdate>2010-03-26T09:23:50</signupdate>
              </account>
            </user>
            <festival id="1">
              <event>
                <eventname>Oxford Folk Festival</eventname>
                <url>http://www.oxfordfolkfestival.com/</url>
                <datefrom>2010-04-07</datefrom>
                <dateto>2010-04-09</dateto>
                <location>Oxford</location>
                <eventpostcode>OX1 9BE</eventpostcode>
                <coords>
                  <lat>51.735640</lat>
                  <lng>-1.276136</lng>
                </coords>
              </event>
              <contact>
                <conname>Stuart Vincent</conname>
                <conaddress1>P.O. Box 642</conaddress1>
                <conaddress2></conaddress2>
                <concity>Oxford</concity>
                <concounty>Bedfordshire</concounty>
                <conpostcode>OX1 3BY</conpostcode>
                <contelephone>01865 79073</contelephone>
                <conmobile></conmobile>
                <fax></fax>
                <conemail>[email protected]</conemail>
              </contact>
            </festival>
          </member>
        </folktask>
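
    For what it's worth, the stylesheet-loading error is consistent with the fact that xsl:sort is only allowed as a child of xsl:apply-templates or xsl:for-each, not of xsl:if. A minimal sketch of sorting at the point where the member templates are applied (the select paths are taken from the question above):

        <!-- Sort the members by event start date where the templates are applied -->
        <xsl:apply-templates select="folktask/member">
          <xsl:sort select="festival/event/datefrom"/>
        </xsl:apply-templates>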


  • “Being Agile” Means No Documentation, Right?

    - by jesschadwick
    Ask most software professionals what Agile is and they’ll probably start talking about flexibility and delivering what the customer wants.  Some may even mention the word “iterations”.  But inevitably, they’ll say at some point that it means less or even no documentation.  After all, doesn’t creating, updating, and circulating painstakingly comprehensive documentation that everyone and their mother have officially signed off on go against the very core of Agile?  Of course it does!  But really, they’re missing the point! Read The Agile Manifesto. (No, seriously - read it now. It’s short. I’ll wait.)  It’s essentially a list of values.  More specifically, it’s a right-side/left-side weighted list of values:  “Value this over that”. Many people seem to get the impression that this is really a “good vs. bad” list and that those values on the right side are evil and should essentially be tossed on the floor.  This leads to the conclusion that in order to be Agile we must throw away our fancy expensive tools, document as little as possible, and scoff at the idea of a project plan.  This conclusion is quite convenient because it essentially means “less work, more productivity!” (particularly in regards to the documentation and project planning).  I couldn’t disagree with this conclusion more. My interpretation of the Manifesto targets “over” as the operative word.  It’s not just a list of right vs. wrong or good vs. bad.  It’s a list of priorities.  In other words, none of the concepts on the list should be removed from your development lifecycle – they are all important… just not equally important.  This is not a unique interpretation, in fact it says so right at the end of the manifesto! So, the next time your team sits down to tackle that big new project, don’t make the first order of business to outlaw all meetings, documentation, and project plans.  Instead, collaborate with both your team and the business members involved (you do have business members sitting in the room, directly involved in the project planning, right?) and determine the bare minimum that will allow all of you to work and communicate in the best way possible.  This often means that you can pick and choose which parts of the Agile methodologies and process work for your particular project and end up with an amalgamation of Waterfall, Agile, XP, SCRUM and whatever other methodologies the members of your team have been exposed to (my favorite is “SCRUMerfall”). The biggest implication of this is that there is no one way to implement Agile.  There is no checklist with which you can tick off boxes and confidently conclude that, “Yep, we’re Agile™!”  In fact, depending on your business and the members of your team, moving to Agile full-bore may actually be ill-advised.  Such a drastic change just ends up taking everyone out of their comfort zone which they inevitably fall back into by the end of the project.  This often results in frustration to the point that Agile is abandoned altogether because “we just need to ship something!”  Needless to say, this is far more devastating to a project. Instead, I offer this approach: keep it simple and take it slow.  If your business members or customers are only involved at the beginning phases and nowhere to be seen until the project is delivered, invite them to your daily meetings; encourage them to keep up to speed on what’s going on on a daily basis and provide feedback.  
If your current process is heavy on the documentation, try to reduce it as opposed to eliminating it outright.  If you need a “TPS Change Request” signed in triplicate with a 5-day “cooling off period” before a change is implemented, try a simple bug tracking system!  Tighten the feedback loop! Finally, at the end of every “iteration” (whatever that means to you, as long as it’s relatively frequent), take as much time as you can spare (even if it’s an hour or so) and perform some kind of retrospective.  Learn from your mistakes.  Figure out what’s working for you and what’s not, then fix it.  Before you know it you’ve got a handful of iterations and/or projects under your belt and you sit down with your team to realize that, “Hey, this is working - we’re pretty Agile!”  After all, Agile is a Zen journey.  It’s a destination that you aim for, not force, and even if you never reach true “enlightenment” that doesn’t mean your team can’t be exponentially better off from merely taking the journey.


  • How-to filter table filter input to only allow numeric input

    - by frank.nimphius
    In a previous ADF Code Corner post, I explained how to change the table filter behavior by intercepting the query condition in a query filter. See sample #30 at http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html In this OTN Harvest post I explain how to prevent users from providing invalid character entries as table filter criteria, to avoid problems upon re-querying the table. In the example shown next, only numeric values are allowed for a table column filter. To create a table that allows data filtering, drag a View Object – or a data collection of a Web Service or JPA business service – from the Data Controls panel and drop it as a table. Choose the Enable Filtering option in the Edit Table Columns dialog so the table renders with the column filter boxes displayed. The table filter fields are created using implicit af:inputText components that need to be customized for you to apply a custom filter input component, or to change the input behavior. To change the input filter so that only a defined set of input keys is allowed, you need to replace the default filter field with your own af:inputText field, to which you apply an af:clientListener tag that filters user keyboard entries. For this, in the Oracle JDeveloper visual editor, select the column whose filter you want to change and expand the column node in the Oracle JDeveloper Structure Window. Part of the column definition is the Column facet node. Expand the facets so you see the filter facet entry. The filter facet is grayed out, as there is no custom facet defined. In the next step, open the Component Palette (Ctrl+Shift+P) and drag an Input Text component onto the facet. This completes the first part of the filter customization. To make the custom filter component work, you need to map the af:inputText component value property to the ADF filter criteria that is exposed in the Expression Builder. Open the Expression Builder for the filter input component value property by clicking the arrow icon to its right. In the Expression Builder, expand the JSP Objects | vs | filterCriteria node to select the attribute name represented by the table column. The vs entry is the name of a variable that is defined on the table and that grants you access to the table attributes. Now that the filter works as before – though using a custom filter input component – you can add the af:clientListener tag to your custom filter component – af:inputText – to call out to JavaScript when users type in the column filter field. Point the client listener's method property to a JavaScript function that you reference or add using the af:resource tag, and set the type property value to keyDown.

        <af:document id="d1">
          <af:resource type="javascript" source="/js/filterHandler.js"/>
          …

    The filter definition looks as shown below:

        <af:inputText label="Label 1" id="it1"
                      value="#{vs.filterCriteria.Employe
          <af:clientListener method="suppressCharacterInput"
                             type="keyDown"/>
        </af:inputText>

    The JavaScript code that you can use to filter either character inputs or numeric inputs is shown below. Just store this code in an external JavaScript (.js) file and reference it from the af:resource tag.
        //Allow numbers, cursor control keys and delete keys
        function suppressCharacterInput(evt) {
            var _keyCode = evt.getKeyCode();
            var _filterField = evt.getCurrentTarget();
            var _oldValue = _filterField.getValue();
            if (!((_keyCode < 57) || (_keyCode > 96 && _keyCode < 105))) {
                _filterField.setValue(_oldValue);
                evt.cancel();
            }
        }

        //Allow characters, cursor control keys and delete keys
        function suppressNumericInput(evt) {
            var _keyCode = evt.getKeyCode();
            var _filterField = evt.getCurrentTarget();
            var _oldValue = _filterField.getValue();
            //check for numbers
            if ((_keyCode < 57 && _keyCode > 47) ||
                (_keyCode > 96 && _keyCode < 105)) {
                _filterField.setValue(_oldValue);
                evt.cancel();
            }
        }

    But what if browsers don't allow JavaScript? Don't worry about this: if browsers did not support JavaScript, then ADF Faces as a whole would not work, and you would have a different problem.


  • GWT Query fails second time -only.

    - by Koran
    HI, I have a visualization function in GWT which calls for two instances of the same panels - two queries. Now, suppose one url is A and the other url is B. Here, I am facing an issue in that if A is called first, then both A and B works. If B is called first, then only B works, A - times out. If I call both times A, only the first time A works, second time it times out. If I call B twice, it works both times without a hitch. Even though the error comes at timed out, it actually is not timing out - in FF status bar, it shows till - transferring data from A, and then it gets stuck. This doesnt even show up in the first time query. The only difference between A and B is that B returns very fast, while A returns comparitively slow. The sample code is given below: public Panel(){ Runnable onLoadCallback = new Runnable() { public void run() { Query query = Query.create(dataUrl); query.setTimeout(60); query.send(new Callback() { public void onResponse(QueryResponse response) { if (response.isError()){ Window.alert(response.getMessage()); } } } } VisualizationUtils.loadVisualizationApi(onLoadCallback, PieChart.PACKAGE); } What could be the reason for this? I cannot think of any reason why this should happen? Why is this happening only for A and not for B? EDIT: More research. The query which works all the time (i.e. B is the example URL given in GWT visualization site: see comment [1]). So, I tried in my app engine to reproduce it - the following way s = "google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'106459472',table:{cols:[{id:'A',label:'Source',type:'string',pattern:''},{id:'B',label:'Percent',type:'number',pattern:'#0.01%'}],rows:[{c:[{v:'Oil'},{v:0.37,f:'37.00%'}]},{c:[{v:'Coal'},{v:0.25,f:'25.00%'}]},{c:[{v:'Natural Gas'},{v:0.23,f:'23.00%'}]},{c:[{v:'Nuclear'},{v:0.06,f:'6.00%'}]},{c:[{v:'Biomass'},{v:0.04,f:'4.00%'}]},{c:[{v:'Hydro'},{v:0.03,f:'3.00%'}]},{c:[{v:'Solar Heat'},{v:0.005,f:'0.50%'}]},{c:[{v:'Wind'},{v:0.003,f:'0.30%'}]},{c:[{v:'Geothermal'},{v:0.002,f:'0.20%'}]},{c:[{v:'Biofuels'},{v:0.002,f:'0.20%'}]},{c:[{v:'Solar photovoltaic'},{v:4.0E-4,f:'0.04%'}]}]}});"; response = HttpResponse(s, content_type="text/plain; charset=utf-8") response['Expires'] = time.strftime('%a, %d %b %Y %H:%M:%S GMT', time.gmtime()) return response Where s is the data when we run the query for B. I tried to add Expires etc too, since that seems to be the only header which has the difference, but now, the query fails all the time. For more info - I am now sending the difference between my server response vs the working server response. They seems to be pretty similar. HTTP/1.0 200 OK Content-Type: text/plain Date: Wed, 16 Jun 2010 11:07:12 GMT Server: Google Frontend Cache-Control: private, x-gzip-ok="" google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'106459472',table:{cols:[{id:'A',label:'Source',type:'string',pattern:''},{id:'B',label:'Percent',type:'number',pattern:'#0.01%'}],rows:[{c:[{v:'Oil'},{v:0.37,f:'37.00%'}]},{c:[{v:'Coal'},{v:0.25,f:'25.00%'}]},{c:[{v:'Natural Gas'},{v:0.23,f:'23.00%'}]},{c:[{v:'Nuclear'},{v:0.06,f:'6.00%'}]},{c:[{v:'Biomass'},{v:0.04,f:'4.00%'}]},{c:[{v:'Hydro'},{v:0.03,f:'3.00%'}]},{c:[{v:'Solar Heat'},{v:0.005,f:'0.50%'}]},{c:[{v:'Wind'},{v:0.003,f:'0.30%'}]},{c:[{v:'Geothermal'},{v:0.002,f:'0.20%'}]},{c:[{v:'Biofuels'},{v:0.002,f:'0.20%'}]},{c:[{v:'Solar photovoltaic'},{v:4.0E-4,f:'0.04%'}]}]}});Connection closed by foreign host. Mac$ telnet spreadsheets.google.com 80 Trying 209.85.231.100... 
Connected to spreadsheets.l.google.com. Escape character is '^]'. GET http://spreadsheets.google.com/tq?key=pWiorx-0l9mwIuwX5CbEALA&range=A1:B12&gid=0&headers=-1 HTTP/1.0 200 OK Content-Type: text/plain; charset=UTF-8 Date: Wed, 16 Jun 2010 11:07:58 GMT Expires: Wed, 16 Jun 2010 11:07:58 GMT Cache-Control: private, max-age=0 X-Content-Type-Options: nosniff X-XSS-Protection: 1; mode=block Server: GSE google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'106459472',table:{cols:[{id:'A',label:'Source',type:'string',pattern:''},{id:'B',label:'Percent',type:'number',pattern:'#0.01%'}],rows:[{c:[{v:'Oil'},{v:0.37,f:'37.00%'}]},{c:[{v:'Coal'},{v:0.25,f:'25.00%'}]},{c:[{v:'Natural Gas'},{v:0.23,f:'23.00%'}]},{c:[{v:'Nuclear'},{v:0.06,f:'6.00%'}]},{c:[{v:'Biomass'},{v:0.04,f:'4.00%'}]},{c:[{v:'Hydro'},{v:0.03,f:'3.00%'}]},{c:[{v:'Solar Heat'},{v:0.005,f:'0.50%'}]},{c:[{v:'Wind'},{v:0.003,f:'0.30%'}]},{c:[{v:'Geothermal'},{v:0.002,f:'0.20%'}]},{c:[{v:'Biofuels'},{v:0.002,f:'0.20%'}]},{c:[{v:'Solar photovoltaic'},{v:4.0E-4,f:'0.04%'}]}]}});Connection closed by foreign host. Also, please note that App engine did not allow the Expires header to go through - can that be the reason? But if that is the reason, then it should not fail if B is sent first and then A. Comment [1] : http://spreadsheets.google.com/tq?key=pWiorx-0l9mwIuwX5CbEALA&range=A1:B12&gid=0&headers=-1


  • Problem using Blend 3 Interaction.Behaviours in VS2010

    - by Andre Luus
    There seems to be a problem with support for the Interactivity namespace of Blend 3 in the VS2010 xaml editor. I have the following installed: VS2010 Blend 3 + Blend 3 SDK I am trying to compile a demo project that is targeted at .Net 4 Client Profile and has a reference to System.Windows.Interactivity (in the Blend 3 folder). In the object browser everything appears to be fine. I can also access Interaction.Behaviours from code-behind, but if I put the namespace xmlns:i="http://schemas.microsoft.com/expression/2010/interactivity" in the xaml file and try to use it, the intellisense is blank. If I copy something in there anyway, the compiler says: The tag 'Interaction.Behaviors' does not exist in XML namespace 'http://schemas.microsoft.com/expression/2010/interactivity'. Do I need to install Blend 4 RC or something?
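
    One thing that may be worth trying before installing anything else (a sketch, not verified against this exact setup): the /2010/interactivity URI comes from the Blend 4 SDK build of System.Windows.Interactivity, so the Blend 3 assembly may simply not declare it, which would explain the compiler message. Mapping the CLR namespace directly avoids relying on that URI; the class and control names below are hypothetical:

        <UserControl x:Class="Demo.MainView"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity">
            <Grid>
                <i:Interaction.Behaviors>
                    <!-- behaviors from the Blend 3 SDK go here -->
                </i:Interaction.Behaviors>
            </Grid>
        </UserControl>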


  • Driver error when using multiple shaders

    - by Jinxi
    I'm using 3 different shaders: a tessellation shader to use the tessellation feature of DirectX11 :) a regular shader to show how it would look without tessellation and a text shader to display debug-info such as FPS, model count etc. All of these shaders are initialized at the beginning. Using the keyboard, I can switch between the tessellation shader and regular shader to render the scene. Additionally, I also want to be able toggle the display of debug-info using the text shader. Since implementing the tessellation shader the text shader doesn't work anymore. When I activate the DebugText (rendered using the text-shader) my screens go black for a while, and Windows displays the following message: Display Driver stopped responding and has recovered This happens with either of the two shaders used to render the scene. Additionally: I can start the application using the regular shader to render the scene and then switch to the tessellation shader. If I try to switch back to the regular shader I get the same error as with the text shader. What am I doing wrong when switching between shaders? What am I doing wrong when displaying text at the same time? What file can I post to help you help me? :) thx P.S. I already checked if my keyinputs interrupt at the wrong time (during render or so..), but that seems to be ok Testing Procedure Regular Shader without text shader Add text shader to Regular Shader by keyinput (works now, I built the text shader back to only vertex and pixel shader) (somthing with the z buffer is stil wrong...) Remove text shader, then change shader to Tessellation Shader by key input Then if I add the Text Shader or switch back to the Regular Shader Switching/Render Shader Here the code snipet from the Renderer.cpp where I choose the Shader according to the boolean "m_useTessellationShader": if(m_useTessellationShader) { // Render the model using the tesselation shader ecResult = m_ShaderManager->renderTessellationShader(m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(), worldMatrix, viewMatrix, projectionMatrix, textures, texturecount, m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(), (D3DXVECTOR3)m_Camera->getPosition(), TESSELLATION_AMOUNT); } else { // todo: loaded model depends on distance to camera // Render the model using the light shader. ecResult = m_ShaderManager->renderShader(m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(), lod_level, textures, texturecount, m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(), worldMatrix, viewMatrix, projectionMatrix); } And here the code snipet from the Mesh.cpp where I choose the Typology according to the boolean "useTessellationShader": // RenderBuffers is called from the Render function. The purpose of this function is to set the vertex buffer and index buffer as active on the input assembler in the GPU. Once the GPU has an active vertex buffer it can then use the shader to render that buffer. void Mesh::renderBuffers(ID3D11DeviceContext* deviceContext, bool useTessellationShader) { unsigned int stride; unsigned int offset; // Set vertex buffer stride and offset. stride = sizeof(VertexType); offset = 0; // Set the vertex buffer to active in the input assembler so it can be rendered. deviceContext->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset); // Set the index buffer to active in the input assembler so it can be rendered. 
deviceContext->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, 0); // Check which Shader is used to set the appropriate Topology // Set the type of primitive that should be rendered from this vertex buffer, in this case triangles. if(useTessellationShader) { deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST); }else{ deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST); } return; } RenderShader Could there be a problem using sometimes only vertex and pixel shader and after switching using vertex, hull, domain and pixel shader? Here a little overview of my architecture: TextClass: uses font.vs and font.ps deviceContext-VSSetShader(m_vertexShader, NULL, 0); deviceContext-PSSetShader(m_pixelShader, NULL, 0); deviceContext-PSSetSamplers(0, 1, &m_sampleState); RegularShader: uses vertex.vs and pixel.ps deviceContext-VSSetShader(m_vertexShader, NULL, 0); deviceContext-PSSetShader(m_pixelShader, NULL, 0); deviceContext-PSSetSamplers(0, 1, &m_sampleState); TessellationShader: uses tessellation.vs, tessellation.hs, tessellation.ds, tessellation.ps deviceContext-VSSetShader(m_vertexShader, NULL, 0); deviceContext-HSSetShader(m_hullShader, NULL, 0); deviceContext-DSSetShader(m_domainShader, NULL, 0); deviceContext-PSSetShader(m_pixelShader, NULL, 0); deviceContext-PSSetSamplers(0, 1, &m_sampleState); ClearState I'd like to switch between 2 shaders and it seems they have different context parameters, right? In clearstate methode it says it resets following params to NULL: I found following in my Direct3D Class: depth-stencil state - m_deviceContext-OMSetDepthStencilState rasterizer state - m_deviceContext-RSSetState(m_rasterState); blend state - m_device-CreateBlendState viewports - m_deviceContext-RSSetViewports(1, &viewport); I found following in every Shader Class: input/output resource slots - deviceContext-PSSetShaderResources shaders - deviceContext-VSSetShader to - deviceContext-PSSetShader input layouts - device-CreateInputLayout sampler state - device-CreateSamplerState These two I didn't understand, where can I find them? predications - ? scissor rectangles - ? Do I need to store them all localy so I can switch between them, because it doesn't feel right to reinitialize the Direct3d and the Shaders by every switch (key input)?!
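
    Not from the original post, but one direction worth checking, sketched under the assumption that leftover hull/domain shader bindings are the culprit: when falling back to the regular shader, explicitly unbind the tessellation stages and restore the triangle-list topology instead of calling ClearState:

        // Sketch only: clear the tessellation stages so the pipeline no longer expects patch input.
        deviceContext->VSSetShader(m_vertexShader, NULL, 0);
        deviceContext->HSSetShader(NULL, NULL, 0);   // unbind hull shader
        deviceContext->DSSetShader(NULL, NULL, 0);   // unbind domain shader
        deviceContext->PSSetShader(m_pixelShader, NULL, 0);
        deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);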


  • Get foreign key in external content type to show up in list view

    - by Rob
    I have come across blogs about how to set up an external content type (e.g. http://www.dotnetmafia.com/blogs/dotnettipoftheday/archive/2010/02/02/it-s-easy-to-configure-an-external-list-with-business-connectivity-services-bcs-in-sharepoint-foundation-2010.aspx), but I have not seen any examples of what to do when your external SQL DB has foreign keys. For example, I have a database that has orders and customers. An order has one and only one customer, and a customer can have many orders. How can I set up external content types in such a way that, when in the list view of these external content types, I can jump between them and possibly look up values in the other type?
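
    For context, this is the shape of the relationship being described - a rough SQL sketch with made-up column names; in BCS terms it is the sort of thing you would model as an association between the Orders and Customers external content types:

        -- Hypothetical schema, for illustration only
        CREATE TABLE Customers (
            CustomerId INT PRIMARY KEY,
            Name       NVARCHAR(100) NOT NULL
        );

        CREATE TABLE Orders (
            OrderId    INT PRIMARY KEY,
            CustomerId INT NOT NULL
                REFERENCES Customers (CustomerId),  -- the foreign key in question
            OrderDate  DATETIME NOT NULL
        );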


  • Query data using LINQ to SQL and Entity Framework with foreign key? (Help me please)

    - by The Wind
    Hello! There is a problem I need help with: querying data using LINQ to SQL and Entity Framework (I'm using Visual Studio 2010). My picture is here: http://img.tamtay.vn/files/photo2/2010/5/28/10/962/4bff3a3b_1093f58f_untitled-1.gif I have three tables: tblNewsDetails, tblNewsCategories and tblNewsInCategories (see screen 1 in my picture). Now, I want to retrieve records from the tblNewsDetails table with the condition CategoryId = 1, as in the following results (see screen 2 in my picture). But NewsID and CategoryId in the tblNewsInCategories table are two foreign keys; I do not see them in the model and I do not know how to use them in my code. My code has errors (see screen 3 in my picture). Please help me. Thanks! (I am a new member, so I do not have the right to insert images.)
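
    A sketch of the kind of query the question seems to be after, assuming the designer generated an entity for the junction table; the context and property names (NewsID, CategoryId) are guesses based on the table names above:

        // Sketch only - hypothetical data context and property names.
        using (var db = new NewsDataContext())
        {
            var newsInCategory1 =
                from d in db.tblNewsDetails
                join nc in db.tblNewsInCategories on d.NewsID equals nc.NewsID
                where nc.CategoryId == 1
                select d;

            foreach (var news in newsInCategory1)
            {
                Console.WriteLine(news.NewsID);
            }
        }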


  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble

    This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.

    Why Columnstore?

    As stated previously, if we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore & to document that columnstore is a compelling reason to upgrade to SQL Server 2014.

    The Customer

    DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)

    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries

    DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

    SSRS, SSAS, & MDX

    Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

    Ad Hoc Queries

    Even though the fact table is relatively small—only 22 million rows & 33GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

    The Details

    Classic vs. columnstore before-&-after metrics are impressive.

        Scenario        Conventional Structures             Columnstore      Improvement
        SSRS via SSAS   10 - 12 seconds                     1 second         >10x
        Ad Hoc          5 - 7 minutes (300 - 420 seconds)   1 - 2 seconds    >100x

    Here are two charts characterizing this data graphically. The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes. As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics. Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible.

    The Wins

    Performance: Even prior to columnstore implementation, at 10 - 12 seconds, canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data—no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators. As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.

    Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.

    PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014.

    DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: “What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways.”

    Summary

    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second in a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, as well as reducing time to results for ad hoc queries from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
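
    For readers who haven't tried it, the index type discussed here is created with a single statement in SQL Server 2012; the table and column names below are made up for illustration. Note that in SQL Server 2012 a nonclustered columnstore index makes the base table read-only for DML, which fits a nightly partition-switch refresh like the one described above:

        -- Illustrative sketch only (hypothetical fact table and columns)
        CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSecurity
        ON dbo.FactSecurity
        (
            EventDate,
            CustomerKey,
            RegionKey,
            EventCount,
            Amount
        );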


  • SQL Where clause in ORACLE

    - by ArneRie
    Hi, does someone have an idea how to get END_DATE / START_DATE where TO_DATE('06/1/2010','MM/DD/YYYY')?

        SELECT "PROJECT"."ID",
               "PROJECT"."CLIENT",
               "PROJECT"."NAME",
               "PROJECT"."STATE",
               "PROJECT"."EARLIEST_START",
               "PROJECT"."LATEST_END",
               "PROJECT"."EFFORT",
               "PROJECT"."LINK",
               "PROJECT"."STATUS",
               "PROJECT"."DESCRIPTION",
               (SELECT SUM((END_DATE - START_DATE + 1) * (WORKLOAD / 100))
                  FROM WORKITEM
                 WHERE PROJECT = PROJECT.ID) AS "P_A",
               (SELECT COUNT(*)
                  FROM PUBLIC_HOLIDAY
                 WHERE HOLIDAY_DATE BETWEEN TO_DATE('06/1/2010','MM/DD/YYYY')
                                        AND TO_DATE('06/2/2010','MM/DD/YYYY')) AS P_B,
               "PROJECT_STATE"."STATE",
               "PERSON"."DISPLAY_NAME" AS "RESPONSIBLE"
          FROM "PROJECT"
         INNER JOIN "PROJECT_STATE" ON PROJECT.STATE = PROJECT_STATE.ID
         INNER JOIN "PERSON" ON RESPONSIBLE = PERSON.ID
         WHERE (PROJECT.CLIENT = '1')
           AND (PROJECT.STATE = 1)
         ORDER BY "PROJECT"."NAME" ASC
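
    The question as posted is incomplete, but if the intent is to count only work items whose date range falls in (or overlaps) a given window, one hedged reading of the subquery could look like this; the column and table names come from the query above, while the overlap logic is an assumption:

        -- Assumption: restrict WORKITEM rows to those overlapping 06/01/2010 - 06/02/2010
        (SELECT SUM((END_DATE - START_DATE + 1) * (WORKLOAD / 100))
           FROM WORKITEM
          WHERE PROJECT = PROJECT.ID
            AND START_DATE <= TO_DATE('06/02/2010', 'MM/DD/YYYY')
            AND END_DATE   >= TO_DATE('06/01/2010', 'MM/DD/YYYY')) AS "P_A"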


  • PHP File Serving Script: Unreliable Downloads?

    - by JGB146
    This post started as a question on ServerFault ( http://serverfault.com/questions/131156/user-receiving-partial-downloads ) but I determined that our php script was the culprit. So I'm issuing an updated question here about what I believe is the actual issue. I am using a php script to verify permissions and then serve up a file for users of my website to download. Most of the time, this works, but recently one user has been seeing problems with larger downloads. He is only getting ~80% of downloads for files that are 100MB in size. Also, all downloads from this script fail to report a filesize. Further, tests revealed that the same user COULD reliably download each of the failed files if given a direct link (at which point the filesize is reported). Here's the relevant snippet of code that we are using to serve the file: header("Content-type:$contenttype"); $len = filesize($filename); header("Content-Length: $len"); header("Content-Disposition: attachment; filename=".$title.".".$ext); readfile($filename); Note that $contenttype, $filename, $title, and $ext are all set correctly before we get here. These have been triple-checked. None of them are the problem. Also, $len does provide the correct filesize. While researching this issue, I came across this post: http://stackoverflow.com/questions/1334471/content-length-header-always-zero It seems that I am encountering the same issue. When I use the script, I get chunked encoding on the file and no size is set for content-length. I'm hypothesizing that something is going wrong on the large downloads, leading him to get a zero-length chunk before the end of the file. Here's what the headers look like for a direct request: http://www.grinderschool.com/videos/zfff5061b65ae00e8b21/KillsAids021.wmv GET /videos/zfff5061b65ae00e8b21/KillsAids021.wmv HTTP/1.1 Host: www.grinderschool.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://www.grinderschool.com/phpBB3/viewtopic.php?f=14&p=29468 Cookie: style_cookie=printonly; phpbb3_7c544_u=2; phpbb3_7c544_k=44b832912e5f887d; phpbb3_7c544_sid=e8852df42e08cc1b2250300c2897f78f; __utma=174624884.2719561324781918700.1251850714.1270986325.1270989003.575; __utmz=174624884.1264524375.411.12.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=low%20stakes%20poker%20videos; phpbb3_cmviy_k=; phpbb3_cmviy_u=2; phpbb3_cmviy_sid=d8df5c0943863004ca40ef9c392d371d; __utmb=174624884.4.10.1270989003; __utmc=174624884 Pragma: no-cache Cache-Control: no-cache HTTP/1.1 200 OK Date: Sun, 11 Apr 2010 12:57:41 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635 Last-Modified: Sun, 04 Apr 2010 12:51:06 GMT Etag: "eb42d6-7d9b843-48368aa6dc280" Accept-Ranges: bytes Content-Length: 131708995 Keep-Alive: timeout=10, max=30 Connection: Keep-Alive Content-Type: video/x-ms-wmv And here's what they look like for the request answered by my script: http://www.grinderschool.com/download_video_test.php?t=KillsAids021&format=wmv GET /download_video_test.php?t=KillsAids021&format=wmv HTTP/1.1 Host: www.grinderschool.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) Accept: 
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Cookie: style_cookie=printonly; phpbb3_7c544_u=2; phpbb3_7c544_k=44b832912e5f887d; phpbb3_7c544_sid=e8852df42e08cc1b2250300c2897f78f; __utma=174624884.2719561324781918700.1251850714.1270986325.1270989003.575; __utmz=174624884.1264524375.411.12.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=low%20stakes%20poker%20videos; phpbb3_cmviy_k=; phpbb3_cmviy_u=2; phpbb3_cmviy_sid=d8df5c0943863004ca40ef9c392d371d; __utmb=174624884.4.10.1270989003; __utmc=174624884 HTTP/1.1 200 OK Date: Sun, 11 Apr 2010 12:58:02 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635 X-Powered-By: PHP/5.2.11 Content-Disposition: attachment; filename=KillsAids021.wmv Vary: Accept-Encoding Content-Encoding: gzip Keep-Alive: timeout=10, max=30 Connection: Keep-Alive Transfer-Encoding: chunked Content-Type: video/x-ms-wmv So the question is...what can I do to make downloads from the script work properly? Again, for 99% of users, it works as is (though I find it annoying now that no filesize is reported and thus that no time estimate can be computed about the download).
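
    One hedged direction for the chunked/gzip symptom above - a sketch, not a confirmed fix: make sure neither PHP's zlib compression nor Apache's mod_deflate re-encodes the download, since re-encoding forces chunked transfer and drops the Content-Length header. The variables reuse the names from the snippet in the question:

        <?php
        // Sketch: serve the file without any output re-encoding or buffering.
        if (function_exists('apache_setenv')) {
            apache_setenv('no-gzip', '1');              // ask mod_deflate to skip this response
        }
        ini_set('zlib.output_compression', 'Off');      // disable PHP-level gzip before any output
        while (ob_get_level() > 0) {
            ob_end_clean();                             // discard any open output buffers
        }

        header("Content-Type: $contenttype");
        header('Content-Length: ' . filesize($filename));
        header('Content-Disposition: attachment; filename="' . $title . '.' . $ext . '"');
        readfile($filename);
        exit;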


  • Finding it Hard to Deliver Right Customer Experience: Think BPM!

    - by Ajay Khanna
    Our relationship with our customers is not a just a single interaction and we should not treat it like one. A customer’s relationship with a vendor is like a journey which starts way before customer makes a purchase and lasts long after that. The journey may start with customer researching a product that may lead to the eventual purchase and may continue with support or service needs for the product. A typical customer journey can be represented as shown below: As you may notice, customers tend to use multiple channels to interact with a company throughout their journey.  They also expect that they should get consistent experience, no matter what interaction channel they may choose. Customers do not like to repeat the information they have already provided and expect companies to remember their preferences, and offer them relevant products and services. If the company fails to meet this expectation, customers not only will abandon the purchase and go to the competitor but may also influence others’ purchase decision. Gone are the days when word of mouth was the only medium, and the customer could influence “Six” others. This is the age of social media and customer’s good or bad experience, especially bad get highly amplified and may influence hundreds of others. Challenges that face B2C companies today include: Delivering consistent experience: The reason that delivering consistent experience is challenging is due to fragmented data, disjointed systems and siloed multichannel interactions. Customers tend to get different service quality if they use web vs. phone vs. store. They get different responses from different service agents or get inconsistent answers if they call sales vs. service group in the company. Such inconsistent experiences result in lower customer satisfaction or NPS (net promoter score) numbers. Increasing Revenue: To stay competitive companies frequently introduce new products and services. Delay in launching such offerings has a significant impact on revenue realization. In addition to new product revenue, there are multiple opportunities to up-sell and cross-sell that impact bottom line. If companies are not able to identify such opportunities, bring a product to market quickly, or not offer the right product to the right customer at the right time, significant loss of revenue may occur. Ensuring Compliance: Companies must be compliant to ever changing regulations, these could be about Know Your Customer (KYC), Export/Import regulations, or taxation policies. In addition to government agencies, companies also need to comply with the SLA that they have committed to their customers. Lapse in meeting any of these requirements may lead to serious fines, penalties and loss in business. Companies have to make sure that they are in compliance will all such regulations and SLA commitments, at any given time. With the advent of social networks and mobile technology, companies not only need to focus on process efficiency but also on customer engagement. Improving engagement means delivering the customer experience as the customer is expecting and interacting with the customer at right time using right channel. Customers expect to be able to contact you via any channel of their choice (web, email, chat, mobile, social media), purchase via any viable channel (web, phone, store, mobile). Customers expect companies to understand their particular needs and remember their preferences on repeated visits. 
    To deliver such an integrated, consistent, and contextual experience, the power of BPM is a must. Your company may be organized in departments like Marketing, Sales, and Service. You may hold prospect data in SFA, order information in ERP, and customer issues in CRM. However, the experience delivered to the customer must not be constrained by your system legacy. BPM helps in designing the right experience for the right customer and integrates all the underlying channels, systems, and applications to make sure the right information will be delivered to the right knowledge worker, or to the customer, every single time. Orchestrating information across all systems (MDM, CRM, ERP), departments (commerce, merchandising, marketing, service) and channels (email, phone, web, social) is the key, and that's what BPM delivers. In addition to orchestrating systems and channels for consistency, BPM also provides capabilities for analysis and decision management. By using data from historical transactions, social media and other systems, users can determine customer preferences, customer value, and churn propensity. This information, in context, is then used while making a decision at a process step. Working with a real-time decision management system can also suggest the right up-sell or cross-sell offers, discounts or next-best-action steps for a particular customer. Timely action on customer issues or requests is also a key tenet of a good customer experience. BPM's complex event processing capabilities help companies take proactive action before issues get escalated. A BPM system can be designed to listen for certain event patterns, deduce from them the customer's situation (credit card stolen, baggage lost, change of address), and do a triage before the situation goes out of control. If such a situation arises, you can send alerts to the right people or immediately invoke corrective actions. Last but not least, one of BPM's key values is to drive continuous improvement. Learning about customers' past experiences, interactions and social conversations provides valuable insight. Such insight can be used to improve products, customer-facing processes, and the customer experience. You may take these insights as an input to design better, more efficient and customer-friendly sales, contact center or self-service processes. If customer experience is important for your business, make sure you have incorporated BPM as a part of your strategy to design, orchestrate and improve your customer-facing processes.


  • Perl LWP::UserAgent mishandling UTF-8 response

    - by RedGrittyBrick
    When I use LWP::UserAgent to retrieve content encoded in UTF-8, it seems LWP::UserAgent doesn't handle the encoding correctly. Here's the output after setting the Command Prompt window to Unicode with the command chcp 65001. Note that this initially gives the appearance that all is well, but I think it's just the shell reassembling bytes and decoding UTF-8. From the other output you can see that Perl itself is not handling wide characters correctly.

        C:\>perl getutf8.pl
        ======================================================================
        HTTP/1.1 200 OK
        Connection: close
        Date: Fri, 31 Dec 2010 19:24:04 GMT
        Accept-Ranges: bytes
        Server: Apache/2.2.8 (Win32) PHP/5.2.6
        Content-Length: 75
        Content-Type: application/xml; charset=utf-8
        Last-Modified: Fri, 31 Dec 2010 19:20:18 GMT
        Client-Date: Fri, 31 Dec 2010 19:24:04 GMT
        Client-Peer: 127.0.0.1:80
        Client-Response-Num: 1

        <?xml version="1.0" encoding="UTF-8"?>
        <name>Budejovický Budvar</name>
        ======================================================================
        response content length is 33

        ....v....1....v....2....v....3....v....4
        <name>Budejovický Budvar</name>
        . . . . v . . . . 1 . . . . v . . . . 2 . . . . v . . . . 3 . . . .
        3c6e616d653e427564c49b6a6f7669636bc3bd204275647661723c2f6e616d653e
        < n a m e > B u d ? ? j o v i c k ? ? B u d v a r < / n a m e >

    Above you can see the payload length is 31 characters but Perl thinks it is 33. For confirmation, in the hex, we can see that the UTF-8 sequences c49b and c3bd are being interpreted as four separate characters and not as two Unicode characters. Here's the code:

        #!perl
        use strict;
        use warnings;
        use LWP::UserAgent;

        my $ua = LWP::UserAgent->new();
        my $response = $ua->get('http://localhost/Bud.xml');
        if (! $response->is_success) { die $response->status_line; }
        print '='x70,"\n",$response->as_string(), '='x70,"\n";
        my $r = $response->decoded_content(charset => 'UTF-8');
        $/ = "\x0d\x0a"; # seems to be \x0a otherwise!
        chomp($r);
        # Remove any xml prologue
        $r =~ s/^<\?.*\?>\x0d\x0a//;
        print "Response content length is ", length($r), "\n\n";
        print "....v....1....v....2....v....3....v....4\n";
        print $r,"\n";
        print ". . . . v . . . . 1 . . . . v . . . . 2 . . . . v . . . . 3 . . . . \n";
        print unpack("H*", $r), "\n";
        print join(" ", split("", $r)), "\n";

    Note that Bud.xml is UTF-8 encoded without a BOM. How can I persuade LWP::UserAgent to do the right thing?

    P.S. Ultimately I want to translate the Unicode data into an ASCII encoding, even if it means replacing each non-ASCII character with one question mark or other marker. I have accepted Ysth's "upgrade" answer - because I know it is the right thing to do when possible. However I am going to use a work-around (which may depress Tom further):

        $r = encode("cp437", decode("utf8", $r));


  • Social Engagement: One Size Doesn't Fit Anyone

    - by Mike Stiles
    The key to achieving meaningful social engagement is to know who you’re talking to, know what they like, and consistently deliver that kind of material to them. Every magazine for women knows this. When you read the article titles promoted on their covers, there’s no mistaking for whom that magazine is intended. And yet, confusion still reigns at many brands as to exactly whom they want to talk to, what those people want to hear, and what kind of content they should be creating for them. In most instances, the root problem is brands want to be all things to all people. Their target audience…the world! Good luck with that. It’s 2012, the age of aggregation and custom content delivery. To cope with the modern day barrage of information, people have constructed technological filters so that content they regard as being “for them” is mostly what gets through. Even if your brand is for men and women, young and old, you may want to consider social properties that divide men from women, and young from old. Yes, a man might find something in a women’s magazine that interests him. But that doesn’t mean he’s going to subscribe to it, or buy even one issue. In fact he’ll probably never see the article he’d otherwise be interested in, because in his mind, “This isn’t for me.” It wasn’t packaged for him. News Flash: men and women are different. So it’s a tall order to craft your Facebook Page or Twitter handle to simultaneously exude the motivators for both. The Harris Interactive study “2012 Connecting and Communicating Online: State of Social Media” sheds light on the differing social behaviors and drivers. -65% of women (vs. 59% of men) stay glued to social because they don’t want to miss anything. -25% of women check social when they wake up, before they check email. Only 18% of men check social before e-mail. -95% of women surveyed belong to Facebook vs. 86% of men. -67% of women log in to Facebook once a day or more vs. 54% of men. -Conventional wisdom is Pinterest is mostly a woman-thing, right? That may be true for viewing, but not true for sharing. Men are actually more likely to share on Pinterest than women, 23% to 10%. -The sharing divide extends to YouTube. 68% of women use it mainly for consumption, as opposed to 52% of men. -Women are as likely to have a Twitter account as men, but they’re much less likely to check it often. 54% of women check it once a week compared to 2/3 of men. Obviously, there are some takeaways from this depending on your target. Women don’t want to miss out on anything, so serialized content might be a good idea, right? Promotional posts that lead to a big payoff could keep them hooked. Posts for women might be better served first thing in the morning. If sharing is your goal, maybe male-targeted content is more likely to get those desired shares. And maybe Twitter is a better place to aim your male-targeted content than Facebook. Some grocery stores started experimenting with male-only aisles. The results have been impressive. Why? Because while it’s true men were finding those same items in the store just fine before, now something has been created just for them. They have a place in the store where they belong. Each brand’s strategy and targets are going to differ. The point is…know who you’re talking to, know how they behave, know what they like, and deliver content using any number of social relationship management targeting tools that meets their expectations. 
    If, however, you’re committed to a one-size-fits-all, “our content is for everybody” strategy (or even worse, a “this is what we want to put out and we expect everybody to love it” strategy), your content will miss the mark far more often than it hits. @mikestiles Photo via stock.xchng


  • Cloud to On-Premise Connectivity Patterns

    - by Rajesh Raheja
    Do you have a requirement to convert an Opportunity in Salesforce.com to an Order/Quote in Oracle E-Business Suite? Or maybe you want the creation of an Oracle RightNow Incident to trigger an on-premise Oracle E-Business Suite Service Request for RMA and Field Scheduling? If so, read on. In a previous blog post, I discussed integrating TO cloud applications; the use cases above are the reverse, i.e. receiving data FROM cloud applications (SaaS) TO on-premise applications/databases that sit behind a firewall. Oracle SOA Suite is assumed to be on-premise, with Oracle Service Bus as the mediation and virtualization layer. The main considerations for the patterns are security, i.e. shielding enterprise resources, and scalability, i.e. minimizing firewall latency. An analogy helps visualize the patterns: the on-premise system is your home - with your most valuable possessions - and the SaaS app is your favorite online store, which regularly ships (inbound calls) various types of parcels/items (message types/service operations). You need the items at home (on-premise) but want to safeguard against misguided elements of society (internet threats) who may masquerade as postal workers and vandalize property (denial of service?). Let's look at the patterns.

    Pattern: Pull from Cloud. The on-premise system polls the SaaS application and picks up the message instead of having it delivered. This may be done using Oracle RightNow Object Query Language or SOAP APIs. It is particularly suited to integration approaches where messages trickle in and can be centralized and batched, e.g. retrieving event notifications on an hourly schedule from the Oracle Messaging Service. In the home analogy, you avoid any deliveries to your home and instead go to the post office/UPS/FedEx store to pick up your parcel. Every time. Pros: on-premise assets are not exposed to the Internet, and firewall issues are avoided by initiating only outbound connections. Cons: polling mechanisms may affect performance and may not satisfy near real-time requirements.

    Pattern: Open Firewall Ports. The on-premise system exposes the web services that need to be invoked by the cloud application. This requires opening up firewall ports and routing calls to the appropriate internal services behind the firewall. Fusion Applications uses this pattern and auto-provisions the services on the various virtual hosts to secure the topology. This works well for service integration, but may not suffice for large-volume data integration. In the home analogy, you have now decided to receive parcels instead of going to the post office every time. A mail slot cut into the door lets the postman drop small parcels, but there is still concern about cutting new holes for larger packages. Pros: optimal pattern for near real-time needs; simpler administration once the service is provisioned. Cons: needs firewall ports to be opened for new services; may not suffice for batch integration requiring direct database access.

    Pattern: Virtual Private Networking. The on-premise network is "extended" to the cloud (or to an intermediary on-demand / managed service offering) using Virtual Private Networking (VPN) so that messages are delivered to the on-premise system over a trusted channel. In the home analogy, you entrust a set of keys to a neighbor or property manager who receives the packages and then drops them inside your home.

    Pros: individual firewall ports don't need to be opened; better suited to high-scalability needs; can support large-volume data integration; managing one connection is easier than a multitude of open ports. Cons: VPN setup; specific hardware support; requires the cloud provider to support virtual private computing.

    Pattern: Reverse Proxy / API Gateway. The on-premise system uses reverse proxy "API gateway" software in the DMZ to receive messages. The reverse proxy can be implemented using various mechanisms, e.g. Oracle API Gateway provides firewall and proxy services along with comprehensive security, auditing, and throttling benefits. If a firewall already exists, then Oracle Service Bus or Oracle HTTP Server virtual hosts can provide reverse proxy implementations in the DMZ. Custom-built implementations are also possible if specific functionality (such as message store-and-forward) is needed. In the home analogy, this pattern sits between cutting mail slots and handing over keys: you install (and maintain) a mailbox on your premises outside your door, the post office delivers the parcels into the mailbox, and you retrieve them securely from there. Pros: very secure; very flexible. Cons: introduces a new software component; needs DMZ deployment and management.

    Pattern: On-Premise Agent (Tunneling). A lightweight "agent" sits behind the firewall and initiates the communication with the cloud, thereby avoiding firewall issues. It then maintains a bi-directional connection with either pull- or push-based approaches using (or abusing, depending on your viewpoint) the HTTP protocol. Protocols such as Comet, WebSockets, HTTP CONNECT, and HTTP SSH tunneling are possible implementation options. In the home analogy, a resident receives the parcel from the postal worker by opening the door, but still takes precautions with chain locks and package inspections. Pros: lightweight software; IT doesn't need to set up anything. Cons: may bypass critical firewall checks, e.g. virus scans; separate software download; proliferation of non-IT-managed software.

    Conclusion. The patterns above are some of the most commonly encountered for cloud to on-premise integration. Selecting the right pattern for your project involves looking at your scalability needs, security restrictions, sync vs async implementation, near real-time vs batch expectations, cloud provider capabilities, budget, and more. In some cases the basic "Pull from Cloud" may be acceptable, whereas in others an extensive VPN topology may be well justified. For more details on the Oracle cloud integration strategy, download this white paper.
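
    To make the Pull from Cloud pattern concrete, here is a minimal polling sketch (not from the original article) of an on-premise job that only ever initiates outbound HTTPS calls. The endpoint URL, the JSON payload shape, and the OnPremiseQueue handler are placeholders for illustration, not actual Oracle RightNow or Oracle Messaging Service APIs.

      # Hypothetical on-premise poller: runs on a schedule behind the firewall and
      # opens outbound connections only, so no inbound ports are required.
      require 'net/http'
      require 'json'
      require 'uri'

      SAAS_EVENTS_URL = URI('https://saas.example.com/api/events?status=new') # assumed endpoint

      def pull_from_cloud
        response = Net::HTTP.get_response(SAAS_EVENTS_URL)
        return unless response.is_a?(Net::HTTPSuccess)

        JSON.parse(response.body).each do |event|
          # Hand each message to the on-premise side, e.g. an internal queue feeding
          # the ERP adapter. OnPremiseQueue is a stand-in for that layer.
          OnPremiseQueue.publish(event)
        end
      end

      pull_from_cloud # typically invoked by cron or another scheduler, e.g. hourly

    Scheduling the call, rather than exposing a listener, is what keeps the firewall closed to inbound traffic; the trade-off, as noted above, is latency bounded by the polling interval.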

    Read the article

  • how to handle .something in the url

    - by dorelal
    I am using Rails 2.3.5 and have a resource for events:

      map.resources :events

      respond_to do |format|
        format.html
        format.js { render :text => @event.to_json, :layout => false }
      end

    It is a public site, and sometimes I get URLs like http://domain.com/events/14159-international-hardware-show-2010+91+"prashant"+2010+OR+email+OR+data+OR+base+-ALIBA.BACOM&hl=en&ct=clnk, which keep generating Hoptoad exception emails. How do I handle such cases?
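
    One approach (a sketch, not from the original question) is to constrain what the :id segment will match and to turn lookup failures into a plain 404 instead of an exception that Hoptoad reports. The requirements regex and the 404 template path are assumptions to adapt:

      # config/routes.rb -- only ids that start with digits reach the controller
      map.resources :events, :requirements => { :id => /\d+[^\/]*/ }

      # app/controllers/events_controller.rb
      class EventsController < ApplicationController
        def show
          @event = Event.find(params[:id].to_i) # to_i keeps just the leading numeric id
          respond_to do |format|
            format.html
            format.js { render :text => @event.to_json, :layout => false }
          end
        rescue ActiveRecord::RecordNotFound
          render :file => "#{RAILS_ROOT}/public/404.html", :status => :not_found
        end
      end

    With something like this in place, scraper-generated URLs either fail to route or render the static 404 page, so they never reach the exception notifier.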

    Read the article

  • data type mismatch when comparing dates in MS-Access

    - by jonos
    I have dates stored in an MS-Access table in the 'General Date' format. I'm trying to create a query that returns records in a specific date range (all records from March 2010), but I get a 'data type mismatch in criteria expression' message. Here is my statement:

      SELECT Loan.loan_datetimeLeant, product_name, [product_artist/director], product_category, loanItem_cost
      FROM Loan
      INNER JOIN ((Product INNER JOIN Item ON Product.[product_id] = Item.[product_id])
        INNER JOIN Loan_Items ON Item.[item_id] = Loan_Items.[item_id])
        ON (Loan.[cust_id] = Loan_Items.[cust_id]) AND (Loan.[loan_datetimeLeant] = Loan_Items.[loan_datetimeLeant])
      WHERE Loan.loan_datetimeLeant >= '01/03/2010' AND Loan.loan_datetimeLeant <= '31/03/2010'
      ORDER BY Loan.loan_datetimeLeant;

    I have tried variations on the date format (mm/dd/yyyy, dd/mm/yyyy 00:00:00).
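
    A likely cause (not confirmed in the question itself) is that the Date/Time column is being compared against quoted strings, which Access treats as text. In Access SQL, date literals are delimited with # and interpreted as month/day/year. A sketch of the revised criteria, keeping the question's column name, where the upper bound uses the first day of April so rows with a time-of-day component on 31 March are still included:

      WHERE Loan.loan_datetimeLeant >= #3/1/2010#
        AND Loan.loan_datetimeLeant < #4/1/2010#
      ORDER BY Loan.loan_datetimeLeant;

    If the stored values never carry a time component, BETWEEN #3/1/2010# AND #3/31/2010# would work as well.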

    Read the article
