Search Results

Search found 28279 results on 1132 pages for 'syntax case'.


  • Using C# and gppg, how would I construct an abstract syntax tree?

    - by Rupert
    Is there a way to do this almost out-of-the-box? I could go and write a big method that would use the collected tokens to figure out which leaves should be put in which branches and in the end populate a TreeNode object, but since gppg already handled everything by using supplied regular expressions, I was wondering if there's an easier way? Even if not, any pointers as to how best to approach the problem of creating an AST would be appreciated. Apologies if I said anything silly, I'm only just beginning to play the compiler game. :)
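    For reference, there is nothing fully out-of-the-box in the yacc family that I know of; the usual pattern is to declare small node classes and build them bottom-up in the grammar's semantic actions. A minimal C# sketch, assuming gppg's yacc-style $$/$1 actions, with all class names made up for illustration:

        // Hypothetical AST node classes; these are illustrative, not gppg API.
        public abstract class Node { }

        public sealed class NumberNode : Node
        {
            public double Value;
            public NumberNode(double value) { Value = value; }
        }

        public sealed class BinaryNode : Node
        {
            public char Op;
            public Node Left, Right;
            public BinaryNode(char op, Node left, Node right)
            {
                Op = op; Left = left; Right = right;
            }
        }

        // In the .y grammar, yacc-style semantic actions then assemble the
        // tree as each production reduces (assuming the semantic value type
        // is declared as Node):
        //
        //   Expr : Expr '+' Term   { $$ = new BinaryNode('+', $1, $3); }
        //        | Term            { $$ = $1; }
        //        ;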

    Read the article

  • When using query syntax in C#, "Enumeration yielded no results": how do I retrieve the output?

    - by Shantanu Gupta
    I have created this query to fetch some results from the database. Here is my table structure and what exactly is happening: DtMapGuestDepartment (Table 1) and DtDepartment (Table 2) are being used.

        var dept_list = from map in DtMapGuestDepartment.AsEnumerable()
                        where map.Field<Nullable<long>>("GUEST_ID") == DRowGuestPI.Field<Nullable<long>>("PK_GUEST_ID")
                        join dept in DtDepartment.AsEnumerable()
                          on map.Field<Nullable<long>>("DEPARTMENT_ID") equals dept.Field<Nullable<long>>("DEPARTMENT_ID")
                        select dept.Field<string>("DEPARTMENT_ID");

    I am performing this query on DataTables and expect it to return a DataTable. I also want to select the distinct departments from Table 1, which will be my next question. Please answer that too if possible.
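    For reference, a hedged sketch of the two likely fixes, reusing only the names from the question. The query is lazily evaluated, so "Enumeration yielded no results" in the debugger simply means nothing has enumerated it yet; and to get a DataTable back, select the DataRow itself and copy the rows out (requires the System.Data.DataSetExtensions assembly):

        // Force evaluation: the query runs only when enumerated.
        List<string> departmentNames = dept_list.ToList();

        // To get a DataTable of distinct departments, select the DataRow
        // (not a single column), de-duplicate, then copy the rows out.
        var distinctDeptRows =
            (from map in DtMapGuestDepartment.AsEnumerable()
             where map.Field<long?>("GUEST_ID") == DRowGuestPI.Field<long?>("PK_GUEST_ID")
             join dept in DtDepartment.AsEnumerable()
               on map.Field<long?>("DEPARTMENT_ID") equals dept.Field<long?>("DEPARTMENT_ID")
             select dept)
            .Distinct(DataRowComparer.Default);

        DataTable result = distinctDeptRows.CopyToDataTable();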

    Read the article

  • Does a syntax for this exist? In any language?

    - by Michael
    It seems pretty common to me to have an argument, in a dynamically typed language, that is either an object or a key for looking that object up. For instance, when I'm working with a database I might have a method getMyRelatedStuff(person). All I really need to look up the related stuff is the id of the person, so my method could look like this in Python:

        def getMyRelatedStuff(person_or_id):
            id = person_or_id.id if isinstance(person_or_id, User) else person_or_id
            # do some lookup

    Or going the other direction:

        def someFileStuff(file_or_name):
            file = file_or_name if hasattr(file_or_name, 'write') else open(file_or_name)
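    For what it's worth, one generic way to express this pattern is a small decorator that coerces an id into an object before the function body runs. A minimal Python sketch, where User and its lookup function are assumed names rather than anything from the question:

        import functools

        def accepts_id(cls, lookup):
            """Allow a function's first argument to be either a cls instance or a key."""
            def decorator(fn):
                @functools.wraps(fn)
                def wrapper(obj_or_id, *args, **kwargs):
                    obj = obj_or_id if isinstance(obj_or_id, cls) else lookup(obj_or_id)
                    return fn(obj, *args, **kwargs)
                return wrapper
            return decorator

        @accepts_id(User, lookup=lambda pk: User.get(pk))   # hypothetical User.get
        def getMyRelatedStuff(person):
            ...  # person is guaranteed to be a User here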

    Read the article

  • What do I do with a Concrete Syntax Tree?

    - by Cap
    I'm using pyPEG to create a parse tree for a simple grammar. The tree is represented using lists and tuples. Here's an example:

        [('command', [('directives', [('directive', [('name', 'retrieve')]),
                                      ('directive', [('name', 'commit')])]),
                      ('filename', [('name', 'f30502')])])]

    My question is: what do I do with it at this point? I know a lot depends on what I am trying to do, but I haven't been able to find much about consuming/using parse trees, only creating them. Does anyone have any pointers to references I might use? Thanks for your help.
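    As a starting point, the usual thing is a small recursive walk over the (tag, children) tuples. A minimal sketch, assuming only the shape shown above:

        def walk(node, depth=0):
            """Pretty-print a pyPEG-style tree of (tag, children) tuples."""
            if isinstance(node, tuple):
                tag, children = node
                print('  ' * depth + str(tag))
                walk(children, depth + 1)
            elif isinstance(node, list):
                for child in node:
                    walk(child, depth)
            else:
                print('  ' * depth + repr(node))  # leaf value, e.g. 'retrieve'

        walk(tree)  # tree being the structure shown above

    The same dispatch skeleton works for evaluation or code generation: branch on the tag instead of printing it.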

    Read the article

  • Sublime Text 2 syntax highlighter?

    - by BigSack
    I have coded my first custom syntax highlighter for Sublime Text 2, but I don't know how to install it. It is based on the Notepad++ highlighter found here: https://70995658-a-62cb3a1a-s-sites.googlegroups.com/site/lohanplus/files/smali_npp.xml?attachauth=ANoY7criVTO9bDmIGrXwhZLQ_oagJzKKJTlbNDGRzMDVpFkO5i0N6hk_rWptvoQC1tBlNqcqFDD5NutD_2vHZx1J7hcRLyg1jruSjebHIeKdS9x0JCNrsRivgs6DWNhDSXSohkP1ZApXw0iQ0MgqcXjdp7CkJJ6pY_k5Orny9TfK8UWn_HKFsmPcpp967NMPtUnd--ad-BImtkEi-fox2tjs7zc5LabkDQ%3D%3D&attredirects=0&d=1

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>fileTypes</key>
            <array>
                <string>smali</string>
            </array>
            <dict>
                <key>Word1</key>
                <string>add-double add-double/2addr add-float add-float/2addr add-int add-int/2addr add-int/lit16 add-int/lit8 add-long add-long/2addr aget aget-boolean aget-byte aget-char aget-object aget-short aget-wide and-int and-int/2addr and-int/lit16 and-int/lit8 and-long and-long/2addr aput aput-boolean aput-byte aput-char aput-object aput-short aput-wide array-length check-cast cmp-long cmpg-double cmpg-float cmpl-double cmpl-float const const-class const-string const-string-jumbo const-wide const-wide/16 const-wide/32 const-wide/high16 const/16 const/4 const/high16 div-double div-double/2addr div-float div-float/2addr div-int div-int/2addr div-int/lit16 div-int/lit8 div-long div-long/2addr double-to-float double-to-int double-to-long execute-inline fill-array-data filled-new-array filled-new-array/range float-to-double float-to-int float-to-long goto goto/16 goto/32 if-eq if-eqz if-ge if-gez if-gt if-gtz if-le if-lez if-lt if-ltz if-ne if-nez iget iget-boolean iget-byte iget-char iget-object iget-object-quick iget-quick iget-short iget-wide iget-wide-quick instance-of int-to-byte int-to-char int-to-double int-to-float int-to-long int-to-short invoke-direct invoke-direct-empty invoke-direct/range invoke-interface invoke-interface/range invoke-static invoke-static/range invoke-super invoke-super-quick invoke-super-quick/range invoke-super/range invoke-virtual invoke-virtual-quick invoke-virtual-quick/range invoke-virtual/range iput iput-boolean iput-byte iput-char iput-object iput-object-quick iput-quick iput-short iput-wide iput-wide-quick long-to-double long-to-float long-to-int monitor-enter monitor-exit move move-exception move-object move-object/16 move-object/from16 move-result move-result-object move-result-wide move-wide move-wide/16 move-wide/from16 move/16 move/from16 mul-double mul-double/2addr mul-float mul-float/2addr mul-int mul-int/2addr mul-int/lit8 mul-int/lit16 mul-long mul-long/2addr neg-double neg-float neg-int neg-long new-array new-instance nop not-int not-long or-int or-int/2addr or-int/lit16 or-int/lit8 or-long or-long/2addr rem-double rem-double/2addr rem-float rem-float/2addr rem-int rem-int/2addr rem-int/lit16 rem-int/lit8 rem-long rem-long/2addr return return-object return-void return-wide rsub-int rsub-int/lit8 sget sget-boolean sget-byte sget-char sget-object sget-short sget-wide shl-int shl-int/2addr shl-int/lit8 shl-long shl-long/2addr shr-int shr-int/2addr shr-int/lit8 shr-long shr-long/2addr sparse-switch sput sput-boolean sput-byte sput-char sput-object sput-short sput-wide sub-double sub-double/2addr sub-float sub-float/2addr sub-int sub-int/2addr sub-int/lit16 sub-int/lit8 sub-long sub-long/2addr throw throw-verification-error ushr-int ushr-int/2addr ushr-int/lit8 ushr-long ushr-long/2addr xor-int xor-int/2addr xor-int/lit16 xor-int/lit8 xor-long xor-long/2addr</string>
            </dict>
            <dict>
                <key>Word2</key>
                <string>v0 v1 v2 v3 v4 v5 v6 v7 v8 v9 v10 v11 v12 v13 v14 v15 v16 v17 v18 v19 v20 v21 v22 v23 v24 v25 v26 v27 v28 v29 v30 v31 v32 v33 v34 v35 v36 v37 v38 v39 v40 v41 v42 v43 v44 v45 v46 v47 v48 v49 v50 p0 p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12 p13 p14 p15 p16 p17 p18 p19 p20 p21 p22 p23 p24 p25 p26 p27 p28 p29 p30</string>
            </dict>
            <dict>
                <key>Word3</key>
                <string>array-data .catch .catchall .class .end .end\ local .enum .epilogue .field .implements .line .local .locals .parameter .prologue .registers .restart .restart\ local .source .subannotation .super</string>
            </dict>
            <dict>
                <key>Word4</key>
                <string>abstract bridge constructor declared-synchronized enum final interface native private protected public static strictfp synchronized synthetic system transient varargs volatile</string>
            </dict>
            <dict>
                <key>Word4</key>
                <string>(&quot;0)&quot;0</string>
            </dict>
            <dict>
                <key>Word5</key>
                <string>.method .annotation .sparse-switch .packed-switch</string>
            </dict>
            <dict>
                <key>word6</key>
                <string>.end\ method .end\ annotation .end\ sparse-switch .end\ packed-switch</string>
            </dict>
            <dict>
                <key>word7</key>
                <string>&quot; ( ) , ; { } &gt;</string>
            </dict>
            <key>uuid</key>
            <string>27798CC6-6B1D-11D9-B8FA-000D93589AF6</string>
        </dict>
        </plist>
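    Note that Sublime Text 2 does not read Notepad++ definition files directly: its syntax definitions are TextMate-style .tmLanguage property-list files dropped into a folder under Packages (Preferences > Browse Packages...), typically Packages/User. A minimal, hypothetical skeleton to port the word lists above into (the scope names and the regex are illustrative, not a finished grammar):

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>name</key>      <string>Smali</string>
            <key>fileTypes</key> <array><string>smali</string></array>
            <key>scopeName</key> <string>source.smali</string>
            <key>patterns</key>
            <array>
                <dict>
                    <!-- one rule per word list, e.g. a few opcodes from Word1 -->
                    <key>match</key> <string>\b(add-int|sub-int|move|return-void)\b</string>
                    <key>name</key>  <string>keyword.other.smali</string>
                </dict>
            </array>
            <key>uuid</key> <string>27798CC6-6B1D-11D9-B8FA-000D93589AF6</string>
        </dict>
        </plist>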

    Read the article

  • The Business Case for Big Data

    - by jasonw
    The Business Case for Big Data, Part 1: What's the Big Deal? Okay, so a new buzzword is emerging. It's gone beyond just a buzzword now, and I think it is going to change the landscape of retail, financial services, healthcare... everything. Let me spend a moment on what I'm going to cover. Massive amounts of data are being collected every second, more than ever imaginable, and the size of this data is more than can be practically managed by today's strategies and technologies. There is a revolution at hand centering on this groundswell of data, and it will change how we execute our businesses through greater efficiency, new revenue discovery, and even innovation. It is the revolution of Big Data. This is more than just a new buzzword being tossed around technology circles. This blog series on Big Data will explain this new wave of technology and provide a roadmap for businesses to take advantage of this growing trend.

    Cases for Big Data: There is a growing list of use cases for big data. We naturally think of marketing as the low-hanging fruit, and many projects look to analyze Twitter feeds to find new ways to do marketing. A great example is a TED talk on data visualization of Facebook data that I saw during my master's studies at the University of Virginia: by looking at status changes and updates on users' Walls, you can see when break-ups are most likely to occur (TED video). This is the intersection of Big Data, analytics, and traditional structured data, and marketers can use it to sell more stuff. I also really like a piece on analyzing Twitter feeds to measure mood: the company behind it was bought by a hedge fund because it could predict how the S&P was going to do within three days at 85% accuracy (link to the article). Here we see a convergence of predictive analytics and Big Data. So we'll look at a lot of these business cases and start talking about what this means for the business. It's more than just finding ways to use Hadoop + NoSQL, and we'll talk about that too. How do I start in Big Data? That's what is coming in the next post.

    Read the article

  • Windows Azure Use Case: Fast Acquisitions

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Many organizations absorb, take over, or merge with other organizations. In these cases, one of the most difficult parts of the process is merging or changing the IT systems that the employees use to do their work, process payments, and even get paid. Normally the two companies have disparate systems, and several approaches can be used to bridge the technology between them. An organization may choose to retain both systems and manage them separately; the advantage here is speed, and keeping the profit/loss sheets separate. Another choice is to slowly "sunset" or stop using one organization's system, cutting over to the other immediately or at a later date. Although a popular choice, one of the most difficult methods is to extract data and processes from one system and import them into the other: employees on the transitioning system have to be trained on the new one, the data must be examined and cleansed, and there is inevitable disruption when this happens. Still another option is to integrate the systems. This may prove to be as much work as a transitional strategy, but may have less impact on the users or the balance sheet.

    Implementation: A distributed computing paradigm can be a good strategic solution for most of these strategies. Retaining both systems is made simpler by allowing the users at the second organization immediate access to the new system, because security accounts can be created quickly inside an application; there is no need to set up a VPN or any connection other than to the Internet. Having the users stop using one system and start with the other is also simple in Windows Azure, for the same reason. Extracting data to Azure holds the same limitations as an on-premise system, and may even be more problematic because of the large data transfers that might be required. In a distributed environment you pay for the data transfer, so a mixed migration strategy is not recommended; however, if the data is slowly migrated over time with a defined cutover, this can be an effective strategy. If done properly, an integration strategy works very well for a distributed computing environment like Windows Azure. If the Azure code is architected as a series of services, then endpoints can expose the services into and out of not only the Azure platform but internal systems as well. This is a form of the Hybrid Application use case documented here.

    References: Designing for Cloud-Optimized Architecture: http://blogs.msdn.com/b/dachou/archive/2011/01/23/designing-for-cloud-optimized-architecture.aspx | 5 Enterprise Steps for Adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0

    Read the article

  • The Case of the Missing Date/Time Stamp: Reporting Services 2008 R2 Snapshots

    - by smisner
    This week I stumbled upon an undocumented "feature" in SQL Server 2008 R2 Reporting Services as I was preparing a demonstration on how to set up and use report snapshots. If you're familiar with the main changes in this latest release of Reporting Services, you probably already know that Report Manager got a facelift this time around. Although this facelift was generally a good thing, one of the casualties, in my opinion, is the loss of the snapshot label that served two purposes: first, it flagged the report as a snapshot; second, it let you know when that snapshot was created. As part of my standard operating procedure when demonstrating report snapshots, I point out this label, so I was rather taken aback when I didn't see it in the demonstration I was preparing. It sort of upset my routine, and I'm rather partial to my routines. I thought perhaps I wasn't looking in the right place and changed Report Manager from Tile View to Detail View, but no, that label was still missing. In the grand scheme of life it's not an earth-shattering change, but you'll have to look at the Modified Date in Details View to know when the snapshot was run, or hope that the report developer included a textbox showing the execution time in the report. (Hint: this is a good time to add this to your list of report development best practices, whether a report gets set up as a report snapshot or not!)

    A snapshot from the past: In case you don't remember how a snapshot appeared in Report Manager back in the old days (of SQL Server 2008 and earlier), here's an image I snagged from my Reporting Services 2008 Step by Step manuscript.

    A snapshot in the present: A report server running in SharePoint integrated mode had no such label; there you had to rely on the Report Modified date-time stamp to know the snapshot execution time. So I guess all platforms are now consistent. Here's a screenshot of Report Manager in the 2008 R2 version. One of these is a snapshot and the rest execute on demand. Can you tell which is the snapshot?

    Consider descriptions as an alternative: So my report snapshot demonstration has one less step, and I'll need to edit the Denali version of the Step by Step book. Things are simpler this way, but I sure wish we had an easier way to identify the execution methods of the reports. Consider using the description field to alert users that the report is a snapshot. It might save you a few questions about why the data isn't up to date if the users know that something changed in the source of the report. Notice that the full description doesn't display in Tile View, so keep it short and sweet, or instruct users to open Details View to see the entire description.

    Read the article

  • Why does Scala not allow a '$' identifier in a case statement?

    - by Alex R
    This works as expected:

        scala> 3 match { case x: Int => 2 * x }
        res1: Int = 6

    Why does this fail?

        scala> 3 match { case $x: Int => 2 * $x }
        <console>:1: error: '=>' expected but ':' found.
               3 match { case $x: Int => 2 * $x }
                                ^

        scala> 3 match { case `$x`: Int => 2 * $x }
        <console>:1: error: '=>' expected but ':' found.
               3 match { case `$x`: Int => 2 * $x }
                                  ^

        scala> 3 match { case `$x` : Int => 2 * $x }
        <console>:1: error: '=>' expected but ':' found.
               3 match { case `$x` : Int => 2 * $x }

    '$' is supposed to be a valid identifier character, as demonstrated here:

        scala> var y = 1
        y: Int = 1

        scala> var $y = 2
        $y: Int = 2

    Thanks
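    For what it's worth, the backtick variants would not behave as intended even if they parsed: backticks in a pattern form a stable-identifier pattern, which compares the scrutinee against an existing definition rather than binding a new variable. A small sketch of the two behaviors:

        // A lowercase identifier in a pattern binds a fresh variable:
        3 match { case x: Int => 2 * x }            // Int = 6

        // Backticks mean "compare against this existing val", not "bind":
        val limit = 3
        5 match {
          case `limit` => "exactly limit"               // equality test against limit
          case other   => "something else: " + other    // binds as usual
        }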

    Read the article

  • Strange behavior of the switch/case statement in Java

    - by supernova
    I understand that Java switch cases are designed this way, but why does Java behave like this?

        int x = 1;
        switch (x) {
            case 1: System.out.println(1);
            case 2: System.out.println(2);
            case 3: System.out.println(3);
            default: System.out.println("default");
        }

    Output:

        1
        2
        3
        default

    My question is: why are case 2 and case 3 executed? I know I omitted the break statements, but x was never 2 or 3, yet case 2 and case 3 still execute.
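    Without break, control "falls through": the matching case label is only an entry point, and execution continues through every statement that follows it, ignoring the other labels. A sketch of the version that prints only 1:

        int x = 1;
        switch (x) {
            case 1:
                System.out.println(1);
                break;              // stops fall-through into case 2
            case 2:
                System.out.println(2);
                break;
            case 3:
                System.out.println(3);
                break;
            default:
                System.out.println("default");
        }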

    Read the article

  • Switch/Case statements in C++

    - by vgoklani
    Regarding the switch/case statement in the C++ code below: "case 1" is obviously false, so how/why does execution enter the do-while loop?

        #include <iostream>
        using namespace std;

        int main() {
            int test = 4;
            switch (test) {
                case 1:
                    do {
                case 2:     test++;
                case 3:     test++;
                case 4:     cout << "How did I get inside the do-while loop?" << endl;
                            break;
                case 5:     test++;
                    } while (test > 0);
                    cout << test << endl;
            }
        }
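    This is the same mechanism Duff's device exploits: case labels are ordinary labels, and switch performs a jump to the matching one even when it sits inside a nested statement. switch(4) jumps straight to case 4, and since a do-while only tests its condition at the bottom, entering mid-body is legal. A minimal sketch of the same jump:

        #include <iostream>

        int main() {
            int n = 2;
            switch (n) {
                case 1:
                    do {
                        std::cout << "top of loop\n";   // skipped when n == 2
                case 2:
                        std::cout << "entered mid-loop\n";
                    } while (false);
            }
            return 0;                                   // prints only: entered mid-loop
        }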

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice this is not always the case: having the option of a distributed environment can make both the deployment and the development process much faster.

    Implementation: When an organization designs code, it essentially becomes a Software-as-a-Service (SaaS) provider to its own organization, and the IT operations team becomes the Infrastructure-as-a-Service (IaaS) provider to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is: Requirements, Analysis, Design and Development, Implementation, Testing, Deployment to Production, Maintenance.

    In an on-premise environment, this often equates to the following process map:

    - Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    - Analysis: Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved.
    - Design and Development: Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
    - Implementation: Code checked into the main branch. Code forked as needed.
    - Testing: Code deployed to on-premise testing servers. If no server capacity is available, more resources are procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. testing performed.
    - Deployment to Production: Server team involved to select platform and environments with available capacity. If no server capacity is available, the standard budgeting and procurement process is followed, and systems are built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place.
    - Maintenance: Code changes evaluated and altered according to need.

    In a distributed computing environment like Windows Azure, the process maps a bit differently:

    - Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    - Analysis: Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved.
    - Design and Development: Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
    - Implementation: Code checked into the main branch. Code forked as needed.
    - Testing: Code deployed to Azure. Manual and automated functional, load, security, etc. testing performed.
    - Deployment to Production: Code deployed to Azure. Point-in-time backup and recovery plans configured and put into place. (HA/DR and automated backups are already present in the Azure fabric.)
    - Maintenance: Code changes evaluated and altered according to need.

    This means that several steps can be removed or expedited. It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further, since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the "Azure Marketplace": in effect, this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt in to their current code, possibly saving development time.

    Resources:
    - Whitepaper download, What is ALM?: http://go.microsoft.com/?linkid=9743693
    - Whitepaper download, ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690
    - LiveMeeting recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • Faster Trip to Innovation with Simplified Data Integration: Sabre Holdings Case Study

    - by Tanu Sood
    Author: Irem Radzik, Director of Product Marketing, Data Integration, Oracle

    In today's fast-paced, competitive environment, IT teams are under pressure to deliver technology solutions for many critical business initiatives as fast as possible. When the focus is on speed, it can be easy to continue to use old-style, point-to-point custom scripts that grow organically to the point where they are unmanageable and too costly to maintain. As data volumes, data sources, and end users grow, uncoordinated data integration efforts create significant inefficiencies for both IT and business users. In addition to losing IT productivity to maintaining a spaghetti architecture, data integrity becomes a concern as well: errors caused by inconsistent data and manual data entry can prove very costly for companies and disrupt business activities. Many industry leaders now recognize that data should be moved in an automated and reliable manner across all platforms to have one version of the truth. By simplifying their data integration architecture and standardizing on a centralized approach, IT teams can accelerate time to market. In particular, a centralized, shared-service approach brings agility, increases IT productivity, and frees up resources for innovation.

    One such industry leader that simplified its data integration architecture is Sabre Holdings. Sabre Holdings provides distribution and technology solutions for the travel industry, and is a winner of the Oracle Excellence Award for Fusion Middleware in 2011 in the data integration category. I had the pleasure of hosting Sabre Holdings on a public webcast to discuss their data integration best practices for data warehousing. In this webcast, Sabre's Amjad Saeed presented how the company reduced complexity by consolidating systems and standardizing development on Oracle Data Integrator and Oracle GoldenGate for its global data warehouse development team. With Oracle's complete real-time data integration solution, Sabre also streamlined support and maintenance operations, achieved a real-time view of the execution of the integration processes, and can manage data warehouse and business intelligence solution performance on demand. By reducing complexity and leveraging timely market insights, the company was able to decrease time to market by 40%.

    You can now listen to the webcast on demand: Sabre Holdings Case Study: Accelerating Innovation using Oracle Data Integration. I invite you to hear directly from Sabre how to use advanced data integration capabilities to enable accelerated innovation. To learn more about Oracle's data integration offering, you can download our free resources.

    Read the article

  • Multiple foldmethods in vim

    - by bjarkef
    I use the folding option of vim quite a lot, and usually have foldmethod set to syntax. Recently I discovered that it is possible to add custom folds, such that I can put whole blocks between /*{{{*/ and /*}}}*/, which is very useful for grouping large sections of a source file together. However, to use that feature I need to set foldmethod to marker, and I lose the syntax folding. Is it possible to have two active foldmethods at the same time in vim? set foldmethod=syntax,marker does not work.
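    For reference, 'foldmethod' is a single-valued, window-local option, so vim cannot run two foldmethods at once. One common workaround is a quick toggle; a minimal sketch for a vimrc:

        " Toggle the current window between syntax and marker folding.
        function! ToggleFoldMethod()
          if &l:foldmethod ==# 'syntax'
            setlocal foldmethod=marker
          else
            setlocal foldmethod=syntax
          endif
        endfunction
        nnoremap <leader>zf :call ToggleFoldMethod()<CR>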

    Read the article

  • Less than in Groovy case/switch statement

    - by UltraVi01
    I have the following switch statement:

        switch (points) {
            case 0:
                name = "new";
                break;
            case 1..14:
                badgeName = "bronze-coin";
                break;
            case 15..29:
                badgeName = "silver-coin";
                break;
            default:
                badgeName = "ruby";
        }

    I'd like the first case (case 0) to include points less than or equal to 0. How can I do this in Groovy?
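    One approach worth sketching: Groovy dispatches switch through isCase(), and a closure used as a case value is called with the switch operand, so a "less than or equal to zero" case can be written directly (a minimal sketch, reusing the names from the question):

        switch (points) {
            case { it <= 0 }:          // closure case: matches when it returns true
                badgeName = "new";
                break;
            case 1..14:
                badgeName = "bronze-coin";
                break;
            case 15..29:
                badgeName = "silver-coin";
                break;
            default:
                badgeName = "ruby";
        }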

    Read the article

  • Secret, unlogged, transparent, case-sensitive proxy in IIS6?

    - by Ian Boyd
    Does IIS have a secret, unlogged, transparent, case-sensitive proxy built into it? A file exists on the web server:

        GET http://www.stackoverflow.com/javascript/ModifyQuoteArea.js HTTP/1.1
        Accept: text/html, application/xhtml+xml, */*
        Accept-Language: en-US
        User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.stackoverflow.com

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Content-Length: 29246
        Date: Mon, 07 Mar 2011 14:20:07 GMT
        Content-Type: application/x-javascript
        ETag: "5a0a6178edacb1:1c51"
        Server: Microsoft-IIS/6.0
        Last-Modified: Fri, 02 Tue 2010 17:03:32 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET
        ...

    The problem is that changes made to the file do not get served; the old version (i.e. from February of last year) keeps getting served:

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Content-Length: 29246
        Date: Mon, 07 Mar 2011 14:23:07 GMT
        Content-Type: application/x-javascript
        ETag: "5a0a6178edacb1:1c51"
        Server: Microsoft-IIS/6.0
        Last-Modified: Fri, 02 Tue 2010 17:03:32 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET
        ...

    The same old file gets served, even though we've:

    - renamed the file
    - deleted the file
    - restarted IIS

    The request for this file does not appear in the IIS logs (e.g. C:\WINNT\System32\LogFiles\W3SVC7\), and this only happens from the outside (i.e. the internet). If you issue the request locally on the server, then you will:

    - get the current file (file there)
    - get a 404 (file renamed)
    - get a 404 (file deleted)

    But if I change the case of the requested resource, i.e.:

        GET http://www.stackoverflow.com/javascript/MoDiFyQuOtEArEa.js HTTP/1.1
        Accept: text/html, application/xhtml+xml, */*
        Accept-Language: en-US
        User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.stackoverflow.com

    (note: MoDiFyQuOtEArEa.js versus ModifyQuoteArea.js), then I do get the proper file (or the 404 I expect if the file has been renamed or deleted). But any subsequent changes to the file will not show up until I change the case of the file I'm asking for. The IIS logs all indicate that the (internet) requests are coming from the correct client on the internet (i.e. not from some intermediate proxy). Since the file doesn't exist on the hard drive anymore, I conclude that there is a proxy:

    - the requests serviced from this proxy are not logged in the IIS logs
    - the requests for new files are logged, and from the client IP, not a proxy IP
    - the proxy is case-sensitive

    This does not sound like something Microsoft, or IIS, would do:

    - a transparent proxy
    - case-sensitive
    - unlogged
    - surviving restarts of IIS
    - surviving in a cache for hours

    I can't believe that our customer's IIS is doing these things, so I'm assuming there is some other transparent proxy in front of IIS. Or does IIS have a transparent, unlogged, case-sensitive, memory-based proxy that caches content for at least 7 hours? (Come Monday morning, IIS is serving the correct file, unlogged.)

    Read the article
