Search Results

Search found 12144 results on 486 pages for 'old'.

  • Apache attack on compromised server, iframe injected by string replace

    - by Quang-Tuan Luong
    My server has been compromised recently. This morning, I discovered that the intruder is injecting an iframe into each of my HTML pages. After testing, I found out that the way he does it is by getting Apache (?) to replace every instance of </body> with <iframe link to malware></iframe></body>. For example, if I browse a file residing on the server consisting of:

        </body>
        </body>

    then my browser sees a file consisting of:

        <iframe link to malware></iframe></body>
        <iframe link to malware></iframe></body>

    I immediately stopped Apache to protect my visitors, but so far I have not been able to find what the intruder changed on the server to perform the attack. I presume he has modified an Apache config file, but I have no idea which one. In particular, I have looked for recently modified files by time-stamp, but did not find anything noteworthy. Thanks for any help. Tuan. PS: I am in the process of rebuilding a new server from scratch, but in the meantime I would like to keep the old one running, since this is a business site.
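
    One plausible mechanism to look for (an assumption on my part, not something confirmed in the question) is Apache's mod_substitute, which can rewrite response bodies with a single directive pair. A sketch of what an injected filter could look like, plus a search for it; the config paths are common defaults and may differ on your distribution:

        # Hypothetical injected filter that would reproduce the symptom exactly:
        <IfModule mod_substitute.c>
            AddOutputFilterByType SUBSTITUTE text/html
            Substitute "s|</body>|<iframe link to malware></iframe></body>|i"
        </IfModule>

        # Search every loaded config file and .htaccess for body-rewriting directives:
        grep -Ri -e substitute -e outputfilter /etc/apache2 /etc/httpd /var/www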

  • Custom webserver caching

    - by Mark Kinsella
    I'm working with a custom webserver on an embedded system and having some problems correctly setting my HTTP headers for caching. Our webserver generates all dynamic content as XML, and we use semi-static XSL files to display it, with some dynamic JSON requests thrown in for good measure, along with semi-static images. I say "semi-static" because the problems occur when we need to do a firmware update, which might change the XSL and image files.

    Here's what needs to be done: cache the XSL and image files, and do not cache the XML and JSON responses. I have full control over the HTTP response and am currently:

    - Using ETags with the XSL and image files, using the modified time and size to generate the ETag
    - Setting Cache-Control: no-cache on the XML and JSON responses

    As I said, everything works dandy until a firmware update, when the XSL and image files are sometimes cached. I've seen it work fine with the latest versions of Firefox and Safari but have had some problems with IE. I know one solution would be to simply rename the XSL and image files after each version (e.g. logo-v1.1.png, logo-v1.2.png) and set the Expires header to a date in the future, but this would be difficult with the XSL files and I'd like to avoid it. Note: there is a clock on the unit, but it requires the user to set it and might not be 100% reliable, which is what might be causing my caching issues when using ETags.

    What's the best practice that I should employ? I'd like to avoid as many webserver requests as possible, but invalidating old XSL and image files after a software update is the #1 priority.
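
    A sketch of one header arrangement that satisfies both goals without relying on the unit's clock (the ETag values are placeholders): serve the semi-static files as revalidate-always, so every request is a cheap conditional GET that returns 304 until a firmware update changes the ETag, and mark the dynamic responses fully uncacheable.

        # Semi-static XSL/image response (ETag from mtime + size, as described):
        HTTP/1.1 200 OK
        ETag: "4f3a9c-1a2b"
        Cache-Control: max-age=0, must-revalidate

        # Subsequent request carrying a matching If-None-Match header:
        HTTP/1.1 304 Not Modified
        ETag: "4f3a9c-1a2b"

        # Dynamic XML/JSON response:
        HTTP/1.1 200 OK
        Cache-Control: no-cache, no-store
        Pragma: no-cache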

  • How can I modify my code to line through the bezier control points?

    - by WillyCornbread
    Hi all, I am using anchor points and control points to create a shape using curveTo. It's all working fine, but I cannot figure out how to get my lines to go through the center of the control points (blue dots) when the line is not straight. Here is my code for drawing the shape:

        // clear old line and draw new / begin fill
        var g:Graphics = graphics;
        g.clear();
        g.lineStyle(2, 0, 1);
        g.beginFill(0x0099FF, .1);

        // move to starting anchor point
        var startX:Number = anchorPoints[0].x;
        var startY:Number = anchorPoints[0].y;
        g.moveTo(startX, startY);

        // Connect the dots
        var numAnchors:Number = anchorPoints.length;
        for (var i:Number = 1; i < numAnchors; i++) {
            // curve to next anchor through control
            g.curveTo(controlPoints[i].x, controlPoints[i].y, anchorPoints[i].x, anchorPoints[i].y);
        }

        // Close the loop
        g.curveTo(controlPoints[0].x, controlPoints[0].y, startX, startY);

    And the shape I'm drawing for reference: [image omitted]. How can I modify my code so that the lines go directly through the blue control points? Thanks in advance! b
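
    For what it's worth, a quadratic Bézier only passes through its control point when the segment is straight: B(t) = (1-t)²A + 2t(1-t)C + t²B gives B(0.5) = 0.25A + 0.5C + 0.25B, so to make the curve hit a point P at its halfway mark you need the adjusted control point C = 2P - (A + B)/2. A sketch of that fix (assuming the blue dots are the points the curve should pass through, and that flash.geom.Point is imported):

        // Control point that makes the curve pass through p at t = 0.5:
        function throughPoint(p:Point, a:Point, b:Point):Point {
            return new Point(2 * p.x - (a.x + b.x) / 2,
                             2 * p.y - (a.y + b.y) / 2);
        }

        // Inside the loop, treating controlPoints[i] as the point to hit:
        var c:Point = throughPoint(controlPoints[i], anchorPoints[i - 1], anchorPoints[i]);
        g.curveTo(c.x, c.y, anchorPoints[i].x, anchorPoints[i].y);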

  • Why would fopen fail to open a file that exists?

    - by void
    I'm on Windows XP using Visual Studio 6 (yes, I know it's old), building/maintaining a C++ DLL. I've encountered a problem with fopen failing to open an existing file; it always returns NULL. I've tried:

    - Checking errno and _doserrno by setting both to zero and then checking them again; both remain zero, and thus GetLastError() reports no errors. I know fopen isn't required to set errno when it encounters an error, according to the C standard.
    - Hardcoding the file paths, which are not relative.
    - Trying on another developer's machine, with the same result.

    The really strange thing is that CreateFile works and the file can be read with ReadFile. We believe this works in a release build; however, we are also seeing some very odd behaviour in other areas of the application and we're not sure if this is related. The code is below. I don't see anything odd; it looks quite standard to me. The source file hasn't changed for just under half a year.

        HRESULT CDataHandler::LoadFile( CStdString szFilePath )
        {
            //Code
            FILE* pFile;
            if ( NULL == ( pFile = fopen( szFilePath.c_str(), "rb" ) ) )
            {
                return S_FALSE;
            }
            //More code
        }
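
    Not from the thread, but a minimal diagnostic sketch (assuming the DLL can reach the C runtime's error state) that reports the CRT's own reason for the failure at the call site. If errno is still zero immediately after the failing call, that pattern often points at two different C runtimes loaded in the same process, with the real error landing in the other runtime's errno:

        #include <cstdio>
        #include <cstring>
        #include <cerrno>

        FILE* OpenBinaryForRead(const char* path)
        {
            errno = 0;                      // clear this CRT's error state first
            FILE* f = fopen(path, "rb");
            if (f == NULL)
            {
                // strerror() decodes this CRT's errno; a message such as
                // "Permission denied" or "Too many open files" narrows it down.
                fprintf(stderr, "fopen(\"%s\") failed: errno=%d (%s)\n",
                        path, errno, strerror(errno));
            }
            return f;
        }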

  • Auditing in Entity Framework.

    - by Gabriel Susai
    After going through Entity Framework I have a couple of questions on implementing auditing in Entity Framework. I want to store each column value that is created or updated to a different audit table. Right now I am calling SaveChanges(false) to save the records in the DB (the changes in the context are still not reset). Then I get the added/modified records and loop through the GetObjectStateEntries. But I don't know how to get the values of the columns whose values are filled in by a stored proc, i.e. createdate, modifieddate, etc. Below is the sample code I am working on:

        // Get the changed entries (i.e. records)
        IEnumerable<ObjectStateEntry> changes =
            context.ObjectStateManager.GetObjectStateEntries(EntityState.Modified);

        // Iterate each ObjectStateEntry (for each record in the modified collection)
        foreach (ObjectStateEntry entry in changes)
        {
            // Iterate the columns in each record and get their old and new values
            foreach (var columnName in entry.GetModifiedProperties())
            {
                string oldValue = entry.OriginalValues[columnName].ToString();
                string newValue = entry.CurrentValues[columnName].ToString();
                // Do some auditing by sending entityname, columnname, oldvalue, newvalue
            }
        }

        changes = context.ObjectStateManager.GetObjectStateEntries(EntityState.Added);
        foreach (ObjectStateEntry entry in changes)
        {
            if (entry.IsRelationship) continue;
            var columnNames = (from p in entry.EntitySet.ElementType.Members
                               select p.Name).ToList();
            foreach (var columnName in columnNames)
            {
                string newValue = entry.CurrentValues[columnName].ToString();
                // Do some auditing by sending entityname, columnname, value
            }
        }
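
    A sketch of one way to see the stored-proc-generated columns (an assumption based on the ObjectContext API, not a tested answer): audit the tracked values while the context is still dirty, accept the changes, then ask the store to refresh the saved entities so createdate/modifieddate become visible. AuditTrackedChanges and savedEntities are hypothetical placeholders for the loops above and a list of entities captured before saving:

        context.SaveChanges(false);     // rows are in the DB; change tracking intact

        AuditTrackedChanges(context);   // the GetObjectStateEntries loops above

        context.AcceptAllChanges();     // entries become Unchanged, so Refresh is legal

        // Re-read the rows so store-generated columns are populated locally:
        foreach (object entity in savedEntities)
        {
            context.Refresh(RefreshMode.StoreWins, entity);
            // the entity's createdate/modifieddate properties now hold the DB values
        }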

  • Index Tuning for SSIS tasks

    - by Raj More
    I am loading tables in my warehouse using SSIS. Since my SSIS is slow, it seemed like a great idea to build indexes on the tables. There are no primary keys (and therefore no foreign keys), indexes (clustered or otherwise), or constraints on this warehouse. In other words, it is 100% efficiency free. We are going to put indexes in based on usage, by analyzing new queries and current query performance.

    So, instead of doing it our old-fashioned sweat-and-grunt way of actually reading the SQL statements and execution plans, I thought I'd put the shiny new Database Engine Tuning Advisor to use. I turned SQL logging off in my SSIS package, ran a "Tuning" trace, saved it to a table, and analyzed the output in the Tuning Advisor. Most of the lookups are done as:

        exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',1
        exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',2
        exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',3
        exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',4

    and when analyzed, these statements are given the reason "Event does not reference any tables". Huh? Does it not see the FROM [dbo].[Company]?! What is going on here? So, I have multiple questions:

    1. How do I get it to capture the actual statement executing in my trace, not what was submitted in a batch?
    2. Are there any best practices to follow for tuning performance related to SSIS packages running against SQL Server 2008?
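
    Independent of the Tuning Advisor, the traced lookup itself suggests an index directly. A sketch only (verify selectivity and the write impact on your loads before creating it), with the key and included columns taken from the statement above:

        -- Covering index for the repeated CompanyID lookup:
        CREATE NONCLUSTERED INDEX IX_Company_CompanyID
        ON [dbo].[Company] ([CompanyID])
        INCLUDE ([Active], [CompanyName], [CompanyShortName], [CompanyTypeID],
                 [HierarchyNodeID], [StartDateTime], [EndDateTime]);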

  • database schema eligible for delta synchronization

    - by WilliamLou
    It's a question for discussion only. Right now, I need to re-design a MySQL database table. Basically, this table contains all the contract records I synchronized from another database. The contract records can be modified or deleted, and users can add new contract records via a GUI interface. At this stage, the table structure is exactly the same as the contract info (columns: serial number, expiry date, etc.). In that case, I can only synchronize the whole table (delete all old records, replace with new ones). If I want to delta-synchronize the table (only synchronize modified, new, and deleted records), how should I change the database schema? Here is the method I have come up with, but I need your suggestions because I think it's a common scenario in database applications:

    1) Introduce a sequence number concept/column: for each sync run, mark the new, modified, and deleted records with this sequence number. By recording the last synchronized sequence number, only pass those records with a higher sequence number.

    2) Because deleted contracts can be added back, and the original table has primary key constraints, should I create another table for those deleted records, or add a flag column to indicate whether this contract has been deleted?

    I hope I have explained my question clearly. Anyway, if you know any articles or have your own suggestions about this, please let me know. Thanks!
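
    A sketch of option 1 combined with a soft-delete flag (table and column names are illustrative only): every insert, update, or soft delete stamps the row with the next sequence value, and a sync pass pulls only rows past the last sequence it saw, so a re-added contract simply flips the flag back without violating the primary key.

        CREATE TABLE contract (
            serial_number  VARCHAR(32) NOT NULL PRIMARY KEY,
            expiry_date    DATE        NULL,
            -- ... the other contract columns ...
            is_deleted     TINYINT(1)  NOT NULL DEFAULT 0,  -- soft delete
            sync_seq       BIGINT      NOT NULL,            -- bumped on every change
            KEY idx_contract_sync_seq (sync_seq)
        );

        -- Delta pull: everything that changed since the last synchronized sequence.
        SELECT * FROM contract WHERE sync_seq > @last_synced_seq;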

  • Vertical crosshair reposition on Google Map after map resize Issue

    - by joe
    I use the following function to add a crosshair to my Google Map. It works great for horizontal resizing, but I cannot figure out how to make it work for vertical resizing. After resizing the map, it requires the user to zoom in or out one level, upon which the crosshair snaps to center.

        var crosshairsSize = 17;

        GMap2.prototype.addCrosshairs = function() {
            var container = this.getContainer();
            if (this.crosshairs) { $(this.crosshairs).remove(); }
            var crosshairs = document.createElement("img");
            crosshairs.src = '../images/crosshair2.gif';
            crosshairs.style.width = crosshairsSize + 'px';
            crosshairs.style.height = crosshairsSize + 'px';
            crosshairs.style.border = '0';
            crosshairs.style.position = 'relative';
            crosshairs.style.top = ((parseInt(container.clientHeight) - crosshairsSize) / 2) + 'px';
            crosshairs.style.left = "0px"; // The map is centered so 0 will do
            crosshairs.style.zIndex = '500';
            container.appendChild(crosshairs);
            this.crosshairs = crosshairs;
            return crosshairs;
        };

    I call map.checkResize() and the map is correct; it's just the crosshair that is off. I've tried firing a JavaScript zoom in one level, but it doesn't work like zooming in with the mouse. It's only zooming in/out with the mouse scroll wheel, too... clicking and dragging the zoom slider doesn't work, so somehow it likes the mouse scroll wheel. Firing addCrosshairs() after the resize makes no difference; it places the new crosshair in the old spot and still requires a mouse scroll zoom.
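
    A sketch of an alternative (untested assumption: the map container is a positioned element, which lets CSS do the centering): absolutely position the image at 50%/50% and pull it back by half its size, so its position never needs recomputing after checkResize().

        // Replace the position/top/left lines in addCrosshairs with:
        crosshairs.style.position = 'absolute';
        crosshairs.style.top = '50%';
        crosshairs.style.left = '50%';
        crosshairs.style.marginTop = (-crosshairsSize / 2) + 'px';
        crosshairs.style.marginLeft = (-crosshairsSize / 2) + 'px';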

  • Extend legacy site with another server-side programming platform best practice

    - by Andrew Florko
    The company I work for has a site developed 6-8 years ago by a team that was enthusiastic enough to use their own private PHP-based CMS. I have to put dynamic data from one intranet company database on this site in one week: 2-3 pages. I contacted the company site administrator and she showed me the administrative part; the CMS allows one only to insert HTML blocks and manage the site map (the site is deployed on a machine that is inside the company and fully accessible and upgradable). I'm not a PHP guy and I don't want to dive into a legacy CMS engine that hardly anyone has ever heard of. I also don't want to contact the developer team, because I'm not sure they are still around and capable enough to extend this old site, and it would take too much time anyway. I am about to deploy a helper ASP.NET site on IIS with the 2-3 pages required, and refer to the helper site via an iframe from the present site. The new pages will also allow downloading some dynamic content from the present site. Is this OK, and what are the pitfalls of the iframe approach?

  • running .net application over a network

    - by Marlon
    Hello, I need some advice please. I need to enable a .NET application to run over a network share. The problem is that this will be on clients' network shares, and so the path will not be identical. I've had a quick look at ClickOnce and the publish options in VS2008, but it wants a specific network share location, and I'm assuming this location gets stored somewhere when it does its thing. At the moment the job is being done with an old VB6 application, which gets around all these security issues, but that application is poorly written and almost impossible to maintain, so it really needs to go. Is it possible for the domain controller to be set up to allow this specific .NET application to execute? Any other options would be welcome, as I want to get this sorted; this little application is very business critical. I ought to say that the client networks are schools, and thus are often quite locked down, as are the client machines, so manually adding exceptions to each client machine is a big no-no. Marlon

    --Edit-- Apologies, I forgot to mention we're restricted to .NET 2.0 for the moment; we are planning to upgrade to 4.0, but that won't be immediate.
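
    For background: the .NET 2.0 CLR grants code launched from a network share only LocalIntranet permissions, which is why the exe fails where VB6 didn't. A sketch of the usual workaround (the share path and group name are placeholders, and this still has to reach each machine once, e.g. via a domain startup script or Group Policy, which may rule it out for locked-down school networks):

        rem Grant FullTrust to anything under the application's share (machine policy):
        caspol -machine -quiet -addgroup 1.2 ^
            -url "file://server/apps/MyApp/*" FullTrust ^
            -name "MyAppShare"

    For what it's worth, .NET 3.5 SP1 later removed this restriction for exes run from shares, but that doesn't help a 2.0-only deployment.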

  • Setting Connection Parameters via ADO for MSSQL

    - by taspeotis
    Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries. We have some old Access reports that use a handful of VBScript functions in the SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application would set these before invoking the query, and then the queries can return information between date ranges specified in our application. We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL. I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code.

    1. Set all variables before running Recordset.OpenForward:

        Connection->Execute("SET @GetStartDate = ...");
        Connection->Execute("SET @GetEndDate = ...");
        // Repeat for all parameters

    Will these variables persist to later calls of Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT @variable statement?

    2. Create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL:

        // Command has previously been created
        ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate");
        ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate");
        // Set values and attach etc...

    What I would like to know is if there is something like:

        Connection->SetParameter("GetStartDate", "20090101");
        Connection->SetParameter("GetEndDate", "20100101");

    and these will persist for the lifetime of the connection, and the SQL can do something like @GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.
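
    One relevant detail: T-SQL variables are batch-scoped, so the SET in option 1 will not survive into a later OpenForward call. The one piece of writable session state SQL Server does carry across batches on the same connection is CONTEXT_INFO (128 bytes). A sketch, with the fixed-width packing purely an assumption:

        -- Once, after connecting (two dates packed as yyyymmdd):
        DECLARE @ctx varbinary(128);
        SET @ctx = CAST('2009010120100101' AS varbinary(128));
        SET CONTEXT_INFO @ctx;

        -- Any later batch on the same connection can unpack it:
        SELECT CAST(SUBSTRING(CONTEXT_INFO(), 1, 8) AS char(8)) AS StartDate,
               CAST(SUBSTRING(CONTEXT_INFO(), 9, 8) AS char(8)) AS EndDate;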

  • git merge with renamed files

    - by Kevin
    I have a large website that I am moving into a new framework, and in the process adding git. The current site doesn't have any version control on it. I started by copying the site into a new git repository. I made a new branch and made all of the changes that were needed to make it work with the new framework. One of those steps was changing the file extension of all of the pages.

    Now, in the time that I have been working on the new site, changes have been made to files on the old site. So I switched to master and copied all of those changes in. The problem is that when I merge the branch with the new framework back onto master, there is a conflict on every file that was changed on the master branch. I wouldn't be too worried about it, but there are a couple of hundred files with changes. I have tried git rebase and git rebase --merge with no luck. How can I merge these two branches without dealing with every file?
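
    A sketch of one avenue (the branch name and threshold are placeholders to adapt): since the framework branch renamed every page, git's rename detection decides whether an old-site change lands on the renamed file or conflicts, so lowering the similarity threshold for the recursive strategy can pair up files that were both renamed and modified:

        # From the framework branch, merge master with more aggressive rename pairing:
        git checkout new-framework
        git merge -s recursive -X rename-threshold=25 master

        # For stragglers, take one side wholesale instead of hand-merging:
        git checkout --theirs -- path/to/page.php   # keep master's version
        git checkout --ours   -- path/to/page.ext   # keep the framework version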

  • I installed XAMPP in a virtual drive and now I can't run its services. Why?

    - by Haris
    Hi, the description is quite long. Please spend some time to read it. ^:)^

    I have an old PHP application and I'm trying to test and debug it. Unfortunately, the application uses important data, so I can't just click this and that. Now, what I'm trying to do is create a copy of the application on a different computer. From now on, I will call the computer running my original PHP application 'Computer A' and the computer which I'm going to use to run the copy 'Computer B'.

    To prevent missing-link problems, since the application contains static paths (such as in images or tags), I have to copy all files and folders related to my PHP application from Computer A to the same path on Computer B. Unfortunately, Computer B only has drive C, while Computer A has drive D, and the files of my PHP application are located in 'D:\xampp\htdocs' on Computer A.

    OK, now I have to create drive D on Computer B. At first, I tried to create a second partition on Computer B using PowerQuest Partition Magic 8, but somehow Partition Magic doesn't run on Computer B. I have tried to reinstall it, but it still doesn't run. So the alternative was to create a virtual drive, and that is what I did: I created a virtual drive by running the 'subst' command in Command Prompt. The virtual drive is D and it refers to the directory 'C:\Virtual'.

    After I had drive D on Computer B, I installed XAMPP there. The installation was successful. However, when I try to run the Apache, MySQL, or FileZilla service, I receive the error message "Error 3: The system cannot find the file specified.". On Computer B there is no IIS or other process using port 80. What should I do? Please help me. Many thanks in advance, Haris
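
    A likely explanation (an inference from how subst works, not a verified diagnosis): subst mappings exist only in the session of the user who created them, while the Apache/MySQL/FileZilla services start under the LocalSystem account, which has no drive D, hence "the system cannot find the file specified". A sketch of a machine-wide mapping that services can see (back up the registry first; a reboot is required, and the path assumes the virtual folder is C:\Virtual):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices" ^
            /v D: /t REG_SZ /d \??\C:\Virtual

    Alternatively, starting the XAMPP components as console programs from your own session (e.g. apache_start.bat in the XAMPP folder) avoids the service-account issue entirely.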

  • Does a VCL OrgChart component with decent features exist? Is there a viable alternative?

    - by user193655
    I am using the DevExpress OrgChart component, which is still maintained but not developed since 2003 (fortunately bugs are fixed, but nothing more). Honestly, this component, even if it starts to look too old, still satisfies my requirements except for two things:

    1) It doesn't support the staff feature at all; to understand what I mean, see this image (where the items in staff are Administration, Communication, IT, Special Projects).

    2) It arranges the items without optimizing the space. For example, if there are 3 items at the top level, and only the second item has 2 children, the top items are drawn more distantly because of the 2 children; there is no option for "shrinking" the diagram.

    Of course the component misses tons of the features one would expect from an org-chart tool, but in my case those two, and especially (1), are important; the rest is lack of eye candy. I am looking for VCL components, but if (as I fear, since I have never found one) such a component doesn't exist, I can see the following alternatives:

    i) Using Hydra with .NET WinForms components.

    ii) Using ActiveX components. Between the two I would prefer ActiveX because of the .NET deployment hell (what I like about Delphi is that you ship the exe to the customer with Win2k and it works). Anyway, I have never used an ActiveX control and I don't know what the deployment issues are, but I fear I will lose the opportunity of replacing an exe and upgrading the software.

    iii) Hiring a Delphi component developer who can customize the DevExpress component by adding feature (1) and maybe (2).

    I am stuck.

  • Creating a draft version of the page before publishing in Drupal 6?

    - by Marco
    I've been looking for a good way to handle revisions in Drupal, but I have yet to succeed. For some reason there is no built-in way to save a draft (that I've found so far), and the modules I've tried so far do not seem to fully work. First I tried save_as_draft, which seemed to do almost what I wanted and, if I'm not mistaken, also handles CCK fields. Sadly it seems to be broken somehow, so I can't edit a page once I've saved it as a draft. Maybe I could fix it by going through the code, but that would not be my preferred solution. The other module I tried is aptly named draft, but from what I can tell, this module only handles the title and body fields, and does this in a way that appears odd to me. Is there some common practice to solve this? I can't imagine that nobody has had to solve this before, but I haven't found any good solution to it yet.

    Clarification: I need this functionality for already existing content. That is, I want to be able to create and edit a draft version of an already published page, while the "old" version would still be available to anonymous users.

  • What is the order of SSIS data transformation component method calls

    - by Ron Ruble
    I am working on a custom data transformation component. I'm using NUnit and NMock2 to test as I code. Testing and getting the custom UI and other features right is a huge pain, in part because I can't find any documentation about the order in which SSIS invokes methods on the component at design time as well as runtime. I can correct the issues readily enough, but it's tedious and time-consuming to unregister the old version, register the new version, fire up the test SSIS package, try to display the UI, get an obscure error message, backtrace it, modify the component, and continue.

    One of the big issues involves the UI component needing access to the ComponentMetaData and BufferManager properties of the component at design time, and what I need to provide to support properties that won't be initialized until after the user enters them in the UI. I can work through it, but if someone knows of some docs or tips that would speed me up, I'd greatly appreciate it. The samples I've found haven't been much use; they seem to be directed at showing off cool stuff (Twitter, weather.com) rather than actual work. Thanks in advance.
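
    Not documentation, but a sketch that at least shortens the register/unregister loop described above (paths and the assembly name are placeholders; the PipelineComponents folder shown is the SQL Server 2008 default):

        rem Redeploy the component after each build:
        gacutil /u MyCompany.Ssis.MyTransform
        gacutil /i bin\Debug\MyCompany.Ssis.MyTransform.dll
        copy /y bin\Debug\MyCompany.Ssis.MyTransform.dll ^
            "%ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents\"

    Wiring this up as a post-build event means the test package always sees the latest build when BIDS is restarted.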

  • Difference in F# and Clojure when calling redefined functions

    - by Michiel Borkent
    In F#:

        > let f x = x + 2;;
        val f : int -> int
        > let g x = f x;;
        val g : int -> int
        > g 10;;
        val it : int = 12
        > let f x = x + 3;;
        val f : int -> int
        > g 10;;
        val it : int = 12

    In Clojure:

        1:1 user=> (defn f [x] (+ x 2))
        #'user/f
        1:2 user=> (defn g [x] (f x))
        #'user/g
        1:3 user=> (g 10)
        12
        1:4 user=> (defn f [x] (+ x 3))
        #'user/f
        1:5 user=> (g 10)
        13

    Note that in Clojure the most recent version of f gets called in the last line. In F#, however, the old version of f is still called. Why is this, and how does this work?
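
    A sketch, not from the thread, of how to get Clojure-like late binding in F# (ref cells are the standard F# 2.0-era mechanism; the underlying difference is that let creates a new immutable binding that shadows the old one while g still closes over the original f, whereas Clojure's defn mutates a var that g looks up at call time):

        let f = ref (fun x -> x + 2)   // a mutable slot holding a function
        let g x = (!f) x               // dereference the slot at call time
        printfn "%d" (g 10)            // 12
        f := (fun x -> x + 3)          // re-point the slot
        printfn "%d" (g 10)            // 13, matching the Clojure behaviour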

  • webkit - css transition effect on a div

    - by Mak Killbers
    I am trying to do a fade-in/out effect when I change the innerHTML of a div. I even tried the example given in the Safari docs. I want the div to fade out, change the contents, and then fade in again with the new content.

        <style>
            .cv { background-color: #eee; -webkit-transition: all 1s ease-in-out; }
            .cf { opacity: 0; }
        </style>
        <body>
            <div id="main">
                <div id="c"> OLD </div>
            </div>
        </body>
        <script>
            var c = document.getElementById("c");
            c.setAttribute("class", "cf");
            c.innerHTML = '';
            c.appendChild(fragment);
            c.setAttribute("class", "cv");
        </script>

    Please help me here.
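
    A sketch of one likely fix (assuming the .cv/.cf classes above and the same fragment variable): the script flips the class twice in a single run, so the browser coalesces the changes and nothing animates; also, .cf alone carries no -webkit-transition. Keeping .cv applied throughout and swapping the content only after the fade-out finishes addresses both:

        var c = document.getElementById("c");
        c.addEventListener("webkitTransitionEnd", function handler() {
            c.removeEventListener("webkitTransitionEnd", handler, false);
            c.innerHTML = "";
            c.appendChild(fragment);     // fragment as in the question
            c.className = "cv";          // fade back in with the new content
        }, false);
        c.className = "cv cf";           // keep the transition, start the fade-out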

  • How to manipulate a header and then continue with it in C#?

    - by Simon Linder
    Hi all, I want to replace an old ISAPI filter that ran on IIS6. This filter checks whether the request is of a special kind, then manipulates the header and continues with the request. Two headers are added in the manipulating method that I need for calling another special ISAPI module. So I have ISAPI C++ code like:

        DWORD OnPreProc(HTTP_FILTER_CONTEXT *pfc, HTTP_FILTER_PREPROC_HEADERS *pHeaders)
        {
            if (ManipulateHeaderInSomeWay(pfc, pHeaders))
            {
                return SF_STATUS_REQ_NEXT_NOTIFICATION;
            }
            return SF_STATUS_REQ_FINISHED;
        }

    I now want to rewrite this ISAPI filter as a managed module for IIS7. So I have something like this:

        private void OnMapRequestHandler(HttpContext context)
        {
            ManipulateHeaderInSomeWay(context);
        }

    And now what? The request does not seem to do what it should. I have already written an IIS7 native module that implements the same method, but that method has a return value with which I can tell what to do next:

        REQUEST_NOTIFICATION_STATUS CMyModule::OnMapRequestHandler(
            IN IHttpContext *pHttpContext, OUT IMapHandlerProvider *pProvider)
        {
            if (DoSomething(pHttpContext))
            {
                return RQ_NOTIFICATION_CONTINUE;
            }
            return RQ_NOTIFICATION_FINISH_REQUEST;
        }

    So is there a way to continue the request with my manipulated context?
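
    A sketch of the managed equivalent (assumptions: IIS7 integrated pipeline and .NET 4, where HttpApplication exposes a MapRequestHandler event; ManipulateHeaderInSomeWay stands in for the original filter's logic). A managed module has no return-status contract: simply returning continues the pipeline, roughly RQ_NOTIFICATION_CONTINUE, while CompleteRequest() short-circuits it, roughly RQ_NOTIFICATION_FINISH_REQUEST.

        using System;
        using System.Web;

        public class HeaderModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.MapRequestHandler += OnMapRequestHandler;
            }

            private void OnMapRequestHandler(object sender, EventArgs e)
            {
                HttpApplication app = (HttpApplication)sender;
                if (!ManipulateHeaderInSomeWay(app.Context))
                {
                    app.Response.StatusCode = 403;  // placeholder status
                    app.CompleteRequest();          // jump straight to EndRequest
                }
                // else: just return and the pipeline carries on
            }

            private bool ManipulateHeaderInSomeWay(HttpContext ctx)
            {
                return true;  // hypothetical: the original header logic goes here
            }

            public void Dispose() { }
        }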

  • Benefits of migrating my work to a new web development framework?

    - by John
    When I first started programming with PHP, I was ignorant of other PHP frameworks (like CodeIgniter, CakePHP, etc.), so I fell into the trap of re-inventing wheels, which had the benefit of being "fun" and "educational". Over time, I discovered other open source products that I found useful, like the Smarty templating engine, the jQuery library, TCPDF, FDF, etc., so I started bundling these technologies, along with things I've built over the years, into a LAMP development framework to make life easier for myself.

    This past year, I've been having fun developing on the CodeIgniter framework. It does many of the things I do in my framework. Coding in CI feels natural because its MVC and ORM feel similar to the MVC and ORM of my framework. So now I'm contemplating migrating a lot of the plugins in my framework over to CI. The pros and cons I can think of for such a project are:

    Pros:
    - benefit from the vast community of CI developers
    - lots of other developers will be familiar with it
    - better documentation

    Cons:
    - I've built a lot of useful plugins against my own framework, and it will take a lot of time to move even just the essential ones
    - at the moment, I still work faster against my own framework than CI, just because I'm more familiar with it
    - even if I did migrate to CI, there will always be newer and better frameworks in the near future, and I'll be contemplating this scenario again

    So my question is the following: perhaps I should leave my old framework as is, and for each new project I receive, make a decision on whether the requirements are best served by developing with CI or with my own framework. Is this the right approach?

  • How to structure applications as multiple projects and name the packages in Java

    - by lostiniceland
    Hello everyone, I would like to know how you set up your projects in Java. For example, in my current work project, a six-year-old J2EE app with approximately 2 million LoC, we have only one project in Eclipse. The package structure is split into tiers and then domains, so it follows guidelines from Sun/Oracle. A huge Ant script builds different jars out of this one source folder. Personally, I think it would be better to have multiple projects, at least one for each tier. Recently I was playing around with a project structure like this:

    - Domainproject (contains only annotated POJOs, needed by all other projects)
    - Datalayer (only persistence)
    - Businesslogic (services)
    - Presenter
    - View

    This way, it should be easier to exchange components, and when using a build tool like Maven I can have everything in a repository, so when working only on the frontend I can get the rest as a dependency on my classpath. Does this make sense to you? Do you use different approaches, and what do they look like?

    Furthermore, I am struggling with how to name my packages/projects correctly. Right now, the above project structure is reflected in the names of the packages, e.g. de.myapp.view, and it continues with some technical subfolders like internal or interfaces. What I am missing here, and I don't know how to do properly, is the distinction between domains. When the project gets bigger, it would be nice to recognise a particular domain but also the technical details, to navigate more easily within the project. This leads to my second question: how do you name your projects and packages?
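
    A sketch of one common answer to the naming half (all names are illustrative only): keep the modules tier-shaped as listed above, but make the packages domain-first, so the domain is visible before the technical role.

        myapp/
            myapp-domain/          de.myapp.billing, de.myapp.customer       (POJOs)
            myapp-datalayer/       de.myapp.billing.persistence, ...
            myapp-businesslogic/   de.myapp.billing.service, ...
            myapp-presenter/       de.myapp.billing.presenter, ...
            myapp-view/            de.myapp.billing.view, ...

    With domain-first names like de.myapp.billing.service, everything about one domain clusters together in the IDE, while the tier still shows up as the last segment.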

  • Internal "Tee" setup

    - by RadlyEel
    I have inherited some really old VC6.0 code that I am upgrading to VS2008 for building a 64-bit app. One required feature, implemented long, long ago, is overriding std::cout so its output goes simultaneously to a console window and to a file. The implementation depended on the then-current VC98 library implementation of ostream and, of course, is now irretrievably broken with VS2008.

    It would be reasonable to accumulate all the output until program termination and then dump it to a file. I got part of the way there by using freopen(), setvbuf(), and ios::sync_with_stdio(), but to my dismay, the internal library does not treat its buffer as a ring buffer; instead, when it flushes to the output device it restarts at the beginning, so every flush wipes out all my accumulated output.

    Converting to a more standard logging function is not desirable, as there are over 1600 usages of "std::cout <<" scattered throughout almost 60 files. I have considered overriding ostream's operator<< function, but I'm not sure that will cover me, since there are global operator<< functions that can't be overridden. (Or can they?) Any ideas on how to accomplish this?
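
    A sketch of the standard-conforming route (not the original VC98 hack): install a custom streambuf behind std::cout that forwards every character to two targets. None of the 1600 "std::cout <<" call sites need to change, and nothing here depends on a particular library implementation.

        #include <iostream>
        #include <fstream>
        #include <streambuf>

        // Forwards every character and every flush to two underlying buffers.
        class teebuf : public std::streambuf
        {
        public:
            teebuf(std::streambuf* a, std::streambuf* b) : a_(a), b_(b) {}
        protected:
            virtual int overflow(int c)
            {
                if (c == EOF) return !EOF;
                const int r1 = a_->sputc(static_cast<char>(c));
                const int r2 = b_->sputc(static_cast<char>(c));
                return (r1 == EOF || r2 == EOF) ? EOF : c;
            }
            virtual int sync()
            {
                const int r1 = a_->pubsync();
                const int r2 = b_->pubsync();
                return (r1 == 0 && r2 == 0) ? 0 : -1;
            }
        private:
            std::streambuf* a_;
            std::streambuf* b_;
        };

        int main()
        {
            std::ofstream log("cout.log");
            teebuf tee(std::cout.rdbuf(), log.rdbuf());
            std::streambuf* old = std::cout.rdbuf(&tee);   // install the tee
            std::cout << "goes to console and file" << std::endl;
            std::cout.rdbuf(old);                          // restore before exit
            return 0;
        }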

  • How to implement web cache: internal fragmentation VS external fragmentation

    - by Summer_More_More_Tea
    Hi there, I came up with this question while playing with the Firefox web cache: with limited disk space (take my configuration as an example, with 50MB as the upper bound), how does a browser cache its responses? I think two approaches could be employed.

    One is to cache the response objects one by one, but this is inefficient and will introduce external fragmentation, so the total cache space may not be fully used. The second is to treat the total space (50MB) as one contiguous file, split into fixed-length slots; incoming response objects are likewise treated as blocks of data with the same length as the slots. We can fill slots until the whole file runs out, then use some replacement algorithm to evict old cached objects. The latter approach of course brings in internal fragmentation, but in my opinion it is easier to implement and maintain than the first strategy.

    But when I look inside Firefox's Cache directory, I find that it (maybe) uses a different method: a lot of variable-length files reside in that directory, and all those files are filled with non-displayable characters. I really want to know what mechanism a commercial browser, e.g. Firefox, employs to implement its web cache. Regards.

  • Explicit localization problem

    - by X-Dev
    When trying to translate the confirmation message to Norwegian, I get the following error:

        Cannot have more than one binding on property 'OnClientClick' on
        'System.Web.UI.WebControls.LinkButton'. Ensure that this property is not
        bound through an implicit expression, for example, using meta:resourcekey.

    I use explicit localization in the following manner:

        <asp:LinkButton ID="lnkMarkInvoiced" runat="server"
            OnClick="lnkMarkInvoiced_OnClick"
            OnClientClick="<%# Resources: lnkMarkInvoicedResource.OnClientClick %>"
            Visible="False" CssClass="stdtext"
            meta:resourcekey="lnkMarkInvoicedResource"></asp:LinkButton>

    Here's the local resource file entry:

        <data name="lnkMarkInvoicedResource.OnClientClick" xml:space="preserve">
            <value>return confirm('Er du sikker?');</value>
        </data>

    If I remove the meta attribute, I get the English text (default). How do I get the Norwegian text to appear without resorting to using the code-behind?

    Update: removing the meta attribute prevents the exception from occurring, but the original problem still exists. I can't get the Norwegian text to show; only the default English text shows.

    Another update: I know this question is getting old, but I still can't get the Norwegian text to display. If anyone has some tips, please post a response.
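
    A sketch of one thing worth checking (an assumption based on standard ASP.NET localization syntax, not a confirmed fix): <%# %> is a data-binding expression that only evaluates when DataBind() is called, whereas explicit localization uses the <%$ Resources: %> expression-builder syntax; combined with meta:resourcekey, the property also gets an implicit binding, which matches the double-binding error. Dropping the meta attribute and switching to the expression builder keeps everything in markup; any other properties that relied on the implicit binding would then need explicit expressions too:

        <asp:LinkButton ID="lnkMarkInvoiced" runat="server"
            OnClick="lnkMarkInvoiced_OnClick"
            OnClientClick="<%$ Resources: lnkMarkInvoicedResource.OnClientClick %>"
            Visible="False" CssClass="stdtext"></asp:LinkButton>

    The Norwegian text would then come from the culture-specific local resource file (e.g. PageName.aspx.nb-NO.resx) when the page culture is set accordingly.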
