Search Results

Search found 25246 results on 1010 pages for 'cache control'.


  • Empty $upstream_http_location variable if response was cached

    - by Ivaldi
    I would like to cache the response of a redirect: cache the request to some site which returns a redirect, and also cache the second request which returns the actual content. So far my config looks like this:

        location = /proxy {
            error_page 301 302 307 = @redir;
            resolver 8.8.8.8;
            proxy_pass $arg_url;
            proxy_intercept_errors on;
            proxy_cache pcache;
            proxy_cache_key $arg_url;
            proxy_cache_valid 200 301 302 307 1d;
            proxy_cache_min_uses 1;
            proxy_ignore_client_abort on;
            proxy_ignore_headers Set-Cookie Expires Cache-Control;
        }

        location @redir {
            resolver 8.8.8.8;
            # we need to assign $upstream_http_location to another var in order to use it with proxy_pass
            set $target $upstream_http_location;
            proxy_pass $target;
            proxy_cache predirects;
            proxy_cache_key $upstream_http_location;
            proxy_cache_valid 200 301 302 307 1d;
            proxy_cache_min_uses 1;
            proxy_ignore_headers Set-Cookie Expires Cache-Control;
        }

    It works for the first request, or without the 30x codes in proxy_cache_valid in the /proxy part, but $target and $upstream_http_location are empty if the response was cached. Is there a nice solution to cache both requests? Thanks!
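
    A likely explanation, for anyone hitting the same wall: $upstream_http_location is only populated when nginx actually talks to the upstream, so on a cache hit in /proxy there is no Location header to copy into $target. One possible direction (a sketch, not a tested fix) is to stop caching the 30x responses in /proxy, so the redirect is always re-fetched and @redir always sees a fresh Location header, while the final content stays cached:

        location = /proxy {
            error_page 301 302 307 = @redir;
            resolver 8.8.8.8;
            proxy_pass $arg_url;
            proxy_intercept_errors on;
            proxy_cache pcache;
            proxy_cache_key $arg_url;
            # cache only final 200 content here; redirects pass through uncached
            proxy_cache_valid 200 1d;
            proxy_ignore_headers Set-Cookie Expires Cache-Control;
        }

    This trades one extra upstream round-trip per redirect for having $upstream_http_location reliably set in @redir.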

    Read the article

  • Set default form textfield value (WebBrowser control / DOM JavaScript)

    - by Khou
    Hi, I would like my application to load a webpage and set a form textfield to a predefined value. Requirements: the application is a Windows Forms app; it uses the WebBrowser control to load a web page; textfield values are defined within the application; when a textfield on the webpage matches one of the application's predefined elements, the predefined fixed value is set and cannot be changed by the end user. Example: if my application defines element "FirstName" equal to value "John", the text field value for element "FirstName" will always equal "John" and cannot be changed by the end user. Below is HTML/JavaScript code that performs this functionality; how do I implement the same thing from a Windows Form, without having to modify the loaded webpage's source code (if possible)?

    HTML:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
        <html>
        <head>
          <title>page title</title>
          <script type="text/javascript" src="demo1.js"></script>
        </head>
        <body onload="def(document.someform, 'name', 'my default name value');">
          <h2 style="color: #8e9182">test form title</h2>
          <form name="someform" id="someform_frm" action="#">
            <table cellspacing="1">
              <tr>
                <td><label for="name">NameX: </label></td>
                <td><input type="text" size="30" maxlength="155" name="name"
                           onchange="def(document.someform, 'name', 'my default name value');"></td>
              </tr>
              <tr>
                <td><label for="name2">NameY: </label></td>
                <td><input type="text" size="30" maxlength="155" name="name2"></td>
              </tr>
              <tr>
                <td colspan="2"><input type="button" name="submit" value="Submit"
                                       onclick="showFormData(this.form);"></td>
              </tr>
            </table>
          </form>
        </body>
        </html>

    JavaScript:

        function def(oForm, element_name, def_txt) {
          oForm.elements[element_name].value = def_txt;
        }
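
    A hedged sketch of the WinForms side of this, using the WebBrowser control's managed DOM API (the control name webBrowser1, the URL, and the FirstName/John pair are this example's assumptions, not part of the question): handle DocumentCompleted, set the locked value, and re-assert it whenever the user edits the field.

        // Sketch only: assumes a Form containing a WebBrowser control named webBrowser1.
        using System.Collections.Generic;
        using System.Windows.Forms;

        public partial class MainForm : Form
        {
            // field name -> value pairs the application wants to pin
            private readonly Dictionary<string, string> lockedFields =
                new Dictionary<string, string> { { "FirstName", "John" } };

            public MainForm()
            {
                InitializeComponent();
                webBrowser1.DocumentCompleted += OnDocumentCompleted;
                webBrowser1.Navigate("http://example.com/form.html"); // hypothetical URL
            }

            private void OnDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
            {
                foreach (HtmlElement input in webBrowser1.Document.GetElementsByTagName("input"))
                {
                    string name = input.GetAttribute("name");
                    if (!lockedFields.ContainsKey(name))
                        continue;

                    input.SetAttribute("value", lockedFields[name]);
                    // re-assert the pinned value if the user manages to edit it
                    input.LostFocus += (s, args) =>
                        input.SetAttribute("value", lockedFields[name]);
                }
            }
        }

    Marking the element read-only (input.SetAttribute("readOnly", "true")) is another way to stop edits, if a greyed-out field is acceptable.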

    Read the article

  • Release management with a distributed version control system

    - by See Sharp Cheddar
    We're considering a switch from SVN to a distributed VCS at my workplace. I'm familiar with all the reasons for wanting to use a DVCS for day-to-day development: local version control, easier branching and merging, etc., but I haven't seen much that's compelling in terms of managing software releases. Here's our release process:

    1. Discover what changes are available for merging.
    2. Run a query to find the defects/tickets associated with these changes.
    3. Filter out changes associated with "open" tickets. In our environment, tickets must be in a closed state in order to be merged into a release branch.
    4. Filter out changes we don't want in the release branch. We are very conservative when it comes to merging changes; if a change isn't absolutely necessary, it doesn't get merged.
    5. Merge available changes, preferably in chronological order. We group changes together if they're associated with the same ticket.
    6. Block unwanted changes from the release branch (svnmerge block) so we don't have to deal with them again.

    Sometimes we can be juggling 3-5 different milestones at a time. Some milestones have very different constraints, and the block list can get quite long. I've been messing around with Git, Mercurial and Plastic, and as far as I can tell none of them addresses this model very well. It seems like they would work very well when you have only one product you're releasing, but I can't imagine using them for juggling multiple, very different products from the same codebase. For example, cherry-picking seems to be an afterthought in Mercurial (you have to use the 'transplant' command), and after you cherry-pick a change into a branch it still shows up as an available integration. Cherry-picking breaks the Mercurial way of working. DVCS seems to be better suited to feature branches: there's no need for cherry-picking if you merge directly from a feature branch to trunk and the release branch. But who wants to do all that merging all the time? And how do you query for what's available to merge? And how do you make sure all the changes in a feature branch belong together? It sounds like total chaos. I'm torn, because the coder in me wants a DVCS for day-to-day work. I really want it. But I fear the day when I have to put on the release-manager hat and sort out what needs to be merged and what doesn't. I want to write code; I don't want to be a merge monkey.
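
    For what it's worth, the "query what's available, then pull vetted changes" part of this process does map onto stock Git commands; a minimal sketch (branch names are hypothetical):

        # list changes on main that have not yet been merged to the release branch
        git cherry -v release/2.0 main

        # or the same thing with full log output
        git log --oneline --no-merges release/2.0..main

        # bring a single vetted change into the release branch
        git checkout release/2.0
        git cherry-pick <commit-sha>

    What Git lacks out of the box is an equivalent of the svnmerge block list, so that "never merge this" bookkeeping has to live outside the tool.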

    Read the article

  • How to Add Control Panel to “My Computer” in Windows 7 or Vista

    - by The Geek
    Back in the Windows XP days, you could easily add Control Panel to My Computer with a simple checkbox in the folder view settings. Windows 7 and Vista don't make this quite as easy, but there's still a way to get it back. To make this tweak we'll be doing a quick registry hack, but there's a downloadable version provided as well.

    Manual Registry Tweak to Add Control Panel

    Open up regedit.exe through the start menu search or run box, and then browse down to the following key:

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace

    Once you're there, right-click and create a new key. If you want the regular Control Panel view, with the categories, use one GUID as the name of the key; if you want the icon view instead, use the other. Here they are:

        Category View: {26EE0668-A00A-44D7-9371-BEB064C98683}
        Icon View:     {21EC2020-3AEA-1069-A2DD-08002B30309D}

    Once you're done, switch over to the Computer view and hit the F5 key to refresh the panel; the new icon should pop up in the list, and clicking it takes you to Control Panel. If you didn't know how to change the view before, you can use the drop-down box on the right-hand side to switch between Category and icon view.

    Downloadable Registry Hack

    Rather than deal with manual registry editing, you can simply download the file, extract it, and then double-click either AddCategoryControlPanel.reg to add the Category-view icon or AddIconControlPanel.reg to add the other one. An uninstall script is provided for each. Download the ControlPanelMyComputer Registry Hack from howtogeek.com.
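
    In case the download link ever goes stale: the hack presumably reduces to creating that one key, which a short .reg file can do. A sketch of the Category-view variant, built from the key path and GUID quoted above (the removal file just prefixes the key with a minus sign):

        Windows Registry Editor Version 5.00

        ; create the Control Panel (category view) entry under My Computer
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace\{26EE0668-A00A-44D7-9371-BEB064C98683}]

        ; to undo, import a file containing this line instead:
        ; [-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace\{26EE0668-A00A-44D7-9371-BEB064C98683}]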

    Read the article

  • ASP.NET Hosting :: ASP.NET File Upload Control

    - by mbridge
    The ASP.NET FileUpload control allows a user to browse and upload files to the web server. From the developer's perspective, it is as simple as dragging and dropping the FileUpload control onto the aspx page. An extra control, such as a Button, is needed to actually save the file:

        <asp:FileUpload ID="FileUpload1" runat="server" />
        <asp:Button ID="B1" runat="server" Text="Save" OnClick="B1_Click" />

    By default, the FileUpload control allows a maximum of 4 MB to be uploaded, and the execution timeout is 110 seconds. These properties can be changed from within the web.config file's httpRuntime section. The maxRequestLength property determines the maximum file size that can be uploaded (in kilobytes), and the executionTimeout property determines the maximum time for execution (in seconds):

        <httpRuntime maxRequestLength="8192" executionTimeout="220" />

    From code-behind, the MIME type, size, file name and extension of the uploaded file can all be obtained. The maximum file size that can be uploaded can be read using the System.Web.Configuration.HttpRuntimeSection class. Files can alternatively be saved using the System.Web.HttpFileCollection class; this collection is populated from the Request.Files property and contains HttpPostedFile objects, each holding a reference to one uploaded file.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.IO;
        using System.Configuration;
        using System.Web.Configuration;

        namespace WebApplication1
        {
            public partial class WebControls : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                }

                // Using the FileUpload control to upload and save files
                protected void B1_Click(object sender, EventArgs e)
                {
                    if (FileUpload1.HasFile && FileUpload1.PostedFile.ContentLength > 0)
                    {
                        // MIME type of the uploaded file
                        string mimeType = FileUpload1.PostedFile.ContentType;

                        // size of the uploaded file, in bytes
                        int size = FileUpload1.PostedFile.ContentLength;

                        // extension of the uploaded file
                        string extension = Path.GetExtension(FileUpload1.FileName);

                        // save the file
                        string path = Server.MapPath("path");
                        FileUpload1.SaveAs(path + FileUpload1.FileName);
                    }

                    // maximum file size allowed: read the actual configured section
                    // rather than constructing a new one with default values
                    HttpRuntimeSection rt =
                        (HttpRuntimeSection)WebConfigurationManager.GetSection("system.web/httpRuntime");
                    int length = rt.MaxRequestLength; // in KB

                    // execution timeout
                    TimeSpan ts = rt.ExecutionTimeout;
                    double seconds = ts.TotalSeconds;
                }

                // Using Request.Files to save files
                private void AltSaveFile()
                {
                    HttpFileCollection coll = Request.Files;
                    for (int i = 0; i < coll.Count; i++)
                    {
                        HttpPostedFile file = coll[i];
                        if (file.ContentLength > 0)
                        {
                            // do something with the file, e.g. file.SaveAs(...)
                        }
                    }
                }
            }
        }

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c Release 3 (12.1.0.3)

    - by Ankit G
    Delighted to announce the GA of EM Cloud Control Release 3 on all supported platforms. This release includes a new 12.1.0.3 version of the platform (OMS and Agent), along with revised new versions of several plug-ins and metadata plug-ins (including a brand new metadata plug-in for Oracle Virtual Networking). This release marks yet another major and significant milestone for Enterprise Manager 12c Cloud Control product releases. The new plug-in versions that ship alongside Oracle Enterprise Manager Cloud Control 12c Release 3 (12.1.0.3) have a dependency on the 12.1.0.3 platform, and customers need to be on at least the 12.1.0.3 platform (OMS/Agent) version of the product before being able to deploy or use these plug-in versions. (In other words, the new plug-in versions cannot be deployed unless Oracle Enterprise Manager Cloud Control 12c Release 3 (12.1.0.3) is installed or upgraded to.) The release includes tons of new features, along with several stability and performance bug fixes, and is available for download for all platforms from OTN. Installation/upgrade paths: EM customers can do a fresh installation using Oracle Enterprise Manager Cloud Control 12c Release 3 (12.1.0.3) and will get the latest version of the platform, along with all the latest versions of plug-ins and metadata plug-ins, out of the box. EM customers who are on Release 1 (12.1.0.1+BP1) or Release 2 (12.1.0.2), or on the older 11g and 10.2.0.5 releases, can use the Oracle Enterprise Manager Cloud Control 12c Release 3 bits to upgrade directly to the latest Release 3, and the plug-ins will be automatically upgraded to the latest versions. The Enterprise Manager certification matrix is also now available on My Oracle Support.

    Read the article

  • How does a web server/the http protocol handle version control and compression?

    - by Sune Rasmussen
    When a client browser requests a file from the web server, I know that some kind of check is performed, because the files needed to serve the web page may already be cached by the web browser. So if a file exists in the cache, no file is sent; but if the file on the server has changed since it was cached in the browser, the file is sent and updated anyhow. Then, if you have compression like gzipping enabled on the server, the files to be delivered to the client must be gzipped on the way, requiring some amount of server-side processing. But how is this managed? The logical approach seems to me that the web server should have a cache as well, containing the newest version of all files that have been requested within a certain time span, and thus a compressed version of these files, so that compression would not have to be done each time a file is requested. And also, how are files actually requested? Does the browser ask for each file as it encounters one in the HTML code that is not stored in the local cache, or does it add up all the files that are needed and ask for the whole bunch at once? But that's only guessing from a programming point of view, and I don't really know. If the answers vary a lot among web server systems, I'm primarily interested in Apache, but other answers are appreciated too.
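
    For anyone wondering what that "check" looks like on the wire, it's a conditional request. A typical revalidate-and-compress exchange (header values invented for illustration) runs roughly like this:

        GET /styles/main.css HTTP/1.1
        Host: example.com
        Accept-Encoding: gzip, deflate
        If-Modified-Since: Tue, 04 May 2010 10:00:00 GMT
        If-None-Match: "abc123"

        HTTP/1.1 304 Not Modified          <- cached copy still valid, no body sent

        HTTP/1.1 200 OK                    <- file changed, compressed body follows
        Content-Encoding: gzip
        ETag: "def456"
        Last-Modified: Mon, 10 May 2010 08:30:00 GMT

    On the Apache side, mod_deflate compresses on the fly for each response by default; pairing it with mod_cache is the usual way to avoid recompressing frequently requested files.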

    Read the article

  • How to Use Offline Files in Windows to Cache Your Networked Files Offline

    - by Taylor Gibb
    The problem with storing all your files on a file server or networked machine is this: when you leave the network, how are you going to access your files? Instead of using a VPN or Dropbox, you can use the Offline Files feature built into Windows. Note: you should probably not be using this guide to make your 2-terabyte movie collection available offline. While it may work, it is not recommended, because the Offline Files feature isn't made for storing massive amounts of data offline.

    Read the article

  • Microsoft gives you your cache back

    - by Dave Ballantyne
    The system works, and it's called Microsoft Connect. Who would have thought it? :) Following on from my previous blog post, MicroSoft – Follow best practices, the follow-up on the Connect item stated that changes had been made in 2008. I genuinely thought that a change would take an age to trickle through to the customer. But after firing up 2008 R2 RTM and examining the SQL Agent traffic with Profiler, where before I would see non-parameterized SQL, I now see RPC calls. Excellent, I get my cache back.

    Read the article

  • .bash_history and .cache

    - by John Isaacks
    I have a user whose home directory is a Mercurial repository. Mercurial notified me that there were 2 new unversioned files in my repository: .bash_history and .cache/motd.legal-displayed. I assume .bash_history is the history of bash commands for my user. I have no idea what the other is. I don't want these files to be versioned by Mercurial. Are they safe to just delete, or will they come back, or mess something up? Can they be moved somewhere else? Or do I have to add them to my .hgignore file?
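
    For the record, both files are harmless: .bash_history is bash's command history, and .cache/motd.legal-displayed is just a stamp file noting that the message-of-the-day legal notice has been shown at login. Deleting them works, but they will be recreated, so the usual fix is a couple of .hgignore entries along these lines:

        syntax: glob
        .bash_history
        .cache/*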

    Read the article

  • Random Cache Expiry

    - by mahemoff
    I've been experimenting with random cache expiry times to avoid situations where an individual request forces multiple things to update at once. For example, a web page might include five different components. If each is set to time out in 30 minutes, the user will have a long wait time every 30 minutes. So instead, you set them all to a random time between 15 and 45 minutes, making it likely that at most one component will reload for any given page load. I'm trying to find any research or guidelines on this topic, e.g. optimal variance parameters. I do recall seeing one article about how Google (?) uses this technique, but I can't locate it, and there doesn't seem to be much written about the topic.
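
    A minimal sketch of the technique in Java (the 30-minute base and the +/-50% spread are just this example's numbers):

        import java.util.concurrent.ThreadLocalRandom;
        import java.util.concurrent.TimeUnit;

        public final class JitteredTtl {
            // TTL drawn uniformly from [base/2, 3*base/2), so components cached at
            // the same moment expire at scattered times instead of all together.
            public static long jitteredTtlSeconds(long baseSeconds) {
                long lo = baseSeconds / 2;        // e.g. 15 min for a 30 min base
                long hi = baseSeconds * 3 / 2;    // e.g. 45 min
                return ThreadLocalRandom.current().nextLong(lo, hi);
            }

            public static void main(String[] args) {
                long base = TimeUnit.MINUTES.toSeconds(30);
                System.out.println("expire in " + jitteredTtlSeconds(base) + " s");
            }
        }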

    Read the article

  • Implementing cache system in Java Web Application

    - by TGM
    I have worked with JPA (the EclipseLink implementation) and Hibernate. As I understand it, these two have great caching systems. I am interested in caching in a web application, and in order to better understand the process I'm trying to implement something on my own. Sadly, I cannot find any in-depth documentation on this subject. I'm interested in things like high scalability, sharing memory across different machines, and other important theoretical matters. Is there any tutorial or open project I could check out? Thank you! LE: I want to cache DB information in POJOs, just like JPA or EclipseLink does.
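
    As a starting point for rolling your own, a toy in-process cache is only a few lines of Java; here is a bounded LRU via LinkedHashMap (the capacity of 1000 is arbitrary, and this deliberately ignores the hard parts you mention, since cross-machine memory sharing is exactly where products like Ehcache or memcached come in):

        import java.util.Collections;
        import java.util.LinkedHashMap;
        import java.util.Map;

        // Toy LRU cache: evicts the least-recently-accessed entry once
        // the size exceeds maxEntries.
        public class LruCache<K, V> extends LinkedHashMap<K, V> {
            private final int maxEntries;

            public LruCache(int maxEntries) {
                super(16, 0.75f, true);   // accessOrder = true gives LRU ordering
                this.maxEntries = maxEntries;
            }

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }

            public static void main(String[] args) {
                Map<String, Object> cache =
                        Collections.synchronizedMap(new LruCache<String, Object>(1000));
                cache.put("user:42", new Object()); // would be a POJO loaded from the DB
                System.out.println(cache.containsKey("user:42"));
            }
        }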

    Read the article

  • SQL Server source control from Visual Studio

    - by David Atkinson
    Developers have long had to context-switch between two IDEs: Visual Studio for application code development and SQL Server Management Studio for database development. While this is accepted, especially given the richness of the database development feature set in SSMS, loading a separate tool can seem a little like overkill. This is where SQL Connect comes in: an add-in to Visual Studio that provides a connected development experience for the SQL Server developer. Connected database development involves modifying a development sandbox database, as opposed to offline development, where SQL text files are modified independently of the database. One of the main complaints about Data Dude (VS DBPro) is that it enforces the offline approach; this gripe is what SQL Connect addresses. If you don't already use SQL Source Control, you can get up and running with SQL Connect by adding a new project to your Visual Studio solution, then choosing your existing development database, and you're ready to go. If you already use SQL Source Control, you will need to link SQL Connect to your existing database scripts folder repository so that SQL Connect and SQL Source Control can be used collaboratively (note that SQL Source Control v3.0.9.18 or later is required). Locate the repository (this can be found in the Setup tab in SQL Source Control) and create a working folder for it (here I'm using TortoiseSVN). Back in Visual Studio, locate the SQL Connect panel (in the View menu if it hasn't auto-loaded) and select Import SQL Source Control project. Locate your working folder and click Import. This creates a Red Gate database project under your solution. From here you can modify your development database and manage your changes in source control. To associate your development database with the project, right-click on the project node, select Properties, set the database and Save. Now you're ready to make some changes. Locate the object you'd like to modify in the Solution Explorer and double-click it to invoke a query window or table designer. You also have the option to edit the creation SQL directly using Edit SQL File in Project. Keeping the development database and the Visual Studio project in sync is as easy as clicking a button. Once you've made your change, you can use whichever mechanism you choose to commit to source control; here I'm using the free open-source AnkhSVN to integrate Subversion with Visual Studio. Maintaining your database in a Visual Studio solution means that you can commit database changes and application code changes in the same changeset. This is desirable if you have continuous integration set up, as you want to ensure that all files related to a change are committed atomically, so you avoid an interim "broken build". More discussion of SQL Connect and its benefits can be found in the following article on Simple Talk: No More Disconnected SQL Development in Visual Studio. The SQL Connect project team is currently assessing the backlog for the next development effort, and they'd appreciate your feature suggestions, as well as your votes, on their suggestions site: http://redgate.uservoice.com/forums/140800-sql-connect-for-visual-studio- A 28-day free trial of SQL Connect is available from the Red Gate website.

    Read the article

  • Sweet and Sour Source Control

    - by Tony Davis
    Most database developers don't use Source Control. A recent anonymous poll on SQL Server Central asked its readers "Which Version Control system do you currently use to store your database scripts?" The winner, with almost 30% of the vote, was... none: "We don't use source control for database scripts". In second place, with almost 28% of the vote, was Microsoft's VSS. VSS? Given its reputation for being buggy, unstable and lacking most of the basic features required of a proper source control system, answering VSS is really just another way of saying "I don't use Source Control". At first glance, it's a surprising thought. You wonder how database developers can work in a team and find out what changed when the system worked before but is now broken; work out what happened to changes that now seem to have vanished; roll back a mistake quickly so that the rest of the team has a functioning build; or find instantly whether a suspect change has been deployed to production. Unfortunately, the survey didn't ask about the scale of the database development, and correlate the two questions. If there is only one database developer within a schema, who has an automated approach to regular generation of build scripts, then the need for a formal source control system is questionable. After all, a database stores far more about its metadata than a traditional compiled application does. However, what is meat for a small development is poison for a team-based development. Here, we need a form of source control that can reconcile simultaneous changes, store the history of changes, derive versions and builds, and cope with forks and merges. The problem comes when one borrows a solution that was designed for conventional programming. A database is not thought of as a "file", but as a vast, interdependent and intricate matrix of tables, indexes, constraints, triggers, enumerations, static data and so on, all subtly interconnected. It is an awkward fit. Subversion, with its support for merges and forks and its tolerance of different work practices, can be made to work well if used carefully. It has a standards-based architecture that allows it to be used on all platforms, such as Windows, Mac and Linux. In the words of Erland Sommarskog, developers should "just do it". What's in a database is akin to a "binary file", and the developer must work only from the file. You check out the file, edit it, and save it to disk to compile it. Dependencies are validated at this point, and if you've broken anything (e.g. you renamed a column and broke all the objects that reference it), you'll find out about it right away and be forced to fix it. Nevertheless, for many this is an alien way of working with SQL Server. Subversion is the powerhouse, not the GUI. It doesn't work seamlessly with your existing IDE, and that usually means SSMS. So the question becomes more subtle: would developers be less reluctant to use a fully featured source (revision) control system for team database development if they had a turn-key, reliable system that fitted in with their existing work practices? I'd love to hear what you think. Cheers, Tony.

    Read the article

  • Redirect WinForms web browser Control pop ups to another WinForms web browser control with form data

    - by Scott Chantry
    I have a C# web browser control, and one of the pages it displays has a form. When the form is submitted, it posts data to a new window in IE. I want to catch that data and forward it to another C# web browser control in my application. So when the JavaScript function window.open is called, I don't want it to open an IE window; I want the page to open in the second browser control instead.
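
    A hedged sketch of the usual starting point: cancel the WebBrowser control's NewWindow event and navigate the second control yourself (webBrowser1 and webBrowser2 are this example's assumed control names). Note the caveats in the comments; in particular, this re-issues a GET, so genuinely carrying POSTed form data across would need the underlying ActiveX NewWindow3 event via SHDocVw interop.

        // Sketch only: assumes two WebBrowser controls, webBrowser1 and webBrowser2,
        // on the same form.
        webBrowser1.NewWindow += (sender, e) =>
        {
            e.Cancel = true; // suppress the external IE window

            // The managed NewWindow event does not expose the target URL; a common
            // workaround is to read it from the status bar text just before the
            // popup opens. This performs a fresh GET, so POSTed form data is lost.
            webBrowser2.Navigate(webBrowser1.StatusText);
        };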

    Read the article

  • Control flex application from embedded Flash control

    - by aip.cd.aish
    I have a Flex application and have embedded a Flash (SWF) file into it using <mx:SWFLoader>. There is an "Exit" button in the Flash file. I want to be able to handle that button's click event in the Flex application, so that when the button in the Flash file is clicked, I can perform an action in the parent Flex application. How can I do this? Thanks!

    Read the article

  • How do you combine "Revision Control" with "WorkFlow" for R?

    - by Tal Galili
    Hello all, I remember coming across R users writing that they use "revision control" (e.g. "source control"), and I am curious to know: how do you combine revision control with your statistical analysis workflow? Two (very) interesting discussions talk about how to deal with the workflow, but neither of them refers to the revision control element: http://stackoverflow.com/questions/1266279/how-to-organize-large-r-programs http://stackoverflow.com/questions/1429907/workflow-for-statistical-analysis-and-report-writing A Long Update To The Question: Following some of the people's answers, and Dirk's question in the comment, I would like to direct my question a bit more. After reading the Wiki article about revision control (which I was previously not familiar with), it was clear to me that when using revision control, what one does is build a development structure of one's code. This structure either leads to a "final product" or to several branches. When building something like, let's say, a website, there is usually one end product you work towards (the website), with some prototypes along the way. But when doing a statistical analysis, the work is (to my view) different. Sometimes you know where you want to get to, but more often you explore: explore cleaning the dataset, explore different methods for statistical analysis, ask various questions of your data (and I am writing this knowing how Frank Harrell and other experienced statisticians feel about data dredging). That is why the workflow question in statistical programming is (in my view) a serious and deep question, raising many issues. The simpler ones are technical: Which revision control software do you use (and why)? Which IDE do you use (and why)? The more interesting questions are about work process: How do you structure your files? What do you keep as a separate file and what as a revision? Or, asking in a different way: what should be a "branch" and what should be a "sub-project" in your code? For example: when starting to explore your data, should a plot be created and then erased because it didn't lead anywhere (but kept as a revision), or should there be a backup file of that path? How you solve this tension was my initial curiosity. The second question is "what might I be missing?". What rules (of thumb) should one follow so as to avoid common pitfalls doing statistical programming with version control? In my intuition, I feel that statistical programming is inherently different from software development (I am writing this without being a real expert in statistical programming, and even less so in software development). That's why I am unsure which of the lessons I have read here about version control would be applicable. Thanks a lot, Tal

    Read the article

  • What is a good generic sibling control Javascript communication strategy?

    - by James
    I'm building a webpage that is composed of several controls, and I'm trying to come up with an effective, somewhat generic client-side sibling-control communication model. One of the controls is the menu control. Whenever an item is clicked in it, I want to expose a custom client-side event that other controls can subscribe to, so that I can achieve a loosely coupled sibling-control communication model. To that end I've created a simple JavaScript event collection class (code below) that acts as a hub for control event registration and event subscription. This code certainly gets the job done, but my question is: is there a better, more elegant way to do this in terms of best practices or tools, or is this just a fool's errand?

        /// Event collection object - acts as the hub for control communication.
        function ClientEventCollection() {
            this.ClientEvents = {};
            this.RegisterEvent = _RegisterEvent;
            this.AttachToEvent = _AttachToEvent;
            this.FireEvent = _FireEvent;

            function _RegisterEvent(eventKey) {
                if (!this.ClientEvents[eventKey])
                    this.ClientEvents[eventKey] = [];
            }

            function _AttachToEvent(eventKey, handlerFunc) {
                if (this.ClientEvents[eventKey])
                    this.ClientEvents[eventKey][this.ClientEvents[eventKey].length] = handlerFunc;
            }

            function _FireEvent(eventKey, triggerId, contextData) {
                if (this.ClientEvents[eventKey]) {
                    for (var i = 0; i < this.ClientEvents[eventKey].length; i++) {
                        var fn = this.ClientEvents[eventKey][i];
                        if (fn)
                            fn(triggerId, contextData);
                    }
                }
            }
        }

        // load new collection instance.
        var myClientEvents = new ClientEventCollection();

        // register events specific to the control that owns them;
        // these will be emitted by each respective control.
        myClientEvents.RegisterEvent("menu-item-clicked");

    Here is the part where the code above is consumed by source and subscriber controls.

        // menu control
        $(document).ready(function() {
            $(".menu > a").click(function(event) {
                //event.preventDefault();
                myClientEvents.FireEvent("menu-item-clicked", $(this).attr("id"), null);
            });
        });

        <div style="float: left;" class="menu">
            <a id="1" href="#">Menu Item1</a><br />
            <a id="2" href="#">Menu Item2</a><br />
            <a id="3" href="#">Menu Item3</a><br />
            <a id="4" href="#">Menu Item4</a><br />
        </div>

        // event subscriber control
        $(document).ready(function() {
            myClientEvents.AttachToEvent("menu-item-clicked", menuItemChanged);
            myClientEvents.AttachToEvent("menu-item-clicked", menuItemChanged2);
            myClientEvents.AttachToEvent("menu-item-clicked", menuItemChanged3);
        });

        function menuItemChanged(id, contextData) {
            alert('menuItemChanged ' + id);
        }
        function menuItemChanged2(id, contextData) {
            alert('menuItemChanged2 ' + id);
        }
        function menuItemChanged3(id, contextData) {
            alert('menuItemChanged3 ' + id);
        }

    Read the article

  • DTracing TCP congestion control

    - by user12820842
    In a previous post, I showed how we can use DTrace to probe TCP receive and send window events. TCP receive and send windows are in effect both about flow-controlling how much data can be received: the receive window reflects how much data the local TCP is prepared to receive, while the send window simply reflects the size of the receive window of the peer TCP. Both then represent flow control as imposed by the receiver. However, consider that without the sender imposing flow control, and with a slow link to a peer, TCP will simply fill up its window with sent segments. With multiple TCP implementations filling their peer TCPs' receive windows in this manner, busy intermediate routers may drop some of these segments, leading to timeout and retransmission, which may again lead to drops. This is termed congestion, and TCP has multiple congestion control strategies. We can see that in this example we need some way of adjusting how much data we send depending on how quickly we receive acknowledgement: if we get ACKs quickly, we can safely send more segments, but if acknowledgements come slowly, we should proceed with more caution. More generally, we need to implement flow control on the send side also.

    Slow Start and Congestion Avoidance

    From RFC 2581, let's examine the relevant variables: "The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK). Another state variable, the slow start threshold (ssthresh), is used to determine whether the slow start or congestion avoidance algorithm is used to control data transmission." Slow start is used to probe the network's ability to handle transmission bursts, both when a connection is first created and when retransmission timers fire. The latter case is important, as the fact that we have effectively lost TCP data acts as a motivator for re-probing how much data the network can handle from the sending TCP. The congestion window (cwnd) is initialized to a relatively small value, generally a low multiple of the sending maximum segment size. When slow start kicks in, we will only send that number of bytes before waiting for acknowledgement. When acknowledgements are received, the congestion window is increased in size until cwnd reaches the slow start threshold (ssthresh) value. For most congestion control algorithms the window increases exponentially under slow start, assuming we receive acknowledgements: we send 1 segment, receive an ACK, increase the cwnd by 1 MSS to 2*MSS, send 2 segments, receive 2 ACKs, increase the cwnd by 2*MSS to 4*MSS, send 4 segments, etc. When the congestion window exceeds the slow start threshold, congestion avoidance is used instead of slow start. During congestion avoidance, the congestion window is generally updated by one MSS for each round-trip time, as opposed to each ACK, so cwnd growth is linear instead of exponential (we may receive multiple ACKs within a single RTT). This continues until congestion is detected. If a retransmit timer fires, congestion is assumed and the ssthresh value is reset to a fraction of the number of bytes outstanding (unacknowledged) in the network. At the same time, the congestion window is reset to a single maximum segment size. Thus we initiate slow start until we start receiving acknowledgements again, at which point we can eventually flip over to congestion avoidance when cwnd > ssthresh.

    Congestion control algorithms differ most in how they handle the other indication of congestion: duplicate ACKs. A duplicate ACK is a strong indication that data has been lost, since it often comes from a receiver explicitly asking for a retransmission. In some cases a duplicate ACK may be generated at the receiver as a result of packets arriving out of order, so it is sensible to wait for multiple duplicate ACKs before assuming packet loss rather than out-of-order delivery. This is termed fast retransmit (i.e. retransmit without waiting for the retransmission timer to expire). Note that on Oracle Solaris 11, the congestion control method used can be customized. See here for more details. In general, 3 or more duplicate ACKs indicate packet loss and should trigger fast retransmit. It's best not to revert to slow start in this case, as the fact that the receiver knew it was missing data suggests it has received data with a higher sequence number, so we know traffic is still flowing. Falling back to slow start would be excessive, so fast recovery is used instead.

    Observing slow start and congestion avoidance

    The following script counts TCP segments sent under slow start (cwnd < ssthresh) and under congestion avoidance (cwnd > ssthresh):

        #!/usr/sbin/dtrace -s

        #pragma D option quiet

        tcp:::connect-request
        / start[args[1]->cs_cid] == 0 /
        {
                start[args[1]->cs_cid] = 1;
        }

        tcp:::send
        / start[args[1]->cs_cid] == 1 &&
          args[3]->tcps_cwnd < args[3]->tcps_cwnd_ssthresh /
        {
                @c["Slow start", args[2]->ip_daddr, args[4]->tcp_dport] = count();
        }

        tcp:::send
        / start[args[1]->cs_cid] == 1 &&
          args[3]->tcps_cwnd > args[3]->tcps_cwnd_ssthresh /
        {
                @c["Congestion avoidance", args[2]->ip_daddr, args[4]->tcp_dport] = count();
        }

    As we can see, the script only works on connections initiated after it is started (using the start[] associative array, indexed by connection ID, to mark new connections with start[cid] = 1). From there we simply differentiate send events where cwnd < ssthresh (slow start) from those where cwnd > ssthresh (congestion avoidance). Here's the output taken when I accessed a YouTube video (where rport is 80) and from an FTP session where I put a large file onto a remote system:

        # dtrace -s tcp_slow_start.d
        ^C
        ALGORITHM             RADDR           RPORT  #SEG
        Slow start            10.153.125.222     20     6
        Slow start            138.3.237.7        80    14
        Slow start            10.153.125.222     21    18
        Congestion avoidance  10.153.125.222     20  1164

    We see that in the case of the YouTube video, slow start was exclusively used; most of the segments we sent in that case were likely ACKs. Compare this case, where 14 segments were sent using slow start, to the FTP case, where only 6 segments were sent on the data connection before we switched to congestion avoidance for 1164 segments. The FTP data on port 20 was predominantly sent with congestion avoidance in operation, while the FTP control session on port 21 relied exclusively on slow start. For the default congestion control algorithm on Solaris 11, "newreno", slow start will increase the cwnd by 1 MSS for every acknowledgement received, and congestion avoidance by 1 MSS for each RTT. Different pluggable congestion control algorithms operate slightly differently: for example, "highspeed" will update the slow start cwnd by the number of bytes ACKed rather than by the MSS. And to finish, here's a neat one-liner to visually display the distribution of congestion window values for all TCP connections to a given remote port using a quantization. In this example, only port 80 is in use and we see the majority of cwnd values for that port are in the 4096-8191 byte range:

        # dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd); }'
        dtrace: description 'tcp:::send ' matched 10 probes
        ^C

               80
                  value  ------------- Distribution ------------- count
                     -1 |                                         0
                      0 |@@@@@@                                   5
                      1 |                                         0
                      2 |                                         0
                      4 |                                         0
                      8 |                                         0
                     16 |                                         0
                     32 |                                         0
                     64 |                                         0
                    128 |                                         0
                    256 |                                         0
                    512 |                                         0
                   1024 |                                         0
                   2048 |@@@@@@@@@                                8
                   4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@               23
                   8192 |                                         0

    Read the article

  • How does this ajax call persist DOM changes in the browser cache?

    - by Greg
    For the purpose of the question I need to create a simple fictitious scenario. I have the following trivial page with one link, call it page A:

        <a class="red-anchor" onclick="change_color(event);" href="http://mysite.com/b/">B</a>

    With the associated JavaScript function:

        function change_color(e) {
            var event = e || window.event;
            var link = event.target;
            link.className = "green-anchor";
        }

    And I have the appropriate CSS to make the anchor red or green based on the class name. This is working: when I click the anchor, it changes color from red to green, which is briefly visible before the browser loads page B. But if I then use the BACK button to return to page A, I get different behavior in different browsers. In Safari, the anchor is still green (the desired behavior); in Firefox it reverts to red. I imagine that Safari is somehow updating its cached version of the page, whereas Firefox isn't. So my first question is: is there any way to get FF to update the cached page, or is something else happening here? Secondly: I have a different implementation that uses an Ajax call. In this version I set the class of the anchor using a session variable, something like:

        <a class="<?php echo $_SESSION["color"]; ?>" ...[snip]... >B</a>

    And the JavaScript function makes an additional Ajax call that changes the "color" session variable. In this case both Safari and Firefox work as expected: when going back from B to A, the color is still green. But I can't for the life of me figure out why it should be different from the non-Ajax case. I have tried many different permutations, and for it to work on FF the "color" session variable MUST change (i.e. the Ajax call itself is not somehow reloading the cache). But on coming BACK, the page is being reloaded from the cache (verified in Firebug), so how is the page even accessing this session variable if it isn't reprocessing the page and running that fragment of PHP in the anchor? I figure there must be something fundamental here that I am not understanding. Any insight would be much appreciated.

    Read the article

  • What's a good 2nd level cache for JEE applications?

    - by Hank
    Can anyone recommend a good 2nd-level object caching solution for JEE 6 applications, and give the background to your recommendation? I'm using JPA 2.0 as the persistence provider. I am particularly worried about having to run the cache client as a single-threaded / singleton bean. Is that the case? If so, is that an issue? I've had good experience using memcached from a PHP webapp, but PHP is of course single-threaded, so that was never an issue...
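
    Since JPA 2.0 is in play here, it's worth noting that the spec itself standardizes second-level cache control, independent of which provider backs it; a minimal sketch (the persistence-unit name and entity are invented for illustration):

        <!-- persistence.xml: opt selected entities into the shared (2nd-level) cache -->
        <persistence-unit name="myPU">
            <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
        </persistence-unit>

        // Entities marked @Cacheable(true) become eligible for the provider's shared cache
        import javax.persistence.Cacheable;
        import javax.persistence.Entity;
        import javax.persistence.Id;

        @Entity
        @Cacheable(true)
        public class Country {
            @Id
            private String code;
            private String name;
        }

    The javax.persistence.Cache interface, reachable via EntityManagerFactory.getCache(), then gives you programmatic contains/evict operations, again without tying you to one vendor.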

    Read the article
