Search Results

Search found 42685 results on 1708 pages for 'page speed'.

Page 203/1708 | < Previous Page | 199 200 201 202 203 204 205 206 207 208 209 210  | Next Page >

  • Server.Execute(path): executed page returns the calling page's URL from Request.Url

    - by ClarkeyBoy
    Hey, as explained in the title, I am having a problem getting the URL of the page being executed from within that page. Basically I have a dynamic catalogue, where customers select products they are interested in. The manager of the company I am doing this for would like to be able to create an up-to-date offline catalogue at any given time, to send out to customers who don't have an internet connection. So far it's going really well. I am using Server.Execute to get the content for each page, then putting it in static HTML pages and changing the dynamic links to static HTML links (i.e. changing all .aspx links to .htm). I am able to output all the pages for about us, contact us, home, and the entire catalogue. However, one of the stylesheets, which is included in the page based on the URL (if the page is in the administration section it is not included, otherwise it is), is not being included in the pages when it should be. I have tried outputting the URL, but it just returns the URL of the calling page, not the page being called. Does anyone have any idea why this is happening? Any help would be greatly appreciated. Regards, Richard Clarke
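    A minimal sketch of the pattern in question (paths and the stylesheet rule are made up for illustration). One thing worth checking: inside the executed page, Request.Url keeps pointing at the calling page, but Request.CurrentExecutionFilePath reflects the page actually being run, so it is a better basis for URL-based decisions such as which stylesheet to include.

        // Calling page: render another page into a string with Server.Execute.
        using System;
        using System.IO;
        using System.Web.UI;

        public partial class CataloguePublisher : Page
        {
            protected string RenderPage(string virtualPath)    // e.g. "~/Catalogue/Products.aspx" (illustrative)
            {
                var writer = new StringWriter();
                Server.Execute(virtualPath, writer, true);      // true = preserve the form collection
                return writer.ToString();                       // static HTML for the offline copy
            }
        }

        // Inside the executed page, decide on the stylesheet from the executed path,
        // not from Request.Url (which still belongs to the caller):
        // bool isAdmin = Request.CurrentExecutionFilePath
        //                       .StartsWith("/Administration", StringComparison.OrdinalIgnoreCase);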

    Read the article

  • How do I programmatically add an article page to a SharePoint site?

    - by soniiic
    I've been given the task of content migration from another CMS system to SharePoint 2010. The data in the old system is fairly easy to capture and the page hierarchy is simple, so I'm not worried about that. However, I am completely flummoxed about how to even create a page in code. I'm using the Microsoft.SharePoint.Client namespace, as I do not have SharePoint installed on my system and want to code this up as a console application, so I'm using ClientContext. (On the other hand, I am willing to go with other solutions if necessary.) My end-game: to get a page uploaded into some folder hierarchy which uses a master page, has the page title in a header web part, and a big ol' content-editable web part in the body so any user can come along and edit the content. Things I've tried so far: using FileCollection.Add() to add an .aspx file to the folder "Site Pages" - this renders the HTML in the browser but doesn't enable any features for the user to edit the page; using ListItemCollection.Add() to add a page to the site, but I didn't know what fields I needed (and I remember it came up with a runtime error saying I should use FileCollection.Add()); uploading to 'Site Pages' instead of 'Pages'; so many others... ow my head :( The only plausible thing I can see on the net is to use the PublishingPage type along with PublishingWeb. However, PublishingWeb can only be constructed from an SPWeb object, which requires me to actually be hosting the SharePoint application on my workstation. If anyone can lend a hand that would be greatly appreciated :)
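    A rough sketch of the client object model pattern the poster is already circling around (the site URL, library name and file names are assumptions; this only uploads a page file and sets its title, it does not wire up a publishing page layout or web parts):

        using System;
        using Microsoft.SharePoint.Client;

        class PageUploader
        {
            static void Main()
            {
                using (var ctx = new ClientContext("http://server/sites/news"))       // assumed site URL
                {
                    List pages = ctx.Web.Lists.GetByTitle("Pages");                    // assumed library name
                    var fileInfo = new FileCreationInformation
                    {
                        Url = "migrated-article.aspx",                                 // illustrative file name
                        Content = System.IO.File.ReadAllBytes("migrated-article.aspx"),// markup produced by the migration
                        Overwrite = true
                    };
                    var uploaded = pages.RootFolder.Files.Add(fileInfo);
                    ListItem item = uploaded.ListItemAllFields;
                    item["Title"] = "Migrated article";
                    item.Update();
                    ctx.ExecuteQuery();
                }
            }
        }

    As far as I can tell, the publishing API the poster mentions (PublishingPage/PublishingWeb) only exists server-side in SharePoint 2010, which is why editable publishing pages with layouts are hard to create from a pure client-side console app.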

    Read the article

  • How to search a PDF in Acrobat Reader AND jump to a certain page via parameter?

    - by agez
    Hi, we are using lucene within a web application to search in a great number of PDF documents. The workflow is like this: A user enters a search term A list of search results is presented to the user. Each search result represents one PDF document and shows the user on which page the search term was found. Each of these pages is represented as a hyperlink. If the user now clicks on such a hyperlink, he directly jumps to that page. But now the user has the problem that the search term isn't highlighted on the page. Therefore the user has to look on his own to find the search term on the page. What we wanted is a way to highlight the search term on the specific page in the PDF. The open parameters for Acrobat Reader allow for either searching a PDF document (with hit highlighting) OR jumping to a specific page. But the combination of both parameters - which we would need - doesn't work. Does anyone have an idea how jumping to a page and highlighting a search term in a pdf document could work? I had a look at the Acrobat SDK but don't see how we can use it (it's terribly documented). Cheers, Helmut

    Read the article

  • How do I stop a page from unloading (navigating away) in JS?

    - by Natalie Downe
    Does anyone know how to stop a page from reloading or navigating away? jQuery(function($) { /* global on unload notification */ warning = true; if(warning) { $(window).bind("unload", function() { if (confirm("Do you want to leave this page") == true) { //they pressed OK alert('ok'); } else { // they pressed Cancel alert('cancel'); return false; } }); } }); I am working on an e-commerce site at the moment; the page that displays your future orders has the ability to alter the quantities of items ordered using +/- buttons. Changing the quantities this way doesn't actually change the order itself; they have to press confirm, thereby committing a positive action to change the order. However, if they have changed the quantities and navigate away from the page, I would like to warn them they are doing so in case this is an accident, as the changed quantities will be lost if they navigate away or refresh the page. In the code above I am using a global variable which will be false by default (it's only true for testing); when a quantity is changed I will update this variable to true, and when they confirm the changes I will set it to false. If warning is true and the page is unloaded, I offer them a confirmation box; if they say they would like to stay on this page, I need to stop it from unloading. return false isn't working; it still lets the user navigate away (the alerts are there for debugging only). Any ideas?

    Read the article

  • How can I handle all my errors/messages in one place on an Asp.Net page?

    - by Atomiton
    Hi all, I'm looking for some guidance here. On my site I put things in Web user controls. For example, I will have a NewsItem control, an Article control, a ContactForm control. These will appear in various places on my site. What I'm looking for is a way for these controls to pass messages up to the page they exist on. I don't want to tightly couple them, so I think I will have to do this with events/delegates. I'm a little unclear as to how I would implement this, though. A couple of examples: 1) A contact form is submitted. After it's submitted, instead of replacing itself with a "Your mail has been sent" message, which limits the placement of that message, I'd like to just notify the page that the control is on with a status message and perhaps a suggested behaviour. So a message would include the text to render as well as an enum like DisplayAs.Popup or DisplayAs.Success. 2) An Article control queries the database for an Article object. The database returns an exception. A custom exception is passed to the page along with the DisplayAs.Error enum value. The page handles this error and displays it wherever the errors go. I'm trying to accomplish something similar to the ValidationSummary control, except that I want the page to be able to display the messages as it sees fit based on the enum. Again, I don't want to tightly bind or rely on a control existing on the page. I want the controls to raise these events, but the page can ignore them if it wants. Am I going about this the right way? I'd love a code sample just to get me started. I know this is a more involved question, so I'll wait longer before voting/choosing the answers.
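    A minimal sketch of the event/delegate approach (all names here are illustrative, not taken from the question): the control raises a status event carrying the text and a DisplayAs hint, and the page can subscribe to it or simply ignore it.

        using System;
        using System.Web.UI;

        public enum DisplayAs { Popup, Success, Error }

        public class StatusEventArgs : EventArgs
        {
            public string Message { get; set; }
            public DisplayAs DisplayAs { get; set; }
        }

        public partial class ContactForm : UserControl
        {
            // The page may or may not attach a handler; the control never assumes it did.
            public event EventHandler<StatusEventArgs> StatusRaised;

            protected void Submit_Click(object sender, EventArgs e)
            {
                // ... send the mail ...
                var handler = StatusRaised;
                if (handler != null)
                    handler(this, new StatusEventArgs
                    {
                        Message = "Your mail has been sent",
                        DisplayAs = DisplayAs.Success
                    });
            }
        }

        // On the hosting page:
        // contactForm1.StatusRaised += (s, e) => ShowMessage(e.Message, e.DisplayAs);

    Since nothing in the control references the page type, the control can be dropped onto any page; a page that doesn't subscribe simply never shows the message.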

    Read the article

  • How to pull and display range (min-max) data for each page in pagination?

    - by Ty W
    I have a table of data that is searchable and sortable, but likely to produce hundreds or thousands of results for broad searches. Assuming the user searches for "foo" and sorts the foos in descending price order I'd like to show a quick-jump select menu like so: <option value="1">Page 1 ($25,000,000 - $1,625,000)</option> <option value="2">Page 2 ($1,600,000 - $1,095,000)</option> <option value="3">Page 3 ($1,095,000 - $815,000)</option> <option value="4">Page 4 ($799,900 - $699,000)</option> ... Is there an efficient way of querying for this information directly from the DB? I've been grabbing all of the matching records and using PHP to calculate the min and max value for each page which seems inefficient and likely to cause scaling problems. The only possible technique I've been able to come up with is some way of having a calculated variable that increments every X records (X records to a page), grouping by that, and selecting MIN/MAX for each page grouping... unfortunately I haven't been able to come up with a way to generate that variable.

    Read the article

  • Dynamic Type to do away with Reflection

    - by Rick Strahl
    The dynamic type in C# 4.0 is a welcome addition to the language. One thing I’ve been doing a lot with it is to remove explicit Reflection code that’s often necessary when you ‘dynamically’ need to walk and object hierarchy. In the past I’ve had a number of ReflectionUtils that used string based expressions to walk an object hierarchy. With the introduction of dynamic much of the ReflectionUtils code can be removed for cleaner code that runs considerably faster to boot. The old Way - Reflection Here’s a really contrived example, but assume for a second, you’d want to dynamically retrieve a Page.Request.Url.AbsoluteUrl based on a Page instance in an ASP.NET Web Page request. The strongly typed version looks like this: string path = Page.Request.Url.AbsolutePath; Now assume for a second that Page wasn’t available as a strongly typed instance and all you had was an object reference to start with and you couldn’t cast it (right I said this was contrived :-)) If you’re using raw Reflection code to retrieve this you’d end up writing 3 sets of Reflection calls using GetValue(). Here’s some internal code I use to retrieve Property values as part of ReflectionUtils: /// <summary> /// Retrieve a property value from an object dynamically. This is a simple version /// that uses Reflection calls directly. It doesn't support indexers. /// </summary> /// <param name="instance">Object to make the call on</param> /// <param name="property">Property to retrieve</param> /// <returns>Object - cast to proper type</returns> public static object GetProperty(object instance, string property) { return instance.GetType().GetProperty(property, ReflectionUtils.MemberAccess).GetValue(instance, null); } If you want more control over properties and support both fields and properties as well as array indexers a little more work is required: /// <summary> /// Parses Properties and Fields including Array and Collection references. /// Used internally for the 'Ex' Reflection methods. /// </summary> /// <param name="Parent"></param> /// <param name="Property"></param> /// <returns></returns> private static object GetPropertyInternal(object Parent, string Property) { if (Property == "this" || Property == "me") return Parent; object result = null; string pureProperty = Property; string indexes = null; bool isArrayOrCollection = false; // Deal with Array Property if (Property.IndexOf("[") > -1) { pureProperty = Property.Substring(0, Property.IndexOf("[")); indexes = Property.Substring(Property.IndexOf("[")); isArrayOrCollection = true; } // Get the member MemberInfo member = Parent.GetType().GetMember(pureProperty, ReflectionUtils.MemberAccess)[0]; if (member.MemberType == MemberTypes.Property) result = ((PropertyInfo)member).GetValue(Parent, null); else result = ((FieldInfo)member).GetValue(Parent); if (isArrayOrCollection) { indexes = indexes.Replace("[", string.Empty).Replace("]", string.Empty); if (result is Array) { int Index = -1; int.TryParse(indexes, out Index); result = CallMethod(result, "GetValue", Index); } else if (result is ICollection) { if (indexes.StartsWith("\"")) { // String Index indexes = indexes.Trim('\"'); result = CallMethod(result, "get_Item", indexes); } else { // assume numeric index int index = -1; int.TryParse(indexes, out index); result = CallMethod(result, "get_Item", index); } } } return result; } /// <summary> /// Returns a property or field value using a base object and sub members including . syntax. 
/// For example, you can access: oCustomer.oData.Company with (this,"oCustomer.oData.Company") /// This method also supports indexers in the Property value such as: /// Customer.DataSet.Tables["Customers"].Rows[0] /// </summary> /// <param name="Parent">Parent object to 'start' parsing from. Typically this will be the Page.</param> /// <param name="Property">The property to retrieve. Example: 'Customer.Entity.Company'</param> /// <returns></returns> public static object GetPropertyEx(object Parent, string Property) { Type type = Parent.GetType(); int at = Property.IndexOf("."); if (at < 0) { // Complex parse of the property return GetPropertyInternal(Parent, Property); } // Walk the . syntax - split into current object (Main) and further parsed objects (Subs) string main = Property.Substring(0, at); string subs = Property.Substring(at + 1); // Retrieve the next . section of the property object sub = GetPropertyInternal(Parent, main); // Now go parse the left over sections return GetPropertyEx(sub, subs); } As you can see there’s a fair bit of code involved into retrieving a property or field value reliably especially if you want to support array indexer syntax. This method is then used by a variety of routines to retrieve individual properties including one called GetPropertyEx() which can walk the dot syntax hierarchy easily. Anyway with ReflectionUtils I can  retrieve Page.Request.Url.AbsolutePath using code like this: string url = ReflectionUtils.GetPropertyEx(Page, "Request.Url.AbsolutePath") as string; This works fine, but is bulky to write and of course requires that I use my custom routines. It’s also quite slow as the code in GetPropertyEx does all sorts of string parsing to figure out which members to walk in the hierarchy. Enter dynamic – way easier! .NET 4.0’s dynamic type makes the above really easy. The following code is all that it takes: object objPage = Page; // force to object for contrivance :) dynamic page = objPage; // convert to dynamic from untyped object string scriptUrl = page.Request.Url.AbsolutePath; The dynamic type assignment in the first two lines turns the strongly typed Page object into a dynamic. The first assignment is just part of the contrived example to force the strongly typed Page reference into an untyped value to demonstrate the dynamic member access. The next line then just creates the dynamic type from the Page reference which allows you to access any public properties and methods easily. It also lets you access any child properties as dynamic types so when you look at Intellisense you’ll see something like this when typing Request.: In other words any dynamic value access on an object returns another dynamic object which is what allows the walking of the hierarchy chain. Note also that the result value doesn’t have to be explicitly cast as string in the code above – the compiler is perfectly happy without the cast in this case inferring the target type based on the type being assigned to. The dynamic conversion automatically handles the cast when making the final assignment which is nice making for natural syntnax that looks *exactly* like the fully typed syntax, but is completely dynamic. 
Note that you can also use indexers in the same natural syntax so the following also works on the dynamic page instance: string scriptUrl = page.Request.ServerVariables["SCRIPT_NAME"]; The dynamic type is going to make a lot of Reflection code go away as it’s simply so much nicer to be able to use natural syntax to write out code that previously required nasty Reflection syntax. Another interesting thing about the dynamic type is that it actually works considerably faster than Reflection. Check out the following methods that check performance: void Reflection() { Stopwatch stop = new Stopwatch(); stop.Start(); for (int i = 0; i < reps; i++) { // string url = ReflectionUtils.GetProperty(Page,"Title") as string;// "Request.Url.AbsolutePath") as string; string url = Page.GetType().GetProperty("Title", ReflectionUtils.MemberAccess).GetValue(Page, null) as string; } stop.Stop(); Response.Write("Reflection: " + stop.ElapsedMilliseconds.ToString()); } void Dynamic() { Stopwatch stop = new Stopwatch(); stop.Start(); dynamic page = Page; for (int i = 0; i < reps; i++) { string url = page.Title; //Request.Url.AbsolutePath; } stop.Stop(); Response.Write("Dynamic: " + stop.ElapsedMilliseconds.ToString()); } The dynamic code runs in 4-5 milliseconds while the Reflection code runs around 200+ milliseconds! There’s a bit of overhead in the first dynamic object call but subsequent calls are blazing fast and performance is actually much better than manual Reflection. Dynamic is definitely a huge win-win situation when you need dynamic access to objects at runtime.© Rick Strahl, West Wind Technologies, 2005-2010Posted in .NET  CSharp  

    Read the article

  • Optimize Images Using the ASP.NET Sprite and Image Optimization Framework

    The HTML markup of a web page includes the page's textual content, semantic and styling information, and, typically, several references to external resources. External resources are content that is part of a web page but separate from the web page's markup - things like images, style sheets, script files, Flash videos, and so on. When a browser requests a web page it starts by downloading its HTML. Next, it scans the downloaded HTML for external resources and starts downloading those. A page with many external resources usually takes longer to completely load than a page with fewer external resources because there is an overhead associated with downloading each external resource. For starters, each external resource requires the browser to make an HTTP request to retrieve the resource. What's more, browsers have a limit as to how many HTTP requests they will make in parallel. For these reasons, a common technique for improving a page's load time is to consolidate external resources in a way that reduces the number of HTTP requests that must be made by the browser to load the page in its entirety. This article examines the free and open-source ASP.NET Sprite and Image Optimization Framework, a project developed by Microsoft for improving a web page's load time by consolidating images into a sprite or by using inline, base-64 encoded images. In a nutshell, this framework makes it easy to implement practices that will improve the load time for a web page that displays several images. Read on to learn more!
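    As a side note, the inline-image half of the technique is easy to see in isolation. The sketch below is not the framework's API, just the underlying idea of a base-64 data URI (the file name is made up):

        // Sketch of the general idea behind inline images: embed a small image directly
        // in the markup as a base-64 data URI, trading one fewer HTTP request for a
        // larger HTML payload.
        using System;
        using System.IO;

        class InlineImageDemo
        {
            static void Main()
            {
                byte[] bytes = File.ReadAllBytes("icon.png");   // assumed local file
                string dataUri = "data:image/png;base64," + Convert.ToBase64String(bytes);
                Console.WriteLine("<img src=\"" + dataUri + "\" alt=\"icon\" />");
            }
        }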

    Read the article

  • Tomcat 5.5, Is there a max upload speed per request?

    - by maclema
    I am having an issue when uploading files to tomcat. It seems that tomcat (or something else?) will not handle the upload as fast as I can send it. When uploading multiple files concurrently I can max out my local connection upload speed (2.1MB/s). However, when uploading only one file at a time, no matter how small or large the file, the upload will max out around 400KB/s. I have tried setting the appReadBufSize higher but it makes no difference. Is there something else that would be limiting the upload speed per request? Proxy Server: CentOS 4 Apache 2 SSL Tomcat Server: CentOS 4 Tomcat 5.5.25 (Tomcat Native Library Is Installed) Java 6 Thanks! Matt

    Read the article

  • How to override the new limited keyboard repeat rate limit?

    - by Olivier Pons
    I may be an alien around here, but here's my problem: the maximum keyboard repeat rate on old Ubuntu releases (before 11) was very, very fast, which was really great for me. Now, on Ubuntu 11, they seem to have thought: "who will ever want that speed? Nobody! So let's cap the maximum at a lower limit." It feels as if the repeat rate was narrowed down to match some other famous OS. If Linux is more powerful, why remove some of its power? I don't get that. So is there any way to override that limit and get my keyboard as fast as it was on previous versions?

    Read the article

  • How would you sample a real-time stream of coordinates to create a Speed Graph?

    - by Andrew Johnson
    I have a GPS device, and I am receiving continuous points, which I store in an array. These points are time stamped. I would like to graph distance/time (speed) vs. distance in real-time; however, I can only plot 50 of the points because of hardware constraints. How would you select points from the array to graph? For example, one algorithm might be to select every Nth point from the array, where N results in 50 points total. Code: float indexModifier = 1; if (MIN(50,track.lastPointIndex) == 50) { indexModifier = track.lastPointIndex/50.0f; } index = ceil(index*indexModifier); Another algorithm might be to keep an array of 50 points, and throw out the point with the least speed change each time you get a new point.

    Read the article

  • Make a compiled binary run at native speed flawlessly without recompiling from source on another system?

    - by unknownthreat
    I know that many people, at first glance of the question, may immediately yell out "Java", but no, I know Java's qualities. Allow me to elaborate on my question first. Normally, when we want our program to run at native speed on a system, whether it be Windows, Mac OS X, or Linux, we need to compile from source code. If you want to run a program from another system on your system, you need to use a virtual machine or an emulator. While these tools allow you to use the program you need on the non-native OS, they sometimes have performance problems and glitches. We also have a newer kind of compiler, the JIT compiler, which translates the bytecode program to native machine language before execution. Performance may increase to a very good extent with a JIT compiler, but it is still not the same as running on a native system. Another program on Linux, WINE, is also a good tool for running Windows programs on a Linux system. I have tried running Team Fortress 2 on it and experimented with some settings. I got ~40 fps on Windows at its mid-high settings at 1280 x 1024. On Linux, I need to turn everything down to low at 1280 x 1024 to get ~40 fps. There are 2 notable things though: polygon model settings do not seem to affect the framerate whether I set them low or high, and when there are post-processing effects or special effects that require manipulation of the drawn pixels of the current frame, the framerate drops to 10-20 fps. From this, I can see that normal polygon rendering is just fine, but when it comes to newer rendering methods that require the graphics card to do the job, it slows down to a crawl. Anyway, this question is rather theoretical. Is there anything we can do at all? I see that WINE can run Steam and Team Fortress 2. Although there are flaws, they can run at lower settings. Or perhaps I should also ask, "is it possible to translate one whole program from one system to another without recompiling from source and get native speed?" I see that we also have AOT compilers; is it possible to use one for something like this? Or are there so many constraints (such as DirectX calls or differences in software architecture) that it is impossible to have a flawless, non-native program that runs at native speed?

    Read the article

  • mount old ATA disk to USB adapter

    - by 213441265152351
    I am trying to recover data from an old Linux install on an ATA hard drive. I found a ScanLogic USB-IDE, an ATA-to-USB 1.0 adapter similar to the one in the picture, and after switching it on, I plugged it into a laptop with Ubuntu 12.04. I am used to drives being automatically mounted, but this one doesn't show up in /media. After doing a dmesg, all I got is this: [215298.671924] usb 2-1.1: new full-speed USB device number 5 using ehci_hcd [215298.767330] scsi19 : usb-storage 2-1.1:1.0 [215299.841701] usb 2-1.1: reset full-speed USB device number 5 using ehci_hcd [215300.017258] usb 2-1.1: reset full-speed USB device number 5 using ehci_hcd [215300.197050] usb 2-1.1: reset full-speed USB device number 5 using ehci_hcd [215300.372730] usb 2-1.1: reset full-speed USB device number 5 using ehci_hcd I tried plugging the adapter into the three different USB ports on my laptop (one of them USB 3.0), but had no luck with any of them. Any ideas?

    Read the article

  • When will my old page stop appearing on Google?

    - by Bane
    I recently bought a new address for my Blogger blog, from yannbane.blogspot.com to www.yannbane.com. However, www.yannbane.com addresses do not appear when they are searched for! Is this natural? How much time will it take for Google to update its index? yannbane.blogspot.com 301's to www.yannbane.com. Both are added to my Webmaster Tools account, but it shows no data for www.yannbane.com (strangely). And, finally, is there something I could do to speed up the process?

    Read the article

  • Switching Android SensorManager speed. What's a good practice?

    - by Johnson Tey
    Hello stackoverflow! I'm interested in switching between different sensor orientation speeds over time to optimize the program, i.e. battery life. The routine may be called very often. I'm looking for the right practice. sensorManager = (SensorManager)getSystemService(Context.SENSOR_SERVICE); sensorManager.registerListener(sensorListener, SensorManager.SENSOR_ORIENTATION, SensorManager.SENSOR_DELAY_FASTEST); //... 1) unregister then register at the new speed OR //... 2) register at the new speed without unregistering sensorManager.unregisterListener(sensorListener); Should I unregister the listener and then register with SensorManager.SENSOR_DELAY_NORMAL, or should I not bother unregistering the listener? Thanks.

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blogs entries on SOA/BPM and Identity Management, the domain where I'm the most passionated is definitely the Extreme Transaction Processing, commonly called XTP.I came across XTP back to 2007 while I was still FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned an unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except being a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still.In general when people doesn't fully understand a technology or a concept, they tend to find some shortcuts, either correct or not, to justify their lack-of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve Performance". Excellent marketing slogan... but not very meaningful still. By chance I was able to get away quickly from that group in July 2007* at Thames Valley Park (UK), after I attended one of the most interesting workshops, in my 10 years career in Oracle, delivered by Brian Oliver. The biggest mistake I made was to assume that performance improvement with Coherence was related to the response time. Which can be considered as legitimus at that time, because after-all caches help to reduce latency on cached data access, hence reduce the response-time. But like all caches, you need to define caching and expiration policies, thinking about the cache-missed strategy, and most of the time you have to re-write partially your application in order to work with the cache. At a result, the expected benefit vanishes... so, not very useful then?The key mistake I made was my perception or obsession on how performance improvement should be driven, but I strongly believe this is still a common problem to most of the developers. In fact we all know the that the performance of a system is generally presented by the Capacity (or Throughput), with the 2 important dimensions Speed (response-time) and Volume (load) :Capacity (TPS) = Volume (T) / Speed (S)To increase the Capacity, we can either reduce the Speed(in terms of response-time), or to increase the Volume. However we tend to only focus on reducing the Speed dimension, perhaps it is more concrete and tangible to measure, and nicer to present to our management because there's a direct impact onto the end-users experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, the reality proves that IT is never as ideal as we assume...The challenge with Speed improvement approach is that it is generally difficult and costly to make things already fast... faster. And by adding Coherence will not necessarily help either. Even though we manage to do so, the Capacity can not increase forever because... the Speed can be influenced by the Volume. For all system, we always have a performance illustration as follow: In all traditional system, the increase of Volume (Transaction) will also increase the Speed (Response-Time) as some point. The reason is simple: most of the time the Application logics were not designed to scale. As an example, if you have a while-loop in your application, it is natural to conceive that parsing 200 entries will require double execution-time compared to 100 entries. 
If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce network latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is designing applications which can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is actually not just about having an application run as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is in general combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from the hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee: constant data object access time, independently of the number of objects and the Coherence cluster size; data object distribution by affinity for in-memory data grouping; and in-place data processing for parallel execution. To summarize, Oracle Coherence is indeed useful for improving your application performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under Extreme Load while keeping a consistent Speed. In the future I will keep adding new blog entries around this topic, with some sample code and experience sharing that I have captured in the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence first; then you can start playing with the product through our tutorial. Have Fun!

    Read the article

  • JavaScript: is there any JS that can test network speed?

    - by Bin Chen
    I am going to test my website speed, primarily the webserver latency. To summarize what I want to achieve: 1) a webpage with javascript hosted on my website (http://myweb.com/test-speed.html); 2) I give this URL to my friends; 3) they don't need to do anything, they just need to access this webpage and the latency is printed out in the webpage; 4) if the webpage can also tell which state the visitor is in (using an IP address range database), that will be a plus. Any existing solutions? I can modify the javascript to log the data into a database, but I think the core here is how to write the javascript that measures the latency.

    Read the article

  • Equation / formula to determine an object's position on an elliptical path

    - by David Murphy
    I'm making a space game, and as such I need objects to follow an elliptical path (orbit). I've worked out how to calculate all the important aspects of my orbits; the only remaining thing is how to have an object follow one. My Orbit class contains the major and minor (and by extension semi-major and semi-minor) lengths, and even the focal radius, area and circumference. What is the equation to determine an object's x/y position (I only need 2D) on an ellipse at a certain speed after a period of time? Basically, every frame I want to update the position based on the amount of elapsed time. I would also like the speed along the path to speed up and slow down according to the distance from the object it's orbiting, but I'm not sure how to factor this in, given that at any point in time the speed has changed from its previous speed. EDIT: I can't answer my own question, but I found that the question and answer are already on Stack Exchange: Kepler orbit: get position on the orbit over time
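    For reference, the standard Kepler parametrisation that the linked answer builds on can be summarised as follows (here a and b are the semi-major and semi-minor axes, e the eccentricity, T the period, t the time since periapsis, and the orbited body sits at a focus):

        M = \frac{2\pi t}{T}
        M = E - e \sin E \quad \text{(solve numerically for the eccentric anomaly } E\text{, e.g. by Newton's method)}
        x = a(\cos E - e), \qquad y = b \sin E

    The varying angular rate that falls out of Kepler's equation is exactly the speeding up near the focus and slowing down far from it that the question asks about.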

    Read the article

  • In Bash, how can I find the upload speed of my system's active internet interface?

    - by YumYumYum
    I am trying to write a TUI bandwidth trace application which, on query, can instantly tell me that my download and upload speed is XXXX. I have figured out that for download I can use wget and parse its output in Bash, but how do I get the upload speed? Example of the download parse method: 1) Remote download: wget http://x.x.com:7007/files/software/vnc.zip Length: 1594344 (1.5M) [application/zip] Saving to: `vnc.zip' 100%[==================================================================>] 1,594,344 573K/s in 2.7s 2012-03-24 11:35:22 (573 KB/s) - `vnc.zip' saved [1594344/1594344] 2) Local download shows: Length: 1594344 (1.5M) [application/zip] Saving to: `vnc.zip' 100%[==================================================================>] 1,594,344 --.-K/s in 0.1s 2012-03-24 06:43:04 (11.4 MB/s) - `vnc.zip' saved [1594344/1594344]

    Read the article

  • Reducing number of include files to reduce server load/speed up site?

    - by rein
    You can reduce the number of HTTP requests to speed up your site, such as css sprite images. I'm wondering does reducing the number of php includes/requires also speed up your site or reduce server load? For example, I have a index.php with <?php include './file.php'; ?> If instead I copy and paste the code from file.php and just put it into index.php, thus removing the include code, would it reduce the server load? This might make things less organized, but if it does reduce server load I might need to do that. For a small to medium sized site, I assume there might not be a difference, but how about for high traffic sites? Thanks in advance.

    Read the article

  • What advantages are conferred by using server-side page rendering?

    - by user1303881
    I am developing a web app and I have currently written the entire website in html/js/css and on the backend I have servlets that host some RESTFUL services. All the presentation logic is done through getting json objects and modifying the view through javascript. The application is essentially a search engine, but it will have user accounts with different roles. I've been researching some frameworks such as Play and Spring. I'm fairly new to web development, so I was wondering what advantages using server side page rendering would provide? Is it: Speed? Easier development and workflow? Access to existing libraries? More? All of the above?

    Read the article

  • Does template class/function specialization improve compilation/linker speed?

    - by Stormenet
    Suppose the following template class is heavily used in a project, mostly with int as the type parameter, and linker speed has been noticeably slower since the introduction of this class. template <typename T> class MyClass { void Print() { std::cout << m_tValue << std::endl; } T m_tValue; }; Will defining a class specialization benefit compilation speed? e.g. template <> void MyClass<int>::Print() { std::cout << m_tValue << std::endl; }

    Read the article

  • How do I find the speed of an in-progress file upload in cURL?

    - by cinek1lol
    I'd like to know how to check the speed of a file being uploaded in real time using the cURL library in C++. This is what I have written: void progress_func(void* ptr, double TotalToDownload, double NowDownloaded, double TotalToUpload, double NowUploaded) { cout<<NowUploaded<<endl; } //... int main() { //... curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, progress_func); } The manual says this gives an average speed, but even that doesn't seem to work for me, because the upload finishes too quickly to tell. How should I write this properly, or calculate the speed correctly?

    Read the article

  • Memory limit for running external executables within Asp.net

    - by itsbalur
    I am using WkhtmltoPdf in my C# web application running on .NET 4.0 to generate PDFs from HTML files. In general everything works fine while the HTML file is below about 250KB. Once the HTML file size increases beyond that, the process which runs wkhtmltopdf.exe throws an exception as below. In Task Manager, I have seen that the Memory value for the wkhtmltopdf.exe process does not increase beyond 40,096 K, which I believe is the reason why the process is abandoned partway through. How can we configure things so that the memory limit for external exes can be increased? Is there any other way of solving this issue? More info: when I run the conversion from the command line directly, the PDF is generated fine, so it's unlikely to be a problem with WkhtmlToPdf. The error is from localhost; I have tried the same on the DEV server, with the same result.

    Exception (wkhtmltopdf progress output, truncated):

        [Exception: Loading pages (1/6) ... 100%
         Counting pages (2/6) Object 1 of 1
         Resolving links (4/6) Object 1 of 1
         Loading headers and footers (5/6)
         Printing pages (6/6) Page 1 of 49 ... Page 27 of 49 (output stops here)

    Code that I use:

        var fileName = " - ";
        var wkhtmlDir = ConfigurationManager.AppSettings[Constants.AppSettings.ExportToPdfExecutablePath];
        var wkhtml = ConfigurationManager.AppSettings[Constants.AppSettings.ExportToPdfExecutablePath] + "\\wkhtmltopdf.exe";
        var p = new Process();

        string switches = "";
        switches += "--print-media-type ";
        switches += "--margin-top 10mm --margin-bottom 10mm --margin-right 5mm --margin-left 5mm ";
        switches += "--page-size A4 ";
        switches += "--disable-smart-shrinking ";

        var startInfo = new ProcessStartInfo
        {
            CreateNoWindow = true,
            FileName = wkhtml,
            Arguments = switches + " " + url + " " + fileName,
            UseShellExecute = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            RedirectStandardInput = true,
            WorkingDirectory = wkhtmlDir
        };
        p.StartInfo = startInfo;
        p.Start();
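    One common culprit with this pattern (not necessarily the poster's problem, but worth ruling out) is that wkhtmltopdf writes its progress to the redirected stdout/stderr; if the parent never drains those streams, the pipe buffer fills on larger documents and the child blocks, which looks exactly like a size limit. A hedged sketch of draining the streams asynchronously, as a method to drop next to the code above (startInfo is the one built in the question):

        using System.Diagnostics;
        using System.Text;

        static string RunConverter(ProcessStartInfo startInfo)
        {
            var stdErr = new StringBuilder();
            using (var p = new Process { StartInfo = startInfo })
            {
                p.ErrorDataReceived  += (s, e) => { if (e.Data != null) stdErr.AppendLine(e.Data); };
                p.OutputDataReceived += (s, e) => { /* discard or log stdout */ };

                p.Start();
                p.BeginErrorReadLine();    // drain stderr, where wkhtmltopdf writes its progress bars
                p.BeginOutputReadLine();   // drain stdout so the child never blocks on a full pipe
                p.WaitForExit();
            }
            return stdErr.ToString();      // useful for logging what wkhtmltopdf reported
        }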

    Read the article

  • ViewStateMode in ASP.Net 4.0

    - by sreejukg
    When ASP.NET introduced the concept of viewstate, it changed the way developers maintain the state of the controls in a web page. Until then (in classic ASP), it was the developer's responsibility to keep track of each control and manually assign the posted content before rendering the control again. Viewstate allowed the developer to do this with ease; developers no longer had to worry about how controls keep their state on postback. Viewstate is rendered to the browser as a hidden field, __VIEWSTATE. Since viewstate stores the values of all controls, as the number of controls on the page increases, the content of viewstate grows large, which causes some websites to load slowly. As developers we need viewstate, but we do not want it for all the controls on the page. Up to ASP.NET 3.5, if viewstate was disabled in web.config (using <pages enableViewState="false" />), you could not enable it at the page or control level: both <%@ Page EnableViewState="true" .... and <asp:TextBox EnableViewState="true" will not work in this case. Many developers asked for more control over viewstate: it would be useful if developers could disable it for the entire page and enable it only for those controls that need viewstate. With ASP.NET 4.0 this is possible, which is happy news for developers. It is achieved by introducing a new property called ViewStateMode. What is ViewStateMode? It is a new property in ASP.NET 4.0 that allows developers to enable viewstate for an individual control even if its parent has disabled it. The ViewStateMode property can take one of three values: Enabled - enable view state for the control even if the parent control has view state disabled; Disabled - disable view state for this control even if the parent control has view state enabled; Inherit - inherit the value of ViewStateMode from the parent (this is the default value). To disable view state for a page and enable it for a specific control on the page, you can set the EnableViewState property of the page to true, then set the ViewStateMode property of the page to Disabled, and then set the ViewStateMode property of the control to Enabled. Find the example below. Page directive - <%@ Page Language="C#" EnableViewState="True" ViewStateMode="Disabled" .......... %> Code for the control - <asp:TextBox runat="server" ViewStateMode="Enabled" ............../> Now viewstate will be disabled for the whole page but enabled for the TextBox. ViewStateMode gives developers more control over viewstate.
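    The same combination can also be set from code-behind; a small sketch (the page class and the txtSearch control ID are made up for illustration), using the System.Web.UI.ViewStateMode enum that backs the property:

        using System;
        using System.Web.UI;

        public partial class Catalogue : Page
        {
            protected void Page_Init(object sender, EventArgs e)
            {
                this.ViewStateMode = ViewStateMode.Disabled;      // page-wide default: view state off
                txtSearch.ViewStateMode = ViewStateMode.Enabled;  // txtSearch: a TextBox declared in the markup (illustrative ID)
            }
        }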

    Read the article
