Search Results

Search found 1797 results on 72 pages for 'bandwidth measuring'.


  • Using the StopWatch class to calculate the execution time of a block of code

    - by vik20000in
      Often, when performance tuning a class, web page, component, or control, we first measure the time the code currently takes to execute. This helps locate the part of the code that is actually causing the performance issue, and it also measures the amount of improvement gained by making changes. This measurement is very important: it helps us understand the problem in the code, and it helps us write better code next time (as we have already learned what kind of improvement can be made with different code). Normally developers create two objects of the DateTime class: the exact time is collected before and after the code whose performance needs to be measured, and the difference between the two objects tells us the time spent in the measured code. Below is a sample:

          DateTime dt1, dt2;
          dt1 = DateTime.Now;
          for (int i = 0; i < 1000000; i++)
          {
              string str = "string";
          }
          dt2 = DateTime.Now;
          TimeSpan ts = dt2.Subtract(dt1);
          Console.WriteLine("Time Spent : " + ts.TotalMilliseconds.ToString());

      The above code works, but the .NET Framework also provides another way to capture the time spent without as much effort (creating two DateTime objects, a TimeSpan object, etc.): the built-in Stopwatch class. Below is the same measurement done with Stopwatch:

          Stopwatch sw = Stopwatch.StartNew();
          for (int i = 0; i < 1000000; i++)
          {
              string str = "string";
          }
          sw.Stop();
          Console.WriteLine("Time Spent : " + sw.Elapsed.TotalMilliseconds.ToString());

      [Note: the Stopwatch class resides in the System.Diagnostics namespace.] With Stopwatch, the measurement is also more precise, since it uses the system's high-resolution timer rather than DateTime.Now, and it takes very little effort. Vikram

    Read the article

  • parsing css measures

    - by david
    When I write a jQuery plugin I like to specify options for spacings the CSS way. I wrote a function that returns a CSS shorthand string as values in an object: '5px 10px' returns top: 5px, right: 10px, bottom: 5px, left: 10px. Now I often use the returned values to do some calculations, and it's not very nice to have to extract the measuring unit every time. I am bad at writing regular expressions; could someone help me complete this function?

        this.cssMeasure = function (cssString, separateUnits) {
            var errorMsg = 'please format your css values correctly dude';
            if (!cssString) {
                return errorMsg;
            }
            var values = {};
            var spacing = cssString.split(' ');
            if (spacing[4] || (spacing[2] && !spacing[3])) {
                return errorMsg;
            } else if (spacing[3]) {
                values = { top: spacing[0], right: spacing[1], bottom: spacing[2], left: spacing[3] };
            } else if (spacing[1]) {
                values = { top: spacing[0], right: spacing[1], bottom: spacing[0], left: spacing[1] };
            } else {
                values = { top: spacing[0], right: spacing[0], bottom: spacing[0], left: spacing[0] };
            }
            if (separateUnits) {
                $.each(values, function (i, value) {
                    /* at this point I need to extract the measuring unit of each value
                       and return them separately, something like
                       top: {value: 10, unit: 'px'}, right: {...} and so on */
                });
            }
            return values;
        };

    If you have any idea how to improve this function, I am open to your comments.
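    A hedged sketch of the missing piece (the helper name splitCssMeasure and its exact behavior are assumptions, not from the post): a regular expression that splits a CSS length such as "10px" or "1.5em" into its numeric value and its unit.

        // a minimal sketch: split a CSS length into {value, unit}
        // (helper name and fallback behavior are assumptions, not from the post)
        function splitCssMeasure(measure) {
            // [1] captures the number (integer or decimal, optionally negative),
            // [2] captures the unit (px, em, %, ...), possibly empty
            var match = /^(-?\d*\.?\d+)([a-z%]*)$/i.exec(String(measure).replace(/^\s+|\s+$/g, ''));
            if (!match) {
                return null; // not a recognizable CSS measure
            }
            return { value: parseFloat(match[1]), unit: match[2] };
        }

    Inside the $.each loop this would become values[i] = splitCssMeasure(value), yielding top: {value: 10, unit: 'px'} and so on.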

    Read the article

  • TextRenderer.DrawText renders Arial differently on XP vs Vista

    - by Michael
    I have a C# application that does text rendering, something on par with a simple WYSIWYG text editor. I'm using TextRenderer.DrawText to render the text to the screen and GetTextExtentPoint32 to measure text, so I can position different font styles/sizes on the same line. In Vista this all works fine. In XP, however, Arial renders differently: certain characters like 'o' and 'b' take up more width than in Vista. GetTextExtentPoint32 seems to be measuring the string as it would in Vista, though, with the smaller widths. The end result is that every now and then a run of text overlaps the text preceding it, because the preceding text gets measured as smaller than it actually is on the screen. Also, my text rendering code mimics IE's text rendering exactly (for simple formatting and the English language only), and IE's text rendering seems to be consistent between Vista and XP; that's how I noticed the change in size of the different characters. Does anyone have any ideas about what's going on? In short, TextRenderer.DrawText and GetTextExtentPoint32 don't match up in XP for Arial: DrawText seems to draw certain characters larger and/or smaller than it does in Vista, but GetTextExtentPoint32 seems to be measuring the text as it would in Vista (which matches the text rendering in IE on both XP and Vista). Hope that makes sense. Note: unfortunately TextRenderer.MeasureText isn't fast or accurate enough to meet my requirements; I tried using it and had to rip it out.
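    Not from the original post (and the poster notes TextRenderer's own measurement did not meet his requirements), but the usual way to avoid this class of mismatch is to measure and draw through one API with identical flags, so both widths come from the same text engine. A minimal sketch; the flag combination is an assumption:

        using System.Drawing;
        using System.Windows.Forms;

        // a minimal sketch: one API and one set of flags for both measuring and
        // drawing, so measured widths match what DrawText renders on that OS.
        // The flag combination is an assumption, not from the original post.
        static class TextRun
        {
            static readonly TextFormatFlags Flags =
                TextFormatFlags.NoPadding | TextFormatFlags.NoPrefix | TextFormatFlags.SingleLine;

            public static Size Measure(IDeviceContext dc, string text, Font font)
            {
                return TextRenderer.MeasureText(dc, text, font, Size.Empty, Flags);
            }

            public static void Draw(IDeviceContext dc, string text, Font font, Point location)
            {
                TextRenderer.DrawText(dc, text, font, location, Color.Black, Flags);
            }
        }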

    Read the article

  • C# COM Cross Thread problem

    - by user364676
    Hi, we're developing software to control a scientific measuring device. It provides a COM interface that defines several functions to set measurement parameters, and it fires an event when it has measured data. In order to test our software, I'm implementing a simulation of that device. The COM object runs a loop which periodically fires the event; another loop in the client app should set up the COM simulator using the given functions. I created a class for measurement parameters, which is instantiated when a new measurement is set up:

        // COM object
        public class MeasurementParams
        {
            public double Param1;
            public double Param2;
        }

        public class COM_Sim : ICOMDevice
        {
            public MeasurementParams newMeasurement;
            IClient client;

            public int NewMeasurement()
            {
                newMeasurement = new MeasurementParams();
                return 0;
            }

            public int SetParam1(double val)
            {
                // why is newMeasurement null when this is called from the client loop?
                newMeasurement.Param1 = val;
                return 0;
            }

            void loop()
            {
                while (true)
                {
                    // fire event
                    client.HandleEvent();
                }
            }
        }

        public class Client : IClient
        {
            ICOMDevice server;

            public int HandleEvent()
            {
                // handle this event
                server.NewMeasurement();
                server.SetParam1(0.0);
                return 0;
            }

            void loop()
            {
                while (true)
                {
                    // do some stuff...
                    server.NewMeasurement();
                    server.SetParam1(0.0);
                }
            }
        }

    Both of the loops run in independent threads. When server.NewMeasurement() is called, the object on the server is set to a new instance, but in the next call the object is null again. Doing the same when handling the server event works perfectly, because that method runs on the server's thread. How do I make it work from the client thread as well? As the client is meant to work with the real device, I cannot modify the interfaces given by the manufacturer. I also need to set up measurements independently of the event handler, which does not fire regularly. I assume this problem is related to multithreaded COM behavior, but I found nothing on this topic.
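    Not part of the original question, but the symptom described (state visible from the server's own thread, null from the client's) is typical of COM apartment marshaling giving each apartment a different view of the object. One general workaround is to give the simulator a single owning thread and queue every client call onto it; a minimal sketch of that pattern (all names are illustrative, not from the post):

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        // a minimal sketch: all calls into the simulated device run on one owning
        // thread, so every caller observes the same instance state.
        // Class and member names are illustrative assumptions.
        class DeviceThread : IDisposable
        {
            private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();
            private readonly Thread _thread;

            public DeviceThread()
            {
                _thread = new Thread(() =>
                {
                    // consume until CompleteAdding is called; each action runs here
                    foreach (var action in _queue.GetConsumingEnumerable())
                        action();
                });
                _thread.IsBackground = true;
                _thread.Start();
            }

            public void Invoke(Action action) { _queue.Add(action); }

            public void Dispose() { _queue.CompleteAdding(); }
        }

        // usage from the client loop:
        // device.Invoke(() => { server.NewMeasurement(); server.SetParam1(0.0); });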

    Read the article

  • Trouble in Nginx hotlink protection

    - by Ayaz Malik
    I am trying to implement image hotlink protection in nginx and I need help. I have a huge issue of my site's images being submitted to social networks like StumbleUpon with a direct link like http://example.com/xxxxx.jpg which sometimes gets huge traffic and increases CPU usage and bandwidth usage. I want to block direct access to my images from other referrers and protect them from being hotlinked. Here is the code from my vhost.conf:

        server {
            access_log off;
            error_log logs/vhost-error_log warn;
            listen 80;
            server_name mydomain.com www.mydomain.com;
            # uncomment location below to make nginx serve static files instead of Apache
            # NOTE this will cause issues with bandwidth accounting as files wont be logged
            location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
                root /home/username/public_html;
                expires 1d;
            }
            root /home/mydomain/public_html;
        }
        location / {
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            # you can increase proxy_buffers here to suppress "an upstream response
            # is buffered to a temporary file" warning
            proxy_buffers 16 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
            proxy_connect_timeout 30s;
            proxy_redirect http://www.mydomain.com:81 http://www.mydomain.com;
            proxy_redirect http://mydomain.com:81 http://mydomain.com;
            proxy_pass http://ip_address/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            expires 24h;
        }
        }

    For hotlink protection I added this code:

        location ~* (\.jpg|\.png|\.gif|\.jpeg)$ {
            valid_referers blocked www.mydomain.com mydomain.com;
            if ($invalid_referer) {
                return 403;
            }

    This is the current nginx code for this domain, but it didn't work:

        server {
            access_log off;
            error_log logs/vhost-error_log warn;
            listen 80;
            server_name mydomain.com www.mydomain.com;
            # uncomment location below to make nginx serve static files instead of Apache
            # NOTE this will cause issues with bandwidth accounting as files wont be logged
            location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
                root /home/username/public_html;
                expires 1d;
            }
            root /home/mydomain/public_html;
        }
        location ~* (\.jpg|\.png|\.gif|\.jpeg)$ {
            valid_referers blocked www.mydomain.com mydomain.com;
            if ($invalid_referer) {
                return 403;
            }
        location / {
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            # you can increase proxy_buffers here to suppress "an upstream response
            # is buffered to a temporary file" warning
            proxy_buffers 16 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
            proxy_connect_timeout 30s;
            proxy_redirect http://www.mydomain.com:81 http://www.mydomain.com;
            proxy_redirect http://mydomain.com:81 http://mydomain.com;
            proxy_pass http://ip_address/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            expires 24h;
        }
        }

    How can I fix this?
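    Not from the original post, but two problems are visible in the pasted config: the hotlink location block is missing its closing brace (a stray brace leaves it outside the server block), and the earlier static-files location already matches image requests, so a separate image location never runs. A minimal sketch of a merged version, assuming the same paths and domains; valid_referers none also admits requests with no Referer header at all (normal direct visits), which is usually wanted:

        # a minimal sketch (not from the post): referer check merged into the
        # location that actually serves the images, inside the server block
        server {
            listen 80;
            server_name mydomain.com www.mydomain.com;
            root /home/mydomain/public_html;

            location ~* \.(gif|jpg|jpeg|png)$ {
                root /home/username/public_html;
                expires 1d;
                valid_referers none blocked www.mydomain.com mydomain.com;
                if ($invalid_referer) {
                    return 403;
                }
            }
        }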

    Read the article

  • IPv6: Should I have private addresses?

    - by AlReece45
    Right now, we have a rack of servers. Every server currently has at least 2 IP addresses: one for the public interface, another for the private. The servers that host SSL websites have more IP addresses. We also have virtual servers, configured similarly.

    Private Network
    The private range is currently just used for backups and monitoring. It's a gigabit port, and interface usage does not usually get very high. There are other technologies we're considering that would use this port: iSCSI (implementations usually recommend dedicating an interface to it, which would be yet another IP network), VPN to get access to the private range (something I'd rather avoid), dedicated database servers, LDAP, centralized configuration (like Puppet), and centralized logging. We don't have any private addresses in our DNS records (only public addresses). For our servers to use the correct IP address for the right interface (and not hard-code the IP address) probably requires setting up a private DNS server (so now we add 2 different DNS entries to 2 different systems).

    Public Network
    Our public range hosts a variety of services including web, email, and FTP. There is a hardware firewall between our network and the "public" network. We have a (relatively secure) method to instruct the firewall to open and close administrative access (web interfaces, SSH, etc.) for our current IP address. With either solution discussed, the host-based firewalls will be configured as well. The public network currently runs on a dedicated 20 Mbps link. There are a couple of legacy servers with Fast Ethernet ports, but they are scheduled for decommissioning. All of the other production boxes have at least 2 Gigabit Ethernet ports; the more traffic-heavy servers have 4-6 available (none is using more than the 2 Gigabit ports right now).

    IPv6
    I want to get an IPv6 prefix from our ISP, so that every "server" has at least one IPv6 interface. We'll still need to keep the IPv4 addresses up and available for legacy clients (web servers and email at the very least). We have two IP networks right now; adding the public IPv6 addresses would make it three.

    Just use IPv6?
    I'm thinking about just dumping the private IPv4 range and using the IPv6 range as the primary means of all communications. If an interface starts reaching its capacity, we can use the newly freed interfaces to create a trunk. This has the advantage of allowing either the public or private traffic to exceed 1 Gbps. The traffic on each interface is already analyzed on a regular basis to predict future bandwidth use. In the rare instances where bandwidth unexpectedly peaks, QoS can ensure that traffic (like our limited SSH access) is prioritized correctly so the problem can be corrected (if possible; our WAN is the bottleneck right now). It also has the advantage of not needing an entry for every private address. We may still have private DNS (or just LDAP), but it will be much more limited in scope, with fewer entries to duplicate.

    Summary
    I'm trying to make this network as "simple" as possible. At the same time, I want to make sure it's reliable, upgradeable, scalable, and (eventually) redundant. Having one IPv6 network and a legacy IPv4 network seems to be the best solution to me. Regarding using assigned IPv6 addresses for both networks, sharing the available bandwidth on one (trunked further if needed):
    Are there any technical disadvantages (limitations, buffers, scalability)?
    Are there any other security considerations (aside from the firewalls mentioned above) to consider?
    Are there regulations or other security requirements (like PCI-DSS) that this doesn't meet?
    Is there typical software for setting up a Linux network that doesn't have IPv6 support yet (logging, LDAP, Puppet)?
    Is there anything else I didn't consider?
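    Not from the original question, but for concreteness, a dual-stack interface of the kind described, in Ubuntu's ifupdown syntax (the addresses are placeholders from the documentation ranges):

        # a minimal sketch: one interface carrying legacy IPv4 plus the new IPv6 prefix
        auto eth0
        iface eth0 inet static
            address 203.0.113.10
            netmask 255.255.255.0
            gateway 203.0.113.1

        iface eth0 inet6 static
            address 2001:db8:10::10
            netmask 64
            gateway 2001:db8:10::1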

    Read the article

  • Improving CSS With .LESS

    Cascading Style Sheets, or CSS, is a syntax used to describe the look and feel of the elements in a web page. CSS allows a web developer to separate the document content - the HTML, text, and images - from the presentation of that content. Such separation makes the markup in a page easier to read, understand, and update; it can result in reduced bandwidth as the style information can be specified in a separate file and cached by the browser; and makes site-wide changes easier to apply. For a great example of the flexibility and power of CSS, check out CSS Zen Garden. This website has a single page with fixed markup, but allows web developers from around the world to submit CSS rules to define alternate presentation information. Unfortunately, certain aspects of CSS's syntax leave a bit to be desired. Many style sheets include repeated styling information because CSS does not allow the use of variables. Such repetition makes the resulting style sheet lengthier and harder to read; it results in more rules that need to be changed when the website is redesigned to use a new primary color. Specifying inherited CSS rules, such as indicating that a elements (i.e., hyperlinks) in h1 elements should not be underlined, requires creating a single selector name, like h1 a. Ideally, CSS would allow for nested rules, enabling you to define the a rules directly within the h1 rules. .LESS is a free, open-source port of Ruby's LESS library. LESS (and .LESS, by extension) is a parser that allows web developers to create style sheets using new and improved language features, including variables, operations, mixins, and nested rules. Behind the scenes, .LESS converts the enhanced CSS rules into standard CSS rules. This conversion can happen automatically and on-demand through the use of an HTTP Handler, or done manually as part of the build process. Moreover, .LESS can be configured to automatically minify the resulting CSS, saving bandwidth and making the end user's experience a snappier one. This article shows how to get started using .LESS in your ASP.NET websites. Read on to learn more! Read More >
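    To make the features concrete, here is a small illustrative snippet (the selector names and the color are invented for the example) showing a variable and a nested rule in LESS:

        /* illustrative LESS, not from the article */
        @primary: #336699;

        h1 {
          color: @primary;
          a {
            text-decoration: none;  /* nested rule: compiles to "h1 a" */
          }
        }

    which .LESS expands to the standard CSS rules h1 { color: #336699; } and h1 a { text-decoration: none; }.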

    Read the article

  • New Netra SPARC T3 Servers

    - by Ferhat Hatay
    Today at Mobile World Congress 2011, Oracle announced two new carrier-grade, NEBS Level 3-certified servers: Oracle’s Netra SPARC T3-1 rackmount server and Oracle’s Netra SPARC T3-1BA ATCA blade server, bringing the performance, scalability and power efficiency of the newest SPARC T3 processor to the communications market.

    The Netra SPARC T3-1 server enclosure has a compact 20-inch-deep, carrier-grade, rack-optimized design.

    The new Netra SPARC T3 servers further expand Oracle’s complete portfolio for the communications industry, which includes carrier-grade servers, storage and application software to run operations support systems and service delivery platforms with easy migration capabilities and unmatched investment protection via the binary compatibility guarantee of the Oracle Solaris operating system. With advanced reliability, networking and security features built into Oracle Solaris – the most widely deployed carrier-grade OS – the systems announced today are uniquely suited for mission-critical core network infrastructure and service delivery. The world’s first carrier-grade system using the 16-core, 128-thread SPARC T3 processor, the Netra SPARC T3-1 server supports 2x the I/O bandwidth, 2x the memory and is 35 percent faster than the previous generation. With integrated on-chip 10 Gigabit Ethernet, on-chip cryptographic acceleration, and built-in, no-cost Oracle VM Server for SPARC and Oracle Solaris Containers for virtualization, the Netra SPARC T3-1 server is an ideal platform for consolidation, offering 128 virtual systems in a single server.

    As the next-generation Netra SPARC ATCA blade, the Netra SPARC T3-1BA ATCA blade server brings PICMG 3.0 compatibility, NEBS Level 3 certification, ETSI compliance and the Netra business practices to the customer solution. The Netra SPARC T3-1BA ATCA blade server can be mixed in the Sun Netra CT900 blade chassis with other ATCA UltraSPARC and x86 blades. It delivers industry-leading scalability, density and cost efficiency with up to 36 SPARC T3 processors (3,456 processing threads) in a single rack – a 50 percent increase over the previous generation – and offers high-bandwidth, high-capacity I/O with greater memory capacity to tackle the increasing business demands of the communications industry. For service providers faced with the rapid growth of broadband networks and the dramatic surge in global smartphone adoption, the new Netra SPARC T3 systems deliver continuous availability with massive scalability, tested and certified to run in the harshest conditions.

    More information:
    Oracle’s Sun Netra Servers
    Scaling Throughput and Managing TCO with Oracle’s Netra SPARC T3-1 Servers
    Enabling End-to-End 10 Gigabit Ethernet in Oracle's Sun Netra ATCA Product Family
    Data Sheet: Netra SPARC T3-1BA ATCA Blade Server
    Data Sheet: Netra SPARC T3-1 Server
    Oracle Solaris: The Carrier Grade Operating System

    Read the article

  • Question about MochaHost.com Hosting Plans [duplicate]

    - by Wassim
    This question already has an answer here: How to find web hosting that meets my requirements? (5 answers.) This is not advertising; I've just found this website (MochaHost) that offers great things for just $3/month, like: 2 lifetime FREE domains; UNLIMITED space and bandwidth; SVN (Subversion) support; SSH access; PHP 5, Perl, Python, and Rails. I need to know if any of you have used one of their hosting plans, and what you think about it.

    Read the article

  • Book My Cloud Offering FREE PREMIUM Cpanel Accounts

    - by asd
    Book My Cloud is offering FREE PREMIUM cPanel accounts. Request type: http://support.bookmycloud.com/ (select request type "Free Cpanel Hosting"). Related features:
    Disk quota: 10 GB
    Monthly bandwidth: 300 GB
    Max FTP accounts: 5
    Max email accounts: unlimited
    Max email lists: unlimited
    Max databases: 500
    Max subdomains: 500
    Max parked domains: 100
    Max addon domains: 1000
    Control panel: cPanel
    No ads
    Full DNS management

    Read the article

  • What does it take to successfully run a URL shortener service [on hold]

    - by MxyL
    What are the costs (technology-wise) of running a URL shortener service such as bit.ly or anonym.to? For example, if I decided to use inexpensive shared hosting with "unlimited" bandwidth, would that be feasible, or would I need dedicated hosting? I found this question: I want to run a URL shortener for my own usage, what do I need to do?, which makes it easy to set one up, but I'm not too clear on what kind of things I need to consider.

    Read the article

  • Why Is Vertical Monitor Resolution so Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.) But why exactly is this the case? Is it arbitrary or is there something more at work? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers: YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it’s convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it’s usual enough that it’s viewable). Is it merely a pleasing-the-eye situation? So why have the display be a multiple of 360? The Answer SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists but a history of screen design in the process: Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field of psychooptics meeting marketing. First of all, why are the vertical resolutions on youtube multiples of 360. This is of course just arbitrary, there is no real reason this is the case. The reason is that resolution here is not the limiting factor for Youtube videos – bandwidth is. Youtube has to re-encode every video that is uploaded a couple of times, and tries to use as little re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher res mobile there’s 480p, and for the computer crowd there is 360p for 2xISDN/multiuser landlines, 720p for DSL and 1080p for higher speed internet. For a while there were some other codecs than h.264, but these are slowly being phased out with h.264 having essentially ‘won’ the format war and all computers being outfitted with hardware codecs for this. Now, there is some interesting psychooptics going on as well. As I said: resolution isn’t everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn’t magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution. So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates, they’d rather use their bandwidth for other stuff. This has been researched quite a long time ago and is the big reason why we went from 720×576 (415kpix) to 1280×720 (922kpix), and then again from 1280×720 to 1920×1080 (2MP). Stuff in between is not a viable optimization target. And again, 1440P is about 3.7MP, another ~2x increase over HD. You will see a difference there. 4K is the next step after that. 
Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays, back in the day they used to be multiples of 128. This is something that just grew out of LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen that control how bright each subpixel is. Because historically, for reasons I don’t really know for sure, probably memory constraints, these multiple-of-128 or multiple-of-120 resolutions already existed, the industry standard line drivers became drivers with 360 line outputs (1 per subpixel). If you would tear down your 1920×1080 screen, I would be putting money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that’s 16:9. Guess how obvious that resolution choice was back when 16:9 was ‘invented’. Then there’s the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be in a wide-screen view, and this is where the great anamorphic revolution of the ’60s came from when films were shot in ever wider aspect ratios. Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance for reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s. However, again this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and as the computer as media consumption device evolved, people didn’t necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most ‘immersion factor’ if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen. But there’s more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum amount of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not amount of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9 and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up and that’s what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh… I think that about covers all the major aspects here. 
There’s more of course; bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today’s considerations, this is about all you need to know. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • What are the drawbacks of sending XML to browsers and let them apply XSLT?

    - by MainMa
    Context
    Working as a freelance developer, I have often made websites completely based on XSLT. In other words, on every request an XML file is generated, containing everything we need to know about the page content: the name of the user currently logged in, the top menu entries, whether this menu is dynamic/configurable, the text to display in a specific area of the page, etc. Then XSLT processes it (with caching, etc.) into the HTML/XHTML page sent to the browser. This makes it easier to create small-scale websites, especially with PHP. It is a sort of template engine, but one I prefer to other template engines because it is much more powerful than most of them, and because I know it better and like it. It is also possible, when needed, to give access to the raw XML data on demand for automated access, without the need to create separate APIs. Of course, it will fail completely on any medium-scale or large-scale website, since even with good caching techniques XSLT still degrades overall website performance and requires more CPU server-side.

    Question
    Modern browsers have the ability to take an XML file and transform it with an associated XSL file declared in the XML, like <?xml-stylesheet href="demo.xslt" type="text/xsl"?>. Firefox 3 can do it. Internet Explorer 8 can do it too. This means it is possible to migrate XSL processing from the server to the client side for 50% of users (according to browser statistics on several websites where I may want to implement this). Those 50% of users will receive only the XML file on each request, thus reducing their and the server's bandwidth (the XML file being much shorter than its processed HTML analog) and reducing the server's CPU usage. What are the drawbacks of this technique? I thought about several, but they don't apply in this situation:
    Difficult implementation, and the need to choose, based on the browser request, when to send raw XML and when to transform it to HTML instead. Obviously, the system will not be much more difficult than the current one; the only changes are to add the XSL file link to every XML and to add a browser check.
    More I/O and bandwidth usage, since the XSLT file will be downloaded by the browsers instead of remaining on the server. I don't think it will be a problem, since the XSLT file will be cached by the browsers (as images, CSS, and JavaScript files are).
    Possibly some problems on the client side, like maybe problems when saving a page in some browsers.
    Difficulty debugging code: it is impossible to obtain the HTML source the browser is actually using, since the only displayed source is the downloaded XML. On the other hand, I rarely look at HTML code on the client side, and in most cases it is unusable directly (whitespace being removed).
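    For concreteness, a minimal sketch of the client-side setup described above (file names and content invented for the example): the browser fetches the XML, sees the xml-stylesheet processing instruction, downloads the XSLT once, and renders the transformed HTML. page.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <?xml-stylesheet href="demo.xslt" type="text/xsl"?>
        <page>
          <title>Hello</title>
          <body>Served as XML, rendered as HTML by the browser.</body>
        </page>

    and demo.xslt:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/page">
            <html>
              <head><title><xsl:value-of select="title"/></title></head>
              <body><p><xsl:value-of select="body"/></p></body>
            </html>
          </xsl:template>
        </xsl:stylesheet>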

    Read the article

  • How do I duplicate a server's packages and configuration to another machine?

    - by sharjeel
    I have a production server running Ubuntu. I would like to set up a similar configuration on my local machine, with the same packages installed. Since bandwidth is a constraint, traditional disk-cloning methods won't work for me. Having the same packages installed, and the same users created with the same passwords, would be wonderful; I'll tweak the rest of the things manually. Is there a good solution for my requirements?
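    Not from the original question, but the classic low-bandwidth approach on Debian/Ubuntu is to copy the package selection list rather than the disk; a minimal sketch (the file name is illustrative):

        # on the production server: record the package selections (a small text file)
        dpkg --get-selections > packages.list

        # copy packages.list to the local machine, then replay it there
        sudo dpkg --set-selections < packages.list
        sudo apt-get dselect-upgrade    # installs everything marked in the list

    For the accounts, the users and password hashes live in /etc/passwd and /etc/shadow; copying over just the entries for the accounts you need preserves the same passwords.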

    Read the article

  • AMD FirePro V8800 2GB

    Phoronix: "The AMD FirePro V8800 features 2GB of GDDR5 video memory with 147.2 GB/s of bandwidth, 1600 Stream processors, four DisplayPort outputs, ATI Eyefinity support, DirectX 11.0 / OpenGL 4.0 support, OpenCL 1.0 capable, a full 30-bit display pipeline, Multi-View display support..."

    Read the article

  • Enterprise cloud put to the test

    Network World: "In this first-of-its-kind test, we invited cloud vendors to provide us with 20 CPUs that would be used for five instances of Windows 2008 Server and five instances of Red Hat Enterprise Linux – two CPUs per instance. We also asked for a 40GB internal or SAN/iSCSI disk connection, and 1Mbps of bandwidth from our test site to the cloud provider."

    Read the article

  • Ginormous...the new Oracle T5-8 SPARC SuperCluster!

    - by user12608550
    Ginormous...no other way to describe it...the new Oracle T5-8 SPARC SuperCluster...2000+ fast CPU threads, massive memory (DRAM & Flash), 1.2 M IOPS, HUGE storage and bandwidth...WOW! Wanna build a SPARC Cloud? This is it! Multiple virtualization technologies (VM Server for SPARC, and Solaris zones) for elasticity and resource pooling along with Oracle Enterprise Manager Ops Center 12c providing full cloud management capability. Check it out!: ORACLE SUPERCLUSTER T5-8 Overview and Frequently Asked Questions Oracle SuperCluster T5-8 Oracle T5-8 SuperCluster E-Book

    Read the article

  • First Look: H.264 and VP8 Compared

    StreamingMedia: "VP8 is now free, but if the quality is substandard, who cares? Well, it turns out that the quality isn't substandard, so that's not an issue, but neither is it twice the quality of H.264 at half the bandwidth. See for yourself."

    Read the article

  • Toggle CNAME entries using PHP?

    - by skibulk
    Is it possible with PHP to dynamically toggle CNAME entries? For example, I have two mirrors with media to be served on my website. Mirror 1 has a monthly bandwidth cap, so upon reaching it I want to automatically switch to mirror 2. I want to use CNAME records because the resulting URLs appear identical to search engines, an SEO-friendly approach. If there are SEO-friendly alternatives, I'd like to hear them as well.
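    Not from the original question: PHP cannot edit DNS records directly; the usual route is to call your DNS provider's management API from a script, for example from a cron job that watches mirror 1's monthly transfer. A minimal sketch against a hypothetical provider endpoint; the URL, token, and payload fields are invented placeholders, and any real provider's API will differ. Note that a change only takes effect as resolver caches expire, so the record needs a low TTL:

        <?php
        // a minimal sketch: point the media CNAME at the other mirror once the
        // bandwidth cap is reached. Endpoint, token, and payload are hypothetical.
        function setCnameTarget($target)
        {
            $ch = curl_init('https://dns.example-provider.com/api/records/media.mydomain.com');
            curl_setopt_array($ch, array(
                CURLOPT_CUSTOMREQUEST  => 'PUT',
                CURLOPT_HTTPHEADER     => array('Authorization: Bearer API_TOKEN',
                                                'Content-Type: application/json'),
                CURLOPT_POSTFIELDS     => json_encode(array('type' => 'CNAME', 'content' => $target)),
                CURLOPT_RETURNTRANSFER => true,
            ));
            $ok = curl_exec($ch) !== false && curl_getinfo($ch, CURLINFO_HTTP_CODE) < 300;
            curl_close($ch);
            return $ok;
        }

        $bytesUsedThisMonth = 0;                        // placeholder: read from your stats
        $monthlyCapBytes    = 500 * 1024 * 1024 * 1024; // placeholder: 500 GB cap
        if ($bytesUsedThisMonth > $monthlyCapBytes) {
            setCnameTarget('mirror2.mydomain.com');
        }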

    Read the article

  • How to See Your Estimated Data Usage in Windows 8

    - by Taylor Gibb
    Although you can use metered connections to get the most of your bandwidth in Windows 8, at times you may want to know how much data you have used for a single browsing session. Here’s how to do it.

    Read the article

  • How to Stream Videos and Music Over the Network Using VLC

    - by Chris Hoffman
    VLC includes a fairly easy-to-use streaming feature that can stream music and videos over a local network or the Internet. You can tune into the stream using VLC or other media players, and use VLC’s web interface as a remote control to manage the stream from elsewhere. Bear in mind that you may not have the bandwidth to stream high-definition videos over the Internet, though.

    Read the article

  • The Hybrid Cloud: Having your Cake

    With a hybrid cloud, can you get the freedom and flexibility of a public cloud with the security and bandwidth of a private cloud? Robert Sheldon explains all the ins and outs.

    Read the article
