Search Results

Search found 16327 results on 654 pages for 'exchange 2010'.


  • Can someone explain implicit conversions in Scala to me?

    - by Oscar Reyes
    More specifically, how does BigInt work for converting an Int to a BigInt? In the source code it reads:

        ...
        implicit def int2bigInt(i: Int): BigInt = apply(i)
        ...

    How is this code invoked? I can understand how this other sample, "Date literals", works:

        val christmas = 24 Dec 2010

    Defined by:

        implicit def dateLiterals(date: Int) = new {
          import java.util.Date
          def Dec(year: Int) = new Date(year, 11, date)
        }

    When an Int gets passed the message Dec with an Int as a parameter, the system looks for another method that can handle the request, in this case Dec(year: Int). Q1. Am I right in my understanding of date literals? Q2. How does it apply to BigInt? Thanks

    Read the article

  • C# Drawing Oracle Spatial Geometries

    - by Keeper
    I need to create a simple app which can display geometries from Oracle Spatial in C#. These geometries are exported from AutoCAD Map 3D 2010 to Oracle Spatial. I need to pan, zoom, manage layers of these objects, handle events (like a right-click to pop up a context menu, potentially different for every object), and create/delete points (maybe also other polygons): a sort of simple AutoCAD interface. Should I look for an AutoCAD OEM license? Is there a drawing framework which can handle this or do I need to create my own?

    Read the article

  • Can Google Employees See My Saved Google Chrome Passwords?

    - by Jason Fitzpatrick
    Storing your passwords in your web browser seems like a great time saver, but are the passwords secure and inaccessible to others (even employees of the browser company) when squirreled away?

    Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader MMA is curious if Google employees have (or could have) access to the passwords he stores in Google Chrome:

    I understand that we are really tempted to save our passwords in Google Chrome. The likely benefit is twofold:
    1. You don’t need to (memorize and) input those long and cryptic passwords.
    2. These are available wherever you are once you log in to your Google account.
    The last point sparked my doubt. Since the password is available anywhere, the storage must be in some central location, and this should be at Google. Now, my simple question is, can a Google employee see my passwords? Searching over the Internet revealed several articles/messages:
    - Do you save passwords in Chrome? Maybe you should reconsider: talks about your passwords being stolen by someone who has access to your computer account. Nothing is mentioned about the central storage security and vulnerability. There is even a response from the Chrome browser security tech lead about the first issue.
    - Chrome’s insane password security strategy: mostly along the same line. You can steal passwords from somebody if you have access to their computer account.
    - How to Steal Passwords Saved in Google Chrome in 5 Simple Steps: teaches you how to actually perform the act mentioned in the previous two when you have access to somebody else’s account.
    There are many more (including this one at this site), mostly along the same lines: points, counter-points, huge debates. I refrain from mentioning them here; simply carry out a search if you want to find them. Coming back to my original query, can a Google employee see my password? Since I can view the password using a simple button, it can definitely be unhashed (decrypted) even if it is stored encrypted. This is very different from the passwords saved in Unix-like OSes, where the saved password can never be seen in plain text. They use a one-way encryption algorithm to encrypt your passwords. This encrypted password is then stored in the passwd or shadow file. When you attempt to log in, the password you type in is encrypted again and compared with the entry in the file that stores your passwords. If they match, it must be the same password, and you are allowed access. Thus, a superuser can change my password and can block my account, but he can never see my password.

    So are his concerns well founded or will a little insight dispel his worry?

    The Answer

    SuperUser contributor Zeel helps put his mind at ease:

    Short answer: No*

    Passwords stored on your local machine can be decrypted by Chrome, as long as your OS user account is logged in. And then you can view those in plain text. At first this seems horrible, but how did you think auto-fill worked? When that password field gets filled in, Chrome must insert the real password into the HTML form element – or else the page wouldn’t work right, and you could not submit the form. And if the connection to the website is not over HTTPS, the plain text is then sent over the internet. In other words, if Chrome can’t get the plain text passwords, then they are totally useless. A one-way hash is no good, because we need to use them.

    Now the passwords are in fact encrypted; the only way to get them back to plain text is to have the decryption key. That key is your Google password, or a secondary key you can set up. When you sign into Chrome and sync, the Google servers will transmit the encrypted passwords, settings, bookmarks, auto-fill, etc., to your local machine. Here Chrome will decrypt the information and be able to use it. On Google’s end all that info is stored in its encrypted state, and they do not have the key to decrypt it. Your account password is checked against a hash to log in to Google, and even if you let Chrome remember it, that encrypted version is hidden in the same bundle as the other passwords, impossible to access. So an employee could probably grab a dump of the encrypted data, but it wouldn’t do them any good, since they would have no way to use it.* So no, Google employees cannot** access your passwords, since they are encrypted on their servers.

    * However, do not forget that any system that can be accessed by an authorized user can be accessed by an unauthorized user. Some systems are easier to break than others, but none are fail-proof. That being said, I think I will trust Google and the millions they spend on security systems over any other password storage solution. And heck, I’m a wimpy nerd, it would be easier to beat the passwords out of me than break Google’s encryption.

    ** I am also assuming that there isn’t a person who just happens to work for Google gaining access to your local machine. In that case you are screwed, but employment at Google isn’t actually a factor any more. Moral: hit Win + L before leaving your machine.

    While we agree with Zeel that it’s a pretty safe bet (as long as your computer is not compromised) that your passwords are in fact safe while stored in Chrome, we prefer to encrypt all our logins and passwords in a LastPass vault.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • How to profile a multi-threaded C++ app on Linux?

    - by anon
    I used to do all my Linux profiling with gprof. However, with my multi-threaded app, its output appears to be inconsistent. Now, I dug this up: http://sam.zoy.org/writings/programming/gprof.html However, it's from a long time ago, and in my gprof output, it appears my gprof is listing functions used by non-main threads. So, my questions are: 1) In 2010, can I easily use gprof to profile multi-threaded Linux C++ apps? (Ubuntu 9.10) 2) What other tools should I look into for profiling? Thanks!

    Read the article

  • GEM Version Requirements Deprecated

    - by Kevin Sylvestre
    When creating a new Rails project using:

        rails sample

    then creating a model using:

        script/generate model person first_name:string last_name:string

    everything is fine. However, if I add any gems to my environment.rb:

        config.gem "authlogic"

    and run the same generator, I get the following:

        /Library/Ruby/Gems/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010.

    The warning just recently appeared (I think), but I would like to fix it if possible. Any hints or similar experiences? Thanks.

    Read the article

  • Extremely slow insert from Delphi to Remote MySQL Database

    - by MarkRobinson
    Having a major hair-pulling issue with extremely slow inserts from Delphi 2010 to a remote MySQL 5.09 server. So far, I have tried ADO using the MySQL ODBC driver, and ZeosLib v7 Alpha. I have used batching and direct insert with ADO (using table access), and with Zeos I have used SQL insertion with a Query, then used Table direct mode, and also cached updates Table mode using ApplyUpdates and Commit. I have tried both technologies with compression on and off. So far I have seen pretty much the same across the board: 7.5 records per second! Now, I would from this point assume that the remote server is just slow, but MySQL Workbench is amazingly fast, and the Migration Toolkit managed the initial migration very quickly (to be honest, I don't recall how quickly, which kind of means that it was quick). I'm just about to try the MyDAC components as we already use SDAC (wish there was a multi-buy discount or that we'd chosen UniDAC instead now!). Any ideas?

    Read the article

  • Orchestrating the Virtual Enterprise

    - by John Murphy
    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise."

    Case in point: for Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.

    What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.

    Supply Chain Management Systems in a Virtual Enterprise

    Historically, enterprise software was constructed to automate the ERP, and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise, most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.

    Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.

    Case in point: almost everyone has ordered from Amazon.com at one time or another. Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track the status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, using thousands of trading partners, require a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive and custom solution to tackle this problem as there used to be no product solution available. Now, other companies who want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO).

    Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it could improve quality, speed, flexibility, and consistency without requiring an organ transplant of these highly complex legacy systems.

    Many retailers face the challenge of dealing with brick and mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick and mortar retail operation added an online business, they turned to Oracle Fusion DOO to bring these entities together.

    Disturbing the Peace with Acquisitions

    Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business, like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation.

    The Flip Side of Fulfillment

    Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how to manage complex supply in a flexible way when there are multiple trading parties involved? How to manage the supply to suppliers? How to manage critical components that need to merge in a tier two or tier three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.

    The Clear Front Runner

    No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area.
And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Read the article

  • BizTalk: Internals: the Partner Direct Ports and the Orchestration Chains

    - by Leonid Ganeline
    Partner Direct Port is one of the BizTalk hidden gems. It opens simple ways to several messaging patterns. This article is based on Kevin Lam’s blog article. That article is pretty detailed but it still leaves several unclear pieces, so I have created a sample and will show how it works from different perspectives.

    Requirements

    We should create an orchestration chain where the messages are routed from the first stage to the second stage. The messages should not be modified. All messages have the same message type.

    Common artifacts

    Source code can be downloaded here. It is interesting that all orchestrations use only one port type. It is possible because all ports are one-way ports and use only one operation. I have added a B orchestration. It helps to test the sample, showing all test messages in the channel. The Receive shape Filter is empty. A Receive Port (R_Shema1Direct) is a plain Direct Port. As you can see, the subscription expression of this direct port has only one part, the MessageType for our test schema. The Filter is empty but, as you know, a link from the Receive shape to the Port creates this MessageType expression. I use only one Physical Receive File port to send a message to all processes. Each orchestration outputs a Trace.WriteLine("<Orchestration Name>").

    Forward Binding

    This sample has three orchestrations: A_1, A_21 and A_22. A_1 is a sender; A_21 and A_22 are receivers. Here is the subscription of the A_1 orchestration. It has two parts:
    - A MessageType. The same was true for the B orchestration.
    - A ReceivePortID. There was no such parameter for the B orchestration. It was created because I have bound the orchestration port with the Physical Receive File port. This binding means the PortID parameter is added to the subscription.

    How to set up the ports? All ports involved in the message exchange should be of the same port type. It forces us to use the same operation and the same message type for the bound ports. This step is absolutely counter-intuitive. We have to choose a Partner Orchestration parameter for the sending orchestration, A_1. The first strange thing is that it is not a partner orchestration we have to choose but an orchestration port. But the strangest thing is that we have to choose exactly this orchestration and exactly this port. It is not a port from the partner, receiving orchestrations, A_21 or A_22; it is the A_1 orchestration and the S_SentFromA_1 port. Now we have to choose a Partner Orchestration parameter for the receiving orchestrations, A_21 and A_22. Nothing strange is here except the parameter name. We choose the port of the sender, the A_1 orchestration and the S_SentFromA_1 port. As you can see, the Partner Orchestration parameter for the sender and receiver orchestrations is the same.

    Testing: I dropped a test file in a file folder. There we go: a dropped file was received by B and by A_1; A_1 sent a message forward; the message was received by B, A_21 and A_22. Let’s look at the context of the message sent by A_1 on the second step:
    - A MessageType part. It is quite expected.
    - A PartnerService, a PartnerPort, an Operation. All those parameters were set up in the Partner Orchestration parameter on both bound ports.
    Now let’s see the subscription of the A_21 and A_22 orchestrations. Now it makes sense. That’s why we have chosen such a strange value for the Partner Orchestration parameter of the sending orchestration.

    Inverse Binding

    This sample has three orchestrations: A_11, A_12 and A_2. A_11 and A_12 are senders; A_2 is the receiver.

    How to set up the ports? All ports involved in the message exchange should be of the same port type. It forces us to use the same operation and the same message type for the bound ports. This step is absolutely counter-intuitive. We have to choose a Partner Orchestration parameter for the receiving orchestration, A_2. The first strange thing is that it is not a partner orchestration we have to choose but an orchestration port. But the strangest thing is that we have to choose exactly this orchestration and exactly this port. It is not a port from the partner, sending orchestrations, A_11 or A_12; it is the A_2 orchestration and the R_SentToA_2 port. Now we have to choose a Partner Orchestration parameter for the sending orchestrations, A_11 and A_12. Nothing strange is here except the parameter name. We choose the port of the receiver, the A_2 orchestration and the R_SentToA_2 port.

    Testing: I dropped a test file in a file folder. There we go: a dropped file was received by B, A_11 and A_12; A_11 and A_12 sent two messages forward; the messages were received by B and A_2. Let’s see the context of the messages sent on the second step:
    - A MessageType part. It is quite expected.
    - A PartnerService, a PartnerPort, an Operation. All those parameters were set up in the Partner Orchestration parameter on both bound ports.
    Here is the subscription of the A_2 orchestration.

    Models

    I had a hard time trying to explain the Partner Direct Ports in simple terms. I have finished with this model:
    Forward Binding: Receivers know a Sender. The Sender doesn’t know the Receivers. Publishers know a Subscriber. The Subscriber doesn’t know the Publishers. 1 -> 1, 1 -> M.
    Inverse Binding: Senders know a Receiver. The Receiver doesn’t know the Senders. Subscribers know a Publisher. The Publisher doesn’t know the Subscribers. 1 -> 1, M -> 1.

    Notes

    Orchestration chain: it’s worth noting that the Partner Direct Port Binding creates a chain opened from one side and closed from the other. The Forward Binding: a new Receiver can be added at run-time; the Sender cannot be changed without design-time changes in the Receivers. The Inverse Binding: a new Sender can be added at run-time; the Receiver cannot be changed without design-time changes in the Senders.

    Read the article

  • Web Performance testing using VS2010 "Testing a file download"

    - by cheedep
    Hi all, I am trying out the VS 2010 testing tools for the first time. I tried recording a web performance test, and my actions included a file download implemented as in the KB article here, http://support.microsoft.com/kb/812406, by streaming chunks of 10000 bytes. However, my test is failing at the download, saying "The response stream has been closed". Please help me understand why it is happening this way, and also any suggestions on how you would test such a file download. My main aim was to see how the download performs in a load test over an intercontinental 350 kbps connection with files of about 30-50 MB. Thanks.
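
    For reference, here is a minimal sketch of the chunked-download pattern that KB article describes (a 10,000-byte buffer written and flushed while the client stays connected). The handler name and file path are hypothetical, not taken from the question:

        using System.IO;
        using System.Web;

        public class DownloadHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // Hypothetical file path; the real app streams whatever file was requested.
                string path = context.Server.MapPath("~/files/report.pdf");
                byte[] buffer = new byte[10000];

                context.Response.ContentType = "application/pdf";
                context.Response.AddHeader("Content-Disposition", "attachment; filename=report.pdf");

                using (Stream stream = File.OpenRead(path))
                {
                    int read;
                    // Write the file in 10,000-byte chunks, flushing each one,
                    // and stop as soon as the client disconnects.
                    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0
                           && context.Response.IsClientConnected)
                    {
                        context.Response.OutputStream.Write(buffer, 0, read);
                        context.Response.Flush();
                    }
                }
            }

            public bool IsReusable
            {
                get { return false; }
            }
        }

    The point of the loop is that the response is streamed rather than buffered in full, which is what matters for 30-50 MB files under load.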

    Read the article

  • In MVC2, how do I validate fields that aren't in my data model?

    - by Andy Evans
    I am playing with MVC2 in VS 2010 and am really getting to like it. In a sandbox application that I've started from scratch, my database is represented in an ADO.NET entity data model, and I have done much of the validation for fields in my data model using Scott Guthrie's "buddy class" approach, which has worked very well. However, in a user registration form that I have designed and am experimenting with, I'd like to add a 'confirm email address' or a 'confirm password' field. Since these fields obviously wouldn't exist in my data model, how would I validate these fields client-side and server-side? I would like to implement something like 'Html.ValidationMessageFor', but these fields don't exist in the data model. Any help would be greatly appreciated.
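
    One way to approach this, shown here as a minimal sketch rather than the definitive MVC2 answer (the class and property names are hypothetical): bind the form to a dedicated view model instead of the entity, and put the confirmation check in a class-level validation attribute so the ordinary DataAnnotations pipeline runs it server-side:

        using System;
        using System.ComponentModel.DataAnnotations;

        // Class-level attribute that checks two string properties for equality.
        [AttributeUsage(AttributeTargets.Class)]
        public class FieldsMatchAttribute : ValidationAttribute
        {
            private readonly string _first;
            private readonly string _second;

            public FieldsMatchAttribute(string first, string second)
            {
                _first = first;
                _second = second;
            }

            // MVC2 evaluates class-level attributes against the whole model instance.
            public override bool IsValid(object value)
            {
                Type type = value.GetType();
                string a = type.GetProperty(_first).GetValue(value, null) as string;
                string b = type.GetProperty(_second).GetValue(value, null) as string;
                return string.Equals(a, b, StringComparison.Ordinal);
            }
        }

        // View model for the registration form only; it never touches the entity model.
        [FieldsMatch("Email", "ConfirmEmail", ErrorMessage = "The email addresses do not match.")]
        public class RegistrationViewModel
        {
            [Required]
            public string Email { get; set; }

            [Required]
            public string ConfirmEmail { get; set; }
        }

    Property-level attributes such as [Required] still surface through Html.ValidationMessageFor; the class-level rule typically shows up in Html.ValidationSummary() after postback.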

    Read the article

  • Purpose of Trigraph sequences in C++?

    - by Kirill V. Lyadvinsky
    According to the C++'03 Standard 2.3/1:

        Before any other processing takes place, each occurrence of one of the following sequences of three characters ("trigraph sequences") is replaced by the single character indicated in Table 1.

        ----------------------------------------------------------------------------
        | trigraph | replacement | trigraph | replacement | trigraph | replacement |
        ----------------------------------------------------------------------------
        |   ??=    |      #      |   ??(    |      [      |   ??<    |      {      |
        |   ??/    |      \      |   ??)    |      ]      |   ??>    |      }      |
        |   ??'    |      ^      |   ??!    |      |      |   ??-    |      ~      |
        ----------------------------------------------------------------------------

    In real life that means that the code printf( "What??!\n" ); will result in printing What| because ??! is a trigraph sequence that is replaced with the | character. My question is: what is the purpose of using trigraphs? Is there any practical advantage to using trigraphs? UPD: In the answers it was mentioned that some European keyboards don't have all the punctuation characters, so do non-US programmers have to use trigraphs in everyday life? UPD2: Visual Studio 2010 has trigraph support turned off by default.

    Read the article

  • Is there anything exciting in perl 5.11 (to become perl 5.12)?

    - by Ether
    Perl 5.11 is now released! Is there anything really exciting in this release, or is it mostly maintenance patches? (From what I've read so far, it appears to be a rollup of improvements we have already seen in prior releases.) Links: the CHANGES file, Jesse Vincent's announcement, chromatic's blog post. 5.11 is the development release of what will become 5.12. The release process itself is changing to a monthly release model. UPDATE: Perl 5.12 is now released (April 12, 2010). Links: the CHANGES file, Jesse Vincent's announcement.

    Read the article

  • Cannot read configuration file due to insufficient permissions

    - by mike
    Okay, I realize there are many questions relating to this error; I have read several questions and answers without resolving my problem. I have an MVC site that I'm trying to debug on the local IIS web server. I checked the option to use local IIS in the project properties and I've created a virtual directory in IIS. The error I get in Visual Studio is: Unable to start debugging on the web server. In IIS I try to browse the site but get the error: Cannot read configuration file due to insufficient permissions, Config File \?\C:\Users\Mike\Documents\Visual Studio 2010\Projects\MvcApplication1\MvcApplication1\web.config. I've set permissions for the pool identity on the web.config and the whole project folder. I've tried the LocalSystem identity, no luck! Please help me resolve this. I've spent several hours trying to fix this.

    Read the article

  • Dynamically changing Wordpress permalinks

    - by conrad
    I am trying to write a plugin which rewrites "post_link", "page_link", "category_link", and "tag_link" with an action hook in WordPress. It's similar to this:

        add_action("post_link", "the_rewrite_function");
        function the_rewrite_function($link) {
            ...
            return $new_link;
        }

    For example, the permalink to the post is http://www.example.com/2010/11/11/the-post-name/ and I am changing it with the function to http://www.example.com/2010/11/11/the-awesome-post-name/. Of course, when I do this the new permalink is going to 404 (as expected). What can I do to make the new permalink go to the original and working one? Please consider this: it should work on any permalink structure, so a permalink-structure-specific solution won't do any good. I need to solve this inside the plugin. Thanks!

    Read the article

  • Related objects in ReportViewer 10.0.0.0 give #Error

    - by Bart-Jan Poirot
    In my VB.net apps I make use of Linq2SQL and ReportViewer with RDLC reports. With Visual Studio 2010 they upgraded this ReportViewer component so you can use the newer RDL specification from 2008. Now I have a problem showing related objects. For example, assume you provide an order to the data source of the report; then you can show something like Fields!Customer.Value.Name, where Customer is a related entity. I also got this error in my immediate window: Warning: The Value expression for the textrun ‘Name_1.Paragraphs[0].TextRuns[0]’ contains an error: The specified operation is not valid. (rsRuntimeErrorInExpression)
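
    A workaround often used with RDLC data sources, shown here as a hedged C# sketch rather than the fix for this particular report (the entity and property names are hypothetical): flatten the related entity into the rows before binding, so the report expression only ever reads a simple property:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical LINQ to SQL entities standing in for the designer-generated classes.
        public class Customer { public string Name { get; set; } }
        public class Order { public int OrderId { get; set; } public Customer Customer { get; set; } }

        // Flat row type: the report binds to CustomerName directly, so its expressions
        // never have to walk a related entity at render time.
        public class OrderReportRow
        {
            public int OrderId { get; set; }
            public string CustomerName { get; set; }
        }

        public static class ReportData
        {
            // Project the entities into the flat shape before handing the list
            // to the ReportViewer's ReportDataSource.
            public static List<OrderReportRow> BuildReportRows(IQueryable<Order> orders)
            {
                return orders.Select(o => new OrderReportRow
                             {
                                 OrderId = o.OrderId,
                                 CustomerName = o.Customer.Name
                             })
                             .ToList();
            }
        }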

    Read the article

  • Why Is Vertical Monitor Resolution So Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.). But why exactly is this the case? Is it arbitrary or is there something more at work?

    Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers:

    YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it’s convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc.) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it’s usual enough that it’s viewable). Is it merely a pleasing-the-eye situation?

    So why have the display be a multiple of 360?

    The Answer

    SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists but a history of screen design in the process:

    Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field of psychooptics meeting marketing.

    First of all, why are the vertical resolutions on YouTube multiples of 360? This is of course just arbitrary; there is no real reason this is the case. The reason is that resolution here is not the limiting factor for YouTube videos – bandwidth is. YouTube has to re-encode every video that is uploaded a couple of times, and tries to use as few re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher-res mobile there’s 480p, and for the computer crowd there is 360p for 2xISDN/multiuser landlines, 720p for DSL and 1080p for higher speed internet. For a while there were some other codecs than h.264, but these are slowly being phased out with h.264 having essentially ‘won’ the format war and all computers being outfitted with hardware codecs for this.

    Now, there is some interesting psychooptics going on as well. As I said: resolution isn’t everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn’t magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution.

    So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates; they’d rather use their bandwidth for other stuff. This was researched quite a long time ago and is the big reason why we went from 720×576 (415 kpix) to 1280×720 (922 kpix), and then again from 1280×720 to 1920×1080 (2 MP). Stuff in between is not a viable optimization target. And again, 1440p is about 3.7 MP, another ~2x increase over HD. You will see a difference there. 4K is the next step after that.

    Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays; back in the day they used to be multiples of 128. This is something that just grew out of the LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen that control how bright each subpixel is. Because historically, for reasons I don’t really know for sure, probably memory constraints, these multiple-of-128 or multiple-of-120 resolutions already existed, the industry standard line drivers became drivers with 360 line outputs (1 per subpixel). If you would tear down your 1920×1080 screen, I would be putting money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that’s 16:9. Guess how obvious that resolution choice was back when 16:9 was ‘invented’.

    Then there’s the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be in a wide-screen view, and this is where the great anamorphic revolution of the ’60s came from when films were shot in ever wider aspect ratios. Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance for reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s.

    However, again this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and as the computer as media consumption device evolved, people didn’t necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most ‘immersion factor’ if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen.

    But there’s more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum amount of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not amount of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9 and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up and that’s what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh… I think that about covers all the major aspects here.

    There’s more of course; bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today’s considerations, this is about all you need to know.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Display continuous dates in Pivot Chart

    - by Douglas
    I have a set of data in a pivot table with date times and events. I've made a pivot chart with this data, and grouped the data by day and year, then display a count of events for each day. So, my horizontal axis goes from 19 March 2007 to 11 May 2010, and my vertical axis is numeric, going from zero to 140. For some days, I have zero events. These days don't seem to be shown on the horizontal axis, so 2008 is narrower than 2009. How do I display a count of zero for days with no events? I'd like my horizontal axis to be continuous, so that it does not miss any days, and every month ends up taking up the same amount of horizontal space. (This question is similar to the unanswered question here, but I'd rather not generate a table of all the days in the last x number of years just to get a smooth plot!)

    Read the article

  • PHP script to return a specific XML tag value

    - by Alaa
    I need a PHP script to extract the IP address from the following XML page: www.ip-address.com/test.xml

        <dnstools>
          <service_provider>Domain Tools</service_provider>
          <provider_url>http://www.domaintools.com/</provider_url>
          <date>Wed, 12 May 2010 00:43:07 GMT</date>
          <unix_time>1273624987</unix_time>
          <ip_address>94.252.157.241</ip_address>
          <hostname>94.252.157.241</hostname>
          <blacklist_status>Clear</blacklist_status>
          <remote_port>43577</remote_port>
          <protocol>HTTP/1.0</protocol>
          <connection>keep-alive</connection>
          <keep_alive/>

    Read the article

  • basic boost date_time input format question

    - by Chris H
    I've got a pointer to a string (char *) as input. The date/time looks like this: Sat, 10 Apr 2010 19:30:00 I'm only interested in the date, not the time. I created an "input_facet" with the format I want:

        boost::date_time::date_input_facet inFmt("%a %d %b %Y");

    but I'm not sure what to do with it. Ultimately I'd like to create a date object from the string. I'm pretty sure I'm on the right track with that input facet and format, but I have no idea how to use it. Thanks.

    Read the article

  • LIKE query for DateTime in NHibernate

    - by Anry
    For a column of type varchar I could write such a query:

        public IList<Order> GetByName(string orderName)
        {
            using (ISession session = NHibernateHelper.OpenSession())
            {
                return session.CreateCriteria<Order>()
                    .Add(Restrictions.Like("Name", string.Format("%{0}%", orderName)))
                    .List<Order>();
            }
        }

    How do I write a similar LIKE query for a column that has type datetime?

        public IList<Order> GetByDateTime(DateTime dateTime)
        {
            using (ISession session = NHibernateHelper.OpenSession())
            {
                return //LIKE-query
            }
        }

    That is, if the method is passed the date and part of the time (e.g. "25.03.2010 19"), then it should display all orders carried out in this period of time.
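
    A LIKE against a datetime column usually isn't the natural fit in NHibernate; a range restriction over the period expresses the same intent. A minimal sketch, assuming the goal is "all orders within the given date and hour" and that the mapped property is called OrderDate (a hypothetical name):

        public IList<Order> GetByDateTime(DateTime dateTime)
        {
            using (ISession session = NHibernateHelper.OpenSession())
            {
                // Truncate to the start of the hour, then take a one-hour window.
                DateTime from = new DateTime(dateTime.Year, dateTime.Month, dateTime.Day, dateTime.Hour, 0, 0);
                DateTime to = from.AddHours(1);

                return session.CreateCriteria<Order>()
                    .Add(Restrictions.Ge("OrderDate", from))   // OrderDate is a hypothetical property name
                    .Add(Restrictions.Lt("OrderDate", to))
                    .List<Order>();
            }
        }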

    Read the article

  • Is strftime (hour) showing wrong time?

    - by ander163
    I'm using this line to get the beginning time of the first day of the month: t = Time.now.to_date.beginning_of_month.beginning_of_day When I display this using t.strftime("%A %b %e @ %l:%m %p") it shows: Monday Feb 1 @ 12:02 AM The hour is always 12 (instead of 00), and, even weirder, the minute changes to match the month in integers. For the February date, it shows 12:02 AM. I use .prior_month and .next_month on the variable to move forwards or backwards in time. So when I move to June, this would display as Tuesday June 1 @ 12:06 AM. But when I just show the value of "t" using a straight t.to_s, I get this time of 00:00:00, which is what I expect: Mon Feb 01 00:00:00 -0700 2010 A similar error occurs using end_of_day, but the hour is always 11 PM and the minute is the same integer value that matches the month in integers, i.e. the time is 11:06 PM in June, 11:02 PM in February. Quirky? Admittedly a noob to Rails. Thanks for any comments or explanations.

    Read the article

  • Download attachment issue with IE6-8 - non-SSL

    - by Arun P Johny
    I'm facing an issue with file downloads in IE6-8 in a non-SSL environment. I've seen a lot of articles about the IE attachment download issue with SSL. As per the articles I tried to set the values of the Pragma and Cache-Control headers, but still no luck with it. These are my response headers:

        Cache-Control: private, max-age=5
        Date: Tue, 25 May 2010 11:06:02 GMT
        Pragma: private
        Content-Length: 40492
        Content-Type: application/pdf
        Content-Disposition: Attachment;Filename="file name.pdf"
        Server: Apache-Coyote/1.1

    I've set the header values after going through some of these sites: KB 812935, KB 316431. But those items are related to SSL. I've checked the response body and headers using Fiddler, and the response body is proper. I'm using window.open(url, "_blank") to download the file; if I change it to window.open(url, "_parent") or change the "Content-Disposition" to 'inline;Filename="file name.pdf"' it works fine. Please help me to solve this problem.

    Read the article

  • Local web application or Windows application for my system

    - by IHS
    Hello all, I am going to design a system that will keep track of publications due by researchers in my workplace. Needs for this project: 1. a database housing the information; 2. automatic reports must be generated. The users will upload PDFs and we have around 25 researchers. I can't decide which one is better for my system. My questions are: 1. Local web application or Windows application? 2. Access or SQL as the database? 3. I have Visual Studio 2010 for student use; could I use it or do I need to buy one? Thanks in advance. Mark from New Zealand

    Read the article

  • VS2008 Express no longer opens projects after VS2010 Express Phone

    - by Alikar
    So I installed the new April release of VS2010 Express for Windows Phone 7, and now VS2008 Express C# no longer opens .csproj files. I'm at a loss as to why this is, as I've not changed anything in the files themselves. I'm currently uninstalling Windows Phone 7 in the hope that I will be able to open my files again. I have a feeling, though, that I'm going to need to uninstall 2008 as well and then reinstall it to get everything to work. In the future, how should I install 2010 Express for Windows Phone 7 to prevent it from interfering with 2008 Express?

    Read the article

  • How do I convert this XML to KML?

    - by TruMan1
    I am a little new to this, but I need to convert the below XML to KML format so I can feed it into Google Maps. Can anyone help with this?

        <messageList>
          <totalCount>1</totalCount>
          <message>
            <esn>0-7396996</esn>
            <esnName>JOHN</esnName>
            <messageType>TEST</messageType>
            <messageDetail>ALL IS WELL AT CURRENT LOCATION.</messageDetail>
            <timestamp>2010-05-24T00:39:12.000Z</timestamp>
            <timeInGMTSecond>1274661552</timeInGMTSecond>
            <latitude>25.19483</latitude>
            <longitude>65.7162</longitude>
          </message>
        </messageList>
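
    A minimal sketch of one way to do the conversion in C# with LINQ to XML (the input and output file names are hypothetical); each <message> becomes a KML Placemark, and note that KML coordinates are written longitude-first:

        using System.Linq;
        using System.Xml.Linq;

        class MessageListToKml
        {
            static void Main()
            {
                XDocument source = XDocument.Load("messages.xml");   // hypothetical input file
                XNamespace kml = "http://www.opengis.net/kml/2.2";

                // One Placemark per <message>, with the point at (longitude, latitude).
                var placemarks =
                    from m in source.Descendants("message")
                    select new XElement(kml + "Placemark",
                        new XElement(kml + "name", (string)m.Element("esnName")),
                        new XElement(kml + "description", ((string)m.Element("messageDetail") ?? "").Trim()),
                        new XElement(kml + "Point",
                            new XElement(kml + "coordinates",
                                (string)m.Element("longitude") + "," + (string)m.Element("latitude"))));

                XDocument output = new XDocument(
                    new XElement(kml + "kml",
                        new XElement(kml + "Document", placemarks)));

                output.Save("messages.kml");                          // feed this file to Google Maps
            }
        }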

    Read the article
