Search Results

Search found 2259 results on 91 pages for 'backward compatibility'.

Page 23/91 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Why is C++ backward compatible with C? Why isn't there some "pure" C++ language?

    - by gokoon
    C and C++ are different languages, blablabla, we know that. But if those languages are different, why is it still possible to use functions like malloc or free? I'm sure there are all sorts of dusty things C++ has because of C, but since C++ is another language, why not remove those things to make it a little less bloated and more clean and clear? Is it because it allows programmers to work without the OO model, or because some compilers don't support the high-level abstract features of C++?
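    A minimal C++ sketch of the heritage the question refers to (illustrative only, not from the original post): the C allocation functions still compile in a C++ translation unit, right next to the idiomatic C++ alternative.

      #include <cstdlib>   // C compatibility: malloc/free are still part of C++
      #include <memory>    // the "pure" C++ alternative

      int main() {
          // Legal C++ only because of the C heritage the question mentions
          int* raw = static_cast<int*>(std::malloc(10 * sizeof(int)));
          std::free(raw);

          // What removing that heritage would force everyone to write instead
          std::unique_ptr<int[]> owned(new int[10]);
          return 0;
      }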

    Read the article

  • Android 2.1 gallery not backward compatible with Cupcake version, now what?

    - by Schermvlieger
    I don't know why, but in Eclair, the default (non-fancy) gallery app changed its behaviour from the Cupcake version, and it broke one of my commercial applications :-( Firstly, when long-pressing a gallery and choosing "Diashow", it does not publish an Intent to be picked up by any application that implements the Intent filter anymore. Instead, it will directly call "com.android.gallery/com.android.camera.ViewImage" with extras. Question: is it still possible to intercept this intent and allow the user to choose my application to do the Diashow? Secondly, the intent extras for the VIEW intent are messed up (in my build of 2.1 anyway): in Cupcake, the BucketId of the picture was provided in the Intent's query parameter, but in 2.1 the BucketId has moved to the Intent's extras. Except that it is not passing the BUCKET_ID, but the unlocalized BUCKET_DISPLAY_NAME instead :-/ Question: how can I still get the unique BUCKET_ID from the intent, so that I do not have to work with a potentially non-unique BUCKET_DISPLAY_NAME? Is there anybody out there who has come up with a working solution for these problems? I thought the whole idea of Android Intents was to be able to integrate your applications with the base Android environment, but my build of 2.1 proves that this idea still lives in the land of Theory :-(
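    One hedged sketch of a possible workaround for the second question (not a confirmed fix for this build of 2.1; the class and method names are invented for illustration, only the MediaStore column names are standard Android API): map the BUCKET_DISPLAY_NAME found in the extras back to a BUCKET_ID via a MediaStore query, accepting that the display name may not be unique.

      // Best-effort lookup of a BUCKET_ID from the (possibly non-unique)
      // BUCKET_DISPLAY_NAME found in the incoming intent's extras.
      import android.content.ContentResolver;
      import android.database.Cursor;
      import android.provider.MediaStore;

      public final class BucketResolver {
          public static String bucketIdForDisplayName(ContentResolver resolver,
                                                      String displayName) {
              Cursor c = resolver.query(
                      MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
                      new String[] { MediaStore.Images.ImageColumns.BUCKET_ID },
                      MediaStore.Images.ImageColumns.BUCKET_DISPLAY_NAME + "=?",
                      new String[] { displayName },
                      null);
              try {
                  return (c != null && c.moveToFirst()) ? c.getString(0) : null;
              } finally {
                  if (c != null) c.close();
              }
          }
      }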

    Read the article

  • What compatibility trade-offs do we need to make in order to use a hardened SSL config for Nginx?

    - by nathan.f77
    I found some hardened SSL settings in github.com/ioerror/duraconf. Here is the header from the config: This is an example of a high security, somewhat compatible SSLv3 and TLSv1 enabled HTTPS proxy server. The server only allows modes that provide perfect forward secrecy; no other modes are offered. Anonymous cipher modes are disabled. This configuration does not include the HSTS header to ensure that users do not accidentally connect to an insecure HTTP service after their first visit. It only supports strong ciphers in PFS mode: ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; # Only strong ciphers in PFS mode ssl_ciphers ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA; ssl_protocols SSLv3 TLSv1; If we were to use these settings on our website, what does "somewhat compatible" mean? For example, would IE6 still be able to connect?

    Read the article

  • Do RAID controllers commonly have SATA drive brand compatibility issues?

    - by Jeff Atwood
    We've struggled with the RAID controller in our database server, a Lenovo ThinkServer RD120. It is a rebranded Adaptec that Lenovo / IBM dubs the ServeRAID 8k. We have patched this ServeRAID 8k up to the very latest and greatest: RAID bios version RAID backplane bios version Windows Server 2008 driver This RAID controller has had multiple critical BIOS updates even in the short 4 month time we've owned it, and the change history is just.. well, scary. We've tried both write-back and write-through strategies on the logical RAID drives. We still get intermittent I/O errors under heavy disk activity. They are not common, but serious when they happen, as they cause SQL Server 2008 I/O timeouts and sometimes failure of SQL connection pools. We were at the end of our rope troubleshooting this problem. Short of hardcore stuff like replacing the entire server, or replacing the RAID hardware, we were getting desperate. When I first got the server, I had a problem where drive bay #6 wasn't recognized. Switching out hard drives to a different brand, strangely, fixed this -- and updating the RAID BIOS (for the first of many times) fixed it permanently, so I was able to use the original "incompatible" drive in bay 6. On a hunch, I began to assume that the Western Digital SATA hard drives I chose were somehow incompatible with the ServeRAID 8k controller. Buying 6 new hard drives was one of the cheaper options on the table, so I went for 6 Hitachi (aka IBM, aka Lenovo) hard drives under the theory that an IBM/Lenovo RAID controller is more likely to work with the drives it's typically sold with. Looks like that hunch paid off -- we've been through three of our heaviest load days (mon,tue,wed) without a single I/O error of any kind. Prior to this we regularly had at least one I/O "event" in this time frame. It sure looks like switching brands of hard drive has fixed our intermittent RAID I/O problems! While I understand that IBM/Lenovo probably tests their RAID controller exclusively with their own brand of hard drives, I'm disturbed that a RAID controller would have such subtle I/O problems with particular brands of hard drives. So my question is, is this sort of SATA drive incompatibility common with RAID controllers? Are there some brands of drives that work better than others, or are "validated" against particular RAID controller? I had sort of assumed that all commodity SATA hard drives were alike and would work reasonably well in any given RAID controller (of sufficient quality).

    Read the article

  • What Filesystem should be used for a 4TB drive for both Windows and OSX compatibility? [closed]

    - by Nicholas Yost
    Note: I am aware of similar questions. The ones I've seen here are for Windows, OSX, and Linux (which I do not need). I also can use Mountain Lion, which the other questions did not mention. I was going to use NTFS, but OSX Mountain Lion can only read that filesystem and not write to it for some reason. I want to use something native between OSX and Windows, as I don't want to risk losing the data over filesystem incompatibilities. I have USB 3.0 and want something that will allow files greater than 4GB. I do not mind installing a small set of drivers on the Windows machine(s), but I would strongly prefer to leave the Mac machine untouched. Thanks!

    Read the article

  • what are the following keyboard shortcuts in a terminal?

    - by kloop
    I am trying to figure out a few keyboard shortcuts in a terminal in Mac OS X (and Linux). In the command line: go to the next word, go to the previous word, go to the end of the line, and go to the beginning of the line. This will make it easier to change commands. Right now, I am using the left/right arrow keys, which is time-consuming. I used bind -p as suggested below. EDIT: What do the following key bindings mean? "\e\e[D": backward-word "\e[1;5D": backward-word "\e[5D": backward-word "\eb": backward-word and: "\e\e[C": forward-word "\e[1;5C": forward-word "\e[5C": forward-word "\ef": forward-word

    Read the article

  • What rules govern cross-version compatibility for .NET applications and the C# language?

    - by John Feminella
    For some reason I've always had trouble remembering the backwards/forwards compatibility guarantees made by the framework, so I'd like to put that to bed forever. Suppose I have two assemblies, A and B. A is older and references .NET 2.0 assemblies; B references .NET 3.5 assemblies. I have the source for A and B, Ax and Bx, respectively; they are written in C# at the 2.0 and 3.0 language levels. (That is, Ax uses no features that were introduced later than C# 2.0; likewise Bx uses no features that were introduced later than 3.0.) I have two environments, C and D. C has the .NET 2.0 framework installed; D has the .NET 3.5 framework installed. Now, which of the following can/can't I do? Running: run A on C? run A on D? run B on C? run B on D? Compiling: compile Ax on C? compile Ax on D? compile Bx on C? compile Bx on D? Rewriting: rewrite Ax to use features from the C# 3 language level, and compile it on D, while having it still work on C? rewrite Bx to use features from the C# 4 language level on another environment E that has .NET 4, while having it still work on D? Referencing from another assembly: reference B from A and have a client app on C use it? reference B from A and have a client app on D use it? reference A from B and have a client app on C use it? reference A from B and have a client app on D use it? More importantly, what rules govern the truth or falsity of these hypothetical scenarios?
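    A small illustration of the language-level versus framework-version distinction these scenarios hinge on (a hypothetical sketch, assuming the C# 3.0 compiler that ships with .NET 3.5 is used on environment D): the source uses C# 3 features but references only .NET 2.0 types, so the compiled assembly can still run on environment C.

      // Needs a C# 3.0+ compiler to build, but the emitted IL depends only on
      // .NET 2.0 assemblies, so the result runs where only the 2.0 framework exists.
      using System;

      internal delegate int IntOp(int x);      // declared locally to avoid System.Core (3.5)

      internal static class LanguageLevelDemo
      {
          private static void Main()
          {
              IntOp doubler = x => x * 2;      // lambda: C# 3 syntax, compiled away at build time
              var answer = doubler(21);        // 'var' is likewise purely a compile-time feature
              Console.WriteLine(answer);       // at runtime this touches only mscorlib 2.0
          }
      }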

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case: PSARC 2010/312 Link-editor guidance The History Of Unfortunate Link-Editor Defaults The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Extensible Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading, and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: For backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort. 
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines: Change ld Defaults Since the world would be a better place if the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting. New Link-Editor One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of existing code in the world built around the existing ld command, reaching back to the 1970s. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues. An Option To Make ld Do The Right Things Automatically This line of reasoning is best summarized by a CR filed in 2005, entitled 6239804 make it easier for ld(1) to do what's best The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction: The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best. The intent is that when a user opts into -z best, they understand that -z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change.
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards. ld Command Line Options The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement. 
Guidance suggestions always offers specific advice, and not vague generalizations. You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed. As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that let's you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows: % ld -Dargs -z guidance=foo,nodefs debug: debug: Solaris Linkers: 5.11-1.2275 debug: debug: arg[1] option=-D: option-argument: args debug: arg[2] option=-z: option-argument: guidance=foo,nodefs debug: warning: unrecognized -z guidance item: foo The -z fatal-warning option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms: -z fatal-warnings | nofatal-warnings --fatal-warnings | --no-fatal-warnings The -z fatal-warnings and the --fatal-warnings option cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings option cause the link-editor to treat warnings as non-fatal. This is the default behavior. The -z guidance option is defined as follows: -z guidance[=item1,item2,...] Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows. Specify Required Dependencies Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs. Do Not Specify Non-Required Dependencies Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused. Lazy Loading Dependencies should be identified for lazy loading. 
Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload. Direct Bindings Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option is encountered. This guidance can be disabled with -z guidance=nodirect. Pure Text Segment Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn or -z textoff options are encountered. This guidance can be disabled with -z guidance=notext. Mapfile Syntax All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile. Library Search Path Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath. In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects. Example The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings: Does not specify all it's dependencies Specifies dependencies it does not use Does not use direct bindings Uses a version 1 mapfile Contains relocations to the readonly allocable text (not PIC) This scenario is sadly very common — many shared objects have one or more of these issues. % cat hello.c #include <stdio.h> #include <unistd.h> void hello(void) { printf("hello user %d\n", getpid()); } % cat mapfile.v1 # This version 1 mapfile will trigger a guidance message % cc hello.c -o hello.so -G -M mapfile.v1 -lelf As you can see, the operation completes without error, resulting in a usable object. 
However, turning on guidance reveals a number of things that could be better: % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance ld: guidance: version 2 mapfile syntax recommended: mapfile.v1 ld: guidance: -z lazyload option recommended before first dependency ld: guidance: -B direct or -z direct option recommended before first dependency Undefined first referenced symbol in file getpid hello.o (symbol belongs to implicit dependency /lib/libc.so.1) printf hello.o (symbol belongs to implicit dependency /lib/libc.so.1) ld: warning: symbol referencing errors ld: guidance: -z defs option recommended for shared objects ld: guidance: removal of unused dependency recommended: libelf.so.1 warning: Text relocation remains referenced against symbol offset in file .rodata1 (section) 0xa hello.o getpid 0x4 hello.o printf 0xf hello.o ld: guidance: position independent (PIC) code recommended for shared objects ld: guidance: see ld(1) -z guidance for more information Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things: % cat mapfile.v2 # This version 2 mapfile will not trigger a guidance message $mapfile_version 2 % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance There are situations in which the guidance does not fit the object being built. For instance, you want to build an object without direct bindings: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance ld: guidance: -B direct or -z direct option recommended before first dependency ld: guidance: see ld(1) -z guidance for more information It is easy to disable that specific guidance warning without losing the overall benefit from allowing the remainder of the guidance feature to operate: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect Conclusions The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact. Deciding to use guidance is likely to cause some up front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will replay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

    Read the article

  • Problem with uninstalling Microsoft .NET Framework 4 Extended Beta 2 on Windows Vista

    - by empi
    Hi. I have a problem with uninstalling Microsoft .NET Framework 4 Extended Beta 2. I wanted to uninstall it but I cancelled the process. Then I was asked whether there was a problem with the uninstallation and whether I wanted to change to compatibility mode. I accidentally chose to change to compatibility mode. Since then, every time I try to uninstall it, I get an error that the installer cannot run in compatibility mode. How can I fix it? I have looked for the installer file and it is not marked to run in compatibility mode. I cannot find the file that was marked to run in compatibility mode after I answered the mentioned question. Thanks in advance for your help.

    Read the article

  • SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014

    - by Pinal Dave
    SQL Server 2014 has a new cardinality estimation logic/algorithm. The cardinality estimation logic is responsible for the quality of query plans and is a major factor in the performance of any query. This logic was not updated for quite a while, but in the latest version, SQL Server 2014, it has been re-designed. The new logic now incorporates various assumptions and algorithms for OLTP and warehousing workloads. Cardinality estimates are a prediction of the number of rows in the query result. The query optimizer uses these estimates to choose a plan for executing the query. The quality of the query plan has a direct impact on improving query performance. ~ Source: MSDN Let us see a quick example of how cardinality estimation improves performance for a query. I will be using the AdventureWorks database for my example. Before we start with this demonstration, remember that even though you have SQL Server 2014, to see the effect of the new cardinality estimates you will need your database compatibility mode set to 120, which is for SQL Server 2014. If your server instance is SQL Server 2014 but you have set your database compatibility mode to 110 or any other earlier version, you will get query performance like an older version of SQL Server. Now we will execute the following query in two different compatibility modes and compare its performance. (Note that my SQL Server instance is of version 2014). USE AdventureWorks2014 GO -- ------------------------------- -- NEW Cardinality Estimation ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120 GO EXEC [dbo].[uspGetManagerEmployees] 44 GO -- ------------------------------- -- Old Cardinality Estimation ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110 GO EXEC [dbo].[uspGetManagerEmployees] 44 GO Result of Statistics IO Compatibility level 120 Table ‘Person’. Scan count 0, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table ‘Employee’. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table ‘Worktable’. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table ‘Worktable’. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Compatibility level 110 Table ‘Worktable’. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table ‘Person’. Scan count 0, logical reads 137, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table ‘Employee’. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table ‘Worktable’. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. You will notice that in the case of compatibility level 110 there are 137 logical reads from table Person, whereas in the case of compatibility level 120 there are only 6 logical reads from table Person. This drastically improves the performance of the query. If we enable the execution plan, we can see the same there as well. I hope you will find this quick example helpful. You can read more about this in my latest Pluralsight Course.
Reference: Pinal Dave (http://blog.SQLAuthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Cross-Browser Extension Installation now Possible with Opera and Google Chrome

    - by Akemi Iwaya
    People have been curious if there would be cross-browser compatibility for extensions due to Opera’s recent switch to the browser engine that Google Chrome uses. That question has now been answered. The OMG! Chrome! Blog has put together a nice tutorial on how to get cross-browser extension compatibility set up and working with your browser of choice. Screenshot courtesy of OMG! Chrome! Blog. While it is not surprising that the first steps in cross-browser extension compatibility have been taken, it will be interesting to see how it develops as the process is refined and further development occurs with the ‘new’ Opera. What are your thoughts on this? Is cross-browser extension compatibility really that important? Perhaps you feel that it does not matter? Let us know your thoughts in the comments!    

    Read the article

  • Oracle Tutor: *** CAUTION to Word .docx Users ***

    - by [email protected]
    Microsoft released a security update, KB969604, for Office 2007 (around June 2009). This update causes document variables within Word docx files to be scrambled. This update might still be pushed out via Office 2007 updates. DO NOT save files as docx using MS Office 2007 until you apply the MS hotfix # 970942 available here. If you are using Windows XP with Office 2003 or Office 2000 and have installed an older Office 2007 compatibility pack, documents saved as docx may also cause the scrambled document variables. Installing the 2007 compatibility pack published on 1/6/2010 (version 4) will prevent the document variables from becoming corrupt. Those on Windows 2000 may not be able to install the latest compatibility pack, or the compatibility pack may not function properly. This situation will hopefully be rectified in the coming months. What is a document variable? Document variables store data inside the document, invisible to the user. The Tutor software uses them when converting the document to HTML and when creating the flowchart, just to name a couple of uses. How will you know if a document's variables are scrambled? The difficulty in diagnosing the issue is that the symptoms can take myriad forms. There isn't a single error message or a single feature that one can point to and say, "test for the problem by doing this." The best clue about the error is seeing any kind of string in an error message that has garbage characters, question marks, xml code snippets, or just nonsense. Such as "Language ?????????????xlr;lwlerkjl could not be found." It is also possible to see the corrupted data in the footers of the Word docs. And, just because the footers look correct does not mean that the document variables are not corrupted. The corruption problem does not occur in every document variable in the document, just some of them. Often it is less than a quarter of them. What is the difference between docx files and doc files? Office 2007 uses Office Open XML formats with .docx and .docm filename extensions. - Docx is an Office Open XML word document. - Docm is a macro-enabled Office Open XML document. This means the file structure behind the scenes is quite different from the binary file formats used prior to Office 2007 such as .doc, .dot, .xls, and .ppt. Solution Summary: For Windows XP and Word 2007: install the hotfix, or save files as *.doc. For Windows XP and Word 2000 and 2003: install the latest compatibility pack or save files as *.doc. For Windows 2000 with Word 2000 or 2003: do not use any compatibility pack; save files as *.doc. Emily Chorba, Principal Product Manager for Oracle Tutor

    Read the article

  • Type of AI to tackle this problem?

    - by user1154277
    I posted this on Stack Overflow but want to get your recommendations as well, as a user there recommended I post it here. I'm going to say from the beginning that I am not a programmer; I have a cursory knowledge of different types of AI and am just a businessman building a web app. Anyways, the web app I am investing in developing is for a hobby of mine. There are many part manufacturers, product manufacturers, upgrade and addon manufacturers etc. for hardware/products in this hobby's industry. Currently, I am in the process of building a crowdsourced platform for people who are knowledgeable to go in and mark up compatibility between those parts, as it's not always clear-cut whether they are. For example: Manufacturer A makes an "A" class product, and manufacturer B makes an upgrade/part that generally goes with class "A" products, but is for one reason or another not compatible with Manufacturer A's particular "A" class product. However, a good chunk (60%-70%) of the products/parts in the database can have their compatibility inferred from their properties. For example: Part 1 is type "A" with an "X" mm receiver and part 2 is also type "A" with an "X" mm interface, and thus the two parts are compatible; or Part 1 is an 8mm gear, thus all 8mm bushings from any manufacturer are compatible with part 1. Furthermore, all gears can only have compatibility relationships in the database with bushings and gearboxes, but there can be no meaningful compatibility between a gear and a rail or receiver, since those parts don't interface. Now what I want is an AI to be able to learn from the decisions of the crowdsourced platform community and be able to infer compatibility for new parts/products based on their tagged attributes, what type of part they are, etc. What would be the best form of AI to tackle this? I was thinking an expert system, but explicitly engineering all of the knowledge rules would be daunting because of the complex relations between literally tens of thousands of parts, hundreds of part types and many manufacturers. Would an ANN (neural network) be ideal to learn from the many inputs/decisions of the crowdsource platform users? Any help/input is much appreciated.

    Read the article

  • Framework 4 Features: Login Id Support

    - by Anthony Shorten
    Given that Oracle Utilities Application Framework 4 is available as part of Mobile Work Force Management and other products progressively, I am preparing a number of short but sweet blog entries highlighting some of the new functionality that has been implemented. This is the first entry and it is on a new security feature called Login Id. In past releases of the Oracle Utilities Application Framework, the userid used for authentication and authorization was limited to eight (8) characters in length. This mirrored what the market required in the past with LAN userids and even legacy userids being that length. The technology market has since progressed to longer userid lengths. It is very common to hear that email addresses are being used as credentials for production systems. To achieve this in past versions of the Oracle Utilities Application Framework, sites had to introduce a short userid (8 characters in length) as an alias in their preferred security store. They then configured the J2EE Web Application Server to use the alias as credentials. This sometimes was a standard feature of the security store and/or the J2EE Web Application Server, if you were lucky. If not, some Java code had to be written to implement the solution. In Oracle Utilities Application Framework 4 we introduced a new attribute on the user object called Login Id. The Login Id can be up to 256 characters in length and is an alternative to the existing userid stored on the user object. This means the Oracle Utilities Application Framework can support both long and short userids. For backward compatibility we use the Login Id for authentication but the short userid for authorization and auditing. The user object within the Oracle Utilities Application Framework holds the translation. Backward compatibility is always a consideration in any of our designs for future or changed functionality. You will see reference to this fact in the blog entries I will be composing over the next few months. We have also thought about the flexibility in implementing this feature. The Login Id can be the same value as the Userid (the default for backward compatibility) or can be different. Both the Login Id and Userid have to be unique. This avoids sharing of credentials and is also backward compatible. You can manually enter the Login Id or provision it from Oracle Identity Manager (or other tool). If you use the Login Id only, then we will not autogenerate a short userid, as the rules for this can vary from site to site. You have a number of options there. Most identity provisioning tools can generate a short userid at user creation time and this can be used. If you do not use provisioning tools, then you can write a class extension using the SDK to autogenerate the userid based upon your site's preference. When we designed the feature there were lots of styles of generating userids (random, initial and surname, numbers etc). We could not really see a clear winner in that respect so we just allowed the extension to be inserted if necessary. Most customers indicated to us that identity provisioning was the preferred way. This is why we released an Oracle Identity Manager integration with the framework. The Login Id is now case sensitive, which was not supported for the userid. The introduction of the Login Id allows the product to offer flexible options when configuring security whilst maintaining backward compatibility.

    Read the article

  • How do I develop browser plugins with cross-platform and cross-browser compatibility in mind?

    - by Schnapple
    My company currently has a product which relies on a custom, in-house ActiveX control. The technology it employs (TWAIN) is itself cross-platform by design, but our solution is obviously limited to Internet Explorer on Windows. Long term we would like to become cross-browser and cross-platform (i.e., support other browsers on Windows, support the Macintosh or Linux). Obviously if we wanted to support Firefox on Windows I would need to write a plugin for it. But if we wanted to support the Macintosh, how do I attack that? Is it possible to compile a version of the Firefox plugin that runs on the Mac? Would I be remiss to not also support Safari on the Mac? Are there any plugins which are cross-browser on a platform? (i.e., can any browsers run plugins for other browsers) Since TWAIN is so low-level to the operating system, I do not think Java would be a solution in any capacity, but I could be wrong. What do people generally do when they want to support multiple platforms with a process that will need to be cross-platform and cross-browser compatible?

    Read the article

  • Jquery Internet Explorer 8 compatibility issue, does not load data unless history is deleted...

    - by Scarface
    Hey guys, I have a weird problem. I have an update system that refreshes data on a time interval. It works well in all browsers except internet explorer 8. The problem is that once it loads the data, it does not matter if the data updates further, it will not update the data visually until the internet history is cleared. I am not using any cookies server-side...Anyone ever encounter something like this? Here is my javascript, thanks for any assistance in advance function prepare(response) { var d = new Date(); count++; d.setTime(response.time*1000); var mytime = d.getHours()+':'+d.getMinutes()+':'+d.getSeconds(); var string = '<li class="shoutbox-list" id="list-'+count+'">' + '<span class="shoutbox-list-nick"><a href="statistics.php?user='+response.user+'">'+response.user+'</a></span>' + ' <span class="date">'+mytime+'</span><br>' + '<span class="msg">'+response.message+'</span>' +'</li>'; return string; } function refresh() { $.getJSON(files+"shoutbox.php?action=view&time="+lastTime+"&topic_id="+topic_id, function(json) { if(json.length) { for(i=0; i < json.length; i++) { $('#daddy-shoutbox-list').prepend(prepare(json[i])); $('#list-' + count).fadeIn(1500); } var j = i-1; lastTime = json[j].time; } //alert(lastTime); }); timeoutID = setTimeout(refresh, 3000); } $(document).ready(function() { var options = { dataType: 'json', beforeSubmit: validate, success: function(response, status){ if (response.error=='success'){ success(response, status); } else { $.prompt(response.error); } } }; $('#daddy-shoutbox-form').ajaxForm(options); timeoutID = setTimeout(refresh, 100); });
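    If the server really is returning fresh JSON, the usual suspect in IE8 is its aggressive caching of GET responses. A hedged sketch of the common workaround (not a confirmed diagnosis of this particular code):

      // IE8 caches GET/XHR responses aggressively, so the page keeps re-rendering the
      // first JSON payload it ever fetched. Disabling jQuery's ajax cache appends a
      // throwaway "_=<timestamp>" parameter to every GET request, defeating the cache:
      $.ajaxSetup({ cache: false });

      // Equivalent manual approach inside refresh(): vary the URL on every call.
      $.getJSON(files + "shoutbox.php?action=view&time=" + lastTime +
                "&topic_id=" + topic_id + "&nocache=" + new Date().getTime(),
                function (json) { /* same handler as above */ });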

    Read the article

  • Compatibility issues with <a> and calling a function(); across different web browsers

    - by Matthew
    Hi, I am new to javascript. I wrote the following function rollDice() to produce 5 random numbers and display them. I use an anchor with click event to call the function. Problem is, in Chrome it won't display, works fine in IE, in firefox the 5 values display and then the original page w/anchor appears! I am suspicious that my script tag is too general but I am really lost. Also if there is a display function that doesn't clear the screen first that would be great. diceArray = new Array(5) function rollDice() { var i; for(i=0; i<5; i++) { diceArray[i]=Math.round(Math.random() * 6) % 6 + 1; document.write(diceArray[i]); } } when I click should display 5 rand variables
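    One likely cause of the cross-browser differences is document.write itself: invoked after the page has loaded (as it is here, from a click handler), it replaces the current document in most browsers. A hedged alternative sketch that writes into the page instead (it assumes a placeholder element with id "dice", which is not in the original markup):

      function rollDice() {
          var results = [];
          for (var i = 0; i < 5; i++) {
              results[i] = Math.floor(Math.random() * 6) + 1;   // uniform 1..6
          }
          // Update an existing element instead of calling document.write(), which
          // wipes the page when it runs after the document has finished loading.
          document.getElementById('dice').innerHTML = results.join(' ');
          return false;   // keep the anchor from navigating away
      }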

    Read the article

  • Do newer versions of BJam support backwards compatibility with older versions of Boost?

    - by cmmacphe
    I'm trying to build version 1.35 of Boost with the newest version of bjam that is bundled with version 1.42 of Boost. Will this adversely affect the results of the build? Is this even possible? The reason I'm trying to do this is because the newest version of BJam has support for command line options that are not included in the older version of BJam that comes bundled with version 1.35 of Boost.

    Read the article

  • Any difference in performance/compatibility of different languages in PostgreSQL?

    - by Igor
    Nowadays PostgreSQL offers plenty of procedural languages: pl/pgsql, pl/perl, etc. Are there any differences in the speed/memory consumption of procedures written in different languages? Has anybody done any tests? Is it true that using the native pl/pgsql is the most correct choice? How does a procedure written in C++ and compiled into a loadable module differ in all these parameters from a user function written in the pl/* languages?
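    As a point of comparison (an illustrative sketch, not a benchmark), here is the same trivial function written in SQL and in pl/pgsql; only the LANGUAGE sql version is a candidate for being inlined into the calling query by the planner, which is one reason there is no single "fastest" language.

      -- Same computation in two of PostgreSQL's function languages.
      CREATE FUNCTION add_tax_sql(numeric) RETURNS numeric AS $$
          SELECT $1 * 1.20;          -- plain SQL body: may be inlined by the planner
      $$ LANGUAGE sql IMMUTABLE;

      CREATE FUNCTION add_tax_plpgsql(amount numeric) RETURNS numeric AS $$
      BEGIN
          RETURN amount * 1.20;      -- interpreted by the pl/pgsql executor on each call
      END;
      $$ LANGUAGE plpgsql IMMUTABLE;

      SELECT add_tax_sql(100), add_tax_plpgsql(100);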

    Read the article

  • How to get compatibility between C# and SQL2k8 AES Encryption?

    - by Victor Rodrigues
    I have an AES encryption being made on two columns: one of these columns is stored at a SQL Server 2000 database; the other is stored at a SQL Server 2008 database. As the first column's database (2000) doesn't have native functionality for encryption / decryption, we've decided to do the cryptography logic at application level, with .NET classes, for both. But as the second column's database (2008) allow this kind of functionality, we'd like to make the data migration using the database functions to be faster, since the data migration in SQL 2k is much smaller than this second and it will last more than 50 hours because of being made at application level. My problem started at this point: using the same key, I didn't achieve the same result when encrypting a value, neither the same result size. Below we have the full logic in both sides.. Of course I'm not showing the key, but everything else is the same: private byte[] RijndaelEncrypt(byte[] clearData, byte[] Key) { var memoryStream = new MemoryStream(); Rijndael algorithm = Rijndael.Create(); algorithm.Key = Key; algorithm.IV = InitializationVector; var criptoStream = new CryptoStream(memoryStream, algorithm.CreateEncryptor(), CryptoStreamMode.Write); criptoStream.Write(clearData, 0, clearData.Length); criptoStream.Close(); byte[] encryptedData = memoryStream.ToArray(); return encryptedData; } private byte[] RijndaelDecrypt(byte[] cipherData, byte[] Key) { var memoryStream = new MemoryStream(); Rijndael algorithm = Rijndael.Create(); algorithm.Key = Key; algorithm.IV = InitializationVector; var criptoStream = new CryptoStream(memoryStream, algorithm.CreateDecryptor(), CryptoStreamMode.Write); criptoStream.Write(cipherData, 0, cipherData.Length); criptoStream.Close(); byte[] decryptedData = memoryStream.ToArray(); return decryptedData; } This is the SQL Code sample: open symmetric key columnKey decryption by password = N'{pwd!!i_ll_not_show_it_here}' declare @enc varchar(max) set @enc = dbo.VarBinarytoBase64(EncryptByKey(Key_GUID('columnKey'), 'blablabla')) select LEN(@enc), @enc This varbinaryToBase64 is a tested sql function we use to convert varbinary to the same format we use to store strings in the .net application. The result in C# is: eg0wgTeR3noWYgvdmpzTKijkdtTsdvnvKzh+uhyN3Lo= The same result in SQL2k8 is: AI0zI7D77EmqgTQrdgMBHAEAAACyACXb+P3HvctA0yBduAuwPS4Ah3AB4Dbdj2KBGC1Dk4b8GEbtXs5fINzvusp8FRBknF15Br2xI1CqP0Qb/M4w I just didn't get yet what I'm doing wrong. Do you have any ideas? EDIT: One point I think is crucial: I have one Initialization Vector at my C# code, 16 bytes. This IV is not set at SQL symmetric key, could I do this? But even not filling the IV in C#, I get very different results, both in content and length.

    Read the article

  • How to set a keybinding which is valid in all modes in Emacs

    - by AnotherEmacsLearner
    Hi, I've configured my Emacs to use M-j as backward-char by putting (global-set-key (kbd "M-j") 'backward-char) ; was indent-new-comment-line in my .emacs file. This works fine in many modes (text/org/lisp). But in c++-mode & php-mode it is bound to the default c-indent-new-comment-line. How can I bind M-j to backward-char in these modes too? And in general for ALL modes? Thanks, AnotherEmacsLearner

    Read the article

  • Browser compatibility; Before or after uploading website to server?

    - by Camran
    I am at the stage where I need to make my website cross-browser compatible. I need tips on how to get started. I have developed my website in Firefox, so it works great with Firefox. I guess I have to download a couple of versions of all major browsers now, right? Then just test each browser one by one? Should I do this before uploading the entire website onto a server or afterwards? All tips and software which make this easier are appreciated. BTW, it is a classifieds website using MySql, Solr, PHP, js etc... Thanks

    Read the article
