Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.


  • Caveats with the runAllManagedModulesForAllRequests in IIS 7/8

    - by Rick Strahl
    One of the nice enhancements in IIS 7 (and now 8) is the ability to intercept non-managed - ie. non-ASP.NET served - requests from within ASP.NET managed modules. This opened up a ton of new functionality that could be applied across non-managed content using .NET code. I thought I had a pretty good handle on how IIS 7's Integrated mode pipeline works, but when I put together some samples last night I realized that the way managed and unmanaged requests fire into the pipeline is downright confusing, especially when it comes to the runAllManagedModulesForAllRequests attribute. There are a number of settings that can affect whether a managed module receives non-ASP.NET content requests such as static files or requests from other frameworks like PHP or ASP classic, and that is the topic of this blog post.

    Native and Managed Modules The integrated mode IIS pipeline for IIS 7 and later - as the name suggests - allows for integration of ASP.NET pipeline events in the IIS request pipeline. Natively IIS runs unmanaged code, and there is a host of native modules that handle the core behavior of IIS. If you set up a new IIS site or application without managed code support, only the native modules are supported and fired without any interaction between native and managed code. If you use the Integrated pipeline with managed code enabled, however, things get a little more confusing, as both native modules and .NET managed modules can fire against the same IIS request. If you open up the IIS Modules dialog you see both managed and unmanaged modules. Unmanaged modules point at physical files on disk, while managed modules point at .NET types referenced from the GAC or the current project's BIN folder. Both native and managed modules can co-exist and execute side by side on the same request. When running in IIS 7 the IIS pipeline actually instantiates the ASP.NET runtime (via the System.Web.PipelineRuntime class), which unlike the core HttpRuntime classes in ASP.NET receives notification callbacks when IIS integrated mode events fire. The IIS pipeline is smart enough to detect whether managed handlers are attached, and if there are none these notifications don't fire, improving performance.

    The good news about all of this for .NET devs is that ASP.NET style modules can be used for just about every kind of IIS request. All you need to do is create a new Web Application, enable ASP.NET on it, and then attach managed handlers. Handlers can look at ASP.NET content (ie. ASPX pages, MVC, WebAPI etc. requests) as well as non-ASP.NET content, including static content like HTML files, images, javascript and css resources etc. It's very cool that this capability has been surfaced. However, with that functionality comes a lot of responsibility. Because every request passes through the ASP.NET pipeline if managed modules (or handlers) are attached, there are possible performance implications that come with it. Running through the ASP.NET pipeline does add some overhead.
    ASP.NET and Your Own Modules When you create a new ASP.NET project, typically the Visual Studio templates create the modules section like this: <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <modules runAllManagedModulesForAllRequests="true" > </modules> </system.webServer> Specifically, the interesting thing here is the runAllManagedModulesForAllRequests="true" flag, which at first glance seems to indicate that it controls whether registered managed modules run for every request. Realistically, though, this flag does not control whether managed code is fired for all requests or not. Rather, it is an override for the preCondition="managedHandler" flag on a particular module. With the flag set to the default true setting, you can assume that pretty much every IIS request you receive ends up firing through your ASP.NET module pipeline, and every module you have configured is accessed even by non-managed requests like static files. In other words, your module will have to handle all requests.

    Now so far so obvious. What's not quite so obvious is what happens when you set runAllManagedModulesForAllRequests="false". You probably would expect that the non-ASP.NET requests immediately no longer get funnelled through the ASP.NET module pipeline. But that's not what actually happens. For example, if I create a module like this: <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" /> by default it will fire against ALL requests regardless of the runAllManagedModulesForAllRequests flag. Even with runAllManagedModulesForAllRequests="false", the module is fired. Not quite what you'd expect.

    So what is runAllManagedModulesForAllRequests really good for? It's essentially an override for the managedHandler preCondition. If I declare my module in web.config like this: <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" preCondition="managedHandler" /> and runAllManagedModulesForAllRequests="false", my module only fires against managed requests. If I switch the flag to true, now my module ends up handling all IIS requests that are passed through from IIS. The moral of the story here is that if you intend to only look at ASP.NET content, you should always set the preCondition="managedHandler" attribute to ensure that only managed requests fire this module. But even if you do this, realize that runAllManagedModulesForAllRequests="true" can override this setting.

    runAllManagedModulesForAllRequests and Http Application Events Another place the runAllManagedModulesForAllRequests attribute comes into play is the global Http Application object (typically in global.asax) and the Application_XXXX events that you can hook up there. While the events there are dynamically hooked up to the application class, they basically behave as if they were set with the preCondition="managedHandler" configuration switch. The end result is that if you have runAllManagedModulesForAllRequests="true" you'll see every Http request passed through the Application_XXXX events, and you only see ASP.NET requests with the flag set to "false".

    What's all that mean? Configuring an application to handle requests for both ASP.NET and other content can be tricky, especially if you need to mix modules that might require both. A couple of things are important to remember. If your module doesn't need to look at every request, by all means set preCondition="managedHandler" on it.
    This will at least allow it to respond to the runAllManagedModulesForAllRequests="false" flag and then only process ASP.NET requests. Look really carefully to see whether you actually need runAllManagedModulesForAllRequests="true" in your applications, as set by the default new project templates in Visual Studio. Part of the reason this is the default is that it was required for the initial versions of IIS 7 and ASP.NET 2 in order to handle MVC extensionless URLs. However, if you are running IIS 7 or later and .NET 4.0 you can use the ExtensionlessUrlHandler instead to get MVC functionality without requiring runAllManagedModulesForAllRequests="true": <handlers> <remove name="ExtensionlessUrlHandler-Integrated-4.0" /> <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" /> </handlers> Oddly, this handler is registered by default in Visual Studio 2012 MVC template apps, so I'm not sure why the default template still adds runAllManagedModulesForAllRequests="true" - it should be enabled only if there's a specific need to access non-ASP.NET requests.

    As a side note, it's interesting that when you access a static HTML resource, you can actually write into the Response object and get the output to show, which is trippy. I haven't looked closely to see how this works - whether ASP.NET just fires directly into the native output stream or whether the static requests are re-routed directly through the ASP.NET pipeline once a managed code module is detected. This doesn't work for all non-ASP.NET resources - for example, I can't do the same with ASP classic requests - but it makes for an interesting demo when injecting HTML content into a static HTML page :-) Note that on the original Windows Server 2008 and Vista (IIS 7.0) you might need a hotfix in order for the ExtensionlessUrlHandler to work properly for MVC projects. On my live server I needed it (about 6 months ago), but others have observed that the latest service updates have integrated this functionality and the hotfix is not required. On IIS 7.5 and later I've not needed any patches for things to just work.

    Plan for non-ASP.NET Requests It's important to remember that if you write a .NET module to run on IIS 7, there's no way for you to prevent non-ASP.NET requests from hitting your module. So make sure you plan to support requests to extensionless URLs and to static resources like files. Luckily, ASP.NET creates a full Request and full Response object for you even for non-ASP.NET content. So even for static files, and even for ASP classic for example, you can look at Request.FilePath or Request.ContentType (in post-handler pipeline events) to determine what content you are dealing with.
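    By way of illustration, here's a minimal sketch of such a module, reusing the SharewareMessageModule name from the config snippets above - the .aspx filter and the choice of pipeline event are made up for the example, a sketch of the early-exit pattern rather than code from the original post:

        using System;
        using System.Web;

        // Hypothetical module: only interested in .aspx requests, so it
        // filters on Request.FilePath and bails out early for everything else.
        public class SharewareMessageModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.PostRequestHandlerExecute += OnPostRequestHandlerExecute;
            }

            private void OnPostRequestHandlerExecute(object sender, EventArgs e)
            {
                var app = (HttpApplication)sender;

                // With runAllManagedModulesForAllRequests="true" this event also
                // fires for static files, ASP classic, etc. - filter first.
                if (!app.Context.Request.FilePath.EndsWith(".aspx",
                        StringComparison.OrdinalIgnoreCase))
                    return;

                app.Context.Response.Write("<!-- processed by SharewareMessageModule -->");
            }

            public void Dispose() { }
        }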
    As always with module design, make sure you check for the conditions in your code that make the module applicable, and if a filter fails, exit immediately - minimize the code that runs if your module doesn't need to process the request.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in IIS7  ASP.NET

    Read the article

  • Git-based storage and publishing, infrastructure advice

    - by Joel Martinez
    I wanted to get some advice on moving a system to "the cloud" ... specifically, I'm looking to move into some of Windows Azure's managed services, as right now I'm managing a VM. Basically, the system operates on some data stored in a github git repository. I'll describe the current architecture.

    Current system (all hosted on a single server):
    - GitHub - configured with a webhook pointing at ...
    - an ASP.NET MVC application - to accept the webhook from git. It pushes a message onto ...
    - an Azure service bus queue - which is drained by ...
    - a Windows Service - which pulls the message from the queue and ...
    - fetches the latest data from the git repository (using LibGit2Sharp) onto the local disk, and finally ...
    - operates on the data in git to produce a static HTML website hosted/served by IIS.

    The system works really well, actually ... but I would like to get out of the business of managing the VM, and move to using some combination of Azure web and worker roles. But because the system relies so heavily on the git repository on the local filesystem, I'm finding it difficult to figure out how to architect this in the cloud. I know you can get file system access, so in theory I could just fetch the repository if there's nothing on disk ... but the performance/responsiveness of the system sort of depends on the repository being available and only having to fetch diffs, which is relatively quick. As opposed to periodically having to fetch the entire (somewhat large) git repository if the web or worker role was recycled, or something. So I would love some advice on how you would architect such a system :) Ultimately, the only real requirement is to be able to serve HTML content that's been produced from the contents of a git repository (in a relatively responsive manner, from a publishing perspective) ... please feel free to ask any clarifying questions if there's something I omitted. Thanks!
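    For what it's worth, the clone-if-missing / fetch-diffs-otherwise behavior described above can be sketched with LibGit2Sharp roughly like this (the remote name and paths are assumptions, and the Fetch call signature varies a little between LibGit2Sharp versions):

        using LibGit2Sharp;

        static class RepoCache
        {
            // Clone on first use (e.g. after a role recycle); otherwise just
            // fetch diffs, which is the cheap path the site depends on.
            public static Repository GetOrClone(string url, string localPath)
            {
                if (!Repository.IsValid(localPath))
                {
                    // Full clone: slow, but only needed when local storage is empty.
                    Repository.Clone(url, localPath);
                    return new Repository(localPath);
                }

                var repo = new Repository(localPath);
                Commands.Fetch(repo, "origin", new string[0], new FetchOptions(), null);
                return repo;
            }
        }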

    Read the article

  • The Cloud is STILL too slow!

    - by harry.foxwell(at)oracle.com
    If you've been in the computing industry long enough to remember dialup modems and other "ancient" technologies, you might be tempted to marvel at today's wonderfully powerful multicore PCs, ginormous disks, and blazingly fast networks. Wow, you're in Internet Nirvana, right? Well, no, not by a long shot. Considering the exponentially growing expectations of what the Web, that is, "the Cloud", is supposed to provide, today's Web/Cloud services are still way too slow.

    Already we are seeing cloud-enabled consumer devices that are stressing even the most advanced public network services. Like the iPad and its competitors, ever more powerful smartphones, and an imminent horde of special-purpose gadgets such as the proposed "cloud camera" (see http://gdgt.com/discuss/it-time-cloud-camera-found-out-cnr/). And at the same time that the number and type of cloud services are growing, user tolerance for even the slightest of download delays is rapidly decreasing. Ten years ago Web developers followed the "8-Second Rule" (the average time a typical Web user would tolerate for a page to download and render). Not anymore; now it's less than 3 seconds, and only a bit longer for mobile devices (see http://www.technologyreview.com/files/54902/GoogleSpeed_charts.pdf). How spoiled we've become!

    Google, among others, recognizes this problem and is working to encourage the development of a faster Web (see http://www.technologyreview.com/web/32338/). They, along with their competitors and ISPs, will have to encourage and support significantly better Web performance in order to provide the types of services envisioned for the Cloud. How will they do this? Through the development of faster components, better use of caching technologies, and the really tough one - exploiting parallelism. Not that parallel technologies like multicore processors are hard to build ... we already have them. It's just that we're not that good yet at using them effectively. And if we don't get better, users will abandon cloud-based services ... in less than 3 seconds.

    Read the article

  • Java Champion Jorge Vargas on Extreme Programming, Geolocalization, and Latin American Programmers

    - by Janice J. Heiss
    In a new interview, up on otn/java, titled “An Interview with Java Champion Jorge Vargas,” Jorge Vargas, a leading Mexican developer, discusses the process of introducing companies to Enterprise JavaBeans through the application of Extreme Programming. Among other things, he gives workshops about building code with agile techniques and creates a master project to build all apps based on Scrum, XP methods and Kanban. He focuses on building core components such as security, login, and menus. Vargas remarks, “This may sound easy, but it’s not—the process takes months and hundreds of hours, but it can be controlled, and with small iterations, we can translate customer requirements and problems of legacy systems to the new system.” In regard to his work with geolocalization, he says: “We have launched a beta program of Yumbling, a geolocalization-based app, with mobile clients for BlackBerry, iPhone, Android, and Nokia, with a Web interface. The first challenge was to design a simple universal mechanism providing information to all clients and to minimize maintenance provision to them. I try not to generalize a lot—to avoid low performance or misunderstanding in processing data. We use the latest Java EE technology—during the last five years, I’ve taught people how to use Java EE efficiently.” Check out the interview here.

    Read the article

  • Microsoft's new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year.  Microsoft literally buys computers by the truckload.  From what I understand, it's a typical practice amongst large software vendors.  You plug a few wires in, you test it, and you instantly have mega tera tera flops (don't hold me to that number).  Microsoft has been trying to plug away at their cloud services (named Azure), which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly.

    With this in mind, it doesn't surprise me that I was recently sent an executive email concerning Microsoft's new technical computing initiative.  I find it to be a great marketing idea with actual substance behind the real work.  From the programmer academic perspective, in college we dreamed about this type of processing power.  This has decades of computer science theory behind it.

    A copy of the email received (note that I almost deleted this email, thinking it was spam due to its length):

    We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can, however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly. Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland. To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And to do this requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges. We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves.
The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run, will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems. Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980's, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group. Enabling more people to make better predictions We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers: Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day. Aerospace engineering firm, a.i. solutions, Inc., needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights. 
    To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars. Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections. Our Technical Computing direction Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas: Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed - reliably, consistently and quickly. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and troubleshoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. Thinking bigger There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible: Better predictions to help improve the understanding of pandemics, contagion and global health trends.
    Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates. More accurate prediction of natural disasters and their impact to develop more effective emergency response plans. With an ambitious charter in hand, this new team is ready to build on our progress to date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better.

    Bob

    Read the article

  • What is the best way to evaluate new programmers?

    - by Rafael
    What is the best way to evaluate candidates for a new job (talking merely in terms of programming skills)? In my company we have had a lot of bad experiences with people who have good grades but do not have real programming skills. Their skills are merely those of code monkeys, without the ability to analyze problems and find solutions. More things that I have to note:

    - The education system in my country sucks--really sucks. The people that are good at this kind of job are good because they have talent for it or really try to learn on their own.
    - A university / graduate / post-grad degree doesn't necessarily mean that you know exactly how to do things.
    - Certifications also mean nothing here because the people in charge of the certification courses also don't have skills (or are in low-paying jobs).
    - We really need to get good candidates that are flexible and don't have mechanical thinking (because, in our experience, this type of person performs poorly).
    - We are in a government institution and the candidates don't necessarily come from outside, but we have the possibility to accept or reject any candidates until we find the correct one.

    I hope I'm not sounding too aggressive in my question; and BTW I'm a programmer myself.

    Edit: I figured out that I asked something really complex here. I will un-toggle "the correct answer" only to keep the discussion flowing freely, without any bias.

    Read the article

  • Restrict SSL access for some paths on an apache2 server

    - by valmar
    I want to allow access to www.mydomain.com/login through SSL only. E.g.: whenever someone accesses http://www.mydomain.com/login, I want him to be redirected to https://www.mydomain.com/login so it's impossible for him/her to access that page without SSL. I accomplished this by adding the following lines to the virtual host for www.mydomain.com on port 80 in /etc/apache2/sites-available/default:

        RewriteEngine on
        RewriteCond %{SERVER_PORT} ^80$
        RewriteRule ^/login(.*)$ https://%{SERVER_NAME}/login$1 [L,R]
        RewriteLog "/var/log/apache2/rewrite.log"

    Now, I want to restrict the use of SSL for the rest of www.mydomain.com. That means, whenever someone accesses https://www.mydomain.com, I want him to be redirected to http://www.mydomain.com (for performance reasons). I tried this by adding the following lines to the virtual host of www.mydomain.com on port 443 in /etc/apache2/sites-available/default-ssl:

        RewriteEngine on
        RewriteCond %{SERVER_PORT} ^443$
        RewriteRule ^/(.*)$ http://%{SERVER_NAME}/$1 [L,R]
        RewriteLog "/var/log/apache2/rewrite.log"

    But when I now try to access www.mydomain.com/login, I get an error message that the server has caused too many redirects. That does make sense: obviously, the two RewriteRules are playing ping-pong against each other. How could I work around this?
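    One way out of the loop, sketched against the configs above: exclude the SSL-only path from the HTTPS-to-HTTP redirect, so the two rules can no longer bounce a request back and forth. In default-ssl (the /login pattern simply mirrors the rule in the port 80 vhost):

        RewriteEngine on
        RewriteCond %{SERVER_PORT} ^443$
        # Don't bounce /login back to HTTP - it must stay on SSL.
        RewriteCond %{REQUEST_URI} !^/login
        RewriteRule ^/(.*)$ http://%{SERVER_NAME}/$1 [L,R]
        RewriteLog "/var/log/apache2/rewrite.log"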

    Read the article

  • XSLT is not the solution you're looking for

    - by Jeff
    I was very relieved to see that Umbraco is ditching XSLT as a rendering mechanism in the forthcoming v5. Thank God for that. After working in this business for a very long time, I can't think of any other technology that has been inappropriately used, time after time, without any compelling reason.

    The place I remember seeing it the most was during my time at Insurance.com. We used it, mostly, for two reasons. The first and justifiable reason was that it tweaked data for messaging to the various insurance carriers. While they all shared a "standard" for insurance quoting, they all had their little nuances we had to accommodate, so XSLT made sense. The other thing we used it for was rendering in the interview app. In other words, when we showed you some fancy UI, we'd often ditch the control rendering and straight HTML and use XSLT. I hated it.

    There just hasn't been a technology hammer that made every problem look like a nail (or however that metaphor goes) the way XSLT has. Imagine my horror the first week at Microsoft, when my team assumed control of the MSDN/TechNet forums, and we saw a mess of XSLT for some parts of it. I don't have to tell you that we ripped that stuff out pretty quickly. I can't even tell you how many performance problems went away as we started to rip it out. XSLT is not your friend. It has a place in the world, but that place is tweaking XML, not rendering UI.

    Read the article

  • Grid Infrastructure Management Repository (GIMR) database now mandatory in Oracle GI 12.1.0.2

    - by Mike Dietrich
    During the installation of Oracle Grid Infrastructure 12.1.0.1 you had the option to choose YES/NO to install the Grid Infrastructure Management Repository (GIMR) database MGMTDB. With Oracle Grid Infrastructure 12.1.0.2 this choice has become obsolete and the above screen does not appear anymore. The GIMR database has become mandatory.

    What gets stored in the GIMR? See the documentation here, and see the changes in Oracle Clusterware 12.1.0.2 here: Automatic Installation of Grid Infrastructure Management Repository. The Grid Infrastructure Management Repository is automatically installed with Oracle Grid Infrastructure 12c release 1 (12.1.0.2). The Grid Infrastructure Management Repository enables such features as Cluster Health Monitor, Oracle Database QoS Management, and Rapid Home Provisioning, and provides a historical metric repository that simplifies viewing of past performance and diagnosis of issues. This capability is fully integrated into Oracle Enterprise Manager Cloud Control for seamless management.

    Furthermore, what the doc doesn't say explicitly:
    - The -MGMTDB has now become a single-tenant deployment: a CDB with one PDB. This will allow the use of a Utility Cluster that can hold the CDB for a collection of GIMR PDBs.
    - If you already had an Oracle 12.1.0.1 GIMR, this database will be destroyed and recreated. Preserving the CHM/OS data can be achieved with oclumon by dumping it out into node views.
    - The data files associated with it will be created within the same disk group as OCR and VOTING. In a future release there may be an option offered to put it into a separate disk group.

    Some important MOS Notes:
    - MOS Note 1568402.1 - FAQ: 12c Grid Infrastructure Management Repository, states there's no supported procedure to enable the Management Database once the GI stack is configured
    - MOS Note 1589394.1 - How to Move GI Management Repository to Different Shared Storage (shows how to delete and recreate the MGMTDB)
    - MOS Note 1631336.1 - Cannot delete Management Database (MGMTDB) in 12.1

    -Mike

    Read the article

  • Is meta description still relevant?

    - by Jeff Atwood
    I received this bit of advice about the meta description tag recently: Meta descriptions are used by Google probably 80% of the time for the snippet. They don’t help with rankings but you should probably use them. You could just auto generate them from the first part of the question. The description tag exists in the header, like so: <meta name="Description" content="A brief summary of the content on the page."> I'm not sure why we would need this field, as Google seems perfectly capable of showing the relevant search terms in context in the search result pages, like so (I searched for c# list performance): In other words, where would a meta description summary improve these results? We want the page to show context around the actual search hits, not a random summary we inserted! Google Webmaster Central has this advice: For some sites, like news media sources, generating an accurate and unique description for each page is easy: since each article is hand-written, it takes minimal effort to also add a one-sentence description. For larger database-driven sites, like product aggregators, hand-written descriptions are more difficult. In the latter case, though, programmatic generation of the descriptions can be appropriate and is encouraged -- just make sure that your descriptions are not "spammy." Good descriptions are human-readable and diverse, as we talked about in the first point above. The page-specific data we mentioned in the second point is a good candidate for programmatic generation. I'm struggling to think of any scenario when I would want the Google-generated summary, that is, actual context from the page for the search terms, to be replaced by a hard-coded meta description summary of the question itself.
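    For reference, the kind of programmatic generation the quoted advice suggests ("auto generate them from the first part of the question") could be as simple as this sketch - the helper name and length limit are illustrative, not from any quoted source:

        using System;

        static class MetaDescription
        {
            // Take the first part of the question text, cut at a word
            // boundary, and keep it under a typical snippet length.
            public static string FromQuestion(string questionBody, int maxLength = 155)
            {
                var text = questionBody.Trim();
                if (text.Length <= maxLength)
                    return text;

                int cut = text.LastIndexOf(' ', maxLength);
                return text.Substring(0, cut > 0 ? cut : maxLength) + "...";
            }
        }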

    Read the article

  • TDWI World Conference Features Oracle and Big Data

    - by Mandy Ho
    Oracle is a Gold Sponsor at this year's TDWI World Conference Series, held at the Manchester Grand Hyatt in San Diego, California, July 31 to August 1. The theme of this event is Big Data Tipping Point: BI Strategies in the Era of Big Data. The conference features an educational look at how data is now being generated so quickly that organizations across all industries need new technologies to stay ahead - to understand customer behavior, detect fraud, improve processes and accelerate performance. Attendees will hear how the internet, social media and streaming data are fundamentally changing business intelligence and data warehousing. Big data is reaching critical mass - the tipping point.

    Oracle will be conducting the following evening workshop. To reserve your space, call 1.800.820.5592 ext 10775.

    Title: Integrating Big Data into Your Data Center (or A Big Data Reference Architecture)
    Date: Wed., August 1, 2012, at 7:00 p.m.
    Venue: Manchester Grand Hyatt, San Diego

    Weblogs, social media, smart meters, sensors and other devices generate high volumes of low-density information that isn't readily accessible in enterprise data warehouses and business intelligence applications today. But this data can have relevant business value, especially when analyzed alongside traditional information sources. In this session, we will outline a reference architecture for big data that will help you maximize the value of your big data implementation. You will learn:
    - The key technologies in a big data architecture, and their specific use cases
    - The integration points of the various technologies and how they fit into your existing IT environment
    - How to effectively leverage analytical sandboxes for data discovery and agile development of data-driven solutions

    At the end of this session you will understand the reference architecture and have the tools to implement this architecture at your company.

    Presenter: Jean-Pierre Dijcks, Senior Principal Product Manager

    Don't miss our booth and the chance to meet with our big data experts on the exhibition floor at booth #306.

    Read the article

  • Which ecommerce framework is fast and easy to customize?

    - by Diego
    I'm working on a project where I have to put online an ecommerce system which will require a good amount of custom features. I'm therefore looking for a framework which makes customization easy enough (from an experienced developer's perspective, I mean). The language should be PHP, and time is a constraint - I don't have months to learn. Additionally, the ecommerce site will have to handle around 200,000 products from day one, which will increase over time, hence performance is also important. So far I examined the following:

    - Magento - Complicated and, as far as I could read, slow when the database contains many products. It's also resource intensive, and we can't afford a dedicated VPS from the beginning.
    - OpenCart - Rough at best; documentation is extremely poor. Also, it's "free" to start, but each feature is implemented via 3rd party commercial modules.
    - OSCommerce - Buggy, inefficient, outdated.
    - ZenCart - Derived from OSCommerce, doesn't seem much better.
    - Prestashop - It looks like it has many incompatibilities. Also, most of its modules are commercial, which increases the cost.

    In short, I'm still quite undecided, as none of the above seems to satisfy the requirements. I'm open to evaluating closed source frameworks too, if they are any better, but my knowledge about them is limited, therefore I'll welcome any suggestion. Thanks for all replies.

    Read the article

  • links for 2011-03-04

    - by Bob Rhubart
    Joao Oliveira: Forms and Reports 11g Fusion Startup Script - "After Fusion Middleware 11g Linux installation (Weblogic, Forms, Reports, Discoverer and Portal or others) probably most of the newcomers will wonder how to create a startup script to start the Weblogic managed servers when the server starts up or reboots." (tags: oracle fusionmiddleware weblogic)

    Anthony Shorten: SOA Suite Integration: Part 3: Loading files - Anthony says: "One of the most common scenarios in SOA integration is the loading of a file into the product from an external source. In Oracle SOA Suite there is a File Adapter that can process many file types into your BPEL process." (tags: oracle otn soa soasuite)

    Francisco Munoz Alvarez: Playing with Oracle 11gR2, OEL 5.6 and VirtualBox 4.0.2 (1st Part) - Oracle ACE Francisco Munoz Alvarez kicks off a tutorial on creating an Oracle Database 11gR2 instance using Oracle VirtualBox and OEL. (tags: oracle database virtualbox virtualization)

    ORACLENERD: VirtualBox and Shared Folders - Oracle ACE Chet Justice shares some tips. (tags: oracle otn oracleace virtualization virtualbox)

    Chris Muir: Check out the ADF content at this year's ODTUG KScope11 conference - Oracle ACE Director Chris Muir shares information on this year's ODTUG Kaleidoscope event in Long Beach, CA, June 26-30. (tags: oracle otn oracleace odtug adf)

    Edwin Biemond: Setting a virtual IP on a specific network interface with WebLogic 10.3.4 PS3 - Edwin says: "If you want high availability in WebLogic you need to enable WebLogic server migration, configure the nodemanager, and use a virtual / floating IP in your managed servers and channels." (tags: oracle otn oracleace highavailability weblogic virtualization)

    Markus Eisele: High Performance JPA with GlassFish and Coherence - Part 3 - Markus says: "In this third part of my four part series I'll explain strategy number two of using Coherence with EclipseLink and GlassFish. This is all about using Coherence as a Second Level Cache (L2) with EclipseLink." (tags: oracle otn oracleace glassfish coherence)

    Michel Schildmeijer: Set Oracle ESB monitoring with Enterprise Manager Grid Control - "Monitoring your Oracle SOA Suite environment ... can be very complicated, but if you are using Grid Control, Oracle provides you the SOA Management Pack. Unfortunately this SOA Management Pack has pretty detailed OOTB info about BPEL, but for ESB you won't find any OOTB metrics." (tags: oracle otn soa grid servicebus)

    Read the article

  • WPF or WinForms for Game Development and learning resources?

    - by Stephen Lee Parker
    I'm looking to create a game framework for my own personal use... I want to use WPF, but I'm unsure if that is a wise choice... The games I will be writing should not require high performance graphics, so I am hoping to build on native classes... I do not want to rely on external DLL's unless I generate them myself. The games will be for young children, say 4 to 8. Most will be learning puzzles or simple shooters. The most advanced will be a platform game (non-scrolling screen like the old Atari Miner 2049er game). I think I know how to write something like the old Atari Chopper Command (partially written and my 4 year old loves it, but I used WinForms and GDI), Pac-Man, Tetris, Astroids, Space Invaders, Slider Puzzle, but I do not really know how to write the platform game... In my mind, I'm getting caught in collision detection and how to make a character jump and how to make a character walk up a slope or steps... Can anyone point me to information on developing a platform game in C#? Would you suggest WinForms or WPF for game development? I'm not looking for great graphics and speed, just entertaining game play...
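    On the collision and jumping questions specifically, the usual starting point is axis-aligned bounding boxes plus simple gravity integration, independent of whether WPF or WinForms does the drawing. A minimal sketch (all names and constants are illustrative):

        using System;

        struct Box
        {
            public double X, Y, Width, Height;

            // Axis-aligned bounding-box overlap test.
            public bool Intersects(Box other) =>
                X < other.X + other.Width && other.X < X + Width &&
                Y < other.Y + other.Height && other.Y < Y + Height;
        }

        class Player
        {
            public Box Bounds;
            public double VelocityY;
            public bool OnGround;

            const double Gravity = 2000;    // px/s^2 - tune to taste
            const double JumpSpeed = -700;  // negative = up in screen coordinates

            public void Jump()
            {
                if (OnGround) { VelocityY = JumpSpeed; OnGround = false; }
            }

            // Call once per frame with the elapsed time in seconds.
            public void Update(double dt, Box platform)
            {
                VelocityY += Gravity * dt;
                Bounds.Y += VelocityY * dt;

                // Landing: only snap onto the platform while falling.
                if (VelocityY > 0 && Bounds.Intersects(platform))
                {
                    Bounds.Y = platform.Y - Bounds.Height;
                    VelocityY = 0;
                    OnGround = true;
                }
            }
        }

    Walking up slopes and steps is usually handled the same way: move horizontally, test for overlap, and nudge the player up by at most a small step height before treating the obstacle as a wall.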

    Read the article

  • Top Reasons to Take the MySQL Cluster Training

    - by Antoinette O'Sullivan
    Here are the top reasons to take the authorized MySQL Cluster training course:

    - Take training which was developed by the MySQL Cluster product team and delivered by the MySQL Cluster experts at Oracle
    - Learn how to develop, deploy, manage and scale your MySQL Cluster applications more efficiently
    - Keep your mission-critical applications and essential services up and running 24x7
    - Deliver the highest performance and scalability using MySQL Cluster best practices

    In this 3-day course, experienced database users learn the important details of clustering necessary to get started with MySQL Cluster, to properly configure and manage the cluster nodes to ensure high availability, to install the different nodes, and to gain a better understanding of the internals of the cluster. To see the schedule for this course, go to the Oracle University Portal (click on MySQL). Should you not see an event for a location/date that suits you, register your interest in additional events. Here is a small sample of the events already on the schedule for the MySQL Cluster course:

    Location - Date - Delivery Language
    Prague, Czech Republic - 17 September 2012 - Czech
    Warsaw, Poland - 1 August 2012 - Polish
    London, United Kingdom - 18 July 2012 - English
    Lisbon, Portugal - 3 December 2012 - European Portuguese
    Nice, France - 8 October 2012 - French
    Barcelona, Spain - 25 September 2012 - Spanish
    Madrid, Spain - 20 August 2012 - Spanish
    Denver, United States - 17 October 2012 - English
    Chicago, United States - 22 August 2012 - English
    Petaling Jaya, Malaysia - 10 October 2012 - English
    Singapore - 21 August 2012 - English
    Mexico City, Mexico - 23 July 2012 - Spanish

    Read the article

  • CERN Announces the Discovery of a Higgs-Boson-like Particle

    - by Jason Fitzpatrick
    CERN scientists dropped a press release today indicating they've found a particle consistent with the long-sought-after Higgs boson - the "God particle" - that could help radically refine our understanding of the Standard Model of particle physics. For years scientists at CERN have been harnessing the power of the Large Hadron Collider to answer fundamental questions about the nature of particles and the universe around us. In the above video John Ellis, a theoretical physicist, answers the question "What is the Higgs Boson?" The video pairs nicely with the CERN press release:

    "We observe in our data clear signs of a new particle, at the level of 5 sigma, in the mass region around 126 GeV. The outstanding performance of the LHC and ATLAS and the huge efforts of many people have brought us to this exciting stage," said ATLAS experiment spokesperson Fabiola Gianotti, "but a little more time is needed to prepare these results for publication."

    "The results are preliminary but the 5 sigma signal at around 125 GeV we're seeing is dramatic. This is indeed a new particle. We know it must be a boson and it's the heaviest boson ever found," said CMS experiment spokesperson Joe Incandela. "The implications are very significant and it is precisely for this reason that we must be extremely diligent in all of our studies and cross-checks."

    "It's hard not to get excited by these results," said CERN Research Director Sergio Bertolucci. "We stated last year that in 2012 we would either find a new Higgs-like particle or exclude the existence of the Standard Model Higgs. With all the necessary caution, it looks to me that we are at a branching point: the observation of this new particle indicates the path for the future towards a more detailed understanding of what we're seeing in the data."

    Read the article

  • Oracle Linux Pavilion is Back for Oracle OpenWorld

    - by Oracle OpenWorld Blog Team
    By Zeynep Koch

    Back by popular demand, Oracle will again host the Oracle Linux Pavilion at Oracle OpenWorld from October 1-3. The pavilion will be located in the Exhibition Hall at Moscone South, Booth 1033, next to the Oracle DEMOgrounds and Oracle Linux demopods. At the pavilion a select group of ISVs, IHVs, and SIs will showcase their products that have been Oracle Linux- and/or Oracle VM-certified. These certified products enable customer applications to run faster, thereby saving money. Partners exhibiting their solutions in the Oracle Linux Pavilion include:

    - BeyondTrust: context-aware security intelligence for dynamic IT infrastructures such as cloud, mobile, and virtual technologies
    - Centrify: control, secure, and audit access to cross-platform systems, mobile devices, and applications
    - Data Intensity: cloud services and application management
    - Fujitsu: technology platforms, private cloud, services, ubiquitous and device solutions
    - HP: converged cloud, converged infrastructure, application transformation, and information optimization
    - LSI: intelligent solid-state storage solutions for breakthrough database acceleration
    - Mellanox: InfiniBand and Ethernet end-to-end server and storage interconnect solutions and services for data centers
    - Micro Focus: mainframe solutions, application modernization and development tools, software quality tools
    - NetApp: storage and data management
    - QLogic: high performance networking
    - Teleran: BI and data warehouse management solutions for Oracle Exadata Database Machine and Oracle Database

    Be sure to pick up your free Oracle Linux and Oracle VM DVD Kit if you visit one of these partners. We look forward to seeing you at the pavilion.

    Read the article

  • Java Spotlight Episode 110: Arun Gupta on the Java EE 6 Pocket Guide @arungupta

    - by Roger Brinkley
    Interview with Arun Gupta on his new Java EE 6 Pocket Guide. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News
    - Getting Started with JavaFX2 and Scene Builder
    - Using the New CSS Analyzer in JavaFX Scene Builder
    - JavaOne Latin America Keynotes
    - NetBeans Podcast #62 - NetBeans Community News with Geertjan and Tinu
    - Request for Project Nashorn (Open Source)
    - JEP 170: JDBC 4.2
    - Open Sourcing: decora-compiler
    - JPA 2.1 Schema Generation
    - WebSocket, Java EE 7, and GlassFish

    Events
    - Dec 3-5, jDays, Göteborg, Sweden
    - Dec 4-6, JavaOne Latin America, Sao Paolo, Brazil
    - Dec 14-15, IndicThreads, Pune, India

    Feature Interview

    Arun Gupta is a Java EE & GlassFish Evangelist working at Oracle. Arun has over 14 years of experience in the software industry working in various technologies, the Java(TM) platform, and several web-related technologies. In his current role, he works very closely to create and foster the community around Java EE & GlassFish. He has participated in several standards bodies and worked amicably with members from other companies. He has been with the Java EE team since its inception, and since then he has contributed to all Java EE releases. He is a prolific blogger at http://blogs.sun.com/arungupta with over 1000 blog entries and frequent visitors from all over the world, reaching up to 25,000 hits/day. His new Java EE 6 Pocket Guide is now available from O'Reilly.

    What's Cool
    - Videos: Getting Started with Java Embedded
    - JavaFX: Leveraging Multicore Performance
    - JavaFX on BeagleBoard
    - State of the Lambda: Libraries Edition
    - FOSDEM 2013 CFP now open!
    - The return of the Shark

    Read the article

  • JavaScript objects and Crockford's The Good Parts

    - by Jonathan
    I've been thinking quite a bit recently about how to do OOP in JS, especially when it comes to encapsulation and inheritance. According to Crockford, classical is harmful because of new(), and both prototypal and classical are limited because their use of constructor.prototype means you can't use closures for encapsulation. Recently, I've considered the following couple of points about encapsulation:

    - Encapsulation kills performance. It makes you add functions to EACH member object rather than to the prototype, because each object's methods have different closures (each object has different private members).
    - Encapsulation forces the ugly "var that = this" workaround to give private helper functions access to the instance they're attached to. Either that, or make sure you call them with privateFunction.apply(this) every time.

    Are there workarounds for either of the two issues I mentioned? If not, do you still consider encapsulation to be worth it?

    Side note: The functional pattern Crockford describes doesn't even let you add public methods that only touch public members, since it completely forgoes the use of new() and constructor.prototype. Wouldn't a hybrid approach, where you use classical inheritance and new() but also call Super.apply(this, arguments) to initialize private members and privileged methods, be superior?
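    A sketch of that hybrid, for concreteness - prototypes via new() for shared public methods, plus a Super.apply(this, ...) call for private state (all names are illustrative):

        // Base "class" with a closure-based private member.
        function Counter(start) {
            var count = start; // private: captured by the closures below

            // Privileged methods: per-instance, because they close over 'count'.
            this.increment = function () { count += 1; };
            this.value = function () { return count; };
        }

        // A public method that only touches public/privileged members can
        // live on the prototype and be shared by all instances.
        Counter.prototype.report = function () {
            return "count is " + this.value();
        };

        // Derived "class": classical inheritance plus Super.apply(this, ...).
        function NamedCounter(name, start) {
            Counter.apply(this, [start]); // initialize private state and privileged methods
            this.name = name;             // plain public member
        }
        NamedCounter.prototype = Object.create(Counter.prototype);
        NamedCounter.prototype.constructor = NamedCounter;

        var c = new NamedCounter("clicks", 10);
        c.increment();
        console.log(c.name + ": " + c.report()); // "clicks: count is 11"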

    Read the article

  • UI message passing programming paradigm

    - by Ronald Wildenberg
    I recently (about two months ago) read an article that explained some user interface paradigm that I can't remember the name of, and I also can't find the article anymore. The paradigm allows for decoupling the user interface and backend through message passing (via some queueing implementation). So each user action results in a message being passed to the backend, and the user interface is then updated to inform the user that his request is being processed. The assumption is that a user interface is stale by definition: when you read data from some store into memory, it is stale because another transaction may already be updating the same data. If you assume this, it makes no sense to try to represent the 'current' database state in the user interface (so the delay introduced by passing messages to a backend doesn't matter). If I remember correctly, the article also mentioned a read-optimized data store for rendering the user interface. The article assumed a high-traffic web application. A primary reason for using a message queue communicating with the backend is performance: returning control to the user as soon as possible. Updating backend stores is handled by another process, and eventually these changes also become visible to the user. I hope I have explained accurately enough what I'm looking for. If someone can provide some pointers to what I'm looking for, thanks very much in advance.
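    The core of the pattern, in a minimal sketch against an abstract queue (the interface and class names here are mine, not from the article being recalled):

        using System.Collections.Concurrent;

        // Stand-in for a real queue (MSMQ, RabbitMQ, a service bus, ...).
        interface ICommandQueue
        {
            void Enqueue(object command);
        }

        class InMemoryQueue : ICommandQueue
        {
            private readonly ConcurrentQueue<object> _items = new ConcurrentQueue<object>();
            public void Enqueue(object command) { _items.Enqueue(command); }
            public bool TryDequeue(out object command) { return _items.TryDequeue(out command); }
        }

        class RenameItemCommand
        {
            public int ItemId;
            public string NewName;
        }

        class ItemController
        {
            private readonly ICommandQueue _queue;
            public ItemController(ICommandQueue queue) { _queue = queue; }

            // Each user action becomes a message; the UI returns immediately
            // with "request accepted" instead of waiting for the write.
            public string Rename(int itemId, string newName)
            {
                _queue.Enqueue(new RenameItemCommand { ItemId = itemId, NewName = newName });
                return "Your change was received and is being processed.";
            }
        }

    A separate worker process then drains the queue, applies the change, and updates the read-optimized store the UI renders from.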

    Read the article

  • Output = MAXDOP 1

    - by Dave Ballantyne
    It is widely known that data modifications on table variables do not support parallelism; Peter Larsson has a good example of that here. Whilst tracking down a performance issue, I saw that using the OUTPUT clause also causes parallelism to not be used.

    By way of example, first let's create two tables with a simple parent and child (one to one) relationship, and then populate them with 1,000,000 rows:

        Drop table Parent
        Drop table Child
        go
        create table Parent
        (id integer identity Primary Key,
         data1 char(255))
        Create Table Child
        (id integer Primary Key)
        go
        insert into Parent(data1)
        Select top 1000000 NULL from sys.columns a cross join sys.columns b
        insert into Child
        Select id from Parent
        go

    If we then execute

        update Parent
        set data1 = ''
        from Parent
        join Child on Parent.Id = Child.Id
        where Parent.Id % 100 = 1
          and Child.id % 100 = 1

    we should see an execution plan using parallelism. However, if the OUTPUT clause is now used:

        update Parent
        set data1 = ''
        output inserted.id
        from Parent
        join Child on Parent.Id = Child.Id
        where Parent.Id % 100 = 1
          and Child.id % 100 = 1

    the execution plan shows that parallelism was not used. Make of that what you will, but I thought that this was a pretty unexpected outcome.

    Update: Laurence Hoff has mailed me to note that when the OUTPUT results are captured to a temporary table using the INTO clause, then parallelism is used. Naturally, if you use a table variable then there is still no parallelism.
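    A sketch of Laurence's variation, assuming the tables above (the temporary table name is mine):

        create table #updated (id integer)

        update Parent
        set data1 = ''
        output inserted.id into #updated (id)
        from Parent
        join Child on Parent.Id = Child.Id
        where Parent.Id % 100 = 1
          and Child.id % 100 = 1

        -- The captured ids remain available for further processing.
        select count(*) from #updated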

    Read the article

  • "Are You There?".. India Tops Logistics List of Emerging Nations

    - by [email protected]
    It's just amazing how far, wide and deep modern supply chains are extending. AMR reported on 15 Apr (M. Burkett, A. Reese) in an SCM webcast that "Penetrating Emerging Markets" was the top priority for organizations, based on a recent survey. I took this as both adding new consumers to their prospect list and leveraging "lower cost labor arbitrage" (read "3 Billion Capitalists"). Supply Chain Quarterly reports that India and Brazil received the highest rankings of the logistics markets in developing nations.

    India tops the list of emerging nations in a new index that scores the attractiveness of logistics markets to foreign investors. Developed by the UK-based research firm Transport Intelligence, the new Emerging Market Logistics Index rated 38 developing countries on 3 factors:

    1. "Market size and growth attractiveness," which considered a country's economic output, projected growth rate, and population size.
    2. "Market compatibility," which examined how well-matched a nation was with the services offered by global logistics providers. This includes a country's security levels, market accessibility, foreign direct investment, distribution of wealth and population, and development of its service sector.
    3. "Connectedness," which rated the efficiency of customs and border controls, liner shipping connections, and transportation infrastructure.

    India claimed the top spot due to its market size and growth prospects. Brazil is second because of its economic performance, good levels of market accessibility, and improving domestic and international transport connections. Are you there?

    For more information see www.transportintelligence.com/articles_papers. The top 10 emerging countries:

    1. India
    2. Brazil
    3. Indonesia
    4. Mexico
    5. Russia
    6. Turkey
    7. United Arab Emirates
    8. Egypt
    9. Saudi Arabia
    10. Malaysia

    Source: Transport Intelligence, The Emerging Markets Logistics Index, March 2010

    Read the article

  • CodePlex Daily Summary for Tuesday, June 18, 2013

    CodePlex Daily Summary for Tuesday, June 18, 2013

    Popular Releases:
    CODE Framework 4.0.30618.0: See change notes in the documentation section for details on what's new. Note: if you download the class reference help file, you have to right-click the file, pick "Properties", and unblock it, as many browsers flag the file as blocked during download (for security reasons) and thus hide all content.
    Toolbox for Dynamics CRM 2011, XrmToolBox v1.2013.6.18: XrmToolBox improvements: uses the new connection controls (Microsoft.Xrm.Client.dll), new display capabilities for tools (size, image and colors), added a prerequisites check and a Most Used Tools feature. New tool: Solution Transfer Tool v1.0.0.0, developed by DamSim. Updated tool: View Layout Replicator v1.2013.6.17 (double-click a source view to display its layoutXml). All tools list: Access Checker v1.2013.6.17, Attribute Bulk Updater v1.2013.6.18, FetchXml Tester v1.2013.6.1...
    Media Companion MC3.570b: New: Movie (using XBMC TMDB) now renames movies if the option is selected; actor images are saved from TMDb if the option is selected. Fixed: Movie - checks for poster.jpg against the missing-poster filter; fixed continual scraping of VOB movie files (not DVD structure). Both - correctly display audio channels; correctly populate audio info in nfo files when there are multiple audio tracks; added icons and checks for DTS-ES and Dolby TrueHD audio tracks; stream d...
    LINQ Extensions Library 1.0.4.2: New in release 1.0.4.2: custom sorting extensions that perform up to 50% better than the LINQ OrderBy/ThenBy extensions. The extensions allow fine-tuning of the sort by controlling the algorithm each sort uses.
    ExtJS based ASP.NET Controls, FineUI v3.3.0: ExtJS-based ASP.NET controls: no JavaScript, no CSS, no UpdatePanel, no ViewState, no WebServices. Supports IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5 and Safari 3.0+. Licensed under Apache License v2.0 (ExtJS itself is GPL v3, http://www.sencha.com/license). Forum: http://fineui.com/bbs/; demo: http://fineui.com/demo/; docs: http://fineui.com/doc/; project: http://fineui.codeplex.com/.
    BarbaTunnel 8.0: Check Version History for more information about this release.
    ExpressProfiler v1.5: Added Start time and End time event columns; added the SP:StmtStarting and SP:StmtCompleted events; fixed a bug with the Audit:Logout event.
    patterns & practices Data Access Guidance, Drop4 2013.06.17: Drop 4.
    Microsoft Ajax Minifier 4.94: Adds dstLine and dstCol attributes to the -Analyze output in XML mode; un-combines leftover comma-separated expression statements after optimizations are complete so downstream tools don't stack-overflow on really deep comma trees; adds support for using a single source map generator instance with multiple runs of MinifyJavaScript, assuming the results are concatenated to the same output file.
    Kooboo CMS 4.1.1: The stable release of Kooboo CMS 4.1, fixing the issues at https://github.com/Kooboo/CMS/issues numbers 1, 11, 13, 15, 19, 20, 24, 43, 45, 46, ...
    VidCoder 1.5.0 Beta: The betas have started up again! If you were previously on the beta track you will need to install this to get back on it, because you can now run the Beta and Stable versions of VidCoder side by side. Note that the OpenCL and Intel QuickSync changes being tested by HandBrake are not in the betas yet; they will appear when HandBrake integrates them into the main branch. Updated the HandBrake core to SVN 5590, which adds a new FDK AAC encoder; the FAAC encoder has been removed and now...
    Employee Info Starter Kit v6.0 - ASP.NET MVC Edition: Release Home - Getting Started - Hands-on Coding Walkthrough - Technology Stack - Design & Architecture. EISK v6.0 ASP.NET MVC edition bundles most of the greatest and most successful platforms, frameworks and technologies together, to enable web developers to learn and build manageable, high-performance web applications with a rich user experience effectively and quickly. User end specifications: creating a new employee record, reading existing employee records, updating an existing employee reco...
    OLAP PivotTable Extensions 0.8.1: Use the 32-bit download for Excel 2007, Excel 2010 32-bit and Excel 2013 32-bit (even on a 64-bit operating system); use the 64-bit download for Excel 2010 64-bit and Excel 2013 64-bit. Just download and run the EXE; there is no need to uninstall the previous release. If you have problems getting the add-in to work, see the Troubleshooting Installation wiki page. The new features in this release are: view #VALUE! err...
    DirectXTex texture processing library, June 2013 (June 15, 2013): Custom filtering implementation for Resize & GenerateMipMaps(3D) - Point, Box, Linear, Cubic, and Triangle; TEX_FILTER_TRIANGLE finite low-pass triangle filter; TEX_FILTER_WRAP and TEX_FILTER_MIRROR texture semantics for custom filtering; TEX_FILTER_BOX alias for TEX_FILTER_FANT; WIC ordered and error-diffusion dithering for non-WIC conversion; sRGB gamma-correct custom filtering and conversion; DDS_FLAGS_EXPAND_LUMINANCE - reader conversion option for L8, L16, and A8L8 legacy ...
    WPF Application Framework (WAF) 3.0.0.440 (Release Candidate): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Please build the whole solution before you start one of the sample applications. Requirements: .NET Framework 4.5 (the package contains a solution file for Visual Studio 2012). Changelog legend: [B] breaking change; [O] marked member as obsolete. Samples: use ValueConverters via StaticResource instead of x:Static.
    BlackJumboDog Ver5.9.1: Released 2013.06.13; changes to SSI #include and CGI handling in the Web server.
    Lakana - WPF Framework V2.1 RTM: Dynamic text localization and a new application-wide message bus.
    Free language translator and file converter, Free Language Translator 3.3: Some bug fixes and a new link to video tutorials on YouTube.
    Pokemon Battle Online ETV: Built from the $/PBO/branches/PrivateBeta branch (December 2012); assorted bug fixes and server updates.
    Modern UI for WPF, Modern UI 1.0.4: The ModernUI assembly, including a demo app demonstrating the various features of Modern UI for WPF. Related downloads: ModernUI for WPF is also available as a NuGet package in the NuGet gallery (id: ModernUI.WPF); a Visual Studio 2012 extension contains a collection of project and item templates for Modern UI for WPF and includes the ModernUI.WPF NuGet package.

    New Projects:
    Aux Browser: This browser is secured by system-level sandbox technology, and it helps you get where you want to go in the shortest possible time.
    Best solution to convert Outlook OST to PST files: Convert OST to PST without fearing data loss. An immaculate converter by Recover Data gives users the privilege to perform OST file recovery with no data loss.
    Bitbucket.NET: A high-performance .NET library for developing applications that use the Bitbucket service.
    caosu: aaa
    ChartApp: Chart App for SharePoint 2013.
    Custom Membership Provider SQL + LDAP with one login page: A custom membership provider to allow users to log in to their portal from one login page, whether it is a custom SQL DB or the current Active Directory.
    DroidBrowse: Web browser for Android 1.6 or later.
    haseebtestProject: Created to add random files and for testing purposes.
    HiveSense: Hive monitoring system.
    JQuery File Upload Plugin with Backload server side component (Demo/Examples): Backload is a professional, full-featured ASP.NET MVC 4 file upload controller and handler (server side).
    JSLocator: Locate JavaScript functions in the source.
    MoonCMS: This is a trivial thing. It doesn't make any sense!
    MyLabs: MyLabs is a private lab.
    pcvvpes: pcvvpes
    Prism Model Factory Extensions: Micro framework for model cloning, equality checks and merging.
    SharePoint 2007 Solution and Packaging Guidance: WSPSolution is a standard for building SP 2007 solutions in Visual Studio, namespace planning, deployment planning, WSP creation, and build automation.
    Windows Phone Wi-Fi Launcher: Wi-Fi settings page launcher for Windows Phone 8.0.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 28 (sys.dm_db_stats_properties)

    - by Tamarick Hill
    The sys.dm_db_stats_properties dynamic management function returns information about the statistics that currently exist on your database objects. The function takes two parameters, an object_id and a stats_id. Let's have a look at its result set against the AdventureWorks2012 Sales.SalesOrderHeader table. To obtain the object_id and stats_id I will use a CROSS APPLY with the sys.stats catalog view:

    SELECT sp.*
    FROM sys.stats s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
    WHERE sp.object_id = OBJECT_ID('Sales.SalesOrderHeader')

    The first two columns returned by this function are the object_id and stats_id columns. The 'last_updated' column gives you the date and time a particular statistic was last updated. The 'rows' column gives you the total number of rows in the table as of the last statistics update, and 'rows_sampled' gives the number of rows that were sampled to create the statistic. The 'steps' column represents the number of value ranges in the statistic's histogram. The 'unfiltered_rows' column represents the number of rows before any filters are applied; if a statistic is not filtered, 'unfiltered_rows' always equals 'rows'. Lastly we have the 'modification_counter' column, which represents the number of modifications to the leading column of a given statistic since the statistic was last updated. Probably the most important column from this dynamic management function is 'last_updated'. You always want to ensure that you have accurate and updated statistics on your database objects: accurate statistics are vital for the query optimizer to generate efficient and reliable query execution plans, and without them the performance of your SQL Server would likely suffer. For more information about this dynamic management function, please see the Books Online link: http://msdn.microsoft.com/en-us/library/jj553546.aspx Follow me on Twitter @PrimeTimeDBA
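    As a practical illustration, here is a sketch that flags statistics whose leading column has churned heavily or that have not been updated recently. The 20% and 30-day thresholds are arbitrary assumptions of mine, not documented rules, so tune them for your own workload:

    -- Find potentially stale statistics across user tables
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           s.name                   AS stats_name,
           sp.last_updated,
           sp.rows,
           sp.modification_counter
    FROM sys.stats s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
    WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
      AND (sp.modification_counter > 0.20 * sp.rows            -- heavy churn since last update
           OR sp.last_updated < DATEADD(DAY, -30, GETDATE()))  -- stale by age

    Any candidates it returns can then be refreshed with UPDATE STATISTICS or sp_updatestats.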

    Read the article

  • MySQL Connect Call for Papers Open Now, until May 6

    - by Bertrand Matthelié
    MySQL Connect will take place in San Francisco September 29 and 30; you can read the Press Release here. The call for papers is open until May 6; submit your sessions now! This is your chance to present your real-world experience and share your expertise and best practices with the MySQL community. The conference includes six tracks: Performance and Scalability, High Availability, Cloud Computing, Architecture and Design, Database Administration, and Application Development. You can submit conference sessions as well as BOF (Birds-of-a-Feather) sessions. We look forward to hearing from you! Interested in sponsorship and exhibit opportunities? You will find more information here. Registration for MySQL Connect also opened today. Register now to take advantage of the Early Bird discount! MySQL Connect will be jam-packed with technical sessions, hands-on labs and Birds-of-a-Feather (BOF) sessions delivered by MySQL community members, users, customers and MySQL engineers from Oracle. The event is a unique opportunity to learn about the latest MySQL features, discuss product roadmaps, and connect directly with the engineers behind the latest MySQL code.

    Read the article
