Search Results

Search found 15301 results on 613 pages for 'global assembly cache'.

Page 45/613 | < Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >

  • Oracle UPK Customer Roundtable - Featuring Medtronic's Journey To Support Global Systems Implementations

    - by [email protected]
    Hear how Medtronic adopted Oracle UPK globally across its SAP, Siebel, and PeopleSoft applications. Register now for this free webinar: Thursday, April 29, 2010, 9:00 am PT. Medtronic's success story highlights how Oracle UPK improved workforce effectiveness, addressed compliance, and ensured end-user adoption. From starting out with a small group of developers using Oracle UPK to having 35 developers creating 18,000 topics, Oracle UPK has become part of Medtronic's learning infrastructure, with multi-language support, help-menu integration, and much more.

    Read the article

  • Web.Config is Cached

    - by SGWellens
    There was a question from a student over on the ASP.NET forums about improving site performance. The concern was that every time an app setting was read from the Web.Config file, the disk would be accessed. With many app settings and many users, it was believed performance would suffer. Their intent was to create a class to hold all the settings, instantiate it, and fill it from the Web.Config file on startup. Then, all the settings would be in RAM. I knew this was not correct and didn't want to just say so without any corroboration, so I did some searching. Surprisingly, this is a common misconception. I found other code postings that cached the app settings from Web.Config. Many people even thanked the posters for the code. In a later post, the student said their textbook recommended caching the Web.Config file. OK, here's the deal. The Web.Config file is already cached. You do not need to re-cache it. From this article http://msdn.microsoft.com/en-us/library/aa478432.aspx : "It is important to realize that the entire <appSettings> section is read, parsed, and cached the first time we retrieve a setting value. From that point forward, all requests for setting values come from an in-memory cache, so access is quite fast and doesn't incur any subsequent overhead for accessing the file or parsing the XML." The reason the misconception is prevalent may be because it's hard to search for Web.Config and cache without getting a lot of hits on how to set up caching in the Web.Config file. So here's a string for search engines to index on: "Is the Web.Config file Cached?" A follow-up question was: are the connection strings cached? Yes. From http://msdn.microsoft.com/en-us/library/ms178683.aspx : "At run time, ASP.NET uses the Web.Config files to hierarchically compute a unique collection of configuration settings for each incoming URL request. These settings are calculated only once and then cached on the server." And, as everyone should know, if you modify the Web.Config file, the web application will restart. I hope this helps people to NOT write code! Steve Wellens
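
    To make the point concrete, here is a minimal C# sketch (the "SiteName" key is illustrative, not from the original post) showing that reading an app setting directly is already an in-memory operation, so no hand-rolled caching layer is needed:

        using System;
        using System.Configuration; // requires a reference to System.Configuration.dll

        public static class SettingsDemo
        {
            public static string GetSiteName()
            {
                // Served from ASP.NET's in-memory configuration cache;
                // the disk is only touched the first time the section is loaded.
                return ConfigurationManager.AppSettings["SiteName"] ?? "default";
            }
        }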

    Read the article

  • ZFS - zpool ARC cache plus L2ARC benchmarking

    - by jemmille
    I have been doing lots of I/O testing on a ZFS system I will eventually use to serve virtual machines. I thought I would try adding SSDs for use as cache to see how much faster I can get the read speed. I also have 24 GB of RAM in the machine that acts as ARC. vol0 is 6.4 TB and the cache disks are 60 GB SSDs. The pool is as follows:

        pool: vol0
        state: ONLINE
        scrub: none requested
        config:

        NAME                       STATE   READ WRITE CKSUM
        vol0                       ONLINE     0     0     0
          c1t8d0                   ONLINE     0     0     0
        cache
          c3t5001517958D80533d0    ONLINE     0     0     0
          c3t5001517959092566d0    ONLINE     0     0     0

    The issue is I'm not seeing any difference with the SSDs installed. I've tried bonnie++ benchmarks and some simple dd commands to write a file and then read the file back. I have run benchmarks before and after adding the SSDs. I've ensured the file sizes are at least double my RAM so there is no way it can all get cached locally. Am I missing something here? When am I going to see the benefits of having all that cache? Am I simply not going to under these circumstances? Are the benchmark programs not good for testing the effect of cache because of the way (and what) they write and read?
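
    One way to check whether the L2ARC is actually being exercised is to watch the cache devices and the ARC kstats during a benchmark run; a sketch for Solaris-derived systems (statistic names and tools vary by platform):

        # Watch per-device activity, including the cache SSDs, every 5 seconds
        zpool iostat -v vol0 5

        # Inspect the L2ARC counters (hits, misses, size) from the ARC kstats
        kstat -p zfs:0:arcstats | grep l2_

    Note that the L2ARC warms slowly by design, so a single cold benchmark pass may never touch it.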

    Read the article

  • How to set up squid to cache only specific domains?

    - by ???
    For example, I want squid to cache HTTP content only for *.archive.ubuntu.com (which is blocked by the firewall) and not cache any other domains. Also, only LAN (192.168.0.0/16) users may access the cached contents, but all users are allowed to access non-cached contents:

        User-IP            Dest-Domain              acl     Expect
        ----------------   ----------------------   -----   -------------------
        192.168.0.0/16     *.archive.ubuntu.com     allow   Cache Proxy, Fast
        192.168.0.0/16     *.other                  allow   Pass Proxy, Slow
        Other              *                        allow   Pass Proxy, Slow
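
    For reference, one plausible squid.conf sketch for this policy (untested; directive availability varies by squid version) combines a dstdomain ACL with the cache access list:

        # LAN clients
        acl lan src 192.168.0.0/16
        # The only domain to cache; the leading dot matches subdomains
        acl ubuntu_mirror dstdomain .archive.ubuntu.com

        # Use the cache only for LAN requests to the Ubuntu mirror;
        # everything else bypasses the cache entirely
        cache allow lan ubuntu_mirror
        cache deny all

        # All clients may use the proxy itself (tighten as needed)
        http_access allow all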

    Read the article

  • The type 'XXX' is defined in an assembly that is not referenced exception after upgrade to ASP.NET 4

    - by imran_ku07
    Introduction:

    I found two posts in the ASP.NET MVC forums complaining that they are getting the exception "The type 'XXX' is defined in an assembly that is not referenced" after upgrading their applications to Visual Studio 2010 and .NET Framework 4.0, here and here.

    Description:

    The reason they are getting the above exception is the use of the new, clean web.config, which does not reference the assemblies that were present in the ASP.NET 3.5 web.config. The quick solution for this problem is to add the old assemblies to the new web.config:

        <assemblies>
            <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            <add assembly="System.Data.Entity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
            <add assembly="System.Data.Linq, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
        </assemblies>

    How it works:

    I have not yet tested the above scenario in ASP.NET 4.0 because I do not have it yet, but it can easily be reproduced and verified in VS 2008. Just open the root web.config and remove:

        <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>

    Even if you add a reference to System.Core in your project, you will still get the above exception, because aspx pages are compiled into a separate assembly. You can check this yourself via "Show Detailed Compiler Output" in the yellow screen of death, where you will find something like:

        /out:"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\e907aee4\5fa0acc8\App_Web_y5rd6bdg.dll"

    This shows that aspx pages are compiled into a separate assembly under Temporary ASP.NET Files.

    Summary:

    After getting the above exception, make sure to add the missing assemblies to web.config, or add an Assembly directive at the page level. Hopefully this helps solve your problem.
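
    For the page-level alternative mentioned in the summary, an Assembly directive at the top of the aspx page looks like this (shown for System.Web.Abstractions as an example):

        <%@ Assembly Name="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" %>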

    Read the article

  • Dirty Cache Dell Equallogic Storage Array

    - by Jermal Smith
    Has anyone ever run into a dirty-cache issue with an EqualLogic SAN? Even after replacement of the controller cards, the EqualLogic storage array fails offline with a dirty cache. I have listed steps on my blog to bring the SAN online again; however, this is not the best solution, as it continues to fail. http://jermsmit.com/dirty-cache-dell-equallogic-storage-array/ If you have any info on this, please share. Thanks, Jermal

    Read the article

  • Global keyboard states

    - by Petr Abdulin
    I have the following idea for processing keyboard input. We capture input in the "main" Game class like this:

        protected override void Update(GameTime gameTime)
        {
            this.CurrentKeyboardState = Keyboard.GetState();

            // main :Game class logic here

            base.Update(gameTime);

            this.PreviousKeyboardState = this.CurrentKeyboardState;
        }

    Then we reuse the keyboard states (which have internal scope) in all other game components. The reasons behind this are to 1) minimize the keyboard processing load, and 2) reduce "pollution" of all other classes with similar keyboard state variables. Since I'm quite a noob in both game and XNA development, I would like to know if all of this sounds reasonable.
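
    As a minimal sketch of how other components might consume those shared states (the helper name and method are illustrative, not from the original post):

        using Microsoft.Xna.Framework.Input;

        static class KeyboardHelper
        {
            // Written by the main Game class once per Update()
            internal static KeyboardState CurrentKeyboardState;
            internal static KeyboardState PreviousKeyboardState;

            // True only on the frame the key transitions from up to down
            public static bool IsNewKeyPress(Keys key)
            {
                return CurrentKeyboardState.IsKeyDown(key)
                    && PreviousKeyboardState.IsKeyUp(key);
            }
        }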

    Read the article

  • Programming in the United States Air Force - How hard is it to get a job doing it? [closed]

    - by Holland
    I already know how to program. I've been at it for a year; the language I've worked with most has been C++, and I'm currently studying x86 assembly programming, with the goal of moving to ARM assembly after I've finished with that. Thus, given my experience and knowledge, I'm curious to know if any "vets" around here have had any excursions in the military doing software/electrical engineering, and how hard it would be to actually get a job doing it, for someone who already has previous experience and knowledge of that field, however slight. By "hard", in this context, I suppose I'm referring to the knowledge required to actually be a shoo-in for both low-level and high-level software/hardware applications. I know hex fairly well, enough to convert it to binary. I also have an OK knowledge of algorithms and data structures, such as binary search trees and linked lists. Everything I've learned so far has been self-taught for the most part.

    Read the article

  • SharePointBeginners: A new group for a global noob community

    - by PeterBrunone
    Recently, a discussion broke out (go figure) on a SharePoint list that I frequent. It had grown in size to the point where the more advanced members were sometimes turned off by the volume of questions that appeared TOO simple. This happens all the time as something becomes larger and specialization is necessary. Anyhoo, my response was to create the SharePoint Beginners group. Come out and join us at http://groups.google.com/group/sharepointbeginners, where no question is too simple; all we ask is that you show us you tried to find the answer.

    Read the article

  • RAID10 without write-back cache = horrible write performance?

    - by Harry Mexican
    I have just provisioned a dedicated server at SingleHop. I'm running it through some tests to know what to expect performance-wise. On the I/O side (with 4 x 1 TB disks in RAID 10) I get:

        write cache disabled: 200 MB/s read throughput, 30 MB/s write throughput

    I thought that was really low compared to my desktop HD, which gets 150/150 or so. So I had a chat with them and they suggested enabling the write cache. New results:

        write cache enabled: 280 MB/s read, 260 MB/s write

    which is great and all, but means I'd have to add a BBU for an additional monthly cost. Is it normal for the write throughput to be 1/4 of a regular drive on RAID 10 if you don't have a write cache? It almost feels like it's intentionally bad to force you to pony up for the BBU. I'd be happy with normal non-RAID performance of 150/150.
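
    For comparison, throughput numbers like these are commonly gathered with dd; a rough sketch (GNU dd; the test file should be much larger than RAM, and caches should be dropped or the box rebooted before the read pass):

        # Sequential write, flushing data to disk before dd exits
        dd if=/dev/zero of=testfile bs=1M count=16384 conv=fdatasync

        # Sequential read of the same file
        dd if=testfile of=/dev/null bs=1M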

    Read the article

  • Flightradar24 Maps Global Air Traffic in Real Time

    - by Jason Fitzpatrick
    Flightradar24 is a real-time flight tracking service that shows you where thousands of planes are at any given time. Whether you're an aviation buff or just want to show a worried kid that mom's flight is almost home, they have you covered. Flightradar24 is a free service that tracks flights using data from the FAA and ADS-B to display the status of flights across the globe. You can filter the information to see only certain planes, planes originating from certain airports, planes at various altitudes, and more. The interface is accessible via their web site as well as via iOS and Android devices. Hit up the link below to take it for a spin. Flightradar24

    Read the article

  • Oracle Enterprise Data Quality Adds Global Address Verification Capabilities for Greater Accuracy and Broader Location Coverage

    - by Mala Narasimharajan
    Data quality has many flavors: product, customer, you name the data domain and there's data quality associated with it. Address verification is a little different, in that there is a tremendous amount of variation as well as nuance attached to it. Specifically, what makes address verification challenging is that, more often than not, addresses are incomplete, riddled with misspellings, assigned incorrect postal codes, or contain non-address items. Almost all data has locations, and accurate locations power a wealth of business processes: Customer Relationship Management, data quality, delivery of materials, goods or services, fraud detection, insurance risk assessment, data analytics, store and territory planning, and much more. Oracle Address Verification Server provides location-based services as well as deeper parsing and analysis capabilities for Oracle Enterprise Data Quality. Pre-integrated with the EDQ platform, Oracle Address Verification Server provides robust parsing, validation, and specialized location information for over 240 countries (all populated countries on Earth). Oracle Enterprise Data Quality (EDQ) is a data quality platform dedicated to addressing the distinct challenges of customer and product data quality. It performs advanced data profiling to identify and measure poor-quality data and identify rule requirements, as well as semantic and pattern-based recognition to accurately parse and standardize data that is poorly structured. EDQ is integrated with Oracle Master Data Management, including Oracle Customer Hub and Oracle Product Hub, as well as Oracle Data Integrator Enterprise Edition and Oracle CRM. Address Verification Server provides key address verification services for Oracle CRM and Oracle Customer Hub. In addition, Address Verification Server provides greater accuracy when handling address data due to its expanded sources and extensible knowledge repository, solid parsing across locales and countries, and adept handling of extraneous data in address fields. For more information on Oracle Address Verification Server, visit http://bit.ly/GMUE4H and http://bit.ly/GWf7U6

    Read the article

  • Will small random dynamic snippets break caching

    - by Saif Bechan
    I am busy writing a WordPress plugin. Most users have cache plugins installed, which cache the pages. I also know some web servers, such as nginx, have PHP caching and whatnot, and there are also things like memcached. I have to admit I do not know exactly how they work; if anyone has some good links on how they work, I would be glad. Some links where it's explained simply, not too technical. Now the question: my plugin displays statistics on posts, and they are always different. Will this break the caching of the page? To take it a step further, sometimes the statistics of the post need updating, and a small JavaScript snippet is added to the page. Will these two actions result in the page not caching, or am I OK?
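
    A common way to keep the cached page static is to fetch the per-post numbers client-side after the page loads; a sketch (the endpoint URL and element id are made up for illustration):

        <script>
        // Ask the server for fresh stats; the cached HTML itself never changes.
        fetch('/wp-json/myplugin/v1/stats/123')
            .then(function (response) { return response.json(); })
            .then(function (stats) {
                document.getElementById('post-stats').textContent =
                    stats.views + ' views';
            });
        </script>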

    Read the article

  • Global UTF-encoding, the right way

    - by mowgli
    I'm curious as to the right way to have UTF-8 encoding on all web files. All my files (incl. CSS and JS) are made and saved in UTF-8 encoding. In PHP, I set the charset at the top of the main page (this page includes all others) with:

        header('Content-type: text/html; charset=utf-8');

    In the same page I have this HTML meta tag:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    Then I stumbled upon an external CSS file that has this on its first line:

        @charset "UTF-8";

    And now I wonder: should I set the charset INSIDE all my CSS/JS files too, like that? And/or should I serve each file with charset=utf-8 in the meta tag?

    Read the article

  • VISUAL STUDIO 2010 GLOBAL LAUNCH

    - by Sayre Collado
    Hello guys, Visual Studio 2010 is here. You can try it by downloading the free trial at http://www.microsoft.com/visualstudio/en-us/download and view the product's features at http://www.microsoft.com/visualstudio/en-us/products . You can even watch the live launch at http://www.microsoft.com/visualstudio/en-us/watch-it-live Happy Programming.

    Read the article

  • What to call objects that may delete cached data to meet memory constraints?

    - by Brent
    I'm developing some cross-platform software which is intended to run on mobile devices. Both iOS and Android provide low-memory warnings. I plan to make a wrapper class that will free cached resources (like textures) when low-memory warnings are issued (assuming the resource is not in use). If the resource returns to use, it'll re-cache it, etc. I'm trying to think of what this is called. In .NET, it's similar to a "weak reference", but that only really makes sense when dealing with garbage collection, and since I'm using C++ and shared_ptr, a weak reference already has a meaning which is distinct from the one I'm thinking of. There's also the difference that this class will be able to rebuild the cache when needed. What is this pattern called? Edit: Feel free to recommend tags for this question.

    Read the article

  • How to increase the disk cache of Windows 7

    - by Mark Christiaens
    Under Windows 7 (64 bit), I'm reading through 9000 moderately sized files. In total, there is more than 200 MB of data. Using Java (JDK 1.6.21) I'm iterating over the files. The first 1400 or so go at full speed but then speed drops off to 4ms per file. It turns out that the main cost is incurred simply by opening the files. I'm opening the files using new FileInputStream (and of course closing them in time to avoid file leaks). After some investigating, I see that Windows' disk cache is using only 100 MB or so of RAM although I have 8 GiB available. I've tried increasing the cache size using the CacheSet tool but any values I provide are considered out of range. I've also tried enabling the LargeSystemCache registry key but (after rebooting) the CacheSet tool still indicates I'm using 100 MB of cache (and doesn't increase during the test run). Does anybody have any suggestions to "encourage" Windows 7 to cache my 9000 files?
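
    For what it's worth, the per-file open cost is easy to measure with a loop like this (a sketch for a modern JDK; the original test used JDK 1.6, and the directory argument is assumed to be valid):

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;

        public class OpenCost {
            public static void main(String[] args) throws IOException {
                File[] files = new File(args[0]).listFiles();
                long start = System.nanoTime();
                for (File f : files) {
                    try (FileInputStream in = new FileInputStream(f)) {
                        in.read(); // opening dominates; read one byte so the stream is used
                    }
                }
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.println(files.length + " files in " + ms + " ms ("
                        + (double) ms / files.length + " ms/file)");
            }
        }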

    Read the article

  • squid cache disk configuration

    - by Gogonez
    Just wondering how much drive configuration affects squid cache performance. What kind of drive configuration is fast enough for squid? Is it true that block-level parity-striped RAID is faster than byte-level? Will a mirrored drive configuration slow down squid's cache writes? How much swap space does squid really need to store the cache (in reverse-proxy mode) for 200 MB of web documents? What kind of benchmark should I run to analyze squid disk performance?
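
    For reference, the on-disk cache ("swap") space squid uses is set with the cache_dir directive; a sketch (values illustrative):

        # format: cache_dir <type> <path> <size-in-MB> <L1 dirs> <L2 dirs>
        cache_dir aufs /var/spool/squid 20000 16 256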

    Read the article

  • Multiple country-specific domains or one global domain [closed]

    - by CJM
    Possible Duplicate: How should I structure my urls for both SEO and localization? My company currently has its main (English) site on a .com domain with a .co.uk alias. In addition, we have separate sites for certain other countries; these are also hosted in the UK but are distinct sites with country-specific domain names (.de, .fr, .se, .es), and the sites have differing amounts of distinct but overlapping content. For example, the .es site is entirely in Spanish and has a page for every section of the UK site but little else, whereas the .de site has much more content (but still less than the UK site), in German, and geared towards our business focus in that country. The main point is that the content in the additional sites is a subset of the UK's, is translated into the local language, and although it is sometimes simply a translated version of UK content, it is usually 'tweaked' for the local market and, in certain areas, contains unique content. The other sites get a fraction of the traffic of the UK site. This is perfectly understandable, since the biggest chunk of work comes from the UK and we've been established here for over 30 years. However, we want to build up our overseas business, and part of that is building up our websites to support it. The question: I posed a suggestion to the business that we might consider consolidating all our websites onto the .com domain but with /en, /de, /fr, /se, etc. sections, as plenty of other companies seem to do. The theory was that the non-English sites would benefit from the greater reputation of the parent .com domain and that all the content would be mutually supporting; my fear is that the child domains on their own are too small to compete compared to competitors who are established in these countries. Speaking to an SEO consultant from my hosting company, he feels that this move would have some benefit (for the reasons mentioned), but that it would likely be significantly outweighed by the loss of the benefits of localised domains. Specifically, he said that since the Panda update, and particularly the two sets of changes this year, we would lose more than we would gain. Having done some Panda research since, I've had my eyes opened on many issues, but curiously I haven't come across much that mentions localised domain names, though I do question whether Google would see it as duplicated content. It's not that I disagree with the consultant; I just want to know more before I make recommendations to my company. What is the prevailing opinion in this case? Would I gain anything from consolidating country-specific content onto one domain? Would Google see this as duplicate content? Would there be an even greater penalty from the loss of country-specific domains? And is there anything else I can do to help support the smaller, country-specific domains?

    Read the article

  • What's the best way to version CSS and JS URLs?

    - by David Eyk
    As per Yahoo's much-ballyhooed Best Practices for Speeding Up Your Site, we serve up static content from a CDN using far-future cache expiration headers. Of course, we need to occasionally update these "static" files, so we currently add an infix version as part of the filename (based on the SHA1 sum of the file contents). Thus:

        styles.min.css

    becomes:

        styles.min.abcd1234.css

    However, managing the versioned files can become tedious, and I was wondering if a GET argument notation might be cleaner and better:

        styles.min.css?v=abcd1234

    Which do you use, and why? Are there browser- or proxy/cache-related considerations that I should take into account?
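
    For what it's worth, either scheme can be automated; a small C# helper along these lines (names are illustrative) derives the token from the file contents:

        using System;
        using System.IO;
        using System.Security.Cryptography;

        public static class AssetVersion
        {
            // e.g. VersionedUrl(@"C:\site\styles.min.css", "/static/styles.min.css")
            //      -> "/static/styles.min.css?v=abcd1234"
            public static string VersionedUrl(string physicalPath, string url)
            {
                using (var sha1 = SHA1.Create())
                using (var stream = File.OpenRead(physicalPath))
                {
                    byte[] hash = sha1.ComputeHash(stream);
                    // The first four bytes of the SHA1 are plenty for cache busting
                    string token = BitConverter.ToString(hash, 0, 4)
                                               .Replace("-", "")
                                               .ToLowerInvariant();
                    return url + "?v=" + token;
                }
            }
        }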

    Read the article

  • Webcast - C-level Perspectives on How HR Can Take on a Bigger Role in Strategic Planning

    - by Scott Ewart
    The Economist Intelligence Unit (EIU), on behalf of IBM and Oracle, recently surveyed a number of C-level executives in North America and Western Europe to understand how HR can take on a bigger role in driving growth. The resulting reports highlight the actions senior HR leaders can take to place themselves at the heart of the debate on a company's strategic direction. In this session, IBM and Oracle HCM specialists will review the findings of the EIU research reports and provide guidance on how technology innovation can help to align talent strategies with long-term business goals. Participants will gain an understanding of the following: the results of the Economist Intelligence Unit study on "Executive Perceptions of the HR Function"; the differences in perspective between CEOs and CFOs; and how the HR professional can take a bigger role in driving business growth. Join us on Thursday, October 25 for a live webcast.

    Speakers:
    Gina Wells, Global Oracle HCM Leader, IBM Global Business Services
    Michelle Newell, Senior Director, HCM Applications Marketing, Oracle

    Register here for the webcast on Thursday, October 25.

    Read the article
