Search Results

Search found 8744 results on 350 pages for 'yann core'.


  • Xubuntu 14.04 with Compton, strange screen tearing, only when playing videos though (advice needed)

    - by LinuxDudester
    Hello beloved community, Yet again I am in need of your great expertise. I ran into a very strange issue and just can't wrap my mind around it. I'm running Xubuntu 14.04 exclusively, with Compton installed. The OS runs great and I have absolutely no screen tearing when I move my windows around, scroll in my web browser, work in Gimp or Photoshop (wine) or even when I play very graphically demanding games, like Metro Last Light, Euro Truck Driver 2 and so on. There's not a tiny bit of tearing to see, but as soon as I play videos, in xbmc, vlc or parole media player, the tearing begins (strangely enough this does not apply to youtube videos). I followed all available workarounds on askubuntu and the ubuntu forum, like the 50-xserver-command.conf, startx /etc/X11/Xsession /usr/bin/xbmc-standalone -- -bs or libsdl1.2debian fix and many more, but to no avail. I tried the open source Nouveau display drivers as well, but for some odd reason they don't work so great on my system or at least with my graphics card. Even with Compton installed and configured, I have an extreme amount of screen tearing; as soon as I switch to the proprietary Nvidia drivers the screen tearing is gone completely, except for the video playback with xbmc, vlc or parole media player. System info for your reference: OS: Xubuntu 14.04 Linux-x86_64 - Processor: Intel Core i7-4770S CPU @ 3.10GHz - Ram: 16 GB - GeForce GT 750M 1024 MB - Nvidia Driver: 331.38 Has anyone experienced such an odd issue or do you have any advice on how I could fix this? I would appreciate any help! Have a nice day!

    Read the article

  • Is it relatively safe to install kernel from "Canonical Kernel Team ppa" than "Mainline"

    - by tijybba
    I have already read most of the questions about upgrading from mainline builds or compiling from the latest source or a PPA, and concluded that it can cause breakage to a currently stable installed system. My question is regarding the kernel builds from the Canonical Kernel Team PPA, which I have subscribed to on Ubuntu 12.04 64-bit; its description states "This is the core kernel team as hired by Canonical. Do not use this team, use ubuntu-kernel-team instead." My stable kernel is 3.2.0-27.42 from the Ubuntu repository, and I consider the Canonical Kernel Team to be official; it is currently urging me to upgrade to 3.2.0-27.43, which, from the odd number and the PPA description, is categorized as unstable. From this it can be said the next stable release would be 3.2.0-27.44. Is upgrading to the .43 version stable enough to continue with, since .44 will be provided by Ubuntu itself based on .43, and so on? Though I can't expect a lot of changes, does it provide new improvements or just bug fixes, since it is just the next release? Also, apart from the Ubuntu mainline kernel, is the Canonical Kernel Team different? If so, in what development or contribution terms? Is the Ubuntu kernel developed by two different teams or the same team? P.S.: I just noticed that sudo apt-get update && sudo apt-get upgrade offers me the upgrade to the .43 kernel, which normally requires sudo apt-get dist-upgrade to upgrade to a newer available kernel, and otherwise shows a message like "Following packages were not upgraded...". Is this an error or an exception for this Canonical Kernel PPA?

    Read the article

  • BI&EPM in Focus - November 2011

    - by Mike.Hallett(at)Oracle-BI&EPM
    Enterprise Performance Management

    A Thing of Beauty, by Alison Weiss
    Avon's enterprise performance management system delivers accurate information and critical insight to managers at every level of the organization.

    Oracle Crystal Ball Helps Managers Guard Against Volatility, by Alison Weiss

    The Insight Game, by Aaron Lazenby
    Enterprise performance management can deliver insights crucial to navigating the volatility of the global economy—and that's no game of checkers.

    KPI vs. the Bottom Line, by Edward Roske
    For managers, is tracking the key metrics for their departments enough to ensure success for the entire business? The CEO for Oracle partner interRel shares his opinion.

    Deep Integration, by Aaron Lazenby
    The synthesis of Oracle Hyperion applications and core Oracle technologies can deliver deep benefits to analytics-driven businesses.

    Oracle Crystal Ball. Oracle's #1 Solution for Risk Management
    Follow EPM Documentation at Hyperion EPM Info for news about EPM documentation releases and updates (twitter | facebook | Linkedin)
    Whitepaper: Integrating XBRL Into Your Financial Reporting Process
    Oracle Hyperion Disclosure Management Customer Story: StealthGas Inc. Saves 12 Accountant Days Yearly, Validates XBRL-Compliant Financial Filing Data in One Day
    Sherwin-Williams Argentina I.C.S.A. Accelerates Budget Preparation Process by 75%
    BBDO Germany GmbH Consolidates Financial and Planning Processes for More Than 50 Agencies
    StealthGas Inc. Saves 12 Accountant Days Yearly, Validates XBRL-Compliant Financial Filing Data in One Day

    Business Intelligence

    Webcast Replay: Oracle Data Mining & BI EE - Predictive Analytics (Part 2)
    Innovation Award Winners - BI/EPM: HealthSouth, State of MD, Clorox Company, Telenor and Dunkin Brands
    Leeds Teaching Hospitals National Health Service Trust Builds Budget Reports Six Times Faster, Achieves 100% ROI in 12 Months with Oracle Business Intelligence
    Home Credit Group Consolidates Reporting and Saves Time across All Business Units w/ Oracle Essbase & OBIEE
    Autoglass Improves Business Visibility and Services to Customers and Partners with Oracle Business Intelligence

    Events

    Download Oracle OpenWorld Oct 2011 Presentations (select Middleware - BI or Applications - Hyperion)
    Oracle Business Analytics Summits: learn about the latest trends, best practices, and innovations in business intelligence, analytics applications, and data warehousing
    Webcast Nov 15 9am PST: Running the Last Mile, Beyond Financial Consolidations - Streamlining the Close and Addressing the SEC's XBRL Mandate
    Webcast Dec 13 1pm PST: Defining Your Mobile BI Strategy (BICG)
    New Training Available: Oracle BI Publisher 11g R1: Fundamentals
    Webcast Replay: How to Expand the Usage of Analytics in your Organization while Driving Down IT Spend
    Webcast Replay: Real-Time Decisions (RTD) Updated Use Cases for Ecommerce Personalization in Financial Services & Retail

    Read the article

  • CodePlex Daily Summary for Tuesday, October 08, 2013

    CodePlex Daily Summary for Tuesday, October 08, 2013

    Popular Releases

    TimeWatcher: Time Watcher Beta Stable Release: First release. For x86 & x64 computers. Tested on Windows 7 & Windows 8. The alarm functionality is not yet implemented!!
    Keepass2Android: 0.9 preview: Support for Dropbox (read/write/sync). Integrated custom file browser.
    Layered Architecture Sample for Azure: Leave Sample - October 2013 (for Azure): Thank You for downloading Layered Architecture Sample. It is important that you read the accompanying README.txt file for setup and installation instructions. Please download and install the Windows Azure SDK before opening these samples. This is a set of revised samples that illustrates how the layered architecture pattern can be applied to web applications that are deployed to Windows Azure. This version is only supported on Visual Studio 2012. This set contains: LeaveSample-WebRoles - AS...
    NuGet: NuGet 2.7.1: Released October 07, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.7.1 Important note: After downloading the signed build of NuGet.exe, if you perform an update using the "nuget.exe update -self" command, it will revert back to the unsigned build.
    DNNParallax: DNNParallax_01.00.03_Install: This software package includes the DNNParallax skin and the DNNParallax container extensions. Version 01.00.03 Added Parallax JS Script. Version 01.00.02 Using local scripts only. Version 01.00.01 Fixed a minor packaging issue.
    Mugen MVVM Toolkit: Mugen MVVM Toolkit 2.0: Introduction: Mugen MVVM Toolkit makes it easier to develop Silverlight, WPF, WinRT and WP applications using the Model-View-ViewModel design pattern. The purpose of the toolkit is to provide a simple framework and set of tools for getting up to speed quickly with applications based on the MVVM design pattern. The core of Toolkit contains a navigation system, windows management system, models, validation, etc. Mugen MVVM Toolkit contains all the MVVM classes such as ViewModelBase, RelayCommand,...
    Office Ribbon Project (under active development): Ribbon (07. Oct. 2013): Fixed Scrollbar Bug if DropDown Button is bigger than screen. Added Office 2013 Theme. Fixed closing the Ribbon caused a null reference exception in the RibbonButton.Dispose if the DropDown was not created yet. Fixed memory leak (unhooked events after Dispose). Fixed ToolStrip Selected Text 2013 and 2007 for Blue and Standard themes.
    Dynamics NAV Application Profiler: Dynamics NAV Application Profiler: Dynamics NAV Application Profiler
    Ghostscript.NET: Ghostscript.NET v.1.1.0.: v.1.1.0. added GhostscriptViewer state handling (SaveState, RestoreState). GhostscriptRasterizer constructor is extended in order to support usage of the existing GhostscriptViewer instance. Fixed problem while using a 32-bit assembly with 32-bit version of Ghostscript on 64-bit Windows: it couldn't find a registry key of installed Ghostscript. Reported and fixed by "r0land". v.1.0.9. implemented EPS (Encapsulated PostScript) support for the GhostscriptViewer. added GhostscriptRasterize...
    Ghostscript Studio: Ghostscript.Studio v.1.0.2: Ghostscript Studio is an easy to use Ghostscript IDE, a tool that facilitates the use of the Ghostscript interpreter by providing you with a graphical interface for postscript editing and file conversions. Ghostscript Studio allows you to preview postscript files, edit the code and execute them in order to convert PDF documents and other formats. The program allows you to convert between PDF, Postscript, EPS, TIFF, JPG and PNG by using the Ghostscript.NET Processor. v.1.0.2. added custom -c s...
    cmdradio: v0.1.1 binary: Default download in win32. For other OS see here. This is an alpha version. Please report all bugs.
    Squiggle - A free open source LAN Messenger: Squiggle 3.3 Alpha: Allow using environment variables in configuration file (history db connection string, download folder location, display name, group and message). Fix for history viewer to show the correct history entries. History saved with UTC timestamp. This is an alpha release and not recommended for use in production.
    Media Companion: Media Companion MC3.579b: Fixed IMDB actor scraping for Movies and TV. Note: there are a couple of new functions that are not active, as this release needed to be done due to IMDB change. New* TV - context menu for rescrapeing Poster image/Banner image. Fixed* Both - Fixed actor scraping from IMDB * Movie - Fixed Tableview if Movie's movieset was not in MC's list of moviesets * Movie - Rename with mediainfo now lowercase. * Both - added ignore "A " in titles. Separate option in General Preferences. * Tv - Changing ...
    VidCoder: 1.5.7 Beta: Updated HandBrake core to SVN 4819. About dialog now pulls down HandBrake version from the DLL. Added a confirmation dialog to Stop if the encode has been going on for more than 5 minutes. Fixed handling of unicode characters for input and output filenames. We now encode to UTF-8 before passing to HandBrake. Fixed a crash in the queue multiple titles dialog. Added code to rescue tool windows which get placed outside of the visible screen area.
    Wsus Package Publisher: Release v1.3.1310.05: Enhance the "Reboot Remote Computers" by adding a timer before the reboot occurs, so that remote users can save their documents and close applications. You can also add a message to be displayed. In 'Tools'->'Settings'-> Misc Tab, you can set a default message. Enhance the "Compare Computers against AD" by choosing OUs to include in the comparison.
    Pulse: Pulse 0.6.7.3: Pulse is now accepting donations. To donate by Bitcoin or PayPal see https://pulse.codeplex.com/wikipage?title=Donations Lots of updates in v0.6.7.3: (Feature) New option allows you to disable wallpaper changing when a full screen application is running. This way Pulse doesn't slow down/lag your videos and games :) (Fix) Some users were getting Wallbase errors when logging in. This has been fixed. (Feature) Right click a provider and you can now make a copy of it by selecting the "Dupl...
    MoreTerra (Terraria World Viewer): MoreTerra 1.11.1: Release 1.11.1. Bug Fixes: Added more tile blocks (Clouds, crimstone). Added items (binoculars, rope, Pirahna Gun). Added ores (Lead, Tin). Chests now work, I broke them yesterday. Known Issues: I am having trouble with new background walls. So you will see a red outline for crimson then a pink inside. Same with where I think the queen bee lives.
    VG-Ripper & PG-Ripper: PG-Ripper 1.4.19: NEW: Added Option to login as Guest. NEW: Added Menu Option to delete a Forum Account. NEW: Added Support for "ImageTeam.org" links. FIXED: Fixed Ripping of http://forum.babeunion.com Forums.
    SimpleExcelReportMaker: Serm 0.03: SourceCode and Sample. .Net Framework 3.5, AnyCPU compile.
    Application Architecture Guidelines: App Architecture Guidelines 3.0.8: This document is an overview of software qualities, principles, patterns, practices, tools and libraries.

    New Projects

    Automating windows Azure SQL DB Backup using Worker role: This tool is used for backup functionality on SQL Azure database and tables in a periodical timeline.
    Billard Management System: Billard Management System
    CRM 2013 Duplicate Detection: Add client side duplicate detection back into Dynamics CRM 2013
    Demo2: Demo2
    Enough HttpClientExtensions: Enough HttpClientCachingExtensions allows you to store the response of a HttpClient call if caching is allowed.
    GameEngine: A
    InvisibleShortcuts: InvisibleShortcuts, create shortcuts to launch applications, visit folders and websites, manipulate files, execute system command lines and do anything!
    jean108jabbr: 1
    MyProject2: This is project
    Series Manager: With this tool it is easy to name all your series with the right season, episode and the title of the specific episode. Meta information is provided by the TVDB
    sport: sportol
    TechResearch: Technical research
    TestProjectTodor: A test project for playing with TFS

    Read the article

  • How to Properly Make use of Codeigniter's HMVC

    - by Branden Stilgar Sueper
    I have been having problems wrapping my brain around how to properly utilize the modular extension for Codeigniter. From what I understand, modules should be entirely independent of one another so I can work on one module and not have to worry about what module my teammate is working on. I am building a frontend and a backend to my site, and am having confusion about how I should structure my applications. The first part of my question is: should I use the app root controllers to run modules, or should users go directly to the modules by urls? IE: in my welcome.php

        public function index()
        {
            $this->data['blog'] = Modules::run( 'blog' );
            $this->data['main'] = Modules::run( 'random_image' );
            $this->load->view('v_template', $this->data);
        }

        public function calendar()
        {
            $this->data['blog'] = Modules::run( 'blog' );
            $this->data['main'] = Modules::run( 'calendar' );
            $this->load->view('v_template', $this->data);
        }

    My second part of the question is: should I create separate front/back end module folders?

        -config
        -controllers
            welcome.php
            -admin
                admin.php
        -core
        -helpers
        -hooks
        -language
        -libraries
        -models
        -modules-back
            -dashboard
            -logged_in
            -login
            -register
            -upload_images
            -delete_images
        -modules-front
            -blog
            -calendar
            -random_image
            -search
        -views
            v_template.php
            -admin
                av_template.php

    Any help would be greatly appreciated.

    Read the article

  • Grow Your Oracle Exadata and Manageability Business: Engage With Us to Find Out How

    - by swalker
    Don't miss out on the first EMEA Partner Community Cast! If you are a business decision maker, project leader, technical leader or business development manager you will gain incredible value from these events, and we believe that this introduction to Oracle Partner Communities will bring you a wealth of new opportunities. Join Us on December 7th, 10:00 GMT (11:00 CET) for the first broadcast the Exadata and Manageability solution areas. In just 30 minutes, you will find out more about Oracle's Exadata, Manageability and Oracle Enterprise Manager 12c solutions, and the value they can generate for you and your customers. See the full agenda here. Hosted by Paul Thompson, Senior Director, Alliances and Solutions Partner Programs, Oracle EMEA and Javier Puerta, Director, Core Technology Partner Programs, Oracle EMEA, our special guests include: Steve McNickle, Vice President Europe, cVidya Dave Sanderson, Associate Partner, Technology Reply Patrick Rood, Lead for Indirect Manageability Business, Oracle EMEA Register Now Partner Community Casts are a new series of interactive broadcasts designed to help you truly engage with Oracle on an individual level, build expertise around your specialist solution area and make valuable new contacts in Oracle and other Oracle partners. Community Casts can be viewed live from our online platform. Audience members have the opportunity to submit questions during the show via chat or social media outlets, many of which are answered on-air. Learn more about EMEA Partner Community Casts Register Now to learn how participation in the Exadata and Manageability Partner Communities will help your business flourish!

    Read the article

  • MDX needs a function or macro syntax

    - by Darren Gosbell
    I was having an interesting discussion with a few people about the impact of named sets on performance (the same discussion noted by Chris Webb here: http://cwebbbi.wordpress.com/2011/03/16/referencing-named-sets-in-calculations). And apparently the core of the performance issue comes down to the way named sets are materialized within the SSAS engine. Which lead me to the thought that what we really need is a syntax for declaring a non-materialized set or to take this even further a way of declaring an MDX expression as function or macro so that it can be re-used in multiple places. Because sometimes you do want the set materialised, such as when you use an ordered set for calculating rankings. But a lot of the time we just want to make our MDX modular and want to avoid having to repeat the same code over and over. I did some searches on connect and could not find any similar suggestions so I posted one here: https://connect.microsoft.com/SQLServer/feedback/details/651646/mdx-macro-or-function-syntax Although apparently I did not search quite hard enough as Chris Webb made a similar suggestion some time ago, although he also included a request for true MDX stored procedures (not the .Net style stored procs that we have at the moment): https://connect.microsoft.com/SQLServer/feedback/details/473694/create-parameterised-queries-and-functions-on-the-server Chris also pointed out this post that he did last year http://cwebbbi.wordpress.com/2010/09/13/iccube/ where he pointed out that the icCube product already has this sort of functionality. So if you think either or both of these suggestions is a good idea then I would encourage you to click on the links and vote for them.

    Read the article

  • Agile development challenges

    - by Bob
    With Scrum / user story / agile development, how does one handle scheduling out-of-sync tasks that are part of a user story? We are a small gaming company working with a few remote consultants who do graphics and audio work. Typically, graphics work should be done at least a week (sometimes 2 weeks) in advance of the code so that it's ready for integration. However, since SCRUM is supposed to focus on user stories, how should I split the stories across iteration so that they still follow the user story model? Ideally, a user story should be completed by all the team members in the same iteration, I feel that splitting them in any way violates the core principle of user story driven development. Also, one front end developer can work at 2X pace of backend developers. However, that throws the scheduling out of sync as well because he is either constantly ahead of them or what we have done is to have him work on tasks that not specific to this iteration just to keep busy. Either way, it's the same issue as above, splitting up user story tasks. If someone can recommend an active Google agile development group that discusses these and other issues, that'll be great. Also, if you know of a free alternative to Pivotal Labs, let me know as well. I'm looking now at Agilo.

    Read the article

  • Migrating Forms to Java or ADF, the truth and no FUD

    - by Grant Ronald
    The question about migrating Forms to Java (or ADF or APEX) comes up time and time again.  I wanted to pull some core information together in a single blog post to address this question. The first question I always ask is "WHY" - Forms may still be a viable option for you so "if it ain't broke don't fix it".  Bottom line is whatever anyone tells you, its going to be a considerable effort and cost to migrate from Forms to something else so the business is going to want to know WHY you spend all those hard earned dollars switching from something that might have been serving you quite adequately. Second point, if you are going to switch, I would encourage you NOT to look at building a Forms clone.  So many times I see people trying to build an ADF application and EXACTLY mimic the Forms model - ADF is NOT a Forms clone.  You should be building to the sweet spot of your target technology, not your 20 year old client/server technology.  This is also the chance for the business to embrace change, so maybe look at new processes, channels and technology options that weren't available when you first developed your Forms applications. To help you understand what is involved, I've put together a number of resources. Thinking about migration of Forms to Java, ADF or APEX, read this to prepare yourself Oracle Forms to ADF: When, Why and How - this gives you an overview of our vision, directly from Oracle Product Management Redeveloping a Forms Application with Oracle JDeveloper and Oracle ADF.  This is a conference session from myself and Lynn Munsinger on how ADF can be used in a Forms migration/rewrite As someone who manages both Forms and ADF Product Management teams, I've a foot in either camp and am happy to see you use either tool.  However, I want you to be able to make an informed decision.  My hope is that there information sources will help you do that.

    Read the article

  • Screen flickering when using midrange brightness values on Dell XPS

    - by Eliran Malka
    After a fresh Ubuntu install on my laptop, I discovered the function keys for screen brightness control (Fn+F4 and Fn+F5) are not working. Digging around here, I managed to get it to work by following the solution suggested on this post and that one, but alas — after applying it, a strange problem occurred: setting the brightness level to any value other than minimum or maximum, the screen starts flickering back and forth from the selected level to full brightness, apparently due to Dell's power saver attempting to dim the screen to adjust the brightness levels. I looked for a solution here on the site, and possibly everywhere else, to no avail.

    Things I also tried:
    - Manually controlling the brightness by configuring the ACPI level (setting values with echo [some_value] | sudo tee /sys/class/backlight/[vendor]_backlight/[some_key]), without success.
    - Installing the Intel graphics driver, thinking it was missing. I then realized it is installed out of the box, after installing Mesa Utils.

    How can I resolve this?

    Environment:
    Model: Dell Studio XPS 13
    OS: Windows 7 64bit / Ubuntu 12.04 32bit (dual boot)
    Graphics Driver: Intel HD 3000 (Sandybridge Mobile x86/MMX/SSE2)

    lshw -C display output:

        *-display
            description: VGA compatible controller
            product: 2nd Generation Core Processor Family Integrated Graphics Controller
            vendor: Intel Corporation
            physical id: 2
            bus info: pci@0000:00:02.0
            version: 09
            width: 64 bits
            clock: 33MHz
            capabilities: msi pm vga_controller bus_master cap_list rom
            configuration: driver=i915 latency=0
            resources: irq:47 memory:f0000000-f03fffff memory:e0000000-efffffff ioport:2000(size=64)

    Read the article

  • How to reduce the fan noise and how to increase battery life?

    - by mehdi
    I have a brand new Sony Vaio S series laptop (VPCSA2DGX). It came factory installed with Windows 7 Professional Edition 64-bit. It runs an Intel Core i5, 500 GB HDD, 4 GB RAM. First I installed Ubuntu 11.10 64-bit alongside Windows to dual boot. Later, since the problem was not solved, I installed Ubuntu 12.04 64-bit alongside Windows to dual boot. However, the problem keeps annoying me. Problem: When running Ubuntu 11.10/12.04, the battery lasts only about 1.5 hours. The fan runs loud and continuously, and there is a lot of heat generated. System monitor shows less than 5% CPU used. My laptop has hybrid graphics, and I tried turning off the AMD graphics card and keeping the Intel graphics card on. However, I cannot get the fan noise or heat to go away, and consequently the battery drain continues. BTW, in Windows the laptop gives 4-5 hours of battery power, the fan is silent and there is no heat problem. Any ideas on how to reduce the fan noise and how to increase battery life in Ubuntu 11.10/12.04?

    Read the article

  • Parallel Classloading Revisited: Fully Concurrent Loading

    - by davidholmes
    Java 7 introduced support for parallel classloading. A description of that project and its goals can be found here: http://openjdk.java.net/groups/core-libs/ClassLoaderProposal.html The solution for parallel classloading was to add to each class loader a ConcurrentHashMap, referenced through a new field, parallelLockMap. This contains a mapping from class names to Objects to use as a classloading lock for that class name. This was then used in the following way:

        protected Class loadClass(String name, boolean resolve) throws ClassNotFoundException {
            synchronized (getClassLoadingLock(name)) {
                // First, check if the class has already been loaded
                Class c = findLoadedClass(name);
                if (c == null) {
                    long t0 = System.nanoTime();
                    try {
                        if (parent != null) {
                            c = parent.loadClass(name, false);
                        } else {
                            c = findBootstrapClassOrNull(name);
                        }
                    } catch (ClassNotFoundException e) {
                        // ClassNotFoundException thrown if class not found
                        // from the non-null parent class loader
                    }
                    if (c == null) {
                        // If still not found, then invoke findClass in order
                        // to find the class.
                        long t1 = System.nanoTime();
                        c = findClass(name);
                        // this is the defining class loader; record the stats
                        sun.misc.PerfCounter.getParentDelegationTime().addTime(t1 - t0);
                        sun.misc.PerfCounter.getFindClassTime().addElapsedTimeFrom(t1);
                        sun.misc.PerfCounter.getFindClasses().increment();
                    }
                }
                if (resolve) {
                    resolveClass(c);
                }
                return c;
            }
        }

    Where getClassLoadingLock simply does:

        protected Object getClassLoadingLock(String className) {
            Object lock = this;
            if (parallelLockMap != null) {
                Object newLock = new Object();
                lock = parallelLockMap.putIfAbsent(className, newLock);
                if (lock == null) {
                    lock = newLock;
                }
            }
            return lock;
        }

    This approach is very inefficient in terms of the space used per map and the number of maps. First, there is a map per classloader. As per the code above, under normal delegation the current classloader creates and acquires a lock for the given class, checks if it is already loaded, then asks its parent to load it; the parent in turn creates another lock in its own map, checks if the class is already loaded and then delegates to its parent and so on till the boot loader is invoked, for which there is no map and no lock. So even in the simplest of applications, you will have two maps (in the system and extensions loaders) for every class that has to be loaded transitively from the application's main class. If you knew beforehand which loader would actually load the class, the locking would only need to be performed in that loader. As it stands the locking is completely unnecessary for all classes loaded by the boot loader. Secondly, once loading has completed and findClass will return the class, the lock and the map entry are completely unnecessary. But as it stands, the lock objects and their associated entries are never removed from the map. It is worth understanding exactly what the locking is intended to achieve, as this will help us understand potential remedies to the above inefficiencies. Given this is the support for parallel classloading, the class loader itself is unlikely to need to guard against concurrent load attempts - and if that were not the case it is likely that the classloader would need a different means to protect itself rather than a lock per class.
    Ultimately when a class file is located and the class has to be loaded, defineClass is called which calls into the VM - the VM does not require any locking at the Java level and uses its own mutexes for guarding its internal data structures (such as the system dictionary). The classloader locking is primarily needed to address the following situation: if two threads attempt to load the same class, one will initiate the request through the appropriate loader and eventually cause defineClass to be invoked. Meanwhile the second attempt will block trying to acquire the lock. Once the class is loaded the first thread will release the lock, allowing the second to acquire it. The second thread then sees that the class has now been loaded and will return that class. Neither thread can tell which did the loading and they both continue successfully. Consider if no lock was acquired in the classloader. Both threads will eventually locate the file for the class, read in the bytecodes and call defineClass to actually load the class. In this case the first to call defineClass will succeed, while the second will encounter an exception due to an attempted redefinition of an existing class. It is solely for this error condition that the lock has to be used. (Note that parallel capable classloaders should not need to be doing old deadlock-avoidance tricks like doing a wait() on the lock object!) There are a number of obvious things we can try to solve this problem and they basically take three forms:

        1. Remove the need for locking. This might be achieved by having a new version of defineClass which acts like defineClassIfNotPresent - simply returning an existing Class rather than triggering an exception.
        2. Increase the coarseness of locking to reduce the number of lock objects and/or maps. For example, using a single shared lockMap instead of a per-loader lockMap.
        3. Reduce the lifetime of lock objects so that entries are removed from the map when no longer needed (eg remove after loading, use weak references to the lock objects and cleanup the map periodically).

    There are pros and cons to each of these approaches. Unfortunately a significant "con" is that the API introduced in Java 7 to support parallel classloading has essentially mandated that these locks do in fact exist, and they are accessible to the application code (indirectly through the classloader if it exposes them - which a custom loader might do - and regardless they are accessible to custom classloaders). So while we can reason that we could do parallel classloading with no locking, we can not implement this without breaking the specification for parallel classloading that was put in place for Java 7. Similarly we might reason that we can remove a mapping (and the lock object) because the class is already loaded, but this would again violate the specification because it can be reasoned that the following assertion should hold true:

        Object lock1 = loader.getClassLoadingLock(name);
        loader.loadClass(name);
        Object lock2 = loader.getClassLoadingLock(name);
        assert lock1 == lock2;

    Without modifying the specification, or at least doing some creative wordsmithing on it, options 1 and 3 are precluded.
Even then there are caveats, for example if findLoadedClass is not atomic with respect to defineClass, then you can have concurrent calls to findLoadedClass from different threads and that could be expensive (this is also an argument against moving findLoadedClass outside the locked region - it may speed up the common case where the class is already loaded, but the cost of re-executing after acquiring the lock could be prohibitive. Even option 2 might need some wordsmithing on the specification because the specification for getClassLoadingLock states "returns a dedicated object associated with the specified class name". The question is, what does "dedicated" mean here? Does it mean unique in the sense that the returned object is only associated with the given class in the current loader? Or can the object actually guard loading of multiple classes, possibly across different class loaders? So it seems that changing the specification will be inevitable if we wish to do something here. In which case lets go for something that more cleanly defines what we want to be doing: fully concurrent class-loading. Note: defineClassIfNotPresent is already implemented in the VM as find_or_define_class. It is only used if the AllowParallelDefineClass flag is set. This gives us an easy hook into existing VM mechanics. Proposal: Fully Concurrent ClassLoaders The proposal is that we expand on the notion of a parallel capable class loader and define a "fully concurrent parallel capable class loader" or fully concurrent loader, for short. A fully concurrent loader uses no synchronization in loadClass and the VM uses the "parallel define class" mechanism. For a fully concurrent loader getClassLoadingLock() can return null (or perhaps not - it doesn't matter as we won't use the result anyway). At present we have not made any changes to this method. All the parallel capable JDK classloaders become fully concurrent loaders. This doesn't require any code re-design as none of the mechanisms implemented rely on the per-name locking provided by the parallelLockMap. This seems to give us a path to remove all locking at the Java level during classloading, while retaining full compatibility with Java 7 parallel capable loaders. Fully concurrent loaders will still encounter the performance penalty associated with concurrent attempts to find and prepare a class's bytecode for definition by the VM. What this penalty is depends on the number of concurrent load attempts possible (a function of the number of threads and the application logic, and dependent on the number of processors), and the costs associated with finding and preparing the bytecodes. This obviously has to be measured across a range of applications. Preliminary webrevs: http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.hotspot/ http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.jdk/ Please direct all comments to the mailing list [email protected].
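
    To make the proposal concrete, here is a minimal sketch (my own illustration, not taken from the webrevs above) of how the JDK-internal loadClass shown earlier might look for a fully concurrent loader; the perf-counter bookkeeping is omitted, and the parallel-define behaviour of defineClass (the existing find_or_define_class path) is assumed to resolve racing definitions:

        protected Class loadClass(String name, boolean resolve) throws ClassNotFoundException {
            // No getClassLoadingLock/synchronized block: racing threads are tolerated.
            Class c = findLoadedClass(name);
            if (c == null) {
                try {
                    if (parent != null) {
                        c = parent.loadClass(name, false);
                    } else {
                        c = findBootstrapClassOrNull(name);
                    }
                } catch (ClassNotFoundException e) {
                    // not found by the parent; fall through to findClass below
                }
                if (c == null) {
                    // findClass ends up calling defineClass; with parallel-define
                    // semantics a second definer gets the already-defined class back
                    // instead of the duplicate-definition error described above.
                    c = findClass(name);
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }

    Nothing changes for callers; the only behavioural difference is that two racing threads may both read the bytecodes and reach defineClass, relying on the VM to hand back a single defined class.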

    Read the article

  • Oracle Knowledge Courses

    - by mseika
    Oracle Knowledge products offer simple and convenient ways for users to access knowledge contained in corporate information stores. With Oracle Knowledge Training, you learn how to utilize tools that improve customer service and satisfaction by helping customers find more relevant answers to questions online or from a service agent guided by a scalable knowledge management platform. The following courses have been scheduled at Oracle in Utrecht:

    Oracle Knowledge Overview Rel 8.5 (1 day)
    Learn the technical architecture of Oracle Knowledge at a high level and the key technologies including InfoCenter, iConnect, Search, Information Manager, Answerflow and Analytics.
    Dates: to be scheduled

    Knowledge Technical Architecture and Configuration Rel 8.5 (5 days)
    Learn to implement and maintain Oracle Knowledge's core technologies through hands-on exercises including Intelligent Search, Information Manager, iConnect, AnswerFlow and Analytics.
    Dates: 13-17 January 2014 (afternoon/evening)
    Location: Live Virtual Class

    Knowledge Content Administration Rel 8.5 (2 days)
    Learn to implement, use and manage knowledge and content creation with Oracle Knowledge Information Manager.
    Dates: 4-5 December 2013
    Location: Utrecht, The Netherlands

    Knowledge Analytics Rel 8.5 (1 day)
    Learn KPI analyses and how to close gaps using reports and tools provided in Oracle Business Intelligence Enterprise Edition.
    Dates: 6 December
    Location: Utrecht, The Netherlands

    Remember: your OPN discount is always applied to the standard prices shown on the Oracle University web pages. For assistance in booking, scheduling requests and more information contact the Education Service Desk: eMail: [email protected] Telephone: +31 30 66 27 675

    Read the article

  • How can I install oracle-java7 from webupd8 ppa?

    - by Ahmed Zain El Dein
    I installed ppa:webupd8team/java and I get the following error. Output from: sudo apt-get install oracle-java7-installer

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          binfmt-support visualvm ttf-baekmuk ttf-unfonts ttf-unfonts-core ttf-kochi-gothic ttf-sazanami-gothic ttf-kochi-mincho ttf-sazanami-mincho ttf-arphic-uming
        The following packages will be upgraded:
          oracle-java7-installer
        1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/16.0 kB of archives.
        After this operation, 64.5 kB of additional disk space will be used.
        Could not exec dpkg!
        E: Sub-process /usr/bin/dpkg returned an error code (100)

    Afterwards I ran the following commands to try to resolve the issue, because dpkg does not actually exist: there is no dpkg in /usr/bin.

        mkdir /tmp/dpkg
        cd /tmp/dpkg
        wget http://archive.ubuntu.com/ubuntu/pool/main/d/dpkg/dpkg_1.15.5.6ubuntu4_i386.deb
        ar x dpkg*.deb data.tar.gz
        tar xfvz data.tar.gz ./usr/bin/dpkg
        sudo cp ./usr/bin/dpkg /usr/bin/
        sudo apt-get update
        sudo apt-get install --reinstall dpkg

    Then I get this:

        $ sudo apt-get install --reinstall dpkg
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 6 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/1,814 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        dpkg: warning: 'dpkg-deb' not found on PATH.
        dpkg: 1 expected program(s) not found on PATH.
        NB: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.
        E: Sub-process /usr/bin/dpkg returned an error code (2)

    How can I fix this?

    Read the article

  • Node.JS testing with Jasmine, databases, and pre-existing code

    - by Jim Rubenstein
    I've recently built the start of a core system which is likely going turn into a monster product. I'm building the system with node.js, and decided after I got a small base built, that It'd be a great idea to start using some sort of automated test suite to test the application. I decided to use jasmine, as it seems pretty solid and has a lot of features for stubbing spying and mocking methods and classes. The application has a lot of external data stores and api access (kestrel, mysql, mongodb, facebook, and more). My issue is, I've got a good amount of code written that I want to start testing - as it represents the underpinnings of the application. What are the best practices for testing methods/classes that access external APIs that I may or may not have control over? As an example, I have a data structure that fetches a bunch of data from a MySQL database. I want to test the method that retrieves the data; and I'm not sure how to go about it. I could test the fetch method which is supposed to return an array of objects, but to isolate the method from the database, I need to define my own fixture data. So what I end up doing is stubbing the mysql execution, and returning a static dataset. So, I end up writing a function that returns the dataset that makes my test pass. That doesn't seem to actually test the code, other than verifying a method is being called. I know this is kind of abstract and vague, it seems that the idea of testing is very much abstract though, so hopefully someone has some experience and can guide me in the right direction. Any advice, or reading I can do is more than welcomed. Thanks in advance.
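
    One language-agnostic way to keep such a fetch method testable is to inject the data access behind an interface so the test can substitute a canned implementation, and to keep the unit test focused on the logic around the data rather than on the fixture itself. Below is a rough sketch of that shape, written in Java with hypothetical names purely to illustrate the pattern (a Jasmine suite would stub the MySQL call the same way):

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical data-access interface; the production implementation would wrap
        // the real MySQL client, while the test supplies a fake returning fixture data.
        interface UserStore {
            List<String> fetchActiveUserNames();
        }

        class FakeUserStore implements UserStore {
            public List<String> fetchActiveUserNames() {
                List<String> names = new ArrayList<>();
                names.add("alice");
                names.add("bob");
                return names;
            }
        }

        // Unit under test: depends only on the interface, never on the database.
        class UserReport {
            private final UserStore store;
            UserReport(UserStore store) { this.store = store; }

            String headline() {
                List<String> names = store.fetchActiveUserNames();
                return names.size() + " active users: " + String.join(", ", names);
            }
        }

        public class UserReportTest {
            public static void main(String[] args) {
                UserReport report = new UserReport(new FakeUserStore());
                String headline = report.headline();
                // The assertion exercises UserReport's formatting logic, not the fixture.
                if (!"2 active users: alice, bob".equals(headline)) {
                    throw new AssertionError("unexpected headline: " + headline);
                }
                System.out.println("ok: " + headline);
            }
        }

    The caveat the asker raises still applies: a test like this only proves the surrounding logic handles the fixture correctly; the actual SQL and connection handling still need a separate, slower integration test against a real (or embedded) database.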

    Read the article

  • Frequent disconnects with oneiric server using wlan AR9285

    - by John Neil
    I'm getting a large number of disconnects from my wireless when I switched to oneiric server (I did not see these happen with oneiric desktop) from my AR9285 wireless LAN device. Here is the syslog snippet: Oct 17 09:43:17 weather kernel: [ 1537.329138] wlan0: deauthenticated from 00:12:17:7a:8e:42 (Reason: 7) Oct 17 09:43:17 weather kernel: [ 1537.340409] cfg80211: All devices are disconnected, going to restore regulatory settings Oct 17 09:43:17 weather kernel: [ 1537.340423] cfg80211: Restoring regulatory settings Oct 17 09:43:17 weather kernel: [ 1537.340435] cfg80211: Calling CRDA to update world regulatory domain Oct 17 09:43:17 weather kernel: [ 1537.348571] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain Oct 17 09:43:17 weather kernel: [ 1537.348581] cfg80211: World regulatory domain updated: Oct 17 09:43:17 weather kernel: [ 1537.348586] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) Oct 17 09:43:17 weather kernel: [ 1537.348594] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Oct 17 09:43:17 weather kernel: [ 1537.348600] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) Oct 17 09:43:17 weather kernel: [ 1537.348607] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) Oct 17 09:43:17 weather kernel: [ 1537.348613] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Oct 17 09:43:17 weather kernel: [ 1537.348620] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Here is the relevant lspci output: # lspci | grep Atheros 02:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01) I have done quite a bit of searching and saw discussions for previous versions of ubuntu that recommended installing the linux-backports-modules package. However, this does not appear to be available for oneiric (just the headers are listed as a package). Any advice on how to achieve a stable wireless connection for this server? It's location mitigates against using a wired connection.

    Read the article

  • SharePoint 2010 Single Page Apps without a Master Page

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2014/06/06/156827.aspxWell, maybe a stretch, but I am inclined to believe it is so.  I have been using  the JavaScript Client Object Model (JCSOM) for about 6 months now and I believe it can do about 80% of my job quickly without much fanfare.  When building sites in SharePoint no one wants the OOTB list views, etc. They want a custom look and feel!  I used to think in previous engagements that this would mean some custom server code or at least a data-form web part.   Since coming on-board in my current engagement, I have been forced because we don’t own the hosting site to come up with innovative ways to customize the UI of SharePoint.  We can push content via sandbox solutions and use JCSOM from within SharePoint Designer to do almost all customizations.  I have been using the following methodology to accomplish this: 1. Create an HTML file, which links CSS and JavaScript Files 2. Create and ASPX Web Part Page, Include a Content Editor Web Part and link to the HTML page created above.   So basically once it was done, I could copy , paste,  and rename those 4 items: CC, JS, HTML. ASPX and using MVVM just change the Model, View, and View-Model in the JavaScript file.  in about 5 minutes, I could create a completely new web part with SharePoint data.  Styling would take a little longer.  Some issues that would crop up: 1.  Multiple(s) of these web parts would not work well together on the same page (context). 2.  To separate the Web parts and context I would create a separate page for each web part and link them to a tabs layout via a Page Viewer web part or I frame.  Easy to do and not a problem but a big load problem as these web part pages even with minimal master had huge footprints.  (master page and page web part zones)   I kept thinking of my experience with SharePoint 2013 and apps!  The JavaScript was loaded from within the app, why can’t we do that in 2010 and skip the master page and web part zones. I thought at first, just link to sp.js but that didn’t work so I searched the web and found a link which did not work at all in my environment but helped me create a solution that would kudos to (Will). <!DOCTYPE html> <%@ Page %> <%@ Register Tagprefix="SharePoint" Namespace="Microsoft.SharePoint.WebControls" Assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %> <%@ Import Namespace="Microsoft.SharePoint" %> <html> <head> <link rel="Stylesheet" type="text/css" href="../CSS/smoothness/jquery-ui-1.10.4.custom.min.css"> </head> <body > <form runat="server"> <!-- the following 5 js files are required to use CSOM --> <script src="/_layouts/1033/init.js" type="text/javascript" ></script> <script src="/_layouts/MicrosoftAjax.js" type="text/javascript" ></script> <script src="/_layouts/sp.core.js" type="text/javascript" ></script> <script src="/_layouts/sp.runtime.js" type="text/javascript" ></script> <script src="/_layouts/sp.js" type="text/javascript" ></script> <!-- include your app code --> <script src="../scripts/jquery-1.9.1.js" type="text/javascript" ></script> <script src="../Scripts/jquery-ui-1.10.3.custom.min.js" type="text/javascript"></script> <script src="../scripts/App.js" type="text/javascript" ></script> <div ID="Wrapper"> </div> <SharePoint:FormDigest ID="FormDigest1" runat="server"></SharePoint:FormDigest> </form> </body> </html> Notice that I have the scripts loaded within the body! 
I discovered this by accident in trying to get Will’s solution to work, it made this work just like normal JCSOM from the master page.  I am sure there are other ways to do this, but I am a full time developer, so I’ll let someone else investigate the alternatives.  I have an example page showing an Announcements list as a Booklet which is a JQuery Plug-In.  Here is the page source notice the footprint is light.   <!DOCTYPE html> <html> <head> <link rel="Stylesheet" type="text/css" href="../CSS/smoothness/jquery-ui-1.10.4.custom.min.css"> <link href="../CSS/jquery.booklet.latest.css" type="text/css" rel="stylesheet" media="screen, projection, tv" /> <link href="../CSS/bookletannouncement.css" type="text/css" rel="stylesheet" media="screen, projection, tv" /> </head> <body > <form name="ctl00" method="post" action="BookletAnnouncements2.aspx" onsubmit="javascript:return WebForm_OnSubmit();" id="ctl00"> <div> <input type="hidden" name="__REQUESTDIGEST" id="__REQUESTDIGEST" value="0x3384922A8349572E3D76DC68A3F7A0984CEC14CB9669817CCA584681B54417F7FDD579F940335DCEC95CFFAC78ADDD60420F7AA82F60A8BC1BB4B9B9A57F9309,06 Jun 2014 14:13:27 -0000" /> <input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwUBMGRk20t+bh/NWY1sZwphwb24pIxjUbo=" /> </div> <script type="text/javascript"> //<![CDATA[ var g_presenceEnabled = true;var _fV4UI=true;var _spPageContextInfo = {webServerRelativeUrl: "\u002fsites\u002fDemo50\u002fTeamSite", webLanguage: 1033, currentLanguage: 1033, webUIVersion:4,pageListId:"{ee707b5f-e246-4246-9e55-8db11d09a8cb}",pageItemId:167,userId:1, alertsEnabled:false, siteServerRelativeUrl: "\u002fsites\u002fdemo50", allowSilverlightPrompt:'True'};//]]> </script> <script type="text/javascript" src="/_layouts/1033/init.js?rev=lEi61hsCxcBAfvfQNZA%2FsQ%3D%3D"></script> <script type="text/javascript"> //<![CDATA[ function WebForm_OnSubmit() { UpdateFormDigest('\u002fsites\u002fDemo50\u002fTeamSite', 1440000); return true; } //]]> </script> <!-- the following 5 js files are required to use CSOM --> <script src="/_layouts/1033/init.js"></script> <script src="/_layouts/MicrosoftAjax.js"></script> <script src="/_layouts/sp.core.js"></script> <script src="/_layouts/sp.runtime.js"></script> <script src="/_layouts/sp.js"></script> <!-- include your app code --> <script src="../scripts/jquery-1.9.1.js"></script> <script src="../Scripts/jquery-ui-1.10.3.custom.min.js" type="text/javascript"></script> <script src="../Scripts/jquery.easing.1.3.js"></script> <script src="../Scripts/jquery.booklet.latest.min.js"></script> <script src="../scripts/Announcementsbooklet.js"></script> <div ID="Accord"> </div> <script type="text/javascript"> //<![CDATA[ var _spFormDigestRefreshInterval = 1440000;//]]> </script> </form> </body> </html> Here is the source to make the booklet work: ExecuteOrDelayUntilScriptLoaded(retrieveListItems, "sp.js"); var context; var collListItem; var web; var listRootFolder; var oList; //retieve the list items from the host web function retrieveListItems() { context = SP.ClientContext.get_current(); web = context.get_web(); oList = context.get_web().get_lists().getByTitle('Announcements'); var camlQuery = new SP.CamlQuery(); camlQuery.set_viewXml('<View><RowLimit>10</RowLimit></View>'); collListItem = oList.getItems(camlQuery); listRootFolder = oList.get_rootFolder(); context.load(listRootFolder); context.load(web); context.load(collListItem); context.executeQueryAsync(onQuerySucceeded, onQueryFailed); } //Model object var Dev = function (id, title, body, expires, url) { var self 
= this; self.ID = id; self.Title = title; self.Body = body; self.Expires = expires; self.Url = url; } //View model var DevVM = new ListViewModel() function ListViewModel() { var self = this; self.items = new Array(); } function onQuerySucceeded(sender, args) { var listItemEnumerator = collListItem.getEnumerator(); while (listItemEnumerator.moveNext()) { var oListItem = listItemEnumerator.get_current(); var javaDate = oListItem.get_item('Expires'); var fmtExpires = javaDate.format('dd MMM yyyy'); var url = ""; var goodUrl = oListItem.get_item('Url'); if (goodUrl == null) { url = web.get_serverRelativeUrl() + "/Lists/Announcements/EditForm.aspx?ID=" + oListItem.get_item('ID'); } else { url = web.get_serverRelativeUrl() + oListItem.get_item('Url') } DevVM.items.push(new Dev(oListItem.get_item('ID'), oListItem.get_item('Title'), oListItem.get_item('Body'), fmtExpires, url)); } $.each(DevVM.items, function (index) { $("#Accord").append(createAccordNode(DevVM.items[index].Title, DevVM.items[index].Body, " Expires: " + DevVM.items[index].Expires, DevVM.items[index].Url)); }); $("#Accord").booklet(); } function createAccordNode(title, body, expires, url) { return ( $("<div><h3>" + title + "</h3><p><span class='titlespan'><a href='" + url + "'>" + title + "</a></span><span class='dicussionspan'>" + body + "</span><span class='expiresspan'>" + expires + "</span></p></div>") ); } function onQueryFailed(sender, args) { alert('Request failed. ' + args.get_message() + '\n' + args.get_stackTrace()); } The idea behind this post is that this could be used to: 1.   Create landing pages that are very un-SharePoint like! 2.   Make lightweight pages that could be used in page viewer web part or I Frame. 3.  Utilize Deep Zoom Composer and Sea-Dragon/or Silver light I will be using this for much of my development work!

    Read the article

  • What's a good Game development platform for a platformer game with these characteristics?

    - by Joe
    Yes, I know, the best way to make an indie game is to learn to code. I've got some scripting experience, but I want to do worldbuilding with already-existing tools (and communities surrounding those tools), and I've been really impressed with games like An Untitled Story that were made with pre-packaged toolsets at their core, like Game Maker. :) So I'm planning to make my game using either Game Maker or something like it. The basic parameters of my planned game: -2D platformer. -Physics/speed akin to Sonic the Hedgehog. -Large, non-linear world, flowing as seamlessly as possible -- think Super Metroid, but without the forced screen transitions. The first two points have me leaning toward Game Maker -- Plenty of 2D platformers have been made with it, and there are serviceable, openly available Sonic-the-Hedgehog-style physics engines for it that could be adapted to my needs with minimal muss and fuss. But the third makes me antsy -- from what limited information I hear, Game Maker has problems with large levels/boards/screens/whateveryoucallthem, thus necessitating transitions between screens. I want to avoid that if at all possible -- it would, I believe, fundamentally alter the flow of the game. I understand that generally speaking, the more you have loaded into memory the more things are going to chug (especially for a one-size-fits-all game development platform that isn't a model of efficient coding), but I'm hoping there are systems that can un-load objects that are sufficiently far offscreen and thus better produce seamlessness. Any thoughts, people? :) The sooner I can get a basic pre-fab physics engine and world-building program up and running, the sooner I can start prototyping areas and generally tooling around. Should I be looking at Game Maker, or elsewhere? (My current plan is to more-or-less build the game prototype-style, then worry about art and sound at the very end once the damn thing is playable.)

    Read the article

  • MaaS minimum requirements with juju-jitsu?

    - by Christopher Shen Mu Long
    I've browsed through so many different sites and found so much contradictory information. As I am getting tired of this and do belive this question affects many other users, so I would like to collect the "once and for all times" answer. Unfortunately, the documentation on MaaS and Juju is ... well, not the best, sorry to say that. What are the minimum system requirements for setting up a MaaS cluster, which is going to be orchestrated with juju-jitsu? Do they need to have the exact system specifications or can I just combine different hardware? What are the minimum requirements for the master machine? E.g. "You need at least 8GB of RAM, a dual core CPU with at least 3.0 GHz." How many machines to I need to deploy MaaS on? I've read six machines, nine machines, and so on. I clearly want to know: "You need one for the Master and e.g. five nodes." Do I need to attach as many NICs (network interface cards) to my master machine as there are nodes, or can I simply attach two NICs and a switch? One NIC for connecting to the internet, one for handling the MaaS tasks, connected to a switch, which connects my nodes to the master? Is Juju now ready for local deployment? The last time I experimented with juju and had to reboot my machine, the services orchestrated by juju were gone. This was an issue I also found on the official juju site. Unfortunately, as mentioned above, the documentation is not the best, so I could not find the necessary info on that again. So: Can I use juju on a local environment or will a reboot break my setup?

    Read the article

  • Scalable Architecture for modern Web Development [on hold]

    - by Jhilke Dai
    I am doing research about Scalable architecture for Web Development, the research is solely to support Modern Web Development with flexible architecture which can scale up/down according to the needs without losing any core functionality. By Modern Web I mean to support all the Devices used to access websites, but the loading mechanism for all devices would be different. My quest of architecture is: For PC: Accessing web in PC is faster but it also depends on the Geo-location, so, the application would check by default the capacity of Internet/Browser and load the page according to it. For Mobile: Most of the mobile design these days either hide information or use different version of same application. eg: facebook uses m.facebook.com which is completely different than PC version. Hiding the things from Mobile using JavaScript or CSS is not a solution as it'll consume the bandwidth and make the application slow. So, my architecture research is about Serving one Application, which has different stack. When the application receives the request it'd send the Packaged Stack to the received request. This way the load time for end users would be faster and maintenance of application for developers would be easier. I am researching about for 4-tier(layered) architecture like: Presentation Layer Application Logic Layer -- The main Logic layer which stores the Presentation Stack Business Logic Layer Data Layer Main Question: Have you come across of similar architecture? If so, then can you list the links here, I'm very much interested to learn about those implementations specially in real world scenario. Have you thought about similar architectures and tried your own ideas, or if you have any ideas regarding this, then I urge to share. I am open to any discussions regarding this, so, please feel free to comment/answer.

    Read the article

  • Newly installed Ubuntu 12.10 and weird graphics

    - by Benji Marshall
    My machine: 2 GB RAM, Intel Pentium Dual-Core E2180 @ 2 GHz, NVIDIA GeForce 6200 LE. A friend recommended Ubuntu to me, and I thought I might as well get used to Linux in anticipation of my Raspberry Pi. He said that Wubi was the easiest way to install, so I installed it using Wubi. On my first ever boot of Ubuntu from the Windows bootloader everything started normally: I logged in as usual and my desktop loaded normally. I then pressed the Windows key/Power key and everything went wrong. Random lines of yellow and blue appeared on my screen and changed location when I moved my mouse. The lines stayed for a few seconds and then partially went away, so I could sort of use my computer. When I moved the mouse again the entire desktop looked like it broke apart, with fragments of it scattered across the screen at random angles. I could move the pointer, but clicking did nothing, and I had to turn the machine off by pulling the plug. I would love to get off Windows, but at least there the graphics are not completely messed up and the system is relatively usable. Please help me solve this.
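
    In cases like this it usually helps to first find out which driver is actually driving the card. A rough diagnostic sketch, assuming the stock 12.10 packages; the proprietary package there was nvidia-current, and an old GeForce 6200 may instead need one of the legacy driver packages:

      lspci -nnk | grep -iA3 vga           # shows the card and the kernel driver currently in use
      sudo apt-get update
      sudo apt-get install nvidia-current  # proprietary driver; older cards may need a legacy package instead
      sudo reboot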

    Read the article

  • What is the proper way to install the 3.4 kernel?

    - by Marcelo Ruiz
    Kernel 3.2 has an annoying bug for my wireless card (rtl8192se-b) that makes the connection drop and/or prevents the card from connecting to the wireless router. Dealing with it was very frustrating until I found out the bug was fixed in 3.4. I downloaded: linux-headers-3.4.0-030400_3.4.0-030400.201205210521_all.deb, linux-headers-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb and linux-image-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb, and installed them with: sudo dpkg -i * Now the wireless works fine, but I have two problems I cannot solve. The first one is minor: Plymouth will not start at all, although it works fine if I boot the 3.2 kernel. The second one is serious: sometimes the computer will not shut down or reboot. The X server terminates, but the computer shows part of my GRUB background and stays there forever using 100% of the CPU. I have a Toshiba Qosmio with a Core i7 and an NVIDIA graphics card (using nvidia-current). During one shutdown I briefly saw a message saying that the virtualbox module couldn't be unloaded from the kernel. I tried to solve this by removing and purging VirtualBox and installing it back. I don't see the message anymore, but sometimes the computer still won't shut down or reboot. Am I missing something to properly configure the new kernel? Thanks!
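
    The shutdown hang is consistent with out-of-tree modules (VirtualBox, NVIDIA) that were never rebuilt for the 3.4 kernel. A hedged sketch of checking and rebuilding them with DKMS; this assumes the DKMS-based packages are installed (e.g. virtualbox-dkms; the NVIDIA module may need the same treatment) and uses the version string of the mainline build above:

      sudo apt-get install dkms
      dkms status                              # each module should list a build for the running kernel
      sudo dpkg-reconfigure virtualbox-dkms    # package name assumed; re-runs the DKMS build for installed kernels
      sudo update-initramfs -u -k 3.4.0-030400-generic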

    Read the article

  • At which point is a continuous integration server interesting?

    - by Cedric Martin
    I've been reading a bit about CI servers like Jenkins and I'm wondering: at which point do they become useful? Surely for a tiny project with only 5 classes and 10 unit tests there's no real need. Here we have about 1500 unit tests and they pass (on old Core 2 Duo workstations) in about 90 seconds (because they really test "units" and hence are very fast). The rule we have is that we cannot commit code when a test fails, so each developer runs all the tests to prevent regressions. Obviously, because all the developers always run all the tests, we catch errors due to conflicting changes as soon as one developer pulls the changes of another (if any). It's still not very clear to me: should I set up a CI server like Jenkins? What would it bring? Is it just useful for the speed gain? (Not an issue in our case.) Is it useful because old builds can be recreated? (But we can do this too with Mercurial, by checking out old revisions.) Basically I understand it can be useful, but I fail to see exactly why. Any explanation taking into account the points raised above would be most welcome.
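
    One way to see the value: a CI server does, on every push and on a neutral machine, what each developer currently does by hand in a working copy that may contain uncommitted changes. A rough sketch of the job it would schedule; the repository URL, build command and notification address are placeholders for whatever the project actually uses:

      #!/bin/sh
      set -e
      hg clone http://hg.example.com/project build    # always a clean checkout, no stale local state
      cd build
      make                                            # placeholder for the real build step
      make test || { echo "tests broken at $(hg id -i)" \
                       | mail -s "CI failure" dev@example.com; exit 1; }   # assumes local mail is set up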

    Read the article

  • Visual Studio 2010 editor painfully slow

    - by Daniel Gehriger
    I'm running out of patience with MS Visual Studio 2010: I'm working on a solution containing ~50 C++ projects. When using the editor, I experience a lag of 1-2 seconds whenever I move the cursor to a different line, move to a different window, or generally whenever the editor loses and gains focus. I went through a whole series of optimizations, to no avail: installed all hotfixes for VS2010, disabled all add-ins and extensions, disabled IntelliSense, deleted all temporary files created by VS2010, disabled hardware acceleration, unloaded all but 15 projects, disabled tracking changes, closed all but one window, and so on. This is on a dual-core machine with an SSD (verified throughput 100 MB/s), enough free disk space, and Windows 7 Pro 32-bit with 3 GB of RAM, most of it still free. Whenever I type a letter, CPU usage of devenv.exe (as seen in the process monitor) goes to 50-90% for 1-2 seconds before returning to 5%. I used Process Explorer to analyze registry and file system access, and I only noticed frequent accesses to the .sln file (which is quite small) and a few registry reads, but nothing that would raise a red flag. I don't have this problem with solutions containing fewer projects, so I'm inclined to think it's related to the number of projects. For your information, the entire solution has been migrated over the years from VS2005 to VS2008 and now to VS2010. Does anyone have any ideas what else I could do to resume work on this project, other than returning to VS2008?

    Read the article

  • Why kernel source is not installed

    - by Subhajit
    I want to install the kernel source on Ubuntu 12.04, where it is not installed. I checked using the following command: dpkg -s kernel. The output is: "Package 'kernel' is not installed and no information is available." Hence I followed these steps to install it: 1. Installed the dependencies with: sudo apt-get install gcc libncurses5-dev git-core kernel-package fakeroot build-essential and sudo apt-get update && sudo apt-get upgrade. 2. Downloaded the kernel source with: wget http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.5.tar.bz2, then tar -xvf linux-3.5.tar.bz2 and cd linux-3.5/. 3. Compiled the source to generate the .deb packages with: make-kpkg clean and fakeroot make-kpkg --initrd --append-to-version=-spica kernel_image kernel_headers. 4. Installed the .deb packages (two .deb packages are generated: one installs the kernel headers, the other installs the kernel image) with: sudo dpkg -i linux-*.deb. But after a reboot it seems the kernel is not installed (checked with dpkg -s kernel). Please tell me where I am going wrong. Also, in step 3 I guess I am building a new kernel (with -spica appended to the version), but during boot this new kernel does not show up as an option. Please help me.
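
    Note that dpkg has no package literally called "kernel", so that check will always report "not installed". A quick sketch of how one might verify what the make-kpkg build actually installed; the -spica suffix is the one chosen with --append-to-version above:

      dpkg -l 'linux-image*' 'linux-headers*'   # the -spica image and headers should show status "ii"
      ls /boot/vmlinuz-*                        # the new kernel image should be present here
      sudo update-grub                          # regenerate the boot menu so the new entry is offered
      uname -r                                  # after rebooting into it, this shows the running kernel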

    Read the article
