Search Results

Search found 7238 results on 290 pages for 'step into'.


  • A Look Inside JSR 360 - CLDC 8

    - by Roger Brinkley
    If you didn't notice during JavaOne, Java Micro Edition took a major step forward in its consolidation with Java Standard Edition when JSR 360 was proposed to the JCP community. Over the last couple of years there has been a focus on moving Java ME back in line with its big brother, Java SE. We see evidence of this in the JCP itself, which just recently merged the ME and SE/EE Executive Committees into a single Java Executive Committee. But just before that occurred, JSR 360 was proposed and approved for development on October 29. So let's take a look at what changes are now being proposed.

    In a way, JSR 360 is returning to the original roots of Java ME when it was first introduced. It was indeed a subset of the JDK 4 language, but as Java progressed many of the language changes were not implemented in Java ME. Back then the tradeoff was still functionality versus footprint, and the major market was feature phones. Today the market has changed, and CLDC, while it will still target feature phones, will place its primary emphasis on embedded devices like wireless modules, smart meters, health care monitoring, and other M2M devices.

    The major changes will come in three areas: language feature changes, library changes, and consolidation of the Generic Connection Framework. Three Java SE versions have shipped since Java ME was first developed, so the language feature changes can be divided into those that came in JDK 5 and those that came in JDK 7, which mostly consist of the Project Coin changes. There were no language changes in JDK 6. The changes from JDK 5 are:

    Assertions - Assertions enable you to test your assumptions about your program. For example, if you write a method that calculates the speed of a particle, you might assert that the calculated speed is less than the speed of light. In the example below, if the interval isn't between 0 and 1,000, an AssertionError with the message "Invalid value?" is thrown. private void setInterval(int interval) { assert interval > 0 && interval <= 1000 : "Invalid value?"; }

    Generics - Generics add stability to your code by making more of your bugs detectable at compile time. Code that uses generics has many benefits over non-generic code: stronger type checks at compile time, elimination of casts, and the ability to implement generic algorithms.

    Enhanced for loop - The enhanced for loop allows you to iterate through a collection without having to create an Iterator or calculate beginning and end conditions for a counter variable. It is the easiest of the new features to incorporate in your code immediately, and it replaces more traditional ways of sequentially accessing elements in a collection. void processList(Vector<String> list) { for (String item : list) { ... } }

    Autoboxing/Unboxing - This facility eliminates the drudgery of manual conversion between primitive types, such as int, and wrapper types, such as Integer. Hashtable<Integer, String> data = new Hashtable<>(); void add(int id, String value) { data.put(id, value); }

    Enumerations - Prior to JDK 5, enumerations were not typesafe, had no namespace, were brittle because they were compile-time constants, and provided no informative print values. JDK 5 added support for enumerated types as a full-fledged class (dubbed an enum type).
    In addition to solving all the problems mentioned above, an enum type allows you to add arbitrary methods and fields, to implement arbitrary interfaces, and more. Enum types provide high-quality implementations of all the Object methods. They are Comparable and Serializable, and the serial form is designed to withstand arbitrary changes in the enum type. enum Season { WINTER, SPRING, SUMMER, FALL } private Season season; void setSeason(Season newSeason) { season = newSeason; }

    Varargs - Varargs eliminates the need for manually boxing up argument lists into an array when invoking methods that accept variable-length argument lists. The three periods after the final parameter's type indicate that the final argument may be passed as an array or as a sequence of arguments. Varargs can be used only in the final argument position. void warning(String format, String... parameters) { for (String p : parameters) { ...process(p);... } }

    Static imports - The static import construct allows unqualified access to static members without inheriting from the type containing the static members. Instead, the program imports the members either individually or en masse. Once the static members have been imported, they may be used without qualification. The static import declaration is analogous to the normal import declaration: where the normal import declaration imports classes from packages, allowing them to be used without package qualification, the static import declaration imports static members from classes, allowing them to be used without class qualification. import static data.Constants.RATIO; ... double r = Math.cos(RATIO * theta);

    Annotations - Annotations provide data about a program that is not part of the program itself. They have no direct effect on the operation of the code they annotate. Uses for annotations include information for the compiler, compile-time and deployment-time processing, and run-time processing. They can be applied to a program's declarations of classes, fields, methods, and other program elements. @Deprecated public void clear();

    The language changes from JDK 7 are a little more familiar, as they are mostly the changes from Project Coin:

    String in switch - It only took us 18 years, but the String class can now be used in the expression of a switch statement. Fortunately for us, it won't take that long for Java ME to adopt it. switch (arg) { case "-data": ... case "-out": ... }

    Binary integral literals and underscores in numeric literals - Largely for readability, the integral types (byte, short, int, and long) can also be expressed using the binary number system, and any number of underscore characters (_) can appear between digits in a numeric literal. byte flags = 0b01001111; long mask = 0xfff0_ff08_4fff_0fffL;

    Multi-catch and more precise rethrow - A single catch block can handle more than one type of exception. In addition, the compiler performs more precise analysis of rethrown exceptions than earlier releases of Java SE, which enables you to specify more specific exception types in the throws clause of a method declaration. catch (IOException | InterruptedException ex) { logger.log(ex); throw ex; }

    Type inference for generic instance creation - Otherwise known as the diamond operator: the type arguments required to invoke the constructor of a generic class can be replaced with an empty set of type parameters (<>) as long as the compiler can infer the type arguments from the context. Hashtable<Integer, String> map = new Hashtable<>();
    Try-with-resources statement - The try-with-resources statement is a try statement that declares one or more resources. A resource is an object that must be closed after the program is finished with it. The try-with-resources statement ensures that each resource is closed at the end of the statement. try (DataInputStream is = new DataInputStream(...)) { return is.readDouble(); }

    Simplified varargs method invocation - The Java compiler generates a warning at the declaration site of a varargs method or constructor with a non-reifiable varargs formal parameter. Java SE 7 introduced the compiler option -Xlint:varargs and the annotations @SafeVarargs and @SuppressWarnings({"unchecked", "varargs"}) to suppress these warnings.

    On the library side there are new features that will be added to satisfy the language requirements above and some to improve the currently available set of APIs. The library changes include:

    Collections update - New Collection, List, Set, Map, Iterable and Iterator interfaces, as well as implementations including Hashtable and Vector. Most of the work is to support generics.

    String - New StringBuilder and CharSequence as well as a String formatter. The javac compiler now uses StringBuilder instead of StringBuffer. Since StringBuilder is not synchronized there is a performance increase, which has necessitated a change in the way the String constructor works.

    Comparable interface - The Comparable interface works with Collections, making it easier to reuse.

    Try with resources - Closeable and AutoCloseable.

    Annotations - While support for annotations is provided, it will be compile-time support only: SuppressWarnings, Deprecated, Override.

    NIO - There is a subset of NIO Buffer that has been in use in some of the graphics packages and needs to be pulled in, along with support for an NIO File I/O subset.

    Platform extensibility via service providers (ServiceLoader) - The ServiceLoader interface does late binding of an interface to existing implementations. It helps to package an interface and bind in the behavior of the implementation at a later point in time. Provider classes must have a zero-argument constructor so that they can be instantiated during loading. They are located and instantiated on demand and are identified via a provider-configuration file in the META-INF/services resource directory. This is a mechanism from Java SE. import com.XYZ.ServiceA; ServiceLoader<ServiceA> sl1 = ServiceLoader.load(ServiceA.class); Resources: META-INF/services/com.XYZ.ServiceA: ServiceAProvider1 ServiceAProvider2 ServiceAProvider3 META-INF/services/ServiceB: ServiceBProvider1 ServiceBProvider2
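    To make the service-provider mechanics concrete, here is a minimal desktop-Java sketch; the ServiceA interface and its provider wiring are hypothetical stand-ins for the com.XYZ names above:

        import java.util.ServiceLoader;

        // Hypothetical service interface; a provider such as ServiceAProvider1
        // would implement it and be listed, one class name per line, in a
        // META-INF/services file named after the interface's fully qualified name.
        interface ServiceA {
            void run();
        }

        public class ServiceLoaderDemo {
            public static void main(String[] args) {
                // Locates every provider named in the provider-configuration file
                // and instantiates each one via its zero-argument constructor.
                ServiceLoader<ServiceA> loader = ServiceLoader.load(ServiceA.class);
                for (ServiceA service : loader) {
                    service.run();
                }
            }
        }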
    The Generic Connection Framework (GCF) was previously specified in a number of different JSRs, including CLDC, MIDP, CDC 1.2, and JSR 197. JSR 360 represents a rare opportunity to consolidate and reintegrate parts that were duplicated in other specifications into a single specification, upgrade the APIs, and provide new functionality. The proposal is to specify a combined GCF specification that can be used with Java ME or Java SE and be backwards compatible with previous implementations.

    Because of size limitations as well as complexity, some features like InvokeDynamic and Unicode 6 will not be included. Additionally, any language or library changes in JDK 8 will not be included. On the upside, with all the changes being made, backwards compatibility will still be maintained.

    JSR 360 is a major step forward for Java ME in terms of platform modernization, language alignment, and embedded support. If you're interested in following the progress of this JSR, see the JSR's java.net project for details of the email lists and discussion groups.
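    As a recap of the language changes above, here is a small, self-contained desktop-Java sketch (written for this summary, not taken from the JSR) that exercises several of them together:

        import java.io.DataInputStream;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.util.Hashtable;
        import java.util.Vector;

        public class Jsr360Features {
            static void warning(String format, String... parameters) { // varargs
                for (String p : parameters) System.out.printf(format, p);
            }

            public static void main(String[] args) {
                Hashtable<Integer, String> data = new Hashtable<>(); // generics + diamond
                data.put(1, "one");                                  // autoboxing of the key

                Vector<String> list = new Vector<>(data.values());
                for (String item : list) {                           // enhanced for loop
                    switch (item) {                                  // String in switch (JDK 7)
                        case "one": System.out.println("first"); break;
                        default:    System.out.println(item);
                    }
                }

                byte flags = 0b0100_1111;                            // binary literal + underscore
                warning("flag byte: %s%n", Integer.toBinaryString(flags));

                try (DataInputStream in = new DataInputStream(       // try-with-resources
                        new FileInputStream("data.bin"))) {
                    System.out.println(in.readDouble());
                } catch (IOException | RuntimeException ex) {        // multi-catch
                    System.out.println("no data.bin: " + ex.getMessage());
                }
            }
        }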

    Read the article

  • RPi and Java Embedded GPIO: Using Java to read input

    - by hinkmond
    Now that we've learned about using Java code to control the output of the Raspberry Pi GPIO ports (by lighting up LEDs from a Java app on the RPi for now, noting that in the future the same Java code can be used to drive industrial automation or medical equipment, etc.), let's move on to reading input from the RPi GPIO using Java code. As before, we need to start out with the necessary hardware. For this exercise we will connect a static electricity detector to the RPi GPIO port and read the value of that sensor using Java code. The circuit we'll use is from William J. Beaty and is described at this Web link. See: Static Electricity Detector He calls it an "Electric Charge" detector, which is a bit misleading: a Field Effect Transistor responds to nearby electromagnetic fields, such as the field of a static charge on a nearby object, not really to an electric charge itself. So, this sensor will detect static electricity (or ghosts, if you are into paranormal activity). Take a look at the circuit, and in the next blog posts we'll step through how to connect it to the GPIO port of your RPi and then how to write Java code to access this fun sensor. Hinkmond
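    To give a feel for what the follow-up posts build toward, here is a minimal sketch of reading a GPIO pin from Java through the Linux sysfs interface; the pin number and the assumption that the pin has already been exported and configured as an input are illustrative, not from the article:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class GpioInput {
            public static void main(String[] args) throws IOException, InterruptedException {
                // Assumes the pin was exported (echo 7 > /sys/class/gpio/export)
                // and its direction set to "in" before this program runs.
                Path value = Paths.get("/sys/class/gpio/gpio7/value");
                for (int i = 0; i < 10; i++) {
                    String level = new String(Files.readAllBytes(value)).trim();
                    System.out.println("GPIO7 = " + level); // "1" when the sensor pulls the pin high
                    Thread.sleep(500);
                }
            }
        }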

    Read the article

  • How to solve SocketException: Permission denied: connect

    - by luxinxian
    I recently encountered a problem that is giving me a headache, and I need help... The system consists of two subsystems, called A and B (each running on a standalone Tomcat instance), currently running on the same machine. A invokes B's services via Spring HttpInvoker (HTTP). B also invokes other systems' services via HTTP. Symptoms:
    1. After a restart, the system runs normally for 10-15 days;
    2. after running for that period of time, the system throws an exception: org.springframework.remoting.RemoteAccessException: Could not access HTTP invoker remote service at [http://xxx.xxx.xxx.xxx/remoting/call]; nested exception is java.net.SocketException: Permission denied: connect
    3. once the exception occurs it is continuous, not just occasional (it looks like some resource is exhausted, but CPU rate < 5%, memory < 15%, network < 5%);
    4. when the call from A to B fails, B's HTTP calls to external services also fail with the same exception;
    5. after closing both Tomcat services and restarting, everything works properly again.
    This repeats (steps 1-5), and I have not found the root cause. Environment: Windows 2008 R2, Tomcat 7.0.42 x86_64, Oracle JDK 1.7.0_40. Any ideas?

    Read the article

  • Debugging Node.js applications for Windows Azure

    - by cibrax
    In case you are developing a new web application with Node.js for Windows Azure, you might notice there is no easy way to debug the application unless you are developing in an integrated IDE like Cloud9. For those who develop applications locally using a text editor (or WebMatrix) and Windows Azure PowerShell for Node.js, it requires some steps not documented anywhere at the moment. I spent a few hours on this the other day and practically got nowhere until I received some help from Tomek and the rest of the team. The IISNode version that currently ships with the Windows Azure SDK for Node.js does not support debugging by default, so you need to install the full IISNode version available in the github repository. Once you have installed the full version, you need to enable debugging for the web application by modifying the web.config file: <iisnode debuggingEnabled="true" loggingEnabled="true" devErrorsEnabled="true" /> The xml above needs to be inserted within the existing "<system.webServer/>" section. The last step is to open a WebKit browser (e.g. Chrome) and navigate to the URL where your application is hosted, adding the segment "/debug" to the end. The full URL to the node.js application must be used, for example, http://localhost:81/myserver.js/debug That should open a new instance of Node Inspector in the browser, so you can debug the application from there. Enjoy!!

    Read the article

  • How to popularize Nemerle (or another programming language)?

    - by keykeeper
    Any .NET developer who is interested in different programming languages knows that F# is the most popular functional language for the .NET platform nowadays. The single biggest factor in F#'s popularity is the great support from Microsoft. But we are not limited to F# at all; there are some other functional languages on the .NET platform. I'm very disappointed by the fact that Nemerle isn't well known. It's an awesome language which supports three paradigms: object-oriented, functional, and meta-programming. I won't try to explain why I like it so much. The problem is that I can't use it at work. I think that only really brave companies can rely on Nemerle: it's almost unknown, which makes it hard to find new developers for a project, and no one wants to take the first step with Nemerle if it can affect the budget, which is reasonable. So, here is the question: what can I do to make Nemerle more popular? Here are my first ideas: implement open-source projects using Nemerle; give presentations at different conferences; write articles.

    Read the article

  • What is a good support knowledge base tool?

    - by Guillaume
    I have been searching for a tool to help my team organize its knowledge for resolving recurring support cases. I know this question will probably be closed, but I'll try my luck anyway because I know that I can get some good answers about this. Context: our team is developing and supporting a huge application (lots of different screens and workflow processes). We already have a good tool for managing our documentation, but we are struggling with support cases. Support actions often involve quite a lot of manual steps to fix things, and the knowledge for these actions is passed on more by 'oral transmission' than through modern tools. We need an efficient way to store it in a knowledge base so we can retrieve similar cases based on patterns (a stack trace, an error message, a component name, a workflow step, ...), ranked by similarity. Our wiki search is not very powerful when it comes to this kind of search, and the team members don't want to 'waste' time writing a report that will never be found... Do you know an efficient knowledge base tool for this kind of use case?

    Read the article

  • CodePlex Daily Summary for Thursday, August 30, 2012

    CodePlex Daily Summary for Thursday, August 30, 2012

    Popular Releases

    Coevery - Free CRM: Coevery 1.0: This is the first alpha release of Coevery.

    ExpressProfiler: Initial release of ExpressProfiler v1.2: This is the initial release of ExpressProfiler.

    Nabu Library: 2012-08-29, 14: .Net Framework 4.0, .Net Framework 4.5 Debug and Release builds.

    Math.NET Numerics: Math.NET Numerics v2.2.1: Major linear algebra rework since v2.1, now available on Codeplex as well (previous versions were only available via NuGet). Since v2.2.0: Student-T density more robust for very large degrees of freedom; sparse Kronecker product much more efficient (now leverages sparsity); direct access to raw matrix storage implementations for advanced extensibility; now also a separate package for a signed core library with a strong name (we dropped strong names in v2.2.0). Also available as NuGet packages...

    Microsoft SQL Server Product Samples: Database: AdventureWorks Databases – 2012, 2008R2 and 2008: About this release: this release consolidates AdventureWorks databases for the SQL Server 2012, 2008R2 and 2008 versions on one page. Each zip file contains an mdf database file and an ldf log file. This should make it easier to find and download AdventureWorks databases since all OLTP versions are on one page. There are no database schema changes. For each release of the product, there is a light-weight and a full version of the AdventureWorks sample database. The light-weight version is denoted by ...

    ImageServer: v1.1: This is the first version released.

    Christoc's DotNetNuke Module Development Template: DotNetNuke Project Templates V1.1 for VS2012: This release is specifically for Visual Studio 2012 support, distributed through the Visual Studio Extensions gallery at http://visualstudiogallery.msdn.microsoft.com/ After you build in Release mode, the installable packages (source/install) can now be found in the INSTALL folder within your module's folder, not the packages folder anymore. Check out the blog post for all of the details about this release: http://www.dotnetnuke.com/Resources/Blogs/EntryId/3471/New-Visual-Studio-2012-Projec...

    Home Access Plus+: v8.0: v8.0828.1800 RELEASE CHANGED TO BETA. Any issues, please log them on http://www.edugeek.net/forums/home-access-plus/ This is a full release; NO upgrade ZIP will be provided, as most files require replacing. To upgrade from a previous version, delete everything but your AppData folder, extract all but the AppData folder, and run your HAP+ install. Documentation is supplied in the Web Zip. The Quota Services require executing a script to register the service, which can be found in their install di...

    Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3406 (September 2012): New features: Extended ReflectionClass; libxml error handling, constants; DateTime::modify(), DateTime::getOffset(); TreatWarningsAsErrors MSBuild option; OnlyPrecompiledCode configuration option (allows using only compiled code). Fixes: ArgsAware exception fix; accessing .NET properties bug fix; ASP.NET session handler fix for OutOfProc mode; DateTime methods (WordPress posting fix). Phalanger Tools for Visual Studio: Visual Studio 2010 & 2012; new debugger engine, PHP-like debugging ...

    NougakuDoCompanion: v1.1.0: Adds a temp folder for local resources, resizes local resources, and changes the launch ruby command-line args from rack to bundle. 1. NougakuDoCompanion v1.1.0 cspkg.zip - cspkg and ServiceConfiguration.xml (small, medium, large, extra large VM); includes NougakudoSetupTool.exe and readme.txt. 2. NougakuDoCompanion v1.1.0.zip - source code; includes NougakudoSetupTool.exe and the activerecord-sqlserver-adapter patch in the patches folder. 3. Dependent tools: Windows Azure SDK for .NET June 2012 (1.7 SP1), Windows ...

    WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.14.06: What's new: added CKEditor 3.6.4; the oEmbed plugin can now handle short urls. Changes: the template file can now be parsed from an xml file instead of js (more info...); style sets can now be parsed from an xml file instead of js (more info...). Fixed: showing wrong pages in child portals in the link dialog; urls in the dnnpages plugin; issue #6969 (WordCount plugin); issue #6973; file browser: fixed deleting of files; file browser: improved loading time...

    MabiCommerce: MabiCommerce 1.0.1: What's new: setup now creates shortcuts; fixed spelling errors; minor enhancement to the Map window.

    VFPX: Data Explorer 3: This release is the first alpha release for DX3. Even though great care has been taken, the project manager highly recommends you work with test data and regularly back up the DataExplorer.DBF found in your HOME(7) folder. New features are documented on the project home page. IMPORTANT: once installed, make sure to go to the Data Explorer Options page and click the Restore to Default button. This brings up a dialog asking if you want to maintain connections and customizations that were done...

    ScintillaNET: ScintillaNET 2.5.2: This release has been built from the 2.5 branch. Version 2.5.2 is functionally identical to the 2.5.1 release but also includes the XML documentation comments file generated by Visual Studio. It is not 100% comprehensive but it will give you Visual Studio IntelliSense for a large part of the API. Just make sure the ScintillaNET.xml file is in the same folder as the ScintillaNET.dll reference you're using in your projects. (The XML file does not need to be distributed with your application)....

    Facebook Web Parts for SharePoint 2010: Version 1.0.1 - WSP: SharePoint 2010 solution (WSP). Resolved a bug from Version 1.0 - WSP where user profile names would not properly update.

    Contactor: GSMContactorProgram V1.0 - Source Code: This is the source code for the program, for Visual Studio 2012 RC.

    TouchInjector: TouchInjector 1.1: Version 1.1: fixed a bug with the autorun option.

    WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.0: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version use NuGet: you can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features: AsyncUI extensions; controls and control extensions; converters; debugging helpers; imaging; IO helpers; VisualTree helpers; samples. Recent changes: NOTE: namespace changes; DebugConsol...

    BlackJumboDog: Ver5.7.1: 2012.08.25 Ver5.7.1 (Japanese release notes, garbled in extraction)

    Visual Studio Team Foundation Server Branching and Merging Guide: v2 - Visual Studio 2012: Welcome to the Branching and Merging Guide. Quality-bar details: documentation has been reviewed by Visual Studio ALM Rangers; documentation has been through an independent technical review; documentation has been reviewed by the quality and recording team; all critical bugs have been resolved. Known issues/bugs: spelling, grammar and content revisions are in progress; a hotfix will be published.

    New Projects

    Ad Specific Redirect Generator: Ad Specific Redirect Generator

    argsv, command line argument processors: Two libraries to process command line options; one (argsvCPython) is written for Python applications and the other (argsvCPP) is for C++ applications.

    AutomationML Export Import Mapper: This tool supports the mapping between SystemUnitClass libraries used to exchange data in the AutomationML format between exporters and importers.

    BioPathSearch: Tool for probabilistic biochemical network construction, from input substrate to required product, based on information from the KEGG database.

    buidingapp: I am building an app but am not sure about publishing it because my code isn't finished. Then I will upload the code and publish my application. Thank you for reading my note. See you next time.

    Catalogo de Empresas y Productos: This is the catalog of companies and products associated with agricultural production in CR.

    clarktestcodeplex: test codeplex

    Database Clean-up Engine - DataWashroom: Database clean-up rules engine.

    EdEx: edex

    Expense Management System: Expense Management System

    Finger Tracking with Kinect SDK for XBOX: This project explains step by step how to perform finger and hand tracking with the Kinect for XBOX with the official Kinect SDK.

    Flower - Workflow Engine: Flower is a simple yet powerful workflow engine allowing workflows to be developed in C#.

    Grid: Copyright 2011 Badkid development. All rights reserved. Play Grid, the retro-style arcade puzzle game. Join lines between the grids to create boxes to get p...

    HeadSprings Assignment: A program to create custom tokens using FizzBuzz class programming.

    Ice Scream: Icescream

    iD Commerce + Logistics: iD Commerce + Logistics is a company based outside of Chicago, Illinois specializing in fulfillment solutions and custom development.

    Mido: Mido is a simple utility that adds text or image watermarks to your photos, images and pictures.

    MLSTest: MLSTest is a Windows-based software for the analysis of multilocus sequence data for eukaryotic organisms.

    My Project Foundation Library: A foundation library for asp.net web projects, including an easy-to-use data access layer and other utility code.

    MyTestProject001: (Chinese description, garbled in extraction)

    MyTestProject2: My test project

    NEF - Native Extensibility Framework: NEF is an open source IoC extensibility framework targeting C++. It is modeled after the more useful features of MEF in C#.

    RazorSpy: A simple toy/tool for exploring the output of the Razor Parser for a particular document.

    Relay Command for WinRT: A simple RelayCommand for WinRT (sans an entire MVVM framework).

    Rocca.Store: Document storage with email capabilities.

    Scrabble Nights: Scrabble is a word game in which two to four players score points by forming words from individual lettered tiles on a game board marked with a 15-by-15 grid.

    SlFrame: SlFrame

    Smart Data Access layer: We are using a smart data access layer; it's time to forget about the System.Data.SqlClient namespace.

    SportSlot: This project allows you to search for available sport venues.

    Switcher 2012: Allows for fast switching between source and header files in Visual Studio 2012.

    testddtfs2908201201: sa

    time tracking: coming soon

    TradePlatform.NET: TradePlatform.NET is an addition to the MetaTrader 4 client terminal which extends the trading experience and the MQL language, and provides a .NET communication bridge.

    WaMa-SkyDriveClient: WaMa-SkyDriveClient is a Windows SkyDrive client with Live authentication.

    Web Optimization: The ASP.NET Web optimization framework provides services to improve the performance of your ASP.NET Web applications.

    Web Pocket: Web tool for rapid application development of mobile software.

    WinRT Toolkit: The purpose of WinRT Toolkit is to provide efficient and developer-friendly instruments (API/Components...).

    Xcel Directory Service: Xcel Directory Service is an Excel 2010 add-in used to retrieve data from Active Directory using an import utility and user-defined functions.

    zhangfacai: This project is based on ASP.NET MVC 4 and HTML5. It's a website to integrate public web services like amazon, live, facebook, etc. By using these services, mak...

    Read the article

  • Interpreting Others' Source Code

    - by Maxpm
    Note: I am aware of this question. This question is a bit more specific and in-depth, however, focusing on reading the actual code rather than debugging it or asking the author. As a student in an introductory-level computer science class, my friends occasionally ask me to help them with their assignments. Programming is something I'm very proud of, so I'm always happy to oblige. However, I usually have difficulty interpreting their source code. Sometimes this is due to a strange or inconsistent style, sometimes it's due to strange design requirements specified in the assignment, and sometimes it's just due to my stupidity. In any case, I end up looking like an idiot staring at the screen for several minutes saying "Uh..." I usually check for the common errors first - missing semicolons or parentheses, using commas instead of extraction operators, etc. The trouble comes when that fails. I often can't step through with a debugger because it's a syntax error, and I often can't ask the author because he/she doesn't understand the design decisions him/herself. How do you typically read the source code of others? Do you read through the code from the top down, or do you follow each function as it's called? How do you know when to say "It's time to refactor"?

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-21

    - by Bob Rhubart
    Software Architects Need Not Apply | Dustin Marx "I think there is a place for software architecture," says Dustin Marx, "but a portion of our fellow software architects have harmed the reputation of the discipline." For another angle on this subject, check out Out of the Tower, Into the Trenches from the Nov/Dec edition of Oracle Magazine.
    Oracle Data Integrator 11g - Faster Files | David Allan David Allan illustrates "a big step for regular file processing on the way to super-charging big data files using Hadoop."
    2012 Oracle Fusion Middleware Innovation Awards - Win a FREE Pass to Oracle OpenWorld 2012 in SF Share your use of Oracle Fusion Middleware solutions and how they help your organization drive business innovation. You just might win a free pass to Oracle OpenWorld 2012 in San Francisco. The deadline for submissions is July 17, 2012.
    WLST Domain creation using dry-run | Michel Schildmeijer What to do "if you want to browse through your domain to check if settings you want to apply satisfy your requirements."
    Cloud opens up new vistas for service orientation at Netflix | Joe McKendrick "Many see service oriented architecture as laying the groundwork for cloud. But at one well-known company, cloud has instigated the move to SOA."
    How to avoid the Portlet Skin mismatch | Martin Deh Detailed how-to from WebCenter A-Team blogger Martin Deh.
    Internationalize WebCenter Portal - Content Presenter | Stefan Krantz Stefan Krantz explains "how to get Content Presenter and its editorials to comply with the current selected locale for the WebCenter Portal session."
    Oracle Public Cloud Architecture | Tyler Jewell Tyler Jewell discusses the multi-tenancy model and elasticity solution implemented by Oracle Cloud in this QCon presentation.
    A Distributed Access Control Architecture for Cloud Computing The authors of this InfoQ article discuss a distributed architecture based on principles from security management and software engineering.
    Thought for the Day "Let us change our traditional attitude to the construction of programs. Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do." — Donald Knuth Source: Quotes for Software Engineers

    Read the article

  • Splitting big request in multiple small ajax requests

    - by Ionut
    I am unsure about the scalability of the following model. I have no experience at all with large systems and big numbers of requests, but I'm trying to build features with scalability in mind first. In my scenario there is a user page which contains data for: the user's details (name, location, workplace ...), the user's activity (blog posts, comments...), and user statistics (rating, number of friends...). In order to show all this on the same page, a single request requires at least 3 different database queries on the back-end. In some cases, I imagine those queries will run for quite a while, so the user experience may suffer while waiting between requests. This is why I decided to run only step 1 (the user's details) as a normal request. After the response is received, two ajax requests are sent for steps 2 and 3. When those responses are received, I simply place the data in its designated wrappers. For me at least this makes more sense. However, there are 3 requests instead of one for every user page view. Will this affect the system in the long term? I'm assuming that this kind of approach requires more resources, but is this trade of resources for UX a good deal, or should I stick to one plain big request?

    Read the article

  • The internal storage of a DATETIMEOFFSET value

    - by Peter Larsson
    Today I went investigating the internal storage of the DATETIMEOFFSET datatype. What I had found before is that for a datetime2 value with precision 0 (seconds only), SQL Server needs 6 bytes to represent the value, but stores 7 bytes. This is because SQL Server adds one byte that holds the precision for the datetime2 value. Start with this very simple repro:

        declare @now datetimeoffset(7) = '2010-12-15 21:04:03.6934231 +03:30'

        select cast(cast(@now as datetimeoffset(0)) as binary(9)),
               cast(cast(@now as datetimeoffset(1)) as binary(9)),
               cast(cast(@now as datetimeoffset(2)) as binary(9)),
               cast(cast(@now as datetimeoffset(3)) as binary(10)),
               cast(cast(@now as datetimeoffset(4)) as binary(10)),
               cast(cast(@now as datetimeoffset(5)) as binary(11)),
               cast(cast(@now as datetimeoffset(6)) as binary(11)),
               cast(cast(@now as datetimeoffset(7)) as binary(11))

    Now we are going to copy and paste these binary values and investigate which value represents which time part. The columns below are the precision prefix, the time part (ticks, hex and decimal), the day part (hex and decimal), and the UTC offset (hex and decimal minutes); the original post marked these parts with colors. The multi-byte values are stored little-endian.

        Prefix  Ticks       Ticks         Days    Days    Offset  Offset  Original value
        ------  ----------  ------------  ------  ------  ------  ------  ------------------------
        0x00    0CF700             63244  A8330B  734120  D200       210  0x000CF700A8330BD200
        0x01    75A609            632437  A8330B  734120  D200       210  0x0175A609A8330BD200
        0x02    918060           6324369  A8330B  734120  D200       210  0x02918060A8330BD200
        0x03    AD05C503        63243693  A8330B  734120  D200       210  0x03AD05C503A8330BD200
        0x04    C638B225       632436934  A8330B  734120  D200       210  0x04C638B225A8330BD200
        0x05    BE37F67801    6324369342  A8330B  734120  D200       210  0x05BE37F67801A8330BD200
        0x06    6F2D9EB90E   63243693423  A8330B  734120  D200       210  0x066F2D9EB90EA8330BD200
        0x07    57C62D4093  632436934231  A8330B  734120  D200       210  0x0757C62D4093A8330BD200

    What you can see is that the date part is equal in all cases, which makes sense since the precision doesn't affect the date part. If you add 63244 seconds to midnight, you get 17:34:04, which is the correct UTC time. So what is stored is the UTC time, and the local time can be found by adding the "UTC offset" minutes. And if you look at it, it makes perfect sense that each following value is 10 times greater each time the precision is increased one step, too. //Peter
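    To make the layout concrete, here is a small Java sketch (an illustration written for this summary, not code from the original post) that decodes the precision-7 value 0x0757C62D4093A8330BD200 from the table above:

        import java.time.LocalDate;
        import java.time.LocalTime;
        import java.time.OffsetDateTime;
        import java.time.ZoneOffset;

        public class DecodeDateTimeOffset {
            public static void main(String[] args) {
                int precision = 0x07;        // prefix byte
                long ticks = 0x93402DC657L;  // bytes 57 C6 2D 40 93, little-endian
                int days = 0x0B33A8;         // bytes A8 33 0B, little-endian -> 734120
                int offsetMinutes = 0x00D2;  // bytes D2 00, little-endian -> 210

                // Ticks count units of 10^-precision seconds since UTC midnight.
                long nanosOfDay = ticks * (long) Math.pow(10, 9 - precision);

                LocalDate date = LocalDate.of(1, 1, 1).plusDays(days); // day 0 = 0001-01-01
                LocalTime utcTime = LocalTime.ofNanoOfDay(nanosOfDay);
                ZoneOffset offset = ZoneOffset.ofTotalSeconds(offsetMinutes * 60);

                OffsetDateTime local = OffsetDateTime.of(date, utcTime, ZoneOffset.UTC)
                                                     .withOffsetSameInstant(offset);
                System.out.println(local); // 2010-12-15T21:04:03.693423100+03:30
            }
        }

    Running it reproduces the value assigned to @now, confirming that the stored time is UTC plus a separate offset in minutes.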

    Read the article

  • Groovy Debugging

    - by Vijay Allen Raj
    Groovy Debugging - An Overview: ADF BC developers may express snippets of business logic (like the following) as embedded Groovy expressions: default/calculated attribute values, validation rules/conditions, error message tokens, and LOV input values (VO). This approach has the advantages that Groovy has a compact, EL-like syntax for expressing simple logic, ADF has extended this syntax to provide useful built-ins, and embedded Groovy expressions are customizable. Groovy debugging support helps improve the maintainability of business logic expressed in Groovy. Following is an example of how Groovy debugging works.

    Example: This example shows how a script expression validator can be created and the Groovy script debugged. It shows the step-over and breakpoint functionality as well as syntax coloring. Let us create an ADF BC application based on the Emp and Dept tables, and add a script expression validator based on this script:

        if (Sal >= 5000) {
            // If Sal is greater than a property value set in the custom
            // properties on the root AM, raise a custom exception;
            // otherwise raise a custom warning.
            if (Sal >= source.DBTransaction.rootApplicationModule.propertiesMap.salHigh) {
                adf.error.raise("ExcGreaterThanApplicationLimit");
            } else {
                adf.error.warn("WarnGreaterThan5000");
            }
        } else if (Sal <= 1000) {
            adf.error.raise("ExcTooLow");
        }
        return true;

    In the Emp.xml flat editor, place breakpoints at the desired locations. Right-click the application module and click Debug. Enter a value greater than 5000 and click Next; you can then watch the debugger at work. The code can also be stepped over and debugged.

    Read the article

  • Preventing Users From Copying Text From and Pasting It Into TextBoxes

    Many websites that support user accounts require users to enter an email address as part of the registration process. This email address is then used as the primary communication channel with the user. For instance, if the user forgets her password, a new one can be generated and emailed to the address on file. But what if, when registering, a user enters an incorrect email address? Perhaps the user meant to enter [email protected], but accidentally transposed the first two letters, entering [email protected]. How can such typos be prevented? The only foolproof way to ensure that the user's entered email address is valid is to send them a validation email upon registering that includes a link that, when visited, activates their account. (This technique is discussed in detail in Examining ASP.NET's Membership, Roles, and Profile - Part 11.) The downside to using a validation email is that it adds one more step to the registration process, which will cause some people to bail out. A simpler approach to lessening email entry errors is to have the user enter their email address twice, just as most registration forms prompt users to enter their password twice. In fact, you may have seen registration pages that do just this. However, when I encounter such a registration page I usually avoid entering the email address twice; instead I enter it once and then copy and paste it from the first textbox into the second. This behavior circumvents the purpose of the two textboxes - any typo entered into the first textbox will be copied into the second. Using a bit of JavaScript it is possible to prevent most users from copying text from one textbox and pasting it into another, thereby requiring the user to type their email address into both textboxes. This article shows how to disable cut and paste between textboxes on a web page using the free jQuery library. Read on to learn more!

    Read the article

  • Requiring multithreading/concurrency for implementation of scripting language

    - by Ricky Stewart
    Here's the deal: I'm looking at designing my own scripting/interpreted language for fun. I'm only in the planning stages right now; I want to make sure I have a very strong hold on exactly how I will implement everything before I start coding. What I'm currently struggling with is concurrency. It seems to me like an easy way to avoid the unpredictable performance that comes with garbage collection would be to put the garbage collector in its own thread and have it run concurrently with the interpreter itself. (To be clear, I don't plan to allow the scripts to be multithreaded themselves; I would simply put the garbage collector to work in a different thread than the interpreter.) This doesn't seem to be a common strategy for many popular scripting languages, probably for portability reasons; I would probably write the interpreter with the UNIX/POSIX threading framework initially and then port it to other platforms (Windows, etc.) if need be. Does anyone have any thoughts on this issue? Would whatever gains I receive by exploiting concurrency be nullified by the portability issues that will inevitably arise? (On that note, am I really correct in my assumption that I would experience great performance gains with a concurrent garbage collector?) Should I move forward with this strategy or step away from it?
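    For what it's worth, here is a toy Java sketch of the shape being proposed: a mark-and-sweep collector running on its own thread. It is an illustration only (a real collector would need safepoints, barriers, or a genuinely concurrent algorithm); here the collector simply synchronizes on the heap lock, so each cycle is a brief stop-the-world pause that happens off the interpreter thread:

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.Deque;
        import java.util.List;

        // Toy heap: the "interpreter" allocates cells; the collector thread
        // periodically marks from the roots and sweeps unreachable cells.
        final class ToyHeap {
            static final class Cell {
                boolean marked;
                final List<Cell> refs = new ArrayList<>();
            }

            private final List<Cell> heap = new ArrayList<>();
            private final List<Cell> roots = new ArrayList<>();

            synchronized Cell allocate() { Cell c = new Cell(); heap.add(c); return c; }
            synchronized void addRoot(Cell c) { roots.add(c); }
            synchronized void clearRoots() { roots.clear(); }

            synchronized int collect() {
                for (Cell c : heap) c.marked = false;          // reset marks
                Deque<Cell> pending = new ArrayDeque<>(roots); // mark from roots
                while (!pending.isEmpty()) {
                    Cell c = pending.pop();
                    if (!c.marked) { c.marked = true; pending.addAll(c.refs); }
                }
                int before = heap.size();
                heap.removeIf(c -> !c.marked);                 // sweep
                return before - heap.size();
            }
        }

        public class ConcurrentGcSketch {
            public static void main(String[] args) throws InterruptedException {
                ToyHeap heap = new ToyHeap();

                Thread collector = new Thread(() -> {          // the GC thread
                    while (!Thread.currentThread().isInterrupted()) {
                        System.out.println("collected " + heap.collect() + " cells");
                        try { Thread.sleep(50); } catch (InterruptedException e) { return; }
                    }
                });
                collector.setDaemon(true);
                collector.start();

                // The "interpreter": allocate garbage, keep an occasional root alive.
                for (int i = 0; i < 100_000; i++) {
                    ToyHeap.Cell cell = heap.allocate();
                    if (i % 1000 == 0) { heap.clearRoots(); heap.addRoot(cell); }
                }
                Thread.sleep(100); // let the collector run once more before exit
            }
        }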

    Read the article

  • Input signal out of range; Change settings to 1600 x 900

    - by Clayton
    I recently installed Ubuntu 12.04 onto my HP Pavilion, in an attempt to make the desktop dual-boot Windows 7 and Ubuntu. I managed to get down to the last step and finished the installation process. After it prompted me to remove the media I used to install Ubuntu, I did so, removing my SanDisk 8GB flash drive, and allowed the system to reboot. As usual, the desktop booted with the HP image, with the options at the bottom (Boot Menu, System Recovery, etc). However, when it should have started up Ubuntu (as I'm certain it should have done), I received the following error: Input signal out of range. Change settings to 1600 x 900. From the time I installed the operating system, back in late August, till now, I've been trying to figure out how to fix this issue. My mom is also starting to get frustrated with my not having resolved it, as it's the only desktop that has a printer installed. Is there any possible way to resolve this? To summarize the problem:
    - Successful boot
    - Screen brings up the error
    - Screen goes to standby
    - Nothing else is possible until the desktop is rebooted, which repeats the above three steps
    A few notes:
    - I did not back up my computer before I installed Ubuntu. I didn't have anything to write to, and basically just forgot to.
    - I don't have a Recovery Disk.
    - I don't have the Windows 7 disk that is supposed to come with the computer.
    - It has been narrowed down by a friend on Skype that the problem lies with the display, and that the vga= boot command does have something to do with fixing the problem.
    Thank you in advance for resolving this problem. I greatly appreciate it. ^^

    Read the article

  • What to do when the programming activity becomes a problem?

    - by gablin
    I once saw a program (can't remember which) where it talked about people "experiencing flow" when they are doing something they are passionate about. When "in flow", they tend to lose track of time and their surroundings, concentrating only on the activity at hand. This happens a lot for me when I program, most particularly when I face a problem. I refuse to give up until it's solved. This usually leads to hours just rushing by: I forget to eat lunch, dinner gets pushed far into the evening, and when I finally look at the clock, it's way into the wee hours of the night and I will only get a few hours of sleep before having to rise early in the morning. (This is not to say that I'm in flow only when facing a problem - but I find it particularly hard to stop programming and step back when there's something I can't solve immediately.) I love programming, but I hate it when it disrupts my normal routines (most importantly eating and sleeping patterns). And sitting still for so many hours, staring at a screen, is not healthy. Please, any ideas on how I can get my rampant programming activity under control?

    Read the article

  • Problem Installing Xubuntu 12.04 Audio-Video codecs

    - by Seib
    I used Crouton to install Xubuntu 12.04.4 LTS with the XFCE environment on my HP Chromebook 11. I've gotten it all fully installed; the only thing I'm doing now is the basic setup, like adding codecs and other things like LibreOffice, VLC, Firefox, Ubuntu Software Center, etc. This information I got from 2 sources: http://www.efytimes.com/e1/fullnews.asp?edid=137269 and http://www.binarytides.com/better-xubuntu-14-04/ . I'm currently on the same step at both URLs, which is #6 on Link 1 and #8 on Link 2. Per the articles, which both say the same thing, I typed in sudo apt-get install xubuntu-restricted-extras libavcodec-extra and it didn't do anything. It just kept saying the same thing: Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package libavcodec-extra I've spent the last hour or so scouring the internet for a solution, for even a hint at what is going on. I don't want to do a clean reinstall, for two reasons: 1) it takes like 1.5h just to get the chroot fully installed, and 2) everything else I've done so far (up to #6 at Link 1 & up to #8 at Link 2) works except the audio. I've already installed Flash, so YouTube works fine. It's just that I can't hear any audio. Please help? Thanks in advance. I appreciate all the great help I've been getting from AskUbuntu lately. You all are great.

    Read the article

  • Recommended total system backup solution

    - by bioShark
    I hope this question won't get closed immediately since it has a generic title. I already searched a bit around the answers here, but nothing satisfied me. I want a backup solution that makes a total backup, so that I can restore my Ubuntu in case of major failures, like the HDD failing. As far as I can see, I have 2 choices: 1) Backing up with Deja Dup to an external disk. This is fine and I am already doing it, but in case my HDD fails and I make a new Ubuntu install on a new disk, will Deja Dup be able to restore all my settings and files from the backed-up files? If it can, then what other files/folders should I add in Deja Dup to back up (currently I have set only the recommended /home folder)? Is there a point in telling Deja Dup to back up everything under "/"? 2) Disk/partition cloning software. This would be something similar to Norton Ghost. Is there such software with a nice GUI that you could recommend for Ubuntu? And even better, it would be nice if Ubuntu's live CD could recognize such a clone at the install step. I am using 11.10

    Read the article

  • Import SSIS Project in Denali CTP1

    For years Analysis Services has had the ability to take an existing database from a server and reverse engineer it into a BIDS project. This is extremely useful when all you have is the running instance of the database and the project that created it has long since disappeared. Reverse engineering has never been a feature of SSIS until now. Let me walk you through the simple steps. The first step is that you obviously have to have a project deployed to an SSIS Catalog. I will do a video on this soon, but in case you can't wait, my good buddy Jamie Thomson has written it up here. As you can see, I have a project called, imaginatively, "Denali1" with one package, "Package.dtsx". The next thing we need to do is fire up BIDS and choose the right project type (Integration Services Import Project). Now we just follow the wizard. We make sure we specify on which server to find the Catalog and in which folder to look for the project. Next the settings are validated, and we are greeted with the familiar review screen before the creation of our new project from the deployed project happens. Hit Import and away we go. The result is just what we wanted.

    Read the article

  • Design mode in Visual studio 2010 sp1 beta

    - by anirudha
    In MVC3 Razor we find that there is no way to view the design in Visual Studio the way we can with an aspx file by switching to design mode. There is a little trick to work around this issue. First, if you have Expression Web or VS 2008, you can open the file in them. In Expression Web 4 you need to add the .cshtml extension and open those files as HTML, as we open others; in Visual Studio 2008 you need to add the .cshtml extension and set it to open as HTML. Well, it is no big trouble if design mode doesn't work, but for the cases when you need this issue solved, here is the configuration. In Expression Web 4: Tools > Application Options > Configure Editors > click on the new extension icon shown at the far left, put cshtml in the window they show you, and choose Expression Web [open as HTML]. After that you can see the design in Expression Web. By default Expression Web 4 does not have cshtml as a known extension, so you need to add it; without it, Expression Web will never handle cshtml files and will refer them to Visual Studio. After setting this, EW4 opens a cshtml file as HTML and shows you the design in design mode. In Visual Studio 2008 you can use the same trick to solve this issue; just follow these steps: Tools > Options > Text Editors > File Extension, put cshtml in the textbox, set the option HTML Editor from the dropdown, click Add, and click OK. This will open your cshtml files as HTML.

    Read the article

  • Why kernel source is not installed

    - by Subhajit
    I want to install the kernel source in Ubuntu 12.04, which is not installed. I have checked this using the following command:

        dpkg -s kernel

    Output is:

        Kernel is not installed
        no information available

    Hence I have followed these steps to install it:

    1. Installed the dependencies with the following commands:

        sudo apt-get install gcc libncurses5-dev git-core kernel-package fakeroot build-essential
        sudo apt-get update && sudo apt-get upgrade

    2. Downloaded the kernel source with the following commands:

        wget http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.5.tar.bz2
        tar -xvf linux-3.5.tar.bz2
        cd linux-3.5/

    3. Compiled the source code to generate the .deb packages with the following commands:

        make-kpkg clean
        fakeroot make-kpkg --initrd --append-to-version=-spica kernel_image kernel_headers

    4. Installed the .deb packages (two .deb packages are generated: one installs the kernel headers, the other installs the kernel image) with the following command:

        sudo dpkg -i linux-*.deb

    But after reboot it seems the kernel is not installed (checked with dpkg -s kernel). Please tell me where I am going wrong. Also, in step 3, I guess I am installing a new kernel (with the -spica suffix), but during boot this new kernel is not showing as an option. Please help me

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract: This blog post will show how we used Oracle R Enterprise to tackle a customer's big calculation problem across a big data set. Overview: Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls. R is great for doing advanced computations. Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns. This blog post shows how Oracle R Enterprise (R plus the Oracle Database) enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes. The problem: A financial services customer of mine has a need to calculate the historical internal rate of return (IRR) for its customers' portfolios. This information is needed for customer statements and the online web application. In the past, they had solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations. But this home-grown application was not able to do this fast enough, plus it was a challenge for them to write and maintain the code that did the IRR calculation. IRR - a problem that R is good at solving: Internal rate of return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly. Rather, IRR is a calculation where approximation techniques need to be used. In this blog post, we will discuss calculating the "money weighted rate of return", but in the actual customer proof of concept we used R to calculate both money weighted and time weighted rates of return. You can learn more about money weighted rates of return here: http://www.wikinvest.com/wiki/Money-weighted_return First steps - calculating IRR in R: We will start with calculating the IRR in standalone/desktop R. In our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale. The first step we did was to get some sample data. For a historical IRR calculation, you have balances and cash flows. In our case, the customer provided us with several accounts' worth of sample data in Microsoft Excel: a spreadsheet of balances and cash flows for each sample account (BMV = beginning market value, FLOW = cash flow in/out of the account, EMV = ending market value). Once we had the sample spreadsheet, the next step was to read the Excel data into R. This is something that R does well, and R offers multiple ways to work with spreadsheet data. For instance, one could save the spreadsheet as a .csv file. In our case, the customer provided a spreadsheet file containing multiple sheets, where each sheet provided data for a different sample account. To handle this easily, we took advantage of the RODBC package, which allowed us to read the Excel data sheet by sheet without having to create individual .csv files. We wrote ourselves a little helper function called getsheet() around the RODBC package. Then we loaded all of the sample accounts into a data.frame called SimpleMWRRData. Writing the IRR function: At this point, it was time to write the money weighted rate of return (MWRR) function itself. The definition of MWRR is easily found on the internet or, if you are old school, in an investment performance textbook.
    In the customer proof, we based our calculations on the ones defined in The Handbook of Investment Performance: A User's Guide by David Spaulding, since this is the reference book used by the customer. (One of the nice things we found during the course of this proof of concept is that by using R to write our IRR functions we could easily incorporate the specific variations and business rules of the customer into the calculation.) The key thing with calculating IRR is the need to solve a complex equation with a numerical approximation technique. For IRR, you need to find the value of the rate of return (r) that sets the net present value of all the flows in and out of the account to zero. With R, we solve this by defining our NPV function (one common form of the equation is sketched at the end of this entry), where bmv is the beginning market value, cf is a vector of cash flows, t is a vector of times (relative to the beginning), emv is the ending market value, and tend is the ending time. Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R's optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or maximum; to find the value of r where our npv function is closest to zero, we wrapped our npv function inside the abs function and asked optimize to find the minimum, given low and high scalars that indicate the range in which to search for an answer. To test this out, we need to set values for bmv, cf, t, emv, tend, low, and high; we set low and high to some reasonable defaults. For example, one sample account had a negative 2.2% money weighted rate of return. Enhancing and packaging the IRR function: With numerical approximation methods like optimize, sometimes you will not be able to find an answer with your initial set of inputs. To account for this, our approach was to first try to find an answer for r within a narrow range, and then, if we did not find an answer, to call optimize() again with a broader range. See the R help page on optimize() for more details about the search range and its algorithm. At this point, we can write a simplified version of our MWRR function. (Our real-world version is more sophisticated in that it calculates rates of return for 5 different time periods [since inception, last quarter, year-to-date, last year, year before last year] in a single invocation. In our actual customer proof, we also defined time-weighted rate of return calculations. The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.) To simplify code deployment, we then created a new package with our IRR functions and sample data. For this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data. We created the shell of the package using R's package.skeleton() facility. To turn this package skeleton into something usable, at a minimum you need to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the \man subdirectory. In those files, you need to at least provide a value for the "title" section.
    To simplify code deployment, we then created a new package of our IRR functions and sample data.  For this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data.  We created the shell of the package by calling R's package.skeleton() function.  To turn this package skeleton into something usable, at a minimum you need to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the \man subdirectory; in those files, you need to at least provide a value for the "title" section.  Once that is done, you can change directory to the IRR directory and build and install the package from the command line (for example, with R CMD INSTALL).  The myIRR package for this blog post (which has both the SimpleMWRR source and the SimpleMWRRData sample data) is downloadable from here: myIRR package

    Testing the myIRR package: Once the IRR function was converted to an installable package, we tested it by loading the package and running SimpleMWRR against one of the sample accounts (see the first sketch at the end of this post).

    Calculating IRR for All the Accounts: So far, we have shown how to calculate IRR for a single account.  The real-world issue is how to calculate IRR for all of the accounts.  This is the kind of situation where we can leverage the "Split-Apply-Combine" approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html).  Given that our sample data can fit in memory, one easy approach is to use R's "by" function.  (Other approaches to Split-Apply-Combine, such as plyr, can also be used.  See http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/.)  The "by" function can calculate the money weighted rate of return for each account in our sample data set (see the second sketch at the end of this post).

    Recap and Next Steps: At this point, you've seen the power of R being used to calculate IRR.  There were several good things:

    - R could easily work with the spreadsheets of sample data we were given
    - R's optimize() function provided a nice way to solve for IRR; it was both fast and allowed us to avoid having to code our own iterative approximation algorithm
    - R was a convenient language in which to express the customer-specific variations, business rules, and exceptions that often occur in real-world calculations; these could be easily added to our IRR functions
    - The Split-Apply-Combine technique can be used to perform calculations of IRR for multiple accounts at once

    However, there are several challenges yet to be conquered at this point in our story:

    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger: too big to fit into the normal R memory space and too big to want to move across the network
    - The overall process needs to run fast, much faster than on a single processor
    - The actual data needs to be kept secure, which is another reason not to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes

    In our next blog post in this series, we will show you how Oracle R Enterprise solved these challenges.
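    Here is the first sketch referenced above: testing the function once installed as the myIRR package.  BMV, FLOW, and EMV follow the sample spreadsheet's columns described earlier; ACCT and T are hypothetical names we use for the account identifier and time columns:

    library(myIRR)
    data(SimpleMWRRData)

    # pull out one sample account (ACCT and T are hypothetical column names)
    acct <- SimpleMWRRData[SimpleMWRRData$ACCT == "account01", ]

    SimpleMWRR(bmv  = acct$BMV[1],
               cf   = acct$FLOW,
               t    = acct$T,
               emv  = acct$EMV[nrow(acct)],
               tend = max(acct$T))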
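    And here is the second sketch referenced above: using by() to split the sample data by account and apply the MWRR calculation to each piece (same hypothetical column names as in the previous sketch):

    # Split-Apply-Combine: split SimpleMWRRData by account, apply the MWRR
    # calculation to each account's rows, and combine the results
    results <- by(SimpleMWRRData, SimpleMWRRData$ACCT, function(acct) {
      SimpleMWRR(bmv  = acct$BMV[1],
                 cf   = acct$FLOW,
                 t    = acct$T,
                 emv  = acct$EMV[nrow(acct)],
                 tend = max(acct$T))
    })
    results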

    Read the article

  • The Endeca UI Design Pattern Library Returns

    - by Joe Lamantia
    I'm happy to announce that the Endeca UI Design Pattern Library - now titled the Endeca Discovery Pattern Library - is once again providing guidance and good practices on the design of discovery experiences.  Launched publicly in 2010 following several years of internal development and usage, the Endeca Pattern Library is a unique and valued source of industry-leading perspective on discovery - something I've come to appreciate directly through fielding the consistent stream of inquiries about the library's status and requests for its rapid return to public availability.

    Restoring the library as a public resource is only the first step!  For the next stage of the library's evolution, we plan to increase the scope of the guidance it offers beyond user interface design to the broader topic of discovery.  This could include patterns for architecture at the systems, user experience, and business levels; information and process models; analytical method and activity patterns for conducting discovery; and organizational and resource patterns for provisioning discovery capability in different settings.  We'd like guidance from the community on the kinds of patterns that are most valuable, so make sure to let us know.  We're also considering ways to increase the number of patterns the library offers, possibly by expanding the set of contributors and the authoring mechanisms.  If you'd like to contribute, please get in touch.

    Here's the new address of the library: http://www.oracle.com/goto/EndecaDiscoveryPatterns

    And I should say 'Many thanks' to the UXDirect team and all the others within the Oracle family who helped - literally - keep the library alive and restore it as a public resource.

    Read the article

  • nVidia Settings: Overriding anti-aliasing causes delay

    - by Kalle Elmér
    I'm using Google SketchUp on Ubuntu 12.04 with Wine 1.4.  It works flawlessly out of the box, but anti-aliasing is causing some problems.  I can override the anti-aliasing settings using the nVidia X Server Settings utility, which results in a great-looking image.  However, the view doesn't seem to update properly.  It's a bit hard to explain, but if I do something (e.g. zooming), the change doesn't appear in the view until I take another action; in other words, there seems to be a delay of one "action".  Take this example:

    1. The mouse wheel is moved one notch to zoom in one step.  Nothing happens.
    2. An object is selected by clicking.  The new zoom is rendered, but the selection box doesn't appear.
    3. An empty area is clicked.  The selection box appears.

    Is there something I can do to solve the problem?  Could I force the GPU to redraw the view at a certain interval, or is there some other solution?  I really like anti-aliasing, but it's hard to use when drawing.

    Read the article

  • Restoring GRUB2 on Software RAID 0 after Windows 7 wiped it using LiveCD

    - by unknownthreat
    I have installed Ubuntu 10.10 on my system.  However, I needed to install Windows 7 back, and as I expected, doing so altered GRUB.  Right now, the partitions on my Software RAID 0 look like this: nvidia_acajefec1 is Ubuntu 10.10 and nvidia_acajefec3 is Windows 7.  I've been following some guides, and I always get stuck at GRUB not being able to detect the usual RAID content.  I've tried running:

    sudo grub
    > root (hd0,0)

    GRUB complains that it couldn't find my hard disk.  So I tried:

    find (hd0,0)

    and it complains that it couldn't find anything.  So I tried:

    find /boot/grub/stage1

    It said "file not found".  Here's the text from the console:

    ubuntu@ubuntu:~$ grub
    Probing devices to guess BIOS drives. This may take a long time.
    [ Minimal BASH-like line editing is supported.  For the first word, TAB lists possible command completions.  Anywhere else TAB lists the possible completions of a device/filename. ]
    grub> root (hd0,0)
    Error 21: Selected disk does not exist
    grub> find /boot/grub/stage1
    Error 15: File not found

    Fortunately, one person pointed out that what I've been trying to do is for GRUB Legacy, not GRUB2.  So I went to the suggested guide (http://grub.enbug.org/Grub2LiveCdInstallGuide), looked around, and tried:

    ubuntu@ubuntu:~$ sudo fdisk -l
    Unable to seek on /dev/sda

    This is just step 2 of the instructions in that guide, and I cannot proceed because fdisk cannot seek /dev/sda.  However:

    ubuntu@ubuntu:~$ sudo dmraid -r
    /dev/sdb: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0
    /dev/sda: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0

    So what now?  Do you have an idea for how to make fdisk see my RAID array on the live CD (Ubuntu 10.10)?  Honestly, I am lost, very lost, in trying to restore GRUB2 on this software RAID 0 system.

    Read the article
