Search Results

Search found 3604 results on 145 pages for 'steve prior'.


  • Displaying Exceptions Thrown or Caught in Managed Beans

    - by Frank Nimphius
    Just came across a sample written by Steve Muench which, somewhere deep in its implementation details, uses the following code to route exceptions to the ADF binding layer so they are handled by the ADF model error handler (which can be customized by overriding the DCErrorHandlerImpl class and configuring the custom class in the DataBindings.cpx file). To route an exception to the ADFm error handler, Steve used the following code:

        ((DCBindingContainer)BindingContext.getCurrent().getCurrentBindingsEntry()).reportException(ex);

    The same code, however, can be used in managed beans as well to enforce consistent error handling in ADF. As an example, let's assume a managed bean method hits an exception. To simulate this, let's use the following code:

        public void onToolBarButtonAction(ActionEvent actionEvent) {
            throw new JboException("Just to tease you !!!!!");
        }

    The exception shows at runtime as displayed in the following image. Assuming a try-catch block is used to intercept the exception caused by a managed bean action, you can route the error message display to the ADF model error handler. Again, let's simulate the code that would need to go into a try-catch block:

        public void onToolBarButtonAction(ActionEvent actionEvent) {
            JboException ex = new JboException("Just to tease you !!!!!");
            BindingContext bctx = BindingContext.getCurrent();
            ((DCBindingContainer)bctx.getCurrentBindingsEntry()).reportException(ex);
        }

    The error now displays as shown in the image below. As you can see, the error is now handled by the ADFm error handler, which - as mentioned before - could be a custom error handler. Using the ADF model error handling to display exceptions thrown in managed beans requires the current ADF Faces page to have an associated PageDef file (which is the case if the page or view contains ADF-bound components). Note that to invoke methods exposed on the business service, it is recommended to always work through the binding layer (method binding) so that in case of an error the ADF model error handler is used automatically.
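    The post mentions that the ADFm error handler can be customized by overriding DCErrorHandlerImpl, but does not show an example. Below is a minimal sketch of such an override; it assumes the ADF 11g API (the DCErrorHandlerImpl constructor flag and the getDisplayMessage signature) and a hypothetical registration via the ErrorHandlerClass attribute in DataBindings.cpx, so verify the details against your ADF version before relying on it:

        import oracle.adf.model.BindingContext;
        import oracle.adf.model.binding.DCErrorHandlerImpl;

        // Hypothetical custom handler; register its fully qualified class name
        // in DataBindings.cpx (the ErrorHandlerClass attribute is assumed here).
        public class CustomErrorHandler extends DCErrorHandlerImpl {

            public CustomErrorHandler() {
                // 'true' asks the framework to throw exceptions so they reach this handler
                super(true);
            }

            @Override
            public String getDisplayMessage(BindingContext ctx, Exception ex) {
                // Prefix every reported message so exceptions routed from managed beans are easy to spot
                String message = super.getDisplayMessage(ctx, ex);
                return (message == null) ? null : "[MyApp] " + message;
            }
        }

    With a handler like this registered, the reportException(...) call shown above would surface the prefixed message instead of the default one.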

    Read the article

  • A Panorama of JavaOne Latin America

    - by reza_rahman
    As you know, JavaOne Latin America 2012 was held at the Transamerica Expo Center in Sao Paulo, Brazil on December 4-6. It was a resounding success with a great vibe, excellent technical content and numerous world-class speakers, both local and international. Various folks like Tori Wieldt, Steve Chin, Arun Gupta, Bruno Borges and myself looked at the conference through slightly different colored lenses. It's interesting to put them all together in a panoramic collage: Tori wrote about the Sao Paulo Geek Bike Ride held the Saturday before the conference here (enjoy the photos and video). She also discusses the keynotes in great detail here. Steve looked at it from the viewpoint of someone instrumental in putting the event together. Read his thoughts here (he has more geek bike ride photos as well as material for his JavaFX/HTML 5 talk). Arun had a more holistic view of the conference. He covers the geek bike ride, the GlassFish party (organized by Bruno Borges), his Java EE talks, and more. Check out the cool photos as well as the technical material. Bruno provides the critical local perspective in his 7 reasons you had to be at JavaOne Latin America 2012. He discusses the OTN Lounge, the hands-on lab, the Java community keynote, Java EE technical sessions and of course the GlassFish party! I covered the GlassFish booth, the lab and my technical sessions (as well as Sao Paulo's lively metal underground) here.

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-25

    - by Bob Rhubart
    Oracle 11gR2 RAC on Software Defined Network (SDN) | Gilbert Stan
    "The SDN [software defined network] idea is to separate the control plane and the data plane in networking and to virtualize networking the same way we have virtualized servers," explains Gil Standen. "This is an idea whose time has come because VMs and vmotion have created all kinds of problems with how to tell networking equipment that a VM has moved and to preserve connectivity to VPN end points, preserve IP, etc." H/T to Oracle ACE Director Tim Hall for the recommendation.

    Server-Sent Events on WebLogic Server | Steve Buttons
    "The HTML5 Server-Sent Event model provides a mechanism to allow browser clients to establish a uni-directional communication path to a server, where the server is then able to push messages to the browser at any point in time," explains Steve "Buttso" Buttons.

    Focus on Architects and Architecture
    This handy guide for sessions and other activities at Oracle OpenWorld 2012 focuses on IT architecture in all its many facets and permutations.

    Operating System Set-up for WebLogic Server | Rene van Wijk
    Oracle ACE Rene van Wijk shows you how to set up an operating system for WebLogic Server. "We will use VMware as our virtualization platform and use CentOS as the operating system," says van Wijk. "We end the post by showing how the operating system can be tuned when running a Java process such as WebLogic Server."

    Free eBook: Oracle SOA Suite - In the Customer's Words
    If you find yourself in the position of having to sell the idea of Service-Oriented Architecture to business stakeholders, this free e-book may come in very handy. Check out "Oracle SOA Suite: In the Customer's Words." (Registration / Oracle.com login required.)

    Thought for the Day
    "The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency." — Bill Gates Source: BrainyQuote.com

    Read the article

  • Why is CS never a topic of conversation of the layman? [closed]

    - by hydroparadise
    Granted, every profession has its technicalities. If you are an MD, you better know the anatomy of the human body, and if you are an astronomer, you better know your calculus. Yet you don't have to know these more advanced topics to know that smoking might give you lung cancer because of carcinogens, or that the moon revolves around the earth because of gravity (thank you, Discovery Channel). There's a sort of common knowledge (at least in more developed countries) of these more advanced topics. With that said, why are things like recursive descent parsing, BNF, or Turing machines hardly ever mentioned outside 3000- or 4000-level classes in a university setting, or between colleagues? Even back in my days before college, in my pursuit of knowledge on how computers work, these very important topics (IMHO) never seemed to see the light of day. Many different sources and sites go into "What is a processor?" or "What is RAM?", or "What is an OS?". You might get lucky and discover something about programming languages and how they play a role in how applications are created, but nothing about the tools for creating the language itself. To extend this idea, Dennis Ritchie died shortly after Steve Jobs, yet Dennis Ritchie got very little press compared to Steve Jobs. So, the heart of my question: does the public in general not care to hear about the computer science topics that make the technology in their lives work, or does the computer science community not lend itself to the general public to close the knowledge gap? Am I wrong to think the general public has the same thirst for knowledge on how things work as I do? Please consider the question carefully before answering or voting to close.

    Read the article

  • Setting useLegacyV2RuntimeActivationPolicy At Runtime

    - by Reed
    Version 4.0 of the .NET Framework included a new CLR which is almost entirely backwards compatible with the 2.0 version of the CLR.  However, by default, mixed-mode assemblies targeting .NET 3.5sp1 and earlier will fail to load in a .NET 4 application.  Fixing this requires setting useLegacyV2RuntimeActivationPolicy in your app.config for the application.  While there are many good reasons for this decision, there are times when this is extremely frustrating, especially when writing a library.  As such, there are (rare) times when it would be beneficial to set this in code, at runtime, as well as verify that it's running correctly prior to receiving a FileLoadException. Typically, loading a pre-.NET 4 mixed mode assembly is handled simply by changing your app.config file, and including the relevant attribute in the startup element:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <supportedRuntime version="v4.0"/>
          </startup>
        </configuration>

    This causes your application to run correctly, and load the older, mixed-mode assembly without issues. For full details on what's happening here and why, I recommend reading Mark Miller's detailed explanation of this attribute and the reasoning behind it. Before I show any code, let me say: I strongly recommend using the official approach of using app.config to set this policy. That being said, there are (rare) times when, for one reason or another, changing the application configuration file is less than ideal. While app.config is the supported approach to handling this issue, the CLR Hosting API includes a means of setting this programmatically via the ICLRRuntimeInfo interface.  Normally, this is used if you're hosting the CLR in a native application in order to set this, at runtime, prior to loading the assemblies.  However, the F# Samples include a nice trick showing how to load this API and bind this policy at runtime.  This was required in order to host the Managed DirectX API, which is built against an older version of the CLR. This is fairly easy to port to C#.
    Instead of a direct port, I also added a little addition - by trapping the COM exception received if unable to bind (which will occur if the 2.0 CLR is already bound), I also allow a runtime check of whether this property was set up properly:

        public static class RuntimePolicyHelper
        {
            public static bool LegacyV2RuntimeEnabledSuccessfully { get; private set; }

            static RuntimePolicyHelper()
            {
                ICLRRuntimeInfo clrRuntimeInfo =
                    (ICLRRuntimeInfo)RuntimeEnvironment.GetRuntimeInterfaceAsObject(
                        Guid.Empty,
                        typeof(ICLRRuntimeInfo).GUID);
                try
                {
                    clrRuntimeInfo.BindAsLegacyV2Runtime();
                    LegacyV2RuntimeEnabledSuccessfully = true;
                }
                catch (COMException)
                {
                    // This occurs with an HRESULT meaning
                    // "A different runtime was already bound to the legacy CLR version 2 activation policy."
                    LegacyV2RuntimeEnabledSuccessfully = false;
                }
            }

            [ComImport]
            [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
            [Guid("BD39D1D2-BA2F-486A-89B0-B4B0CB466891")]
            private interface ICLRRuntimeInfo
            {
                void xGetVersionString();
                void xGetRuntimeDirectory();
                void xIsLoaded();
                void xIsLoadable();
                void xLoadErrorString();
                void xLoadLibrary();
                void xGetProcAddress();
                void xGetInterface();
                void xSetDefaultStartupFlags();
                void xGetDefaultStartupFlags();

                [MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
                void BindAsLegacyV2Runtime();
            }
        }

    Using this, it's possible to not only set this at runtime, but also verify, prior to loading your mixed mode assembly, whether this will succeed. In my case, this was quite useful - I am working on a library purely for internal use which uses a numerical package that is supplied with both a completely managed as well as a native solver.  The native solver uses a CLR 2 mixed-mode assembly, but is dramatically faster than the pure managed approach.  By checking RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully at runtime, I can decide whether to enable the native solver, and only do so if I successfully bound this policy. There are some tricks required here - to enable this sort of fallback behavior, you must make these checks in a type that doesn't cause the mixed mode assembly to be loaded.  In my case, this forced me to encapsulate the library I was using entirely in a separate class, perform the check, then pass through the required calls to that class.  Otherwise, the library will load before the hosting process gets enabled, which in turn will fail. This code will also, of course, try to enable the runtime policy before the first time you use this class - which typically means just before the first time you check the boolean value.  As a result, checking this early on in the application is more likely to allow it to work. Finally, if you're using a library, this has to be called prior to the 2.0 CLR loading.  This will cause it to fail if you try to use it to enable this policy in a plugin for most third party applications that don't have their app.config set up properly, as they will likely have already loaded the 2.0 runtime. As an example, take a simple audio player.
    The code below shows how this can be used to properly, at runtime, only use the "native" API if this will succeed, and fall back (or raise a nicer exception) if it will fail:

        public class AudioPlayer
        {
            private IAudioEngine audioEngine;

            public AudioPlayer()
            {
                if (RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully)
                {
                    // This will load a CLR 2 mixed mode assembly
                    this.audioEngine = new AudioEngineNative();
                }
                else
                {
                    this.audioEngine = new AudioEngineManaged();
                }
            }

            public void Play(string filename)
            {
                this.audioEngine.Play(filename);
            }
        }

    Now - the warning: this approach works, but I would be very hesitant to use it in public facing production code, especially for anything other than initializing your own application.  While this should work in a library, using it has a very nasty side effect: you change the runtime policy of the executing application in a way that is very hidden and non-obvious.

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, "Five Tips to Get Your Organisation Releasing Software Frequently", I looked at how teams can automate processes to speed up release frequency. In this post, I'm looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required - source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy. This is where SQL Compare's command line comes into its own. I've put together a PowerShell script that loops through the Servers table and pulls out the server and database; these are then passed to sqlcompare.exe to be used as target parameters. In the example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.

        -- Create a DeploymentTargets database and a Servers table
        CREATE DATABASE DeploymentTargets
        GO
        USE DeploymentTargets
        GO
        CREATE TABLE [dbo].[Servers](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [serverName] [nvarchar](50) NULL,
            [environment] [nvarchar](50) NULL,
            [databaseName] [nvarchar](50) NULL,
            CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC)
        )
        GO
        -- Now insert your target server and database details
        INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment1' , N'mydb1')
        INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment2' , N'mydb2')

    Here's the PowerShell script you can adapt for yourself as well.

        # We're holding the server names and database names that we want to deploy to in a database table.
        # We need to connect to that server to read these details
        $serverName = ""
        $databaseName = "DeploymentTargets"
        $authentication = "Integrated Security=SSPI"
        #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication.

        # Path to the scripts folder we want to deploy to the databases
        $scriptsPath = "SimpleTalk"

        # Path to SQLCompare.exe
        $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe"

        # Create SQL connection string, and connection
        $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication"
        $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString);

        # Create a Dataset to hold the DataTable
        $dataSet = new-object "System.Data.DataSet" "ServerList"

        # Create a query
        $query = "SET NOCOUNT ON;"
        $query += "SELECT serverName, environment, databaseName "
        $query += "FROM dbo.Servers; "

        # Create a DataAdapter to populate the DataSet with the results
        $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection)
        $dataAdapter.Fill($dataSet) | Out-Null

        # Close the connection
        $ServerConnection.Close()

        # Populate the DataTable
        $dataTable = new-object "System.Data.DataTable" "Servers"
        $dataTable = $dataSet.Tables[0]

        # For every row in the DataTable
        $dataTable | FOREACH-OBJECT {
            "Server Name: $($_.serverName)"
            "Database Name: $($_.databaseName)"
            "Environment: $($_.environment)"

            # Compare the scripts folder to the database and synchronize the database to match
            # NB. Have set SQL Compare to abort on medium level warnings.
            $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync" ) # Commented out the 'sync' parameter for safety
            write-host $arguments
            & $SQLComparePath $arguments
            "Exit Code: $LASTEXITCODE"

            # Some interesting variations

            # Check that every database matches a folder.
            # For example this might be a pre-deployment step to validate everything is at the same baseline state.
            # Or a post deployment script to validate the deployment worked.
            # An exit code of 0 means the databases are identical.
            #
            # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

            # Generate a report of the difference between the folder and each database. Generate a SQL update script for each database.
            # For example use this after the above to generate upgrade scripts for each database
            # Examine the warnings and the HTML diff report to understand how the script will change objects
            #
            #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical")
        }

    It's worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse against multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script. You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings; see the second commented-out example in the PowerShell above. There is a drawback to running a pre-generated deployment script: it assumes that a given database target hasn't drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe exit code 79), a drift report is output instead of executing the deployment script; see the first commented-out example above. Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment.
You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.   You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free.  (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions)

    Read the article

  • Solaris 11 SRU / Update relationship explained, and blackout period on delivery of new bug fixes eliminated

    - by user12244672
    Relationship between SRUs and Update releases
    As you may know, Support Repository Updates (SRUs) for Oracle Solaris 11 are released monthly and are available to customers with an appropriate support contract. SRUs primarily deliver bug fixes. They may also deliver low-risk feature enhancements. Solaris Updates are typically released once or twice a year, containing support for new hardware, new software feature enhancements, and all bug fixes available at the time the Update content was finalized. They also contain a significant number of new bug fixes, for issues found internally in Oracle and complex customer bug fixes which require significant "soak" time to ensure their efficacy prior to release.

    Changes to SRU and Update Naming Conventions
    We're changing the naming convention of Update releases from a date-based format such as Oracle Solaris 10 8/11 to a simpler "dot" version numbering, e.g. Oracle Solaris 11.1. Oracle Solaris 11 11/11 (i.e. the initial Oracle Solaris 11 release) may be referred to as 11.0. SRUs will simply be named as "dot.dot" releases, e.g. Oracle Solaris 11.1.1 for SRU1 after Oracle Solaris 11.1. Many Oracle products and infrastructure tools such as BugDB and MOS are tailored towards this "dot.dot" style of release naming, so these name changes align Oracle Solaris with these conventions.

    No Blackout Periods on Bug Fix Releases
    The Oracle Solaris 11 release process has been enhanced to eliminate blackout periods on the delivery of new bug fixes to customers. Previously, Oracle Solaris Updates were a superset of all preceding bug fix deliveries. This made for a very simple update message - that which releases later is always a superset of that which was delivered previously. However, it had a downside. Once the contents of an Update release were frozen prior to release, the release of new bug fixes for customer issues was also frozen to maintain the Update's superset relationship. Since the amount of change allowed into the final internal builds of an Update release is reduced to mitigate risk, this throttling back also impacted the release of new bug fixes to customers. This meant that there was effectively a 6 to 9 week hiatus on the release of new bug fixes prior to the release of each Update. That wasn't good for customers awaiting critical bug fixes. We've eliminated this hiatus on the delivery of new bug fixes in Oracle Solaris 11 by allowing new bug fixes to continue to be released in SRUs even after the contents of the next Update release have been frozen. The release of SRUs will remain contiguous, with the first SRU released after the Update release effectively being a superset of both the Update release and all preceding SRUs*. That is, later SRUs are supersets of the content of previous SRUs. Therefore, the progression path from the final SRUs prior to the Update release is to the first SRU after the Update release, rather than to the Update release itself. The timeline / logical sequence of releases can be shown as follows:

        Updates:  11.0                                      11.1                             11.2    etc.
                      \                                         \                                \
        SRUs:          11.0.1, 11.0.2, ..., 11.0.12, 11.0.13,    11.1.1, 11.1.2, ..., 11.1.x,     11.2.1, etc.

    For example, for systems with Oracle Solaris 11 11/11 SRU12.4 or later installed, the recommended update path is to Oracle Solaris 11.1.1 (i.e.
SRU1 after Solaris 11.1) or later rather than to the Solaris 11.1 release itself.  This will ensure no bug fixes are "lost" during the update. If for any reason you do wish to update from SRU12.4 or later to the 11.1 release itself - for example to update a test system - the instructions to do so are in the SRU12.4 README, https://updates.oracle.com/Orion/Services/download?type=readme&aru=15564533 For systems with Oracle Solaris 11 11/11 SRU11.4 or earlier installed, customers can update to either the 11.1 release or any 11.1 SRU as both will be supersets of their current version. Please do read the README of the SRU you are updating to, as it will contain important installation instructions which will save you time and effort. *Nerdy details: SRUs only contain the latest change delta relative to the Update on which they are based.  Their dependencies will, however, effectively pull in the Update content.  Customers maintaining a local Repo (e.g. behind their firewall), need to add both the 11.1 content and the relevant SRU content to their Repo, to enable the SRU's dependencies to be resolved.  Both will be available from the standard Support Repo and from MOS.  This is no different to existing SRUs for Oracle Solaris 11.0, whereby you may often get away with using just the SRU content to update, but the original 11.0 content may be needed in the Repo to resolve dependencies.

    Read the article

  • SQLAuthority News – Meeting SQL Friends – SQLPASS 2011 Event Log

    - by pinaldave
    One of the biggest reasons I go to SQLPASS is that my friends are going there too. There are so many friends with whom I often talk on Facebook and Twitter, but I rarely get time to meet them and talk with them in person. One thing I am usually sure of is that many of them will for sure attend SQLPASS. This is one event which every SQL Server enthusiast should attend. Just like everybody, I had a pleasant time meeting many of my SQL friends. There were so many friends I met and did not click a photo with, and so many friends who clicked a photo on their camera which I do not have. Here are 1% of the photos which I have. If you are not in a photo, it does not mean I have less respect for our friendship. Please post a link to our photo together :) I was very fortunate to snap a quick photograph with Dr. David DeWitt. I stood outside the hall waiting for Dr. DeWitt to show up, and when he was heading down from the convention center I asked him if I could have one photo for my memory lane; very politely he agreed. It indeed made my day! Pinal Dave with Dr. David DeWitt Every single time I meet Steve, I make sure I have one photo for my memory. Steve is so kind every single time. If you know SQL and do not know Steve Jones, you do not know SQL (IMHO). Following is the photograph with Michael McLean. More details about this photo in a future blog post! Pinal Dave, Michael McLean, and Rick Morelan Arnie always shares his wisdom with me. I still remember when I visited the USA for the very first time: I was standing alone in a corner and Arnie walked up to me and introduced me to every single person he knew. Talking to Arnie is always a pleasure and inspiring. Arnie Rowland and Pinal Dave I am now a published author and have written two books so far. I am fortunate to have Rick Morelan as co-author of both of my books. He is a great guy and very easy to become friends with. I am very much impressed by him and his kindness during book co-authoring. Here is the very first of our photographs together at SQLPASS. Rick Morelan and Pinal Dave Diego Nogare and I have been talking for a long time on Twitter and various social media channels. I finally got the chance to meet my friend from Brazil. It was an excellent experience to meet a friend whom one has wanted to meet for a long time and never got the chance to earlier. Buck Woody - who does not know Buck? He is funny, kind and, most important, a friend to everyone. Buck is so kind that he does not hesitate to approach people even though he is famous and well known in the community. Every time I meet him I learn something. He is always smiling and approachable. Pinal Dave and Buck Woody Rushabh Mehta is the current SQL PASS president and a personal friend. He always has a smiling face and tremendous love for the SQL community. I often wonder where he gets the time for all the effort he puts in for the community. I never miss a chance to meet and greet him. Even though he is a renowned SQL guru and an extremely busy person, every single time I meet him he asks me, "How is Nupur and Shaivi?" He even remembers my wife's and daughter's names. I am touched. Rushabh Mehta and Pinal Dave Nigel Sammy has an excellent sense of humor and passion for the community. We have excellent synergy when we are together. The attached photo was taken while I was talking to him about SQL on the Seattle shoreline. Pinal Dave and Nigel Sammy Rick Morelan wanted this trip of mine to be memorable. I am vegetarian and I told him that I do not like seafood.
    Well, to prove the point, he took me to a fantastic seafood restaurant in Seattle and treated me to mouth-watering vegetarian dishes. I think when I go to Seattle next time, I am going to make him take me to the same place again. Rick, Rushabh, Pinal and Paras Well, this is a short summary of a few of the friends I met in Seattle. What is life without friends, eh? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • Removing Barriers to Create Effective Data Models

    After years of creating and maintaining data models, I have started to notice common barriers that decrease the accuracy and usefulness of models. In my opinion, the main causes of these barriers are a lack of knowledge and a lack of communication within a company. The lack of knowledge in regards to data models or data modeling can take many forms.

    Company Culture Knowledge
    Whether documented or undocumented, the existing business rules of a company can affect how data is modeled. For example, if a company only allows one assigned person per customer to manipulate that customer's record, then a data model that includes an associative table joining customers and employees would be unneeded, because such a table would allow multiple employees to handle a customer through a many-to-many relationship between Customers and Employees.

    Technical Knowledge
    Depending on the data modeler's proficiency in modeling data, they can inadvertently cause issues and/or complications with a design without even noticing. It is important that companies share data modeling responsibilities so that the models are developed from multiple perspectives of a system, the company and the original problem. In addition, the tools that a company selects to create data models can also affect the accuracy of the model if designers are not familiar with the tools, or the tools are too complex for the designer to use.

    Existing System Knowledge
    In order for a data modeler to model data for an existing system so that new changes can be applied to it, they need to at least know the basic concepts of the system so that they can work within it. This will promote reusability of data and prevent the chance of duplicating data.

    Project Knowledge
    This should be pretty obvious, but it is very hard to create an accurate data model without knowing what data needs to be modeled. I have always found it strange that I have been asked to start modeling data prior to a client formalizing any requirements. Usually when this happens I have to make several iterations to a model, and the client still does not know exactly what they want. Additional issues can arise when certain stakeholders of a project are not consulted prior to the design, or after the project is over, because this can cause misunderstandings and confusion for the end user, as well as possibly not solving the original problem the project was intended to solve.

    One common thread between each type of knowledge is that these barriers can all be avoided through the use of good communication. For example, if a modeler is new to a company then they should ask older employees about any business-specific rules, documented or undocumented, that must be applied to projects in general. Furthermore, if a modeler is not really familiar with a specific data modeling tool then they need to speak up and ask for help from other employees or their manager. This will not only help the modeler in the current project, but also in future projects they do for the company. Additionally, if a project is not clearly defined prior to a data modeler being assigned the modeling work, then it is their responsibility to communicate with the other stakeholders to clarify any part of the project that is unclear, so that the data model that is created is accurately aligned with the project.
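    To make the "one assigned employee per customer" example above concrete: the business rule turns what might otherwise look like a many-to-many relationship into a simple many-to-one reference, so no join table is needed. The sketch below shows one way to express that rule, using JPA annotations purely as an illustration; the entity and field names are hypothetical and not taken from any real project:

        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.ManyToOne;

        @Entity
        public class Customer {
            @Id
            @GeneratedValue
            private Long id;

            private String name;

            // Business rule: exactly one assigned employee may manipulate this
            // customer's record, so a many-to-one reference replaces a join table.
            @ManyToOne(optional = false)
            private Employee assignedEmployee;
        }

        @Entity
        class Employee {
            @Id
            @GeneratedValue
            private Long id;

            private String name;
        }

    If the rule were instead "any number of employees may handle a customer", the reference would become a many-to-many association and a join table would be justified; the point is that the rule, not the tool, decides the shape of the model.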

    Read the article

  • Microsoft BUILD 2013 Day 1 – Keynote

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013-day-1ndashkeynote.aspx This one is going to be a little long because the keynote was jam-packed, so bear with me. The keynote for the first day of BUILD 2013 was kicked off by Steve Ballmer.  He made it very clear that Microsoft's focus is on accelerating its time to market with products and product updates.  His quote was that "rapid release" is the new norm.  He continued by showing off several new Lumias that have been buzzing around the internet for a while and announced that Sprint will now be carrying the HTC 8XT and Samsung ATIV. Ballmer is known for repeating words or phrases for effect.  This time it was "Rapid release, rapid release" and "Touch, touch, touch, touch, touch, …".  This was fun, but even more fun was when he announced that all attendees would receive an Acer Iconia 8" tablet. SCORE! The next subject Ballmer focused on was new apps.  The three new ones were Flipboard, Facebook and NFL Fantasy Football.  I liked the first two because these are ones that people coming from other platforms are missing.  The NFL app is great just because it targets a demographic that can be fanatical.  If these types of apps keep coming then the missing-app argument goes away. While many Negative Nancys are describing Windows 8.1 as Windows 180, Steve Ballmer chose to call it a "refined blend", as in a coffee that has been improved with a new mix.  This includes more multi-tasking options and leveraging Bing throughout the entire ecosystem. He ended this first section by explaining that this will also bring more Bing development opportunities to the community. Steve Ballmer was followed by Julie Larson-Green who, from my point of view, spent her time on stage selling us on Windows 8 all over again.  That is something I would not have thought was needed until I had listened to some other attendees who had a number of concerns and complaints.  She showed a number of new gestures that will come with Windows 8.1, and while they were cool I was left wondering if they really improved the experience.  I guess only time will tell. I did like the fact that the UI implementation to bring up "All Apps" now mirrors that of Windows Phone.  The consistency is a big step forward that I hope to see continue.  The cool factor went up from there as she swiped content from a desktop (mega-tablet) to the Xbox One.  This seamless experience, I believe, is what is really needed for any future platform to be relevant. I was much more enthused by the presentation of Antoine Leblond, who humbled us by letting us know that there are 5,000 new APIs.  How that can be, or how anyone would ever use all of them, is another question.  His announcement was that the Visual Studio 2013 preview would be available today along with the Windows 8.1 bits.  One of the features of VS2013 that he demonstrated is the power consumption profiler.  With battery life being a key factor for consumer devices, this is a welcome addition. He didn't limit his presentation to VS2013 features though.  He showed how the Store has been redesigned to enable better search and discoverability of apps, and how Windows 8.1 can automatically scale the screen depending on the resolution of the device.  The last feature he demoed was the real-time video streaming API, which he made sure we understood by attaching a Surface to a little robot.  Oh, but there was one more thing.
    Antoine and Julie announced that all attendees would also be getting Surface Pros.  BONUS! How much more could there be?  Gurdeep Singh Pall was about to pile on.  He introduced us to Bing as a platform (BaaP?).  He said that if they (Microsoft) could do something with an API that is good, 3rd-party developers can do something that is dynamite, and showed us some of the tools they had produced.  These included natural user interface improvements such as voice commands that looked to put Siri to shame.  Add to that 3D, OCR and translation capabilities, and the future looks to be full of opportunities. Ballmer then came out to show us one last thing.  Project Spark is a game design environment that will be available for Windows 8.1, Xbox 360 and Xbox One.  All I can say is that if my kids get their hands on this they are going to be able to learn some of what dad does in a much more enjoyable way. At the end of it all I was both exhausted and energized by what I saw.  What could they have possibly left for the day 2 keynote?  I hear it will feature Scott Hanselman.  If that is right we are in for a treat.  See you there. del.icio.us Tags: BUILD 2013,Windows 8.1,Windows Phone,XAML,Keynote,Bing,Visual Studio 2013,Project Spark

    Read the article

  • How can I get a distinct list of elements in a hierarchical query?

    - by RenderIn
    I have a database table with people identified by a name, a job and a city. I have a second table that contains a hierarchical representation of every job in the company in every city. Suppose I have 3 people in the people table:

        [name(PK), title, city]
        Jim, Salesman, Houston
        Jane, Associate Marketer, Chicago
        Bill, Cashier, New York

    And I have thousands of job type/location combinations in the job table, a sample of which follows. You can see the hierarchical relationship since parent_title is a foreign key to title:

        [title, city, pay, parent_title]
        Salesman, Houston, $50000, CEO
        Cashier, Houston, $25000
        CEO, USA, $1000000
        Associate Marketer, Chicago, $75000
        Senior Marketer, Chicago, $125000
        .....

    The problem I'm having is that my Person table has a composite key, so I don't know how to structure the "start with" part of my query so that it starts with each of the three jobs in the cities I specified. I can execute three separate queries to get what I want, but this doesn't scale well. e.g.:

        select * from jobs
        start with city = (select city from people where name = 'Bill')
          and title = (select title from people where name = 'Bill')
        connect by prior parent_title = title
        UNION
        select * from jobs
        start with city = (select city from people where name = 'Jim')
          and title = (select title from people where name = 'Jim')
        connect by prior parent_title = title
        UNION
        select * from jobs
        start with city = (select city from people where name = 'Jane')
          and title = (select title from people where name = 'Jane')
        connect by prior parent_title = title

    How else can I get a distinct list (or I could wrap it with a distinct if not possible) of all the jobs which are above the three people I specified?

    Read the article

  • should variable be released or not? iphone-sdk

    - by psebos
    Hi, in the following piece of code (from a book), data is an NSDictionary *data; defined in the header (no property). In the viewDidLoad of the controller the following occurs:

        - (void)viewDidLoad {
            [super viewDidLoad];
            NSArray *keys = [NSArray arrayWithObjects:@"home", @"work", nil];
            NSArray *homeDVDs = [NSArray arrayWithObjects:@"Thomas the Builder", nil];
            NSArray *workDVDs = [NSArray arrayWithObjects:@"Intro to Blender", nil];
            NSArray *values = [NSArray arrayWithObjects:homeDVDs, workDVDs, nil];
            data = [[NSDictionary alloc] initWithObjects:values forKeys:keys];
        }

    Since I am really new to Objective-C, can someone explain to me why I do not have to retain the variables keys, homeDVDs, workDVDs and values prior to exiting the function? I would expect, prior to the data allocation, something like:

        [keys retain];
        [homeDVDs retain];
        [workDVDs retain];
        [values retain];

    or not? Does initWithObjects:forKeys: copy (recursively) all objects into a new table? Assuming we did not have the last line (the data allocation), should we release all the NSArrays prior to exiting the function (or can we safely assume that all the NSArrays will be autoreleased, since there is no alloc for each one)? Thanks!!!!

    Read the article

  • Client-Side Dynamic Removal of <script> Tags in <head>

    - by merv
    Is it possible to remove script tags in the <head> of an HTML document client-side and prior to execution of those tags? On the server-side I am able to insert a <script> above all other <script> tags in the <head>, except one, and I would like to be able to remove all subsequent scripts. I do not have the ability to remove <script> tags from the server side. What I've tried:

        (function (c, h) {
            var i, s = h.getElementsByTagName('script');
            c.log("Num scripts: " + s.length);
            i = s.length - 1;
            while (i > 1) {
                h.removeChild(s[i]);
                i -= 1;
            }
        })(console, document.head);

    However, the logged number of scripts comes out to only 1, since (as @ryan pointed out) the code is being executed prior to the DOM being ready. Although wrapping the code above in a document.ready event callback does enable proper calculation of the number of <script> tags in the <head>, waiting until the DOM is ready fails to prevent the scripts from loading. Is there a reliable means of manipulating the HTML prior to the DOM being ready?

    Background
    If you want more context, this is part of an attempt to consolidate scripts where no option for server-side aggregation is available. Many of the JS libraries being loaded are from a CMS with limited configuration options. The content is mostly static, so there is very little concern about manually aggregating the JavaScript and serving it from a different location. Any suggestions for alternative applicable aggregation techniques would also be welcome.

    Read the article

  • Trouble recovering MySQL InnoDB database after server crash

    - by Andy Shinn
    I had a server crash due to a broken iSCSI link (the filesystem went into read-only mode). After repairing the link and rebooting the machine (CentOS 5 / MySQL 5.1), the MySQL server would not start and gave the following error:

        100603 19:11:46 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        100603 19:11:46 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        InnoDB: Warning: database page corruption or a failed
        InnoDB: file read of page 112541.
        InnoDB: Trying to recover it from the doublewrite buffer.
        InnoDB: Dump of the page:
        100603 19:11:46 InnoDB: Page dump in ascii and hex (16384 bytes): [lots of binary data]
        100603 19:11:46 InnoDB: Page checksum 953720272, prior-to-4.0.14-form checksum 2641912043
        InnoDB: stored checksum 617821918, prior-to-4.0.14-form stored checksum 2080617765
        InnoDB: Page lsn 115 2632899642, low 4 bytes of lsn at page end 2641594600
        InnoDB: Page number (if stored to page already) 112541,
        InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0
        InnoDB: Page may be an index page where index id is 0 616
        InnoDB: Dump of corresponding page in doublewrite buffer:
        100603 19:11:46 InnoDB: Page dump in ascii and hex (16384 bytes): [more binary data]
        100603 19:11:46 InnoDB: Page checksum 908374788, prior-to-4.0.14-form checksum 824841363
        InnoDB: stored checksum 912869634, prior-to-4.0.14-form stored checksum 2210927931
        InnoDB: Page lsn 115 2635312169, low 4 bytes of lsn at page end 2633173354
        InnoDB: Page number (if stored to page already) 112541,
        InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0
        InnoDB: Page may be an index page where index id is 0 616
        InnoDB: Also the page in the doublewrite buffer is corrupt.
        InnoDB: Cannot continue operation.
        InnoDB: You can try to recover the database with the my.cnf
        InnoDB: option:
        InnoDB: innodb_force_recovery=6
        100603 19:11:46 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

    Per the error message, I have tried setting set-variable=innodb_force_recovery=6 in the my.cnf to get to the data. This allows the MySQL server to start. But when I try to do a mysqldump of the database or a SELECT * INTO OUTFILE "filename" FROM broken_table; it seems to endlessly just export the same line over and over again. I have also tried http://code.google.com/p/innodb-tools/. But this tool fails with an error that the 'blob' type is not supported. If I try to access the data using the PHP application it crashes MySQL:

        100603 19:19:19 - mysqld got signal 11 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=8384512
        read_buffer_size=131072
        max_used_connections=2
        max_threads=151
        threads_connected=2
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338317 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        thd: 0x15d33f0
        Attempting backtrace.
        You can use the following information to find out where mysqld died. If you
        see no messages after this, something went terribly wrong...
        stack_bottom = 0x453aff00 thread_stack 0x40000
        /usr/libexec/mysqld(my_print_stacktrace+0x24) [0x874364]
        /usr/libexec/mysqld(handle_segfault+0x346) [0x5c9166]
        /lib64/libpthread.so.0 [0x3a6e40eb10]
        /usr/libexec/mysqld(rec_get_offsets_func+0x30) [0x7cc310]
        /usr/libexec/mysqld [0x7674d8]
        /usr/libexec/mysqld(btr_search_info_update_slow+0x638) [0x768d48]
        /usr/libexec/mysqld(btr_cur_search_to_nth_level+0xc7d) [0x75f86d]
        /usr/libexec/mysqld [0x7dd1c1]
        /usr/libexec/mysqld(row_search_for_mysql+0x18b0) [0x7e03d0]
        /usr/libexec/mysqld(ha_innobase::general_fetch(unsigned char*, unsigned int, unsigned int)+0x7c) [0x7526fc]
        /usr/libexec/mysqld(handler::read_multi_range_next(st_key_multi_range**)+0x29) [0x6aed09]
        /usr/libexec/mysqld(QUICK_RANGE_SELECT::get_next()+0x194) [0x69a964]
        /usr/libexec/mysqld [0x6aafe9]
        /usr/libexec/mysqld(sub_select(JOIN*, st_join_table*, bool)+0x56) [0x635196]
        /usr/libexec/mysqld [0x63f9cd]
        /usr/libexec/mysqld(JOIN::exec()+0x950) [0x6497c0]
        /usr/libexec/mysqld(mysql_select(THD*, Item*, TABLE_LIST*, unsigned int, List&, Item*, unsigned int, st_order*, st_order*, Item*, st_order*, unsigned long long, select_result*, st_select_lex_unit*, st_select_lex*)+0x17b) [0x64b34b]
        /usr/libexec/mysqld(handle_select(THD*, st_lex*, select_result*, unsigned long)+0x169) [0x64bc79]
        /usr/libexec/mysqld [0x5d34b6]
        /usr/libexec/mysqld(mysql_execute_command(THD*)+0x4e5) [0x5d6b45]
        /usr/libexec/mysqld(mysql_parse(THD*, char const*, unsigned int, char const)+0x211) [0x5dc321]
        /usr/libexec/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x10b8) [0x5dd3f8]
        /usr/libexec/mysqld(do_command(THD*)+0xe6) [0x5dd9e6]
        /usr/libexec/mysqld(handle_one_connection+0x73d) [0x5d036d]
        /lib64/libpthread.so.0 [0x3a6e40673d]
        /lib64/libc.so.6(clone+0x6d) [0x3a6d4d3d1d]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort...
        thd->query at 0x15de5e0 = SELECT DISTINCT count(DISTINCT i.itemid) as rowscount,i.hostid FROM items i WHERE ((i.itemid BETWEEN 000000000000000 AND 099999999999999)) AND i.type<9 AND (i.hostid IN (10017,10047,10050,10054,10056,10059,10062,10063,10064,10065,10066,10067,10068,10069,10070,10071,10072,10073,10074,10075,10076,10077,10078,10079,10080,10081,10082,10084,10088,10089,10090,10091,10092,10093,10094,10095,10096,10097,10098,10099,10100,10101,10102,10103,10104,10105,10106,10107,10108,10109)) GROUP BY i.hostid
        thd->thread_id=3
        thd->killed=NOT_KILLED
        The manual page at contains information that should help you find out what is causing the crash.
        100603 19:19:19 mysqld_safe Number of processes running now: 0
        100603 19:19:19 mysqld_safe mysqld restarted

    Before recovering from an older backup as a last resort, I am looking for any more suggestions. Thanks!

    Read the article

  • HP ProLiant Smart Array "lock up" code 0x11

    - by ewwhite
    I've a ProLiant DL580 G7 server that experienced a storage subsystem failure during production. The system appeared available and responded to pings, but all I/O access stalled (the system load must have been 100+). The ASR did not trigger at the specified watchdog timeout. I had to force a reboot from the ILO. During POST, I received the following error:

        A controller failure event occurred prior to this power-up. (Previous lock up code = 0x11)

    I haven't pulled the ADU report yet, but I'm curious as to what this error actually means. I was not responsible for the installation, but can see that the firmware is very old. But if there's anything else I should know about the error, I'd like to know for the post-mortem report. edit - I should add that the server had 95 days of uptime prior to the lock up.

    Read the article

  • Backing up data in an encrypted way

    - by Eli Bendersky
    I have the following use case:

    - There's some data from my PC I want to periodically back up online
    - I own some hosting, so I want to use that for the backups; I don't want to pay for another backup service
    - I want to encrypt my data locally prior to moving it to the server

    I have no problem writing scripts to automate the process (say, periodically generate the backup and upload it by FTP to my server), but my main question is about step 3 - the encryption: what is the recommended way to encrypt my files (say, collected into a .ZIP) prior to uploading them to the server? P.S. TrueCrypt seems popular but it's not quite what I'm looking for, since I don't want the files to be constantly encrypted here on my PC.
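    As one possible illustration of step 3, here is a minimal sketch of encrypting the archive locally before upload. It assumes Java with the standard javax.crypto APIs, a passphrase-derived AES key, and a hypothetical backup.zip produced by an earlier step; it is a sketch of the approach, not a recommendation of a specific tool:

        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.security.SecureRandom;
        import java.security.spec.KeySpec;
        import javax.crypto.Cipher;
        import javax.crypto.CipherOutputStream;
        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.PBEKeySpec;
        import javax.crypto.spec.SecretKeySpec;

        public class BackupEncryptor {
            // Encrypts backup.zip to backup.zip.enc with AES-128-CBC; the salt and IV
            // are written at the start of the output so the file can be decrypted later.
            public static void main(String[] args) throws Exception {
                // Assumes the script runs from a terminal; System.console() is null otherwise.
                char[] passphrase = System.console().readPassword("Passphrase: ");

                SecureRandom random = new SecureRandom();
                byte[] salt = new byte[16];
                byte[] iv = new byte[16];
                random.nextBytes(salt);
                random.nextBytes(iv);

                // Derive an AES key from the passphrase using PBKDF2
                KeySpec spec = new PBEKeySpec(passphrase, salt, 65536, 128);
                SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
                SecretKeySpec key = new SecretKeySpec(factory.generateSecret(spec).getEncoded(), "AES");

                Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
                cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));

                try (FileInputStream in = new FileInputStream("backup.zip");
                     FileOutputStream out = new FileOutputStream("backup.zip.enc");
                     CipherOutputStream encrypted = new CipherOutputStream(out, cipher)) {
                    // Plaintext header: salt and IV, needed again at decryption time
                    out.write(salt);
                    out.write(iv);
                    byte[] buffer = new byte[8192];
                    int n;
                    while ((n = in.read(buffer)) != -1) {
                        encrypted.write(buffer, 0, n);
                    }
                }
                // backup.zip.enc is now safe to push to the hosting server by FTP/SFTP.
            }
        }

    Decryption reverses the process: read the salt and IV back from the start of the file, derive the same key, and run the cipher in DECRYPT_MODE.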

    Read the article

  • T60 Screen/LCD gets black after some minutes with a highpitched sound rising and fading

    - by edelwater
    Just now my T60 screen got "black" (so no display). On my second monitor: no problems, so the VGA output works.

    Symptom: Screen blanks / no display, but it works on the second monitor.

    Steps to reproduce:
    - boot
    - wait (it does not matter what you do, you do not have to log in or anything)
    - (now the monitor of the laptop slowly begins to make a ssssssssHHHHHHHHHHHHHHHHHWOEOEssssssss noise of about 10 seconds)
    - right after the sound ends, the monitor gets black. It seems to be the same each time.

    Software: Installed no new software before/after, running ZoneAlarm and antivirus.
    Other: It does not feel hot in any place, and there don't seem to be any running processes with strange behaviour.
    Warranty: Out of warranty.
    What was I doing: Typing text on a website and doing some PHP coding in a text editor.

    What can I do here other than buy a new laptop? Does it sound familiar to known cases?

    Update 1: Exactly the same problem: http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-Screen-Blackout/m-p/288772 and the second poster (garyj), http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/Black-Screen-on-T60/m-p/235053#M48627 And here: "I have that same problem. I replaced the CCRL on mine and it works fine when the screen is not screwed in. Once the frame of the LCD screen (metal portion) touches the metal on the laptop which holds the screen the screen goes black. If the metal is touching the screen when you boot up it boots up with it being very dimmly lit." from http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-screen-problems/m-p/205047#M44995 (it seems replacing the LCD display is no use, he tried it three times). Same problem: http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-black-screen/m-p/80604#M25914 Hmmm... not handy. 3 or 4 months ago I ordered and installed a new fan; now the LCD. But this does not seem to be a core LCD issue so much as some electrical issue, so it seems replacing the LCD is not the thing to do here. If it is not the LCD that needs to be replaced (see other threads), which parts can I order to fix this? Is there any information which could lead me to identify the issue? I have read about replacing the "inverter" AND the "backlight" - would that make sense?

    Update 2: I replaced the inverter with another inverter, but I have the same problem. I DID notice that the inverter is the component that makes the sssssssssssssHHHHHHHHHH sound AND that it becomes very hot in a few seconds (so both the old and the test one). The problem is: hmmm, what is the thing that makes the inverter hot, after which (by assumption) it shuts itself down? Is it the input or the output? It does not seem to be the output, because the screen seems to function, so it must be the electricity coming in. But what causes it to become so hot - would it be the VGA card outputting some unusually high voltage? That seems unlikely. I am looking for the component to order / replace.

    Update 3: Great news. Ewendish gave me the hint to look in the BIOS. While I was in the BIOS I noticed that the screen did not switch off and there was no high pitched sound. So I lowered some settings in the BIOS. I then noticed that with brightness turned to 0 (via Fn+End), it does not make a high pitched sound and does not turn off; with brightness turned up just three "stripes" it starts making the sound. So I could from now on work under the lowest brightness mode or... see where the problem lies.
    So, as stated below, it is probably either power management or display drivers / ATI Catalyst settings / Windows display settings. I'm trying to see where it lies, but I will google some first.

    Update 4: I wiped the Windows XP installation clean and installed Windows 7 on it. Unfortunately the problem remains: as soon as the brightness goes up the screen starts hissing. This means... back to the original thought: it probably IS a hardware problem. Although... again... if it is NOT the inverter, what is it? Could it be the backlight component? I could try to swap that with another T60... but this is quite tricky.

    Read the article

  • Can't write to NTFS formatted drives

    - by mloman
    I'm not sure what has happened, but I've suddenly lost write access to all of my NTFS external drives. I installed a few games and apps from the Software Center, and now I can't make new folders or copy and paste files to anything that is NTFS. Everything is now read-only, and I've tried so many things to fix it, but it seems hopeless. Just to check that it wasn't the drives themselves, I made a little NTFS-formatted TrueCrypt volume and a FAT-formatted volume. And yes, it seems that Ubuntu is blocking me from writing anything to NTFS. What happened here? What's a way I can simply get write access to my NTFS drives, so I can just back up all my stuff? I'll probably reinstall Ubuntu. Please help. UPDATE (and thanks everyone for their quick replies): The problem has been solved. Prior to noticing that I had lost NTFS write permission, I had installed GParted from the Software Center, and there was an extension called ntfsprogs that came with it. During my search for a solution I uninstalled GParted (as it was one of the apps I installed just before the problem), but that did not solve it. I then came across an app called 'NTFS Configuration Tool'. When I installed it, it said that the ntfsprogs package needed to be removed (so I guess uninstalling GParted didn't remove ntfsprogs). I launched the NTFS Configuration Tool and now I have write access to NTFS drives. Unfortunately, I didn't check whether I had write permission before launching the NTFS Configuration Tool, so I'm not sure whether the NTFS Configuration Tool or the removal of ntfsprogs gave me back NTFS write permission. Hopefully if another newbie encounters this problem, they'll come across this page and know what to do.
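    If a drive has silently dropped to read-only, a quick way to check how it is mounted and remount it read-write with the ntfs-3g driver looks roughly like this (a minimal sketch; /dev/sdb1 and /media/backup are hypothetical device and mount-point names - substitute the ones shown by the mount output):
    $ mount | grep -i ntfs                            # see which driver and options each NTFS drive is using
    $ sudo umount /media/backup                       # unmount the affected drive
    $ sudo ntfsfix /dev/sdb1                          # clear a dirty/hibernation flag left behind by Windows, if any
    $ sudo mount -t ntfs-3g /dev/sdb1 /media/backup   # remount explicitly with the read-write ntfs-3g driver
    If Windows left the volume marked dirty or hibernated, ntfs-3g may only offer it read-only, so the ntfsfix step (or a clean shutdown of Windows) is often what actually restores write access.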

    Read the article

  • New WebLogic Server 12.1.2 Installation and Patching Technology By Monica Riccelli

    - by JuergenKress
    WebLogic Server 12.1.2 has many new features, but the first new feature you are likely to notice is the change in installer technology. WebLogic Server and Coherence 12.1.2 are installed using Oracle Universal Installer (OUI) installer technology. We have also changed WebLogic Server patching technology from SmartUpdate to OPatch, the patching tool used to patch OUI installations. Note that the installation and patching technology used for prior versions of WebLogic Server has not changed. The primary motivation for this change is to provide consistency across the Oracle stack. Prior to WebLogic Server 12.1.2, Fusion Middleware customers were required to use different technologies to install and patch, for example, Oracle Application Development Framework (ADF) with WebLogic Server. Now users can perform installation and patching across products more efficiently by using the same technologies, and by using new installation packages that simplify installation of Fusion Middleware products with WebLogic Server. Check the YouTube video that describes how to install WebLogic Server 12.1.2 using the OUI installer. The following WebLogic Server distributions are now available on the Oracle Technology Network (OTN) under the OTN license, and from Oracle Software Delivery Cloud (OSDC) for licensed customers: wls_121200.jar - This OUI installer package includes WebLogic Server and Coherence and is targeted at WebLogic Server users who do not require other Fusion Middleware components such as ADF. This generic installer can be used to install WebLogic Server and Coherence on any supported operating system, and is intended for development or production purposes. It is available on OTN and OSDC. Read the complete article here. WebLogic Partner Community For regular information, become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Monica Riccelli,WebLogic 12c,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress
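    For orientation, launching the generic OUI installer and then applying a one-off patch with OPatch look roughly like this (a minimal sketch; the middleware home path and patch number 12345678 are hypothetical placeholders, not values from the article):
    $ java -jar wls_121200.jar                        # start the OUI installer for WebLogic Server and Coherence
    $ export ORACLE_HOME=/u01/oracle/middleware       # the Oracle home chosen during installation (assumed path)
    $ cd /tmp/patches/12345678                        # directory of an unzipped patch (hypothetical patch number)
    $ $ORACLE_HOME/OPatch/opatch apply                # apply the patch to the OUI-registered home
    $ $ORACLE_HOME/OPatch/opatch lsinventory          # list the patches recorded in the inventory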

    Read the article

  • Screen Resolution stuck at 640x480 after installing Bumblebee

    - by Saurabh Agarwal
    I have a Dell XPS 15z laptop. As you can see here, there are some issues with NVidia drivers. The site recommends installing Bumblebee (instructions given in the link). I am posting them again for ease: $ sudo add-apt-repository ppa:bumblebee/stable $ sudo apt-get update && sudo apt-get upgrade $ sudo apt-get install bumblebee bumblebee-nvidia $ sudo usermod -a -G bumblebee $USER After restarting the computer, however, the screen resolution was stuck at 640x480 and I got the following error message as soon as I logged in: Could not apply the stored configuration for monitors - none of the selected modes were compatible with the possible modes: Trying modes for CRTC 63 CRTC 63: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 0) CRTC 63: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 1) Trying modes for CRTC 64 CRTC 64: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 0) CRTC 64: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 1) Prior to the update the display was absolutely normal, so there is no doubt about the cause - albeit there was no support for the graphics drivers before. In case it helps, some features of the graphics drivers seem to be functional after Bumblebee, i.e. everything is in order except the resolution. And if the resolution can't be fixed, please suggest a way to revert the changes so that at least the prior state can be restored. Any help in the matter would be highly appreciated.
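    When the desktop is stuck at 640x480, a common workaround is to add the panel's native mode back by hand with cvt and xrandr (a minimal sketch under assumptions: LVDS1 is a guessed output name, and the modeline numbers are simply what cvt typically prints for a 1366x768 panel - cvt rounds the width to 1368 - so run the first two commands yourself and paste your own values):
    $ xrandr                                   # note the laptop panel's output name (e.g. LVDS1) and its known modes
    $ cvt 1366 768 60                          # print a modeline for the native resolution
    $ xrandr --newmode "1368x768_60.00" 85.25 1368 1440 1576 1784 768 771 781 798 -hsync +vsync
    $ xrandr --addmode LVDS1 "1368x768_60.00"        # attach the new mode to the panel
    $ xrandr --output LVDS1 --mode "1368x768_60.00"  # switch to it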

    Read the article

  • Unity isn't starting on 13.10 (with Cinnamon 2.0 installed)

    - by Sam Pearman
    Since upgrading to 13.10, I can't log in to the Unity desktop. LightDM works correctly, but attempting to log in tries to start the session and then drops back to LightDM. I've already dropped to a terminal (Ctrl+Alt+F2) and done this: sudo apt-get update sudo apt-get install --reinstall ubuntu-desktop sudo apt-get install unity Logging in to a guest session also fails. Logging in to other window managers works with varying degrees of success. Note: I have Cinnamon 2.0 installed from a PPA. I'm using a two-monitor setup. Also of note: in the session prior to my upgrade to 13.10, the Unity background failed to display at all, instead showing whatever was left in the screen buffer from the previous frame. The entire OS worked correctly otherwise, so I just ignored it for that session. No other upgrades or even updates were done prior to this occurring. My upgrade path to 13.10 was basically this: install 13.04 alongside Windows 7, use Ubuntu as a glorified web browser for a while, get updates (in preparation for 13.10), install 13.10. I also used Unity Tweak Tool to change some aspects of Unity, particularly auto-hide. Any help or ideas would be appreciated, as I'm typing this on my phone :(
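    A common recovery step when Unity refuses to start after an upgrade is to reset the Compiz/Unity configuration from a TTY and restart the display manager (a minimal sketch; note that this throws away Unity/Compiz customisations, including those made with Unity Tweak Tool):
    $ dconf reset -f /org/compiz/         # reset all Compiz/Unity plugin settings to their defaults
    $ setsid unity                        # try to (re)start Unity in the current session
    $ sudo service lightdm restart        # or restart LightDM and log in again
    If a PPA desktop such as Cinnamon pulled in conflicting packages, ppa-purge (sudo ppa-purge ppa:owner/name, with the actual PPA name) is the usual way to roll those packages back to the stock Ubuntu versions.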

    Read the article

  • Oracle JDK 7u10 released with new security features

    - by Henrik Stahl
    A few days ago, we released JRE and JDK 7 update 10. This release adds support for the following new platforms: Windows 8 on x86-64 (note that Modern UI, aka Metro, mode is not supported); Internet Explorer 10 on Windows 8; Mac OS X 10.8 (Mountain Lion). This release also introduces new features that provide enhanced security for Java applet and webstart applications, specifically: The Java runtime tracks whether it is updated to the latest security baseline. If you try to execute an unsigned applet with an outdated version of Java, a warning dialog will prompt you to update before running the applet. The Java runtime includes a hard-coded "best before" date. It is assumed that a new version will be released before this date. If the client has not been able to check for an update prior to this date, the Java runtime will assume that it is insecure and start warning the user prior to executing any applets. The Java control panel now includes an option to set the desired security level on a low-medium-high-very high scale, as well as an option to disable Java applets and webstart entirely. This level controls things such as whether the Java runtime is allowed to execute unsigned code, and if so what type of warning will be displayed to the user. More details on the security settings can be found in the documentation. See below for a sample screenshot. The new update of the JRE and the JDK is available via OTN. To learn more about the release please visit the release notes.
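    The same security level can also be set without the control panel by adding a property to the user-level deployment configuration; a minimal sketch, assuming the standard per-user file location on Linux and the documented LOW/MEDIUM/HIGH/VERY_HIGH values (verify both against the release documentation):
    $ mkdir -p ~/.java/deployment
    $ cat >> ~/.java/deployment/deployment.properties <<'EOF'
    # raise the applet / Web Start security level
    deployment.security.level=VERY_HIGH
    EOF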

    Read the article

  • What are the legal considerations when forking a BSD-licensed project?

    - by Thomas Owens
    I'm interested in forking a project released under a two-clause BSD license: Copyright (c) 2010 {copyright holder} All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: (1) Redistributions of source code must retain the above copyright notice, this list of conditions and the disclaimer at the end. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. (2) Neither the name of {copyright holder} nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. DISCLAIMER THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. I've never forked a project before, but this project is very similar to something that I need/want. However, I'm not sure how far I'll get, so my plan is to pull the latest from their repository and start working. Maybe, eventually, I'll get it to where I want it, and be able to release it. Is this the right approach? How, exactly, does this impact forking of the project? How do I track who owns what components or sections (what's copyright me, what's copyright the original creators, once I start stomping over their code base)? Can I fork this project? What must I do prior to releasing, and when/if I decide to release the software derived from this BSD-licensed work?

    Read the article

  • Oracle VM Virtualbox 4.0 extension packs

    - by wim.coekaerts
    Some people have asked what this new extension pack in Oracle VM VirtualBox 4.0 is and how it differs from 3.2 and earlier releases. The extension pack is a restructuring of how Oracle VM VirtualBox is installed. Please take a look at http://virtualbox.org and read up on what the product install looked like prior to 4.0; you'll see the following: there were two versions to download: - Oracle VM VirtualBox OSE (open source edition) - a download of the source tarball under a GPL license; a compile was needed to run it. - Oracle VM VirtualBox PUEL (personal use/eval license) - a download of an installable binary with a number of additional non-GPL-licensed drivers (USB 2.0, SATA, PXE boot for e1000, the VRDP server, etc.) all built into the install. This contained the OSE edition plus the additional drivers, with an installer. Customers could purchase an enterprise software license for the latter version. To make it easier to build and release additional drivers, they have been separated out and are now installed through an "extension pack", starting with Oracle VirtualBox version 4. This extension pack is still licensed the same way as in every prior version, via a PUEL license or with the ability to purchase a commercial license. It is now also possible for other companies or users that want to add extensions to do so by creating a similar extension pack - and there's no need to do a new release of the entire product to do so. So it's a more flexible structure for installing VirtualBox and drivers, and it allows for more modular additions. The source code of Oracle VM VirtualBox is, of course, still available for 4.0 just as it was for 3.x. As with 3.x, it is not available for the additional drivers, which are now in the extension pack.
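    Installing, listing, and removing extension packs is handled by VBoxManage; roughly (a minimal sketch; the exact .vbox-extpack file name depends on the version you downloaded):
    $ VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.0.4.vbox-extpack
    $ VBoxManage list extpacks                                              # confirm the pack is registered
    $ VBoxManage extpack uninstall "Oracle VM VirtualBox Extension Pack"    # remove it again if needed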

    Read the article

  • Migrating a database from 32bit to 64bit

    - by Mike Dietrich
    Database migrations from a 32bit environment to a 64bit environment keeping the same platform architecture (e.g. moving an Oracle 10.2.0.5 database from MS Windows XP 32bit to MS Windows Server 2003 64bit) do not happen that often anymore, but we still see them getting done, and there are a few things to note when doing such a move. First of all the important question is: will you upgrade your database as part of this move - yes or no? If you say "Yes" then you are almost done with this topic, as we will take care of the bitness change during the upgrade. The only thing you have to take care of is OLAP, in case you are using the OLAP Option with Analytic Workspaces (AWs) yourself. Those store data in binary LOBs - and in order to move AWs from 32bit to 64bit you have to export your AWs prior to the move - and import them later on. People who don't use OLAP don't have to worry about this. But if you say "No" (meaning no upgrade actions are involved - you keep your database version) then you have to make sure to invalidate all packages and stored code in the database before you shut down your database in the 32bit environment, prior to moving it over. And the same rule as above applies for OLAP once you use the OLAP Option. In the source environment:
    startup upgrade;    -- [or startup migrate; for Oracle 9i]
    @?/rdbms/admin/utlirp.sql
    shutdown immediate
    In the destination environment:
    startup upgrade
    @?/olap/admin/xumuts.plb    -- only if the OLAP Option is installed
    @?/rdbms/admin/utlrp.sql
    The script utlirp.sql will invalidate all packages and stored code, utlrp.sql will recompile them - and xumuts.plb will rebuild the OLAP Analytic Workspaces in case you have the OLAP Option installed.
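    For reference, the source-side steps could be driven from a shell as a single SQL*Plus session (a minimal sketch; it assumes a local SYSDBA connection and the standard script location under the Oracle home referenced by ?):
    $ sqlplus / as sysdba <<'EOF'
    startup upgrade
    @?/rdbms/admin/utlirp.sql
    shutdown immediate
    EOF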

    Read the article

< Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >