Search Results

Search found 1513 results on 61 pages for 'unexpected'.


  • Thinktecture.IdentityModel: Comparing Strings without leaking Timing Information

    - by Your DisplayName here!
    Paul Hill commented on a recent post where I was comparing HMACSHA256 signatures. In a nutshell, his complaint was that I am leaking timing information while doing so – or in other words, my code returned faster with wrong (or partially wrong) signatures than with the correct signature. This can potentially be used for timing attacks like this one. I think he has a point here, especially in the era of cloud computing where you can potentially run attack code on the same physical machine as your target to do high resolution timing analysis (see here for an example). It turns out that it is not that easy to write a time-constant string comparer, due to all sorts of (unexpected) clever optimization mechanisms in the CLR. With the help and feedback of Paul and Shawn I came up with this:

    - Structure the code in a way that the CLR will not try to optimize it.
    - In addition, turn off optimization (just in case a future version comes up with new optimization methods).
    - Add a random sleep when the comparison fails (using Shawn's and Stephen's nice Random wrapper for RNGCryptoServiceProvider).

    You can find the full code in the Thinktecture.IdentityModel download.

    [MethodImpl(MethodImplOptions.NoOptimization)]
    public static bool IsEqual(string s1, string s2)
    {
        if (s1 == null && s2 == null)
        {
            return true;
        }

        if (s1 == null || s2 == null)
        {
            return false;
        }

        if (s1.Length != s2.Length)
        {
            return false;
        }

        var s1chars = s1.ToCharArray();
        var s2chars = s2.ToCharArray();

        int hits = 0;
        for (int i = 0; i < s1.Length; i++)
        {
            if (s1chars[i].Equals(s2chars[i]))
            {
                hits += 2;
            }
            else
            {
                hits += 1;
            }
        }

        bool same = (hits == s1.Length * 2);

        if (!same)
        {
            var rnd = new CryptoRandom();
            Thread.Sleep(rnd.Next(0, 10));
        }

        return same;
    }

    Read the article

  • Running Multiple WebLogic and OSB Domains

    - by jeff.x.davies
    I have any number of OSB domains created on my machine at any point in time. For example, I have different domains for different versions of Oracle Service Bus and Oracle SOA Suite. I also have different domains for different purposes. I have a demo domain and another domain for the projects in my blog. Starting with OSB 11g and the Apache Derby server, there is a small "gotcha" if you want to create multiple domains on a development machine. When you create a new domain for OSB 11g, it will use the same database info for all databases, and this will cause an error when starting the admin server of the second domain (the first domain doesn't have to be running for this error to occur). Here is an example of the error message in the server console:

    ####<Mar 8, 2011 2:58:48 PM PST> <Critical> <JTA> <jeff-laptop> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1299625128464> <BEA-110482> <A logging last resource failed during initialization. The server cannot boot unless all configured logging last resources (LLRs) initialize. Failing reason: weblogic.transaction.loggingresource.LoggingResourceException: java.sql.SQLException: JDBC LLR, table verify failed for table 'WL_LLR_ADMINSERVER', row 'JDBC LLR Domain//Server' record had unexpected value 'osb11gR1PS3//AdminServer' expected 'OSBCIM//AdminServer'*** ONLY the original domain and server that creates an LLR table may access it ***

    The solution is to create a database instance for each of your domains, and this is very simple to do. After you create a domain using the Configuration Wizard, locate the wlsbjmsrpDataSource-jdbc.xml file that is found under the DOMAIN_HOME/config/jdbc directory. Near the top of the file you will see the following entry:

    <url>jdbc:derby://localhost:1527/osbexamples;create=true;ServerName=localhost;databaseName=osbexamples</url>

    You need to modify this entry with a different and unique database name. The easiest way to do this is to substitute the name of your domain. For example:

    <url>jdbc:derby://localhost:1527/mydomain;create=true;ServerName=localhost;databaseName=mydomain</url>

    will create a database named mydomain. Now, when you restart the admin server for the domain, it will create the new database for you. Do this for each domain you create on your development machine and you'll have no troubles. The process is even simpler if you set the name while creating the domain with the Configuration Wizard. Simply name the database when you get to the Configure JDBC Component Schema step of the Configuration Wizard: select the OSB JMS Reporting Provider and set the name in the DBMS/Service field to whatever name you like, as shown in Figure 1 below.

    Figure 1 – Configuring the JDBC Component Schema

    That is all there is to it. Now you can create as many domains on your laptop or development machine as you like and not have to worry about them conflicting with each other.

    Read the article

  • Problem with script substitution when running script

    - by tucaz
    Hi! I'm new to Linux so this probably should be an easy fix, but I cannot see it. I have a script downloaded from official sources that is used to install additional tools for fsharp, but it gives me a syntax error when running it. I tried to replace ( and ) with { and }, but eventually that led me to another error, so I think this is not the problem, since the script works for everybody else. I read some articles that say that my bash version may not be the right one. I'm using Ubuntu 10.10 and here is the error:

    install-bonus.sh: 28: Syntax error: "(" unexpected (expecting "}")

    And these are lines 27, 28 and 29:

    {
    declare -a DIRS=("${!3}")
    FILE=$2

    And the full script:

    #! /bin/sh -e

    PREFIX=/usr
    BIN=$PREFIX/bin
    MAN=$PREFIX/share/man/man1/

    die() {
      echo "$1" >&2
      echo "Installation aborted." >&2
      exit 1
    }

    echo "This script will install additional material for F# including"
    echo "man pages, fsharpc and fsharpi scripts and Gtk# support for F#"
    echo "Interactive (root access needed)"
    echo ""

    # ------------------------------------------------------------------------------
    # Utility function that searches specified directories for a specified file
    # and if the file is not found, it asks user to provide a directory

    RESULT=""
    searchpaths() {
      declare -a DIRS=("${!3}")
      FILE=$2
      DIR=${DIRS[0]}
      for TRYDIR in ${DIRS[@]}
      do
        if [ -f $TRYDIR/$FILE ]
        then
          DIR=$TRYDIR
        fi
      done
      while [ ! -f $DIR/$FILE ]
      do
        echo "File '$FILE' was not found in any of ${DIRS[@]}. Please enter $1 installation directory:"
        read DIR
      done
      RESULT=$DIR
    }

    # ------------------------------------------------------------------------------
    # Locate F# installation directory - this is needed, because we want to
    # add environment variable with it, generate 'fsharpc' and 'fsharpi' and also
    # copy load-gtk.fsx to that directory
    # ------------------------------------------------------------------------------

    PATHS=( $1 /usr/lib/fsharp /usr/lib/shared/fsharp )
    searchpaths "F# installation" FSharp.Core.dll PATHS[@]
    FSHARPDIR=$RESULT
    echo "Successfully found F# installation directory."

    # ------------------------------------------------------------------------------
    # Check that we have everything we need
    # ------------------------------------------------------------------------------

    [ $(id -u) -eq 0 ] || die "Please run the script as root."
    which mono > /dev/null || die "mono not found in PATH."

    # ------------------------------------------------------------------------------
    # Make sure that all additional assemblies are in GAC
    # ------------------------------------------------------------------------------

    echo "Installing additional F# assemblies to the GAC"
    gacutil -i $FSHARPDIR/FSharp.Build.dll
    gacutil -i $FSHARPDIR/FSharp.Compiler.dll
    gacutil -i $FSHARPDIR/FSharp.Compiler.Interactive.Settings.dll
    gacutil -i $FSHARPDIR/FSharp.Compiler.Server.Shared.dll

    # ------------------------------------------------------------------------------
    # Install additional files
    # ------------------------------------------------------------------------------

    # Install man pages
    echo "Installing additional F# commands, scripts and man pages"
    mkdir -p $MAN
    cp *.1 $MAN

    # Export the FSHARP_COMPILER_BIN environment variable
    if [[ ! "$OSTYPE" =~ "darwin" ]]; then
      echo "export FSHARP_COMPILER_BIN=$FSHARPDIR" > fsharp.sh
      mv fsharp.sh /etc/profile.d/
    fi

    # Generate 'load-gtk.fsx' script for F# Interactive (ask user if we cannot find binaries)
    PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/gtk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
    searchpaths "Gtk#" gtk-sharp.dll PATHS[@]
    GTKDIR=$RESULT
    echo "Successfully found Gtk# root directory."

    PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/glib-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
    searchpaths "Glib" glib-sharp.dll PATHS[@]
    GLIBDIR=$RESULT
    echo "Successfully found Glib# root directory."

    PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/atk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
    searchpaths "Atk#" atk-sharp.dll PATHS[@]
    ATKDIR=$RESULT
    echo "Successfully found Atk# root directory."

    PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/gdk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
    searchpaths "Gdk#" gdk-sharp.dll PATHS[@]
    GDKDIR=$RESULT
    echo "Successfully found Gdk# root directory."

    cp bonus/load-gtk.fsx load-gtk1.fsx
    sed "s,INSERTGTKPATH,$GTKDIR,g" load-gtk1.fsx > load-gtk2.fsx
    sed "s,INSERTGDKPATH,$GDKDIR,g" load-gtk2.fsx > load-gtk3.fsx
    sed "s,INSERTATKPATH,$ATKDIR,g" load-gtk3.fsx > load-gtk4.fsx
    sed "s,INSERTGLIBPATH,$GLIBDIR,g" load-gtk4.fsx > load-gtk.fsx
    rm load-gtk1.fsx
    rm load-gtk2.fsx
    rm load-gtk3.fsx
    rm load-gtk4.fsx
    mv load-gtk.fsx $FSHARPDIR/load-gtk.fsx

    # Generate 'fsharpc' and 'fsharpi' scripts (using the F# path)
    # 'fsharpi' automatically searches F# root directory (e.g. load-gtk)
    echo "#!/bin/sh" > fsharpc
    echo "exec mono $FSHARPDIR/fsc.exe --resident \"\$@\"" >> fsharpc
    chmod 755 fsharpc

    echo "#!/bin/sh" > fsharpi
    echo "exec mono $FSHARPDIR/fsi.exe -I:\"$FSHARPDIR\" \"\$@\"" >> fsharpi
    chmod 755 fsharpi

    mv fsharpc $BIN/fsharpc
    mv fsharpi $BIN/fsharpi

    Thanks a lot!

    Read the article

  • Unit Testing Framework for XQuery

    - by Knut Vatsendvik
    This posting provides a unit testing framework for XQuery using Oracle Service Bus. It allows you to write a test case to run your XQuery transformations in an automated fashion. When the test case is run, the framework returns any differences found in the response. The complete code sample with install instructions can be downloaded from here.

    Writing a Unit Test

    You start a new Test Case by creating a Proxy Service from Workshop, which comes with Oracle Service Bus. In the General Configuration page, select the Service Type to be Messaging Service. In the Message Type Configuration page, link both the Request & Response Message Type to the TestCase element of the UnitTest.xsd schema.

    The TestCase element consists of the following child elements:

    - The ID and optional Name elements are simply used for reference.
    - The Transformation element is the XQuery resource to be executed.
    - The Input elements represent the input to run the XQuery with.
    - The Output element represents the expected output. These XML documents are "also" represented as an XQuery resource, where the XQuery function takes no arguments and returns the XML document. Why not pass the test data with the TestCase? Passing an XML structure in another XML structure is not very easy, or at least not very human readable. Therefore it was chosen to represent the test data as a loadable resource in the OSB. However, you are free to go ahead with another approach if you want.
    - The XMLDiff element holds any differences found.

    A sample input is shown here.

    Modeling the Message Flow

    The next step is to model the message flow of the Proxy Service. In the Request Pipeline, create a stage node that loads the test case input data. For this, specify a dynamic XQuery expression that evaluates at runtime to the name of a pre-registered XQuery resource. The expression is, of course, set by the input data from the test case.

    Add a Run stage node. Assign the result of the XQuery that is to be run to a context variable. Define a mapping for each of the input variables added in the previous stage.

    Add a Compare stage. As with the input data, load the expected output data. Do a compare using the XMLDiff XQuery provided, where the first argument is the loaded output test data and the second argument is the result from the Run stage. Any differences found are written back into the test case XMLDiff element. In case of any unexpected failure while processing, add an Error Handler to the Pipeline to capture the fault. To pass back the result, add the following Insert action in the Response Pipeline. A sample output is shown here.
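    To make the shape of a test case concrete, here is a rough sketch (mine, not part of the original posting) of what a TestCase document built from those elements might look like; the resource paths and values are invented:

    <TestCase>
        <ID>001</ID>
        <Name>CustomerToAccount transformation</Name>
        <Transformation>UnitTest/transformations/CustomerToAccount</Transformation>
        <Input>UnitTest/testdata/CustomerInput</Input>
        <Output>UnitTest/testdata/ExpectedAccount</Output>
        <XMLDiff/>
    </TestCase>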

    Read the article

  • Performing an upgrade from TFS 2008 to TFS 2010

    - by Enrique Lima
    I recently had to go through the process of migrating a TFS 2008 SP1 environment to TFS 2010. I will go into the details of the tasks that I went through, but first I want to explain why I define it as a migration and not an upgrade. When this environment was set up, based on support and limitations for TFS 2008, we used a 32-bit platform for the TFS Application Tier and Build Servers. The Data Tier, since we were installing SP1 for TFS 2008, was done as a 64-bit installation. We knew at that point that TFS 2010 was in the picture, so that served as further motivation to make that a 64-bit install of SQL Server. The SQL Server at that point was a single-instance (Default) installation too. We had a pretty good strategy in place for backups of the databases supporting the environment (and this made the migration so much smoother), so we were pretty familiar with the databases and the purpose they serve. I am sure many of you who have gone through a TFS 2008 installation have encountered challenges and trials, and likely even more so if you, like me, needed to configure your deployment for SSL. So, frankly, I was a little concerned about the process of migrating. They say practice makes perfect, and this environment I worked on is in some way my brain child, so I was neither ready nor willing for this to be a failure or something that would impact my client's work. Prior to going through the migration process, we did the install of the new environment. The Data Tier was the same, with a new Named instance in place to host the 2010 install. The Application Tier was in place too, and we did the DefaultCollection configuration to test and validate that all components were in place as they should be. Anyway, on to the tasks for the migration (thanks to Martin Hinselwood for his very thorough documentation):

    - Close access to TFS 2008; you want to make sure all code is checked in and ready to go. We allowed 8 hours between the code lock and the start of the migration to give time for any unexpected delays. How do we close access? Stop IIS.
    - Back up your databases. Which ones? TfsActivityLogging, TfsBuild, TfsIntegration, TfsVersionControl, TfsWorkItemTracking, TfsWorkItemTrackingAttachments.
    - Restore the databases to the new Named Instance (make sure you keep the same names).
    - Now comes the fun part: the actual import/migration of the databases. A couple of things happen here. The TfsIntegration database will be scanned, and the other databases will be checked to validate that they exist. Those databases will go through a process of data being extracted and transferred to the TfsVersionControl database, which is then renamed to Tfs_<Collection>.

    You will be using a tool called tfsconfig with the import option. This tool is located in the TFS 2010 installation path (C:\Program Files\Microsoft Team Foundation Server 2010\Tools), and the command to use is as follows:

    tfsconfig import /sqlinstance:<instance> /collectionName:<name> /confirmed

    Where <instance> is the SQL Server instance where you restored the databases, and <name> is the name you will give the collection. As for /confirmed, it means you have done a backup of the databases. Why? Remember that you are going to merge the databases you restored when you execute the tfsconfig import command. The process will go through about 200 tasks. Once it completes, go to the Team Foundation Server Administration Console and validate your imported databases and their contents.
We’ll keep this manageable, so the next post is about how to complete that implementation with the SSL configuration.

    Read the article

  • Subterranean IL: Custom modifiers

    - by Simon Cooper
    In IL, volatile is an instruction prefix used to set a memory barrier at that instruction. However, in C#, volatile is applied to a field to indicate that all accesses on that field should be prefixed with volatile. As I mentioned in my previous post, this means that the field definition needs to store this information somehow, as such a field could be accessed from another assembly. However, IL does not have a concept of a 'volatile field'. How is this information stored?

    Attributes

    The standard way of solving this is to apply a VolatileAttribute or similar to the field; this extra metadata notifies the C# compiler that all loads and stores to that field should use the volatile prefix. However, there is a problem with this approach, namely, the .NET C++ compiler. C++ allows methods to be overloaded using qualifiers, like volatile or const, on the parameters; this is perfectly legal C++:

    public ref class VolatileMethods {
        void Method(int *i) {}
        void Method(volatile int *i) {}
    };

    If volatile were specified using a custom attribute, then the VolatileMethods class wouldn't be compilable to IL, as there would be nothing to differentiate the two methods from each other. This is where custom modifiers come in.

    Custom modifiers

    Custom modifiers are similar to custom attributes, but instead of being applied to an IL element separately from its declaration, they are embedded within the field or parameter's type signature itself. The VolatileMethods class would be compiled to the following IL:

    .class public VolatileMethods {
        .method public instance void Method(int32* i) {}
        .method public instance void Method(
            int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)* i) {}
    }

    The modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile) is the custom modifier. This adds a TypeDef or TypeRef token to the signature of the field or parameter, and even though custom modifiers are mostly ignored by the CLR when it's executing the program, this allows methods and fields to be overloaded in ways that wouldn't be allowed using attributes. Because the modifiers are part of the signature, they need to be fully specified when calling such a method in IL:

    call instance void Method(
        int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)*)

    There are two ways of applying modifiers: modreq specifies required modifiers (like IsVolatile), and modopt specifies optional modifiers that can be ignored by compilers (like IsLong or IsConst). The types specified as modifier arguments are simple placeholders; if you have a look at the definitions of IsVolatile and IsLong, they are completely empty. They exist solely to be referenced by a modifier. Custom modifiers are used extensively by the C++ compiler to specify concepts that aren't expressible in IL, but still need to be taken into account when calling method overloads.

    C++ and C#

    That's all very well and good, but how does this affect C#? Well, the C++ compiler uses modreq(IsVolatile) to specify volatility on both method parameters and fields, as it would be slightly odd to have the same concept represented using a modifier or attribute depending on what it was applied to. Once you've compiled your C++ project, it can then be referenced and used from C#, so the C# compiler has to recognise the modreq(IsVolatile) custom modifier applied to fields, and vice versa.
    So, even though you can't overload fields or parameters with volatile using C#, volatile needs to be expressed using a custom modifier rather than an attribute to guarantee correct interoperability and behaviour with any C++ dlls that happen to come along. Next up: a closer look at attributes, and how certain attributes compile in unexpected ways.

    Read the article

  • AppKata - Enter the next level of programming exercises

    - by Ralf Westphal
    Doing CodeKatas is all the rage lately. That's great, since widely accepted exercises are important to further the art. They provide a means of communication across platforms and allow results to be compared, which is part of any deliberate practice. But CodeKatas suffer from their size. They are intentionally small, so they can be done again and again. Repetition helps to build habit and to dig deeper. Over time, ever new nuances of the problem or one's approach become visible. On the other hand, though, their small size limits the methods, techniques, and technologies that can be applied. To improve your TDD skills, doing CodeKatas might be enough. But what about other skills? Developing software in a team, designing larger pieces of software, iteratively releasing software… all this and more is kinda hard to train using the tiny CodeKata problems. That's why I'd like to present here another kind of kata I call Application Kata (or just AppKata). AppKatas are larger programming problems. They require the development of "whole" applications, i.e. not just one class or method, but bunches of classes accessible through a user interface. Also, AppKata problems are always split into iterations. To get the most out of them, just look at the requirements of one iteration at a time. This way you're closer to reality, where requirements evolve in unexpected ways. So if you're looking for more of a challenge for your software development skills, check out these AppKatas – or invent your own. AppKatas are platform independent like CodeKatas. Use whatever programming language and IDE you like. Also use whatever approach to software development you like. Just be sensitive to how easy it is to evolve your code across iterations. Reflect on what went well and what not. Compare your solutions with others. Or – for even more challenge – go for the "Coding Carousel" (see below).

    CSV Viewer: An application to view CSV files. Sounds easy, but watch out! Requirements sometimes drastically change if the customer is happy with what you delivered. Iteration 1, Iteration 2, Iteration 3, Iteration 4, Iteration 5 (to come).

    Questionnaire: If you like GUI programming, this AppKata might be for you. It's about an app to let people fill out questionnaires. This problem might also be interesting for you if you're into DDD. Iteration 1, Iteration 2 (to come), Iteration 3 (to come), Iteration 4 (to come).

    Tic Tac Toe: For developers who like game programming. Although Tic Tac Toe is a trivial game, this AppKata poses some interesting infrastructure challenges. The GUI, however, stays simple; leave any 3D ambitions at home ;-) Iteration 1, Iteration 2 (to come), Iteration 3 (to come), Iteration 4 (to come), Iteration 5 (to come).

    Coding Carousel: There are many ways you can do AppKatas. Work on them alone or in a team, pitch several devs against each other in an AppKata contest – or go around in a Coding Carousel. For the Coding Carousel you need at least 3 dev teams (regardless of size). All teams work on the same iteration at the same time. But here's the trick: after each iteration the teams swap their code. Whatever they did for iteration n will be the basis for changes another team has to apply in iteration n+1. The code goes around the teams like in a carousel. I promise you, that's gonna be fun! :-)

    Read the article

  • Some Original Expressions

    - by Phil Factor
    Guest Editorial for the Simple-Talk newsletter. In a guest editorial for the Simple-Talk Newsletter, Phil Factor wonders if we are still likely to find some more novel and unexpected ways of using the newer features of Transact SQL: or maybe in some features that have always been there!

    There can be a great deal of fun to be had in trying out recent features of SQL expressions to see if they provide new functionality. It is surprisingly rare to find things that couldn't be done before, albeit in a different and more cumbersome way; but it is great to experiment, or to read of someone else making that discovery. One such recent feature is the 'table value constructor', or 'VALUES constructor', that managed to get into SQL Server 2008 from Standard SQL. This allows you to create derived tables of up to 1000 rows neatly within select statements that consist of lists of row values. E.g.

    SELECT Old_Welsh, number
    FROM (VALUES
            ('Un',1),('Dou',2),('Tri',3),('Petuar',4),('Pimp',5),
            ('Chwech',6),('Seith',7),('Wyth',8),('Nau',9),('Dec',10)
         ) AS WelshWordsToTen (Old_Welsh, number)

    These values can be expressions that return single values, including, surprisingly, subqueries. You can use this device to create views, or in the USING clause of a MERGE statement. Joe Celko covered this here and here. It can become extraordinarily handy once one gets into the way of thinking in these terms, and I've rewritten a lot of routines to use the constructor; the old way of using UNION can achieve the same thing, but is a little slower and more long-winded.

    The use of scalar SQL subqueries as an expression in a VALUES constructor, and then applied to a MERGE, has got me thinking. It looks very clever, but what use could one put it to? I haven't seen anything yet that couldn't be done almost as simply in SQL Server 2000, but I'm hopeful that someone will come up with a way of solving a tricky problem, just in the same way that a freak of the XML syntax forever made the in-line production of delimited lists from an expression easy, or that a weird XML pirouette could do an elegant pivot-table rotation.

    It is in this sort of experimentation where the community of users can make a real contribution. The dissemination of techniques such as the Number, or Tally, table, or the unconventional ways that the UPDATE statement can be used, has been rapid due to articles and blogs. However, there is plenty to be done to explore some of the less obvious features of Transact SQL. Even some of the features introduced into SQL Server 2000 are hardly well-known. Certain operations on data are still awkward to perform in Transact SQL, but we mustn't, I think, be too ready to state that certain things can only be done in the application layer, or using a CLR routine. With the vast array of features in the product, and with the tools that surround it, I feel that there is generally a way of getting tricky things done. Or should we just stick to our lasts and push anything difficult out into procedural code? I'd love to know your views.
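    For comparison, here is a sketch (mine, not Phil's) of the pre-2008 way of building the same derived table with UNION ALL, which, as he notes, is a little slower and more long-winded:

    SELECT Old_Welsh, number
    FROM (SELECT 'Un' AS Old_Welsh, 1 AS number
          UNION ALL SELECT 'Dou', 2
          UNION ALL SELECT 'Tri', 3
          UNION ALL SELECT 'Petuar', 4
          UNION ALL SELECT 'Pimp', 5
          UNION ALL SELECT 'Chwech', 6
          UNION ALL SELECT 'Seith', 7
          UNION ALL SELECT 'Wyth', 8
          UNION ALL SELECT 'Nau', 9
          UNION ALL SELECT 'Dec', 10) AS WelshWordsToTen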

    Read the article

  • SQL SERVER – How to Roll Back SQL Server Database Changes

    - by Pinal Dave
    In a perfect scenario, no unexpected and unplanned changes occur. There are no unpleasant surprises, no inadvertent changes. However, even with all precautions and testing, there is sometimes a need to revert a structure or data change. One of the methods that can be used in this situation is to use an older database backup that has the records or database object structure you want to revert to. For this method, you have to have an adequate full database backup, and a tool that helps with comparison and synchronization is preferred. In this article, we will focus on another method: rolling back the changes. This can be done by using:

    - An option in SQL Server Management Studio
    - T-SQL, or
    - ApexSQL Log

    The first two solutions have been described in this article. The disadvantage of these methods is that you have to know exactly when the change you want to revert happened, and that all transactions on the database executed in a specific time range are rolled back – the ones you want to undo and the ones you don't.

    How to easily roll back SQL Server database changes using ApexSQL Log? The biggest challenge is to roll back just specific changes, not all changes that happened in a specific time range. While the SQL Server Management Studio option and T-SQL read and roll forward all transactions in the transaction log files, I will show you a solution that finds and scripts only the specific changes that match your criteria. Therefore, you don't need to worry about all the other database changes that you don't want to roll back. ApexSQL Log is a SQL Server disaster recovery tool that reads transaction logs and provides a wide range of filters that enable you to easily roll back only specific data changes.

    First, connect to the online database where you want to roll back the changes. Once you select the database, ApexSQL Log will show its recovery model. Note that changes can be rolled back even for a database in the Simple recovery model, when no database and transaction log backups are available. However, ApexSQL Log achieves the best results when the database is in the Full recovery model and you have a chain of subsequent transaction log backups, back to the moment when the change occurred. In this example, we will use only the online transaction log.

    In the next step, use filters to read only the transactions that happened in a specific time range. To remove noise, it's recommended to use as many filters as possible. Besides filtering by the time of the transaction, ApexSQL Log can filter by operation type, table name, transaction state (committed, aborted, running, and unknown), the name of the user who committed the change, specific field values, server process IDs, and transaction description. You can select only the tables affected by the changes you want to roll back. However, if you're not certain which tables were affected, you can leave them all selected and, once the results are shown in the main grid, analyze them to find the ones you want to roll back.

    When you set the filters, you can select how to present the results. ApexSQL Log can automatically create undo or redo scripts, export the transactions into an XML, HTML, CSV, SQL, or SQL Bulk file, and create a batch file that you can use for unattended transaction log reading. In this example, I will open the results in the grid, as I want to analyze them before rolling back the transactions. The results contain information about the transaction, as well as who made it and when.
    For UPDATEs, ApexSQL Log shows both old and new values, so you can easily see what has happened. To create an UNDO script that rolls back the changes, select the transactions you want to roll back and click Create undo script in the menu. For the DELETE statement selected in the screenshot above, the undo script is:

    INSERT INTO [Sales].[PersonCreditCard] ([BusinessEntityID], [CreditCardID], [ModifiedDate])
    VALUES (297, 8010, '20050901 00:00:00.000')

    When it comes to rolling back database changes, ApexSQL Log has a big advantage, as it rolls back only specific transactions, while leaving all other transactions that occurred in the same time range intact. That makes ApexSQL Log a good solution for rolling back inadvertent data and schema changes on your SQL Server databases.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
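    For context, this example is mine rather than the article's: the conventional T-SQL route the article contrasts with is a point-in-time restore, which rolls back everything after the chosen moment. The database name, file paths, and timestamp below are placeholders.

    -- Restore the last full backup, leaving the database ready for log restores
    RESTORE DATABASE AdventureWorks
        FROM DISK = N'D:\Backup\AdventureWorks_full.bak'
        WITH NORECOVERY, REPLACE;

    -- Roll the log forward only up to the moment just before the unwanted change
    RESTORE LOG AdventureWorks
        FROM DISK = N'D:\Backup\AdventureWorks_log.trn'
        WITH STOPAT = '2013-05-20 14:30:00', RECOVERY;

    Everything committed after the STOPAT time is lost, which is exactly the all-or-nothing trade-off described above.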

    Read the article

  • Using a parser to locate faulty code

    - by ryan.riverside
    Lately I've been working a lot in PHP and have run into an abnormally large number of parsing errors. I realize these are my own fault and a result of sloppy initial coding on my part, but it's getting to the point that I'm spending more time resolving tags than developing. In the interest of not slamming my productivity, are there any tricks to locating the problem in the code? What I'd really be looking for would be a line to put in the code which would output the entire faulty tag in the parsing error, or something similar. Purely for reference's sake, my current error is:

    Parse error: syntax error, unexpected '}' in /home/content/80/9480880/html/cache/tpl_prosilver_viewtopic_body.html.php on line 50

    which refers to this:

    </dd><dd><?php if ($_poll_option_val['POLL_OPTION_RESULT'] == 0) { echo ((isset($this->_rootref['L_NO_VOTES'])) ? $this->_rootref['L_NO_VOTES'] : ((isset($user->lang['NO_VOTES'])) ? $user->lang['NO_VOTES'] : '{ NO_VOTES }')); } else { echo $_poll_option_val['POLL_OPTION_PERCENT']; } ?></dd>
    </dl>
    <?php }} if ($this->_rootref['S_DISPLAY_RESULTS']) { ?>
    <dl>
        <dt>&nbsp;</dt>
        <dd class="resultbar"><?php echo ((isset($this->_rootref['L_TOTAL_VOTES'])) ? $this->_rootref['L_TOTAL_VOTES'] : ((isset($user->lang['TOTAL_VOTES'])) ? $user->lang['TOTAL_VOTES'] : '{ TOTAL_VOTES }')); ?> : <?php echo (isset($this->_rootref['TOTAL_VOTES'])) ? $this->_rootref['TOTAL_VOTES'] : ''; ?></dd>
    </dl>
    <?php } if ($this->_rootref['S_CAN_VOTE']) { ?>
    <dl style="border-top: none;">
        <dt>&nbsp;</dt>
        <dd class="resultbar"><input type="submit" name="update" value="<?php echo ((isset($this->_rootref['L_SUBMIT_VOTE'])) ? $this->_rootref['L_SUBMIT_VOTE'] : ((isset($user->lang['SUBMIT_VOTE'])) ? $user->lang['SUBMIT_VOTE'] : '{ SUBMIT_VOTE }')); ?>" class="button1" /></dd>
    </dl>
    <?php } if (! $this->_rootref['S_DISPLAY_RESULTS']) { ?>
    <dl style="border-top: none;">
        <dt>&nbsp;</dt>
        <dd class="resultbar"><a href="<?php echo (isset($this->_rootref['U_VIEW_RESULTS'])) ? $this->_rootref['U_VIEW_RESULTS'] : ''; ?>"><?php echo ((isset($this->_rootref['L_VIEW_RESULTS'])) ? $this->_rootref['L_VIEW_RESULTS'] : ((isset($user->lang['VIEW_RESULTS'])) ? $user->lang['VIEW_RESULTS'] : '{ VIEW_RESULTS }')); ?></a></dd>
    </dl>
    <?php } ?>
    </fieldset></div>

    Read the article

  • Centralized Project Management Brings Needed Cost Controls to Growing Brazilian Firm

    - by Melissa Centurio Lopes
    Fast growth and a significant increase in business activities were creating project management challenges for CPqD, a developer of innovative information and communication technologies for large Brazilian organizations. To bring greater efficiency and centralized project management capabilities to its operations, CPqD chose Oracle's Primavera P6 Enterprise Project Portfolio Management. "Oracle Primavera is an essential tool for our day-to-day business, and I notice the effort Oracle makes to constantly innovate and to add more functionality in an increasingly shorter period of time," says Márcio Alexandre da Silva, IT department project coordinator, CPqD. He explains that before CPqD implemented the Oracle solution, the company did not have a corporate view of projects. "Our project monitoring was decentralized and restricted to each coordinator," the project coordinator says. "With the Oracle solution, we achieved actual shared management, more control, and budgets that stay within projections." Among the benefits that CPqD now enjoys are:

    - The ability to more effectively identify how employees are allocated, enabling managers to increase or reduce resources based on project scope, as well as secure the resources required for unexpected projects and demands
    - A 75 percent reduction in the time it takes to collect project data and indicators, since automated and centralized collection means project coordinators no longer have to manually compile information that was spread among various systems

    Read the complete CPqD company snapshot. Read more in the October Edition of the quarterly Information InDepth EPPM Newsletter.

    Read the article

  • HPCM 11.1.2.2.x - HPCM Standard Costing Generating >99 Calc Scripts

    - by Jane Story
    HPCM Standard Profitability calculation scripts are named based on a documented naming convention. From 11.1.2.2.x, the script name = a script suffix (1 letter) + POV identifier (3 digits) + Stage Order Number (1 digit) + "_" + index (2 digits) (please see the documentation for more information: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_admin/apes01.html). This naming convention results in the name being 8 characters in length, i.e. the maximum number of characters permitted for calculation script names in non-Unicode Essbase BSO databases. The index in the name indicates the number of scripts per stage. In the vast majority of cases, the number of scripts generated per stage will be significantly less than 100 and therefore there will be no issue. However, in some cases, the number of scripts generated can exceed 99.

    It is unusual for an application to generate more than 99 calculation scripts for one stage. This may indicate that explicit assignments are being extensively used. An assessment should be made of the design to see if assignment rules can be used instead. Assignment rules will reduce the need for so many calculation script lines, which will reduce the requirement for such a large number of calculation scripts.

    In cases where the number of scripts generated exceeds 99, the length of the name of the 100th calculation script differs from the 99th, as the calculation script name changes from being 8 characters long to 9 characters long (e.g. A6811_100 rather than A6811_99). A name of 9 characters is not permitted in non-Unicode applications; it is "too long". When this occurs, an error will show in the hpcm.log as "Error processing calculation scripts" and "Unexpected error in business logic". Further down the log, it is possible to see that this is "Caused by: Error copying object" and "Caused by: com.essbase.api.base.EssException: Cannot put olap file object ... object name_[<calc script name> e.g. A6811_100] too long for non-unicode mode application". The error file will give the name of the calculation script which is causing the issue. In my example, this is A6811_100, and you can see this is 9 characters in length.

    It is not possible to increase the number of characters allowed in a calculation script name. However, it is possible to increase the size of each calculation script. The default for an HPCM application, set in the preferences, is 4 MB. If the size of each calculation script is larger, the number of scripts generated will reduce and, therefore, fewer than 100 scripts will be generated, which means that each calculation script name will remain 8 characters long. To increase the size of the generated calculation scripts for an application, in the HPM_APPLICATION_PREFERENCE table for the application, find the row where HPM_PREFERENCE_NAME_ID=20. The default value in this row is 4194304. This can be increased; for example, 7340032 will increase it to 7 MB. Please restart the Profitability service after making the change.
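    As a rough sketch (not from the original post), the preference change described above boils down to a single-row update; the value column name below is an assumption, so check the actual HPM_APPLICATION_PREFERENCE schema before running anything like this:

    -- Raise the generated calculation script size from the 4 MB default to 7 MB
    -- (HPM_PREFERENCE_NAME_ID = 20 holds the script size preference; value column name assumed)
    UPDATE HPM_APPLICATION_PREFERENCE
    SET    HPM_PREFERENCE_VALUE = '7340032'   -- default is 4194304
    WHERE  HPM_PREFERENCE_NAME_ID = 20;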

    Read the article

  • JFall 2012

    - by Geertjan
    JFall 2012 was over far too soon! Seven tracks going on simultaneously in a great location, with many artifacts reminding me of JavaOne, and nice snacks and drinks afterwards. The day started, as such things always do, with a keynote. Thanks to @royvanrijn for the photo below; I didn't take any myself, and without a picture this report might have been too dry: What you see above is Steve Chin riding into the keynote hall on his NightHacking bike. The keynote was interesting; I can't be too complimentary about it, since I was part of it myself. Bert Ertman introduced the day and then Steve Chin took over, together with Sharat Chander, Tom Eugelink, Timon Veenstra, and myself. We had a strict choreography for the keynote, one that would ensure a lot of variation and some unexpected surprises, such as Steve being thrown off the stage a few times by Bert for mentioning JavaOne too many times, rather than the clearly much cooler JFall. Steve talked about JavaOne and the direction Java is headed in, Sharat talked about JavaME and embedded devices, Steve and Tom did a demo involving JavaFX, I did a Project Easel demo, and Timon from Ordina talked about his Duke's Choice Award winning AgroSense project. I think the Project Easel demo (which I repeated later in a screencast for Parleys arranged by Eugene Boogaart) came across well, and several people I spoke to especially liked the roundtrip/bi-directional work that can be done, from browser to IDE and back again, very simply and intuitively. (In a long conversation on the drive back home afterwards, the scenario of a designer laying out the UI in HTML and then handing the HTML to a developer for back-end work, a developer who would then find it convenient to open the HTML in a browser and quickly navigate from the browser to the resources within the IDE, was discussed and considered to be extremely interesting and worth adopting NetBeans for, for no other reason than that.) Later I attended a session by David Delabassee on Java EE 7, Hans Dockter on Gradle, and Sander Mak on cross-build injection attacks. I was sorry to have missed Martijn Verburg's session, which sounded like it was really fantastic, among others such as Gerrit Grunwald's. I did a session too, entitled "Unlocking the Java EE 6 Platform", which was very well attended, pretty much a full room, and the demo went very smoothly. I talked to many people, e.g., a long time with Hans Dockter about how cool Gradle is and how great the Gradle/NetBeans plugin is turning out to be. I also had a long conversation (and did a demo) with Chris Chedgey, from Structure101, after his session, which was incredibly well attended; very interesting how popular modularity is. I met several people for the first time, as well as some colleagues from past places I've worked at. All in all, it was a great conference, unfortunately too short, which was very well attended (clearly over 1,000 people), with several international speakers, as well as international attendees such as Mattias Karlsson, Sweden JUG leader. And, unsurprisingly, I came across NetBeans Platform applications again, none of which I had ever heard of before. In each case, "our fat client application" was mentioned in passing, never as a main application, and never in a context where there are plans for the application to be migrated to the web or mobile, simply because doing so makes no business sense at all. Great times at JFall; looking forward to meeting some of the people I met again soon.

    Read the article

  • Handling "related" work within a single agile work item

    - by Tesserex
    I'm on a project team of 4 devs, myself included. We've been having a long discussion on how to handle extra work that comes up in the course of a single work item. This extra work is usually things that are slightly related to the task, but not always necessary to accomplish the goal of the item (that may be an opinion). Examples include but are not limited to:

    - refactoring of the code changed by the work item
    - refactoring code neighboring the code changed by the item
    - re-architecting the larger code area around the ticket. For example, if an item has you changing a single function, you realize the entire class now could be redone to better accommodate this change.
    - improving the UI on a form you just modified

    When this extra work is small we don't mind. The problem is when this extra work causes a substantial extension of the item beyond the original feature point estimation. Sometimes a 5 point item will actually take 13 points of time. In one case we had a 13 point item that in retrospect could have been 80 points or more. There are two options going around in our discussion for how to handle this.

    We can accept the extra work in the same work item, and write it off as a mis-estimation. Arguments for this have included:

    - We plan for "padding" at the end of the sprint to account for this sort of thing.
    - Always leave the code in better shape than you found it. Don't check in half-assed work.
    - If we leave refactoring for later, it's hard to schedule and may never get done.
    - You are in the best mental "context" to handle this work now, since you're waist deep in the code already. Better to get it out of the way now and be more efficient than to lose that context when you come back later.

    We draw a line for the current work item, and say that the extra work goes into a separate ticket. Arguments include:

    - Having a separate ticket allows for a new estimation, so we aren't lying to ourselves about how many points things really are, or having to admit that all of our estimations are terrible.
    - The sprint "padding" is meant for unexpected technical challenges that are direct barriers to completing the ticket requirements. It is not intended for side items that are just "nice-to-haves".
    - If you want to schedule refactoring, just put it at the top of the backlog.
    - There is no way for us to properly account for this stuff in an estimation, since it seems somewhat arbitrary when it comes up. A code reviewer might say "those UI controls (which you actually didn't modify in this work item) are a bit confusing, can you fix that too?" which is like an hour, but they might say "Well if this control now inherits from the same base class as the others, why don't you move all of this (hundreds of lines of) code into the base and rewire all this stuff, the cascading changes, etc.?" And that takes a week.
    - It "contaminates the crime scene" by adding unrelated work into the ticket, making our original feature point estimates meaningless.
    - In some cases, the extra work postpones a check-in, causing blocking between devs.

    Some of us are now saying that we should decide some cut-off, like if the additional stuff is less than 2 FP it goes in the same ticket, and if it's more, make it a new ticket. Since we're only a few months into using Agile, what's the opinion of all the more seasoned Agile veterans around here on how to handle this?

    Read the article

  • Learn Many Languages

    - by Phil Factor
    Around twenty-five years ago, I was trying to solve the problem of recruiting suitable developers for a large business. I visited the local University (it was a Technical College then). My mission was to remind them that we were a large, local employer of technical people and to suggest that, as they were in the business of educating young people for a career in IT, we should work together. I anticipated a harmonious chat where we could suggest to them the idea of mentioning our name to some of their graduates. It didn’t go well. The academic staff displayed a degree of revulsion towards the whole topic of IT in the world of commerce that surprised me; tweed met charcoal-grey, trainers met black shoes. However, their antipathy to commerce was something we could have worked around, since few of their graduates were destined for a career as university lecturers. They asked me what sort of language skills we needed. I tried ducking the invidious task of naming computer languages, since I wanted recruits who were quick to adapt and learn, with a broad understanding of IT, including development methodologies, technologies, and data. However, they pressed the point and I ended up saying that we needed good working knowledge of C and BASIC, though FORTRAN and COBOL were, at the time, still useful. There was a ghastly silence. It was as if I’d recommended the beliefs and practices of the Bogomils of Bulgaria to a gathering of Cardinals. They stared at me severely, like owls, until the head of department broke the silence, informing me in clipped tones that they taught only Modula 2. Now, I wouldn’t blame you if at this point you hurriedly had to look up ‘Modula 2′ on Wikipedia. Based largely on Pascal, it was a specialist language for embedded systems, but I’ve never ever come across it in a commercial business application. Nevertheless, it was an excellent teaching language since it taught modules, scope control, multiprogramming and the advantages of encapsulating a set of related subprograms and data structures. As long as the course also taught how to transfer these skills to other, more useful languages, it was not necessarily a problem. I said as much, but they gleefully retorted that the biggest local employer, a defense contractor specializing in Radar and military technology, used nothing but Modula 2. “Why teach any other programming language when they will be using Modula 2 for all their working lives?” said a complacent lecturer. On hearing this, I made my excuses and left. There could be no meeting of minds. They were providing training in a specific computer language, not an education in IT. Twenty years later, I once more worked nearby and regularly passed the long-deserted ‘brownfield’ site of the erstwhile largest local employer; the end of the cold war had led to lean times for defense contractors. A digger was about to clear the rubble of the long demolished factory along with the accompanying growth of buddleia and thistles, in order to lay the infrastructure for ‘affordable housing’. Modula 2 was a distant memory. Either those employees had short working lives or they’d retrained in other languages. The University, by contrast, was thriving, but I wondered if their erstwhile graduates had ever cursed the narrow specialization of their training in IT, as they struggled with the unexpected variety of their subsequent careers.

    Read the article

  • SQL Server Optimizer Malfunction?

    - by Tony Davis
    There was a sharp intake of breath from the audience when Adam Machanic declared the SQL Server optimizer to be essentially "stuck in 1997". It was during his fascinating "Query Tuning Mastery: Manhandling Parallelism" session at the recent PASS SQL Summit. Paraphrasing somewhat, Adam (blog | @AdamMachanic) offered a convincing argument that the optimizer often delivers flawed plans based on assumptions that are no longer valid with today’s hardware. In 1997, when Microsoft engineers re-designed the database engine for SQL Server 7.0, SQL Server got its initial implementation of a cost-based optimizer. Up to SQL Server 2000, the developer often had to deploy a steady stream of hints in SQL statements to combat the occasionally wilful plan choices made by the optimizer. However, with each successive release, the optimizer has evolved and improved in its decision-making. It is still prone to the occasional stumble when we tackle difficult problems, join large numbers of tables, perform complex aggregations, and so on, but for most of us, most of the time, the optimizer purrs along efficiently in the background. Adam, however, challenged further any assumption that the current optimizer is competent at providing the most efficient plans for our more complex analytical queries, and in particular of offering up correctly parallelized plans. He painted a picture of a present where complex analytical queries have become ever more prevalent; where disk IO is ever faster so that reads from disk come into buffer cache faster than ever; where the improving RAM-to-data ratio means that we have a better chance of finding our data in cache. Most importantly, we have more CPUs at our disposal than ever before. To get these queries to perform, we not only need to have the right indexes, but also to be able to split the data up into subsets and spread its processing evenly across all these available CPUs. Improvements such as support for ColumnStore indexes are taking things in the right direction, but, unfortunately, deficiencies in the current Optimizer mean that SQL Server is yet to be able to exploit properly all those extra CPUs. Adam’s contention was that the current optimizer uses essentially the same costing model for many of its core operations as it did back in the days of SQL Server 7, based on assumptions that are no longer valid. One example he gave was a "slow disk" bias that may have been valid back in 1997 but certainly is not on modern disk systems. Essentially, the optimizer assesses the relative cost of serial versus parallel plans based on the assumption that there is no IO cost benefit from parallelization, only CPU. It assumes that a single request will saturate the IO channel, and so a query would not run any faster if we parallelized IO because the disk system simply wouldn’t be able to handle the extra pressure. As such, the optimizer often decides that a serial plan is lower cost, often in cases where a parallel plan would improve performance dramatically. It was challenging and thought provoking stuff, as were his techniques for driving parallelism through query logic based on subsets of rows that define the "grain" of the query. I highly recommend you catch the session if you missed it. I’m interested to hear though, when and how often people feel the force of the optimizer’s shortcomings. Barring mistakes, such as stale statistics, how often do you feel the Optimizer fails to find the plan you think it should, and what are the most common causes? 
Is it fighting to induce it toward parallelism? Combating unexpected plans arising from table partitioning? Something altogether more prosaic?
Cheers, Tony.

    Read the article

  • The Future of Air Travel: Intelligence and Automation

    - by BobEvans
    Remember those white-knuckle flights through stormy weather where unexpected plunges in altitude result in near-permanent relocations of major internal organs? Perhaps there’s a better way, according to a recent Wall Street Journal article: “Pilots of a Honeywell International Inc. test plane stayed on their initial flight path, relying on the company's latest onboard radar technology to steer through the worst of the weather. The specially outfitted Boeing 757 barely shuddered as it gingerly skirted some of the most ferocious storm cells over Fort Walton Beach and then climbed above the rest in zero visibility.” Or how about the multifaceted check-in process, which might not wreak havoc on liver location but nevertheless makes you wonder if you’ve been trapped in some sort of covert psychological-stress test? Another WSJ article, called “The Self-Service Airport,” says there’s reason for hope there as well: “Airlines are laying the groundwork for the next big step in the airport experience: a trip from the curb to the plane without interacting with a single airline employee. At the airport of the near future, ‘your first interaction could be with a flight attendant,’ said Ben Minicucci, chief operating officer of Alaska Airlines, a unit of Alaska Air Group Inc.” And in the topsy-turvy world of air travel, it’s not just the passengers who’ve been experiencing bumpy rides: the airlines themselves are grappling with a range of challenges—some beyond their control, some not—that make profitability increasingly elusive in spite of heavy demand for their services. A recent piece in The Economist illustrates one of the mega-challenges confronting the airline industry via a striking set of contrasting and very large numbers: while the airlines pay $7 billion per year to third-party computerized reservation services, the airlines themselves earn a collective profit of only $3 billion per year. In that context, the anecdotes above point unmistakably to the future that airlines must pursue if they hope to be able to manage some of the factors outside of their control (e.g., weather) as well as all of those within their control (operating expenses, end-to-end visibility, safety, load optimization, etc.): more intelligence, more automation, more interconnectedness, and more real-time awareness of every facet of their operations. Those moves will benefit both passengers and the air carriers, says the WSJ piece on The Self-Service Airport: “Airlines say the advanced technology will quicken the airport experience for seasoned travelers—shaving a minute or two from the checked-baggage process alone—while freeing airline employees to focus on fliers with questions. ‘It's more about throughput with the resources you have than getting rid of humans,’ said Andrew O'Connor, director of airport solutions at Geneva-based airline IT provider SITA.” Oracle’s attempting to help airlines gain control over these challenges by blending together a range of its technologies into a solution called the Oracle Airline Data Model, which suggests the following steps: • To retain and grow their customer base, airlines need to focus on the customer experience. • To personalize and differentiate the customer experience, airlines need to effectively manage their passenger data. • The Oracle Airline Data Model can help airlines jump-start their customer-experience initiatives by consolidating passenger data into a customer data hub that drives realtime business intelligence and strategic customer insight. 
• Oracle’s Airline Data Model brings together multiple types of data that can jumpstart your data-warehousing project with rich out-of-the-box functionality. • Oracle’s Intelligent Warehouse for Airlines brings together the powerful capabilities of Oracle Exadata and the Oracle Airline Data Model to give you real-time strategic insights into passenger demand, revenues, sales channels and your flight network. The airline industry aside, the bullet points above offer a broad strategic outline for just about any industry because the customer experience is becoming pre-eminent in each and there is simply no way to deliver world-class customer experiences unless a company can capture, manage, and analyze all of the relevant data in real-time. I’ll leave you with two thoughts from the WSJ article about the new in-flight radar system from Honeywell: first, studies show that a single episode of serious turbulence can rack up $150,000 in additional costs for an airline—so, it certainly behooves the carriers to gain the intelligence to avoid turbulence as much as possible. And second, it’s back to that top-priority customer-experience thing and the value that ever-increasing levels of intelligence can deliver. As the article says: “In the cabin, reporters watched screens showing the most intense parts of the nearly 10-mile wide storm, which churned some 7,000 feet below, in vibrant red and other colors. The screens also were filled with tiny symbols depicting likely locations of lightning and hail, which can damage planes and wreak havoc on the nerves of white-knuckle flyers.”  (Bob Evans is senior vice-president, communications, for Oracle.)

    Read the article

  • Documentation and Test Assertions in Databases

    - by Phil Factor
    When I first worked with Sybase/SQL Server, we thought our databases were impressively large but they were, by today’s standards, pathetically small. We had one script to build the whole database. Every script I ever read was richly annotated; it was more like reading a document. Every table had a comment block, and every line would be commented too. At the end of each routine (e.g. procedure) was a quick integration test, or series of test assertions, to check that nothing in the build was broken. We simply ran the build script, stored in the Version Control System, and it pulled everything together in a logical sequence that not only created the database objects but pulled in the static data. This worked fine at the scale we had. The advantage was that one could, by reading the source code, reach a rapid understanding of how the database worked and how one could interface with it. The problem was that it was a system that meant that only one developer at a time could work on the database. It was very easy for a developer to accidentally execute the entire build script rather than the selected section on which he or she was working, thereby cleansing the database of everyone else’s work-in-progress and data. It soon became the fashion to work at the object level, so that programmers could check out individual views, tables, functions, constraints and rules and work on them independently. It was then that I noticed the trend to generate the source for the VCS retrospectively from the development server. Tables were worst affected. You can, of course, add or delete a table’s columns and constraints retrospectively, which means that the existing source no longer represents the current object. If, after your development work, you generate the source from the live table, then you get no block or line comments, and the source script is sprinkled with silly square-brackets and other confetti, thereby rendering it visually indigestible. Routines, too, were affected. In our system, every routine had a directly attached string of unit-tests. A retro-generated routine has no unit-tests or test assertions. Yes, one can still commit our test code to the VCS but it’s a separate module and teams end up running the whole suite of tests for every individual change, rather than just the tests for that routine, which doesn’t scale for database testing. With Extended properties, one can get the best of both worlds, and even use them to put blame, praise or annotations into your VCS. It requires a lot of work, though, particularly the script to generate the table. The problem is that there are no conventional names beyond ‘MS_Description’ for the special use of extended properties. This makes it difficult to do splendid things such as ensuring the integrity of the build by running a suite of tests that are actually stored in extended properties within the database and therefore the VCS. We have lost the readability of database source code over the years, and largely jettisoned the use of test assertions as part of the database build. This is not unexpected in view of the increasing complexity of the structure of databases and the number of programmers working on them. There must, surely, be a way of getting them back, but I sometimes wonder if I’m one of very few who miss them.

    Read the article

  • Upgrade to Xubuntu 13.10 - Saucy Salamander

    As a common 'fashion' it is possible to upgrade an existing installation of Ubuntu or one of its derivatives every six months. Of course, you might opt in for the adventure and directly keep your system always on the latest version (including alphas and betas), or you might like to play safe and stay on the long-term support (LTS) versions which are updated every two years only. As for me, I'd like to jump from release to release on my main desktop machine. And since 17th October, Saucy Salamander, also known as Ubuntu 13.10, has been released for general use. The following paragraphs document the steps I went through in order to upgrade my system to the recent version. Don't worry about the fact that I'm actually using Xubuntu. It's mainly a flavoured version of Ubuntu running Xfce 4.10 as the default X window manager. Well, I have Gnome and LXDE on the same system... just out of curiosity. Preparing the system Before you think about upgrading you have to ensure that your current system is running on the latest packages. This can be done easily via a terminal like so: $ sudo apt-get update && sudo apt-get -y dist-upgrade --fix-missing Next, we are going to initiate the upgrade itself: $ sudo update-manager As a result the graphical Software Updater should inform you that a newer version of Ubuntu is available for installation. Ubuntu's Software Updater informs you whether an upgrade is available Running the upgrade After clicking 'Upgrade...' you will be presented with information about the new version. Details about Ubuntu 13.10 (Saucy Salamander) Simply continue with the procedure and your system will be analysed for the next steps. Analysing the existing system and preparing the actual upgrade to 13.10 Next, we are at the point of no return. Last confirmation dialog before having a coffee break while your machine is occupied downloading the necessary packages. Not the best bandwidth at hand after all... yours might be faster. Are you really sure that you want to start the upgrade? Let's go and have fun! Anyway, bye bye Raring Ringtail and welcome Saucy Salamander! In case you added any additional repositories like Medibuntu or PPAs you will be informed that they are going to be disabled during the upgrade and they might require some manual intervention after completion. Ubuntu is playing safe and third party repositories are disabled during the upgrade Well, depending on your internet bandwidth this might take anything between a couple of minutes and some hours to download all the packages and then trigger the actual installation process. In my case I left my PC unattended during the night. Time to reboot Finally, it's time to restart your system and see what's going to happen... In my case absolutely nothing unexpected. The system booted the new kernel 3.11.0 as usual and I was greeted by a new login screen. Honestly, 'same' system as before - which is good and I love that fact of consistency - and I can continue to work productively. And Software Updater also confirms that we just had a painless upgrade: System is running Ubuntu 13.10 - Saucy Salamander - and up to date See you in six months again... ;-) Post-scriptum In case you would like to upgrade to the latest development version of Ubuntu, run the following command in a console: $ sudo update-manager -d And repeat all steps as described above.

    Read the article

  • Exception on SslStream.AuthenticateAsClient (The message was badly formatted)

    - by Noms
    I have got a weird problem going on. I am trying to connect to an Apple server via TCP/SSL. I am using a client certificate provided by Apple for push notifications. I installed the certificate on my server (Win2k3) in both the Local Trusted Root certificates and Local Personal Certificates folders. Now I have a class library that deals with that connection. When I call this class library from a console application running on the server it works absolutely fine, but when I call that class library from an asp.net page or asmx web service I get the following exception: A call to SSPI failed, see inner exception. The message received was unexpected or badly formatted. This is my code: X509Certificate cert = new X509Certificate(certificateLocation, certificatePassword); X509CertificateCollection certCollection = new X509CertificateCollection(new X509Certificate[1] { cert }); // OPEN the new SSL Stream SslStream ssl = new SslStream(client.GetStream(), false, new RemoteCertificateValidationCallback(ValidateServerCertificate), null); ssl.AuthenticateAsClient(ipAddress, certCollection, SslProtocols.Default, false); ssl.AuthenticateAsClient is where the error gets thrown. This is driving me nuts. If the console application can connect fine, there must be some problem with the asp.net network layer security that is failing the authentication... not sure, perhaps I need to add something or some sort of security policy in the web.config. Also, just to point out that I can connect fine on my local development machine with both the console and the website. Has anyone got any ideas?
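    A common reason this pattern fails only under ASP.NET (while working from a console session and on a development machine) is that the worker-process identity cannot reach the private key of a certificate loaded from a .pfx file with the default key-storage settings. The sketch below is a frequently suggested workaround rather than a confirmed fix for this exact question; X509Certificate2 and the key-storage flags are standard System.Security.Cryptography.X509Certificates types, but the choice of flags, the store-based alternative, and the placeholder subject name are assumptions about this scenario.
    // Minimal sketch, replacing the first two lines of the snippet above and reusing the same
    // certificateLocation/certificatePassword variables.
    // Requires: using System.Security.Cryptography.X509Certificates;
    // MachineKeySet stores the private key in the machine profile so the ASP.NET worker process
    // can read it; PersistKeySet keeps the key available after the certificate object is loaded.
    X509Certificate2 clientCert = new X509Certificate2(
        certificateLocation,
        certificatePassword,
        X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet);
    X509CertificateCollection certCollection =
        new X509CertificateCollection(new X509Certificate[] { clientCert });
    // Alternative (also an assumption for this scenario): install the certificate into the
    // LocalMachine\My store, grant the application pool identity read access to its private key,
    // and load it from the store instead of from the file. "your-cert-subject" is a placeholder.
    // X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
    // store.Open(OpenFlags.ReadOnly);
    // X509Certificate2Collection found =
    //     store.Certificates.Find(X509FindType.FindBySubjectName, "your-cert-subject", false);
    // store.Close();
    The console application typically works because it runs as the logged-in user who imported the certificate, so that user profile already has access to the private key.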

    Read the article

  • skipped when looking for precompiled header

    - by numerical25
    So for some reason, my .cpp file is missing its header file. But I am not including the header file anywhere else. I just started, so I checked all the files I made enginuity.h #ifndef _ENGINE_ #define _ENGINE_ class Enginuity { public: void InitWindow(); }; enginuity.cpp #include "Enginuity.h" void Enginuity::InitWindow() { } main.cpp #include "stdafx.h" #include "GameProject1.h" #define MAX_LOADSTRING 100 // Global Variables: HINSTANCE hInst; // current instance TCHAR szTitle[MAX_LOADSTRING]; // The title bar text TCHAR szWindowClass[MAX_LOADSTRING]; // the main window class name // Forward declarations of functions included in this code module: ATOM MyRegisterClass(HINSTANCE hInstance); BOOL InitInstance(HINSTANCE, int); LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM); INT_PTR CALLBACK About(HWND, UINT, WPARAM, LPARAM); int APIENTRY _tWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow) { code..... #endif don't know what's going on. The error I get is 1>c:\users\numerical25\desktop\intro todirectx\gameproject\gameproject1\gameproject1\enginuity.cpp(1) : warning C4627: '#include "Enginuity.h"': skipped when looking for precompiled header use 1> Add directive to 'stdafx.h' or rebuild precompiled header 1>c:\users\numerical25\desktop\intro todirectx\gameproject\gameproject1\gameproject1\enginuity.cpp(8) : fatal error C1010: unexpected end of file while looking for precompiled header. Did you forget to add '#include "stdafx.h"' to your source?
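    Both messages point at precompiled headers rather than at Enginuity.h itself. When a Visual C++ project builds with a precompiled header (the default /Yu"stdafx.h" setting in this kind of Win32 project template), the compiler skips everything in a .cpp file until it sees #include "stdafx.h", so a file that never includes it triggers warning C4627 and then fatal error C1010. A sketch of the usual fix, assuming the project keeps its default precompiled-header settings, is shown below; the alternative is to set that one file's C/C++ > Precompiled Headers property to "Not Using Precompiled Headers".
    // enginuity.cpp -- with /Yu enabled, stdafx.h must be the very first include;
    // anything above it is ignored by the compiler.
    #include "stdafx.h"
    #include "Enginuity.h"
    void Enginuity::InitWindow()
    {
    }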

    Read the article

  • Installing into the GAC with WiX 3.0

    - by Jeff Yates
    I have a DLL that I would like to install into the Global Assembly Cache so that it can be referenced from multiple locations. I have a File declaration with the Assembly attribute set to ".net" but when the installation tries to install the DLL into the GAC, I get the following error (I have tidied it up a bit to make it more readable): MSI (s) (58:38) [19:14:31:031]: Product: MyProductName 1.01 -- Error 1935. An error occurred during the installation of assembly  'Compass,   version="1.0.0.0",   culture="neutral",   publicKeyToken="392B26B760D48103",   processorArchitecture="MSIL"'. Please refer to Help and Support for more information. HRESULT: 0x80131043. assembly interface:       IAssemblyCacheItem, function:             Commit, component: {53AEE63B-F356-4D4F-8D61-EB0640A6E160} I have hunted around to find out what this means and the error relates to FUSION_E_UNEXPECTED_MODULE_FOUND. This link also includes this information: /// When installing multi-file assemblies into the GAC, the hash of each module is /// checked against the hash of that file stored in the manifest. If the /// hash of one of the files in the multi-file assembly does not match what is recorded /// in the manifest, FUSION_E_UNEXPECTED_MODULE_FOUND will be returned. /// The name of the error, and the text description of it, are somewhat confusing. /// The reason this error code is described this way is that the internally, /// Fusion/CLR implements installation of assemblies in the GAC, by installing /// multiple "streams" that are individually committed. /// Each stream has its hash computed, and all the hashes found /// are compared against the hashes in the manifest, at the end of the installation. /// Hence, a file hash mismatch appears as if an "unexpected" module was found. Unfortunately, this doesn't make much sense to me and I don't see how it relates to my assembly, which isn't fancy or complex from my perspective (it's just a regular .NET 3.5 class library and the current installation test is occurring on my development machine, which is a valid target environment for my project - 32-bit Windows XP SP3). Can anyone shed some light on why I might be getting this error and how I might hope to fix it?

    Read the article

  • Selenium IDE - Passing a URL string variable into a conditional 'gotoIf' statement throws a Syntax error

    - by user318568
    I am trying to store the current url (http://example.com) in a variable and compare it with another string as a condition in the gotoIf command (part of the gotoIf extension.js): storeLocation || url gotoIf || ${url}=="http://example.com" || label When I run this, Selenium IDE throws this error: [error] Unexpected Exception: message -> syntax error, fileName -> chrome://selenium-ide/content/tools.js -> file:///C:/Users/David%20Cunningham/Desktop/extensions_js/extensions.js, lineNumber -> 183, stack -> eval("http://example.com==\"http://example.com\"")@:0 ("http://example.com==\"http://example.com\"","label1")@chrome://selenium-ide/content/tools.js -> file:///C:/Users/David%20Cunningham/Desktop/extensions_js/extensions.js:183 ("http://example.com==\"http://example.com\"","label1")@chrome://selenium-ide/content/selenium/scripts/htmlutils.js:60 ([object Object],[object Object])@chrome://selenium-ide/content/selenium/scripts/selenium-commandhandlers.js:310 ()@chrome://selenium-ide/content/selenium/scripts/selenium-executionloop.js:112 (6)@chrome://selenium-ide/content/selenium/scripts/selenium-executionloop.js:78 (6)@chrome://selenium-ide/content/selenium/scripts/htmlutils.js:60 , name -> SyntaxError storeLocation should return a String, so why am I getting this error, what is wrong with the syntax and how do I declare this command?

    Read the article

  • Rally Rest .NET API throws KeyNotFoundException when posting a new defect without required field value

    - by Triet Pham
    I've tried to post a new defect to Rally via the REST .NET API with the following code: var api = new RallyRestApi("<myusername>", "<mypassword>", "https://community.rallydev.com"); var defect = new DynamicJsonObject(); defect["Name"] = "Sample Defect"; defect["Description"] = "Test posting defect without required field value"; defect["Project"] = "https://trial.rallydev.com/slm/webservice/1.29/project/5808130051.js"; defect["SubmittedBy"] = "https://trial.rallydev.com/slm/webservice/1.29/user/5797741589.js"; defect["ScheduleState"] = "In-Progress"; defect["State"] = "Open"; CreateResult creationResult = api.Create("defect", defect); But the API throws a weird exception: System.Collections.Generic.KeyNotFoundException was unhandled Message=The given key was not present in the dictionary. Source=mscorlib StackTrace: at System.Collections.Generic.Dictionary`2.get_Item(TKey key) at Rally.RestApi.DynamicJsonObject.GetMember(String name) at Rally.RestApi.DynamicJsonObject.TryGetMember(GetMemberBinder binder, Object& result) at CallSite.Target(Closure , CallSite , Object ) at System.Dynamic.UpdateDelegates.UpdateAndExecute1[T0,TRet](CallSite site, T0 arg0) at Rally.RestApi.RallyRestApi.Create(String type, DynamicJsonObject obj) at RallyIntegrationSample.Program.Main(String[] args) in D:\Projects\qTrace\References\Samples\RallyIntegrationSample\Program.cs:line 24 The problem is that when I checked the Rally trace log file, it showed exactly what was wrong with the posting request: Rally.RestApi Post Response: { "CreateResult": { "_rallyAPIMajor":"1", "_rallyAPIMinor":"29", "Errors":["Validation error: Defect.Severity should not be null"], "Warnings":[] } } Instead of giving me a proper CreateResult object with the corresponding error information in its properties, the Rally REST .NET API throws an unexpected exception. Is this a mistake in the Rally REST .NET API, or should I take extra steps to get the CreateResult cleanly in case of errors returned by the Rally service? Many thanks for your help.
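    The trace log shows the server rejecting the create because Defect.Severity is null, and the toolkit then trips over that error response instead of surfacing it. The sketch below is a defensive workaround rather than a confirmed fix: supply the required field up front and guard the call. The Severity value is a placeholder (valid values depend on your workspace configuration), and the assumption that CreateResult exposes the Errors collection shown in the JSON is based only on the response above.
    // Continues the snippet above, replacing the original api.Create call.
    // Requires: using System; using System.Collections.Generic;
    // The Severity value is a placeholder -- use whatever values your workspace defines.
    defect["Severity"] = "Minor Problem";
    try
    {
        CreateResult creationResult = api.Create("defect", defect);
        foreach (string error in creationResult.Errors)
        {
            Console.WriteLine("Rally error: " + error);   // surfaces server-side validation errors
        }
    }
    catch (KeyNotFoundException)
    {
        // Hit when the toolkit cannot map the error response back into a DynamicJsonObject;
        // the Rally trace log (as above) still contains the underlying validation message.
        Console.WriteLine("Create failed; check the Rally trace log for validation errors.");
    }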

    Read the article

  • grails run-app connects to mysql but grails run-war doesn't

    - by damian
    Hi, I have an unexpected problem. I created a war with grails war. Then I deployed it in Tomcat. To my surprise the CRUD works fine, but I don't know what persistence it is using. So I did this test: compare grails prod run-app with grails prod run-war. The first works fine, and does connect to the mysql database. The other doesn't. This is my DataSource.groovy: dataSource { pooled = true driverClassName = "com.mysql.jdbc.Driver" username = "grails" password = "mysqlgrails" } hibernate { cache.use_second_level_cache=false cache.use_query_cache=false cache.provider_class='net.sf.ehcache.hibernate.EhCacheProvider' } // environment specific settings environments { development { dataSource { dbCreate = "update" // one of 'create', 'create-drop','update' url = "jdbc:mysql://some-key.amazonaws.com/MyDB" } } test { dataSource { dbCreate = "update" // one of 'create', 'create-drop','update' url = "jdbc:mysql://some-key.amazonaws.com/MyDB" } } production { dataSource { dbCreate = "update" url = "jdbc:mysql://some-key.amazonaws.com/MyDB" } } } I also extracted the war to see if I could find some data source configuration file, without success. More info: Grails version: 1.2.1 JVM version: 1.6.0_17 I also think this question is similar, but it doesn't have an answer.

    Read the article

< Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >