Search Results

Search found 27581 results on 1104 pages for 'execute command'.


  • SQL Server Prevent Saving Changes That Require Table to be Re-created

    When working with SQL Server Management Studio, if you use the Design view of a table and attempt to make a change that will require the table to be dropped and re-added, you may receive an error message like this one: Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that require the table to be re-created. In truth, it's quite likely that you didn't enable such an option, despite the error dialog's accusations, as it is enabled by default when you install SQL Management Studio. You can learn more about the issue in the KB article, Error message when you try to save a table in SQL Server 2008: Saving changes is not permitted. Warning: As the above article states, it is not recommended that you turn off this option (at least not permanently), as it ensures that you do not accidentally change the schema of a table in a way that loses data. Do so at your peril. The simplest way to bypass this error is to go into Tools > Options > Designers and uncheck the option Prevent saving changes that require table re-creation. The main reason you will see this error is that you attempted to do any of the following to the table whose design you are saving: change the Allow Nulls setting for a column, reorder columns, change any column's data type, or add a new column. The recommended workaround is to script out the changes to a SQL file and execute them by hand, or to simply write your own T-SQL to make the changes.

    Read the article

  • How can I fix the "TERM environment variable not set" warning in Eclipse?

    - by Robert
    I'm running Ubuntu 12.04 LTS and working with Eclipse (Juno) on a C++ project. I keep getting "TERM environment variable not set" in the console while trying to run the program. I realize this means the variable needs to be set. My question is what should it be set to, and how do I set it? I've read in a few places that it should be 'xterm'. So I added export TERM=xterm to my .profile, and while Eclipse stopped giving me the warning, it would instead output unreadable garbage every now and then (not a side effect of the program). It did display the program output, but weird characters were intermixed. This leads me to believe it's not 'xterm' I should be setting TERM to, or that I'm setting it in an incorrect way. Any help is appreciated. Sample output:

        **TERM environment variable not set.**
        Please make a selection
        -----------------------
        1. Create a budget
        2. Edit a budget
        3. Display a budget
        4. Save a budget
        5. Load a budget
        6. Exit
        What is your selection: 1
        **TERM environment variable not set.**
        Enter the name of your budget: etc

    The program continues to execute as expected, but the message is highly annoying. As someone has commented, I do use system("clear"), which is likely the source of the warning. Either way, is this likely just an Eclipse issue, or something I can fix in Ubuntu/Linux?

    Read the article

  • Trying to install flash player on ubuntu 12.04

    - by Eric
    I am having trouble installing this program. I do not know how to locate the browser plugins directory, or how to change the directory in the terminal. Installation instructions:

    Installing using the plugin tar.gz: Unpack the plugin tar.gz and copy the files to the appropriate location.
        1. Save the plugin tar.gz locally and note the location the file was saved to.
        2. Launch terminal and change directories to the location the file was saved to.
        3. Unpack the tar.gz file. Once unpacked you will see the following:
            + libflashplayer.so
            + /usr
        4. Identify the location of the browser plugins directory, based on your Linux distribution and Firefox version.
        5. Copy libflashplayer.so to the appropriate browser plugins directory. At the prompt type: cp libflashplayer.so <BrowserPluginsLocation>
        6. Copy the Flash Player Local Settings configuration files to the /usr directory. At the prompt type: sudo cp -r usr/* /usr

    Installing the plugin using RPM:
        - As root, enter in terminal: rpm -Uvh <rpm_package_file>
        - Press the Enter key and follow the prompts

    Installing the standalone player:
        - Unpack the tar.gz file
        - To execute the standalone player, double-click it, or enter in terminal: ./flashplayer

    Read the article

  • PLINQ Adventure Land - WaitForAll

    - by adweigert
    PLINQ is awesome for getting a lot of work done fast, but one thing I haven't figured out yet is how to start work with PLINQ but only let it execute for a maximum amount of time and react if it is taking too long. So, as I must admit I am still learning PLINQ, I created this extension in that ignorance. It behaves similarly to ForAll<> but takes a timeout and returns false if the threads don't complete in the specified amount of time. Hope this helps someone else take PLINQ further; it definitely has helped for me ...

        public static bool WaitForAll<T>(this ParallelQuery<T> query, TimeSpan timeout, Action<T> action)
        {
            Contract.Requires(query != null);
            Contract.Requires(action != null);

            var exception = (Exception)null;
            var cts = new CancellationTokenSource();
            var forAllWithCancellation = new Action(delegate
            {
                try
                {
                    query.WithCancellation(cts.Token).ForAll(action);
                }
                catch (OperationCanceledException)
                {
                    // NOOP
                }
                catch (AggregateException ex)
                {
                    exception = ex;
                }
            });

            var mrs = new ManualResetEvent(false);
            var callback = new AsyncCallback(delegate { mrs.Set(); });
            var result = forAllWithCancellation.BeginInvoke(callback, null);

            if (mrs.WaitOne(timeout))
            {
                forAllWithCancellation.EndInvoke(result);
                if (exception != null)
                {
                    throw exception;
                }
                return true;
            }
            else
            {
                cts.Cancel();
                return false;
            }
        }
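    A minimal call site for the extension might look like this (a sketch; it assumes the method above lives in a static extension class that is in scope, and the per-item work is a placeholder):

        using System;
        using System.Linq;

        class Demo
        {
            static void Main()
            {
                var query = Enumerable.Range(0, 1000000).AsParallel();

                // Give the parallel work at most five seconds to finish.
                bool completed = query.WaitForAll(TimeSpan.FromSeconds(5), n =>
                {
                    // per-item work goes here
                });

                Console.WriteLine(completed
                    ? "All items processed."
                    : "Timed out; remaining work was cancelled.");
            }
        }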

    Read the article

  • Oracle VM 3: New Patch Set! (or Mega Millions winner?...you decide!..)

    - by Adam Hawley
    Today, my favorite number is 14736185 (despite the fact that it did not win me $249million in the MegaMillions lottery...or did it?)!  Why?  Because it is our latest patch release and it is chock-full of good stuff for the Oracle VM 3.0 user.  Oracle VM support customers can find it on My Oracle Support as patch number 14736185. This can be installed on Oracle VM 3.0.x systems as an incremental patch on top of 3.0.3, so if you previously ran 3.0.3 GA or updated to 3.0.3 patch 1 (build 150), this will just apply on top.  We're recommending you update to this patch set at your earliest convenience.  For more details, see below, but also see Wim Coekaerts' blog with related info here. Oracle VM Manager Update Instructions: Oracle VM Manager 3.0.2 or 3.0.3 can be upgraded to this Oracle VM Manager 3.0.3 patch update. Unzip the patch file on the server running Oracle VM Manager and execute the runUpgrader.sh script. # ./runUpgrader.sh Please refer to the Oracle VM Installation and Upgrade Guide for details. Upgrade Oracle VM Servers: It's highly recommended to update Oracle VM Server 3.0.3 with the latest patch update. Please review the Oracle VM 3.0.3 User Guide http://docs.oracle.com/cd/E26996_01/e18549/BABDDEGC.html for specific instructions on how to use the Yum repository to perform the server update. To receive notification of software updates delivered to Oracle Unbreakable Linux Network (ULN, http://linux.oracle.com) for Oracle VM, you can sign up here http://oss.oracle.com/mailman/listinfo/oraclevm-errata.  Additional Information: Oracle VM documentation is available on the Oracle Technology Network (OTN): http://www.oracle.com/technetwork/server-storage/vm/documentation/index.html  Please refer to the Oracle VM 3.0.3 Release Notes for a list of features and known issues. For the latest information, best practices white papers and webinars, please visit http://oracle.com/virtualization

    Read the article

  • Cron not able to successfully change background

    - by Solenoid
    I'm running 12.04 with a custom XML background (modification on Day of Ubuntu) that changes based on time of day. I've noticed that there's a significant delay between when the changes are scheduled to take place in the XML file and when they actually show up on the background. I've also noticed that when I resume from suspend I don't get the correct background image either. I've found that cycling the wallpaper manually will fix this, and I've written a script to automate the process. If I execute the script manually it works fine. However, when I schedule the script to run in cron, cron doesn't change the background. To make sure that the script was being run properly by cron, I had it create a directory in my home folder after running the background change, and the directory is created successfully, so I know cron is running and executing the script. My script:

        #!/bin/bash
        sleep 5
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU2.xml
        sleep 1
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU.xml
        sleep 1
        mkdir /home/zak/iscronworking
        exit

    Is cron just not able to access gsettings? The job is on my user crontab so it shouldn't be running as root. Alternately, is there any way to make Precise play nicely with XML wallpaper?

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Among the most useful predefined compiler macros in C/C++ are __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number where the value is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler-expanded macro, it is an attribute, but it serves exactly the same purpose. Now we get:

        CallerLineNumberAttribute == __LINE__
        CallerFilePathAttribute   == __FILE__
        CallerMemberNameAttribute == __FUNCTION__ (MSVC extension)

    The most important one is CallerMemberNameAttribute, which is very useful for implementing the INotifyPropertyChanged interface without the need to hard-code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute and you get the property name as a string, inserted directly by the C# compiler at compile time.

        public string UserName
        {
            get { return _userName; }
            set
            {
                _userName = value;
                RaisePropertyChanged(); // no more RaisePropertyChanged("UserName")!
            }
        }

        protected void RaisePropertyChanged([CallerMemberName] string member = "")
        {
            var copy = PropertyChanged;
            if (copy != null)
            {
                copy(this, new PropertyChangedEventArgs(member));
            }
        }

    Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose it for tracing, to get your hands on the method name of your caller (along with other info) very fast. All of this information is added at compile time, which is much faster than other approaches like walking the stack. The example on MSDN shows the usage of these attributes:

        public static void TraceMessage(string message,
            [CallerMemberName] string memberName = "",
            [CallerFilePath] string sourceFilePath = "",
            [CallerLineNumber] int sourceLineNumber = 0)
        {
            Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber);
        }

    When I think of tracing, I usually want an API which allows me to trace method enter and leave, and to trace messages with a severity like Info, Warning, Error. When I print a trace message it is very useful to print out the method and type name as well. So your API must either be able to pass the method and type name as strings, or extract them automatically by walking back one stack frame and fetching the information from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance.
    A usable Trace API might therefore look like this:

        enum TraceTypes
        {
            None = 0,
            EnterLeave = 1 << 0,
            Info = 1 << 1,
            Warn = 1 << 2,
            Error = 1 << 3
        }

        class Tracer : IDisposable
        {
            string Type;
            string Method;

            public Tracer(string type, string method)
            {
                Type = type;
                Method = method;
                if (IsEnabled(TraceTypes.EnterLeave, Type, Method))
                {
                }
            }

            private bool IsEnabled(TraceTypes traceTypes, string Type, string Method)
            {
                // Do checking here if tracing is enabled
                return false;
            }

            public void Info(string fmt, params object[] args) { }
            public void Warn(string fmt, params object[] args) { }
            public void Error(string fmt, params object[] args) { }

            public static void Info(string type, string method, string fmt, params object[] args) { }
            public static void Warn(string type, string method, string fmt, params object[] args) { }
            public static void Error(string type, string method, string fmt, params object[] args) { }

            public void Dispose()
            {
                // trace method leave
            }
        }

    This minimal trace API is very fast but hard to maintain, since you need to pass in the type and method name as hard-coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptably usable trace API should have a method signature like Tracexxx(... string fmt, params object[] args), we are not able to add additional optional parameters after the args array. If we put them before the format string, we would need to make them optional as well, which would mean the compiler would have to figure out which arguments are our trace message and arguments (not likely), or we would need to specify everything explicitly, just like before. There are ways around this by providing a myriad of overloads which in the end are routed to the very same method, but that is ugly. I am not sure whether nobody inside MS agrees that the above API is reasonable to have, or whether (more likely) the whole talk about using this feature for diagnostic purposes was not about a core feature at all but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params keyword, another set of optional arguments which are always filled by the compiler, but I do not know if this is an easy one. The thing I am missing much more is the not-provided CallerType attribute, but not in the way you would think of. In the API above I did add some filtering based on method and type to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check to see if tracing for this type is enabled at all. The data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which basically allows me to attach run-time data to any existing type object in a super fast way. The key to success is the usage of generics:

        class Tracer<T> : IDisposable
        {
            string Method;

            public Tracer(string method)
            {
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                }
            }

            public void Dispose()
            {
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                }
            }

            public static void Info(string fmt, params object[] args) { }

            /// <summary>
            /// Every type gets its own instance with a fresh set of variables to describe the
            /// current filter status.
            /// </summary>
            /// <typeparam name="UsingType"></typeparam>
            internal class TraceData<UsingType>
            {
                internal static TraceData<UsingType> Instance = new TraceData<UsingType>();

                public bool IsInitialized = false; // flag if we need to reinit the trace data in case of reconfigured trace settings at runtime
                public TraceTypes Enabled = TraceTypes.None; // enabled trace levels for this type
            }
        }

    We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a TraceData static instance which is, due to the nature of generics, a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type. The trace filter path for the types which are not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list, you can reconfigure tracing at runtime safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type, TraceData> object, but big hash tables have random memory access semantics, which is bad for cache locality, and you always need to pay for the lookup, which involves hash code generation, an equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal perf overhead. But it is cumbersome to always write the generic type argument explicitly, and worse, if you refactor code and move parts of it to other classes, it might be that you can no longer configure tracing correctly. I would therefore like to decorate my type with an attribute

        [CallerType]
        class Tracer<T> : IDisposable

    to tell the compiler to fill in the generic type argument automatically:

        class Program
        {
            static void Main(string[] args)
            {
                using (var t = new Tracer()) // equivalent to new Tracer<Program>()
                {

    That would be really useful and super fast, since you do not need to pass any type object around but still have full type info at hand. This change would be breaking if another non-generic type exists in the same namespace where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion, since you can already get conflicts today if two generic types of the same name are defined in different namespaces; this would be only a variation of that issue. When you think about this further, you can add more features, like tracing the exception in your Dispose method if the method is left with an exception, with that little trick I wrote about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can e.g.:

        Reconfigure tracing at run time.
        Take a memory dump when a specific method is left with a specific exception.
        Throw an exception when a specific trace statement is hit (useful for testing error conditions).
        Execute a passed delegate which e.g. dumps additional state when enabled.
        Write data to an in-memory ring buffer and dump it when specific events occur (e.g. a method is left with an exception, triggered from outside).
        Write data to an output device.
        ...
    This stuff is really useful to have when your code is in production on a mission-critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (ok, with .NET 4 not anymore, except if you enable a compatibility flag), where you would like to have a minidump, or you may have reached, after two weeks of operation, a state where you need a full memory dump at a specific point in time, in the middle of a transaction. On my older machine I get, with this super fast approach, 50 million traces/s when tracing is disabled. When I know that tracing is enabled for this type, I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further whether a specific action or output device is configured for this method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important anymore, since at that point I actually want to do something. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred to get direct access to the MethodHandle and not to the stringified version of it. And I really would like to see a CallerType attribute implemented to fill in the generic type argument of the call site, to augment the static CLR type data with run-time data.

    Read the article

  • What are some of the benefits of a "Micro-ORM"?

    - by Wayne M
    I've been looking into the so-called "micro ORMs" like Dapper and (to a lesser extent, as it relies on .NET 4.0) Massive, as these might be easier to implement at work than a full-blown ORM, since our current system is highly reliant on stored procedures and would require significant refactoring to work with an ORM like NHibernate or EF. What is the benefit of using one of these over a full-featured ORM? It seems like just a thin layer around a database connection that still forces you to write raw SQL. Perhaps I'm wrong, but I was always told the reason for ORMs in the first place is so you didn't have to write SQL; it could be automatically generated, especially for multi-table joins and mapping relationships between tables, which are a pain to do in pure SQL but trivial with an ORM. For instance, looking at an example of Dapper:

        var connection = new SqlConnection(); // setup here...
        var person = connection.Query<Person>("select * from people where PersonId = @personId", new { PersonId = 42 });

    How is that any different than using a hand-rolled ADO.NET data layer, except that you don't have to write the command, set the parameters, and I suppose map the entity back using a Builder? It looks like you could even use a stored procedure call as the SQL string. Are there other tangible benefits that I'm missing here where a micro ORM makes sense to use? I'm not really seeing how it's saving anything over the "old" way of using ADO.NET except maybe a few lines of code - you still have to figure out what SQL you need to execute (which can get hairy) and you still have to map relationships between tables (the part that IMHO ORMs help the most with).
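    For comparison, a hand-rolled ADO.NET version of that same query might look roughly like this (a sketch; the Person class, the people table layout, and the connection string are assumptions):

        using System.Data.SqlClient;

        public static Person GetPerson(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "select * from people where PersonId = @personId", connection))
            {
                command.Parameters.AddWithValue("@personId", 42);
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    if (!reader.Read()) return null;
                    // The manual reader-to-object mapping is the boilerplate
                    // a micro ORM generates for you.
                    return new Person
                    {
                        PersonId = (int)reader["PersonId"],
                        Name = (string)reader["Name"]
                    };
                }
            }
        }

    Dapper collapses the parameter setup and the mapping into the single Query<Person> call; whether that saving matters is essentially what the question asks.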

    Read the article

  • Updated Batch Best Practices

    - by ACShorten
    The Batch Best Practices whitepaper has been updated and published to My Oracle Support with the latest advice and new facilities. Two of the more interesting updates are: Addition of a Batch Cache Flush - Just like the online component, the batch component of the framework caches data. It is now possible to refresh the cache manually using a new batch job, F1-FLUSH. This is particularly useful if you execute long-running threadpool workers across many different batch processes (submitters). New EXTENDED execution mode - A new specialist mode has been introduced for sites that use a large number of submitters (concurrent threads) and are experiencing intermittent communication issues in the threadpool worker. This mode uses Oracle Coherence's Extend mode to allow submitters to be allocated to threadpool workers via proxy connections. It differs from CLUSTERED mode in that a submitter can be explicitly allocated to a specific threadpool worker via a proxy connection. This mode is only used for specific situations, and customers should continue to use CLUSTERED mode generally. The whitepaper outlines advice for these new facilities and provides advice for existing functionality. The whitepaper is available from My Oracle Support at Doc Id: 836362.1.

    Read the article

  • How to determine if you should use full or differential backup?

    - by Peter Larsson
    Or ask yourself, "How much of the database has changed since the last backup?". Here is a simple script that will tell you how much (in percent) has changed in the database since the last backup.

        -- Prepare staging table for all DBCC outputs
        DECLARE @Sample TABLE
                (
                    Col1 VARCHAR(MAX) NOT NULL,
                    Col2 VARCHAR(MAX) NOT NULL,
                    Col3 VARCHAR(MAX) NOT NULL,
                    Col4 VARCHAR(MAX) NOT NULL,
                    Col5 VARCHAR(MAX)
                )

        -- Some intermediate variables for controlling loop
        DECLARE @FileNum BIGINT = 1,
                @PageNum BIGINT = 6,
                @SQL VARCHAR(100),
                @Error INT,
                @DatabaseName SYSNAME = 'Yoda'

        -- Loop all files to the very end
        WHILE 1 = 1
            BEGIN
                BEGIN TRY
                    -- Build the SQL string to execute
                    SET     @SQL = 'DBCC PAGE(' + QUOTENAME(@DatabaseName) + ', ' + CAST(@FileNum AS VARCHAR(50)) + ', '
                                    + CAST(@PageNum AS VARCHAR(50)) + ', 3) WITH TABLERESULTS'

                    -- Insert the DBCC output in the staging table
                    INSERT  @Sample
                            (
                                Col1,
                                Col2,
                                Col3,
                                Col4
                            )
                    EXEC    (@SQL)

                    -- DCM pages exist at an interval
                    SET    @PageNum += 511232
                END TRY
                BEGIN CATCH
                    -- If error and first DCM page does not exist, all files are read
                    IF @PageNum = 6
                        BREAK
                    ELSE
                        -- If no more DCM, increase filenum and start over
                        SELECT  @FileNum += 1,
                                @PageNum = 6
                END CATCH
            END

        -- Delete all records not related to diff information
        DELETE FROM    @Sample
        WHERE   Col1 NOT LIKE 'DIFF%'

        -- Split the range
        UPDATE  @Sample
        SET     Col5 = PARSENAME(REPLACE(Col3, ' - ', '.'), 1),
                Col3 = PARSENAME(REPLACE(Col3, ' - ', '.'), 2)

        -- Remove last parenthesis
        UPDATE  @Sample
        SET     Col3 = RTRIM(REPLACE(Col3, ')', '')),
                Col5 = RTRIM(REPLACE(Col5, ')', ''))

        -- Remove initial information about filenum
        UPDATE  @Sample
        SET     Col3 = SUBSTRING(Col3, CHARINDEX(':', Col3) + 1, 8000),
                Col5 = SUBSTRING(Col5, CHARINDEX(':', Col5) + 1, 8000)

        -- Prepare data outtake
        ;WITH cteSource(Changed, [PageCount])
        AS (
            SELECT      Changed,
                        SUM(COALESCE(ToPage, FromPage) - FromPage + 1) AS [PageCount]
            FROM        (
                            SELECT  CAST(Col3 AS INT) AS FromPage,
                                    CAST(NULLIF(Col5, '') AS INT) AS ToPage,
                                    LTRIM(Col4) AS Changed
                            FROM    @Sample
                        ) AS d
            GROUP BY    Changed
            WITH ROLLUP
        )
        -- Present the final result
        SELECT  COALESCE(Changed, 'TOTAL PAGES') AS Changed,
                [PageCount],
                100.E * [PageCount] / SUM(CASE WHEN Changed IS NULL THEN 0 ELSE [PageCount] END) OVER () AS Percentage
        FROM    cteSource

    Read the article

  • Displaying the Saved Pictures in the Windows Phone 8 emulator

    - by Laurent Bugnion
    One cool feature of the Windows Phone emulator is that it allows you to select pictures from your app (using the PhotoChooserTask) without having to try your app on a physical device. For example, this code (which I used in some of my recent presentations) will trigger the Photo Chooser UI to be displayed on the emulator too:

        private Action<IEnumerable<IImageFileInfo>> _callback;

        public void SelectFiles(Action<IEnumerable<IImageFileInfo>> callback)
        {
            var task = new PhotoChooserTask
            {
                ShowCamera = true
            };
            task.Completed += TaskCompleted;
            _callback = callback;
            task.Show();
        }

        void TaskCompleted(object sender, PhotoResult e)
        {
            if (e.Error == null && e.ChosenPhoto != null && _callback != null)
            {
                var fileName = e.OriginalFileName
                    .Substring(e.OriginalFileName.LastIndexOf("\\") + 1);
                var info = new FileViewModel(e.ChosenPhoto, fileName);
                var infos = new List<IImageFileInfo> { info };
                _callback(infos);
            }
        }

    In Windows Phone 8 however, when you execute this code, you will be shown an almost empty Photo Chooser UI. Notice that the "Saved Pictures" album is missing. At first I thought it was just not there at all, but you can actually restore it with the following steps:

        1. Press the Windows button.
        2. On the main screen, press Photos.
        3. Press Albums.
        4. Open the so-called "8" photo album.
        5. Press Back until you are back in your app and try again.

    This time you will see the saved pictures, and can perform your tests in more realistic conditions! Happy coding! Laurent Bugnion (GalaSoft)

    Read the article

  • Help in (re)designing my Swing application

    - by Harihar Das
    I have developed a Swing application that controls the execution of several script-like jobs. I need to display the interim output of the jobs concurrently. I have followed MVC while writing the application. The application is working as expected. But of late I have the following requirements in hand: A few of the script jobs need special user privileges to execute, so as to access specialized resources. There seems to be no way in Java to impersonate a different user while running an application [examined in this question]. Also, trying to run the Swing application as a scheduled task in Windows is not helping. Once started, the jobs should keep running even if the user logs off after starting them. I am thinking of separating the execution logic from the UI and running that as a service, and introducing JMS between the two layers so as to store/retrieve the interim output. Note: I need to run this application on Windows. Any ideas on meeting my requirements will be highly appreciated.

    Read the article

  • Oracle Named a Leader by Forrester in Enterprise Business Intelligence Platforms

    - by Paulo Folgado
    According to an October 2010 report from independent analyst firm Forrester Research, Inc., Oracle is a leader in enterprise business intelligence (BI) platforms. Forrester Research defines BI as a set of methodologies, processes, architectures, and technologies that transform raw data into meaningful and useful information, which can then be used to enable more effective strategic, tactical, and operational insights and decision-making. Written by Forrester vice president and principal analyst Boris Evelson, The Forrester Wave: Enterprise Business Intelligence Platforms, Q4 2010 states that "Oracle has built new metadata-level [Oracle Business Intelligence Enterprise Edition 11g] integration with Oracle Fusion Middleware and Oracle Fusion Applications and continues to differentiate with its versatile ROLAP engine." The report goes on, "And in addition to closing some gaps it had in 10.x versions such as lack of RIA functionality, [the Oracle Business Intelligence Enterprise Edition 11g] actually leapfrogs the competition with the Common Enterprise Information Model (CEIM)--including the ability to define actions and execute processes right from BI metadata across BI and ERP applications." "We're pleased that the Forrester Wave recognizes Oracle Business Intelligence as a leading enterprise BI platform," said Paul Rodwick, vice president of product management, Oracle Business Intelligence. Key Innovations in Oracle Business Intelligence 11g: Released in August 2010, Oracle Business Intelligence 11g represents the industry's most complete, integrated, and scalable suite of BI products. Encompassing thousands of new features and enhancements, the latest release offers three key areas of innovation:

        * A unified environment. The industry's first unified environment for accessing and analyzing data across relational, OLAP, and XML data sources.
        * Enhanced usability. A new, integrated scorecard application, plus innovations in reporting, visualization, search, and collaboration.
        * Enhanced performance, scalability, and security. Deeper integration with Oracle Enterprise Manager 11g and other components of Oracle Fusion Middleware provides lower management costs and increased performance, scalability, and security.

    Read the entire Forrester Wave Report.

    Read the article

  • Simultaneous AI in turn-based games

    - by Eduard Strehlau
    I want to hack together a roguelike. Now I have thought about entity and world representation and run into a quite big problem. If you want all the AI to act simultaneously, you would normally (in cellular automata, for example) just copy the cell buffer and let all actions of individual cells depend on the copy. Actions which are no longer valid after some cell ahead of the cell you are currently operating on changed the original environment (blocking the path) are just ignored or reapplied against the "current" (between turns) environment. After all cells have acted, you copy the current map to the buffer again. Now, for an environment with complex AI and big (data-wise) entities, the copying would take too long. So I thought you could put every action an entity makes into a queue (making no changes to the environment) and execute the whole queue after everyone has taken their move. Everything on this queue is really interacting entities, so if an entity tries to attack another entity, it sends a message to it; the consequences of the attack would be visible next turn, either by just examining the entity or asking the entity for data. This would remove problems like what happens if an entity dies in the middle of the queue but has actions queued or is messaged later on (all messages to it would go to null, and the messages from the entity would either just be sent or deleted; I haven't decided yet). But what would happen if a monster spawns a fireball which by itself tracks the player (in the same turn)? Should I add the fireball to the environment beforehand, i.e. make a change to the environment before executing the action list, or just add the ball to the "needs update" list as a special case, so it doesn't exist in the environment yet but still operates on it, spawning after evaluating the action list? Are there any solutions or papers on this subject which I can take a look at? EDIT: I don't need information on writing a roguelike; I need information on turn-based AI with respect to a complex environment.
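    A minimal sketch of the action-queue idea described above (the Entity, AttackAction, and TurnResolver names are illustrative, not from any particular engine):

        using System;
        using System.Collections.Generic;

        class Entity
        {
            public bool Alive = true;
            public readonly string Name;
            public Entity(string name) { Name = name; }
        }

        abstract class GameAction
        {
            public Entity Actor;
            public abstract void Execute();
        }

        class AttackAction : GameAction
        {
            public Entity Target;

            public override void Execute()
            {
                // Messages involving dead entities simply fizzle ("go to null").
                if (Actor.Alive && Target.Alive)
                    Console.WriteLine(Actor.Name + " hits " + Target.Name);
            }
        }

        class TurnResolver
        {
            private readonly Queue<GameAction> _pending = new Queue<GameAction>();

            // Entities enqueue intents during the decision phase;
            // nothing touches the environment yet.
            public void Enqueue(GameAction action)
            {
                _pending.Enqueue(action);
            }

            // Called once per turn, after every entity has chosen its action.
            public void ResolveTurn()
            {
                while (_pending.Count > 0)
                    _pending.Dequeue().Execute();
            }
        }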

    Read the article

  • Is mocking for unit testing appropriate in this scenario?

    - by Vinoth Kumar
    I have written around 20 methods in Java and all of them call some web services. None of these web services are available yet. To carry on with the server-side coding, I hard-coded the results that the web service is expected to give. Can we unit test these methods? As far as I know, unit testing means mocking the input values and seeing how the program responds. Is mocking both input and output values meaningful? Edit: The answers here suggest I should be writing unit test cases. Now, how can I write them without modifying the existing code? Consider the following sample code (hypothetical code):

        public int getAge()
        {
            Service s = locate("ageservice"); // line 1
            int age = s.execute(empId);       // line 2
            return age;                       // line 3
        }

    Now how do we mock the output? Right now, I am commenting out line 1 and replacing line 2 with int age = 50. Is this right? Can anyone point me to the right way of doing it?
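    One common answer is to invert the dependency so a test can substitute a stub for the unavailable service. A sketch of that refactoring (written in C# for consistency with the other examples on this page; the IAgeService name and the returned value of 50 mirror the hypothetical code above):

        // The service lookup becomes an interface...
        public interface IAgeService
        {
            int Execute(int empId);
        }

        public class EmployeeInfo
        {
            private readonly IAgeService _ageService;
            private readonly int _empId;

            // ...injected via the constructor instead of locate("ageservice").
            public EmployeeInfo(IAgeService ageService, int empId)
            {
                _ageService = ageService;
                _empId = empId;
            }

            public int GetAge()
            {
                return _ageService.Execute(_empId);
            }
        }

        // In a test, a trivial stub stands in for the missing web service:
        class StubAgeService : IAgeService
        {
            public int Execute(int empId) { return 50; }
        }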

    Read the article

  • Ubuntu Server 12.04 as a router. Problem with DNS

    - by Lorenzo
    I have a VirtualBox lab made up of 4 Windows 2008 R2 servers (DC/DNS, SQL, SHAREPOINT, EXCHANGE) that are configured with static IP addresses, with NICs attached to the internal network. Everything works. I have the requirement to execute some tests that also access external services available on the internet. To keep things clean and similar to the production environment, I have installed another VM with Ubuntu Server 12.04 64-bit, configured (I hope) to work as a router as described in this post. This VM has two network interfaces: the first is bridged with the host and is used as a WAN connection, and the other is attached to the internal network with its own static IP address on the internal subnet. But the Windows servers do not actually connect to the internet, while the Unix one does. I ran a route command; this is the result:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags  Metric  Ref  Use  Iface
        default         10.69.121.1     0.0.0.0         UG     100     0    0    eth0
        10.69.121.0     *               255.255.255.0   U      0       0    0    eth0
        192.168.83.0    *               255.255.255.0   U      0       0    0    eth1

    Can somebody help me with this configuration? :) Thanks! Addendum: I forgot to mention that one of the Windows servers hosts a DNS service, for which I should maybe configure a forwarding server, but I do not know exactly which server to forward to... :(

    Read the article

  • When does a Project Manager start in a project?

    - by johndoucette
    From a colleague of mine… "As a project manager, when do you typically like to get initially involved in the project? Is it better for the PM to be rolled on during the project kick-off, the first week, or is it better to roll on in the second week when things settle down?" My textbook answer is "the Project Manager is responsible for the successful completion and delivery of the expected outcome of the project through the following major tasks:"

        1. Identifying requirements
        2. Establishing clear and achievable objectives
        3. Balancing the competing demands for quality, scope, time, and cost
        4. Adapting the specifications, plans, and approach to the different concerns and expectations of the various stakeholders

    However, my colleague is often a lead technical consultant coming into a project alone to help a client solve a complex problem. As Magenic consultants, we all possess many of the "project managing" skills I talked about above and tend to be responsible for items #1 and #2, as well as the actual architecture/design tasks early in a project. When the real development begins and there is no PM involved, the project will quickly get harder to execute unless items #3 and #4 are assigned to a project manager. In software development, context switching between coding and other administrative activities is the hardest skill to perfect. In my experience, I have rarely been introduced to someone who has mastered it. This is the limbo I was in when I was asked to become a PM -- while still developing. "Put down the code" was not only a profound statement but, looking back, a necessary one. Unless you are lucky enough to have found that one developer who is a superman, asking your developers (internal corporate or consultant) to perform #3 and #4 tasks will surely take more time, allow opportunity for more scope, and eventually cost more. Project managers are crucial to the overall success of a project, and I prefer them to start by taking ownership of delivery on day one.

    Read the article

  • Distinction between API and frontend-backend

    - by Jason
    I'm trying to write a "standard" business web site. By "standard", I mean this site runs the usual HTML5, CSS and Javascript for the front-end, a back-end (to process stuff), and MySQL for the database. It's a basic CRUD site: the front-end just makes pretty whatever the database has in store; the back-end writes to the database whatever the user enters and does some processing. Just like most sites out there. In creating my Github repositories to begin coding, I've realized I don't understand the distinction between the front-end, the back-end, and the API. Another way of phrasing my question is: where does the API come into this picture? I'm going to list some more details and then the questions I have - hopefully this gives you guys a better idea of what my actual question is, because I'm so confused that I don't know the specific question to ask. Some more details: I'd like to try the Model-View-Controller pattern. I don't know if this changes the question/answer. The API will be RESTful. I'd like my back-end to use my own API instead of allowing the back-end to cheat and call special queries; I think this style is more consistent. My questions: Does the front-end call the back-end, which calls the API? Or does the front-end just call the API instead of calling the back-end? Does the back-end just execute an API call, and the API returns control to the back-end (where the back-end acts as the ultimate controller, delegating tasks)? Long and detailed answers explaining the role of the API alongside the front-end and back-end are encouraged. If the answer depends on the model of programming (models other than the Model-View-Controller pattern), please describe these other ways of thinking of the API. Thanks. I'm very confused.

    Read the article

  • Exadata X3 In-Memory Database Machine: To be or not to be

    - by Luis Moreno Campos
    Since Larry Ellison announced Oracle Exadata X3 as the new generation of the Database Machine, he has positioned the product in the in-memory database arena. And that annoyed some people. We all know that in-memory databases are the ones that *only* execute in memory and use the other layers of storage for persistence (mainly disk). Oracle Database has always been a technology that uses memory as a caching mechanism, and that hasn't changed, nor will it change with Oracle Database 12c. So this is the central point of fuss when it comes to announcing an engineered system as an in-memory database, when in fact it still runs Oracle Database - not vanilla, but still the same product. Let me tell you, purist people out there: when you find no new ground-breaking point to get all excited about, you decide to bash it and go against its claims. It's not like a car manufacturer that launches a minivan in the market and calls it a sports car; we are talking about a fundamental change in the ILM stack: level 2 of caching is now self-sufficient. It's not DRAM? Who cares; it still lets you put amounts of data in flash that weren't possible up until now, so I guess Oracle can name it whatever Larry wants, because in the end it's something never done before. Now let's imagine that you hop on the pure in-memory database bandwagon. You would be stuck with a database technology that lags hundreds of light years behind the Oracle Database in man-hours of innovations and features. Do you really want to travel back in time? Remember, the first rule about time travelling is that "Security is not Guaranteed". Your choice. LMC

    Read the article

  • General questions regarding open-source licensing

    - by ndg
    I'm looking to release an open-source iOS software project, but I'm very new to the licensing side of things. While I'm aware that the majority of answers here will not be from lawyers, I'd appreciate it if anyone could steer me in the right direction. With the exception of the following requirements, I'm happy for developers to largely do whatever they want with the project's source code. I'm not interested in any copyleft licensing schemes, and while I'd like to encourage attribution in derivative works, it is not required. As such, my requirements are as follows: Original source can be distributed and re-distributed (verbatim) both commercially and non-commercially, as long as the original copyright information, website link and license are maintained. I wish to retain rights to any of the multimedia distributed as part of the project (sound effects, graphics, logo marks, etc). Such assets will be included to allow other developers to easily execute the project, but cannot be re-distributed in any manner. I wish to retain rights to the application's name and branding. Further to selecting an applicable license, I have the following questions: The project makes use of a number of third-party libraries (all licensed under variants of the MIT license). I've included individual licenses within the source (and application) and believe I've met all requirements expressed in these licenses, but is there anything else that needs to be done before distributing them as part of my open-source project? Also included in my project is a single proprietary, closed-source library that's used to power a small part of the application. I'm obviously unable to include this in the source release, but what's the best way of handling this? Should I simply weak-link the library and exclude it entirely from the Git project?

    Read the article

  • Why does this game loop stop my process from responding?

    - by Ben
    I implemented a fixed time step loop for my C# game. All it does at the moment is make a square bounce around the screen. The problem I'm having is that when I execute the program, I can't close it from the window's close button and the cursor is stuck on the "busy" icon. I have to go into Visual Studio and stop the program manually. Here's the loop at the moment:

        public void run()
        {
            int updates = 0;
            int frames = 0;
            double msPerTick = 1000.0 / 60.0;
            double threshhold = 0;
            long lastTime = getCurrentTime();
            long lastTimer = getCurrentTime();
            while (true)
            {
                long currTime = getCurrentTime();
                threshhold += (currTime - lastTime) / msPerTick;
                lastTime = currTime;
                while (threshhold >= 1)
                {
                    update();
                    updates++;
                    threshhold -= 1;
                }
                this.Refresh();
                frames++;
                if ((getCurrentTime() - lastTimer) >= 1000)
                {
                    this.Text = updates + " updates and " + frames + " frames per second";
                    updates = 0;
                    frames = 0;
                    lastTimer += 1000;
                }
            }
        }

    Why is this happening?
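    The likely cause: if run() is called on the UI thread, the while (true) loop never returns control to the Windows message pump, so close, paint and input messages are never processed and the form is reported as "busy". A hedged sketch of one common quick fix, pumping messages each iteration (moving the loop onto its own thread, or using a game framework, is generally the better design):

        // Fragment; these members belong in the same Form subclass as run().
        private volatile bool running = true;

        protected override void OnFormClosing(FormClosingEventArgs e)
        {
            running = false; // let the loop exit so the window can close
            base.OnFormClosing(e);
        }

        public void run()
        {
            // ... same timing setup as above ...
            while (running)
            {
                // ... update/threshold logic, this.Refresh(), FPS counter ...
                Application.DoEvents(); // let Windows process queued messages
            }
        }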

    Read the article

  • OWB 11gR2 – JDBC Helper Utility

    - by David Allan
    One of the common queries when importing tables via JDBC with 11gR2 is determining why the import wizard doesn't display the tables that you think it should. I often just use the script below to dump out the schemas, tables and columns that the JDBC driver is returning. This is useful in a few areas: to figure out what schema name is returned, so you can double-check it against the schema name you have used in the location (this is used in the DatabaseMetaData.getTables API call within the basic JDBC metadata import); to figure out the data types returned from the JDBC driver when you see columns skipped because of "no datatype supported" messages; also…I can do it via scripting and don't need to recompile classes and stuff :-) Edit the tcl script and set the JDBC driver, the connection URL and the username and password (they are at the bottom of the script); the script then calls a basic tcl procedure which writes the schemas, tables and columns with various properties to standard out. For example, I executed it using the XML JDBC driver from ODI over a simple customers XML file and it writes out the metadata. You can add more details as you need and execute from the OMBPlus panel within OWB. Download the sample tcl jdbc script here. There is a bunch of really useful stuff on OTN documenting this area (start with the white paper here) that is worth checking out, all related to the OWB SDK, covering everything from platform definitions, custom metadata importers, application adapters, code templates etc. You can find a bunch of goodies on the OWB SDK here.

    Read the article

  • I can't uninstall Ubuntu software

    - by cunix
        root@cunix:/home/cunix# sudo apt-get remove fern-wifi-cracker
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages were automatically installed and are no longer required:
          libqt4-test libqt4-sql-mysql mysql-common libqt4-xmlpatterns libqt4-help
          python-qt4 python-sip libqt4-sql-sqlite libqt4-sql macchanger
          libqt4-designer libmysqlclient16 python-scapy libqt4-scripttools
        Use 'apt-get autoremove' to remove them.
        The following packages will be REMOVED:
          fern-wifi-cracker
        0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
        After this operation, 3,514kB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 167661 files and directories currently installed.)
        Removing fern-wifi-cracker ...
        dpkg (subprocess): unable to execute installed pre-removal script (/var/lib/dpkg/info/fern-wifi-cracker.prerm): Exec format error
        dpkg: error processing fern-wifi-cracker (--remove):
         subprocess installed pre-removal script returned error exit status 2
        Errors were encountered while processing:
         fern-wifi-cracker
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    How to uninstall?

    Read the article

  • Why is testing MVC Views frowned upon?

    - by Peter Bernier
    I'm currently setting the groundwork for an ASP.NET MVC application, and I'm looking into what sort of unit tests I should be prepared to write. I've seen in multiple places people essentially saying "don't bother testing your views; there's no logic, it's trivial, and it will be covered by an integration test". I don't understand how this has become the accepted wisdom. Integration tests serve an entirely different purpose than unit tests. If I break something, I don't want to know a half-hour later when my integration tests break; I want to know immediately. Sample scenario: let's say we're dealing with a standard CRUD app with a Customer entity. The customer has a name and an address. At each level of testing, I want to verify that the customer retrieval logic gets both the name and the address properly. To unit-test the repository, I write an integration test to hit the database. To unit-test the business rules, I mock out the repository, feed the business rules appropriate data, and verify my expected results are returned. What I'd like to do: to unit-test the UI, I mock out the business rules, set up my expected customer instance, render the view, and verify that the view contains the appropriate values for the instance I specified. What I'm stuck doing: to test the UI, I write an integration test, set up an appropriate login, create the required data in the database, open a browser, navigate to the customer, and verify the resulting page contains the appropriate values for the instance I specified. I realize that there is overlap between the two scenarios discussed above, but the key difference is the time and effort required to set up and execute the tests. If I (or another dev) remove the address field from the view, I don't want to wait for the integration test to discover this. I want it discovered and flagged by a unit test that runs multiple times daily. I get the feeling that I'm just not grasping some key concept. Can someone explain why wanting immediate test feedback on the validity of an MVC view is a bad thing? (Or if not bad, then not the expected way to get said feedback?)

    Read the article

  • SQL SERVER – A Puzzle Part 2 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value

    - by pinaldave
    Before continuing with this blog post, please read the first part of the SEQUENCE puzzle here: A Puzzle – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value, where we played a simple guessing game about predicting the next value. The answer to that puzzle is shared as a comment on the blog post. Now here is the next puzzle, based on yesterday's puzzle. First execute the script which I have written here. The only difference from yesterday's script is that I have removed MINVALUE 1 from the syntax. Now guess what the next value will be, as requested in the query.

        USE TempDB
        GO
        -- Create sequence
        CREATE SEQUENCE dbo.SequenceID AS BIGINT
        START WITH 3
        INCREMENT BY 1
        MAXVALUE 5
        CYCLE
        NO CACHE;
        GO
        -- Following will return 3
        SELECT next value FOR dbo.SequenceID;
        -- Following will return 4
        SELECT next value FOR dbo.SequenceID;
        -- Following will return 5
        SELECT next value FOR dbo.SequenceID;
        -- Following will return which number
        SELECT next value FOR dbo.SequenceID;
        -- Clean up
        DROP SEQUENCE dbo.SequenceID;
        GO

    The above script gave me the following result set: 3 is the starting value and 5 is the maximum value. Once the sequence reaches its maximum value, what happens, and why? I (kindly) suggest you try to answer this question without running this code in SQL Server 2012. I am very confident that, irrespective of the SQL Server version you are running, you will have great learning. I will follow up with the answer in the comments below. Recently my friend Vinod Kumar wrote an excellent blog post on SQL Server 2012: Using SEQUENCE; you can head over there to learn sequences in detail. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
