Search Results

Search found 28627 results on 1146 pages for 'case statement'.


  • How can I make a case for "dependency management"?

    - by C. Ross
    I'm currently trying to make a case for adopting dependency management for builds (à la Maven, Ivy, NuGet) and creating an internal repository for shared modules, of which we have over a dozen enterprise-wide. What are the primary selling points of this build technique? The ones I have so far: Eases the process of distributing and importing shared modules, especially version upgrades. Requires the dependencies of shared modules to be precisely documented. Removes shared modules from source control, speeding and simplifying checkouts/check-ins (when you have applications with 20+ libraries this is a real factor). Allows more control over, or at least awareness of, which third-party libraries are used in your organization. Are there any selling points that I'm missing? Are there any studies or articles giving improvement metrics?

    Read the article

  • Can I build or test a computer without a case?

    - by jasondavis
    I am in the process of building a really nice new PC right now. It's going to have a nice Lian Li case with the internals powder-coated black, and all the wires will be sleeved. So my problem is: I am getting parts in a couple of days, but my case will not be completed for about a month because it is on back order, plus time to powder coat it. I am purchasing many of my parts from newegg.com, and they claim you must return any dead parts within 30 days of the invoice for their warranty to replace bad parts. So is it possible for me to set up the PC without a case, just to test that the main parts are working correctly within the timeframe I am allowed? If this is possible, how do I deal with turning the system on/off without a power button? Or is there one on the motherboard? Thanks for any tips/advice.

    Read the article

  • What is happening in this T-SQL code? (Concatenating the results of a SELECT statement)

    - by Ben McCormack
    I'm just starting to learn T-SQL and could use some help in understanding what's going on in a particular block of code. I modified some code in an answer I received in a previous question, and here is the code in question: DECLARE @column_list AS varchar(max) SELECT @column_list = COALESCE(@column_list, ',') + 'SUM(Case When Sku2=' + CONVERT(varchar, Sku2) + ' Then Quantity Else 0 End) As [' + CONVERT(varchar, Sku2) + ' - ' + Convert(varchar,Description) +'],' FROM OrderDetailDeliveryReview Inner Join InvMast on SKU2 = SKU and LocationTypeID=4 GROUP BY Sku2 , Description ORDER BY Sku2 Set @column_list = Left(@column_list,Len(@column_list)-1) Select @column_list ---------------------------------------- 1 row is returned: ,SUM(Case When Sku2=157 Then Quantity Else 0 End) As [157 -..., SUM(Case ... The T-SQL code does exactly what I want, which is to make a single result based on the results of a query, which will then be used in another query. However, I can't figure out how the SELECT @column_list =... statement is putting multiple values into a single string of characters by being inside a SELECT statement. Without the assignment to @column_list, the SELECT statement would simply return multiple rows. How is it that by having the variable within the SELECT statement that the results get "flattened" down into one value? How should I read this T-SQL to properly understand what's going on?
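
    For what it's worth, a minimal sketch of the same accumulation pattern may make the mechanism clearer (it queries sys.databases purely as an illustrative source; the variable and column choices below are not from the question). In T-SQL, assigning to a variable inside a SELECT runs the assignment once for every row the query returns, so the variable keeps its value between rows and the string grows row by row. One caveat: SQL Server does not formally guarantee the order in which rows are concatenated this way, even with an ORDER BY.

        DECLARE @list varchar(max);

        -- The assignment runs once per row: each row appends its name to whatever
        -- @list already holds (COALESCE seeds the very first row with '').
        SELECT @list = COALESCE(@list + ', ', '') + name
        FROM sys.databases
        ORDER BY name;

        SELECT @list;  -- a single row such as 'master, model, msdb, tempdb'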

    Read the article

  • How to understand the "if ( obj.length === +obj.length )" JavaScript condition statement?

    - by humanityANDpeace
    I have run across a condition statement that I have some difficulty understanding. It looks like this (note the + sign on the right-hand side): obj.length === +obj.length. Can this condition and its purpose/syntax be explained? At first glance the statement looks like a dirty hack of some sort, but I am almost certain that underscore.js is a well-designed library, so there must be a better explanation. Background: I found this statement used in some functions of the underscore.js library (underscore.js annotated source). My guess is that this condition is somehow related to testing whether the variable obj is of Array type (but I am totally unsure). I have tried to test this using this code: var myArray = [1,2,3]; testResult1 = myArray.length === +myArray.length; console.log( testResult1 ); //prints true var myObject = { foo : "somestring", bar : 123 }; testResult2 = myObject.length === +myObject.length; console.log( testResult2 ); //prints false

    Read the article

  • SQL SERVER – Get All the Information of Database using sys.databases

    - by pinaldave
    Earlier I wrote blog article SQL SERVER – Finding Last Backup Time for All Database. In the response of this article I have received very interesting script from SQL Server Expert Matteo as a comment in the blog. He has written script using sys.databases which provides plenty of the information about database. I suggest you can run this on your database and know unknown of your databases as well. SELECT database_id, CONVERT(VARCHAR(25), DB.name) AS dbName, CONVERT(VARCHAR(10), DATABASEPROPERTYEX(name, 'status')) AS [Status], state_desc, (SELECT COUNT(1) FROM sys.master_files WHERE DB_NAME(database_id) = DB.name AND type_desc = 'rows') AS DataFiles, (SELECT SUM((size*8)/1024) FROM sys.master_files WHERE DB_NAME(database_id) = DB.name AND type_desc = 'rows') AS [Data MB], (SELECT COUNT(1) FROM sys.master_files WHERE DB_NAME(database_id) = DB.name AND type_desc = 'log') AS LogFiles, (SELECT SUM((size*8)/1024) FROM sys.master_files WHERE DB_NAME(database_id) = DB.name AND type_desc = 'log') AS [Log MB], user_access_desc AS [User access], recovery_model_desc AS [Recovery model], CASE compatibility_level WHEN 60 THEN '60 (SQL Server 6.0)' WHEN 65 THEN '65 (SQL Server 6.5)' WHEN 70 THEN '70 (SQL Server 7.0)' WHEN 80 THEN '80 (SQL Server 2000)' WHEN 90 THEN '90 (SQL Server 2005)' WHEN 100 THEN '100 (SQL Server 2008)' END AS [compatibility level], CONVERT(VARCHAR(20), create_date, 103) + ' ' + CONVERT(VARCHAR(20), create_date, 108) AS [Creation date], -- last backup ISNULL((SELECT TOP 1 CASE TYPE WHEN 'D' THEN 'Full' WHEN 'I' THEN 'Differential' WHEN 'L' THEN 'Transaction log' END + ' – ' + LTRIM(ISNULL(STR(ABS(DATEDIFF(DAY, GETDATE(),Backup_finish_date))) + ' days ago', 'NEVER')) + ' – ' + CONVERT(VARCHAR(20), backup_start_date, 103) + ' ' + CONVERT(VARCHAR(20), backup_start_date, 108) + ' – ' + CONVERT(VARCHAR(20), backup_finish_date, 103) + ' ' + CONVERT(VARCHAR(20), backup_finish_date, 108) + ' (' + CAST(DATEDIFF(second, BK.backup_start_date, BK.backup_finish_date) AS VARCHAR(4)) + ' ' + 'seconds)' FROM msdb..backupset BK WHERE BK.database_name = DB.name ORDER BY backup_set_id DESC),'-') AS [Last backup], CASE WHEN is_fulltext_enabled = 1 THEN 'Fulltext enabled' ELSE '' END AS [fulltext], CASE WHEN is_auto_close_on = 1 THEN 'autoclose' ELSE '' END AS [autoclose], page_verify_option_desc AS [page verify option], CASE WHEN is_read_only = 1 THEN 'read only' ELSE '' END AS [read only], CASE WHEN is_auto_shrink_on = 1 THEN 'autoshrink' ELSE '' END AS [autoshrink], CASE WHEN is_auto_create_stats_on = 1 THEN 'auto create statistics' ELSE '' END AS [auto create statistics], CASE WHEN is_auto_update_stats_on = 1 THEN 'auto update statistics' ELSE '' END AS [auto update statistics], CASE WHEN is_in_standby = 1 THEN 'standby' ELSE '' END AS [standby], CASE WHEN is_cleanly_shutdown = 1 THEN 'cleanly shutdown' ELSE '' END AS [cleanly shutdown] FROM sys.databases DB ORDER BY dbName, [Last backup] DESC, NAME Please let me know if you find this information useful. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
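
    If you only want to check a single database, the same idea applies: filter sys.databases with a WHERE clause. Here is a trimmed-down, self-contained sketch (the column selection and the 'tempdb' name are just examples, not part of Matteo's script):

        -- Check one database only; substitute your own database name for 'tempdb'.
        SELECT database_id,
               name AS dbName,
               state_desc,
               recovery_model_desc AS [Recovery model]
        FROM sys.databases DB
        WHERE DB.name = 'tempdb';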

    Read the article

  • C#: Adding Functionality to 3rd Party Libraries With Extension Methods

    - by James Michael Hare
    Ever have one of those third party libraries that you love but it's missing that one feature or one piece of syntactical candy that would make it so much more useful?  This, I truly think, is one of the best uses of extension methods.  I began discussing extension methods in my last post (which you find here) where I expounded upon what I thought were some rules of thumb for using extension methods correctly.  As long as you keep in line with those (or similar) rules, they can often be useful for adding that little extra functionality or syntactical simplification for a library that you have little or no control over. Oh sure, you could take an open source project, download the source and add the methods you want, but then every time the library is updated you have to re-add your changes, which can be cumbersome and error prone.  And yes, you could possibly extend a class in a third party library and override features, but that's only if the class is not sealed, static, or constructed via factories. This is the perfect place to use an extension method!  And the best part is, you and your development team don't need to change anything!  Simply add the using for the namespace the extensions are in! So let's consider this example.  I love log4net!  Of all the logging libraries I've played with, it, to me, is one of the most flexible and configurable logging libraries and it performs great.  But this isn't about log4net, well, not directly.  So why would I want to add functionality?  Well, it's missing one thing I really want in the ILog interface: ability to specify logging level at runtime. For example, let's say I declare my ILog instance like so:     using log4net;     public class LoggingTest     {         private static readonly ILog _log = LogManager.GetLogger(typeof(LoggingTest));         ...     }     If you don't know log4net, the details aren't important, just to show that the field _log is the logger I have gotten from log4net. So now that I have that, I can log to it like so:     _log.Debug("This is the lowest level of logging and just for debugging output.");     _log.Info("This is an informational message.  Usual normal operation events.");     _log.Warn("This is a warning, something suspect but not necessarily wrong.");     _log.Error("This is an error, some sort of processing problem has happened.");     _log.Fatal("Fatals usually indicate the program is dying hideously."); And there's many flavors of each of these to log using string formatting, to log exceptions, etc.  But one thing there isn't: the ability to easily choose the logging level at runtime.  Notice, the logging levels above are chosen at compile time.  Of course, you could do some fun stuff with lambdas and wrap it, but that would obscure the simplicity of the interface.  And yes there is a Logger property you can dive down into where you can specify a Level, but the Level properties don't really match the ILog interface exactly and then you have to manually build a LogEvent and... well, it gets messy.  I want something simple and sexy so I can say:     _log.Log(someLevel, "This will be logged at whatever level I choose at runtime!");     Now, some purists out there might say you should always know what level you want to log at, and for the most part I agree with them.  For the most party the ILog interface satisfies 99% of my needs.  
In fact, for most application logging yes you do always know the level you will be logging at, but when writing a utility class, you may not always know what level your user wants. I'll tell you, one of my favorite things is to write reusable components.  If I had my druthers I'd write framework libraries and shared components all day!  And being able to easily log at a runtime-chosen level is a big need for me.  After all, if I want my code to really be re-usable, I shouldn't force a user to deal with the logging level I choose. One of my favorite uses for this is in Interceptors -- I'll describe Interceptors in my next post and some of my favorites -- for now just know that an Interceptor wraps a class and allows you to add functionality to an existing method without changing it's signature.  At the risk of over-simplifying, it's a very generic implementation of the Decorator design pattern. So, say for example that you were writing an Interceptor that would time method calls and emit a log message if the method call execution time took beyond a certain threshold of time.  For instance, maybe if your database calls take more than 5,000 ms, you want to log a warning.  Or if a web method call takes over 1,000 ms, you want to log an informational message.  This would be an excellent use of logging at a generic level. So here was my personal wish-list of requirements for my task: Be able to determine if a runtime-specified logging level is enabled. Be able to log generically at a runtime-specified logging level. Have the same look-and-feel of the existing Debug, Info, Warn, Error, and Fatal calls.    Having the ability to also determine if logging for a level is on at runtime is also important so you don't spend time building a potentially expensive logging message if that level is off.  Consider an Interceptor that may log parameters on entrance to the method.  If you choose to log those parameter at DEBUG level and if DEBUG is not on, you don't want to spend the time serializing those parameters. Now, mine may not be the most elegant solution, but it performs really well since the enum I provide all uses contiguous values -- while it's never guaranteed, contiguous switch values usually get compiled into a jump table in IL which is VERY performant - O(1) - but even if it doesn't, it's still so fast you'd never need to worry about it. So first, I need a way to let users pass in logging levels.  Sure, log4net has a Level class, but it's a class with static members and plus it provides way too many options compared to ILog interface itself -- and wouldn't perform as well in my level-check -- so I define an enum like below.     namespace Shared.Logging.Extensions     {         // enum to specify available logging levels.         public enum LoggingLevel         {             Debug,             Informational,             Warning,             Error,             Fatal         }     } Now, once I have this, writing the extension methods I need is trivial.  Once again, I would typically /// comment fully, but I'm eliminating for blogging brevity:     namespace Shared.Logging.Extensions     {         // the extension methods to add functionality to the ILog interface         public static class LogExtensions         {             // Determines if logging is enabled at a given level.             
public static bool IsLogEnabled(this ILog logger, LoggingLevel level)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         return logger.IsDebugEnabled;                     case LoggingLevel.Informational:                         return logger.IsInfoEnabled;                     case LoggingLevel.Warning:                         return logger.IsWarnEnabled;                     case LoggingLevel.Error:                         return logger.IsErrorEnabled;                     case LoggingLevel.Fatal:                         return logger.IsFatalEnabled;                 }                                 return false;             }             // Logs a simple message - uses same signature except adds LoggingLevel             public static void Log(this ILog logger, LoggingLevel level, object message)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.Debug(message);                         break;                     case LoggingLevel.Informational:                         logger.Info(message);                         break;                     case LoggingLevel.Warning:                         logger.Warn(message);                         break;                     case LoggingLevel.Error:                         logger.Error(message);                         break;                     case LoggingLevel.Fatal:                         logger.Fatal(message);                         break;                 }             }             // Logs a message and exception to the log at specified level.             public static void Log(this ILog logger, LoggingLevel level, object message, Exception exception)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.Debug(message, exception);                         break;                     case LoggingLevel.Informational:                         logger.Info(message, exception);                         break;                     case LoggingLevel.Warning:                         logger.Warn(message, exception);                         break;                     case LoggingLevel.Error:                         logger.Error(message, exception);                         break;                     case LoggingLevel.Fatal:                         logger.Fatal(message, exception);                         break;                 }             }             // Logs a formatted message to the log at the specified level.              
public static void LogFormat(this ILog logger, LoggingLevel level, string format,                                          params object[] args)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.DebugFormat(format, args);                         break;                     case LoggingLevel.Informational:                         logger.InfoFormat(format, args);                         break;                     case LoggingLevel.Warning:                         logger.WarnFormat(format, args);                         break;                     case LoggingLevel.Error:                         logger.ErrorFormat(format, args);                         break;                     case LoggingLevel.Fatal:                         logger.FatalFormat(format, args);                         break;                 }             }         }     } So there it is!  I didn't have to modify the log4net source code, so if a new version comes out, i can just add the new assembly with no changes.  I didn't have to subclass and worry about developers not calling my sub-class instead of the original.  I simply provide the extension methods and it's as if the long lost extension methods were always a part of the ILog interface! Consider a very contrived example using the original interface:     // using the original ILog interface     public class DatabaseUtility     {         private static readonly ILog _log = LogManager.Create(typeof(DatabaseUtility));                 // some theoretical method to time         IDataReader Execute(string statement)         {             var timer = new System.Diagnostics.Stopwatch();                         // do DB magic                                    // this is hard-coded to warn, if want to change at runtime tough luck!             if (timer.ElapsedMilliseconds > 5000 && _log.IsWarnEnabled)             {                 _log.WarnFormat("Statement {0} took too long to execute.", statement);             }             ...         }     }     Now consider this alternate call where the logging level could be perhaps a property of the class          // using the original ILog interface     public class DatabaseUtility     {         private static readonly ILog _log = LogManager.Create(typeof(DatabaseUtility));                 // allow logging level to be specified by user of class instead         public LoggingLevel ThresholdLogLevel { get; set; }                 // some theoretical method to time         IDataReader Execute(string statement)         {             var timer = new System.Diagnostics.Stopwatch();                         // do DB magic                                    // this is hard-coded to warn, if want to change at runtime tough luck!             if (timer.ElapsedMilliseconds > 5000 && _log.IsLogEnabled(ThresholdLogLevel))             {                 _log.LogFormat(ThresholdLogLevel, "Statement {0} took too long to execute.",                     statement);             }             ...         }     } Next time, I'll show one of my favorite uses for these extension methods in an Interceptor.

    Read the article

  • BYOD is not a fashion statement; it’s an architectural shift - by Indus Khaitan

    - by Greg Jensen
    Ten years ago, if you asked a CIO, "How mobile is your enterprise?", the answer would be, "100%; we give BlackBerry to all our employees." A few things have changed since then: 1. Smartphone form factors have matured, especially after the launch of the iPhone. 2. Productivity applications and services that enable creation and consumption of digital content have grown rapidly. 3. Mobile data connectivity is pervasive. Two threads emerge from this change. Users are rapidly mingling their personas as an individual and as an employee: one second they are posting a picture of a fancy dinner on Facebook, the next they are creating an expense report for the same meal on the same device. Irrespective of the dual persona, a user's personal and corporate lives intermingle freely on a single piece of hardware, and more often than not it is the employee's personal smartphone being used for everything. A BYOD program enables IT to "control" an employee-owned device while enabling productivity. More often than not the objective of BYOD programs is financial: the employee, instead of the organization, pays for the device. More than a fancy device, BYOD initiatives have become a sort of fashion statement, of corporate productivity, of letting employees be in charge, and a show of corporate empathy that does not force an archaic form factor on users in a world where new devices launch every month. But BYOD is no longer just a means of shifting expense dollars and support costs. It does not matter who owns the device; it has to be protected. BYOD brings an architectural shift. BYOD is an architecture that assumes every device is vulnerable, not just the ones your employees have brought but also the ones organizations have purchased for their employees. It is an architecture that forces us to rethink how to provide productivity without compromising security. Why assume that every device is vulnerable? Mobile operating systems are evolving rapidly, with major upgrade announcements every other month; it is impossible for IT to catch up. More than that, users are savvier than before. While IT could install locks at the doors to keep intruders out, doing so may degrade productivity, which incentivizes users to bypass restrictions. A rapidly evolving mobile ecosystem has moving parts that are vulnerable. Hence, a mobile security platform built on the fundamental blocks of the BYOD architecture, such as identity defragmentation, IT control and data isolation, ensures that the sprawl of corporate data is contained. In the next post, we'll dig deeper into the BYOD architecture.

    Read the article

  • Why should main() be short?

    - by Stargazer712
    I've been programming for over 9 years, and according to the advice of my first programming teacher, I always keep my main() function extremely short. At first I had no idea why. I just obeyed without understanding, much to the delight of my professors. After gaining experience, I realized that if I designed my code correctly, having a short main() function just sortof happened. Writing modularized code and following the single responsibility principle allowed my code to be designed in "bunches", and main() served as nothing more than a catalyst to get the program running. Fast forward to a few weeks ago, I was looking at Python's souce code, and I found the main() function: /* Minimal main program -- everything is loaded from the library */ ... int main(int argc, char **argv) { ... return Py_Main(argc, argv); } Yay Python. Short main() function == Good code. Programming teachers were right. Wanting to look deeper, I took a look at Py_Main. In its entirety, it is defined as follows: /* Main program */ int Py_Main(int argc, char **argv) { int c; int sts; char *command = NULL; char *filename = NULL; char *module = NULL; FILE *fp = stdin; char *p; int unbuffered = 0; int skipfirstline = 0; int stdin_is_interactive = 0; int help = 0; int version = 0; int saw_unbuffered_flag = 0; PyCompilerFlags cf; cf.cf_flags = 0; orig_argc = argc; /* For Py_GetArgcArgv() */ orig_argv = argv; #ifdef RISCOS Py_RISCOSWimpFlag = 0; #endif PySys_ResetWarnOptions(); while ((c = _PyOS_GetOpt(argc, argv, PROGRAM_OPTS)) != EOF) { if (c == 'c') { /* -c is the last option; following arguments that look like options are left for the command to interpret. */ command = (char *)malloc(strlen(_PyOS_optarg) + 2); if (command == NULL) Py_FatalError( "not enough memory to copy -c argument"); strcpy(command, _PyOS_optarg); strcat(command, "\n"); break; } if (c == 'm') { /* -m is the last option; following arguments that look like options are left for the module to interpret. 
*/ module = (char *)malloc(strlen(_PyOS_optarg) + 2); if (module == NULL) Py_FatalError( "not enough memory to copy -m argument"); strcpy(module, _PyOS_optarg); break; } switch (c) { case 'b': Py_BytesWarningFlag++; break; case 'd': Py_DebugFlag++; break; case '3': Py_Py3kWarningFlag++; if (!Py_DivisionWarningFlag) Py_DivisionWarningFlag = 1; break; case 'Q': if (strcmp(_PyOS_optarg, "old") == 0) { Py_DivisionWarningFlag = 0; break; } if (strcmp(_PyOS_optarg, "warn") == 0) { Py_DivisionWarningFlag = 1; break; } if (strcmp(_PyOS_optarg, "warnall") == 0) { Py_DivisionWarningFlag = 2; break; } if (strcmp(_PyOS_optarg, "new") == 0) { /* This only affects __main__ */ cf.cf_flags |= CO_FUTURE_DIVISION; /* And this tells the eval loop to treat BINARY_DIVIDE as BINARY_TRUE_DIVIDE */ _Py_QnewFlag = 1; break; } fprintf(stderr, "-Q option should be `-Qold', " "`-Qwarn', `-Qwarnall', or `-Qnew' only\n"); return usage(2, argv[0]); /* NOTREACHED */ case 'i': Py_InspectFlag++; Py_InteractiveFlag++; break; /* case 'J': reserved for Jython */ case 'O': Py_OptimizeFlag++; break; case 'B': Py_DontWriteBytecodeFlag++; break; case 's': Py_NoUserSiteDirectory++; break; case 'S': Py_NoSiteFlag++; break; case 'E': Py_IgnoreEnvironmentFlag++; break; case 't': Py_TabcheckFlag++; break; case 'u': unbuffered++; saw_unbuffered_flag = 1; break; case 'v': Py_VerboseFlag++; break; #ifdef RISCOS case 'w': Py_RISCOSWimpFlag = 1; break; #endif case 'x': skipfirstline = 1; break; /* case 'X': reserved for implementation-specific arguments */ case 'U': Py_UnicodeFlag++; break; case 'h': case '?': help++; break; case 'V': version++; break; case 'W': PySys_AddWarnOption(_PyOS_optarg); break; /* This space reserved for other options */ default: return usage(2, argv[0]); /*NOTREACHED*/ } } if (help) return usage(0, argv[0]); if (version) { fprintf(stderr, "Python %s\n", PY_VERSION); return 0; } if (Py_Py3kWarningFlag && !Py_TabcheckFlag) /* -3 implies -t (but not -tt) */ Py_TabcheckFlag = 1; if (!Py_InspectFlag && (p = Py_GETENV("PYTHONINSPECT")) && *p != '\0') Py_InspectFlag = 1; if (!saw_unbuffered_flag && (p = Py_GETENV("PYTHONUNBUFFERED")) && *p != '\0') unbuffered = 1; if (!Py_NoUserSiteDirectory && (p = Py_GETENV("PYTHONNOUSERSITE")) && *p != '\0') Py_NoUserSiteDirectory = 1; if ((p = Py_GETENV("PYTHONWARNINGS")) && *p != '\0') { char *buf, *warning; buf = (char *)malloc(strlen(p) + 1); if (buf == NULL) Py_FatalError( "not enough memory to copy PYTHONWARNINGS"); strcpy(buf, p); for (warning = strtok(buf, ","); warning != NULL; warning = strtok(NULL, ",")) PySys_AddWarnOption(warning); free(buf); } if (command == NULL && module == NULL && _PyOS_optind < argc && strcmp(argv[_PyOS_optind], "-") != 0) { #ifdef __VMS filename = decc$translate_vms(argv[_PyOS_optind]); if (filename == (char *)0 || filename == (char *)-1) filename = argv[_PyOS_optind]; #else filename = argv[_PyOS_optind]; #endif } stdin_is_interactive = Py_FdIsInteractive(stdin, (char *)0); if (unbuffered) { #if defined(MS_WINDOWS) || defined(__CYGWIN__) _setmode(fileno(stdin), O_BINARY); _setmode(fileno(stdout), O_BINARY); #endif #ifdef HAVE_SETVBUF setvbuf(stdin, (char *)NULL, _IONBF, BUFSIZ); setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ); setvbuf(stderr, (char *)NULL, _IONBF, BUFSIZ); #else /* !HAVE_SETVBUF */ setbuf(stdin, (char *)NULL); setbuf(stdout, (char *)NULL); setbuf(stderr, (char *)NULL); #endif /* !HAVE_SETVBUF */ } else if (Py_InteractiveFlag) { #ifdef MS_WINDOWS /* Doesn't have to have line-buffered -- use unbuffered */ /* Any set[v]buf(stdin, ...) 
screws up Tkinter :-( */ setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ); #else /* !MS_WINDOWS */ #ifdef HAVE_SETVBUF setvbuf(stdin, (char *)NULL, _IOLBF, BUFSIZ); setvbuf(stdout, (char *)NULL, _IOLBF, BUFSIZ); #endif /* HAVE_SETVBUF */ #endif /* !MS_WINDOWS */ /* Leave stderr alone - it should be unbuffered anyway. */ } #ifdef __VMS else { setvbuf (stdout, (char *)NULL, _IOLBF, BUFSIZ); } #endif /* __VMS */ #ifdef __APPLE__ /* On MacOS X, when the Python interpreter is embedded in an application bundle, it gets executed by a bootstrapping script that does os.execve() with an argv[0] that's different from the actual Python executable. This is needed to keep the Finder happy, or rather, to work around Apple's overly strict requirements of the process name. However, we still need a usable sys.executable, so the actual executable path is passed in an environment variable. See Lib/plat-mac/bundlebuiler.py for details about the bootstrap script. */ if ((p = Py_GETENV("PYTHONEXECUTABLE")) && *p != '\0') Py_SetProgramName(p); else Py_SetProgramName(argv[0]); #else Py_SetProgramName(argv[0]); #endif Py_Initialize(); if (Py_VerboseFlag || (command == NULL && filename == NULL && module == NULL && stdin_is_interactive)) { fprintf(stderr, "Python %s on %s\n", Py_GetVersion(), Py_GetPlatform()); if (!Py_NoSiteFlag) fprintf(stderr, "%s\n", COPYRIGHT); } if (command != NULL) { /* Backup _PyOS_optind and force sys.argv[0] = '-c' */ _PyOS_optind--; argv[_PyOS_optind] = "-c"; } if (module != NULL) { /* Backup _PyOS_optind and force sys.argv[0] = '-c' so that PySys_SetArgv correctly sets sys.path[0] to '' rather than looking for a file called "-m". See tracker issue #8202 for details. */ _PyOS_optind--; argv[_PyOS_optind] = "-c"; } PySys_SetArgv(argc-_PyOS_optind, argv+_PyOS_optind); if ((Py_InspectFlag || (command == NULL && filename == NULL && module == NULL)) && isatty(fileno(stdin))) { PyObject *v; v = PyImport_ImportModule("readline"); if (v == NULL) PyErr_Clear(); else Py_DECREF(v); } if (command) { sts = PyRun_SimpleStringFlags(command, &cf) != 0; free(command); } else if (module) { sts = RunModule(module, 1); free(module); } else { if (filename == NULL && stdin_is_interactive) { Py_InspectFlag = 0; /* do exit on SystemExit */ RunStartupFile(&cf); } /* XXX */ sts = -1; /* keep track of whether we've already run __main__ */ if (filename != NULL) { sts = RunMainFromImporter(filename); } if (sts==-1 && filename!=NULL) { if ((fp = fopen(filename, "r")) == NULL) { fprintf(stderr, "%s: can't open file '%s': [Errno %d] %s\n", argv[0], filename, errno, strerror(errno)); return 2; } else if (skipfirstline) { int ch; /* Push back first newline so line numbers remain the same */ while ((ch = getc(fp)) != EOF) { if (ch == '\n') { (void)ungetc(ch, fp); break; } } } { /* XXX: does this work on Win/Win64? (see posix_fstat) */ struct stat sb; if (fstat(fileno(fp), &sb) == 0 && S_ISDIR(sb.st_mode)) { fprintf(stderr, "%s: '%s' is a directory, cannot continue\n", argv[0], filename); fclose(fp); return 1; } } } if (sts==-1) { /* call pending calls like signal handlers (SIGINT) */ if (Py_MakePendingCalls() == -1) { PyErr_Print(); sts = 1; } else { sts = PyRun_AnyFileExFlags( fp, filename == NULL ? "<stdin>" : filename, filename != NULL, &cf) != 0; } } } /* Check this environment variable at the end, to give programs the * opportunity to set it from Python. 
*/ if (!Py_InspectFlag && (p = Py_GETENV("PYTHONINSPECT")) && *p != '\0') { Py_InspectFlag = 1; } if (Py_InspectFlag && stdin_is_interactive && (filename != NULL || command != NULL || module != NULL)) { Py_InspectFlag = 0; /* XXX */ sts = PyRun_AnyFileFlags(stdin, "<stdin>", &cf) != 0; } Py_Finalize(); #ifdef RISCOS if (Py_RISCOSWimpFlag) fprintf(stderr, "\x0cq\x0c"); /* make frontend quit */ #endif #ifdef __INSURE__ /* Insure++ is a memory analysis tool that aids in discovering * memory leaks and other memory problems. On Python exit, the * interned string dictionary is flagged as being in use at exit * (which it is). Under normal circumstances, this is fine because * the memory will be automatically reclaimed by the system. Under * memory debugging, it's a huge source of useless noise, so we * trade off slower shutdown for less distraction in the memory * reports. -baw */ _Py_ReleaseInternedStrings(); #endif /* __INSURE__ */ return sts; } Good God Almighty...it is big enough to sink the Titanic. It seems as though Python did the "Intro to Programming 101" trick and just moved all of main()'s code to a different function called it something very similar to "main". Here's my question: Is this code terribly written, or are there other reasons reasons to have a short main function? As it stands right now, I see absolutely no difference between doing this and just moving the code in Py_Main() back into main(). Am I wrong in thinking this?

    Read the article

  • Webcast - September 20th at 9am PT/12pm ET - Nucleus Research Report: The Evolving Business Case for Tier 1 ERP in Midsize Companies

    - by LanaProut
    Join us on September 20th at 9am PT/12pm ET for a webcast featuring Rebecca Wettemann, Vice President of Research at Nucleus Research, and Jim Lein, Senior Director at Oracle. Together, they'll explore the recently published note, "The Evolving Business Case for Tier 1 ERP in Midsize Companies." Register today!

    Read the article

  • Customizing the Test Status on the TFS 2010 SSRS Stories Overview Report

    - by Bob Hardister
    This post shows how to customize the SQL query used by the Team Foundation Server 2010 SQL Server Reporting Services (SSRS) Stories Overview Report. The objective is to show test status for the current version while including user story status of the current and prior versions.  Why? Because we don’t copy completed user stories into the next release. We only want one instance of a user story for the product because we believe copies can get out of sync when they are supposed to be the same. In the example below, work items for the current version are on the area path root and prior versions are not on the area path root. However, you can use area path or iteration path criteria in the query as suits your needs. In any case, here’s how you do it: 1. Download a copy of the report RDL file as a backup 2. Open the report by clicking the edit down arrow and selecting “Edit in Report Builder” 3. Right click on the dsOverview Dataset and select Dataset Properties 4. Update the following SQL per the comments in the code: Customization 1 of 3 … -- Get the list deliverable workitems that have Test Cases linked DECLARE @TestCases Table (DeliverableID int, TestCaseID int); INSERT @TestCases     SELECT h.ID, flh.TargetWorkItemID     FROM @Hierarchy h         JOIN FactWorkItemLinkHistory flh             ON flh.SourceWorkItemID = h.ID                 AND flh.WorkItemLinkTypeSK = @TestedByLinkTypeSK                 AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)                 AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK         JOIN [CurrentWorkItemView] wi ON flh.TargetWorkItemID = wi.[System_ID]                  AND wi.[System_WorkItemType] = @TestCase             AND wi.ProjectNodeGUID  = @ProjectGuid              --  Customization 1 of 3: only include test status information when test case area path = root. Added the following 2 statements              AND wi.AreaPath = '{the root area path of the team project}'  …          Customization 2 of 3 … -- Get the Bugs linked to the deliverable workitems directly DECLARE @Bugs Table (ID int, ActiveBugs int, ResolvedBugs int, ClosedBugs int, ProposedBugs int) INSERT @Bugs     SELECT h.ID,         SUM (CASE WHEN wi.[System_State] = @Active THEN 1 ELSE 0 END) Active,         SUM (CASE WHEN wi.[System_State] = @Resolved THEN 1 ELSE 0 END) Resolved,         SUM (CASE WHEN wi.[System_State] = @Closed THEN 1 ELSE 0 END) Closed,         SUM (CASE WHEN wi.[System_State] = @Proposed THEN 1 ELSE 0 END) Proposed     FROM @Hierarchy h         JOIN FactWorkItemLinkHistory flh             ON flh.SourceWorkItemID = h.ID             AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK         JOIN [CurrentWorkItemView] wi             ON wi.[System_WorkItemType] = @Bug             AND wi.[System_Id] = flh.TargetWorkItemID             AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)             AND wi.[ProjectNodeGUID] = @ProjectGuid              --  Customization 2 of 3: only include test status information when test case area path = root. Added the following statement              AND wi.AreaPath = '{the root area path of the team project}'       GROUP BY h.ID … Customization 2 of 3 … -- Add the Bugs linked to the Test Cases which are linked to the deliverable workitems -- Walks the links from the user stories to test cases (via the tested by link), and then to -- bugs that are linked to the test case. We don't need to join to the test case in the work -- item history view. 
-- --    [WIT:User Story/Requirement] --> [Link:Tested By]--> [Link:any type] --> [WIT:Bug] INSERT @Bugs SELECT tc.DeliverableID,     SUM (CASE WHEN wi.[System_State] = @Active THEN 1 ELSE 0 END) Active,     SUM (CASE WHEN wi.[System_State] = @Resolved THEN 1 ELSE 0 END) Resolved,     SUM (CASE WHEN wi.[System_State] = @Closed THEN 1 ELSE 0 END) Closed,     SUM (CASE WHEN wi.[System_State] = @Proposed THEN 1 ELSE 0 END) Proposed FROM @TestCases tc     JOIN FactWorkItemLinkHistory flh         ON flh.SourceWorkItemID = tc.TestCaseID         AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)         AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK     JOIN [CurrentWorkItemView] wi         ON wi.[System_Id] = flh.TargetWorkItemID         AND wi.[System_WorkItemType] = @Bug         AND wi.[ProjectNodeGUID] = @ProjectGuid         --  Customization 3 of 3: only include test status information when test case area path = root. Added the following statement         AND wi.AreaPath = '{the root area path of the team project}'     GROUP BY tc.DeliverableID … 5. Save the report and you’re all set. Note: you may need to re-apply custom parameter changes like pre-selected sprints.
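
    If you are unsure what string to paste in for "{the root area path of the team project}", one option (just a sketch, reusing the same warehouse view and columns the report query above already references) is to list the distinct area paths and pick the root from the result:

        -- List distinct area paths per team project in the TFS warehouse.
        -- Uncomment the WHERE clause to limit it to one project; the GUID is a placeholder.
        SELECT DISTINCT wi.ProjectNodeGUID, wi.AreaPath
        FROM [CurrentWorkItemView] wi
        -- WHERE wi.ProjectNodeGUID = 'your-project-guid'
        ORDER BY wi.AreaPath;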

    Read the article

  • NINTENDO, EDCON and ALLEGIS GROUP @ Oracle Open World 2012 Conference Session (CON9418): The Business Case for Oracle Exalogic: A Customer Perspective

    - by Sanjeev Sharma
    Are you looking to deliver breakthrough performance for packaged and custom applications? For many front-office applications such as Oracle WebCenter Sites, Oracle Transportation Management, and Oracle's ATG and Siebel product families, improved performance leads directly to greater revenue or cost savings from the business - a compelling proposition. For back-office applications, improved performance has tangible benefits in terms of footprint reductions. For all applications, Oracle Exalogic and Oracle Exadata provide an engineered solution that delivers shorter time to value and lower operational costs. Edcon is a leading clothing, footwear and textiles (CFT) retailing group in southern Africa trading through a range of retail formats. The company has grown from opening its first store in 1929 to ten retail brands trading in over 1,000 stores in South Africa, Botswana, Namibia, Swaziland and Lesotho. Edcon's retail business has, through recent acquisitions, added top stationery and houseware brands as well as general merchandise to its CFT portfolio. Edcon was looking to consolidate its existing middleware components (WebLogic and Oracle SOA) and retail applications (Retek, Siebel and E-Business Suite) on a common platform and turned to Oracle Exalogic. With Oracle Exalogic, Edcon is able to derive significant hardware CAPEX savings, improve the response time of core business applications and mitigate operating risk. Hear senior business leaders from Nintendo, Edcon and Allegis Group discuss the business value of leveraging Oracle Exalogic at the following conference session at Oracle Open World 2012: Session: CON9418 - The Business Case for Oracle Exalogic: A Customer Perspective; Date: Monday, 1 Oct, 2012; Time: 1:45 pm - 2:45 pm (PST); Venue: Moscone South (306)

    Read the article

  • Is chroot the right choice for my use case?

    - by Anthony
    Backstory: I am working on setting up a Minecraft server and want to allow admins to have SSH access to the Minecraft server console and the appropriate MC server files, but not the whole system. The console provided by the Minecraft server is only available to the user that launched the process. In addition, the admins will need terminal access to some basic CLI tools such as wget, cp, mv, rm, and a text editor. Plan: I have already set up the SSH aspect of things, requiring pre-shared keys and whatnot. Set up a jailed environment in which all user activity will be contained. Set up user accounts. - The first user account will be the minecraft user. The minecraft user will start the MC server in a multiuser screen session and allow the other admins to attach to it. - Subsequent users should have their own /home directory for normal usage. Set up ACLs on the appropriate files to allow each user to edit the MC server files. No one will be doing system updates, nor will anyone be installing any programs, so I'll be the only user with sudo. The Issues: I don't want the SSH users to have access to the whole system. Users will still need to use wget or curl to update the MC server files. Is chroot the right tool for this use case, or is there something more appropriate for the job? I have no experience setting up a chroot environment and have found several tools to aid in this process. Jailkit seems to be the most robust, but it's not in the standard repos.

    Read the article

  • How do programmers balance upper- and lower-case naming styles for files and folders between work and life?

    - by sojyq
    I am a programmer from China, and I like to use English words to name my files and folders, whether for work or for personal use: for example, Movie, Work, QtProjects, Music, and so on. In Windows I keep the habit of capitalizing the first letter of every file or folder name. But now I work on Ubuntu, and I found that all file and folder names are lowercase, apart from default folders such as Music and Movie. I then realized that in the Linux world most people like to use all lowercase for their files and folders, for two reasons (1. Linux is case sensitive. 2. Lowercase names are faster to type in shell commands.). After work, when I switch from Linux to Windows, I am unsure whether to use the all-lowercase or the first-letter-uppercase style to name my files in Windows. I'm caught in a dilemma: I think all lowercase is more efficient, but first-letter uppercase is more readable. I thought about it for a long time and wanted to come up with a good way to balance the two naming styles, but I failed. I want to ask you: how do you balance the uppercase and lowercase habit across Windows, Mac, and Linux, between work and personal life? Thank you very much! (My current solution is to use all lowercase for files and folders when I am on Linux, but on Windows and Mac OS X I can't find a good reason to convince myself to use all lowercase; there, the first-letter-uppercase style is more readable and beautiful to me.)

    Read the article

  • Why does my finite state machine take so long to execute?

    - by BillyONeal
    Hello all :) I'm working on a state machine which is supposed to extract function calls of the form /* I am a comment */ //I am a comment perf("this.is.a.string.which\"can have QUOTES\"", 123456); where the extracted data would be perf("this.is.a.string.which\"can have QUOTES\"", 123456); from a file. Currently, to process a 41kb file, this process is taking close to a minute and a half. Is there something I'm seriously misunderstanding here about this finite state machine? #include <boost/algorithm/string.hpp> std::vector<std::string> Foo() { std::string fileData; //Fill filedata with the contents of a file std::vector<std::string> results; std::string::iterator begin = fileData.begin(); std::string::iterator end = fileData.end(); std::string::iterator stateZeroFoundLocation = fileData.begin(); std::size_t state = 0; for(; begin < end; begin++) { switch (state) { case 0: if (boost::starts_with(boost::make_iterator_range(begin, end), "pref(")) { stateZeroFoundLocation = begin; begin += 4; state = 2; } else if (*begin == '/') state = 1; break; case 1: state = 0; switch (*begin) { case '*': begin = boost::find_first(boost::make_iterator_range(begin, end), "*/").end(); break; case '/': begin = std::find(begin, end, L'\n'); } break; case 2: if (*begin == '"') state = 3; break; case 3: switch(*begin) { case '\\': state = 4; break; case '"': state = 5; } break; case 4: state = 3; break; case 5: if (*begin == ',') state = 6; break; case 6: if (*begin != ' ') state = 7; break; case 7: switch(*begin) { case '"': state = 8; break; default: state = 10; break; } break; case 8: switch(*begin) { case '\\': state = 9; break; case '"': state = 10; } break; case 9: state = 8; break; case 10: if (*begin == ')') state = 11; break; case 11: if (*begin == ';') state = 12; break; case 12: state = 0; results.push_back(std::string(stateZeroFoundLocation, begin)); }; } return results; } Billy3

    Read the article

  • DisplayObject not being displayed in AS3

    - by MarkSteve
    I have this class: public class IskwabolText extends Sprite { private var _tf:TextField; private var _tfmt:TextFormat; private var _size:Number; private var _text:String; public function IskwabolText(params:Object) { var defaultParams:Object = { color: 0x000000, background: false, backgroundColor: 0xFFFFFF, width: 0, height: 0, multiline: false, wordWrap: false }; // textfield _tf = new TextField(); _tf.antiAliasType = 'advanced'; _tf.embedFonts = true; _tf.type = 'dynamic'; _tf.selectable = false; // textformat _tfmt = new TextFormat(); set(defaultParams); set(params); } public function get(param:String):Object { switch (param) { case 'size': return _tfmt.size; case 'text': return _tf.text; case 'font': return _tfmt.font; case 'color': return _tfmt.color; case 'background': return _tf.background; case 'backgroundColor': return _tf.backgroundColor; case 'width': return _tf.width; case 'height': return _tf.height; case 'multiline': return _tf.multiline; case 'wordWrap': return _tf.multiline; default: return this[param]; } return null; } public function set(params:Object):Object { for (var i:String in params) { setParam(i, params[i]); } redraw(); return this; } private function setParam(param:String, value:Object):Object { switch (param) { case 'size': _tfmt.size = new String(value); break; case 'text': _tf.text = new String(value); break; case 'font': _tfmt.font = new String(value); break; case 'color': _tfmt.color = new uint(value); break; case 'background': _tf.background = new Boolean(value); break; case 'backgroundColor': _tf.backgroundColor = new uint(value); break; case 'width': _tf.width = new Number(value); break; case 'height': _tf.height = new Number(value); break; case 'multiline': _tf.multiline = new Boolean(value); break; case 'wordWrap': _tf.multiline = new Boolean(value); break; default: this[param] = value; break; } return this; } private function redraw():void { _tf.setTextFormat(_tfmt); if (contains(_tf)) removeChild(_tf); if (_tf.width == 0) _tf.width= _tf.textWidth+5; _tf.height = _tf.textHeight; addChild(_tf); } } But when I do this: public class Main extends Sprite { public function Main() { addChild(new IskwabolText({ size: 100, text: 'iskwabol', font: 'Default', // this is properly embedded color: 0x000000, x: stage.stageWidth / 2 - this.width / 2, y: 140 })); } } The child IskwabolText doesn't get displayed. What happening?

    Read the article

  • Does the order of the columns in a SELECT statement make a difference?

    - by Frank Computer
    This question was inspired by a previous question posted on SO, "Does the order of the WHERE clause make a difference?". Would it improve a SELECT statement's performance if the columns that appear in the WHERE clause were placed at the beginning of the SELECT list? Example: SELECT customer.id, transaction.id, transaction.efective_date, transaction.a, [...] FROM customer, transaction WHERE customer.id = transaction.id; I do know that limiting the column list to only the needed columns improves performance compared to SELECT *, because the column list is smaller and less data is retrieved.

    Read the article

  • Why does the return statement not print anything to the console?

    - by dyoverdx
    I Googled it and didn't find anything useful, so I decided to ask here. I can't use System.out.println for the project I am working on, so I used a return statement instead. Everything compiles just fine, but the return statement doesn't print anything to the console; the program just terminates. All I have in the code is an if-else statement that returns true or false. Why don't I see anything on the console? I am using Eclipse Juno's console, by the way.
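
    A minimal, self-contained Java sketch of what is going on (the method below is made up for illustration, since the original code isn't shown): a return statement only hands a value back to the caller; nothing reaches the console unless some code explicitly writes it to standard output.

    public class ReturnVsPrint {

        // Returning a value writes nothing to the console; it only makes
        // the value available to whoever called the method.
        static boolean isPositive(int n) {
            if (n > 0) {
                return true;
            } else {
                return false;
            }
        }

        public static void main(String[] args) {
            isPositive(5);                     // value is returned, then silently discarded
            System.out.println(isPositive(5)); // prints "true" because the caller passes it to println
        }
    }

    The same holds inside Eclipse: the Console view only shows what the program writes to standard output or standard error, so a bare return with nothing printing the result will never appear there.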

    Read the article

  • How to copy files from TFS to ClearCase?

    - by barry
    Can I use a PowerShell script to copy a set of files from a folder into ClearCase? I have the task of synchronising files from TFS to ClearCase: I need to take a set of files after a certain date from the TFS server and synchronise those files to ClearCase.
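
    If the tooling choice is flexible, the "files after a certain date" part can be sketched in plain Java with java.nio.file. The source path, target path, and cutoff date below are placeholders, the sketch assumes the TFS files are already on disk in a local workspace, and it does not perform any ClearCase check-in.

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.*;
    import java.nio.file.attribute.FileTime;
    import java.time.Instant;
    import java.util.stream.Stream;

    public class CopyNewerFiles {
        public static void main(String[] args) throws IOException {
            Path source = Paths.get("C:/tfs-workspace/project");    // placeholder: local TFS workspace
            Path target = Paths.get("M:/clearcase-view/project");   // placeholder: ClearCase view directory
            Instant cutoff = Instant.parse("2014-01-01T00:00:00Z"); // placeholder cutoff date

            try (Stream<Path> files = Files.walk(source)) {
                files.filter(Files::isRegularFile)
                     .filter(p -> isNewerThan(p, cutoff))
                     .forEach(p -> copyPreservingLayout(source, target, p));
            }
        }

        // True if the file was modified after the cutoff instant.
        static boolean isNewerThan(Path file, Instant cutoff) {
            try {
                FileTime modified = Files.getLastModifiedTime(file);
                return modified.toInstant().isAfter(cutoff);
            } catch (IOException e) {
                return false;
            }
        }

        // Copy the file into the target tree, recreating its relative directory structure.
        static void copyPreservingLayout(Path sourceRoot, Path targetRoot, Path file) {
            try {
                Path destination = targetRoot.resolve(sourceRoot.relativize(file));
                Files.createDirectories(destination.getParent());
                Files.copy(file, destination, StandardCopyOption.REPLACE_EXISTING);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }

    Getting the copied files checked in on the ClearCase side is a separate step with ClearCase's own tools, which this sketch deliberately leaves out.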

    Read the article

  • MyController class must produce a class instance according to the enum type.

    - by programmerist
    GenoTipController must produce a class instance according to the enum type. I have three classes: _Company, _Muayene, _Radyoloji. I also have a CompanyView class with a GetPersonel method. If you look at GenoTipController, my code needs refactoring: I need it to produce the right class for a given enum value. For example, for DataModelType.Radyoloji it must return radyoloji = new Radyoloji(). Can everything be handled in one switch statement? public class GenoTipController { public _Company GenerateCompany(DataModelType modeltype) { _Company company = null; switch (modeltype) { case DataModelType.Radyoloji: break; case DataModelType.Satis: break; case DataModelType.Muayene: break; case DataModelType.Company: company = new Company(); break; default: break; } return company; } public _Muayene GenerateMuayene(DataModelType modeltype) { _Muayene muayene = null; switch (modeltype) { case DataModelType.Radyoloji: break; case DataModelType.Satis: break; case DataModelType.Muayene: muayene = new Muayene(); break; case DataModelType.Company: break; default: break; } return muayene; } public _Radyoloji GenerateRadyoloji(DataModelType modeltype) { _Radyoloji radyoloji = null; switch (modeltype) { case DataModelType.Radyoloji: radyoloji = new Radyoloji(); break; case DataModelType.Satis: break; case DataModelType.Muayene: break; case DataModelType.Company: break; default: break; } return radyoloji; } } public class CompanyView { public static List GetPersonel() { GenoTipController controller = new GenoTipController(); _Company company = controller.GenerateCompany(DataModelType.Company); return company.GetPersonel(); } } public enum DataModelType { Radyoloji, Satis, Muayene, Company } }
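
    One common way to collapse the three almost-identical methods into a single factory is to key everything on the enum. The sketch below is in Java rather than C#, and the type names only stand in for the _Company/_Muayene/_Radyoloji hierarchy (GetPersonel and the other members are omitted), so treat it as an illustration of the shape of the refactoring rather than a drop-in replacement.

    // A single factory method driven by the enum value.
    interface DataModel { }

    class Company   implements DataModel { }
    class Muayene   implements DataModel { }
    class Radyoloji implements DataModel { }

    enum DataModelType {
        RADYOLOJI, SATIS, MUAYENE, COMPANY
    }

    class GenoTipController {

        // One switch replaces GenerateCompany/GenerateMuayene/GenerateRadyoloji:
        // each enum value maps to exactly one concrete type (or null if unsupported).
        DataModel generate(DataModelType type) {
            switch (type) {
                case RADYOLOJI: return new Radyoloji();
                case MUAYENE:   return new Muayene();
                case COMPANY:   return new Company();
                case SATIS:     // no concrete type for this value in the question's code
                default:        return null;
            }
        }
    }

    class CompanyView {
        public static void main(String[] args) {
            // Callers ask for the enum value and work against the shared interface.
            DataModel model = new GenoTipController().generate(DataModelType.COMPANY);
            System.out.println(model.getClass().getSimpleName()); // prints "Company"
        }
    }

    If the set of types is fixed, the switch can be removed entirely by giving each enum constant its own factory method (an abstract create() overridden per constant), at the cost of coupling the enum to the concrete classes.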

    Read the article

< Previous Page | 72 73 74 75 76 77 78 79 80 81 82 83  | Next Page >