Search Results

Search found 27870 results on 1115 pages for 'standard output'.


  • Advantages of Client/Server Architecture over Mainframe Architecture

    Originally, mainframe architectures relied on a centralized host server that processed data and returned it for display on a dumb terminal. These dumb terminals had no processing power of their own and could only display data that was sent from the mainframe. Application architecture changed completely with the advent of N-Tier architecture. The N-Tier architecture replaced the dumb terminals with standard PCs that could process data for themselves. This allowed applications to be decentralized. Further, this type of architecture also breaks up the roles found within a mainframe by extracting the web interface, application logic and data access into three separate parts, so that each can be extended and distributed as the demands on an application increase.

    Read the article

  • Stored procedure Naming conventions?

    - by Chris
    One of our senior developers has stated that we should be using a naming convention for stored procedures with an "objectVerb" style of naming, such as "MemberGetById", instead of a "verbObject" style ("GetMemberById"). The reasoning for this standard is that all related stored procedures would be grouped together by object rather than by action. While I see the logic in this way of naming things, this is the first time I have seen stored procedures named this way. My opinion of the naming convention is that the name cannot be read naturally, and it takes some time to determine what the words are saying and what the procedure might do. What is your take on this? Which way is the more common way of naming a stored procedure, and what types of stored procedure naming conventions have you used or gone by?
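    To make the comparison concrete, here is a minimal sketch of the two conventions side by side; the dbo.Member table and its columns are invented for illustration:

        -- "objectVerb" style: procedures for the same entity sort together
        CREATE PROCEDURE dbo.MemberGetById @MemberId INT
        AS
        SELECT MemberId, MemberName FROM dbo.Member WHERE MemberId = @MemberId;
        GO

        -- "verbObject" style: reads more naturally in isolation
        CREATE PROCEDURE dbo.GetMemberById @MemberId INT
        AS
        SELECT MemberId, MemberName FROM dbo.Member WHERE MemberId = @MemberId;
        GO

    In an alphabetically sorted object list, the first style lines up MemberDelete, MemberGetById and MemberUpdate next to each other, which is exactly the grouping argument being made.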

    Read the article

  • Does the deprecation of mysql_* functions in PHP carry over to other databases (MSSQL)?

    - by MobyD
    I'm not talking about MySQL, I'm talking about Microsoft SQL Server. I've been aware of PDO for quite some time now; the standard mysql_* functions are dangerous and should be avoided. http://php.net/manual/en/function.mysql-connect.php But what about the mssql_* functions in PHP? They are, for most purposes, identical sets of functions, but the PHP page describing mssql_* carries no warning of deprecation. http://us.php.net/manual/en/function.mssql-connect.php There are PDO drivers available for MSSQL, but they aren't quite as readily available or as widely used as the MySQL drivers. Ideally, it looks to me like I should get them working and move from mssql_* to PDO like I have with MySQL, but is it as big a priority? Is there some hidden safety to MSSQL that means it's exempt from all of the mysql_* hatred of late? Or is its obscurity as a backend the only reason there hasn't been more PDO encouragement?

    Read the article

  • StorageTek SL8500 Release 8.3 available

    - by uwes
    Boosting Performance and Enhancing Reliability with StorageTek SL8500 Release 8.3! We're pleased to announce the availability of SL8500 8.3 firmware, which supports partitioning for library complexes, library media validation, drive tray serial number reporting, and StorageTek T10000D tape drives! StorageTek SL8500 8.3 supports the following:
    Library Complex Partitioning: provides support for partitioning across an SL8500 library complex; supports up to 16 partitions per library complex.
    Library Media Validation: utilizing StorageTek Library Console, users can initiate media verifications with our StorageTek T10000C/D tape drives on StorageTek T10000 T1 and T2 media; supports 3 scan options: basic verify, standard verify and complete verify.
    Please read the Sales Bulletin (Firmware release 8.31) on Oracle HW TRC for more details. (If you are not registered on Oracle HW TRC, click here and follow the instructions.) For more information go to: Oracle.com Tape Page | Oracle Technology Network Tape Page

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ are __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number at which the value is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler-expanded macro but an attribute, yet it serves exactly the same purpose. Now we get:

    CallerLineNumberAttribute  == __LINE__
    CallerFilePathAttribute    == __FILE__
    CallerMemberNameAttribute  == __FUNCTION__ (MSVC extension)

    The most important one is CallerMemberNameAttribute, which is very useful to implement the INotifyPropertyChanged interface without the need to hard-code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute, and the property name is inserted as a string directly by the C# compiler at compile time.

    public string UserName
    {
        get { return _userName; }
        set
        {
            _userName = value;
            RaisePropertyChanged(); // no more RaisePropertyChanged("UserName")!
        }
    }

    protected void RaisePropertyChanged([CallerMemberName] string member = "")
    {
        var copy = PropertyChanged;
        if (copy != null)
        {
            // PropertyChangedEventHandler takes the sender first, then the event args
            copy(this, new PropertyChangedEventArgs(member));
        }
    }

    Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose this feature for tracing, to get your hands on the method name of your caller (along with other things) very quickly. All the information is added at compile time, which is much faster than other approaches like walking the stack. The example on MSDN shows the usage of this attribute:

    public static void TraceMessage(string message,
                                    [CallerMemberName] string memberName = "",
                                    [CallerFilePath] string sourceFilePath = "",
                                    [CallerLineNumber] int sourceLineNumber = 0)
    {
        Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber);
    }

    When I think of tracing, I usually want an API which allows me to:

    Trace method enter and leave
    Trace messages with a severity like Info, Warning, Error

    When I print a trace message, it is very useful to print out the method and type name as well. So your API must either be able to pass the method and type name as strings, or extract them automatically by walking back one stack frame and fetching the information from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance.
    A usable trace API might therefore look like:

    enum TraceTypes
    {
        None       = 0,
        EnterLeave = 1 << 0,
        Info       = 1 << 1,
        Warn       = 1 << 2,
        Error      = 1 << 3
    }

    class Tracer : IDisposable
    {
        string Type;
        string Method;

        public Tracer(string type, string method)
        {
            Type = type;
            Method = method;
            if (IsEnabled(TraceTypes.EnterLeave, Type, Method))
            {
            }
        }

        private bool IsEnabled(TraceTypes traceTypes, string Type, string Method)
        {
            // Do checking here if tracing is enabled
            return false;
        }

        public void Info(string fmt, params object[] args) { }
        public void Warn(string fmt, params object[] args) { }
        public void Error(string fmt, params object[] args) { }
        public static void Info(string type, string method, string fmt, params object[] args) { }
        public static void Warn(string type, string method, string fmt, params object[] args) { }
        public static void Error(string type, string method, string fmt, params object[] args) { }

        public void Dispose()
        {
            // trace method leave
        }
    }

    This minimal trace API is very fast but hard to maintain, since you need to pass in the type and method name as hard-coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptably usable trace API should have a method signature like Tracexxx(... string fmt, params object[] args), we are not able to add additional optional parameters after the args array. If we put them before the format string, we would need to make them optional as well, which would mean the compiler would have to figure out which arguments are our trace message and its arguments (not likely), or we would need to specify everything explicitly just like before. There are ways around this by providing a myriad of overloads which in the end are routed to the very same method, but that is ugly. I am not sure whether nobody inside MS agrees that the above API is reasonable to have, or whether (more likely) the whole talk about using this feature for diagnostic purposes was not a core feature at all, but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params keyword, another set of optional arguments which are always filled in by the compiler, but I do not know if this is an easy one.

    The thing I am missing much more is the not-provided CallerType attribute, but not in the way you would think of. In the API above I did add some filtering based on method and type, to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check to see whether tracing for this type is enabled at all. The data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which basically allows me to attach run-time data to any existing type object in a super fast way. The key to success is the usage of generics.

    class Tracer<T> : IDisposable
    {
        string Method;

        public Tracer(string method)
        {
            Method = method;
            if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
            {
            }
        }

        public void Dispose()
        {
            if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
            {
            }
        }

        public static void Info(string fmt, params object[] args) { }

        /// <summary>
        /// Every type gets its own instance with a fresh set of variables to describe the
        /// current filter status.
        /// </summary>
        /// <typeparam name="UsingType"></typeparam>
        internal class TraceData<UsingType>
        {
            internal static TraceData<UsingType> Instance = new TraceData<UsingType>();

            public bool IsInitialized = false; // flag if we need to reinit the trace data in case of reconfigured trace settings at run time
            public TraceTypes Enabled = TraceTypes.None; // enabled trace levels for this type
        }
    }

    We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a static TraceData instance which, due to the nature of generics, is a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type. The trace filter performance for the types which are not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list, you can reconfigure tracing at run time safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type,TraceData> object, but big hash tables have random memory access semantics, which is bad for cache locality, and you always have to pay for the lookup, which involves hash code generation, an equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal perf overhead.

    But it is cumbersome to write the generic type argument explicitly every time, and worse, if you refactor code and move parts of it to other classes, it might be that you can no longer configure tracing correctly. I would therefore like to decorate my type with an attribute

    [CallerType]
    class Tracer<T> : IDisposable

    to tell the compiler to fill in the generic type argument automatically:

    class Program
    {
        static void Main(string[] args)
        {
            using (var t = new Tracer()) // equivalent to new Tracer<Program>()
            {
            }
        }
    }

    That would be really useful and super fast, since you do not need to pass any type object around but you do have full type information at hand. This change would be breaking if another non-generic type existed in the same namespace where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion, since you can already get conflicts today if two generic types of the same name are defined in different namespaces. This would only be a variation of that issue. When you think about this further, you can add more features, like tracing the exception in your Dispose method if the method is left with an exception, with that little trick I wrote about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can e.g.:

    Reconfigure tracing at run time.
    Take a memory dump when a specific method is left with a specific exception.
    Throw an exception when a specific trace statement is hit (useful for testing error conditions).
    Execute a passed delegate which e.g. dumps additional state when enabled.
    Write data to an in-memory ring buffer and dump it when specific events occur (e.g. a method is left with an exception, triggered from outside).
    Write data to an output device.
    ...
    This stuff is really useful to have when your code is in production on a mission-critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (OK, with .NET 4 not any more, except if you enable a compatibility flag), where you would like to have a minidump; or you have reached, after two weeks of operation, a state where you need a full memory dump at a specific point in time in the middle of a transaction. On my older machine I get, with this super fast approach, 50 million traces/s when tracing is disabled. When I know that tracing is enabled for a type, I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further whether a specific action or output device is configured for this method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important any more, since at that point I actually want to do something. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred direct access to the MethodHandle and not to the stringified version of it. And I really would like to see a CallerType attribute implemented to fill in the generic type argument of the call site, to augment the static CLR type data with run-time data.

    Read the article

  • [News] The 10 reasons why HTML 5 is not about to replace Silverlight

    While more and more voices are being raised calling for Flash and Silverlight to be replaced by the upcoming HTML 5 specification, Bart Czernicki explains the 10 reasons why that is not about to happen: "HTML 5 is the next update to the HTML standard that powers the web. There are many new exciting features being added like the canvas element, local offline storage, drag and drop and video playback support. HTML needed to evolve and added these features in order to stay relevant as the de facto markup language that can provide a rich web experience.". A well-argued case, absolutely worth reading.

    Read the article

  • Mass Metadata Updates with Folders

    - by Kyle Hatlestad
    With the release of WebCenter Content PS5, a new folder architecture called 'Framework Folders' was introduced.  This is meant to replace the folder architecture of 'Folders_g'.  While the concepts of a folder structure and access to those folders through Desktop Integration Suite remain the same, the underlying architecture of the component has been completely rewritten.  One of the main goals of the new folders is to scale better at large volumes and to remove the limitation of 1000 content items or sub-folders within a folder.  Along with the new architecture, it has a new look, and a few additional features have been added.  One of those features is Query Folders.  These are folders that are populated simply by a query rather than by literally putting items within them.  This is something that the Library has provided, but it always took an administrator to define them through the Web Layout Editor.  Now users can quickly define query folders anywhere within the standard folder hierarchy. [ Read More ]
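    Conceptually, a query folder relates to a literal folder much as a view relates to a table; as a rough, hypothetical analogy in SQL (schema invented for illustration):

        -- a literal folder holds the items explicitly placed in it;
        -- a query folder's membership is computed on the fly, like this view
        CREATE VIEW RecentPressReleases AS
        SELECT ContentId, Title, ReleaseDate
        FROM ContentItem
        WHERE DocType = 'PressRelease'
          AND ReleaseDate >= DATEADD(MONTH, -6, GETDATE());

    Items "appear" in the folder as soon as they match the criteria, with nothing filed by hand.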

    Read the article

  • Developing a SQL Server Function in a Test-Harness.

    - by Phil Factor
    /* Many times, it is a lot quicker to take some pain up-front and make a proper development/test harness for a routine (function or procedure) rather than think 'I'm feeling lucky today!'. Then, you keep code and harness together from then on. Every time you run the build script, it runs the test harness too.  The advantage is that, if the test harness persists, it is much less likely that someone, probably 'you-in-the-future', unintentionally breaks the code. If you store the actual code for the procedure as well as the test harness, then any bugs in functionality are likely to break the build, rather than introduce subtle bugs later on that could even slip through testing and get into production.

    This is just an example of what I mean.

    Imagine we had a database that was storing addresses with embedded UK postcodes. We really wouldn't want that. Instead, we might want the postcode in one column and the address in another. In effect, we'd want to extract the entire postcode string and place it in another column. This might be part of a table refactoring, or it could easily be part of a process of importing addresses from another system. We could easily decide to do this with a function that takes in a table as its parameter and produces a table as its output. This is all very well, but we'd need to work on it, and test it whenever we make an alteration. By its very nature, a routine like this either works very well or horribly, but there is every chance that you might introduce subtle errors by fiddling with it, and if young Thomas, the rather cocky developer who has just joined, touches it, it is bound to break.

    Right: we drop the function we're developing and re-create it. This is so we avoid the problem of having to change CREATE to ALTER when working on it. */
    IF EXISTS(SELECT * FROM sys.objects WHERE name LIKE 'ExtractPostcode'
                                          AND schema_name(schema_ID)='Dbo')
        DROP FUNCTION dbo.ExtractPostcode
    GO

    /* we drop the user-defined table type and recreate it */
    IF EXISTS(SELECT * FROM sys.types WHERE name LIKE 'AddressesWithPostCodes'
                                        AND schema_name(schema_ID)='Dbo')
      DROP TYPE dbo.AddressesWithPostCodes
    GO
    /* we drop the user-defined table type and recreate it */
    IF EXISTS(SELECT * FROM sys.types WHERE name LIKE 'OutputFormat'
                                        AND schema_name(schema_ID)='Dbo')
      DROP TYPE dbo.OutputFormat
    GO

    /* and now create the table type that we can use to pass the addresses to the function */
    CREATE TYPE AddressesWithPostCodes AS TABLE
    (
      AddressWithPostcode_ID INT IDENTITY PRIMARY KEY, --because they work better that way!
      Address_ID INT NOT NULL, --the address we are fixing
      TheAddress VARCHAR(100) NOT NULL --The actual address
    )
    GO
    CREATE TYPE OutputFormat AS TABLE
    (
      Address_ID INT PRIMARY KEY, --the address we are fixing
      TheAddress VARCHAR(1000) NULL, --The actual address
      ThePostCode VARCHAR(105) NOT NULL -- The Postcode
    )
    GO
    CREATE FUNCTION ExtractPostcode(@AddressesWithPostCodes AddressesWithPostCodes READONLY)
    /**
    summary:   >
      This table-valued function takes a table type as a parameter, containing a table of
      addresses along with their integer IDs. Each address has an embedded postcode
      somewhere in it, but not consistently in a particular place. The routine takes out
      the postcode and puts it in its own column, passing back a table where the integer
      key is accompanied by the address without the (first) postcode, and the postcode.
      If there is no postcode, then the address is returned unchanged and the postcode
      will be a blank string.
    Author: Phil Factor
    Revision: 1.3
    date: 20 May 2014
    returns:   >
      Table of Address_ID, TheAddress and ThePostCode.
    **/
    RETURNS @FixedAddresses TABLE
    (
      Address_ID INT, --the address we are fixing
      TheAddress VARCHAR(1000) NULL, --The actual address
      ThePostCode VARCHAR(105) NOT NULL -- The Postcode
    )
    AS -- body of the function
    BEGIN
      DECLARE @BlankRange VARCHAR(10)
      SELECT @BlankRange = CHAR(0)+'- '+CHAR(160)
      INSERT INTO @FixedAddresses(Address_ID, TheAddress, ThePostCode)
      SELECT Address_ID,
             CASE WHEN start>0
                  THEN REPLACE(STUFF([Theaddress],start,matchlength,''),'  ',' ')
                  ELSE TheAddress END AS TheAddress,
             CASE WHEN Start>0
                  THEN SUBSTRING([Theaddress],start,matchlength-1)
                  ELSE '' END AS ThePostCode
      FROM (--we have a derived table with the results we need for the chopping
        SELECT MAX(PATINDEX([matched],' '+[Theaddress] COLLATE SQL_Latin1_General_CP850_Bin)) AS start,
               MAX(CASE WHEN PATINDEX([matched],' '+[Theaddress] COLLATE SQL_Latin1_General_CP850_Bin)>0
                        THEN TheLength ELSE 0 END) AS matchlength,
               MAX(TheAddress) AS TheAddress,
               Address_ID
        FROM (SELECT --first the match, then the length. There are three possible valid matches
                '%['+@BlankRange+'][A-Z][0-9] [0-9][A-Z][A-Z]%', 7 --seven-character postcode
              UNION ALL SELECT '%['+@BlankRange+'][A-Z][A-Z0-9][A-Z0-9] [0-9][A-Z][A-Z]%', 8
              UNION ALL SELECT '%['+@BlankRange+'][A-Z][A-Z][A-Z0-9][A-Z0-9] [0-9][A-Z][A-Z]%', 9
             ) AS f(Matched,TheLength)
        CROSS JOIN @AddressesWithPostCodes
        GROUP BY [address_ID]
      ) WORK;
      RETURN
    END
    GO
    -------------------------------end of the function-------------------------

    IF NOT EXISTS (SELECT * FROM sys.objects WHERE name LIKE 'ExtractPostcode')
      BEGIN
        RAISERROR ('There was an error creating the function.',16,1)
        RETURN
      END

    /* now the job is only half done, because we need to make sure that it works. So we now load our sample data, making sure that for each sample we have what we actually think the output should be. */
    DECLARE @InputTable AddressesWithPostCodes
    INSERT INTO @InputTable(Address_ID,TheAddress)
    VALUES(1,'14 Mason mews, Awkward Hill, Bibury, Cirencester, GL7 5NH'),
    (2,'5 Binney St      Abbey Ward    Buckinghamshire      HP11 2AX UK'),
    (3,'BH6 3BE 8 Moor street, East Southbourne and Tuckton W     Bournemouth UK'),
    (4,'505 Exeter Rd,   DN36 5RP Hawerby cum Beesby Lincolnshire UK'),
    (5,''),
    (6,'9472 Lind St,    Desborough    Northamptonshire NN14 2GH  NN14 3GH UK'),
    (7,'7457 Cowl St, #70      Bargate Ward  Southampton   SO14 3TY UK'),
    (8,'"The Pippins", 20 Gloucester Pl, Chirton Ward,   Tyne & Wear   NE29 7AD UK'),
    (9,'929 Augustine lane,    Staple Hill Ward     South Gloucestershire      BS16 4LL UK'),
    (10,'45 Bradfield road, Parwich   Derbyshire    DE6 1QN UK'),
    (11,'63A Northampton St,   Wilmington    Kent   DA2 7PP UK'),
    (12,'5 Hygeia avenue,      Loundsley Green WardDerbyshire    S40 4LY UK'),
    (13,'2150 Morley St,Dee Ward      Dumfries and Galloway      DG8 7DE UK'),
    (14,'24 Bolton St,   Broxburn, Uphall and Winchburg    West Lothian  EH52 5TL UK'),
    (15,'4 Forrest St,   Weston-Super-Mare    North Somerset       BS23 3HG UK'),
    (16,'89 Noon St,     Carbrooke     Norfolk       IP25 6JQ UK'),
    (17,'99 Guthrie St,  New Milton    Hampshire     BH25 5DF UK'),
    (18,'7 Richmond St,  Parkham       Devon  EX39 5DJ UK'),
    (19,'9165 laburnum St,     Darnall Ward  Yorkshire, South     S4 7WN UK')

    DECLARE @OutputTable OutputFormat --the table of what we think the correct results should be
    DECLARE @IncorrectRows OutputFormat --done for error reporting

    --here is the table of what we think the output should be, along with a few edge cases.
    INSERT INTO @OutputTable(Address_ID,TheAddress, ThePostcode)
    VALUES
    (1, '14 Mason mews, Awkward Hill, Bibury, Cirencester, ','GL7 5NH'),
    (2, '5 Binney St   Abbey Ward    Buckinghamshire      UK','HP11 2AX'),
    (3, '8 Moor street, East Southbourne and Tuckton W    Bournemouth UK','BH6 3BE'),
    (4, '505 Exeter Rd,Hawerby cum Beesby   Lincolnshire UK','DN36 5RP'),
    (5, '',''),
    (6, '9472 Lind St,Desborough    Northamptonshire NN14 3GH UK','NN14 2GH'),
    (7, '7457 Cowl St, #70    Bargate Ward  Southampton   UK','SO14 3TY'),
    (8, '"The Pippins", 20 Gloucester Pl, Chirton Ward,Tyne & Wear   UK','NE29 7AD'),
    (9, '929 Augustine lane,  Staple Hill Ward     South Gloucestershire      UK','BS16 4LL'),
    (10, '45 Bradfield road, ParwichDerbyshire    UK','DE6 1QN'),
    (11, '63A Northampton St,Wilmington    Kent   UK','DA2 7PP'),
    (12, '5 Hygeia avenue,    Loundsley Green WardDerbyshire    UK','S40 4LY'),
    (13, '2150 Morley St,     Dee Ward      Dumfries and Galloway      UK','DG8 7DE'),
    (14, '24 Bolton St,Broxburn, Uphall and Winchburg    West Lothian  UK','EH52 5TL'),
    (15, '4 Forrest St,Weston-Super-Mare    North Somerset       UK','BS23 3HG'),
    (16, '89 Noon St,  Carbrooke     Norfolk       UK','IP25 6JQ'),
    (17, '99 Guthrie St,      New Milton    Hampshire     UK','BH25 5DF'),
    (18, '7 Richmond St,      Parkham       Devon  UK','EX39 5DJ'),
    (19, '9165 laburnum St,   Darnall Ward  Yorkshire, South     UK','S4 7WN')

    INSERT INTO @IncorrectRows(Address_ID,TheAddress, ThePostcode)
      SELECT Address_ID,TheAddress,ThePostCode FROM dbo.ExtractPostcode(@InputTable)
      EXCEPT
      SELECT Address_ID,TheAddress,ThePostCode FROM @OutputTable;
    IF @@ROWCOUNT>0
      BEGIN
        PRINT 'The following rows gave ';
        SELECT Address_ID,TheAddress,ThePostCode FROM @IncorrectRows
        RAISERROR ('These rows gave unexpected results.',16,1);
      END

    /* For tear-down, we drop the user-defined table type */
    IF EXISTS(SELECT * FROM sys.types WHERE name LIKE 'OutputFormat'
                                        AND schema_name(schema_ID)='Dbo')
      DROP TYPE dbo.OutputFormat
    GO
    /* Once this is working, the development work turns from a chore into a delight, and one ends up hitting Execute so much more often, to catch mistakes as soon as possible. It also prevents a wildly-broken routine getting into a build! */

    Read the article

  • How can I most efficiently communicate my personal code of ethics, and its implications?

    - by blueberryfields
    There is a lot to the definition of a professional. There are many questions here asking how to identify components of what is essentially a professional programmer - how do you identify or communicate expertise, specialization, high quality work, excellent skills in relation to the profession. I am specifically looking for methods to communicate a specific component, and I quote from wikipedia: A high standard of professional ethics, behavior and work activities while carrying out one's profession (as an employee, self-employed person, career, enterprise, business, company, or partnership/associate/colleague, etc.). The professional owes a higher duty to a client, often a privilege of confidentiality, as well as a duty not to abandon the client just because he or she may not be able to pay or remunerate the professional. Often the professional is required to put the interest of the client ahead of his own interests. How can I most efficiently communicate my professionalism, in the spirit of the quote above, to current and potential clients and employers?

    Read the article

  • Pros and cons of using Grails compared to pure Groovy

    - by shabunc
    Say you (by "you" I mean an abstract person, anyone on your team) have experience of writing and building Java web apps, and know about filters, servlet mappings and so on. Also, let us assume you know any SQL DB pretty well, no matter which one exactly, whether it's MySQL, Oracle or PostgreSQL. At last, let's pretend we know Groovy and its standard libraries, for example all that JsonBuilder and XmlSlurper stuff, so we don't need Grails converters. The question is: what are the benefits of using Grails in this case? I'm not trying to start a flame war, I'm just asking for a comparison: what are the ups and downs of Grails development compared to pure Groovy development? For instance, off the top of my head I can name two pluses: automatic DB mapping and custom GSP tags. But when I want to write a modest app which provides a small API for handling some well-defined set of data, I'm totally OK with Groovy's awesome SQL support. As for GSP, we do not use it at all, so we are not interested in custom tags either.

    Read the article

  • Does it ever make sense to license source code as a learning resource under GPL?

    - by Earlz
    I recently came across a series of articles walking through how to make a scheme interpreter. I was browsing through the code when I realized that it was AGPL. For the most part, the code itself is the teaching tool. Basically, the code as-is is what I need, however, I did want to understand how it all fits together as well. I realized though that if I copy and paste a single line of code, my project would become AGPL. Possibly by even more trivial actions? Anyway, is this a standard practice at all? Am I just being over-paranoid? Also, what benefits are there to this licensing scheme?

    Read the article

  • How can I switch memory modules to 1600 MHz?

    - by Salvador
    Some months ago I bought 4 memory modules of 4GB DDR3 1600 Kingston HyperX. The official Kingston manual says: "This module has been tested to run at DDR3-1600 at a low latency timing of 9-9-9-27 at 1.65V. The SPD is programmed to JEDEC standard latency DDR3-1333 timing of 9-9-9." I cannot find out what the real speed of my memory modules is. I normally get from several tools that the real speed is 1333 MHz:

    srs@ubuntu:~$ sudo dmidecode -t memory
    # dmidecode 2.9
    SMBIOS 2.6 present.

    Handle 0x005D, DMI type 16, 15 bytes
    Physical Memory Array
        Location: System Board Or Motherboard
        Use: System Memory
        Error Correction Type: None
        Maximum Capacity: 32 GB
        Error Information Handle: 0x005F
        Number Of Devices: 4

    Handle 0x005C, DMI type 17, 28 bytes
    Memory Device
        Array Handle: 0x005D
        Error Information Handle: 0x0060
        Total Width: 64 bits
        Data Width: 64 bits
        Size: 4096 MB
        Form Factor: DIMM
        Set: None
        Locator: ChannelA-DIMM0
        Bank Locator: BANK 0
        Type: <OUT OF SPEC>
        Type Detail: Synchronous
        Speed: 1333 MHz (0.8 ns)
        Manufacturer: Kingston
        Serial Number: 07288F23
        Asset Tag: 9876543210
        Part Number: 9905403-439.A00LF

    How can I switch the memory modules to 1600 MHz?

    Read the article

  • Max Degree of Parallelism Server-Side Setting

    - by Tara Kizer
    Recently I opened a case with Microsoft PSS to help us through a severe performance problem on a new system.  As part of that case, the PSS engineer checked our "max degree of parallelism" server-side setting.  It is our standard to use 4 on our production systems that have 16 CPUs (2 sockets, quad-core, hyper-threaded).  The PSS engineer had me run the below query to get Microsoft's recommended value of the "max degree of parallelism" server-side setting for our 16-CPU system:

    select case
             when cpu_count / hyperthread_ratio > 8 then 8
             else cpu_count / hyperthread_ratio
           end as optimal_maxdop_setting
    from sys.dm_os_sys_info;

    The query returned 2.  I made the change using sp_configure, and it did not resolve our issue.  We have decided to leave it in place for now.  Do you agree with this query?  What are your thoughts on this? If you decide to change your setting to reflect the output of this query, please test it first to ensure there are no negative side effects.
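    For reference, the sp_configure change mentioned above looks roughly like this; a minimal sketch, assuming the value 2 returned by the query ("max degree of parallelism" is an advanced option, so it must be exposed first):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- apply the value recommended by the query
        EXEC sp_configure 'max degree of parallelism', 2;
        RECONFIGURE;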

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Send Port

    - by Stuart Brierley
    I have previously talked about the installation of the Community ODBC adapter, about using the ODBC adapter to generate schemas, and more recently about the creation of a receive port using the ODBC adapter.  But what about creating a send port? Select to add a new send port, select the ODBC adapter and click Configure. Clicking Connection String will open the DataSource window. Choose one of your system data sources and press OK. This will now update the transport properties.  Select OK. All that remains is to set the standard send port properties, and your ODBC send port is now ready.

    Read the article

  • What is the best policy for allowing clients to change email?

    - by Steve Konves
    We are developing a web application with a fairly standard registration process which requires a client/user to verify their email address before they are allowed to use the site. The site also allows users to change their email address after verification (with a re-type email field, as well). What are the pros and cons of having the user re-verify their email? Is this even needed? EDIT: Summary of answers and comments below:
    Over-verification annoys people, so don't use it unless critical
    Use a "re-type email" field to prevent typos
    Beware of overwriting known good data with potentially good data
    Send email to the old address for notification; to the new address for verification
    Don't assume that the user still has access to the old email
    Identify the impact of an incorrect email if the account is compromised
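    One way to act on the "don't overwrite known good data" advice is to keep the unverified address in its own column until the verification link comes back; a hypothetical sketch (table and column names invented):

        CREATE TABLE AppUser
        (
          UserId            INT          NOT NULL PRIMARY KEY,
          Email             VARCHAR(254) NOT NULL, -- last verified, known-good address
          PendingEmail      VARCHAR(254) NULL,     -- new address awaiting verification
          PendingEmailToken CHAR(36)     NULL      -- token mailed to PendingEmail
        );

        -- promote the new address only after its token is presented
        UPDATE AppUser
        SET Email = PendingEmail,
            PendingEmail = NULL,
            PendingEmailToken = NULL
        WHERE UserId = 42
          AND PendingEmailToken = 'token-from-the-verification-link';

    Until then, notifications still go to the old, known-good address.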

    Read the article

  • Professional Development – Difference Between Bio, CV and Resume

    - by Pinal Dave
    Applying for work can be very stressful – you want to put your best foot forward, and it can be very hard to sell yourself to a potential employer while highlighting your best characteristics and answering questions.  On top of that, some jobs require different application materials – a biography (or bio), a curriculum vitae (or CV), or a resume.  These things seem so interchangeable, so what is the difference?

    Let's start with the one most of us have heard of – the resume.  A resume is a summary of your job and education history.  If you have ever applied for a job, you will have used a resume.  The ability to write a good resume that highlights your best characteristics and emphasizes your qualifications for a specific job is a skill that will take you a long way in the world.  For such an essential skill, it is unfortunately one that many people struggle with.

    Resume

    So let's discuss what makes a great resume.  First, make sure that your name and contact information are at the top, in large print (slightly larger font than the rest of the text – size 14 or 16 if the rest is size 12, for example).  You need to make sure that if you catch the recruiter's attention, they know how to get a hold of you.

    As for qualifications, be quick and to the point.  Make your job title and the company the headline, and include your skills, accomplishments, and qualifications as bullet points.  Use good action verbs, like "finished," "arranged," "solved," and "completed."  Include hard numbers – don't just say you "changed the filing system," say that you "revolutionized the storage of over 250 files in less than five days."  Doesn't that sentence sound much more powerful?

    Curriculum Vitae (CV)

    Now let's talk about curriculum vitae, or "CVs".  A CV is more like an expanded resume.  The same rules still apply: put your name front and center, keep your contact info up to date, and summarize your skills with bullet points.  However, CVs are often required in more technical fields – like science, engineering, and computer science.  This means that you need to really highlight your education and technical skills.

    Difference between Resume and CV

    Resumes are expected to be one or two pages long – CVs can be as many pages as necessary.  If you are one of those people lucky enough to feel limited by the size constraint of resumes, a CV is for you!  On a CV you can expand on your projects, highlight really exciting accomplishments, and include more educational experience – including GPA and test scores from the GRE or MCAT (as applicable).  You can also include awards, associations, teaching and research experience, and certifications.  A CV is a place to really expand on all your experience and how great you will be in this particular position.

    Biography (Bio)

    Chances are, you already know what a bio is, and you have even read a few of them.  Think about the one or two paragraphs that every author includes in the back flap of a book.  Think about the sentences under a blogger's photo on every "About Me" page.  That is a bio.  It is a way to quickly highlight your life experiences.  It is essentially the way you would introduce yourself at a party.

    Where a bio is required for a job, chances are they won't want to know about where you were born and how many pets you have, though.  This is a way to summarize your entire job history in a quick-to-read format – and sometimes during a job hunt, being able to get to the point and grab the recruiter's interest is the best way to get your foot in the door.
    Think of a bio as your entire resume put into words. Most bios have a standard format.  In paragraph one, talk about your most recent position and your accomplishments there, specifically how they relate to the job you are applying for.  If you have teaching or research experience, training experience, certifications, or management experience, talk about them in paragraph two.  Paragraphs three and four are for highlighting publications, education, certifications, associations, etc.  To wrap up your bio, provide your contact info and availability (dates and times).

    Where to use What?

    For most positions, you will know exactly what kind of application to use, because the job announcement will state what materials are needed – resume, CV, bio, cover letter, skill set, etc.  If there is any confusion, choose whatever the industry standard is (CV for technical fields, resume for everything else) or choose whichever of your documents is the strongest.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: About Me, PostADay, Professional Development, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • What quality standards to consider for software development process?

    - by Ron-Damon
    Hi, I'm looking for metrics/standards/norms to evaluate a given "software development process". I'm NOT looking to evaluate the SOFTWARE itself (through SQuaRE and such); I'm trying to evaluate the software development PROCESS. So, my question is whether you could give me some pointers to find such a standard, considering that the "evaluation objectives" would be documentation quality, how good the customer relationship is, how effective the process is, etc. Very much like ISO 9000, and like CMMI in a sense, but much more lightweight, concrete and process-oriented, not company-oriented. Please help; I'm trying to establish the advantages of our development process as formally as I can.

    Read the article

  • Support / Maintenance documentation for development team

    - by benwebdev
    Hi, I'm working in the development dept (around 40 developers) of a large e-commerce company. We've grown quickly but have not evolved very well in the field of documenting our work. We work with an Agile/Scrum-like methodology for our development and testing, but documentation seems to be neglected. We need to be able to produce documentation that would aid a developer who hasn't worked on our project before or who is new to the company. We also have to create more high-level information for our support department, to explain any extra config settings and fixes for known issues that may arise, if any. Currently we put this in a badly put-together wiki, based on an old SharePoint/TFS site. Can anyone suggest some ideal links or advice on improving our documentation standard? What works in other companies? Has anyone got advice on developing documentation as part of an agile process? Many thanks, ben

    Read the article

  • Fetch as Googlebot works but Submit to Index does not for AJAX urls

    - by Jennifer
    First I fetch as Googlebot, then I am prompted to Submit to Index. This I want to do, but the tool just re-prompts me. This does not happen when I am submitting a standard URL. For those URLs I get a confirmation that they were submitted to the index. It only occurs when I am submitting an AJAX URL. I know the URLs are searchable, as I have performed many tests and seen the results using /?_escaped_fragment_= Here is an example URL: http://www.townbeam.com/#!events Can someone shed some light on this? Thank you

    Read the article

  • Environment naming standards in software development?

    - by Marcus_33
    My project is currently suffering from environment naming issues. Different people have different assumptions as to what environments should be named or what the names designate, and it's causing confusion when discussing them. I've done a bit of research and I haven't found any standards out there. The terms include "Local", "Sand", "Dev", "Test", "User", "QA", "Staging" and "Prod" (plus a few more that different people have asked about). I'm not looking for just opinions, though if there's one out there that "everyone" has, I'll take it; I'm trying to find definitions advanced by some sort of authority, even if it's unofficial. Here are the environments we currently use:
    Environment on the developer's PC
    Shared environment where developers directly upload code to self-test
    Shared environment where standards and functionality are tested by QA people
    Shared environment where completed and QA-checked code is approved by project requesters
    Environment that mirrors the final environment, as a final check and to prepare for deployment
    Final environment where the code is in use
    I know what I'd call them, but is there some sort of standard on this? Thanks in advance.

    Read the article

  • IE9 - HTML5, Hardware Accelerated: First IE9 Platform Preview Available for Developers

    At the MIX conference, we demonstrated how the standard web patterns that developers already know and use broadly run better by taking advantage of PC hardware through IE9 on Windows. First, we showed IE9's new script engine, internally known as Chakra. We shared the data and framework that informed our approach, and demonstrated better support for several standards: HTML5, DOM, and CSS3. We showed hardware-accelerated SVG support in IE9. Finally, we announced the availability of...

    Read the article

  • Reference Data Management and Master Data: Are They Related?

    - by Mala Narasimharajan
    Submitted by: Rahul Kamath

    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine-grained control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

    What is reference data? How does it relate to master data? Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler, more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data, while changes to product categories and geo hierarchy are examples of reference data changes.[1]

    The following table contains an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships & cross-domain mappings, or standards.

    | Types & Codes | Taxonomies | Relationships / Mappings | Standards |
    |---|---|---|---|
    | Transaction Codes | Industry Classification Categories and Codes, e.g., North American Industry Classification System (NAICS) | Product / Segment; Product / Geo | Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601) |
    | Lookup Tables (e.g., Gender, Marital Status, etc.) | Product Categories | City → State → Postal Codes | Currency Codes (e.g., ISO) |
    | Status Codes | Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense) | Customer / Market Segment; Business Unit / Channel | Country Codes (e.g., ISO 3166, UN) |
    | Role Codes | Market Segments | Country Codes / Currency Codes / Financial Accounts | Date/Time, Time Zones (e.g., ISO 8601) |
    | Domain Values | Universal Standard Products and Services Classification (UNSPSC), eCl@ss | International Classification of Diseases (ICD) mappings, e.g., ICD-9 → ICD-10 | Tax Rates |

    Why manage reference data? Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

    Sample Use Cases of Reference Data Management

    Healthcare: Diagnostic Codes

    The reference data challenges in the healthcare industry offer a case in point.
    Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the two code sets, ICD-9 and ICD-10, offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as master data management does. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to "cross-walk" mappings – either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon.

    Spend Management: Product, Service & Supplier Codes

    Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

    Change Management: Point of Sale Transaction Codes and Product Codes

    In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.

    Multi-National Companies: Industry Classification Schemes

    As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.

    References

    [1] Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.
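    As a concrete illustration of the "cross-walk" idea, here is a minimal SQL sketch of how such a mapping could be stored and queried; the IcdCrosswalk and Diagnosis tables are invented for this example and are not part of DRM:

        -- one ICD-9 code may map to several ICD-10 codes, so the pair is the key
        CREATE TABLE IcdCrosswalk
        (
          Icd9Code  VARCHAR(10) NOT NULL,
          Icd10Code VARCHAR(10) NOT NULL,
          PRIMARY KEY (Icd9Code, Icd10Code)
        );

        -- walk forward: report historical ICD-9 diagnoses in ICD-10 terms
        SELECT d.DiagnosisId, x.Icd10Code
        FROM Diagnosis AS d
        JOIN IcdCrosswalk AS x
          ON x.Icd9Code = d.Icd9Code;

    Walking backwards is the same join in the other direction; the human-judgment work the article describes is populating the crosswalk table in the first place.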

    Read the article

  • SSH as root using public key still prompts for password on RHEL 6.1

    - by Dean Schulze
    I've generated rsa keys with cygwin ssh-keygen and copied them to the server with

        ssh-copy-id -i id_rsa.pub root@my.ip.address

    I've got the following settings in my /etc/ssh/sshd_config file:

        RSAAuthentication yes
        PubkeyAuthentication yes
        AuthorizedKeysFile .ssh/authorized_keys
        PermitRootLogin yes

    When I ssh root@my.ip.address it still prompts for a password. The output below from /usr/sbin/sshd -d says that a matching key was found in the .ssh/authorized_keys file, but it still requires a password from the client. I've read a bunch of web postings about permissions on files and directories, but nothing works. Is it possible to ssh with keys in RHEL 6.1, or is this forbidden? The debug output from ssh and sshd is below.

        $ ssh -v root@my.ip.address
        OpenSSH_6.1p1, OpenSSL 1.0.1c 10 May 2012
        debug1: Connecting to my.ip.address [my.ip.address] port 22.
        debug1: Connection established.
        debug1: identity file /home/dschulze/.ssh/id_rsa type 1
        debug1: identity file /home/dschulze/.ssh/id_rsa-cert type -1
        debug1: identity file /home/dschulze/.ssh/id_dsa type 2
        debug1: identity file /home/dschulze/.ssh/id_dsa-cert type -1
        debug1: identity file /home/dschulze/.ssh/id_ecdsa type -1
        debug1: identity file /home/dschulze/.ssh/id_ecdsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
        debug1: match: OpenSSH_5.3 pat OpenSSH_5*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA 9f:00:e0:1e:a2:cd:05:53:c8:21:d5:69:25:80:39:92
        debug1: Host 'my.ip.address' is known and matches the RSA host key.
        debug1: Found key in /home/dschulze/.ssh/known_hosts:3
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /home/dschulze/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
        debug1: Offering DSA public key: /home/dschulze/.ssh/id_dsa
        debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
        debug1: Trying private key: /home/dschulze/.ssh/id_ecdsa
        debug1: Next authentication method: password

    Here is the server output from /usr/sbin/sshd -d:

        [root@ga2-lab .ssh]# /usr/sbin/sshd -d
        debug1: sshd version OpenSSH_5.3p1
        debug1: read PEM private key done: type RSA
        debug1: private host key: #0 type 1 RSA
        debug1: read PEM private key done: type DSA
        debug1: private host key: #1 type 2 DSA
        debug1: rexec_argv[0]='/usr/sbin/sshd'
        debug1: rexec_argv[1]='-d'
        debug1: Bind to port 22 on 0.0.0.0.
        Server listening on 0.0.0.0 port 22.
        debug1: Bind to port 22 on ::.
        Server listening on :: port 22.
        debug1: Server will not fork when running in debugging mode.
        debug1: rexec start in 5 out 5 newsock 5 pipe -1 sock 8
        debug1: inetd sockets after dupping: 3, 3
        Connection from 172.60.254.24 port 53401
        debug1: Client protocol version 2.0; client software version OpenSSH_6.1
        debug1: match: OpenSSH_6.1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.3
        debug1: permanently_set_uid: 74/74
        debug1: list_hostkey_types: ssh-rsa,ssh-dss
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST received
        debug1: SSH2_MSG_KEX_DH_GEX_GROUP sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_INIT
        debug1: SSH2_MSG_KEX_DH_GEX_REPLY sent
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: KEX done
        debug1: userauth-request for user root service ssh-connection method none
        debug1: attempt 0 failures 0
        debug1: PAM: initializing for "root"
        debug1: userauth-request for user root service ssh-connection method publickey
        debug1: attempt 1 failures 0
        debug1: test whether pkalg/pkblob are acceptable
        debug1: PAM: setting PAM_RHOST to "172.60.254.24"
        debug1: PAM: setting PAM_TTY to "ssh"
        debug1: temporarily_use_uid: 0/0 (e=0/0)
        debug1: trying public key file /root/.ssh/authorized_keys
        debug1: fd 4 clearing O_NONBLOCK
        debug1: matching key found: file /root/.ssh/authorized_keys, line 1
        Found matching RSA key: db:b3:b9:b1:c9:df:6d:e1:03:5b:57:d3:d9:c4:4e:5c
        debug1: restore_uid: 0/0
        Postponed publickey for root from 172.60.254.24 port 53401 ssh2
        debug1: userauth-request for user root service ssh-connection method publickey
        debug1: attempt 2 failures 0
        debug1: temporarily_use_uid: 0/0 (e=0/0)
        debug1: trying public key file /root/.ssh/authorized_keys
        debug1: fd 4 clearing O_NONBLOCK
        debug1: matching key found: file /root/.ssh/authorized_keys, line 1
        Found matching RSA key: db:b3:b9:b1:c9:df:6d:e1:03:5b:57:d3:d9:c4:4e:5c
        debug1: restore_uid: 0/0
        debug1: ssh_rsa_verify: signature correct
        debug1: do_pam_account: called
        Accepted publickey for root from 172.60.254.24 port 53401 ssh2
        debug1: monitor_child_preauth: root has been authenticated by privileged process
        debug1: temporarily_use_uid: 0/0 (e=0/0)
        debug1: ssh_gssapi_storecreds: Not a GSSAPI mechanism
        debug1: restore_uid: 0/0
        debug1: SELinux support enabled
        debug1: PAM: establishing credentials
        PAM: pam_open_session(): Authentication failure
        debug1: Entering interactive session for SSH2.
        debug1: server_init_dispatch_20
        debug1: server_input_channel_open: ctype session rchan 0 win 1048576 max 16384
        debug1: input_session_request
        debug1: channel 0: new [server-session]
        debug1: session_new: session 0
        debug1: session_open: channel 0
        debug1: session_open: session 0: link with channel 0
        debug1: server_input_channel_open: confirm session
        debug1: server_input_global_request: rtype no-more-sessions@openssh.com want_reply 0
        debug1: server_input_channel_req: channel 0 request pty-req reply 1
        debug1: session_by_channel: session 0 channel 0
        debug1: session_input_channel_req: session 0 req pty-req
        debug1: Allocating pty.
        debug1: session_pty_req: session 0 alloc /dev/pts/1
        ssh_selinux_setup_pty: security_compute_relabel: Invalid argument
        debug1: server_input_channel_req: channel 0 request shell reply 1
        debug1: session_by_channel: session 0 channel 0
        debug1: session_input_channel_req: session 0 req shell
        debug1: Setting controlling tty using TIOCSCTTY.
        debug1: Received SIGCHLD.
        debug1: session_by_pid: pid 17323
        debug1: session_exit_message: session 0 channel 0 pid 17323
        debug1: session_exit_message: release channel 0
        debug1: session_pty_cleanup: session 0 release /dev/pts/1
        debug1: session_by_channel: session 0 channel 0
        debug1: session_close_by_channel: channel 0 child 0
        debug1: session_close: session 0 pid 0
        debug1: channel 0: free: server-session, nchannels 1
        Received disconnect from 172.60.254.24: 11: disconnected by user
        debug1: do_cleanup
        debug1: PAM: cleanup
        debug1: PAM: deleting credentials
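
    Note that in the server trace above the key itself is accepted ("Accepted publickey for root") and the session then dies at "PAM: pam_open_session(): Authentication failure" and "ssh_selinux_setup_pty: security_compute_relabel: Invalid argument", which points at PAM and SELinux rather than the key exchange. A minimal checklist sketch for that situation, using the RHEL 6 paths shown in the output above and only standard RHEL tools (run as root on the server; this is a diagnostic sketch, not the confirmed fix):

        # Public-key logins commonly fall back to password prompts when these
        # modes are wrong, so set them first:
        chmod 700 /root/.ssh
        chmod 600 /root/.ssh/authorized_keys

        # "security_compute_relabel: Invalid argument" suggests mislabeled
        # SELinux contexts; restorecon resets them to what the policy expects.
        restorecon -R -v /root/.ssh

        # Diagnostic only: permissive mode tells you whether SELinux is the
        # blocker (re-enable with setenforce 1 afterwards).
        setenforce 0

        # If pam_open_session() keeps failing, the PAM stack logs here on RHEL:
        tail -n 50 /var/log/secure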

    Read the article

  • How to design a database having multiple interrelated entities

    - by Sharath Chandra
    I am designing a new system, essentially a help system for core applications in the banking and healthcare sectors. Given its nature it is not heavily transaction-oriented but rather read-intensive. Within this application I have multiple entities that are related to each other. For example, assume the following entities in the system:

        User
        Training
        Regulations

    Each of these entities has an M:N relationship with each of the others. Assuming a standard RDBMS, the design may involve many relationship tables, each capturing the relationship between one pair of entities ("User_Training", "User_Regulations", "Training_Regulations"). This design is limiting, since I have more than three entities in the system and maintaining the relationship graph this way is difficult. The most frequently used operation is "given an entity, get me all the related entities", and I need to design the database so that this operation is relatively inexpensive. What are the different recommendations for modelling this kind of database?
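
    One commonly recommended alternative to a relationship table per pair is a single generic link (edge) table keyed by entity type and id, which keeps "find all related entities" to one indexed query no matter how many entity types exist. A minimal sketch, assuming SQLite for convenience (any RDBMS works the same way); the database file, table, and column names are illustrative, not from the question:

        # A generic link table instead of one relationship table per entity pair.
        # Relationships are stored in both directions so reads stay one lookup.
        sqlite3 help_system.db "
          CREATE TABLE entity_link (
            from_type TEXT    NOT NULL,  -- e.g. 'USER', 'TRAINING', 'REGULATION'
            from_id   INTEGER NOT NULL,
            to_type   TEXT    NOT NULL,
            to_id     INTEGER NOT NULL,
            PRIMARY KEY (from_type, from_id, to_type, to_id)
          );"

        # 'Given an entity, get all related entities' becomes one index range scan:
        sqlite3 help_system.db \
          "SELECT to_type, to_id FROM entity_link
            WHERE from_type = 'USER' AND from_id = 42;"

    The trade-off of this design is that the database can no longer enforce foreign keys from the link table into the per-entity tables, so referential integrity moves into the application layer or into triggers.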

    Read the article

  • Remote host: can tracert, can telnet, can*not* browse: what gives?

    - by MacThePenguin
    One of the customers of the company I work for has made a change to their Internet connection, and now we can't connect to them any more from our LAN. To help me troubleshoot the issue, the network guy on the customer's site has configured their firewall so that an HTTPS connection to their public IP address is open to any IP: I should be able to put https://<customer's IP> in my browser and get a web page. Well, it works from any network I've tried (even from my smartphone), just not from my company's LAN.

    I thought it might be an issue with our firewall (though I checked its rules and it allows outbound TCP port 443 to anywhere), so I connected a PC directly to our provider's network connection, bypassing our firewall completely, and it still didn't work (everything else worked). So I asked our Internet provider's customer service for help, and they asked me to do a tracert to our customer's IP. The tracert is successful, as the final hop shown in the output is the host I want to reach, so they said there's no problem. :(

    I also tried telnet <customer's IP> 443, and that works as well: I get a blank page with the cursor blinking (trying another random port gives me an error message, as it should). Still, from any browser on any PC in my LAN I can't open that URL. I tried checking the network traffic with Wireshark: I see the packets going out and answers coming back, though far fewer packets pass than when I successfully connect to another HTTPS website. See the attached screenshot: I had to blur the IPs; the longer string is my PC's local IP address, the shorter one is the customer's public IP. I don't know what else to try; this is the only IP behaving this way... Any idea what I could try to solve this issue? Thanks, let me know if you need further details.

    Edit: when I say "it doesn't work" I mean the page doesn't open: the browser keeps loading for a long time and eventually shows an error saying that the page cannot be opened. I'm not in my office now so I can't paste the exact message, but it's the usual message you get when the browser reaches its timeout. When I say "it works", I mean the browser loads and shows a web page (the logon page for the customer's firewall admin interface: the firewall brand's logo and fields to enter a user id and a password).

    Update 13/09/2012: I tried again to connect to the customer's network through our Internet connection without a firewall. This is what I did:

      - Ran a Kubuntu 12.04 live distro on a spare laptop;
      - Updated all the packages I could and installed Wireshark;
      - Attached it to my LAN and verified that I couldn't open https://<customer's IP>;
      - Verified that the Wireshark trace for this attempt was the same as the one I've already posted;
      - Verified that I could connect to another customer's host using rdesktop (it worked);
      - Tried to rdesktop to <customer's IP>; here's the output:

            kubuntu@kubuntu:/etc$ rdesktop <customer's IP>
            Autoselected keyboard map en-us
            ERROR: recv: Connection reset by peer

      - Disconnected the laptop from the LAN;
      - Disconnected the firewall from the Extranet connection and connected the laptop instead;
      - Set its network configuration so that I could access the Internet;
      - Verified that I could connect to other websites over HTTP and HTTPS, and over RDP to other customers' hosts - it all worked as expected;
      - Verified that I could still traceroute to <customer's IP>: I could;
      - Verified that I still couldn't open https://<customer's IP> (same exact result as before);
      - Checked the Wireshark trace for this attempt and noticed a different behaviour: I could see packets going out to the customer's IP, but no replies at all;
      - Tried to run rdesktop again, with a slightly different result:

            kubuntu@kubuntu:/etc/network$ rdesktop <customer's IP>
            Autoselected keyboard map en-us
            ERROR: <customer's IP>: unable to connect

    Finally I gave up, put everything back as it was before, turned off the laptop, and lost the Wireshark traces I had saved. :( I still remember them very well though. :) Can you get anything out of this? Thank you very much.

    Update 12/09/2012 n.2: I followed the suggestion by MadHatter in the comments. From inside the firewall, this is what I get:

        user@ubuntu-mantis:~$ openssl s_client -connect <customer's IP>:443
        CONNECTED(00000003)

    If I then type GET / the output pauses for several seconds and I get:

        write:errno=104

    I'm going to try the same, but bypassing the firewall, as soon as I can. Thanks.

    Update 12/09/2012 n.3: So, I think ISA Server is altering the results of my tests... I tried installing Wireshark directly on the firewall and monitoring the packets on the Extranet network card. When the destination is the customer's IP, whatever service I try to connect to (HTTPS, RDP or SAProuter), I can only see outbound packets and no response packets whatsoever from their side. It looks like ISA Server is "faking" the remote server's replies, which is why I get a connection using telnet or the OpenSSL client. This is the Wireshark trace from inside our LAN, and this is the trace on the Extranet network card (screenshots omitted). This makes a bit more sense... I'll send this info to the customer's tech and see if he can make anything out of it. Thanks to all who took the time to read my question and post suggestions. I'll update this post again.
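
    A repeatable way to capture the same comparison from the command line, as a hedged sketch: the interface name eth0 and the documentation address 203.0.113.10 are placeholders standing in for the real NIC and the customer's public IP (run as root).

        # Watch the TCP handshake toward the customer's address; in the Extranet
        # capture described above this showed outbound SYNs with no replies.
        tcpdump -ni eth0 host 203.0.113.10 and tcp port 443

        # In a second terminal, force a full HTTPS request with a bounded timeout;
        # -k skips certificate checks since only reachability is being tested.
        curl -vk --connect-timeout 10 https://203.0.113.10/

    If the SYN leaves but nothing comes back only on one side of the path, the block is on that side; here the captures pointed at replies never arriving from the customer's end, with ISA Server's proxying producing the apparent successes in the telnet and openssl tests.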

    Read the article
