Search Results

Search found 41135 results on 1646 pages for 'non relational database'.


  • How to do a non-waiting write on a named pipe (C#)?

    - by Jelly Amma
    Hello, I'm using .NET 3.5 named pipes, and my server side is: serverPipeStream = new NamedPipeServerStream("myPipe", PipeDirection.InOut, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous); When I write some data with, say, BinaryWriter, the Write() call itself doesn't return until the client side has called Read() on its NamedPipeClientStream. How can I make my Write() to the named pipe non-blocking? Thanks in advance for any help.
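    Since the stream is already opened with PipeOptions.Asynchronous, the overlapped Begin/End pattern is the usual .NET 3.5 answer; a minimal sketch (the pipe name and payload are just illustrations, not from the question):

    using System;
    using System.IO.Pipes;
    using System.Text;

    class AsyncPipeWriteSketch
    {
        static void Main()
        {
            using (NamedPipeServerStream serverPipeStream = new NamedPipeServerStream(
                "myPipe", PipeDirection.InOut, 1,
                PipeTransmissionMode.Byte, PipeOptions.Asynchronous))
            {
                serverPipeStream.WaitForConnection();

                byte[] buffer = Encoding.UTF8.GetBytes("hello");

                // BeginWrite hands the buffer to the OS and returns at once;
                // the callback fires when the write actually completes.
                serverPipeStream.BeginWrite(buffer, 0, buffer.Length, asyncResult =>
                {
                    serverPipeStream.EndWrite(asyncResult); // rethrows any pipe error
                }, null);

                // The calling thread is free to do other work here instead of
                // blocking until the client's Read().
                Console.ReadLine();
            }
        }
    }

    Note that the client still has to read eventually; BeginWrite only frees the writing thread, it does not grow the pipe's buffer (the NamedPipeServerStream constructor overload that takes explicit inBufferSize/outBufferSize arguments can help there).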

    Read the article

  • SQLite: delete the oldest 25% of records in a database

    - by Steven smethurst
    I am using a SQLite database to store values from a data logger. The data logger will eventually fill up all the available hard drive space on the computer, so I'm looking for a way to remove the oldest 25% of the logs from the database once it reaches a certain limit. I'm using the following code: $ret = Query( 'SELECT id as last FROM data ORDER BY id desc LIMIT 1 ;' ); $last_id = $ret[0]['last'] ; $ret = Query( 'SELECT count( * ) as total FROM data' ); $start_id = $last_id - $ret[0]['total'] * 0.75 ; Query( 'DELETE FROM data WHERE id < '. round( $start_id, 0 ) ); A journal file gets created next to the database and fills up the remaining space on the drive until the script fails. How can I stop this journal file from being created? And is there any way to combine all three SQL queries into one statement?
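    A hedged sketch of both fixes (untested against the poster's schema; it assumes ids are roughly contiguous, as the PHP above already does, and an SQLite new enough for PRAGMA journal_mode, i.e. 3.6.5+):

    -- Keep the rollback journal in memory (or use OFF) so no journal file
    -- is written next to the database; this trades away crash recovery.
    PRAGMA journal_mode = MEMORY;

    -- The three queries collapsed into a single DELETE.
    DELETE FROM data
    WHERE id < (SELECT MAX(id) - CAST(COUNT(*) * 0.75 AS INTEGER) FROM data);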

    Read the article

  • Oracle EBS R12 on Exadata V2: MAA and High Performance

    - by longchun.zhu
    A hands-on workshop covering the following topics: 1. Oracle EBS R12 on Database Machine MAA & Performance Architecture. 2. Oracle EBS R12 Single Instance Node Deployment Procedure: Start Rapid Install Wizard. 3. Oracle EBS R12 Single Instance Node Chinese Patch Update: Applying Patches. 4. Upgrade Application Database Version to 11g Release 2: Database Upgrade. 5. Deploy Clone Application Database to Sun Oracle Database Machine. 6. Migrate Application Database File System to Exadata ASM Storage. 7. Convert Application Database Single Instance to RAC. 8. Configure High Availability & High Performance Architecture with Exadata.

    Read the article

  • Oracle Database 11g Release 2 on Windows certified for SAP

    - by ?? ?
    SAP certification for Oracle Database 11g Release 2 arrived for UNIX and Linux platforms on March 31, 2010, and for Windows (32-bit, x64) on June 9, 2010. The supported Windows releases are 2003 and 2008, with ECC (7.02). Upgrades from 10g R2 to 11g R2 are supported as well, so UNIX, Linux and Windows were all certified within 2010. Details are in SAP Note 1398634; check the SAP PAM (Product Availability Matrix) for the latest SAP/Oracle 11g platform support information.

    Read the article

  • Migrating from PostgreSQL to Oracle RAC

    - by Kumiko Fujita
    A case study of moving a mission-critical system's database from PostgreSQL to Oracle Database.

    1. Background

    The database tier originally ran on PostgreSQL, adopted as the open-source DBMS for the system, serving around 3,500 users with 24-hour operation. Running it at that scale exposed operational issues, most notably the periodic Vacuum maintenance PostgreSQL requires, so the database was migrated to Oracle Database 11gR2: roughly 500GB of data using the Partitioning option of Oracle Database Enterprise Edition, on SAN storage in an Active/Standby HA configuration.

    2. Migration work

    2.1. Data migration

    PostgreSQL data types had to be mapped onto Oracle types; the data itself was exported from PostgreSQL to csv and loaded into Oracle Database with SQL*Loader, a path that works whether the servers run Windows or Linux. One behavioral difference to watch for: PostgreSQL distinguishes NULL from the empty string '', while Oracle Database treats '' as NULL.

    Type mapping used for the migration:

    Category      PostgreSQL        Oracle Database
    Character     CHAR(n)           CHAR(n), CLOB
                  VARCHAR(n)        VARCHAR2(n), CLOB
                  TEXT              CLOB
    Numeric       NUMERIC           NUMBER
                  INTEGER           NUMBER
                  SMALLINT          NUMBER
                  BIGINT            NUMBER
                  REAL              NUMBER
                  DOUBLE PRECISION  NUMBER
    Date/time     DATE              DATE
                  TIMESTAMP         TIMESTAMP
    Large object  Bytea             BLOB
                  LOB               BFILE/SecureFiles
    Other         OID               ROWID

    2.2. SQL differences

    SQL that relies on PostgreSQL-specific syntax has to be rewritten. The classic example is LIMIT/OFFSET, which Oracle Database expresses with ROWNUM:

    /* PostgreSQL: LIMIT, OFFSET */
    SELECT column_list
    FROM some_table
    ORDER BY sort_key
    LIMIT 2 OFFSET 5;

    /* Oracle Database equivalent */
    SELECT column_list
    FROM (SELECT column_list, ROWNUM line_no
          FROM (SELECT column_list
                FROM some_table
                ORDER BY sort_key))
    WHERE line_no BETWEEN 6 AND 7;

    Beyond individual statements, WHERE-clause evaluation and optimizer behavior differ between the two products, so each rewritten query needs to be verified against Oracle Database rather than assumed equivalent.

    3. Data loading and testing

    Bulk loading was done with csv files and SQL*Loader rather than row-by-row INSERT statements into Oracle Database, which is far slower once the volume is measured in GB. Testing also has to cover DBMS-level behavior: both PostgreSQL and Oracle Database use MVCC with a default isolation level of Read Committed, but subtle differences in SQL and locking behavior between any two DBMSs should be confirmed with application-level regression tests.

    4. Summary

    Migrating between DBMSs is not just a matter of moving data. Type mappings, SQL dialect differences, and behavioral differences such as NULL handling and isolation semantics all have to be identified, rewritten where necessary, and verified by testing.
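    The NULL-versus-empty-string difference in section 2.1 is easy to trip over; a small illustrative sketch (table and column names invented here, not from the original article):

    -- Oracle Database treats the empty string as NULL:
    INSERT INTO t (col) VALUES ('');           -- Oracle stores NULL
    SELECT COUNT(*) FROM t WHERE col = '';     -- Oracle: 0 rows ('' = NULL is never true)
    SELECT COUNT(*) FROM t WHERE col IS NULL;  -- Oracle: finds the row
    -- PostgreSQL keeps '' and NULL distinct, so there the first SELECT
    -- matches the row and the second does not.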

    Read the article

  • A ToDynamic() Extension Method For Fluent Reflection

    - by Dixin
    Recently I needed to demonstrate some code with reflection, but I felt it inconvenient and tedious. To simplify the reflection coding, I created a ToDynamic() extension method. The source code can be downloaded from here. Problem One example of complex reflection is in LINQ to SQL. The DataContext class has a property Provider, and this Provider has an Execute() method, which executes the query expression and returns the result. Assume this Execute() needs to be invoked to query a SQL Server database; then the following code will be expected: using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // Executes the query. Here reflection is required, // because Provider, Execute(), and ReturnValue are not public members. IEnumerable<Product> results = database.Provider.Execute(query.Expression).ReturnValue; // Processes the results. foreach (Product product in results) { Console.WriteLine("{0}, {1}", product.ProductID, product.ProductName); } } Of course, this code cannot compile. And no one wants to write code like this. Again, this is just an example of complex reflection. using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // database.Provider PropertyInfo providerProperty = database.GetType().GetProperty( "Provider", BindingFlags.NonPublic | BindingFlags.GetProperty | BindingFlags.Instance); object provider = providerProperty.GetValue(database, null); // database.Provider.Execute(query.Expression) // Here GetMethod() cannot be directly used, // because Execute() is an explicitly implemented interface method. Assembly assembly = Assembly.Load("System.Data.Linq"); Type providerType = assembly.GetTypes().SingleOrDefault( type => type.FullName == "System.Data.Linq.Provider.IProvider"); InterfaceMapping mapping = provider.GetType().GetInterfaceMap(providerType); MethodInfo executeMethod = mapping.InterfaceMethods.Single(method => method.Name == "Execute"); IExecuteResult executeResult = executeMethod.Invoke(provider, new object[] { query.Expression }) as IExecuteResult; // database.Provider.Execute(query.Expression).ReturnValue IEnumerable<Product> results = executeResult.ReturnValue as IEnumerable<Product>; // Processes the results. foreach (Product product in results) { Console.WriteLine("{0}, {1}", product.ProductID, product.ProductName); } } This may not be straightforward enough. So here a solution will implement fluent reflection with a ToDynamic() extension method: IEnumerable<Product> results = database.ToDynamic() // Starts fluent reflection. .Provider.Execute(query.Expression).ReturnValue; C# 4.0 dynamic In this kind of scenario, it is easy to have dynamic in mind, which enables developers to write whatever code after a dot: using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // database.Provider dynamic dynamicDatabase = database; dynamic results = dynamicDatabase.Provider.Execute(query).ReturnValue; } This throws a RuntimeBinderException at runtime: 'System.Data.Linq.DataContext.Provider' is inaccessible due to its protection level.
    Here dynamic is able to find the specified member. So the next thing is just writing some custom code to access the found member. .NET 4.0 DynamicObject, and DynamicWrapper<T> Where to put the custom code for dynamic? The answer is a DynamicObject-derived class. I first heard of DynamicObject from Anders Hejlsberg's video at PDC 2008. It is very powerful, providing useful virtual methods to be overridden, like: TryGetMember() TrySetMember() TryInvokeMember() etc. (In 2008 they were called GetMember, SetMember, etc., with different signatures.) For example, if dynamicDatabase is a DynamicObject, then the following code: dynamicDatabase.Provider will invoke dynamicDatabase.TryGetMember() to do the actual work, which is where custom code can be put. Now create a type that inherits DynamicObject: public class DynamicWrapper<T> : DynamicObject { private readonly bool _isValueType; private readonly Type _type; private T _value; // Not readonly, for value type scenarios. public DynamicWrapper(ref T value) // Uses ref in case of value type. { if (value == null) { throw new ArgumentNullException("value"); } this._value = value; this._type = value.GetType(); this._isValueType = this._type.IsValueType; } public override bool TryGetMember(GetMemberBinder binder, out object result) { // Searches in current type's public and non-public properties. PropertyInfo property = this._type.GetTypeProperty(binder.Name); if (property != null) { result = property.GetValue(this._value, null).ToDynamic(); return true; } // Searches in explicitly implemented properties for interface. MethodInfo method = this._type.GetInterfaceMethod(string.Concat("get_", binder.Name), null); if (method != null) { result = method.Invoke(this._value, null).ToDynamic(); return true; } // Searches in current type's public and non-public fields. FieldInfo field = this._type.GetTypeField(binder.Name); if (field != null) { result = field.GetValue(this._value).ToDynamic(); return true; } // Searches in base type's public and non-public properties. property = this._type.GetBaseProperty(binder.Name); if (property != null) { result = property.GetValue(this._value, null).ToDynamic(); return true; } // Searches in base type's public and non-public fields. field = this._type.GetBaseField(binder.Name); if (field != null) { result = field.GetValue(this._value).ToDynamic(); return true; } // The specified member is not found. result = null; return false; } // Other overridden methods are not listed. } In the above code, GetTypeProperty(), GetInterfaceMethod(), GetTypeField(), GetBaseProperty(), and GetBaseField() are extension methods for the Type class. For example: internal static class TypeExtensions { internal static FieldInfo GetBaseField(this Type type, string name) { Type @base = type.BaseType; if (@base == null) { return null; } return @base.GetTypeField(name) ?? @base.GetBaseField(name); } internal static PropertyInfo GetBaseProperty(this Type type, string name) { Type @base = type.BaseType; if (@base == null) { return null; } return @base.GetTypeProperty(name) ?? @base.GetBaseProperty(name); } internal static MethodInfo GetInterfaceMethod(this Type type, string name, params object[] args) { return type.GetInterfaces().Select(type.GetInterfaceMap).SelectMany(mapping => mapping.TargetMethods) .FirstOrDefault( method => method.Name.Split('.').Last().Equals(name, StringComparison.Ordinal) && method.GetParameters().Count() == args.Length && method.GetParameters().Select( (parameter, index) => parameter.ParameterType.IsAssignableFrom(args[index].GetType())).Aggregate( true, (a, b) => a && b)); } internal static FieldInfo GetTypeField(this Type type, string name) { return type.GetFields( BindingFlags.GetField | BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic).FirstOrDefault( field => field.Name.Equals(name, StringComparison.Ordinal)); } internal static PropertyInfo GetTypeProperty(this Type type, string name) { return type.GetProperties( BindingFlags.GetProperty | BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic).FirstOrDefault( property => property.Name.Equals(name, StringComparison.Ordinal)); } // Other extension methods are not listed. } So now, when invoked, TryGetMember() searches for the specified member and invokes it. The code can be written like this: dynamic dynamicDatabase = new DynamicWrapper<NorthwindDataContext>(ref database); dynamic dynamicReturnValue = dynamicDatabase.Provider.Execute(query.Expression).ReturnValue; This greatly simplifies reflection. ToDynamic() and fluent reflection To make it even more straightforward, a ToDynamic() method is provided: public static class DynamicWrapperExtensions { public static dynamic ToDynamic<T>(this T value) { return new DynamicWrapper<T>(ref value); } } and a ToStatic() method is provided to unwrap the value: public class DynamicWrapper<T> : DynamicObject { public T ToStatic() { return this._value; } } In the above TryGetMember() method, please notice that it does not output the member's value, but a wrapped member value (that is, memberValue.ToDynamic()). This is very important to make the reflection fluent. Now the code becomes: IEnumerable<Product> results = database.ToDynamic() // Here starts fluent reflection. .Provider.Execute(query.Expression).ReturnValue .ToStatic(); // Unwraps to get the static value. With the help of TryConvert(): public class DynamicWrapper<T> : DynamicObject { public override bool TryConvert(ConvertBinder binder, out object result) { result = this._value; return true; } } ToStatic() can be omitted: IEnumerable<Product> results = database.ToDynamic() .Provider.Execute(query.Expression).ReturnValue; // Automatically converts to expected static value. Take a look at the reflection code at the beginning of this post again. Now it is much, much simpler! Special scenarios In 90% of the scenarios ToDynamic() is enough. But there are some special scenarios. Access static members Using the ToDynamic() extension method for accessing static members does not make sense. Instead, DynamicWrapper<T> has a parameterless constructor to handle these scenarios: public class DynamicWrapper<T> : DynamicObject { public DynamicWrapper() // For static. { this._type = typeof(T); this._isValueType = this._type.IsValueType; } } The reflection code should be like this: dynamic wrapper = new DynamicWrapper<StaticClass>(); int value = wrapper._value; int result = wrapper.PrivateMethod(); So accessing static members is also simple, and fluent of course. Change instances of value types Value types are much more complex.
    The main problem is that a value type is copied when passed to a method as a parameter. This is why the ref keyword is used for the constructor. That is, if a value type instance is passed to DynamicWrapper<T>, the instance itself will be stored in this._value of DynamicWrapper<T>. Without the ref keyword, when this._value is changed, the value type instance itself does not change. Consider FieldInfo.SetValue(). In the value type scenarios, invoking FieldInfo.SetValue(this._value, value) does not change this._value, because it changes a copy of this._value. I searched the Web and found a solution for setting the value of a field: internal static class FieldInfoExtensions { internal static void SetValue<T>(this FieldInfo field, ref T obj, object value) { if (typeof(T).IsValueType) { field.SetValueDirect(__makeref(obj), value); // For value type. } else { field.SetValue(obj, value); // For reference type. } } } Here __makeref is an undocumented keyword of C#. But method invocation has a problem. This is the source code of TryInvokeMember(): public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result) { if (binder == null) { throw new ArgumentNullException("binder"); } MethodInfo method = this._type.GetTypeMethod(binder.Name, args) ?? this._type.GetInterfaceMethod(binder.Name, args) ?? this._type.GetBaseMethod(binder.Name, args); if (method != null) { // Oops! // If the returnValue is a struct, it is copied to heap. object resultValue = method.Invoke(this._value, args); // And result is a wrapper of that copied struct. result = new DynamicWrapper<object>(ref resultValue); return true; } result = null; return false; } If the returned value is of a value type, it will definitely be copied, because MethodInfo.Invoke() returns object. Changing the value of the result then changes the copied struct instead of the original struct. The same is true of property and index access; both are actually method invocations. To avoid confusion, setting properties and indexers is not allowed on structs. Conclusions The DynamicWrapper<T> provides a simplified solution for reflection programming. It works for normal classes (reference types), accessing both instance and static members. In most scenarios, just remember to invoke the ToDynamic() method, and access whatever you want: StaticType result = someValue.ToDynamic()._field.Method().Property[index]; In some special scenarios which require changing the value of a struct (value type), this DynamicWrapper<T> does not work perfectly. Only changing a struct's field value is supported. The source code can be downloaded from here, including some unit test code.
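    To see the fluent API end to end, here is a small self-contained demo (the Person type is mine, not from the post; it assumes the DynamicWrapper<T>/ToDynamic() source above is referenced):

    using System;

    public class Person
    {
        // Private members that normal code cannot touch.
        private string secret = "hidden";
        private string Greet(string name) { return "Hello, " + name; }
    }

    internal static class Demo
    {
        private static void Main()
        {
            Person person = new Person();

            // Fluent reflection over private members; TryConvert unwraps the
            // results into the static types on the left-hand side.
            string secret = person.ToDynamic().secret;
            string greeting = person.ToDynamic().Greet("World");

            Console.WriteLine(secret);   // hidden
            Console.WriteLine(greeting); // Hello, World
        }
    }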

    Read the article

  • With WebMatrix, How do I Connect to a MySQL Database on a Colleague's Machine?

    - by Ash Clarke
    I have scoured Google trying to discover how to do this, but essentially I want to connect to a colleague's MySQL database so we can work together on a WordPress installation. I am having no luck and keep getting an error about the connection not being possible: Unable to connect to any of the specified MySQL hosts. MySql.Data.MySqlClient.MySqlException (0x80004005): Unable to connect to any of the specified MySQL hosts.
    at MySql.Data.MySqlClient.NativeDriver.Open()
    at MySql.Data.MySqlClient.Driver.Open()
    at MySql.Data.MySqlClient.Driver.Create(MySqlConnectionStringBuilder settings)
    at MySql.Data.MySqlClient.MySqlPool.GetPooledConnection()
    at MySql.Data.MySqlClient.MySqlPool.TryToGetDriver()
    at MySql.Data.MySqlClient.MySqlPool.GetConnection()
    at MySql.Data.MySqlClient.MySqlConnection.Open()
    at Microsoft.WebMatrix.DatabaseManager.MySqlDatabase.MySqlDatabaseProvider.TestConnection(String connectionString)
    at Microsoft.WebMatrix.DatabaseManager.IisDbManagerModuleService.TestConnection(DatabaseConnection databaseConnection, String configPathState)
    at Microsoft.WebMatrix.DatabaseManager.Client.ClientConnection.Test(ManagementConfigurationPath configPath)
    at Microsoft.WebMatrix.DatabaseManager.Client.DatabaseHierarchyInfo.EnsureLoaded()
    The connection details are copied from my colleague's connection string, with the exception of the server being modified to match the IP address of his machine. I'm not sure if there is a firewall port I have to open or a configuration file I have to modify, but I'm not having much luck so far. (There is a strong chance that, by default, WebMatrix / IIS Express doesn't set the MySQL database it creates to accept remote connections. If anyone knows how to change this, that would be grand!) Anyone have any ideas?
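    As a hedged sketch, two things usually have to change on the colleague's machine for this error: MySQL's bind-address (often 127.0.0.1 in my.ini) must be a LAN-reachable address and TCP port 3306 must be open in the firewall, and a MySQL account must be granted remote access. The database, user and subnet below are placeholders:

    -- Run on the colleague's MySQL server (MySQL 5.x syntax).
    GRANT ALL PRIVILEGES ON wordpress.* TO 'ash'@'192.168.1.%' IDENTIFIED BY 'secret';
    FLUSH PRIVILEGES;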

    Read the article

  • ColdFusion 9 losing connection to MySQL 5 database server a couple of weeks after the server is started

    - by user1503757
    We get the following ColdFusion error message after our server has been running for a couple of weeks: Error Executing Database Query. Could not create connection to database server. Attempted reconnect 3 times. We run ColdFusion Enterprise 9 on a one-year-old Xserve with Snow Leopard and MySQL 5. The server has about ten DSNs set up in the ColdFusion Administrator, all local, with default advanced settings, and the host set to "localhost". The server is not under heavy load. The strange thing is that after a restart of the server, everything works fine. Then after a week or so, some databases will stop working, in the sense that ColdFusion cannot create a connection to them. If I then go to the ColdFusion Administrator and click "Verify all datasources", only 2 or 3 get verified and the others fail, and it is always the same datasources that can't be verified if I verify again while the server is behaving like this, BUT NOT necessarily the same datasources that couldn't be verified the last time the server behaved like this. I know about the "max_connections" setting; we have included a line for it in the MySQL config file and set it to 2000, and when we read it back with a query it says "2000", so that can't be the problem. Anyone?
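    One hedged line of investigation (a diagnostic sketch, not a confirmed fix): MySQL silently drops connections that sit idle longer than wait_timeout (28800 seconds, i.e. 8 hours, by default), and a pooled ColdFusion datasource can then hand out dead connections; comparing these server-side numbers against the datasource timeout settings in the ColdFusion Administrator is a cheap first check:

    -- Diagnostic queries to run against the MySQL server.
    SHOW VARIABLES LIKE 'wait_timeout';   -- idle connections are closed after this many seconds
    SHOW STATUS LIKE 'Aborted_clients';   -- climbs when the server kills idle or broken connections
    SHOW PROCESSLIST;                     -- what is connected right now, and for how long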

    Read the article

  • How to get just a value from a database query in Excel?

    - by Corin
    I'm creating a spreadsheet as a collection point for information from a number of MS Access databases. I will run a query on each database to get a count of records in a particular table. Each database has the same structure but different content, as they are used in different situations. So the query returns a single value, rec_count. I've figured out how to create that query, save it and then use it as the data source. So far so good. The problem is that Excel treats the query results as a table, so instead of getting just the single value the query returns, I also get the field name; the result takes up two cells instead of one. When linking in the data source, I only see Table, PivotTable Report and PivotChart as options for viewing the data. I don't want any of those. I just want the single value without any formatting, column headers, etc. Is there a way to do this in Excel 2007?

    Read the article

  • Is Ubuntu a bad distro for a standalone MySQL database server?

    - by DhruvPathak
    I read an article here: http://www.mysqlperformanceblog.com/2011/12/08/which-linux-distribution-for-mysql-server/ "On the other end there are Debian and Ubuntu. Both use tool called dpkg for package management. There isn’t a month that I log in to a system based on either distribution where there are no issues with packages consistency. Unfinished installations, unresolved conflicts are so common that it’s just beyond simple negligence. The packaging system is just not robust enough. Another problem is that one broken package may block you from installing or uninstalling anything else. Imagine that someone left system in such shape, you prepared for downtime, stopped MySQL and… error – text editor has not been properly installed, so you cannot upgrade MySQL either until the problem is fixed. In a stressful situation when downtime clock ticks – annoying at best." We prefer Ubuntu Server because of familiarity, and because Ubuntu is also our development environment. Questions: Is Ubuntu commonly used in production for a MySQL database server? Is it ever worth the trouble to run one distro (e.g. Ubuntu) on the web server and another (say, Red Hat) on the database server? Or is a homogeneous server pool a better choice?

    Read the article

  • Using the ASP.NET Membership API with SQL Server / SQL Azure: The new “System.Web.Providers” namespace

    - by Harish Ranganathan
    The Membership API came in .NET 2.0 and was a huge enhancement for building web applications with users, managing roles, permissions etc. The Membership API by default uses SQL Express, and until Visual Studio 2008 it was available only through the ASP.NET Configuration manager screen (Website – ASP.NET Configuration, or Project – ASP.NET Configuration); for every application, one had to manually visit this place to start using the Security and other settings. Upon doing that, the default SQL Express database aspnet.mdf is created to store all the user profiles. Starting with Visual Studio 2010 and .NET 4.0, the default website template includes the Membership API controls as a part of the page, i.e. when you create a "File – New – ASP.NET Web Application" or an "ASP.NET MVC Application", by default the Login/Register controls are enabled in the MasterPage, and they are configured under the "ApplicationServices" setting in the web.config file with the connection string pointed to the SQL Express database. In fact, when you run the default website, click on "Logon" –> "Register", enter the details for registration and click "Register", that is the moment the aspnet.mdf file is created with the tables for Users, Roles, UsersInRoles, Profile etc. Now, this uses the default SQL Express database within the App_Data folder. If you want to move your Membership information to some other database such as SQL Server, SQL CE or SQL Azure, you need to manually run the aspnet_regsql command and specify the destination database name. This creates all the Tables, Procedures and Views required to handle the Membership information. Thereafter you can change the connection string for "ApplicationServices" to point to the database where you ran all the scripts. Now, enter "System.Web.Providers" Alpha. This is available as a part of the NuGet package library. Scott Hanselman has a neat post describing the steps required to get it up and running, as well as the basic changes, at http://www.hanselman.com/blog/IntroducingSystemWebProvidersASPNETUniversalProvidersForSessionMembershipRolesAndUserProfileOnSQLCompactAndSQLAzure.aspx Pretty much, it covers what the new System.Web.Providers do. One thing I wanted to clarify is that the new "System.Web.Providers" add a lot of new settings, which are also marked as the defaults, in the web.config. Even now, they use SQL Express as the default database. But if you change the connection string for "DefaultConnection" under connectionStrings to point to your SQL Server or SQL Azure, the Membership API will now be able to create all the tables, procedures and views at the destination specified (i.e. SQL Server or SQL Azure). In my case, I modified the DefaultConnection to point to my SQL Azure database. Next, I hit F5 to run the application. The default view loads. I clicked on "LogOn" and then "Register", since I knew there were no tables/users yet. One thing to note is that I had put "NewDB" as the database name in the connection string that points to SQL Azure. NewDB didn't exist yet, and I assumed it would be created before the tables/views/procedures for Membership were created. Once I clicked "Register" to register my first username, it took a while, then registered me and logged me in.
    Also, I went to the SQL Azure Management Portal and verified that "NewDB" had indeed just been created. I could also connect to the SQL Azure database "NewDB" from Management Studio and found that the tables now don't have the aspnet_ prefix. The tables were simply Users, Roles, UsersInRoles, Profiles etc. So, with a few clicks and a configuration change, I could actually set up the user base for my application on SQL Azure, and even have the SessionState, Roles and Profiles stored in the SQL Azure database. The new System.Web.Providers also require MARS (MultipleActiveResultSets=true) in the connection string, since they use Entity Framework for the DAL operations. Also, the "Project – ASP.NET Configuration" screen can still be used to further create/manage users/roles etc., although the data is stored in the remote database. With that, a long-pending request from the community, the ability to configure and use remote databases for application user management without having to run the scripts from SQL Express, is fulfilled. Cheers !!!
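    A quick way to confirm what the post describes, once connected to NewDB from Management Studio (NewDB being the database name used above), is to list the tables the new providers created; a small sketch:

    -- The universal providers drop the old aspnet_ prefix, so this should
    -- list tables such as Users, Roles, UsersInRoles and Profiles.
    SELECT name FROM sys.tables ORDER BY name;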

    Read the article

  • Why Cornell University Chose Oracle Data Masking

    - by Troy Kitch
    One of the eight Ivy League schools, Cornell University found itself in the unfortunate position of having to inform over 45,000 University community members that their personal information had been breached when a laptop was stolen. To make sure this wouldn’t happen again, Cornell took steps to ensure that data used for non-production purposes is de-identified with Oracle Data Masking. A recent podcast highlights why organizations like Cornell are choosing Oracle Data Masking to irreversibly de-identify production data for use in non-production environments. Organizations often copy production data that contains sensitive information into non-production environments so they can test applications and systems using “real world” information. Data in non-production has increasingly become a target of cyber criminals and can be lost or stolen due to weak security controls and unmonitored access. As in production environments, data breaches in non-production environments can cost millions of dollars to remediate and cause irreparable harm to reputation and brand. Cornell’s applications and databases help carry out the administrative and academic mission of the university. They are running Oracle PeopleSoft Campus Solutions that include highly sensitive faculty, student, alumni, and prospective student data. This data is supported and accessed by a diverse set of developers and functional staff distributed across the university. Several years ago, Cornell experienced a data breach when an employee’s laptop was stolen. Centrally stored backup information indicated there was sensitive data on the laptop. With no way of knowing what the criminal intended, the university had to spend significant resources reviewing data, setting up service centers to handle constituent concerns, and providing free credit checks and identity theft protection services, all of which cost money and took time away from other projects. To avoid this issue in the future, Cornell came up with several options, one of which was to sanitize the testing and training environments. “The project management team was brought in and they developed a project plan and implementation schedule; part of which was to evaluate competing products in the market-space and figure out which one would work best for us. In the end we chose Oracle’s solution based on its architecture and its functionality.” – Tony Damiani, Database Administration and Business Intelligence, Cornell University. The key goals of the project were to mask the elements that were identifiable as sensitive in a consistent and efficient manner, while still supporting all the previous activities in the non-production environments. Tony concludes, “What we saw was a very minimal impact on performance. The masking process added an additional three hours to our refresh window, but it was well worth that time to secure the environment and remove the sensitive data. I think some other key points you can keep in mind here is that there was zero impact on the production environment. Oracle Data Masking works in non-production environments only.
    Additionally, the risk of exposure has been significantly reduced and the impact to business was minimal.” With Oracle Data Masking, organizations like Cornell can:
    - Make application data securely available in non-production environments
    - Prevent application developers and testers from seeing production data
    - Use an extensible template library and policies for data masking automation
    - Gain the benefits of referential integrity so that applications continue to work
    Listen to the podcast to hear the complete interview. Learn more about Oracle Data Masking by registering to watch this SANS Institute Webcast and view this short demo.

    Read the article

  • Scrambling Sensitive Data in E-Business Suite Release 12 Cloned Environments

    - by Elke Phelps (Oracle Development)
    Securing the Oracle E-Business Suite includes protecting the underlying E-Business data in production and non-production databases. While steps can be taken to provide a secure configuration to limit EBS access, a better approach to protecting non-production data is simply to scramble (mask) the data in the non-production copy. You can use the Oracle Data Masking Pack with Oracle Enterprise Manager today to scramble sensitive data in cloned environments. Due to data dependencies, scrambling E-Business Suite data is not a trivial task. The data needs to be scrubbed in such a way that the application continues to function. Using the Data Masking Pack in E-Business Suite environments is now easier with the release of a new set of templates for E-Business Suite databases: Oracle E-Business Suite Release 12.1.3 Template for Data Masking Pack (Patch 13898999). This template works with the Oracle Data Masking Pack and Oracle Enterprise Manager to obscure sensitive E-Business Suite information that is copied from production to non-production environments. Is there a charge for this? Yes. You must purchase licenses for Oracle Enterprise Manager and the Oracle Data Masking Pack plug-in. The Oracle E-Business Suite 12.1.3 Template for the Data Masking Pack is included with the Oracle Data Masking Pack license. You can contact your Oracle account manager for more details about licensing. What does data masking do in E-Business Suite environments? Application data masking does the following:
    - De-identify the data: Scramble identifiers of individuals, also known as personally identifiable information or PII. Examples include information such as name, account, address, location, and driver's license number.
    - Mask sensitive data: Mask data that, if associated with personally identifiable information (PII), would cause privacy concerns. Examples include compensation, health and employment information.
    - Maintain data validity: Provide a fully functional application.
    How can EBS customers use data masking? The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack. When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing. References: For additional information on the Oracle E-Business Suite Template for Data Masking Pack, please refer to the following:
    - Masking Sensitive Data for Non-production Use, in the Oracle Enterprise Manager Concepts 11g
    - Using the Oracle E-Business Suite, Release 12.1.3 Template for the Data Masking Pack, Note 1437485.1
    Related Articles:
    - Webcast Replay Available: E-Business Suite Data Protection
    - Oracle E-Business Suite Plug-in 4.0 Released for OEM 11g (11.1.0.1)

    Read the article

  • DDD and Value Objects: Are mutable Value Objects a good candidate for a non-aggregate-root Entity?

    - by Tony
    Here is a little problem. I have an entity with a value object. Not a problem: I replace the value object with a new one, and NHibernate inserts the new value, orphans the old one, then deletes it. OK, that's a problem. Insured is an entity in my domain. It has a collection of Addresses (value objects). One of the addresses is the MailingAddress. When we want to update the mailing address, let's say because the zipcode was wrong, then following Mr. Evans' doctrine we must replace the old object with a new one, since it's immutable (a value object, right?). But we don't want to delete the row, though, because that address's PK is a FK in a MailingHistory table. So, following Mr. Evans' doctrine, we are pretty much screwed here. Unless I make my addresses Entities, so I don't have to "replace" them and can simply update the zipcode member, like in the good old days. What would you suggest in this case? The way I see it, value objects are only useful when you want to encapsulate a group of database table columns (a component in NHibernate). Everything that has a persistence id in the database is better off as an Entity (not necessarily an aggregate root), so you can update its members without recreating the whole object graph, especially if it's a deeply nested object. Do you concur? Is it allowed by Mr. Evans to have a mutable value object? Or is a mutable value object a candidate for an Entity? Thanks
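    For concreteness, a minimal sketch of the replace-versus-mutate trade-off under discussion (the Insured/Address shapes are hypothetical, not taken from the question):

    // Immutable value object: "changing" it means producing a new instance.
    public sealed class Address
    {
        public string Street { get; private set; }
        public string ZipCode { get; private set; }

        public Address(string street, string zipCode)
        {
            Street = street;
            ZipCode = zipCode;
        }

        public Address WithZipCode(string zipCode)
        {
            return new Address(Street, zipCode);
        }
    }

    // Entity: has identity, holds the value object.
    public class Insured
    {
        public int Id { get; private set; }
        public Address MailingAddress { get; private set; }

        public void CorrectMailingZipCode(string zipCode)
        {
            // Replaces the whole value object; with NHibernate this is
            // exactly what produces the insert-new/delete-old behavior
            // the question describes.
            MailingAddress = MailingAddress.WithZipCode(zipCode);
        }
    }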

    Read the article

  • Essbase BSO Data Fragmentation

    - by Ann Donahue
    Essbase BSO Data Fragmentation Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases where there are a lot of end user data updates, incremental data loads, many lock and send operations, and/or many calculations executed. If an Essbase database starts to experience performance slow-downs, this is an indication that there may be too much fragmentation. See Chapter 54, Improving Essbase Performance, in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html Fragmentation is likely to occur in the following situations:
    - Read/write databases where users are constantly updating data
    - Databases that execute calculations around the clock
    - Databases that frequently update and recalculate dense members
    - Data loads that are poorly designed
    - Databases that contain a significant number of Dynamic Calc and Store members
    - Databases that use an isolation level of uncommitted access with commit block set to zero
    There are two types of data block fragmentation:
    - Free space tracking, which is measured using the Average Fragmentation Quotient statistic.
    - Block order on disk, which is measured using the Average Cluster Ratio statistic.
    Average Fragmentation Quotient: The Average Fragmentation Quotient ratio measures free space in a given database. As you update and calculate data, empty spaces occur when a block can no longer fit in its original space; it is either appended at the end of the file or placed in another empty space that is large enough. These empty spaces take up space in the .PAG files. The higher the number, the more empty spaces you have, and therefore the bigger the .PAG file and the longer it takes to traverse the .PAG file to get to a particular record. An Average Fragmentation Quotient value of 3.174765 means the database is 3% fragmented with free space.
    Average Cluster Ratio: The Average Cluster Ratio describes the order in which the blocks actually exist in the database. An Average Cluster Ratio of 1 means all the blocks are ordered in the correct sequence, in the order of the Outline. As you load data and calculate data blocks, the sequence can start to fall out of order, because when a block is written it may not be placed back in the exact same spot in the database it occupied before. The lower this number, the more out of order the blocks become, and the more performance is affected. An Average Cluster Ratio value of 1 means no fragmentation; any value lower than 1 (e.g. 0.01032828) means the data blocks are getting further out of order relative to the outline order.
    Eliminating Data Block Fragmentation: Both types of data block fragmentation can be removed by doing a dense restructure or an export/clear/import of the data. There are two types of dense restructure:
    1. Implicit restructures: An implicit dense restructure happens when outline changes are made using the EAS Outline Editor or Dimension Build. Essbase restructures create new .PAG files, restructuring the data blocks in the .PAG files. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed by implicit restructures.
    2. Explicit restructures: An explicit dense restructure happens when a database restructure is initiated manually. An explicit dense restructure is a full restructure, comprising a dense restructure as outlined above plus the removal of empty blocks.
    Empty Blocks vs. Fragmentation: The existence of empty blocks is not considered fragmentation. Empty blocks can be created through calc scripts or formulas. An empty block adds to the existing database block count and is included in the block counts of the database properties. There are no statistics for empty blocks. The only way to determine whether empty blocks exist in an Essbase database is to record your current block count, export the entire database, clear the database, then import the exported data. If the block count decreased, the difference is the number of empty blocks that had existed in the database.

    Read the article

  • RHEL - blocked FC remote port time out: saving binding

    - by Dev G
    My server went into a faulty state because the database could not write to the partition; I found out that the partition had gone into read-only mode. To fix it, I finally had to do a hard reboot.

    Linux 2.6.18-164.el5PAE #1 SMP Tue Aug 18 15:59:11 EDT 2009 i686 i686 i386 GNU/Linux

    /var/log/messages:

    Oct 31 00:56:45 ota3g1 Had[17275]: VCS ERROR V-16-1-10214 Concurrency Violation:CurrentCount increased above 1 for failover group sg_network
    Oct 31 00:57:05 ota3g1 Had[17275]: VCS CRITICAL V-16-1-50086 CPU usage on ota3g1.mtsallstream.com is 100%
    Oct 31 01:01:47 ota3g1 Had[17275]: VCS ERROR V-16-1-10214 Concurrency Violation:CurrentCount increased above 1 for failover group sg_network
    Oct 31 01:06:50 ota3g1 Had[17275]: VCS ERROR V-16-1-10214 Concurrency Violation:CurrentCount increased above 1 for failover group sg_network
    Oct 31 01:11:52 ota3g1 Had[17275]: VCS ERROR V-16-1-10214 Concurrency Violation:CurrentCount increased above 1 for failover group sg_network
    Oct 31 01:12:10 ota3g1 kernel: lpfc 0000:29:00.1: 1:1305 Link Down Event x2 received Data: x2 x20 x80000 x0 x0
    Oct 31 01:12:10 ota3g1 kernel: lpfc 0000:29:00.1: 1:1303 Link Up Event x3 received Data: x3 x1 x10 x1 x0 x0 0
    Oct 31 01:12:12 ota3g1 kernel: lpfc 0000:29:00.1: 1:1305 Link Down Event x4 received Data: x4 x20 x80000 x0 x0
    Oct 31 01:12:40 ota3g1 kernel: rport-8:0-0: blocked FC remote port time out: saving binding
    Oct 31 01:12:40 ota3g1 kernel: lpfc 0000:29:00.1: 1:(0):0203 Devloss timeout on WWPN 20:25:00:a0:b8:74:f5:65 NPort x0000e4 Data: x0 x7 x0
    Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
    Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 38617577
    Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
    Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 283532153
    Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
    Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 90825
    Oct 31 01:12:40 ota3g1 kernel: Aborting journal on device dm-16.
    Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
    Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 868841
    Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
    Oct 31 01:12:40 ota3g1 kernel: Aborting journal on device dm-10.
    Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 37759889
    Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
    Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 283349449
    Oct 31 01:12:40 ota3g1 kernel: printk: 6 messages suppressed.
    Oct 31 01:12:40 ota3g1 kernel: Aborting journal on device dm-12.
    Oct 31 01:12:40 ota3g1 kernel: EXT3-fs error (device dm-12) in ext3_reserve_inode_write: Journal has aborted
    Oct 31 01:12:40 ota3g1 kernel: Buffer I/O error on device dm-16, logical block 1545
    Oct 31 01:12:40 ota3g1 kernel: lost page write due to I/O error on dm-16
    Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
    Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 12745
    Oct 31 01:12:40 ota3g1 kernel: Buffer I/O error on device dm-10, logical block 1545
    Oct 31 01:12:40 ota3g1 kernel: EXT3-fs error (device dm-16) in ext3_reserve_inode_write: Journal has aborted
    Oct 31 01:12:40 ota3g1 kernel: lost page write due to I/O error on dm-10
    Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
    Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 37749121
    Oct 31 01:12:40 ota3g1 kernel: Buffer I/O error on device dm-12, logical block 0
    Oct 31 01:12:40 ota3g1 kernel: lost page write due to I/O error on dm-12
    Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
    Oct 31 01:12:40 ota3g1 kernel: EXT3-fs error (device dm-12) in ext3_dirty_inode: Journal has aborted
    Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 37757897
    Oct 31 01:12:40 ota3g1 kernel: Buffer I/O error on device dm-12, logical block 1097
    Oct 31 01:12:40 ota3g1 kernel: lost page write due to I/O error on dm-12
    Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
    Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 283337089
    Oct 31 01:12:40 ota3g1 kernel: Buffer I/O error on device dm-16, logical block 0
    Oct 31 01:12:40 ota3g1 kernel: lost page write due to I/O error on dm-16
    Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
    Oct 31 01:12:40 ota3g1 kernel: EXT3-fs error (device dm-16) in ext3_dirty_inode: Journal has aborted
    Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 37749121
    Oct 31 01:12:40 ota3g1 kernel: Buffer I/O error on device dm-12, logical block 0
    Oct 31 01:12:41 ota3g1 kernel: lost page write due to I/O error on dm-12
    Oct 31 01:12:41 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
    Oct 31 01:12:41 ota3g1 kernel: end_request: I/O error, dev sdi, sector 283337089
    Oct 31 01:12:41 ota3g1 kernel: Buffer I/O error on device dm-16, logical block 0
    Oct 31 01:12:41 ota3g1 kernel: lost page write due to I/O error on dm-16
    Oct 31 01:12:41 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000

    df -h:

    Filesystem                          Size  Used Avail Use% Mounted on
    /dev/mapper/cciss-root              4.9G  730M  3.9G  16% /
    /dev/mapper/cciss-home              9.7G  1.2G  8.1G  13% /home
    /dev/mapper/cciss-var               9.7G  494M  8.8G   6% /var
    /dev/mapper/cciss-usr                15G  2.6G   12G  19% /usr
    /dev/mapper/cciss-tmp               3.9G  153M  3.6G   5% /tmp
    /dev/sda1                           996M   43M  902M   5% /boot
    tmpfs                               5.9G     0  5.9G   0% /dev/shm
    /dev/mapper/cciss-product            25G   16G  7.4G  68% /product
    /dev/mapper/cciss-opt                20G  4.5G   14G  25% /opt
    /dev/mapper/dg_db1-vol_db1_system    18G  2.2G   15G  14% /database/OTADB/sys
    /dev/mapper/dg_db1-vol_db1_undo      18G  5.8G   12G  35% /database/OTADB/undo
    /dev/mapper/dg_db1-vol_db1_redo     8.9G  4.3G  4.2G  51% /database/OTADB/redo
    /dev/mapper/dg_db1-vol_db1_sgbd     8.9G  654M  7.8G   8% /database/OTADB/admin
    /dev/mapper/dg_db1-vol_db1_arch      98G   24G   69G  26% /database/OTADB/arch
    /dev/mapper/dg_db1-vol_db1_indexes  240G   14G  214G   6% /database/OTADB/index
    /dev/mapper/dg_db1-vol_db1_data     275G   47G  215G  18% /database/OTADB/data
    /dev/mapper/dg_dbrman-vol_db_rman   8.9G  351M  8.1G   5% /database/RMAN
    /dev/mapper/dg_app1-vol_app1        151G  113G   31G  79% /files/ota

    /etc/fstab:

    /dev/cciss/root / ext3 defaults 1 1
    /dev/cciss/home /home ext3 defaults 1 2
    /dev/cciss/var /var ext3 defaults 1 2
    /dev/cciss/usr /usr ext3 defaults 1 2
    /dev/cciss/tmp /tmp ext3 defaults 1 2
    LABEL=/boot /boot ext3 defaults 1 2
    tmpfs /dev/shm tmpfs defaults 0 0
    devpts /dev/pts devpts gid=5,mode=620 0 0
    sysfs /sys sysfs defaults 0 0
    proc /proc proc defaults 0 0
    /dev/cciss/swap swap swap defaults 0 0
    /dev/cciss/product /product ext3 defaults 1 2
    /dev/cciss/opt /opt ext3 defaults 1 2
    /dev/dg_db1/vol_db1_system /database/OTADB/sys ext3 defaults 1 2
    /dev/dg_db1/vol_db1_undo /database/OTADB/undo ext3 defaults 1 2
    /dev/dg_db1/vol_db1_redo /database/OTADB/redo ext3 defaults 1 2
    /dev/dg_db1/vol_db1_sgbd /database/OTADB/admin ext3 defaults 1 2
    /dev/dg_db1/vol_db1_arch /database/OTADB/arch ext3 defaults 1 2
    /dev/dg_db1/vol_db1_indexes /database/OTADB/index ext3 defaults 1 2
    /dev/dg_db1/vol_db1_data /database/OTADB/data ext3 defaults 1 2
    /dev/dg_dbrman/vol_db_rman /database/RMAN ext3 defaults 1 2
    /dev/dg_app1/vol_app1 /files/ota ext3 defaults 1 2

    Thanks for all the help.

    Read the article

  • Is it possible to run dhcpd3 as a non-root user in a chroot jail?

    - by Lenain
    Hi everyone. I would like to run dhcpd3 from a chroot jail on Debian Lenny. At the moment, I can run it as root from my jail. Now I want to do this as a non-root user (like Bind's "-u blah -t /path/to/jail" options). If I start my process like this:

    start-stop-daemon --chroot /home/jails/dhcp --chuid dhcp \
    --start --pidfile /home/jails/dhcp/var/run/dhcp.pid --exec /usr/sbin/dhcpd3

    I get stuck with these errors:

    Internet Systems Consortium DHCP Server V3.1.1
    Copyright 2004-2008 Internet Systems Consortium.
    All rights reserved.
    For info, please visit http://www.isc.org/sw/dhcp/
    unable to create icmp socket: Operation not permitted
    Wrote 0 deleted host decls to leases file.
    Wrote 0 new dynamic host decls to leases file.
    Wrote 0 leases to leases file.
    Open a socket for LPF: Operation not permitted

    strace:

    brk(0) = 0x911b000
    fcntl64(0, F_GETFD) = 0
    fcntl64(1, F_GETFD) = 0
    fcntl64(2, F_GETFD) = 0
    access("/etc/suid-debug", F_OK) = -1 ENOENT (No such file or directory)
    access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
    mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb775d000
    access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
    open("/etc/ld.so.cache", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/lib/tls/i686/cmov/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/lib/tls/i686/cmov", 0xbfc2ac84) = -1 ENOENT (No such file or directory)
    open("/lib/tls/i686/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/lib/tls/i686", 0xbfc2ac84) = -1 ENOENT (No such file or directory)
    open("/lib/tls/cmov/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/lib/tls/cmov", 0xbfc2ac84) = -1 ENOENT (No such file or directory)
    open("/lib/tls/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/lib/tls", 0xbfc2ac84) = -1 ENOENT (No such file or directory)
    open("/lib/i686/cmov/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/lib/i686/cmov", 0xbfc2ac84) = -1 ENOENT (No such file or directory)
    open("/lib/i686/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/lib/i686", 0xbfc2ac84) = -1 ENOENT (No such file or directory)
    open("/lib/cmov/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/lib/cmov", 0xbfc2ac84) = -1 ENOENT (No such file or directory)
    open("/lib/libc.so.6", O_RDONLY) = 3
    read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\260e\1\0004\0\0\0t"..., 512) = 512
    fstat64(3, {st_mode=S_IFREG|0755, st_size=1294572, ...}) = 0
    mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb775c000
    mmap2(NULL, 1300080, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb761e000
    mmap2(0xb7756000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x138) = 0xb7756000
    mmap2(0xb7759000, 9840, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb7759000
    close(3) = 0
    mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb761d000
    set_thread_area({entry_number:-1 - 6, base_addr:0xb761d6b0, limit:1048575, seg_32bit:1, contents:0, read_exec_only:0, limit_in_pages:1, seg_not_present:0, useable:1}) = 0
    mprotect(0xb7756000, 4096, PROT_READ) = 0
    open("/dev/null", O_RDWR) = 3
    close(3) = 0
    brk(0) = 0x911b000
    brk(0x913c000) = 0x913c000
    socket(PF_FILE, SOCK_DGRAM, 0) = 3
    fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
    connect(3, {sa_family=AF_FILE, path="/dev/log"...}, 110) = 0
    time(NULL) = 1284760816
    open("/etc/localtime", O_RDONLY) = 4
    fstat64(4, {st_mode=S_IFREG|0644, st_size=2945, ...}) = 0
    fstat64(4, {st_mode=S_IFREG|0644, st_size=2945, ...}) = 0
    mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb761c000
    read(4, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\f\0\0\0\f\0\0\0\0\0"..., 4096) = 2945
    _llseek(4, -28, [2917], SEEK_CUR) = 0
    read(4, "\nCET-1CEST,M3.5.0,M10.5.0/3\n"..., 4096) = 28
    close(4) = 0
    munmap(0xb761c000, 4096) = 0
    stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2945, ...}) = 0
    send(3, "Sep 18 00:00:16 dhcpd: Intern"..., 73, MSG_NOSIGNAL) = 73
    write(2, "Internet Systems Consortium DHCP "..., 46Internet Systems Consortium DHCP Server V3.1.1) = 46
    write(2, "\n"..., 1 ) = 1
    time(NULL) = 1284760816
    stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2945, ...}) = 0
    send(3, "Sep 18 00:00:16 dhcpd: Copyri"..., 75, MSG_NOSIGNAL) = 75
    write(2, "Copyright 2004-2008 Internet Syst"..., 48Copyright 2004-2008 Internet Systems Consortium.) = 48
    write(2, "\n"..., 1 ) = 1
    time(NULL) = 1284760816
    stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2945, ...}) = 0
    send(3, "Sep 18 00:00:16 dhcpd: All ri"..., 47, MSG_NOSIGNAL) = 47
    write(2, "All rights reserved."..., 20All rights reserved.) = 20
    write(2, "\n"..., 1 ) = 1
    time(NULL) = 1284760816
    stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2945, ...}) = 0
    send(3, "Sep 18 00:00:16 dhcpd: For in"..., 77, MSG_NOSIGNAL) = 77
    write(2, "For info, please visit http://www"..., 50For info, please visit http://www.isc.org/sw/dhcp/) = 50
    write(2, "\n"..., 1 ) = 1
    socket(PF_FILE, SOCK_STREAM, 0) = 4
    fcntl64(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0
    connect(4, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory)
    close(4) = 0
    socket(PF_FILE, SOCK_STREAM, 0) = 4
    fcntl64(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0
    connect(4, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory)
    close(4) = 0
    open("/etc/nsswitch.conf", O_RDONLY) = 4
    fstat64(4, {st_mode=S_IFREG|0644, st_size=475, ...}) = 0
    mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb761c000
    read(4, "# /etc/nsswitch.conf\n#\n# Example "..., 4096) = 475
    read(4, ""..., 4096) = 0
    close(4) = 0
    munmap(0xb761c000, 4096) = 0
    open("/lib/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/lib/tls/i686/cmov/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib/tls/i686/cmov", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/usr/lib/tls/i686/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib/tls/i686", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/usr/lib/tls/cmov/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib/tls/cmov", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/usr/lib/tls/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib/tls", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/usr/lib/i686/cmov/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib/i686/cmov", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/usr/lib/i686/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib/i686", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/usr/lib/cmov/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib/cmov", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/usr/lib/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/lib/i486-linux-gnu/tls/i686/cmov/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/lib/i486-linux-gnu/tls/i686/cmov", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/lib/i486-linux-gnu/tls/i686/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/lib/i486-linux-gnu/tls/i686", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/lib/i486-linux-gnu/tls/cmov/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/lib/i486-linux-gnu/tls/cmov", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/lib/i486-linux-gnu/tls/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/lib/i486-linux-gnu/tls", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/lib/i486-linux-gnu/i686/cmov/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/lib/i486-linux-gnu/i686/cmov", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/lib/i486-linux-gnu/i686/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/lib/i486-linux-gnu/i686", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/lib/i486-linux-gnu/cmov/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/lib/i486-linux-gnu/cmov", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/lib/i486-linux-gnu/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/lib/i486-linux-gnu", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/usr/lib/i486-linux-gnu/tls/i686/cmov/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib/i486-linux-gnu/tls/i686/cmov", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/usr/lib/i486-linux-gnu/tls/i686/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib/i486-linux-gnu/tls/i686", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/usr/lib/i486-linux-gnu/tls/cmov/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib/i486-linux-gnu/tls/cmov", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/usr/lib/i486-linux-gnu/tls/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib/i486-linux-gnu/tls", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/usr/lib/i486-linux-gnu/i686/cmov/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib/i486-linux-gnu/i686/cmov", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/usr/lib/i486-linux-gnu/i686/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib/i486-linux-gnu/i686", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/usr/lib/i486-linux-gnu/cmov/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib/i486-linux-gnu/cmov", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/usr/lib/i486-linux-gnu/libnss_db.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    stat64("/usr/lib/i486-linux-gnu", 0xbfc2ad5c) = -1 ENOENT (No such file or directory)
    open("/lib/libnss_files.so.2", O_RDONLY) = 4
    read(4, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\320\30\0\0004\0\0\0\250"..., 512) = 512
    fstat64(4, {st_mode=S_IFREG|0644, st_size=38408, ...}) = 0
    mmap2(NULL, 41624, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 4, 0) = 0xb7612000
    mmap2(0xb761b000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 4, 0x8) = 0xb761b000
    close(4) = 0
    open("/etc/services", O_RDONLY|O_CLOEXEC) = 4
    fcntl64(4, F_GETFD) = 0x1 (flags FD_CLOEXEC)
    fstat64(4, {st_mode=S_IFREG|0644, st_size=18480, ...}) = 0
    mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7611000
    read(4, "# Network services, Internet styl"..., 4096) = 4096
    read(4, "9/tcp\t\t\t\t# Quick Mail Transfer Pr"..., 4096) = 4096
    read(4, "note\t1352/tcp\tlotusnotes\t# Lotus "..., 4096) = 4096
    read(4, "tion\nafs3-kaserver\t7004/udp\nafs3-"..., 4096) = 4096
    read(4, "backup\t2989/tcp\t\t\t# Afmbackup sys"..., 4096) = 2096
    read(4, ""..., 4096) = 0
    close(4) = 0
    munmap(0xb7611000, 4096) = 0
    time(NULL) = 1284760816
    open("/etc/protocols", O_RDONLY|O_CLOEXEC) = 4
    fstat64(4, {st_mode=S_IFREG|0644, st_size=2626, ...}) = 0
    mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7611000
    read(4, "# Internet (IP) protocols\n#\n# Upd"..., 4096) = 2626
    close(4) = 0
    munmap(0xb7611000, 4096) = 0
    socket(PF_INET, SOCK_RAW, IPPROTO_ICMP) = -1 EPERM (Operation not permitted)
    time(NULL) = 1284760816
    stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2945, ...}) = 0
    send(3, "Sep 18 00:00:16 dhcpd: unable"..., 80, MSG_NOSIGNAL) = 80
    write(2, "unable to create icmp socket: Ope"..., 53unable to create icmp socket: Operation not permitted) = 53
    write(2, "\n"..., 1 ) = 1
    open("/etc/dhcp3/dhcpd.conf", O_RDONLY) = 4
    lseek(4, 0, SEEK_END) = 1426
    lseek(4, 0, SEEK_SET) = 0
    read(4, "#----------------------------\n# G"..., 1426) = 1426
    close(4) = 0
    mmap2(NULL, 401408, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb75b0000
    mmap2(NULL, 401408, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb754e000
    mmap2(NULL, 401408, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb74ec000
    brk(0x916f000) = 0x916f000
    close(3) = 0
    socket(PF_FILE, SOCK_DGRAM, 0) = 3
    fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
    connect(3, {sa_family=AF_FILE, path="/dev/log"...}, 110) = 0
    time(NULL) = 1284760816
    stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2945, ...}) = 0
    send(3, "Sep 18 00:00:16 dhcpd: Inter"..., 74, MSG_NOSIGNAL) = 74
    time(NULL) = 1284760816
    stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2945, ...}) = 0
    send(3, "Sep 18 00:00:16 dhcpd: Copyr"..., 76, MSG_NOSIGNAL) = 76
    time(NULL) = 1284760816
    stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2945, ...}) = 0
    send(3, "Sep 18 00:00:16 dhcpd: All r"..., 48, MSG_NOSIGNAL) = 48
    time(NULL) = 1284760816
    stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2945, ...}) = 0
    send(3, "Sep 18 00:00:16 dhcpd: For i"..., 78, MSG_NOSIGNAL) = 78
    open("/var/lib/dhcp3/dhcpd.leases", O_RDONLY) = 4
    lseek(4, 0, SEEK_END) = 126
    lseek(4, 0, SEEK_SET) = 0
    read(4, "# The format of this file is docu"..., 126) = 126
    close(4) = 0
    open("/var/lib/dhcp3/dhcpd.leases", O_WRONLY|O_CREAT|O_APPEND, 0666) = 4
    fstat64(4, {st_mode=S_IFREG|0644, st_size=126, ...}) = 0
    mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb74eb000
    fstat64(4, {st_mode=S_IFREG|0644, st_size=126, ...}) = 0
    _llseek(4, 126, [126], SEEK_SET) = 0
    time(NULL) = 1284760816
    time(NULL) = 1284760816
    open("/var/lib/dhcp3/dhcpd.leases.1284760816", O_WRONLY|O_CREAT|O_TRUNC, 0664) = 5
    fcntl64(5, F_GETFL) = 0x1 (flags O_WRONLY)
    fstat64(5, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
    mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb74ea000
    _llseek(5, 0, [0], SEEK_CUR) = 0
    close(4) = 0
    munmap(0xb74eb000, 4096) = 0
    time(NULL) = 1284760816
stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2945, ...}) = 0 send(3, "Sep 18 00:00:16 dhcpd: Wrote"..., 70, MSG_NOSIGNAL) = 70 write(2, "Wrote 0 deleted host decls to lea"..., 42Wrote 0 deleted host decls to leases file.) = 42 write(2, "\n"..., 1 ) = 1 time(NULL) = 1284760816 stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2945, ...}) = 0 send(3, "Sep 18 00:00:16 dhcpd: Wrote"..., 74, MSG_NOSIGNAL) = 74 write(2, "Wrote 0 new dynamic host decls to"..., 46Wrote 0 new dynamic host decls to leases file.) = 46 write(2, "\n"..., 1 ) = 1 time(NULL) = 1284760816 stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2945, ...}) = 0 send(3, "Sep 18 00:00:16 dhcpd: Wrote"..., 58, MSG_NOSIGNAL) = 58 write(2, "Wrote 0 leases to leases file."..., 30Wrote 0 leases to leases file.) = 30 write(2, "\n"..., 1 ) = 1 write(5, "# The format of this file is docu"..., 126) = 126 fsync(5) = 0 unlink("/var/lib/dhcp3/dhcpd.leases~") = 0 link("/var/lib/dhcp3/dhcpd.leases", "/var/lib/dhcp3/dhcpd.leases~") = 0 rename("/var/lib/dhcp3/dhcpd.leases.1284760816", "/var/lib/dhcp3/dhcpd.leases") = 0 socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 4 ioctl(4, SIOCGIFCONF, {0 - 64, NULL}) = 0 ioctl(4, SIOCGIFCONF, {64, {{"lo", {AF_INET, inet_addr("127.0.0.1")}}, {"eth0", {AF_INET, inet_addr("192.168.0.10")}}}}) = 0 ioctl(4, SIOCGIFFLAGS, {ifr_name="lo", ifr_flags=IFF_UP|IFF_LOOPBACK|IFF_RUNNING}) = 0 ioctl(4, SIOCGIFFLAGS, {ifr_name="eth0", ifr_flags=IFF_UP|IFF_BROADCAST|IFF_RUNNING|IFF_MULTICAST}) = 0 ioctl(4, SIOCGIFHWADDR, {ifr_name="eth0", ifr_hwaddr=00:c0:26:87:55:c0}) = 0 socket(PF_PACKET, SOCK_PACKET, 768) = -1 EPERM (Operation not permitted) time(NULL) = 1284760816 stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2945, ...}) = 0 send(3, "Sep 18 00:00:16 dhcpd: Open "..., 74, MSG_NOSIGNAL) = 74 write(2, "Open a socket for LPF: Operation "..., 46Open a socket for LPF: Operation not permitted) = 46 write(2, "\n"..., 1 ) = 1 exit_group(1) = ? I understand that dhcpd wants to create sockets on port 67... but I don't know how to authorize that through the chroot. Any idea?

    Read the article

  • Preserving URLs after CMS migration with a redirect database and Apache httpd?

    - by stwissel
    We will migrate an entire intranet from one CMS to another. All URLs will change in an unpredictable pattern, but I can capture a file of original,new URL pairs that I can feed into anything. I have hundreds of thousands of URLs, not just a few hundred. What I would like to do: every URL that is not found (404) should be checked against the database, and if a new URL is found, a 301/308 should be issued instead. Some trickery to suggest similar pages in the 404 message when the lookup is unsuccessful would be an added bonus. Is that the right approach, or should I check for a redirect first on every request? How would I do that in Apache2? Is that a custom 404 module?
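    Checking the redirect map first on every request is usually simpler and cheaper than hooking the 404 handler: mod_rewrite can consult a DBM file compiled from the original,new list, and DBM lookups stay fast with hundreds of thousands of entries. A sketch, assuming the list is converted with httxt2dbm and the rules live in server or vhost context (the map name and paths are illustrative):

        # httxt2dbm -i redirects.txt -o /etc/apache2/redirects.dbm
        RewriteEngine On
        RewriteMap cmsmap dbm:/etc/apache2/redirects.dbm
        # issue a 301 only when the requested URL exists in the map
        RewriteCond ${cmsmap:%{REQUEST_URI}|NOT_FOUND} !NOT_FOUND
        RewriteRule .* ${cmsmap:%{REQUEST_URI}} [R=301,L]

    Requests that miss the map fall through untouched, so a custom ErrorDocument 404 page can still handle the "suggest similar pages" part.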

    Read the article

  • Why doesn't my Postgres user have permissions to add a Postgres database?

    - by orokusaki
    First, I ran sudo su postgres and then createuser -U postgres foouser -P, which worked fine. Then I ran createdb -U foouser -E utf8 -O foouser foodatabase -T template0 and got "permission denied: cannot create database". Firstly, should I even su as postgres to do operations like the first one (assuming my postgres data dir is owned by postgres), or is -U postgres from any user (assuming trust is used in pg_hba.conf) sufficient? Secondly, why am I running into this error? Is it because the user foouser is a non-superuser? Should I create foodatabase using the postgres user and simply pass -O foouser?
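    For what it's worth, the second error is indeed a role-privilege issue, assuming default settings: createuser without -d/--createdb makes a role that may not create databases, and foouser is not a superuser. Either of these sketches should work (names taken from the question):

        -- as postgres: let foouser create databases itself ...
        ALTER ROLE foouser CREATEDB;
        -- ... or create the database on foouser's behalf and hand over ownership
        CREATE DATABASE foodatabase OWNER foouser ENCODING 'UTF8' TEMPLATE template0;

    As for the first question: running the client tools as any OS user with -U postgres is sufficient when pg_hba.conf trusts that connection; becoming the postgres OS user is only needed for file-level work in the data directory.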

    Read the article

  • Is it possible to back up an SQL database hosted on an Azure VM to our internal DPM 2012?

    - by Florent Courtay
    I've got an SQL database on an Azure VM (non-domain) that I'd like to back up to our internal DPM 2012 server. I've installed the DPM agent on the Azure VM, set up DCOM to use only ports 5000 to 5025 on both the VM and the DPM server, and created the 135, 5000-5025, 5718 and 5719 endpoints on Azure and on the VM's firewall. When trying to add this agent to the DPM server, I end up with an error: "Unable to contact the protection agent on server .cloudapp.net". I know there is some sort of connection between them, as using a wrong password gives me an Invalid Credentials error. The error seems to be DCOM related: when trying to connect to the Azure VM from the DPM server using VBEMTest, I get an error "0x800706ba The RPC server is unavailable", but access is denied when using wrong credentials. What am I missing? Has anyone been able to achieve this kind of setup? Thanks for your help!
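    One hedged suggestion, since I cannot verify it against this exact setup: DPM 2012 does not attach workgroup machines the normal way. The agent on the non-domain server has to be bound to the DPM server with local credentials, and the server is then attached as a non-domain computer from the DPM side, roughly like this (Dpm01 and dpmuser are placeholders):

        REM on the Azure VM, from the agent's bin folder (elevated)
        SetDpmServer.exe -dpmServerName Dpm01 -isNonDomainServer -userName dpmuser

        REM on the DPM server, in the DPM Management Shell
        Attach-NonDomainServer.ps1 -DPMServername Dpm01 -PSName yourvm.cloudapp.net -Username dpmuser

    Also note that DCOM negotiates additional dynamic callback ports beyond the ones you opened, which through the Azure NAT would explain the 0x800706ba "RPC server is unavailable" error even when authentication (and hence your Invalid Credentials test) gets through.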

    Read the article

  • Extending JavaScript's Date.parse to allow for DD/MM/YYYY (non-US formatted dates)?

    - by Campbeln
    I've come up with this solution for extending JavaScript's Date.parse function to allow for dates formatted in DD/MM/YYYY (rather than the American standard [and default] MM/DD/YYYY):

        (function() {
            var fDateParse = Date.parse;
            Date.parse = function(sDateString) {
                var a_sLanguage = ['en','en-us'],
                    a_sMatches = null,
                    sCurrentLanguage,
                    dReturn = null,
                    i;

                //#### Traverse the a_sLanguages (as reported by the browser)
                for (i = 0; i < a_sLanguage.length; i++) {
                    //#### Collect the .toLowerCase'd sCurrentLanguage for this loop
                    sCurrentLanguage = (a_sLanguage[i] + '').toLowerCase();

                    //#### If this is the first English definition
                    if (sCurrentLanguage.indexOf('en') == 0) {
                        //#### If this is a definition for a non-American based English (meaning dates are "DD MM YYYY")
                        if (sCurrentLanguage.indexOf('en-us') == -1 && // en-us = English (United States) + Palau, Micronesia, Philippines
                            sCurrentLanguage.indexOf('en-ca') == -1 && // en-ca = English (Canada)
                            sCurrentLanguage.indexOf('en-bz') == -1    // en-bz = English (Belize)
                        ) {
                            //#### Set up an oRegEx to locate "## ## ####" (allowing for any sort of delimiter except a '\n') then collect the a_sMatches from the passed sDateString
                            var oRegEx = new RegExp("(([0-9]{2}|[0-9]{1})[^0-9]*?([0-9]{2}|[0-9]{1})[^0-9]*?([0-9]{4}))", "i");
                            a_sMatches = oRegEx.exec(sDateString);
                        }

                        //#### Fall from the loop (as we've found the first English definition)
                        break;
                    }
                }

                //#### If we were able to find a_sMatches for a non-American English "DD MM YYYY" formatted date
                if (a_sMatches != null) {
                    var oRegEx = new RegExp(a_sMatches[0], "i");
                    //#### .parse the sDateString via the normal Date.parse function, but replacing the "DD?MM?YYYY" with "YYYY/MM/DD" beforehand
                    //#### NOTE: a_sMatches[0]=[Default]; a_sMatches[1]=DD?MM?YYYY; a_sMatches[2]=DD; a_sMatches[3]=MM; a_sMatches[4]=YYYY
                    dReturn = fDateParse(sDateString.replace(oRegEx, a_sMatches[4] + "/" + a_sMatches[3] + "/" + a_sMatches[2]));
                }
                //#### Else .parse the sDateString via the normal Date.parse function
                else {
                    dReturn = fDateParse(sDateString);
                }

                //####
                return dReturn;
            }
        })();

    In my actual (dotNet) code, I'm collecting the a_sLanguage array via:

        a_sLanguage = '<% Response.Write(Request.ServerVariables["HTTP_ACCEPT_LANGUAGE"]); %>'.split(',');

    Now, I'm not certain my approach to locating "us-en"/etc. is the most proper. Pretty much it's just the US and current/former US influenced areas (Palau, Micronesia, Philippines) + Belize & Canada that use the funky MM/DD/YYYY format (I am American, so I can call it funky =). So one could rightly argue that if the Locale is not "en-us"/etc. first, then DD/MM/YYYY should be used. Thoughts? As a side note... I "grew up" in PERL but it's been a wee while since I've done much heavy lifting in RegEx. Does that expression look right to everyone? This seems like a lot of work, but based on my research this is indeed about the best way to go about enabling DD/MM/YYYY dates within JavaScript. Is there an easier/more betterer way? PS- Upon re-reading this post just before submission... I've realized that this is more of a "can you code review this" rather than a question (or, an answer is embedded within the question). When I started writing this it was not my intention to end up here =)
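    For comparison, a minimal sketch of the "easier way" direction: instead of patching Date.parse globally, parse the D/M/Y shape explicitly and validate the result (the function name and variables here are illustrative, not part of the original code):

        function parseDMY(sDateString) {
            var a = /^(\d{1,2})[^\d](\d{1,2})[^\d](\d{4})$/.exec(sDateString);
            if (!a) { return NaN; }                        // not shaped like D?M?YYYY
            var d = new Date(+a[3], +a[2] - 1, +a[1]);     // JS months are 0-based
            // reject rollovers such as 31/02/2010 silently becoming 03/03/2010
            return (d.getDate() === +a[1] && d.getMonth() === +a[2] - 1) ? d.getTime() : NaN;
        }

    It trades the Accept-Language sniffing for an explicit contract, and it also sidesteps the lazy [^0-9]*? quantifiers in the original expression.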

    Read the article

  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
    In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc. I also listed a module named “Wind (zh-CN)”, which was created by a friend of mine, Jeff Zhao (zh-CN). Now I would like to use a separate post to introduce this module, since I feel it brings a new async programming style to not only Node.js but the whole JavaScript world. If you know or have heard about the new feature in C# 5.0 called “async and await”, or if you have learnt F#, you will find that “Wind” brings a similar async programming experience to JavaScript. By using “Wind”, we can write async code that looks like sync code. The callbacks, async states and exceptions will be handled by “Wind” automatically and transparently.   What’s the Problem: Dense “Callback” Phobia Let’s first go back to my second post in this series. As I mentioned in that post, when we want to read some records from SQL Server we need to open the database connection and then execute the query. In Node.js all IO operations are designed around an async callback pattern, which means that when an operation is done, it invokes a function taken from its last parameter. For example, the database connection opening code would be like this. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: } 8: }); And then if we need to query the database, the code would be like this, nested in the previous function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: } 14: }; 15: } 16: }); Assuming we need to copy some data from this database to another, we then need to open another connection and execute the command within the function under the query function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: target.open(targetConnectionString, function(error, t_conn) { 14: if(error) { 15: // connect failed 16: } 17: else { 18: t_conn.queryRaw(copy_command, function(error, results) { 19: if(error) { 20: // copy failed 21: } 22: else { 23: // and then, what do you want to do now... 24: } 25: }; 26: } 27: }; 28: } 29: }; 30: } 31: }); This is just an example. In a real project the logic would be more complicated. This means our application might become messed up, with the business process fragmented across many callback functions. I would like to call this “Dense Callback Phobia”. The challenge is to make such code straightforward and easy to read, something like below. 
1: try 2: { 3: // open source connection 4: var s_conn = sqlConnect(s_connectionString); 5: // retrieve data 6: var results = sqlExecuteCommand(s_conn, s_command); 7: 8: // open target connection 9: var t_conn = sqlConnect(t_connectionString); 10: // prepare the copy command 11: var t_command = getCopyCommand(results); 12: // execute the copy command 13: sqlExecuteCommand(s_conn, t_command); 14: } 15: catch (ex) 16: { 17: // error handling 18: }   What’s the Problem: Sync-styled Async Programming Similar as the previous problem, the callback-styled async programming model makes the upcoming operation as a part of the current operation, and mixed with the error handling code. So it’s very hard to understand what on earth this code will do. And since Node.js utilizes non-blocking IO mode, we cannot invoke those operations one by one, as they will be executed concurrently. For example, in this post when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just insert the data into table storage one by one and then print the “Finished” message, I will see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to make the copy operation and print operation executed synchronously I introduced a module named “async” and the code was changed as below. 1: async.forEach(results.rows, 2: function (row, callback) { 3: var resource = { 4: "PartitionKey": row[1], 5: "RowKey": row[0], 6: "Value": row[2] 7: }; 8: client.insertEntity(tableName, resource, function (error) { 9: if (error) { 10: callback(error); 11: } 12: else { 13: console.log("entity inserted."); 14: callback(null); 15: } 16: }); 17: }, 18: function (error) { 19: if (error) { 20: error["target"] = "insertEntity"; 21: res.send(500, error); 22: } 23: else { 24: console.log("all done."); 25: res.send(200, "Done!"); 26: } 27: }); It ensured that the “Finished” message will be printed when all table entities had been inserted. But it cannot promise that the records will be inserted in sequence. It might be another challenge to make the code looks like in sync-style? 1: try 2: { 3: forEach(row in rows) { 4: var entity = { /* ... */ }; 5: tableClient.insert(tableName, entity); 6: } 7:  8: console.log("Finished"); 9: } 10: catch (ex) { 11: console.log(ex); 12: }   How “Wind” Helps “Wind” is a JavaScript library which provides the control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It’s available in NPM so that we can install it through “npm install wind”. Now let’s create a very simple Node.js application as the example. This application will take some website URLs from the command arguments and tried to retrieve the body length and print them in console. Then at the end print “Finish”. I’m going to use “request” module to make the HTTP call simple so I also need to install by the command “npm install request”. The code would be like this. 
1: var request = require("request"); 2:  3: // get the urls from arguments, the first two arguments are `node.exe` and `fetch.js` 4: var args = process.argv.splice(2); 5:  6: // main function 7: var main = function() { 8: for(var i = 0; i < args.length; i++) { 9: // get the url 10: var url = args[i]; 11: // send the http request and try to get the response and body 12: request(url, function(error, response, body) { 13: if(!error && response.statusCode == 200) { 14: // log the url and the body length 15: console.log( 16: "%s: %d.", 17: response.request.uri.href, 18: body.length); 19: } 20: else { 21: // log error 22: console.log(error); 23: } 24: }); 25: } 26: 27: // finished 28: console.log("Finished"); 29: }; 30:  31: // execute the main function 32: main(); Let’s execute this application. (I made them in multi-lines for better reading.) 1: node fetch.js 2: "http://www.igt.com/us-en.aspx" 3: "http://www.igt.com/us-en/games.aspx" 4: "http://www.igt.com/us-en/cabinets.aspx" 5: "http://www.igt.com/us-en/systems.aspx" 6: "http://www.igt.com/us-en/interactive.aspx" 7: "http://www.igt.com/us-en/social-gaming.aspx" 8: "http://www.igt.com/support.aspx" Below is the output. As you can see the finish message was printed at the beginning, and the pages’ length retrieved in a different order than we specified. This is because in this code the request command, console logging command are executed asynchronously and concurrently. Now let’s introduce “Wind” to make them executed in order, which means it will request the websites one by one, and print the message at the end.   First of all we need to import the “Wind” package and make sure the there’s only one global variant named “Wind”, and ensure it’s “Wind” instead of “wind”. 1: var Wind = require("wind");   Next, we need to tell “Wind” which code will be executed asynchronously so that “Wind” can control the execution process. In this case the “request” operation executed asynchronously so we will create a “Task” by using a build-in helps function in “Wind” named Wind.Async.Task.create. 1: var requestBodyLengthAsync = function(url) { 2: return Wind.Async.Task.create(function(t) { 3: request(url, function(error, response, body) { 4: if(error || response.statusCode != 200) { 5: t.complete("failure", error); 6: } 7: else { 8: var data = 9: { 10: uri: response.request.uri.href, 11: length: body.length 12: }; 13: t.complete("success", data); 14: } 15: }); 16: }); 17: }; The code above created a “Task” from the original request calling code. In “Wind” a “Task” means an operation will be finished in some time in the future. A “Task” can be started by invoke its start() method, but no one knows when it actually will be finished. The Wind.Async.Task.create helped us to create a task. The only parameter is a function where we can put the actual operation in, and then notify the task object it’s finished successfully or failed by using the complete() method. In the code above I invoked the request method. If it retrieved the response successfully I set the status of this task as “success” with the URL and body length. If it failed I set this task as “failure” and pass the error out.   Next, we will change the main() function. In “Wind” if we want a function can be controlled by Wind we need to mark it as “async”. This should be done by using the code below. 
1: var main = eval(Wind.compile("async", function() { 2: })); When the application is running, Wind will detect “eval(Wind.compile(“async”, function” and generate an anonymous code from the body of this original function. Then the application will run the anonymous code instead of the original one. In our example the main function will be like this. 1: var main = eval(Wind.compile("async", function() { 2: for(var i = 0; i < args.length; i++) { 3: try 4: { 5: var result = $await(requestBodyLengthAsync(args[i])); 6: console.log( 7: "%s: %d.", 8: result.uri, 9: result.length); 10: } 11: catch (ex) { 12: console.log(ex); 13: } 14: } 15: 16: console.log("Finished"); 17: })); As you can see, when I tried to request the URL I use a new command named “$await”. It tells Wind, the operation next to $await will be executed asynchronously, and the main thread should be paused until it finished (or failed). So in this case, my application will be pause when the first response was received, and then print its body length, then try the next one. At the end, print the finish message.   Finally, execute the main function. The full code would be like this. 1: var request = require("request"); 2: var Wind = require("wind"); 3:  4: var args = process.argv.splice(2); 5:  6: var requestBodyLengthAsync = function(url) { 7: return Wind.Async.Task.create(function(t) { 8: request(url, function(error, response, body) { 9: if(error || response.statusCode != 200) { 10: t.complete("failure", error); 11: } 12: else { 13: var data = 14: { 15: uri: response.request.uri.href, 16: length: body.length 17: }; 18: t.complete("success", data); 19: } 20: }); 21: }); 22: }; 23:  24: var main = eval(Wind.compile("async", function() { 25: for(var i = 0; i < args.length; i++) { 26: try 27: { 28: var result = $await(requestBodyLengthAsync(args[i])); 29: console.log( 30: "%s: %d.", 31: result.uri, 32: result.length); 33: } 34: catch (ex) { 35: console.log(ex); 36: } 37: } 38: 39: console.log("Finished"); 40: })); 41:  42: main().start();   Run our new application. At the beginning we will see the compiled and generated code by Wind. Then we can see the pages were requested one by one, and at the end the finish message was printed. Below is the code Wind generated for us. As you can see the original code, the output code were shown. 
1: // Original: 2: function () { 3: for(var i = 0; i < args.length; i++) { 4: try 5: { 6: var result = $await(requestBodyLengthAsync(args[i])); 7: console.log( 8: "%s: %d.", 9: result.uri, 10: result.length); 11: } 12: catch (ex) { 13: console.log(ex); 14: } 15: } 16: 17: console.log("Finished"); 18: } 19:  20: // Compiled: 21: /* async << function () { */ (function () { 22: var _builder_$0 = Wind.builders["async"]; 23: return _builder_$0.Start(this, 24: _builder_$0.Combine( 25: _builder_$0.Delay(function () { 26: /* var i = 0; */ var i = 0; 27: /* for ( */ return _builder_$0.For(function () { 28: /* ; i < args.length */ return i < args.length; 29: }, function () { 30: /* ; i ++) { */ i ++; 31: }, 32: /* try { */ _builder_$0.Try( 33: _builder_$0.Delay(function () { 34: /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) { 35: /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length); 36: return _builder_$0.Normal(); 37: }); 38: }), 39: /* } catch (ex) { */ function (ex) { 40: /* console.log(ex); */ console.log(ex); 41: return _builder_$0.Normal(); 42: /* } */ }, 43: null 44: ) 45: /* } */ ); 46: }), 47: _builder_$0.Delay(function () { 48: /* console.log("Finished"); */ console.log("Finished"); 49: return _builder_$0.Normal(); 50: }) 51: ) 52: ); 53: /* } */ })   How Wind Works Someone may raise a big concern when you find I utilized “eval” in my code. Someone may assume that Wind utilizes “eval” to execute some code dynamically while “eval” is very low performance. But I would say, Wind does NOT use “eval” to run the code. It only use “eval” as a flag to know which code should be compiled at runtime. When the code was firstly been executed, Wind will check and find “eval(Wind.compile(“async”, function”. So that it knows this function should be compiled. Then it utilized parse-js to analyze the inner JavaScript and generated the anonymous code in memory. Then it rewrite the original code so that when the application was running it will use the anonymous one instead of the original one. Since the code generation was done at the beginning of the application was started, in the future no matter how long our application runs and how many times the async function was invoked, it will use the generated code, no need to generate again. So there’s no significant performance hurt when using Wind.   Wind in My Previous Demo Let’s adopt Wind into one of my previous demonstration and to see how it helps us to make our code simple, straightforward and easy to read and understand. In this post when I implemented the functionality that copied the records from my WASD to table storage, the logic would be like this. 1, Open database connection. 2, Execute a query to select all records from the table. 3, Recreate the table in Windows Azure table storage. 4, Create entities from each of the records retrieved previously, and then insert them into table storage. 5, Finally, show message as the HTTP response. But as the image below, since there are so many callbacks and async operations, it’s very hard to understand my logic from the code. Now let’s use Wind to rewrite our code. First of all, of course, we need the Wind package. Then we need to include the package files into project and mark them as “Copy always”. Add the Wind package into the source code. Pay attention to the variant name, you must use “Wind” instead of “wind”. 
1: var express = require("express"); 2: var async = require("async"); 3: var sql = require("node-sqlserver"); 4: var azure = require("azure"); 5: var Wind = require("wind"); Now we need to create some async functions by using Wind. All async functions should be wrapped so that it can be controlled by Wind which are open database, retrieve records, recreate table (delete and create) and insert entity in table. Below are these new functions. All of them are created by using Wind.Async.Task.create. 1: sql.openAsync = function (connectionString) { 2: return Wind.Async.Task.create(function (t) { 3: sql.open(connectionString, function (error, conn) { 4: if (error) { 5: t.complete("failure", error); 6: } 7: else { 8: t.complete("success", conn); 9: } 10: }); 11: }); 12: }; 13:  14: sql.queryAsync = function (conn, query) { 15: return Wind.Async.Task.create(function (t) { 16: conn.queryRaw(query, function (error, results) { 17: if (error) { 18: t.complete("failure", error); 19: } 20: else { 21: t.complete("success", results); 22: } 23: }); 24: }); 25: }; 26:  27: azure.recreateTableAsync = function (tableName) { 28: return Wind.Async.Task.create(function (t) { 29: client.deleteTable(tableName, function (error, successful, response) { 30: console.log("delete table finished"); 31: client.createTableIfNotExists(tableName, function (error, successful, response) { 32: console.log("create table finished"); 33: if (error) { 34: t.complete("failure", error); 35: } 36: else { 37: t.complete("success", null); 38: } 39: }); 40: }); 41: }); 42: }; 43:  44: azure.insertEntityAsync = function (tableName, entity) { 45: return Wind.Async.Task.create(function (t) { 46: client.insertEntity(tableName, entity, function (error, entity, response) { 47: if (error) { 48: t.complete("failure", error); 49: } 50: else { 51: t.complete("success", null); 52: } 53: }); 54: }); 55: }; Then in order to use these functions we will create a new function which contains all steps for data copying. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: } 4: catch (ex) { 5: console.log(ex); 6: res.send(500, "Internal error."); 7: } 8: })); Let’s execute steps one by one with the “$await” keyword introduced by Wind so that it will be invoked in sequence. First is to open the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: } 7: catch (ex) { 8: console.log(ex); 9: res.send(500, "Internal error."); 10: } 11: })); Then retrieve all records from the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: } 10: catch (ex) { 11: console.log(ex); 12: res.send(500, "Internal error."); 13: } 14: })); After recreated the table, we need to create the entities and insert them into table storage. 
1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: } 24: } 25: catch (ex) { 26: console.log(ex); 27: res.send(500, "Internal error."); 28: } 29: })); Finally, send response back to the browser. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: // send response 24: console.log("all done"); 25: res.send(200, "All done!"); 26: } 27: } 28: catch (ex) { 29: console.log(ex); 30: res.send(500, "Internal error."); 31: } 32: })); If we compared with the previous code we will find now it became more readable and much easy to understand. It’s very easy to know what this function does even though without any comments. When user go to URL “/was/copyRecords” we will execute the function above. The code would be like this. 1: app.get("/was/copyRecords", function (req, res) { 2: copyRecords(req, res).start(); 3: }); And below is the logs printed in local compute emulator console. As we can see the functions executed one by one and then finally the response back to me browser.   Scaffold Functions in Wind Wind provides not only the async flow control and compile functions, but many scaffold methods as well. We can build our async code more easily by using them. I’m going to introduce some basic scaffold functions here. In the code above I created some functions which wrapped from the original async function such as open database, create table, etc.. All of them are very similar, created a task by using Wind.Async.Task.create, return error or result object through Task.complete function. In fact, Wind provides some functions for us to create task object from the original async functions. If the original async function only has a callback parameter, we can use Wind.Async.Binding.fromCallback method to get the task object directly. For example the code below returned the task object which wrapped the file exist check function. 
1: var Wind = require("wind"); 2: var fs = require("fs"); 3:  4: fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists); In Node.js a very popular async function pattern is that, the first parameter in the callback function represent the error object, and the other parameters is the return values. In this case we can use another build-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open database function can be created from the code below. 1: sql.openAsync = Wind.Async.Binding.fromStandard(sql.open); 2:  3: /* 4: sql.openAsync = function (connectionString) { 5: return Wind.Async.Task.create(function (t) { 6: sql.open(connectionString, function (error, conn) { 7: if (error) { 8: t.complete("failure", error); 9: } 10: else { 11: t.complete("success", conn); 12: } 13: }); 14: }); 15: }; 16: */ When I was testing the scaffold functions under Wind.Async.Binding I found for some functions, such as the Azure SDK insert entity function, cannot be processed correctly. So I personally suggest writing the wrapped method manually.   Another scaffold method in Wind is the parallel tasks coordination. In this example, the steps of open database, retrieve records and recreated table should be invoked one by one, but it can be executed in parallel when copying data from database to table storage. In Wind there’s a scaffold function named Task.whenAll which can be used here. Task.whenAll accepts a list of tasks and creates a new task. It will be returned only when all tasks had been completed, or any errors occurred. For example in the code below I used the Task.whenAll to make all copy operation executed at the same time. 1: var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage in parallal 14: var tasks = new Array(results.rows.length); 15: for (var i = 0; i < results.rows.length; i++) { 16: var entity = { 17: "PartitionKey": results.rows[i][1], 18: "RowKey": results.rows[i][0], 19: "Value": results.rows[i][2] 20: }; 21: tasks[i] = azure.insertEntityAsync(tableName, entity); 22: } 23: $await(Wind.Async.Task.whenAll(tasks)); 24: // send response 25: console.log("all done"); 26: res.send(200, "All done!"); 27: } 28: } 29: catch (ex) { 30: console.log(ex); 31: res.send(500, "Internal error."); 32: } 33: })); 34:  35: app.get("/was/copyRecordsInParallel", function (req, res) { 36: copyRecordsInParallel(req, res).start(); 37: });   Besides the task creation and coordination, Wind supports the cancellation solution so that we can send the cancellation signal to the tasks. It also includes exception solution which means any exceptions will be reported to the caller function.   Summary In this post I introduced a Node.js module named Wind, which created by my friend Jeff Zhao. As you can see, different from other async library and framework, adopted the idea from F# and C#, Wind utilizes runtime code generation technology to make it more easily to write async, callback-based functions in a sync-style way. 
    By using Wind there are almost no callbacks, and the code is very easy to understand. Wind is still being actively developed and improved. There might be some problems, but the author, Jeff, will be very happy and enthusiastic to hear your problems, feedback, suggestions and comments. You can contact Jeff by - Email: [email protected] - Group: https://groups.google.com/d/forum/windjs - GitHub: https://github.com/JeffreyZhao/wind/issues   Source code can be downloaded here.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
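    As a quick addendum to the article above, the smallest end-to-end taste of this style is a countdown that prints strictly one tick per second. This is a sketch assuming Wind.Async.sleep ships with the package version you install (it did in the builds I have seen, but verify against yours):

        var Wind = require("wind");

        var countdown = eval(Wind.compile("async", function (n) {
            for (var i = n; i > 0; i--) {
                console.log(i);
                $await(Wind.Async.sleep(1000)); // pauses this task, not the event loop
            }
            console.log("Done");
        }));

        countdown(3).start();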

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #038

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 CASE Statement in ORDER BY Clause – ORDER BY using Variable This article is as per request from the Application Development Team Leader of my company. His team encountered code where the application was preparing string for ORDER BY clause of the SELECT statement. Application was passing this string as variable to Stored Procedure (SP) and SP was using EXEC to execute the SQL string. This is not good for performance as Stored Procedure has to recompile every time due to EXEC. sp_executesql can do the same task but still not the best performance. SSMS – View/Send Query Results to Text/Grid/Files Results to Text – CTRL + T Results to Grid – CTRL + D Results to File – CTRL + SHIFT + F 2008 Introduction to SPARSE Columns Part 2 I wrote about Introduction to SPARSE Columns Part 1. Let us understand the concept of the SPARSE column in more detail. I suggest you read the first part before continuing reading this article. All SPARSE columns are stored as one XML column in the database. Let us see some of the advantage and disadvantage of SPARSE column. Deferred Name Resolution How come when table name is incorrect SP can be created successfully but when an incorrect column is used SP cannot be created? 2009 Backup Timeline and Understanding of Database Restore Process in Full Recovery Model In general, databases backup in full recovery mode is taken in three different kinds of database files. Full Database Backup Differential Database Backup Log Backup Restore Sequence and Understanding NORECOVERY and RECOVERY While doing RESTORE Operation if you restoring database files, always use NORECOVER option as that will keep the database in a state where more backup file are restored. This will also keep database offline also to prevent any changes, which can create itegrity issues. Once all backup file is restored run RESTORE command with a RECOVERY option to get database online and operational. Four Different Ways to Find Recovery Model for Database Perhaps, the best thing about technical domain is that most of the things can be executed in more than one ways. It is always useful to know about the various methods of performing a single task. Two Methods to Retrieve List of Primary Keys and Foreign Keys of Database When Information Schema is used, we will not be able to discern between primary key and foreign key; we will have both the keys together. In the case of sys schema, we can query the data in our preferred way and can join this table to another table, which can retrieve additional data from the same. Get Last Running Query Based on SPID PID is returns sessions ID of the current user process. The acronym SPID comes from the name of its earlier version, Server Process ID. 2010 SELECT * FROM dual – Dual Equivalent Dual is a table that is created by Oracle together with data dictionary. It consists of exactly one column named “dummy”, and one record. The value of that record is X. You can check the content of the DUAL table using the following syntax. SELECT * FROM dual Identifying Statistics Used by Query Someone asked this question in my training class of query optimization and performance tuning.  
“Can I know which statistics were used by my query?” 2011 SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 14 of 31 What are the basic functions for master, msdb, model, tempdb and resource databases? What is the Maximum Number of Index per Table? Explain Few of the New Features of SQL Server 2008 Management Studio Explain IntelliSense for Query Editing Explain MultiServer Query Explain Query Editor Regions Explain Object Explorer Enhancements Explain Activity Monitors SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 15 of 31 What is Service Broker? Where are SQL server Usernames and Passwords Stored in the SQL server? What is Policy Management? What is Database Mirroring? What are Sparse Columns? What does TOP Operator Do? What is CTE? What is MERGE Statement? What is Filtered Index? Which are the New Data Types Introduced in SQL SERVER 2008? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 16 of 31 What are the Advantages of Using CTE? How can we Rewrite Sub-Queries into Simple Select Statements or with Joins? What is CLR? What are Synonyms? What is LINQ? What are Isolation Levels? What is Use of EXCEPT Clause? What is XPath? What is NOLOCK? What is the Difference between Update Lock and Exclusive Lock? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 17 of 31 How will you Handle Error in SQL SERVER 2008? What is RAISEERROR? What is RAISEERROR? How to Rebuild the Master Database? What is the XML Datatype? What is Data Compression? What is Use of DBCC Commands? How to Copy the Tables, Schema and Views from one SQL Server to Another? How to Find Tables without Indexes? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 18 of 31 How to Copy Data from One Table to Another Table? What is Catalog Views? What is PIVOT and UNPIVOT? What is a Filestream? What is SQLCMD? What do you mean by TABLESAMPLE? What is ROW_NUMBER()? What are Ranking Functions? What is Change Data Capture (CDC) in SQL Server 2008? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 19 of 31 How can I Track the Changes or Identify the Latest Insert-Update-Delete from a Table? What is the CPU Pressure? How can I Get Data from a Database on Another Server? What is the Bookmark Lookup and RID Lookup? What is Difference between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE? What is Difference between GETDATE and SYSDATETIME in SQL Server 2008? How can I Check that whether Automatic Statistic Update is Enabled or not? How to Find Index Size for Each Index on Table? What is the Difference between Seek Predicate and Predicate? What are Basics of Policy Management? What are the Advantages of Policy Management? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 20 of 31 What are Policy Management Terms? What is the ‘FILLFACTOR’? Where in MS SQL Server is ’100’ equal to ‘0’? What are Points to Remember while Using the FILLFACTOR Argument? What is a ROLLUP Clause? What are Various Limitations of the Views? What is a Covered index? When I Delete any Data from a Table, does the SQL Server reduce the size of that table? What are Wait Types? How to Stop Log File Growing too Big? If any Stored Procedure is Encrypted, then can we see its definition in Activity Monitor? 
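    As a worked sketch for one of the Day 17 items above ("How to Find Tables without Indexes?"), the catalog views make this a short query: a table with no indexes at all is a heap whose only sys.indexes row has index_id = 0.

        -- user tables that have no clustered or nonclustered index at all
        SELECT s.name AS [schema], t.name AS [table]
        FROM sys.tables AS t
        JOIN sys.schemas AS s ON s.schema_id = t.schema_id
        WHERE NOT EXISTS (SELECT 1 FROM sys.indexes AS i
                          WHERE i.object_id = t.object_id AND i.index_id > 0);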
2012 Example of Width Sensitive and Width Insensitive Collation Width Sensitive Collation: A single-byte character (half-width) represented as single-byte and the same character represented as a double-byte character (full-width) are when compared are not equal the collation is width sensitive. In this example we have one table with two columns. One column has a collation of width sensitive and the second column has a collation of width insensitive. Find Column Used in Stored Procedure – Search Stored Procedure for Column Name Very interesting conversation about how to find column used in a stored procedure. There are two different characters in the story and both are having a conversation about how to find column in the stored procedure. Here are two part story Part 1 | Part 2 SQL SERVER – 2012 Functions – FORMAT() and CONCAT() – An Interesting Usage Generate Script for Schema and Data – SQL in Sixty Seconds #021 – Video In simple words, in many cases the database move from one place to another place. It is not always possible to back up and restore databases. There are possibilities when only part of the database (with schema and data) has to be moved. In this video we learn that we can easily generate script for schema for data and move from one server to another one. INFORMATION_SCHEMA.COLUMNS and Value Character Maximum Length -1 I often see the value -1 in the CHARACTER_MAXIMUM_LENGTH column of INFORMATION_SCHEMA.COLUMNS table. I understand that the length of any column can be between 0 to large number but I do not get it when I see value in negative (i.e. -1). Any insight on this subject? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

< Previous Page | 409 410 411 412 413 414 415 416 417 418 419 420  | Next Page >