Search Results

Search found 1703 results on 69 pages for 'grails constraints'.

Page 67/69 | < Previous Page | 63 64 65 66 67 68 69  | Next Page >

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure execution timing of specific methods in your application you usually use one of these:

    Time measurement methods and their potential pitfalls:

    - Stopwatch: The most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change, luckily – Intel's Designer's manual vol. 3b, section 16.11.1 "Invariant TSC": "The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."
    - DateTime.Now: Good, but it only has a resolution of 16 ms, which may not be enough if you want more accuracy.

    Reporting methods and their potential pitfalls:

    - Console.WriteLine: OK if not called too often.
    - Debug.Print: Are you really measuring performance with Debug builds? Shame on you.
    - Trace.WriteLine: Better, but you need to plug in a good output listener such as a trace file. Be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which also takes time.

    In general it is a good idea to use a tracing library which measures the timing for you, so that you only need to decorate some methods with tracing and can later verify whether something has changed for the better or worse. In my previous article I compared measuring performance with quantum mechanics. The analogy works surprisingly well. When you measure a quantum system there is a lower limit to how accurately you can measure something. The Heisenberg uncertainty relation tells us that you cannot measure both the impulse and the location of a particle at the same time with infinite accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application you will need to store them somewhere. The fastest storage space besides the CPU cache is memory. But if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application you will also need to store the data somewhere, which will cost you memory and execution time. These constraints are always there, and regardless of how good the marketing of tool vendors for performance and memory profilers is: any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate numbers, but in reality that is not entirely true. After falling into the trap of trusting the profiler timings several times, I have got into the habit of:

    1. Measure with a profiler to get an idea where potential bottlenecks are.
    2. Measure again, tracing only the specific methods, to check if a method is really worth optimizing.
    3. Optimize it.
    4. Measure again.
    5. Be surprised that your optimization has made things worse.
    6. Think harder.
    7. Implement something that really works.
    8. Measure again.
    9. Finished! Or look for the next bottleneck.

    Recently I have looked into issues with serialization performance. DataContractSerializer was used for serialization and I was not sure if XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffers format, a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

        using ProtoBuf;
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Reflection;
        using System.Runtime.Serialization;

        [DataContract, Serializable]
        class Data
        {
            [DataMember(Order = 1)] public int IntValue { get; set; }
            [DataMember(Order = 2)] public string StringValue { get; set; }
            [DataMember(Order = 3)] public bool IsActivated { get; set; }
            [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
        }

        class Program
        {
            static MemoryStream _Stream = new MemoryStream();

            static MemoryStream Stream
            {
                get
                {
                    _Stream.Position = 0;
                    _Stream.SetLength(0);
                    return _Stream;
                }
            }

            static void Main(string[] args)
            {
                DataContractSerializer ser = new DataContractSerializer(typeof(Data));
                Data data = new Data
                {
                    IntValue = 100,
                    IsActivated = true,
                    StringValue = "Hi this is a small string value to check if serialization does work as expected"
                };

                var sw = Stopwatch.StartNew();
                int Runs = 1000 * 1000;
                for (int i = 0; i < Runs; i++)
                {
                    //ser.WriteObject(Stream, data);
                    Serializer.Serialize<Data>(Stream, data);
                }
                sw.Stop();
                Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
                Console.ReadLine();
            }
        }

    The results are indeed promising:

        Serializer      Time in ms    N objects
        protobuf-net           807      1000000
        DataContract          4402      1000000

    Nearly a factor 5 faster and a much more compact wire format. Let's use it! After switching over to protobuf-net the transferred wire data dropped by a factor of two (good) and the performance worsened by nearly a factor of two. How is that possible? We measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but that speed has a cost: the first time a type is de/serialized it uses some very smart code-gen, which does not come for free. Let's try to measure this by setting the Runs value of our performance test app not to one million but to 1:

        Serializer      Time in ms    N objects
        protobuf-net            85            1
        DataContract            24            1

    The code-gen overhead is significant and can take up to 200 ms for more complex types. The break-even point where the code-gen cost is amortized by the faster serialization performance is (assuming small objects) somewhere between 20,000 and 40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and only 1000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far did not help much. After looking more deeply at the profiling data I found that one of the 1000 calls took 50% of the time. So how do I find out which call it was?
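    One low-tech way to answer that question is to time each call individually and log the outliers. The sketch below is mine, not the tracing library the author used; it reuses the Stream property and serializer from the sample above, and GetObjectsToSerialize is a hypothetical helper standing in for whatever produces the objects:

        // Sketch: ad-hoc per-call tracing to find the one slow call out of ~1000.
        List<Data> objectsToSerialize = GetObjectsToSerialize(); // hypothetical helper
        var sw = new Stopwatch();
        for (int i = 0; i < objectsToSerialize.Count; i++)
        {
            sw.Reset();
            sw.Start();
            Serializer.Serialize(Stream, objectsToSerialize[i]);
            sw.Stop();

            // Log anything suspiciously slow; the threshold is arbitrary.
            if (sw.Elapsed.TotalMilliseconds > 5.0)
            {
                Trace.WriteLine(string.Format("Call {0} took {1:N2} ms ({2})",
                    i, sw.Elapsed.TotalMilliseconds, objectsToSerialize[i].GetType().Name));
            }
        }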
    Normal profilers fall short at this discipline. A (totally undeservedly) relatively unknown profiler is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out if this stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When, after much analysis, you encounter something unreasonable that you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly OK.

    When you profile a complex system with many interconnected processes you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disk, slowing things down for all other processes for a few seconds. There is a big difference between warm and cold startup. If you restart all processes you can basically forget the first run, because the OS disk cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two more times to get a feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to the GC and other random stuff it can become pretty hard to find anything worth optimizing once no big bottlenecks except bloatloads of code are left.

    When I have found a spot worth optimizing I make the code changes and measure again to check if something has changed. If it has got slower and I am certain that my change should have made it faster, I can blame the GC again. The thing is that if you optimize stuff and allocate fewer objects, the GC times will shift to some other location. If you are unlucky it will make your faster working code slower, because you now see GCs at times where none were before. This is where things get really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application so you can change things quickly and reliably. Then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time. The reason is that for wall clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all. Next time: Commercial profiler shootout.

    Read the article

  • Operator of the week - Assert

    - by Fabiano Amorim
    Well my friends, I was wondering how to help you understand execution plans in a practical way, so I think I'll talk about the Showplan operators. Showplan operators are used by the Query Optimizer (QO) to build the query plan in order to perform a specified operation. A query plan will consist of many physical operators. The Query Optimizer uses a simple language that represents each physical operation by an operator, and each operator is represented in the graphical execution plan by an icon. I'll try to talk about one operator every week but, so as to avoid having to keep writing about these operators for years, I'll mention only those that are more common. The first is the Assert.

    The Assert is used to verify a certain condition; it validates a constraint on every row to ensure that the condition was met. If, for example, our DDL includes a check constraint which specifies only two valid values for a column, the Assert will, for every row, validate the value passed to the column to ensure that the input is consistent with the check constraint.

    Assert and Check Constraints

    Let's see where SQL Server uses that information in practice. Take the following T-SQL:

        IF OBJECT_ID('Tab1') IS NOT NULL
          DROP TABLE Tab1
        GO
        CREATE TABLE Tab1(ID Integer, Gender CHAR(1))
        GO
        ALTER TABLE TAB1 ADD CONSTRAINT ck_Gender_M_F CHECK(Gender IN('M','F'))
        GO
        INSERT INTO Tab1(ID, Gender) VALUES(1,'X')
        GO

    For the command above SQL Server has generated the following execution plan. As we can see, the execution plan uses the Assert operator to check that the inserted value doesn't violate the check constraint. In this specific case the Assert applies the rule "if the value is different from 'F' and different from 'M' then return 0, otherwise return NULL". The Assert operator is programmed to raise an error if the returned value is not NULL – in other words, if the inserted value is not 'M' or 'F'.

    Assert Checking Foreign Keys

    Now let's take a look at an example where the Assert is used to validate a foreign key constraint. Suppose we have this query:

        ALTER TABLE Tab1 ADD ID_Genders INT
        GO
        IF OBJECT_ID('Tab2') IS NOT NULL
          DROP TABLE Tab2
        GO
        CREATE TABLE Tab2(ID Integer PRIMARY KEY, Gender CHAR(1))
        GO
        INSERT INTO Tab2(ID, Gender) VALUES(1, 'F')
        INSERT INTO Tab2(ID, Gender) VALUES(2, 'M')
        INSERT INTO Tab2(ID, Gender) VALUES(3, 'N')
        GO
        ALTER TABLE Tab1 ADD CONSTRAINT fk_Tab2 FOREIGN KEY (ID_Genders) REFERENCES Tab2(ID)
        GO
        INSERT INTO Tab1(ID, ID_Genders, Gender) VALUES(1, 4, 'X')

    Let's look at the text execution plan to see what these Assert operators are doing. To see the text execution plan, just execute SET SHOWPLAN_TEXT ON before running the insert command:

        |--Assert(WHERE:(CASE WHEN NOT [Pass1008] AND [Expr1007] IS NULL THEN (0) ELSE NULL END))
             |--Nested Loops(Left Semi Join, PASSTHRU:([Tab1].[ID_Genders] IS NULL), OUTER REFERENCES:([Tab1].[ID_Genders]), DEFINE:([Expr1007] = [PROBE VALUE]))
                  |--Assert(WHERE:(CASE WHEN [Tab1].[Gender]<>'F' AND [Tab1].[Gender]<>'M' THEN (0) ELSE NULL END))
                  |    |--Clustered Index Insert(OBJECT:([Tab1].[PK]), SET:([Tab1].[ID] = RaiseIfNullInsert([@1]),[Tab1].[ID_Genders] = [@2],[Tab1].[Gender] = [Expr1003]), DEFINE:([Expr1003]=CONVERT_IMPLICIT(char(1),[@3],0)))
                  |--Clustered Index Seek(OBJECT:([Tab2].[PK]), SEEK:([Tab2].[ID]=[Tab1].[ID_Genders]) ORDERED FORWARD)

    Here we can see the Assert operator twice; the first (reading bottom to top in the text plan, and right to left in the graphical plan) validates the check constraint.
    The same concept shown above is used: if the expression returns NULL the query keeps running, but if it returns 0 an exception is raised. The second Assert validates the result of the join between Tab1 and Tab2. It is interesting to see the "[Expr1007] IS NULL". To understand that, you need to know what this Expr1007 is: look at the Probe Value (green text) in the text plan and you will see that it is the result of the join. If the value passed to the INSERT for the column ID_Genders exists in table Tab2, the probe will return the joined value; otherwise it will return NULL. So the Assert is checking the result of the lookup against Tab2; if the value passed to the INSERT is not found, the Assert will raise an exception. If the value passed to the column ID_Genders is NULL, SQL Server must not raise an exception; in that case the expression returns NULL and the query keeps running. If you run the INSERT above, SQL Server will raise an exception because of the 'X' value; if you change the 'X' to 'F' and run it again, it will raise an exception because of the value '4'. If you change the value '4' to NULL, 1, 2 or 3, the insert will be executed without any error.

    Assert Checking a Subquery

    The Assert operator is also used to check a subquery. As we know, a scalar subquery cannot validly return more than one value. Sometimes, however, a mistake happens and a subquery attempts to return more than one value. Here the Assert comes into play by validating the condition that a scalar subquery returns just one value. Take the following query:

        INSERT INTO Tab1(ID_TipoSexo, Sexo) VALUES((SELECT ID_TipoSexo FROM Tab1), 'F')

        |--Assert(WHERE:(CASE WHEN NOT [Pass1016] AND [Expr1015] IS NULL THEN (0) ELSE NULL END))
            |--Nested Loops(Left Semi Join, PASSTHRU:([tempdb].[dbo].[Tab1].[ID_TipoSexo] IS NULL), OUTER REFERENCES:([tempdb].[dbo].[Tab1].[ID_TipoSexo]), DEFINE:([Expr1015] = [PROBE VALUE]))
                 |--Assert(WHERE:([Expr1017]))
                 |    |--Compute Scalar(DEFINE:([Expr1017]=CASE WHEN [tempdb].[dbo].[Tab1].[Sexo]<>'F' AND [tempdb].[dbo].[Tab1].[Sexo]<>'M' THEN (0) ELSE NULL END))
                 |         |--Clustered Index Insert(OBJECT:([tempdb].[dbo].[Tab1].[PK__Tab1__3214EC277097A3C8]), SET:([tempdb].[dbo].[Tab1].[ID_TipoSexo] = [Expr1008],[tempdb].[dbo].[Tab1].[Sexo] = [Expr1009],[tempdb].[dbo].[Tab1].[ID] = [Expr1003]))
                 |              |--Top(TOP EXPRESSION:((1)))
                 |                   |--Compute Scalar(DEFINE:([Expr1008]=[Expr1014], [Expr1009]='F'))
                 |                        |--Nested Loops(Left Outer Join)
                 |                             |--Compute Scalar(DEFINE:([Expr1003]=getidentity((1856985942),(2),NULL)))
                 |                             |    |--Constant Scan
                 |                             |--Assert(WHERE:(CASE WHEN [Expr1013]>(1) THEN (0) ELSE NULL END))
                 |                                  |--Stream Aggregate(DEFINE:([Expr1013]=Count(*), [Expr1014]=ANY([tempdb].[dbo].[Tab1].[ID_TipoSexo])))
                 |                                       |--Clustered Index Scan(OBJECT:([tempdb].[dbo].[Tab1].[PK__Tab1__3214EC277097A3C8]))
                 |--Clustered Index Seek(OBJECT:([tempdb].[dbo].[Tab2].[PK__Tab2__3214EC27755C58E5]), SEEK:([tempdb].[dbo].[Tab2].[ID]=[tempdb].[dbo].[Tab1].[ID_TipoSexo]) ORDERED FORWARD)

    You can see from this text showplan that SQL Server has generated a Stream Aggregate to count how many rows the subquery will return. This value is then passed to the Assert, which does its job by checking its validity. It is very interesting to see that the Query Optimizer is smart enough to avoid using Assert operators when they are not necessary. For instance:

        INSERT INTO Tab1(ID_TipoSexo, Sexo) VALUES((SELECT ID_TipoSexo FROM Tab1 WHERE ID = 1), 'F')
        INSERT INTO Tab1(ID_TipoSexo, Sexo) VALUES((SELECT TOP 1 ID_TipoSexo FROM Tab1), 'F')

    For both these INSERTs, the Query Optimizer is smart enough to know that only one row will ever be returned, so there is no need to use the Assert. Well, that's all folks; I'll see you next week with more operators. Cheers, Fabiano

    Read the article

  • ASP.NET MVC 2 / Localization / Dynamic Default Value?

    - by cyberblast
    Hello. In an ASP.NET MVC 2 application I have a route like this:

        routes.MapRoute(
            "Default",                                   // Route name
            "{lang}/{controller}/{action}/{id}",         // URL with parameters
            new                                          // Parameter defaults
            {
                controller = "Home",
                action = "Index",
                lang = "de",
                id = UrlParameter.Optional
            },
            new
            {
                lang = new AllowedValuesRouteConstraint(
                    new string[] { "de", "en", "fr", "it" },
                    StringComparison.InvariantCultureIgnoreCase)
            });

    Now, basically I would like to set the thread's culture according to the language passed in. But there is one exception: if the user requests the page for the first time, for example by calling "http://www.mysite.com", I want to set the initial language, if possible, to the one preferred by the browser. How can I distinguish at an early processing stage (like global.asax) whether the lang parameter was filled in from the default value or specified explicitly in the URL? (I would prefer a solution where the request URL is not parsed.) Is there a way to dynamically provide a default value for a parameter? Something like a hook? Or where can I override the default value (is there a good application event for that)? This is the code I'm currently experimenting with:

        protected void Application_AcquireRequestState(object sender, EventArgs e)
        {
            string activeLanguage;
            string[] validLanguages;
            string defaultLanguage;
            string browsersPreferredLanguage;
            try
            {
                HttpContextBase contextBase = new HttpContextWrapper(Context);
                RouteData activeRoute = RouteTable.Routes.GetRouteData(new HttpContextWrapper(Context));
                if (activeRoute == null)
                {
                    return;
                }
                activeLanguage = activeRoute.GetRequiredString("lang");
                Route route = (Route)activeRoute.Route;
                validLanguages = ((AllowedValuesRouteConstraint)route.Constraints["lang"]).AllowedValues;
                defaultLanguage = route.Defaults["lang"].ToString();
                browsersPreferredLanguage = GetBrowsersPreferredLanguage();

                //TODO: Better way than parsing the url
                bool defaultInitialized = contextBase.Request.Url.ToString()
                    .IndexOf(string.Format("/{0}/", defaultLanguage), StringComparison.InvariantCultureIgnoreCase) > -1;

                string languageToActivate = defaultLanguage;
                if (!defaultInitialized)
                {
                    if (validLanguages.Contains(browsersPreferredLanguage, StringComparer.InvariantCultureIgnoreCase))
                    {
                        languageToActivate = browsersPreferredLanguage;
                    }
                }

                //TODO: Where and how to overwrite the default value so that it gets passed to the controller?
                contextBase.RewritePath(contextBase.Request.Path.Replace("/de/", "/en/"));
                SetLanguage(languageToActivate);
            }
            catch (Exception ex)
            {
                //TODO: Log
                Console.WriteLine(ex.Message);
            }
        }

        protected string GetBrowsersPreferredLanguage()
        {
            string acceptedLang = string.Empty;
            if (HttpContext.Current.Request.UserLanguages != null &&
                HttpContext.Current.Request.UserLanguages.Length > 0)
            {
                acceptedLang = HttpContext.Current.Request.UserLanguages[0].Substring(0, 2);
            }
            return acceptedLang;
        }

        protected void SetLanguage(string languageToActivate)
        {
            CultureInfo cultureInfo = new CultureInfo(languageToActivate);
            if (!Thread.CurrentThread.CurrentUICulture.TwoLetterISOLanguageName.Equals(languageToActivate, StringComparison.InvariantCultureIgnoreCase))
            {
                Thread.CurrentThread.CurrentUICulture = cultureInfo;
            }
            if (!Thread.CurrentThread.CurrentCulture.TwoLetterISOLanguageName.Equals(languageToActivate, StringComparison.InvariantCultureIgnoreCase))
            {
                Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(cultureInfo.Name);
            }
        }

    The RouteConstraint to reproduce the sample:

        public class AllowedValuesRouteConstraint : IRouteConstraint
        {
            private string[] _allowedValues;
            private StringComparison _stringComparism;

            public string[] AllowedValues
            {
                get { return _allowedValues; }
            }

            public AllowedValuesRouteConstraint(string[] allowedValues, StringComparison stringComparism)
            {
                _allowedValues = allowedValues;
                _stringComparism = stringComparism;
            }

            public AllowedValuesRouteConstraint(string[] allowedValues)
            {
                _allowedValues = allowedValues;
                _stringComparism = StringComparison.InvariantCultureIgnoreCase;
            }

            public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                RouteValueDictionary values, RouteDirection routeDirection)
            {
                if (_allowedValues != null)
                {
                    return _allowedValues.Any(a => a.Equals(values[parameterName].ToString(), _stringComparism));
                }
                else
                {
                    return false;
                }
            }
        }

    Can someone help me out with that problem? Thanks, Martin
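    One possible direction (a sketch only, not a verified answer; BrowserLanguageRoute is a made-up name): register two routes, so that a URL with an explicit language segment matches a constrained route with no default for lang, while language-less URLs fall through to a Route subclass that injects the browser's preferred language itself. The handler then never has to parse the URL to know whether lang was defaulted.

        // Sketch: route 1 matches only URLs that really contain a language segment;
        // route 2 handles everything else and fills in the browser's preferred language.
        // Assumes the AllowedValuesRouteConstraint shown above.
        public class BrowserLanguageRoute : Route
        {
            private static readonly string[] ValidLanguages = { "de", "en", "fr", "it" };

            public BrowserLanguageRoute(string url, RouteValueDictionary defaults, IRouteHandler handler)
                : base(url, defaults, handler) { }

            public override RouteData GetRouteData(HttpContextBase httpContext)
            {
                RouteData data = base.GetRouteData(httpContext);
                if (data != null)
                {
                    // The URL had no {lang} segment, so whatever we put here is a "default".
                    string lang = "de";
                    string[] userLanguages = httpContext.Request.UserLanguages;
                    if (userLanguages != null && userLanguages.Length > 0)
                    {
                        string candidate = userLanguages[0].Substring(0, 2).ToLowerInvariant();
                        if (ValidLanguages.Contains(candidate))
                        {
                            lang = candidate;
                        }
                    }
                    data.Values["lang"] = lang;
                }
                return data;
            }
        }

        // Registration (replaces the single MapRoute call):
        routes.MapRoute(
            "DefaultWithLang",
            "{lang}/{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = UrlParameter.Optional },
            new { lang = new AllowedValuesRouteConstraint(new[] { "de", "en", "fr", "it" }) });

        routes.Add("DefaultWithoutLang", new BrowserLanguageRoute(
            "{controller}/{action}/{id}",
            new RouteValueDictionary(new { controller = "Home", action = "Index", id = UrlParameter.Optional }),
            new MvcRouteHandler()));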

    Read the article

  • Managing highly repetitive code and documentation in Java

    - by polygenelubricants
    Highly repetitive code is generally a bad thing, and there are design patterns that can help minimize this. However, sometimes it's simply inevitable due to the constraints of the language itself. Take the following example from java.util.Arrays:

        /**
         * Assigns the specified long value to each element of the specified
         * range of the specified array of longs. The range to be filled
         * extends from index <tt>fromIndex</tt>, inclusive, to index
         * <tt>toIndex</tt>, exclusive. (If <tt>fromIndex==toIndex</tt>, the
         * range to be filled is empty.)
         *
         * @param a the array to be filled
         * @param fromIndex the index of the first element (inclusive) to be
         *        filled with the specified value
         * @param toIndex the index of the last element (exclusive) to be
         *        filled with the specified value
         * @param val the value to be stored in all elements of the array
         * @throws IllegalArgumentException if <tt>fromIndex &gt; toIndex</tt>
         * @throws ArrayIndexOutOfBoundsException if <tt>fromIndex &lt; 0</tt> or
         *         <tt>toIndex &gt; a.length</tt>
         */
        public static void fill(long[] a, int fromIndex, int toIndex, long val) {
            rangeCheck(a.length, fromIndex, toIndex);
            for (int i = fromIndex; i < toIndex; i++)
                a[i] = val;
        }

    The above snippet appears in the source code 8 times, with very little variation in the documentation/method signature but exactly the same method body, one for each of the root array types int[], short[], char[], byte[], boolean[], double[], float[], and Object[]. I believe that unless one resorts to reflection (which is an entirely different subject in itself), this repetition is inevitable. I understand that, as a utility class, such a high concentration of repetitive Java code is highly atypical, but even with the best practice, repetition does happen! Refactoring doesn't always work because it's not always possible (the obvious case is when the repetition is in the documentation). Obviously maintaining this source code is a nightmare. A slight typo in the documentation, or a minor bug in the implementation, is multiplied by however many repetitions were made. In fact, the best example happens to involve this exact class: Google Research Blog - Extra, Extra - Read All About It: Nearly All Binary Searches and Mergesorts are Broken (by Joshua Bloch, Software Engineer). The bug is a surprisingly subtle one, occurring in what many thought to be just a simple and straightforward algorithm.

        // int mid = (low + high) / 2;    // the bug
        int mid = (low + high) >>> 1;     // the fix

    The above line appears 11 times in the source code! So my questions are:

    - How are these kinds of repetitive Java code/documentation handled in practice? How are they developed, maintained, and tested? Do you start with "the original", make it as mature as possible, and then copy and paste as necessary and hope you didn't make a mistake? And if you did make a mistake in the original, then just fix it everywhere, unless you're comfortable with deleting the copies and repeating the whole replication process? And do you apply this same process to the testing code as well?
    - Would Java benefit from some sort of limited-use source code preprocessing for this kind of thing? Perhaps Sun has their own preprocessor to help write, maintain, document and test this kind of repetitive library code?

    A comment requested another example, so I pulled this one from Google Collections: com.google.common.base.Predicates, lines 276-310 (AndPredicate) vs lines 312-346 (OrPredicate).
    The source for these two classes is identical, except for:

    - AndPredicate vs OrPredicate (each appears 5 times in its class)
    - "And(" vs "Or(" (in the respective toString() methods)
    - #and vs #or (in the @see Javadoc comments)
    - true vs false (in apply; the ! can be rewritten out of the expression)
    - -1 /* all bits on */ vs 0 /* all bits off */ in hashCode()
    - &= vs |= in hashCode()

    Read the article

  • MySQL "ERROR 1005 (HY000): Can't create table 'foo.#sql-12c_4' (errno: 150)"

    - by Ankur Banerjee
    Hi, I was working on creating some tables in database foo, but every time I end up with errno 150 regarding the foreign key. Firstly, here's my code for creating tables:

        CREATE TABLE Clients (
          client_id CHAR(10) NOT NULL,
          client_name CHAR(50) NOT NULL,
          provisional_license_num CHAR(50) NOT NULL,
          client_address CHAR(50) NULL,
          client_city CHAR(50) NULL,
          client_county CHAR(50) NULL,
          client_zip CHAR(10) NULL,
          client_phone INT NULL,
          client_email CHAR(255) NULL,
          client_dob DATETIME NULL,
          test_attempts INT NULL
        );

        CREATE TABLE Applications (
          application_id CHAR(10) NOT NULL,
          office_id INT NOT NULL,
          client_id CHAR(10) NOT NULL,
          instructor_id CHAR(10) NOT NULL,
          car_id CHAR(10) NOT NULL,
          application_date DATETIME NULL
        );

        CREATE TABLE Instructors (
          instructor_id CHAR(10) NOT NULL,
          office_id INT NOT NULL,
          instructor_name CHAR(50) NOT NULL,
          instructor_address CHAR(50) NULL,
          instructor_city CHAR(50) NULL,
          instructor_county CHAR(50) NULL,
          instructor_zip CHAR(10) NULL,
          instructor_phone INT NULL,
          instructor_email CHAR(255) NULL,
          instructor_dob DATETIME NULL,
          lessons_given INT NULL
        );

        CREATE TABLE Cars (
          car_id CHAR(10) NOT NULL,
          office_id INT NOT NULL,
          engine_serial_num CHAR(10) NULL,
          registration_num CHAR(10) NULL,
          car_make CHAR(50) NULL,
          car_model CHAR(50) NULL
        );

        CREATE TABLE Offices (
          office_id INT NOT NULL,
          office_address CHAR(50) NULL,
          office_city CHAR(50) NULL,
          office_County CHAR(50) NULL,
          office_zip CHAR(10) NULL,
          office_phone INT NULL,
          office_email CHAR(255) NULL
        );

        CREATE TABLE Lessons (
          lesson_num INT NOT NULL,
          client_id CHAR(10) NOT NULL,
          date DATETIME NOT NULL,
          time DATETIME NOT NULL,
          milegage_used DECIMAL(5, 2) NULL,
          progress CHAR(50) NULL
        );

        CREATE TABLE DrivingTests (
          test_num INT NOT NULL,
          client_id CHAR(10) NOT NULL,
          test_date DATETIME NOT NULL,
          seat_num INT NOT NULL,
          score INT NULL,
          test_notes CHAR(255) NULL
        );

        ALTER TABLE Clients ADD PRIMARY KEY (client_id);
        ALTER TABLE Applications ADD PRIMARY KEY (application_id);
        ALTER TABLE Instructors ADD PRIMARY KEY (instructor_id);
        ALTER TABLE Offices ADD PRIMARY KEY (office_id);
        ALTER TABLE Lessons ADD PRIMARY KEY (lesson_num);
        ALTER TABLE DrivingTests ADD PRIMARY KEY (test_num);

        ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Offices FOREIGN KEY (office_id) REFERENCES Offices (office_id);
        ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id);
        ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Instructors FOREIGN KEY (instructor_id) REFERENCES Instructors (instructor_id);
        ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Cars FOREIGN KEY (car_id) REFERENCES Cars (car_id);
        ALTER TABLE Lessons ADD CONSTRAINT FK_Lessons_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id);
        ALTER TABLE Cars ADD CONSTRAINT FK_Cars_Offices FOREIGN KEY (office_id) REFERENCES Offices (office_id);
        ALTER TABLE Clients ADD CONSTRAINT FK_DrivingTests_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id);

    These are the errors that I get:

        mysql> ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Cars FOREIGN KEY (car_id) REFERENCES Cars (car_id);
        ERROR 1005 (HY000): Can't create table 'foo.#sql-12c_4' (errno: 150)

    I ran SHOW ENGINE INNODB STATUS, which gives a more detailed error description:

        ------------------------
        LATEST FOREIGN KEY ERROR
        ------------------------
        100509 20:59:49 Error in foreign key constraint of table practice9/#sql-12c_4:
        FOREIGN KEY (car_id) REFERENCES Cars (car_id):
        Cannot find an index in the referenced table where the referenced
        columns appear as the first columns, or column types
        in the table and the referenced table do not match for constraint.
        Note that the internal storage type of ENUM and SET changed in
        tables created with >= InnoDB-4.1.12, and such columns in old tables
        cannot be referenced by such columns in new tables.
        See http://dev.mysql.com/doc/refman/5.1/en/innodb-foreign-key-constraints.html
        for correct foreign key definition.
        ------------

    I searched around on StackOverflow and elsewhere online - came across a helpful blog post here with pointers on how to resolve this error - but I can't figure out what's going wrong. Any help would be appreciated!

    Read the article

  • My Dijit DateTimeCombo widget doesn't send selected value on form submission

    - by david bessire
    I need to create a Dojo widget that lets users specify date & time. I found a sample implementation attached to an entry in the Dojo bug tracker. It looks nice and mostly works, but when I submit the form, the value sent by the client is not the user-selected value but the value originally sent from the server. What changes do I need to make to get the widget to submit the date & time value? Sample usage is to render a JSP with basic HTML tags (form & input), then dojo.addOnLoad a function which selects the basic elements by ID, adds the dojoType attribute, and dojo.parser.parse()-es the page. Thanks in advance. The widget is implemented in two files. The application uses Dojo 1.3.

    File 1: DateTimeCombo.js

        dojo.provide("dojox.form.DateTimeCombo");
        dojo.require("dojox.form._DateTimeCombo");
        dojo.require("dijit.form._DateTimeTextBox");

        dojo.declare(
          "dojox.form.DateTimeCombo",
          dijit.form._DateTimeTextBox,
          {
            baseClass: "dojoxformDateTimeCombo dijitTextBox",
            popupClass: "dojox.form._DateTimeCombo",
            pickerPostOpen: "pickerPostOpen_fn",
            _selector: 'date',
            constructor: function (argv) {},
            postMixInProperties: function() {
              dojo.mixin(this.constraints, {
                /* datePattern: 'MM/dd/yyyy HH:mm:ss', timePattern: 'HH:mm:ss', */
                datePattern: 'MM/dd/yyyy HH:mm',
                timePattern: 'HH:mm',
                clickableIncrement: 'T00:15:00',
                visibleIncrement: 'T00:15:00',
                visibleRange: 'T01:00:00'
              });
              this.inherited(arguments);
            },
            _open: function () {
              this.inherited(arguments);
              if (this._picker !== null && (this.pickerPostOpen !== null && this.pickerPostOpen !== "")) {
                if (this._picker.pickerPostOpen_fn !== null) {
                  this._picker.pickerPostOpen_fn(this);
                }
              }
            }
          }
        );

    File 2: _DateTimeCombo.js

        dojo.provide("dojox.form._DateTimeCombo");
        dojo.require("dojo.date.stamp");
        dojo.require("dijit._Widget");
        dojo.require("dijit._Templated");
        dojo.require("dijit._Calendar");
        dojo.require("dijit.form.TimeTextBox");
        dojo.require("dijit.form.Button");

        dojo.declare("dojox.form._DateTimeCombo", [dijit._Widget, dijit._Templated], {
          // invoked only if time picker is empty
          defaultTime: function () {
            var res = new Date();
            res.setHours(0, 0, 0);
            return res;
          },

          // id of this table below is the same as this.id
          templateString: " <table class=\"dojoxDateTimeCombo\" waiRole=\"presentation\">\
            <tr class=\"dojoxTDComboCalendarContainer\">\
              <td>\
                <center><input dojoAttachPoint=\"calendar\" dojoType=\"dijit._Calendar\"></input></center>\
              </td>\
            </tr>\
            <tr class=\"dojoxTDComboTimeTextBoxContainer\">\
              <td>\
                <center><input dojoAttachPoint=\"timePicker\" dojoType=\"dijit.form.TimeTextBox\"></input></center>\
              </td>\
            </tr>\
            <tr><td><center><button dojoAttachPoint=\"ctButton\" dojoType=\"dijit.form.Button\">Ok</button></center></td></tr>\
          </table>\
          ",

          widgetsInTemplate: true,
          constructor: function(arg) {},

          postMixInProperties: function() {
            this.inherited(arguments);
          },

          postCreate: function() {
            this.inherited(arguments);
            this.connect(this.ctButton, "onClick", "_onValueSelected");
          },

          // initialize pickers to calendar value
          pickerPostOpen_fn: function (parent_inst) {
            var parent_value = parent_inst.attr('value');
            if (parent_value !== null) {
              this.setValue(parent_value);
            }
          },

          // expects a valid date object
          setValue: function(value) {
            if (value !== null) {
              this.calendar.attr('value', value);
              this.timePicker.attr('value', value);
            }
          },

          // return a Date built from the date in the calendar and the time in the time picker
          getValue: function() {
            var value = this.calendar.attr('value');
            var result = value;
            if (this.timePicker.value !== null) {
              if ((this.timePicker.value instanceof Date) === true) {
                result.setHours(this.timePicker.value.getHours(),
                                this.timePicker.value.getMinutes(),
                                this.timePicker.value.getSeconds());
                return result;
              }
            } else {
              var defTime = this.defaultTime();
              result.setHours(defTime.getHours(), defTime.getMinutes(), defTime.getSeconds());
              return result;
            }
          },

          _onValueSelected: function() {
            var value = this.getValue();
            this.onValueSelected(value);
          },

          onValueSelected: function(value) {}
        });

    Read the article

  • Where'd My Data Go? (and/or...How Do I Get Rid of It?)

    - by David Paquette
    Want to get a better idea of how cascade deletes work in Entity Framework Code First scenarios? Want to see it in action? Stick with us as we quickly demystify what happens when you tell your data context to nuke a parent entity. This post is authored by Calgary .NET User Group Leader David Paquette with help from Microsoft MVP in ASP.NET James Chambers. We got to spend a great week back in March at Prairie Dev Con West, chock-full of sessions, presentations, workshops, conversations and, of course, questions. One of the questions that came up during my session: "How does Entity Framework Code First deal with cascading deletes?". James and I had different thoughts on what the default was, if it was different from SQL Server, if it was the same as EF proper and if there was a way to override whatever the default was. So we built a set of examples and figured out that the answer is simple: it depends. (Download Samples)

    Consider the example of a hockey league. You have several different entities in the league including games, teams that play the games and players that make up the teams. Each team also has a mascot. If you delete a team, we need a couple of things to happen:

    - The team, games and mascot will be deleted, and
    - The players for that team will remain in the league (and therefore the database) but they should no longer be assigned to a team.

    So, let's make this start to come together with a look at the default behaviour in SQL when using an EDMX-driven project.

    The Reference – Understanding EF's Behaviour with an EDMX/DB First Approach

    First up, let's take a look at the DB first approach. In the database, we defined 4 tables: Teams, Players, Mascots, and Games. We also defined 4 foreign keys as follows:

    - Players.Team_Id (NULL) –> Teams.Id
    - Mascots.Id (NOT NULL) –> Teams.Id (ON DELETE CASCADE)
    - Games.HomeTeam_Id (NOT NULL) –> Teams.Id
    - Games.AwayTeam_Id (NOT NULL) –> Teams.Id

    Note that by specifying ON DELETE CASCADE for the Mascots –> Teams foreign key, the database will automatically delete the team's mascot when the team is deleted. While we want the same behaviour for the Games –> Teams foreign keys, it is not possible to accomplish this using ON DELETE CASCADE in SQL Server. Specifying ON DELETE CASCADE on these foreign keys would cause a circular reference error: "The series of cascading referential actions triggered by a single DELETE or UPDATE must form a tree that contains no circular references. No table can appear more than one time in the list of all cascading referential actions that result from the DELETE or UPDATE." – MSDN

    When we create an entity data model from the above database, we get the following. In order to get the Games to be deleted when the Team is deleted, we need to specify an End1 OnDelete action of Cascade for the HomeGames and AwayGames associations. Now we have an Entity Data Model that accomplishes what we set out to do. One caveat here is that Entity Framework will only properly handle the cascading delete when the players and games for the team have been loaded into memory. For a more detailed look at Cascade Delete in EF Database First, take a look at this blog post by Alex James.

    Building The Same Sample with EF Code First

    Next, we're going to build up the model with the code first approach.
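    The post doesn't list the entity classes themselves, so here is a minimal sketch of what they might look like, inferred from the snippets that follow (the collection names HomeGames and AwayGames come from the fluent API calls below; everything else is an assumption, and the Mascot class appears in the article itself):

        // Sketch of the hockey-league model the following snippets appear to assume.
        public class Team
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public virtual Mascot Mascot { get; set; }
            public virtual ICollection<Player> Players { get; set; }
            public virtual ICollection<Game> HomeGames { get; set; }
            public virtual ICollection<Game> AwayGames { get; set; }
        }

        public class Player
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public virtual Team Team { get; set; }   // optional -> no cascade delete by convention
        }

        public class Game
        {
            public int Id { get; set; }
            public DateTime Date { get; set; }
            public virtual Team HomeTeam { get; set; }
            public virtual Team AwayTeam { get; set; }
        }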
    EF Code First is defined on the ADO.NET team blog as such: "Code First allows you to define your model using C# or VB.Net classes, optionally additional configuration can be performed using attributes on your classes and properties or by using a Fluent API. Your model can be used to generate a database schema or to map to an existing database."

    Entity Framework Code First follows some conventions to determine when to cascade delete on a relationship. More details can be found on MSDN:

    - If a foreign key on the dependent entity is not nullable, then Code First sets cascade delete on the relationship.
    - If a foreign key on the dependent entity is nullable, Code First does not set cascade delete on the relationship, and when the principal is deleted the foreign key will be set to null.
    - The multiplicity and cascade delete behavior detected by convention can be overridden by using the fluent API. For more information, see Configuring Relationships with Fluent API (Code First).

    Our DbContext consists of 4 DbSets:

        public DbSet<Team> Teams { get; set; }
        public DbSet<Player> Players { get; set; }
        public DbSet<Mascot> Mascots { get; set; }
        public DbSet<Game> Games { get; set; }

    When we set the Mascot –> Team relationship to required, Entity Framework will automatically delete the Mascot when the Team is deleted. This can be done either using the [Required] data annotation attribute, or by overriding the OnModelCreating method of your DbContext and using the fluent API.

    Data Annotations:

        public class Mascot
        {
            public int Id { get; set; }
            public string Name { get; set; }

            [Required]
            public virtual Team Team { get; set; }
        }

    Fluent API:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);
        }

    The Player –> Team relationship is automatically handled by the Code First conventions. When a Team is deleted, the Team property for all the players on that team will be set to null. No additional configuration is required; however, all the Player entities must be loaded into memory for the cascading to work properly.

    The Game –> Team relationship causes some grief in our Code First example. If we try setting the HomeTeam and AwayTeam relationships to required, Entity Framework will attempt to set ON DELETE CASCADE for the HomeTeam and AwayTeam foreign keys when creating the database tables. As we saw in the database first example, this causes a circular reference error and throws the following SqlException:

        Introducing FOREIGN KEY constraint 'FK_Games_Teams_AwayTeam_Id' on table 'Games' may cause cycles
        or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other
        FOREIGN KEY constraints. Could not create constraint.

    To solve this problem, we need to disable the default cascade delete behaviour using the fluent API:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);

            modelBuilder.Entity<Team>()
                .HasMany(t => t.HomeGames)
                .WithRequired(g => g.HomeTeam)
                .WillCascadeOnDelete(false);

            modelBuilder.Entity<Team>()
                .HasMany(t => t.AwayGames)
                .WithRequired(g => g.AwayTeam)
                .WillCascadeOnDelete(false);

            base.OnModelCreating(modelBuilder);
        }

    Unfortunately, this means we need to manually manage the cascade delete behaviour. When a Team is deleted, we need to manually delete all the home and away Games for that Team.
        foreach (Game awayGame in jets.AwayGames.ToArray())
        {
            entities.Games.Remove(awayGame);
        }
        foreach (Game homeGame in homeGames)
        {
            entities.Games.Remove(homeGame);
        }
        entities.Teams.Remove(jets);
        entities.SaveChanges();

    Overriding the Defaults – When and How To

    As you have seen, the default behaviour of Entity Framework Code First can be overridden using the fluent API. This can be done by overriding the OnModelCreating method of your DbContext, or by creating separate model override files for each entity. More information is available on MSDN.

    Going Further

    These were simple examples, but they helped us illustrate a couple of points. First of all, we were able to demonstrate the default behaviour of Entity Framework when dealing with cascading deletes, specifically how entity relationships affect the outcome. Secondly, we showed you how to modify the code and control the behaviour to get the outcome you're looking for. Finally, we showed you how easy it is to explore this kind of thing, and we're hoping that you get a chance to experiment even further. For example, did you know that:

    - Entity Framework Code First also works seamlessly with SQL Azure (MSDN)
    - Database creation defaults can be overridden using a variety of IDatabaseInitializers (Understanding Database Initializers)
    - You can use Code Based migrations to manage database upgrades as your model continues to evolve (MSDN)

    Next Steps

    There's no time like the present to start the learning, so here's what you need to do:

    - Get up to date in Visual Studio 2010 (VS2010 | SP1) or Visual Studio 2012 (VS2012)
    - Build yourself a project to try these concepts out (or download the sample project)
    - Get into the community and ask questions! There are a ton of great resources out there and community members willing to help you out (like these two guys!).

    Good luck!

    About the Authors

    David Paquette works as a lead developer at P2 Energy Solutions in Calgary, Alberta, where he builds commercial software products for the energy industry. Outside of work, David enjoys outdoor camping, fishing, and skiing. David is also active in the software community, giving presentations both locally and at conferences. David also serves as the President of Calgary .NET User Group.

    James Chambers crafts software awesomeness with an incredible team at LogiSense Corp, based in Cambridge, Ontario. A husband, father and humanitarian, he is currently residing in the province of Manitoba, where he resists the urge to cheer for the Jets and maintains his allegiance to the Calgary Flames. When he's not active with the family, outdoors or volunteering, you can find James speaking at conferences and user groups across the country about web development and related technologies.

    Read the article

  • Launch configuration for an Eclipse RCP project

    - by ekebsi
    I am new to Eclipse and have been asked to make some changes to an RCP project. Now I cannot get it to run. Unfortunately I cannot contact the author to get some help. I would appreciate a hint which could lead to the solution. When I launch the application I get the following log message:

        !ENTRY org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE One or more bundles are not resolved because the following root constraints are not resolved:
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.team.core.nl1_3.3.0.I20070607.jar was not resolved.
        !SUBENTRY 2 org.eclipse.team.core.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.team.core_[3.2.0,4.0.0).
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.emf.edit.ui.nl1_2.3.0.v200706262000.jar was not resolved.
        !SUBENTRY 2 org.eclipse.emf.edit.ui.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.emf.edit.ui_[2.0.0,4.0.0).
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.team.ui.nl1_3.3.0.I20070607.jar was not resolved.
        !SUBENTRY 2 org.eclipse.team.ui.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.team.ui_[3.2.0,4.0.0).
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.rcp.nl1_3.2.0.v20070612.jar was not resolved.
        !SUBENTRY 2 org.eclipse.rcp.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.rcp_[3.2.0,4.0.0).
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.ui.win32.nl1_3.2.100.I20070319-0010.jar was not resolved.
        !SUBENTRY 2 org.eclipse.ui.win32.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.ui.win32_[3.2.0,4.0.0).

        !ENTRY org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE The following is a complete list of bundles which are not resolved, see the prior log entry for the root cause if it exists:
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.emf.edit.ui.nl1_2.3.0.v200706262000.jar [29] was not resolved.
        !SUBENTRY 2 org.eclipse.emf.edit.ui.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.emf.edit.ui_[2.0.0,4.0.0).
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.team.ui.nl1_3.3.0.I20070607.jar [45] was not resolved.
        !SUBENTRY 2 org.eclipse.team.ui.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.team.ui_[3.2.0,4.0.0).
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.ui.win32.nl1_3.2.100.I20070319-0010.jar [66] was not resolved.
        !SUBENTRY 2 org.eclipse.ui.win32.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.ui.win32_[3.2.0,4.0.0).
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.team.core.nl1_3.3.0.I20070607.jar [83] was not resolved.
        !SUBENTRY 2 org.eclipse.team.core.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.team.core_[3.2.0,4.0.0).
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.rcp.nl1_3.2.0.v20070612.jar [124] was not resolved.
        !SUBENTRY 2 org.eclipse.rcp.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.rcp_[3.2.0,4.0.0).

    I am using Windows 7 Enterprise 64-bit (4 GB) and Eclipse Juno 64-bit.

    Read the article

  • CodePlex Daily Summary for Monday, January 03, 2011

    CodePlex Daily Summary for Monday, January 03, 2011Popular ReleasesStyleCop for ReSharper: StyleCop for ReSharper 5.1.14977.000: Prerequisites: ============== o Visual Studio 2008 / Visual Studio 2010 o ReSharper 5.1.1753.4 o StyleCop 4.4.1.2 Preview This release adds no new features, has bug fixes around performance and unhandled errors reported on YouTrack.Morphine: Morphine Alpha Build 30: - Optimization - Some fixes with playlists - Added kinetic scrolling to tracklist view - Updated animations - Added controls to tracklist view Media opens by clicking "No media" or song title now.BloodSim: BloodSim - 1.3.1.0: - Restructured simulation log back end to something less stupid to drastically reduce simulation time and memory usage - Removed a debug log entry that was left over from testing of 1.3.0.0 - Fixed a rounding and calculation error with Haste rating - Added option for Rune of SwordshatteringDbDocument: DbDoc Initial Version: DbDoc Initial versionUltimateJB: UltimateJB 2.03 PL3 KAKAROTO: Voici une version attendu avec impatience pour beaucoup : - La version PL3 KAKAROTO intégre ses dernières modification et intégre maintenant le firmware 2.43 !!! Conclusion : - ultimateJB DEFAULT => Pas de spoof mais disponible pour les PS3 suivantes : 3.41_kiosk 3.41 3.40 3.30 3.21 3.15 3.10 3.01 2.76 2.70 2.60 2.53 2.43ASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.2: Atomic CMS 2.1.2 release notes Atomic CMS installation guide Kind Of Magic MSBuild Task: Beta 4: Update to keep up with latest bug fixes. To those who don't like Magic/NoMagic attributes, you may change these names in KindOfMagic.targets file: Change this line: <MagicTask Assembly="@(IntermediateAssembly)" References="@(ReferencePath)"/> to something like this: <MagicTask Assembly="@(IntermediateAssembly)" References="@(ReferencePath)" MagicAttribute="MyMagicAttribute" NoMagicAttribute="MyNoMagicAttribute"/>N2 CMS: 2.1: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major Changes Support for auto-implemented properties ({get;set;}, based on contribution by And Poulsen) All-round improvements and bugfixes File manager improvements (multiple file upload, resize images to fit) New image gallery Infinite scroll paging on news Content templates First time with N2? Try the demo site Download one of the template packs (above) and open the proj...Wii Backup Fusion: Wii Backup Fusion 1.0: - Norwegian translation - French translation - German translation - WBFS dump for analysis - Scalable full HQ cover - Support for log file - Load game images improved - Support for image splitting - Diff for images after transfer - Support for scrubbing modes - Search functionality for log - Recurse depth for Files/Load - Show progress while downloading game cover - Supports more databases for cover download - Game cover loading routines improvedAutoLoL: AutoLoL v1.5.1: Fix: Fixed a bug where pressing Save As would not select the Mastery Directory by default Unexpected errors are now always reported to the user before closing AutoLoL down.* Extracted champion data to Data directory** Added disclaimer to notify users this application has nothing to do with Riot Games Inc. 
Updated Codeplex image * An error report will be shown to the user which can help the developers to find out what caused the error, this should improve support ** We are working on ...Random password generator written in F#.: VS 2010 solution + exe: Download a VS 2010 solution (unzip before opening) or a ready to go exe.TortoiseHg: TortoiseHg 1.1.8: TortoiseHg 1.1.8 is a minor bug fix release, with minor improvementsBlogEngine.NET: BlogEngine.NET 2.0: Get DotNetBlogEngine for 3 Months Free! Click Here for More Info 3 Months FREE – BlogEngine.NET Hosting – Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. To get started, be sure to check out our installatio...EnhSim: EnhSim 2.2.8 ALPHA: 2.2.8 ALPHAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Rebuilt Feral Spir...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.6 Released: Hi, Today we are releasing final version of Visifire, v3.6.6 with the following new feature: * TextDecorations property is implemented in Title for Chart. * TitleTextDecorations property is implemented in Axis. * MinPointHeight property is now applicable for Column and Bar Charts. Also this release includes few bug fixes: * ToolTipText property of DataSeries was not getting applied from Style. * Chart threw exception if IndicatorEnabled property was set to true and Too...StyleCop Compliant Visual Studio Code Snippets: Visual Studio Code Snippets - January 2011: StyleCop Compliant Visual Studio Code Snippets Visual Studio 2010 provides C# developers with 38 code snippets, enhancing developer productivty and increasing the consistency of the code. Within this project the original code snippets have been refactored to provide StyleCop compliant versions of the original code snippets while also adding many new code snippets. Within the January 2011 release you'll find 82 code snippets to make you more productive and the code you write more consistent!...WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.2: Version: 2.0.0.2 (Milestone 2): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Remark The sample applications are using Microsoft’s IoC container MEF. However, the WPF Application Framework (WAF) doesn’t force you to use the same IoC container in your application. You can use ...Cosmos (C# Open Source Managed Operating System): 71406: This is the second release supporting the full line of Visual Studio 2010 editions. 
Changes since release 71246 include: Debug info is now stored in a single .cpdb file (which is a Firebird database) Keyboard input works now (using Console.ReadLine) Console colors work (using Console.ForegroundColor and .BackgroundColor)Paint.NET PSD Plugin: 1.6.0: Handling of layer masks has been greatly improved. Improved reliability. Many PSD files that previously loaded in as garbage will now load in correctly. Parallelized loading. PSD files containing layer masks will load in a bit quicker thanks to the removal of the sequential bottleneck. Hidden layers are no longer made visible on save. Many thanks to the users who helped expose the layer masks problem: Rob Horowitz, M_Lyons10. Please keep sending in those bug reports and PSD repro files!Facebook C# SDK: 4.1.1: From 4.1.1 Release: Authentication bug fix caused by facebook change (error with redirects in Safari) Authenticator fix, always returning true From 4.1.0 Release Lots of bug fixes Removed Dynamic Runtime Language dependencies from non-dynamic platforms. Samples included in release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic Sample Changed internal serialization to use Json.net BREAKING CHANGE: Canvas Session is no longer supported. Use Signed...New ProjectsAndroid Battery Indicator: Small widget that shows the battery life as a percentageBudget: A personal exploration into C#. A quickly thrown together project that allows you to track expenses by week. MS Access back end.Cafeteria Dotnetnuke Module: Simple Dotnetnuke Module about Managing Cafeteria. This module was applied at Ho Chi Minh City International University Portal Website.E Book Database & Organizer: This Is a project to organize ebooks and retrive information about them very fast.Later a free e-library software may be developed based on this project. This project mainly developed with visual C#2008express edition.It also can compiled by mono. It use SQLite as database. GrowlWebBridge: Using c# Growl connector library, accept notification parameters via querystring and fire off to Growl. I'm personally using it to have my Vera 2 (www.micasaverde.com) send growls when things happen.JakoPiste: N/AJAudit: Static analysis for java programs. Helps audit java code. Reports possible code improvements. 100% C#.Morphine: Morphine is a nice WPF media player with Android Honeycomb interface.NUnit test template for VS2010 Express MVC 2: This is an attempt to create a set of template of NUnit (a test framework) for MVC 2 in Visual Studio 2010 Express.Orchard Image Field Module: Orchard Image Field Module adds a new Image editor to content type management. PDB2MOBI batch convert: Converts PDB files to MOBI format in batch. Used primary to convert large PDB libraries for Kindle.Random password generator written in F#.: A password generator that creates random and strong passwords. It's developed in F#. Robots Routing using Swarm Intelligence: A project to simulate and test a multiagent algorithm for finding multiple noisy radiation Sources with spatial and communication constraints with an emulated environment with different parameters and conditions. sanmei: sanmeiServer DateTime: Server DateTime renders the date and time from the server and make it active using javascript. It is in Military Time Format.Windows K: Microsoft Imagine Cup 2011 Project

    Read the article

  • Nightmare: Upgrading Tomcat 5.5 to 6.0

    - by pavanlimo
    I'm trying to upgrade a perfectly running embedded Tomcat 5.5 to Tomcat 6.0. I understand that all I need to do is replace Tomcat 5.5 jars with 6.0. That's what I did. So I replaced the following jars: catalina-5.0.28.jar catalina-5.5.9.jar catalina-optional-5.5.9.jar commons-el.jar commons-modeler-1.1.0.jar jasper-compiler-jdt.jar jasper-compiler.jar jasper-runtime.jar jmx-5.0.28.jar jsp-api-2.0.jar naming-factory.jar naming-resources.jar servlet-api-2.4.jar servlets-default.jar tomcat-coyote.jar tomcat-http.jar tomcat-util.jar with: annotations-api.jar catalina.jar jasper.jar tomcat-dbcp.jar catalina-ant.jar el-api.jar jsp-api.jar tomcat-i18n-es.jar catalina-ha.jar jasper-el.jar servlet-api.jar tomcat-i18n-fr.jar catalina-tribes.jar jasper-jdt.jar tomcat-coyote.jar tomcat-i18n-ja.jar tomcat-juli.jar As soon as I start the server, I get the following message in the logs at INFO level: INFO: Starting Servlet Engine: Apache Tomcat/6.0.29 Dec 31, 2010 6:04:18 AM org.apache.catalina.loader.WebappClassLoader validateJarFile INFO: validateJarFile(/usr/local/blah/blue/./WEB-INF/lib/servlet-api.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class Based on this explanation, I need to remove a jar file which has a conflicting Servlet.class. I swear to God, there is no other conflicting jar file; I grepped system wide for Servlet.class, and it matched only servlet-api.jar. I also downloaded javaee.jar and used it in place of servlet-api.jar, to the same avail. Having tried a lot of this stuff, I did not have much left to go on, so I set the Tomcat logging level to ALL. In the log I could see that it is trying to check for Servlet.class in each and every jar it is loading until it finds servlet-api.jar and throws the "jar not loaded" message as soon as it finds servlet-api.jar. See below: FINE: Checking for javax/servlet/Servlet.class Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappLoader setRepositories FINE: Deploy JAR /WEB-INF/lib/servlet-api.jar to /usr/local/blah/blue/./WEB-INF/lib/servlet-api.jar Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappClassLoader addJar FINE: addJar(/WEB-INF/lib/servlet-api.jar) Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappClassLoader validateJarFile FINE: Checking for javax/servlet/Servlet.class Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappClassLoader validateJarFile INFO: validateJarFile(/usr/local/blah/blue/./WEB-INF/lib/servlet-api.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappLoader setRepositories Please note, however, that Tomcat starts successfully! And as soon as I hit the URL in the browser, I get a blank page (this may be in my case only, I guess 'cuz of my web.xml, sorta different from most. Other people on the internet have got Error 404 instead.) with the following log statements (at the finest level): Jan 2, 2011 9:40:01 AM org.apache.catalina.connector.CoyoteAdapter parseSessionCookiesId FINE: Requested cookie session id is 0FBA716E3F9B0147C3AF7ABAE3B1C27B Jan 2, 2011 9:40:01 AM org.apache.catalina.authenticator.AuthenticatorBase invoke FINE: Security checking request GET /login.jsp Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: No applicable constraint located Jan 2, 2011 9:40:01 AM org.apache.catalina.authenticator.AuthenticatorBase invoke FINE: Not subject to any constraint Jan 2, 2011 9:40:01 AM org.apache.catalina.core.StandardWrapper allocate FINEST: Returning non-STM instance I'm not sure if the above log message is important, but I'm for all-out disclosure here. One interesting thing though: I manually created a dummy jsp file containing only "helloooo" just outside the WEB-INF folder (no security constraints for this file). This file was accessible and could be displayed. But, all my jsp's and classes are inside WEB-INF (of course). Sick and tired of this issue, please help me solve it. I've already spent 20-24 hours on this unsuccessfully. Any pointers, directions, hints, leads?

    Read the article

  • Taming Hopping Windows

    - by Roman Schindlauer
    At first glance, hopping windows seem fairly innocuous and obvious. They organize events into windows with a simple periodic definition: the windows have some duration d (e.g. a window covers 5 second time intervals), an interval or period p (e.g. a new window starts every 2 seconds) and an alignment a (e.g. one of those windows starts at 12:00 PM on March 15, 2012 UTC). var wins = xs     .HoppingWindow(TimeSpan.FromSeconds(5),                    TimeSpan.FromSeconds(2),                    new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc)); Logically, there is a window with start time a + np and end time a + np + d for every integer n. That’s a lot of windows. So why doesn’t the following query (always) blow up? var query = wins.Select(win => win.Count()); A few users have asked why StreamInsight doesn’t produce output for empty windows. Primarily it’s because there is an infinite number of empty windows! (Actually, StreamInsight uses DateTimeOffset.MaxValue to approximate “the end of time” and DateTimeOffset.MinValue to approximate “the beginning of time”, so the number of windows is lower in practice.) That was the good news. Now the bad news. Events also have duration. Consider the following simple input: var xs = this.Application                 .DefineEnumerable(() => new[]                     { EdgeEvent.CreateStart(DateTimeOffset.UtcNow, 0) })                 .ToStreamable(AdvanceTimeSettings.IncreasingStartTime); Because the event has no explicit end edge, it lasts until the end of time. So there are lots of non-empty windows if we apply a hopping window to that single event! For this reason, we need to be careful with hopping window queries in StreamInsight. Or we can switch to a custom implementation of hopping windows that doesn’t suffer from this shortcoming. The alternate window implementation produces output only when the input changes. We start by breaking up the timeline into non-overlapping intervals assigned to each window. In figure 1, six hopping windows (“Windows”) are assigned to six intervals (“Assignments”) in the timeline. Next we take input events (“Events”) and alter their lifetimes (“Altered Events”) so that they cover the intervals of the windows they intersect. In figure 1, you can see that the first event e1 intersects windows w1 and w2 so it is adjusted to cover assignments a1 and a2. Finally, we can use snapshot windows (“Snapshots”) to produce output for the hopping windows. Notice however that instead of having six windows generating output, we have only four. The first and second snapshots correspond to the first and second hopping windows. The remaining snapshots however cover two hopping windows each! While in this example we saved only two events, the savings can be more significant when the ratio of event duration to window duration is higher. Figure 1: Timeline The implementation of this strategy is straightforward. We need to set the start times of events to the start time of the interval assigned to the earliest window including the start time. Similarly, we need to modify the end times of events to the end time of the interval assigned to the latest window including the end time. The following snap-to-boundary function that rounds a timestamp value t down to the nearest value t' <= t such that t' is a + np for some integer n will be useful. For convenience, we will represent both DateTime and TimeSpan values using long ticks: static long SnapToBoundary(long t, long a, long p) {     return t - ((t - a) % p) - (t > a ? 
0L : p); } How do we find the earliest window including the start time for an event? It’s the window following the last window that does not include the start time assuming that there are no gaps in the windows (i.e. duration < interval), and limitation of this solution. To find the end time of that antecedent window, we need to know the alignment of window ends: long e = a + (d % p); Using the window end alignment, we are finally ready to describe the start time selector: static long AdjustStartTime(long t, long e, long p) {     return SnapToBoundary(t, e, p) + p; } To find the latest window including the end time for an event, we look for the last window start time (non-inclusive): public static long AdjustEndTime(long t, long a, long d, long p) {     return SnapToBoundary(t - 1, a, p) + p + d; } Bringing it together, we can define the translation from events to ‘altered events’ as in Figure 1: public static IQStreamable<T> SnapToWindowIntervals<T>(IQStreamable<T> source, TimeSpan duration, TimeSpan interval, DateTime alignment) {     if (source == null) throw new ArgumentNullException("source");     // reason about DateTime and TimeSpan in ticks     long d = Math.Min(DateTime.MaxValue.Ticks, duration.Ticks);     long p = Math.Min(DateTime.MaxValue.Ticks, Math.Abs(interval.Ticks));     // set alignment to earliest possible window     var a = alignment.ToUniversalTime().Ticks % p;     // verify constraints of this solution     if (d <= 0L) { throw new ArgumentOutOfRangeException("duration"); }     if (p == 0L || p > d) { throw new ArgumentOutOfRangeException("interval"); }     // find the alignment of window ends     long e = a + (d % p);     return source.AlterEventLifetime(         evt => ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)),         evt => ToDateTime(AdjustEndTime(evt.EndTime.ToUniversalTime().Ticks, a, d, p)) -             ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p))); } public static DateTime ToDateTime(long ticks) {     // just snap to min or max value rather than under/overflowing     return ticks < DateTime.MinValue.Ticks         ? new DateTime(DateTime.MinValue.Ticks, DateTimeKind.Utc)         : ticks > DateTime.MaxValue.Ticks         ? new DateTime(DateTime.MaxValue.Ticks, DateTimeKind.Utc)         : new DateTime(ticks, DateTimeKind.Utc); } Finally, we can describe our custom hopping window operator: public static IQWindowedStreamable<T> HoppingWindow2<T>(     IQStreamable<T> source,     TimeSpan duration,     TimeSpan interval,     DateTime alignment) {     if (source == null) { throw new ArgumentNullException("source"); }     return SnapToWindowIntervals(source, duration, interval, alignment).SnapshotWindow(); } By switching from HoppingWindow to HoppingWindow2 in the following example, the query returns quickly rather than gobbling resources and ultimately failing! 
public void Main() {     var start = new DateTimeOffset(new DateTime(2012, 6, 28), TimeSpan.Zero);     var duration = TimeSpan.FromSeconds(5);     var interval = TimeSpan.FromSeconds(2);     var alignment = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc);     var events = this.Application.DefineEnumerable(() => new[]     {         EdgeEvent.CreateStart(start.AddSeconds(0), "e0"),         EdgeEvent.CreateStart(start.AddSeconds(1), "e1"),         EdgeEvent.CreateEnd(start.AddSeconds(1), start.AddSeconds(2), "e1"),         EdgeEvent.CreateStart(start.AddSeconds(3), "e2"),         EdgeEvent.CreateStart(start.AddSeconds(9), "e3"),         EdgeEvent.CreateEnd(start.AddSeconds(3), start.AddSeconds(10), "e2"),         EdgeEvent.CreateEnd(start.AddSeconds(9), start.AddSeconds(10), "e3"),     }).ToStreamable(AdvanceTimeSettings.IncreasingStartTime);     var adjustedEvents = SnapToWindowIntervals(events, duration, interval, alignment);     var query = from win in HoppingWindow2(events, duration, interval, alignment)                 select win.Count();     DisplayResults(adjustedEvents, "Adjusted Events");     DisplayResults(query, "Query"); } As you can see, instead of producing a massive number of windows for the open start edge e0, a single window is emitted from 12:00:15 AM until the end of time: Adjusted Events StartTime EndTime Payload 6/28/2012 12:00:01 AM 12/31/9999 11:59:59 PM e0 6/28/2012 12:00:03 AM 6/28/2012 12:00:07 AM e1 6/28/2012 12:00:05 AM 6/28/2012 12:00:15 AM e2 6/28/2012 12:00:11 AM 6/28/2012 12:00:15 AM e3 Query StartTime EndTime Payload 6/28/2012 12:00:01 AM 6/28/2012 12:00:03 AM 1 6/28/2012 12:00:03 AM 6/28/2012 12:00:05 AM 2 6/28/2012 12:00:05 AM 6/28/2012 12:00:07 AM 3 6/28/2012 12:00:07 AM 6/28/2012 12:00:11 AM 2 6/28/2012 12:00:11 AM 6/28/2012 12:00:15 AM 3 6/28/2012 12:00:15 AM 12/31/9999 11:59:59 PM 1 Regards, The StreamInsight Team

    Read the article

  • CodePlex Daily Summary for Tuesday, June 25, 2013

    CodePlex Daily Summary for Tuesday, June 25, 2013Popular ReleasesCrmRestKit.js: CrmRestKit-2.6.1.js: Improvments for "ByQueryAll": Instead of defering unit all records are loaded, the progress-handler is invoked for each 50 records Added "progress-handler" support for "ByQueryAll" See: http://api.jquery.com/deferred.notify/* See Unit-tests for details ('Retrieve all with progress notification')VG-Ripper & PG-Ripper: VG-Ripper 2.9.44: changes NEW: Added Support for "ImgChili.net" links FIXED: Auto UpdaterMinesweeper by S. Joshi: Minesweeper 1.0.0.1: Zipped!Alfa: Alfa.Common 1.0.4923: Removed useless property ErrorMessage from StringRegularExpressionValidationRule Repaired ErrorWindow and disabled report button after click action Implemented: DateTimeRangeValidationRule DateTimeValidationRule DateValidationRule NotEmptyValidationRule NotNullValidationRule RequiredValidationRule StringMaxLengthValidationRule StringMinLengthValidationRule TimeValidationRule DateTimeToStringConverter DecimalToStringConverter StringToDateTimeConverter StringToDecima...Document.Editor: 2013.25: What's new for Document.Editor 2013.25: Improved Spell Check support Improved User Interface Minor Bug Fix's, improvements and speed upsStyleMVVM: 3.0.2: This is a minor feature and bug fix release Features: ExportWhenDebuggerIsAttacedAttribute - new attribute that marks an attribute to only be exported when the debugger is attahced InjectedFilterAttributeFilterProvider - new Attribute Filter provider for MVC that injects the attributes Performance Improvements - minor speed improvements all over, and Import collections is now 50% faster Bug Fixes: Open Generic Constraints are now respected when finding exports Fix for fluent registrat...Base64 File Converter: First release: - One file can be converted at a time. - Bytes to Base64 and Base64 to Bytes conversions are availableADExtractor - work with Active-Directory: V1.3.0.2: 1.3.0.2- added optional value-index to properties - added new aggregation "Literal" to pass-through values - added new SQL commandline-switch --db to override the connection string - added new global commandline-switch --quiet to reduce output to minimum 1.3.0.1- sub-property resolution now working with aggregation (ie. All(member.objectGuid)) 1.3.0.0- added correct formatting for guids (GuidFormatter)EMICHAG: EMICHAG NETMF 4.2: Alpha release of the 4.2 version.WPF Composites: Version 4.3.0: In this Beta release, I broke my code out into two separate projects. There is a core FasterWPF.dll with the minimal required functionality. This can run with only the Aero.dll and the Rx .dll's. Then, I have a FasterWPFExtras .dll that requires and supports the Extended WPF Toolkit™ Community Edition V 1.9.0 (including Xceed DataGrid) and the Thriple .dll. This is for developers who want more . . . Finally, you may notice the other OPTIONAL .dll's available in the download such as the Dyn...Windows.Forms.Controls Revisited (SSTA.WinForms): SSTA.WinForms 1.0.0: Latest stable releaseAscend 3D: Ascend 2.0: Release notes: Implemented bone/armature animation Refactored class hierarchy and naming Addressed high CPU usage issue during animation Updated the Blender exporter and Ascend model format (now XML) Created AscendViewer, a tool for viewing Ascend modelsIndent Guides for Visual Studio: Indent Guides v13: ImportantThis release does not support Visual Studio 2010. The latest stable release for VS 2010 is v12.1. 
Version History Changed in v13 Added page width guide lines Added guide highlighting options Fixed guides appearing over collapsed blocks Fixed guides not appearing in newly opened files Fixed some potential crashes Fixed lines going through pragma statements Various updates for VS 2012 and VS 2013 Removed VS 2010 support Changed in v12.1: Fixed crash when unable to start...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.1.0 - Prerelease d: Fluent Ribbon Control Suite 2.1.0 - Prerelease d(supports .NET 3.5, 4.0 and 4.5) Includes: Fluent.dll (with .pdb and .xml) Showcase Application Samples (not for .NET 3.5) Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage) Resizing (ribbon reducing & enlarging principles) Galleries (Gallery in ContextMenu, InRibbonGallery) MVVM (shows how to use this library with Model-View-ViewModel pattern) KeyTips ScreenTips Toolbars ColorGallery *Walkthrough (do...Magick.NET: Magick.NET 6.8.5.1001: Magick.NET compiled against ImageMagick 6.8.5.10. Breaking changes: - MagickNET.Initialize has been made obsolete because the ImageMagick files in the directory are no longer necessary. - MagickGeometry is no longer IDisposable. - Renamed dll's so they include the platform name. - Image profiles can now only be accessed and modified with ImageProfile classes. - Renamed DrawableBase to Drawable. - Removed Args part of PathArc/PathCurvetoArgs/PathQuadraticCurvetoArgs classes. The...Three-Dimensional Maneuver Gear for Minecraft: TDMG 1.1.0.0 for 1.5.2: CodePlex???(????????) ?????????(???1/4) ??????????? ?????????? ???????????(??????????) ??????????????????????? ↑????、?????????????????????(???????) ???、??????????、?????????????????????、????????1.5?????????? Shift+W(????)??????????????????10°、?10°(?????????)???Hyper-V Management Pack Extensions 2012: HyperVMPE2012 (v1.0.1.126): Hyper-V Management Pack Extensions 2012 Beta ReleaseOutlook 2013 Add-In: Email appointments: This new version includes the following changes: - Ability to drag emails to the calendar to create appointments. Will gather all the recipients from all the emails and create an appointment on the day you drop the emails, with the text and subject of the last selected email (if more than one selected). - Increased maximum of numbers to display appointments to 30. You will have to uninstall the previous version (add/remove programs) if you had installed it before. Before unzipping the file...Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy.: Caliburn.Micro v1.5.2: v1.5.2 - This is a service release. We've fixed a number of issues with Tasks and IoC. We've made some consistency improvements across platforms and fixed a number of minor bugs. See changes.txt for details. Packages Available on Nuget Caliburn.Micro – The full framework compiled into an assembly. Caliburn.Micro.Start - Includes Caliburn.Micro plus a starting bootstrapper, view model and view. 
Caliburn.Micro.Container – The Caliburn.Micro inversion of control container (IoC); source code...SQL Compact Query Analyzer: 1.0.1.1511: Beta build of SQL Compact Query Analyzer Bug fixes: - Resolved issue where the application crashes when loading a database that contains tables without a primary key Features: - Displays database information (database version, filename, size, creation date) - Displays schema summary (number of tables, columns, primary keys, identity fields, nullable fields) - Displays the information schema views - Displays column information (database type, clr type, max length, allows null, etc) - Support...New Projectsalgorithm interview: Write some code for programming interview questionsAmbiTrack: AmbiTrack provides marker-free localization and tracking using an array of cameras, e.g. the PlayStation Eye.DSeX DragonSpeak eXtended Editor: Furcadia DragonSpeak EditorEasyProject: easy project is finel engineering project this sys made for easy way to manage emploees of a small business or organization .EDIVisualizer SDK: EDIVisualizer SDK is a software developpement kit (SDK) for creating plugin used by EDIVisualizer program.Flame: Interactive shell for scripting in IronRuby IronPython Powershell C#, with syntax highlighting, autocomplete,and for easily work within a .NET framework consoleFusion-io MS SQL Server scripts: Fusion-io's Sumeet Bansal shows how Microsoft SQL Server users can quickly determine whether their database will benefit from Fusion-io ioMemory products.hotelprimero: Proyecto de hotelImage filters: It's a simple image filter :)JSLint.NET: JSLint.NET is a wrapper for Douglas Crockford's JSLint, the JavaScript code quality tool. It can validate JavaScript anywhere .NET runs.KZ.Express.H: KZ.Express.Hmoleshop: moleshop???asp.net ???????B2C????,????????.net???????????。moleshop??asp.net mvc 4.0???,???.net 4.0??。 moleshop???????,??AL2.0 ????,????????????????,????????。PDF Generator Net Free Xml Parser: PDF Generator Net Free Xml Parser?XML?????????????????????。?????????????????????CSV?????????PDF???????????????????????。ScreenCatcher: ScreenCatcher is a program for creating screenshots.SharePoint Configuration Guidance for 21 CFR Part 11 Compliance – SP 2013: This CodePlex project provides the Paragon Solutions configurations and code to support 21 CFR Part 11 compliance with SharePoint 2013.Sky Editor: Save file editor for Pokémon Mystery Dungeon: Explorers of Sky with limited support for Explorers of Time and DarknessTRANCIS: For foundation and runnning online operations for Trancis.UGSF GRAND OUEST: UGSF GRAND OUESTWP.JS: WP.JS (Windows Phone (dot) Javascript) aims to be a compelling library for developing HTML5-based applications for Windows Phone 8.X-Aurora: X-Aurora ?? MVC3

    Read the article

  • Code golf: Word frequency chart

    - by ChristopheD
    The challenge: Build an ASCII chart of the most commonly used words in a given text. The rules: Only accept a-z and A-Z (alphabetic characters) as part of a word. Ignore casing (She == she for our purpose). Ignore the following words (quite arbitary, I know): the, and, of, to, a, i, it, in, or, is Clarification: considering don't: this would be taken as 2 different 'words' in the ranges a-z and A-Z: (don and t). Optionally (it's too late to be formally changing the specifications now) you may choose to drop all single-letter 'words' (this could potentially make for a shortening of the ignore list too). Parse a given text (read a file specified via command line arguments or piped in; presume us-ascii) and build us a word frequency chart with the following characteristics: Display the chart (also see the example below) for the 22 most common words (ordered by descending frequency). The bar width represents the number of occurences (frequency) of the word (proportionally). Append one space and print the word. Make sure these bars (plus space-word-space) always fit: bar + [space] + word + [space] should be always <= 80 characters (make sure you account for possible differing bar and word lenghts: e.g.: the second most common word could be a lot longer then the first while not differing so much in frequency). Maximize bar width within these constraints and scale the bars appropriately (according to the frequencies they represent). An example: The text for the example can be found here (Alice's Adventures in Wonderland, by Lewis Carroll). This specific text would yield the following chart: _________________________________________________________________________ |_________________________________________________________________________| she |_______________________________________________________________| you |____________________________________________________________| said |____________________________________________________| alice |______________________________________________| was |__________________________________________| that |___________________________________| as |_______________________________| her |____________________________| with |____________________________| at |___________________________| s |___________________________| t |_________________________| on |_________________________| all |______________________| this |______________________| for |______________________| had |_____________________| but |____________________| be |____________________| not |___________________| they |__________________| so For your information: these are the frequencies the above chart is built upon: [('she', 553), ('you', 481), ('said', 462), ('alice', 403), ('was', 358), ('that ', 330), ('as', 274), ('her', 248), ('with', 227), ('at', 227), ('s', 219), ('t' , 218), ('on', 204), ('all', 200), ('this', 181), ('for', 179), ('had', 178), (' but', 175), ('be', 167), ('not', 166), ('they', 155), ('so', 152)] A second example (to check if you implemented the complete spec): Replace every occurence of you in the linked Alice in Wonderland file with superlongstringstring: ________________________________________________________________ |________________________________________________________________| she |_______________________________________________________| superlongstringstring |_____________________________________________________| said |______________________________________________| alice |________________________________________| was |_____________________________________| that 
|______________________________| as |___________________________| her |_________________________| with |_________________________| at |________________________| s |________________________| t |______________________| on |_____________________| all |___________________| this |___________________| for |___________________| had |__________________| but |_________________| be |_________________| not |________________| they |________________| so The winner: Shortest solution (by character count, per language). Have fun! Edit: Table summarizing the results so far (2012-02-15) (originally added by user Nas Banov): Language Relaxed Strict ========= ======= ====== GolfScript 130 143 Perl 185 Windows PowerShell 148 199 Mathematica 199 Ruby 185 205 Unix Toolchain 194 228 Python 183 243 Clojure 282 Scala 311 Haskell 333 Awk 336 R 298 Javascript 304 354 Groovy 321 Matlab 404 C# 422 Smalltalk 386 PHP 450 F# 452 TSQL 483 507 The numbers represent the length of the shortest solution in a specific language. "Strict" refers to a solution that implements the spec completely (draws |____| bars, closes the first bar on top with a ____ line, accounts for the possibility of long words with high frequency etc). "Relaxed" means some liberties were taken to shorten to solution. Only solutions shorter then 500 characters are included. The list of languages is sorted by the length of the 'strict' solution. 'Unix Toolchain' is used to signify various solutions that use traditional *nix shell plus a mix of tools (like grep, tr, sort, uniq, head, perl, awk).
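    For readers who want a non-golfed baseline to compare against, below is a short Haskell sketch of the 'relaxed' version of this task (it is not from the original post and not a competitive entry): it reads text from stdin, keeps only alphabetic characters, drops the listed stop words, and prints plain '#' bars instead of the boxed |____| bars shown above. It assumes non-empty input and scales the widest bar to fit the 80-character budget.
        import Data.Char (isAlpha, toLower)
        import Data.List (group, sort, sortBy)
        import Data.Ord (Down (..), comparing)
        -- words the challenge says to ignore
        stopWords :: [String]
        stopWords = ["the", "and", "of", "to", "a", "i", "it", "in", "or", "is"]
        -- split on non-alphabetic characters, lowercase, drop stop words,
        -- then keep the 22 most frequent words with their counts
        frequencies :: String -> [(String, Int)]
        frequencies =
            take 22
          . sortBy (comparing (Down . snd))
          . map (\ws -> (head ws, length ws))
          . group
          . sort
          . filter (`notElem` stopWords)
          . words
          . map (\c -> if isAlpha c then toLower c else ' ')
        -- one '#' bar per word, scaled so that bar + space + word + space <= 80
        chart :: [(String, Int)] -> [String]
        chart freqs = [replicate (scale n) '#' ++ " " ++ w | (w, n) <- freqs]
          where
            maxFreq = maximum (map snd freqs)
            longest = maximum (map (length . fst) freqs)
            maxBar  = 80 - longest - 2
            scale n = max 1 (n * maxBar `div` maxFreq)
        main :: IO ()
        main = interact (unlines . chart . frequencies)
    Run against the Alice in Wonderland text, for example with runghc chart.hs < alice.txt (file names are made up), it should reproduce roughly the word ordering shown above; the exact bar widths depend on the scaling rule chosen here.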

    Read the article

  • Unknown error sourcing a script containing 'typeset -r' wrapped in command substitution

    - by Bernard Assaf
    I wish to source a script, print the value of a variable this script defines, and then have this value be assigned to a variable on the command line with command substitution wrapping the source/print commands. This works on ksh88 but not on ksh93 and I am wondering why. $ cat typeset_err.ksh #!/bin/ksh unset _typeset_var typeset -i -r _typeset_var=1 DIR=init # this is the variable I want to print When run on ksh88 (in this case, an AIX 6.1 box), the output is as follows: $ A=$(. ./typeset_err.ksh; print $DIR) $ echo $A init When run on ksh93 (in this case, a Linux machine), the output is as follows: $ A=$(. ./typeset_err.ksh; print $DIR) -ksh: _typeset_var: is read only $ print $A ($A is undefined) The above is just an example script. The actual thing I wish to accomplish is to source a script that sets values for many variables, so that I can print just one of its values, e.g. $DIR, and have $A equal that value. I do not know in advance the value of $DIR, but I need to copy files to $DIR during execution of a different batch script. Therefore the idea I had was to source the script in order to define its variables, print the one I wanted, then have that print's output be assigned to another variable via $(...) syntax. Admittedly a bit of a hack, but I don't want to source the entire sub-script in the batch script's environment because I only need one of its variables. The typeset -r code at the beginning is what triggers the error. The script I'm sourcing contains this in order to provide a semaphore of sorts--to prevent the script from being sourced more than once in the environment. (There is an if statement in the real script that checks for _typeset_var = 1, and exits if it is already set.) So I know I can take this out and get $DIR to print fine, but the constraints of the problem include keeping the typeset -i -r. In the example script I put an unset in first, to ensure _typeset_var isn't already defined. By the way, I do know that it is not possible to unset a typeset -r variable, according to ksh93's man page for ksh. There are ways to code around this error. The favorite now is to not use typeset, but just set the semaphore without typeset (e.g. _typeset_var=1), but the error with the code as-is remains a curiosity to me, and I want to see if anyone can explain why this is happening. By the way, another idea I abandoned was to grep the variable I need out of its containing script, then print that one variable for $A to be set to; however, the variable ($DIR in the example above) might be set to another variable's value (e.g. DIR=$dom/init), and that other variable might be defined earlier in the script; therefore, I need to source the entire script to make sure all variables are defined so that $DIR is correctly defined when sourcing.

    Read the article

  • Use QuickCheck by generating primes

    - by Dan
    Background For fun, I'm trying to write a property for quick-check that can test the basic idea behind cryptography with RSA. Choose two distinct primes, p and q. Let N = p*q e is some number relatively prime to (p-1)(q-1) (in practice, e is usually 3 for fast encoding) d is the modular inverse of e modulo (p-1)(q-1) For all x such that 1 < x < N, it is always true that (x^e)^d = x modulo N In other words, x is the "message", raising it to the eth power mod N is the act of "encoding" the message, and raising the encoded message to the dth power mod N is the act of "decoding" it. (The property is also trivially true for x = 1, a case which is its own encryption) Code Here are the methods I have coded up so far: import Test.QuickCheck -- modular exponentiation modExp :: Integral a => a -> a -> a -> a modExp y z n = modExp' (y `mod` n) z `mod` n where modExp' y z | z == 0 = 1 | even z = modExp (y*y) (z `div` 2) n | odd z = (modExp (y*y) (z `div` 2) n) * y -- relatively prime rPrime :: Integral a => a -> a -> Bool rPrime a b = gcd a b == 1 -- multiplicative inverse (modular) mInverse :: Integral a => a -> a -> a mInverse 1 _ = 1 mInverse x y = (n * y + 1) `div` x where n = x - mInverse (y `mod` x) x -- just a quick way to test for primality n `divides` x = x `mod` n == 0 primes = 2:filter isPrime [3..] isPrime x = null . filter (`divides` x) $ takeWhile (\y -> y*y <= x) primes -- the property prop_rsa (p,q,x) = isPrime p && isPrime q && p /= q && x > 1 && x < n && rPrime e t ==> x == (x `powModN` e) `powModN` d where e = 3 n = p*q t = (p-1)*(q-1) d = mInverse e t a `powModN` b = modExp a b n (Thanks, google and random blog, for the implementation of modular multiplicative inverse) Question The problem should be obvious: there are way too many conditions on the property to make it at all usable. Trying to invoke quickCheck prop_rsa in ghci made my terminal hang. So I've poked around the QuickCheck manual a bit, and it says: Properties may take the form forAll <generator> $ \<pattern> -> <property> How do I make a <generator> for prime numbers? Or with the other constraints, so that quickCheck doesn't have to sift through a bunch of failed conditions? Any other general advice (especially regarding QuickCheck) is welcome.
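    One way to approach the generator question, sketched below rather than taken from the post: instead of generating arbitrary tuples and discarding most of them with (==>), build a Gen that only ever produces valid inputs. Here p and q are drawn from a pre-computed list of small primes with elements, the message x is drawn with choose, and e is picked as the smallest odd value coprime to (p-1)(q-1) so that no generated case is thrown away. The helper names (smallPrimes, powMod, modInv, genPrimePair, prop_rsaGen) are invented for this sketch.
        import Test.QuickCheck
        -- the first 100 primes, found by simple trial division
        smallPrimes :: [Integer]
        smallPrimes = take 100 [n | n <- [2 ..], isPrime n]
          where isPrime n = all (\d -> n `mod` d /= 0) (takeWhile (\d -> d * d <= n) [2 ..])
        -- modular exponentiation by repeated squaring
        powMod :: Integer -> Integer -> Integer -> Integer
        powMod _ 0 m = 1 `mod` m
        powMod b e m
          | even e    = (h * h) `mod` m
          | otherwise = (b `mod` m) * (h * h) `mod` m
          where h = powMod b (e `div` 2) m
        -- modular inverse via the extended Euclidean algorithm (assumes gcd a m == 1)
        modInv :: Integer -> Integer -> Integer
        modInv a m = ((fst (go a m) `mod` m) + m) `mod` m
          where
            go _  0  = (1, 0)
            go r0 r1 = (t, s - (r0 `div` r1) * t)
              where (s, t) = go r1 (r0 `mod` r1)
        -- a generator that only produces pairs of distinct primes
        genPrimePair :: Gen (Integer, Integer)
        genPrimePair = do
          p <- elements smallPrimes
          q <- elements (filter (/= p) smallPrimes)
          return (p, q)
        -- the RSA round trip stated with forAll instead of (==>)
        prop_rsaGen :: Property
        prop_rsaGen =
          forAll genPrimePair $ \(p, q) ->
            let n = p * q
                t = (p - 1) * (q - 1)
                e = head [c | c <- [3, 5 ..], gcd c t == 1]
                d = modInv e t
            in forAll (choose (2, n - 1)) $ \x ->
                 powMod (powMod x e n) d n == x
    With this shape, quickCheck prop_rsaGen runs its cases without any discards, which is the practical difference from the (==>) version; the same trick works for any precondition (distinctness, coprimality) by folding it into the generator rather than the property.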

    Read the article

  • "Attach or Add an entity that is not new...loaded from another DataContext. This is not supported."

    - by sah302
    Similar error as other questions, but not quite the same, I am not trying to attach anything. What I am trying to do is insert a new row into a linking table, specifically UserAccomplishment. Relations are set in LINQ to User and Accomplishment Tables. I have a generic insert function: Public Function insertRow(ByVal entity As ImplementationType) As Boolean If entity IsNot Nothing Then Dim lcfdatacontext As New LCFDataContext() Try lcfdatacontext.GetTable(Of ImplementationType)().InsertOnSubmit(entity) lcfdatacontext.SubmitChanges() lcfdatacontext.Dispose() Return True Catch ex As Exception Return False End Try Else Return False End If End Function If you try and give UserAccomplishment the two appropriate objects this will naturally crap out if either the User or Accomplishment already exist. It only works when both user and accomplishment don't exist. I expected this behavior. What does work is simply giving the userAccomplishment object a user.id and accomplishment.id and populating the rest of the fields. This works but is kind of awkward to use in my app, it would be much easier to simply pass in both objects and have it work out what already exists and what doesn't. Okay so I made the following (please ignore the fact that this is horribly inefficient because I know it is): Public Class UserAccomplishmentDao Inherits EntityDao(Of UserAccomplishment) Public Function insertLinkerObjectRow(ByVal userAccomplishment As UserAccomplishment) Dim insertSuccess As Boolean = False If Not userAccomplishment Is Nothing Then Dim userDao As New UserDao() Dim accomplishmentDao As New AccomplishmentDao() Dim user As New User() Dim accomplishment As New Accomplishment() 'see if either object already exists in db' user = userDao.getOneByValueOfProperty("Id", userAccomplishment.User.Id) accomplishment = accomplishmentDao.getOneByValueOfProperty("Id", userAccomplishment.Accomplishment.Id) If user Is Nothing And accomplishment Is Nothing Then 'neither the user or the accomplishment exist, both are new so insert them both, typical insert' insertSuccess = Me.insertRow(userAccomplishment) ElseIf user Is Nothing And Not accomplishment Is Nothing Then 'user is new, accomplishment is not new, so just insert the user, and the relation in userAccomplishment' Dim userWithExistingAccomplishment As New UserAccomplishment(userAccomplishment.User, userAccomplishment.Accomplishment.Id, userAccomplishment.LastUpdatedBy) insertSuccess = Me.insertRow(userWithExistingAccomplishment) ElseIf Not user Is Nothing And accomplishment Is Nothing Then 'user is not new, accomplishment is new, so just insert the accomplishment, and the relation in userAccomplishment' Dim existingUserWithAccomplishment As New UserAccomplishment(userAccomplishment.UserId, userAccomplishment.Accomplishment, userAccomplishment.LastUpdatedBy) insertSuccess = Me.insertRow(existingUserWithAccomplishment) Else 'both are not new, just add the relation' Dim userAccomplishmentBothExist As New UserAccomplishment(userAccomplishment.User.Id, userAccomplishment.Accomplishment.Id, userAccomplishment.LastUpdatedBy) insertSuccess = Me.insertRow(userAccomplishmentBothExist) End If End If Return insertSuccess End Function End Class Alright, here I basically check if the supplied user and accomplishment already exists in the db, and if so call an appropriate constructor that will leave whatever already exists empty, but supply the rest of the information so the insert can succeed. 
However, upon trying an insert: Dim result As Boolean = Me.userAccomplishmentDao.insertLinkerObjectRow(userAccomplishment) in which the user already exists, but the accomplishment does not (the 99% typical scenario), I get the error: "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported." I have debugged this multiple times now and am not sure why this is occurring; if either User or Accomplishment already exists, I am not including it in the final object to insert. So no existing entity appears to be getting added. Even in debug, upon insert, the object was set to empty. So the accomplishment is new and the user is empty. 1) Why is it still saying that, and how can I fix it using my current structure? 2) Pre-emptive note on 'use the repository pattern' answers - I know this way kind of sucks in general and I should be using the repository pattern. However, I can't use that in the current project because I don't have time to refactor, due to my nonexistent knowledge of it and time constraints. The usage of the app is going to be so small that the inefficient use of DataContexts and what have you won't matter so much. I can refactor it once it's up and running, but for now I just need to 'push through' with my current structure. Edit: I also just tested this when both already exist, inserting only each object's IDs into the table, and that works. So I guess I could manually insert whichever object doesn't exist as a single insert, then put the ids only into the linking table, but I still don't know why, when one object exists and I make it empty, it doesn't work.

    Read the article

  • Introducing Oracle Multitenant

    - by OracleMultitenant
    The First Database Designed for the Cloud Today Oracle announced the general availability (GA) of Oracle Database 12c, the first database designed for the Cloud. Oracle Multitenant, new with Oracle Database 12c, is a key component of this – a new architecture for consolidating databases and simplifying operations in the Cloud. With this, the inaugural post in the Multitenant blog, my goal is to start the conversation about Oracle Multitenant. We are very proud of this new architecture, which we view as a major advance for Oracle. Customers, partners and analysts who have had previews are very excited about its capabilities and its flexibility. This high level review of Oracle Multitenant will touch on our design considerations and how we re-architected our database for the cloud. I’ll briefly describe our new multitenant architecture and explain its key benefits. Finally I’ll mention some of the major use cases we see for Oracle Multitenant. Industry Trends We always start by talking to our customers about the pressures and challenges they’re facing and what trends they’re seeing in the industry. Some things don’t change. They face the same pressures and the same requirements as ever: Pressure to do more with less; be faster, leaner, cheaper, and deliver services 24/7. Big companies have achieved scale. Now they want to realize economies of scale. As ever, DBAs are faced with the challenges of patching and upgrading large numbers of databases, and provisioning new ones. Requirements are familiar: Performance, scalability, reliability and high availability are non-negotiable. They need ever more security in this threatening climate. There’s no time to stop and retool with new applications. What’s new are the trends. These are the techniques to use to respond to these pressures within the constraints of the requirements. With the advent of cloud computing and availability of massively powerful servers – even engineered systems such as Exadata – our customers want to consolidate many applications into fewer larger servers. There’s a move to standardized services – even self-service. Consolidation Consolidation is not new; companies have tried various different approaches to consolidation of databases in the cloud. One approach is to partition a powerful server between several virtual machines, one per application. A downside of this is that you have the resource and management overheads of OS and RDBMS per VM – that is, per application. Another is that you have replaced physical sprawl with virtual sprawl and virtual sprawl is still expensive to manage. In the dedicated database model, we have a single physical server supporting multiple databases, one per application. So there’s a shared OS overhead, but RDBMS process and memory overhead are replicated per application. Let's think about our traditional Oracle Database architecture. Every time we create a database, be it a production database, a development or a test database, what do we do?
We create a set of files, we allocate a bunch of memory for managing the data, and we kick off a series of background processes. This is replicated for every one of the databases that we create. As more and more databases are fired up, these replicated overheads quickly consume the available server resources and this limits the number of applications we can run on any given server. In Oracle Database 11g and earlier the highest degree of consolidation could be achieved by what we call schema consolidation. In this model we have one big server with one big database. Individual applications are installed in separate schemas or table-owners. Database overheads are shared between all applications, which affords maximum consolidation. The shortcomings are that application changes are often required. There is no tenant isolation. One bad apple can spoil the whole batch. New Architecture & Benefits In Oracle Database 12c, we have a new multitenant architecture, featuring pluggable databases. This delivers all the resource utilization advantages of schema consolidation with none of the downsides. There are two parts to the term “pluggable database”: "pluggable", which is new, and "database", which is familiar.  Before we get to the exciting new stuff let’s discuss what hasn’t changed. A pluggable database is a fully functional Oracle database. It’s not watered down in any way. From the perspective of an application or an end user it hasn’t changed at all. This is very important because it means that no application changes are required to adopt this new architecture. There are many thousands of applications built on Oracle databases and they are all ready to run on Oracle Multitenant. So we have these self-contained pluggable databases (PDBs), and as their name suggests, they are plugged into a multitenant container database (CDB). The CDB behaves as a single database from the operations point of view. Very much as we had with the schema consolidation model, we only have a single set of Oracle background processes and a single, shared database memory requirement. This gives us very high consolidation density, which affords maximum reduction in capital expenses (CapEx). By performing management operations at the CDB level – “managing many as one” – we can achieve great reductions in operating expenses (OpEx) as well, but we retain granular control where appropriate. Furthermore, the “pluggability” capability gives us portability and this adds a tremendous amount of agility. We can simply unplug a PDB from one CDB and plug it into another CDB, for example to move it from one SLA tier to another. I'll explore all these new capabilities in much more detail in a future posting.  Use Cases We can identify a number of use cases for Oracle Multitenant. Here are a few of the major ones. 
    - Development / Testing: where individual engineers need rapid provisioning and recycling of private copies of a few "master test databases"
    - Consolidation: of disparate applications using fewer, more powerful servers
    - Software as a Service: deploying separate copies of identical applications to individual tenants
    - Database as a Service: typically self-service provisioning of databases on the private cloud
    - Application Distribution from ISV / Installation by Customer: eliminating many typical installation steps (create schema, import seed data, import application code PL/SQL…) - just plug in a PDB!
    - High volume data distribution: literally via disk drives in envelopes distributed by truck! - distribution of things like GIS or MDM master databases
    - …various others!
    Benefits Previous approaches to consolidation have involved a trade-off between reductions in Capital Expenses (CapEx) and Operating Expenses (OpEx), and they’ve usually come at the expense of agility. With Oracle Multitenant you can have your cake and eat it:
    - Minimize CapEx: More Applications per server
    - Minimize OpEx: Manage many as one; Standardized procedures and services; Rapid provisioning
    - Maximize Agility: Cloning for development and testing; Portability through pluggability; Scalability with RAC
    - Ease of Adoption: Applications run unchanged. It’s a pure deployment choice. Neither the database backend nor the application needs to be changed.
    In future postings I’ll explore various aspects in more detail. However, if you feel compelled to devour everything you can about Oracle Multitenant this very minute, have no fear. Visit the Multitenant page on OTN and explore the various resources we have available there. Among these, Oracle Distinguished Product Manager Bryn Llewellyn has written an excellent, thorough, and exhaustively detailed White Paper about Oracle Multitenant, which is available here. Follow me: I tweet @OraclePDB #OracleMultitenant

    Read the article

  • CodePlex Daily Summary for Wednesday, June 26, 2013

    CodePlex Daily Summary for Wednesday, June 26, 2013Popular ReleasesNaked Objects: Naked Objects Release 5.5.0: This release includes a number of significant improvements to the usability of the UI, some of which involve new programming conventions or attributes: Action dialogs now appear as pop-up modal dialogs instead of as a new page; query-only actions have an Apply as well as an OK button. See https://nakedobjects.codeplex.com/workitem/175 When a reference object is expanded in-line there is a button to jump straight to an Edit view of that object see https://nakedobjects.codeplex.com/workitem/1...VeraCrypt: VeraCrypt version 1.0b: Changes since version 1.0a :Enhance RIPEMD160 implementation in BootLoaded by using the compiler uint32 type Don't position legacy flag in volume header for newer VeraCrypt releasesPlayer Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for WAMS. Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility. (source code only) Source code is now available via CodePlex Git Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml JS only: protecting currentTime from returning infinity. Some videos would cause currentTime to be infinity which could cause errors in plugins expecting only finite values. (...SSIS DQS Matching Transformation: SSIS DQS Matching Transformation 1.0: Initial release of the SSIS DQS Matching Component.AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compi...Compare .NET Objects: Version 1.7.2.0: If you like it, please rate it. :) Performance Improvements Fix for deleted row in a data table Added ability to ignore the collection order Fix for Ignoring by AttributesMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.95: update parser to allow for CSS3 calc( function to nest. add recognition of -pponly (Preprocess-Only) switch in AjaxMinManifestTask build task. Fix crashing bug in EXE when processing a manifest file using the -xml switch and an error message needs to be displayed (like a missing input file). Create separate Clean and Bundle build tasks for working with manifest files (AjaxMinManifestCleanTask and AjaxMinBundleTask). 
Removed the IsCleanOperation from AjaxMinManifestTask -- use AjaxMinMan...VG-Ripper & PG-Ripper: VG-Ripper 2.9.44: changes NEW: Added Support for "ImgChili.net" links FIXED: Auto UpdaterDocument.Editor: 2013.25: What's new for Document.Editor 2013.25: Improved Spell Check support Improved User Interface Minor Bug Fix's, improvements and speed upsStyleMVVM: 3.0.2: This is a minor feature and bug fix release Features: ExportWhenDebuggerIsAttacedAttribute - new attribute that marks an attribute to only be exported when the debugger is attahced InjectedFilterAttributeFilterProvider - new Attribute Filter provider for MVC that injects the attributes Performance Improvements - minor speed improvements all over, and Import collections is now 50% faster Bug Fixes: Open Generic Constraints are now respected when finding exports Fix for fluent registrat...WPF Composites: Version 4.3.0: In this Beta release, I broke my code out into two separate projects. There is a core FasterWPF.dll with the minimal required functionality. This can run with only the Aero.dll and the Rx .dll's. Then, I have a FasterWPFExtras .dll that requires and supports the Extended WPF Toolkit™ Community Edition V 1.9.0 (including Xceed DataGrid) and the Thriple .dll. This is for developers who want more . . . Finally, you may notice the other OPTIONAL .dll's available in the download such as the Dyn...Channel9's Absolute Beginner Series: Windows Phone 8: Entire source code for the Channel 9 series, Windows Phone 8 Development for Absolute Beginners.Indent Guides for Visual Studio: Indent Guides v13: ImportantThis release does not support Visual Studio 2010. The latest stable release for VS 2010 is v12.1. Version History Changed in v13 Added page width guide lines Added guide highlighting options Fixed guides appearing over collapsed blocks Fixed guides not appearing in newly opened files Fixed some potential crashes Fixed lines going through pragma statements Various updates for VS 2012 and VS 2013 Removed VS 2010 support Changed in v12.1: Fixed crash when unable to start...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.1.0 - Prerelease d: Fluent Ribbon Control Suite 2.1.0 - Prerelease d(supports .NET 3.5, 4.0 and 4.5) Includes: Fluent.dll (with .pdb and .xml) Showcase Application Samples (not for .NET 3.5) Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage) Resizing (ribbon reducing & enlarging principles) Galleries (Gallery in ContextMenu, InRibbonGallery) MVVM (shows how to use this library with Model-View-ViewModel pattern) KeyTips ScreenTips Toolbars ColorGallery *Walkthrough (do...Magick.NET: Magick.NET 6.8.5.1001: Magick.NET compiled against ImageMagick 6.8.5.10. Breaking changes: - MagickNET.Initialize has been made obsolete because the ImageMagick files in the directory are no longer necessary. - MagickGeometry is no longer IDisposable. - Renamed dll's so they include the platform name. - Image profiles can now only be accessed and modified with ImageProfile classes. - Renamed DrawableBase to Drawable. - Removed Args part of PathArc/PathCurvetoArgs/PathQuadraticCurvetoArgs classes. The...Keyboard Image Viewer: 1.5.4: Upgraded folder picker dialog to better version on Win7+ Fixed bug that stopped slideshow from looping back to the start of the list. 
Added crash dialog that allows you to see and copy exception stack traces for fixing.DependencyAnalysis (Egg and Gherkin): 0.9.4: - Create Visual Studio 2012 Addin for ad-hoc analysis of your project - Display metrics in a grid - Adequate performing serialization between Addin (Visual Studio process) and AnalysisHost process - Display dependencies as graph (proximity graph) - Create a logo for the project - Constructors of anonymous types no longer hide constructors of the declaring type during "build dependencies" phase - Type descriptors were added multiple times to SubmoduleDescriptor. Types collection, same instanc...Bloomberg API Emulator: Bloomberg API Emulator (v 1.0.5): This version contains the full Java port of my original C# code. I just finished the MarketDataSubscription request type. I will start working on a C++ port of my C# code.Three-Dimensional Maneuver Gear for Minecraft: TDMG 1.1.0.0 for 1.5.2: CodePlex???(????????) ?????????(???1/4) ??????????? ?????????? ???????????(??????????) ??????????????????????? ↑????、?????????????????????(???????) ???、??????????、?????????????????????、????????1.5?????????? Shift+W(????)??????????????????10°、?10°(?????????)???Hyper-V Management Pack Extensions 2012: HyperVMPE2012 (v1.0.1.126): Hyper-V Management Pack Extensions 2012 Beta ReleaseNew Projects.Net Encryption App: This is a C#.Net desktop application that will let users encrypt and de-crypt files with the algorithm of their choosing.AutoSPDocumenter: AutoSPDocumenter utilises PowerShell to document SharePoint farms and provide output in usable formats.Azure Storage Redirector: Azure Storage Redirector. Redirects requests to Global Azure Storage to China Azure Storage. ?????Azure Storage?request??????Azure storagebrownbag: Simple project to show branching, merging, and shelvingChannel9's Absolute Beginner Series: Channel 9's absolute beginner series source code. From Windows Phone 7, Windows Phone 8, Windows Store applications, one stop area for all the seriesFAST for Sharepoint 2010 Query Tool (.NET 3.5): .NET 3.5 version of the FAST Search for SharePoint MOSS 2010 Query Tool (https://fastforsharepoint.codeplex.com/). For environments without .NET 4.0HAOest Framework: HAOest????ListManager: ListManager????? by HADB of HAOestMyPS: mypsNNRel: NNRelO - BV - 2: TestOpen XML SDK for JavaScript: Small JavaScript library that enables you to implement Open XML functionality anywhere you can use JavaScript.Orchard Prefix free: Provides a script manifest for the Prefix free script libraries.PVDesktop: PVDesktop is an application for designing and analyzing specific solar energy sites.Red the sound TowePlay: School project at ISEN Lille. Creation of a collaborative music creation software in C# using.NETScience Kits for Kids: This project is the service for Childhood Education which aged 4 - 8.SharePoint ULS for PowerShell: Allows PowerShell to log to the SharePoint ULS.SSIS DQS Matching Transformation: The SSIS DQS Matching Transformation uses Data Quality Services (DQS) to find duplicate data within the SSIS data flow.TARVOS Computer Networks Simulator: Discrete event-based network simulator, supports simulating MPLS architecture, several RSVP-TE protocol functionalities and fast recovery.Web API Explorer 4 DNN: Web API Explorer for DNN(R) aids module development allowing you to examine the Routing Table entries for a DotNetNuke(R) portal.

    Read the article

  • Taming Hopping Windows

    - by Roman Schindlauer
    At first glance, hopping windows seem fairly innocuous and obvious. They organize events into windows with a simple periodic definition: the windows have some duration d (e.g. a window covers 5 second time intervals), an interval or period p (e.g. a new window starts every 2 seconds) and an alignment a (e.g. one of those windows starts at 12:00 PM on March 15, 2012 UTC). var wins = xs     .HoppingWindow(TimeSpan.FromSeconds(5),                    TimeSpan.FromSeconds(2),                    new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc)); Logically, there is a window with start time a + np and end time a + np + d for every integer n. That’s a lot of windows. So why doesn’t the following query (always) blow up? var query = wins.Select(win => win.Count()); A few users have asked why StreamInsight doesn’t produce output for empty windows. Primarily it’s because there is an infinite number of empty windows! (Actually, StreamInsight uses DateTimeOffset.MaxValue to approximate “the end of time” and DateTimeOffset.MinValue to approximate “the beginning of time”, so the number of windows is lower in practice.) That was the good news. Now the bad news. Events also have duration. Consider the following simple input: var xs = this.Application                 .DefineEnumerable(() => new[]                     { EdgeEvent.CreateStart(DateTimeOffset.UtcNow, 0) })                 .ToStreamable(AdvanceTimeSettings.IncreasingStartTime); Because the event has no explicit end edge, it lasts until the end of time. So there are lots of non-empty windows if we apply a hopping window to that single event! For this reason, we need to be careful with hopping window queries in StreamInsight. Or we can switch to a custom implementation of hopping windows that doesn’t suffer from this shortcoming. The alternate window implementation produces output only when the input changes. We start by breaking up the timeline into non-overlapping intervals assigned to each window. In figure 1, six hopping windows (“Windows”) are assigned to six intervals (“Assignments”) in the timeline. Next we take input events (“Events”) and alter their lifetimes (“Altered Events”) so that they cover the intervals of the windows they intersect. In figure 1, you can see that the first event e1 intersects windows w1 and w2 so it is adjusted to cover assignments a1 and a2. Finally, we can use snapshot windows (“Snapshots”) to produce output for the hopping windows. Notice however that instead of having six windows generating output, we have only four. The first and second snapshots correspond to the first and second hopping windows. The remaining snapshots however cover two hopping windows each! While in this example we saved only two events, the savings can be more significant when the ratio of event duration to window duration is higher. Figure 1: Timeline The implementation of this strategy is straightforward. We need to set the start times of events to the start time of the interval assigned to the earliest window including the start time. Similarly, we need to modify the end times of events to the end time of the interval assigned to the latest window including the end time. The following snap-to-boundary function that rounds a timestamp value t down to the nearest value t' <= t such that t' is a + np for some integer n will be useful. For convenience, we will represent both DateTime and TimeSpan values using long ticks: static long SnapToBoundary(long t, long a, long p) {     return t - ((t - a) % p) - (t > a ? 
0L : p); } How do we find the earliest window including the start time for an event? It’s the window following the last window that does not include the start time, assuming that there are no gaps in the windows (i.e. duration >= interval), a constraint and limitation of this solution. To find the end time of that antecedent window, we need to know the alignment of window ends: long e = a + (d % p); Using the window end alignment, we are finally ready to describe the start time selector: static long AdjustStartTime(long t, long e, long p) {     return SnapToBoundary(t, e, p) + p; } To find the latest window including the end time for an event, we look for the last window start time (non-inclusive): public static long AdjustEndTime(long t, long a, long d, long p) {     return SnapToBoundary(t - 1, a, p) + p + d; } Bringing it together, we can define the translation from events to ‘altered events’ as in Figure 1: public static IQStreamable<T> SnapToWindowIntervals<T>(IQStreamable<T> source, TimeSpan duration, TimeSpan interval, DateTime alignment) {     if (source == null) throw new ArgumentNullException("source");     // reason about DateTime and TimeSpan in ticks     long d = Math.Min(DateTime.MaxValue.Ticks, duration.Ticks);     long p = Math.Min(DateTime.MaxValue.Ticks, Math.Abs(interval.Ticks));     // set alignment to earliest possible window     var a = alignment.ToUniversalTime().Ticks % p;     // verify constraints of this solution     if (d <= 0L) { throw new ArgumentOutOfRangeException("duration"); }     if (p == 0L || p > d) { throw new ArgumentOutOfRangeException("interval"); }     // find the alignment of window ends     long e = a + (d % p);     return source.AlterEventLifetime(         evt => ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)),         evt => ToDateTime(AdjustEndTime(evt.EndTime.ToUniversalTime().Ticks, a, d, p)) -             ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p))); } public static DateTime ToDateTime(long ticks) {     // just snap to min or max value rather than under/overflowing     return ticks < DateTime.MinValue.Ticks         ? new DateTime(DateTime.MinValue.Ticks, DateTimeKind.Utc)         : ticks > DateTime.MaxValue.Ticks         ? new DateTime(DateTime.MaxValue.Ticks, DateTimeKind.Utc)         : new DateTime(ticks, DateTimeKind.Utc); } Finally, we can describe our custom hopping window operator: public static IQWindowedStreamable<T> HoppingWindow2<T>(     IQStreamable<T> source,     TimeSpan duration,     TimeSpan interval,     DateTime alignment) {     if (source == null) { throw new ArgumentNullException("source"); }     return SnapToWindowIntervals(source, duration, interval, alignment).SnapshotWindow(); } By switching from HoppingWindow to HoppingWindow2 in the following example, the query returns quickly rather than gobbling resources and ultimately failing!
public void Main() {     var start = new DateTimeOffset(new DateTime(2012, 6, 28), TimeSpan.Zero);     var duration = TimeSpan.FromSeconds(5);     var interval = TimeSpan.FromSeconds(2);     var alignment = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc);     var events = this.Application.DefineEnumerable(() => new[]     {         EdgeEvent.CreateStart(start.AddSeconds(0), "e0"),         EdgeEvent.CreateStart(start.AddSeconds(1), "e1"),         EdgeEvent.CreateEnd(start.AddSeconds(1), start.AddSeconds(2), "e1"),         EdgeEvent.CreateStart(start.AddSeconds(3), "e2"),         EdgeEvent.CreateStart(start.AddSeconds(9), "e3"),         EdgeEvent.CreateEnd(start.AddSeconds(3), start.AddSeconds(10), "e2"),         EdgeEvent.CreateEnd(start.AddSeconds(9), start.AddSeconds(10), "e3"),     }).ToStreamable(AdvanceTimeSettings.IncreasingStartTime);     var adjustedEvents = SnapToWindowIntervals(events, duration, interval, alignment);     var query = from win in HoppingWindow2(events, duration, interval, alignment)                 select win.Count();     DisplayResults(adjustedEvents, "Adjusted Events");     DisplayResults(query, "Query"); } As you can see, instead of producing a massive number of windows for the open start edge e0, a single window is emitted from 12:00:15 AM until the end of time:

Adjusted Events
StartTime                 EndTime                   Payload
6/28/2012 12:00:01 AM     12/31/9999 11:59:59 PM    e0
6/28/2012 12:00:03 AM     6/28/2012 12:00:07 AM     e1
6/28/2012 12:00:05 AM     6/28/2012 12:00:15 AM     e2
6/28/2012 12:00:11 AM     6/28/2012 12:00:15 AM     e3

Query
StartTime                 EndTime                   Payload
6/28/2012 12:00:01 AM     6/28/2012 12:00:03 AM     1
6/28/2012 12:00:03 AM     6/28/2012 12:00:05 AM     2
6/28/2012 12:00:05 AM     6/28/2012 12:00:07 AM     3
6/28/2012 12:00:07 AM     6/28/2012 12:00:11 AM     2
6/28/2012 12:00:11 AM     6/28/2012 12:00:15 AM     3
6/28/2012 12:00:15 AM     12/31/9999 11:59:59 PM    1

Regards, The StreamInsight Team
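A quick way to convince yourself of the adjustment arithmetic in the post above is to re-run the helper functions against the example events on their own. The following console sketch is not part of the original post; it reproduces SnapToBoundary, AdjustStartTime and AdjustEndTime with the same duration, interval and alignment, and prints the adjusted lifetimes of e1, e2 and e3, which should match the Adjusted Events table:

using System;

// Stand-alone sanity check for the snap-to-boundary arithmetic described above.
static class SnapCheck
{
    // Reproduced from the post: round t down to the nearest boundary t' <= t with t' = a + np.
    static long SnapToBoundary(long t, long a, long p)
    {
        return t - ((t - a) % p) - (t > a ? 0L : p);
    }

    static long AdjustStartTime(long t, long e, long p)
    {
        return SnapToBoundary(t, e, p) + p;
    }

    static long AdjustEndTime(long t, long a, long d, long p)
    {
        return SnapToBoundary(t - 1, a, p) + p + d;
    }

    static void Main()
    {
        var start = new DateTime(2012, 6, 28, 0, 0, 0, DateTimeKind.Utc);
        long d = TimeSpan.FromSeconds(5).Ticks;   // window duration
        long p = TimeSpan.FromSeconds(2).Ticks;   // hop interval
        long a = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc).Ticks % p;
        long e = a + (d % p);                     // alignment of window ends

        // Raw lifetimes of e1, e2 and e3 as second offsets from 'start'.
        var events = new[] { Tuple.Create("e1", 1, 2), Tuple.Create("e2", 3, 10), Tuple.Create("e3", 9, 10) };
        foreach (var ev in events)
        {
            long s = AdjustStartTime(start.AddSeconds(ev.Item2).Ticks, e, p);
            long t = AdjustEndTime(start.AddSeconds(ev.Item3).Ticks, a, d, p);
            // Expected (from the table above): e1 -> 00:00:03..00:00:07, e2 -> 00:00:05..00:00:15, e3 -> 00:00:11..00:00:15
            Console.WriteLine("{0}: {1:HH:mm:ss} - {2:HH:mm:ss}", ev.Item1, new DateTime(s), new DateTime(t));
        }
    }
}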

    Read the article

  • How to design a high-level application protocol for metadata syncing between devices and server?

    - by Jaanus
I am looking for guidance on how to best think about designing a high-level application protocol to sync metadata between end-user devices and a server. My goal: the user can interact with the application data on any device, or on the web. The purpose of this protocol is to communicate changes made on one endpoint to other endpoints through the server, and ensure all devices maintain a consistent picture of the application data. If the user makes changes on one device or on the web, the protocol will push data to the central repository, from where other devices can pull it.

Some other design thoughts: I call it "metadata syncing" because the payloads will be quite small, in the form of object IDs and small metadata about those IDs. When client endpoints retrieve new metadata over this protocol, they will fetch actual object data from an external source based on this metadata. Fetching the "real" object data is out of scope, I'm only talking about metadata syncing here. Using HTTP for transport and JSON for payload container. The question is basically about how to best design the JSON payload schema. I want this to be easy to implement and maintain on the web and across desktop and mobile devices. The best approach feels to be simple timer- or event-based HTTP request/response without any persistent channels. Also, you should not have a PhD to read it, and I want my spec to fit on 2 pages, not 200. Authentication and security are out of scope for this question: assume that the requests are secure and authenticated. The goal is eventual consistency of data on devices, it is not entirely realtime. For example, the user can make changes on one device while being offline. When going online again, the user would perform a "sync" operation to push local changes and retrieve remote changes. Having said that, the protocol should support both of these modes of operation:

- Starting from scratch on a device, it should be able to pull the whole metadata picture.
- "Sync as you go": when looking at the data on two devices side by side and making changes, it should be easy to push those changes as short individual messages which the other device can receive near-realtime (subject to when it decides to contact the server for sync).

As a concrete example, you can think of Dropbox (it is not what I'm working on, but it helps to understand the model): on a range of devices, the user can manage files and folders—move them around, create new ones, remove old ones etc. And in my context the "metadata" would be the file and folder structure, but not the actual file contents. And metadata fields would be something like file/folder name and time of modification (all devices should see the same time of modification). Another example is IMAP. I have not read the protocol, but my goals (minus actual message bodies) are the same. Feels like there are two grand approaches to how this is done:

- Transactional messages: each change in the system is expressed as a delta and endpoints communicate with those deltas. Example: DVCS changesets.
- REST: communicating the object graph as a whole or in part, without worrying so much about the individual atomic changes.

What I would like in the answers:

- Is there anything important I left out above? Constraints, goals?
- What is some good background reading on this? (I realize this is what many computer science courses talk about at great length and detail... I am hoping to short-circuit it by looking at some crash course or nuggets.)
- What are some good examples of such protocols that I could model after, or even use out of box? (I mention Dropbox and IMAP above... I should probably read the IMAP RFC.)
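To make the "transactional messages" option more tangible, here is a minimal sketch of what such a delta envelope could look like, written as C# DTOs purely for illustration; the type names, fields and sequence-number scheme are assumptions, not something taken from the question:

using System.Collections.Generic;

// Hypothetical shape of a metadata-sync exchange (all names and fields are made up).
public class MetadataChange
{
    public string ObjectId { get; set; }                    // ID of the object this metadata describes
    public string Operation { get; set; }                   // "create", "update" or "delete"
    public Dictionary<string, string> Fields { get; set; }  // e.g. name, modifiedAt
    public long ServerSequence { get; set; }                // server-assigned ordering, used as a pull cursor
}

public class SyncRequest
{
    public long LastSeenSequence { get; set; }              // high-water mark from the previous pull
    public List<MetadataChange> LocalChanges { get; set; }  // deltas made locally, possibly while offline
}

public class SyncResponse
{
    public long NewSequence { get; set; }                   // new high-water mark to store for the next request
    public List<MetadataChange> RemoteChanges { get; set; } // everything the server saw after LastSeenSequence
}

Under this kind of scheme, a device starting from scratch would simply send LastSeenSequence = 0 and page through the response, which covers the "pull the whole picture" mode with the same message shape as the incremental "sync as you go" mode.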

    Read the article

  • Internet Explorer Automation: how to suppress Open/Save dialog?

    - by Vladimir Dyuzhev
    When controlling IE instance via MSHTML, how to suppress Open/Save dialogs for non-HTML content? I need to get data from another system and import it into our one. Due to budget constraints no development (e.g. WS) can be done on the other side for some time, so my only option for now is to do web scrapping. The remote site is ASP.NET-based, so simple HTML requests won't work -- too much JS. I wrote a simple C# application that uses MSHTML and SHDocView to control an IE instance. So far so good: I can perform login, navigate to desired page, populate required fields and do submit. Then I face a couple of problems: First is that report is opening in another window. I suspect I can attach to that window too by enumerating IE windows in the system. Second, more troublesome, is that report itself is CSV file, and triggers Open/Save dialog. I'd like to avoid it and make IE save the file into given location OR I'm fine with programmatically clicking dialog buttons too (how?) I'm actually totally non-Windows guy (unix/J2EE), and hope someone with better knowledge would give me a hint how to do those tasks. Thanks! UPDATE I've found a promising document on MSDN: http://msdn.microsoft.com/en-ca/library/aa770041.aspx Control the kinds of content that are downloaded and what the WebBrowser Control does with them once they are downloaded. For example, you can prevent videos from playing, script from running, or new windows from opening when users click on links, or prevent Microsoft ActiveX controls from downloading or executing. Slowly reading through... UPDATE 2: MADE IT WORK, SORT OF... Finally I made it work, but in an ugly way. Essentially, I register a handler "before navigate", then, in the handler, if the URL is matching my target file, I cancel the navigation, but remember the URL, and use WebClient class to access and download that temporal URL directly. I cannot copy the whole code here, it contains a lot of garbage, but here are the essential parts: Installing handler: _IE2.FileDownload += new DWebBrowserEvents2_FileDownloadEventHandler(IE2_FileDownload); _IE.BeforeNavigate2 += new DWebBrowserEvents2_BeforeNavigate2EventHandler(IE_OnBeforeNavigate2); Recording URL and then cancelling download (thus preventing Save dialog to appear): public string downloadUrl; void IE_OnBeforeNavigate2(Object ob1, ref Object URL, ref Object Flags, ref Object Name, ref Object da, ref Object Head, ref bool Cancel) { Console.WriteLine("Before Navigate2 "+URL); if (URL.ToString().EndsWith(".csv")) { Console.WriteLine("CSV file"); downloadUrl = URL.ToString(); } Cancel = false; } void IE2_FileDownload(bool activeDocument, ref bool cancel) { Console.WriteLine("FileDownload, downloading "+downloadUrl+" instead"); cancel = true; } void IE_OnNewWindow2(ref Object o, ref bool cancel) { Console.WriteLine("OnNewWindow2"); _IE2 = new SHDocVw.InternetExplorer(); _IE2.BeforeNavigate2 += new DWebBrowserEvents2_BeforeNavigate2EventHandler(IE_OnBeforeNavigate2); _IE2.Visible = true; o = _IE2; _IE2.FileDownload += new DWebBrowserEvents2_FileDownloadEventHandler(IE2_FileDownload); _IE2.Silent = true; cancel = false; return; } And in the calling code using the found URL for direct download: ... driver.ClickButton(".*_btnRunReport"); driver.WaitForComplete(); Thread.Sleep(10000); WebClient Client = new WebClient(); Client.DownloadFile(driver.downloadUrl, "C:\\affinity.dump"); (driver is a simple wrapper over IE instance = _IE) Hope that helps someone.
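One detail the answer mentions but never shows is attaching to the report window that opens separately. A rough sketch of how that is commonly done with the same SHDocVw interop assembly ("Microsoft Internet Controls") is below; the ShellWindows collection and LocationURL property are standard parts of that library, but the URL fragment used for filtering is just a placeholder, and the helper is assumed to live inside the same driver class as the handlers above:

// Hypothetical helper: find an already-open IE window whose URL contains a given fragment.
static SHDocVw.InternetExplorer FindIEWindow(string urlFragment)
{
    var shellWindows = new SHDocVw.ShellWindows();
    foreach (SHDocVw.InternetExplorer ie in shellWindows)
    {
        // ShellWindows also enumerates Windows Explorer windows, so filter on the URL.
        string url = ie.LocationURL;
        if (!string.IsNullOrEmpty(url) &&
            url.IndexOf(urlFragment, StringComparison.OrdinalIgnoreCase) >= 0)
        {
            return ie;
        }
    }
    return null;
}

The returned instance could then get the same BeforeNavigate2 and FileDownload handlers attached to it as _IE2 does in the answer above.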

    Read the article

  • Help with refactoring PHP code

    - by Richard Knop
    I had some troubles implementing Lawler's algorithm but thanks to SO and a bounty of 200 reputation I finally managed to write a working implementation: http://stackoverflow.com/questions/2466928/lawlers-algorithm-implementation-assistance I feel like I'm using too many variables and loops there though so I am trying to refactor the code. It should be simpler and shorter yet remain readable. Does it make sense to make a class for this? Any advice or even help with refactoring this piece of code is welcomed: <?php /* * @name Lawler's algorithm PHP implementation * @desc This algorithm calculates an optimal schedule of jobs to be * processed on a single machine (in reversed order) while taking * into consideration any precedence constraints. * @author Richard Knop * */ $jobs = array(1 => array('processingTime' => 2, 'dueDate' => 3), 2 => array('processingTime' => 3, 'dueDate' => 15), 3 => array('processingTime' => 4, 'dueDate' => 9), 4 => array('processingTime' => 3, 'dueDate' => 16), 5 => array('processingTime' => 5, 'dueDate' => 12), 6 => array('processingTime' => 7, 'dueDate' => 20), 7 => array('processingTime' => 5, 'dueDate' => 27), 8 => array('processingTime' => 6, 'dueDate' => 40), 9 => array('processingTime' => 3, 'dueDate' => 10)); // precedence constrainst, i.e job 2 must be completed before job 5 etc $successors = array(2=>5, 7=>9); $n = count($jobs); $optimalSchedule = array(); for ($i = $n; $i >= 1; $i--) { // jobs not required to precede any other job $arr = array(); foreach ($jobs as $k => $v) { if (false === array_key_exists($k, $successors)) { $arr[] = $k; } } // calculate total processing time $totalProcessingTime = 0; foreach ($jobs as $k => $v) { if (true === array_key_exists($k, $arr)) { $totalProcessingTime += $v['processingTime']; } } // find the job that will go to the end of the optimal schedule array $min = null; $x = 0; $lastKey = null; foreach($arr as $k) { $x = $totalProcessingTime - $jobs[$k]['dueDate']; if (null === $min || $x < $min) { $min = $x; $lastKey = $k; } } // add the job to the optimal schedule array $optimalSchedule[$lastKey] = $jobs[$lastKey]; // remove job from the jobs array unset($jobs[$lastKey]); // remove precedence constraint from the successors array if needed if (true === in_array($lastKey, $successors)) { foreach ($successors as $k => $v) { if ($lastKey === $v) { unset($successors[$k]); } } } } // reverse the optimal schedule array and preserve keys $optimalSchedule = array_reverse($optimalSchedule, true); // add tardiness to the array $i = 0; foreach ($optimalSchedule as $k => $v) { $optimalSchedule[$k]['tardiness'] = 0; $j = 0; foreach ($optimalSchedule as $k2 => $v2) { if ($j <= $i) { $optimalSchedule[$k]['tardiness'] += $v2['processingTime']; } $j++; } $i++; } echo '<pre>'; print_r($optimalSchedule); echo '</pre>';

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
A simple strategy for mapping classes to database tables might be “one table for every entity persistent class.” This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both “is a” and “has a” relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don’t support type inheritance—and even when it’s available, it’s usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy:

- Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information.
- Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships.
- Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema.

I will explain each of these strategies in a series of posts and this one is dedicated to TPH. In this series we'll dig deeply into each of these strategies and will learn about "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario.

Inheritance Mapping with Entity Framework Code First
All of the inheritance mapping strategies that we discuss in this series will be implemented by EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code First enables a pretty powerful code-centric development workflow for working with data. I’m a big fan of the EF Code First approach, and I’m pretty excited about the productivity and power that it brings. When it comes to inheritance mapping, Code First not only fully supports all the strategies but also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and is now more intuitive and concise compared to CTP4.

A Note For Those Who Follow Other Entity Framework Approaches
If you are following EF's "Database First" or "Model First" approaches, I still recommend reading this series: although the implementation is Code First specific, the explanations around each of the strategies apply equally well to all approaches, be it Code First or others.

A Note For Those Who Are New to Entity Framework and Code First
If you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by the ADO.NET team here. In this post, I assume you have already set up your machine to do Code First development and also that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First like Complex Types and Shared Primary Key Associations.

A Top Down Development Scenario
These posts take a top-down approach; they assume that you’re starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C# and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you’re working bottom up, starting with existing database tables.
I’ll show some tricks along the way that help you deal with imperfect table layouts. Let’s start with the mapping of entity inheritance.

The Domain Model
In our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We do allow various billing types and represent them as subclasses of the BillingDetail class. As for now, we support CreditCard and BankAccount:

Implement the Object Model with Code First
As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet for the base class which is BillingDetail. Code First will find the other classes in the hierarchy based on Reachability Convention. public abstract class BillingDetail  {     public int BillingDetailId { get; set; }     public string Owner { get; set; }             public string Number { get; set; } } public class BankAccount : BillingDetail {     public string BankName { get; set; }     public string Swift { get; set; } } public class CreditCard : BillingDetail {     public int CardType { get; set; }                     public string ExpiryMonth { get; set; }     public string ExpiryYear { get; set; } } public class InheritanceMappingContext : DbContext {     public DbSet<BillingDetail> BillingDetails { get; set; } } This object model is all that is needed to enable inheritance with Code First. If you put this in your application you would be able to immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries.

Polymorphic Queries
LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries—that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query: IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b; List<BillingDetail> billingDetails = linqQuery.ToList(); Or the same query in EntitySQL: string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b"; ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext                                                                          .CreateQuery<BillingDetail>(eSqlQuery); List<BillingDetail> billingDetails = objectQuery.ToList(); linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail, which is an abstract class, but the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount.

Non-polymorphic Queries
All LINQ to Entities and EntitySQL queries are polymorphic: they return not only instances of the specific entity class to which they refer, but all subclasses of that class as well. Non-polymorphic queries, on the other hand, are queries whose polymorphism is restricted and which only return instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method.
For example, the following query returns only instances of BankAccount: IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b; EntitySQL has an OFTYPE operator that does the same thing: string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b"; In fact, the above query with the OFTYPE operator is a short form of the following query expression that uses the TREAT and IS OF operators: string eSqlQuery = @"SELECT VALUE TREAT(b as Model.BankAccount)                       FROM BillingDetails AS b                       WHERE b IS OF(Model.BankAccount)"; (Note that in the above query, Model.BankAccount is the fully qualified name for the BankAccount class. You need to replace "Model" with your own namespace name.)

Table per Class Hierarchy (TPH)
An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don’t have to do anything special in Code First to enable TPH. It's the default inheritance mapping strategy: This mapping strategy is a winner in terms of both performance and simplicity. It’s the best-performing way to represent polymorphism—both polymorphic and nonpolymorphic queries perform well—and it’s even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions. Schema evolution is straightforward.

Discriminator Column
As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn’t a property of the persistent class in our object model; it’s used internally by EF Code First. By default, the column name is "Discriminator", and its type is string. The values default to the persistent class names—in this case, “BankAccount” or “CreditCard”. EF Code First automatically sets and retrieves the discriminator values.

TPH Requires Properties in SubClasses to be Nullable in the Database
TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property in the CreditCard class. However, in a typical mapping scenario, Code First always creates an (INT, NOT NULL) column in the database for an int property in a persistent class. But in this case, since a BankAccount instance won’t have a CardType property, the CardType field must be NULL for that row, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity.

TPH Violates the Third Normal Form
Another important issue is normalization. We’ve created functional dependencies between nonkey columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA).
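Before looking at the SQL these queries produce, here is a short end-to-end round trip with the model above, purely as a sketch (the sample values are made up and the context is assumed to use its default connection conventions):

using System.Linq;

class Program
{
    static void Main()
    {
        using (var context = new InheritanceMappingContext())
        {
            // Both entities end up as rows in the single BillingDetails table;
            // Code First fills the Discriminator column with "BankAccount" and "CreditCard".
            context.BillingDetails.Add(new BankAccount { Owner = "Joe", Number = "12345", BankName = "Contoso Bank", Swift = "CTSOUS33" });
            context.BillingDetails.Add(new CreditCard { Owner = "Joe", Number = "98765", CardType = 1, ExpiryMonth = "07", ExpiryYear = "2013" });
            context.SaveChanges();

            // Polymorphic query: materializes both subtypes.
            var all = context.BillingDetails.ToList();

            // Non-polymorphic query: adds a restriction on the discriminator column for CreditCard.
            var cards = context.BillingDetails.OfType<CreditCard>().ToList();
        }
    }
}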
Generated SQL Query
Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement: SELECT  [Extent1].[Discriminator] AS [Discriminator],  [Extent1].[BillingDetailId] AS [BillingDetailId],  [Extent1].[Owner] AS [Owner],  [Extent1].[Number] AS [Number],  [Extent1].[BankName] AS [BankName],  [Extent1].[Swift] AS [Swift],  [Extent1].[CardType] AS [CardType],  [Extent1].[ExpiryMonth] AS [ExpiryMonth],  [Extent1].[ExpiryYear] AS [ExpiryYear] FROM [dbo].[BillingDetails] AS [Extent1] WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard') Or the non-polymorphic query for the BankAccount subclass generates this SQL statement: SELECT  [Extent1].[BillingDetailId] AS [BillingDetailId],  [Extent1].[Owner] AS [Owner],  [Extent1].[Number] AS [Number],  [Extent1].[BankName] AS [BankName],  [Extent1].[Swift] AS [Swift] FROM [dbo].[BillingDetails] AS [Extent1] WHERE [Extent1].[Discriminator] = 'BankAccount' Note how Code First adds a restriction on the discriminator column and also how it only selects those columns that belong to the BankAccount entity.

Change Discriminator Column Data Type and Values With Fluent API
Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively: protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder) {     modelBuilder.Entity<BillingDetail>()                 .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))                 .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC")); } Also, changing the data type of the discriminator column is interesting. In the above code, we passed strings to the HasValue method, but this method is defined to accept an argument of type object: public void HasValue(object value); Therefore, if, for example, we pass a value of type int to it, then Code First will not only use our desired values (i.e. 1 & 2) in the discriminator column but will also change the column type to (INT, NOT NULL): modelBuilder.Entity<BillingDetail>()             .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))             .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2));

Summary
In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design—after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy that doesn’t expose you to this problem.

References
ADO.NET team blog
Java Persistence with Hibernate book

    Read the article

  • The Execute SQL Task

    In this article we are going to take you through the Execute SQL Task in SQL Server Integration Services for SQL Server 2005 (although it appies just as well to SQL Server 2008).  We will be covering all the essentials that you will need to know to effectively use this task and make it as flexible as possible. The things we will be looking at are as follows: A tour of the Task. The properties of the Task. After looking at these introductory topics we will then get into some examples. The examples will show different types of usage for the task: Returning a single value from a SQL query with two input parameters. Returning a rowset from a SQL query. Executing a stored procedure and retrieveing a rowset, a return value, an output parameter value and passing in an input parameter. Passing in the SQL Statement from a variable. Passing in the SQL Statement from a file. Tour Of The Task Before we can start to use the Execute SQL Task in our packages we are going to need to locate it in the toolbox. Let's do that now. Whilst in the Control Flow section of the package expand your toolbox and locate the Execute SQL Task. Below is how we found ours. Now drag the task onto the designer. As you can see from the following image we have a validation error appear telling us that no connection manager has been assigned to the task. This can be easily remedied by creating a connection manager. There are certain types of connection manager that are compatable with this task so we cannot just create any connection manager and these are detailed in a few graphics time. Double click on the task itself to take a look at the custom user interface provided to us for this task. The task will open on the general tab as shown below. Take a bit of time to have a look around here as throughout this article we will be revisting this page many times. Whilst on the general tab, drop down the combobox next to the ConnectionType property. In here you will see the types of connection manager which this task will accept. As with SQL Server 2000 DTS, SSIS allows you to output values from this task in a number of formats. Have a look at the combobox next to the Resultset property. The major difference here is the ability to output into XML. If you drop down the combobox next to the SQLSourceType property you will see the ways in which you can pass a SQL Statement into the task itself. We will have examples of each of these later on but certainly when we saw these for the first time we were very excited. Next to the SQLStatement property if you click in the empty box next to it you will see ellipses appear. Click on them and you will see the very basic query editor that becomes available to you. Alternatively after you have specified a connection manager for the task you can click on the Build Query button to bring up a completely different query editor. This is slightly inconsistent. Once you've finished looking around the general tab, move on to the next tab which is the parameter mapping tab. We shall, again, be visiting this tab throughout the article but to give you an initial heads up this is where you define the input, output and return values from your task. Note this is not where you specify the resultset. If however you now move on to the ResultSet tab this is where you define what variable will receive the output from your SQL Statement in whatever form that is. 
Property Expressions are one of the most amazing things to happen in SSIS and they will not be covered here as they deserve a whole article to themselves. Watch out for this as their usefulness will astound you. For a more detailed discussion of what should be the parameter markers in the SQL Statements on the General tab and how to map them to variables on the Parameter Mapping tab see Working with Parameters and Return Codes in the Execute SQL Task. Task Properties There are two places where you can specify the properties for your task. One is in the task UI itself and the other is in the property pane which will appear if you right click on your task and select Properties from the context menu. We will be doing plenty of property setting in the UI later so let's take a moment to have a look at the property pane. Below is a graphic showing our properties pane. Now we shall take you through all the properties and tell you exactly what they mean. A lot of these properties you will see across all tasks as well as the package because of everything's base structure The Container. BypassPrepare Should the statement be prepared before sending to the connection manager destination (True/False) Connection This is simply the name of the connection manager that the task will use. We can get this from the connection manager tray at the bottom of the package. DelayValidation Really interesting property and it tells the task to not validate until it actually executes. A usage for this may be that you are operating on table yet to be created but at runtime you know the table will be there. Description Very simply the description of your Task. Disable Should the task be enabled or not? You can also set this through a context menu by right clicking on the task itself. DisableEventHandlers As a result of events that happen in the task, should the event handlers for the container fire? ExecValueVariable The variable assigned here will get or set the execution value of the task. Expressions Expressions as we mentioned earlier are a really powerful tool in SSIS and this graphic below shows us a small peek of what you can do. We select a property on the left and assign an expression to the value of that property on the right causing the value to be dynamically changed at runtime. One of the most obvious uses of this is that the property value can be built dynamically from within the package allowing you a great deal of flexibility FailPackageOnFailure If this task fails does the package? FailParentOnFailure If this task fails does the parent container? A task can he hosted inside another container i.e. the For Each Loop Container and this would then be the parent. ForcedExecutionValue This property allows you to hard code an execution value for the task. ForcedExecutionValueType What is the datatype of the ForcedExecutionValue? ForceExecutionResult Force the task to return a certain execution result. This could then be used by the workflow constraints. Possible values are None, Success, Failure and Completion. ForceExecutionValue Should we force the execution result? IsolationLevel This is the transaction isolation level of the task. IsStoredProcedure Certain optimisations are made by the task if it knows that the query is a Stored Procedure invocation. The docs say this will always be false unless the connection is an ADO connection. LocaleID Gets or sets the LocaleID of the container. LoggingMode Should we log for this container and what settings should we use? 
The value choices are UseParentSetting, Enabled and Disabled. MaximumErrorCount How many times can the task fail before we call it a day? Name Very simply the name of the task. ResultSetType How do you want the results of your query returned? The choices are ResultSetType_None, ResultSetType_SingleRow, ResultSetType_Rowset and ResultSetType_XML. SqlStatementSource Your Query/SQL Statement. SqlStatementSourceType The method of specifying the query. Your choices here are DirectInput, FileConnection and Variables TimeOut How long should the task wait to receive results? TransactionOption How should the task handle being asked to join a transaction? Usage Examples As we move through the examples we will only cover in them what we think you must know and what we think you should see. This means that some of the more elementary steps like setting up variables will be covered in the early examples but skipped and simply referred to in later ones. All these examples used the AventureWorks database that comes with SQL Server 2005. Returning a Single Value, Passing in Two Input Parameters So the first thing we are going to do is add some variables to our package. The graphic below shows us those variables having been defined. Here the CountOfEmployees variable will be used as the output from the query and EndDate and StartDate will be used as input parameters. As you can see all these variables have been scoped to the package. Scoping allows us to have domains for variables. Each container has a scope and remember a package is a container as well. Variable values of the parent container can be seen in child containers but cannot be passed back up to the parent from a child. Our following graphic has had a number of changes made. The first of those changes is that we have created and assigned an OLEDB connection manager to this Task ExecuteSQL Task Connection. The next thing is we have made sure that the SQLSourceType property is set to Direct Input as we will be writing in our statement ourselves. We have also specified that only a single row will be returned from this query. The expressions we typed in was: SELECT COUNT(*) AS CountOfEmployees FROM HumanResources.Employee WHERE (HireDate BETWEEN ? AND ?) Moving on now to the Parameter Mapping tab this is where we are going to tell the task about our input paramaters. We Add them to the window specifying their direction and datatype. A quick word here about the structure of the variable name. As you can see SSIS has preceeded the variable with the word user. This is a default namespace for variables but you can create your own. When defining your variables if you look at the variables window title bar you will see some icons. If you hover over the last one on the right you will see it says "Choose Variable Columns". If you click the button you will see a list of checkbox options and one of them is namespace. after checking this you will see now where you can define your own namespace. The next tab, result set, is where we need to get back the value(s) returned from our statement and assign to a variable which in our case is CountOfEmployees so we can use it later perhaps. Because we are only returning a single value then if you remember from earlier we are allowed to assign a name to the resultset but it must be the name of the column (or alias) from the query. A really cool feature of Business Intelligence Studio being hosted by Visual Studio is that we get breakpoint support for free. 
In our package we set a Breakpoint so we can break the package and have a look in a watch window at the variable values as they appear to our task and what the variable value of our resultset is after the task has done the assignment. Here's that window now. As you can see the count of employess that matched the data range was 2. Returning a Rowset In this example we are going to return a resultset back to a variable after the task has executed not just a single row single value. There are no input parameters required so the variables window is nice and straight forward. One variable of type object. Here is the statement that will form the soure for our Resultset. select p.ProductNumber, p.name, pc.Name as ProductCategoryNameFROM Production.ProductCategory pcJOIN Production.ProductSubCategory pscON pc.ProductCategoryID = psc.ProductCategoryIDJOIN Production.Product pON psc.ProductSubCategoryID = p.ProductSubCategoryID We need to make sure that we have selected Full result set as the ResultSet as shown below on the task's General tab. Because there are no input parameters we can skip the parameter mapping tab and move straight to the Result Set tab. Here we need to Add our variable defined earlier and map it to the result name of 0 (remember we covered this earlier) Once we run the task we can again set a breakpoint and have a look at the values coming back from the task. In the following graphic you can see the result set returned to us as a COM object. We can do some pretty interesting things with this COM object and in later articles that is exactly what we shall be doing. Return Values, Input/Output Parameters and Returning a Rowset from a Stored Procedure This example is pretty much going to give us a taste of everything. We have already covered in the previous example how to specify the ResultSet to be a Full result set so we will not cover it again here. For this example we are going to need 4 variables. One for the return value, one for the input parameter, one for the output parameter and one for the result set. Here is the statement we want to execute. Note how much cleaner it is than if you wanted to do it using the current version of DTS. In the Parameter Mapping tab we are going to Add our variables and specify their direction and datatypes. In the Result Set tab we can now map our final variable to the rowset returned from the stored procedure. It really is as simple as that and we were amazed at how much easier it is than in DTS 2000. Passing in the SQL Statement from a Variable SSIS as we have mentioned is hugely more flexible than its predecessor and one of the things you will notice when moving around the tasks and the adapters is that a lot of them accept a variable as an input for something they need. The ExecuteSQL task is no different. It will allow us to pass in a string variable as the SQL Statement. This variable value could have been set earlier on from inside the package or it could have been populated from outside using a configuration. The ResultSet property is set to single row and we'll show you why in a second when we look at the variables. Note also the SQLSourceType property. Here's the General Tab again. Looking at the variable we have in this package you can see we have only two. One for the return value from the statement and one which is obviously for the statement itself. Again we need to map the Result name to our variable and this can be a named Result Name (The column name or alias returned by the query) and not 0. 
The expected result into our variable should be the amount of rows in the Person.Contact table and if we look in the watch window we see that it is.   Passing in the SQL Statement from a File The final example we are going to show is a really interesting one. We are going to pass in the SQL statement to the task by using a file connection manager. The file itself contains the statement to run. The first thing we are going to need to do is create our file connection mananger to point to our file. Click in the connections tray at the bottom of the designer, right click and choose "New File Connection" As you can see in the graphic below we have chosen to use an existing file and have passed in the name as well. Have a look around at the other "Usage Type" values available whilst you are here. Having set that up we can now see in the connection manager tray our file connection manager sitting alongside our OLE-DB connection we have been using for the rest of these examples. Now we can go back to the familiar General Tab to set up how the task will accept our file connection as the source. All the other properties in this task are set up exactly as we have been doing for other examples depending on the options chosen so we will not cover them again here.   We hope you will agree that the Execute SQL Task has changed considerably in this release from its DTS predecessor. It has a lot of options available but once you have configured it a few times you get to learn what needs to go where. We hope you have found this article useful.
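As a closing footnote to the article above (not part of it), the same task can also be created and configured from code through the SSIS runtime API. The sketch below uses the STOCK:SQLTask moniker and the SqlStatementSource and Connection properties listed earlier; treat the connection string and the use of the connection manager's ID for the Connection property as assumptions to verify against your environment:

using System;
using Microsoft.SqlServer.Dts.Runtime;

class Program
{
    static void Main()
    {
        var package = new Package();

        // OLE DB connection manager the task will use (connection string is a placeholder).
        ConnectionManager conn = package.Connections.Add("OLEDB");
        conn.Name = "ExecuteSQL Task Connection";
        conn.ConnectionString = "Provider=SQLNCLI.1;Data Source=.;Initial Catalog=AdventureWorks;Integrated Security=SSPI;";

        // Add an Execute SQL Task and set the same properties discussed in the article.
        Executable exec = package.Executables.Add("STOCK:SQLTask");
        var taskHost = (TaskHost)exec;
        taskHost.Properties["Connection"].SetValue(taskHost, conn.ID);
        taskHost.Properties["SqlStatementSource"].SetValue(taskHost,
            "SELECT COUNT(*) AS CountOfEmployees FROM HumanResources.Employee");

        DTSExecResult result = package.Execute();
        Console.WriteLine(result); // Success or Failure
    }
}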

    Read the article

  • LLBLGen Pro feature highlights: automatic element name construction

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) One of the things one might take for granted but which has a huge impact on the time spent in an entity modeling environment is the way the system creates names for elements out of the information provided, in short: automatic element name construction. Element names are created in both directions of modeling: database first and model first and the more names the system can create for you without you having to rename them, the better. LLBLGen Pro has a rich, fine grained system for creating element names out of the meta-data available, which I'll describe more in detail below. First the model element related element naming features are highlighted, in the section Automatic model element naming features and after that I'll go more into detail about the relational model element naming features LLBLGen Pro has to offer in the section Automatic relational model element naming features. Automatic model element naming features When working database first, the element names in the model, e.g. entity names, entity field names and so on, are in general determined from the relational model element (e.g. table, table field) they're mapped on, as the model elements are reverse engineered from these relational model elements. It doesn't take rocket science to automatically name an entity Customer if the entity was created after reverse engineering a table named Customer. It gets a little trickier when the entity which was created by reverse engineering a table called TBL_ORDER_LINES has to be named 'OrderLine' automatically. Automatic model element naming also takes into effect with model first development, where some settings are used to provide you with a default name, e.g. in the case of navigator name creation when you create a new relationship. The features below are available to you in the Project Settings. Open Project Settings on a loaded project and navigate to Conventions -> Element Name Construction. Strippers! The above example 'TBL_ORDER_LINES' shows that some parts of the table name might not be needed for name creation, in this case the 'TBL_' prefix. Some 'brilliant' DBAs even add suffixes to table names, fragments you might not want to appear in the entity names. LLBLGen Pro offers you to define both prefix and suffix fragments to strip off of table, view, stored procedure, parameter, table field and view field names. In the example above, the fragment 'TBL_' is a good candidate for such a strip pattern. You can specify more than one pattern for e.g. the table prefix strip pattern, so even a really messy schema can still be used to produce clean names. Underscores Be Gone Another thing you might get rid of are underscores. After all, most naming schemes for entities and their classes use PasCal casing rules and don't allow for underscores to appear. LLBLGen Pro can automatically strip out underscores for you. It's an optional feature, so if you like the underscores, you're not forced to see them go: LLBLGen Pro will leave them alone when ordered to to so. PasCal everywhere... or not, your call LLBLGen Pro can automatically PasCal case names on word breaks. It determines word breaks in a couple of ways: a space marks a word break, an underscore marks a word break and a case difference marks a word break. It will remove spaces in all cases, and based on the underscore removal setting, keep or remove the underscores, and upper-case the first character of a word break fragment, and lower case the rest. 
Say, we keep the defaults, which is remove underscores and PasCal case always and strip the TBL_ fragment, we get with our example TBL_ORDER_LINES, after stripping TBL_ from the table name two word fragments: ORDER and LINES. The underscores are removed, the first character of each fragment is upper-cased, the rest lower-cased, so this results in OrderLines. Almost there! Pluralization and Singularization In general entity names are singular, like Customer or OrderLine so LLBLGen Pro offers a way to singularize the names. This will convert OrderLines, the result we got after the PasCal casing functionality, into OrderLine, exactly what we're after. Show me the patterns! There are other situations in which you want more flexibility. Say, you have an entity Customer and an entity Order and there's a foreign key constraint defined from the target of Order and the target of Customer. This foreign key constraint results in a 1:n relationship between the entities Customer and Order. A relationship has navigators mapped onto the relationship in both entities the relationship is between. For this particular relationship we'd like to have Customer as navigator in Order and Orders as navigator in Customer, so the relationship becomes Customer.Orders 1:n Order.Customer. To control the naming of these navigators for the various relationship types, LLBLGen Pro defines a set of patterns which allow you, using macros, to define how the auto-created navigator names will look like. For example, if you rather have Customer.OrderCollection, you can do so, by changing the pattern from {$EndEntityName$P} to {$EndEntityName}Collection. The $P directive makes sure the name is pluralized, which is not what you want if you're going for <EntityName>Collection, hence it's removed. When working model first, it's a given you'll create foreign key fields along the way when you define relationships. For example, you've defined two entities: Customer and Order, and they have their fields setup properly. Now you want to define a relationship between them. This will automatically create a foreign key field in the Order entity, which reflects the value of the PK field in Customer. (No worries if you hate the foreign key fields in your classes, on NHibernate and EF these can be hidden in the generated code if you want to). A specific pattern is available for you to direct LLBLGen Pro how to name this foreign key field. For example, if all your entities have Id as PK field, you might want to have a different name than Id as foreign key field. In our Customer - Order example, you might want to have CustomerId instead as foreign key name in Order. The pattern for foreign key fields gives you that freedom. Abbreviations... make sense of OrdNr and friends I already described word breaks in the PasCal casing paragraph, how they're used for the PasCal casing in the constructed name. Word breaks are used for another neat feature LLBLGen Pro has to offer: abbreviation support. Burt, your friendly DBA in the dungeons below the office has a hate-hate relationship with his keyboard: he can't stand it: typing is something he avoids like the plague. This has resulted in tables and fields which have names which are very short, but also very unreadable. Example: our TBL_ORDER_LINES example has a lovely field called ORD_NR. What you would like to see in your fancy new OrderLine entity mapped onto this table is a field called OrderNumber, not a field called OrdNr. What you also like is to not have to rename that field manually. 
There are better things to do with your time, after all. LLBLGen Pro has you covered. All it takes is to define some abbreviation - full word pairs and during reverse engineering model elements from tables/views, LLBLGen Pro will take care of the rest. For the ORD_NR field, you need two values: ORD as abbreviation and Order as full word, and NR as abbreviation and Number as full word. LLBLGen Pro will now convert every word fragment found with the word breaks which matches an abbreviation to the given full word. They're case sensitive and can be found in the Project Settings: Navigate to Conventions -> Element Name Construction -> Abbreviations. Automatic relational model element naming features Not everyone works database first: it may very well be the case you start from scratch, or have to add additional tables to an existing database. For these situations, it's key you have the flexibility that you can control the created table names and table fields without any work: let the designer create these names based on the entity model you defined and a set of rules. LLBLGen Pro offers several features in this area, which are described in more detail below. These features are found in Project Settings: navigate to Conventions -> Model First Development. Underscores, welcome back! Not every database is case insensitive, and not every organization requires PasCal cased table/field names, some demand all lower or all uppercase names with underscores at word breaks. Say you create an entity model with an entity called OrderLine. You work with Oracle and your organization requires underscores at word breaks: a table created from OrderLine should be called ORDER_LINE. LLBLGen Pro allows you to do that: with a simple checkbox you can order LLBLGen Pro to insert an underscore at each word break for the type of database you're working with: case sensitive or case insensitive. Checking the checkbox Insert underscore at word break case insensitive dbs will let LLBLGen Pro create a table from the entity called Order_Line. Half-way there, as there are still lower case characters there and you need all caps. No worries, see below Casing directives so everyone can sleep well at night For case sensitive databases and case insensitive databases there is one setting for each of them which controls the casing of the name created from a model element (e.g. a table created from an entity definition using the auto-mapping feature). The settings can have the following values: AsProjectElement, AllUpperCase or AllLowerCase. AsProjectElement is the default, and it keeps the casing as-is. In our example, we need to get all upper case characters, so we select AllUpperCase for the setting for case sensitive databases. This will produce the name ORDER_LINE. Sequence naming after a pattern Some databases support sequences, and using model-first development it's key to have sequences, when needed, to be created automatically and if possible using a name which shows where they're used. Say you have an entity Order and you want to have the PK values be created by the database using a sequence. The database you're using supports sequences (e.g. Oracle) and as you want all numeric PK fields to be sequenced, you have enabled this by the setting Auto assign sequences to integer pks. 
When you're using LLBLGen Pro's auto-map feature, to create new tables and constraints from the model, it will create a new table, ORDER, based on your settings I previously discussed above, with a PK field ID and it also creates a sequence, SEQ_ORDER, which is auto-assigns to the ID field mapping. The name of the sequence is created by using a pattern, defined in the Model First Development setting Sequence pattern, which uses plain text and macros like with the other patterns previously discussed. Grouping and schemas When you start from scratch, and you're working model first, the tables created by LLBLGen Pro will be in a catalog and / or schema created by LLBLGen Pro as well. If you use LLBLGen Pro's grouping feature, which allows you to group entities and other model elements into groups in the project (described in a future blog post), you might want to have that group name reflected in the schema name the targets of the model elements are in. Say you have a model with a group CRM and a group HRM, both with entities unique for these groups, e.g. Employee in HRM, Customer in CRM. When auto-mapping this model to create tables, you might want to have the table created for Employee in the HRM schema but the table created for Customer in the CRM schema. LLBLGen Pro will do just that when you check the setting Set schema name after group name to true (default). This gives you total control over where what is placed in the database from your model. But I want plural table names... and TBL_ prefixes! For now we follow best practices which suggest singular table names and no prefixes/suffixes for names. Of course that won't keep everyone happy, so we're looking into making it possible to have that in a future version. Conclusion LLBLGen Pro offers a variety of options to let the modeling system do as much work for you as possible. Hopefully you enjoyed this little highlight post and that it has given you new insights in the smaller features available to you in LLBLGen Pro, ones you might not have thought off in the first place. Enjoy!
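To make the TBL_ORDER_LINES to OrderLines walk-through above concrete, here is a small stand-alone sketch of the same kind of pipeline (strip pattern, word breaks, abbreviation expansion, PasCal casing). It is only an illustration and not LLBLGen Pro's actual code; singularization is left out, so OrderLines would still need that final step to become OrderLine:

using System;
using System.Collections.Generic;
using System.Linq;

static class NameConstruction
{
    // Example abbreviation -> full word pairs, as they would be configured in the designer.
    static readonly Dictionary<string, string> Abbreviations =
        new Dictionary<string, string> { { "ORD", "Order" }, { "NR", "Number" } };

    // Strip a known prefix, split on word breaks, expand abbreviations and PasCal-case each fragment.
    static string ConstructName(string dbName, string prefixToStrip)
    {
        if (prefixToStrip.Length > 0 && dbName.StartsWith(prefixToStrip, StringComparison.OrdinalIgnoreCase))
        {
            dbName = dbName.Substring(prefixToStrip.Length);
        }

        var fragments = dbName
            .Split(new[] { '_', ' ' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(f =>
            {
                string full;
                if (Abbreviations.TryGetValue(f.ToUpperInvariant(), out full)) return full;
                return char.ToUpperInvariant(f[0]) + f.Substring(1).ToLowerInvariant();
            });

        return string.Concat(fragments);
    }

    static void Main()
    {
        Console.WriteLine(ConstructName("TBL_ORDER_LINES", "TBL_")); // OrderLines
        Console.WriteLine(ConstructName("ORD_NR", ""));              // OrderNumber
    }
}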

    Read the article
