Search Results

Search found 17422 results on 697 pages for 'visual studio designer'.


  • Stretching DIV to 100% height and width of window but not less than 800x600px

    - by El Eme
    I have a page that needs to stretch and resize with the window, and I've managed to do that, but I need the "inner div" (#pgContent) to stretch when the window is resized to larger dimensions while never shrinking below, say, 800 x 600 px. As I have it now it stretches well, but it also shrinks more than I want. It works as I want in width, but not in height. My CSS:

        * { margin: 0; padding: 0; outline: 0; }

        /*| PAGE LAYOUT |*/
        html, body {
            height: 100%;
            width: 100%;
            /*text-align: center;*/ /* IE doesn't like this */
            cursor: default;
        }
        #pgWrapper {
            z-index: 1;
            min-height: 100%;
            height: auto !important;
            /*min-height: 600px;*/ /* THIS SHOULD WORK BUT IT DOESN'T */
            height: 100%;
            min-width: 1000px;
            width: 100%;
            position: absolute;
            overflow: hidden;
            text-align: center;
            background: #000;
        }
        #pgContent {
            position: absolute;
            top: 30px;
            left: 30px;
            right: 30px;
            bottom: 50px;
            overflow: hidden;
            background: #CCC;
        }
        #footWrapper {
            z-index: 2;
            height: 50px;
            min-width: 940px;
            position: absolute;
            left: 30px;
            right: 30px;
            bottom: 0px;
            background: #C00;
        }
        /*| END PAGE LAYOUT |*/

    And the HTML:

        <body>
            <div id="pgWrapper">
                <div id="pgContent">
                    This DIV should stretch with window but never lower than for example 800px x 600px!<br />
                    If window size is lower then scrollbars should appear.
                </div>
            </div>
            <div id="footWrapper">
                <div id="footLft"></div>
                <div id="footRgt"></div>
            </div>
        </body>

    If someone could give me a hand with this I would appreciate it. Thanks in advance.
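
    A sketch of one possible fix, assuming the goal is to get scrollbars once the window drops below roughly 800 x 600: clamp the wrapper in both axes and let the page itself scroll, rather than relying on min-height inside the absolutely positioned inner div. The pixel values are assumptions derived from the 30px/50px offsets above:

        html, body { height: 100%; overflow: auto; }   /* page scrolls below the minimum */
        #pgWrapper {
            min-width: 800px;    /* clamp the wrapper, not the inner div */
            min-height: 600px;
        }
        /* #pgContent keeps its absolute 30/30/30/50 offsets and now effectively
           inherits the wrapper's minimum size minus those offsets. */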

    Read the article

  • C++ include statement required if defining a map in a header file

    - by Justin
    I was doing a project for a computer course on programming concepts. The project was to be completed in C++ using the object-oriented designs we learned throughout the course. Anyhow, I have two files, symboltable.h and symboltable.cpp. I want to use a map as the data structure, so I define it in the private section of the header file. I #include <map> in the cpp file before I #include "symboltable.h". I get several errors from the compiler (MS VS 2008 Pro) when I go to debug/run the program, the first of which is:

        Error 1 error C2146: syntax error : missing ';' before identifier 'table' c:\users\jsmith\documents\visual studio 2008\projects\project2\project2\symboltable.h 22 Project2

    To fix this I had to #include <map> in the header file, which to me seems strange. Here are the relevant code files:

        // symboltable.h
        #include <map>

        class SymbolTable {
        public:
            SymbolTable() {}
            void insert(string variable, double value);
            double lookUp(string variable);
            void init(); // Added as part of the spec given in the conference area.
        private:
            map<string, double> table; // Our container for variables and their values.
        };

    and

        // symboltable.cpp
        #include <map>
        #include <string>
        #include <iostream>
        using namespace std;
        #include "symboltable.h"

        void SymbolTable::insert(string variable, double value) {
            table[variable] = value; // Creates a new map entry; if the variable name already exists it overwrites the last value.
        }

        double SymbolTable::lookUp(string variable) {
            if (table.find(variable) == table.end()) // find() returns a position; if that's the end then we didn't find it.
                throw exception("Error: Uninitialized variable");
            else
                return table[variable];
        }

        void SymbolTable::init() {
            table.clear(); // Clears the map, removes all elements.
        }
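
    This is expected rather than strange: each translation unit compiles the header exactly as written, so the header has to be self-contained. It currently compiles only because symboltable.cpp happens to place its includes and "using namespace std;" before including it. A sketch of a self-contained version, which also needs <string> and std:: qualifiers:

        // symboltable.h
        #ifndef SYMBOLTABLE_H
        #define SYMBOLTABLE_H

        #include <map>     // the member is a map by value, so the full definition is required here
        #include <string>  // std::string appears in the interface

        class SymbolTable {
        public:
            SymbolTable() {}
            void insert(std::string variable, double value);
            double lookUp(std::string variable);
            void init();
        private:
            std::map<std::string, double> table;
        };

        #endif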

    Read the article

  • NullReferenceException at Microsoft.Silverlight.Build.Tasks.CompileXaml.LoadAssemblies(ITaskItem[] R

    - by Eugene Larchick
    Hi, I updated my Visual Studio 2010 to version 10.0.30319.1 RTM Rel and started getting the following exception during the build:

        System.NullReferenceException: Object reference not set to an instance of an object.
           at Microsoft.Silverlight.Build.Tasks.CompileXaml.LoadAssemblies(ITaskItem[] ReferenceAssemblies)
           at Microsoft.Silverlight.Build.Tasks.CompileXaml.get_GetXamlSchemaContext()
           at Microsoft.Silverlight.Build.Tasks.CompileXaml.GenerateCode(ITaskItem item, Boolean isApplication)
           at Microsoft.Silverlight.Build.Tasks.CompileXaml.Execute()
           at Bohr.Silverlight.BuildTasks.BohrCompileXaml.Execute()

    The code of BohrCompileXaml.Execute is the following:

        public override bool Execute() {
            List<TaskItem> pages = new List<TaskItem>();
            foreach (ITaskItem item in SilverlightPages) {
                string newFileName = getGeneratedName(item.ItemSpec);
                String content = File.ReadAllText(item.ItemSpec);
                String parentClassName = getParentClassName(content);
                if (null != parentClassName) {
                    content = content.Replace("<UserControl", "<" + parentClassName);
                    content = content.Replace("</UserControl>", "</" + parentClassName + ">");
                    content = content.Replace("bohr:ParentClass=\"" + parentClassName + "\"", "");
                }
                File.WriteAllText(newFileName, content);
                pages.Add(new TaskItem(newFileName));
            }
            if (null != SilverlightApplications) {
                foreach (ITaskItem item in SilverlightApplications) {
                    Log.LogMessage(MessageImportance.High, "Application: " + item.ToString());
                }
            }
            foreach (ITaskItem item in pages) {
                Log.LogMessage(MessageImportance.High, "newPage: " + item.ToString());
            }
            CompileXaml xamlCompiler = new CompileXaml();
            xamlCompiler.AssemblyName = AssemblyName;
            xamlCompiler.Language = Language;
            xamlCompiler.LanguageSourceExtension = LanguageSourceExtension;
            xamlCompiler.OutputPath = OutputPath;
            xamlCompiler.ProjectPath = ProjectPath;
            xamlCompiler.RootNamespace = RootNamespace;
            xamlCompiler.SilverlightApplications = SilverlightApplications;
            xamlCompiler.SilverlightPages = pages.ToArray();
            xamlCompiler.TargetFrameworkDirectory = TargetFrameworkDirectory;
            xamlCompiler.TargetFrameworkSDKDirectory = TargetFrameworkSDKDirectory;
            xamlCompiler.BuildEngine = BuildEngine;
            bool result = xamlCompiler.Execute(); // HERE we got the error!

    And the definition of the task:

        <BohrCompileXaml LanguageSourceExtension="$(DefaultLanguageSourceExtension)"
                         Language="$(Language)"
                         SilverlightPages="@(Page)"
                         SilverlightApplications="@(ApplicationDefinition)"
                         ProjectPath="$(MSBuildProjectFullPath)"
                         RootNamespace="$(RootNamespace)"
                         AssemblyName="$(AssemblyName)"
                         OutputPath="$(IntermediateOutputPath)"
                         TargetFrameworkDirectory="$(TargetFrameworkDirectory)"
                         TargetFrameworkSDKDirectory="$(TargetFrameworkSDKDirectory)">
            <Output ItemName="Compile" TaskParameter="GeneratedCodeFiles" />
            <!-- Add to the list of files written. It is used in Microsoft.Common.Targets to clean up for the next clean build -->
            <Output ItemName="FileWrites" TaskParameter="WrittenFiles" />
            <Output ItemName="_GeneratedCodeFiles" TaskParameter="GeneratedCodeFiles" />
        </BohrCompileXaml>

    What can be the reason? And how can I get more info about what's happening inside the CompileXaml class?
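
    One way to see exactly which inputs CompileXaml receives is a diagnostic-verbosity build, which logs every task's parameter values. It may also be worth noting that the failing frame takes a ReferenceAssemblies parameter, while the wrapper code above never assigns any references property on the inner task - possibly the source of the null. A sketch, with the project name as a placeholder:

        msbuild MyProject.csproj /t:Rebuild /verbosity:diagnostic > build.log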

    Read the article

  • JSON datetime to SQL Server database via WCF

    - by moikey
    I have noticed a problem over the past couple of days where dates submitted to a SQL Server database are wrong. I have a webpage where users can book facilities. This webpage takes a name, a date, a start time and an end time (BookingID is required for transactions but generated by the database), which I format as a JSON string as follows:

        {"BookingEnd":"\/Date(2012-26-03 09:00:00.000)\/","BookingID":1,"BookingName":"client test 1","BookingStart":"\/Date(2012-26-03 10:00:00.000)\/","RoomID":4}

    This is then passed to a WCF service, which handles the database insert as follows:

        [WebInvoke(Method = "POST", RequestFormat = WebMessageFormat.Json, UriTemplate = "createbooking")]
        void CreateBooking(Booking booking);

        [DataContract]
        public class Booking
        {
            [DataMember] public int BookingID { get; set; }
            [DataMember] public string BookingName { get; set; }
            [DataMember] public DateTime BookingStart { get; set; }
            [DataMember] public DateTime BookingEnd { get; set; }
            [DataMember] public int RoomID { get; set; }
        }

    Booking.svc:

        public void CreateBooking(Booking booking)
        {
            BookingEntity bookingEntity = new BookingEntity()
            {
                BookingName = booking.BookingName,
                BookingStart = booking.BookingStart,
                BookingEnd = booking.BookingEnd,
                RoomID = booking.RoomID
            };
            BookingsModel model = new BookingsModel();
            model.CreateBooking(bookingEntity);
        }

    Booking model:

        public void CreateBooking(BookingEntity booking)
        {
            using (var conn = new SqlConnection("Data Source=cpm;Initial Catalog=BookingDB;Integrated Security=True"))
            using (var cmd = conn.CreateCommand())
            {
                conn.Open();
                cmd.CommandText = @"IF NOT EXISTS
                                    (
                                        SELECT * FROM Bookings
                                        WHERE BookingStart = @BookingStart
                                        AND BookingEnd = @BookingEnd
                                        AND RoomID = @RoomID
                                    )
                                    INSERT INTO Bookings (BookingName, BookingStart, BookingEnd, RoomID)
                                    VALUES (@BookingName, @BookingStart, @BookingEnd, @RoomID)";
                cmd.Parameters.AddWithValue("@BookingName", booking.BookingName);
                cmd.Parameters.AddWithValue("@BookingStart", booking.BookingStart);
                cmd.Parameters.AddWithValue("@BookingEnd", booking.BookingEnd);
                cmd.Parameters.AddWithValue("@RoomID", booking.RoomID);
                cmd.ExecuteNonQuery();
                conn.Close();
            }
        }

    This updates the database, but the time ends up as "1970-01-01 00:00:02.013" every time I submit the date in the above JSON format. However, when I run a query in SQL Server Management Studio with the above date format ("YYYY-MM-DD HH:MM:SS.mmm"), it inserts the correct values. Also, if I submit a millisecond datetime to the WCF service, the correct date is inserted. The problem seems to be the format I am submitting. I am a little lost with this problem; I don't really see why it is doing this. Any help would be greatly appreciated. Thanks.
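
    The millisecond form works because WCF's DataContractJsonSerializer defines \/Date(n)\/ as n milliseconds since the Unix epoch, not as a formatted date string - the literal "2012-26-03 ..." appears to be read as a couple of thousand milliseconds past the epoch, which matches the "1970-01-01 00:00:02.013" rows. A small C# sketch showing the wire format the service actually expects:

        // Prints the JSON WCF produces for a DateTime, e.g. "\/Date(1332756000000)\/"
        using System;
        using System.IO;
        using System.Runtime.Serialization.Json;

        class Demo
        {
            static void Main()
            {
                var serializer = new DataContractJsonSerializer(typeof(DateTime));
                using (var ms = new MemoryStream())
                {
                    serializer.WriteObject(ms, new DateTime(2012, 3, 26, 10, 0, 0, DateTimeKind.Utc));
                    Console.WriteLine(System.Text.Encoding.UTF8.GetString(ms.ToArray()));
                }
            }
        }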

    Read the article

  • Please help me find the bug in my code (segmentation fault)

    - by Vikramaditya Battina
    I am trying to solve http://www.spoj.com/problems/LEXISORT/. It works fine with the Visual Studio compiler and on IDEone, but when running on the SPOJ compiler it gets a SIGSEGV error. Here is my code:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        char *getString();
        void lexisort(char **str, int num);
        void countsort(char **str, int i, int num);

        int main()
        {
            int num_test;
            int num_strings;
            char **str;
            int i, j;
            scanf("%d", &num_test);
            for (i = 0; i < num_test; i++) {
                scanf("%d", &num_strings);
                str = (char **)malloc(sizeof(char *) * num_strings);
                for (j = 0; j < num_strings; j++) {
                    str[j] = (char *)malloc(sizeof(char) * 11);
                    scanf("%s", str[j]);
                }
                lexisort(str, num_strings);
                for (j = 0; j < num_strings; j++) {
                    printf("%s\n", str[j]);
                    free(str[j]);
                }
                free(str);
            }
            return 0;
        }

        void lexisort(char **str, int num)
        {
            int i;
            for (i = 9; i >= 0; i--) {
                countsort(str, i, num);
            }
        }

        void countsort(char **str, int i, int num)
        {
            int buff[52] = {0, 0}, k, x;
            char **temp = (char **)malloc(sizeof(char *) * num);
            for (k = 0; k < 52; k++) {
                buff[k] = 0;
            }
            for (k = 0; k < num; k++) {
                if (str[k][i] >= 'A' && str[k][i] <= 'Z') {
                    buff[(str[k][i] - 'A')]++;
                } else {
                    buff[26 + (str[k][i] - 'a')]++;
                }
            }
            for (k = 1; k < 52; k++) {
                buff[k] = buff[k] + buff[k - 1];
            }
            for (k = num - 1; k >= 0; k--) {
                if (str[k][i] >= 'A' && str[k][i] <= 'Z') {
                    x = buff[(str[k][i] - 'A')];
                    temp[x - 1] = str[k];
                    buff[(str[k][i] - 'A')]--;
                } else {
                    x = buff[26 + (str[k][i] - 'a')];
                    temp[x - 1] = str[k];
                    buff[26 + (str[k][i] - 'a')]--;
                }
            }
            for (k = 0; k < num; k++) {
                str[k] = temp[k];
            }
            free(temp);
        }
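
    One defensive check worth adding, offered as a suggestion rather than a confirmed diagnosis: countsort() indexes buff[52] directly from input characters, so any byte outside A-Z/a-z (or reading past a string shorter than 10 characters) writes out of bounds, and judge input is less forgiving than local test input. An illustrative helper:

        /* Map a character to its bucket, or -1 for anything unexpected. */
        static int bucket_of(char c)
        {
            if (c >= 'A' && c <= 'Z') return c - 'A';
            if (c >= 'a' && c <= 'z') return 26 + (c - 'a');
            return -1; /* caller can skip or abort instead of corrupting memory */
        }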

    Read the article

  • How to fix an OpenGL/SDL runtime error which is probably caused by adding textures [closed]

    - by Arturs Lapins
    Hello, I've recently worked with OpenGL and SDL, and when I was adding textures to my GL_QUADS and ran my program, I came across a runtime error. I've searched all over the internet for a fix but couldn't find anything, so I guess I had one more option: asking here. So here is some of my code.

        int loadTexture(std::string fileName)
        {
            SDL_Surface *image = IMG_Load(fileName.c_str());
            SDL_DisplayFormatAlpha(image);
            unsigned int id;
            glGenTextures(1, &id);
            glBindTexture(GL_TEXTURE_2D, &id);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image->w, image->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, image->pixels);
            SDL_FreeSurface(image);
            return id;
        }

    That's my loadTexture function.

        void init()
        {
            glClearColor(0.0, 0.0, 0.0, 1.0);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45.0, 800.0 / 600.0, 1.0, 500.0);
            glMatrixMode(GL_MODELVIEW);
            glEnable(GL_DEPTH_TEST);
            glEnable(GL_TEXTURE_2D);
            tex = loadTexture("test.png");
        }

    That's my init function for OpenGL. By the way, I have declared my tex variable.

        void render()
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity();
            glTranslatef(0.0, 0.0, -10.0);
            glRotatef(rotation, 1.0, 1.0, 1.0);
            glBindTexture(GL_TEXTURE_2D, tex);
            glBegin(GL_QUADS);
            glTexCoord2f(1.0, 0.0); glVertex3f(-2.0, 2.0, 0.0);
            glTexCoord2f(1.0, 1.0); glVertex3f(2.0, 2.0, 0.0);
            glTexCoord2f(1.0, 0.0); glVertex3f(2.0, -2.0, 0.0);
            glTexCoord2f(0.0, 0.0); glVertex3f(-2.0, -2.0, 0.0);
            glEnd();
        }

    That's my render function for all my OpenGL render stuff. The render function is called in the main function, which contains a game loop. Here is the runtime error when I run it with Visual C++:

        Unhandled exception at 0x004ffee9 in OpenGL Crap.exe: 0xC0000005: Access violation reading location 0x05c90000.

    I have only had this error since I added textures. I found where the error occurs; it was at this line:

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image->w, image->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, image->pixels);

    but I have totally no idea what it could be.

    Update: solved, thanks to zero298.
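
    A hedged sketch of the two usual culprits behind an access violation at that exact glTexImage2D call - a NULL surface from IMG_Load (wrong path or working directory) or a surface whose pixel layout is not actually 4-byte RGBA. (Separately, glBindTexture expects the texture name id, not its address &id, assuming the listing reflects the actual code.)

        SDL_Surface *image = IMG_Load(fileName.c_str());
        if (image == NULL) {
            fprintf(stderr, "IMG_Load failed: %s\n", IMG_GetError());
            return 0;
        }
        /* Match the upload format to what the surface really contains. */
        GLenum format = (image->format->BytesPerPixel == 4) ? GL_RGBA : GL_RGB;
        glTexImage2D(GL_TEXTURE_2D, 0, format, image->w, image->h, 0,
                     format, GL_UNSIGNED_BYTE, image->pixels);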

    Read the article

  • Planning to create PDF files in Ruby on Rails

    - by deau
    Hi there,

    A Ruby on Rails app will have access to a number of images and fonts. The images are components of a visual layout which will be stored separately as a set of rules. The rules specify document dimensions along with which images are used and where. The app needs to take these rules, fetch the images, and generate a PDF that is ready for local printing or emailing.

    The fonts will also be important. The user needs to customize the layout by inputting text which will be included in the PDF. The PDF must therefore also contain the desired font, so that the document renders identically across different machines. Each PDF may have many pages. Each page may have different dimensions, but this is not essential. Either way, the ability to manipulate the dimensions and margins given by the PDF is essential.

    The only thing that needs to be regularly changed is the text. If this takes too much development, then the app can store the layouts in third-party PDFs and edit the textual content directly. Eventually, though, this will prove too restrictive on the app's intended functionality, so I would prefer the app to generate the PDFs itself.

    I have never worked with PDFs before and, for the most part, I've never had to output anything to the user outside their monitor. A printed medium could require a very different approach to get the best results. If anyone has any advice on how to approach modelling the PDF format, it would be really appreciated. The technical aspects of printing, such as bleed, resolution and colour, have already been factored into the layouts and images.

    I am aware that PDF is a proprietary file format and I want to use free or open source software. I have seen a number of Ruby libraries for generating PDF files, but because I am new on this scene I have no way to reliably compare them and too little time to implement and test them all. I also have the option of using C to handle this feature, and if this process is intensive then that might be preferred. What should I be thinking about, and how should I plan to implement this?
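
    For orientation only: Prawn is one of the open-source, pure-Ruby PDF libraries that covers the needs described above (page dimensions and margins, embedded TrueType fonts, placed images). A minimal sketch - the file paths, sizes and the text variable are illustrative assumptions:

        require "prawn"

        Prawn::Document.generate("layout.pdf", page_size: [612, 792], margin: 36) do |pdf|
          pdf.font "app/assets/fonts/CustomFont.ttf"                           # embeds the font in the PDF
          pdf.image "app/assets/images/header.png", at: [0, pdf.bounds.top], width: 540
          pdf.text_box user_supplied_text, at: [0, 600], width: 540, height: 200
        end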

    Read the article

  • C++ Stack Overflow

    - by PhilMAN
    Here is some code:

        void main()
        {
            GameEngine ge("phil", "anotherguy");
            string response;
            do {
                ge.playGame();
                cout << endl << "Do you want to (r)eplay the same battle, (s)tart a new battle, or (q)uit? ";
                cin >> response;
            } while (response == "r" || response == "R" || response == "s" || response == "S");
        }

        GameEngine::GameEngine(string name1, string name2)
        {
            p1Name = name1;
            p2Name = name2;
        }

        void GameEngine::playGame()
        {
            cout << "PLAY GAME" << endl;
            Army p1, p2;
            Battlefield testField;
            RuleSet rs;
            int xSize = 13; // Number of rows
            int ySize = 13; // Number of columns
            loadData(p1, p2, testField, rs, xSize, ySize);
            ...
        }

        void GameEngine::loadData(Army& p1, Army& p2, Battlefield& testField, RuleSet& rs, int& xSize, int& ySize)
        {
            string terrain = BattlefieldUtils::pickTerrain();
            string armySplit[14]; // id index 1
            string ruleSplit[19]; // in index 7
            string armyP1, armyP2, ruleSet;
            Skill p1Skills[8];
            Skill p2Skills[8];
            CreatureStack p1Stacks[20];
            CreatureStack p2Stacks[20];
            ...
        }

        CreatureStack() { quantity = 0; isLive = false; id = -1; };
        Army() {};
        Battlefield() {};
        RuleSet() {};

    I have posted every line of code that executes until the program crashes. This code ran fine for a long time; I added some stuff that does not even execute until way after the code I have posted here, and bam, a stack overflow that occurs at the GameEngine::loadData() line

        CreatureStack p2Stacks[20];

    will not go away. What am I doing wrong here? Is that all the stack can handle? I increased the stack size in Visual Studio and got the error to go away, but that slowed things down considerably, so I would rather just get to the source of the issue and fix that.
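
    A sketch of the usual remedy, assuming the real Army/Battlefield/CreatureStack types are large: keep the bulky temporaries in heap-backed containers so loadData()'s stack frame stays small. std::vector stores its elements on the heap, so only a few pointers occupy stack space:

        #include <string>
        #include <vector>

        void GameEngine::loadData(Army& p1, Army& p2, Battlefield& testField,
                                  RuleSet& rs, int& xSize, int& ySize)
        {
            std::vector<std::string> armySplit(14), ruleSplit(19);
            std::vector<Skill> p1Skills(8), p2Skills(8);
            std::vector<CreatureStack> p1Stacks(20), p2Stacks(20);
            // ... same logic; indexing works unchanged: p1Stacks[3], etc.
        }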

    Read the article

  • Execution plan issue requires reset on SQL Server 2005, how to determine cause?

    - by Tony Brandner
    We have a web application that delivers training to thousands of corporate students, running on top of SQL Server 2005. Recently, we started seeing a single specific query in the application go from 1 second to about 30 seconds in execution time, and the application started throwing timeouts in that area.

    Our first thought was that we might have incorrect indexes, so we reviewed the tables and indexes. However, similar queries elsewhere in the application also run quickly, and reviewing the indexes showed us that they were configured as expected. We were able to narrow it down to a single query, not a stored procedure. Running this query in SQL Studio also runs quickly. We tried running the application in a different server environment - a different web server with the same query, parameters and database - and the query still ran slowly.

    The query is a fairly large one, related to determining a student's current list of training. It includes joins and left joins on a dozen tables and subqueries. A few of the tables are fairly large (hundreds of thousands of rows) and some of the other tables are small lookup tables. The query uses a grouping clause and a few where conditions. A few of the tables are quite active and their contents change often, but the volume of added rows doesn't seem extreme.

    These symptoms led us to consider the execution plan. First off, as soon as we reset the execution plan cache with the SQL command 'DBCC FREEPROCCACHE', the problem went away. Unfortunately, the problem started to reoccur within a few days, and it has continued to plague us for a while now. It's usually the same query, but we did appear to see the problem occur in another single query recently. It happens enough to be a nuisance, and we're having a heck of a time trying to fix it since we can't reproduce it in any environment other than production.

    I have downloaded the High Availability guide from Red Gate and I have read up more on execution plans. I hope to run the profiler on the live server, but I'm a bit concerned about the impact. I would like to ask: what is the best way to figure out what is triggering this problem? Has anyone else seen this same issue?
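
    For what it's worth, the pattern described - fast in Management Studio, slow from the app, temporarily cured by DBCC FREEPROCCACHE - is the classic parameter-sniffing signature: a plan compiled for one parameter value gets reused for values with very different row counts. Two low-risk things to try, sketched with placeholder object names:

        -- 1. Keep statistics fresh on the busy tables feeding the query:
        UPDATE STATISTICS dbo.StudentTraining WITH FULLSCAN;

        -- 2. Ask for a fresh plan for just this statement instead of flushing
        --    the whole cache (works on ad-hoc queries as well as procedures):
        SELECT ... the existing query ...
        OPTION (RECOMPILE);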

    Read the article

  • C++ Vector vs Array (Time)

    - by vsha041
    I have two programs here, both doing exactly the same task: setting a boolean array / vector to the value true. The program using a vector takes 27 seconds to run, whereas the program involving an array of 5 times greater size takes less than 1 s. I would like to know the exact reason why there is such a major difference. Are vectors really that inefficient?

    Program using vectors:

        #include <iostream>
        #include <vector>
        #include <ctime>
        using namespace std;

        int main() {
            const int size = 2000;
            time_t start, end;
            time(&start);
            vector<bool> v(size);
            for (int i = 0; i < size; i++) {
                for (int j = 0; j < size; j++) {
                    v[i] = true;
                }
            }
            time(&end);
            cout << difftime(end, start) << " seconds." << endl;
        }

    Runtime - 27 seconds.

    Program using an array:

        #include <iostream>
        #include <ctime>
        using namespace std;

        int main() {
            const int size = 10000; // 5 times the size
            time_t start, end;
            time(&start);
            bool v[size];
            for (int i = 0; i < size; i++) {
                for (int j = 0; j < size; j++) {
                    v[i] = true;
                }
            }
            time(&end);
            cout << difftime(end, start) << " seconds." << endl;
        }

    Runtime - less than 1 second.

    Platform - Visual Studio 2008
    OS - Windows Vista 32 bit SP1
    Processor - Intel(R) Pentium(R) Dual CPU T2370 @ 1.73GHz
    Memory (RAM) - 1.00 GB

    Thanks, Amare
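
    A likely explanation worth verifying: vector<bool> is a packed, bit-level specialization whose operator[] returns a proxy object rather than a bool&, so each write goes through bit-twiddling calls - and in an unoptimized Debug build none of that is inlined. A sketch that isolates the effect by timing vector<char> alongside it, using the same era's <ctime>:

        #include <ctime>
        #include <iostream>
        #include <vector>

        int main() {
            const int size = 2000;
            std::vector<bool> vb(size);   // bit-packed, proxy writes
            std::vector<char> vc(size);   // plain byte writes

            std::clock_t t0 = std::clock();
            for (int i = 0; i < size; i++)
                for (int j = 0; j < size; j++)
                    vb[i] = true;
            std::clock_t t1 = std::clock();
            for (int i = 0; i < size; i++)
                for (int j = 0; j < size; j++)
                    vc[i] = 1;
            std::clock_t t2 = std::clock();

            std::cout << "vector<bool>: " << double(t1 - t0) / CLOCKS_PER_SEC << " s\n"
                      << "vector<char>: " << double(t2 - t1) / CLOCKS_PER_SEC << " s\n";
        }

    Built with /O2 (Release) instead of Debug, the gap shrinks dramatically, which is the first thing to check before comparing the containers themselves.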

    Read the article

  • NVIDIA graphics driver in Ubuntu 12.04

    - by user924501
    So my overall goal is to be able to code CUDA-enabled applications. However, after many days of searching, installation walkthroughs, and reinstalling countless times after driver failures, I'm now here as a last resort. I cannot get Ubuntu 12.04 LTS to install the NVIDIA 295.59 driver for my GeForce GT 540M graphics card. My main system specs are as follows (I believe having the Intel processor may be the problem):

        DELL Laptop XPS L502X
        Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz x 4
        Intel Integrated Graphics
        64 bit
        NVIDIA GeForce GT 540M
        Ubuntu 12.04 LTS

    All other specs are irrelevant, unless I forgot something?

    Methods tried:

    1. aptitude install nvidia-current (all packages). Results: nothing really happened. Nothing in the Additional Drivers menu appeared, nor was the NVIDIA X Server Settings application allowing access, because it thought there was no NVIDIA X server installed.

    2. Downloaded the driver from nvidia.com. Set nomodeset in the grub boot menu through /boot/grub/grub.cfg. Went to the console and turned off lightdm. Installed the driver, but it said the pre-install failed? (Does that mean anything?) Started up lightdm again. Results: NVIDIA X Server Settings still didn't notice it. I even ran nvidia-xconfig multiple times, and I went into the config file to make sure the driver setting said "nvidia".

    3. aptitude install nvidia-173 (all packages). Results: couldn't find the xorg-video-abi-10 virtual package. It was nowhere to be found, and the Ubuntu forums had no answers; lots of people were having this problem.

    This is easily done in Windows: simply download the driver and debug in Visual Studio with no problems at all. I'd like clear step-by-step instructions on how I should go about this. I'm relatively new to Linux, but I can find my way around pretty well, so you aren't talking to a straight-up beginner. Also, if you think another thread may have the answer, please post it, because I was having a hard time searching for my specific type of problem.

    TL;DR: I want access to my GPU so I can code with CUDA while in Ubuntu 12.04 on my 64 bit laptop that also has Intel integrated graphics on the processor.

    Solution:

        sudo apt-add-repository ppa:ubuntu-x-swat/x-updates && sudo apt-get update && sudo apt-get upgrade && sudo apt-get install nvidia-current

    Read the article

  • Large transactions causing "System.Data.SqlClient.SqlException: Timeout expired" error?

    - by Michael
    My application requires a user to log in and allows them to edit a list of things. However, it seems that if the same user repeatedly logs in and out and edits the list, that user will eventually run into a "System.Data.SqlClient.SqlException: Timeout expired" error. I've read comments about increasing the timeout period, but I've also read a comment that it can be caused by uncommitted transactions - and I do have one going in the application. I'll provide the code I'm working with; there is an IF statement in there that I was a little iffy about, but it seemed like a reasonable thing to do.

    Here's what's going on: a list of objects to update or add is passed into the database layer. New objects created in the application are given an ID of 0, while existing objects have their own IDs generated from the database. If the user chooses to delete some objects, their IDs are stored in a separate list of integers. Once the user is ready to save their changes, the two lists are passed into this method. Via the IF statement, objects with an ID of 0 are added (using the Add stored procedure) and objects with non-zero IDs are updated (using the Update stored procedure). After all this, a FOR loop goes through the integers in the "removal" list and uses the Delete stored procedure to remove them. A transaction wraps all of this.

        Public Shared Sub UpdateSomethings(ByVal SomethingList As List(Of Something), ByVal RemovalList As List(Of Integer))
            Using DBConnection As New SqlConnection(conn)
                DBConnection.Open()
                Dim MyTransaction As SqlTransaction
                MyTransaction = DBConnection.BeginTransaction()
                Try
                    For Each SomethingItem As Something In SomethingList
                        Using MyCommand As New SqlCommand()
                            MyCommand.Connection = DBConnection
                            If SomethingItem.ID > 0 Then
                                MyCommand.CommandText = "UpdateSomething"
                            Else
                                MyCommand.CommandText = "AddSomething"
                            End If
                            MyCommand.Transaction = MyTransaction
                            MyCommand.CommandType = CommandType.StoredProcedure
                            With MyCommand.Parameters
                                If MyCommand.CommandText = "UpdateSomething" Then
                                    .Add("@id", SqlDbType.Int).Value = SomethingItem.ID
                                End If
                                .Add("@stuff", SqlDbType.Varchar).Value = SomethingItem.Stuff
                            End With
                            MyCommand.ExecuteNonQuery()
                        End Using
                    Next
                    For Each ID As Integer In RemovalList
                        Using MyCommand As New SqlCommand("DeleteSomething", DBConnection)
                            MyCommand.Transaction = MyTransaction
                            MyCommand.CommandType = CommandType.StoredProcedure
                            With MyCommand.Parameters
                                .Add("@id", SqlDbType.Int).Value = ID
                            End With
                            MyCommand.ExecuteNonQuery()
                        End Using
                    Next
                    MyTransaction.Commit()
                Catch ex As Exception
                    MyTransaction.Rollback()
                    'Exception handling goes here
                End Try
            End Using
        End Sub

    There are three stored procedures used here, as well as some looping, so I can see how something could hold everything up if the list is large enough. Other users can log in to the system at the same time just fine, though. I'm using Visual Studio 2008 to debug and SQL Server 2000 for the database.
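
    Since one transaction wraps every insert, update and delete, a long-running save can hold locks until Commit and block subsequent statements; raising MyCommand.CommandTimeout (the default is 30 seconds) is only a stopgap. A sketch of the two server-side checks that show who is actually blocking whom while the save hangs:

        -- Run from a second connection (e.g. Query Analyzer) while the timeout is occurring:
        EXEC sp_who2      -- the BlkBy column shows which session is blocking yours
        DBCC OPENTRAN     -- reports the oldest active (possibly uncommitted) transaction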

    Read the article

  • Databinding in combo box

    - by muralekarthick
    Hi, I have two forms, a class, and queries written as stored procedures.

    Stored procedure:

        ALTER PROCEDURE [dbo].[Payment_Join]
            @reference nvarchar(20)
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- Insert statements for procedure here
            SELECT p.iPaymentID, p.nvReference, pt.nvPaymentType, p.iAmount, m.nvMethod, u.nvUsers, p.tUpdateTime
            FROM Payment p, tblPaymentType pt, tblPaymentMethod m, tblUsers u
            WHERE p.nvReference = @reference
              AND p.iPaymentTypeID = pt.iPaymentTypeID
              AND p.iMethodID = m.iMethodID
              AND p.iUsersID = u.iUsersID
        END

    payment.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Data;
        using System.Data.SqlClient;
        using System.Windows.Forms;

        namespace Finance
        {
            class payment
            {
                string connection = global::Finance.Properties.Settings.Default.PaymentConnectionString;

                #region Fields
                int _paymentid = 0;
                string _reference = string.Empty;
                string _paymenttype;
                double _amount = 0;
                string _paymentmethod;
                string _employeename;
                DateTime _updatetime = DateTime.Now;
                #endregion

                #region Properties
                public int paymentid { get { return _paymentid; } set { _paymentid = value; } }
                public string reference { get { return _reference; } set { _reference = value; } }
                public string paymenttype { get { return _paymenttype; } set { _paymenttype = value; } }
                public string paymentmethod { get { return _paymentmethod; } set { _paymentmethod = value; } }
                public double amount { get { return _amount; } set { _amount = value; } }
                public string employeename { get { return _employeename; } set { _employeename = value; } }
                public DateTime updatetime { get { return _updatetime; } set { _updatetime = value; } }
                #endregion

                #region Constructor
                public payment() { }

                public payment(string refer)
                {
                    reference = refer;
                }

                public payment(int paymentID, string Reference, string Paymenttype, double Amount, string Paymentmethod, string Employeename, DateTime Time)
                {
                    paymentid = paymentID;
                    reference = Reference;
                    paymenttype = Paymenttype;
                    amount = Amount;
                    paymentmethod = Paymentmethod;
                    employeename = Employeename;
                    updatetime = Time;
                }
                #endregion

                #region Methods
                public void Save()
                {
                    try
                    {
                        SqlConnection connect = new SqlConnection(connection);
                        SqlCommand command = new SqlCommand("payment_create", connect);
                        command.CommandType = CommandType.StoredProcedure;
                        command.Parameters.Add(new SqlParameter("@reference", reference));
                        command.Parameters.Add(new SqlParameter("@paymenttype", paymenttype));
                        command.Parameters.Add(new SqlParameter("@amount", amount));
                        command.Parameters.Add(new SqlParameter("@paymentmethod", paymentmethod));
                        command.Parameters.Add(new SqlParameter("@employeename", employeename));
                        command.Parameters.Add(new SqlParameter("@updatetime", updatetime));
                        connect.Open();
                        command.ExecuteScalar();
                        connect.Close();
                    }
                    catch { }
                }

                public void Load(string reference)
                {
                    try
                    {
                        SqlConnection connect = new SqlConnection(connection);
                        SqlCommand command = new SqlCommand("Payment_Join", connect);
                        command.CommandType = CommandType.StoredProcedure;
                        command.Parameters.Add(new SqlParameter("@Reference", reference));
                        //MessageBox.Show("ref = " + reference);
                        connect.Open();
                        SqlDataReader reader = command.ExecuteReader();
                        while (reader.Read())
                        {
                            this.reference = Convert.ToString(reader["nvReference"]);
                            //MessageBox.Show(reference);
                            //MessageBox.Show("here");
                            //MessageBox.Show("payment type id = " + reader["nvPaymentType"]);
                            //MessageBox.Show("here1");
                            this.paymenttype = Convert.ToString(reader["nvPaymentType"]);
                            //MessageBox.Show(paymenttype.ToString());
                            this.amount = Convert.ToDouble(reader["iAmount"]);
                            this.paymentmethod = Convert.ToString(reader["nvMethod"]);
                            this.employeename = Convert.ToString(reader["nvUsers"]);
                            this.updatetime = Convert.ToDateTime(reader["tUpdateTime"]);
                        }
                        reader.Close();
                    }
                    catch (Exception ex)
                    {
                        MessageBox.Show("Check it again" + ex);
                    }
                }
                #endregion
            }
        }

    I have already bound the combo box items through the designer. When I run the application, the reference is populated in form 2 and the combo box is just populated with its items, not the particular value that was fetched. I'm new to C#, so help me get familiar with this.
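
    A sketch of what is likely missing, assuming the combo boxes only have their Items filled in the designer: after Load() runs, the fetched values still have to be pushed into the controls, since populating the items list never selects anything. The control names here are assumptions:

        payment p = new payment();
        p.Load(referenceTextBox.Text);

        // Select the fetched value among the designer-populated items.
        paymentTypeComboBox.SelectedItem   = p.paymenttype;
        paymentMethodComboBox.SelectedItem = p.paymentmethod;
        amountTextBox.Text                 = p.amount.ToString();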

    Read the article

  • Property trigger in XAML for IsMouseOver fails to set Border.Background

    - by Mal Ross
    I'm currently trying to create a ControlTemplate for the Button class in WPF, replacing the usual visual tree with something that makes the button look similar to the little (X) close icon on Google Chrome's tabs. I decided to use a Path object in XAML to achieve the effect. Using a property trigger, the control responds to a change in the IsMouseOver property by setting the icon's red background. Here's the XAML from a test app:

        <Window x:Class="Widgets.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Window1" Height="300" Width="300">
            <Window.Resources>
                <Style x:Key="borderStyle" TargetType="Border">
                    <Style.Triggers>
                        <Trigger Property="IsMouseOver" Value="True">
                            <Setter Property="Background">
                                <Setter.Value>
                                    <SolidColorBrush Color="#CC0000"/>
                                </Setter.Value>
                            </Setter>
                        </Trigger>
                    </Style.Triggers>
                </Style>
                <ControlTemplate x:Key="closeButtonTemplate" TargetType="Button">
                    <Border Width="12" Height="12" CornerRadius="6"
                            BorderBrush="#AAAAAA" Background="Transparent"
                            Style="{StaticResource borderStyle}" ToolTip="Close">
                        <Viewbox Margin="2.75">
                            <Path Data="M 0,0 L 10,10 M 0,10 L 10,0"
                                  Stroke="{Binding BorderBrush, RelativeSource={RelativeSource FindAncestor, AncestorType=Border, AncestorLevel=1}}"
                                  StrokeThickness="1.8"/>
                        </Viewbox>
                    </Border>
                </ControlTemplate>
            </Window.Resources>
            <Grid Background="White">
                <Button Template="{StaticResource closeButtonTemplate}"/>
            </Grid>
        </Window>

    Note that the circular background is always there - it's just transparent when the mouse isn't over it. The problem with this is that the trigger just isn't working; nothing changes in the button's appearance. However, if I remove the Background="Transparent" value from the Border object in the ControlTemplate, the trigger does work (albeit only when over the 'X'). I really can't explain this. Setters for any other properties placed in the borderStyle resource work fine, but the Background setter fails as soon as the default background is specified in the ControlTemplate. Any ideas why it's happening and how I can fix it? I know I could easily replace this code with, for example, a .PNG-based image, but I want to understand why the current implementation isn't working. Thanks! :)
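
    This behaviour follows from WPF's dependency property value precedence: a value set locally on an element (Background="Transparent" on the Border) always outranks a value coming from a style trigger, so the trigger fires but its setter is ignored. A sketch of the usual fix - move the default into the style so the default and the trigger value compete at the same precedence level:

        <Style x:Key="borderStyle" TargetType="Border">
            <Setter Property="Background" Value="Transparent"/>
            <Style.Triggers>
                <Trigger Property="IsMouseOver" Value="True">
                    <Setter Property="Background" Value="#CC0000"/>
                </Trigger>
            </Style.Triggers>
        </Style>

        <!-- ...and remove Background="Transparent" from the Border itself. -->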

    Read the article

  • How to refresh xmlDataProvider when xml document changes at runtime in WPF?

    - by Kajsa
    I am trying to make an image viewer / album creator in Visual Studio with WPF. The image paths for each album are stored in an XML document, which I bind to in order to show the images from each album in a listbox. The problem comes when I add an image or an album at runtime and write it to the XML document: I can't seem to make the bindings to the XML document update to show the new images and albums as well. Calling Refresh() on the XmlDataProvider doesn't change anything. I don't wish to redo the binding of the XmlDataProvider, just make it read from the same source again.

    XAML:

        ...
        <Grid.DataContext>
            <XmlDataProvider x:Name="Images" Source="Data/images.xml"
                             XPath="/albums/album[@name='no album']/image" />
        </Grid.DataContext>
        ...
        <Label Grid.Row="1" Grid.Column="0" HorizontalAlignment="Right" VerticalAlignment="Bottom"
               Padding="0" Margin="0,0,0,5" Content="{x:Static resx:Resource.AddImageLabel}"/>
        <TextBox Grid.Row="1" Grid.Column="1" HorizontalAlignment="Stretch" VerticalAlignment="Bottom"
                 Name="newImagePath" Margin="0" />
        <Button Grid.Row="1" Grid.Column="2" HorizontalAlignment="Left" VerticalAlignment="Bottom"
                Name="newImagePathButton" Content="{x:Static resx:Resource.BrowseImageButton}"
                Click="newImagePathButton_Click" />
        ...
        <ListBox Grid.Column="0" Grid.ColumnSpan="4" Grid.Row="3" HorizontalAlignment="Stretch"
                 Name="thumbnailList" VerticalAlignment="Bottom" IsSynchronizedWithCurrentItem="True"
                 ItemsSource="{Binding BindingGroupName=Images}" SelectedIndex="0"
                 Background="#FFE0E0E0" Height="110">
        ...

    Code-behind:

        private void newImagePathButton_Click(object sender, RoutedEventArgs e)
        {
            string imagePath = newImagePath.Text;
            albumCreator.addImage(imagePath, null);

            // Reset import image elements to default
            newImagePath.Text = "";

            // Refresh thumbnail listbox
            Images.Refresh();
            Console.WriteLine("Image added!");
        }

        public void addImage(string source, XmlElement parent)
        {
            if (parent == null)
            {
                // Use default album
                parent = (XmlElement)root.FirstChild;
            }

            // Create image element with source element within
            XmlElement newImage = xmlDoc.CreateElement(null, "image", null);
            XmlElement newSource = xmlDoc.CreateElement(null, "source", null);
            newSource.InnerText = source;
            newImage.AppendChild(newSource);

            // Add image element to parent
            parent.AppendChild(newImage);
            xmlDoc.Save(xmlFile);
        }

    Thank you very much for any help!
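
    A hedged sketch of one likely fix: the code edits its own XmlDocument (xmlDoc) and saves it to disk, while the XmlDataProvider keeps serving the copy it loaded from Source. Handing the provider the live in-memory document keeps the two in sync, so changes surface without re-reading the file (Source and Document are alternative inputs on XmlDataProvider):

        // At startup, after loading xmlDoc from Data/images.xml:
        Images.Document = xmlDoc;   // replaces the Source-based, cached copy

        // After each addImage(...) call:
        xmlDoc.Save(xmlFile);       // keep the file up to date for the next run
        Images.Refresh();           // re-query the XPath against the live document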

    Read the article

  • In .NET Xml Serialization, is it possible to serialize a class with an enum property with different

    - by Lasse V. Karlsen
    I have a class containing a list property, where the list contains objects that have an enum property. When I serialize this, it looks like this:

        <?xml version="1.0" encoding="ibm850"?>
        <test>
          <events>
            <test-event type="changing" />
            <test-event type="changed" />
          </events>
        </test>

    Is it possible, through attributes or similar, to get the XML to look like this?

        <?xml version="1.0" encoding="ibm850"?>
        <test>
          <events>
            <changing />
            <changed />
          </events>
        </test>

    Basically, use the property value of the enum to determine the tag name. Is using a class hierarchy (i.e. creating subclasses instead of using the property value) the only way?

    Edit: After testing, it seems even a class hierarchy won't actually work. If there is a way to structure the classes to get the output I want, even with subclasses, that is also an acceptable answer.

    Here's a sample program that will output the above XML (remember to hit Ctrl+F5 to run in Visual Studio, otherwise the program window will close immediately):

        using System;
        using System.Collections.Generic;
        using System.Xml.Serialization;

        namespace ConsoleApplication18
        {
            public enum TestEventTypes
            {
                [XmlEnum("changing")]
                Changing,

                [XmlEnum("changed")]
                Changed
            }

            [XmlType("test-event")]
            public class TestEvent
            {
                [XmlAttribute("type")]
                public TestEventTypes Type { get; set; }
            }

            [XmlType("test")]
            public class Test
            {
                private List<TestEvent> _Events = new List<TestEvent>();

                [XmlArray("events")]
                public List<TestEvent> Events
                {
                    get { return _Events; }
                }
            }

            class Program
            {
                static void Main(string[] args)
                {
                    Test test = new Test();
                    test.Events.Add(new TestEvent { Type = TestEventTypes.Changing });
                    test.Events.Add(new TestEvent { Type = TestEventTypes.Changed });
                    XmlSerializer serializer = new XmlSerializer(typeof(Test));
                    XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
                    ns.Add("", "");
                    serializer.Serialize(Console.Out, test, ns);
                }
            }
        }
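
    For what it's worth, a class hierarchy can produce this output if the list declares one [XmlArrayItem] per subclass, which is how XmlSerializer maps runtime types to element names. A hedged sketch, not tested against the exact program above:

        public class TestEvent { }

        public class ChangingEvent : TestEvent { }
        public class ChangedEvent : TestEvent { }

        [XmlType("test")]
        public class Test
        {
            [XmlArray("events")]
            [XmlArrayItem("changing", typeof(ChangingEvent))]
            [XmlArrayItem("changed", typeof(ChangedEvent))]
            public List<TestEvent> Events = new List<TestEvent>();

            // Serializing a Test whose Events holds a ChangingEvent and a ChangedEvent
            // should yield <events><changing /><changed /></events>.
        }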

    Read the article

  • How to ensure structures are completely initialized (by name) in GCC?

    - by Steven Spark
    How do I ensure each and every field of my structures is initialized in GCC when using designated initializers? (I'm especially interested in function pointers.) (I'm using C, not C++.) Here is an example:

        typedef struct
        {
            int a;
            int b;
        } foo_t;

        typedef struct
        {
            void (*Start)(void);
            void (*Stop)(void);
        } bar_t;

        foo_t fooo = { 5 };
        foo_t food = { .b = 4 };
        bar_t baro = { NULL };
        bar_t bard = { .Start = NULL };

    -Wmissing-field-initializers does not help at all. It warns for fooo only in GCC (MinGW 4.7.3, 4.8.1), and clang does only marginally better (no warnings for food and bard). I'm sure there is a reason for not producing warnings for designated initializers (even when I explicitly ask for them), but I want/need them.

    I do not want to initialize structures based on order/position, because that is more error prone (for example, swapping Start and Stop won't even give a warning). And neither gcc nor clang will give any warning that I failed to explicitly initialize a field (when initializing by name). I also don't want to litter my code with if(x.y==NULL) lines, for multiple reasons, one of which is that I want compile-time warnings and not runtime errors.

    At least splint will give me warnings in all 4 cases, but unfortunately I cannot use splint all the time (it chokes on some of the code - it fails to parse some C99 and GCC extensions).

    Note: if I use a real function instead of NULL, GCC will also show a warning for baro (but not bard).

    I searched Google and Stack Overflow but only found related questions, not an answer for this specific problem. The best match I found is 'Ensure that all elements in a structure are initialized', which asks pretty much the same question but has no satisfying answer.

    Is there a better way of dealing with this that I have not mentioned? (Maybe another code analysis tool? Preferably something (free) that can be integrated into Eclipse or Visual Studio...)
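
    One workaround that does not depend on any warning flag, offered as a suggestion rather than a known-best practice: funnel initialization through a constructor-style function, so every field becomes a parameter and a forgotten one is a hard "too few arguments" compile error:

        typedef struct
        {
            void (*Start)(void);
            void (*Stop)(void);
        } bar_t;

        static inline bar_t bar_make(void (*start)(void), void (*stop)(void))
        {
            bar_t b = { .Start = start, .Stop = stop };
            return b;
        }

        void use(void)
        {
            /* bar_t bad = bar_make(NULL);        -- error: too few arguments  */
            bar_t good = bar_make(NULL, NULL);    /* both fields must be given */
            (void)good;
        }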

    Read the article

  • C++ program debugs well with Cygwin4 (under NetBeans 7.2) but not with MinGW (under Qt 4.8.1)

    - by GoldenAxe
    I have a C++ program which takes a map text file and loads it into a graph data structure I have made. I am using Qt, as I needed a cross-platform program with a GUI as well as a visual representation of the map. I have several maps in different sizes (8x8 to 4096x4096). I am using an unordered_map with a vector as key and a vertex as value, passing hash (1) and equal functions which I wrote to the unordered_map on creation.

    Under Qt I am debugging my program with Qt 4.8.1 for desktop MinGW (Qt SDK). The program works and debugs fine until I try the largest map of 4096x4096; then the program stops with the following error: "the inferior stopped because it received a signal from operating system". When debugging, the program halts at the hash function used inside the unordered_map - not as part of an insertion, but at a getter (2).

    Under NetBeans IDE 7.2 and Cygwin4, everything works fine (debug and run).

    Some code info:

        typedef std::vector<double> coordinate;
        typedef std::unordered_map<coordinate const*, Vertex<Element>*, container_hash, container_equal> vertexsContainer;
        vertexsContainer *m_vertexes;

    (1) The hash function:

        struct container_hash
        {
            size_t operator()(coordinate const *cord) const
            {
                size_t sum = 0;
                std::ostringstream ss;
                for (auto it = cord->begin(); it != cord->end(); ++it)
                {
                    ss << *it;
                }
                sum = std::hash<std::string>()(ss.str());
                return sum;
            }
        };

    (2) The getter:

        template <class Element>
        Vertex<Element> *Graph<Element>::getVertex(const coordinate &cord)
        {
            try
            {
                Vertex<Element> *v = m_vertexes->at(&cord);
                return v;
            }
            catch (std::exception& e)
            {
                return NULL;
            }
        }

    I was thinking it might be a memory issue at first, so before trying NetBeans I checked it with Qt on my friend's PC with 16GB of RAM and got the same error. Thanks.
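
    Independent of the crash itself, hashing through an ostringstream allocates on every lookup, which is punishing on a 4096x4096 map and is worth replacing. A sketch using the well-known hash-combine mixing pattern over the coordinate values directly:

        struct container_hash
        {
            size_t operator()(coordinate const *cord) const
            {
                size_t seed = 0;
                for (auto it = cord->begin(); it != cord->end(); ++it)
                {
                    // boost::hash_combine-style mixing, no allocations
                    seed ^= std::hash<double>()(*it) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
                }
                return seed;
            }
        };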

    Read the article

  • java.lang.NumberFormatException: unable to parse '' as integer one more time

    - by Quzziy
    I will take two numbers from the user, but these numbers from the EditTexts must be converted to int. I think it should be working, but I still have a problem running the code from Android Studio. LogCat shows an error on the line with:

        int wiek = Integer.parseInt(wiekEditText.getText().toString());

    Below is my full Android code:

        public class MyActivity extends ActionBarActivity {
            int Wynik;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_my);

                int Tmax, RT;
                EditText wiekEditText = (EditText) findViewById(R.id.inWiek);
                EditText tspoczEditText = (EditText) findViewById(R.id.inTspocz);
                int wiek = Integer.parseInt(wiekEditText.getText().toString());
                int tspocz = Integer.parseInt(tspoczEditText.getText().toString());
                Tmax = 220 - wiek;
                RT = Tmax - tspocz;
                Wynik = 70 * RT / 100 + tspocz;

                final EditText tempWiekEdit = wiekEditText;

                TabHost tabHost = (TabHost) findViewById(R.id.tabHost); // TabHost from the layout
                tabHost.setup();
                TabHost.TabSpec tabSpec = tabHost.newTabSpec("Calc");
                tabSpec.setContent(R.id.Calc);
                tabSpec.setIndicator("Calc");
                tabHost.addTab(tabSpec);
                tabSpec = tabHost.newTabSpec("Hints");
                tabSpec.setContent(R.id.Hints);
                tabSpec.setIndicator("Hints");
                tabHost.addTab(tabSpec);

                final Button Btn = (Button) findViewById(R.id.Btn);
                Btn.setOnClickListener(new View.OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        Toast.makeText(getApplicationContext(), "blablabla" + "Wynik", Toast.LENGTH_SHORT).show();
                    }
                });

                wiekEditText.addTextChangedListener(new TextWatcher() {
                    @Override
                    public void beforeTextChanged(CharSequence s, int start, int count, int after) { }

                    @Override
                    public void onTextChanged(CharSequence s, int start, int before, int count) {
                        Btn.setEnabled(!(tempWiekEdit.getText().toString().trim().isEmpty()));
                    }

                    @Override
                    public void afterTextChanged(Editable s) { }
                });
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                // Inflate the menu; this adds items to the action bar if it is present.
                getMenuInflater().inflate(R.menu.my, menu);
                return true;
            }

            @Override
            public boolean onOptionsItemSelected(MenuItem item) {
                // Handle action bar item clicks here. The action bar will
                // automatically handle clicks on the Home/Up button, so long
                // as you specify a parent activity in AndroidManifest.xml.
                int id = item.getItemId();
                if (id == R.id.action_settings) {
                    return true;
                }
                return super.onOptionsItemSelected(item);
            }
        }
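
    The exception fits the structure above: onCreate() runs before the user has typed anything, so getText() returns "" and Integer.parseInt("") throws NumberFormatException. A sketch moving the parsing into the click handler, where input actually exists (the EditText references must be final for the anonymous class):

        final EditText wiekEditText = (EditText) findViewById(R.id.inWiek);
        final EditText tspoczEditText = (EditText) findViewById(R.id.inTspocz);
        final Button Btn = (Button) findViewById(R.id.Btn);

        Btn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                int wiek = Integer.parseInt(wiekEditText.getText().toString());
                int tspocz = Integer.parseInt(tspoczEditText.getText().toString());
                int tmax = 220 - wiek;
                int rt = tmax - tspocz;
                Wynik = 70 * rt / 100 + tspocz;
                Toast.makeText(getApplicationContext(), "Wynik: " + Wynik, Toast.LENGTH_SHORT).show();
            }
        });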

    Read the article

  • Detect if class has overloaded function fails on Comeau compiler

    - by Frank
    Hi everyone, I'm trying to use SFINAE to detect if a class has an overloaded member function that takes a certain type. The code I have seems to work correctly in Visual Studio and GCC, but does not compile using the Comeau online compiler. Here is the code I'm using:

        #include <stdio.h>

        // Comeau doesn't have boost, so define our own enable_if_c
        template<bool value>
        struct enable_if_c { typedef void type; };

        template<>
        struct enable_if_c<false> {};

        // Class that has the overloaded member function
        class TestClass
        {
        public:
            void Func(float value) { printf("%f\n", value); }
            void Func(int value) { printf("%i\n", value); }
        };

        // Struct to detect if TestClass has an overloaded member function for type T
        template<typename T>
        struct HasFunc
        {
            template<typename U, void (TestClass::*)( U )>
            struct SFINAE {};

            template<typename U>
            static char Test(SFINAE<U, &TestClass::Func>*);

            template<typename U>
            static int Test(...);

            static const bool Has = sizeof(Test<T>(0)) == sizeof(char);
        };

        // Use enable_if_c to only allow the function call if TestClass has a valid overload for T
        template<typename T>
        typename enable_if_c<HasFunc<T>::Has>::type CallFunc(TestClass &test, T value)
        {
            test.Func(value);
        }

        int main()
        {
            float value1 = 0.0f;
            int value2 = 0;
            TestClass testClass;
            CallFunc(testClass, value1); // Should call TestClass::Func(float)
            CallFunc(testClass, value2); // Should call TestClass::Func(int)
        }

    The error message is: no instance of function template "CallFunc" matches the argument list. It seems that HasFunc::Has is false for int and float, when it should be true. Is this a bug in the Comeau compiler? Am I doing something that's not standard? And if so, what do I need to do to fix it?
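
    For comparison only (C++11, so not applicable to every compiler mentioned above): expression SFINAE expresses the same check without the overloaded-member-pointer non-type template argument, which is exactly the construct compilers of that era disagreed about:

        #include <type_traits>
        #include <utility>

        template<typename T>
        class HasFunc11
        {
            template<typename U>
            static auto test(int)
                -> decltype(std::declval<TestClass&>().Func(std::declval<U>()), std::true_type());

            template<typename U>
            static std::false_type test(...);

        public:
            static const bool Has = decltype(test<T>(0))::value;
        };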

    Read the article

  • C++ class is not recognizing string data type

    - by reallythecrash
    I'm working on a program from my C++ textbook, and this is the first time I've really run into trouble. I just can't seem to see what is wrong here. Visual Studio is telling me: Error: identifier "string" is undefined. I separated the program into three files: a header file for the class specification, a .cpp file for the class implementation, and the main program file. These are the instructions from my book:

        Write a class named Car that has the following member variables:
        - year. An int that holds the car's model year.
        - make. A string that holds the make of the car.
        - speed. An int that holds the car's current speed.

        In addition, the class should have the following member functions:
        - Constructor. The constructor should accept the car's year and make as arguments and assign these values to the object's year and make member variables. The constructor should initialize the speed member variable to 0.
        - Accessors. Appropriate accessor functions should be created to allow values to be retrieved from an object's year, make and speed member variables.

    There are more instructions, but they are not necessary to get this part to work. Here is my source code:

        // File Car.h -- Car class specification file
        #ifndef CAR_H
        #define CAR_H

        class Car
        {
        private:
            int year;
            string make;
            int speed;

        public:
            Car(int, string);
            int getYear();
            string getMake();
            int getSpeed();
        };
        #endif

        // File Car.cpp -- Car class function implementation file
        #include "Car.h"

        // Default Constructor
        Car::Car(int inputYear, string inputMake)
        {
            year = inputYear;
            make = inputMake;
            speed = 0;
        }

        // Accessors
        int Car::getYear()
        {
            return year;
        }

        string Car::getMake()
        {
            return make;
        }

        int Car::getSpeed()
        {
            return speed;
        }

        // Main program
        #include <iostream>
        #include <string>
        #include "Car.h"
        using namespace std;

        int main()
        {
        }

    I haven't written anything in the main program yet, because I can't get the class to compile. I've only linked the header file to the main program. Thanks in advance to all who take the time to investigate this problem for me.
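
    A sketch of the standard fix: Car.h (and Car.cpp) are compiled without seeing main.cpp's includes or its using directive, so the header has to pull in <string> itself and spell the type std::string:

        // File Car.h -- Car class specification file
        #ifndef CAR_H
        #define CAR_H

        #include <string>   // string lives in namespace std

        class Car
        {
        private:
            int year;
            std::string make;
            int speed;

        public:
            Car(int, std::string);
            int getYear();
            std::string getMake();
            int getSpeed();
        };
        #endif

    Car.cpp then needs the same std:: qualifiers on its definitions.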

    Read the article

  • Where can I find "canonical" sample programs that give quick refreshers for any given language? [on hold]

    - by acheong87
    Note to those close-voting this question: I understand this isn't a conventional programming question, and I can agree with the reasoning that it's in the subjective domain (like best-of lists). In other ways, though, I think it's appropriate because, while it's not "a specific programming problem," nor concerning "a software algorithm," nor (strictly) concerning "software tools commonly used by programmers," I think it is a "practical, answerable [problem that is] unique to the programming profession," and I think it is "based on an actual [problem I] face." This note will be removed if the question gains popularity; this question will be deleted otherwise.

    I've been wanting this for some time now, because both approaches of (a) Googling for samples as I write every other line of code and (b) just winging it and seeing what errors crop up distract me from coding efficiently.

    I spend most of my time developing in C++, PHP, or JavaScript, and every once in a while I have to do something in, say, VBA. In those times, it'd be convenient if I could just put up some sample code on a second monitor: something in between a cheat sheet (often too compact, and not resembling anything that could actually compile/run) and a language reference (often too verbose or segmented, requiring extra steps to search or click through an index), so I can just glance at it and recall things, like how to loop through non-empty cells in a column.

    I think there's a hidden benefit to seeing formed code: it triggers the right spots in our brains to get back into a language we only need to brush up on. Similar in spirit is how http://ideone.com lets you click "Template" in any given language so you can get started without even doing a search. That template alone tells a lot, sometimes! Case sensitivity, whitespace conventions, identifier conventions, the spelling of certain types, etc.

    I couldn't find a resource that pulled together such samples, so if there indeed doesn't exist such a repository, I was hoping this question would inspire professionals and experts to contribute links to the most useful sample code they've used for just this purpose: a keep-on-the-side, form-as-well-as-content, compilable/executable reminder of a language's basic and oft-used features.

    Personally, I am interested in seeing "samplers" for: VBA, Perl, Python, Java, C# (though for some of these, autocompleters in Eclipse, Visual Studio, etc. help enough), awk, and sed. I'm tagging c++, php, and javascript because these are languages for which I'd best be able to evaluate whether proffered sample code matches what I had in mind.

    Read the article

  • Differences Between NHibernate and Entity Framework

    - by Ricardo Peres
    Introduction

    NHibernate and Entity Framework are two of the most popular O/RM frameworks in the .NET world. Although they share some functionality, there are aspects on which they are quite different. This post will describe these differences and will hopefully help you get started with the one you know less well. Mind you, this is a personal selection of features to compare; it is by no means an exhaustive list.

    History

    First, a bit of history. NHibernate is an open-source project that was first ported from Java's venerable Hibernate framework, one of the first O/RM frameworks, but nowadays it is not tied to it; for example, it has .NET-specific features and has evolved in different ways from those of its Java counterpart. The current version is 3.3, with 3.4 on the horizon. It currently targets .NET 3.5, but can be used as well in .NET 4 - it just makes no use of any of its specific functionality. You can find its home page at NHForge.

    Entity Framework 1 came out with .NET 3.5 and is now on its second major version, despite being version 4. Code First sits on top of it but came separately, and will also continue to be released out of line with major .NET distributions. It is currently on version 4.3.1, and version 5 will be released together with .NET Framework 4.5. All versions target the current version of .NET at the time of their release. Its home is located at MSDN.

    Architecture

    In NHibernate, there is a separation between the unit of work and the configuration and model instances. You start off by creating a Configuration object, where you specify all global NHibernate settings such as the database and dialect to use, the batch sizes, the mappings, etc., then you build an ISessionFactory from it. The ISessionFactory holds model and metadata that is tied to a particular database and to the settings that came from the Configuration object, and there will typically be only one instance of each in a process. Finally, you create instances of ISession from the ISessionFactory, which is the NHibernate representation of the unit of work and identity map. This is a lightweight object; it basically opens and closes a database connection as required and keeps track of the entities associated with it. ISession objects are cheap to create and dispose, because all of the model complexity is stored in the ISessionFactory and Configuration objects.

    As for Entity Framework, the ObjectContext/DbContext holds the configuration, model and acts as the unit of work, holding references to all of the known entity instances. This class is therefore not as lightweight as its NHibernate counterpart, and it is not uncommon to see examples where an instance is cached in a field.

    Mappings

    Both NHibernate and Entity Framework (Code First) support the use of POCOs to represent entities; no base classes are required (or even possible, in the case of NHibernate). As for mapping to and from the database, NHibernate supports three types of mappings:

    - XML-based, which have the advantage of not tying the entity classes to a particular O/RM; the XML files can be deployed as files on the file system or as embedded resources in an assembly;
    - Attribute-based, for keeping both the entities and database details in the same place, at the expense of polluting the entity classes with NHibernate-specific attributes;
    - Strongly-typed code-based, which allows dynamic creation of the model and strongly types it, so that if, for example, a property name changes, the mapping will also be updated.
    Entity Framework can use:

    - Attribute-based mappings (although attributes cannot express all of the available possibilities - for example, cascading);
    - Strongly-typed code mappings.

    Database Support

    With NHibernate you can use mostly any database you want, including: SQL Server, SQL Server Compact, SQL Server Azure, Oracle, DB2, PostgreSQL, MySQL, Sybase Adaptive Server/SQL Anywhere, Firebird, SQLite, Informix, and any database reachable through OLE DB or ODBC.

    Out of the box, Entity Framework only supports SQL Server, but a number of providers exist, both free and commercial, for some of the most used databases, such as Oracle and MySQL. See a list here.

    Inheritance Strategies

    Both NHibernate and Entity Framework support the three canonical inheritance strategies: Table Per Type Hierarchy (Single Table Inheritance), Table Per Type (Class Table Inheritance) and Table Per Concrete Type (Concrete Table Inheritance).

    Associations

    Regarding associations, both support one to one, one to many and many to many. However, NHibernate offers far more collection types:

    - Bags of entities or values: unordered, possibly with duplicates;
    - Lists of entities or values: ordered, indexed by a number column;
    - Maps of entities or values: indexed by either an entity or any value;
    - Sets of entities or values: unordered, no duplicates;
    - Arrays of entities or values: indexed, immutable.

    Querying

    NHibernate exposes several querying APIs:

    - LINQ is probably the most used nowadays, and really does not need to be introduced;
    - Hibernate Query Language (HQL) is a database-agnostic, object-oriented, SQL-like language that has existed since NHibernate's creation and still offers the most advanced querying possibilities; it is well suited for dynamic queries, even if using string concatenation;
    - The Criteria API is an implementation of the Query Object pattern, where you create a semi-abstract conceptual representation of the query you wish to execute by means of a class model; it is also a good choice for dynamic querying;
    - Query Over offers a similar API to Criteria, but using strongly-typed LINQ expressions instead of strings; for this reason, although more refactor-friendly than Criteria, it is less suited for dynamic queries;
    - SQL, including stored procedures, can also be used;
    - Integration with the Lucene.NET indexer is available.

    As for Entity Framework:

    - LINQ to Entities is fully supported, and its implementation is considered very complete; it is the API of choice for most developers;
    - Entity-SQL, HQL's counterpart, is also an object-oriented, database-independent querying language that can be used for dynamic queries;
    - SQL, of course, is also supported.

    Caching

    Both NHibernate and Entity Framework, of course, feature a first-level cache. NHibernate also supports a second-level cache, which can be used among multiple ISessionFactorys, even in different processes/machines: Hashtable (in-memory), SysCache (uses ASP.NET as the cache provider), SysCache2 (same as above, but with support for SQL Server SQL dependencies), Prevalence, SharedCache, Memcached, Redis, NCache and AppFabric Caching.

    Out of the box, Entity Framework does not have any second-level cache mechanism; however, there are some public samples that show how we can add this.
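
    As a quick illustration of the second-level cache setup just described, a hedged C# sketch; the SysCache provider ships separately in the NHibernate.Caches package, and the class/assembly names below should be checked against the version in use:

        var cfg = new NHibernate.Cfg.Configuration().Configure();
        cfg.SetProperty(NHibernate.Cfg.Environment.UseSecondLevelCache, "true");
        cfg.SetProperty(NHibernate.Cfg.Environment.CacheProvider,
            "NHibernate.Caches.SysCache.SysCacheProvider, NHibernate.Caches.SysCache");
        var factory = cfg.BuildSessionFactory();   // all ISessions from it now share the cache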
    Caching

Both NHibernate and Entity Framework, of course, feature a first-level cache. NHibernate also supports a second-level cache, which can be shared among multiple ISessionFactory instances, even in different processes/machines:
- Hashtable (in-memory);
- SysCache (uses ASP.NET as the cache provider);
- SysCache2 (same as above, but with support for SQL Server SQL Dependencies);
- Prevalence;
- SharedCache;
- Memcached;
- Redis;
- NCache;
- AppFabric Caching.

Out of the box, Entity Framework does not have any second-level cache mechanism; however, there are some public samples that show how we can add one.

ID Generators

NHibernate supports different ID generation strategies, coming from the database and otherwise:
- Identity (for SQL Server, MySQL, and other databases that support identity columns);
- Sequence (for Oracle, PostgreSQL, and others that support sequences);
- Trigger-based;
- HiLo;
- Sequence HiLo (for databases that support sequences);
- Several GUID flavors, in both GUID and string format;
- Increment (for single-user scenarios);
- Assigned (you must know what you're doing);
- Sequence-style (uses either an actual sequence or a single-column table);
- Table of ids;
- Pooled (similar to HiLo, but stores the high values in a table);
- Native (uses whatever mechanism the current database supports, identity or sequence).

Entity Framework only supports:
- Identity generation;
- GUIDs;
- Assigned values.

Properties

NHibernate supports properties of entity types (one-to-one or many-to-one), collections (one-to-many or many-to-many) as well as scalars and enumerations. It offers a mechanism for having complex property types generated from the database, which even includes support for querying. It also supports properties originating from SQL formulas.

Entity Framework only supports scalars, entity types and collections. Enumeration support will come in the next version.

Events and Interception

NHibernate has a very rich event model that exposes more than 20 events, either for synchronous pre-execution or asynchronous post-execution, including:
- Pre/Post-Load;
- Pre/Post-Delete;
- Pre/Post-Insert;
- Pre/Post-Update;
- Pre/Post-Flush.

It also features interception of class instancing and SQL generation. As for Entity Framework, only two events exist:
- ObjectMaterialized (after loading an entity from the database);
- SavingChanges (before saving changes, which includes deleting, inserting and updating).

Tracking Changes

For NHibernate as well as Entity Framework, all changes are tracked by their respective Unit of Work implementation. Entities can be attached to and detached from it; Entity Framework does, however, also support self-tracking entities.

Optimistic Concurrency Control

NHibernate supports all of the imaginable scenarios:
- SQL Server's ROWVERSION;
- Oracle's ORA_ROWSCN;
- A column containing a date and time;
- A column containing a version number;
- All/dirty columns comparison.

Entity Framework is more focused on SQL Server, so it only supports:
- SQL Server's ROWVERSION;
- Comparing all/some columns.

Batching

NHibernate has full support for insertion batching, but only if the ID generator in use is not database-based (for example, it cannot be used with Identity), whereas Entity Framework has no batching at all.

Cascading

Both support cascading for collections and associations: when an entity is deleted, its conceptual children are also deleted. NHibernate also offers the possibility to set the foreign key column on the children to NULL instead of removing them.

Flushing Changes

NHibernate's ISession has a FlushMode property that can have the following values:
- Auto: changes are sent to the database when necessary, for example, if there are dirty instances of an entity type and a query is performed against this entity type, or if the ISession is being disposed;
- Commit: changes are sent when committing the current transaction;
- Never: changes are only sent when explicitly calling Flush().

As for Entity Framework, changes have to be explicitly sent through a call to AcceptAllChanges()/SaveChanges().
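A small sketch of this difference in flushing behavior, reusing the illustrative Customer, sessionFactory and MyContext names from the earlier sketches (these names are assumptions of mine, not required by either framework):

using NHibernate;

public static class FlushExamples
{
    public static void NHibernateFlush(ISessionFactory sessionFactory)
    {
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            session.FlushMode = FlushMode.Commit; // flush when the transaction commits
            Customer c = session.Get<Customer>(1);
            c.Name = "New name";
            tx.Commit(); // dirty entities are flushed here
        }
    }

    public static void EntityFrameworkFlush()
    {
        using (var ctx = new MyContext())
        {
            Customer c = ctx.Customers.Find(1);
            c.Name = "New name";
            ctx.SaveChanges(); // nothing is sent to the database before this call
        }
    }
}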
    Lazy Loading

NHibernate supports lazy loading for:
- Associated entities (one-to-one, many-to-one);
- Collections (one-to-many, many-to-many);
- Scalar properties (think of BLOBs or CLOBs).

Entity Framework only supports lazy loading for:
- Associated entities;
- Collections.

Generating and Updating the Database

Both NHibernate and Entity Framework Code First (with the Migrations API) allow creating the database model from the mappings and updating it if the mappings change.

Extensibility

As you can guess, NHibernate is far more extensible than Entity Framework. Basically, everything can be extended, from ID generation to the LINQ-to-SQL transformation, HQL, native SQL support, custom column types, custom association collections, SQL generation, supported databases, etc. With Entity Framework your options are more limited, if only because practically no information exists as to what can be extended or changed. It does feature a provider model that can be extended to support any database.

Integration With Other Microsoft APIs and Tools

When it comes to integration with Microsoft technologies, it will come as no surprise that Entity Framework offers the best support. For example, the following technologies are fully supported:
- ASP.NET (through the EntityDataSource);
- ASP.NET Dynamic Data;
- WCF Data Services;
- WCF RIA Services;
- Visual Studio (through the integrated designer).

Documentation

This is another point where Entity Framework is superior: NHibernate lacks, for starters, an up-to-date API reference synchronized with its current version. It does have a community mailing list, blogs and wikis, although these are not much used. Entity Framework has a number of resources on MSDN and, of course, several forums and discussion groups exist.

Conclusion

Like I said, this is a personal list. It may come as a surprise to some that Entity Framework is so far behind NHibernate in so many aspects, but it is true that NHibernate is much older and, due to its open-source nature, is not tied to product-specific timeframes and can thus evolve much more rapidly. I do like both, and I choose whichever is best for the job I have at hand. I am looking forward to the changes in EF5, which will add significant value to an already interesting product. So, what do you think? Did I forget anything important, or is there anything else worth talking about? Looking forward to your comments!

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
    (This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.)

Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.)

Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful, by describing whether the content is found in a table or a chart, for example.)

This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and it grows in complexity in proportion to the need to simplify BI for users.

It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, with even an acknowledgment that users would get the data they want, even if it meant they would have to manually enter it into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations.

Collaborative BI

I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time.
    Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week, where I got to reminisce with him for a bit), and that began a career that I never imagined at the time.

Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools would catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective on the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people."

The Microsoft BI Stack in General

A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services, which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years.

Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?"

Expo Hall

I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here.
    Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions.

Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye, which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats, where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind!

Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are available for viewing online at http://www.msteched.com/2010/NorthAmerica, along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below.

    Read the article

  • LLBLGen Pro feature highlights: automatic element name construction

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system.) One of the things one might take for granted, but which has a huge impact on the time spent in an entity modeling environment, is the way the system creates names for elements out of the information provided; in short: automatic element name construction. Element names are created in both directions of modeling, database first and model first, and the more names the system can create for you without you having to rename them, the better. LLBLGen Pro has a rich, fine-grained system for creating element names out of the meta-data available, which I'll describe in more detail below. First the model element related naming features are highlighted, in the section Automatic model element naming features, and after that I'll go into more detail about the relational model element naming features LLBLGen Pro has to offer, in the section Automatic relational model element naming features.

Automatic model element naming features

When working database first, the element names in the model, e.g. entity names, entity field names and so on, are in general determined from the relational model element (e.g. table, table field) they're mapped on, as the model elements are reverse engineered from these relational model elements. It doesn't take rocket science to automatically name an entity Customer if the entity was created after reverse engineering a table named Customer. It gets a little trickier when the entity which was created by reverse engineering a table called TBL_ORDER_LINES has to be named 'OrderLine' automatically. Automatic model element naming also comes into effect with model first development, where some settings are used to provide you with a default name, e.g. in the case of navigator name creation when you create a new relationship. The features below are available to you in the Project Settings: open Project Settings on a loaded project and navigate to Conventions -> Element Name Construction.

Strippers!

The above example, 'TBL_ORDER_LINES', shows that some parts of the table name might not be needed for name creation, in this case the 'TBL_' prefix. Some 'brilliant' DBAs even add suffixes to table names, fragments you might not want to appear in the entity names. LLBLGen Pro lets you define both prefix and suffix fragments to strip off of table, view, stored procedure, parameter, table field and view field names. In the example above, the fragment 'TBL_' is a good candidate for such a strip pattern. You can specify more than one pattern for e.g. the table prefix strip pattern, so even a really messy schema can still be used to produce clean names.

Underscores Be Gone

Another thing you might want to get rid of are underscores. After all, most naming schemes for entities and their classes use PasCal casing rules and don't allow for underscores to appear. LLBLGen Pro can automatically strip out underscores for you. It's an optional feature, so if you like the underscores, you're not forced to see them go: LLBLGen Pro will leave them alone when ordered to do so.

PasCal everywhere... or not, your call

LLBLGen Pro can automatically PasCal case names on word breaks. It determines word breaks in a couple of ways: a space marks a word break, an underscore marks a word break, and a case difference marks a word break. It will remove spaces in all cases and, based on the underscore removal setting, keep or remove the underscores, upper-casing the first character of each word break fragment and lower-casing the rest.
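As a rough sketch of how these rules compose - strip patterns, underscore removal and PasCal casing on word breaks - consider the following helper, which mimics the behavior on the TBL_ORDER_LINES example. This is illustrative C# of my own, not LLBLGen Pro's actual implementation or API; all names are made up:

using System;
using System.Linq;

public static class NameConstruction
{
    // Hypothetical illustration: strips a configured prefix, splits the name
    // on word breaks (underscores and spaces) and PasCal-cases each fragment.
    public static string Construct(string dbName, params string[] prefixStripPatterns)
    {
        foreach (string prefix in prefixStripPatterns)
        {
            if (dbName.StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
            {
                dbName = dbName.Substring(prefix.Length);
                break;
            }
        }

        string[] fragments = dbName.Split(new[] { '_', ' ' }, StringSplitOptions.RemoveEmptyEntries);
        return string.Concat(fragments.Select(f => char.ToUpper(f[0]) + f.Substring(1).ToLower()));
    }
}

// NameConstruction.Construct("TBL_ORDER_LINES", "TBL_") returns "OrderLines";
// singularization, described next, then turns it into "OrderLine".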
    Say we keep the defaults, which are to remove underscores, always PasCal-case and strip the TBL_ fragment: with our example, TBL_ORDER_LINES, after stripping TBL_ from the table name we get two word fragments: ORDER and LINES. The underscores are removed, the first character of each fragment is upper-cased and the rest lower-cased, so this results in OrderLines. Almost there!

Pluralization and Singularization

In general, entity names are singular, like Customer or OrderLine, so LLBLGen Pro offers a way to singularize the names. This will convert OrderLines, the result we got after the PasCal casing functionality, into OrderLine, exactly what we're after.

Show me the patterns!

There are other situations in which you want more flexibility. Say you have an entity Customer and an entity Order, and there's a foreign key constraint defined between the target of Order and the target of Customer. This foreign key constraint results in a 1:n relationship between the entities Customer and Order. A relationship has navigators mapped onto it in both entities the relationship is between. For this particular relationship we'd like to have Customer as the navigator in Order and Orders as the navigator in Customer, so the relationship becomes Customer.Orders 1:n Order.Customer. To control the naming of these navigators for the various relationship types, LLBLGen Pro defines a set of patterns which allow you, using macros, to define what the auto-created navigator names will look like. For example, if you'd rather have Customer.OrderCollection, you can do so by changing the pattern from {$EndEntityName$P} to {$EndEntityName}Collection. The $P directive makes sure the name is pluralized, which is not what you want if you're going for <EntityName>Collection, hence it's removed.

When working model first, it's a given that you'll create foreign key fields along the way when you define relationships. For example, you've defined two entities, Customer and Order, and they have their fields set up properly. Now you want to define a relationship between them. This will automatically create a foreign key field in the Order entity, which reflects the value of the PK field in Customer. (No worries if you hate the foreign key fields in your classes; on NHibernate and EF these can be hidden in the generated code if you want.) A specific pattern is available for you to direct LLBLGen Pro how to name this foreign key field. For example, if all your entities have Id as the PK field, you might want a different name than Id for the foreign key field. In our Customer - Order example, you might want CustomerId instead as the foreign key name in Order. The pattern for foreign key fields gives you that freedom.

Abbreviations... make sense of OrdNr and friends

I already described word breaks in the PasCal casing paragraph and how they're used for the PasCal casing of the constructed name. Word breaks are used for another neat feature LLBLGen Pro has to offer: abbreviation support. Burt, your friendly DBA in the dungeons below the office, has a hate-hate relationship with his keyboard: he can't stand it; typing is something he avoids like the plague. This has resulted in tables and fields which have names that are very short, but also very unreadable. Example: our TBL_ORDER_LINES table has a lovely field called ORD_NR. What you would like to see in your fancy new OrderLine entity mapped onto this table is a field called OrderNumber, not a field called OrdNr. What you would also like is to not have to rename that field manually.
    There are better things to do with your time, after all. LLBLGen Pro has you covered. All it takes is to define some abbreviation/full-word pairs, and during reverse engineering of model elements from tables/views, LLBLGen Pro will take care of the rest. For the ORD_NR field, you need two pairs: ORD as abbreviation with Order as full word, and NR as abbreviation with Number as full word. LLBLGen Pro will now convert every word fragment found with the word breaks which matches an abbreviation into the given full word. They're case sensitive and can be found in the Project Settings: navigate to Conventions -> Element Name Construction -> Abbreviations.

Automatic relational model element naming features

Not everyone works database first: it may very well be the case that you start from scratch, or have to add additional tables to an existing database. For these situations, it's key that you have the flexibility to control the created table names and table fields without any work: let the designer create these names based on the entity model you defined and a set of rules. LLBLGen Pro offers several features in this area, which are described in more detail below. These features are found in Project Settings: navigate to Conventions -> Model First Development.

Underscores, welcome back!

Not every database is case insensitive, and not every organization requires PasCal-cased table/field names; some demand all lower or all upper case names with underscores at word breaks. Say you create an entity model with an entity called OrderLine. You work with Oracle, and your organization requires underscores at word breaks: a table created from OrderLine should be called ORDER_LINE. LLBLGen Pro allows you to do that: with a simple checkbox you can order LLBLGen Pro to insert an underscore at each word break for the type of database you're working with, case sensitive or case insensitive. Checking the checkbox Insert underscore at word break case insensitive dbs will let LLBLGen Pro create a table from the entity called Order_Line. Half-way there, as there are still lower case characters and you need all caps. No worries; see below.

Casing directives so everyone can sleep well at night

For case sensitive databases and case insensitive databases there is one setting for each which controls the casing of the name created from a model element (e.g. a table created from an entity definition using the auto-mapping feature). The settings can have the following values: AsProjectElement, AllUpperCase or AllLowerCase. AsProjectElement is the default, and it keeps the casing as-is. In our example, we need to get all upper case characters, so we select AllUpperCase for the setting for case sensitive databases. This will produce the name ORDER_LINE.

Sequence naming after a pattern

Some databases support sequences, and using model-first development it's key to have sequences, when needed, created automatically and, if possible, using a name which shows where they're used. Say you have an entity Order and you want the PK values to be created by the database using a sequence. The database you're using supports sequences (e.g. Oracle), and as you want all numeric PK fields to be sequenced, you have enabled this by the setting Auto assign sequences to integer pks.
    When you're using LLBLGen Pro's auto-map feature to create new tables and constraints from the model, it will create a new table, ORDER, based on your settings as previously discussed, with a PK field ID, and it also creates a sequence, SEQ_ORDER, which it auto-assigns to the ID field mapping. The name of the sequence is created by using a pattern, defined in the Model First Development setting Sequence pattern, which uses plain text and macros like the other patterns previously discussed.

Grouping and schemas

When you start from scratch and you're working model first, the tables created by LLBLGen Pro will be in a catalog and/or schema created by LLBLGen Pro as well. If you use LLBLGen Pro's grouping feature, which allows you to group entities and other model elements into groups in the project (described in a future blog post), you might want to have that group name reflected in the schema name the targets of the model elements are in. Say you have a model with a group CRM and a group HRM, both with entities unique to these groups, e.g. Employee in HRM and Customer in CRM. When auto-mapping this model to create tables, you might want the table created for Employee in the HRM schema but the table created for Customer in the CRM schema. LLBLGen Pro will do just that when you set the setting Set schema name after group name to true (the default). This gives you total control over where what is placed in the database from your model.

But I want plural table names... and TBL_ prefixes!

For now we follow best practices, which suggest singular table names and no prefixes/suffixes for names. Of course that won't keep everyone happy, so we're looking into making this possible in a future version.

Conclusion

LLBLGen Pro offers a variety of options to let the modeling system do as much work for you as possible. Hopefully you enjoyed this little highlight post and it has given you new insights into the smaller features available to you in LLBLGen Pro, ones you might not have thought of in the first place. Enjoy!

    Read the article

< Previous Page | 674 675 676 677 678 679 680 681 682 683 684 685  | Next Page >