Search Results

Search found 41135 results on 1646 pages for 'non relational database'.


  • Displaying a message after adding duplicate records in database

    - by user1770370
    I wrote a program in C# WinForms with SQL Server and LINQ to SQL. I use a user control instead of a form. In my user control I put 3 textboxes: txtStartNumber, txtEndNumber and txtQuantity. The user fills in the textboxes, and when the button is clicked the program inserts records according to the value of txtQuantity. When a duplicate number is generated, I want it not to be added to the database, and a message to be displayed. How do I do this? Should I write the code in the code-behind or on the server side, i.e. in a stored procedure or a trigger?

        private void btnSave_Click(object sender, EventArgs e)
        {
            long from = Convert.ToInt64(txt_barcode_f.Text);
            long to = Convert.ToInt64(txt_barcode_t.Text);
            long quantity = Convert.ToInt64(to - from);
            int card_Type_ID = Convert.ToInt32(cmb_BracodeType.SelectedValue);
            long[] arrCardNum = new long[to - from];

            // generate one barcode per number in [from, to) and save each
            for (int j = 0; j < (to - from); j++)
            {
                arrCardNum[j] = from + j;
                string r = arrCardNum[j].ToString();
                sp.SaveCards(r, 2, card_Type_ID, SaveDate, 2);
            }
        }

    Stored procedure code:

        ALTER PROCEDURE dbo.SaveCards
            @Barcode_Num int,
            @Card_Status_ID int,
            @Card_Type_ID int,
            @SaveDate varchar(10),
            @Save_User_ID int
        AS
        BEGIN
            INSERT INTO [Parking].[dbo].[TBL_Cards]
                ([Barcode_Num], [Card_Status_ID], [Card_Type_ID], [Save_User_ID])
            VALUES
                (@Barcode_Num, @Card_Status_ID, @Card_Type_ID, @Save_User_ID)
        END
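    A common pattern for exactly this (a sketch, not the poster's actual schema; the constraint name and return-code convention are illustrative) is to enforce uniqueness in the database and have the stored procedure report whether the row was skipped:

        -- One-time: enforce uniqueness at the database level
        ALTER TABLE Parking.dbo.TBL_Cards
            ADD CONSTRAINT UQ_TBL_Cards_Barcode_Num UNIQUE (Barcode_Num);

        ALTER PROCEDURE dbo.SaveCards
            @Barcode_Num int,
            @Card_Status_ID int,
            @Card_Type_ID int,
            @SaveDate varchar(10),
            @Save_User_ID int
        AS
        BEGIN
            -- Skip duplicates and tell the caller about it
            IF EXISTS (SELECT 1 FROM Parking.dbo.TBL_Cards
                       WHERE Barcode_Num = @Barcode_Num)
                RETURN 1;  -- duplicate: nothing inserted

            INSERT INTO Parking.dbo.TBL_Cards
                (Barcode_Num, Card_Status_ID, Card_Type_ID, Save_User_ID)
            VALUES
                (@Barcode_Num, @Card_Status_ID, @Card_Type_ID, @Save_User_ID);
            RETURN 0;  -- inserted
        END

    The C# side then checks the return value and shows a MessageBox when it gets 1; the unique constraint stays as a safety net against two inserts racing between the EXISTS check and the INSERT.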

    Read the article

  • Best way to store data in database when you don't know the type

    - by stiank81
    I have a table in my database that represents datafields in a custom form. The DataField gives some representation of what kind of control it should be represented with, and what value type it should take. Simplified, you can say that I have 2 entities in this table: a textbox taking any string, and a textbox taking only numbers. Now I have the different values stored in a separate table, referencing the datafield definition. What is the best way to store the data value here, when the type differs?

    One possible solution is to have the FieldValue table hold one field per possible value type. This would certainly be redundant, but at least I would get the value stored in its correct form, simplifying queries later.

        FieldValue
        ----------
        Id
        DataFieldId
        IntValue
        DoubleValue
        BoolValue
        DataValue
        ...

    Another possibility is just storing everything as String and casting it in the queries. I am using .NET with NHibernate, and I see that at least here there is a Projections.Cast that can be used to cast e.g. string to int in the query. Either way, with these two solutions I need to know which type to use when doing the query, but I will know that from the DataField, so that won't be a problem. Anyway, I don't think either of these solutions sounds good. Are they? Or is there a better way?
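    For reference, the first option rendered as DDL might look like the sketch below (T-SQL; column names and types are assumptions, with a CHECK constraint added so that at most one typed column is populated per row):

        CREATE TABLE FieldValue (
            Id          INT IDENTITY PRIMARY KEY,
            DataFieldId INT NOT NULL,          -- FK to the DataField definition table
            IntValue    INT NULL,
            DoubleValue FLOAT NULL,
            BoolValue   BIT NULL,
            DateValue   DATETIME NULL,
            StringValue NVARCHAR(4000) NULL,
            -- at most one of the typed columns may be set
            CHECK (
                (CASE WHEN IntValue    IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN DoubleValue IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN BoolValue   IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN DateValue   IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN StringValue IS NOT NULL THEN 1 ELSE 0 END) <= 1
            )
        );

    The redundancy is then at least kept honest by the constraint, and queries read the one column the DataField's type dictates.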

    Read the article

  • Sharepoint list fields stored in content database?

    - by nav
    Hi, I am trying to find out where the data for SharePoint list fields is stored in the content database. For example, from the AllLists table, filtering on the list ID I am interested in, I can derive the following from the tp_ContentTypes column for the field in question:

        <Field Type="CascadingDropDownListFieldWithFilter" DisplayName="Secondary Subject" Required="FALSE"
               ID="{b4143ff9-d5a4-468f-8793-7bb2f06b02a0}" SourceID="{6c1e9bbf-4f02-49fd-8e6c-87dd9f26158a}"
               StaticName="Secondary_x0020_Subject" Name="Secondary_x0020_Subject"
               ColName="nvarchar13" RowOrdinal="0" Version="1">
          <Customization>
            <ArrayOfProperty>
              <Property><Name>SiteUrl</Name><Value xmlns:q1="http://www.w3.org/2001/XMLSchema" p4:type="q1:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">http://lvis.amberlnk.net/knowhow</Value></Property>
              <Property><Name>CddlName</Name><Value xmlns:q2="http://www.w3.org/2001/XMLSchema" p4:type="q2:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">SS</Value></Property>
              <Property><Name>CddlParentName</Name><Value xmlns:q3="http://www.w3.org/2001/XMLSchema" p4:type="q3:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">PS</Value></Property>
              <Property><Name>CddlChildName</Name><Value xmlns:q4="http://www.w3.org/2001/XMLSchema" p4:type="q4:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance"/></Property>
              <Property><Name>ListName</Name><Value xmlns:q5="http://www.w3.org/2001/XMLSchema" p4:type="q5:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Secondary</Value></Property>
              <Property><Name>ListTextField</Name><Value xmlns:q6="http://www.w3.org/2001/XMLSchema" p4:type="q6:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Title</Value></Property>
              <Property><Name>ListValueField</Name><Value xmlns:q7="http://www.w3.org/2001/XMLSchema" p4:type="q7:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Title</Value></Property>
              <Property><Name>JoinField</Name><Value xmlns:q8="http://www.w3.org/2001/XMLSchema" p4:type="q8:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Primary</Value></Property>
              <Property><Name>FilterField</Name><Value xmlns:q9="http://www.w3.org/2001/XMLSchema" p4:type="q9:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Content Type ID</Value></Property>
              <Property><Name>FilterOperator</Name><Value xmlns:q10="http://www.w3.org/2001/XMLSchema" p4:type="q10:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Show All</Value></Property>
              <Property><Name>FilterValue</Name><Value xmlns:q11="http://www.w3.org/2001/XMLSchema" p4:type="q11:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance"/></Property>
            </ArrayOfProperty>
          </Customization>
        </Field>

    Which table do I need to query to find the data held in this field? Many thanks, Nav
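    In a WSS 3.0 / MOSS 2007 content database, list item values generally live in the AllUserData table, with each field mapped to the physical column named by the ColName attribute (nvarchar13 above). Querying a content database directly is unsupported by Microsoft, and the exact join columns vary by version, so the following is purely an inspection sketch:

        -- Unsupported, read-only inspection of the content database:
        -- ColName="nvarchar13" says which physical column holds the field.
        SELECT ud.tp_ID, ud.nvarchar13 AS SecondarySubject
        FROM dbo.AllUserData AS ud
        JOIN dbo.AllLists AS al ON al.tp_ID = ud.tp_ListId
        WHERE al.tp_Title = 'Secondary';  -- hypothetical list title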

    Read the article

  • sharepoint custom aspx page with database connection

    - by Megini
    Hi there. I have created a custom .aspx page within my SharePoint site, with a SQL Server connection to a database on that server to select data. When I view the page it works, but when another user tries to view it, it gives the following error:

        Server Error in '/' Application.
        Login failed for user 'GRINCOR\GuguK'.

        Description: An unhandled exception occurred during the execution of the current web request.
        Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'GRINCOR\GuguK'.

        Source Error: The source code that generated this unhandled exception can only be shown when
        compiled in debug mode. To enable this, please follow one of the below steps, then request the URL:

        1) Add a "Debug=true" directive at the top of the file that generated the error. Example:
           <%@ Page Language="C#" Debug="true" %>
        or:
        2) Add the following section to the configuration file of your application.

        Note that this second technique will cause all files within a given application to be compiled
        in debug mode. The first technique will cause only that particular file to be compiled in debug
        mode. Important: Running applications in debug mode does incur a memory/performance overhead.
        You should make sure that an application has debugging disabled before deploying into a
        production scenario.

        Stack Trace:
        [SqlException (0x80131904): Login failed for user 'GRINCOR\GuguK'.]
        System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +248
        System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +245
        System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +2811
        System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK) +53
        System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject) +327
        System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart) +2445370
        System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) +2445224
        System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) +354
        System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) +703
        System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) +54
        System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) +2414776
        System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) +92
        System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) +1657
        System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +84
        System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +1645767
        System.Data.SqlClient.SqlConnection.Open() +258
        ASP.d7922f0d_ac20_4f87_91a2_a99a52c2b2fa__233736835.DisplayData() in C:\inetpub\wwwroot\wss\VirtualDirectories\80\sites\hrportal2\tester.aspx:151
        ASP.d7922f0d_ac20_4f87_91a2_a99a52c2b2fa_233736835._RenderMain(HtmlTextWriter __w, Control parameterContainer) in C:\inetpub\wwwroot\wss\VirtualDirectories\80\sites\hrportal2\tester.aspx:346
        System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +115
        System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +240
        System.Web.UI.HtmlControls.HtmlContainerControl.Render(HtmlTextWriter writer) +42
        System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +240
        System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter writer) +253
        System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output) +87
        System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer) +53
        System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +240
        System.Web.UI.HtmlControls.HtmlContainerControl.Render(HtmlTextWriter writer) +42
        System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +240
        System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +240
        System.Web.UI.Page.Render(HtmlTextWriter writer) +38
        System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +4240

        Version Information: Microsoft .NET Framework Version:2.0.50727.3603; ASP.NET Version:2.0.50727.3601

    Can someone give me a solution to this problem? I am using SharePoint Services 3.0.
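    The exception itself points at the cause: under Windows authentication the page connects to SQL Server as each visitor ('GRINCOR\GuguK' here), and that account has no login on the instance. One direct fix is to grant access on the SQL side, ideally to a domain group rather than user by user; a sketch with assumed group, database and role names follows. Alternatives are a SQL-authenticated connection string, or running the query inside SPSecurity.RunWithElevatedPrivileges.

        -- Run on the SQL Server instance; all names are illustrative.
        CREATE LOGIN [GRINCOR\HRPortalUsers] FROM WINDOWS;

        USE HrPortalData;  -- hypothetical database name
        CREATE USER [GRINCOR\HRPortalUsers] FOR LOGIN [GRINCOR\HRPortalUsers];
        EXEC sp_addrolemember 'db_datareader', 'GRINCOR\HRPortalUsers';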

    Read the article

  • What are the DB smells?

    - by Jonas Byström
    We all know 'code smells', but what are the fundamental 'database smells'? I'm a DB n00b, but I'll give an example of something I find fishy: when I have to join 6-8 tables together to optimize our loading, isn't that a DB smell? Or would that be a pretty 'normal' database layout? (Sure, premature optimization is the root of all evil, but this seems to me like premature pessimization, not to mention cumbersome.)

    Read the article

  • Scalable Database Tagging Schema

    - by Longpoke
    EDIT: To people building tagging systems: don't read this. It is not what you are looking for. I asked this before I was aware that RDBMSs all have their own optimization methods; just use a simple many-to-many scheme.

    I have a posting system that has millions of posts. Each post can have an unlimited number of tags associated with it. Users can create tags, which have notes, date created, owner, etc. A tag is almost like a post itself, because people can post notes about the tag. Each tag association has an owner and date, so we can see who added the tag and when.

    My question is how can I implement this? Searching posts by tag, or tags by post, has to be fast. Also, users can add tags to posts by typing the name into a field, a bit like the Google search bar: it has to fill in the rest of the tag name for you. I have 3 solutions at the moment, but I am not sure which is best, or if there is a better way. Note that I'm not showing the layout of notes, since it will be trivial once I get a proper solution for tags.

    Method 1: Linked list. tagId in post points to a linked list in tag_assoc; the application must traverse the list until flink = 0.

        post:      id, content, ownerId, date, tagId, notesId
        tag_assoc: id, tagId, ownerId, flink
        tag:       id, name, notesId

    Method 2: Denormalization. tags is simply a VARCHAR or TEXT field containing a tab-delimited array of tagId:ownerId pairs. It cannot be a fixed size.

        post: id, content, ownerId, date, tags, notesId
        tag:  id, name, notesId

    Method 3: Toxi (from http://www.pui.ch/phred/archives/2005/04/tags-database-schemas.html; also the same thing here: http://stackoverflow.com/questions/20856/how-do-you-recommend-implementing-tags-or-tagging).

        post:      id, content, ownerId, date, notesId
        tag_assoc: ownerId, tagId, postId
        tag:       id, name, notesId

    Method 3 raises the question: how fast will it be to iterate through every single row in tag_assoc? Methods 1 and 2 should be fast for returning tags by post, but for posts by tag another lookup table must be made. The last thing I have to worry about is optimizing searching tags by name; I have not worked that out yet. I made an ASCII diagram here: http://pastebin.com/f1c4e0e53
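    For what it's worth, a plain many-to-many version of method 3, with the indexes that make both lookup directions and prefix autocomplete fast, might look like this sketch (MySQL syntax; engine and column types are assumptions):

        CREATE TABLE tag (
            id   BIGINT PRIMARY KEY,
            name VARCHAR(64) NOT NULL,
            UNIQUE (name)                      -- also serves prefix search: WHERE name LIKE 'foo%'
        );

        CREATE TABLE tag_assoc (
            postId  BIGINT NOT NULL,
            tagId   BIGINT NOT NULL,
            ownerId BIGINT NOT NULL,
            date    DATETIME NOT NULL,
            PRIMARY KEY (postId, tagId),       -- tags by post
            KEY ix_tag_post (tagId, postId)    -- posts by tag
        );

        -- posts by tag: an index seek, not a scan of all of tag_assoc
        SELECT postId FROM tag_assoc WHERE tagId = 42;

        -- autocomplete: a prefix match can use the index on name
        SELECT name FROM tag WHERE name LIKE 'data%' ORDER BY name LIMIT 10;

    With the composite primary key on (postId, tagId) and the reverse index, neither direction iterates the whole association table; the B-tree seek touches only the matching rows.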

    Read the article

  • Access Creates new file every time I Compact & Repair

    - by NickSentowski
    It didn't always do this, but ever since I split my database and made the front end an ACCDE file, any time I try to compact and repair either file, a new file called "Database 1" is generated and my original file size doesn't change. Is this normal? My ACCDB is roughly 20 MB, and my ACCDE is just over 1 MB after being used the first time. Before opening, the ACCDE was only 600 KB. (I have lots of forms and queries, and regularly store PDF attachments.)

    Read the article

  • Techniques for querying a set of object in-memory in a Java application

    - by Edd Grant
    Hi All,

    We have a system which performs a 'coarse search' by invoking an interface on another system which returns a set of Java objects. Once we have received the search results, I need to be able to further filter the resulting Java objects based on certain criteria describing the state of their attributes (e.g. from the initial objects, return all objects where x.y > z && a.b == c). The criteria used to filter the set of objects each time is partially user-configurable; by this I mean that users will be able to select the values and ranges to match on, but the attributes they can pick from will be a fixed set. The data sets are likely to contain <= 10,000 objects for each search. The search will be executed manually by the application user base, probably no more than 2000 times a day (approx). It's probably worth mentioning that all the objects in the result set are known domain object classes which have Hibernate and JPA annotations describing their structure and relationships.

    Off the top of my head I can think of 3 ways of doing this:

    1. For each search, persist the initial result set objects in our database, then use Hibernate to re-query them using the finer-grained criteria.
    2. Use an in-memory database (such as hsqldb?) to query and refine the initial result set.
    3. Write some custom code which iterates the initial result set and pulls out the desired records.

    Option 1 seems to involve a lot of toing and froing across a network to a physical database (Oracle 10g), which might result in a lot of network and disk activity. It would also require the results from each search to be isolated from other result sets, to ensure that different searches don't interfere with each other.

    Option 2 seems like a good idea in principle, as it would allow me to do the finer query in memory and would not require the persistence of result data which would only be discarded after the search was complete. Gut feeling is that this could be pretty performant too, but might result in larger memory overheads (which is fine, as we can be pretty flexible on the amount of memory our JVM gets).

    Option 3 could be very performant, but is something I would like to avoid, as any code we write would require such careful testing that the time taken to achieve something flexible and robust enough would probably be prohibitive.

    I don't have time to prototype all 3 ideas, so I am looking for comments people may have on the 3 options above, plus any further ideas I have not considered, to help me decide which idea might be most suitable. I'm currently leaning toward option 2 (in-memory database), so would be keen to hear from people with experience of querying POJOs in memory too. Hopefully I have described the situation in enough detail, but don't hesitate to ask if any further information is required to better understand the scenario. Cheers, Edd
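    For flavour, option 2 boils down to bulk-loading the ~10,000 coarse results into an in-memory table and letting SQL do the refinement. In HSQLDB the table itself can be declared memory-resident; a sketch with invented column names:

        -- HSQLDB: MEMORY tables (the default type) live entirely in the JVM heap
        CREATE MEMORY TABLE search_result (
            id BIGINT PRIMARY KEY,
            y  INT,
            b  VARCHAR(50)
        );

        -- ...bulk-insert the coarse results here, one batch per search...

        -- user-configured refinement, e.g. the "x.y > z && a.b == c" case
        SELECT id FROM search_result WHERE y > 100 AND b = 'c';

    Per-search isolation could come from one in-memory catalog per search or a search-id column, which also addresses the interference concern raised for option 1.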

    Read the article

  • How I can Optimize this mySQL transaction within java code?

    - by worldpython
    Dear All, I am new to the MySQL database. I have a large table (ID, ...). I select by ID frequently from Java code, and that puts a heavy load on the transaction:

        select * from tableName where ID = someID

    Notes:
    1. The database could be 100,000 records.
    2. I can't cache the result.
    3. ID is a primary key.
    4. I am trying to optimize the time needed to return a result from the query.

    Any ideas for optimization? Thanks in advance.
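    Since ID is the primary key, the lookup itself is already an index seek; the usual wins are verifying the plan and trimming what travels over the wire (a sketch; table and column names are stand-ins):

        -- Confirm the primary key is used: the plan should show type=const, key=PRIMARY
        EXPLAIN SELECT * FROM tableName WHERE ID = 12345;

        -- Fetch only the columns you need instead of SELECT *
        SELECT col1, col2 FROM tableName WHERE ID = 12345;

        -- If several IDs are needed at once, batch them into one round trip
        SELECT col1, col2 FROM tableName WHERE ID IN (12345, 12346, 12350);

    At 100,000 rows a primary-key lookup is microseconds of work, so if the load is still heavy the cost is usually per-query network round trips from Java; batching and connection pooling tend to matter more than the SQL itself.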

    Read the article

  • Inbreeding-immune database structure

    - by Nick Savage
    I have an application that requires a "simple" family tree. I would like to be able to perform queries that will give me data for an entire family given one ID from a member of the family. I say simple because it does not need to take into account adoption or any other obscurities. The requirements for the application are as follows:

    - Any two people will not be able to breed if they're from the same genetic line
    - Needs to allow for the addition of new family lines (new people with no previous family)
    - Need to be able to pull siblings and parents separately through queries

    I'm having trouble coming up with the proper structure for the database. So far I've come up with two solutions, but they're not very reliable and will probably get out of hand quite quickly.

    Solution 1 involves placing a family_ids field on the people table and storing a list of unique family IDs. Each time two people breed, the lists are checked against each other to make sure no IDs match, and if everything checks out the two lists are merged and set as the child's family_ids field. Example:

        Father (family_ids: (null)) breeds with Mother (family_ids: (213, 519))
          -> Child (family_ids: (213, 519)) breeds with Random Person (family_ids: (813, 712, 122, 767))
          -> Grandchild (family_ids: (213, 519, 813, 712, 122, 767))

    And so on and so forth... The problem I see with this is the lists becoming unreasonably large as time goes on.

    Solution 2 uses CakePHP's associations to declare:

        public $belongsTo = array(
            'Father' => array(
                'className' => 'User',
                'foreignKey' => 'father_id'
            ),
            'Mother' => array(
                'className' => 'User',
                'foreignKey' => 'mother_id'
            )
        );

    Now setting recursive to 2 will fetch the results of the mother and father, along with their mother and father, and so on and so forth all the way down the line. The problem with this route is that the data comes back in nested arrays and I'm unsure of how to work through them efficiently. If anyone can steer me in the direction of the most efficient way to handle what I want to achieve, that would be tremendously helpful. Any and all help is greatly appreciated, and I'll gladly answer any questions anyone has. Thanks a lot.
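    A third structure worth weighing keeps only mother_id and father_id on the people table (as in solution 2) and derives lineage on demand, so nothing can drift out of sync. On a database with recursive CTE support (e.g. SQL Server or PostgreSQL; MySQL gained this much later), the inbreeding check becomes one query. This is a sketch in SQL Server syntax with assumed table and column names:

        -- Do prospective parents @a and @b share any ancestor (or each other)?
        DECLARE @a INT = 1, @b INT = 2;  -- hypothetical person ids

        WITH ancestors (person_id, root) AS (
            SELECT id, id FROM people WHERE id IN (@a, @b)   -- seed with both people
            UNION ALL
            SELECT p.mother_id, a.root
            FROM ancestors AS a JOIN people AS p ON p.id = a.person_id
            WHERE p.mother_id IS NOT NULL
            UNION ALL
            SELECT p.father_id, a.root
            FROM ancestors AS a JOIN people AS p ON p.id = a.person_id
            WHERE p.father_id IS NOT NULL
        )
        SELECT CASE WHEN EXISTS (
            SELECT 1
            FROM ancestors AS x
            JOIN ancestors AS y ON y.person_id = x.person_id
            WHERE x.root = @a AND y.root = @b
        ) THEN 1 ELSE 0 END AS would_be_inbreeding;

    Siblings and parents then fall out of simple self-joins on mother_id/father_id, satisfying the third requirement without the ever-growing family_ids lists.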

    Read the article

  • Help Desk Database Design

    - by user237244
    The company I work at has very specific and unique needs for a help desk system, so none of the open-source systems will work for us. That being the case, I created a custom system using PHP and MySQL. It's far from perfect, but it's infinitely better than the last system they were using; trust me! It meets most of our needs quite nicely, but I have a question about the way I have the database set up. Here are the main tables:

    - ClosedTickets
    - ClosedTicketSolutions
    - Locations
    - OpenTickets
    - OpenTicketSolutions
    - Statuses
    - Technicians

    When a user submits a help request, it goes in the "OpenTickets" table. As the technicians work on the problem, they submit entries with a description of what they've done. These entries go in the "OpenTicketSolutions" table. When the problem has been resolved, the last technician to work on the problem closes the ticket, and it gets moved to the "ClosedTickets" table. All of the solution entries get moved to the "ClosedTicketSolutions" table as well. The other tables (Locations, Statuses, and Technicians) exist as a means of normalization (each location, status, and technician has an ID which is referenced).

    The problem I'm having now is this: when I want to view a list of all the open tickets, the SQL statement is somewhat complicated, because I have to left join the "Locations", "Statuses", and "Technicians" tables. Fields from various tables need to be searchable as well. Check out how complicated the SQL statement is to search closed tickets for tickets submitted by anybody with a first name containing "John":

        SELECT ClosedTickets.*,
               date_format(ClosedTickets.EntryDate, '%c/%e/%y %l:%i %p') AS Formatted_Date,
               date_format(ClosedDate, '%c/%e/%y %l:%i %p') AS Formatted_ClosedDate,
               Concat(Technicians.LastName, ', ', Technicians.FirstName) AS TechFullName,
               Locations.LocationName,
               date_format(ClosedTicketSolutions.EntryDate, '%c/%e/%y') AS Formatted_Solution_EntryDate,
               ClosedTicketSolutions.HoursSpent AS SolutionHoursSpent,
               ClosedTicketSolutions.Tech_ID AS SolutionTech_ID,
               ClosedTicketSolutions.EntryText
        FROM ClosedTickets
        LEFT JOIN Technicians ON ClosedTickets.Tech_ID = Technicians.Tech_ID
        LEFT JOIN Locations ON ClosedTickets.Location_ID = Locations.Location_ID
        LEFT JOIN ClosedTicketSolutions ON ClosedTickets.TicketNum = ClosedTicketSolutions.TicketNum
        WHERE (ClosedTickets.FirstName LIKE '%John%')
        ORDER BY ClosedDate Desc, ClosedTicketSolutions.EntryDate, ClosedTicketSolutions.Entry_ID

    One thing that I'm not able to do right now is search both open and closed tickets at the same time. I don't think a union would work in my case. So I'm wondering if I should store the open and closed tickets in the same table and just have a field indicating whether or not the ticket is closed. The only problem I can foresee is that we have so many closed tickets already (nearly 30,000) that the whole system might perform slowly. Would it be a bad idea to combine the open and closed tickets?
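    Combining them is the conventional design: a single Tickets table (and a single TicketSolutions table) where "closed" is just a nullable timestamp or status flag, so open versus closed becomes a filter instead of a row move, and one query searches both. Around 30,000 rows is tiny for MySQL as long as the filter column is indexed. A sketch with assumed names for the merged tables:

        -- One table for all tickets; closed is a nullable timestamp
        ALTER TABLE Tickets ADD COLUMN ClosedDate DATETIME NULL;
        CREATE INDEX ix_tickets_closeddate ON Tickets (ClosedDate);

        -- Open tickets only
        SELECT * FROM Tickets WHERE ClosedDate IS NULL;

        -- Search open and closed together, no UNION required
        SELECT t.*, CONCAT(tech.LastName, ', ', tech.FirstName) AS TechFullName
        FROM Tickets t
        LEFT JOIN Technicians tech ON tech.Tech_ID = t.Tech_ID
        WHERE t.FirstName LIKE '%John%';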

    Read the article

  • SQLAuthority News – MS Access Database is the Way to Go – April 1st Humor

    - by pinaldave
    First of all, today is April 1, April Fools' Day, so I have written this post for some light entertainment. My friend has just sent me an email about why a person should go for Access Database. For a short background, I used to be an MS Access user once (I will not call myself an MS Access DBA), and I must say I had a good time with the database at that time. As time passed by, I moved from MS Access to SQL Server. Well, as for my friend's email, his reasons for considering MS Access really made me laugh. MS Access may have a few points where it totally makes sense to use it. However, in the email that I received, there was not a single reason which was valid. In fact, I thought it was an April 1st joke, just delivered a little early. Let us see some of the reasons from that email. Thanks to Mahesh Bhesania for sending this email to me.

    - MS Access comes with lots of free stuff, e.g. MS Excel
    - MS Access is the most preferred desktop database system
    - MS Access can import data from MS Excel and SQL Server
    - MS Access provides a real-time database
    - MS Access has a free IDE-to-VB Script
    - MS Access fits well in your hard drive

    I actually think that the above points are either incorrect beliefs of some users, or someone just wrote them for a laugh with such inaccurate data. And, for the same reason, I decided to browse the Internet and do some research on the MS Access database to verify my thoughts. While searching on this subject, I found the following two interesting statements from the site "Microsoft Access Database, Why Choose It?":

    - Other software manufacturers are more likely to provide interfaces to MS Access than any other desktop database system
    - Microsoft Access consulting rates are typically lower for Access consultants compared to Oracle or SQL Server consultants

    The second one may be the worst reason for you to switch to MS Access if you are already an SQL Server consultant. With this cartoon, have you ever felt like you were one of these chickens at some point in time? I guess that the moment might have just happened before the minute we say "I guess we were on the same page?" Does this mean we are IN the same table, or ON the same table?! (I accept it's a bad joke!) It is All Fools' Day after all, so just laugh! If you have something funny but non-offensive to share, just leave your comment here.

    Reference: Pinal Dave (http://blog.SQLAuthority.com), Cartoon source unknown. Filed under: Software Development, SQL, SQL Authority, SQL Humor, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: MS ACCESS

    Read the article

  • LLBLGen Pro v3.1 released!

    - by FransBouma
    Yesterday we released LLBLGen Pro v3.1! Version 3.1 comes with new features and enhancements, which I'll describe briefly below. v3.1 is a free upgrade for v3.x licensees.

    What's new / changed?

    Designer

    - Extensible import system. An extensible import system has been added to the designer to import project data from external sources. Importers are plug-ins which import project meta-data (like entity definitions, mappings and relational model data) from an external source into the loaded project. In v3.1, an importer plug-in for importing project elements from existing LLBLGen Pro v3.x project files has been included. You can use this importer to create source projects from which you import parts of models to build your actual project with.
    - Model-only relationships. In v3.1, relationships of the type 1:1, m:1 and 1:n can be marked as model-only. A model-only relationship isn't required to have a backing foreign key constraint in the relational model data. They're ideal for projects which have to work with relational databases where changes can't always be made or some relationships can't be added to the relational model (e.g. the ones which are important for the entity model, but are not allowed in the relational model for some reason).
    - Custom field ordering. Although fields in an entity definition don't really have an ordering, in some situations it can be important to have the entity fields in a given order, e.g. when you use compound primary keys. Field ordering can be defined using a pop-up dialog which can be opened in various ways, e.g. inside the project explorer, model view and entity editor. It can also be set automatically during refreshes based on new settings.
    - Command-line relational model data refresher tool, CliRefresher.exe. The command-line refresh tool shipped with v2.6 is now available for v3.1 as well.
    - Navigation enhancements in various designer elements. It's now easier to find elements like entities, typed views etc. in the project explorer from editors, to navigate to related entities in the project explorer by right-clicking a relationship, to navigate to the super-type in the project explorer when right-clicking an entity, and to navigate to the sub-type in the project explorer when right-clicking a sub-type node.
    - Minor visual enhancements / tweaks.

    LLBLGen Pro Runtime Framework

    - Entity creation is now up to 30% faster and takes 5% less memory. Creating an entity object has been optimized further by tweaks inside the framework, making instantiating an entity object up to 30% faster. It now also takes up to 5% less memory than in v3.0.
    - Prefetch Path node merging is now up to 20-25% faster. Setting entity references required the creation of a new relationship object. As this relationship object is only used internally (for syncing), it could be cached. This increases performance by 20-25% in the merging functionality.
    - Entity fetches are now up to 20% faster. A large number of tweaks have been applied to make entity fetches up to 20% faster than in v3.0.
    - Full WCF RIA support. It's now possible to use your LLBLGen Pro runtime framework powered domain layer in a WCF RIA application using the VS.NET tools for WCF RIA Services. WCF RIA Services is a Microsoft technology for .NET 4, typically used within Silverlight applications.
    - SQL Server DQE compatibility level is now per instance (usable in Adapter). It's now possible to set the compatibility level of the SQL Server Dynamic Query Engine (DQE) per instance of the DQE, instead of the global setting it was before. The global setting is still available and is used as the default value for the per-instance compatibility level. You can use this to switch between CE Desktop and normal SQL Server compatibility per DataAccessAdapter instance.
    - Support for the COUNT_BIG aggregate function (SQL Server specific). The aggregate function COUNT_BIG has been added to the list of aggregate functions available in the framework.
    - Minor changes / tweaks.

    I'm especially pleased with the import system, as that makes working with entity models a lot easier. The import system lets you import from another LLBLGen Pro v3 project any entity definition, mapping and/or meta-data like table definitions. This way you can build repository projects where you store model fragments, e.g. the building blocks for a customer-order system, a user credential model etc.; any model you can think of. In most projects, you'll recognize that some parts of your new model look familiar. In these cases it would have been easier if you had been able to import these parts from projects you had pre-created. With LLBLGen Pro v3.1 you can.

    For example, say you have an Oracle schema called CRM which contains the bread 'n' butter customer-order-product kind of model. You create an entity model from that schema and save it in a project file. Now you start working on another project for another customer and you have to use SQL Server. You also start using model-first development, so you develop the entity model from scratch as there's no existing database. As this customer also requires a CRM-like entity model, you import the entities from your saved Oracle project into this new SQL Server targeting project. Because you don't work with Oracle this time, you don't import the relational meta-data, just the entities, their relationships and possibly their inheritance hierarchies, if any. As they're now entities in your project, you can change them a bit to match the new customer's requirements. This can save you a lot of time, because you can re-use pre-fab model fragments for new projects.

    In the example above there are no tables yet (as you work model-first), so using the forward mapping capabilities of LLBLGen Pro v3 creates the tables, PK constraints, unique constraints and FK constraints for you. This way you can build a nice repository of model fragments which you can re-use in new projects.

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building / studying software systems. He is a top-rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and, most recently, Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt.

    As data in your applications grows, it's the database that usually becomes a bottleneck. It's hard to scale a relational DB, and the preferred approach for large-scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools / techniques which can allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don't quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, an effective schema (for an information worker to self-service herself), historical data, better performance (flat data, no joins), etc. This is where a need for a data warehouse or an OLAP system arises.

    A key point to remember is that a data warehouse is mostly a relational database. It's built on top of the same concepts: tables, rows, columns, primary keys, foreign keys, etc. Before we talk about how data warehouses are typically structured, let's understand the key components that can create a data flow between OLTP systems and OLAP systems. There are 3 major areas to it:

    a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in context could be how many customers, divided by geography, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out-of-box features provided by some databases; e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements.

    b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that runs periodically, takes these changes from the OLTP system and dumps them into the data warehouse. There are many tools out there that can help you fill this gap; SQL Server Integration Services happens to be one of them.

    c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains is: how will the warehouse record these changes? This structural concern in the data warehouse arena is often covered under something called Slowly Changing Dimension (SCD).
    While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This would create multiple records for a given entity in a relational database, and data warehouses prefer having their own primary key, often known as a surrogate key.

    As I mentioned, a data warehouse is just a relational database, but industry often attributes a specific schema style to data warehouses. These styles are the Star Schema and the Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one), which is easy to understand / use, easy to query and easy to slice / dice. A star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables hold these numbers and have references (foreign keys) to a set of tables that provide context around those facts. For example, if you have recorded 10,000 USD in sales, that number would go in a sales fact table and could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table which contains the dates between which that sale was made. These agent and time tables are called dimensions, and they provide context to the numbers stored in fact tables. This schema structure, with the fact at the center surrounded by dimensions, is called a star schema. A similar structure, with the difference that the dimension tables are normalized, is called a snowflake schema.

    This relational structure of facts and dimensions serves as an input for another analysis structure called a Cube. Though physically a Cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it's a multidimensional structure where dimensions define the sides of the cube and facts define its content. Facts are often called Measures inside a cube. Dimensions often tend to form a hierarchy; e.g. Product may be broken into categories, and categories in turn into individual items. Category and Items are often referred to as Levels and their constituents as Members, with the overall structure called a Hierarchy. Measures are rolled up as per the dimensional hierarchy, and these rolled-up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with, but don't worry, it will sink in as you start working with Cubes and the rest.

    Let's look at a few other terms that we run into when talking about data warehouses. ODS, or an Operational Data Store, is a frequently misused term. There will be a few users in your organization who want to report on the most current data and can't afford to miss a single transaction for their report. Then there is another set of users who typically don't care how current the data is: mostly senior-level executives who are interested in trending, mining, forecasting, strategizing, etc. and don't care for that one specific transaction. This is where an ODS can come in handy. An ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS is short-lived, i.e. kept for a few months, and the ODS syncs with the OLTP system every few minutes. The data warehouse can periodically sync with the ODS, either daily or weekly, depending on business drivers.
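    To make the star schema concrete before moving on to data marts, here is a minimal sketch of the sales example above (table and column names are invented for illustration): a fact table of numbers keyed to surrogate-keyed agent and date dimensions.

        -- Dimension: who sold (surrogate key; an SCD would add validity columns)
        CREATE TABLE DimAgent (
            AgentKey  INT IDENTITY PRIMARY KEY,
            AgentId   INT NOT NULL,            -- business key from the OLTP system
            AgentName NVARCHAR(100) NOT NULL,
            Region    NVARCHAR(50) NOT NULL
        );

        -- Dimension: when it was sold
        CREATE TABLE DimDate (
            DateKey INT PRIMARY KEY,           -- e.g. 20100415
            [Date]  DATE NOT NULL,
            [Month] TINYINT NOT NULL,
            [Year]  SMALLINT NOT NULL
        );

        -- Fact: the numbers to slice and dice
        CREATE TABLE FactSales (
            AgentKey    INT NOT NULL REFERENCES DimAgent (AgentKey),
            DateKey     INT NOT NULL REFERENCES DimDate (DateKey),
            SalesAmount DECIMAL(12, 2) NOT NULL,
            Quantity    INT NOT NULL
        );

        -- "Sales by region for April 2010": slicing the fact by two dimensions
        SELECT a.Region, SUM(f.SalesAmount) AS TotalSales
        FROM FactSales AS f
        JOIN DimAgent AS a ON a.AgentKey = f.AgentKey
        JOIN DimDate  AS d ON d.DateKey = f.DateKey
        WHERE d.[Year] = 2010 AND d.[Month] = 4
        GROUP BY a.Region;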
    Data marts are another frequently talked-about topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an entire enterprise are normally too big to scope, build, manage, track, etc. Hence they are often scaled down to something called a Data mart, which supports a specific segment of business like sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. Industry is divided when it comes to the use of data marts. Some experts prefer having data marts along with a central data warehouse; the data warehouse here acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Perfect Your MySQL Database Administrators Skills

    - by Antoinette O'Sullivan
    With its proven ease-of-use, performance, and scalability, MySQL has become the leading database choice for web-based applications, used by high-profile web properties including Google, Yahoo!, Facebook, YouTube, Wikipedia and thousands of mid-sized companies. Many organizations deploy both Oracle Database and MySQL side by side to serve different needs, and as a database professional you can find training courses on both topics at Oracle University! Check out the upcoming Oracle Database training courses and MySQL training courses. Even if you're only managing Oracle Databases at this point in time, getting familiar with MySQL Database will broaden your career path, with growing job demand.

    Hone your skills as a MySQL Database Administrator by taking the MySQL for Database Administrators course, which teaches you how to secure privileges, set resource limitations and access controls, and covers backup and recovery basics. You also learn how to create and use stored procedures, triggers and views. You can take this 5-day course through three delivery methods:

    - Training-on-Demand: Take this course at your own pace and at a time that suits you through high-quality streaming video delivery. You also get to schedule time in a classroom environment to perform the hands-on exercises.
    - Live-Virtual: Attend a live instructor-led event from your own desk. Hundreds of events are already on the calendar in many timezones.
    - In-Class: Travel to an education center to attend this class. A sample of events is shown below:

        Location                      Date               Delivery Language
        Budapest, Hungary             26 November 2012   Hungarian
        Prague, Czech Republic        19 November 2012   Czech
        Warsaw, Poland                10 December 2012   Polish
        Belfast, Northern Ireland     26 November 2012   English
        London, England               26 November 2012   English
        Rome, Italy                   19 November 2012   Italian
        Lisbon, Portugal              12 November 2012   European Portuguese
        Porto, Portugal               21 January 2013    European Portuguese
        Amsterdam, Netherlands        19 November 2012   Dutch
        Nieuwegein, Netherlands       8 April 2013       Dutch
        Barcelona, Spain              4 February 2013    Spanish
        Madrid, Spain                 19 November 2012   Spanish
        Mechelen, Belgium             25 February 2013   English
        Windhof, Luxembourg           19 November 2012   English
        Johannesburg, South Africa    9 December 2012    English
        Cairo, Egypt                  20 October 2012    English
        Nairobi, Kenya                26 November 2012   English
        Petaling Jaya, Malaysia       29 October 2012    English
        Auckland, New Zealand         5 November 2012    English
        Wellington, New Zealand       23 October 2012    English
        Brisbane, Australia           19 November 2012   English
        Edmonton, Canada              7 January 2013     English
        Vancouver, Canada             7 January 2013     English
        Ottawa, Canada                22 October 2012    English
        Toronto, Canada               22 October 2012    English
        Montreal, Canada              22 October 2012    English
        Mexico City, Mexico           10 December 2012   Spanish
        Sao Paulo, Brazil             10 December 2012   Brazilian Portuguese

    For more information on this course or any aspect of the MySQL curriculum, visit http://oracle.com/education/mysql.

    Read the article

  • But what version is the database now?

    - by BuckWoody
    When you upgrade your system to SQL Server 2008 R2, you'll know that the instance is at that version by using the standard commands like SELECT @@VERSION or EXEC xp_msver. My system came back with this info when I typed those:

        Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86)
        Apr  2 2010 15:53:02
        Copyright (c) Microsoft Corporation
        Developer Edition on Windows NT 6.0 <X86> (Build 6002: Service Pack 2) (Hypervisor)

        Index  Name                 Internal_Value  Character_Value
        1      ProductName          NULL            Microsoft SQL Server
        2      ProductVersion       655410          10.50.1600.1
        3      Language             1033            English (United States)
        4      Platform             NULL            NT INTEL X86
        5      Comments             NULL            SQL
        6      CompanyName          NULL            Microsoft Corporation
        7      FileDescription      NULL            SQL Server Windows NT
        8      FileVersion          NULL            2009.0100.1600.01 ((KJ_RTM).100402-1540 )
        9      InternalName         NULL            SQLSERVR
        10     LegalCopyright       NULL            Microsoft Corp. All rights reserved.
        11     LegalTrademarks      NULL            Microsoft SQL Server is a registered trademark of Microsoft Corporation.
        12     OriginalFilename     NULL            SQLSERVR.EXE
        13     PrivateBuild         NULL            NULL
        14     SpecialBuild         104857601       NULL
        15     WindowsVersion       393347078       6.0 (6002)
        16     ProcessorCount       1               1
        17     ProcessorActiveMask  1               1
        18     ProcessorType        586             PROCESSOR_INTEL_PENTIUM
        19     PhysicalMemory       2047            2047 (2146934784)
        20     Product ID           NULL            NULL

    But a database's properties are separate from the Instance's. After an upgrade, you always want to make sure that the compatibility options (which have much to do with how NULLs and other objects are treated) are at what you expect. For the most part, as long as the application can handle it, I set my compatibility levels to the latest version. For SQL Server 2008, that was "10.0" or "10". You can do this with the ALTER DATABASE command, or you can just right-click the database and select "Properties" and then "Database Options" in SQL Server Management Studio. To check the database compatibility level, I use this query:

        SELECT name, cmptlevel FROM sys.sysdatabases

    When I did that this morning, I saw that the databases (all of them) were at 10.0, not 10.5 like the Instance. That's expected: we didn't revise the database format up with the Instance for this particular release. Didn't want to catch you by surprise on that. While your databases should be at the "proper" level for your situation, you can't rely on the compatibility level to indicate the Instance level. More info on the ALTER DATABASE command in SQL Server 2008 R2 is here: http://technet.microsoft.com/en-us/library/bb510680(SQL.105).aspx
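    For completeness, bringing a database up to the 2008-level behavior mentioned above is a single statement; AdventureWorks is just a stand-in name, and sys.databases is the newer alternative to sys.sysdatabases:

        -- compatibility_level is the successor to sysdatabases.cmptlevel
        SELECT name, compatibility_level FROM sys.databases;

        -- Set a database to SQL Server 2008 behavior (level 100, i.e. "10.0")
        ALTER DATABASE AdventureWorks SET COMPATIBILITY_LEVEL = 100;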

    Read the article

  • reference list for non-IT driven algorithmic patterns

    - by Quicker
    I am looking for a reference list of non-IT-driven algorithmic patterns (which can still be helped by IT implementations). An example list would be:

        name; short desc; reference
        Travelling Salesman; find the shortest possible route on a multiple-target path; http://en.wikipedia.org/wiki/Travelling_salesman_problem
        Resource Disposition (aka Regulation); distribute a limited/exceeding input over a given number of output receivers based on distribution rules; http://database-programmer.blogspot.de/2010/12/critical-analysis-of-algorithm-sproc.html

    If there is no such list, but you instantly think of something specific, please 'put it on the desk'. Maybe I can compile something out of the input I get here (actually I am very frustrated, as I did not find any such list through research by myself).

    Details on scoping: I found it very hard to formulate what I want in a way that excludes everything I do not need (which may be the reason why I did not find anything via Google). There is a database-centric definition of what I am looking for in the 'Processes' section of the second example link. That somehow fits, but the database focus drifts away from the pattern thinking I have in mind. So here are my own thoughts around what's in and what's out:

    - I am NOT looking for a foundational algorithm reference list of the kind implemented as the basis of a programming language. E.g. the PHP reference describes substr and strlen; those implement algorithms, but are not what I am looking for.
    - The problem the algorithm addresses would exist even if there were no computers (or other IT components).
    - The main focus of the algorithm is NOT to help other algorithms.
    - Chances are high that there are implementations of the solution, or workarounds without any IT support, out there in the world.
    - However, the algorithm could beneficially be implemented / fully supported by a software application. That means the problem has to be addressed anyway, but running an implementation with software automates the process (that is why I posted it on Stack Overflow and not somewhere else).
    - Typically such implementations have more than one input field value and more than one output field value, which implies they could not be implemented as a simple function (which is fixed to produce not more than one output value).
    - In a normalized data model, such outputs often span multiple rows (sometimes multiple tables), whereby the number of output rows depends on the input parameters and the rows in the table(s) at start time. This implies that any implementation/procedure must interact with a database (read and/or write).
    - I am mainly looking for patterns, not specific implementations. Example: the Travelling Salesman assumes arbitrary coordinates; it does not say "you need a table targets with fields x and y". However, sometimes descriptions are very focused on examples with specific implementations; no worries, as long as the pattern gets clear.

    Read the article

  • Advantage database throws an exception when attempting to delete a record with a like statement used

    - by ChrisR
    The code below shows that a record is deleted when the SQL statement is:

        select * from test where qty between 50 and 59

    but the SQL statement:

        select * from test where partno like 'PART/005%'

    throws the exception:

        Advantage.Data.Provider.AdsException: Error 5072: Action requires read-write access to the table

    How can you reliably delete a record with a WHERE clause applied? Note: I'm using Advantage Database v9.10.1.9, VS2008, .NET Framework 3.5 and WinXP 32-bit.

        using System.IO;
        using Advantage.Data.Provider;
        using AdvantageClientEngine;
        using NUnit.Framework;

        namespace NetworkEidetics.Core.Tests.Dbf
        {
            [TestFixture]
            public class AdvantageDatabaseTests
            {
                private const string DefaultConnectionString =
                    @"data source={0};ServerType=local;TableType=ADS_CDX;LockMode=COMPATIBLE;TrimTrailingSpaces=TRUE;ShowDeleted=FALSE";
                private const string TestFilesDirectory = "./TestFiles";

                [SetUp]
                public void Setup()
                {
                    const string createSql = @"CREATE TABLE [{0}] (ITEM_NO char(4), PARTNO char(20), QTY numeric(6,0), QUOTE numeric(12,4))";
                    const string insertSql = @"INSERT INTO [{0}] (ITEM_NO, PARTNO, QTY, QUOTE) VALUES('{1}', '{2}', {3}, {4})";
                    const string filename = "test.dbf";
                    var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory);

                    using (var connection = new AdsConnection(connectionString))
                    {
                        connection.Open();

                        // create the test table
                        using (var transaction = connection.BeginTransaction())
                        {
                            using (var command = connection.CreateCommand())
                            {
                                command.CommandText = string.Format(createSql, filename);
                                command.Transaction = transaction;
                                command.ExecuteNonQuery();
                            }
                            transaction.Commit();
                        }

                        // populate it with 1000 rows
                        using (var transaction = connection.BeginTransaction())
                        {
                            for (var i = 0; i < 1000; ++i)
                            {
                                using (var command = connection.CreateCommand())
                                {
                                    var itemNo = string.Format("{0}", i);
                                    var partNumber = string.Format("PART/{0:d4}", i);
                                    var quantity = i;
                                    var quote = i * 10;
                                    command.CommandText = string.Format(insertSql, filename, itemNo, partNumber, quantity, quote);
                                    command.Transaction = transaction;
                                    command.ExecuteNonQuery();
                                }
                            }
                            transaction.Commit();
                        }

                        connection.Close();
                    }
                }

                [TearDown]
                public void TearDown()
                {
                    File.Delete("./TestFiles/test.dbf");
                }

                [Test]
                public void CanDeleteRecord()
                {
                    const string sqlStatement = @"select * from test";
                    Assert.AreEqual(1000, GetRecordCount(sqlStatement));
                    DeleteRecord(sqlStatement, 3);
                    Assert.AreEqual(999, GetRecordCount(sqlStatement));
                }

                [Test]
                public void CanDeleteRecordBetween()
                {
                    const string sqlStatement = @"select * from test where qty between 50 and 59";
                    Assert.AreEqual(10, GetRecordCount(sqlStatement));
                    DeleteRecord(sqlStatement, 3);
                    Assert.AreEqual(9, GetRecordCount(sqlStatement));
                }

                [Test]
                public void CanDeleteRecordWithLike()
                {
                    const string sqlStatement = @"select * from test where partno like 'PART/005%'";
                    Assert.AreEqual(10, GetRecordCount(sqlStatement));
                    DeleteRecord(sqlStatement, 3);
                    Assert.AreEqual(9, GetRecordCount(sqlStatement));
                }

                public int GetRecordCount(string sqlStatement)
                {
                    var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory);
                    using (var connection = new AdsConnection(connectionString))
                    {
                        connection.Open();
                        using (var command = connection.CreateCommand())
                        {
                            command.CommandText = sqlStatement;
                            var reader = command.ExecuteExtendedReader();
                            return reader.GetRecordCount(AdsExtendedReader.FilterOption.RespectFilters);
                        }
                    }
                }

                public void DeleteRecord(string sqlStatement, int rowIndex)
                {
                    var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory);
                    using (var connection = new AdsConnection(connectionString))
                    {
                        connection.Open();
                        using (var command = connection.CreateCommand())
                        {
                            command.CommandText = sqlStatement;
                            var reader = command.ExecuteExtendedReader();
                            reader.GotoBOF();
                            reader.Read();
                            if (rowIndex != 0)
                            {
                                ACE.AdsSkip(reader.AdsActiveHandle, rowIndex);
                            }
                            reader.DeleteRecord();
                        }
                        connection.Close();
                    }
                }
            }
        }
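    A plausible explanation, hedged since it rests on how Advantage chooses cursor types: the BETWEEN query can be resolved as a live (writable) cursor, while the LIKE predicate forces a static, read-only result set, so reader.DeleteRecord fails with error 5072. One workaround that avoids positioning a client-side cursor altogether is to push the delete to the server:

        -- Let the server delete the rows instead of a read-only client cursor
        DELETE FROM test WHERE partno LIKE 'PART/005%';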

    Read the article

  • Terminal non-responsive on load, can't enter anything until CTRL+C

    - by Silver Light
    Hello! I have an issue with the terminal in Ubuntu 10.04. When I launch it, it hangs, and I cannot do anything until I press CTRL+C. I cannot remember when this started. What can be wrong? It looks like the terminal is loading or processing something each time it starts. How can I diagnose and solve this problem?

    EDIT: Here are the contents of ~/.bashrc:

        # ~/.bashrc: executed by bash(1) for non-login shells.
        # see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
        # for examples

        # If not running interactively, don't do anything
        [ -z "$PS1" ] && return

        # don't put duplicate lines in the history. See bash(1) for more options
        # ... or force ignoredups and ignorespace
        HISTCONTROL=ignoredups:ignorespace

        # append to the history file, don't overwrite it
        shopt -s histappend

        # for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
        HISTSIZE=1000
        HISTFILESIZE=2000

        # check the window size after each command and, if necessary,
        # update the values of LINES and COLUMNS.
        shopt -s checkwinsize

        # make less more friendly for non-text input files, see lesspipe(1)
        [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"

        # set variable identifying the chroot you work in (used in the prompt below)
        if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then
            debian_chroot=$(cat /etc/debian_chroot)
        fi

        # set a fancy prompt (non-color, unless we know we "want" color)
        case "$TERM" in
            xterm-color) color_prompt=yes;;
        esac

        # uncomment for a colored prompt, if the terminal has the capability; turned
        # off by default to not distract the user: the focus in a terminal window
        # should be on the output of commands, not on the prompt
        #force_color_prompt=yes

        if [ -n "$force_color_prompt" ]; then
            if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
                # We have color support; assume it's compliant with Ecma-48
                # (ISO/IEC-6429). (Lack of such support is extremely rare, and such
                # a case would tend to support setf rather than setaf.)
                color_prompt=yes
            else
                color_prompt=
            fi
        fi

        if [ "$color_prompt" = yes ]; then
            PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
        else
            PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
        fi
        unset color_prompt force_color_prompt

        # If this is an xterm set the title to user@host:dir
        case "$TERM" in
        xterm*|rxvt*)
            PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
            ;;
        *)
            ;;
        esac

        # enable color support of ls and also add handy aliases
        if [ -x /usr/bin/dircolors ]; then
            test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
            alias ls='ls --color=auto'
            #alias dir='dir --color=auto'
            #alias vdir='vdir --color=auto'
            alias grep='grep --color=auto'
            alias fgrep='fgrep --color=auto'
            alias egrep='egrep --color=auto'
        fi

        # some more ls aliases
        alias ll='ls -alF'
        alias la='ls -A'
        alias l='ls -CF'

        # Add an "alert" alias for long running commands. Use like so:
        #   sleep 10; alert
        alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'

        # Alias definitions.
        # You may want to put all your additions into a separate file like
        # ~/.bash_aliases, instead of adding them here directly.
        # See /usr/share/doc/bash-doc/examples in the bash-doc package.
        if [ -f ~/.bash_aliases ]; then
            . ~/.bash_aliases
        fi

        # enable programmable completion features (you don't need to enable
        # this, if it's already enabled in /etc/bash.bashrc and /etc/profile
        # sources /etc/bash.bashrc).
        if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
            . /etc/bash_completion
        fi

        # Source .profile
        if [ -f ~/.profile ]; then
            . ~/.profile
        fi

    Setting -x at the beginning showed me that it tries to repeat this without stopping:

        +++++++++++++++++++ '[' 'complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' '!=' 'complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' ']'
        +++++++++++++++++++ line='complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf'
        +++++++++++++++++++ line='complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf'
        +++++++++++++++++++ line=' acroread gpdf xpdf'
        +++++++++++++++++++ list=("${list[@]}" $line)
        +++++++++++++++++++ read line

    Read the article

  • The World of SQL Database Deployment

    - by GGBlogger
    In my early development days, I used Microsoft Access for building databases. It made things easy, since I only needed to package the database with the installation package so my clients would have access to it. When we began the development of a new package in Visual Studio .NET, I decided to use SQL Server Express. It was free and provided good tools - also free. I thought it was a tremendous idea until it came time to distribute our new software! What a surprise.

    The nightmare
    Ah, the choices! Detach the database and have the client reattach it to a newly installed - oh wait. FIRST my new client needs to download and install SQL Server Express with SQL Server Management Studio. That's not a great thing, but it is one more nightmare step for users who may have other versions of SQL installed. Then the question became - do we detach and reattach, or do we do a backup? It was too late (bad planning) to revert to Microsoft Access, but we badly needed a simple way to package and distribute both the database AND sample contents.

    Red Gate to the rescue
    It took me a while to find an answer, but I did find it in a package called SQL Packager, sold by a relatively unpublicized company in England called Red Gate. They call their products "ingeniously simple" and I must agree with that description. With SQL Packager you point to the database (more in a minute) you want to distribute. A few mouse clicks and dialogs, and you have an executable file that you can ship virtually anywhere and virtually any way, which, when run, installs the database on your destination SQL Server instance! It really is that simple.

    Easier to show than tell
    Let's explore a hypothetical case. Let's say you have a local SQL database of customers and you have decided you want to share it with your subsidiaries or partners. Here is the underlying screen you will see on starting SQL Packager. There are a bunch of possibilities here, but I'm going to keep this relatively simple. At this point I simply want to illustrate the simplicity of generating an executable to deliver your database. You will notice that you can set up a new package, edit an existing package, or change a bunch of options.

    Start SQL Packager
    And the following is the default dialog you get on startup. In the next dialog, I've selected the Server and Database. I've also selected Windows Authentication. Pressing Next causes SQL Packager to run a number of checks and produce a report. Now you're given a comprehensive list of what is going to be packaged, and you're allowed to change it if you desire. I've never made any changes here, so I can't really make any suggestions. That just illustrates the comprehensive nature of so many Red Gate products, including this one. Clicking Next gives you still further options. SQL Packager then works its magic and shows you a dialog with the results. Packager then gives you a dialog of the scripts it has generated; the capture above only shows 1 of 4 tabs. Finally, pressing Next gives you the option to generate a .NET executable or a C# project. I've only generated an executable, so I'm not in a position to tell you what the C# project looks like. That may be the subject of further discussions. You can rename the package and tell SQL Packager where to save it.

    I've skipped a lot, but this will serve to illustrate the comprehensive (and ingenious) things Red Gate does. All in all, it's a superb way to distribute populated SQL databases. Oh, and we'll save running the resulting executable for later as well, but believe me, it's insanely simple.
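
    For contrast, the backup/restore route the post weighs as an alternative looks roughly like this. This is a sketch only: the database name, logical file names, and paths are placeholders, not anything SQL Packager generates:

    -- On the source machine: produce a .bak you can ship alongside the installer.
    BACKUP DATABASE CustomerDb
        TO DISK = 'C:\dist\CustomerDb.bak'
        WITH INIT;

    -- On the client machine: restore into the freshly installed Express instance.
    RESTORE DATABASE CustomerDb
        FROM DISK = 'C:\install\CustomerDb.bak'
        WITH MOVE 'CustomerDb'     TO 'C:\Data\CustomerDb.mdf',
             MOVE 'CustomerDb_log' TO 'C:\Data\CustomerDb_log.ldf';

    Every one of those names has to be right on the client machine, which is exactly the fragility the packaged executable removes.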

    Read the article

  • Non-blocking I/O using Servlet 3.1: Scalable applications using Java EE 7 (TOTD #188)

    - by arungupta
    Servlet 3.0 allowed asynchronous request processing, but only traditional I/O was permitted. This can restrict the scalability of your applications. In a typical application, ServletInputStream is read in a while loop:

    public class TestServlet extends HttpServlet {
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException, ServletException {
            ServletInputStream input = request.getInputStream();
            byte[] b = new byte[1024];
            int len = -1;
            while ((len = input.read(b)) != -1) {
                . . .
            }
        }
    }

    If the incoming data blocks, or is streamed slower than the server can read it, the server thread is left waiting for that data. The same can happen if the data is written to ServletOutputStream. This is resolved in Servlet 3.1 (JSR 340, to be released as part of Java EE 7) by adding event listeners - the ReadListener and WriteListener interfaces. These are registered using ServletInputStream.setReadListener and ServletOutputStream.setWriteListener. The listeners have callback methods that are invoked when content is available to be read or can be written without blocking. The updated doGet in our case will look like:

    AsyncContext context = request.startAsync();
    ServletInputStream input = request.getInputStream();
    input.setReadListener(new MyReadListener(input, context));

    Invoking the setXXXListener methods indicates that non-blocking I/O is used instead of traditional I/O. At most one ReadListener can be registered on ServletInputStream, and similarly at most one WriteListener can be registered on ServletOutputStream. ServletInputStream.isReady and ServletInputStream.isFinished are new methods to check the status of a non-blocking read. ServletOutputStream.canWrite is a new method to check if data can be written without blocking. The MyReadListener implementation looks like:

    @Override
    public void onDataAvailable() {
        try {
            StringBuilder sb = new StringBuilder();
            int len = -1;
            byte b[] = new byte[1024];
            while (input.isReady() && (len = input.read(b)) != -1) {
                String data = new String(b, 0, len);
                System.out.println("--> " + data);
            }
        } catch (IOException ex) {
            Logger.getLogger(MyReadListener.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    @Override
    public void onAllDataRead() {
        System.out.println("onAllDataRead");
        context.complete();
    }

    @Override
    public void onError(Throwable t) {
        t.printStackTrace();
        context.complete();
    }

    This implementation has three callbacks: onDataAvailable is called whenever data can be read without blocking; onAllDataRead is invoked when the data for the current request has been completely read; onError is invoked if there is an error processing the request. Notice that context.complete() is called in onAllDataRead and onError to signal the completion of the read. For now, the first chunk of available data needs to be read in the doGet or service method of the Servlet; the rest of the data can then be read in a non-blocking way using the ReadListener. This is expected to be cleaned up so that all reads can happen in the ReadListener alone. The sample explained above can be downloaded from here and works with GlassFish 4.0 build 64 and onwards. The slides and a complete re-run of the "What's new in Servlet 3.1: An Overview" session at JavaOne are available here. Here are some more references for you: Java EE 7 Specification Status, Servlet Specification Project, JSR Expert Group Discussion Archive, Servlet 3.1 Javadocs.
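
    The post describes the write side but does not show it. A minimal sketch of a WriteListener might look like the following; the class name and the chunk queue are illustrative, not from the post, and the readiness check the post calls canWrite was renamed isReady() in the final specification:

    import java.io.IOException;
    import java.util.LinkedList;
    import java.util.Queue;
    import javax.servlet.AsyncContext;
    import javax.servlet.ServletOutputStream;
    import javax.servlet.WriteListener;

    public class MyWriteListener implements WriteListener {
        private final ServletOutputStream output;
        private final AsyncContext context;
        private final Queue<byte[]> chunks = new LinkedList<>(); // pending data (illustrative)

        public MyWriteListener(ServletOutputStream output, AsyncContext context) {
            this.output = output;
            this.context = context;
        }

        @Override
        public void onWritePossible() throws IOException {
            // Write only while the container can accept data without blocking
            // (the canWrite check described above; isReady() in the final API).
            while (output.isReady() && !chunks.isEmpty()) {
                output.write(chunks.poll());
            }
            if (chunks.isEmpty()) {
                context.complete(); // all data sent
            }
        }

        @Override
        public void onError(Throwable t) {
            t.printStackTrace();
            context.complete();
        }
    }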

    Read the article

  • Creating an object relational schema from a Class diagram

    - by Caylem
    Hi Ladies and Gents. I'd like some help converting the following UML diagram: UML Diagram. The diagram shows 4 classes and relates to a loyalty-card scheme for an imaginary supermarket. I'd like to create an object-relational database schema from it for use with Oracle 10g/11g. I'm not sure where to begin, so if somebody could give me a head start that would be great. I'm looking for help actually starting the schema: showing abstraction, constraints, types (subtypes and supertypes), methods and functions. Note: I'm not looking for anyone to make any comments regarding the actual classes and whether changes should be made to the diagram, just the schema. Thanks
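
    Since the diagram itself isn't reproduced here, the type names below (person_t, customer_t, loyalty_card_t) are guesses based on the description; the Oracle 10g/11g object-relational syntax is the part to take away. A head-start sketch:

    -- Supertype: NOT FINAL so it can be specialised (abstraction/inheritance).
    CREATE OR REPLACE TYPE person_t AS OBJECT (
        person_id  NUMBER,
        name       VARCHAR2(60),
        MEMBER FUNCTION display_label RETURN VARCHAR2
    ) NOT FINAL;
    /

    -- Subtype: a customer is a person with a join date.
    CREATE OR REPLACE TYPE customer_t UNDER person_t (
        joined_on  DATE
    );
    /

    -- A dependent type that references customers.
    CREATE OR REPLACE TYPE loyalty_card_t AS OBJECT (
        card_no    NUMBER,
        points     NUMBER,
        owner      REF customer_t
    );
    /

    -- Method implementation lives in the type body.
    CREATE OR REPLACE TYPE BODY person_t AS
        MEMBER FUNCTION display_label RETURN VARCHAR2 IS
        BEGIN
            RETURN name || ' (#' || person_id || ')';
        END;
    END;
    /

    -- Object tables, with constraints applied at the table level.
    CREATE TABLE customers OF customer_t (
        person_id PRIMARY KEY,
        name      NOT NULL
    );

    CREATE TABLE loyalty_cards OF loyalty_card_t (
        card_no PRIMARY KEY,
        SCOPE FOR (owner) IS customers
    );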

    Read the article

  • Reading data from database and binding them to custom ListView

    - by N.K.
    I am trying to read data from a database I have made and show some of it in the rows of a custom ListView. I cannot understand what my mistake is. This is my code:

    public class EsodaMainActivity extends Activity {
        public static final String ROW_ID = "row_id"; // Intent extra key
        private ListView esodaListView;               // the ListActivity's ListView
        private SimpleCursorAdapter esodaAdapter;     // adapter for the ListView
        DatabaseConnector databaseConnector = new DatabaseConnector(EsodaMainActivity.this);

        // called when the activity is first created
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_esoda_main);
            esodaListView = (ListView) findViewById(R.id.esodaList);
            esodaListView.setOnItemClickListener(viewEsodaListener);
            databaseConnector.open();
            //Cursor cursor = databaseConnector.query("esoda",
            //        new String[] {"name", "amount"}, null, null, null);
            Cursor cursor = databaseConnector.getAllEsoda();
            startManagingCursor(cursor);

            // map each esoda to a TextView in the ListView layout
            // The desired columns to be bound
            String[] from = new String[] {"name", "amount"}; // built a String array named "from"
            // The XML defined views which the data will be bound to
            int[] to = new int[] {R.id.esodaTextView, R.id.amountTextView}; // built an int array named "to"

            // this = the context in which the ListView is running
            // R.layout.esoda_list_item = id of the layout used to display each item in the ListView
            // cursor = the Cursor that provides the data
            // from = String array containing the column names to display
            // to = int array containing the view ids the columns are bound to
            esodaAdapter = new SimpleCursorAdapter(this, R.layout.esoda_list_item, cursor, from, to);
            esodaListView.setAdapter(esodaAdapter); // set esodaView's adapter
        } // end of onCreate method

        @Override
        protected void onResume() {
            super.onResume(); // call super's onResume method

            // create new GetEsodaTask and execute it
            // GetEsodaTask is an AsyncTask object
            new GetEsodaTask().execute((Object[]) null);
        } // end of onResume method

        // onStop method is executed when the Activity is no longer visible to the user
        @Override
        protected void onStop() {
            Cursor cursor = esodaAdapter.getCursor(); // gets current cursor from esodaAdapter
            if (cursor != null)
                cursor.deactivate(); // deactivate cursor
            esodaAdapter.changeCursor(null); // adapter now has no cursor (removes the cursor from the CursorAdapter)
            super.onStop();
        } // end of onStop method

        // this class performs the db query outside the GUI
        private class GetEsodaTask extends AsyncTask<Object, Object, Cursor> {
            // we create a new DatabaseConnector obj
            // EsodaMainActivity.this = Context
            DatabaseConnector databaseConnector = new DatabaseConnector(EsodaMainActivity.this);

            // perform the db access
            @Override
            protected Cursor doInBackground(Object... params) {
                databaseConnector.open();
                // get a cursor containing all esoda
                return databaseConnector.getAllEsoda();
                // the cursor returned by getAllEsoda() is passed to method onPostExecute()
            } // end of doInBackground method

            // here we use the cursor returned from the doInBackground() method
            @Override
            protected void onPostExecute(Cursor result) {
                esodaAdapter.changeCursor(result); // set the adapter's Cursor
                databaseConnector.close();
            } // end of onPostExecute() method
        } // end of GetEsodaTask class

        // creates the Activity's menu from a menu resource XML file
        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            super.onCreateOptionsMenu(menu);
            MenuInflater inflater = getMenuInflater();
            inflater.inflate(R.menu.esoda_menu, menu); // inflates esoda_menu.xml into the Options menu
            return true;
        } // end of onCreateOptionsMenu() method

        // handles choice from options menu - is executed when the user touches a MenuItem
        @Override
        public boolean onOptionsItemSelected(MenuItem item) {
            // creates a new Intent to launch the AddEditEsoda Activity
            // EsodaMainActivity.this = Context from which the Activity will be launched
            // AddEditEsoda.class = target Activity
            Intent addNewEsoda = new Intent(EsodaMainActivity.this, AddEditEsoda.class);
            startActivity(addNewEsoda);
            return super.onOptionsItemSelected(item);
        } // end of method onOptionsItemSelected()

        // event listener that responds to the user touching an esoda's name in the ListView
        OnItemClickListener viewEsodaListener = new OnItemClickListener() {
            @Override
            public void onItemClick(AdapterView<?> arg0, View arg1, int arg2, long arg3) {
                // create an intent to launch the ViewEsoda Activity
                Intent viewEsoda = new Intent(EsodaMainActivity.this, ViewEsoda.class);
                // pass the selected esoda's row ID as an extra with the Intent
                viewEsoda.putExtra(ROW_ID, arg3);
                startActivity(viewEsoda); // start the ViewEsoda Activity
            } // end of onItemClick() method
        }; // end of viewEsodaListener
    } // end of EsodaMainActivity class

    The statement Cursor cursor = databaseConnector.getAllEsoda(); queries all data (columns). From that data I want to show two of the columns, "name" and "amount", in my custom ListView. But I still get a debugger error. Please help.
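
    One common cause of a crash with SimpleCursorAdapter, offered here as a guess since the actual error isn't shown, is that CursorAdapter requires the cursor to contain a column named _id. If getAllEsoda() doesn't return one, aliasing the row id in the query usually fixes it. A sketch, assuming DatabaseConnector holds a SQLiteDatabase field named database and the table is called esoda:

    // Inside DatabaseConnector (sketch): expose an "_id" column,
    // which SimpleCursorAdapter requires, alongside the displayed columns.
    public Cursor getAllEsoda() {
        return database.rawQuery(
            "SELECT rowid AS _id, name, amount FROM esoda ORDER BY name", null);
    }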

    Read the article

  • Import de-normalized relational data from Excel into SQL Server

    - by roryf
    I need to import data from an Excel spreadsheet into SQL Server, but the data isn't in a relational/normalized format, so the import wizard isn't going to cut it (as far as I know). The data is in this format:

    Category    SubCategory    Name       Description
    Category#1  SubCategory#1  Product#1  Description#1
    Category#1  SubCategory#1  Product#2  Description#2
    Category#1  SubCategory#2  Product#3  Description#3
    Category#1  SubCategory#2  Product#4  Description#4
    Category#2  SubCategory#3  Product#5  Description#5

    (apologies, I'm lacking the inventiveness to come up with 'real' data at this time in the morning...) Each row contains a unique product, but the category structure is duplicated. I want to import this data into three tables: Category, SubCategory and Product (I know SubCategory should really be contained within Category; the DB was not my design). I need a way to import unique rows based on the Category and then SubCategory columns, and then, when importing the other columns into Product, obtain a reference to the SubCategory based on name. Short of scripting this, is there any way to do it using the import wizard or some other tool?
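
    One workable route, short of a full SSIS package, is to let the import wizard load the sheet as-is into a staging table and then normalise it in T-SQL. A sketch, with assumed table and column names (CategoryId, SubCategoryId, and the target tables are guesses at the existing schema), and assuming subcategory names are unique across categories as in the sample:

    -- Staging table holding the raw spreadsheet rows (loaded by the import wizard).
    CREATE TABLE Staging (
        Category    NVARCHAR(100),
        SubCategory NVARCHAR(100),
        Name        NVARCHAR(100),
        Description NVARCHAR(400)
    );

    -- 1. Unique categories.
    INSERT INTO Category (Name)
    SELECT DISTINCT Category FROM Staging;

    -- 2. Unique subcategories, linked to their category by name.
    INSERT INTO SubCategory (Name, CategoryId)
    SELECT DISTINCT s.SubCategory, c.CategoryId
    FROM Staging s
    JOIN Category c ON c.Name = s.Category;

    -- 3. Products, linked to their subcategory by name.
    INSERT INTO Product (Name, Description, SubCategoryId)
    SELECT s.Name, s.Description, sc.SubCategoryId
    FROM Staging s
    JOIN SubCategory sc ON sc.Name = s.SubCategory;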

    Read the article

  • Unique constraint on more than 10 columns

    - by tk
    I have a time-series simulation model which has more than 10 input variables. The number of distinct simulation instances would be more than 1 million, and each simulation instance generates a few output rows every day. To save the simulation results in a relational database, I designed tables like this:

    Table SimulationModel {
        simul_id : integer (primary key),
        input0   : string or numeric,
        input1   : string or numeric,
        ...
    }

    Table SimulationOutput {
        dt       : DateTime (primary key),
        simul_id : integer (primary key),
        output0  : numeric,
        ...
    }

    My question is: is it fine to put a unique constraint on all of the input columns of the SimulationModel table? If it is not a good idea, then what other options do I have to make sure each model is unique?
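
    Both options can be sketched in SQL. SQL Server syntax is assumed here, since the question doesn't name an engine, and the ten input columns are placeholders. A straight multi-column constraint works (ten columns is within SQL Server's 16-column index limit) but makes for a wide index at a million rows; hashing the inputs into one persisted column keeps the index narrow, at the cost of a negligible collision risk:

    -- Option 1: the direct route - one unique constraint over every input column.
    ALTER TABLE SimulationModel
        ADD CONSTRAINT UQ_SimulationModel_Inputs
        UNIQUE (input0, input1, input2, input3, input4,
                input5, input6, input7, input8, input9);

    -- Option 2: hash the concatenated inputs and enforce uniqueness on the hash.
    -- '|' is an assumed delimiter that must not occur inside the values.
    ALTER TABLE SimulationModel ADD input_hash AS
        HASHBYTES('SHA2_256',
            CONCAT(input0, '|', input1, '|', input2, '|', input3, '|', input4, '|',
                   input5, '|', input6, '|', input7, '|', input8, '|', input9))
        PERSISTED;

    CREATE UNIQUE INDEX UX_SimulationModel_InputHash
        ON SimulationModel (input_hash);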

    Read the article

< Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >