Search Results

Search found 10101 results on 405 pages for 'temporary tables'.

  • Help needed in grokking password hashes and salts

    - by javafueled
    I've read a number of SO questions on this topic, but grokking the applied practice of storing a salted hash of a password still eludes me. Let's start with some ground rules:

    - a password: "foobar12" (we are not discussing the strength of the password)
    - a language: Java 1.6, for this discussion
    - a database: PostgreSQL, MySQL, SQL Server, or Oracle

    Several options are available for storing the password, but I want to think about one: store the password hashed with a random salt in the DB, in one column. Found on SO and elsewhere is the automatic fail of plaintext, of plain MD5/SHA1, and of dual columns; each has pros and cons.

    MD5/SHA1 is simple. MessageDigest in Java provides MD5 and SHA1 (through SHA-512 in modern implementations, certainly 1.6). Additionally, most of the RDBMSs listed provide MD5 functions usable in inserts, updates, etc. The problems become evident once one groks rainbow tables and MD5 collisions (and I've grokked these concepts). Dual-column solutions rest on the idea that the salt does not need to be secret (grokked). However, a second column introduces a complexity that might be a luxury you don't have: with a legacy system that has one column for the password, the cost of updating the table and the code could be too high.

    But it is storing the password hashed with a random salt in a single DB column that I need to understand better, with practical application. I like this solution for a couple of reasons: a salt is expected, and it respects legacy boundaries. Here's where I get lost: if the salt is random and hashed with the password, how can the system ever match the password? I have a theory on this, and as I type I might be grokking the concept: given a random salt of 128 bytes and a password of 8 bytes ("foobar12"), it could be programmatically possible to remove the part of the hash that was the salt, by hashing a random 128-byte salt and getting the substring of the original hash that is the hashed password, then re-hashing to match using the hash algorithm...??? So... any takers on helping? :) Am I close?
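
    For reference, the way the single-column scheme actually works is that nothing is ever recovered from the hash: the clear salt is stored concatenated with the hash inside the same column, and verification re-hashes the candidate password with the stored salt and compares. Below is a minimal sketch of that idea, in C# rather than the question's Java (MessageDigest and SecureRandom would play the same roles there); a real system would also prefer a deliberately slow KDF such as PBKDF2 over a bare digest:

        using System;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Text;

        static class PasswordStore
        {
            // Single column value: hex(salt) + ":" + hex(SHA-256(salt || password)).
            // The salt sits in the clear next to the hash - nothing is "removed"
            // from the hash later.
            public static string Create(string password)
            {
                var salt = new byte[16];                      // random, per password
                using (var rng = RandomNumberGenerator.Create())
                    rng.GetBytes(salt);
                return Hex(salt) + ":" + Hex(Hash(salt, password));
            }

            // Split the stored value, re-hash the candidate with the *stored* salt,
            // and compare. No substring arithmetic on the hash is ever needed.
            public static bool Verify(string stored, string candidate)
            {
                var parts = stored.Split(':');
                return Hex(Hash(Unhex(parts[0]), candidate)) == parts[1];
            }

            static byte[] Hash(byte[] salt, string password)
            {
                using (var sha = SHA256.Create())
                    return sha.ComputeHash(salt.Concat(Encoding.UTF8.GetBytes(password)).ToArray());
            }

            static string Hex(byte[] b) { return BitConverter.ToString(b).Replace("-", ""); }

            static byte[] Unhex(string s)
            {
                var b = new byte[s.Length / 2];
                for (int i = 0; i < b.Length; i++)
                    b[i] = Convert.ToByte(s.Substring(i * 2, 2), 16);
                return b;
            }
        }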

  • HTML Agility Pack

    - by Harikrishna
    I have HTML tables in one web page, like:

        <table border=1>
        <tr><td>sno</td><td>sname</td></tr>
        <tr><td>111</td><td>abcde</td></tr>
        <tr><td>213</td><td>ejkll</td></tr>
        </table>

        <table border=1>
        <tr><td>adress</td><td>phoneno</td><td>note</td></tr>
        <tr><td>asdlkj</td><td>121510</td><td>none</td></tr>
        <tr><td>asdlkj</td><td>214545</td><td>none</td></tr>
        </table>

    Now, from this web page, using the HTML Agility Pack, I want to extract the data of the columns adress and phoneno only. That means I first have to find which table contains the columns adress and phoneno; after finding that table, I want to extract the data of those columns. What should I do? I can get the table, but what to do after that I don't understand.
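
    One workable shape for this, offered as a hedged sketch rather than the definitive Html Agility Pack recipe: treat each table's first row as its header, look for the wanted column names there, remember their positions, and read just those cells from the remaining rows. The input file name is invented:

        using System;
        using System.Linq;
        using HtmlAgilityPack;

        class TableScraper
        {
            static void Main()
            {
                var doc = new HtmlDocument();
                doc.LoadHtml(System.IO.File.ReadAllText("page.html")); // hypothetical input

                // Assumes the page contains at least one <table>.
                foreach (HtmlNode table in doc.DocumentNode.SelectNodes("//table"))
                {
                    // Read the first row as the header and look up the wanted columns.
                    var header = table.SelectSingleNode(".//tr").SelectNodes("td")
                                      .Select(td => td.InnerText.Trim()).ToList();
                    int adr = header.IndexOf("adress");   // spelled as in the source page
                    int tel = header.IndexOf("phoneno");
                    if (adr < 0 || tel < 0) continue;     // not the table we want

                    // Skip the header row, then pull just those two cells per row.
                    foreach (HtmlNode row in table.SelectNodes(".//tr").Skip(1))
                    {
                        var cells = row.SelectNodes("td");
                        Console.WriteLine("{0}  {1}", cells[adr].InnerText, cells[tel].InnerText);
                    }
                }
            }
        }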

  • How do I make software that preserves database integrity and correctness?

    - by user287745
    I have made an application project in Visual Studio 2008, in C#, with a SQL Server database from Visual Studio 2008. The database has about 20 tables and many fields in each. I have made an interface for adding, deleting, editing and retrieving data according to the predefined needs of the users.

    Now I have to turn the project into software which I can deliver to my professor. That is, he can just double-click the icon and the software simply starts; no Visual Studio 2008 needed to start the debugging. The database will be on one powerful computer (dual core, latest everything, Windows XP) and the users will access it from other computers connected over the LAN. I am able to change the connection string to the shared database using Visual Studio 2008 and the debugger whenever the server changes, but how am I supposed to do that when it's delivered software?

    There will be many clients. Am I supposed to give the same software to everyone, so they all can connect to the database? How will the integrity and correctness of the database be maintained? I mean, the db.mdf file will be in a folder which is shared with read and write access, so it's not necessarily true that only one user will write at a time. Is there any coding for this, or...?
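
    On the connection-string part of the question: a deployed .NET application normally reads its connection string from the app.config file sitting next to the .exe, so each client copy can be repointed at the server by editing that file instead of rebuilding in Visual Studio. A hedged C# sketch (the "MainDb" name is invented). As for many users writing at once, that is exactly what the SQL Server engine arbitrates, which is a reason to have all clients connect to one running server instance rather than open a shared .mdf file directly:

        using System.Configuration;    // add a reference to System.Configuration.dll
        using System.Data.SqlClient;

        static class Db
        {
            // Reads <connectionStrings><add name="MainDb" connectionString="..."/>
            // from the application's .config file, so repointing a client at a new
            // server means editing a text file, not rebuilding the project.
            public static SqlConnection Open()
            {
                string cs = ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;
                var conn = new SqlConnection(cs);
                conn.Open();
                return conn;
            }
        }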

  • What other things would be good to include in a CSS reset (along with the Eric Meyer reset) for any project?

    - by metal-gear-solid
    I know and use Eric Meyer's CSS reset, but are there any more things which would be good to add to a reset CSS, to save time and increase compatibility? This is Meyer's latest default CSS reset code:

        /* v1.0 | 20080212 */
        html, body, div, span, applet, object, iframe,
        h1, h2, h3, h4, h5, h6, p, blockquote, pre,
        a, abbr, acronym, address, big, cite, code,
        del, dfn, em, font, img, ins, kbd, q, s, samp,
        small, strike, strong, sub, sup, tt, var,
        b, u, i, center,
        dl, dt, dd, ol, ul, li,
        fieldset, form, label, legend,
        table, caption, tbody, tfoot, thead, tr, th, td {
            margin: 0;
            padding: 0;
            border: 0;
            outline: 0;
            font-size: 100%;
            vertical-align: baseline;
            background: transparent;
        }
        body {
            line-height: 1;
        }
        ol, ul {
            list-style: none;
        }
        blockquote, q {
            quotes: none;
        }
        blockquote:before, blockquote:after,
        q:before, q:after {
            content: '';
            content: none;
        }
        /* remember to define focus styles! */
        :focus {
            outline: 0;
        }
        /* remember to highlight inserts somehow! */
        ins {
            text-decoration: none;
        }
        del {
            text-decoration: line-through;
        }
        /* tables still need 'cellspacing="0"' in the markup */
        table {
            border-collapse: collapse;
            border-spacing: 0;
        }

  • SQL Server: Output an XML field as tabular data using a stored procedure

    - by Pawan
    I am using a table with an XML data field to store the audit trails of all the other tables in the database. That means the same XML field holds various XML structures. For example, my table has two records with XML data like this:

    1st record:

        <client>
          <name>xyz</name>
          <ssn>432-54-4231</ssn>
        </client>

    2nd record:

        <emp>
          <name>abc</name>
          <sal>5000</sal>
        </emp>

    These are two sample formats, and just two records; the table actually has many more XML formats in the same field, and many records in each format. Now my problem is that, upon query, I need these XML formats converted into tabular result sets. What are my options? Querying this table and generating reports from it will be a regular task. I want to create a stored procedure to which I can pass whether I need to query <emp> or <client>, and the stored procedure should return tabular data.
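
    For what it's worth, SQL Server 2005 and later can shred such XML into rows with the xml type's nodes() and value() methods, and a stored procedure could branch on the element name it is passed. A hedged sketch of the kind of query it would run, shown here from C#; the AuditTrail table and Data column names are invented:

        using System;
        using System.Data.SqlClient;

        class XmlShredDemo
        {
            static void Run(string connectionString)
            {
                // Shreds every <emp> fragment into one row per element; a <client>
                // branch would do the same with '/client' and its fields.
                const string sql = @"
                    SELECT x.value('(name)[1]', 'varchar(100)') AS name,
                           x.value('(sal)[1]',  'int')          AS sal
                    FROM AuditTrail
                    CROSS APPLY AuditTrail.Data.nodes('/emp') AS t(x);";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine("{0}: {1}", reader["name"], reader["sal"]);
                }
            }
        }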

  • Linq-to-SQL: How to shape the data with group by?

    - by Cheeso
    I have an example database; it contains tables for Movies, People and Credits. The Movies table contains a Title and an Id. The People table contains a Name and an Id. The Credits table relates Movies to the People that worked on those Movies, in a particular role. The table looks like this:

        CREATE TABLE [dbo].[Credits] (
            [Id]       [int] IDENTITY (1, 1) NOT NULL PRIMARY KEY,
            [PersonId] [int] NOT NULL FOREIGN KEY REFERENCES People(Id),
            [MovieId]  [int] NOT NULL FOREIGN KEY REFERENCES Movies(Id),
            [Role]     [char] (1) NULL
        )

    In this simple example, the [Role] column is a single character, by my convention either 'A' to indicate the person was an actor on that particular movie, or 'D' for director. I'd like to perform a query on a particular person that returns the person's name, plus a list of all the movies the person has worked on, and the roles in those movies. If I were to serialize it to JSON, it might look like this:

        {
          "name" : "Clint Eastwood",
          "movies" : [
            { "title": "Unforgiven",        "roles": ["actor", "director"] },
            { "title": "Sands of Iwo Jima", "roles": ["director"] },
            { "title": "Dirty Harry",       "roles": ["actor"] },
            ...
          ]
        }

    How can I write a LINQ-to-SQL query that shapes the output like that? I'm having trouble doing it efficiently. If I use this query:

        int personId = 10007;
        var persons = from p in db.People
                      where p.Id == personId
                      select new {
                          name = p.Name,
                          movies = (from m in db.Movies
                                    join c in db.Credits on m.Id equals c.MovieId
                                    where (c.PersonId == personId)
                                    select new {
                                        title = m.Title,
                                        role = (c.Role == "D" ? "director" : "actor")
                                    })
                      };

    I get something like this:

        {
          "name" : "Clint Eastwood",
          "movies" : [
            { "title": "Unforgiven",        "role": "actor" },
            { "title": "Unforgiven",        "role": "director" },
            { "title": "Sands of Iwo Jima", "role": "director" },
            { "title": "Dirty Harry",       "role": "actor" },
            ...
          ]
        }

    ...but as you can see there's a duplicate of each movie for which Eastwood played multiple roles. How can I shape the output the way I want?
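
    One plausible shape, offered as a hedged sketch rather than the canonical answer: group the person's credits by movie inside the projection, so each movie appears once and carries a nested list of roles:

        var persons = from p in db.People
                      where p.Id == personId
                      select new
                      {
                          name = p.Name,
                          movies = from c in db.Credits
                                   join m in db.Movies on c.MovieId equals m.Id
                                   where c.PersonId == personId
                                   group c by m.Title into g   // one group per movie
                                   select new
                                   {
                                       title = g.Key,
                                       roles = g.Select(cr => cr.Role == "D" ? "director" : "actor")
                                   }
                      };

    Note that grouping with a projected sequence can make LINQ to SQL issue one query per group, so the generated SQL is worth checking before calling this efficient.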

  • Excel VBA Macro for Pivot Table with Dynamic Data Range

    - by John Ziebro
    CODE IS WORKING! THANKS FOR THE HELP! I am attempting to create a dynamic pivot table that will work on data that varies in the number of rows. Currently I have 28,300 rows, but this may change daily. Example of the data format:

        Case Number   Branch   Driver
        1342          NYC      Bob
        4532          PHL      Jim
        7391          CIN      John
        8251          SAN      John
        7211          SAN      Mary
        9121          CLE      John
        7424          CIN      John

    Example of the finished table:

        Driver   NYC   PHL   CIN   SAN   CLE
        Bob       1     0     0     0     0
        Jim       0     1     0     0     0
        John      0     0     2     1     1
        Mary      0     0     0     1     0

    Code as follows:

        Sub CreateSummaryReportUsingPivot()
            ' Use a Pivot Table to create a static summary report
            ' with model going down the rows and regions across
            Dim WSD As Worksheet
            Dim PTCache As PivotCache
            Dim PT As PivotTable
            Dim PRange As Range
            Dim FinalRow As Long
            Dim FinalCol As Long
            Set WSD = Worksheets("PivotTable")

            ' Name active worksheet as "PivotTable"
            ActiveSheet.Name = "PivotTable"

            ' Delete any prior pivot tables
            For Each PT In WSD.PivotTables
                PT.TableRange2.Clear
            Next PT

            ' Define input area and set up a Pivot Cache
            FinalRow = WSD.Cells(Application.Rows.Count, 1).End(xlUp).Row
            FinalCol = WSD.Cells(1, Application.Columns.Count).End(xlToLeft).Column
            Set PRange = WSD.Cells(1, 1).Resize(FinalRow, FinalCol)
            Set PTCache = ActiveWorkbook.PivotCaches.Add(SourceType:=xlDatabase, SourceData:=PRange)

            ' Create the Pivot Table from the Pivot Cache
            Set PT = PTCache.CreatePivotTable(TableDestination:=WSD.Cells(2, FinalCol + 2), TableName:="PivotTable1")

            ' Turn off updating while building the table
            PT.ManualUpdate = True

            ' Set up the row fields
            PT.AddFields RowFields:="Driver", ColumnFields:="Branch"

            ' Set up the data fields
            With PT.PivotFields("Case Number")
                .Orientation = xlDataField
                .Function = xlCount
                .Position = 1
            End With
            With PT
                .ColumnGrand = False
                .RowGrand = False
                .NullString = "0"
            End With

            ' Calc the pivot table
            PT.ManualUpdate = False
            PT.ManualUpdate = True
        End Sub

  • Adaptive user interface/environment algorithm

    - by WowtaH
    Hi all, I'm working on an information system (in C#) that (while my users use it) gathers statistical data on what pieces of information (tables & records) each user is requesting the most, and what parts of the interface he/she uses most. I'm using this statistical data to make the application adaptive to the user's needs, both in the way the interface presents itself (eg: tab/pane-ordering) as in the way of using the frequently viewed information to (eg:) show higher in search results/suggestion-lists. What i'm looking for is an algorithm/formula to determine the current 'hotness'/relevance of these objects for a specific user. A simple 'hitcounter' for each object won't be sufficient because the user might view some information quite frequently for a period of time, and then moving on to the next, making the old information less relevant. So i think my algorithm also needs some sort of sliding/historical principle to account for the changing popularity of the objects in the application over time. So, the question is: Does anybody have some sort of algorithm that accounts for that 'popularity over time' ? Preferably with some explanation on the parameters :) Thanks! PS I've looked at other posts like http://stackoverflow.com/questions/32397/popularity-algorithm but i could't quite port it to my specific case. Any help is appreciated.

  • Doctrine 1.2: How do I prevent a constraint from being assigned to both sides of a one-to-many relationship?

    - by prodigitalson
    Is there a way to prevent Doctrine from assigning a constraint to both sides of a one-to-one relationship? I've tried moving the definition from one side to the other and using the owning side, but it still places a constraint on both tables, when I only want the parent table to have a constraint - i.e. it's possible for the parent to not have an associated child. For example, I want essentially the following SQL schema:

        CREATE TABLE `parent_table` (
          `child_id` varchar(50) NOT NULL,
          `id` integer UNSIGNED NOT NULL auto_increment,
          PRIMARY KEY (`id`)
        );

        CREATE TABLE `child_table` (
          `id` integer UNSIGNED NOT NULL auto_increment,
          `child_id` varchar(50) NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY (`child_id`),
          CONSTRAINT `parent_table_child_id_FK_child_table_child_id`
            FOREIGN KEY (`child_id`) REFERENCES `parent_table` (`child_id`)
        );

    However, I'm getting something like this:

        CREATE TABLE `parent_table` (
          `child_id` varchar(50) NOT NULL,
          `id` integer UNSIGNED NOT NULL auto_increment,
          PRIMARY KEY (`id`),
          CONSTRAINT `child_table_child_id_FK_parent_table_child_id`
            FOREIGN KEY (`child_id`) REFERENCES `child_table` (`child_id`)
        );

        CREATE TABLE `child_table` (
          `id` integer UNSIGNED NOT NULL auto_increment,
          `child_id` varchar(50) NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY (`child_id`),
          CONSTRAINT `parent_table_child_id_FK_child_table_child_id`
            FOREIGN KEY (`child_id`) REFERENCES `parent_table` (`child_id`)
        );

    I could just remove the constraint manually, or modify my accessors to return/set a single entity in the collection (using a one-to-many), but it seems like there should be a built-in way to handle this. Also, I'm using Symfony 1.4.4 (PEAR installation at the moment) - in case it's an sfDoctrinePlugin issue and not necessarily Doctrine itself.

  • django powering multiple shops from one code base on a single domain

    - by imanc
    Hey, I am new to django and python and am trying to figure out how to modify an existing app to run multiple shops through a single domain. Django's sites middleware seems inappropriate in this particular case because it manages different domains, not sites run through the same domain, e.g.:

        domain.com/uk
        domain.com/us
        domain.com/es

    Each site will need translated content and minor template changes. The solution needs to be flexible enough to allow for easy modification of templates. The forms will also need to vary a bit, e.g. minor variances in fields and validation for each country-specific shop. I am thinking along the lines of the following as a solution and would love some feedback from experienced django-ers. In short: the same codebase, but separate country-specific URL files, separate templates and separate databases:

    - Create a middleware class that does IP localisation, determines the country based on the URL, and creates a database connection, e.g. /au/ will point to the AU-specific database, and so on.
    - In the root urls.py, have routes that point to a separate country-specific routing file, e.g. (r'^au/', include('urls_au')), (r'^es/', include('urls_es')).
    - Use a single template directory, but within that directory have a localised directory structure, e.g. /base.html and /uk/base.html, and write a custom template loader that looks for local templates first (or have a separate directory for each shop and set the template directory path in middleware).
    - Use the django internationalisation machinery to manage translation strings throughout.
    - Handle slight variances in forms and models (e.g. ZA has an ID field, France has 'door code' and 'floor', etc.). I am unsure how to handle these variations, but I suspect the tables will contain all fields but allow nulls, and the model will have all fields but allow nulls. The forms will be modified slightly for each shop.

    Anyway, I am keen to get feedback on the best way to go about achieving this multi-site solution. It seems like it would work, but it feels a bit "hackish" and I wonder if there's a more elegant way of getting this solution to work. Thanks, imanc

  • Why put a DAO layer over a persistence layer (like JDO or Hibernate)

    - by Todd Owen
    Data Access Objects (DAOs) are a common design pattern, and recommended by Sun. But the earliest examples of Java DAOs interacted directly with relational databases - they were, in essence, doing object-relational mapping (ORM). Nowadays, I see DAOs on top of mature ORM frameworks like JDO and Hibernate, and I wonder if that is really a good idea.

    I am developing a web service using JDO as the persistence layer, and am considering whether or not to introduce DAOs. I foresee a problem when dealing with a particular class which contains a map of other objects:

        public class Book {
            // Book description in various languages, indexed by ISO language codes
            private Map<String, BookDescription> descriptions;
        }

    JDO is clever enough to map this to a foreign key constraint between the "BOOKS" and "BOOKDESCRIPTIONS" tables. It transparently loads the BookDescription objects (using lazy loading, I believe) and persists them when the Book object is persisted.

    If I were to introduce a "data access layer" and write a class like BookDao, encapsulating all the JDO code within it, then wouldn't JDO's transparent loading of the child objects be circumventing the data access layer? For consistency, shouldn't all the BookDescription objects be loaded and persisted via some BookDescriptionDao object (or a BookDao.loadDescription method)? Yet refactoring in that way would make manipulating the model needlessly complicated.

    So my question is: what's wrong with calling JDO (or Hibernate, or whatever ORM you fancy) directly in the business layer? Its syntax is already quite concise, and it is datastore-agnostic. What is the advantage, if any, of encapsulating it in Data Access Objects?

  • Using a local file path in a StreamWriter object in ASP.NET

    - by Nick LaMarca
    I am trying to create a CSV file of some data. I have written a function that successfully does this:

        Private Sub CreateCSVFile(ByVal dt As DataTable, ByVal strFilePath As String)
            Dim sw As New StreamWriter(strFilePath, False)

            ' First we will write the headers.
            ' DataTable dt = m_dsProducts.Tables[0];
            Dim iColCount As Integer = dt.Columns.Count
            For i As Integer = 0 To iColCount - 1
                sw.Write(dt.Columns(i))
                If i < iColCount - 1 Then
                    sw.Write(",")
                End If
            Next
            sw.Write(sw.NewLine)

            ' Now write all the rows.
            For Each dr As DataRow In dt.Rows
                For i As Integer = 0 To iColCount - 1
                    If Not Convert.IsDBNull(dr(i)) Then
                        sw.Write(dr(i).ToString())
                    End If
                    If i < iColCount - 1 Then
                        sw.Write(",")
                    End If
                Next
                sw.Write(sw.NewLine)
            Next
            sw.Close()
        End Sub

    The problem is that I am not using the StreamWriter object correctly for what I am trying to accomplish. Since this is an ASP.NET application, I need the user to pick a local file path to put the file in; but if I pass any path to this function, it is going to try to write to that directory on the server where the code runs. I would like a prompt to pop up and let the user select a place on their local machine to put the file:

        Dim exData As Byte() = File.ReadAllBytes(Server.MapPath(eio))
        File.Delete(Server.MapPath(eio))
        Response.AddHeader("content-disposition", String.Format("attachment; filename={0}", fn))
        Response.ContentType = "application/x-msexcel"
        Response.BinaryWrite(exData)
        Response.Flush()
        Response.End()

    I am calling the first function in code like this:

        Dim emplTable As DataTable = SiteAccess.DownloadEmployee_H()
        CreateCSVFile(emplTable, "C:\EmplTable.csv")

    where I don't want to specify the file location (because this will put the file on the server and not on a client machine) but rather let the user select the location on their client machine. Can someone help me put this together? Thanks in advance.
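
    Server-side code can never browse the client's disk, so the standard move is the same content-disposition trick as in the second snippet: build the CSV and write it into the response, and the browser's own Save As dialog is what lets the user choose a folder on their machine. A hedged sketch of that idea in C# (the question's code is VB, but the Response objects are identical):

        using System.Data;
        using System.Text;
        using System.Web;

        static class CsvDownload
        {
            // Builds the CSV in memory and sends it as an attachment; the browser's
            // "Save As" dialog is what lets the user pick a folder on *their* machine.
            public static void Send(HttpResponse response, DataTable dt, string fileName)
            {
                var sb = new StringBuilder();
                for (int i = 0; i < dt.Columns.Count; i++)
                    sb.Append(dt.Columns[i].ColumnName).Append(i < dt.Columns.Count - 1 ? "," : "\r\n");
                foreach (DataRow dr in dt.Rows)
                    for (int i = 0; i < dt.Columns.Count; i++)
                        sb.Append(dr.IsNull(i) ? "" : dr[i].ToString())
                          .Append(i < dt.Columns.Count - 1 ? "," : "\r\n");

                response.Clear();
                response.ContentType = "text/csv";
                response.AddHeader("content-disposition", "attachment; filename=" + fileName);
                response.Write(sb.ToString());
                response.End();
            }
        }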

  • Delphi dbExpress and Interbase: UTF8 migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below; many thanks in advance for your answers!

    - Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    - Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    - Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
    - Can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every data module (dfm and source code) too?
    - Which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally.

  • Minimum privileges to read SQL Jobs using SQL SMO

    - by Gustavo Cavalcanti
    I wrote an application that uses SQL SMO to find all SQL Servers, databases, jobs and job outcomes. This application is executed through a scheduled task using a local service account. This service account is local to the application server only and is not present on any SQL Server to be inspected.

    I am having problems getting information on jobs and job outcomes when connecting to the servers using a user with dbReader rights on the system tables. If we make the user sysadmin on the server, it all works fine. My question to you is: what are the minimum privileges a local SQL Server user needs in order to connect to the server and inspect jobs and job outcomes using the SQL SMO API? I connect to each SQL Server by doing the following:

        var conn = new ServerConnection
        {
            LoginSecure = false,
            ApplicationName = "SQL Inspector",
            ServerInstance = serverInstanceName,
            ConnectAsUser = false,
            Login = user,
            Password = password
        };
        var smoServer = new Server(conn);

    I read the jobs by reading smoServer.JobServer.Jobs and the JobSteps property on each of these jobs. The smoServer variable is of type Microsoft.SqlServer.Management.Smo.Server; user and password are those of the user found on each SQL Server to be inspected. If "user" is sysadmin on the SQL Server to be inspected, all works OK, as it also does if we set ConnectAsUser to true and execute the scheduled task using my own credentials, which grant me sysadmin privileges on SQL Server per my Active Directory membership. Thanks!

  • How to combine a Distance and Keyword SQL query?

    - by Jason
    Hi folks, I have tables in my database called "points" and "category". A user will input info into both a location input and a keyword input text box. Then I want to find points in my table where the keyword matches either the "title" field in the points table or the "category", but are within a certain distance from the user's location; and I want to order the results by distance. Here are the two queries, which both work independently:

        $mysql = "SELECT *, ( 3959 * acos( cos( radians('$search_lat') ) * cos( radians( lat ) )
                  * cos( radians( longi ) - radians('$search_lng') ) + sin( radians('$search_lat') )
                  * sin( radians( lat ) ) ) ) AS distance
                  FROM points
                  HAVING distance < '$radius'";

        $mysql2 = "SELECT * FROM `points`
                   LEFT JOIN category USING ( category_id )
                   WHERE (point_title LIKE '%$esc_catsearch%' OR category.title LIKE '%$esc_catsearch%')";

    Here is what I tried:

        $sql_search = sprintf("SELECT *,point_id FROM points
                  WHERE point_title LIKE '%%%s%%'
                  UNION
                  SELECT *, ( 3959 * acos( cos( radians('%s') ) * cos( radians( lat ) )
                  * cos( radians( longi ) - radians('%s') ) + sin( radians('%s') )
                  * sin( radians( lat ) ) ) ) AS distance
                  FROM points
                  HAVING distance < '%s'
                  ORDER BY distance
                  LIMIT %d , %d",
                  $esc_catsearch,
                  mysql_real_escape_string($search_lat),
                  mysql_real_escape_string($search_lng),
                  mysql_real_escape_string($search_lat),
                  mysql_real_escape_string($radius),
                  $offset, $rowsPerPage);

    But it tells me there is no known column "distance". If I remove the ORDER BY phrase then it works, but I'm still not sure it's giving me the results I want. I also tried the query the other way around, with the distance search first, but that seems to ignore my keyword. Any thoughts would be much appreciated!
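
    For reference, the UNION computes the distance alias in only one branch, which is why "distance" is unknown to the combined ORDER BY. Folding both filters into a single statement avoids that: the keyword test goes in WHERE, the distance alias in SELECT, the radius in HAVING, and MySQL lets HAVING and ORDER BY see select aliases. A hedged sketch of the combined query, wrapped in C#/Connector-Net here purely for illustration; the same SQL drops into the original sprintf:

        using MySql.Data.MySqlClient;   // assumes the MySQL Connector/Net package

        class NearbySearch
        {
            const string Sql = @"
                SELECT p.*,
                       (3959 * ACOS(COS(RADIANS(@lat)) * COS(RADIANS(p.lat))
                              * COS(RADIANS(p.longi) - RADIANS(@lng))
                              + SIN(RADIANS(@lat)) * SIN(RADIANS(p.lat)))) AS distance
                FROM points p
                LEFT JOIN category c USING (category_id)
                WHERE p.point_title LIKE @kw OR c.title LIKE @kw
                HAVING distance < @radius
                ORDER BY distance
                LIMIT @offset, @rows";

            // Builds the parameterized command; parameterizing also replaces
            // the manual mysql_real_escape_string calls.
            public static MySqlCommand Build(MySqlConnection conn, double lat, double lng,
                                             string keyword, double radius, int offset, int rows)
            {
                var cmd = new MySqlCommand(Sql, conn);
                cmd.Parameters.AddWithValue("@lat", lat);
                cmd.Parameters.AddWithValue("@lng", lng);
                cmd.Parameters.AddWithValue("@kw", "%" + keyword + "%");
                cmd.Parameters.AddWithValue("@radius", radius);
                cmd.Parameters.AddWithValue("@offset", offset);
                cmd.Parameters.AddWithValue("@rows", rows);
                return cmd;
            }
        }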

  • Problem with MySQL character set & GWT

    - by Ehsan Khodarahmi
    Hi, I've a SmartGWT application which interacts with a MySQL database using RPC services. Think of it as a simple form with a textbox and two buttons, save and load. My database and table and all field collations are utf8_persian_ci. All Java source files and module HTML and XML files have been saved with the UTF-8 character set, and I also have a meta tag in the module HTML file which contains my form:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    My application works correctly in Eclipse development mode and also on my local Tomcat server. Then I put it on the remote server (I compress it into a war file using jar.exe with the -cvf flag and then upload it using my server's Plesk control panel). In this mode, when I load data from a MySQL table (load a record from any table), the data loads into my form with no problem; but when I want to save some data (in the Persian language), MySQL just writes ? (question mark) characters into the table's character fields. Any idea?

  • C# - Class Type as Parameter in methods

    - by Claudio
    Hi guys, I'm using the SQLite.cs wrapper to help me with the database, and I have this method for creating XML from a table, which is working fine:

        public void GenerateInvoiceXML(string filePath)
        {
            var invoices = app.db.Table<Invoice>().ToList();
            XmlSerializer serializer = new XmlSerializer(typeof(List<Invoice>));
            TextWriter writer = new StreamWriter(filePath);
            serializer.Serialize(writer, invoices);
            writer.Close();
        }

    All the tables I have are defined like this:

        [Serializable]
        public class Invoice
        {
            [PrimaryKey, AutoIncrement]
            public int Id { get; set; }
            public string Supplier { get; set; }
            public string Date { get; set; }
            public string PaymentMethod { get; set; }
            public string Notes { get; set; }

            public Invoice(int newID)
            {
                Id = newID;
            }

            public Invoice() { }
        }

    But I want to change the method to something like this:

        public void GenerateInvoiceXML(string filePath, Type table)
        {
            var dataForXML = app.db.Table<table>().ToList();   // does not compile: 'table' is a runtime Type
            XmlSerializer serializer = new XmlSerializer(typeof(List<table>));
            TextWriter writer = new StreamWriter(filePath);
            serializer.Serialize(writer, dataForXML);
            writer.Close();
        }

    Does anybody have an idea how to do it? Kind regards, Claudio
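
    A generic type parameter, rather than a System.Type argument, is the usual way out here: the compiler then has a concrete T to hand to both Table<T>() and typeof(List<T>). A hedged sketch against the same sqlite-net-style API as the question:

        // One method for every table: GenerateXML<Invoice>(path), GenerateXML<Supplier>(path), ...
        // (sqlite-net's Table<T>() requires the T : new() constraint)
        public void GenerateXML<T>(string filePath) where T : new()
        {
            var data = app.db.Table<T>().ToList();
            var serializer = new XmlSerializer(typeof(List<T>));
            using (TextWriter writer = new StreamWriter(filePath))
            {
                serializer.Serialize(writer, data);
            }
        }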

  • Model login constraints based on time

    - by DaDaDom
    Good morning, for an existing web application I need to implement "time-based login constraints". That means that for each user, and later maybe each group, I can define time slots when they are (or are not) allowed to log in to the system. As all data for the application is stored in database tables, I need to somehow model this idea there. My first approach, which I will try to explain here:

    - Create a tree of login constraints (called "timeslots") with the main "categories", like "workday", "weekend", "public holiday", etc., on the top level, in a "sorted" order (meaning "public holiday" has a higher priority than "weekday").
    - For each top-level node, create subnodes which have a finer timespan, like "monday", "tuesday", ...
    - Below that, create an "hour" level: 0, 1, 2, ..., 23. No further detail is necessary.
    - Set every member to "allowed" by default.
    - For every member of the system, create a 1:n relationship member:timeslots which defines constraints, e.g. member A may have the rules A:monday-forbidden and A:tuesday-forbidden.
    - Do a depth-first search at every login and check whether the member has a constraint. Why a depth-first search? Well, a member may have the rules A:monday->forbidden, A:monday-10->allowed, A:monday-11->allowed, so a login on Monday at 12:30 would fail, but one at 10:30 would succeed.

    For performance reasons I could break the relational database paradigm and set a flag on every entry in the member-to-timeslots table which is true if the member has information set for "finer" timeslots, but that's a second step. Is this model in principle a good idea? Are there existing models? Thanks.
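
    However the timeslot tree is stored, the lookup described above boils down to "the most specific matching rule wins; with no rule at all, allow". A hedged C# sketch of that resolution step, with invented types and the tree flattened to day/hour for brevity:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class LoginRule
        {
            public DayOfWeek? Day;   // null = matches any day
            public int? Hour;        // null = matches any hour (0-23)
            public bool Allowed;
        }

        static class LoginPolicy
        {
            // The most specific matching rule wins; with no matching rule at all,
            // the default is "allowed". Ties at equal specificity need a policy
            // of their own (here: whichever sorts last).
            public static bool MayLogIn(IEnumerable<LoginRule> rules, DateTime when)
            {
                var winner = rules
                    .Where(r => (r.Day == null || r.Day == when.DayOfWeek)
                             && (r.Hour == null || r.Hour == when.Hour))
                    .OrderBy(r => (r.Day != null ? 1 : 0) + (r.Hour != null ? 1 : 0))
                    .LastOrDefault();
                return winner == null || winner.Allowed;
            }
        }

    With the example rules - Monday forbidden, Monday 10 and 11 allowed - a login on Monday at 12:30 matches only the day-level rule and fails, while one at 10:30 matches the more specific hour rule and succeeds.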

  • How can I speed up insertion of many rows to a table via ADO.NET?

    - by jcollum
    I have a table that has 5 columns: AcctId (int), Address1 (varchar), Address2 (varchar), Person1 (varchar), Person2 (varchar). I'm generating random data to insert into this table via a C# console application. I tried doing the random data insert via SQL Server and decided it was not a good solution - SQL is not good at random on a per-row basis. Generating the random data - 975k rows of it - takes a minimal amount of time; it ends up in a List of custom objects.

    I need to take this random data and update many rows in the database with it. I tried updating the rows one at a time; that was very slow because of the repeated searching of the List object in code. So I think the best approach is to put all the randomized data into a table in the database, then update all the other tables that use this data, i.e.:

        UPDATE t
        SET t.Address1 = d.Address1
        FROM Table1 t
        INNER JOIN RandomizedData d ON d.AcctId = t.Acct_ID

    The database is very un-normalized, so this Acct data is sprinkled all over the place, and I've got no control over the normalization. So, having decided to insert all of the randomized data into a single table, I set out to create insert scripts:

        USE TheDatabase

        INSERT tmp_RandomizedData
        SELECT 1, '4392 EIGHTH AVE', '', 'JENNIFER CARTER', 'BARBARA CARTER'
        UNION ALL
        SELECT 2, '2168 MAIN ST', 'HNGR F', 'DANIEL HERNANDEZ', 'SUSAN MARTIN'
        -- etc., another 98 times...
        -- FYI, this is not real data!

    I'm building this INSERT script in batches of 100. It's taking on average 175 ms to run each insert. Does this seem like a long time? It's going to take about 35 minutes to run the whole insert. The table doesn't have a primary key or any indexes; I was planning on adding those after all the data is inserted (thinking that would be faster). Is there a better way to do this?
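
    For row counts like these, the stock ADO.NET answer is SqlBulkCopy, which streams the whole set to the server in one bulk operation instead of thousands of INSERT ... UNION ALL statements. A hedged sketch; the destination table name follows the question, the rest is invented:

        using System.Data;
        using System.Data.SqlClient;

        static class BulkLoader
        {
            // Streams all randomized rows into tmp_RandomizedData in one operation.
            public static void Load(string connectionString, DataTable rows)
            {
                using (var bulk = new SqlBulkCopy(connectionString))
                {
                    bulk.DestinationTableName = "tmp_RandomizedData";
                    bulk.BatchSize = 10000;   // commit in chunks rather than all at once
                    bulk.WriteToServer(rows);
                }
            }
        }

    The List of custom objects would first be copied into a DataTable (or exposed through an IDataReader), and the single set-based UPDATE ... JOIN above then finishes the job; adding the AcctId index after the load but before that UPDATE keeps the bulk insert fast while still helping the join.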

  • SQL Server - Cascade delete has multiple paths

    - by Anders Juul
    Hi all, I have two tables, Results and ComparedResults. ComparedResults has two columns which reference the primary key of the Results table. My problem is that if a record in Results is deleted, I wish to delete all records in ComparedResults which reference the deleted record, regardless of whether it's one column or the other (and the columns may reference the same Results row). A row in Results may be deleted directly, or through a cascade delete caused by deleting in a third table.

    Googling this indicates that I need to disable cascade delete and rewrite all cascade deletes to use triggers instead. Is that REALLY necessary? I'd be prepared to do much restructuring of the database to avoid this, as my main area is OO programming, and databases should 'just work'. It is hard to see, however, how a restructuring could help, as I would just move the problem around... Or am I missing something? I am also a bit at a loss as to why my initial construct should even be a problem for SQL Server! Any comments welcome and much appreciated! Anders, Denmark

  • nhibernate - mapping with constraints

    - by Tobias Müller
    Hello everybody, I am having a problem with my NHibernate mapping, and I can't find a solution by searching on Stack Overflow, Google or the documentation. The database I am using has (amongst others) two tables. One is unit, with the following fields:

        id
        enduring_id
        starts
        ends
        damage_enduring_id
        [...]

    The other one is damage, which has the following fields:

        id
        enduring_id
        starts
        ends
        [...]

    Units are assigned to a damage, and one damage can have zero, one or more units working on it. Every time a unit moves to another damage, the dataset is copied: the "ends" field of the old record and the "starts" field of the new record are set to the current timestamp, and enduring_id stays the same. So if I want to know which units were working on a damage at a certain time, I do the following select:

        select *
        from unit
        join damage on damage.enduring_id = unit.damage_enduring_id
        where unit.starts <= 'time' and unit.ends >= 'time'

    (This is not an actual query from the database; I made it up to make clear what I mean. The real database is a little more complex.) Now I want to map it so that I can load all the damages which are valid at one time (starts <= wanted time <= ends), and each of them has a Bag with all the attached units at that time (again starts <= wanted time <= ends). Is this possible within the mapping?

    Sorry if this is a stupid question, but I am pretty new to NHibernate and I have no clue how to do it. Thanks a lot for reading my post! Bye, Tobias

  • Rails: Oracle constraint violation

    - by justinbach
    I'm doing maintenance work on a Rails site that I inherited; it's driven by an Oracle database, and I've got access to both development and production installations of the site (each with its own Oracle DB). I'm running into an Oracle error when trying to insert data on the production site, but not the dev site:

        ActiveRecord::StatementInvalid (OCIError: ORA-00001: unique constraint
        (DATABASE_NAME.PK_REGISTRATION_OWNERSHIP) violated: INSERT INTO registration_ownerships
        (updated_at, company_ownership_id, created_by, updated_by, registration_id, created_at)
        VALUES ('2006-05-04 16:30:47', 3, NULL, NULL, 2920, '2006-05-04 16:30:47')):
        /usr/local/lib/ruby/gems/1.8/gems/activerecord-oracle-adapter-1.0.0.9250/lib/active_record/connection_adapters/oracle_adapter.rb:221:in `execute'
        app/controllers/vendors_controller.rb:94:in `create'

    As far as I can tell (I'm using Navicat as an Oracle client), the DB schema for the dev site is identical to that of the live site. I'm not an Oracle expert; can anyone shed light on why I'd be getting the error in one installation and not the other? Incidentally, both the dev and production registration_ownerships tables are populated with lots of data, including duplicate entries for company_ownership_id (driven by the index PK_REGISTRATION_OWNERSHIP). Please let me know if you need more information to troubleshoot; I'm sorry I haven't given more already, but I just wasn't sure which details would be helpful.

  • HTML to 'pretty' text conversion for printing on text only printer (dot matrix)

    - by Gala101
    Hi, I have a web site that generates some simple tabular data as HTML tables. Many of my users print the web page on a laser or inkjet printer; however, some like to print on legacy dot-matrix printers (text only), and therein lies the problem.

    When printing from the web browser onto a dot-matrix printer, the printer actually perceives the data as a graphic/image and proceeds to print it dot by dot: if printing a character 'C', the printer slices it horizontally and prints it in 3-4 passes. The same printer prints text from an ASCII file (say, from Notepad) as complete characters in a single pass, thereby being five times faster and much quieter than when printing a web page. (I have even tried the 'generic text-only driver', but Mozilla Firefox has a known bug, present since 2.0+, whereby it prints nothing over this particular driver.)

    So is there some clean way of formatting an already generated HTML table (say, a method that takes the entire HTML table as a string) and generating a corresponding text file with properly aligned columns? I have tried stripping the HTML tags, but the major issue there is performing good 'wrapping' of a cell's data while maintaining the integrity of the other cells' data from the same row. E.g. (the '|' and '_' are not really required):

        Col1 | Col2         | Colum_Name3   | Col4     |
        _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
        1    | this cell    | this column   | smaller  |
             | is in three  | spans 2 rows  |          |
             | rows         |               |          |
        - - - - - - - - - - - - - - - - - - - - - - - -
        2    | smaller now  | this also     | but this |
             |              |               | cell's   |
             |              |               | data is  |
             |              |               | now      |
             |              |               | bigger   |
        _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    Could you please suggest a preferred approach? I've thought of using XSLT and somehow outputting text (instead of the more prevalent PDF), but Apache FOP's text renderer is really broken and perhaps forgotten on the development path, and commercial ones are way too costly.
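
    Once the cells are out of the HTML (any HTML parser will do), what remains is exactly the wrapping problem described above: wrap each cell to its column width and emit the row line by line, padding the shorter cells. A hedged C# sketch of that step; it does not handle words longer than a column:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class TabularText
        {
            // Greedy word-wrap of one cell's text to a fixed column width.
            static List<string> Wrap(string text, int width)
            {
                var lines = new List<string>();
                var line = "";
                foreach (var word in text.Split(' '))
                {
                    if (line.Length > 0 && line.Length + 1 + word.Length > width)
                    {
                        lines.Add(line);
                        line = word;
                    }
                    else
                    {
                        line = line.Length == 0 ? word : line + " " + word;
                    }
                }
                if (line.Length > 0) lines.Add(line);
                if (lines.Count == 0) lines.Add("");
                return lines;
            }

            // Prints one row: every cell wrapped to its column width, then padded,
            // so columns stay aligned even when one cell needs several text lines.
            public static void PrintRow(string[] cells, int[] widths)
            {
                var wrapped = cells.Select((c, i) => Wrap(c, widths[i])).ToList();
                int height = wrapped.Max(w => w.Count);
                for (int line = 0; line < height; line++)
                {
                    var pieces = wrapped.Select((w, i) =>
                        (line < w.Count ? w[line] : "").PadRight(widths[i]));
                    Console.WriteLine(string.Join(" | ", pieces));
                }
            }
        }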

  • PostgreSQL - Error: SQL state: XX000.

    - by rob
    I have a table in Postgres that looks like this:

        CREATE TABLE "Population"
        (
          "Id" bigint NOT NULL DEFAULT nextval('"population_Id_seq"'::regclass),
          "Name" character varying(255) NOT NULL,
          "Description" character varying(1024),
          "IsVisible" boolean NOT NULL,
          CONSTRAINT "pk_Population" PRIMARY KEY ("Id")
        )
        WITH (
          OIDS=FALSE
        );

    And a select function that looks like this:

        CREATE OR REPLACE FUNCTION "Population_SelectAll"()
          RETURNS SETOF "Population" AS
        $BODY$
          select "Id", "Name", "Description", "IsVisible" from "Population";
        $BODY$
          LANGUAGE 'sql' STABLE
          COST 100;

    Calling the select function returns all the rows in the table, as expected. I have a need to add a couple of columns to the table (both of which are foreign keys to other tables in the database). This gives me a new table definition as follows:

        CREATE TABLE "Population"
        (
          "Id" bigint NOT NULL DEFAULT nextval('"population_Id_seq"'::regclass),
          "Name" character varying(255) NOT NULL,
          "Description" character varying(1024),
          "IsVisible" boolean NOT NULL,
          "DefaultSpeciesId" bigint NOT NULL,
          "DefaultEcotypeId" bigint NOT NULL,
          CONSTRAINT "pk_Population" PRIMARY KEY ("Id"),
          CONSTRAINT "fk_Population_DefaultEcotypeId" FOREIGN KEY ("DefaultEcotypeId")
            REFERENCES "Ecotype" ("Id") MATCH SIMPLE
            ON UPDATE NO ACTION ON DELETE NO ACTION,
          CONSTRAINT "fk_Population_DefaultSpeciesId" FOREIGN KEY ("DefaultSpeciesId")
            REFERENCES "Species" ("Id") MATCH SIMPLE
            ON UPDATE NO ACTION ON DELETE NO ACTION
        )
        WITH (
          OIDS=FALSE
        );

    and a function:

        CREATE OR REPLACE FUNCTION "Population_SelectAll"()
          RETURNS SETOF "Population" AS
        $BODY$
          select "Id", "Name", "Description", "IsVisible", "DefaultSpeciesId", "DefaultEcotypeId" from "Population";
        $BODY$
          LANGUAGE 'sql' STABLE
          COST 100
          ROWS 1000;

    Calling the function after these changes results in the following error message:

        ERROR: could not find attribute 11 in subquery targetlist
        SQL state: XX000

    What is causing this error, and how do I fix it? I have tried dropping and recreating the columns and the function, but the same error occurs. The platform is PostgreSQL 8.4 running on Windows Server. Thanks.

  • What is ADO.NET?

    - by ChrisC
    I've written a few Access databases and used some light VBA, and had an OO class. Now I'm undertaking to write a C# database app. I've got Visual Studio and System.Data.SQLite installed and connected, and have entered my tables and columns, but that's where I'm stuck. I'm trying to work out what info and tutorials I need to look for, but there are a lot of terms I don't understand, and I don't know whether or exactly how they apply to my project. I've read definitions for these terms (Wikipedia and elsewhere), but the definitions don't make sense to me because I don't know what the things are, how they fit together, or which ones are optional or not optional for my project. Some of the terms are from the System.Data.SQLite website (I wanted to use System.Data.SQLite for my db).

    I figured my first step in the project would be to get the database and queries set up and tested. Please tell me if there are other pieces of this part of the puzzle I will need to know about, too. If I can figure out what's what, I can start looking for the tutorials I need. (By the way, I know I don't want to use an ORM, because my app is so simple and because I want to keep from biting off too much too soon.) Thank you very much.

        SQLite.NET
        ADO.NET
        ADO.NET provider
        ADO.NET 2.0 Provider for SQLite
        SQLite
        Entity Framework
        SQLite Entity Framework provider
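
    As rough orientation, hedged rather than authoritative: ADO.NET is the base .NET database API (connections, commands, data readers), System.Data.SQLite is the ADO.NET provider that implements those classes for SQLite files, and the Entity Framework items are an optional ORM layer that a simple app can ignore. A minimal example against an invented MyTable:

        using System;
        using System.Data.SQLite;   // the System.Data.SQLite ADO.NET provider

        class AdoNetDemo
        {
            static void Main()
            {
                // ADO.NET is essentially this pattern: a provider-specific
                // connection, a command carrying SQL, and a reader for results.
                using (var conn = new SQLiteConnection("Data Source=app.db"))
                {
                    conn.Open();
                    using (var cmd = new SQLiteCommand("SELECT Id, Name FROM MyTable", conn))
                    using (SQLiteDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}: {1}", reader.GetInt64(0), reader.GetString(1));
                    }
                }
            }
        }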
