Search Results

Search found 5527 results on 222 pages for 'unique constraint'.

Page 40 of 222

  • How to Auto-Increment Non-Primary Key? - SQL Server

    - by user311509
    CREATE TABLE SupplierQuote (
        supplierQuoteID int identity (3504,2) CONSTRAINT supquoteid_pk PRIMARY KEY,
        PONumber int identity (9553,20) NOT NULL
        . . .
        CONSTRAINT ponumber_uq UNIQUE(PONumber)
    );

    The above DDL produces an error:

        Msg 2744, Level 16, State 2, Line 1
        Multiple identity columns specified for table 'SupplierQuote'. Only one identity column per table is allowed.

    How can I solve this? I want PONumber to be auto-incremented.
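
    One commonly suggested workaround, sketched below as an assumption rather than a confirmed fix: SQL Server allows only one IDENTITY column per table, so keep it for the primary key and drive PONumber from a SEQUENCE through a DEFAULT constraint (SQL Server 2012 or later; on older versions a trigger or a separate keyed table plays the same role).

        -- Minimal sketch, reusing the poster's names; SEQUENCE needs SQL Server 2012+.
        CREATE SEQUENCE dbo.PONumberSeq START WITH 9553 INCREMENT BY 20;

        CREATE TABLE SupplierQuote (
            supplierQuoteID int IDENTITY(3504,2) CONSTRAINT supquoteid_pk PRIMARY KEY,
            PONumber int NOT NULL
                CONSTRAINT ponumber_df DEFAULT (NEXT VALUE FOR dbo.PONumberSeq),
            CONSTRAINT ponumber_uq UNIQUE (PONumber)
        );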

  • Method-Object pairs in Java

    - by John Manak
    I'd like to create a list of method-object pairs. Each method is a function returning a boolean. Then:

        foreach (pair) {
            if method evaluates to true {
                do something with the object
            }
        }

    One way of modelling this that I can think of is to have a class Constraint with a method isValid() and, for each constraint, produce an anonymous class (overriding the isValid() method). I feel like there could be a nicer way. Can you think of any?

  • SQL SERVER Spatial Data

    - by Sam
    Hi All, I am struggling to find an efficient way to compute the distance between a point that intersects a polygon and the border of that polygon. I was able to use STDistance, comparing the point to every point that makes up the polygon, but that is taking a lot of time. Using spatial indexes wasn't much help, because STDistance is not part of any constraint, and even when I did put the constraint in, the index didn't help much. I appreciate any feedback. Thanks.
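
    The usual suggestion for this (a sketch under my own assumptions, not from the post) is to measure against the polygon's boundary in a single call instead of looping over its vertices:

        -- Hypothetical geometry instances; STBoundary() returns the polygon's
        -- ring(s), and STDistance() then gives the shortest distance to that border.
        DECLARE @polygon geometry = geometry::STGeomFromText(
            'POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))', 0);
        DECLARE @point geometry = geometry::STGeomFromText('POINT(3 4)', 0);

        SELECT @point.STDistance(@polygon.STBoundary()) AS DistanceToBorder;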

  • mysql error - don't understand what it is saying

    - by sea_1987
    Cannot add or update a child row: a foreign key constraint fails
    (`mydb`.`job_listing_has_employer_details`, CONSTRAINT `job_listing_has_employer_details_ibfk_2`
    FOREIGN KEY (`employer_details_id`) REFERENCES `employer_details` (`id`))

        INSERT INTO `job_listing_has_employer_details` (`job_listing_id`, `employer_details_id`)
        VALUES (6, '5')

    What does this mean? The two IDs I am inserting into the table exist.
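
    For reference, a hedged diagnostic sketch (my addition, not part of the question): the error means InnoDB could not find a matching parent row for employer_details_id, so it is worth checking the parent table and that the column definitions match exactly:

        -- Does the parent row actually exist?
        SELECT id FROM employer_details WHERE id = 5;

        -- Do the referencing and referenced columns agree on type and signedness?
        SHOW CREATE TABLE employer_details;
        SHOW CREATE TABLE job_listing_has_employer_details;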

  • What does ON [PRIMARY] mean?

    - by Icono123
    I'm creating an SQL setup script and I'm using someone else's script as an example. Here's an example of the script:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[be_Categories](
            [CategoryID] [uniqueidentifier] ROWGUIDCOL NOT NULL
                CONSTRAINT [DF_be_Categories_CategoryID] DEFAULT (newid()),
            [CategoryName] [nvarchar](50) NULL,
            [Description] [nvarchar](200) NULL,
            [ParentID] [uniqueidentifier] NULL,
            CONSTRAINT [PK_be_Categories] PRIMARY KEY CLUSTERED
            (
                [CategoryID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO

    Does anyone know what the ON [PRIMARY] clause does? Regards.
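
    By way of context (a sketch I am adding, not from the question): ON [PRIMARY] places the table or index on the database's PRIMARY filegroup, which is the default; it only becomes interesting once other filegroups exist, for example:

        -- Assumes a database named ExampleDb; names are invented for illustration.
        ALTER DATABASE ExampleDb ADD FILEGROUP [ARCHIVE];
        ALTER DATABASE ExampleDb ADD FILE (
            NAME = ArchiveData,
            FILENAME = 'C:\Data\ExampleDb_Archive.ndf'
        ) TO FILEGROUP [ARCHIVE];

        -- The same ON clause can then target the new filegroup instead of PRIMARY:
        CREATE TABLE dbo.OldOrders (OrderID int NOT NULL) ON [ARCHIVE];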

  • Understanding HTTP Cookies in Indy 10 for Delphi XE2

    - by Jerry Dodge
    I have been working with Indy 10 HTTP servers/clients lately in Delphi XE2, and I need to make sure I'm understanding session management correctly. In the server, I have a "bucket" of sessions, which is a list of objects that each represent a unique session. I don't use a username and password to authenticate users, but rather a unique API key, which is issued to a client and has an expiration. When a client wishes to connect to the server, it first logs in by calling the "login" command, which is a path like this: http://localhost:1234/login?APIKey=abcdefghij. The server checks this API key against the database and, if it's valid, creates a new session in the bucket, issues a new cookie (a unique string), and sets the response cookies with Success=Y and Cookie=abcdefghij. This is where I have the question. Assuming the client end has its own method of cookie management, the client will receive this login response back from the server and automatically save the cookies as necessary. Any future request from the client to the server shall automatically send along these cookies, and the client side doesn't necessarily have to worry about setting these cookies when sending requests to the server. Right? PS - I'm asking this question here on programmers.stackexchange.com because I didn't see it fit to ask on stackoverflow.com. If anyone thinks this is appropriate enough for stackoverflow.com, please let me know.

  • How to manage Areas/Levels in an RPG?

    - by Hexlan
    I'm working on an RPG and I'm trying to figure out how to manage the different levels/areas in the game. Currently I create a new state (source file) for every area, defining its unique aspects. My concern is that as the game grows, the number of class files will become unmanageable with all the towns, houses, shops, dungeons, etc. that I need to keep track of. I would also prefer to separate my levels from the source code, because non-programmer members of the team will be creating levels, and I would like the engine to be as free from game-specific code as possible. I'm thinking of creating a class that provides all the functionality shared by the levels/areas, with a unique member variable that can be used to look up level specifics from data. This way I only need to define a level/area once in the code, but can create multiple instances, each with its own unique aspects provided by data. Is this a good way to go about solving the issue? Is there a better way to handle a growing number of levels?

  • Multiple URLs going to the same page - Kosher for Google?

    - by Ashoka15
    I hear conflicting answers from people about this; I'm a developer by trade, and my SEO knowledge is not what it should be. Here's my situation: I run a website that lists hotels, restaurants, bars, shops, etc. for a small Asian beach town. Lots of establishments here are hotels with a restaurant and bar, as well as restaurants that are also bars. As an example, a Mexican restaurant that also functions as a full cocktail bar. I first set it up so each establishment has one page, but can create multiple pages based on its other areas of business. This forces people to create TWO listings under the same name, and most just add the exact same information onto each page, making things redundant. I am re-arranging the database so that an establishment has only ONE listing (one unique page referenced by the unique code '12345ABCDEF') that is accessible from browsing under "Restaurants" and "Bars", and has the URL structures:

        site.com/dining/mexican/12345ABCDEF/business-name.html
        site.com/bars/cocktail_bars/12345ABCDEF/business-name.html

    I could easily simplify the URL to just the unique code and name:

        site.com/12345ABCDEF/business-name.html

    But I found that Google has parsed my URL structure and lists it like this on their SERP:

        Home > Dining > Mexican

    with each crumb pointing to the default page for the homepage, restaurants, and Mexican restaurants. If I simplify the URL structure, will I lose these associations? Could Google also be picking up this structure from my breadcrumb trail at the top of the page? What is the best way to set up URLs on these pages so I am not penalized by Google for having identical information on two URLs, while still being able to have places show up as they did with the old system?

  • Single CAS web application in a cluster

    - by Dolf Dijkstra
    Recently a customer wanted to set up a cluster of CAS nodes to be used together with WebCenter Sites. In the process of setting this up they realized that they needed to create a web application per managed server. They did not want this management burden, but would like one web application deployed to multiple nodes. The reason a unique application per node is needed is that the web application contains information that must be unique per node: the postfix for the ticket id. My customer would like to externalize the node-specific configuration to either a specific classpath per managed server or to system properties set at startup.

    It turns out that the postfix for ticket ids is managed through a property host.name, and that this property can be externalized. The host.name property is used in /webapps/cas/WEB-INF/spring-configuration/uniqueIdGenerators.xml. It is set in /webapps/cas/WEB-INF/spring-configuration/propertyFileConfigurer.xml in a PropertyPlaceholderConfigurer. The documentation for PropertyPlaceholderConfigurer (http://static.springsource.org/spring/docs/2.0.x/api/org/springframework/beans/factory/config/PropertyPlaceholderConfigurer.html) indicates that the properties defined through the PropertyPlaceholderConfigurer can be externalized.

    To enable this externalization, change host.properties so it is generic for all the managed servers and can thus be reused by all of them:

        host.name=${cluster.node.id}

    The next step is to change the startup scripts for the managed servers and add a system property -Dcluster.node.id=<something unique and stable>. Voilà: the postfix is externalized and the web application can be shared amongst the cluster nodes.

  • Suggest the best options for designing a dynamic web interface using PHP, MySQL and AJAX

    - by Krishna
    Hello, I am designing a web interface for a company. Here is the company's profile: the company currently has 5 branches and is planning to extend its branches all over the country. It is an insurance surveying company dealing with 6 categories in the insurance domain, viz. Engineering, Fire, Marine, Motor, Miscellaneous, and Risk Inspection. The branches are named b1, b2, b3, b4, b5, and extending. Finally, they have contracts with 22 companies. For each claim they assign a unique ID of the form contractcompany/category/serialno. For example, taking contracted company names xxx, sss, zzz:

        xxx/Engineering/001
        sss/Engineering/001
        xxx/Engineering/002
        sss/Engineering/002
        xxx/Fire/001
        sss/Fire/001
        xxx/Fire/002
        and so on...

    In this way they issue a unique ID for each claim. What I want is to develop the interface with PHP, MySQL and AJAX, and to:

        - auto-generate the unique ID for each claim;
        - store the full details of each claim, keyed by the unique ID;
        - show all claims on one page, with views by branch and by category;
        - send a monthly report (all claims given and their status) to the contracted companies;
        - give access to contracted companies, who can view only their respective claims;
        - let each claim carry its own documents, uploaded by our own company users or the administrator, associated with the unique ID and viewable by the contracted companies;
        - give branches access to enter new claims and update old claims;
        - let the administrator create, update and delete all the claims and their details;
        - let only the administrator grant new users (own company branches / contracted companies).

    Finally, the panel is completely database driven. Could anybody help? Thanks in advance. Kindly do the needful and oblige. Thanks and regards, Krishna. P [email protected]
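
    As a starting point, a schema sketch of my own reading of those requirements (table and column names are invented, not the poster's):

        CREATE TABLE claim (
            claim_id  VARCHAR(64) PRIMARY KEY,      -- e.g. 'xxx/Engineering/001'
            company   VARCHAR(32) NOT NULL,
            category  VARCHAR(32) NOT NULL,
            serial_no INT NOT NULL,
            UNIQUE KEY claim_uq (company, category, serial_no)
        );

        -- A counter row per (company, category) keeps serial generation atomic:
        CREATE TABLE claim_counter (
            company  VARCHAR(32) NOT NULL,
            category VARCHAR(32) NOT NULL,
            next_no  INT NOT NULL DEFAULT 1,
            PRIMARY KEY (company, category)
        );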

  • Approach for authentication and storing user details.

    - by cappuccino
    Hey folks, I am using the Zend Framework, but my question is broadly about sessions / databases / auth (PHP, MySQL). Currently this is my approach to authentication:

    1) User signs in, and the details are checked in the database. Standard stuff, really.

    2) If the details are correct, only the user's unique ID is stored in the session, along with a security token (user unique ID + IP + browser info + salt). The session is written to the filesystem. I've been reading around, and many are saying that storing stuff in sessions is not a good idea, and that you should really only write a unique ID that refers back to the user's details, plus a security token to prevent session hijacking. So this is the approach I've taken; I used to write the user's details into the session, but I've moved that out. I wanted to know your opinions on this. I'm keeping sessions in the filesystem since I don't run on multiple servers, and since I'm only writing a tiny, tiny bit of data to sessions, I thought performance would be better keeping sessions in the filesystem, reducing load on the database. Once the session is written on authentication, it really is read-only from then on.

    3) The rest of the user's details (like subscription details, permissions, account info, etc.) are cached in the filesystem (this can always be easily moved to memory if I want even more performance). So rather than keeping the user's details in the session, they are cached in the filesystem. I'm using Zend_Cache, and the unique cache id is something like md5(/cache/auth/2892), where the number is the unique id of the user. I guess the benefit of this method is that once the user is logged in, essentially no database queries are run to get the user's details. I just wonder if this approach is better than keeping the whole lot in the session...

    4) As the user moves throughout the site, the only things that are checked are the ID in the session and the security token.

    So, overall, the questions are: 1) is the filesystem more efficient than a database for this purpose? 2) have I taken enough security precautions? 3) is separating the user's details from the session into a cached file a pointless task? Thanks.

  • MySQL forgot about automatically creating an index for a foreign key?

    - by bobo
    After running the following SQL statements, you will see that MySQL has automatically created the non-unique index question_tag_tag_id_tag_id on the tag_id column for me after the first ALTER TABLE statement has run. But after the second ALTER TABLE statement has run, I think MySQL should also automatically create another non-unique index question_tag_question_id_question_id on the question_id column for me. But as you can see from the SHOW INDEXES statement output, it's not there. Why does MySQL forget about the second ALTER TABLE statement? By the way, since I have already created a unique index question_id_tag_id_idx over both the question_id and tag_id columns, is creating a separate index for each of them redundant?

        mysql> DROP DATABASE mydatabase;
        Query OK, 1 row affected (0.00 sec)

        mysql> CREATE DATABASE mydatabase;
        Query OK, 1 row affected (0.00 sec)

        mysql> USE mydatabase;
        Database changed

        mysql> CREATE TABLE question (id BIGINT AUTO_INCREMENT, html TEXT, PRIMARY KEY(id)) ENGINE = INNODB;
        Query OK, 0 rows affected (0.05 sec)

        mysql> CREATE TABLE tag (id BIGINT AUTO_INCREMENT, name VARCHAR(10) NOT NULL, UNIQUE INDEX name_idx (name), PRIMARY KEY(id)) ENGINE = INNODB;
        Query OK, 0 rows affected (0.05 sec)

        mysql> CREATE TABLE question_tag (question_id BIGINT, tag_id BIGINT, UNIQUE INDEX question_id_tag_id_idx (question_id, tag_id), PRIMARY KEY(question_id, tag_id)) ENGINE = INNODB;
        Query OK, 0 rows affected (0.00 sec)

        mysql> ALTER TABLE question_tag ADD CONSTRAINT question_tag_tag_id_tag_id FOREIGN KEY (tag_id) REFERENCES tag(id);
        Query OK, 0 rows affected (0.10 sec)
        Records: 0  Duplicates: 0  Warnings: 0

        mysql> ALTER TABLE question_tag ADD CONSTRAINT question_tag_question_id_question_id FOREIGN KEY (question_id) REFERENCES question(id);
        Query OK, 0 rows affected (0.13 sec)
        Records: 0  Duplicates: 0  Warnings: 0

        mysql> SHOW INDEXES FROM question_tag;
        +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        | Table        | Non_unique | Key_name                   | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
        +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        | question_tag |          0 | PRIMARY                    |            1 | question_id | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        | question_tag |          0 | PRIMARY                    |            2 | tag_id      | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        | question_tag |          0 | question_id_tag_id_idx     |            1 | question_id | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        | question_tag |          0 | question_id_tag_id_idx     |            2 | tag_id      | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        | question_tag |          1 | question_tag_tag_id_tag_id |            1 | tag_id      | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        5 rows in set (0.01 sec)
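
    A likely explanation, added here as a hedged note rather than part of the question: InnoDB only creates an index for a foreign key when no existing index already starts with the referenced column; question_id is the leftmost column of both the primary key and question_id_tag_id_idx, so either can serve the second constraint (which also suggests a separate index on question_id alone would indeed be redundant). A minimal sketch of the rule, with invented table names:

        -- No usable leftmost-prefix index exists, so InnoDB creates one for the FK:
        CREATE TABLE child_a (
            question_id BIGINT,
            FOREIGN KEY (question_id) REFERENCES question(id)
        ) ENGINE = INNODB;

        -- Here the composite index already starts with question_id,
        -- so no extra index is added for the same FK:
        CREATE TABLE child_b (
            question_id BIGINT, tag_id BIGINT,
            UNIQUE INDEX qt_idx (question_id, tag_id),
            FOREIGN KEY (question_id) REFERENCES question(id)
        ) ENGINE = INNODB;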

  • T-SQL Tuesday #31 - Logging Tricks with CONTEXT_INFO

    - by Most Valuable Yak (Rob Volk)
    This month's T-SQL Tuesday is being hosted by Aaron Nelson [b | t], fellow Atlantan (the city in Georgia, not the famous sunken city, or the resort in the Bahamas) and covers the topic of logging (the recording of information, not the harvesting of trees) and maintains the fine T-SQL Tuesday tradition begun by Adam Machanic [b | t] (the SQL Server guru, not the guy who fixes cars, check the spelling again, there will be a quiz later).

    This is a trick I learned from Fernando Guerrero [b | t] waaaaaay back during the PASS Summit 2004 in sunny, hurricane-infested Orlando, during his session on Secret SQL Server (not sure if that's the correct title, and I haven't used parentheses in this paragraph yet). CONTEXT_INFO is a neat little feature that's existed since SQL Server 2000 and perhaps even earlier. It lets you assign data to the current session/connection, and maintains that data until you disconnect or change it. In addition to the CONTEXT_INFO() function, you can also query the context_info column in sys.dm_exec_sessions, or even sysprocesses if you're still running SQL Server 2000, if you need to see it for another session.

    While you're limited to 128 bytes, one big advantage that CONTEXT_INFO has is that it's independent of any transactions. If you've ever logged to a table in a transaction and then lost messages when it rolled back, you can understand how aggravating it can be. CONTEXT_INFO also survives across multiple SQL batches (GO separators) in the same connection, so for those of you who were going to suggest "just log to a table variable, they don't get rolled back": HA-HA, I GOT YOU! Since GO starts a new batch all variable declarations are lost.

    Here's a simple example I recently used at work. I had to test database mirroring configurations for disaster recovery scenarios and measure the network throughput. I also needed to log how long it took for the script to run and include the mirror settings for the database in question. I decided to use AdventureWorks as my database model, and Adam Machanic's Big Adventure script to provide a fairly large workload that's repeatable and easily scalable. My test would consist of several copies of AdventureWorks running the Big Adventure script while I mirrored the databases (or not).

    Since Adam's script contains several batches, I decided CONTEXT_INFO would have to be used. As it turns out, I only needed to grab the start time at the beginning; I could get the rest of the data at the end of the process. The code is pretty small:

        declare @time binary(128)=cast(getdate() as binary(8))
        set context_info @time

        ... rest of Big Adventure code ...

        go
        use master;
        insert mirror_test(server,role,partner,db,state,safety,start,duration)
        select @@servername, mirroring_role_desc, mirroring_partner_instance,
            db_name(database_id), mirroring_state_desc, mirroring_safety_level_desc,
            cast(cast(context_info() as binary(8)) as datetime),
            datediff(s,cast(cast(context_info() as binary(8)) as datetime),getdate())
        from sys.database_mirroring
        where db_name(database_id) like 'Adv%';

    I declared @time as a binary(128) since CONTEXT_INFO is defined that way. I couldn't convert GETDATE() to binary(128) as it would pad the first 120 bytes as 0x00. To keep the CAST functions simple and avoid using SUBSTRING, I decided to CAST GETDATE() as binary(8) and let SQL Server do the implicit conversion. It's not the safest way perhaps, but it works on my machine. :)

    As I mentioned earlier, you can query system views for sessions and get their CONTEXT_INFO. With a little boilerplate code this can be used to monitor long-running procedures, in case you need to kill a process, or are just curious how long certain parts take. In this example, I added code to Adam's Big Adventure script to set CONTEXT_INFO messages at strategic places I want to monitor. (His code is in UPPERCASE as it was in the original, mine is all lowercase):

        declare @msg binary(128)
        set @msg=cast('Altering bigProduct.ProductID' as binary(128))
        set context_info @msg
        go
        ALTER TABLE bigProduct ALTER COLUMN ProductID INT NOT NULL
        GO
        set context_info 0x0
        go
        declare @msg1 binary(128)
        set @msg1=cast('Adding pk_bigProduct Constraint' as binary(128))
        set context_info @msg1
        go
        ALTER TABLE bigProduct ADD CONSTRAINT pk_bigProduct PRIMARY KEY (ProductID)
        GO
        set context_info 0x0
        go
        declare @msg2 binary(128)
        set @msg2=cast('Altering bigTransactionHistory.TransactionID' as binary(128))
        set context_info @msg2
        go
        ALTER TABLE bigTransactionHistory ALTER COLUMN TransactionID INT NOT NULL
        GO
        set context_info 0x0
        go
        declare @msg3 binary(128)
        set @msg3=cast('Adding pk_bigTransactionHistory Constraint' as binary(128))
        set context_info @msg3
        go
        ALTER TABLE bigTransactionHistory ADD CONSTRAINT pk_bigTransactionHistory PRIMARY KEY NONCLUSTERED(TransactionID)
        GO
        set context_info 0x0
        go
        declare @msg4 binary(128)
        set @msg4=cast('Creating IX_ProductId_TransactionDate Index' as binary(128))
        set context_info @msg4
        go
        CREATE NONCLUSTERED INDEX IX_ProductId_TransactionDate ON bigTransactionHistory(ProductId,TransactionDate) INCLUDE(Quantity,ActualCost)
        GO
        set context_info 0x0

    This doesn't include the entire script, only those portions that altered a table or created an index. One annoyance is that SET CONTEXT_INFO requires a literal or variable; you can't use an expression. And since GO starts a new batch I need to declare a variable in each one. And of course I have to use CAST because it won't implicitly convert varchar to binary. And even though context_info is a nullable column, you can't SET CONTEXT_INFO NULL, so I have to use SET CONTEXT_INFO 0x0 to clear the message after the statement completes. And if you're thinking of turning this into a UDF, you can't, although a stored procedure would work.

    So what does all this aggravation get you? As the code runs, if I want to see which stage the session is at, I can run the following (assuming SPID 51 is the one I want):

        select CAST(context_info as varchar(128))
        from sys.dm_exec_sessions
        where session_id=51

    Since SQL Server 2005 introduced the new system and dynamic management views (DMVs) there's not as much need for tagging a session with these kinds of messages. You can get the session start time and currently executing statement from them, and neatly presented if you use Adam's sp_whoisactive utility (and you absolutely should be using it). Of course you can always use xp_cmdshell, a CLR function, or some other tricks to log information outside of a SQL transaction. All the same, I've used this trick to monitor long-running reports at a previous job, and I still think CONTEXT_INFO is a great feature, especially if you're still using SQL Server 2000 or want to supplement your instrumentation. If you'd like an exercise, consider adding the system time to the messages in the last example, and an automated job to query and parse it from the system tables. That would let you track how long each statement ran without having to run Profiler. #TSQL2sDay

  • SQL SERVER – Weekly Series – Memory Lane – #034

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.

    2007

    - UDF – User Defined Function to Strip HTML – Parse HTML – No Regular Expression: The UDF used in the blog does a fantastic task – it scans the entire HTML text and removes all the HTML tags, keeping only valid text data. This is one of the most commonly requested tasks developers face every day.
    - De-fragmentation of Database at Operating System to Improve Performance: The operating system skips the MDF file while defragging the entire filesystem. It is absolutely fine and there is no impact on performance. Read the entire blog post for my conversation with our network engineers.
    - Delay Function – WAITFOR clause – Delay Execution of Commands: How do you delay execution of commands in SQL Server – of course, by using the WAITFOR keyword. In this blog post, I explain the same with the help of a T-SQL script.
    - Find Length of Text Field: To measure the length of TEXT fields the function is DATALENGTH(textfield). LEN will not work for a text field. As of SQL Server 2005, developers should migrate all text fields to VARCHAR(MAX), as that is the way forward.
    - Retrieve Current Date Time in SQL Server – CURRENT_TIMESTAMP, GETDATE(), {fn NOW()}: There are three ways to retrieve the current datetime in SQL Server: CURRENT_TIMESTAMP, GETDATE(), and {fn NOW()}.
    - Explanation and Comparison of NULLIF and ISNULL: An interesting observation is that NULLIF returns null if its comparison is successful, whereas ISNULL returns not null if its comparison is successful. In a way they are opposite to each other. Here is my question to you: how do you create an infinite loop using NULLIF and ISNULL, if this is even possible?

    2008

    - Introduction to SERVERPROPERTY and example: SERVERPROPERTY is a very interesting system function. It returns many of the system values. I use it very frequently to get different server values like server collation, server name, etc.
    - SQL Server Start Time: We can use a DMV to find out the start time of SQL Server in 2008 and later versions. In this blog you can see how to do the same.
    - Find Current Identity of Table: Many times we need to know the current identity of a column. I have found one of my developers using the aggregate function MAX() to find the current identity. However, I prefer the DBCC command to figure out the current identity.
    - Create Check Constraint on Column: Sometimes we just need to create a simple constraint on a table, but I have noticed developers doing many different things to make a table column follow rules rather than just creating a constraint. I suggest that constraints are a very useful concept, and every SQL developer should pay good attention to this subject.

    2009

    - List Schema Name and Table Name for Database: This is one of those blog posts where I straightforwardly display a script – the kind of post I still love to read and write.
    - Clustered Index on Separate Drive From Table Location: A table devoid of a primary key index is called a heap, and here data is not arranged in a particular order, which gives rise to issues that adversely affect performance. Data must be stored in some kind of order. If we put a clustered index on it, then the order will be forced by that index and the data will be stored in that particular order.
    - Understanding Table Hints with Examples: Hints are options and strong suggestions specified for enforcement by the SQL Server query processor on DML statements. The hints override any execution plan the query optimizer might select for a query.

    2010

    - Data Pages in Buffer Pool – Data Stored in Memory Cache: One of my earlier articles, which I still read many times and point developers to read again. It is clear from the resultset that when more than one index is used, data pages related to both or all of the indexes are stored in the memory cache separately.
    - TRANSACTION, DML and Schema Locks: Can you create a situation where you can see a schema lock? This is a very simple question; however, during interviews I noticed over 50 candidates failed to come up with the scenario. In this blog post, I demonstrate a situation where we can see the schema lock in a database.

    2011

    - Solution – Puzzle – Statistics are not updated but are Created Once: In this example I have created the following situation: create a table; insert 1,000 records; check the statistics; now insert ten times more records (10,000); check the statistics – they will NOT be updated, even though Auto Update Statistics and Auto Create Statistics are TRUE for the database. I then ask two things: 1) why is this happening? and 2) how do we fix this issue?
    - Selecting Domain from Email Address: A straight-to-script blog post where I explain how to select only the domain name from an entire email address.
    - Solution – Generating Zero Without using Any Numbers in T-SQL: How do you get the digit zero without using any digits? This is indeed a very interesting question, and the answer is even more interesting. Try to come up with the answer in the next 10 minutes, and if you can't, read the post for the solution.

    2012

    - Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function: In simple words, SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. The DIFFERENCE function returns an integer value: the number of characters in the SOUNDEX values that are the same.
    - Read Only Files and SQL Server Management Studio (SSMS): I came across a very interesting feature in SSMS related to "read only" files. I believe it is a little-known feature as well, so I decided to write a blog about it.
    - Identifying Column Data Type of uniqueidentifier without Querying System Tables: How do I know whether a table has a uniqueidentifier column, and what its value is, without using any DMV or system catalogues? The only information you know is the table name, and you are allowed to return any kind of error if the table does not have a uniqueidentifier column. Read the blog post to find the answer.
    - Solution – User Not Able to See Any User Created Object in Tables – Security and Permissions Issue: An interesting question – "When I try to connect to SQL Server, it lets me connect just fine and lets me open and explore the database. I noticed that I do not see any user-created objects, but when my colleague attempts to connect to the server, he is able to explore the database and see all the user-created tables and other objects. Can you help me fix it?"
    - Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video: An interesting, small 60-second video on how to import a CSV file into a database.
    - ColumnStore Index – Batch Mode vs Row Mode: Here is the logic behind when a columnstore index uses batch mode and when it uses row mode. A batch typically represents about 1000 rows of data. Batch mode processing also uses algorithms that are optimized for multicore CPUs and increased memory throughput.
    - Follow up – Usage of $rowguid and $IDENTITY: An excellent follow-up to my earlier blog post explaining where to use $rowguid and $IDENTITY. If you do not know the difference between them, this is a blog post with a script example.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
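
    To make the 2007 datetime tip above concrete, a tiny sketch (my addition, not from the original posts):

        -- Three ways to read the current datetime in SQL Server:
        SELECT CURRENT_TIMESTAMP;  -- ANSI SQL standard
        SELECT GETDATE();          -- classic T-SQL function
        SELECT {fn NOW()};         -- ODBC scalar function, same value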

  • Designing An ACL Based Permission System

    - by ryanzec
    I am trying to create a permissions system where everything is stored in MySQL (or some database) and pulled using PHP, for a project management system I am building. Right now I am trying to do it in an ACL kind of way. There are a number of key features I want to be able to support:

    1. Being able to assign permissions without being tied to a specific object. The reason for this is that I want to be able to selectively show/hide elements of the UI based on permissions at a point where I am not directly looking at a domain object instance. For instance, a button to create a new project should only be shown to users that have the pm.project.create permission, but obviously you can also assign a create permission to a domain object instance (as it is already created).

    2. Not having to assign permissions for every single object. Obviously, creating permission entries for every single object (projects, tickets, comments, etc.) would become a nightmare to maintain, so I want to have some level of permission inheritance.

    3. Being able to filter queries based on permissions. This would be really nice to have, but I am not sure if it is possible. What I mean by this is, say I have a page that lists all projects. I want the query that pulls all projects to incorporate the ACL, so that it would not show projects the current user does not have pm.project.read access to. This would have to be incorporated into the main query: if it were a separate step done after the main query (which I know I could do), certain features like pagination would become much more difficult.

    Right now this is my basic design for the tables:

        AclEntities
            id - the primary key
            key - the unique identifier for the domain object (usually the primary key of that object)
            parentId - the parent of the domain object (like the project object if this was a ticket object)
            aclDomainObjectId - metadata about the domain object

        AclDomainObjects
            id - primary key
            title - simple string to uniquely identify the domain object (i.e. project, ticket, comment, etc.)
            fullyQualifiedClassName - the fully qualified class name for use in code (I am using namespaces)

    There would also be tables mapping AclEntities to Users and UserGroups. I also have an interface that all ACL-entity-based objects have to implement:

        IAclEntity
            getAclKey() - get the unique key for this specific instance of the ACL domain object (generally the primary key, or a concatenated string of a composite primary key)
            getAclTitle() - get the unique title for the domain object (generally just returning a static string)
            getAclDisplayString() - get the string that represents this entity (generally one or more fields on the object)
            getAclParentEntity() - get the parent ACL entity object (or null if no parent)
            getAclEntity() - get the ACL entity object for this instance of the domain object (or null if one has not been created yet)
            hasPermission($permissionString, $user = null) - whether or not the user has the permission for this instance of the domain object
            static getFromAclEntityId($aclEntityId) - get a specific instance of the domain object from an ACL entity id

    Do any of the features I am looking for seem hard to support, or are they just way off base? Am I missing anything or not taking anything into account in my implementation? Is performance something I should keep in mind?
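
    For concreteness, a DDL sketch of the two tables as described (my reading of the design with invented column types, not the poster's actual schema; note that `key` must be backticked or renamed in MySQL):

        CREATE TABLE AclDomainObjects (
            id INT AUTO_INCREMENT PRIMARY KEY,
            title VARCHAR(64) NOT NULL,                 -- e.g. 'project', 'ticket'
            fullyQualifiedClassName VARCHAR(255) NOT NULL,
            UNIQUE KEY title_uq (title)
        ) ENGINE = INNODB;

        CREATE TABLE AclEntities (
            id INT AUTO_INCREMENT PRIMARY KEY,
            `key` VARCHAR(64) NOT NULL,
            parentId INT NULL,
            aclDomainObjectId INT NOT NULL,
            UNIQUE KEY domain_key_uq (aclDomainObjectId, `key`),
            FOREIGN KEY (parentId) REFERENCES AclEntities(id),
            FOREIGN KEY (aclDomainObjectId) REFERENCES AclDomainObjects(id)
        ) ENGINE = INNODB;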

  • UITableViewCell repeating problem

    - by cannyboy
    I have a UITableView with cells. Some cells have UILabels and some have UIButtons. The UIButtons are created whenever the first character in an array is "^". However, the UIButtons repeat when I scroll down (appearing over the UILabel), and then multiply over the UILabels when I scroll up. Any clues why? My voluminous code is below:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
        {
            const NSInteger LABEL_TAG = 1001;
            UILabel *label;
            UIButton *linkButton;
            //NSString *linkString;
            static NSString *CellIdentifier;
            UITableViewCell *cell;
            CellIdentifier = @"TableCell";
            cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
                label = [[UILabel alloc] initWithFrame:CGRectZero];
                [cell.contentView addSubview:label];
                [label setLineBreakMode:UILineBreakModeWordWrap];
                [label setMinimumFontSize:FONT_SIZE];
                [label setNumberOfLines:0];
                //[label setBackgroundColor:[UIColor redColor]];
                [label setTag:LABEL_TAG];
                NSString *firstChar = [[paragraphs objectAtIndex:indexPath.row] substringToIndex:1];
                NSLog(@"firstChar %@", firstChar);
                NSLog(@"before comparison");
                if ([firstChar isEqualToString:@"^"]) {
                    // not called
                    NSLog(@"BUTTON");
                    //[label setFont:[UIFont boldSystemFontOfSize:FONT_SIZE]];
                    linkButton = [UIButton buttonWithType:UIButtonTypeCustom];
                    linkButton.frame = CGRectMake(CELL_CONTENT_MARGIN, CELL_CONTENT_MARGIN, 280, 30);
                    [cell.contentView addSubview:linkButton];
                    NSString *myString = [paragraphs objectAtIndex:indexPath.row];
                    NSArray *myArray = [myString componentsSeparatedByCharactersInSet:[NSCharacterSet characterSetWithCharactersInString:@"*"]];
                    NSString *noHash = [myArray objectAtIndex:1];
                    [linkButton setBackgroundImage:[UIImage imageNamed:@"linkButton.png"] forState:UIControlStateNormal];
                    linkButton.adjustsImageWhenHighlighted = YES;
                    [linkButton setTitle:noHash forState:UIControlStateNormal];
                    linkButton.titleLabel.font = [UIFont systemFontOfSize:FONT_SIZE];
                    [linkButton setTitleColor:[UIColor whiteColor] forState:UIControlStateNormal];
                    [linkButton setTag:indexPath.row];
                    [linkButton addTarget:self action:@selector(openSafari:) forControlEvents:UIControlEventTouchUpInside];
                    //size = [noAsterisks sizeWithFont:[UIFont boldSystemFontOfSize:FONT_SIZE] constrainedToSize:constraint lineBreakMode:UILineBreakModeWordWrap];
                    [label setBackgroundColor:[UIColor clearColor]];
                    [label setText:@""];
                }
                cell.selectionStyle = UITableViewCellSelectionStyleNone;
            } else {
                label = (UILabel *)[cell viewWithTag:LABEL_TAG];
                NSString *firstChar = [[paragraphs objectAtIndex:indexPath.row] substringToIndex:1];
                NSLog(@"firstChar %@", firstChar);
                NSLog(@"before comparison");
                if ([firstChar isEqualToString:@"^"]) {
                    NSLog(@"cell not nil, reusing linkButton");
                    linkButton = (UIButton *)[cell viewWithTag:indexPath.row];
                }
            }
            if (!label)
                label = (UILabel *)[cell viewWithTag:LABEL_TAG];
            NSString *textString = [paragraphs objectAtIndex:indexPath.row];
            NSString *noAsterisks = [textString stringByReplacingOccurrencesOfString:@"*" withString:@""];
            CGSize constraint = CGSizeMake(CELL_CONTENT_WIDTH - (CELL_CONTENT_MARGIN * 2), 20000.0f);
            CGSize size;
            NSString *firstChar = [[paragraphs objectAtIndex:indexPath.row] substringToIndex:1];
            //NSLog(@"firstChar %@", firstChar);
            if ([firstChar isEqualToString:@"^"]) {
                NSLog(@"BUTTON2");
                if (!linkButton)
                    linkButton = (UIButton *)[cell viewWithTag:indexPath.row];
                linkButton = [UIButton buttonWithType:UIButtonTypeCustom];
                linkButton.frame = CGRectMake(CELL_CONTENT_MARGIN, CELL_CONTENT_MARGIN, 280, 30);
                [cell.contentView addSubview:linkButton];
                NSString *myString = [paragraphs objectAtIndex:indexPath.row];
                NSArray *myArray = [myString componentsSeparatedByCharactersInSet:[NSCharacterSet characterSetWithCharactersInString:@"*"]];
                NSString *noHash = [myArray objectAtIndex:1];
                [linkButton setBackgroundImage:[UIImage imageNamed:@"linkButton.png"] forState:UIControlStateNormal];
                linkButton.adjustsImageWhenHighlighted = YES;
                [linkButton setTitle:noHash forState:UIControlStateNormal];
                linkButton.titleLabel.font = [UIFont systemFontOfSize:FONT_SIZE];
                [linkButton setTitleColor:[UIColor whiteColor] forState:UIControlStateNormal];
                [linkButton setTag:indexPath.row];
                [linkButton addTarget:self action:@selector(openSafari:) forControlEvents:UIControlEventTouchUpInside];
                size = [noAsterisks sizeWithFont:[UIFont boldSystemFontOfSize:FONT_SIZE] constrainedToSize:constraint lineBreakMode:UILineBreakModeWordWrap];
                [label setBackgroundColor:[UIColor clearColor]];
                [label setText:@""];
            } else if ([firstChar isEqualToString:@"*"]) {
                size = [noAsterisks sizeWithFont:[UIFont boldSystemFontOfSize:FONT_SIZE] constrainedToSize:constraint lineBreakMode:UILineBreakModeWordWrap];
                [label setFont:[UIFont boldSystemFontOfSize:FONT_SIZE]];
                [label setText:noAsterisks];
                NSLog(@"bold");
            } else {
                size = [noAsterisks sizeWithFont:[UIFont systemFontOfSize:FONT_SIZE] constrainedToSize:constraint lineBreakMode:UILineBreakModeWordWrap];
                [label setFont:[UIFont systemFontOfSize:FONT_SIZE]];
                [label setText:noAsterisks];
            }
            [label setFrame:CGRectMake(CELL_CONTENT_MARGIN, CELL_CONTENT_MARGIN, CELL_CONTENT_WIDTH - (CELL_CONTENT_MARGIN * 2), MAX(size.height, 20.0f))];
            cell.selectionStyle = UITableViewCellSelectionStyleNone;
            cell.accessoryType = UITableViewCellAccessoryNone;
            return cell;
        }

  • Cast then check or check then cast?

    - by jamesrom
    Which method is regarded as best practice? Cast first?

        public string Describe(ICola cola)
        {
            var coke = cola as CocaCola;
            if (coke != null)
            {
                string result;
                // some unique coca-cola only code here.
                return result;
            }
            var pepsi = cola as Pepsi;
            if (pepsi != null)
            {
                string result;
                // some unique pepsi only code here.
                return result;
            }
        }

    Or should I check first, cast later?

        public string Describe(ICola cola)
        {
            if (cola is CocaCola)
            {
                coke = (CocaCola) cola;
                string result;
                // some unique coca-cola only code here.
                return result;
            }
            if (cola is Pepsi)
            {
                pepsi = (Pepsi) cola;
                string result;
                // some unique pepsi only code here.
                return result;
            }
        }

    Can you see any other way to do this?

  • Dynamic Google Maps API InfoWindow HTML Content

    - by Peter Hanneman
    I am working in Flash Builder 4 with Google Maps' ActionScript API. I have created a map, loaded some custom markers onto it, and added some MouseEvent listeners to each marker. The trouble comes when I load an InfoWindow panel. I want to dynamically set the htmlContent based on information stored in a database. The trouble is that this information can change every couple of seconds, and each marker has a unique data set, so I cannot set it statically at the time I actually create the markers. I have a method that, every minute or so, loads all of the records from my database into an Object variable. Everything I need to display in the htmlContent is contained in this object under a unique identifier. The basic crux of the problem is that there is no way for me to uniquely identify an InfoWindow, so I cannot determine what information to pull into the panel.

        marker.addEventListener(MapMouseEvent.ROLL_OVER,
            function(e:MapMouseEvent):void { showInfoWindow(e.latLng) },
            false, 0, false);

    That is my mouse event listener. The function I call, "showInfoWindow", looks like this:

        private function showInfoWindow(latlng:LatLng):void {
            var options:InfoWindowOptions = new InfoWindowOptions({
                title: appData[*I NEED A UNIQUE ID HERE!!!*].type + " Summary",
                contentHTML: appData[*I NEED A UNIQUE ID HERE!!!*].info});
            this.map.openInfoWindow(latlng, options);
        }

    I thought I was onto something by being able to pass a variable in my event listener declaration, but it hates having a dynamic variable passed through; it only returns the last value used. Example:

        marker.addEventListener(MapMouseEvent.ROLL_OVER,
            function(e:MapMouseEvent):void { showInfoWindow(e.latLng, record.unit_id) },
            false, 0, false);

    That solution is painfully close to working. I iterate through a loop to create my markers; when I try the above solution and roll over a marker, I get information, but every marker's information reflects whatever the last marker created had. I apologize for the long explanation, but I just wanted to make my question as clear as possible. Does anyone have any ideas about how to patch up my almost-there solution, or any from-the-ground-up solutions? Thanks in advance, Peter Hanneman

  • mysql composite key or auto-increment key (guid)

    - by darko
    Hi, I have a rather small db. I am using auto-increment primary keys, and in my case there is no problem with that; a GUID is not necessary. I have a table containing these fields:

        from_destination
        to_destination
        shipper
        quantity

    where fields 1-3 need to be unique. I also have a second table that, for fields 1-3, stores bought quantities per day (one-to-many).

        from_destination
        to_destination
        shipper
        date
        reserved_quantity

    Case 1: Is it better to make fields 1-3 a composite primary key in the first table, with the same fields in the second table as a foreign key?

        First table:
            from_destination | to_destination | shipper   - composite primary key
            quantity

        Second table:
            second_id - auto-increment primary key
            from_destination | to_destination | shipper   - foreign key
            date
            reserved_quantity

    Case 2: Or just add an auto-increment field to the first table and make fields 1-3 unique? In the second table there would then be one integer foreign key pointing to the first table, and one auto-increment key for the table itself.

        First table:
            first_id - auto-increment primary key
            from_destination | to_destination | shipper   - unique
            quantity

        Second table:
            second_id - auto-increment primary key
            first_id - foreign key
            date
            reserved_quantity

    If so, why do we need composite keys at all, when we can have one auto-increment (or GUID) field and make all the fields that are candidates for the composite key unique? Regards
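
    For comparison, a DDL sketch of the two cases as I read them (column types are invented):

        -- Case 1: composite primary key; the child table must carry
        -- and reference all three columns.
        CREATE TABLE route (
            from_destination VARCHAR(64),
            to_destination   VARCHAR(64),
            shipper          VARCHAR(64),
            quantity         INT,
            PRIMARY KEY (from_destination, to_destination, shipper)
        );

        -- Case 2: surrogate key plus a UNIQUE constraint on the same columns;
        -- the child table then references a single INT instead of three strings.
        CREATE TABLE route2 (
            route_id         INT AUTO_INCREMENT PRIMARY KEY,
            from_destination VARCHAR(64),
            to_destination   VARCHAR(64),
            shipper          VARCHAR(64),
            quantity         INT,
            UNIQUE KEY route_uq (from_destination, to_destination, shipper)
        );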

  • What techniques can be used to detect so called "black holes" (a spider trap) when creating a web crawler?

    - by Tom
    When creating a web crawler, you have to design some kind of system that gathers links and adds them to a queue. Some, if not most, of these links will be dynamic: they appear to be different but do not add any value, as they are specifically created to fool crawlers. An example: we tell our crawler to crawl the domain evil.com by entering an initial lookup URL. Let's assume we let it crawl the front page initially, evil.com/index. The returned HTML will contain several "unique" links:

        evil.com/somePageOne
        evil.com/somePageTwo
        evil.com/somePageThree

    The crawler will add these to the buffer of uncrawled URLs. When somePageOne is being crawled, the crawler receives more URLs:

        evil.com/someSubPageOne
        evil.com/someSubPageTwo

    These appear to be unique, and so they are. They are unique in the sense that the returned content is different from previous pages and the URLs are new to the crawler; however, this is only because the developer has made a "loop trap" or "black hole". The crawler will add the new sub page, and that sub page will have another sub page, which will also be added. This process can go on infinitely. The content of each page is unique but totally useless (randomly generated text, or text pulled from a random source). Our crawler will keep finding new pages that we are actually not interested in. These loop traps are very difficult to find, and if your crawler does not have anything in place to prevent them, it will get stuck on a certain domain indefinitely. My question is: what techniques can be used to detect so-called black holes? One of the most common answers I have heard is the introduction of a limit on the number of pages to be crawled. However, I cannot see how this can be a reliable technique when you do not know what kind of site is to be crawled. A legitimate site, like Wikipedia, can have hundreds of thousands of pages. Such a limit could return a false positive for those kinds of sites. Any feedback is appreciated. Thanks.

  • Modeling objects with multiple table relationships in Zend Framework

    - by andybaird
    I'm toying with Zend Framework and trying to use the "QuickStart" guide against a website I'm making, just to see how the process would work. Forgive me if the answer is obvious; hopefully someone experienced can shed some light on this. I have three database tables:

        CREATE TABLE `users` (
            `id` int(11) NOT NULL auto_increment,
            `email` varchar(255) NOT NULL,
            `username` varchar(255) NOT NULL default '',
            `first` varchar(128) NOT NULL default '',
            `last` varchar(128) NOT NULL default '',
            `gender` enum('M','F') default NULL,
            `birthyear` year(4) default NULL,
            `postal` varchar(16) default NULL,
            `auth_method` enum('Default','OpenID','Facebook','Disabled') NOT NULL default 'Default',
            PRIMARY KEY (`id`),
            UNIQUE KEY `email` (`email`),
            UNIQUE KEY `username` (`username`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

        CREATE TABLE `user_password` (
            `user_id` int(11) NOT NULL,
            `password` varchar(16) NOT NULL default '',
            PRIMARY KEY (`user_id`),
            UNIQUE KEY `user_id` (`user_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

        CREATE TABLE `user_metadata` (
            `user_id` int(11) NOT NULL default '0',
            `signup_date` datetime default NULL,
            `signup_ip` varchar(15) default NULL,
            `last_login_date` datetime default NULL,
            `last_login_ip` varchar(15) default NULL,
            PRIMARY KEY (`user_id`),
            UNIQUE KEY `user_id` (`user_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    I want to create a User model that uses all three tables in certain situations. E.g., the metadata table is accessed if/when the metadata is needed, and the user_password table is accessed only if the 'Default' auth_method is set. I'll likely be adding a profile table later on that I would also like to access from the User model. What is the best way to do this with ZF, and why?
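
    To show how the three tables hang together, a read-side sketch (my assumption about usage, not from the question):

        -- One query assembles the model's data, pulling the password row
        -- only when the account uses the 'Default' auth method:
        SELECT u.*, m.signup_date, m.last_login_date, p.password
        FROM users u
        LEFT JOIN user_metadata m ON m.user_id = u.id
        LEFT JOIN user_password p ON p.user_id = u.id
            AND u.auth_method = 'Default'
        WHERE u.username = 'example';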

  • Password hashing, salt and storage of hashed values

    - by Jonathan Leffler
    Suppose you were at liberty to decide how hashed passwords were to be stored in a DBMS. Are there obvious weaknesses in a scheme like this one? To create the hash value stored in the DBMS:

        1. Take a value that is unique to the DBMS server instance as part of the salt,
        2. and the username as a second part of the salt,
        3. and create the concatenation of the salt with the actual password,
        4. and hash the whole string using the SHA-256 algorithm,
        5. and store the result in the DBMS.

    This would mean that anyone wanting to come up with a collision would have to do the work separately for each user name and each DBMS server instance. I'd plan to keep the actual hash mechanism somewhat flexible to allow for the use of the new NIST standard hash algorithm (SHA-3) that is still being worked on. The 'value that is unique to the DBMS server instance' need not be secret, though it wouldn't be divulged casually. The intention is to ensure that if someone uses the same password in different DBMS server instances, the recorded hashes would be different. Likewise, the user name would not be secret - just the password proper. Would there be any advantage to having the password first and the user name and 'unique value' second, or any other permutation of the three sources of data? Or what about interleaving the strings? Do I need to add (and record) a random salt value (per password) as well as the information above? (Advantage: the user can re-use a password and still, probably, get a different hash recorded in the database. Disadvantage: the salt has to be recorded. I suspect the advantage considerably outweighs the disadvantage.) There are quite a lot of related SO questions - this list is unlikely to be comprehensive:

        - Encrypting/Hashing plain text passwords in database
        - Secure hash and salt for PHP passwords
        - The necessity of hiding the salt for a hash
        - Client-side MD5 hash with time salt
        - Simple password encryption
        - Salt generation and Open Source software

    I think that the answers to these questions support my algorithm (though if you simply use a random salt, then the 'unique value per server' and username components are less important).
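
    Since the question is about hashing inside a DBMS, here is a sketch of the described concatenation in T-SQL (my assumption of a SQL Server 2012+ instance for the SHA2_256 option; the scheme itself is database-agnostic, and all names and values are invented):

        -- @server_salt stands in for the per-instance value.
        DECLARE @server_salt nvarchar(64) = N'instance-unique-value';
        DECLARE @username    nvarchar(64) = N'some_user';
        DECLARE @password    nvarchar(64) = N'correct horse battery staple';

        SELECT HASHBYTES('SHA2_256', @server_salt + @username + @password)
            AS password_hash;  -- 32-byte varbinary to store in the table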

  • Enum exceeding the 65535-byte limit of the static initializer... what's best to do?

    - by Daniel Bleisteiner
    I've started a rather large Enum of so-called Descriptors that I wanted to use as a reference list in my model. But now I've come across a compiler/VM limit for the first time, and so I'm looking for the best solution to handle this. Here is my error:

        The code for the static initializer is exceeding the 65535 bytes limit

    It is clear where this comes from - my Enum simply has far too many elements. But I need those elements - there is no way to reduce the set. Initially I planned to use a single Enum because I want to make sure that all elements within the Enum are unique. It is used in a Hibernate persistence context where the reference to the Enum is stored as a String value in the database. So this must be unique! The content of my Enum can be divided into several groups of elements belonging together. But splitting the Enum would remove the uniqueness safety I get at compile time. Or can this be achieved with multiple Enums in some way? My only current idea is to define an interface called Descriptor and code several Enums implementing it. This way I hope to be able to use the Hibernate Enum mapping as if it were a single Enum. But I'm not even sure that will work. And I lose the uniqueness safety. Any ideas on how to handle this case?

  • Best way to associate data files with particular tests in RSpec / Ruby

    - by Bill T
    For my RSpec tests I would like to automatically associate data files with each test. To clarify, if my tests each require an XML file as input data, and then some XPath statements to validate the responses they get back, I would like to externalize the XML and XPath as files and have the testing framework easily associate them with the particular test being run, by using the unique ID of the test as the file(s) name. I tried to get this behavior, but the solution isn't very clean. I wrote a helper method that takes the value of "description" and combines it with __FILE__ to create a unique identifier, which is set into a global variable that other utilities can access. The unique identifier is used to associate the data files I need. I have to call this helper method as the first line of every test, which is ugly. If I have an RSpec example that looks like this:

        describe "Basic functions of this server I'm testing" do
          it "should give me back a response" do
            # Sets a global var to: "my_tests_spec.rb_should_give_me_back_a_response"
            TestHelper::who_am_i __FILE__, description
            ...
          end
        end

    Is there some better/cleaner/slicker way I can get a unique ID for each test that I could use to associate data files? Perhaps something built into RSpec I'm unaware of? Thank you, -Bill
