Search Results

Search found 41147 results on 1646 pages for 'database security'.


  • User Management: Managing users in user-defined "groups", database schema and logistics

    - by Kevin Brown
    I'm a noob, development-wise and logistically-wise. I'm developing a site that lets people take a test. My client wants a user with the role/privilege "admin" (a step below a super-admin) to be able to create users and to see/edit only the users that they create. The users created in that "category" or group need some information that their superior provides.

    For example, I log in as a "manager": I can invite people to take the test and manage those people. Before adding those people, I will have filled out a short survey about myself. Right now, the invited users are asked some of the same questions as the manager. I'd like to cut down the redundancy by taking the information the manager already put into the database and applying it to the invited users.

    How do I set up my database to support this? I'm a little confused about how to do it! Let me know if I can add more details. (This is a MySQL and PHP app.)
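
    One hedged way to model this, as a sketch only (table and column names are hypothetical, MySQL dialect): give each user a reference to the manager who created them, and store survey answers one row per user, so invited users can inherit the manager's answers through a join.

        -- Users reference the manager who created them, which both scopes
        -- "only see/edit the users I created" and links to the manager's survey.
        CREATE TABLE users (
            id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            manager_id INT UNSIGNED NULL,   -- NULL for managers/admins themselves
            role       ENUM('super-admin','admin','member') NOT NULL,
            name       VARCHAR(100) NOT NULL,
            FOREIGN KEY (manager_id) REFERENCES users(id)
        );

        CREATE TABLE survey_answers (
            user_id  INT UNSIGNED NOT NULL,
            question VARCHAR(100) NOT NULL,
            answer   TEXT,
            PRIMARY KEY (user_id, question),
            FOREIGN KEY (user_id) REFERENCES users(id)
        );

        -- Invited users inherit their manager's answers instead of re-entering them:
        SELECT u.id, u.name, a.question, a.answer
        FROM users u
        JOIN survey_answers a ON a.user_id = u.manager_id
        WHERE u.manager_id = 42;   -- 42 is a hypothetical manager id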

    Read the article

  • Logging exceptions to database in NServiceBus

    - by IGoor
    If an exception occurs in my message handler I want to write the exception details to my database. How do I do this? Obviously, I can't just catch the exception, write to the database, and rethrow it, since NServiceBus rolls back all changes (IsTransactional is set to true).

    I tried adding the logging in a separate handler, which I called using SendLocal when an exception occurred, but this does not work:

        public void Handle(MessageItem message)
        {
            try
            {
                DoWork();
            }
            catch (Exception exc)
            {
                // this send is part of the same transaction, so it rolls back too
                Bus.SendLocal(new ExceptionMessage(exc.Message));
                throw;
            }
        }

    I also tried using log4net with a custom appender, but this also rolled back:

        Configure.With()
                 .Log4Net<DatabaseAppender>(a => a.Log = "Log")

    The appender:

        public class DatabaseAppender : log4net.Appender.AppenderSkeleton
        {
            public string Log { get; set; }

            protected override void Append(log4net.Core.LoggingEvent loggingEvent)
            {
                if (loggingEvent.ExceptionObject != null)
                    WriteToDatabase(loggingEvent.ExceptionObject);
            }
        }

    Is there any way to log unhandled exceptions in the message handler when IsTransactional is true? Thanks in advance.

    Read the article

  • Commercial web application--scalable database design

    - by Rob Campbell
    I'm designing a set of web apps to track scientific laboratory data. Each laboratory has several members, each of whom will access both their own data and that of their laboratory as a whole. Many typical queries will thus be expected to return records belonging to multiple members (e.g. my mouse, Joe's mouse and Sally's mouse).

    I think I have the database fairly well normalized. I'm now wondering how to ensure that users can efficiently access both their own data and their lab's data set when it is mixed among (hopefully) a whole ton of records from other labs. What I've come up with so far is that most tables will end with two fields: user_id and labgroup_id. The WHERE clause of any SELECT statement will include the appropriate reference to one of the id fields (WHERE labgroup_id = n or WHERE user_id = n).

    My questions are:

        1. Is this an approach that will scale to 10^6 or more records?
        2. If so, what's the best way to use these fields in a query so that it most efficiently searches the relevant subset of the database? For example, should the first step in querying be to create a temporary table containing just the lab group's data? Or will indexing on some combination of the id, user_id, and labgroup_id fields be sufficient at that scale?

    I thank any responders very much in advance.
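
    A hedged sketch of the indexing side (the table name is hypothetical): at around 10^6 rows, composite indexes on the filter column plus a sort column normally make a temporary table unnecessary, because both access paths become index range scans.

        CREATE TABLE measurements (
            id          INT PRIMARY KEY,
            user_id     INT NOT NULL,
            labgroup_id INT NOT NULL,
            recorded_at DATETIME NOT NULL
            -- ...domain columns...
        );

        CREATE INDEX idx_measurements_lab  ON measurements (labgroup_id, recorded_at);
        CREATE INDEX idx_measurements_user ON measurements (user_id, recorded_at);

        -- Both common access paths now read only the relevant subset:
        SELECT * FROM measurements WHERE labgroup_id = 7  ORDER BY recorded_at DESC;
        SELECT * FROM measurements WHERE user_id     = 42 ORDER BY recorded_at DESC;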

    Read the article

  • How do you make life easier for yourself when developing a really large database?

    - by Hannes de Jager
    I am busy developing two web-based systems with MySQL databases, and the number of tables/views/stored routines is really becoming a lot; it is more and more challenging to handle the complexity. In programming languages we have namespacing, e.g. Java packages and C++ namespaces, to partition the software and group related things together to make them more understandable. Databases, on the other hand, have more of a flat structure (MySQL at least): tables and stored procedures live on the same level. So one has to be more creative, establishing naming conventions, perhaps using more than one database, or using tools to visualize things.

    What methods do you use to ease the pain? To be effective while developing your databases? To not get lost in a sea of tables, fields and stored procs? Feel free to mention tools you use as well, but try to restrict it to open source, and preferably Linux, solutions if that's OK.

    By the way, how many tables would a database have to have to be considered large in terms of design?
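
    One hedged workaround sketch (database and table names are hypothetical): MySQL treats each database (schema) as a namespace, so related tables can be grouped per schema and cross-schema references simply qualify the name.

        CREATE DATABASE shop_billing;
        CREATE DATABASE shop_reporting;

        CREATE TABLE shop_billing.invoice (
            id    INT PRIMARY KEY,
            total DECIMAL(10,2) NOT NULL
        );

        -- Cross-"namespace" queries just qualify the schema name:
        SELECT i.id, i.total
        FROM shop_billing.invoice AS i;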

    Read the article

  • Calculate values from retrieved database rows in Visual C#

    - by Tanya Lertwichaiworawit
    I am new to Visual C# and want to know how to do calculations over data retrieved from a database. In the GUI, when "Calculate" is clicked, the program should display the number of students in textBox1 and the average GPA of all students in textBox2, based on my database table "Students". I was able to display the number of students, but I'm still confused about how to calculate the average GPA. Here's my code:

        private void button1_Click(object sender, EventArgs e)
        {
            string connection = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Database1.accdb";
            OleDbConnection connect = new OleDbConnection(connection);
            string sql = "SELECT * FROM Students";
            connect.Open();
            OleDbCommand command = new OleDbCommand(sql, connect);
            DataSet data = new DataSet();
            OleDbDataAdapter adapter = new OleDbDataAdapter(command);
            adapter.Fill(data, "Students");
            textBox1.Text = data.Tables["Students"].Rows.Count.ToString();
            double gpa;
            for (int i = 0; i < data.Tables["Students"].Rows.Count; i++)
            {
                // overwrites gpa on every pass; nothing is summed or displayed yet
                gpa = Convert.ToDouble(data.Tables["Students"].Rows[i][2]);
            }
            connect.Close();
        }
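
    A hedged, set-based alternative (assuming the GPA column in the Students table is named GPA): the database can compute both numbers in a single query, with no loop at all.

        SELECT COUNT(*) AS StudentCount,
               AVG(GPA)  AS AverageGPA
        FROM Students;

    Executed through a second OleDbCommand, the two values could then be read once and assigned to the two text boxes.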

    Read the article

  • Visual Studio + Database Edition + CDC = Deploy Fail

    - by Ben
    Hi All, I've got a database using change data capture (CDC) that is created from a Visual Studio database project (GDR2). My problem is that I have a stored procedure that analyzes the CDC information and then returns data. How is that a problem, you ask? Well, the order of operations is as follows:

        1. Pre-deployment script
        2. Tables
        3. Indexes, keys, etc.
        4. Procedures
        5. Post-deployment script

    Inside the post-deployment script is where I enable CDC. Herein lies the problem: the procedure that is acting on the CDC tables is bombing because they don't exist yet! I've tried to put the call to sys.sp_cdc_enable_table in the script that creates the table, but it doesn't like that:

        Error 102 TSD03070: This statement is not recognized in this context.
        C:...\Schema Objects\Schemas\dbo\Tables\Foo.table.sql 20 1 Foo

    Is there a better/built-in way to enable CDC such that its references are available when the stored procedures are created? Is there a way to run a script after tables are created but before other objects are created? How about a way to create the procedure, dependencies be damned? Or maybe I'm just doing things that shouldn't be done?!?!

    Now, I have a workaround:

        1. Comment out the sproc body
        2. Deploy (CDC is created)
        3. Uncomment the sproc
        4. Deploy

    Everything is great until the next time I update a CDC-tracked table. Then I need to comment out the 'offending' procedure again. Thanks for reading my question and thanks for your help!
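
    A hedged sketch of an idempotent post-deployment guard (it assumes the dbo.Foo table and uses the documented sys.sp_cdc_enable_table parameters). It does not change the deployment ordering, but it makes repeated deploys safe once the procedure issue is worked around.

        -- Enable CDC only if it is not already enabled, so the script can run
        -- on every deployment without erroring.
        IF NOT EXISTS (SELECT 1 FROM sys.databases
                       WHERE database_id = DB_ID() AND is_cdc_enabled = 1)
            EXEC sys.sp_cdc_enable_db;

        IF NOT EXISTS (SELECT 1 FROM sys.tables
                       WHERE object_id = OBJECT_ID('dbo.Foo')
                         AND is_tracked_by_cdc = 1)
            EXEC sys.sp_cdc_enable_table
                 @source_schema = N'dbo',
                 @source_name   = N'Foo',
                 @role_name     = NULL;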

    Read the article

  • Shrinking the transaction log of a mirrored SQL Server 2005 database

    - by Peter Di Cecco
    I've been looking all over the internet and I can't find an acceptable solution to my problem; I'm wondering if there even is a solution without a compromise... I'm not a DBA, but I'm a one-man team working on a huge web site with no extra funding for extra bodies, so I'm doing the best I can.

    Our backup plan sucks, and I'm having a really hard time improving it. Currently, there are two servers running SQL Server 2005. I have a mirrored database (no witness) that seems to be working well. I do a full backup at noon and at midnight. These get backed up to tape by our service provider nightly, and I burn the backup files to DVD weekly to keep old records on hand. Eventually I'd like to switch to log shipping, since mirroring seems kinda pointless without a witness server.

    The issue is that the transaction log is growing non-stop. From the research I've done, it seems that I can't truncate the log file of a mirrored database. So how do I stop the file from growing!? Based on a web page I found, I tried this:

        USE dbname
        GO
        CHECKPOINT
        GO
        BACKUP LOG dbname TO DISK='NULL'
            WITH NOFORMAT, INIT, NAME = N'dbnameLog Backup',
                 SKIP, NOREWIND, NOUNLOAD
        GO
        DBCC SHRINKFILE('dbname_Log', 2048)
        GO

    But that didn't work. Everything else I've found says I need to disable the mirror before running the backup log command in order for it to work.

    My question (TL;DR): how can I shrink my transaction log file without disabling the mirror?
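
    A hedged diagnostic sketch: before trying to shrink, it can help to ask SQL Server why the log cannot be truncated. Mirroring requires the FULL recovery model, so regular transaction log backups (to real files, kept as part of the backup plan) are the usual way to allow truncation while the mirror stays up.

        -- log_reuse_wait_desc shows what is holding the log:
        -- e.g. LOG_BACKUP, DATABASE_MIRRORING, ACTIVE_TRANSACTION.
        SELECT name, log_reuse_wait_desc
        FROM sys.databases
        WHERE name = 'dbname';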

    Read the article

  • Architecture for database analytics

    - by David Cournapeau
    Hi, we have an architecture where we provide each customer Business Intelligence-like services for their website (internet merchant). Now I need to analyze those data internally (for algorithmic improvement, performance tracking, etc.), and they are potentially quite heavy: we have up to millions of rows per customer per day, and I may want to know how many queries we had in the last month, compared week by week, etc. That is on the order of billions of entries, if not more.

    The way it is currently done is quite standard: daily scripts scan the databases and generate big CSV files. I don't like this solution for several reasons:

        - as is typical with those kinds of scripts, they fall into the write-once-and-never-touched-again category
        - tracking things in "real time" is necessary (we have a separate toolset to query the last few hours at the moment)
        - it is slow and non-"agile"

    Although I have some experience in dealing with huge datasets for scientific usage, I am a complete beginner as far as traditional RDBMSs go. It seems that using a column-oriented database for analytics could be a solution (the analytics don't need most of the data we have in the app database), but I would like to know what other options are available for this kind of problem.
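
    One hedged, incremental alternative as a sketch (names are hypothetical, MySQL dialect): maintain a pre-aggregated rollup table instead of scanning the raw data daily, so "queries in the last month, week by week" becomes a cheap scan over thousands of rows instead of billions.

        CREATE TABLE query_counts_hourly (
            customer_id INT      NOT NULL,
            hour_start  DATETIME NOT NULL,
            query_count BIGINT   NOT NULL,
            PRIMARY KEY (customer_id, hour_start)
        );

        -- An hourly job folds the latest raw events into the rollup;
        -- re-running it for the same hour simply overwrites the bucket.
        INSERT INTO query_counts_hourly (customer_id, hour_start, query_count)
        SELECT customer_id,
               DATE_FORMAT(event_time, '%Y-%m-%d %H:00:00') AS hour_start,
               COUNT(*)
        FROM raw_events
        WHERE event_time >= NOW() - INTERVAL 1 HOUR
        GROUP BY customer_id, hour_start
        ON DUPLICATE KEY UPDATE query_count = VALUES(query_count);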

    Read the article

  • Schema for storing "binary" values, such as Male/Female, in a database

    - by latentflip
    Intro: I am trying to decide how best to set up my database schema for a (Rails) model. I have a model related to money which indicates whether the value is an income (positive cash value) or an expense (negative cash value). I would like separate column(s) to indicate whether it is an income or an expense, rather than relying on whether the value stored is positive or negative.

    Question: how would you store these values, and why?

        1. Have a single column, say income, and store 1 if it's an income, 0 if it's an expense, NULL if not known.
        2. Have two columns, income and expense, setting their values to 1 or 0 as appropriate.
        3. Something else?

    I figure the question is similar to storing a person's gender in a database (ignoring aliens/transgender/etc.), hence my title.

    My thoughts so far: lookup might be easier with a single column, but there is a risk of mistaking 0 (false, expense) for NULL (unknown). Having separate columns might be more difficult to maintain (what happens if we end up with a 1 in both columns?).

    Maybe it's not that big a deal which way I go, but it would be great to have any concerns/thoughts raised before I get too far down the line and have to change my code-base because I missed something that should have been obvious! Thanks, Philip
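
    A hedged sketch of the single-column variant (table and column names are hypothetical): a nullable boolean keeps the three states distinct, as long as queries treat FALSE and NULL explicitly.

        CREATE TABLE cash_entries (
            id        INT PRIMARY KEY,
            amount    DECIMAL(12,2) NOT NULL,
            is_income BOOLEAN NULL   -- TRUE = income, FALSE = expense, NULL = unknown
        );

        -- Lookups must distinguish FALSE from NULL explicitly:
        SELECT * FROM cash_entries WHERE is_income = FALSE;   -- expenses only
        SELECT * FROM cash_entries WHERE is_income IS NULL;   -- unclassified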

    Read the article

  • Designing model/database for distance between any two locations (that may change)

    - by Yo Ludke
    We are building a web app which has a number of events, each with a location (created as user-generated content, so the number of events will grow large). The distance between any two events should be available, for example to determine the top 5 closest events and such things. Users may change the locations of events. How should one design the database/model for this in a scalable way?

    I was thinking of doing it with a "distance table" (like so: http://www.deutschland-tourist.info/images/entfernungstabelle.gif). Then every time a location changes, one row and one column have to be recalculated (this can be done with a delayed job, because it is not important to have the changes instantly).

    Possible scaling problems: the database gets too large (n² items for n events) and there is too much calculation to be done. For example, we should see if this is okay for 10,000 users: if each has created just one event, that would already be 100 million integers...

    Do you think this would be a good way to do it efficiently? How could one realize such a distance table with a Rails model? Is it possible with a SQL database? Would you suggest other approaches?
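
    A hedged alternative sketch (hypothetical schema, MySQL dialect): store coordinates per event and compute distances on demand with the haversine formula, which avoids the O(n²) distance table entirely; a location change then updates one row instead of a row and a column.

        CREATE TABLE events (
            id   INT PRIMARY KEY,
            name VARCHAR(100) NOT NULL,
            lat  DOUBLE NOT NULL,
            lng  DOUBLE NOT NULL
        );

        -- Top 5 events closest to event 42 (great-circle distance in km):
        SELECT e.id, e.name,
               6371 * 2 * ASIN(SQRT(
                   POW(SIN(RADIANS(e.lat - o.lat) / 2), 2) +
                   COS(RADIANS(o.lat)) * COS(RADIANS(e.lat)) *
                   POW(SIN(RADIANS(e.lng - o.lng) / 2), 2)
               )) AS distance_km
        FROM events e
        JOIN events o ON o.id = 42
        WHERE e.id <> 42
        ORDER BY distance_km
        LIMIT 5;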

    Read the article

  • MySQL database indexed in Apache Solr: how to access it via URL?

    - by Wasim
    data-config.xml:

        <dataConfig>
          <dataSource encoding="UTF-8" type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
                      url="jdbc:mysql://localhost:3306/somevisits" user="root" password=""/>
          <document name="somevisits">
            <entity name="login" query="select * from login">
              <field column="sv_id" name="sv_id" />
              <field column="sv_username" name="sv_username" />
            </entity>
          </document>
        </dataConfig>

    schema.xml:

        <?xml version="1.0" encoding="UTF-8" ?>
        <schema name="example" version="1.5">
          <fields>
            <field name="sv_id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
            <field name="username" type="string" indexed="true" stored="true" required="true"/>
            <field name="_version_" type="long" indexed="true" stored="true" multiValued="false"/>
            <field name="text" type="string" indexed="true" stored="false" multiValued="true"/>
          </fields>
          <uniqueKey>sv_id</uniqueKey>
          <types>
            <fieldType name="string" class="solr.StrField" sortMissingLast="true" />
            <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
          </types>
        </schema>

    Solr successfully imported the MySQL database using a full import:

        http://[localSolr]:8983/solr/#/collection1/dataimport?command=full-import

    My question is: how do I access the imported data now?
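
    A hedged example of the query side (assuming the default select handler and the collection1 core shown above): once imported, the documents are read through Solr's HTTP API rather than from MySQL. Parameter values here are illustrative.

        http://[localSolr]:8983/solr/collection1/select?q=*:*&wt=json
        http://[localSolr]:8983/solr/collection1/select?q=sv_id:1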

    Read the article

  • Database contents setting themselves to 0

    - by Luis Armando
    I have a database that contains 4 tables; however, I'm using one of them, which is separate from the others. In this table I have 4 fields which are varchar, and the rest are ints (11 other fields). When the users fill up the DB, everything gets saved correctly; however, it has happened 3 times so far that the int values in the database reset to 0 without any apparent reason.

    At first I thought it was because those fields (where the numbers should go) were varchars, not ints. However, since I changed them, it happened again. I've already double-checked my code and I have nothing that even updates or inserts a 0 value. Also, I'm using CodeIgniter and Active Record, which protect against SQL injection, AND have XSS filtering enabled. Could anyone point out something I might be missing or a reason for this to be happening?

    Also, I'm pretty sure about the answer to this, but: is there ANY way to recover some data? Other than having to ask everyone to fill in everything again... =/

    EDIT: The storage engine is MyISAM and the collation is latin1_swedish_ci; pack keys are default. For all intents and purposes it's a normal DB.

    Read the article

  • Creating an Android app database with a large amount of data

    - by Thomas
    Hi all, the database of my application needs to be filled with a lot of data, so during onCreate() it's not only a few CREATE TABLE SQL instructions, there are a lot of INSERTs. The solution I chose is to store all these instructions in a SQL file located in res/raw, which is loaded with Resources.openRawResource(id). It works well, but I face an encoding issue: I have some accented characters in the SQL file which appear garbled in my application. This is my code:

        public String getFileContent(Resources resources, int rawId) throws IOException {
            InputStream is = resources.openRawResource(rawId);
            int size = is.available();
            // Read the entire asset into a local byte buffer.
            byte[] buffer = new byte[size];
            is.read(buffer);
            is.close();
            // Convert the buffer into a string (note: uses the platform default charset).
            return new String(buffer);
        }

        public void onCreate(SQLiteDatabase db) {
            try {
                // get file content
                String sqlCode = getFileContent(mCtx.getResources(), R.raw.db_create);
                // execute code
                for (String sqlStatements : sqlCode.split(";")) {
                    db.execSQL(sqlStatements);
                }
                Log.v("Creating database done.");
            } catch (IOException e) {
                // Should never happen!
                Log.e("Error reading sql file " + e.getMessage(), e);
                throw new RuntimeException(e);
            } catch (SQLException e) {
                Log.e("Error executing sql code " + e.getMessage(), e);
                throw new RuntimeException(e);
            }
        }

    The workaround I found is to load the SQL instructions from a huge static final String instead of a file, and then all accented characters appear correctly. But isn't there a more elegant way to load SQL instructions than a big static final String attribute containing all of them? Thanks in advance, Thomas

    Read the article

  • Page URL and database organization.

    - by shurik2533
    I want a page's name to be its URL. For example, if a page has the heading "Some Page", then its address should be http://somesite/some_page/. The "some_page" name is generated by the system automatically and is the unique identifier of the page. The problem is that a user may later enter a name which already exists, which would cause a collision. I need a solution that stays workable for large volumes of data.

    I have solved the problem as follows: the page identifier in the database is the page name plus a suffix, which is 0 by default. On page addition there is an existence check. If no such page exists, the suffix is 0 and the name is "some_page"; if the page already exists, I look up the maximum suffix, set suffix = suffix + 1, and the page name becomes "some_page_1". For this I created a compound key in the database from the "suffix" and "pageName" fields:

        Table pages
        suffix | pageName   | pageTitle
        0      | some_page  | Some Page
        1      | some_page  | Some Page
        0      | other_page | Other Page

    Pages are added through a stored procedure:

        CREATE PROCEDURE addPage (pageNameVal VARCHAR(100), pageTitleVal VARCHAR(100))
        BEGIN
            DECLARE v INT DEFAULT 0;
            SELECT MAX(suffix) FROM pages WHERE pageName = pageNameVal INTO v;
            IF v >= 0 THEN
                SET v = v + 1;
            ELSE
                SET v = 0;
            END IF;
            -- note: the column list now matches the three inserted values
            INSERT INTO pages (pageName, suffix, pageTitle)
            VALUES (pageNameVal, v, pageTitleVal);
        END;

    Are there better solutions?
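
    A hedged alternative sketch: let the compound key enforce uniqueness and compute the next suffix in the INSERT itself (MySQL allows INSERT ... SELECT on the same table). Under heavy concurrency this still needs a retry or a lock, but it collapses the check-then-insert into one statement.

        CREATE TABLE pages (
            pageName  VARCHAR(100) NOT NULL,
            suffix    INT          NOT NULL DEFAULT 0,
            pageTitle VARCHAR(100) NOT NULL,
            PRIMARY KEY (pageName, suffix)
        );

        -- Next free suffix for 'some_page' computed from existing rows:
        INSERT INTO pages (pageName, suffix, pageTitle)
        SELECT 'some_page',
               COALESCE(MAX(suffix) + 1, 0),
               'Some Page'
        FROM pages
        WHERE pageName = 'some_page';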

    Read the article

  • MySQL - Simple database design

    - by sequelDesigner
    Hello guys, I would like to develop a system where the user gets data dynamically (what I mean by dynamic is without reloading pages, using AJAX... but that does not matter much). My situation is like this: I have a table, I call it "player"; in this player table I store player information like player name, level, experience, etc. Each player can have different clothes: tops (shirts), bottoms, shoes and hairstyle, and each player can have more than one top, bottom, pair of shoes, etc.

    What I am hesitant or not very sure about is: how do you normally store this data? My current design is like this:

        Player table
        id | name    | (other player info) | wearing                       | tops  | bottoms
        1  | player1 |                     | top=1;bottom=2;shoes=5;hair=8 | 1,2,3 | 7,2,3

        Tops table
        id | name    | etc...
        1  | t-shirt | ...

    I am not sure if this design is good. If you were the database designer, how would you design the database? Or how would you store the data? Please advise. Thanks.
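
    A hedged sketch of a normalized alternative (all names hypothetical): one row per owned item plus a worn flag replaces both the packed "wearing" string and the comma-separated lists.

        CREATE TABLE player (
            id   INT PRIMARY KEY,
            name VARCHAR(50) NOT NULL
            -- level, experience, ...
        );

        CREATE TABLE item (
            id   INT PRIMARY KEY,
            kind ENUM('top','bottom','shoes','hair') NOT NULL,
            name VARCHAR(50) NOT NULL
        );

        CREATE TABLE player_item (
            player_id INT NOT NULL,
            item_id   INT NOT NULL,
            is_worn   BOOLEAN NOT NULL DEFAULT FALSE,
            PRIMARY KEY (player_id, item_id),
            FOREIGN KEY (player_id) REFERENCES player(id),
            FOREIGN KEY (item_id)   REFERENCES item(id)
        );

        -- What is player 1 currently wearing?
        SELECT i.kind, i.name
        FROM player_item pi
        JOIN item i ON i.id = pi.item_id
        WHERE pi.player_id = 1 AND pi.is_worn;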

    Read the article

  • Insert multiple rows into a database from arrays

    - by Mark
    Hi, I need some help with inserting multiple rows from different arrays into my database. I am building the database for a seating plan: for each seating block there are 5 rows (A-E), each row having 15 seats. My DB columns are seat_id, seat_block, seat_row, seat_number; therefore I need to add 15 seat_numbers for each seat_row and 5 seat_rows for each seat_block. I mocked it up with some foreach loops, but need some help turning it into a (hopefully single) SQL statement.

        $blocks = array("A","B","C","D");
        $seat_rows = array("A","B","C","D","E");
        $seat_nums = array("1","2","3","4","5","6","7","8","9","10","11","12","13","14","15");

        foreach ($blocks as $block) {
            echo "<br><br>";
            echo "Block: " . $block . " - ";
            foreach ($seat_rows as $rows) {
                echo "Row: " . $rows . ", ";
                foreach ($seat_nums as $seats) {
                    echo "seat:" . $seats . " ";
                }
            }
        }

    Maybe there's a better way of doing it instead of using arrays? I just want to avoid writing an SQL statement that is over 100 lines long ;) (I'm using CodeIgniter too, if anyone knows of a CI-specific way of doing it, but I'm not too bothered about that.)
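
    A hedged sketch of the target statement (the table name seats is hypothetical): MySQL accepts many tuples in a single INSERT, so the three loops only need to build one VALUES list rather than hundreds of separate statements.

        INSERT INTO seats (seat_block, seat_row, seat_number) VALUES
            ('A', 'A', 1),
            ('A', 'A', 2),
            -- ...one tuple per block/row/seat (4 blocks x 5 rows x 15 seats = 300)...
            ('D', 'E', 15);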

    Read the article

  • The best way to structure this database?

    - by James P
    At the moment I'm doing this:

        gems(id, name, colour, level, effects, source)

    id is the primary key and is not auto-increment. A typical row of data looks like this:

        id      => 40153
        name    => Veiled Ametrine
        colour  => Orange
        level   => 80
        effects => +12 sp, +10 hit
        source  => Ametrine

    (Some of you gamers might see what I'm doing here :).) But I realise this could be organised a lot better. I have studied database relationships and secondary keys in my A-Level computing class, but never got as far as setting one up properly. I just need help with how this database should be organised: which tables should hold what data, with what secondary and foreign keys? I was thinking maybe 3 tables: gem, effects, source, which then have relationships to each other?

    Can anyone shed some light on this? Is a complex design like I'm proposing really the way to go, or should I just carry on with what I'm doing? Cheers.
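
    A hedged sketch of one possible normalization (names hypothetical): effects become rows instead of a packed string, so "all gems that add hit" becomes an indexed query.

        CREATE TABLE gem (
            id     INT PRIMARY KEY,         -- e.g. 40153
            name   VARCHAR(100) NOT NULL,   -- 'Veiled Ametrine'
            colour VARCHAR(20)  NOT NULL,
            level  INT          NOT NULL,
            source VARCHAR(100) NOT NULL    -- or a source_id into a source table
        );

        CREATE TABLE gem_effect (
            gem_id INT         NOT NULL,
            stat   VARCHAR(20) NOT NULL,    -- 'sp', 'hit', ...
            amount INT         NOT NULL,    -- 12, 10, ...
            PRIMARY KEY (gem_id, stat),
            FOREIGN KEY (gem_id) REFERENCES gem(id)
        );

        -- 'Veiled Ametrine' then has two effect rows, and lookups are simple:
        SELECT g.name, e.amount
        FROM gem g
        JOIN gem_effect e ON e.gem_id = g.id
        WHERE e.stat = 'hit';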

    Read the article

  • How to handle product ratings in a database

    - by Mel
    Hello, I would like to know the best approach to storing product ratings in a database. I have in mind the following two (simplified, and assuming a MySQL DB) scenarios:

    Scenario 1: create two columns in the product table to store the number of votes and the sum of all votes, and use those columns to compute an average on the product display page:

        products(productID, productName, voteCount, voteSum)

    Pros: I only need to access one table, and thus execute one query, to display both product data and ratings. Cons: write operations will be executed in a table whose original purpose is only to furnish product data.

    Scenario 2: create an additional table to store ratings:

        products(productID, productName)
        ratings(productID, voteCount, voteSum)

    Pros: isolates ratings in a separate table, leaving the products table to furnish data on available products. Cons: I will have to execute two separate queries on product page requests (one for data and another for ratings).

    In terms of performance, which approach is best: allow users to execute an occasional write query on a table that will handle hundreds of read requests, or execute two queries on every product page but isolate the write query to a separate table? I'm a novice at database development, and often find myself struggling with simple questions such as these. Many thanks,
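
    A hedged sketch of scenario 1's write and read paths: the vote write is a single cheap UPDATE, which is one reason the read volume usually dominates this decision.

        -- Record one vote (here a hypothetical 4-star rating):
        UPDATE products
        SET voteCount = voteCount + 1,
            voteSum   = voteSum + 4
        WHERE productID = 123;

        -- Display-time average, guarding against division by zero:
        SELECT productName,
               CASE WHEN voteCount = 0 THEN NULL
                    ELSE voteSum / voteCount
               END AS avgRating
        FROM products
        WHERE productID = 123;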

    Read the article

  • Is it possible for double-escaping to cause harm to the DB?

    - by waiwai933
    If I accidentally double-escape a string, can the DB be harmed? For the purposes of this question, let's say I'm not using parametrized queries. For example, let's say I get the following input:

        bob's bike

    And I escape that:

        bob\'s bike

    But my code is horrible, and escapes it again:

        bob\\\'s bike

    Now, if I insert that into a DB, the value in the DB will be:

        bob\'s bike

    Which, while not what I want, won't harm the DB. Is it possible for any double-escaped input to do something malicious to the DB, assuming that I take all other necessary security precautions?

    Read the article

  • Unaccounted-for database size

    - by Nazadus
    I currently have a database that is 20GB in size. I've run a few scripts which show each table's size (and other incredibly useful information such as index stats), and the biggest table has 1.1 million records, which take up 150MB of data. We have fewer than 50 tables, most of which take up less than 1MB of data. After looking at the size of each table I don't understand why the database shouldn't be 1GB in size after a shrink. The amount of available free space that SQL Server (2005) reports is 0%. The log mode is set to simple. At this point my main concern is that I feel like I have 19GB of unaccounted-for used space. Is there something else I should look at?

    Normally I wouldn't care and would make this a passive research project, except this particular situation calls for us to do a backup and restore on a weekly basis to put a copy on a satellite (which has no internet, so it must be done manually). I'd much rather copy 1GB (or even 5GB!) than 20GB of data each week.

    sp_spaceused reports the following:

        Navigator-Production   19184.56 MB   3.02 MB

    And the second part of it:

        19640872 KB   19512112 KB   108184 KB   20576 KB

    I've found a few other scripts (such as the ones from two of the server database size questions here); they all report the same information found above or below. The script I am using is from SQLTeam. Here is the header info:

        * BigTables.sql
        * Bill Graziano (SQLTeam.com)
        * graz@<email removed>
        * v1.11

    The top few tables show this (table, rows, reserved space, data, index, unused, etc.):

        Activity          1143639   131 MB   89 MB   41768 KB   1648 KB   46%    1%
        EventAttendance    883261    90 MB   58 MB   32264 KB    328 KB   54%    0%
        Person             113437    31 MB   15 MB   15752 KB    912 KB  103%    3%
        HouseholdMember    113443    12 MB    6 MB    5224 KB    432 KB   82%    4%
        PostalAddress       48870     8 MB    6 MB    2200 KB    280 KB   36%    3%

    The rest of the tables are the same size or smaller. There are no more than 50 tables.

    Update 1: All tables use unique identifiers, usually an int incremented by 1 per row. I've also re-indexed everything and ran the DBCC shrink command, updating the usage before and after, over and over. An interesting thing I found is that when I restarted the server, confirmed no one was using it (no maintenance procs are running, and this is a very new application, under a week old) and went to run the shrink, every now and then it would say something about the data having changed. Googling yielded too few useful answers, with the obvious ones not applying (it was 1am and I had disconnected everyone, so it seems impossible that was really the case). The data was migrated via C# code which basically looked at another server and brought things over. The quantity of deletes, at this point, is probably under 50k rows; even if those rows were the biggest rows, that wouldn't be more than 100MB, I would imagine. When I go to shrink via the GUI it reports 0% available to shrink, indicating that it's already as small as it thinks it can get.

    Update 2: sp_spaceused 'Activity' yields this (which seems right on the money):

        Activity   1143639   134488 KB   91072 KB   41768 KB   1648 KB

    The fill factor was 90. All primary keys are ints. Here is the command I used to update usage:

        DBCC UPDATEUSAGE(0);

    Update 3: Per Edosoft's request:

        Image   111975   2407773   19262184

    It appears the Image table believes it holds the 19GB portion. I don't understand what this means, though. Is it really 19GB, or is it misrepresented?

    Update 4: Talking to a co-worker, I found out that it's because of the pages, as someone else here has also stated might be the case. The only index on the Image table is a clustered PK. Is this something I can fix, or do I just have to deal with it? The regular script shows the Image table to be 6MB in size.

    Update 5: I think I'm just going to have to deal with it after further research. The images have been resized to roughly 2-5KB each; on a normal file system they wouldn't consume much space, but in SQL Server they seem to consume considerably more. The real answer, in the long run, will likely be separating that table into another partition or something similar.
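
    A hedged diagnostic sketch for the Image table (SQL Server 2005 syntax; the table name follows the output above): page-level statistics show whether the space is low page density or LOB pages, which is where image columns actually live.

        -- Reserved vs. used space for just this table:
        EXEC sp_spaceused 'Image';

        -- Per-allocation-unit page density; LOB_DATA rows with low
        -- avg_page_space_used_in_percent would explain "unaccounted" size.
        SELECT index_id, alloc_unit_type_desc,
               avg_page_space_used_in_percent, page_count
        FROM sys.dm_db_index_physical_stats(
                 DB_ID(), OBJECT_ID('dbo.Image'), NULL, NULL, 'DETAILED');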

    Read the article

  • What is the sense of preventing users from using passwords longer than xx chars?

    - by reox
    It's more of a usability question, or maybe database, or even maybe security (consider injection attacks), but what is the sense of limiting the user's password to no longer than xx chars? It does not make any sense to me, because longer passwords are generally considered better and harder to crack, and some users use password safes, so password length should not matter. I understand that passwords with more than 20 chars are hard to remember, but if you use diceware or a password safe you don't have any problem with that.

    I really can't understand why there are sites that say "your password needs to be between 5 and 8 chars". Also, the password should be saved as a hash, so the length of the field in the database is fixed; so where is the problem? I think most of the sites where the password has to be a fixed length are not even using any hashing method...
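
    A hedged illustration of the hashing point (bcrypt is an assumed example; table and column names are hypothetical): because a hash has a fixed width, the database column is the same size whether the password was 8 or 200 characters, so storage imposes no length limit.

        CREATE TABLE account (
            id            INT PRIMARY KEY,
            email         VARCHAR(255) NOT NULL UNIQUE,
            password_hash CHAR(60) NOT NULL  -- bcrypt output is always 60 chars
        );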

    Read the article

  • Why does Spring Security Core RC4 require Grails 2.3?

    - by bksaville
    On the Spring Security Core plugin GitHub repo, I see that Graeme on May 21st upped the required version of Grails from 2.0 to 2.3 before the RC4 version was released a couple of months later, but I don't see any explanation of why. Was it mismatched dependencies, bug reports, etc.? I run a 2.2.4 app, and I would prefer not to upgrade at this point just to get the latest RC of Spring Security Core. I understand if the upgrade to 3.2.0.RELEASE of Spring Security caused mismatched dependencies with older versions of Grails, since I've run into the same issue before.

    This originally came up due to a pull request on the Spring Security OAuth2 provider plugin that I maintain. The pull request upped the required version to 2.3, and the requester pointed me to the RC4 release of core as the reason. Thanks for the good work, as always!

    Read the article

  • How to set up WebLogic 10.3.3 security for JAX-WS web services?

    - by Roman Kagan
    I have quite a simple task to accomplish: I have to set up security for web services (basic authentication, with a user id and password hardcoded in WLES). I set up web.xml (see the code fragment below), but I'm having a tough time configuring WebLogic. I added an IdentityAssertionAuthenticator authentication provider and set it as Required, modified the DefaultAuthenticator to Optional, then went to the deployed application's security settings and set the role to "thisIsUser". At some point it worked, but not anymore (I redeployed the WAR file and set the web service security the same way, but to no avail). I'd greatly appreciate all your help.

        <security-constraint>
            <display-name>SecurityConstraint</display-name>
            <web-resource-collection>
                <web-resource-name>ABC</web-resource-name>
                <url-pattern>/ABC</url-pattern>
            </web-resource-collection>
            <auth-constraint>
                <role-name>thisIsUser</role-name>
            </auth-constraint>
        </security-constraint>

    Read the article

  • Password limitations in SQL Server and MySQL

    - by asteroid
    Do MySQL 5.1 and SQL Server 2008 (Web edition, Standard) have any functional password limitations other than length limits? Are metacharacters in any form a bad idea to use, like bang, pipe, hash, any slash, caret, and so on? I know that MySQL 5.1 has a hardcoded password length limitation of 16 characters, but I was wondering whether any metacharacters (i.e. non-alphanumerics) are a bad idea to use, and whether the same holds in SQL Server 2008 Web edition, Standard.

    So specifically: can symbols like /`~:}{[]^ be used successfully? I would hope it doesn't matter to the database, but I don't understand enough about password storage in enterprise database systems yet to know for sure, and I was looking for confirmation or an explanation.
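
    A hedged example for the MySQL side (account passwords are ordinary quoted string literals; the account name is hypothetical): the punctuation itself is legal, and only the quoting of the statement has to be handled, which a parameterized call or proper escaping takes care of.

        -- Valid in MySQL 5.x; the backtick and other symbols are just
        -- characters inside a single-quoted literal.
        CREATE USER 'webapp'@'localhost' IDENTIFIED BY 'p/`~:}{[]^!|#';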

    Read the article
